Improving the efficiency of deconvolution algorithms for sound source localization
DEFF Research Database (Denmark)
Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.
2015-01-01
…of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., the point-spread function. A significant limitation of deconvolution, however, is the additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms...
Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments
Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.
2008-04-01
We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of non-collocated but otherwise identical sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine the direction-of-arrival. The directions-of-arrival determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface-wave analysis. Both of the seismic source localization algorithms have their roots in seismology, and covariance-matrix analysis has been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for the fusion of results yielded by the acoustic and seismic arrays.
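The time-delay-of-arrival processing at the heart of the AML approach can be illustrated with a minimal sketch. This is not the authors' algorithm, just a generic cross-correlation delay estimator run on synthetic data; the sample rate and pulse shape are invented for the example:

```python
import numpy as np

def tdoa_cross_correlation(x, y, fs):
    """Estimate how much y lags x (in seconds) from the peak of their
    full cross-correlation."""
    corr = np.correlate(y, x, mode="full")
    lag = int(np.argmax(corr)) - (len(x) - 1)   # lag in samples
    return lag / fs

fs = 1000.0                                  # assumed sample rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
pulse = np.exp(-((t - 0.3) ** 2) / 1e-4)     # synthetic wideband pulse
delayed = np.roll(pulse, 25)                 # the same pulse 25 ms later

delay = tdoa_cross_correlation(pulse, delayed, fs)
print(delay)                                 # → 0.025
```

With two or more such pairwise delays and known sensor positions, the source location follows from the hyperbolic equations discussed in the TDOA records below.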
Mixed Far-Field and Near-Field Source Localization Algorithm via Sparse Subarrays
Directory of Open Access Journals (Sweden)
Jiaqi Song
2018-01-01
Full Text Available Based on a dual-size shift invariance sparse linear array, this paper presents a novel algorithm for the localization of mixed far-field and near-field sources. First, by constructing a cumulant matrix with only direction-of-arrival (DOA) information, the proposed algorithm decouples the DOA estimation from the range estimation. The cumulant-domain quarter-wavelength invariance yields unambiguous estimates of DOAs, which are then used as coarse references to disambiguate the phase ambiguities in fine estimates induced from the larger spatial invariance. Then, based on the estimated DOAs, another cumulant matrix is derived and decoupled to generate unambiguous and cyclically ambiguous estimates of the range parameter. According to the coarse range estimation, the types of sources can be identified and the unambiguous fine range estimates of near-field (NF) sources are obtained after disambiguation. Compared with some existing algorithms, the proposed algorithm enjoys extended array aperture and higher estimation accuracy. Simulation results are given to validate the performance of the proposed algorithm.
Directory of Open Access Journals (Sweden)
Yidong Xu
2017-10-01
Full Text Available A novel localization method based on the multiple signal classification (MUSIC) algorithm is proposed for positioning an electric dipole source in a confined underwater environment using an electric-dipole receiving antenna array. In this method, the boundary element method (BEM) is introduced to analyze the boundary of the confined region by means of a matrix equation. The voltage of each dipole pair is used as spatial-temporal localization data; unlike the conventional field-based localization method, the field component in each direction need not be obtained, so the method can be easily implemented in practical engineering applications. Then, a global-multiple region-conjugate gradient (CG) hybrid search method is used to reduce the computational burden and improve the operation speed. Two localization simulation models and a physical experiment are conducted. Both the simulation results and the physical experiment provide accurate positioning performance, verifying the effectiveness of the proposed localization method in underwater environments.
Directory of Open Access Journals (Sweden)
Le Zuo
2018-02-01
Full Text Available This paper presents an analytic algorithm for estimating the three-dimensional (3-D) localization of a single source with uniform circular array (UCA) interferometers. Fourier transforms are exploited to expand the phase distribution of a single source, and the localization problem is reformulated as an equivalent spectrum-manipulation problem. The 3-D parameters are decoupled to different spectrums in the Fourier domain. Algebraic relations are established between the 3-D localization parameters and the Fourier spectrums. The Fourier sampling theorem ensures that the minimum number of elements for 3-D localization of a single source with a UCA is five. Accuracy analysis provides mathematical insight into the 3-D localization algorithm, showing that a larger number of elements gives higher estimation accuracy. In addition, the phase-based high-order difference invariance (HODI) property of a UCA is found and exploited to realize phase range compression. Following phase range compression, ambiguity resolution is addressed by the HODI of a UCA. A major advantage of the algorithm is that ambiguity resolution and 3-D localization estimation are both analytic and are processed simultaneously, hence computationally efficient. Numerical simulations and experimental results are provided to verify the effectiveness of the proposed 3-D localization algorithm.
Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao
2016-03-19
Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multipath interference, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity, so real-time positioning cannot be achieved. In this paper we introduce a crowd-sourced indoor localization algorithm using the optical camera and orientation sensor on a smartphone to address these issues. First, we use the Wi-Fi fingerprint with the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimate. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation-sensor data, as well as the KWNN output, on the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm significantly improves the accuracy, stability, and applicability of positioning.
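The KWNN coarse-estimation step can be sketched as follows. This is a generic inverse-distance-weighted K-nearest-neighbor estimate over a toy radio map; the fingerprints, positions, and RSSI values are invented for illustration and do not come from the paper:

```python
import numpy as np

def kwnn_estimate(fingerprints, positions, rssi, k=3, eps=1e-6):
    """K Weighted Nearest Neighbor: average the positions of the k
    reference fingerprints closest to the observed RSSI vector,
    weighting each by the inverse of its distance in signal space."""
    d = np.linalg.norm(fingerprints - rssi, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)            # inverse-distance weights
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# Toy radio map: RSSI (dBm) from 2 access points at 4 reference points.
fingerprints = np.array([[-40.0, -70.0], [-50.0, -60.0],
                         [-60.0, -50.0], [-70.0, -40.0]])
positions = np.array([[0.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 4.0]])

est = kwnn_estimate(fingerprints, positions, np.array([-52.0, -58.0]))
print(est)
```

The estimate lands inside the convex hull of the selected reference points, pulled strongly toward the fingerprint nearest in signal space.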
Energy Technology Data Exchange (ETDEWEB)
Li, Xinya [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Deng, Zhiqun Daniel [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Rauchenstein, Lynn T. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Carlson, Thomas J. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrival (TOA) and time difference of arrival (TDOA) measurements to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations pose accuracy challenges, because of measurement errors, and efficiency challenges, because solving them can carry a high computational burden. Least-squares-based and maximum-likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use times of arrival (TOAs) and time differences of arrival (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and unknown locations of transmitters. Estimating transmitter locations from the standard nonlinear equations may not be very accurate because of receiver location errors and receiver measurement errors, and it can carry a high computational burden. Least-squares and maximum-likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time-of-arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
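The circular (TOA) nonlinear least-squares formulation described above can be sketched with a Gauss-Newton iteration. This is a generic textbook estimator on synthetic, noise-free data; the receiver geometry and the speed of sound are assumptions of the example, not values from the paper:

```python
import numpy as np

def toa_gauss_newton(receivers, toas, c=343.0, iters=20):
    """Gauss-Newton for circular (TOA) localization: find x minimizing
    sum_i (||x - r_i|| - c * t_i)**2, starting from the receiver centroid."""
    x = receivers.mean(axis=0)
    for _ in range(iters):
        diff = x - receivers                    # (N, 2)
        ranges = np.linalg.norm(diff, axis=1)
        resid = ranges - c * toas               # range residuals
        J = diff / ranges[:, None]              # Jacobian of the ranges
        x = x - np.linalg.lstsq(J, resid, rcond=None)[0]
    return x

receivers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])
toas = np.linalg.norm(receivers - source, axis=1) / 343.0   # noise-free
est = toa_gauss_newton(receivers, toas)
print(est)   # ≈ [3. 7.]
```

With measurement noise the same iteration returns the least-squares estimate rather than the exact position, which is where the accuracy analyses surveyed above come in.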
The Chandra Source Catalog: Algorithms
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
International Nuclear Information System (INIS)
Saari, J.
1989-12-01
The paper describes procedures for the automatic location of local events using single-site, three-component (3c) seismogram records. Epicentral distance is determined from the time difference between the P- and S-onsets. For onset-time estimates a special phase-picker algorithm is introduced. Onset detection is accomplished by comparing a short-term average with a long-term average after multiplying the north, east, and vertical components of the recording. For epicentral distances up to 100 km, errors seldom exceed 5 km. The slowness vector, essentially the azimuth, is estimated independently using the Christoffersson et al. (1988) 'polarization' technique, although a priori knowledge of the P-onset time gives the best results. Differences between 'true' and observed azimuths are generally less than 12 degrees. Practical examples demonstrate the viability of the procedures for automated 3c seismogram analysis. The results obtained compare favourably with those achieved by a mini-array of three stations. (orig.)
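The distance-from-onsets step above can be illustrated with the standard single-station formula; the crustal velocities below are typical assumed values, not taken from the paper:

```python
def epicentral_distance(s_minus_p, vp=6.0, vs=3.5):
    """Epicentral distance (km) from the S-P onset time difference (s),
    assuming straight rays and typical crustal velocities (km/s):
    t_S - t_P = d/vs - d/vp  =>  d = dt * vp * vs / (vp - vs)."""
    return s_minus_p * vp * vs / (vp - vs)

# A 5 s S-P delay maps to 42 km with these assumed velocities:
print(epicentral_distance(5.0))   # → 42.0
```

Combining this distance with the azimuth from the polarization analysis yields the epicenter, which is how a single three-component station can substitute for a small array.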
Han, Jooman; Sic Kim, June; Chung, Chun Kee; Park, Kwang Suk
2007-08-01
The imaging of neural sources of magnetoencephalographic data based on distributed source models requires additional constraints on the source distribution in order to overcome ill-posedness and obtain a plausible solution. The minimum lp norm (0 … temporal gyrus.
Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M
2016-08-01
One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for the placement of the DBS electrodes has become an intensive research area. In this study, the first aim is to investigate the capability of different source-analysis techniques to detect deep sources located at the sub-cortical level, validated against the a-priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is better suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions affected by stimulating the STN. The results from the CDR approaches confirmed the capability of sLORETA to detect the STN, compared with minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
Energy Technology Data Exchange (ETDEWEB)
Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)
2016-01-15
We introduce a new technique for interpolating the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not predict reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh-optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not between two adjacent grid points. The proposed technique is found to give the most accurate results, and its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
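The log-log transform idea can be illustrated briefly. The sketch below substitutes plain linear interpolation for the paper's monotonicity-preserving Steffen spline, only to show why the scaling helps: data that follow a power law become a straight line, and hence trivial to interpolate, in log-log coordinates.

```python
import numpy as np

def loglog_interp(x, xp, fp):
    """Interpolate a positive, non-uniformly sampled function in log-log
    coordinates. (The paper uses a monotone Steffen spline; plain linear
    interpolation stands in here to keep the sketch short.)"""
    return np.exp(np.interp(np.log(x), np.log(xp), np.log(fp)))

# A power law f(x) = x**2 sampled very non-uniformly is reproduced
# exactly, because it is a straight line in log-log coordinates:
xp = np.array([0.1, 1.0, 50.0, 1000.0])
val = loglog_interp(np.array([5.0]), xp, xp ** 2)
print(val)   # ≈ [25.]
```

Interpolating the same samples linearly in the original coordinates would badly overshoot between 1 and 50, which is exactly the non-uniformity problem the transform suppresses.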
International Nuclear Information System (INIS)
Maglevanny, I.I.; Smolar, V.A.
2016-01-01
We introduce a new technique for interpolating the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not predict reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh-optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not between two adjacent grid points. The proposed technique is found to give the most accurate results, and its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
Localization Algorithms of Underwater Wireless Sensor Networks: A Survey
Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng
2012-01-01
In Underwater Wireless Sensor Networks (UWSNs), localization is one of the most important technologies, since it plays a critical role in many applications. Motivated by the widespread adoption of localization, in this paper we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes' mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions for localization algorithms in UWSNs. PMID:22438752
A locally adaptive algorithm for shadow correction in color images
Karnaukhov, Victor; Kober, Vitaly
2017-09-01
The paper deals with the correction of color images distorted by spatially nonuniform illumination. A serious distortion occurs in real conditions when a part of a scene containing 3D objects close to a directed light source is illuminated much more brightly than the rest of the scene. A locally adaptive algorithm for the correction of shadow regions in color images is proposed. The algorithm consists of segmentation of shadow areas with rank-order statistics, followed by correction of the nonuniform illumination using a human-visual-perception approach. The performance of the proposed algorithm is compared to that of common algorithms for the correction of color images containing shadow regions.
Vergallo, P.; Lay-Ekuakille, A.
2013-08-01
Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, which is especially important when a seizure occurs. Studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to infer patterns of brain activity. The inverse problem, in which the sources underlying the field sampled at the electrodes must be determined, is more difficult: it may not have a unique solution, and the search for a solution is hampered by low spatial resolution, which may not allow distinguishing between activities involving sources close to each other. Thus, sources of interest may be obscured or go undetected, and known source-localization methods such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source-localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power versus location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, a-priori information on the sparsity of the signal must be set. The problem is formulated and solved using a regularization method such as Tikhonov's, which computes the solution that best balances two cost functions to be minimized: one related to the fitting of the data, the other concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. Relative to the model considered for the head and brain sources, the result obtained allows to…
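The Tikhonov step mentioned above has a simple closed form, sketched here on a toy underdetermined system; the "lead field" matrix and the source configuration are invented for illustration:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Closed-form Tikhonov solution: minimizes ||A x - b||^2 + lam ||x||^2,
    i.e. x = (A^T A + lam I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy underdetermined system: 3 sensors, 6 candidate source locations.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
x_true = np.zeros(6)
x_true[2] = 1.0                      # a single active source
b = A @ x_true
x_hat = tikhonov(A, b, lam=1e-3)
print(np.linalg.norm(A @ x_hat - b))   # small data-fit residual
```

Swapping the ||x||^2 penalty for a sparsity-promoting one (e.g. an l1 norm) gives the sparse variants the abstract alludes to, at the cost of losing the closed form.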
Local simulation algorithms for Coulombic interactions
Indian Academy of Sciences (India)
We consider a problem in dynamically constrained Monte Carlo dynamics and show that this leads to the generation of long ranged effective interactions. This allows us to construct a local algorithm for the simulation of charged systems without ever having to evaluate pair potentials or solve the Poisson equation.
Hybrid Firefly Variants Algorithm for Localization Optimization in WSN
Directory of Open Access Journals (Sweden)
P. SrideviPonmalar
2017-01-01
Full Text Available Localization is one of the key issues in wireless sensor networks. Several algorithms and techniques have been introduced for localization, which is a procedural technique of estimating the sensor-node location. In this paper, three novel hybrid algorithms based on the firefly algorithm are proposed for the localization problem. The Hybrid Genetic Algorithm-Firefly Localization Algorithm (GA-FFLA), Hybrid Differential Evolution-Firefly Localization Algorithm (DE-FFLA) and Hybrid Particle Swarm Optimization-Firefly Localization Algorithm (PSO-FFLA) are analyzed, designed and implemented to optimize the localization error. The localization algorithms are compared based on the accuracy of location estimation, time complexity and the iterations required to achieve that accuracy. All the algorithms achieve one hundred percent estimation accuracy, but with variations in the number of fireflies required, time complexity and number of iterations required. Keywords: Localization; Genetic Algorithm; Differential Evolution; Particle Swarm Optimization
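The plain firefly algorithm underlying the three hybrids can be sketched on a small anchor-based localization problem. The parameter values, anchor layout and cost function below are assumptions of the example, not taken from the paper:

```python
import numpy as np

def firefly_minimize(f, lo, hi, n=20, iters=100, beta0=1.0, gamma=0.01,
                     alpha=0.2, seed=0):
    """Plain firefly algorithm: every firefly moves toward each brighter
    (lower-cost) firefly with distance-attenuated attraction, plus a
    small, gradually cooled random walk. gamma is scaled here for a
    roughly 10 x 10 search box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, size=(n, lo.size))
    cost = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:          # j is brighter, so i moves
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) \
                        + alpha * rng.uniform(-0.5, 0.5, lo.size)
                    X[i] = np.clip(X[i], lo, hi)
                    cost[i] = f(X[i])
        alpha *= 0.98                          # cool down the random walk
    best = int(np.argmin(cost))
    return X[best], cost[best]

# Estimate a node position from noise-free distances to three anchors.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])
true_pos = np.array([4.0, 3.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)

def range_error(x):
    return np.sum((np.linalg.norm(anchors - x, axis=1) - dists) ** 2)

pos, best_cost = firefly_minimize(range_error, [0.0, 0.0], [10.0, 10.0])
print(pos)   # ≈ [4. 3.]
```

The hybrids in the paper replace or augment the attraction step with GA, DE or PSO update rules; the cost function being minimized, the node's range error, stays the same.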
A Source Identification Algorithm for INTEGRAL
Scaringi, Simone; Bird, Antony J.; Clark, David J.; Dean, Anthony J.; Hill, Adam B.; McBride, Vanessa A.; Shaw, Simon E.
2008-12-01
We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using Random Forests, is applied to the IBIS/ISGRI dataset in order to ease the production of unbiased future soft gamma-ray source catalogues. The key steps of candidate searching, filtering and feature extraction are described. Three training and testing sets are created in order to deal with the diverse timescales and diverse objects encountered in the gamma-ray sky. Three independent Random Forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the Transient Matrix. Finally the performance of the network is assessed and discussed using the testing set and some illustrative source examples.
Escaping "localisms" in IT sourcing
DEFF Research Database (Denmark)
Mola, L.; Carugati, Andrea
2012-01-01
Organizations are limited in their choices by the institutional environment in which they operate. This is particularly true for IT sourcing decisions, which go beyond cost considerations and are constrained by traditions, geographical location, and social networks. This article investigates how organizations can strike a balance between the different institutional logics guiding IT sourcing decisions and eventually shift from the dominant logic of localism to a logic of market efficiency. This change does not depend on a choice but rather builds on a process through which IT management competences...
Alternative confidence measure for local matching stereo algorithms
CSIR Research Space (South Africa)
Ndhlovu, T
2009-11-01
Full Text Available The authors present a confidence measure applied to individual disparity estimates in local matching stereo correspondence algorithms. It aims at identifying textureless areas, where most local matching algorithms fail. The confidence measure works...
A Study on Water Pollution Source Localization in Sensor Networks
Directory of Open Access Journals (Sweden)
Jun Yang
2016-01-01
Full Text Available The water pollution source localization is of great significance to water environment protection. In this paper, a study on water pollution source localization is presented. First, source detection is discussed. Then, the coarse localization methods and the localization methods based on diffusion models are introduced and analyzed, respectively. In addition, a localization method based on the contour is proposed. Finally, the detection and localization methods are compared in experiments. The results show that the detection method using hypothesis testing is more stable. The performance of the coarse localization algorithm depends on the node density. The localization based on diffusion models can yield precise localization results; however, the results are not stable. The localization method based on the contour is better than the other two localization methods when the concentration contours are axisymmetric. Thus, in water pollution source localization, detection using hypothesis testing is preferable in the source-detection step. If the concentration contours are axisymmetric, the localization method based on the contour is the first option. If the nodes are dense and there is no explicit diffusion model, the coarse localization algorithm can be used; otherwise, localization based on diffusion models is a good choice.
ISINA: INTEGRAL Source Identification Network Algorithm
Scaringi, S.; Bird, A. J.; Clark, D. J.; Dean, A. J.; Hill, A. B.; McBride, V. A.; Shaw, S. E.
2008-11-01
We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using random forests, is applied to the IBIS/ISGRI data set in order to ease the production of unbiased future soft gamma-ray source catalogues. First, we introduce the data set and the problems encountered when dealing with images obtained using the coded mask technique. The initial step of source candidate searching is introduced and an initial candidate list is created. A description of the feature extraction on the initial candidate list is then performed together with feature merging for these candidates. Three training and testing sets are created in order to deal with the diverse time-scales encountered when dealing with the gamma-ray sky. Three independent random forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the transient matrix. Finally the performance of the network is assessed and discussed using the testing set and some illustrative source examples. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain), Czech Republic and Poland, and the participation of Russia and the USA.
Algebraic Algorithm Design and Local Search
National Research Council Canada - National Science Library
Graham, Robert
1996-01-01
.... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...
Energy-Based Acoustic Source Localization Methods: A Survey
Directory of Open Access Journals (Sweden)
Wei Meng
2017-02-01
Full Text Available Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear least-squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for WSNs due to their limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single- and multiple-source localization problems, to discuss their merits and demerits, and to point out possible future research directions.
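The energy-decay model behind these estimators, E_i = S / d_i**alpha, can be illustrated with a coarse grid search, a simple stand-in for the MLE and NLS estimators the survey reviews. The sensor layout, decay exponent and grid are invented for the example:

```python
import numpy as np

def energy_grid_search(sensors, energies, alpha=2.0, step=0.1, extent=10.0):
    """Scan a grid for the source position that best explains the observed
    energies under E_i = S / d_i**alpha. Subtracting the mean log-residual
    eliminates the unknown source power S from the fit."""
    grid = np.arange(0.0, extent + step, step)
    log_e = np.log(energies)
    best, best_cost = None, np.inf
    for x in grid:
        for y in grid:
            d = np.maximum(
                np.linalg.norm(sensors - np.array([x, y]), axis=1), 1e-6)
            resid = log_e + alpha * np.log(d)   # equals log S if model fits
            resid -= resid.mean()               # cancel the unknown power
            c = np.sum(resid ** 2)
            if c < best_cost:
                best, best_cost = (x, y), c
    return np.array(best)

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src = np.array([6.0, 4.0])
energies = 5.0 / np.linalg.norm(sensors - src, axis=1) ** 2   # noise-free
est = energy_grid_search(sensors, energies)
print(est)   # ≈ [6. 4.]
```

The MLE and NLS methods in the survey replace this exhaustive scan with iterative solvers, trading the grid's robustness for speed and resolution.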
Hybridizing Evolutionary Algorithms with Opportunistic Local Search
DEFF Research Database (Denmark)
Gießen, Christian
2013-01-01
There is empirical evidence that memetic algorithms (MAs) can outperform plain evolutionary algorithms (EAs). Recently the first runtime analyses have been presented proving the aforementioned conjecture rigorously by investigating Variable-Depth Search, VDS for short (Sudholt, 2008). Sudholt...
An Algorithm for the Accurate Localization of Sounds
National Research Council Canada - National Science Library
MacDonald, Justin A
2005-01-01
.... The algorithm requires no a priori knowledge of the stimuli to be localized. The accuracy of the algorithm was tested using binaural recordings from a pair of microphones mounted in the ear canals of an acoustic mannequin...
A Scalable Local Algorithm for Distributed Multivariate Regression
National Aeronautics and Space Administration — This paper offers a local distributed algorithm for multivariate regression in large peer-to-peer environments. The algorithm can be used for distributed...
An Efficient Local Algorithm for Distributed Multivariate Regression
National Aeronautics and Space Administration — This paper offers a local distributed algorithm for multivariate regression in large peer-to-peer environments. The algorithm is designed for distributed...
Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Zhang Dongyang
2014-02-01
Full Text Available Localization algorithms for large-scale wireless sensor networks fall short of traditional algorithms in both positioning accuracy and time complexity, so this paper presents a fast multidimensional scaling (MDS) localization algorithm. The algorithm proceeds in four steps: fast mapping initialization, fast mapping and coordinate transformation yield schematic node coordinates; these initialize the MDS algorithm, which produces an accurate estimate of the node coordinates; a Procrustes analysis then aligns the coordinates to give the final node positions. The thesis gives concrete implementation steps for each stage and applies the algorithm to specific examples, comparing it experimentally with stochastic algorithms and the classical MDS algorithm. Experimental results show that the proposed localization algorithm maintains positioning accuracy while greatly improving running speed.
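The classical MDS step that such localization algorithms build on can be sketched directly. This is a generic illustration, not the paper's fast variant: double-centering the squared-distance matrix gives a Gram matrix whose top eigenpairs yield relative coordinates, which a Procrustes-style alignment would then map onto anchor positions.

```python
import numpy as np

# Classical MDS: recover relative node coordinates from pairwise distances.
def classical_mds(D, dim=2):
    """Embed points from a distance matrix via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # Gram matrix of centered points
    w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]           # keep the top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0], [1.0, 0.5]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D)

# The embedding is unique only up to rotation/reflection/translation, so
# compare reconstructed pairwise distances rather than raw coordinates.
D_hat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.max(np.abs(D - D_hat)))  # ≈ 0
```

The eigendecomposition is the expensive part, which is what "fast mapping" schemes approximate when the network is large.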
Robust iterative observer for source localization for Poisson equation
Majeed, Muhammad Usman
2017-01-05
The source localization problem for the Poisson equation with noisy boundary data is well known to be highly sensitive to noise. The problem is ill-posed and fails to fulfill Hadamard's stability criterion for well-posedness. In this work, a robust iterative observer is first presented for the boundary estimation problem for the Laplace equation, and then this algorithm, together with the available noisy boundary data from the Poisson problem, is used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design; however, one of the space variables is treated as time-like. Numerical implementation and simulation results are detailed towards the end.
Robust iterative observer for source localization for Poisson equation
Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem
2017-01-01
The source localization problem for the Poisson equation with noisy boundary data is well known to be highly sensitive to noise. The problem is ill-posed and fails to fulfill Hadamard's stability criterion for well-posedness. In this work, a robust iterative observer is first presented for the boundary estimation problem for the Laplace equation, and then this algorithm, together with the available noisy boundary data from the Poisson problem, is used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design; however, one of the space variables is treated as time-like. Numerical implementation and simulation results are detailed towards the end.
Local Community Detection Algorithm Based on Minimal Cluster
Directory of Open Access Journals (Sweden)
Yong Zhou
2016-01-01
Full Text Available To discover local community structure more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms start from a single node, but the agglomerative power of a single node is weaker than that of multiple nodes, so the community expansion in this paper starts not from the initial node alone but from a node cluster that contains the initial node and whose members are relatively densely connected with each other. The algorithm has two main phases: it first detects the minimal cluster and then finds the local community extended from the minimal cluster. Experimental results show that the local communities detected by our algorithm are of much higher quality than those of other algorithms, in both real and simulated networks.
Smoothed Analysis of Local Search Algorithms
Manthey, Bodo; Dehne, Frank; Sack, Jörg-Rüdiger; Stege, Ulrike
2015-01-01
Smoothed analysis is a method for analyzing the performance of algorithms for which classical worst-case analysis fails to explain the performance observed in practice. Smoothed analysis has been applied to explain the performance of a variety of algorithms in the last years. One particular class of
Near-Field Source Localization Using a Special Cumulant Matrix
Cui, Han; Wei, Gang
A new near-field source localization algorithm based on a uniform linear array is proposed. The proposed algorithm estimates each parameter separately and does not need parameter pairing. It can be divided into two important steps. The first step estimates the bearing-related electric angle using the ESPRIT algorithm applied to a specially constructed cumulant matrix. The second step estimates the other electric angle from the 1-D MUSIC spectrum. The algorithm offers much lower computational complexity than the traditional near-field 2-D MUSIC algorithm and performs better than the high-order ESPRIT algorithm. Simulation results demonstrate that the performance of the proposed algorithm is close to the Cramer-Rao Bound (CRB).
Source localization of rhythmic ictal EEG activity
DEFF Research Database (Denmark)
Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana
2013-01-01
Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model.
Document localization algorithms based on feature points and straight lines
Skoryukina, Natalya; Shemiakina, Julia; Arlazarov, Vladimir L.; Faradjev, Igor
2018-04-01
An important part of a system for planar rectangular object analysis is localization: the estimation of the projective transform from the template image of an object to its photograph. The system also includes such subsystems as the selection and recognition of text fields, the use of context, etc. In this paper, three localization algorithms are described. All algorithms use feature points, and two of them also analyze near-horizontal and near-vertical lines on the photograph. The algorithms and their combinations are tested on a dataset of real document photographs. A method of localization quality estimation is also proposed that allows the localization subsystem to be configured independently of the quality of the other subsystems.
Institute of Scientific and Technical Information of China (English)
郑家芝
2016-01-01
In this paper, closely spaced coherent-source localization is considered, and an improved method based on the group delay of Multiple Signal Classification (MUSIC) is presented. First, we introduce the spatial smoothing technique into direction-of-arrival (DoA) estimation to remove the coherent part of the signals. Because subspace-based methods degrade when sources are closely spaced, we then use the MUSIC-group delay algorithm to distinguish them: owing to its spatial additive property, it resolves spatially close sources by means of the group delay function computed from the MUSIC phase spectrum for efficient DoA estimation. Theoretical analysis and simulation results demonstrate that the proposed approach estimates the DoA of closely spaced coherent sources more precisely and with higher resolution than subspace-based methods.
Rate-control algorithms testing by using video source model
DEFF Research Database (Denmark)
Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna
2008-01-01
In this paper, a method for testing rate-control algorithms by means of a video source model is suggested. The proposed method significantly improves algorithm testing over large test sets.
Nakayama, Hiromasa
2006-01-01
We give an algorithm to compute the local $b$-function. In this algorithm, we use the Mora division algorithm in the ring of differential operators and an approximate division algorithm in the ring of differential operators with power series coefficients.
Theory and Algorithms for Global/Local Design Optimization
National Research Council Canada - National Science Library
Watson, Layne T; Guerdal, Zafer; Haftka, Raphael T
2005-01-01
The motivating application for this research is the global/local optimal design of composite aircraft structures such as wings and fuselages, but the theory and algorithms are more widely applicable...
Engineering local optimality in quantum Monte Carlo algorithms
Pollet, Lode; Van Houcke, Kris; Rombouts, Stefan M. A.
2007-08-01
Quantum Monte Carlo algorithms based on a world-line representation, such as the worm algorithm and the directed loop algorithm, are among the most powerful numerical techniques for the simulation of non-frustrated spin models and of bosonic models. Both algorithms work in the grand-canonical ensemble and can have a winding number larger than zero. However, they retain a lot of intrinsic degrees of freedom which can be used to optimize the algorithm. Guided by the rigorous statements on the globally optimal form of Markov chain Monte Carlo simulations, we devise a locally optimal formulation of the worm algorithm while incorporating ideas from the directed loop algorithm. We provide numerical examples for the soft-core Bose-Hubbard model and various spin-S models.
Source localization using recursively applied and projected (RAP) MUSIC
Energy Technology Data Exchange (ETDEWEB)
Mosher, J.C. [Los Alamos National Lab., NM (United States); Leahy, R.M. [Univ. of Southern California, Los Angeles, CA (United States). Signal and Image Processing Inst.
1998-03-01
A new method for source localization is described that is based on a modification of the well-known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make location of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S-MUSIC and IES-MUSIC.
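The classical MUSIC scan that RAP-MUSIC extends can be sketched for a uniform linear array. This is an illustration only, not the authors' implementation; it assumes half-wavelength element spacing and two uncorrelated sources. Steering vectors are projected onto the noise subspace of the sample covariance, and pseudospectrum peaks give the directions of arrival.

```python
import numpy as np

# MUSIC pseudospectrum sketch: ULA, d = lambda/2 (assumed), 2 sources.
def steering(m, theta_deg):
    k = np.pi * np.sin(np.deg2rad(theta_deg))   # per-element phase shift
    return np.exp(1j * k * np.arange(m))

m, snapshots = 8, 200
rng = np.random.default_rng(0)
angles_true = [-20.0, 30.0]
A = np.column_stack([steering(m, a) for a in angles_true])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.01 * (rng.standard_normal((m, snapshots))
            + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + N                                   # simulated array snapshots

R = X @ X.conj().T / snapshots                  # sample covariance
w, V = np.linalg.eigh(R)                        # eigenvalues ascending
En = V[:, :m - 2]                               # noise subspace (2 sources)
scan = np.arange(-90.0, 90.5, 0.5)
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(m, a)) ** 2
              for a in scan])
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
est = [scan[i] for i in top]
print(est)  # close to the true angles [-20.0, 30.0]
```

RAP-MUSIC replaces the joint peak search with a recursion: after each source is found, both the manifold and the signal subspace are projected into the orthogonal complement of the located sources before the next scan.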
A dynamic global and local combined particle swarm optimization algorithm
International Nuclear Information System (INIS)
Jiao Bin; Lian Zhigang; Chen Qunxian
2009-01-01
Particle swarm optimization (PSO) algorithms have been developing rapidly and many results have been reported. The PSO algorithm has shown some important advantages by providing a high speed of convergence on specific problems, but it has a tendency to get stuck in a near-optimal solution, and it may be difficult to improve solution accuracy by fine-tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of the original PSO, in which all particles dynamically share the best information of the local particle, global particle and group particles. It is tested on a set of eight benchmark functions with different dimensions and compared with the original PSO. Experimental results indicate that the DGLCPSO algorithm improves the search performance on the benchmark functions significantly, and show the effectiveness of the algorithm for solving optimization problems.
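The baseline update that DGLCPSO modifies is the standard global-best PSO velocity rule. The sketch below is the plain PSO the abstract starts from, not the authors' variant; the inertia and acceleration constants are typical textbook values.

```python
import numpy as np

# Standard global-best PSO on the 2-D sphere function (optimum at 0).
def pso(f, dim=2, n=30, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))            # particle positions
    v = np.zeros((n, dim))                      # particle velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]                  # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # inertia + pull toward personal best + pull toward global best
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

best, best_val = pso(lambda p: np.sum(p ** 2))
print(best_val)  # near 0
```

DGLCPSO's change, per the abstract, is to blend local-particle and group-best information into the second and third terms instead of using a single global best.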
Error Estimation for the Linearized Auto-Localization Algorithm
Directory of Open Access Journals (Sweden)
Fernando Seco
2012-02-01
Full Text Available The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first-order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
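The linearized trilateration step underlying LAL can be sketched generically (this is the textbook linearization, not the paper's implementation): subtracting the range equation of a reference beacon cancels the quadratic terms, leaving linear equations solvable by least squares.

```python
import numpy as np

# Linearized trilateration: |x - p_i|^2 = r_i^2 minus the reference
# equation |x - p_0|^2 = r_0^2 gives 2 (p_i - p_0) . x = r_0^2 - r_i^2
#                                                     + |p_i|^2 - |p_0|^2.
def trilaterate(beacons, ranges):
    """Least-squares position fix from beacon positions and measured ranges."""
    p0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - p0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - np.sum(p0 ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

beacons = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [4.0, 3.0]])
target = np.array([1.0, 2.0])
ranges = np.linalg.norm(beacons - target, axis=1)   # noise-free ranges
print(trilaterate(beacons, ranges))  # ≈ [1.0, 2.0]
```

The paper's error-propagation analysis amounts to a first-order Taylor expansion of this solution with respect to perturbations in the measured ranges.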
A range-based predictive localization algorithm for WSID networks
Liu, Yuan; Chen, Junjie; Li, Gang
2017-11-01
Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks integrated with RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location, based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization in networks with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single-Gaussian-model-based algorithm and 10.5% higher than that of the Kalman-filtering-based algorithm.
Extended SVM algorithms for multilevel trans-Z-source inverter
Directory of Open Access Journals (Sweden)
Aida Baghbany Oskouei
2016-03-01
Full Text Available This paper suggests extended algorithms for the multilevel trans-Z-source inverter. These algorithms are based on space vector modulation (SVM), which works with a high switching frequency and generates the mean value of the desired load voltage in every switching interval. In this topology the output voltage is not limited to the dc source voltage, as in the traditional cascaded multilevel inverter, and can be increased by controlling the shoot-through state of the trans-Z-network. Besides, the inverter is more reliable against short circuits, and owing to the several dc sources in each phase of this topology, it can be used with hybrid renewable energy sources. The proposed SVM algorithms include a combined modulation algorithm (SVPWM) and an algorithm implementing shoot-through within the dwell times of the voltage vectors. These algorithms are compared from the viewpoints of simplicity, accuracy, number of switchings, and THD. Simulation and experimental results are presented to demonstrate the expected performance.
A Dedicated Genetic Algorithm for Localization of Moving Magnetic Objects
Directory of Open Access Journals (Sweden)
Roger Alimi
2015-09-01
Full Text Available A dedicated Genetic Algorithm (GA) has been developed to localize the trajectory of ferromagnetic moving objects within a bounded perimeter. Localization of moving ferromagnetic objects is an important tool because it can be employed in situations where the object is obscured. This work is innovative for two main reasons: first, the GA has been tuned to provide an accurate and fast solution to the inverse magnetic field equations problem. Second, the algorithm has been successfully tested using real-life experimental data. Very accurate trajectory localization estimates were obtained over a wide range of scenarios.
Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs
Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue
2011-01-01
This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, in order to improve the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the fault tolerance of multiple event source localization. The experiment results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and has better localization accuracy than other algorithms. PMID:22163972
Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs
Directory of Open Access Journals (Sweden)
Jian Wan
2011-06-01
Full Text Available This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, in order to improve the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the fault tolerance of multiple event source localization. The experiment results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and has better localization accuracy than other algorithms.
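The fault-masking idea can be illustrated with a much simpler toy than TISNAP itself: binary alarms combined as a trust-weighted centroid, so nodes with a history of wrong reports contribute less. Everything below (node layout, 10% fault rate, trust values) is an assumption for illustration, not the paper's algorithm or data.

```python
import numpy as np

# Toy trust-weighted event localization from binary sensor alarms.
rng = np.random.default_rng(3)
nodes = rng.uniform(0, 10, (60, 2))          # sensor positions in a 10x10 field
event = np.array([4.0, 6.0])

# nodes within sensing radius 3 raise an alarm; faulty nodes report the opposite
alarm = (np.linalg.norm(nodes - event, axis=1) < 3.0).astype(float)
faulty = rng.random(60) < 0.1
alarm[faulty] = 1.0 - alarm[faulty]

trust = np.ones(60)
trust[faulty] = 0.1      # assume earlier reporting cycles lowered their trust

w = alarm * trust
est = (w[:, None] * nodes).sum(axis=0) / w.sum()
print(est)  # near the true event at (4, 6)
```

Down-weighting the faulty nodes keeps the false alarms from dragging the estimate away, which is the same effect TISNAP achieves by updating trust indices each reporting cycle.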
EEG and MEG source localization using recursively applied (RAP) MUSIC
Energy Technology Data Exchange (ETDEWEB)
Mosher, J.C. [Los Alamos National Lab., NM (United States); Leahy, R.M. [University of Southern California, Los Angeles, CA (United States). Signal and Image Processing Inst.
1996-12-31
The multiple signal characterization (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, which uses the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization', are easily extracted using the associated principal vectors.
An improved cut-and-solve algorithm for the single-source capacitated facility location problem
DEFF Research Database (Denmark)
Gadegaard, Sune Lauth; Klose, Andreas; Nielsen, Lars Relund
2018-01-01
In this paper, we present an improved cut-and-solve algorithm for the single-source capacitated facility location problem. The algorithm consists of three phases. The first phase strengthens the integer program by a cutting plane algorithm to obtain a tight lower bound. The second phase uses a two-level local branching heuristic to find an upper bound, and if optimality has not yet been established, the third phase uses the cut-and-solve framework to close the optimality gap. Extensive computational results are reported, showing that the proposed algorithm runs 10–80 times faster on average compared...
Source localization in an ocean waveguide using supervised machine learning.
Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter
2017-09-01
Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
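The paper's preprocessing step (a normalized sample covariance matrix as the learning input) can be sketched with synthetic data. Everything here is an assumption for illustration: a toy spherical-phase field model stands in for the ocean waveguide, and a trivial nearest-neighbor classifier stands in for the FNN/SVM/RF models; this is not the Noise09 processing chain.

```python
import numpy as np

def scm_feature(snapshots):
    """Normalized sample covariance matrix, flattened into a real vector."""
    p = snapshots / np.linalg.norm(snapshots, axis=0, keepdims=True)
    C = p @ p.conj().T / p.shape[1]
    return np.concatenate([C.real.ravel(), C.imag.ravel()])

def field(r, n_sens, k=2.0, snapshots=50, noise=0.05, rng=None):
    """Toy range-dependent pressure snapshots on a vertical array (assumed model)."""
    z = np.arange(n_sens)
    a = np.exp(1j * k * np.sqrt(r ** 2 + z ** 2))   # spherical-phase steering
    s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
    n = noise * (rng.standard_normal((n_sens, snapshots))
                 + 1j * rng.standard_normal((n_sens, snapshots)))
    return a[:, None] * s[None, :] + n

rng = np.random.default_rng(0)
ranges = [50.0, 100.0, 150.0]                        # discrete range classes
train = np.array([scm_feature(field(r, 8, rng=rng)) for r in ranges])

x = scm_feature(field(100.0, 8, rng=rng))            # new observation
pred = ranges[int(np.argmin(np.linalg.norm(train - x, axis=1)))]
print(pred)
```

The normalization removes the unknown source amplitude, so the feature depends only on range, which is what lets a supervised learner map covariance features to source ranges.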
Local anesthesia selection algorithm in patients with concomitant somatic diseases.
Anisimova, E N; Sokhov, S T; Letunova, N Y; Orekhova, I V; Gromovik, M V; Erilin, E A; Ryazantsev, N A
2016-01-01
The paper presents basic principles of local anesthesia selection in patients with concomitant somatic diseases. These principles are: history taking; analysis of drug interactions with the local anesthetic and sedation agents; determination of the functional status of the patient; correction of patient anxiety; dental care with monitoring of hemodynamic parameters. It was found that adhering to this algorithm helps prevent urgent conditions in outpatient dental patients.
Hearing aid controlled by binaural source localizer
2009-01-01
An adaptive directional hearing aid system comprising a left hearing aid and a right hearing aid, wherein a binaural acoustic source localizer is located in the left hearing aid or in the right hearing aid or in a separate body- worn device connected wirelessly to the left hearing aid and the right
Acoustic source localization : Exploring theory and practice
Wind, Jelmer
2009-01-01
Over the past few decades, noise pollution became an important issue in modern society. This has led to an increased effort in the industry to reduce noise. Acoustic source localization methods determine the location and strength of the vibrations which are the cause of sound, based on measurements of
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the problem that existing fusion methods cannot self-adaptively adjust the fusion rules to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then designs the objective function as a weighted sum of evaluation indices and optimizes it with GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows:
•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
•This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
•This text proposes the model operator and the observation operator as the fusion scheme of RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S
2012-03-01
In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data.
Linear Time Local Approximation Algorithm for Maximum Stable Marriage
Directory of Open Access Journals (Sweden)
Zoltán Király
2013-08-01
Full Text Available We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum-size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear-time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own list and some information asked from members of this list (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
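The Gale-Shapley deferred-acceptance procedure the abstract invokes as the classical local algorithm can be sketched directly. Complete, strict preference lists are assumed here; the paper's setting with ties and incomplete lists requires the McDermid/Paluch machinery on top of this.

```python
# Gale-Shapley deferred acceptance: men propose in preference order,
# each woman provisionally holds the best proposer seen so far.
def gale_shapley(men_pref, women_pref):
    """Return a stable matching {man: woman} for complete strict lists."""
    n = len(men_pref)
    rank = [{m: r for r, m in enumerate(p)} for p in women_pref]
    next_prop = [0] * n          # index of the next woman each man proposes to
    husband = [None] * n         # husband[w] = man currently held by woman w
    free = list(range(n))
    while free:
        m = free.pop()
        w = men_pref[m][next_prop[m]]
        next_prop[m] += 1
        if husband[w] is None:
            husband[w] = m
        elif rank[w][m] < rank[w][husband[w]]:
            free.append(husband[w])   # woman w trades up; old partner is free
            husband[w] = m
        else:
            free.append(m)            # woman w rejects m
    return {husband[w]: w for w in range(n)}

men_pref = [[0, 1, 2], [1, 0, 2], [0, 1, 2]]
women_pref = [[1, 0, 2], [0, 1, 2], [0, 1, 2]]
match = gale_shapley(men_pref, women_pref)
print(match)  # {0: 0, 1: 1, 2: 2}
```

Note how local the procedure is: each man consults only his own list, and each woman only ranks the men who actually propose to her, which is the property the paper's linear-time algorithm preserves.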
An alternative subspace approach to EEG dipole source localization
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
An alternative subspace approach to EEG dipole source localization
International Nuclear Information System (INIS)
Xu Xiaoliang; Xu, Bobby; He Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing
Directory of Open Access Journals (Sweden)
Jiayin Liu
2017-06-01
Remote sensing technologies have been widely applied in urban environments’ monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive.
Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.
Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing
2017-06-12
Remote sensing technologies have been widely applied in urban environments' monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive.
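The KDE color model is easy to sketch in one dimension. This is a minimal illustration with assumed inputs (the paper applies it to superpixel color values; the function name and bandwidth here are ours, not the paper's):

```python
import numpy as np

def gaussian_kde_pdf(samples, x, bandwidth=0.1):
    """Gaussian-kernel density estimate of a 1D distribution, the kind of
    PDF model used to describe a superpixel's color distribution instead
    of a single mean color."""
    diff = (x[:, None] - samples[None, :]) / bandwidth
    kern = np.exp(-0.5 * diff ** 2) / np.sqrt(2 * np.pi)
    return kern.sum(axis=1) / (len(samples) * bandwidth)

rng = np.random.default_rng(1)
samples = rng.normal(0.5, 0.05, 500)   # e.g. one color channel of a superpixel
x = np.linspace(0.0, 1.0, 401)
pdf = gaussian_kde_pdf(samples, x)
mass = pdf.sum() * (x[1] - x[0])       # Riemann sum; should be close to 1
```

A richer distributional description like this lets two superpixels with equal means but different spreads be told apart when pixels compete for labels.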
GPS-Free Localization Algorithm for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Lei Wang
2010-06-01
Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application-level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation, wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis of a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication-factor, node-density and node-communication-radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time.
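The coordinate-merging step can be sketched as a least-squares rigid transform expressed in homogeneous coordinates. This is an illustrative reconstruction, not the paper's exact derivation: it assumes the two frames share a handful of commonly located nodes, and the determinant check is one standard way to reject the reflection that causes the flip ambiguity.

```python
import numpy as np

def merge_frames(P, Q):
    """Best rigid transform (rotation + translation, no reflection) mapping
    2D coordinates Q in one local frame onto coordinates P of the same
    nodes in another frame, returned as a 3x3 homogeneous matrix."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (Q - cq).T @ (P - cp)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid flips
    R = Vt.T @ D @ U.T
    t = cp - R @ cq
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

rng = np.random.default_rng(0)
Q = rng.uniform(-3, 3, (6, 2))                    # shared nodes in frame B
th = 0.7
R0 = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
P = Q @ R0.T + np.array([2.0, -1.0])              # same nodes in frame A
T = merge_frames(P, Q)
Qh = np.hstack([Q, np.ones((6, 1))])              # homogeneous coordinates
err = np.abs((T @ Qh.T).T[:, :2] - P).max()
```

Chaining such 3x3 matrices is what makes homogeneous coordinates convenient for merging many local frames into one global system.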
Search and localization of orphan sources
International Nuclear Information System (INIS)
Gayral, J.-P.
2001-01-01
The control of all radioactive materials should be a major and permanent concern of every state. This paper outlines some of the steps which should be taken in order to detect and localize orphan sources. Two of them are of great importance to any state wishing to resolve the orphan source problem. The first one is to analyse the situation and the second is to establish a strategy before taking action. It is the responsibility of the state to work on the first step, but for the second one it can draw on the advice of IAEA specialists with experience gained from a variety of situations.
A space-efficient algorithm for local similarities.
Huang, X Q; Hardison, R C; Miller, W
1990-10-01
Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
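The space saving comes from keeping only one row of the dynamic-programming matrix at a time. Below is a minimal score-only sketch of this idea (the published algorithm additionally recovers the aligned regions themselves in linear space, which takes more machinery):

```python
def sw_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment (Smith-Waterman) score in O(len(a)*len(b)) time
    but only O(len(b)) space: each row of the DP matrix is computed from
    the previous row alone."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            s = match if ca == cb else mismatch
            curr.append(max(0,                    # restart: local alignment
                            prev[j - 1] + s,      # diagonal: match/mismatch
                            prev[j] + gap,        # gap in b
                            curr[j - 1] + gap))   # gap in a
            best = max(best, curr[j])
        prev = curr
    return best
```

For two sequences of lengths 73,360 and 44,594 this keeps memory proportional to their sum rather than their product, which is exactly the regime the abstract describes.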
Acoustic Source Localization and Beamforming: Theory and Practice
Directory of Open Access Journals (Sweden)
Chen Joe C
2003-01-01
We consider the theoretical and practical aspects of locating acoustic sources using an array of microphones. A maximum-likelihood (ML) direct localization is obtained when the sound source is near the array, while in the far-field case, we demonstrate the localization via the cross bearing from several widely separated arrays. In the case of multiple sources, an alternating projection procedure is applied to determine the ML estimate of the DOAs from the observed data. The ML estimator is shown to be effective in locating sound sources of various types, for example, vehicle, music, and even white noise. From the theoretical Cramér-Rao bound analysis, we find that better source location estimates can be obtained for high-frequency signals than low-frequency signals. In addition, a large range estimation error results when the source signal is unknown, but this unknown parameter does not have much impact on angle estimation. A large amount of experimentally measured acoustic data was used to verify the proposed algorithms.
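The far-field cross-bearing idea reduces to intersecting bearing rays from widely separated arrays. A toy two-array sketch with assumed positions and noise-free bearings (real bearings are noisy, so more arrays and a least-squares intersection would be used):

```python
import numpy as np

def cross_bearing(p1, theta1, p2, theta2):
    """Intersect two bearing rays: the point x with x = p1 + t*d1 = p2 + u*d2,
    solved as a 2x2 linear system in (t, u)."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Two arrays at (0,0) and (10,0) both see a source at (3,4).
loc = cross_bearing((0.0, 0.0), np.arctan2(4, 3),
                    (10.0, 0.0), np.arctan2(4, -7))
```

This also illustrates the abstract's observation that angle estimates are robust while range (here, the distance along the ray) is the quantity that degrades when bearings are uncertain and the baselines are short.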
A Study on Improvement of Algorithm for Source Term Evaluation
International Nuclear Information System (INIS)
Park, Jeong Ho; Park, Do Hyung; Lee, Jae Hee
2010-03-01
The program developed by KAERI for source term assessment of radwastes from the advanced nuclear fuel cycle consists of a spent fuel database analysis module, a spent fuel arising projection module, and an automatic characterization module for radwastes from pyroprocess. To improve the algorithms adopted in the developed program, the following items were carried out: - development of an algorithm to decrease analysis time for the spent fuel database - development of a setup routine for the analysis procedure - improvement of the interface for the spent fuel arising projection module - optimization of the data management algorithm needed for massive calculation to estimate source terms of radwastes from the advanced fuel cycle. The program developed through this study has the capability to perform source term estimation even when several spent fuel assemblies with different fuel designs, initial enrichments, irradiation histories, discharge burnups, and cooling times are processed at the same time in the pyroprocess. It is expected that this program will be very useful for the design of the unit process of pyroprocess and the disposal system.
A novel algorithm for automatic localization of human eyes
Institute of Scientific and Technical Information of China (English)
Liang Tao (陶亮); Juanjuan Gu (顾涓涓); Zhenquan Zhuang (庄镇泉)
2003-01-01
Based on geometrical facial features and image segmentation, we present a novel algorithm for automatic localization of human eyes in grayscale or color still images with complex background. Firstly, a determination criterion of eye location is established by the prior knowledge of geometrical facial features. Secondly, a range of threshold values that would separate eye blocks from others in a segmented face image (i.e., a binary image) is estimated. Thirdly, with the progressive increase of the threshold by an appropriate step in that range, once two eye blocks appear from the segmented image, they will be detected by the determination criterion of eye location. Finally, the 2D correlation coefficient is used as a symmetry similarity measure to check the factuality of the two detected eyes. To avoid background interference, skin color segmentation can be applied in order to enhance the accuracy of eye detection. The experimental results demonstrate the high efficiency and correct localization rate of the algorithm.
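The progressive-threshold step can be sketched on a synthetic grayscale image: raise the threshold until exactly two candidate blocks separate from the background. This is an illustrative reconstruction with our own helper names, not the paper's code (which further screens blocks with the geometric criterion and symmetry check):

```python
import numpy as np

def count_blocks(binary):
    """Count 4-connected components via flood fill."""
    seen = np.zeros_like(binary, dtype=bool)
    n = 0
    for i, j in zip(*np.nonzero(binary)):
        if seen[i, j]:
            continue
        n += 1
        stack = [(i, j)]
        seen[i, j] = True
        while stack:
            y, x = stack.pop()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                v, u = y + dy, x + dx
                if (0 <= v < binary.shape[0] and 0 <= u < binary.shape[1]
                        and binary[v, u] and not seen[v, u]):
                    seen[v, u] = True
                    stack.append((v, u))
    return n

def find_two_blocks(gray, step=10):
    """Increase the threshold until exactly two dark blocks (candidate
    eyes) separate from the background; return that threshold."""
    for t in range(0, 256, step):
        if count_blocks(gray < t) == 2:
            return t
    return None

img = np.full((12, 12), 200)   # bright "face"
img[2:4, 2:4] = 50             # two dark "eye" blocks
img[8:10, 8:10] = 50
t_found = find_two_blocks(img)
```

In the real algorithm the two surviving blocks would then be validated by the eye-location criterion and the 2D correlation symmetry measure.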
Brain source localization using a fourth-order deflation scheme
Albera, Laurent; Ferréol, Anne; Cosandier-Rimélé, Delphine; Merlet, Isabel; Wendling, Fabrice
2008-01-01
A high resolution method for solving potentially ill-posed inverse problems is proposed. This method, named FO-D-MUSIC, allows for localization of brain current sources with unconstrained orientations from surface electro- or magnetoencephalographic data using spherical or realistic head geometries. The FO-D-MUSIC method is based on i) the separability of the data transfer matrix as a function of location and orientation parameters, ii) the Fourth Order (FO) virtual array theory, and iii) the deflation concept extended to FO statistics accounting for the presence of potentially but not completely statistically dependent sources. Computer results display the superiority of the FO-D-MUSIC approach in different situations (very close sources, small number of electrodes, additive Gaussian noise with unknown spatial covariance, …) compared to classical algorithms. PMID:18269984
Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation
Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi
2015-01-01
Most popular clustering methods typically make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method to separate naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavior information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133
Material sound source localization through headphones
Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada
2012-09-01
In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with a view to future implementation of sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, at a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened to through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
Improved Bevatron local injector ion source performance
International Nuclear Information System (INIS)
Stover, G.; Zajec, E.
1985-05-01
Performance tests of the improved Bevatron Local Injector PIG Ion Source using particles of Si4+, Ne3+, and He2+ are described. Initial measurements of the 8.4 keV/nucleon Si4+ beam show an intensity of 100 particle microamperes with a normalized emittance of 0.06 π cm-mrad. A low energy beam transport line provides mass analysis, diagnostics, and matching into a 200 MHz RFQ linac. The RFQ accelerates the beam from 8.4 to 200 keV/nucleon. The injector is unusual in the sense that all ion source power supplies, the ac distribution network, vacuum control equipment, and the computer control system are contained in a four-bay rack mounted on insulators, which is located on a floor immediately above the ion source. The rack, transmission line, and the ion source housing are raised by a dc power supply to 80 kilovolts above earth ground. All power supplies, which are referenced to rack ground, are modular in construction and easily removable for maintenance. AC power is delivered to the rack via a 21 kVA, 3-phase transformer. 2 refs., 5 figs., 1 tab.
Plagiarism Detection Algorithm for Source Code in Computer Science Education
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether there is plagiarism in source code or not. Traditional detection algorithms cannot fit this…
Application of Matrix Pencil Algorithm to Mobile Robot Localization Using Hybrid DOA/TOA Estimation
Directory of Open Access Journals (Sweden)
Lan Anh Trinh
2012-12-01
Localization plays an important role in robotics for the tasks of monitoring, tracking and controlling a robot. Much effort has been made to address robot localization problems in recent years. However, despite many proposed solutions and thorough consideration, in terms of developing a low-cost and fast processing method for multiple-source signals, the robot localization problem is still a challenge. In this paper, we propose a solution for robot localization with regard to these concerns. In order to locate the position of a robot, both the coordinates and the orientation of the robot are necessary. We develop a localization method using the Matrix Pencil (MP) algorithm for hybrid estimation of direction of arrival (DOA) and time of arrival (TOA). The TOA of the signal is estimated for computing the distance between the mobile robot and a base station (BS). Based on the distance and the estimated DOA, we can estimate the mobile robot's position. The characteristics of the algorithm are examined by analysing simulated experiments, and the results demonstrate the advantages of our method over previous works in dealing with the above challenges. The method is built on a low-cost infrastructure of radio frequency devices, and the DOA/TOA estimation is performed with just a single singular value decomposition for fast processing. Finally, the MP algorithm combined with tracking using a Kalman filter allows our proposed method to locate the positions of multiple source signals.
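The tracking stage that the authors combine with the MP estimator can be sketched with a textbook constant-velocity Kalman filter smoothing noisy position fixes (one coordinate shown; the noise parameters and dimensions here are assumptions, not the paper's values):

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over scalar position measurements,
    e.g. one coordinate of the DOA/TOA-derived robot position fixes."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition [pos, vel]
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in zs:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

rng = np.random.default_rng(2)
t = np.arange(100)
true = 0.5 * t                               # robot moving at constant speed
zs = true + rng.normal(0.0, 0.5, 100)        # noisy position fixes
filt = np.array(kalman_track(zs))
raw_mse = np.mean((zs - true) ** 2)
kf_mse = np.mean((filt - true) ** 2)
```

Feeding each new MP-derived fix through the filter is what lets the method keep smooth tracks of several moving sources at once.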
Liu, Hua-Long; Liu, Hua-Dong
2014-10-01
Partial discharge (PD) in power transformers is one of the prime causes of insulation degradation and power faults. Hence, it is of great importance to study techniques for the detection and localization of PD in theory and practice. The detection and localization of PD employing acoustic emission (AE) techniques, a kind of non-destructive testing, have received more and more attention owing to their powerful locating capability and high precision. The localization algorithm is the key factor deciding the localization accuracy in AE localization of PD. Many kinds of localization algorithms exist for PD source localization adopting AE techniques, including intelligent and non-intelligent algorithms. However, the existing algorithms possess defects such as premature convergence, poor local optimization ability and unsuitability for field applications. To overcome the poor local optimization ability and the premature convergence of the fundamental genetic algorithm (GA), a new kind of improved GA is proposed, namely the sequence quadratic programming-genetic algorithm (SQP-GA). In this hybrid optimization algorithm, the sequence quadratic programming (SQP) algorithm is integrated into the fundamental GA as a basic operator, so the local searching ability of the fundamental GA is improved effectively and the premature convergence phenomenon is overcome. Experimental results of numerical simulations on benchmark functions show that SQP-GA is better than the fundamental GA in convergence speed and optimization precision, and the proposed algorithm has an outstanding optimization effect. At the same time, the presented SQP-GA is applied to solve the ultrasonic localization problem of PD in transformers, and an ultrasonic localization method of PD in transformers based on SQP-GA is proposed.
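The hybrid idea, a GA whose offspring are refined by a local optimizer, can be sketched generically. Note the stand-in: a greedy accept-if-better nudge replaces the actual SQP operator, so this only illustrates the hybrid structure, not the paper's method:

```python
import numpy as np

def hybrid_ga(f, dim, pop=30, gens=60, seed=0):
    """Toy memetic optimizer: elitist GA with averaging crossover, decaying
    mutation, and a greedy local polish of each offspring (the paper embeds
    SQP here; this nudge is a deliberate simplification)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    for g in range(gens):
        fit = np.array([f(x) for x in X])
        X = X[np.argsort(fit)]
        elite = X[: pop // 2]                         # elitism: keep best half
        sigma = 0.3 * 0.93 ** g                       # decaying mutation scale
        pa = elite[rng.integers(0, len(elite), pop - len(elite))]
        pb = elite[rng.integers(0, len(elite), pop - len(elite))]
        kids = (pa + pb) / 2 + rng.normal(0.0, sigma, pa.shape)
        for k in range(len(kids)):                    # local polish step
            trial = kids[k] + rng.normal(0.0, sigma / 5, dim)
            if f(trial) < f(kids[k]):
                kids[k] = trial
        X = np.vstack([elite, kids])
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)]

best = hybrid_ga(lambda x: float((x ** 2).sum()), dim=3)
```

The point of the hybrid is exactly what the abstract claims: the global operator escapes premature convergence while the local operator supplies the fine precision a plain GA lacks.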
Adaptive local backlight dimming algorithm based on local histogram and image characteristics
DEFF Research Database (Denmark)
Nadernejad, Ehsan; Burini, Nino; Korhonen, Jari
2013-01-01
Liquid Crystal Displays (LCDs) with Light Emitting Diode (LED) backlight are a very popular display technology, used for instance in television sets, monitors and mobile phones. This paper presents a new backlight dimming algorithm that exploits the characteristics of the target image, such as the local histograms and the average pixel intensity of each backlight segment, to reduce the power consumption of the backlight and enhance image quality. The local histogram of the pixels within each backlight segment is calculated and, based on this average, an adaptive quantile value is extracted and used to achieve a better trade-off between power consumption and image quality preservation than the other algorithms representing the state of the art among feature based backlight algorithms.
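The segment-wise quantile idea can be sketched as follows; the fixed quantile, segment grid and function names are assumptions standing in for the paper's adaptive, histogram-driven rule:

```python
import numpy as np

def backlight_levels(img, seg=(2, 2), q=90):
    """Per-segment local dimming sketch: drive each LED segment at a high
    quantile of its pixels' intensities, so dark segments are dimmed while
    segments containing bright content stay lit."""
    h, w = img.shape
    sh, sw = h // seg[0], w // seg[1]
    levels = np.zeros(seg)
    for i in range(seg[0]):
        for j in range(seg[1]):
            block = img[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw]
            levels[i, j] = np.percentile(block, q)
    return levels

img = np.zeros((8, 8))
img[:4, :4] = 40      # dim top-left quadrant
img[4:, 4:] = 220     # bright bottom-right quadrant
lv = backlight_levels(img)
```

Lowering the quantile saves more power at the cost of clipping the brightest pixels, which is the trade-off the adaptive quantile in the paper tunes per segment.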
Algorithms for the process management of sealed source brachytherapy
International Nuclear Information System (INIS)
Engler, M.J.; Ulin, K.; Sternick, E.S.
1996-01-01
Incidents and misadministrations suggest that brachytherapy may benefit from clarification of the quality management program and other mandates of the US Nuclear Regulatory Commission. To that end, flowcharts of step-by-step subprocesses were developed and formatted with dedicated software. The overall process was similarly organized in a complex flowchart termed a general process map. Procedural and structural indicators associated with each flowchart and map were critiqued and pre-existing documentation was revised. "Step-regulation tables" were created to refer steps and subprocesses to Nuclear Regulatory Commission rules and recommendations in their sequences of applicability. Brachytherapy algorithms were specified as programmable, recursive processes, including therapeutic dose determination and monitoring doses to the public. These algorithms are embodied in flowcharts and step-regulation tables. A general algorithm is suggested as a template from which other facilities may derive tools to facilitate process management of sealed source brachytherapy. 11 refs., 9 figs., 2 tabs.
Three-dimensional localization of low activity gamma-ray sources in real-time scenarios
Energy Technology Data Exchange (ETDEWEB)
Sharma, Manish K., E-mail: mksrkf@mst.edu; Alajo, Ayodeji B.; Lee, Hyoung K.
2016-03-21
Radioactive source localization plays an important role in tracking radiation threats in homeland security tasks. Its real-time application requires computationally efficient and reasonably accurate algorithms even with limited data to support detection with minimum uncertainty. This paper describes a statistic-based grid-refinement method for backtracing the position of a gamma-ray source in a three-dimensional domain in real-time. The developed algorithm used measurements from various known detector positions to localize the source. This algorithm is based on an inverse-square relationship between source intensity at a detector and the distance from the source to the detector. The domain discretization was developed and implemented in MATLAB. The algorithm was tested and verified from simulation results of an ideal case of a point source in non-attenuating medium. Subsequently, an experimental validation of the algorithm was performed to determine the suitability of deploying this scheme in real-time scenarios. Using the measurements from five known detector positions and for a measurement time of 3 min, the source position was estimated with an accuracy of approximately 53 cm. The accuracy improved and stabilized to approximately 25 cm for higher measurement times. It was concluded that the error in source localization was primarily due to detection uncertainties. In verification and experimental validation of the algorithm, the distance between the ¹³⁷Cs source and any detector position was between 0.84 m and 1.77 m. The results were also compared with the least squares method. Since the discretization algorithm was validated with a weak source, it is expected that it can localize the source of higher activity in real-time. It is believed that for the same physical placement of source and detectors, a source of approximate activity 0.61–0.92 mCi can be localized in real-time with 1 s of measurement time and same accuracy. The accuracy and computational
An inverse source location algorithm for radiation portal monitor applications
International Nuclear Information System (INIS)
Miller, Karen A.; Charlton, William S.
2010-01-01
Radiation portal monitors are being deployed at border crossings throughout the world to prevent the smuggling of nuclear and radiological materials; however, a tension exists between security and the free-flow of commerce. Delays at ports-of-entry have major economic implications, so it is imperative to minimize portal monitor screening time. We have developed an algorithm to locate a radioactive source using a distributed array of detectors, specifically for use at border crossings. To locate the source, we formulated an optimization problem where the objective function describes the least-squares difference between the actual and predicted detector measurements. The predicted measurements are calculated by solving the 3-D deterministic neutron transport equation given an estimated source position. The source position is updated using the steepest descent method, where the gradient of the objective function with respect to the source position is calculated using adjoint transport calculations. If the objective function is smaller than the convergence criterion, then the source position has been identified. This paper presents the derivation of the underlying equations in the algorithm as well as several computational test cases used to characterize its accuracy.
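The optimization loop can be sketched on a toy forward model. Two deliberate simplifications versus the paper: the paper evaluates a 3-D deterministic transport model with gradients from adjoint transport calculations and steepest descent, while this sketch uses a plain 2-D inverse-square model and a Gauss-Newton update on the same least-squares objective:

```python
import numpy as np

def gn_locate(det, counts, a, x0, iters=20):
    """Minimize the least-squares difference between measured and predicted
    detector readings over the source position, with predictions from an
    assumed inverse-square model phi = a / |x - d|^2 of known strength a."""
    x = np.asarray(x0, float)
    det = np.asarray(det, float)
    for _ in range(iters):
        diff = x - det                        # (m, 2) source-to-detector
        r2 = (diff ** 2).sum(1)
        phi = a / r2                          # predicted measurements
        J = -2 * a * diff / r2[:, None] ** 2  # Jacobian d(phi)/dx
        dx = np.linalg.solve(J.T @ J, J.T @ (counts - phi))
        x = x + dx                            # Gauss-Newton position update
    return x

det = [(2.0, 0.0), (0.0, 2.0), (-2.0, 0.0), (0.0, -2.0), (1.5, 1.5)]
src = np.array([0.3, -0.2])
counts = 10.0 / ((np.asarray(det) - src) ** 2).sum(1)
x_hat = gn_locate(det, counts, 10.0, x0=(0.0, 0.0))
```

The structure is the same as in the paper: iterate position updates until the objective falls below a convergence criterion; only the forward model and the gradient machinery differ.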
Tracking of Multiple Moving Sources Using Recursive EM Algorithm
Directory of Open Access Journals (Sweden)
Böhme Johann F
2005-01-01
Full Text Available We deal with recursive direction-of-arrival (DOA estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of moving sources is described by a linear polynomial model. The proposed recursion updates the polynomial coefficients when a new data arrives. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes fast or two source directions cross with each other, the procedure designed for a linear polynomial model has a better performance than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.
SCALCE: boosting sequence compression algorithms using locally consistent encoding.
Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk
2012-12-01
The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip
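The reordering principle is easy to demonstrate: grouping similar reads inside gzip's 32 KB match window improves compression even with a crude lexicographic sort standing in for SCALCE's Locally Consistent Parsing (coverage below is exaggerated so that reads repeat, an assumption of this toy setup):

```python
import gzip
import random

# Simulate heavily overlapping reads from a reference sequence.
random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(100_000))
starts = [random.randrange(0, 2_000) for _ in range(4000)]
reads = [genome[i:i + 100] for i in starts]

shuffled = "".join(reads).encode()             # arrival order: similar reads far apart
reordered = "".join(sorted(reads)).encode()    # similar reads adjacent

n_shuf = len(gzip.compress(shuffled))
n_sort = len(gzip.compress(reordered))
```

In arrival order, near-identical reads typically sit farther apart than gzip's window, so their redundancy is invisible to it; after reordering, the long matches fall within the window, which is the effect SCALCE exploits without needing a reference genome.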
Near-Field Source Localization by Using Focusing Technique
He, Hongyang; Wang, Yide; Saillard, Joseph
2008-12-01
We discuss two fast algorithms to localize multiple sources in near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure for the reduction of computation cost. We present then a focusing-based method which does not require symmetric array configuration. By using focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows the bearing estimation with the well-studied far-field methods. With the estimated bearing, the range estimation of each source is consequently obtained by using 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and with the Cramér-Rao bound as well. Unlike other near-field algorithms, these two approaches require neither high-computation cost nor high-order statistics.
Near-Field Source Localization by Using Focusing Technique
Directory of Open Access Journals (Sweden)
Joseph Saillard
2008-12-01
We discuss two fast algorithms to localize multiple sources in near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure for the reduction of computation cost. We present then a focusing-based method which does not require symmetric array configuration. By using focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows the bearing estimation with the well-studied far-field methods. With the estimated bearing, the range estimation of each source is consequently obtained by using 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and with the Cramér-Rao bound as well. Unlike other near-field algorithms, these two approaches require neither high-computation cost nor high-order statistics.
Synthesis of blind source separation algorithms on reconfigurable FPGA platforms
Du, Hongtao; Qi, Hairong; Szu, Harold H.
2005-03-01
Recent advances in intelligence technology have boosted the development of micro-Unmanned Air Vehicles (UAVs), including Silver Fox, Shadow, and Scan Eagle, for various surveillance and reconnaissance applications. These affordable and reusable devices have to fit a series of size, weight, and power constraints. Cameras used on such micro-UAVs are therefore mounted directly at a fixed angle without any motion-compensated gimbals. This mounting scheme results in the so-called jitter effect, where jitter is defined as sub-pixel or small-amplitude vibrations. The jitter blur caused by the jitter effect needs to be corrected before any other processing algorithms can be practically applied. Jitter restoration has been solved by various optimization techniques, including Wiener approximation, maximum a-posteriori probability (MAP), etc. However, these algorithms normally assume a spatially invariant blur model, which is not the case for jitter blur. Szu et al. developed a smart real-time algorithm based on auto-regression (AR), with its natural generalization to unsupervised artificial neural network (ANN) learning, to achieve restoration accuracy at the sub-pixel level. This algorithm resembles the capability of the human visual system, in which an agreement between the pair of eyes indicates "signal"; otherwise, jitter noise. Using this non-statistical method, a deterministic blind source separation (BSS) process can be carried out independently for each single pixel, based on a deterministic minimum of the Helmholtz free energy with a generalization of Shannon's information theory applied to open dynamic systems. From a hardware implementation point of view, the process of jitter restoration of an image using Szu's algorithm can be optimized by pixel-based parallelization. In our previous work, a parallel-structured independent component analysis (ICA) algorithm has been implemented on both Field Programmable Gate Array (FPGA) and Application
Fast weighted centroid algorithm for single particle localization near the information limit.
Fish, Jeremie; Scrimgeour, Jan
2015-07-10
A simple weighting scheme that enhances the localization precision of center-of-mass calculations for radially symmetric intensity distributions is presented. The algorithm effectively removes the biasing that is common in such center-of-mass calculations. Localization precision compares favorably with other localization algorithms used in super-resolution microscopy and particle tracking, while the algorithm significantly reduces processing time and memory usage. We expect the algorithm presented to be of significant utility when fast, computationally lightweight particle localization or tracking is desired.
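The idea above can be sketched in a few lines: a plain center of mass is biased toward the image center by any uniform background, and re-centering a radial weight on the current estimate removes most of that bias. The Gaussian weight and its width below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def weighted_centroid(image, weight_sigma=4.0, iterations=5):
    """Sub-pixel spot localization by iteratively re-weighted center of mass.

    A Gaussian radial weight, re-centered on the current estimate, suppresses
    the background bias of a plain center-of-mass calculation. The weighting
    used in the paper may differ; this is an illustrative choice.
    """
    ys, xs = np.indices(image.shape)
    cy = (image * ys).sum() / image.sum()   # plain centroid as starting point
    cx = (image * xs).sum() / image.sum()
    for _ in range(iterations):
        w = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * weight_sigma ** 2))
        wi = image * w
        cy = (wi * ys).sum() / wi.sum()
        cx = (wi * xs).sum() / wi.sum()
    return cy, cx

# synthetic Gaussian spot at (20.3, 17.7) on a weak uniform background
ys, xs = np.indices((40, 40))
spot = np.exp(-((ys - 20.3) ** 2 + (xs - 17.7) ** 2) / (2 * 2.0 ** 2)) + 0.02
est = weighted_centroid(spot)
```

On this synthetic frame the plain centroid is pulled visibly toward the image center by the background, while the re-weighted estimate settles close to the true spot position.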
RSS-based localization of isotropically decaying source with unknown power and pathloss factor
International Nuclear Information System (INIS)
Sun, Shunyuan; Sun, Li; Ding, Zhiguo
2016-01-01
This paper addresses the localization of an isotropically decaying source based on received signal strength (RSS) measurements collected from nearby active sensors that are position-aware and wirelessly connected, and it proposes a novel iterative algorithm for RSS-based source localization in order to improve the location accuracy and realize real-time localization and automatic monitoring of hospital patients and medical equipment in the smart hospital. In particular, we consider the general case where the source power and pathloss factor are both unknown. For such a source localization problem, we propose an iterative algorithm in which the unknown source position and the two other unknown parameters (i.e. the source power and pathloss factor) are estimated alternately based on each other, with our proposed sub-optimal initial estimate of the source position obtained from the RSS measurements collected from a few (closest) active sensors with the largest RSS values. Analysis and simulation show that our proposed iterative algorithm guarantees global convergence to the least-squares (LS) solution, where for the assumed independent and identically distributed (i.i.d.) zero-mean Gaussian RSS measurement errors the converged localization performance achieves the optimum corresponding to the Cramér–Rao lower bound (CRLB).
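The alternating estimation described above can be sketched as follows. The model rss_i = P0 - 10·alpha·log10(d_i) is linear in (P0, alpha) for a fixed position, so that step is a small least-squares solve; the position step here uses a coarse-to-fine grid search for clarity, whereas the paper's update rule and initializer may differ.

```python
import numpy as np

def rss_localize(sensors, rss, iters=10):
    """Alternating-LS sketch for RSS localization with unknown source power P0
    and pathloss factor alpha (model: rss_i = P0 - 10*alpha*log10(d_i)).
    Illustrative implementation, not the paper's exact iteration."""
    k = min(3, len(rss))
    top = np.argsort(rss)[-k:]                 # strongest (closest) sensors
    pos = sensors[top].mean(axis=0)            # sub-optimal initial estimate
    P0, alpha = rss.max(), 2.0
    span = np.ptp(sensors, axis=0).max()
    for _ in range(iters):
        # parameter step: linear LS for (P0, alpha) at the current position
        di = np.maximum(np.linalg.norm(sensors - pos, axis=1), 1e-9)
        Amat = np.stack([np.ones_like(di), -10.0 * np.log10(di)], axis=1)
        P0, alpha = np.linalg.lstsq(Amat, rss, rcond=None)[0]
        # position step: coarse-to-fine grid search around the estimate
        g = np.linspace(-span, span, 41)
        cand = pos + np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
        d = np.maximum(np.linalg.norm(cand[:, None, :] - sensors[None, :, :],
                                      axis=2), 1e-9)
        err = ((P0 - 10.0 * alpha * np.log10(d) - rss) ** 2).sum(axis=1)
        pos = cand[err.argmin()]
        span *= 0.5
    return pos, P0, alpha

# demo: noiseless measurements from a source at (3, 4), P0 = -20, alpha = 2.3
sensors = np.array([[0, 0], [10, 0], [0, 10], [10, 10],
                    [5, 0], [0, 5], [10, 5], [5, 10]], dtype=float)
src = np.array([3.0, 4.0])
rss = -20.0 - 10 * 2.3 * np.log10(np.linalg.norm(sensors - src, axis=1))
pos, P0, alpha = rss_localize(sensors, rss)
```

With noiseless data and sensors surrounding the source, the alternation recovers position, power, and pathloss factor jointly, which is the identifiability claim the abstract makes.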
Blind source separation advances in theory, algorithms and applications
Wang, Wenwu
2014-01-01
Blind Source Separation reports new results of the efforts on the study of blind source separation (BSS). The book collects novel research ideas and tutorial material in BSS, independent component analysis (ICA), artificial intelligence, and signal processing applications. Furthermore, research results previously scattered across many journals and conferences worldwide are methodically edited and presented in a unified form. The book is likely to be of interest to university researchers, R&D engineers, and graduate students in computer science and electronics who wish to learn the core principles, methods, algorithms, and applications of BSS. Dr. Ganesh R. Naik works at the University of Technology, Sydney, Australia; Dr. Wenwu Wang works at the University of Surrey, UK.
A block matching-based registration algorithm for localization of locally advanced lung tumors
Energy Technology Data Exchange (ETDEWEB)
Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D., E-mail: gdhugo@vcu.edu [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia, 23298 (United States)
2014-04-15
Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0
A block matching-based registration algorithm for localization of locally advanced lung tumors
International Nuclear Information System (INIS)
Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.
2014-01-01
Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0.001). Left
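The final step of the pipeline described above, computing the global rigid transform from the matched block positions, is a standard Procrustes (Kabsch) fit and can be sketched as:

```python
import numpy as np

def procrustes_rigid(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    Block centers on the planning image and their matched positions on the
    on-treatment image would serve as P and Q; the fitted transform then
    yields the couch shift. Standard Kabsch/SVD solution, shown here as a
    generic sketch rather than the paper's exact implementation.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp                             # so that Q ≈ (R @ P.T).T + t
    return R, t
```

Given at least three non-collinear matched points, the fit recovers any rigid motion exactly in the noiseless case and gives the least-squares optimum otherwise.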
Wodzinski, Marek; Skalski, Andrzej; Ciepiela, Izabela; Kuszewski, Tomasz; Kedzierawski, Piotr; Gajda, Janusz
2018-02-01
Knowledge of tumor bed localization and its shape is a crucial factor for preventing irradiation of healthy tissues during supportive radiotherapy and, as a result, cancer recurrence. The localization process is especially hard for tumors adjacent to soft tissues, which undergo complex, nonrigid deformations; breast cancer is the most representative example. A natural approach to improving tumor bed localization is the use of image registration algorithms. However, this involves two unusual aspects which are not common in typical medical image registration: the real deformation field is discontinuous, and there is no direct correspondence between the cancer and its bed in the source and target 3D images, respectively, because the tumor no longer exists during radiotherapy planning. Therefore, a traditional evaluation approach based on known, smooth deformations and target registration error is not directly applicable. In this work, we propose alternative artificial deformations which model the tumor bed creation process. We perform a comprehensive evaluation of the most commonly used deformable registration algorithms: B-Splines free-form deformations (B-Splines FFD), different variants of the Demons, and TV-L1 optical flow. The evaluation procedure includes quantitative assessment on the dedicated artificial deformations, target registration error calculation, 3D contour propagation, and medical experts' visual judgment. The results demonstrate that the registration methods currently applied in practice (rigid registration and B-Splines FFD) are not able to correctly reconstruct discontinuous deformation fields. We show that the symmetric Demons provide the most accurate soft-tissue alignment in terms of the ability to reconstruct the deformation field, target registration error, and relative tumor volume change, while B-Splines FFD and TV-L1 optical flow are not an appropriate choice for the breast tumor bed localization problem.
DEFF Research Database (Denmark)
Chen, Qinyin; Hu, Y.; Chen, Zhe
2016-01-01
Node localization technology is an important technology for the Wireless Sensor Networks (WSNs) applications. An improved 3D node localization algorithm is proposed in this paper, which is based on a Multi-dimensional Scaling (MDS) node localization algorithm for large electrical equipment monito...
Extraction of Tantalum from locally sourced Tantalite using ...
African Journals Online (AJOL)
Extraction of Tantalum from locally sourced Tantalite using ... ABSTRACT: The ability of polyethylene glycol solution to extract tantalum from locally .... metal ion in question by the particular extractant. ... Loparite, a rare-earth ore (Ce, Na,.
Local multiplicative Schwarz algorithms for convection-diffusion equations
Cai, Xiao-Chuan; Sarkis, Marcus
1995-01-01
We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.
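A minimal numerical illustration of the classical multiplicative Schwarz baseline discussed above, on a 1D upwind-discretized convection-diffusion problem (all parameters are illustrative, not from the paper): two overlapping subdomains are swept in the flow direction, each local solve using the freshest residual.

```python
import numpy as np

# 1D convection-diffusion: -u'' + b*u' = f on (0,1), u(0) = u(1) = 0,
# upwind finite differences on n interior points (illustrative setup)
n, b = 99, 20.0
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
A += b * (np.diag(np.full(n, 1.0)) - np.diag(np.ones(n - 1), -1)) / h  # upwind, b > 0
f = np.ones(n)

# two overlapping subdomains, ordered with the flow (left to right for b > 0)
mid, ov = n // 2, 10
subs = [np.arange(0, mid + ov), np.arange(mid - ov, n)]

u = np.zeros(n)
for _ in range(30):                      # multiplicative Schwarz sweeps
    for S in subs:                       # each solve sees the freshest residual
        r = f - A @ u
        u[S] += np.linalg.solve(A[np.ix_(S, S)], r[S])

u_direct = np.linalg.solve(A, f)         # reference solution
```

With this generous overlap the iterate converges geometrically to the direct solve; reversing the subdomain order against the flow slows convergence, which is the sensitivity the paper's quadratic correction terms are designed to reduce.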
Acoustic Transient Source Localization From an Aerostat
National Research Council Canada - National Science Library
Scanlon, Michael; Reiff, Christian; Noble, John
2006-01-01
The Army Research Laboratory (ARL) has conducted experiments using acoustic sensor arrays suspended below tethered aerostats to detect and localize transient signals from mortars, artillery and small arms fire...
Coded moderator approach for fast neutron source detection and localization at standoff
Energy Technology Data Exchange (ETDEWEB)
Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)
2015-06-01
Considering the need for directional sensing at standoff for some security applications and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work focuses on investigating the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that, by surrounding the boron straw detectors with an HDPE coded moderator, a source-detector-orientation-specific response enables potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm was developed to confirm the viability of this detector system's localization capabilities, which resulted in identification of a 1 MeV neutron source with a strength equivalent to 8 kg WGPu at 50 m standoff within ±11°.
Study on Data Clustering and Intelligent Decision Algorithm of Indoor Localization
Liu, Zexi
2018-01-01
Indoor positioning technology gives people positional awareness in architectural spaces, but single-network coverage is often insufficient and location data are highly redundant. This article therefore investigates data clustering and intelligent decision-making for indoor localization. We outline the basic design of multi-source indoor positioning technology and analyze fingerprint localization based on distance measurement, integrated with the position and orientation data of inertial devices. By optimizing the clustering of massive indoor location data, we realize data normalization pretreatment, multi-dimensional controllable clustering centers, and multi-factor clustering, reducing the redundancy of the location data. In addition, a path-planning approach based on neural network inference and decision-making is proposed, with a sparse-data input layer, a dynamic feedback hidden layer, and a low-dimensional output layer, improving intelligent navigation path planning.
Sound source localization and segregation with internally coupled ears
DEFF Research Database (Denmark)
Bee, Mark A; Christensen-Dalsgaard, Jakob
2016-01-01
to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating…
Monte Carlo algorithms with absorbing Markov chains: Fast local algorithms for slow dynamics
International Nuclear Information System (INIS)
Novotny, M.A.
1995-01-01
A class of Monte Carlo algorithms which incorporate absorbing Markov chains is presented. In a particular limit, the lowest order of these algorithms reduces to the n-fold way algorithm. These algorithms are applied to study the escape from the metastable state in the two-dimensional square-lattice nearest-neighbor Ising ferromagnet in an unfavorable applied field, and the agreement with theoretical predictions is very good. It is demonstrated that the higher-order algorithms can be many orders of magnitude faster than either the traditional Monte Carlo or n-fold way algorithms.
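The rejection-free selection at the heart of the n-fold way, the lowest-order member of the algorithm class above, can be sketched generically: pick the next event in proportion to its rate and advance the clock by an exponentially distributed waiting time. This toy omits the absorbing-Markov-chain machinery of the higher-order algorithms and any Ising-specific bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(0)

def nfold_step(rates):
    """One rejection-free (n-fold way style) Monte Carlo step.

    Instead of proposing moves and rejecting most of them, choose the next
    event directly with probability proportional to its rate and advance the
    simulation clock by an exponential waiting time with mean 1/total_rate.
    """
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = rng.exponential(1.0 / total)
    return event, dt

# demo: three event classes with rates 1:2:3
rates = np.array([1.0, 2.0, 3.0])
events, time = [], 0.0
for _ in range(20000):
    e, dt = nfold_step(rates)
    events.append(e)
    time += dt
```

Over many steps the event frequencies approach the rate ratios and the mean waiting time approaches 1/(sum of rates), which is what makes the scheme statistically equivalent to, but far faster than, rejection-based dynamics at low acceptance rates.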
Using Distant Sources in Local Seismic Tomography
Julian, Bruce; Foulger, Gillian
2014-05-01
Seismic tomography methods such as the 'ACH' method of Aki, Christoffersson & Husebye (1976, 1977) are subject to significant bias caused by the unknown wave-speed structure outside the study volume, whose effects are mathematically of the same order as the local-structure effects being studied. Computational experiments using whole-mantle wave-speed models show that the effects are also of comparable numerical magnitude (Masson & Trampert, 1997). Failure to correct for these effects will significantly corrupt computed local structures. This bias can be greatly reduced by solving for additional parameters defining the shapes, orientations, and arrival times of the incident wavefronts. The procedure is exactly analogous to solving for hypocentral locations in local-earthquake tomography. For planar incident wavefronts, each event adds three free parameters and the forward problem is surprisingly simple: The first-order change in the theoretical arrival time at observation point B resulting from perturbations in the incident-wave time t0 and slowness vector s is δtB ≡ δt0 + δs · rA = δtA, the change in the time of the plane wave at the point A where the un-perturbed ray enters the study volume (Julian and Foulger, submitted). This consequence of Fermat's principle apparently has not previously been recognized. In addition to eliminating the biasing effect of structure outside the study volume, this formalism enables us to combine data from local and distant events in studies of local structure, significantly improving resolution of deeper structure, particularly in places such as volcanic and geothermal areas where seismicity is confined to shallow depths. Many published models that were derived using ACH and similar methods probably contain significant artifacts and are in need of re-evaluation.
On the influence of microphone array geometry on HRTF-based Sound Source Localization
DEFF Research Database (Denmark)
Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua
2015-01-01
The direction dependence of Head-Related Transfer Functions (HRTFs) forms the basis for HRTF-based Sound Source Localization (SSL) algorithms. In this paper, we show how spectral similarities of the HRTFs of different directions in the horizontal plane influence the performance of HRTF-based SSL algorithms; the more similar the HRTFs of different angles to the HRTF of the target angle, the worse the performance. However, we also show how the microphone array geometry can assist in differentiating between the HRTFs of the different angles, thereby improving performance of HRTF-based SSL algorithms. Furthermore, to demonstrate the analysis results, we show the impact of HRTF similarities and microphone array geometry on an exemplary HRTF-based SSL algorithm, called MLSSL. This algorithm is well-suited for this purpose as it allows estimating the Direction-of-Arrival (DoA) of the target sound using any…
TSaT-MUSIC: a novel algorithm for rapid and accurate ultrasonic 3D localization
Mizutani, Kyohei; Ito, Toshio; Sugimoto, Masanori; Hashizume, Hiromichi
2011-12-01
We describe a fast and accurate indoor localization technique using the multiple signal classification (MUSIC) algorithm. The MUSIC algorithm is known as a high-resolution method for estimating directions of arrival (DOAs) or propagation delays. A critical problem in using the MUSIC algorithm for localization is its computational complexity. Therefore, we devised a novel algorithm called Time Space additional Temporal-MUSIC, which can rapidly and simultaneously identify DOAs and delays of multicarrier ultrasonic waves from transmitters. Computer simulations have proved that the computation time of the proposed algorithm is almost constant in spite of increasing numbers of incoming waves and is faster than that of existing methods based on the MUSIC algorithm. The robustness of the proposed algorithm is discussed through simulations. Experiments in real environments showed that the standard deviation of position estimations in 3D space is less than 10 mm, which is satisfactory for indoor localization.
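For reference, the baseline MUSIC DOA step that TSaT-MUSIC builds on can be sketched for a uniform linear array. All parameters below (array size, angles, SNR, known source count) are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
M, d = 8, 0.5                                  # sensors, spacing in wavelengths
angles_true = np.array([-20.0, 35.0])          # two far-field sources

def steering(theta_deg):
    """ULA steering vectors, one column per angle."""
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

# simulate N snapshots: X = A s + noise
N = 400
A = steering(angles_true)                                        # M x 2
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                         # sample covariance
w, V = np.linalg.eigh(R)                       # eigenvalues ascending
En = V[:, : M - 2]                             # noise subspace (2 sources assumed known)
grid = np.arange(-90.0, 90.0, 0.1)
a = steering(grid)                             # M x G
P = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2  # MUSIC pseudospectrum

# pick the two largest local maxima of the pseudospectrum
is_peak = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])
cand = np.flatnonzero(is_peak) + 1
top2 = cand[np.argsort(P[cand])[-2:]]
est = np.sort(grid[top2])
```

The eigendecomposition over a fine grid is exactly the cost that grows with the number of incoming waves in standard MUSIC, motivating the constant-time restructuring the abstract describes.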
Theory and Algorithms for Global/Local Design Optimization
National Research Council Canada - National Science Library
Haftka, Raphael T
2004-01-01
... the component and overall design as well as on exploration of global optimization algorithms. In the former category, heuristic decomposition was followed with proof that it solves the original problem...
A novel iris localization algorithm using correlation filtering
Pohit, Mausumi; Sharma, Jitu
2015-06-01
Fast and efficient segmentation of iris from the eye images is a primary requirement for robust database independent iris recognition. In this paper we have presented a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. Pupil-iris boundary computation is based on correlation filtering approach, whereas iris-sclera boundary is determined through one dimensional intensity mapping. The proposed approach is computationally less extensive when compared with the existing algorithms like Hough transform.
Second Sound for Heat Source Localization
Vennekate, Hannes; Uhrmacher, Michael; Quadt, Arnulf; Grosse-Knetter, Joern
2011-01-01
Defects on the surface of superconducting cavities can limit their accelerating gradient through localized heating, which results in a phase transition to the normal-conducting state, a quench. A new application involving Oscillating Superleak Transducers (OSTs) to locate such quench-inducing heat spots on the surface of the cavities was developed by D. Hartill et al. at Cornell University in 2008. The OSTs enable the detection of heat transfer via second sound in superfluid helium. This thesis presents new results on the analysis of their signal. Its behavior has been studied under different circumstances in setups at the University of Göttingen and at CERN. New approaches for automated signal processing have been developed. Furthermore, a first test setup for a single-cell Superconducting Proton Linac (SPL) cavity has been prepared. Recommendations for better signal retrieval during its operation are presented.
Genetic local search algorithm for optimization design of diffractive optical elements.
Zhou, G; Chen, Y; Wang, Z; Song, H
1999-07-10
We propose a genetic local search algorithm (GLSA) for the optimization design of diffractive optical elements (DOEs). This hybrid algorithm incorporates advantages of both the genetic algorithm (GA) and local search techniques, and appears better able to locate the global minimum than a canonical GA. Sample cases investigated here include the optimization design of binary-phase Dammann gratings, continuous surface-relief grating array generators, and a uniform top-hat focal-plane intensity profile generator. Two GLSAs, whose incorporated local search techniques are the hill-climbing method and the simulated annealing algorithm respectively, are investigated. Numerical experiments demonstrate that the proposed algorithm is highly efficient and robust. DOEs with high diffraction efficiency and excellent uniformity can be achieved with the proposed algorithm.
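A toy version of the hybrid can be sketched as a GA whose offspring are refined by a hill-climbing local search. Himmelblau's function stands in for a DOE merit function here, and all operator choices below (truncation selection, blend crossover, pattern-search hill climbing) are illustrative assumptions rather than the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(p):
    """Himmelblau's function: four global minima, all with value 0."""
    x, y = p
    return (x * x + y - 11) ** 2 + (x + y * y - 7) ** 2

def hill_climb(p, step=1.0, shrink=0.5, iters=40):
    """Coordinate-wise pattern search used as the local-search phase."""
    p = p.copy()
    for _ in range(iters):
        improved = False
        for i in range(len(p)):
            for s in (+step, -step):
                q = p.copy()
                q[i] += s
                if f(q) < f(p):
                    p, improved = q, True
        if not improved:
            step *= shrink           # shrink only when no move helps
    return p

def glsa(pop_size=20, gens=25, bounds=5.0):
    """Toy genetic local search: GA operators plus hill climbing per offspring."""
    pop = np.array([hill_climb(p) for p in
                    rng.uniform(-bounds, bounds, (pop_size, 2))])
    for _ in range(gens):
        fit = np.array([f(p) for p in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]   # truncation selection
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = 0.5 * (a + b) + rng.normal(0, 0.3, 2)  # blend + mutation
            kids.append(hill_climb(child))                 # local search step
        pop = np.vstack([parents, kids])
    best = min(pop, key=f)
    return best, f(best)

best, best_val = glsa()
```

The division of labor mirrors the abstract: the GA explores between basins while the embedded local search drives each candidate to the bottom of its basin.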
Optimal configuration of power grid sources based on optimal particle swarm algorithm
Wen, Yuanhua
2018-04-01
In order to optimize the configuration of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm, and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated respectively. Comparison of the test results demonstrates the superiority of the improved algorithm in convergence and optimization performance, laying the foundation for the subsequent solution of the micro-grid power configuration problem.
Predicting Subcellular Localization of Proteins by Bioinformatic Algorithms
DEFF Research Database (Denmark)
Nielsen, Henrik
2015-01-01
was used. Various statistical and machine learning algorithms are used with all three approaches, and various measures and standards are employed when reporting the performances of the developed methods. This chapter presents a number of available methods for prediction of sorting signals and subcellular...
Astrometric and Timing Effects of Gravitational Waves from Localized Sources
Kopeikin, Sergei M.; Schafer, Gerhard; Gwinn, Carl R.; Eubanks, T. Marshall
1998-01-01
A consistent approach for an exhaustive solution of the problem of propagation of light rays in the field of gravitational waves emitted by a localized source of gravitational radiation is developed in the first post-Minkowskian and quadrupole approximation of General Relativity. We demonstrate that the equations of light propagation in the retarded gravitational field of an arbitrary localized source emitting quadrupolar gravitational waves can be integrated exactly. The influence of the gra...
MR-based source localization for MR-guided HDR brachytherapy
Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.
2018-04-01
For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as automatic detection of the source dwell positions.
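The phase correlation step used to match the simulated artifact images against the acquired MR images can be sketched as follows. This is the integer-pixel version for a 2D image; the paper additionally refines to subpixel precision and works at deliberately lowered spatial resolution.

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the translation of image b relative to image a by phase
    correlation: normalize the cross-power spectrum to keep phase only, so the
    inverse FFT is a sharp peak at the shift. Integer-pixel sketch; a subpixel
    step (as in the paper) would interpolate around the peak."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12                       # whiten: keep phase only
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # map wrap-around indices to signed shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, a.shape))

# demo: recover a known circular shift of (5, -3) pixels
a = np.random.default_rng(0).random((64, 64))
b = np.roll(a, (5, -3), axis=(0, 1))
shift = phase_correlate(a, b)
```

Because the correlation is computed with FFTs, the matching cost is independent of the shift magnitude, which is what makes the method compatible with the real-time constraint stated in the abstract.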
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on a relatively dense seismic network. However, for regions covered by sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is much improved.
Directory of Open Access Journals (Sweden)
Junjie Ma
2018-02-01
Full Text Available Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping
2018-02-16
Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
International Nuclear Information System (INIS)
Liu, L; Yuan, F G
2008-01-01
Wireless structural health monitoring (SHM) systems have emerged as a promising technology for robust and cost-effective structural monitoring. However, the application of wireless sensors to active diagnosis for SHM has not been extensively investigated. Due to limited energy sources, battery-powered wireless sensors can only perform limited functions and are expected to operate at a low duty cycle. Conventional designs are not suitable for sensing high frequency signals, e.g. in the ultrasonic frequency range. More importantly, algorithms that detect structural damage from a vast amount of data usually require considerable processing and communication time and result in unaffordable power consumption for wireless sensors. In this study, an energy-efficient wireless sensor supporting high frequency signals and a distributed damage localization algorithm for plate-like structures are proposed, discussed and validated to supplement recent advances made in active sensing-based SHM. First, the power consumption of a wireless sensor is discussed and identified. Then the design of a wireless sensor for active diagnosis using piezoelectric sensors is introduced. The newly developed wireless sensor utilizes an optimized combination of a field programmable gate array (FPGA) and a conventional microcontroller to address the tradeoff between power consumption and speed requirements. The proposed damage localization algorithm, based on an energy decay model, enables wireless sensors to be practically used in active diagnosis. The power consumption for data communication can be minimized while the power budget for data processing remains affordable for a battery-powered wireless sensor. The Levenberg-Marquardt method is employed in a mains-powered sensor node or PC to locate damage. Experimental results and a discussion on the improvement of power efficiency are given.
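The energy-decay idea described above can be sketched numerically. In this hedged toy (the model form, geometry, and parameters are illustrative assumptions, not the paper's), scattered-wave energy at each sensor is modeled as decaying with the square of distance from the damage, and the damage position is fitted with the Levenberg-Marquardt method:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative energy-decay model: E_i = A / d_i^2, with d_i the distance
# from damage to sensor i. Sensor layout, damage position, and amplitude
# A are made-up values for the sketch.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.0]])
damage = np.array([0.3, 0.7])
A = 2.0
energy = A / np.linalg.norm(sensors - damage, axis=1) ** 2   # noiseless measurements

def resid(p):
    # Residual between modeled and measured energies for candidate (x, y, a).
    x, y, a = p
    d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
    return a / d ** 2 - energy

# Levenberg-Marquardt fit, as named in the abstract.
sol = least_squares(resid, x0=[0.5, 0.5, 1.0], method='lm')
print(np.round(sol.x[:2], 3))   # near [0.3, 0.7]
```

With noiseless data and more sensors than unknowns, the fit recovers the damage location essentially exactly; in practice noise and model mismatch would broaden the estimate.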
Localization from near-source quasi-static electromagnetic fields
Energy Technology Data Exchange (ETDEWEB)
Mosher, John Compton [Univ. of Southern California, Los Angeles, CA (United States)
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
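The subspace idea behind MUSIC can be illustrated with a minimal sketch. This is a hypothetical narrowband far-field setup (uniform linear array, one source), not the thesis's MEG formulation: the spatial correlation matrix is eigendecomposed, the noise subspace extracted, and the pseudospectrum peaks where steering vectors are orthogonal to that subspace:

```python
import numpy as np

# Toy MUSIC sketch: 8-sensor half-wavelength ULA, one source at 20 degrees.
rng = np.random.default_rng(0)
M, snapshots, true_deg = 8, 200, 20.0

def steering(deg):
    # Array response to a plane wave from angle `deg` (degrees).
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(deg)))

s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(steering(true_deg), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / snapshots          # spatial correlation matrix
w, V = np.linalg.eigh(R)                # eigenvalues ascending
En = V[:, :-1]                          # noise subspace (one source assumed)

grid = np.arange(-90, 90.25, 0.25)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2 for g in grid]
est = grid[int(np.argmax(spectrum))]
print(est)   # close to 20.0
```

The same orthogonality principle underlies the quasi-static adaptation in the thesis, with the steering model replaced by the relevant lead-field.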
Mathematical model and algorithm of operation scheduling for monitoring situation in local waters
Directory of Open Access Journals (Sweden)
Sokolov Boris
2017-01-01
Full Text Available A multiple-model approach to the description and investigation of control processes in a regional maritime security system is presented. The processes considered in this paper are control processes for computing operations that provide monitoring of the situation developing in the local water area and are connected to the relocation of ships of different classes (hereafter, active mobile objects (AMO)). The previously developed concept of the active mobile object (AMO) is used. The models describe the operation of AMO automated monitoring and control system (AMCS) elements, as well as their interaction with objects-in-service that are sources or recipients of the information being processed. The unified description of various control processes allows synthesizing both the technical and the functional structures of the AMO AMCS simultaneously. The algorithm for solving the scheduling problem is described in terms of the classical theory of optimal automatic control.
DNA evolutionary algorithm (DNAEA) for source term identification in convection-diffusion equation
International Nuclear Information System (INIS)
Yang, X-H; Hu, X-X; Shen, Z-Y
2008-01-01
The source identification problem is recast as an optimization problem in this paper. This is a complicated nonlinear optimization problem that is very intractable with traditional optimization methods, so a DNA evolutionary algorithm (DNAEA) is presented to solve it. In this algorithm, an initial population is generated by a chaos algorithm. As the search range shrinks, DNAEA is gradually directed toward an optimal result by the excellent individuals it obtains. The position and intensity of the pollution source are found accurately with DNAEA. Compared with a Gray-coded genetic algorithm and a pure random search algorithm, DNAEA has faster convergence and higher calculation precision.
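The chaos-based population initialization mentioned above can be sketched as follows. The logistic map at r = 4 is used here as an illustrative choice; the abstract does not specify which chaotic map or parameters the paper uses:

```python
import numpy as np

# Chaotic initialization sketch: iterate one logistic-map trajectory per
# dimension and rescale each iterate into the search bounds [lo, hi].
def chaos_population(n, dim, lo, hi):
    # Distinct initial conditions per dimension, all inside (0, 1).
    x = np.linspace(0.13, 0.71, dim)
    pop = np.empty((n, dim))
    for i in range(n):
        x = 4.0 * x * (1.0 - x)       # logistic map iterate, stays in [0, 1]
        pop[i] = lo + (hi - lo) * x
    return pop

pop = chaos_population(20, 2, -5.0, 5.0)
print(pop.shape)   # (20, 2)
```

Compared with uniform random initialization, chaotic sequences are deterministic and tend to cover the range without clumping, which is why they are a popular seeding choice for evolutionary searches.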
An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.
Cheng, Jing; Xia, Linyuan
2016-08-31
Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead in WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build the mutation probability for avoiding local convergence. Further, the approach restricts the population to a certain range so that it can prevent the energy consumption caused by insignificant searching. Extensive experiments were conducted to study the effects of parameters such as anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results show that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and Particle Swarm Optimization (PSO) algorithm.
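The optimization view of range-based node localization can be sketched with a basic search loop. This is an illustrative stand-in (fixed-scale random steps, greedy replacement, worst-nest abandonment), not the paper's modified Cuckoo Search with Lévy flights and adaptive step size; the anchor layout and ranges are made up:

```python
import numpy as np

# Node localization as optimization: minimize the squared mismatch
# between measured anchor ranges and ranges from a candidate position.
rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)   # noiseless ranges

def fitness(p):
    return float(np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2))

n = 15
nests = rng.uniform(0, 10, size=(n, 2))
for _ in range(300):
    trial = np.clip(nests + 0.5 * rng.standard_normal(nests.shape), 0, 10)
    for i in range(n):
        if fitness(trial[i]) < fitness(nests[i]):      # greedy replacement
            nests[i] = trial[i]
    # abandon a fraction of the worst nests (discovery probability analogue)
    worst = np.argsort([fitness(p) for p in nests])[-3:]
    nests[worst] = rng.uniform(0, 10, size=(3, 2))

best = min(nests, key=fitness)
print(best)   # near [3.0, 7.0]
```

The population-range restriction in the third line of the loop mirrors the abstract's point that confining the search prevents energy wasted on insignificant excursions.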
Source localization analysis using seismic noise data acquired in exploration geophysics
Roux, P.; Corciulo, M.; Campillo, M.; Dubuq, D.
2011-12-01
Passive monitoring using seismic noise data is attracting growing interest at the exploration scale. Recent studies demonstrated source localization capability using seismic noise cross-correlation at observation scales ranging from hundreds of kilometers to meters. In the context of exploration geophysics, classical localization methods using travel-time picking fail when no evident first arrivals can be detected. Likewise, methods based on the intensity decrease as a function of distance to the source also fail when the noise intensity decay is more complicated than the power law expected from geometrical spreading. We propose here an automatic procedure developed in ocean acoustics that iteratively locates the dominant and secondary noise sources. The Matched-Field Processing (MFP) technique is based on the spatial coherence of raw noise signals acquired on a dense array of receivers in order to produce high-resolution source localizations. Standard MFP algorithms locate the dominant noise source by matching the seismic noise Cross-Spectral Density Matrix (CSDM) with the equivalent CSDM calculated from a model and a surrogate source position that scans each position of a 3D grid below the array of seismic sensors. However, at the exploration scale, the background noise is mostly dominated by surface noise sources related to human activities (roads, industrial platforms, ...), whose localization is of no interest for the monitoring of the hydrocarbon reservoir. In other words, the dominant noise sources mask lower-amplitude noise sources associated with the extraction process (in the volume), and their location is therefore difficult to obtain through the standard MFP technique. The Multi-Rate Adaptive Beamforming (MRABF) method is a further improvement of the MFP technique that can locate low-amplitude secondary noise sources using a projector matrix calculated from the eigenvalue decomposition of the CSDM matrix. The MRABF approach aims at cancelling the contributions of
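The standard MFP matching step described above can be illustrated with a minimal Bartlett processor. The geometry, wavenumber, and free-space monopole model here are illustrative assumptions; a real exploration application would use a proper propagation model and a measured, noisy CSDM:

```python
import numpy as np

# Bartlett matched-field sketch: correlate the data CSDM with modeled
# replica vectors over a 2-D search grid and pick the peak.
sensors = np.array([[x, 0.0] for x in np.linspace(0, 50, 11)])  # line array
src = np.array([20.0, 30.0])
k = 2 * np.pi / 10.0                                            # wavenumber (assumed)

def replica(p):
    # Normalized free-space monopole Green's function from point p.
    r = np.linalg.norm(sensors - np.asarray(p, dtype=float), axis=1)
    v = np.exp(-1j * k * r) / r
    return v / np.linalg.norm(v)

d = replica(src)
K = np.outer(d, d.conj())                  # noise-free CSDM for this sketch

xs, ys = np.arange(0, 51, 1.0), np.arange(1, 51, 1.0)
power = np.array([[np.real(replica([x, y]).conj() @ K @ replica([x, y]))
                   for x in xs] for y in ys])
iy, ix = np.unravel_index(int(np.argmax(power)), power.shape)
print(xs[ix], ys[iy])   # near (20.0, 30.0)
```

MRABF extends this by projecting out the dominant eigencomponents of the CSDM before matching, so that weaker in-volume sources become visible.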
Blind Source Separation Based on Covariance Ratio and Artificial Bee Colony Algorithm
Directory of Open Access Journals (Sweden)
Lei Chen
2014-01-01
Full Text Available The computational cost of blind source separation based on bio-inspired intelligence optimization is high. In order to solve this problem, we propose an effective blind source separation algorithm based on the artificial bee colony algorithm. In the proposed algorithm, the covariance ratio of the signals is utilized as the objective function and the artificial bee colony algorithm is used to solve it. The source signal component that is separated out is then removed from the mixtures using the deflation method. All the source signals can be recovered successfully by repeating the separation process. Simulation experiments demonstrate that significant improvements in computational cost and the quality of signal separation are achieved by the proposed algorithm when compared to previous algorithms.
A Local Scalable Distributed EM Algorithm for Large P2P Networks
National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
A fingerprint classification algorithm based on combination of local and global information
Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu
2011-12-01
Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as a fundamental procedure in fingerprint recognition, can sharply decrease the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because the singular point detection method commonly considers only local information, such classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of a fingerprint. First, we use local information to detect singular points and measure their quality, considering orientation structure and image texture in adjacent areas. Furthermore, a global orientation model is adopted to measure the reliability of the singular point group. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor quality fingerprint images.
National Aeronautics and Space Administration — In this paper we develop a local distributed privacy preserving algorithm for feature selection in a large peer-to-peer environment. Feature selection is often used...
A voting-based star identification algorithm utilizing local and global distribution
Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua
2018-03-01
A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global distribution and local distribution of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm exhibits a 99.81% identification rate with 2-pixel standard deviations of positional noise and 0.322-Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust towards noise, and the average identification time and required memory are lower. Furthermore, the real sky test shows that the proposed algorithm performs well on real star images.
Li, Wei; Yang, Zhen; Hu, Haifeng
2014-01-01
Graphical models have been widely applied in solving distributed inference problems in wireless networks. In this paper, we formulate the cooperative localization problem in a mobile network as an inference problem on a factor graph. Using a sequential schedule of message updates, a sequential uniformly reweighted sum-product algorithm (SURW-SPA) is developed for mobile localization problems. The proposed algorithm combines the distributed nature of belief propagation (BP) with the improved p...
Localization of Vibrating Noise Sources in Nuclear Reactor Cores
International Nuclear Information System (INIS)
Hultqvist, Pontus
2004-09-01
In this thesis the possibility of locating vibrating noise sources in a nuclear reactor core from the neutron noise has been investigated using different localization methods. The influence of the vibrating noise source has been considered to be a small perturbation of the neutron flux inside the reactor. Linear perturbation theory has been used to construct the theoretical framework upon which the localization methods are based. Two different cases have been considered: one where a one-dimensional one-group model has been used and another where a two-dimensional two-energy-group noise simulator has been used. In the first case only one localization method is able to determine the position with good accuracy. This localization method is based on finding roots of an equation and is sensitive to other perturbations of the neutron flux. It will therefore work better with the assistance of approximate methods that reconstruct the noise source to determine whether the results are reliable or not. In the two-dimensional case the results are more promising. There are several different localization techniques that reproduce both the vibrating noise source position and the direction of vibration with sufficient precision. The approximate methods that reconstruct the noise source are substantially better and are able to support the root-finding method in a more constructive way. By combining the methods, the results become more reliable.
Ambiguity Resolution for Phase-Based 3-D Source Localization under Fixed Uniform Circular Array.
Chen, Xin; Liu, Zhen; Wei, Xizhang
2017-05-11
Under a fixed uniform circular array (UCA), 3-D parameter estimation of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in a recently proposed phase-based algorithm. In this paper, by using the centro-symmetry of a UCA with an even number of sensors, the source's angles and range can be decoupled, and a novel algorithm named subarray grouping and ambiguity searching (SGAS) is introduced to resolve angle ambiguity. In the SGAS algorithm, each subarray formed by two couples of centro-symmetric sensors can obtain a batch of results under different ambiguities, and by searching for the nearest value among subarrays, which always corresponds to the correct ambiguity, rough angle estimation with no ambiguity is realized. Then, the unambiguous angles are employed to resolve phase ambiguity in a phase-based 3-D parameter estimation algorithm, and the source's range, as well as more precise angles, can be achieved. Moreover, to improve the practical performance of SGAS, the optimal structure of subarrays and subarray selection criteria are further investigated. Simulation results demonstrate the satisfying performance of the proposed method in 3-D source localization.
A Separation Algorithm for Sources with Temporal Structure Only Using Second-order Statistics
Directory of Open Access Journals (Sweden)
J.G. Wang
2013-09-01
Full Text Available Unlike conventional blind source separation (BSS), which deals with independent identically distributed (i.i.d.) sources, this paper addresses separation from mixtures of sources with temporal structure, such as linear autocorrelations. Many sequential extraction algorithms have been reported, which suffer from inevitable accumulated errors introduced by the deflation scheme. We propose a robust separation algorithm that recovers the original sources simultaneously, through a joint diagonalizer of several averaged delayed covariance matrices at positions of the optimal time delay and its integer multiples. The proposed algorithm is computationally simple and efficient, since it is based on second-order statistics only. Extensive simulation results confirm the validity and high performance of the algorithm. Compared with related extraction algorithms, its separation signal-to-noise ratio for a desired source can reach 20 dB higher, and it seems rather insensitive to the estimation error of the time delay.
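The core second-order idea above can be shown with a minimal AMUSE-style sketch: sources with temporal structure are separated by diagonalizing a single time-delayed covariance matrix after whitening. The paper jointly diagonalizes several delays; only one delay and a made-up two-source mixture are used here:

```python
import numpy as np

# Two zero-mean sources with distinct temporal structure, linearly mixed.
t = np.arange(5000)
S = np.vstack([np.sin(0.3 * t), np.sign(np.sin(0.017 * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # mixing matrix (assumed)
X = A @ S

X = X - X.mean(axis=1, keepdims=True)
R0 = X @ X.T / X.shape[1]
w, V = np.linalg.eigh(R0)
W = V @ np.diag(1.0 / np.sqrt(w)) @ V.T          # whitening matrix
Z = W @ X

tau = 5
Rtau = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
Rtau = (Rtau + Rtau.T) / 2                       # symmetrize for eigh
_, U = np.linalg.eigh(Rtau)
Y = U.T @ Z                                      # recovered sources (up to order/sign/scale)

# Correlation of each recovered row with its best-matching true source.
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(corr.max(axis=1))   # both close to 1
```

Separation succeeds because the two sources have clearly different lag-5 autocorrelations; averaging over several delays, as in the paper, makes the method robust when a single delay happens to be uninformative.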
Localization of gravitational wave sources with networks of advanced detectors
International Nuclear Information System (INIS)
Klimenko, S.; Mitselmakher, G.; Pankow, C.; Vedovato, G.; Drago, M.; Prodi, G.; Mazzolo, G.; Salemi, F.; Re, V.; Yakushin, I.
2011-01-01
Coincident observations with gravitational wave (GW) detectors and other astronomical instruments are among the main objectives of the experiments with the network of LIGO, Virgo, and GEO detectors. They will become a necessary part of the future GW astronomy as the next generation of advanced detectors comes online. The success of such joint observations directly depends on the source localization capabilities of the GW detectors. In this paper we present studies of the sky localization of transient GW sources with the future advanced detector networks and describe their fundamental properties. By reconstructing sky coordinates of ad hoc signals injected into simulated detector noise, we study the accuracy of the source localization and its dependence on the strength of injected signals, waveforms, and network configurations.
Extension to HiRLoc Algorithm for Localization Error Computation in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Swati Saxena
2013-09-01
Full Text Available Wireless sensor networks (WSNs) have gained importance in recent years as they support a large spectrum of applications such as automotive, health, military, environmental, home and office. Various algorithms have been proposed for making this technology more adaptive; the existing algorithms address issues such as safety, security, power consumption, lifetime and localization. This paper presents an extension to the HiRLoc algorithm and highlights its benefits. Extended HiRLoc significantly reduces the average localization error by suggesting a new directional-antenna-based scheme.
Pollution source localization in an urban water supply network based on dynamic water demand.
Yan, Xuesong; Zhu, Zhixin; Li, Tian
2017-10-27
Urban water supply networks are susceptible to intentional and accidental chemical and biological pollution, which poses a threat to the health of consumers. In recent years, drinking-water pollution incidents have occurred frequently, seriously endangering social stability and security. Real-time monitoring of water quality can be effectively implemented by placing sensors in the water supply network. However, locating the source of pollution from the detection data obtained by water quality sensors is a challenging problem. The difficulty lies in the limited number of sensors, the large number of water supply network nodes, and the dynamic user demand for water, which make pollution source localization an uncertain, large-scale, and dynamic optimization problem. In this paper, we mainly study the dynamics of the pollution source localization problem. Previous studies of pollution source localization assume that hydraulic inputs (e.g., water demand of consumers) are known. However, because of the inherent variability of urban water demand, the problem is essentially a dynamic problem driven by fluctuating consumer water demand. In this paper, the water demand is considered to be stochastic in nature and is described using a Gaussian model or an autoregressive model. On this basis, an optimization algorithm based on these two dynamic water demand models is proposed to locate the pollution source. The objective of the proposed algorithm is to find the locations and concentrations of pollution sources that minimize the difference between the simulated and detected values at the sensors. Simulation experiments were conducted using two different sizes of urban water supply network data, and the experimental results were compared with those of the standard genetic algorithm.
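The two stochastic demand models named above can be sketched as follows. The baseline demand, noise scales, and AR coefficient are illustrative values, not parameters from the paper:

```python
import numpy as np

# Gaussian vs AR(1) models for stochastic consumer water demand.
rng = np.random.default_rng(4)
T, base = 96, 1.0                 # 96 time steps, baseline demand of 1.0

# Gaussian model: independent fluctuation around the baseline.
gauss_demand = base + 0.1 * rng.standard_normal(T)

# AR(1) model: temporally correlated fluctuation around the baseline.
phi = 0.8
eps = 0.06 * rng.standard_normal(T)
ar_demand = np.empty(T)
ar_demand[0] = base
for k in range(1, T):
    ar_demand[k] = base + phi * (ar_demand[k - 1] - base) + eps[k]

print(float(gauss_demand.mean()), float(ar_demand.mean()))   # both near 1.0
```

In the localization setting, each candidate source hypothesis would be simulated under such demand realizations, and the candidate minimizing the sensor mismatch retained.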
Directory of Open Access Journals (Sweden)
Kaifeng Yang
2014-01-01
Full Text Available A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher and ε-dominance. Besides, two multiobjective problems with variable linkages strictly based on manifold distribution are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity-based methods exploit these manifold features to build a probability distribution model from global statistical information of the population; however, the information of promising individuals is not well exploited, which is detrimental to the search and optimization process. Therefore, an incremental tournament local searcher is designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Besides, since ε-dominance is a strategy that can make a multiobjective algorithm obtain well-distributed solutions and has low computational complexity, ε-dominance and the incremental tournament local searcher are combined here. The novel memetic multiobjective estimation of distribution algorithm, MMEDA, was proposed accordingly. The algorithm is validated by experiments on twenty-two test problems with and without variable linkages of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.
Yang, Kaifeng; Mu, Li; Yang, Dongdong; Zou, Feng; Wang, Lei; Jiang, Qiaoyong
2014-01-01
A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher and ε-dominance. Besides, two multiobjective problems with variable linkages strictly based on manifold distribution are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity-based methods exploit these manifold features to build a probability distribution model from global statistical information of the population; however, the information of promising individuals is not well exploited, which is detrimental to the search and optimization process. Therefore, an incremental tournament local searcher is designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Besides, since ε-dominance is a strategy that can make a multiobjective algorithm obtain well-distributed solutions and has low computational complexity, ε-dominance and the incremental tournament local searcher are combined here. The novel memetic multiobjective estimation of distribution algorithm, MMEDA, was proposed accordingly. The algorithm is validated by experiments on twenty-two test problems with and without variable linkages of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.
Localization of Point Sources for Poisson Equation using State Observers
Majeed, Muhammad Usman
2016-08-09
A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of these solution profiles of the sub-problems localizes point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
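The inverse problem setting can be illustrated with a brute-force toy: a unit point source in a rectangular domain, the Poisson equation solved by finite differences, and the source recovered by matching partial boundary data over a candidate grid. This exhaustive search is a stand-in for intuition only; the paper's contribution is an iterative observer scheme, not enumeration:

```python
import numpy as np

# Homogeneous-Dirichlet Poisson solve, -lap(u) = f, via Jacobi iteration.
N = 21

def solve_poisson(src):
    f = np.zeros((N, N))
    f[src] = 1.0
    u = np.zeros((N, N))
    h2 = (1.0 / (N - 1)) ** 2
    for _ in range(800):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                + h2 * f[1:-1, 1:-1])
    return u

true_src = (6, 14)
data = solve_poisson(true_src)[1, :]       # near-boundary trace (proxy for Neumann data)

# Locate the source by matching the boundary trace over interior candidates.
candidates = [(i, j) for i in range(4, 17) for j in range(4, 17)]
best = min(candidates, key=lambda s: float(np.sum((solve_poisson(s)[1, :] - data) ** 2)))
print(best)   # (6, 14)
```

Because a point source leaves a distinct signature in the boundary data, the mismatch vanishes only at the true location; the observer method in the paper reaches the same conclusion without scanning every candidate.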
Localization of Point Sources for Poisson Equation using State Observers
Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem
2016-01-01
A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of these solution profiles of the sub-problems localizes point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
Study of localized photon source in space of measures
International Nuclear Information System (INIS)
Lisi, M.
2010-01-01
In this paper we study a three-dimensional photon transport problem in an interstellar cloud with a localized photon source inside. The problem is solved indirectly, by defining the adjoint of an operator acting on an appropriate space of continuous functions. By means of the sun-adjoint semigroup theory of operators in a Banach space of regular Borel measures, we prove existence and uniqueness of the solution of the problem. A possible approach to identifying the location of the photon source is finally proposed.
An algorithm of local earthquake detection from digital records
Directory of Open Access Journals (Sweden)
A. PROZOROV
1978-06-01
Full Text Available The problem of automatic detection of earthquake signals in seismograms and definition of the first arrivals of P and S waves is considered. The algorithm is based on analysis of the t(A) function, which represents the time of first appearance of a number of successive swings with amplitudes greater than A in the seismic signal. It allows exploitation of such common features of earthquake seismograms as a sudden first P-arrival with amplitude greater than the general noise amplitude and, after a definite interval of time, an S-arrival whose amplitude exceeds that of the P-arrival. The method was applied to 3-channel records of Friuli aftershocks. S-arrivals were defined correctly in all cases; P-arrivals were defined in most cases using strict detection criteria, and no false signals were detected. All P-arrivals were defined using soft detection criteria, but with less reliability, and two false events were obtained.
Partial differential equation-based localization of a monopole source from a circular array.
Ando, Shigeru; Nara, Takaaki; Levy, Tsukassa
2013-10-01
Wave source localization from a sensor array has long been among the most active research topics in both theory and application. In this paper, an explicit and time-domain inversion method for the direction and distance of a monopole source from a circular array is proposed. The approach is based on a mathematical technique, the weighted integral method, for signal/source parameter estimation. It begins with an exact form of the source-constraint partial differential equation that describes the unilateral propagation of wide-band waves from a single source, and leads to exact algebraic equations that include circular Fourier coefficients (phase mode measurements) as their coefficients. From them, nearly closed-form, single-shot and multishot algorithms are obtained that are suitable for use with band-pass/differential filter banks. Numerical evaluation and several experimental results obtained using a 16-element circular microphone array are presented to verify the validity of the proposed method.
Coulomb interactions via local dynamics: a molecular-dynamics algorithm
International Nuclear Information System (INIS)
Pasichnyk, Igor; Duenweg, Burkhard
2004-01-01
We derive and describe in detail a recently proposed method for obtaining Coulomb interactions as the potential of mean force between charges which are dynamically coupled to a local electromagnetic field. We focus on the molecular dynamics version of the method and show that it is intimately related to the Car-Parrinello approach, while being equivalent to solving Maxwell's equations with a freely adjustable speed of light. Unphysical self-energies arise as a result of the lattice interpolation of charges, and are corrected by a subtraction scheme based on the exact lattice Green function. The method can be straightforwardly parallelized using standard domain decomposition. Some preliminary benchmark results are presented.
Computationally efficient near-field source localization using third-order moments
Chen, Jian; Liu, Guohong; Sun, Xiaoying
2014-12-01
In this paper, a third-order moment-based estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm is proposed for passive localization of near-field sources. By properly choosing sensor outputs of the symmetric uniform linear array, two special third-order moment matrices are constructed, in which the steering matrix is a function of the electric angle γ, while the rotational factor is a function of the electric angles γ and ϕ. With a singular value decomposition (SVD) operation, all direction-of-arrival (DOA) estimates are obtained via polynomial rooting. After substituting the DOA information into the steering matrix, the rotational factor is determined via a total least squares (TLS) procedure, and the related range estimations are performed. Compared with the high-order ESPRIT method, the proposed algorithm requires a lower computational burden, and it avoids the parameter-match procedure. Computer simulations are carried out to demonstrate the performance of the proposed algorithm.
Source localization with an advanced gravitational wave detector network
International Nuclear Information System (INIS)
Fairhurst, Stephen
2011-01-01
We derive an expression for the accuracy with which sources can be localized using a network of gravitational wave detectors. The result is obtained via triangulation, using timing accuracies at each detector, and is applicable to a network with any number of detectors. We use this result to investigate the ability of advanced gravitational wave detector networks to accurately localize signals from compact binary coalescences. We demonstrate that additional detectors can significantly improve localization results and illustrate our findings with networks comprising the advanced LIGO, advanced Virgo and LCGT detectors. In addition, we evaluate the benefits of relocating one of the advanced LIGO detectors to Australia.
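The timing-triangulation idea above can be sketched in two dimensions: for a distant source, arrival-time differences between detectors constrain the propagation direction linearly. A minimal sketch (detector coordinates, source direction, and all numbers are hypothetical; real networks work in 3-D with noisy timing):

```python
import numpy as np

# Hypothetical detector positions (km) and a true source direction (2-D sketch).
c = 299792.458                                   # speed of light, km/s
detectors = np.array([[0.0, 0.0], [3000.0, 0.0], [0.0, 3000.0]])
true_dir = np.array([np.cos(0.7), np.sin(0.7)])  # unit vector toward the source

# Plane-wave arrival times: the wavefront travels along -n, so a detector
# farther along n (closer to the source) is hit earlier: t_i = t0 - (r_i . n)/c
t0 = 0.123
times = t0 - detectors @ true_dir / c

# Differencing arrival times against the first detector removes the unknown t0
# and leaves a linear system for the direction n.
A = -(detectors[1:] - detectors[0]) / c
b = times[1:] - times[0]
n_est, *_ = np.linalg.lstsq(A, b, rcond=None)
n_est /= np.linalg.norm(n_est)  # renormalize to a unit direction
```

With noisy timings the same least-squares system yields the direction uncertainty that the paper's triangulation expression quantifies.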
COM-LOC: A Distributed Range-Free Localization Algorithm in Wireless Networks
Dil, B.J.; Havinga, Paul J.M.; Marusic, S; Palaniswami, M; Gubbi, J.; Law, Y.W.
2009-01-01
This paper investigates distributed range-free localization in wireless networks using a communication protocol called sum-dist which is commonly employed by localization algorithms. With this protocol, the reference nodes flood the network in order to estimate the shortest distance between the
A probabilistic framework for acoustic emission source localization in plate-like structures
International Nuclear Information System (INIS)
Dehghan Niri, E; Salamone, S
2012-01-01
This paper proposes a probabilistic approach for acoustic emission (AE) source localization in isotropic plate-like structures based on an extended Kalman filter (EKF). The proposed approach consists of two main stages. During the first stage, time-of-flight (TOF) measurements of Lamb waves are carried out by a continuous wavelet transform (CWT), accounting for systematic errors due to the Heisenberg uncertainty; the second stage uses an EKF to iteratively estimate the AE source location and the wave velocity. The advantages of the proposed algorithm over the traditional methods include the capability of: (1) taking into account uncertainties in TOF measurements and wave velocity and (2) efficiently fusing multi-sensor data to perform AE source localization. The performance of the proposed approach is validated through pencil-lead breaks performed on an aluminum plate at systematic grid locations. The plate was instrumented with an array of four piezoelectric transducers in two different configurations. (paper)
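The core idea of the second stage, jointly estimating the source location and the wave velocity from TOF measurements, can be sketched with a plain Gauss-Newton iteration in place of the paper's EKF (which additionally propagates uncertainty). Sensor layout, source position, and wave speed below are hypothetical, and the emission time is assumed known:

```python
import numpy as np

# Hypothetical sensor layout (m) on a plate, true AE source, and wave speed.
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])
true_src, true_v = np.array([0.31, 0.22]), 5000.0          # m, m/s
tof = np.linalg.norm(sensors - true_src, axis=1) / true_v  # noiseless TOFs

# State [x, y, v]; measurement model h_i = ||(x, y) - s_i|| / v.
x = np.array([0.25, 0.25, 4000.0])  # initial guess
for _ in range(50):
    d = np.linalg.norm(sensors - x[:2], axis=1)
    h = d / x[2]
    # Jacobian of h with respect to [x, y, v]
    J = np.column_stack([(x[:2] - sensors) / (d[:, None] * x[2]),
                         -d / x[2] ** 2])
    step, *_ = np.linalg.lstsq(J, tof - h, rcond=None)
    x = x + step
```

The EKF version carries a covariance and a TOF noise model through the same linearization, which is what lets it fuse multi-sensor data with uncertainty weights.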
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the previous iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
Novel applications of locally sourced montmorillonite (MMT) clay as ...
African Journals Online (AJOL)
This work explores the application of a locally sourced raw material, montmorillonite (MMT) clay, as a disintegrant in the formulation of an analgesic pharmaceutical product - paracetamol. The raw MMT was refined and treated with 0.1 M NaCl to yield sodium montmorillonite (NaMMT) and the powder properties established in ...
MEG source localization using invariance of noise space.
Directory of Open Access Journals (Sweden)
Junpeng Zhang
We propose INvariance of Noise (INN) space as a novel method for source localization of magnetoencephalography (MEG) data. The method is based on the fact that modulations of source strengths across time change the energy in the signal subspace but leave the noise subspace invariant. We compare INN with classical MUSIC, RAP-MUSIC, and beamformer approaches using simulated data while varying signal-to-noise ratios as well as distance and temporal correlation between two sources. We also demonstrate the utility of INN with actual auditory evoked MEG responses in eight subjects. In all cases, INN performed well, especially when the sources were closely spaced, highly correlated, or one source was considerably stronger than the other.
Reality Check Algorithm for Complex Sources in Early Warning
Karakus, G.; Heaton, T. H.
2013-12-01
In almost all currently operating earthquake early warning (EEW) systems, presently available seismic data are used to predict future shaking; in most cases, location and magnitude are estimated. We are developing an algorithm to test the goodness of that prediction in real time. We monitor envelopes of acceleration, velocity, and displacement; if they deviate significantly from the envelopes predicted by Cua's envelope ground-motion prediction equations, we declare either an overfit (perhaps a false alarm) or an underfit (possibly a larger event has just occurred). The algorithm is designed to provide a robust measure and to work as quickly as possible in real time. We monitor the logarithm of the ratio between the envelopes of the ongoing observed event and the envelopes predicted by the Virtual Seismologist (VS) (Cua, G. and Heaton, T.) for each channel of ground motion. We then recursively filter this ratio with a simple running median (a de-spiking operator) to minimize the effect of any single high value. Based on the filtered value we make a decision: if it is large enough (e.g., >1), we declare that a larger event is in progress; if it is small enough (e.g., <-1), we declare a false alarm. We design the algorithm to work over a wide range of amplitude scales; that is, it should work for both small and large events.
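The log-ratio, running-median, and threshold steps can be sketched as follows (envelope values, window size, and thresholds are illustrative, not the system's actual parameters):

```python
import numpy as np

# Hypothetical envelope samples: predicted (from the envelope model) vs observed.
predicted = np.ones(20)
observed = np.ones(20)
observed[7] = 50.0                      # one spurious spike in the observation

ratio = np.log10(observed / predicted)  # log envelope ratio

def running_median(x, half_width=2):
    """De-spiking filter: sliding-window median, robust to a single high value."""
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - half_width), min(len(x), i + half_width + 1)
        out[i] = np.median(x[lo:hi])
    return out

def decide(value, upper=1.0, lower=-1.0):
    """Threshold decision on the filtered log ratio."""
    if value > upper:
        return "larger event in progress"
    if value < lower:
        return "false alarm"
    return "prediction consistent"
```

Here the raw ratio would trigger a "larger event" declaration on the single spike, while the median-filtered ratio correctly stays in the consistent band.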
Location of an electric source facility and local area promotion
International Nuclear Information System (INIS)
Shimohirao, Isao
1999-01-01
This report describes energy demand and supply, energy policy, and local area promotion policy as basic problems important to the siting of electric power source facilities. At present, cooperative business between the electricity industry and the areas hosting power source facilities lacks activity. It appears necessary to introduce systems that promote such cooperation in earnest, and to work at industrial promotion measures such as the introduction of national projects and the use of reduced electricity costs as a means of attracting business. These efforts should be promoted in cooperation among electricity businesses, governments, universities, and communities, so as to foster local industry and retain young people in local areas. Realizing this will require still greater efforts from national and local governments. (G.K.)
A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.
Li, Bing; Cui, Wei; Wang, Bin
2015-09-16
Localization algorithms based on received signal strength indication (RSSI) are widely used in target localization due to their convenience and independence from dedicated hardware. Unfortunately, RSSI values are prone to fluctuation under non-line-of-sight (NLOS) conditions in indoor spaces. Existing algorithms often produce unreliable distance estimates, leading to low accuracy and low effectiveness in indoor target localization; moreover, these approaches require extra prior knowledge about the propagation model. We therefore focus on localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian mixture model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixture model (GMM), and a dissimilarity matrix is built to generate relative coordinates of the nodes via a multidimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates are computed via coordinate transformation. The algorithm performs localization well without requiring prior knowledge. Experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
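The MDS and coordinate-transformation steps of such a pipeline can be sketched with classical MDS plus a Procrustes alignment onto the anchors (anchor layout and noiseless distances are hypothetical; GMDS itself uses GMM-processed RSSI and non-metric MDS):

```python
import numpy as np

# Hypothetical setup: three anchors with known positions plus one target.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
target = np.array([1.0, 1.0])
pts = np.vstack([anchors, target])

# Pairwise distances stand in for range estimates derived from filtered RSSI.
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

# Classical MDS: double-center the squared distances, then eigen-decompose.
n = len(pts)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)                            # ascending eigenvalues
rel = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))  # relative 2-D coordinates

# Coordinate transformation: orthogonal Procrustes maps the relative anchor
# coordinates onto their true positions (rotation/reflection + translation).
A_rel = rel[:3]
mu_r, mu_t = A_rel.mean(0), anchors.mean(0)
U, _, Vt = np.linalg.svd((A_rel - mu_r).T @ (anchors - mu_t))
R = U @ Vt
est = (rel - mu_r) @ R + mu_t  # row 3 is the target's absolute position
```

With exact Euclidean distances the recovery is exact; with GMM-filtered RSSI ranges the same transform is applied to a noisy MDS embedding.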
Van den Bogaert, Tim; Doclo, Simon; Wouters, Jan; Moonen, Marc
2008-07-01
This paper evaluates the influence of three multimicrophone noise reduction algorithms on the ability to localize sound sources. Two recently developed noise reduction techniques for binaural hearing aids were evaluated, namely, the binaural multichannel Wiener filter (MWF) and the binaural multichannel Wiener filter with partial noise estimate (MWF-N), together with a dual-monaural adaptive directional microphone (ADM), which is a widely used noise reduction approach in commercial hearing aids. The influence of the different algorithms on perceived sound source localization and their noise reduction performance was evaluated. It is shown that noise reduction algorithms can have a large influence on localization and that (a) the ADM only preserves localization in the forward direction over azimuths where limited or no noise reduction is obtained; (b) the MWF preserves localization of the target speech component but may distort localization of the noise component. The latter is dependent on signal-to-noise ratio and masking effects; (c) the MWF-N enables correct localization of both the speech and the noise components; (d) the statistical Wiener filter approach introduces a better combination of sound source localization and noise reduction performance than the ADM approach.
Iterative algorithm for joint zero diagonalization with application in blind source separation.
Zhang, Wei-Tao; Lou, Shun-Tian
2011-07-01
A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.
Directory of Open Access Journals (Sweden)
Kae Y. Foo
2010-01-01
The task of localizing underwater assets involves the relative localization of each unit using only pairwise distance measurements, usually obtained from time-of-arrival or time-delay-of-arrival measurements. In the fluctuating underwater environment, a complete set of pair-wise distance measurements can often be difficult to acquire, thus hindering a straightforward closed-form solution in deriving the assets' relative coordinates. An iterative multidimensional scaling approach is presented based upon a weighted-majorization algorithm that tolerates missing or inaccurate distance measurements. Substantial modifications are proposed to optimize the algorithm, while the effects of refractive propagation paths are considered. A parametric study of the algorithm based upon simulation results is shown. An acoustic field-trial was then carried out, presenting field measurements to highlight the practical implementation of this algorithm.
A Sustainable City Planning Algorithm Based on TLBO and Local Search
Zhang, Ke; Lin, Li; Huang, Xuanxuan; Liu, Yiming; Zhang, Yonggang
2017-09-01
Nowadays, how to design more sustainable cities has become a central problem in social development, and it provides a broad stage for the application of artificial intelligence theories and methods. Because sustainable city design is essentially a constrained optimization problem, extensively studied swarm intelligence algorithms are natural candidates for solving it. TLBO (Teaching-Learning-Based Optimization) is a recent swarm intelligence algorithm inspired by the “teaching” and “learning” behavior of a classroom: the evolution of the population is realized by simulating the teacher's “teaching” and the students “learning” from each other. It has few parameters, is efficient, conceptually simple, and easy to implement. It has been successfully applied to scheduling, planning, configuration, and other fields, achieving good results and drawing increasing attention from artificial intelligence researchers. Based on the classical TLBO algorithm, we propose TLBO_LS, a TLBO algorithm combined with local search. We design and implement a random generation algorithm and an evaluation model for the urban planning problem. Experiments on small and medium-sized randomly generated problems show that the proposed algorithm has clear advantages over the DE algorithm and the classical TLBO algorithm in terms of convergence speed and solution quality.
Directory of Open Access Journals (Sweden)
Orhan TÜRKBEY
2002-02-01
Memetic algorithms are hybrid evolutionary algorithms which, like genetic algorithms, evolve a population, but additionally use local search techniques. In this study, a memetic algorithm using a 2-opt local search heuristic is developed for the Quadratic Assignment Problem (QAP). The algorithm applies a crossover operator that has not previously been used for the QAP, and the Eshelman procedure is used in order to increase solution variability. The developed memetic algorithm is applied to test problems taken from QAPLIB, and the results are compared with existing techniques in the literature.
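A 2-opt style pairwise-swap local search for the QAP, of the kind embedded in such memetic algorithms, might look as follows (the 4-facility instance is a made-up illustration):

```python
import itertools
import numpy as np

def qap_cost(perm, F, D):
    """QAP cost of assigning facility i to location perm[i]:
    sum over i, j of F[i, j] * D[perm[i], perm[j]]."""
    p = np.asarray(perm)
    return int((F * D[p][:, p]).sum())

def two_opt(perm, F, D):
    """Pairwise-swap local search: keep applying improving swaps of two
    assignments until no swap reduces the cost (a 2-opt local optimum)."""
    perm = list(perm)
    best = qap_cost(perm, F, D)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(perm)), 2):
            perm[i], perm[j] = perm[j], perm[i]
            c = qap_cost(perm, F, D)
            if c < best:
                best, improved = c, True
            else:
                perm[i], perm[j] = perm[j], perm[i]  # undo non-improving swap
    return perm, best

# Hypothetical 4-facility instance: locations on a line, one heavy-flow pair.
D = np.abs(np.arange(4)[:, None] - np.arange(4)[None, :]).astype(float)
F = np.ones((4, 4)) - np.eye(4)
F[0, 1] = F[1, 0] = 10.0
perm, cost = two_opt([0, 3, 1, 2], F, D)
```

On this instance the search places the heavy-flow facilities 0 and 1 on adjacent locations; in a memetic algorithm this descent would refine each offspring produced by crossover.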
Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding
Directory of Open Access Journals (Sweden)
Yongjian Nian
2013-01-01
A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
DATA SECURITY IN LOCAL AREA NETWORK BASED ON FAST ENCRYPTION ALGORITHM
Directory of Open Access Journals (Sweden)
G. Ramesh
2010-06-01
Hacking is one of the greatest problems in wireless local area networks. Many algorithms have been used to prevent outside attackers from eavesdropping and to ensure that data are transferred to the end-user safely and correctly. In this paper, a new symmetric encryption algorithm is proposed that prevents outside attacks. The new algorithm avoids key exchange between users and reduces the time taken for encryption and decryption. It operates at a high data rate in comparison with the Data Encryption Standard (DES), Triple DES (TDES), Advanced Encryption Standard (AES-256), and RC6 algorithms. The new algorithm was applied successfully to both a text file and a voice message.
Random noise suppression of seismic data using non-local Bayes algorithm
Chang, De-Kuan; Yang, Wu-Yang; Wang, Yi-Hui; Yang, Qing; Wei, Xin-Jian; Feng, Xiao-Ying
2018-02-01
For random noise suppression of seismic data, we present a non-local Bayes (NL-Bayes) filtering algorithm. The NL-Bayes algorithm uses a Gaussian model instead of the weighted average of all similar patches used in the NL-means algorithm, reducing the blurring of structural details and thereby improving denoising performance. In the denoising of seismic data, the size and the number of patches in the Gaussian model are adaptively calculated according to the standard deviation of the noise. The NL-Bayes algorithm requires two iterations to complete seismic data denoising; the second iteration uses the denoised data from the first to calculate better means and covariances for the patch Gaussian model, improving the similarity of patches and achieving the denoising purpose. Tests with synthetic and real data sets demonstrate that the NL-Bayes algorithm can effectively improve the SNR and preserve the fidelity of the seismic data.
Directory of Open Access Journals (Sweden)
Ali Wagdy Mohamed
2014-11-01
In this paper, a novel version of the Differential Evolution (DE) algorithm based on a couple of local search mutations and a restart mechanism for solving global numerical optimization problems over continuous space is presented. The proposed algorithm is named Restart Differential Evolution with Local Search Mutation (RDEL). In RDEL, inspired by Particle Swarm Optimization (PSO), a novel local mutation rule based on the positions of the best and the worst individuals in the population of a particular generation is introduced. The novel local mutation scheme is joined with the basic mutation rule through a linearly decreasing function. The proposed local mutation scheme is shown to enhance the local search tendency of basic DE and to speed up convergence. Furthermore, a restart mechanism based on a random mutation scheme and a modified Breeder Genetic Algorithm (BGA) mutation scheme is incorporated to avoid stagnation and/or premature convergence. Additionally, an exponentially increasing crossover probability rule and uniform scaling factors are introduced to promote the diversity of the population and to improve the search process, respectively. The performance of RDEL is investigated and compared with basic differential evolution and state-of-the-art parameter-adaptive differential evolution variants. The proposed modifications are found to significantly improve the performance of DE in terms of solution quality, efficiency, and robustness.
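A rough sketch of mixing a basic DE mutation with a best/worst-guided local mutation is shown below. This is loosely inspired by RDEL, not the paper's exact update rules (no restart mechanism, fixed crossover probability), and all parameter values and the toy objective are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def de_local(f, dim=5, pop_size=20, gens=300, F=0.5, CR=0.9, bound=5.0):
    """DE with rand/1 mutation half the time and a best/worst-guided
    local mutation the other half (simplified RDEL-like scheme)."""
    pop = rng.uniform(-bound, bound, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        b, w = int(np.argmin(fit)), int(np.argmax(fit))
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(
                [j for j in range(pop_size) if j != i], 3, replace=False)
            if rng.random() < 0.5:
                v = pop[r1] + F * (pop[r2] - pop[r3])   # basic rand/1 mutation
            else:
                # local mutation: move toward the best, away from the worst
                v = pop[i] + F * (pop[b] - pop[i]) + F * (pop[i] - pop[w])
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # ensure one gene crosses
            trial = np.where(cross, v, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                            # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[int(np.argmin(fit))], float(np.min(fit))

best_x, best_val = de_local(sphere)
```

The best/worst-guided term pulls trial vectors toward promising regions, which is the local-search tendency the paper's scheme is designed to strengthen.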
Vector-Sensor MUSIC for Polarized Seismic Sources Localization
Directory of Open Access Journals (Sweden)
Jérôme I. Mars
2005-01-01
This paper addresses the problem of high-resolution polarized source detection and introduces a new eigenstructure-based algorithm that yields direction-of-arrival (DOA) and polarization estimates using a vector-sensor (or multicomponent-sensor) array. The method is based on separation of the observation space into signal and noise subspaces using fourth-order tensor decomposition. In geophysics, in particular for reservoir acquisition and monitoring, a set of Nx multicomponent sensors is laid on the ground with constant distance Δx between them. Such a data acquisition scheme has intrinsically three modes: time, distance, and components. The proposed method uses multilinear algebra in order to preserve the data structure and avoid reorganization; the data are thus stored in three-dimensional arrays rather than matrices. Higher-order eigenvalue decomposition (HOEVD) of fourth-order tensors is considered to estimate the subspaces and compute the eigenelements. We propose a tensorial version of the MUSIC algorithm for a vector-sensor array, allowing joint estimation of DOA and signal polarization. The performance of the proposed algorithm is evaluated.
A Local Search Algorithm for the Flow Shop Scheduling Problem with Release Dates
Directory of Open Access Journals (Sweden)
Tao Ren
2015-01-01
This paper discusses the flow shop scheduling problem of minimizing the makespan with release dates. By resequencing the jobs, a modified heuristic algorithm is obtained for handling large-sized problems. Moreover, based on some structural properties, a local search scheme is provided to improve the heuristic and gain high-quality solutions for moderate-sized problems. A sequence-independent lower bound is presented to evaluate the performance of the algorithms. A series of simulation results demonstrates the effectiveness of the proposed algorithms.
Hitting times of local and global optima in genetic algorithms with very high selection pressure
Directory of Open Access Journals (Sweden)
Eremeev Anton V.
2017-01-01
The paper is devoted to upper bounds on the expected first hitting times of the sets of local or global optima for non-elitist genetic algorithms with very high selection pressure. The results of this paper extend the range of situations where the upper bounds on the expected runtime are known for genetic algorithms and apply, in particular, to the Canonical Genetic Algorithm. The obtained bounds do not require the probability of fitness-decreasing mutation to be bounded by a constant which is less than one.
Localization of sources of the hyperinsulinism through the image methods
International Nuclear Information System (INIS)
Abath, C.G.A.
1990-01-01
Pancreatic insulinomas are small tumours, manifested early by high hormonal production. Microscopic changes, like islet cell hyperplasia or nesidioblastosis, are also sources of hyperinsulinism. Pre-operative localization of the lesions is important, avoiding unnecessary or insufficient blind pancreatectomies. We present experience with 26 patients with hyperinsulinism, of whom six were examined by ultrasound, nine by computed tomography, 25 by angiography and 16 by pancreatic venous sampling for hormone assay, in order to localize the lesions. Percutaneous transhepatic portal and pancreatic vein catheterization with measurement of insulin concentrations was the most reliable and sensitive method for detecting the lesions, including those non-palpable during surgical exploration (author)
Inversion of Atmospheric Tracer Measurements, Localization of Sources
Issartel, J.-P.; Cabrit, B.; Hourdin, F.; Idelkadi, A.
When abnormal concentrations of a pollutant are observed in the atmosphere, the question of their origin arises immediately. The radioactivity from Chernobyl was detected in Sweden before the accident was announced. This situation emphasizes the psychological, political and medical stakes of a rapid identification of sources. In technical terms, most industrial sources can be modeled as a fixed point at ground level with undetermined duration. The classical method of identification involves the calculation of a backtrajectory departing from the detector with an upstream integration of the wind field. We were first involved in such questions as we evaluated the efficiency of the international monitoring network planned in the frame of the Comprehensive Test Ban Treaty. We propose a new approach to backtracking based upon the use of retroplumes associated with available measurements. Firstly, the retroplume is related to inverse transport processes, describing quantitatively how the air in a sample originates from regions that are all the more extended and diffuse as we go back far in the past. Secondly, it clarifies the sensitivity of the measurement with respect to all potential sources. It is therefore calculated by adjoint equations, including of course diffusive processes. Thirdly, the statistical interpretation, valid as far as single particles are concerned, should not be used to investigate the position and date of a macroscopic source. In that case, the retroplume rather induces a straightforward constraint between the intensity of the source and its position. When more than one measurement is available, including zero-valued measurements, the source satisfies the same number of linear relations tightly related to the retroplumes. This system of linear relations can be handled through the simplex algorithm in order to make the above intensity-position correlation more restrictive. This method enables one to manage in a quantitative manner the
2MASS Extended Source Catalog: Overview and Algorithms
Jarrett, T.; Chester, T.; Cutri, R.; Schneider, S.; Skrutskie, M.; Huchra, J.
1999-01-01
The 2 Micron All-Sky Survey (2MASS) will observe over one million galaxies and extended Galactic sources covering the entire sky at wavelengths between 1 and 2 μm. Most of these galaxies, from 70 to 80%, will be newly catalogued objects.
AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM
International Nuclear Information System (INIS)
Farrell, Sean A.; Murphy, Tara; Lo, Kitty K.
2015-01-01
In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
Directory of Open Access Journals (Sweden)
Peter Brida
2013-01-01
Medical implants based on wireless communication will play a crucial role in healthcare systems. Some applications need to know the exact position of each implant, and RF positioning appears to be an effective approach for implant localization. The two most common types of positioning data used for RF positioning are the received signal strength and the time of flight of a radio signal between transmitter and receivers (the medical implant and a network of reference devices with known positions). This leads to two positioning methods: received signal strength (RSS) and time of arrival (ToA), both based on trilateration. The positioning data are very important, but so is the positioning algorithm that estimates the implant position. In this paper, a novel algorithm for trilateration is proposed. The proposed algorithm improves on basic trilateration algorithms given the same quality of measured positioning data. It is called the Enhanced Positioning Trilateration Algorithm (EPTA). The algorithm can be divided into two phases: the first selects the most suitable sensors for position estimation; the second improves positioning accuracy with an adaptive algorithm. Finally, we provide a performance analysis of the proposed algorithm by computer simulations.
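Basic trilateration, the starting point that EPTA improves on, can be sketched by linearizing the range equations (anchor positions and ranges below are hypothetical):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearized least-squares trilateration: subtracting the first anchor's
    circle equation ||x - a_i||^2 = d_i^2 from the others cancels ||x||^2
    and leaves a linear system in the unknown position x."""
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical reference devices and an implant at (3, 4).
anchors = [[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]]
target = np.array([3.0, 4.0])
dists = np.linalg.norm(np.asarray(anchors) - target, axis=1)
est = trilaterate(anchors, dists)
```

With noisy RSS- or ToA-derived ranges, the least-squares solve spreads the error across anchors; a sensor-selection stage like EPTA's first phase would choose which rows enter the system.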
Designing localized electromagnetic fields in a source-free space
International Nuclear Information System (INIS)
Borzdov, George N.
2002-01-01
An approach to characterizing and designing localized electromagnetic fields, based on the use of differentiable manifolds, differentiable mappings, and the rotation group, is presented. By way of illustration, novel families of exact time-harmonic solutions to Maxwell's equations in source-free space - localized fields defined by the rotation group - are obtained. The proposed approach provides a broad spectrum of tools to design localized fields, i.e., to build in symmetry properties of oscillating electric and magnetic fields, to govern the distributions of their energy densities (both the size and form of localization domains), and to set the structure of time-average energy fluxes. It is shown that localized fields can be combined as constructive elements to obtain a complex field structure with desirable properties, such as one-, two-, or three-dimensional field gratings. The proposed approach can be used in designing localized electromagnetic fields to govern the motion and state of charged and neutral particles. As an example, the motion of relativistic electrons in one-dimensional and three-dimensional field gratings is treated.
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Localizing gravitational wave sources with single-baseline atom interferometers
Graham, Peter W.; Jung, Sunghoon
2018-02-01
Localizing sources on the sky is crucial for realizing the full potential of gravitational waves for astronomy, astrophysics, and cosmology. We show that the midfrequency band, roughly 0.03 to 10 Hz, has significant potential for angular localization. The angular location is measured through the changing Doppler shift as the detector orbits the Sun. This band maximizes the effect since these are the highest frequencies in which sources live for several months. Atom interferometer detectors can observe in the midfrequency band, and even with just a single baseline they can exploit this effect for sensitive angular localization. The single-baseline orbits around the Earth and the Sun, causing it to reorient and change position significantly during the lifetime of the source, and making it similar to having multiple baselines/detectors. For example, atomic detectors could predict the location of upcoming black hole or neutron star merger events with sufficient accuracy to allow optical and other electromagnetic telescopes to observe these events simultaneously. Thus, midband atomic detectors are complementary to other gravitational wave detectors and will help complete the observation of a broad range of the gravitational spectrum.
A simple algorithm for estimation of source-to-detector distance in Compton imaging
International Nuclear Information System (INIS)
Rawool-Sullivan, Mohini W.; Sullivan, John P.; Tornga, Shawn R.; Brumby, Steven P.
2008-01-01
Compton imaging is used to predict the location of gamma-emitting radiation sources. The X and Y coordinates of the source can be obtained from a back-projected image with a two-dimensional peak-finding algorithm. The emphasis of this work is on estimating the source-to-detector distance (Z). The algorithm presented uses the solid angle subtended by the reconstructed image at various source-to-detector distances. This algorithm was validated using both measured data from the prototype Compton imager (PCI) constructed at Los Alamos National Laboratory and simulated data for the same imager. Results show that this method can be applied successfully to estimate Z, and it provides a way of determining Z without prior knowledge of the source location. This method is also faster than methods that employ maximum likelihood estimation, because it is based on simple back projections of Compton scatter data.
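The geometric idea behind the abstract, that the solid angle subtended by the reconstructed image encodes the distance, can be illustrated with a minimal sketch. The on-axis disc formula and all numbers below are hypothetical illustrations of the principle, not the PCI algorithm itself:

```python
import math

def disc_solid_angle(z, r):
    """Solid angle (sr) subtended at on-axis distance z by a disc of radius r."""
    return 2.0 * math.pi * (1.0 - z / math.hypot(z, r))

def distance_from_solid_angle(omega, r):
    """Invert the on-axis disc formula to recover the source-to-detector distance z."""
    c = 1.0 - omega / (2.0 * math.pi)     # c = z / sqrt(z^2 + r^2)
    return c * r / math.sqrt(1.0 - c * c)

# Round-trip check with hypothetical numbers: image radius 5 cm, true distance 40 cm.
omega = disc_solid_angle(40.0, 5.0)
z_est = distance_from_solid_angle(omega, 5.0)
```

Scanning candidate distances and comparing the implied solid angle against the back-projected image extent follows the same inversion.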
Fiber optic distributed temperature sensing for fire source localization
Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong
2017-08-01
A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fiber were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling, simulating a real fire scene. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors in predicting the exact room coordinates of the fire source, caused by the uneven ceiling, were observed. Two mathematical methods (smoothing the recorded temperature curves and finding temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
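The smoothing-plus-peak-finding step described above can be sketched as follows; the fiber layout, sample spacing, and synthetic temperature profiles are hypothetical, for illustration only:

```python
import numpy as np

def smooth(profile, window=5):
    """Moving-average smoothing to suppress fluctuations along the fiber."""
    kernel = np.ones(window) / window
    return np.convolve(profile, kernel, mode="same")

def locate_fire(temps_x, temps_y, spacing=0.5):
    """Estimate (x, y) of the hot spot from two orthogonal fiber profiles.

    temps_x / temps_y: temperature samples along the X- and Y-oriented fibers;
    spacing: distance (m) between samples along each fiber.
    """
    ix = int(np.argmax(smooth(temps_x)))
    iy = int(np.argmax(smooth(temps_y)))
    return ix * spacing, iy * spacing

# Synthetic profiles: a warm bump near x = 4 m, y = 6 m plus measurement noise.
rng = np.random.default_rng(0)
pos = np.arange(40) * 0.5
tx = 20 + 15 * np.exp(-((pos - 4.0) ** 2) / 2.0) + rng.normal(0, 0.3, 40)
ty = 20 + 15 * np.exp(-((pos - 6.0) ** 2) / 2.0) + rng.normal(0, 0.3, 40)
x_est, y_est = locate_fire(tx, ty)
```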
An efficient central DOA tracking algorithm for multiple incoherently distributed sources
Hassen, Sonia Ben; Samet, Abdelaziz
2015-12-01
In this paper, we develop a new tracking method for the direction-of-arrival (DOA) parameters of multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique exploiting the central and noncentral moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to a Kalman filter that models the dynamics of the directional changes of the moving sources. The covariance-fitting-based algorithm and Kalman filtering theory are then combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least squares-estimation of signal parameters via rotational invariance techniques (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method and subspace updating via the FAPI algorithm. It is shown that the proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. The performance of both methods improves as the SNR increases, more prominently so for FAPI-TLS-ESPRIT, and degrades as the number of sources increases. It is also shown that our method depends on the form of the angular distribution function when tracking the central DOAs, and that the more widely the sources are spaced, the more accurately the proposed method tracks the DOAs.
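The measurement-to-Kalman-filter coupling described above can be sketched for a single central DOA; the noise levels, drift rate, and constant-velocity state model below are hypothetical stand-ins, not the paper's covariance-fitting front end:

```python
import numpy as np

def track_doa(measurements, dt=1.0, q=1e-3, r=4.0):
    """Track a slowly moving central DOA with a constant-velocity Kalman filter.

    State x = [theta, theta_dot]; the noisy per-snapshot DOA estimates
    (e.g. from a covariance-fitting front end) act as the measurements.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe the angle only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise variance (deg^2)
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.array([z]) - H @ x) # update with the new DOA estimate
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# A source drifting from 10 deg to 20 deg, observed with 2-deg noise.
rng = np.random.default_rng(1)
truth = np.linspace(10.0, 20.0, 100)
meas = truth + rng.normal(0, 2.0, 100)
est = track_doa(meas)
```

The filtered track should show a lower error than the raw per-snapshot estimates.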
Luo, Junhai; Fu, Liang
2017-06-09
With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm comprises an offline information acquisition phase and an online positioning phase. First, the AP selection algorithm is reviewed and improved based on the stability of signals to remove unreliable APs; second, Kernel Principal Component Analysis (KPCA) is used to remove data redundancy while retaining useful characteristics via nonlinear feature extraction; third, the Affinity Propagation Clustering (APC) algorithm uses RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the test data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Finally, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves localization accuracy while reducing computational complexity.
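The offline/online fingerprinting split described above can be sketched in miniature. The fingerprint database, AP count, and Gaussian noise model are hypothetical illustrations; the paper's KPCA and APC stages are omitted for brevity:

```python
import numpy as np

# Offline phase: hypothetical fingerprint database of mean RSS vectors (dBm)
# recorded at known reference positions from three APs.
fingerprints = {
    (0.0, 0.0): np.array([-40.0, -70.0, -65.0]),
    (5.0, 0.0): np.array([-55.0, -52.0, -70.0]),
    (0.0, 5.0): np.array([-58.0, -72.0, -48.0]),
    (5.0, 5.0): np.array([-66.0, -55.0, -50.0]),
}

def ml_position(rss, sigma=4.0):
    """Online phase: pick the reference position maximizing a Gaussian
    likelihood of the observed RSS vector (equivalently, minimizing the
    Euclidean distance in signal space). A minimal stand-in for the
    cluster-then-ML step."""
    best, best_ll = None, -np.inf
    for pos, fp in fingerprints.items():
        ll = -np.sum((rss - fp) ** 2) / (2 * sigma ** 2)  # log-likelihood up to a constant
        if ll > best_ll:
            best, best_ll = pos, ll
    return best

observed = np.array([-54.0, -53.0, -68.0])   # noisy reading taken near (5, 0)
```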
Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications
DEFF Research Database (Denmark)
Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua
2015-01-01
Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise-free sound signal of the target talker via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...
Iterative Object Localization Algorithm Using Visual Images with a Reference Coordinate
Directory of Open Access Journals (Sweden)
We-Duke Cho
2008-09-01
We present a simplified algorithm for localizing an object using multiple visual images obtained from widely used digital imaging devices. We use a parallel projection model which supports both zooming and panning of the imaging devices. Our proposed algorithm is based on a virtual viewable plane that creates a relationship between an object position and a reference coordinate. The reference point is obtained from a rough estimate, which may come from a pre-estimation process. The algorithm minimizes localization error through an iterative process with relatively low computational complexity. In addition, nonlinear distortion of the digital imaging devices is compensated during the iterative process. Finally, performance is evaluated and analyzed for several scenarios in both indoor and outdoor environments.
Non-fragile consensus algorithms for a network of diffusion PDEs with boundary local interaction
Xiong, Jun; Li, Junmin
2017-07-01
In this study, a non-fragile consensus algorithm is proposed to solve the average consensus problem for a network of diffusion PDEs, modelled by boundary-controlled heat equations. The problem deals with the case where the Neumann-type boundary controllers are corrupted by additive persistent disturbances. To achieve consensus between agents, a linear local interaction rule addressing this requirement is given. The proposed local interaction rules are analysed by applying a Lyapunov-based approach. Multiplicative and additive non-fragile feedback control algorithms are designed, and sufficient conditions for consensus of the multi-agent systems are presented in terms of linear matrix inequalities. Simulation results are presented to support the effectiveness of the proposed algorithms.
Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments
Directory of Open Access Journals (Sweden)
Gianluca Gennarelli
2017-10-01
Indoor positioning of mobile devices plays a key role in many aspects of our daily life, including real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning remains a subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.
Directory of Open Access Journals (Sweden)
Wei Ke
2017-01-01
In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT) based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and to reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match the changes in the audio signals, so that the sparse solution better represents the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0-norm minimization to enhance reconstruction performance for sparse signals under low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation and experimental results showing substantial improvement in localization performance under noisy and reverberant conditions.
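The sparse-recovery formulation, a few active locations on a dense candidate grid, can be illustrated with a greedy solver. Orthogonal matching pursuit below is a simple stand-in for the paper's approximate l0-norm block reconstruction, and the random dictionary is a toy compressive-sensing setup, not an acoustic one:

```python
import numpy as np

def omp(A, y, k):
    """Greedy sparse recovery (orthogonal matching pursuit): pick the k
    dictionary columns that best explain the measurements y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching grid point
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # re-fit and update residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# 100 candidate source locations, 50 random measurements, two active sources.
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 100))
A /= np.linalg.norm(A, axis=0)                       # unit-norm dictionary columns
x_true = np.zeros(100)
x_true[[17, 64]] = [1.5, 2.0]
y = A @ x_true
x_hat = omp(A, y, k=2)
```

In the noiseless case the two active grid indices, i.e. the source locations, are recovered exactly.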
DEFF Research Database (Denmark)
Neumann, Frank; Witt, Carsten
2015-01-01
combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very...
Directory of Open Access Journals (Sweden)
R. Rajakumar
2017-01-01
Seyedali Mirjalili et al. (2014) introduced a unique metaheuristic technique, grey wolf optimization (GWO). This algorithm mimics the social behaviour of grey wolves, following their leadership hierarchy and attacking strategy. A rising issue in wireless sensor networks (WSNs) is the localization problem: determining the geographical positions of unknown nodes with the help of anchor nodes. In this work, the GWO algorithm is applied to identify the correct positions of unknown nodes and thus handle the node localization problem. The proposed work is implemented using MATLAB 8.2, with nodes deployed at random locations within the desired network area. Measures such as computation time, percentage of localized nodes, and minimum localization error are used to analyse the efficiency of GWO against other metaheuristic algorithms, namely particle swarm optimization (PSO) and the modified bat algorithm (MBA). The observed results show that GWO provides promising results compared to PSO and MBA in terms of convergence rate and success rate.
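Node localization as GWO-driven optimization can be sketched as follows: minimize the squared range residuals to the anchors. The anchor layout, swarm size, and noiseless ranges are hypothetical illustrations, not the paper's MATLAB setup:

```python
import numpy as np

def gwo(fitness, dim, n_wolves=20, iters=200, lo=0.0, hi=10.0, seed=3):
    """Minimal grey wolf optimizer: wolves move toward the three best
    solutions (alpha, beta, delta) while coefficient a decays from 2 to 0."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        order = np.argsort([fitness(w) for w in wolves])
        alpha, beta, delta = wolves[order[:3]]
        a = 2.0 * (1 - t / iters)                 # exploration -> exploitation
        for i in range(n_wolves):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                x_new += leader - A * D           # move relative to this leader
            wolves[i] = np.clip(x_new / 3.0, lo, hi)
    order = np.argsort([fitness(w) for w in wolves])
    return wolves[order[0]]

# Hypothetical scenario: three anchors, one unknown node at (3, 4),
# noiseless anchor-to-node ranges.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
node = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - node, axis=1)

def residual(p):
    """Squared mismatch between measured and hypothesized ranges."""
    return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

est = gwo(residual, dim=2)
```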
Trilateration-based localization algorithm for ADS-B radar systems
Huang, Ming-Shih
Rapidly increasing growth and demand in various unmanned aerial vehicles (UAVs) have pushed governmental regulation development and numerous technology research advances toward integrating unmanned and manned aircraft into the same civil airspace. Safety of other airspace users is the primary concern; thus, with the introduction of UAVs into the National Airspace System (NAS), a key issue to overcome is the risk of a collision with manned aircraft. The challenge of UAV integration is global. As the automatic dependent surveillance-broadcast (ADS-B) system has gained wide acceptance, additional exploitations of the radioed satellite-based information are topics of current interest. One such opportunity includes the augmentation of the communication ADS-B signal with a random bi-phase modulation for concurrent use as a radar signal for detecting other aircraft in the vicinity. This dissertation provides a detailed discussion of the ADS-B radar system, as well as the formulation and analysis of a suitable non-cooperative multi-target tracking method for the ADS-B radar system using radar ranging techniques and particle filter algorithms. In order to deal with specific challenges faced by the ADS-B radar system, several estimation algorithms are studied. Trilateration-based localization algorithms are proposed due to their easy implementation and their ability to work with coherent signal sources. The centroid of the three most closely spaced intersections of constant-range loci is conventionally used as the trilateration estimate without rigorous justification. In this dissertation, we address the quality of trilateration intersections through range scaling factors. A number of well-known triangle centers, including the centroid, incenter, Lemoine point (LP), and Fermat point (FP), are discussed in detail. To the author's best knowledge, LP was never previously associated with trilateration techniques. According to our study, LP is proposed as the best trilateration estimator thanks to the
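The conventional estimate mentioned above, the centroid of the three most closely spaced intersections of constant-range circles, can be sketched directly. The station positions and range-error factors below are hypothetical numbers for illustration:

```python
import itertools
import math

def circle_intersections(c0, r0, c1, r1):
    """Intersection points of two circles (empty list if they do not meet)."""
    (x0, y0), (x1, y1) = c0, c1
    d = math.hypot(x1 - x0, y1 - y0)
    if d > r0 + r1 or d < abs(r0 - r1) or d == 0:
        return []
    a = (r0**2 - r1**2 + d**2) / (2 * d)     # distance from c0 to the chord midpoint
    h = math.sqrt(max(r0**2 - a**2, 0.0))
    xm, ym = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    off = (h * (y1 - y0) / d, h * (x1 - x0) / d)
    return [(xm + off[0], ym - off[1]), (xm - off[0], ym + off[1])]

def trilaterate_centroid(centers, radii):
    """Centroid of the three most closely spaced pairwise intersections."""
    pts = [circle_intersections(centers[i], radii[i], centers[j], radii[j])
           for i, j in itertools.combinations(range(3), 2)]
    best, best_spread = None, float("inf")
    for triple in itertools.product(*pts):   # one point per circle pair
        spread = sum(math.dist(p, q) for p, q in itertools.combinations(triple, 2))
        if spread < best_spread:
            best, best_spread = triple, spread
    return (sum(p[0] for p in best) / 3, sum(p[1] for p in best) / 3)

# Slightly noisy ranges from three stations to a target at (3, 2).
centers = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
target = (3.0, 2.0)
radii = [math.dist(c, target) * e for c, e in zip(centers, [1.02, 0.99, 1.01])]
est = trilaterate_centroid(centers, radii)
```

The alternative triangle centers discussed in the dissertation (incenter, Lemoine point, Fermat point) would replace the final centroid step.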
Directory of Open Access Journals (Sweden)
Kriangkrai Maneerat
2016-01-01
One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause values of received signal strength (RSS) to be missing during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either the fault-free scenario or RN-failure scenarios. The proposed fault-tolerant floor algorithm is based on the mean of the summation of the strongest RSSs obtained from IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with those of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperforms the other floor algorithms and achieves the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm achieves greater than 95% correct floor determination under the scenario in which 40% of RNs have failed.
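The mean-of-strongest-RSSs idea tolerates missing readings naturally, since a failed RN simply contributes nothing. A minimal sketch, with hypothetical floor data and a hypothetical choice of k:

```python
def robust_floor(rss_by_floor, k=3):
    """Pick the floor whose k strongest (least negative) RSS readings have
    the highest mean; readings lost to failed reference nodes are simply
    absent from the per-floor lists."""
    def score(readings):
        top = sorted(readings, reverse=True)[:k]
        return sum(top) / len(top)
    return max(rss_by_floor, key=lambda f: score(rss_by_floor[f]))

# Hypothetical online-phase readings (dBm); one of floor 2's RNs has failed,
# so its list is one value short.
readings = {
    1: [-72, -80, -85, -90],
    2: [-55, -60, -63],
    3: [-68, -74, -79, -88],
}
```

Here the mobile object is correctly assigned to floor 2 despite the missing reading.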
Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio
2005-10-01
This paper proposes an alternative approach to enhance the localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, the initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the number and initial locations of the activations; (3) as the locations of the dipoles are restricted to a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance the accuracy of focal source localization, which is essential in many clinical and neurological applications of MEG and EEG.
Wang, Rongxiao; Chen, B.; Qiu, S.; Ma, Liang; Zhu, Zhengqiu; Wang, Yiping; Qiu, Xiaogang
2018-01-01
Locating and quantifying the emission source plays a significant role in the emergency management of hazardous gas leak accidents. Due to the lack of a desirable atmospheric dispersion model, current source estimation algorithms cannot meet the requirements of both accuracy and efficiency. In
A comparison of optimization algorithms for localized in vivo B0 shimming.
Nassirpour, Sahar; Chang, Paul; Fillmer, Ariane; Henning, Anke
2018-02-01
To compare several different optimization algorithms currently used for localized in vivo B0 shimming, and to introduce a novel, fast, and robust constrained regularized algorithm (ConsTru) for this purpose. Ten different optimization algorithms (including samples from both generic and dedicated least-squares solvers, and a novel constrained regularized inversion method) were implemented and compared for shimming in five different shimming volumes on 66 in vivo data sets from both 7 T and 9.4 T. The best algorithm was chosen to perform single-voxel spectroscopy at 9.4 T in the frontal cortex of the brain on 10 volunteers. The performance tests showed that a shimming algorithm is prone to unstable solutions if it depends on the value of a starting point and is not regularized to handle ill-conditioned problems. The ConsTru algorithm proved to be the most robust, fast, and efficient of the chosen algorithms, and enabled acquisition of spectra of reproducibly high quality in the frontal cortex at 9.4 T. For localized in vivo B0 shimming, the use of a dedicated linear least-squares solver instead of a generic nonlinear one is highly recommended. Among all of the linear solvers, the constrained regularized method (ConsTru) was found to be both fast and robust. Magn Reson Med 79:1145-1156, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
A Mobile Anchor Assisted Localization Algorithm Based on Regular Hexagon in Wireless Sensor Networks
Rodrigues, Joel J. P. C.
2014-01-01
Localization is one of the key technologies in wireless sensor networks (WSNs), since it provides fundamental support for many location-aware protocols and applications. Constraints of cost and power consumption make it infeasible to equip each sensor node in the network with a global positioning system (GPS) unit, especially for large-scale WSNs. A promising method to localize unknown nodes is to use several mobile anchors which are equipped with GPS units, moving among unknown nodes and periodically broadcasting their current locations to help nearby unknown nodes with localization. This paper proposes a mobile anchor assisted localization algorithm based on regular hexagon (MAALRH) in two-dimensional WSNs, which can cover the whole monitoring area with a boundary compensation method. Unknown nodes calculate their positions by trilateration. We compare the MAALRH with the HILBERT, CIRCLES, and S-CURVES algorithms in terms of localization ratio, localization accuracy, and path length. Simulations show that the MAALRH can achieve high localization ratio and localization accuracy when the communication range is not smaller than the trajectory resolution. PMID:25133212
Multi-hop localization algorithm based on grid-scanning for wireless sensor networks.
Wan, Jiangwen; Guo, Xiaolei; Yu, Ning; Wu, Yinfeng; Feng, Renjian
2011-01-01
For large-scale wireless sensor networks (WSNs) with a minority of anchor nodes, multi-hop localization is a popular scheme for determining the geographical positions of the normal nodes. However, in practice existing multi-hop localization methods suffer from various problems, such as poor adaptability to irregular topology, high computational complexity, and low positioning accuracy. To address these issues, in this paper we propose a novel Multi-hop Localization algorithm based on Grid-Scanning (MLGS). First, the factors that influence multi-hop distance estimation are studied and a more realistic multi-hop localization model is constructed. Then, the feasible regions of the normal nodes are determined from the intersection of bounding square rings. Finally, a verifiably good approximation scheme based on grid-scanning is developed to estimate the coordinates of the normal nodes. Additionally, the positioning accuracy of the normal nodes can be improved through neighbors' collaboration. Extensive simulations are performed in isotropic and anisotropic networks. Comparisons with some typical node localization algorithms confirm the effectiveness and efficiency of our algorithm.
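The grid-scanning step, intersect the anchors' bounding square rings and take the centroid of the surviving grid points, can be sketched as below. The anchor positions, per-hop distance bounds, and use of a Chebyshev-distance band as the "square ring" are hypothetical simplifications:

```python
import numpy as np

def mlgs_estimate(anchors, bounds, grid_step=0.25, size=10.0):
    """Scan a grid, keep points inside every anchor's bounding square ring
    (an inner/outer Chebyshev-distance band standing in for the per-hop
    distance bounds), and return the centroid of the feasible region."""
    xs = np.arange(0.0, size + 1e-9, grid_step)
    pts = np.array([(x, y) for x in xs for y in xs])
    feasible = np.ones(len(pts), dtype=bool)
    for (ax, ay), (lo, hi) in zip(anchors, bounds):
        cheb = np.maximum(np.abs(pts[:, 0] - ax), np.abs(pts[:, 1] - ay))
        feasible &= (cheb >= lo) & (cheb <= hi)   # inside this anchor's ring
    return pts[feasible].mean(axis=0)

# Hypothetical hop-derived bounds around a normal node actually at (4, 5).
anchors = [(0.0, 0.0), (9.0, 1.0), (2.0, 9.0)]
bounds = [(4.0, 6.0), (4.0, 6.0), (3.0, 5.0)]
est = mlgs_estimate(anchors, bounds)
```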
Gas source localization and gas distribution mapping with a micro-drone
Energy Technology Data Exchange (ETDEWEB)
Neumann, Patrick P.
2013-07-01
The objective of this Ph.D. thesis is the development and validation of a VTOL-based (Vertical Take Off and Landing) micro-drone for the measurement of gas concentrations, to locate gas emission sources, and to build gas distribution maps. Gas distribution mapping and localization of a static gas source are complex tasks due to the turbulent nature of gas transport under natural conditions and become even more challenging when airborne. This is especially so when using a VTOL-based micro-drone that induces disturbances through its rotors, which heavily affect the gas distribution. Besides the adaptation of a micro-drone for gas concentration measurements, a novel method for the determination of the wind vector in real-time is presented. The on-board sensors for the flight control of the micro-drone provide a basis for the wind vector calculation. Furthermore, robot operating software for controlling the micro-drone autonomously is developed and used to validate the algorithms developed within this Ph.D. thesis in simulations and real-world experiments. Three biologically inspired algorithms for locating gas sources are adapted and developed for use with the micro-drone: the surge-cast algorithm (a variant of the silkworm moth algorithm), the zigzag / dung beetle algorithm, and a newly developed algorithm called "pseudo gradient algorithm". The latter extracts from two spatially separated measuring positions the information necessary (concentration gradient and mean wind direction) to follow a gas plume to its emission source. The performance of the algorithms is evaluated in simulations and real-world experiments. The distance overhead and the gas source localization success rate are used as the main performance criteria for comparing the algorithms. Next, a new method for gas source localization (GSL) based on a particle filter (PF) is presented. Each particle represents a weighted hypothesis of the gas source position. As a first step, the PF
Gas Source Localization via Behaviour Based Mobile Robot and Weighted Arithmetic Mean
Yeon, Ahmad Shakaff Ali; Kamarudin, Kamarulzaman; Visvanathan, Retnam; Mamduh Syed Zakaria, Syed Muhammad; Zakaria, Ammar; Munirah Kamarudin, Latifah
2018-03-01
This work is concerned with the localization of a gas source in a dynamic indoor environment using a single mobile robot system. Algorithms such as Braitenberg, Zig-Zag, and a combination of the two were implemented on the mobile robot as gas plume searching and tracing behaviours. To calculate the gas source location, a weighted arithmetic mean strategy was used. All experiments were done on an experimental testbed consisting of a large gas sensor array (LGSA) to monitor real-time gas concentration within the testbed. Ethanol gas was released within the testbed and the source location was marked using a pattern that can be tracked by a pattern tracking system. A pattern template was also mounted on the mobile robot to track its trajectory. Measurements taken by the mobile robot and the LGSA were then compared to verify the experiments. A combined total of 36.5 hours of real-time experimental runs was completed, and typical results from these experiments are presented in this paper. From the results, we obtained gas source localization errors between 0.4 m and 1.2 m from the real source location.
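The weighted arithmetic mean strategy is simple enough to state directly: average the robot's measurement positions, weighting each by the gas concentration measured there. The sample positions and concentrations below are hypothetical numbers, not the paper's data:

```python
import numpy as np

def weighted_source_estimate(positions, concentrations):
    """Weighted arithmetic mean of measurement positions, using the gas
    concentration at each position as its weight."""
    w = np.asarray(concentrations, dtype=float)
    p = np.asarray(positions, dtype=float)
    return (w[:, None] * p).sum(axis=0) / w.sum()

# Hypothetical samples along a zig-zag trace; concentration peaks near (2, 3).
positions = [(0.5, 1.0), (1.5, 2.0), (2.0, 3.0), (2.5, 3.5), (3.5, 4.5)]
concentrations = [5.0, 40.0, 120.0, 60.0, 8.0]
est = weighted_source_estimate(positions, concentrations)
```

Because the highest concentrations dominate the weights, the estimate is pulled toward the plume's strongest readings.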
Cryogenic technology review of cold neutron source facility for localization
Energy Technology Data Exchange (ETDEWEB)
Lee, Hun Cheol; Park, D. S.; Moon, H. M.; Soon, Y. P. [Daesung Cryogenic Research Institute, Ansan (Korea); Kim, J. H. [United Pacific Technology, Inc., Ansan (Korea)
1998-02-01
This research was performed to localize the cold neutron source (CNS) facility in HANARO, and the report consists of two parts. In PART I, the local and foreign technology for CNS facilities is investigated and examined. In PART II, safety and licensing are investigated. A CNS facility consists of a cryogenic part and a warm part. The cryogenic part includes a helium refrigerator, vacuum-insulated pipes, a condenser, cryogenic fluid tubes and a moderator cell. The warm part includes moderator gas control, vacuum equipment and a process monitoring system. The warm-part technology is at a high level as a result of the development of the semiconductor industry and can be localized. However, even though cryogenic technology is expected to play an important role in developing the 21st century's cutting-edge technologies, the field lacks specialists and research facilities, since the domestic market is small and research institutes and the government do not recognize its importance. Therefore, a long research period is required to localize the facility. The safety standard of reactors for hydrogen gas in domestic nuclear power regulations is compared with those of foreign countries, and the licensing method for installation of a CNS facility is examined. System failures and their influence are also analyzed. 23 refs., 59 figs., 26 tabs. (Author)
Three-dimensional tomosynthetic image restoration for brachytherapy source localization
International Nuclear Information System (INIS)
Persons, Timothy M.
2001-01-01
Tomosynthetic image reconstruction allows for the production of a virtually infinite number of slices from a finite number of projection views of a subject. If the reconstructed image volume is viewed in toto, and the three-dimensional (3D) impulse response is accurately known, then it is possible to solve the inverse problem (deconvolution) using canonical image restoration methods (such as Wiener filtering or solution by conjugate gradient least squares iteration) by extension to three dimensions in either the spatial or the frequency domains. This dissertation presents modified direct and iterative restoration methods for solving the inverse tomosynthetic imaging problem in 3D. The significant blur artifact that is common to tomosynthetic reconstructions is deconvolved by solving for the entire 3D image at once. The 3D impulse response is computed analytically using a fiducial reference schema as realized in a robust, self-calibrating solution to generalized tomosynthesis. 3D modulation transfer function analysis is used to characterize the tomosynthetic resolution of the 3D reconstructions. The relevant clinical application of these methods is 3D imaging for brachytherapy source localization. Conventional localization schemes for brachytherapy implants using orthogonal or stereoscopic projection radiographs suffer from scaling distortions and poor visibility of implanted seeds, resulting in compromised source tracking (reported errors: 2-4 mm) and dosimetric inaccuracy. 3D image reconstruction (using a well-chosen projection sampling scheme) and restoration of a prostate brachytherapy phantom is used for testing. The approaches presented in this work localize source centroids with submillimeter error in two Cartesian dimensions and just over one millimeter error in the third
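The restoration methods named above include Wiener filtering; its frequency-domain form is standard, X = conj(H) G / (|H|^2 + s) with a noise-dependent regularizer s. A 1-D sketch (the dissertation applies the same idea in 3-D; the two-point signal and blur kernel below are toy data, not the dissertation's measured impulse response):

```python
import numpy as np

# 1-D Wiener deconvolution: recover point-like "seeds" from a blurred signal.
def wiener_deconvolve(blurred, psf, s=1e-3):
    n = len(blurred)
    H = np.fft.fft(psf, n)                     # transfer function of the blur
    G = np.fft.fft(blurred)
    X = np.conj(H) * G / (np.abs(H) ** 2 + s)  # regularized inverse filter
    return np.real(np.fft.ifft(X))

x = np.zeros(64)
x[20], x[40] = 1.0, 0.5                        # two point "seeds"
psf = np.array([0.2, 0.6, 0.2])                # small blur kernel
g = np.real(np.fft.ifft(np.fft.fft(psf, 64) * np.fft.fft(x)))  # circular blur
restored = wiener_deconvolve(g, psf, s=1e-4)
```

Larger s suppresses noise amplification at frequencies where |H| is small, at the cost of residual blur; that trade-off is exactly what makes the regularizer choice matter for submillimeter seed localization.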
Analytic reconstruction algorithms for triple-source CT with horizontal data truncation
International Nuclear Information System (INIS)
Chen, Ming; Yu, Hengyong
2015-01-01
Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units
A review of feature detection and match algorithms for localization and mapping
Li, Shimiao
2017-09-01
Localization and mapping is an essential ability of a robot to keep track of its own location in an unknown environment. Among existing methods for this purpose, vision-based methods are effective solutions, being accurate, inexpensive and versatile. Vision-based methods can generally be categorized as feature-based approaches and appearance-based approaches. Feature-based approaches show higher performance in textured scenarios, but their performance depends heavily on the applied feature-detection algorithms. In this paper, we survey algorithms for feature detection, which is an essential step in achieving vision-based localization and mapping, and present mathematical models of the algorithms one after another. To compare the performances of the algorithms, we conducted a series of experiments on their accuracy, speed, scale invariance and rotation invariance. The results of the experiments showed that ORB is the fastest algorithm in detecting and matching features, with a speed more than 10 times that of SURF and approximately 40 times that of SIFT. SIFT, although it has no advantage in terms of speed, yields the most correct matching pairs and proves the most accurate.
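Part of ORB's speed advantage comes from its binary descriptors, which are compared with Hamming distance (XOR plus popcount) rather than the floating-point L2 distance used for SIFT/SURF descriptors. A brute-force nearest-neighbour matcher over toy integer descriptors (hypothetical 8-bit values; real ORB descriptors are 256-bit):

```python
# Hamming-distance brute-force matching of binary (ORB-style) descriptors.
def hamming(a, b):
    return bin(a ^ b).count("1")

def match(desc_a, desc_b):
    """For each descriptor in desc_a, return (index_in_b, distance) of its
    nearest neighbour in desc_b."""
    return [min(((j, hamming(d, e)) for j, e in enumerate(desc_b)),
                key=lambda t: t[1])
            for d in desc_a]

desc_a = [0b10110010, 0b00001111]
desc_b = [0b10110011, 0b11110000, 0b00001110]
matches = match(desc_a, desc_b)   # each query paired with its closest target
```

Because the distance is a single XOR and bit count per comparison, matching scales to thousands of keypoints per frame, which is what makes ORB attractive for real-time localization.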
Insulin in the brain: sources, localization and functions.
Ghasemi, Rasoul; Haeri, Ali; Dargahi, Leila; Mohamed, Zahurin; Ahmadiani, Abolhassan
2013-02-01
Historically, insulin is best known for its role in peripheral glucose homeostasis, and insulin signaling in the brain has received less attention. Insulin-independent brain glucose uptake has been the main reason for considering the brain an insulin-insensitive organ. However, recent findings showing a high concentration of insulin in brain extracts, and expression of insulin receptors (IRs) in central nervous system tissues, have drawn considerable attention to the sources, localization, and functions of insulin in the brain. This review summarizes the current state of knowledge of the peripheral and central sources of insulin in the brain, site-specific expression of IRs, and the neurophysiological functions of insulin, including the regulation of food intake, weight control, reproduction, and cognition and memory formation. This review also considers the neuromodulatory and neurotrophic effects of insulin, which result in proliferation, differentiation, and neurite outgrowth, introducing insulin as an attractive tool for neuroprotection against apoptosis, oxidative stress, beta amyloid toxicity, and brain ischemia.
Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien
2011-06-01
A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA
Fault Diagnosis of Supervision and Homogenization Distance Based on Local Linear Embedding Algorithm
Directory of Open Access Journals (Sweden)
Guangbin Wang
2015-01-01
In view of the problems that real-world fault samples are unevenly distributed and that the dimension-reduction performance of the locally linear embedding (LLE) algorithm is easily affected by the choice of neighboring points, an improved locally linear embedding algorithm based on homogenization distance (HLLE) is developed. The method makes the overall distribution of sample points tend toward homogenization and reduces the influence of neighboring points by using the homogenization distance instead of the traditional Euclidean distance. This helps to choose effective neighboring points for constructing the weight matrix for dimension reduction. Because the fault-recognition improvement of HLLE is limited and unstable, the paper further proposes a new locally linear embedding algorithm with supervision and homogenization distance (SHLLE) by adding a supervised learning mechanism. On the basis of the homogenization distance, supervised learning adds the category information of sample points, so that sample points of the same category are gathered and sample points of different categories are scattered. This effectively improves the performance of fault diagnosis while maintaining stability. A comparison of the methods mentioned above was made by simulation experiments with rotor system fault diagnosis, and the results show that the SHLLE algorithm has superior fault-recognition performance.
Energy Technology Data Exchange (ETDEWEB)
Penny, Robert D., E-mail: robert.d.penny@leidos.com [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Crowley, Tanya M.; Gardner, Barbara M.; Mandell, Myron J.; Guo, Yanlin; Haas, Eric B.; Knize, Duane J.; Kuharski, Robert A.; Ranta, Dale; Shyffer, Ryan [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Labov, Simon; Nelson, Karl; Seilhan, Brandon [Lawrence Livermore National Laboratory, Livermore, CA (United States); Valentine, John D. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)
2015-06-01
A novel approach and algorithm have been developed to rapidly detect and localize both moving and static radiological/nuclear (R/N) sources from an airborne platform. Current aerial systems with radiological sensors are limited in their ability to compensate for variable naturally occurring radioactive material (NORM) background. The proposed approach suppresses the effects of NORM background by incorporating additional information to segment the survey area into regions over which the background is likely to be uniform. The method produces pixelated Source Activity Maps (SAMs) of both target and background radionuclide activity over the survey area. The task of producing the SAMs requires (1) the development of a forward model which describes the transformation of radionuclide activity to detector measurements and (2) the solution of the associated inverse problem. The inverse problem is ill-posed, as there are typically fewer measurements than unknowns. In addition, the measurements are subject to Poisson statistical noise. The Maximum-Likelihood Expectation-Maximization (MLEM) algorithm is used to solve the inverse problem, as it is well suited for under-determined problems corrupted by Poisson noise. A priori terrain information is incorporated to segment the reconstruction space into regions within which we constrain NORM background activity to be uniform. Descriptions of the algorithm and examples of performance with and without segmentation on simulated data are presented.
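The MLEM update referenced here is the standard multiplicative rule x ← x · (Aᵀ(y / Ax)) / (Aᵀ1), which preserves nonnegativity and handles Poisson-noisy, under-determined systems. A minimal sketch on a toy 2×2 system (a stand-in for the paper's terrain-segmented forward model):

```python
# MLEM iteration for y ~ Poisson(A x): multiplicative update that keeps
# activities nonnegative. Toy dense system; real SAM reconstruction uses a
# large sparse forward model.
def mlem(A, y, n_iter=500):
    m, n = len(A), len(A[0])
    x = [1.0] * n                                              # positive start
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # A^T 1
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / p if p > 0 else 0.0 for i, p in enumerate(proj)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x

A = [[1.0, 0.5], [0.2, 1.0]]
x_true = [2.0, 3.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]  # noiseless
x_est = mlem(A, y)
```

For consistent noiseless data the iteration converges to an exact solution; with real Poisson counts it converges to a maximum-likelihood estimate, and early stopping or segmentation constraints (as in the paper) control noise amplification.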
Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm
Directory of Open Access Journals (Sweden)
V. D. Sulimov
2014-01-01
Modern methods for the optimization investigation of complex systems are based on developing and updating mathematical models of the systems through the solution of appropriate inverse problems. Input data for the solution are obtained from the analysis of experimentally determined consequent characteristics of a system or process. The sought causal characteristics include the equation coefficients of the object's mathematical model, boundary conditions, etc. The optimization approach is one of the main approaches to solving inverse problems. In the general case it is necessary to find a global extremum of a not-everywhere-differentiable criterion function. Global optimization methods are widely used in problems of identification and computational diagnosis systems as well as in optimal control, computed tomography, image restoration, training neural networks, and other intelligent technologies. The increasingly complicated systems observed during the last decades lead to more complicated mathematical models, thereby making the solution of the corresponding extreme problems significantly more difficult. In many practical applications the problem conditions can restrict modeling. As a consequence, in inverse problems the criterion functions can be not everywhere differentiable and noisy. The presence of noise means that calculating the derivatives is difficult and unreliable, which motivates optimization methods that do not require derivatives. The efficiency of deterministic global optimization algorithms is significantly restricted by their dependence on the dimension of the extreme problem. When the number of variables is large, stochastic global optimization algorithms are used. However, stochastic algorithms can yield expensive solutions, and this drawback restricts their applications. Developing hybrid algorithms that combine a stochastic algorithm for scanning the variable space with deterministic local search
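The Hooke-Jeeves method named in the title is a classic derivative-free pattern search: exploratory moves probe each coordinate, a successful exploration triggers a pattern move along the improving direction, and the step is shrunk when no move helps. A minimal sketch (textbook form, not the authors' hybrid variant):

```python
# Hooke-Jeeves pattern search: derivative-free local minimization, suitable
# as the local phase of a hybrid global optimizer for noisy, nonsmooth
# criterion functions.
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    def explore(base, s):
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):                      # probe +s then -s per axis
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = list(x0)
    while step > tol:
        x = explore(base, step)
        if f(x) < f(base):
            pattern = [2 * a - b for a, b in zip(x, base)]  # pattern move
            base = x
            trial = explore(pattern, step)
            if f(trial) < f(base):
                base = trial
        else:
            step *= shrink                          # no improvement: refine
    return base

# Minimize a smooth quadratic with minimum at (1, -2).
sol = hooke_jeeves(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [5.0, 5.0])
```

Because only function values are used, the method tolerates the nondifferentiable, noisy criteria described above, which is exactly why it pairs well with a stochastic global scan.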
Directory of Open Access Journals (Sweden)
Meng Zhi-Jun
2016-01-01
This paper addresses a new application of the local fractional variational iteration algorithm III to solve the local fractional diffusion equation defined on Cantor sets associated with non-differentiable heat transfer.
Energy Efficient Routing Algorithms in Dynamic Optical Core Networks with Dual Energy Sources
DEFF Research Database (Denmark)
Wang, Jiayuan; Fagertun, Anna Manolova; Ruepp, Sarah Renée
2013-01-01
This paper proposes new energy efficient routing algorithms in optical core networks, with the application of solar energy sources and bundled links. A comprehensive solar energy model is described in the proposed network scenarios. Network performance in energy savings, connection blocking...... probability, resource utilization and bundled link usage are evaluated with dynamic network simulations. Results show that the proposed algorithms, which aim to reduce the dynamic part of the network's energy consumption, may meanwhile raise its fixed part....
DEFF Research Database (Denmark)
Tonelli, Oscar; Berardinelli, Gilberto; Tavares, Fernando Menezes Leitão
2013-01-01
Next generation wireless networks aim at a significant improvement of the spectral efficiency in order to meet the dramatic increase in data service demand. In local area scenarios user-deployed base stations are expected to take place, thus making the centralized planning of frequency resources...... activities with the Autonomous Component Carrier Selection (ACCS) algorithm, a distributed solution for interference management among small neighboring cells. A preliminary evaluation of the algorithm performance is provided considering its live execution on a software defined radio network testbed...
A Combinatorial Benders’ Cuts Algorithm for the Local Container Drayage Problem
Directory of Open Access Journals (Sweden)
Zhaojie Xue
2015-01-01
This paper examines the local container drayage problem under a special operation mode in which tractors and trailers can be separated; that is, tractors can be assigned to a new task at another location while trailers with containers are waiting for packing or unpacking. Meanwhile, the strategy of sharing empty containers between different customers is also considered to improve efficiency and lower the operation cost. The problem is formulated as a vehicle routing and scheduling problem with temporal constraints. We adopt a combinatorial Benders' cuts algorithm to solve this problem. Numerical experiments are performed on a group of randomly generated instances to test the performance of the proposed algorithm.
International Nuclear Information System (INIS)
Liu, Xiaozheng; Yuan, Zhenming; Zhu, Junming; Xu, Dongrong
2013-01-01
The demons algorithm is a popular algorithm for non-rigid image registration because of its computational efficiency and simple implementation. The deformation forces of the classic demons algorithm are derived from image gradients by considering the deformation to decrease the intensity dissimilarity between images. However, methods using the difference of image intensity for medical image registration are easily affected by image artifacts, such as image noise, non-uniform imaging and partial volume effects. The gradient magnitude image is constructed from the local information of an image, so differences in the gradient magnitude image can be regarded as more reliable and robust against these artifacts. Registering medical images by considering the differences in both image intensity and gradient magnitude is therefore a natural choice. In this paper, based on a diffeomorphic demons algorithm, we propose a chain-type diffeomorphic demons algorithm that combines the differences in both image intensity and gradient magnitude for medical image registration. Previous work has shown that the classic demons algorithm can be considered an approximation of a second-order gradient descent on the sum of the squared intensity differences. By optimizing the new dissimilarity criteria, we also present a set of new demons forces derived from the gradients of the image and of the gradient magnitude image. We show that, in controlled experiments, this advantage is confirmed and yields fast convergence. (paper)
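The classic intensity-driven demons force referenced above is u = (m − f)·∇f / (|∇f|² + (m − f)²). A 1-D sketch of just that term (the paper's chain-type variant adds an analogous force driven by the gradient-magnitude image, not reproduced here):

```python
# Classic (Thirion) demons force in 1-D. fixed and moving are intensity
# profiles; the force is nonzero only where both an intensity mismatch and
# an image gradient exist.
def demons_force(fixed, moving):
    n = len(fixed)
    u = [0.0] * n
    for i in range(1, n - 1):
        g = (fixed[i + 1] - fixed[i - 1]) / 2.0   # central-difference gradient
        diff = moving[i] - fixed[i]
        denom = g * g + diff * diff               # normalization term
        u[i] = diff * g / denom if denom > 1e-12 else 0.0
    return u

fixed = [0.0, 0.0, 1.0, 2.0, 2.0]
moving = [0.0, 1.0, 2.0, 2.0, 2.0]   # same edge, shifted by one sample
u = demons_force(fixed, moving)
```

The (m − f)² term in the denominator caps the force where the intensity mismatch is large relative to the gradient, which is what keeps the classic scheme stable; the gradient-magnitude force in the paper follows the same template with |∇f| playing the role of the intensity.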
A new PWM algorithm for battery-source three-phase inverters
Energy Technology Data Exchange (ETDEWEB)
Chan, C.C. (Dept. of Electrical and Electronic Engineering, Univ. of Hong Kong, Pokfulam Road (HK)); Chau, K.T. (Dept. of Electrical Engineering, Hong Kong Polytechnic, Hung Hom (HK))
1991-01-01
A new PWM algorithm for battery-source three-phase inverters is described in this paper. The concept of the algorithm is to determine the pulsewidths by equating the areas of the segments of the sinusoidal reference with the related output pulse areas. The algorithm is particularly suitable for handling a non-constant voltage source with good harmonic suppression. Since the pulsewidths are computable in real time with minimal storage requirements as well as compact hardware and software, the algorithm is especially suitable for single-chip microcomputer implementation. Experimental results show that the single-chip Intel 8095-based battery-source inverter can control a 3 kW synchronous motor drive satisfactorily over a frequency range of 2 to 100 Hz.
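The equal-area rule can be sketched as follows, under the assumption (the abstract gives only the concept) that over segment k the output pulse area V_batt·t_k is set equal to the area under the sinusoidal reference, so a sagging battery voltage automatically widens the pulses:

```python
import math

# Equal-area PWM sketch (assumed form of the rule; segment boundaries must
# not straddle a zero crossing of the reference, which holds here for an
# even number of segments per period).
def pulse_widths(v_peak, v_batt, n_seg, period):
    w = 2.0 * math.pi / period
    seg = period / n_seg
    widths = []
    for k in range(n_seg):
        a, b = k * seg, (k + 1) * seg
        # |integral of v_peak*sin(w t)| over [a, b]
        area = v_peak / w * abs(math.cos(w * a) - math.cos(w * b))
        widths.append(area / v_batt)               # equal-area condition
    return widths

full = pulse_widths(v_peak=100.0, v_batt=200.0, n_seg=8, period=0.02)
sag = pulse_widths(v_peak=100.0, v_batt=150.0, n_seg=8, period=0.02)
```

With the battery sagging from 200 V to 150 V every pulse widens by the factor 200/150, preserving the volt-second balance of each segment, which is the property that makes the scheme robust to a non-constant source.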
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Shuai Li
2008-03-01
A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node-localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from the randomly set positions to the original positions, i.e., the node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale. The time consumption has also been proven to remain almost constant, since the calculation steps are almost unrelated to the network scale.
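The spring dynamics can be sketched in 2-D: a blind node connected by virtual springs to neighbors at known positions moves under forces proportional to the error between measured and current inter-node distances (toy version; the paper's algorithm also models masses and the three patches):

```python
import math

# Spring-model localization sketch: iterate force updates until the blind
# node's position is consistent with the measured distances to its neighbors.
def spring_localize(anchors, dists, start=(0.0, 0.0), k=0.1, n_iter=3000):
    x, y = start
    for _ in range(n_iter):
        fx = fy = 0.0
        for (ax, ay), d in zip(anchors, dists):
            dx, dy = ax - x, ay - y
            cur = math.hypot(dx, dy) or 1e-12
            f = k * (cur - d)          # stretched spring pulls toward anchor
            fx += f * dx / cur
            fy += f * dy / cur
        x, y = x + fx, y + fy
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = (3.0, 4.0)
dists = [math.hypot(ax - true[0], ay - true[1]) for ax, ay in anchors]
pos = spring_localize(anchors, dists, start=(9.0, 9.0))
```

Each node only needs its own neighbors' positions and distances, which is why the per-node cost stays O(1) as the network grows.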
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Rao, Akshay; Elara, Mohan Rajesh; Elangovan, Karthikeyan
This paper aims to develop a local path planning algorithm for a bio-inspired, reconfigurable crawling robot. A detailed description of the robotic platform is first provided, and the suitability for deployment of each of the current state-of-the-art local path planners is analyzed after an extensive literature review. The Enhanced Vector Polar Histogram algorithm is described and reformulated to better fit the requirements of the platform. The algorithm is deployed on the robotic platform in crawling configuration and favorably compared with other state-of-the-art local path planning algorithms.
Research on fully distributed optical fiber sensing security system localization algorithm
Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen
2013-12-01
A new fully distributed optical fiber sensing and localization technology based on Mach-Zehnder interferometers is studied. For this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple grouped data separately, it not only utilizes the advantages of frequency-analysis methods to determine the most effective data group more accurately, but also meets the requirements of a real-time monitoring system. Supplemented with a short-term energy calculation of the grouped signals, the most effective data group can be quickly picked out. Finally, the accurate location of the climbing point is obtained through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm achieves accurate localization of the climbing point while effectively filtering out the outside interference noise of non-climbing behavior.
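The two ingredients the algorithm combines, a frame-wise zero-crossing rate and a cross-correlation delay estimate, can be sketched on toy signals (the real system processes Mach-Zehnder interferometer outputs and converts the delay to a position along the fiber):

```python
# Short-time average zero-crossing rate and cross-correlation delay estimate.
def zcr(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    return sum(1 for p, q in zip(frame, frame[1:]) if p * q < 0) / (len(frame) - 1)

def delay(sig_a, sig_b, max_lag):
    """Lag (in samples) of sig_b relative to sig_a that maximizes the
    cross-correlation."""
    def corr(lag):
        return sum(sig_a[i] * sig_b[i + lag]
                   for i in range(len(sig_a))
                   if 0 <= i + lag < len(sig_b))
    return max(range(-max_lag, max_lag + 1), key=corr)

a = [0, 1, 2, 1, 0, -1, -2, -1, 0, 0, 0, 0]
b = [0, 0, 0, 0, 1, 2, 1, 0, -1, -2, -1, 0]   # same pulse, 3 samples later
lag = delay(a, b, 5)
```

High-frequency disturbances such as climbing raise the short-time ZCR, which is how the effective data group is selected before the (more expensive) cross-correlation step converts the inter-channel delay into a location.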
Directory of Open Access Journals (Sweden)
Weitian Lin
2014-01-01
The particle swarm optimization algorithm (PSOA) is an advantageous optimization tool. However, it has a tendency to get stuck in near-optimal solutions, especially for medium- and large-size problems, and it is difficult to improve solution accuracy by fine-tuning parameters. To address this insufficiency, this paper studies the local and global search combined particle swarm optimization algorithm (LGSCPSOA) and its convergence, and obtains its convergence conditions. At the same time, it is tested on a set of 8 benchmark continuous functions, and its optimization results are compared with the original particle swarm optimization algorithm (OPSOA). Experimental results indicate that the LGSCPSOA significantly improves search performance, especially on the medium- and large-size benchmark functions.
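For reference, the baseline the paper compares against is standard global-best PSO, whose velocity update mixes inertia, a pull toward each particle's personal best, and a pull toward the swarm best (the LGSCPSOA's additional local-search combination is not reproduced here):

```python
import random

# Minimal global-best PSO on a continuous objective (minimization).
def pso(f, dim, n_particles=30, n_iter=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:                 # update personal and global bests
                pbest[i], pval[i] = list(xs[i]), v
                if v < gval:
                    gbest, gval = list(xs[i]), v
    return gbest, gval

best, val = pso(lambda x: sum(t * t for t in x), dim=3)  # sphere function
```

The premature-convergence tendency the abstract describes arises when all particles collapse onto gbest; combining local and global search, as LGSCPSOA does, is one way to keep diversity in the swarm.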
Algorithms for biomagnetic source imaging with prior anatomical and physiological information
Energy Technology Data Exchange (ETDEWEB)
Hughett, Paul William [Univ. of California, Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences
1995-12-01
This dissertation derives a new method for estimating current source amplitudes in the brain and heart from external magnetic field measurements and prior knowledge about the probable source positions and amplitudes. The minimum mean square error estimator for the linear inverse problem with statistical prior information was derived and is called the optimal constrained linear inverse method (OCLIM). OCLIM includes as special cases the Shim-Cho weighted pseudoinverse and Wiener estimators but allows more general priors and thus reduces the reconstruction error. Efficient algorithms were developed to compute the OCLIM estimate for instantaneous or time series data. The method was tested in a simulated neuromagnetic imaging problem with five simultaneously active sources on a grid of 387 possible source locations; all five sources were resolved, even though the true sources were not exactly at the modeled source positions and the true source statistics differed from the assumed statistics.
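The minimum-mean-square-error linear estimator with a Gaussian prior, of which OCLIM is a generalization, has a standard closed form. A toy 2-source, 1-measurement example (hypothetical numbers), where the prior resolves an otherwise under-determined inverse problem:

```python
# MMSE linear estimate x_hat = mu + P A^T (A P A^T + R)^(-1) (y - A mu),
# written for a diagonal prior covariance P and a single scalar measurement
# with noise variance R. This is the general estimator form, not the
# dissertation's full OCLIM implementation.
def mmse_estimate(A, y, prior_mean, prior_var, noise_var):
    n = len(prior_mean)
    r = [A[j] * prior_var[j] for j in range(n)]            # P A^T
    s = sum(A[j] * r[j] for j in range(n)) + noise_var     # A P A^T + R
    innov = y - sum(A[j] * prior_mean[j] for j in range(n))
    return [prior_mean[j] + r[j] * innov / s for j in range(n)]

# One field sample y = x1 + x2; the prior says source 1 is stronger and
# more uncertain, so it absorbs most of the measurement innovation.
x_hat = mmse_estimate(A=[1.0, 1.0], y=5.0,
                      prior_mean=[3.0, 1.0], prior_var=[4.0, 1.0],
                      noise_var=0.1)
```

With no prior (infinite prior variance) this degenerates to a pseudoinverse solution; with an informative prior it interpolates between prior mean and data, which is how the five simulated sources above can be resolved from limited measurements.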
Parareal algorithms with local time-integrators for time fractional differential equations
Wu, Shu-Lin; Zhou, Tao
2018-04-01
It is challenging to design parareal algorithms for time-fractional differential equations due to the historical effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time across processes. In this work, we present an efficient parareal iteration scheme that overcomes this issue by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse-grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
High-frequency asymptotics of the local vertex function. Algorithmic implementations
Energy Technology Data Exchange (ETDEWEB)
Tagliavini, Agnese; Wentzell, Nils [Institut fuer Theoretische Physik, Eberhard Karls Universitaet, 72076 Tuebingen (Germany); Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Li, Gang; Rohringer, Georg; Held, Karsten; Toschi, Alessandro [Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Taranto, Ciro [Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Max Planck Institute for Solid State Research, D-70569 Stuttgart (Germany); Andergassen, Sabine [Institut fuer Theoretische Physik, Eberhard Karls Universitaet, 72076 Tuebingen (Germany)
2016-07-01
Local vertex functions are a crucial ingredient of several forefront many-body algorithms in condensed matter physics. However, the full treatment of their frequency dependence poses a huge limitation to the numerical performance. A significant advancement requires an efficient treatment of the high-frequency asymptotic behavior of the vertex functions. We here provide a detailed diagrammatic analysis of the high-frequency asymptotic structures and their physical interpretation. Based on these insights, we propose a frequency parametrization, which captures the whole high-frequency asymptotics for arbitrary values of the local Coulomb interaction and electronic density. We present its algorithmic implementation in many-body solvers based on parquet-equations as well as functional renormalization group schemes and assess its validity by comparing our results for the single impurity Anderson model with exact diagonalization calculations.
Energy Technology Data Exchange (ETDEWEB)
Wang, Cheng-Der, E-mail: jdwang@iner.gov.tw [Nuclear Engineering Division, Institute of Nuclear Energy Research, No. 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan, ROC (China); Lin, Chaung [National Tsing Hua University, Department of Engineering and System Science, 101, Section 2, Kuang Fu Road, Hsinchu 30013, Taiwan (China)
2013-02-15
Highlights: ► The PSO algorithm was adopted to automatically design a BWR CRP. ► A local search procedure was added to improve the result of the PSO algorithm. ► The results show that the obtained CRP is as good as that in the previous work. -- Abstract: This study developed a method for the automatic design of a boiling water reactor (BWR) control rod pattern (CRP) using the particle swarm optimization (PSO) algorithm. The PSO algorithm is more stochastic than the rank-based ant system (RAS) that was used to solve the same BWR CRP design problem in the previous work. In addition, a local search procedure was used to make improvements after PSO by adding the single control rod (CR) effect. The design goal was to obtain a CRP such that the thermal limits and shutdown margin satisfy the design requirements and the cycle length, which is implicitly controlled by the axial power distribution, is acceptable. The results showed that the same acceptable CRP found in the previous work could be obtained.
Impelluso, Thomas J
2003-06-01
An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective strain based analysis to redistribute density. General re-distribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.
Local structure information by EXAFS analysis using two algorithms for Fourier transform calculation
International Nuclear Information System (INIS)
Aldea, N; Pintea, S; Rednic, V; Matei, F; Hu Tiandou; Xie Yaning
2009-01-01
The present work is a comparison study between different Fourier transform algorithms for obtaining very accurate local structure results using the Extended X-ray Absorption Fine Structure (EXAFS) technique. In this paper we focus on the local structural characteristics of supported nickel catalysts and Fe3O4 core-shell nanocomposites. The radial distribution function could be efficiently calculated by the fast Fourier transform when the coordination shells are well separated, while Filon quadrature gave remarkable results for closely spaced coordination shells.
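The FFT route to the radial distribution function mentioned above can be illustrated on a synthetic two-shell EXAFS signal (the shell radii, amplitudes and k-range below are invented for the sketch):

```python
import numpy as np

# Hypothetical EXAFS signal: two coordination shells at R1 = 2.0 A, R2 = 2.5 A.
k = np.linspace(3.0, 12.0, 512)             # photoelectron wavenumber (1/A)
chi = (0.8 * np.sin(2 * 2.0 * k) + 0.5 * np.sin(2 * 2.5 * k)) / k**2

# k^2 weighting and a Hanning window reduce truncation ripple before the FFT.
weighted = chi * k**2 * np.hanning(k.size)

n = 4096                                     # zero-padding refines the R grid
ft = np.fft.fft(weighted, n)
dk = k[1] - k[0]
R = np.fft.fftfreq(n, d=dk)[: n // 2] * np.pi   # sin(2kR): f = R/pi, so R = pi*f
rdf = np.abs(ft[: n // 2]) * dk

# The peaks of |FT| approximate the radial distribution of neighbor shells.
R_peak = R[np.argmax(rdf)]                   # dominant shell radius
```

With well-separated shells, as here, the FFT resolves both peaks; for closely spaced shells the abstract's point is that a quadrature such as Filon's handles the oscillatory integrand better.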
A Robust and Efficient Algorithm for Tool Recognition and Localization for Space Station Robot
Directory of Open Access Journals (Sweden)
Lingbo Cheng
2014-12-01
This paper studies a robust target recognition and localization method for a maintenance robot in a space station. Its main goal is to handle the target affine transformations caused by microgravity, the strong reflection and refraction of sunlight and lamplight in the cabin, and occlusion by other objects. In this method, an Affine Scale Invariant Feature Transform (Affine-SIFT) algorithm is proposed to extract enough local feature points with full affine invariance, and stable matching points are obtained from these points for target recognition by the selected Random Sample Consensus (RANSAC) algorithm. Then, in order to localize the target, an effective and appropriate 3D grasping scope of the target is defined, and the grasping precision is determined and evaluated with the estimated affine transformation parameters presented in this paper. Finally, the RANSAC threshold is optimized to enhance the accuracy and efficiency of target recognition and localization, and the ranges of illumination, viewing distance and viewpoint angle over which the robot obtains usable image data are evaluated using the Root-Mean-Square Error (RMSE). An experimental system simulating the illumination environment in a space station was established. Extensive experiments were carried out, and the experimental results show both the validity of the proposed definition of the grasping scope and the feasibility of the proposed recognition and localization method.
Directory of Open Access Journals (Sweden)
Victor eHernandez Bennetts
2012-01-01
Roboticists often take inspiration from animals for designing sensors, actuators or algorithms that control the behaviour of robots. Bio-inspiration is motivated by the uncanny ability of animals to solve complex tasks like recognizing and manipulating objects, walking on uneven terrains, or navigating to the source of an odour plume. In particular, the task of tracking an odour plume up to its source has nearly exclusively been addressed using biologically inspired algorithms, and robots have been developed, for example, to mimic the behaviour of moths, dung beetles, or lobsters. In this paper we argue that biomimetic approaches to gas source localization are of limited use, primarily because animals differ fundamentally in their sensing and actuation capabilities from state-of-the-art gas-sensitive mobile robots. To support our claim, we compare the actuation and chemical sensing available to mobile robots to the corresponding capabilities of moths. We further characterize airflow and chemosensor measurements obtained with three different robot platforms (two wheeled robots and one flying micro-drone) in four prototypical environments and show that the assumption of a constant and unidirectional airflow, which is the basis of many gas source localization approaches, is usually far from being valid. This analysis should help to identify how underlying principles, which govern the gas source tracking behaviour of animals, can be usefully translated into gas source localization approaches that fully take into account the capabilities of mobile robots. We also describe the requirements for a reference application, monitoring of gas emissions at landfill sites with mobile robots, and discuss an engineered gas source localization approach based on statistics as an alternative to biologically inspired algorithms.
Hernandez Bennetts, Victor; Lilienthal, Achim J; Neumann, Patrick P; Trincavelli, Marco
2011-01-01
Roboticists often take inspiration from animals for designing sensors, actuators, or algorithms that control the behavior of robots. Bio-inspiration is motivated by the uncanny ability of animals to solve complex tasks like recognizing and manipulating objects, walking on uneven terrains, or navigating to the source of an odor plume. In particular, the task of tracking an odor plume up to its source has nearly exclusively been addressed using biologically inspired algorithms, and robots have been developed, for example, to mimic the behavior of moths, dung beetles, or lobsters. In this paper we argue that biomimetic approaches to gas source localization are of limited use, primarily because animals differ fundamentally in their sensing and actuation capabilities from state-of-the-art gas-sensitive mobile robots. To support our claim, we compare actuation and chemical sensing available to mobile robots to the corresponding capabilities of moths. We further characterize airflow and chemosensor measurements obtained with three different robot platforms (two wheeled robots and one flying micro-drone) in four prototypical environments and show that the assumption of a constant and unidirectional airflow, which is the basis of many gas source localization approaches, is usually far from being valid. This analysis should help to identify how underlying principles, which govern the gas source tracking behavior of animals, can be usefully "translated" into gas source localization approaches that fully take into account the capabilities of mobile robots. We also describe the requirements for a reference application, monitoring of gas emissions at landfill sites with mobile robots, and discuss an engineered gas source localization approach based on statistics as an alternative to biologically inspired algorithms.
Verlinden, Christopher M.
Controlled acoustic sources have typically been used for imaging the ocean. These sources can either be used to locate objects or characterize the ocean environment. The processing involves signal extraction in the presence of ambient noise, with shipping being a major component of the latter. With the advent of the Automatic Identification System (AIS) which provides accurate locations of all large commercial vessels, these major noise sources can be converted from nuisance to beacons or sources of opportunity for the purpose of studying the ocean. The source localization method presented here is similar to traditional matched field processing, but differs in that libraries of data-derived measured replicas are used in place of modeled replicas. In order to account for differing source spectra between library and target vessels, cross-correlation functions are compared instead of comparing acoustic signals directly. The library of measured cross-correlation function replicas is extrapolated using waveguide invariant theory to fill gaps between ship tracks, fully populating the search grid with estimated replicas allowing for continuous tracking. In addition to source localization, two ocean sensing techniques are discussed in this dissertation. The feasibility of estimating ocean sound speed and temperature structure, using ship noise across a drifting volumetric array of hydrophones suspended beneath buoys, in a shallow water marine environment is investigated. Using the attenuation of acoustic energy along eigenray paths to invert for ocean properties such as temperature, salinity, and pH is also explored. In each of these cases, the theory is developed, tested using numerical simulations, and validated with data from acoustic field experiments.
A Dependable Localization Algorithm for Survivable Belt-Type Sensor Networks.
Zhu, Mingqiang; Song, Fei; Xu, Lei; Seo, Jung Taek; You, Ilsun
2017-11-29
As a key element, sensor networks are widely investigated by the Internet of Things (IoT) community. When massive numbers of devices are well connected, malicious attackers may deliberately propagate fake position information to confuse ordinary users and lower the network survivability in belt-type situations. However, most existing positioning solutions only focus on algorithm accuracy and do not consider any security aspects. In this paper, we propose a comprehensive scheme for node localization protection, which aims to improve energy efficiency, reliability and accuracy. To handle the unbalanced resource consumption, a node deployment mechanism is presented to satisfy the energy balancing strategy in resource-constrained scenarios. According to cooperative localization theory and the network connection property, the parameter estimation model is established. To achieve reliable estimates and eliminate large errors, an improved localization algorithm is created based on modified average hop distances. To further improve the algorithm, the node positioning accuracy is enhanced by using the steepest descent method. The experimental simulations illustrate that the performance of the new scheme meets the previous targets. The results also demonstrate that it improves the belt-type sensor networks' survivability, in terms of anti-interference, network energy saving, etc.
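A minimal sketch of hop-count localization with a steepest-descent refinement, in the spirit of the scheme above (this is a generic DV-Hop-style baseline with illustrative constants, not the authors' modified algorithm):

```python
import math

def avg_hop_distance(anchors, anchor_hops):
    """Calibrate metres-per-hop from known anchor positions.
    anchor_hops[(a, b)] = hop count between anchors a and b."""
    total_d = total_h = 0.0
    for (a, b), hops in anchor_hops.items():
        (xa, ya), (xb, yb) = anchors[a], anchors[b]
        total_d += math.hypot(xa - xb, ya - yb)
        total_h += hops
    return total_d / total_h

def localize(anchors, node_hops, hop_dist, iters=200, lr=0.01):
    """Estimate a node position from its hop counts to each anchor, refined
    by steepest descent on the squared range residuals."""
    ranges = {a: node_hops[a] * hop_dist for a in node_hops}
    # start at the centroid of the anchors
    x = sum(p[0] for p in anchors.values()) / len(anchors)
    y = sum(p[1] for p in anchors.values()) / len(anchors)
    for _ in range(iters):
        gx = gy = 0.0
        for a, r in ranges.items():
            ax, ay = anchors[a]
            d = math.hypot(x - ax, y - ay) or 1e-9
            gx += 2 * (d - r) * (x - ax) / d   # gradient of (d - r)^2
            gy += 2 * (d - r) * (y - ay) / d
        x -= lr * gx
        y -= lr * gy
    return x, y
```

The security and deployment mechanisms of the paper sit on top of this kind of estimator; they are omitted here.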
A practical algorithm for distribution state estimation including renewable energy sources
Energy Technology Data Exchange (ETDEWEB)
Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)
2009-11-15
Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are divided into five principal renewable sources of energy: the sun, the wind, flowing water, biomass and heat from within the earth. According to some studies carried out by research institutes, about 25% of new generation will come from Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on power systems, especially on distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical considerations. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by the Weighted Least-Squares (WLS) approach. Some practical considerations are var compensators, Voltage Regulators (VRs) and Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, as well as unbalanced three-phase power flow equations. The comparison results with other evolutionary optimization algorithms such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and Genetic Algorithm (GA) for a test system demonstrate that PSO-NM is extremely effective and efficient for DSE problems. (author)
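The WLS criterion referred to above reduces, in the linear case, to solving the normal equations; a toy example (the paper's DSE model is nonlinear and solved via PSO-NM, so this shows only the underlying criterion, with invented numbers):

```python
import numpy as np

# Weighted least squares: minimize (z - H x)^T W (z - H x)
# for measurements z, measurement model H, and weight matrix W.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # measurement model
z = np.array([1.02, 1.98, 3.05])   # noisy measurements of x1, x2, x1 + x2
W = np.diag([1.0, 1.0, 4.0])       # higher weight = more trusted measurement

# Normal equations: (H^T W H) x = H^T W z
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
```

In a DSE setting the states would be load and RES outputs and `H` the (linearized) power-flow relations; the weights encode measurement confidence.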
Optimization of source pencil deployment based on plant growth simulation algorithm
International Nuclear Information System (INIS)
Yang Lei; Liu Yibao; Liu Yujuan
2009-01-01
A plant growth simulation algorithm was proposed for optimizing source pencil deployment for a 60Co irradiator. A method to evaluate the calculation results was presented, with the objective function defined as the relative standard deviation of the exposure rate at the reference points, and the method to transform the two kinds of control variables, i.e., the position coordinates x_j and y_j of the source pencils in the source plaque, into proper integer variables was also analyzed and solved. The results show that the plant growth simulation algorithm, which possesses both random and directional search mechanisms, has good global search ability and is convenient to use. The results are only slightly affected by the initial conditions and improve the uniformity of the irradiation field, providing a dependable basis for optimizing the arrangement of source pencils at an irradiation facility. (authors)
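The objective function named above, the relative standard deviation of the exposure rate at the reference points, is simple to state in code (the reference-point values below are invented):

```python
import math

def relative_std(values):
    """Relative standard deviation (coefficient of variation) of the
    exposure rate at the reference points -- the quantity minimized above."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mean

# A perfectly uniform field scores 0; less uniform fields score higher.
uniform = relative_std([5.0, 5.0, 5.0, 5.0])
skewed = relative_std([4.0, 5.0, 5.0, 6.0])
```

The optimizer then searches over the integer-encoded pencil positions for the deployment minimizing this value.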
Nonlinear estimation-based dipole source localization for artificial lateral line systems
International Nuclear Information System (INIS)
Abdulsadda, Ahmad T; Tan Xiaobo
2013-01-01
As a flow-sensing organ, the lateral line system plays an important role in various behaviors of fish. An engineering equivalent of a biological lateral line is of great interest to the navigation and control of underwater robots and vehicles. A vibrating sphere, also known as a dipole source, can emulate the rhythmic movement of fins and body appendages, and has been widely used as a stimulus in the study of biological lateral lines. Dipole source localization has also become a benchmark problem in the development of artificial lateral lines. In this paper we present two novel iterative schemes, referred to as Gauss–Newton (GN) and Newton–Raphson (NR) algorithms, for simultaneously localizing a dipole source and estimating its vibration amplitude and orientation, based on the analytical model for a dipole-generated flow field. The performance of the GN and NR methods is first confirmed with simulation results and the Cramer–Rao bound (CRB) analysis. Experiments are further conducted on an artificial lateral line prototype, consisting of six millimeter-scale ionic polymer–metal composite sensors with intra-sensor spacing optimized with CRB analysis. Consistent with simulation results, the experimental results show that both GN and NR schemes are able to simultaneously estimate the source location, vibration amplitude and orientation with comparable precision. Specifically, the maximum localization error is less than 5% of the body length (BL) when the source is within the distance of one BL. Experimental results have also shown that the proposed schemes are superior to the beamforming method, one of the most competitive approaches reported in the literature, in terms of accuracy and computational efficiency. (paper)
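The Gauss–Newton scheme can be sketched generically with a finite-difference Jacobian; here a simple distance-decay model stands in for the dipole flow field (the sensor layout, model and source position are invented for illustration):

```python
import numpy as np

def gauss_newton(residual, x0, n_iters=20, eps=1e-6):
    """Generic Gauss-Newton iteration: x <- x - (J^T J)^{-1} J^T r(x),
    with a finite-difference Jacobian. `residual` maps parameters to the
    vector of model-minus-measurement residuals."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        r = residual(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        step, *_ = np.linalg.lstsq(J, r, rcond=None)   # solves J step = r
        x = x - step
    return x

# Hypothetical use: locate a 2D source from distance-decay measurements
# at four fixed sensors (a stand-in for the dipole flow model of the paper).
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_src = np.array([0.3, 0.6])
meas = 1.0 / np.linalg.norm(sensors - true_src, axis=1)

def res(p):
    return 1.0 / np.linalg.norm(sensors - p, axis=1) - meas

est = gauss_newton(res, x0=[0.5, 0.5])
```

In the paper the parameter vector would additionally carry vibration amplitude and orientation, with the analytical dipole model replacing the decay law used here.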
Localizing Brain Activity from Multiple Distinct Sources via EEG
Directory of Open Access Journals (Sweden)
George Dassios
2014-01-01
An important question arising in the framework of electroencephalography (EEG) is whether it is possible to recognize, by means of a recorded surface potential, the number of activated areas in the brain. In the present paper, employing a homogeneous spherical conductor as an approximation of the brain, we provide a criterion which determines whether the measured surface potential is evoked by a single or by multiple localized neuronal excitations. We show that the uniqueness of the inverse problem for a single dipole is closely connected with the measured data satisfying certain relations. Further, we present the necessary and sufficient conditions which decide whether the collected data originate from a single dipole or from numerous dipoles. In the case where the EEG data arise from multiple parallel dipoles, isolation of the individual sources is, in general, not possible.
Shah, Syed Awais Wahab
2017-11-24
This paper addresses the problem of blind demixing of instantaneous mixtures in a multiple-input multiple-output communication system. The main objective is to present efficient blind source separation (BSS) algorithms dedicated to moderate or high-order QAM constellations. Four new iterative batch BSS algorithms are presented dealing with the multimodulus (MM) and alphabet-matched (AM) criteria. For the optimization of these cost functions, iterative methods of Givens and hyperbolic rotations are used. A pre-whitening operation is also utilized to reduce the complexity of the design problem. It is noticed that the algorithms designed using Givens rotations alone give satisfactory performance only for a large number of samples. However, for a small number of samples, the algorithms designed by combining both Givens and hyperbolic rotations compensate for the ill-whitening that occurs in this case and thus improve the performance. Two algorithms dealing with the MM criterion are presented for moderate-order QAM signals such as 16-QAM. The other two, dealing with the AM criterion, are presented for high-order QAM signals. These methods are finally compared with state-of-the-art batch BSS algorithms in terms of signal-to-interference-and-noise ratio, symbol error rate and convergence rate. Simulation results show that the proposed methods outperform the contemporary batch BSS algorithms.
Shah, Syed Awais Wahab; Abed-Meraim, Karim; Al-Naffouri, Tareq Y.
2017-01-01
This paper addresses the problem of blind demixing of instantaneous mixtures in a multiple-input multiple-output communication system. The main objective is to present efficient blind source separation (BSS) algorithms dedicated to moderate or high-order QAM constellations. Four new iterative batch BSS algorithms are presented dealing with the multimodulus (MM) and alphabet-matched (AM) criteria. For the optimization of these cost functions, iterative methods of Givens and hyperbolic rotations are used. A pre-whitening operation is also utilized to reduce the complexity of the design problem. It is noticed that the algorithms designed using Givens rotations alone give satisfactory performance only for a large number of samples. However, for a small number of samples, the algorithms designed by combining both Givens and hyperbolic rotations compensate for the ill-whitening that occurs in this case and thus improve the performance. Two algorithms dealing with the MM criterion are presented for moderate-order QAM signals such as 16-QAM. The other two, dealing with the AM criterion, are presented for high-order QAM signals. These methods are finally compared with state-of-the-art batch BSS algorithms in terms of signal-to-interference-and-noise ratio, symbol error rate and convergence rate. Simulation results show that the proposed methods outperform the contemporary batch BSS algorithms.
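The role of pre-whitening plus a Givens-rotation parametrization can be illustrated on a toy real-valued demixing problem; BPSK stands in for the QAM signals, and a grid search over the rotation angle replaces the paper's iterative updates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent BPSK-like sources, mixed by a fixed 2x2 matrix.
s = rng.choice([-1.0, 1.0], size=(2, 2000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = A @ s

# Pre-whitening: afterwards the residual demixer is (near-)orthogonal,
# which is why a Givens parametrization suffices.
cov = x @ x.T / x.shape[1]
d, E = np.linalg.eigh(cov)
W = E @ np.diag(d ** -0.5) @ E.T
z = W @ x

# Scan the Givens angle for the minimum of a constant-modulus cost
# sum((y^2 - 1)^2), a simple real-valued relative of the MM criterion.
def cm_cost(theta):
    c, s_ = np.cos(theta), np.sin(theta)
    y = np.array([[c, -s_], [s_, c]]) @ z
    return np.sum((y ** 2 - 1.0) ** 2)

thetas = np.linspace(0, np.pi / 2, 181)      # cost is pi/2-periodic
best = min(thetas, key=cm_cost)
y = np.array([[np.cos(best), -np.sin(best)],
              [np.sin(best), np.cos(best)]]) @ z   # separated signals
```

The separated outputs match the sources up to the usual sign/permutation ambiguity; the paper's contribution is replacing the grid search with efficient Givens/hyperbolic updates that also cope with imperfect whitening.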
Pavlov, V. M.
2017-07-01
The problem of calculating complete synthetic seismograms from a point dipole with an arbitrary seismic moment tensor in a plane-parallel medium composed of homogeneous elastic isotropic layers is considered. It is established that the solutions of the system of ordinary differential equations for the motion-stress vector have a reciprocity property, which allows obtaining a compact formula for the derivative of the motion vector with respect to the source depth. The reciprocity theorem for Green's functions with respect to the interchange of the source and receiver is obtained for a medium with a cylindrical boundary. Differentiation of the Green's functions with respect to the coordinates of the source leads to the same calculation formulas as the algorithm developed in the previous work (Pavlov, 2013). A new algorithm appears when the derivatives with respect to the horizontal coordinates of the source are replaced by the derivatives with respect to the horizontal coordinates of the receiver (with the minus sign). This algorithm is more transparent, compact, and economical than the previous one. It requires calculating the wavenumbers associated with the roots of the Bessel functions of order 0 and order 1, whereas the previous algorithm additionally requires the roots of order 2.
Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun
2018-06-01
Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subject to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI-constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatially and temporally variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated in terms of both spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be
A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm.
Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A; Ravankar, Abhijeet
2018-04-23
In past years, there has been significant progress in the field of indoor robot localization. To precisely recover the position, robots usually rely on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. As compared to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates. Similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision.
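The ICP baseline that WP-ICP builds on alternates nearest-neighbour matching with a closed-form rigid alignment; a plain 2D sketch (without the paper's corner/line feature labels or confidence weighting):

```python
import numpy as np

def icp_2d(src, dst, n_iters=20):
    """Plain 2D point-to-point ICP: alternate nearest-neighbour matching
    with a closed-form SVD (Kabsch) alignment. Returns (R, t) mapping the
    original src points onto dst."""
    src = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(n_iters):
        # nearest-neighbour correspondence (brute force)
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid alignment of src onto its matches
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t             # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The WP-ICP variant restricts the matching step to same-label features (corners with corners, lines with lines), which is what cuts mismatches and iteration counts.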
Aerosol retrieval algorithm for the characterization of local aerosol using MODIS L1B data
International Nuclear Information System (INIS)
Wahab, A M; Sarker, M L R
2014-01-01
Atmospheric aerosol plays an important role in the radiation budget, climate change, hydrology and visibility. However, it has an immense effect on air quality, especially in densely populated areas where high concentrations of aerosol are associated with premature death and decreased life expectancy. Therefore, an accurate estimation of aerosol with its spatial distribution is essential, and satellite data have increasingly been used to estimate aerosol optical depth (AOD). The aerosol product (AOD) from Moderate Resolution Imaging Spectroradiometer (MODIS) data is available at the global scale, but problems arise due to its low spatial resolution and the time lag in AOD product availability, as well as the use of generalized aerosol models in the retrieval algorithm instead of local aerosol models. This study focuses on an aerosol retrieval algorithm for the characterization of local aerosol in Hong Kong over a long period of time (2006-2011) using high-spatial-resolution MODIS level 1B data (500 m resolution) and taking into account local aerosol models. Two methods (dark dense vegetation and the MODIS land surface reflectance product) were used for the estimation of surface reflectance over land, and the Santa Barbara DISORT Radiative Transfer (SBDART) code was used to construct LUTs for calculating the aerosol reflectance as a function of AOD. Results indicate that AOD can be estimated at the local scale from high-resolution MODIS data, and the obtained accuracy (ca. 87%) is very much comparable with the accuracy obtained in other studies (80%-95%) for AOD estimation.
A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm
Directory of Open Access Journals (Sweden)
Yun-Ting Wang
2018-04-01
In past years, there has been significant progress in the field of indoor robot localization. To precisely recover the position, robots usually rely on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. As compared to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates. Similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision.
International Nuclear Information System (INIS)
Chan, Apple L.S.; Hanby, Vic I.; Chow, T.T.
2007-01-01
A district cooling system is a sustainable means of distributing cooling energy through mass production. A cooling medium like chilled water is generated at a central refrigeration plant and supplied to serve a group of consumer buildings through a piping network. Because of the substantial capital investment involved, an optimal design of the distribution piping configuration is one of the crucial factors for successful implementation of a district cooling scheme. In the present study, a genetic algorithm (GA) incorporating local search techniques was developed to find the optimal/near-optimal configuration of the piping network in a hypothetical site. The effects of local search, mutation rate and frequency of local search on the performance of the GA, in terms of both solution quality and computation time, were investigated and are presented in this paper.
CAMPAIGN: an open-source library of GPU-accelerated data clustering algorithms.
Kohlhoff, Kai J; Sosnick, Marc H; Hsu, William T; Pande, Vijay S; Altman, Russ B
2011-08-15
Data clustering techniques are an essential component of a good data analysis toolbox. Many current bioinformatics applications are inherently compute-intense and work with very large datasets. Sequential algorithms are inadequate for providing the necessary performance. For this reason, we have created Clustering Algorithms for Massively Parallel Architectures, Including GPU Nodes (CAMPAIGN), a central resource for data clustering algorithms and tools that are implemented specifically for execution on massively parallel processing architectures. CAMPAIGN is a library of data clustering algorithms and tools, written in 'C for CUDA' for Nvidia GPUs. The library provides up to two orders of magnitude speed-up over the respective CPU-based clustering algorithms and is intended as an open-source resource. New modules from the community will be accepted into the library, and its layout is such that it can easily be extended to promising future platforms such as OpenCL. Releases of the CAMPAIGN library are freely available for download under the LGPL from https://simtk.org/home/campaign. Source code can also be obtained through anonymous subversion access as described on https://simtk.org/scm/?group_id=453. kjk33@cantab.net.
Reactive searching and infotaxis in odor source localization.
Directory of Open Access Journals (Sweden)
Nicole Voges
2014-10-01
Male moths aiming to locate pheromone-releasing females rely on stimulus-adapted search maneuvers complicated by a discontinuous distribution of pheromone patches. They alternate sequences of upwind surge when perceiving the pheromone and cross- or downwind casting when the odor is lost. We compare four search strategies: three reactive versus one cognitive. The former consist of pre-programmed movement sequences triggered by pheromone detections while the latter uses Bayesian inference to build spatial probability maps. Based on the analysis of triphasic responses of antennal lobe neurons (On, inhibition, Off), we propose three reactive strategies. One combines upwind surge (representing the On response to a pheromone detection) and spiral casting, only. The other two additionally include crosswind (zigzag) casting representing the Off phase. As cognitive strategy we use the infotaxis algorithm which was developed for searching in a turbulent medium. Detection events in the electroantennogram of a moth attached to a robot indirectly control this cyborg, depending on the strategy in use. The recorded trajectories are analyzed with regard to success rates, efficiency, and other features. In addition, we qualitatively compare our robotic trajectories to behavioral search paths. Reactive searching is more efficient (yielding shorter trajectories) for higher pheromone doses whereas cognitive searching works better for lower doses. With respect to our experimental conditions (2 m from starting position to pheromone source), reactive searching with crosswind zigzag yields the shortest trajectories (for comparable success rates). Assuming that the neuronal Off response represents a short-term memory, zigzagging is an efficient movement to relocate a recently lost pheromone plume. Accordingly, such reactive strategies offer an interesting alternative to complex cognitive searching.
Reactive searching and infotaxis in odor source localization.
Voges, Nicole; Chaffiol, Antoine; Lucas, Philippe; Martinez, Dominique
2014-10-01
Male moths aiming to locate pheromone-releasing females rely on stimulus-adapted search maneuvers complicated by a discontinuous distribution of pheromone patches. They alternate sequences of upwind surge when perceiving the pheromone and cross- or downwind casting when the odor is lost. We compare four search strategies: three reactive versus one cognitive. The former consist of pre-programmed movement sequences triggered by pheromone detections while the latter uses Bayesian inference to build spatial probability maps. Based on the analysis of triphasic responses of antennal lobe neurons (On, inhibition, Off), we propose three reactive strategies. One combines upwind surge (representing the On response to a pheromone detection) and spiral casting, only. The other two additionally include crosswind (zigzag) casting representing the Off phase. As cognitive strategy we use the infotaxis algorithm which was developed for searching in a turbulent medium. Detection events in the electroantennogram of a moth attached to a robot indirectly control this cyborg, depending on the strategy in use. The recorded trajectories are analyzed with regard to success rates, efficiency, and other features. In addition, we qualitatively compare our robotic trajectories to behavioral search paths. Reactive searching is more efficient (yielding shorter trajectories) for higher pheromone doses whereas cognitive searching works better for lower doses. With respect to our experimental conditions (2 m from starting position to pheromone source), reactive searching with crosswind zigzag yields the shortest trajectories (for comparable success rates). Assuming that the neuronal Off response represents a short-term memory, zigzagging is an efficient movement to relocate a recently lost pheromone plume. Accordingly, such reactive strategies offer an interesting alternative to complex cognitive searching.
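The surge/zigzag/spiral logic of the reactive strategies can be captured by a tiny state machine; the thresholds and mode names below are illustrative, not the authors' controller:

```python
def reactive_step(detection, t_lost):
    """One control step of a surge/zigzag/spiral reactive strategy, a sketch
    of the On/Off logic discussed above. Returns (heading, t_lost)."""
    if detection:
        return "upwind_surge", 0            # 'On' response: head upwind
    t_lost += 1
    if t_lost <= 10:
        return "crosswind_zigzag", t_lost   # 'Off' response: zigzag casting
    return "spiral_cast", t_lost            # plume presumed lost: spiral

# A short detection sequence: two hits, then the plume is lost.
t, headings = 0, []
for hit in [1, 1] + [0] * 14:
    heading, t = reactive_step(hit, t)
    headings.append(heading)
```

The infotaxis alternative replaces this fixed policy with moves chosen to maximize expected information gain over a source-location probability map.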
Neighbor Discovery Algorithm in Wireless Local Area Networks Using Multi-beam Directional Antennas
Wang, Jin; Peng, Wei; Liu, Song
2017-10-01
Neighbor discovery is an important step for Wireless Local Area Networks (WLAN), and the use of multi-beam directional antennas can greatly improve network performance. However, most neighbor discovery algorithms in WLAN based on multi-beam directional antennas work effectively only in synchronous systems, not in asynchronous ones, and collisions at the AP remain a bottleneck for neighbor discovery. In this paper, we propose two asynchronous neighbor discovery algorithms: the asynchronous hierarchical scanning (AHS) and the asynchronous directional scanning (ADS) algorithm. Both are based on a three-way handshaking mechanism. AHS and ADS reduce collisions at the AP in a hierarchical and a directional way, respectively. Finally, the performance of AHS and ADS is evaluated on OMNeT++, and different application scenarios and the factors affecting the performance of these algorithms are analyzed. The simulation results show that AHS is suitable for densely populated scenes around the AP, while ADS is suitable when most neighboring nodes are far from the AP.
Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points
Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.
2009-01-01
This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal-to-noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon-counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of the signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to the minimum of the solar cycle.
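The core S/N-thresholded source search can be illustrated as follows. This is a generic sketch, not the LEXTRCT implementation: the function name, box size, global-median background estimate and the `exclude` mask handling are all assumptions.

```python
import numpy as np

def find_bright_points(image, box=3, snr_min=5.0, exclude=None):
    """Find local sources whose photon-counting S/N exceeds snr_min (sketch).

    signal = counts in a box minus the local background estimate;
    noise  = sqrt(total counts), from Poisson statistics.
    exclude: optional boolean mask of pixels to ignore (e.g. saturated).
    """
    img = image.astype(float).copy()
    if exclude is not None:
        img[exclude] = 0.0
    bg = np.median(img)                       # crude global background level
    half = box // 2
    sources = []
    for y in range(half, img.shape[0] - half):
        for x in range(half, img.shape[1] - half):
            patch = img[y - half:y + half + 1, x - half:x + half + 1]
            total = patch.sum()
            signal = total - bg * box * box
            noise = np.sqrt(max(total, 1.0))  # Poisson noise on the counts
            if signal / noise >= snr_min and img[y, x] == patch.max():
                sources.append((y, x))
    return sources
```

Keeping only boxes whose centre is the local maximum avoids reporting the same source once per overlapping box.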
International Nuclear Information System (INIS)
Vaegler, Sven; Sauer, Otto; Stsepankou, Dzmitry; Hesser, Juergen
2015-01-01
The reduction of dose in cone beam computed tomography (CBCT) arises from decreasing the tube current for each projection as well as from reducing the number of projections. In order to maintain good image quality, sophisticated image reconstruction techniques are required. Prior Image Constrained Compressed Sensing (PICCS) incorporates prior images into the reconstruction algorithm and outperforms the widely used Feldkamp-Davis-Kress (FDK) algorithm when the number of projections is reduced. However, prior images that contain major variations are not appropriately considered so far in PICCS. We therefore propose the partial-PICCS (pPICCS) algorithm. This framework is a problem-specific extension of PICCS and additionally enables the incorporation of the reliability of the prior images. We assumed that the prior images are composed of areas with large and small deviations. Accordingly, a weighting matrix accounts for the assigned areas in the objective function. We applied our algorithm to the problem of image reconstruction from few views, both in simulations with a computer phantom and on clinical CBCT projections from a head-and-neck case. All prior images contained large local variations. The reconstructed images were compared to the reconstruction results of the FDK algorithm, Compressed Sensing (CS) and PICCS. To show the gain in image quality we compared image details with the reference image and used quantitative metrics (root-mean-square error (RMSE), contrast-to-noise ratio (CNR)). The pPICCS reconstruction framework yields images with substantially improved quality even when the number of projections is very small. The images contained less streaking, blurring and fewer inaccurately reconstructed structures compared to the images reconstructed by FDK, CS and conventional PICCS. The increased image quality is also reflected in large RMSE differences. We proposed a modification of the original PICCS algorithm. The pPICCS algorithm
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
Magnet sorting algorithms for insertion devices for the Advanced Light Source
International Nuclear Information System (INIS)
Humphries, D.; Hoyer, E.; Kincaid, B.; Marks, S.; Schlueter, R.
1994-01-01
Insertion devices for the Advanced Light Source (ALS) incorporate up to 3,000 magnet blocks each for pole energization. In order to minimize field errors, these magnets must be measured, sorted and assigned appropriate locations and orientation in the magnetic structures. Sorting must address multiple objectives, including pole excitation and minimization of integrated multipole fields from minor field components in the magnets. This is equivalent to a combinatorial minimization problem with a large configuration space. Multi-stage sorting algorithms use ordering and pairing schemes in conjunction with other combinatorial methods to solve the minimization problem. This paper discusses objective functions, solution algorithms and results of application to magnet block measurement data
Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A
2014-09-22
We present a numerical strategy to design fiber-based dual-pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented on a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
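The optimization loop can be illustrated with a toy real-coded genetic algorithm. This is a generic sketch only: the selection, crossover and mutation settings are assumptions, and neither the Grid deployment nor the fiber propagation solver of the paper is modelled.

```python
import random

def genetic_search(fitness, bounds, pop=30, gens=80, seed=1):
    """Toy real-coded genetic algorithm (illustrative sketch).

    bounds: list of (lo, hi) per parameter, e.g. wavelength, width, peak power.
    Uses truncation selection, uniform crossover and clamped Gaussian mutation;
    lower fitness is better.
    """
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)                  # best individuals first
        elite = P[: pop // 2]                # keep the better half (elitism)
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            # uniform crossover: each gene from either parent
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            k = rng.randrange(len(bounds))   # mutate one gene, clamped to bounds
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        P = elite + children
    return min(P, key=fitness)
```

In the paper's setting, `fitness` would score the simulated output spectrum against the two target peaks; here any function of the parameter vector works.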
Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad
2014-10-01
Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, an application of the locally linear model tree algorithm (LOLIMOT) was tested to evaluate the superiority of its performance in predicting a customer's credit status. The algorithm was adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases the prediction accuracy.
A Localization Algorithm Based on AOA for Ad-Hoc Sensor Networks
Directory of Open Access Journals (Sweden)
Yang Sun Lee
2012-01-01
Knowledge of the positions of sensor nodes in Wireless Sensor Networks (WSNs) makes possible many applications such as asset monitoring, object tracking and routing. In WSNs, errors occur in the measurement of distances and angles between pairs of nodes, and because these errors propagate to other nodes, estimating the positions of sensor nodes can be difficult and highly inaccurate. In this paper, we propose a localization algorithm based on both the distance and the angle to a landmark. We introduce a method for measuring the incident angle to a landmark, and an algorithm that exchanges physical data such as distances and incident angles and updates the position of a node by utilizing multiple landmarks and multiple paths to landmarks.
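Combining distance and incident angle to a landmark can be illustrated with a minimal sketch: each landmark yields a direct position fix, and noisy fixes from multiple landmarks and paths are averaged. The plain averaging is a hypothetical simplification of the paper's update scheme.

```python
import math

def estimate_position(observations):
    """Estimate a node position from (landmark_xy, distance, incident_angle) triples.

    Each landmark gives a direct fix x = lx + d*cos(a), y = ly + d*sin(a),
    where a is the incident angle measured at the landmark toward the node;
    the fixes are averaged over all landmarks/paths (illustrative only).
    """
    xs, ys = [], []
    for (lx, ly), d, a in observations:
        xs.append(lx + d * math.cos(a))
        ys.append(ly + d * math.sin(a))
    n = len(xs)
    return sum(xs) / n, sum(ys) / n
```

A weighted average (e.g. by inverse range, since angle errors grow with distance) would be a natural refinement.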
Directory of Open Access Journals (Sweden)
Abhijeet Ravankar
2016-05-01
Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted by the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Point (ICP) algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
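The SVD step at the heart of such line extraction can be sketched as a total-least-squares line fit. This is a generic illustration of fitting a line to a group of LRS points; the hopping/reverse-hop logic and the Hough stage are not modelled.

```python
import numpy as np

def fit_line_svd(points):
    """Fit a 2D line to range-sensor points via SVD (total least squares).

    The line direction is the right singular vector with the largest
    singular value of the centred point cloud.
    Returns (centroid, unit direction vector).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]          # principal direction of the centred points
    return centroid, direction
```

The smallest singular value measures the perpendicular scatter, so it can double as a collinearity test when deciding whether a point group forms a line segment.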
Local alternative sources for cogeneration combined heat and power system
Agll, Abdulhakim Amer
Global demand for energy continues to grow while countries around the globe race to reduce their reliance on fossil fuels and greenhouse gas emissions by implementing policy measures and advancing technology. Sustainability has become an important issue in transportation and infrastructure development projects. While several agencies are trying to incorporate a range of sustainability measures in their goals and missions, only a few planning agencies have been able to implement these policies, and they are far from perfect. The low rate of success in implementing sustainable policies is primarily due to an incomplete understanding of the system and the interaction between its various elements. Conventional planning efforts focus mainly on performance measures pertaining to the system and its impact on the environment, but seldom on the social and economic impacts. The objective of this study is to use clean and alternative energy, which can be produced from many sources, even from existing waste materials. One such pathway is using wastewater, animal and organic waste, or landfills to create biogas for energy production. There are three tasks in this study. The first evaluated the energy savings produced from combined hydrogen, heat, and power, and the mitigation of greenhouse gas emissions achieved by using local sustainable energy at the Missouri S&T campus to reduce energy consumption and fossil fuel usage. The second aimed to estimate energy recovery and power generation from an alternative energy source, using a Rankine steam cycle fueled by municipal solid waste in Benghazi, Libya. The last task is in progress. The results for the first two tasks have been presented.
A Hybrid DV-Hop Algorithm Using RSSI for Localization in Large-Scale Wireless Sensor Networks.
Cheikhrouhou, Omar; M Bhatti, Ghulam; Alroobaea, Roobaea
2018-05-08
With the increasing realization of the Internet-of-Things (IoT) and rapid proliferation of wireless sensor networks (WSN), estimating the location of wireless sensor nodes is emerging as an important issue. Traditional ranging based localization algorithms use triangulation for estimating the physical location of only those wireless nodes that are within one-hop distance from the anchor nodes. Multi-hop localization algorithms, on the other hand, aim at localizing the wireless nodes that can physically be residing at multiple hops away from anchor nodes. These latter algorithms have attracted a growing interest from research community due to the smaller number of required anchor nodes. One such algorithm, known as DV-Hop (Distance Vector Hop), has gained popularity due to its simplicity and lower cost. However, DV-Hop suffers from reduced accuracy due to the fact that it exploits only the network topology (i.e., number of hops to anchors) rather than the distances between pairs of nodes. In this paper, we propose an enhanced DV-Hop localization algorithm that also uses the RSSI values associated with links between one-hop neighbors. Moreover, we exploit already localized nodes by promoting them to become additional anchor nodes. Our simulations have shown that the proposed algorithm significantly outperforms the original DV-Hop localization algorithm and two of its recently published variants, namely RSSI Auxiliary Ranging and the Selective 3-Anchor DV-hop algorithm. More precisely, in some scenarios, the proposed algorithm improves the localization accuracy by almost 95%, 90% and 70% as compared to the basic DV-Hop, Selective 3-Anchor, and RSSI DV-Hop algorithms, respectively.
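The basic DV-Hop distance step that the enhanced algorithm builds on can be sketched as follows. This is a minimal illustration of the classic algorithm only; the paper's RSSI refinement and the promotion of localized nodes to anchors are not modelled.

```python
import math

def dv_hop_estimate(anchors, hop_counts, target_hops):
    """Classic DV-Hop distance estimation (illustrative sketch).

    anchors: {id: (x, y)} known anchor positions.
    hop_counts: {(i, j): hop count between anchors i and j}.
    target_hops: {id: hop count from the unknown node to anchor id}.
    Each anchor derives an average distance-per-hop from its distances to
    the other anchors; the node's range to it is then hops * hop_size.
    """
    dists = {}
    for i, (xi, yi) in anchors.items():
        num = den = 0.0
        for j, (xj, yj) in anchors.items():
            if i == j:
                continue
            num += math.hypot(xi - xj, yi - yj)  # true anchor-to-anchor distance
            den += hop_counts[(i, j)]
        hop_size = num / den                      # average distance per hop
        dists[i] = hop_size * target_hops[i]      # estimated node-to-anchor range
    return dists
```

The estimated ranges would then feed a trilateration (least-squares) step to obtain the node's coordinates.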
Directory of Open Access Journals (Sweden)
Ligang Cui
2013-01-01
The capacitated vehicle routing problem (CVRP) is the most classical vehicle routing problem (VRP), and many solution techniques have been proposed to solve it. In this paper, a new improved quantum evolution algorithm (IQEA) with a mixed local search procedure is proposed for solving CVRPs. First, an IQEA with a double-chain quantum chromosome, new quantum rotation schemes, and a self-adaptive quantum NOT gate is constructed to initialize and generate feasible solutions. Then, to further strengthen IQEA's searching ability, three local search procedures, 1-1 exchange, 1-0 exchange, and 2-OPT, are adopted. Experiments on a small case were conducted to analyze the sensitivity of the main parameters and compare the performance of the IQEA with different local search strategies. Together with results from the testing of CVRP benchmarks, the superiority of the proposed algorithm over PSO, SR-1, and SR-2 has been demonstrated. Finally, a profound analysis of the experimental results is presented and some suggestions for future research are given.
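Of the three local search procedures, 2-OPT is the easiest to sketch: reverse a route segment whenever doing so shortens the tour, until no improving move remains. This minimal illustration works on a single open route with a plain distance matrix and does not model the IQEA integration or capacity constraints.

```python
def two_opt(route, dist):
    """2-OPT local search on a route (illustrative sketch).

    route: list of node indices (endpoints fixed, tour not closed here).
    dist: symmetric distance matrix indexed by node.
    Repeatedly reverses the segment route[i..j] whenever replacing edges
    (i-1,i) and (j,j+1) by (i-1,j) and (i,j+1) shortens the route.
    """
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                before = dist[route[i - 1]][route[i]] + dist[route[j]][route[j + 1]]
                after = dist[route[i - 1]][route[j]] + dist[route[i]][route[j + 1]]
                if after < before - 1e-12:
                    route[i:j + 1] = reversed(route[i:j + 1])
                    improved = True
    return route
```

In a CVRP setting the same move is applied within each vehicle's route, alongside the 1-1 and 1-0 inter-route exchanges.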
Improved semianalytic algorithms for finding the flux from a cylindrical source
International Nuclear Information System (INIS)
Wallace, O.J.
1992-01-01
Hand-calculation methods involving semianalytic approximations of exact flux formulas continue to be useful in shielding calculations because they enable shield design personnel to make quick estimates of dose rates, check calculations made by more exact and time-consuming methods, and rapidly determine the scope of problems. They are also a valuable teaching tool. The most useful approximate flux formula is that for the flux at a lateral detector point from a cylindrical source with an intervening slab shield. Such an approximate formula is given by Rockwell. An improved formula for this case is given by Ono and Tsuro. Shure and Wallace also give this formula together with function tables and a detailed survey of its accuracy. The second section of this paper provides an algorithm for significantly improving the accuracy of the formula of Ono and Tsuro. The flux at a detector point outside the radial and axial extensions of a cylindrical source, again with an intervening slab shield, is another case of interest, but nowhere in the literature is this arrangement of source, shield, and detector point treated. In the third section of this paper, an algorithm for this case is given, based on superposition of sources and the algorithm of Section II. 6 refs., 1 fig., 1 tab
Nonlinear simulations of particle source effects on edge localized mode
Energy Technology Data Exchange (ETDEWEB)
Huang, J.; Tang, C. J. [College of Physical Science and Technology, Sichuan University, Chengdu 610065 (China); Key Laboratory of High Energy Density Physics and Technology of Ministry of Education, Sichuan University, Chengdu 610064 (China); Chen, S. Y., E-mail: sychen531@163.com [College of Physical Science and Technology, Sichuan University, Chengdu 610065 (China); Key Laboratory of High Energy Density Physics and Technology of Ministry of Education, Sichuan University, Chengdu 610064 (China); Southwestern Institute of Physics, Chengdu 610041 (China); Wang, Z. H. [Southwestern Institute of Physics, Chengdu 610041 (China)
2015-12-15
The effects of a particle source (PS) with different intensities and positions on Edge Localized Modes (ELMs) are systematically studied with the BOUT++ code. The results show that the ELM size strongly decreases with increasing PS intensity once the PS is located in the middle or at the bottom of the pedestal. The effects of the PS on ELMs depend on the position of the PS. When it is located at the top of the pedestal, peeling-ballooning (P-B) modes can extract more free energy from the pressure gradient and grow into a large filament at the initial crash phase, and the broadening of the mode spectrum can be suppressed by the PS, which leads to more energy loss. When it is located in the middle or at the bottom of the pedestal, the extraction of free energy by P-B modes can be suppressed, and a small filament is generated. During the turbulence transport phase, the broader mode spectrum suppresses the turbulence transport when the PS is located in the middle, while the zonal flow plays an important role in damping the turbulence transport when the PS is located at the bottom.
Chen, CHIEN-C.; Hui, Elliot; Okamoto, Garret
1992-01-01
Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be obtained via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.
A Modified Load Flow Algorithm in Power Systems with Alternative Energy Sources
International Nuclear Information System (INIS)
Contreras, D.L.; Cañedo, J.M.
2017-01-01
In this paper an algorithm for calculating the steady state of electrical networks including wind and photovoltaic generation is presented. The wind generators considered are asynchronous (squirrel cage and doubly fed) and synchronous generators using permanent magnets. The proposed algorithm is based on the formulation of nodal power injections, solved with the modified Newton-Raphson technique in its polar formulation using complex matrix notation. The power injection of each wind and photovoltaic generator is calculated independently in each iteration according to its particular mathematical model, which is generally non-linear. Results are presented for a 30-node test system. The computation time of the proposed algorithm is compared with the conventional methodology for including alternative energy sources in power flow studies. (author)
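The Newton iteration on nodal power injections can be illustrated on a minimal two-bus case. This is a hedged sketch only: flat start, a finite-difference Jacobian in rectangular coordinates, and an arbitrary lossless line admittance; it is not the paper's polar formulation with generator-specific injection models.

```python
import numpy as np

def solve_two_bus(y_line=-10j, s2=-(0.5 + 0.2j), tol=1e-10):
    """Minimal Newton-Raphson power flow on a two-bus system (illustrative only).

    Bus 1 is the slack (V1 = 1+0j); bus 2 is a PQ bus with specified complex
    injection s2 (negative for a load). The injection mismatch
    g(V2) = V2 * conj(Y21*V1 + Y22*V2) - s2 is driven to zero by Newton steps
    in (Re V2, Im V2), with a finite-difference Jacobian.
    """
    Y = np.array([[y_line, -y_line], [-y_line, y_line]])  # 2-bus admittance matrix
    v1, x = 1.0 + 0j, np.array([1.0, 0.0])                # flat start for V2

    def mismatch(x):
        v2 = x[0] + 1j * x[1]
        s = v2 * np.conj(Y[1, 0] * v1 + Y[1, 1] * v2)     # calculated injection
        return np.array([s.real - s2.real, s.imag - s2.imag])

    for _ in range(20):
        g = mismatch(x)
        if np.abs(g).max() < tol:
            break
        J = np.empty((2, 2))                              # finite-difference Jacobian
        for k in range(2):
            dx = np.zeros(2)
            dx[k] = 1e-7
            J[:, k] = (mismatch(x + dx) - g) / 1e-7
        x = x - np.linalg.solve(J, g)
    return x[0] + 1j * x[1]
```

A source-specific model would simply recompute `s2` from the current voltage at the start of each iteration, which is the structure the paper describes for wind and photovoltaic injections.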
Energy Technology Data Exchange (ETDEWEB)
Stavrov, Andrei; Yamamoto, Eugene [Rapiscan Systems, Inc., 14000 Mead Street, Longmont, CO, 80504 (United States)
2015-07-01
Radiation Portal Monitors (RPMs) with plastic detectors represent the main instruments used for primary border (customs) radiation control. RPMs are widely used because they are simple, reliable, relatively inexpensive and have a high sensitivity. However, experience using RPMs in various countries has revealed that the systems have some grave shortcomings. There is a dramatic decrease in the probability of detection of radioactive sources under high suppression of the natural gamma background (radiation control of heavy cargoes, containers and, especially, trains). NORM (Naturally Occurring Radioactive Material) present in objects under control triggers so-called 'nuisance alarms', requiring a secondary inspection for source verification. At a number of sites, the rate of such alarms is so high that it significantly complicates the work of customs and border officers. This paper presents a brief description of a new variant of the algorithm ASIA-New (New Advanced Source Identification Algorithm), which was developed by the authors and based on experimental test results. It also demonstrates the results of different tests and the capability of the new system to overcome the shortcomings stated above. The new electronics and ASIA-New enable RPMs to detect radioactive sources under high background suppression (tested at 15-30%) and to verify the detected NORM (KCl) and artificial isotopes (Co-57, Ba-133 and others). The new variant of ASIA is based on physical principles and does not require a lot of special tests to gather statistical data for its parameters, so the system can be easily installed in any RPM with plastic detectors. The algorithm was tested on 1,395 passages of different transports (cars, trucks and trailers) without radioactive sources, as well as on 4,015 passages of these transports with radioactive sources of different activity (Co-57, Ba-133, Cs-137, Co-60, Ra-226, Th-232) and with these sources masked by NORM (K-40).
A New Curve Tracing Algorithm Based on Local Feature in the Vectorization of Paper Seismograms
Directory of Open Access Journals (Sweden)
Maofa Wang
2014-02-01
Historical paper seismograms are a very important source of information for earthquake monitoring and prediction, and their vectorization is an important problem to be solved. Automatic tracing of waveform curves is a key technology for the vectorization of paper seismograms: it transforms an original scanned image into digital waveform data, and accurately tracing out all the key points of each curve in a seismogram is the foundation of the vectorization. In this paper, we present a new curve tracing algorithm based on local features, applied to the automatic extraction of earthquake waveforms from paper seismograms.
Iterative local Chi2 alignment algorithm for the ATLAS Pixel detector
Göttfert, Tobias
The existing local chi2 alignment approach for the ATLAS SCT detector was extended to the alignment of the ATLAS Pixel detector. This approach is linear, aligns modules separately, and uses distance of closest approach residuals and iterations. The derivation and underlying concepts of the approach are presented. To show the feasibility of the approach for Pixel modules, a simplified, stand-alone track simulation, together with the alignment algorithm, was developed with the ROOT analysis software package. The Pixel alignment software was integrated into Athena, the ATLAS software framework. First results and the achievable accuracy for this approach with a simulated dataset are presented.
Hua, Boyang; Wang, Yanbo; Park, Seongjin; Han, Kyu Young; Singh, Digvijay; Kim, Jin H; Cheng, Wei; Ha, Taekjip
2018-03-13
Here, we demonstrate that the use of the single-molecule centroid localization algorithm can improve the accuracy of fluorescence binding assays. Two major artifacts in this type of assay, i.e., nonspecific binding events and optically overlapping receptors, can be detected and corrected during analysis. The effectiveness of our method was confirmed by measuring two weak biomolecular interactions, the interaction between the B1 domain of streptococcal protein G and immunoglobulin G and the interaction between double-stranded DNA and the Cas9-RNA complex with limited sequence matches. This analysis routine requires little modification to common experimental protocols, making it readily applicable to existing data and future experiments.
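The centroid localization step itself is simple. A minimal sketch follows, with uniform intensity weighting and the background assumed already subtracted; real pipelines typically fit a Gaussian or use weighted centroids for better precision.

```python
import numpy as np

def centroid_localize(spot):
    """Intensity-weighted centroid of a fluorescence spot (generic sketch).

    spot: 2D array of background-subtracted intensities around one emitter.
    Returns sub-pixel (row, col) coordinates of the centroid.
    """
    spot = np.asarray(spot, dtype=float)
    total = spot.sum()
    rows, cols = np.indices(spot.shape)
    return (rows * spot).sum() / total, (cols * spot).sum() / total
```

Comparing centroids across frames is what lets overlapping receptors and nonspecific binders be flagged: their apparent positions scatter or drift instead of clustering at one sub-pixel location.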
Open-source chemogenomic data-driven algorithms for predicting drug-target interactions.
Hao, Ming; Bryant, Stephen H; Wang, Yanli
2018-02-06
While novel technologies such as high-throughput screening have advanced together with significant investment by pharmaceutical companies during the past decades, the success rate for drug development has not improved, prompting researchers to look for new strategies of drug discovery. Drug repositioning is a potential approach to solve this dilemma. However, experimental identification and validation of potential drug targets encoded by the human genome is both costly and time-consuming. Therefore, effective computational approaches have been proposed to facilitate drug repositioning, which have proved to be successful in drug discovery. Doubtlessly, the availability of open-accessible data from basic chemical biology research and the success of human genome sequencing are crucial to develop effective in silico drug repositioning methods allowing the identification of potential targets for existing drugs. In this work, we review several chemogenomic data-driven computational algorithms with publicly accessible source codes for predicting drug-target interactions (DTIs). We organize these algorithms by model properties and model evolutionary relationships. We re-implemented five representative algorithms in the R programming language and compared them by means of mean percentile ranking, a new recall-based evaluation metric in the DTI prediction research field. We anticipate that this review will be objective and helpful to researchers who would like to further improve existing algorithms or need to choose appropriate algorithms to infer potential DTIs in their projects. The source codes for DTI predictions are available at: https://github.com/minghao2016/chemogenomicAlg4DTIpred. Published by Oxford University Press 2018. This work is written by US Government employees and is in the public domain in the US.
Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.
Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J
2018-02-15
Electrically active brain regions can be located applying MUltiple SIgnal Classification (MUSIC) on magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents estimation of the true number of brain-signal sources accurately. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.
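The subspace machinery common to all MUSIC variants can be sketched as follows. This computes the conventional MUSIC pseudospectrum for a uniform linear array; it does not implement the recursive projections of RAP-MUSIC or the truncation step of TRAP-MUSIC, and the array geometry is an assumption for illustration.

```python
import numpy as np

def music_spectrum(R, n_sources, angles, d=0.5):
    """Conventional MUSIC pseudospectrum for a uniform linear array (sketch).

    R: sensor covariance matrix (m x m, Hermitian).
    n_sources: assumed dimension of the signal subspace.
    angles: candidate directions of arrival in radians.
    d: element spacing in wavelengths.
    Peaks occur where the steering vector is orthogonal to the noise subspace.
    """
    m = R.shape[0]
    w, v = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = v[:, : m - n_sources]          # noise-subspace eigenvectors
    p = []
    for th in angles:
        a = np.exp(2j * np.pi * d * np.arange(m) * np.sin(th))
        proj = En.conj().T @ a          # projection onto the noise subspace
        p.append(1.0 / np.real(proj.conj() @ proj))
    return np.array(p)
```

The deficiency TRAP-MUSIC corrects concerns how this signal-subspace dimension is estimated and re-projected across recursions, not the pseudospectrum formula itself.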
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
A Rule-Based Local Search Algorithm for General Shift Design Problems in Airport Ground Handling
DEFF Research Database (Denmark)
Clausen, Tommy
We consider a generalized version of the shift design problem where shifts are created to cover a multiskilled demand and fit the parameters of the workforce. We present a collection of constraints and objectives for the generalized shift design problem. A local search solution framework with multiple neighborhoods and a loosely coupled rule engine based on simulated annealing is presented. Computational experiments on real-life data from various airport ground handling organizations show the performance and flexibility of the proposed algorithm.
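The simulated-annealing core driving such a framework can be sketched generically. The cooling schedule, neighbourhood interface and parameters below are illustrative assumptions; the shift-design constraints and the rule engine themselves are not modelled.

```python
import math
import random

def anneal(cost, neighbour, x0, t0=10.0, cooling=0.995, steps=5000, seed=1):
    """Generic simulated-annealing loop (illustrative sketch).

    cost: objective to minimize; neighbour(x, rng): random neighbouring solution.
    Worsening moves are accepted with probability exp(-delta / temperature),
    and the temperature decays geometrically. Returns the best solution found.
    """
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                 # accept the move
            if fx < fbest:
                best, fbest = x, fx       # keep the best-ever solution
        t *= cooling
    return best, fbest
```

In the shift-design setting, `neighbour` would apply one of the multiple neighbourhood moves (e.g. shifting, widening, or merging shifts) and `cost` would aggregate the weighted constraint violations.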
A comparison of semiglobal and local dense matching algorithms for surface reconstruction
Directory of Open Access Journals (Sweden)
E. Dall'Asta
2014-06-01
Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of the last years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) algorithm, which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g., Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparison will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
Indoor Localization Algorithms for an Ambulatory Human Operated 3D Mobile Mapping System
Directory of Open Access Journals (Sweden)
Nicholas Corso
2013-12-01
Indoor localization and mapping is an important problem with many applications such as emergency response, architectural modeling, and historical preservation. In this paper, we develop an automatic, off-line pipeline for metrically accurate, GPS-denied, indoor 3D mobile mapping using a human-mounted backpack system consisting of a variety of sensors. There are three novel contributions in our proposed mapping approach. First, we present an algorithm which automatically detects loop closure constraints from an occupancy grid map. In doing so, we ensure that constraints are detected only in locations that are well conditioned for scan matching. Second, we address the problem of scan matching with a poor initial condition by presenting an outlier-resistant genetic scan-matching algorithm that accurately matches scans despite poor initialization. Third, we present two metrics based on the amount and complexity of overlapping geometry in order to vet the estimated loop closure constraints. By doing so, we automatically prevent erroneous loop closures from degrading the accuracy of the reconstructed trajectory. The proposed algorithms are experimentally verified using both controlled and real-world data. The end-to-end system performance is evaluated using 100 surveyed control points in an office environment, obtaining a mean accuracy of 10 cm. Experimental results are also shown on three additional datasets from real-world environments, including a 1500-meter trajectory in a warehouse-sized retail shopping center.
An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation
Asiri, Sharefa M.; Zayane, Chadia; Laleg-Kirati, Taous-Meriem
2015-01-01
Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the observers' scope has recently been extended to the estimation of some unknowns of systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate the state and source components for a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and an insight into the impact of the measurements' size and location is provided.
An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation
Asiri, Sharefa M.
2015-08-31
Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the observers' scope has recently been extended to the estimation of some unknowns of systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate the state and source components for a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and an insight into the impact of the measurements' size and location is provided.
Directory of Open Access Journals (Sweden)
Mohammad Saied Fallah Niasar
2017-02-01
The school bus routing problem (SBRP) represents a variant of the well-known vehicle routing problem. The main goal of this study is to pick up students allocated to bus stops and generate routes, including the selected stops, in order to carry students to school. In this paper, we propose a simple but effective metaheuristic approach with two features: first, it utilizes large neighborhood structures for a deeper exploration of the search space; second, the proposed heuristic executes an efficient transition between the feasible and infeasible portions of the search space. Exploration of the infeasible area is controlled by a dynamic penalty function to convert an infeasible solution into a feasible one. Two metaheuristics, called N-ILS (a variant of Nearest Neighbourhood with Iterated Local Search) and I-ILS (a variant of Insertion with Iterated Local Search), are proposed to solve the SBRP. Our experimental procedure is based on two data sets. The results show that N-ILS is able to obtain better solutions in shorter computing times. Additionally, N-ILS appears to be very competitive in comparison with the best existing metaheuristics suggested for the SBRP.
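The two-phase pattern described above (local search to a local optimum, then a large-neighbourhood perturbation to escape it) can be sketched generically. The 1-D stop coordinates, 2-opt local search, and segment-relocation kick below are illustrative stand-ins, not the paper's actual SBRP operators:

```python
import random

def iterated_local_search(cost, initial, perturb, local_search, iters=100, seed=0):
    """Generic ILS: alternate local search and perturbation, keep the best route."""
    rng = random.Random(seed)
    best = local_search(initial)
    for _ in range(iters):
        candidate = local_search(perturb(best, rng))
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Toy instance: visiting order of 1-D "stops"; cost = total travel distance.
stops = [3, 9, 1, 7, 5]

def cost(route):
    return sum(abs(a - b) for a, b in zip(route, route[1:]))

def local_search(route):
    # Repeated 2-opt passes to a local optimum: reverse segments that improve cost.
    route = list(route)
    improved = True
    while improved:
        improved = False
        for i in range(len(route) - 1):
            for j in range(i + 2, len(route) + 1):
                cand = route[:i] + route[i:j][::-1] + route[j:]
                if cost(cand) < cost(route):
                    route, improved = cand, True
    return route

def perturb(route, rng):
    # Large-neighbourhood "kick": relocate a random segment elsewhere.
    route = list(route)
    i, j = sorted(rng.sample(range(len(route)), 2))
    seg, rest = route[i:j + 1], route[:i] + route[j + 1:]
    k = rng.randrange(len(rest) + 1)
    return rest[:k] + seg + rest[k:]

best = iterated_local_search(cost, stops, perturb, local_search)
print(best, cost(best))
```

The dynamic-penalty mechanism in the paper would simply add a weighted infeasibility term to `cost`; here the toy instance has no constraints, so the skeleton stays minimal.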
Sub-OBB based object recognition and localization algorithm using range images
International Nuclear Information System (INIS)
Hoang, Dinh-Cuong; Chen, Liang-Chia; Nguyen, Thanh-Hung
2017-01-01
This paper presents a novel approach to recognizing and estimating the pose of 3D objects in cluttered range images. The key technical breakthrough of the developed approach is to enable robust object recognition and localization under undesirable conditions such as environmental illumination variation and partial optical occlusion of the object. First, the acquired point clouds are segmented into individual object point clouds based on the developed 3D object segmentation for randomly stacked objects. Second, an efficient shape-matching algorithm, called Sub-OBB based object recognition, using the proposed oriented bounding box (OBB) regional area-based descriptor is performed to reliably recognize the object. Then, the 3D position and orientation of the object can be roughly estimated by aligning the OBB of the segmented object point cloud with the OBB of the matched point cloud in a database generated from a CAD model and a 3D virtual camera. To estimate the accurate pose of the object, the iterative closest point (ICP) algorithm is used to match the object model with the segmented point clouds. Feasibility tests on several scenarios verify that the developed approach is feasible for object recognition and pose localization.
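The oriented bounding box at the core of the descriptor is commonly computed from a PCA of the segmented point cloud. The sketch below (synthetic box-shaped cloud, not the paper's range images) recovers an OBB's centroid, axes, and extents this way:

```python
import numpy as np

def oriented_bounding_box(points):
    """Approximate OBB of a point cloud via PCA: centroid, axes, half-extents."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal axes are the eigenvectors of the covariance matrix.
    _, vecs = np.linalg.eigh(np.cov((pts - centroid).T))
    local = (pts - centroid) @ vecs                  # project into the OBB frame
    half_extents = (local.max(axis=0) - local.min(axis=0)) / 2
    return centroid, vecs, half_extents

# Synthetic cloud: a 4 x 2 x 1 box, rotated 30 degrees about z and translated.
rng = np.random.default_rng(0)
cube = rng.uniform(-0.5, 0.5, (2000, 3)) * [4.0, 2.0, 1.0]
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
cloud = cube @ R.T + [1.0, 2.0, 3.0]

centroid, axes, extents = oriented_bounding_box(cloud)
print(centroid.round(2), sorted((2 * extents).round(1)))
```

Aligning two such OBBs (database model vs. segmented cloud) gives the rough pose that ICP then refines, as in the abstract's pipeline.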
Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong
2018-01-01
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting and overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
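A minimal 1-D version of the LWLR step illustrates why the method is non-parametric: each query solves its own kernel-weighted least-squares problem, with no global model fitted. The Gaussian kernel, bandwidth `tau`, and toy data are assumptions for illustration:

```python
import numpy as np

def lwlr_predict(x_query, X, y, tau=0.5):
    """Locally weighted linear regression: solve a weighted least-squares
    problem centred on the query point; nearby samples get higher weight."""
    Xb = np.column_stack([np.ones(len(X)), X])           # add bias column
    xq = np.array([1.0, x_query])
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))   # Gaussian kernel weights
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y) # normal equations
    return xq @ theta

# Noisy samples of a nonlinear function; LWLR tracks it without a global model.
rng = np.random.default_rng(1)
X = np.linspace(0, 6, 80)
y = np.sin(X) + rng.normal(0, 0.05, X.size)

pred = lwlr_predict(3.0, X, y)
print(round(float(pred), 2), round(np.sin(3.0), 2))
```

In the paper's setting the inputs are constructed cross-domain features rather than a scalar, but the weighted solve is the same.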
Directory of Open Access Journals (Sweden)
Xu Yu
2018-01-01
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting and overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
A local adaptive algorithm for emerging scale-free hierarchical networks
International Nuclear Information System (INIS)
Gomez Portillo, I J; Gleiser, P M
2010-01-01
In this work we study a growing network model with chaotic dynamical units that evolves using a local adaptive rewiring algorithm. Using numerical simulations we show that the model allows for the emergence of hierarchical networks. First, we show that the networks that emerge with the algorithm present a wide degree distribution that can be fitted by a power law function, and thus are scale-free networks. Using the LaNet-vi visualization tool we present a graphical representation that reveals a central core formed only by hubs, and also show the presence of a preferential attachment mechanism. In order to present a quantitative analysis of the hierarchical structure we analyze the clustering coefficient. In particular, we show that as the network grows the clustering becomes independent of system size, and also presents a power law decay as a function of the degree. Finally, we compare our results with a similar version of the model that has continuous non-linear phase oscillators as dynamical units. The results show that local interactions play a fundamental role in the emergence of hierarchical networks.
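The preferential-attachment mechanism the authors identify can be illustrated with a static growth sketch (no chaotic units or adaptive rewiring, so this is a simplification of the paper's model): new nodes attach to existing ones with probability proportional to degree, and hubs emerge, producing the wide power-law-like degree distribution mentioned above.

```python
import random
from collections import Counter

def preferential_attachment(n, m=2, seed=0):
    """Barabasi-Albert-style growth: each new node attaches to m existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    targets = list(range(m))    # start from a small seed set
    repeated = []               # node ids repeated once per unit of degree
    degree = Counter()
    for new in range(m, n):
        for t in set(targets):
            degree[new] += 1
            degree[t] += 1
            repeated.extend([new, t])
        # Sampling uniformly from this list is degree-weighted sampling,
        # so high-degree nodes attract more links: hubs emerge.
        targets = [rng.choice(repeated) for _ in range(m)]
    return degree

deg = preferential_attachment(5000)
hist = Counter(deg.values())
print(max(deg.values()), hist[2])
```

Checking that `hist[k]` decays roughly as a power of `k` (and that the maximum degree far exceeds the mean of about 4) reproduces the scale-free signature the abstract describes.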
Distributed, signal strength-based indoor localization algorithm for use in healthcare environments.
Wyffels, Jeroen; De Brabanter, Jos; Crombez, Pieter; Verhoeve, Piet; Nauwelaers, Bart; De Strycker, Lieven
2014-11-01
In current healthcare environments, a trend toward mobile and personalized interactions between people and nurse call systems is strongly noticeable. Therefore, it should be possible to locate patients at all times and in all places throughout the care facility. This paper aims at describing a method by which a mobile node can locate itself indoors, based on signal strength measurements and a minimal amount of yes/no decisions. The algorithm has been developed specifically for use in a healthcare environment. With extensive testing and statistical support, we prove that our algorithm can be used in a healthcare setting with an envisioned level of localization accuracy up to room level (or region level in a corridor), while avoiding heavy investments, since the hardware of an existing nurse call network can be reused. The chosen approach leads to very high scalability, since thousands of mobile nodes can locate themselves. Network timing issues and localization update delays are avoided, which ensures that a patient can receive the needed care in a time- and resource-efficient way.
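As a rough illustration of room-level localization from signal strength, the sketch below classifies a node to the room whose anchor shows the strongest averaged RSSI. The log-distance path-loss model, anchor layout, and parameters are hypothetical, not the paper's decision procedure:

```python
import math
import random

def rssi(d, tx=-40.0, n=2.5, sigma=2.0, rng=random):
    """Hypothetical log-distance path-loss model with Gaussian shadowing (dB)."""
    return tx - 10 * n * math.log10(max(d, 0.1)) + rng.gauss(0, sigma)

# Fixed nurse-call nodes acting as per-room anchors (positions in metres).
anchors = {"room_A": (2.0, 2.0), "room_B": (8.0, 2.0), "corridor": (5.0, 6.0)}

def locate(xy, samples=20, rng=None):
    """Room-level decision: pick the anchor with the strongest averaged RSSI."""
    rng = rng or random.Random(7)
    best, best_rssi = None, -1e9
    for room, (ax, ay) in anchors.items():
        d = math.hypot(xy[0] - ax, xy[1] - ay)
        # Averaging several samples suppresses fading before the comparison.
        avg = sum(rssi(d, rng=rng) for _ in range(samples)) / samples
        if avg > best_rssi:
            best, best_rssi = room, avg
    return best

print(locate((2.5, 2.5)))
```

Because each mobile node only compares its own measurements, the scheme scales to many nodes without central coordination, which matches the scalability argument in the abstract.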
Jun, James Jaeyoon; Longtin, André; Maler, Leonard
2013-01-01
In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and distance, but the difficulty of source localization increases if there is an additional dependence on the orientation of the signal source. In such cases, the signal source can be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store the RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm finds the dipole location with the closest matching normalized RSIs in the LUT, and further refines the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real time, as each fish can be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal's positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole source
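The LUT idea can be sketched compactly: precompute normalized intensity patterns for a grid of candidate dipole poses, then match a measurement to the nearest normalized pattern. The cos(·)/r² intensity model and four-detector layout below are simplified assumptions, not the authors' derived model:

```python
import numpy as np

def dipole_rsi(pos, angle, detectors):
    """Intensity of an idealized 2-D dipole at each detector: |cos| of the
    angle between bearing and dipole axis, falling off as 1/r^2 (simplified)."""
    d = detectors - pos
    r = np.linalg.norm(d, axis=1)
    bearing = np.arctan2(d[:, 1], d[:, 0])
    return np.abs(np.cos(bearing - angle)) / r ** 2

detectors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

# Lookup table of normalized patterns over a coarse (x, y, angle) grid.
grid = [(x, y, a) for x in np.linspace(1, 9, 17)
                  for y in np.linspace(1, 9, 17)
                  for a in np.linspace(0, np.pi, 13)[:-1]]
lut = np.array([dipole_rsi(np.array([x, y]), a, detectors) for x, y, a in grid])
lut /= np.linalg.norm(lut, axis=1, keepdims=True)   # normalize away amplitude

def localize(measured):
    """Nearest normalized pattern in the LUT -> candidate pose."""
    m = measured / np.linalg.norm(measured)
    return grid[int(np.argmin(np.linalg.norm(lut - m, axis=1)))]

true = (4.5, 6.0, np.pi / 3)
est = localize(dipole_rsi(np.array(true[:2]), true[2], detectors))
print(est)
```

The paper's second stage, refining around the coarse match at higher resolution, would simply rebuild a finer local grid around `est`.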
Directory of Open Access Journals (Sweden)
James Jaeyoon Jun
In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and distance, but the difficulty of source localization increases if there is an additional dependence on the orientation of the signal source. In such cases, the signal source can be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store the RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm finds the dipole location with the closest matching normalized RSIs in the LUT, and further refines the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real time, as each fish can be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal's positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole
Bai, Mingsian R; Lai, Chang-Sheng; Wu, Po-Chen
2017-07-01
Circular microphone arrays (CMAs) are sufficient in many immersive audio applications because azimuthal angles of sources are considered more important than elevation angles on those occasions. However, the fact that CMAs do not resolve the elevation angle well can be a limitation for applications which involve three-dimensional sound images. This paper proposes a 2.5-dimensional (2.5-D) CMA comprised of a CMA and a vertical logarithmic-spacing linear array (LLA) on the top. In the localization stage, two delay-and-sum beamformers are applied to the CMA and the LLA, respectively. The direction of arrival (DOA) is estimated from the product of the two array output signals. In the separation stage, Tikhonov regularization and convex optimization are employed to extract the source amplitudes on the basis of the estimated DOA. The extracted signals from the two arrays are further processed by the normalized least-mean-square algorithm with internal iteration to yield the source signal with improved quality. To validate the 2.5-D CMA experimentally, a three-dimensionally printed circular array comprised of a 24-element CMA and an eight-element LLA is constructed. An objective perceptual evaluation of speech quality test and a subjective listening test are also undertaken.
Evaluation of parasitic contamination from local sources of drinking ...
African Journals Online (AJOL)
A survey on the parasitic contamination of drinking-water sources was carried out ... the extent of contamination of these water sources and their public health implication. ... of the water bodies and boil their drinking-water before consumption.
Chen, Xin; Wang, Shuhong; Liu, Zhen; Wei, Xizhang
2017-07-01
Localization of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in recently proposed phase-based algorithms. In this paper, by using the centro-symmetry of a fixed uniform circular array (UCA) with an even number of sensors, the source's angles and range can be decoupled, and a novel ambiguity-resolving approach is proposed for phase-based algorithms of 3-D source localization (azimuth angle, elevation angle, and range). In the proposed method, by using the cosine property of unambiguous phase differences, ambiguity searching and actual-value matching are first employed to obtain the actual phase differences and the corresponding source angles. Then, the unambiguous angles are utilized to estimate the source's range based on a one-dimensional multiple signal classification (1-D MUSIC) estimator. Finally, simulation experiments investigate the influence of search step size and SNR on the performance of ambiguity resolution and demonstrate the satisfactory estimation performance of the proposed method.
An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.
Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D
2016-05-01
Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be
Tatjewski, Marcin; Kierczak, Marcin; Plewczynski, Dariusz
2017-01-01
Here, we present two perspectives on the task of predicting post-translational modifications (PTMs) from local sequence fragments using machine learning algorithms. The first is a description of the fundamental steps required to construct a PTM predictor from the very beginning. These steps include data gathering, feature extraction, and machine-learning classifier selection. The second part of our work contains a detailed discussion of more advanced problems encountered in the PTM prediction task. Probably the most challenging issues covered here are: (1) how to address the training data class imbalance problem (we also present statistics describing the problem); (2) how to properly set up cross-validation folds with an approach that takes into account the homology of protein data records (to address this problem we present our folds-over-clusters algorithm); and (3) how to efficiently reach for new sources of learning features. The presented techniques and notes resulted from intense studies in the field, performed by our and other groups, and can be useful both for researchers beginning in the field of PTM prediction and for those who want to extend the repertoire of their research techniques.
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources for river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately figure out the pollutant amounts or positions, whether for a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5 %. For cases with multiple point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist. However, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5 %, which shows that the established source identification model can be used to direct emergency responses.
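A minimal version of this scheme couples a genetic search with an analytic transport solution used as the forward model inside the objective function. The instantaneous point-release solution, parameter values, and GA operators below are illustrative assumptions, not the paper's exact formulation:

```python
import math
import random

def concentration(M, x0, x, t, u=1.0, D=0.5, k=0.0):
    """Analytic 1-D unsteady solution for an instantaneous release of mass M
    at x0: advection u, dispersion D, first-order decay k."""
    return (M / math.sqrt(4 * math.pi * D * t)
            * math.exp(-(x - x0 - u * t) ** 2 / (4 * D * t)) * math.exp(-k * t))

# Synthetic "observed" concentrations from a hidden source (M=20, x0=3).
obs = [(x, t, concentration(20.0, 3.0, x, t))
       for x in (5.0, 8.0, 11.0) for t in (2.0, 5.0)]

def misfit(ind):
    M, x0 = ind
    return sum((concentration(M, x0, x, t) - c) ** 2 for x, t, c in obs)

def genetic_search(pop_size=40, gens=200, seed=3):
    """Basic GA: elitist selection, uniform crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(1, 50), rng.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=misfit)
        elite = pop[:pop_size // 4]            # keep the fittest quarter
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = (a[0] if rng.random() < 0.5 else b[0],
                     a[1] if rng.random() < 0.5 else b[1])
            children.append((child[0] + rng.gauss(0, 0.5),
                             child[1] + rng.gauss(0, 0.2)))
        pop = elite + children
    return min(pop, key=misfit)

M_est, x0_est = genetic_search()
print(round(M_est, 1), round(x0_est, 1))
```

Because the forward model is an analytic formula, each fitness evaluation is cheap, which is what makes the GA search practical in an emergency-response setting.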
Multi-sources model and control algorithm of an energy management system for light electric vehicles
International Nuclear Information System (INIS)
Hannan, M.A.; Azidin, F.A.; Mohamed, A.
2012-01-01
Highlights: ► An energy management system (EMS) is developed for a scooter under normal and heavy power load conditions. ► The battery, FC, SC, EMS, DC machine and vehicle dynamics are modeled and designed for the system. ► State-based logic control algorithms provide an efficient and feasible multi-source EMS for light electric vehicles. ► Vehicle’s speed and power are closely matched with the ECE-47 driving cycle under normal and heavy load conditions. ► Sources of energy changeover occurred at 50% of the battery state of charge level in heavy load conditions. - Abstract: This paper presents the multi-source energy models and rule-based feedback control algorithm of an energy management system (EMS) for a light electric vehicle (LEV), i.e., a scooter. The multiple sources of energy, such as a battery, fuel cell (FC) and super-capacitor (SC), the EMS and power controller, the DC machine and the vehicle dynamics are designed and modeled using MATLAB/SIMULINK. The developed control strategies continuously support the EMS of the multiple sources of energy for a scooter under normal and heavy power load conditions. The performance of the proposed system is analyzed and compared with that of the ECE-47 test drive cycle in terms of vehicle speed and load power. The results show that the designed vehicle’s speed and load power closely match those of the ECE-47 test driving cycle under normal and heavy load conditions. This study’s results suggest that the proposed control algorithm provides an efficient and feasible EMS for LEVs.
Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong
2016-06-06
We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The method of source location is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even for long sensing ranges.
Directory of Open Access Journals (Sweden)
Miao Sun
2016-06-01
We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The method of source location is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even for long sensing ranges.
International Nuclear Information System (INIS)
Vieira, Jose Wilson; Leal Neto, Viriato; Lima Filho, Jose de Melo; Lima, Fernando Roberto de Andrade
2013-01-01
This paper presents an algorithm for a planar and isotropic radioactive source, built by subjecting the standard Gaussian probability density function (PDF) to a translation method which displaces its maximum across its domain, changes its intensity, and makes the dispersion around the mean right-asymmetric. The algorithm was used to generate samples of photons emerging from a plane and reaching a semicircle surrounding a voxel phantom. The PDF describing this problem is already known, but the random number generating function (FRN) associated with it cannot be deduced by direct MC techniques. This is a significant problem because it can be adapted to simulations involving natural terrestrial radiation or accidents in medical establishments or industries where radioactive material spreads in a plane. Some attempts to obtain an FRN for the PDF of the problem have already been implemented by the Research Group in Numerical Dosimetry (GND) from Recife-PE, Brazil, always using the MC rejection sampling technique. This article followed the methodology of previous work, except on one point: the PDF of the problem was replaced by a translated normal PDF. To perform dosimetric comparisons, we used two MCES: the MSTA (MASH standing, composed of the adult male voxel phantom MASH (male mesh) in orthostatic position, available from the Department of Nuclear Energy (DEN) of the Federal University of Pernambuco (UFPE), coupled to the EGSnrc MC code and the GND planar source based on the rejection technique) and the MSTA N T. The two MCES are similar in all respects except for the FRN used in the planar source. The results presented and discussed in this paper establish the new algorithm for a planar source to be used by the GND.
Barrett, Steven R. H.; Britter, Rex E.
Prediction of long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean
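The decomposition being replaced can be sketched numerically: a line source is the integral of point-source kernels along its length, which is exactly the costly step the analytical solutions avoid. The crosswind-Gaussian kernel with linearly growing σy below is a simplified stand-in for AERMOD/ADMS physics, not their actual formulations:

```python
import numpy as np

def point_plume(q, x, y, u=5.0, sigma0=2.0):
    """Ground-level concentration from a point source of strength q
    (simplified crosswind-Gaussian kernel; sigma_y grows linearly downwind)."""
    sigma_y = sigma0 + 0.1 * np.maximum(x, 1e-6)
    return q / (np.sqrt(2 * np.pi) * sigma_y * u) * np.exp(-y ** 2 / (2 * sigma_y ** 2))

def line_source(Q, length, x_r, y_r, n=2000):
    """Crosswind line source as a numerical integral of point-source kernels."""
    ys = np.linspace(-length / 2, length / 2, n)
    vals = point_plume(Q / length, x_r, y_r - ys)         # per-unit-length strength
    dy = ys[1] - ys[0]
    return float(((vals[:-1] + vals[1:]) / 2).sum() * dy) # trapezoidal rule

# Far downwind, a 100 m crosswind line with total emission Q approaches the
# concentration from a single point source of the same total strength.
c_line = line_source(Q=1.0, length=100.0, x_r=5000.0, y_r=0.0)
c_point = float(point_plume(1.0, 5000.0, 0.0))
print(round(c_line / c_point, 3))
```

Replacing this per-receptor, per-hour quadrature with a closed-form expression, parameterized once from a point-source run, is what delivers the speed-up the paper targets.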
Provost, Floriane; Hibert, Clément; Malet, Jean-Philippe; Stumpf, André; Doubre, Cécile
2016-04-01
Different studies have shown the presence of microseismic activity in soft-rock landslides. The seismic signals exhibit significantly different features in the time and frequency domains, which allows their classification and interpretation. Most of the classes can be associated with different mechanisms of deformation occurring within and at the surface (e.g. rockfall, slide-quake, fissure opening, fluid circulation). However, some signals remain not fully understood, and some classes contain too few examples for any interpretation. To move toward a more complete interpretation of the links between the dynamics of soft-rock landslides and the physical processes controlling their behaviour, a complete catalog of the endogenous seismicity is needed. We propose a multi-class detection method based on the random forests algorithm to automatically classify the source of seismic signals. Random forests is a supervised machine learning technique based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets including each of the target classes. In the case of seismic signals, the attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The random forest classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, neural networks) and requires no fine tuning. Furthermore, it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied to the seismicity recorded at the Super-Sauze landslide between 2013 and 2015. We selected a dozen seismic signal features that precisely characterize the spectral content of the signals (e.g. central frequency, spectrum width, energy in several frequency bands, spectrogram shape, spectrum local and global maxima
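A minimal version of this pipeline, with synthetic two-class "events" and a few spectral attributes standing in for the paper's feature set, can be sketched with scikit-learn's random forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def spectral_features(signal, fs=100.0):
    """Toy attribute vector: central frequency, spectrum width, band energies."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    p = spec / spec.sum()
    fc = (freqs * p).sum()                          # spectral centroid
    width = np.sqrt(((freqs - fc) ** 2 * p).sum())  # spectral spread
    bands = [spec[(freqs >= lo) & (freqs < hi)].sum()
             for lo, hi in [(0, 10), (10, 25), (25, 50)]]
    return [fc, width, *bands]

def synth(kind):
    """Two synthetic event types: low-frequency emergent vs high-frequency impulsive."""
    t = np.arange(512) / 100.0
    if kind == 0:
        return np.sin(2 * np.pi * 5 * t) * np.exp(-t) + 0.1 * rng.standard_normal(t.size)
    return np.sin(2 * np.pi * 35 * t) * np.exp(-8 * t) + 0.1 * rng.standard_normal(t.size)

labels = [0, 1] * 100
X = np.array([spectral_features(synth(k)) for k in labels])
y = np.array(labels)

# Train on the first 150 events, evaluate on the held-out 50.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], y[:150])
print(clf.score(X[150:], y[150:]))
```

The real catalog adds many more classes and attributes (waveform shape, multi-station observations), but the forest's out-of-the-box behaviour, no fine tuning required, is the property the abstract highlights.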
Open Source Communities in Technical Writing: Local Exigence, Global Extensibility
Conner, Trey; Gresham, Morgan; McCracken, Jill
2011-01-01
By offering open-source software (OSS)-based networks as an affordable technology alternative, we partnered with a nonprofit community organization. In this article, we narrate the client-based experiences of this partnership, highlighting the ways in which OSS and open-source culture (OSC) transformed our students' and our own expectations of…
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
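The second scenario, recovering physical structure from a connectivity matrix alone, can be illustrated with a standard dimensionality reduction tool. The sketch below is a toy under stated assumptions, not the paper's method: it scrambles a one-dimensional chain of sample pieces and recovers their order from neighbor adjacency via a Laplacian eigenmap (the Fiedler vector):

```python
import numpy as np

# A chain of n sample pieces; adjacency encodes which pieces were spatial
# neighbors before the sample was scrambled.
n = 30
rng = np.random.default_rng(0)
perm = rng.permutation(n)                          # scrambled piece labels
A = np.zeros((n, n))
for i in range(n - 1):                             # connect true neighbors
    A[perm[i], perm[i + 1]] = A[perm[i + 1], perm[i]] = 1.0

# Laplacian eigenmap: the eigenvector of the second-smallest eigenvalue
# (the Fiedler vector) varies monotonically along a chain, so sorting by it
# recovers the spatial order of the scrambled pieces (up to reversal).
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)
order = np.argsort(eigvecs[:, 1])
```

The same spectral idea generalizes to two and three dimensions by keeping additional eigenvectors as coordinates.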
Wodecki, Jacek; Michalak, Anna; Zimroz, Radoslaw
2018-03-01
Harsh industrial conditions in underground mining make local damage detection in heavy-duty machinery difficult. For vibration signals, one of the most intuitive approaches to obtaining a signal with the expected properties, such as clearly visible informative features, is prefiltration with an appropriately designed filter. The design of such filters is a broad field of research in its own right. In this paper the authors propose a novel approach to dedicated optimal filter design using a progressive genetic algorithm. The presented method is fully data-driven and requires no prior knowledge of the signal. It has been tested against a set of real and simulated data, and its effectiveness has been proven for both the healthy and the damaged case. A termination criterion for the evolution process was developed, and a diagnostic decision-making feature is proposed for determining the final result.
Assessment of Cooperative and Heterogeneous Indoor Localization Algorithms with Real Radio Devices
DEFF Research Database (Denmark)
Nielsen, Jimmy Jessen; Noureddine, Hadi; Amiot, Nicolas
2014-01-01
In this paper we present results of real-life localization experiments performed in an unprecedented cooperative and heterogeneous wireless context. The experiments covered measurements of different radio devices packed together on a trolley, emulating a multi-standard Mobile Terminal (MT) along...... representative trajectories in a crowded office environment. Among all the radio access technologies involved in this campaign (including LTE, WiFi...), the focus is herein put mostly on Impulse Radio - Ultra Wideband (IR-UWB) and ZigBee sub-systems, which are enabled with peer-to-peer ranging capabilities based...... on Time of Arrival (ToA) estimation and Received Signal Strength (RSS) measurements respectively. Single-link model parameters are preliminarily drawn and discussed. In comparison with existing similar campaigns, new algorithms are also applied to the measurement data, showing the interest of advanced de
Directory of Open Access Journals (Sweden)
Yuliang Su
2015-04-01
Full Text Available A turning machine tool is a new type of machine tool equipped with more than one spindle and turret. The distinctive simultaneous and parallel processing abilities of the turning machine tool increase the complexity of process planning. The operations must not only be sequenced to satisfy precedence constraints, but also scheduled against multiple objectives, such as minimizing machining cost and maximizing the utilization of the turning machine tool. To solve this problem, a hybrid genetic algorithm was proposed to generate optimal process plans based on a mixed 0-1 integer programming model. An operation precedence graph is used to represent precedence constraints and to help generate a feasible initial population for the hybrid genetic algorithm. An encoding strategy based on this data structure was developed to represent process plans digitally and form the solution space. In addition, a local search approach for optimizing the assignments of available turrets is added to incorporate scheduling into process planning. A real-world case is used to show that the proposed approach avoids infeasible solutions and effectively generates a globally optimal process plan.
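The role of the operation precedence graph in seeding a feasible initial population can be sketched as follows; the operations and their prerequisites are invented for illustration and do not come from the paper's case study:

```python
import random

# Toy operation precedence graph: operation -> list of prerequisite operations
prereqs = {"rough_turn": [], "drill": ["rough_turn"], "finish_turn": ["rough_turn"],
           "thread": ["finish_turn"], "inspect": ["drill", "thread"]}

def random_feasible_sequence(prereqs, rng=random.Random(0)):
    """Sample a random topological order: a feasible chromosome for the GA."""
    done, seq = set(), []
    while len(seq) < len(prereqs):
        ready = [op for op in prereqs
                 if op not in done and all(p in done for p in prereqs[op])]
        op = rng.choice(ready)          # random pick keeps the population diverse
        done.add(op)
        seq.append(op)
    return seq

seq = random_feasible_sequence(prereqs)
```

Each call yields a precedence-respecting sequence, so repeated sampling fills the initial population with feasible chromosomes only.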
Directory of Open Access Journals (Sweden)
Dong-Sup Lee
2015-01-01
Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from the received signals alone. This is accomplished by finding the statistical independence of signal mixtures, and it has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when using a conventional ICA technique for vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on the conventional ICA is proposed to mitigate these problems. The proposed method extracts more stable source signals in a valid order through an iterative reordering process of the extracted mixing matrix, which reconstructs the finally converged source signals by referring to the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses were carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to a real problem involving a complex structure, an experiment was carried out on a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of the conventional ICA technique.
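The reordering step, matching separated components to signals measured near the sources by correlation magnitude, can be sketched in a few lines of numpy. This toy skips the ICA pass itself and hand-scrambles two known sources; the signals and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
sources = np.vstack([np.sin(2 * np.pi * 7 * t),          # source 1: 7 Hz tone
                     np.sign(np.sin(2 * np.pi * 3 * t))]) # source 2: 3 Hz square wave

# Pretend an ICA pass returned the sources in the wrong order, with a flipped sign.
separated = np.vstack([-sources[1], sources[0]])

# Reference signals measured on or near each source (here: noisy copies).
references = sources + 0.1 * rng.standard_normal(sources.shape)

# Reorder separated components by magnitude of correlation with each reference.
corr = np.abs(np.corrcoef(np.vstack([references, separated]))[:2, 2:])
assignment = corr.argmax(axis=1)        # reference i -> separated component
reordered = separated[assignment]
```

After this step each row of `reordered` lines up with the physical source it represents, which is exactly the valid ordering the abstract describes.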
Performance of an open-source heart sound segmentation algorithm on eight independent databases.
Liu, Chengyu; Springer, David; Clifford, Gari D
2017-08-01
Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. The HSMM-based segmentation method was then evaluated on the assembled eight databases. The common evaluation metrics of sensitivity, specificity and accuracy, as well as the [Formula: see text] measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprising 102 306 heart sounds. Average [Formula: see text] scores of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The [Formula: see text] score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for
Acoustic Source Localization in Aircraft Interiors Using Microphone Array Technologies
Sklanka, Bernard J.; Tuss, Joel R.; Buehrle, Ralph D.; Klos, Jacob; Williams, Earl G.; Valdivia, Nicolas
2006-01-01
Using three microphone array configurations at two aircraft body stations on a Boeing 777-300ER flight test, the acoustic radiation characteristics of the sidewall and outboard floor system are investigated by experimental measurement. Analysis of the experimental data is performed using sound intensity calculations for closely spaced microphones, PATCH Inverse Boundary Element Nearfield Acoustic Holography, and Spherical Nearfield Acoustic Holography. The methods are compared by assessing their strengths and weaknesses, evaluating their source identification capability for both broadband and narrowband sources, evaluating sources under transient and steady-state conditions, and quantifying field-reconstruction continuity across multiple array positions.
Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.
Huang, Cai; Mezencev, Roman; McDonald, John F; Vannberg, Fredrik
2017-01-01
Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publicly available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.
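The recursive feature elimination loop can be sketched without any ML library; here an ordinary least-squares fit stands in for the linear SVM weight vector, and the synthetic data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.standard_normal((n, p))
# The response depends only on features 0 and 3; the other eight are noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(n)

def rfe(X, y, n_keep):
    """Recursive feature elimination: repeatedly fit a linear model and
    drop the active feature with the smallest absolute weight."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        active.pop(int(np.argmin(np.abs(w))))
    return active

selected = rfe(X, y, n_keep=2)
```

In the paper's pipeline the SVM's own weight vector plays this ranking role (SVM-RFE); the least-squares stand-in simply keeps the sketch self-contained.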
Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.
Directory of Open Access Journals (Sweden)
Cai Huang
Full Text Available Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publicly available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.
Directory of Open Access Journals (Sweden)
Ying Zhang
2016-02-01
Full Text Available Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Most current localization algorithms in this field do not take sufficient account of node mobility. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated by using the spatial correlation of underwater objects' mobility, and its location can then be predicted. The range-based PSO algorithm consumes considerable energy and its computational complexity is relatively high; however, since the number of beacon nodes is comparatively small, the calculation for the large number of unknown nodes is simple, and the method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method achieves higher localization accuracy and a better localization coverage rate than other widely used localization methods in this field.
Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei
2016-02-06
Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Most current localization algorithms in this field do not take sufficient account of node mobility. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated by using the spatial correlation of underwater objects' mobility, and its location can then be predicted. The range-based PSO algorithm consumes considerable energy and its computational complexity is relatively high; however, since the number of beacon nodes is comparatively small, the calculation for the large number of unknown nodes is simple, and the method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method achieves higher localization accuracy and a better localization coverage rate than other widely used localization methods in this field.
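The range-based PSO step, finding the position that best explains the measured beacon distances, can be sketched with a plain global-best PSO. The beacon layout, noise-free ranges, and PSO coefficients below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([37.0, 62.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)   # noise-free for the sketch

def fitness(p):
    """Sum of squared range errors; zero at the true node position."""
    return np.sum((np.linalg.norm(beacons - p, axis=1) - ranges) ** 2)

# plain global-best PSO
n_particles, n_iter = 30, 200
pos = rng.uniform(0, 100, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

With four non-collinear beacons the fitness has a unique minimum at the node position, so the swarm converges to `true_pos`.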
Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.
2017-07-01
A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like the countries in the classical ICA, these provinces are optimized through the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the "knock the base" method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity and the designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than the model employing the classical one-level ICA. A model is proposed to identify the characteristics of immiscible NAPL contaminant sources. The contaminant is immiscible in water and multi-phase flow is simulated. The model is a multi-level saturation-based optimization algorithm based on the ICA. Each answer string in the second level is divided into a set of provinces. Each ICA is modified by incorporating the new "knock the base" method.
Subjective Response to Foot-Fall Noise, Including Localization of the Source Position
DEFF Research Database (Denmark)
Brunskog, Jonas; Hwang, Ha Dong; Jeong, Cheol-Ho
2011-01-01
annoyance, using simulated binaural room impulse responses, with sources being a moving point source or a nonmoving surface source, and rooms being a room with a reverberation time of 0.5 s or an anechoic room. The paper concludes that no strong effect of the source localization on the annoyance can...
DEFF Research Database (Denmark)
Khoobi, Saeed; Halvaei, Abolfazl; Hajizadeh, Amin
2016-01-01
Energy and power distribution between the multiple energy sources of electric vehicles (EVs) is the main challenge in achieving optimum performance from an EV. Fuzzy inference systems are powerful tools for this task due to the nonlinearity and uncertainties of the EV system. The design of fuzzy controllers for energy management...... of an EV relies heavily on expert experience, and this may lead to sub-optimal performance. This paper develops an optimized fuzzy controller using a genetic algorithm (GA) for an electric vehicle equipped with two power banks, comprising a battery and a super-capacitor. The model of the EV and the optimized fuzzy
Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.
Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao
2016-03-12
In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation of the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. We then extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). The ESPRIT-based algorithm is then used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.
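The angulation positioning step, intersecting two bearing rays measured at different stations, reduces to a 2-by-2 linear solve. The geometry below is a made-up example:

```python
import numpy as np

def locate(p1, theta1, p2, theta2):
    """Intersect two bearing rays (angles in radians, measured from the x-axis)."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2).
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.array(p2, float) - np.array(p1, float))
    return np.array(p1, float) + t[0] * d1

# Station at the origin sees the sensor along direction (4, 3); a second
# station at (10, 0) sees it along (-6, 3); both rays meet at (4, 3).
target = locate([0, 0], np.arctan2(3, 4), [10, 0], np.arctan2(3, -6))
```

With three cooperating base stations, as in the paper, the extra ray over-determines the intersection and a least-squares fit would replace the exact solve.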
Sources of uncertainty in future changes in local precipitation
Energy Technology Data Exchange (ETDEWEB)
Rowell, David P. [Met Office Hadley Centre, Exeter (United Kingdom)
2012-10-15
This study considers the large uncertainty in projected changes in local precipitation. It aims to map, and begin to understand, the relative roles of uncertain modelling and natural variability, using 20-year mean data from four perturbed-physics or multi-model ensembles. The largest (280-member) ensemble illustrates a rich pattern in the varying contribution of modelling uncertainty, with similar features found using a CMIP3 ensemble (despite its limited sample size, which restricts its value in this context). The contribution of modelling uncertainty to the total uncertainty in local precipitation change is found to be highest in the deep tropics, particularly over South America, Africa, the east and central Pacific, and the Atlantic. In the moist maritime tropics, the highly uncertain modelling of sea-surface temperature changes is transmitted to a large modelling uncertainty in local rainfall changes. Over tropical land and summer mid-latitude continents (and to a lesser extent, the tropical oceans), uncertain modelling of atmospheric processes, land surface processes and the terrestrial carbon cycle all appear to play an additional substantial role in driving the uncertainty of local rainfall changes. In polar regions, inter-model variability of anomalous sea ice drives an uncertain precipitation response, particularly in winter. In all these regions, there is therefore the potential to reduce the uncertainty of local precipitation changes through targeted model improvements and observational constraints. In contrast, over much of the arid subtropical and mid-latitude oceans, over Australia, and over the Sahara in winter, internal atmospheric variability dominates the uncertainty in projected precipitation changes. Here, model improvements and observational constraints will have little impact on the uncertainty of time means shorter than at least 20 years. Last, a supplementary application of the metric developed here is that it can be interpreted as a measure
Directory of Open Access Journals (Sweden)
Shunfang Wang
2015-12-01
Full Text Available An effective representation of a protein sequence plays a crucial role in protein sub-nuclear localization. The existing representations, such as dipeptide composition (DipC), pseudo-amino acid composition (PseAAC) and position specific scoring matrix (PSSM), are insufficient to represent a protein sequence due to their single perspectives. Thus, this paper proposes two fusion feature representations, DipPSSM and PseAAPSSM, to integrate PSSM with DipC and PseAAC, respectively. When constructing each fusion representation, we introduce balance factors to weight the importance of its components. The optimal values of the balance factors are sought by genetic algorithm. Due to the high dimensionality of the proposed representations, linear discriminant analysis (LDA) is used to find their important low-dimensional structure, which is essential for classification and location prediction. The numerical experiments on two public datasets with a KNN classifier and cross-validation tests showed that, in terms of the common indexes of sensitivity, specificity, accuracy and MCC, the proposed fusion representations outperform the traditional representations in protein sub-nuclear localization, and the representation treated by LDA outperforms the untreated one.
Wang, Shunfang; Liu, Shuhui
2015-12-19
An effective representation of a protein sequence plays a crucial role in protein sub-nuclear localization. The existing representations, such as dipeptide composition (DipC), pseudo-amino acid composition (PseAAC) and position specific scoring matrix (PSSM), are insufficient to represent a protein sequence due to their single perspectives. Thus, this paper proposes two fusion feature representations, DipPSSM and PseAAPSSM, to integrate PSSM with DipC and PseAAC, respectively. When constructing each fusion representation, we introduce balance factors to weight the importance of its components. The optimal values of the balance factors are sought by genetic algorithm. Due to the high dimensionality of the proposed representations, linear discriminant analysis (LDA) is used to find their important low-dimensional structure, which is essential for classification and location prediction. The numerical experiments on two public datasets with a KNN classifier and cross-validation tests showed that, in terms of the common indexes of sensitivity, specificity, accuracy and MCC, the proposed fusion representations outperform the traditional representations in protein sub-nuclear localization, and the representation treated by LDA outperforms the untreated one.
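The tail of the pipeline, LDA projection followed by a KNN classifier, can be sketched for a two-class toy problem; the 20-dimensional synthetic features stand in for DipPSSM/PseAAPSSM vectors and are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 20
# two synthetic "localization" classes, separated along one feature axis
shift = np.zeros(d)
shift[0] = 4.0
Xa = rng.standard_normal((50, d))
Xb = rng.standard_normal((50, d)) + shift
X = np.vstack([Xa, Xb])
y = np.array([0] * 50 + [1] * 50)

# Fisher LDA: project onto w = Sw^{-1} (mu_b - mu_a)
Sw = np.cov(Xa, rowvar=False) + np.cov(Xb, rowvar=False)
w = np.linalg.solve(Sw, Xb.mean(axis=0) - Xa.mean(axis=0))
z = X @ w                                  # 1-D discriminant scores

def knn_predict(z_train, y_train, z_query, k=5):
    """Majority vote over the k nearest neighbors in the LDA-projected space."""
    idx = np.argsort(np.abs(z_train - z_query))[:k]
    return np.bincount(y_train[idx]).argmax()

preds = np.array([knn_predict(z, y, zi) for zi in z])
accuracy = (preds == y).mean()
```

In the paper LDA reduces the full multi-class fusion representation before KNN; the two-class Fisher projection above is the simplest instance of the same idea.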
Directory of Open Access Journals (Sweden)
Oscar Karnalim
2017-01-01
Full Text Available Although there are various source code plagiarism detection approaches, only a few focus on low-level representations for deducing similarity. Most are focused only on the lexical token sequence extracted from source code. In our view, a low-level representation is more beneficial than lexical tokens, since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach that relies on low-level representation. As a case study, we focus our work on .NET programming languages, with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
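Adaptive Local Alignment builds on classic local alignment. A minimal Smith-Waterman scorer over token sequences (without the adaptive scoring of Lim et al.) looks like this; the CIL-like instruction streams are invented examples:

```python
def local_align(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two token sequences."""
    m, n = len(a), len(b)
    H = [[0] * (n + 1) for _ in range(m + 1)]
    best = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # local alignment: scores never drop below zero
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# A plagiarized fragment with the two argument loads swapped
original_tokens = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
suspect_tokens = ["ldarg.1", "ldarg.0", "add", "stloc.0", "ldloc.0", "ret"]
score = local_align(original_tokens, suspect_tokens)
```

A high score relative to the self-alignment of `original_tokens` flags the pair as a likely plagiarism case despite the token reordering.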
Directory of Open Access Journals (Sweden)
Hosseinali Salemi
2016-04-01
Full Text Available Facility location models are found in many diverse areas, such as communication networks, transportation, and distribution systems planning. They play a significant role in supply chain and operations management and are one of the main well-known topics in the strategic agenda of contemporary manufacturing and service companies, with long-lasting effects. We define a new approach for solving the stochastic single source capacitated facility location problem (SSSCFLP). Customers with stochastic demand are assigned to a set of capacitated facilities that are selected to serve them. It is demonstrated that the problem can be transformed into a deterministic Single Source Capacitated Facility Location Problem (SSCFLP) for a Poisson demand distribution. A hybrid algorithm that combines a Lagrangian heuristic with an adjusted mixture of ant colony and genetic optimization is proposed to find lower and upper bounds for this problem. Computational results on various instances with distinct properties indicate that the proposed solution approach is efficient.
Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.
2015-08-01
A computational model is developed for retrieving the positions and emission rates of unknown pollution sources, under steady-state conditions, starting from measurements of the pollutant concentrations. The approach is based on the minimization of a fitness function using a genetic algorithm paradigm. The model is tested both on pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test-case domain (1000 m × 1000 m × 50 m) and on experimental data, such as the Prairie Grass field experiment data, in which about 600 receptors were located along five concentric semicircular arcs, and the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
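The genetic-algorithm search over source position and emission rate can be sketched as follows. This is not the paper's model: a simple 1/r^2 decay replaces the Gaussian dispersion model, and the receptor grid, GA operators and rates are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# receptor grid and synthetic observations
receptors = np.array([[x, y] for x in range(0, 101, 25)
                      for y in range(0, 101, 25)], dtype=float)
true_src = np.array([40.0, 70.0, 5.0])                     # x, y, emission rate q

def forward(src):
    """Toy dispersion model: concentration decays as q / r^2 (+1 avoids the pole)."""
    r2 = np.sum((receptors - src[:2]) ** 2, axis=1) + 1.0
    return src[2] / r2

obs = forward(true_src)

def fitness(src):                                          # to be minimized
    return np.sum((forward(src) - obs) ** 2)

# plain real-coded GA: truncation selection, blend crossover, annealed mutation
pop = rng.uniform([0, 0, 0], [100, 100, 10], (60, 3))
for gen in range(200):
    order = np.argsort([fitness(p) for p in pop])
    elite = pop[order[:20]]                                # elitism: best survive
    parents = elite[rng.integers(0, 20, (40, 2))]
    alpha = rng.random((40, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    children += rng.normal(0, 2.0 * 0.97 ** gen, children.shape)
    pop = np.vstack([elite, children])
best = min(pop, key=fitness)
```

Because the 25 receptors bracket the source, the fitness is strongly informative and the GA recovers both the position and the emission rate.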
XTALOPT version r11: An open-source evolutionary algorithm for crystal structure prediction
Avery, Patrick; Falls, Zackary; Zurek, Eva
2018-01-01
Version 11 of XTALOPT, an evolutionary algorithm for crystal structure prediction, has now been made available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. Whereas previous versions of XTALOPT were published under the GNU General Public License (GPL), the current version is made available under the 3-Clause BSD License, which is an open source license recognized by the Open Source Initiative. Importantly, the new version can be executed via a command line interface (i.e., it does not require the use of a Graphical User Interface). Moreover, the new version is written as a stand-alone program, rather than as an extension to AVOGADRO.
Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang
2017-11-01
Acoustical source reconstruction is a typical inverse problem, whose minimum reconstruction frequency hinges on the size of the array and whose maximum frequency depends on the spacing between the microphones. To enlarge the frequency range of the reconstruction and reduce the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed, and the only assumption made is that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adapts to practical scenarios of acoustical measurement, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is then illustrated with an industrial case.
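FISTA itself is standard (Beck and Teboulle, 2009) and easy to state for an l1-regularized least-squares problem, which mirrors the weak-sparsity assumption above. The random operator A and the sparse source vector are illustrative; the paper's propagation-based basis is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 60, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[5, 30, 77]] = [2.0, -1.5, 1.0]     # sparse "source" vector
b = A @ x_true

def fista(A, b, lam=0.1, n_iter=300):
    """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        g = z - A.T @ (A @ z - b) / L                            # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2                 # momentum update
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

x_hat = fista(A, b)
```

With 60 measurements of a 3-sparse vector in 100 unknowns, the support and amplitudes are recovered almost exactly.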
International Nuclear Information System (INIS)
Korchuganov, V.N.; Smygacheva, A.S.; Fomin, E.A.
2018-01-01
One of the best ways to design, study and optimize accelerators and synchrotron radiation sources is numerical simulation. Nevertheless, when simulating complex physical processes that involve many nonlinear effects, classical optimization methods are often difficult to apply. The article deals with the application of multiobjective optimization using genetic algorithms to the design of accelerators and light sources. These algorithms allow both simple linear and complex nonlinear lattices to be optimized efficiently to obtain the required facility parameters.
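At the core of any multiobjective genetic algorithm is the notion of Pareto dominance. A minimal non-dominated filter (assuming minimization of every objective; the sample objective values are invented) can be written as:

```python
def pareto_front(points):
    """Return the non-dominated subset, minimizing every objective.

    A point p is dominated if some other point q is no worse in every
    objective and is not identical to p.
    """
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# invented (objective1, objective2) values for five candidate lattices
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
front = pareto_front(pts)
```

In an actual multiobjective GA such as NSGA-II, repeated non-dominated sorting of the population drives selection toward the Pareto front.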
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop-counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
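A regularized extreme learning machine reduces to a random hidden layer plus a ridge-regression readout. The sketch below illustrates the modeling stage on a generic regression task; hidden-layer size, regularization weight, and the sine target are hypothetical choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50, reg=1e-3):
    """Fit a regularized ELM: random input weights are frozen, and only
    the linear readout beta is solved for (ridge regression)."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # random hidden features
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

Because training amounts to solving one linear system, the cost stays low, which matches the low computational cost claimed for the approach.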
Energy Technology Data Exchange (ETDEWEB)
Khasawneh, Mohammed A., E-mail: mkha@ieee.org [Department of Electrical Engineering, Jordan University of Science and Technology (Jordan); Al-Shboul, Zeina Aman M., E-mail: xeinaaman@gmail.com [Department of Electrical Engineering, Jordan University of Science and Technology (Jordan); Jaradat, Mohammad A., E-mail: majaradat@just.edu.jo [Department of Mechanical Engineering, Jordan University of Science and Technology (Jordan); Malkawi, Mohammad I., E-mail: mmalkawi@aimws.com [College of Engineering, Jadara University, Irbid 221 10 (Jordan)
2013-06-15
Highlights: ► A new navigation algorithm for Radiation Evasion around nuclear facilities. ► An optimization criterion minimized under algorithm operation. ► A man-borne device guiding the occupational worker towards paths that warrant least radiation × time products. ► Benefits of using localized navigation as opposed to global navigation schemas. ► A path discrimination function for finding the navigational paths exhibiting the least amounts of radiation. -- Abstract: In this extension of part I (Khasawneh et al., in press), we modify the navigation algorithm that was presented with the objective of optimizing the “Radiation Evasion” criterion so that navigation instead optimizes the criterion of “Nearest Exit”. Under this modification, the algorithm yields navigation paths that guide occupational workers towards Nearest Exit points. Again, under this optimization criterion, the algorithm leverages localized information acquired through a well-designed and distributed wireless sensor network, as it averts the need for any long-haul communication links or a centralized decision and monitoring facility, thereby achieving more reliable performance under dynamic environments. As was done in part I, the proposed algorithm under the “Nearest Exit” criterion is designed to leverage nearest-neighbor information coming in through the sensory network overhead in computing successful navigational paths from one point to another. For comparison purposes, the proposed algorithm is tested under the two optimization criteria, “Radiation Evasion” and “Nearest Exit”, for different numbers of steps of look-ahead. We verify the performance of the algorithm by means of simulations, whereby navigational paths are calculated for different radiation fields. We also verify, via simulations, the performance of the algorithm in comparison with a well-known global navigation algorithm, upon which we draw our conclusions.
International Nuclear Information System (INIS)
Khasawneh, Mohammed A.; Al-Shboul, Zeina Aman M.; Jaradat, Mohammad A.; Malkawi, Mohammad I.
2013-01-01
Highlights: ► A new navigation algorithm for Radiation Evasion around nuclear facilities. ► An optimization criterion minimized under algorithm operation. ► A man-borne device guiding the occupational worker towards paths that warrant least radiation × time products. ► Benefits of using localized navigation as opposed to global navigation schemas. ► A path discrimination function for finding the navigational paths exhibiting the least amounts of radiation. -- Abstract: In this extension of part I (Khasawneh et al., in press), we modify the navigation algorithm that was presented with the objective of optimizing the “Radiation Evasion” criterion so that navigation instead optimizes the criterion of “Nearest Exit”. Under this modification, the algorithm yields navigation paths that guide occupational workers towards Nearest Exit points. Again, under this optimization criterion, the algorithm leverages localized information acquired through a well-designed and distributed wireless sensor network, as it averts the need for any long-haul communication links or a centralized decision and monitoring facility, thereby achieving more reliable performance under dynamic environments. As was done in part I, the proposed algorithm under the “Nearest Exit” criterion is designed to leverage nearest-neighbor information coming in through the sensory network overhead in computing successful navigational paths from one point to another. For comparison purposes, the proposed algorithm is tested under the two optimization criteria, “Radiation Evasion” and “Nearest Exit”, for different numbers of steps of look-ahead. We verify the performance of the algorithm by means of simulations, whereby navigational paths are calculated for different radiation fields. We also verify, via simulations, the performance of the algorithm in comparison with a well-known global navigation algorithm, upon which we draw our conclusions.
Bar-Cohen, Yaniv; Khairy, Paul; Morwood, James; Alexander, Mark E; Cecchin, Frank; Berul, Charles I
2006-07-01
ECG algorithms used to localize accessory pathways (AP) in patients with Wolff-Parkinson-White (WPW) syndrome have been validated in adults, but less is known of their use in children, especially in patients with congenital heart disease (CHD). We hypothesize that these algorithms have low diagnostic accuracy in children and even lower in those with CHD. Pre-excited ECGs in 43 patients with WPW and CHD (median age 5.4 years [0.9-32 years]) were evaluated and compared to 43 consecutive WPW control patients without CHD (median age 14.5 years [1.8-18 years]). Two blinded observers predicted AP location using 2 adult and 1 pediatric WPW algorithms, and a third blinded observer served as a tiebreaker. Predicted locations were compared with ablation-verified AP location to identify (a) exact match for AP location and (b) match for laterality (left-sided vs right-sided AP). In control children, adult algorithms were accurate in only 56% and 60%, while the pediatric algorithm was correct in 77%. In 19 patients with Ebstein's anomaly, diagnostic accuracy was similar to controls with at times an even better ability to predict laterality. In non-Ebstein's CHD, however, the algorithms were markedly worse (29% for the adult algorithms and 42% for the pediatric algorithms). A relatively large degree of interobserver variability was seen (kappa values from 0.30 to 0.58). Adult localization algorithms have poor diagnostic accuracy in young patients with and without CHD. Both adult and pediatric algorithms are particularly misleading in non-Ebstein's CHD patients and should be interpreted with caution.
Magnetic source localization of early visual mismatch response
Susac, A.; Heslenfeld, D.J.; Huonker, R.; Supek, S.
2014-01-01
Previous studies have reported a visual analogue of the auditory mismatch negativity (MMN) response that is based on sensory memory. The neural generators and attention dependence of the visual MMN (vMMN) still remain unclear. We used magnetoencephalography (MEG) and spatio-temporal source
Local sources of pollution and their impacts in Alaska (Invited)
Molders, N.
2013-12-01
The movie 'Into the Wild' evokes the impression of the last frontier, a great wide and pristine land. With over half a million people living in Alaska, an area spanning a distance comparable to that from the US West Coast to the East Coast, this idea comes naturally. The three major cities are the main emission sources in an otherwise relatively clean atmosphere. On the North Slope, oil drilling and production are the main anthropogenic emission sources. Along Alaska's coasts, ship traffic, including cruises, is another anthropogenic emission source that is expected to increase as sea ice recedes. In summer, wildfires in Alaska, Canada and/or Siberia may cause poor air quality. In winter, inversions may lead to poor air quality, and in spring, aged polluted air is often advected into Alaska. These different emission sources yield quite different atmospheric compositions and air-quality impacts. While this may make understanding Alaska's atmospheric composition at large a challenging task, it also provides great opportunities to examine impacts without confounders. The talk will give a review of the research performed and insight into the challenges.
Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.
2011-01-01
Location technologies have many applications in wireless communications, military and space missions, etc. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong-signal conditions, their operation in many urban, indoor, and space applications is not robust or even impossible due to weak signals and strong distortions. The search for less costly, faster and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is demand for flexible multimode reference receivers, associated SDKs, and development platforms that may accelerate and facilitate the research. One such concept is the software GPS/GNSS receiver (GPS SDR), which provides facilitated access to algorithmic libraries and the possibility to integrate more advanced algorithms without hardware or essential software updates. The GNU-SDR and GPS-SDR open-source receiver platforms are two popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open-source GPS-SDR platform.
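At the heart of block-correlator acquisition is circular correlation of the received samples with a local replica of the spreading code, computed via FFTs. A minimal sketch follows; the code length and shift are illustrative test values, and a real receiver would also search over Doppler bins:

```python
import numpy as np

def circular_correlate(signal, code):
    """Circular cross-correlation via the FFT; the index of the peak
    estimates the code phase (delay) of the received signal."""
    S = np.fft.fft(signal)
    C = np.fft.fft(code)
    return np.fft.ifft(S * np.conj(C)).real

def acquire_code_phase(signal, code):
    """Return the delay (in samples) that best aligns code to signal."""
    return int(np.argmax(circular_correlate(signal, code)))
```

Replacing the O(N²) sliding correlator with two forward FFTs and one inverse FFT per block is what makes block processing fast on general-purpose hardware.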
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.
López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated-data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. © 2013. Published by Elsevier Inc. All rights reserved.
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated-data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Chen, Hao; Cai, Ye; Chen, Yingcong
2017-10-01
An indoor positioning algorithm based on visible light communication (VLC) is presented. This algorithm is used to calculate a three-dimensional (3-D) coordinate in an indoor optical wireless environment, which includes sufficient orders of multipath reflections from the reflecting surfaces of the room. Leveraging the global optimization ability of the genetic algorithm (GA), an innovative framework for 3-D position estimation based on a modified genetic algorithm is proposed. Unlike other techniques using VLC for positioning, the proposed system can achieve indoor 3-D localization without making assumptions about the height or acquiring the orientation angle of the mobile terminal. Simulation results show that an average localization error of less than 1.02 cm can be achieved. In addition, in most VLC-positioning systems the effect of reflection is neglected, and performance is limited by reflection, which makes the results less accurate in real scenarios, with positioning errors at the corners relatively larger than elsewhere. We therefore take the first-order reflection into consideration and use an artificial neural network to match the model of the nonlinear channel. The studies show that, with nonlinear matching of the direct and reflected channels, the average positioning error at the four corners decreases from 11.94 to 0.95 cm. The employed algorithm thus emerges as an effective and practical method for indoor localization and outperforms other existing indoor wireless localization approaches.
Abbiati, Giuseppe; La Salandra, Vincenzo; Bursi, Oreste S.; Caracoglia, Luca
2018-02-01
Successful online hybrid (numerical/physical) dynamic substructuring simulations have shown their potential in enabling realistic dynamic analysis of almost any type of non-linear structural system (e.g., an as-built/isolated viaduct, a petrochemical piping system subjected to non-stationary seismic loading, etc.). Moreover, owing to faster and more accurate testing equipment, a number of different offline experimental substructuring methods, operating both in the time domain (e.g. impulse-based substructuring) and in the frequency domain (i.e. Lagrange multiplier frequency-based substructuring), have been employed in mechanical engineering to examine dynamic substructure coupling. Numerous studies have dealt with the above-mentioned methods and with consequent uncertainty propagation issues, associated either with experimental errors or with modelling assumptions. Nonetheless, a limited number of publications have systematically cross-examined the performance of the various Experimental Dynamic Substructuring (EDS) methods and the possibility of their exploitation in a complementary way to expedite a hybrid experimental/numerical simulation. From this perspective, this paper performs a comparative uncertainty propagation analysis of three EDS algorithms for coupling physical and numerical subdomains with a dual assembly approach based on localized Lagrange multipliers. The main results and comparisons are based on a series of Monte Carlo simulations carried out on five-DoF linear/non-linear chain-like systems that include typical aleatoric uncertainties emerging from measurement errors and excitation loads. In addition, we propose a new Composite-EDS (C-EDS) method to fuse both online and offline algorithms into a unique simulator. Capitalizing on the results of a more complex case study composed of a coupled isolated tank-piping system, we provide a feasible way to employ the C-EDS method when nonlinearities and multi-point constraints are present in the emulated system.
Perlee, Caroline J.; Casasent, David P.
1990-09-01
Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.
Hybrid Genetic Algorithm - Local Search Method for Ground-Water Management
Chiu, Y.; Nishikawa, T.; Martin, P.
2008-12-01
Ground-water management problems commonly are formulated as a mixed-integer, non-linear programming problem (MINLP). Relying only on conventional gradient-search methods to solve the management problem is computationally fast; however, the methods may become trapped in a local optimum. Global-optimization schemes can identify the global optimum, but the convergence is very slow when the optimal solution approaches the global optimum. In this study, we developed a hybrid optimization scheme, which includes a genetic algorithm and a gradient-search method, to solve the MINLP. The genetic algorithm identifies a near-optimal solution, and the gradient search uses the near optimum to identify the global optimum. Our methodology is applied to a conjunctive-use project in the Warren ground-water basin, California. Hi-Desert Water District (HDWD), the primary water manager in the basin, plans to construct a wastewater treatment plant to reduce future septic-tank effluent from reaching the ground-water system. The treated wastewater instead will recharge the ground-water basin via percolation ponds as part of a larger conjunctive-use strategy, subject to State regulations (e.g. minimum distances and travel times). HDWD wishes to identify the least-cost conjunctive-use strategies that control ground-water levels, meet regulations, and identify new production-well locations. As formulated, the MINLP objective is to minimize water-delivery costs subject to constraints including pump capacities, available recharge water, water-supply demand, water-level constraints, and potential new-well locations. The methodology was demonstrated by an enumerative search of the entire feasible solution space and by comparing the optimum solution with results from the branch-and-bound algorithm. The results also indicate that the hybrid method identifies the global optimum within an affordable computation time. Sensitivity analyses, which include testing different recharge-rate scenarios, pond
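The hybrid scheme, a genetic algorithm to locate a promising region followed by a gradient search to polish the solution, can be sketched generically. Population size, mutation scale, and step sizes below are illustrative, not the study's settings, and a finite-difference gradient stands in for whatever gradient method the study used:

```python
import numpy as np

def hybrid_optimize(f, dim=2, pop=40, gens=40, lr=0.05, seed=0):
    """Stage 1: a crude GA (truncation selection + Gaussian mutation)
    finds a near-optimal point. Stage 2: finite-difference gradient
    descent refines it toward the nearby optimum."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5.0, 5.0, size=(pop, dim))
    for _ in range(gens):
        fit = np.array([f(p) for p in P])
        elite = P[np.argsort(fit)[: pop // 2]]                  # keep best half
        P = np.vstack([elite, elite + rng.normal(0.0, 0.3, elite.shape)])
    x = P[np.argmin([f(p) for p in P])].copy()
    eps = 1e-6
    for _ in range(300):                                        # local polish
        g = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                      for e in np.eye(dim)])
        x -= lr * g
    return x
```

The division of labor mirrors the abstract: the GA is slow near the optimum but hard to trap, while the gradient stage converges quickly once started inside the right basin.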
International Nuclear Information System (INIS)
Poynee, L A
2003-01-01
Shack-Hartmann based adaptive optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot-halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived, and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
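The two spot-localization approaches compared above can be sketched as follows; the Gaussian spot, image size, and shift are hypothetical test values, not the paper's data:

```python
import numpy as np

def centroid(img):
    """Classical center-of-mass estimate of the spot position."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def correlation_peak(img, ref):
    """Correlate the image with an ideal reference spot (via FFTs);
    the peak of the circular correlation gives the spot shift, which
    is robust to read noise and background light."""
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    return np.unravel_index(np.argmax(corr), corr.shape)
```

In practice the correlation peak would be interpolated for sub-pixel precision; the argmax here returns whole pixels, and the FFTs are where the extra computation relative to centroiding comes from.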
International Nuclear Information System (INIS)
Khasawneh, Mohammed A.; Al-Shboul, Zeina Aman M.; Jaradat, Mohammad A.
2013-01-01
Highlights: ► A new navigation algorithm for radiation evasion around nuclear facilities. ► An optimization criterion minimized under algorithm operation. ► A man-borne device guiding the occupational worker towards paths that warrant least radiation × time products. ► Benefits of using localized navigation as opposed to global navigation schemas. ► A path discrimination function for finding the navigational paths exhibiting the least amounts of radiation. -- Abstract: In this paper, we introduce a navigation algorithm having general utility for occupational workers at nuclear facilities and places where radiation poses serious health hazards. This novel algorithm leverages the use of localized information for its operation. Therefore, the need for central processing and decision resources is avoided, since information processing and the ensuing decision-making are done aboard a man-borne device. To acquire the information needed for path planning in radiation avoidance, a well-designed and distributed wireless sensory infrastructure is needed. This will automatically benefit from the most recent trends in technology developments in both sensor networks and wireless communication. When used to navigate based on local radiation information, the algorithm will behave more reliably when accidents happen, since no long-haul communication links are required for information exchange. In essence, the proposed algorithm is designed to leverage nearest-neighbor information coming in through the sensory network overhead to compute successful navigational paths from one point to another. The proposed algorithm is tested under the “Radiation Evasion” criterion. It is also tested for the case when more information, beyond nearest neighbors, is made available; here, we test its operation for different numbers of steps of look-ahead. We verify algorithm performance by means of simulations, whereby navigational paths are calculated for different radiation fields.
Energy Technology Data Exchange (ETDEWEB)
Khasawneh, Mohammed A., E-mail: mkha@ieee.org [Department of Electrical Engineering, Jordan University of Science and Technology, Irbid 221 10 (Jordan); Al-Shboul, Zeina Aman M., E-mail: xeinaaman@gmail.com [Department of Electrical Engineering, Jordan University of Science and Technology, Irbid 221 10 (Jordan); Jaradat, Mohammad A., E-mail: majaradat@just.edu.jo [Department of Mechanical Engineering, Jordan University of Science and Technology, Irbid 221 10 (Jordan)
2013-06-15
Highlights: ► A new navigation algorithm for radiation evasion around nuclear facilities. ► An optimization criterion minimized under algorithm operation. ► A man-borne device guiding the occupational worker towards paths that warrant least radiation × time products. ► Benefits of using localized navigation as opposed to global navigation schemas. ► A path discrimination function for finding the navigational paths exhibiting the least amounts of radiation. -- Abstract: In this paper, we introduce a navigation algorithm having general utility for occupational workers at nuclear facilities and places where radiation poses serious health hazards. This novel algorithm leverages the use of localized information for its operation. Therefore, the need for central processing and decision resources is avoided, since information processing and the ensuing decision-making are done aboard a man-borne device. To acquire the information needed for path planning in radiation avoidance, a well-designed and distributed wireless sensory infrastructure is needed. This will automatically benefit from the most recent trends in technology developments in both sensor networks and wireless communication. When used to navigate based on local radiation information, the algorithm will behave more reliably when accidents happen, since no long-haul communication links are required for information exchange. In essence, the proposed algorithm is designed to leverage nearest-neighbor information coming in through the sensory network overhead to compute successful navigational paths from one point to another. The proposed algorithm is tested under the “Radiation Evasion” criterion. It is also tested for the case when more information, beyond nearest neighbors, is made available; here, we test its operation for different numbers of steps of look-ahead. We verify algorithm performance by means of simulations, whereby navigational paths are calculated for different radiation fields.
Directory of Open Access Journals (Sweden)
Alexander V Maltsev
Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for an affine transform, producing a transformed image series in which the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR was detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole-cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced Ca2+ release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves, sparks and
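The differential birth-detection step can be illustrated with a toy frame-differencing routine. The thresholds and frame sizes are hypothetical, and the actual algorithm additionally applies motion correction and spatial filtering before this stage:

```python
import numpy as np

def detect_event_frames(frames, rate_thresh, area_thresh):
    """Flag frame indices where the pixelwise frame-to-frame increase
    exceeds rate_thresh over at least area_thresh pixels, i.e. where
    the signal rises quickly enough over a large enough area."""
    events = []
    for t in range(1, len(frames)):
        rising = (frames[t] - frames[t - 1]) > rate_thresh
        if np.count_nonzero(rising) >= area_thresh:
            events.append(t)
    return events
```

Requiring both a rate and an area criterion is what separates a genuine local release from single-pixel recording noise.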
Glushkov, Dmitry Olegovich; Strizhak, Pavel Alexandrovich; Vershinina, Kseniya Yurievna
2015-01-01
Numerical investigation of the ignition processes arising from the interaction of local energy sources with liquid condensed substances has been carried out. The basic integral characteristic of the process, the ignition delay time, has been determined for different energy-source parameters. Recommendations have been formulated to ensure the fire safety of technological processes characterized by the possible formation of local heat sources (cutting, welding, friction, metal grinding, etc.) in the vicinity of storage areas, tra...
Directory of Open Access Journals (Sweden)
Glushkov Dmitrii O.
2015-01-01
Numerical investigation of the ignition processes arising from the interaction of local energy sources with liquid condensed substances has been carried out. The basic integral characteristic of the process, the ignition delay time, has been determined for different energy-source parameters. Recommendations have been formulated to ensure the fire safety of technological processes characterized by the possible formation of local heat sources (cutting, welding, friction, metal grinding, etc.) in the vicinity of areas of storage, transportation, transfer and processing of flammable liquids (gasoline, kerosene, diesel fuel).
Turbulence generation through intense localized sources of energy
Maqui, Agustin; Donzis, Diego
2015-11-01
Mechanisms to generate turbulence in controlled conditions have been studied for nearly a century. The most common methods include passive and active grids, with a focus on incompressible turbulence. However, little attention has been given to compressible flows, and even less to hypersonic flows, where phenomena such as thermal non-equilibrium can be present. Using intense energy from lasers, extreme molecular velocities can be generated by photo-dissociation. This creates strong localized changes in both the hydrodynamics and thermodynamics of the flow, which may perturb the flow in a way similar to an active grid and thereby generate turbulence in hypersonic flows. A large database of direct numerical simulations (DNS) is used to study the feasibility of such an approach. An extensive analysis of single- and two-point statistics, as well as spectral dynamics, is used to characterize the evolution of the flow towards realistic turbulence. Local measures of enstrophy and dissipation are studied to diagnose the main mechanisms of energy exchange. As is commonly done in compressible flows, dilatational and solenoidal components are separated to understand the effect of acoustics on the development of turbulence. Further results for cases that approximate laboratory conditions will be discussed. The authors gratefully acknowledge the support of AFOSR.
Piccininni, A.; Palumbo, G.; Franco, A. Lo; Sorgente, D.; Tricarico, L.; Russello, G.
2018-05-01
The continuing search for lightweight components for transport applications, driven by the need to reduce harmful emissions, directs attention to light alloys such as aluminium (Al) alloys, which combine low density with a high strength-to-weight ratio. These advantages are partially counterbalanced by poor formability at room temperature. A viable solution is to apply a localized laser heat treatment to the blank before the forming process, obtaining a tailored distribution of material properties so that the blank can be formed at room temperature on conventional press machines. Such an approach has been extensively investigated for age-hardenable alloys, but the present work focuses on the 5000 series; in particular, the optimization of the deep drawing process of the alloy AA5754 H32 is proposed through a combined numerical/experimental approach. A preliminary investigation was necessary to correctly tune the laser parameters (focal length, spot dimension) to effectively obtain the annealed state. Optimal process parameters were then obtained by coupling a 2D FE model with an optimization platform managed by a multi-objective genetic algorithm. The optimal solution (i.e. the one maximizing the LDR) in terms of blankholder force and extent of the annealed region was thus evaluated and validated through experimental trials. Good agreement between experimental and numerical results was found. The optimal solution yielded an LDR of the locally heat-treated blank larger than that of the material in either the wrought (H32) or the annealed (H111) condition.
Source Localization by Entropic Inference and Backward Renormalization Group Priors
Directory of Open Access Journals (Sweden)
Nestor Caticha
2015-04-01
A systematic method of transferring information from coarser to finer resolution based on renormalization group (RG) transformations is introduced. It permits building informative priors at finer scales from posteriors at coarser scales since, under some conditions, RG transformations in the space of hyperparameters can be inverted. These priors are updated into posteriors using renormalized data by Maximum Entropy. The resulting inference method, backward RG (BRG) priors, is tested by simulating a functional magnetic resonance imaging (fMRI) experiment. Its results are compared with a Bayesian approach working at the finest available resolution. Using BRG priors, sources can be partially identified even at signal-to-noise ratios as low as ~ -25 dB, improving vastly on the single-step Bayesian approach. For low levels of noise the BRG prior is not an improvement over the single-scale Bayesian method. Analysis of the histograms of hyperparameters can show how to distinguish whether the method is failing due to very high levels of noise, or whether the identification of the sources is at least partially possible.
Matrix kernels for MEG and EEG source localization and imaging
International Nuclear Information System (INIS)
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1994-01-01
The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element depends on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models.
Directory of Open Access Journals (Sweden)
Yu Zhang
2015-10-01
In this article, we begin with the non-homogeneous model for non-differentiable heat flow, which is described using the local fractional vector calculus, from the point of view of the first law of thermodynamics in fractal media. We employ the local fractional variational iteration algorithm II to solve the fractal heat equations. The obtained results show the non-differentiable behavior of temperature fields of fractal heat flow defined on Cantor sets.
Campos, Andre N.; Souza, Efren L.; Nakamura, Fabiola G.; Nakamura, Eduardo F.; Rodrigues, Joel J. P. C.
2012-01-01
Target tracking is an important application of wireless sensor networks. The network's ability to locate and track an object is directly linked to the nodes' ability to locate themselves. Consequently, localization systems are essential for target tracking applications. In addition, sensor networks are often deployed in remote or hostile environments. Therefore, density control algorithms are used to increase network lifetime while maintaining its sensing capabilities. In this work, we analyze the impact of localization algorithms (RPE and DPE) and density control algorithms (GAF, A3 and OGDC) on target tracking applications. We adapt the density control algorithms to address the k-coverage problem. In addition, we analyze the impact of network density, residual integration with density control, and k-coverage on both target tracking accuracy and network lifetime. Our results show that DPE is a better choice for target tracking applications than RPE. Moreover, OGDC is the best option among the three evaluated density control algorithms. Although the choice of the density control algorithm has little impact on the tracking precision, OGDC outperforms GAF and A3 in terms of tracking time. PMID:22969329
Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders
2013-10-01
Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model-local autoregressive average (LAURA)-was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard-the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method
Subber, Waad; Salvadori, Alberto; Lee, Sangmin; Matous, Karel
2017-06-01
The reverse Taylor impact is a common experiment for investigating the dynamic response of materials at high strain rates. To better understand the physical phenomena and to provide a platform for code validation and Uncertainty Quantification (UQ), a co-designed simulation and experimental paradigm is investigated. For validation under uncertainty, quantities of interest (QOIs) within subregions of the computational domain are introduced. For such simulations, where regions of interest can be identified, the computational cost of UQ can be reduced by confining the random variability to these regions. This observation inspired us to develop an asynchronous space and time computational algorithm with localized UQ. In the region of interest, high-resolution space and time discretization schemes are used for the stochastic model. Outside the region of interest, lower spatial and temporal resolutions are allowed, with a low-dimensional representation of uncertainty. The model is exercised on linear elastodynamics and shows potential for reducing the UQ computational cost. Although we consider wave propagation in solids, the proposed framework is general and can be used for fluid flow problems as well. Department of Energy, National Nuclear Security Administration (PSAAP-II).
Source localization in electromyography using the inverse potential problem
van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.
2011-02-01
We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.
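The regularization step described here, adding a priori information to select a physically reasonable solution to an ill-posed inverse problem, can be illustrated with plain Tikhonov regularization. This is a generic sketch of the idea on a toy ill-conditioned system, not the authors' specific functionals or finite element model:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the normal equations.
    The penalty term picks a 'small' solution from the many that fit the
    data when A is ill-conditioned."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy ill-posed problem: two nearly collinear columns.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 2))
A[:, 1] = A[:, 0] + 1e-6 * rng.normal(size=20)    # near-dependent column
b = A @ np.array([1.0, 1.0]) + 0.01 * rng.normal(size=20)
x = tikhonov_solve(A, b, lam=1e-3)
```

Without the penalty, the near-zero singular value of A would amplify the measurement noise into a wildly oscillating solution; with it, the estimate stays bounded while still fitting the data.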
Source localization in electromyography using the inverse potential problem
International Nuclear Information System (INIS)
Van den Doel, Kees; Ascher, Uri M; Pai, Dinesh K
2011-01-01
We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting
Directory of Open Access Journals (Sweden)
Milan Djordjevic
2012-01-01
Background: The Travelling Salesman Problem is an NP-hard problem in combinatorial optimization with a number of practical implications. There are many heuristic algorithms and exact methods for solving it. Objectives: In this paper we study the influence of hybridizing a genetic algorithm with a local optimizer on solving instances of the Travelling Salesman Problem. Methods/Approach: Our algorithm applies hybridization in various percentages of the generations of the genetic algorithm. Moreover, we have also studied at which generations to apply the hybridization, applying it at random generations, at the initial generations, and at the last ones. Results: We tested our algorithm on instances with sizes ranging from 76 to 439 cities. On the one hand, less frequent application of hybridization decreased the average running time of the algorithm from 14.62 sec at 100% hybridization to 2.78 sec at 10% hybridization; on the other hand, the average solution quality deteriorated only from 0.21% to 1.40% worse than the optimal solution. Conclusions: We have shown that even a small amount of hybridization substantially improves the quality of the result, while not degrading the running time much. Finally, our experiments show that the best results are obtained when hybridization occurs in the last generations of the genetic algorithm.
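The hybridization scheme studied here, invoking a local optimizer in only a fraction of the generations, can be sketched as a toy memetic GA with a 2-opt pass on a tiny TSP instance. All parameters below (population size, hybridization rate, selection and crossover choices) are illustrative assumptions, not the paper's configuration:

```python
import random

def tour_len(tour, d):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, d):
    """One pass of 2-opt: accept segment reversals that shorten the tour."""
    best = tour[:]
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            cand = best[:i] + best[i:j][::-1] + best[j:]
            if tour_len(cand, d) < tour_len(best, d):
                best = cand
    return best

def memetic_tsp(d, pop_size=30, gens=60, hybrid_rate=0.1, seed=1):
    """GA with order crossover and swap mutation; the 2-opt local optimizer
    is applied to the best individual only in a `hybrid_rate` fraction of
    generations, mirroring the partial-hybridization idea."""
    rng = random.Random(seed)
    n = len(d)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: tour_len(t, d))
        if rng.random() < hybrid_rate:          # occasional local search
            pop[0] = two_opt(pop[0], d)
        nxt = pop[:2]                           # elitism
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)    # truncation selection
            a, b = sorted(rng.sample(range(n), 2))
            child = p1[a:b] + [c for c in p2 if c not in p1[a:b]]  # order crossover
            i, j = rng.sample(range(n), 2)      # swap mutation
            child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda t: tour_len(t, d))
```

Raising `hybrid_rate` trades running time for solution quality, which is exactly the trade-off quantified in the abstract.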
A multiple objective magnet sorting algorithm for the Advanced Light Source insertion devices
International Nuclear Information System (INIS)
Humphries, D.; Goetz, F.; Kownacki, P.; Marks, S.; Schlueter, R.
1995-01-01
Insertion devices for the Advanced Light Source (ALS) incorporate large numbers of permanent magnets which have a variety of magnetization orientation errors. These orientation errors can produce field errors which affect both the spectral brightness of the insertion devices and the storage ring electron beam dynamics. A perturbation study was carried out to quantify the effects of orientation errors acting in a hybrid magnetic structure. The results of this study were used to develop a multiple stage sorting algorithm which minimizes undesirable integrated field errors and essentially eliminates pole excitation errors. When applied to a measured magnet population for an existing insertion device, an order of magnitude reduction in integrated field errors was achieved while maintaining near zero pole excitation errors.
Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N
2009-06-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).
Blahut-Arimoto algorithm and code design for action-dependent source coding problems
DEFF Research Database (Denmark)
Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar
2013-01-01
The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function...... for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding...... strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function....
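For orientation, the classical Blahut-Arimoto iteration for the ordinary rate-distortion function (without actions or side information) alternates between the optimal test channel and the output marginal; the paper's algorithm extends this fixed-point structure to the action-dependent setting. A minimal NumPy sketch of the classical version:

```python
import numpy as np

def blahut_arimoto_rd(p_x, dist, beta, iters=200):
    """Classical Blahut-Arimoto for R(D): alternately update the test
    channel q(y|x) and the output marginal r(y) at Lagrange slope beta.
    Returns (rate in bits, expected distortion) at that slope."""
    n, m = dist.shape
    r = np.full(m, 1.0 / m)                   # output marginal r(y)
    for _ in range(iters):
        q = r * np.exp(-beta * dist)          # unnormalized q(y|x)
        q /= q.sum(axis=1, keepdims=True)     # normalize each row
        r = p_x @ q                           # marginal induced by q
    D = float(np.sum(p_x[:, None] * q * dist))
    R = float(np.sum(p_x[:, None] * q * np.log2(q / r)))
    return R, D
```

For a uniform binary source with Hamming distortion this reproduces the textbook curve R(D) = 1 - h(D), with D set by the slope beta.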
International Nuclear Information System (INIS)
Peerenboom, Kim; Van Boxtel, Jochem; Janssen, Jesper; Van Dijk, Jan
2014-01-01
The local thermodynamic equilibrium (LTE) approximation can be a very powerful assumption for simulations of plasmas in or close to equilibrium. In general, the elemental composition in LTE is not constant in space, and the effects of mixing and demixing have to be taken into account using the Stefan–Maxwell diffusion description. In this paper, we introduce a method to discretize the resulting coupled set of elemental continuity equations. The coupling between the equations is taken into account by introducing the concept of a Péclet matrix. It is shown analytically and numerically that the mass and charge conservation constraints can be fulfilled exactly. Furthermore, a case study is presented to demonstrate the applicability of the method to a simulation of a mercury-free metal-halide lamp. The source code for the simulations presented in this paper is provided as supplementary material (stacks.iop.org/JPhysD/47/425202/mmedia).
Adaptive Source Localization Based Station Keeping of Autonomous Vehicles
Guler, Samet; Fidan, Baris; Dasgupta, Soura; Anderson, Brian D.O.; Shames, Iman
2016-01-01
We study the problem of driving a mobile sensory agent to a target whose location is specified only in terms of the distances to a set of sensor stations or beacons. The beacon positions are unknown, but the agent can continuously measure its distances to them as well as its own position. This problem has two particular applications: (1) capturing a target signal source whose distances to the beacons are measured by these beacons and broadcast to a surveillance agent; (2) merging a single agent into an autonomous multi-agent system so that the new agent is positioned at desired distances from the existing agents. The problem is solved using an adaptive control framework integrating a parameter estimator producing beacon location estimates, and an adaptive motion control law fed by these estimates to steer the agent toward the target. For location estimation, a least-squares adaptive law is used. The motion control law aims to minimize a convex cost function with a unique minimizer at the target location, and is further augmented for persistence of excitation. Stability and convergence analysis is provided, as well as simulation results demonstrating performance and transient behavior.
Adaptive Source Localization Based Station Keeping of Autonomous Vehicles
Guler, Samet
2016-10-26
We study the problem of driving a mobile sensory agent to a target whose location is specified only in terms of the distances to a set of sensor stations or beacons. The beacon positions are unknown, but the agent can continuously measure its distances to them as well as its own position. This problem has two particular applications: (1) capturing a target signal source whose distances to the beacons are measured by these beacons and broadcast to a surveillance agent; (2) merging a single agent into an autonomous multi-agent system so that the new agent is positioned at desired distances from the existing agents. The problem is solved using an adaptive control framework integrating a parameter estimator producing beacon location estimates, and an adaptive motion control law fed by these estimates to steer the agent toward the target. For location estimation, a least-squares adaptive law is used. The motion control law aims to minimize a convex cost function with a unique minimizer at the target location, and is further augmented for persistence of excitation. Stability and convergence analysis is provided, as well as simulation results demonstrating performance and transient behavior.
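The estimation problem underlying both applications, recovering a position from distances to known reference points, reduces in the static noiseless case to linear least squares by differencing squared ranges. A standard trilateration sketch (not the paper's recursive adaptive law; anchor and target coordinates below are made up for illustration):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a position from anchor locations and measured ranges.
    Subtracting the first squared-range equation from the others removes
    the quadratic ||x||^2 term, leaving a linear system in x."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With noisy, streaming measurements, a recursive least-squares estimator performs this fit incrementally, which is the role of the adaptive law in the paper.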
Effect of Brain-to-Skull Conductivity Ratio on EEG Source Localization Accuracy
Gang Wang; Doutian Ren
2013-01-01
The goal of this study was to investigate the influence of the brain-to-skull conductivity ratio (BSCR) on EEG source localization accuracy. In this study, we evaluated four BSCRs: 15, 20, 25, and 80, the values mainly discussed in the literature. The scalp EEG signals were generated by BSCR-related forward computation for each cortical dipole source. Then, for each scalp EEG measurement, source reconstruction was performed to identify the estimated dipole sources by the actual ...
Global and Local Page Replacement Algorithms on Virtual Memory Systems for Image Processing
WADA, Ben Tsutom
1985-01-01
Three virtual memory systems for image processing, differing from one another in frame allocation algorithms and page replacement algorithms, were examined experimentally with respect to their page-fault characteristics. The hypothesis that global page replacement algorithms are susceptible to thrashing held in the raster-scan experiment, while it did not in another, non-raster-scan experiment. The results of the experiments may also be useful in making parallel image processors more efficient, while they a...
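Page-fault characteristics of the kind measured in these experiments are easy to reproduce for a single policy. A small sketch counts faults under LRU replacement with a fixed per-process frame allocation (i.e. the local-replacement setting); the reference string in the usage note is the textbook example, not data from the paper:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults for an LRU policy with a fixed number of frames.
    The OrderedDict keeps pages from least to most recently used."""
    mem = OrderedDict()
    faults = 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)          # p becomes most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict least recently used page
        mem[p] = True
    return faults
```

On the reference string 1,2,3,4,1,2,5,1,2,3,4,5, LRU incurs 10 faults with 3 frames and 8 with 4; unlike FIFO, adding frames never increases LRU's fault count, which is why the choice of replacement policy interacts with frame allocation in thrashing studies.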
Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei
2017-11-01
In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method combining binaural sound localization and blind source separation is proposed. Because a diesel engine has many complex noise sources, during the noise and vibration test a lead covering was applied to the engine to isolate interference noise from cylinders No. 1-5; only the No. 6 cylinder parts were left bare. Two microphones simulating the human ears were used to measure the radiated noise signals 1 m away from the engine. First, binaural sound localization is used to separate noise sources located in different places. Then, for noise sources in the same place, blind source separation is used to further separate and identify them. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine are combined to further verify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine, which are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with blind source separation alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise sources.
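The binaural localization step rests on the time difference of arrival between the two microphone signals. A minimal sketch estimates that delay as the lag maximizing the cross-correlation (the signals, sampling rate, and delay below are synthetic and illustrative, not measurements from the engine test):

```python
import numpy as np

def tdoa_crosscorr(left, right, fs):
    """Estimate the delay (seconds) of `right` relative to `left` as the
    lag that maximizes the full cross-correlation of the two channels."""
    corr = np.correlate(right, left, mode='full')
    lag = np.argmax(corr) - (len(left) - 1)   # shift to signed lag
    return lag / fs
```

With the microphone spacing and the speed of sound, the estimated delay maps to a direction of arrival, which is how sources "in different places" are told apart before blind source separation handles co-located ones.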
Directory of Open Access Journals (Sweden)
Li Ran
2017-01-01
The optimal allocation of generalized power sources in a distribution network is studied. A simple voltage stability index is put forward. Considering investment and operation benefits, voltage stability, and the pollution emissions of generalized power sources in the distribution network, a multi-objective optimization planning model is established, and a multi-objective particle swarm optimization algorithm is proposed to solve it. To improve the global search ability, the strategies of fast non-dominated sorting, elitism, and crowding distance are adopted in this algorithm. Finally, the model and algorithm were tested on the IEEE 33-node system to find the best configuration of generalized power sources. The results show that with reasonable access of generalized power sources to the active distribution network, the investment benefit and the voltage stability of the system are improved, and the proposed algorithm has better global search capability.
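Stripped of the multi-objective machinery the paper adds (fast non-dominated sorting, elitism, crowding distance), the underlying particle swarm update loop looks as follows. This is a generic single-objective sketch with illustrative inertia and acceleration coefficients, not the paper's planning model:

```python
import numpy as np

def pso(f, dim, n=20, iters=100, seed=0):
    """Minimize f over the box [-5, 5]^dim with inertia-weight PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))            # particle positions
    v = np.zeros((n, dim))                      # particle velocities
    pbest = x.copy()                            # personal best positions
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, -5, 5)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())
```

The multi-objective variant replaces the single global best with a leader drawn from an archive of non-dominated solutions, maintained by the sorting and crowding-distance strategies the abstract mentions.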
"Closing the Loop": Overcoming barriers to locally sourcing food in Fort Collins, Colorado
DeMets, C. M.
2012-12-01
Environmental sustainability has become a focal point for many communities in recent years, and restaurants are seeking creative ways to become more sustainable. As many chefs realize, sourcing food locally is an important step towards sustainability and towards building a healthy, resilient community. Review of literature on sustainability in restaurants and the local food movement revealed that chefs face many barriers to sourcing their food locally, but that there are also many solutions for overcoming these barriers that chefs are in the early stages of exploring. Therefore, the purpose of this research is to identify barriers to local sourcing and investigate how some restaurants are working to overcome those barriers in the city of Fort Collins, Colorado. To do this, interviews were conducted with four subjects who guide purchasing decisions for restaurants in Fort Collins. Two of these restaurants have created successful solutions and are able to source most of their food locally. The other two are interested in and working towards sourcing locally but have not yet been able to overcome barriers, and therefore only source a few local items. Findings show that there are four barriers and nine solutions commonly identified by each of the subjects. The research found differences between those who source most of their food locally and those who have not made as much progress in local sourcing. Based on these results, two solution flowcharts were created, one for primary barriers and one for secondary barriers, for restaurants to assess where they are in the local food chain and how they can more successfully source food locally. As there are few explicit connections between this research question and climate change, it is important to consider the implicit connections that motivate and justify this research. The question of whether or not greenhouse gas emissions are lower for locally sourced food is a topic of much debate, and while there are major developments
Open-source algorithm for detecting sea ice surface features in high-resolution optical imagery
Directory of Open Access Journals (Sweden)
N. C. Wright
2018-04-01
Snow, ice, and melt ponds cover the surface of the Arctic Ocean in fractions that change throughout the seasons. These surfaces control albedo and exert tremendous influence over the energy balance in the Arctic. Increasingly available meter- to decimeter-scale resolution optical imagery captures the evolution of the ice and ocean surface state visually, but methods for quantifying coverage of key surface types from raw imagery are not yet well established. Here we present an open-source system designed to provide a standardized, automated, and reproducible technique for processing optical imagery of sea ice. The method classifies surface coverage into three main categories: snow and bare ice, melt ponds and submerged ice, and open water. The method is demonstrated on imagery from four sensor platforms and on imagery spanning from spring thaw to fall freeze-up. Tests show the classification accuracy of this method typically exceeds 96 %. To facilitate scientific use, we evaluate the minimum observation area required for reporting a representative sample of surface coverage. We provide an open-source distribution of this algorithm and associated training datasets and suggest the community consider this a step towards standardizing optical sea ice imagery processing. We hope to encourage future collaborative efforts to improve the code base and to analyze large datasets of optical sea ice imagery.
Multi-objective optimization of a vertical ground source heat pump using evolutionary algorithm
International Nuclear Information System (INIS)
Sayyaadi, Hoseyn; Amlashi, Emad Hadaddi; Amidpour, Majid
2009-01-01
Thermodynamic and thermoeconomic optimization of a vertical ground source heat pump system has been studied. A model based on energy and exergy analysis is presented here. An economic model of the system is developed according to the Total Revenue Requirement (TRR) method. Objective functions based on the thermodynamic and thermoeconomic analyses are developed. The proposed vertical ground source heat pump system, with eight decision variables, is considered for optimization. An artificial intelligence technique known as an evolutionary algorithm (EA) has been utilized as the optimization method. This approach has been applied to minimize either the total levelized cost of the system product or the exergy destruction of the system. Three levels of optimization are performed: thermodynamic single-objective, thermoeconomic single-objective, and multi-objective. In multi-objective optimization, both thermodynamic and thermoeconomic objectives are considered simultaneously. For the multi-objective case, an example of the decision-making process for selecting the final solution from the available optimal points on the Pareto frontier is presented. The results obtained using the various optimization approaches are compared and discussed. Further, the sensitivity of the optimized systems to the interest rate, the annual number of operating hours, and the electricity cost is studied in detail.
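Selecting a final design from a Pareto frontier, as in the decision-making example mentioned above, first requires computing the non-dominated set. A minimal dominance filter for minimization problems (generic, not tied to the heat-pump model; the sample points in the usage note are made up):

```python
import numpy as np

def pareto_front(points):
    """Return a boolean mask of non-dominated rows, all objectives minimized.
    A point is dominated if another point is <= in every objective and
    strictly < in at least one."""
    pts = np.asarray(points, dtype=float)
    mask = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        if mask[i]:  # dominated points cannot dominate anything new
            dominated = np.all(pts >= pts[i], axis=1) & np.any(pts > pts[i], axis=1)
            mask[dominated] = False
    return mask
```

For cost/exergy-destruction pairs such as (1, 5), (2, 4), (3, 3), (2, 6), (4, 4), the first three are non-dominated trade-offs while the last two are dominated; the decision maker then picks among the survivors.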
Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai
2017-07-01
Drinking water distribution networks are part of critical infrastructures and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade research has focused on the development of new sensors that are able to detect malicious substances in the network and early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, only little information exists about the integration within a comprehensive real-time event detection and management system. In the following the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP 7 of the European Commission.
Open-source algorithm for detecting sea ice surface features in high-resolution optical imagery
Wright, Nicholas C.; Polashenski, Chris M.
2018-04-01
Snow, ice, and melt ponds cover the surface of the Arctic Ocean in fractions that change throughout the seasons. These surfaces control albedo and exert tremendous influence over the energy balance in the Arctic. Increasingly available meter- to decimeter-scale resolution optical imagery captures the evolution of the ice and ocean surface state visually, but methods for quantifying coverage of key surface types from raw imagery are not yet well established. Here we present an open-source system designed to provide a standardized, automated, and reproducible technique for processing optical imagery of sea ice. The method classifies surface coverage into three main categories: snow and bare ice, melt ponds and submerged ice, and open water. The method is demonstrated on imagery from four sensor platforms and on imagery spanning from spring thaw to fall freeze-up. Tests show the classification accuracy of this method typically exceeds 96 %. To facilitate scientific use, we evaluate the minimum observation area required for reporting a representative sample of surface coverage. We provide an open-source distribution of this algorithm and associated training datasets and suggest the community consider this a step towards standardizing optical sea ice imagery processing. We hope to encourage future collaborative efforts to improve the code base and to analyze large datasets of optical sea ice imagery.
Iterative observer based method for source localization problem for Poisson equation in 3D
Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem
2017-01-01
A state-observer based method is developed to solve point source localization problem for Poisson equation in a 3D rectangular prism with available boundary data. The technique requires a weighted sum of solutions of multiple boundary data
Kurugol, Sila; Dy, Jennifer G.; Rajadhyaksha, Milind; Gossage, Kirk W.; Weissmann, Jesse; Brooks, Dana H.
2011-03-01
The examination of the dermis/epidermis junction (DEJ) is clinically important for skin cancer diagnosis. Reflectance confocal microscopy (RCM) is an emerging tool for detection of skin cancers in vivo. However, visual localization of the DEJ in RCM images, with high accuracy and repeatability, is challenging, especially in fair skin, due to low contrast, heterogeneous structure and high inter- and intra-subject variability. We recently proposed a semi-automated algorithm to localize the DEJ in z-stacks of RCM images of fair skin, based on feature segmentation and classification. Here we extend the algorithm to dark skin. The extended algorithm first decides the skin type and then applies the appropriate DEJ localization method. In dark skin, strong backscatter from the pigment melanin causes the basal cells above the DEJ to appear with high contrast. To locate those high contrast regions, the algorithm operates on small tiles (regions) and finds the peaks of the smoothed average intensity depth profile of each tile. However, for some tiles, due to heterogeneity, multiple peaks in the depth profile exist and the strongest peak might not be the basal layer peak. To select the correct peak, basal cells are represented with a vector of texture features. The peak with the most similar features to this feature vector is selected. The results show that the algorithm detected the skin types correctly for all 17 stacks tested (8 fair, 9 dark). The DEJ detection algorithm achieved an average distance from the ground truth DEJ surface of around 4.7 μm for dark skin and around 7-14 μm for fair skin.
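The peak-selection step for dark skin can be sketched roughly as follows. The smoothing width, the feature vectors, and all function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def smooth(profile, k=3):
    """Moving-average smoothing of an average-intensity depth profile."""
    kernel = np.ones(k) / k
    return np.convolve(profile, kernel, mode="same")

def find_peaks(profile):
    """Indices of local maxima (strictly greater than both neighbors)."""
    p = profile
    return [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]

def select_basal_peak(peaks, peak_features, reference):
    """Among candidate depth-profile peaks, pick the one whose texture-feature
    vector is closest (Euclidean distance) to a reference basal-cell feature
    vector, mirroring the correct-peak selection described in the abstract."""
    dists = [np.linalg.norm(np.asarray(f) - np.asarray(reference))
             for f in peak_features]
    return peaks[int(np.argmin(dists))]
```

For a toy profile `[0, 1, 0, 2, 0]` the candidate peaks are at depths 1 and 3, and the peak whose (here one-dimensional) feature is nearest the reference wins.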
Energy Technology Data Exchange (ETDEWEB)
Soufi, M [Shahid Beheshti University, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)
2015-06-15
Purpose: The objective of this study was to find the best seed localization parameters for random walk algorithm application to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise, and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, is reliably robust to image noise. Its fast computation and fast editing characteristics also make it powerful for clinical purposes. We implemented the random walk algorithm in MATLAB. Validation and verification of the algorithm were done with the 4D-NCAT phantom, using spherical lung lesions with diameters from 20 to 90 mm (in incremental steps of 10 mm) and tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) was applied to reconstruct the phantom PET images with voxel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels at different maximum Standardized Uptake Value (SUVmax) percentages: at least 70%, 80%, 90%, or 100% of SUVmax for foreground seeds, and up to 20% to 55% of SUVmax (in 5% increments) for background seeds. To investigate algorithm performance on clinical data, 19 patients with lung tumors were also studied. The resulting contours from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting pixels with at least 70% of SUVmax as foreground seeds and pixels up to 30% of SUVmax as background seeds, respectively. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels were obtained for the phantom (clinical) study. Conclusion: The accurate results of random walk algorithm in PET image segmentation assure its application for radiation treatment planning and
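The seed-selection rule reported in the Results can be sketched in a few lines. This is a simplified illustration on a flat SUV array; the study itself operated on full 3D PET volumes in MATLAB:

```python
import numpy as np

def select_seeds(suv, fg_frac=0.70, bg_frac=0.30):
    """Select random-walk seed masks from a PET SUV map.
    Foreground seeds: voxels with SUV >= fg_frac * SUVmax.
    Background seeds: voxels with SUV <= bg_frac * SUVmax.
    The defaults (70% and 30% of SUVmax) are the best-performing
    values reported in the abstract."""
    suv_max = suv.max()
    fg = suv >= fg_frac * suv_max
    bg = suv <= bg_frac * suv_max
    return fg, bg
```

The returned boolean masks would then be handed to a random walk segmenter as labeled seed sets.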
Directory of Open Access Journals (Sweden)
Filip Petru
2010-12-01
This paper addresses the following questions: What are the consequences of an increase in the taxation base? What forms does the taxation base take? What can local authorities do to make certain areas attractive? Who are the specific players involved in local economic development? Beyond the rigour of a mathematical treatment of sustainable economic development, we argue that financial managers also benefit from understanding the mechanism by which the allocation of public resources generates additional revenues: it is useful for making and justifying decisions to invest in public infrastructure, and for calculating the period over which such investments can be depreciated from the resulting supplementary revenue flows.
Self-organized spectrum chunk selection algorithm for Local Area LTE-Advanced
DEFF Research Database (Denmark)
Kumar, Sanjay; Wang, Yuanye; Marchetti, Nicola
2010-01-01
This paper presents a self-organized spectrum chunk selection algorithm that minimizes mutual inter-cell interference among Home eNodeBs (HeNBs), aiming to improve system throughput compared with the existing frequency reuse-1 scheme. The proposed algorithm is useful...
3D source localization of interictal spikes in epilepsy patients with MRI lesions
Ding, Lei; Worrell, Gregory A.; Lagerlund, Terrence D.; He, Bin
2006-08-01
The present study aims to accurately localize epileptogenic regions which are responsible for epileptic activities in epilepsy patients by means of a new subspace source localization approach, i.e. first principle vectors (FINE), using scalp EEG recordings. Computer simulations were first performed to assess source localization accuracy of FINE in the clinical electrode set-up. The source localization results from FINE were compared with the results from a classic subspace source localization approach, i.e. MUSIC, and their differences were tested statistically using the paired t-test. Other factors influencing the source localization accuracy were assessed statistically by ANOVA. The interictal epileptiform spike data from three adult epilepsy patients with medically intractable partial epilepsy and well-defined symptomatic MRI lesions were then studied using both FINE and MUSIC. The comparison between the electrical sources estimated by the subspace source localization approaches and MRI lesions was made through the coregistration between the EEG recordings and MRI scans. The accuracy of estimations made by FINE and MUSIC was also evaluated and compared by R2 statistic, which was used to indicate the goodness-of-fit of the estimated sources to the scalp EEG recordings. The three-concentric-spheres head volume conductor model was built for each patient with three spheres of different radii which takes the individual head size and skull thickness into consideration. The results from computer simulations indicate that the improvement of source spatial resolvability and localization accuracy of FINE as compared with MUSIC is significant when simulated sources are closely spaced, deep, or signal-to-noise ratio is low in a clinical electrode set-up. The interictal electrical generators estimated by FINE and MUSIC are in concordance with the patients' structural abnormality, i.e. MRI lesions, in all three patients. The higher R2 values achieved by FINE than MUSIC
DOA Estimation of Multiple LFM Sources Using a STFT-based and FBSS-based MUSIC Algorithm
Directory of Open Access Journals (Sweden)
K. B. Cui
2017-12-01
Direction of arrival (DOA) estimation is an important problem in array signal processing. An effective multiple signal classification (MUSIC) method based on the short-time Fourier transform (STFT) and forward/backward spatial smoothing (FBSS) techniques is addressed for the DOA estimation of multiple time-frequency (t-f) joint LFM sources. Previous work in this area, e.g., the STFT-MUSIC algorithm, cannot resolve completely or largely t-f joint sources because it can only select single-source t-f points. The proposed method constructs the spatial t-f distributions (STFDs) by selecting multiple-source t-f points and uses the FBSS technique to solve the problem of rank loss. In this way, the STFT-FBSS-MUSIC algorithm can resolve largely or completely t-f joint LFM sources. In addition, the proposed algorithm has rather low computational complexity when resolving multiple LFM sources because it reduces the number of eigendecompositions and spectrum searches. The performance of the proposed method is compared with that of existing t-f based MUSIC algorithms through computer simulations, and the results show its good performance.
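A rough sketch of the FBSS covariance estimate and the MUSIC pseudospectrum for a half-wavelength uniform linear array is given below. It omits the STFT-based selection of multiple-source t-f points, and the array size, subarray size, and grid are illustrative choices:

```python
import numpy as np

def fbss_covariance(X, m):
    """Forward/backward spatially smoothed covariance for a ULA.
    X: (N, T) snapshot matrix from an N-element array; m: subarray size.
    Averaging over the N-m+1 forward subarrays and their backward
    (flipped, conjugated) counterparts restores the rank that is lost
    when sources are coherent."""
    N, T = X.shape
    L = N - m + 1
    J = np.fliplr(np.eye(m))                 # exchange matrix
    R = np.zeros((m, m), dtype=complex)
    for i in range(L):
        Xi = X[i:i + m, :]
        Rf = Xi @ Xi.conj().T / T            # forward subarray covariance
        R += Rf + J @ Rf.conj() @ J          # add backward counterpart
    return R / (2 * L)

def music_spectrum(R, d_signals, angles_deg):
    """MUSIC pseudospectrum over candidate angles for a half-wavelength ULA."""
    m = R.shape[0]
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, :m - d_signals]             # noise subspace
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(th))
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)
```

With a single simulated source at 20°, the pseudospectrum peaks at the true angle even though the smoothed subarray is smaller than the physical array.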
Kim, Jinsul
In this letter, we propose a distorted-scene enhancement algorithm to provide end users with a perceptual QoE-guaranteed IPTV service. Block edge detection with a weight factor, combined with a partition-based local color value method, is applied to video frames degraded by network transmission errors such as out-of-order delivery, jitter, and packet loss, improving QoE efficiently. Quality-metric results show that distorted scenes processed by the enhancement algorithm are restored better than with the other methods considered.
Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.
1986-01-01
The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.
Land as a Source of Revenue Mobilisation for Local Authorities in Ghana
African Journals Online (AJOL)
Prince Acheampong
available to the local authorities to raise money from their land resources. ... In Ghana, the Local Government Act, 1993 (Act 462) lists ten main sources of ..... The betterment levy is a tax, which has been little used – perhaps because it is ...
A Research of RSSI-AM Localization Algorithm Based on Data Encryption in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Wang Wei
2014-07-01
In practical applications of wireless sensor networks, the open environment leaves signals vulnerable to attack, and traditional RSSI-based location techniques produce errors. By analyzing the RSSI location model, this paper proposes a new encryption-modulation algorithm, RSSI-AM, which is unlike most approaches. The location algorithm has the following advantages: simple calculation, strong security, strong anti-interference ability, and no hardware expansion required. Simulation experiments show that the location precision of the ranging method based on RSSI-AM is clearly improved compared with the traditional algorithm. It can be used in wireless sensor network nodes with low-cost, low-performance hardware.
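The ranging step that underlies RSSI-based localization is commonly the log-distance path-loss model; a minimal sketch follows. The reference RSSI and path-loss exponent are illustrative values, and the RSSI-AM encryption modulation itself is not reproduced here:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.5):
    """Estimate distance (meters) from a received signal strength via the
    log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10*n*log10(d),
    so d = 10 ** ((RSSI(1 m) - RSSI(d)) / (10 * n)).
    Parameter values are illustrative and environment-dependent."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))
```

For example, with these parameters a reading of -65 dBm corresponds to a range estimate of 10 m.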
Steinwandt, Jens; Roemer, Florian; Haardt, Martin; Galdo, Giovanni Del
2014-09-01
High-resolution parameter estimation algorithms designed to exploit the prior knowledge about incident signals from strictly second-order (SO) non-circular (NC) sources allow for a lower estimation error and can resolve twice as many sources. In this paper, we derive the R-D NC Standard ESPRIT and the R-D NC Unitary ESPRIT algorithms that provide a significantly better performance compared to their original versions for arbitrary source signals. They are applicable to shift-invariant R-D antenna arrays and do not require a centrosymmetric array structure. Moreover, we present a first-order asymptotic performance analysis of the proposed algorithms, which is based on the error in the signal subspace estimate arising from the noise perturbation. The derived expressions for the resulting parameter estimation error are explicit in the noise realizations and asymptotic in the effective signal-to-noise ratio (SNR), i.e., the results become exact for either high SNRs or a large sample size. We also provide mean squared error (MSE) expressions, where only the assumptions of a zero mean and finite SO moments of the noise are required, but no assumptions about its statistics are necessary. As a main result, we analytically prove that the asymptotic performance of both R-D NC ESPRIT-type algorithms is identical in the high effective SNR regime. Finally, a case study shows that no improvement from strictly non-circular sources can be achieved in the special case of a single source.
Lee, Jun Chang; Nam, Kyoung Won; Jang, Dong Pyo; Kim, In Young
2015-12-01
Previously suggested diagonal-steering algorithms for binaural hearing support devices have commonly assumed that the direction of the speech signal is known in advance, which is not always the case in many real circumstances. In this study, a new diagonal-steering-based binaural speech localization (BSL) algorithm is proposed, and the performances of the BSL algorithm and the binaural beamforming algorithm, which integrates the BSL and diagonal-steering algorithms, were evaluated using actual speech-in-noise signals in several simulated listening scenarios. Testing sounds were recorded in a KEMAR mannequin setup, and two objective indices, improvement in signal-to-noise ratio (SNRi) and segmental SNR (segSNRi), were utilized for performance evaluation. Experimental results demonstrated that the accuracy of the BSL was in the 90-100% range when the input SNR was in the -10 to +5 dB range. The average differences between the γ-adjusted and γ-fixed diagonal-steering algorithms (for -15 to +5 dB input SNR) in the talking-in-a-restaurant scenario were 0.203-0.937 dB for SNRi and 0.052-0.437 dB for segSNRi, and in the listening-while-car-driving scenario, the differences were 0.387-0.835 dB for SNRi and 0.259-1.175 dB for segSNRi. In addition, the average difference between the BSL-turned-on and BSL-turned-off cases for the binaural beamforming algorithm in the listening-while-car-driving scenario was 1.631-4.246 dB for SNRi and 0.574-2.784 dB for segSNRi. In all testing conditions, the γ-adjusted diagonal-steering and BSL algorithms improved the values of the indices more than the conventional algorithms. The binaural beamforming algorithm, which integrates the proposed BSL and diagonal-steering algorithms, is expected to improve the performance of binaural hearing support devices in noisy situations. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
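The two evaluation indices can be sketched as follows: SNRi is the output SNR minus the input SNR, and segmental SNR averages frame-wise SNRs. The frame length here is an illustrative choice, not necessarily the one used in the study:

```python
import numpy as np

def snr_db(clean, noisy):
    """Global SNR in dB; the noise is the residual (noisy - clean).
    SNRi is then snr_db(clean, output) - snr_db(clean, input)."""
    noise = noisy - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def seg_snr_db(clean, noisy, frame=256):
    """Segmental SNR: mean of per-frame SNRs over non-overlapping frames."""
    vals = []
    for i in range(0, len(clean) - frame + 1, frame):
        c = clean[i:i + frame]
        n = noisy[i:i + frame] - c
        vals.append(10.0 * np.log10(np.sum(c ** 2) / np.sum(n ** 2)))
    return float(np.mean(vals))
```

When the residual noise is stationary across frames, the two indices agree; segmental SNR mainly penalizes frames where a processor performs poorly locally.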
Liu, Y.; Arntsen, B.; Wapenaar, C.P.A.; Van der Neut, J.R.
2014-01-01
The virtual source method has been applied successfully to retrieve the impulse response between pairs of receivers in the subsurface. This method is further improved by an up-down separation prior to the crosscorrelation to suppress reflections from the overburden and the free surface. In a
Tenke, Craig E.; Kayser, Jürgen
2012-01-01
The topographic ambiguity and reference-dependency that has plagued EEG/ERP research throughout its history are largely attributable to volume conduction, which may be concisely described by a vector form of Ohm’s Law. This biophysical relationship is common to popular algorithms that infer neuronal generators via inverse solutions. It may be further simplified as Poisson’s source equation, which identifies underlying current generators from estimates of the second spatial derivative of the field potential (Laplacian transformation). Intracranial current source density (CSD) studies have dissected the “cortical dipole” into intracortical sources and sinks, corresponding to physiologically-meaningful patterns of neuronal activity at a sublaminar resolution, much of which is locally cancelled (i.e., closed field). By virtue of the macroscopic scale of the scalp-recorded EEG, a surface Laplacian reflects the radial projections of these underlying currents, representing a unique, unambiguous measure of neuronal activity at scalp. Although the surface Laplacian requires minimal assumptions compared to complex, model-sensitive inverses, the resulting waveform topographies faithfully summarize and simplify essential constraints that must be placed on putative generators of a scalp potential topography, even if they arise from deep or partially-closed fields. CSD methods thereby provide a global empirical and biophysical context for generator localization, spanning scales from intracortical to scalp recordings. PMID:22796039
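The Laplacian transformation at the heart of CSD methods can be illustrated with a five-point finite-difference stencil on a gridded potential map. This is a crude stand-in for the spherical-spline surface-Laplacian estimators used in practice, included only to make Poisson's source equation concrete:

```python
import numpy as np

def surface_laplacian(potential):
    """Five-point finite-difference Laplacian of a 2-D scalp potential map
    (unit grid spacing assumed). By Poisson's source equation, the negative
    Laplacian of the potential is proportional to the local current source
    density, so positive output values mark current sources and negative
    values mark sinks. Border samples are left at zero."""
    p = np.asarray(potential, dtype=float)
    lap = np.zeros_like(p)
    lap[1:-1, 1:-1] = (p[1:-1, :-2] + p[1:-1, 2:] +
                       p[:-2, 1:-1] + p[2:, 1:-1] - 4.0 * p[1:-1, 1:-1])
    return -lap
```

Applied to an isolated positive potential peak, the transform yields a positive (source-like) value at the peak, illustrating how the Laplacian sharpens and de-references the raw topography.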
Autonomous Micro-Air-Vehicle Control Based on Visual Sensing for Odor Source Localization
Directory of Open Access Journals (Sweden)
Kenzo Kurotsuchi
2017-07-01
In this paper, we propose a novel control method for autonomous odor-source localization using visual and odor sensing by micro air vehicles (MAVs). Our method is based on biomimetics, which enables highly autonomous localization. It requires no instruction signals, not even global positioning system (GPS) signals. An experimenter simply blows a whistle, and the MAV then starts to hover, seeks the odor source, and keeps hovering near it. GPS-signal-free control based on visual sensing enables indoor and underground use. Moreover, the MAV is lightweight (85 grams) and does not cause harm even if it accidentally falls. Experiments conducted in the real world successfully localized an odor source using the MAV with a bio-inspired search method. The distance error of the localization was 63 cm, more accurate than the target distance of 120 cm required for individual identification. These localization experiments are a first step toward a proof of concept for a danger warning system that would enable a safer and more secure society.
Delle Monache, L.; Rodriguez, L. M.; Meech, S.; Hahn, D.; Betancourt, T.; Steinhoff, D.
2016-12-01
It is necessary to accurately estimate the initial source characteristics in the event of an accidental or intentional release of a Chemical, Biological, Radiological, or Nuclear (CBRN) agent into the atmosphere. Accurate estimation of the source characteristics is important because they are often unknown, and Atmospheric Transport and Dispersion (AT&D) models rely heavily on these estimates to create hazard assessments. To correctly assess the source characteristics in an operational environment where time is critical, the National Center for Atmospheric Research (NCAR) has developed a Source Term Estimation (STE) method known as the Variational Iterative Refinement STE Algorithm (VIRSA). VIRSA consists of a combination of modeling systems: an AT&D model, its corresponding STE model, a Hybrid Lagrangian-Eulerian Plume Model (H-LEPM), and its mathematical adjoint model. In an operational scenario where we have information on the infrastructure of a city, the AT&D model used is the Urban Dispersion Model (UDM), and when using this model in VIRSA we refer to the system as uVIRSA. In all other scenarios, where city infrastructure information is not readily available, the AT&D model used is the Second-order Closure Integrated PUFF model (SCIPUFF), and the system is referred to as sVIRSA. VIRSA was originally developed using SCIPUFF 2.4 for the Defense Threat Reduction Agency and integrated into the Hazard Prediction and Assessment Capability and the Joint Program for Information Systems Joint Effects Model. The results discussed here are the verification and validation of the upgraded system with SCIPUFF 3.0 and the newly implemented UDM capability. To verify uVIRSA and sVIRSA, synthetic concentration observation scenarios were created in urban and rural environments, and the results of this verification are shown. Finally, we validate the STE performance of uVIRSA using scenarios from the Joint Urban 2003 (JU03
Multi-scale spatial modeling of human exposure from local sources to global intake
DEFF Research Database (Denmark)
Wannaz, Cedric; Fantke, Peter; Jolliet, Olivier
2018-01-01
Exposure studies used in human health risk and impact assessments of chemicals are largely performed locally or regionally. It is usually not known how global impacts resulting from exposure to point source emissions compare to local impacts. To address this problem, we introduce Pangea, an innovative multi-scale, spatial multimedia fate and exposure assessment model. We study local to global population exposure associated with emissions from 126 point sources matching locations of waste-to-energy plants across France. Results for three chemicals with distinct physicochemical properties ... occur within a 100 km radius from the source. This suggests that, by neglecting distant low-level exposure, local assessments might only account for fractions of global cumulative intakes. We also study ~10,000 emission locations covering France more densely to determine per chemical and exposure route ...
Acoustic Source Localization via Subspace Based Method Using Small Aperture MEMS Arrays
Directory of Open Access Journals (Sweden)
Xin Zhang
2014-01-01
Small-aperture microphone arrays offer many advantages for portable devices and hearing-aid equipment. In this paper, a subspace-based localization method is proposed for acoustic sources using small-aperture arrays. The effect of array aperture on localization is analyzed using the array response (array manifold). Besides array aperture, the frequency of the acoustic source and the variance of signal power are simulated to demonstrate how to optimize localization performance, which is carried out by introducing frequency error with the proposed method. The proposed method is validated for a 5 mm array aperture by simulations and experiments with MEMS microphone arrays. Different types of acoustic sources can be localized with a precision as high as 6 degrees, even in the presence of wind and other noise. Furthermore, the proposed method reduces computational complexity compared with other methods.
Directory of Open Access Journals (Sweden)
Weizhen Rao
2016-01-01
The classical vehicle routing problem (VRP) generally minimizes either the total vehicle travel distance or the total number of dispatched vehicles. With the increased importance of environmental sustainability, a variant of the VRP that minimizes total vehicle fuel consumption has gained much attention. The resulting fuel consumption VRP (FCVRP) is increasingly important yet difficult. We present a mixed integer programming model for the FCVRP in which fuel consumption is measured through the degree of road gradient. A complexity analysis of the FCVRP is presented through analogy with the capacitated VRP. To tackle the FCVRP's computational intractability, we propose an efficient two-objective hybrid local search algorithm (TOHLS). TOHLS is based on a hybrid local search algorithm (HLS) that is also used to solve the FCVRP. Based on the Golden CVRP benchmarks, 60 FCVRP instances are generated and tested. The computational results show that the proposed TOHLS significantly outperforms the HLS.
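The flavor of such a local search can be illustrated with a plain 2-opt move on a single route. Plain Euclidean distance stands in for the paper's cost function; the actual FCVRP objective would weight each arc by fuel consumption derived from the road gradient:

```python
import math

def route_length(route, pts):
    """Total length of a route given as a list of point indices into pts."""
    return sum(math.dist(pts[route[i]], pts[route[i + 1]])
               for i in range(len(route) - 1))

def two_opt(route, pts):
    """2-opt local search: repeatedly reverse route segments while the
    total cost improves, stopping at a local optimum. Distance is an
    illustrative stand-in for a gradient-dependent fuel-consumption cost."""
    best = route[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 2):
            for j in range(i + 1, len(best) - 1):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if route_length(cand, pts) < route_length(best, pts) - 1e-12:
                    best, improved = cand, True
    return best
```

On a unit square with a crossing route, the search untangles the crossing and returns the 4-unit perimeter tour.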
Asiri, Sharefa M.; Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem
2017-01-01
In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on modulating functions method where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.
Xu, Yunjun; Remeikas, Charles; Pham, Khanh
2014-03-01
Cooperative trajectory planning is crucial for networked vehicles to respond rapidly in cluttered environments and has a significant impact on many applications such as air traffic or border security monitoring and assessment. One of the challenges in cooperative planning is to find a computationally efficient algorithm that can accommodate both the complexity of the environment and real hardware and configuration constraints of vehicles in the formation. Inspired by a local pursuit strategy observed in foraging ants, feasible and optimal trajectory planning algorithms are proposed in this paper for a class of nonlinear constrained cooperative vehicles in environments with densely populated obstacles. In an iterative hierarchical approach, the local behaviours, such as the formation stability, obstacle avoidance, and individual vehicle's constraints, are considered in each vehicle's (i.e. follower's) decentralised optimisation. The cooperative-level behaviours, such as the inter-vehicle collision avoidance, are considered in the virtual leader's centralised optimisation. Early termination conditions are derived to reduce the computational cost by not wasting time in the local-level optimisation if the virtual leader trajectory does not satisfy those conditions. The expected advantages of the proposed algorithms are (1) the formation can be globally asymptotically maintained in a decentralised manner; (2) each vehicle decides its local trajectory using only the virtual leader and its own information; (3) the formation convergence speed is controlled by one single parameter, which makes it attractive for many practical applications; (4) nonlinear dynamics and many realistic constraints, such as the speed limitation and obstacle avoidance, can be easily considered; (5) inter-vehicle collision avoidance can be guaranteed in both the formation transient stage and the formation steady stage; and (6) the computational cost in finding both the feasible and optimal
International Nuclear Information System (INIS)
Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng
2015-01-01
The growing use of composite materials in aircraft structures has attracted much attention to impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit wide-band behavior, making it difficult to obtain the phase velocity directly. In addition, composite structures usually have obvious anisotropy, and the complex structural style of real aircraft further accentuates this effect, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of the measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Three typical categories of 41 impacts are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impacts on complex composite structures with obviously improved accuracy. (paper)
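As a sketch of the scanning step such MUSIC variants build on, the following narrowband pseudospectrum computation (plain NumPy; the uniform linear array, source angle, and noise level are illustrative assumptions, not the paper's setup) localizes a simulated source by scanning the noise subspace:

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """MUSIC pseudospectrum for a uniform linear array.
    X: (sensors, snapshots) complex snapshots; d: spacing in wavelengths."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance matrix
    w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = V[:, :M - n_sources]                 # noise-subspace eigenvectors
    P = []
    for th in angles:
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(th)))
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.array(P)

# Simulate one narrowband source at +20 degrees on an 8-element array.
rng = np.random.default_rng(0)
M, N, theta = 8, 200, 20.0
a = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(theta)))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a, s) + noise
angles, P = music_spectrum(X, n_sources=1)
est = angles[np.argmax(P)]                    # peak of the pseudospectrum
```

The pseudospectrum peaks sharply at the true direction because the steering vector there is orthogonal to the noise subspace.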
Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad
2018-02-01
Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability regarding interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (Aperio nuclear algorithm for CD3+ cells/mm² and Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm²). Based on visual inspection of "markup" images, the CD3 quantitation algorithms produced adequate accuracy. Additionally, the CD3 quantitation algorithms correlated with each other and also with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to algorithms presents salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.
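The positive-pixel-count idea behind the custom CD3+% measure can be illustrated with a minimal thresholding sketch (NumPy; the synthetic image and threshold are assumptions, not the paper's stain calibration):

```python
import numpy as np

def positive_fraction(stain, threshold=0.3):
    """Fraction of pixels whose stain intensity exceeds a threshold --
    a minimal analogue of a positive-pixel-count algorithm."""
    mask = stain > threshold
    return mask.sum() / stain.size

# Synthetic 100x100 stain-intensity image: background at 0.1,
# one 20x20 strongly positive patch at 0.8.
img = np.full((100, 100), 0.1)
img[40:60, 40:60] = 0.8
frac = positive_fraction(img)   # 400 positive pixels out of 10000
```

Real pipelines add color deconvolution and per-slide threshold calibration before this counting step; the density measure then divides the positive count by the tissue area.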
DEFF Research Database (Denmark)
Hadjidemetriou, Lenos; Kyriakides, Elias; Blaabjerg, Frede
2015-01-01
Interconnected renewable energy sources (RES) require fast and accurate fault ride through (FRT) operation, in order to support the power grid, when faults occur. This paper proposes an adaptive phase-locked loop (adaptive dαβPLL) algorithm, which can be used for a faster and more accurate response...
Underwater Broadband Source Localization Based on Modal Filtering and Features Extraction
Directory of Open Access Journals (Sweden)
Dominique Fattaccioli
2010-01-01
Full Text Available Passive source localization is a crucial issue in underwater acoustics. In this paper, we focus on shallow water environments (0 to 400 m) and broadband Ultra-Low Frequency acoustic sources (1 to 100 Hz). In this configuration and at long range, the acoustic propagation can be described by normal mode theory. The propagating signal breaks up into a series of depth-dependent modes. These modes carry information about the source position. Mode excitation factors and mode phase analysis allow, respectively, localization in depth and in distance. We propose two different approaches to achieve the localization: a multidimensional approach (using a horizontal array of hydrophones) based on the frequency-wavenumber transform (F-K method) and a monodimensional approach (using a single hydrophone) based on an adapted spectral representation (FTa method). For both approaches, we first propose complete tools for modal filtering, and then depth and distance estimators. We show that adding mode sign and source spectrum information considerably improves the localization performance in depth. The reference acoustic field needed for depth localization is simulated with the new realistic propagation model Moctesuma. The feasibility of both approaches, F-K and FTa, is validated on data simulated in shallow water for different configurations. The performance of localization, in depth and distance, is very satisfactory.
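The frequency-wavenumber (F-K) idea — that a spatial Fourier transform over a horizontal array concentrates each propagating component at its horizontal wavenumber — can be sketched for a single plane-wave mode (NumPy; the array geometry, frequency, and sound speed are illustrative, not the paper's configuration):

```python
import numpy as np

# A plane wave sampled across a horizontal line array: its spatial FFT
# (the F-K idea at one frequency) peaks at the horizontal wavenumber.
c, f = 1500.0, 50.0                       # sound speed (m/s), frequency (Hz)
k_true = 2 * np.pi * f / c                # horizontal wavenumber (rad/m)
x = np.arange(64) * 10.0                  # 64 hydrophones, 10 m spacing
field = np.exp(1j * k_true * x)           # complex pressure across the array
spec = np.fft.fft(field)
k_axis = 2 * np.pi * np.fft.fftfreq(64, d=10.0)
k_est = k_axis[np.argmax(np.abs(spec))]   # wavenumber of the strongest bin
```

With several modes present, the F-K spectrum shows one peak per modal wavenumber, which is what makes modal filtering with a horizontal array possible; resolution is set by the array aperture.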
Underwater Broadband Source Localization Based on Modal Filtering and Features Extraction
Directory of Open Access Journals (Sweden)
Cristol Xavier
2010-01-01
Full Text Available Passive source localization is a crucial issue in underwater acoustics. In this paper, we focus on shallow water environments (0 to 400 m) and broadband Ultra-Low Frequency acoustic sources (1 to 100 Hz). In this configuration and at long range, the acoustic propagation can be described by normal mode theory. The propagating signal breaks up into a series of depth-dependent modes. These modes carry information about the source position. Mode excitation factors and mode phase analysis allow, respectively, localization in depth and in distance. We propose two different approaches to achieve the localization: a multidimensional approach (using a horizontal array of hydrophones) based on the frequency-wavenumber transform (F-K method) and a monodimensional approach (using a single hydrophone) based on an adapted spectral representation (FTa method). For both approaches, we first propose complete tools for modal filtering, and then depth and distance estimators. We show that adding mode sign and source spectrum information considerably improves the localization performance in depth. The reference acoustic field needed for depth localization is simulated with the new realistic propagation model Moctesuma. The feasibility of both approaches, F-K and FTa, is validated on data simulated in shallow water for different configurations. The performance of localization, in depth and distance, is very satisfactory.
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels, and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
Directory of Open Access Journals (Sweden)
Rasheda Arman Chowdhury
Full Text Available Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels, and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
Pomareda, Víctor; Magrans, Rudys; Jiménez-Soto, Juan M; Martínez, Dani; Tresánchez, Marcel; Burgués, Javier; Palacín, Jordi; Marco, Santiago
2017-04-20
We present the estimation of a likelihood map for the location of the source of a chemical plume dispersed under atmospheric turbulence with uniform wind conditions. The main contribution of this work is to extend previous proposals based on Bayesian inference with binary detections to the use of concentration information, while at the same time being robust against the presence of background chemical noise. To that end, the algorithm builds a background model with robust statistical measurements to assess the posterior probability that a given chemical concentration reading comes from the background or from a source emitting at a distance with a specific release rate. In addition, our algorithm allows multiple mobile gas sensors to be used. Ten realistic simulations and ten real data experiments are used for evaluation purposes. For the simulations, we have supposed that sensors are mounted on cars which do not have navigating toward the source among their main tasks. To collect the real dataset, a special arena with induced wind is built, and an autonomous vehicle equipped with several sensors, including a photo ionization detector (PID) for sensing chemical concentration, is used. Simulation results show that our algorithm provides a better estimation of the source location, even at low background levels that favor the performance of the binary version. The improvement is clear for the synthetic data, while for the real data the estimation is only slightly better, probably because our exploration arena is not able to provide uniform wind conditions. Finally, an estimation of the computational cost of the algorithmic proposal is presented.
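A toy version of the grid-based Bayesian idea (not the paper's plume model — the isotropic decay law, noise level, and release rate here are assumptions) accumulates the log-likelihood of concentration readings over a grid of candidate source positions:

```python
import numpy as np

def update_loglik(loglik, gx, gy, sx, sy, reading, Q=1.0, sigma=0.05, bg=0.02):
    """Add one concentration reading taken at sensor position (sx, sy) to the
    log-likelihood map over candidate source positions (toy decay model)."""
    d2 = (gx - sx) ** 2 + (gy - sy) ** 2
    expected = bg + Q / (1.0 + d2)        # hypothetical mean concentration
    return loglik - 0.5 * ((reading - expected) / sigma) ** 2

rng = np.random.default_rng(1)
gx, gy = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
true = (7.0, 3.0)                          # true source position
loglik = np.zeros_like(gx)
for _ in range(50):                        # 50 readings at random sensor spots
    sx, sy = rng.uniform(0, 10, 2)
    d2 = (true[0] - sx) ** 2 + (true[1] - sy) ** 2
    reading = 0.02 + 1.0 / (1.0 + d2) + rng.normal(0, 0.05)
    loglik = update_loglik(loglik, gx, gy, sx, sy, reading)
i, j = np.unravel_index(np.argmax(loglik), loglik.shape)
est = (gx[i, j], gy[i, j])                 # maximum-likelihood source position
```

The paper's background model replaces the fixed `bg` term with robust statistics estimated online, which is what gives the method its tolerance to chemical noise.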
A Survey of Sound Source Localization Methods in Wireless Acoustic Sensor Networks
Directory of Open Access Journals (Sweden)
Maximo Cobos
2017-01-01
Full Text Available Wireless acoustic sensor networks (WASNs) are formed by a distributed group of acoustic-sensing devices featuring audio playing and recording capabilities. Current mobile computing platforms offer great possibilities for the design of audio-related applications involving acoustic-sensing nodes. In this context, acoustic source localization is one of the application domains that have attracted the most attention of the research community over the last decades. In general terms, the localization of acoustic sources can be achieved by studying energy and temporal and/or directional features of the incoming sound at different microphones and using a suitable model that relates those features to the spatial location of the source (or sources) of interest. This paper reviews common approaches for source localization in WASNs that are focused on different types of acoustic features, namely, the energy of the incoming signals, their time of arrival (TOA) or time difference of arrival (TDOA), the direction of arrival (DOA), and the steered response power (SRP) resulting from combining multiple microphone signals. Additionally, we discuss methods not only aimed at localizing acoustic sources but also designed to locate the nodes themselves in the network. Finally, we discuss current challenges and frontiers in this field.
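A standard estimator for the TDOA feature mentioned above is generalized cross-correlation with phase transform weighting; a minimal GCC-PHAT sketch (NumPy; the simulated delay and sampling rate are illustrative) recovers the delay between two microphone signals:

```python
import numpy as np

def gcc_phat(x, y, fs):
    """Estimate the delay of y relative to x (seconds) with GCC-PHAT.
    Returns tau such that y[n] is approximately x[n - tau*fs]."""
    n = 2 * max(len(x), len(y))               # zero-pad for linear correlation
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    G = X * np.conj(Y)
    G /= np.maximum(np.abs(G), 1e-12)         # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(G, n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))  # center zero lag
    lag = np.argmax(cc) - n // 2
    return -lag / fs

fs, delay_samples = 16000, 25
rng = np.random.default_rng(2)
s = rng.standard_normal(4096)
x = s
y = np.concatenate((np.zeros(delay_samples), s[:-delay_samples]))  # y lags x
tdoa = gcc_phat(x, y, fs)                      # should be 25 / 16000 s
```

With three or more microphones, the pairwise TDOAs define hyperbolic constraints whose intersection gives the source position, which is the geometric step the surveyed TDOA methods then solve.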
Rodebaugh, Raymond Francis, Jr.
2000-11-01
In this project we applied modifications of the Fermi-Eyges multiple scattering theory to attempt to achieve the goals of a fast, accurate electron dose calculation algorithm. The dose was first calculated for an "average configuration" based on the patient's anatomy using a modification of the Hogstrom algorithm. It was split into a measured central axis depth dose component based on the material between the source and the dose calculation point, and an off-axis component based on the physics of multiple Coulomb scattering for the average configuration. The former provided the general depth dose characteristics along the beam fan lines, while the latter provided the effects of collimation. The Gaussian localized heterogeneities theory of Jette provided the lateral redistribution of the electron fluence by heterogeneities. Here we terminated Jette's infinite series of fluence redistribution terms after the second term. Experimental comparison data were collected for 1 cm thick x 1 cm diameter air and aluminum pillboxes using the Varian 2100C linear accelerator at Rush-Presbyterian-St. Luke's Medical Center. For the air pillbox, the algorithm results were in reasonable agreement with measured data at both 9 and 20 MeV. For the aluminum pillbox, there were significant discrepancies between the results of this algorithm and experiment. This was particularly apparent for the 9 MeV beam. Of course, a one cm thick aluminum heterogeneity is unlikely to be encountered in a clinical situation; the thickness, linear stopping power, and linear scattering power of aluminum are all well above what would normally be encountered. We found that the algorithm is highly sensitive to the choice of the average configuration. This is an indication that the series of fluence redistribution terms does not converge fast enough to terminate after the second term. It also makes it difficult to apply the algorithm to cases where there are no a priori means of choosing the best average
Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun
2017-03-05
In order to identify the parameters of a hazardous gas emission source in the atmosphere with limited prior information and reliable probability estimation, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but the method is invalid when information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model was transformed to a linear form under some assumptions, and the source parameters, including source strength and location, were identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimation results for the different source parameters are close to each other with different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. The comparison of simulation and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method. The confidence intervals from linear Tikhonov-PSO are more reasonable than those from the nonlinear method. The estimation results from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, and a reasonable confidence interval with some probability levels can additionally be given by the Tikhonov-PSO method. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission
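The linear Tikhonov step can be sketched on a toy problem (NumPy; the transfer matrix and noise level are assumptions, standing in for the linearized dispersion model with a known source location):

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Tikhonov-regularized solution of A q = b:
    minimize ||A q - b||^2 + lam^2 ||L q||^2 (L = identity gives zero-order)."""
    if L is None:
        L = np.eye(A.shape[1])
    return np.linalg.solve(A.T @ A + lam ** 2 * L.T @ L, A.T @ b)

# Toy inverse problem: 40 sensor readings linear in two source strengths.
rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, (40, 2))     # hypothetical sensor/source transfer matrix
q_true = np.array([3.0, 1.5])          # true source strengths
b = A @ q_true + rng.normal(0, 0.01, 40)
q_est = tikhonov(A, b, lam=1e-3)
```

In the hybrid scheme, a search such as PSO proposes candidate locations, each candidate fixes the transfer matrix `A`, and the Tikhonov solve above supplies the corresponding best strength; the L-curve criterion picks `lam`.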
International Nuclear Information System (INIS)
Lin Chaung; Lin, Tung-Hsien
2012-01-01
Highlights: ► An automatic procedure was developed to design the radial enrichment and gadolinia (Gd) distribution of the fuel lattice. ► The method is based on a particle swarm optimization algorithm and local search. ► The design goal was to achieve the minimum local peaking factor. ► The number of fuel pins with Gd and the Gd concentration are fixed to reduce search complexity. ► In this study, three axial sections are designed and lattice performance is calculated using CASMO-4. - Abstract: The axial section of a fuel assembly in a boiling water reactor (BWR) consists of five or six different distributions, each requiring a radial lattice design. In this study, an automatic procedure based on a particle swarm optimization (PSO) algorithm and local search was developed to design the radial enrichment and gadolinia (Gd) distribution of the fuel lattice. The design goals were to achieve the minimum local peaking factor (LPF) and to come as close as possible to the specified target average enrichment and target infinite multiplication factor (k∞), with the number of fuel pins with Gd and the Gd concentration fixed. In this study, three axial sections are designed, and lattice performance is calculated using CASMO-4. Finally, the neutron cross section library of the designed lattice is established by CMSLINK; the core status during depletion, such as thermal limits, cold shutdown margin and cycle length, is then calculated using SIMULATE-3 in order to confirm that the lattice design satisfies the design requirements.
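The PSO search engine itself can be sketched generically (NumPy; the smooth sphere objective stands in for the CASMO-4 lattice evaluation, and all swarm parameters are illustrative assumptions):

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm minimizer. Each particle is pulled toward its
    personal best and the swarm's global best with random weights."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)             # keep particles inside bounds
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

best, fbest = pso(lambda p: np.sum((p - 0.3) ** 2), (np.zeros(3), np.ones(3)))
```

In the lattice application, each particle would encode a candidate enrichment/Gd layout, the objective would score LPF and the distance to the target enrichment and k∞, and a local search would refine the swarm's best candidate.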
Energy Technology Data Exchange (ETDEWEB)
Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)
2016-01-11
Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and consequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluation on a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known reconstruction algorithms. An additional potential benefit of reducing the number of projections is a shorter window for motion artifacts to occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
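The randomized Kaczmarz component can be sketched on a small consistent system (NumPy; the matrix sizes are illustrative, not a tomography geometry):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz: at each step, project the iterate onto one row
    constraint a_i . x = b_i, picking row i with probability ~ ||a_i||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    p = np.sum(A ** 2, axis=1)
    p /= p.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a   # orthogonal projection onto row i
    return x

# Consistent overdetermined system: rows play the role of ray-sum equations.
rng = np.random.default_rng(4)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_est = randomized_kaczmarz(A, b)
```

Its appeal in CT-scale problems is that each update touches a single row (one ray), so no full system matrix product is ever formed; the Douglas–Rachford splitting then alternates this data-consistency step with the total-variation proximal step.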
International Nuclear Information System (INIS)
Saidullah, S.; Shah, B.
2016-01-01
Background: To ablate an accessory pathway successfully and conveniently, accurate localization of the pathway is needed. Electrophysiologists use different algorithms before taking patients to the electrophysiology (EP) laboratory to plan the intervention accordingly. In this study, we used the Arruda algorithm to locate the accessory pathway. The objective of the study was to determine the accuracy of the Arruda algorithm for locating the pathway on surface ECG. Methods: It was a cross-sectional observational study conducted from January 2014 to January 2016 in the electrophysiology department of Hayatabad Medical Complex, Peshawar, Pakistan. A total of fifty-nine (n=59) consecutive patients of both genders between 14-60 years of age presenting with WPW syndrome (symptomatic tachycardia with a delta wave on surface ECG) were included in the study. Each patient's electrocardiogram (ECG) was analysed with the Arruda algorithm before the patient was taken to the laboratory. The standard four-wire protocol was used for the EP study before ablation. Once the findings were confirmed, the pathway was ablated as per standard guidelines. Results: A total of fifty-nine (n=59) patients between 14-60 years of age were included in the study. Cumulative mean age was 31.5 years ± 12.5 SD. There were 56.4% (n=31) males with mean age 28.2 years ± 10.2 SD and 43.6% (n=24) females with mean age 35.9 years ± 14.0 SD. The Arruda algorithm was found to be accurate in predicting the exact accessory pathway (AP) in 83.6% (n=46) of cases. Among all inaccurate predictions (n=9), Arruda inaccurately predicted two thirds (n=6; 66.7%) of the pathways as right-sided (right posteroseptal, right posterolateral and right anterolateral). Conclusion: The Arruda algorithm was found highly accurate in predicting the accessory pathway before ablation. (author)
A localization algorithm of adaptively determining the ROI of the reference circle in image
Xu, Zeen; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen
2018-03-01
Aiming to accurately position detection probes underwater, this paper proposes a method based on computer vision that can effectively solve this problem. The idea is as follows: first, because the heat tube appears approximately circular in the image, we can find a circle whose physical location is well known and designate it as the reference circle. Second, we calculate the pixel offset between the reference circle and the probes in the image and adjust the steering gear according to the offset. As a result, we can accurately measure the physical distance between the probes and the heat tubes under test, and thus know the precise location of the probes underwater. However, choosing the reference circle in the image is a difficult problem. In this paper, we propose an algorithm that can adaptively determine the region of interest (ROI) of the reference circle; within this region there is only one circle, and that circle is the reference circle. Test results show that the accuracy of extracting the reference circle from the whole image without using the ROI of the reference circle is only 58.76%, whereas the proposed algorithm achieves 95.88%. The experimental results indicate that the proposed algorithm can effectively improve the efficiency of tube detection.
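The offset-to-distance step can be sketched in a few lines (NumPy; the calibration factor and pixel coordinates are hypothetical values, not from the paper):

```python
import numpy as np

# With the reference circle's centre known in both pixel and physical
# coordinates, a pixel offset converts directly into a physical offset.
mm_per_px = 0.8                       # hypothetical scale from the circle's known size
ref_px = np.array([320.0, 240.0])     # reference circle centre in the image
probe_px = np.array([350.0, 200.0])   # detected probe position in the image
offset_mm = (probe_px - ref_px) * mm_per_px   # physical offset to correct
```

The steering gear would then be driven to null `offset_mm`; the contribution of the paper is making the circle detection reliable by restricting it to an adaptively chosen ROI that contains exactly one circle.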
Hypersensitivity to local anaesthetics--update and proposal of evaluation algorithm
DEFF Research Database (Denmark)
Thyssen, Jacob Pontoppidan; Menné, Torkil; Elberling, Jesper
2008-01-01
of patients suspected with immediate- and delayed-type immune reactions. Literature was examined using PubMed-Medline, EMBASE, Biosis and Science Citation Index. Based on the literature, the proposed algorithm may safely and rapidly distinguish between immediate-type and delayed-type allergic immune reactions....
Directory of Open Access Journals (Sweden)
Li Gong-Hua
2010-08-01
Full Text Available Abstract Background The rapid development of structural genomics has resulted in many "unknown function" proteins being deposited in the Protein Data Bank (PDB); thus, the functional prediction of these proteins has become a challenge for structural bioinformatics. Several sequence-based and structure-based methods have been developed to predict protein function, but these methods need to be improved further, for example by enhancing the accuracy, sensitivity, and computational speed. Here, an accurate algorithm, CMASA (Contact MAtrix based local Structural Alignment algorithm), has been developed to predict the unknown functions of proteins based on local protein structural similarity. The algorithm has been evaluated by building a test set including 164 enzyme families, and has also been compared to other methods. Results The evaluation shows that CMASA is highly accurate (0.96), sensitive (0.86), and fast enough to be used in large-scale functional annotation. Compared to both sequence-based and global structure-based methods, CMASA can not only find remote homologous proteins but can also find active-site convergence. Compared to other local structure comparison-based methods, CMASA obtains better performance than both FFF (a method using geometry to predict protein function) and SPASM (a local structure alignment method); CMASA is also more sensitive than PINTS and more accurate than JESS (both local structure alignment methods). CMASA was applied to annotate the enzyme catalytic sites of the non-redundant PDB, and at least 166 putative catalytic sites have been suggested; these sites cannot be observed by the Catalytic Site Atlas (CSA). Conclusions CMASA is an accurate algorithm for detecting local protein structural similarity, and it holds several advantages in predicting enzyme active sites. CMASA can be used in large-scale enzyme active site annotation. The CMASA can be available by the
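The contact-matrix representation at the heart of such methods can be sketched as follows (NumPy; the coordinates and the 8 Å cutoff are illustrative assumptions, not CMASA's exact definition):

```python
import numpy as np

def contact_matrix(coords, cutoff=8.0):
    """Boolean contact matrix for a set of residue coordinates: entry (i, j)
    is True when residues i and j lie within `cutoff` angstroms."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return d < cutoff

# Toy "active site": 5 residues on a line, 3 angstroms apart.
coords = np.column_stack((np.arange(5) * 3.0, np.zeros(5), np.zeros(5)))
C = contact_matrix(coords)   # contacts for |i - j| <= 2 (6 A), not for 9 A
```

Comparing two sites then reduces to aligning their contact matrices, which is invariant to rigid-body motion, so no superposition is needed before scoring local structural similarity.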
Olfactory source localization in the open field using one or both nostrils.
Welge-Lussen, A; Looser, G L; Westermann, B; Hummel, T
2014-03-01
This study aims to examine humans' abilities to localize odorants in the open field. Young participants were tested on a localization task using a relatively selective olfactory stimulus (2-phenylethyl alcohol, PEA) and cineol, an odorant with a strong trigeminal component. Participants were blindfolded and had to localize an odorant source at a 2 m distance (far-field condition) and a 0.4 m distance (near-field condition) with either both nostrils open or only one nostril open. For the odorant with trigeminal properties, the number of correct trials did not differ when one or both nostrils were used, while more PEA localization trials were correctly completed with both nostrils rather than one. In the near-field condition, correct localization was possible in 72-80% of the trials, irrespective of the odorant and the number of nostrils used. Localization accuracy, measured as spatial deviation from the olfactory source, was significantly higher in the near-field than in the far-field condition, but independent of the odorant being localized. Odorant localization in the open field is difficult, but possible. In contrast to the general view, humans seem to be able to exploit the two-nostril advantage with increasing task difficulty.
Directory of Open Access Journals (Sweden)
Zhang Min
2014-04-01
Full Text Available Due to the deficiencies of conventional multiple-receiver localization systems based on direction of arrival (DOA), such as the system complexity of an interferometer or array, amplitude/phase imbalance between multiple receiving channels, and constraints on the antenna configuration, a new radiated source localization method using the changing rate of phase difference (CRPD) measured by a long baseline interferometer (LBI) only is studied. To solve this strictly nonlinear problem, a two-stage closed-form solution is proposed. In the first stage, the DOA and its changing rate are estimated from the CRPD of each observer by the pseudolinear least squares (PLS) method, and then in the second stage, the source position and velocity are found by another PLS minimization. The bias of the algorithm caused by the correlation between the measurement matrix and the noise in the second stage is analyzed. To reduce this bias, an instrumental variable (IV) method is derived. A weighted IV estimator is given in order to reduce the estimation variance. The proposed method does not need any initial guess, and the computational cost is small. The Cramer–Rao lower bound (CRLB) and mean square error (MSE) are also analyzed. Simulation results show that the proposed method can be close to the CRLB with moderate Gaussian measurement noise.
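The pseudolinear least squares (PLS) trick — rewriting each nonlinear bearing equation as a linear constraint on the source position — can be sketched for a static, noiseless case (NumPy; the observer geometry is illustrative, and the paper's estimator additionally handles motion and the bias/IV correction):

```python
import numpy as np

def pls_fix(obs, bearings):
    """Pseudolinear least-squares position fix from bearings.
    A bearing theta at observer o constrains the source p via
    sin(theta) * (px - ox) - cos(theta) * (py - oy) = 0, which is linear in p."""
    A = np.column_stack((np.sin(bearings), -np.cos(bearings)))
    b = np.sum(A * obs, axis=1)
    return np.linalg.lstsq(A, b, rcond=None)[0]

obs = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # observer positions
src = np.array([300.0, 400.0])                            # true source
bearings = np.arctan2(src[1] - obs[:, 1], src[0] - obs[:, 0])
est = pls_fix(obs, bearings)
```

With noisy bearings the measurement matrix `A` becomes correlated with the noise, which is exactly the bias the paper's instrumental variable estimator removes.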
Liu, Quanying; Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2018-01-01
Resting state networks (RSNs) in the human brain were recently detected using high-density electroencephalography (hdEEG). This was done by using an advanced analysis workflow to estimate neural signals in the cortex and to assess functional connectivity (FC) between distant cortical regions. FC analyses were conducted either using temporal (tICA) or spatial independent component analysis (sICA). Notably, EEG-RSNs obtained with sICA were very similar to RSNs retrieved with sICA from functional magnetic resonance imaging data. It still remains to be clarified, however, what technological aspects of hdEEG acquisition and analysis primarily influence this correspondence. Here we examined to what extent the detection of EEG-RSN maps by sICA depends on the electrode density, the accuracy of the head model, and the source localization algorithm employed. Our analyses revealed that the collection of EEG data using a high-density montage is crucial for RSN detection by sICA, but also the use of appropriate methods for head modeling and source localization have a substantial effect on RSN reconstruction. Overall, our results confirm the potential of hdEEG for mapping the functional architecture of the human brain, and highlight at the same time the interplay between acquisition technology and innovative solutions in data analysis. PMID:29551969
Moving source localization with a single hydrophone using multipath time delays in the deep ocean.
Duan, Rui; Yang, Kunde; Ma, Yuanliang; Yang, Qiulong; Li, Hui
2014-08-01
Localizing a source of radial movement at moderate range using a single hydrophone can be achieved in the reliable acoustic path by tracking the time delays between the direct and surface-reflected arrivals (D-SR time delays). The problem is defined as a joint estimation of the depth, initial range, and speed of the source, which are the state parameters for the extended Kalman filter (EKF). The D-SR time delays extracted from the autocorrelation functions are the measurements for the EKF. Experimental results using pseudorandom signals show that accurate localization results are achieved by offline iteration of the EKF.
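The D-SR measurement model that feeds the EKF follows from image-source geometry: in isovelocity water, the surface-reflected path equals the direct path to a mirror source above the surface. A hedged sketch of that forward model only (the EKF tracking and the waveguide details of the paper are omitted; the numbers are illustrative):

```python
import math

def dsr_delay(rng_m, src_depth, rcv_depth, c=1500.0):
    # Image-source model: the surface-reflected path is the direct path to a
    # mirror source at depth -src_depth; the D-SR delay is the path difference / c.
    direct = math.hypot(rng_m, rcv_depth - src_depth)
    reflected = math.hypot(rng_m, rcv_depth + src_depth)
    return (reflected - direct) / c

# Illustrative numbers: 100 m deep source, 4000 m deep hydrophone, 5 km range
print(dsr_delay(5000.0, 100.0, 4000.0))  # D-SR delay in seconds
```

Because the delay shrinks monotonically as the source opens range, a track of D-SR delays constrains depth, initial range, and speed jointly, which is what the EKF exploits.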
DEFF Research Database (Denmark)
Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene
2017-01-01
Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe....... Assuming that the distribution of the neutrino sources follows that of matter we look for correlations between `warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance...... (including that of IceCube-Gen2) we demonstrate that sources with local density exceeding $10^{-6} \\, \\text{Mpc}^{-3}$ and neutrino luminosity $L_{\
Stoneham, Melissa; Dodds, James
2014-08-01
The Western Australian (WA) Public Health Bill will replace the antiquated Health Act 1911. One of the proposed clauses of the Bill requires all WA local governments to develop a Public Health Plan. The Bill states that Public Health Plans should be based on evidence from all levels, including national and statewide priorities, community needs, local statistical evidence, and stakeholder data. This exploratory study, which targeted 533 WA local government officers, aimed to identify the sources of evidence used to generate the list of public health risks to be included in local government Public Health Plans. The top four sources identified for informing local policy were: observation of the consequences of the risks in the local community (24.5%), statewide evidence (17.6%), local evidence (17.6%) and coverage in local media (16.2%). This study confirms that both hard and soft data are used to inform policy decisions at the local level. Therefore, the challenge that this study has highlighted is in the definition or constitution of evidence. SO WHAT? Evidence is critical to the process of sound policy development. This study highlights issues associated with what actually constitutes evidence in the policy development process at the local government level. With the exception of those who work in an extremely narrow field, it is difficult for local government officers, whose role includes policymaking, to read the vast amount of information that has been published in their area of expertise. For those who are committed to the notion of evidence-based policymaking, as advocated within the WA Public Health Bill, this presents a considerable challenge.
Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A
2015-10-26
Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
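The core of such a canonicalization is an iterative refinement loop: assign initial ranks from an atom invariant, then repeatedly re-rank each atom by its current rank plus the sorted ranks of its neighbors until the partition stabilizes. A toy sketch of that idea using a standard stable sort (this is not the RDKit implementation, and the invariants here are deliberately minimal):

```python
def rank_list(keys):
    # Dense ranks of a list of sortable keys (equal keys share a rank).
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    ranks = [0] * len(keys)
    r = 0
    for pos, i in enumerate(order):
        if pos and keys[i] != keys[order[pos - 1]]:
            r = pos
        ranks[i] = r
    return ranks

def canonical_ranks(elements, adjacency):
    # Start from a simple atom invariant, then refine each atom's rank by
    # its sorted neighbor ranks until the partition stops changing.
    ranks = rank_list([(el,) for el in elements])
    while True:
        keys = [(ranks[i], tuple(sorted(ranks[j] for j in adjacency[i])))
                for i in range(len(elements))]
        new = rank_list(keys)
        if new == ranks:
            return ranks
        ranks = new

# Ethanol heavy atoms C-C-O: the two carbons are distinguished by environment
print(canonical_ranks(['C', 'C', 'O'], [[1], [0, 2], [1]]))  # → [0, 1, 2]
```

Symmetric atoms keep tied ranks (e.g. the terminal carbons of propane), which is where the paper's additional invariants for chirality and symmetric rings come in.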
An MEF-Based Localization Algorithm against Outliers in Wireless Sensor Networks.
Wang, Dandan; Wan, Jiangwen; Wang, Meimei; Zhang, Qiang
2016-07-07
Precise localization has attracted considerable interest in Wireless Sensor Networks (WSNs) localization systems. Due to the internal or external disturbance, the existence of the outliers, including both the distance outliers and the anchor outliers, severely decreases the localization accuracy. In order to eliminate both kinds of outliers simultaneously, an outlier detection method is proposed based on the maximum entropy principle and fuzzy set theory. Since not all the outliers can be detected in the detection process, the Maximum Entropy Function (MEF) method is utilized to tolerate the errors and calculate the optimal estimated locations of unknown nodes. Simulation results demonstrate that the proposed localization method remains stable while the outliers vary. Moreover, the localization accuracy is highly improved by wisely rejecting outliers.
Development on advanced technology of local dosimetry for various radiation sources
International Nuclear Information System (INIS)
Odano, Naoteru; Ohnishi, Seiki; Ueki, Kohtaro
2004-01-01
The development aims at measuring local dose distributions accurately and conveniently, and at enhancing the precision of dose evaluation, so that personnel exposure can be reduced. A sheet-type device and a sheet data reader were produced on a trial basis, and their performance was tested with a Sr-90 standard radiation source and a synchrotron radiation source. A computer code was also developed to analyze two-dimensional local dose distributions and to evaluate the precision of the sheet-type dosimeter and data reader. The code makes it possible to calculate local exposure doses of a phantom quickly and simply for various beam irradiation conditions. (H. Yokoo)
Localization of the gamma-radiation sources using the gamma-visor
Directory of Open Access Journals (Sweden)
Ivanov Kirill E.
2008-01-01
A search for the main gamma-radiation sources at the site of the temporary storage of solid radioactive wastes was carried out. The relative absorbed dose rates were measured for some of the gamma sources before and after the rehabilitation procedures. The effectiveness of the rehabilitation procedures in 2006-2007 was evaluated qualitatively and quantitatively. The decrease of the radiation background at the site of the temporary storage of the solid radioactive wastes after the rehabilitation procedures made it possible to localize a new gamma source.
Localization of the gamma-radiation sources using the gamma-visor
International Nuclear Information System (INIS)
Ivanov, K. E.; Ponomaryev-Stepnoi, N. N.; Stepennov, B. S.; Teterin, Y. A.; Teterin, A. Y.; Kharitonov, V. V.
2008-01-01
A search for the main gamma-radiation sources at the site of the temporary storage of solid radioactive wastes was carried out. The relative absorbed dose rates were measured for some of the gamma sources before and after the rehabilitation procedures. The effectiveness of the rehabilitation procedures in 2006-2007 was evaluated qualitatively and quantitatively. The decrease of the radiation background at the site of the temporary storage of the solid radioactive wastes after the rehabilitation procedures allowed localizing the new gamma source. (author)
Collins, William
1989-01-01
The magnetohydrodynamic wave emission from several localized, periodic, kinematically specified fluid velocity fields are calculated using Lighthill's method for finding the far-field wave forms. The waves propagate through an isothermal and uniform plasma with a constant B field. General properties of the energy flux are illustrated with models of pulsating flux tubes and convective rolls. Interference theory from geometrical optics is used to find the direction of minimum fast-wave emission from multipole sources and slow-wave emission from discontinuous sources. The distribution of total flux in fast and slow waves varies with the ratios of the source dimensions l to the acoustic and Alfven wavelengths.
Directory of Open Access Journals (Sweden)
Daniel M. Wonohadidjojo
2017-03-01
was applied. The results of local contrast enhancement using both methods were compared with the results using histogram equalization method. The tests were conducted using two MDCK cell images. The results of local contrast enhancement using both methods were evaluated by observing the enhanced images and IEM values. The results show that the methods outperform the histogram equalization method. Furthermore, the method using IFSABC is better than the IFS method.
Beamforming with a circular microphone array for localization of environmental noise sources
DEFF Research Database (Denmark)
Tiana Roig, Elisabet; Jacobsen, Finn; Fernandez Grande, Efren
2010-01-01
It is often enough to localize environmental sources of noise from different directions in a plane. This can be accomplished with a circular microphone array, which can be designed to have practically the same resolution over 360°. The microphones can be suspended in free space or they can...
Lorsque la recherche locale est source de changements véritables ...
International Development Research Centre (IDRC) Digital Library (Canada)
18 Feb. 2011 ... African think tanks are examining some of the most difficult ... When local research drives real change in Africa ... What type of information do actors in the policy sphere ...
Lewis, Michael A., Robert L. Quarles, Darrin D. Dantin and James C. Moore. 2004. Evaluation of a Coastal Golf Complex as a Local and Watershed Source of Bioavailable Contaminants. Mar. Pollut. Bull. 48(3-4):254-262. (ERL,GB 1183). Contaminant fate in coastal areas impacte...
EEG source localization in full-term newborns with hypoxic-ischemia
Jennekens, W.; Dankers, F.; Blijham, P.; Cluitmans, P.; van Pul, C.; Andriessen, P.
2013-01-01
The aim of this study was to evaluate EEG source localization by standardized weighted low-resolution brain electromagnetic tomography (swLORETA) for monitoring of full-term newborns with hypoxic-ischemic encephalopathy, using a standard anatomic head model. Three representative examples of neonatal
Linearized versus non-linear inverse methods for seismic localization of underground sources
DEFF Research Database (Denmark)
Oh, Geok Lian; Jacobsen, Finn
2013-01-01
The problem of localization of underground sources from seismic measurements detected by several geophones located on the ground surface is addressed. Two main approaches to the solution of the problem are considered: a beamforming approach that is derived from the linearized inversion problem, a...
Evolution of Sound Source Localization Circuits in the Nonmammalian Vertebrate Brainstem
DEFF Research Database (Denmark)
Walton, Peggy L; Christensen-Dalsgaard, Jakob; Carr, Catherine E
2017-01-01
The earliest vertebrate ears likely subserved a gravistatic function for orientation in the aquatic environment. However, in addition to detecting acceleration created by the animal's own movements, the otolithic end organs that detect linear acceleration would have responded to particle movement...... to increased sensitivity to a broader frequency range and to modification of the preexisting circuitry for sound source localization....
Analysis of filtration properties of locally sourced base oil for the ...
African Journals Online (AJOL)
This study examines the use of locally sourced oils such as groundnut oil, melon oil, vegetable oil, soya oil and palm oil as substitutes for diesel oil in formulating oil-based drilling fluids, with respect to filtration properties. The filtrate volumes of each of the oils were obtained for filtration control analysis. With increasing potash and ...
International Nuclear Information System (INIS)
Yang Lei; Gong Xueyu; Wang Ling
2013-01-01
Combined with a standard mathematical model for evaluating the quality of deployment results, a new high-performance parallel algorithm for source-pencil deployment was obtained by using a plant growth simulation algorithm fully parallelized with the CUDA execution model, so that the corresponding code can run on a GPU. On this basis, several instances at various scales were used to test the new version of the algorithm. The results show that, building on the advantages of the old versions, the performance of the new one is improved by more than 500 times compared with the CPU version, and by 30 times compared with the CPU-plus-GPU hybrid version. The computation time of the new version is less than ten minutes for an irradiator whose activity is less than 111 PBq. For a single GTX275 GPU, the maximum computing power of the new version corresponds to 167 PBq, with a computation time of no more than 25 minutes; with multiple GPUs, this can be improved further. Overall, the new version of the algorithm running on GPU satisfies the requirement of source-pencil deployment for any domestic irradiator, and it is highly competitive. (authors)
Demonstration of acoustic source localization in air using single pixel compressive imaging
Rogers, Jeffrey S.; Rohde, Charles A.; Guild, Matthew D.; Naify, Christina J.; Martin, Theodore P.; Orris, Gregory J.
2017-12-01
Acoustic source localization often relies on large sensor arrays that can be electronically complex and have large data storage requirements to process element level data. Recently, the concept of a single-pixel-imager has garnered interest in the electromagnetics literature due to its ability to form high quality images with a single receiver paired with shaped aperture screens that allow for the collection of spatially orthogonal measurements. Here, we present a method for creating an acoustic analog to the single-pixel-imager found in electromagnetics for the purpose of source localization. Additionally, diffraction is considered to account for screen openings comparable to the acoustic wavelength. A diffraction model is presented and incorporated into the single pixel framework. In this paper, we explore the possibility of applying single pixel localization to acoustic measurements. The method is experimentally validated with laboratory measurements made in an air waveguide.
Directory of Open Access Journals (Sweden)
M. Zacharek
2017-05-01
These studies have been conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating documentation of monuments. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were executed. In the research, the OSM Bundler, the VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
swLORETA: a novel approach to robust source localization and synchronization tomography
International Nuclear Information System (INIS)
Palmero-Soler, Ernesto; Dolan, Kevin; Hadamschek, Volker; Tass, Peter A
2007-01-01
Standardized low-resolution brain electromagnetic tomography (sLORETA) is a widely used technique for source localization. However, this technique still has some limitations, especially under realistic noisy conditions and in the case of deep sources. To overcome these problems, we present here swLORETA, an improved version of sLORETA, obtained by incorporating a singular value decomposition-based lead field weighting. We show that the precision of the source localization can further be improved by a tomographic phase synchronization analysis based on swLORETA. The phase synchronization analysis turns out to be superior to a standard linear coherence analysis, since the latter cannot distinguish between real phase locking and signal mixing.
Directory of Open Access Journals (Sweden)
Javier Macias-Guarasa
2012-10-01
This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints on the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.
Effect of conductor geometry on source localization: Implications for epilepsy studies
International Nuclear Information System (INIS)
Schlitt, H.; Heller, L.; Best, E.; Ranken, D.; Aaron, R.
1994-01-01
We discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we must first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. Once the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy obtained by replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer-generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model with those obtained using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.
Directory of Open Access Journals (Sweden)
Anup Kumar Paul
2017-10-01
Localization is an important aspect of wireless sensor networks (WSNs) that has attracted significant research interest in academia and the research community. A wireless sensor network is formed by a large number of tiny, low-energy, low-cost sensors with limited processing capability that communicate with each other in an ad hoc fashion. The task of determining the physical coordinates of sensor nodes in WSNs is known as localization or positioning and is a key factor in today's communication systems for estimating the place of origin of events. As the required positioning accuracy varies between applications, different localization methods are used in different applications, and there are several challenges in some special scenarios such as forest fire detection. In this paper, we survey different measurement techniques and strategies for range-based and range-free localization, with an emphasis on the latter. Further, we discuss different localization-based applications, where the estimation of the location information is crucial. Finally, a comprehensive discussion of challenges such as accuracy, cost, complexity, and scalability is given.
DEFF Research Database (Denmark)
Hadjidemetriou, Lenos; Kyriakides, Elias; Blaabjerg, Frede
2013-01-01
Interconnected renewable energy sources require fast and accurate fault ride through operation in order to support the power grid when faults occur. This paper proposes an adaptive Phase-Locked Loop (adaptive dαβPLL) algorithm, which can be used for a faster and more accurate response of the grid...... side converter control of a renewable energy source, especially under fault ride through operation. The adaptive dαβPLL is based on modifying the control parameters of the dαβPLL according to the type and voltage characteristic of the grid fault with the purpose of accelerating the performance...
A combined joint diagonalization-MUSIC algorithm for subsurface targets localization
Wang, Yinlin; Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon
2014-06-01
This paper presents a combined joint diagonalization (JD) and multiple signal classification (MUSIC) algorithm for estimating subsurface object locations from electromagnetic induction (EMI) sensor data, without solving ill-posed inverse-scattering problems. JD is a numerical technique that finds the common eigenvectors that diagonalize a set of multistatic response (MSR) matrices measured by a time-domain EMI sensor. Eigenvalues from targets of interest (TOI) can then be distinguished automatically from noise-related eigenvalues. Filtering is also carried out in JD to improve the signal-to-noise ratio (SNR) of the data. The MUSIC algorithm utilizes the orthogonality between the signal and noise subspaces in the MSR matrix, which can be separated with information provided by JD. An array of theoretically calculated Green's functions is then projected onto the noise subspace, and the location of the target is estimated by the minimum of the projection owing to the orthogonality. This combined method is applied to data from the Time-Domain Electromagnetic Multisensor Towed Array Detection System (TEMTADS). Examples of TEMTADS test-stand data and field data collected at Spencer Range, Tennessee are analyzed and presented. Results indicate that due to its noniterative mechanism, the method can be executed fast enough to provide real-time estimation of objects' locations in the field.
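The MUSIC step can be illustrated in a few lines: take the noise-subspace eigenvectors of a measured response matrix and scan candidate steering vectors, looking for near-orthogonality. A generic narrowband sketch with a uniform linear array (the paper works with EMI Green's functions and JD-filtered MSR matrices; this stand-in keeps only the subspace idea):

```python
import numpy as np

def ula_steering(theta_deg, m=8):
    # Steering vector of an m-element, half-wavelength uniform linear array
    return np.exp(1j * np.pi * np.sin(np.deg2rad(theta_deg)) * np.arange(m))

def music_spectrum(R, grid_deg, n_src):
    # Noise subspace = eigenvectors of the smallest eigenvalues of R;
    # the pseudospectrum peaks where a steering vector is orthogonal to it.
    _, v = np.linalg.eigh(R)                      # eigenvalues ascending
    En = v[:, : R.shape[0] - n_src]
    A = np.stack([ula_steering(t) for t in grid_deg], axis=1)
    return 1.0 / (np.linalg.norm(En.conj().T @ A, axis=0) ** 2 + 1e-12)

a = ula_steering(20.0)
R = np.outer(a, a.conj()) + 0.01 * np.eye(8)      # one source plus a noise floor
grid = np.arange(-90.0, 90.5, 0.5)
print(grid[np.argmax(music_spectrum(R, grid, 1))])  # peaks near 20 degrees
```

In the paper the scan is over candidate target positions rather than arrival angles, but the orthogonality test against the noise subspace is the same.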
Adopting the FAB-MAP algorithm for indoor localization with WiFi fingerprints
Wietrzykowski, Jan; Nowicki, Michał; Skrzypczyński, Piotr
2016-01-01
Personal indoor localization is usually accomplished by fusing information from various sensors. A common choice is to use the WiFi adapter that provides information about Access Points that can be found in the vicinity. Unfortunately, state-of-the-art approaches to WiFi-based localization often employ very dense maps of the WiFi signal distribution, and require a time-consuming process of parameter selection. On the other hand, camera images are commonly used for visual place recognition, de...
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimating model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and ƞ). The error energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies were evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies were considered, and the results obtained show the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, field anomalies observed in various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base-metal sulphide deposit (Quebec, Canada), were considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm was also investigated with a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both synthetic and field data examples the algorithm has provided reliable parameter estimations being within the sampling limits of
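The DE/best/1/bin strategy named above is easy to state: each trial vector mutates the current best member with a scaled difference of two other random members, then undergoes binomial crossover and greedy selection. A self-contained sketch applied to a synthetic anomaly with an assumed simple source model (the amplitude, shape factor, and parameter choices here are illustrative, not the paper's):

```python
import numpy as np

def de_best1bin(f, bounds, pop_size=30, f_mut=0.7, cr=0.9, iters=300, seed=0):
    # Minimal DE/best/1/bin: mutate around the current best member with a
    # scaled difference vector, binomial crossover, greedy selection.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
    cost = np.array([f(p) for p in pop])
    for _ in range(iters):
        best = pop[np.argmin(cost)]
        for i in range(pop_size):
            r1, r2 = rng.choice(np.delete(np.arange(pop_size), i), 2, replace=False)
            mutant = np.clip(best + f_mut * (pop[r1] - pop[r2]), lo, hi)
            mask = rng.random(len(bounds)) < cr
            mask[rng.integers(len(bounds))] = True  # keep at least one mutant gene
            trial = np.where(mask, mutant, pop[i])
            c = f(trial)
            if c <= cost[i]:
                pop[i], cost[i] = trial, c
    return pop[np.argmin(cost)], cost.min()

# Synthetic residual anomaly of an idealized body (shape factor fixed at 1.5,
# amplitude assumed known for brevity): recover origin x0 and depth z0.
x = np.linspace(-50.0, 50.0, 101)
def anomaly(x0, z0):
    return 1000.0 * z0 / ((x - x0) ** 2 + z0 ** 2) ** 1.5

obs = anomaly(5.0, 10.0)
best, err = de_best1bin(lambda p: np.sum((obs - anomaly(*p)) ** 2),
                        bounds=[(-20.0, 20.0), (1.0, 30.0)])
print(np.round(best, 2))  # close to x0 = 5, z0 = 10
```

No initial guess is needed beyond the search bounds, which is the practical appeal of DE noted in the abstract.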
International Nuclear Information System (INIS)
Parwani, Ajit K.; Talukdar, Prabal; Subbarao, P.M.V.
2013-01-01
An inverse heat transfer problem is discussed to estimate simultaneously the unknown position and timewise varying strength of a heat source by utilizing a differential evolution approach. A two-dimensional enclosure with isothermal and black boundaries containing a non-scattering, absorbing and emitting gray medium is considered. Both radiation and conduction heat transfer are included. No prior information is used for the functional form of the timewise varying strength of the heat source. The finite volume method is used to solve the radiative transfer equation and the energy equation. In this work, instead of measured data, the temperature data required in the solution of the inverse problem are taken from the solution of the direct problem. The effect of measurement errors on the accuracy of estimation is examined by introducing errors into the temperature data of the direct problem. The prediction of the source strength and its position by the differential evolution (DE) algorithm is found to be quite reasonable. -- Highlights: •Simultaneous estimation of strength and position of a heat source. •A conducting and radiatively participating medium is considered. •Implementation of differential evolution algorithm for such kind of problems. •Profiles with discontinuities can be estimated accurately. •No limitation in the determination of source strength at the final time
Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.
Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott
2016-04-19
To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we proposed a modeling approach based on local polynomial regression that uses climate, e.g. temperature, and land surface, e.g., soil moisture, variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data in three case study locations with surface source waters including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skills at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
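Local polynomial regression fits a low-degree polynomial in a kernel-weighted neighborhood of each query point, which is what lets it capture the non-Gaussian, nonlinear predictor-response features the abstract mentions. A degree-1 sketch with a Gaussian kernel (the predictor-response pair below is a made-up stand-in, not the paper's climate or TOC data):

```python
import numpy as np

def local_poly_fit(x_train, y_train, x_query, bandwidth):
    # Degree-1 local polynomial regression with a Gaussian kernel:
    # fit a weighted straight line centered on each query point.
    preds = []
    for xq in np.atleast_1d(x_query):
        w = np.exp(-0.5 * ((x_train - xq) / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(x_train), x_train - xq])
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y_train)
        preds.append(beta[0])  # intercept = fitted value at xq
    return np.array(preds)

# Smooth nonlinear relationship standing in for climate variable vs. TOC
xs = np.linspace(0.0, 6.0, 300)
ys = np.sin(xs)
print(local_poly_fit(xs, ys, [1.0, 2.0], 0.2))  # tracks sin(1), sin(2)
```

The bandwidth controls the bias-variance trade-off; in practice it would be chosen by cross-validation against observed TOC concentrations.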
Directory of Open Access Journals (Sweden)
Arko Djajadi
2009-12-01
This project analyzes, designs and implements a real-time sound source localization system using a mobile robot as the platform. The implemented system uses two microphones as sensors, an Arduino Duemilanove microcontroller board with an ATmega328p as the processor, two permanent-magnet DC motors as actuators for the mobile robot, a servo motor as the actuator that rotates the webcam toward the location of the sound source, and a laptop/PC as the simulation and display medium. In order to find the position of a specific sound source, beamforming theory is applied to the system. Once the location of the sound source is detected and determined, either the mobile robot adjusts its position according to the direction of the sound source, or only the webcam rotates in the direction of the incoming sound, simulating the use of this system in a video conference. The integrated system has been tested, and the results show it can localize in real time a sound source placed randomly on a half-circle area (0°-180° with a radius of 0.3 m - 3 m, with the system at the center point of the circle. Due to the low ADC and processor speed, the achievable angular resolution is still limited to 25°.
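With only two microphones, the bearing follows from the time difference of arrival: cross-correlate the channels, convert the peak lag to a delay, and invert the far-field relation tau = d·sin(theta)/c. A sketch of that estimator (the robot's actual beamforming pipeline and hardware constraints are not reproduced; the signal is a synthetic tone burst):

```python
import numpy as np

def tdoa_bearing(sig_l, sig_r, fs, mic_dist, c=343.0):
    # Peak of the cross-correlation gives the inter-microphone delay;
    # the far-field relation tau = d*sin(theta)/c converts it to a bearing.
    corr = np.correlate(sig_l, sig_r, mode='full')
    lag = np.argmax(corr) - (len(sig_r) - 1)
    sin_th = np.clip((lag / fs) * c / mic_dist, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_th))

fs, d = 48000, 0.2                     # sample rate, mic spacing in meters
t = np.arange(0, 0.05, 1 / fs)
burst = np.sin(2 * np.pi * 800 * t) * np.hanning(t.size)  # windowed tone burst
delay = int(round(fs * d * np.sin(np.radians(30.0)) / 343.0))
left = np.pad(burst, (delay, 0))[: burst.size]  # left channel arrives later
print(round(tdoa_bearing(left, burst, fs, d), 1))  # close to 30.0 degrees
```

The angular resolution is limited by the sample rate and microphone spacing (one sample of delay here corresponds to several degrees), which mirrors the 25° limit reported for the low-speed ADC in the abstract.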
International Nuclear Information System (INIS)
Tanabe, Akira; Yamamoto, Toru; Shinfuku, Kimihiro; Nakamae, Takuji; Nishide, Fusayo.
1995-01-01
Previously, a two-layered neural network model was developed to predict the relation between the fissile enrichment of each fuel rod and the local power distribution in a BWR fuel bundle. This model was obtained intuitively, based on 33 patterns of training signals, after an intensive survey of candidate models. Recently, a learning algorithm with forgetting was reported as a way to simplify neural network models. It is an interesting question what kind of model is obtained if this algorithm is applied to a complex three-layered model that learns the same training signals. A three-layered model, expanded to have direct connections between the 1st- and 3rd-layer elements, was constructed, and the normal back-propagation learning method was first applied to it. The forgetting algorithm was then added to this learning process. The connections involving the 2nd-layer elements disappeared, making the 2nd layer unnecessary. Learning the same training signals took an order of magnitude more computing time than simple back propagation, but the two-layered model was obtained autonomously from the expanded three-layered model. (author)
Closed-Form Algorithm for 3-D Near-Field OFDM Signal Localization under Uniform Circular Array.
Su, Xiaolong; Liu, Zhen; Chen, Xin; Wei, Xizhang
2018-01-14
Due to its widespread application in communications, radar, etc., localization of the orthogonal frequency division multiplexing (OFDM) signal has become increasingly important. Under uniform circular array (UCA) and near-field conditions, this paper presents a closed-form algorithm based on phase differences for estimating the three-dimensional (3-D) location (azimuth angle, elevation angle, and range) of an OFDM signal. Considering that it is difficult to distinguish the frequencies of the OFDM signal's subcarriers, and that phase-based methods are always affected by frequency-estimation errors, the algorithm employs sparse representation (SR) to obtain super-resolution frequencies and the corresponding subcarrier phases. Further, since the phase differences of adjacent sensors can be expressed as indefinite equations in the azimuth-angle, elevation-angle and range parameters, the near-field OFDM signal's 3-D location is obtained with the least-squares method, where the phase differences are based on the average over the estimated subcarriers. Finally, the performance of the proposed algorithm is demonstrated by several simulations.
Sample size determination algorithm for fingerprint-based indoor localization systems
Kanaris, L.; Kokkinis, A.; Fortino, G.; Liotta, A.; Stavrou, S.
2016-01-01
Provision of accurate location information is an important task in the Internet of Things (IoT) applications and scenarios. This need has boosted the research and development of fingerprint based, indoor localization systems, since GPS information is not available in indoor environments. Performance
Energy Technology Data Exchange (ETDEWEB)
Stavrov, Andrei; Yamamoto, Eugene [Rapiscan Systems, Inc., 14000 Mead Street, Longmont, CO, 80504 (United States)
2015-07-01
Radiation Portal Monitors (RPMs) with plastic detectors are the main instruments used for primary border (customs) radiation control. RPMs are widely used because they are simple, reliable, relatively inexpensive and highly sensitive. However, operational experience in various countries has revealed some grave shortcomings. There is a dramatic decrease in the probability of detecting radioactive sources under strong suppression of the natural gamma background (radiation control of heavy cargoes, containers and, especially, trains). NORM (Naturally Occurring Radioactive Material) in the objects under control triggers so-called 'nuisance alarms', requiring a secondary inspection for source verification. At a number of sites, the rate of such alarms is so high that it significantly complicates the work of customs and border officers. This paper presents a brief description of a new variant of the algorithm ASIA-New (New Advanced Source Identification Algorithm), developed by the Rapiscan company, and demonstrates through various tests the capability of the new system to overcome the shortcomings stated above. New electronics and ASIA-New enable an RPM to detect radioactive sources under high background suppression (tested at 15-30%) and to verify both detected NORM (KCl) and artificial isotopes (Co-57, Ba-133 and others). The new variant of ASIA is based on physical principles, a phenomenological approach, and the analysis of changes in several important parameters during the vehicle's passage through the monitor control area. Thanks to this, the main advantage of the new system is that it can be easily installed in any RPM with plastic detectors. Given that more than 4000 RPMs have been installed worldwide, upgrading them with ASIA-New may significantly increase the probability of detection and verification of radioactive sources even when masked by NORM. This algorithm was tested for 1,395 passages of
Source localization using a non-cocentered orthogonal loop and dipole (NCOLD) array
Institute of Scientific and Technical Information of China (English)
Liu Zhaoting; Xu Tongyang
2013-01-01
A uniform array of scalar sensors with intersensor spacings over a large aperture generally offers enhanced resolution and source localization accuracy, but it may also lead to cyclic ambiguity. By exploiting the polarization information of impinging waves, an electromagnetic vector-sensor array outperforms the unpolarized scalar-sensor array in resolving this cyclic ambiguity. However, the electromagnetic vector-sensor array usually consists of cocentered orthogonal loops and dipoles (COLD), which are easily subject to mutual coupling across these cocentered dipoles/loops. As a result, the source localization performance of the COLD array may substantially degrade rather than improve. This paper proposes a new source localization method with a non-cocentered orthogonal loop and dipole (NCOLD) array. The NCOLD array contains only one dipole or loop on each array grid point, and the intersensor spacings are larger than a half-wavelength. Therefore, unlike the COLD array, the well-separated dipoles/loops minimize mutual coupling effects and extend the spatial aperture as well. With the NCOLD array, the proposed method can efficiently exploit the polarization information to offer high localization precision.
Directory of Open Access Journals (Sweden)
Rabindra Kumar Sahu
2016-03-01
Full Text Available This paper presents the design and analysis of a Proportional-Integral-Double Derivative (PIDD) controller for Automatic Generation Control (AGC) of multi-area power systems with diverse energy sources using the Teaching Learning Based Optimization (TLBO) algorithm. At first, a two-area reheat thermal power system with appropriate Generation Rate Constraint (GRC) is considered. The design problem is formulated as an optimization problem, and TLBO is employed to optimize the parameters of the PIDD controller. The superiority of the proposed TLBO-based PIDD controller is demonstrated by comparing the results with recently published optimization techniques such as hybrid Firefly Algorithm and Pattern Search (hFA-PS), Firefly Algorithm (FA), Bacteria Foraging Optimization Algorithm (BFOA), Genetic Algorithm (GA) and conventional Ziegler-Nichols (ZN) for the same interconnected power system. The proposed approach is then extended to a two-area power system with diverse sources of generation such as thermal, hydro, wind and diesel units. The system model includes boiler dynamics, GRC and Governor Dead Band (GDB) non-linearity. Simulation results show that the proposed approach provides better dynamic responses than recently published results. Further, the study is extended to a three unequal-area thermal power system with different controllers in each area, and the results are compared with a published FA-optimized PID controller for the same system. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions in the range of ±25% from their nominal values to test the robustness.
Interpretation of the MEG-MUSIC scan in biomagnetic source localization
Energy Technology Data Exchange (ETDEWEB)
Mosher, J.C.; Lewis, P.S. [Los Alamos National Lab., NM (United States); Leahy, R.M. [University of Southern California, Los Angeles, CA (United States). Signal and Image Processing Inst.
1993-09-01
MEG-MUSIC is a new approach to MEG source localization. MEG-MUSIC is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified; the rank of this subspace equals the number of elemental sources present. This signal subspace is used in a projection metric that scans the three-dimensional head volume. Given a perfect signal-subspace estimate and a perfect forward model, the metric peaks at unity at each dipole location. In practice, the signal-subspace estimate is contaminated by noise, which in turn yields MUSIC peaks less than unity. Previously, we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyze the effects of noise and temporal coherence on the signal-subspace estimate and the resulting effects on the MEG-MUSIC peaks.
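MEG-MUSIC scans a head volume with a dipole forward model, but the underlying subspace principle can be shown in its simplest form: narrowband direction finding with a uniform linear array. A hedged sketch; the array geometry, source angles and noise level below are invented for illustration:

```python
import numpy as np

def music_spectrum(R, n_sources, scan_deg, spacing=0.5):
    """MUSIC pseudospectrum for a uniform linear array.
    R: sensor covariance matrix; spacing in wavelengths."""
    m = R.shape[0]
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]        # noise subspace (smallest eigenvalues)
    k = np.arange(m)
    spec = []
    for th in np.radians(scan_deg):
        a = np.exp(2j * np.pi * spacing * k * np.sin(th))  # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

# Two uncorrelated narrowband sources at -20 and +30 degrees, 8 sensors
rng = np.random.default_rng(2)
m, snaps, doas = 8, 400, [-20.0, 30.0]
k = np.arange(m)
A = np.stack([np.exp(2j * np.pi * 0.5 * k * np.sin(np.radians(d)))
              for d in doas], axis=1)
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
X = A @ S + 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
R = X @ X.conj().T / snaps
grid = np.arange(-90.0, 90.0, 0.5)
spec = music_spectrum(R, n_sources=2, scan_deg=grid)
```

With a noisy covariance estimate the peaks of `spec` are large but finite, which is exactly the degradation of the unity peaks that the paper analyzes.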
Local wisdom of Ngata Toro community in utilizing forest resources as a learning source of biology
Yuliana, Sriyati, Siti; Sanjaya, Yayan
2017-08-01
Indonesian society is pluralistic, with different cultures and local potencies in each region. Some local communities still adhere to traditions, passed from generation to generation, for managing natural resources wisely. Teaching the values of local wisdom is necessary so that students better respect the culture and local potential of their region, and there are many ways of developing student character by exploring local wisdom and implementing it as a learning resource. This study aims at revealing the values of local wisdom of the Ngata Toro indigenous people of Central Sulawesi Province in managing the forest, as a source of learning biology. The research was conducted through in-depth interviews, non-participant observation, documentation studies, and field notes. The data were analyzed with triangulation techniques using qualitative interaction analysis: data collection, data reduction, and data display. The Ngata Toro local community manages the forest by dividing it into several zones (wana ngkiki, wana, pangale, pahawa pongko, oma, and balingkea), accompanied by rules for result-based forest conservation and sustainable utilization. Identifying the purpose of the zonation and regulation of the forest reveals such values as environmental conservation, balance, sustainability, and mutual cooperation. These values are implemented as a biology learning resource derived from the competence standard of analyzing the utilization and conservation of the environment.
Spatial resolution limits for the localization of noise sources using direct sound mapping
DEFF Research Database (Denmark)
Comesana, D. Fernandez; Holland, K. R.; Fernandez Grande, Efren
2016-01-01
One of the main challenges arising from noise and vibration problems is how to identify the areas of a device, machine or structure that produce significant acoustic excitation, i.e., the localization of the main noise sources. The direct visualization of sound, in particular sound intensity, has been used extensively for many years to locate sound sources. However, it is not yet well defined when two sources should be regarded as resolved by means of direct sound mapping. This paper derives the limits of the direct representation of sound pressure, particle velocity and sound intensity by exploring the relationship between spatial resolution, noise level and geometry. The proposed expressions are validated via simulations and experiments. It is shown that particle velocity mapping yields better results for identifying closely spaced sound sources than sound pressure or sound intensity, especially...
Chorus source region localization in the Earth's outer magnetosphere using THEMIS measurements
Directory of Open Access Journals (Sweden)
O. Agapitov
2010-06-01
Full Text Available Discrete ELF/VLF chorus emissions, the most intense electromagnetic plasma waves observed in the Earth's radiation belts and outer magnetosphere, are thought to propagate roughly along magnetic field lines from a localized source region near the magnetic equator towards the magnetic poles. Measurements from the THEMIS project's Electric Field Instrument (EFI) and Search Coil Magnetometer (SCM) were used to determine the spatial scale of the chorus source region on the day side of the Earth's outer magnetosphere. We present simultaneous observations of the same chorus elements registered onboard several THEMIS spacecraft in 2007, when all the spacecraft were in the same orbit. Discrete chorus elements were observed at 0.15-0.25 of the local electron gyrofrequency, which is typical for the outer magnetosphere. We evaluated the Poynting flux and wave-vector distribution and found quasi-parallel propagation of the chorus wave packets relative to the local magnetic field. Amplitude and phase correlation analysis allowed us to estimate the characteristic spatial correlation scale transverse to the local magnetic field to be in the 2800-3200 km range.
Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng
2018-04-01
This paper presents a new method for wood defect detection that avoids the over-segmentation problem of local threshold segmentation methods by effectively combining visual saliency and local thresholding. Firstly, defect areas are coarsely located by computing their global visual saliency with the spectral residual method. Then, threshold segmentation by the maximum inter-class variance (Otsu) method is applied to position and segment the wood surface defects precisely around the coarsely located areas. Lastly, mathematical morphology is applied to the binary images after segmentation, reducing noise and small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains good segmentation results and is superior to existing segmentation methods based on edge detection, Otsu and local threshold segmentation.
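The "maximum inter-class variance method" referred to above is Otsu's thresholding. A compact sketch of it on synthetic bimodal data; the histogram size and test values are illustrative, not taken from the paper:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold maximizing the inter-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # probability of class 0 (below threshold)
    mu = np.cumsum(p * centers)          # cumulative mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mu[-1] * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Bimodal sample: dark "defect" pixels vs. bright background (synthetic)
rng = np.random.default_rng(3)
dark = rng.normal(50.0, 5.0, 1000)
bright = rng.normal(200.0, 5.0, 1000)
t = otsu_threshold(np.concatenate([dark, bright]))
```

The cumulative-sum formulation evaluates all candidate thresholds in one pass over the histogram, which is why Otsu's method is cheap enough to run repeatedly around each coarsely located defect area.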
Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code
Taherkhani, Ahmad; Malmi, Lauri
2013-01-01
In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concepts of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…
A comparison of two open source LiDAR surface classification algorithms
Wade T. Tinkham; Hongyu Huang; Alistair M.S. Smith; Rupesh Shrestha; Michael J. Falkowski; Andrew T. Hudak; Timothy E. Link; Nancy F. Glenn; Danny G. Marks
2011-01-01
With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs; a few are openly available, free to use, and are supported by published results....
Energy Technology Data Exchange (ETDEWEB)
Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk [Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)
2017-03-01
Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between 'warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^(-6) Mpc^(-3) and neutrino luminosity L_ν ≲ 10^42 erg s^(-1) (10^41 erg s^(-1)) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.
Beamspace fast fully adaptive brain source localization for limited data sequences
International Nuclear Information System (INIS)
Ravan, Maryam
2017-01-01
In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the observations are taken over a short time interval, especially when the number of electrodes is large. To address this issue, in a previous study we developed a multistage adaptive processing scheme called the fast fully adaptive (FFA) approach, which can significantly reduce the required sample support while still processing all available degrees of freedom (DOFs) by processing the observed data in stages through a decimation procedure. In this study, we introduce a new form of the FFA approach called beamspace FFA. We first divide the brain into smaller regions and transform the measured data from the source space to the beamspace in each region; the FFA approach is then applied to the beamspaced data of each region. The goal of this modification is to reduce the correlation sensitivity between sources in different brain regions. To demonstrate the performance of the beamspace FFA approach in the limited-data scenario, simulation results with multiple deep and cortical sources, as well as experimental results, are compared with the regular FFA and the widely used FINE approaches. Both simulation and experimental results demonstrate that the beamspace FFA method can localize different types of multiple correlated brain sources more accurately at low signal-to-noise ratios with limited data. (paper)
Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J
2016-02-07
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small-field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air, using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in full width at half maximum (FWHM) relative to the respective electron sources incident on the target, and the estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
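The MLEM update itself is generic: multiply the current source estimate by the back-projected ratio of measured to predicted fluence. A 1-D sketch in which a hypothetical Gaussian system matrix stands in for the paper's ray-traced photon transport; all sizes and widths are invented:

```python
import numpy as np

def mlem(K, meas, n_iter=500):
    """Maximum-likelihood expectation-maximization for meas = K @ src
    with a nonnegative source; K is the (assumed known) system matrix."""
    src = np.ones(K.shape[1])                  # flat initial estimate
    sens = K.sum(axis=0)                       # sensitivity (column sums)
    for _ in range(n_iter):
        proj = K @ src                         # forward projection
        ratio = np.where(proj > 0, meas / proj, 0.0)
        src *= (K.T @ ratio) / np.maximum(sens, 1e-12)
    return src

# Synthetic 1-D check: a Gaussian blur plays the role of the transport model
n = 40
x = np.arange(n)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)
true_src = np.exp(-0.5 * ((x - 20) / 3.0) ** 2)
meas = K @ true_src
est = mlem(K, meas)
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is why MLEM is a natural fit for intensity distributions.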
Wojdyga, Krzysztof; Malicki, Marcin
2017-11-01
The constant drive to improve energy efficiency forces activities aimed at reducing energy consumption and hence the amount of contaminant emissions to the atmosphere. Cooling demand, both for air-conditioning and process cooling, plays an increasingly important role in the summer balance of the Polish electricity generation and distribution system. In recent years, demand for electricity during the summer months has been increasing steadily and significantly, leading to deficits of available energy during particularly hot periods. This increases the importance of, and interest in, trigeneration power sources and heat-recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, mostly absorption-based, using a lithium bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, generated solely for them or recovered as waste or otherwise unused energy. The publication presents a simple algorithm designed to reduce the amount of heat supplied to absorption chillers producing chilled water for air conditioning, by reducing the temperature of the cooling water, and examines its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental advantages has been rated for specific sources, which enabled evaluation and estimation of implementing the simple algorithm in sources existing nationally.
International Nuclear Information System (INIS)
Kopka, P; Wawrzynczak, A; Borysiewicz, M
2015-01-01
In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of the unknown parameters needed to model the underlying dynamics of a physical system precisely. Here, Bayesian inference is a powerful tool to combine observed data with prior knowledge and obtain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems; sequential methods can significantly increase its efficiency. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by the distributed sensor network of the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm's outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model must be fitted to observable data. (paper)
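The simplest member of the ABC family, rejection ABC, already shows the mechanics that the sequential version accelerates: sample parameters from the prior, simulate, and keep the draws whose simulated observations land within a tolerance of the data. A sketch with a toy one-parameter dispersion model; the 1/d² model, sensor distances and tolerance are invented for illustration and are not the OLAD setup:

```python
import numpy as np

rng = np.random.default_rng(4)

def forward(rate, dist):
    """Toy dispersion model (illustrative only): concentration ~ rate / d^2."""
    return rate / dist ** 2

# "Measured" concentrations at three sensors; true release rate = 5.0
dist = np.array([10.0, 20.0, 40.0])
obs = forward(5.0, dist) + rng.normal(0.0, 0.001, 3)

# Rejection ABC: draw release rates from the prior and accept those whose
# simulated concentrations fall within a tolerance of the observations
prior_draws = rng.uniform(0.0, 20.0, 100_000)
tol = 0.005
accepted = prior_draws[
    np.abs(forward(prior_draws[:, None], dist[None, :]) - obs).max(axis=1) < tol
]
rate_mean = accepted.mean()
```

The accepted draws approximate the posterior; sequential ABC replaces the single tolerance with a shrinking schedule and re-samples from intermediate populations, avoiding the waste of rejecting most prior draws.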
Distributed 3D Source Localization from 2D DOA Measurements Using Multiple Linear Arrays
Directory of Open Access Journals (Sweden)
Antonio Canclini
2017-01-01
Full Text Available This manuscript addresses the problem of 3D source localization from directions of arrival (DOAs) in wireless acoustic sensor networks. In this context, multiple sensors measure the DOA of the source, and a central node combines the measurements to yield the source location estimate. Traditional approaches require 3D DOA measurements; that is, each sensor estimates the azimuth and elevation of the source by means of a microphone array, typically in a planar or spherical configuration. The proposed methodology aims at reducing the hardware and computational costs by combining measurements related to 2D DOAs estimated from linear arrays arbitrarily placed in 3D space. Each sensor measures the DOA in the plane containing the array and the source. Measurements are then translated into an equivalent planar geometry, in which a set of coplanar equivalent arrays observe the source preserving the original DOAs. This formulation is exploited to define a cost function whose minimization leads to the source location estimate. An extensive simulation campaign validates the proposed approach and compares its accuracy with state-of-the-art methodologies.
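Once each sensor contributes a bearing toward the source, the location estimate reduces to a least-squares intersection of lines. A sketch of that final step under the simplifying assumption of full 3-D bearings; the paper's contribution is precisely how to get there from 2-D DOAs, which this omits:

```python
import numpy as np

def localize_from_bearings(positions, directions):
    """Least-squares intersection of 3-D bearing lines: sensor i at p_i sees
    the source along unit vector u_i; minimize the summed squared
    perpendicular distances of the source to all lines."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(positions, directions):
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)   # projector orthogonal to the bearing
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Noise-free check: bearings pointing exactly at the source intersect there
src = np.array([1.0, 2.0, 0.5])
sensors = [np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0]),
           np.array([0.0, 4.0, 0.0]), np.array([0.0, 0.0, 3.0])]
bearings = [src - p for p in sensors]
est = localize_from_bearings(sensors, bearings)
```

The normal matrix `A` becomes singular only when all bearings are parallel, so almost any non-degenerate sensor geometry yields a unique closed-form estimate.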
Time domain localization technique with sparsity constraint for imaging acoustic sources
Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain
2017-09-01
This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to detect accurately and quickly the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent worker hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with l2-regularization. In this study, two sparsity constraints are used instead to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations, and the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi-real-time generation of noise source maps. Finally, the technique is tested with real data.
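Of the two sparse solvers mentioned, orthogonal matching pursuit is the simpler to sketch: greedily select the dictionary column most correlated with the residual, then re-fit all selected columns by least squares. A generic version on synthetic data; the matrix sizes and amplitudes are illustrative, not the article's beamforming dictionary:

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit the support by least squares."""
    support, x = [], np.zeros(A.shape[1])
    residual = y.astype(float).copy()
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Sparse recovery check: 3 active "sources" out of 60, 30 measurements
rng = np.random.default_rng(5)
A = rng.standard_normal((30, 60))
A /= np.linalg.norm(A, axis=0)           # unit-norm dictionary columns
x_true = np.zeros(60)
x_true[[7, 23, 41]] = [3.0, -2.0, 1.5]
y = A @ x_true
x_hat = omp(A, y, n_nonzero=4)           # one spare iteration for robustness
```

Because the residual is re-orthogonalized against the whole support at every step, OMP never picks the same column twice and typically needs only as many iterations as there are active sources.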
Energy Technology Data Exchange (ETDEWEB)
Hosseini, Seyed Abolfazl, E-mail: sahosseini@sharif.edu [Department of Energy Engineering, Sharif University of Technology, Tehran 8639-11365 (Iran, Islamic Republic of); Afrakoti, Iman Esmaili Paeen [Faculty of Engineering & Technology, University of Mazandaran, Pasdaran Street, P.O. Box: 416, Babolsar 47415 (Iran, Islamic Republic of)
2017-04-11
Accurate unfolding of the energy spectrum of a neutron source gives important information about unknown neutron sources, useful in areas such as nuclear safeguards, nuclear nonproliferation, and homeland security. In the present study, the energy spectrum of a poly-energetic fast neutron source is reconstructed using computational codes developed on the basis of the Group Method of Data Handling (GMDH) and Decision Tree (DT) algorithms. The neutron pulse-height distribution (neutron response function) in the considered NE-213 liquid organic scintillator was simulated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). The codes based on the GMDH and DT algorithms use data for training, testing and validation steps; to prepare the required data, 4000 randomly generated energy spectra distributed over 52 bins are used. The randomly generated energy spectra and the neutron pulse-height distributions simulated by MCNPX-ESUT for each spectrum serve as the output and input data, respectively. Since there is no need to solve an inverse problem with an ill-conditioned response matrix, the unfolded energy spectrum has high accuracy. The 241Am-9Be and 252Cf neutron sources are used in the validation step of the calculation. The unfolded energy spectra for these fast neutron sources agree excellently with the reference ones. Also, the accuracy of the unfolded energy spectra obtained using the GMDH is slightly better than that obtained from the DT. The results of the present study compare well in accuracy with a previously published paper based on the logsig and tansig transfer functions. - Highlights: • The neutron pulse height distribution was simulated using MCNPX-ESUT. • The energy spectrum of the neutron source was unfolded using GMDH. • The energy spectrum of the neutron source was
A Multiple-Label Guided Clustering Algorithm for Historical Document Dating and Localization.
He, Sheng; Samara, Petros; Burgers, Jan; Schomaker, Lambert
2016-11-01
It is of essential importance for historians to know the date and place of origin of the documents they study. It would be a huge advancement for historical scholars if it would be possible to automatically estimate the geographical and temporal provenance of a handwritten document by inferring them from the handwriting style of such a document. We propose a multiple-label guided clustering algorithm to discover the correlations between the concrete low-level visual elements in historical documents and abstract labels, such as date and location. First, a novel descriptor, called histogram of orientations of handwritten strokes, is proposed to extract and describe the visual elements, which is built on a scale-invariant polar-feature space. In addition, the multi-label self-organizing map (MLSOM) is proposed to discover the correlations between the low-level visual elements and their labels in a single framework. Our proposed MLSOM can be used to predict the labels directly. Moreover, the MLSOM can also be considered as a pre-structured clustering method to build a codebook, which contains more discriminative information on date and geography. The experimental results on the medieval paleographic scale data set demonstrate that our method achieves state-of-the-art results.
Mouthaan, Brian E; Rados, Matea; Barsi, Péter; Boon, Paul; Carmichael, David W; Carrette, Evelien; Craiu, Dana; Cross, J Helen; Diehl, Beate; Dimova, Petia; Fabo, Daniel; Francione, Stefano; Gaskin, Vladislav; Gil-Nagel, Antonio; Grigoreva, Elena; Guekht, Alla; Hirsch, Edouard; Hecimovic, Hrvoje; Helmstaedter, Christoph; Jung, Julien; Kalviainen, Reetta; Kelemen, Anna; Kimiskidis, Vasilios; Kobulashvili, Teia; Krsek, Pavel; Kuchukhidze, Giorgi; Larsson, Pål G; Leitinger, Markus; Lossius, Morten I; Luzin, Roman; Malmgren, Kristina; Mameniskiene, Ruta; Marusic, Petr; Metin, Baris; Özkara, Cigdem; Pecina, Hrvoje; Quesada, Carlos M; Rugg-Gunn, Fergus; Rydenhag, Bertil; Ryvlin, Philippe; Scholly, Julia; Seeck, Margitta; Staack, Anke M; Steinhoff, Bernhard J; Stepanov, Valentin; Tarta-Arsene, Oana; Trinka, Eugen; Uzan, Mustafa; Vogt, Viola L; Vos, Sjoerd B; Vulliémoz, Serge; Huiskamp, Geertjan; Leijten, Frans S S; Van Eijsden, Pieter; Braun, Kees P J
2016-05-01
In 2014 the European Union-funded E-PILEPSY project was launched to improve awareness of, and accessibility to, epilepsy surgery across Europe. We aimed to investigate the current use of neuroimaging, electromagnetic source localization, and imaging postprocessing procedures in participating centers. A survey on the clinical use of imaging, electromagnetic source localization, and postprocessing methods in epilepsy surgery candidates was distributed among the 25 centers of the consortium. A descriptive analysis was performed, and results were compared to existing guidelines and recommendations. Response rate was 96%. Standard epilepsy magnetic resonance imaging (MRI) protocols are acquired at 3 Tesla by 15 centers and at 1.5 Tesla by 9 centers. Three centers perform 3T MRI only if indicated. Twenty-six different MRI sequences were reported. Six centers follow all guideline-recommended MRI sequences with the proposed slice orientation and slice thickness or voxel size. Additional sequences are used by 22 centers. MRI postprocessing methods are used in 16 centers. Interictal positron emission tomography (PET) is available in 22 centers; all using 18F-fluorodeoxyglucose (FDG). Seventeen centers perform PET postprocessing. Single-photon emission computed tomography (SPECT) is used by 19 centers, of which 15 perform postprocessing. Four centers perform neither PET nor SPECT in children. Seven centers apply magnetoencephalography (MEG) source localization, and nine apply electroencephalography (EEG) source localization. Fourteen combinations of inverse methods and volume conduction models are used. We report a large variation in the presurgical diagnostic workup among epilepsy surgery centers across Europe. This diversity underscores the need for high-quality systematic reviews, evidence-based recommendations, and harmonization of available diagnostic presurgical methods. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.
Mustapha Gimba Kumshe; Kagu Bukar
2013-01-01
The main objective of this paper was to examine the elements, objectives, goals, and importance of cash management, and to identify the sources of revenue and cost-effective collections for local governments. The elements of cash management are identified as establishing bank relations, preparing cash flow statements, estimating collection receipts, analyzing cash flow, and preparing a budget. Among the objectives of cash management is to ensure availability of cash resources at all t...
Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.
Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle
2011-05-01
We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem when spatially extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) a parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources, and ii) an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher-order statistics (q ≥ 2) offers better robustness with respect to Gaussian noise of unknown spatial coherence and to modeling errors. As a result, we reduce the penalizing effects both of the background cerebral activity, which can be seen as Gaussian, spatially correlated noise, and of the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals, obtained with physiologically relevant models of both the sources and the volume conductor, show that our 2q-ExSo-MUSIC method markedly outperforms the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
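The subspace scan at the core of any MUSIC-style method can be illustrated with a minimal second-order (q = 1) narrowband sketch: eigendecompose the sensor covariance, take the noise subspace, and look for steering vectors nearly orthogonal to it. The array geometry, source angle, and noise level below are invented for illustration and have nothing to do with the EEG/MEG setting of the paper.

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """Second-order MUSIC pseudospectrum over a grid of steering vectors."""
    # Eigen-decompose the covariance; eigh returns eigenvalues in ascending
    # order, so the first M - n_sources eigenvectors span the noise subspace.
    _, V = np.linalg.eigh(R)
    En = V[:, : R.shape[0] - n_sources]
    proj = np.sum(np.abs(En.conj().T @ steering) ** 2, axis=0)
    # Peaks occur where the steering vector is orthogonal to the noise subspace.
    return 1.0 / np.maximum(proj, 1e-12)

# Toy scenario: an 8-element half-wavelength ULA observing one narrowband
# source at 20 degrees, with weak white noise on the covariance.
M, true_deg = 8, 20.0
grid = np.linspace(-90.0, 90.0, 181)
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(grid))))
a0 = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(true_deg)))
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(M)  # rank-1 signal + white noise
est_deg = grid[np.argmax(music_spectrum(R, A, n_sources=1))]
```

The extended-source parameterization and higher-order (2q) statistics of ExSo-MUSIC replace the rank-1 covariance model above but leave this scan structure intact.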
Directory of Open Access Journals (Sweden)
Luis C. J. Moreira
2010-12-01
Given the importance of knowing evapotranspiration (ET) for the rational use of irrigation water in the current context of water scarcity, regional algorithms estimating ET from remote-sensing observations have been developed. This work aimed at applying the SEBAL (Surface Energy Balance Algorithms for Land) algorithm to three Landsat-5 images from the second semester of 2006. The images cover irrigated areas, dense native forest, and Caatinga in three regions of the state of Ceará (Baixo Acaraú, Chapada do Apodi, and Chapada do Araripe). The algorithm calculates the hourly evapotranspiration from the latent heat flux, estimated as the residual of the surface energy balance. The ET values obtained in the three regions exceeded 0.60 mm h-1 in irrigated areas and areas of dense native vegetation; areas of less dense native vegetation showed hourly ET rates of 0.35 to 0.60 mm h-1, with nearly null values in degraded areas. Analysis of the hourly evapotranspiration means by the Tukey test at 5% probability revealed significant local as well as regional variability in the state of Ceará.
Worthmann, Brian M; Song, H C; Dowling, David R
2015-12-01
Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
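The conventional (Bartlett) processor that frequency-difference MFP builds on reduces to correlating the measured array data against unit-norm modeled replica vectors over a grid of candidate source positions. The sketch below uses a free-space Green's function as the propagation model, which is an assumption for illustration; the paper uses a full ocean waveguide model and the frequency-difference autoproduct, both omitted here.

```python
import numpy as np

def bartlett_surface(p, replicas):
    """Bartlett (conventional) matched-field ambiguity values: |w^H p|^2
    with unit-norm replica vectors w, one grid point per column."""
    w = replicas / np.linalg.norm(replicas, axis=0)
    return np.abs(w.conj().T @ p) ** 2

# Toy setup: a 16-element vertical array and a grid of candidate ranges.
k_wave = 2 * np.pi / 1.5                 # wavenumber for a 1.5 m wavelength
z = np.linspace(0.0, 15.0, 16)           # sensor depths (m)
ranges = np.linspace(50.0, 150.0, 101)   # candidate source ranges (m)
src_r, src_z = 100.0, 7.0                # true source position

def greens(r, zs):
    """Free-space point-source field on the array (no ocean waveguide)."""
    d = np.sqrt(r ** 2 + (z - zs) ** 2)
    return np.exp(1j * k_wave * d) / d

p = greens(src_r, src_z)                                  # "measured" data
replicas = np.stack([greens(r, src_z) for r in ranges], axis=1)
est_r = ranges[np.argmax(bartlett_surface(p, replicas))]
```

By the Cauchy-Schwarz inequality the surface peaks where the replica is proportional to the data, i.e., at the true range; environmental mismatch degrades exactly this match, which is what the frequency downshift mitigates.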
fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization
Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda
2010-03-01
Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.
Directory of Open Access Journals (Sweden)
Omar Mabrok Bouzid
2015-01-01
Structural health monitoring (SHM) is important for reducing the maintenance and operation cost of safety-critical components and systems in offshore wind turbines. This paper proposes an in situ wireless SHM system based on an acoustic emission (AE) technique. The technique introduces a number of challenges due to high sampling-rate requirements and limitations in communication bandwidth, memory space, and power resources. To overcome these challenges, this paper focuses on two elements: (1) the use of an in situ wireless SHM technique in conjunction with low sampling rates; (2) localization of acoustic sources that emulate impact damage or audible cracks caused by different objects, such as tools, bird strikes, or strong hail, all of which represent abrupt AE events and could affect the structural health of a monitored wind turbine blade. The localization is performed using features extracted from aliased AE signals based on a developed constraint localization model. To validate these elements, the proposed system was tested by localizing emulated AE sources acquired in the field.
The impact of source initialization on performance of the FMBMC-ICEU algorithm
International Nuclear Information System (INIS)
Wenner, Michael T.; Haghighat, Alireza
2011-01-01
Recent work in the completely fission matrix based Monte Carlo (FMBMC) eigenvalue methodology showed that the fission matrix coefficients are independent of the source eigenvector in the limit of small mesh sizes. As a result, fission matrix element autocorrelation should be insignificant. We have developed a modified fission matrix based Monte Carlo methodology for achieving unbiased solutions even for high Dominance Ratio (DR) problems. This methodology utilizes an initial source from a deterministic calculation using the PENTRAN 3-D Parallel SN code, autocorrelation and normality tests, and a Monte Carlo Iterated Confidence Interval (ICI) formulation for estimation of uncertainties in the fundamental eigenvalue and eigenfunction. This methodology is referred to as Fission Matrix Based Monte Carlo Initial-source Controlled Elements with Uncertainties (FMBMC-ICEU). In this paper, we will investigate the impact of different starting sources (PENTRAN initialized with a flat source and a boundary source) on the final results of a test problem with high source correlation. It is shown that although the fission matrix element correlation is significantly reduced, a good initial guess is still important within the framework of the FMBMC-ICEU methodology since the FMBMC-ICEU methodology still utilizes a standard source iteration scheme. (author)
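The standard source-iteration scheme that the FMBMC-ICEU methodology retains is, for a fixed fission matrix, just power iteration for the dominant eigenpair. The 3-region matrix below is made up; the sketch only illustrates that, given enough iterations, a flat and a boundary-peaked initial source converge to the same eigenvalue and source shape, while a high dominance ratio would slow exactly this convergence.

```python
import numpy as np

def power_iteration(F, s0, iters=200):
    """Source iteration on a fission matrix F: returns the dominant
    eigenvalue (k) and the normalized fission source vector."""
    s = s0 / s0.sum()
    for _ in range(iters):
        s_new = F @ s
        k = s_new.sum() / s.sum()        # eigenvalue estimate
        s = s_new / s_new.sum()          # renormalize the source
    return k, s

# Hypothetical 3-region fission matrix (made-up coupling coefficients).
F = np.array([[0.9, 0.2, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.2, 0.9]])
k_flat, s_flat = power_iteration(F, np.array([1.0, 1.0, 1.0]))  # flat start
k_bdry, s_bdry = power_iteration(F, np.array([1.0, 0.0, 0.0]))  # boundary start
```

In the Monte Carlo setting the matrix elements themselves are statistical estimates, which is why a good initial source still matters even when element autocorrelation is small.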
International Nuclear Information System (INIS)
Zhou, Hongming; Soh, Yeng Chai; Wu, Xiaoying
2015-01-01
Maintaining a desired comfort level while minimizing the total energy consumed is an interesting optimization problem in heating, ventilating and air conditioning (HVAC) system control. This paper proposes a localized control strategy that uses computational fluid dynamics (CFD) simulation results and the K-means clustering algorithm to optimally partition an air-conditioned room into different zones. The temperature and air velocity results from the CFD simulation are combined in two ways: 1) based on the relationship indicated in the predicted mean vote (PMV) formula; 2) based on the relationship extracted from the ASHRAE RP-884 database using an extreme learning machine (ELM). Localized control can then be effected, in which each zone is treated individually and an optimal control strategy is developed based on the partitioning result. - Highlights: • The paper provides a visual guideline for thermal comfort analysis. • CFD, K-means, PMV and ELM are used to analyze thermal conditions within a room. • A localized control strategy could be developed based on our clustering results
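The zone-partitioning step can be sketched with plain K-means on per-cell feature vectors. The temperature and air-speed values below are invented stand-ins for CFD output, and the PMV/ELM feature-combination step is not reproduced; the sketch only shows cells with similar thermal conditions being grouped into zones.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means on the rows of X (a minimal sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each row to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old one if a cluster goes empty.
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Hypothetical room cells: temperature (deg C) and air speed (m/s) per cell,
# standing in for CFD output (values are made up).
temp = np.array([22.0, 22.2, 22.1, 26.0, 26.3, 26.1])
vel = np.array([0.10, 0.12, 0.11, 0.30, 0.28, 0.31])
X = np.column_stack([temp, vel])
labels, centers = kmeans(X, 2)   # two zones: cool/still vs warm/drafty cells
```

In practice the features would be the PMV- or ELM-combined comfort indices rather than raw temperature and velocity.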
Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher
2017-09-01
Effective connectivity is one of the most important considerations in brain functional mapping via EEG: it describes the influence of one active brain region on others. In this paper, a new method based on the dual Kalman filter is proposed. First, a source localization method (standardized low-resolution brain electromagnetic tomography) is applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted sources to capture their activity and the time dependence between them. Then, a dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between active regions. The advantage of this method is that the activity of different brain regions is estimated simultaneously with the effective connectivity between them: by combining the dual Kalman filter with source localization, the source activity is updated over time in addition to the connectivity estimates. The method was evaluated first on simulated EEG signals with known interacting connectivity between active regions; noisy simulated signals with different signal-to-noise ratios were used to assess the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sliding window was calculated. In both simulated and real conditions, the proposed method gives acceptable results with the least mean-square error.
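The temporal-model step, fitting a multivariate autoregressive (MVAR) model to source time courses so that off-diagonal coefficients encode directed influence, can be sketched with ordinary least squares. The dual-Kalman tracking itself is omitted, and the two-channel system below is synthetic: source 0 drives source 1 but not vice versa.

```python
import numpy as np

def fit_mvar(X, order=1):
    """Least-squares fit of x_t = A1 x_(t-1) + ... + Ap x_(t-p) + e_t.
    X is (n_channels, T); returns per-lag (n, n) coefficient blocks."""
    n, T = X.shape
    Y = X[:, order:]
    Z = np.vstack([X[:, order - k - 1 : T - k - 1] for k in range(order)])
    A = Y @ Z.T @ np.linalg.inv(Z @ Z.T)       # ordinary least squares
    return A.reshape(n, order, n).swapaxes(0, 1)

# Two synthetic "sources": entry A_true[1, 0] != 0 means source 0 drives
# source 1; A_true[0, 1] == 0 means no influence in the reverse direction.
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.0],
                   [0.4, 0.3]])
T = 5000
X = np.zeros((2, T))
for t in range(1, T):
    X[:, t] = A_true @ X[:, t - 1] + rng.normal(0.0, 1.0, size=2)
A_hat = fit_mvar(X, order=1)[0]                # recovers the directed influence
```

The dual Kalman filter replaces this batch fit with a recursive estimate, so the coefficients (and hence the connectivity) can vary over time.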
International Nuclear Information System (INIS)
Pabroa, Preciosa Corazon B.; Bautista VII, Angel T.; Santos, Flora L.; Racho, Joseph Michael D.
2011-01-01
Ambient fine particulate matter (PM2.5) levels at the Metro Manila air sampling stations of the Philippine Nuclear Research Institute were found to be above the WHO guideline value of 10 μg m-3, indicating, in general, very poor air quality in the area. The elemental components of the fine particulate matter were obtained using energy-dispersive x-ray fluorescence spectrometry. Positive matrix factorization, a receptor modelling tool, was used to identify and apportion air pollution sources. Locations of probable transboundary air pollutants were evaluated using HYSPLIT (Hybrid Single Particle Lagrangian Integrated Trajectory model), while locations of probable local air pollutant sources were determined using the conditional probability function (CPF). Air pollutant sources can be either natural or anthropogenic. This study has shown natural air pollutant sources, such as the eruptions of Bulusan volcano in 2006 and of Anatahan volcano in 2005, to have impacted the region. Fine soil was shown to have originated from China's Mu Us Desert some time in 2004. Smoke in the fine fraction in 2006 shows indications of coming from forest fires in Sumatra and Borneo. Fine particulate Pb in Valenzuela was shown to be coming from the surrounding area. Many more significant air pollution impacts can be evaluated by identifying probable air pollutant sources from elemental fingerprints and locating these sources with HYSPLIT and CPF. (author)
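The conditional probability function used to point at local sources is simple to state: for each wind-direction sector, the fraction of samples whose concentration exceeds a threshold (a high percentile of the measured concentrations is a common choice). The wind and concentration data below are synthetic, and the 45-degree sector width is an assumption.

```python
import numpy as np

def cpf(wind_dir_deg, conc, threshold, sector=45.0):
    """Conditional probability function: per wind sector, the fraction of
    samples whose concentration exceeds `threshold`."""
    edges = np.arange(0.0, 360.0 + sector, sector)
    out = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_sector = (wind_dir_deg >= lo) & (wind_dir_deg < hi)
        n = in_sector.sum()
        out[(lo, hi)] = float((conc[in_sector] > threshold).sum() / n) if n else 0.0
    return out

# Synthetic hourly data: a hypothetical source sits due east (~90 degrees),
# so concentrations measured on easterly winds are inflated.
rng = np.random.default_rng(1)
wd = rng.uniform(0.0, 360.0, 1000)
conc = rng.lognormal(0.0, 0.3, 1000)
conc[(wd >= 67.5) & (wd < 112.5)] *= 3.0
table = cpf(wd, conc, threshold=np.percentile(conc, 75), sector=45.0)
peak_sector = max(table, key=table.get)   # sector most often above threshold
```

The sector with the highest CPF value indicates the likely bearing of the local source, which is how the Valenzuela Pb source was attributed to the surrounding area.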
Heuristic algorithm for determination of local properties of scale-free networks
Mitrovic, M
2006-01-01
Complex networks are everywhere: many phenomena in nature, such as brain structures, protein-protein interaction networks, social interactions, and the Internet and WWW, can be modeled as networks. They can be represented in terms of nodes and the edges connecting them. Importantly, these networks are not random; they have a structured architecture, and the structures of different networks are similar: all have a power-law degree distribution (the scale-free property), and despite their large size there is usually a relatively short path between any two nodes (the small-world property). Global characteristics include the degree distribution, the clustering coefficient, and the diameter. Local structure is described by the frequency of subgraphs of a given type, where a subgraph of order k is a part of the network consisting of k nodes and the edges between them; there are different types of subgraphs of the same order.
Iterated local search algorithm for solving the orienteering problem with soft time windows.
Aghezzaf, Brahim; Fahim, Hassan El
2016-01-01
In this paper we study the orienteering problem with time windows (OPTW) and the impact of relaxing the time windows on the profit collected by the vehicle. The relaxation adopted in the orienteering problem with soft time windows (OPSTW) studied in this research is a late-service relaxation that allows late services to customers, penalized linearly. We solve this problem heuristically using a hybrid iterated local search. The results of the computational study show that the proposed approach achieves promising solutions on the OPTW test instances available in the literature; one new best solution is found. On the newly generated OPSTW test instances, the results show that the profit collected by the OPSTW is better than the profit collected by the OPTW.
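The iterated local search skeleton (optimize locally to a local optimum, perturb, re-optimize, accept if better) is independent of the OPSTW details. The sketch below applies it to a toy selection-under-budget problem invented for illustration; it is not the paper's routing formulation, which also handles sequencing and time-window penalties.

```python
import random

def iterated_local_search(init, local_search, perturb, score, iters=200, seed=1):
    """Generic ILS skeleton: locally optimize, perturb the incumbent,
    re-optimize, and keep the better of the two solutions."""
    rng = random.Random(seed)
    best = local_search(init)
    for _ in range(iters):
        cand = local_search(perturb(best, rng))
        if score(cand) > score(best):
            best = cand
    return best

# Toy stand-in problem: choose customers maximizing profit within a travel
# budget (no routing or time windows, unlike the real OPSTW instances).
profit = [6, 5, 4, 3]
cost = [5, 4, 3, 2]
BUDGET = 10

def score(sel):
    total_cost = sum(cost[i] for i in sel)
    return sum(profit[i] for i in sel) if total_cost <= BUDGET else -1

def local_search(sel):
    """1-flip hill climbing: add/remove single customers while improving."""
    sel = set(sel)
    improved = True
    while improved:
        improved = False
        for i in range(len(profit)):
            cand = sel ^ {i}
            if score(cand) > score(sel):
                sel, improved = cand, True
    return sel

def perturb(sel, rng):
    return sel ^ {rng.randrange(len(profit))}   # random 1-flip kick

best = iterated_local_search(set(), local_search, perturb, score)
```

The perturbation lets the search escape the local optimum that 1-flip hill climbing alone gets stuck in, which is the point of the hybrid approach.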
A General Algorithm for Robot Formations Using Local Sensing and Minimal Communication
DEFF Research Database (Denmark)
Fredslund, Jakob; Matarić, Maja J
2002-01-01
We study the problem of achieving global behavior in a group of distributed robots using only local sensing and minimal communication, in the context of formations. The goal is to have mobile robots establish and maintain some predetermined geometric shape. The key idea is that each robot keeps a single friend at a desired angle, using some appropriate sensor; by panning the sensor, the goal for all formations becomes simply to center the friend in the sensor's field of view. We also present a general analytical measure for evaluating formations and apply it to the position data from both simulation and physical robot experiments. We report results from extensive simulation experiments and 40+ experiments with four physical robots, showing the viability of our approach. We used two lasers to track the physical robots to obtain ground-truth validation data.
Directory of Open Access Journals (Sweden)
A. A. Zolotin
2015-07-01
A posteriori inference is one of the three kinds of probabilistic-logic inference in the theory of probabilistic graphical models, and the basis for processing knowledge patterns with probabilistic uncertainty using Bayesian networks. The paper deals with the description of local a posteriori inference in algebraic Bayesian networks, which represent a class of probabilistic graphical models, by means of matrix-vector equations. The latter are essentially based on the tensor product of matrices, the Kronecker power, and the Hadamard product. Matrix equations for calculating the vectors of posterior probabilities within a posteriori inference in knowledge patterns over quanta propositions are obtained. Equations of the same type have already been discussed within the theory of algebraic Bayesian networks, but they were built only for a posteriori inference in knowledge patterns on ideals of conjuncts. During the synthesis and development of the matrix-vector equations over probability vectors of quanta propositions, a number of earlier results concerning normalizing factors in a posteriori inference and the assignment of a linear projective operator with a selector vector were adapted. We consider all three types of incoming evidence (deterministic, stochastic, and inaccurate) combined with scalar and interval estimates of the probability of truth of propositional formulas in the knowledge patterns. Linear programming problems are formed whose solutions give the desired interval values of posterior probabilities in the case of inaccurate evidence or interval estimates in a knowledge pattern. This description of a posteriori inference makes it possible to extend the set of knowledge-pattern types usable in local and global a posteriori inference, and to simplify complex software implementations through existing third-party libraries that efficiently support the representation and processing of matrices and vectors when
An efficient algorithm to perform local concerted movements of a chain molecule.
Directory of Open Access Journals (Sweden)
Stefano Zamuner
The devising of efficient concerted-rotation moves that modify only selected local portions of chain molecules is a long-studied problem. Possible applications range from speeding up the uncorrelated sampling of dense polymeric systems to loop reconstruction and structure refinement in protein modeling. Here, we propose and validate, on a few pedagogical examples, a novel numerical strategy that generalizes the notion of concerted rotation. The use of the Denavit-Hartenberg parameters for chain description allows all possible choices for the subset of degrees of freedom to be modified in the move. They can be arbitrarily distributed along the chain and can be distanced between consecutive monomers as well. The efficiency of the methodology capitalizes on the inherent geometrical structure of the manifold defined by all chain configurations compatible with the fixed degrees of freedom. The chain portion to be moved is first opened along a direction chosen in the tangent space to the manifold, and then closed in the orthogonal space. As a consequence, in Monte Carlo simulations detailed balance is easily enforced without the need for Jacobian reweighting. Moreover, the relative fluctuations of the degrees of freedom involved in the move can be easily tuned. We show different applications: the manifold of possible configurations is explored in a very efficient way for a protein fragment and for a cyclic molecule; the "local backbone volume", related to the volume spanned by the manifold, reproduces the mobility profile of all-α helical proteins; and the refinement of small protein fragments with different secondary structures is addressed. The presented results suggest our methodology as a valuable exploration and sampling tool in the context of bio-molecular simulations.
Energy Technology Data Exchange (ETDEWEB)
Doert, Marlene [Technische Universitaet Dortmund (Germany); Ruhr-Universitaet Bochum (Germany); Einecke, Sabrina [Technische Universitaet Dortmund (Germany); Errando, Manel [Barnard College, Columbia University, New York City (United States)
2015-07-01
The second Fermi-LAT source catalog (2FGL) is the deepest all-sky survey of the gamma-ray sky currently available to the community. Out of the 1873 catalog sources, 576 remain unassociated. We present a search for active galactic nuclei (AGN) among these unassociated objects, which aims at a reduction of the number of unassociated gamma-ray sources and a more complete characterization of the population of gamma-ray emitting AGN. Our study uses two complementary machine learning algorithms which are individually trained on the gamma-ray properties of associated 2FGL sources and thereafter applied to the unassociated sample. The intersection of the two methods yields a high-confidence sample of 231 AGN candidate sources. We estimate the performance of the classification by taking inherent differences between the samples of associated and unassociated 2FGL sources into account. A search for infrared counterparts and first results from follow-up studies in the X-ray band using Swift satellite data for a subset of our AGN candidates are also presented.
MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields
Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria
2015-08-01
We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale, and that a multihomogeneous model is needed to explain their complex scaling behaviour. To perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimations which we call a multihomogeneous model. This defines a new technique of source-parameter estimation (Multi-HOmogeneity Depth Estimation, MHODE), permitting retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example. These applications show the usefulness of the new concepts, multihomogeneity and
A novel method for direct localized sound speed measurement using the virtual source paradigm
DEFF Research Database (Denmark)
Byram, Brett; Trahey, Gregg E.; Jensen, Jørgen Arendt
2007-01-01
Between a pair of spatially registered virtual detectors, a spherical wave is propagated; by beamforming the received data, the time of flight between the two virtual sources can be calculated, and from this information the local sound speed can be estimated. Validation of the estimator used both phantom and simulation results. The phantom consisted of two wire targets located near the transducer's axis at depths of 17 and 28 mm. Using this phantom, the sound speed between the wires was measured for a homogeneous (water) medium and for two inhomogeneous (DB-grade castor oil and water) mediums. The inhomogeneous mediums were arranged as an oil layer, one 6 mm thick and the other 11 mm thick, on top of a water layer. To complement the phantom studies, sources of error for spatial registration of virtual detectors were simulated. The sources of error presented here are multiple sound...
Heidari, Morteza; Zargari Khuzani, Abolfazl; Hollingsworth, Alan B.; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin
2018-02-01
In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate the advantages of applying a machine learning approach embedded with a locality preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes: 250 high-risk cases in which cancer was detected in the next subsequent mammography screening, and 250 low-risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with an LPP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increasing trend of adjusted odds ratios was also detected, with odds ratios increasing from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.
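The pipeline of "project 44 features down to 4 by a maximal-variance criterion, then classify" can be sketched with a PCA-style projection standing in for LPP (real LPP additionally preserves local neighborhood structure, which this sketch ignores) followed by a plain nearest-neighbour vote. All data below are synthetic; only the shape of the pipeline mirrors the abstract.

```python
import numpy as np

def top_variance_projection(X, n_components):
    """Project onto the directions of maximal variance (a PCA-style stand-in
    for LPP, chosen here because it needs only an SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def knn_predict(Xtr, ytr, Xte, k=3):
    """Plain k-nearest-neighbour majority vote."""
    d = ((Xte[:, None, :] - Xtr[None]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return (ytr[idx].mean(axis=1) > 0.5).astype(int)

# Synthetic 44-feature data standing in for the mammographic features:
# two features carry most of the variance and determine the label.
rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 44))
X[:, 0] *= 5.0
X[:, 1] *= 4.0
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
Z = top_variance_projection(X, 4)              # 44 -> 4, as in the abstract
pred = knn_predict(Z[:80], y[:80], Z[80:], k=3)
acc = float((pred == y[80:]).mean())
```

The projection keeps the informative high-variance directions, so the classifier in the reduced space still separates the classes well.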
Directory of Open Access Journals (Sweden)
Zhigang Lian
2010-01-01
The job-shop scheduling problem (JSSP) is a branch of production scheduling and is among the hardest combinatorial optimization problems. Many different approaches have been applied to JSSP, but even moderately sized instances cannot always be solved to guaranteed optimality. The original particle swarm optimization algorithm (OPSOA) is generally used to solve continuous problems and rarely to optimize discrete problems such as JSSP; through research, I find that it has a tendency to get stuck in a near-optimal solution, especially for middle- and large-size problems. The local and global search combined particle swarm optimization algorithm (LGSCPSOA) is used to solve JSSP, where the particle-updating mechanism benefits from the searching experience of the particle itself, the best of all particles in the swarm, and the best of the particles in the neighborhood population. A new coding method is used in LGSCPSOA to optimize JSSP, which guarantees that all sequences are feasible solutions. Computational experiments on three representative instances show that the LGSCPSOA is efficacious for JSSP in minimizing makespan.
Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin
2018-03-01
Both conventional and deep machine learning have been used to develop decision-support tools applied in medical imaging informatics. In order to take advantage of both conventional and deep learning approaches, this study aims to investigate the feasibility of applying a locality preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to the bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighborhood (KNN) algorithm based machine learning classifier using the LPP-generated new feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 in predicting whether breast cancer would be detected in the next subsequent mammography screening.
XTALOPT: An open-source evolutionary algorithm for crystal structure prediction
Lonie, David C.; Zurek, Eva
2011-02-01
The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator, which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation-based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy to use, intuitive graphical interface.
Program summary:
Program title: XTALOPT
Catalogue identifier: AEGX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v2.1 or later [1]
No. of lines in distributed program, including test data, etc.: 36 849
No. of bytes in distributed program, including test data, etc.: 1 149 399
Distribution format: tar.gz
Programming language: C++
Computer: PCs, workstations, or clusters
Operating system: Linux
Classification: 7.7
External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]
Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics.
Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on the potential energy surface. Our evolutionary algorithm, XTALOPT, is freely
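The generational core of an evolutionary search (select the fittest, cross over, mutate) can be sketched on a toy continuous "energy surface". The real XTALOPT uses crystal-specific operators such as the ripple operator and a continuous rather than generational workflow, none of which are reproduced here; the target function and all parameters below are invented.

```python
import random

def evolve(score, new_individual, crossover, mutate, pop_size=20, gens=100, seed=3):
    """Minimal generational EA: truncation selection, crossover, mutation.
    Lower score = fitter (score plays the role of an energy)."""
    rng = random.Random(seed)
    pop = [new_individual(rng) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=score)
        keep = pop[: pop_size // 2]              # elitist truncation selection
        children = [mutate(crossover(rng.choice(keep), rng.choice(keep), rng), rng)
                    for _ in range(pop_size - len(keep))]
        pop = keep + children
    return min(pop, key=score)

# Toy "energy surface" with its global minimum at (1, 2, 3).
TARGET = (1.0, 2.0, 3.0)

def score(x):
    return sum((a - b) ** 2 for a, b in zip(x, TARGET))

def new_individual(rng):
    return [rng.uniform(-5.0, 5.0) for _ in range(3)]

def crossover(a, b, rng):
    return [rng.choice(pair) for pair in zip(a, b)]   # uniform crossover

def mutate(x, rng):
    return [v + rng.gauss(0.0, 0.1) for v in x]

best = evolve(score, new_individual, crossover, mutate)
```

In XTALOPT the individuals are candidate crystal structures and the score comes from an external energy code (VASP, PWSCF, or GULP) rather than a closed-form function.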
Online Identification of Photovoltaic Source Parameters by Using a Genetic Algorithm
Directory of Open Access Journals (Sweden)
Giovanni Petrone
2017-12-01
Full Text Available In this paper, an efficient method for the online identification of the photovoltaic single-diode model parameters is proposed. The combination of a genetic algorithm with explicit equations allows obtaining precise results without the direct measurement of the short-circuit current and open-circuit voltage that are typically used in offline identification methods. Since the proposed method requires only voltage and current values close to the maximum power point, it can be easily integrated into any photovoltaic system, and it operates online without compromising the power production. The proposed approach has been implemented and tested on an embedded system, and it exhibits good performance for monitoring/diagnosis applications.
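As a hedged illustration of the idea, not the authors' implementation, the sketch below fits a simplified single-diode model (series and shunt resistance neglected) to a few (V, I) samples near the maximum power point with a toy genetic algorithm; all parameter values, search ranges, and operator choices are invented for the example.

```python
import math
import random

VT = 0.025852  # thermal voltage at ~300 K [V]

def model(V, Iph, I0, n):
    # Simplified single-diode model: I = Iph - I0*(exp(V/(n*Vt)) - 1)
    return Iph - I0 * (math.exp(V / (n * VT)) - 1.0)

def fitness(params, data):
    # Negative sum of squared current errors (higher is better)
    Iph, I0, n = params
    return -sum((model(V, Iph, I0, n) - I) ** 2 for V, I in data)

def ga_fit(data, pop=40, gens=60, seed=1):
    rng = random.Random(seed)
    def rand_ind():
        return (rng.uniform(1, 10), 10 ** rng.uniform(-10, -6), rng.uniform(1, 2))
    P = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda p: fitness(p, data), reverse=True)
        elite = P[: pop // 4]                       # selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = tuple(w * x + (1 - w) * y for x, y in zip(a, b))  # blend crossover
            if rng.random() < 0.3:                  # mutation
                child = (child[0] * rng.uniform(0.9, 1.1),
                         child[1] * rng.uniform(0.5, 2.0),
                         min(2.0, max(1.0, child[2] + rng.gauss(0.0, 0.05))))
            children.append(child)
        P = elite + children
    return max(P, key=lambda p: fitness(p, data))

# Synthetic "measurements" near the MPP of a hypothetical cell
true_params = (5.0, 1e-8, 1.3)
data = [(v, model(v, *true_params)) for v in (0.45, 0.48, 0.50, 0.52, 0.55)]
Iph, I0, n = ga_fit(data)
```

Because only a narrow voltage window around the MPP is sampled, the diode parameters I0 and n are only weakly identifiable here, which is why practical methods combine the search with explicit equations as the abstract describes.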
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, as in the tilted-wave interferometer (TWI). It is necessary to improve measurement efficiency by obtaining the optimum point-source array for different test pieces before TWI measurements. For the purpose of forming a point-source array based on the gradients of the different surfaces under test, we established a mathematical model describing the relationship between the point-source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point-source array matched to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.
Novakova, T.; Matys Grygar, T.; Bábek, O.; Faměra, M.; Mihaljevič, M.; Strnad, L.
2012-04-01
Industrial pollution can provide a useful tool to study the spatiotemporal distribution of modern floodplain sediments, trace their provenance, and allow their dating. Regional contamination of southern Moravia (the south-eastern part of the Czech Republic) by heavy metals during the 20th century was determined in fluvial sediments of the Morava River by means of enrichment factors. The influence of local sources and of sampling-site heterogeneity was studied in overbank fines of different lithology and facies. For this purpose, samples were obtained from hand-drilled cores from regulated channel banks with well-defined local sources of contamination (factories in Zlín and Otrokovice) and also from near-naturally inundated floodplains in two nature-protected areas (at a 30 km distance). The analyses were performed by X-ray fluorescence spectroscopy (EDXRF), ICP-MS (EDXRF sample calibration, 206Pb/207Pb ratio), magnetic susceptibility, cation exchange capacity (CEC), and 137Cs and 210Pb activities. Enrichment factors (EFs) of heavy metals (Pb, Zn, Cu and Cr) and the magnetic susceptibility of overbank fines in near-naturally (near-annually) inundated areas allowed us to reconstruct the historical contamination by heavy metals in the entire study area independently of lithofacies. Measured lithological background values were then used for the calculation of EFs in the channel sediments and in floodplain sediments deposited within a narrow part of a former floodplain, which is now reduced to about one quarter of its original width by flood defences. Sediments from regulated channel banks were found stratigraphically and lithologically "erratic" and unreliable for quantification of regional contamination due to the high variability of the sedimentary environment. On the other hand, these sediments are very sensitive to nearby local sources of heavy metals. For practical work one must first choose whether a large-scale, i.e. really averaged, regional contamination should be reconstructed
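The enrichment factor the abstract relies on is a simple double ratio, normalizing a metal concentration to a conservative reference element in both the sample and the lithological background. The sketch below is a generic illustration; the choice of Al as reference element and all concentration values are invented, not taken from the study.

```python
def enrichment_factor(c_metal, c_ref, bg_metal, bg_ref):
    """EF = (C_metal / C_ref)_sample / (C_metal / C_ref)_background."""
    return (c_metal / c_ref) / (bg_metal / bg_ref)

# e.g. Pb in a floodplain sample vs. local lithological background,
# normalized to a conservative reference element (here hypothetically Al)
ef_pb = enrichment_factor(c_metal=85.0, c_ref=6.2,    # sample: Pb, Al
                          bg_metal=20.0, bg_ref=6.0)  # background: Pb, Al
```

Normalizing to a reference element is what makes the EF comparable across samples of different lithology and grain size; values well above 1 are commonly read as anthropogenic enrichment.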
Aerosol-Cloud Interactions During Puijo Cloud Experiments - The effects of weather and local sources
Komppula, Mika; Portin, Harri; Leskinen, Ari; Romakkaniemi, Sami; Brus, David; Neitola, Kimmo; Hyvärinen, Antti-Pekka; Kortelainen, Aki; Hao, Liqing; Miettinen, Pasi; Jaatinen, Antti; Ahmad, Irshad; Lihavainen, Heikki; Laaksonen, Ari; Lehtinen, Kari E. J.
2013-04-01
The Puijo measurement station has provided continuous data on aerosol-cloud interactions since 2006. The station is located on top of the Puijo observation tower (306 m a.s.l, 224 m above the surrounding lake level) in Kuopio, Finland. The top of the tower is covered by cloud about 15 % of the time, offering perfect conditions for studying aerosol-cloud interactions. With a twin-inlet setup (total and interstitial inlets) we are able to separate the activated particles from the interstitial (non-activated) particles. The continuous twin-inlet measurements include aerosol size distribution, scattering and absorption. In addition cloud droplet number and size distribution are measured continuously with weather parameters. During the campaigns the twin-inlet system was additionally equipped with aerosol mass spectrometer (AMS) and Single Particle Soot Photometer (SP-2). This way we were able to define the differences in chemical composition of the activated and non-activated particles. Potential cloud condensation nuclei (CCN) in different supersaturations were measured with two CCN counters (CCNC). The other CCNC was operated with a Differential Mobility Analyzer (DMA) to obtain size selected CCN spectra. Other additional measurements included Hygroscopic Tandem Differential Mobility Analyzer (HTDMA) for particle hygroscopicity. Additionally the valuable vertical wind profiles (updraft velocities) are available from Halo Doppler lidar during the 2011 campaign. Cloud properties (droplet number and effective radius) from MODIS instrument onboard Terra and Aqua satellites were retrieved and compared with the measured values. This work summarizes the two latest intensive campaigns, Puijo Cloud Experiments (PuCE) 2010 & 2011. We study especially the effect of the local sources on the cloud activation behaviour of the aerosol particles. The main local sources include a paper mill, a heating plant, traffic and residential areas. The sources can be categorized and identified
Deconvolution for the localization of sound sources using a circular microphone array
DEFF Research Database (Denmark)
Tiana Roig, Elisabet; Jacobsen, Finn
2013-01-01
During the last decade, the aeroacoustic community has examined various methods based on deconvolution to improve the visualization of acoustic fields scanned with planar sparse arrays of microphones. These methods assume that the beamforming map in an observation plane can be approximated by a convolution of the actual source distribution and the beamformer's point-spread function, provided that the beamformer's point-spread function is shift-invariant. This makes it possible to apply computationally efficient deconvolution algorithms that consist of spectral procedures in the entire region of interest, such as the deconvolution approach for the mapping of the acoustic sources 2 (DAMAS2), the Fourier-based non-negative least squares, and the Richardson-Lucy algorithm. This investigation examines the matter with computer simulations and measurements.
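As a hedged sketch of the FFT-based deconvolution the abstract refers to, the following applies Richardson-Lucy iterations with a shift-invariant point-spread function in one dimension; the Gaussian PSF, the grid size, and the two-source map are all invented for illustration and are not the paper's circular-array setup.

```python
import numpy as np

def fft_conv(x, psf):
    # Circular convolution via the FFT (valid for a shift-invariant PSF)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))

def richardson_lucy(b, psf, iters=200):
    psf_adj = np.roll(psf[::-1], 1)         # adjoint kernel of circular convolution
    x = np.full_like(b, b.mean())           # non-negative initial guess
    for _ in range(iters):
        ratio = b / np.maximum(fft_conv(x, psf), 1e-12)
        x *= fft_conv(ratio, psf_adj)       # multiplicative RL update
    return x

n = 128
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
psf /= psf.sum()
psf = np.roll(psf, -(n // 2))               # center the PSF at index 0
src = np.zeros(n)
src[40] = 1.0
src[80] = 0.5
beam_map = fft_conv(src, psf)               # simulated beamforming map
est = richardson_lucy(beam_map, psf)
```

Because the PSF is applied via the FFT, each iteration costs O(n log n) rather than the O(n^2) of a dense shift-variant operator, which is precisely the efficiency argument made for these spectral methods; the multiplicative update also preserves non-negativity and total map energy.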
Valin, Jean-Marc; Michaud, François; Hadjou, Brahim; Rouat, Jean
2016-01-01
Mobile robots in real-life settings would benefit from being able to localize sound sources. Such a capability can nicely complement vision to help localize a person or an interesting event in the environment, and also to provide enhanced processing for other capabilities such as speech recognition. In this paper we present a robust sound source localization method in three-dimensional space using an array of 8 microphones. The method is based on a frequency-domain implementation of a steered...
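Steered-beamformer localization of the kind described above is commonly built on time-delay-of-arrival estimates between microphone pairs; a standard estimator is the generalized cross-correlation with phase transform (GCC-PHAT). The sketch below is a generic illustration on a synthetic signal, not the authors' 8-microphone implementation.

```python
import numpy as np

def gcc_phat_delay(a, b):
    """Delay of signal a relative to b, in samples (positive: a lags b)."""
    n = len(a) + len(b)
    A = np.fft.rfft(a, n)
    B = np.fft.rfft(b, n)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)       # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n)
    cc = np.concatenate((cc[-(n // 2):], cc[: n // 2 + 1]))  # reorder lags
    return int(np.argmax(cc)) - n // 2

rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
delay = 25
mic1 = s
mic2 = np.concatenate((np.zeros(delay), s[:-delay]))  # mic2 receives s 25 samples later
tdoa = gcc_phat_delay(mic2, mic1)
```

The PHAT weighting whitens the cross-spectrum so the correlation peak stays sharp for wideband signals such as speech; the sample delay is then converted to an angle or position from the known array geometry.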
Crane, P.; Silliman, S. E.; Boukari, M.; Atoro, I.; Azonsi, F.
2005-12-01
Deteriorating groundwater quality, as represented by high nitrates, in the Colline province of Benin, West Africa, was identified by the Benin national water agency, Direction Hydraulique. For unknown reasons, the Colline province had consistently higher nitrate levels than any other region of the country. In an effort to address this water quality issue, a collaborative team was created that incorporated professionals from the Universite d'Abomey-Calavi (Benin), the University of Notre Dame (USA), Direction Hydraulique (a government water agency in Benin), Centre Afrika Obota (an educational NGO in Benin), and the local population of the village of Adourekoman. The goals of the project were to: (i) identify the source of the nitrates, (ii) test field techniques for long-term, local monitoring, and (iii) identify possible solutions to the high levels of groundwater nitrates. In order to accomplish these goals, the following methods were utilized: regional sampling of groundwater quality, field methods that allowed the local population to regularly monitor village groundwater quality, isotopic analysis, and the sociological methods of surveys, focus groups, and observations. It is through the combination of these multi-disciplinary methods that all three goals were successfully addressed, leading to a preliminary identification of the sources of nitrates in the village of Adourekoman, confirmation of the utility of the field techniques, and an initial assessment of possible solutions to the contamination problem.
Tornga, Shawn R.
The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals of 5x5x2 in.^3 each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars of 24x2.5x3 in.^3 each, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the coded aperture, Compton, and hybrid imaging algorithms developed will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) readings, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with the detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as
Pi, Shaohua; Wang, Bingjie; Zhao, Jiang; Sun, Qi
2016-10-10
In the Sagnac fiber-optic interferometer system, the phase difference signal can be described as a convolution of the waveform of the invasion with its occurring-position-associated transfer function h(t); deconvolution is introduced to improve the spatial resolution of the localization. In general, to get a 26 m spatial resolution at a sampling rate of 4x10^6 s^-1, the algorithm goes through three main steps after the preprocessing operations. First, the decimated phase difference signal is transformed from the time domain into the real cepstrum domain, where a probable region of the invasion distance can be ascertained. Second, a narrower region of the invasion distance is acquired by coarsely assuming and sweeping a transfer function h(t) within the probable region and examining where the restored invasion waveform x(t) attains its minimum standard deviation. Third, the narrow region is finely swept point by point with the same criterion to get the final localization. Also, the original waveform of the invasion can be restored for the first time as a by-product, which provides more accurate and purer characteristics for further processing, such as subsequent pattern recognition.
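The first step above rests on a standard property of the real cepstrum: taking the log magnitude spectrum turns a convolution x(t)*h(t) into a sum, so an echo-like transfer function shows up as a peak at the corresponding quefrency. The sketch below demonstrates this on a synthetic signal; the signal length, echo delay, and amplitude are invented, not the paper's data.

```python
import numpy as np

def real_cepstrum(sig):
    # Real cepstrum: IFFT of the log magnitude spectrum
    spectrum = np.fft.fft(sig)
    return np.real(np.fft.ifft(np.log(np.maximum(np.abs(spectrum), 1e-12))))

rng = np.random.default_rng(1)
x = rng.standard_normal(2048)               # stand-in for the invasion waveform
delay = 120
h = np.zeros(2048)
h[0] = 1.0
h[delay] = 0.6                              # echo-like transfer function
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # convolution model
cep = real_cepstrum(y)
# the echo delay appears as a peak near quefrency 120
```

In the localization algorithm this peak only brackets a probable region; the fine estimate then comes from sweeping candidate h(t) and scoring the restored x(t), as the abstract describes.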
Physics-based approach to chemical source localization using mobile robotic swarms
Zarzhitsky, Dimitri
2008-07-01
Recently, distributed computation has assumed a dominant role in the fields of artificial intelligence and robotics. To improve system performance, engineers are combining multiple cooperating robots into cohesive collectives called swarms. This thesis illustrates the application of basic principles of physicomimetics, or physics-based design, to swarm robotic systems. Such principles include decentralized control, short-range sensing, and low power consumption. We show how the application of these principles to robotic swarms results in highly scalable, robust, and adaptive multi-robot systems. The emergence of these valuable properties can be predicted with the help of well-developed theoretical methods. In this research effort, we have designed and constructed a distributed physicomimetics system for locating sources of airborne chemical plumes. This task, called chemical plume tracing (CPT), is receiving a great deal of attention due to persistent homeland security threats. For this thesis, we have created a novel CPT algorithm called fluxotaxis that is based on theoretical principles of fluid dynamics. Analytically, we show that fluxotaxis combines the essence, as well as the strengths, of the two most popular biologically inspired CPT methods: chemotaxis and anemotaxis. The chemotaxis strategy consists of navigating in the direction of the chemical density gradient within the plume, while the anemotaxis approach is based on an upwind traversal of the chemical cloud. Rigorous and extensive experimental evaluations have been performed in simulated chemical plume environments. Using a suite of performance metrics that capture the salient aspects of swarm-specific behavior, we have been able to evaluate and compare the three CPT algorithms. We demonstrate the improved performance of our fluxotaxis approach over both chemotaxis and anemotaxis in these realistic simulation environments, which include obstacles. To test our understanding of CPT on actual hardware
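The chemotaxis strategy mentioned above can be illustrated by a single agent climbing the density gradient of a static plume. The sketch below is a deliberately minimal toy with a Gaussian plume and finite-difference gradients; fluxotaxis itself additionally uses flow-velocity information, which is omitted here, and all coordinates and step sizes are invented.

```python
import math

SOURCE = (3.0, -2.0)  # hypothetical plume source location

def density(x, y):
    # Static Gaussian plume centered on the source
    return math.exp(-((x - SOURCE[0]) ** 2 + (y - SOURCE[1]) ** 2))

def chemotaxis(x, y, step=0.1, iters=200, eps=1e-3):
    for _ in range(iters):
        # Finite-difference estimate of the local density gradient
        gx = (density(x + eps, y) - density(x - eps, y)) / (2 * eps)
        gy = (density(x, y + eps) - density(x, y - eps)) / (2 * eps)
        norm = math.hypot(gx, gy) or 1.0
        x += step * gx / norm                # move one step up the gradient
        y += step * gy / norm
    return x, y

x, y = chemotaxis(0.0, 0.0)
```

A fixed step along the normalized gradient eventually oscillates around the source within one step length; real plumes are turbulent and intermittent, which is exactly why gradient-only chemotaxis degrades and wind- and flux-based strategies are studied.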
Anderson, Jill T; Geber, Monica A
2010-02-01
In heterogeneous landscapes, divergent selection can favor the evolution of locally adapted ecotypes, especially when interhabitat gene flow is minimal. However, if habitats differ in size or quality, source-sink dynamics can shape evolutionary trajectories. Upland and bottomland forests of the southeastern USA differ in water table depth, light availability, edaphic conditions, and plant community. We conducted a multiyear reciprocal transplant experiment to test whether Elliott's blueberry (Vaccinium elliottii) is locally adapted to these contrasting environments. Additionally, we exposed seedlings and cuttings to prolonged drought and flooding in the greenhouse to assess fitness responses to abiotic stress. Contrary to predictions of local adaptation, V. elliottii families exhibited significantly higher survivorship and growth in upland than in bottomland forests and under drought than flooded conditions, regardless of habitat of origin. Neutral population differentiation was minimal, suggesting widespread interhabitat migration. Population density, reproductive output, and genetic diversity were all significantly greater in uplands than in bottomlands. These disparities likely result in asymmetric gene flow from uplands to bottomlands. Thus, adaptation to a marginal habitat can be constrained by small populations, limited fitness, and immigration from a benign habitat. Our study highlights the importance of demography and genetic diversity in the evolution of local (mal)adaptation.