WorldWideScience

Sample records for subpixel centroid estimation

  1. Demonstration of biased membrane static figure mapping by optical beam subpixel centroid shift

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, Fabrizio, E-mail: fpinto@jazanu.edu.sa [Laboratory for Quantum Vacuum Applications, Department of Physics, Faculty of Science, Jazan University, P.O. Box 114, Gizan 45142 (Saudi Arabia)

    2016-06-10

The measurement of Casimir forces by means of condenser microphones has been shown to be quite promising since its early introduction almost half a century ago. However, unlike the remarkable progress achieved in characterizing the vibrating membrane in the dynamical case, the accurate determination of the membrane static figure under electrostatic bias remains a challenge. In this paper, we discuss our first data obtained by measuring the centroid shift of an optical beam with subpixel accuracy using a charge-coupled device (CCD), together with an extensive analysis of the noise sources present in the experimental setup.

  2. Ordinal Regression Based Subpixel Shift Estimation for Video Super-Resolution

    Directory of Open Access Journals (Sweden)

    Petrovic Nemanja

    2007-01-01

Full Text Available We present a supervised learning-based approach for subpixel motion estimation which is then used to perform video super-resolution. The novelty of this work is the formulation of the problem of subpixel motion estimation in a ranking framework. The ranking formulation is a variant of the classification and regression formulations, in which the ordering present in the class labels, namely the shift between patches, is explicitly taken into account. Finally, we demonstrate the applicability of our approach by super-resolving synthetically generated images with global subpixel shifts and by enhancing real video frames, accounting for both local integer and subpixel shifts.

  3. Bayesian centroid estimation for motif discovery.

    Science.gov (United States)

    Carvalho, Luis

    2013-01-01

    Biological sequences may contain patterns that signal important biomolecular functions; a classical example is regulation of gene expression by transcription factors that bind to specific patterns in genomic promoter regions. In motif discovery we are given a set of sequences that share a common motif and aim to identify not only the motif composition, but also the binding sites in each sequence of the set. We propose a new centroid estimator that arises from a refined and meaningful loss function for binding site inference. We discuss the main advantages of centroid estimation for motif discovery, including computational convenience, and how its principled derivation offers further insights about the posterior distribution of binding site configurations. We also illustrate, using simulated and real datasets, that the centroid estimator can differ from the traditional maximum a posteriori or maximum likelihood estimators.

  4. Bayesian centroid estimation for motif discovery.

    Directory of Open Access Journals (Sweden)

    Luis Carvalho

    Full Text Available Biological sequences may contain patterns that signal important biomolecular functions; a classical example is regulation of gene expression by transcription factors that bind to specific patterns in genomic promoter regions. In motif discovery we are given a set of sequences that share a common motif and aim to identify not only the motif composition, but also the binding sites in each sequence of the set. We propose a new centroid estimator that arises from a refined and meaningful loss function for binding site inference. We discuss the main advantages of centroid estimation for motif discovery, including computational convenience, and how its principled derivation offers further insights about the posterior distribution of binding site configurations. We also illustrate, using simulated and real datasets, that the centroid estimator can differ from the traditional maximum a posteriori or maximum likelihood estimators.

  5. Comparison of performance of some common Hartmann-Shack centroid estimation methods

    Science.gov (United States)

    Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.

    2016-03-01

The accuracy of the estimation of optical aberrations by measuring the distorted wavefront using a Hartmann-Shack wavefront sensor (HSWS) depends mainly upon the measurement accuracy of the centroid of the focal spot. The most commonly used methods for centroid estimation, such as the brightest-spot centroid, the first-moment centroid, the weighted center of gravity, and the intensity-weighted center of gravity, are generally applied to the entire individual sub-apertures of the lenslet array. However, these centroid estimation processes are sensitive to the influence of reflections, scattered light, and noise, especially when the signal spot area is small compared to the whole sub-aperture area. In this paper, we compare the performance of the commonly used centroiding methods for estimating optical aberrations, with and without some pre-processing steps (thresholding, Gaussian smoothing, and adaptive windowing). As an example we use the aberrations of a human eye model. This is done using raw data collected from a custom-made ophthalmic aberrometer and a model eye emulating myopic and hypermetropic defocus values up to 2 diopters. We show that any simple centroiding algorithm is sufficient for ophthalmic applications in estimating aberrations within the typical clinically acceptable limit of a quarter-diopter margin, when certain pre-processing steps to reduce the impact of external factors are used.
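As an illustration of the simplest of these methods, the following is a minimal sketch of a first-moment (center-of-gravity) centroid with a thresholding pre-processing step; the synthetic spot, threshold fraction, and image size are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def cog_centroid(spot, threshold_frac=0.2):
    """First-moment (center-of-gravity) centroid of a focal-spot image,
    with a simple thresholding pre-processing step to suppress background."""
    img = spot.astype(float)
    # Zero out low-intensity pixels so stray light/noise far from the
    # spot does not pull the centroid.
    img = np.where(img >= threshold_frac * img.max(), img, 0.0)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian spot with its true center at (x, y) = (12.3, 8.7)
y, x = np.mgrid[0:24, 0:24]
spot = np.exp(-((x - 12.3) ** 2 + (y - 8.7) ** 2) / (2 * 2.0 ** 2))
cx, cy = cog_centroid(spot)  # recovers the center well below one pixel
```

On a clean symmetric spot, even this basic estimator is subpixel-accurate; the paper's point is that pre-processing such as the thresholding above is what keeps it that way under reflections and scattered light.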

  6. A stepwise regression tree for nonlinear approximation: applications to estimating subpixel land cover

    Science.gov (United States)

    Huang, C.; Townshend, J.R.G.

    2003-01-01

A stepwise regression tree (SRT) algorithm was developed for approximating complex nonlinear relationships. Based on the regression tree of Breiman et al. (BRT) and a stepwise linear regression (SLR) method, this algorithm improves over SLR in that it can approximate nonlinear relationships, and over BRT in that it gives more realistic predictions. The applicability of this method to estimating subpixel forest cover was demonstrated using three test data sets, on all of which it gave more accurate predictions than SLR and BRT. SRT also generated more compact trees and performed better than, or at least as well as, BRT in all 10 equal forest-proportion intervals ranging from 0 to 100%. This method is appealing for estimating subpixel land cover over large areas.

  7. Doppler Centroid Estimation for Airborne SAR Supported by POS and DEM

    Directory of Open Access Journals (Sweden)

    CHENG Chunquan

    2015-05-01

Full Text Available It is difficult to estimate the Doppler frequency and modulation rate for airborne SAR using the traditional vector method because of unstable flight and complex terrain. In this paper, the impacts of POS, DEM and their errors on airborne SAR Doppler parameters are qualitatively analyzed. An innovative vector method is then presented, based on the range-coplanarity equation, to estimate the Doppler centroid using the POS and DEM as auxiliary data. The effectiveness of the proposed method is validated and analyzed via simulation experiments. The theoretical analysis and experimental results show that the method can estimate the Doppler centroid with high accuracy even in the case of high relief, unstable flight, and large-squint SAR.

  8. Estimating the Doppler centroid of SAR data

    DEFF Research Database (Denmark)

    Madsen, Søren Nørvang

    1989-01-01

After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors, where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found...

  9. Performance evaluation of the spectral centroid downshift method for attenuation estimation.

    Science.gov (United States)

    Samimi, Kayvan; Varghese, Tomy

    2015-05-01

Estimation of frequency-dependent ultrasonic attenuation is an important aspect of tissue characterization. Along with other acoustic parameters studied in quantitative ultrasound, the attenuation coefficient can be used to differentiate normal and pathological tissue. The spectral centroid downshift (CDS) method is one of the most common frequency-domain approaches applied to this problem. In this study, a statistical analysis of this method's performance was carried out based on a parametric model of the signal power spectrum in the presence of electronic noise. The parametric model used for the power spectrum of the received RF data assumes a Gaussian spectral profile for the transmit pulse, and incorporates the effects of attenuation, windowing, and electronic noise. Spectral moments were calculated and used to estimate second-order centroid statistics. A theoretical expression for the variance of a maximum likelihood estimator of the attenuation coefficient was derived in terms of the centroid statistics and other model parameters, such as transmit pulse center frequency and bandwidth, RF data window length, SNR, and number of regression points. Theoretically predicted estimation variances were compared with experimentally estimated variances on RF data sets from both computer-simulated and physical tissue-mimicking phantoms. Scan parameter ranges for this study were electronic SNR from 10 to 70 dB, transmit pulse standard deviation from 0.5 to 4.1 MHz, transmit pulse center frequency from 2 to 8 MHz, and data window length from 3 to 17 mm. Acceptable agreement was observed between theoretical predictions and experimentally estimated values, with differences smaller than 0.05 dB/cm/MHz across the parameter ranges investigated. This model helps predict the best attenuation estimation variance achievable with the CDS method in terms of these scan parameters.

  10. Prediction of RNA secondary structure using generalized centroid estimators.

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Sato, Kengo; Mituyama, Toutai; Asai, Kiyoshi

    2009-02-15

Recent studies have shown that methods for predicting the secondary structures of RNAs on the basis of posterior decoding of the base-pairing probabilities have an advantage in prediction accuracy over the conventionally utilized minimum free energy methods. However, there is room for improvement in the objective functions presented in previous studies, which are maximized in the posterior decoding with respect to the accuracy measures for secondary structures. We propose novel estimators which improve the accuracy of secondary structure prediction of RNAs. The proposed estimators maximize an objective function which is the weighted sum of the expected number of true positives and that of true negatives among the base pairs. The proposed estimators are also improved versions of the ones used in previous works, namely CONTRAfold for secondary structure prediction from a single RNA sequence and McCaskill-MEA for common secondary structure prediction from multiple alignments of RNA sequences. We clarify the relations between the proposed estimators and the estimators presented in previous works, and theoretically show that the previous estimators include additional unnecessary terms in the evaluation measures with respect to accuracy. Furthermore, computational experiments confirm the theoretical analysis by indicating improvement in the empirical accuracy. The proposed estimators represent extensions of the centroid estimators proposed in Ding et al. and Carvalho and Lawrence, and are applicable to a wide variety of problems in bioinformatics. Supporting information and the CentroidFold software are available online at: http://www.ncrna.org/software/centroidfold/.

  11. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    Science.gov (United States)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels, ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels are detected based on maximum linearization index analysis; the classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  12. Diffeomorphic Iterative Centroid Methods for Template Estimation on Large Datasets

    OpenAIRE

Cury, Claire; Glaunès, Joan Alexis; Colliot, Olivier

    2014-01-01

A common approach for the analysis of anatomical variability relies on the estimation of a template representative of the population. The Large Deformation Diffeomorphic Metric Mapping (LDDMM) is an attractive framework for that purpose. However, template estimation using LDDMM is computationally expensive, which is a limitation for the study of large datasets. This paper presents an iterative method which quickly provides a centroid of the population in the shape space. This centr...

  13. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    Ibn-Elhaj E

    2009-01-01

Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint to explore the fundamental limits of image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used a database freely available on the web.

  14. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    E. M. Ismaili Aalaoui

    2009-02-01

Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint to explore the fundamental limits of image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used a database freely available on the web.

  15. Spatial scaling of net primary productivity using subpixel landcover information

    Science.gov (United States)

    Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.

    2008-10-01

Gridding the land surface into coarse homogeneous pixels may introduce important biases into ecosystem model estimates of carbon budget components at local, regional and global scales. These biases result from overlooking the subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases into regional ecological modeling, especially when the modeling is done on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP values obtained from calculations made at coarse spatial resolutions are multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to estimates from a coupled carbon-hydrology model (BEPS-TerrainLab) made at 1-km resolution over a watershed (the Baohe River Basin) located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as its spatial variability.
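The study's actual correction functions are not given in the abstract; the following hypothetical sketch merely illustrates the idea of rescaling a coarse-pixel NPP estimate using subpixel land-cover fractions. The per-cover NPP rates and the ratio-based correction factor are invented for illustration.

```python
# Hypothetical per-cover NPP rates (gC / m^2 / yr); illustration only.
npp_rate = {"forest": 650.0, "grass": 320.0, "crop": 480.0}

def corrected_npp(coarse_npp, fractions, dominant="forest"):
    """Scale a coarse-pixel NPP estimate (computed as if the pixel were
    homogeneous `dominant` cover) by the ratio implied by the subpixel
    land-cover fractions."""
    mixed = sum(f * npp_rate[c] for c, f in fractions.items())
    factor = mixed / npp_rate[dominant]
    return coarse_npp * factor

# A 1-km pixel modeled as pure forest but actually 60% forest, 40% grass:
est = corrected_npp(600.0, {"forest": 0.6, "grass": 0.4})
```

The correction lowers the homogeneous-forest estimate because part of the pixel is lower-productivity grass; a real implementation would derive the multiplier from the ecosystem model's response to each cover type rather than from fixed rates.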

  16. Performance Evaluation of the Spectral Centroid Downshift Method for Attenuation Estimation

    OpenAIRE

    Samimi, Kayvan; Varghese, Tomy

    2015-01-01

Estimation of frequency-dependent ultrasonic attenuation is an important aspect of tissue characterization. Along with other acoustic parameters studied in quantitative ultrasound, the attenuation coefficient can be used to differentiate normal and pathological tissue. The spectral centroid downshift (CDS) method is one of the most common frequency-domain approaches applied to this problem. In this study, a statistical analysis of this method's performance was carried out based on a parametric m...

  17. Estimation bias from using nonlinear Fourier plane correlators for sub-pixel image shift measurement and implications for the binary joint transform correlator

    Science.gov (United States)

    Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.

    2007-09-01

    When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
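A minimal sketch of the quadratic (parabolic) interpolation step discussed above, applied to a synthetic 1-D correlation peak; the Gaussian signal and the 0.3-sample shift are illustrative assumptions, not data from the paper.

```python
import numpy as np

def quadratic_subpixel_peak(corr):
    """Locate the correlation peak to sub-pixel accuracy by fitting a
    parabola through the peak sample and its two neighbors (1-D case)."""
    k = int(np.argmax(corr))
    ym, y0, yp = corr[k - 1], corr[k], corr[k + 1]
    # Vertex of the parabola through the three samples.
    delta = 0.5 * (ym - yp) / (ym - 2 * y0 + yp)
    return k + delta

# Correlate two copies of a smooth signal offset by 0.3 samples.
n = np.arange(64)
a = np.exp(-((n - 32) ** 2) / 18.0)
b = np.exp(-((n - 32.3) ** 2) / 18.0)
corr = np.correlate(b, a, mode="full")   # peak near lag 0.3
shift = quadratic_subpixel_peak(corr) - (len(n) - 1)
```

Because the correlation peak is not exactly parabolic, the recovered shift lands slightly closer to the nearest pixel center than the true 0.3 samples, which is the bias effect the paper analyzes and proposes to remove by modifying the interpolation.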

  18. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    Science.gov (United States)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

Water bodies are fundamental elements of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging, because urban water bodies are mostly small and spectral confusion between water and complex urban features is widespread. The water index (WI) is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing the urban environment at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels by the water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive, iteratively optimized neighboring land pixel, respectively; (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatter plot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then used as the starting point for selecting land-water pixels, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the mixed land-water pixels for water fraction estimation at the subpixel level.
With the assumption that the endmember signature of a target pixel should be more similar to that of adjacent pixels due to spatial dependence, the endmembers of water and land are determined

  19. Subpixel edge estimation with lens aberrations compensation based on the iterative image approximation for high-precision thermal expansion measurements of solids

    Science.gov (United States)

    Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.

    2017-06-01

A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until the simulated image matches the acquired image. A numerical image model is presented, consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by means of conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally effective procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method with a digital micromirror device used to physically simulate an object with known edge geometry is shown. Experimental results for various high-temperature materials within the temperature range of 1000 °C to 2400 °C are presented.

  20. Target Centroid Position Estimation of Phase-Path Volume Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Fengjun Hu

    2016-01-01

Full Text Available To address the problem of easily losing the target when obstacles appear during intelligent robot target tracking, this paper proposes a target tracking algorithm integrating a reduced-dimension optimal Kalman filtering algorithm, based on the phase-path volume integral, with the Camshift algorithm. After analyzing the defects of the Camshift algorithm and comparing its performance with the SIFT and Mean Shift algorithms, Kalman filtering is used for fusion optimization to address those defects. Then, to counter the increased amount of calculation in the integrated algorithm, the dimension is reduced by using the phase-path volume integral instead of the Gaussian integral in the Kalman algorithm, reducing the number of sampling points in the filtering process without affecting the operational precision of the original algorithm. Finally, the target centroid position from the Camshift iteration is set as the observation value of the improved Kalman filtering algorithm to correct the predicted value, yielding an optimal estimate of the target centroid position and maintaining target tracking so that the robot can understand the environmental scene and react correctly and in time to changes. Experiments show that the improved algorithm proposed in this paper performs well in target tracking with obstructions and reduces the computational complexity of the algorithm through dimension reduction.

  1. An improved Q estimation approach: the weighted centroid frequency shift method

    Science.gov (United States)

    Li, Jingnan; Wang, Shangxu; Yang, Dengfeng; Dong, Chunhui; Tao, Yonghui; Zhou, Yatao

    2016-06-01

Seismic wave propagation in subsurface media suffers from absorption, which can be quantified by the quality factor Q. Accurate estimation of the Q factor is of great importance for the resolution enhancement of seismic data, precise imaging and interpretation, and reservoir prediction and characterization. The centroid frequency shift method (CFS) is currently one of the most commonly used Q estimation methods. However, for seismic data that contain noise, the accuracy and stability of Q extracted using CFS depend on the choice of frequency band. In order to reduce the influence of frequency band choices and obtain Q with greater precision and robustness, we present an improved CFS Q measurement approach, the weighted CFS method (WCFS), which incorporates a Gaussian weighting coefficient into the calculation procedure of the conventional CFS. The basic idea is to enhance the proportion of advantageous frequencies in the amplitude spectrum and reduce the weight of disadvantageous frequencies. In this novel method, we first construct a Gauss function using the centroid frequency and variance of the reference wavelet. Then we employ it as the weighting coefficient for the amplitude spectrum of the original signal. Finally, the conventional CFS is applied to the weighted amplitude spectrum to extract the Q factor. Numerical tests on noise-free synthetic data demonstrate that the WCFS is feasible and efficient, and produces more accurate results than the conventional CFS. Tests on noisy synthetic data indicate that the new method has better anti-noise capability than the CFS. The application to field vertical seismic profile (VSP) data further demonstrates its validity.
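For reference, the conventional (unweighted) CFS that the WCFS builds on can be sketched as follows, under the usual Gaussian-spectrum assumption Q = pi * t * sigma_r^2 / (f_r - f_s). The spectrum parameters and travel time below are illustrative, and the Gaussian weighting step of the WCFS itself is not shown.

```python
import numpy as np

def centroid_and_variance(freqs, amp_spec):
    """Spectral centroid and variance of an amplitude spectrum."""
    w = amp_spec / amp_spec.sum()
    fc = (freqs * w).sum()
    var = (((freqs - fc) ** 2) * w).sum()
    return fc, var

def q_cfs(freqs, ref_spec, rec_spec, travel_time):
    """Conventional centroid-frequency-shift Q estimate, assuming a
    Gaussian spectrum: Q = pi * t * sigma_r^2 / (f_r - f_s)."""
    fr, var_r = centroid_and_variance(freqs, ref_spec)
    fs, _ = centroid_and_variance(freqs, rec_spec)
    return np.pi * travel_time * var_r / (fr - fs)

# Synthetic check: Gaussian reference spectrum attenuated with Q_true = 50.
f = np.linspace(0.0, 200.0, 2001)        # Hz
f0, sigma = 60.0, 15.0
t = 0.5                                  # travel time, s
Q_true = 50.0
ref = np.exp(-((f - f0) ** 2) / (2 * sigma ** 2))
rec = ref * np.exp(-np.pi * f * t / Q_true)   # absorption model
q_est = q_cfs(f, ref, rec, t)
```

For a Gaussian spectrum the attenuated spectrum is again Gaussian with its centroid shifted down by pi * t * sigma^2 / Q, so the estimator recovers Q almost exactly; the WCFS modifies only the spectrum fed into this computation.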

  2. Noise in position measurement by centroid calculation

    International Nuclear Information System (INIS)

    Volkov, P.

    1996-01-01

The position of a particle trajectory in a gaseous (or semiconductor) detector can be measured by calculating the centroid of the induced charge on the cathode plane. The charge amplifiers attached to each cathode strip introduce noise which is added to the signal. This noise broadens the position resolution line. Our article gives an analytical tool to estimate the resolution broadening due to the noise per strip and the number of strips involved in the centroid calculation. It is shown that the position error increases faster than the square root of the number of strips involved. We also consider the consequence of added interstrip capacitors, intended to diminish the differential nonlinearity. It is shown that the position error increases more slowly than linearly with the interstrip capacitance, due to the cancellation of correlated noise. The estimate we give can be applied to position-broadening calculations other than centroid finding. (orig.)
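The claim that the position error grows faster than the square root of the number of strips can be checked with a small Monte Carlo sketch; the flat charge profile and the noise level below are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_error_std(n_strips, noise_sigma=0.05, trials=20000):
    """Monte Carlo std of the centroid position when i.i.d. amplifier
    noise is added to a fixed charge profile spread over n_strips strips."""
    strips = np.arange(n_strips) - (n_strips - 1) / 2.0  # strip positions
    q = np.full(n_strips, 1.0 / n_strips)  # flat profile, unit total charge
    noise = rng.normal(0.0, noise_sigma, size=(trials, n_strips))
    signal = q + noise
    pos = (signal * strips).sum(axis=1) / signal.sum(axis=1)
    return pos.std()

err4 = centroid_error_std(4)
err8 = centroid_error_std(8)  # grows roughly as n**1.5, faster than sqrt(n)
```

For this flat profile the noise term contributes variance proportional to the sum of squared strip positions, which scales as n^3, so the error std scales roughly as n^1.5; doubling the strip count here roughly triples the error rather than multiplying it by sqrt(2).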

  3. Improved measurements of RNA structure conservation with generalized centroid estimators

    Directory of Open Access Journals (Sweden)

Yohei Okada

    2011-08-01

Full Text Available Identification of non-protein-coding RNAs (ncRNAs) in genomes is a crucial task for not only molecular cell biology but also bioinformatics. Secondary structures of ncRNAs are employed as a key feature of ncRNA analysis since the biological functions of ncRNAs are deeply related to their secondary structures. Although the minimum free energy (MFE) structure of an RNA sequence is regarded as the most stable structure, MFE alone could not be an appropriate measure for identifying ncRNAs since the free energy is heavily biased by the nucleotide composition. Therefore, instead of MFE itself, several alternative measures for identifying ncRNAs have been proposed, such as the structure conservation index (SCI) and the base pair distance (BPD), both of which employ MFE structures. However, these measurements are unfortunately not suitable for identifying ncRNAs in some cases, including genome-wide searches, and incur a high false discovery rate. In this study, we propose improved measurements based on SCI and BPD, applying generalized centroid estimators to incorporate robustness against low-quality multiple alignments. Our experiments show that our proposed methods achieve higher accuracy than the original SCI and BPD for not only human-curated structural alignments but also low-quality alignments produced by CLUSTAL W. Furthermore, the centroid-based SCI on CLUSTAL W alignments is more accurate than or comparable with that of the original SCI on structural alignments generated with RAF, a high-quality structural aligner, which requires two-fold more computational time on average. We conclude that our methods are more suitable than the original SCI and BPD for genome-wide alignments, which are of low quality from the point of view of secondary structures.

  4. Correction of sub-pixel topographical effects on land surface albedo retrieved from geostationary satellite (FengYun-2D) observations

    International Nuclear Information System (INIS)

    Roupioz, L; Nerry, F; Jia, L; Menenti, M

    2014-01-01

The Qinghai-Tibetan Plateau is characterised by very strong relief, which affects albedo retrieval from satellite data. The objective of this study is to highlight the effects of sub-pixel topography and to account for those effects when retrieving land surface albedo from geostationary satellite FengYun-2D (FY-2D) data with 1.25 km spatial resolution, using the high spatial resolution (30 m) Digital Elevation Model (DEM) from ASTER. The methodology integrates the effects of sub-pixel topography on the estimation of the total irradiance received at the surface, allowing the computation of the topographically corrected surface reflectance. Furthermore, surface albedo is estimated by applying the parametric BRDF (Bidirectional Reflectance Distribution Function) model called RPV (Rahman-Pinty-Verstraete) to the terrain-corrected surface reflectance. The results, evaluated against ground measurements collected over several experimental sites on the Qinghai-Tibetan Plateau, document the advantage of integrating sub-pixel topography effects into the land surface reflectance at 1 km resolution to estimate the land surface albedo. The results obtained using sub-pixel topographic correction are compared with those obtained using pixel-level topographic correction. The preliminary results imply that, in highly rugged terrain, the sub-pixel topography correction method gives more accurate results; the pixel-level correction tends to overestimate surface albedo.

  5. Centroid crossing

    International Nuclear Information System (INIS)

    Swift, G.

    1990-01-01

    This paper presents an algorithm for finding peaks in data spectra. It is based on calculating a moving centroid across the spectrum and picking off the points between which the calculated centroid crosses the channel number. Interpolation can then yield a more precise peak location. This algorithm can be implemented very efficiently requiring about one addition, subtraction, multiplication, and division operation per data point. With integer data and a centroid window equal to a power of two (so that the division can be done with shifts), the algorithm is particularly suited to efficient machine language implementation. With suitable adjustments (involving only little overhead except at suspected peaks), it is possible to minimize either false peak location or missing good peaks. Extending the method to more dimensions is straightforward although interpolating is more difficult. The algorithm has been used on a variety of nuclear data spectra with great success
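The moving-centroid idea described above can be sketched in a few lines. This is a minimal illustration under assumed details (window width, linear interpolation of the zero crossing), not the paper's machine-language implementation:

```python
import numpy as np

def centroid_crossing_peaks(spectrum, window=8):
    """Locate peaks where the moving centroid crosses the channel number.

    For each window position, compute the count-weighted centroid of the
    channels inside the window.  Left of a peak the centroid leads the
    window centre; right of it the centroid lags, so the sign change of
    (centroid - centre) brackets the peak.  Linear interpolation between
    the two bracketing channels refines the location to sub-channel
    precision.
    """
    spectrum = np.asarray(spectrum, dtype=float)
    half = window // 2
    peaks = []
    prev_diff = None
    for c in range(half, len(spectrum) - half):
        sl = spectrum[c - half:c + half + 1]
        total = sl.sum()
        if total == 0:
            prev_diff = None
            continue
        centroid = np.dot(np.arange(c - half, c + half + 1), sl) / total
        diff = centroid - c
        if prev_diff is not None and prev_diff > 0 and diff <= 0:
            # interpolate the zero crossing between channels c-1 and c
            frac = prev_diff / (prev_diff - diff)
            peaks.append((c - 1) + frac)
        prev_diff = diff
    return peaks
```

With a power-of-two window, the single division per point can indeed be replaced by a shift on integer data, as the abstract notes.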

  6. Centroids of effective interactions from measured single-particle energies: An application

    International Nuclear Information System (INIS)

    Cole, B.J.

    1990-01-01

    Centroids of the effective nucleon-nucleon interaction for the mass region A=28--64 are extracted directly from experimental single-particle spectra, by comparing single-particle energies relative to different cores. Uncertainties in the centroids are estimated at approximately 100 keV, except in cases of exceptional fragmentation of the single-particle strength. The use of a large number of inert cores allows the dependence of the interaction on mass or model space to be investigated. The method permits accurate empirical modifications to be made to realistic interactions calculated from bare nucleon-nucleon potentials, which are known to possess defective centroids in many cases. In addition, the centroids can be used as input to the more sophisticated fitting procedures that are employed to produce matrix elements of the effective interaction

  7. Ambiguity Of Doppler Centroid In Synthetic-Aperture Radar

    Science.gov (United States)

    Chang, Chi-Yung; Curlander, John C.

    1991-01-01

    The paper discusses the performance of two algorithms for resolving the ambiguity in the estimated Doppler centroid frequency of echoes in synthetic-aperture radar: one based on the range-cross-correlation technique, the other on the multiple-pulse-repetition-frequency technique.

  8. Major shell centroids in the symplectic collective model

    International Nuclear Information System (INIS)

    Draayer, J.P.; Rosensteel, G.; Tulane Univ., New Orleans, LA

    1983-01-01

    Analytic expressions are given for the major shell centroids of the collective potential V(β, γ) and the shape observable β² in the Sp(3,R) symplectic model. The tools of statistical spectroscopy are shown to be useful, firstly, in translating a requirement that the underlying shell structure be preserved into constraints on the parameters of the collective potential and, secondly, in giving a reasonable estimate for a truncation of the infinite-dimensional symplectic model space from experimental B(E2) transition strengths. Results based on the centroid information are shown to compare favorably with results from exact calculations in the case of ²⁰Ne. (orig.)

  9. Combined centroid-envelope dynamics of intense, magnetically focused charged beams surrounded by conducting walls

    International Nuclear Information System (INIS)

    Fiuza, K.; Rizzato, F.B.; Pakter, R.

    2006-01-01

    In this paper we analyze the combined envelope-centroid dynamics of magnetically focused high-intensity charged beams surrounded by conducting walls. Similar to the case where conducting walls are absent, it is shown that the envelope and centroid dynamics decouple from each other. Mismatched envelopes still decay into equilibrium with simultaneous emittance growth, but the centroid keeps oscillating with no appreciable energy loss. Some estimates are performed to analytically obtain characteristics of halo formation seen in the full simulations

  10. Subpixel edge localization with reduced uncertainty by violating the Nyquist criterion

    Science.gov (United States)

    Heidingsfelder, Philipp; Gao, Jun; Wang, Kun; Ott, Peter

    2014-12-01

    In this contribution, the extent to which the Nyquist criterion can be violated in optical imaging systems with a digital sensor, e.g., a digital microscope, is investigated. In detail, we analyze the subpixel uncertainty of the detected position of a step edge, the edge of a stripe with a varying width, and that of a periodic rectangular pattern for varying pixel pitches of the sensor, thus also in aliased conditions. The analysis includes the investigation of different algorithms of edge localization based on direct fitting or based on the derivative of the edge profile, such as the common centroid method. In addition to the systematic error of these algorithms, the influence of the photon noise (PN) is included in the investigation. A simplified closed form solution for the uncertainty of the edge position caused by the PN is derived. The presented results show that, in the vast majority of cases, the pixel pitch can exceed the Nyquist sampling distance by about 50% without an increase of the uncertainty of edge localization. This allows one to increase the field-of-view without increasing the resolution of the sensor and to decrease the size of the setup by reducing the magnification. Experimental results confirm the simulation results.
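The centroid method mentioned above, applied to the derivative of the edge profile, can be sketched as follows. This is a simplified illustration only; the sampling model and noise analysis of the paper are not reproduced:

```python
import numpy as np

def edge_position_centroid(profile):
    """Sub-pixel step-edge location as the centroid of the (absolute)
    derivative of the edge profile.  For a blurred step edge the derivative
    is a single peak, and its centroid is a sub-pixel estimate of the edge
    position."""
    g = np.abs(np.diff(np.asarray(profile, dtype=float)))
    x = np.arange(len(g)) + 0.5   # finite differences sit between pixels
    return float((x * g).sum() / g.sum())
```

On a noise-free sigmoid edge this recovers the true edge position to well under a hundredth of a pixel; photon noise then adds the uncertainty quantified in the paper.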

  11. 2D Sub-Pixel Disparity Measurement Using QPEC / Medicis

    Directory of Open Access Journals (Sweden)

    M. Cournet

    2016-06-01

    Full Text Available In the frame of its earth observation missions, CNES created a library called QPEC and one of its launchers, called Medicis. QPEC / Medicis is a sub-pixel two-dimensional stereo matching algorithm that works on an image pair. This tool is a block matching algorithm, which means that it is based on a local method. Moreover, it does not regularize the results found. It proposes several matching costs, such as the Zero mean Normalised Cross-Correlation or statistical measures (the Mutual Information being one of them), and different match validation flags. QPEC / Medicis is able to compute a two-dimensional dense disparity map with sub-pixel precision. Hence, it is more versatile than disparity estimation methods found in the computer vision literature, which often assume an epipolar geometry. CNES uses Medicis, among other applications, during the in-orbit image quality commissioning of earth observation satellites. For instance, the Pléiades-HR 1A & 1B and the Sentinel-2 geometric calibrations are based on this block matching algorithm. Over the years, it has become a common tool in ground segments for in-flight monitoring purposes. For these two kinds of applications, the two-dimensional search and the local sub-pixel measure without regularization can be essential. This tool is also used to generate automatic digital elevation models, for which it was not initially dedicated. This paper deals with the QPEC / Medicis algorithm. It also presents some of its CNES applications (in-orbit commissioning, in-flight monitoring or digital elevation model generation). Medicis software is distributed outside the CNES as well. This paper finally describes some of these external applications using Medicis, such as ground displacement measurement or intra-oral scanning in the dental domain.
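The generic technique the abstract describes, block matching with a ZNCC cost followed by sub-pixel refinement, can be sketched as below. This is not the QPEC / Medicis code; the 1-D search, block size, and 3-point parabola fit are assumptions made for illustration:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def subpixel_disparity(left, right, x, block=5, search=10):
    """Match a block around column x of `left` against `right` over a range
    of integer shifts, then refine the best shift with a parabola fit
    through the three ZNCC scores around the maximum."""
    half = block // 2
    ref = left[x - half:x + half + 1]
    shifts = range(-search, search + 1)
    scores = [zncc(ref, right[x + s - half:x + s + half + 1]) for s in shifts]
    i = int(np.argmax(scores))
    if 0 < i < len(scores) - 1:
        c_m, c_0, c_p = scores[i - 1], scores[i], scores[i + 1]
        denom = c_m - 2 * c_0 + c_p
        delta = 0.5 * (c_m - c_p) / denom if denom != 0 else 0.0
    else:
        delta = 0.0
    return shifts[i] + delta
```

A full 2-D matcher repeats the same cost evaluation over a 2-D shift grid and fits a 2-D quadratic around the best score.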

  12. An Adaptive Connectivity-based Centroid Algorithm for Node Positioning in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Aries Pratiarso

    2015-06-01

    Full Text Available In wireless sensor network applications, the position of nodes is randomly distributed following the contour of the observation area. A simple solution without any measurement tools is provided by the range-free method. However, this method yields only a coarse estimate of the nodes' positions. In this paper, we propose the Adaptive Connectivity-based Centroid (ACC) algorithm. This algorithm is a combination of Centroid, as a range-free based algorithm, and a hop-based connectivity algorithm. Nodes can estimate their own position based on the connectivity level between them and their reference nodes. Each node divides its communication range into several regions, each of which has a certain weight that depends on the received signal strength. The weighted value is used to obtain the estimated position of nodes. Simulation results show that the proposed algorithm has up to 3 meters of position error on a 100x100 square meter observation area, and up to 3 hop counts for an 80 meter communication range. The proposed algorithm achieves an average positioning error up to 10 meters better than the Weighted Centroid algorithm. Keywords: adaptive, connectivity, centroid, range-free.
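The weighted-centroid position estimate that both algorithms build on can be sketched as follows; the anchor positions and the RSSI-derived weights are hypothetical inputs, and the adaptive region weighting of the ACC algorithm itself is not reproduced:

```python
def weighted_centroid(anchors, weights):
    """Estimate a node position as the weight-normalised centroid of its
    reference (anchor) node positions.  Weights would typically be derived
    from RSSI: a stronger signal suggests a closer anchor, hence a larger
    weight."""
    wsum = sum(weights)
    x = sum(w * ax for w, (ax, ay) in zip(weights, anchors)) / wsum
    y = sum(w * ay for w, (ax, ay) in zip(weights, anchors)) / wsum
    return x, y
```

With equal weights this degenerates to the plain Centroid estimate; unequal weights pull the estimate toward the strongest anchors.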

  13. Error diffusion applied to the manipulation of liquid-crystal display subpixels

    Science.gov (United States)

    Dallas, William J.; Fan, Jiahua; Roehrig, Hans; Krupinski, Elizabeth A.

    2004-05-01

    Flat-panel displays based on liquid crystal technology are becoming widely used in the medical imaging arena. Despite the impressive capabilities of presently-existing panels, some medical images push their boundaries. We are working with mammograms that contain up to 4800 x 6400 14-bit pixels. Stated differently, these images contain 30 mega-pixels each. In the standard environment for film viewing, the mammograms are hung four-up, i.e. four images are located side by side. Because many of the LCD panels used for monochrome display of medical images are based on color models, the pixels of the panels are divided into sub-pixels. These sub-pixels vary in number and in their degree of independence. Manufacturers have used both spatial and temporal modulation of these sub-pixels to improve the quality of images presented by the monitors. In this presentation we show how the sub-pixel structure of some present and future displays can be used to attain higher spatial resolution than the full-pixel resolution specification would suggest, while also providing increased contrast resolution. The error diffusion methods we discuss provide a natural way of controlling sub-pixels and implementing trade-offs. In smooth regions of the image, contrast resolution can be maximized. In rapidly-varying regions of the image, spatial resolution can be favored.

  14. The Centroid of a Lie Triple Algebra

    Directory of Open Access Journals (Sweden)

    Xiaohong Liu

    2013-01-01

    Full Text Available General results on the centroids of Lie triple algebras are developed. Centroids of the tensor product of a Lie triple algebra and a unitary commutative associative algebra are studied. Furthermore, the centroid of the tensor product of a simple Lie triple algebra and a polynomial ring is completely determined.

  15. Uncertainty Quantification in Earthquake Source Characterization with Probabilistic Centroid Moment Tensor Inversion

    Science.gov (United States)

    Dettmer, J.; Benavente, R. F.; Cummins, P. R.

    2017-12-01

    This work considers probabilistic, non-linear centroid moment tensor inversion of data from earthquakes at teleseismic distances. The moment tensor is treated as deviatoric and the centroid location is parametrized with fully unknown latitude, longitude, depth and time delay. The inverse problem is treated as fully non-linear in a Bayesian framework and the posterior density is estimated with interacting Markov chain Monte Carlo methods, which are implemented in parallel and allow for chain interaction. The source mechanism and location, including uncertainties, are fully described by the posterior probability density, and complex trade-offs between various metrics are studied. These include the percentage of double-couple component as well as fault orientation, and the probabilistic results are compared to results from earthquake catalogs. Additional focus is on the analysis of complex events which are commonly not well described by a single point source. These events are studied by jointly inverting for multiple centroid moment tensor solutions. The optimal number of sources is estimated by the Bayesian information criterion to ensure parsimonious solutions. [Supported by NSERC.]

  16. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Purpose: Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb - Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very large, which is impractical and results in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb - Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1 - (0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128*128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps.
    Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01 ± 0.00) as Rp increased
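The ROI-based count-centroid calculation described in the Methods can be sketched as follows. This is a simplified illustration on a 2-D array; acquisition details and the pixel-size fit itself are omitted:

```python
import numpy as np

def roi_count_centroid(image, fwhm):
    """Count-centroid of a point source inside a circular ROI.

    Following the scheme in the abstract: centre the ROI on the
    maximum-count pixel and set its diameter D = 6*R (R = FWHM of the
    point-source count distribution), which captures
    K = 1 - 0.5**(D/R) = 98.4% of the source counts while excluding
    most of the background.  Returns (x, y) of the centroid.
    """
    cy, cx = np.unravel_index(np.argmax(image), image.shape)
    radius = 3.0 * fwhm                     # D/2 = 3*R
    ys, xs = np.indices(image.shape)
    mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
    counts = np.where(mask, image, 0.0)
    total = counts.sum()
    return (xs * counts).sum() / total, (ys * counts).sum() / total
```

Because the ROI is centred near the source, any residual background inside it is roughly symmetric about the source and contributes little bias, which is the point of the method.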

  17. Peak-locking centroid bias in Shack-Hartmann wavefront sensing

    Science.gov (United States)

    Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.

    2018-05-01

    Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions proposed. However, these solutions allow only partial bias correction. To date, no systematic study of the bias error has been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, the centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7 to values of ≲ 0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
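The peak-locking effect and its antisymmetric bias can be reproduced with a simple 1-D experiment: a 3-point parabola fit on a sampled Gaussian spot. This illustrates the phenomenon the paper exploits, not its correction scheme:

```python
import numpy as np

def parabola_peak(y):
    """Sub-pixel peak position from a 3-point parabola fit around argmax."""
    i = int(np.argmax(y))
    if i == 0 or i == len(y) - 1:
        return float(i)
    denom = y[i - 1] - 2 * y[i] + y[i + 1]
    return i + 0.5 * (y[i - 1] - y[i + 1]) / denom if denom != 0 else float(i)

def bias_curve(sigma=1.0, n=21):
    """Peak-locking bias of the parabola estimator for a sampled Gaussian
    spot, as a function of the true sub-pixel offset in [-0.5, 0.5]."""
    centre = n // 2
    offsets = np.linspace(-0.5, 0.5, 21)
    biases = []
    for d in offsets:
        x = np.arange(n)
        y = np.exp(-0.5 * ((x - centre - d) / sigma) ** 2)
        biases.append(parabola_peak(y) - (centre + d))
    return np.array(biases)
```

The resulting bias curve is odd in the sub-pixel offset (bias(d) = -bias(-d)), which is exactly the antisymmetric response the proposed correction relies on.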

  18. Subpixel level mapping of remotely sensed image using colorimetry

    Directory of Open Access Journals (Sweden)

    M. Suresh

    2018-04-01

    Full Text Available The problem of extracting the proportions of classes present within a pixel has been a challenge for researchers, and numerous methodologies have already been developed for it, yet saturation is far ahead, since the methods accounting for these mixed classes are still not perfect; they can never be perfect until one can speak of a one-to-one correspondence between each pixel and ground data, which is practically impossible. This paper takes a step towards a new method for finding mixed-class proportions in a pixel on the basis of the mixing property of colors as per colorimetry. The methodology involves locating the class color of a mixed pixel on the chromaticity diagram and then using contextual information, mainly the location of neighboring pixels on the chromaticity diagram, to estimate the proportion of classes in the mixed pixel. The resampling method is also more accurate when accounting for sharp and exact boundaries: with the use of contextual information, the resampled image contains only the colors which really exist. The process simply computes the class fractions and then the number of pixels of each color, multiplying each fraction by the total number of pixels into which one pixel is split, based on contextual information. Keywords: Subpixel classification, Remote sensing imagery, Colorimetric color space, Sampling and subpixel mapping

  19. Centroid motion in periodically focused beams

    International Nuclear Information System (INIS)

    Moraes, J.S.; Pakter, R.; Rizzato, F.B.

    2005-01-01

    The role of the centroid dynamics in the transport of periodically focused particle beams is investigated. A Kapchinskij-Vladimirskij equilibrium distribution for an off-axis beam is derived. It is shown that centroid and envelope dynamics are uncoupled and that unstable regions for the centroid dynamics overlap with previously stable regions for the envelope dynamics alone. Multiparticle simulations validate the findings. The effects of a conducting pipe encapsulating the beam are also investigated. It is shown that the charge induced at the pipe may generate chaotic orbits which can be detrimental to the adequate functioning of the transport mechanism

  20. Centroid finding method for position-sensitive detectors

    International Nuclear Information System (INIS)

    Radeka, V.; Boie, R.A.

    1979-10-01

    A new centroid finding method for all detectors where the signal charge is collected or induced on strips or wires, or on subdivided resistive electrodes, is presented. The centroid of charge is determined by convolution of the sequentially switched outputs from these subdivisions or from the strips with a linear centroid finding filter. The position line width is inversely proportional to N^(3/2), where N is the number of subdivisions

  1. Centroid finding method for position-sensitive detectors

    International Nuclear Information System (INIS)

    Radeka, V.; Boie, R.A.

    1980-01-01

    A new centroid finding method for all detectors where the signal charge is collected or induced on strips or wires, or on subdivided resistive electrodes, is presented. The centroid of charge is determined by convolution of the sequentially switched outputs from these subdivisions or from the strips with a linear centroid finding filter. The position line width is inversely proportional to N^(3/2), where N is the number of subdivisions. (orig.)

  2. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb - Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very large, which is impractical and results in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb - Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1 - (0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128 x 128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps.
Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean

  3. Star point centroid algorithm based on background forecast

    Science.gov (United States)

    Wang, Jin; Zhao, Rujin; Zhu, Nan

    2014-09-01

    The calculation of the star point centroid is a key step in reducing star tracker measurement error. A star map photographed by an APS detector includes several noise sources which have a great impact on the accuracy of the star point centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecast is presented in this paper. Experiments prove the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also does not need calibration data memory. This algorithm has been applied successfully in a certain star tracker.

  4. Networks and centroid metrics for understanding football

    African Journals Online (AJOL)

    Gonçalo Dias

    games. However, it seems that the centroid metric, supported only by the position of players in the field ...... the strategy adopted by the coach (Gama et al., 2014). ... centroid distance as measures of team's tactical performance in youth football.

  5. Transverse centroid oscillations in solenoidally focused beam transport lattices

    International Nuclear Information System (INIS)

    Lund, Steven M.; Wootton, Christopher J.; Lee, Edward P.

    2009-01-01

    Transverse centroid oscillations are analyzed for a beam in a solenoid transport lattice. Linear equations of motion are derived that describe small-amplitude centroid oscillations induced by displacement and rotational misalignments of the focusing solenoids in the transport lattice, dipole steering elements, and initial centroid offset errors. These equations are analyzed in a local rotating Larmor frame to derive complex-variable 'alignment functions' and 'bending functions' that efficiently describe the characteristics of the centroid oscillations induced by both mechanical misalignments of the solenoids and dipole steering elements. The alignment and bending functions depend only on the properties of the ideal lattice in the absence of errors and steering, and have associated expansion amplitudes set by the misalignments and steering fields, respectively. Applications of this formulation are presented for statistical analysis of centroid oscillations, calculation of actual lattice misalignments from centroid measurements, and optimal beam steering.

  6. Design of interpolation functions for subpixel-accuracy stereo-vision systems.

    Science.gov (United States)

    Haller, Istvan; Nedevschi, Sergiu

    2012-02-01

    Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of the stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions. © 2011 IEEE

  7. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit with commonly-used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. The concept presented in this paper not only gives a useful framework to design MEA-based estimators but is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
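One member of this estimator class, the γ-centroid estimator, reduces to a simple per-bit threshold on posterior marginal probabilities when the bits are unconstrained; in structured problems (e.g. RNA secondary structure) the same threshold is applied inside a dynamic program. A minimal sketch of the unconstrained case, assuming the marginals are given:

```python
def gamma_centroid(marginals, gamma=1.0):
    """Gamma-centroid estimator on a high-dimensional binary space.

    Given posterior marginal probabilities p_i that bit i equals 1, the
    estimator predicts 1 wherever p_i > 1/(gamma + 1).  gamma = 1 recovers
    the ordinary centroid estimator (threshold 0.5); larger gamma lowers
    the threshold, trading PPV for sensitivity in line with accuracy
    measures such as the F-score.
    """
    threshold = 1.0 / (gamma + 1.0)
    return [1 if p > threshold else 0 for p in marginals]
```

Choosing γ thus tunes the estimator to the accuracy measure of interest, which is the point of the generalized framework.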

  8. Analysis of the position resolution in centroid measurements in MWPC

    International Nuclear Information System (INIS)

    Gatti, E.; Longoni, A.

    1981-01-01

    Resolution limits in avalanche localization along the anode wires of an MWPC with cathodes connected by resistors and equally spaced amplifiers are evaluated. A simple weighted-centroid method and a highly linear method based on a linear centroid finding filter are considered. The contributions to the variance of the estimator of the avalanche position, due to the series noise of the amplifiers and to the thermal noise of the resistive line, are separately calculated and compared. A comparison is made with the resolution of the MWPC with isolated cathodes. The calculations are performed with a distributed model of the diffusive line formed by the cathodes and the resistors. A comparison is also made with the results obtained with a simple lumped model of the diffusive line. A number of graphs useful in determining the best parameters of an MWPC, with a specified position and time resolution, are given. It has been found that, for short resolution times, an MWPC with cathodes connected by resistors presents better resolution (lower variance of the estimator of the avalanche position) than an MWPC with isolated cathodes. Conversely, for long resolution times, the variance of the estimator of the avalanche position is lower in an MWPC with isolated cathodes. (orig.)

  9. Sub-pixel estimation of tree cover and bare surface densities using regression tree analysis

    Directory of Open Access Journals (Sweden)

    Carlos Augusto Zangrando Toneli

    2011-09-01

    Full Text Available Sub-pixel analysis is capable of generating continuous fields, which represent the spatial variability of certain thematic classes. The aim of this work was to develop numerical models to represent the variability of tree cover and bare surfaces within the study area. This research was conducted in the riparian buffer within a watershed of the São Francisco River in the North of Minas Gerais, Brazil. IKONOS and Landsat TM imagery were used with the GUIDE algorithm to construct the models. The results were two index images derived with regression trees for the entire study area, one representing tree cover and the other representing bare surface. The use of non-parametric and non-linear regression tree models presented satisfactory results to characterize wetland, deciduous and savanna patterns of forest formation.

  10. Thorough statistical comparison of machine learning regression models and their ensembles for sub-pixel imperviousness and imperviousness change mapping

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2017-12-01

    Full Text Available We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R2. These measures were also used for the assessment of imperviousness change intensity estimations. The applicability for detection of relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results proved that the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models, or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detection of subtle changes of imperviousness inside the pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels if the relevant change threshold was set at one or three percent. Also, for a five percent change threshold, most of the algorithms did not ensure that the accuracy of the change map is higher than the accuracy of a random classifier. For the threshold of relevant change set at ten percent, all approaches performed satisfactorily.

  11. Thorough statistical comparison of machine learning regression models and their ensembles for sub-pixel imperviousness and imperviousness change mapping

    Science.gov (United States)

    Drzewiecki, Wojciech

    2017-12-01

We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R2. These measures were also used for the assessment of imperviousness change intensity estimations. The applicability for detection of relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results proved that the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detection of subtle changes of imperviousness inside the pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels when the relevant change threshold was set at one or three percent. Also, for the five percent change threshold most of the algorithms did not ensure that the accuracy of the change map is higher than the accuracy of a random classifier. For a relevant change threshold of ten percent, all approaches performed satisfactorily.
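The accuracy measures named in the two records above (RMSE, MAE and R2) are simple to state precisely; a minimal numpy sketch, using made-up sub-pixel imperviousness fractions rather than the study's data:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAE and R^2 between reference and predicted fractions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err ** 2))          # root mean squared error
    mae = np.mean(np.abs(err))                 # mean absolute error
    ss_res = np.sum(err ** 2)                  # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                 # coefficient of determination
    return rmse, mae, r2

# Hypothetical per-pixel imperviousness fractions (0..1), illustrative only.
truth = [0.10, 0.40, 0.75, 0.90]
pred = [0.15, 0.35, 0.80, 0.85]
rmse, mae, r2 = regression_metrics(truth, pred)
```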

  12. Hybridized centroid technique for 3D Molodensky-Badekas ...

    African Journals Online (AJOL)

    In view of this, the present study developed and tested two new hybrid centroid techniques known as the harmonic-quadratic mean and arithmetic-quadratic mean centroids. The proposed hybrid approaches were compared with the geometric mean, harmonic mean, median, quadratic mean and arithmetic mean. In addition ...

  13. Optimisation of centroiding algorithms for photon event counting imaging

    International Nuclear Information System (INIS)

    Suhling, K.; Airey, R.W.; Morgan, B.L.

    1999-01-01

    Approaches to photon event counting imaging in which the output events of an image intensifier are located using a centroiding technique have long been plagued by fixed pattern noise in which a grid of dimensions similar to those of the CCD pixels is superimposed on the image. This is caused by a mismatch between the photon event shape and the centroiding algorithm. We have used hyperbolic cosine, Gaussian, Lorentzian, parabolic as well as 3-, 5-, and 7-point centre of gravity algorithms, and hybrids thereof, to assess means of minimising this fixed pattern noise. We show that fixed pattern noise generated by the widely used centre of gravity centroiding is due to intrinsic features of the algorithm. Our results confirm that the recently proposed use of Gaussian centroiding does indeed show a significant reduction of fixed pattern noise compared to centre of gravity centroiding (Michel et al., Mon. Not. R. Astron. Soc. 292 (1997) 611-620). However, the disadvantage of a Gaussian algorithm is a centroiding failure for small pulses, caused by a division by zero, which leads to a loss of detective quantum efficiency (DQE) and to small amounts of residual fixed pattern noise. Using both real data from an image intensifier system employing a progressive scan camera, framegrabber and PC, and also synthetic data from Monte-Carlo simulations, we find that hybrid centroiding algorithms can reduce the fixed pattern noise without loss of resolution or loss of DQE. Imaging a test pattern to assess the features of the different algorithms shows that a hybrid of Gaussian and 3-point centre of gravity centroiding algorithms results in an optimum combination of low fixed pattern noise (lower than a simple Gaussian), high DQE, and high resolution. The Lorentzian algorithm gives the worst results in terms of high fixed pattern noise and low resolution, and the Gaussian and hyperbolic cosine algorithms have the lowest DQEs
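The centre-of-gravity and Gaussian centroiding algorithms compared in this record can be sketched in one dimension. The snippet below is a hypothetical 3-point illustration, not the authors' implementation; it shows why 3-point Gaussian centroiding is exact for a Gaussian pulse while involving logarithms that fail on zero-count pixels, and why centre of gravity is biased, the source of the fixed pattern noise discussed above:

```python
import numpy as np

def cog3(i_m, i_0, i_p):
    """3-point centre-of-gravity centroid offset (pixels) from the peak pixel."""
    return (i_p - i_m) / (i_m + i_0 + i_p)

def gauss3(i_m, i_0, i_p):
    """3-point Gaussian centroid; note the log, which fails (division by zero
    or log of zero) for small pulses with empty side pixels."""
    lm, l0, lp = np.log([i_m, i_0, i_p])
    return 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)

# Samples of a Gaussian pulse (sigma = 1 px) whose true centre is at +0.2 px.
true_shift = 0.2
x = np.array([-1.0, 0.0, 1.0])
pulse = np.exp(-0.5 * (x - true_shift) ** 2)
d_gauss = gauss3(*pulse)   # exact for a Gaussian event shape
d_cog = cog3(*pulse)       # systematically biased toward the pixel centre
```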

  14. A Hybridized Centroid Technique for 3D Molodensky-Badekas ...

    African Journals Online (AJOL)

    Richannan

    the same point in a second reference frame (Ghilani, 2010). ... widely used approach by most researchers to compute values of centroid coordinates in the ... choice of centroid method on the Veis model has been investigated by Ziggah et al.

  15. A focal plane metrology system and PSF centroiding experiment

    Science.gov (United States)

    Li, Haitao; Li, Baoquan; Cao, Yang; Li, Ligang

    2016-10-01

    In this paper, we present an overview of a detector array equipment metrology testbed and a micro-pixel centroiding experiment currently under development at the National Space Science Center, Chinese Academy of Sciences. We discuss on-going development efforts aimed at calibrating the intra-/inter-pixel quantum efficiency and pixel positions for scientific grade CMOS detector, and review significant progress in achieving higher precision differential centroiding for pseudo star images in large area back-illuminated CMOS detector. Without calibration of pixel positions and intrapixel response, we have demonstrated that the standard deviation of differential centroiding is below 2.0e-3 pixels.

  16. Transforming landscape ecological evaluations using sub-pixel remote sensing classifications: A study of invasive saltcedar (Tamarix spp.)

    Science.gov (United States)

    Frazier, Amy E.

Invasive species disrupt landscape patterns and compromise the functionality of ecosystem processes. Non-native saltcedar (Tamarix spp.) poses significant threats to native vegetation and groundwater resources in the southwestern U.S. and Mexico, and quantifying spatial and temporal distribution patterns is essential for monitoring its spread. Advanced remote sensing classification techniques such as sub-pixel classifications are able to detect and discriminate saltcedar from native vegetation with high accuracy, but these classifications are not compatible with landscape metrics, the primary tool available for statistically assessing distribution patterns, because sub-pixel classifications lack discrete class boundaries. The objective of this research is to develop new methods that allow sub-pixel classifications to be analyzed using landscape metrics. The research will be carried out through three specific aims: (1) develop and test a method to transform continuous sub-pixel classifications into categorical representations that are compatible with widely used landscape metric tools, (2) establish a gradient-based concept of the landscape, using sub-pixel classifications and the technique developed in the first objective, to explore the relationships between pattern and process, and (3) generate a new super-resolution mapping technique to predict the spatial locations of fractional land covers within a pixel. Results show that the threshold gradient method is appropriate for discretizing sub-pixel data and can be used to generate more information about the landscape than traditional single-value metrics. Additionally, the super-resolution classification technique was also able to provide detailed sub-pixel mapping information, but additional work will be needed to develop rigorous validation and accuracy assessment techniques.
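The threshold-gradient idea can be sketched on a made-up fractional cover map: discretizing a continuous sub-pixel classification at a gradient of thresholds yields a stack of categorical landscapes, each compatible with standard landscape metric tools (the cover values and thresholds below are illustrative, not from the study):

```python
import numpy as np

# Hypothetical fractional saltcedar cover per pixel (sub-pixel output).
frac = np.array([[0.05, 0.20, 0.55],
                 [0.10, 0.65, 0.90],
                 [0.00, 0.35, 0.80]])

# Discretize along a gradient of thresholds; each threshold yields a binary
# landscape on which conventional patch-based landscape metrics can be run.
thresholds = [0.25, 0.50, 0.75]
binary_maps = {t: (frac >= t).astype(int) for t in thresholds}

# One simple per-threshold "metric": the proportion of cover pixels.
prop = {t: m.mean() for t, m in binary_maps.items()}
```

Scanning a metric across the threshold gradient, rather than reporting one value, is what yields the extra information the abstract mentions.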

  17. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    International Nuclear Information System (INIS)

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-01-01

A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shifts in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  18. Digital Speckle Photography of Subpixel Displacements of Speckle Structures Based on Analysis of Their Spatial Spectra

    Science.gov (United States)

    Maksimova, L. A.; Ryabukho, P. V.; Mysina, N. Yu.; Lyakin, D. V.; Ryabukho, V. P.

    2018-04-01

    We have investigated the capabilities of the method of digital speckle interferometry for determining subpixel displacements of a speckle structure formed by a displaceable or deformable object with a scattering surface. An analysis of spatial spectra of speckle structures makes it possible to perform measurements with a subpixel accuracy and to extend the lower boundary of the range of measurements of displacements of speckle structures to the range of subpixel values. The method is realized on the basis of digital recording of the images of undisplaced and displaced speckle structures, their spatial frequency analysis using numerically specified constant phase shifts, and correlation analysis of spatial spectra of speckle structures. Transformation into the frequency range makes it possible to obtain quantities to be measured with a subpixel accuracy from the shift of the interference-pattern minimum in the diffraction halo by introducing an additional phase shift into the complex spatial spectrum of the speckle structure or from the slope of the linear plot of the function of accumulated phase difference in the field of the complex spatial spectrum of the displaced speckle structure. The capabilities of the method have been investigated in natural experiment.
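The core of the frequency-domain measurement, recovering a subpixel displacement from the phase of the cross-power spectrum of the undisplaced and displaced recordings, can be sketched in one dimension. This is synthetic data and a simplified phase-slope estimator, not the authors' setup:

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)
f = rng.standard_normal(n)          # stand-in for a recorded speckle profile

# Displace by a subpixel amount using the Fourier shift theorem.
true_shift = 0.3
k = np.fft.fftfreq(n)
g = np.fft.ifft(np.fft.fft(f) * np.exp(-2j * np.pi * k * true_shift)).real

# Cross-power spectrum: its phase is a linear ramp 2*pi*k*shift, so the
# accumulated phase difference versus spatial frequency gives the shift
# with subpixel accuracy.
cross = np.fft.fft(f) * np.conj(np.fft.fft(g))
phase = np.angle(cross)
lo = np.arange(1, n // 4)           # low frequencies, before phase wrapping
slope = np.polyfit(k[lo], phase[lo], 1)[0]
est_shift = slope / (2 * np.pi)
```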

  19. Radial lens distortion correction with sub-pixel accuracy for X-ray micro-tomography.

    Science.gov (United States)

    Vo, Nghia T; Atwood, Robert C; Drakopoulos, Michael

    2015-12-14

    Distortion correction or camera calibration for an imaging system which is highly configurable and requires frequent disassembly for maintenance or replacement of parts needs a speedy method for recalibration. Here we present direct techniques for calculating distortion parameters of a non-linear model based on the correct determination of the center of distortion. These techniques are fast, very easy to implement, and accurate at sub-pixel level. The implementation at the X-ray tomography system of the I12 beamline, Diamond Light Source, which strictly requires sub-pixel accuracy, shows excellent performance in the calibration image and in the reconstructed images.
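A sketch of the kind of non-linear radial model involved, with hypothetical coefficients and a hypothetical center of distortion; the paper's contribution is the fast, direct calibration of such parameters from a grid image, which is not reproduced here:

```python
import numpy as np

def undistort(xy, center, coeffs):
    """Correct points with a polynomial radial model about the center of
    distortion: r_u = r_d * (1 + k1*r_d^2 + k2*r_d^4). Illustrative model."""
    d = xy - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)   # squared radius per point
    k1, k2 = coeffs
    return center + d * (1.0 + k1 * r2 + k2 * r2 ** 2)

center = np.array([1280.0, 1080.0])              # assumed distortion center
pts = np.array([[1300.0, 1100.0],                # off-center point
                [1280.0, 1080.0]])               # the center itself
corrected = undistort(pts, center, (1e-6, 0.0))  # hypothetical k1, k2
```

Points at the center of distortion are unchanged; off-center points move radially outward for positive k1.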

  20. Statistical analysis of x-ray stress measurement by centroid method

    International Nuclear Information System (INIS)

    Kurita, Masanori; Amano, Jun; Sakamoto, Isao

    1982-01-01

The X-ray technique allows a nondestructive and rapid measurement of residual stresses in metallic materials. The centroid method has an advantage over other X-ray methods in that it can determine the angular position of a diffraction line, from which the stress is calculated, even with an asymmetrical line profile. An equation for the standard deviation of the angular position of a diffraction line, σ_p, caused by statistical fluctuation was derived; this fluctuation is a fundamental source of scatter in X-ray stress measurements. The equation shows that an increase of X-ray counts by a factor of k results in a decrease of σ_p by a factor of 1/√k. It also shows that σ_p increases rapidly as the angular range used in calculating the centroid increases. It is therefore important to calculate the centroid using the narrow angular range between the two ends of the diffraction line, where it starts to deviate from the straight background line. Using quenched structural steels JIS S35C and S45C, the residual stresses and their standard deviations were calculated by the centroid, parabola, Gaussian curve, and half-width methods, and the results were compared. The centroid of a diffraction line was affected greatly by the background line used. The standard deviation of the stress measured by the centroid method was found to be the largest among the four methods. (author)
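The 1/√k scaling of the centroid's standard deviation can be checked with a small Monte-Carlo sketch. The line profile, count levels and Poisson counting model below are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
two_theta = np.linspace(150.0, 160.0, 101)                  # angle grid (deg)
profile = np.exp(-0.5 * ((two_theta - 155.3) / 1.2) ** 2)   # background-free line

def centroid(counts):
    """Centroid (angular position) of the measured diffraction line."""
    return np.sum(two_theta * counts) / np.sum(counts)

def sigma_p(mean_counts, trials=2000):
    """Std. dev. of the centroid under Poisson counting statistics."""
    est = [centroid(rng.poisson(mean_counts * profile)) for _ in range(trials)]
    return np.std(est)

s1 = sigma_p(100.0)
s4 = sigma_p(400.0)   # 4x the X-ray counts -> sigma_p drops by about 1/sqrt(4)
ratio = s4 / s1
```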

  1. Subpixel Snow Cover Mapping from MODIS Data by Nonparametric Regression Splines

    Science.gov (United States)

    Akyurek, Z.; Kuter, S.; Weber, G. W.

    2016-12-01

Spatial extent of snow cover is often considered one of the key parameters in climatological, hydrological and ecological modeling due to its energy storage, high reflectance in the visible and NIR regions of the electromagnetic spectrum, significant heat capacity and insulating properties. A significant challenge in snow mapping by remote sensing (RS) is the trade-off between the temporal and spatial resolution of satellite imagery. In order to tackle this issue, machine learning-based subpixel snow mapping methods from low or moderate resolution images, such as artificial neural networks (ANNs), have been proposed. Multivariate Adaptive Regression Splines (MARS) is a nonparametric regression tool that can build flexible models for high dimensional and complex nonlinear data. Although MARS is not often employed in RS, it has had various successful implementations, such as estimation of the vertical total electron content in the ionosphere, atmospheric correction and classification of satellite images. This study is the first attempt in RS to evaluate the applicability of MARS for subpixel snow cover mapping from MODIS data. In total, 16 MODIS-Landsat ETM+ image pairs taken over the European Alps between March 2000 and April 2003 were used in the study. MODIS top-of-atmosphere reflectance, NDSI, NDVI and land cover classes were used as predictor variables. Cloud-covered, cloud-shadow, water and bad-quality pixels were excluded from further analysis by a spatial mask. MARS models were trained and validated by using reference fractional snow cover (FSC) maps generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer, trained with backpropagation, was also developed. The mutual comparison of the obtained MARS and ANN models was accomplished on independent test areas.
The MARS model performed better than the ANN model with an average RMSE of 0.1288 over the independent test areas; whereas the average RMSE of the ANN model

  2. Plasma Channel Diagnostic Based on Laser Centroid Oscillations

    International Nuclear Information System (INIS)

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2010-01-01

A technique has been developed for measuring the properties of discharge-based plasma channels by monitoring the centroid location of a laser beam exiting the channel as a function of the input alignment offset between the laser and the channel. The centroid position of low-intensity (<10^14 W cm^-2) laser pulses focused at the input of a hydrogen-filled capillary discharge waveguide was scanned and the exit positions recorded to determine the channel shape and depth with an accuracy of a few percent. In addition, accurate alignment of the laser beam through the plasma channel can be provided by minimizing laser centroid motion at the channel exit as the channel depth is scanned, either by scanning the plasma density or the discharge timing. The improvement in alignment accuracy provided by this technique will be crucial for minimizing electron beam pointing errors in laser plasma accelerators.

  3. Exploring Subpixel Learning Algorithms for Estimating Global Land Cover Fractions from Satellite Data Using High Performance Computing

    Directory of Open Access Journals (Sweden)

    Uttam Kumar

    2017-10-01

Full Text Available Land cover (LC) refers to the physical and biological cover present over the Earth's surface in terms of the natural environment, such as vegetation, water, bare soil, etc. Most LC features occur at finer spatial scales than the resolution of primary remote sensing satellites. Therefore, observed data are a mixture of the spectral signatures of two or more LC features, resulting in mixed pixels. One solution to the mixed pixel problem is the use of subpixel learning algorithms to disintegrate the pixel spectrum into its constituent spectra. Despite the popularity of and existing research on the topic, the most appropriate approach is still under debate. As an attempt to address this question, we compared the performance of several subpixel learning algorithms based on least squares, sparse regression, signal–subspace and geometrical methods. Analysis of the results obtained through computer-simulated and Landsat data indicated that fully constrained least squares (FCLS) outperformed the other techniques. Further, FCLS was used to unmix global Web-Enabled Landsat Data to obtain abundances of substrate (S), vegetation (V) and dark object (D) classes. Due to the sheer nature of the data and computational needs, we leveraged the NASA Earth Exchange (NEX) high-performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into four classes, namely forest, farmland, water and urban areas (in conjunction with nighttime lights data), over California, USA, using a random forest classifier. Validation of these LC maps with the National Land Cover Database 2011 products and the North American Forest Dynamics static forest map shows a 6% improvement in unmixing-based classification relative to per-pixel classification. As such, abundance maps continue to offer a useful alternative to high-spatial-resolution classified maps for forest inventory analysis, multi
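FCLS constrains the abundances to be non-negative and to sum to one. A common sketch of it uses non-negative least squares with a heavily weighted sum-to-one row appended to the endmember matrix; this is an approximation often used in practice, not necessarily the implementation of the study, and the endmember spectra below are toy values:

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, y, delta=1e3):
    """Fully constrained least squares unmixing: abundances a >= 0, sum(a) = 1.
    The sum-to-one constraint is enforced (approximately) by appending a
    heavily weighted row of ones to the NNLS system."""
    bands, n_end = E.shape
    A = np.vstack([E, delta * np.ones((1, n_end))])   # augmented endmembers
    b = np.append(y, delta)                           # augmented pixel spectrum
    a, _ = nnls(A, b)
    return a

# Toy endmember spectra (columns): substrate, vegetation, dark object.
E = np.array([[0.6, 0.1, 0.02],
              [0.5, 0.4, 0.02],
              [0.4, 0.7, 0.02],
              [0.7, 0.3, 0.02]])
true_a = np.array([0.3, 0.5, 0.2])
y = E @ true_a                   # noise-free mixed pixel
a = fcls(E, y)                   # recovered S-V-D abundances
```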

  4. A Novel Approach Based on MEMS-Gyro's Data Deep Coupling for Determining the Centroid of Star Spot

    Directory of Open Access Journals (Sweden)

    Xing Fei

    2012-01-01

Full Text Available The traditional approach of a star tracker for determining the centroid of a star spot requires enough energy and a good spot shape, so a relatively long exposure time and a stable three-axis state become necessary conditions for maintaining high accuracy; these limit its update rate and dynamic performance. In view of these issues, this paper presents an approach for determining the centroid of a star spot based on deep coupling of MEMS-gyro data; it achieves deep fusion of the star tracker and MEMS-gyro data at the star-map level through the introduction of an EKF. The trajectory predicted using the three-axis angular velocity can be used to set the extraction window, which enhances dynamic performance through accurate extraction when the satellite has angular speed. The optimal estimates of the centroid position and of the drift in the MEMS-gyro output signal obtained through this approach reduce the influence of detector noise on the accuracy of the traditional centroiding approach and effectively correct the MEMS-gyro output signal. At the end of this paper, the feasibility of this approach is verified by simulation.

  5. Subpixel Inundation Mapping Using Landsat-8 OLI and UAV Data for a Wetland Region on the Zoige Plateau, China

    Directory of Open Access Journals (Sweden)

    Haoming Xia

    2017-01-01

Full Text Available Wetland inundation is crucial to the survival and prosperity of fauna and flora communities in wetland ecosystems. Even small changes in surface inundation may result in a substantial impact on wetland ecosystem characteristics and function. This study presented a novel method for wetland inundation mapping at a subpixel scale in a typical wetland region on the Zoige Plateau, northeast Tibetan Plateau, China, by combined use of unmanned aerial vehicle (UAV) and Landsat-8 Operational Land Imager (OLI) data. A reference subpixel inundation percentage (SIP) map at the Landsat-8 OLI 30 m pixel scale was first generated using high resolution UAV data (0.16 m). The reference SIP map and Landsat-8 OLI imagery were then used to develop SIP estimation models using three different retrieval methods: linear spectral unmixing (LSU), artificial neural networks (ANN), and regression trees (RT). Based on observations from 2014, the estimation results indicated that the model developed with the RT method provided the best fit for mapping wetland SIP (R2 = 0.933, RMSE = 8.73%) compared to the other two methods. The proposed RT-based model was validated with observations from 2013, and the estimated SIP was highly correlated with the reference SIP, with an R2 of 0.986 and an RMSE of 9.84%. This study highlighted the value of high resolution UAV data and globally and freely available Landsat data, in combination with the developed approach, for monitoring fine-scale, gradual inundation change patterns in wetland ecosystems.

  6. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    Science.gov (United States)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

To meet the requirement of autonomous orbit determination, this paper proposes a fast curve-fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction, in order to achieve high precision autonomous navigation. Firstly, combining the stable characteristics of Earth ultraviolet radiance and the use of atmospheric radiation transmission model software, the paper simulates the Earth ultraviolet radiation model at different times and chooses the proper observation band. Then a fast, improved edge-extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract the Earth's ultraviolet limb features accurately. The Earth's centroid location on the simulated images is then estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is applied to finally realize the autonomous navigation. Experiment results indicate the proposed method can achieve sub-pixel Earth centroid location estimation and greatly enhance autonomous celestial navigation precision.
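The limb-based centroid estimate reduces to fitting a circle to extracted edge points by least squares; an algebraic (Kåsa-style) sketch on synthetic limb points, using only part of the limb as the abstract describes (the image coordinates below are made up):

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit: solve x^2 + y^2 = 2a*x + 2b*y + c
    for the centre (a, b) and radius sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Edge points on part of the limb (a 90-degree arc), as edge extraction yields.
theta = np.linspace(0.0, np.pi / 2, 50)
cx0, cy0, r0 = 320.25, 240.75, 180.0      # sub-pixel "true" centroid and radius
x = cx0 + r0 * np.cos(theta)
y = cy0 + r0 * np.sin(theta)
cx, cy, r = fit_circle(x, y)              # recovered sub-pixel centroid
```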

  7. A Framework Based on 2-D Taylor Expansion for Quantifying the Impacts of Subpixel Reflectance Variance and Covariance on Cloud Optical Thickness and Effective Radius Retrievals Based on the Bispectral Method

    Science.gov (United States)

    Zhang, Z.; Werner, F.; Cho, H.-M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, K.

    2016-01-01

framework can be used to estimate the retrieval uncertainty from subpixel reflectance variations in operational satellite cloud products and to help understand the differences in τ and r_e retrievals between two instruments.

  8. A centroid model of species distribution with applications to the Carolina wren Thryothorus ludovicianus and house finch Haemorhous mexicanus in the United States

    Science.gov (United States)

    Huang, Qiongyu; Sauer, John R.; Swatantran, Anu; Dubayah, Ralph

    2016-01-01

Drastic shifts in species distributions are a cause of concern for ecologists. Such shifts pose a great threat to biodiversity, especially under unprecedented anthropogenic and natural disturbances. Many studies have documented recent shifts in species distributions. However, most of these studies are limited to regional scales and do not consider the abundance structure within species ranges. Developing methods to detect systematic changes in species distributions over their full ranges is critical for understanding the impact of changing environments and for successful conservation planning. Here, we demonstrate a centroid model for range-wide analysis of distribution shifts using the North American Breeding Bird Survey. The centroid model is based on a hierarchical Bayesian framework which models population change within physiographic strata while accounting for several factors affecting species detectability. Yearly abundance-weighted range centroids are estimated. As case studies, we derive annual centroids for the Carolina wren and house finch in their ranges in the U.S. We further evaluate the first-difference correlation between species' centroid movement and changes in winter severity and total population abundance. We also examined associations with changes in centroids of sub-ranges. Changes in the full-range centroid movement of the Carolina wren correlate significantly with snow cover days (r = −0.58). For both species, the full-range centroid shifts also correlate strongly with total abundance (r = 0.65 and 0.51, respectively). The movements of the full-range centroids of the two species are correlated strongly (up to r = 0.76) with those of the sub-ranges with more drastic population changes. Our study demonstrates the usefulness of centroids for analyzing distribution changes in a two-dimensional spatial context. In particular, it highlights applications that associate the centroid with factors such as environmental stressors and population characteristics
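The abundance-weighted range centroid itself is a one-line computation; a sketch with hypothetical stratum centroids and abundance estimates (the paper's hierarchical Bayesian estimation of the abundances is the hard part and is not reproduced here):

```python
import numpy as np

# Hypothetical stratum centroids (lon, lat) and estimated abundances for one year.
coords = np.array([[-84.2, 33.7],
                   [-78.5, 35.9],
                   [-90.1, 38.6]])
abundance = np.array([120.0, 60.0, 20.0])

# Abundance-weighted range centroid: each stratum pulls the centroid in
# proportion to its estimated population.
w = abundance / abundance.sum()
centroid = w @ coords
```

Tracking this point year over year is what turns stratum-level population change into a two-dimensional distribution shift.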

  9. A quantum generalization of intrinsic reaction coordinate using path integral centroid coordinates

    International Nuclear Information System (INIS)

    Shiga, Motoyuki; Fujisaki, Hiroshi

    2012-01-01

We propose a generalization of the intrinsic reaction coordinate (IRC) for quantum many-body systems described in terms of the mass-weighted ring polymer centroids in the imaginary-time path integral theory. This novel kind of reaction coordinate, which may be called the ''centroid IRC,'' corresponds to the minimum free energy path connecting reactant and product states with the least amount of reversible work applied to the centers of mass of the quantum nuclei, i.e., the centroids. We provide a numerical procedure to obtain the centroid IRC based on first principles by combining ab initio path integral simulation with the string method. This approach is applied to the NH₃ molecule and the N₂H₅⁻ ion, as well as their deuterated isotopomers, to study the importance of nuclear quantum effects in intramolecular and intermolecular proton transfer reactions. We find that, in the intramolecular proton transfer (inversion) of NH₃, the free energy barrier for the centroid variables decreases by about 20% compared to the classical one at room temperature. In the intermolecular proton transfer of N₂H₅⁻, the centroid IRC deviates largely from the ''classical'' IRC, and the free energy barrier is reduced by the quantum effects even more drastically.

  10. A Framework Based on 2-D Taylor Expansion for Quantifying the Impacts of Sub-Pixel Reflectance Variance and Covariance on Cloud Optical Thickness and Effective Radius Retrievals Based on the Bi-Spectral Method

    Science.gov (United States)

    Zhang, Z.; Werner, F.; Cho, H. -M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry

    2016-01-01

to estimate the retrieval uncertainty from sub-pixel reflectance variations in operational satellite cloud products and to help understand the differences in τ and r_e retrievals between two instruments.

  11. Implementation of the Centroid Method for the Correction of Turbulence

    Directory of Open Access Journals (Sweden)

    Enric Meinhardt-Llopis

    2014-07-01

    Full Text Available The centroid method for the correction of turbulence consists in computing the Karcher-Fréchet mean of the sequence of input images. The direction of deformation between a pair of images is determined by the optical flow. A distinguishing feature of the centroid method is that it can produce useful results from an arbitrarily small set of input images.

  12. Radiographic measures of thoracic kyphosis in osteoporosis: Cobb and vertebral centroid angles

    International Nuclear Information System (INIS)

    Briggs, A.M.; Greig, A.M.; Wrigley, T.V.; Tully, E.A.; Adams, P.E.; Bennell, K.L.

    2007-01-01

Several measures can quantify thoracic kyphosis from radiographs, yet their suitability for people with osteoporosis remains uncertain. The aim of this study was to examine the validity and reliability of the vertebral centroid and Cobb angles in people with osteoporosis. Lateral radiographs of the thoracic spine were captured in 31 elderly women with osteoporosis. Thoracic kyphosis was measured globally (T1-T12) and regionally (T4-T9) using Cobb and vertebral centroid angles. Multisegmental curvature was also measured by fitting polynomial functions to the thoracic curvature profile. Canonical and Pearson correlations were used to examine correspondence; agreement between measures was examined with linear regression. Moderate to high intra- and inter-rater reliability was achieved (SEM = 0.9-4.0°). Concurrent validity of the simple measures was established against multisegmental curvature (r = 0.88-0.98). Strong association was observed between the Cobb and centroid angles globally (r = 0.84) and regionally (r = 0.83). Correspondence between measures was moderate for the Cobb method (r = 0.72), yet stronger for the centroid method (r = 0.80). The Cobb angle was 20% greater for regional measures due to the influence of endplate tilt. Regional Cobb and centroid angles are valid and reliable measures of thoracic kyphosis in people with osteoporosis. However, the Cobb angle is biased by endplate tilt, suggesting that the centroid angle is more appropriate for this population. (orig.)

  13. Radiographic measures of thoracic kyphosis in osteoporosis: Cobb and vertebral centroid angles

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, A.M.; Greig, A.M. [University of Melbourne, Centre for Health, Exercise and Sports Medicine, School of Physiotherapy, Victoria (Australia); University of Melbourne, Department of Medicine, Royal Melbourne Hospital, Victoria (Australia); Wrigley, T.V.; Tully, E.A.; Adams, P.E.; Bennell, K.L. [University of Melbourne, Centre for Health, Exercise and Sports Medicine, School of Physiotherapy, Victoria (Australia)

    2007-08-15

Several measures can quantify thoracic kyphosis from radiographs, yet their suitability for people with osteoporosis remains uncertain. The aim of this study was to examine the validity and reliability of the vertebral centroid and Cobb angles in people with osteoporosis. Lateral radiographs of the thoracic spine were captured in 31 elderly women with osteoporosis. Thoracic kyphosis was measured globally (T1-T12) and regionally (T4-T9) using Cobb and vertebral centroid angles. Multisegmental curvature was also measured by fitting polynomial functions to the thoracic curvature profile. Canonical and Pearson correlations were used to examine correspondence; agreement between measures was examined with linear regression. Moderate to high intra- and inter-rater reliability was achieved (SEM = 0.9-4.0°). Concurrent validity of the simple measures was established against multisegmental curvature (r = 0.88-0.98). Strong association was observed between the Cobb and centroid angles globally (r = 0.84) and regionally (r = 0.83). Correspondence between measures was moderate for the Cobb method (r = 0.72), yet stronger for the centroid method (r = 0.80). The Cobb angle was 20% greater for regional measures due to the influence of endplate tilt. Regional Cobb and centroid angles are valid and reliable measures of thoracic kyphosis in people with osteoporosis. However, the Cobb angle is biased by endplate tilt, suggesting that the centroid angle is more appropriate for this population. (orig.)

  14. Chandra ACIS Sub-pixel Resolution

    Science.gov (United States)

    Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.

    2011-05-01

We investigate how to achieve the best possible ACIS spatial resolution by binning in ACIS sub-pixels and applying an event repositioning algorithm after removing pixel randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point-source sizes and (2) detecting faint point sources. The size of a bright (but not piled-up), on-axis point source can be reduced by about 20-30%. With the improved resolution, we detect 20% more faint sources when they are embedded in extended, diffuse emission in a crowded field. We further discuss the false-source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure through aliasing for dithered observations and does not worsen the positional accuracy.

  15. Study on Zero-Doppler Centroid Control for GEO SAR Ground Observation

    Directory of Open Access Journals (Sweden)

    Yicheng Jiang

    2014-01-01

Full Text Available In geosynchronous Earth orbit SAR (GEO SAR), Doppler centroid compensation is a key step in the imaging process, which can be performed by the attitude steering of the satellite platform. However, this zero-Doppler centroid control method does not work well when the look angle of the radar is outside an expected range. This paper first analyzes the Doppler properties of GEO SAR in Earth rectangular coordinates. Then, according to the actual conditions of GEO SAR ground observation, the effective range is given by the minimum and maximum possible look angles, which are directly related to the orbital parameters. Based on vector analysis, a new approach for zero-Doppler centroid control in GEO SAR, performing the attitude steering by a combination of pitch and roll rotation, is put forward. This approach, which considers the Earth’s rotation and elliptical orbit effects, can accurately reduce the residual Doppler centroid. The simulation results verify the correctness of the look-angle range and the proposed steering method.
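
The zero-Doppler condition that such attitude steering tries to realise can be illustrated with the basic two-way Doppler relation. This is a minimal geometric sketch with invented vectors, not the paper's full GEO SAR model:

```python
import numpy as np

def doppler_centroid(v_rel, los, wavelength):
    """Two-way Doppler centroid (Hz) from the satellite-target relative
    velocity vector (m/s), the radar line-of-sight vector, and the
    wavelength (m). Illustrative geometry only."""
    u = np.asarray(los, float)
    u = u / np.linalg.norm(u)                      # unit line-of-sight vector
    return 2.0 * np.dot(np.asarray(v_rel, float), u) / wavelength

# A relative velocity perpendicular to the line of sight gives zero Doppler
# centroid -- the condition the pitch/roll steering aims to achieve:
print(doppler_centroid([0.0, 7000.0, 0.0], [1.0, 0.0, 0.0], 0.24))  # -> 0.0
```

Residual Doppler then appears as soon as the steered attitude leaves a velocity component along the line of sight.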

  16. The effect of event shape on centroiding in photon counting detectors

    International Nuclear Information System (INIS)

    Kawakami, Hajime; Bone, David; Fordham, John; Michel, Raul

    1994-01-01

High-resolution, CCD-readout photon counting detectors employ simple centroiding algorithms to define the spatial position of each detected event. The accuracy of centroiding depends strongly on a number of parameters, including the profile, energy and width of the intensified event. In this paper, we analyse how the characteristics of an intensified event change as the input count rate increases, and the consequent effect on centroiding. The changes in these parameters are considered in particular for the MIC photon counting detector developed at UCL for ground- and space-based astronomical applications. This detector has a maximum format of 3072x2304 pixels, permitting its use in the highest-resolution applications. Individual events, at light levels from 5 to 1000k events/s over the detector area, were analysed. It was found that both the asymmetry and the width of event profiles depended strongly on the energy of the intensified event. The variation in profile then affected the centroiding accuracy, leading to loss of resolution. These inaccuracies have been quantified for two different 3-pixel CCD centroiding algorithms and one 2-pixel algorithm. The results show that a maximum error of less than 0.05 CCD pixel occurs with the 3-pixel algorithms and 0.1 CCD pixel with the 2-pixel algorithm. An improvement is proposed, utilising straight-pore MCPs in the intensifier and a 70 μm air gap in front of the CCD. ((orig.))
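
Two- and three-pixel centroiding algorithms of this kind are simple centre-of-gravity formulas over the brightest CCD pixels of an event. A minimal one-dimensional sketch (the paper's exact variants may differ):

```python
def centroid3(a, b, c):
    """Three-pixel centre-of-gravity estimate: sub-pixel offset of the event
    centre from the central (brightest) pixel b, given its neighbours a and c."""
    return (c - a) / (a + b + c)

def centroid2(b, c):
    """Two-pixel estimate using only the two brightest pixels."""
    return c / (b + c)

print(centroid3(10, 80, 10))   # symmetric event -> 0.0
print(centroid2(50, 50))       # event split evenly between two pixels -> 0.5
```

Asymmetric event profiles bias such formulas, which is why the measured error grows when the event shape depends on event energy.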

  17. Automatic centroid detection and surface measurement with a digital Shack–Hartmann wavefront sensor

    International Nuclear Information System (INIS)

    Yin, Xiaoming; Zhao, Liping; Li, Xiang; Fang, Zhongping

    2010-01-01

With advances in manufacturing technologies, the measurement of surface profiles is becoming a major issue. A Shack–Hartmann wavefront sensor (SHWS) provides a promising technology for non-contact surface measurement with a number of advantages over interferometry. The SHWS splits the incident wavefront into many subsections and transfers the distorted wavefront detection into the centroid measurement. So the accuracy of the centroid measurement determines the accuracy of the SHWS. In this paper, we present a new centroid measurement algorithm based on an adaptive thresholding and dynamic windowing method by utilizing image-processing techniques. Based on this centroid detection method, we have developed a digital SHWS system which can automatically detect centroids of focal spots, reconstruct the wavefront and measure the 3D profile of the surface. The system has been tested with various simulated and real surfaces such as flat surfaces, spherical and aspherical surfaces as well as deformable surfaces. The experimental results demonstrate that the system has good accuracy, repeatability and immunity to optical misalignment. The system is also suitable for on-line applications of surface measurement.

  18. Photon counting imaging and centroiding with an electron-bombarded CCD using single molecule localisation software

    International Nuclear Information System (INIS)

    Hirvonen, Liisa M.; Barber, Matthew J.; Suhling, Klaus

    2016-01-01

    Photon event centroiding in photon counting imaging and single-molecule localisation in super-resolution fluorescence microscopy share many traits. Although photon event centroiding has traditionally been performed with simple single-iteration algorithms, we recently reported that iterative fitting algorithms originally developed for single-molecule localisation fluorescence microscopy work very well when applied to centroiding photon events imaged with an MCP-intensified CMOS camera. Here, we have applied these algorithms for centroiding of photon events from an electron-bombarded CCD (EBCCD). We find that centroiding algorithms based on iterative fitting of the photon events yield excellent results and allow fitting of overlapping photon events, a feature not reported before and an important aspect to facilitate an increased count rate and shorter acquisition times.

  19. Research on Centroid Position for Stairs Climbing Stability of Search and Rescue Robot

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2011-01-01

Full Text Available This paper presents the relationship between the stability of stairs climbing and the centroid position of a search and rescue robot. The robot system is considered as a mass point-plane model and the kinematic features are analyzed to find the relationship between the centroid position and the maximal pitch angle of stairs the robot can climb. A computable function for this relationship is given in this paper. During stairs climbing, there is a maximal stability-keeping angle that depends on the centroid position and the pitch angle of the stairs, and a numerical formula is developed for the relationship between the maximal stability-keeping angle, the centroid position, and the pitch angle of the stairs. The experiment demonstrates the trustworthiness and correctness of the method presented in the paper.
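
The kind of stability limit discussed above can be illustrated with elementary statics for a point-mass model. This is a generic tip-over sketch with invented numbers, not the paper's exact formula:

```python
import math

def max_stable_pitch(d, h):
    """Static tip-over limit (degrees) for a point-mass robot model on a slope:
    the robot stays stable while the gravity line through the centroid falls
    inside the support polygon, giving pitch_max = atan(d / h), where d is the
    horizontal distance from the centroid to the downhill support edge and h is
    the centroid height. A generic statics sketch only."""
    return math.degrees(math.atan2(d, h))

# Lowering the centroid (smaller h) or shifting it towards the uphill edge
# (larger d) increases the climbable pitch angle:
print(round(max_stable_pitch(0.3, 0.2), 1))   # -> 56.3
```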

  20. Intraoperative cyclorotation and pupil centroid shift during LASIK and PRK.

    Science.gov (United States)

    Narváez, Julio; Brucks, Matthew; Zimmerman, Grenith; Bekendam, Peter; Bacon, Gregory; Schmid, Kristin

    2012-05-01

    To determine the degree of cyclorotation and centroid shift in the x and y axis that occurs intraoperatively during LASIK and photorefractive keratectomy (PRK). Intraoperative cyclorotation and centroid shift were measured in 63 eyes from 34 patients with a mean age of 34 years (range: 20 to 56 years) undergoing either LASIK or PRK. Preoperatively, an iris image of each eye was obtained with the VISX WaveScan Wavefront System (Abbott Medical Optics Inc) with iris registration. A VISX Star S4 (Abbott Medical Optics Inc) laser was later used to measure cyclotorsion and pupil centroid shift at the beginning of the refractive procedure and after flap creation or epithelial removal. The mean change in intraoperative cyclorotation was 1.48±1.11° in LASIK eyes and 2.02±2.63° in PRK eyes. Cyclorotation direction changed by >2° in 21% of eyes after flap creation in LASIK and in 32% of eyes after epithelial removal in PRK. The respective mean intraoperative shift in the x axis and y axis was 0.13±0.15 mm and 0.17±0.14 mm, respectively, in LASIK eyes, and 0.09±0.07 mm and 0.10±0.13 mm, respectively, in PRK eyes. Intraoperative centroid shifts >100 μm in either the x axis or y axis occurred in 71% of LASIK eyes and 55% of PRK eyes. Significant changes in cyclotorsion and centroid shifts were noted prior to surgery as well as intraoperatively with both LASIK and PRK. It may be advantageous to engage iris registration immediately prior to ablation to provide a reference point representative of eye position at the initiation of laser delivery. Copyright 2012, SLACK Incorporated.

  1. Centroid vetting of transiting planet candidates from the Next Generation Transit Survey

    Science.gov (United States)

    Günther, Maximilian N.; Queloz, Didier; Gillen, Edward; McCormac, James; Bayliss, Daniel; Bouchy, Francois; Walker, Simon. R.; West, Richard G.; Eigmüller, Philipp; Smith, Alexis M. S.; Armstrong, David J.; Burleigh, Matthew; Casewell, Sarah L.; Chaushev, Alexander P.; Goad, Michael R.; Grange, Andrew; Jackman, James; Jenkins, James S.; Louden, Tom; Moyano, Maximiliano; Pollacco, Don; Poppenhaeger, Katja; Rauer, Heike; Raynard, Liam; Thompson, Andrew P. G.; Udry, Stéphane; Watson, Christopher A.; Wheatley, Peter J.

    2017-11-01

The Next Generation Transit Survey (NGTS), operating at Paranal since 2016, is a wide-field survey to detect Neptunes and super-Earths transiting bright stars, which are suitable for precise radial-velocity follow-up and characterization. Its sub-mmag photometric precision and its ability to identify false positives are therefore crucial. In particular, variable background objects blended in the photometric aperture frequently mimic Neptune-sized transits and are costly in follow-up time. These objects can best be identified with the centroiding technique: if the photometric flux is lost off-centre during an eclipse, the flux centroid shifts towards the centre of the target star. Although this method has been employed successfully by the Kepler mission, it has not previously been implemented from the ground. We present a fully automated centroid vetting algorithm developed for NGTS, enabled by our high-precision autoguiding. Our method detects centroid shifts with an average precision of 0.75 milli-pixel (mpix), and down to 0.25 mpix for specific targets, for a pixel size of 4.97 arcsec. The algorithm is now part of the NGTS candidate vetting pipeline and is automatically employed for all detected signals. Further, we develop a joint Bayesian fitting model for all photometric and centroid data, allowing us to disentangle which object (target or background) is causing the signal, and to determine its astrophysical parameters. We demonstrate our method on two NGTS objects of interest. These achievements make NGTS the first ground-based wide-field transit survey to successfully apply the centroiding technique for automated candidate vetting, enabling the production of a robust candidate list before follow-up.

  2. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results showed that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of the individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual time-point assessments. For imperviousness change assessment, the ensembles always outperformed the single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
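
In its simplest form, a heterogeneous ensemble of this kind reduces to averaging the predictions of already-trained models. A minimal sketch with toy stand-in models (the paper's exact combination scheme is not reproduced here):

```python
import numpy as np

class EnsembleRegressor:
    """Unweighted average of already-trained heterogeneous models -- the
    simplest form of the ensembling idea."""
    def __init__(self, models):
        self.models = models

    def predict(self, X):
        return np.mean([m.predict(X) for m in self.models], axis=0)

class Constant:
    """Toy stand-in model with a scikit-learn-like predict() interface."""
    def __init__(self, c):
        self.c = c

    def predict(self, X):
        return np.full(len(X), self.c)

ens = EnsembleRegressor([Constant(0.2), Constant(0.4)])
print(ens.predict([[1.0], [2.0]]))   # -> [0.3 0.3]
```

Averaging helps most when the member models make weakly correlated errors, which is exactly the trade-off between individual accuracy and change-prediction accuracy noted above.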

  3. Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor

    International Nuclear Information System (INIS)

    Yin Xiaoming; Li Xiang; Zhao Liping; Fang Zhongping

    2009-01-01

A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transfers the distorted wavefront detection into the centroid measurement. The accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on the adaptive thresholding and dynamic windowing method by utilizing image processing techniques for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noises, such as diffraction of the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability compared with other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
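
The core of such a centroid measurement is a thresholded centre-of-gravity sum. A simplified sketch in which the adaptive threshold is replaced by a fixed fraction of the peak (an assumption made here for brevity):

```python
import numpy as np

def spot_centroid(img, thresh_k=0.5):
    """Centre-of-gravity of a single focal spot after background suppression.
    The threshold is a fixed fraction of the peak -- a stand-in for an
    adaptive, image-derived threshold -- and pixels below it are zeroed
    before the weighted sum. Returns (x, y) in pixel coordinates."""
    img = np.asarray(img, float)
    t = thresh_k * img.max()
    w = np.where(img > t, img - t, 0.0)      # background-suppressed weights
    ys, xs = np.indices(img.shape)
    s = w.sum()
    return float((xs * w).sum() / s), float((ys * w).sum() / s)

img = np.zeros((5, 5))
img[2, 2] = 10.0
img[2, 3] = 2.0                              # faint tail, removed by threshold
print(spot_centroid(img))                    # -> (2.0, 2.0)
```

In practice, the dynamic-windowing step would first crop a window around each focal spot so that neighbouring spots do not bias the sum.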

  4. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    Science.gov (United States)

    Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from the aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes random uniform uncorrelated distribution of sources and to overcome this limitation a modified centroid method based on scaling distribution has been proposed. Shallower values of the DBMS are found for the south western region. The DBMS values are found as low as 22 km in the south west Deccan trap covered regions and as deep as 43 km in the Chhattisgarh Basin. In most of the places DBMS are much shallower than the Moho depth, earlier found from the seismic study and may be representing the thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
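
The conventional centroid method that the paper modifies estimates the DBMS from two slopes of the radially averaged power spectrum. A sketch of that baseline (wavenumber in rad/km assumed; the fractal-scaling correction of the modified method is omitted):

```python
import numpy as np

def dbms_from_spectrum(k, power, lo, hi):
    """Conventional centroid estimate of the depth to the bottom of magnetic
    sources from a radially averaged power spectrum P(k), k in rad/km.
    The slope of ln(sqrt(P)/k) over the low-wavenumber band `lo` gives the
    centroid depth Z0, the slope of ln(sqrt(P)) over the high-wavenumber
    band `hi` gives the top depth Zt, and Zb = 2*Z0 - Zt."""
    k = np.asarray(k, float)
    p = np.asarray(power, float)
    z0 = -np.polyfit(k[lo], np.log(np.sqrt(p[lo]) / k[lo]), 1)[0]
    zt = -np.polyfit(k[hi], np.log(np.sqrt(p[hi])), 1)[0]
    return 2.0 * z0 - zt
```

With a synthetic spectrum built to have Z0 = 10 km and Zt = 2 km, the routine returns Zb = 18 km, matching the definition.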

  5. Finger vein identification using fuzzy-based k-nearest centroid neighbor classifier

    Science.gov (United States)

    Rosdi, Bakhtiar Affendi; Jaafar, Haryati; Ramli, Dzati Athiar

    2015-02-01

In this paper, a new approach for personal identification using finger vein images is presented. Finger vein is an emerging type of biometrics that attracts the attention of researchers in the biometrics area. Compared to other biometric traits such as face, fingerprint and iris, finger vein is more secure and harder to counterfeit since the features are inside the human body. So far, most researchers have focused on how to extract robust features from the captured vein images; not much research has been conducted on the classification of the extracted features. In this paper, a new classifier called fuzzy-based k-nearest centroid neighbor (FkNCN) is applied to classify the finger vein image. The proposed FkNCN employs a surrounding rule to obtain the k-nearest centroid neighbors based on the spatial distributions of the training images and their distance to the test image. Then, the fuzzy membership function is utilized to assign the test image to the class which is most frequently represented by the k-nearest centroid neighbors. Experimental evaluation using our own database, which was collected from 492 fingers, shows that the proposed FkNCN performs better than the k-nearest neighbor, k-nearest centroid neighbor and fuzzy-based k-nearest neighbor classifiers. This shows that the proposed classifier is able to identify the finger vein image effectively.
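
The surrounding rule for selecting k-nearest centroid neighbours can be sketched as a greedy search in which each newly added neighbour keeps the running centroid as close as possible to the test sample. The fuzzy membership step is omitted, and the details below are illustrative:

```python
import numpy as np

def k_nearest_centroid_neighbors(X, x, k):
    """Greedy k-nearest centroid neighbour selection: the first neighbour is
    the nearest point; each subsequent point is chosen so that the centroid
    of all selected points stays closest to the test sample x. Returns the
    indices of the chosen training samples."""
    X = np.asarray(X, float)
    x = np.asarray(x, float)
    chosen, remaining = [], list(range(len(X)))
    for _ in range(k):
        best, best_d = None, np.inf
        for i in remaining:
            c = np.mean(X[chosen + [i]], axis=0)   # centroid if i were added
            d = np.linalg.norm(c - x)
            if d < best_d:
                best, best_d = i, d
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Because the criterion is the centroid's distance rather than each point's own distance, the selected neighbours tend to surround the test sample rather than cluster on one side of it.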

  6. The efficiency of the centroid method compared to a simple average

    DEFF Research Database (Denmark)

    Eskildsen, Jacob Kjær; Kristensen, Kai; Nielsen, Rikke

Based on empirical data as well as a simulation study, this paper gives recommendations with respect to situations where a simple average of the manifest indicators can be used as a close proxy for the centroid method and when it cannot.

  7. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

Motion estimation and compensation is the key to high-quality video coding. Block-matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and in post-production and special effects in the movie industry. Sub-pixel-accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms, and can often be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full-search block-matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
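
The interpolation that dominates the cost here is, for each sub-pixel sample, a four-pixel bilinear blend. A scalar CPU sketch of what graphics hardware evaluates in its texture units:

```python
def bilinear(img, x, y):
    """Bilinear interpolation of image intensity at a sub-pixel position
    (x, y), with x and y strictly inside the image so that all four
    surrounding pixels exist -- the kind of interpolation a sub-pixel
    block-matching search needs between integer grid points."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0]
            + dx * (1 - dy) * img[y0][x0 + 1]
            + (1 - dx) * dy * img[y0 + 1][x0]
            + dx * dy * img[y0 + 1][x0 + 1])

print(bilinear([[0, 1], [2, 3]], 0.5, 0.5))   # centre of a 2x2 patch -> 1.5
```

A half-pixel motion search evaluates this blend at every pixel of every candidate block, which is why offloading it to texture hardware pays off.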

  8. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution.

    Science.gov (United States)

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-07

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  10. A robust Hough transform algorithm for determining the radiation centers of circular and rectangular fields with subpixel accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Du Weiliang; Yang, James [Department of Radiation Physics, University of Texas M D Anderson Cancer Center, 1515 Holcombe Blvd, Unit 94, Houston, TX 77030 (United States)], E-mail: wdu@mdanderson.org

    2009-02-07

Uncertainty in localizing the radiation field center is among the major components that contribute to the overall positional error and thus must be minimized. In this study, we developed a Hough transform (HT)-based computer algorithm to localize the radiation center of a circular or rectangular field with subpixel accuracy. We found that the HT method detected the centers of the test circular fields with an absolute error of 0.037 ± 0.019 pixels. On a typical electronic portal imager with 0.5 mm image resolution, this mean detection error was translated to 0.02 mm, which was much finer than the image resolution. It is worth noting that the subpixel accuracy described here does not include experimental uncertainties such as linac mechanical instability or room laser inaccuracy. The HT method was more accurate and more robust to image noise and artifacts than the traditional center-of-mass method. Application of the HT method in Winston-Lutz tests was demonstrated to measure the ball-radiation center alignment with subpixel accuracy. Finally, the method was applied to quantitative evaluation of the radiation center wobble during collimator rotation.
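
The circular-field case can be illustrated with a minimal Hough transform in which each edge point votes for all possible centres at a known radius. This sketch returns the coarse accumulator peak only; subpixel accuracy of the kind reported would come from refining around that peak:

```python
import numpy as np

def hough_circle_center(edge_pts, radius, grid, step):
    """Vote-based estimate of a circle centre from edge points when the
    radius is known: each edge point votes for every centre at `radius`
    from it, on a grid of `grid` x `grid` cells of size `step`; the
    accumulator peak is the centre estimate (coarse, pixel-level)."""
    acc = np.zeros((grid, grid))
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for (x, y) in edge_pts:
        cx = np.round((x - radius * np.cos(thetas)) / step).astype(int)
        cy = np.round((y - radius * np.sin(thetas)) / step).astype(int)
        ok = (cx >= 0) & (cx < grid) & (cy >= 0) & (cy < grid)
        np.add.at(acc, (cy[ok], cx[ok]), 1)      # unbuffered accumulation
    iy, ix = np.unravel_index(np.argmax(acc), acc.shape)
    return ix * step, iy * step
```

Because every edge point votes for the true centre while stray votes scatter, the method tolerates noisy or partial edges better than a centre-of-mass over the field.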

  11. Detection of Olea europaea subsp. cuspidata and Juniperus procera in the dry Afromontane forest of northern Ethiopia using subpixel analysis of Landsat imagery

    Science.gov (United States)

    Hishe, Hadgu; Giday, Kidane; Neka, Mulugeta; Soromessa, Teshome; Van Orshoven, Jos; Muys, Bart

    2015-01-01

Comprehensive and less costly forest inventory approaches are required to monitor the spatiotemporal dynamics of key species in forest ecosystems. Subpixel analysis using the ERDAS (Earth Resources Data Analysis System) Imagine subpixel classification procedure was tested to extract Olea europaea subsp. cuspidata and Juniperus procera canopies from Landsat 7 Enhanced Thematic Mapper Plus imagery. Control points with various canopy area fractions of the target species were collected to develop signatures for each of the species. With these signatures, the Imagine subpixel classification procedure was run for each species independently. The subpixel process enabled the detection of O. europaea subsp. cuspidata and J. procera trees in pure and mixed pixels. A total of 100 pixels per species were field-verified. An overall accuracy of 85% was achieved for O. europaea subsp. cuspidata and 89% for J. procera. A high overall accuracy in detecting the species in a natural forest was achieved, which encourages using the algorithm for future species monitoring activities. We recommend that the algorithm be validated in similar environments to better establish its capability and ensure its wider usage.

  12. Application of adjusted subpixel method (ASM) in HRCT measurements of the bronchi in bronchial asthma patients and healthy individuals.

    Science.gov (United States)

    Mincewicz, Grzegorz; Rumiński, Jacek; Krzykowski, Grzegorz

    2012-02-01

    Recently, we described a model system which included corrections of high-resolution computed tomography (HRCT) bronchial measurements based on the adjusted subpixel method (ASM). To verify the clinical application of ASM by comparing bronchial measurements obtained by means of the traditional eye-driven method, subpixel method alone and ASM in a group comprised of bronchial asthma patients and healthy individuals. The study included 30 bronchial asthma patients and the control group comprised of 20 volunteers with no symptoms of asthma. The lowest internal and external diameters of the bronchial cross-sections (ID and ED) and their derivative parameters were determined in HRCT scans using: (1) traditional eye-driven method, (2) subpixel technique, and (3) ASM. In the case of the eye-driven method, lower ID values along with lower bronchial lumen area and its percentage ratio to total bronchial area were basic parameters that differed between asthma patients and healthy controls. In the case of the subpixel method and ASM, both groups were not significantly different in terms of ID. Significant differences were observed in values of ED and total bronchial area with both parameters being significantly higher in asthma patients. Compared to ASM, the eye-driven method overstated the values of ID and ED by about 30% and 10% respectively, while understating bronchial wall thickness by about 18%. Results obtained in this study suggest that the traditional eye-driven method of HRCT-based measurement of bronchial tree components probably overstates the degree of bronchial patency in asthma patients. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Application of adjusted subpixel method (ASM) in HRCT measurements of the bronchi in bronchial asthma patients and healthy individuals

    International Nuclear Information System (INIS)

    Mincewicz, Grzegorz; Rumiński, Jacek; Krzykowski, Grzegorz

    2012-01-01

    Background: Recently, we described a model system which included corrections of high-resolution computed tomography (HRCT) bronchial measurements based on the adjusted subpixel method (ASM). Objective: To verify the clinical application of ASM by comparing bronchial measurements obtained by means of the traditional eye-driven method, subpixel method alone and ASM in a group comprised of bronchial asthma patients and healthy individuals. Methods: The study included 30 bronchial asthma patients and the control group comprised of 20 volunteers with no symptoms of asthma. The lowest internal and external diameters of the bronchial cross-sections (ID and ED) and their derivative parameters were determined in HRCT scans using: (1) traditional eye-driven method, (2) subpixel technique, and (3) ASM. Results: In the case of the eye-driven method, lower ID values along with lower bronchial lumen area and its percentage ratio to total bronchial area were basic parameters that differed between asthma patients and healthy controls. In the case of the subpixel method and ASM, both groups were not significantly different in terms of ID. Significant differences were observed in values of ED and total bronchial area with both parameters being significantly higher in asthma patients. Compared to ASM, the eye-driven method overstated the values of ID and ED by about 30% and 10% respectively, while understating bronchial wall thickness by about 18%. Conclusions: Results obtained in this study suggest that the traditional eye-driven method of HRCT-based measurement of bronchial tree components probably overstates the degree of bronchial patency in asthma patients.

  14. Sensitivity of the normalized difference vegetation index to subpixel canopy cover, soil albedo, and pixel scale

    Science.gov (United States)

    Jasinski, Michael F.

    1990-01-01

    An analytical framework is provided for examining the physically based behavior of the normalized difference vegetation index (NDVI) in terms of the variability in bulk subpixel landscape components and with respect to variations in pixel scales, within the context of the stochastic-geometric canopy reflectance model. Analysis focuses on regional scale variability in horizontal plant density and soil background reflectance distribution. Modeling is generalized to different plant geometries and solar angles through the use of the nondimensional solar-geometric similarity parameter. Results demonstrate that, for Poisson-distributed plants and for one deterministic distribution, NDVI increases with increasing subpixel fractional canopy amount, decreasing soil background reflectance, and increasing shadows, at least within the limitations of the geometric reflectance model. The NDVI of a pecan orchard and a juniper landscape is presented and discussed.
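
The qualitative behaviour described above, NDVI increasing with sub-pixel canopy fraction and decreasing with bright soil background, can be reproduced with a linear (bulk) mixture of component reflectances. The component values below are invented for illustration:

```python
def mixed_reflectance(f_canopy, f_shadow, canopy, shadow, soil):
    """Bulk (linear, area-weighted) mixture of sub-pixel component
    reflectances; the remaining fraction is sunlit soil."""
    f_soil = 1.0 - f_canopy - f_shadow
    return f_canopy * canopy + f_shadow * shadow + f_soil * soil

def ndvi(red, nir):
    """Normalized difference vegetation index of the mixed pixel."""
    return (nir - red) / (nir + red)

# Assumed component reflectances (not from the paper):
# canopy (red 0.05, NIR 0.50), shadow (0.02, 0.05), soil (0.20, 0.25).
sparse = ndvi(mixed_reflectance(0.2, 0.1, 0.05, 0.02, 0.20),
              mixed_reflectance(0.2, 0.1, 0.50, 0.05, 0.25))
dense  = ndvi(mixed_reflectance(0.6, 0.1, 0.05, 0.02, 0.20),
              mixed_reflectance(0.6, 0.1, 0.50, 0.05, 0.25))
print(sparse < dense)   # NDVI increases with canopy fraction -> True
```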

  15. Estimation of sub-pixel water area on Tibet Plateau using multiple endmember spectral mixture analysis from MODIS data

    Science.gov (United States)

    Cui, Qian; Shi, Jiancheng; Xu, Yuanliu

    2011-12-01

Water is a basic need of human society and a determining factor of ecosystem stability as well. There are many lakes on the Tibet Plateau, which can lead to floods and mudslides when the water area expands sharply. At present, water area is usually extracted from TM or SPOT data because of their high spatial resolution; however, their temporal resolution is insufficient. MODIS data have high temporal resolution and broad coverage, so they are a valuable resource for detecting changes in water area. Because of their low spatial resolution, however, mixed pixels are common. In this paper, four spectral libraries are built using the MOD09A1 product; based on these, water bodies are extracted at the sub-pixel level utilizing Multiple Endmember Spectral Mixture Analysis (MESMA) on the MODIS daily reflectance data MOD09GA. The unmixed result is compared with contemporaneous TM data, and it is shown that this method has high accuracy.
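
The unmixing step can be sketched in its simplest, unconstrained form as a per-pixel least-squares fit of endmember fractions. MESMA additionally tests multiple endmember combinations per pixel and constrains the fractions; both refinements are omitted here:

```python
import numpy as np

def unmix_fractions(pixel, endmembers):
    """Unconstrained least-squares linear unmixing of one pixel spectrum.
    `endmembers` has one column per endmember spectrum (shape:
    n_bands x n_endmembers); returns the fitted fraction per endmember."""
    f, *_ = np.linalg.lstsq(np.asarray(endmembers, float),
                            np.asarray(pixel, float), rcond=None)
    return f

# Two bands, two endmembers (columns): water-like and land-like spectra
# (values invented for illustration).
E = np.array([[0.02, 0.10],
              [0.01, 0.30]])
pixel = 0.4 * E[:, 0] + 0.6 * E[:, 1]
print(unmix_fractions(pixel, E))   # -> approximately [0.4 0.6]
```

The water fraction recovered this way is what turns a coarse MODIS pixel into a sub-pixel water-area estimate.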

  16. Networks and centroid metrics for understanding football | Gama ...

    African Journals Online (AJOL)

    This study aimed to verify the network of contacts resulting from the collective behaviour of professional football teams through the centroid method and networks as well, thereby providing detailed information about the match to coaches and sport analysts. For this purpose, 999 collective attacking actions from two teams were ...

  17. Automatic extraction of nuclei centroids of mouse embryonic cells from fluorescence microscopy images.

    Directory of Open Access Journals (Sweden)

    Md Khayrul Bashar

    Full Text Available Accurate identification of cell nuclei and their tracking using three dimensional (3D microscopic images is a demanding task in many biological studies. Manual identification of nuclei centroids from images is an error-prone task, sometimes impossible to accomplish due to low contrast and the presence of noise. Nonetheless, only a few methods are available for 3D bioimaging applications, which sharply contrast with 2D analysis, where many methods already exist. In addition, most methods essentially adopt segmentation for which a reliable solution is still unknown, especially for 3D bio-images having juxtaposed cells. In this work, we propose a new method that can directly extract nuclei centroids from fluorescence microscopy images. This method involves three steps: (i Pre-processing, (ii Local enhancement, and (iii Centroid extraction. The first step includes two variations: first variation (Variant-1 uses the whole 3D pre-processed image, whereas the second one (Variant-2 modifies the preprocessed image to the candidate regions or the candidate hybrid image for further processing. At the second step, a multiscale cube filtering is employed in order to locally enhance the pre-processed image. Centroid extraction in the third step consists of three stages. In Stage-1, we compute a local characteristic ratio at every voxel and extract local maxima regions as candidate centroids using a ratio threshold. Stage-2 processing removes spurious centroids from Stage-1 results by analyzing shapes of intensity profiles from the enhanced image. An iterative procedure based on the nearest neighborhood principle is then proposed to combine if there are fragmented nuclei. Both qualitative and quantitative analyses on a set of 100 images of 3D mouse embryo are performed. Investigations reveal a promising achievement of the technique presented in terms of average sensitivity and precision (i.e., 88.04% and 91.30% for Variant-1; 86.19% and 95.00% for Variant-2

  18. A variational centroid density procedure for the calculation of transmission coefficients for asymmetric barriers at low temperature

    International Nuclear Information System (INIS)

    Messina, M.; Schenter, G.K.; Garrett, B.C.

    1995-01-01

    The low temperature behavior of the centroid density method of Voth, Chandler, and Miller (VCM) [J. Chem. Phys. 91, 7749 (1989)] is investigated for tunneling through a one-dimensional barrier. We find that the bottleneck for a quantum activated process as defined by VCM does not correspond to the classical bottleneck for the case of an asymmetric barrier. If the centroid density is constrained to be at the classical bottleneck for an asymmetric barrier, the centroid density method can give transmission coefficients that are too large by as much as five orders of magnitude. We follow a variational procedure, as suggested by VCM, whereby the best transmission coefficient is found by varying the position of the centroid until the minimum value for this transmission coefficient is obtained. This is a procedure that is readily generalizable to multidimensional systems. We present calculations on several test systems which show that this variational procedure greatly enhances the accuracy of the centroid density method compared to when the centroid is constrained to be at the barrier top. Furthermore, the relation of this procedure to the low temperature periodic orbit or "instanton" approach is discussed. copyright 1995 American Institute of Physics

  19. Robustness of regularities for energy centroids in the presence of random interactions

    International Nuclear Information System (INIS)

    Zhao, Y.M.; Arima, A.; Yoshida, N.; Ogawa, K.; Yoshinaga, N.; Kota, V. K. B.

    2005-01-01

    In this paper we study energy centroids such as those with fixed spin and isospin and those with fixed irreducible representations for both bosons and fermions, in the presence of random two-body and/or three-body interactions. Our results show that regularities of energy centroids of fixed-spin states reported in earlier works are very robust in these more complicated cases. We suggest that these behaviors might be intrinsic features of quantum many-body systems interacting by random forces

  20. Determination of star bodies from p-centroid bodies

    Indian Academy of Sciences (India)

    An immediate consequence of the definition of the p-centroid body of K is that for any ... The dual mixed volume Ṽ₋ₚ(K, L) of star bodies K, L can be defined by ... [16] Lindenstrauss J and Milman V D, Local theory of normed spaces and ...

  1. A further investigation of the centroid-to-centroid method for stereotactic lung radiotherapy: A phantom study

    International Nuclear Information System (INIS)

    Lu, Bo; Samant, Sanjiv; Mittauer, Kathryn; Lee, Soyoung; Huang, Yin; Li, Jonathan; Kahler, Darren; Liu, Chihray

    2013-01-01

    Purpose: Our previous study [B. Lu et al., “A patient alignment solution for lung SBRT setups based on a deformable registration technique,” Med. Phys. 39(12), 7379–7389 (2012)] proposed a deformable-registration-based patient setup strategy called the centroid-to-centroid (CTC) method, which can perform an accurate alignment of internal-target-volume (ITV) centroids between averaged four-dimensional computed tomography and cone-beam computed tomography (CBCT) images. Scenarios with variations between CBCT and simulation CT caused by irregular breathing and/or tumor change were not specifically considered in the patient study due to the lack of both a sufficiently large patient data sample and a method of tumor tracking. The aim of this study is to thoroughly investigate and compare the impacts of breathing pattern and tumor change on both the CTC and the translation-only (T-only) gray-value mode strategies by employing a four-dimensional (4D) lung phantom. Methods: A sophisticated anthropomorphic 4D phantom (CIRS Dynamic Thorax Phantom model 008) was employed to simulate all desired respiratory variations. The variation scenarios were classified into groups: inspiration-to-expiration ratio (IE ratio) change, tumor trajectory change, tumor position change, tumor size change, and combinations of these changes. For each category the authors designed several scenarios to demonstrate the effects of different levels of breathing variation on both the T-only and the CTC methods. Each scenario utilized 4DCT and CBCT scans. The ITV centroid alignment discrepancies for CTC and T-only were evaluated. The dose-volume histograms (DVHs) of ITVs for two extreme cases were analyzed. Results: Except for some extreme cases in the combined group, the accuracy of the CTC registration was about 2 mm for all cases for

  2. Empirical Centroid Fictitious Play: An Approach For Distributed Learning In Multi-Agent Games

    OpenAIRE

    Swenson, Brian; Kar, Soummya; Xavier, Joao

    2013-01-01

    The paper is concerned with distributed learning in large-scale games. The well-known fictitious play (FP) algorithm is addressed, which, despite theoretical convergence results, might be impractical to implement in large-scale settings due to intense computation and communication requirements. An adaptation of the FP algorithm, designated as the empirical centroid fictitious play (ECFP), is presented. In ECFP players respond to the centroid of all players' actions rather than track and respo...

  3. Landform classification using a sub-pixel spatial attraction model to increase spatial resolution of digital elevation model (DEM)

    Directory of Open Access Journals (Sweden)

    Marzieh Mokarrama

    2018-04-01

    Full Text Available The purpose of the present study is to prepare a landform classification using a digital elevation model (DEM) with high spatial resolution. To reach this aim, a sub-pixel spatial attraction model was used as a novel method for preparing a DEM with high spatial resolution in the north of Darab, Fars province, Iran. The sub-pixel attraction model converts each pixel into sub-pixels based on the fraction values of the neighboring pixels, which alone can attract the central pixel. Based on this approach, a maximum of eight neighboring pixels can be selected for calculating the attraction value; other pixels are assumed to be too far from the central pixel to exert any attraction. In the present study, the spatial resolution of a DEM was increased using this sub-pixel attraction model. The algorithm was applied to a DEM with a spatial resolution of 30 m (Advanced Spaceborne Thermal Emission and Reflection Radiometer, ASTER) and one of 90 m (Shuttle Radar Topography Mission, SRTM). In the attraction model, scale factors of S = 2, S = 3, and S = 4 with two neighboring methods, touching (T = 1) and quadrant (T = 2), were applied to the DEMs using MATLAB software. The algorithm was evaluated against 487 sample points measured by surveyors. The spatial attraction model with scale factor S = 2 gives better results than scale factors greater than 2, and the touching neighborhood method turned out to be more accurate than the quadrant method. In fact, dividing each pixel into more than two sub-pixels decreases the accuracy of the resulting DEM; in these cases the root-mean-square error (RMSE) increases, showing that attraction models cannot be used for S greater than 2. Considering these results, the proposed model is highly capable of

  4. Non-obtuse Remeshing with Centroidal Voronoi Tessellation

    KAUST Repository

    Yan, Dongming; Wonka, Peter

    2015-01-01

    We present a novel remeshing algorithm that avoids triangles with small angles and triangles with large (obtuse) angles. Our solution is based on an extension of Centroidal Voronoi Tessellation (CVT). We augment the original CVT formulation with a penalty term that penalizes short Voronoi edges, while the CVT term helps to avoid small angles. Our results show significant improvements in remeshing quality over the state of the art.

  5. Non-obtuse Remeshing with Centroidal Voronoi Tessellation

    KAUST Repository

    Yan, Dongming

    2015-12-03

    We present a novel remeshing algorithm that avoids triangles with small angles and triangles with large (obtuse) angles. Our solution is based on an extension of Centroidal Voronoi Tessellation (CVT). We augment the original CVT formulation with a penalty term that penalizes short Voronoi edges, while the CVT term helps to avoid small angles. Our results show significant improvements in remeshing quality over the state of the art.

  6. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    Science.gov (United States)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by a novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.

  7. Determination of star bodies from p-centroid bodies

    Indian Academy of Sciences (India)

    In this paper, we prove that an origin-symmetric star body is uniquely determined by its -centroid body. Furthermore, using spherical harmonics, we establish a result for non-symmetric star bodies. As an application, we show that there is a unique member of p ⟨ K ⟩ characterized by having larger volume than any other ...

  8. Systematic shifts of evaluated charge centroid for the cathode read-out multiwire proportional chamber

    International Nuclear Information System (INIS)

    Endo, I.; Kawamoto, T.; Mizuno, Y.; Ohsugi, T.; Taniguchi, T.; Takeshita, T.

    1981-01-01

    We have investigated the systematic error associated with charge centroid evaluation for the cathode read-out multiwire proportional chamber. Correction curves for the systematic error according to six centroid-finding algorithms have been obtained by using the charge distribution calculated in a simple electrostatic model. They have been experimentally examined and proved to be essential for accurate determination of the irradiated position. (orig.)
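    The systematic shift described here is easy to reproduce in a toy model: a center-of-gravity centroid computed from a few cathode strips is pulled toward the nearest strip center, which is what the correction curves compensate. The Gaussian induced-charge profile and its width below are invented for illustration.

```python
import math

def strip_charges(x0, n=3, pitch=1.0, sigma=0.6):
    """Charge induced on n strips (unit pitch) by an avalanche at x0,
    modeled illustratively as a Gaussian charge distribution."""
    centers = [(i - n // 2) * pitch for i in range(n)]
    q = [math.exp(-((c - x0) ** 2) / (2 * sigma ** 2)) for c in centers]
    return centers, q

def cog(centers, q):
    """Center-of-gravity centroid over the strip charges."""
    return sum(c * qi for c, qi in zip(centers, q)) / sum(q)

# The reconstructed centroid deviates systematically from the true
# position between strips -- the bias the correction curves address.
for x0 in (0.0, 0.25, 0.5):
    centers, q = strip_charges(x0)
    print(x0, round(cog(centers, q), 3))
```

    At strip centers the estimate is exact; halfway between strips the three-strip center of gravity underestimates the displacement, producing the characteristic S-shaped systematic error.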

  9. Fast centroid algorithm for determining the surface plasmon resonance angle using the fixed-boundary method

    International Nuclear Information System (INIS)

    Zhan, Shuyue; Wang, Xiaoping; Liu, Yuling

    2011-01-01

    To simplify the algorithm for determining the surface plasmon resonance (SPR) angle for special applications and development trends, a fast method for determining the SPR angle, called the fixed-boundary centroid algorithm, is proposed. Two experiments were conducted to compare three centroid algorithms in terms of operation time, sensitivity to shot noise, signal-to-noise ratio (SNR), resolution, and measurement range. Although the measurement range of this method is narrower, the other performance indices were all better than those of the other two centroid methods. The method has outstanding performance, high speed, good conformity, low error, and high SNR and resolution. It thus has the potential to be widely adopted.
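    A minimal sketch of a fixed-boundary centroid computation on a synthetic SPR reflectance dip. The curve shape, boundaries, and baseline weighting below are invented for illustration; the paper's exact variant may differ.

```python
import math

def fixed_boundary_centroid(angles, reflectance, left, right):
    """Centroid of the SPR dip computed between fixed angular
    boundaries, weighting each angle by its depth below baseline."""
    baseline = max(reflectance)
    num = den = 0.0
    for a, r in zip(angles, reflectance):
        if left <= a <= right:
            w = baseline - r            # dip depth at this angle
            num += w * a
            den += w
    return num / den

# Synthetic reflectance dip centered at 65 degrees (illustrative)
angles = [60.0 + 0.1 * i for i in range(101)]
refl = [1.0 - 0.8 * math.exp(-((a - 65.0) ** 2) / 0.5) for a in angles]
sp = fixed_boundary_centroid(angles, refl, 62.0, 68.0)
print(round(sp, 2))  # ~65.0
```

    Fixing the integration boundaries avoids re-detecting the dip edges on every frame, which is where the speed advantage of this family of algorithms comes from.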

  10. Comparison of BiLinearly Interpolated Subpixel Sensitivity Mapping and Pixel-Level Decorrelation

    Science.gov (United States)

    Challener, Ryan C.; Harrington, Joseph; Cubillos, Patricio; Foster, Andrew S.; Deming, Drake; WASP Consortium

    2016-10-01

    Exoplanet eclipse signals are weaker than the systematics present in the Spitzer Space Telescope's Infrared Array Camera (IRAC), and thus the correction method can significantly impact a measurement. BiLinearly Interpolated Subpixel Sensitivity (BLISS) mapping calculates the sensitivity of the detector on a subpixel grid and corrects the photometry for any sensitivity variations. Pixel-Level Decorrelation (PLD) removes the sensitivity variations by considering the relative intensities of the pixels around the source. We applied both methods to WASP-29b, a Saturn-sized planet with a mass of 0.24 ± 0.02 Jupiter masses and a radius of 0.84 ± 0.06 Jupiter radii, which we observed during eclipse twice with the 3.6 µm and once with the 4.5 µm channels of IRAC aboard Spitzer in 2010 and 2011 (programs 60003 and 70084, respectively). We compared the results of BLISS and PLD, and comment on each method's ability to remove time-correlated noise. WASP-29b exhibits a strong detection at 3.6 µm and no detection at 4.5 µm. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.

  11. Noninvasive measurement of cardiopulmonary blood volume: evaluation of the centroid method

    International Nuclear Information System (INIS)

    Fouad, F.M.; MacIntyre, W.J.; Tarazi, R.C.

    1981-01-01

    Cardiopulmonary blood volume (CPV) and mean pulmonary transit time (MTT) determined by radionuclide measurements (Tc-99m HSA) were compared with values obtained from simultaneous dye-dilution (DD) studies (indocyanine green). The mean transit time was obtained from radionuclide curves by two methods: the peak-to-peak time and the interval between the two centroids determined from the right and left-ventricular time-concentration curves. Correlation of dye-dilution MTT and peak-to-peak time was significant (r = 0.79, p < 0.001), but its correlation with centroid-derived values was better (r = 0.86, p < 0.001). CPV values (using the centroid method for radionuclide technique) correlated significantly with values derived from dye-dilution curves (r = 0.74, p < 0.001). Discrepancies between the two were greater the more rapid the circulation (r = 0.61, p < 0.01), suggesting that minor inaccuracies of dye-dilution methods, due to positioning or delay of the system, can become magnified in hyperkinetic conditions. The radionuclide method is simple, repeatable, and noninvasive, and it provides simultaneous evaluation of pulmonary and systemic hemodynamics. Further, calculation of the ratio of cardiopulmonary to total blood volume can be used as an index of overall venous distensibility and relocation of intravascular blood volume
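    The centroid method here amounts to taking the first temporal moment of each ventricular time-activity curve; MTT is the difference between the two centroids, and by the central volume principle CPV = flow × MTT. A sketch with synthetic (invented) curves:

```python
import math

def curve_centroid(times, counts):
    """First temporal moment (centroid) of a time-activity curve."""
    return sum(t * c for t, c in zip(times, counts)) / sum(counts)

# Illustrative right- and left-ventricular curves (counts vs seconds)
times = [0.5 * i for i in range(40)]
rv = [math.exp(-((t - 4.0) ** 2) / 2.0) for t in times]
lv = [math.exp(-((t - 10.0) ** 2) / 4.0) for t in times]

mtt = curve_centroid(times, lv) - curve_centroid(times, rv)
print(round(mtt, 2))  # ~6.0 s mean pulmonary transit time
```

    Unlike the peak-to-peak interval, the centroid difference uses the whole curve, which is consistent with the better correlation against dye-dilution values reported above.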

  12. A robust sub-pixel edge detection method of infrared image based on tremor-based retinal receptive field model

    Science.gov (United States)

    Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang

    2008-03-01

    Because of the complex thermal objects in an infrared image, the prevalent image edge detection operators are often suitable only for certain scenes and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when assuming a convolution-based receptive field architecture. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. Aiming at the blurred edges of an infrared image, subsequent orthogonal polynomial interpolation and sub-pixel edge detection in the rough edge pixel neighborhood are adopted to locate the rough edges at the sub-pixel level. Numerical simulations show that this method can locate target edges accurately and robustly.
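    A 1-D sketch of the DoG-plus-subpixel idea: filter with a zero-sum ON-center DoG kernel, then interpolate the zero crossing of the response to locate the edge at sub-pixel precision. The kernel widths are invented, and the paper works in 2-D with polynomial interpolation rather than this linear form.

```python
import math

def dog_kernel(sigma1=1.0, sigma2=1.6, radius=6):
    """Zero-sum ON-center Difference-of-Gaussians kernel."""
    g = lambda s, x: math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    k = [g(sigma1, x) - g(sigma2, x) for x in range(-radius, radius + 1)]
    mean = sum(k) / len(k)
    return [v - mean for v in k]     # no response to flat regions

def convolve(signal, kernel):
    r = len(kernel) // 2
    out = [0.0] * len(signal)
    for i in range(r, len(signal) - r):
        out[i] = sum(kernel[j + r] * signal[i - j] for j in range(-r, r + 1))
    return out

def subpixel_zero_crossing(resp):
    """Strongest sign change of the DoG response, linearly interpolated."""
    best, pos = 0.0, None
    for i in range(len(resp) - 1):
        if resp[i] * resp[i + 1] < 0 and abs(resp[i] - resp[i + 1]) > best:
            best = abs(resp[i] - resp[i + 1])
            pos = i + resp[i] / (resp[i] - resp[i + 1])
    return pos

signal = [0.0] * 20 + [1.0] * 20   # ideal step edge between pixels 19 and 20
pos = subpixel_zero_crossing(convolve(signal, dog_kernel()))
print(round(pos, 3))  # ~19.5
```

    The zero crossing of a band-pass (DoG) response sits on the edge itself, so interpolating between the two samples that straddle zero yields the sub-pixel edge location.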

  13. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    Science.gov (United States)

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of centroids in the K-Means algorithm directly affects the quality of the clustering results, and determining centroids from random numbers has many weaknesses. The GenClust algorithm, which combines Genetic Algorithms and K-Means, uses a genetic algorithm to determine the centroid of each cluster; it obtains 50% of its chromosomes through deterministic calculation and generates the other 50% from random numbers. This study modifies the GenClust algorithm so that 100% of the chromosomes are obtained through deterministic calculation. The result is a performance comparison, expressed as mean square error, of centroid determination in the K-Means method using the original GenClust method, the modified GenClust method, and classic K-Means.
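    The mean-square-error comparison above reduces to evaluating, for each initialization scheme, the sum of squared distances from points to their nearest centroid. A minimal sketch with invented data:

```python
def sse(points, centroids):
    """Sum of squared distances from each point to its nearest centroid,
    the quantity used to rank centroid-initialization schemes."""
    total = 0.0
    for p in points:
        total += min(sum((a - b) ** 2 for a, b in zip(p, c))
                     for c in centroids)
    return total

# Two well-separated groups: good centroids sit in the groups,
# bad centroids sit between them.
points = [(0, 0), (0, 1), (10, 10), (10, 11)]
good = [(0, 0.5), (10, 10.5)]
bad  = [(5, 5), (6, 6)]
print(sse(points, good), sse(points, bad))  # 1.0 vs a much larger value
```

    A lower SSE for one initialization scheme over another, as reported for the deterministic GenClust variant, means its centroids fit the data more tightly.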

  14. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2015-02-01

    Full Text Available This study aimed to evaluate the efficiency of multiple centroids for studying the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bi-segmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). In this context, we used data on the evaluation of dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two repetitions carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it gave no ambiguous indications and, provided that ideotypes were defined according to the researcher's interest, facilitated data interpretation.

  15. Establishing the soft and hard tissue area centers (centroids) for the skull and introducing a new non-anatomical cephalometric line

    International Nuclear Information System (INIS)

    AlBalkhi, Khalid M; AlShahrani, Ibrahim; AlMadi, Abdulaziz

    2008-01-01

    The purpose of this study was to demonstrate how to establish the area center (centroid) of both the soft and hard tissue outlines of the lateral cephalometric skull image, and to introduce the concept of a new non-anatomical centroid line. Lateral cephalometric radiographs, size 12 x 14 inch, of fifty-seven adult subjects were selected based on their pleasant, balanced profile, Class I skeletal and dental relationship, and absence of major dental malocclusion or malrelationship. The area centers (centroids) of both the soft and hard tissue skull were established using a customized computer program called the m-file. Connecting the two centers introduced the concept of a new non-anatomical soft and hard tissue centroid line. (author)

  16. Characterizing sub-pixel landsat ETM plus fire severity on experimental fires in the Kruger National Park, South Africa

    CSIR Research Space (South Africa)

    Landmann, T

    2003-07-01

    Full Text Available Burn severity was quantitatively mapped using a unique linear spectral mixture model to determine sub-pixel abundances of different ashes and combustion completeness measured on the corresponding fire-affected pixels in Landsat data. A new burn...

  17. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-04-15

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
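    The CVT part of the method can be sketched with sample-based Lloyd iteration, which, as the record notes, monotonically decreases the CVT energy. This is only an illustrative sketch: the Voronoi Particle dynamics and load-balancing terms are not modeled, and the continuous Voronoi cells are approximated by dense sample points.

```python
import random

def lloyd_cvt(generators, samples, iterations=20):
    """Sample-based Lloyd iteration: assign samples to their nearest
    generator, then move each generator to its cluster centroid."""
    gens = [tuple(g) for g in generators]
    for _ in range(iterations):
        sums = [[0.0, 0.0, 0] for _ in gens]
        for s in samples:
            i = min(range(len(gens)),
                    key=lambda j: (s[0] - gens[j][0]) ** 2
                                + (s[1] - gens[j][1]) ** 2)
            sums[i][0] += s[0]; sums[i][1] += s[1]; sums[i][2] += 1
        gens = [(sx / n, sy / n) if n else g
                for (sx, sy, n), g in zip(sums, gens)]
    return gens

def cvt_energy(gens, samples):
    """Discrete CVT energy: squared distance to the nearest generator."""
    return sum(min((s[0] - g[0]) ** 2 + (s[1] - g[1]) ** 2
                   for g in gens) for s in samples)

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(2000)]
gens0 = samples[:8]
gens = lloyd_cvt(gens0, samples)
print(round(cvt_energy(gens0, samples), 1), round(cvt_energy(gens, samples), 1))
```

    After relaxation the generators spread into compact, roughly convex cells, which is the property the CVP method exploits to minimize communication between partitioning subdomains.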

  18. A general centroid determination methodology, with application to multilayer dielectric structures and thermally stimulated current measurements

    International Nuclear Information System (INIS)

    Miller, S.L.; Fleetwood, D.M.; McWhorter, P.J.; Reber, R.A. Jr.; Murray, J.R.

    1993-01-01

    A general methodology is developed to experimentally characterize the spatial distribution of occupied traps in dielectric films on a semiconductor. The effects of parasitics such as leakage, charge transport through more than one interface, and interface trap charge are quantitatively addressed. Charge transport with contributions from multiple charge species is rigorously treated. The methodology is independent of the charge transport mechanism(s), and is directly applicable to multilayer dielectric structures. The centroid capacitance, rather than the centroid itself, is introduced as the fundamental quantity that permits the generic analysis of multilayer structures. In particular, the form of many equations describing stacked dielectric structures becomes independent of the number of layers comprising the stack if they are expressed in terms of the centroid capacitance and/or the flatband voltage. The experimental methodology is illustrated with an application using thermally stimulated current (TSC) measurements. The centroid of changes (via thermal emission) in the amount of trapped charge was determined for two different samples of a triple-layer dielectric structure. A direct consequence of the TSC analyses is the rigorous proof that changes in interface trap charge can contribute, though typically not significantly, to thermally stimulated current

  19. Subpixel mapping and test beam studies with a HV2FEI4v2 CMOS-Sensor-Hybrid Module for the ATLAS inner detector upgrade

    Science.gov (United States)

    Bisanz, T.; Große-Knetter, J.; Quadt, A.; Rieger, J.; Weingarten, J.

    2017-08-01

    The upgrade to the High Luminosity Large Hadron Collider will increase the instantaneous luminosity by more than a factor of 5, thus creating significant challenges for the tracking systems of all experiments. Recent advancements in active pixel detectors designed in CMOS processes provide attractive alternatives to the well-established hybrid design using passive sensors, since they allow for smaller pixel sizes and cost-effective production. This article presents studies of a high-voltage CMOS active pixel sensor designed for the ATLAS tracker upgrade. The sensor is glued to the read-out chip of the Insertable B-Layer, forming a capacitively coupled pixel detector. The pixel pitch of the device under test is 33 × 125 μm², while the pixels of the read-out chip have a pitch of 50 × 250 μm². Three pixels of the CMOS device are connected to one read-out pixel; which of these subpixels was hit is encoded in the amplitude of the output signal (subpixel encoding). Test beam measurements are presented that demonstrate the usability of this subpixel encoding scheme.

  20. A study on high speed wavefront control algorithm for an adaptive optics system

    International Nuclear Information System (INIS)

    Park, Seung Kyu; Baik, Sung Hoon; Kim, Cheol Jung; Seo, Young Seok

    2000-01-01

    We developed a high-speed control algorithm and system for measuring and correcting wavefront distortions, based on the Windows operating system. To quickly obtain wavefront distortion information from the Hartmann spot image, we preprocessed the image to remove background noise and extracted the centroid position by computing the center of weight. We then repeatedly refined the centroid position to obtain wavefront information at sub-pixel resolution. We designed a differential data communication driver and an isolated analog driver for robust system control. Experimental results show that the measurement resolution of the wavefront was 0.05 pixels and the correction speed was 5 Hz.
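    The preprocess-then-center-of-weights step described here can be sketched as a background-subtracted center of mass over a spot image. The threshold and pixel values below are invented for illustration:

```python
def spot_centroid(image, threshold):
    """Background-subtracted center-of-mass centroid of a Hartmann
    spot image; returns (row, col) with subpixel resolution."""
    m = rsum = csum = 0.0
    for r, line in enumerate(image):
        for c, v in enumerate(line):
            w = v - threshold            # remove background pedestal
            if w > 0:
                m += w
                rsum += w * r
                csum += w * c
    return rsum / m, csum / m

spot = [
    [0, 1, 1, 0],
    [1, 5, 9, 1],
    [1, 4, 8, 1],
    [0, 1, 1, 0],
]
row, col = spot_centroid(spot, 1)
print(round(row, 3), round(col, 3))
```

    Thresholding before the weighted sum keeps uniform background noise from dragging every centroid toward the window center, which is essential at the 0.05-pixel level reported above.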

  1. A comparison of methods for calculating population exposure estimates of daily weather for health research

    Directory of Open Access Journals (Sweden)

    Dear Keith BG

    2006-09-01

    Full Text Available Background: To explain the possible effects of exposure to weather conditions on population health outcomes, weather data need to be calculated at a level in space and time that is appropriate for the health data. There are various ways of estimating exposure values from raw data collected at weather stations, but the rationale for using one technique rather than another, the significance of the difference in the values obtained, and the effect these have on a research question are factors often not explicitly considered. In this study we compare different techniques for allocating weather data observations to small geographical areas and different options for weighting averages of these observations when calculating estimates of daily precipitation and temperature for Australian Postal Areas. Options that weight observations based on distance from population centroids and population size are more computationally intensive but give estimates that conceptually are more closely related to the experience of the population. Results: Options based on values derived from sites internal to postal areas, or from nearest-neighbour sites (that is, using proximity polygons around weather stations intersected with postal areas), tended to include fewer stations' observations in their estimates, and missing values were common. Options based on observations from stations within a 50-kilometre radius of centroids, with data weighted by distance from the centroids, gave more complete estimates. Using the geographic centroid of the postal area gave estimates that differed slightly from the population-weighted centroids and the population-weighted average of sub-unit estimates. Conclusion: To calculate daily weather exposure values for analysis of health outcome data for small areas, the use of data from weather stations internal to the area only, or from neighbouring weather stations (allocated by the use of proximity polygons), is too limited. The most
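    The 50 km distance-weighted option can be sketched as follows. The station data are invented, distances are planar for brevity, and a real implementation would use great-circle distances and, where available, population weighting:

```python
import math

def weighted_daily_estimate(centroid, stations, radius_km=50.0):
    """Inverse-distance-weighted average of station observations
    within radius_km of a (population) centroid. Each station is
    (x_km, y_km, value); returns None if no station is in range."""
    num = den = 0.0
    for (x, y, value) in stations:
        d = math.hypot(x - centroid[0], y - centroid[1])
        if d <= radius_km:
            w = 1.0 / max(d, 1e-6)     # nearer stations count more
            num += w * value
            den += w
    return num / den if den else None

# Two nearby stations dominate a distant third (km offsets, degrees C)
stations = [(5.0, 0.0, 20.0), (0.0, 10.0, 21.0), (45.0, 0.0, 30.0)]
est = weighted_daily_estimate((0.0, 0.0), stations)
print(round(est, 2))  # ~21.0
```

    Returning None when no station falls inside the radius makes the missing-value behaviour the study discusses explicit rather than silently extrapolating.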

  2. Measurement of centroid trajectory of Dragon-I electron beam

    International Nuclear Information System (INIS)

    Jiang Xiaoguo; Wang Yuan; Zhang Wenwei; Zhang Kaizhi; Li Jing; Li Chenggang; Yang Guojun

    2005-01-01

    The control of the electron beam in an intense-current linear induction accelerator (LIA) is very important. The center position of the electron beam and the beam profile are two important parameters that should be measured accurately. The setup of a time-resolved measurement system and a data processing method for determining the beam center position are introduced for the purpose of obtaining the Dragon-I electron beam trajectory, including the beam profile. The results show that the centroid position error can be controlled to within one to two pixels. The time-resolved beam centroid trajectory of Dragon-I (18.5 MeV, 2 kA, 90 ns) was recently obtained at 10 ns intervals with 3 ns exposure time using a multi-frame gated camera. The results show that the screw movement of the electron beam is mainly confined to an area with a radius of 0.5 mm, and the time-resolved diameters of the beam are 8.4 mm, 8.8 mm, 8.5 mm, 9.3 mm and 7.6 mm. These results provide important support to several research areas, such as beam trajectory tuning and beam transmission. (authors)

  3. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    Science.gov (United States)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods that have high accuracy and time efficiency are necessary for the filtering process. One method that has been known and applied in clustering is K-Means Clustering. In its application, the determination of the beginning value of the cluster center greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means Clustering with starting centroid determination by a random and a KD-Tree method. The initial determination of random centroids on a data set of 1000 student academic records, used to classify potential dropouts, gives an SSE value of 952972 for the quality variable and 232.48 for the GPA, whereas the initial centroid determination by KD-Tree gives an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that the results of K-Means Clustering with initial KD-Tree centroid selection have better accuracy than the K-Means Clustering method with random initial centroid selection.
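
    The SSE comparison above can be reproduced in miniature. A sketch of plain Lloyd iteration on synthetic 2-D data follows; the data, the seed, and the random initialization are our own stand-ins, not the paper's student dataset:

```python
import numpy as np

def kmeans_sse(X, init, n_iter=50):
    """Plain Lloyd iteration; `init` supplies the starting centroids, so
    different seeding strategies (random, KD-Tree, ...) can be compared."""
    centroids = init.copy()
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move every centroid to the mean of its cluster
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    # SSE: summed squared distance of each point to its assigned centroid
    return float(((X - centroids[labels]) ** 2).sum())

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # synthetic cluster 1
               rng.normal(3.0, 0.3, (50, 2))])  # synthetic cluster 2
sse_random = kmeans_sse(X, X[rng.choice(len(X), 2, replace=False)])
print(round(sse_random, 2))
```

    A KD-Tree (or any other) seeding strategy would simply pass a different `init` array, and the resulting SSE values can be compared exactly as in the abstract.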

  4. Characterizing Subpixel Spatial Resolution of a Hybrid CMOS Detector

    Science.gov (United States)

    Bray, Evan; Burrows, Dave; Chattopadhyay, Tanmoy; Falcone, Abraham; Hull, Samuel; Kern, Matthew; McQuaide, Maria; Wages, Mitchell

    2018-01-01

    The detection of X-rays is a unique process relative to other wavelengths, and allows for some novel features that increase the scientific yield of a single observation. Unlike lower photon energies, X-rays liberate a large number of electrons from the silicon absorber array of the detector. This number is usually on the order of several hundred to a thousand for moderate-energy X-rays. These electrons tend to diffuse outward into what is referred to as the charge cloud. This cloud can then be picked up by several pixels, forming a specific pattern based on the exact incident location. By conducting the first ever "mesh experiment" on a hybrid CMOS detector (HCD), we have experimentally determined the charge cloud shape and used it to characterize the responsivity of the detector with subpixel spatial resolution.
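
    To illustrate how a multi-pixel charge pattern yields a subpixel position, here is a simple intensity-weighted centroid of a hypothetical 3x3 event island; the pixel values are invented, not measured HCD data, and real pipelines use the measured charge cloud shape rather than a bare centroid:

```python
import numpy as np

# Hypothetical 3x3 pixel island around the brightest pixel of one X-ray
# event (invented DNs, not measured HCD data).
island = np.array([[10.0,  40.0, 12.0],
                   [35.0, 300.0, 60.0],
                   [ 8.0,  55.0, 15.0]])

# intensity-weighted centroid gives the event position in pixel units
ys, xs = np.mgrid[0:3, 0:3]
total = island.sum()
cx = (xs * island).sum() / total   # column (x) centroid
cy = (ys * island).sum() / total   # row (y) centroid
print(round(cx, 3), round(cy, 3))
```

    The asymmetry of the charge split (more charge to the right and below the central pixel) pulls the centroid fractionally away from the pixel center, which is precisely the subpixel information the mesh experiment characterizes.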

  5. On the timing properties of germanium detectors: The centroid diagrams of prompt photopeaks and Compton events

    International Nuclear Information System (INIS)

    Penev, I.; Andrejtscheff, W.; Protochristov, Ch.; Zhelev, Zh.

    1987-01-01

    In applications of the generalized centroid shift method with germanium detectors, the energy dependences of the time centroids of prompt photopeaks (the zero-time line) and of Compton background events reveal a peculiar behavior, crossing each other at about 100 keV. The effect is plausibly explained as associated with the ratio of γ-quanta causing the photoeffect and Compton scattering, respectively, at the boundaries of the detector. (orig.)

  6. Centroid and Envelope Dynamics of Charged Particle Beams in an Oscillating Wobbler and External Focusing Lattice for Heavy Ion Fusion Applications

    International Nuclear Information System (INIS)

    Davidson, Ronald C.; Logan, B. Grant

    2011-01-01

    Recent heavy ion fusion target studies show that it is possible to achieve ignition with direct drive and energy gain larger than 100 at 1 MJ. To realize these advanced, high-gain schemes based on direct drive, it is necessary to develop a reliable beam smoothing technique to mitigate instabilities and facilitate uniform deposition on the target. The dynamics of the beam centroid can be explored as a possible beam smoothing technique to achieve a uniform illumination over a suitably chosen region of the target. The basic idea of this technique is to induce an oscillatory motion of the centroid for each transverse slice of the beam in such a way that the centroids of different slices strike different locations on the target. The centroid dynamics is controlled by a set of biased electrical plates called 'wobblers'. Using a model based on moments of the Vlasov-Maxwell equations, we show that the wobbler deflection force acts only on the centroid motion, and that the envelope dynamics are independent of the wobbler fields. If the conducting wall is far away from the beam, then the envelope dynamics and centroid dynamics are completely decoupled. This is a preferred situation for the beam wobbling technique, because the wobbler system can be designed to generate the desired centroid motion on the target without considering its effects on the envelope and emittance. A conceptual design of the wobbler system for a heavy ion fusion driver is briefly summarized.

  7. A Framework for Quantifying the Impacts of Sub-Pixel Reflectance Variance and Covariance on Cloud Optical Thickness and Effective Radius Retrievals Based on the Bi-Spectral Method.

    Science.gov (United States)

    Zhang, Z; Werner, F.; Cho, H. -M.; Wind, Galina; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry

    2017-01-01

    The so-called bi-spectral method retrieves cloud optical thickness (t) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near infrared (VIS/NIR) band and the other in a shortwave-infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved t and re. In this study, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the t and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the t and re retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval.
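
    The second-order Taylor framework described above can be written schematically as follows (notation ours: x and y stand for the VIS/NIR and SWIR reflectances and r for either retrieved quantity; this is the standard two-variable expansion, not a formula quoted verbatim from the paper):

```latex
\mathrm{E}\left[r(x,y)\right] \approx r(\bar{x},\bar{y})
  + \frac{1}{2}\,\frac{\partial^{2} r}{\partial x^{2}}\,\mathrm{Var}(x)
  + \frac{1}{2}\,\frac{\partial^{2} r}{\partial y^{2}}\,\mathrm{Var}(y)
  + \frac{\partial^{2} r}{\partial x\,\partial y}\,\mathrm{Cov}(x,y)
```

    The retrieval bias thus depends not only on the two sub-pixel variances but also on their covariance, which is the mutual dependence between the VIS/NIR and SWIR observations that the abstract emphasizes.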

  8. A double inequality for bounding Toader mean by the centroidal mean

    Indian Academy of Sciences (India)

    A double inequality for bounding Toader mean by the centroidal mean. YUN HUA1,∗ and FENG QI2. 1Department of Information Engineering, Weihai Vocational College, Weihai City,. Shandong Province 264210, China. 2College of Mathematics, Inner Mongolia University for Nationalities, Tongliao City,. Inner Mongolia ...

  9. A walk-free centroid method for lifetime measurement of the 207Pb 569.7 keV state

    International Nuclear Information System (INIS)

    Gu Jiahui; Liu Jingyi; Xiao Genlai

    1988-01-01

    An improvement has been made in acquiring data of delayed coincidence spectra with the ND-620 data acquisition system and an off-line data analysis program. The delayed and anti-delayed coincidence spectra can be obtained in one run. The difference of their centroids is the mean lifetime τ. The centroid position of a delayed coincidence spectrum is the zero time of another delayed coincidence spectrum, so the requirement of measuring a prompt time spectrum is avoided. The walk between prompt and delayed coincidence spectra coming from different runs is resolved, and the walk during the measurement is partly compensated. The delayed coincidence time spectra of the 207 Pb 569.7 keV state are measured and the half-life is calculated via three different methods (slope method, convolution method, centroid shift). The final value of the half-life is 129.5±1.4 ps. The experimental reduced transition probability is compared with theoretical values
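
    The centroid-shift idea, that the centroid of a delayed spectrum sits one mean life after the centroid of the prompt response, can be sketched with a toy Monte Carlo. The 80 ps prompt resolution and the event count are arbitrary assumptions; the mean life corresponds to the 129.5 ps half-life quoted above (tau = T1/2 / ln 2 ≈ 187 ps):

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 187.0     # mean life to recover, ps (T1/2 = 129.5 ps, tau = T1/2 / ln 2)
sigma = 80.0    # assumed prompt time resolution, ps

# each delayed event = prompt timing jitter + exponential decay delay
prompt = rng.normal(0.0, sigma, 200000)
delayed = prompt + rng.exponential(tau, prompt.size)

# centroid shift: the delayed spectrum's centroid sits one mean life
# after the prompt spectrum's centroid, independent of the resolution
tau_est = delayed.mean() - prompt.mean()
print(round(tau_est, 1))
```

    The shift recovers tau without ever fitting the spectrum shape, which is why centroid methods work well when the lifetime is much shorter than the instrumental time resolution.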

  10. DESIGN OF DYADIC-INTEGER-COEFFICIENTS BASED BI-ORTHOGONAL WAVELET FILTERS FOR IMAGE SUPER-RESOLUTION USING SUB-PIXEL IMAGE REGISTRATION

    Directory of Open Access Journals (Sweden)

    P.B. Chopade

    2014-05-01

    Full Text Available This paper presents an image super-resolution scheme based on sub-pixel image registration using a specific class of dyadic-integer-coefficient bi-orthogonal wavelet filters derived from the construction of a half-band polynomial. First, the integer-coefficient half-band polynomial is designed by the splitting approach. Next, this half-band polynomial is factorized and assigned a specific number of vanishing moments and roots to obtain the dyadic-integer-coefficient low-pass analysis and synthesis filters. The potential of these dyadic-integer-coefficient wavelet filters is explored in the field of image super-resolution using sub-pixel image registration. Two low-resolution frames are registered at a specific shift from one another to restore the resolution lost by the CCD array of the camera. The discrete wavelet transform (DWT) obtained from the designed coefficients is applied on these two low-resolution images to obtain the high resolution image. The developed approach is validated by comparing the quality metrics with existing filter banks.

  11. A double inequality for bounding Toader mean by the centroidal mean

    Indian Academy of Sciences (India)

    Annual Meetings · Mid Year Meetings · Discussion Meetings · Public Lectures · Lecture Workshops · Refresher Courses · Symposia · Live Streaming. Home; Journals; Proceedings – Mathematical Sciences; Volume 124; Issue 4. A double inequality for bounding Toader mean by the centroidal mean. Yun Hua Feng Qi.

  12. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    Science.gov (United States)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, we increased the precision of the traditional calibration to 10^-5 rad root mean square. The precision of the RHT was increased by approximately 100 nm.

  13. Ce3+ 5d-centroid shift and vacuum referred 4f-electron binding energies of all lanthanide impurities in 150 different compounds

    International Nuclear Information System (INIS)

    Dorenbos, Pieter

    2013-01-01

    A review of the wavelengths of all five 4f–5d transitions for Ce 3+ in about 150 different inorganic compounds (fluorides, chlorides, bromides, iodides, oxides, sulfides, selenides, nitrides) is presented. It provides data on the centroid shift and the crystal field splitting of the 5d-configuration, which are then used to estimate the Eu 2+ inter-4f-electron Coulomb repulsion energy U(6,A) in compound A. The four semi-empirical models (the redshift model, the centroid shift model, the charge transfer model, and the chemical shift model) on lanthanide levels that were developed over the past 12 years are briefly reviewed. It will be demonstrated how those models, together with the collected data of this work and elsewhere, can be united to construct schemes that contain the binding energy of electrons in the 4f and 5d states for each divalent and each trivalent lanthanide ion relative to the vacuum energy. As an example, the vacuum referred binding energy schemes for LaF 3 and La 2 O 3 will be constructed. - Highlights: ► A compilation of all five Ce 3+ 4f–5d energies in 150 inorganic compounds is presented. ► The relationship between the 5d centroid shift and host cation electronegativity is demonstrated. ► The electronic structure scheme of the lanthanides in La 2 O 3 and LaF 3 is presented.

  14. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Li, Xuxu; Li, Xinyang; wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
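
    A minimal sketch of correlation-based centroiding with the SDF criterion follows. Exhaustive search is shown for clarity; the fast searches named above (TSS, TDL, CS, OS) visit far fewer candidate offsets but converge to the same minimum for a clean spot. The Gaussian spot parameters are illustrative choices, not values from the paper:

```python
import numpy as np

def gaussian_spot(n, cx, cy, s=1.5):
    """Synthetic Shack-Hartmann subaperture spot (Gaussian model)."""
    y, x = np.mgrid[0:n, 0:n]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * s * s))

img = gaussian_spot(33, 20.0, 13.0)   # frame containing the spot
tmpl = gaussian_spot(9, 4.0, 4.0)     # reference template, centered at (4, 4)

# exhaustive square-difference (SDF) matching over all template offsets
best, best_sdf = None, np.inf
for oy in range(33 - 9 + 1):
    for ox in range(33 - 9 + 1):
        sdf = ((img[oy:oy + 9, ox:ox + 9] - tmpl) ** 2).sum()
        if sdf < best_sdf:
            best, best_sdf = (ox + 4, oy + 4), sdf
print(best)
```

    Because the template is matched against the whole frame rather than a fixed subaperture window, the spot may wander beyond its nominal cell without being lost, which is the dynamic-range advantage the abstract describes.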

  15. Feature selection and nearest centroid classification for protein mass spectrometry

    Directory of Open Access Journals (Sweden)

    Levner Ilya

    2005-03-01

    Full Text Available Abstract Background The use of mass spectrometry as a proteomics tool is poised to revolutionize early disease diagnosis and biomarker identification. Unfortunately, before standard supervised classification algorithms can be employed, the "curse of dimensionality" needs to be solved. Due to the sheer amount of information contained within the mass spectra, most standard machine learning techniques cannot be directly applied. Instead, feature selection techniques are used to first reduce the dimensionality of the input space and thus enable the subsequent use of classification algorithms. This paper examines feature selection techniques for proteomic mass spectrometry. Results This study examines the performance of the nearest centroid classifier coupled with the following feature selection algorithms: the Student t-test, the Kolmogorov-Smirnov test, and the P-test, which are univariate statistics used for filter-based feature ranking. From the wrapper approaches we tested sequential forward selection and a modified version of sequential backward selection. Embedded approaches included shrunken nearest centroid and a novel version of boosting-based feature selection we developed. In addition, we tested several dimensionality reduction approaches, namely principal component analysis and principal component analysis coupled with linear discriminant analysis. To fairly assess each algorithm, evaluation was done using stratified cross validation with an internal leave-one-out cross-validation loop for automated feature selection. Comprehensive experiments, conducted on five popular cancer data sets, revealed that the less advocated sequential forward selection and boosted feature selection algorithms produce the most consistent results across all data sets. In contrast, the state-of-the-art performance reported on isolated data sets for several of the studied algorithms does not hold across all data sets. Conclusion This study tested a number of popular feature

  16. Segmentation of arterial vessel wall motion to sub-pixel resolution using M-mode ultrasound.

    Science.gov (United States)

    Fancourt, Craig; Azer, Karim; Ramcharan, Sharmilee L; Bunzel, Michelle; Cambell, Barry R; Sachs, Jeffrey R; Walker, Matthew

    2008-01-01

    We describe a method for segmenting arterial vessel wall motion to sub-pixel resolution, using the returns from M-mode ultrasound. The technique involves measuring the spatial offset between all pairs of scans from their cross-correlation, converting the spatial offsets to relative wall motion through a global optimization, and finally translating from relative to absolute wall motion by interpolation over the M-mode image. The resulting detailed wall distension waveform has the potential to enhance existing vascular biomarkers, such as strain and compliance, as well as enable new ones.
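
    The core step, measuring a sub-sample spatial offset between two scan lines from their cross-correlation, can be sketched as follows. The Gaussian echo profiles and the 2.3-sample shift are synthetic stand-ins for successive M-mode lines, and 3-point parabolic peak interpolation is one common refinement choice, not necessarily the exact estimator used in the paper:

```python
import numpy as np

# Two hypothetical echo profiles; the second is the first shifted
# by 2.3 samples (synthetic stand-ins for successive M-mode lines).
x = np.arange(256.0)
true_shift = 2.3
a = np.exp(-((x - 100.0) ** 2) / 50.0)
b = np.exp(-((x - 100.0 - true_shift) ** 2) / 50.0)

# integer lag at the cross-correlation peak
xc = np.correlate(b - b.mean(), a - a.mean(), mode="full")
k = int(xc.argmax())
lag = k - (len(a) - 1)

# 3-point parabolic interpolation refines the peak to sub-sample precision
y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
shift = lag + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
print(round(shift, 2))
```

    Repeating this for all scan pairs yields the relative-offset matrix that the global optimization then reconciles into a single wall-motion trajectory.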

  17. Relation between medium fluid temperature and centroid subchannel temperatures of a nuclear fuel bundle mock-up

    International Nuclear Information System (INIS)

    Carvalho Tofani, P. de.

    1986-01-01

    The subchannel method used in nuclear fuel bundle thermal-hydraulic analysis lies in the statement that subchannel fluid temperatures are taken at mixed mean values. However, the development of mixing correlations and code assessment procedures are, sometimes in the literature, based upon the assumption of identity between lumped and local (subchannel centroid) temperature values. The present paper is concerned with the presentation of an approach for correlating lumped to centroid subchannel temperatures, based upon models previously formulated by the author, applied to a nine heated tube bundle experimental data set. (Author) [pt

  19. Centroid and Theoretical Rotation: Justification for Their Use in Q Methodology Research

    Science.gov (United States)

    Ramlo, Sue

    2016-01-01

    This manuscript's purpose is to introduce Q as a methodology before providing clarification about the preferred factor analytical choices of centroid and theoretical (hand) rotation. Stephenson, the creator of Q, designated that only these choices allowed for scientific exploration of subjectivity while not violating assumptions associated with…

  20. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.
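
    The basic mechanism behind such heterogeneous ensembles, that averaging models whose errors partially cancel can beat every member, is easy to demonstrate. The toy target and the two deliberately opposite-biased predictors below are our own stand-ins for the trained learners (Cubist, Random Forest, etc.), not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 300)
y = x ** 2    # stand-in target, e.g. sub-pixel imperviousness fraction

# two deliberately different weak models whose errors partially cancel
pred_a = 0.9 * x ** 2 + 0.05
pred_b = 1.1 * x ** 2 - 0.05

def rmse(pred):
    return float(np.sqrt(((pred - y) ** 2).mean()))

ensemble = 0.5 * (pred_a + pred_b)   # heterogeneous ensemble by averaging
print(rmse(pred_a), rmse(pred_b), rmse(ensemble))
```

    Here the averaged prediction outperforms both members; in practice the gain depends on how decorrelated the member errors are, which is why the study combines structurally different regression techniques.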

  1. Anatomy guided automated SPECT renal seed point estimation

    Science.gov (United States)

    Dwivedi, Shekhar; Kumar, Sailendra

    2010-04-01

    Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting an ROI from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, though the challenge involved is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating the seed point location of both kidneys using the anatomical location of the kidneys and bladder. The motivation for this work is based on the premise that the anatomical location of the bladder relative to the kidneys will not differ much. A model is generated based on manual segmentation of the bladder and both kidneys on 10 patient datasets (including sum and max images). The centroid is estimated for the manually segmented bladder and kidneys. The relatively easier bladder segmentation is followed by feeding the bladder centroid coordinates into the model to generate seed points for the kidneys. Percentage errors observed in the centroid coordinates of organs, from ground truth to the values estimated with our approach, are acceptable: approximately 1%, 6% and 2% in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.

  2. FINGERPRINT MATCHING BASED ON PORE CENTROIDS

    Directory of Open Access Journals (Sweden)

    S. Malathi

    2011-05-01

    Full Text Available In recent years there has been exponential growth in the use of biometrics for user authentication applications. Automated Fingerprint Identification systems have become a popular tool in many security and law enforcement applications. Most of these systems rely on minutiae (ridge ending and bifurcation) features. With the advancement in sensor technology, high resolution fingerprint images (1000 dpi) provide micro level features (pores) that have proven to be useful for identification. In this paper, we propose a new strategy for fingerprint matching based on pores by reliably extracting the pore features. The extraction of pores is done by the Marker Controlled Watershed segmentation method and the centroids of each pore are considered as feature vectors for matching of two fingerprint images. Experimental results show that the proposed method has better performance with lower false rates and higher accuracy.
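
    Turning each segmented pore into a centroid feature vector is the simple step after segmentation. A toy sketch follows, with a pre-computed label map standing in for the Marker Controlled Watershed output (the pore positions are invented):

```python
import numpy as np

# Toy label map standing in for the watershed output:
# 0 = background, 1 and 2 = two segmented pores (positions invented).
labels = np.zeros((8, 8), dtype=int)
labels[1:3, 1:3] = 1
labels[5:7, 4:7] = 2

# the centroid of each labeled region becomes one feature vector
centroids = []
for lab in (1, 2):
    ys, xs = np.nonzero(labels == lab)
    centroids.append((float(ys.mean()), float(xs.mean())))
print(centroids)
```

    Matching two fingerprints then reduces to comparing two such centroid sets, e.g. by nearest-neighbour pairing after alignment.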

  3. Automatic localization of the left ventricular blood pool centroid in short axis cardiac cine MR images.

    Science.gov (United States)

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; Abdul Aziz, Yang Faridah; Chee, Kok Han; McLaughlin, Robert A

    2018-06-01

    In this paper, we develop and validate an open source, fully automatic algorithm to localize the left ventricular (LV) blood pool centroid in short axis cardiac cine MR images, enabling follow-on automated LV segmentation algorithms. The algorithm comprises four steps: (i) quantify motion to determine an initial region of interest surrounding the heart, (ii) identify potential 2D objects of interest using an intensity-based segmentation, (iii) assess contraction/expansion, circularity, and proximity to lung tissue to score all objects of interest in terms of their likelihood of constituting part of the LV, and (iv) aggregate the objects into connected groups and construct the final LV blood pool volume and centroid. This algorithm was tested against 1140 datasets from the Kaggle Second Annual Data Science Bowl, as well as 45 datasets from the STACOM 2009 Cardiac MR Left Ventricle Segmentation Challenge. Correct LV localization was confirmed in 97.3% of the datasets. The mean absolute error between the gold standard and localization centroids was 2.8 to 4.7 mm, or 12 to 22% of the average endocardial radius. Graphical abstract Fully automated localization of the left ventricular blood pool in short axis cardiac cine MR images.

  4. Lifetime measurements in {sup 170}Yb using the generalized centroid difference method

    Energy Technology Data Exchange (ETDEWEB)

    Karayonchev, Vasil; Regis, Jean-Marc; Jolie, Jan; Dannhoff, Moritz; Saed-Samii, Nima; Blazhev, Andrey [Institute of Nuclear Physics, University of Cologne, Cologne (Germany)

    2016-07-01

    An experiment using the electronic γ-γ "fast-timing" technique was performed at the 10 MV Tandem Van-De-Graaff accelerator of the Institute for Nuclear Physics, Cologne in order to measure lifetimes of the yrast states in {sup 170}Yb. The lifetime of the first 2{sup +} state was determined using the slope method, i.e. by fitting an exponential decay to the "slope" seen in the energy-gated time-difference spectra. The value of τ=2.201(57) ns is in good agreement with the lifetimes measured using other techniques. The lifetimes of the first 4{sup +} and the 6{sup +} states are determined for the first time. They are in the ps range and were measured using the generalized centroid difference method, an extension of the well-known centroid-shift method, developed for fast-timing arrays. The derived reduced transition probability B(E2) values are compared with calculations done using the confined beta soft model and show good agreement within the experimental uncertainties.

  5. The strengths and limitations of effective centroid force models explored by studying isotopic effects in liquid water

    Science.gov (United States)

    Yuan, Ying; Li, Jicun; Li, Xin-Zheng; Wang, Feng

    2018-05-01

    The development of effective centroid potentials (ECPs) is explored with both the constrained-centroid and quasi-adiabatic force matching using liquid water as a test system. A trajectory integrated with the ECP is free of statistical noises that would be introduced when the centroid potential is approximated on the fly with a finite number of beads. With the reduced cost of ECP, challenging experimental properties can be studied in the spirit of centroid molecular dynamics. The experimental number density of H2O is 0.38% higher than that of D2O. With the ECP, the H2O number density is predicted to be 0.42% higher, when the dispersion term is not refit. After correction of finite size effects, the diffusion constant of H2O is found to be 21% higher than that of D2O, which is in good agreement with the 29.9% higher diffusivity for H2O observed experimentally. Although the ECP is also able to capture the redshifts of both the OH and OD stretching modes in liquid water, there are a number of properties that a classical simulation with the ECP will not be able to recover. For example, the heat capacities of H2O and D2O are predicted to be almost identical and higher than the experimental values. Such a failure is simply a result of not properly treating quantized vibrational energy levels when the trajectory is propagated with classical mechanics. Several limitations of the ECP based approach without bead population reconstruction are discussed.

  6. Modification of backgammon shape cathode and graded charge division readout method for a novel triple charge division centroid finding method

    International Nuclear Information System (INIS)

    Javanmardi, F.; Matoba, M.; Sakae, T.

    1996-01-01

    A Triple Charge Division (TCD) centroid finding method that uses a modified pattern of the Backgammon Shape Cathode (MBSC) is introduced for medium-length position sensitive detectors with an optimum number of cathode segments. The MBSC pattern has three separated areas and uses sawtooth-like insulator gaps to separate them. The side areas of the MBSC pattern are separated by a central common area. The size of the central area is twice the size of either side. Whereas the central area is the widest of the three, the side areas have the main role in position sensing. With the same resolution and linearity, the active region of the original Backgammon pattern is doubled by using the MBSC pattern, and for the same length, the linearity of TCD centroid finding is much better than that of the Backgammon charge division readout method. The linearity prediction of TCD centroid finding and experimental results led us to an optimum truncation of the apices of the MBSC pattern in the central area. TCD centroid finding has a special readout method, since charges must be collected from two segments on each side and from three segments in the central area of the MBSC pattern. The so-called Graded Charge Division (GCD) is this special readout method for TCD. The GCD readout is a combination of charge division readout and sequence grading of serial segments. Position sensing with TCD centroid finding and GCD readout was done with two sizes of MBSC patterns (200 mm and 80 mm), and a spatial resolution of about 1% of the detector length is achieved

  7. Hough transform used on the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Chia, Chou-Min; Huang, Kuang-Yuh; Chang, Elmer

    2016-01-01

    An approach to the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor (SHWS) is presented. The SHWS has a common problem, in that while measuring high-order wavefront distortion, the spots may exceed each of the subapertures, which are used to restrict the displacement of spots. This artificial restriction may limit the dynamic range of the SHWS. When using the SHWS to measure adaptive optics or aspheric lenses, the accuracy of the traditional spot-centroiding algorithm may be uncertain because the spots leave or cross the confined area of the subapertures. The proposed algorithm combines the Hough transform with an artificial neural network, which requires no confined subapertures, to increase the dynamic range of the SHWS. This algorithm is then explored in comprehensive simulations and the results are compared with those of the existing algorithm.

  8. CentroidFold: a web server for RNA secondary structure prediction

    OpenAIRE

    Sato, Kengo; Hamada, Michiaki; Asai, Kiyoshi; Mituyama, Toutai

    2009-01-01

    The CentroidFold web server (http://www.ncrna.org/centroidfold/) is a web application for RNA secondary structure prediction powered by one of the most accurate prediction engines. The server accepts two kinds of sequence data: a single RNA sequence and a multiple alignment of RNA sequences. It responds with a prediction result shown as a popular base-pair notation and a graph representation. A PDF version of the graph representation is also available. For a multiple alignment sequence, the ser...

  9. A Proposal to Speed up the Computation of the Centroid of an Interval Type-2 Fuzzy Set

    Directory of Open Access Journals (Sweden)

    Carlos E. Celemin

    2013-01-01

    Full Text Available This paper presents two new algorithms that speed up the centroid computation of an interval type-2 fuzzy set. The algorithms include precomputation of the main operations and initialization based on the concept of uncertainty bounds. Simulations over different kinds of footprints of uncertainty reveal that the new algorithms achieve computation time reductions with respect to the Enhanced-Karnik algorithm, ranging from 40 to 70%. The results suggest that the initialization used in the new algorithms effectively reduces the number of iterations to compute the extreme points of the interval centroid while precomputation reduces the computational cost of each iteration.
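
    For context, the quantity being sped up is the interval [cl, cr] obtained by switching between the upper and lower membership functions at some point in the domain. A brute-force scan over all switch points (on our own small discretized footprint of uncertainty, not the paper's algorithms) makes that target concrete; Karnik-Mendel-type iterations find the same extrema with far fewer evaluations:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 11)                 # discretized domain
umf = np.exp(-((x - 5.0) ** 2) / 8.0)          # upper membership function
lmf = 0.6 * umf                                # lower membership function

def centroid_bounds(x, lmf, umf):
    """Brute force over switch points; KM/EKM iterations converge to the
    same left/right extrema without scanning every candidate."""
    idx = np.arange(len(x))
    cls, crs = [], []
    for k in idx:
        theta_l = np.where(idx <= k, umf, lmf)  # pushes weight to the left
        theta_r = np.where(idx <= k, lmf, umf)  # pushes weight to the right
        cls.append((x * theta_l).sum() / theta_l.sum())
        crs.append((x * theta_r).sum() / theta_r.sum())
    return min(cls), max(crs)

cl, cr = centroid_bounds(x, lmf, umf)
print(round(cl, 3), round(cr, 3))
```

    The precomputation and uncertainty-bound initialization described in the abstract shrink how much of this switch-point search each iteration has to redo.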

  10. Comparison of pure and 'Latinized' centroidal Voronoi tessellation against various other statistical sampling methods

    International Nuclear Information System (INIS)

    Romero, Vicente J.; Burkardt, John V.; Gunzburger, Max D.; Peterson, Janet S.

    2006-01-01

    A recently developed centroidal Voronoi tessellation (CVT) sampling method is investigated here to assess its suitability for use in statistical sampling applications. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. On several 2-D test problems CVT has recently been found to provide exceedingly effective and efficient point distributions for response surface generation. Additionally, for statistical function integration and estimation of response statistics associated with uniformly distributed random-variable inputs (uncorrelated), CVT has been found in initial investigations to provide superior point sets when compared against Latin-hypercube and simple-random Monte Carlo methods and Halton and Hammersley quasi-random sequence methods. In this paper, the performance of all these sampling methods and a new variant ('Latinized' CVT) are further compared for non-uniform input distributions. Specifically, given uncorrelated normal inputs in a 2-D test problem, statistical sampling efficiencies are compared for resolving various statistics of response: mean, variance, and exceedance probabilities.
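
The CVT construction itself can be sketched with a Monte Carlo Lloyd iteration: scatter random probe points, bin each into the Voronoi cell of its nearest generator, and move every generator to the centroid of its cell's probes. The sketch below (illustrative; the record uses more efficient statistical methods than this naive version) produces a CVT-like point set in the unit square:

```python
import random

def cvt_sample(n_points, iters=50, n_probe=2000, seed=0):
    """CVT-like sampling of the unit square via Monte Carlo Lloyd steps.

    Each iteration bins random probes by nearest generator (implicit
    Voronoi cells, no diagram is ever built) and replaces each
    generator with the centroid of its probes.
    """
    rng = random.Random(seed)
    gens = [(rng.random(), rng.random()) for _ in range(n_points)]
    for _ in range(iters):
        sums = [[0.0, 0.0, 0] for _ in gens]
        for _ in range(n_probe):
            p = (rng.random(), rng.random())
            j = min(range(len(gens)),
                    key=lambda i: (gens[i][0] - p[0]) ** 2
                                + (gens[i][1] - p[1]) ** 2)
            sums[j][0] += p[0]; sums[j][1] += p[1]; sums[j][2] += 1
        gens = [(sx / c, sy / c) if c else g
                for (sx, sy, c), g in zip(sums, gens)]
    return gens
```

Avoiding the explicit Voronoi diagram in this way is what makes CVT point placement cheap enough to use as a general-purpose sampling method.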

  11. Communications: On artificial frequency shifts in infrared spectra obtained from centroid molecular dynamics: Quantum liquid water

    Science.gov (United States)

    Ivanov, Sergei D.; Witt, Alexander; Shiga, Motoyuki; Marx, Dominik

    2010-01-01

    Centroid molecular dynamics (CMD) is a popular method to extract approximate quantum dynamics from path integral simulations. Very recently we have shown that CMD gas phase infrared spectra exhibit significant artificial redshifts of stretching peaks, due to the so-called "curvature problem" imprinted by the effective centroid potential. Here we provide evidence that for condensed phases, and in particular for liquid water, CMD produces pronounced artificial redshifts for high-frequency vibrations such as the OH stretching band. This peculiar behavior intrinsic to the CMD method explains part of the unexpectedly large quantum redshifts of the stretching band of liquid water compared to classical frequencies, which is improved after applying a simple and rough "harmonic curvature correction."

  12. Evaluation of the Use of Sub-Pixel Offset Tracking Techniques to Monitor Landslides in Densely Vegetated Steeply Sloped Areas

    Directory of Open Access Journals (Sweden)

    Luyi Sun

    2016-08-01

    Sub-Pixel Offset Tracking (sPOT) is applied to derive high-resolution, centimetre-level landslide rates in the Three Gorges Region of China using TerraSAR-X High-resolution Spotlight (TSX HS) space-borne SAR images. These results contrast sharply with previous use of conventional differential Interferometric Synthetic Aperture Radar (DInSAR) techniques in areas with steep slopes, dense vegetation and large variability in water vapour, which indicated only around 12% phase-coherent coverage. By contrast, sPOT is capable of measuring two-dimensional deformation of large gradient over steeply sloped areas covered in dense vegetation. Previous applications of sPOT in this region relied on corner reflectors (CRs), i.e. high-coherence features, to obtain reliable measurements. However, CRs are expensive and difficult to install, especially in remote areas, and other potential high-coherence features comparable with CRs are very few and lie outside the landslide boundary. The resultant sub-pixel-level deformation field can be statistically analysed to yield multi-modal maps of deformation regions. This approach is shown to have a significant impact when compared with previous offset-tracking measurements of landslide deformation, as it is demonstrated that sPOT can be applied even in densely vegetated terrain without relying on high-contrast surface features or requiring any de-noising process.

  13. MOBIUS-STRIP-LIKE COLUMNAR FUNCTIONAL CONNECTIONS ARE REVEALED IN SOMATO-SENSORY RECEPTIVE FIELD CENTROIDS.

    Directory of Open Access Journals (Sweden)

    James Joseph Wright

    2014-10-01

    Receptive fields of neurons in the forelimb region of areas 3b and 1 of primary somatosensory cortex, in cats and monkeys, were mapped using extracellular recordings obtained sequentially from nearly radial penetrations. Locations of the field centroids indicated the presence of a functional system in which cortical homotypic representations of the limb surfaces are entwined in three-dimensional Mobius-strip-like patterns of synaptic connections. Boundaries of somatosensory receptive fields in nested groups irregularly overlie the centroid order, and are interpreted as arising from the superposition of learned connections upon the embryonic order. Since the theory of embryonic synaptic self-organisation used to model these results was devised and earlier used to explain findings in primary visual cortex, the present findings suggest the theory may be of general application throughout the cortex, and may reveal a modular functional synaptic system which, only in some parts of the cortex and in some species, is manifest as anatomical ordering into columns.

  14. Optimization of soy isoflavone extraction with different solvents using the simplex-centroid mixture design.

    Science.gov (United States)

    Yoshiara, Luciane Yuri; Madeira, Tiago Bervelieri; Delaroza, Fernanda; da Silva, Josemeyre Bonifácio; Ida, Elza Iouko

    2012-12-01

    The objective of this study was to optimize the extraction of different isoflavone forms (glycosidic, malonyl-glycosidic, aglycone and total) from defatted cotyledon soy flour using the simplex-centroid experimental design with four solvents of varying polarity (water, acetone, ethanol and acetonitrile). The obtained extracts were then analysed by high-performance liquid chromatography. The profile of the different soy isoflavone forms varied with the extraction solvent. By varying the solvent or solvent mixture, the extraction of the different isoflavones was optimized using the simplex-centroid mixture design. The special cubic model best fitted the extraction of soy isoflavones with the four solvents and their combinations. For glycosidic isoflavones, the polar ternary mixture of water, acetone and acetonitrile achieved the best extraction; malonyl-glycosidic forms were better extracted with mixtures of water, acetone and ethanol. Aglycone isoflavones were best extracted with a water and acetone mixture, and for total isoflavones the best solvent was the ternary mixture of water, acetone and ethanol.

  15. 4fn-15d centroid shift in lanthanides and relation with anion polarizability, covalency, and cation electronegativity

    International Nuclear Information System (INIS)

    Dorenbos, P.; Andriessen, J.; Eijk, C.W.E. van

    2003-01-01

    Data collected on the centroid shift of the 5d-configuration of Ce3+ in oxide and fluoride compounds were recently analyzed with a model involving the correlated motion between the 5d-electron and the ligand electrons. The correlation effects are proportional to the polarizability of the anion ligands and, like covalency, lead to a lowering of the 5d-orbital energies. By means of ab initio Hartree-Fock-LCAO calculations including configuration interaction, the contributions from covalency and correlated motion to the centroid shift are determined separately for Ce3+ in various compounds. It will be shown that in fluoride compounds covalency provides an insignificant contribution, while in oxides polarizability appears to be of comparable importance to covalency

  16. Oscillations of centroid position and surface area of soccer teams in small-sided games

    NARCIS (Netherlands)

    Frencken, Wouter; Lemmink, Koen; Delleman, Nico; Visscher, Chris

    2011-01-01

    There is a need for a collective variable that captures the dynamics of team sports like soccer at match level. The centroid positions and surface areas of two soccer teams potentially describe the coordinated flow of attacking and defending in small-sided soccer games at team level. The aim of the

  17. Centroid Localization of Uncooperative Nodes in Wireless Networks Using a Relative Span Weighting Method

    Directory of Open Access Journals (Sweden)

    Christine Laurendeau

    2010-01-01

    Increasingly ubiquitous wireless technologies require novel localization techniques to pinpoint the position of an uncooperative node, whether the target is a malicious device engaging in a security exploit or a low-battery handset in the middle of a critical emergency. Such scenarios necessitate that a radio signal source be localized by other network nodes efficiently, using minimal information. We propose two new algorithms for estimating the position of an uncooperative transmitter, based on the received signal strength (RSS) of a single target message at a set of receivers whose coordinates are known. As an extension to the concept of centroid localization, our mechanisms weigh each receiver's coordinates based on the message's relative RSS at that receiver, with respect to the span of RSS values over all receivers. The weights may decrease from the highest RSS receiver either linearly or exponentially. Our simulation results demonstrate that for all but the most sparsely populated wireless networks, our exponentially weighted mechanism localizes a target node within the regulations stipulated for emergency services location accuracy.
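
The weighting idea can be sketched directly: position each receiver's RSS within the span [min, max] over all receivers, use that as a weight (linearly or with an exponential falloff), and take the weighted mean of the receiver coordinates. This is one plausible reading of the scheme, not the paper's exact formulas; `alpha` is a hypothetical falloff parameter:

```python
import math

def weighted_centroid(receivers, rss, exponential=False, alpha=2.0):
    """Weighted-centroid localization from RSS at known receivers.

    receivers: list of (x, y); rss: matching RSS values in dBm.
    Each weight grows with the receiver's position within the RSS
    span; `exponential=True` decays the weight exponentially away
    from the strongest receiver (alpha is an assumed parameter).
    """
    lo, hi = min(rss), max(rss)
    span = (hi - lo) or 1.0
    ws = [(r - lo) / span for r in rss]
    if exponential:
        ws = [math.exp(alpha * (w - 1.0)) for w in ws]
    total = sum(ws)
    if total == 0.0:          # all receivers heard the same RSS
        ws, total = [1.0] * len(ws), float(len(ws))
    x = sum(w * px for w, (px, _) in zip(ws, receivers)) / total
    y = sum(w * py for w, (_, py) in zip(ws, receivers)) / total
    return x, y
```

With weights removed (all equal), the estimate degenerates to the plain centroid of the receiver positions, which is the baseline the record extends.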

  18. An Investigation on the Use of Different Centroiding Algorithms and Star Catalogs in Astro-Geodetic Observations

    Science.gov (United States)

    Basoglu, Burak; Halicioglu, Kerem; Albayrak, Muge; Ulug, Rasit; Tevfik Ozludemir, M.; Deniz, Rasim

    2017-04-01

    In the last decade, the importance of high-precision geoid determination at local or national level has been pointed out by the Turkish National Geodesy Commission. The Commission has also put the objective of modernizing the national height system of Turkey on the agenda. Meanwhile, several projects have been realized in recent years. In Istanbul, a GNSS/Levelling geoid was defined in 2005 for the metropolitan area of the city with an accuracy of ±3.5cm. In order to achieve a better accuracy in this area, the "Local Geoid Determination with Integration of GNSS/Levelling and Astro-Geodetic Data" project has been conducted at Istanbul Technical University and Bogazici University KOERI since January 2016. The project is funded by The Scientific and Technological Research Council of Turkey. Within the scope of the project, modernization studies of the Digital Zenith Camera System are being carried out in terms of hardware components and software development. The subjects emphasized are the star catalogues and the centroiding algorithms used to identify the stars in the zenithal star field. During the test observations of the Digital Zenith Camera System performed between 2013-2016, final results were calculated using the PSF method for star centroiding, and the second USNO CCD Astrograph Catalogue (UCAC2) for the reference star positions. This study aims to investigate the position accuracy of the star images by comparing different centroiding algorithms and available star catalogues used in astro-geodetic observations conducted with the digital zenith camera system.

  19. Two tree-formation methods for fast pattern search using nearest-neighbour and nearest-centroid matching

    NARCIS (Netherlands)

    Schomaker, Lambertus; Mangalagiu, D.; Vuurpijl, Louis; Weinfeld, M.

    2000-01-01

    This paper describes tree-based classification of character images, comparing two methods of tree formation and two methods of matching: nearest neighbor and nearest centroid. The first method, Preprocess Using Relative Distances (PURD), is a tree-based reorganization of a flat list of patterns,

  20. Alteration of the centroid method to evaluate genotypic adaptability

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2009-03-01

    The objective of this work was to alter the centroid method of evaluating the phenotypic adaptability and stability of genotypes, to give it greater biological meaning and to improve quantitative and qualitative aspects of its analysis. The alteration consisted of adding three more ideotypes, defined according to the mean values of the genotypes in the environments. Data came from an experiment on dry matter production of 92 alfalfa (Medicago sativa) genotypes, carried out in randomized blocks with two replicates. The genotypes were subjected to 20 cuts between November 2004 and June 2006, each cut being considered an environment. The inclusion of ideotypes with greater biological meaning (mean values in the environments) resulted in a graphical dispersion in the shape of an arrow pointing to the right, with the most productive genotypes near the tip of the arrow. With the alteration, only five genotypes were classified in the same classes as in the original centroid method. The arrow-shaped figure allows a direct comparison of the genotypes through the formation of a productivity gradient. The altered method retains the ease of interpreting the results for genotype recommendation present in the original method and does not allow ambiguous interpretation of the results.

  1. Two methods to estimate the position resolution for straw chambers with strip readout

    International Nuclear Information System (INIS)

    Golutvin, I.A.; Movchan, S.A.; Peshekhonov, V.D.; Preda, T.

    1992-01-01

    The centroid and charge-ratio methods are presented for estimating the position resolution of straw chambers with strip readout. For straw chambers of 10 mm diameter, the highest position resolution was obtained with a strip pitch of 5 mm. With the centroid method and a perpendicular X-ray beam, the position resolution was ≅120 μm for a signal-to-noise ratio of 60-65. The charge-ratio method demonstrated ≅10% better position resolution at the edges of the strip. 6 refs.; 5 figs

  2. THE ENERGY DEPENDENCE OF THE CENTROID FREQUENCY AND PHASE LAG OF THE QUASI-PERIODIC OSCILLATIONS IN GRS 1915+105

    International Nuclear Information System (INIS)

    Qu, J. L.; Lu, F. J.; Lu, Y.; Song, L. M.; Zhang, S.; Wang, J. M.; Ding, G. Q.

    2010-01-01

    We present a study of the centroid frequencies and phase lags of quasi-periodic oscillations (QPOs) as functions of photon energy for GRS 1915+105. It is found that the centroid frequencies of the 0.5-10 Hz QPOs and their phase lags are both energy dependent, and there exists an anticorrelation between the QPO frequency and phase lag. These new results challenge the popular QPO models, because none of them can fully explain the observed properties. We suggest that the observed QPO phase lags are partially due to the variation of the QPO frequency with energy, especially for those with frequency higher than 3.5 Hz.

  3. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy

    Science.gov (United States)

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously at multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366

  4. User Manual and Supporting Information for Library of Codes for Centroidal Voronoi Point Placement and Associated Zeroth, First, and Second Moment Determination; TOPICAL

    International Nuclear Information System (INIS)

    BURKARDT, JOHN; GUNZBURGER, MAX; PETERSON, JANET; BRANNON, REBECCA M.

    2002-01-01

    The theory, numerical algorithm, and user documentation are provided for a new "Centroidal Voronoi Tessellation (CVT)" method of filling a region of space (2D or 3D) with particles at any desired particle density. "Clumping" is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called "mesh-free" method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported

  5. Shack-Hartmann centroid detection method based on high dynamic range imaging and normalization techniques

    International Nuclear Information System (INIS)

    Vargas, Javier; Gonzalez-Fernandez, Luis; Quiroga, Juan Antonio; Belenguer, Tomas

    2010-01-01

    In the optical quality measuring process of an optical system, including diamond-turning components, the use of a laser light source can produce an undesirable speckle effect in a Shack-Hartmann (SH) CCD sensor. This speckle noise can deteriorate the precision and accuracy of the wavefront sensor measurement. Here we present an SH centroid detection method founded on computer-based techniques and capable of measurement in the presence of strong speckle noise. The method extends the dynamic range imaging capabilities of the SH sensor through the use of a set of different CCD integration times. The resultant extended-range spot map is normalized to accurately obtain the spot centroids. The proposed method has been applied to measure the optical quality of the main optical system (MOS) of the mid-infrared instrument telescope simulator. The wavefront at the exit of this optical system is affected by speckle noise when it is illuminated by a laser source, and by air turbulence because it has a long back focal length (3017 mm). Using the proposed technique, the MOS wavefront error was measured and satisfactory results were obtained.

  6. Subpixel Snow-covered Area Including Differentiated Grain Size from AVIRIS Data Over the Sierra Nevada Mountain Range

    Science.gov (United States)

    Hill, R.; Calvin, W. M.; Harpold, A. A.

    2016-12-01

    Mountain snow storage is the dominant source of water for humans and ecosystems in western North America. Consequently, the spatial distribution of snow-covered area is fundamental to hydrological, ecological, and climate models. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were collected along the entire Sierra Nevada mountain range, extending from north of Lake Tahoe to south of Mt. Whitney, during the 2015 and 2016 snow-covered seasons. The AVIRIS dataset used in this experiment consists of 224 contiguous spectral channels with wavelengths ranging from 400 to 2500 nanometers at a 15-meter spatial pixel size. Data from the Sierras were acquired on four days: 2/24/15 during a very low snow year, 3/24/16 near maximum snow accumulation, and 5/12/16 and 5/18/16 during snow ablation and snow loss. Previous retrievals of subpixel snow-covered area in alpine regions used multiple snow endmembers due to the sensitivity of snow spectral reflectance to grain size. We will present a model that analyzes multiple endmembers of varying snow grain size, vegetation, rock, and soil in segmented regions along the Sierra Nevada to determine snow-cover spatial extent, snow sub-pixel fraction, and approximate grain size or melt state. The root mean squared error will provide a spectrum-wide assessment of the mixture model's goodness-of-fit. The analysis will compare snow-covered area and snow-cover depletion within the 2016 season, and annual variation relative to 2015. Field data were also acquired on three days concurrent with the 2016 flights in the Sagehen Experimental Forest and will support ground validation of the airborne data set.

  7. The centroid shift of the 5d levels of Ce3+ with respect to the 4f levels in ionic crystals, a theoretical investigation

    International Nuclear Information System (INIS)

    Andriessen, J.; Dorenbos, P.; Eijk, C.W.E van

    2002-01-01

    The centroid shifts of the 5d level of Ce3+ in BaF2, LaAlO3 and LaCl3 have been calculated using the ionic cluster approach. By applying configuration interaction as an extension of the basic HF-LCAO approach, the dynamical polarization contribution to the centroid shift was calculated. This was found to be successful only if basis sets optimized for polarization of the anions are used

  8. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    Science.gov (United States)

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-03-12

    This paper presents a visual measurement method able to sense 1D rigid-body displacements with very high resolution, large range and a high processing rate. Sub-pixel resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations, leading to high resolution, while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated, which is to be compared with the 168 µm measurement range.
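
The phase calculation at the heart of such methods can be sketched for a single grid: project the intensity profile onto the complex exponential at the grid frequency and convert the resulting phase to a displacement in pixels. The answer is only defined modulo one period, which is precisely the ambiguity the twin grids with slightly different periods are there to remove (this single-grid sketch is an illustration, not the paper's algorithm):

```python
import cmath, math

def grid_phase_position(intensity, period):
    """Sub-pixel grid position from the Fourier phase at the grid
    frequency. `intensity` samples one line of the periodic pattern;
    the returned displacement is in pixels, modulo one period."""
    s = sum(v * cmath.exp(-2j * math.pi * i / period)
            for i, v in enumerate(intensity))
    return (-cmath.phase(s) * period / (2 * math.pi)) % period
```

Because the phase of a single DFT bin averages over the whole line, the estimate's resolution is far below the pixel pitch, which is how nanometre-scale repeatability becomes possible with ordinary imaging optics.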

  9. Color capable sub-pixel resolving optofluidic microscope and its application to blood cell imaging for malaria diagnosis.

    Directory of Open Access Journals (Sweden)

    Seung Ah Lee

    Miniaturization of imaging systems can significantly benefit clinical diagnosis in challenging environments, where access to physicians and good equipment can be limited. The sub-pixel resolving optofluidic microscope (SROFM) offers high-resolution imaging in the form of an on-chip device, with the combination of microfluidics and inexpensive CMOS image sensors. In this work, we report on the implementation of color SROFM prototypes with a demonstrated optical resolution of 0.66 µm at their highest acuity. We applied the prototypes to perform color imaging of red blood cells (RBCs) infected with Plasmodium falciparum, a particularly harmful type of malaria parasite and one of the major causes of death in the developing world.

  10. The mirror symmetric centroid difference method for picosecond lifetime measurements via γ-γ coincidences using very fast LaBr3(Ce) scintillator detectors

    Energy Technology Data Exchange (ETDEWEB)

    Regis, J.-M., E-mail: regis@ikp.uni-koeln.d [Institut fuer Kernphysik, Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany); Pascovici, G.; Jolie, J.; Rudigier, M. [Institut fuer Kernphysik, Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany)

    2010-10-01

    The ultra-fast timing technique was introduced in the 1980s and is capable of measuring picosecond lifetimes of nuclear excited states with about 3 ps accuracy. Very fast scintillator detectors are connected to an electronic timing circuit and detector-vs.-detector time spectra are analyzed by means of the centroid shift method. The very good 3% energy resolution of the nowadays available LaBr3(Ce) scintillator detectors for γ-rays has made possible an extension of the well-established fast timing technique. The energy-dependent fast timing characteristics, or prompt curve, of the LaBr3(Ce) scintillator detector has been measured using a standard 152Eu γ-ray source. For any energy combination in the range of 200 keV […], a centroid shift method providing very attractive features for picosecond lifetime measurements is presented. The mirror symmetric centroid difference method takes advantage of the symmetry obtained when performing γ-γ lifetime measurements using a pair of almost identical very fast scintillator detectors. In particular cases, the use of the mirror symmetric centroid difference method also allows the direct determination of picosecond lifetimes, hence without the need of calibrating the prompt curve.

  11. THE KEPLER PIXEL RESPONSE FUNCTION

    International Nuclear Information System (INIS)

    Bryson, Stephen T.; Haas, Michael R.; Dotson, Jessie L.; Koch, David G.; Borucki, William J.; Tenenbaum, Peter; Jenkins, Jon M.; Chandrasekaran, Hema; Caldwell, Douglas A.; Klaus, Todd; Gilliland, Ronald L.

    2010-01-01

    Kepler seeks to detect sequences of transits of Earth-size exoplanets orbiting solar-like stars. Such transit signals are on the order of 100 ppm. The high photometric precision demanded by Kepler requires detailed knowledge of how the Kepler pixels respond to starlight during a nominal observation. This information is provided by the Kepler pixel response function (PRF), defined as the composite of Kepler's optical point-spread function, integrated spacecraft pointing jitter during a nominal cadence and other systematic effects. To provide sub-pixel resolution, the PRF is represented as a piecewise-continuous polynomial on a sub-pixel mesh. This continuous representation allows the prediction of a star's flux value on any pixel given the star's pixel position. The advantages and difficulties of this polynomial representation are discussed, including characterization of spatial variation in the PRF and the smoothing of discontinuities between sub-pixel polynomial patches. On-orbit super-resolution measurements of the PRF across the Kepler field of view are described. Two uses of the PRF are presented: the selection of pixels for each star that maximizes the photometric signal-to-noise ratio for that star, and PRF-fitted centroids which provide robust and accurate stellar positions on the CCD, primarily used for attitude and plate scale tracking. Good knowledge of the PRF has been a critical component for the successful collection of high-precision photometry by Kepler.

  12. Centroid moment tensor catalogue using a 3-D continental scale Earth model: Application to earthquakes in Papua New Guinea and the Solomon Islands

    Science.gov (United States)

    Hejrani, Babak; Tkalčić, Hrvoje; Fichtner, Andreas

    2017-07-01

    Although both earthquake mechanism and 3-D Earth structure contribute to the seismic wavefield, the latter is usually assumed to be layered in source studies, which may limit the quality of the source estimate. To overcome this limitation, we implement a method that takes advantage of a 3-D heterogeneous Earth model, recently developed for the Australasian region. We calculate centroid moment tensors (CMTs) for earthquakes in Papua New Guinea (PNG) and the Solomon Islands. Our method is based on a library of Green's functions for each source-station pair for selected Geoscience Australia and Global Seismic Network stations in the region, distributed on a 3-D grid covering the seismicity down to 50 km depth. For the calculation of Green's functions, we utilize a spectral-element method for the solution of the seismic wave equation. Seismic moment tensors were calculated using least-squares inversion, and the 3-D location of the centroid is found by grid search. Through several synthetic tests, we confirm a trade-off between the location and the correct input moment tensor components when using a 1-D Earth model to invert synthetics produced in a 3-D heterogeneous Earth. Our CMT catalogue for PNG, in comparison to the global CMT, shows a meaningful increase in the double-couple percentage (up to 70%). Another significant difference that we observe is in the mechanism of events with depths shallower than 15 km and Mw […] in the region.

  13. X-ray phase contrast imaging of objects with subpixel-size inhomogeneities: a geometrical optics model.

    Science.gov (United States)

    Gasilov, Sergei V; Coan, Paola

    2012-09-01

    Several x-ray phase contrast extraction algorithms use a set of images acquired along the rocking curve of a perfect flat analyzer crystal to study the internal structure of objects. By measuring the angular shift of the rocking curve peak, one can determine the local deflections of the x-ray beam propagated through a sample. Additionally, some objects determine a broadening of the crystal rocking curve, which can be explained in terms of multiple refraction of x rays by many subpixel-size inhomogeneities contained in the sample. This fact may allow us to differentiate between materials and features characterized by different refraction properties. In the present work we derive an expression for the beam broadening in the form of a linear integral of the quantity related to statistical properties of the dielectric susceptibility distribution function of the object.

  14. Quantum size correction to the work function and centroid of excess charge in positively ionized simple metal clusters

    International Nuclear Information System (INIS)

    Payami, M.

    2004-01-01

    In this work, we have shown the important role of the finite-size correction to the work function in predicting the correct position of the centroid of excess charge in positively charged simple metal clusters with different rs values (2 ≤ rs ≤ 7). For this purpose, we have first calculated the self-consistent Kohn-Sham energies of neutral and singly-ionized clusters with sizes 2 ≤ N ≤ 100 in the framework of the local spin-density approximation and the stabilized jellium model, as well as the simple jellium model with rigid jellium. Secondly, we have fitted our results to the asymptotic ionization formulas both with and without the size correction to the work function. The results of the fittings show that the formula containing the size correction predicts a correct position of the centroid, inside the jellium, while the other predicts a false position, outside the jellium sphere

  15. Investigation on method of estimating the excitation spectrum of vibration source

    International Nuclear Information System (INIS)

    Zhang Kun; Sun Lei; Lin Song

    2010-01-01

    In practical engineering, it is hard to obtain the excitation spectrum of the auxiliary machines of a nuclear reactor through direct measurement. To solve this problem, a general method of estimating the excitation spectrum of a vibration source through indirect measurement is proposed. First, the dynamic transfer matrix between the virtual excitation points and the measurement points is obtained through experiment. This matrix, combined with the response spectrum at the measurement points under practical working conditions, can be used to calculate the excitation spectrum acting on the virtual excitation points. Then a simplified method is proposed, based on the assumption that the vibrating machine can be regarded as a rigid body. The method treats the centroid as the excitation point, and the dynamic transfer matrix is derived by using the substructure mobility synthesis method. Thus, the excitation spectrum can be obtained from the inverse of the transfer matrix combined with the response spectrum at the measurement points. Based on the above method, a computational example is carried out to estimate the excitation spectrum acting on the centroid of an electrical pump. By comparing the input excitation and the estimated excitation, the reliability of this method is verified. (authors)
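    The core inverse step described above (response spectrum = transfer matrix × excitation spectrum, inverted to recover the excitation) can be sketched as follows. The transfer matrix here is a small synthetic example with real-valued amplitudes, not one derived from substructure mobility synthesis:

```python
import numpy as np

def estimate_excitation(H, response, rcond=1e-10):
    """Estimate the excitation spectrum at the (virtual) excitation points
    from response spectra measured at the measurement points, given the
    dynamic transfer matrix H (response = H @ excitation)."""
    return np.linalg.pinv(np.asarray(H, float), rcond=rcond) @ np.asarray(response, float)

# Synthetic check: a known 2-point excitation recovered from 3 responses.
H = np.array([[1.0, 0.2],
              [0.3, 1.5],
              [0.8, 0.4]])
x_true = np.array([2.0, 5.0])
y = H @ x_true                      # simulated response spectrum amplitudes
print(np.allclose(estimate_excitation(H, y), x_true))  # → True
```

    With more measurement points than excitation points, the pseudo-inverse gives the least-squares excitation estimate, which also damps measurement noise.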

  16. Quantum size correction to the work function and the centroid of excess charge in positively ionized simple metal clusters

    Directory of Open Access Journals (Sweden)

    M. Payami

    2003-12-01

    Full Text Available  In this work, we have shown the important role of the finite-size correction to the work function in predicting the correct position of the centroid of excess charge in positively charged simple metal clusters with different r_s values. For this purpose, we have first calculated the self-consistent Kohn-Sham energies of neutral and singly-ionized clusters in the framework of the local spin-density approximation and the stabilized jellium model (SJM) as well as the simple jellium model (JM) with rigid jellium. Secondly, we have fitted our results to the asymptotic ionization formulas both with and without the size correction to the work function. The results of the fittings show that the formula containing the size correction predicts a correct position of the centroid inside the jellium, while the other predicts a false position, outside the jellium sphere.

  17. Modelling Perception of Structure and Affect in Music: Spectral Centroid and Wishart's Red Bird

    Directory of Open Access Journals (Sweden)

    Roger T. Dean

    2011-12-01

    Full Text Available Pearce (2011) provides a positive and interesting response to our article on time series analysis of the influences of acoustic properties on real-time perception of structure and affect in a section of Trevor Wishart’s Red Bird (Dean & Bailes, 2010). We address the following topics raised in the response and our paper. First, we analyse in depth the possible influence of spectral centroid, a timbral feature of the acoustic stream distinct from the high-level general parameter we used initially, spectral flatness. We find that spectral centroid, like spectral flatness, is not a powerful predictor of real-time responses, though it does show some features that encourage its continued consideration. Second, we discuss further the issue of studying both individual responses and, as in our paper, group-averaged responses. We show that a multivariate Vector Autoregression model handles the grand average series quite similarly to those of individual members of our participant groups, and we analyse this in greater detail with a wide range of approaches in work which is in press and continuing. Lastly, we discuss the nature and intent of computational modelling of cognition using acoustic and music- or information-theoretic data streams as predictors, and how the music- or information-theoretic approaches may be applied to electroacoustic music, which is ‘sound-based’ rather than note-centred like Western classical music.

  18. The centroidal algorithm in molecular similarity and diversity calculations on confidential datasets

    Science.gov (United States)

    Trepalin, Sergey; Osadchiy, Nikolay

    2005-09-01

    Chemical structure provides an exhaustive description of a compound, but it is often proprietary and thus an impediment to the exchange of information. For example, structure disclosure is often needed for the selection of the most similar or dissimilar compounds. The authors propose a centroidal algorithm based on structural fragments (screens) that can be efficiently used for similarity and diversity selections without disclosing the structures of the reference set. For increased security, the authors recommend that such a set contain at least some tens of structures. Analysis of reverse engineering feasibility showed that the difficulty of the problem grows as the screen's radius decreases. The algorithm is illustrated with concrete calculations on known steroidal, quinoline, and quinazoline drugs. We also investigate the problem of scaffold identification in a combinatorial library dataset. The results show that relatively small screens of radius equal to 2 bond lengths perform well in similarity sorting, while radius-4 screens yield better results in diversity sorting. The software implementation of the algorithm takes an SDF file with a reference set and generates screens of various radii, which are subsequently used for the similarity and diversity sorting of external SDFs. Since reverse engineering the reference set molecules from their screens is as difficult as breaking the RSA asymmetric encryption algorithm, generated screens can be stored openly without further encryption. This approach ensures that an end user transfers only a set of structural fragments and no other data. Like other encryption algorithms, the centroid algorithm cannot give a 100% guarantee of protecting a chemical structure from the dataset, but the probability of identifying an initial structure is very small, on the order of 10^-40 in typical cases.

  19. Estimating Daily Evapotranspiration Based on A Model of Evapotranspiration Fraction (EF) for Mixed Pixels

    Science.gov (United States)

    Xin, X.; Li, F.; Peng, Z.; Qinhuo, L.

    2017-12-01

    Land surface heterogeneities significantly affect the reliability and accuracy of remotely sensed evapotranspiration (ET), and the problem worsens for lower resolution data. At the same time, temporal-scale extrapolation of the instantaneous latent heat flux (LE) at satellite overpass time to daily ET is crucial for applications of such remote sensing products. The purpose of this paper is to propose a simple but efficient model for estimating daytime evapotranspiration that accounts for the heterogeneity of mixed pixels. To do so, an equation to calculate the evapotranspiration fraction (EF) of mixed pixels was derived based on two key assumptions. Assumption 1: the available energy (AE) of each sub-pixel is approximately equal, within an acceptable margin of bias, to that of any other sub-pixel in the same mixed pixel, and to the AE of the mixed pixel; this serves only to simplify the equation, and its uncertainties and the resulting errors in estimated ET are very small. Assumption 2: the EF of each sub-pixel equals the EF of the nearest pure pixel(s) of the same land cover type. This equation is intended to correct the spatial-scale error of the mixed-pixel EF and can be used to calculate daily ET with daily AE data. The model was applied to an artificial oasis in the midstream of the Heihe River. HJ-1B satellite data were used to estimate the lumped fluxes at the scale of 300 m, after resampling the 30-m resolution datasets to 300 m resolution, which was used to carry out the key step of the model. The results before and after correction were compared to each other and validated using site data from eddy-correlation systems. Results indicated that the new model is capable of improving the accuracy of daily ET estimation relative to the lumped method. Validation at 12 eddy-correlation sites for 9 days of HJ-1B overpasses showed that the R² increased to 0.82 from 0.62; the RMSE decreased to 1.60 MJ/m² from 2.47 MJ/m²; the MBE decreased from 1.92 MJ/m² to 1
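    Under the two assumptions above, the EF of a mixed pixel reduces to the area-fraction-weighted mean of the pure-pixel EFs of its land-cover types, and daily ET follows as EF × daily AE. A minimal numpy sketch; the land-cover fractions and EF values below are hypothetical:

```python
import numpy as np

def mixed_pixel_ef(fractions, ef_pure):
    """EF of a mixed pixel as the area-fraction-weighted mean of the EFs of
    the nearest pure pixels of each land-cover type (valid under the paper's
    Assumption 1: equal available energy among sub-pixels)."""
    fractions = np.asarray(fractions, float)
    ef_pure = np.asarray(ef_pure, float)
    return float(np.dot(fractions, ef_pure) / fractions.sum())

def daily_et(ef, daily_ae):
    """Daily ET (same units as AE, e.g. MJ/m^2/day) from EF and daily AE."""
    return ef * daily_ae

# Hypothetical pixel: 60% cropland (EF 0.75), 30% bare soil (0.25), 10% water (0.90).
ef = mixed_pixel_ef([0.6, 0.3, 0.1], [0.75, 0.25, 0.90])
print(round(ef, 3))                   # → 0.615
print(round(daily_et(ef, 12.0), 2))   # → 7.38 (MJ/m^2)
```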

  20. A new technique for fire risk estimation in the wildland urban interface

    Science.gov (United States)

    Dasgupta, S.; Qu, J. J.; Hao, X.

    A novel technique based on the physical variable of pre-ignition energy is proposed for assessing fire risk in the Grassland-Urban-Interface. The physical basis lends the method a site- and season-independent applicability and possibilities for computing spread rates and ignition probabilities, features contemporary fire risk indices usually lack. The method requires estimates of grass moisture content and temperature. A constrained radiative-transfer inversion scheme on MODIS NIR-SWIR reflectances, which reduces solution ambiguity, is used for grass moisture retrieval, while MODIS land surface temperature and emissivity products are used for retrieving grass temperature. Subpixel urban contamination of the MODIS reflective and thermal signals over a Grassland-Urban-Interface pixel is corrected using periodic estimates of urban influence from high spatial resolution ASTER.

  1. Sub-Pixel Accuracy Crack Width Determination on Concrete Beams in Load Tests by Triangle Mesh Geometry Analysis

    Science.gov (United States)

    Liebold, F.; Maas, H.-G.

    2018-05-01

    This paper deals with the determination of crack widths of concrete beams during load tests from monocular image sequences. The procedure starts in a reference image of the probe with suitable surface texture under zero load, where a large number of points is defined by an interest operator. Then a triangulated irregular network is established to connect the points. Image sequences are recorded during load tests with the load increasing continuously or stepwise, or at intermittently changing load. The vertices of the triangles are tracked through the consecutive images of the sequence with sub-pixel accuracy by least squares matching. All triangles are then analyzed for changes by principal strain calculation. For each triangle showing significant strain, a crack width is computed by a thorough geometric analysis of the relative movement of the vertices.
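    The strain-analysis step described above can be illustrated as follows: given the tracked vertex coordinates of one triangle in the reference and deformed states, the affine deformation gradient yields the Green-Lagrange strain tensor, whose eigenvalues are the principal strains. This is a generic sketch of the principal-strain computation only, not the authors' full pipeline (least squares matching and the crack-width geometry are omitted):

```python
import numpy as np

def principal_strains(ref_tri, def_tri):
    """Principal Green-Lagrange strains of a triangle tracked from the
    reference image to a deformed state; inputs are 3x2 arrays of vertex
    coordinates (one vertex per row)."""
    ref = np.asarray(ref_tri, float)
    dfm = np.asarray(def_tri, float)
    Dr = np.column_stack([ref[1] - ref[0], ref[2] - ref[0]])
    Dd = np.column_stack([dfm[1] - dfm[0], dfm[2] - dfm[0]])
    F = Dd @ np.linalg.inv(Dr)            # deformation gradient
    E = 0.5 * (F.T @ F - np.eye(2))       # Green-Lagrange strain tensor
    return np.sort(np.linalg.eigvalsh(E))[::-1]

# A 1% uniaxial stretch along x gives principal strains of about (0.01005, 0).
ref = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dfm = [(0.0, 0.0), (10.1, 0.0), (0.0, 10.0)]
e1, e2 = principal_strains(ref, dfm)
print(e1, e2)
```

    A triangle crossed by an opening crack shows a large first principal strain perpendicular to the crack, which is what flags it for the subsequent crack-width computation.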

  2. Using Hedonic price model to estimate effects of flood on real ...

    African Journals Online (AJOL)

    Distances were measured in metres from the centroid of the building to the edge of the river and to roads using a Global Positioning System. The result of the estimation shows that properties located within the floodplain are lower in value by an average of N493,408, which represents a 6.8 percent reduction in sales price for an ...

  3. Centroid and full-width at half maximum uncertainties of histogrammed data with an underlying Gaussian distribution -- The moments method

    International Nuclear Information System (INIS)

    Valentine, J.D.; Rana, A.E.

    1996-01-01

    The effect of approximating a continuous Gaussian distribution with histogrammed data is studied. The expressions for the theoretical uncertainties in the centroid and full-width at half maximum (FWHM), as determined by calculation of moments, are derived using the error propagation method for a histogrammed Gaussian distribution. The results are compared with the corresponding pseudo-experimental uncertainties for computer-generated histogrammed Gaussian peaks to demonstrate the effect of binning the data. It is shown that increasing the number of bins in the histogram improves the continuous distribution approximation. For example, FWHM ≥ 9 bins and FWHM ≥ 12 bins are needed to reduce the pseudo-experimental standard deviation of the FWHM to within 5% and 1%, respectively, of the theoretical value for a peak containing 10,000 counts. In addition, the uncertainties in the centroid and FWHM as a function of peak area are studied. Finally, Sheppard's correction is applied to partially correct for the binning effect
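    The moments method and Sheppard's correction can be sketched for a synthetic binned Gaussian peak. Sheppard's correction subtracts h²/12 (h = bin width) from the grouped variance; the example bins an ideal Gaussian (μ = 100, σ = 5, true FWHM ≈ 11.774) by integrating the density over unit-width bins:

```python
import numpy as np
from math import erf, sqrt

def centroid_fwhm(counts, bin_centers, bin_width, sheppard=True):
    """Centroid and FWHM of a histogrammed, approximately Gaussian peak from
    its first two moments; Sheppard's correction subtracts h^2/12 from the
    variance to partially undo the effect of binning."""
    counts = np.asarray(counts, float)
    x = np.asarray(bin_centers, float)
    n = counts.sum()
    centroid = (counts * x).sum() / n
    var = (counts * (x - centroid) ** 2).sum() / n
    if sheppard:
        var -= bin_width ** 2 / 12.0
    fwhm = 2.0 * sqrt(2.0 * np.log(2.0)) * sqrt(var)   # 2.3548 * sigma
    return centroid, fwhm

# Group an ideal Gaussian peak (10,000 counts) into unit-width bins.
edges = np.arange(70.0, 131.0, 1.0)
centers = 0.5 * (edges[:-1] + edges[1:])
mu, sig = 100.0, 5.0
cdf = np.array([0.5 * (1.0 + erf((e - mu) / (sig * sqrt(2.0)))) for e in edges])
counts = 1e4 * np.diff(cdf)
c, w = centroid_fwhm(counts, centers, 1.0)
print(round(c, 3), round(w, 3))   # → 100.0 11.774
```

    Without the correction the recovered FWHM is biased high, since grouping inflates the second moment by approximately h²/12.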

  4. The Single-Molecule Centroid Localization Algorithm Improves the Accuracy of Fluorescence Binding Assays.

    Science.gov (United States)

    Hua, Boyang; Wang, Yanbo; Park, Seongjin; Han, Kyu Young; Singh, Digvijay; Kim, Jin H; Cheng, Wei; Ha, Taekjip

    2018-03-13

    Here, we demonstrate that the use of the single-molecule centroid localization algorithm can improve the accuracy of fluorescence binding assays. Two major artifacts in this type of assay, i.e., nonspecific binding events and optically overlapping receptors, can be detected and corrected during analysis. The effectiveness of our method was confirmed by measuring two weak biomolecular interactions, the interaction between the B1 domain of streptococcal protein G and immunoglobulin G and the interaction between double-stranded DNA and the Cas9-RNA complex with limited sequence matches. This analysis routine requires little modification to common experimental protocols, making it readily applicable to existing data and future experiments.

  5. Simplex-centroid mixture formulation for optimised composting of kitchen waste.

    Science.gov (United States)

    Abdullah, N; Chin, N L

    2010-11-01

    Composting is a good recycling method to fully utilise the organic wastes present in kitchen waste, owing to the highly nutritious matter within the waste. In the present study, the optimised mixture proportions of kitchen waste containing vegetable scraps (V), fish processing waste (F) and newspaper (N) or onion peels (O) were determined by applying the simplex-centroid mixture design method to achieve the initial moisture content and carbon-to-nitrogen (CN) ratio desired for an effective composting process. The best mixture for blends with newspaper was 48.5% V, 17.7% F and 33.7% N, while for blends with onion peels the mixture proportion was 44.0% V, 19.7% F and 36.2% O. The predicted responses from these mixture proportions fall within the acceptable limits of 50% to 65% moisture content and a CN ratio of 20-40, and were also validated experimentally. Copyright 2010 Elsevier Ltd. All rights reserved.
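    For a three-component blend such as V/F/N, a simplex-centroid design consists of 2³ − 1 = 7 runs: the three pure components, the three 50/50 binary blends, and the overall centroid. A small sketch that enumerates these design points (illustrative only; the study's response-surface fitting is not shown):

```python
from itertools import combinations

def simplex_centroid_design(components):
    """Enumerate the 2**q - 1 runs of a simplex-centroid mixture design:
    for every non-empty subset of the q components, one blend with equal
    proportions of the subset's components and zero for the rest."""
    q = len(components)
    design = []
    for k in range(1, q + 1):
        for subset in combinations(components, k):
            design.append({c: (1.0 / k if c in subset else 0.0)
                           for c in components})
    return design

# Three-component design (vegetable scraps V, fish waste F, newspaper N):
# 3 pure blends, 3 binary 50/50 blends, and the ternary centroid = 7 runs.
for blend in simplex_centroid_design(["V", "F", "N"]):
    print(blend)
```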

  6. A Novel Sub-pixel Measurement Algorithm Based on Mixed the Fractal and Digital Speckle Correlation in Frequency Domain

    Directory of Open Access Journals (Sweden)

    Zhangfang Hu

    2014-10-01

    Full Text Available Digital speckle correlation is a non-contact, in-plane displacement measurement method based on machine vision. Motivated by the low accuracy and the large amount of calculation of the traditional digital speckle correlation method in the spatial domain, we introduce a sub-pixel displacement measurement algorithm which employs a fast interpolation method based on fractal theory together with digital speckle correlation in the frequency domain. This algorithm can overcome both the blocking effect and the blurring caused by traditional interpolation methods, and the frequency-domain processing also avoids the repeated searching of correlation recognition in the spatial domain; thus the amount of computation is largely reduced and the information extraction speed is improved. A comparative experiment is given to verify that the proposed algorithm is effective.
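    A generic frequency-domain correlation sketch illustrates the idea: phase correlation locates the displacement peak without spatial-domain searching, and a parabolic fit through the peak and its neighbours refines the shift to sub-pixel accuracy. This is a plain phase-correlation sketch; the paper's fractal interpolation step is not reproduced:

```python
import numpy as np

def subpixel_shift(ref, cur):
    """Estimate the in-plane shift of `cur` relative to `ref` by phase
    correlation, refined to sub-pixel accuracy with a parabolic fit
    through the correlation peak and its immediate neighbours."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)
    corr = np.real(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
    ny, nx = corr.shape
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(cm, c0, cp):
        d = cm - 2.0 * c0 + cp          # parabola vertex offset in [-0.5, 0.5]
        return 0.0 if d == 0 else 0.5 * (cm - cp) / d

    dy = py + refine(corr[(py - 1) % ny, px], corr[py, px], corr[(py + 1) % ny, px])
    dx = px + refine(corr[py, (px - 1) % nx], corr[py, px], corr[py, (px + 1) % nx])
    if dy > ny / 2: dy -= ny            # map wrap-around to negative shifts
    if dx > nx / 2: dx -= nx
    return dy, dx

# Shift a random speckle-like image by (2.3, -1.6) pixels in the Fourier
# domain and recover the displacement to a fraction of a pixel.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
shifted = np.real(np.fft.ifft2(np.fft.fft2(img)
                               * np.exp(-2j * np.pi * (2.3 * fy - 1.6 * fx))))
dy, dx = subpixel_shift(img, shifted)
print(round(dy, 2), round(dx, 2))
```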

  7. A BAND SELECTION METHOD FOR SUB-PIXEL TARGET DETECTION IN HYPERSPECTRAL IMAGES BASED ON LABORATORY AND FIELD REFLECTANCE SPECTRAL COMPARISON

    Directory of Open Access Journals (Sweden)

    S. Sharifi hashjin

    2016-06-01

    Full Text Available In recent years, developing target detection algorithms has received growing interest in hyperspectral images. In comparison to the classification field, few studies have been done on dimension reduction or band selection for target detection in hyperspectral images. This study presents a simple method to remove bad bands from the images in a supervised manner for sub-pixel target detection. The proposed method is based on comparing field and laboratory spectra of the target of interest for detecting bad bands. For evaluation, the target detection blind test dataset is used in this study. Experimental results show that the proposed method can improve efficiency of the two well-known target detection methods, ACE and CEM.

  8. A walk-free centroid method for lifetime measurements with pulsed beams

    International Nuclear Information System (INIS)

    Julin, R.; Kantele, J.; Luontama, M.; Passoja, A.; Poikolainen, T.

    1977-09-01

    A delayed-coincidence lifetime measurement method based on a comparison of walk-free centroids of time spectra is presented. The time is measured between the cyclotron RF signal and the pulse from a plastic scintillation detector, followed by a fixed energy selection. The events to be time-analyzed are selected from the associated charged-particle spectrum of a silicon detector which is operated in coincidence with the scintillator, i.e., independently of the formation of the signal containing the time information. With this technique, with a micropulse FWHM of typically 500 to 700 ps, half-lives down to the 10 ps region can be measured. The following half-lives are obtained with the new method: 160±6 ps for the 2032 keV level in ²⁰⁹Pb; 45±10 ps and 160±20 ps for the 1756.8 keV (0₂⁺) and 2027.3 keV (0₃⁺) levels in ¹¹⁶Sn, respectively. (author)

  9. Remote sensing estimates of impervious surfaces for hydrological modelling of changes in flood risk during high-intensity rainfall events

    DEFF Research Database (Denmark)

    Kaspersen, Per Skougaard; Fensholt, Rasmus; Drews, Martin

    This paper addresses the accuracy and applicability of medium resolution (MR) remote sensing estimates of impervious surfaces (IS) for urban land cover change analysis. Landsat-based vegetation indices (VI) are found to provide fairly accurate measurements of sub-pixel imperviousness for urban areas at different geographical locations within Europe, and to be applicable for cities with diverse morphologies and dissimilar climatic and vegetative conditions. Detailed data on urban land cover changes can be used to examine the diverse environmental impacts of past and present urbanisation...

  10. Acquisition and Initial Analysis of H+- and H--Beam Centroid Jitter at LANSCE

    Science.gov (United States)

    Gilpatrick, J. D.; Bitteker, L.; Gulley, M. S.; Kerstiens, D.; Oothoudt, M.; Pillai, C.; Power, J.; Shelley, F.

    2006-11-01

    During the 2005 Los Alamos Neutron Science Center (LANSCE) beam runs, beam current and centroid-jitter data were observed, acquired, analyzed, and documented for both the LANSCE H+ and H- beams. These data were acquired using three beam position monitors (BPMs) from the 100-MeV Isotope Production Facility (IPF) beam line and three BPMs from the Switchyard transport line at the end of the LANSCE 800-MeV linac. The two types of data acquired, intermacropulse and intramacropulse, were analyzed for statistical and frequency characteristics as well as various other correlations including comparing their phase-space like characteristics in a coordinate system of transverse angle versus transverse position. This paper will briefly describe the measurements required to acquire these data, the initial analysis of these jitter data, and some interesting dilemmas these data presented.

  11. Attenuation (1/Q) estimation in reflection seismic records

    International Nuclear Information System (INIS)

    Raji, Wasiu; Rietbrock, Andreas

    2013-01-01

    Despite its numerous potential applications, the lack of a reliable method for determining attenuation (1/Q) in seismic data is an issue when utilizing attenuation for hydrocarbon exploration. In this paper, a new method for measuring attenuation in reflection seismic data is presented. The inversion process involves two key stages: computation of the centroid frequency for the individual signal using a variable window length and fast Fourier transform; and estimation of the difference in the centroid frequency and travel time for paired incident and transmitted signals. The new method introduces a shape factor and a constant which allows several spectral shapes to be used to represent a real seismic signal without altering the mathematical model. Application of the new method to synthetic data shows that it can provide reliable estimates of Q using any of the spectral shapes commonly assumed for real seismic signals. Tested against two published methods of Q measurement, the new method shows less sensitivity to interference from noise and change of frequency bandwidth. The method is also applied to a 3D data set from the Gullfaks field, North Sea, Norway. The trace length is divided into four intervals: AB, BC, CD, and DE. Results show that interval AB has the lowest 1/Q value, and that interval BC has the highest 1/Q value. The values of 1/Q measured in the CDP stack using the new method are consistent with those measured using the classical spectral ratio method. (paper)
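    The two stages above can be illustrated with a minimal sketch: compute spectral centroids via the FFT, then convert the centroid downshift between paired incident and transmitted signals into Q. The conversion below uses the Gaussian-spectrum relation of Quan & Harris (Q = πtσ²/Δf), a common special case; the paper's shape factor generalization is not reproduced:

```python
import numpy as np

def centroid_frequency(signal, dt):
    """Spectral centroid of a windowed trace from its FFT amplitude spectrum."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), dt)
    return float(np.sum(freqs * spec) / np.sum(spec))

def q_from_centroid_shift(fc_src, fc_rec, var_src, t_travel):
    """Q from the downshift of the spectral centroid between incident and
    transmitted signals, assuming a Gaussian amplitude spectrum of variance
    var_src (Quan & Harris relation): Q = pi * t * sigma^2 / (f_s - f_r)."""
    return np.pi * t_travel * var_src / (fc_src - fc_rec)

# Sanity check on a pure tone: the centroid of a 10 Hz sine is 10 Hz.
t = np.arange(1000) * 1e-3
print(round(centroid_frequency(np.sin(2 * np.pi * 10.0 * t), 1e-3), 2))  # → 10.0

# Synthetic Gaussian spectra: attenuation over a 1 s path with Q = 50 shifts
# the centroid down by pi*t*sigma^2/Q, and the estimator recovers Q.
f = np.linspace(0.0, 200.0, 4001)
fs, var_s, tt, Q_true = 60.0, 100.0, 1.0, 50.0
src = np.exp(-(f - fs) ** 2 / (2.0 * var_s))
rec = src * np.exp(-np.pi * f * tt / Q_true)      # attenuated spectrum
fc_s = float(np.sum(f * src) / np.sum(src))
fc_r = float(np.sum(f * rec) / np.sum(rec))
Q_est = q_from_centroid_shift(fc_s, fc_r, var_s, tt)
print(round(Q_est, 1))  # → 50.0
```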

  12. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    Science.gov (United States)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2017-12-01

    In this work, we use a Fully Constrained Least Squares subpixel learning algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark object (D) classes. Because of the sheer volume of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS, the National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite, nighttime lights data) over California, USA, using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing-based classification relative to per-pixel classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
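    The fully constrained unmixing step (nonnegative abundances summing to one per pixel) can be sketched with a simple projected-gradient solver in numpy. The endmember matrix below is random, and this is an illustrative solver, not the production NEX implementation:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1}, via the standard sort-based algorithm."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def fcls_unmix(E, pixel, n_iter=5000):
    """Fully constrained least-squares abundances: minimize ||E a - pixel||^2
    subject to a >= 0 and sum(a) = 1, by projected gradient descent."""
    step = 1.0 / np.linalg.norm(E, 2) ** 2      # 1 / Lipschitz constant
    a = np.full(E.shape[1], 1.0 / E.shape[1])
    for _ in range(n_iter):
        a = project_simplex(a - step * E.T @ (E @ a - pixel))
    return a

# Synthetic 6-band pixel mixed from 3 endmembers (S, V, D) with known fractions.
rng = np.random.default_rng(1)
E = rng.uniform(0.0, 1.0, (6, 3))
a_true = np.array([0.5, 0.3, 0.2])
a_est = fcls_unmix(E, E @ a_true)
print(np.round(a_est, 3))   # → [0.5 0.3 0.2]
```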

  13. Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range

    Science.gov (United States)

    Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.

    2013-12-01

    The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that, through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved from the current 24 μm (FWHM) to better than 5 μm. The 400 eV-1600 eV X-rays detected by this spectrometer primarily interact within the field-free region of the CCD, producing electron clouds which diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 μm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal-to-effective-readout-noise ratio, resulting in worst-case spatial resolution measurements of 4.5±0.2 μm and 3.9±0.1 μm at 530 eV and 680 eV respectively. A method is described that allows the contribution of the X-ray spot size to be deconvolved from these
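    The simplest of the centroiding techniques mentioned above is the charge-weighted centre of gravity of the split event. A generic sketch (real pipelines also subtract bias and apply noise thresholds before summing the charge):

```python
import numpy as np

def centroid_event(frame, seed_yx, half=1):
    """Sub-pixel X-ray interaction position as the charge-weighted centre of
    gravity of a (2*half+1)^2 window around the brightest pixel of a split
    event; `seed_yx` must be at least `half` pixels from the frame edge."""
    y0, x0 = seed_yx
    ys = np.arange(y0 - half, y0 + half + 1)
    xs = np.arange(x0 - half, x0 + half + 1)
    patch = frame[np.ix_(ys, xs)].astype(float)
    q = patch.sum()
    cy = (patch.sum(axis=1) * ys).sum() / q
    cx = (patch.sum(axis=0) * xs).sum() / q
    return cy, cx

# Charge from one photon split over a 3x3 neighbourhood, with the interaction
# point offset towards the right-hand and upper neighbouring pixels.
frame = np.zeros((5, 5))
frame[2, 2] = 100.0
frame[2, 3] = 60.0
frame[1, 2] = 20.0
cy, cx = centroid_event(frame, (2, 2))
print(round(cy, 3), round(cx, 3))   # → 1.889 2.333
```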

  14. Optimization of attenuation estimation in reflection for in vivo human dermis characterization at 20 MHz.

    Science.gov (United States)

    Fournier, Céline; Bridal, S Lori; Coron, Alain; Laugier, Pascal

    2003-04-01

    In vivo skin attenuation estimators must be applicable to backscattered radio frequency signals obtained in a pulse-echo configuration. This work compares three such estimators: short-time Fourier multinarrowband (MNB), short-time Fourier centroid shift (FC), and autoregressive centroid shift (ARC). All provide estimations of the attenuation slope (beta, dB x cm(-1) x MHz(-1)); MNB also provides an independent estimation of the mean attenuation level (IA, dB x cm(-1)). Practical approaches are proposed for data windowing, spectral variance characterization, and bandwidth selection. Then, based on simulated data, FC and ARC were selected as the best (compromise between bias and variance) attenuation slope estimators. The FC, ARC, and MNB were applied to in vivo human skin data acquired at 20 MHz to estimate betaFC, betaARC, and IA(MNB), respectively (without diffraction correction, between 11 and 27 MHz). Lateral heterogeneity had less effect and day-to-day reproducibility was smaller for IA than for beta. The IA and betaARC were dependent on pressure applied to skin during acquisition and IA on room and skin-surface temperatures. Negative values of IA imply that IA and beta may be influenced not only by skin's attenuation but also by structural heterogeneity across dermal depth. Even so, IA was correlated to subject age and IA, betaFC, and betaARC were dependent on subject gender. Thus, in vivo attenuation measurements reveal interesting variations with subject age and gender and thus appeared promising to detect skin structure modifications.

  15. Rapid Moment Magnitude Estimation Using Strong Motion Derived Static Displacements

    OpenAIRE

    Muzli, Muzli; Asch, Guenter; Saul, Joachim; Murjaya, Jaya

    2015-01-01

    The static surface deformation can be recovered from strong motion records. Compared to satellite-based measurements such as GPS or InSAR, the advantage of strong motion records is that they have the potential to provide real-time coseismic static displacements. The use of these valuable data was optimized for moment magnitude estimation. A centroid grid search method was introduced to calculate the moment magnitude. The method was applied to data sets of the 2011...

  16. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    Science.gov (United States)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  17. Next Generation Snow Cover Mapping: Can Future Hyperspectral Satellite Spectrometer Systems Improve Subpixel Snow-covered Area and Grain Size in the Sierra Nevada?

    Science.gov (United States)

    Hill, R.; Calvin, W. M.; Harpold, A.

    2017-12-01

    Mountain snow storage is the dominant source of water for humans and ecosystems in western North America. Consequently, the spatial distribution of snow-covered area is fundamental to hydrological, ecological, and climate models. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were collected along the entire Sierra Nevada mountain range, extending from north of Lake Tahoe to south of Mt. Whitney, during the 2015 and 2016 snow-covered seasons. The AVIRIS dataset used in this experiment consists of 224 contiguous spectral channels with wavelengths ranging from 400 to 2500 nanometers at a 15-meter spatial pixel size. Data from the Sierras were acquired on four days: 2/24/15 during a very low snow year, 3/24/16 near maximum snow accumulation, and 5/12/16 and 5/18/16 during snow ablation and snow loss. Building on previous subpixel snow-covered area retrieval algorithms that take varying grain size into account, we present a model that analyzes multiple endmembers of varying snow grain size, vegetation, rock, and soil in segmented regions along the Sierra Nevada to determine snow-cover spatial extent, sub-pixel snow fraction, and approximate grain size. In addition, varying simulated models of the data will compare and contrast the retrievals of current snow products such as MODIS Snow-Covered Area and Grain Size (MODSCAG) and the Airborne Snow Observatory (ASO). Specifically, do lower spatial resolution (MODIS), broader resolution bandwidth (MODIS), and limited spectral resolution (ASO) affect snow-covered area and grain size approximations? The implications of our findings will help refine snow mapping products for planned hyperspectral satellite spectrometer systems such as EnMAP (slated to launch in 2019), HISUI (planned for inclusion on the International Space Station in 2018), and HyspIRI (currently under consideration).

  18. Collective centroid oscillations as an emittance preservation diagnostic in linear collider linacs

    International Nuclear Information System (INIS)

    Adolphsen, C.E.; Bane, K.L.F.; Spence, W.L.; Woodley, M.D.

    1997-08-01

    Transverse bunch centroid oscillations, induced at operating beam currents at which transverse wakefields are substantial, and observed at Beam Position Monitors, are sensitive to the actual magnetic focusing, energy gain, and rf phase profiles in a linac, and are insensitive to misalignments and jitter sources. In the pulse stealing set-up implemented at the SLC, they thus allow the frequent monitoring of the stability of the in-place emittance growth inhibiting or mitigating measures--primarily the energy scaled magnetic lattice and the rf phases necessary for BNS damping--independent of the actual emittance growth as driven by misalignments and jitter. The authors have developed a physically based analysis technique to meaningfully reduce the data. Oscillation beta-beating is a primary indicator of beam energy errors; shifts in the invariant amplitude reflect differential internal motion along the longitudinally extended bunch and thus are a sensitive indicator of the real rf phases in the machine; shifts in betatron phase advance contain corroborative information sensitive to both effects

  19. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods (Chapter 10)

    Science.gov (United States)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps for analyzing the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods classify pixels into distinct categories based solely on the spectral and ancillary information within each pixel. They range from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post

  20. Simulation of plume rise: Study the effect of stably stratified turbulence layer on the rise of a buoyant plume from a continuous source by observing the plume centroid

    Science.gov (United States)

    Bhimireddy, Sudheer Reddy; Bhaganagar, Kiran

    2016-11-01

    Buoyant plumes are common in the atmosphere when there exists a difference in temperature or density between the source and its ambience. In a stratified environment, plume rise continues as long as a buoyancy difference exists between the plume and the ambience. In a calm, no-wind ambience, this plume rise is purely vertical, and entrainment happens because of the relative motion of the plume with respect to the ambience and also because of ambient turbulence. In this study, the plume centroid is defined as the plume's mass center and is calculated from the kinematic equation which relates the rate of change of the centroid's position to the plume rise velocity. The parameters used to describe the plume are the plume radius, the plume's vertical velocity, and the local buoyancy of the plume. The plume rise velocity is calculated from the mass, momentum, and heat conservation equations in their differential form. Our study focuses on the entrainment velocity, as it depicts the extent of plume growth. This entrainment velocity is modeled as a sum of fractions of the plume's relative velocity and the ambient turbulence. From the results, we studied the effect of turbulence on plume growth by observing the variation in the plume radius at different heights and the centroid height reached before the plume loses its buoyancy.
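    The centroid kinematics described above can be sketched with a minimal model: the centroid height integrates the rise velocity, the rise velocity is driven by buoyancy and damped by entrainment, and stable stratification erodes the buoyancy as the plume rises. The coefficients (entrainment rate, buoyancy frequency, initial buoyancy) below are illustrative values, not those of the study.

```python
# Minimal sketch of plume-centroid rise in a calm, stably stratified
# ambient; forward-Euler integration of the coupled ODEs.
N = 0.02       # Brunt-Vaisala frequency of the ambient, 1/s (assumed)
alpha = 0.1    # entrainment damping coefficient, 1/s (assumed)
dt = 0.5       # time step, s

z, w, b = 0.0, 0.0, 0.05   # centroid height (m), rise velocity (m/s), buoyancy (m/s^2)
for _ in range(4000):      # 2000 s of simulated time
    dw = b - alpha * w     # buoyancy accelerates; entrainment damps
    db = -N**2 * w         # stratification erodes buoyancy with height
    w += dw * dt
    b += db * dt
    z += w * dt            # kinematic equation for the centroid position

print(f"final centroid height ~ {z:.0f} m")
```

The plume levels off where its initial buoyancy has been consumed by the stratification, i.e. near z = b0 / N^2 (125 m for these illustrative values).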

  1. Estimating the accuracy of geographical imputation

    Directory of Open Access Journals (Sweden)

    Boscoe Francis P

    2008-01-01

    Full Text Available Abstract Background To reduce the number of non-geocoded cases, researchers and organizations sometimes include cases geocoded to postal code centroids along with cases geocoded with the greater precision of a full street address. Some analysts then use the postal code to assign information to the cases from finer-level geographies such as a census tract. Assignment is commonly completed using either a postal centroid or a geographical imputation method which assigns a location by using both the demographic characteristics of the case and the population characteristics of the postal delivery area. To date no systematic evaluation of geographical imputation methods ("geo-imputation") has been completed. The objective of this study was to determine the accuracy of census tract assignment using geo-imputation. Methods Using a large dataset of breast, prostate and colorectal cancer cases reported to the New Jersey Cancer Registry, we determined how often cases were assigned to the correct census tract using alternate strategies of demographic-based geo-imputation, and using assignments obtained from postal code centroids. Assignment accuracy was measured by comparing the tract assigned with the tract originally identified from the full street address. Results Assigning cases to census tracts using the race/ethnicity population distribution within a postal code resulted in more correctly assigned cases than using postal code centroids. The addition of age characteristics increased the match rates even further. Match rates were highly dependent on both the geographic distribution of race/ethnicity groups and population density. Conclusion Geo-imputation appears to offer some advantages and no serious drawbacks as compared with the alternative of assigning cases to census tracts based on postal code centroids.
For a specific analysis, researchers will still need to consider the potential impact of geocoding quality on their results and evaluate

  2. Observations of sensor bias dependent cluster centroid shifts in a prototype sensor for the LHCb Vertex Locator detector

    CERN Document Server

    Papadelis, Aras

    2006-01-01

    We present results from a recent beam test of a prototype sensor for the LHCb Vertex Locator detector, read out with the Beetle 1.3 front-end chip. We have studied the effect of the sensor bias voltage on the reconstructed cluster positions in a sensor placed in a 120 GeV pion beam at a 10° incidence angle. We find an unexplained systematic shift in the reconstructed cluster centroid when increasing the bias voltage on an already overdepleted sensor. The shift is independent of strip pitch and sensor thickness.

  3. Bayesian inference and interpretation of centroid moment tensors of the 2016 Kumamoto earthquake sequence, Kyushu, Japan

    Science.gov (United States)

    Hallo, Miroslav; Asano, Kimiyuki; Gallovič, František

    2017-09-01

    On April 16, 2016, Kumamoto prefecture in the Kyushu region, Japan, was devastated by a shallow M_JMA 7.3 earthquake. The series of foreshocks started with an M_JMA 6.5 foreshock 28 h before the mainshock. The foreshocks originated in the Hinagu fault zone, which intersects the mainshock's Futagawa fault zone; hence, the tectonic background for this earthquake sequence is rather complex. Here we infer centroid moment tensors (CMTs) for 11 events with M_JMA between 4.8 and 6.5, using strong motion records of the K-NET, KiK-net and F-net networks. We use the upgraded Bayesian full-waveform inversion code ISOLA-ObsPy, which takes into account the uncertainty of the velocity model. Such an approach allows us to reliably assess the uncertainty of the CMT parameters, including the centroid position. The solutions show significant systematic spatial and temporal variations throughout the sequence. Foreshocks are right-lateral, steeply dipping strike-slip events connected to the NE-SW shear zone. Those located close to the intersection of the Hinagu and Futagawa fault zones dip slightly to the ESE, while those in the southern area dip to the WNW. Contrarily, aftershocks are mostly normal dip-slip events related to the N-S extensional tectonic regime. Most of the deviatoric moment tensors contain only a minor CLVD component, which can be attributed to the velocity model uncertainty. Nevertheless, two of the CMTs involve a significant CLVD component, which may reflect a complex rupture process. Decomposition of those moment tensors into two pure shear moment tensors suggests combined right-lateral strike-slip and normal dip-slip mechanisms, consistent with the tectonic setting of the intersection of the Hinagu and Futagawa fault zones. [Figure not available: see full text.]

  4. Fast algorithm for spectral processing with application to on-line welding quality assurance

    Science.gov (United States)

    Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.

    2006-10-01

    A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate the plasma electron temperature in real time. The estimation of the electron temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve fitting peaks to Voigt functions using the recursive Levenberg-Marquardt method, sub-pixel algorithms are used to more accurately estimate the central wavelengths of the peaks. Three different sub-pixel algorithms are analysed and compared, and it is shown that the LPO (linear phase operator) sub-pixel algorithm is the best solution within the proposed system. Experimental tests during TIG welding, using a fibre optic to capture the arc light together with a low-cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple-peak analysis is less than 20 ms running on a conventional PC.
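    The abstract names the LPO operator, which is not reproduced here; as a generic illustration of sub-pixel peak localisation on a sampled spectrum, the sketch below uses three-point parabolic interpolation around the discrete maximum, a common alternative in spectrometer software.

```python
import numpy as np

def subpixel_peak(y):
    """Peak position of sampled profile `y`, refined to sub-sample precision."""
    i = int(np.argmax(y))
    if i == 0 or i == len(y) - 1:
        return float(i)  # peak at the edge: no neighbours to refine with
    a, b, c = y[i - 1], y[i], y[i + 1]
    # Vertex of the parabola through the three samples around the maximum.
    return i + 0.5 * (a - c) / (a - 2 * b + c)

# A Gaussian emission line centred at 10.3 samples, observed on a pixel grid.
x = np.arange(21)
line = np.exp(-0.5 * ((x - 10.3) / 2.0) ** 2)
print(subpixel_peak(line))  # ~10.3, well below one-pixel accuracy
```

Mapping the refined sample index through the spectrometer's wavelength calibration then yields the sub-pixel central wavelength used in the line-ratio temperature estimate.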

  5. Epidemiology from Tweets: Estimating Misuse of Prescription Opioids in the USA from Social Media.

    Science.gov (United States)

    Chary, Michael; Genes, Nicholas; Giraud-Carrier, Christophe; Hanson, Carl; Nelson, Lewis S; Manini, Alex F

    2017-12-01

    The misuse of prescription opioids (MUPO) is a leading public health concern. Social media are playing an expanded role in public health research, but there are few methods for estimating established epidemiological metrics from social media. The purpose of this study was to demonstrate that the geographic variation of social media posts mentioning prescription opioid misuse strongly correlates with government estimates of MUPO in the last month. We wrote software to acquire publicly available tweets from Twitter from 2012 to 2014 that contained at least one keyword related to prescription opioid use (n = 3,611,528). A medical toxicologist and emergency physician curated the list of keywords. We used the semantic distance (SemD) to automatically quantify the similarity of meaning between tweets and identify tweets that mentioned MUPO. We defined the SemD between two words as the shortest distance between the two corresponding word-centroids. Each word-centroid represented all recognized meanings of a word. We validated this automatic identification with manual curation. We used Twitter metadata to estimate the location of each tweet. We compared our estimated geographic distribution with the 2013-2015 National Surveys on Drug Use and Health (NSDUH). Tweets that mentioned MUPO formed a distinct cluster far away from semantically unrelated tweets. The state-by-state correlation between Twitter and NSDUH was highly significant across all NSDUH survey years. The correlation was strongest between Twitter and NSDUH data from those aged 18-25 (r = 0.94, p usage. Mentions of MUPO on Twitter correlate strongly with state-by-state NSDUH estimates of MUPO. We have also demonstrated that natural language processing can be used to analyze social media to provide insights for syndromic toxicosurveillance.

  6. A Subpixel Classification of Multispectral Satellite Imagery for Interpetation of Tundra-Taiga Ecotone Vegetation (Case Study on Tuliok River Valley, Khibiny, Russia)

    Science.gov (United States)

    Mikheeva, A. I.; Tutubalina, O. V.; Zimin, M. V.; Golubeva, E. I.

    2017-12-01

    The tundra-taiga ecotone plays a significant role in northern ecosystems. Due to global climatic changes, the vegetation of the ecotone is a key object of many remote-sensing studies. The interpretation of vegetation and non-vegetation objects of the tundra-taiga ecotone in moderate-resolution satellite imagery is complicated by the difficulty of extracting these objects from the spectral and spatial mixtures within a pixel. This article describes a method for the subpixel classification of a Terra ASTER satellite image for vegetation mapping of the tundra-taiga ecotone in the Tuliok River valley, Khibiny Mountains, Russia. It is demonstrated that this method allows one to determine the position of the boundaries of ecotone objects and their abundance on the basis of quantitative criteria, which provides a more accurate characterization of ecotone vegetation when compared to the per-pixel approach to automatic imagery interpretation.

  7. Power centroid radar and its rise from the universal cybernetics duality

    Science.gov (United States)

    Feria, Erlan H.

    2014-05-01

    Power centroid radar (PC-Radar) is a fast and powerful adaptive radar scheme that naturally surfaced from the recent discovery of the time-dual for information theory, which has been named "latency theory." Latency theory itself was born from the universal cybernetics duality (UC-Duality), first identified in the late 1970s, which has also delivered a time dual for thermodynamics, named "lingerdynamics," that anchors an emerging lifespan theory for biological systems. In this paper the rise of PC-Radar from the UC-Duality is described. The development of PC-Radar, which is US-patented, started with Defense Advanced Research Projects Agency (DARPA)-funded research on knowledge-aided (KA) adaptive radar in the last decade. The outstanding signal to interference plus noise ratio (SINR) performance of PC-Radar under severely taxing environmental disturbances will be established. More specifically, it will be seen that the SINR performance of PC-Radar, either KA or knowledge-unaided (KU), approximates that of an optimum KA radar scheme. The explanation for this remarkable result is that PC-Radar inherently arises from the UC-Duality, which advances a "first principles" duality guidance theory for the derivation of synergistic storage-space/computational-time compression solutions. Real-world synthetic aperture radar (SAR) images will be used as prior knowledge to illustrate these results.

  8. The impact of the in-orbit background and the X-ray source intensity on the centroiding accuracy of the Swift X-ray telescope

    CERN Document Server

    Ambrosi, R M; Hill, J; Cheruvu, C; Abbey, A F; Short, A D T

    2002-01-01

    The optical components of the Swift Gamma Ray Burst Explorer X-ray Telescope (XRT), consisting of the JET-X spare flight mirror and a charge coupled device of the type used in the EPIC program, were used in a re-calibration study carried out at the Panter facility, which is part of the Max Planck Institute for Extraterrestrial Physics. The objective of this study was to check the focal length and the off-axis performance of the mirrors and to show that the half energy width (HEW) of the on-axis point spread function (PSF) was of the order of 16 arcsec at 1.5 keV (Nucl. Instr. and Meth. A 488 (2002) 543; SPIE 4140 (2000) 64) and that a centroiding accuracy better than 1 arcsec could be achieved within the 4 arcmin sampling area designated by the Burst Alert Telescope (Nucl. Instr. and Meth. A 488 (2002) 543). The centroiding accuracy of the Swift XRT's optical components was tested as a function of distance from the focus and off-axis position of the PSF (Nucl. Instr. and Meth. A 488 (2002) 543). The presence ...

  9. Application of the Backward-Smoothing Extended Kalman Filter to Attitude Estimation and Prediction using Radar Observations

    Science.gov (United States)

    2009-06-01

    Pressure (R). Figure 2.9: Aerodynamic drag acting at the centroid of each surface element. This approach avoids time-consuming repetitive evaluation of... 5. Update the state estimate with the latest measurement y_k (37, p. 210): x̂_k(t_k) = x̂_{k-1}(t_k) + K_k (y_k - h_k) (3.72). In some cases it is necessary to... in XELIAS can be a rather challenging and time-consuming task, depending on the complexity of the target being analyzed. 4. According to the XELIAS

  10. Path integral centroid molecular dynamics simulations of semiinfinite slab and bulk liquid of para-hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Kinugawa, Kenichi [Nara Women's Univ., Nara (Japan). Dept. of Chemistry]

    1998-10-01

    It has been unsuccessful to solve a set of time-dependent Schroedinger equations numerically for many-body quantum systems which involve, e.g., a number of hydrogen molecules, protons, and excess electrons at a low temperature, where quantum effects evidently appear. This undesirable situation is fatal for the investigation of real low-temperature chemical systems because they are essentially composed of many quantum degrees of freedom. However, if we use a new technique called "path integral centroid molecular dynamics (CMD) simulation," proposed by Cao and Voth in 1994, the real-time semi-classical dynamics of many degrees of freedom can be computed by utilizing the techniques already developed for traditional classical molecular dynamics (MD) simulations. Therefore, the CMD simulation is expected to be a very powerful tool for quantum dynamics studies of real substances. (J.P.N.)

  11. Centroid based clustering of high throughput sequencing reads based on n-mer counts.

    Science.gov (United States)

    Solovyov, Alexander; Lipkin, W Ian

    2013-09-08

    Many problems in computational biology require alignment-free sequence comparisons. One of the common tasks involving sequence comparison is sequence clustering. Here we apply methods of alignment-free comparison (in particular, comparison using sequence composition) to the challenge of sequence clustering. We study several centroid based algorithms for clustering sequences based on word counts. Study of their performance shows that using the k-means algorithm, with or without data whitening, is efficient from the computational point of view. A higher clustering accuracy can be achieved using the soft expectation maximization method, whereby each sequence is attributed to each cluster with a specific probability. We implement an open source tool for alignment-free clustering. It is publicly available on GitHub: https://github.com/luscinius/afcluster. We show the utility of alignment-free sequence clustering for high throughput sequencing analysis despite its limitations. In particular, it allows one to perform assembly with reduced resources and a minimal loss of quality. The major factor affecting the performance of alignment-free read clustering is the length of the read.
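    The approach can be sketched in a few lines: represent each read by its normalised k-mer count vector, then run a centroid-based clustering over those vectors. This is a toy version of the idea (the paper's tool, afcluster, adds soft EM and data whitening); the reads, k = 2, and the deterministic farthest-point initialisation are choices made here for illustration.

```python
import numpy as np
from itertools import product

def kmer_vector(seq, k=2):
    """Normalised k-mer count vector of a DNA sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        v[index[seq[i:i + k]]] += 1
    return v / max(v.sum(), 1)  # length normalisation

def kmeans(X, k, iters=20):
    # Farthest-point initialisation keeps this toy example deterministic.
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centroids], axis=0)
        centroids.append(X[int(np.argmax(d))])
    centroids = np.array(centroids)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        centroids = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

reads = ["ATATATATAT", "ATATATTATA", "GCGCGCGCGC", "CGCGCGCGGC"]
X = np.vstack([kmer_vector(r) for r in reads])
labels = kmeans(X, 2)
print(labels)  # AT-rich and GC-rich reads fall into different clusters
```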

  12. Depth estimation of complex geometry scenes from light fields

    Science.gov (United States)

    Si, Lipeng; Wang, Qing

    2018-01-01

    The surface camera (SCam) of a light field gathers the angular sample rays passing through a 3D point. The consistency of SCams is evaluated to estimate the depth map of the scene, but the consistency is affected by several limitations such as occlusions and non-Lambertian surfaces. To overcome those limitations, the SCam is partitioned into two segments such that one of them can satisfy the consistency constraint. The segmentation pattern of the SCam is highly related to the texture of the spatial patch, so we enforce a mask matching to describe the shape correlation between the segments of the SCam and the spatial patch. To further address the ambiguity in textureless regions, a global method with pixel-wise plane labels is presented. Plane label inference at each pixel recovers not only the depth value but also the local geometry structure, which is suitable for light fields with sub-pixel disparities and continuous view variation. Our method is evaluated on public light field datasets and outperforms the state-of-the-art.

  13. Comparison of the performances of land use regression modelling and dispersion modelling in estimating small-scale variations in long-term air pollution concentrations in a Dutch urban area.

    NARCIS (Netherlands)

    Beelen, R.M.J.; Voogt, M.; Duyzer, J.; Zandveld, P.; Hoek, G.

    2010-01-01

    The performance of a Land Use Regression (LUR) model and a dispersion model (URBIS - URBis Information System) was compared in a Dutch urban area. For the Rijnmond area, i.e. Rotterdam and surroundings, nitrogen dioxide (NO2) concentrations for 2001 were estimated for nearly 70 000 centroids of a

  14. Ranking Fuzzy Numbers with a Distance Method using Circumcenter of Centroids and an Index of Modality

    Directory of Open Access Journals (Sweden)

    P. Phani Bushan Rao

    2011-01-01

    Full Text Available Ranking fuzzy numbers is an important aspect of decision making in a fuzzy environment. Since their inception in 1965, many authors have proposed different methods for ranking fuzzy numbers. However, there is no method which gives a satisfactory result in all situations. Most of the methods proposed so far are non-discriminating and counterintuitive. This paper proposes a new method for ranking fuzzy numbers based on the circumcenter of centroids and uses an index of optimism to reflect the decision maker's optimistic attitude, as well as an index of modality that represents the neutrality of the decision maker. This method ranks various types of fuzzy numbers, including normal, generalized trapezoidal, and triangular fuzzy numbers, along with crisp numbers, with crisp numbers treated as particular cases of fuzzy numbers.
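    The centroid step underlying such ranking methods is easy to illustrate: compute the centroid of each fuzzy number's membership function and compare the x-coordinates. The sketch below does this numerically for generalized trapezoidal fuzzy numbers (a, b, c, d; w); the paper's full method (circumcenter of centroids plus the optimism and modality indices) is richer and is not reproduced here.

```python
import numpy as np

def centroid_x(a, b, c, d, w=1.0, n=10001):
    """x-centroid of the trapezoidal fuzzy number (a, b, c, d; w)."""
    x = np.linspace(a, d, n)
    mu = np.interp(x, [a, b, c, d], [0.0, w, w, 0.0])
    # Discrete centroid: integral of x*mu divided by integral of mu
    # (the uniform grid spacing cancels in the ratio).
    return (x * mu).sum() / mu.sum()

A = (1.0, 2.0, 3.0, 4.0)   # symmetric trapezoid: centroid at x = 2.5
B = (2.0, 3.0, 4.0, 5.0)
print(centroid_x(*A), centroid_x(*B))
print("A < B" if centroid_x(*A) < centroid_x(*B) else "A >= B")
```

Note that the height w cancels in the x-centroid; it matters only once the y-centroid and the circumcenter construction enter the ranking.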

  15. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Estimation of selected seasonal streamflow statistics representative of 1930-2002 in West Virginia

    Science.gov (United States)

    Wiley, Jeffrey B.; Atkins, John T.

    2010-01-01

    Regional equations and procedures were developed for estimating seasonal 1-day 10-year, 7-day 10-year, and 30-day 5-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the seasonal U.S. Environmental Protection Agency harmonic-mean flows and the 50-percent flow-duration values. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Regional equations were developed using ordinary least squares regression, with statistics from 117 U.S. Geological Survey continuous streamgage stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia (North, South-Central, and Eastern Panhandle) were determined. Drainage area, average annual precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. The average standard error of estimates for the equations ranged from 12.6 to 299 percent. Procedures developed to estimate the selected seasonal streamflow statistics in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia that have independent variables within the limits of the stations used to develop the regional equations: drainage area from 16.3 to 1,516 square miles in the North Region, from 2.78 to 1,619 square miles in the South-Central Region, and from 8.83 to 3,041 square miles in the Eastern Panhandle Region; average annual precipitation from 42.3 to 61.4 inches in the South-Central Region and from 39.8 to 52.9 inches in the Eastern Panhandle Region; and longitude of the basin centroid from 79.618 to 82.023 decimal degrees in the North Region. All estimates of seasonal streamflow statistics are representative of the period from the 1930 through the 2002 climatic years.
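    The regression step described above can be sketched as fitting an ordinary least-squares equation that predicts a (log-transformed) streamflow statistic from basin characteristics. The data, coefficients, and variable scaling below are synthetic and illustrative only; the report's actual regional equations are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
log_area = rng.uniform(1.0, 3.5, n)   # log10 drainage area, square miles (synthetic)
precip = rng.uniform(40.0, 60.0, n)   # average annual precipitation, inches (synthetic)

# Synthetic "true" relation plus noise standing in for station statistics.
log_q = 0.9 * log_area + 0.02 * precip - 1.0 + rng.normal(0.0, 0.05, n)

# Ordinary least squares: intercept, drainage-area and precipitation terms.
X = np.column_stack([np.ones(n), log_area, precip])
coef, *_ = np.linalg.lstsq(X, log_q, rcond=None)
print(coef)  # approximately [-1.0, 0.9, 0.02]
```

Applying such an equation outside the ranges of the calibration stations (drainage area, precipitation, centroid longitude) is exactly the extrapolation the report warns against.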

  17. Sub-pixel analysis to support graphic security after scanning at low resolution

    Science.gov (United States)

    Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve

    2006-02-01

    Whether in the domain of audio, video or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital is often much extended in time, and proceeds by long steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes. But this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check-21 Act (Check Clearing for the 21st Century Act) became effective in 2004 and allows images of checks to be handled by banks as usual paper checks. We use here this situation of check scanning as our primary benchmark for graphic security features after scanning. We will first present a quick review of the most common graphic security features currently found on checks, with their specific purposes, qualities and disadvantages, and we demonstrate their poor survivability after scanning under the average scanning conditions expected from the Check-21 Act. We will then present a novel method of measurement of distances between and rotations of line elements in a scanned image: based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so we can then determine expected distances, periodicity, sharpness and print quality of known characters, symbols and other graphic elements in a document image. Finally we will apply our method to fraud detection of documents after gray-scale scanning at 300 dpi resolution.
We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced

  18. Imaging Formation Algorithm of the Ground and Space-Borne Hybrid BiSAR Based on Parameters Estimation from Direct Signal

    Directory of Open Access Journals (Sweden)

    Qingjun Zhang

    2014-01-01

    Full Text Available This paper proposes a novel image formation algorithm for bistatic synthetic aperture radar (BiSAR) in the configuration of a noncooperative transmitter and a stationary receiver, in which traditional imaging algorithms fail because the necessary imaging parameters cannot be estimated from the limited information available from the noncooperative data provider. In the new algorithm, the essential parameters for imaging, such as the squint angle, Doppler centroid, and Doppler chirp-rate, are estimated by full exploitation of the recorded direct signal (the direct signal is the signal received by the stationary receiver directly from the transmitter). The Doppler chirp-rate is retrieved by modeling the peak phase of the direct signal as a quadratic polynomial. The Doppler centroid frequency and the squint angle can be derived from image contrast optimization. Then range focusing, range cell migration correction (RCMC), and azimuth focusing are implemented via secondary range compression (SRC) and range cell migration, respectively. Finally, the proposed algorithm is validated by imaging of a BiSAR experiment configured with China's YAOGAN 10 SAR as the transmitter and the receiver platform located on a building at a height of 109 m in Jiangsu province. The experimental image with geometric correction shows good accordance with local Google images.
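    The chirp-rate estimation step can be illustrated directly: if the unwrapped peak phase of the direct signal is modeled as a quadratic in slow time, phi(t) = 2*pi*(f_dc*t + 0.5*ka*t^2), then a quadratic polynomial fit recovers both the Doppler centroid f_dc and the chirp-rate ka. The PRF and the "true" parameter values below are simulated stand-ins, not values from the experiment.

```python
import numpy as np

prf = 1000.0                     # pulse repetition frequency, Hz (illustrative)
t = np.arange(-512, 512) / prf   # slow time, s

f_dc, ka = 150.0, -80.0          # simulated Doppler centroid (Hz) and chirp-rate (Hz/s)
phase = 2 * np.pi * (f_dc * t + 0.5 * ka * t**2)  # already-unwrapped peak phase

# Quadratic fit: phi(t) = (pi*ka) t^2 + (2*pi*f_dc) t + const.
p = np.polyfit(t, phase, 2)
ka_est = p[0] / np.pi
fdc_est = p[1] / (2 * np.pi)
print(ka_est, fdc_est)  # -80.0, 150.0
```

With real data the phase must be unwrapped before fitting, and noise makes the fit a least-squares estimate rather than an exact recovery.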

  19. Pixel-Level Decorrelation and BiLinearly Interpolated Subpixel Sensitivity applied to WASP-29b

    Science.gov (United States)

    Challener, Ryan; Harrington, Joseph; Cubillos, Patricio; Blecic, Jasmina; Deming, Drake

    2017-10-01

    Measured exoplanet transit and eclipse depths can vary significantly depending on the methodology used, especially at the low S/N levels in Spitzer eclipses. BiLinearly Interpolated Subpixel Sensitivity (BLISS) models a physical, spatial effect, which is independent of any astrophysical effects. Pixel-Level Decorrelation (PLD) uses the relative variations in pixels near the target to correct for flux variations due to telescope motion. PLD is being widely applied to all Spitzer data without a thorough understanding of its behavior. It is a mathematical method derived from a Taylor expansion, and many of its parameters do not have a physical basis. PLD also relies heavily on binning the data to remove short time-scale variations, which can artificially smooth the data. We applied both methods to 4 eclipse observations of WASP-29b, a Saturn-sized planet, which was observed twice with the 3.6 µm and twice with the 4.5 µm channels of Spitzer's IRAC in 2010, 2011 and 2014 (programs 60003, 70084, and 10054, respectively). We compare the resulting eclipse depths and midpoints from each model, assess each method's ability to remove correlated noise, and discuss how to choose or combine the best data analysis methods. We also refined the orbit from eclipse timings, detecting a significant nonzero eccentricity, and we used our Bayesian Atmospheric Radiative Transfer (BART) code to retrieve the planet's atmosphere, which is consistent with a blackbody. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.

  20. Neutron radiography with sub-15 µm resolution through event centroiding

    Energy Technology Data Exchange (ETDEWEB)

    Tremsin, Anton S., E-mail: ast@ssl.berkeley.edu [Space Sciences Laboratory, University of California at Berkeley, Berkeley, CA 94720 (United States); McPhate, Jason B.; Vallerga, John V.; Siegmund, Oswald H.W. [Space Sciences Laboratory, University of California at Berkeley, Berkeley, CA 94720 (United States); Bruce Feller, W. [NOVA Scientific, Inc. 10 Picker Road, Sturbridge, MA 01566 (United States); Lehmann, Eberhard; Kaestner, Anders; Boillat, Pierre; Panzner, Tobias; Filges, Uwe [Spallation Neutron Source Division, Paul Scherrer Institute, CH-5232 Villigen (Switzerland)

    2012-10-01

    Conversion of thermal and cold neutrons into a strong {approx}1 ns electron pulse with an absolute neutron detection efficiency as high as 50-70% makes detectors with {sup 10}B-doped Microchannel Plates (MCPs) very attractive for neutron radiography and microtomography applications. The subsequent signal amplification preserves the location of the event within the MCP pore (typically 6-10 {mu}m in diameter), providing the possibility to perform neutron counting with high spatial resolution. Different event centroiding techniques of the charge landing on a patterned anode enable accurate reconstruction of the neutron position, provided the charge footprints do not overlap within the time required for event processing. The new fast 2 × 2 Timepix readout with >1.2 kHz frame rates provides the unique possibility to detect neutrons with sub-15 {mu}m resolution at several MHz/cm{sup 2} counting rates. The results of high resolution neutron radiography experiments presented in this paper, demonstrate the sub-15 {mu}m resolution capability of our detection system. The high degree of collimation and cold spectrum of ICON and BOA beamlines combined with the high spatial resolution and detection efficiency of MCP-Timepix detectors are crucial for high contrast neutron radiography and microtomography with high spatial resolution. The next generation of Timepix electronics with sparsified readout should enable counting rates in excess of 10{sup 7} n/cm{sup 2}/s taking full advantage of high beam intensity of present brightest neutron imaging facilities.
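
    Event centroiding of a charge footprint reduces, in its simplest form, to a charge-weighted mean over the anode pixels; the sketch below (a hypothetical 3x3 footprint, not the Timepix firmware algorithm) shows how a fractional, sub-pixel event position is obtained:

```python
import numpy as np

# Charge-weighted centroid of a single event footprint: the reconstructed
# position is the first moment of the charge distribution, which is how
# sub-pore (here sub-15 um) resolution is reached from coarse anode pixels.
def centroid(footprint):
    """Return the (row, col) charge-weighted centroid of a 2-D footprint."""
    q = np.asarray(footprint, dtype=float)
    rows, cols = np.indices(q.shape)
    total = q.sum()
    return (rows * q).sum() / total, (cols * q).sum() / total

# A symmetric 3x3 charge cloud centred on the middle pixel.
cloud = np.array([[1.0, 2.0, 1.0],
                  [2.0, 4.0, 2.0],
                  [1.0, 2.0, 1.0]])
print(centroid(cloud))   # -> (1.0, 1.0)
```

    An asymmetric cloud yields a fractional position, i.e. a location between pixel centres.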

  1. Multiresponse optimisation on biodiesel obtained through a ternary mixture of vegetable oil and animal fat: Simplex-centroid mixture design application

    International Nuclear Information System (INIS)

    Orives, Juliane Resges; Galvan, Diego; Coppo, Rodolfo Lopes; Rodrigues, Cezar Henrique Furtoso; Angilelli, Karina Gomes; Borsato, Dionísio

    2014-01-01

    Highlights: • Mixture experimental design was used, which allowed evaluating various responses. • A predictive equation is presented that allows verifying the behaviour of the mixtures. • The results showed that the obtained biodiesel dispensed with the use of any additives. - Abstract: The quality of biodiesel is a determining factor in its commercialisation, and parameters such as the Cold Filter Plugging Point (CFPP) and Induction Period (IP) determine its operability in engines on cold days and its storage time, respectively. These factors are important in characterisation of the final product. A B100 biodiesel formulation was developed using a multiresponse optimisation, for which the CFPP and cost were minimised, and the IP and yield were maximised. The experiments were carried out according to a simplex-centroid mixture design using soybean oil, beef tallow, and poultry fat. The optimum formulation consisted of 50% soybean oil, 20% beef tallow, and 30% poultry fat and had a CFPP value of 1.92 °C, raw material costs of US$ 903.87 ton⁻¹, an IP of 8.28 h, and a yield of 95.68%. Validation was performed in triplicate, and the t-test indicated no difference between the estimated and experimental values for any of the dependent variables, thus indicating the efficiency of the joint optimisation in the biodiesel production process, which met the criteria for CFPP and IP as well as high yield and low cost.

  2. A comparison of moment magnitude estimates for the European-Mediterranean and Italian regions

    Science.gov (United States)

    Gasperini, Paolo; Lolli, Barbara; Vannucci, Gianfranco; Boschi, Enzo

    2012-09-01

    With the goal of constructing a homogeneous data set of moment magnitudes (Mw) to be used for seismic hazard assessment, we compared Mw estimates from moment tensor catalogues available online. We found an apparent scaling disagreement between Mw estimates from the National Earthquake Information Center (NEIC) of the US Geological Survey and from the Global Centroid Moment Tensor (GCMT) project. We suspect that this is the effect of an underestimation of Mw > 7.0 (M0 > 4.0 × 10^19 Nm) computed by NEIC owing to the limitations of their computational approach. We also found an apparent scaling disagreement between GCMT and two regional moment tensor catalogues provided by the 'Eidgenössische Technische Hochschule Zürich' (ETHZ) and by the European-Mediterranean Regional Centroid Moment Tensor (RCMT) project of the Italian 'Istituto Nazionale di Geofisica e Vulcanologia' (INGV). This is probably the effect of the overestimation of Mw < 5.5 (M0 < 2.2 × 10^17 Nm), up to year 2002, and of Mw < 5.0 (M0 < 4.0 × 10^16 Nm), since year 2003, owing to the physical limitations of the standard CMT inversion method used by GCMT for the earthquakes of relatively low magnitude. If the discrepant data are excluded from the comparisons, the scaling disagreements become insignificant in all cases. We observed instead small absolute offsets (≤0.1 units) for NEIC and ETHZ catalogues with respect to GCMT whereas there is an almost perfect correspondence between RCMT and GCMT. Finally, we found a clear underestimation of about 0.2 units of Mw magnitudes computed at the INGV using the time-domain moment tensor (TDMT) method with respect to those reported by GCMT and RCMT. According to our results, we suggest appropriate offset corrections to be applied to Mw estimates from NEIC, ETHZ and TDMT catalogues before merging their data with GCMT and RCMT catalogues. We also suggest discarding the probably discrepant data from NEIC and GCMT if other Mw estimates from different sources are
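
    The Mw-M0 pairs quoted in this abstract follow the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m (the Hanks-Kanamori / IASPEI convention). A quick consistency check:

```python
import math

# Standard moment-magnitude relation, M0 in N·m.
def moment_to_mw(m0_nm):
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

def mw_to_moment(mw):
    return 10.0 ** (1.5 * mw + 9.1)

# The thresholds quoted in the abstract are mutually consistent:
print(round(moment_to_mw(4.0e19), 1))   # -> 7.0
print(round(moment_to_mw(2.2e17), 1))   # -> 5.5
print(round(moment_to_mw(4.0e16), 1))   # -> 5.0
```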

  3. Maximizing signal-to-noise ratio (SNR) in 3-D large bandgap semiconductor pixelated detectors in optimal and non-optimal filtering conditions

    International Nuclear Information System (INIS)

    Rodrigues, Miesher L.; Serra, Andre da S.; He, Zhong; Zhu, Yuefeng

    2009-01-01

    3-D pixelated semiconductor detectors are used in radiation detection applications requiring spectroscopic and imaging information from radiation sources. Reconstruction algorithms used to determine direction and energy of incoming gamma rays can be improved by reducing electronic noise and using optimum filtering techniques. Position information can be improved by achieving sub-pixel resolution. Electronic noise is the limiting factor. Achieving sub-pixel resolution - position of the interaction better than one pixel pitch - in 3-D pixelated semiconductor detectors is a challenging task due to the fast transient characteristics of these signals. This work addresses two fundamental questions: the first is to determine the optimum filter, while the second is to estimate the achievable sub-pixel resolution using this filter. It is shown that the matched filter is the optimum filter when applying the signal-to-noise ratio criterion. Non-optimum filters are also studied. The framework of 3-D waveform simulation using the Shockley-Ramo Theorem and the Hecht Equation for electron and hole trapping is presented in this work. This waveform simulator can be used to analyze current detectors as well as explore new ideas and concepts in future work. Numerical simulations show that assuming an electronic noise of 3.3 keV it is possible to subdivide the pixel region into 5x5 sub-pixels. After analyzing these results, it is suggested that sub-pixel information can also improve energy resolution. Current noise levels are the major obstacle both to achieving sub-pixel resolution and to improving energy resolution below the current limits. (author)
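
    The matched-filter optimality invoked here can be illustrated deterministically: for a known pulse s in white noise of standard deviation sigma, a linear filter h yields an output SNR of (s*h)_peak / (sigma·||h||), which the Cauchy-Schwarz inequality bounds by ||s||/sigma, attained when h is the time-reversed pulse. A small numpy sketch (toy Gaussian pulse, not the detector waveforms):

```python
import numpy as np

# Compare the exact output SNR of the matched filter against a non-optimal
# smoothing filter for a known pulse shape in white noise.
n = 512
s = np.exp(-0.5 * ((np.arange(n) - 256) / 10.0) ** 2)   # known pulse shape
sigma = 0.5                                              # white-noise std

def output_snr(h):
    peak = np.convolve(s, h, mode="same").max()          # filtered signal peak
    return peak / (sigma * np.linalg.norm(h))            # exact output noise std

snr_matched = output_snr(s[::-1])            # matched filter: time-reversed s
snr_boxcar = output_snr(np.ones(5) / 5.0)    # non-optimal boxcar smoother

print(snr_matched > snr_boxcar)              # -> True: matched filter wins
```

    The matched-filter SNR equals the theoretical bound ||s||/sigma exactly, by construction.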

  4. The generalized centroid difference method for picosecond sensitive determination of lifetimes of nuclear excited states using large fast-timing arrays

    Energy Technology Data Exchange (ETDEWEB)

    Régis, J.-M., E-mail: regis@ikp.uni-koeln.de [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Mach, H. [Departamento de Física Atómica y Nuclear, Universidad Complutense, 28040 Madrid (Spain); Simpson, G.S. [Laboratoire de Physique Subatomique et de Cosmologie Grenoble, 53, rue des Martyrs, 38026 Grenoble Cedex (France); Jolie, J.; Pascovici, G.; Saed-Samii, N.; Warr, N. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Bruce, A. [School of Computing, Engineering and Mathematics, University of Brighton, Lewes Road, Brighton BN2 4GJ (United Kingdom); Degenkolb, J. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Fraile, L.M. [Departamento de Física Atómica y Nuclear, Universidad Complutense, 28040 Madrid (Spain); Fransen, C. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Ghita, D.G. [Horia Hulubei National Institute for Physics and Nuclear Engineering, 77125 Bucharest (Romania); and others

    2013-10-21

    A novel method for direct electronic “fast-timing” lifetime measurements of nuclear excited states via γ–γ coincidences using an array equipped with N ∈ ℕ equally shaped very fast high-resolution LaBr{sub 3}(Ce) scintillator detectors is presented. Analogous to the mirror-symmetric centroid difference method, the generalized centroid difference method provides two independent “start” and “stop” time spectra obtained by a superposition of the N(N−1) γ–γ time difference spectra of the N-detector fast-timing system. The two fast-timing array time spectra correspond to a forward and reverse gating of a specific γ–γ cascade. Provided that the energy response and the electronic time pick-off of the detectors are almost equal, a mean prompt response difference between start and stop events is calibrated and used as a single correction for lifetime determination. The combined fast-timing array's mean γ–γ time-walk characteristics can be determined for 40keV
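
    The centroid-difference principle is easy to verify numerically: convolving a prompt response with an exponential decay of lifetime τ shifts the time-spectrum centroid by exactly τ, which is what the ΔC = PRD + 2τ calibration exploits (the factor 2 arises from the forward/reverse gating). A Monte Carlo sketch with invented numbers, in picoseconds:

```python
import numpy as np

# Centroid of a histogrammed time spectrum: the delayed spectrum (prompt
# response convolved with an exponential decay) has its centroid shifted by
# exactly the lifetime tau relative to the prompt spectrum.
rng = np.random.default_rng(4)
tau = 80.0            # assumed lifetime, ps
prompt_sigma = 120.0  # assumed detector time resolution, ps

def centroid(samples, bins):
    hist, edges = np.histogram(samples, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return (centers * hist).sum() / hist.sum()

n = 200_000
prompt = rng.normal(0.0, prompt_sigma, n)        # prompt response function
delayed = prompt + rng.exponential(tau, n)       # decay adds exp(t/tau) walk

bins = np.linspace(-1000.0, 2000.0, 600)
shift = centroid(delayed, bins) - centroid(prompt, bins)
print(abs(shift - tau) < 5.0)                    # centroid shift equals tau
```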

  5. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    Science.gov (United States)

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The false matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results showed that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
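
    The Kalman-filter stage can be sketched with a generic constant-velocity model (an illustrative stand-in, not the paper's specific model, which also tracks scaling): intentional camera motion is smooth, so the filtered trajectory is kept and the residual jitter is compensated.

```python
import numpy as np

# Constant-velocity Kalman filter smoothing a jittery 1-D global-motion
# parameter, separating intentional panning from hand-shake jitter.
def kalman_smooth(z, q=1e-3, r=4.0):
    """Filter noisy positions z; state = [position, velocity]."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    out = []
    for zk in z:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        y = zk - H @ x                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + (K @ y).ravel()              # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(2)
truth = 0.5 * np.arange(200)                 # smooth intentional pan
jitter = truth + rng.normal(0.0, 2.0, 200)   # hand-shake jitter
smooth = kalman_smooth(jitter)
print(np.abs(smooth - truth).mean() < np.abs(jitter - truth).mean())
```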

  6. Computing travel time when the exact address is unknown: a comparison of point and polygon ZIP code approximation methods.

    Science.gov (United States)

    Berke, Ethan M; Shi, Xun

    2009-04-29

    Travel time is an important metric of geographic access to health care. We compared strategies of estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to nearest cancer centers by using: 1) geometric centroid of ZIP code polygons as origins, 2) population centroids as origin, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
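
    The geometric versus population-weighted centroid distinction compared above can be sketched directly (toy coordinates, not the New Hampshire/Arizona data): the polygon centroid depends only on the ZIP-code shape, while the population centroid follows where people actually live.

```python
import numpy as np

# Geometric (area) centroid of a simple polygon via the shoelace formula,
# versus a population-weighted centroid of point locations inside it.
def polygon_centroid(pts):
    """Area centroid of a simple polygon given as (x, y) vertices."""
    x, y = np.asarray(pts, dtype=float).T
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    cross = x * ys - xs * y
    area = cross.sum() / 2.0
    cx = ((x + xs) * cross).sum() / (6.0 * area)
    cy = ((y + ys) * cross).sum() / (6.0 * area)
    return cx, cy

# A unit-square "ZIP polygon": geometric centroid is its middle.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
gc = polygon_centroid(square)
print(gc)                                   # -> (0.5, 0.5)

# Population clustered in the north-east corner pulls the centroid there.
homes = np.array([[0.9, 0.9], [0.8, 0.95], [0.85, 0.8], [0.5, 0.5]])
pop = np.array([500, 300, 150, 50])
pop_centroid = (homes * pop[:, None]).sum(axis=0) / pop.sum()
print(pop_centroid)                         # -> roughly (0.84, 0.88)
```

    Travel times computed from the two origins can differ substantially, which is the effect the study quantifies.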

  7. Field programmable gate array based hardware implementation of a gradient filter for edge detection in colour images with subpixel precision

    International Nuclear Information System (INIS)

    Schellhorn, M; Rosenberger, M; Correns, M; Blau, M; Goepfert, A; Rueckwardt, M; Linss, G

    2010-01-01

    Within the field of industrial image processing, the use of colour cameras is becoming ever more common. Increasingly, the established black-and-white cameras are replaced by economical single-chip colour cameras with a Bayer pattern. The additional colour information is particularly important for recognition and inspection. It also becomes interesting for geometric metrology if measuring tasks can be solved more robustly or more precisely. However, only few suitable algorithms are available to detect edges with the necessary precision, and all approaches require additional computational expenditure. On the basis of a new filter for edge detection in colour images with subpixel precision, its implementation on a pre-processing hardware platform is presented. Hardware-implemented filters offer the advantage that they can be used easily with existing measuring software, since after filtering a single-channel image is present which unites the information of all colour channels. Advanced field programmable gate arrays represent an ideal platform for the parallel processing of multiple channels. An effective implementation, however, presupposes a high programming expenditure. Using the example of the colour filter implementation, the problems that arise are analyzed and the chosen solution method is presented.
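
    A minimal single-scan-line sketch of the idea (an illustration, not the paper's FPGA design): sum the per-channel gradient magnitudes into one edge response, then refine the peak location to subpixel precision with a parabolic fit.

```python
import numpy as np

# Fuse RGB channel gradients into a single edge response, then interpolate
# the response peak with a parabola to get a subpixel edge position.
def subpixel_edge(rgb_line):
    g = np.abs(np.gradient(rgb_line.astype(float), axis=0)).sum(axis=1)
    k = int(np.argmax(g))                          # integer-pixel edge position
    a, b, c = g[k - 1], g[k], g[k + 1]
    return k + 0.5 * (a - c) / (a - 2.0 * b + c)   # parabolic vertex offset

# Synthetic scan line: all three channels step between pixels 9 and 11,
# with a half-height transition pixel, so the true edge lies at 10.0.
line = np.zeros((20, 3))
line[11:] = [100.0, 150.0, 200.0]
line[10] = [50.0, 75.0, 100.0]
print(subpixel_edge(line))                          # -> 10.0
```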

  8. Detection of a surface breaking crack by using the centroid variations of laser ultrasonic spectrums

    International Nuclear Information System (INIS)

    Park, Seung Kyu; Baik, Sung Hoon; Lim, Chang Hwan; Joo, Young Sang; Jung, Hyun Kyu; Cha, Hyung Ki; Kang, Young June

    2006-01-01

    A laser ultrasonic system is a non-contact inspection device with a wide-band spectrum and a high spatial resolution. It provides absolute measurements of the moving distance and it can be applied to hard-to-access locations including curved or rough surfaces like in a nuclear power plant. In this paper, we have investigated the detection methods of the depth of a surface-breaking crack by using the surface wave of a laser ultrasound. The filtering function of a surface-breaking crack is a kind of a low-pass filter. The higher frequency components are more highly decreased in proportion to the crack depth. Also, the center frequency value of each ultrasound spectrum is decreased in proportion to the crack depth. We extracted the depth information of a surface-breaking crack by observing the centroid variation of the frequency spectrum. We describe the experimental results to detect the crack depth information by using the peak-to-valley values in the time domain and the center frequency values in the frequency domain.
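
    The spectral-centroid indicator described above is easy to sketch: low-pass filtering by the crack attenuates high frequencies, which lowers the centroid of the magnitude spectrum (synthetic two-tone signals here, not measured ultrasound):

```python
import numpy as np

# Spectral centroid (centre frequency) of a signal: attenuating the
# high-frequency component, as a surface-breaking crack does, lowers it.
def spectral_centroid(signal, fs):
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return (freqs * spec).sum() / spec.sum()

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
broadband = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)
lowpassed = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 200 * t)

# The "cracked" (low-passed) signal has a lower centre frequency.
print(spectral_centroid(lowpassed, fs) < spectral_centroid(broadband, fs))
```

    With equal-amplitude tones at 50 and 200 Hz the centroid sits at 125 Hz; attenuating the 200 Hz tone to 20% moves it down to 75 Hz.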

  9. Distributed formation tracking using local coordinate systems

    DEFF Research Database (Denmark)

    Yang, Qingkai; Cao, Ming; Garcia de Marina, Hector

    2018-01-01

    This paper studies the formation tracking problem for multi-agent systems, for which a distributed estimator–controller scheme is designed relying only on the agents’ local coordinate systems such that the centroid of the controlled formation tracks a given trajectory. By introducing a gradient descent term into the estimator, the explicit knowledge of the bound of the agents’ speed is not necessary in contrast to existing works, and each agent is able to compute the centroid of the whole formation in finite time. Then, based on the centroid estimation, a distributed control algorithm...

  10. Estimating Velocities of Glaciers Using Sentinel-1 SAR Imagery

    Science.gov (United States)

    Gens, R.; Arnoult, K., Jr.; Friedl, P.; Vijay, S.; Braun, M.; Meyer, F. J.; Gracheva, V.; Hogenson, K.

    2017-12-01

    In an international collaborative effort, software has been developed to estimate the velocities of glaciers by using Sentinel-1 Synthetic Aperture Radar (SAR) imagery. The technique, initially designed by the University of Erlangen-Nuremberg (FAU), has been previously used to quantify spatial and temporal variabilities in the velocities of surging glaciers in the Pakistan Karakoram. The software estimates surface velocities by first co-registering image pairs to sub-pixel precision and then by estimating local offsets based on cross-correlation. The Alaska Satellite Facility (ASF) at the University of Alaska Fairbanks (UAF) has modified the software to make it more robust and also capable of migration into the Amazon Cloud. Additionally, ASF has implemented a prototype that offers the glacier tracking processing flow as a subscription service as part of its Hybrid Pluggable Processing Pipeline (HyP3). Since the software is co-located with ASF's cloud-based Sentinel-1 archive, processing of large data volumes is now more efficient and cost effective. Velocity maps are estimated for Single Look Complex (SLC) SAR image pairs and a digital elevation model (DEM) of the local topography. A time series of these velocity maps then allows the long-term monitoring of these glaciers. Due to the all-weather capabilities and the dense coverage of Sentinel-1 data, the results are complementary to optically generated ones. Together with the products from the Global Land Ice Velocity Extraction project (GoLIVE) derived from Landsat 8 data, glacier speeds can be monitored more comprehensively. Examples from Sentinel-1 SAR-derived results are presented along with optical results for the same glaciers.
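
    The core offset-tracking step can be sketched in 1-D (the production pipeline cross-correlates 2-D SAR patches): find the correlation peak for the integer shift, then refine it to sub-pixel precision with a parabolic fit.

```python
import numpy as np

# FFT-based circular cross-correlation with parabolic sub-pixel refinement
# of the correlation peak, as used for feature/offset tracking.
def subpixel_shift(a, b):
    n = len(a)
    corr = np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))
    k = int(np.argmax(corr))                         # integer shift
    c0, c1, c2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    frac = 0.5 * (c0 - c2) / (c0 - 2.0 * c1 + c2)    # parabolic refinement
    shift = k + frac
    return shift if shift <= n / 2 else shift - n    # wrap to signed shift

x = np.arange(256, dtype=float)
ref = np.exp(-0.5 * ((x - 128.0) / 6.0) ** 2)        # feature in reference
moved = np.exp(-0.5 * ((x - 131.5) / 6.0) ** 2)      # same feature, +3.5 px

print(round(subpixel_shift(moved, ref), 1))          # -> 3.5
```

    Dividing the estimated offset by the acquisition time separation yields the surface velocity.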

  11. Decadal Western Pacific Warm Pool Variability: A Centroid and Heat Content Study.

    Science.gov (United States)

    Kidwell, Autumn; Han, Lu; Jo, Young-Heon; Yan, Xiao-Hai

    2017-10-13

    We examine several characteristics of the Western Pacific Warm Pool (WP) in the past thirty years of mixed interannual variability and climate change. Our study presents the three-dimensional WP centroid (WPC) movement, WP heat content anomaly (HC) and WP volume (WPV) on interannual to decadal time scales. We show the statistically significant correlation between each parameter's interannual anomaly and the NINO 3, NINO 3.4, NINO 4, SOI, and PDO indices. The longitudinal component of the WPC is most strongly correlated with NINO 4 (R = 0.78). The depth component of the WPC has the highest correlation (R = -0.6) with NINO 3.4. The WPV and NINO 4 have an R-value of -0.65. HC has the highest correlation with NINO 3.4 (R = -0.52). During the study period of 1982-2014, the non-linear trends, derived from ensemble empirical mode decomposition (EEMD), show that the WPV, WP depth and HC have all increased. The WPV has increased by 14% since 1982 and the HC has increased from -1 × 10^8 J/m² in 1993 to 10 × 10^8 J/m² in 2014. While the largest variances in the latitudinal and longitudinal WPC locations are associated with annual and seasonal timescales, the largest variances in the WPV and HC are due to the multi-decadal non-linear trend.

  12. Estimating Selected Streamflow Statistics Representative of 1930-2002 in West Virginia

    Science.gov (United States)

    Wiley, Jeffrey B.

    2008-01-01

    Regional equations and procedures were developed for estimating 1-, 3-, 7-, 14-, and 30-day 2-year; 1-, 3-, 7-, 14-, and 30-day 5-year; and 1-, 3-, 7-, 14-, and 30-day 10-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the 1-day, 3-year and 4-day, 3-year biologically based low-flow frequency values; the U.S. Environmental Protection Agency harmonic-mean flows; and the 10-, 25-, 50-, 75-, and 90-percent flow-duration values. Regional equations were developed using ordinary least-squares regression using statistics from 117 U.S. Geological Survey continuous streamflow-gaging stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia - North, South-Central, and Eastern Panhandle - were determined. Drainage area, precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. Estimating procedures are presented for determining statistics at a gaging station, a partial-record station, and an ungaged location. Examples of some estimating procedures are presented.

  13. Model Independent Analysis of Beam Centroid Dynamics in Accelerators

    International Nuclear Information System (INIS)

    Wang, Chun-xi

    2003-01-01

    Fundamental issues in Beam-Position-Monitor (BPM)-based beam dynamics observations are studied in this dissertation. The major topic is the Model-Independent Analysis (MIA) of beam centroid dynamics. Conventional beam dynamics analysis requires a certain machine model, which itself often needs to be refined by beam measurements. Instead of using any particular machine model, MIA relies on a statistical analysis of the vast amount of BPM data that often can be collected non-invasively during normal machine operation. There are two major parts in MIA. One is noise reduction and degrees-of-freedom analysis using a singular value decomposition of a BPM-data matrix, which constitutes a principal component analysis of BPM data. The other is a physical base decomposition of the BPM-data matrix based on the time structure of pulse-by-pulse beam and/or machine parameters. The combination of these two methods allows one to break the resolution limit set by individual BPMs and observe beam dynamics at more accurate levels. A physical base decomposition is particularly useful for understanding various beam dynamics issues. MIA improves observation and analysis of beam dynamics and thus leads to better understanding and control of beams in both linacs and rings. The statistical nature of MIA makes it potentially useful in other fields. Another important topic discussed in this dissertation is the measurement of a nonlinear Poincare section (one-turn) map in circular accelerators. The beam dynamics in a ring is intrinsically nonlinear. In fact, nonlinearities are a major factor that limits stability and influences the dynamics of halos. The Poincare section map plays a basic role in characterizing and analyzing such a periodic nonlinear system. Although many kinds of nonlinear beam dynamics experiments have been conducted, no direct measurement of a nonlinear map has been reported for a ring in normal operation mode. This dissertation analyzes various issues concerning map
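
    The SVD step of MIA is easy to demonstrate with synthetic numbers (not accelerator data): a (pulses × BPMs) matrix containing a few correlated physical modes plus independent monitor noise separates cleanly into a handful of large singular values and a noise floor.

```python
import numpy as np

# SVD of a (pulses x BPMs) orbit matrix: a few correlated "physical" modes
# stand far above the noise floor, so a rank truncation removes most of the
# uncorrelated per-BPM noise (the MIA noise-reduction step).
rng = np.random.default_rng(3)
pulses, bpms = 1000, 40

modes = rng.normal(size=(pulses, 2))             # two pulse-by-pulse modes
patterns = rng.normal(size=(2, bpms))            # their BPM signatures
data = modes @ patterns + 0.1 * rng.normal(size=(pulses, bpms))

U, sing, Vt = np.linalg.svd(data, full_matrices=False)
print(bool(sing[0] > 10 * sing[2]))              # clear physical/noise gap

rank = 2                                         # degrees of freedom kept
cleaned = U[:, :rank] * sing[:rank] @ Vt[:rank]
noise_rms = np.sqrt(((data - cleaned) ** 2).mean())
print(noise_rms < 0.11)                          # residual ~ injected noise
```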

  14. Model Independent Analysis of Beam Centroid Dynamics in Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chun-xi

    2003-04-21

    Fundamental issues in Beam-Position-Monitor (BPM)-based beam dynamics observations are studied in this dissertation. The major topic is the Model-Independent Analysis (MIA) of beam centroid dynamics. Conventional beam dynamics analysis requires a certain machine model, which itself often needs to be refined by beam measurements. Instead of using any particular machine model, MIA relies on a statistical analysis of the vast amount of BPM data that often can be collected non-invasively during normal machine operation. There are two major parts in MIA. One is noise reduction and degrees-of-freedom analysis using a singular value decomposition of a BPM-data matrix, which constitutes a principal component analysis of BPM data. The other is a physical base decomposition of the BPM-data matrix based on the time structure of pulse-by-pulse beam and/or machine parameters. The combination of these two methods allows one to break the resolution limit set by individual BPMs and observe beam dynamics at more accurate levels. A physical base decomposition is particularly useful for understanding various beam dynamics issues. MIA improves observation and analysis of beam dynamics and thus leads to better understanding and control of beams in both linacs and rings. The statistical nature of MIA makes it potentially useful in other fields. Another important topic discussed in this dissertation is the measurement of a nonlinear Poincare section (one-turn) map in circular accelerators. The beam dynamics in a ring is intrinsically nonlinear. In fact, nonlinearities are a major factor that limits stability and influences the dynamics of halos. The Poincare section map plays a basic role in characterizing and analyzing such a periodic nonlinear system. Although many kinds of nonlinear beam dynamics experiments have been conducted, no direct measurement of a nonlinear map has been reported for a ring in normal operation mode. This dissertation analyzes various issues concerning map

  15. MLC quality assurance using EPID: A fitting technique with subpixel precision

    International Nuclear Information System (INIS)

    Mamalui-Hunter, Maria; Li, Harold; Low, Daniel A.

    2008-01-01

    Amorphous silicon based electronic portal imaging devices (EPIDs) have been shown to be a good alternative to radiographic film for routine quality assurance (QA) of multileaf collimator (MLC) positioning accuracy. In this work, we present a method of analyzing an EPID image of a traditional strip test using analytical fits of the interleaf and leaf abutment image signatures. After exposure, the EPID image pixel values are divided by an open field image to remove EPID response and radiation field variations. Profiles acquired in the direction orthogonal to the leaf motion exhibit small peaks caused by interleaf leakage. Gaussian profiles are fitted to the interleaf leakage peaks, the results of which are used, via multiobjective optimization, to calculate the image rotation angle with respect to the collimator axis of rotation. The relative angle is used to rotate the image to align the MLC leaf travel to the image pixel axes. The leaf abutments also present peaks that are fitted by heuristic functions, in this case modified Lorentzian functions. The parameters of the Lorentzian functions are used to parameterize the leaf gap width and positions. By imaging a set of MLC fields with varying gaps forming symmetric and asymmetric abutments, calibration curves of relative peak height (RPH) versus nominal gap width are obtained. Based on this calibration data, the individual leaf positions are calculated to compare with the nominal programmed positions. The results demonstrate that the collimator rotation angle can be determined as accurately as 0.01 deg. A change in MLC gap width of 0.2 mm leads to a change in RPH of about 10%. For asymmetrically produced gaps, a 0.2 mm MLC leaf gap width change causes a 0.2 pixel peak position change. Subpixel resolution is obtained by using a parameterized fit of the relatively large abutment peaks. By contrast, for symmetrical gap changes, the peak position remains unchanged with a standard deviation of 0
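
    Fitting a Gaussian to a leakage peak to localize it with subpixel precision can be sketched with the log-parabola (Caruana) trick, a numpy-only illustration; the paper itself uses full nonlinear multiobjective fits:

```python
import numpy as np

# The log of a Gaussian is a parabola, so a quadratic polyfit of log(y)
# recovers the peak centre analytically, with subpixel precision.
def gaussian_peak_position(x, y):
    a, b, _ = np.polyfit(x, np.log(y), 2)      # highest-degree coeff first
    return -b / (2.0 * a)                      # parabola vertex = centre

x = np.arange(7, dtype=float)
centre = 3.3                                   # true subpixel peak position
y = 100.0 * np.exp(-0.5 * ((x - centre) / 1.2) ** 2)

print(round(gaussian_peak_position(x, y), 3))  # -> 3.3
```

    In the presence of noise and background, this closed-form estimate is typically used only to seed a full nonlinear fit.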

  16. 360-degrees profilometry using strip-light projection coupled to Fourier phase-demodulation.

    Science.gov (United States)

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-11

    360 degrees (360°) digitalization of three dimensional (3D) solids using a projected light-strip is a well-established technique in academic and commercial profilometers. These profilometers project a light-strip over the digitizing solid while the solid is rotated a full revolution or 360-degrees. Then, a computer program typically extracts the centroid of this light-strip, and by triangulation one obtains the shape of the solid. Here, instead of using intensity-based light-strip centroid estimation, we propose to use Fourier phase-demodulation for 360° solid digitalization. The advantage of Fourier demodulation over strip-centroid estimation is that the accuracy of phase-demodulation increases linearly with the fringe density, while the strip-light centroid-estimation errors are independent of it. We propose first to construct a carrier-frequency fringe-pattern by closely adding the individual light-strip images recorded while the solid is being rotated. Next, this high-density fringe-pattern is phase-demodulated using the standard Fourier technique. To test the feasibility of this Fourier demodulation approach, we have digitized two solids with increasing topographic complexity: a Rubik's cube and a plastic model of a human-skull. According to our results, phase demodulation based on the Fourier technique is less noisy than triangulation based on centroid light-strip estimation. Moreover, Fourier demodulation also provides the amplitude of the analytic signal, which is valuable information for the visualization of surface details.
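
    The standard Fourier (Takeda-style) demodulation can be sketched in 1-D: isolate the positive carrier sideband in the spectrum, inverse-transform to obtain the analytic signal, and unwrap its phase (synthetic fringes here, not the profilometer data):

```python
import numpy as np

# 1-D Fourier fringe demodulation: keep the +1 carrier sideband, take the
# analytic signal, unwrap its phase, and remove the carrier ramp; what is
# left is the phase ("height") modulation.
n = 1024
x = np.arange(n)
carrier = 128.0 / n                         # carrier frequency, cycles/pixel
phase = 2.0 * np.sin(2 * np.pi * x / n)     # "topography" to be recovered

fringes = 128.0 * (1.0 + np.cos(2 * np.pi * carrier * x + phase))

spec = np.fft.fft(fringes)
freqs = np.fft.fftfreq(n)
side = np.where((freqs > 0.06) & (freqs < 0.19), spec, 0)   # +1 sideband only

analytic = np.fft.ifft(side)                # complex analytic signal
recovered = np.unwrap(np.angle(analytic)) - 2 * np.pi * carrier * x
recovered -= recovered.mean()               # drop the arbitrary phase offset

print(np.abs(recovered - (phase - phase.mean())).max() < 0.01)
```

    The magnitude of `analytic` is the amplitude of the analytic signal mentioned in the abstract.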

  17. Self-consistent study of space-charge-dominated beams in a misaligned transport system

    International Nuclear Information System (INIS)

    Sing Babu, P.; Goswami, A.; Pandit, V.S.

    2013-01-01

    A self-consistent particle-in-cell (PIC) simulation method is developed to investigate the dynamics of space-charge-dominated beams through a misaligned solenoid based transport system. Evolution of the beam centroid, beam envelope and emittance is studied as a function of misalignment parameters for various types of beam distributions. Simulation results performed up to 40 mA of proton beam indicate that centroid oscillations induced by the displacement and rotational misalignments of solenoids do not depend on the beam distribution. It is shown that the beam envelope around the centroid is independent of the centroid motion for small centroid oscillation. In addition, we have estimated the loss of beam during the transport caused by the misalignment for various beam distributions.

  18. Bayesian ISOLA: new tool for automated centroid moment tensor inversion

    Science.gov (United States)

    Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John

    2017-04-01

    Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or poor signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) The CMT inversion is fully automated; no user interaction is required, although the details of the process can be inspected later in the many figures that are plotted automatically. (ii) The automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion. (iii) A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. (iv) A Bayesian approach is used, so not only the best solution is obtained but also the posterior probability density function. (v) A space-time grid search, effectively combined with the least-squares inversion of moment tensor components, speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data, and validated by comparison with manually processed moment tensors of all events with M ≥ 3 in the Swiss catalogue over 16 years using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data. The software package, programmed in Python, has been designed to be as versatile as possible in

  19. Dynamics of the off axis intense beam propagation in a spiral inflector

    Energy Technology Data Exchange (ETDEWEB)

    Goswami, A., E-mail: animesh@vecc.gov.in; Sing Babu, P., E-mail: psb@vecc.gov.in; Pandit, V.S., E-mail: pandit@vecc.gov.in

    2017-01-01

    In this paper, the dynamics of a space-charge-dominated beam in a spiral inflector is discussed by developing equations of motion for the centroid and beam envelope for off-axis beam propagation. The evolution of the beam centroid and beam envelope is studied as a function of the beam current for various input beam parameters. The transmission of the beam through the inflector is also estimated as a function of the beam current for on-axis and off-axis beams by tracking a large number of particles. Simulation studies show that a shift of the centroid from the axis at the inflector entrance affects the centroid location at the exit of the inflector and reduces the beam transmission. A centroid shift at the entrance in the horizontal plane (h plane) is more critical, as it affects the centroid shift in the vertical plane (u plane) by a large amount near the inflector exit, where the available aperture is small. The beam transmission is found to decrease with increasing centroid shift as well as with increasing beam current.

  20. Observational Evidence for the Effect of Amplification Bias in Gravitational Microlensing Experiments

    Science.gov (United States)

    Han, Cheongho; Jeong, Youngjin; Kim, Ho-Il

    1998-11-01

    Recently Alard, Mao, & Guibert and Alard proposed to detect the shift of a star's image centroid, δx, as a method to identify the lensed source among blended stars. Goldberg & Woźniak actually applied this method to the OGLE-1 database and found that seven of 15 events showed significant centroid shifts of δx >~ 0.2". The amount of centroid shift has been estimated theoretically by Goldberg; however, he treated the problem in general and did not apply it to a particular survey or field and therefore based his estimate on simple toy model luminosity functions (i.e., power laws). In this paper, we construct the expected distribution of δx for Galactic bulge events based on the precise stellar luminosity function observed by Holtzman et al. using the Hubble Space Telescope. Their luminosity function is complete up to MI ~ 9.0 (MV ~ 12), which corresponds to faint M-type stars. In our analysis we find that regular blending cannot produce a large fraction of events with measurable centroid shifts. By contrast, a significant fraction of events would have measurable centroid shifts if they are affected by amplification-bias blending. Therefore, the measurements of large centroid shifts for an important fraction of microlensing events of Goldberg & Woźniak confirm the prediction of Han & Alard that a large fraction of Galactic bulge events are affected by amplification-bias blending.

  1. Quick regional centroid moment tensor solutions for the Emilia 2012 (northern Italy) seismic sequence

    Directory of Open Access Journals (Sweden)

    Silvia Pondrelli

    2012-10-01

    Full Text Available In May 2012, a seismic sequence struck the Emilia region (northern Italy). The mainshock, of Ml 5.9, occurred on May 20, 2012, at 02:03 UTC. This was preceded by a smaller Ml 4.1 foreshock some hours before (23:13 UTC on May 19, 2012) and followed by more than 2,500 earthquakes in the magnitude range from Ml 0.7 to 5.2. In addition, on May 29, 2012, three further strong earthquakes occurred, all with magnitude Ml ≥ 5.2: a Ml 5.8 earthquake in the morning (07:00 UTC), followed by two events within just 5 min of each other, one at 10:55 UTC (Ml 5.3) and the second at 11:00 UTC (Ml 5.2). For all of the Ml ≥ 4.0 earthquakes in Italy and for all of the Ml ≥ 4.5 earthquakes in the Mediterranean area, an automatic procedure for the computation of a regional centroid moment tensor (RCMT) is triggered by an email alert. Within 1 h of the event, a manually revised quick RCMT (QRCMT) can be published on the website if the solution is considered stable. In particular, for the Emilia seismic sequence, 13 QRCMTs were determined, and for three of them, those with M > 5.5, the automatically computed QRCMTs fitted the criteria for publication without manual revision. Using this seismic sequence as a test, we can then identify the magnitude threshold for automatic publication of our QRCMTs.

  2. Practical target location and accuracy indicator in digital close range photogrammetry using consumer grade cameras

    Science.gov (United States)

    Moriya, Gentaro; Chikatsu, Hirofumi

    2011-07-01

    Recently, the pixel counts and functionality of consumer-grade digital cameras have increased remarkably thanks to modern semiconductor and digital technology, and many low-priced consumer-grade digital cameras with more than 10 megapixels are on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is keenly anticipated in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy has been investigated as a star-tracker problem, and many target location algorithms have been proposed. It is widely accepted that the least-squares model with ellipse fitting is the most accurate algorithm. However, problems remain for efficient digital close-range photogrammetry: reconfirmation of subpixel target location algorithms for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motive, this paper presents empirical testing of several subpixel target location algorithms, and an indicator for estimating accuracy, using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
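The simplest of the target-location algorithms surveyed above is the intensity-weighted first-moment (centre-of-gravity) centroid. The sketch below, with an invented 5×7 target image and threshold, shows how it reaches subpixel coordinates; it illustrates the general technique, not the least-squares ellipse-fitting method the record identifies as most accurate.

```python
import numpy as np

def subpixel_centroid(image, threshold=0.0):
    """Intensity-weighted first-moment centroid of a target image.

    Pixels at or below `threshold` are ignored so that background
    noise does not bias the estimate. Returns (row, col) with
    subpixel precision.
    """
    img = np.asarray(image, dtype=float)
    w = np.where(img > threshold, img, 0.0)
    total = w.sum()
    if total == 0.0:
        raise ValueError("no pixels above threshold")
    rows, cols = np.indices(img.shape)
    return (rows * w).sum() / total, (cols * w).sum() / total

# A symmetric synthetic target centred at pixel (2, 3):
img = np.zeros((5, 7))
img[2, 3] = 4.0
img[1, 3] = img[3, 3] = img[2, 2] = img[2, 4] = 1.0
r, c = subpixel_centroid(img)
```

Because the synthetic spot is symmetric, the estimate lands exactly on its centre; for an asymmetric spot the result falls between pixel centres, which is where the subpixel accuracy comes from.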

  3. Solar resources estimation combining digital terrain models and satellite images techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bosch, J.L.; Batlles, F.J. [Universidad de Almeria, Departamento de Fisica Aplicada, Ctra. Sacramento s/n, 04120-Almeria (Spain); Zarzalejo, L.F. [CIEMAT, Departamento de Energia, Madrid (Spain); Lopez, G. [EPS-Universidad de Huelva, Departamento de Ingenieria Electrica y Termica, Huelva (Spain)

    2010-12-15

    One of the most important steps in making use of any renewable energy is to perform an accurate estimation of the resource to be exploited. In the design of both active and passive solar energy systems, radiation data are required for the site, with proper spatial resolution. Generally, a network of radiometric stations is used in this evaluation, but when the stations are too dispersed or not available for the study area, satellite images can be utilized as indirect solar radiation measurements. Although satellite images cover wide areas with a good acquisition frequency, they usually have a poor spatial resolution, limited by the size of the image pixel, and irradiation must be interpolated to evaluate solar irradiation at a sub-pixel scale. When pixels are located in flat and homogeneous areas, the correlation of solar irradiation is relatively high, and classic interpolation can provide a good estimation. However, in zones of complex topography, data interpolation is not adequate and the use of Digital Terrain Model (DTM) information can be helpful. In this work, daily solar irradiation is estimated for a wide mountainous area using a combination of Meteosat satellite images and a DTM, with the advantage of avoiding the need for ground measurements. This methodology utilizes a modified Heliosat-2 model and applies under all sky conditions; it also introduces a horizon calculation for the DTM points and accounts for the effect of snow cover. Model performance has been evaluated against data measured at 12 radiometric stations, with results in terms of the Root Mean Square Error (RMSE) of 10% and a Mean Bias Error (MBE) of +2%, both expressed as a percentage of the mean measured value. (author)

  4. Optimal deep neural networks for sparse recovery via Laplace techniques

    OpenAIRE

    Limmer, Steffen; Stanczak, Slawomir

    2017-01-01

    This paper introduces Laplace techniques for designing a neural network, with the goal of estimating simplex-constraint sparse vectors from compressed measurements. To this end, we recast the problem of MMSE estimation (w.r.t. a pre-defined uniform input distribution) as the problem of computing the centroid of some polytope that results from the intersection of the simplex and an affine subspace determined by the measurements. Owing to the specific structure, it is shown that the centroid ca...

  5. Cellular Phone Towers, Cell towers developed for Appraiser's Department in 2003. Location was based upon parcel centroids, and corrected to orthophotography. Probably includes towers other than cell towers (uncertain). Not published., Published in 2003, 1:1200 (1in=100ft) scale, Sedgwick County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Cellular Phone Towers dataset current as of 2003. Cell towers developed for Appraiser's Department in 2003. Location was based upon parcel centroids, and corrected...

  6. Towards breaking the spatial resolution barriers: An optical flow and super-resolution approach for sea ice motion estimation

    Science.gov (United States)

    Petrou, Zisis I.; Xian, Yang; Tian, YingLi

    2018-04-01

    Estimation of sea ice motion at fine scales is important for a number of regional and local applications, including modeling of sea ice distribution, ocean-atmosphere and climate dynamics, as well as safe navigation and sea operations. In this study, we propose an optical flow and super-resolution approach to accurately estimate motion from remote sensing images at a higher spatial resolution than the original data. First, an external example-learning-based super-resolution method is applied to the original images to generate higher resolution versions. Then, an optical flow approach is applied to the higher resolution images, identifying sparse correspondences and interpolating them to extract a dense motion vector field with continuous values and subpixel accuracy. Our proposed approach is successfully evaluated on passive microwave, optical, and Synthetic Aperture Radar data, proving appropriate for multi-sensor applications and different spatial resolutions. The approach estimates motion with similar or higher accuracy than the original data, while increasing the spatial resolution by up to eight times. In addition, the adopted optical flow component outperforms a state-of-the-art pattern matching method. Overall, the proposed approach results in accurate motion vectors with unprecedented spatial resolutions of up to 1.5 km for passive microwave data covering the entire Arctic and 20 m for radar data, and proves promising for numerous scientific and operational applications.

  7. Using VEGETATION satellite data and the crop model STICS-Prairie to estimate pasture production at the national level in France

    Science.gov (United States)

    Di Bella, C.; Faivre, R.; Ruget, F.; Seguin, B.

    In France, pastures constitute an important land cover type, principally sustaining livestock production. The absence of low-cost methods applicable to large regions has led to the use of simulation models, as in the ISOP system. Remote sensing data may be considered a potential tool for improving diagnosis in a real-time framework. Thirteen forage regions (FR) of France, differing in their soil, climatic and productive characteristics, were selected for this purpose. SPOT4-VEGETATION images have been used to provide, using subpixel estimation models, the spectral signature corresponding to pure pasture conditions. This information has been related to growth variables estimated by the STICS-Prairie model (inside the ISOP system). Beyond the good general agreement between the two types of data, we found that the best relations were observed between the NDVI and a middle-infrared-based index (SWVI) on the one hand and leaf area index on the other. The results confirm the capacity of the satellite data to provide complementary productivity variables and help to identify the spatial and temporal differences between satellite and model information, mainly during harvesting periods. This could contribute to improving the evaluations of the model on a regional scale.
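The "subpixel estimation models" mentioned above typically amount to linear spectral unmixing: a pixel's spectrum is modelled as a linear mixture of pure endmember signatures, and the mixing fractions are recovered by least squares. The following is a minimal sketch of that general idea, not the model used in the study; the endmember spectra and band values are invented for illustration.

```python
import numpy as np

def unmix_fractions(pixel, endmembers):
    """Estimate sub-pixel cover fractions by linear unmixing.

    `endmembers` is a (bands x classes) matrix of pure spectral
    signatures; the pixel spectrum is modelled as their linear
    mixture. Fractions are clipped to [0, 1] and renormalised to
    sum to one, a common simple approximation of the fully
    constrained solution.
    """
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    f = np.clip(f, 0.0, 1.0)
    return f / f.sum()

# Two hypothetical endmembers (pasture, bare soil) in 3 bands:
E = np.array([[0.05, 0.20],
              [0.40, 0.25],
              [0.30, 0.35]])
mixed = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # a 70% pasture pixel
fractions = unmix_fractions(mixed, E)
```

Applied per pixel, this yields the fraction of pure pasture inside each coarse satellite pixel, from which a pure-pasture spectral signature can be isolated.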

  8. Evaluation of the Airborne CASI/TASI Ts-VI Space Method for Estimating Near-Surface Soil Moisture

    Directory of Open Access Journals (Sweden)

    Lei Fan

    2015-03-01

    Full Text Available High spatial resolution airborne data with little sub-pixel heterogeneity were used to evaluate the suitability of the temperature/vegetation index (Ts/VI) space method developed from satellite observations, and were explored to improve the performance of the Ts/VI space method for estimating soil moisture (SM). An evaluation of the airborne ΔTs/Fr space (incorporating air temperature) revealed that normalized difference vegetation index (NDVI) saturation and disturbed pixels were hindering the appropriate construction of the space. The non-disturbed ΔTs/Fr space, which was modified by adjusting the NDVI saturation and eliminating the disturbed pixels, was clearly correlated with the measured SM. The SM estimations of the non-disturbed ΔTs/Fr space using the evaporative fraction (EF) and the temperature vegetation dryness index (TVDI) were validated against SM measured at a depth of 4 cm, determined according to the land surface types. The validation results show that the EF approach provides superior estimates, with a lower RMSE (0.023 m³·m⁻³) and a higher correlation coefficient (0.68) than the TVDI. The application of the airborne ΔTs/Fr space shows that the two modifications proposed in this study strengthen the link between the ΔTs/Fr space and SM, which is important for improving the precision of the remote sensing Ts/VI space method for monitoring SM.

  9. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    International Nuclear Information System (INIS)

    Lerche, Ch.W.; Ros, A.; Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A.; Sanchez, F.; Benlloch, J.M.

    2009-01-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position-sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position-sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42 mm × 42 mm × 10 mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.
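The two inputs of the maximum-likelihood estimator described above, the centroid and the standard deviation of the scintillation light distribution, are simply the first and second moments of the measured signals. A minimal sketch of computing them from a sampled one-dimensional light distribution (the sample values are invented for illustration; this is not the authors' implementation):

```python
import numpy as np

def light_moments(signal, positions):
    """First and second moments of a sampled light distribution.

    Returns the centroid (first moment) and the standard deviation
    (square root of the second central moment), the two quantities a
    charge-division circuit can deliver for position estimation.
    """
    s = np.asarray(signal, dtype=float)
    x = np.asarray(positions, dtype=float)
    total = s.sum()
    centroid = (x * s).sum() / total
    var = (s * (x - centroid) ** 2).sum() / total
    return centroid, np.sqrt(var)

# A symmetric light distribution centred at x = 2:
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
s = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
c, sigma = light_moments(s, x)
```

Near the crystal edges the centroid alone is biased toward the centre; the standard deviation narrows there too, which is the extra information the maximum-likelihood step exploits.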

  10. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, Ch.W. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain)], E-mail: lerche@ific.uv.es; Ros, A. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain); Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain); Sanchez, F.; Benlloch, J.M. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain)

    2009-06-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position-sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position-sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42 mm × 42 mm × 10 mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.

  11. Development of a Standardized Methodology for the Use of COSI-Corr Sub-Pixel Image Correlation to Determine Surface Deformation Patterns in Large Magnitude Earthquakes.

    Science.gov (United States)

    Milliner, C. W. D.; Dolan, J. F.; Hollingsworth, J.; Leprince, S.; Ayoub, F.

    2014-12-01

    Coseismic surface deformation is typically measured in the field by geologists and with a range of geophysical methods such as InSAR, LiDAR and GPS. Current methods, however, either fail to capture the near-field coseismic surface deformation pattern where vital information is needed, or lack pre-event data. We develop a standardized and reproducible methodology to fully constrain the near-field coseismic surface deformation pattern in high resolution using aerial photography. We apply our methodology using the program COSI-Corr to successfully cross-correlate pairs of aerial optical imagery from before and after the 1992 Mw 7.3 Landers and 1999 Mw 7.1 Hector Mine earthquakes. This technique allows measurement of the coseismic slip distribution and of the magnitude and width of off-fault deformation with sub-pixel precision. It can be applied in a cost-effective manner to recent and historic earthquakes using archive aerial imagery. We also use synthetic tests to constrain and correct for the bias imposed on the result by the use of a sliding window during correlation. Correcting for artificial smearing of the tectonic signal allows us to robustly measure the fault zone width along a surface rupture. Furthermore, the synthetic tests have constrained, for the first time, the measurement precision and accuracy of estimated fault displacements and fault-zone width. Our methodology provides the unique ability to robustly understand the kinematics of surface faulting while accounting for both off-fault deformation and the measurement biases that typically complicate such data. For both earthquakes we find that our displacement measurements derived from cross-correlation are systematically larger than the field displacement measurements, indicating the presence of off-fault deformation. We show that the Landers and Hector Mine earthquakes accommodated 46% and 38% of displacement, respectively, away from the primary rupture as off-fault deformation, over a mean
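COSI-Corr itself performs sub-pixel correlation in the frequency domain; as a simpler illustration of how a sub-pixel shift can be recovered from a correlation surface at all, the sketch below refines an integer correlation peak with the classic three-point parabolic fit (the correlation values are invented for illustration).

```python
import numpy as np

def subpixel_peak_1d(corr):
    """Locate a correlation peak with sub-pixel precision.

    Takes the integer argmax and fits a parabola through it and its
    two neighbours; the parabola's vertex gives the fractional
    offset. This is the classic refinement applied after
    integer-pixel image matching.
    """
    c = np.asarray(corr, dtype=float)
    i = int(np.argmax(c))
    if i == 0 or i == len(c) - 1:
        return float(i)                 # no neighbours to fit
    denom = c[i - 1] - 2.0 * c[i] + c[i + 1]
    offset = 0.5 * (c[i - 1] - c[i + 1]) / denom
    return i + offset

# Correlation samples whose true peak lies between indices 2 and 3:
corr = [0.2, 0.6, 1.0, 0.8, 0.1]
peak = subpixel_peak_1d(corr)
```

In two dimensions the same fit is applied independently along rows and columns of the correlation surface, yielding the two components of the displacement vector.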

  12. A New Scrambling Evaluation Scheme Based on Spatial Distribution Entropy and Centroid Difference of Bit-Plane

    Science.gov (United States)

    Zhao, Liang; Adhikari, Avishek; Sakurai, Kouichi

    Watermarking is one of the most effective techniques for copyright protection and information hiding, and it can be applied in many fields of our society. Nowadays, image scrambling schemes are used as one part of a watermarking algorithm to enhance security, so how to select an image scrambling scheme, and what kind of image scrambling scheme may be used for watermarking, are key problems. An evaluation method for image scrambling schemes can be seen as a useful test tool for revealing the properties or flaws of a scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and the centroid difference of bit-planes is presented to obtain the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results show (in Figs. 6 and 7) that for a general gray-scale image, the evaluation degree of the corresponding cipher image for the first 4 significant bit-planes is nearly the same as that for all 8 bit-planes. Hence, instead of taking the 8 bit-planes of a gray-scale image, it is sufficient to take only the first 4 significant bit-planes to find the scrambling degree. This roughly 50% reduction in computational cost makes our scheme efficient.
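A sketch of the bit-plane decomposition the scheme is built on: extracting the most significant bit-planes of an 8-bit image and computing the centroid of the set pixels in each. This only illustrates the decomposition and centroid step, not the full spatial-distribution-entropy evaluation; the image values are invented.

```python
import numpy as np

def bitplane_centroids(image, planes=4):
    """Centroid of each of the most significant bit-planes.

    Extracts the top `planes` bit-planes of an 8-bit image and
    returns the (row, col) centroid of the set pixels in each
    plane; comparing these centroids between the plain and the
    scrambled image gives the centroid-difference ingredient of
    the evaluation.
    """
    img = np.asarray(image, dtype=np.uint8)
    out = []
    for b in range(7, 7 - planes, -1):          # MSB first
        plane = (img >> b) & 1
        if plane.sum() == 0:
            out.append((np.nan, np.nan))        # empty plane
            continue
        rows, cols = np.nonzero(plane)
        out.append((rows.mean(), cols.mean()))
    return out

# Tiny example: the MSB is set at (0, 0) and (1, 1).
img = np.array([[128, 0],
                [0, 128]], dtype=np.uint8)
cents = bitplane_centroids(img, planes=1)
```

A good scrambling scheme moves these per-plane centroids unpredictably; a weak one leaves them nearly unchanged.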

  13. Detection, emission estimation and risk prediction of forest fires in China using satellite sensors and simulation models in the past three decades--an overview.

    Science.gov (United States)

    Zhang, Jia-Hua; Yao, Feng-Mei; Liu, Cheng; Yang, Li-Min; Boken, Vijendra K

    2011-08-01

    Forest fires have a major impact on ecosystems and greatly affect the amount of greenhouse gases and aerosols in the atmosphere. This paper presents an overview of forest fire detection, emission estimation, and fire risk prediction in China using satellite imagery, climate data, and various simulation models over the past three decades. Since the 1980s, remotely sensed data acquired by many satellites, such as NOAA/AVHRR, the FY series, MODIS, CBERS, and ENVISAT, have been widely utilized for detecting forest fire hot spots and burned areas in China. Several algorithms have been developed for detecting forest fire hot spots at the sub-pixel level. With respect to modeling forest burning emissions, a remote sensing data-driven Net Primary Productivity (NPP) estimation model was developed for estimating forest biomass and fuel. In order to improve forest fire risk modeling in China, real-time meteorological data, such as surface temperature, relative humidity, and wind speed and direction, have been used as model input to improve the prediction of forest fire occurrence and behavior. The shortwave infrared (SWIR) and near infrared (NIR) channels of satellite sensors have been employed for detecting live fuel moisture content (FMC), and the Normalized Difference Water Index (NDWI) has been used for evaluating forest vegetation condition and moisture status.
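The NDWI mentioned above is a simple normalized band ratio between the NIR and SWIR channels. A minimal sketch with invented reflectance values (the exact band definitions vary between sensors):

```python
import numpy as np

def ndwi(nir, swir):
    """Normalized Difference Water Index, (NIR - SWIR) / (NIR + SWIR),
    a proxy for vegetation water content: moist canopies reflect
    strongly in the NIR but absorb in the SWIR."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir)

# Hypothetical reflectances for a moist and a dry canopy:
moist = ndwi(0.45, 0.15)
dry = ndwi(0.30, 0.30)
```

Higher NDWI indicates wetter fuel and hence lower fire risk, which is how the index feeds into fire danger assessment.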

  14. A probabilistic sampling method (PSM) for estimating geographic distance to health services when only the region of residence is known

    Directory of Open Access Journals (Sweden)

    Peek-Asa Corinne

    2011-01-01

    Full Text Available Abstract Background The need to estimate the distance from an individual to a service provider is common in public health research. However, estimated distances are often imprecise and, we suspect, biased due to a lack of specific residential location data. In many cases, to protect subject confidentiality, data sets contain only a ZIP Code or a county. Results This paper describes an algorithm, known as "the probabilistic sampling method" (PSM), which was used to create a distribution of estimated distances to a health facility for a person whose region of residence was known, but for which demographic details and centroids were known for smaller areas within the region. From this distribution, the median distance is the most likely distance to the facility. The algorithm, using Monte Carlo sampling methods, drew a probabilistic sample of all the smaller areas (Census blocks) within each participant's reported region (ZIP Code), weighting these areas by the number of residents in the same age group as the participant. To test the PSM, we used data from a large cross-sectional study that screened women at a clinic for intimate partner violence (IPV). We had data on each woman's age and ZIP Code, but no precise residential address. We used the PSM to select a sample of census blocks, then calculated network distances from each census block's centroid to the closest IPV facility, resulting in a distribution of distances from these locations to the geocoded locations of known IPV services. We selected the median distance as the most likely distance traveled and computed confidence intervals that describe the shortest and longest distance within which any given percent of the distance estimates lie. We compared our results to those obtained using two other geocoding approaches. We show that one method overestimated the most likely distance and the other underestimated it.
Neither of the alternative methods produced confidence intervals for the distance

  15. A probabilistic sampling method (PSM) for estimating geographic distance to health services when only the region of residence is known

    Science.gov (United States)

    2011-01-01

    Background The need to estimate the distance from an individual to a service provider is common in public health research. However, estimated distances are often imprecise and, we suspect, biased due to a lack of specific residential location data. In many cases, to protect subject confidentiality, data sets contain only a ZIP Code or a county. Results This paper describes an algorithm, known as "the probabilistic sampling method" (PSM), which was used to create a distribution of estimated distances to a health facility for a person whose region of residence was known, but for which demographic details and centroids were known for smaller areas within the region. From this distribution, the median distance is the most likely distance to the facility. The algorithm, using Monte Carlo sampling methods, drew a probabilistic sample of all the smaller areas (Census blocks) within each participant's reported region (ZIP Code), weighting these areas by the number of residents in the same age group as the participant. To test the PSM, we used data from a large cross-sectional study that screened women at a clinic for intimate partner violence (IPV). We had data on each woman's age and ZIP Code, but no precise residential address. We used the PSM to select a sample of census blocks, then calculated network distances from each census block's centroid to the closest IPV facility, resulting in a distribution of distances from these locations to the geocoded locations of known IPV services. We selected the median distance as the most likely distance traveled and computed confidence intervals that describe the shortest and longest distance within which any given percent of the distance estimates lie. We compared our results to those obtained using two other geocoding approaches. We show that one method overestimated the most likely distance and the other underestimated it. Neither of the alternative methods produced confidence intervals for the distance estimates. 
The algorithm
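The core of the PSM described in the two records above is a population-weighted Monte Carlo draw of census blocks, followed by taking the median and percentiles of the resulting distance distribution. A minimal sketch with invented block data (not the authors' implementation):

```python
import random
import statistics

def psm_median_distance(blocks, n_draws=10000, seed=42):
    """Probabilistic sampling method (PSM) sketch.

    `blocks` is a list of (distance_to_facility, population_in_age_group)
    pairs for the census blocks inside the subject's ZIP Code. Blocks
    are drawn with probability proportional to population; the median
    of the sampled distances is the most likely travel distance, and
    sample percentiles give a confidence interval.
    """
    rng = random.Random(seed)
    distances = [d for d, _ in blocks]
    weights = [w for _, w in blocks]
    sample = sorted(rng.choices(distances, weights=weights, k=n_draws))
    lo = sample[int(0.025 * n_draws)]    # 2.5th percentile
    hi = sample[int(0.975 * n_draws)]    # 97.5th percentile
    return statistics.median(sample), (lo, hi)

# Three hypothetical blocks: most residents live about 2.0 km away.
blocks = [(1.2, 50), (2.0, 400), (6.5, 50)]
median_d, ci = psm_median_distance(blocks)
```

In the real method each block's distance would come from a network-distance calculation between the block centroid and the nearest facility; the fixed seed here only makes the sketch reproducible.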

  16. Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle

    Science.gov (United States)

    Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon

    2018-03-01

    Shack-Hartmann wavefront sensors using wound fibre image bundles are desirable for multi-object adaptive optical systems, providing a large multiplex positioned by Starbugs. The use of a large-sized wound fibre image bundle gives the flexibility to use more sub-apertures in the wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of structural defects in the wound fibre image bundle on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle, and the impact of its structure on wavefront measurement accuracy statistics, are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with a low signal-to-noise ratio, the accuracy is instead limited by the read noise of the detector rather than by the structural defects of the bundle. We demonstrate this both in simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle, determined through experiment.

  17. Validation of an elastic registration technique to estimate anatomical lung modification in Non-Small-Cell Lung Cancer Tomotherapy

    International Nuclear Information System (INIS)

    Faggiano, Elena; Cattaneo, Giovanni M; Ciavarro, Cristina; Dell'Oca, Italo; Persano, Diego; Calandrino, Riccardo; Rizzo, Giovanna

    2011-01-01

    The study of anatomical modification of the lung parenchyma is useful for estimating dose discrepancies during the radiation treatment of Non-Small-Cell Lung Cancer (NSCLC) patients. We propose and validate a method, based on free-form deformation and mutual information, to elastically register planning kVCT with daily MVCT images, in order to estimate lung parenchyma modification during Tomotherapy. We analyzed 15 registrations between the planning kVCT and 3 MVCT images for each of 5 NSCLC patients. Image registration accuracy was evaluated by visual inspection and, quantitatively, by Correlation Coefficients (CC) and Target Registration Errors (TRE). Finally, a lung volume correspondence analysis was performed to specifically evaluate registration accuracy in the lungs. Results showed that elastic registration was always satisfactory, both qualitatively and quantitatively: TRE after elastic registration (average value of 3.6 mm) remained comparable to, and often smaller than, the voxel resolution. Lung volume variations were well estimated by elastic registration (average volume and centroid errors of 1.78% and 0.87 mm, respectively). Our results demonstrate that this method is able to estimate lung deformations in thorax MVCT with an accuracy within 3.6 mm, comparable to or smaller than the voxel dimension of the kVCT and MVCT images. It could be used to estimate lung parenchyma dose variations in thoracic Tomotherapy.

  18. Estimating tree bole volume using artificial neural network models for four species in Turkey.

    Science.gov (United States)

    Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V

    2010-01-01

    Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages, including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used, producing a backpropagation (BPANN) and a cascade-correlation (CCANN) artificial neural network model. In addition, the tree bole volume estimates were compared with other established estimation techniques, including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and of the traditional methods is presented, and the advantages and limitations of each are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (on average 1 m) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade-correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species, since they gave unbiased results and were superior to almost all other methods in terms of error (%), expressed as the mean of the percentage errors.
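Smalian's formula, used above to aggregate the validation volumes, approximates each section's volume as the mean of its two end cross-sectional areas times the section length. A minimal sketch with invented diameters:

```python
import math

def smalian_section_volume(d_lower, d_upper, length):
    """Smalian's formula: section volume as the average of the two
    end cross-sectional areas times the section length.
    Diameters and length in metres; volume in cubic metres.
    """
    a_lower = math.pi * (d_lower / 2.0) ** 2
    a_upper = math.pi * (d_upper / 2.0) ** 2
    return 0.5 * (a_lower + a_upper) * length

def bole_volume(diameters, section_length=1.0):
    """Total bole volume from diameters measured at successive
    section ends, aggregated section by section."""
    return sum(smalian_section_volume(d1, d2, section_length)
               for d1, d2 in zip(diameters, diameters[1:]))

# A tapering stem measured every metre (hypothetical diameters, m):
v = bole_volume([0.30, 0.26, 0.22, 0.18])
```

Because it assumes a paraboloid between measurement points, Smalian's formula is applied to short sections (here 1 m), which keeps the aggregation error small.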

  19. The influence of multi-season imagery on models of canopy cover: A case study

    Science.gov (United States)

    John W. Coulston; Dennis M. Jacobs; Chris R. King; Ivey C. Elmore

    2013-01-01

    Quantifying tree canopy cover in a spatially explicit fashion is important for broad-scale monitoring of ecosystems and for management of natural resources. Researchers have developed empirical models of tree canopy cover to produce geospatial products. For subpixel models, percent tree canopy cover estimates (derived from fine-scale imagery) serve as the response...

  20. Automatic detection and quantitative analysis of cells in the mouse primary motor cortex

    Science.gov (United States)

    Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui

    2014-09-01

    Neuronal cells play a very important role in metabolic regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired at 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection, and provide three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall of 92.1% and an average precision of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum using the output vectorizations of detected cell centroids in mouse primary motor cortex, and find significant variations in cellular density across layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
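
    Steps ii) and iii) of a pipeline like the one above — binarization followed by centroid extraction — can be sketched as follows; the threshold, 4-connectivity, and synthetic image are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from collections import deque

def detect_cell_centroids(image, threshold):
    """Binarize the image, label 4-connected components with a BFS,
    and return the centroid (row, col) of each component."""
    binary = image > threshold
    seen = np.zeros_like(binary, dtype=bool)
    centroids = []
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] and not seen[r, c]:
                queue, pixels = deque([(r, c)]), []
                seen[r, c] = True
                while queue:  # flood-fill one blob
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

img = np.zeros((10, 10))
img[2:4, 2:4] = 1.0   # 2x2 blob, centroid (2.5, 2.5)
img[7, 7] = 1.0       # single pixel, centroid (7.0, 7.0)
print(detect_cell_centroids(img, 0.5))
```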

  1. Machine Learning on Images: Combining Passive Microwave and Optical Data to Estimate Snow Water Equivalent

    Science.gov (United States)

    Dozier, J.; Tolle, K.; Bair, N.

    2014-12-01

    We have a problem that may be a specific example of a generic one. The task is to estimate spatiotemporally distributed estimates of snow water equivalent (SWE) in snow-dominated mountain environments, including those that lack on-the-ground measurements. Several independent methods exist, but all are problematic. The remotely sensed date of disappearance of snow from each pixel can be combined with a calculation of melt to reconstruct the accumulated SWE for each day back to the last significant snowfall. Comparison with streamflow measurements in mountain ranges where such data are available shows this method to be accurate, but the big disadvantage is that SWE can only be calculated retroactively after snow disappears, and even then only for areas with little accumulation during the melt season. Passive microwave sensors offer real-time global SWE estimates but suffer from several issues, notably signal loss in wet snow or in forests, saturation in deep snow, subpixel variability in the mountains owing to the large (~25 km) pixel size, and SWE overestimation in the presence of large grains such as depth and surface hoar. Throughout the winter and spring, snow-covered area can be measured at sub-km spatial resolution with optical sensors, with accuracy and timeliness improved by interpolating and smoothing across multiple days. So the question is, how can we establish the relationship between Reconstruction—available only after the snow goes away—and passive microwave and optical data to accurately estimate SWE during the snow season, when the information can help forecast spring runoff? Linear regression provides one answer, but can modern machine learning techniques (used to persuade people to click on web advertisements) adapt to improve forecasts of floods and droughts in areas where more than one billion people depend on snowmelt for their water resources?
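
    The reconstruction approach described above — accumulating computed melt backward from the remotely sensed snow-disappearance date — can be sketched as follows; the function name and daily melt series are hypothetical:

```python
def reconstruct_swe(daily_melt_mm):
    """Reconstruct SWE for each day by summing modeled melt from that
    day forward to the day the pixel became snow-free.
    daily_melt_mm: melt depth per day, ending the day snow disappears.
    Returns the reconstructed SWE (mm) at the start of each day."""
    swe = []
    remaining = sum(daily_melt_mm)
    for melt in daily_melt_mm:
        swe.append(remaining)
        remaining -= melt
    return swe

# Hypothetical 5-day melt-out: reconstructed SWE declines to zero
print(reconstruct_swe([5.0, 10.0, 15.0, 10.0, 5.0]))  # → [45.0, 40.0, 30.0, 15.0, 5.0]
```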

  2. Evaluating Vegetation Type Effects on Land Surface Temperature at the City Scale

    Science.gov (United States)

    Wetherley, E. B.; McFadden, J. P.; Roberts, D. A.

    2017-12-01

    Understanding the effects of different plant functional types and urban materials on surface temperatures has significant consequences for climate modeling, water management, and human health in cities. To date, doing so at the urban scale has been complicated by small-scale surface heterogeneity and limited data. In this study we examined gradients of land surface temperature (LST) across sub-pixel mixtures of different vegetation types and urban materials across the entire Los Angeles, CA, metropolitan area (4,283 km2). We used AVIRIS airborne hyperspectral imagery (36 m resolution, 224 bands, 0.35 - 2.5 μm) to estimate sub-pixel fractions of impervious, pervious, tree, and turfgrass surfaces, validating them with simulated mixtures constructed from image spectra. We then used simultaneously imaged LST retrievals collected at multiple times of day to examine how temperature changed along gradients of the sub-pixel mixtures. Diurnal in situ LST measurements were used to confirm image values. Sub-pixel fractions were well correlated with simulated validation data for turfgrass (r2 = 0.71), tree (r2 = 0.77), impervious (r2 = 0.77), and pervious (r2 = 0.83) surfaces. The LST of pure pixels showed the effects of both the diurnal cycle and the surface type, with vegetated classes having a smaller diurnal temperature range of 11.6°C whereas non-vegetated classes had a diurnal range of 16.2°C (similar to in situ measurements collected simultaneously with the imagery). Observed LST across fractional gradients of turf/impervious and tree/impervious sub-pixel mixtures decreased linearly with increasing vegetation fraction. The slopes of decreasing LST were significantly different between tree and turf mixtures, with steeper slopes observed for turf (p < 0.05). These results suggest that different physiological characteristics and different access to irrigation water of urban trees and turfgrass results in significantly different LST effects, which can be detected at
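
    Sub-pixel surface fractions of the kind described above are commonly obtained by linear spectral unmixing; a minimal unconstrained least-squares sketch with synthetic endmember spectra (real workflows add sum-to-one and non-negativity constraints and use libraries of endmembers):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate sub-pixel fractions by linear least squares:
    pixel ≈ endmembers @ fractions, with endmembers of shape
    (bands, n_classes). Unconstrained, for illustration only."""
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return fractions

# Synthetic 3-band endmembers: impervious, tree, turf (columns)
E = np.array([[0.30, 0.05, 0.08],
              [0.35, 0.25, 0.45],
              [0.40, 0.15, 0.30]])
true_frac = np.array([0.5, 0.2, 0.3])
pixel = E @ true_frac          # perfectly mixed pixel
print(np.round(unmix(pixel, E), 3))
```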

  3. Positron annihilation studies in the field induced depletion regions of metal-oxide-semiconductor structures

    Science.gov (United States)

    Asoka-Kumar, P.; Leung, T. C.; Lynn, K. G.; Nielsen, B.; Forcier, M. P.; Weinberg, Z. A.; Rubloff, G. W.

    1992-06-01

    The centroid shifts of positron annihilation spectra are reported from the depletion regions of metal-oxide-semiconductor (MOS) capacitors at room temperature and at 35 K. The centroid shift measurement can be explained using the variation of the electric field strength and depletion layer thickness as a function of the applied gate bias. An estimate for the relevant MOS quantities is obtained by fitting the centroid shift versus beam energy data with a steady-state diffusion-annihilation equation and a derivative-gaussian positron implantation profile. Inadequacy of the present analysis scheme is evident from the derived quantities and alternate methods are required for better predictions.

  4. Positron annihilation studies in the field induced depletion regions of metal-oxide-semiconductor structures

    International Nuclear Information System (INIS)

    Asoka-Kumar, P.; Leung, T.C.; Lynn, K.G.; Nielsen, B.; Forcier, M.P.; Weinberg, Z.A.; Rubloff, G.W.

    1992-01-01

    The centroid shifts of positron annihilation spectra are reported from the depletion regions of metal-oxide-semiconductor (MOS) capacitors at room temperature and at 35 K. The centroid shift measurement can be explained using the variation of the electric field strength and depletion layer thickness as a function of the applied gate bias. An estimate for the relevant MOS quantities is obtained by fitting the centroid shift versus beam energy data with a steady-state diffusion-annihilation equation and a derivative-gaussian positron implantation profile. Inadequacy of the present analysis scheme is evident from the derived quantities and alternate methods are required for better predictions

  5. Detection, Emission Estimation and Risk Prediction of Forest Fires in China Using Satellite Sensors and Simulation Models in the Past Three Decades—An Overview

    Directory of Open Access Journals (Sweden)

    Cheng Liu

    2011-07-01

    Full Text Available Forest fires have major impacts on ecosystems and greatly affect the amount of greenhouse gases and aerosols in the atmosphere. This paper presents an overview of forest fire detection, emission estimation, and fire risk prediction in China using satellite imagery, climate data, and various simulation models over the past three decades. Since the 1980s, remotely-sensed data acquired by many satellites, such as NOAA/AVHRR, FY-series, MODIS, CBERS, and ENVISAT, have been widely utilized for detecting forest fire hot spots and burned areas in China. Some developed algorithms have been utilized for detecting the forest fire hot spots at a sub-pixel level. With respect to modeling the forest burning emission, a remote sensing data-driven Net Primary productivity (NPP) estimation model was developed for estimating forest biomass and fuel. In order to improve the forest fire risk modeling in China, real-time meteorological data, such as surface temperature, relative humidity, wind speed and direction, have been used as the model input for improving prediction of forest fire occurrence and its behavior. Shortwave infrared (SWIR) and near infrared (NIR) channels of satellite sensors have been employed for detecting live fuel moisture content (FMC), and the Normalized Difference Water Index (NDWI) was used for evaluating the forest vegetation condition and its moisture status.

  6. Detection, Emission Estimation and Risk Prediction of Forest Fires in China Using Satellite Sensors and Simulation Models in the Past Three Decades—An Overview

    Science.gov (United States)

    Zhang, Jia-Hua; Yao, Feng-Mei; Liu, Cheng; Yang, Li-Min; Boken, Vijendra K.

    2011-01-01

    Forest fires have major impacts on ecosystems and greatly affect the amount of greenhouse gases and aerosols in the atmosphere. This paper presents an overview of forest fire detection, emission estimation, and fire risk prediction in China using satellite imagery, climate data, and various simulation models over the past three decades. Since the 1980s, remotely-sensed data acquired by many satellites, such as NOAA/AVHRR, FY-series, MODIS, CBERS, and ENVISAT, have been widely utilized for detecting forest fire hot spots and burned areas in China. Some developed algorithms have been utilized for detecting the forest fire hot spots at a sub-pixel level. With respect to modeling the forest burning emission, a remote sensing data-driven Net Primary productivity (NPP) estimation model was developed for estimating forest biomass and fuel. In order to improve the forest fire risk modeling in China, real-time meteorological data, such as surface temperature, relative humidity, wind speed and direction, have been used as the model input for improving prediction of forest fire occurrence and its behavior. Shortwave infrared (SWIR) and near infrared (NIR) channels of satellite sensors have been employed for detecting live fuel moisture content (FMC), and the Normalized Difference Water Index (NDWI) was used for evaluating the forest vegetation condition and its moisture status. PMID:21909297

  7. Comparison of ArcGIS and SAS Geostatistical Analyst to Estimate Population-Weighted Monthly Temperature for US Counties.

    Science.gov (United States)

    Xiaopeng, Q I; Liang, Wei; Barker, Laurie; Lekiachvili, Akaki; Xingyou, Zhang

    Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly or 30-day basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of the models was assessed by comparing adjusted R², mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias than cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R² range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both ArcGIS and SAS are reliable for U.S. county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects.

  8. Estimation and mapping of above ground biomass and carbon of ...

    African Journals Online (AJOL)

    In addition, field data from 35 sample plots comprising of the Diameter at Breast Height (DBH), co-ordinates of centroids and angles to the top and bottom of the individual trees was used for the analysis. The relationship between biomass and radar backscatter for selected sample plots was established using pairwise ...

  9. Reflective photovoltaics

    Energy Technology Data Exchange (ETDEWEB)

    Lentine, Anthony L.; Nielson, Gregory N.; Cruz-Campa, Jose Luis; Okandan, Murat; Goeke, Ronald S.

    2018-03-06

    A photovoltaic module includes colorized reflective photovoltaic cells that act as pixels. The colorized reflective photovoltaic cells are arranged so that reflections from the photovoltaic cells or pixels visually combine into an image on the photovoltaic module. The colorized photovoltaic cell or pixel is composed of a set of 100 to 256 base color sub-pixel reflective segments or sub-pixels. The color of each pixel is determined by the combination of base color sub-pixels forming the pixel. As a result, each pixel can have a wide variety of colors using a set of base colors, which are created, from sub-pixel reflective segments having standard film thicknesses.

  10. Component optimization of dairy manure vermicompost, straw, and peat in seedling compressed substrates using simplex-centroid design.

    Science.gov (United States)

    Yang, Longyuan; Cao, Hongliang; Yuan, Qiaoxia; Luoa, Shuai; Liu, Zhigang

    2018-03-01

    Vermicomposting is a promising method for disposing of dairy manure, and using dairy manure vermicompost (DMV) to replace expensive peat is of high value in seedling compressed substrates. In this research, three main components, DMV, straw, and peat, are combined in compressed substrates, and the effects of the individual components and their optimal ratio on seedling production are significant. To address these issues, a simplex-centroid experimental mixture design is employed, and a cucumber seedling experiment is conducted to evaluate the compressed substrates. Results demonstrated that the mechanical strength and physicochemical properties of compressed substrates for cucumber seedlings can be well satisfied with a suitable mixture ratio of the components. Moreover, the optimal ratio of DMV, straw, and peat could be determined as 0.5917:0.1608:0.2475 when the weight coefficients of the three parameters (shoot length, root dry weight, and aboveground dry weight) were 1:1:1. For other purposes, the optimal ratio can be adjusted slightly on the basis of different weight coefficients. A compressed substrate is a lump with a certain mechanical strength, produced by applying mechanical pressure to the seedling substrate. It will not harm seedlings when they are bedded out, since the compressed substrate and seedling are bedded out together. However, vermicompost and agricultural-waste components have not previously been used in compressed substrates for vegetable seedling production. Thus, it is important to understand the effect of the individual components on seedling production, and to determine the optimal ratio of components.
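
    A simplex-centroid design of the kind used above evaluates every non-empty subset of the q components blended in equal proportions (pure blends, 1:1 binary blends, ..., the overall centroid). A small generator sketch (illustrative, not the authors' code):

```python
from itertools import combinations

def simplex_centroid_design(components):
    """Return the 2**q - 1 design points of a simplex-centroid
    mixture design: each non-empty subset of components mixed in
    equal proportions."""
    q = len(components)
    points = []
    for k in range(1, q + 1):
        for subset in combinations(range(q), k):
            points.append([1.0 / k if i in subset else 0.0
                           for i in range(q)])
    return points

for p in simplex_centroid_design(["DMV", "straw", "peat"]):
    print(p)
```

    For three components this yields seven runs: three pure blends, three binary 1:1 blends, and the ternary centroid.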

  11. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    International Nuclear Information System (INIS)

    Poynee, L A

    2003-01-01

    Shack-Hartmann based Adaptive Optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
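
    The two estimators compared in this paper can be sketched in a few lines: center-of-mass centroiding, and locating the peak of the correlation with an ideal reference spot. This is an integer-pixel FFT correlation on a synthetic delta-function spot; a real sensor would refine the peak to sub-pixel precision:

```python
import numpy as np

def centroid(spot):
    """Center-of-mass slope estimate (the baseline method)."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (ys * spot).sum() / total, (xs * spot).sum() / total

def correlation_peak(spot, reference):
    """Correlate the spot with an ideal reference via FFT and return
    the integer-pixel shift of the correlation peak."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(spot) *
                                np.conj(np.fft.fft2(reference))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret peaks past the array midpoint as negative shifts
    return tuple(p - s if p > s // 2 else p
                 for p, s in zip(peak, corr.shape))

ref = np.zeros((16, 16)); ref[8, 8] = 1.0
spot = np.zeros((16, 16)); spot[10, 11] = 1.0  # ref shifted by (2, 3)
print(centroid(spot))               # → (10.0, 11.0)
print(correlation_peak(spot, ref))  # → (2, 3)
```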

  12. PyCCF: Python Cross Correlation Function for reverberation mapping studies

    Science.gov (United States)

    Sun, Mouyuan; Grier, C. J.; Peterson, B. M.

    2018-05-01

    PyCCF emulates a Fortran program written by B. Peterson for use with reverberation mapping. The code cross correlates two light curves that are unevenly sampled using linear interpolation and measures the peak and centroid of the cross-correlation function. In addition, it is possible to run Monte Carlo iterations using flux randomization and random subset selection (RSS) to produce cross-correlation centroid distributions to estimate the uncertainties in the cross correlation results.
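
    The interpolated cross-correlation idea can be sketched as follows; this simplified version (hypothetical names, no flux randomization or RSS) interpolates one light curve onto lag-shifted times of the other and records the Pearson correlation for each trial lag:

```python
import numpy as np

def iccf(t1, f1, t2, f2, lags):
    """Interpolated cross-correlation: for each trial lag, shift the
    second curve's time axis, linearly interpolate the first curve
    onto those times, and record Pearson r (a simplified version of
    what PyCCF computes)."""
    rs = []
    for lag in lags:
        interp = np.interp(t2 - lag, t1, f1)
        rs.append(np.corrcoef(interp, f2)[0, 1])
    return np.array(rs)

# Synthetic example: the second curve lags the first by 5 days
t1 = np.linspace(0.0, 100.0, 201)
f1 = np.sin(t1 / 10.0)
t2 = np.linspace(20.0, 80.0, 121)   # kept interior to avoid edge effects
f2 = np.sin((t2 - 5.0) / 10.0)
lags = np.arange(-15, 16)
r = iccf(t1, f1, t2, f2, lags)
print(lags[np.argmax(r)])  # → 5
```

    PyCCF additionally reports the centroid of the CCF above a peak-fraction threshold and builds Monte Carlo distributions of that centroid for uncertainty estimation.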

  13. AROSICS: An Automated and Robust Open-Source Image Co-Registration Software for Multi-Sensor Satellite Data

    Directory of Open Access Journals (Sweden)

    Daniel Scheffler

    2017-07-01

    Full Text Available Geospatial co-registration is a mandatory prerequisite when dealing with remote sensing data. Inter- or intra-sensoral misregistration will negatively affect any subsequent image analysis, specifically when processing multi-sensoral or multi-temporal data. In recent decades, many algorithms have been developed to enable manual, semi- or fully automatic displacement correction. Especially in the context of big data processing and the development of automated processing chains that aim to be applicable to different remote sensing systems, there is a strong need for efficient, accurate and generally usable co-registration. Here, we present AROSICS (Automated and Robust Open-Source Image Co-Registration Software), a Python-based open-source software including an easy-to-use user interface for automatic detection and correction of sub-pixel misalignments between various remote sensing datasets. It is independent of spatial or spectral characteristics and robust against high degrees of cloud coverage and spectral and temporal land cover dynamics. The co-registration is based on phase correlation for sub-pixel shift estimation in the frequency domain utilizing the Fourier shift theorem in a moving-window manner. A dense grid of spatial shift vectors can be created and automatically filtered by combining various validation and quality estimation metrics. Additionally, the software supports the masking of, e.g., clouds and cloud shadows to exclude such areas from spatial shift detection. The software has been tested on more than 9000 satellite images acquired by different sensors. The results are evaluated exemplarily for two inter-sensoral and two intra-sensoral use cases and show registration results in the sub-pixel range, with root mean square errors around 0.3 pixels or better.
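
    The phase-correlation core of such a tool — normalized cross-power spectrum via the Fourier shift theorem, peak of its inverse transform — can be sketched at integer-pixel precision (AROSICS itself refines the peak location to sub-pixel accuracy; this is not its code):

```python
import numpy as np

def phase_correlation_shift(ref, target):
    """Estimate the (row, col) shift that maps ref onto target via
    phase correlation: whiten the cross-power spectrum so only the
    phase (i.e. the shift, by the Fourier shift theorem) remains,
    then locate the peak of its inverse FFT."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(target)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.real(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the array midpoint wrap around to negative shifts
    return tuple(p - s if p > s // 2 else p
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
target = np.roll(ref, shift=(3, -7), axis=(0, 1))  # known shift
print(phase_correlation_shift(ref, target))  # → (3, -7)
```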

  14. 2017 ARL Summer Student Program. Volume 1: Symposium Presentations

    Science.gov (United States)

    2017-12-01

    Symposium presentations included "Limits of Subpixel Motion Detection with a Video Camera" and "The Effect of Imperceptible Noise Stimulation on the Spinal Reflex" (Research & Engineering Directorate). Special awards were presented to 2 of the

  15. Research on the method of information system risk state estimation based on clustering particle filter

    Directory of Open Access Journals (Sweden)

    Cui Jia

    2017-05-01

    Full Text Available With the purpose of reinforcing correlation analysis of risk assessment threat factors, a dynamic assessment method of safety risks based on particle filtering is proposed, which takes threat analysis as the core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining these with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: all particles are clustered, and each cluster centroid serves as a representative in subsequent operations, reducing the amount of computation. Empirical results indicate that the method reasonably captures the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.

  16. Research on the method of information system risk state estimation based on clustering particle filter

    Science.gov (United States)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing correlation analysis of risk assessment threat factors, a dynamic assessment method of safety risks based on particle filtering is proposed, which takes threat analysis as the core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining these with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: all particles are clustered, and each cluster centroid serves as a representative in subsequent operations, reducing the amount of computation. Empirical results indicate that the method reasonably captures the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
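
    The clustering reduction described above can be sketched in a simplified 1-D form: cluster the particles with k-means and keep one representative (the centroid) per cluster, carrying the summed member weights. All names and the two-state example are illustrative, not the authors' implementation:

```python
import numpy as np

def kmeans_1d(points, k, iters=20):
    """Minimal 1-D k-means with deterministic quantile initialization."""
    centroids = np.quantile(points, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(points[:, None] - centroids[None, :]),
                           axis=1)
        centroids = np.array([points[labels == j].mean()
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return centroids, labels

def reduce_particles(particles, weights, k):
    """Replace the full particle set by k cluster centroids whose
    weights are the summed weights of their members -- the reduction
    step that cuts the filter's per-update cost."""
    centroids, labels = kmeans_1d(particles, k)
    w = np.array([weights[labels == j].sum() for j in range(k)])
    return centroids, w / w.sum()

# 100 equally weighted particles drawn around two risk states
rng = np.random.default_rng(1)
particles = np.concatenate([rng.normal(1.0, 0.1, 60),
                            rng.normal(5.0, 0.1, 40)])
weights = np.full(100, 1.0 / 100)
reps, w = reduce_particles(particles, weights, k=2)
print(np.round(reps, 1), np.round(w, 2))
```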

  17. The effect of split pixel HDR image sensor technology on MTF measurements

    Science.gov (United States)

    Deegan, Brian M.

    2014-03-01

    Split-pixel HDR sensor technology is particularly advantageous in automotive applications, because the images are captured simultaneously rather than sequentially, thereby reducing motion blur. However, split pixel technology introduces artifacts in MTF measurement. To achieve a HDR image, raw images are captured from both large and small sub-pixels, and combined to make the HDR output. In some cases, a large sub-pixel is used for long exposure captures, and a small sub-pixel for short exposures, to extend the dynamic range. The relative size of the photosensitive area of the pixel (fill factor) plays a very significant role in the output MTF measurement. Given an identical scene, the MTF will be significantly different, depending on whether you use the large or small sub-pixels i.e. a smaller fill factor (e.g. in the short exposure sub-pixel) will result in higher MTF scores, but significantly greater aliasing. Simulations of split-pixel sensors revealed that, when raw images from both sub-pixels are combined, there is a significant difference in rising edge (i.e. black-to-white transition) and falling edge (white-to-black) reproduction. Experimental results showed a difference of ~50% in measured MTF50 between the falling and rising edges of a slanted edge test chart.

  18. Photometric Analysis in the Kepler Science Operations Center Pipeline

    Science.gov (United States)

    Twicken, Joseph D.; Clarke, Bruce D.; Bryson, Stephen T.; Tenenbaum, Peter; Wu, Hayley; Jenkins, Jon M.; Girouard, Forrest; Klaus, Todd C.

    2010-01-01

    We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center (SOC) pipeline. The primary tasks of this module are to compute the photometric flux and photocenters (centroids) for over 160,000 long cadence (thirty minute) and 512 short cadence (one minute) stellar targets from the calibrated pixels in their respective apertures. We discuss the science algorithms for long and short cadence PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.
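
    Two of the PA steps named above, aperture photometry on background-subtracted pixels and flux-weighted centroiding, can be sketched as follows (a simplified illustration on a synthetic aperture, not the SOC pipeline code):

```python
import numpy as np

def photometry_and_centroid(aperture, background):
    """Simple aperture photometry (sum of background-subtracted
    pixels) plus the flux-weighted centroid of the aperture."""
    flux_img = aperture - background
    flux = flux_img.sum()
    ys, xs = np.indices(aperture.shape)
    cen = ((ys * flux_img).sum() / flux,
           (xs * flux_img).sum() / flux)
    return flux, cen

ap = np.full((5, 5), 10.0)   # flat background level of 10
ap[1, 2] += 100.0            # stellar flux split over two pixels
ap[2, 2] += 300.0
flux, cen = photometry_and_centroid(ap, np.full((5, 5), 10.0))
print(flux, cen)  # → 400.0 (1.75, 2.0)
```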

  19. Estimating basin lagtime and hydrograph-timing indexes used to characterize stormflows for runoff-quality analysis

    Science.gov (United States)

    Granato, Gregory E.

    2012-01-01

    A nationwide study to better define triangular-hydrograph statistics for use with runoff-quality and flood-flow studies was done by the U.S. Geological Survey (USGS) in cooperation with the Federal Highway Administration. Although the triangular hydrograph is a simple linear approximation, the cumulative distribution of stormflow with a triangular hydrograph is a curvilinear S-curve that closely approximates the cumulative distribution of stormflows from measured data. The temporal distribution of flow within a runoff event can be estimated using the basin lagtime (which is the time from the centroid of rainfall excess to the centroid of the corresponding runoff hydrograph) and the hydrograph recession ratio (which is the ratio of the duration of the falling limb to the rising limb of the hydrograph). This report documents results of the study, methods used to estimate the variables, and electronic files that facilitate calculation of variables. Ten viable multiple-linear regression equations were developed to estimate basin lagtimes from readily determined drainage basin properties using data published in 37 stormflow studies. Regression equations using the basin lag factor (BLF, which is a variable calculated as the main-channel length, in miles, divided by the square root of the main-channel slope, in feet per mile) and two variables describing development in the drainage basin were selected as the best candidates, because each equation explains about 70 percent of the variability in the data. The variables describing development are the USGS basin development factor (BDF, which is a function of the amount of channel modifications, storm sewers, and curb-and-gutter streets in a basin) and the total impervious area variable (IMPERV) in the basin. Two datasets were used to develop regression equations. The primary dataset included data from 493 sites that have values for the BLF, BDF, and IMPERV variables. This dataset was used to develop the best-fit regression
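
    The basin lag factor defined above is a one-line computation; the example basin is hypothetical:

```python
import math

def basin_lag_factor(channel_length_mi, channel_slope_ft_per_mi):
    """Basin lag factor (BLF) as defined in the report: main-channel
    length (miles) divided by the square root of the main-channel
    slope (feet per mile)."""
    return channel_length_mi / math.sqrt(channel_slope_ft_per_mi)

# Hypothetical basin: 12-mile main channel dropping 25 ft/mi
print(round(basin_lag_factor(12.0, 25.0), 2))  # → 2.4
```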

  20. Fast Template-based Shape Analysis using Diffeomorphic Iterative Centroid

    OpenAIRE

    Cury , Claire; Glaunès , Joan Alexis; Chupin , Marie; Colliot , Olivier

    2014-01-01

    A common approach for the analysis of anatomical variability relies on the estimation of a representative template of the population, followed by the study of this population based on the parameters of the deformations going from the template to the population. The Large Deformation Diffeomorphic Metric Mapping framework is widely used for shape analysis of anatomical structures, but computing a template with such framework is computationally expensive. In this paper w...

  1. THE SLOAN DIGITAL SKY SURVEY REVERBERATION MAPPING PROJECT: BIASES IN z  > 1.46 REDSHIFTS DUE TO QUASAR DIVERSITY

    Energy Technology Data Exchange (ETDEWEB)

    Denney, K. D.; Peterson, B. M. [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Horne, Keith [SUPA Physics and Astronomy, University of St. Andrews, St. Andrews KY16 9SS (United Kingdom); Brandt, W. N.; Grier, C. J.; Trump, J. R. [Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802 (United States); Ho, Luis C. [Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China); Ge, J., E-mail: denney@astronomy.ohio-state.edu [Astronomy Department University of Florida 211 Bryant Space Science Center P.O. Box 112055 Gainesville, FL 32611-2055 (United States)

    2016-12-10

    We use the coadded spectra of 32 epochs of Sloan Digital Sky Survey (SDSS) Reverberation Mapping Project observations of 482 quasars with z  > 1.46 to highlight systematic biases in the SDSS- and Baryon Oscillation Spectroscopic Survey (BOSS)-pipeline redshifts due to the natural diversity of quasar properties. We investigate the characteristics of this bias by comparing the BOSS-pipeline redshifts to an estimate from the centroid of He ii λ 1640. He ii has a low equivalent width but is often well-defined in high-S/N spectra, does not suffer from self-absorption, and has a narrow component which, when present (the case for about half of our sources), produces a redshift estimate that, on average, is consistent with that determined from [O ii] to within the He ii and [O ii] centroid measurement uncertainties. The large redshift differences of ∼1000 km s⁻¹, on average, between the BOSS-pipeline and He ii-centroid redshifts, suggest there are significant biases in a portion of BOSS quasar redshift measurements. Adopting the He ii-based redshifts shows that C iv does not exhibit a ubiquitous blueshift for all quasars, given the precision probed by our measurements. Instead, we find a distribution of C iv-centroid blueshifts across our sample, with a dynamic range that (i) is wider than that previously reported for this line, and (ii) spans C iv centroids from those consistent with the systemic redshift to those with significant blueshifts of thousands of kilometers per second. These results have significant implications for measurement and use of high-redshift quasar properties and redshifts, and studies based thereon.

  2. THE SLOAN DIGITAL SKY SURVEY REVERBERATION MAPPING PROJECT: BIASES IN z  > 1.46 REDSHIFTS DUE TO QUASAR DIVERSITY

    International Nuclear Information System (INIS)

    Denney, K. D.; Peterson, B. M.; Horne, Keith; Brandt, W. N.; Grier, C. J.; Trump, J. R.; Ho, Luis C.; Ge, J.

    2016-01-01

    We use the coadded spectra of 32 epochs of Sloan Digital Sky Survey (SDSS) Reverberation Mapping Project observations of 482 quasars with z > 1.46 to highlight systematic biases in the SDSS- and Baryon Oscillation Spectroscopic Survey (BOSS)-pipeline redshifts due to the natural diversity of quasar properties. We investigate the characteristics of this bias by comparing the BOSS-pipeline redshifts to an estimate from the centroid of He ii λ1640. He ii has a low equivalent width but is often well-defined in high-S/N spectra, does not suffer from self-absorption, and has a narrow component which, when present (the case for about half of our sources), produces a redshift estimate that, on average, is consistent with that determined from [O ii] to within the He ii and [O ii] centroid measurement uncertainties. The large redshift differences of ∼1000 km s⁻¹, on average, between the BOSS-pipeline and He ii-centroid redshifts suggest there are significant biases in a portion of BOSS quasar redshift measurements. Adopting the He ii-based redshifts shows that C iv does not exhibit a ubiquitous blueshift for all quasars, given the precision probed by our measurements. Instead, we find a distribution of C iv-centroid blueshifts across our sample, with a dynamic range that (i) is wider than that previously reported for this line, and (ii) spans C iv centroids from those consistent with the systemic redshift to those with significant blueshifts of thousands of kilometers per second. These results have significant implications for measurement and use of high-redshift quasar properties and redshifts, and studies based thereon.

  3. Combined effect of carnosol, rosmarinic acid and thymol on the oxidative stability of soybean oil using a simplex centroid mixture design.

    Science.gov (United States)

    Saoudi, Salma; Chammem, Nadia; Sifaoui, Ines; Jiménez, Ignacio A; Lorenzo-Morales, Jacob; Piñero, José E; Bouassida-Beji, Maha; Hamdi, Moktar; L Bazzocchi, Isabel

    2017-08-01

    Oxidation taking place during the use of oil leads to the deterioration of both nutritional and sensorial qualities. Natural antioxidants from herbs and plants are rich in phenolic compounds and could therefore be more efficient than synthetic ones in preventing lipid oxidation reactions. This study was aimed at the valorization of Tunisian aromatic plants and their active compounds as new sources of natural antioxidant preventing oil oxidation. Carnosol, rosmarinic acid and thymol were isolated from Rosmarinus officinalis and Thymus capitatus by column chromatography and were analyzed by nuclear magnetic resonance. Their antioxidant activities were measured by DPPH, ABTS and FRAP assays. These active compounds were added to soybean oil in different proportions using a simplex-centroid mixture design. Antioxidant activity and oxidative stability of oils were determined before and after 20 days of accelerated oxidation at 60 °C. Results showed that bioactive compounds are effective in maintaining oxidative stability of soybean oil. However, the binary interaction of rosmarinic acid and thymol caused a reduction in antioxidant activity and oxidative stability of soybean oil. Optimum conditions for maximum antioxidant activity and oxidative stability were found to be an equal ternary mixture of carnosol, rosmarinic acid and thymol. © 2016 Society of Chemical Industry.
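A simplex-centroid mixture design like the one used above enumerates, for n components, one blend per nonempty subset of components, with equal proportions of the subset's members. A small sketch of how those design points can be generated (a generic illustration, not the study's statistical software):

```python
# Simplex-centroid design points: one blend per nonempty subset of
# components, each with equal proportions of the subset's members.
from itertools import combinations

def simplex_centroid_design(n):
    """Return the 2**n - 1 design points of an n-component
    simplex-centroid design as tuples of mixture proportions."""
    points = []
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            points.append(tuple(1.0 / size if i in subset else 0.0
                                for i in range(n)))
    return points

# Three antioxidants (e.g. carnosol, rosmarinic acid, thymol) give
# 7 blends: 3 pure components, 3 binary 1/2:1/2 mixtures, and the
# equal ternary 1/3:1/3:1/3 mixture found optimal in the study.
design = simplex_centroid_design(3)
```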

  4. Olkiluoto hydrogeochemistry. A 3-D modelling approach for sparse data set

    International Nuclear Information System (INIS)

    Luukkonen, A.; Partamies, S.; Pitkaenen, P.

    2003-07-01

    Olkiluoto at Eurajoki has been selected as the candidate site for the final disposal repository for spent nuclear fuel produced in Finland. In the long-term safety assessment, one of the principal evaluation tools of safe disposal is hydrogeochemistry. For assessment purposes, Posiva Oy is excavating an underground research laboratory (ONKALO) in the Olkiluoto bedrock. The complexity of the groundwater chemistry is characteristic of the Olkiluoto site and creates a need to examine and visualise these hydrogeochemical features in 3-D together with the structural model. This need applies not only to the stable, undisturbed (pre-excavation) conditions but also to the system disturbed by the construction activities and open-tunnel conditions of the ONKALO. The present 3-D approach integrates the independently developed structural model with the results of geochemical mixing calculations for the groundwater samples. For spatial geochemical regression purposes, the study area is divided into four primary sectors on the basis of the occurrence of the samples. The geochemical information within each primary sector is condensed into a sector centroid that summarizes the depth distributions of the different water types within that sector. The geographic locations of the centroids are then used to divide the study area into secondary sectors, with whose aid spatial regressions between the centroids can be calculated, making interpolation of water-type fractions within the centroid volume possible; extrapolations outside the centroid volume are possible as well. The mixing proportions of the five detected water types at an arbitrary point in the modelling volume can be estimated by applying the four centroids and using lateral linear regression. This study utilises two separate data sets: the older data set and the newer data set.

  5. Exploiting differential vegetation phenology for satellite-based mapping of semiarid grass vegetation in the southwestern United States and northern Mexico

    Science.gov (United States)

    Dye, Dennis G.; Middleton, Barry R.; Vogel, John M.; Wu, Zhuoting; Velasco, Miguel G.

    2016-01-01

    We developed and evaluated a methodology for subpixel discrimination and large-area mapping of the perennial warm-season (C4) grass component of vegetation cover in mixed-composition landscapes of the southwestern United States and northern Mexico. We describe the methodology within a general, conceptual framework that we identify as the differential vegetation phenology (DVP) paradigm. We introduce a DVP index, the Normalized Difference Phenometric Index (NDPI), that provides vegetation type-specific information at the subpixel scale by exploiting differential patterns of vegetation phenology detectable in time-series spectral vegetation index (VI) data from multispectral land imagers. We used modified soil-adjusted vegetation index (MSAVI2) data from Landsat to develop the NDPI, and MSAVI2 data from MODIS to compare its performance relative to one alternate DVP metric (difference of spring average MSAVI2 and summer maximum MSAVI2), and two simple, conventional VI metrics (summer average MSAVI2, summer maximum MSAVI2). The NDPI in a scaled form (NDPIs) performed best in predicting variation in perennial C4 grass cover as estimated from landscape photographs at 92 sites (R² = 0.76, p < 0.001), indicating its suitability for mapping the perennial C4 grass component in landscapes of the Southwest, and potentially for monitoring of its response to drought, climate change, grazing and other factors, including land management. With appropriate adjustments, the method could potentially be used for subpixel discrimination and mapping of grass or other vegetation types in other regions where the vegetation components of the landscape exhibit contrasting seasonal patterns of phenology.
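The NDPI exploits the contrast between spring and summer vegetation-index statistics. The abstract does not spell out its formula, so the following is only a plausible normalized-difference form under that assumption; the function name, choice of phenometrics, and MSAVI2 values are invented for illustration.

```python
# A plausible normalized-difference phenometric contrasting a spring
# VI statistic against a summer VI statistic (the published NDPI
# formula may differ).

def ndpi_like(spring_avg, summer_max):
    """Normalized difference of two seasonal VI phenometrics,
    scaled to [-1, 1]."""
    return (summer_max - spring_avg) / (summer_max + spring_avg)

# A warm-season (C4) grass pixel greens up strongly in summer, so its
# summer maximum sits well above its spring average; an evergreen or
# spring-active component shows little seasonal contrast.
grass_like = ndpi_like(spring_avg=0.15, summer_max=0.45)
shrub_like = ndpi_like(spring_avg=0.40, summer_max=0.42)
```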

  6. Infrared Small Moving Target Detection via Saliency Histogram and Geometrical Invariability

    Directory of Open Access Journals (Sweden)

    Minjie Wan

    2017-06-01

    Full Text Available In order to detect both bright and dark small moving targets effectively in infrared (IR) video sequences, a saliency histogram and geometrical invariability based method is presented in this paper. First, a saliency map that roughly highlights the salient regions of the original image is obtained by tuning its amplitude spectrum in the frequency domain. Then, a saliency histogram is constructed by averaging the accumulated saliency value of each gray level in the map, through which bins corresponding to the bright target and the dark target are assigned large values in the histogram. Next, single-frame detection of candidate targets is accomplished by binarized segmentation using an adaptive threshold, and their centroid coordinates with sub-pixel accuracy are calculated through a connected components labeling method as well as a gray-weighted criterion. Finally, considering the motion characteristics in consecutive frames, an inter-frame false alarm suppression method based on geometrical invariability is developed to further improve the precision rate. Quantitative analyses demonstrate that the detection precision of the proposed approach can reach 97%, and Receiver Operating Characteristic (ROC) curves further verify that our method outperforms other state-of-the-art methods in both detection rate and false alarm rate.
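The sub-pixel centroid step above, computing a gray-weighted position for each labeled component, can be sketched with a generic intensity-weighted centroid; the paper's exact gray-weighted criterion may differ, and the patch values below are invented.

```python
# Generic intensity-weighted (gray-weighted) centroid for one labeled
# component, giving sub-pixel coordinates.

def gray_weighted_centroid(patch, origin=(0, 0)):
    """Sub-pixel centroid of a component's bounding-box patch.

    patch  -- 2-D list of gray values
    origin -- (row, col) of the patch's top-left pixel in the image
    """
    total = rsum = csum = 0.0
    for r, row in enumerate(patch):
        for c, gray in enumerate(row):
            total += gray
            rsum += gray * r
            csum += gray * c
    r0, c0 = origin
    return (r0 + rsum / total, c0 + csum / total)

# Mass skewed toward the right-hand column pulls the centroid to a
# non-integer column coordinate:
patch = [[1, 2],
         [1, 2]]
rc = gray_weighted_centroid(patch, origin=(10, 20))
```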

  7. Evaluation of digital image correlation techniques using realistic ground truth speckle images

    International Nuclear Information System (INIS)

    Cofaru, C; Philips, W; Van Paepegem, W

    2010-01-01

    Digital image correlation (DIC) has been acknowledged and widely used in recent years in the field of experimental mechanics as a contactless method for determining full-field displacements and strains. Even though several sub-pixel motion estimation algorithms have been proposed in the literature, little is known about their accuracy and limitations in reproducing complex underlying motion fields occurring in real mechanical tests. This paper presents a new method for evaluating sub-pixel motion estimation algorithms using ground truth speckle images that are realistically warped using artificial motion fields obtained following two distinct approaches: in the first, the horizontal and vertical displacement fields are created according to theoretical formulas for the given type of experiment, while the second approach constructs the displacements through radial basis function interpolation starting from real DIC results. The method is applied in the evaluation of five DIC algorithms, with results indicating that the gradient-based DIC methods generally have a quality advantage when using small block sizes and are a better choice for calculating very small displacements and strains. The Newton–Raphson method performs best overall, with a notable quality advantage when large block sizes are employed and in experiments where large strain fields are of interest.

  8. Source Parameter Inversion for Recent Great Earthquakes from a Decade-long Observation of Global Gravity Fields

    Science.gov (United States)

    Han, Shin-Chan; Riva, Ricccardo; Sauber, Jeanne; Okal, Emile

    2013-01-01

    We quantify gravity changes after great earthquakes present within the 10-year-long time series of monthly Gravity Recovery and Climate Experiment (GRACE) gravity fields. Using a spherical harmonic normal-mode formulation, the respective source parameters of moment tensor and double-couple were estimated. For the 2004 Sumatra-Andaman earthquake, the gravity data indicate a composite moment of 1.2×10²³ N m with a dip of 10°, in agreement with the estimate obtained at ultralong seismic periods. For the 2010 Maule earthquake, the GRACE solutions range from 2.0 to 2.7×10²² N m for dips of 12°-24° and centroid depths within the lower crust. For the 2011 Tohoku-Oki earthquake, the estimated scalar moments range from 4.1 to 6.1×10²² N m, with dips of 9°-19° and centroid depths within the lower crust. For the 2012 Indian Ocean strike-slip earthquakes, the gravity data delineate a composite moment of 1.9×10²² N m regardless of the centroid depth, comparing favorably with the total moment of the main ruptures and aftershocks. The smallest event we successfully analyzed with GRACE was the 2007 Bengkulu earthquake with M₀ ≈ 5.0×10²¹ N m. We found that the gravity data constrain the focal mechanism with the centroid only within the upper and lower crustal layers for thrust events. Deeper sources (i.e., in the upper mantle) could not reproduce the gravity observation, as the larger rigidity and bulk modulus at mantle depths inhibit the interior from changing its volume, thus reducing the negative gravity component. Focal mechanisms and seismic moments obtained in this study represent the behavior of the sources on temporal and spatial scales exceeding the seismic and geodetic spectrum.

  9. Estimation of the Rotational Terms of the Dynamic Response Matrix

    Directory of Open Access Journals (Sweden)

    D. Montalvão

    2004-01-01

    Full Text Available The dynamic response of a structure can be described by both its translational and rotational receptances. The latter ones are frequently not considered because of the difficulties in applying a pure moment excitation or in measuring rotations. However, in general, this implies a reduction up to 75% of the complete model. On the other hand, if a modification includes a rotational inertia, the rotational receptances of the unmodified system are needed. In one method, more commonly found in the literature, a so called T-block is attached to the structure. Then, a force, applied to an arm of the T-block, generates a moment together with a force at the connection point. The T-block also allows for angular displacement measurements. Nevertheless, the results are often not quite satisfactory. In this work, an alternative method based upon coupling techniques is developed, in which rotational receptances are estimated without the need of applying a moment excitation. This is accomplished by introducing a rotational inertia modification when rotating the T-block. The force is then applied in its centroid. Several numerical and experimental examples are discussed so that the methodology can be clearly described. The advantages and limitations are identified within the practical application of the method.

  10. Seismicity in the block mountains between Halle and Leipzig, Central Germany: centroid moment tensors, ground motion simulation, and felt intensities of two M ≈ 3 earthquakes in 2015 and 2017

    Science.gov (United States)

    Dahm, Torsten; Heimann, Sebastian; Funke, Sigward; Wendt, Siegfried; Rappsilber, Ivo; Bindi, Dino; Plenefisch, Thomas; Cotton, Fabrice

    2018-05-01

    On April 29, 2017 at 0:56 UTC (2:56 local time), an M_W = 2.8 earthquake struck the metropolitan area between Leipzig and Halle, Germany, near the small town of Markranstädt. The earthquake was felt within 50 km of the epicenter and reached a local intensity of I_0 = IV. Already in 2015, and only 15 km northwest of the epicenter, an M_W = 3.2 earthquake struck the area with a similarly large felt radius and I_0 = IV. More than 1.1 million people live in the region, and the unusual occurrence of the two earthquakes attracted public attention, because the tectonic activity is unclear and induced earthquakes have occurred in neighboring regions. Historical earthquakes south of Leipzig had estimated magnitudes up to M_W ≈ 5 and coincide with NW-SE striking crustal basement faults. We use different seismological methods to analyze the two recent earthquakes and discuss them in the context of the known tectonic structures and historical seismicity. Novel stochastic full waveform simulation and inversion approaches are adapted for application to weak, local earthquakes, to analyze mechanisms and ground motions and their relation to observed intensities. We find NW-SE striking normal faulting mechanisms for both earthquakes and centroid depths of 26 and 29 km. The earthquakes are located where faults with large vertical offsets of several hundred meters and Hercynian strike have developed since the Mesozoic. We use a stochastic full waveform simulation to explain the local peak ground velocities and calibrate the method to simulate intensities. Since the area is densely populated and has sensitive infrastructure, we simulate scenarios assuming that a 12-km-long fault segment between the two recent earthquakes is ruptured and study the impact of rupture parameters on ground motions and expected damage.

  11. The skeletal maturation status estimated by statistical shape analysis: axial images of Japanese cervical vertebra.

    Science.gov (United States)

    Shin, S M; Kim, Y-I; Choi, Y-S; Yamaguchi, T; Maki, K; Cho, B-H; Park, S-B

    2015-01-01

    To evaluate axial cervical vertebral (ACV) shape quantitatively and to build a prediction model for skeletal maturation level using statistical shape analysis for Japanese individuals. The sample included 24 female and 19 male patients with hand-wrist radiographs and CBCT images. Through generalized Procrustes analysis and principal component (PC) analysis, the meaningful PCs were extracted from each ACV shape and analysed for the estimation regression model. Each ACV shape had meaningful PCs, except for the second axial cervical vertebra. Based on these models, the smallest prediction intervals (PIs) were obtained from the combination of the shape-space PCs, age and gender. Overall, the PIs of the male group were smaller than those of the female group. There was no significant correlation between centroid size as a size factor and skeletal maturation level. Our findings suggest that the ACV maturation method, applied via statistical shape analysis, can provide skeletal maturation information for Japanese individuals and could be as useful a quantitative method as the skeletal maturation index.

  12. Avoiding Stair-Step Artifacts in Image Registration for GOES-R Navigation and Registration Assessment

    Science.gov (United States)

    Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John

    2016-01-01

    In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
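Sub-pixel offset estimation from a correlation surface is commonly refined by fitting a parabola through the correlation peak and its neighbors; bias in such interpolation toward integer lags is one way a stair-step response arises. A one-dimensional sketch of the standard three-point estimator follows (an illustration of the technique, not the GOES-R IVV code; the correlation values are invented).

```python
# Three-point parabolic refinement of an integer-lag correlation peak.

def parabolic_subpixel_peak(c_m1, c_0, c_p1):
    """Offset (in pixels, within [-0.5, 0.5]) of the true correlation
    peak relative to the integer-lag maximum, from the peak sample c_0
    and its neighbors c_m1 (lag -1) and c_p1 (lag +1)."""
    denom = c_m1 - 2.0 * c_0 + c_p1
    if denom == 0.0:
        return 0.0
    return 0.5 * (c_m1 - c_p1) / denom

# Symmetric neighbors leave the peak on the integer lag; a stronger
# right-hand neighbor pulls the estimate to the right.
on_grid = parabolic_subpixel_peak(0.8, 1.0, 0.8)
shifted = parabolic_subpixel_peak(0.7, 1.0, 0.9)
```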

  13. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Gilani, Syed Irtiza Ali

    2008-09-15

    correlation matrix is employed. This method is beneficial because it offers sub-pixel displacement estimation without interpolation, increased robustness to noise and limited computational complexity. Owing to all these advantages, the proposed technique is very suitable for the real-time implementation to solve the motion correction problem. (orig.)

  14. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    International Nuclear Information System (INIS)

    Gilani, Syed Irtiza Ali

    2008-09-01

    correlation matrix is employed. This method is beneficial because it offers sub-pixel displacement estimation without interpolation, increased robustness to noise and limited computational complexity. Owing to all these advantages, the proposed technique is very suitable for the real-time implementation to solve the motion correction problem. (orig.)

  15. Leucosome distribution in migmatitic paragneisses and orthogneisses: A record of self-organized melt migration and entrapment in a heterogeneous partially-molten crust

    Science.gov (United States)

    Yakymchuk, C.; Brown, M.; Ivanic, T. J.; Korhonen, F. J.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from the aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on scaling distribution has been proposed. Shallower values of the DBMS are found for the south-western region. The DBMS values are found to be as low as 22 km in the south-west Deccan trap covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.

  16. How processing digital elevation models can affect simulated water budgets

    Science.gov (United States)

    Kuniansky, E.L.; Lowery, M.A.; Campbell, B.G.

    2009-01-01

    For regional models, the shallow water table surface is often used as a source/sink boundary condition, as model grid scale precludes simulation of the water table aquifer. This approach is appropriate when the water table surface is relatively stationary. Since water table surface maps are not readily available, the elevation of the water table used in model cells is estimated via a two-step process. First, a regression equation is developed using existing land and water table elevations from wells in the area. This equation is then used to predict the water table surface for each model cell using land surface elevation available from digital elevation models (DEM). Two methods of processing DEM for estimating the land surface for each cell are commonly used (value nearest the cell centroid or mean value in the cell). This article demonstrates how these two methods of DEM processing can affect the simulated water budget. For the example presented, approximately 20% more total flow through the aquifer system is simulated if the centroid value rather than the mean value is used. This is due to the one-third greater average ground water gradients associated with the centroid value than the mean value. The results will vary depending on the particular model area topography and cell size. The use of the mean DEM value in each model cell will result in a more conservative water budget and is more appropriate because the model cell water table value should be representative of the entire cell area, not the centroid of the model cell.
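The two DEM-processing conventions compared above can be illustrated on a single model cell. The 3 × 3 block of DEM values below is hypothetical (real cells contain many more DEM pixels); on nonlinear terrain the two conventions disagree, which is the source of the water-budget difference the article reports.

```python
# Value-nearest-centroid vs. mean-of-cell aggregation of DEM pixels.
from statistics import mean

def cell_elevations(dem_block):
    """Return (value nearest the cell centroid, mean of all values)
    for a rectangular block of DEM pixels covering one model cell."""
    centre_row = len(dem_block) // 2
    centre_col = len(dem_block[0]) // 2
    centroid_value = dem_block[centre_row][centre_col]
    mean_value = mean(v for row in dem_block for v in row)
    return centroid_value, mean_value

# A cell straddling a break in slope: the centre pixel sits on the
# low bench, so the centroid convention underestimates the cell mean.
block = [[100, 100, 100],
         [100, 100, 200],
         [100, 200, 200]]
centroid_v, mean_v = cell_elevations(block)
```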

  17. Intrafraction Bladder Motion in Radiation Therapy Estimated From Pretreatment and Posttreatment Volumetric Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Foroudi, Farshad, E-mail: farshad.foroudi@petermac.org [Division of Radiation Oncology, Peter MacCallum Cancer Centre, Melbourne, Victoria (Australia); Pham, Daniel [Radiation Therapy Services, Peter MacCallum Cancer Centre, Melbourne, Victoria (Australia); Bressel, Mathias [Biostatistics and Clinical Trials, Peter MacCallum Cancer Centre, Melbourne, Victoria (Australia); Gill, Suki [Division of Radiation Oncology, Peter MacCallum Cancer Centre, Melbourne, Victoria (Australia); Kron, Tomas [Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Victoria (Australia)

    2013-05-01

    Purpose: The use of image guidance protocols using soft tissue anatomy identification before treatment can reduce interfractional variation. This makes intrafraction clinical target volume (CTV) to planning target volume (PTV) changes more important, including those resulting from intrafraction bladder filling and motion. The purpose of this study was to investigate the required intrafraction margins for soft tissue image guidance from pretreatment and posttreatment volumetric imaging. Methods and Materials: Fifty patients with muscle-invasive bladder cancer (T2-T4) underwent an adaptive radiation therapy protocol using daily pretreatment cone beam computed tomography (CBCT) with weekly posttreatment CBCT. A total of 235 pairs of pretreatment and posttreatment CBCT images were retrospectively contoured by a single radiation oncologist (CBCT-CTV). The maximum bladder displacement was measured according to the patient's bony pelvis movement during treatment, intrafraction bladder filling, and bladder centroid motion. Results: The mean time between pretreatment and posttreatment CBCT was 13 minutes, 52 seconds (range, 7 min 52 sec to 30 min 56 sec). Taking into account patient motion, bladder centroid motion, and bladder filling, the required margins to cover intrafraction changes from pretreatment to posttreatment in the superior, inferior, right, left, anterior, and posterior directions were 1.25 cm (range, 1.19-1.50 cm), 0.67 cm (range, 0.58-1.12 cm), 0.74 cm (range, 0.59-0.94 cm), 0.73 cm (range, 0.51-1.00 cm), 1.20 cm (range, 0.85-1.32 cm), and 0.86 cm (range, 0.73-0.99 cm), respectively. Small bladders on pretreatment imaging had relatively the largest increase in pretreatment to posttreatment volume. Conclusion: Intrafraction motion of the bladder based on pretreatment and posttreatment bladder imaging can be significant, particularly in the anterior and superior directions. Patient motion, bladder centroid motion, and bladder filling all contribute to changes between

  18. Hexicon 2: automated processing of hydrogen-deuterium exchange mass spectrometry data with improved deuteration distribution estimation.

    Science.gov (United States)

    Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L; Hamprecht, Fred A; Winkler, Andreas

    2014-06-01

    Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, whereas the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge under http://hx2.mpimf-heidelberg.mpg.de.
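The deuteration-centroid readout that the benchmark evaluates is the intensity-weighted centroid of a peptide's isotope distribution minus that of the undeuterated reference. A toy sketch follows (the m/z and intensity values are invented; Hexicon 2 itself goes further and estimates the full deuteration distribution):

```python
# Centroid readout of average deuterium uptake from toy isotope
# distributions.

def spectrum_centroid(mz, intensity):
    """Intensity-weighted centroid of an isotope distribution."""
    total = sum(intensity)
    return sum(m * w for m, w in zip(mz, intensity)) / total

def deuteration_centroid_shift(mz_exch, int_exch, mz_ref, int_ref):
    """Average deuterium uptake as the centroid shift of the exchanged
    spectrum relative to the undeuterated reference."""
    return (spectrum_centroid(mz_exch, int_exch)
            - spectrum_centroid(mz_ref, int_ref))

# Isotope patterns one Dalton apart on average -> uptake of ~1 D:
shift = deuteration_centroid_shift(
    mz_exch=[501.0, 502.0, 503.0], int_exch=[1.0, 2.0, 1.0],
    mz_ref=[500.0, 501.0, 502.0], int_ref=[1.0, 2.0, 1.0])
```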

  19. Hexicon 2: Automated Processing of Hydrogen-Deuterium Exchange Mass Spectrometry Data with Improved Deuteration Distribution Estimation

    Science.gov (United States)

    Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L.; Hamprecht, Fred A.; Winkler, Andreas

    2014-06-01

    Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, whereas the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge under http://hx2.mpimf-heidelberg.mpg.de.

  20. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    Science.gov (United States)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.
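The core of such an algorithm is subdividing each mapped channel into approximately one-meter pieces and recording the altitude of each piece. A hypothetical sketch of that idea follows; it is not the NALMA algorithm itself, which works from VHF source locations with amplitude thresholding and smoothing.

```python
# Subdivide a channel polyline into ~step-length pieces and collect
# each piece's midpoint altitude.
import math

def segment_altitudes(channel, step=1.0):
    """Midpoint altitudes of approximately step-length sub-segments
    along a polyline of (x, y, z) points (units as given)."""
    alts = []
    for p0, p1 in zip(channel, channel[1:]):
        length = math.dist(p0, p1)
        n = max(1, round(length / step))
        for k in range(n):
            t = (k + 0.5) / n  # parameter at the sub-segment midpoint
            alts.append(p0[2] + t * (p1[2] - p0[2]))
    return alts

# A 4 m vertical channel from 5000 m to 5004 m altitude contributes
# four 1 m segments, centred at 5000.5, 5001.5, 5002.5 and 5003.5 m.
alts = segment_altitudes([(0.0, 0.0, 5000.0), (0.0, 0.0, 5004.0)])
```

Histogramming the returned altitudes over all flashes yields the altitude distribution needed for the NOx emission profiles.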

  1. Improving the Curie depth estimation through optimizing the spectral block dimensions of the aeromagnetic data in the Sabalan geothermal field

    Science.gov (United States)

    Akbar, Somaieh; Fathianpour, Nader

    2016-12-01

    The Curie point depth is of great importance in characterizing geothermal resources. In this study, the Curie iso-depth map was produced using the well-known method of dividing the aeromagnetic dataset into overlapping blocks and analyzing the power spectral density of each block separately. Determining the optimum block dimension is vital for improving the resolution and accuracy of the Curie point depth estimate. To investigate the relation between the optimal block size and the power spectral density, forward magnetic modeling was implemented on an artificial prismatic body with specified characteristics. The top, centroid, and bottom depths of the body were estimated by the spectral analysis method for different block dimensions. The result showed that the optimal block size can be taken as the smallest block size whose corresponding power spectrum exhibits an absolute maximum at small wavenumbers. The Curie depth map of the Sabalan geothermal field and its surrounding areas, in northwestern Iran, was produced using a grid of 37 blocks with dimensions ranging from 10 × 10 to 50 × 50 km², with at least 50% overlap between adjacent blocks. The Curie point depth was estimated in the range of 5 to 21 km. The promising areas, with Curie point depths of less than 8.5 km, are located around Mount Sabalan and encompass more than 90% of the known geothermal resources in the study area. Moreover, the Curie point depth estimated by the improved spectral analysis is in good agreement with the depth calculated from thermal gradient data measured in one of the exploratory wells in the region.
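In the centroid method underlying this kind of analysis, the top depth z_t and centroid depth z_0 of the magnetic layer are estimated from the slopes of the radially averaged power spectrum at high and low wavenumbers, and the bottom (Curie point) depth follows as z_b = 2 z_0 − z_t. A minimal sketch of that final relation (the example depths are invented; block-size selection is the paper's actual subject):

```python
# Curie point depth from spectrally estimated top and centroid depths.

def curie_point_depth(z_top, z_centroid):
    """Bottom (Curie point) depth of the magnetic layer:
    z_b = 2 * z_0 - z_t."""
    return 2.0 * z_centroid - z_top

# Top at 2 km and centroid at 5.5 km imply a Curie depth of 9 km,
# within the 5-21 km range reported for the study area.
z_b = curie_point_depth(z_top=2.0, z_centroid=5.5)
```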

  2. Viewing angle switching of patterned vertical alignment liquid crystal display

    International Nuclear Information System (INIS)

    Lim, Young Jin; Jeong, Eun; Chin, Mi Hyung; Lee, Seung Hee; Ji, Seunghoon; Lee, Gi-Dong

    2008-01-01

    Viewing angle control of a patterned vertical alignment (PVA) liquid crystal display using only one panel is investigated. In conventional PVA modes, a vertically aligned liquid crystal (LC) director tilts down in four directions at 45° with respect to crossed polarizers to exhibit a wide viewing angle. In the viewing angle control device, one pixel is divided into two sub-pixels: the LC director in the main pixel is controlled to tilt down in multiple directions at an angle to the polarizer, serving as the main display with a wide viewing angle, while the LC director in the sub-pixel is controlled to tilt down along the polarizer axis, restricting the viewing angle for narrow-view operation. Using sub-pixel control, light leakage or deliberate patterns such as characters and images can be generated in oblique viewing directions without distorting the image quality in the normal direction, preventing others from peeping at the displayed image by overlapping it with the generated pattern.

  3. A controllable viewing angle LCD with an optically isotropic liquid crystal

    International Nuclear Information System (INIS)

    Kim, Min Su; Lim, Young Jin; Yoon, Sukin; Kang, Shin-Woong; Lee, Seung Hee; Kim, Miyoung; Wu, Shin-Tson

    2010-01-01

    An optically isotropic liquid crystal (LC) such as a blue phase LC or an optically isotropic nano-structured LC exhibits a very wide viewing angle because the induced birefringence is along the in-plane electric field. Utilizing such a material, we propose a liquid crystal display (LCD) whose viewing angle can be switched from wide view to narrow view using only one panel. In the device, each pixel is divided into two parts: a main pixel and a sub-pixel. The main pixels display the images while the sub-pixels control the viewing angle. In the main pixels, birefringence is induced by horizontal electric fields through inter-digital electrodes leading to a wide viewing angle, while in the sub-pixels, birefringence is induced by the vertical electric field so that phase retardation occurs only at oblique angles. As a result, the dark state (or contrast ratio) of the entire pixel can be controlled by the voltage of the sub-pixels. Such a switchable viewing angle LCD is attractive for protecting personal privacy.

  4. Numerical evaluation of methods for computing tomographic projections

    International Nuclear Information System (INIS)

    Zhuang, W.; Gopal, S.S.; Hebert, T.J.

    1994-01-01

    Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived
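The sub-pixel variant of pixel-driven projection compared above can be sketched as follows: each image pixel is subdivided into n × n sub-pixels, each sub-pixel center is rotated into the detector frame and accumulated into a projection bin, and accuracy improves as the number of sub-pixels grows. This is an illustrative sketch, not the authors' implementation; the geometry conventions (unit pixel and bin spacing, image- and detector-centered coordinates) are assumptions:

```python
import numpy as np

def forward_project(img, theta, nbins, nsub=4):
    """Pixel-driven forward projection of a 2-D image onto a 1-D detector
    at angle theta.  Each pixel is split into nsub x nsub sub-pixels whose
    centers are projected onto the detector; the pixel's value is shared
    equally among its sub-pixels."""
    ny, nx = img.shape
    proj = np.zeros(nbins)
    c, s = np.cos(theta), np.sin(theta)
    offs = (np.arange(nsub) + 0.5) / nsub - 0.5   # sub-pixel center offsets
    for iy in range(ny):
        for ix in range(nx):
            v = img[iy, ix] / nsub**2             # value per sub-pixel
            for oy in offs:
                for ox in offs:
                    # rotate the sub-pixel center, map it to a detector bin
                    x = ix - nx / 2 + 0.5 + ox
                    y = iy - ny / 2 + 0.5 + oy
                    t = x * c + y * s + nbins / 2
                    b = int(np.floor(t))
                    if 0 <= b < nbins:
                        proj[b] += v
    return proj
```

Increasing `nsub` trades computation time for accuracy, which is the speed/accuracy trade-off discussed in the abstract.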

  5. Hartmann Testing of X-Ray Telescopes

    Science.gov (United States)

    Saha, Timo T.; Biskasch, Michael; Zhang, William W.

    2013-01-01

    Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. Mean cone angle, mean radial height (average radius), and the focal length of the telescope can also be estimated if the centroid data are measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full-circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieve the alignment errors and low-order circumferential errors accurately.
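The basic datum of the Hartmann analysis above is the centroid of each slit image, i.e. the intensity-weighted mean position of the spot on the focal plane. A minimal sketch of that computation (this is generic image arithmetic, not OSAC code):

```python
import numpy as np

def spot_centroid(img):
    """Intensity-weighted centroid (x, y) of a focal-plane spot image,
    the basic quantity used in Hartmann-type alignment analysis."""
    ys, xs = np.indices(img.shape)   # row (y) and column (x) index grids
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total
```

Scanning the slit around the circumference and recording these centroids at one or more focal-plane positions yields the data from which the alignment and cone-angle errors are solved.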

  6. Tests of the methods of analysis of picosecond lifetimes and measurement of the half-life of the 569.6 keV level in 207Pb

    International Nuclear Information System (INIS)

    Lima, E. de; Kawakami, H.; Lima, A. de; Hichwa, R.; Ramayya, A.V.; Hamilton, J.H.; Dunn, W.; Kim, H.J.

    1978-01-01

    Customarily one extracts the half-life of the nuclear state from a delayed time spectrum by an analysis of the centroid shift, the slope, and lately by the convolution method. Recently there have been two formulas relating the centroid shift to the half-life of the nuclear state. These two procedures can give different results for the half-life when T1/2 is of the same order as, or less than, the time width of one channel. An extensive investigation of these two formulas and procedures has been made by measuring the half-life of the first excited state in 207Pb at 569.6 keV. This analysis confirms Bay's formula relating the centroid shift to the half-life of the state. The half-life of the 569.6 keV level in 207Pb is measured to be (129 ± 3) ps, in excellent agreement with Weisskopf's single-particle estimate of 128 ps for an E2 transition. (Auth.)
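In its simplest form, the centroid-shift method compares the first moments of the delayed and prompt time spectra: the shift of the centroid equals the mean life τ, and T1/2 = τ ln 2. A sketch of this basic relation (not of the two specific formulas compared in the paper, which refine it for coarse channel widths):

```python
import numpy as np

def centroid_shift_halflife(t, delayed, prompt):
    """Half-life from the basic centroid-shift relation: the mean life tau
    is the shift of the delayed-spectrum centroid relative to the prompt
    one, and T1/2 = tau * ln 2.
    t: channel times; delayed/prompt: counts per channel."""
    cen = lambda y: np.sum(t * y) / np.sum(y)   # first moment of a spectrum
    tau = cen(delayed) - cen(prompt)
    return tau * np.log(2.0)
```

For a centroid shift (mean life) of 186 ps this gives T1/2 ≈ 129 ps, consistent with the value reported above.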

  7. Development and optimization of a mixed beverage made of whey and water-soluble soybean extract flavored with chocolate using a simplex-centroid design

    Directory of Open Access Journals (Sweden)

    Dóris Faria de OLIVEIRA

    2017-10-01

    This study aimed to combine the nutritional advantages of whey and soybean by developing a chocolate-flavored beverage with water-soluble soybean extract dissolved in whey. Different concentrations of thickeners (carrageenan, pectin and starch; maximum level of 500 mg.100 mL⁻¹) were tested by a simplex-centroid design. Several physicochemical, rheological, and sensory properties of the beverages were measured and a multi-response optimization was conducted aiming to obtain a whey and soybean beverage with increased overall sensory impression and maximum purchase intention. Beverages presented mean protein levels higher than 3.1 g.100 mL⁻¹, a low content of lipids (< 2 g.100 mL⁻¹) and total soluble solids ≥ 20 g.100 mL⁻¹. Response surface methodology was applied and the proposed models for overall impression and purchase intention presented R² = 0.891 and R² = 0.966, respectively. The desirability index (d-value = 0.92) showed that the best formulation should contain 46% carrageenan and 54% pectin. The formulation manufactured with this combination of thickeners was tested; the overall impression was 7.11 ± 1.09 (on a 9-point hedonic scale) and the purchase intention was 4.0 ± 1.3 (on a 5-point hedonic scale), thus showing that the proposed models were predictive.

  8. A NEW APPLICATION OF THE ASTROMETRIC METHOD TO BREAK SEVERE DEGENERACIES IN BINARY MICROLENSING EVENTS

    International Nuclear Information System (INIS)

    Chung, Sun-Ju; Park, Byeong-Gon; Humphrey, Andrew; Ryu, Yoon-Hyun

    2009-01-01

    When a source star is microlensed by one stellar component of widely separated binary stellar components, after finishing the lensing event, the event induced by the other binary star can be additionally detected. In this paper, we investigate whether the close/wide degeneracies in binary lensing events can be resolved by detecting the additional centroid shift of the source images induced by the secondary binary star in wide binary lensing events. From this investigation, we find that if the source star passes close to the Einstein ring of the secondary companion, the degeneracy can be easily resolved by using future astrometric follow-up observations with high astrometric precision. We determine the probability of detecting the additional centroid shift in binary lensing events with high magnification. From this, we find that the degeneracy of binary lensing events with a separation of ≲20.0 AU can be resolved with a significant efficiency. We also estimate the waiting time for the detection of the additional centroid shift in wide binary lensing events. We find that for typical Galactic lensing events with a separation of ≲20.0 AU, the additional centroid shift can be detected within 100 days, and thus the degeneracy of those events can be sufficiently broken within a year.

  9. Scaling of Thermal Images at Different Spatial Resolution: The Mixed Pixel Problem

    Directory of Open Access Journals (Sweden)

    Hamlyn G. Jones

    2014-07-01

    The consequences of changes in spatial resolution for application of thermal imagery in plant phenotyping in the field are discussed. Where image pixels are significantly smaller than the objects of interest (e.g., leaves), accurate estimates of leaf temperature are possible, but when pixels reach the same scale as or become larger than the objects of interest, the observed temperatures become significantly biased by the background temperature as a result of the presence of mixed pixels. Approaches to the estimation of the true leaf temperature that apply both at the whole-pixel level and at the sub-pixel level are reviewed and discussed.
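A whole-pixel correction of the kind reviewed above can be sketched by treating the mixed-pixel signal as a linear mixture of leaf and background radiances and approximating radiance with the Stefan-Boltzmann T⁴ law. The function and its arguments are illustrative assumptions, not the review's exact formulation:

```python
def unmix_leaf_temperature(t_pixel, t_background, f_leaf):
    """Recover leaf temperature (K) from a mixed pixel, assuming the pixel
    radiance is a linear mixture of leaf and background radiance and that
    radiance scales as T**4 (Stefan-Boltzmann approximation)."""
    if not 0.0 < f_leaf <= 1.0:
        raise ValueError("leaf area fraction must be in (0, 1]")
    # invert:  T_pixel**4 = f * T_leaf**4 + (1 - f) * T_background**4
    r = (t_pixel**4 - (1.0 - f_leaf) * t_background**4) / f_leaf
    return r ** 0.25
```

As the leaf fraction `f_leaf` shrinks (pixels much larger than leaves), the correction amplifies any error in the assumed background temperature, which is the bias problem the review describes.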

  10. Simulation and Analysis of the Topographic Effects on Snow-Free Albedo over Rugged Terrain

    Directory of Open Access Journals (Sweden)

    Dalei Hao

    2018-02-01

    Topography complicates the modeling and retrieval of land surface albedo due to shadow effects and the redistribution of incident radiation. Neglecting topographic effects may lead to a significant bias when estimating land surface albedo over a single slope. However, for rugged terrain, a comprehensive and systematic investigation of topographic effects on land surface albedo is still lacking. Accurately estimating topographic effects on land surface albedo over a rugged terrain presents a challenge in remote sensing modeling and applications. In this paper, we focused on the development of a simplified estimation method for snow-free albedo over a rugged terrain at a 1-km scale based on a 30-m fine-scale digital elevation model (DEM). The proposed method was compared with the radiosity approach based on simulated and real DEMs. The results of the comparison showed that the proposed method provided adequate computational efficiency and satisfactory accuracy simultaneously. Then, the topographic effects on snow-free albedo were quantitatively investigated and interpreted by considering the mean slope, subpixel aspect distribution, solar zenith angle (SZA), and solar azimuth angle (SAA). The results showed that the more rugged the terrain and the larger the solar illumination angle, the more intense the topographic effects were on black-sky albedo (BSA). The maximum absolute deviation (MAD) and the maximum relative deviation (MRD) of the BSA over a rugged terrain reached 0.28 and 85%, respectively, when the SZA was 60° for different terrains. Topographic effects varied with the mean slope, subpixel aspect distribution, SZA and SAA, which should not be neglected when modeling albedo.

  11. A Method for Qualitative Mapping of Thick Oil Spills Using Imaging Spectroscopy

    Science.gov (United States)

    Clark, Roger N.; Swayze, Gregg A.; Leifer, Ira; Livo, K. Eric; Lundeen, Sarah; Eastwood, Michael; Green, Robert O.; Kokaly, Raymond F.; Hoefen, Todd; Sarture, Charles; McCubbin, Ian; Roberts, Dar; Steele, Denis; Ryan, Thomas; Dominguez, Roseanne; Pearson, Neil; ,

    2010-01-01

    A method is described to create qualitative images of thick oil in oil spills on water using near-infrared imaging spectroscopy data. The method uses simple 'three-point-band depths' computed for each pixel in an imaging spectrometer image cube using the organic absorption features due to chemical bonds in aliphatic hydrocarbons at 1.2, 1.7, and 2.3 microns. The method is not quantitative because sub-pixel mixing and layering effects are not considered, which are necessary to make a quantitative volume estimate of oil.
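The "three-point-band depth" itself is simple arithmetic: a straight-line continuum is interpolated between the reflectances at two shoulder wavelengths and compared with the reflectance at the band center. A sketch, assuming the three wavelengths are sample points of the spectrum (function and argument names are illustrative):

```python
def band_depth(wl, refl, left, center, right):
    """Three-point band depth: the continuum is the straight line between
    the reflectances at the two shoulder wavelengths (left, right);
    depth = 1 - R_center / R_continuum.  wl/refl are parallel sequences."""
    r = dict(zip(wl, refl))
    w = (right - center) / (right - left)        # linear interpolation weight
    r_cont = w * r[left] + (1.0 - w) * r[right]  # continuum at the band center
    return 1.0 - r[center] / r_cont
```

Computing this depth per pixel at the 1.2, 1.7, and 2.3 micron aliphatic-hydrocarbon features yields the qualitative thick-oil images the method describes; as the abstract notes, sub-pixel mixing and layering keep the result qualitative rather than volumetric.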

  12. A Local Weighted Nearest Neighbor Algorithm and a Weighted and Constrained Least-Squared Method for Mixed Odor Analysis by Electronic Nose Systems

    Directory of Open Access Journals (Sweden)

    Jyuo-Min Shyu

    2010-11-01

    A great deal of work has been done to develop techniques for odor analysis by electronic nose systems. These analyses mostly focus on identifying a particular odor by comparing it with a known odor dataset. However, in many situations, it would be more practical if each individual odorant could be determined directly. This paper proposes two methods for such odor component analysis for electronic nose systems. First, a K-nearest neighbor (KNN)-based local weighted nearest neighbor (LWNN) algorithm is proposed to determine the components of an odor. According to the component analysis, the odor training data is first categorized into several groups, each of which is represented by its centroid. The examined odor is then classified as the class of the nearest centroid. The distance between the examined odor and the centroid is calculated based on a weighting scheme, which captures the local structure of each predefined group. To further determine the concentration of each component, odor models are built by regressions. Then, a weighted and constrained least-squares (WCLS) method is proposed to estimate the component concentrations. Experiments were carried out to assess the effectiveness of the proposed methods. The LWNN algorithm is able to classify mixed odors with different mixing ratios, while the WCLS method can provide good estimates of component concentrations.
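The WCLS step can be sketched as a weighted least-squares fit of the measured sensor response to the single-component odor models, with concentrations constrained to be non-negative. This is an assumption-laden sketch (the paper's exact weights and constraint set may differ), implemented as projected gradient descent using only NumPy:

```python
import numpy as np

def wcls(A, b, w, iters=5000):
    """Weighted, non-negativity-constrained least squares: minimize
    sum_i w_i * ((A x - b)_i)**2 subject to x >= 0, by projected
    gradient descent.  A: odor-model matrix; b: measured response;
    w: per-measurement weights."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    Aw = A * np.asarray(w, float)[:, None]   # rows scaled by the weights
    H = Aw.T @ A                             # Hessian of the (half) cost
    g0 = Aw.T @ b
    step = 1.0 / np.linalg.norm(H, 2)        # safe step size (1 / Lipschitz)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - step * (H @ x - g0))  # gradient step + project
    return x
```

The non-negativity projection is what keeps estimated concentrations physically meaningful; the weights let noisier sensors contribute less to the fit.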

  13. Noise estimation of beam position monitors at RHIC

    International Nuclear Information System (INIS)

    Shen, X.; Bai, M.

    2014-01-01

    Beam position monitors (BPM) are used to record the average orbits and transverse turn-by-turn displacements of the beam centroid motion. The Relativistic Heavy Ion Collider (RHIC) has 160 BPMs for each plane in each of the Blue and Yellow rings: 72 dual-plane BPMs in the insertion regions (IR) and 176 single-plane modules in the arcs. Each BPM is able to acquire 1024 or 4096 consecutive turn-by-turn beam positions. Inevitably, there are broadband noisy signals in the turn-by-turn data due to BPM electronics as well as other sources. A detailed study of the BPM noise performance is critical for reliable optics measurement and beam dynamics analysis based on turn-by-turn data.

  14. Niche tracking and rapid establishment of distributional equilibrium in the house sparrow show potential responsiveness of species to climate change.

    Directory of Open Access Journals (Sweden)

    William B Monahan

    The ability of species to respond to novel future climates is determined in part by their physiological capacity to tolerate climate change and the degree to which they have reached and continue to maintain distributional equilibrium with the environment. While broad-scale correlative climatic measurements of a species' niche are often described as estimating the fundamental niche, it is unclear how well these occupied portions actually approximate the fundamental niche per se, versus the fundamental niche that exists in environmental space, and what fitness values bounding the niche are necessary to maintain distributional equilibrium. Here, we investigate these questions by comparing physiological and correlative estimates of the thermal niche in the introduced North American house sparrow (Passer domesticus). Our results indicate that occupied portions of the fundamental niche derived from temperature correlations closely approximate the centroid of the existing fundamental niche calculated on a fitness threshold of 50% population mortality. Using these niche measures, a 75-year time series analysis (1930-2004) further shows that: (i) existing fundamental and occupied niche centroids did not undergo directional change, (ii) interannual changes in the two niche centroids were correlated, (iii) temperatures in North America moved through niche space in a net centripetal fashion, and consequently, (iv) most areas throughout the range of the house sparrow tracked the existing fundamental niche centroid with respect to at least one temperature gradient. Following introduction to a new continent, the house sparrow rapidly tracked its thermal niche and established continent-wide distributional equilibrium with respect to major temperature gradients. These dynamics were mediated in large part by the species' broad thermal physiological tolerances, high dispersal potential, competitive advantage in human-dominated landscapes, and climatically induced

  15. Coherent multiscale image processing using dual-tree quaternion wavelets.

    Science.gov (United States)

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.

  16. Laboratory Studies of Carbon Emission from Biomass Burning for use in Remote Sensing

    Science.gov (United States)

    Wald, Andrew E.; Kaufman, Yoram J.

    1998-01-01

    Biomass burning is a significant source of many trace gases in the atmosphere. Up to 25% of the total anthropogenic carbon dioxide added to the atmosphere annually is from biomass burning. However, this gaseous emission from fires is not directly detectable from satellite. Infrared radiance from the fires is. In order to see if infrared radiance can be used as a tracer for these emitted gases, we made laboratory measurements to determine the correlation of emitted carbon dioxide, carbon monoxide and total burned biomass with emitted infrared radiance. If the measured correlations among these quantities hold in the field, then satellite-observed infrared radiance can be used to estimate gaseous emission and total burned biomass on a global, daily basis. To this end, several types of biomass fuels were burned under controlled conditions in a large-scale combustion laboratory. Simultaneous measurements of emitted spectral infrared radiance, emitted carbon dioxide, carbon monoxide, and total mass loss were made. In addition measurements of fuel moisture content and fuel elemental abundance were made. We found that for a given fire, the quantity of carbon burned can be estimated from 11 μm radiance measurements only within a factor of five. This variation arises from three sources, 1) errors in our measurements, 2) the subpixel nature of the fires, and 3) inherent differences in combustion of different fuel types. Despite this large range, these measurements can still be used for large-scale satellite estimates of biomass burned. This is because of the very large possible spread of fire sizes that will be subpixel as seen by the Moderate Resolution Imaging Spectroradiometer (MODIS). Due to this large spread, even relatively low-precision correlations can still be useful for large-scale estimates of emitted carbon. Furthermore, such estimates using the MODIS 3.9 μm channel should be even more accurate than our estimates based on 11 μm radiance.

  17. Hubble space telescope/advanced camera for surveys confirmation of the dark substructure in A520

    International Nuclear Information System (INIS)

    Jee, M. J.; Hoekstra, H.; Mahdavi, A.; Babul, A.

    2014-01-01

    We present a weak-lensing study of the cluster A520 based on Advanced Camera for Surveys (ACS) data. The excellent data quality provides a mean source density of ∼109 arcmin⁻², which improves both resolution and significance of the mass reconstruction compared to a previous study based on Wide Field Planetary Camera 2 (WFPC2) images. We take care in removing instrumental effects such as the charge trailing due to radiation damage of the detector and the position-dependent point-spread function. This new ACS analysis confirms the previous claims that a substantial amount of dark mass is present between two luminous subclusters where we observe very little light. The centroid of the dark peak in the current ACS analysis is offset to the southwest by ∼1' with respect to the centroid from the WFPC2 analysis. Interestingly, this new centroid is in better agreement with the location where the X-ray emission is strongest, and the mass-to-light ratio estimated with this centroid is much higher (813 ± 78 M☉/L_R☉) than the previous value; the aperture mass with the WFPC2 centroid provides a consistent mass. Although we cannot provide a definite explanation for the dark peak, we discuss a revised scenario, wherein dark matter with a more conventional range (σ_DM/m_DM < 1 cm² g⁻¹) of self-interacting cross-section can lead to the detection of this dark substructure. If supported by detailed numerical simulations, this hypothesis opens up the possibility that the A520 system can be used to establish a lower limit of the self-interacting cross-section of dark matter.

  18. Comparison of thermal, salt and dye tracing to estimate shallow flow velocities: Novel triple-tracer approach

    Science.gov (United States)

    Abrantes, João R. C. B.; Moruzzi, Rodrigo B.; Silveira, Alexandre; de Lima, João L. M. P.

    2018-02-01

    The accurate measurement of shallow flow velocities is crucial to understand and model the dynamics of sediment and pollutant transport by overland flow. In this study, a novel triple-tracer approach was used to re-evaluate and compare the traditional and well-established dye and salt tracer techniques with the more recent thermal tracer technique in estimating shallow flow velocities. For this purpose a triple tracer (i.e. dyed-salted-heated water) was used. Optical and infrared video cameras and an electrical conductivity sensor were used to detect the tracers in the flow. Leading edge and centroid velocities of the tracers were measured, and the correction factors used to determine the actual mean flow velocities from tracer-measured velocities were compared and investigated. Experiments were carried out for different flow discharges (32-1813 ml s⁻¹) on smooth acrylic, sand, stones and synthetic grass bed surfaces with 0.8, 4.4 and 13.2% slopes. The results showed that thermal tracers can be used to estimate shallow flow velocities, since the three techniques yielded very similar results without significant differences between them. The main advantages of the thermal tracer were that the movement of the tracer along the measuring section was more easily visible than in the real-image videos, and that it was possible to measure space-averaged flow velocities rather than the single velocity value obtained with the salt tracer. The correction factors used to determine the actual mean velocity of overland flow varied directly with Reynolds and Froude numbers, flow velocity and slope, and inversely with flow depth and bed roughness. In shallow flows, velocity estimation using tracers entails considerable uncertainty and caution must be taken with these measurements, especially in field studies where these variables vary appreciably in space and time.

  19. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    Science.gov (United States)

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292

  20. Earthquakes Sources Parameter Estimation of 20080917 and 20081114 Near Semangko Fault, Sumatra Using Three Components of Local Waveform Recorded by IA Network Station

    Directory of Open Access Journals (Sweden)

    Madlazim

    2012-04-01

    The 17/09/2008 22:04:80 UTC and 14/11/2008 00:27:31.70 earthquakes near the Semangko fault were analyzed to identify the fault planes. The two events were relocated to assess physical insight against the hypocenter uncertainty. The data used to determine the source parameters of both earthquakes were the three components of local waveforms recorded by Geofon broadband IA network stations: (MDSI, LWLI, BLSI and RBSI) for the event of 17/09/2008 and (MDSI, LWLI, BLSI and KSI) for the event of 14/11/2008. The distance from the epicenter to all stations was less than 5°. The moment tensor solutions of the two events were analyzed simultaneously with determination of the centroid positions. The simultaneous analysis, covering the hypocenter positions, centroid positions, and nodal planes of the two events, indicated the Semangko fault planes. Considering that the Semangko fault zone is a high-seismicity area, the identification of the seismic fault is important for the seismic hazard investigation in the region.

  1. High Precision Edge Detection Algorithm for Mechanical Parts

    Science.gov (United States)

    Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui

    2018-04-01

    High precision and high efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the step-edge normal section line Gaussian integral model of the backlight image is constructed, combined with the point spread function and the single step model. Then the gray value of discrete points on the normal section line of the pixel edge is calculated by surface interpolation, and the coordinate as well as gray information affected by noise is fitted in accordance with the Gaussian integral model. Therefore, a precise location of a subpixel edge is determined by searching the mean point. Finally, a gear tooth was measured by an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that the local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and the subpixel edge location accuracy and computation speed are improved. The maximum error of gear tooth profile total deviation is 1.9 μm compared with the measurement result from the gear measurement center. This indicates that the method has high reliability to meet the requirement of high precision measurement.
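Under the Gaussian-integral step-edge model, the derivative of the gray-level profile along the edge normal is a Gaussian, so the sub-pixel edge (the "mean point" searched for above) can be estimated from the centroid of the gradient magnitude. This is a simplified stand-in for the paper's interpolation-and-fit procedure, offered only to illustrate the idea:

```python
import numpy as np

def subpixel_edge(profile):
    """Sub-pixel edge location along a 1-D gray-level profile.  For a step
    edge blurred by a Gaussian point spread function, the profile's
    derivative is Gaussian, so the edge (the Gaussian mean) is estimated
    as the centroid of the gradient magnitude."""
    g = np.abs(np.gradient(np.asarray(profile, float)))
    x = np.arange(len(profile))
    return (x * g).sum() / g.sum()
```

Because the estimate uses all samples across the blurred transition rather than a single threshold crossing, it localizes the edge to a fraction of a pixel even in the presence of moderate noise.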

  2. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    Science.gov (United States)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
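If the candidate temperatures are fixed on a grid, the pyrometry model above becomes linear: the pixel radiance at each wavelength is a mixture of blackbody (Planck) radiances weighted by the temperature area fractions. The paper solves a harder nonlinear inverse problem with an improved Levenberg-Marquardt algorithm; the sketch below is a simplified linear-least-squares version under that fixed-grid assumption (function names are illustrative):

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return 2 * H * C**2 / lam**5 / (np.exp(H * C / (lam * KB * T)) - 1.0)

def area_fractions(lams, radiance, temps):
    """Recover sub-pixel temperature area fractions from multispectral
    radiances, assuming the pixel radiance at each wavelength is a linear
    mix of blackbody radiances at a known grid of candidate temperatures."""
    M = np.array([[planck(l, T) for T in temps] for l in lams])
    frac, *_ = np.linalg.lstsq(M, radiance, rcond=None)
    return frac
```

With the temperature grid free as well, the problem is nonlinear and ill-posed, which is why the paper resorts to a regularized Levenberg-Marquardt scheme.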

  3. Estimates of Free-tropospheric NO2 Abundance from the Aura Ozone Monitoring Instrument (OMI) Using Cloud Slicing Technique

    Science.gov (United States)

    Choi, S.; Joiner, J.; Krotkov, N. A.; Choi, Y.; Duncan, B. N.; Celarier, E. A.; Bucsela, E. J.; Vasilkov, A. P.; Strahan, S. E.; Veefkind, J. P.; Cohen, R. C.; Weinheimer, A. J.; Pickering, K. E.

    2013-12-01

    Total column measurements of NO2 from space-based sensors are of interest to the atmospheric chemistry and air quality communities; the relatively short lifetime of near-surface NO2 produces satellite-observed hot-spots near pollution sources including power plants and urban areas. However, estimates of NO2 concentrations in the free-troposphere, where lifetimes are longer and the radiative impact through ozone formation is larger, are severely lacking. Such information is critical to evaluate chemistry-climate and air quality models that are used for prediction of the evolution of tropospheric ozone and its impact on climate and air quality. Here, we retrieve free-tropospheric NO2 volume mixing ratio (VMR) using the cloud slicing technique. We use cloud optical centroid pressures (OCPs) as well as collocated above-cloud vertical NO2 columns (defined as the NO2 column from the top of the atmosphere to the cloud OCP) from the Ozone Monitoring Instrument (OMI). The above-cloud NO2 vertical columns used in our study are retrieved independent of a priori NO2 profile information. In the cloud-slicing approach, the slope of the above-cloud NO2 column versus the cloud optical centroid pressure is proportional to the NO2 volume mixing ratio (VMR) for a given pressure (altitude) range. We retrieve NO2 volume mixing ratios and compare the obtained NO2 VMRs with in-situ aircraft profiles measured during the NASA Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in 2006. The agreement is good when proper data screening is applied. In addition, the OMI cloud slicing reports a high NO2 VMR where the aircraft reported lightning NOx during the Deep Convection Clouds and Chemistry (DC3) campaign in 2012. We also provide a global seasonal climatology of free-tropospheric NO2 VMR in cloudy conditions. Enhanced NO2 in free troposphere commonly appears near polluted urban locations where NO2 produced in the boundary layer may be transported vertically out of the
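The cloud-slicing relation is linear: regressing the above-cloud NO2 column against cloud optical centroid pressure gives a slope that, divided by the number of air molecules per unit column per unit pressure (N_A / (M_air g)), is the mean volume mixing ratio over that pressure range. A sketch with assumed unit conventions (columns in molecules cm⁻², pressures in hPa), not the OMI operational code:

```python
import numpy as np

# Air column number density per unit pressure, N_A / (M_air * g),
# converted to molecules cm^-2 hPa^-1 (approximately 2.1e22).
AIR_COL = 6.022e23 / (0.02897 * 9.81) * 1e-4 * 100.0

def cloud_slice_vmr(cloud_ocp_hpa, no2_column):
    """Cloud-slicing estimate of the free-tropospheric NO2 mixing ratio:
    fit the above-cloud NO2 column (molecules cm^-2) against cloud optical
    centroid pressure (hPa); the slope over the air column per hPa is the
    mean volume mixing ratio for that pressure range."""
    slope, _ = np.polyfit(cloud_ocp_hpa, no2_column, 1)
    return slope / AIR_COL   # dimensionless VMR (multiply by 1e9 for ppbv)
```

Under these conventions, a slope of about 2.1 × 10¹² molecules cm⁻² hPa⁻¹ corresponds to a VMR of roughly 0.1 ppbv, illustrating why careful data screening matters: the signal is a small trend across many cloudy scenes.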

  4. Lack of Correlation Between External Fiducial Positions and Internal Tumor Positions During Breath-Hold CT

    International Nuclear Information System (INIS)

    Hunjan, Sandeep; Starkschall, George; Prado, Karl; Dong Lei; Balter, Peter

    2010-01-01

    Purpose: For thoracic tumors, if four-dimensional computed tomography (4DCT) is unavailable, the internal margin can be estimated by use of breath-hold (BH) CT scans acquired at end inspiration (EI) and end expiration (EE). By use of external surrogates for tumor position, BH accuracy is estimated by minimizing the difference between respiratory extrema BH and mean equivalent-phase free breathing (FB) positions. We tested the assumption that an external surrogate for BH accuracy correlates with internal tumor positional accuracy during BH CT. Methods and Materials: In 16 lung cancer patients, 4DCT images, as well as BH CT images at EI and EE, were acquired. Absolute differences between BH and mean equivalent-phase (FB) positions were calculated for both external fiducials and gross tumor volume (GTV) centroids as metrics of external and internal BH accuracy, respectively, and the results were correlated. Results: At EI, the absolute difference between mean FB and BH fiducial displacement correlated poorly with the absolute difference between FB and BH GTV centroid positions on CT images (R² = 0.11). Similarly, at EE, the absolute difference between mean FB and BH fiducial displacements correlated poorly with the absolute difference between FB and BH GTV centroid positions on CT images (R² = 0.18). Conclusions: External surrogates for tumor position are not an accurate metric of BH accuracy for lung cancer patients. This implies that care should be taken when using such an approach because an incorrect internal margin could be generated.

  5. Size and duration of the high-frequency radiator in the source of the December 26, 2004 Sumatra earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Gusev, A A [Institute of Volcanology and Seismology, Russian Academy of Sciences, Petropavlovsk-Kamchatskii (Russian Federation); [Kamchatka Branch, Geophysical Survey, Russian Academy of Sciences, Petropavlovsk-Kamchatskii (Russian Federation)]. E-mail: gusev@emsd.iks.ru; Guseva, E M [Kamchatka Branch, Geophysical Survey, Russian Academy of Sciences, Petropavlovsk-Kamchatskii (Russian Federation); Panza, G F [University of Trieste, Department of Earth Sciences, Trieste (Italy); [Abdus Salam International Centre for Theoretical Physics, SAND Group, Trieste (Italy)

    2006-04-15

    We recover the gross space-time characteristics of high-frequency (HF) radiator of the great Sumatra-Andaman islands earthquake of Dec. 26, 2004 (Mw = 9.0-9.3) using the inversion of parameters describing the time histories of the power of radiated HF P waves. To determine these time histories we process teleseismic P waves at 37 BB stations, using, in sequence: (1) band filtering in the bands 0.4-1.2, 1.2-2, 2-3 and 3-4 Hz; (2) calculation of squared attenuation-corrected acceleration wave amplitudes, making 'power signal'; (3) elimination of distortion related to scattering and expressed as P coda. In step (3) we employ, as an empirical Green function, the power signal determined from an aftershock, from which we construct an inverse filter, and apply it to the recorded power signal. We thus recover the source time function for HF power, with a definite end and no coda. Three parameters are extracted from such signals: full ('100%') duration, temporal centroid, and 99% duration. Through linear inversion, station full durations deliver estimates of the rupture stopping point and stopping time. Similarly, signal temporal centroids and 99% durations can be inverted to obtain the position of the space-time centroid of HF energy radiator and of the point corresponding to the discharge of 99% of the energy. Inversion was successful for the three lower-frequency bands and resulted in the following joint estimates: source length of 1100±220 km (100%) and 800±200 km (99%), source duration of 690 s (100%) and 550 s (99%). The stopping point differs insignificantly from the northern extremity of the aftershock zone. Spatial HF radiation centroid is located at the distance of about 400 km at the azimuth N327W from the epicenter. Rupture propagation velocity estimates are 1.4-1.7 km/s for the entire rupture and 2.3 km/s for its southern, more powerful part. An interesting detail of the source is that the northernmost 300 km of the rupture radiated only 1% of the

  6. Size and duration of the high-frequency radiator in the source of the December 26, 2004 Sumatra earthquake

    International Nuclear Information System (INIS)

    Gusev, A.A.; Guseva, E.M.; Panza, G.F.

    2006-04-01

    We recover the gross space-time characteristics of high-frequency (HF) radiator of the great Sumatra-Andaman islands earthquake of Dec. 26, 2004 (Mw = 9.0-9.3) using the inversion of parameters describing the time histories of the power of radiated HF P waves. To determine these time histories we process teleseismic P waves at 37 BB stations, using, in sequence: (1) band filtering in the bands 0.4-1.2, 1.2-2, 2-3 and 3-4 Hz; (2) calculation of squared attenuation-corrected acceleration wave amplitudes, making 'power signal'; (3) elimination of distortion related to scattering and expressed as P coda. In step (3) we employ, as an empirical Green function, the power signal determined from an aftershock, from which we construct an inverse filter, and apply it to the recorded power signal. We thus recover the source time function for HF power, with a definite end and no coda. Three parameters are extracted from such signals: full ('100%') duration, temporal centroid, and 99% duration. Through linear inversion, station full durations deliver estimates of the rupture stopping point and stopping time. Similarly, signal temporal centroids and 99% durations can be inverted to obtain the position of the space-time centroid of HF energy radiator and of the point corresponding to the discharge of 99% of the energy. Inversion was successful for the three lower-frequency bands and resulted in the following joint estimates: source length of 1100±220 km (100%) and 800±200 km (99%), source duration of 690 s (100%) and 550 s (99%). The stopping point differs insignificantly from the northern extremity of the aftershock zone. Spatial HF radiation centroid is located at the distance of about 400 km at the azimuth N327W from the epicenter. Rupture propagation velocity estimates are 1.4-1.7 km/s for the entire rupture and 2.3 km/s for its southern, more powerful part. An interesting detail of the source is that the northernmost 300 km of the rupture radiated only 1% of the total HF

  7. High Precision Edge Detection Algorithm for Mechanical Parts

    Directory of Open Access Journals (Sweden)

    Duan Zhenyun

    2018-04-01

    High-precision and high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on a Gaussian integral model is proposed. For this purpose, the Gaussian integral model of the step edge along the normal section line of the backlight image is constructed, combining the point spread function with the single-step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise are fitted in accordance with the Gaussian integral model. The precise location of a subpixel edge is thus determined by searching for the mean point. Finally, a gear tooth was measured with an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and that the subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement from the gear measurement center, indicating that the method is sufficiently reliable to meet the requirements of high-precision measurement.
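The core of the Gaussian integral edge model can be sketched in one dimension: a blurred step edge is fitted with a*erf((x-mu)/(sqrt(2)*sigma)) + b and the fitted mu is taken as the subpixel edge location (the "mean point"). This is an illustration under simplified assumptions, not the paper's surface-interpolation pipeline; the grid search replaces whatever optimiser the authors used.

```python
import numpy as np
from math import erf

def fit_step_edge(x, gray):
    """Fit I(x) = a*erf((x - mu)/(sqrt(2)*sigma)) + b to a 1-D edge profile
    and return mu, the subpixel edge location ("mean point"). A coarse grid
    over (mu, sigma) is searched, with a and b solved by linear least squares."""
    best_sse, best_mu = np.inf, None
    for mu in np.linspace(x.min(), x.max(), 201):
        for sigma in (0.5, 1.0, 1.5, 2.0):
            basis = np.array([erf((xi - mu) / (np.sqrt(2.0) * sigma)) for xi in x])
            A = np.column_stack([basis, np.ones_like(x)])
            coef, *_ = np.linalg.lstsq(A, gray, rcond=None)
            resid = gray - A @ coef
            sse = float(resid @ resid)
            if sse < best_sse:
                best_sse, best_mu = sse, mu
    return best_mu

# A blurred step edge at x = 4.3 sampled on integer pixel positions.
x = np.arange(10, dtype=float)
gray = np.array([100.0 + 40.0 * (1.0 + erf((xi - 4.3) / np.sqrt(2.0))) for xi in x])
edge = fit_step_edge(x, gray)
```

The edge is recovered to a few hundredths of a pixel from 10 integer samples, which is the kind of subpixel localisation the abstract describes.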

  8. Depth of interaction detection for γ-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, Ch.W. [Instituto de Aplicaciones de las Tecnologias de la Informacion y de las Comunicaciones Avanzadas, (UPV) Camino de Vera s/n, E46022 (Spain)], E-mail: lerche@ific.uv.es; Doering, M. [Institut fuer Kernphysik, Forschungszentrum Juelich GmbH, D52425 Juelich (Germany); Ros, A. [Institute de Fisica Corpuscular (CSIC-UV), 22085, Valencia E46071 (Spain); Herrero, V.; Gadea, R.; Aliaga, R.J.; Colom, R.; Mateo, F.; Monzo, J.M.; Ferrando, N.; Toledo, J.F.; Martinez, J.D.; Sebastia, A. [Instituto de Aplicaciones de las Tecnologias de la Informacion y de las Comunicaciones Avanzadas, (UPV) Camino de Vera s/n, E46022 (Spain); Sanchez, F.; Benlloch, J.M. [Institute de Fisica Corpuscular (CSIC-UV), 22085, Valencia E46071 (Spain)

    2009-03-11

    A novel design for an inexpensive depth of interaction capable detector for γ-ray imaging has been developed. The design takes advantage of the strong correlation between the width of the scintillation light distribution in monolithic crystals and the interaction depth of γ-rays. We present in this work an inexpensive modification of the commonly used charge dividing circuits which enables the instantaneous and simultaneous computation of the second order moment of light distribution. This measure provides a good estimate for the depth of interaction and does not affect the determination of the position centroids and the energy release of γ-ray impact. The method has been tested with a detector consisting of a monolithic LSO block sized 42×42×10 mm³ and a position-sensitive photomultiplier tube H8500 from Hamamatsu. The mean spatial resolution of the detector was found to be 3.4 mm for the position centroids and 4.9 mm for the DOI. The best spatial resolutions were observed at the center of the detector and yielded 1.4 mm for the position centroids and 1.9 mm for the DOI.

  9. Depth of interaction detection for γ-ray imaging

    International Nuclear Information System (INIS)

    Lerche, Ch.W.; Doering, M.; Ros, A.; Herrero, V.; Gadea, R.; Aliaga, R.J.; Colom, R.; Mateo, F.; Monzo, J.M.; Ferrando, N.; Toledo, J.F.; Martinez, J.D.; Sebastia, A.; Sanchez, F.; Benlloch, J.M.

    2009-01-01

    A novel design for an inexpensive depth of interaction capable detector for γ-ray imaging has been developed. The design takes advantage of the strong correlation between the width of the scintillation light distribution in monolithic crystals and the interaction depth of γ-rays. We present in this work an inexpensive modification of the commonly used charge dividing circuits which enables the instantaneous and simultaneous computation of the second order moment of light distribution. This measure provides a good estimate for the depth of interaction and does not affect the determination of the position centroids and the energy release of γ-ray impact. The method has been tested with a detector consisting of a monolithic LSO block sized 42×42×10 mm³ and a position-sensitive photomultiplier tube H8500 from Hamamatsu. The mean spatial resolution of the detector was found to be 3.4 mm for the position centroids and 4.9 mm for the DOI. The best spatial resolutions were observed at the center of the detector and yielded 1.4 mm for the position centroids and 1.9 mm for the DOI.
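The moment computation that the modified charge-dividing circuit performs in analog hardware can be sketched numerically: the zeroth moment of the light distribution gives the energy, the first moment the position centroid, and the centred second moment the width, which tracks the depth of interaction. The Gaussian light profiles and anode geometry below are illustrative, not the detector's actual response.

```python
import numpy as np

def light_distribution_moments(x, q):
    """Zeroth, first, and centred second moments of a measured light
    distribution: total charge (energy), position centroid, and a width
    measure that correlates with the depth of interaction (DOI)."""
    q = np.asarray(q, dtype=float)
    energy = q.sum()
    centroid = (x * q).sum() / energy
    second = (x ** 2 * q).sum() / energy - centroid ** 2
    return energy, centroid, second

x = np.linspace(-21.0, 21.0, 64)                  # anode positions, mm
shallow = np.exp(-0.5 * ((x - 2.0) / 3.0) ** 2)   # narrow: interaction near exit face
deep = np.exp(-0.5 * ((x - 2.0) / 6.0) ** 2)      # wide: interaction near entrance
_, c1, m1 = light_distribution_moments(x, shallow)
_, c2, m2 = light_distribution_moments(x, deep)
```

Both distributions yield the same centroid, while the second moment cleanly separates the two interaction depths, which is the property the detector design exploits.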

  10. Clustering with position-specific constraints on variance: Applying redescending M-estimators to label-free LC-MS data analysis

    Directory of Open Access Journals (Sweden)

    Mani D R

    2011-08-01

    Background: Clustering is a widely applicable pattern recognition method for discovering groups of similar observations in data. While there is a large variety of clustering algorithms, very few of these can enforce constraints on the variation of attributes for data points included in a given cluster. In particular, a clustering algorithm that can limit variation within a cluster according to that cluster's position (centroid location) can produce effective and optimal results in many important applications, ranging from clustering of silicon pixels or calorimeter cells in high-energy physics to label-free liquid chromatography based mass spectrometry (LC-MS) data analysis in proteomics and metabolomics. Results: We present MEDEA (M-Estimator with DEterministic Annealing), an M-estimator based, new unsupervised algorithm that is designed to enforce position-specific constraints on variance during the clustering process. The utility of MEDEA is demonstrated by applying it to the problem of "peak matching"--identifying the common LC-MS peaks across multiple samples--in proteomic biomarker discovery. Using real-life datasets, we show that MEDEA not only outperforms current state-of-the-art model-based clustering methods, but also results in an implementation that is significantly more efficient, and hence applicable to much larger LC-MS data sets. Conclusions: MEDEA is an effective and efficient solution to the problem of peak matching in label-free LC-MS data. The program implementing the MEDEA algorithm, including datasets, clustering results, and supplementary information, is available from the author website at http://www.hephy.at/user/fru/medea/.
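The defining property of a redescending M-estimator, which MEDEA builds on, is that observations far from the current centre receive weight exactly zero, so gross outliers cannot drag the estimate. The sketch below uses Tukey's biweight, a standard redescending estimator, for a 1-D location problem; MEDEA's specific estimator, annealing schedule, and position-dependent variance constraints are not reproduced.

```python
import numpy as np

def tukey_biweight_location(x, c=4.685, iters=50):
    """Iteratively reweighted location estimate using Tukey's biweight,
    a redescending M-estimator: points with scaled residual beyond the
    cutoff c get weight zero, unlike the ever-growing influence of the
    ordinary mean."""
    mu = float(np.median(x))
    for _ in range(iters):
        scale = float(np.median(np.abs(x - mu))) / 0.6745 or 1.0
        r = (x - mu) / (c * scale)
        w = np.where(np.abs(r) < 1.0, (1.0 - r ** 2) ** 2, 0.0)
        mu = float((w * x).sum() / w.sum())
    return mu

# 50 inliers around 10.0 plus one gross outlier at 100.0.
data = np.concatenate([np.random.default_rng(1).normal(10.0, 0.5, 50), [100.0]])
center = tukey_biweight_location(data)
```

The robust centre stays near 10, while the ordinary mean is pulled towards the outlier; in peak matching this is what keeps a spurious peak from corrupting a cluster centroid.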

  11. Baselines to detect population stability of the threatened alpine plant Packera franciscana (Asteraceae)

    Science.gov (United States)

    James F. Fowler; Carolyn Hull Sieg; Shaula Hedwall

    2015-01-01

    Population size and density estimates have traditionally been acceptable ways to track a species' response to changing environments; however, a species' population centroid elevation has recently emerged as an equally important metric. Packera franciscana (Greene) W.A. Weber and A. Love (Asteraceae; San Francisco Peaks ragwort) is a single-mountain endemic plant found only...

  12. Automatic detection of multiple UXO-like targets using magnetic anomaly inversion and self-adaptive fuzzy c-means clustering

    Science.gov (United States)

    Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining

    2017-12-01

    We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. Then we use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to test the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
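The clustering step can be illustrated with a plain fuzzy c-means sketch: noisy initial location estimates form dense clouds, and the converged cluster centroids serve as target locations. The paper's self-adaptive variant also chooses the number of clusters; here the cluster count is fixed, and the 2-D point clouds are synthetic.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate fuzzy membership updates and
    membership-weighted centroid updates until convergence."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Noisy initial location estimates clustered around two "targets".
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal([0.0, 0.0], 0.3, (40, 2)),
                 rng.normal([5.0, 5.0], 0.3, (40, 2))])
centers, U = fuzzy_c_means(pts, 2)
centers = centers[np.argsort(centers[:, 0])]   # sort for a stable comparison
```

The two converged centroids land on the underlying target positions despite the scatter in the individual inversion estimates.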

  13. Centroid stabilization for laser alignment to corner cubes: designing a matched filter

    Energy Technology Data Exchange (ETDEWEB)

    Awwal, Abdul A. S.; Bliss, Erlan; Brunton, Gordon; Kamm, Victoria Miller; Leach, Richard R.; Lowe-Webb, Roger; Roberts, Randy; Wilhelmsen, Karl

    2016-11-08

    Automation of image-based alignment of National Ignition Facility high energy laser beams is providing the capability of executing multiple target shots per day. One important alignment is beam centration through the second and third harmonic generating crystals in the final optics assembly (FOA), which employs two retroreflecting corner cubes as centering references for each beam. Beam-to-beam variations and systematic beam changes over time in the FOA corner cube images can lead to a reduction in accuracy as well as increased convergence durations for the template-based position detector. A systematic approach is described that maintains FOA corner cube templates and guarantees stable position estimation.
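The abstract does not disclose the NIF detector's internals, but the general idea of a template-based position detector can be sketched with normalized cross-correlation: slide a reference template (here standing in for a corner-cube image) over the frame and take the best-scoring offset as the position estimate. The image and template below are synthetic.

```python
import numpy as np

def match_template_ncc(image, template):
    """Exhaustive normalized cross-correlation: return the (row, col) of
    the best template match and its NCC score in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0.0:
                continue
            score = float((p * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Synthetic "image" with the template cut from a known position.
rng = np.random.default_rng(7)
img = rng.random((40, 40))
tpl = img[12:20, 25:33].copy()
pos, score = match_template_ncc(img, tpl)
```

Keeping the template representative of the current beam images, which is what the described maintenance approach ensures, is exactly what keeps the correlation peak sharp and the convergence fast.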

  14. Aeroelastically coupled blades for vertical axis wind turbines

    Science.gov (United States)

    Paquette, Joshua; Barone, Matthew F.

    2016-02-23

    Various technologies described herein pertain to a vertical axis wind turbine blade configured to rotate about a rotation axis. The vertical axis wind turbine blade includes at least an attachment segment, a rear swept segment, and optionally, a forward swept segment. The attachment segment is contiguous with the forward swept segment, and the forward swept segment is contiguous with the rear swept segment. The attachment segment includes a first portion of a centroid axis, the forward swept segment includes a second portion of the centroid axis, and the rear swept segment includes a third portion of the centroid axis. The second portion of the centroid axis is angularly displaced ahead of the first portion of the centroid axis and the third portion of the centroid axis is angularly displaced behind the first portion of the centroid axis in the direction of rotation about the rotation axis.

  15. Analysis of k-means clustering approach on the breast cancer Wisconsin dataset.

    Science.gov (United States)

    Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal

    2016-11-01

    Breast cancer is one of the most common cancers worldwide and the most frequently found in women. Early detection of breast cancer offers the possibility of a cure; therefore, a large number of studies are currently under way to identify methods that can detect breast cancer in its early stages. This study aimed to find the effects of the k-means clustering algorithm under different computational measures (centroid initialization, distance, split method, epoch, attribute, and iteration) and to identify the combination of measures with the potential for highly accurate clustering. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for centroid initialization: for the foggy centroid, the first centroid was calculated from random values; for the random centroid, the initial centroid was taken as (0, 0). The results were obtained by employing the k-means algorithm and are discussed for different cases with variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2-9), and iteration (4-10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance. The results achieved using Euclidean and Manhattan distances were better than those with the Pearson correlation. The findings of this work provide an extensive understanding of the computational parameters that can be used with k-means. The results indicate that k-means has the potential to classify the BCW dataset.
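Two of the variations the study compares, the assignment distance and the matching centre update, can be sketched in a small configurable k-means. This is a generic sketch on synthetic 2-D data, not the paper's experimental setup; the Pearson-correlation variant and the foggy/random initializations are omitted.

```python
import numpy as np

def kmeans(X, k, metric="euclidean", init=None, iters=50):
    """k-means with a configurable assignment distance: the Euclidean
    variant updates centres with the mean, the Manhattan variant with
    the per-coordinate median (the L1-optimal centre)."""
    centers = np.array(init if init is not None else X[:k], dtype=float)
    for _ in range(iters):
        diff = X[:, None, :] - centers[None, :, :]
        if metric == "euclidean":
            d = np.sqrt((diff ** 2).sum(axis=2))
        else:                                   # "manhattan"
            d = np.abs(diff).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = (members.mean(axis=0) if metric == "euclidean"
                              else np.median(members, axis=0))
    return centers, labels

# Two well-separated clouds; one seed point from each for a deterministic start.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal([0.0, 0.0], 0.5, (30, 2)),
               rng.normal([10.0, 10.0], 0.5, (30, 2))])
c_euc, _ = kmeans(X, 2, "euclidean", init=[X[0], X[-1]])
c_man, _ = kmeans(X, 2, "manhattan", init=[X[0], X[-1]])
c_euc = c_euc[np.argsort(c_euc[:, 0])]
c_man = c_man[np.argsort(c_man[:, 0])]
```

On clean, well-separated data both distance measures recover the same clusters; the differences the study reports emerge on harder, higher-dimensional data like BCW.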

  16. Estimating urban vegetation fraction across 25 cities in pan-Pacific using Landsat time series data

    Science.gov (United States)

    Lu, Yuhao; Coops, Nicholas C.; Hermosilla, Txomin

    2017-04-01

    Urbanization globally is consistently reshaping the natural landscape to accommodate the growing human population. Urban vegetation plays a key role in moderating environmental impacts caused by urbanization and is critically important for local economic, social and cultural development. The differing patterns of human population growth and the varying urban structures and development stages result in highly varied spatial and temporal vegetation patterns, particularly in the pan-Pacific region, which has some of the fastest urbanization rates globally. Yet spatially explicit temporal information on the amount and change of urban vegetation is rarely documented, particularly in less developed nations. Remote sensing offers an exceptional data source and a unique perspective for mapping urban vegetation and its change, owing to its consistency and ubiquitous nature. In this research, we assess the vegetation fractions of 25 cities across 12 pan-Pacific countries using annual gap-free Landsat surface reflectance products acquired from 1984 to 2012 and sub-pixel spectral unmixing approaches. Vegetation change trends were then analyzed using Mann-Kendall statistics and Theil-Sen slope estimators. Unmixing results successfully mapped urban vegetation for pixels located in urban parks, forested mountainous regions, as well as agricultural land (correlation coefficient ranging from 0.66 to 0.77). The greatest vegetation loss from 1984 to 2012 was found in Shanghai, Tianjin, and Dalian in China. In contrast, cities including Vancouver (Canada) and Seattle (USA) showed stable vegetation trends through time. Using temporal trend analysis, our results suggest that it is possible to reduce noise and outliers caused by phenological changes particularly in cropland using dense new Landsat time series approaches. We conclude that simple yet effective approaches of unmixing Landsat time series data for assessing spatial and temporal changes of urban vegetation at regional scales can provide
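The two trend statistics named in the record are both simple to state: the Theil-Sen estimator is the median of all pairwise slopes, and the Mann-Kendall S statistic counts concordant minus discordant pairs. A minimal sketch on a synthetic vegetation-fraction series (illustrative values, not the study's data):

```python
import numpy as np
from itertools import combinations

def theil_sen_slope(t, y):
    """Median of all pairwise slopes: a trend estimate robust to outliers,
    e.g. phenology-driven spikes in a vegetation-fraction series."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i]) for i, j in combinations(range(len(t)), 2)]
    return float(np.median(slopes))

def mann_kendall_s(y):
    """Mann-Kendall S statistic: positive for increasing and negative for
    decreasing monotonic trends (the significance test is omitted here)."""
    return int(sum(np.sign(y[j] - y[i]) for i, j in combinations(range(len(y)), 2)))

# Synthetic 1984-2012 vegetation-fraction series with a slow decline.
years = np.arange(1984, 2013, dtype=float)
rng = np.random.default_rng(3)
frac = 0.6 - 0.005 * (years - 1984) + rng.normal(0.0, 0.01, len(years))
slope_est = theil_sen_slope(years, frac)
s_stat = mann_kendall_s(frac)
```

Because both statistics depend only on pairwise comparisons, a few anomalous years barely move them, which is why they suit noisy annual Landsat series.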

  17. Weak-lensing shear estimates with general adaptive moments, and studies of bias by pixellation, PSF distortions, and noise

    Science.gov (United States)

    Simon, Patrick; Schneider, Peter

    2017-08-01

    In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We have employed general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. The GLAM ellipticity has useful properties for any chosen weight profile: the weighted ellipticity is identical to that of the isophotes of elliptical images, and in the absence of noise and pixellation it is always an unbiased estimator of reduced shear. We show that moment-based techniques, adaptive or unweighted, are similar to a model-based approach in the sense that they can be seen as an imperfect fit of an elliptical profile to the image. Due to residuals in the fit, moment-based estimates of ellipticities are prone to underfitting bias when inferred from observed images. The estimation is fundamentally limited mainly by pixellation, which destroys information on the original, pre-seeing image. We give an optimised estimator for the pre-seeing GLAM ellipticity and quantify its bias for noise-free images. To deal with images where pixel noise is prominent, we consider a Bayesian approach to infer GLAM ellipticity where, similar to the noise-free case, the ellipticity posterior can be inconsistent with the true ellipticity if we do not properly account for our ignorance about fit residuals. This underfitting bias, quantified in the paper, does not vary with the overall noise level but changes with the pre-seeing brightness profile and the correlation or heterogeneity of pixel noise over the image. Furthermore, when inferring a constant ellipticity or, more relevantly, constant shear from a source sample with a distribution of intrinsic properties (sizes, centroid positions, intrinsic shapes), an additional, now noise-dependent bias arises towards low signal-to-noise if incorrect prior densities for the intrinsic properties are used. We discuss the origin of this

  18. Measuring the Alfvénic nature of the interstellar medium: Velocity anisotropy revisited

    International Nuclear Information System (INIS)

    Burkhart, Blakesley; Lazarian, A.; Leão, I. C.; De Medeiros, J. R.; Esquivel, A.

    2014-01-01

    The dynamics of the interstellar medium (ISM) are strongly affected by turbulence, which shows increased anisotropy in the presence of a magnetic field. We expand upon the Esquivel and Lazarian method to estimate the Alfvén Mach number using the structure function anisotropy in velocity centroid data from Position-Position-Velocity maps. We utilize three-dimensional magnetohydrodynamic simulations of fully developed turbulence, with a large range of sonic and Alfvénic Mach numbers, to produce synthetic observations of velocity centroids with observational characteristics such as thermal broadening, cloud boundaries, noise, and radiative transfer effects of carbon monoxide. In addition, we investigate how the resulting anisotropy-Alfvén Mach number dependency found in Esquivel and Lazarian might change when taking the second moment of the Position-Position-Velocity cube or when using different expressions to calculate the velocity centroids. We find that the degree of anisotropy is related primarily to the magnetic field strength (i.e., Alfvén Mach number) and the line-of-sight orientation, with a secondary effect on sonic Mach number. If the line of sight lies within ≈45 deg of the mean field direction, the velocity centroid anisotropy is not prominent enough to distinguish different Alfvénic regimes. The observed anisotropy is not strongly affected by including radiative transfer, although future studies should include additional tests for opacity effects. These results open up the possibility of studying the magnetic nature of the ISM using statistical methods in addition to existing observational techniques.
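The velocity centroid underlying this analysis is just the intensity-weighted first moment of each spectrum in the PPV cube. A minimal sketch on a toy cube (synthetic line profiles, not simulation output; the structure-function anisotropy step is not reproduced):

```python
import numpy as np

def velocity_centroid(ppv, v_axis):
    """First-moment velocity centroid map C(x, y) = sum_v v*I(x, y, v) /
    sum_v I(x, y, v) of a Position-Position-Velocity cube, with velocity
    along the last axis."""
    return (ppv * v_axis).sum(axis=2) / ppv.sum(axis=2)

# Toy PPV cube: Gaussian line profiles centred at +1 km/s in every pixel.
v = np.linspace(-5.0, 5.0, 41)
profile = np.exp(-0.5 * ((v - 1.0) / 1.0) ** 2)
ppv = np.ones((8, 8, 1)) * profile[None, None, :]
cmap = velocity_centroid(ppv, v)
```

Structure functions of maps like `cmap`, computed along and across the sky-projected mean field, are what yield the anisotropy measure discussed in the record.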

  19. A phase-based stereo vision system-on-a-chip.

    Science.gov (United States)

    Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia

    2007-02-01

    A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase warping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640×480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
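The phase-measurement idea behind such systems can be shown in a 1-D software sketch: filter both views with a complex Gabor filter and divide the local phase difference by the filter's centre frequency to get a subpixel disparity. This is a generic illustration, not the paper's FPGA design; filter parameters are arbitrary.

```python
import numpy as np

def gabor_phase(signal, omega, sigma=6.0):
    """Local phase via convolution with a complex Gabor filter."""
    n = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    kernel = np.exp(-0.5 * (n / sigma) ** 2) * np.exp(1j * omega * n)
    return np.convolve(signal, kernel, mode="same")

def phase_disparity(left, right, omega):
    """Phase-based disparity: the left/right local phase difference divided
    by the filter's centre frequency gives a subpixel shift estimate
    without explicit matching (np.angle handles phase wrap-around)."""
    dphi = np.angle(gabor_phase(left, omega) * np.conj(gabor_phase(right, omega)))
    return dphi / omega

# A pattern shifted by 2.5 pixels between the "left" and "right" views.
x = np.arange(200, dtype=float)
left = np.cos(0.5 * x)
right = np.cos(0.5 * (x - 2.5))
est = float(np.median(phase_disparity(left, right, omega=0.5)[50:150]))
```

Because the estimate is continuous in the phase difference, sub-pixel shifts fall out directly, which is what makes the approach attractive for a fixed-point FPGA pipeline.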

  20. The plant virus microscope image registration method based on mismatches removing.

    Science.gov (United States)

    Wei, Lifang; Zhou, Shucheng; Dong, Heng; Mao, Qianzhuo; Lin, Jiaxiang; Chen, Riqing

    2016-01-01

    Electron microscopy is one of the major means of observing viruses. The field of view of virus microscope images is limited by specimen preparation and by the size of the camera's field of view. To solve this problem, the virus sample is cut into multiple slices, and information fusion and image registration techniques are applied to obtain large-field, whole-section images. Image registration techniques have been developed over the past decades to increase the camera's effective field of view. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes, or are conceived only to provide visually pleasing registration for image sequences with high overlap ratios. This work presents a method for registering virus microscope images with detailed visual information and subpixel accuracy, even when the overlap ratio of the image sequence is 10% or less. The proposed method focuses on the correspondence set and the inter-image transformation. A mismatch removal strategy based on spatial consistency and the components of the keypoints is proposed to enrich the correspondence set, and the translation model parameters as well as tonal inhomogeneities are corrected by hierarchical estimation and model selection. In the experiments performed, we tested different registration approaches and virus images, confirming that the translation model is not always stationary, despite the fact that the images of the sample come from the same sequence. The mismatch removal strategy makes building subpixel-accurate registrations of virus microscope images easier, and the hierarchical estimation and model selection strategies make the proposed method precise and reliable for image sequences with low overlap ratios. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

    k-Means: Random Sampling Procedure. Optimal 1-Mean: approximation by the centroid (Inaba et al). S = random sample of size O(1/ε); the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
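The Inaba et al. bound quoted in this slide record can be checked empirically: the centroid of a small random sample is, with constant probability, a (1+ε)-approximate 1-mean of the full point set, with ε on the order of 1/|S|. The data, sample size, and success threshold below are illustrative choices.

```python
import numpy as np

def sample_centroid_success_rate(P, sample_size, trials=200, seed=0):
    """Fraction of trials in which the centroid of a random sample S has
    1-mean cost within a (1 + 2/|S|) factor of the optimum (the centroid
    of the full set P), an empirical check of Inaba et al.'s bound."""
    rng = np.random.default_rng(seed)
    c_star = P.mean(axis=0)
    opt = ((P - c_star) ** 2).sum(axis=1).mean()
    ok = 0
    for _ in range(trials):
        S = P[rng.choice(len(P), sample_size, replace=False)]
        cost = ((P - S.mean(axis=0)) ** 2).sum(axis=1).mean()
        if cost <= (1.0 + 2.0 / sample_size) * opt:
            ok += 1
    return ok / trials

P = np.random.default_rng(5).normal(size=(500, 2))
rate = sample_centroid_success_rate(P, sample_size=10)
```

Even a sample of 10 points out of 500 succeeds in the large majority of trials, illustrating why constant-size sampling suffices for approximate 1-means.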

  2. Special Session on Adaptive Optics in Russia and China. Volume 23

    Science.gov (United States)

    1995-01-01

    Fisica Aplicada, Universidad de Cantabria, 39005 Santander, Spain. Tel: 42-201445. Fax: 42-201402. 1. INTRODUCTION: To estimate the centroid of a light... Lia M. Zerbino, Eduardo Aguirre, Anibal P. Laquidara and Mario Garavaglia. Centro de Investigaciones Opticas (CIOp), CC 124 Correo Central... Professor at Universidad Nacional de La Plata and CONICET Engineer. CIOp belongs to Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET)

  3. Modal mass estimation from ambient vibrations measurement: A method for civil buildings

    Science.gov (United States)

    Acunzo, G.; Fiorini, N.; Mori, F.; Spina, D.

    2018-01-01

    A new method for estimating the modal mass ratios of buildings from unscaled mode shapes identified from ambient vibrations is presented. The method is based on the Multi Rigid Polygons (MRP) model, in which each floor of the building is ideally divided into several non-deformable polygons that move independently of each other. The whole mass of the building is concentrated at the centroids of the polygons, and the experimental mode shapes are expressed in terms of rigid translations and rotations. In this way, the mass matrix of the building can easily be computed on the basis of simple information about the geometry and materials of the structure. The modal mass ratios can then be obtained through the classical equations of structural dynamics. Ambient vibration measurements must be performed according to the MRP model, using at least two biaxial accelerometers per polygon. After a brief illustration of the theoretical background of the method, numerical validations are presented, analysing the method's sensitivity to different possible sources of error. Quality indexes are defined for evaluating the approximation of the modal mass ratios obtained from a given MRP model. The capability of the proposed model to be applied to real buildings is illustrated through two experimental applications. In the first, a geometrically irregular reinforced concrete building is considered, using a calibrated Finite Element Model to validate the results of the method. The second application refers to a historical monumental masonry building, with a more complex geometry and less information available. In both cases, MRP models with different numbers of rigid polygons per floor are compared.
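The "classical equation of structural dynamics" invoked here is, in the usual effective-modal-mass form, (phi' M r)^2 / (phi' M phi) for each mode, normalised by the total mass r' M r. Because this expression is invariant to the scaling of phi, unscaled mode shapes from ambient vibrations suffice, which is the method's key enabler. A toy 2-DOF sketch with assumed masses and M-orthogonal mode shapes (not the paper's MRP formulation):

```python
import numpy as np

def modal_mass_ratios(M, phis, r):
    """Effective modal mass ratios (phi' M r)^2 / (phi' M phi), divided by
    the total mass r' M r. Scale-invariant in phi, so unscaled mode shapes
    identified from ambient vibrations are sufficient."""
    total = float(r @ M @ r)
    return [float((phis[:, i] @ M @ r) ** 2 / (phis[:, i] @ M @ phis[:, i])) / total
            for i in range(phis.shape[1])]

# Toy 2-storey shear building: lumped floor masses, M-orthogonal mode shapes.
M = np.diag([2.0e5, 1.5e5])                        # floor masses, kg
phis = np.column_stack([[0.5, 1.0], [1.5, -1.0]])  # columns = unscaled modes
r = np.ones(2)                                     # translational influence vector
ratios = modal_mass_ratios(M, phis, r)
```

For a complete, M-orthogonal mode set the ratios sum to one, and rescaling the mode shapes leaves them unchanged, the two properties the method relies on.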

  4. Asymmetry in some common assignment algorithms: the dispersion factor solution

    OpenAIRE

    T de la Barra; B Pérez

    1986-01-01

    Many common assignment algorithms are based on Dial's original design to determine the paths that trip makers will follow from a given origin to destination centroids. The purpose of this paper is to show that the rules that have to be applied result in two unwanted properties. The first is that trips assigned from an origin centroid i to a destination j can be dramatically different to those resulting from centroid j to centroid i , even if the number of trips is the same and the network is ...

  5. A method to determine the detector locations of the cone-beam projection of the balls’ centers

    International Nuclear Information System (INIS)

    Deng, Lin; Xi, Xiaoqi; Li, Lei; Han, Yu; Yan, Bin

    2015-01-01

    In geometric calibration of cone-beam computed tomography (CBCT), sphere-like objects such as balls are widely imaged, the positioning information of which is obtained to determine the unknown geometric parameters. In this process, the accuracy of the detector location of CB projection of the center of the ball, which we call the center projection, is very important, since geometric calibration is sensitive to errors in the positioning information. Currently in almost all the geometric calibration using balls, the center projection is invariably estimated by the center of the support of the projection or the centroid of the intensity values inside the support approximately. Clackdoyle’s work indicates that the center projection is not always at the center of the support or the centroid of the intensity values inside, and has given a quantitative analysis of the maximum errors in evaluating the center projection by the centroid. In this paper, an exact method is proposed to calculate the center projection, utilizing both the detector location of the ellipse center and the two axis lengths of the ellipse. Numerical simulation results have demonstrated the precision and the robustness of the proposed method. Finally there are some comments on this work with non-uniform density balls, as well as the effect by the error occurred in the evaluation for the location of the orthogonal projection of the cone vertex onto the detector. (paper)

  6. Reliability of an experimental method to analyse the impact point on a golf ball during putting.

    Science.gov (United States)

    Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn

    2015-06-01

    This study aimed to examine the reliability of an experimental method identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid) the following variables were tested; distance of the impact point from the centroid, angle of the impact point from the centroid and distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7% for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid location of a golf ball, therefore allowing for the identification of the point of impact with the putter head and is suitable for use in subsequent studies.

  7. An improved computing method for the image edge detection

    Institute of Scientific and Technical Information of China (English)

    Gang Wang; Liang Xiao; Anzhi He

    2007-01-01

    The framework of detecting the image edge based on the sub-pixel multi-fractal measure (SPMM) is presented. The measure is defined, which gives the sub-pixel local distribution of the image gradient. The more precise singularity exponent of every pixel can be obtained by performing the SPMM analysis on the image. Using the singularity exponents and the multi-fractal spectrum of the image, the image can be segmented into a series of sets with different singularity exponents, thus the image edge can be detected automatically and easily. The simulation results show that the SPMM has higher quality factor in the image edge detection.

  8. Machine vision for high-precision volume measurement applied to levitated containerless material processing

    International Nuclear Information System (INIS)

    Bradshaw, R.C.; Schmidt, D.P.; Rogers, J.R.; Kelton, K.F.; Hyers, R.W.

    2005-01-01

    By combining the best practices in optical dilatometry with numerical methods, a high-speed and high-precision technique has been developed to measure the volume of levitated, containerlessly processed samples with subpixel resolution. Containerless processing provides the ability to study highly reactive materials without the possibility of contamination affecting thermophysical properties. Levitation is a common technique used to isolate a sample as it is being processed. Noncontact optical measurement of thermophysical properties is very important as traditional measuring methods cannot be used. Modern, digitally recorded images require advanced numerical routines to recover the subpixel locations of sample edges and, in turn, produce high-precision measurements

  9. Cylinder gauge measurement using a position sensitive detector

    International Nuclear Information System (INIS)

    St John, W. Doyle

    2007-01-01

    A position sensitive detector (PSD) has been used to determine the diameter of cylindrical pins based on the shift in a laser beam's centroid. The centroid of the light beam is defined here as the weighted average of position by the local intensity. A shift can be observed in the centroid of an otherwise axially symmetric light beam, which is partially obstructed. Additionally, the maximum shift in the centroid is a unique function of the obstructing cylinder diameter. Thus to determine the cylinder diameter, one only needs to detect this maximum shift as the cylinder is swept across the beam
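The centroid definition used in this record, the intensity-weighted average of position, is simple to state in code. A minimal 1-D sketch (the Gaussian beam profile and the obstruction threshold below are synthetic, not the paper's data):

```python
import numpy as np

def beam_centroid(positions, intensities):
    """Intensity-weighted centroid: sum(x_i * I_i) / sum(I_i)."""
    positions = np.asarray(positions, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    return float(np.sum(positions * intensities) / np.sum(intensities))

# Axially symmetric beam: centroid sits at the middle of the detector.
x = np.linspace(-1.0, 1.0, 201)
beam = np.exp(-x ** 2 / 0.1)
c_sym = beam_centroid(x, beam)          # ~0

# Partially obstructing the left side shifts the centroid to the right;
# the maximum shift encodes the obstructing cylinder's diameter.
obstructed = beam.copy()
obstructed[x < -0.2] = 0.0
c_shift = beam_centroid(x, obstructed)  # > 0
```

Sweeping the obstruction across the beam and recording the maximum of `c_shift` would mimic the measurement principle described in the abstract.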

  10. Wave-equation Q tomography

    KAUST Repository

    Dutta, Gaurav

    2016-10-12

    Strong subsurface attenuation leads to distortion of amplitudes and phases of seismic waves propagating inside the earth. The amplitude and the dispersion losses from attenuation are often compensated for during prestack depth migration. However, most attenuation compensation or Q-compensation migration algorithms require an estimate of the background Q model. We have developed a wave-equation gradient optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε, where ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early arrivals. The gradient is computed by migrating the observed traces weighted by the frequency shift residuals. The background Q model is perturbed until the predicted and the observed traces have the same peak frequencies or the same centroid frequencies. Numerical tests determined that an improved accuracy of the Q model by wave-equation Q tomography leads to a noticeable improvement in migration image quality. © 2016 Society of Exploration Geophysicists.
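The centroid-frequency skeleton used in this misfit can be illustrated with one common definition, the power-weighted mean frequency of an arrival's spectrum (an assumption for illustration; the authors' exact weighting may differ). The synthetic spectrum and attenuation factor below are made up:

```python
import numpy as np

def centroid_frequency(freqs, spectrum):
    """Power-weighted mean frequency: f_c = sum(f |A|^2) / sum(|A|^2)."""
    w = np.abs(np.asarray(spectrum, dtype=float)) ** 2
    return float(np.sum(np.asarray(freqs, dtype=float) * w) / np.sum(w))

# Synthetic amplitude spectrum of an early arrival, centred near 30 Hz.
freqs = np.linspace(0.0, 100.0, 1001)
spec = np.exp(-((freqs - 30.0) ** 2) / 200.0)
# Attenuation preferentially removes high frequencies, shifting f_c down;
# the observed-minus-predicted shift is the skeletonized data for Q inversion.
attenuated = spec * np.exp(-0.02 * freqs)
cf_clean = centroid_frequency(freqs, spec)
cf_att = centroid_frequency(freqs, attenuated)
```

The downshift `cf_clean - cf_att` is the kind of scalar residual the tomography minimizes instead of matching full waveforms.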

  11. Using Range-Wide Abundance Modeling to Identify Key Conservation Areas for the Micro-Endemic Bolson Tortoise (Gopherus flavomarginatus).

    Directory of Open Access Journals (Sweden)

    Cinthya A Ureña-Aranda

    Full Text Available A widespread biogeographic pattern in nature is that population abundance is not uniform across the geographic range of species: most occurrence sites have relatively low numbers, whereas a few places contain orders of magnitude more individuals. The Bolson tortoise Gopherus flavomarginatus is endemic to a small region of the Chihuahuan Desert in Mexico, where habitat deterioration threatens this species with extinction. In this study we combined field burrows counts and the approach for modeling species abundance based on calculating the distance to the niche centroid to obtain range-wide abundance estimates. For the Bolson tortoise, we found a robust, negative relationship between observed burrows abundance and distance to the niche centroid, with a predictive capacity of 71%. Based on these results we identified four priority areas for the conservation of this microendemic and threatened tortoise. We conclude that this approach may be a useful approximation for identifying key areas for sampling and conservation efforts in elusive and rare species.

  12. Wave-equation Q tomography

    KAUST Repository

    Dutta, Gaurav; Schuster, Gerard T.

    2016-01-01

    Strong subsurface attenuation leads to distortion of amplitudes and phases of seismic waves propagating inside the earth. The amplitude and the dispersion losses from attenuation are often compensated for during prestack depth migration. However, most attenuation compensation or Q-compensation migration algorithms require an estimate of the background Q model. We have developed a wave-equation gradient optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε, where ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early arrivals. The gradient is computed by migrating the observed traces weighted by the frequency shift residuals. The background Q model is perturbed until the predicted and the observed traces have the same peak frequencies or the same centroid frequencies. Numerical tests determined that an improved accuracy of the Q model by wave-equation Q tomography leads to a noticeable improvement in migration image quality. © 2016 Society of Exploration Geophysicists.

  13. PIV-DCNN: cascaded deep convolutional neural networks for particle image velocimetry

    Science.gov (United States)

    Lee, Yong; Yang, Hua; Yin, Zhouping

    2017-12-01

    Velocity estimation (extracting the displacement vector information) from the particle image pairs is of critical importance for particle image velocimetry. This problem is mostly transformed into finding the sub-pixel peak in a correlation map. To address the original displacement extraction problem, we propose a different evaluation scheme (PIV-DCNN) with four-level regression deep convolutional neural networks. At each level, the networks are trained to predict a vector from two input image patches. The low-level network is skilled at large displacement estimation and the high-level networks are devoted to improving the accuracy. Outlier replacement and symmetric window offset operation glue the well-functioning networks in a cascaded manner. Through comparison with the standard PIV methods (one-pass cross-correlation method, three-pass window deformation), the practicability of the proposed PIV-DCNN is verified by the application to a diversity of synthetic and experimental PIV images.

  14. Martial arts striking hand peak acceleration, accuracy and consistency.

    Science.gov (United States)

    Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A

    2013-01-01

    The goal of this paper was to investigate the possible trade-off between peak hand acceleration and accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes mean squared distance from their centroid. We found that training experience was significantly correlated to hand peak acceleration prior to impact (r(2)=0.456, p=0.032) and accuracy (r(2)=0.621, p=0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated to consistency (r(2)=0.085, p=0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
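The accuracy and consistency measures defined in this abstract reduce to a few lines of array arithmetic: accuracy is the distance from the strikes' centroid to the target, consistency the RMS scatter about that centroid. A sketch with made-up impact coordinates (not the study's data):

```python
import numpy as np

def accuracy_and_consistency(points, target):
    """Accuracy: radial distance from the strikes' centroid to the target.
    Consistency: sqrt of the mean squared distance of strikes from their centroid."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    accuracy = float(np.linalg.norm(centroid - np.asarray(target, dtype=float)))
    consistency = float(np.sqrt(np.mean(np.sum((pts - centroid) ** 2, axis=1))))
    return accuracy, consistency

# Hypothetical impact coordinates (cm) for one participant's strikes.
strikes = [(1.0, 2.0), (1.5, 2.5), (0.5, 1.5), (1.0, 2.0)]
acc, cons = accuracy_and_consistency(strikes, target=(0.0, 0.0))
```

Note that the two measures are independent: a tight cluster far from the target is consistent but inaccurate, which is why the study reports them separately.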

  15. Beam-Based Alignment of Magnetic Field in the Fermilab Electron Cooler Cooling Section

    International Nuclear Information System (INIS)

    Seletskiy, S. M.; Tupikov, V.

    2006-01-01

    The Fermilab Electron Cooling Project requires a low effective angular spread of electrons in the cooling section. One of the main components of the effective electron angles is the angle of the electron beam centroid with respect to the antiproton beam. This angle is caused by the poor quality of the magnetic field in the 20 m long cooling section solenoid and by the mismatch of the beam centroid to the entrance of the cooling section. This paper focuses on the beam-based procedure for aligning the cooling section field and matching the beam centroid. The discussed procedure allows suppressing the beam centroid angles below the critical value of 0.1 mrad

  16. Ground calibration of the spatial response and quantum efficiency of the CdZnTe hard x-ray detectors for NuSTAR

    Science.gov (United States)

    Grefenstette, Brian W.; Bhalerao, Varun; Cook, W. Rick; Harrison, Fiona A.; Kitaguchi, Takao; Madsen, Kristin K.; Mao, Peter H.; Miyasaka, Hiromasa; Rana, Vikram

    2017-08-01

    Pixelated Cadmium Zinc Telluride (CdZnTe) detectors are currently flying on the Nuclear Spectroscopic Telescope ARray (NuSTAR) NASA Astrophysics Small Explorer. While the pixel pitch of the detectors is ≈ 605 μm, we can leverage the detector readout architecture to determine the interaction location of an individual photon to much higher spatial accuracy. The sub-pixel spatial location allows us to finely oversample the point spread function of the optics and reduces imaging artifacts due to pixelation. In this paper we demonstrate how the sub-pixel information is obtained, how the detectors were calibrated, and provide ground verification of the quantum efficiency of our Monte Carlo model of the detector response.

  17. Regional PV power estimation and forecast to mitigate the impact of high photovoltaic penetration on electric grid.

    Science.gov (United States)

    Pierro, Marco; De Felice, Matteo; Maggioni, Enrico; Moser, David; Perotto, Alessandro; Spada, Francesco; Cornaro, Cristina

    2017-04-01

    The growing photovoltaic generation results in a stochastic variability of the electric demand that could compromise the stability of the grid and increase the amount of energy reserve and the energy imbalance cost. On a regional scale, solar power estimation and forecast is becoming essential for Distribution System Operators, Transmission System Operators, energy traders, and aggregators of generation. Indeed the estimation of regional PV power can be used for PV power supervision and real time control of residual load. Mid-term PV power forecast can be employed for transmission scheduling to reduce energy imbalance and related cost of penalties, residual load tracking, trading optimization, secondary energy reserve assessment. In this context, a new upscaling method was developed and used for estimation and mid-term forecast of the photovoltaic distributed generation in a small area in the north of Italy under the control of a local DSO. The method was based on spatial clustering of the PV fleet and neural network models that input satellite or numerical weather prediction data (centered on cluster centroids) to estimate or predict the regional solar generation. It requires a low computational effort and very little input information needs to be provided by users. The power estimation model achieved a RMSE of 3% of installed capacity. Intra-day forecasts (from 1 to 4 hours) obtained a RMSE of 5%-7%, while the one- and two-day forecasts achieved a RMSE of 7% and 7.5%. A model to estimate the forecast error and the prediction intervals was also developed. The photovoltaic production in the considered region provided 6.9% of the electric consumption in 2015. Since the PV penetration is very similar to that observed at national level (7.9%), this is a good case study to analyse the impact of PV generation on the electric grid and the effects of PV power forecast on transmission scheduling and on secondary reserve estimation.
It appears that, already with 7% of PV

  18. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  19. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    Directory of Open Access Journals (Sweden)

    Edwin J. Niklitschek

    2016-10-01

    Full Text Available Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures, may also need to be estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering: five different sampling scenarios where 0–4 lagoons were excluded from the nursery-source dataset and six nursery-signature separation scenarios that simulated data separated 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain than mixing proportion ones, across all sampling scenarios (BI < 0.13, SE < 0
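When the nursery baseline signatures are known, the maximum-likelihood mixing proportions can be obtained with a simple EM iteration over the proportions alone. The sketch below uses a synthetic 1-D Gaussian setup with made-up signatures, not the otolith data or the authors' ML-MM code:

```python
import numpy as np

def mixing_proportions_em(x, means, sds, n_iter=300):
    """EM updates for the mixing proportions only; the Gaussian baseline
    signatures (means, sds) are treated as known from nursery sampling."""
    means = np.asarray(means, dtype=float)
    sds = np.asarray(sds, dtype=float)
    pi = np.full(len(means), 1.0 / len(means))
    # (N, K) matrix of per-source Gaussian densities, fixed across iterations
    lik = (np.exp(-0.5 * ((x[:, None] - means) / sds) ** 2)
           / (sds * np.sqrt(2.0 * np.pi)))
    for _ in range(n_iter):
        resp = pi * lik                          # E-step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        pi = resp.mean(axis=0)                   # M-step: new proportions
    return pi

# Mixed stock drawn from two "nurseries" with known 1-D signatures.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 3500),   # 70% from nursery A
                    rng.normal(4.0, 1.0, 1500)])  # 30% from nursery B
pi_hat = mixing_proportions_em(x, means=[0.0, 4.0], sds=[1.0, 1.0])
```

The abstract's harder scenarios, non-sampled sources and overlapping signatures, correspond to dropping a component from `means`/`sds` or shrinking the separation between them, which is where bias appears.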

  20. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    Science.gov (United States)

    Darnaude, Audrey M.

    2016-01-01

    Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures, may also need to be estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering: five different sampling scenarios where 0–4 lagoons were excluded from the nursery-source dataset and six nursery-signature separation scenarios that simulated data separated 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain than mixing proportion ones, across all sampling scenarios (BI nursery signatures improved reliability

  1. Equilibrium and stability of off-axis periodically focused particle beams

    International Nuclear Information System (INIS)

    Moraes, J.S.; Pakter, R.; Rizzato, F.B.

    2004-01-01

    A general equation for the centroid motion of free, continuous, intense beams propagating off axis in solenoidal periodic focusing fields is derived. The centroid equation is found to be independent of the specific beam distribution and may exhibit unstable solutions. A new Vlasov equilibrium for off-axis beam propagation is also obtained. The properties of the equilibrium and the relevance of centroid motion to beam confinement are discussed

  2. Investigation of the Relationship Between Gross Tumor Volume Location and Pneumonitis Rates Using a Large Clinical Database of Non-Small-Cell Lung Cancer Patients

    International Nuclear Information System (INIS)

    Vinogradskiy, Yevgeniy; Tucker, Susan L.; Liao Zhongxing; Martel, Mary K.

    2012-01-01

    Purpose: Studies have suggested that function may vary throughout the lung, and that patients who have tumors located in the base of the lung are more susceptible to radiation pneumonitis. The purpose of our study was to investigate the relationship between gross tumor volume (GTV) location and pneumonitis rates using a large clinical database of 547 patients with non–small-cell lung cancer. Methods and Materials: The GTV centroids of all patients were mapped onto one common coordinate system, in which the boundaries of the coordinate system were defined by the extreme points of each individual patient lung. The data were qualitatively analyzed by graphing all centroids and displaying the data according to the presence of severe pneumonitis, tumor stage, and smoking status. The centroids were grouped according to superior–inferior segments, and the pneumonitis rates were analyzed. In addition, we incorporated the GTV centroid information into a Lyman–Kutcher–Burman normal tissue complication probability model and tested whether adding spatial information significantly improved the fit of the model. Results: Of the 547 patients analyzed, 111 (20.3%) experienced severe radiation pneumonitis. The pneumonitis incidence rates were 16%, 23%, and 21% for the superior, middle, and inferior thirds of the lung, respectively. Qualitatively, the GTV centroids of nonsmokers were notably absent from the superior portion of the lung. In addition, the GTV centroids of patients who had Stage III and IV clinical staging were concentrated toward the medial edge of the lung. The comparison between the GTV centroid model and the conventional dose–volume model did not yield a statistically significant difference in model fit. Conclusions: Lower pneumonitis rates were noted for the superior portion of the lung; however the differences were not statistically significant. For our patient cohort, incorporating GTV centroid information did not lead to a statistically significant

  3. Investigation of the relationship between gross tumor volume location and pneumonitis rates using a large clinical database of non-small-cell lung cancer patients.

    Science.gov (United States)

    Vinogradskiy, Yevgeniy; Tucker, Susan L; Liao, Zhongxing; Martel, Mary K

    2012-04-01

    Studies have suggested that function may vary throughout the lung, and that patients who have tumors located in the base of the lung are more susceptible to radiation pneumonitis. The purpose of our study was to investigate the relationship between gross tumor volume (GTV) location and pneumonitis rates using a large clinical database of 547 patients with non-small-cell lung cancer. The GTV centroids of all patients were mapped onto one common coordinate system, in which the boundaries of the coordinate system were defined by the extreme points of each individual patient lung. The data were qualitatively analyzed by graphing all centroids and displaying the data according to the presence of severe pneumonitis, tumor stage, and smoking status. The centroids were grouped according to superior-inferior segments, and the pneumonitis rates were analyzed. In addition, we incorporated the GTV centroid information into a Lyman-Kutcher-Burman normal tissue complication probability model and tested whether adding spatial information significantly improved the fit of the model. Of the 547 patients analyzed, 111 (20.3%) experienced severe radiation pneumonitis. The pneumonitis incidence rates were 16%, 23%, and 21% for the superior, middle, and inferior thirds of the lung, respectively. Qualitatively, the GTV centroids of nonsmokers were notably absent from the superior portion of the lung. In addition, the GTV centroids of patients who had Stage III and IV clinical staging were concentrated toward the medial edge of the lung. The comparison between the GTV centroid model and the conventional dose-volume model did not yield a statistically significant difference in model fit. Lower pneumonitis rates were noted for the superior portion of the lung; however the differences were not statistically significant. For our patient cohort, incorporating GTV centroid information did not lead to a statistically significant improvement in the fit of the pneumonitis model. Copyright

  4. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin

    2015-02-03

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7  cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2  km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  5. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin; Dutta, Rishabh; Jonsson, Sigurjon

    2015-01-01

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7  cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2  km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  6. Accuracy of locating circular features using machine vision

    Science.gov (United States)

    Sklair, Cheryl W.; Hoff, William A.; Gatrell, Lance B.

    1992-03-01

    The ability to automatically locate objects using vision is a key technology for flexible, intelligent robotic operations. The vision task is facilitated by placing optical targets or markings in advance on the objects to be located. A number of researchers have advocated the use of circular target features as the features that can be most accurately located. This paper describes extensive analysis on circle centroid accuracy using both simulations and laboratory measurements. The work was part of an effort to design a video positioning sensor for NASA's Flight Telerobotic Servicer that would meet accuracy requirements. We have analyzed the main contributors to centroid error and have classified them into the following: (1) spatial quantization errors, (2) errors due to signal noise and random timing errors, (3) surface tilt errors, and (4) errors in modeling camera geometry. It is possible to compensate for the errors in (3) given an estimate of the tilt angle, and the errors from (4) by calibrating the intrinsic camera attributes. The errors in (1) and (2) cannot be compensated for, but they can be measured and their effects reduced somewhat. To characterize these error sources, we measured centroid repeatability under various conditions, including synchronization method, signal-to-noise ratio, and frequency attenuation. Although these results are specific to our video system and equipment, they provide a reference point that should be a characteristic of typical CCD cameras and digitization equipment.

  7. The influence of image sensor irradiation damage on the tracking and pointing accuracy of optical communication system

    Science.gov (United States)

    Li, Xiaoliang; Luo, Lei; Li, Pengwei; Yu, Qingkui

    2018-03-01

    The image sensor in a satellite optical communication system may generate noise due to space irradiation damage, leading to deviations in the determination of the light-spot centroid. Based on irradiation test data for CMOS devices, simulated defect spots of different sizes have been used to calculate the centroid deviation value with the grey-level centroid algorithm. The impact on the tracking and pointing accuracy of the system has been analyzed. The results show that both the number and the position of irradiation-induced defect pixels contribute to spot centroid deviation, and that larger spots show less deviation. Finally, considering space radiation damage, suggestions are made for the constraints on spot size selection.
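    The grey-level centroid algorithm referenced in this record can be sketched as an intensity-weighted average of pixel coordinates. The Gaussian spot, grid size, and single hot-pixel defect below are illustrative assumptions, not values from the study:

```python
import numpy as np

def grey_level_centroid(img):
    """Intensity-weighted centroid (grey-level centroid algorithm)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian spot on a 32x32 sensor (illustrative parameters)
ys, xs = np.mgrid[0:32, 0:32]
spot = np.exp(-((xs - 15.5)**2 + (ys - 15.5)**2) / (2 * 3.0**2))
cx, cy = grey_level_centroid(spot)

# Add a single irradiation-induced hot pixel off-centre
damaged = spot.copy()
damaged[20, 20] += 0.5
dx, dy = grey_level_centroid(damaged)

# The defect pixel pulls the centroid toward its own position
print(f"deviation: ({dx - cx:+.4f}, {dy - cy:+.4f}) px")
```

    The deviation grows with the defect amplitude and its distance from the true spot centre, which is the record's point that both the number and the position of defect pixels matter.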

  8. Trilateration-based localization algorithm for ADS-B radar systems

    Science.gov (United States)

    Huang, Ming-Shih

    Rapidly increasing growth and demand in various unmanned aerial vehicles (UAV) have pushed governmental regulation development and numerous technology research advances toward integrating unmanned and manned aircraft into the same civil airspace. Safety of other airspace users is the primary concern; thus, with the introduction of UAV into the National Airspace System (NAS), a key issue to overcome is the risk of a collision with manned aircraft. The challenge of UAV integration is global. As the automatic dependent surveillance-broadcast (ADS-B) system has gained wide acceptance, additional exploitations of the radioed satellite-based information are topics of current interest. One such opportunity includes the augmentation of the communication ADS-B signal with a random bi-phase modulation for concurrent use as a radar signal for detecting other aircraft in the vicinity. This dissertation provides detailed discussion of the ADS-B radar system, as well as the formulation and analysis of a suitable non-cooperative multi-target tracking method for the ADS-B radar system using radar ranging techniques and particle filter algorithms. In order to deal with specific challenges faced by the ADS-B radar system, several estimation algorithms are studied. Trilateration-based localization algorithms are proposed due to their easy implementation and their ability to work with coherent signal sources. The centroid of the three most closely spaced intersections of constant-range loci is conventionally used as the trilateration estimate without rigorous justification. In this dissertation, we address the quality of trilateration intersections through range scaling factors. A number of well-known triangle centers, including the centroid, incenter, Lemoine point (LP), and Fermat point (FP), are discussed in detail. To the author's best knowledge, LP has never before been associated with trilateration techniques. According to our study, LP is proposed as the best trilateration estimator thanks to the
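    The triangle centers compared in this dissertation record have simple barycentric formulas in terms of the side lengths. The sketch below computes the centroid, incenter, and Lemoine (symmedian) point of three intersection points; it is a generic illustration, not the author's implementation:

```python
import numpy as np

def triangle_centers(A, B, C):
    """Centroid, incenter, and Lemoine (symmedian) point of a triangle."""
    A, B, C = map(np.asarray, (A, B, C))
    a = np.linalg.norm(B - C)   # side opposite vertex A
    b = np.linalg.norm(C - A)   # side opposite vertex B
    c = np.linalg.norm(A - B)   # side opposite vertex C
    centroid = (A + B + C) / 3
    incenter = (a * A + b * B + c * C) / (a + b + c)
    lemoine  = (a**2 * A + b**2 * B + c**2 * C) / (a**2 + b**2 + c**2)
    return centroid, incenter, lemoine

# Three closely spaced intersections of constant-range loci (illustrative)
g, i, l = triangle_centers((0.0, 0.0), (1.0, 0.2), (0.4, 0.9))
```

    For an equilateral triangle all three centers coincide; they differ increasingly as the intersection triangle becomes more skewed, which is where the choice of estimator matters.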

  9. Ozone mixing ratios inside tropical deep convective clouds from OMI satellite measurements

    Directory of Open Access Journals (Sweden)

    J. R. Ziemke

    2009-01-01

    Full Text Available We have developed a new technique for estimating ozone mixing ratio inside deep convective clouds. The technique uses the concept of an optical centroid cloud pressure that is indicative of the photon path inside clouds. Radiative transfer calculations based on realistic cloud vertical structure as provided by CloudSat radar data show that because deep convective clouds are optically thin near the top, photons can penetrate significantly inside the cloud. This photon penetration coupled with in-cloud scattering produces optical centroid pressures that are hundreds of hPa inside the cloud. We combine measured column ozone and the optical centroid cloud pressure derived using the effects of rotational-Raman scattering to estimate O3 mixing ratio in the upper regions of deep convective clouds. The data are obtained from the Ozone Monitoring Instrument (OMI) onboard NASA's Aura satellite. Our results show that low O3 concentrations in these clouds are a common occurrence throughout much of the tropical Pacific. Ozonesonde measurements in the tropics following convective activity also show very low concentrations of O3 in the upper troposphere. These low amounts are attributed to vertical injection of ozone poor oceanic boundary layer air during convection into the upper troposphere followed by convective outflow. Over South America and Africa, O3 mixing ratios inside deep convective clouds often exceed 50 ppbv which are comparable to mean background (cloud-free) amounts and are consistent with higher concentrations of injected boundary layer/lower tropospheric O3 relative to the remote Pacific. The Atlantic region in general also consists of higher amounts of O3 precursors due to both biomass burning and lightning. Assuming that O3 is well mixed (i.e., constant mixing ratio with height) up to the tropopause, we can estimate the stratospheric column O3 over

  10. Automated quantification of surface water inundation in wetlands using optical satellite imagery

    Science.gov (United States)

    DeVries, Ben; Huang, Chengquan; Lang, Megan W.; Jones, John W.; Huang, Wenli; Creed, Irena F.; Carroll, Mark L.

    2017-01-01

    We present a fully automated and scalable algorithm for quantifying surface water inundation in wetlands. Requiring no external training data, our algorithm estimates sub-pixel water fraction (SWF) over large areas and long time periods using Landsat data. We tested our SWF algorithm over three wetland sites across North America, including the Prairie Pothole Region, the Delmarva Peninsula and the Everglades, representing a gradient of inundation and vegetation conditions. We estimated SWF at 30-m resolution with accuracies ranging from a normalized root-mean-square-error of 0.11 to 0.19 when compared with various high-resolution ground and airborne datasets. SWF estimates were more sensitive to subtle inundated features compared to previously published surface water datasets, accurately depicting water bodies, large heterogeneously inundated surfaces, narrow water courses and canopy-covered water features. Despite this enhanced sensitivity, several sources of errors affected SWF estimates, including emergent or floating vegetation and forest canopies, shadows from topographic features, urban structures and unmasked clouds. The automated algorithm described in this article allows for the production of high temporal resolution wetland inundation data products to support a broad range of applications.

  11. Theoretical Predictions of Giant Resonances in 94Mo

    Science.gov (United States)

    Golden, Matthew; Bonasera, Giacomo; Shlomo, Shalom

    2016-09-01

    We perform Hartree-Fock based Random Phase Approximation calculations for 94Mo using thirty-three common Skyrme interactions found in the literature. We calculate the strength functions and the Centroid Energies of the Isoscalar Giant Resonances for all multipolarities L = 0, 1, 2, 3. We compare the calculated Centroid Energies with the experimental values; we also study the Centroid Energy and any correlation it may have with the Nuclear Matter properties of each interaction.

  12. Study of Hybrid Localization Noncooperative Scheme in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Irfan Dwiguna Sumitra

    2017-01-01

    Full Text Available In this paper, we evaluated, through experiment and analysis, the measurement accuracy of determining object location based on a wireless sensor network (WSN). The algorithm estimates the position of sensor nodes using the received signal strength (RSS) from nodes scattered in the environment, in particular inside buildings. In addition, we considered an algorithm based on weighted centroid localization (WCL). In a particular testbed, we combined RSS and WCL as a hybrid localization scheme for the noncooperative case, in which source nodes communicate directly only with anchor nodes. Our experimental results show a localization accuracy of more than 90% and a reduction of the estimation error to 4% compared to existing algorithms.
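    The WCL step of the hybrid scheme estimates a node's position as an RSS-weighted average of the anchor positions. In the sketch below, converting dBm readings to linear power for the weights is a common choice and an assumption here, not necessarily the authors' exact formulation:

```python
import numpy as np

def wcl_estimate(anchors, rss_dbm, g=1.0):
    """Weighted centroid localization from RSS at known anchors.

    anchors : (N, 2) anchor coordinates
    rss_dbm : (N,) received signal strength in dBm (higher = closer)
    g       : exponent controlling how strongly near anchors dominate
    """
    anchors = np.asarray(anchors, dtype=float)
    # Convert dBm to linear power so the weights are positive
    w = (10 ** (np.asarray(rss_dbm) / 10.0)) ** g
    return (w[:, None] * anchors).sum(axis=0) / w.sum()

# Four corner anchors; strongest RSS comes from the nearest anchor
anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
est = wcl_estimate(anchors, rss_dbm=[-40, -60, -60, -70])
```

    With equal RSS at all anchors the estimate reduces to the plain centroid of the anchor positions; unequal RSS pulls it toward the strongest anchor.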

  13. Surveillance instrumentation for spent-fuel safeguards

    International Nuclear Information System (INIS)

    McKenzie, J.M.; Holmes, J.P.; Gillman, L.K.; Schmitz, J.A.; McDaniel, P.J.

    1978-01-01

    The movement of spent reactor fuel in a facility may be tracked using simple instrumentation together with a real-time unfolding algorithm. Experimental measurements from multiple radiation monitors and crane weight and position monitors were obtained during spent fuel movements at the G.E. Morris Spent-Fuel Storage Facility. These data and a preliminary version of an unfolding algorithm were used to estimate the position of the centroid and the magnitude of the spent fuel radiation source. The spatial location was estimated to within ±1.5 m and the source magnitude to within ±10% of their true values. Application of this surveillance instrumentation to spent-fuel safeguards is discussed

  14. The RCE-Kmeans Method for Data Clustering

    Directory of Open Access Journals (Sweden)

    Izmy Alwiah Musdar

    2015-07-01

    Abstract Many methods have been developed to solve the clustering problem. One of them comes from the field of swarm intelligence: Particle Swarm Optimization (PSO). Rapid Centroid Estimation (RCE) is a clustering method based on Particle Swarm Optimization. RCE, like other variants of PSO clustering, does not depend on the initial cluster centers. Moreover, RCE has a faster computation time than previous methods such as PSC and mPSC. However, RCE has a higher standard deviation than PSC and mPSC, which affects the variance of the clustering results. This happens because of an improper equilibrium state, the condition in which the positions of the particles no longer change, when the stopping criterion is reached. This study proposes RCE-Kmeans, a method that applies K-means after the equilibrium state of RCE is reached in order to update the particle positions generated by the RCE method. The results showed that RCE-Kmeans produces a better-quality clustering scheme than K-means on 7 of 10 datasets and better than the RCE method on 8 of 10 datasets. Applying K-means clustering to the RCE method also reduces the standard deviation relative to the RCE method. Keywords—Data Clustering, Particle Swarm, K-means, Rapid Centroid Estimation.
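    The refinement idea behind RCE-Kmeans — running standard K-means updates from the centroids the swarm stage converged to — can be sketched generically. The RCE stage itself is not reproduced here; arbitrary initial centroids stand in for the converged particle positions:

```python
import numpy as np

def kmeans_refine(X, centroids, n_iter=20):
    """Standard K-means updates starting from given centroids
    (e.g. the particle positions an RCE run has converged to)."""
    centroids = np.asarray(centroids, dtype=float).copy()
    for _ in range(n_iter):
        # Assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points
        for k in range(len(centroids)):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids, labels

# Two well-separated blobs; deliberately poor initial centroids
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
cents, labels = kmeans_refine(X, [[1.0, 1.0], [4.0, 4.0]])
```

    Because the K-means updates are deterministic given their starting centroids, appending them to the swarm stage tightens the spread (standard deviation) of the final clustering across runs, which is the effect the abstract reports.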

  15. Best Approximation of the Fractional Semi-Derivative Operator by Exponential Series

    Directory of Open Access Journals (Sweden)

    Vladimir D. Zakharchenko

    2018-01-01

    Full Text Available A significant reduction in the time required to estimate the mean frequency of the spectrum of Doppler signals, when seeking to measure the instantaneous velocity of dangerous near-Earth cosmic objects (NEO), is an important task in countering the threat from asteroids. Spectral analysis methods have shown that the coordinate of the centroid of the Doppler signal spectrum can be found by using operations in the time domain without spectral processing. At the same time, an increase in the speed of the algorithm for estimating the mean frequency of the spectrum is achieved by using fractional differentiation without spectral processing. Thus, an accurate estimate of the location of the centroid of the spectrum of Doppler signals can be obtained in the time domain as the signal arrives. This paper considers the implementation of a fractional-differentiating filter of order ½ by a set of automation astatic transfer elements, which greatly simplifies practical implementation. Real technical devices have a finite time delay, albeit small in comparison with the duration of the signal. As a result, a real filter will process the signal with some error. Accordingly, this paper introduces and uses the concept of a "pre-derivative" of order ½. An optimal algorithm for realizing the structure of the filter is proposed based on the criterion of minimum mean square error. Relations are obtained for the quadrature coefficients that determine the structure of the filter.

  16. THE HEIGHT OF A WHITE-LIGHT FLARE AND ITS HARD X-RAY SOURCES

    Energy Technology Data Exchange (ETDEWEB)

    Martinez Oliveros, Juan-Carlos; Hudson, Hugh S.; Hurford, Gordon J.; Krucker, Saem; Lin, R. P. [Space Sciences Laboratory, UC Berkeley, Berkeley, CA 94720 (United States); Lindsey, Charles [North West Research Associates, CORA Division, Boulder, CO (United States); Couvidat, Sebastien; Schou, Jesper [W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA (United States); Thompson, W. T. [Adnet Systems, Inc., NASA Goddard Space Flight Center, code 671, Greenbelt, MD (United States)

    2012-07-10

    We describe observations of a white-light (WL) flare (SOL2011-02-24T07:35:00, M3.5) close to the limb of the Sun, from which we obtain estimates of the heights of the optical continuum sources and those of the associated hard X-ray (HXR) sources. For this purpose, we use HXR images from the Reuven Ramaty High Energy Solar Spectroscopic Imager and optical images at 6173 Å from the Solar Dynamics Observatory. We find that the centroids of the impulsive-phase emissions in WL and HXRs (30-80 keV) match closely in central distance (angular displacement from Sun center), within uncertainties of order 0.″2. This directly implies a common source height for these radiations, strengthening the connection between visible flare continuum formation and the accelerated electrons. We also estimate the absolute heights of these emissions as vertical distances from Sun center. Such a direct estimation has not been done previously, to our knowledge. Using a simultaneous 195 Å image from the Solar-Terrestrial RElations Observatory spacecraft to identify the heliographic coordinates of the flare footpoints, we determine mean heights above the photosphere (as normally defined; τ = 1 at 5000 Å) of 305 ± 170 km and 195 ± 70 km, respectively, for the centroids of the HXR and WL footpoint sources of the flare. These heights are unexpectedly low in the atmosphere, and are consistent with the expected locations of τ = 1 for the 6173 Å and the ∼40 keV photons observed, respectively.

  17. The Height of a White-Light Flare and its Hard X-Ray Sources

    Science.gov (United States)

    Oliveros, Juan-Carlos Martinez; Hudson, Hugh S.; Hurford, Gordon J.; Krucker, Saem; Lin, R. P.; Lindsey, Charles; Couvidat, Sebastien; Schou, Jesper; Thompson, W. T.

    2012-01-01

    We describe observations of a white-light (WL) flare (SOL2011-02-24T07:35:00, M3.5) close to the limb of the Sun, from which we obtain estimates of the heights of the optical continuum sources and those of the associated hard X-ray (HXR) sources. For this purpose, we use HXR images from the Reuven Ramaty High Energy Solar Spectroscopic Imager and optical images at 6173 Å from the Solar Dynamics Observatory. We find that the centroids of the impulsive-phase emissions in WL and HXRs (30-80 keV) match closely in central distance (angular displacement from Sun center), within uncertainties of order 0.″2. This directly implies a common source height for these radiations, strengthening the connection between visible flare continuum formation and the accelerated electrons. We also estimate the absolute heights of these emissions as vertical distances from Sun center. Such a direct estimation has not been done previously, to our knowledge. Using a simultaneous 195 Å image from the Solar-Terrestrial RElations Observatory spacecraft to identify the heliographic coordinates of the flare footpoints, we determine mean heights above the photosphere (as normally defined; τ = 1 at 5000 Å) of 305 ± 170 km and 195 ± 70 km, respectively, for the centroids of the HXR and WL footpoint sources of the flare. These heights are unexpectedly low in the atmosphere, and are consistent with the expected locations of τ = 1 for the 6173 Å and the ∼40 keV photons observed, respectively.

  18. Regularization of Instantaneous Frequency Attribute Computations

    Science.gov (United States)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computing a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. "Time Frequency Analysis: Theory and Applications." USA: Prentice Hall (1995). Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
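    The second method in this record — a temporally variant power centroid of a time-frequency spectrum — can be sketched with a short-time Fourier transform. The chirp test signal and window length are illustrative, and the regularized division is reduced here to a simple floor on the per-frame spectral power rather than an L-curve or GCV scheme:

```python
import numpy as np

def centroid_frequency(x, fs, win=256, hop=64, eps=1e-12):
    """Time-variant centroid (dominant) frequency from an STFT power spectrum.

    The division by total power per frame is 'regularized' with a simple
    floor (eps) to avoid blow-up in frames with negligible energy.
    """
    w = np.hanning(win)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    cents = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win] * w
        p = np.abs(np.fft.rfft(seg)) ** 2          # power spectrum of frame
        cents.append((freqs * p).sum() / max(p.sum(), eps))
    return np.array(cents)

# Linear chirp sweeping 50 Hz -> 200 Hz over 2 s (illustrative test signal)
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (50 * t + (150 / (2 * 2.0)) * t**2))
fc = centroid_frequency(x, fs)
```

    For the chirp, the estimated centroid frequency rises frame by frame, tracking the instantaneous frequency 50 + 75t Hz within the smoothing imposed by the window length.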

  19. Geodetic slip solutions for the Mw = 7.4 Champerico (Guatemala) earthquake of 2012 November 7 and its postseismic deformation

    Science.gov (United States)

    Ellis, Andria P.; DeMets, Charles; Briole, Pierre; Molina, Enrique; Flores, Omar; Rivera, Jeffrey; Lasserre, Cécile; Lyon-Caen, Hélène; Lord, Neal

    2015-05-01

    As the first large subduction thrust earthquake off the coast of western Guatemala in the past several decades, the 2012 November 7 Mw = 7.4 earthquake offers the first opportunity to study coseismic and postseismic behaviour along a segment of the Middle America trench where frictional coupling makes a transition from weak coupling off the coast of El Salvador to strong coupling in southern Mexico. We use measurements at 19 continuous GPS sites in Guatemala, El Salvador and Mexico to estimate the coseismic slip and postseismic deformation of the November 2012 Champerico (Guatemala) earthquake. An inversion of the coseismic offsets, which range up to ∼47 mm at the surface near the epicentre, indicates that up to ∼2 m of coseismic slip occurred on a ∼30 × 30 km rupture area between ∼10 and 30 km depth, which is near the global CMT centroid. The geodetic moment of 13 × 10^19 N m and corresponding magnitude of 7.4 both agree well with independent seismological estimates. Transient postseismic deformation that was recorded at 11 GPS sites is attributable to a combination of fault afterslip and viscoelastic flow in the lower crust and/or mantle. Modelling of the viscoelastic deformation suggests that it constituted no more than ∼30 per cent of the short-term postseismic deformation. GPS observations that extend six months after the earthquake are well fit by a model in which most afterslip occurred at the same depth or directly downdip from the rupture zone and released energy equivalent to no more than ∼20 per cent of the coseismic moment. An independent seismological slip solution that features more highly concentrated coseismic slip than our own fits the GPS offsets well if its slip centroid is translated ∼50 km to the west to a position close to our slip centroid. The geodetic and seismologic slip solutions thus suggest bounds of 2-7 m for the peak slip along a region of the interface no larger than 30 × 30 km.

  20. Cf-252 based neutron radiography using real-time image processing system

    International Nuclear Information System (INIS)

    Mochiki, Koh-ichi; Koiso, Manabu; Yamaji, Akihiro; Iwata, Hideki; Kihara, Yoshitaka; Sano, Shigeru; Murata, Yutaka

    2001-01-01

    For compact Cf-252 based neutron radiography, a real-time image processing system using a particle-counting technique has been developed. The electronic imaging system consists of a supersensitive imaging camera, a real-time corrector, a real-time binary converter, a real-time centroid calculator, a display monitor and a computer. Three types of accumulated NR images (ordinary, binary and centroid) can be observed during a measurement. Accumulated NR images were taken in the centroid, binary and ordinary modes using a Cf-252 neutron source, and those images were compared. The centroid mode presented the sharpest image, and its statistical characteristics followed the Poisson distribution, while the ordinary mode showed the smoothest image, as the averaging effect of particle bright spots with distributed brightness was most dominant. (author)

  1. On the effects of rotation on interstellar molecular line profiles

    International Nuclear Information System (INIS)

    Adelson, L.M.; Chunming Leung

    1988-01-01

    Theoretical models are constructed to study the effects of systematic gas rotation on the emergent profiles of interstellar molecular lines, in particular the effects of optical depth and different velocity laws. Both rotational and radial motions (expansion or contraction) may produce similar asymmetric profiles, but the behaviour of the velocity centroid of the emergent profile over the whole cloud (iso-centroid maps) can be used to distinguish between these motions. Iso-centroid maps can also be used to determine the location and orientation of the rotation axis and of the equatorial axis. For clouds undergoing both radial and rotational motion, the component of the centroid due to the rotational motion can be separated from that due to the radial motion. Information on the form of the rotational velocity law can also be derived. (author)

  2. Projected changes of rainfall seasonality and dry spells in a high greenhouse gas emissions scenario

    OpenAIRE

    Pascale, Salvatore; Lucarini, Valerio; Feng, Xue; Porporato, Amilcare; ul Hasson, Shabeh

    2016-01-01

    In this diagnostic study we analyze changes of rainfall seasonality and dry spells by the end of the twenty-first century under the most extreme IPCC5 emission scenario (RCP8.5) as projected by twenty-four coupled climate models contributing to Coupled Model Intercomparison Project 5 (CMIP5). We use estimates of the centroid of the monthly rainfall distribution as an index of the rainfall timing and a threshold-independent, information theory-based quantity such as relative entropy (RE) to qu...

  3. Hose-Modulation Instability of Laser Pulses in Plasmas

    International Nuclear Information System (INIS)

    Sprangle, P.; Krall, J.; Esarey, E.

    1994-01-01

    A laser pulse propagating in a uniform plasma or a preformed plasma density channel is found to undergo a combination of hose and modulation instabilities, provided the pulse centroid has an initial tilt. Coupled equations for the laser centroid and envelope are derived and solved for a finite-length laser pulse. Significant coupling between the centroid and the envelope, harmonic generation in the envelope, and strong modification of the wake field can occur. Methods to reduce the growth rate of the laser hose instability are demonstrated

  4. The investigation of Martian dune fields using very high resolution photogrammetric measurements and time series analysis

    Science.gov (United States)

    Kim, J.; Park, M.; Baik, H. S.; Choi, Y.

    2016-12-01

    At the present time, arguments continue regarding the migration speeds of Martian dune fields and their correlation with atmospheric circulation. However, precise measurement of the spatial translation of Martian dunes has been conducted only a very few times. Therefore, we developed a generic procedure to precisely measure the migration of dune fields with the recently introduced 25-cm-resolution High Resolution Imaging Science Experiment (HiRISE), employing a high-accuracy photogrammetric processor and a sub-pixel image correlator. The processor was designed to trace estimated dune migration, albeit slight, over the Martian surface by 1) the introduction of very high resolution ortho images and stereo analysis based on hierarchical geodetic control for better initial point settings; 2) positioning error removal throughout the sensor model refinement with a non-rigorous bundle block adjustment, which makes possible the co-alignment of all images in a time series; and 3) improved sub-pixel co-registration algorithms using optical flow with a refinement stage conducted on a pyramidal grid processor and a blunder classifier. Moreover, volumetric changes of Martian dunes were additionally traced by means of stereo analysis and photoclinometry. The established algorithms have been tested on high-resolution HiRISE images over a large number of Martian dune fields covering the whole Mars Global Dune Database. Migration over well-known crater dune fields appeared to be almost static over considerable temporal periods and was weakly correlated with wind directions estimated by the Mars Climate Database (Millour et al. 2015). Only over a few Martian dune fields, such as Kaiser crater, have meaningful migration speeds (>1 m/year) relative to the photogrammetric error residual been measured. Currently, an improved processor that compensates for the error residual using time-series observations is under development and is expected to produce long-term migration speeds over Martian dune fields.

  5. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  6. Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation

    Science.gov (United States)

    Nyabeze, W. R.

    A rainfall-runoff model that can be interfaced with a Geographical Information System (GIS) to integrate the definition and measurement of spatial features and the calculation of their parameter values presents considerable advantages. The modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler) is presented in this paper. Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort required to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed, and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical in drought conditions.

  7. Cluster analysis of received constellations for optical performance monitoring

    NARCIS (Netherlands)

    van Weerdenburg, J.J.A.; van Uden, R.; Sillekens, E.; de Waardt, H.; Koonen, A.M.J.; Okonkwo, C.

    2016-01-01

    Performance monitoring based on centroid clustering to investigate constellation generation offsets. The tool allows flexibility in constellation generation tolerances by forwarding centroids to the demapper. The relation of fibre nonlinearities and singular value decomposition of intra-cluster

  8. A method of detection to the grinding wheel layer thickness based on computer vision

    Science.gov (United States)

    Ji, Yuchen; Fu, Luhua; Yang, Dujuan; Wang, Lei; Liu, Changjie; Wang, Zhong

    2018-01-01

    This paper proposes a method for detecting the grinding wheel layer thickness based on computer vision. A camera is used to capture images of the grinding wheel layer around the whole circle. Forward lighting and back lighting are used to enable a clear image to be acquired. Image processing is then executed on the captured images, consisting of image preprocessing, binarization and subpixel subdivision. The aim of binarization is to help locate a chord and the corresponding ring width. After subpixel subdivision, the thickness of the grinding layer can finally be calculated. Compared with the methods usually used to detect grinding wheel wear, the method in this paper obtains the thickness information directly and quickly. The eccentricity error and the error of pixel equivalent are also discussed in this paper.

  9. APPLICATION OF A LATTICE GAS MODEL FOR SUBPIXEL PROCESSING OF LOW-RESOLUTION IMAGES OF BINARY STRUCTURES

    Directory of Open Access Journals (Sweden)

    Zbisław Tabor

    2011-05-01

    Full Text Available In this study an algorithm based on a lattice gas model is proposed as a tool for enhancing the quality of low-resolution images of binary structures. The analyzed low-resolution gray-level images are replaced with binary images in which the pixel size is decreased. The intensity in the pixels of these new images is determined by the corresponding gray-level intensities in the original low-resolution images. The white-phase pixels in the binary images are then assumed to be particles interacting with one another, interacting with a properly defined external field, and allowed to diffuse. The evolution is driven towards a state with maximal energy by the Metropolis algorithm. This state is used to estimate the imaged object. The performance of the proposed algorithm is compared with that of local and global thresholding methods.

  10. Soft X-ray radiation damage in EM-CCDs used for Resonant Inelastic X-ray Scattering

    Science.gov (United States)

    Gopinath, D.; Soman, M.; Holland, A.; Keelan, J.; Hall, D.; Holland, K.; Colebrook, D.

    2018-02-01

    Advancement in synchrotron and free electron laser facilities means that X-ray beams with higher intensity than ever before are being created. The high brilliance of the X-ray beam, as well as the ability to use a range of X-ray energies, means that they can be used in a wide range of applications. One such application is Resonant Inelastic X-ray Scattering (RIXS). RIXS uses the intense and tuneable X-ray beams in order to investigate the electronic structure of materials. The photons are focused onto a sample material and the scattered X-ray beam is diffracted off a high resolution grating to disperse the X-ray energies onto a position sensitive detector. Whilst several factors affect the total system energy resolution, the performance of RIXS experiments can be limited by the spatial resolution of the detector used. Electron-Multiplying CCDs (EM-CCDs) at high gain in combination with centroiding of the photon charge cloud across several detector pixels can lead to sub-pixel spatial resolution of 2-3 μm. X-ray radiation can cause damage to CCDs through ionisation damage resulting in increases in dark current and/or a shift in flat band voltage. Understanding the effect of radiation damage on EM-CCDs is important in order to predict lifetime as well as the change in performance over time. Two CCD-97s were taken to PTB at BESSY II and irradiated with large doses of soft X-rays in order to probe the front and back surfaces of the device. The dark current was shown to decay over time with two different exponential components to it. This paper will discuss the use of EM-CCDs for readout of RIXS spectrometers, and limitations on spatial resolution, together with any limitations on instrument use which may arise from X-ray-induced radiation damage.
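    The photon-counting centroid step mentioned in this record — refining each event's position by centroiding its charge cloud over neighbouring pixels — can be sketched as follows. The 3×3 window, flat threshold, and synthetic Gaussian charge cloud are assumptions for illustration, not the instrument's pipeline:

```python
import numpy as np

def centroid_events(frame, threshold):
    """Locate single-photon events and centroid their charge clouds.

    Each local maximum above `threshold` is refined to sub-pixel
    precision with an intensity-weighted centroid over a 3x3 window.
    """
    events = []
    h, w = frame.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = frame[y - 1:y + 2, x - 1:x + 2]
            if frame[y, x] >= threshold and frame[y, x] == win.max():
                oy, ox = np.mgrid[-1:2, -1:2]     # offsets within the window
                s = win.sum()
                events.append((x + (ox * win).sum() / s,
                               y + (oy * win).sum() / s))
    return events

# One synthetic charge cloud centred slightly off pixel (10, 10)
frame = np.zeros((20, 20))
ys, xs = np.mgrid[0:20, 0:20]
frame += 100 * np.exp(-((xs - 10.3)**2 + (ys - 9.8)**2) / (2 * 0.7**2))
events = centroid_events(frame, threshold=50)
```

    Because the charge cloud spans several pixels, the weighted centroid recovers the photon position to a fraction of a pixel, which is how EM-CCD readout reaches the 2-3 μm spatial resolution quoted in the record.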

  11. Statistical properties of antisymmetrized molecular dynamics for non-nucleon-emission and nucleon-emission processes

    International Nuclear Information System (INIS)

    Ono, A.; Horiuchi, H.

    1996-01-01

Statistical properties of antisymmetrized molecular dynamics (AMD) are classical in the case of nucleon-emission processes, while they are quantum mechanical for the processes without nucleon emission. In order to understand this situation, we first clarify that two mutually opposite statistics coexist in the AMD framework: one is the classical statistics of the motion of wave packet centroids and the other is the quantum statistics of the motion of wave packets, which is described by the AMD wave function. We prove the classical statistics of wave packet centroids by using the framework of the microcanonical ensemble of the nuclear system with a realistic effective two-nucleon interaction. We show that the relation between the classical statistics of wave packet centroids and the quantum statistics of wave packets can be obtained by taking into account the effects of the wave packet spread. This relation clarifies how the quantum statistics of wave packets emerges from the classical statistics of wave packet centroids. It is emphasized that the temperature of the classical statistics of wave packet centroids is different from the temperature of the quantum statistics of wave packets. We then explain that the statistical properties of AMD for nucleon-emission processes are classical because nucleon-emission processes in AMD are described by the motion of wave packet centroids. We further show that when we improve the description of the nucleon-emission process so as to take into account the momentum fluctuation due to the wave packet spread, the AMD statistical properties for nucleon-emission processes change drastically into quantum statistics. Our study of nucleon-emission processes can be conversely regarded as giving another kind of proof of the fact that the statistics of wave packets is quantum mechanical while that of wave packet centroids is classical. copyright 1996 The American Physical Society

  12. Accurate kinematic measurement at interfaces between dissimilar materials using conforming finite-element-based digital image correlation

    KAUST Repository

    Tao, Ran; Moussawi, Ali; Lubineau, Gilles; Pan, Bing

    2016-01-01

    Digital image correlation (DIC) is now an extensively applied full-field measurement technique with subpixel accuracy. A systematic drawback of this technique, however, is the smoothening of the kinematic field (e.g., displacement and strains

  13. Kolmogorov and Zabih’s Graph Cuts Stereo Matching Algorithm

    Directory of Open Access Journals (Sweden)

    Vladimir Kolmogorov

    2014-10-01

    Full Text Available Binocular stereovision estimates the three-dimensional shape of a scene from two photographs taken from different points of view. In rectified epipolar geometry, this is equivalent to a matching problem. This article describes a method proposed by Kolmogorov and Zabih in 2001, which puts forward an energy-based formulation. The aim is to minimize a four-term energy. This energy is not convex and cannot be minimized except among a class of perturbations called expansion moves, in which case an exact minimization can be done with graph cuts techniques. One noteworthy feature of this method is that it handles occlusion: the algorithm detects points that cannot be matched with any point in the other image. In this method displacements are pixel accurate (no subpixel refinement).

  14. Identification of hydrometeor mixtures in polarimetric radar measurements and their linear de-mixing

    Science.gov (United States)

    Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis

    2017-04-01

    The issue of hydrometeor mixtures affects radar sampling volumes without a clear dominant hydrometeor type. Containing a number of different hydrometeor types which significantly contribute to the polarimetric variables, these volumes are likely to occur in the vicinity of the melting layer and mainly at large distances from a given radar. Motivated by potential benefits for both quantitative and qualitative applications of dual-pol radar, we propose a method for the identification of hydrometeor mixtures and their subsequent linear de-mixing. This method is intrinsically related to our recently proposed semi-supervised approach for hydrometeor classification. The mentioned classification approach [1] performs labeling of radar sampling volumes by using as a criterion the Euclidean distance with respect to five-dimensional centroids, depicting nine hydrometeor classes. The positions of the centroids in the space formed by four radar moments and one external parameter (phase indicator) are derived through a technique of k-medoids clustering, applied on a selected representative set of radar observations, and coupled with statistical testing which introduces the assumed microphysical properties of the different hydrometeor types. Aside from a hydrometeor type label, each radar sampling volume is characterized by an entropy estimate, indicating the uncertainty of the classification. Here, we revisit the concept of entropy presented in [1], in order to emphasize its presumed potential for the identification of hydrometeor mixtures. The calculation of entropy is based on the estimate of the probability (p_i) that the observation corresponds to the hydrometeor type i (i = 1, ..., 9). The probability is derived from the Euclidean distance (d_i) of the observation to the centroid characterizing the hydrometeor type i. The parametrization of the d → p transform is conducted in a controlled environment, using synthetic polarimetric radar datasets. It ensures balanced
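The entropy-from-distances idea in this record can be sketched in a few lines. The paper parametrizes its own d → p transform against synthetic data; the softmax of negative distances used below (with a hypothetical scale `beta`) is only a stand-in for that transform:

```python
import math

def distances_to_probabilities(distances, beta=1.0):
    # Stand-in d -> p transform: softmax of negative centroid distances.
    # beta is a hypothetical scale parameter; the paper calibrates its
    # own transform against synthetic polarimetric radar datasets.
    weights = [math.exp(-beta * d) for d in distances]
    total = sum(weights)
    return [w / total for w in weights]

def normalized_entropy(probs):
    # Shannon entropy scaled to [0, 1]: 0 = one clearly dominant
    # hydrometeor type, 1 = maximal mixing over all classes.
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))
```

A volume equidistant from all nine centroids gives entropy 1 (fully ambiguous mixture), while one lying on a single centroid gives entropy near 0.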

  15. Dual affine isoperimetric inequalities

    Directory of Open Access Journals (Sweden)

    Bin Xiong

    2006-01-01

    Full Text Available We establish some inequalities for the dual -centroid bodies which are the dual forms of the results by Lutwak, Yang, and Zhang. Further, we establish a Brunn-Minkowski-type inequality for the polar of dual -centroid bodies.

  16. Method paper--distance and travel time to casualty clinics in Norway based on crowdsourced postcode coordinates: a comparison with other methods.

    Science.gov (United States)

    Raknes, Guttorm; Hunskaar, Steinar

    2014-01-01

    We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.

  17. Method paper--distance and travel time to casualty clinics in Norway based on crowdsourced postcode coordinates: a comparison with other methods.

    Directory of Open Access Journals (Sweden)

    Guttorm Raknes

    Full Text Available We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.

  18. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
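As a minimal illustration of the linear side of such parameter estimation, a first-order rate constant can be recovered by least-squares regression of log-concentration on time. This is a generic sketch on synthetic data, not the chapter's own method or code:

```python
import math

def estimate_rate_constant(times, concentrations):
    """Estimate k for first-order decay C(t) = C0 * exp(-k * t)
    by linear least squares on ln C(t) = ln C0 - k * t."""
    ys = [math.log(c) for c in concentrations]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    # Ordinary least-squares slope of ln C versus t; k is its negative.
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
             / sum((t - t_mean) ** 2 for t in times))
    return -slope
```

With noisy data the same regression still applies; nonlinear approaches fit the exponential model directly instead of linearizing it.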

  19. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

    Variogram estimation plays a vital role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood-based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative to likelihood-based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.

  20. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying

    2018-01-11

    Variogram estimation plays a vital role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood-based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative to likelihood-based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
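The estimating-equations framework itself is beyond a short sketch, but the classical method-of-moments (Matheron) estimator from which the least squares methods start can be written compactly. This is an illustrative helper, not code from the paper:

```python
import math

def empirical_variogram(coords, values, lag_edges):
    """Matheron's method-of-moments estimator:
        gamma(h) = (1 / (2 N(h))) * sum of (z_i - z_j)^2
    over pairs whose separation |s_i - s_j| falls in lag bin h.

    coords    -- list of (x, y) locations
    values    -- list of observed values z at those locations
    lag_edges -- bin boundaries for the separation distances
    Returns one semivariance estimate per lag bin (NaN if bin is empty).
    """
    nbins = len(lag_edges) - 1
    sums = [0.0] * nbins
    counts = [0] * nbins
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            h = math.dist(coords[i], coords[j])
            for b in range(nbins):
                if lag_edges[b] <= h < lag_edges[b + 1]:
                    sums[b] += (values[i] - values[j]) ** 2
                    counts[b] += 1
                    break
    return [s / (2 * c) if c else float("nan") for s, c in zip(sums, counts)]
```

A parametric variogram model is then fitted to these binned estimates, which is where weighted least squares (and the paper's estimating equations) enter.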

  1. Kinetic equilibrium of space charge dominated beams in a misaligned quadrupole focusing channel

    International Nuclear Information System (INIS)

    Goswami, A.; Sing Babu, P.; Pandit, V. S.

    2013-01-01

    The dynamics of intense beam propagation through the misaligned quadrupole focusing channel has been studied in a self-consistent manner using nonlinear Vlasov-Maxwell equations. The equations of motion of the beam centroid have been developed and found to be independent of any specific beam distribution. A Vlasov equilibrium distribution and beam envelope equations have been obtained, which provide us a theoretical tool to investigate the dynamics of intense beam propagating in a misaligned quadrupole focusing channel. It is shown that the displaced quadrupoles only cause the centroid of the beam to wander off axis. The beam envelope around the centroid obeys the familiar Kapchinskij-Vladimirskij envelope equation that is independent of the centroid motion. However, the rotation of the quadrupole about its optical axis affects the beam envelope and causes an increase in the projected emittances in the two transverse planes due to the inter-plane coupling
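The "familiar Kapchinskij-Vladimirskij envelope equation" referred to above has, in the standard notation (a, b the envelope semi-axes about the centroid, κ_x the quadrupole focusing function, K the beam perveance, ε_x the transverse emittance), the form:

```latex
a''(s) + \kappa_x(s)\,a(s) - \frac{2K}{a(s) + b(s)} - \frac{\varepsilon_x^{2}}{a^{3}(s)} = 0
```

with the analogous equation for b(s) in the other transverse plane; as the abstract notes, this envelope equation about the centroid is independent of the centroid motion itself.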

  2. Image thresholding in the high resolution target movement monitor

    Science.gov (United States)

    Moss, Randy H.; Watkins, Steve E.; Jones, Tristan H.; Apel, Derek B.; Bairineni, Deepti

    2009-03-01

    Image thresholding in the High Resolution Target Movement Monitor (HRTMM) is examined. The HRTMM was developed at the Missouri University of Science and Technology to detect and measure wall movements in underground mines to help reduce fatality and injury rates. The system detects the movement of a target with sub-millimeter accuracy based on the images of one or more laser dots projected on the target and viewed by a high-resolution camera. The relative position of the centroid of the laser dot (determined by software using thresholding concepts) in the images is the key factor in detecting the target movement. Prior versions of the HRTMM set the image threshold based on a manual, visual examination of the images. This work systematically examines the effect of varying threshold on the calculated centroid position and describes an algorithm for determining a threshold setting. First, the thresholding effects on the centroid position are determined for a stationary target. Plots of the centroid positions as a function of varying thresholds are obtained to identify clusters of thresholds for which the centroid position does not change for stationary targets. Second, the target is moved away from the camera in sub-millimeter increments and several images are obtained at each position and analyzed as a function of centroid position, target movement and varying threshold values. With this approach, the HRTMM can accommodate images in batch mode without the need for manual intervention. The capability for the HRTMM to provide automated, continuous monitoring of wall movement is enhanced.
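The threshold-cluster search described above (find ranges of threshold for which the computed centroid of a stationary target does not change) can be sketched as follows. This is an illustrative reconstruction, not the HRTMM software; function names and the binary-centroid choice are assumptions:

```python
def centroid_at_threshold(image, threshold):
    """Binary centroid: mean position of pixels at or above threshold."""
    pts = [(i, j) for i, row in enumerate(image)
           for j, v in enumerate(row) if v >= threshold]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def stable_threshold_clusters(image, thresholds):
    """Group consecutive thresholds that yield the same centroid.

    Long clusters mark threshold settings for which the laser-dot
    centroid is insensitive to the exact threshold value, which is the
    selection criterion described in the abstract.
    """
    clusters = []
    for t in thresholds:
        c = centroid_at_threshold(image, t)
        if clusters and clusters[-1][1] == c:
            clusters[-1][0].append(t)
        else:
            clusters.append(([t], c))
    return clusters
```

Picking a threshold from the longest cluster then removes the need for manual, per-image threshold selection.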

  3. Vibrational analysis of Fourier transform spectrum of the B u )–X g ...

    Indian Academy of Sciences (India)

    improved by putting the wave numbers of the band origins in a Deslandres table. The vibrational analysis was supported by determining the Franck–Condon factor and r-centroid values. Keywords. Fourier transform spectroscopy; electronic spectrum of selenium dimer; vibrational analysis; Franck–Condon factor; r-centroid values.

  4. Estimating Global Cropland Extent with Multi-year MODIS Data

    Directory of Open Access Journals (Sweden)

    Christopher O. Justice

    2010-07-01

    Full Text Available This study examines the suitability of 250 m MODIS (MODerate Resolution Imaging Spectroradiometer) data for mapping global cropland extent. A set of 39 multi-year MODIS metrics incorporating four MODIS land bands, NDVI (Normalized Difference Vegetation Index) and thermal data was employed to depict cropland phenology over the study period. Sub-pixel training datasets were used to generate a set of global classification tree models using a bagging methodology, resulting in a global per-pixel cropland probability layer. This product was subsequently thresholded to create a discrete cropland/non-cropland indicator map using data from the USDA-FAS (Foreign Agricultural Service) Production, Supply and Distribution (PSD) database describing per-country acreage of production field crops. Five global land cover products, four of which attempted to map croplands in the context of multiclass land cover classifications, were subsequently used to perform regional evaluations of the global MODIS cropland extent map. The global probability layer was further examined with reference to four principal global food crops: corn, soybeans, wheat and rice. Overall results indicate that the MODIS layer best depicts regions of intensive broadleaf crop production (corn and soybean), both in correspondence with existing maps and in associated high probability matching thresholds. Probability thresholds for wheat-growing regions were lower, while areas of rice production had the lowest associated confidence. Regions absent of agricultural intensification, such as Africa, are poorly characterized regardless of crop type. The results reflect the value of MODIS as a generic global cropland indicator for intensive agriculture production regions, but with little sensitivity in areas of low agricultural intensification.
Variability in mapping accuracies between areas dominated by different crop types also points to the desirability of a crop-specific approach rather than attempting

  5. Three-dimensional glue detection and evaluation based on linear structured light

    Science.gov (United States)

    Xiao, Zhitao; Yang, Ruipeng; Geng, Lei; Liu, Yanbei

    2018-01-01

    In online glue detection for body in white (BIW), traditional machine-vision glue detection aims only at the localization and segmentation of the glue, which is unsatisfactory for estimating the uniformity of glue with complex shapes. A three-dimensional glue detection method based on linear structured light and the movement parameters of a robot is proposed. First, the linear structured light and an epipolar-constraint algorithm are used for matching in binocular vision. Then, the hand-eye relationship between the robot and the binocular camera is used to unify the coordinate systems. Finally, a structured-light stripe extraction method is proposed to extract the sub-pixel coordinates of the light stripe center. Experimental results demonstrate that the proposed method can estimate the shape of the glue accurately. For three kinds of glue with complex shapes and uneven illumination, the method can detect the positions of blemishes. The absolute error of measurement is less than 1.04 mm and the relative error is less than 10%, which is suitable for online glue detection in BIW.

  6. Electron-Cloud Simulation and Theory for High-Current Heavy-Ion Beams

    International Nuclear Information System (INIS)

    Cohen, R; Friedman, A; Lund, S; Molvik, A; Lee, E; Azevedo, T; Vay, J; Stoltz, P; Veitzer, S

    2004-01-01

    Stray electrons can arise in positive-ion accelerators for heavy ion fusion or other applications as a result of ionization of ambient gas or gas released from walls due to halo-ion impact, or as a result of secondary-electron emission. We summarize the distinguishing features of electron cloud issues in heavy-ion-fusion accelerators and a plan for developing a self-consistent simulation capability for heavy-ion beams and electron clouds. We also present results from several ingredients in this capability: (1) We calculate the electron cloud produced by electron desorption from computed beam-ion loss, which illustrates the importance of retaining ion reflection at the walls. (2) We simulate the effect of specified electron cloud distributions on ion beam dynamics. We consider here electron distributions with axially varying density, centroid location, or radial shape, and examine both random and sinusoidally varying perturbations. We find that amplitude variations are most effective in spoiling ion beam quality, though for sinusoidal variations which match the natural ion beam centroid oscillation or breathing mode frequencies, the centroid and shape perturbations can also have significant impact. We identify an instability associated with a resonance between the beam-envelope "breathing" mode and the electron perturbation. We estimate its growth rate, which is moderate (compared to the reciprocal of a typical pulse duration). One conclusion from this study is that heavy-ion beams are surprisingly robust to electron clouds, compared to a priori expectations. (3) We report first results from a long-timestep algorithm for electron dynamics, which holds promise for efficient simultaneous solution of electron and ion dynamics

  7. GALAXIES IN X-RAY GROUPS. II. A WEAK LENSING STUDY OF HALO CENTERING

    Energy Technology Data Exchange (ETDEWEB)

    George, Matthew R.; Ma, Chung-Pei [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Leauthaud, Alexie; Bundy, Kevin [Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), Todai Institutes for Advanced Study, University of Tokyo, Kashiwa 277-8583 (Japan); Finoguenov, Alexis [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Rykoff, Eli S. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Tinker, Jeremy L. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Wechsler, Risa H. [Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Massey, Richard [Department of Physics, University of Durham, South Road, Durham DH1 3LE (United Kingdom); Mei, Simona, E-mail: mgeorge@astro.berkeley.edu [Bureau des Galaxies, Etoiles, Physique, Instrumentation (GEPI), University of Paris Denis Diderot, F-75205 Paris Cedex 13 (France)

    2012-09-20

    Locating the centers of dark matter halos is critical for understanding the mass profiles of halos, as well as the formation and evolution of the massive galaxies that they host. The task is observationally challenging because we cannot observe halos directly, and tracers such as bright galaxies or X-ray emission from hot plasma are imperfect. In this paper, we quantify the consequences of miscentering on the weak lensing signal from a sample of 129 X-ray-selected galaxy groups in the COSMOS field with redshifts 0 < z < 1 and halo masses in the range 10^13-10^14 M_Sun. By measuring the stacked lensing signal around eight different candidate centers (such as the brightest member galaxy, the mean position of all member galaxies, or the X-ray centroid), we determine which candidates best trace the center of mass in halos. In this sample of groups, we find that massive galaxies near the X-ray centroids trace the center of mass to ≲75 kpc, while the X-ray position and centroids based on the mean position of member galaxies have larger offsets primarily due to the statistical uncertainties in their positions (typically ~50-150 kpc). Approximately 30% of groups in our sample have ambiguous centers with multiple bright or massive galaxies, and some of these groups show disturbed mass profiles that are not well fit by standard models, suggesting that they are merging systems. We find that halo mass estimates from stacked weak lensing can be biased low by 5%-30% if inaccurate centers are used and the issue of miscentering is not addressed.

  8. Electron-cloud simulation and theory for high-current heavy-ion beams

    Directory of Open Access Journals (Sweden)

    R. H. Cohen

    2004-12-01

    Full Text Available Stray electrons can arise in positive-ion accelerators for heavy-ion fusion or other applications as a result of ionization of ambient gas or gas released from walls due to halo-ion impact, or as a result of secondary-electron emission. We summarize the distinguishing features of electron-cloud issues in heavy-ion-fusion accelerators and a plan for developing a self-consistent simulation capability for heavy-ion beams and electron clouds (also applicable to other accelerators). We also present results from several ingredients in this capability. (1) We calculate the electron cloud produced by electron desorption from computed beam-ion loss, which illustrates the importance of retaining ion reflection at the walls. (2) We simulate the effect of specified electron-cloud distributions on ion beam dynamics. We consider here electron distributions with axially varying density, centroid location, or radial shape, and examine both random and sinusoidally varying perturbations. We find that amplitude variations are most effective in spoiling ion beam quality, though for sinusoidal variations which match the natural ion beam centroid oscillation or breathing-mode frequencies, the centroid and shape perturbations can also have significant impact. We identify an instability associated with a resonance between the beam-envelope “breathing” mode and the electron perturbation. We estimate its growth rate, which is moderate (compared to the reciprocal of a typical pulse duration). One conclusion from this study is that heavy-ion beams are surprisingly robust to electron clouds, compared to a priori expectations. (3) We report first results from a long-time-step algorithm for electron dynamics, which holds promise for efficient simultaneous solution of electron and ion dynamics.

  9. Core polarization and 3/2 states of some f-p shell nuclei

    International Nuclear Information System (INIS)

    Shelly, S.

    1976-01-01

    The energies, wavefunctions, spectroscopic factors and M1 transition strengths have been calculated for the 3/2⁻ states excited via single proton transfer to the 2p3/2 orbit of the target nuclei 50Ti, 52Cr, 54Fe and 56Fe. The calculations have been done by using the Kuo and Brown interaction in the entire four-shell space as well as the shrunk Kuo and Brown interaction calculated in the (1f7/2-2p3/2) space. The salient feature of the calculation is that whereas the systematics of the single particle strength distribution are well reproduced, the energy splitting between the calculated T< centroid and the centroid of T> states is always much smaller than that observed experimentally. It has been found, however, that the modified KB interaction widens the energy gap between the T< centroid and the centroid of T> states without appreciably affecting the final wave-functions. (author)

  10. 30.2 Digital PWM-driven AMOLED display on flex reducing static power consumption

    NARCIS (Netherlands)

    Genoe, J.; Obata, K.; Ameys, M.; Myny, K.; Ke, T.H.; Nag, M.; Steudel, S.; Schols, S.; Maas, J.; Tripathi, A.; Van Der Steen, J.-L.; Ellis, T.; Gelinck, G.H.; Heremans, P.

    2014-01-01

    The efficiency of small-molecule OLED devices increased substantially in recent years, creating opportunities for power-efficient displays, as only light is generated proportional to the subpixel intensity. However, current active matrix OLED (AMOLED) displays on foil do not validate this

  11. Transition probabilities and dissociation energies of MnH and MnD molecules

    International Nuclear Information System (INIS)

    Nagarajan, K.; Rajamanickam, N.

    1997-01-01

    The Franck-Condon factors (vibrational transition probabilities) and r-centroids have been evaluated by the more reliable numerical integration procedure for the bands of the A-X system of MnH and MnD molecules, using a suitable potential. By fitting the Hulburt-Hirschfelder function to the experimental potential curve using the correlation coefficient, the dissociation energies for the electronic ground states of MnH and MnD molecules have been estimated as D₀⁰ = 251 ± 5 kJ·mol⁻¹ and D₀⁰ = 312 ± 6 kJ·mol⁻¹, respectively. (authors)

  12. Real-Time Forecasting of Echo-Centroid Motion.

    Science.gov (United States)

    1979-01-01

    …is apparent that after five observations are obtained, the forecast error drops considerably. The normal lifetime of an echo (25 to 30 min) is… [Fig. 11 caption: Track of 5 April 1978 mesocyclone (M) and two TVS's (1) and (2). Times are CST. Pumpkin Center tornado is hatched and Marlow tornado is…]

  13. Circumcenter, Circumcircle and Centroid of a Triangle

    OpenAIRE

    Coghetto Roland

    2016-01-01

    We introduce, using the Mizar system [1], some basic concepts of Euclidean geometry: the half length and the midpoint of a segment, the perpendicular bisector of a segment, the medians (the cevians that join the vertices of a triangle to the midpoints of the opposite sides) of a triangle.

  14. Circumcenter, Circumcircle and Centroid of a Triangle

    Directory of Open Access Journals (Sweden)

    Coghetto Roland

    2016-03-01

    Full Text Available We introduce, using the Mizar system [1], some basic concepts of Euclidean geometry: the half length and the midpoint of a segment, the perpendicular bisector of a segment, the medians (the cevians that join the vertices of a triangle to the midpoints of the opposite sides) of a triangle.

  15. Center for Research on Infrared Detectors (CENTROID)

    Science.gov (United States)

    2006-09-30

    …of growth, x is also monitored in situ by SE, and T is measured by a thermocouple, a pyrometer and indirectly by the heating power and is calibrated… Polar optical, acoustic, and inter-valley phonon scattering are included, as well as scattering from the quantum dots. The simulation includes

  16. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we get the closed-form solutions of the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
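The core of such mixture-based registration is computing, for each data point, its posterior responsibility over the mixture centroids. A minimal one-dimensional sketch follows; the equal-weight, fixed-scale model and all names here are simplifying assumptions, not the paper's multivariate formulation:

```python
import math

def student_t_pdf(x, mu, sigma, nu):
    # Univariate Student's-t density with location mu, scale sigma,
    # and nu degrees of freedom; heavier tails than a Gaussian, which
    # is what makes the mixture robust to outliers.
    coef = math.gamma((nu + 1) / 2) / (
        math.gamma(nu / 2) * math.sqrt(nu * math.pi) * sigma)
    return coef * (1 + ((x - mu) / sigma) ** 2 / nu) ** (-(nu + 1) / 2)

def responsibilities(x, centroids, sigma=1.0, nu=3.0):
    # E-step-style posterior over mixture centroids for one data point,
    # assuming equal component weights (illustrative only).
    dens = [student_t_pdf(x, m, sigma, nu) for m in centroids]
    total = sum(dens)
    return [d / total for d in dens]
```

In the full algorithm these responsibilities weight the update of the transformation that moves the centroid set onto the data.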

  17. Ratio of basin lag times for runoff and sediment yield processes recorded in various environments

    Directory of Open Access Journals (Sweden)

    K. Banasik

    2015-03-01

    Full Text Available River basin lag time (LAG), defined as the elapsed time between the occurrence of the centroids of the effective rainfall intensity hyetograph and the storm runoff hydrograph, is an important factor in determining the time to peak and the peak value of the instantaneous unit hydrograph (IUH). In the procedure of predicting a sedimentgraph (suspended sediment load as a function of time), the equivalent parameter is the lag time for the sediment yield (LAGs), defined as the elapsed time between the occurrence of the centroid of sediment production during a storm event and that of the observed sedimentgraph at the gauging station. Data from over 150 events recorded in 11 small river catchments (located in Poland, Germany, the UK and the USA) with drainage areas of 0.02 km2 to 82 km2 have been analysed to estimate the ratio LAGs/LAG. The ratio, in the majority of cases, was smaller than 1, and decreased with increasing river basin slope. Special attention is given to the data collected in a small agricultural catchment located in central Poland, and also to snowmelt periods.
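The LAG definition above is a difference of time centroids and is straightforward to compute from sampled hyetograph and hydrograph ordinates. A minimal sketch (function names and units are illustrative; units follow the inputs):

```python
def time_centroid(times, values):
    """Time coordinate of the centroid of a sampled time series,
    e.g. an effective rainfall hyetograph or a storm runoff hydrograph."""
    total = sum(values)
    return sum(t * v for t, v in zip(times, values)) / total

def basin_lag(times, rainfall, runoff):
    """LAG: elapsed time between the centroid of effective rainfall
    and the centroid of storm runoff (LAGs is the analogous difference
    for sediment production and the observed sedimentgraph)."""
    return time_centroid(times, runoff) - time_centroid(times, rainfall)
```

The ratio LAGs/LAG studied in the paper is then just `basin_lag` evaluated with the sediment series divided by the same quantity for runoff.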

  18. The use of ecological niche modeling to infer potential risk areas of snakebite in the Mexican state of Veracruz.

    Directory of Open Access Journals (Sweden)

    Carlos Yañez-Arenas

    Full Text Available Many authors have claimed that snakebite risk is associated with human population density, human activities, and snake behavior. Here we analyzed whether environmental suitability of vipers can be used as an indicator of snakebite risk. We tested several hypotheses to explain snakebite incidence, through the construction of models incorporating both environmental suitability and socioeconomic variables in Veracruz, Mexico. Ecological niche modeling (ENM) was used to estimate potential geographic and ecological distributions of nine viper species in Veracruz. We calculated the distance to the species' niche centroid (DNC); this distance may be associated with a prediction of abundance. We found significant inverse relationships between snakebites and DNCs of common vipers (Crotalus simus and Bothrops asper), explaining respectively 15% and almost 35% of variation in snakebite incidence. Additionally, DNCs for these two vipers, in combination with marginalization of human populations, accounted for 76% of variation in incidence. Our results suggest that niche modeling and niche-centroid distance approaches can be used to map distributions of environmental suitability for venomous snakes; combining this ecological information with socioeconomic factors may help with inferring potential risk areas for snakebites, since hospital data are often biased (especially when incidences are low).

  19. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of centroids of species distributions and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  20. The use of ecological niche modeling to infer potential risk areas of snakebite in the Mexican state of Veracruz.

    Science.gov (United States)

    Yañez-Arenas, Carlos; Peterson, A Townsend; Mokondoko, Pierre; Rojas-Soto, Octavio; Martínez-Meyer, Enrique

    2014-01-01

    Many authors have claimed that snakebite risk is associated with human population density, human activities, and snake behavior. Here we analyzed whether environmental suitability of vipers can be used as an indicator of snakebite risk. We tested several hypotheses to explain snakebite incidence, through the construction of models incorporating both environmental suitability and socioeconomic variables in Veracruz, Mexico. Ecological niche modeling (ENM) was used to estimate potential geographic and ecological distributions of nine viper species in Veracruz. We calculated the distance to the species' niche centroid (DNC); this distance may be associated with a prediction of abundance. We found significant inverse relationships between snakebites and DNCs of common vipers (Crotalus simus and Bothrops asper), explaining respectively 15% and almost 35% of variation in snakebite incidence. Additionally, DNCs for these two vipers, in combination with marginalization of human populations, accounted for 76% of variation in incidence. Our results suggest that niche modeling and niche-centroid distance approaches can be used to map distributions of environmental suitability for venomous snakes; combining this ecological information with socioeconomic factors may help with inferring potential risk areas for snakebites, since hospital data are often biased (especially when incidences are low).
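The distance-to-niche-centroid (DNC) idea above is simple to state in code: the niche centroid is the mean of occurrence points in environmental space, and DNC is the Euclidean distance of a site to that mean. A minimal sketch with invented, already-standardized data (the variable names and values are assumptions, not the paper's):

```python
import numpy as np

# Hypothetical occurrence records in two z-standardized environmental
# dimensions (e.g., temperature and precipitation)
occurrences = np.array([[0.2, -0.1],
                        [0.5,  0.3],
                        [-0.1, 0.0],
                        [0.4, -0.2]])

centroid = occurrences.mean(axis=0)            # niche centroid

# Distance to the niche centroid (DNC) for two candidate sites
sites = np.array([[0.25, 0.0], [2.0, 1.5]])
dnc = np.linalg.norm(sites - centroid, axis=1)
# Smaller DNC -> closer to the centroid -> (by the paper's hypothesis)
# higher expected abundance, hence higher potential snakebite risk
```

In the actual study, these distances would then be combined with socioeconomic covariates such as marginalization in a regression against incidence.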

  1. Delimiting areas of endemism through kernel interpolation.

    Directory of Open Access Journals (Sweden)

    Ubirajara Oliveira

    Full Text Available We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of centroids of species distributions and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  2. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    International Nuclear Information System (INIS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-01-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or some combinations of both methods. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles as the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided comparable detectability to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grid, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by 8 times, additionally. -- Highlights: • A rapid interpolation method of system matrices (H) is proposed, named DW-GIMGPE. • Reduce H acquisition time by 15.2× with simplified grid scan and 2× interpolation. • Reconstructions of a hot-rod phantom with measured and DW-GIMGPE H were similar. • The imaging study of normal
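The core of the DW-GIMGPE idea — parameterize measured point response functions (PRFs) as 2D Gaussians and fill in missing voxels by distance-weighted interpolation of the Gaussian coefficients — can be sketched compactly. Everything below (grid sizes, coefficient values, the inverse-distance weighting) is an illustrative assumption, not the paper's calibrated procedure:

```python
import numpy as np

def gauss2d(shape, amp, cx, cy, sx, sy):
    """Render a 2D Gaussian point response function (PRF) on a detector grid."""
    y, x = np.mgrid[:shape[0], :shape[1]]
    return amp * np.exp(-((x - cx)**2 / (2 * sx**2) + (y - cy)**2 / (2 * sy**2)))

# Measured voxels: projected centroid on detector -> fitted Gaussian coefficients
measured = {
    (10.0, 10.0): dict(amp=1.0, cx=10.0, cy=10.0, sx=1.2, sy=1.2),
    (20.0, 10.0): dict(amp=0.8, cx=20.0, cy=10.0, sx=1.6, sy=1.4),
}

def interp_prf(target, measured, eps=1e-6):
    """Inverse-distance weighting of Gaussian coefficients by centroid distance."""
    pts = np.array(list(measured))
    d = np.linalg.norm(pts - np.asarray(target), axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()
    keys = ("amp", "cx", "cy", "sx", "sy")
    return {k: sum(wi * m[k] for wi, m in zip(w, measured.values())) for k in keys}

# PRF of an unmeasured voxel projecting midway between the two measured ones
coef = interp_prf((15.0, 10.0), measured)
prf = gauss2d((24, 32), **coef)
```

Stacking measured and interpolated PRFs for all voxels then yields the columns of the full H matrix.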

  3. Statistical analysis of uncertainties of gamma-peak identification and area calculation in particulate air-filter environment radionuclide measurements using the results of a Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) organized intercomparison, Part I: Assessment of reliability and uncertainties of isotope detection and energy precision using artificial spiked test spectra, Part II: Assessment of the true type I error rate and the quality of peak area estimators in relation to type II errors using large numbers of natural spectra

    International Nuclear Information System (INIS)

    Zhang, W.; Zaehringer, M.; Ungar, K.; Hoffman, I.

    2008-01-01

    In this paper, the uncertainties of gamma-ray small peak analysis have been examined. As the intensity of a gamma-ray peak approaches its detection decision limit, derived parameters such as centroid channel energy, peak area, peak area uncertainty, baseline determination, and peak significance are statistically sensitive. The intercomparison exercise organized by the CTBTO provided an excellent opportunity for this to be studied. Near background levels, the false-positive and false-negative peak identification frequencies in artificial test spectra have been compared to statistically predictable limiting values. In addition, naturally occurring radon progeny were used to compare observed variance against nominal uncertainties. The results indicate that the applied fit algorithms do not always represent the best estimator. Understanding the statistically predicted peak-finding limit is important for data evaluation and analysis assessment. Furthermore, these results are useful to optimize analytical procedures to achieve the best results.

  4. Multi-scale analysis of collective behavior in 2D self-propelled particle models of swarms: An Advection-Diffusion with Memory Approach

    Science.gov (United States)

    Raghib, Michael; Levin, Simon; Kevrekidis, Ioannis

    2010-05-01

    Self-propelled particle models (SPP's) are a class of agent-based simulations that have been successfully used to explore questions related to various flavors of collective motion, including flocking, swarming, and milling. These models typically consist of particle configurations, where each particle moves with constant speed, but changes its orientation in response to local averages of the positions and orientations of its neighbors found within some interaction region. These local averages are based on `social interactions', which include avoidance of collisions, attraction, and polarization, that are designed to generate configurations that move as a single object. Errors made by the individuals in the estimates of the state of the local configuration are modeled as a random rotation of the updated orientation resulting from the social rules. More recently, SPP's have been introduced in the context of collective decision-making, where the main innovation consists of dividing the population into naïve and `informed' individuals. Whereas naïve individuals follow the classical collective motion rules, members of the informed sub-population update their orientations according to a weighted average of the social rules and a fixed `preferred' direction, shared by all the informed individuals. Collective decision-making is then understood in terms of the ability of the informed sub-population to steer the whole group along the preferred direction. Summary statistics of collective decision-making are defined in terms of the stochastic properties of the random walk followed by the centroid of the configuration as the particles move about, in particular the scaling behavior of the mean squared displacement (msd). For the region of parameters where the group remains coherent, we note that there are two characteristic time scales, first there is an anomalous transient shared by both purely naïve and informed configurations, i.e. the scaling exponent lies between 1 and
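The msd-based summary statistic described above amounts to tracking the configuration centroid and fitting the scaling exponent of its mean squared displacement. A minimal sketch on a synthetic centroid trajectory (drift plus noise as a stand-in for a coherent informed group; all parameters are invented):

```python
import numpy as np

def msd(traj, max_lag):
    """Mean squared displacement of a trajectory of shape (T, 2) over lags 1..max_lag."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
T = 2000
# Toy centroid walk: steady drift along a preferred direction plus isotropic noise
steps = 0.05 * np.array([1.0, 0.0]) + 0.1 * rng.standard_normal((T, 2))
centroid_traj = np.cumsum(steps, axis=0)

lags = np.arange(1, 101)
m = msd(centroid_traj, 100)

# Scaling exponent from a log-log fit: 1 is diffusive, 2 is ballistic (directed);
# a drift-plus-noise walk yields an intermediate, anomalous-looking exponent
alpha = np.polyfit(np.log(lags), np.log(m), 1)[0]
```

Comparing such exponents between purely naïve and informed configurations is exactly the kind of diagnostic the abstract refers to.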

  5. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from ... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  6. Detection of Noncircularity and Eccentricity of a Rolling Winder by Artificial Vision

    Directory of Open Access Journals (Sweden)

    Dominique Knittel

    2002-07-01

    Full Text Available A common objective in the web transport industry is to increase the velocity as much as possible. Some disturbances drastically limit this velocity. Time-varying eccentricity of the rolling winder is one of the major disturbances affecting the quality of the rolling winder. This unsuitable factor can lead to a web break in a high-speed winding process. The main contribution of this work is a new measurement technique able to provide, on-line, an estimation of the roll radius and its variations with subpixel accuracy. A key feature of this work is contour curvature classification by means of wavelet decomposition of the edge orientation function. We also propose a new model accounting for the increasing radius of the rolling winder, which confirms the experimental results and the reliability of the proposed approach.

  7. The Extended-Window Channel Estimator for Iterative Channel-and-Symbol Estimation

    Directory of Open Access Journals (Sweden)

    Barry John R

    2005-01-01

    Full Text Available The application of the expectation-maximization (EM) algorithm to channel estimation results in a well-known iterative channel-and-symbol estimator (ICSE). The EM-ICSE iterates between a symbol estimator based on the forward-backward recursion (BCJR) equalizer and a channel estimator, and may provide approximate maximum-likelihood blind or semiblind channel estimates. Nevertheless, the EM-ICSE has high complexity, and it is prone to misconvergence. In this paper, we propose the extended-window (EW) estimator, a novel channel estimator for ICSE that can be used with any soft-output symbol estimator. Therefore, the symbol estimator may be chosen according to performance or complexity specifications. We show that the EW-ICSE, an ICSE that uses the EW estimator and the BCJR equalizer, is less complex and less susceptible to misconvergence than the EM-ICSE. Simulation results reveal that the EW-ICSE may converge faster than the EM-ICSE.

  8. Solution-processed photovoltaics with advanced characterization and analysis

    Science.gov (United States)

    Duan, Hsin-Sheng

    In support of hyperspectral imaging system design and parameter trade-off research, an analytical end-to-end model to simulate the remote sensing system pipeline and to forecast remote sensing system performance has been implemented. It is also being made available to the remote sensing community through a website. Users are able to forecast hyperspectral imaging system performance by defining an observational scenario along with imaging system parameters. For system modeling, the implemented analytical model includes scene, sensor and target characteristics as well as atmospheric features, background spectral reflectance statistics, sensor specifications and target class reflectance statistics. The sensor model has been extended to include the airborne ProspecTIR instrument. To validate the analytical model, experiments were designed and conducted. The predictive system model has been verified by comparing the forecast results to ones obtained using real-world data collected during the RIT SHARE 2012 collection. Results include the use of large calibration panels to show that the predicted radiance is consistent with the collected data. Grass radiance predicted from ground truth reflectance data also compares well with the collected data, and an eigenvector analysis further supports the validity of the predictions. Two examples of a subpixel target detection scenario are presented. One is to detect subpixel yellow-painted wood planks in an asphalt playground, and the other is to detect subpixel green-painted wood planks in grass. To validate system performance, detection performance is analyzed using receiver operating characteristic (ROC) curves in a comprehensive scenario setting. The predicted ROC of the yellow planks matches the ROC derived from collected data well. However, the predicted ROC curve of the green planks differs from the collected-data ROC curve. Additional experiments were conducted and analyzed to discuss the possible reasons of the

  9. Method of transient identification based on a possibilistic approach, optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos Soares de

    2001-02-01

    This work develops a method for transient identification based on a possibilistic approach, optimized by a genetic algorithm that optimizes the number of centroids of the classes representing the transients. The basic idea of the proposed method is to optimize the partition of the search space, generating subsets within the classes of a partition, defined as subclasses, whose centroids are able to distinguish the classes with the maximum number of correct classifications. The interpretation of the subclasses as fuzzy sets and the possibilistic approach provide a heuristic to establish influence zones of the centroids, allowing a 'don't know' answer for unknown transients, that is, transients outside the training set. (author)
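The influence-zone mechanism described above boils down to nearest-centroid classification with a rejection radius: a sample outside every centroid's influence zone gets the 'don't know' answer. A minimal sketch with invented centroids and radii (the real method learns both via the genetic algorithm):

```python
import numpy as np

def classify(x, centroids, labels, radius):
    """Nearest-centroid classification with a possibilistic 'don't know' zone."""
    d = np.linalg.norm(centroids - x, axis=1)
    i = int(np.argmin(d))
    # Reject samples outside the winning centroid's influence zone
    return labels[i] if d[i] <= radius[i] else "don't know"

# Hypothetical subclass centroids in a 2-D feature space, with influence radii
centroids = np.array([[0.0, 0.0], [3.0, 3.0]])
labels = ["transient A", "transient B"]
radius = np.array([1.0, 1.0])

print(classify(np.array([0.2, -0.1]), centroids, labels, radius))  # transient A
print(classify(np.array([10.0, 10.0]), centroids, labels, radius))  # don't know
```

In the paper's setting, the genetic algorithm would choose how many such centroids each class gets and where they sit so that correct classifications are maximized.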

  10. Correction method and software for image distortion and nonuniform response in charge-coupled device-based x-ray detectors utilizing x-ray image intensifier

    International Nuclear Information System (INIS)

    Ito, Kazuki; Kamikubo, Hironari; Yagi, Naoto; Amemiya, Yoshiyuki

    2005-01-01

    An on-site method of correcting the image distortion and nonuniform response of a charge-coupled device (CCD)-based X-ray detector was developed using the response of the imaging plate as a reference. The CCD-based X-ray detector consists of a beryllium-windowed X-ray image intensifier (Be-XRII) and a CCD as the image sensor. An image distortion of 29% was improved to less than 1% after the correction. In the correction of nonuniform response due to image distortion, subpixel approximation was performed for the redistribution of pixel values. The optimal number of subpixels was also discussed. In an experiment with polystyrene (PS) latex, it was verified that the correction of both image distortion and nonuniform response worked properly. The correction for the 'contrast reduction' problem was also demonstrated for an isotropic X-ray scattering pattern from the PS latex. (author)
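The subpixel redistribution step mentioned above — splitting each pixel into subpixels before remapping so that intensity is conserved under the distortion correction — can be sketched as follows. The mapping function, grid sizes, and nearest-neighbor accumulation are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def remap_with_subpixels(img, mapping, n=4):
    """Redistribute pixel values through a geometric mapping using n x n subpixels.

    mapping(x, y) -> (x', y') gives the corrected position of a point.
    Each source pixel is split into n*n subpixels; each carries value/n^2
    to its mapped destination (nearest-neighbor accumulation).
    """
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    offs = (np.arange(n) + 0.5) / n          # subpixel centre offsets in [0, 1)
    for y in range(h):
        for x in range(w):
            v = img[y, x] / n**2
            for dy in offs:
                for dx in offs:
                    xp, yp = mapping(x + dx, y + dy)
                    xi, yi = int(xp), int(yp)
                    if 0 <= xi < w and 0 <= yi < h:
                        out[yi, xi] += v
    return out

# Identity mapping: total intensity must be preserved
img = np.zeros((8, 8))
img[4, 4] = 1.0
out = remap_with_subpixels(img, lambda x, y: (x, y))
```

Increasing n refines the redistribution at the cost of n-squared more work per pixel, which is the trade-off behind the paper's discussion of the optimal number of subpixels.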

  11. Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using...
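The TDOA-to-DOA relation underlying the abstract is the standard far-field geometry for a two-receiver pair (a textbook step, not the paper's minimum-variance filter design):

```python
import numpy as np

c = 343.0   # speed of sound [m/s]
d = 0.05    # microphone spacing [m] (illustrative value)

def doa_from_tdoa(tau):
    """Far-field DOA (degrees) from the time-difference of arrival of one pair."""
    # sin(theta) = c * tau / d; clip guards against |argument| > 1 from noise
    return np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))

# A source at 30 degrees produces tau = d * sin(30 deg) / c
tau = d * np.sin(np.radians(30.0)) / c
print(round(float(doa_from_tdoa(tau)), 1))  # 30.0
```

The paper's contribution sits on top of this relation: instead of using raw TDOAs, it filters multi-channel phase estimates of the harmonics so that the resulting DOA estimate has minimum variance even in colored noise.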

  12. Tracking Subpixel Targets with Critically Sampled Optical Sensors

    Science.gov (United States)

    2012-09-01

    From the list of acronyms and abbreviations: PSF, point spread function; SNR, signal-to-noise ratio; SLAM, simultaneous localization and tracking; EO ...; LIDAR, light detection and ranging; FOV, field of view; RMS, root mean squared; PF, particle filter; TBD, track before detect; MCMC, Monte Carlo Markov chain.

  13. Radar subpixel-scale rainfall variability and uncertainty: lessons learned from observations of a dense rain-gauge network

    Directory of Open Access Journals (Sweden)

    N. Peleg

    2013-06-01

    Full Text Available Runoff and flash flood generation are very sensitive to rainfall's spatial and temporal variability. The increasing use of radar and satellite data in hydrological applications, due to the sparse distribution of rain gauges over most catchments worldwide, requires furthering our knowledge of the uncertainties of these data. In 2011, a new super-dense network of rain gauges containing 14 stations, each with two side-by-side gauges, was installed within a 4 km2 study area near Kibbutz Galed in northern Israel. This network was established for a detailed exploration of the uncertainties and errors regarding rainfall variability within a common pixel size of data obtained from remote sensing systems for timescales of 1 min to daily. In this paper, we present the analysis of the first year's record collected from this network and from the Shacham weather radar, located 63 km from the study area. The gauge–rainfall spatial correlation and uncertainty were examined along with the estimated radar error. The nugget parameter of the inter-gauge rainfall correlations was high (0.92 on the 1 min scale and increased as the timescale increased. The variance reduction factor (VRF, representing the uncertainty from averaging a number of rain stations per pixel, ranged from 1.6% for the 1 min timescale to 0.07% for the daily scale. It was also found that at least three rain stations are needed to adequately represent the rainfall (VRF < 5% on a typical radar pixel scale. The difference between radar and rain gauge rainfall was mainly attributed to radar estimation errors, while the gauge sampling error contributed up to 20% to the total difference. The ratio of radar rainfall to gauge-areal-averaged rainfall, expressed by the error distribution scatter parameter, decreased from 5.27 dB for 3 min timescale to 3.21 dB for the daily scale. The analysis of the radar errors and uncertainties suggest that a temporal scale of at least 10 min should be used for

  14. UTILIZAÇÃO DE ALVOS CODIFICADOS DO TIPO ARUCO NA AUTOMAÇÃO DO PROCESSO DE CALIBRAÇÃO DE CÂMARAS

    Directory of Open Access Journals (Sweden)

    Sérgio Leandro Alves da Silva

    Full Text Available This paper proposes an approach to automating the camera calibration process by locating and measuring points on coded targets with subpixel precision, which helps minimize localization errors independently of camera orientation and image scale. To achieve this goal, the most relevant coded targets in the literature were analyzed, and the ARUCO pattern was chosen for its flexibility, its capacity to represent up to 1,024 distinct targets, and the availability of source code implemented with the OpenCV library, which ensures simplicity of implementation and high reliability. After generating the ARUCO targets, panels were created and used to acquire the images employed in camera calibration. The program developed was used to locate the targets in the images and automatically extract the coordinates of their four corners with subpixel precision. The experiments showed that most of the targets were identified correctly. The results of an experiment calibrating a low-cost camera showed that the process works and that the precision of the corner measurements reaches the subpixel level.

  15. Dynamics of Hierarchical Urban Green Space Patches and Implications for Management Policy.

    Science.gov (United States)

    Yu, Zhoulu; Wang, Yaohui; Deng, Jinsong; Shen, Zhangquan; Wang, Ke; Zhu, Jinxia; Gan, Muye

    2017-06-06

    Accurately quantifying the variation of urban green space is the prerequisite for fully understanding its ecosystem services. However, knowledge about the spatiotemporal dynamics of urban green space is still insufficient due to multiple challenges that remain in mapping green spaces within heterogeneous urban environments. This paper uses the city of Hangzhou to demonstrate an analysis methodology that integrates sub-pixel mapping technology and landscape analysis to fully investigate the spatiotemporal pattern and variation of hierarchical urban green space patches. Firstly, multiple endmember spectral mixture analysis was applied to time series Landsat data to derive green space coverage at the sub-pixel level. Landscape metric analysis was then employed to characterize the variation pattern of urban green space patches. Results indicate that Hangzhou has experienced a significant loss of urban greenness, producing a more fragmented and isolated vegetation landscape. Additionally, a remarkable amelioration of urban greenness occurred in the city core from 2002 to 2013, characterized by the significant increase of small-sized green space patches. The green space network has been formed as a consequence of new urban greening strategies in Hangzhou. These strategies have greatly fragmented the built-up areas and enriched the diversity of the urban landscape. Gradient analysis further revealed a distinct pattern of urban green space landscape variation in the process of urbanization. By integrating both sub-pixel mapping technology and landscape analysis, our approach revealed the subtle variation of urban green space patches which are otherwise easy to overlook. Findings from this study will help us to refine our understanding of the evolution of heterogeneous urban environments.
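The sub-pixel mapping step above rests on linear spectral mixing: a pixel's spectrum is modeled as a fraction-weighted sum of endmember spectra, and the fractions are recovered per pixel. A minimal least-squares sketch with synthetic three-band spectra (the endmember values are invented; the study used multiple endmember spectral mixture analysis on Landsat bands):

```python
import numpy as np

# Hypothetical endmember spectra: columns are vegetation, impervious, soil;
# rows are spectral bands
E = np.array([[0.05, 0.45, 0.30],
              [0.20, 0.22, 0.25],
              [0.15, 0.30, 0.40]], float).T   # shape (bands, endmembers)

true_frac = np.array([0.6, 0.3, 0.1])         # sub-pixel cover fractions
pixel = E @ true_frac                          # observed mixed spectrum

# Unconstrained least squares, then clip and renormalize to sum to one
frac, *_ = np.linalg.lstsq(E, pixel, rcond=None)
frac = np.clip(frac, 0, None)
frac /= frac.sum()
```

Applying this per pixel across a Landsat time series yields the green-space coverage maps on which the landscape metrics are then computed.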

  16. A new estimator for vector velocity estimation [medical ultrasonics

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2001-01-01

    A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation...... be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...

  17. Mapping canopy gaps in an indigenous subtropical coastal forest using high resolution WorldView-2 data

    CSIR Research Space (South Africa)

    Malahlela, O

    2014-01-01

    Full Text Available of subpixel treefall gaps with Landsat imagery in Central Amazon. Remote Sensing of Environment 115, pp. 3322–3328. Nelson, R., Oderwald, R., and Gregoire, T.G., 1997, Separating the ground and airborne laser sampling phases...

  18. GPS survey in long baseline neutrino-oscillation measurement

    CERN Document Server

    Noumi, H; Inagaki, T; Hasegawa, T; Katoh, Y; Kohama, M; Kurodai, M; Kusano, E; Maruyama, T; Minakawa, M; Nakamura, K; Nishikawa, K; Sakuda, M; Suzuki, Y; Takasaki, M; Tanaka, K H; Yamanoi, Y; 10.1109/TNS.2004.836042

    2004-01-01

    We made a series of surveys to obtain neutrino beam line direction toward SuperKamiokande (SK) at a distance of 250 km for the long- baseline neutrino oscillation experiment at KEK. We found that the beam line is directed to SK within 0.03 mr and 0.09 mr (in sigma) in the horizontal and vertical directions, respectively. During beam operation, we monitored the muon distribution from secondary pions produced at the target and collected by a magnetic horn system. We found that the horn system functions like a lens of a point-to- parallel optics with magnification of approximately -100 and the focal length of 2.3 m. Namely, a small displacement of the primary beam position at the target is magnified about a factor -100 at the muon centroid, while the centroid position is almost stable against a change of the incident angle of the primary beam. Therefore, the muon centroid can be a useful monitor of the neutrino beam direction. We could determine the muon centroid within 6 mm and 12 mm in horizontal and vertical ...

  19. Two-dimensional shape recognition using oriented-polar representation

    Science.gov (United States)

    Hu, Neng-Chung; Yu, Kuo-Kan; Hsu, Yung-Li

    1997-10-01

    To deal with problems such as position-, scale-, and rotation-invariant (PSRI) object recognition, we utilize some PSRI properties of images obtained from objects, for example, the centroid of the image. The position of the centroid relative to the boundary of the image is invariant under rotation, scaling, and translation of the image. To obtain the information of the image, we use a technique similar to the Radon transform, called the oriented-polar representation of a 2D image. In this representation, two specific points, the centroid and the weighted mean point, are selected to form an initial ray; the image is then sampled with N angularly equispaced rays departing from the initial ray. Each ray contains a number of intersections and the distance information obtained from the centroid to the intersections. The shape recognition algorithm is based on the least total error of these two items of information. Together with simple noise removal and a typical backpropagation neural network, this algorithm is simple, yet PSRI is achieved with a high recognition rate.
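The sampling geometry described above — centroid, a second reference point fixing the initial ray, then N equispaced rays — can be sketched as follows. The image is invented, and the weighting used for the second reference point here (pixel coordinates as weights) is only a stand-in, since the abstract does not specify the paper's exact weighting:

```python
import numpy as np

# A simple binary "object": a filled square
img = np.zeros((64, 64))
img[20:40, 25:45] = 1.0

ys, xs = np.nonzero(img)
centroid = np.array([xs.mean(), ys.mean()])    # (x, y) image centroid

# Stand-in weighted mean point: coordinate-sum weights break the symmetry
w = (xs + ys).astype(float)
wmean = np.array([np.average(xs, weights=w), np.average(ys, weights=w)])

# Initial ray from centroid toward the weighted mean point,
# then N angularly equispaced rays starting from it
theta0 = np.arctan2(wmean[1] - centroid[1], wmean[0] - centroid[0])
N = 16
angles = theta0 + 2 * np.pi * np.arange(N) / N
```

Sampling the boundary intersections and centroid-to-intersection distances along each of these rays produces the feature vector the recognition algorithm compares.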

  20. 7 CFR 1435.301 - Annual estimates and quarterly re-estimates.

    Science.gov (United States)

    2010-01-01

    ... CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing..., estimates, and re-estimates in this subpart will use available USDA statistics and estimates of production, consumption, and stocks, taking into account, where appropriate, data supplied in reports submitted pursuant...

  1. Development of a novel preclinical pancreatic cancer research model: bioluminescence image-guided focal irradiation and tumor monitoring of orthotopic xenografts.

    Science.gov (United States)

    Tuli, Richard; Surmak, Andrew; Reyes, Juvenal; Hacker-Prietz, Amy; Armour, Michael; Leubner, Ashley; Blackford, Amanda; Tryggestad, Erik; Jaffee, Elizabeth M; Wong, John; Deweese, Theodore L; Herman, Joseph M

    2012-04-01

    We report on a novel preclinical pancreatic cancer research model that uses bioluminescence imaging (BLI)-guided irradiation of orthotopic xenograft tumors, sparing of surrounding normal tissues, and quantitative, noninvasive longitudinal assessment of treatment response. Luciferase-expressing MiaPaCa-2 pancreatic carcinoma cells were orthotopically injected in nude mice. BLI was compared to pathologic tumor volume, and photon emission was assessed over time. BLI was correlated to positron emission tomography (PET)/computed tomography (CT) to estimate tumor dimensions. BLI and cone-beam CT (CBCT) were used to compare tumor centroid location and estimate setup error. BLI and CBCT fusion was performed to guide irradiation of tumors using the small animal radiation research platform (SARRP). DNA damage was assessed by γ-H2Ax staining. BLI was used to longitudinally monitor treatment response. Bioluminescence predicted tumor volume (R = 0.8984) and increased linearly as a function of time up to a 10-fold increase in tumor burden. BLI correlated with PET/CT and necropsy specimen in size (P < .05). Two-dimensional BLI centroid accuracy was 3.5 mm relative to CBCT. BLI-guided irradiated pancreatic tumors stained positively for γ-H2Ax, whereas surrounding normal tissues were spared. Longitudinal assessment of irradiated tumors with BLI revealed significant tumor growth delay of 20 days relative to controls. We have successfully applied the SARRP to a bioluminescent, orthotopic preclinical pancreas cancer model to noninvasively: 1) allow the identification of tumor burden before therapy, 2) facilitate image-guided focal radiation therapy, and 3) allow normalization of tumor burden and longitudinal assessment of treatment response.

  2. Reactivity estimation using digital nonlinear H∞ estimator for VHTRC experiment

    International Nuclear Information System (INIS)

    Suzuki, Katsuo; Nabeshima, Kunihiko; Yamane, Tsuyoshi

    2003-01-01

    On-line and real-time estimation of time-varying reactivity in a nuclear reactor is necessary for early detection of reactivity anomalies and safe operation. Using a digital nonlinear H∞ estimator, an experiment on real-time dynamic reactivity estimation was carried out in the Very High Temperature Reactor Critical Assembly (VHTRC) of the Japan Atomic Energy Research Institute. Some technical issues of the experiment are described, such as reactivity insertion, data sampling frequency, the anti-aliasing filter, the experimental circuit and the digitalized nonlinear H∞ reactivity estimator. We then discuss the experimental results obtained by the digital nonlinear H∞ estimator with sampled data of the nuclear instrumentation signal for the power responses under various reactivity insertions. Good performance of the estimated reactivity was observed, with almost no delay with respect to the true reactivity and sufficient accuracy between 0.05 cent and 0.1 cent. The experiment shows that real-time reactivity estimation for a data sampling period of 10 ms can certainly be realized. From the results of the experiment, it is concluded that the digital nonlinear H∞ reactivity estimator can be applied as an on-line real-time reactivity meter for actual nuclear plants. (author)

  3. True-color 640 ppi OLED arrays patterned by CA i-line photolithography

    NARCIS (Netherlands)

    Malinowski, P.E.; Ke, T.; Nakamura, A.; Chang, T.-Y.; Gokhale, P.; Steudel, S.; Janssen, D.; Kamochi, Y.; Koyama, I.; Iwai,Y.; Heremans, P.

    2015-01-01

    In this paper, side-by-side patterning of red, green and blue OLEDs is demonstrated. To achieve 640 ppi arrays with 20 µm subpixel pitch, a chemically amplified i-line photoresist system with submicron resolution was used. These results show the feasibility of obtaining full-color displays with

  4. Detecting spatio-temporal changes in agricultural land use in Heilongjiang province, China using MODIS time-series data and a random forest regression model

    Science.gov (United States)

    Hu, Q.; Friedl, M. A.; Wu, W.

    2017-12-01

    Accurate and timely information regarding the spatial distribution of crop types and their changes is essential for acreage surveys, yield estimation, water management, and agricultural production decision-making. In recent years, increasing population, dietary shifts and climate change have driven drastic changes in China's agricultural land use. However, no maps are currently available that document the spatial and temporal patterns of these agricultural land use changes. Because of its short revisit period, rich spectral bands and global coverage, MODIS time-series data has been shown to have great potential for detecting the seasonal dynamics of different crop types. However, its inherently coarse spatial resolution limits the accuracy with which crops can be identified from MODIS in regions with small fields or complex agricultural landscapes. To evaluate this more carefully, and specifically to understand the strengths and weaknesses of MODIS data for crop-type mapping, we used MODIS time-series imagery to map the sub-pixel fractional crop area of four major crop types (rice, corn, soybean and wheat) at 500-m spatial resolution for Heilongjiang province, one of the most important grain-production regions in China, where recent agricultural land use change has been rapid and pronounced. To do this, a random forest regression (RF-g) model was constructed to estimate the percentage of each sub-pixel crop type in 2006, 2011 and 2016. Crop type maps generated through expert visual interpretation of high spatial resolution images (i.e., Landsat and SPOT data) were used to calibrate the regression model. Five different time series of vegetation indices (155 features) derived from different spectral channels of MODIS land surface reflectance (MOD09A1) data were used as candidate features for the RF-g model. An out-of-bag strategy and backward elimination approach was applied to select the optimal spectro-temporal feature subset for each crop type. The resulting crop maps
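    The regression step described above can be sketched with scikit-learn's random forest regressor; the feature matrix and the "rice fraction" target below are synthetic placeholders, not the paper's MODIS composites or reference maps.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_pixels, n_features = 500, 23            # e.g. 23 vegetation-index composites
X = rng.uniform(0.0, 1.0, size=(n_pixels, n_features))
# Hypothetical target: sub-pixel rice fraction driven by mid-season greenness
y = X[:, 10:14].mean(axis=1)

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X, y)
frac = rf.predict(X[:5])                  # per-pixel crop fractions in [0, 1]
print(frac)
```

    Because a forest prediction is an average of training targets, the estimated fractions stay within the [0, 1] range of the calibration data without explicit clipping.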

  5. 3D Measurement Technology by Structured Light Using Stripe-Edge-Based Gray Code

    International Nuclear Information System (INIS)

    Wu, H B; Chen, Y; Wu, M Y; Guan, C R; Yu, X Y

    2006-01-01

    The key problem in 3D vision measurement using the triangulation method based on structured light is accurately acquiring the projection angle of the projected light. In order to acquire the projection angle, and thereby determine the correspondence between sampling points and image points, a method for encoding and decoding structured light based on the stripe edges of Gray code is presented. The method encodes with Gray-code stripes and decodes with stripe edges acquired by sub-pixel techniques instead of pixel centres, so the latter's one-bit decoding error is removed. The accuracy of image sampling point location, and of the correspondence between image sampling points and object sampling points, reaches the sub-pixel level. In addition, the measurement error caused by dividing the projection angle irregularly with even-width encoding stripes is analysed and corrected. The encoding and decoding principle and the decoding equations are described. Finally, 3ds Max and Matlab software were used to simulate the measurement system and reconstruct the measured surface. As indicated by experimental results, the measurement error is about 0.05%
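    The numbering scheme behind the stripe patterns above is the standard reflected binary (Gray) code, in which successive codewords differ in exactly one bit; a minimal encode/decode sketch (the paper's sub-pixel stripe-edge localization is a separate imaging step not shown here):

```python
def to_gray(n: int) -> int:
    """Binary index -> Gray codeword."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Gray codeword -> binary index (prefix-XOR inverse)."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [to_gray(i) for i in range(8)]
print(codes)  # successive codes differ in exactly one bit
```

    The one-bit-difference property is what limits a stripe-boundary misread to an error of a single code step rather than an arbitrary jump.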

  6. A hierarchical estimator development for estimation of tire-road friction coefficient.

    Directory of Open Access Journals (Sweden)

    Xudong Zhang

    Full Text Available The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.

  7. A hierarchical estimator development for estimation of tire-road friction coefficient.

    Science.gov (United States)

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.
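    A GRNN of the kind used in the lower estimator above is essentially Nadaraya-Watson kernel regression: the output is a Gaussian-kernel weighted average of stored training targets. The slip-to-friction training pairs and the bandwidth below are invented for illustration, not taken from the paper.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.02):
    """GRNN output: Gaussian-kernel weighted average of training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.dot(w, y_train) / np.sum(w))

# Hypothetical training pairs: normalized longitudinal slip -> friction mu
X = np.array([[0.02], [0.05], [0.08], [0.12], [0.20]])
mu = np.array([0.15, 0.35, 0.55, 0.75, 0.85])

pred = grnn_predict(X, mu, np.array([0.05]))
print(pred)  # close to 0.35, dominated by the nearest training sample
```

    Because the prediction is a convex combination of stored targets, no explicit tire model needs to be kept, which matches the abstract's point about avoiding storage of a full tire model.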

  8. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    This dissertation is concerned with the introduction of vision-based application s in the ship building industry. The industrial research project is divided into a natural seq uence of developments, from basic theoretical projective image generation via CAD and subpixel analysis to a description...... is present ed, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous...... possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust sub pixel estimation. A new method based...

  9. PROTON RADIOGRAPHY WITH THE PIXEL DETECTOR TIMEPIX

    Directory of Open Access Journals (Sweden)

    Václav Olšanský

    2016-12-01

    Full Text Available This article presents the processing of radiographic data acquired using the position-sensitive hybrid semiconductor pixel detector Timepix. Measurements were made on thin samples at the medical ion synchrotron HIT [1] in Heidelberg (Germany) with a 221 MeV proton beam. The energy deposited by the particles crossing the sample is registered for generation of image contrast. Experimental data from the detector were processed to derive the energy loss of each proton using calibration matrices. The interaction point of each proton on the detector was determined with subpixel resolution by model fitting of the individual signals in the pixel matrix. Three methods were used for calculating these coordinates: the Hough transformation, 2D Gaussian fitting and estimation of the 2D mean. Calculation accuracy and calculation time are compared for each method. The final image was created by the method with the best parameters.
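    The simplest of the three hit-coordinate methods above, the 2D mean, is just the charge-weighted centroid of a pixel cluster, which naturally yields sub-pixel coordinates; a sketch with an invented per-pixel energy matrix:

```python
import numpy as np

def hit_position(cluster):
    """Charge-weighted centroid (row, col) of a per-pixel energy matrix."""
    total = cluster.sum()
    rows, cols = np.indices(cluster.shape)
    return (float((rows * cluster).sum() / total),
            float((cols * cluster).sum() / total))

# A proton depositing most charge between pixels (1, 1) and (1, 2):
cluster = np.array([[0.0, 1.0, 1.0],
                    [0.0, 4.0, 4.0],
                    [0.0, 1.0, 1.0]])
print(hit_position(cluster))  # (1.0, 1.5): row 1, midway between columns 1 and 2
```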

  10. Using remote sensing imagery to monitoring sea surface pollution cause by abandoned gold-copper mine

    Science.gov (United States)

    Kao, H. M.; Ren, H.; Lee, Y. T.

    2010-08-01

    The Chinkuashih Benshen mine was the largest gold-copper mine in Taiwan before it was abandoned in 1987. However, even though the mine has been closed, the minerals still interact with rain and underground water and flow into the sea. The polluted sea surface appears yellow, green and even white, and the pollutants are carried by the coastal current. In this study, we used optical satellite images to monitor the sea surface. Several image processing algorithms are employed, especially the subpixel technique and the linear mixture model, to estimate the concentration of pollutants. A change detection approach is also applied to track the pollutants. We also conduct chemical analysis of the polluted water to provide ground-truth validation. Through correlation analysis between the satellite observations and the ground-truth chemical analysis, an effective approach to monitoring water pollution could be established.
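    The linear mixture model mentioned above treats each pixel spectrum as a linear combination of endmember spectra (e.g. clean water and pollutant plume) weighted by their sub-pixel fractions. A least-squares unmixing sketch; the two 4-band endmember spectra are invented, not measured values:

```python
import numpy as np

# Columns: hypothetical endmember spectra (clean water, pollutant) over 4 bands
E = np.array([[0.02, 0.30],
              [0.04, 0.25],
              [0.05, 0.20],
              [0.03, 0.10]])
abund_true = np.array([0.7, 0.3])
pixel = E @ abund_true                   # synthetic mixed-pixel spectrum

abund, *_ = np.linalg.lstsq(E, pixel, rcond=None)
abund = np.clip(abund, 0.0, None)
abund /= abund.sum()                     # fractions constrained to sum to one
print(abund)                             # recovers [0.7, 0.3] on noise-free data
```

    Real unmixing would add noise and often a nonnegativity-constrained solver, but the pixel = endmembers x abundances structure is the same.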

  11. Extracting flat-field images from scene-based image sequences using phase correlation

    Energy Technology Data Exchange (ETDEWEB)

    Caron, James N., E-mail: Caron@RSImd.com [Research Support Instruments, 4325-B Forbes Boulevard, Lanham, Maryland 20706 (United States); Montes, Marcos J. [Naval Research Laboratory, Code 7231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States); Obermark, Jerome L. [Naval Research Laboratory, Code 8231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States)

    2016-06-15

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of focal plane array electronics and unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface; the flat-field image is then normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase-correlation image registration to align the sequence and estimate the static scene. The scene is then removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
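    The registration step above rests on FFT phase correlation: the normalized cross-power spectrum of two displaced images has an inverse transform that peaks at the displacement. A sketch at integer-pixel precision (the paper refines the peak to sub-pixel accuracy, which is omitted here); the convention used is that the second image equals np.roll of the first:

```python
import numpy as np

def phase_correlate(ref, shifted):
    """Integer (row, col) displacement d such that shifted ≈ np.roll(ref, d)."""
    F = np.fft.fft2(shifted) * np.conj(np.fft.fft2(ref))
    F /= np.abs(F) + 1e-12                     # normalized cross-power spectrum
    corr = np.fft.ifft2(F).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    wrap = peak > np.array(ref.shape) // 2     # map wrapped peaks to signed shifts
    peak[wrap] -= np.array(ref.shape)[wrap]
    return tuple(int(p) for p in peak)

rng = np.random.default_rng(1)
scene = rng.random((64, 64))
moved = np.roll(scene, (5, -3), axis=(0, 1))
print(phase_correlate(scene, moved))  # (5, -3)
```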

  12. L’estime de soi : un cas particulier d’estime sociale ? [Self-esteem: a particular case of social esteem?]

    OpenAIRE

    Santarelli, Matteo

    2016-01-01

    One of the most original features of Axel Honneth's intersubjective theory of recognition is the way it discusses the relation between social esteem and self-esteem. In particular, Honneth presents self-esteem as a reflection of social esteem at the individual level. In this article, I discuss this conception by asking the following question: is self-esteem a particular case of social esteem? To do so, I concentrate on two crucial...

  13. BullsEye

    DEFF Research Database (Denmark)

    Klokmose, Clemens Nylandsted; Kristensen, Janus Bager; Bagge, Rolf

    2014-01-01

    implemented primarily in shaders on the GPU. The techniques are realized in the BullsEye computer vision software. We demonstrate experimentally that BullsEye provides sub-pixel accuracy down to a tenth of a pixel, which is a significant improvement compared to the commonly used reacTIVision software....

  14. 3-Phenyl-6-(2-pyridyl)-1,2,4,5-tetrazine

    Directory of Open Access Journals (Sweden)

    Daniel Chartrand

    2008-01-01

    Full Text Available The title compound, C13H9N5, is the first asymmetric diaryl-1,2,4,5-tetrazine to be crystallographically characterized. We have been interested in this motif for incorporation into supramolecular assemblies based on coordination chemistry. The solid state structure shows a centrosymmetric molecule, forcing a positional disorder of the terminal phenyl and pyridyl rings. The molecule is completely planar, unusual for aromatic rings with N atoms in adjacent ortho positions. The stacking observed is very common in diaryltetrazines and is dominated by π stacking [centroid-to-centroid distance between the tetrazine ring and the aromatic ring of an adjacent molecule of 3.6 Å, perpendicular (centroid-to-plane) distance of about 3.3 Å].

  15. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
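    One widely used conventional baseline for the bandwidth estimators compared above (not necessarily one of the study's own candidates) is Silverman's rule of thumb for 1-D Gaussian-kernel density estimation:

```python
import numpy as np

def silverman_bandwidth(x):
    """Rule of thumb: h = 0.9 * min(std, IQR/1.34) * n^(-1/5)."""
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    spread = min(np.std(x, ddof=1), iqr / 1.34)
    return 0.9 * spread * n ** (-0.2)

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=1000)
h = silverman_bandwidth(sample)
print(h)  # near 0.9 * 1000**(-0.2) ≈ 0.23 for unit-variance data
```

    Rules of this kind are derived for near-Gaussian data, which is exactly why comparative studies on real pattern-recognition tasks, as above, are needed.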

  16. 3D depth image analysis for indoor fall detection of elderly people

    Directory of Open Access Journals (Sweden)

    Lei Yang

    2016-02-01

    Full Text Available This paper presents a new fall detection method for elderly people in a room environment, based on shape analysis of 3D depth images captured by a Kinect sensor. Depth images are pre-processed by a median filter, both for the background and the target. The silhouette of the moving individual in the depth images is obtained by background-frame subtraction. The depth images are converted to a disparity map, which is analyzed by horizontal and vertical projection histogram statistics. The initial floor plane information is obtained from the V-disparity map, and the floor plane equation is estimated by the least-squares method. Shape information of the human subject in the depth images is analyzed by a set of moment functions. Coefficients of ellipses are calculated to determine the orientation of the individual. The centroid of the human body is calculated, as is the angle between the human body and the floor plane. When both the distance from the body centroid to the floor plane and the angle between the body and the floor plane fall below set thresholds, a fall incident is detected. Experiments with different falling directions were performed. Experimental results show that the proposed method can detect fall incidents effectively.
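    The two fall-test quantities described above reduce to standard geometry once the floor plane ax + by + cz + d = 0 is known: a point-to-plane distance for the centroid and the angle between the body's main axis and the plane. A sketch with illustrative numbers (the thresholds and plane are not the paper's values):

```python
import numpy as np

def centroid_plane_distance(p, plane):
    """Distance from point p to plane (a, b, c, d) with ax + by + cz + d = 0."""
    a, b, c, d = plane
    return abs(a * p[0] + b * p[1] + c * p[2] + d) / np.linalg.norm([a, b, c])

def body_plane_angle(axis, plane):
    """Angle in degrees between a direction vector and the plane."""
    n = np.asarray(plane[:3], dtype=float)
    axis = np.asarray(axis, dtype=float)
    s = abs(n @ axis) / (np.linalg.norm(n) * np.linalg.norm(axis))
    return np.degrees(np.arcsin(np.clip(s, 0.0, 1.0)))

floor = (0.0, 0.0, 1.0, 0.0)                           # z = 0 plane
standing = body_plane_angle((0, 0, 1), floor)          # upright: 90 degrees
lying = body_plane_angle((1, 0, 0), floor)             # horizontal: 0 degrees
dist = centroid_plane_distance((0.0, 0.0, 0.9), floor) # centroid 0.9 m above floor
print(standing, lying, dist)
```

    A fall is flagged when both quantities drop below their thresholds, e.g. a small angle together with a centroid close to the floor.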

  17. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  18. Using network metrics to investigate football team players' connections: A pilot study

    Directory of Open Access Journals (Sweden)

    Filipe Manuel Clemente

    2014-09-01

    Full Text Available The aim of this pilot study was to propose a set of network methods to measure specific properties of football teams. These metrics were organized at the "meso" and "micro" analysis levels. Five official matches of the same team in the First Portuguese Football League were analyzed, comprising 577 offensive plays. From the adjacency matrices developed for each offensive play, the scaled connectivity, the clustering coefficient, and the centroid significance and centroid conformity were computed. Results showed that the highest values of scaled connectivity were found in lateral defenders and central and midfield players, and the lowest values in the striker and goalkeeper. The highest values of the clustering coefficient were generally found in midfielders and forwards. In addition, the centroid results showed that lateral and central defenders tend to be the centroid players in the attacking process. In sum, this study showed that network metrics can be a powerful tool to help coaches understand a team's specific properties, thus supporting decision-making and improving sports training based on match analysis.
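    One of the metrics above, the local clustering coefficient, can be computed directly from an adjacency matrix; this sketch uses the unweighted, undirected definition on a toy passing network (the study's match-derived matrices may be weighted or directed, which requires generalized formulas):

```python
import numpy as np

def clustering_coefficients(A):
    """Local clustering coefficient per node of an undirected 0/1 adjacency matrix."""
    k = A.sum(axis=1)                          # node degrees
    triangles = np.diag(A @ A @ A) / 2.0       # triangles through each node
    possible = k * (k - 1) / 2.0               # possible links among neighbours
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(possible > 0, triangles / possible, 0.0)

# Toy passing network: players 0-1-2 form a triangle, player 3 passes only to 0
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(clustering_coefficients(A))  # [1/3, 1, 1, 0]
```

    A high coefficient means a player's passing partners also pass among themselves, i.e. the player sits inside a tightly knit sub-unit.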

  19. Multivariate Location Estimation Using Extension of $R$-Estimates Through $U$-Statistics Type Approach

    OpenAIRE

    Chaudhuri, Probal

    1992-01-01

    We consider a class of $U$-statistics type estimates for multivariate location. The estimates extend some $R$-estimates to multivariate data. In particular, the class of estimates includes the multivariate median considered by Gini and Galvani (1929) and Haldane (1948) and a multivariate extension of the well-known Hodges-Lehmann (1963) estimate. We explore large sample behavior of these estimates by deriving a Bahadur type representation for them. In the process of developing these asymptoti...

  20. A New Approach to Spindle Radial Error Evaluation Using a Machine Vision System

    Directory of Open Access Journals (Sweden)

    Kavitha C.

    2017-03-01

    Full Text Available The spindle rotational accuracy is one of the important issues in a machine tool, as it affects the surface topography and dimensional accuracy of a workpiece. This paper presents a machine-vision-based approach to radial error measurement of a lathe spindle using a CMOS camera and a PC-based image processing system. In the present work, a precisely machined cylindrical master is mounted on the spindle as a datum surface, and variations of its position are captured using the camera for evaluating runout of the spindle. The Circular Hough Transform (CHT) is used to detect variations of the centre position of the master cylinder during spindle rotation at subpixel level from a sequence of images. Radial error values of the spindle are evaluated using Fourier series analysis of the centre position of the master cylinder calculated with the least-squares curve fitting technique. The experiments have been carried out on a lathe at different operating speeds, and the spindle radial error estimation results are presented. The proposed method provides a simpler approach to on-machine estimation of the spindle radial error in machine tools.
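    The least-squares curve-fitting step above can be illustrated with the algebraic (Kasa) circle fit, which recovers a circle centre at sub-pixel precision from detected edge points; the edge coordinates below are synthetic, and the paper's exact fitting formulation may differ:

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares (Kasa) circle fit: returns (xc, yc, r).

    Uses the linearization 2*xc*x + 2*yc*y + (r^2 - xc^2 - yc^2) = x^2 + y^2.
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + xc ** 2 + yc ** 2)
    return float(xc), float(yc), float(r)

# Synthetic edge points of the master cylinder, centre at sub-pixel coordinates
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x = 40.25 + 12.5 * np.cos(theta)
y = 17.75 + 12.5 * np.sin(theta)
xc, yc, r = fit_circle(x, y)
print(xc, yc, r)  # ≈ 40.25, 17.75, 12.5
```

    Repeating the fit frame by frame gives the centre trajectory whose Fourier components separate eccentricity from true radial error motion.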

  1. Spatiotemporal Super-Resolution Reconstruction Based on Robust Optical Flow and Zernike Moment for Video Sequences

    Directory of Open Access Journals (Sweden)

    Meiyu Liang

    2013-01-01

    Full Text Available In order to improve the spatiotemporal resolution of the video sequences, a novel spatiotemporal super-resolution reconstruction model (STSR based on robust optical flow and Zernike moment is proposed in this paper, which integrates the spatial resolution reconstruction and temporal resolution reconstruction into a unified framework. The model does not rely on accurate estimation of subpixel motion and is robust to noise and rotation. Moreover, it can effectively overcome the problems of hole and block artifacts. First we propose an efficient robust optical flow motion estimation model based on motion details preserving, then we introduce the biweighted fusion strategy to implement the spatiotemporal motion compensation. Next, combining the self-adaptive region correlation judgment strategy, we construct a fast fuzzy registration scheme based on Zernike moment for better STSR with higher efficiency, and then the final video sequences with high spatiotemporal resolution can be obtained by fusion of the complementary and redundant information with nonlocal self-similarity between the adjacent video frames. Experimental results demonstrate that the proposed method outperforms the existing methods in terms of both subjective visual and objective quantitative evaluations.

  2. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  3. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    International Nuclear Information System (INIS)

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick; Slosar, Anže

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2

  4. Ellipsoid analysis of calvarial shape.

    Science.gov (United States)

    Jacobsen, Petra A; Becker, Devra; Govier, Daniel P; Krantz, Steven G; Kane, Alex

    2009-09-01

    The purpose of this research was to develop a novel quantitative method of describing calvarial shape by using ellipsoid geometry. The pilot application of Ellipsoid Analysis was to compare calvarial form among individuals with untreated unilateral coronal synostosis, metopic synostosis, and sagittal synostosis and normal subjects. The frontal, parietal, and occipital bones of 10 preoperative patients for each of the four study groups were bilaterally segmented into six regions using three-dimensional skull reconstructions generated by ANALYZE imaging software from high-resolution computed tomography scans. Points along each segment were extracted and manipulated using a MATLAB-based program. The points were fit to the least-squares nearest ellipsoid. Relationships between the six resultant right and left frontal, parietal, and occipital ellipsoidal centroids (FR, FL, PR, PL, OR, and OL, respectively) were tested for association with a synostotic group. Results from the pilot study showed meaningful differences between length ratio, angular, and centroid distance relationships among synostotic groups. The most substantial difference was exhibited in the centroid distance PL-PR between patients with sagittal synostosis and metopic synostosis. The measures most commonly significant were centroid distances FL-PR and FL-PL and the angle OR-FR-PR. Derived centroid relationships were reproducible. Ellipsoid Analysis may offer a more refined approach to quantitative analysis of cranial shape. Symmetric and asymmetric forms can be compared directly. Relevant shape information between traditional landmarks is characterized. These techniques may have wider applicability in quantifying craniofacial morphology with increase in both specificity and general applicability over current methods.

  5. Dominant π⋅⋅⋅π interaction in the self assemblies of 4 ...

    Indian Academy of Sciences (India)

    Administrator

    The centroid-to-centroid distance is 3.78 Å and the perpendicular distance between the offset stacked rings is 3.42 Å, with a displacement angle of 19.21° and a lateral displacement of 1.61 Å. The offset-stacked molecules form a zig-zag pattern through the π⋅⋅⋅π offset stacks (3.78 Å). A similar array ...

  6. Automated Slicing for a Multi-Axis Metal Deposition System (Preprint)

    Science.gov (United States)

    2006-09-01

    experimented with different materials like H13 tool steel to build the part. Following the same slicing and scanning toolpath result, there is a geometric... geometry reasoning and analysis tool - centroidal axis. Similar to medial axis, it contains geometry and topological information but is significantly computationally...

  7. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Scientific Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
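    The analogy-based idea evaluated above can be sketched as a minimal k-nearest-neighbor effort estimator: predict a new project's effort as the mean effort of its k most similar historical projects. The project features and effort values below are invented, and this is not the NASA Software Cost Model itself:

```python
import numpy as np

def knn_effort(history_X, history_y, query, k=3):
    """Mean effort of the k historical projects nearest to the query project."""
    d = np.linalg.norm(history_X - query, axis=1)
    nearest = np.argsort(d)[:k]
    return float(history_y[nearest].mean())

# Hypothetical features [kloc, team_size] and efforts in person-months
X = np.array([[10, 3], [12, 4], [50, 10], [55, 12], [11, 3]], dtype=float)
y = np.array([30.0, 40.0, 200.0, 230.0, 35.0])

estimate = knn_effort(X, y, np.array([11.0, 3.0]))
print(estimate)  # mean of the three small, similar projects: 35.0
```

    In practice the features would be normalized and k tuned, since raw Euclidean distance lets large-range features such as KLOC dominate the similarity.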

  8. A hybrid method for accurate star tracking using star sensor and gyros.

    Science.gov (United States)

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
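    The star-vector difference idea above rests on a small-angle approximation: for a short sample period, the angular velocity can be estimated from two successive unit star vectors as omega ≈ (v1 x v2) / dt. A sketch with illustrative numbers (real trackers fuse many stars and handle noise):

```python
import numpy as np

def angular_velocity(v1, v2, dt):
    """Small-angle angular-velocity estimate from successive unit vectors.

    |v1 x v2| = sin(theta) ≈ theta for small rotations, and the cross product
    points along the rotation axis.
    """
    return np.cross(v1, v2) / dt

# Star vector rotated by 0.001 rad about z between two samples 10 ms apart
angle, dt = 1e-3, 0.01
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([np.cos(angle), np.sin(angle), 0.0])
omega = angular_velocity(v1, v2, dt)
print(omega)  # close to [0, 0, 0.1] rad/s
```

    Under high dynamics this approximation degrades, which is exactly when the hybrid method above switches to the calibrated gyros.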

  9. Centroid-Based Document Classification Algorithms: Analysis & Experimental Results

    Science.gov (United States)

    2000-03-06

    [Extraction residue: the record's snippet is a table of top-weighted stemmed terms for several cluster centroids, e.g. "nafta 0.49, mexico 0.40, ..." and "drug 0.64, legal 0.20, ..."; no abstract text was captured for this record.]

  10. Determination of star bodies from p-centroid bodies

    Indian Academy of Sciences (India)

    2016-08-26

    Proceedings – Mathematical Sciences, Volume 123, Issue 4.

  11. Monte Carlo calculations of channeling radiation

    International Nuclear Information System (INIS)

    Bloom, S.D.; Berman, B.L.; Hamilton, D.C.; Alguard, M.J.; Barrett, J.H.; Datz, S.; Pantell, R.H.; Swent, R.H.

    1981-01-01

    Results of classical Monte Carlo calculations are presented for the radiation produced by ultra-relativistic positrons incident in a direction parallel to the (110) plane of Si in the energy range 30 to 100 MeV. The results all show the characteristic CR (channeling radiation) peak in the energy range 20 keV to 100 keV. Plots of the centroid energies, widths, and total yields of the CR peaks as a function of energy show power-law dependences of γ^1.5, γ^1.7, and γ^2.5, respectively. Except for the centroid energies, the power-law dependence is only approximate. Agreement with experimental data is good for the centroid energies and only rough for the widths. Adequate experimental data for verifying the yield dependence on γ do not yet exist

  12. Trajectory data privacy protection based on differential privacy mechanism

    Science.gov (United States)

    Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong

    2018-05-01

    In this paper, we propose a trajectory data privacy protection scheme based on differential privacy mechanism. In the proposed scheme, the algorithm first selects the protected points from the user’s trajectory data; secondly, the algorithm forms the polygon according to the protected points and the adjacent and high frequent accessed points that are selected from the accessing point database, then the algorithm calculates the polygon centroids; finally, the noises are added to the polygon centroids by the differential privacy method, and the polygon centroids replace the protected points, and then the algorithm constructs and issues the new trajectory data. The experiments show that the running time of the proposed algorithms is fast, the privacy protection of the scheme is effective and the data usability of the scheme is higher.
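The centroid-perturbation step described above can be sketched in a few lines. This is a minimal illustration rather than the authors' code: the function names are invented, the polygon centroid is taken as the mean of the vertices, and the unit sensitivity default is an assumption.

```python
import math
import random

def polygon_centroid(points):
    """Centroid (vertex mean, a simplifying assumption) of the polygon
    formed by a protected point and its adjacent frequent points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def laplace_noise(scale):
    """Draw Laplace(0, scale) noise by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def perturb_centroid(points, epsilon, sensitivity=1.0):
    """Replace a protected point by the polygon centroid plus Laplace
    noise calibrated to epsilon-differential privacy."""
    cx, cy = polygon_centroid(points)
    scale = sensitivity / epsilon
    return (cx + laplace_noise(scale), cy + laplace_noise(scale))
```

A larger epsilon (weaker privacy) keeps the released point closer to the true centroid; the scheme's point-selection and polygon-construction stages are omitted here.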

  13. Development of a Novel Preclinical Pancreatic Cancer Research Model: Bioluminescence Image-Guided Focal Irradiation and Tumor Monitoring of Orthotopic Xenografts1

    Science.gov (United States)

    Tuli, Richard; Surmak, Andrew; Reyes, Juvenal; Hacker-Prietz, Amy; Armour, Michael; Leubner, Ashley; Blackford, Amanda; Tryggestad, Erik; Jaffee, Elizabeth M; Wong, John; DeWeese, Theodore L; Herman, Joseph M

    2012-01-01

    PURPOSE: We report on a novel preclinical pancreatic cancer research model that uses bioluminescence imaging (BLI)-guided irradiation of orthotopic xenograft tumors, sparing of surrounding normal tissues, and quantitative, noninvasive longitudinal assessment of treatment response. MATERIALS AND METHODS: Luciferase-expressing MiaPaCa-2 pancreatic carcinoma cells were orthotopically injected in nude mice. BLI was compared to pathologic tumor volume, and photon emission was assessed over time. BLI was correlated to positron emission tomography (PET)/computed tomography (CT) to estimate tumor dimensions. BLI and cone-beam CT (CBCT) were used to compare tumor centroid location and estimate setup error. BLI and CBCT fusion was performed to guide irradiation of tumors using the small animal radiation research platform (SARRP). DNA damage was assessed by γ-H2Ax staining. BLI was used to longitudinally monitor treatment response. RESULTS: Bioluminescence predicted tumor volume (R = 0.8984) and increased linearly as a function of time up to a 10-fold increase in tumor burden. BLI correlated with PET/CT and necropsy specimen in size (P < .05). Two-dimensional BLI centroid accuracy was 3.5 mm relative to CBCT. BLI-guided irradiated pancreatic tumors stained positively for γ-H2Ax, whereas surrounding normal tissues were spared. Longitudinal assessment of irradiated tumors with BLI revealed significant tumor growth delay of 20 days relative to controls. CONCLUSIONS: We have successfully applied the SARRP to a bioluminescent, orthotopic preclinical pancreas cancer model to noninvasively: 1) allow the identification of tumor burden before therapy, 2) facilitate image-guided focal radiation therapy, and 3) allow normalization of tumor burden and longitudinal assessment of treatment response. PMID:22496923

  14. On the relation between S-Estimators and M-Estimators of multivariate location and covariance

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1987-01-01

We discuss the relation between S-estimators and M-estimators of multivariate location and covariance. As in the case of the estimation of a multiple regression parameter, S-estimators are shown to satisfy first-order conditions of M-estimators. We show that the influence function IF(x; S, F) of

  15. CHANNEL ESTIMATION TECHNIQUE

    DEFF Research Database (Denmark)

    2015-01-01

A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.

  16. Comparison of variance estimators for meta-analysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  17. Track fitting and resolution with digital detectors

    International Nuclear Information System (INIS)

    Duerdoth, I.

    1982-01-01

    The analysis of data from detectors which give digitised measurements, such as MWPCs, is considered. These measurements are necessarily correlated and it is shown that the uncertainty in the combination of N measurements may fall faster than the canonical 1/√N. A new method of track fitting is described which exploits the digital aspects and which takes the correlations into account. It divides the parameter space into cells and the centroid of a cell is taken as the best estimate. The method is shown to have some advantages over the standard least-squares analysis. If the least-squares method is used for digital detectors the goodness-of-fit may not be a reliable estimate of the accuracy. The cell method is particularly suitable for implementation on microcomputers which lack floating point and divide facilities. (orig.)

  18. Morphological estimators on Sunyaev-Zel'dovich maps of MUSIC clusters of galaxies

    Science.gov (United States)

    Cialone, Giammarco; De Petris, Marco; Sembolini, Federico; Yepes, Gustavo; Baldi, Anna Silvia; Rasia, Elena

    2018-06-01

The determination of the morphology of galaxy clusters has important repercussions for cosmological and astrophysical studies of them. In this paper, we address the morphological characterization of synthetic maps of the Sunyaev-Zel'dovich (SZ) effect for a sample of 258 massive clusters (Mvir > 5 × 10^14 h^-1 M⊙ at z = 0), extracted from the MUSIC hydrodynamical simulations. Specifically, we use five known morphological parameters (which are already used in X-ray) and two newly introduced ones, and we combine them in a single parameter. We analyse two sets of simulations obtained with different prescriptions of the gas physics (non-radiative and with cooling, star formation and stellar feedback) at four redshifts between 0.43 and 0.82. For each parameter, we test its stability and efficiency in discriminating the true cluster dynamical state, measured by theoretical indicators. The combined parameter is more efficient at discriminating between relaxed and disturbed clusters. It has a mild correlation (˜0.3) with the hydrostatic mass and a strong correlation (˜0.8) with the offset between the SZ centroid and the cluster centre of mass. The latter quantity is, thus, the most accessible and efficient indicator of the dynamical state for SZ studies.

  19. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    Science.gov (United States)

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced. The whole process is as follows: first, rectangular feature templates are constructed, centred on Harris corners extracted from the mask image, and motion vectors of the central feature points are estimated by template matching, with maximum histogram energy as the similarity measure. The optimal parameters of the affine transformation are then calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the estimated affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts were removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
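The affine-parameter step can be illustrated as a linear least-squares solve over Harris-corner correspondences (`np.linalg.lstsq` performs an SVD internally, in the spirit of the SVD method the abstract names). The function names and point sets below are illustrative, not the paper's implementation.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points to dst.
    Builds the stacked system A @ params = b with params =
    (a, b, tx, c, d, ty) and solves it via np.linalg.lstsq (SVD-based)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src      # x-equations: a*x + b*y + tx = x'
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src      # y-equations: c*x + d*y + ty = y'
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)     # interleaved (x'0, y'0, x'1, y'1, ...)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[params[0], params[1], params[2]],
                     [params[3], params[4], params[5]]])

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an (n, 2) point array."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

With three or more non-collinear correspondences the system is determined; extra matches are averaged out in the least-squares sense, which is what makes the registration robust to individual mismatched templates.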

  20. Measuring coseismic displacements with point-like targets offset tracking

    KAUST Repository

    Hu, Xie; Wang, Teng; Liao, Mingsheng

    2014-01-01

    Offset tracking is an important complement to measure large ground displacements in both azimuth and range dimensions where synthetic aperture radar (SAR) interferometry is unfeasible. Subpixel offsets can be obtained by searching for the cross-correlation peak calculated from the match patches uniformly distributed on two SAR images. However, it has its limitations, including redundant computation and incorrect estimations on decorrelated patches. In this letter, we propose a simple strategy that performs offset tracking on detected point-like targets (PT). We first detect image patches within bright PT by using a sinc-like template from a single SAR image and then perform offset tracking on them to obtain the pixel shifts. Compared with the standard method, the application on the 2010 M 7.2 El Mayor-Cucapah earthquake shows that the proposed PT offset tracking can significantly increase the cross-correlation and thus result in both efficiency and reliability improvements. © 2013 IEEE.
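The sub-pixel search for the cross-correlation peak that offset tracking relies on can be illustrated in one dimension. This is a minimal sketch under stated assumptions: a parabola is fitted through the three correlation samples around the integer peak, and the point-target detection and 2-D patch handling of the letter are omitted.

```python
import numpy as np

def subpixel_peak_1d(c, i):
    """Refine the integer peak index i of correlation curve c by
    fitting a parabola through (i-1, i, i+1); returns a fractional
    index (assumes i is not at either end of c)."""
    y0, y1, y2 = c[i - 1], c[i], c[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)
    return i + 0.5 * (y0 - y2) / denom

def offset_tracking_1d(master, slave):
    """Estimate the (sub)pixel shift of `slave` relative to `master`
    from the peak of their mean-removed cross-correlation."""
    c = np.correlate(slave - slave.mean(), master - master.mean(),
                     mode="full")
    i = int(np.argmax(c))
    return subpixel_peak_1d(c, i) - (len(master) - 1)
```

In practice the same idea is applied to 2-D patches (azimuth and range), and the peak's correlation value doubles as the quality measure that the PT strategy improves.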

  1. Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers

    Directory of Open Access Journals (Sweden)

    Fabrizio Cutolo

    2016-09-01

In the context of surgical navigation systems based on augmented reality (AR), the key challenge is to ensure the highest degree of realism in merging computer-generated elements with live views of the surgical scene. This paper presents an algorithm suited for wearable stereoscopic augmented reality video see-through systems for use in a clinical scenario. A video-based tracking solution is proposed that relies on stereo localization of three monochromatic markers rigidly constrained to the scene. A PnP-based optimization step is introduced to refine separately the pose of the two cameras. Video-based tracking methods using monochromatic markers are robust to non-controllable and/or inconsistent lighting conditions. The two-stage camera pose estimation algorithm provides sub-pixel registration accuracy. From a technological and an ergonomic standpoint, the proposed approach represents an effective solution to the implementation of wearable AR-based surgical navigation systems wherever rigid anatomies are involved.

  2. A quasi-dense matching approach and its calibration application with Internet photos.

    Science.gov (United States)

    Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei

    2015-03-01

    This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.

  3. Assessing long-term variations in sagebrush habitat: characterization of spatial extents and distribution patterns using multi-temporal satellite remote-sensing data

    Science.gov (United States)

    Xian, George; Homer, Collin G.; Aldridge, Cameron L.

    2012-01-01

    An approach that can generate sagebrush habitat change estimates for monitoring large-area sagebrush ecosystems has been developed and tested in southwestern Wyoming, USA. This prototype method uses a satellite-based image change detection algorithm and regression models to estimate sub-pixel percentage cover for five sagebrush habitat components: bare ground, herbaceous, litter, sagebrush and shrub. Landsat images from three different months in 1988, 1996 and 2006 were selected to identify potential landscape change during these time periods using change vector (CV) analysis incorporated with an image normalization algorithm. Regression tree (RT) models were used to estimate percentage cover for five components on all change areas identified in 1988 and 1996, using unchanged 2006 baseline data as training for both estimates. Over the entire study area (24 950 km2), a net increase of 98.83 km2, or 0.7%, for bare ground was measured between 1988 and 2006. Over the same period, the other four components had net losses of 20.17 km2, or 0.6%, for herbaceous vegetation; 30.16 km2, or 0.7%, for litter; 32.81 km2, or 1.5%, for sagebrush; and 33.34 km2, or 1.2%, for shrubs. The overall accuracy for shrub vegetation change between 1988 and 2006 was 89.56%. Change patterns within sagebrush habitat components differ spatially and quantitatively from each other, potentially indicating unique responses by these components to disturbances imposed upon them.

  4. Characterization of the variability of the South Pacific Convergence Zone using satellite and reanalysis wind product

    Science.gov (United States)

    Lee, T.; Kidwell, A. N.; Jo, Y. H.; Yan, X. H.

    2016-02-01

The variability of the South Pacific Convergence Zone (SPCZ) is evaluated using ocean surface wind products derived from the QuikSCAT satellite scatterometer for the period 1999-2009 and the ERA-Interim atmospheric reanalysis for the period 1981-2014. From these products, indices were developed to represent the SPCZ strength, area, and centroid location. Excellent agreement is found between the indices derived from the two wind products during the QuikSCAT period in terms of the spatio-temporal structures of the SPCZ. The longer ERA-Interim product is then used to study the variations of SPCZ properties on intraseasonal, seasonal, interannual, and decadal time scales. The SPCZ strength, area, and centroid latitude have a dominant seasonal cycle. In contrast, the SPCZ centroid longitude is dominated by intraseasonal variability due to the influence of the Madden-Julian Oscillation. The SPCZ indices are all correlated with El Niño-Southern Oscillation indices. Interannual and intraseasonal variations of SPCZ strength during strong El Niño are approximately twice as large as the respective seasonal variations. SPCZ strength depends more on the intensity of El Niño than on the central- vs. eastern-Pacific type. The change from positive to negative Pacific Decadal Oscillation phase around 1999 results in a westward shift of the SPCZ centroid longitude, a much smaller interannual swing in centroid latitude, and a decrease in SPCZ area. This study improves the understanding of the variations of the SPCZ on multiple time scales and reveals variations of SPCZ strength not reported previously. The diagnostic analyses can be used to evaluate climate models.

  5. Power saving through state retention in IGZO-TFT AMOLED displays for wearable applications

    NARCIS (Netherlands)

    Steudel, S.; van der Steen, J.L.P.J.; Nag, M.; Ke, T.H.; Smout, S.; Bel, T.; van Diesen, K.; de Haas, G.; Maas, J.; de Riet, J.; Rovers, M.; Verbeek, R.; Huang, Y.Y.; Chiang, S.C.; Ameys, M.; De Roose, F.; Dehaene, W.; Genoe, J.; Heremans, P.; Gelinck, G.H.; Kronemeijer, A.J.

    2017-01-01

    We present a qHD (960 × 540 with three sub-pixels) top-emitting active-matrix organic light-emitting diode display with a 340-ppi resolution using a self-aligned IGZO thin-film transistor backplane on polyimide foil with a humidity barrier. The back plane process flow is based on a seven-layer

  6. Probing active-edge silicon sensors using a high precision telescope

    NARCIS (Netherlands)

    Akiba, K.; Artuso, M.; van Beveren, V.; van Beuzekom, M.; Boterenbrood, H.; Buytaert, J.; Collins, P.; Dumps, R.; van der Heijden, B.; Hombach, C.; Hynds, D.; Hsu, D.; John, M.; Koffeman, E.; Leflat, A.; Li, Y.; Longstaff, I.; Morton, A.; PérezTrigo, E.; Plackett, R.; Reid, M.M.; Rodríguez Perez, P.; Schindler, H.; Tsopelas, P.; Vázquez Sierra, C.; Wysokiński, M.

    2015-01-01

    The performance of prototype active-edge VTT sensors bump-bonded to the Timepix ASIC is presented. Non-irradiated sensors of thicknesses 100-200 μm and pixel-to-edge distances of 50 μm and 100 μm were probed with a beam of charged hadrons with sub-pixel precision using the Timepix telescope

  7. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware-efficient neural flow estimator is described. The system is implemented using the switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network containing 96 synaptic weights, which are updated using the LMS algorithm. An experimental chip has been designed that operates at 5 V...

  8. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    Science.gov (United States)

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
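The ring-centroid machinery that supports the pi-stacking queries described above reduces to a few lines of geometry. A minimal sketch, not the databases' code: the function names are invented, and the 4.4 Å centroid-centroid cutoff is a common rule-of-thumb assumption rather than a value taken from the paper.

```python
import math

def ring_centroid(atoms):
    """Geometric centroid of a ring, given its atom (x, y, z) coords."""
    n = len(atoms)
    return tuple(sum(a[i] for a in atoms) / n for i in range(3))

def centroid_distance(ring_a, ring_b):
    """Centroid-centroid distance between two rings, as precalculated
    in the relational database for fast retrieval."""
    return math.dist(ring_centroid(ring_a), ring_centroid(ring_b))

def is_pi_stacked(ring_a, ring_b, max_dist=4.4):
    """Crude pi-stacking test on centroid distance alone; real queries
    also constrain the angle between the ring planes."""
    return centroid_distance(ring_a, ring_b) <= max_dist
```

Precomputing such distances per structure is what turns an expensive geometric search into a millisecond indexed lookup.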

  9. Prevalence and Severity of Off-Centering During Diagnostic CT: Observations From 57,621 CT scans of the Chest, Abdomen, and/or Pelvis.

    Science.gov (United States)

    Akin-Akintayo, Oladunni O; Alexander, Lauren F; Neill, Rebecca; Krupinksi, Elizabeth A; Tang, Xiangyang; Mittal, Pardeep K; Small, William C; Moreno, Courtney C

    2018-02-23

    To determine distances between patient centroid and gantry isocenter during CT imaging of the chest, abdomen, and/or pelvis, and to evaluate differences based on patient gender, scan region, patient position, and gantry aperture. A water phantom and an anthropomorphic phantom were imaged in the centered position in the CT gantry and at several off-centered positions. Additionally, data from 57,621 adult chest, abdomen, and/or pelvic CT acquisitions were evaluated. Data were analyzed with an analysis of variance using the centroid-to-isocenter data as the dependent variable and the other parameters as independent variables. The majority of patient acquisitions (83.7% (48271/57621)) were performed with the patient's centroid positioned below isocenter (mean 1.7 cm below isocenter (SD 1.8 cm); range 12.1 cm below to 7.8 cm above isocenter). Off-centering in the x-axis was less severe (mean 0.01 cm left of isocenter (SD 1.6 cm)). Distance between centroid and isocenter in the y-axis did not differ as a function of sex but did differ based on scan region, patient position, and gantry aperture. Off-centering is common during CT imaging and has been previously demonstrated to impact dose and image quality. Copyright © 2018 Elsevier Inc. All rights reserved.
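The basic centroid-to-isocenter measurement behind these statistics can be illustrated on a single axial slice. This is an assumed sketch, not the study's pipeline: the function name, the binary patient mask, and the isocenter-in-pixel-coordinates convention are all illustrative.

```python
import numpy as np

def centroid_offset_cm(mask, pixel_cm, iso_rc):
    """Patient centroid of a binary axial CT mask and its (y, x)
    offset in cm from the gantry isocenter pixel `iso_rc`.
    Negative y means the centroid lies above the isocenter row."""
    rows, cols = np.nonzero(mask)
    centroid = np.array([rows.mean(), cols.mean()])
    return (centroid - np.asarray(iso_rc, float)) * pixel_cm
```

Aggregating this offset over every acquisition is what yields population figures like the mean 1.7 cm below-isocenter positioning reported above.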

  10. Accurate characterisation of hole size and location by projected fringe profilometry

    Science.gov (United States)

    Wu, Yuxiang; Dantanarayana, Harshana G.; Yue, Huimin; Huntley, Jonathan M.

    2018-06-01

    The ability to accurately estimate the location and geometry of holes is often required in the field of quality control and automated assembly. Projected fringe profilometry is a potentially attractive technique on account of being non-contacting, of lower cost, and orders of magnitude faster than the traditional coordinate measuring machine. However, we demonstrate in this paper that fringe projection is susceptible to significant (hundreds of µm) measurement artefacts in the neighbourhood of hole edges, which give rise to errors of a similar magnitude in the estimated hole geometry. A mechanism for the phenomenon is identified based on the finite size of the imaging system’s point spread function and the resulting bias produced near to sample discontinuities in geometry and reflectivity. A mathematical model is proposed, from which a post-processing compensation algorithm is developed to suppress such errors around the holes. The algorithm includes a robust and accurate sub-pixel edge detection method based on a Fourier descriptor of the hole contour. The proposed algorithm was found to reduce significantly the measurement artefacts near the hole edges. As a result, the errors in estimated hole radius were reduced by up to one order of magnitude, to a few tens of µm for hole radii in the range 2–15 mm, compared to those from the uncompensated measurements.

  11. GEOMETRIC AND RADIOMETRIC EVALUATION OF RASAT IMAGES

    Directory of Open Access Journals (Sweden)

    A. Cam

    2016-06-01

RASAT, the second remote sensing satellite of Turkey, was designed, assembled, and is being operated by TÜBİTAK Uzay (Space Technologies Research Institute, Ankara). RASAT images at various levels are available free of charge via the Gezgin portal for Turkish citizens. In this paper, the images in the panchromatic (7.5 m GSD) and RGB (15 m GSD) bands at various levels were investigated with respect to their geometric and radiometric characteristics. The first geometric analysis is the estimation of the effective GSD, found to be less than 1 pixel for the radiometrically processed level (L1R) of both panchromatic and RGB images. Secondly, 2D georeferencing accuracy is estimated by various non-physical transformation models (similarity, 2D affine, polynomial, affine projection, projective, DLT and GCP-based RFM), reaching sub-pixel accuracy using a minimum of 39 and a maximum of 52 GCPs. The radiometric characteristics are also investigated for 8 bits, estimating an SNR between 21.8-42.2 and noise between 0.0-3.5 for the panchromatic and MS L1R images when the sea is masked to obtain results for land areas. The analyses show that RASAT images satisfy the requirements of various applications. The research was carried out in the Zonguldak test site, which is mountainous and partly covered by dense forest and urban areas.

  12. ALOS PALSAR Winter Coherence and Summer Intensities for Large Scale Forest Monitoring in Siberia

    Science.gov (United States)

    Thiel, Christian; Thiel, Carolin; Santoro, Maurizio; Schmullius, Christiane

    2008-11-01

In this paper summer intensity and winter coherence images are used for large scale forest monitoring. The intensities (FBD HH/HV) were acquired during summer 2007 and feature the K&C intensity stripes [1]. The processing consisted of radiometric calibration, orthorectification, and topographic normalisation. The coherence was estimated from interferometric pairs with 46-day repeat-pass intervals. The pairs were acquired during the winters 2006/2007 and 2007/2008; during both winters suitable weather conditions were reported. Interferometric processing consisted of SLC co-registration at sub-pixel level, common-band filtering in range and azimuth, and generation of a differential interferogram, which was used in the coherence estimation procedure based on adaptive estimation. All images were geocoded using SRTM data. The pixel size of the final SAR products is 50 m x 50 m. It has already been demonstrated that forest and non-forest can be clearly separated using PALSAR intensities and winter coherence [2]. By combining both data types hardly any overlap of the class signatures was detected, even though the analysis was conducted on pixel level and no speckle filter had been applied. Thus, the delineation of a forest cover mask could be executed operationally. The major hitch is the definition of a biomass threshold for regrowing forest to be distinguished as forest.

  13. 2,4-Diamino-6-methyl-1,3,5-triazin-1-ium hydrogen oxalate

    Directory of Open Access Journals (Sweden)

    Bohari M. Yamin

    2012-05-01

The title compound, C4H8N5+·C2HO4−, was obtained from the reaction of oxalic acid and 2,4-diamino-6-methyl-1,3,5-triazine. The protonated triazine ring is essentially planar, with a maximum deviation of 0.035 (1) Å, but the hydrogen oxalate anion is less planar, with a maximum deviation of 0.131 (1) Å for both carbonyl O atoms. In the crystal, the ions are linked by intermolecular N—H...O, N—H...N, O—H...O and C—H...O hydrogen bonds, forming a three-dimensional network. Weak π–π [centroid–centroid distance = 3.763 Å] and C—O...π interactions [O...centroid = 3.5300 (16) Å, C—O...centroid = 132.19 (10)°] are also present.

  14. 2,6-Diaminopyridinium bis(4-hydroxypyridine-2,6-dicarboxylato-κ3O2,N,O6ferrate(III dihydrate

    Directory of Open Access Journals (Sweden)

    Andya Nemati

    2008-10-01

The reaction of iron(II) sulfate heptahydrate with the proton-transfer compound (pydaH)(hypydcH) (pyda = pyridine-2,6-diamine; hypydcH2 = 4-hydroxypyridine-2,6-dicarboxylic acid) in aqueous solution led to the formation of the title compound, (C5H8N3)[Fe(C7H3NO5)2]·2H2O. The anion is a six-coordinate complex with a distorted octahedral geometry around the FeIII atom. Extensive intermolecular O—H...O, N—H...O and C—H...O hydrogen bonds, involving the complex anion, the (pydaH)+ counter-ion and two uncoordinated water molecules, together with π–π [centroid-to-centroid distance 3.323 (11) Å] and C—O...π [O...centroid distance 3.150 (15) Å] interactions, connect the various components into a supramolecular structure.

  15. Study of position resolution for cathode readout MWPC with measurement of induced charge distribution

    International Nuclear Information System (INIS)

    Chiba, J.; Iwasaki, H.; Kageyama, T.; Kuribayashi, S.; Nakamura, K.; Sumiyoshi, T.; Takeda, T.

    1983-01-01

A readout technique for multiwire proportional chambers based on measuring the charges induced on cathode strips orthogonal to the anode wires requires an algorithm to relate the measured charge distribution to the avalanche position. With given chamber parameters and under the influence of noise, the resolution limits depend on the chosen algorithm. We have studied the position resolution obtained by the centroid method and by the charge-ratio method, both using three consecutive cathode strips. While the centroid method uses a single number, the center of gravity of the measured charges, the charge-ratio method uses the ratios of the charges Q(i-1)/Q(i) and Q(i+1)/Q(i), where Q(i) is the largest. To obtain a given resolution, the charge-ratio method generally allows wider cathode strips and therefore a smaller number of readout channels than the centroid method. (orig.)
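The two three-strip estimators compared in this abstract can be sketched directly. A minimal illustration, not the paper's algorithms: positions are in strip-pitch units relative to the central strip, and the Gaussian induced-charge model assumed for the charge-ratio variant (which makes it a log-parabola fit) is an illustrative choice.

```python
import math

def centroid_position(q):
    """Center-of-gravity of three consecutive strip charges
    q = (Q(i-1), Q(i), Q(i+1)), with Q(i) the largest."""
    return (q[2] - q[0]) / (q[0] + q[1] + q[2])

def charge_ratio_position(q):
    """Charge-ratio estimate assuming a Gaussian induced-charge
    distribution sampled at unit pitch: the peak of a parabola
    fitted to the log-charges, which is exact for a Gaussian."""
    l0, l1, l2 = (math.log(v) for v in q)
    return 0.5 * (l0 - l2) / (l0 - 2 * l1 + l2)
```

For a truly Gaussian charge profile the ratio-based estimate recovers the avalanche position exactly, while the plain centroid of three samples is biased toward the central strip; this is one way to see why the ratio method tolerates wider strips.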

  16. Channeling and stability of laser pulses in plasmas

    International Nuclear Information System (INIS)

    Sprangle, P.; Krall, J.; Esarey, E.

    1995-01-01

    A laser pulse propagating in a plasma is found to undergo a combination of hose and modulation instabilities. The coupled equations for the laser beam envelope and centroid are derived and solved for a laser pulse of finite length propagating through either a uniform plasma or preformed plasma density channel. The laser envelope equation describes the pulse self-focusing and optical guiding in plasmas and is used to analyze the self-modulation instability. The laser centroid equation describes the transverse motion of the laser pulse (hosing) in plasmas. Significant coupling between the centroid and envelope motion as well as harmonic generation in the envelope can occur. In addition, the transverse profile of the generated wake field is strongly affected by the laser hose instability. Methods to reduce the laser hose instability are demonstrated. copyright 1995 American Institute of Physics

  17. Normalization based K means Clustering Algorithm

    OpenAIRE

    Virmani, Deepali; Taneja, Shweta; Malhotra, Geetika

    2015-01-01

K-means is an effective clustering technique that separates similar data into groups based on initial cluster centroids. In this paper, a normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed algorithm applies normalization to the available data prior to clustering and calculates the initial centroids based on weights. Experimental results demonstrate the improvement of the proposed N-K means clustering algorithm over the existing...
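The two ingredients named in this abstract, normalization before clustering and non-random initial centroids, can be sketched as follows. This is an assumed illustration: the paper's exact weighting scheme is not given here, so the feature-sum ranking used for initialization is a stand-in, and min-max scaling is one common choice of normalization.

```python
import math

def min_max_normalize(data):
    """Scale each feature of a list-of-rows dataset to [0, 1]."""
    cols = list(zip(*data))
    lo = [min(c) for c in cols]
    span = [max(c) - m or 1.0 for c, m in zip(cols, lo)]
    return [[(v - m) / s for v, m, s in zip(row, lo, span)] for row in data]

def kmeans(data, k, iters=50):
    """Lloyd's k-means with deterministic initial centroids: points
    ranked by feature sum (a stand-in for weight-based seeding)."""
    ranked = sorted(data, key=sum)
    step = max(len(ranked) // k, 1)
    centroids = [ranked[i * step] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in data:
            j = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[j].append(p)
        new = [[sum(c) / len(cl) for c in zip(*cl)] if cl else centroids[j]
               for j, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters
```

Normalizing first keeps features with large ranges from dominating the Euclidean distances, and a deterministic, spread-out seeding avoids the poor local minima that random initialization can fall into.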

  18. 4-[(1E)-3-(2,6-Dichloro-3-fluoro-phen-yl)-3-oxoprop-1-en-1-yl]benzonitrile.

    Science.gov (United States)

    Praveen, Aletti S; Yathirajan, Hemmige S; Narayana, Badiadka; Gerber, Thomas; Hosten, Eric; Betz, Richard

    2012-05-01

In the title molecule, C16H8Cl2FNO, the benzene rings form a dihedral angle of 78.69 (8)°. The F atom is disordered over two positions in a 0.530 (3):0.470 (3) ratio. The crystal packing exhibits π–π interactions between the dichloro-substituted rings [centroid–centroid distance = 3.6671 (10) Å] and weak intermolecular C—H⋯F contacts.

  19. (2RS)-2-(2,4-Difluoro-phen-yl)-1-[(4-iodo-benz-yl)(meth-yl)amino]-3-(1H-1,2,4-tri-azol-1-yl)propan-2-ol.

    Science.gov (United States)

    Xiong, Hui-Ping; Gao, Shou-Hong; Li, Chun-Tong; Wu, Zhi-Jun

    2012-08-01

In the title compound (common name: iodiconazole), C19H19F2IN4O, there is an intramolecular O—H⋯N hydrogen bond and molecules are linked by weak interactions only, namely C—H⋯N, C—H⋯O and C—H⋯F hydrogen bonds, and π–π interactions between the triazole rings, with centroid–centroid distances of 3.725 (3) Å.

  20. DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA

    OpenAIRE

    Huang, G; Nix, AR; Armour, SMD

    2010-01-01

Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel and noise variance estimates are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...