Demonstration of biased membrane static figure mapping by optical beam subpixel centroid shift
Energy Technology Data Exchange (ETDEWEB)
Pinto, Fabrizio, E-mail: fpinto@jazanu.edu.sa [Laboratory for Quantum Vacuum Applications, Department of Physics, Faculty of Science, Jazan University, P.O. Box 114, Gizan 45142 (Saudi Arabia)
2016-06-10
The measurement of Casimir forces by means of condenser microphones has been shown to be quite promising since its early introduction almost half a century ago. However, unlike the remarkable progress achieved in characterizing the vibrating membrane in the dynamical case, the accurate determination of the membrane static figure under electrostatic bias remains a challenge. In this paper, we discuss our first data obtained by measuring the centroid shift of an optical beam with subpixel accuracy by means of a charge-coupled device (CCD) and by an extensive analysis of noise sources present in the experimental setup.
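The core measurement described in this record, locating a beam spot on a CCD to subpixel precision via its intensity centroid, can be sketched as follows. This is a minimal illustration with a synthetic Gaussian spot; the grid size, spot width, and positions are arbitrary choices, not values from the experiment:

```python
import numpy as np

def spot_centroid(img):
    """First-moment (center-of-mass) centroid of a 2-D intensity image,
    returned as (x, y) in pixel units."""
    y, x = np.indices(img.shape)
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

# Synthetic CCD frame: a Gaussian spot at a subpixel position (illustrative).
yy, xx = np.indices((64, 64))
x0, y0, sigma = 32.30, 31.70, 2.0
frame = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))

cx, cy = spot_centroid(frame)   # recovers (x0, y0) to well below one pixel
```

A shift of the membrane figure then appears as a frame-to-frame change in (cx, cy), and averaging over many frames reduces the noise-limited uncertainty.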
Bayesian centroid estimation for motif discovery.
Carvalho, Luis
2013-01-01
Biological sequences may contain patterns that signal important biomolecular functions; a classical example is regulation of gene expression by transcription factors that bind to specific patterns in genomic promoter regions. In motif discovery we are given a set of sequences that share a common motif and aim to identify not only the motif composition, but also the binding sites in each sequence of the set. We propose a new centroid estimator that arises from a refined and meaningful loss function for binding site inference. We discuss the main advantages of centroid estimation for motif discovery, including computational convenience, and how its principled derivation offers further insights about the posterior distribution of binding site configurations. We also illustrate, using simulated and real datasets, that the centroid estimator can differ from the traditional maximum a posteriori or maximum likelihood estimators.
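The disagreement between centroid and MAP estimators mentioned in this abstract can be seen in a tiny explicit posterior over binary binding-site configurations. The distribution below is invented purely for illustration:

```python
import numpy as np

# Toy posterior over 3-bit binding-site indicator vectors (illustrative values).
posterior = {(0, 0, 0): 0.4, (1, 1, 0): 0.3, (1, 0, 1): 0.3}

# MAP estimator: the single most probable configuration.
map_est = max(posterior, key=posterior.get)              # (0, 0, 0)

# Centroid estimator under Hamming loss: threshold posterior marginals at 1/2.
marginals = np.zeros(3)
for cfg, p in posterior.items():
    marginals += p * np.array(cfg)                       # [0.6, 0.3, 0.3]
centroid_est = tuple(int(m > 0.5) for m in marginals)    # (1, 0, 0)
```

Here the centroid estimate (1, 0, 0) has zero posterior probability yet minimizes the expected Hamming loss, while the MAP estimate ignores the 0.6 total mass on configurations with the first position bound; this is the kind of disagreement the paper studies with a loss function refined for binding-site inference.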
Generalized Centroid Estimators in Bioinformatics
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
2011-01-01
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit with commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
Estimating the Doppler centroid of SAR data
DEFF Research Database (Denmark)
Madsen, Søren Nørvang
1989-01-01
After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found...
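The correlation Doppler estimator (CDE) mentioned in this abstract admits a compact sketch: the centroid is read off the phase of the lag-one azimuth autocorrelation. The simulation below is illustrative (PRF and centroid values are arbitrary), and the estimate is inherently ambiguous modulo the PRF:

```python
import numpy as np

def cde_doppler_centroid(raw, prf):
    """Correlation Doppler estimator (CDE): Doppler centroid in Hz from the
    phase of the lag-one autocorrelation along azimuth. Unambiguous only
    within +/- PRF/2; the absolute ambiguity must be resolved separately."""
    r1 = np.sum(np.conj(raw[:-1]) * raw[1:])
    return prf * np.angle(r1) / (2.0 * np.pi)

# Illustrative check on a simulated azimuth line with a known centroid.
rng = np.random.default_rng(0)
prf, f_dc, n = 1000.0, 123.4, 4096
t = np.arange(n) / prf
noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
line = np.exp(2j * np.pi * f_dc * t) + noise

f_hat = cde_doppler_centroid(line, prf)   # close to 123.4 Hz
```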
Ordinal Regression Based Subpixel Shift Estimation for Video Super-Resolution
Directory of Open Access Journals (Sweden)
Petrovic Nemanja
2007-01-01
Full Text Available We present a supervised learning-based approach for subpixel motion estimation which is then used to perform video super-resolution. The novelty of this work is the formulation of the problem of subpixel motion estimation in a ranking framework. The ranking formulation is a variant of the classification and regression formulations, in which the ordering present in the class labels (namely, the shift between patches) is explicitly taken into account. Finally, we demonstrate the applicability of our approach by super-resolving synthetically generated images with global subpixel shifts and enhancing real video frames by accounting for both local integer and subpixel shifts.
Diffeomorphic Iterative Centroid Methods for Template Estimation on Large Datasets
Cury, Claire; Glaunès, Joan Alexis; Colliot, Olivier
2014-01-01
International audience; A common approach for the analysis of anatomical variability relies on the estimation of a template representative of the population. The Large Deformation Diffeomorphic Metric Mapping (LDDMM) is an attractive framework for that purpose. However, template estimation using LDDMM is computationally expensive, which is a limitation for the study of large datasets. This paper presents an iterative method which quickly provides a centroid of the population in the shape space. This centr...
Prediction of RNA secondary structure using generalized centroid estimators.
Hamada, Michiaki; Kiryu, Hisanori; Sato, Kengo; Mituyama, Toutai; Asai, Kiyoshi
2009-02-15
Recent studies have shown that methods for predicting the secondary structures of RNAs on the basis of posterior decoding of the base-pairing probabilities have an advantage in prediction accuracy over the conventionally utilized minimum free energy methods. However, there is room for improvement in the objective functions presented in previous studies, which are maximized in the posterior decoding with respect to the accuracy measures for secondary structures. We propose novel estimators which improve the accuracy of secondary structure prediction of RNAs. The proposed estimators maximize an objective function which is the weighted sum of the expected number of true positives and that of true negatives of the base pairs. The proposed estimators are also improved versions of the ones used in previous works, namely CONTRAfold for secondary structure prediction from a single RNA sequence and McCaskill-MEA for common secondary structure prediction from multiple alignments of RNA sequences. We clarify the relations between the proposed estimators and the estimators presented in previous works, and theoretically show that the previous estimators include additional unnecessary terms in the evaluation measures with respect to the accuracy. Furthermore, computational experiments confirm the theoretical analysis by indicating improvement in the empirical accuracy. The proposed estimators represent extensions of the centroid estimators proposed in Ding et al. and Carvalho and Lawrence, and are applicable to a wide variety of problems in bioinformatics. Supporting information and the CentroidFold software are available online at: http://www.ncrna.org/software/centroidfold/.
Improved measurements of RNA structure conservation with generalized centroid estimators
Directory of Open Access Journals (Sweden)
Yohei eOkada
2011-08-01
Full Text Available Identification of non-protein-coding RNAs (ncRNAs) in genomes is a crucial task not only for molecular cell biology but also for bioinformatics. Secondary structures of ncRNAs are employed as a key feature of ncRNA analysis since the biological functions of ncRNAs are deeply related to their secondary structures. Although the minimum free energy (MFE) structure of an RNA sequence is regarded as the most stable structure, MFE alone is not an appropriate measure for identifying ncRNAs since the free energy is heavily biased by the nucleotide composition. Therefore, instead of MFE itself, several alternative measures for identifying ncRNAs have been proposed, such as the structure conservation index (SCI) and the base pair distance (BPD), both of which employ MFE structures. However, these measurements are unfortunately not suitable for identifying ncRNAs in some cases, including genome-wide searches, and incur a high false discovery rate. In this study, we propose improved measurements based on SCI and BPD, applying generalized centroid estimators to incorporate robustness against low-quality multiple alignments. Our experiments show that our proposed methods achieve higher accuracy than the original SCI and BPD for not only human-curated structural alignments but also low-quality alignments produced by CLUSTAL W. Furthermore, the centroid-based SCI on CLUSTAL W alignments is more accurate than or comparable with that of the original SCI on structural alignments generated with RAF, a high-quality structural aligner, which on average requires twice the computational time. We conclude that our methods are more suitable than the original SCI and BPD for genome-wide alignments, which are of low quality from the point of view of secondary structures.
Huang, C.; Townshend, J.R.G.
2003-01-01
A stepwise regression tree (SRT) algorithm was developed for approximating complex nonlinear relationships. Based on the regression tree of Breiman et al. (BRT) and a stepwise linear regression (SLR) method, this algorithm represents an improvement over SLR in that it can approximate nonlinear relationships, and over BRT in that it gives more realistic predictions. The applicability of this method to estimating subpixel forest cover was demonstrated using three test data sets, on all of which it gave more accurate predictions than SLR and BRT. SRT also generated more compact trees and performed better than, or at least as well as, BRT at all 10 equal forest proportion intervals ranging from 0 to 100%. This method is appealing for estimating subpixel land cover over large areas.
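The idea of combining tree splits with linear models in the leaves, which distinguishes an SRT from a plain regression tree, can be sketched at its smallest scale: choose the single split that minimizes the summed squared error of a separate linear fit on each side. This numpy-only toy with invented data shows one split level only, not the published algorithm:

```python
import numpy as np

def fit_line(x, y):
    """Least-squares slope and intercept."""
    A = np.vstack([x, np.ones_like(x)]).T
    return np.linalg.lstsq(A, y, rcond=None)[0]

def best_split(x, y):
    """One stepwise-regression-tree step: the threshold minimizing the
    total SSE of independent linear fits on each side of the split."""
    best_sse, best_s = np.inf, None
    for s in np.unique(x)[1:-1]:
        sse = 0.0
        for mask in (x < s, x >= s):
            a, b = fit_line(x[mask], y[mask])
            sse += np.sum((y[mask] - (a * x[mask] + b))**2)
        if sse < best_sse:
            best_sse, best_s = sse, s
    return best_s

# Piecewise-linear toy response with a break at x = 5.
x = np.arange(11, dtype=float)
y = np.where(x < 5, 2 * x, 20 - x)
s = best_split(x, y)   # -> 5.0
```

A full SRT would recurse on each side and stop by a model-selection criterion; the single step above is only the core selection rule.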
Comparison of performance of some common Hartmann-Shack centroid estimation methods
Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.
2016-03-01
The accuracy of the estimation of optical aberrations by measuring the distorted wave front using a Hartmann-Shack wave front sensor (HSWS) is mainly dependent upon the measurement accuracy of the centroid of the focal spot. The most commonly used methods for centroid estimation, such as the brightest spot centroid, first moment centroid, weighted center of gravity, and intensity weighted center of gravity, are generally applied to the entire individual sub-apertures of the lenslet array. However, these centroid estimates are sensitive to the influence of reflections, scattered light, and noise, especially when the signal spot area is small compared to the whole sub-aperture area. In this paper, we compare the performance of the commonly used centroiding methods for estimating optical aberrations, with and without some pre-processing steps (thresholding, Gaussian smoothing, and adaptive windowing). As an example we use the aberrations of a human eye model. This is done using raw data collected from a custom-made ophthalmic aberrometer and a model eye to emulate myopic and hypermetropic defocus values up to 2 diopters. We show that any simple centroiding algorithm is sufficient for ophthalmic applications to estimate aberrations within the typical clinically acceptable limit of a quarter-diopter margin, when certain pre-processing steps to reduce the impact of external factors are used.
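Of the pre-processing steps compared in this record, thresholding is the simplest to illustrate: subtracting a fraction of the peak before the center-of-gravity calculation suppresses the background that otherwise drags the centroid toward the sub-aperture center. The synthetic spot below is illustrative (window size, background level, and threshold fraction are arbitrary choices):

```python
import numpy as np

def cog(win):
    """Plain first-moment center of gravity of a sub-aperture window."""
    y, x = np.indices(win.shape)
    t = win.sum()
    return (x * win).sum() / t, (y * win).sum() / t

def thresholded_cog(win, frac=0.2):
    """Center of gravity after clipping everything below frac * max to zero."""
    w = np.clip(win - frac * win.max(), 0.0, None)
    return cog(w)

# Synthetic sub-aperture: Gaussian spot at (20.5, 9.5) on a flat background.
yy, xx = np.indices((32, 32))
spot = np.exp(-((xx - 20.5)**2 + (yy - 9.5)**2) / (2 * 1.5**2)) + 0.1

plain = cog(spot)               # dragged toward the window centre (15.5, 15.5)
robust = thresholded_cog(spot)  # close to the true spot position
```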
A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain
Directory of Open Access Journals (Sweden)
Ibn-Elhaj E
2009-01-01
Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used a database freely available on the web.
Doppler Centroid Estimation for Airborne SAR Supported by POS and DEM
Directory of Open Access Journals (Sweden)
CHENG Chunquan
2015-05-01
Full Text Available It is difficult to estimate the Doppler frequency and modulation rate for airborne SAR using the traditional vector method, due to unstable flight and complex terrain. In this paper, the impacts of POS, DEM and their errors on airborne SAR Doppler parameters are analyzed qualitatively. An innovative vector method is then presented, based on the range-coplanarity equation, to estimate the Doppler centroid using the POS and DEM as auxiliary data. The effectiveness of the proposed method is validated and analyzed via simulation experiments. The theoretical analysis and experimental results show that the method can estimate the Doppler centroid with high accuracy even in cases of high relief, unstable flight, and large-squint SAR.
Performance evaluation of the spectral centroid downshift method for attenuation estimation.
Samimi, Kayvan; Varghese, Tomy
2015-05-01
Estimation of frequency-dependent ultrasonic attenuation is an important aspect of tissue characterization. Along with other acoustic parameters studied in quantitative ultrasound, the attenuation coefficient can be used to differentiate normal and pathological tissue. The spectral centroid downshift (CDS) method is one of the most common frequency-domain approaches applied to this problem. In this study, a statistical analysis of this method's performance was carried out based on a parametric model of the signal power spectrum in the presence of electronic noise. The parametric model used for the power spectrum of received RF data assumes a Gaussian spectral profile for the transmit pulse, and incorporates effects of attenuation, windowing, and electronic noise. Spectral moments were calculated and used to estimate second-order centroid statistics. A theoretical expression for the variance of a maximum likelihood estimator of attenuation coefficient was derived in terms of the centroid statistics and other model parameters, such as transmit pulse center frequency and bandwidth, RF data window length, SNR, and number of regression points. Theoretically predicted estimation variances were compared with experimentally estimated variances on RF data sets from both computer-simulated and physical tissue-mimicking phantoms. Scan parameter ranges for this study were electronic SNR from 10 to 70 dB, transmit pulse standard deviation from 0.5 to 4.1 MHz, transmit pulse center frequency from 2 to 8 MHz, and data window length from 3 to 17 mm. Acceptable agreement was observed between theoretical predictions and experimentally estimated values, with differences smaller than 0.05 dB/cm/MHz across the parameter ranges investigated. This model helps predict the best attenuation estimation variance achievable with the CDS method, in terms of these scan parameters.
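The CDS principle analyzed in this record reduces to a few lines for the idealized noiseless Gaussian-spectrum case the model builds on: attenuation lowers the spectral centroid linearly with depth, so the attenuation coefficient follows from a regression of centroid against depth. All numbers below are illustrative, not the paper's scan parameters:

```python
import numpy as np

f = np.linspace(0.0, 15.0, 3000)        # frequency axis, MHz
f0, sigma = 5.0, 1.0                    # transmit pulse centre and std, MHz
beta = 0.06                             # true attenuation, Np/cm/MHz (assumed)
depths = np.linspace(1.0, 4.0, 7)       # cm

centroids = []
for z in depths:
    spec = np.exp(-(f - f0)**2 / (2 * sigma**2)) * np.exp(-4 * beta * f * z)
    centroids.append(np.sum(f * spec) / np.sum(spec))   # spectral centroid

# With the attenuation modeled as exp(-4*beta*f*z), completing the square
# gives f_c(z) = f0 - 4*beta*sigma^2*z, so the regression slope
# returns the attenuation coefficient.
slope = np.polyfit(depths, np.array(centroids), 1)[0]
beta_hat = -slope / (4 * sigma**2)       # close to 0.06 Np/cm/MHz
```

The paper's contribution is the variance analysis of exactly this estimate once windowing and electronic noise are added to the spectrum model.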
Target Centroid Position Estimation of Phase-Path Volume Kalman Filtering
Directory of Open Access Journals (Sweden)
Fengjun Hu
2016-01-01
Full Text Available To address the problem of easily losing the target when obstacles appear during intelligent robot target tracking, this paper proposes a target tracking algorithm that integrates a reduced-dimension optimal Kalman filter, based on a phase-path volume integral, with the Camshift algorithm. After analyzing the defects of the Camshift algorithm and comparing its performance with the SIFT and Mean Shift algorithms, Kalman filtering is used for fusion optimization to address those defects. To curb the increased amount of calculation in the integrated algorithm, the dimension is reduced by replacing the Gaussian integral in the Kalman algorithm with the phase-path volume integral, which reduces the number of sampling points in the filtering process without affecting the precision of the original algorithm. Finally, the target centroid position from each Camshift iteration is used as the observation value of the improved Kalman filter to correct the predicted value; this yields an optimal estimate of the target centroid position and maintains the track, so that the robot can understand the environmental scene and react correctly and in time to changes. Experiments show that the improved algorithm performs well in target tracking with obstructions and reduces the computational complexity of the algorithm through dimension reduction.
An improved Q estimation approach: the weighted centroid frequency shift method
Li, Jingnan; Wang, Shangxu; Yang, Dengfeng; Dong, Chunhui; Tao, Yonghui; Zhou, Yatao
2016-06-01
Seismic wave propagation in subsurface media suffers from absorption, which can be quantified by the quality factor Q. Accurate estimation of the Q factor is of great importance for the resolution enhancement of seismic data, precise imaging and interpretation, and reservoir prediction and characterization. The centroid frequency shift method (CFS) is currently one of the most commonly used Q estimation methods. However, for seismic data that contain noise, the accuracy and stability of Q extracted using CFS depend on the choice of frequency band. In order to reduce the influence of the frequency band choice and obtain Q with greater precision and robustness, we present an improved CFS Q measurement approach, the weighted CFS method (WCFS), which incorporates a Gaussian weighting coefficient into the calculation procedure of the conventional CFS. The basic idea is to enhance the proportion of advantageous frequencies in the amplitude spectrum and reduce the weight of disadvantageous frequencies. In this novel method, we first construct a Gaussian function using the centroid frequency and variance of the reference wavelet. Then we employ it as the weighting coefficient for the amplitude spectrum of the original signal. Finally, the conventional CFS is applied to the weighted amplitude spectrum to extract the Q factor. Numerical tests on noise-free synthetic data demonstrate that the WCFS is feasible and efficient, and produces more accurate results than the conventional CFS. Tests on noisy synthetic data indicate that the new method has better anti-noise capability than the CFS. The application to field vertical seismic profile (VSP) data further demonstrates its validity.
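As a baseline for the weighting idea, the conventional CFS step that this method builds on fits in a few lines for a Gaussian reference spectrum, via the relation f_c(t) = f_0 - pi*t*sigma^2/Q. The spectra are synthetic and every numerical value is illustrative:

```python
import numpy as np

f = np.linspace(0.0, 120.0, 4000)       # frequency axis, Hz
f0, var = 40.0, 10.0**2                 # reference centroid and variance
Q_true, t = 80.0, 0.5                   # true quality factor, traveltime (s)

reference = np.exp(-(f - f0)**2 / (2 * var))
attenuated = reference * np.exp(-np.pi * f * t / Q_true)

f_c = np.sum(f * attenuated) / np.sum(attenuated)   # centroid after absorption
Q_est = np.pi * t * var / (f0 - f_c)                # close to 80
```

The WCFS of the paper differs in multiplying the amplitude spectrum by a Gaussian weight built from the reference wavelet's centroid and variance before this step, which suppresses the noise-dominated band edges.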
Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie
2016-10-01
Water bodies are fundamental elements of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging, because urban water bodies are mainly small in size and spectral confusion is widespread between water and the complex features of the urban environment. The water index (WI) is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed in analyzing urban environments at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic land-water mixed pixel extraction technique based on a water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive iterative optimal neighboring land pixel, respectively; (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatter plot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then used as the starting point to select land-water pixels, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the mixed land-water pixels for water fraction estimation at the subpixel level. With the assumption that the endmember signature of a target pixel should be more similar to those of adjacent pixels due to spatial dependence, the endmembers of water and land are determined
Sub-pixel estimation of tree cover and bare surface densities using regression tree analysis
Directory of Open Access Journals (Sweden)
Carlos Augusto Zangrando Toneli
2011-09-01
Full Text Available Sub-pixel analysis is capable of generating continuous fields that represent the spatial variability of certain thematic classes. The aim of this work was to develop numerical models to represent the variability of tree cover and bare surfaces within the study area. This research was conducted in the riparian buffer within a watershed of the São Francisco River in the north of Minas Gerais, Brazil. IKONOS and Landsat TM imagery were used with the GUIDE algorithm to construct the models. The results were two index images derived with regression trees for the entire study area, one representing tree cover and the other representing bare surface. The use of non-parametric and non-linear regression tree models produced satisfactory results for characterizing wetland, deciduous, and savanna patterns of forest formation.
International Nuclear Information System (INIS)
Swift, G.
1990-01-01
This paper presents an algorithm for finding peaks in data spectra. It is based on calculating a moving centroid across the spectrum and picking off the points between which the calculated centroid crosses the channel number. Interpolation can then yield a more precise peak location. This algorithm can be implemented very efficiently, requiring about one addition, subtraction, multiplication, and division operation per data point. With integer data and a centroid window equal to a power of two (so that the division can be done with shifts), the algorithm is particularly suited to efficient machine-language implementation. With suitable adjustments (involving only a little overhead except at suspected peaks), it is possible to minimize either false peak locations or missed good peaks. Extending the method to more dimensions is straightforward, although interpolating is more difficult. The algorithm has been used on a variety of nuclear data spectra with great success.
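A sketch of the described peak finder, with the moving centroid, the crossing test, and the interpolation step; the window width and crossing tolerance below are illustrative choices, not values from the paper:

```python
import numpy as np

def moving_centroid_peaks(spec, w=8, tol=1e-6):
    """Find peaks where the moving-window centroid crosses the window
    centre (positive to negative), refined by linear interpolation."""
    peaks, prev = [], None
    for i in range(len(spec) - w):
        win = spec[i:i + w]
        c = np.dot(np.arange(i, i + w), win) / win.sum()  # local centroid
        d = c - (i + (w - 1) / 2.0)       # centroid minus window centre
        if prev is not None and prev > tol and d <= 0.0:
            frac = prev / (prev - d)      # linear interpolation of the crossing
            peaks.append(i - 1 + (w - 1) / 2.0 + frac)
        prev = d
    return peaks

# Synthetic spectrum: one Gaussian line at channel 20.3 on a flat baseline.
ch = np.arange(64, dtype=float)
spec = np.exp(-(ch - 20.3)**2 / (2 * 2.0**2)) + 0.01
found = moving_centroid_peaks(spec)   # one peak, near channel 20.3
```

The small tolerance guards against spurious crossings from floating-point wobble in flat baseline regions, playing the role of the paper's "suitable adjustments" at suspected peaks.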
Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.
2007-09-01
When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
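The pixel-locking bias described in this record is easy to reproduce with three-point quadratic interpolation, and one simple modification that removes it for a Gaussian-shaped correlation peak is to fit the parabola to the logarithm of the samples. The peak shape and the 0.3-pixel shift below are illustrative:

```python
import numpy as np

def quad_interp(cm, c0, cp):
    """Three-point parabolic estimate of the peak offset, in (-0.5, 0.5],
    from the samples at -1, 0, +1 pixels around the correlation maximum."""
    return 0.5 * (cm - cp) / (cm - 2.0 * c0 + cp)

# Gaussian correlation peak with a true subpixel shift of +0.3 pixels.
shift, s = 0.3, 1.0
cm, c0, cp = (np.exp(-(x - shift)**2 / (2 * s**2)) for x in (-1.0, 0.0, 1.0))

biased = quad_interp(cm, c0, cp)               # about 0.252: pulled toward 0
unbiased = quad_interp(*np.log([cm, c0, cp]))  # exactly 0.3 for a Gaussian
```

The raw-value estimate lands closer to the nearest pixel center than the true shift, which is precisely the bias the paper measures; log-domain fitting is exact only when the peak is Gaussian, so it is one candidate modification rather than the paper's specific remedy.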
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until a simulated image is found that matches the acquired image. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally effective procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating for lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method with a digital micromirror device, used to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials within the temperature range of 1000 °C to 2400 °C are presented.
Cui, Qian; Shi, Jiancheng; Xu, Yuanliu
2011-12-01
Water is a basic need of human society and a determining factor in the stability of ecosystems. There are many lakes on the Tibetan Plateau, which can cause floods and mudslides when the water expands sharply. At present, water area is usually extracted from TM or SPOT data because of their high spatial resolution; however, their temporal resolution is insufficient. MODIS data have high temporal resolution and broad coverage, so they are a valuable resource for detecting changes in water area. Because of their low spatial resolution, however, mixed pixels are common. In this paper, four spectral libraries are built using the MOD09A1 product; based on these, water bodies are extracted at the subpixel level utilizing Multiple Endmember Spectral Mixture Analysis (MESMA) on MODIS daily reflectance data (MOD09GA). The unmixing result is compared with contemporaneous TM data, and it is shown that this method has high accuracy.
Directory of Open Access Journals (Sweden)
Uttam Kumar
2017-10-01
Full Text Available Land cover (LC refers to the physical and biological cover present over the Earth’s surface in terms of the natural environment such as vegetation, water, bare soil, etc. Most LC features occur at finer spatial scales compared to the resolution of primary remote sensing satellites. Therefore, observed data are a mixture of spectral signatures of two or more LC features resulting in mixed pixels. One solution to the mixed pixel problem is the use of subpixel learning algorithms to disintegrate the pixel spectrum into its constituent spectra. Despite the popularity and existing research conducted on the topic, the most appropriate approach is still under debate. As an attempt to address this question, we compared the performance of several subpixel learning algorithms based on least squares, sparse regression, signal–subspace and geometrical methods. Analysis of the results obtained through computer-simulated and Landsat data indicated that fully constrained least squares (FCLS outperformed the other techniques. Further, FCLS was used to unmix global Web-Enabled Landsat Data to obtain abundances of substrate (S, vegetation (V and dark object (D classes. Due to the sheer nature of data and computational needs, we leveraged the NASA Earth Exchange (NEX high-performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into four classes, namely forest, farmland, water and urban areas (in conjunction with nighttime lights data over California, USA using a random forest classifier. Validation of these LC maps with the National Land Cover Database 2011 products and North American Forest Dynamics static forest map shows a 6% improvement in unmixing-based classification relative to per-pixel classification. As such, abundance maps continue to offer a useful alternative to high-spatial-resolution classified maps for forest inventory analysis, multi
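The FCLS step this comparison singles out, nonnegative abundances constrained to sum to one, is commonly implemented by augmenting the endmember matrix with a heavily weighted row of ones and solving an ordinary NNLS problem. A minimal sketch with invented endmembers:

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, y, weight=1e3):
    """Fully constrained least squares: solve min ||E a - y|| subject to
    a >= 0 and sum(a) = 1, via the augmented-NNLS formulation."""
    bands, n_end = E.shape
    A = np.vstack([E, weight * np.ones((1, n_end))])
    b = np.append(y, weight)
    a, _ = nnls(A, b)
    return a

# Synthetic check: a mixed pixel built from known abundances is recovered.
rng = np.random.default_rng(0)
E = rng.random((6, 3))                 # 6 bands, 3 endmember spectra (invented)
true_a = np.array([0.2, 0.5, 0.3])
pixel = E @ true_a
a_hat = fcls(E, pixel)                 # approximately [0.2, 0.5, 0.3]
```

The weight trades off how strictly the sum-to-one constraint is enforced against the spectral fit; large values approximate the hard constraint that defines FCLS.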
Noise in position measurement by centroid calculation
International Nuclear Information System (INIS)
Volkov, P.
1996-01-01
The position of a particle trajectory in a gaseous (or semiconductor) detector can be measured by calculating the centroid of the induced charge on the cathode plane. The charge amplifiers attached to each cathode strip introduce noise which is added to the signal. This noise broadens the position resolution line width. Our article gives an analytical tool to estimate the resolution broadening due to the noise per strip and the number of strips involved in the centroid calculation. It is shown that the position error increases faster than the square root of the number of strips involved. We also consider the consequence of added interstrip capacitors, intended to diminish the differential nonlinearity. It is shown that the position error increases more slowly than linearly with the interstrip capacitances, due to the cancellation of correlated noise. The estimate we give can be applied to position-broadening calculations other than centroid finding. (orig.)
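The faster-than-√N growth of the centroid error can be illustrated with a small Monte Carlo sketch. This is not the paper's analytical tool; it assumes a Gaussian cathode charge distribution and independent white amplifier noise per strip, both illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_rms_error(n_strips, sigma_noise=0.01, n_trials=20000):
    """RMS error of the charge centroid when white noise of rms sigma_noise
    is added independently to each of n_strips strips."""
    strips = np.arange(n_strips) - (n_strips - 1) / 2.0  # strip positions
    q = np.exp(-0.5 * strips ** 2)                       # Gaussian induced charge
    q /= q.sum()                                         # total signal charge = 1
    noise = rng.normal(0.0, sigma_noise, size=(n_trials, n_strips))
    qn = q + noise
    x_hat = (qn * strips).sum(axis=1) / qn.sum(axis=1)   # centroid per trial
    return x_hat.std()

e5 = centroid_rms_error(5)
e15 = centroid_rms_error(15)
ratio = e15 / e5  # well above sqrt(15/5) ~ 1.73: error grows roughly as N^(3/2)
```

Tripling the number of strips roughly quintuples the RMS error here, because outer strips contribute noise weighted by their large lever arm while adding almost no signal.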
Ambiguity Of Doppler Centroid In Synthetic-Aperture Radar
Chang, Chi-Yung; Curlander, John C.
1991-01-01
Paper discusses performances of two algorithms for resolution of ambiguity in estimated Doppler centroid frequency of echoes in synthetic-aperture radar. One based on range-cross-correlation technique, other based on multiple-pulse-repetition-frequency technique.
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limitation of spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the linear subpixel features of the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined by using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
Spatial scaling of net primary productivity using subpixel landcover information
Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.
2008-10-01
Gridding the land surface into coarse homogeneous pixels may cause important biases in ecosystem model estimations of carbon budget components at local, regional and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is made on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP estimates obtained from calculations made at coarse spatial resolutions are multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to carbon-hydrology coupled model (BEPS-TerrainLab) estimates made at a 1-km resolution over a watershed (the Baohe River Basin) located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as its spatial variability.
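The multiplicative correction described above can be sketched as follows. The per-cover NPP values and fractions are invented for illustration; in the paper this role is played by functions fitted against BEPS-TerrainLab output, not a lookup table.

```python
# Hypothetical per-cover NPP responses (gC m^-2 yr^-1); illustrative only.
npp_by_cover = {"forest": 800.0, "cropland": 450.0, "water": 0.0}

def corrected_npp(coarse_npp, dominant_cover, subpixel_fractions):
    """Rescale a coarse-pixel NPP estimate, computed as if the pixel were
    homogeneous 'dominant_cover', by its subpixel land cover composition."""
    expected = sum(f * npp_by_cover[c] for c, f in subpixel_fractions.items())
    correction = expected / npp_by_cover[dominant_cover]
    return coarse_npp * correction

# A 1-km pixel modelled as pure forest but actually a 60/30/10 mixture:
npp = corrected_npp(820.0, "forest",
                    {"forest": 0.6, "cropland": 0.3, "water": 0.1})
```

The coarse estimate is scaled down here because part of the pixel is less productive cropland and unproductive water, which is exactly the bias the subpixel correction targets.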
Networks and centroid metrics for understanding football
African Journals Online (AJOL)
Gonçalo Dias
games. However, it seems that the centroid metric, supported only by the position of players in the field ...... the strategy adopted by the coach (Gama et al., 2014). ... centroid distance as measures of team's tactical performance in youth football.
The Centroid of a Lie Triple Algebra
Directory of Open Access Journals (Sweden)
Xiaohong Liu
2013-01-01
Full Text Available General results on the centroids of Lie triple algebras are developed. Centroids of the tensor product of a Lie triple algebra and a unitary commutative associative algebra are studied. Furthermore, the centroid of the tensor product of a simple Lie triple algebra and a polynomial ring is completely determined.
International Nuclear Information System (INIS)
Fiuza, K.; Rizzato, F.B.; Pakter, R.
2006-01-01
In this paper we analyze the combined envelope-centroid dynamics of magnetically focused high-intensity charged beams surrounded by conducting walls. Similar to the case where conducting walls are absent, it is shown that the envelope and centroid dynamics decouple from each other. Mismatched envelopes still decay into equilibrium with simultaneous emittance growth, but the centroid keeps oscillating with no appreciable energy loss. Some estimates are performed to analytically obtain characteristics of halo formation seen in the full simulations.
2D Sub-Pixel Disparity Measurement Using QPEC / Medicis
Directory of Open Access Journals (Sweden)
M. Cournet
2016-06-01
Full Text Available In the frame of its earth observation missions, CNES created a library called QPEC, and one of its launchers, called Medicis. QPEC / Medicis is a sub-pixel two-dimensional stereo matching algorithm that works on an image pair. This tool is a block matching algorithm, which means that it is based on a local method. Moreover, it does not regularize the results found. It proposes several matching costs, such as the Zero mean Normalised Cross-Correlation or statistical measures (the Mutual Information being one of them), and different match validation flags. QPEC / Medicis is able to compute a two-dimensional dense disparity map with sub-pixel precision. Hence, it is more versatile than disparity estimation methods found in the computer vision literature, which often assume an epipolar geometry. CNES uses Medicis, among other applications, during the in-orbit image quality commissioning of earth observation satellites. For instance, the Pléiades-HR 1A & 1B and the Sentinel-2 geometric calibrations are based on this block matching algorithm. Over the years, it has become a common tool in ground segments for in-flight monitoring purposes. For these two kinds of applications, the two-dimensional search and the local sub-pixel measure without regularization can be essential. This tool is also used to generate digital elevation models automatically, a purpose for which it was not initially dedicated. This paper deals with the QPEC / Medicis algorithm. It also presents some of its CNES applications (in-orbit commissioning, in-flight monitoring or digital elevation model generation). Medicis software is also distributed outside the CNES. This paper finally describes some of these external applications using Medicis, such as ground displacement measurement, or intra-oral scanning in the dental domain.
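QPEC / Medicis itself is CNES software; the following is a generic, hypothetical sketch of the core idea only: block matching with a ZNCC matching cost and a 3-point parabolic sub-pixel refinement of the correlation peak, reduced to one dimension for brevity.

```python
import numpy as np

def zncc(a, b):
    """Zero mean Normalised Cross-Correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def match_1d(template, line, search):
    """Integer ZNCC search along a line, refined to sub-pixel precision
    by fitting a parabola through the correlation peak and neighbours."""
    w = len(template)
    scores = np.array([zncc(template, line[s:s + w]) for s in range(search)])
    k = int(scores.argmax())
    if 0 < k < search - 1:  # 3-point parabolic interpolation around the peak
        c_m, c_0, c_p = scores[k - 1], scores[k], scores[k + 1]
        k += 0.5 * (c_m - c_p) / (c_m - 2.0 * c_0 + c_p)
    return k

# Template cut from a smooth synthetic signal; the true disparity is 40.
x = np.linspace(0.0, 4.0 * np.pi, 200)
signal = np.sin(x) + 0.3 * np.sin(2.7 * x)
template = signal[40:60]
disparity = match_1d(template, signal, search=100)
```

The real tool performs the same search in two dimensions with several selectable costs and validation flags, but the local, unregularized character shown here is the point the abstract emphasizes.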
Major shell centroids in the symplectic collective model
International Nuclear Information System (INIS)
Draayer, J.P.; Rosensteel, G.; Tulane Univ., New Orleans, LA
1983-01-01
Analytic expressions are given for the major shell centroids of the collective potential V(β, γ) and the shape observable β² in the Sp(3,R) symplectic model. The tools of statistical spectroscopy are shown to be useful, firstly, in translating a requirement that the underlying shell structure be preserved into constraints on the parameters of the collective potential and, secondly, in giving a reasonable estimate for a truncation of the infinite-dimensional symplectic model space from experimental B(E2) transition strengths. Results based on the centroid information are shown to compare favorably with results from exact calculations in the case of ²⁰Ne. (orig.)
Subpixel level mapping of remotely sensed image using colorimetry
Directory of Open Access Journals (Sweden)
M. Suresh
2018-04-01
Full Text Available The problem of extracting the proportions of classes present within a pixel has long challenged researchers. Numerous methodologies have been developed, yet saturation is far ahead: methods accounting for these mixed classes are not perfect, and they never will be until one can speak of a one-to-one correspondence between each pixel and ground data, which is practically impossible. This paper takes a step towards a new method for finding mixed class proportions in a pixel on the basis of the mixing property of colors as defined by colorimetry. The methodology involves locating the class color of a mixed pixel on the chromaticity diagram and then using contextual information, mainly the location of neighboring pixels on the chromaticity diagram, to estimate the proportion of classes in the mixed pixel. The resampling method is also more accurate when accounting for sharp and exact boundaries: using contextual information, one can generate a resampled image containing only the colors that really exist. The process simply computes the class fractions and then the number of pixels of each color, multiplying each fraction by the total number of subpixels into which a pixel is split, based on contextual information. Keywords: Subpixel classification, Remote sensing imagery, Colorimetric color space, Sampling and subpixel mapping
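The lever-rule reading of the chromaticity argument can be sketched as follows. This assumes simple linear mixing along the line joining two class chromaticities (true colorimetric mixing also weights by luminance), and the class coordinates are purely illustrative.

```python
import numpy as np

def mixture_fractions(mixed, class_a, class_b):
    """Proportions of two classes in a mixed pixel, assuming its chromaticity
    lies on the straight line joining the two class chromaticities."""
    m, a, b = (np.asarray(p, dtype=float) for p in (mixed, class_a, class_b))
    t = np.dot(m - b, a - b) / np.dot(a - b, a - b)  # project onto the a-b line
    t = float(np.clip(t, 0.0, 1.0))
    return t, 1.0 - t  # (fraction of class_a, fraction of class_b)

# Illustrative chromaticity (x, y) coordinates for two classes:
veg, soil = (0.30, 0.60), (0.45, 0.40)
mixed = (0.36, 0.52)  # lies 60% of the way from soil towards veg
f_veg, f_soil = mixture_fractions(mixed, veg, soil)
```

In the paper's method the contextual information (chromaticities of neighboring pixels) selects which class endpoints to use; the fraction computation itself is this geometric interpolation.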
Centroid motion in periodically focused beams
International Nuclear Information System (INIS)
Moraes, J.S.; Pakter, R.; Rizzato, F.B.
2005-01-01
The role of the centroid dynamics in the transport of periodically focused particle beams is investigated. A Kapchinskij-Vladimirskij equilibrium distribution for an off-axis beam is derived. It is shown that centroid and envelope dynamics are uncoupled and that unstable regions for the centroid dynamics overlap with previously stable regions for the envelope dynamics alone. Multiparticle simulations validate the findings. The effects of a conducting pipe encapsulating the beam are also investigated. It is shown that the charge induced at the pipe may generate chaotic orbits which can be detrimental to the adequate functioning of the transport mechanism.
Chandra ACIS Sub-pixel Resolution
Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.
2011-05-01
We investigate how to achieve the best possible ACIS spatial resolution by binning in ACIS sub-pixels and applying an event repositioning algorithm after removing pixel randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point source sizes and (2) detecting faint point sources. The size of a bright (but not piled-up), on-axis point source can be reduced by about 20-30%. With the improved resolution, we detect 20% more faint sources when they are embedded in extended, diffuse emission in a crowded field. We further discuss the false source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure by an aliasing effect for dithered observations and does not worsen the positional accuracy.
Centroids of effective interactions from measured single-particle energies: An application
International Nuclear Information System (INIS)
Cole, B.J.
1990-01-01
Centroids of the effective nucleon-nucleon interaction for the mass region A=28--64 are extracted directly from experimental single-particle spectra, by comparing single-particle energies relative to different cores. Uncertainties in the centroids are estimated at approximately 100 keV, except in cases of exceptional fragmentation of the single-particle strength. The use of a large number of inert cores allows the dependence of the interaction on mass or model space to be investigated. The method permits accurate empirical modifications to be made to realistic interactions calculated from bare nucleon-nucleon potentials, which are known to possess defective centroids in many cases. In addition, the centroids can be used as input to the more sophisticated fitting procedures that are employed to produce matrix elements of the effective interaction.
A Hybridized Centroid Technique for 3D Molodensky-Badekas ...
African Journals Online (AJOL)
Richannan
the same point in a second reference frame (Ghilani, 2010). ... widely used approach by most researchers to compute values of centroid coordinates in the ... choice of centroid method on the Veis model has been investigated by Ziggah et al.
Analysis of the position resolution in centroid measurements in MWPC
International Nuclear Information System (INIS)
Gatti, E.; Longoni, A.
1981-01-01
Resolution limits in avalanche localization along the anode wires of an MWPC with cathodes connected by resistors and equally spaced amplifiers are evaluated. A simple weighted-centroid method and a highly linear method based on a linear centroid finding filter are considered. The contributions to the variance of the estimator of the avalanche position, due to the series noise of the amplifiers and to the thermal noise of the resistive line, are separately calculated and compared. A comparison is made with the resolution of the MWPC with isolated cathodes. The calculations are performed with a distributed model of the diffusive line formed by the cathodes and the resistors. A comparison is also made with the results obtained with a simple lumped model of the diffusive line. A number of graphs useful in determining the best parameters of an MWPC, with a specified position and time resolution, are given. It has been found that, for short resolution times, an MWPC with cathodes connected by resistors presents better resolution (lower variance of the estimator of the avalanche position) than an MWPC with isolated cathodes. Conversely, for long resolution times, the variance of the estimator of the avalanche position is lower in an MWPC with isolated cathodes. (orig.)
International Nuclear Information System (INIS)
Roupioz, L; Nerry, F; Jia, L; Menenti, M
2014-01-01
The Qinghai-Tibetan Plateau is characterised by a very strong relief which affects albedo retrieval from satellite data. The objective of this study is to highlight the effects of sub-pixel topography and to account for those effects when retrieving land surface albedo from geostationary satellite FengYun-2D (FY-2D) data with 1.25 km spatial resolution, using the high spatial resolution (30 m) data of the Digital Elevation Model (DEM) from ASTER. The methodology integrates the effects of sub-pixel topography on the estimation of the total irradiance received at the surface, allowing the computation of the topographically corrected surface reflectance. Furthermore, surface albedo is estimated by applying the parametric BRDF (Bidirectional Reflectance Distribution Function) model called RPV (Rahman-Pinty-Verstraete) to the terrain-corrected surface reflectance. The results, evaluated against ground measurements collected over several experimental sites on the Qinghai-Tibetan Plateau, document the advantage of integrating the sub-pixel topography effects in the land surface reflectance at 1 km resolution to estimate the land surface albedo. The results obtained after using sub-pixel topographic correction are compared with the ones obtained after using pixel-level topographic correction. The preliminary results imply that, in highly rugged terrain, the sub-pixel topography correction method gives more accurate results. The pixel-level correction tends to overestimate surface albedo.
Subpixel edge localization with reduced uncertainty by violating the Nyquist criterion
Heidingsfelder, Philipp; Gao, Jun; Wang, Kun; Ott, Peter
2014-12-01
In this contribution, the extent to which the Nyquist criterion can be violated in optical imaging systems with a digital sensor, e.g., a digital microscope, is investigated. In detail, we analyze the subpixel uncertainty of the detected position of a step edge, the edge of a stripe with a varying width, and that of a periodic rectangular pattern for varying pixel pitches of the sensor, thus also in aliased conditions. The analysis includes the investigation of different algorithms of edge localization based on direct fitting or based on the derivative of the edge profile, such as the common centroid method. In addition to the systematic error of these algorithms, the influence of the photon noise (PN) is included in the investigation. A simplified closed form solution for the uncertainty of the edge position caused by the PN is derived. The presented results show that, in the vast majority of cases, the pixel pitch can exceed the Nyquist sampling distance by about 50% without an increase of the uncertainty of edge localization. This allows one to increase the field-of-view without increasing the resolution of the sensor and to decrease the size of the setup by reducing the magnification. Experimental results confirm the simulation results.
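One of the derivative-based localizers discussed above (the common centroid method) can be sketched simply: the edge position is taken as the centroid of the absolute derivative of the intensity profile. The tanh edge model and its parameters below are illustrative, not those of the paper.

```python
import numpy as np

def edge_position(profile):
    """Sub-pixel edge location: centroid of the absolute finite-difference
    derivative; each difference sample sits midway between two pixels."""
    d = np.abs(np.diff(profile))
    i = np.arange(d.size) + 0.5
    return (i * d).sum() / d.sum()

# Blurred step edge sampled at integer pixels; the true edge is at 10.3
x = np.arange(21)
profile = 0.5 * (1.0 + np.tanh((x - 10.3) / 1.5))
pos = edge_position(profile)  # close to 10.3
```

Undersampling the profile (increasing the pixel pitch) coarsens the derivative estimate, which is the regime whose effect on this estimator the paper quantifies.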
Transverse centroid oscillations in solenoidially focused beam transport lattices
International Nuclear Information System (INIS)
Lund, Steven M.; Wootton, Christopher J.; Lee, Edward P.
2009-01-01
Transverse centroid oscillations are analyzed for a beam in a solenoid transport lattice. Linear equations of motion are derived that describe small-amplitude centroid oscillations induced by displacement and rotational misalignments of the focusing solenoids in the transport lattice, dipole steering elements, and initial centroid offset errors. These equations are analyzed in a local rotating Larmor frame to derive complex-variable 'alignment functions' and 'bending functions' that efficiently describe the characteristics of the centroid oscillations induced by both mechanical misalignments of the solenoids and dipole steering elements. The alignment and bending functions depend only on the properties of the ideal lattice in the absence of errors and steering, and have associated expansion amplitudes set by the misalignments and steering fields, respectively. Applications of this formulation are presented for statistical analysis of centroid oscillations, calculation of actual lattice misalignments from centroid measurements, and optimal beam steering.
Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations
International Nuclear Information System (INIS)
Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim
2011-01-01
A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.
Centroid finding method for position-sensitive detectors
International Nuclear Information System (INIS)
Radeka, V.; Boie, R.A.
1979-10-01
A new centroid finding method for all detectors where the signal charge is collected or induced on strips or wires, or on subdivided resistive electrodes, is presented. The centroid of charge is determined by convolution of the sequentially switched outputs from these subdivisions or from the strips with a linear centroid finding filter. The position line width is inversely proportional to N^(3/2), where N is the number of subdivisions.
Centroid finding method for position-sensitive detectors
International Nuclear Information System (INIS)
Radeka, V.; Boie, R.A.
1980-01-01
A new centroid finding method for all detectors where the signal charge is collected or induced on strips or wires, or on subdivided resistive electrodes, is presented. The centroid of charge is determined by convolution of the sequentially switched outputs from these subdivisions or from the strips with a linear centroid finding filter. The position line width is inversely proportional to N^(3/2), where N is the number of subdivisions. (orig.)
FINGERPRINT MATCHING BASED ON PORE CENTROIDS
Directory of Open Access Journals (Sweden)
S. Malathi
2011-05-01
Full Text Available In recent years there has been exponential growth in the use of biometrics for user authentication applications. Automated Fingerprint Identification Systems have become a popular tool in many security and law enforcement applications. Most of these systems rely on minutiae (ridge ending and bifurcation) features. With the advancement in sensor technology, high resolution fingerprint images (1000 dpi) provide micro-level features (pores) that have proven to be useful features for identification. In this paper, we propose a new strategy for fingerprint matching based on pores by reliably extracting the pore features. The extraction of pores is done by the Marker Controlled Watershed segmentation method, and the centroids of each pore are considered as feature vectors for matching of two fingerprint images. Experimental results show that the proposed method has better performance, with lower false rates and higher accuracy.
Star point centroid algorithm based on background forecast
Wang, Jin; Zhao, Rujin; Zhu, Nan
2014-09-01
The calculation of the star point centroid is a key step in improving star tracker measurement error. A star map captured by an APS detector includes several noise sources which have a great impact on the accuracy of the star point centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecasting is presented in this paper. Experiment proves the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also does not need calibration data memory. This algorithm has been applied successfully in a certain star tracker.
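A hypothetical stand-in for the idea: forecast the background level, subtract it, then take the centre of gravity of the window. Here the forecast is simply the mean of the window's border pixels; the paper's forecast is derived from the star map noise analysis, so this is a sketch of the structure only.

```python
import numpy as np

def star_centroid(window):
    """Centre of gravity after subtracting a background level forecast
    from the window's border pixels (illustrative forecast)."""
    border = np.concatenate([window[0], window[-1],
                             window[1:-1, 0], window[1:-1, -1]])
    w = np.clip(window - border.mean(), 0.0, None)  # forecast, subtract, clip
    ys, xs = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()

# Synthetic star: Gaussian spot at (4.3, 5.6) on a flat 100 ADU background
gy, gx = np.mgrid[0:11, 0:11]
star = 100.0 + 500.0 * np.exp(-((gy - 4.3) ** 2 + (gx - 5.6) ** 2)
                              / (2.0 * 1.2 ** 2))
cy, cx = star_centroid(star)
```

Without the background subtraction the flat 100 ADU pedestal would pull the centroid towards the window centre; forecasting and removing it is what protects the sub-pixel estimate.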
Peak-locking centroid bias in Shack-Hartmann wavefront sensing
Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.
2018-05-01
Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions proposed. But these solutions only allow partial bias corrections. To date, no systematic study of the bias error was conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, the centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7 to values of ≲ 0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
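The peak-locking bias and its antisymmetric response to the sub-pixel position of the true centre can be reproduced with a one-dimensional centre-of-gravity toy model (illustrative spot width and window, not the paper's setup):

```python
import numpy as np

def cog_bias(true_pos, sigma=0.8, half_width=2):
    """Centre-of-gravity error for a noiseless Gaussian spot sampled on
    integer pixels and truncated to a finite window: peak-locking bias."""
    px = np.arange(-half_width, half_width + 1)
    vals = np.exp(-0.5 * ((px - true_pos) / sigma) ** 2)
    return (px * vals).sum() / vals.sum() - true_pos

b_plus = cog_bias(+0.25)
b_minus = cog_bias(-0.25)
# The bias is antisymmetric about the pixel centre: b_minus == -b_plus.
# This is the property the proposed correction exploits.
```

Because the bias is a fixed, antisymmetric function of the sub-pixel position, it can be calibrated once and subtracted, which is the essence of the practical solution described above.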
Huang, Qiongyu; Sauer, John R.; Swatantran, Anu; Dubayah, Ralph
2016-01-01
Drastic shifts in species distributions are a cause of concern for ecologists. Such shifts pose great threats to biodiversity, especially under unprecedented anthropogenic and natural disturbances. Many studies have documented recent shifts in species distributions. However, most of these studies are limited to regional scales and do not consider the abundance structure within species ranges. Developing methods to detect systematic changes in species distributions over their full ranges is critical for understanding the impact of changing environments and for successful conservation planning. Here, we demonstrate a centroid model for range-wide analysis of distribution shifts using the North American Breeding Bird Survey. The centroid model is based on a hierarchical Bayesian framework which models population change within physiographic strata while accounting for several factors affecting species detectability. Yearly abundance-weighted range centroids are estimated. As case studies, we derive annual centroids for the Carolina wren and house finch in their ranges in the U.S. We further evaluate the first-difference correlation between species' centroid movement and changes in winter severity and total population abundance. We also examined associations of change in centroids from sub-ranges. Changes in full-range centroid movements of the Carolina wren correlate significantly with snow cover days (r = −0.58). For both species, the full-range centroid shifts also have strong correlations with total abundance (r = 0.65 and 0.51, respectively). The movements of the full-range centroids of the two species are correlated strongly (up to r = 0.76) with those of the sub-ranges with more drastic population changes. Our study demonstrates the usefulness of centroids for analyzing distribution changes in a two-dimensional spatial context. In particular, it highlights applications that associate the centroid with factors such as environmental stressors, population characteristics
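The abundance-weighted centroid itself is simple once per-stratum abundances are in hand. The sketch below takes the abundances as known and ignores spherical geometry, both simplifications relative to the hierarchical Bayesian model; the strata and numbers are invented.

```python
import numpy as np

def abundance_weighted_centroid(lats, lons, abundances):
    """Abundance-weighted range centroid from per-stratum centroid
    coordinates and estimated abundances."""
    w = np.asarray(abundances, dtype=float)
    w /= w.sum()
    return float(np.dot(w, lats)), float(np.dot(w, lons))

# Three strata; abundance shifts towards the north-east between two years
lats = [30.0, 35.0, 40.0]
lons = [-90.0, -85.0, -80.0]
c_year1 = abundance_weighted_centroid(lats, lons, [50, 30, 20])
c_year2 = abundance_weighted_centroid(lats, lons, [20, 30, 50])
```

Even though no stratum's location changed, the centroid moves north-east because the abundance structure shifted, which is exactly the signal the range-wide analysis tracks.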
Implementation of the Centroid Method for the Correction of Turbulence
Directory of Open Access Journals (Sweden)
Enric Meinhardt-Llopis
2014-07-01
Full Text Available The centroid method for the correction of turbulence consists in computing the Karcher-Fréchet mean of the sequence of input images. The direction of deformation between a pair of images is determined by the optical flow. A distinguishing feature of the centroid method is that it can produce useful results from an arbitrarily small set of input images.
Hybridized centroid technique for 3D Molodensky-Badekas ...
African Journals Online (AJOL)
In view of this, the present study developed and tested two new hybrid centroid techniques known as the harmonic-quadratic mean and arithmetic-quadratic mean centroids. The proposed hybrid approaches were compared with the geometric mean, harmonic mean, median, quadratic mean and arithmetic mean. In addition ...
A focal plane metrology system and PSF centroiding experiment
Li, Haitao; Li, Baoquan; Cao, Yang; Li, Ligang
2016-10-01
In this paper, we present an overview of a detector array equipment metrology testbed and a micro-pixel centroiding experiment currently under development at the National Space Science Center, Chinese Academy of Sciences. We discuss on-going development efforts aimed at calibrating the intra-/inter-pixel quantum efficiency and pixel positions for scientific grade CMOS detector, and review significant progress in achieving higher precision differential centroiding for pseudo star images in large area back-illuminated CMOS detector. Without calibration of pixel positions and intrapixel response, we have demonstrated that the standard deviation of differential centroiding is below 2.0e-3 pixels.
An Adaptive Connectivity-based Centroid Algorithm for Node Positioning in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Aries Pratiarso
2015-06-01
Full Text Available In wireless sensor network applications, the position of nodes is randomly distributed following the contour of the observation area. A simple solution without any measurement tools is provided by the range-free method. However, this method yields only a coarse estimated position of the nodes. In this paper, we propose the Adaptive Connectivity-based (ACC) algorithm. This algorithm is a combination of Centroid, as a range-free based algorithm, and a hop-based connectivity algorithm. Nodes can estimate their own position based on the connectivity level between them and their reference nodes. Each node divides its communication range into several regions, where each of them has a certain weight depending on the received signal strength. The weighted value is used to obtain the estimated position of nodes. Simulation results show that the proposed algorithm has a position estimation error of up to 3 meters on a 100x100 square meter observation area, with up to 3 hop counts for an 80 meter communication range. The proposed algorithm achieves an average positioning error up to 10 meters lower than the Weighted Centroid algorithm. Keywords: adaptive, connectivity, centroid, range-free.
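A signal-strength-weighted centroid of the kind the ACC algorithm builds on can be sketched as follows. The weighting scheme (shifting dBm readings to positive weights) is a hypothetical choice for illustration; the paper's region-based weights differ.

```python
import numpy as np

def weighted_centroid(anchors, rssi, alpha=1.0):
    """Node position as the centroid of in-range anchors, weighted by
    received signal strength (stronger signal -> presumed closer)."""
    anchors = np.asarray(anchors, dtype=float)
    rssi = np.asarray(rssi, dtype=float)
    w = (rssi - rssi.min() + 1.0) ** alpha  # shift dBm values to positive weights
    est = (w[:, None] * anchors).sum(axis=0) / w.sum()
    return float(est[0]), float(est[1])

# Node near (25, 25) hears the four corner anchors of a 100 m field
anchors = [(0, 0), (100, 0), (0, 100), (100, 100)]
rssi = [-60.0, -75.0, -75.0, -85.0]  # strongest from the nearest anchor
x, y = weighted_centroid(anchors, rssi)
# Pulled towards the strong anchor at (0, 0); the plain unweighted
# centroid would sit at (50, 50) regardless of signal strength.
```

The connectivity/hop-count information in ACC refines this further by constraining which anchors count as references and how their regions are weighted.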
Optimisation of centroiding algorithms for photon event counting imaging
International Nuclear Information System (INIS)
Suhling, K.; Airey, R.W.; Morgan, B.L.
1999-01-01
Approaches to photon event counting imaging in which the output events of an image intensifier are located using a centroiding technique have long been plagued by fixed pattern noise in which a grid of dimensions similar to those of the CCD pixels is superimposed on the image. This is caused by a mismatch between the photon event shape and the centroiding algorithm. We have used hyperbolic cosine, Gaussian, Lorentzian, parabolic as well as 3-, 5-, and 7-point centre of gravity algorithms, and hybrids thereof, to assess means of minimising this fixed pattern noise. We show that fixed pattern noise generated by the widely used centre of gravity centroiding is due to intrinsic features of the algorithm. Our results confirm that the recently proposed use of Gaussian centroiding does indeed show a significant reduction of fixed pattern noise compared to centre of gravity centroiding (Michel et al., Mon. Not. R. Astron. Soc. 292 (1997) 611-620). However, the disadvantage of a Gaussian algorithm is a centroiding failure for small pulses, caused by a division by zero, which leads to a loss of detective quantum efficiency (DQE) and to small amounts of residual fixed pattern noise. Using both real data from an image intensifier system employing a progressive scan camera, framegrabber and PC, and also synthetic data from Monte-Carlo simulations, we find that hybrid centroiding algorithms can reduce the fixed pattern noise without loss of resolution or loss of DQE. Imaging a test pattern to assess the features of the different algorithms shows that a hybrid of Gaussian and 3-point centre of gravity centroiding algorithms results in an optimum combination of low fixed pattern noise (lower than a simple Gaussian), high DQE, and high resolution. The Lorentzian algorithm gives the worst results in terms of high fixed pattern noise and low resolution, and the Gaussian and hyperbolic cosine algorithms have the lowest DQEs.
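The Gaussian-versus-centre-of-gravity comparison, including the division-by-zero failure for small pulses, can be reproduced in one dimension with 3-point versions of both estimators (illustrative event shape, not the intensifier's actual pulse profile):

```python
import numpy as np

def cog3(vm, v0, vp):
    """3-point centre of gravity offset relative to the central pixel."""
    return (vp - vm) / (vm + v0 + vp)

def gauss3(vm, v0, vp):
    """3-point Gaussian interpolation offset; returns None when a sample is
    non-positive -- the log/division failure noted for small pulses."""
    if min(vm, v0, vp) <= 0.0:
        return None
    lm, l0, lp = np.log([vm, v0, vp])
    denom = 2.0 * (lm - 2.0 * l0 + lp)
    return (lm - lp) / denom if denom != 0.0 else None

# Gaussian photon event centred 0.2 pixels right of the central pixel
px = np.array([-1.0, 0.0, 1.0])
v = np.exp(-0.5 * ((px - 0.2) / 0.7) ** 2)
off_gauss = gauss3(*v)  # exact for a truly Gaussian event shape
off_cog = cog3(*v)      # systematically biased -> fixed pattern noise

small_pulse = gauss3(0.0, 1.0, 0.5)  # None: the small-pulse failure mode
```

The systematic centre-of-gravity bias repeats with pixel period across the image, which is precisely the grid-like fixed pattern noise the abstract describes; a hybrid estimator falls back to the centre of gravity exactly where the Gaussian one fails.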
Directory of Open Access Journals (Sweden)
Drzewiecki Wojciech
2017-12-01
Full Text Available We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R2. These measures were also used for the assessment of imperviousness change intensity estimations. The applicability for detection of relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results proved that the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detection of subtle changes of imperviousness inside the pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels if the relevant change threshold was set as one or three percent. Also, for a five percent change threshold most of the algorithms did not ensure that the accuracy of the change map is higher than the accuracy of a random classifier. For the threshold of relevant change set at ten percent all approaches performed satisfactorily.
Dettmer, J.; Benavente, R. F.; Cummins, P. R.
2017-12-01
This work considers probabilistic, non-linear centroid moment tensor inversion of data from earthquakes at teleseismic distances. The moment tensor is treated as deviatoric and the centroid location is parametrized with fully unknown latitude, longitude, depth and time delay. The inverse problem is treated as fully non-linear in a Bayesian framework, and the posterior density is estimated with interacting Markov chain Monte Carlo methods which are implemented in parallel and allow for chain interaction. The source mechanism and location, including uncertainties, are fully described by the posterior probability density, and complex trade-offs between various metrics are studied. These include the percentage of double-couple component as well as fault orientation, and the probabilistic results are compared to results from earthquake catalogs. Additional focus is on the analysis of complex events which are commonly not well described by a single point source. These events are studied by jointly inverting for multiple centroid moment tensor solutions. The optimal number of sources is estimated by the Bayesian information criterion to ensure parsimonious solutions. [Supported by NSERC.]
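The Bayesian information criterion mentioned at the end penalizes each added point source by its extra parameters. A minimal sketch of the model-selection step (the per-source parameter counts and likelihood values below are invented for illustration):

```python
import math

def bic(max_log_likelihood, n_params, n_data):
    # BIC = k * ln(n) - 2 * ln(L_hat); lower is better (more parsimonious)
    return n_params * math.log(n_data) - 2.0 * max_log_likelihood

def best_n_sources(candidates, n_data):
    # candidates: list of (n_sources, n_params, max_log_likelihood);
    # returns the source count whose BIC is smallest
    return min(candidates, key=lambda c: bic(c[2], c[1], n_data))[0]
```

A second source is accepted only if its likelihood gain outweighs the `k * ln(n)` penalty for the extra parameters.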
Plasma Channel Diagnostic Based on Laser Centroid Oscillations
International Nuclear Information System (INIS)
Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim
2010-01-01
A technique has been developed for measuring the properties of discharge-based plasma channels by monitoring the centroid location of a laser beam exiting the channel as a function of input alignment offset between the laser and the channel. The centroid position of low-intensity (~10^14 W cm^-2) laser pulses focused at the input of a hydrogen-filled capillary discharge waveguide was scanned and the exit positions recorded to determine the channel shape and depth with an accuracy of a few percent. In addition, accurate alignment of the laser beam through the plasma channel can be provided by minimizing laser centroid motion at the channel exit as the channel depth is scanned, either by scanning the plasma density or the discharge timing. The improvement in alignment accuracy provided by this technique will be crucial for minimizing electron beam pointing errors in laser plasma accelerators.
Characterizing Subpixel Spatial Resolution of a Hybrid CMOS Detector
Bray, Evan; Burrows, Dave; Chattopadhyay, Tanmoy; Falcone, Abraham; Hull, Samuel; Kern, Matthew; McQuaide, Maria; Wages, Mitchell
2018-01-01
The detection of X-rays is a unique process relative to other wavelengths, and allows for some novel features that increase the scientific yield of a single observation. Unlike lower photon energies, X-rays liberate a large number of electrons from the silicon absorber array of the detector. This number is usually on the order of several hundred to a thousand for moderate-energy X-rays. These electrons tend to diffuse outward into what is referred to as the charge cloud. This cloud can then be picked up by several pixels, forming a specific pattern based on the exact incident location. By conducting the first ever “mesh experiment” on a hybrid CMOS detector (HCD), we have experimentally determined the charge cloud shape and used it to characterize the responsivity of the detector with subpixel spatial resolution.
Determination of star bodies from p-centroid bodies
Indian Academy of Sciences (India)
An immediate consequence of the definition of the p-centroid body of K is that for any … The dual mixed volume Ṽ₋ₚ(K, L) of star bodies K, L can be defined by … [16] Lindenstrauss J and Milman V D, Local theory of normed spaces and …
Networks and centroid metrics for understanding football | Gama ...
African Journals Online (AJOL)
This study aimed to verify the network of contacts resulting from the collective behaviour of professional football teams through the centroid method and networks as well, thereby providing detailed information about the match to coaches and sport analysts. For this purpose, 999 collective attacking actions from two teams were ...
Determination of star bodies from p-centroid bodies
Indian Academy of Sciences (India)
In this paper, we prove that an origin-symmetric star body is uniquely determined by its p-centroid body. Furthermore, using spherical harmonics, we establish a result for non-symmetric star bodies. As an application, we show that there is a unique member of Γ_p⟨K⟩ characterized by having larger volume than any other ...
Optimizing the calculation of point source count-centroid in pixel size measurement
International Nuclear Information System (INIS)
Zhou Luyi; Kuang Anren; Su Xianyu
2004-01-01
Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb - Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very large, which is impractical, resulting in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb - Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1 - (0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128 x 128 matrix, 387 mm UFOV, ZOOM = 1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01 ± 0.00) as Rp increased.
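The bias formula and the ROI count fraction above are easy to check numerically; a minimal sketch under the abstract's assumptions (Gaussian point spread, R = FWHM):

```python
def measured_centroid(x_p, x_b, r_p, r_b):
    # Xm = Xp + (Xb - Xp)/(1 + Rp/Rb): background counts pull the measured
    # centroid away from the true point-source centroid Xp toward Xb.
    return x_p + (x_b - x_p) / (1.0 + r_p / r_b)

def roi_fraction(d_over_r):
    # K = 1 - 0.5**(D/R): fraction of Gaussian PS counts inside a circular
    # ROI of diameter D, with R the FWHM (the abstract's assumption).
    return 1.0 - 0.5 ** d_over_r
```

With D = 6R this gives K = 98.4%, matching the abstract, and the centroid bias vanishes only as Rp/Rb grows large.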
Intraoperative cyclorotation and pupil centroid shift during LASIK and PRK.
Narváez, Julio; Brucks, Matthew; Zimmerman, Grenith; Bekendam, Peter; Bacon, Gregory; Schmid, Kristin
2012-05-01
To determine the degree of cyclorotation and centroid shift in the x and y axis that occurs intraoperatively during LASIK and photorefractive keratectomy (PRK). Intraoperative cyclorotation and centroid shift were measured in 63 eyes from 34 patients with a mean age of 34 years (range: 20 to 56 years) undergoing either LASIK or PRK. Preoperatively, an iris image of each eye was obtained with the VISX WaveScan Wavefront System (Abbott Medical Optics Inc) with iris registration. A VISX Star S4 (Abbott Medical Optics Inc) laser was later used to measure cyclotorsion and pupil centroid shift at the beginning of the refractive procedure and after flap creation or epithelial removal. The mean change in intraoperative cyclorotation was 1.48±1.11° in LASIK eyes and 2.02±2.63° in PRK eyes. Cyclorotation direction changed by >2° in 21% of eyes after flap creation in LASIK and in 32% of eyes after epithelial removal in PRK. The respective mean intraoperative shift in the x axis and y axis was 0.13±0.15 mm and 0.17±0.14 mm, respectively, in LASIK eyes, and 0.09±0.07 mm and 0.10±0.13 mm, respectively, in PRK eyes. Intraoperative centroid shifts >100 μm in either the x axis or y axis occurred in 71% of LASIK eyes and 55% of PRK eyes. Significant changes in cyclotorsion and centroid shifts were noted prior to surgery as well as intraoperatively with both LASIK and PRK. It may be advantageous to engage iris registration immediately prior to ablation to provide a reference point representative of eye position at the initiation of laser delivery. Copyright 2012, SLACK Incorporated.
Subpixel Snow Cover Mapping from MODIS Data by Nonparametric Regression Splines
Akyurek, Z.; Kuter, S.; Weber, G. W.
2016-12-01
Spatial extent of snow cover is often considered as one of the key parameters in climatological, hydrological and ecological modeling due to its energy storage, high reflectance in the visible and NIR regions of the electromagnetic spectrum, significant heat capacity and insulating properties. A significant challenge in snow mapping by remote sensing (RS) is the trade-off between the temporal and spatial resolution of satellite imagery. In order to tackle this issue, machine learning-based subpixel snow mapping methods, like Artificial Neural Networks (ANNs), from low or moderate resolution images have been proposed. Multivariate Adaptive Regression Splines (MARS) is a nonparametric regression tool that can build flexible models for high dimensional and complex nonlinear data. Although MARS is not often employed in RS, it has various successful implementations such as estimation of vertical total electron content in the ionosphere, atmospheric correction and classification of satellite images. This study is the first attempt in RS to evaluate the applicability of MARS for subpixel snow cover mapping from MODIS data. A total of 16 MODIS-Landsat ETM+ image pairs taken over the European Alps between March 2000 and April 2003 were used in the study. MODIS top-of-atmosphere reflectance, NDSI, NDVI and land cover classes were used as predictor variables. Cloud-covered, cloud shadow, water and bad-quality pixels were excluded from further analysis by a spatial mask. MARS models were trained and validated by using reference fractional snow cover (FSC) maps generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also developed. The mutual comparison of the obtained MARS and ANN models was accomplished on independent test areas. The MARS model performed better than the ANN model, with an average RMSE of 0.1288 over the independent test areas, whereas the average RMSE of the ANN model
Zhang, Z.; Werner, F.; Cho, H. -M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry
2016-01-01
…to estimate the retrieval uncertainty from sub-pixel reflectance variations in operational satellite cloud products and to help understand the differences in τ and r_e retrievals between two instruments.
Multiple centroid method to evaluate the adaptability of alfalfa genotypes
Directory of Open Access Journals (Sweden)
Moysés Nascimento
2015-02-01
Full Text Available This study aimed to evaluate the efficiency of multiple centroids to study the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bisegmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). In this context, we used data on the evaluation of dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two repetitions carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it showed no ambiguous indications, provided that ideotypes were defined according to the researcher's interest, facilitating data interpretation.
A Novel Approach Based on MEMS-Gyro's Data Deep Coupling for Determining the Centroid of Star Spot
Directory of Open Access Journals (Sweden)
Xing Fei
2012-01-01
Full Text Available The traditional approach of a star tracker for determining the centroid of a spot requires enough energy and a good spot shape, so a relatively long exposure time and a stable three-axis state become necessary conditions for maintaining high accuracy; these limit the update rate and dynamic performance. In view of these issues, this paper presents an approach for determining the centroid of a star spot based on deep coupling of MEMS-gyro data; it achieves deep fusion of the star tracker and MEMS-gyro data at the star-map level through the introduction of an EKF. The trajectory predicted using the angular velocities of the three axes can be used to set the extraction window, which enhances the dynamic performance through accurate extraction when the satellite has angular speed. The optimal estimates of the centroid position and of the drift in the output signal of the MEMS-gyro obtained with this approach reduce the influence of detector noise on the accuracy of the traditional centroiding approach and effectively correct the output signal of the MEMS-gyro. At the end of this paper, the feasibility of this approach is verified by simulation.
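The gyro-predicted extraction window can be sketched as follows. The small-angle pinhole mapping, the axis assignment and the sign conventions here are illustrative assumptions, not the paper's actual EKF:

```python
def predict_window(centroid, gyro_rate, dt, focal_len_px, half_size):
    # Propagate the last star-spot centroid with the gyro angular rates
    # (small-angle approximation: an angular rate about one transverse
    # axis moves the spot along the other image axis).
    x, y = centroid
    wx, wy, _ = gyro_rate                 # rad/s about the body axes
    x_new = x + focal_len_px * wy * dt    # sign convention is illustrative
    y_new = y - focal_len_px * wx * dt
    # Return the extraction window (xmin, ymin, xmax, ymax) centred on
    # the predicted spot position.
    return (x_new - half_size, y_new - half_size,
            x_new + half_size, y_new + half_size)
```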
Non-obtuse Remeshing with Centroidal Voronoi Tessellation
Yan, Dongming; Wonka, Peter
2015-01-01
We present a novel remeshing algorithm that avoids triangles with small angles and triangles with large (obtuse) angles. Our solution is based on an extension of Centroidal Voronoi Tessellation (CVT). We augment the original CVT formulation by a penalty term that penalizes short Voronoi edges, while the CVT term helps to avoid small angles. Our results show significant improvements of the remeshing quality over the state of the art.
Directory of Open Access Journals (Sweden)
Haoming Xia
2017-01-01
Full Text Available Wetland inundation is crucial to the survival and prosperity of fauna and flora communities in wetland ecosystems. Even small changes in surface inundation may result in a substantial impact on the wetland ecosystem characteristics and function. This study presented a novel method for wetland inundation mapping at a subpixel scale in a typical wetland region on the Zoige Plateau, northeast Tibetan Plateau, China, by combining use of an unmanned aerial vehicle (UAV and Landsat-8 Operational Land Imager (OLI data. A reference subpixel inundation percentage (SIP map at a Landsat-8 OLI 30 m pixel scale was first generated using high resolution UAV data (0.16 m. The reference SIP map and Landsat-8 OLI imagery were then used to develop SIP estimation models using three different retrieval methods (Linear spectral unmixing (LSU, Artificial neural networks (ANN, and Regression tree (RT. Based on observations from 2014, the estimation results indicated that the estimation model developed with RT method could provide the best fitting results for the mapping wetland SIP (R2 = 0.933, RMSE = 8.73% compared to the other two methods. The proposed model with RT method was validated with observations from 2013, and the estimated SIP was highly correlated with the reference SIP, with an R2 of 0.986 and an RMSE of 9.84%. This study highlighted the value of high resolution UAV data and globally and freely available Landsat data in combination with the developed approach for monitoring finely gradual inundation change patterns in wetland ecosystems.
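Of the three retrieval methods compared above, linear spectral unmixing is the simplest to sketch: each pixel spectrum is modeled as a fractional mixture of endmember spectra. The implementation and the endmember values below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def unmix(pixel, endmembers, weight=100.0):
    # Least-squares linear spectral unmixing with a soft sum-to-one
    # constraint, implemented as an extra, heavily weighted "band" of ones.
    # endmembers: (n_endmembers, n_bands); pixel: (n_bands,)
    E = np.vstack([endmembers.T, weight * np.ones(endmembers.shape[0])])
    b = np.append(pixel, weight)
    fractions, *_ = np.linalg.lstsq(E, b, rcond=None)
    return fractions
```

The inundated fraction of each 30 m pixel would then be read off as the fraction assigned to the water endmember.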
Error diffusion applied to the manipulation of liquid-crystal display subpixels
Dallas, William J.; Fan, Jiahua; Roehrig, Hans; Krupinski, Elizabeth A.
2004-05-01
Flat-panel displays based on liquid crystal technology are becoming widely used in the medical imaging arena. Despite the impressive capabilities of presently existing panels, some medical images push their boundaries. We are working with mammograms that contain up to 4800 x 6400 14-bit pixels. Stated differently, these images contain 30 mega-pixels each. In the standard environment for film viewing, the mammograms are hung four-up, i.e. four images are located side by side. Because many of the LCD panels used for monochrome display of medical images are based on color models, the pixels of the panels are divided into sub-pixels. These sub-pixels vary in their numbers and in their degrees of independence. Manufacturers have used both spatial and temporal modulation of these sub-pixels to improve the quality of images presented by the monitors. In this presentation we show how the sub-pixel structure of some present and future displays can be used to attain higher spatial resolution than the full-pixel resolution specification would suggest, while also providing increased contrast resolution. The error diffusion methods we discuss provide a natural way of controlling sub-pixels and implementing trade-offs. In smooth regions of the image, contrast resolution can be maximized. In rapidly varying regions of the image, spatial resolution can be favored.
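The error-diffusion principle applied here to sub-pixels can be shown in one dimension: the quantization error at each sample is carried forward so that local averages are preserved. A minimal sketch (not the authors' display-specific kernel):

```python
def diffuse_1d(samples, levels):
    # 1-D error diffusion: quantize each sample to the nearest available
    # level and push the quantization error onto the next sample, so the
    # running average tracks the input signal.
    out, err = [], 0.0
    for s in samples:
        target = s + err
        q = min(levels, key=lambda v: abs(v - target))
        err = target - q
        out.append(q)
    return out
```

With only two output levels, a constant grey of 0.3 is rendered as a pattern of ones and zeros whose mean is 0.3, which is how contrast resolution is traded for spatial structure.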
Radial lens distortion correction with sub-pixel accuracy for X-ray micro-tomography.
Vo, Nghia T; Atwood, Robert C; Drakopoulos, Michael
2015-12-14
Distortion correction or camera calibration for an imaging system which is highly configurable and requires frequent disassembly for maintenance or replacement of parts needs a speedy method for recalibration. Here we present direct techniques for calculating distortion parameters of a non-linear model based on the correct determination of the center of distortion. These techniques are fast, very easy to implement, and accurate at sub-pixel level. The implementation at the X-ray tomography system of the I12 beamline, Diamond Light Source, which strictly requires sub-pixel accuracy, shows excellent performance in the calibration image and in the reconstructed images.
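A common non-linear model of the kind described expresses the corrected radius as a polynomial in the distorted radius about the center of distortion. The sketch below is a generic radial model with invented coefficients, for illustration only; it is not the paper's calibration procedure:

```python
def undistort_radial(x, y, xc, yc, coeffs):
    # Polynomial radial model: r_u = r_d * (1 + k1*r_d^2 + k2*r_d^4 + ...)
    # about a (correctly determined) centre of distortion (xc, yc).
    dx, dy = x - xc, y - yc
    r2 = dx * dx + dy * dy
    scale, rpow = 1.0, 1.0
    for k in coeffs:
        rpow *= r2
        scale += k * rpow
    return xc + dx * scale, yc + dy * scale
```

Sub-pixel accuracy in the corrected image hinges on both the coefficients and the centre (xc, yc), which is why the paper stresses determining the centre of distortion correctly.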
A physics-motivated Centroidal Voronoi Particle domain decomposition method
Energy Technology Data Exchange (ETDEWEB)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
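Lloyd's algorithm, the workhorse behind the CVT partitioning above, alternates nearest-site assignment with moving each site to the centroid of its region; each sweep monotonically decreases the CVT energy. A 1-D sketch for illustration only (the paper operates on 3-D subdomains with particle dynamics):

```python
def lloyd_1d(sites, samples, iters=50):
    # Lloyd's algorithm in 1-D: assign each sample to its nearest site,
    # then move every site to the centroid of its assigned region.
    sites = list(sites)
    for _ in range(iters):
        bins = [[] for _ in sites]
        for s in samples:
            i = min(range(len(sites)), key=lambda j: abs(sites[j] - s))
            bins[i].append(s)
        sites = [sum(b) / len(b) if b else sites[i]
                 for i, b in enumerate(bins)]
    return sites
```

For a uniform density on [0, 1] and two sites, the iteration converges to the centroidal configuration (0.25, 0.75) regardless of the (distinct) starting positions.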
Measurement of centroid trajectory of Dragon-I electron beam
International Nuclear Information System (INIS)
Jiang Xiaoguo; Wang Yuan; Zhang Wenwei; Zhang Kaizhi; Li Jing; Li Chenggang; Yang Guojun
2005-01-01
The control of the electron beam in an intense-current linear induction accelerator (LIA) is very important. The center position of the electron beam and the beam profile are two important parameters which should be measured accurately. The setup of a time-resolved measurement system and a data processing method for determining the beam center position are introduced for the purpose of obtaining the Dragon-I electron beam trajectory, including the beam profile. The actual results show that the centroid position error can be controlled to within one to two pixels. The time-resolved beam centroid trajectory of Dragon-I (18.5 MeV, 2 kA, 90 ns) was obtained recently at 10 ns intervals with 3 ns exposure time using a multi-frame gated camera. The results show that the screw movement of the electron beam is mainly limited to an area with a radius of 0.5 mm and the time-resolved diameters of the beam are 8.4 mm, 8.8 mm, 8.5 mm, 9.3 mm and 7.6 mm. These results have provided very important support to several research areas such as beam trajectory tuning and beam transmission. (authors)
Maksimova, L. A.; Ryabukho, P. V.; Mysina, N. Yu.; Lyakin, D. V.; Ryabukho, V. P.
2018-04-01
We have investigated the capabilities of the method of digital speckle interferometry for determining subpixel displacements of a speckle structure formed by a displaceable or deformable object with a scattering surface. An analysis of spatial spectra of speckle structures makes it possible to perform measurements with subpixel accuracy and to extend the lower boundary of the range of measurable displacements of speckle structures to the range of subpixel values. The method is realized on the basis of digital recording of the images of undisplaced and displaced speckle structures, their spatial frequency analysis using numerically specified constant phase shifts, and correlation analysis of spatial spectra of speckle structures. Transformation into the frequency domain makes it possible to obtain the quantities to be measured with subpixel accuracy, either from the shift of the interference-pattern minimum in the diffraction halo produced by introducing an additional phase shift into the complex spatial spectrum of the speckle structure, or from the slope of the linear plot of the accumulated phase difference over the field of the complex spatial spectrum of the displaced speckle structure. The capabilities of the method have been investigated in a physical experiment.
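The core idea, recovering a subpixel shift from the phase of the spatial spectrum, can be sketched with a 1-D phase-correlation example. This is illustrative only; the paper's method additionally uses numerically applied phase shifts and correlation of the spectra:

```python
import numpy as np

def subpixel_shift_1d(a, b):
    # If b(x) = a(x - d), the cross-power spectrum A * conj(B) has phase
    # 2*pi*k*d/n, so the shift d is read off the slope of the (unwrapped)
    # phase over frequency index k.
    n = len(a)
    cross = np.fft.rfft(a) * np.conj(np.fft.rfft(b))
    m = len(cross) // 2                       # low frequencies only
    phase = np.unwrap(np.angle(cross[:m]))
    slope = np.polyfit(np.arange(m), phase, 1)[0]
    return slope * n / (2.0 * np.pi)
```

Because the phase slope is a continuous quantity, the recovered shift is not limited to integer pixels.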
Feature selection and nearest centroid classification for protein mass spectrometry
Directory of Open Access Journals (Sweden)
Levner Ilya
2005-03-01
Full Text Available Abstract Background The use of mass spectrometry as a proteomics tool is poised to revolutionize early disease diagnosis and biomarker identification. Unfortunately, before standard supervised classification algorithms can be employed, the "curse of dimensionality" needs to be solved. Due to the sheer amount of information contained within the mass spectra, most standard machine learning techniques cannot be directly applied. Instead, feature selection techniques are used to first reduce the dimensionality of the input space and thus enable the subsequent use of classification algorithms. This paper examines feature selection techniques for proteomic mass spectrometry. Results This study examines the performance of the nearest centroid classifier coupled with the following feature selection algorithms: Student's t-test, the Kolmogorov-Smirnov test, and the P-test, which are univariate statistics used for filter-based feature ranking. From the wrapper approaches we tested sequential forward selection and a modified version of sequential backward selection. Embedded approaches included shrunken nearest centroid and a novel version of boosting-based feature selection we developed. In addition, we tested several dimensionality reduction approaches, namely principal component analysis and principal component analysis coupled with linear discriminant analysis. To fairly assess each algorithm, evaluation was done using stratified cross validation with an internal leave-one-out cross-validation loop for automated feature selection. Comprehensive experiments, conducted on five popular cancer data sets, revealed that the less advocated sequential forward selection and boosted feature selection algorithms produce the most consistent results across all data sets. In contrast, the state-of-the-art performance reported on isolated data sets for several of the studied algorithms does not hold across all data sets. Conclusion This study tested a number of popular feature
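The nearest centroid classifier at the heart of the study is tiny; a plain-Python sketch (not the authors' implementation, which adds shrinkage and feature selection on top):

```python
def train_centroids(X, y):
    # Nearest-centroid training: one mean feature vector per class
    sums, counts = {}, {}
    for features, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict(centroids, x):
    # Assign x to the class whose centroid is closest (squared Euclidean)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

Its appeal for mass spectra is exactly what the abstract exploits: training is a single pass, so it can sit inside a cross-validated feature-selection loop cheaply.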
International Nuclear Information System (INIS)
Lu, Bo; Samant, Sanjiv; Mittauer, Kathryn; Lee, Soyoung; Huang, Yin; Li, Jonathan; Kahler, Darren; Liu, Chihray
2013-01-01
Purpose: Our previous study [B. Lu et al., “A patient alignment solution for lung SBRT setups based on a deformable registration technique,” Med. Phys. 39(12), 7379–7389 (2012)] proposed a deformable-registration-based patient setup strategy called the centroid-to-centroid (CTC) method, which can perform an accurate alignment of internal-target-volume (ITV) centroids between averaged four-dimensional computed tomography and cone-beam computed tomography (CBCT) images. Scenarios with variations between CBCT and simulation CT caused by irregular breathing and/or tumor change were not specifically considered in the patient study [B. Lu et al., “A patient alignment solution for lung SBRT setups based on a deformable registration technique,” Med. Phys. 39(12), 7379–7389 (2012)] due to the lack of both a sufficiently large patient data sample and a method of tumor tracking. The aim of this study is to thoroughly investigate and compare the impacts of breathing pattern and tumor change on both the CTC and the translation-only (T-only) gray-value mode strategies by employing a four-dimensional (4D) lung phantom.Methods: A sophisticated anthropomorphic 4D phantom (CIRS Dynamic Thorax Phantom model 008) was employed to simulate all desired respiratory variations. The variation scenarios were classified into four groups: inspiration to expiration ratio (IE ratio) change, tumor trajectory change, tumor position change, tumor size change, and the combination of these changes. For each category the authors designed several scenarios to demonstrate the effects of different levels of breathing variation on both of the T-only and the CTC methods. Each scenario utilized 4DCT and CBCT scans. The ITV centroid alignment discrepancies for CTC and T-only were evaluated. The dose-volume-histograms (DVHs) of ITVs for two extreme cases were analyzed.Results: Except for some extreme cases in the combined group, the accuracy of the CTC registration was about 2 mm for all cases for
International Nuclear Information System (INIS)
Hirvonen, Liisa M.; Barber, Matthew J.; Suhling, Klaus
2016-01-01
Photon event centroiding in photon counting imaging and single-molecule localisation in super-resolution fluorescence microscopy share many traits. Although photon event centroiding has traditionally been performed with simple single-iteration algorithms, we recently reported that iterative fitting algorithms originally developed for single-molecule localisation fluorescence microscopy work very well when applied to centroiding photon events imaged with an MCP-intensified CMOS camera. Here, we have applied these algorithms for centroiding of photon events from an electron-bombarded CCD (EBCCD). We find that centroiding algorithms based on iterative fitting of the photon events yield excellent results and allow fitting of overlapping photon events, a feature not reported before and an important aspect to facilitate an increased count rate and shorter acquisition times.
Jasinski, Michael F.
1990-01-01
An analytical framework is provided for examining the physically based behavior of the normalized difference vegetation index (NDVI) in terms of the variability in bulk subpixel landscape components and with respect to variations in pixel scales, within the context of the stochastic-geometric canopy reflectance model. Analysis focuses on regional scale variability in horizontal plant density and soil background reflectance distribution. Modeling is generalized to different plant geometries and solar angles through the use of the nondimensional solar-geometric similarity parameter. Results demonstrate that, for Poisson-distributed plants and for one deterministic distribution, NDVI increases with increasing subpixel fractional canopy amount, decreasing soil background reflectance, and increasing shadows, at least within the limitations of the geometric reflectance model. The NDVI of a pecan orchard and a juniper landscape is presented and discussed.
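The index behaviour described above can be sketched with the standard NDVI formula and a linear subpixel mixture of canopy and soil; the reflectance values below are illustrative assumptions, not data from the study.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def mixed_pixel_ndvi(f, canopy=(0.05, 0.45), soil=(0.20, 0.25)):
    """NDVI of a pixel that is a linear mixture of canopy (fraction f)
    and soil, with band pairs given as (red, nir).
    Reflectance values are illustrative only."""
    red = f * canopy[0] + (1 - f) * soil[0]
    nir = f * canopy[1] + (1 - f) * soil[1]
    return ndvi(nir, red)
```

Consistent with the abstract, NDVI rises as the subpixel canopy fraction increases and falls as the soil background brightens.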
Design of interpolation functions for subpixel-accuracy stereo-vision systems.
Haller, Istvan; Nedevschi, Sergiu
2012-02-01
Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of the stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions. © 2011 IEEE
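The traditional baseline that such work improves on is the classic three-point parabola fit through the matching costs at the best integer disparity and its two neighbours; a generic sketch of that baseline (not the proposed methodologies) is:

```python
def parabolic_subpixel(c_left, c_best, c_right):
    """Classic three-point parabola fit used in block matching: given the
    matching cost at the best integer disparity and its two neighbours,
    return the fractional offset (in -0.5..0.5) of the cost minimum."""
    denom = c_left - 2.0 * c_best + c_right
    if denom == 0:
        return 0.0
    return 0.5 * (c_left - c_right) / denom
```

When the cost surface really is a parabola, the fit recovers the true subpixel minimum exactly; real cost surfaces deviate from this shape, which is what motivates algorithm-specific interpolation functions.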
Drzewiecki, Wojciech
2016-12-01
In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that for sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, the results suggest the Cubist algorithm for Landsat-based mapping of imperviousness at single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments but better prediction of change, owing to more strongly correlated errors of the individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual-time-point assessments, and for imperviousness change assessment the ensembles always outperformed single-model approaches. This means it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-04-01
Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or low signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) The CMT inversion is fully automated; no user interaction is required, although the details of the process can be inspected later in many automatically plotted figures. (ii) The automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion. (iii) A data covariance matrix calculated from pre-event noise yields automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. (iv) A Bayesian approach is used, so not only the best solution is obtained but also the posterior probability density function. (v) A space-time grid search, effectively combined with least-squares inversion of the moment tensor components, speeds up the inversion and allows one to obtain more accurate results than stochastic methods. The method has been tested on synthetic and observed data, and by comparison with manually processed moment tensors of all events with M ≥ 3 in the Swiss catalogue over 16 years, using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data. The software package, programmed in Python, has been designed to be as versatile as possible in
Model Independent Analysis of Beam Centroid Dynamics in Accelerators
Energy Technology Data Exchange (ETDEWEB)
Wang, Chun-xi
2003-04-21
Fundamental issues in Beam-Position-Monitor (BPM)-based beam dynamics observations are studied in this dissertation. The major topic is the Model-Independent Analysis (MIA) of beam centroid dynamics. Conventional beam dynamics analysis requires a certain machine model, which itself often needs to be refined by beam measurements. Instead of using any particular machine model, MIA relies on a statistical analysis of the vast amount of BPM data that often can be collected non-invasively during normal machine operation. There are two major parts in MIA. One is noise reduction and degrees-of-freedom analysis using a singular value decomposition of a BPM-data matrix, which constitutes a principal component analysis of BPM data. The other is a physical base decomposition of the BPM-data matrix based on the time structure of pulse-by-pulse beam and/or machine parameters. The combination of these two methods allows one to break the resolution limit set by individual BPMs and observe beam dynamics at more accurate levels. A physical base decomposition is particularly useful for understanding various beam dynamics issues. MIA improves observation and analysis of beam dynamics and thus leads to better understanding and control of beams in both linacs and rings. The statistical nature of MIA makes it potentially useful in other fields. Another important topic discussed in this dissertation is the measurement of a nonlinear Poincaré section (one-turn) map in circular accelerators. The beam dynamics in a ring is intrinsically nonlinear. In fact, nonlinearities are a major factor that limits stability and influences the dynamics of halos. The Poincaré section map plays a basic role in characterizing and analyzing such a periodic nonlinear system. Although many kinds of nonlinear beam dynamics experiments have been conducted, no direct measurement of a nonlinear map has been reported for a ring in normal operation mode. This dissertation analyzes various issues concerning map
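The SVD-based noise-reduction step of MIA can be sketched on synthetic data: one coherent mode shared across BPMs stands out as a dominant singular value, while uncorrelated BPM noise fills the rest of the spectrum. The dimensions, tune and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_turns, n_bpms = 1000, 40

# Synthetic BPM data: one betatron-like mode buried in uncorrelated noise.
turns = np.arange(n_turns)
mode = np.sin(2 * np.pi * 0.31 * turns)        # temporal pattern of the mode
pattern = rng.standard_normal(n_bpms)          # its spatial pattern across BPMs
data = np.outer(mode, pattern) + 0.5 * rng.standard_normal((n_turns, n_bpms))

# MIA-style principal component analysis via SVD of the BPM-data matrix.
u, s, vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)

# Keeping only the dominant singular triplet gives a denoised reconstruction,
# i.e. a resolution better than any single BPM provides.
denoised = s[0] * np.outer(u[:, 0], vt[0])
```

The gap between the first and second singular values is what signals a single physical degree of freedom in the data.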
Model Independent Analysis of Beam Centroid Dynamics in Accelerators
International Nuclear Information System (INIS)
Wang, Chun-xi
2003-01-01
Fundamental issues in Beam-Position-Monitor (BPM)-based beam dynamics observations are studied in this dissertation. The major topic is the Model-Independent Analysis (MIA) of beam centroid dynamics. Conventional beam dynamics analysis requires a certain machine model, which itself often needs to be refined by beam measurements. Instead of using any particular machine model, MIA relies on a statistical analysis of the vast amount of BPM data that often can be collected non-invasively during normal machine operation. There are two major parts in MIA. One is noise reduction and degrees-of-freedom analysis using a singular value decomposition of a BPM-data matrix, which constitutes a principal component analysis of BPM data. The other is a physical base decomposition of the BPM-data matrix based on the time structure of pulse-by-pulse beam and/or machine parameters. The combination of these two methods allows one to break the resolution limit set by individual BPMs and observe beam dynamics at more accurate levels. A physical base decomposition is particularly useful for understanding various beam dynamics issues. MIA improves observation and analysis of beam dynamics and thus leads to better understanding and control of beams in both linacs and rings. The statistical nature of MIA makes it potentially useful in other fields. Another important topic discussed in this dissertation is the measurement of a nonlinear Poincaré section (one-turn) map in circular accelerators. The beam dynamics in a ring is intrinsically nonlinear. In fact, nonlinearities are a major factor that limits stability and influences the dynamics of halos. The Poincaré section map plays a basic role in characterizing and analyzing such a periodic nonlinear system. Although many kinds of nonlinear beam dynamics experiments have been conducted, no direct measurement of a nonlinear map has been reported for a ring in normal operation mode. This dissertation analyzes various issues concerning map
Research on Centroid Position for Stairs Climbing Stability of Search and Rescue Robot
Directory of Open Access Journals (Sweden)
Yan Guo
2011-01-01
Full Text Available This paper presents the relationship between stair-climbing stability and the centroid position of a search and rescue robot. The robot system is considered as a mass point-plane model, and its kinematic features are analyzed to find the relationship between the centroid position and the maximal pitch angle of stairs the robot can climb. A computable function for this relationship is given in this paper. During stair climbing there is a maximal stability-keeping angle that depends on the centroid position and the pitch angle of the stairs, and a numerical formula is developed for the relationship between the maximal stability-keeping angle, the centroid position, and the stair pitch angle. An experiment demonstrates the trustworthiness and correctness of the method in the paper.
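A hedged sketch of the kind of relationship involved (a textbook rigid-body tip-over condition, not the paper's own formula): on a slope, the robot stays statically stable while the gravity line through its centroid falls inside the rearmost track contact point.

```python
import math

def max_stable_pitch(x_c, z_c):
    """Simplified static tip-over condition for a tracked robot on stairs:
    stable while the gravity vector through the centroid passes inside the
    rear contact edge. x_c is the horizontal distance from the rear contact
    edge to the centroid, z_c its height (same units). Returns the maximal
    pitch angle in degrees. A textbook sketch, not the paper's formula."""
    return math.degrees(math.atan2(x_c, z_c))
```

Consistent with the paper's theme, lowering the centroid (smaller z_c) or shifting it toward the slope (larger x_c) increases the climbable pitch angle.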
Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.
2013-09-01
The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a scaling distribution has been proposed. Shallower DBMS values are found for the southwestern region: as low as 22 km in the southwestern Deccan-trap-covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal, compositional, or petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
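For context, the conventional centroid method estimates the centroid depth $z_0$ and top depth $z_t$ of the magnetic layer from the low- and high-wavenumber slopes of the radially averaged power spectrum $P(k)$, and combines them to get the bottom depth. These are the standard relations of the centroid method (the modified method additionally corrects the spectrum for the scaling exponent of the source distribution):

```latex
\ln\frac{\sqrt{P(k)}}{|k|} \simeq \ln A - |k|\,z_0, \qquad
\ln\sqrt{P(k)} \simeq \ln B - |k|\,z_t, \qquad
z_b = 2\,z_0 - z_t .
```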
The efficiency of the centroid method compared to a simple average
DEFF Research Database (Denmark)
Eskildsen, Jacob Kjær; Kristensen, Kai; Nielsen, Rikke
Based on empirical data as well as a simulation study, this paper gives recommendations with respect to situations where a simple average of the manifest indicators can be used as a close proxy for the centroid method and when it cannot.
International Nuclear Information System (INIS)
Penev, I.; Andrejtscheff, W.; Protochristov, Ch.; Zhelev, Zh.
1987-01-01
In applications of the generalized centroid shift method with germanium detectors, the energy dependences of the time centroids of prompt photopeaks (the zero-time line) and of Compton background events reveal a peculiar behavior, crossing each other at about 100 keV. The effect is plausibly explained as associated with the ratio of γ-quanta undergoing the photoeffect and Compton scattering, respectively, at the boundaries of the detector. (orig.)
International Nuclear Information System (INIS)
Endo, I.; Kawamoto, T.; Mizuno, Y.; Ohsugi, T.; Taniguchi, T.; Takeshita, T.
1981-01-01
We have investigated the systematic error associated with charge centroid evaluation for the cathode-readout multiwire proportional chamber. Correction curves for the systematic error according to six centroid-finding algorithms have been obtained by using the charge distribution calculated in a simple electrostatic model. They have been experimentally examined and proved to be essential for the accurate determination of the irradiated position. (orig.)
Empirical Centroid Fictitious Play: An Approach For Distributed Learning In Multi-Agent Games
Swenson, Brian; Kar, Soummya; Xavier, Joao
2013-01-01
The paper is concerned with distributed learning in large-scale games. The well-known fictitious play (FP) algorithm is addressed, which, despite theoretical convergence results, might be impractical to implement in large-scale settings due to intense computation and communication requirements. An adaptation of the FP algorithm, designated as the empirical centroid fictitious play (ECFP), is presented. In ECFP players respond to the centroid of all players' actions rather than track and respo...
Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas
2017-06-01
A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.
Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas
2017-06-07
A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.
Radiographic measures of thoracic kyphosis in osteoporosis: Cobb and vertebral centroid angles
International Nuclear Information System (INIS)
Briggs, A.M.; Greig, A.M.; Wrigley, T.V.; Tully, E.A.; Adams, P.E.; Bennell, K.L.
2007-01-01
Several measures can quantify thoracic kyphosis from radiographs, yet their suitability for people with osteoporosis remains uncertain. The aim of this study was to examine the validity and reliability of the vertebral centroid and Cobb angles in people with osteoporosis. Lateral radiographs of the thoracic spine were captured in 31 elderly women with osteoporosis. Thoracic kyphosis was measured globally (T1-T12) and regionally (T4-T9) using Cobb and vertebral centroid angles. Multisegmental curvature was also measured by fitting polynomial functions to the thoracic curvature profile. Canonical and Pearson correlations were used to examine correspondence; agreement between measures was examined with linear regression. Moderate to high intra- and inter-rater reliability was achieved (SEM = 0.9-4.0°). Concurrent validity of the simple measures was established against multisegmental curvature (r = 0.88-0.98). Strong association was observed between the Cobb and centroid angles globally (r = 0.84) and regionally (r = 0.83). Correspondence between measures was moderate for the Cobb method (r = 0.72), yet stronger for the centroid method (r = 0.80). The Cobb angle was 20% greater for regional measures due to the influence of endplate tilt. Regional Cobb and centroid angles are valid and reliable measures of thoracic kyphosis in people with osteoporosis. However, the Cobb angle is biased by endplate tilt, suggesting that the centroid angle is more appropriate for this population. (orig.)
Radiographic measures of thoracic kyphosis in osteoporosis: Cobb and vertebral centroid angles
Energy Technology Data Exchange (ETDEWEB)
Briggs, A.M.; Greig, A.M. [University of Melbourne, Centre for Health, Exercise and Sports Medicine, School of Physiotherapy, Victoria (Australia); University of Melbourne, Department of Medicine, Royal Melbourne Hospital, Victoria (Australia); Wrigley, T.V.; Tully, E.A.; Adams, P.E.; Bennell, K.L. [University of Melbourne, Centre for Health, Exercise and Sports Medicine, School of Physiotherapy, Victoria (Australia)
2007-08-15
Several measures can quantify thoracic kyphosis from radiographs, yet their suitability for people with osteoporosis remains uncertain. The aim of this study was to examine the validity and reliability of the vertebral centroid and Cobb angles in people with osteoporosis. Lateral radiographs of the thoracic spine were captured in 31 elderly women with osteoporosis. Thoracic kyphosis was measured globally (T1-T12) and regionally (T4-T9) using Cobb and vertebral centroid angles. Multisegmental curvature was also measured by fitting polynomial functions to the thoracic curvature profile. Canonical and Pearson correlations were used to examine correspondence; agreement between measures was examined with linear regression. Moderate to high intra- and inter-rater reliability was achieved (SEM = 0.9-4.0°). Concurrent validity of the simple measures was established against multisegmental curvature (r = 0.88-0.98). Strong association was observed between the Cobb and centroid angles globally (r = 0.84) and regionally (r = 0.83). Correspondence between measures was moderate for the Cobb method (r = 0.72), yet stronger for the centroid method (r = 0.80). The Cobb angle was 20% greater for regional measures due to the influence of endplate tilt. Regional Cobb and centroid angles are valid and reliable measures of thoracic kyphosis in people with osteoporosis. However, the Cobb angle is biased by endplate tilt, suggesting that the centroid angle is more appropriate for this population. (orig.)
Statistical analysis of x-ray stress measurement by centroid method
International Nuclear Information System (INIS)
Kurita, Masanori; Amano, Jun; Sakamoto, Isao
1982-01-01
The X-ray technique allows a nondestructive and rapid measurement of residual stresses in metallic materials. The centroid method has an advantage over other X-ray methods in that it can determine the angular position of a diffraction line, from which the stress is calculated, even with an asymmetrical line profile. An equation for the standard deviation of the angular position of a diffraction line, σ_p, caused by statistical fluctuation was derived; this fluctuation is a fundamental source of scatter in X-ray stress measurements. The equation shows that an increase of X-ray counts by a factor of k results in a decrease of σ_p by a factor of 1/√k. It also shows that σ_p increases rapidly as the angular range used in calculating the centroid increases. It is therefore important to calculate the centroid using the narrow angular range between the two ends of the diffraction line, where it starts to deviate from the straight background line. Using quenched structural steels JIS S35C and S45C, the residual stresses and their standard deviations were calculated by the centroid, parabola, Gaussian curve, and half-width methods, and the results were compared. The centroid of a diffraction line was affected greatly by the background line used. The standard deviation of the stress measured by the centroid method was found to be the largest among the four methods. (author)
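A minimal sketch of the centroid computation described above: a straight background is drawn between the two ends of the scanned range and the centre of gravity of the net line profile is taken. This illustrates the idea only; it is not the authors' exact procedure.

```python
def diffraction_centroid(two_theta, intensity):
    """Centroid (centre of gravity) of a diffraction line after removing a
    straight background drawn between the two ends of the scanned range."""
    n = len(two_theta)
    # Linear background interpolated between the first and last points.
    bg = [intensity[0] + (intensity[-1] - intensity[0]) * i / (n - 1)
          for i in range(n)]
    net = [max(y - b, 0.0) for y, b in zip(intensity, bg)]
    total = sum(net)
    return sum(t * y for t, y in zip(two_theta, net)) / total
```

Because every point in the chosen range contributes to the sums, widening the range adds background-dominated points whose counting noise inflates the scatter, which is why the abstract recommends the narrowest range spanning the line.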
A quantum generalization of intrinsic reaction coordinate using path integral centroid coordinates
International Nuclear Information System (INIS)
Shiga, Motoyuki; Fujisaki, Hiroshi
2012-01-01
We propose a generalization of the intrinsic reaction coordinate (IRC) for quantum many-body systems described in terms of the mass-weighted ring polymer centroids in the imaginary-time path integral theory. This novel kind of reaction coordinate, which may be called the "centroid IRC," corresponds to the minimum free energy path connecting reactant and product states with the least amount of reversible work applied to the centers of mass of the quantum nuclei, i.e., the centroids. We provide a numerical procedure to obtain the centroid IRC based on first principles by combining ab initio path integral simulation with the string method. This approach is applied to the NH3 molecule and the N2H5− ion, as well as their deuterated isotopomers, to study the importance of nuclear quantum effects in intramolecular and intermolecular proton transfer reactions. We find that, in the intramolecular proton transfer (inversion) of NH3, the free energy barrier for the centroid variables decreases by about 20% compared to the classical one at room temperature. In the intermolecular proton transfer of N2H5−, the centroid IRC deviates significantly from the "classical" IRC, and the free energy barrier is reduced by the quantum effects even more drastically.
Directory of Open Access Journals (Sweden)
Christine Laurendeau
2010-01-01
Full Text Available Increasingly ubiquitous wireless technologies require novel localization techniques to pinpoint the position of an uncooperative node, whether the target is a malicious device engaging in a security exploit or a low-battery handset in the middle of a critical emergency. Such scenarios necessitate that a radio signal source be localized by other network nodes efficiently, using minimal information. We propose two new algorithms for estimating the position of an uncooperative transmitter, based on the received signal strength (RSS of a single target message at a set of receivers whose coordinates are known. As an extension to the concept of centroid localization, our mechanisms weigh each receiver's coordinates based on the message's relative RSS at that receiver, with respect to the span of RSS values over all receivers. The weights may decrease from the highest RSS receiver either linearly or exponentially. Our simulation results demonstrate that for all but the most sparsely populated wireless networks, our exponentially weighted mechanism localizes a target node within the regulations stipulated for emergency services location accuracy.
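The weighted-centroid idea can be sketched as follows; the specific linear and exponential weight functions below are assumptions patterned on the description above, not the paper's published expressions.

```python
import math

def weighted_centroid(receivers, rss, mode="linear"):
    """Estimate a transmitter position as an RSS-weighted centroid of the
    receiver coordinates. Each weight grows with the receiver's RSS relative
    to the span of RSS values over all receivers; 'linear' and 'exponential'
    mirror the two schemes discussed (weight shapes are illustrative)."""
    lo, hi = min(rss), max(rss)
    span = (hi - lo) or 1.0
    if mode == "linear":
        w = [(r - lo) / span for r in rss]
    else:
        w = [math.exp((r - lo) / span) for r in rss]
    total = sum(w)
    x = sum(wi * p[0] for wi, p in zip(w, receivers)) / total
    y = sum(wi * p[1] for wi, p in zip(w, receivers)) / total
    return x, y
```

Only the coordinates of the receivers and the relative RSS of a single message are needed, which is what makes the scheme attractive for uncooperative targets.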
International Nuclear Information System (INIS)
Romero, Vicente J.; Burkardt, John V.; Gunzburger, Max D.; Peterson, Janet S.
2006-01-01
A recently developed centroidal Voronoi tessellation (CVT) sampling method is investigated here to assess its suitability for use in statistical sampling applications. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. On several 2-D test problems CVT has recently been found to provide exceedingly effective and efficient point distributions for response surface generation. Additionally, for statistical function integration and estimation of response statistics associated with uniformly distributed random-variable inputs (uncorrelated), CVT has been found in initial investigations to provide superior point sets when compared against Latin hypercube and simple-random Monte Carlo methods and Halton and Hammersley quasi-random sequence methods. In this paper, the performance of all these sampling methods and a new variant ('Latinized' CVT) is further compared for non-uniform input distributions. Specifically, given uncorrelated normal inputs in a 2-D test problem, statistical sampling efficiencies are compared for resolving various statistics of response: mean, variance, and exceedance probabilities.
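A probabilistic Lloyd iteration is one simple way to construct a CVT sample set on the unit square: assign random samples to their nearest generator, then move each generator to the mean (centroid) of its samples. This is a sketch of the general technique; the sample counts and iteration numbers are arbitrary, and the cited method's implementation may differ.

```python
import random

def cvt(n_gen=8, n_samples=4000, iters=15, seed=1):
    """Centroidal Voronoi tessellation generators on the unit square via
    probabilistic Lloyd iteration."""
    rng = random.Random(seed)
    gens = [(rng.random(), rng.random()) for _ in range(n_gen)]
    for _ in range(iters):
        acc = [[0.0, 0.0, 0] for _ in gens]   # x-sum, y-sum, count per cell
        for _ in range(n_samples):
            p = (rng.random(), rng.random())
            # Index of the nearest generator (its Voronoi cell contains p).
            k = min(range(n_gen),
                    key=lambda i: (p[0] - gens[i][0]) ** 2
                                + (p[1] - gens[i][1]) ** 2)
            acc[k][0] += p[0]; acc[k][1] += p[1]; acc[k][2] += 1
        # Move each generator to the centroid of its cell's samples.
        gens = [(a[0] / a[2], a[1] / a[2]) if a[2] else g
                for a, g in zip(acc, gens)]
    return gens
```

The iteration drives the generators toward a highly uniform layout, which is the property exploited for sampling and response surface generation.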
Segmentation of arterial vessel wall motion to sub-pixel resolution using M-mode ultrasound.
Fancourt, Craig; Azer, Karim; Ramcharan, Sharmilee L; Bunzel, Michelle; Cambell, Barry R; Sachs, Jeffrey R; Walker, Matthew
2008-01-01
We describe a method for segmenting arterial vessel wall motion to sub-pixel resolution, using the returns from M-mode ultrasound. The technique involves measuring the spatial offset between all pairs of scans from their cross-correlation, converting the spatial offsets to relative wall motion through a global optimization, and finally translating from relative to absolute wall motion by interpolation over the M-mode image. The resulting detailed wall distension waveform has the potential to enhance existing vascular biomarkers, such as strain and compliance, as well as enable new ones.
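The step of converting pairwise spatial offsets into relative wall motion can be sketched as a linear least-squares problem under the assumption that each measured offset satisfies d_ij ≈ x_j − x_i; this is a generic formulation, not the authors' exact solver.

```python
import numpy as np

def absolute_from_pairwise(n, pairs):
    """Recover positions x_0..x_{n-1} (up to an overall shift, fixed here by
    the gauge constraint x_0 = 0) from noisy pairwise offsets d_ij = x_j - x_i,
    via linear least squares over all measured pairs."""
    rows, rhs = [], []
    for i, j, d in pairs:
        r = np.zeros(n)
        r[i], r[j] = -1.0, 1.0    # row encodes x_j - x_i
        rows.append(r)
        rhs.append(d)
    rows.append(np.eye(n)[0])     # gauge constraint: x_0 = 0
    rhs.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

Using all scan pairs rather than only adjacent ones averages down the per-pair correlation noise, which is what enables sub-pixel wall-motion estimates.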
Directory of Open Access Journals (Sweden)
Drzewiecki Wojciech
2016-12-01
Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.
Frazier, Amy E.
Invasive species disrupt landscape patterns and compromise the functionality of ecosystem processes. Non-native saltcedar (Tamarix spp.) poses significant threats to native vegetation and groundwater resources in the southwestern U.S. and Mexico, and quantifying spatial and temporal distribution patterns is essential for monitoring its spread. Advanced remote sensing classification techniques such as sub-pixel classifications are able to detect and discriminate saltcedar from native vegetation with high accuracy, but these types of classifications are not compatible with landscape metrics, which are the primary tool available for statistically assessing distribution patterns, because they do not have discrete class boundaries. The objective of this research is to develop new methods that allow sub-pixel classifications to be analyzed using landscape metrics. The research will be carried out through three specific aims: (1) develop and test a method to transform continuous sub-pixel classifications into categorical representations that are compatible with widely used landscape metric tools, (2) establish a gradient-based concept of landscape using sub-pixel classifications and the technique developed in the first objective to explore the relationships between pattern and process, and (3) generate a new super-resolution mapping technique method to predict the spatial locations of fractional land covers within a pixel. Results show that the threshold gradient method is appropriate for discretizing sub-pixel data, and can be used to generate increased information about the landscape compared to traditional single-value metrics. Additionally, the super-resolution classification technique was also able to provide detailed sub-pixel mapping information, but additional work will be needed to develop rigorous validation and accuracy assessment techniques.
Comparison of BiLinearly Interpolated Subpixel Sensitivity Mapping and Pixel-Level Decorrelation
Challener, Ryan C.; Harrington, Joseph; Cubillos, Patricio; Foster, Andrew S.; Deming, Drake; WASP Consortium
2016-10-01
Exoplanet eclipse signals are weaker than the systematics present in the Spitzer Space Telescope's Infrared Array Camera (IRAC), and thus the correction method can significantly impact a measurement. BiLinearly Interpolated Subpixel Sensitivity (BLISS) mapping calculates the sensitivity of the detector on a subpixel grid and corrects the photometry for any sensitivity variations. Pixel-Level Decorrelation (PLD) removes the sensitivity variations by considering the relative intensities of the pixels around the source. We applied both methods to WASP-29b, a Saturn-sized planet with a mass of 0.24 ± 0.02 Jupiter masses and a radius of 0.84 ± 0.06 Jupiter radii, which we observed during eclipse twice with the 3.6 µm and once with the 4.5 µm channels of IRAC aboard Spitzer in 2010 and 2011 (programs 60003 and 70084, respectively). We compared the results of BLISS and PLD, and comment on each method's ability to remove time-correlated noise. WASP-29b exhibits a strong detection at 3.6 µm and no detection at 4.5 µm. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
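The core operation of a BLISS-style correction, reading a sensitivity value off a subpixel grid by bilinear interpolation, can be sketched with a generic bilinear kernel (not the pipeline's actual implementation):

```python
import math

def bilinear(grid, x, y):
    """Bilinearly interpolate a subpixel sensitivity grid (list of rows)
    at fractional position (x, y). Assumes (x, y) lies inside the grid so
    that all four neighbouring knots exist."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * grid[y0][x0]
            + fx * (1 - fy) * grid[y0][x0 + 1]
            + (1 - fx) * fy * grid[y0 + 1][x0]
            + fx * fy * grid[y0 + 1][x0 + 1])
```

Dividing each photometric point by the interpolated sensitivity at the stellar centroid's subpixel position removes the intrapixel variation that would otherwise swamp the eclipse depth.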
The effect of event shape on centroiding in photon counting detectors
International Nuclear Information System (INIS)
Kawakami, Hajime; Bone, David; Fordham, John; Michel, Raul
1994-01-01
High resolution, CCD readout, photon counting detectors employ simple centroiding algorithms to define the spatial position of each detected event. The accuracy of centroiding depends strongly on a number of parameters, including the profile, energy and width of the intensified event. In this paper, we analyse how the characteristics of an intensified event change as the input count rate increases, and the consequent effect on centroiding. The changes in these parameters are applied in particular to the MIC photon counting detector developed at UCL for ground- and space-based astronomical applications. This detector has a maximum format of 3072×2304 pixels, permitting its use in the highest resolution applications. Individual events, at light levels from 5 to 1000k events/s over the detector area, were analysed. Both the asymmetry and width of event profiles were found to be strongly dependent upon the energy of the intensified event. The variation in profile then affected the centroiding accuracy, leading to loss of resolution. These inaccuracies have been quantified for two different 3-CCD-pixel centroiding algorithms and one 2-pixel algorithm. The results show that a maximum error of less than 0.05 CCD pixel occurs with the 3-pixel algorithms and 0.1 CCD pixel with the 2-pixel algorithm. An improvement is proposed by utilising straight pore MCPs in the intensifier and a 70 μm air gap in front of the CCD.
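For reference, the standard 3-point center-of-mass centroid (and a cruder 2-point variant) can be sketched as follows; the abstract does not give the exact algorithms used with the MIC detector, so this is only an illustrative sketch of the common form:

```python
def centroid3(i_minus, i_center, i_plus, x0=0.0):
    """3-pixel center-of-mass centroid around the peak pixel at x0.
    Returns a sub-pixel position; assumes i_center is the local maximum."""
    total = i_minus + i_center + i_plus
    return x0 + (i_plus - i_minus) / total

def centroid2(i_left, i_right, x0=0.0):
    """2-pixel centroid between the two brightest pixels; cruder, which is
    consistent with the larger error reported for 2-pixel algorithms."""
    return x0 + i_right / (i_left + i_right)   # position in [0, 1]

# Symmetric event: the centroid falls on the peak pixel.
print(centroid3(20, 60, 20))    # 0.0
# Asymmetric event (e.g. a profile skewed at high count rates):
print(centroid3(10, 60, 30))    # 0.2
```

An asymmetric event profile biases the 3-point estimate away from the true peak, which is exactly the count-rate-dependent centroiding error the paper quantifies.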
Automatic centroid detection and surface measurement with a digital Shack–Hartmann wavefront sensor
International Nuclear Information System (INIS)
Yin, Xiaoming; Zhao, Liping; Li, Xiang; Fang, Zhongping
2010-01-01
With recent breakthroughs in manufacturing technologies, the measurement of surface profiles is becoming a significant challenge. A Shack–Hartmann wavefront sensor (SHWS) provides a promising technology for non-contact surface measurement, with a number of advantages over interferometry. The SHWS splits the incident wavefront into many subsections and transfers distorted wavefront detection into a centroid measurement, so the accuracy of the centroid measurement determines the accuracy of the SHWS. In this paper, we present a new centroid measurement algorithm based on adaptive thresholding and dynamic windowing, using image-processing techniques. Based on this centroid detection method, we have developed a digital SHWS system which can automatically detect centroids of focal spots, reconstruct the wavefront and measure the 3D profile of the surface. The system has been tested with various simulated and real surfaces such as flat, spherical, aspherical and deformable surfaces. The experimental results demonstrate that the system has good accuracy, repeatability and immunity to optical misalignment. The system is also suitable for on-line surface measurement applications.
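The thresholding-plus-windowing idea can be illustrated with a minimal sketch (the toy image and parameter values are hypothetical; the actual algorithm adapts the threshold and window per focal spot):

```python
def spot_centroid(img, threshold, window=None):
    """Intensity-weighted centroid of a focal spot after subtracting a
    threshold (values at or below it are ignored). `window` is an optional
    (row0, row1, col0, col1) region; restricting the sum to a window
    around the spot suppresses neighbouring spots and background noise."""
    r0, r1, c0, c1 = window or (0, len(img), 0, len(img[0]))
    s = sx = sy = 0.0
    for y in range(r0, r1):
        for x in range(c0, c1):
            v = img[y][x] - threshold
            if v > 0:
                s += v
                sx += v * x
                sy += v * y
    return (sx / s, sy / s)

spot = [[0, 1, 0, 0],
        [1, 9, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 5]]   # bright noise pixel far from the spot
biased = spot_centroid(spot, 0)                 # noise drags the centroid
clean = spot_centroid(spot, 1, (0, 3, 0, 3))    # threshold + window
print(biased, clean)                            # clean is (1.0, 1.0)
```

Without the threshold and window, the isolated bright pixel pulls the estimate away from the true spot centre; with both applied, the centroid lands on the peak.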
International Nuclear Information System (INIS)
Zhan, Shuyue; Wang, Xiaoping; Liu, Yuling
2011-01-01
To simplify the algorithm for determining the surface plasmon resonance (SPR) angle for special applications and development trends, a fast method for determining the SPR angle, called the fixed-boundary centroid algorithm, is proposed. Two experiments were conducted to compare three centroid algorithms in terms of operation time, sensitivity to shot noise, signal-to-noise ratio (SNR), resolution, and measurement range. Although the measurement range of this method was narrower, the other performance indices were all better than those of the other two centroid methods. The method offers high speed, good conformity, low error, and high SNR and resolution, and thus has the potential to be widely adopted.
CSIR Research Space (South Africa)
Landmann, T
2003-07-01
Burn severity was quantitatively mapped using a unique linear spectral mixture model to determine sub-pixel abundances of different ashes and combustion completeness measured on the corresponding fire-affected pixels in Landsat data. A new burn...
Centroid vetting of transiting planet candidates from the Next Generation Transit Survey
Günther, Maximilian N.; Queloz, Didier; Gillen, Edward; McCormac, James; Bayliss, Daniel; Bouchy, Francois; Walker, Simon. R.; West, Richard G.; Eigmüller, Philipp; Smith, Alexis M. S.; Armstrong, David J.; Burleigh, Matthew; Casewell, Sarah L.; Chaushev, Alexander P.; Goad, Michael R.; Grange, Andrew; Jackman, James; Jenkins, James S.; Louden, Tom; Moyano, Maximiliano; Pollacco, Don; Poppenhaeger, Katja; Rauer, Heike; Raynard, Liam; Thompson, Andrew P. G.; Udry, Stéphane; Watson, Christopher A.; Wheatley, Peter J.
2017-11-01
The Next Generation Transit Survey (NGTS), operating in Paranal since 2016, is a wide-field survey to detect Neptunes and super-Earths transiting bright stars, which are suitable for precise radial velocity follow-up and characterization. Its sub-mmag photometric precision and ability to identify false positives are therefore crucial. In particular, variable background objects blended in the photometric aperture frequently mimic Neptune-sized transits and are costly in follow-up time. These objects can best be identified with the centroiding technique: if the photometric flux is lost off-centre during an eclipse, the flux centroid shifts towards the centre of the target star. Although this method has been successfully employed by the Kepler mission, it had not previously been implemented from the ground. We present a fully automated centroid vetting algorithm developed for NGTS, enabled by our high-precision autoguiding. Our method detects centroid shifts with an average precision of 0.75 milli-pixel (mpix), and down to 0.25 mpix for specific targets, for a pixel size of 4.97 arcsec. The algorithm is now part of the NGTS candidate vetting pipeline and is automatically employed for all detected signals. Further, we develop a joint Bayesian fitting model for all photometric and centroid data, allowing us to disentangle which object (target or background) is causing the signal, and what its astrophysical parameters are. We demonstrate our method on two NGTS objects of interest. These achievements make NGTS the first ground-based wide-field transit survey to successfully apply the centroiding technique for automated candidate vetting, enabling the production of a robust candidate list before follow-up.
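The centroiding principle, flux lost off-centre shifting the centroid toward the target, can be illustrated with a toy two-source aperture (fluxes and positions here are invented for illustration, not NGTS data):

```python
def flux_centroid(img):
    """Flux-weighted centroid (x, y) of an image given as a 2D list."""
    s = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            s += v
            sx += v * x
            sy += v * y
    return (sx / s, sy / s)

def frame(target_flux, blend_flux):
    """Toy aperture: target star at x=1, blended background object at x=3."""
    img = [[0.0] * 5 for _ in range(3)]
    img[1][1] = target_flux
    img[1][3] = blend_flux
    return img

out_of_transit = flux_centroid(frame(100.0, 10.0))
in_transit = flux_centroid(frame(100.0, 5.0))   # background object eclipses
# Flux is lost off-centre, so the centroid moves toward the target star.
print(out_of_transit[0], in_transit[0])
```

Here the dimming background object makes the x-centroid shift toward the target at x=1, the signature used to flag the blend as the true source of the "transit".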
Fast Template-based Shape Analysis using Diffeomorphic Iterative Centroid
Cury , Claire; Glaunès , Joan Alexis; Chupin , Marie; Colliot , Olivier
2014-01-01
A common approach for the analysis of anatomical variability relies on the estimation of a representative template of the population, followed by the study of this population based on the parameters of the deformations going from the template to the population. The Large Deformation Diffeomorphic Metric Mapping framework is widely used for shape analysis of anatomical structures, but computing a template with such a framework is computationally expensive. In this paper w...
Robustness of regularities for energy centroids in the presence of random interactions
International Nuclear Information System (INIS)
Zhao, Y.M.; Arima, A.; Yoshida, N.; Ogawa, K.; Yoshinaga, N.; Kota, V. K. B.
2005-01-01
In this paper we study energy centroids, such as those with fixed spin and isospin and those with fixed irreducible representations, for both bosons and fermions in the presence of random two-body and/or three-body interactions. Our results show that the regularities of energy centroids of fixed-spin states reported in earlier works are very robust in these more complicated cases. We suggest that these behaviors might be intrinsic features of quantum many-body systems interacting by random forces.
Donovan, J.; Jordan, T. H.
2012-12-01
Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).
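As a hypothetical 1-D illustration (not the exact degree-two space-time moment definition of McGuire et al.), a directivity measure for uniform slip can be written as the hypocenter-to-centroid offset normalised by the fault half-length:

```python
# Hypothetical 1-D illustration: for uniform slip on a fault of length L,
# a simple directivity measure is the hypocenter-to-centroid offset
# normalised by the half-length. It is 0 for a bilateral rupture
# (hypocenter at the centre) and 1 for a unilateral one (hypocenter at
# an end), matching the range of the parameter D described above.

def directivity(hypocenter, fault_length):
    centroid = fault_length / 2.0      # centroid of uniform slip
    return abs(centroid - hypocenter) / (fault_length / 2.0)

print(directivity(5.0, 10.0))   # 0.0  bilateral
print(directivity(0.0, 10.0))   # 1.0  unilateral
print(directivity(3.0, 10.0))   # 0.4  intermediate
```

A "centroid-biased" conditional hypocenter distribution, in the paper's sense, concentrates probability at hypocenters with small values of this offset, yielding more nearly bilateral ruptures than a uniform distribution would.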
International Nuclear Information System (INIS)
Dorenbos, Pieter
2013-01-01
A review of the wavelengths of all five 4f–5d transitions for Ce3+ in about 150 different inorganic compounds (fluorides, chlorides, bromides, iodides, oxides, sulfides, selenides, nitrides) is presented. It provides data on the centroid shift and the crystal field splitting of the 5d-configuration, which are then used to estimate the Eu2+ inter-4f-electron Coulomb repulsion energy U(6,A) in compound A. The four semi-empirical models (the redshift model, the centroid shift model, the charge transfer model, and the chemical shift model) of lanthanide levels developed over the past 12 years are briefly reviewed. It is demonstrated how these models, together with the data collected in this work and elsewhere, can be united to construct schemes that contain the binding energy of electrons in the 4f and 5d states for each divalent and each trivalent lanthanide ion relative to the vacuum energy. As examples, the vacuum referred binding energy schemes for LaF3 and La2O3 are constructed. - Highlights: ► A compilation of all five Ce3+ 4f–5d energies in 150 inorganic compounds is presented. ► The relationship between the 5d centroid shift and host cation electronegativity is demonstrated. ► The electronic structure scheme of the lanthanides in La2O3 and LaF3 is presented.
Finger vein identification using fuzzy-based k-nearest centroid neighbor classifier
Rosdi, Bakhtiar Affendi; Jaafar, Haryati; Ramli, Dzati Athiar
2015-02-01
In this paper, a new approach for personal identification using finger vein images is presented. Finger vein is an emerging type of biometrics that has attracted the attention of researchers in the biometrics area. Compared to other biometric traits such as face, fingerprint and iris, finger vein is more secure and harder to counterfeit since the features are inside the human body. So far, most researchers have focused on how to extract robust features from the captured vein images; much less research has been conducted on the classification of the extracted features. In this paper, a new classifier called fuzzy-based k-nearest centroid neighbor (FkNCN) is applied to classify the finger vein image. The proposed FkNCN employs a surrounding rule to obtain the k-nearest centroid neighbors based on the spatial distributions of the training images and their distance to the test image. The fuzzy membership function is then utilized to assign the test image to the class which is most frequently represented by the k-nearest centroid neighbors. Experimental evaluation using our own database, collected from 492 fingers, shows that the proposed FkNCN outperforms the k-nearest neighbor, k-nearest centroid neighbor and fuzzy-based k-nearest neighbor classifiers. This shows that the proposed classifier is able to identify finger vein images effectively.
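A minimal sketch of the underlying k-nearest centroid neighbour (kNCN) selection rule, here with a plain majority vote (the paper's FkNCN additionally weights the vote with a fuzzy membership function, which is omitted in this sketch; the toy data are invented):

```python
# kNCN sketch: neighbours are chosen greedily so that the centroid of the
# selected set stays as close as possible to the query point, which makes
# the chosen neighbours surround the query rather than cluster on one side.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def kncn(train, labels, x, k):
    chosen, rest = [], list(range(len(train)))
    for _ in range(k):
        # pick the candidate whose inclusion keeps the running centroid
        # closest to the query point x
        best = min(rest, key=lambda j: dist2(
            centroid([train[i] for i in chosen] + [train[j]]), x))
        chosen.append(best)
        rest.remove(best)
    votes = [labels[i] for i in chosen]
    return max(set(votes), key=votes.count)   # majority vote

train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
labels = ['a', 'a', 'a', 'b', 'b']
print(kncn(train, labels, (0.4, 0.4), 3))   # 'a'
```

The surrounding behaviour is what distinguishes kNCN from plain k-nearest neighbours, which consider only the individual distances.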
Centroid and Theoretical Rotation: Justification for Their Use in Q Methodology Research
Ramlo, Sue
2016-01-01
This manuscript's purpose is to introduce Q as a methodology before providing clarification about the preferred factor analytical choices of centroid and theoretical (hand) rotation. Stephenson, the creator of Q, designated that only these choices allowed for scientific exploration of subjectivity while not violating assumptions associated with…
Study on Zero-Doppler Centroid Control for GEO SAR Ground Observation
Directory of Open Access Journals (Sweden)
Yicheng Jiang
2014-01-01
In geosynchronous Earth orbit SAR (GEO SAR), Doppler centroid compensation is a key step in the imaging process, and it can be performed by attitude steering of the satellite platform. However, this zero-Doppler centroid control method does not work well when the look angle of the radar is outside an expected range. This paper first analyzes the Doppler properties of GEO SAR in an Earth rectangular coordinate system. Then, according to the actual conditions of GEO SAR ground observation, the effective range is given by the minimum and maximum possible look angles, which are directly related to the orbital parameters. Based on vector analysis, a new approach for zero-Doppler centroid control in GEO SAR, performing the attitude steering by a combination of pitch and roll rotation, is put forward. This approach, which accounts for the Earth's rotation and elliptical orbit effects, can accurately reduce the residual Doppler centroid. The simulation results verify the correctness of the look-angle range and the proposed steering method.
A double inequality for bounding Toader mean by the centroidal mean
Indian Academy of Sciences (India)
Proceedings – Mathematical Sciences; Volume 124; Issue 4. A double inequality for bounding Toader mean by the centroidal mean. Yun Hua, Feng Qi.
A double inequality for bounding Toader mean by the centroidal mean
Indian Academy of Sciences (India)
A double inequality for bounding Toader mean by the centroidal mean. YUN HUA (Department of Information Engineering, Weihai Vocational College, Weihai City, Shandong Province 264210, China) and FENG QI (College of Mathematics, Inner Mongolia University for Nationalities, Tongliao City, Inner Mongolia ...)
International Nuclear Information System (INIS)
Miller, S.L.; Fleetwood, D.M.; McWhorter, P.J.; Reber, R.A. Jr.; Murray, J.R.
1993-01-01
A general methodology is developed to experimentally characterize the spatial distribution of occupied traps in dielectric films on a semiconductor. The effects of parasitics such as leakage, charge transport through more than one interface, and interface trap charge are quantitatively addressed. Charge transport with contributions from multiple charge species is rigorously treated. The methodology is independent of the charge transport mechanism(s), and is directly applicable to multilayer dielectric structures. The centroid capacitance, rather than the centroid itself, is introduced as the fundamental quantity that permits the generic analysis of multilayer structures. In particular, the form of many equations describing stacked dielectric structures becomes independent of the number of layers comprising the stack if they are expressed in terms of the centroid capacitance and/or the flatband voltage. The experimental methodology is illustrated with an application using thermally stimulated current (TSC) measurements. The centroid of changes (via thermal emission) in the amount of trapped charge was determined for two different samples of a triple-layer dielectric structure. A direct consequence of the TSC analyses is the rigorous proof that changes in interface trap charge can contribute, though typically not significantly, to thermally stimulated current.
Oscillations of centroid position and surface area of soccer teams in small-sided games
Frencken, Wouter; Lemmink, Koen; Delleman, Nico; Visscher, Chris
2011-01-01
There is a need for a collective variable that captures the dynamics of team sports like soccer at match level. The centroid positions and surface areas of two soccer teams potentially describe the coordinated flow of attacking and defending in small-sided soccer games at team level. The aim of the
Directory of Open Access Journals (Sweden)
Md Khayrul Bashar
Accurate identification of cell nuclei and their tracking using three-dimensional (3D) microscopic images is a demanding task in many biological studies. Manual identification of nuclei centroids from images is an error-prone task, sometimes impossible to accomplish due to low contrast and the presence of noise. Nonetheless, only a few methods are available for 3D bioimaging applications, which sharply contrasts with 2D analysis, where many methods already exist. In addition, most methods essentially adopt segmentation, for which a reliable solution is still unknown, especially for 3D bio-images having juxtaposed cells. In this work, we propose a new method that can directly extract nuclei centroids from fluorescence microscopy images. This method involves three steps: (i) pre-processing, (ii) local enhancement, and (iii) centroid extraction. The first step includes two variations: the first (Variant-1) uses the whole 3D pre-processed image, whereas the second (Variant-2) reduces the pre-processed image to candidate regions or a candidate hybrid image for further processing. In the second step, multiscale cube filtering is employed to locally enhance the pre-processed image. Centroid extraction in the third step consists of three stages. In Stage-1, we compute a local characteristic ratio at every voxel and extract local maxima regions as candidate centroids using a ratio threshold. Stage-2 removes spurious centroids from the Stage-1 results by analyzing the shapes of intensity profiles from the enhanced image. An iterative procedure based on the nearest-neighborhood principle is then proposed to merge fragmented nuclei. Both qualitative and quantitative analyses on a set of 100 images of a 3D mouse embryo are performed. The technique achieves promising average sensitivity and precision (88.04% and 91.30% for Variant-1; 86.19% and 95.00% for Variant-2).
Zhang, Yunlu; Yan, Lei; Liou, Frank
2018-05-01
The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by a novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.
Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang
2008-03-01
Because of the complex thermal objects in an infrared image, prevalent image edge detection operators are often tuned to a particular scene and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when assuming a convolution-based receptive field architecture. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To handle the blurred edges of an infrared image, orthogonal polynomial interpolation and sub-pixel edge detection in the neighborhood of each rough edge pixel are then adopted to locate the rough edges at the sub-pixel level. Numerical simulations show that this method can locate the target edge accurately and robustly.
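A 1-D Difference-of-Gaussians kernel with an ON-center structure (narrow excitatory Gaussian minus a wider inhibitory one) can be sketched as follows; the sigma values and step signal are illustrative only, not the paper's parameters:

```python
import math

# 1-D DoG sketch: both Gaussians are normalised, so the DoG kernel sums
# to ~0 and responds to intensity changes (edges) rather than to flat
# regions, mimicking an ON-center receptive field.

def gaussian_kernel(sigma, radius):
    k = [math.exp(-x * x / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def dog_kernel(sigma_center, sigma_surround, radius):
    c = gaussian_kernel(sigma_center, radius)     # excitatory centre
    s = gaussian_kernel(sigma_surround, radius)   # inhibitory surround
    return [a - b for a, b in zip(c, s)]

def convolve(signal, kernel):
    r = len(kernel) // 2
    return [sum(signal[i + j - r] * kernel[j] for j in range(len(kernel)))
            for i in range(r, len(signal) - r)]

dog = dog_kernel(1.0, 2.0, 6)
step = [0.0] * 10 + [1.0] * 10          # an ideal edge
response = convolve(step, dog)
# The response magnitude peaks near the step; a flat signal gives ~0.
print(max(abs(v) for v in response))
```

In the paper's pipeline, the rough edges located by such a contour response would then be refined to sub-pixel positions by local polynomial interpolation.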
Mincewicz, Grzegorz; Rumiński, Jacek; Krzykowski, Grzegorz
2012-02-01
Recently, we described a model system which included corrections of high-resolution computed tomography (HRCT) bronchial measurements based on the adjusted subpixel method (ASM). To verify the clinical application of ASM by comparing bronchial measurements obtained by means of the traditional eye-driven method, subpixel method alone and ASM in a group comprised of bronchial asthma patients and healthy individuals. The study included 30 bronchial asthma patients and the control group comprised of 20 volunteers with no symptoms of asthma. The lowest internal and external diameters of the bronchial cross-sections (ID and ED) and their derivative parameters were determined in HRCT scans using: (1) traditional eye-driven method, (2) subpixel technique, and (3) ASM. In the case of the eye-driven method, lower ID values along with lower bronchial lumen area and its percentage ratio to total bronchial area were basic parameters that differed between asthma patients and healthy controls. In the case of the subpixel method and ASM, both groups were not significantly different in terms of ID. Significant differences were observed in values of ED and total bronchial area with both parameters being significantly higher in asthma patients. Compared to ASM, the eye-driven method overstated the values of ID and ED by about 30% and 10% respectively, while understating bronchial wall thickness by about 18%. Results obtained in this study suggest that the traditional eye-driven method of HRCT-based measurement of bronchial tree components probably overstates the degree of bronchial patency in asthma patients.
International Nuclear Information System (INIS)
Mincewicz, Grzegorz; Rumiński, Jacek; Krzykowski, Grzegorz
2012-01-01
Background: Recently, we described a model system which included corrections of high-resolution computed tomography (HRCT) bronchial measurements based on the adjusted subpixel method (ASM). Objective: To verify the clinical application of ASM by comparing bronchial measurements obtained by means of the traditional eye-driven method, subpixel method alone and ASM in a group comprised of bronchial asthma patients and healthy individuals. Methods: The study included 30 bronchial asthma patients and the control group comprised of 20 volunteers with no symptoms of asthma. The lowest internal and external diameters of the bronchial cross-sections (ID and ED) and their derivative parameters were determined in HRCT scans using: (1) traditional eye-driven method, (2) subpixel technique, and (3) ASM. Results: In the case of the eye-driven method, lower ID values along with lower bronchial lumen area and its percentage ratio to total bronchial area were basic parameters that differed between asthma patients and healthy controls. In the case of the subpixel method and ASM, both groups were not significantly different in terms of ID. Significant differences were observed in values of ED and total bronchial area with both parameters being significantly higher in asthma patients. Compared to ASM, the eye-driven method overstated the values of ID and ED by about 30% and 10% respectively, while understating bronchial wall thickness by about 18%. Conclusions: Results obtained in this study suggest that the traditional eye-driven method of HRCT-based measurement of bronchial tree components probably overstates the degree of bronchial patency in asthma patients.
Energy Technology Data Exchange (ETDEWEB)
Du Weiliang; Yang, James [Department of Radiation Physics, University of Texas M D Anderson Cancer Center, 1515 Holcombe Blvd, Unit 94, Houston, TX 77030 (United States)], E-mail: wdu@mdanderson.org
2009-02-07
Uncertainty in localizing the radiation field center is among the major components that contribute to the overall positional error and thus must be minimized. In this study, we developed a Hough transform (HT)-based computer algorithm to localize the radiation center of a circular or rectangular field with subpixel accuracy. We found that the HT method detected the centers of the test circular fields with an absolute error of 0.037 ± 0.019 pixels. On a typical electronic portal imager with 0.5 mm image resolution, this mean detection error was translated to 0.02 mm, which was much finer than the image resolution. It is worth noting that the subpixel accuracy described here does not include experimental uncertainties such as linac mechanical instability or room laser inaccuracy. The HT method was more accurate and more robust to image noise and artifacts than the traditional center-of-mass method. Application of the HT method in Winston-Lutz tests was demonstrated to measure the ball-radiation center alignment with subpixel accuracy. Finally, the method was applied to quantitative evaluation of the radiation center wobble during collimator rotation.
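A minimal sketch of Hough-transform circle-centre detection with sub-pixel refinement via an accumulator-weighted centroid; the grid size, radius and synthetic edge points are illustrative, not the paper's actual parameters or field data:

```python
import math

# HT sketch: each edge point votes for all centres lying at the known
# radius from it; the votes from all edge points coincide at the true
# centre. A 3x3 weighted centroid around the accumulator peak then
# refines the estimate below the accumulator cell size.

def hough_center(edge_points, radius, grid=40, step=0.25):
    acc = [[0] * grid for _ in range(grid)]
    for (ex, ey) in edge_points:
        for t in range(360):
            a = ex - radius * math.cos(math.radians(t))
            b = ey - radius * math.sin(math.radians(t))
            i, j = int(round(a / step)), int(round(b / step))
            if 0 <= i < grid and 0 <= j < grid:
                acc[j][i] += 1
    # locate the peak cell, then refine with a 3x3 weighted centroid
    pj, pi = max(((j, i) for j in range(grid) for i in range(grid)),
                 key=lambda c: acc[c[0]][c[1]])
    s = sx = sy = 0.0
    for j in range(pj - 1, pj + 2):
        for i in range(pi - 1, pi + 2):
            v = acc[j][i]
            s += v
            sx += v * i
            sy += v * j
    return (sx / s * step, sy / s * step)

true_cx, true_cy, r = 5.1, 4.9, 3.0
edges = [(true_cx + r * math.cos(math.radians(t)),
          true_cy + r * math.sin(math.radians(t))) for t in range(0, 360, 5)]
cx, cy = hough_center(edges, r)
print(round(cx, 2), round(cy, 2))
```

Because every edge point contributes a full voting circle, the estimate degrades gracefully when part of the edge is noisy or missing, which is consistent with the robustness advantage over the center-of-mass method noted above.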
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
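The similarity functions can be compared on a toy 1-D spot; an exhaustive search over candidate shifts is used here, whereas the fast search algorithms (TSS, TDL, CS, OS) only reduce which candidate shifts are visited (the spot profile and window are invented for illustration):

```python
# Three of the similarity functions named above, applied to recovering an
# integer shift between a reference spot and a "live" spot: ADF and SDF
# are minimised at the true shift, CCF is maximised.

def adf(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def sdf(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ccf(a, b):
    return sum(x * y for x, y in zip(a, b))

ref = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0]    # Gaussian-like spot
live = [0, 0, 0, 1, 4, 9, 4, 1, 0, 0]   # the same spot shifted by +2

def window(sig, shift):                  # central 6 samples at a shift
    return sig[2 + shift: 8 + shift]

shifts = range(-2, 3)
best_adf = min(shifts, key=lambda s: adf(window(live, s), window(ref, 0)))
best_sdf = min(shifts, key=lambda s: sdf(window(live, s), window(ref, 0)))
best_ccf = max(shifts, key=lambda s: ccf(window(live, s), window(ref, 0)))
print(best_adf, best_sdf, best_ccf)      # 2 2 2
```

All three functions agree on this noise-free example; the paper's comparison concerns their accuracy under varying light levels and their cost when paired with fast search patterns.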
Ivanov, Sergei D.; Witt, Alexander; Shiga, Motoyuki; Marx, Dominik
2010-01-01
Centroid molecular dynamics (CMD) is a popular method to extract approximate quantum dynamics from path integral simulations. Very recently we have shown that CMD gas phase infrared spectra exhibit significant artificial redshifts of stretching peaks, due to the so-called "curvature problem" imprinted by the effective centroid potential. Here we provide evidence that for condensed phases, and in particular for liquid water, CMD produces pronounced artificial redshifts for high-frequency vibrations such as the OH stretching band. This peculiar behavior intrinsic to the CMD method explains part of the unexpectedly large quantum redshifts of the stretching band of liquid water compared to classical frequencies, which is improved after applying a simple and rough "harmonic curvature correction."
Yoshiara, Luciane Yuri; Madeira, Tiago Bervelieri; Delaroza, Fernanda; da Silva, Josemeyre Bonifácio; Ida, Elza Iouko
2012-12-01
The objective of this study was to optimize the extraction of different isoflavone forms (glycosidic, malonyl-glycosidic, aglycone and total) from defatted cotyledon soy flour using a simplex-centroid experimental design with four solvents of varying polarity (water, acetone, ethanol and acetonitrile). The obtained extracts were then analysed by high-performance liquid chromatography. The profile of the different soy isoflavone forms varied with the extraction solvent. By varying the solvent or mixture used, the extraction of the different isoflavones was optimized using the simplex-centroid mixture design. The special cubic model best fitted the extraction of soy isoflavones with the four solvents and their combinations. Glycosidic isoflavones were best extracted with the polar ternary mixture of water, acetone and acetonitrile; malonyl-glycosidic forms were better extracted with mixtures of water, acetone and ethanol. Aglycone isoflavones were best extracted with a mixture of water and acetone, and for total isoflavones the best solvent was a ternary mixture of water, acetone and ethanol.
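A simplex-centroid design enumerates one blend for every non-empty subset of the q components, with the selected components in equal proportions (2^q − 1 runs in total); a minimal generator, with the four solvents of the study as the components:

```python
from itertools import combinations

# Simplex-centroid design sketch: pure components, all 1/2-1/2 binary
# blends, all 1/3 ternary blends, ..., up to the overall centroid.

def simplex_centroid(q):
    runs = []
    for k in range(1, q + 1):
        for subset in combinations(range(q), k):
            blend = [0.0] * q
            for i in subset:
                blend[i] = 1.0 / k
            runs.append(blend)
    return runs

# Component order (illustrative): water, acetone, ethanol, acetonitrile.
design = simplex_centroid(4)
print(len(design))      # 15 runs
print(design[0])        # [1.0, 0.0, 0.0, 0.0]  pure water
print(design[-1])       # [0.25, 0.25, 0.25, 0.25]  overall centroid
```

Fitting a special cubic model to responses measured at these 15 blends is what lets the study predict the best solvent mixture for each isoflavone form.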
Hough transform used on the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor
Chia, Chou-Min; Huang, Kuang-Yuh; Chang, Elmer
2016-01-01
An approach to the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor (SHWS) is presented. The SHWS has a common problem, in that while measuring high-order wavefront distortion, the spots may exceed each of the subapertures, which are used to restrict the displacement of spots. This artificial restriction may limit the dynamic range of the SHWS. When using the SHWS to measure adaptive optics or aspheric lenses, the accuracy of the traditional spot-centroiding algorithm may be uncertain because the spots leave or cross the confined area of the subapertures. The proposed algorithm combines the Hough transform with an artificial neural network, which requires no confined subapertures, to increase the dynamic range of the SHWS. This algorithm is then explored in comprehensive simulations and the results are compared with those of the existing algorithm.
International Nuclear Information System (INIS)
Yin Xiaoming; Li Xiang; Zhao Liping; Fang Zhongping
2009-01-01
A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transfers distorted wavefront detection into a centroid measurement. The accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, utilizing image processing techniques, for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction of the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability than other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
CentroidFold: a web server for RNA secondary structure prediction
Sato, Kengo; Hamada, Michiaki; Asai, Kiyoshi; Mituyama, Toutai
2009-01-01
The CentroidFold web server (http://www.ncrna.org/centroidfold/) is a web application for RNA secondary structure prediction powered by one of the most accurate prediction engines. The server accepts two kinds of sequence data: a single RNA sequence and a multiple alignment of RNA sequences. It responds with a prediction result shown in a popular base-pair notation and as a graph representation. A PDF version of the graph representation is also available. For a multiple alignment sequence, the ser...
Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; Abdul Aziz, Yang Faridah; Chee, Kok Han; McLaughlin, Robert A
2018-06-01
In this paper, we develop and validate an open source, fully automatic algorithm to localize the left ventricular (LV) blood pool centroid in short axis cardiac cine MR images, enabling follow-on automated LV segmentation algorithms. The algorithm comprises four steps: (i) quantify motion to determine an initial region of interest surrounding the heart, (ii) identify potential 2D objects of interest using an intensity-based segmentation, (iii) assess contraction/expansion, circularity, and proximity to lung tissue to score all objects of interest in terms of their likelihood of constituting part of the LV, and (iv) aggregate the objects into connected groups and construct the final LV blood pool volume and centroid. This algorithm was tested against 1140 datasets from the Kaggle Second Annual Data Science Bowl, as well as 45 datasets from the STACOM 2009 Cardiac MR Left Ventricle Segmentation Challenge. Correct LV localization was confirmed in 97.3% of the datasets. The mean absolute error between the gold standard and localization centroids was 2.8 to 4.7 mm, or 12 to 22% of the average endocardial radius. Graphical abstract Fully automated localization of the left ventricular blood pool in short axis cardiac cine MR images.
Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin
2018-05-01
A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, we increased the precision of the traditional calibration to approximately 10^-5 rad root mean square, and the precision of the RHT was improved by approximately 100 nm.
Noninvasive measurement of cardiopulmonary blood volume: evaluation of the centroid method
International Nuclear Information System (INIS)
Fouad, F.M.; MacIntyre, W.J.; Tarazi, R.C.
1981-01-01
Cardiopulmonary blood volume (CPV) and mean pulmonary transit time (MTT) determined by radionuclide measurements (Tc-99m HSA) were compared with values obtained from simultaneous dye-dilution (DD) studies (indocyanine green). The mean transit time was obtained from radionuclide curves by two methods: the peak-to-peak time and the interval between the two centroids determined from the right- and left-ventricular time-concentration curves. Correlation of dye-dilution MTT and peak-to-peak time was significant (r = 0.79, p < 0.001), but its correlation with centroid-derived values was better (r = 0.86, p < 0.001). CPV values (using the centroid method for the radionuclide technique) correlated significantly with values derived from dye-dilution curves (r = 0.74, p < 0.001). Discrepancies between the two were greater the more rapid the circulation (r = 0.61, p < 0.01), suggesting that minor inaccuracies of dye-dilution methods, due to positioning or delay of the system, can become magnified in hyperkinetic conditions. The radionuclide method is simple, repeatable, and noninvasive, and it provides simultaneous evaluation of pulmonary and systemic hemodynamics. Further, calculation of the ratio of cardiopulmonary to total blood volume can be used as an index of overall venous distensibility and relocation of intravascular blood volume.
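The centroid method for the mean transit time reduces to first moments of the two ventricular time-concentration curves; a sketch with synthetic gamma-variate curves (illustrative shapes and timings, not patient data):

```python
import numpy as np

def curve_centroid(t, c):
    """First moment (centroid) of a time-concentration curve."""
    return np.sum(t * c) / np.sum(c)

# Illustrative gamma-variate curves on a 30 s acquisition window; the
# left-ventricular curve is the right-ventricular curve delayed by 6 s.
t = np.linspace(0.0, 30.0, 301)
rv = t ** 2 * np.exp(-t / 2.0)                          # RV curve
lv = (t - 6.0) ** 2 * np.exp(-(t - 6.0) / 2.0) * (t > 6.0)  # LV curve

# Mean pulmonary transit time as the centroid interval (here ~6 s).
mtt = curve_centroid(t, lv) - curve_centroid(t, rv)
```

CPV would then follow from MTT and flow via the usual indicator-dilution relation; only the centroid step is sketched here.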
Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric
2014-03-12
This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations-leading to high resolution-while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated which has to be compared with the 168 µm measurement range.
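The twin-grid principle (fine phase from each grid, ambiguity removal from the beat of the two periods) can be demonstrated numerically; the periods, pattern model, and displacement below are illustrative values, not the authors' pattern geometry:

```python
import numpy as np

# Twin periodic grids with slightly different periods (illustrative).
p1, p2 = 10.0, 11.0              # grid periods in pixels
x = np.arange(780.0)             # one 780-pixel line, as in the paper
shift = 23.4                     # true 1D displacement in pixels

s1 = 1 + np.cos(2 * np.pi * (x - shift) / p1)
s2 = 1 + np.cos(2 * np.pi * (x - shift) / p2)

def grid_phase(signal, period):
    """Fourier-like phase of a periodic pattern at its known frequency."""
    xs = np.arange(len(signal))
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * xs / period)))

phi1, phi2 = grid_phase(s1, p1), grid_phase(s2, p2)

# Fine but ambiguous position from grid 1 (known only modulo p1)...
fine = (-phi1 / (2 * np.pi) * p1) % p1
# ...and a coarse unambiguous position from the phase difference,
# whose synthetic period is p1*p2/(p2 - p1) = 110 pixels.
coarse = ((-(phi1 - phi2) / (2 * np.pi)) % 1.0) * (p1 * p2 / (p2 - p1))
# Use the coarse estimate to remove the integer-period ambiguity.
position = fine + np.round((coarse - fine) / p1) * p1
```

The coarse estimate only needs to be good to within half a period of the fine grid, which is what gives the method its large range-to-resolution ratio.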
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2017-12-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer volume of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS, the National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, a 6% improvement in unmixing-based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
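The fully constrained least squares step (non-negative abundances that sum to one) is commonly implemented by appending a heavily weighted sum-to-one row to the endmember matrix and solving with non-negative least squares; a sketch with made-up 5-band endmember spectra for the S, V, and D classes:

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(E, pixel, delta=1e3):
    """Fully constrained least squares unmixing: abundances are
    non-negative and sum to one.  The sum-to-one constraint is enforced
    softly via an appended, heavily weighted row (delta is an
    illustrative weight, not a value from the paper)."""
    bands, n_end = E.shape
    A = np.vstack([E, delta * np.ones((1, n_end))])
    b = np.append(pixel, delta)
    frac, _ = nnls(A, b)
    return frac

# Illustrative endmember spectra (columns: substrate, vegetation, dark).
E = np.array([[0.30, 0.05, 0.02],
              [0.35, 0.10, 0.02],
              [0.40, 0.45, 0.03],
              [0.45, 0.50, 0.03],
              [0.50, 0.30, 0.04]])
true_frac = np.array([0.6, 0.3, 0.1])
pixel = E @ true_frac          # noise-free mixed pixel
frac = fcls_unmix(E, pixel)
```

For a noise-free pixel inside the endmember simplex the abundances are recovered essentially exactly.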
Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna
2017-12-01
The determination of centroids in the K-Means algorithm directly affects the quality of the clustering results, and determining centroids by using random numbers has many weaknesses. The GenClust algorithm, which combines Genetic Algorithms and K-Means, uses a genetic algorithm to determine the centroid of each cluster; it uses 50% chromosomes obtained through deterministic calculations and 50% generated from random numbers. This study modifies the GenClust algorithm so that the chromosomes used are 100% obtained through deterministic calculations. The study compares the resulting performance, expressed as mean square error, of centroid determination in K-Means using the original GenClust method, the modified GenClust method, and classic K-Means.
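The comparison the study describes can be sketched as plain K-Means started from deterministically chosen centroids; the seeding rule below (points at fixed ranks of the sorted point norms) is only an illustrative stand-in for GenClust's deterministic chromosomes:

```python
import numpy as np

def kmeans(X, centroids, iters=50):
    """Plain K-Means starting from the supplied centroids."""
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(len(centroids)):
            pts = X[labels == k]
            if len(pts):
                centroids[k] = pts.mean(0)
    return labels, centroids

def mse(X, labels, centroids):
    """Mean squared distance of points to their assigned centroid."""
    return ((X - centroids[labels]) ** 2).sum(1).mean()

# Three synthetic 2D clusters; deterministic seeding picks points at
# the 1/6, 1/2 and 5/6 ranks of the sorted norms (illustrative rule).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 2.0, 4.0)])
order = np.argsort(np.linalg.norm(X, axis=1))
seeds = X[order[[len(X) // 6, len(X) // 2, 5 * len(X) // 6]]].copy()
labels, cents = kmeans(X, seeds)
```

Because the seeds are reproducible, repeated runs give identical MSE, which is exactly the property a deterministic initialization is after.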
International Nuclear Information System (INIS)
AlBalkhi, Khalid M; AlShahrani, Ibrahim; AlMadi, Abdulaziz
2008-01-01
The purpose of this study was to demonstrate how to establish the area center (centroid) of both the soft and hard tissues of the outline of the lateral cephalometric skull image, and to introduce the concept of a new non-anatomical centroid line. Lateral cephalometric radiographs, size 12 x 14 inch, of fifty-seven adult subjects were selected based on their pleasant, balanced profile, Class I skeletal and dental relationship, and the absence of major dental malocclusion or malrelationship. The area centers (centroids) of both the soft and hard tissue skull were practically established using a customized computer program called the m-file. Connecting the two centers introduced the concept of a new non-anatomical soft and hard centroid line. (author)
International Nuclear Information System (INIS)
Messina, M.; Schenter, G.K.; Garrett, B.C.
1995-01-01
The low temperature behavior of the centroid density method of Voth, Chandler, and Miller (VCM) [J. Chem. Phys. 91, 7749 (1989)] is investigated for tunneling through a one-dimensional barrier. We find that the bottleneck for a quantum activated process as defined by VCM does not correspond to the classical bottleneck for the case of an asymmetric barrier. If the centroid density is constrained to be at the classical bottleneck for an asymmetric barrier, the centroid density method can give transmission coefficients that are too large by as much as five orders of magnitude. We follow a variational procedure, as suggested by VCM, whereby the best transmission coefficient is found by varying the position of the centroid until the minimum value for this transmission coefficient is obtained. This is a procedure that is readily generalizable to multidimensional systems. We present calculations on several test systems which show that this variational procedure greatly enhances the accuracy of the centroid density method compared to when the centroid is constrained to be at the barrier top. Furthermore, the relation of this procedure to the low temperature periodic orbit or ''instanton'' approach is discussed. copyright 1995 American Institute of Physics
Pixel-Level Decorrelation and BiLinearly Interpolated Subpixel Sensitivity applied to WASP-29b
Challener, Ryan; Harrington, Joseph; Cubillos, Patricio; Blecic, Jasmina; Deming, Drake
2017-10-01
Measured exoplanet transit and eclipse depths can vary significantly depending on the methodology used, especially at the low S/N levels in Spitzer eclipses. BiLinearly Interpolated Subpixel Sensitivity (BLISS) models a physical, spatial effect, which is independent of any astrophysical effects. Pixel-Level Decorrelation (PLD) uses the relative variations in pixels near the target to correct for flux variations due to telescope motion. PLD is being widely applied to all Spitzer data without a thorough understanding of its behavior. It is a mathematical method derived from a Taylor expansion, and many of its parameters do not have a physical basis. PLD also relies heavily on binning the data to remove short time-scale variations, which can artificially smooth the data. We applied both methods to 4 eclipse observations of WASP-29b, a Saturn-sized planet, which was observed twice with the 3.6 µm and twice with the 4.5 µm channels of Spitzer's IRAC in 2010, 2011 and 2014 (programs 60003, 70084, and 10054, respectively). We compare the resulting eclipse depths and midpoints from each model, assess each method's ability to remove correlated noise, and discuss how to choose or combine the best data analysis methods. We also refined the orbit from eclipse timings, detecting a significant nonzero eccentricity, and we used our Bayesian Atmospheric Radiative Transfer (BART) code to retrieve the planet's atmosphere, which is consistent with a blackbody. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
Sub-pixel analysis to support graphic security after scanning at low resolution
Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve
2006-02-01
Whether in the domain of audio, video or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital is often much extended in time, and proceeds by long steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes. But this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check-21 act (Check Clearing for the 21st Century act) became effective in 2004 and allows images of checks to be handled by banks as usual paper checks. We use here this situation of check scanning as our primary benchmark for graphic security features after scanning. We will first present a quick review of the most common graphic security features currently found on checks, with their specific purpose, qualities and disadvantages, and we demonstrate their poor survivability after scanning in the average scanning conditions expected from the Check-21 Act. We will then present a novel method of measurement of distances between and rotations of line elements in a scanned image: based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so we can then determine expected distances, periodicity, sharpness and print quality of known characters, symbols and other graphic elements in a document image. Finally we will apply our method to fraud detection of documents after gray-scale scanning at 300 dpi resolution. We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced
Directory of Open Access Journals (Sweden)
Marzieh Mokarrama
2018-04-01
The purpose of the present study is to prepare a landform classification by using a digital elevation model (DEM) with high spatial resolution. To reach this aim, a sub-pixel spatial attraction model was used as a novel method for preparing a high-spatial-resolution DEM in the north of Darab, Fars province, Iran. Sub-pixel attraction models convert a pixel into sub-pixels based on the fraction values of neighboring pixels, which can only be attracted by a central pixel. Based on this approach, a maximum of eight neighboring pixels can be selected for calculating the attraction value; other pixels are considered too far from the central pixel to contribute any attraction. In the present study, the spatial resolution of a DEM was increased by using the sub-pixel attraction model. The algorithm was applied to a DEM with a spatial resolution of 30 m (Advanced Spaceborne Thermal Emission and Reflection Radiometer, ASTER) and one of 90 m (Shuttle Radar Topography Mission, SRTM). In the attraction model, scale factors of S = 2, S = 3, and S = 4 with two neighboring methods, touching (T = 1) and quadrant (T = 2), were applied to the DEMs by using MATLAB software. The algorithm was evaluated using 487 sample points measured by surveyors. The spatial attraction model with scale factor S = 2 gives better results than scale factors greater than 2. Besides, the touching neighborhood method turned out to be more accurate than the quadrant method. In fact, dividing each pixel into more than two sub-pixels decreases the accuracy of the resulting DEM; in these cases the root-mean-square error (RMSE) of the DEM increases, showing that attraction models cannot be used for S greater than 2. Thus, considering the results, the proposed model is highly capable of
Hua, Boyang; Wang, Yanbo; Park, Seongjin; Han, Kyu Young; Singh, Digvijay; Kim, Jin H; Cheng, Wei; Ha, Taekjip
2018-03-13
Here, we demonstrate that the use of the single-molecule centroid localization algorithm can improve the accuracy of fluorescence binding assays. Two major artifacts in this type of assay, i.e., nonspecific binding events and optically overlapping receptors, can be detected and corrected during analysis. The effectiveness of our method was confirmed by measuring two weak biomolecular interactions, the interaction between the B1 domain of streptococcal protein G and immunoglobulin G and the interaction between double-stranded DNA and the Cas9-RNA complex with limited sequence matches. This analysis routine requires little modification to common experimental protocols, making it readily applicable to existing data and future experiments.
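Centroid (center-of-mass) localization of a single spot, the building block of such assays, can be sketched as follows; the ROI size, background handling, and synthetic fluorophore image are illustrative, not the paper's protocol:

```python
import numpy as np

def localize(img, peak, half=3):
    """Centroid (center-of-mass) localization of one fluorophore around
    an integer peak guess; a simplified stand-in for the single-molecule
    localization step (half sets the ROI half-width, illustratively)."""
    y0, x0 = peak
    roi = img[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1].astype(float)
    roi -= roi.min()                  # crude local background removal
    ys, xs = np.indices(roi.shape)
    s = roi.sum()
    return ((roi * ys).sum() / s + y0 - half,
            (roi * xs).sum() / s + x0 - half)
```

With localized coordinates in hand, nonspecific events can be rejected by distance to known receptor positions, and receptors closer than the resolution limit can be flagged as overlapping.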
Hejrani, Babak; Tkalčić, Hrvoje; Fichtner, Andreas
2017-07-01
Although both the earthquake mechanism and 3-D Earth structure contribute to the seismic wavefield, the latter is usually assumed to be layered in source studies, which may limit the quality of the source estimate. To overcome this limitation, we implement a method that takes advantage of a 3-D heterogeneous Earth model, recently developed for the Australasian region. We calculate centroid moment tensors (CMTs) for earthquakes in Papua New Guinea (PNG) and the Solomon Islands. Our method is based on a library of Green's functions for each source-station pair for selected Geoscience Australia and Global Seismic Network stations in the region, distributed on a 3-D grid covering the seismicity down to 50 km depth. For the calculation of Green's functions, we utilize a spectral-element method for the solution of the seismic wave equation. Seismic moment tensors were calculated using least squares inversion, and the 3-D location of the centroid is found by grid search. Through several synthetic tests, we confirm a trade-off between the location and the correct input moment tensor components when using a 1-D Earth model to invert synthetics produced in a 3-D heterogeneous Earth. Our CMT catalogue for PNG, in comparison to the global CMT, shows a meaningful increase in the double-couple percentage (up to 70%). Another significant difference that we observe is in the mechanism of events with depths shallower than 15 km and Mw region.
Lifetime measurements in {sup 170}Yb using the generalized centroid difference method
Energy Technology Data Exchange (ETDEWEB)
Karayonchev, Vasil; Regis, Jean-Marc; Jolie, Jan; Dannhoff, Moritz; Saed-Samii, Nima; Blazhev, Andrey [Institute of Nuclear Physics, University of Cologne, Cologne (Germany)
2016-07-01
An experiment using the electronic γ-γ ''fast-timing'' technique was performed at the 10 MV Tandem Van-De-Graaff accelerator of the Institute for Nuclear Physics, Cologne, in order to measure lifetimes of the yrast states in {sup 170}Yb. The lifetime of the first 2{sup +} state was determined using the slope method, i.e., by fitting an exponential decay to the ''slope'' seen in the energy-gated time-difference spectra. The value of τ=2.201(57) ns is in good agreement with the lifetimes measured using other techniques. The lifetimes of the first 4{sup +} and 6{sup +} states are determined for the first time. They are in the ps range and were measured using the generalized centroid difference method, an extension of the well-known centroid-shift method developed for fast-timing arrays. The derived reduced transition probability B(E2) values are compared with calculations done using the confined beta soft model and show good agreement within the experimental uncertainties.
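The slope method amounts to a linear fit to the logarithm of the exponential tail of the time-difference spectrum; a sketch on an idealized noise-free spectrum with τ = 2.2 ns (the grid, amplitude, and fit window are illustrative):

```python
import numpy as np

def slope_lifetime(t, counts, t_start):
    """Lifetime from the slope method: linear regression on log(counts)
    over the exponential tail of a time-difference spectrum."""
    sel = (t >= t_start) & (counts > 0)
    slope, _ = np.polyfit(t[sel], np.log(counts[sel]), 1)
    return -1.0 / slope

# Idealized spectrum: pure exponential tail with tau = 2.2 ns.
tau = 2.2
t = np.linspace(0.0, 25.0, 500)          # ns
counts = 1e4 * np.exp(-t / tau) * (t > 1.0)
tau_est = slope_lifetime(t, counts, t_start=2.0)
```

With real, Poisson-distributed counts the fit would be weighted and restricted to the region well past the prompt response.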
Directory of Open Access Journals (Sweden)
James Joseph Wright
2014-10-01
Receptive fields of neurons in the forelimb region of areas 3b and 1 of primary somatosensory cortex, in cats and monkeys, were mapped using extracellular recordings obtained sequentially from nearly radial penetrations. Locations of the field centroids indicated the presence of a functional system in which cortical homotypic representations of the limb surfaces are entwined in three-dimensional Mobius-strip-like patterns of synaptic connections. Boundaries of somatosensory receptive fields in nested groups irregularly overlie the centroid order, and are interpreted as arising from the superposition of learned connections upon the embryonic order. Since the theory of embryonic synaptic self-organisation used to model these results was devised and earlier used to explain findings in primary visual cortex, the present findings suggest the theory may be of general application throughout the cortex, and may reveal a modular functional synaptic system which, only in some parts of the cortex, and in some species, is manifest as anatomical ordering into columns.
Modelling Perception of Structure and Affect in Music: Spectral Centroid and Wishart's Red Bird
Directory of Open Access Journals (Sweden)
Roger T. Dean
2011-12-01
Pearce (2011) provides a positive and interesting response to our article on time series analysis of the influences of acoustic properties on real-time perception of structure and affect in a section of Trevor Wishart's Red Bird (Dean & Bailes, 2010). We address the following topics raised in the response and our paper. First, we analyse in depth the possible influence of spectral centroid, a timbral feature of the acoustic stream distinct from the high-level general parameter we used initially, spectral flatness. We find that spectral centroid, like spectral flatness, is not a powerful predictor of real-time responses, though it does show some features that encourage its continued consideration. Second, we discuss further the issue of studying both individual responses and, as in our paper, group-averaged responses. We show that a multivariate Vector Autoregression model handles the grand average series quite similarly to those of individual members of our participant groups, and we analyse this in greater detail with a wide range of approaches in work which is in press and continuing. Lastly, we discuss the nature and intent of computational modelling of cognition using acoustic and music- or information-theoretic data streams as predictors, and how the music- or information-theoretic approaches may be applied to electroacoustic music, which is 'sound-based' rather than note-centred like Western classical music.
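Spectral centroid itself is simply the amplitude-weighted mean frequency of a frame's magnitude spectrum; a minimal sketch (the sampling rate is chosen so the test tone falls exactly on an FFT bin, because raw, unwindowed spectral centroids are biased by spectral leakage):

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Amplitude-weighted mean frequency of one frame's magnitude spectrum."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

# Illustrative check: a pure tone's centroid sits at the tone frequency.
sr = 2048                                   # Hz; makes 440 Hz an exact bin
tone = np.sin(2 * np.pi * 440 * np.arange(2048) / sr)
```

For real audio one would window each frame (e.g. Hann) before the FFT and track the centroid frame by frame as a time series.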
International Nuclear Information System (INIS)
Vargas, Javier; Gonzalez-Fernandez, Luis; Quiroga, Juan Antonio; Belenguer, Tomas
2010-01-01
In the optical quality measuring process of an optical system, including diamond-turned components, the use of a laser light source can produce an undesirable speckle effect in a Shack-Hartmann (SH) CCD sensor. This speckle noise can deteriorate the precision and accuracy of the wavefront sensor measurement. Here we present an SH centroid detection method founded on computer-based techniques and capable of measurement in the presence of strong speckle noise. The method extends the dynamic-range imaging capabilities of the SH sensor through the use of a set of different CCD integration times. The resultant extended-range spot map is normalized to accurately obtain the spot centroids. The proposed method has been applied to measure the optical quality of the main optical system (MOS) of the Mid-Infrared Instrument Telescope Simulator. The wavefront at the exit of this optical system is affected by speckle noise when it is illuminated by a laser source, and by air turbulence because it has a long back focal length (3017 mm). Using the proposed technique, the MOS wavefront error was measured and satisfactory results were obtained.
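The extended-dynamic-range spot map can be sketched as a simple exposure merge: for each pixel, keep the longest non-saturated integration time and normalize by it. The full-well value and synthetic spot below are illustrative assumptions:

```python
import numpy as np

def extended_range_spot(frames, itimes, full_well=4095.0):
    """Merge CCD frames taken at different integration times into one
    extended-dynamic-range map: each pixel takes the longest exposure
    that did not saturate, normalized by its integration time."""
    frames = np.array(frames, dtype=float)
    out = np.zeros(frames.shape[1:])
    for frame, t in sorted(zip(frames, itimes), key=lambda p: p[1]):
        ok = frame < full_well        # not saturated at this exposure
        out[ok] = frame[ok] / t       # longer exposures overwrite shorter
    return out
```

Centroiding is then performed on the merged, normalized map, where both the bright spot core and its faint wings are well measured.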
MLC quality assurance using EPID: A fitting technique with subpixel precision
International Nuclear Information System (INIS)
Mamalui-Hunter, Maria; Li, Harold; Low, Daniel A.
2008-01-01
Amorphous silicon based electronic portal imaging devices (EPIDs) have been shown to be a good alternative to radiographic film for routine quality assurance (QA) of multileaf collimator (MLC) positioning accuracy. In this work, we present a method of analyzing an EPID image of a traditional strip test using analytical fits of the interleaf and leaf-abutment image signatures. After exposure, the EPID image pixel values are divided by an open-field image to remove EPID response and radiation field variations. Profiles acquired in the direction orthogonal to the leaf motion exhibit small peaks caused by interleaf leakage. Gaussian profiles are fitted to the interleaf leakage peaks, and the results are used, via multiobjective optimization, to calculate the image rotational angle with respect to the collimator axis of rotation. The relative angle is used to rotate the image to align the MLC leaf travel with the image pixel axes. The leaf abutments also present peaks, which are fitted by heuristic functions, in this case modified Lorentzian functions. The parameters of the Lorentzian functions are used to parameterize the leaf gap width and positions. By imaging a set of MLC fields with varying gaps forming symmetric and asymmetric abutments, calibration curves of relative peak height (RPH) versus nominal gap width are obtained. Based on these calibration data, the individual leaf positions are calculated and compared with the nominal programmed positions. The results demonstrate that the collimator rotation angle can be determined to within 0.01 deg. A change in MLC gap width of 0.2 mm leads to a change in RPH of about 10%. For asymmetrically produced gaps, a 0.2 mm MLC leaf gap width change causes a 0.2 pixel peak position change. Subpixel resolution is obtained by using a parameterized fit of the relatively large abutment peaks. By contrast, for symmetrical gap changes, the peak position remains unchanged with a standard deviation of 0
Directory of Open Access Journals (Sweden)
Luyi Sun
2016-08-01
Sub-Pixel Offset Tracking (sPOT) is applied to derive high-resolution, centimetre-level landslide rates in the Three Gorges Region of China using TerraSAR-X High-resolution Spotlight (TSX HS) space-borne SAR images. These results contrast sharply with previous use of conventional differential Interferometric Synthetic Aperture Radar (DInSAR) techniques in areas with steep slopes, dense vegetation and large variability in water vapour, which indicated around 12% phase-coherent coverage. By contrast, sPOT is capable of measuring two-dimensional deformation with large gradients over steeply sloped areas covered in dense vegetation. Previous applications of sPOT in this region relied on corner reflectors (CRs; high-coherence features) to obtain reliable measurements. However, CRs are expensive and difficult to install, especially in remote areas, and other potential high-coherence features comparable with CRs are very few and lie outside the landslide boundary. The resulting sub-pixel-level deformation field can be statistically analysed to yield multi-modal maps of deformation regions. This approach is shown to have a significant impact when compared with previous offset tracking measurements of landslide deformation, as it is demonstrated that sPOT can be applied even in densely vegetated terrain without relying on high-contrast surface features or requiring any de-noising process.
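Offset tracking rests on cross-correlating image patches and refining the correlation peak to sub-pixel precision; a deliberately minimal 1D sketch (real sPOT works on normalized 2D cross-correlations of SAR patches):

```python
import numpy as np

def subpixel_offset(ref, mov):
    """1D offset of `mov` relative to `ref` via cross-correlation with
    parabolic sub-pixel peak interpolation.  A real implementation
    would use normalized 2D cross-correlation over image windows."""
    corr = np.correlate(mov, ref, mode='full')
    k = int(corr.argmax())
    # Parabolic interpolation around the integer correlation peak.
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return k + frac - (len(ref) - 1)

# Illustrative smooth feature shifted by +5.3 pixels.
x = np.arange(200.0)
ref = np.exp(-(x - 100.0) ** 2 / (2 * 8.0 ** 2))
mov = np.exp(-(x - 105.3) ** 2 / (2 * 8.0 ** 2))
offset = subpixel_offset(ref, mov)
```

On smooth features the parabolic refinement recovers the shift to a few hundredths of a pixel, which is what turns pixel-scale SAR resolution into centimetre-level deformation rates.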
International Nuclear Information System (INIS)
Javanmardi, F.; Matoba, M.; Sakae, T.
1996-01-01
Triple Charge Division (TCD) centroid finding, which uses a modified pattern of the Backgammon Shape Cathode (MBSC), is introduced for medium-length position-sensitive detectors with an optimum number of cathode segments. The MBSC pattern has three separate areas and uses sawtooth-like insulator gaps to separate them. The side areas of the MBSC pattern are separated by a central common area whose size is twice that of either side. Whereas the central area is the widest of the three, the side areas play the main role in position sensing. With the same resolution and linearity, the active region of the original Backgammon pattern is doubled by using the MBSC pattern, and for the same length, the linearity of TCD centroid finding is much better than that of the Backgammon charge-division readout method. The linearity prediction of TCD centroid finding and experimental results led us to an optimum truncation of the apices of the MBSC pattern in the central area. TCD centroid finding requires a special readout method, since charges must be collected from two segments on each side and from three segments in the central area of the MBSC pattern. The so-called Graded Charge Division (GCD) is this readout method: a combination of charge-division readout and sequential grading of serial segments. Position sensing with TCD centroid finding and GCD readout was performed with two sizes of MBSC patterns (200 mm and 80 mm), and a spatial resolution of about 1% of the detector length was achieved.
A walk-free centroid method for lifetime measurement of the 207Pb 569.7 keV state
International Nuclear Information System (INIS)
Gu Jiahui; Liu Jingyi; Xiao Genlai
1988-01-01
An improvement has been made in acquiring data of delayed-coincidence spectra with the ND-620 data acquisition system and the off-line data analysis program. The delayed and anti-delayed coincidence spectra can be obtained in one run. The difference of their centroids is the mean lifetime τ. The centroid position of one delayed coincidence spectrum is the zero time of the other, so the requirement of measuring a prompt time spectrum is avoided. The walk between prompt and delayed coincidence spectra coming from different runs is resolved, and the walk during the measurement is partly compensated. The delayed-coincidence time spectra of the 207 Pb 569.7 keV state were measured and the half-life was calculated via three different methods (slope method, convolution method, centroid shift). The final value of the half-life is 129.5 ± 1.4 ps. The experimental reduced transition probability is compared with theoretical values.
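All of these timing methods rest on the centroid-shift theorem: convolving a prompt response with an exponential decay shifts the spectrum centroid by exactly the mean lifetime τ. A numerical check with illustrative numbers (the prompt width, lifetime, and binning are not the experiment's values):

```python
import numpy as np

tau = 0.13                      # ns, illustrative mean lifetime
dt = 0.002                      # ns per channel
t = np.arange(0.0, 20.0, dt)
prompt = np.exp(-(t - 5.0) ** 2 / (2 * 0.15 ** 2))     # prompt response
delayed = np.convolve(prompt, np.exp(-t / tau))[:len(t)] * dt

def centroid(tt, counts):
    """Centroid (first moment) of a time spectrum."""
    return np.sum(tt * counts) / np.sum(counts)

shift = centroid(t, delayed) - centroid(t, prompt)     # ~= tau
```

The walk-free trick in the abstract exploits this symmetry twice, using the delayed and anti-delayed spectra as each other's time reference.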
Energy Technology Data Exchange (ETDEWEB)
Kinugawa, Kenichi [Nara Women's Univ., Nara (Japan). Dept. of Chemistry
1998-10-01
Numerically solving a set of time-dependent Schroedinger equations for many-body quantum systems which involve, e.g., a number of hydrogen molecules, protons, and excess electrons at low temperature, where quantum effects evidently appear, has so far been unsuccessful. This situation is fatal for the investigation of real low-temperature chemical systems because they are essentially composed of many quantum degrees of freedom. However, if we use a new technique called 'path integral centroid molecular dynamics (CMD) simulation', proposed by Cao and Voth in 1994, the real-time semi-classical dynamics of many degrees of freedom can be computed by utilizing techniques already developed for traditional classical molecular dynamics (MD) simulations. Therefore, the CMD simulation is expected to be a very powerful tool for quantum dynamics studies of real substances. (J.P.N.)
Directory of Open Access Journals (Sweden)
P. Phani Bushan Rao
2011-01-01
Ranking fuzzy numbers is an important aspect of decision making in a fuzzy environment. Since their inception in 1965, many authors have proposed different methods for ranking fuzzy numbers. However, there is no method which gives a satisfactory result in all situations; most of the methods proposed so far are non-discriminating and counterintuitive. This paper proposes a new method for ranking fuzzy numbers based on the circumcenter of centroids and uses an index of optimism to reflect the decision maker's optimistic attitude, as well as an index of modality that represents the neutrality of the decision maker. This method ranks various types of fuzzy numbers, including normal, generalized trapezoidal, and triangular fuzzy numbers, along with crisp numbers, with the particularity that crisp numbers are considered particular cases of fuzzy numbers.
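As a baseline for such rankings, the plain centroid abscissa of a trapezoidal fuzzy number already induces an order; this is a simplified index, not the paper's circumcenter-of-centroids construction with optimism and modality indices:

```python
def trapezoid_centroid_x(a, b, c, d):
    """Abscissa of the centroid of a trapezoidal fuzzy number
    (a, b, c, d) with a <= b <= c <= d and membership 1 on [b, c].
    Degenerate crisp numbers (a == b == c == d) would need a
    separate branch, since the denominator vanishes."""
    return (((c ** 2 + d ** 2 + c * d) - (a ** 2 + b ** 2 + a * b))
            / (3.0 * (c + d - a - b)))

# Rank three illustrative trapezoidal fuzzy numbers by centroid abscissa.
A = (0.1, 0.2, 0.3, 0.4)
B = (0.2, 0.3, 0.4, 0.5)
C = (0.4, 0.5, 0.6, 0.7)
ranked = sorted([B, C, A], key=lambda f: trapezoid_centroid_x(*f))
```

Triangular fuzzy numbers are covered by setting b == c; the circumcenter-based index of the paper refines exactly the cases where such centroid abscissas tie.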
Acquisition and Initial Analysis of H+- and H--Beam Centroid Jitter at LANSCE
Gilpatrick, J. D.; Bitteker, L.; Gulley, M. S.; Kerstiens, D.; Oothoudt, M.; Pillai, C.; Power, J.; Shelley, F.
2006-11-01
During the 2005 Los Alamos Neutron Science Center (LANSCE) beam runs, beam current and centroid-jitter data were observed, acquired, analyzed, and documented for both the LANSCE H+ and H- beams. These data were acquired using three beam position monitors (BPMs) from the 100-MeV Isotope Production Facility (IPF) beam line and three BPMs from the Switchyard transport line at the end of the LANSCE 800-MeV linac. The two types of data acquired, intermacropulse and intramacropulse, were analyzed for statistical and frequency characteristics as well as various other correlations, including comparison of their phase-space-like characteristics in a coordinate system of transverse angle versus transverse position. This paper briefly describes the measurements required to acquire these data, the initial analysis of these jitter data, and some interesting dilemmas these data presented.
The centroidal algorithm in molecular similarity and diversity calculations on confidential datasets
Trepalin, Sergey; Osadchiy, Nikolay
2005-09-01
Chemical structure provides an exhaustive description of a compound, but it is often proprietary and thus an impediment to the exchange of information. For example, structure disclosure is often needed for the selection of the most similar or dissimilar compounds. The authors propose a centroidal algorithm based on structural fragments (screens) that can be efficiently used for similarity and diversity selections without disclosing the structures of the reference set. For increased security, the authors recommend that such a set contain at least a few tens of structures. Analysis of reverse-engineering feasibility showed that the difficulty of the problem grows as the screen radius decreases. The algorithm is illustrated with concrete calculations on known steroidal, quinoline, and quinazoline drugs. We also investigate the problem of scaffold identification in a combinatorial library dataset. The results show that relatively small screens of radius equal to 2 bond lengths perform well in similarity sorting, while radius-4 screens yield better results in diversity sorting. The software implementation of the algorithm takes an SDF file with a reference set and generates screens of various radii, which are subsequently used for the similarity and diversity sorting of external SDFs. Since the reverse engineering of the reference-set molecules from their screens is as difficult as breaking the RSA asymmetric encryption algorithm, generated screens can be stored openly without further encryption. This approach ensures that an end user transfers only a set of structural fragments and no other data. Like other encryption algorithms, the centroid algorithm cannot give a 100% guarantee of protecting a chemical structure in the dataset, but the probability of identifying the initial structure is very small, on the order of 10⁻⁴⁰ in typical cases.
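The exchange the abstract describes can be mimicked in a few lines: each party shares only hashed fragment screens, never structures, and similarity is computed on the screen sets (here with the common Tanimoto coefficient; the fragment strings and the hashing scheme are illustrative, not the authors' screen definition):

```python
import zlib

def screens(fragments, modulus=2**20):
    """One-way screen set: hashed structural fragments. The fragment
    strings below are made up for illustration, not real radius-2 screens."""
    return {zlib.crc32(f.encode()) % modulus for f in fragments}

def tanimoto(a, b):
    """Tanimoto similarity of two screen sets."""
    return len(a & b) / len(a | b)

ref  = screens(["C-C-O", "C-C=O", "c1ccccc1", "C-N", "N-C=O",
                "C-C-C", "O-H", "C-O-C", "C=C", "C-C-N"])
near = screens(["C-C-O", "C-C=O", "c1ccccc1", "C-N", "N-C=O",
                "C-C-C", "O-H", "C-O-C", "C=C", "C-Cl"])
far  = screens(["S=O", "C-S", "P-O", "F-C", "C#N",
                "Br-C", "c1ccncc1", "O=C-O", "N=N", "C-C-O"])

print(tanimoto(ref, near), tanimoto(ref, far))
```

Only the hashed screen sets cross the trust boundary, so the similarity ranking is possible while the structures stay private.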
Hallo, Miroslav; Asano, Kimiyuki; Gallovič, František
2017-09-01
On April 16, 2016, Kumamoto prefecture in the Kyushu region, Japan, was devastated by a shallow MJMA 7.3 earthquake. The series of foreshocks started with an MJMA 6.5 foreshock 28 h before the mainshock. They originated in the Hinagu fault zone, which intersects the mainshock's Futagawa fault zone; hence, the tectonic background for this earthquake sequence is rather complex. Here we infer centroid moment tensors (CMTs) for 11 events with MJMA between 4.8 and 6.5, using strong motion records of the K-NET, KiK-net and F-net networks. We use the upgraded Bayesian full-waveform inversion code ISOLA-ObsPy, which takes into account the uncertainty of the velocity model. Such an approach allows us to reliably assess the uncertainty of the CMT parameters, including the centroid position. The solutions show significant systematic spatial and temporal variations throughout the sequence. Foreshocks are right-lateral, steeply dipping strike-slip events connected to the NE-SW shear zone. Those located close to the intersection of the Hinagu and Futagawa fault zones dip slightly to the ESE, while those in the southern area dip to the WNW. Contrarily, aftershocks are mostly normal dip-slip events related to the N-S extensional tectonic regime. Most of the deviatoric moment tensors contain only a minor CLVD component, which can be attributed to the velocity model uncertainty. Nevertheless, two of the CMTs involve a significant CLVD component, which may reflect a complex rupture process. Decomposition of those moment tensors into two pure shear moment tensors suggests combined right-lateral strike-slip and normal dip-slip mechanisms, consistent with the tectonic setting of the intersection of the Hinagu and Futagawa fault zones.
A Proposal to Speed up the Computation of the Centroid of an Interval Type-2 Fuzzy Set
Directory of Open Access Journals (Sweden)
Carlos E. Celemin
2013-01-01
Full Text Available This paper presents two new algorithms that speed up the centroid computation of an interval type-2 fuzzy set. The algorithms include precomputation of the main operations and initialization based on the concept of uncertainty bounds. Simulations over different kinds of footprints of uncertainty reveal that the new algorithms achieve computation time reductions with respect to the Enhanced Karnik-Mendel algorithm ranging from 40 to 70%. The results suggest that the initialization used in the new algorithms effectively reduces the number of iterations to compute the extreme points of the interval centroid, while precomputation reduces the computational cost of each iteration.
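The interval centroid that such algorithms accelerate is defined by the Karnik-Mendel switching iteration. A minimal, unoptimized Python sketch of that baseline (without the paper's precomputation or uncertainty-bound initialization; the sampled footprint of uncertainty below is invented for the demo) looks like:

```python
def km_centroid(x, lmf, umf, tol=1e-9):
    """Karnik-Mendel iteration for the interval centroid [cl, cr] of an
    interval type-2 fuzzy set sampled at points x with lower/upper
    membership grades lmf and umf."""
    def endpoint(left):
        theta = [(l + u) / 2.0 for l, u in zip(lmf, umf)]
        c = sum(xi * t for xi, t in zip(x, theta)) / sum(theta)
        while True:
            # switch memberships about the current estimate: upper grades
            # below the switch point minimize the centroid (left endpoint);
            # the mirrored choice maximizes it (right endpoint)
            theta = [(u if xi <= c else l) if left else (l if xi <= c else u)
                     for xi, l, u in zip(x, lmf, umf)]
            c_new = sum(xi * t for xi, t in zip(x, theta)) / sum(theta)
            if abs(c_new - c) < tol:
                return c_new
            c = c_new
    return endpoint(True), endpoint(False)

# symmetric triangular footprint of uncertainty on [0, 1]
xs = [i / 100 for i in range(101)]
upper = [max(0.0, 1.0 - 2.0 * abs(xi - 0.5)) for xi in xs]
lower = [0.5 * u for u in upper]
cl, cr = km_centroid(xs, lower, upper)
print(cl, cr)
```

For a symmetric footprint the two endpoints come out symmetric about the apex, which is a quick sanity check on any faster implementation.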
International Nuclear Information System (INIS)
Carvalho Tofani, P. de.
1986-01-01
The subchannel method used in nuclear fuel bundle thermal-hydraulic analysis rests on the assumption that subchannel fluid temperatures are taken as mixed mean values. However, the development of mixing correlations and code assessment procedures is sometimes based, in the literature, upon the assumption of identity between lumped and local (subchannel centroid) temperature values. The present paper presents an approach for correlating lumped to centroid subchannel temperatures, based upon models previously formulated by the author, applied to a nine-heated-tube bundle experimental data set. (Author) [pt
Hishe, Hadgu; Giday, Kidane; Neka, Mulugeta; Soromessa, Teshome; Van Orshoven, Jos; Muys, Bart
2015-01-01
Comprehensive and less costly forest inventory approaches are required to monitor the spatiotemporal dynamics of key species in forest ecosystems. Subpixel analysis using the Earth Resources Data Analysis System (ERDAS) Imagine subpixel classification procedure was tested to extract Olea europaea subsp. cuspidata and Juniperus procera canopies from Landsat 7 Enhanced Thematic Mapper Plus imagery. Control points with various canopy area fractions of the target species were collected to develop signatures for each of the species. With these signatures, the Imagine subpixel classification procedure was run for each species independently. The subpixel process enabled the detection of O. europaea subsp. cuspidata and J. procera trees in pure and mixed pixels. A total of 100 pixels each were field-verified for both species. An overall accuracy of 85% was achieved for O. europaea subsp. cuspidata and 89% for J. procera. A high overall accuracy of species detection in a natural forest was achieved, which encourages using the algorithm for future species monitoring activities. We recommend that the algorithm be validated in similar environments to enrich knowledge of its capability and to ensure its wider usage.
Fast image interpolation for motion estimation using graphics hardware
Kelly, Francis; Kokaram, Anil
2004-05-01
Motion estimation and compensation are key to high-quality video coding. Block-matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and in post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full-search block-matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
Liebold, F.; Maas, H.-G.
2018-05-01
This paper deals with the determination of crack widths of concrete beams during load tests from monocular image sequences. The procedure starts in a reference image of the probe with suitable surface texture under zero load, where a large number of points is defined by an interest operator. Then a triangulated irregular network is established to connect the points. Image sequences are recorded during load tests with the load increasing continuously or stepwise, or at intermittently changing load. The vertices of the triangles are tracked through the consecutive images of the sequence with sub-pixel accuracy by least squares matching. All triangles are then analyzed for changes by principal strain calculation. For each triangle showing significant strain, a crack width is computed by a thorough geometric analysis of the relative movement of the vertices.
Directory of Open Access Journals (Sweden)
Zhangfang Hu
2014-10-01
Full Text Available Digital speckle correlation is a non-contact in-plane displacement measurement method based on machine vision. Motivated by the low accuracy and large amount of calculation of the traditional digital speckle correlation method in the spatial domain, we introduce a sub-pixel displacement measurement algorithm that employs a fast interpolation method based on fractal theory together with digital speckle correlation in the frequency domain. This algorithm overcomes both the blocking effect and the blurring caused by traditional interpolation methods, and the frequency-domain processing avoids the repeated searching of correlation recognition in the spatial domain, so the amount of computation is greatly reduced and the information extraction speed is improved. A comparative experiment is given to verify that the proposed algorithm is effective.
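The paper's fractal interpolation step is specific to it, but the frequency-domain correlation idea can be sketched generically: correlate in the Fourier domain (no spatial search), then refine the integer peak to sub-pixel precision, here with a simple parabolic fit standing in for the interpolation step. The 1-D signal, shift, and sizes are made up:

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                 for k in range(N)) / N).real for n in range(N)]

N, true_shift = 64, 2.3
ref = [math.exp(-(n - 20) ** 2 / 18.0) for n in range(N)]               # reference pattern
img = [math.exp(-(n - 20 - true_shift) ** 2 / 18.0) for n in range(N)]  # shifted pattern

# circular cross-correlation computed in the frequency domain
F, G = dft(ref), dft(img)
corr = idft([fk.conjugate() * gk for fk, gk in zip(F, G)])

# integer peak, then parabolic interpolation for the sub-pixel part
p = max(range(N), key=lambda m: corr[m])
y0, y1, y2 = corr[p - 1], corr[p], corr[(p + 1) % N]
shift_est = p + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
print(f"estimated shift: {shift_est:.3f} px")
```

A real implementation would use a 2-D FFT, but the recovery of a fractional 2.3-pixel shift from integer samples is already visible in one dimension.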
Gasilov, Sergei V; Coan, Paola
2012-09-01
Several x-ray phase contrast extraction algorithms use a set of images acquired along the rocking curve of a perfect flat analyzer crystal to study the internal structure of objects. By measuring the angular shift of the rocking curve peak, one can determine the local deflections of the x-ray beam propagated through a sample. Additionally, some objects cause a broadening of the crystal rocking curve, which can be explained in terms of multiple refraction of x rays by many subpixel-size inhomogeneities contained in the sample. This fact may allow us to differentiate between materials and features characterized by different refraction properties. In the present work we derive an expression for the beam broadening in the form of a linear integral of a quantity related to the statistical properties of the dielectric susceptibility distribution function of the object.
Directory of Open Access Journals (Sweden)
Seung Ah Lee
Full Text Available Miniaturization of imaging systems can significantly benefit clinical diagnosis in challenging environments, where access to physicians and good equipment can be limited. The sub-pixel resolving optofluidic microscope (SROFM) offers high-resolution imaging in the form of an on-chip device, combining microfluidics with inexpensive CMOS image sensors. In this work, we report on the implementation of color SROFM prototypes with a demonstrated optical resolution of 0.66 µm at their highest acuity. We applied the prototypes to perform color imaging of red blood cells (RBCs) infected with Plasmodium falciparum, a particularly harmful malaria parasite and one of the major causes of death in the developing world.
Schomaker, Lambertus; Mangalagiu, D.; Vuurpijl, Louis; Weinfeld, M.; Schomaker, Lambert; Vuurpijl, Louis
2000-01-01
This paper describes tree-based classification of character images, comparing two methods of tree formation and two methods of matching: nearest neighbor and nearest centroid. The first method, Preprocess Using Relative Distances (PURD), is a tree-based reorganization of a flat list of patterns,
Sirait, Kamson; Tulus; Budhiarti Nababan, Erna
2017-12-01
Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One method that has been widely known and applied in clustering is K-Means. In its application, the choice of the initial cluster centers greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means clustering with the starting centroids determined randomly and with a KD-Tree method. On a data set of 1000 student academic records used to classify potential dropouts, random initial centroids give an SSE of 952,972 for the quality variable and 232.48 for the GPA variable, whereas initial centroids determined by KD-Tree give an SSE of 504,302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with KD-Tree initial centroid selection has better accuracy than K-Means with random initial centroid selection.
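The role of seeding can be reproduced with a small sketch: plain Lloyd iterations run twice, once from random initial centroids and once from a spread-out ("farthest-first") seeding used here as a simple stand-in for the KD-Tree selection. The 2-D data and cluster layout are synthetic:

```python
import random

random.seed(7)

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def mean(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def sse(points, centers):
    """Sum of squared distances to the nearest centroid."""
    return sum(min(dist2(p, c) for c in centers) for p in points)

def kmeans(points, centers, iters=50):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:  # assignment step
            clusters[min(range(len(centers)),
                         key=lambda j: dist2(p, centers[j]))].append(p)
        centers = [mean(c) if c else centers[i]  # update step
                   for i, c in enumerate(clusters)]
    return centers

def farthest_first(points, k):
    """Spread-out seeding, a simple stand-in for KD-Tree selection."""
    seeds = [points[0]]
    while len(seeds) < k:
        seeds.append(max(points,
                         key=lambda p: min(dist2(p, s) for s in seeds)))
    return seeds

pts = [(random.gauss(cx, 1.0), random.gauss(cy, 1.0))
       for cx, cy in [(0, 0), (10, 0), (0, 10)] for _ in range(50)]

sse_rand = sse(pts, kmeans(pts, random.sample(pts, 3)))
sse_far = sse(pts, kmeans(pts, farthest_first(pts, 3)))
print(sse_rand, sse_far)
```

On well-separated clusters the spread-out seeding always lands one seed per cluster, so its final SSE is never worse than the random run's, mirroring the SSE comparison in the abstract.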
Hill, R.; Calvin, W. M.; Harpold, A. A.
2016-12-01
Mountain snow storage is the dominant source of water for humans and ecosystems in western North America. Consequently, the spatial distribution of snow-covered area is fundamental to hydrological, ecological, and climate models. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were collected along the entire Sierra Nevada mountain range, extending from north of Lake Tahoe to south of Mt. Whitney, during the 2015 and 2016 snow-covered seasons. The AVIRIS dataset used in this experiment consists of 224 contiguous spectral channels with wavelengths ranging 400-2500 nanometers at a 15-meter spatial pixel size. Data from the Sierras were acquired on four days: 2/24/15 during a very low snow year, 3/24/16 near maximum snow accumulation, and 5/12/16 and 5/18/16 during snow ablation and snow loss. Previous retrievals of subpixel snow-covered area in alpine regions used multiple snow endmembers due to the sensitivity of snow spectral reflectance to grain size. We will present a model that analyzes multiple endmembers of varying snow grain size, vegetation, rock, and soil in segmented regions along the Sierra Nevada to determine snow-cover spatial extent, snow sub-pixel fraction and approximate grain size or melt state. The root mean squared error will provide a spectrum-wide assessment of the mixture model's goodness-of-fit. Analysis will compare snow-covered area and snow-cover depletion within the 2016 season, as well as annual variation from the 2015 season. Field data were also acquired on three days concurrent with the 2016 flights in the Sagehen Experimental Forest and will support ground validation of the airborne data set.
Simplex-centroid mixture formulation for optimised composting of kitchen waste.
Abdullah, N; Chin, N L
2010-11-01
Composting is a good recycling method to fully utilise the organic wastes present in kitchen waste, owing to the high content of nutritious matter within the waste. In the present study, the optimised mixture proportions of kitchen waste containing vegetable scraps (V), fish processing waste (F) and newspaper (N) or onion peels (O) were determined by applying the simplex-centroid mixture design method to achieve the desired initial moisture content and carbon-to-nitrogen (CN) ratio for an effective composting process. The best mixture for blends with newspaper was 48.5% V, 17.7% F and 33.7% N, while for blends with onion peels the mixture proportion was 44.0% V, 19.7% F and 36.2% O. The predicted responses from these mixture proportions fall within the acceptable limits of 50% to 65% moisture content and a CN ratio of 20-40, and were also validated experimentally. Copyright 2010 Elsevier Ltd. All rights reserved.
A walk-free centroid method for lifetime measurements with pulsed beams
International Nuclear Information System (INIS)
Julin, R.; Kantele, J.; Luontama, M.; Passoja, A.; Poikolainen, T.
1977-09-01
A delayed-coincidence lifetime measurement method based on a comparison of walk-free centroids of time spectra is presented. The time is measured between the cyclotron RF signal and the pulse from a plastic scintillation detector followed by a fixed energy selection. The events to be time-analyzed are selected from the associated charged-particle spectrum of a silicon detector which is operated in coincidence with the scintillator, i.e., independently of the formation of the signal containing the time information. With this technique, with a micropulse FWHM of typically 500 to 700 ps, half-lives down to the 10 ps region can be measured. The following half-lives are obtained with the new method: 160±6 ps for the 2032 keV level in ²⁰⁹Pb; 45±10 ps and 160±20 ps for the 1756.8 keV (0₂⁺) and 2027.3 keV (0₃⁺) levels in ¹¹⁶Sn, respectively. (author)
Decadal Western Pacific Warm Pool Variability: A Centroid and Heat Content Study.
Kidwell, Autumn; Han, Lu; Jo, Young-Heon; Yan, Xiao-Hai
2017-10-13
We examine several characteristics of the Western Pacific Warm Pool (WP) over the past thirty years of mixed interannual variability and climate change. Our study presents the three-dimensional WP centroid (WPC) movement, the WP heat content anomaly (HC) and the WP volume (WPV) on interannual to decadal time scales. We show the statistically significant correlation between each parameter's interannual anomaly and the NINO 3, NINO 3.4, NINO 4, SOI, and PDO indices. The longitudinal component of the WPC is most strongly correlated with NINO 4 (R = 0.78). The depth component of the WPC has the highest correlation (R = −0.6) with NINO 3.4. The WPV and NINO 4 have an R-value of −0.65. The HC has the highest correlation with NINO 3.4 (R = −0.52). During the study period of 1982-2014, the non-linear trends, derived from ensemble empirical mode decomposition (EEMD), show that the WPV, WP depth and HC have all increased. The WPV has increased by 14% since 1982 and the HC has increased from −1 × 10⁸ J/m² in 1993 to 10 × 10⁸ J/m² in 2014. While the largest variances in the latitudinal and longitudinal WPC locations are associated with annual and seasonal timescales, the largest variances in the WPV and HC are due to the multi-decadal non-linear trend.
Detection of a surface breaking crack by using the centroid variations of laser ultrasonic spectra
International Nuclear Information System (INIS)
Park, Seung Kyu; Baik, Sung Hoon; Lim, Chang Hwan; Joo, Young Sang; Jung, Hyun Kyu; Cha, Hyung Ki; Kang, Young June
2006-01-01
A laser ultrasonic system is a non-contact inspection device with a wide-band spectrum and a high spatial resolution. It provides absolute measurements of the moving distance, and it can be applied in hard-to-access locations, including curved or rough surfaces such as those in a nuclear power plant. In this paper, we have investigated methods for detecting the depth of a surface-breaking crack by using the surface wave of a laser ultrasound. A surface-breaking crack acts as a kind of low-pass filter: the higher frequency components are attenuated more strongly as the crack depth increases, so the center frequency of the ultrasound spectrum decreases in proportion to the crack depth. We extracted the depth information of a surface-breaking crack by observing the centroid variation of the frequency spectrum. We describe the experimental results for detecting crack depth information by using the peak-to-valley values in the time domain and the center frequency values in the frequency domain.
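The centroid-of-spectrum idea is easy to demonstrate numerically: model the crack's filtering action as a low-pass filter (here a crude moving average, an assumption for illustration only; a wider window stands in for a deeper crack) and watch the spectral centroid fall as the filter strengthens:

```python
import cmath

def mag_spectrum(x):
    """Magnitude of the DFT over the positive-frequency half."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) for k in range(N // 2)]

def spectral_centroid(mag):
    """Centroid (center frequency, in bins) of a magnitude spectrum."""
    return sum(k * m for k, m in enumerate(mag)) / sum(mag)

def moving_average(x, w):
    # crude low-pass stand-in for the crack's filtering action;
    # a wider window mimics a deeper surface-breaking crack
    N = len(x)
    return [sum(x[(n - i) % N] for i in range(w)) / w for n in range(N)]

N = 128
pulse = [0.0] * N
pulse[32] = 1.0  # idealized broadband surface-wave pulse

centroids = {w: spectral_centroid(mag_spectrum(moving_average(pulse, w)))
             for w in (1, 3, 7)}
print(centroids)
```

The unfiltered impulse has a flat spectrum, so its centroid sits at the middle of the positive-frequency band; each stronger low-pass pulls the centroid down, which is the monotone depth indicator the abstract exploits.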
Power centroid radar and its rise from the universal cybernetics duality
Feria, Erlan H.
2014-05-01
Power centroid radar (PC-Radar) is a fast and powerful adaptive radar scheme that naturally surfaced from the recent discovery of the time dual of information theory, which has been named "latency theory." Latency theory itself was born from the universal cybernetics duality (UC-Duality), first identified in the late 1970s, which has also delivered a time dual of thermodynamics, named "lingerdynamics," that anchors an emerging lifespan theory for biological systems. In this paper the rise of PC-Radar from the UC-Duality is described. The development of PC-Radar, US patented, started with Defense Advanced Research Projects Agency (DARPA) funded research on knowledge-aided (KA) adaptive radar in the last decade. The outstanding signal to interference plus noise ratio (SINR) performance of PC-Radar under severely taxing environmental disturbances will be established. More specifically, it will be seen that the SINR performance of PC-Radar, either KA or knowledge-unaided (KU), approximates that of an optimum KA radar scheme. The explanation for this remarkable result is that PC-Radar inherently arises from the UC-Duality, which advances a "first principles" duality guidance theory for the derivation of synergistic storage-space/computational-time compression solutions. Real-world synthetic aperture radar (SAR) images will be used as prior knowledge to illustrate these results.
Collective centroid oscillations as an emittance preservation diagnostic in linear collider linacs
International Nuclear Information System (INIS)
Adolphsen, C.E.; Bane, K.L.F.; Spence, W.L.; Woodley, M.D.
1997-08-01
Transverse bunch centroid oscillations, induced at operating beam currents at which transverse wakefields are substantial, and observed at beam position monitors, are sensitive to the actual magnetic focusing, energy gain, and rf phase profiles in a linac, and are insensitive to misalignments and jitter sources. In the pulse-stealing set-up implemented at the SLC, they thus allow frequent monitoring of the stability of the in-place measures that inhibit or mitigate emittance growth (primarily the energy-scaled magnetic lattice and the rf phases necessary for BNS damping), independent of the actual emittance growth as driven by misalignments and jitter. The authors have developed a physically based analysis technique to meaningfully reduce the data. Oscillation beta-beating is a primary indicator of beam energy errors; shifts in the invariant amplitude reflect differential internal motion along the longitudinally extended bunch and thus are a sensitive indicator of the real rf phases in the machine; shifts in betatron phase advance contain corroborative information sensitive to both effects.
Centroid based clustering of high throughput sequencing reads based on n-mer counts.
Solovyov, Alexander; Lipkin, W Ian
2013-09-08
Many problems in computational biology require alignment-free sequence comparisons. One of the common tasks involving sequence comparison is sequence clustering. Here we apply methods of alignment-free comparison (in particular, comparison using sequence composition) to the challenge of sequence clustering. We study several centroid-based algorithms for clustering sequences based on word counts. Study of their performance shows that using the k-means algorithm, with or without data whitening, is efficient from the computational point of view. A higher clustering accuracy can be achieved using the soft expectation-maximization method, whereby each sequence is attributed to each cluster with a specific probability. We implement an open-source tool for alignment-free clustering, publicly available from GitHub: https://github.com/luscinius/afcluster. We show the utility of alignment-free sequence clustering for high-throughput sequencing analysis despite its limitations. In particular, it allows one to perform assembly with reduced resources and a minimal loss of quality. The major factor affecting the performance of alignment-free read clustering is the length of the reads.
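A minimal alignment-free clustering in the same spirit (a sketch, not the afcluster implementation) represents each read by its normalized k-mer count vector and runs plain k-means; the synthetic read generator and all parameters below are invented for the demo:

```python
import random
from collections import Counter
from itertools import product

random.seed(3)

KMERS = ["".join(p) for p in product("ACGT", repeat=2)]

def kmer_vector(seq):
    """Normalized 2-mer count vector (alignment-free representation)."""
    counts = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    total = sum(counts.values())
    return [counts[m] / total for m in KMERS]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans2(vecs, iters=30):
    """Two-centroid k-means with farthest-point seeding."""
    centers = [vecs[0], max(vecs, key=lambda v: dist2(v, vecs[0]))]
    for _ in range(iters):
        groups = [[], []]
        for v in vecs:
            groups[0 if dist2(v, centers[0]) <= dist2(v, centers[1]) else 1].append(v)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def read(at_rich, n=200):
    # synthetic reads from two compositions: AT-rich vs GC-rich
    w = [0.4, 0.1, 0.1, 0.4] if at_rich else [0.1, 0.4, 0.4, 0.1]
    return "".join(random.choices("ACGT", weights=w, k=n))

reads = [read(True) for _ in range(30)] + [read(False) for _ in range(30)]
vecs = [kmer_vector(r) for r in reads]
centers = kmeans2(vecs)
labels = [0 if dist2(v, centers[0]) <= dist2(v, centers[1]) else 1 for v in vecs]
print(labels)
```

No alignment is ever computed; composition alone separates the two read populations, which is the trade-off the abstract describes: cheap and approximate rather than exact.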
International Nuclear Information System (INIS)
Valentine, J.D.; Rana, A.E.
1996-01-01
The effect of approximating a continuous Gaussian distribution with histogrammed data is studied. The expressions for the theoretical uncertainties in the centroid and full-width at half maximum (FWHM), as determined by calculation of moments, are derived using the error propagation method for a histogrammed Gaussian distribution. The results are compared with the corresponding pseudo-experimental uncertainties for computer-generated histogrammed Gaussian peaks to demonstrate the effect of binning the data. It is shown that increasing the number of bins in the histogram improves the continuous-distribution approximation. For example, at least 9 and 12 bins per FWHM are needed to reduce the pseudo-experimental standard deviation of the FWHM to within 5% and 1%, respectively, of the theoretical value for a peak containing 10,000 counts. In addition, the uncertainties in the centroid and FWHM as a function of peak area are studied. Finally, Sheppard's correction is applied to partially correct for the binning effect
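The binning bias itself can be checked without sampling noise by integrating exact Gaussian bin probabilities: the second moment of the binned distribution overshoots σ² by h²/12, which is precisely what Sheppard's correction removes (the σ, bin width, and range below are assumed values):

```python
import math

SIGMA, H = 2.0, 1.0  # true standard deviation and bin width (assumed)
edges = [-10.0 + i * H for i in range(21)]

def bin_prob(lo, hi):
    """Exact probability mass of N(0, SIGMA^2) in the bin [lo, hi]."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / (SIGMA * math.sqrt(2.0))))
    return cdf(hi) - cdf(lo)

centers = [(lo + hi) / 2.0 for lo, hi in zip(edges, edges[1:])]
probs = [bin_prob(lo, hi) for lo, hi in zip(edges, edges[1:])]

# moments computed from bin centers, as one would from a histogram
total = sum(probs)
mu = sum(c * p for c, p in zip(centers, probs)) / total
var = sum((c - mu) ** 2 * p for c, p in zip(centers, probs)) / total
var_corrected = var - H ** 2 / 12.0  # Sheppard's correction

# FWHM of a Gaussian is 2*sqrt(2*ln 2)*sigma
fwhm_raw = 2.0 * math.sqrt(2.0 * math.log(2.0) * var)
fwhm_corrected = 2.0 * math.sqrt(2.0 * math.log(2.0) * var_corrected)
print(fwhm_raw, fwhm_corrected)
```

With h = σ/2 (about 4.7 bins per FWHM) the uncorrected variance is already biased high by roughly 2%, while the corrected value recovers σ² almost exactly; the statistical scatter studied in the abstract sits on top of this deterministic bias.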
Directory of Open Access Journals (Sweden)
M. Payami
2003-12-01
Full Text Available In this work, we have shown the important role of the finite-size correction to the work function in predicting the correct position of the centroid of excess charge in positively charged simple metal clusters with different values. For this purpose, we first calculated the self-consistent Kohn-Sham energies of neutral and singly-ionized clusters with sizes in the framework of the local spin-density approximation and the stabilized jellium model (SJM) as well as the simple jellium model (JM) with rigid jellium. Secondly, we fitted our results to the asymptotic ionization formulas both with and without the size correction to the work function. The fits show that the formula containing the size correction predicts the correct position of the centroid inside the jellium, while the other predicts a false position, outside the jellium sphere.
International Nuclear Information System (INIS)
Dorenbos, P.; Andriessen, J.; Eijk, C.W.E. van
2003-01-01
Data collected on the centroid shift of the 5d configuration of Ce³⁺ in oxide and fluoride compounds were recently analyzed with a model involving the correlated motion between the 5d electron and ligand electrons. The correlation effects are proportional to the polarizability of the anion ligands and, like covalency, lower the 5d-orbital energies. By means of ab initio Hartree-Fock-LCAO calculations including configuration interaction, the contributions from covalency and correlated motion to the centroid shift are determined separately for Ce³⁺ in various compounds. It will be shown that in fluoride compounds covalency provides an insignificant contribution. In oxides, polarizability appears to be of comparable importance to covalency
International Nuclear Information System (INIS)
Payami, M.
2004-01-01
In this work, we have shown the important role of the finite-size correction to the work function in predicting the correct position of the centroid of excess charge in positively charged simple metal clusters with different r_s values (2 ≤ r_s ≤ 7). For this purpose, we first calculated the self-consistent Kohn-Sham energies of neutral and singly-ionized clusters with sizes 2 ≤ N ≤ 100 in the framework of the local spin-density approximation and the stabilized jellium model as well as the simple jellium model with rigid jellium. Secondly, we fitted our results to the asymptotic ionization formulas both with and without the size correction to the work function. The fits show that the formula containing the size correction predicts the correct position of the centroid inside the jellium, while the other predicts a false position, outside the jellium sphere
International Nuclear Information System (INIS)
Davidson, Ronald C.; Logan, B. Grant
2011-01-01
Recent heavy ion fusion target studies show that it is possible to achieve ignition with direct drive and an energy gain larger than 100 at 1 MJ. To realize these advanced, high-gain schemes based on direct drive, it is necessary to develop a reliable beam smoothing technique to mitigate instabilities and facilitate uniform deposition on the target. The dynamics of the beam centroid can be explored as a possible beam smoothing technique to achieve uniform illumination over a suitably chosen region of the target. The basic idea of this technique is to induce an oscillatory motion of the centroid of each transverse slice of the beam in such a way that the centroids of different slices strike different locations on the target. The centroid dynamics is controlled by a set of biased electrical plates called 'wobblers'. Using a model based on moments of the Vlasov-Maxwell equations, we show that the wobbler deflection force acts only on the centroid motion and that the envelope dynamics are independent of the wobbler fields. If the conducting wall is far away from the beam, the envelope dynamics and centroid dynamics are completely decoupled. This is a preferred situation for the beam wobbling technique, because the wobbler system can be designed to generate the desired centroid motion on the target without considering its effects on the envelope and emittance. A conceptual design of the wobbler system for a heavy ion fusion driver is briefly summarized.
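The decoupling can be pictured with a toy moment model: the centroid obeys a driven-oscillator equation in which the wobbler enters only through the drive term (the focusing strength, drive amplitude, and frequency below are invented numbers, not a driver design), so slices given opposite drive phases are steered to mirror-image target positions:

```python
import math

def centroid_orbit(phase, steps=2000, dt=1e-3):
    """Semi-implicit Euler integration of the toy centroid equation
       x'' = -k*x + a*cos(w*t + phase),
    where the wobbler acts only through the drive term."""
    k, a, w = 100.0, 50.0, 8.0  # assumed focusing, drive amplitude, drive frequency
    x, v = 0.0, 0.0
    for i in range(steps):
        t = i * dt
        v += (-k * x + a * math.cos(w * t + phase)) * dt
        x += v * dt
    return x

x_plus = centroid_orbit(0.0)
x_minus = centroid_orbit(math.pi)
print(x_plus, x_minus)  # mirror-image strike positions
```

Because the equation is linear and the envelope never appears in it, sweeping the phase across slices paints the centroid over a controlled region of the target without touching the envelope, which is the essence of the wobbler scheme.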
Neutron radiography with sub-15 μm resolution through event centroiding
Energy Technology Data Exchange (ETDEWEB)
Tremsin, Anton S., E-mail: ast@ssl.berkeley.edu [Space Sciences Laboratory, University of California at Berkeley, Berkeley, CA 94720 (United States); McPhate, Jason B.; Vallerga, John V.; Siegmund, Oswald H.W. [Space Sciences Laboratory, University of California at Berkeley, Berkeley, CA 94720 (United States); Bruce Feller, W. [NOVA Scientific, Inc. 10 Picker Road, Sturbridge, MA 01566 (United States); Lehmann, Eberhard; Kaestner, Anders; Boillat, Pierre; Panzner, Tobias; Filges, Uwe [Spallation Neutron Source Division, Paul Scherrer Institute, CH-5232 Villigen (Switzerland)
2012-10-01
Conversion of thermal and cold neutrons into a strong ≈1 ns electron pulse with an absolute neutron detection efficiency as high as 50-70% makes detectors with ¹⁰B-doped Microchannel Plates (MCPs) very attractive for neutron radiography and microtomography applications. The subsequent signal amplification preserves the location of the event within the MCP pore (typically 6-10 μm in diameter), providing the possibility to perform neutron counting with high spatial resolution. Different event centroiding techniques of the charge landing on a patterned anode enable accurate reconstruction of the neutron position, provided the charge footprints do not overlap within the time required for event processing. The new fast 2 × 2 Timepix readout with >1.2 kHz frame rates provides the unique possibility to detect neutrons with sub-15 μm resolution at several MHz/cm² counting rates. The results of high resolution neutron radiography experiments presented in this paper demonstrate the sub-15 μm resolution capability of our detection system. The high degree of collimation and cold spectrum of the ICON and BOA beamlines combined with the high spatial resolution and detection efficiency of MCP-Timepix detectors are crucial for high contrast neutron radiography and microtomography with high spatial resolution. The next generation of Timepix electronics with sparsified readout should enable counting rates in excess of 10⁷ n/cm²/s, taking full advantage of the high beam intensity of present brightest neutron imaging facilities.
Quick regional centroid moment tensor solutions for the Emilia 2012 (northern Italy) seismic sequence
Directory of Open Access Journals (Sweden)
Silvia Pondrelli
2012-10-01
Full Text Available In May 2012, a seismic sequence struck the Emilia region (northern Italy). The mainshock, of Ml 5.9, occurred on May 20, 2012, at 02:03 UTC. This was preceded by a smaller Ml 4.1 foreshock some hours before (23:13 UTC on May 19, 2012) and followed by more than 2,500 earthquakes in the magnitude range from Ml 0.7 to 5.2. In addition, on May 29, 2012, three further strong earthquakes occurred, all with magnitude Ml ≥5.2: a Ml 5.8 earthquake in the morning (07:00 UTC), followed by two events within just 5 min of each other, one at 10:55 UTC (Ml 5.3) and the second at 11:00 UTC (Ml 5.2). For all of the Ml ≥4.0 earthquakes in Italy and for all of the Ml ≥4.5 in the Mediterranean area, an automatic procedure for the computation of a regional centroid moment tensor (RCMT) is triggered by an email alert. Within 1 h of the event, a manually revised quick RCMT (QRCMT) can be published on the website if the solution is considered stable. In particular, for the Emilia seismic sequence, 13 QRCMTs were determined and for three of them, those with M >5.5, the automatically computed QRCMTs fitted the criteria for publication without manual revision. Using this seismic sequence as a test, we can then identify the magnitude threshold for automatic publication of our QRCMTs.
Yuan, Ying; Li, Jicun; Li, Xin-Zheng; Wang, Feng
2018-05-01
The development of effective centroid potentials (ECPs) is explored with both constrained-centroid and quasi-adiabatic force matching, using liquid water as a test system. A trajectory integrated with the ECP is free of the statistical noise that would be introduced if the centroid potential were approximated on the fly with a finite number of beads. With the reduced cost of the ECP, challenging experimental properties can be studied in the spirit of centroid molecular dynamics. The experimental number density of H2O is 0.38% higher than that of D2O. With the ECP, the H2O number density is predicted to be 0.42% higher when the dispersion term is not refit. After correction of finite-size effects, the diffusion constant of H2O is found to be 21% higher than that of D2O, which is in good agreement with the 29.9% higher diffusivity for H2O observed experimentally. Although the ECP is also able to capture the redshifts of both the OH and OD stretching modes in liquid water, there are a number of properties that a classical simulation with the ECP will not be able to recover. For example, the heat capacities of H2O and D2O are predicted to be almost identical and higher than the experimental values. Such a failure is simply a result of not properly treating quantized vibrational energy levels when the trajectory is propagated with classical mechanics. Several limitations of the ECP-based approach without bead population reconstruction are discussed.
Bisanz, T.; Große-Knetter, J.; Quadt, A.; Rieger, J.; Weingarten, J.
2017-08-01
The upgrade to the High Luminosity Large Hadron Collider will increase the instantaneous luminosity by more than a factor of 5, thus creating significant challenges for the tracking systems of all experiments. Recent advances in active pixel detectors designed in CMOS processes provide attractive alternatives to the well-established hybrid design using passive sensors, since they allow for smaller pixel sizes and cost-effective production. This article presents studies of a high-voltage CMOS active pixel sensor designed for the ATLAS tracker upgrade. The sensor is glued to the read-out chip of the Insertable B-Layer, forming a capacitively coupled pixel detector. The pixel pitch of the device under test is 33 × 125 μm², while the pixels of the read-out chip have a pitch of 50 × 250 μm². Three pixels of the CMOS device are connected to one read-out pixel; the information about which of these subpixels was hit is encoded in the amplitude of the output signal (subpixel encoding). Test beam measurements are presented that demonstrate the usability of this subpixel encoding scheme.
Basoglu, Burak; Halicioglu, Kerem; Albayrak, Muge; Ulug, Rasit; Tevfik Ozludemir, M.; Deniz, Rasim
2017-04-01
In the last decade, the importance of high-precision geoid determination at the local or national level has been pointed out by the Turkish National Geodesy Commission. The Commission has also put the modernization of Turkey's national height system on the agenda, and several related projects have been realized in recent years. In Istanbul, a GNSS/levelling geoid was defined in 2005 for the metropolitan area of the city with an accuracy of ±3.5 cm. In order to achieve a better accuracy in this area, the project "Local Geoid Determination with Integration of GNSS/Levelling and Astro-Geodetic Data" has been conducted at Istanbul Technical University and Bogazici University KOERI since January 2016. The project is funded by The Scientific and Technological Research Council of Turkey. Within the scope of the project, modernization studies of the Digital Zenith Camera System are being carried out in terms of hardware components and software development. The main subjects are the star catalogues and the centroiding algorithm used to identify the stars in the zenithal star field. During the test observations of the Digital Zenith Camera System performed between 2013 and 2016, final results were calculated using the PSF method for star centroiding and the second USNO CCD Astrograph Catalogue (UCAC2) for the reference star positions. This study aims to investigate the position accuracy of the star images by comparing different centroiding algorithms and available star catalogues used in astro-geodetic observations conducted with the digital zenith camera system.
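For context on what a star-centroiding algorithm computes: the simplest alternative to PSF fitting is the intensity-weighted moment centroid. A hedged sketch on a synthetic Gaussian star (grid size, PSF width, and position are illustrative, not from the study):

```python
import numpy as np

def moment_centroid(img):
    """Intensity-weighted first moments give a sub-pixel star position."""
    img = np.asarray(img, float)
    ys, xs = np.indices(img.shape)
    s = img.sum()
    return (xs * img).sum() / s, (ys * img).sum() / s

# Synthetic star: Gaussian PSF placed at a non-integer position.
ys, xs = np.indices((24, 24))
true_x, true_y = 12.3, 9.7
star = np.exp(-((xs - true_x) ** 2 + (ys - true_y) ** 2) / (2 * 1.8 ** 2))
cx, cy = moment_centroid(star)
```

On noise-free data the moment centroid recovers the sub-pixel position almost exactly; PSF fitting wins when noise, crowding, or an asymmetric PSF are present.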
Milliner, C. W. D.; Dolan, J. F.; Hollingsworth, J.; Leprince, S.; Ayoub, F.
2014-12-01
Coseismic surface deformation is typically measured in the field by geologists and with a range of geophysical methods such as InSAR, LiDAR and GPS. Current methods, however, either fail to capture the near-field coseismic surface deformation pattern where vital information is needed, or lack pre-event data. We develop a standardized and reproducible methodology to fully constrain the surface, near-field, coseismic deformation pattern in high resolution using aerial photography. We apply our methodology using the program COSI-corr to successfully cross-correlate pairs of aerial, optical imagery acquired before and after the 1992 Mw 7.3 Landers and 1999 Mw 7.1 Hector Mine earthquakes. This technique allows measurement of the coseismic slip distribution and of the magnitude and width of off-fault deformation with sub-pixel precision. This technique can be applied in a cost-effective manner for recent and historic earthquakes using archive aerial imagery. We also use synthetic tests to constrain and correct for the bias imposed on the result by the use of a sliding window during correlation. Correcting for artificial smearing of the tectonic signal allows us to robustly measure the fault zone width along a surface rupture. Furthermore, the synthetic tests have constrained for the first time the measurement precision and accuracy of estimated fault displacements and fault-zone width. Our methodology provides the unique ability to robustly understand the kinematics of surface faulting while at the same time accounting for both off-fault deformation and the measurement biases that typically complicate such data. For both earthquakes we find that our displacement measurements derived from cross-correlation are systematically larger than the field displacement measurements, indicating the presence of off-fault deformation. We show that the Landers and Hector Mine earthquakes accommodated 46% and 38% of displacement away from the main primary rupture as off-fault deformation, over a mean
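The sub-pixel precision quoted above comes from locating a correlation peak to a fraction of a sample. A minimal 1D sketch of the idea (not COSI-corr itself; the Gaussian signals and 3-point parabolic refinement are illustrative):

```python
import numpy as np

def subpixel_shift(a, b):
    """Sub-pixel shift of signal b relative to a: cross-correlate,
    take the integer peak, refine with a 3-point parabola fit."""
    corr = np.correlate(b, a, mode="full")
    k = int(corr.argmax())
    c0, c1, c2 = corr[k - 1], corr[k], corr[k + 1]
    delta = 0.5 * (c0 - c2) / (c0 - 2.0 * c1 + c2)
    return (k - (len(a) - 1)) + delta

x = np.arange(256, dtype=float)
before = np.exp(-((x - 128.0) ** 2) / 200.0)
after = np.exp(-((x - 130.4) ** 2) / 200.0)   # true displacement: +2.4 samples
shift = subpixel_shift(before, after)
```

Real imagery would first be demeaned and windowed; COSI-corr works in 2D and in the frequency domain, but the peak-refinement principle is the same.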
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Schellhorn, M; Rosenberger, M; Correns, M; Blau, M; Goepfert, A; Rueckwardt, M; Linss, G
2010-01-01
Within the field of industrial image processing, the use of colour cameras is becoming ever more common. Increasingly, the established black-and-white cameras are being replaced by economical single-chip colour cameras with a Bayer pattern. The additional colour information is particularly important for recognition and inspection tasks. It also becomes interesting for geometric metrology if measuring tasks can thereby be solved more robustly or more accurately. However, only a few suitable algorithms are available to detect edges with the necessary precision, and all of them require additional computational effort. On the basis of a new filter for edge detection in colour images with subpixel precision, an implementation on a pre-processing hardware platform is presented. Hardware-implemented filters offer the advantage that they can easily be used with existing measuring software, since after the filtering a single-channel image is present which unites the information of all colour channels. Advanced field programmable gate arrays represent an ideal platform for the parallel processing of multiple channels, but an effective implementation demands a high programming effort. Using the colour filter implementation as an example, the arising problems are analyzed and the chosen solution method is presented.
International Nuclear Information System (INIS)
Andriessen, J.; Dorenbos, P.; Eijk, C.W.E van
2002-01-01
The centroid shifts of the 5d level of Ce³⁺ in BaF₂, LaAlO₃ and LaCl₃ have been calculated using the ionic cluster approach. By applying configuration interaction as an extension of the basic HF-LCAO approach, the dynamical polarization contribution to the centroid shift was calculated. This was found to be successful only if basis sets optimized for polarization of the anions are used.
International Nuclear Information System (INIS)
Orives, Juliane Resges; Galvan, Diego; Coppo, Rodolfo Lopes; Rodrigues, Cezar Henrique Furtoso; Angilelli, Karina Gomes; Borsato, Dionísio
2014-01-01
Highlights: • Mixture experimental design was used, which allowed evaluating various responses. • A predictive equation is presented that allows verifying the behavior of the mixtures. • The results showed that the obtained biodiesel dispensed with the use of any additives. - Abstract: The quality of biodiesel is a determining factor in its commercialisation, and parameters such as the Cold Filter Plugging Point (CFPP) and Induction Period (IP) determine its operability in engines on cold days and its storage time, respectively. These factors are important in characterisation of the final product. A B100 biodiesel formulation was developed using a multiresponse optimisation, for which the CFPP and cost were minimised and the IP and yield were maximised. The experiments were carried out according to a simplex-centroid mixture design using soybean oil, beef tallow, and poultry fat. The optimum formulation consisted of 50% soybean oil, 20% beef tallow, and 30% poultry fat and had a CFPP value of 1.92 °C, a raw material cost of US$ 903.87 ton⁻¹, an IP of 8.28 h, and a yield of 95.68%. Validation was performed in triplicate, and the t-test indicated that there was no difference between the estimated and experimental values for any of the dependent variables, thus indicating the efficiency of the joint optimisation in the biodiesel production process, which met the criteria for CFPP and IP as well as high yield and low cost.
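A simplex-centroid design for q components runs every non-empty subset of components blended in equal proportions (2^q − 1 runs). A sketch of how the seven runs for the three fats and oils above would be enumerated (illustrative helper, not the authors' software):

```python
from itertools import combinations

def simplex_centroid_design(components):
    """All 2**q - 1 runs of a simplex-centroid design: every non-empty
    subset of components blended in equal proportions."""
    q = len(components)
    runs = []
    for r in range(1, q + 1):
        for subset in combinations(components, r):
            blend = dict.fromkeys(components, 0.0)
            for c in subset:
                blend[c] = 1.0 / r   # equal share within the subset
            runs.append(blend)
    return runs

design = simplex_centroid_design(["soybean oil", "beef tallow", "poultry fat"])
# 7 runs: 3 pure components, 3 binary 50/50 blends, 1 ternary centroid
```

Responses measured at these runs are then fitted with a Scheffé-type mixture polynomial, which is what yields the predictive equation mentioned in the highlights.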
Papadelis, Aras
2006-01-01
We present results from a recent beam test of a prototype sensor for the LHCb Vertex Locator detector, read out with the Beetle 1.3 front-end chip. We have studied the effect of the sensor bias voltage on the reconstructed cluster positions in a sensor placed in a 120 GeV pion beam at a 10° incidence angle. We find an unexplained systematic shift in the reconstructed cluster centroid when increasing the bias voltage on an already overdepleted sensor. The shift is independent of strip pitch and sensor thickness.
Directory of Open Access Journals (Sweden)
Moysés Nascimento
2009-03-01
Full Text Available The objective of this work was to modify the centroid method of evaluating the phenotypic adaptability and stability of genotypes, in order to give it greater biological meaning and to improve quantitative and qualitative aspects of its analysis. The modification consisted of adding three further ideotypes, defined according to the mean values of the genotypes in the environments. Data from a dry-matter yield trial of 92 alfalfa (Medicago sativa) genotypes, carried out in a randomized block design with two replicates, were used. The genotypes were subjected to 20 cuts between November 2004 and June 2006, and each cut was treated as an environment. The inclusion of the ideotypes with greater biological meaning (mean values in the environments) resulted in a graphical dispersion shaped like an arrow pointing to the right, with the most productive genotypes close to the arrow's tip. With the modification, only five genotypes were classified in the same classes as in the original centroid method. The arrow-shaped figure allows a direct comparison of genotypes through the formation of a productivity gradient. The modified method retains the original method's ease of interpreting results for genotype recommendation and does not allow ambiguous interpretation of the results.
Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho
2016-01-01
This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure fitted with targets is taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation matching and least-squares image matching so that sub-pixel targeting can be achieved to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, vibrating in a complex configuration with mixed bending and torsional motions at multiple frequencies simultaneously. The results of the present method showed good agreement with measurements by two laser displacement sensors. The proposed methodology requires only inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications in various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366
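The correlation stage of such target tracing can be sketched as normalized cross-correlation of a target template over the image; in the described pipeline the integer-pixel peak found this way would seed least-squares matching for sub-pixel refinement. A minimal illustration (synthetic data; not the authors' code):

```python
import numpy as np

def ncc_match(image, template):
    """Locate a target template in an image by normalized cross-correlation.

    Returns the top-left (row, col) of the best-matching window.
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum()) * tnorm
            score = (w * t).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(0)
scene = rng.random((40, 40))
target = scene[17:25, 9:17].copy()   # 8x8 patch cut out at (17, 9)
pos = ncc_match(scene, target)
```

Normalization against local mean and energy makes the score robust to illumination changes, which is why NCC is a common first stage before least-squares refinement.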
Directory of Open Access Journals (Sweden)
P.B. Chopade
2014-05-01
Full Text Available This paper presents an image super-resolution scheme based on sub-pixel image registration, using a specific class of dyadic-integer-coefficient wavelet filters derived from the construction of a half-band polynomial. First, the integer-coefficient half-band polynomial is designed by the splitting approach. Next, this half-band polynomial is factorized and assigned a specific number of vanishing moments and roots to obtain the dyadic-integer-coefficient low-pass analysis and synthesis filters. The potential of these dyadic-integer-coefficient wavelet filters is explored in the field of image super-resolution using sub-pixel image registration. The two low-resolution frames are registered at a specific shift from one another to restore the resolution lost by the CCD array of the camera. The discrete wavelet transform (DWT) obtained from the designed coefficients is applied to these two low-resolution images to obtain the high-resolution image. The developed approach is validated by comparing its quality metrics with those of existing filter banks.
International Nuclear Information System (INIS)
BURKARDT, JOHN; GUNZBURGER, MAX; PETERSON, JANET; BRANNON, REBECCA M.
2002-01-01
The theory, numerical algorithm, and user documentation are provided for a new "Centroidal Voronoi Tessellation (CVT)" method of filling a region of space (2D or 3D) with particles at any desired particle density. "Clumping" is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called "mesh-free" method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported.
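The "efficient statistical methods" alluded to above are commonly realized as a probabilistic Lloyd iteration, in which Monte-Carlo sample points stand in for explicit Voronoi cells. A hedged sketch in the unit square (not the CVT code itself; parameters are illustrative):

```python
import numpy as np

def cvt(n_particles, n_samples=20000, iters=50, seed=1):
    """Probabilistic Lloyd iteration for a CVT in the unit square:
    random samples approximate each generator's Voronoi cell."""
    rng = np.random.default_rng(seed)
    gen = rng.random((n_particles, 2))
    for _ in range(iters):
        pts = rng.random((n_samples, 2))
        # assign every sample point to its nearest generator
        d = ((pts[:, None, :] - gen[None, :, :]) ** 2).sum(axis=2)
        owner = d.argmin(axis=1)
        for i in range(n_particles):
            mine = pts[owner == i]
            if len(mine):
                gen[i] = mine.mean(axis=0)  # move generator to cell centroid
    return gen

particles = cvt(16)
```

After convergence each generator sits at the centroid of its own cell, which is exactly the fixed-point property the abstract describes, and clumping is avoided without ever constructing a Voronoi diagram.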
International Nuclear Information System (INIS)
Qu, J. L.; Lu, F. J.; Lu, Y.; Song, L. M.; Zhang, S.; Wang, J. M.; Ding, G. Q.
2010-01-01
We present a study of the centroid frequencies and phase lags of quasi-periodic oscillations (QPOs) as functions of photon energy for GRS 1915+105. It is found that the centroid frequencies of the 0.5-10 Hz QPOs and their phase lags are both energy dependent, and there exists an anticorrelation between the QPO frequency and phase lag. These new results challenge the popular QPO models, because none of them can fully explain the observed properties. We suggest that the observed QPO phase lags are partially due to the variation of the QPO frequency with energy, especially for those with frequency higher than 3.5 Hz.
Centroid stabilization for laser alignment to corner cubes: designing a matched filter
Energy Technology Data Exchange (ETDEWEB)
Awwal, Abdul A. S.; Bliss, Erlan; Brunton, Gordon; Kamm, Victoria Miller; Leach, Richard R.; Lowe-Webb, Roger; Roberts, Randy; Wilhelmsen, Karl
2016-11-08
Automation of image-based alignment of National Ignition Facility high energy laser beams is providing the capability of executing multiple target shots per day. One important alignment is beam centration through the second and third harmonic generating crystals in the final optics assembly (FOA), which employs two retroreflecting corner cubes as centering references for each beam. Beam-to-beam variations and systematic beam changes over time in the FOA corner cube images can lead to a reduction in accuracy as well as increased convergence durations for the template-based position detector. A systematic approach is described that maintains FOA corner cube templates and guarantees stable position estimation.
Zhang, Z; Werner, F.; Cho, H. -M.; Wind, Galina; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry
2017-01-01
The so-called bi-spectral method retrieves cloud optical thickness (t) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near infrared (VIS/NIR) band and the other in a shortwave-infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved t and re. In this study, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the t and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the t and re retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval.
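The second-order Taylor framework described above can be written schematically as follows (our notation, not necessarily the paper's): for a retrieval r(R₁, R₂) of t or re from the VIS/NIR reflectance R₁ and the SWIR reflectance R₂,

```latex
\mathrm{E}\left[r(R_1,R_2)\right] \;\approx\; r(\bar R_1,\bar R_2)
 \;+\; \tfrac{1}{2}\,\frac{\partial^2 r}{\partial R_1^2}\,\operatorname{Var}(R_1)
 \;+\; \tfrac{1}{2}\,\frac{\partial^2 r}{\partial R_2^2}\,\operatorname{Var}(R_2)
 \;+\; \frac{\partial^2 r}{\partial R_1\,\partial R_2}\,\operatorname{Cov}(R_1,R_2)
```

with the derivatives evaluated at the pixel-mean reflectances. The Var(R₁) and covariance terms are what allow VIS/NIR sub-pixel variability to bias the re retrieval, even though re is usually thought of as controlled by the SWIR band.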
Energy Technology Data Exchange (ETDEWEB)
Regis, J.-M., E-mail: regis@ikp.uni-koeln.d [Institut fuer Kernphysik, Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany); Pascovici, G.; Jolie, J.; Rudigier, M. [Institut fuer Kernphysik, Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany)
2010-10-01
The ultra-fast timing technique was introduced in the 1980s and is capable of measuring picosecond lifetimes of nuclear excited states with about 3 ps accuracy. Very fast scintillator detectors are connected to an electronic timing circuit, and detector vs. detector time spectra are analyzed by means of the centroid shift method. The very good 3% energy resolution of the nowadays available LaBr₃(Ce) scintillator detectors for γ-rays has made possible an extension of the well-established fast timing technique. The energy-dependent fast timing characteristics, or the prompt curve, of the LaBr₃(Ce) scintillator detector has been measured using a standard ¹⁵²Eu γ-ray source. For any energy combination in the range of 200 keV
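The centroid shift method exploits the fact that convolving a prompt timing response with an exponential decay shifts the time-spectrum centroid by the mean lifetime τ. A hedged numerical sketch (parameter values invented for illustration):

```python
import numpy as np

def centroid(t, counts):
    # First moment of a time spectrum.
    return (t * counts).sum() / counts.sum()

t = np.linspace(0.0, 2000.0, 4001)        # time axis in ps, 0.5 ps steps
sigma, tau = 120.0, 35.0                  # prompt width and lifetime (ps)
prompt = np.exp(-((t - 600.0) ** 2) / (2.0 * sigma ** 2))
decay = np.exp(-t / tau)                  # exponential decay of the state
delayed = np.convolve(prompt, decay)[: len(t)]
# The centroid shift between delayed and prompt spectra estimates tau.
tau_est = centroid(t, delayed) - centroid(t, prompt)
```

Even though the 120 ps prompt width is far larger than the 35 ps lifetime, the centroid shift recovers τ — which is why the method reaches picosecond sensitivity with nanosecond-scale detector responses.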
Directory of Open Access Journals (Sweden)
S. Sharifi hashjin
2016-06-01
Full Text Available In recent years, developing target detection algorithms has received growing interest in hyperspectral images. In comparison to the classification field, few studies have been done on dimension reduction or band selection for target detection in hyperspectral images. This study presents a simple method to remove bad bands from the images in a supervised manner for sub-pixel target detection. The proposed method is based on comparing field and laboratory spectra of the target of interest for detecting bad bands. For evaluation, the target detection blind test dataset is used in this study. Experimental results show that the proposed method can improve efficiency of the two well-known target detection methods, ACE and CEM.
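One of the two detectors named above, CEM (Constrained Energy Minimization), has a closed form: w = R⁻¹d / (dᵀR⁻¹d), where R is the sample correlation matrix and d the target spectrum. A sketch on synthetic data (spectra and noise levels invented for illustration):

```python
import numpy as np

def cem(pixels, d):
    """Constrained Energy Minimization: the filter w minimizes average
    output energy subject to w @ d == 1, so target-like pixels score ~1."""
    R = (pixels.T @ pixels) / pixels.shape[0]   # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)
    return pixels @ w

rng = np.random.default_rng(3)
bands = 30
background = rng.normal(0.3, 0.05, size=(500, bands))
d = np.linspace(0.2, 0.9, bands)                 # assumed target spectrum
scene = np.vstack([background, d + rng.normal(0, 0.01, bands)])
scores = cem(scene, d)                           # last pixel holds the target
```

Band selection matters here precisely because noisy ("bad") bands inflate R and distort d, degrading the filter — which is the motivation of the study above.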
Mikheeva, A. I.; Tutubalina, O. V.; Zimin, M. V.; Golubeva, E. I.
2017-12-01
The tundra-taiga ecotone plays a significant role in northern ecosystems. Due to global climatic changes, the vegetation of the ecotone is a key object of many remote-sensing studies. The interpretation of vegetation and non-vegetation objects of the tundra-taiga ecotone in satellite imagery of moderate resolution is complicated by the difficulty of extracting these objects from the spectral and spatial mixtures within a pixel. This article describes a method for the subpixel classification of a Terra ASTER satellite image for vegetation mapping of the tundra-taiga ecotone in the Tuliok River area, Khibiny Mountains, Russia. It is demonstrated that this method allows the position of the boundaries of ecotone objects and their abundance to be determined on the basis of quantitative criteria, which provides a more accurate characterization of ecotone vegetation compared to the per-pixel approach to automatic imagery interpretation.
Hill, R.; Calvin, W. M.; Harpold, A.
2017-12-01
Mountain snow storage is the dominant source of water for humans and ecosystems in western North America. Consequently, the spatial distribution of snow-covered area is fundamental to hydrological, ecological, and climate models. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were collected along the entire Sierra Nevada mountain range, extending from north of Lake Tahoe to south of Mt. Whitney, during the 2015 and 2016 snow-covered seasons. The AVIRIS dataset used in this experiment consists of 224 contiguous spectral channels with wavelengths ranging 400-2500 nanometers at a 15-meter spatial pixel size. Data from the Sierras were acquired on four days: 2/24/15 during a very low snow year, 3/24/16 near maximum snow accumulation, and 5/12/16 and 5/18/16 during snow ablation and snow loss. Building on previous subpixel snow-covered area retrieval algorithms that account for varying grain size, we present a model that analyzes multiple endmembers of varying snow grain size, vegetation, rock, and soil in segmented regions along the Sierra Nevada to determine snow-cover spatial extent, snow sub-pixel fraction, and approximate grain size. In addition, simulated variants of the data are used to compare and contrast the retrievals of current snow products such as MODIS Snow-Covered Area and Grain Size (MODSCAG) and the Airborne Snow Observatory (ASO). Specifically, do lower spatial resolution (MODIS), broader resolution bandwidth (MODIS), and limited spectral resolution (ASO) affect snow-cover area and grain size approximations? The implications of our findings will help refine snow mapping products for planned hyperspectral satellite spectrometer systems such as EnMAP (slated to launch in 2019), HISUI (planned for inclusion on the International Space Station in 2018), and HyspIRI (currently under consideration).
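The multiple-endmember analysis described above rests on linear spectral mixing: a pixel's reflectance is modeled as a fractional combination of endmember spectra, and the fractions are recovered by inversion. A minimal unconstrained sketch (toy spectra; real workflows add non-negativity and sum-to-one constraints):

```python
import numpy as np

# Toy endmember spectra over 6 bands (rows: snow, rock, vegetation).
endmembers = np.array([
    [0.95, 0.90, 0.80, 0.15, 0.08, 0.05],   # snow: bright VIS, dark SWIR
    [0.20, 0.22, 0.25, 0.28, 0.30, 0.31],   # rock: flat spectrum
    [0.05, 0.08, 0.45, 0.40, 0.20, 0.15],   # vegetation: green/NIR peak
])
E = endmembers.T                             # bands x endmembers

true_f = np.array([0.6, 0.3, 0.1])           # sub-pixel fractions
pixel = E @ true_f                           # noise-free mixed reflectance

# Unconstrained least-squares unmixing of the mixed pixel.
f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

The snow fraction `f[0]` is the sub-pixel snow-covered area of this pixel; grain size enters in practice by swapping among several snow endmembers of different grain sizes.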
Directory of Open Access Journals (Sweden)
Dear Keith BG
2006-09-01
Full Text Available Abstract Background To explain the possible effects of exposure to weather conditions on population health outcomes, weather data need to be calculated at a level in space and time that is appropriate for the health data. There are various ways of estimating exposure values from raw data collected at weather stations but the rationale for using one technique rather than another; the significance of the difference in the values obtained; and the effect these have on a research question are factors often not explicitly considered. In this study we compare different techniques for allocating weather data observations to small geographical areas and different options for weighting averages of these observations when calculating estimates of daily precipitation and temperature for Australian Postal Areas. Options that weight observations based on distance from population centroids and population size are more computationally intensive but give estimates that conceptually are more closely related to the experience of the population. Results Options based on values derived from sites internal to postal areas, or from nearest neighbour sites – that is, using proximity polygons around weather stations intersected with postal areas – tended to include fewer stations' observations in their estimates, and missing values were common. Options based on observations from stations within 50 kilometres radius of centroids and weighting of data by distance from centroids gave more complete estimates. Using the geographic centroid of the postal area gave estimates that differed slightly from the population weighted centroids and the population weighted average of sub-unit estimates. Conclusion To calculate daily weather exposure values for analysis of health outcome data for small areas, the use of data from weather stations internal to the area only, or from neighbouring weather stations (allocated by the use of proximity polygons), is too limited. The most
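The distance-weighted option described above can be sketched as inverse-distance weighting of stations within a cutoff radius of the area centroid (station coordinates and values are invented; the 50 km radius follows the text):

```python
import math

def idw_estimate(stations, centroid, radius_km=50.0, power=1.0):
    """Distance-weighted average of station observations around an area
    centroid; stations beyond radius_km are excluded."""
    num = den = 0.0
    for x, y, value in stations:
        d = math.hypot(x - centroid[0], y - centroid[1])
        if d <= radius_km:
            w = 1.0 / max(d, 1e-6) ** power   # closer stations weigh more
            num += w * value
            den += w
    return num / den if den else None

# Hypothetical stations: (x km, y km, daily mean temperature in degC).
stations = [(10, 0, 20.0), (30, 0, 24.0), (80, 0, 30.0)]
estimate = idw_estimate(stations, centroid=(0.0, 0.0))  # 80 km station excluded
```

Population weighting, as discussed in the abstract, would replace the geographic centroid with a population-weighted one or average sub-unit estimates weighted by population.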
Yang, Longyuan; Cao, Hongliang; Yuan, Qiaoxia; Luoa, Shuai; Liu, Zhigang
2018-03-01
Vermicomposting is a promising method to dispose of dairy manures, and dairy manure vermicompost (DMV) is of high value as a replacement for expensive peat in seedling compressed substrates. In this research, three main components, DMV, straw, and peat, are combined in the compressed substrates, and the effect of the individual components and the corresponding optimal ratio for seedling production are of interest. To address these issues, a simplex-centroid experimental mixture design is employed, and a cucumber seedling experiment is conducted to evaluate the compressed substrates. Results demonstrated that the mechanical strength and physicochemical properties of compressed substrates for cucumber seedlings can be well satisfied with a suitable mixture ratio of the components. Moreover, the optimal ratio of DMV, straw, and peat could be determined as 0.5917:0.1608:0.2475 when the weight coefficients of the three parameters (shoot length, root dry weight, and aboveground dry weight) were 1:1:1. For different purposes, the optimum ratio can be adjusted slightly on the basis of different weight coefficients. A compressed substrate is a lump with a certain mechanical strength, produced by applying mechanical pressure to the seedling substrate. It does not harm seedlings when they are bedded out, since the compressed substrate and seedling are bedded out together. However, vermicompost and agricultural-waste components had not previously been used in compressed substrates for vegetable seedling production. Thus, it is important to understand the effect of the individual components on seedling production and to determine the optimal ratio of the components.
Zhao, Liang; Adhikari, Avishek; Sakurai, Kouichi
Watermarking is one of the most effective techniques for copyright protection and information hiding. It can be applied in many fields of our society. Nowadays, some image scrambling schemes are used as one part of the watermarking algorithm to enhance the security. Therefore, how to select an image scrambling scheme and what kind of image scrambling scheme may be used for watermarking are key problems. An evaluation method for image scrambling schemes can be seen as a useful test tool for showing the properties or flaws of the image scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and centroid difference of bit-planes is presented to obtain the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results show (in Figs. 6 and 7) that for a general gray-scale image, the evaluation degree of the corresponding cipher image for the first 4 significant bit-planes is nearly the same as that for all 8 bit-planes. That is why, instead of taking 8 bit-planes of a gray-scale image, it is sufficient to take only the first 4 significant bit-planes for the experiment to find the scrambling degree. This 50% reduction in the computational cost makes our scheme efficient.
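The bit-plane centroid ingredient of the evaluation system can be sketched as follows: extract each significant bit-plane and compute the centroid of its 1-bits; a scrambling scheme is then scored partly by how far these centroids move between plain and cipher images. A hedged illustration (not the authors' full system):

```python
import numpy as np

def bitplane_centroids(img, planes=4):
    """Centroid of the 1-bits in each of the `planes` most significant
    bit-planes of an 8-bit image."""
    ys, xs = np.indices(img.shape)
    cents = []
    for b in range(7, 7 - planes, -1):
        bits = (img >> b) & 1            # extract bit-plane b
        n = bits.sum()
        cents.append(((xs * bits).sum() / n, (ys * bits).sum() / n))
    return cents

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cents = bitplane_centroids(img)
```

A well-scrambled image behaves like random data, so its bit-plane centroids cluster near the image center; the centroid *difference* from the plain image is what feeds the evaluation score.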
Bhimireddy, Sudheer Reddy; Bhaganagar, Kiran
2016-11-01
Buoyant plumes are common in the atmosphere whenever there is a difference in temperature or density between a source and its ambience. In a stratified environment, plume rise continues as long as a buoyancy difference exists between the plume and the ambience. In a calm, no-wind ambience, this plume rise is purely vertical, and entrainment is driven both by the relative motion of the plume with respect to the ambience and by ambient turbulence. In this study, the plume centroid is defined as the plume's mass center and is calculated from the kinematic equation relating the rate of change of the centroid's position to the plume rise velocity. The parameters chosen to describe the plume are the plume radius, the plume's vertical velocity, and the local buoyancy of the plume. The plume rise velocity is calculated from the mass, momentum, and heat conservation equations in their differential form. Our study focuses on the entrainment velocity, as it dictates the extent of plume growth. This entrainment velocity is modeled as a sum of fractions of the plume's relative velocity and the ambient turbulence. From the results, we studied the effect of turbulence on plume growth by observing the variation of the plume radius at different heights and the centroid height reached before the plume loses its buoyancy.
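The centroid kinematics described above can be sketched as a toy Euler integration. The entrainment coefficients, initial state, and the simplified top-hat forms of the momentum and buoyancy equations below are illustrative assumptions, not the paper's model; only the structure (entrainment velocity as a sum of fractions of relative velocity and ambient turbulence, and dz/dt equal to the rise velocity) follows the abstract.

```python
import numpy as np

def plume_centroid_rise(R0=1.0, w0=2.0, gp0=0.5, N=0.01,
                        alpha=0.1, beta=0.05, sigma_a=0.2,
                        dt=0.1, t_max=600.0):
    """Toy integral plume model with centroid kinematics dz/dt = w.

    State: radius R, vertical velocity w, reduced gravity gp, and
    centroid height z. N is the ambient Brunt-Vaisala frequency
    (stratification); alpha, beta weight the plume's relative
    velocity and the ambient turbulence sigma_a in the entrainment
    velocity. All coefficients are placeholders for illustration.
    """
    R, w, gp, z, t = R0, w0, gp0, 0.0, 0.0
    heights = [z]
    while w > 0.0 and t < t_max:
        ue = alpha * w + beta * sigma_a                 # entrainment velocity
        w += (gp - 2.0 * ue * w / R) * dt               # momentum: buoyancy vs. dilution
        gp += (-(N ** 2) * w - 2.0 * ue * gp / R) * dt  # buoyancy eroded by mixing
        R += ue * dt                                    # radius growth by entrainment
        z += w * dt                                     # centroid kinematics
        heights.append(z)
        t += dt
    return np.array(heights)

h = plume_centroid_rise()
# the centroid rises while the plume stays buoyant, then levels off
```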
Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.
2013-01-01
Remote sensing methods used to generate base maps for analyzing the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within each pixel. They range from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post
Two methods to estimate the position resolution for straw chambers with strip readout
International Nuclear Information System (INIS)
Golutvin, I.A.; Movchan, S.A.; Peshekhonov, V.D.; Preda, T.
1992-01-01
The centroid and charge-ratio methods are presented for estimating the position resolution of straw chambers with strip readout. For straw chambers of 10 mm in diameter, the highest position resolution was obtained for a strip pitch of 5 mm. With the centroid method and a perpendicular X-ray beam, the position resolution was ≅120 μm for a signal-to-noise ratio of 60-65. The charge-ratio method demonstrated ≅10% better position resolution at the edges of the strip. 6 refs.; 5 figs
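The centroid method itself reduces to a charge-weighted average over the strips. A minimal sketch follows; the strip charges are invented, and real analyses would additionally subtract pedestals and apply a signal-to-noise cut.

```python
def strip_centroid(charges, pitch=5.0):
    """Charge-weighted centroid across cathode strips.

    charges: induced charge per strip; pitch in mm (5 mm as in the
    abstract). Returns the estimated avalanche position in mm,
    measured from the center of strip 0. A sketch of the centroid
    method only, with no pedestal subtraction or noise handling.
    """
    total = sum(charges)
    if total <= 0:
        raise ValueError("no signal")
    return pitch * sum(i * q for i, q in enumerate(charges)) / total

# symmetric charge sharing around strip 1 puts the centroid on strip 1
pos = strip_centroid([20.0, 60.0, 20.0])
# pos == 5.0 mm
```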
Using Hedonic price model to estimate effects of flood on real ...
African Journals Online (AJOL)
Distances were measured in metres from the centroid of the building to the edge of the river and to roads using a Global Positioning System. The result of the estimation shows that properties located within the floodplain are lower in value by an average of N493,408, which represents a 6.8 percent reduction in sales price for an ...
Directory of Open Access Journals (Sweden)
Zbisław Tabor
2011-05-01
Full Text Available In the study an algorithm based on a lattice gas model is proposed as a tool for enhancing the quality of low-resolution images of binary structures. Analyzed low-resolution gray-level images are replaced with binary images in which the pixel size is decreased. The intensity in the pixels of these new images is determined by the corresponding gray-level intensities in the original low-resolution images. The white-phase pixels in the binary images are then treated as particles that interact with one another, interact with a properly defined external field, and are allowed to diffuse. The evolution is driven towards a state with maximal energy by the Metropolis algorithm. This state is used to estimate the imaged object. The performance of the proposed algorithm is compared with local and global thresholding methods.
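The acceptance rule of an energy-maximizing Metropolis evolution can be sketched generically. The toy state, energy function, and proposal below are placeholders, not the paper's lattice-gas Hamiltonian; only the accept/reject logic is the standard one.

```python
import math
import random

def metropolis_step(energy, state, propose, T=1.0):
    """One Metropolis move driving `state` toward higher `energy`.

    Since the abstract's lattice-gas evolution maximizes an energy,
    a move is accepted outright when it raises the energy, and with
    probability exp(dE/T) when it lowers it. `propose` returns a
    candidate state, e.g. one white-phase particle diffusing to a
    neighboring pixel; everything here is schematic.
    """
    cand = propose(state)
    dE = energy(cand) - energy(state)
    if dE >= 0 or random.random() < math.exp(dE / T):
        return cand
    return state

# toy: maximize the number of white-phase pixels
state = [0, 0, 1, 0]
better = metropolis_step(sum, state, lambda s: [1] + s[1:])
# better == [1, 0, 1, 0]: uphill moves are always accepted
```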
Anatomy guided automated SPECT renal seed point estimation
Dwivedi, Shekhar; Kumar, Sailendra
2010-04-01
Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting ROIs from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, and the challenge involved is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating seed point locations for both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is based on the premise that the anatomical location of the bladder relative to the kidneys does not differ much. A model is generated based on manual segmentation of the bladder and both kidneys on 10 patient datasets (including sum and max images). Centroids are estimated for the manually segmented bladder and kidneys. The relatively easier bladder segmentation is followed by feeding the bladder centroid coordinates into the model to generate seed points for the kidneys. The percentage errors observed in the centroid coordinates of the organs, from ground truth to the values estimated by our approach, are acceptable. Percentage errors of approximately 1%, 6%, and 2% are observed in the X coordinates, and approximately 2%, 5%, and 8% in the Y coordinates, of the bladder, left kidney, and right kidney respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.
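The centroid-plus-offset step can be sketched as follows. The offsets standing in for the paper's regression model are invented placeholders, as is the toy bladder mask; only the idea (bladder centroid feeds a learned offset to produce kidney seeds) follows the abstract.

```python
import numpy as np

def mask_centroid(mask):
    """Centroid (row, col) of a binary segmentation mask."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

def kidney_seeds(bladder_centroid, offset_left=(-40.0, -25.0),
                 offset_right=(-40.0, 25.0)):
    """Seed points for both kidneys from the bladder centroid.

    The fixed offsets stand in for the regression model fitted on
    the 10 manually segmented patient datasets; the numbers here
    are illustrative, not values from the paper.
    """
    by, bx = bladder_centroid
    return ((by + offset_left[0], bx + offset_left[1]),
            (by + offset_right[0], bx + offset_right[1]))

mask = np.zeros((128, 128), dtype=bool)
mask[100:110, 60:70] = True                # toy bladder segmentation
left, right = kidney_seeds(mask_centroid(mask))
```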
Directory of Open Access Journals (Sweden)
N. Peleg
2013-06-01
Full Text Available Runoff and flash flood generation are very sensitive to rainfall's spatial and temporal variability. The increasing use of radar and satellite data in hydrological applications, due to the sparse distribution of rain gauges over most catchments worldwide, requires furthering our knowledge of the uncertainties of these data. In 2011, a new super-dense network of rain gauges containing 14 stations, each with two side-by-side gauges, was installed within a 4 km2 study area near Kibbutz Galed in northern Israel. This network was established for a detailed exploration of the uncertainties and errors regarding rainfall variability within a common pixel size of data obtained from remote sensing systems for timescales of 1 min to daily. In this paper, we present the analysis of the first year's record collected from this network and from the Shacham weather radar, located 63 km from the study area. The gauge–rainfall spatial correlation and uncertainty were examined along with the estimated radar error. The nugget parameter of the inter-gauge rainfall correlations was high (0.92) on the 1 min scale and increased as the timescale increased. The variance reduction factor (VRF), representing the uncertainty from averaging a number of rain stations per pixel, ranged from 1.6% for the 1 min timescale to 0.07% for the daily scale. It was also found that at least three rain stations are needed to adequately represent the rainfall (VRF < 5%) on a typical radar pixel scale. The difference between radar and rain gauge rainfall was mainly attributed to radar estimation errors, while the gauge sampling error contributed up to 20% to the total difference. The ratio of radar rainfall to gauge-areal-averaged rainfall, expressed by the error distribution scatter parameter, decreased from 5.27 dB for the 3 min timescale to 3.21 dB for the daily scale. The analysis of the radar errors and uncertainties suggests that a temporal scale of at least 10 min should be used for
Ambrosi, R M; Hill, J; Cheruvu, C; Abbey, A F; Short, A D T
2002-01-01
The optical components of the Swift Gamma Ray Burst Explorer X-ray Telescope (XRT), consisting of the JET-X spare flight mirror and a charge coupled device of the type used in the EPIC program, were used in a re-calibration study carried out at the Panter facility, which is part of the Max Planck Institute for Extraterrestrial Physics. The objectives of this study were to check the focal length and the off-axis performance of the mirrors and to show that the half energy width (HEW) of the on-axis point spread function (PSF) was of the order of 16 arcsec at 1.5 keV (Nucl. Instr. and Meth. A 488 (2002) 543; SPIE 4140 (2000) 64) and that a centroiding accuracy better than 1 arcsec could be achieved within the 4 arcmin sampling area designated by the Burst Alert Telescope (Nucl. Instr. and Meth. A 488 (2002) 543). The centroiding accuracy of the Swift XRT's optical components was tested as a function of distance from the focus and off-axis position of the PSF (Nucl. Instr. and Meth. A 488 (2002) 543). The presence ...
Rapid Moment Magnitude Estimation Using Strong Motion Derived Static Displacements
Muzli, Muzli; Asch, Guenter; Saul, Joachim; Murjaya, Jaya
2015-01-01
The static surface deformation can be recovered from strong motion records. Compared to satellite-based measurements such as GPS or InSAR, the advantage of strong motion records is that they have the potential to provide real-time coseismic static displacements. The use of these valuable data was optimized for moment magnitude estimation. A centroid grid search method was introduced to calculate the moment magnitude. The method was applied to data sets of the 2011...
Energy Technology Data Exchange (ETDEWEB)
Régis, J.-M., E-mail: regis@ikp.uni-koeln.de [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Mach, H. [Departamento de Física Atómica y Nuclear, Universidad Complutense, 28040 Madrid (Spain); Simpson, G.S. [Laboratoire de Physique Subatomique et de Cosmologie Grenoble, 53, rue des Martyrs, 38026 Grenoble Cedex (France); Jolie, J.; Pascovici, G.; Saed-Samii, N.; Warr, N. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Bruce, A. [School of Computing, Engineering and Mathematics, University of Brighton, Lewes Road, Brighton BN2 4GJ (United Kingdom); Degenkolb, J. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Fraile, L.M. [Departamento de Física Atómica y Nuclear, Universidad Complutense, 28040 Madrid (Spain); Fransen, C. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Ghita, D.G. [Horia Hulubei National Institute for Physics and Nuclear Engineering, 77125 Bucharest (Romania); and others
2013-10-21
A novel method for direct electronic “fast-timing” lifetime measurements of nuclear excited states via γ–γ coincidences is presented, using an array equipped with N ∈ ℕ equally shaped, very fast, high-resolution LaBr₃(Ce) scintillator detectors. Analogous to the mirror-symmetric centroid difference method, the generalized centroid difference method provides two independent “start” and “stop” time spectra, obtained by a superposition of the N(N−1) γ–γ time-difference spectra of the N-detector fast-timing system. The two fast-timing array time spectra correspond to a forward and reverse gating of a specific γ–γ cascade. Provided that the energy responses and electronic time pick-offs of the detectors are almost equal, a mean prompt response difference between start and stop events is calibrated and used as a single correction for lifetime determination. The combined fast-timing array's mean γ–γ time-walk characteristics can be determined for 40 keV
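The centroid arithmetic at the core of the method can be sketched as below. The delta-function spectra and the zero prompt response difference (PRD) are toy inputs; a real analysis uses the calibrated mean PRD of the array described in the abstract.

```python
import numpy as np

def centroid(t, counts):
    """First moment (centroid) of a time-difference spectrum."""
    counts = np.asarray(counts, dtype=float)
    return float(np.sum(np.asarray(t, dtype=float) * counts) / np.sum(counts))

def lifetime_from_centroid_difference(t, start_spec, stop_spec, prd):
    """Lifetime from the centroid difference of start/stop spectra.

    DeltaC = C(stop) - C(start); with the calibrated mean prompt
    response difference PRD, the level lifetime follows from
    tau = (DeltaC - PRD) / 2, as in the centroid difference
    formalism the abstract builds on.
    """
    delta_c = centroid(t, stop_spec) - centroid(t, start_spec)
    return (delta_c - prd) / 2.0

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # ns
start = [0, 10, 0, 0, 0]                    # toy "start"-gated spectrum
stop = [0, 0, 0, 10, 0]                     # toy "stop"-gated spectrum
tau = lifetime_from_centroid_difference(t, start, stop, prd=0.0)
# tau == 1.0 ns for these toy spectra
```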
Dahm, Torsten; Heimann, Sebastian; Funke, Sigward; Wendt, Siegfried; Rappsilber, Ivo; Bindi, Dino; Plenefisch, Thomas; Cotton, Fabrice
2018-05-01
On April 29, 2017 at 0:56 UTC (2:56 local time), an M W = 2.8 earthquake struck the metropolitan area between Leipzig and Halle, Germany, near the small town of Markranstädt. The earthquake was felt within 50 km from the epicenter and reached a local intensity of I 0 = IV. Already in 2015 and only 15 km northwest of the epicenter, a M W = 3.2 earthquake struck the area with a similar large felt radius and I 0 = IV. More than 1.1 million people live in the region, and the unusual occurrence of the two earthquakes led to public attention, because the tectonic activity is unclear and induced earthquakes have occurred in neighboring regions. Historical earthquakes south of Leipzig had estimated magnitudes up to M W ≈ 5 and coincide with NW-SE striking crustal basement faults. We use different seismological methods to analyze the two recent earthquakes and discuss them in the context of the known tectonic structures and historical seismicity. Novel stochastic full waveform simulation and inversion approaches are adapted for the application to weak, local earthquakes, to analyze mechanisms and ground motions and their relation to observed intensities. We find NW-SE striking normal faulting mechanisms for both earthquakes and centroid depths of 26 and 29 km. The earthquakes are located where faults with large vertical offsets of several hundred meters and Hercynian strike have developed since the Mesozoic. We use a stochastic full waveform simulation to explain the local peak ground velocities and calibrate the method to simulate intensities. Since the area is densely populated and has sensitive infrastructure, we simulate scenarios assuming that a 12-km long fault segment between the two recent earthquakes is ruptured and study the impact of rupture parameters on ground motions and expected damage.
DEFF Research Database (Denmark)
Kaspersen, Per Skougaard; Fensholt, Rasmus; Drews, Martin
This paper addresses the accuracy and applicability of medium resolution (MR) remote sensing estimates of impervious surfaces (IS) for urban land cover change analysis. Landsat-based vegetation indices (VI) are found to provide fairly accurate measurements of sub-pixel imperviousness for urban...... areas at different geographical locations within Europe, and to be applicable for cities with diverse morphologies and dissimilar climatic and vegetative conditions. Detailed data on urban land cover changes can be used to examine the diverse environmental impacts of past and present urbanisation...
Directory of Open Access Journals (Sweden)
Dóris Faria de OLIVEIRA
2017-10-01
Full Text Available Abstract This study aimed to combine the nutritional advantages of whey and soybean by developing a chocolate-type beverage with water-soluble soybean extract dissolved in whey. Different concentrations of thickeners (carrageenan, pectin and starch – maximum level of 500 mg.100 mL-1) were tested by a simplex-centroid design. Several physicochemical, rheological, and sensory properties of the beverages were measured, and a multi-response optimization was conducted aiming to obtain a whey and soybean beverage with increased overall sensory impression and maximum purchase intention. Beverages presented mean protein levels higher than 3.1 g.100 mL-1, a low content of lipids (<2 g.100 mL-1) and total soluble solids ≥20 g.100 mL-1. Response surface methodology was applied, and the proposed models for overall impression and purchase intention presented R2=0.891 and R2=0.966, respectively. The desirability index (d-value=0.92) showed that the best formulation should contain 46% carrageenan and 54% pectin. The formulation manufactured with this combination of thickeners was tested; the overall impression was 7.11±1.09 (on a 9-point hedonic scale) and the purchase intention was 4.0±1.3 (on a 5-point hedonic scale), showing that the proposed models were predictive.
Saoudi, Salma; Chammem, Nadia; Sifaoui, Ines; Jiménez, Ignacio A; Lorenzo-Morales, Jacob; Piñero, José E; Bouassida-Beji, Maha; Hamdi, Moktar; L Bazzocchi, Isabel
2017-08-01
Oxidation taking place during the use of oil leads to the deterioration of both nutritional and sensorial qualities. Natural antioxidants from herbs and plants are rich in phenolic compounds and could therefore be more efficient than synthetic ones in preventing lipid oxidation reactions. This study was aimed at the valorization of Tunisian aromatic plants and their active compounds as new sources of natural antioxidant preventing oil oxidation. Carnosol, rosmarinic acid and thymol were isolated from Rosmarinus officinalis and Thymus capitatus by column chromatography and were analyzed by nuclear magnetic resonance. Their antioxidant activities were measured by DPPH, ABTS and FRAP assays. These active compounds were added to soybean oil in different proportions using a simplex-centroid mixture design. Antioxidant activity and oxidative stability of oils were determined before and after 20 days of accelerated oxidation at 60 °C. Results showed that bioactive compounds are effective in maintaining oxidative stability of soybean oil. However, the binary interaction of rosmarinic acid and thymol caused a reduction in antioxidant activity and oxidative stability of soybean oil. Optimum conditions for maximum antioxidant activity and oxidative stability were found to be an equal ternary mixture of carnosol, rosmarinic acid and thymol. © 2016 Society of Chemical Industry.
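The simplex-centroid mixture design used in this study has a simple combinatorial structure: every non-empty subset of components blended in equal proportions. A sketch for three components, with names taken from this abstract:

```python
from itertools import combinations

def simplex_centroid(components):
    """All blends of a simplex-centroid mixture design.

    For q components this yields 2**q - 1 runs: pure components,
    1:1 binary blends, and so on up to the overall centroid where
    all q components appear in equal proportions.
    """
    q = len(components)
    runs = []
    for r in range(1, q + 1):
        for subset in combinations(range(q), r):
            blend = [0.0] * q
            for i in subset:
                blend[i] = 1.0 / r
            runs.append(blend)
    return runs

runs = simplex_centroid(["carnosol", "rosmarinic acid", "thymol"])
# 7 runs: 3 pure, 3 binary 1:1, 1 ternary 1:1:1
```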
Xin, X.; Li, F.; Peng, Z.; Qinhuo, L.
2017-12-01
Land surface heterogeneities significantly affect the reliability and accuracy of remotely sensed evapotranspiration (ET), and the problem worsens for lower resolution data. At the same time, temporal extrapolation of the instantaneous latent heat flux (LE) at satellite overpass time to daily ET is crucial for applications of such remote sensing products. The purpose of this paper is to propose a simple but efficient model for estimating daytime evapotranspiration considering the heterogeneity of mixed pixels. To do so, an equation to calculate the evapotranspiration fraction (EF) of mixed pixels was derived based on two key assumptions. Assumption 1: the available energy (AE) of each sub-pixel is approximately equal to that of any other sub-pixel in the same mixed pixel, within an acceptable margin of bias, and equal to the AE of the mixed pixel; this is only a simplification of the equation, and its uncertainties and the resulting errors in estimated ET are very small. Assumption 2: the EF of each sub-pixel equals the EF of the nearest pure pixel(s) of the same land cover type. This equation is intended to correct the spatial scale error of the mixed pixel's EF and can be used to calculate daily ET with daily AE data. The model was applied to an artificial oasis in the midstream of the Heihe River. HJ-1B satellite data were used to estimate the lumped fluxes at the scale of 300 m, after resampling the 30-m resolution datasets to 300 m resolution, which was used to carry out the key step of the model. The results before and after correction were compared with each other and validated using site data from eddy-correlation systems. Results indicated that the new model improves the accuracy of daily ET estimation relative to the lumped method. Validations at 12 eddy-correlation sites for 9 days of HJ-1B overpasses showed that the R² increased to 0.82 from 0.62; the RMSE decreased to 1.60 MJ/m² from 2.47 MJ/m²; the MBE decreased from 1.92 MJ/m² to 1
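Under the two assumptions above, the mixed-pixel EF reduces to an area-weighted mean of pure-pixel EFs, and daily ET follows from EF times daily available energy. The land-cover fractions, EF values, and daily AE in this sketch are invented for illustration.

```python
def mixed_pixel_ef(fractions, pure_efs):
    """EF of a mixed pixel under the paper's two assumptions.

    fractions: area fraction of each land-cover type in the pixel.
    pure_efs: EF of the nearest pure pixel of each type (assumption 2).
    With equal sub-pixel available energy (assumption 1), the mixed
    pixel's EF is the area-weighted mean of the pure-pixel EFs.
    """
    assert abs(sum(fractions) - 1.0) < 1e-6
    return sum(f * ef for f, ef in zip(fractions, pure_efs))

def daily_et(ef, daily_ae):
    """Daily ET (MJ/m^2) as EF times the daily available energy."""
    return ef * daily_ae

ef = mixed_pixel_ef([0.6, 0.4], [0.8, 0.3])   # e.g. cropland + bare soil
et = daily_et(ef, 12.0)
# ef ≈ 0.6, so et ≈ 7.2 MJ/m^2
```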
A new technique for fire risk estimation in the wildland urban interface
Dasgupta, S.; Qu, J. J.; Hao, X.
A novel technique based on the physical variable of pre-ignition energy is proposed for assessing fire risk in the Grassland-Urban-Interface. The physical basis lends the index meaning, a site- and season-independent applicability, and possibilities for computing spread rates and ignition probabilities, features contemporary fire risk indices usually lack. The method requires estimates of grass moisture content and temperature. A constrained radiative-transfer inversion scheme on MODIS NIR-SWIR reflectances, which reduces solution ambiguity, is used for grass moisture retrieval, while MODIS land surface temperature and emissivity products are used for retrieving grass temperature. Subpixel urban contamination of the MODIS reflective and thermal signals over a Grassland-Urban-Interface pixel is corrected using periodic estimates of urban influence from high spatial resolution ASTER
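The pre-ignition energy idea can be sketched numerically: the energy needed to heat the dry fuel to its ignition temperature plus the energy to heat and vaporize the fuel moisture. The specific heats, latent heat, and ignition temperature below are generic textbook-scale values, not those used in the paper.

```python
def preignition_energy(fuel_mass, moisture_frac, temp_c,
                       t_ignition=320.0, c_dry=1.8, c_w=4.19, l_v=2257.0):
    """Energy (kJ) to bring grass fuel to ignition.

    Heats the dry matter to the ignition temperature and heats plus
    vaporizes the fuel moisture. fuel_mass in kg, moisture_frac as a
    fraction of wet mass, temp_c in deg C. All constants (kJ/kg/K,
    kJ/kg) and the ignition temperature are illustrative assumptions.
    """
    m_w = fuel_mass * moisture_frac
    m_dry = fuel_mass - m_w
    q_dry = m_dry * c_dry * (t_ignition - temp_c)
    q_wet = m_w * (c_w * (100.0 - temp_c) + l_v)
    return q_dry + q_wet

# wetter or cooler grass needs more energy to ignite, hence lower fire risk
```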
Beelen, R.M.J.; Voogt, M.; Duyzer, J.; Zandveld, P.; Hoek, G.
2010-01-01
The performance of a Land Use Regression (LUR) model and a dispersion model (URBIS - URBis Information System) was compared in a Dutch urban area. For the Rijnmond area, i.e. Rotterdam and surroundings, nitrogen dioxide (NO2) concentrations for 2001 were estimated for nearly 70 000 centroids of a
NSGIC Local Govt | GIS Inventory — Cellular Phone Towers dataset current as of 2003. Cell towers developed for Appraiser's Department in 2003. Location was based upon parcel centroids, and corrected...
2009-06-01
Pressure (R) Figure 2.9: Aerodynamic drag acting at the centroid of each surface element. This approach avoids time-consuming repetitive evaluation of... 5. Update the state estimate with the latest measurement y_k (37, p. 210): x̂(t_k) = x̂⁻(t_k) + K_k (y_k − h_k) (3.72). In some cases it is necessary to... in XELIAS can be a rather challenging and time-consuming task, depending on the complexity of the target being analyzed. 4. According to the XELIAS
Attenuation (1/Q) estimation in reflection seismic records
International Nuclear Information System (INIS)
Raji, Wasiu; Rietbrock, Andreas
2013-01-01
Despite its numerous potential applications, the lack of a reliable method for determining attenuation (1/Q) in seismic data is an issue when utilizing attenuation for hydrocarbon exploration. In this paper, a new method for measuring attenuation in reflection seismic data is presented. The inversion process involves two key stages: computation of the centroid frequency for the individual signal using a variable window length and fast Fourier transform; and estimation of the difference in the centroid frequency and travel time for paired incident and transmitted signals. The new method introduces a shape factor and a constant which allows several spectral shapes to be used to represent a real seismic signal without altering the mathematical model. Application of the new method to synthetic data shows that it can provide reliable estimates of Q using any of the spectral shapes commonly assumed for real seismic signals. Tested against two published methods of Q measurement, the new method shows less sensitivity to interference from noise and change of frequency bandwidth. The method is also applied to a 3D data set from the Gullfaks field, North Sea, Norway. The trace length is divided into four intervals: AB, BC, CD, and DE. Results show that interval AB has the lowest 1/Q value, and that interval BC has the highest 1/Q value. The values of 1/Q measured in the CDP stack using the new method are consistent with those measured using the classical spectral ratio method. (paper)
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
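A minimal instance of this parameter-estimation theme, assuming a first-order reaction so that the problem reduces to a linear least-squares fit on log-transformed data; the concentration data are synthetic, and nonlinear models would instead couple an optimiser with an ODE solver as the chapter describes.

```python
import numpy as np

def estimate_rate_constant(t, conc):
    """First-order rate constant by linear least squares.

    For A -> B with c(t) = c0 * exp(-k t), ln c is linear in t, so
    k is the negative slope of the ln(c)-vs-t fit. This is the
    simplest linear case of estimating kinetic constants from
    reaction data; it is illustrative, not from the chapter.
    """
    t = np.asarray(t, dtype=float)
    y = np.log(np.asarray(conc, dtype=float))
    slope, _ = np.polyfit(t, y, 1)
    return -slope

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
conc = 2.0 * np.exp(-0.35 * t)          # synthetic noise-free data, k = 0.35
k = estimate_rate_constant(t, conc)
# k ≈ 0.35
```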
Depth estimation of complex geometry scenes from light fields
Si, Lipeng; Wang, Qing
2018-01-01
The surface camera (SCam) of a light field gathers the angular sample rays passing through a 3D point. The consistency of SCams is evaluated to estimate the depth map of the scene, but the consistency is affected by several limitations such as occlusions or non-Lambertian surfaces. To overcome these limitations, the SCam is partitioned into two segments, one of which can satisfy the consistency constraint. The segmentation pattern of the SCam is highly related to the texture of the spatial patch, so we enforce a mask matching that describes the shape correlation between the segments of the SCam and the spatial patch. To further address the ambiguity in textureless regions, a global method with pixel-wise plane labels is presented. Plane label inference at each pixel can recover not only the depth value but also the local geometry structure, which is suitable for light fields with sub-pixel disparities and continuous view variation. Our method is evaluated on public light field datasets and outperforms the state-of-the-art.
Estimating Velocities of Glaciers Using Sentinel-1 SAR Imagery
Gens, R.; Arnoult, K., Jr.; Friedl, P.; Vijay, S.; Braun, M.; Meyer, F. J.; Gracheva, V.; Hogenson, K.
2017-12-01
In an international collaborative effort, software has been developed to estimate the velocities of glaciers by using Sentinel-1 Synthetic Aperture Radar (SAR) imagery. The technique, initially designed by the University of Erlangen-Nuremberg (FAU), has been previously used to quantify spatial and temporal variabilities in the velocities of surging glaciers in the Pakistan Karakoram. The software estimates surface velocities by first co-registering image pairs to sub-pixel precision and then by estimating local offsets based on cross-correlation. The Alaska Satellite Facility (ASF) at the University of Alaska Fairbanks (UAF) has modified the software to make it more robust and also capable of migration into the Amazon Cloud. Additionally, ASF has implemented a prototype that offers the glacier tracking processing flow as a subscription service as part of its Hybrid Pluggable Processing Pipeline (HyP3). Since the software is co-located with ASF's cloud-based Sentinel-1 archive, processing of large data volumes is now more efficient and cost effective. Velocity maps are estimated for Single Look Complex (SLC) SAR image pairs and a digital elevation model (DEM) of the local topography. A time series of these velocity maps then allows the long-term monitoring of these glaciers. Due to the all-weather capabilities and the dense coverage of Sentinel-1 data, the results are complementary to optically generated ones. Together with the products from the Global Land Ice Velocity Extraction project (GoLIVE) derived from Landsat 8 data, glacier speeds can be monitored more comprehensively. Examples from Sentinel-1 SAR-derived results are presented along with optical results for the same glaciers.
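The offset-tracking step described above is based on cross-correlation of image patches. A sketch at integer-pixel precision follows; real glacier tracking works on co-registered SLC patches and oversamples or fits the correlation peak to reach the sub-pixel precision the abstract mentions.

```python
import numpy as np

def patch_offset(ref, sec):
    """Integer-pixel offset of `sec` relative to `ref`.

    Uses FFT-based circular cross-correlation and returns the shift
    (dy, dx) that best aligns the two patches. The sub-pixel
    refinement used in practice (oversampling the correlation
    surface) is omitted from this sketch.
    """
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(sec))).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = ref.shape
    dy, dx = (-iy) % ny, (-ix) % nx
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
sec = np.roll(ref, shift=(3, 5), axis=(0, 1))   # known displacement
# patch_offset(ref, sec) recovers (3, 5)
```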
Investigation on method of estimating the excitation spectrum of vibration source
International Nuclear Information System (INIS)
Zhang Kun; Sun Lei; Lin Song
2010-01-01
In practical engineering, it is hard to obtain the excitation spectrum of the auxiliary machines of a nuclear reactor through direct measurement. To solve this problem, a general method of estimating the excitation spectrum of a vibration source through indirect measurement is proposed. First, the dynamic transfer matrix between the virtual excitation points and the measurement points is obtained through experiment. This matrix, combined with the response spectrum at the measurement points under practical working conditions, can be used to calculate the excitation spectrum acting on the virtual excitation points. A simplified method is then proposed, based on the assumption that the vibrating machine can be regarded as a rigid body. This method treats the centroid as the excitation point, and the dynamic transfer matrix is derived using the substructure mobility synthesis method. Thus, the excitation spectrum can be obtained from the inverse of the transfer matrix combined with the response spectrum at the measurement points. Based on the above method, a computational example is carried out to estimate the excitation spectrum acting on the centroid of an electrical pump. By comparing the input excitation with the estimated excitation, the reliability of the method is verified. (authors)
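The inversion step amounts to solving X = H F for the excitation F at each frequency line. A sketch with an invented transfer matrix follows; with more measurement points than excitation points, a least-squares (pseudo-inverse) solve is the standard indirect-measurement formulation.

```python
import numpy as np

def excitation_spectrum(H, X):
    """Excitation at the virtual points from measured responses.

    H: (n_meas, n_exc) transfer matrix at one frequency line.
    X: (n_meas,) response spectrum at the measurement points.
    Solves X = H F for F in the least-squares sense; under the
    rigid-body simplification, F would be the excitation acting at
    the centroid. The matrix values here are illustrative.
    """
    F, *_ = np.linalg.lstsq(H, X, rcond=None)
    return F

# toy check: recover a known excitation through a known transfer matrix
H = np.array([[2.0, 0.5],
              [0.3, 1.5],
              [1.0, 1.0]])
F_true = np.array([1.2, -0.7])
X = H @ F_true
F_est = excitation_spectrum(H, X)
# F_est ≈ [1.2, -0.7]
```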
Fournier, Céline; Bridal, S Lori; Coron, Alain; Laugier, Pascal
2003-04-01
In vivo skin attenuation estimators must be applicable to backscattered radio frequency signals obtained in a pulse-echo configuration. This work compares three such estimators: short-time Fourier multinarrowband (MNB), short-time Fourier centroid shift (FC), and autoregressive centroid shift (ARC). All provide estimations of the attenuation slope (beta, dB x cm(-1) x MHz(-1)); MNB also provides an independent estimation of the mean attenuation level (IA, dB x cm(-1)). Practical approaches are proposed for data windowing, spectral variance characterization, and bandwidth selection. Then, based on simulated data, FC and ARC were selected as the best (compromise between bias and variance) attenuation slope estimators. The FC, ARC, and MNB were applied to in vivo human skin data acquired at 20 MHz to estimate betaFC, betaARC, and IA(MNB), respectively (without diffraction correction, between 11 and 27 MHz). Lateral heterogeneity had less effect and day-to-day reproducibility was smaller for IA than for beta. The IA and betaARC were dependent on pressure applied to skin during acquisition and IA on room and skin-surface temperatures. Negative values of IA imply that IA and beta may be influenced not only by skin's attenuation but also by structural heterogeneity across dermal depth. Even so, IA was correlated to subject age and IA, betaFC, and betaARC were dependent on subject gender. Thus, in vivo attenuation measurements reveal interesting variations with subject age and gender and thus appeared promising to detect skin structure modifications.
Circumcenter, Circumcircle and Centroid of a Triangle
Coghetto Roland
2016-01-01
We introduce, using the Mizar system [1], some basic concepts of Euclidean geometry: the half length and the midpoint of a segment, the perpendicular bisector of a segment, the medians (the cevians that join the vertices of a triangle to the midpoints of the opposite sides) of a triangle.
Circumcenter, Circumcircle and Centroid of a Triangle
Directory of Open Access Journals (Sweden)
Coghetto Roland
2016-03-01
Full Text Available We introduce, using the Mizar system [1], some basic concepts of Euclidean geometry: the half length and the midpoint of a segment, the perpendicular bisector of a segment, and the medians (the cevians that join the vertices of a triangle to the midpoints of the opposite sides) of a triangle.
Center for Research on Infrared Detectors (CENTROID)
2006-09-30
of growth, x is also monitored in situ by SE, and T is measured by a thermocouple, a pyrometer and indirectly by the heating power and is calibrated... Polar optical, acoustic, and inter-valley phonon scattering are included, as well as scattering from the quantum dots. The simulation includes
Epidemiology from Tweets: Estimating Misuse of Prescription Opioids in the USA from Social Media.
Chary, Michael; Genes, Nicholas; Giraud-Carrier, Christophe; Hanson, Carl; Nelson, Lewis S; Manini, Alex F
2017-12-01
The misuse of prescription opioids (MUPO) is a leading public health concern. Social media are playing an expanded role in public health research, but there are few methods for estimating established epidemiological metrics from social media. The purpose of this study was to demonstrate that the geographic variation of social media posts mentioning prescription opioid misuse strongly correlates with government estimates of MUPO in the last month. We wrote software to acquire publicly available tweets from Twitter from 2012 to 2014 that contained at least one keyword related to prescription opioid use (n = 3,611,528). A medical toxicologist and emergency physician curated the list of keywords. We used the semantic distance (SemD) to automatically quantify the similarity of meaning between tweets and identify tweets that mentioned MUPO. We defined the SemD between two words as the shortest distance between the two corresponding word-centroids. Each word-centroid represented all recognized meanings of a word. We validated this automatic identification with manual curation. We used Twitter metadata to estimate the location of each tweet. We compared our estimated geographic distribution with the 2013-2015 National Surveys on Drug Use and Health (NSDUH). Tweets that mentioned MUPO formed a distinct cluster far away from semantically unrelated tweets. The state-by-state correlation between Twitter and NSDUH was highly significant across all NSDUH survey years. The correlation was strongest between Twitter and NSDUH data from those aged 18-25 (r = 0.94). Mentions of MUPO on Twitter correlate strongly with state-by-state NSDUH estimates of MUPO. We have also demonstrated that natural language processing can be used to analyze social media to provide insights for syndromic toxicosurveillance.
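The word-centroid construction can be illustrated with a toy sketch. Note that the paper's actual SemD is defined over a semantic representation it specifies, so the Euclidean distance and the 2-D "sense" vectors below are purely illustrative stand-ins, not the authors' metric:

```python
import numpy as np

def word_centroid(sense_vectors):
    """Collapse all recognized senses of a word into one centroid vector."""
    return np.mean(np.asarray(sense_vectors, dtype=float), axis=0)

def semantic_distance(senses_a, senses_b):
    """Illustrative SemD stand-in: Euclidean distance between word-centroids."""
    return float(np.linalg.norm(word_centroid(senses_a) - word_centroid(senses_b)))

# Toy 2-D "sense" vectors (hypothetical, not a trained embedding)
opioid_senses = [[1.0, 0.0], [0.8, 0.2]]
pill_senses   = [[0.9, 0.1]]
print(semantic_distance(opioid_senses, pill_senses))  # 0.0 (centroids coincide)
```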
DEFF Research Database (Denmark)
Arndt, Channing; Simler, Kenneth R.
2010-01-01
A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.
Petrou, Zisis I.; Xian, Yang; Tian, YingLi
2018-04-01
Estimation of sea ice motion at fine scales is important for a number of regional and local level applications, including modeling of sea ice distribution, ocean-atmosphere and climate dynamics, as well as safe navigation and sea operations. In this study, we propose an optical flow and super-resolution approach to accurately estimate motion from remote sensing images at a higher spatial resolution than the original data. First, an external example learning-based super-resolution method is applied on the original images to generate higher resolution versions. Then, an optical flow approach is applied on the higher resolution images, identifying sparse correspondences and interpolating them to extract a dense motion vector field with continuous values and subpixel accuracies. Our proposed approach is successfully evaluated on passive microwave, optical, and Synthetic Aperture Radar data, proving appropriate for multi-sensor applications and different spatial resolutions. The approach estimates motion with accuracy similar to or higher than that obtained from the original data, while increasing the spatial resolution by up to eight times. In addition, the adopted optical flow component outperforms a state-of-the-art pattern matching method. Overall, the proposed approach results in accurate motion vectors with unprecedented spatial resolutions of up to 1.5 km for passive microwave data covering the entire Arctic and 20 m for radar data, and proves promising for numerous scientific and operational applications.
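The "subpixel accuracies" mentioned above are commonly obtained by interpolating around the integer peak of a matching or correlation score. A generic parabolic three-point refinement, shown here as a standard technique rather than the estimator actually used in this study, looks like:

```python
import numpy as np

def parabolic_subpixel_peak(corr):
    """Refine the integer argmax of a 1-D correlation curve to subpixel
    precision by fitting a parabola through the peak and its two neighbours."""
    i = int(np.argmax(corr))
    if i == 0 or i == len(corr) - 1:
        return float(i)                      # no neighbours to fit
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

# Correlation samples of a parabola peaking at x = 3.25
x = np.arange(7, dtype=float)
corr = -(x - 3.25) ** 2
print(parabolic_subpixel_peak(corr))  # 3.25
```

For an exactly parabolic peak the formula recovers the true maximum; for real correlation surfaces it gives a good approximation near the peak.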
Evaluation of the Airborne CASI/TASI Ts-VI Space Method for Estimating Near-Surface Soil Moisture
Directory of Open Access Journals (Sweden)
Lei Fan
2015-03-01
Full Text Available High spatial resolution airborne data with little sub-pixel heterogeneity were used to evaluate the suitability of the temperature/vegetation (Ts/VI) space method developed from satellite observations, and were explored to improve the performance of the Ts/VI space method for estimating soil moisture (SM). An evaluation of the airborne ΔTs/Fr space (incorporating air temperature) revealed that normalized difference vegetation index (NDVI) saturation and disturbed pixels were hindering the appropriate construction of the space. The non-disturbed ΔTs/Fr space, which was modified by adjusting the NDVI saturation and eliminating the disturbed pixels, was clearly correlated with the measured SM. The SM estimations of the non-disturbed ΔTs/Fr space using the evaporative fraction (EF) and temperature vegetation dryness index (TVDI) were validated by using the SM measured at a depth of 4 cm, which was determined according to the land surface types. The validation results show that the EF approach provides superior estimates, with a lower RMSE (0.023 m³·m⁻³) and a higher correlation coefficient (0.68) than the TVDI. The application of the airborne ΔTs/Fr space shows that the two modifications proposed in this study strengthen the link between the ΔTs/Fr space and SM, which is important for improving the precision of the remote sensing Ts/VI space method for monitoring SM.
Dozier, J.; Tolle, K.; Bair, N.
2014-12-01
We have a problem that may be a specific example of a generic one. The task is to produce spatiotemporally distributed estimates of snow water equivalent (SWE) in snow-dominated mountain environments, including those that lack on-the-ground measurements. Several independent methods exist, but all are problematic. The remotely sensed date of disappearance of snow from each pixel can be combined with a calculation of melt to reconstruct the accumulated SWE for each day back to the last significant snowfall. Comparison with streamflow measurements in mountain ranges where such data are available shows this method to be accurate, but the big disadvantage is that SWE can only be calculated retroactively after snow disappears, and even then only for areas with little accumulation during the melt season. Passive microwave sensors offer real-time global SWE estimates but suffer from several issues, notably signal loss in wet snow or in forests, saturation in deep snow, subpixel variability in the mountains owing to the large (~25 km) pixel size, and SWE overestimation in the presence of large grains such as depth hoar and surface hoar. Throughout the winter and spring, snow-covered area can be measured at sub-km spatial resolution with optical sensors, with accuracy and timeliness improved by interpolating and smoothing across multiple days. So the question is, how can we establish the relationship between Reconstruction—available only after the snow goes away—and passive microwave and optical data to accurately estimate SWE during the snow season, when the information can help forecast spring runoff? Linear regression provides one answer, but can modern machine learning techniques (used to persuade people to click on web advertisements) adapt to improve forecasts of floods and droughts in areas where more than one billion people depend on snowmelt for their water resources?
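The Reconstruction method described above amounts to a reverse cumulative sum of modeled melt from the snow disappearance date backward. A deliberately simplified sketch, ignoring new snowfall during the melt season and the energy-balance model that supplies the daily melt values:

```python
import numpy as np

def reconstruct_swe(daily_melt_mm):
    """SWE reconstruction: once snow disappears, the SWE on each earlier day
    equals the melt still to come, i.e. a reverse cumulative sum of melt."""
    melt = np.asarray(daily_melt_mm, dtype=float)
    return np.cumsum(melt[::-1])[::-1]

# Hypothetical melt series (mm/day); snow is gone after the last day
melt = [0.0, 5.0, 10.0, 15.0, 20.0]
print(reconstruct_swe(melt))  # [50. 50. 45. 35. 20.]
```

This makes the stated limitation concrete: nothing can be computed until the pixel's disappearance date is observed.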
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibration amplitudes in the video became increasingly large.
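The Kalman filtering step can be illustrated with a minimal scalar filter smoothing one global-motion parameter per frame. The paper's actual model also tracks scene scaling, so this random-walk sketch is only a hedged approximation of the idea:

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=0.25):
    """Minimal scalar Kalman filter (random-walk state model) smoothing a
    sequence of noisy global-motion parameters, e.g. frame-to-frame dx."""
    x, p = z[0], 1.0
    out = [x]
    for zk in z[1:]:
        p += q                       # predict: state uncertainty grows
        k = p / (p + r)              # Kalman gain
        x += k * (zk - x)            # update with the new measurement
        p *= (1.0 - k)
        out.append(x)
    return np.asarray(out)

rng = np.random.default_rng(0)
true_dx = np.full(100, 2.0)                       # steady pan of 2 px/frame
noisy = true_dx + rng.normal(0.0, 0.5, 100)       # jitter to remove
smooth = kalman_smooth(noisy)
print(np.std(smooth[20:]) < np.std(noisy[20:]))   # True: jitter is reduced
```

Subtracting the smoothed (intended) motion from the measured motion yields the unwanted jitter that frame compensation removes.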
Variance estimation for generalized Cavalieri estimators
Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen
2011-01-01
The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.
Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua
2017-05-01
With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining them with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into the particle filter. By clustering all particles and using each cluster centroid as the representative in subsequent computations, the computational load is reduced. Empirical results indicate that the method can reasonably capture the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
Directory of Open Access Journals (Sweden)
Cui Jia
2017-05-01
Full Text Available With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining them with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into the particle filter. By clustering all particles and using each cluster centroid as the representative in subsequent computations, the computational load is reduced. Empirical results indicate that the method can reasonably capture the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
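The k-means reduction of a particle set can be sketched as follows. The interface and the choice of weighted centroids as cluster representatives are our reading of the abstract, not the authors' exact algorithm:

```python
import numpy as np

def kmeans_reduce(particles, weights, k=5, iters=20, seed=0):
    """Cluster weighted particles with k-means and return the weighted
    centroid and total weight of each cluster, as a reduced particle set."""
    rng = np.random.default_rng(seed)
    centers = particles[rng.choice(len(particles), k, replace=False)]
    for _ in range(iters):
        # assign each particle to its nearest centre
        d = np.linalg.norm(particles[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            m = labels == j
            if m.any():
                centers[j] = np.average(particles[m], axis=0, weights=weights[m])
    cluster_w = np.array([weights[labels == j].sum() for j in range(k)])
    return centers, cluster_w

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.1, (200, 2)), rng.normal(3, 0.1, (200, 2))])
w = np.full(400, 1.0 / 400)
centers, cw = kmeans_reduce(pts, w, k=2)
print(np.round(cw.sum(), 6))  # 1.0 (total particle weight is conserved)
```

Operating on a handful of weighted centroids instead of thousands of particles is what reduces the computational load.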
Solar resources estimation combining digital terrain models and satellite images techniques
Energy Technology Data Exchange (ETDEWEB)
Bosch, J.L.; Batlles, F.J. [Universidad de Almeria, Departamento de Fisica Aplicada, Ctra. Sacramento s/n, 04120-Almeria (Spain); Zarzalejo, L.F. [CIEMAT, Departamento de Energia, Madrid (Spain); Lopez, G. [EPS-Universidad de Huelva, Departamento de Ingenieria Electrica y Termica, Huelva (Spain)
2010-12-15
One of the most important steps to make use of any renewable energy is to perform an accurate estimation of the resource that has to be exploited. In the designing process of both active and passive solar energy systems, radiation data is required for the site, with proper spatial resolution. Generally, a radiometric stations network is used in this evaluation, but when they are too dispersed or not available for the study area, satellite images can be utilized as indirect solar radiation measurements. Although satellite images cover wide areas with a good acquisition frequency, they usually have a poor spatial resolution limited by the size of the image pixel, and irradiation must be interpolated to evaluate solar irradiation at a sub-pixel scale. When pixels are located in flat and homogeneous areas, correlation of solar irradiation is relatively high, and classic interpolation can provide a good estimation. However, in complex topography zones, data interpolation is not adequate and the use of Digital Terrain Model (DTM) information can be helpful. In this work, daily solar irradiation is estimated for a wide mountainous area using a combination of Meteosat satellite images and a DTM, with the advantage of avoiding the necessity of ground measurements. This methodology utilizes a modified Heliosat-2 model, and applies to all sky conditions; it also introduces a horizon calculation of the DTM points and accounts for the effect of snow cover. Model performance has been evaluated against data measured in 12 radiometric stations, with results in terms of the Root Mean Square Error (RMSE) of 10%, and a Mean Bias Error (MBE) of +2%, both expressed as a percentage of the mean value measured. (author)
Estimating the accuracy of geographical imputation
Directory of Open Access Journals (Sweden)
Boscoe Francis P
2008-01-01
Full Text Available Abstract Background To reduce the number of non-geocoded cases, researchers and organizations sometimes include cases geocoded to postal code centroids along with cases geocoded with the greater precision of a full street address. Some analysts then use the postal code to assign information to the cases from finer-level geographies such as a census tract. Assignment is commonly completed using either a postal centroid or a geographical imputation method, which assigns a location by using both the demographic characteristics of the case and the population characteristics of the postal delivery area. To date no systematic evaluation of geographical imputation methods ("geo-imputation") has been completed. The objective of this study was to determine the accuracy of census tract assignment using geo-imputation. Methods Using a large dataset of breast, prostate and colorectal cancer cases reported to the New Jersey Cancer Registry, we determined how often cases were assigned to the correct census tract using alternate strategies of demographic-based geo-imputation, and using assignments obtained from postal code centroids. Assignment accuracy was measured by comparing the tract assigned with the tract originally identified from the full street address. Results Assigning cases to census tracts using the race/ethnicity population distribution within a postal code resulted in more correctly assigned cases than when using postal code centroids. The addition of age characteristics increased the match rates even further. Match rates were highly dependent on both the geographic distribution of race/ethnicity groups and population density. Conclusion Geo-imputation appears to offer some advantages and no serious drawbacks as compared with the alternative of assigning cases to census tracts based on postal code centroids. For a specific analysis, researchers will still need to consider the potential impact of geocoding quality on their results and evaluate
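One plausible form of demographic-based geo-imputation is to draw a tract with probability proportional to the case's demographic group population in each tract of the postal code. The tract names and counts below are hypothetical, and the paper evaluates several variants, so this is only an illustrative sketch:

```python
import numpy as np

# Hypothetical counts of a case's race/ethnicity group in each census
# tract overlapping the case's postal code (e.g. from census tables)
tract_group_pop = {"tract_A": 120, "tract_B": 30, "tract_C": 850}

def impute_tract(pop_by_tract, rng):
    """Draw a tract with probability proportional to the case's
    demographic group population in each tract of the postal code."""
    tracts = list(pop_by_tract)
    pops = [pop_by_tract[t] for t in tracts]
    total = sum(pops)
    probs = [p / total for p in pops]
    return rng.choice(tracts, p=probs)

rng = np.random.default_rng(42)
draws = [impute_tract(tract_group_pop, rng) for _ in range(2000)]
share_c = draws.count("tract_C") / len(draws)
print(abs(share_c - 0.85) < 0.05)  # True: draws match the population share
```

Accuracy is then measured by how often the imputed tract matches the tract geocoded from the full street address.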
Directory of Open Access Journals (Sweden)
Cheng Liu
2011-07-01
Full Text Available Forest fires have a major impact on ecosystems and greatly affect the amount of greenhouse gases and aerosols in the atmosphere. This paper presents an overview of forest fire detection, emission estimation, and fire risk prediction in China using satellite imagery, climate data, and various simulation models over the past three decades. Since the 1980s, remotely-sensed data acquired by many satellites, such as NOAA/AVHRR, FY-series, MODIS, CBERS, and ENVISAT, have been widely utilized for detecting forest fire hot spots and burned areas in China. Some developed algorithms have been utilized for detecting the forest fire hot spots at a sub-pixel level. With respect to modeling the forest burning emission, a remote sensing data-driven Net Primary Productivity (NPP) estimation model was developed for estimating forest biomass and fuel. In order to improve the forest fire risk modeling in China, real-time meteorological data, such as surface temperature, relative humidity, wind speed and direction, have been used as the model input for improving prediction of forest fire occurrence and its behavior. Shortwave infrared (SWIR) and near infrared (NIR) channels of satellite sensors have been employed for detecting live fuel moisture content (FMC), and the Normalized Difference Water Index (NDWI) was used for evaluating the forest vegetation condition and its moisture status.
Zhang, Jia-Hua; Yao, Feng-Mei; Liu, Cheng; Yang, Li-Min; Boken, Vijendra K
2011-08-01
Forest fires have a major impact on ecosystems and greatly affect the amount of greenhouse gases and aerosols in the atmosphere. This paper presents an overview of forest fire detection, emission estimation, and fire risk prediction in China using satellite imagery, climate data, and various simulation models over the past three decades. Since the 1980s, remotely-sensed data acquired by many satellites, such as NOAA/AVHRR, FY-series, MODIS, CBERS, and ENVISAT, have been widely utilized for detecting forest fire hot spots and burned areas in China. Some developed algorithms have been utilized for detecting the forest fire hot spots at a sub-pixel level. With respect to modeling the forest burning emission, a remote sensing data-driven Net Primary Productivity (NPP) estimation model was developed for estimating forest biomass and fuel. In order to improve the forest fire risk modeling in China, real-time meteorological data, such as surface temperature, relative humidity, wind speed and direction, have been used as the model input for improving prediction of forest fire occurrence and its behavior. Shortwave infrared (SWIR) and near infrared (NIR) channels of satellite sensors have been employed for detecting live fuel moisture content (FMC), and the Normalized Difference Water Index (NDWI) was used for evaluating the forest vegetation condition and its moisture status.
Zhang, Jia-Hua; Yao, Feng-Mei; Liu, Cheng; Yang, Li-Min; Boken, Vijendra K.
2011-01-01
Forest fires have a major impact on ecosystems and greatly affect the amount of greenhouse gases and aerosols in the atmosphere. This paper presents an overview of forest fire detection, emission estimation, and fire risk prediction in China using satellite imagery, climate data, and various simulation models over the past three decades. Since the 1980s, remotely-sensed data acquired by many satellites, such as NOAA/AVHRR, FY-series, MODIS, CBERS, and ENVISAT, have been widely utilized for detecting forest fire hot spots and burned areas in China. Some developed algorithms have been utilized for detecting the forest fire hot spots at a sub-pixel level. With respect to modeling the forest burning emission, a remote sensing data-driven Net Primary Productivity (NPP) estimation model was developed for estimating forest biomass and fuel. In order to improve the forest fire risk modeling in China, real-time meteorological data, such as surface temperature, relative humidity, wind speed and direction, have been used as the model input for improving prediction of forest fire occurrence and its behavior. Shortwave infrared (SWIR) and near infrared (NIR) channels of satellite sensors have been employed for detecting live fuel moisture content (FMC), and the Normalized Difference Water Index (NDWI) was used for evaluating the forest vegetation condition and its moisture status. PMID:21909297
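The NDWI used for moisture status has the standard normalized-difference form over NIR and SWIR reflectances; the reflectance values below are illustrative:

```python
import numpy as np

def ndwi(nir, swir):
    """Normalized Difference Water Index from NIR and SWIR reflectance,
    used as a proxy for vegetation moisture status."""
    nir, swir = np.asarray(nir, dtype=float), np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir)

# Moist vegetation reflects strongly in NIR and weakly in SWIR -> high NDWI
print(round(float(ndwi(0.4, 0.1)), 2))  # 0.6
```

Because liquid water absorbs in the SWIR band, drier canopies (lower FMC) reflect relatively more SWIR and push NDWI downward.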
Directory of Open Access Journals (Sweden)
Qingjun Zhang
2014-01-01
Full Text Available This paper proposes a novel image formation algorithm for bistatic synthetic aperture radar (BiSAR) in the configuration of a noncooperative transmitter and a stationary receiver, in which traditional imaging algorithms fail because the necessary imaging parameters cannot be estimated from the limited information supplied by the noncooperative data provider. In the new algorithm, the essential parameters for imaging, such as the squint angle, Doppler centroid, and Doppler chirp-rate, are estimated by full exploration of the recorded direct signal (the signal that travels directly from the transmitter to the stationary receiver). The Doppler chirp-rate is retrieved by modeling the peak phase of the direct signal as a quadratic polynomial. The Doppler centroid frequency and the squint angle are derived from image contrast optimization. Then range focusing, range cell migration correction (RCMC), and azimuth focusing are implemented, with secondary range compression (SRC) applied to handle the range cell migration. Finally, the proposed algorithm is validated by imaging of a BiSAR experiment configured with the Chinese YAOGAN-10 SAR as the transmitter and the receiver platform located on a building at a height of 109 m in Jiangsu province. The experimental image with geometric correction shows good agreement with local Google images.
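The chirp-rate retrieval described above (a quadratic fit of the direct signal's peak phase) can be sketched as follows. The sign and scaling conventions, and the synthetic phase history, are assumptions on our part:

```python
import numpy as np

def doppler_chirp_rate(t, phase):
    """Fit the (unwrapped) peak phase of the direct signal with a quadratic
    polynomial phi(t) ~ a*t^2 + b*t + c; the Doppler chirp-rate in Hz/s is
    the second derivative of phase over 2*pi, i.e. 2a / (2*pi)."""
    a, b, c = np.polyfit(t, phase, 2)
    return 2.0 * a / (2.0 * np.pi)

t = np.linspace(-0.5, 0.5, 1001)               # slow time, s
ka_true = -1500.0                              # Hz/s (illustrative value)
phase = np.pi * ka_true * t ** 2               # quadratic phase history
print(round(doppler_chirp_rate(t, phase), 1))  # -1500.0
```

In practice the measured phase must be unwrapped first, and the fit is restricted to the interval around the peak where the quadratic model holds.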
Estimation of the Rotational Terms of the Dynamic Response Matrix
Directory of Open Access Journals (Sweden)
D. Montalvão
2004-01-01
Full Text Available The dynamic response of a structure can be described by both its translational and rotational receptances. The latter are frequently not considered because of the difficulties in applying a pure moment excitation or in measuring rotations. However, in general, this implies a reduction of up to 75% of the complete model. On the other hand, if a modification includes a rotational inertia, the rotational receptances of the unmodified system are needed. In one method, more commonly found in the literature, a so-called T-block is attached to the structure. Then, a force, applied to an arm of the T-block, generates a moment together with a force at the connection point. The T-block also allows for angular displacement measurements. Nevertheless, the results are often not quite satisfactory. In this work, an alternative method based upon coupling techniques is developed, in which rotational receptances are estimated without the need of applying a moment excitation. This is accomplished by introducing a rotational inertia modification when rotating the T-block. The force is then applied at its centroid. Several numerical and experimental examples are discussed so that the methodology can be clearly described. The advantages and limitations are identified within the practical application of the method.
A comparison of moment magnitude estimates for the European-Mediterranean and Italian regions
Gasperini, Paolo; Lolli, Barbara; Vannucci, Gianfranco; Boschi, Enzo
2012-09-01
With the goal of constructing a homogeneous data set of moment magnitudes (Mw) to be used for seismic hazard assessment, we compared Mw estimates from moment tensor catalogues available online. We found an apparent scaling disagreement between Mw estimates from the National Earthquake Information Center (NEIC) of the US Geological Survey and from the Global Centroid Moment Tensor (GCMT) project. We suspect that this is the effect of an underestimation of Mw > 7.0 (M0 > 4.0 × 1019 Nm) computed by NEIC owing to the limitations of their computational approach. We also found an apparent scaling disagreement between GCMT and two regional moment tensor catalogues provided by the 'Eidgenössische Technische Hochschule Zürich' (ETHZ) and by the European-Mediterranean Regional Centroid Moment Tensor (RCMT) project of the Italian 'Istituto Nazionale di Geofisica e Vulcanologia' (INGV). This is probably the effect of the overestimation of Mw < 5.5 (M0 < 2.2 × 1017 Nm), up to year 2002, and of Mw < 5.0 (M0 < 4.0 × 1016 Nm), since year 2003, owing to the physical limitations of the standard CMT inversion method used by GCMT for the earthquakes of relatively low magnitude. If the discrepant data are excluded from the comparisons, the scaling disagreements become insignificant in all cases. We observed instead small absolute offsets (≤0.1 units) for NEIC and ETHZ catalogues with respect to GCMT whereas there is an almost perfect correspondence between RCMT and GCMT. Finally, we found a clear underestimation of about 0.2 units of Mw magnitudes computed at the INGV using the time-domain moment tensor (TDMT) method with respect to those reported by GCMT and RCMT. According to our results, we suggest appropriate offset corrections to be applied to Mw estimates from NEIC, ETHZ and TDMT catalogues before merging their data with GCMT and RCMT catalogues. We suggest as well to discard the probably discrepant data from NEIC and GCMT if other Mw estimates from different sources are
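The M0 thresholds and Mw values quoted above are mutually consistent under the standard moment-magnitude definition Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m, which the catalogues compared here all use:

```python
import math

def moment_magnitude(m0_nm):
    """Standard moment magnitude from scalar seismic moment M0 in N*m:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# The threshold pairs quoted in the abstract check out:
print(round(moment_magnitude(4.0e19), 1))  # 7.0
print(round(moment_magnitude(2.2e17), 1))  # 5.5
print(round(moment_magnitude(4.0e16), 1))  # 5.0
```

This is why a constant offset in Mw between two catalogues corresponds to a constant multiplicative factor in the reported moments.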
Estimating Selected Streamflow Statistics Representative of 1930-2002 in West Virginia
Wiley, Jeffrey B.
2008-01-01
Regional equations and procedures were developed for estimating 1-, 3-, 7-, 14-, and 30-day 2-year; 1-, 3-, 7-, 14-, and 30-day 5-year; and 1-, 3-, 7-, 14-, and 30-day 10-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the 1-day, 3-year and 4-day, 3-year biologically based low-flow frequency values; the U.S. Environmental Protection Agency harmonic-mean flows; and the 10-, 25-, 50-, 75-, and 90-percent flow-duration values. Regional equations were developed using ordinary least-squares regression using statistics from 117 U.S. Geological Survey continuous streamflow-gaging stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia - North, South-Central, and Eastern Panhandle - were determined. Drainage area, precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. Estimating procedures are presented for determining statistics at a gaging station, a partial-record station, and an ungaged location. Examples of some estimating procedures are presented.
Estimating tree bole volume using artificial neural network models for four species in Turkey.
Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V
2010-01-01
Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used and produced the Back propagation (BPANN) and the Cascade Correlation (CCANN) Artificial Neural Network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented and the advantages and limitations of each one of them are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species since they gave unbiased results and were superior to almost all methods in terms of error (%) expressed as the mean of the percentage errors. 2009 Elsevier Ltd. All rights reserved.
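Smalian's formula, used here to compute the reference volumes, averages the cross-sectional areas at the two ends of each measured section: V = L (A1 + A2) / 2, summed over sections. A short sketch with hypothetical diameters:

```python
import math

def smalian_volume(diameters_cm, section_len_m=1.0):
    """Aggregate bole volume (m^3) from diameters measured at section ends,
    using Smalian's formula V = L * (A1 + A2) / 2 for each section."""
    # diameter in cm -> radius in m -> cross-sectional area in m^2
    areas = [math.pi * (d / 200.0) ** 2 for d in diameters_cm]
    return sum(section_len_m * (a1 + a2) / 2.0
               for a1, a2 in zip(areas, areas[1:]))

# A 3 m bole tapering 30 -> 20 -> 10 -> 0 cm over 1 m sections (hypothetical)
print(round(smalian_volume([30.0, 20.0, 10.0, 0.0]), 4))  # 0.0746
```

Short sections (here about 1 m, as in the paper) keep the paraboloid-frustum assumption behind the formula reasonable.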
Estimation of selected seasonal streamflow statistics representative of 1930-2002 in West Virginia
Wiley, Jeffrey B.; Atkins, John T.
2010-01-01
Regional equations and procedures were developed for estimating seasonal 1-day 10-year, 7-day 10-year, and 30-day 5-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the seasonal U.S. Environmental Protection Agency harmonic-mean flows and the 50-percent flow-duration values. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Regional equations were developed using ordinary least squares regression using statistics from 117 U.S. Geological Survey continuous streamgage stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia-North, South-Central, and Eastern Panhandle Regions-were determined. Drainage area, average annual precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. The average standard error of estimates for the equations ranged from 12.6 to 299 percent. Procedures developed to estimate the selected seasonal streamflow statistics in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia that have independent variables within the limits of the stations used to develop the regional equations: drainage area from 16.3 to 1,516 square miles in the North Region, from 2.78 to 1,619 square miles in the South-Central Region, and from 8.83 to 3,041 square miles in the Eastern Panhandle Region; average annual precipitation from 42.3 to 61.4 inches in the South-Central Region and from 39.8 to 52.9 inches in the Eastern Panhandle Region; and longitude of the basin centroid from 79.618 to 82.023 decimal degrees in the North Region. All estimates of seasonal streamflow statistics are representative of the period from the 1930 to the 2002 climatic year.
Di Bella, C.; Faivre, R.; Ruget, F.; Seguin, B.
In France, pastures constitute an important land cover type, principally sustaining husbandry production. The absence of low-cost methods applicable to large regions has led to the use of simulation models, as in the ISOP system. Remote sensing data may be considered as a potential tool to improve a correct diagnosis in a real-time framework. Thirteen forage regions (FR) of France, differing in their soil, climatic and productive characteristics, were selected for this purpose. SPOT4-VEGETATION images were used to provide, through subpixel estimation models, the spectral signature corresponding to pure pasture conditions. This information was related to growth variables estimated by the STICS-Prairie model (inside the ISOP system). Beyond the good general agreement between the two types of data, we found that the best relations were observed between the middle-infrared-based NDVI variant (SWVI) and the leaf area index. The results confirm the capacity of the satellite data to provide complementary productive variables and help to identify the spatial and temporal differences between satellite and model information, mainly during the harvesting periods. This could contribute to improving the evaluations of the model on a regional scale.
Peterson, Harold; Koshak, William J.
2009-01-01
An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.
Estimating urban vegetation fraction across 25 cities in pan-Pacific using Landsat time series data
Lu, Yuhao; Coops, Nicholas C.; Hermosilla, Txomin
2017-04-01
Urbanization globally is consistently reshaping the natural landscape to accommodate the growing human population. Urban vegetation plays a key role in moderating environmental impacts caused by urbanization and is critically important for local economic, social and cultural development. The differing patterns of human population growth, varying urban structures and development stages, results in highly varied spatial and temporal vegetation patterns particularly in the pan-Pacific region which has some of the fastest urbanization rates globally. Yet spatially-explicit temporal information on the amount and change of urban vegetation is rarely documented particularly in less developed nations. Remote sensing offers an exceptional data source and a unique perspective to map urban vegetation and change due to its consistency and ubiquitous nature. In this research, we assess the vegetation fractions of 25 cities across 12 pan-Pacific countries using annual gap-free Landsat surface reflectance products acquired from 1984 to 2012, using sub-pixel, spectral unmixing approaches. Vegetation change trends were then analyzed using Mann-Kendall statistics and Theil-Sen slope estimators. Unmixing results successfully mapped urban vegetation for pixels located in urban parks, forested mountainous regions, as well as agricultural land (correlation coefficient ranging from 0.66 to 0.77). The greatest vegetation loss from 1984 to 2012 was found in Shanghai, Tianjin, and Dalian in China. In contrast, cities including Vancouver (Canada) and Seattle (USA) showed stable vegetation trends through time. Using temporal trend analysis, our results suggest that it is possible to reduce noise and outliers caused by phenological changes particularly in cropland using dense new Landsat time series approaches. We conclude that simple yet effective approaches of unmixing Landsat time series data for assessing spatial and temporal changes of urban vegetation at regional scales can provide
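The trend analysis named above (Mann-Kendall statistics with Theil-Sen slope estimators) can be sketched as follows; the yearly vegetation-fraction series is invented for illustration and is not the study's data:

```python
import itertools

def mann_kendall_S(y):
    """Mann-Kendall S statistic: sum of signs of all pairwise
    later-minus-earlier differences; S < 0 suggests a monotonic decrease."""
    return sum((yj > yi) - (yj < yi)
               for yi, yj in itertools.combinations(y, 2))

def theil_sen_slope(t, y):
    """Theil-Sen estimator: the median of all pairwise slopes,
    robust to outliers such as phenology-driven spikes."""
    slopes = sorted((yj - yi) / (tj - ti)
                    for (ti, yi), (tj, yj) in itertools.combinations(zip(t, y), 2)
                    if tj != ti)
    n, mid = len(slopes), len(slopes) // 2
    return slopes[mid] if n % 2 else 0.5 * (slopes[mid - 1] + slopes[mid])

# Hypothetical yearly urban vegetation fractions with a declining trend
years = list(range(1984, 1994))
frac = [0.52, 0.50, 0.51, 0.48, 0.47, 0.47, 0.45, 0.44, 0.44, 0.42]
S = mann_kendall_S(frac)        # negative: vegetation loss
b = theil_sen_slope(years, frac)  # median fraction change per year
```

In practice the significance of S is assessed against its variance under the null of no trend; that step is omitted here.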
Lu, Shan; Zhang, Hanmo
2016-01-01
To meet the requirements of autonomous orbit determination, this paper proposes a fast curve-fitting method based on earth ultraviolet features to obtain an accurate earth vector direction and thereby achieve high-precision autonomous navigation. First, exploiting the stable character of earth ultraviolet radiance and using atmospheric radiative transfer modeling software, the paper simulates the earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast, improved edge-extraction method combining the Sobel operator and local binary patterns (LBP) is applied, which both eliminates noise efficiently and extracts the earth's ultraviolet limb features accurately. The earth's centroid location on the simulated images is then estimated by least-squares fitting using part of the limb edges. Using the estimated earth vector direction and earth distance, an Extended Kalman Filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate that the proposed method achieves sub-pixel earth centroid location estimation and greatly enhances the precision of autonomous celestial navigation.
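The least-squares limb fit can be sketched with an algebraic (Kasa) circle fit, a common choice for this task; the limb coordinates below are synthetic, and the paper's exact fitting variant may differ:

```python
import numpy as np

def fit_circle_kasa(x, y):
    """Algebraic (Kasa) least-squares circle fit: rewriting
    (x-a)^2 + (y-b)^2 = r^2 as 2ax + 2by + c = x^2 + y^2, with
    c = r^2 - a^2 - b^2, makes the fit a linear least-squares problem."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# Synthetic limb-edge pixel coordinates: a partial arc of the earth's disc
theta = np.linspace(0.2, 1.3, 60)
cx, cy, R = 320.4, 240.7, 150.0   # invented centroid and radius, in pixels
a, b, r = fit_circle_kasa(cx + R * np.cos(theta), cy + R * np.sin(theta))
# (a, b) recovers the disc centre to sub-pixel accuracy on noiseless data
```

On real images the limb points carry noise, so the recovered centre is sub-pixel accurate rather than exact.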
Variable Kernel Density Estimation
Terrell, George R.; Scott, David W.
1992-01-01
We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
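The second approach, varying the window by the point of the sample observation, can be sketched as a sample-point ("variable kernel") estimator with k-nearest-neighbour bandwidths; the data, kernel, and bandwidth rule here are illustrative assumptions, not the authors' exact estimators:

```python
import numpy as np

def sample_point_kde(data, grid, k=5):
    """Sample-point (variable kernel) density estimate: each observation
    carries its own Gaussian bandwidth, set to its k-th nearest-neighbour
    distance, so windows widen where the sample is sparse."""
    data = np.asarray(data, float)
    dist = np.abs(data[:, None] - data[None, :])
    h = np.sort(dist, axis=1)[:, k]     # index 0 is the zero self-distance
    g = np.asarray(grid, float)
    z = (g[:, None] - data[None, :]) / h[None, :]
    return np.mean(np.exp(-0.5 * z**2) / (h * np.sqrt(2.0 * np.pi)), axis=1)

# Illustrative sample: a dense cluster and a sparser one
data = np.concatenate([np.linspace(-1.0, 1.0, 20), np.linspace(4.0, 6.0, 10)])
grid = np.linspace(-10.0, 15.0, 2501)
dens = sample_point_kde(data, grid)   # nonnegative, integrates to ~1
```

Because each kernel is a normalized density, the estimate integrates to one by construction, whatever the bandwidth rule.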
Chatterji, Gano
2011-01-01
Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. An error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.
Optimal fault signal estimation
Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.
2002-01-01
We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By
Tracking Subpixel Targets with Critically Sampled Optical Sensors
2012-09-01
LIST OF ACRONYMS AND ABBREVIATIONS: PSF, point spread function; SNR, signal-to-noise ratio; SLAM, simultaneous localization and mapping; EO...; LIDAR, light detection and ranging; FOV, field of view; RMS, root mean squared; PF, particle filter; TBD, track before detect; MCMC, Markov chain Monte Carlo.
DEFF Research Database (Denmark)
Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik
1995-01-01
This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system is implemented using switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...
International Nuclear Information System (INIS)
Faggiano, Elena; Cattaneo, Giovanni M; Ciavarro, Cristina; Dell'Oca, Italo; Persano, Diego; Calandrino, Riccardo; Rizzo, Giovanna
2011-01-01
The study of lung parenchyma anatomical modification is useful to estimate dose discrepancies during the radiation treatment of Non-Small-Cell Lung Cancer (NSCLC) patients. We propose and validate a method, based on free-form deformation and mutual information, to elastically register planning kVCT with daily MVCT images, to estimate lung parenchyma modification during Tomotherapy. We analyzed 15 registrations between the planning kVCT and 3 MVCT images for each of the 5 NSCLC patients. Image registration accuracy was evaluated by visual inspection and, quantitatively, by Correlation Coefficients (CC) and Target Registration Errors (TRE). Finally, a lung volume correspondence analysis was performed to specifically evaluate registration accuracy in lungs. Results showed that elastic registration was always satisfactory, both qualitatively and quantitatively: TRE after elastic registration (average value of 3.6 mm) remained comparable and often smaller than voxel resolution. Lung volume variations were well estimated by elastic registration (average volume and centroid errors of 1.78% and 0.87 mm, respectively). Our results demonstrate that this method is able to estimate lung deformations in thorax MVCT, with an accuracy within 3.6 mm comparable or smaller than the voxel dimension of the kVCT and MVCT images. It could be used to estimate lung parenchyma dose variations in thoracic Tomotherapy
Energy Technology Data Exchange (ETDEWEB)
Gilani, Syed Irtiza Ali
2008-09-15
correlation matrix is employed. This method is beneficial because it offers sub-pixel displacement estimation without interpolation, increased robustness to noise and limited computational complexity. Owing to all these advantages, the proposed technique is very suitable for the real-time implementation to solve the motion correction problem. (orig.)
Directory of Open Access Journals (Sweden)
Peek-Asa Corinne
2011-01-01
Background The need to estimate the distance from an individual to a service provider is common in public health research. However, estimated distances are often imprecise and, we suspect, biased due to a lack of specific residential location data. In many cases, to protect subject confidentiality, data sets contain only a ZIP Code or a county. Results This paper describes an algorithm, known as "the probabilistic sampling method" (PSM), which was used to create a distribution of estimated distances to a health facility for a person whose region of residence was known, but for which demographic details and centroids were known for smaller areas within the region. From this distribution, the median distance is the most likely distance to the facility. The algorithm, using Monte Carlo sampling methods, drew a probabilistic sample of all the smaller areas (Census blocks) within each participant's reported region (ZIP Code), weighting these areas by the number of residents in the same age group as the participant. To test the PSM, we used data from a large cross-sectional study that screened women at a clinic for intimate partner violence (IPV). We had data on each woman's age and ZIP Code, but no precise residential address. We used the PSM to select a sample of census blocks, then calculated network distances from each census block's centroid to the closest IPV facility, resulting in a distribution of distances from these locations to the geocoded locations of known IPV services. We selected the median distance as the most likely distance traveled and computed confidence intervals that describe the shortest and longest distance within which any given percent of the distance estimates lie. We compared our results to those obtained using two other geocoding approaches. We show that one method overestimated the most likely distance and the other underestimated it. Neither of the alternative methods produced confidence intervals for the distance
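A minimal sketch of the PSM's sampling step, assuming block-to-facility network distances have already been computed; the block ids, resident counts, and distances are invented for illustration:

```python
import random
import statistics

def psm_distance_distribution(blocks, age_counts, n_draws=10000, seed=1):
    """Probabilistic sampling method (PSM) sketch: draw Census blocks within a
    participant's ZIP Code, weighted by residents in the participant's age
    group, and summarize the resulting distribution of distances.
    `blocks` maps block id -> network distance to the nearest facility."""
    rng = random.Random(seed)
    ids = list(blocks)
    weights = [age_counts[b] for b in ids]
    draws = rng.choices(ids, weights=weights, k=n_draws)
    dists = sorted(blocks[b] for b in draws)
    median = statistics.median(dists)           # most likely distance
    lo = dists[int(0.025 * n_draws)]            # 95% interval endpoints
    hi = dists[int(0.975 * n_draws)]
    return median, (lo, hi)

blocks = {"b1": 1.2, "b2": 3.4, "b3": 8.9}      # km to nearest facility
age_counts = {"b1": 120, "b2": 40, "b3": 10}    # residents in the age group
med, (lo, hi) = psm_distance_distribution(blocks, age_counts)
```

Because block b1 holds most of the age-matched residents, the median of the sampled distances lands on its centroid distance, while the interval reflects the spread across all plausible blocks.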
Adjusting estimative prediction limits
Masao Ueki; Kaoru Fueda
2007-01-01
This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.
Estimation of measurement variances
International Nuclear Information System (INIS)
Anon.
1981-01-01
In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data as replicate measurement data; (3) perform a simple analysis of variances to characterize the measurement error structure when biases vary over time
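Item (2) above, estimating random error variances from replicate measurement data, can be sketched as a pooled within-item variance (the within-group mean square of a one-way analysis of variance); the replicate data are illustrative:

```python
def pooled_replicate_variance(replicates):
    """Random-error variance from replicate measurements: pool the
    within-item sums of squares over all items and divide by the total
    within-item degrees of freedom. `replicates` is a list of lists,
    one inner list of repeated measurements per measured item."""
    ss, dof = 0.0, 0
    for reps in replicates:
        n = len(reps)
        if n < 2:
            continue               # a single measurement carries no replicate information
        m = sum(reps) / n
        ss += sum((x - m) ** 2 for x in reps)
        dof += n - 1
    return ss / dof

# Illustrative replicate measurements for three items
data = [[10.1, 10.3, 10.2], [4.9, 5.1], [7.0, 7.4, 7.2, 7.2]]
var_random = pooled_replicate_variance(data)
```

Systematic (bias) variance estimation from standards data follows the same pooling idea but compares measured values against certified reference values instead of item means.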
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
DEFF Research Database (Denmark)
Andersen, C K; Andersen, K; Kragh-Sørensen, P
2000-01-01
on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs, e...
Heemstra, F.J.
1992-01-01
The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be
Heemstra, F.J.; Heemstra, F.J.
1993-01-01
The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be
Coherence in quantum estimation
Giorda, Paolo; Allegra, Michele
2018-01-01
The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second-order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space in two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior characteristic of a large class of quantum phase transitions.
Overconfidence in Interval Estimates
Soll, Jack B.; Klayman, Joshua
2004-01-01
Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…
Adaptive Spectral Doppler Estimation
DEFF Research Database (Denmark)
Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2009-01-01
In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set...
Optomechanical parameter estimation
International Nuclear Information System (INIS)
Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P
2013-01-01
We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)
DEFF Research Database (Denmark)
2015-01-01
A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.
International Nuclear Information System (INIS)
Schull, W.J.; Texas Univ., Houston, TX
1992-01-01
Estimation of the risk of cancer following exposure to ionizing radiation remains largely empirical, and models used to adduce risk incorporate few, if any, of the advances in molecular biology of the past decade or so. These facts compromise the estimation of risk where the epidemiological data are weakest, namely, at low doses and dose rates. Without a better understanding of the molecular and cellular events that ionizing radiation initiates or promotes, it seems unlikely that this situation will improve. Nor will the situation improve without further attention to the identification and quantitative estimation of the effects of those host and environmental factors that enhance or attenuate risk. (author)
Xiaopeng, Q I; Liang, Wei; Barker, Laurie; Lekiachvili, Akaki; Xingyou, Zhang
Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly or 30-day basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of models was assessed by comparing adjusted R², mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias as compared to cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R² range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both methods from ArcGIS and SAS are reliable for U.S. county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects.
Estimation and mapping of above ground biomass and carbon of ...
African Journals Online (AJOL)
In addition, field data from 35 sample plots comprising of the Diameter at Breast Height (DBH), co-ordinates of centroids and angles to the top and bottom of the individual trees was used for the analysis. The relationship between biomass and radar backscatter for selected sample plots was established using pairwise ...
DEFF Research Database (Denmark)
Bollerslev, Tim; Todorov, Victor
We propose a new and flexible non-parametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple-to-implement set of estimating equations associated with the compensator for the jump measure, or its "intensity", that only utilizes the weak assumption of regular variation in the jump tails, along with in-fill asymptotic arguments for uniquely identifying the "large" jumps from the data. The estimation allows for very general dynamic dependencies in the jump tails, and does not restrict the continuous part of the process and the temporal variation in the stochastic volatility. On implementing the new estimation procedure with actual high-frequency data for the S&P 500 aggregate market portfolio, we find strong evidence for richer and more complex dynamic dependencies in the jump tails than hitherto entertained in the literature.
Bridged Race Population Estimates
U.S. Department of Health & Human Services — Population estimates from "bridging" the 31 race categories used in Census 2000, as specified in the 1997 Office of Management and Budget (OMB) race and ethnicity...
Estimation of measurement variances
International Nuclear Information System (INIS)
Jaech, J.L.
1984-01-01
The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented
APLIKASI SPLINE ESTIMATOR TERBOBOT (Application of a Weighted Spline Estimator)
Directory of Open Access Journals (Sweden)
I Nyoman Budiantara
2001-01-01
We considered the nonparametric regression model: Zj = X(tj) + ej, j = 1, 2, ..., n, where X(tj) is the regression curve. The random errors ej are independently normally distributed with zero mean and variance s²/bj, bj > 0. The estimate of X is obtained by minimizing a weighted penalized least-squares criterion; the solution of this optimization is a weighted natural polynomial spline. Further, we give an application of the weighted spline estimator in nonparametric regression. Keywords: weighted spline, nonparametric regression, penalized least squares.
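A discrete analogue of the weighted penalized least-squares problem can be sketched with a Whittaker-Eilers smoother; this is a grid-based stand-in for the weighted natural spline, not the paper's exact solution:

```python
import numpy as np

def whittaker_weighted(z, weights, lam=10.0):
    """Weighted penalized least squares on an equally spaced grid
    (Whittaker-Eilers smoother): minimizes
    sum_j b_j (z_j - x_j)^2 + lam * sum_j (x_{j-1} - 2 x_j + x_{j+1})^2,
    mirroring the role of the weights b_j (variance s^2/b_j) in the model."""
    z = np.asarray(z, float)
    W = np.diag(np.asarray(weights, float))
    D = np.diff(np.eye(len(z)), n=2, axis=0)   # second-difference operator
    return np.linalg.solve(W + lam * D.T @ D, W @ z)

# A low weight (large variance s^2/b_j) lets the smoother discount an outlier
z = [1.0, 2.0, 9.0, 4.0, 5.0]
b = [1.0, 1.0, 0.01, 1.0, 1.0]
x = whittaker_weighted(z, b, lam=5.0)   # x[2] is pulled back toward the trend
```

Note that exactly linear data are reproduced unchanged for any penalty weight, since the second-difference penalty vanishes on linear sequences.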
Fractional cointegration rank estimation
DEFF Research Database (Denmark)
Lasak, Katarzyna; Velasco, Carlos
the parameters of the model under the null hypothesis of the cointegration rank r = 1, 2, ..., p-1. This step provides consistent estimates of the cointegration degree, the cointegration vectors, the speed of adjustment to the equilibrium parameters and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional...
Estimation of spectral kurtosis
Sutawanir
2017-03-01
Rolling bearings are the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of monitoring and fault diagnosis, because bearing vibration signals give rich information for early detection of bearing failures. Spectral kurtosis, SK, is a parameter in the frequency domain indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike faults, making SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearings from a healthy one, and provides information additional to that given by the power spectral density (psd). This paper aims to explore the estimation of spectral kurtosis using the short-time Fourier transform, known as the spectrogram. The estimation of SK is similar to the estimation of the psd; it falls under model-free estimation with a plug-in estimator. Some numerical studies using simulations are discussed to support the methodology. The spectral kurtosis of some stationary signals is analytically obtained and used in the simulation study. Kurtosis in the time domain has been a popular tool for detecting non-normality; spectral kurtosis is its extension to the frequency domain. The relationship between time-domain and frequency-domain analysis is established through the power spectrum-autocovariance Fourier transform. The Fourier transform is the main tool for estimation in the frequency domain, and the power spectral density is estimated through the periodogram. In this paper, the short-time Fourier transform estimate of the spectral kurtosis is reviewed, and a bearing fault (inner ring and outer ring) is simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to
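The spectrogram-based plug-in estimate of SK can be sketched as follows; the signal here is synthetic Gaussian noise with injected impulses, a crude stand-in for the simulated bearing fault, and the frame length and impulse spacing are illustrative:

```python
import numpy as np

def spectral_kurtosis(x, nperseg=64):
    """Spectral kurtosis from a spectrogram (short-time Fourier transform):
    SK(f) = <|X(t, f)|^4>_t / <|X(t, f)|^2>_t^2 - 2.
    SK is near 0 for stationary Gaussian noise and grows at frequencies
    dominated by impulsive (fault-like) content."""
    nseg = len(x) // nperseg
    frames = np.asarray(x[:nseg * nperseg], float).reshape(nseg, nperseg)
    X = np.fft.rfft(frames * np.hanning(nperseg), axis=1)
    p2 = np.mean(np.abs(X) ** 2, axis=0)
    p4 = np.mean(np.abs(X) ** 4, axis=0)
    return p4 / p2**2 - 2.0

# Gaussian noise with sparse large impulses, mimicking repeated fault impacts
rng = np.random.default_rng(0)
x = rng.standard_normal(64 * 200)
x[32::640] += 25.0          # one impulse every tenth frame, mid-frame
sk = spectral_kurtosis(x)   # well above 0 across the band
```

For a pure stationary Gaussian signal the same estimator hovers around zero, which is the baseline against which fault-dominated bands stand out.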
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Ranking as parameter estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav; Guy, Tatiana Valentine
2009-01-01
Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf
Maximal combustion temperature estimation
International Nuclear Information System (INIS)
Golodova, E; Shchepakina, E
2006-01-01
This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models
Centroid-Based Document Classification Algorithms: Analysis & Experimental Results
2000-03-06
[Table residue: per-cluster lists of stemmed centroid terms with weights, e.g. cluster 11: nafta 0.49, mexico 0.40, ...; cluster 19: speci 0.36, whale 0.25, ...; cluster 34: drug 0.64, legal 0.20, ...]
Determination of star bodies from p-centroid bodies
Indian Academy of Sciences (India)
2016-08-26
Real-Time Forecasting of Echo-Centroid Motion.
1979-01-01
It is apparent that after five observations are obtained, the forecast error drops considerably. The normal lifetime of an echo (25 to 30 min) is ... [Fig. 11 caption: Track of 5 April 1978 mesocyclone (M) and two TVS's (1) and (2). Times are CST. Pumpkin Center tornado is hatched and Marlow tornado is ...]
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
Czech Academy of Sciences Publication Activity Database
Vackář, J.; Burjánek, Jan; Gallovič, F.; Zahradník, J.; Clinton, J.
2017-01-01
Roč. 210, č. 2 (2017), s. 693-705 ISSN 0956-540X Institutional support: RVO:67985530 Keywords : inverse theory * waveform inversion * computational seismology * earthquake source observations * seismic noise Subject RIV: DC - Seismology, Volcanology, Earth Structure OBOR OECD: Volcanology Impact factor: 2.414, year: 2016
Single snapshot DOA estimation
Häcker, P.; Yang, B.
2010-10-01
In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
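As a minimal illustration of the grid-search idea behind deterministic maximum likelihood, here is a single-snapshot, single-source sketch for a uniform linear array (the paper's multi-target comparisons are more involved; all names and the simulated scenario below are my own assumptions):

```python
import numpy as np

def dml_doa(y, d=0.5, grid=np.linspace(-90.0, 90.0, 721)):
    """Single-source, single-snapshot deterministic ML DOA on a uniform
    linear array with element spacing d (in wavelengths): for one source
    the DML criterion reduces to a grid search maximizing the beamformer
    output |a(theta)^H y|^2."""
    m = len(y)
    n = np.arange(m)
    best_p, best_theta = -np.inf, None
    for theta in grid:
        a = np.exp(2j * np.pi * d * n * np.sin(np.radians(theta)))
        p = np.abs(np.vdot(a, y)) ** 2 / m     # a^H y, normalized
        if p > best_p:
            best_p, best_theta = p, theta
    return best_theta

# one noise-free snapshot from a hypothetical source at 20 deg, 8 sensors
m, theta0 = 8, 20.0
n = np.arange(m)
y = np.exp(2j * np.pi * 0.5 * n * np.sin(np.radians(theta0)))
print(dml_doa(y))  # ≈ 20.0
```

With multiple sources the DML search becomes a joint optimization over several angles, which is where the resolution and correlated-target issues discussed in the abstract arise.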
Thermodynamic estimation: Ionic materials
International Nuclear Information System (INIS)
Glasser, Leslie
2013-01-01
Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy
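The volume-based approach mentioned above can be illustrated with a small sketch. The constants below are the commonly quoted Glasser-Jenkins values recalled from memory, so treat them as assumptions to be verified against the literature before use:

```python
def lattice_energy_vbt(charges, v_m_nm3, alpha=117.3, beta=51.9):
    """Volume-based thermodynamics (VBT) sketch: lattice potential energy
    in kJ/mol from the formula-unit volume Vm (nm^3), using the
    Glasser-Jenkins form U_POT = 2*I*(alpha/Vm**(1/3) + beta), where
    I = 0.5 * sum(n_i * z_i**2) is the ionic strength factor.
    (alpha, beta quoted from memory -- verify before relying on them.)"""
    I = 0.5 * sum(z * z for z in charges)
    return 2.0 * I * (alpha / v_m_nm3 ** (1.0 / 3.0) + beta)

# NaCl: charges +1/-1, formula-unit volume ~0.0449 nm^3
print(round(lattice_energy_vbt([1, -1], 0.0449)))  # ~764 kJ/mol (exp. ~787)
```

The point of the exercise is the one the abstract makes: a single easily measured quantity (formula-unit volume) yields an estimate close enough to experiment to fill gaps and flag gross errors in tabulated values.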
Distribution load estimation - DLE
Energy Technology Data Exchange (ETDEWEB)
Seppaelae, A. [VTT Energy, Espoo (Finland)
1996-12-31
The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems
Generalized estimating equations
Hardin, James W
2002-01-01
Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields.Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th
Hassani, Majid; Macchiavello, Chiara; Maccone, Lorenzo
2017-11-01
Quantum metrology calculates the ultimate precision of all estimation strategies, measuring what is their root-mean-square error (RMSE) and their Fisher information. Here, instead, we ask how many bits of the parameter we can recover; namely, we derive an information-theoretic quantum metrology. In this setting, we redefine "Heisenberg bound" and "standard quantum limit" (the usual benchmarks in the quantum estimation theory) and show that the former can be attained only by sequential strategies or parallel strategies that employ entanglement among probes, whereas parallel-separable strategies are limited by the latter. We highlight the differences between this setting and the RMSE-based one.
Shin, S M; Kim, Y-I; Choi, Y-S; Yamaguchi, T; Maki, K; Cho, B-H; Park, S-B
2015-01-01
To evaluate axial cervical vertebral (ACV) shape quantitatively and to build a prediction model for skeletal maturation level using statistical shape analysis for Japanese individuals. The sample included 24 female and 19 male patients with hand-wrist radiographs and CBCT images. Through generalized Procrustes analysis and principal components (PCs) analysis, the meaningful PCs were extracted from each ACV shape and analysed for the estimation regression model. Each ACV shape had meaningful PCs, except for the second axial cervical vertebra. Based on these models, the smallest prediction intervals (PIs) were from the combination of the shape space PCs, age and gender. Overall, the PIs of the male group were smaller than those of the female group. There was no significant correlation between centroid size as a size factor and skeletal maturation level. Our findings suggest that the ACV maturation method, which was applied by statistical shape analysis, could confirm information about skeletal maturation in Japanese individuals as an available quantifier of skeletal maturation and could be as useful a quantitative method as the skeletal maturation index.
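The generalized Procrustes step of the shape analysis described above can be sketched in a minimal 2-D landmark version (the study works with 3-D CBCT-derived shapes; function names and the synthetic data are my own):

```python
import numpy as np

def gpa(shapes, n_iter=10):
    """Minimal generalized Procrustes analysis (GPA) sketch: remove
    translation, scale each shape to unit centroid size, then iteratively
    rotate every shape onto the running mean (Kabsch rotation)."""
    X = np.array(shapes, float)                 # (n_shapes, n_landmarks, 2)
    X -= X.mean(axis=1, keepdims=True)          # remove translation
    X /= np.linalg.norm(X, axis=(1, 2), keepdims=True)  # unit centroid size
    mean = X[0].copy()
    for _ in range(n_iter):
        for i in range(len(X)):
            u, _, vt = np.linalg.svd(X[i].T @ mean)
            X[i] = X[i] @ (u @ vt)              # optimal rotation onto mean
        mean = X.mean(axis=0)
        mean /= np.linalg.norm(mean)
    return X, mean

# rotated copies of one hypothetical 5-landmark shape align to each other
rng = np.random.default_rng(0)
base = rng.normal(size=(5, 2))
def rot(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])
shapes = [base @ rot(a) for a in (0.0, 0.7, -1.2)]
aligned, mean = gpa(shapes)
print(np.abs(aligned[0] - aligned[1]).max())  # ~0 after alignment
```

After alignment, principal components analysis of the flattened landmark coordinates yields the shape-space PCs used in the regression models of the abstract.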
Burke, Gary; Nesheiwat, Jeffrey; Su, Ling
1994-01-01
Verification is an important aspect of the process of designing an application-specific integrated circuit (ASIC). The design must not only be functionally accurate but must also maintain correct timing. IFA, the Intelligent Front Annotation program, assists in verifying the timing of an ASIC early in the design process. This program speeds the design-and-verification cycle by estimating delays before layouts are completed. Written in the C language.
Organizational flexibility estimation
Komarynets, Sofia
2013-01-01
With the help of parametric estimation, an evaluation scale for organizational flexibility and its parameters was formed. Definite degrees of organizational flexibility and its parameters were determined for enterprises of the Lviv region. Grouping of the enterprises under the existing scale was carried out, and specific recommendations to correct the enterprises' behaviour were given.
On Functional Calculus Estimates
Schwenninger, F.L.
2015-01-01
This thesis presents various results within the field of operator theory that are formulated in estimates for functional calculi. Functional calculus is the general concept of defining operators of the form $f(A)$, where f is a function and $A$ is an operator, typically on a Banach space. Norm
DEFF Research Database (Denmark)
2000-01-01
Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...
Quantifying IT estimation risks
Kulk, G.P.; Peters, R.J.; Verhoef, C.
2009-01-01
A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be
Numerical Estimation in Preschoolers
Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco
2010-01-01
Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…
McDonald, Judith A.; Thornton, Robert J.
2011-01-01
Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…
Fast fundamental frequency estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2017-01-01
Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the funda...
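The harmonic model in this abstract can be illustrated with one common baseline estimator, harmonic summation: pick the candidate fundamental whose harmonics capture the most periodogram power. This is a toy sketch of the model, not the authors' fast method, and all names are my own:

```python
import numpy as np

def f0_harmonic_summation(x, fs, f0_grid, n_harmonics=3):
    """Toy fundamental-frequency estimator by harmonic summation: choose
    the candidate f0 whose first harmonics capture the most periodogram
    power under the weighted-sum-of-sinusoids (harmonic) model."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    scores = [
        sum(spec[np.argmin(np.abs(freqs - k * f0))]
            for k in range(1, n_harmonics + 1))
        for f0 in f0_grid
    ]
    return f0_grid[int(np.argmax(scores))]

# harmonic test signal: 200 Hz fundamental with 3 harmonics
fs = 8000
t = np.arange(2048) / fs
x = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in range(1, 4))
est = f0_harmonic_summation(x, fs, np.arange(50.0, 400.0, 1.0))
print(est)  # ≈ 200 Hz
```

Naive grid search like this costs a full spectrum lookup per candidate; the speedups pursued in the literature come from reusing computations across candidate fundamentals and model orders.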
Czech Academy of Sciences Publication Activity Database
Fabián, Zdeněk
2017-01-01
Roč. 56, č. 2 (2017), s. 125-132 ISSN 0973-1377 Institutional support: RVO:67985807 Keywords : gnostic theory * statistics * robust estimates Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability http://www.ceser.in/ceserp/index.php/ijamas/article/view/4707
Estimation of morbidity effects
International Nuclear Information System (INIS)
Ostro, B.
1994-01-01
Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions and characterized them as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by either stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was that the study had to include a reasonably complete analysis of the data. Such analysis would include a careful exploration of the primary hypothesis as well as possible examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates that were judged as most representative of the overall findings were those that were summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10
Histogram Estimators of Bivariate Densities
National Research Council Canada - National Science Library
Husemann, Joyce A
1986-01-01
One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...
Vamoş, Călin
2013-01-01
Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
Distribution load estimation (DLE)
Energy Technology Data Exchange (ETDEWEB)
Seppaelae, A; Lehtonen, M [VTT Energy, Espoo (Finland)
1998-08-01
The load research has produced customer class load models to convert the customers' annual energy consumption to hourly load values. The reliability of load models applied from a nation-wide sample is limited in any specific network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to find improvements to the load models or, in general, improvements to the load estimates. In Distribution Load Estimation (DLE) the measurements from the network are utilized to improve the customer class load models. The results of DLE will be new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented
Estimating ISABELLE shielding requirements
International Nuclear Information System (INIS)
Stevens, A.J.; Thorndike, A.M.
1976-01-01
Estimates were made of the shielding thicknesses required at various points around the ISABELLE ring. Both hadron and muon requirements are considered. Radiation levels at the outside of the shield and at the BNL site boundary are kept at or below 1000 mrem per year and 5 mrem/year respectively. Muon requirements are based on the Wang formula for pion spectra, and the hadron requirements on the hadron cascade program CYLKAZ of Ranft. A muon shield thickness of 77 meters of sand is indicated outside the ring in one area, and hadron shields equivalent to from 2.7 to 5.6 meters in thickness of sand above the ring. The suggested safety allowance would increase these values to 86 meters and 4.0 to 7.2 meters respectively. There are many uncertainties in such estimates, but these last figures are considered to be rather conservative
Variance Function Estimation. Revision.
1987-03-01
Aswath Damodaran
1999-01-01
Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...
Estimating Venezuela's Latent Inflation
Juan Carlos Bencomo; Hugo J. Montesinos; Hugo M. Montesinos; Jose Roberto Rondo
2011-01-01
Percent variation of the consumer price index (CPI) is the inflation indicator most widely used. This indicator, however, has some drawbacks. In addition to measurement errors of the CPI, there is a problem of incongruence between the definition of inflation as a sustained and generalized increase of prices and the traditional measure associated with the CPI. We use data from 1991 to 2005 to estimate a complementary indicator for Venezuela, the highest inflation country in Latin America. Late...
Chernobyl source term estimation
International Nuclear Information System (INIS)
Gudiksen, P.H.; Harvey, T.F.; Lange, R.
1990-09-01
The Chernobyl source term available for long-range transport was estimated by integration of radiological measurements with atmospheric dispersion modeling and by reactor core radionuclide inventory estimation in conjunction with WASH-1400 release fractions associated with specific chemical groups. The model simulations revealed that the radioactive cloud became segmented during the first day, with the lower section heading toward Scandinavia and the upper part heading in a southeasterly direction with subsequent transport across Asia to Japan, the North Pacific, and the west coast of North America. By optimizing the agreement between the observed cloud arrival times and duration of peak concentrations measured over Europe, Japan, Kuwait, and the US with the model predicted concentrations, it was possible to derive source term estimates for those radionuclides measured in airborne radioactivity. This was extended to radionuclides that were largely unmeasured in the environment by performing a reactor core radionuclide inventory analysis to obtain release fractions for the various chemical transport groups. These analyses indicated that essentially all of the noble gases, 60% of the radioiodines, 40% of the radiocesium, 10% of the tellurium and about 1% or less of the more refractory elements were released. These estimates are in excellent agreement with those obtained on the basis of worldwide deposition measurements. The Chernobyl source term was several orders of magnitude greater than those associated with the Windscale and TMI reactor accidents. However, the 137Cs from the Chernobyl event is about 6% of that released by the US and USSR atmospheric nuclear weapon tests, while the 131I and 90Sr released by the Chernobyl accident was only about 0.1% of that released by the weapon tests. 13 refs., 2 figs., 7 tabs
Estimating Corporate Yield Curves
Antionio Diaz; Frank Skinner
2001-01-01
This paper represents the first study of retail deposit spreads of UK financial institutions using stochastic interest rate modelling and the market comparable approach. By replicating quoted fixed deposit rates using the Black Derman and Toy (1990) stochastic interest rate model, we find that the spread between fixed and variable rates of interest can be modeled (and priced) using an interest rate swap analogy. We also find that we can estimate an individual bank deposit yield curve as a spr...
Estimation of inspection effort
International Nuclear Information System (INIS)
Mullen, M.F.; Wincek, M.A.
1979-06-01
An overview of IAEA inspection activities is presented, and the problem of evaluating the effectiveness of an inspection is discussed. Two models are described - an effort model and an effectiveness model. The effort model breaks the IAEA's inspection effort into components; the amount of effort required for each component is estimated; and the total effort is determined by summing the effort for each component. The effectiveness model quantifies the effectiveness of inspections in terms of probabilities of detection and quantities of material to be detected, if diverted over a specific period. The method is applied to a 200 metric ton per year low-enriched uranium fuel fabrication facility. A description of the model plant is presented, a safeguards approach is outlined, and sampling plans are calculated. The required inspection effort is estimated and the results are compared to IAEA estimates. Some other applications of the method are discussed briefly. Examples are presented which demonstrate how the method might be useful in formulating guidelines for inspection planning and in establishing technical criteria for safeguards implementation
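The detection-probability side of the effectiveness model can be illustrated with a standard attributes-sampling calculation. This is my own hedged example of the generic "at least one detected" hypergeometric formula, not the report's exact model or numbers:

```python
from math import comb

def detection_probability(N, d, n):
    """Probability that a random sample of n items out of N detects at
    least one of d diverted (defective) items -- the hypergeometric
    'at least one' probability common in safeguards sampling plans."""
    return 1.0 - comb(N - d, n) / comb(N, n)

# hypothetical plan: inspecting 20 of 100 items when 5 are diverted
p = detection_probability(100, 5, 20)
print(round(p, 3))
```

Sampling plans of this kind let the inspector trade sample size (effort) against detection probability (effectiveness), which is the coupling between the two models described in the abstract.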
Qualitative Robustness in Estimation
Directory of Open Access Journals (Sweden)
Mohammed Nasser
2012-07-01
Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts used to judge an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as forwarded by its first proponents and traces its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we have compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.
Estimating directional epistasis
Le Rouzic, Arnaud
2014-01-01
Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily-relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions—a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis from a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences. PMID:25071828
Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...
African Journals Online (AJOL)
Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...
Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L; Hamprecht, Fred A; Winkler, Andreas
2014-06-01
Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, whereas the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge under http://hx2.mpimf-heidelberg.mpg.de.
Estimation of Lung Ventilation
Ding, Kai; Cao, Kunlin; Du, Kaifang; Amelon, Ryan; Christensen, Gary E.; Raghavan, Madhavan; Reinhardt, Joseph M.
Since the primary function of the lung is gas exchange, ventilation can be interpreted as an index of lung function in addition to perfusion. Injury and disease processes can alter lung function on a global and/or a local level. MDCT can be used to acquire multiple static breath-hold CT images of the lung taken at different lung volumes, or with proper respiratory control, 4DCT images of the lung reconstructed at different respiratory phases. Image registration can be applied to this data to estimate a deformation field that transforms the lung from one volume configuration to the other. This deformation field can be analyzed to estimate local lung tissue expansion, calculate voxel-by-voxel intensity change, and make biomechanical measurements. The physiologic significance of the registration-based measures of respiratory function can be established by comparing to more conventional measurements, such as nuclear medicine or contrast wash-in/wash-out studies with CT or MR. An important emerging application of these methods is the detection of pulmonary function change in subjects undergoing radiation therapy (RT) for lung cancer. During RT, treatment is commonly limited to sub-therapeutic doses due to unintended toxicity to normal lung tissue. Measurement of pulmonary function may be useful as a planning tool during RT planning, may be useful for tracking the progression of toxicity to nearby normal tissue during RT, and can be used to evaluate the effectiveness of a treatment post-therapy. This chapter reviews the basic measures to estimate regional ventilation from image registration of CT images, the comparison of them to the existing golden standard and the application in radiation therapy.
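The registration-based measure described above is commonly computed from the Jacobian determinant of the estimated deformation. Here is a minimal sketch (my own names and synthetic field) of local volume change from a displacement field on a voxel grid:

```python
import numpy as np

def jacobian_map(disp):
    """Local volume change from a 3-D displacement field u(x) sampled on a
    voxel grid: J(x) = det(I + grad u). J > 1 means local expansion,
    J < 1 compression; J - 1 is a common ventilation surrogate."""
    # g[i][j] = d u_i / d x_j, each of shape (nx, ny, nz)
    g = np.array([np.gradient(disp[..., i]) for i in range(3)])
    F = np.moveaxis(g, (0, 1), (-2, -1)) + np.eye(3)  # per-voxel deformation gradient
    return np.linalg.det(F)

# smoke test: uniform 10% stretch along z gives J = 1.1 everywhere
disp = np.zeros((16, 16, 16, 3))
disp[..., 2] = 0.1 * np.arange(16)   # u_z = 0.1 * z (voxel units)
J = jacobian_map(disp)
print(J.mean())  # ≈ 1.1
```

In practice the displacement field comes from registering the breath-hold or 4DCT volumes, and the resulting J map is compared against nuclear-medicine or contrast-based ventilation measurements as described above.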
Estimating Subjective Probabilities
DEFF Research Database (Denmark)
Andersen, Steffen; Fountain, John; Harrison, Glenn W.
2014-01-01
either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments...... that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...
Buttrey, Samuel E.; Washburn, Alan R.; Price, Wilson L.; Operations Research
2011-01-01
The article of record as published may be located at http://dx.doi.org/10.2202/1559-0410.1334 We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season was downloaded and processed into a form suitable for the analysis. The model...
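The Poisson goal-scoring model described in the abstract can be sketched numerically; the per-game rate below is a hypothetical illustration, not one of the paper's fitted values:

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k goals when goals arrive as if from a
    Poisson process with per-game rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Hypothetical per-game scoring rate for the home side (illustrative only):
lam_home = 3.1
# Probability the home side scores exactly 0..5 goals in a game:
probs = [poisson_pmf(k, lam_home) for k in range(6)]
```

In the full model the rate would additionally depend on the opponent, home-ice advantage, and the manpower situation; the sketch above shows only the Poisson building block.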
Risk estimation and evaluation
Energy Technology Data Exchange (ETDEWEB)
Ferguson, R A.D.
1982-10-01
Risk assessment involves subjectivity, which makes objective decision making difficult in the nuclear power debate. The author reviews the process and uncertainties of estimating risks as well as the potential for misinterpretation and misuse. Risk data from a variety of aspects cannot be summed because the significance of different risks is not comparable. A method for including political, social, moral, psychological, and economic factors, environmental impacts, catastrophes, and benefits in the evaluation process could involve a broad base of lay and technical consultants, who would explain and argue their evaluation positions. 15 references. (DCK)
Estimating Gear Teeth Stiffness
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2013-01-01
The estimation of gear stiffness is important for determining the load distribution between the gear teeth when two sets of teeth are in contact. Two factors have a major influence on the stiffness; firstly the boundary condition through the gear rim size included in the stiffness calculation...... and secondly the size of the contact. In the FE calculation the true gear tooth root profile is applied. The meshing stiffnesses of gears are highly non-linear; it is, however, found that the stiffness of an individual tooth can be expressed in a linear form, assuming that the contact length is constant....
Mixtures Estimation and Applications
Mengersen, Kerrie; Titterington, Mike
2011-01-01
This book uses the EM (expectation maximization) algorithm to simultaneously estimate the missing data and unknown parameter(s) associated with a data set. The parameters describe the component distributions of the mixture; the distributions may be continuous or discrete. The editors provide a complete account of the applications, mathematical structure and statistical analysis of finite mixture distributions, along with MCMC computational methods and a range of detailed discussions covering the applications of the methods, and feature chapters from the leading experts on the subject.
Robust Wave Resource Estimation
DEFF Research Database (Denmark)
Lavelle, John; Kofoed, Jens Peter
2013-01-01
density estimates of the PDF as a function both of Hm0 and Tp, and Hm0 and T0;2, together with the mean wave power per unit crest length, Pw, as a function of Hm0 and T0;2. The wave elevation parameters, from which the wave parameters are calculated, are filtered to correct or remove spurious data....... An overview is given of the methods used to do this, and a method for identifying outliers of the wave elevation data, based on the joint distribution of wave elevations and accelerations, is presented. The limitations of using a JONSWAP spectrum to model the measured wave spectra as a function of Hm0 and T0......;2 or Hm0 and Tp for the Hanstholm site data are demonstrated. As an alternative, the non-parametric loess method, which does not rely on any assumptions about the shape of the wave elevation spectra, is used to accurately estimate Pw as a function of Hm0 and T0;2....
Estimations of actual availability
International Nuclear Information System (INIS)
Molan, M.; Molan, G.
2001-01-01
Adaptation of the working environment (social, organizational and physical) should assure a higher level of workers' availability and consequently a higher level of workers' performance. A special theoretical model describing the connections between environmental factors, human availability and performance was developed and validated. The central part of the model is the evaluation of human actual availability in the real working situation, or fitness-for-duty self-estimation. The model was tested in different working environments. On a large sample (2000 workers), standardized values and critical limits for an availability questionnaire were defined. The standardized method was used to identify the most important impacts of environmental factors. Identified problems were eliminated by investments in the organization, by modification of selection and training procedures, and by humanization of the working environment. For workers with behavioural and health problems, individual consultancy was offered. The described method is a tool for the identification of impacts. In combination with behavioural analyses and mathematical analyses of connections, it offers the possibility of keeping an adequate level of human availability and fitness for duty in each real working situation. The model should be a tool for achieving an adequate level of nuclear safety by keeping an adequate level of workers' availability and fitness for duty. For each individual worker, estimation of the level of actual fitness for duty is possible. The effects of prolonged work and additional tasks can also be evaluated, and evaluations of health status effects and ageing are possible at the individual level. (author)
Comparison of variance estimators for metaanalysis of instrumental variable estimates
Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.
2016-01-01
Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
Directory of Open Access Journals (Sweden)
Laurence Booth
2015-04-01
Full Text Available Discount rates are essential to applied finance, especially in setting prices for regulated utilities and valuing the liabilities of insurance companies and defined benefit pension plans. This paper reviews the basic building blocks for estimating discount rates. It also examines market risk premiums, as well as what constitutes a benchmark fair or required rate of return, in the aftermath of the financial crisis and the U.S. Federal Reserve’s bond-buying program. Some of the results are disconcerting. In Canada, utilities and pension regulators responded to the crash in different ways. Utilities regulators haven’t passed on the full impact of low interest rates, so that consumers face higher prices than they should whereas pension regulators have done the opposite, and forced some contributors to pay more. In both cases this is opposite to the desired effect of monetary policy which is to stimulate aggregate demand. A comprehensive survey of global finance professionals carried out last year provides some clues as to where adjustments are needed. In the U.S., the average equity market required return was estimated at 8.0 per cent; Canada’s is 7.40 per cent, due to the lower market risk premium and the lower risk-free rate. This paper adds a wealth of historic and survey data to conclude that the ideal base long-term interest rate used in risk premium models should be 4.0 per cent, producing an overall expected market return of 9-10.0 per cent. The same data indicate that allowed returns to utilities are currently too high, while the use of current bond yields in solvency valuations of pension plans and life insurers is unhelpful unless there is a realistic expectation that the plans will soon be terminated.
Akbar, Somaieh; Fathianpour, Nader
2016-12-01
The Curie point depth is of great importance in characterizing geothermal resources. In this study, the Curie iso-depth map was produced using the well-known method of dividing the aeromagnetic dataset into overlapping blocks and analyzing the power spectral density of each block separately. Determining the optimum block dimension is vital to improving the resolution and accuracy of Curie point depth estimation. To investigate the relation between the optimal block size and the power spectral density, forward magnetic modeling was implemented on an artificial prismatic body with specified characteristics. The top, centroid, and bottom depths of the body were estimated by the spectral analysis method for different block dimensions. The results showed that the optimal block size can be taken as the smallest block size whose corresponding power spectrum exhibits an absolute maximum at small wavenumbers. The Curie depth map of the Sabalan geothermal field and its surrounding areas, in northwestern Iran, was produced using a grid of 37 blocks with dimensions ranging from 10 × 10 to 50 × 50 km2 and at least 50% overlap between adjacent blocks. The Curie point depth was estimated in the range of 5 to 21 km. The promising areas, with Curie point depths of less than 8.5 km, are located around Mount Sabalan and encompass more than 90% of the known geothermal resources in the study area. Moreover, the Curie point depth estimated by the improved spectral analysis is in good agreement with the depth calculated from thermal gradient data measured in one of the exploratory wells in the region.
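The top/centroid/bottom depth estimation from block power spectra can be sketched as follows. This is a minimal illustration of a Tanaka-style spectral method (an assumption; the study's exact windowing and block processing differ): the centroid depth Z0 comes from the low-wavenumber slope of ln(sqrt(P)/k), the top depth Zt from the slope of ln(sqrt(P)) at higher wavenumbers, and the base (Curie) depth is Zb = 2*Z0 - Zt, with k taken as angular wavenumber:

```python
import numpy as np

def curie_depths(k, sqrt_power, low_band, high_band):
    """Spectral depth estimates from a radially averaged amplitude
    spectrum sqrt_power(k), with k an angular wavenumber (rad/km):
    centroid depth Z0 from the low-wavenumber slope of ln(sqrt(P)/k),
    top depth Zt from the slope of ln(sqrt(P)) at higher wavenumbers,
    base (Curie) depth Zb = 2*Z0 - Zt."""
    k = np.asarray(k, dtype=float)
    s = np.asarray(sqrt_power, dtype=float)
    lo = (k >= low_band[0]) & (k <= low_band[1])
    hi = (k >= high_band[0]) & (k <= high_band[1])
    z0 = -np.polyfit(k[lo], np.log(s[lo] / k[lo]), 1)[0]  # centroid depth
    zt = -np.polyfit(k[hi], np.log(s[hi]), 1)[0]          # top depth
    return z0, zt, 2.0 * z0 - zt

# Synthetic spectrum of a magnetic slab with top at 5 km and base at 15 km:
k = np.linspace(0.01, 1.0, 200)
sqrt_power = np.exp(-5.0 * k) - np.exp(-15.0 * k)
z0, zt, zb = curie_depths(k, sqrt_power, (0.01, 0.1), (0.5, 1.0))
```

The slope fits are asymptotic approximations, so the recovered depths are close to, but not exactly, the true values; the band limits are tuning choices, which is why block size matters in practice.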
Toxicity Estimation Software Tool (TEST)
The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...
Sampling and estimating recreational use.
Timothy G. Gregoire; Gregory J. Buhyoff
1999-01-01
Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
Flexible and efficient estimating equations for variogram estimation
Sun, Ying; Chang, Xiaohui; Guan, Yongtao
2018-01-01
Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
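As background for the least squares branch of variogram estimation mentioned above, a minimal method-of-moments (Matheron) empirical variogram can be sketched as follows; the binning scheme is illustrative and this is not the paper's estimating-equations framework:

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Method-of-moments (Matheron) empirical variogram:
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over point pairs whose
    separation distance falls in the bin around lag h."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    i, j = np.triu_indices(len(values), k=1)           # all unordered pairs
    lags = np.linalg.norm(coords[i] - coords[j], axis=1)
    semivar = 0.5 * (values[i] - values[j]) ** 2
    which = np.digitize(lags, bin_edges)               # bin index per pair
    gamma = np.array([semivar[which == b].mean() if np.any(which == b) else np.nan
                      for b in range(1, len(bin_edges))])
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    return centers, gamma

# Spatially independent data: the variogram should be flat at the variance.
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 10.0, size=(400, 2))
values = rng.normal(0.0, 1.0, size=400)
centers, gamma = empirical_variogram(coords, values, np.linspace(0.0, 5.0, 6))
```

Least squares variogram fitting then minimizes the (possibly weighted) discrepancy between such empirical values and a parametric model; the paper's estimating equations generalize that step.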
Improved Estimates of Thermodynamic Parameters
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
State estimation in networked systems
Sijs, J.
2012-01-01
This thesis considers state estimation strategies for networked systems. State estimation refers to a method for computing the unknown state of a dynamic process by combining sensor measurements with predictions from a process model. The most well known method for state estimation is the Kalman
Global Polynomial Kernel Hazard Estimation
DEFF Research Database (Denmark)
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...
Uveal melanoma: Estimating prognosis
Directory of Open Access Journals (Sweden)
Swathi Kaliki
2015-01-01
Full Text Available Uveal melanoma is the most common primary malignant tumor of the eye in adults, predominantly found in Caucasians. Local tumor control of uveal melanoma is excellent, yet this malignancy is associated with relatively high mortality secondary to metastasis. Various clinical, histopathological, cytogenetic, and gene expression features help in estimating the prognosis of uveal melanoma. The clinical features associated with poor prognosis in patients with uveal melanoma include older age at presentation, male gender, larger tumor basal diameter and thickness, ciliary body location, diffuse tumor configuration, association with ocular/oculodermal melanocytosis, extraocular tumor extension, and advanced tumor staging by American Joint Committee on Cancer classification. Histopathological features suggestive of poor prognosis include epithelioid cell type, high mitotic activity, higher values of mean diameter of ten largest nucleoli, higher microvascular density, extravascular matrix patterns, tumor-infiltrating lymphocytes, tumor-infiltrating macrophages, higher expression of insulin-like growth factor-1 receptor, and higher expression of human leukocyte antigen Class I and II. Monosomy 3, 1p loss, 6q loss, and 8q gain, as well as classification as Class II by gene expression, are predictive of poor prognosis of uveal melanoma. In this review, we discuss the prognostic factors of uveal melanoma. A database search was performed on PubMed, using the terms "uvea," "iris," "ciliary body," "choroid," "melanoma," "uveal melanoma" and "prognosis," "metastasis," "genetic testing," "gene expression profiling." Relevant English language articles were extracted, reviewed, and referenced appropriately.
Choi, S.; Joiner, J.; Krotkov, N. A.; Choi, Y.; Duncan, B. N.; Celarier, E. A.; Bucsela, E. J.; Vasilkov, A. P.; Strahan, S. E.; Veefkind, J. P.; Cohen, R. C.; Weinheimer, A. J.; Pickering, K. E.
2013-12-01
Total column measurements of NO2 from space-based sensors are of interest to the atmospheric chemistry and air quality communities; the relatively short lifetime of near-surface NO2 produces satellite-observed hot-spots near pollution sources including power plants and urban areas. However, estimates of NO2 concentrations in the free-troposphere, where lifetimes are longer and the radiative impact through ozone formation is larger, are severely lacking. Such information is critical to evaluate chemistry-climate and air quality models that are used for prediction of the evolution of tropospheric ozone and its impact of climate and air quality. Here, we retrieve free-tropospheric NO2 volume mixing ratio (VMR) using the cloud slicing technique. We use cloud optical centroid pressures (OCPs) as well as collocated above-cloud vertical NO2 columns (defined as the NO2 column from top of the atmosphere to the cloud OCP) from the Ozone Monitoring Instrument (OMI). The above-cloud NO2 vertical columns used in our study are retrieved independent of a priori NO2 profile information. In the cloud-slicing approach, the slope of the above-cloud NO2 column versus the cloud optical centroid pressure is proportional to the NO2 volume mixing ratio (VMR) for a given pressure (altitude) range. We retrieve NO2 volume mixing ratios and compare the obtained NO2 VMRs with in-situ aircraft profiles measured during the NASA Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in 2006. The agreement is good when proper data screening is applied. In addition, the OMI cloud slicing reports a high NO2 VMR where the aircraft reported lightning NOx during the Deep Convection Clouds and Chemistry (DC3) campaign in 2012. We also provide a global seasonal climatology of free-tropospheric NO2 VMR in cloudy conditions. Enhanced NO2 in free troposphere commonly appears near polluted urban locations where NO2 produced in the boundary layer may be transported vertically out of the
A study on high speed wavefront control algorithm for an adaptive optics system
International Nuclear Information System (INIS)
Park, Seung Kyu; Baik, Sung Hoon; Kim, Cheol Jung; Seo, Young Seok
2000-01-01
We developed a high speed control algorithm and system for measuring and correcting wavefront distortions, based on the Windows operating system. To quickly extract the wavefront distortion information from the Hartmann spot image, we preprocessed the image to remove background noise and extracted the centroid position by finding the center of weights. We repeatedly refined the centroid position with sub-pixel resolution to obtain wavefront information at enhanced resolution. We designed a differential data communication driver and an isolated analog driver for robust system control. As experimental results, the measurement resolution of the wavefront was 0.05 pixels and the correction speed was 5 Hz.
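The center-of-weights centroid extraction with background removal described above can be sketched as follows; the threshold value and the synthetic Gaussian spot are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def subpixel_centroid(image, threshold=0.0):
    """Center-of-weights (intensity-weighted mean) centroid of a spot
    image with sub-pixel resolution; background below `threshold` is
    subtracted before weighting."""
    img = np.asarray(image, dtype=float)
    img = np.where(img > threshold, img - threshold, 0.0)  # remove background
    total = img.sum()
    if total == 0.0:
        raise ValueError("no signal above threshold")
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Hartmann-type spot centred at x = 4.3, y = 2.7 (illustrative):
yy, xx = np.indices((9, 9))
spot = np.exp(-((xx - 4.3) ** 2 + (yy - 2.7) ** 2) / 2.0)
cx, cy = subpixel_centroid(spot, threshold=0.01)
```

Because the weighted mean interpolates between pixel centers, the recovered position resolves well below one pixel, which is what makes 0.05-pixel-level measurement resolution plausible for well-sampled spots.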
Approaches to estimating decommissioning costs
International Nuclear Information System (INIS)
Smith, R.I.
1990-07-01
The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared...... to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from...... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...
A new estimator for vector velocity estimation [medical ultrasonics
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2001-01-01
A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation...... be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...
Abrantes, João R. C. B.; Moruzzi, Rodrigo B.; Silveira, Alexandre; de Lima, João L. M. P.
2018-02-01
The accurate measurement of shallow flow velocities is crucial to understand and model the dynamics of sediment and pollutant transport by overland flow. In this study, a novel triple-tracer approach was used to re-evaluate and compare the traditional and well established dye and salt tracer techniques with the more recent thermal tracer technique in estimating shallow flow velocities. For this purpose a triple tracer (i.e. dyed-salted-heated water) was used. Optical and infrared video cameras and an electrical conductivity sensor were used to detect the tracers in the flow. Leading edge and centroid velocities of the tracers were measured and the correction factors used to determine the actual mean flow velocities from tracer measured velocities were compared and investigated. Experiments were carried out for different flow discharges (32-1813 ml s-1) on smooth acrylic, sand, stones and synthetic grass bed surfaces with 0.8, 4.4 and 13.2% slopes. The results showed that thermal tracers can be used to estimate shallow flow velocities, since the three techniques yielded very similar results without significant differences between them. The main advantages of the thermal tracer were that the movement of the tracer along the measuring section was more easily visible than it was in the real image videos and that it was possible to measure space-averaged flow velocities instead of only one velocity value, with the salt tracer. The correction factors used to determine the actual mean velocity of overland flow varied directly with Reynolds and Froude numbers, flow velocity and slope and inversely with flow depth and bed roughness. In shallow flows, velocity estimation using tracers entails considerable uncertainty and caution must be taken with these measurements, especially in field studies where these variables vary appreciably in space and time.
International Nuclear Information System (INIS)
Vetrinskaya, N.I.; Manasbayeva, A.B.
1998-01-01
Water has a particular ecological function and is an indicator of the general state of the biosphere. In this context, the toxicological evaluation of water by biological testing methods is highly relevant. The peculiarity of biological testing information is that it integrally reflects the totality of the properties of the examined environment as perceived by living objects. Rapid integral evaluation of the anthropogenic situation is the basic aim of biological testing. If this evaluation deviates from the normal state, detailed analysis and identification of dangerous components can be conducted later. The quality of water from the Degelen gallery, where nuclear explosions were conducted, was investigated by bio-testing methods. Micro-organisms (Micrococcus Luteus, Candida crusei, Pseudomonas algaligenes) and the water plant elodea (Elodea canadensis Rich) were used as test objects. It is known that the transport functions of the cell membranes of living organisms are the first to be violated under the extreme conditions of various influences. Therefore, the ion penetration of elodea and micro-organism cells contained in the examined water with toxicants was used as the test function. Alteration of membrane penetration was estimated by measuring the electrical conductivity of electrolytes released from the cells of living objects into distilled water. The water toxicity index is the ratio of the electrical conductivity in the experiment to that in the control. Observations of the general state of the plants incubated in toxic water were also made. (The chronic experiment was conducted for 60 days.) The plants were incubated in water samples collected from the gallery in the years 1996 and 1997. The incubation time was 1-10 days. The results of the investigation showed that the ion penetration of elodea and micro-organism cells changed markedly under the influence of the radionuclides contained in the tested water. Changes are taking place even in
WAYS HIERARCHY OF ACCOUNTING ESTIMATES
Directory of Open Access Journals (Sweden)
ŞERBAN CLAUDIU VALENTIN
2015-03-01
Full Text Available Based on the one hand on the premise that an estimate is an approximate evaluation, together with the fact that the term estimate is increasingly common and used in a variety of both theoretical and practical areas, particularly in situations where we cannot decide with certainty, it must be said that we are in fact dealing with estimates, and in our case with accounting estimates. Complementing this on the other hand with the phrase "estimated value", which implies that we are dealing with a value obtained from an evaluation process whose size is not exact but approximate, that is, close to the actual size, the necessity becomes obvious of delimiting the hierarchical relationship between evaluation and estimate, while considering the context in which the evaluation activity is carried out at entity level.
Modal mass estimation from ambient vibrations measurement: A method for civil buildings
Acunzo, G.; Fiorini, N.; Mori, F.; Spina, D.
2018-01-01
A new method for estimating the modal mass ratios of buildings from unscaled mode shapes identified from ambient vibrations is presented. The method is based on the Multi Rigid Polygons (MRP) model, in which each floor of the building is ideally divided into several non-deformable polygons that move independently of each other. The whole mass of the building is concentrated in the centroids of the polygons and the experimental mode shapes are expressed in terms of rigid translations and rotations. In this way, the mass matrix of the building can be easily computed on the basis of simple information about the geometry and the materials of the structure. The modal mass ratios can then be obtained through the classical equations of structural dynamics. Ambient vibrations measurement must be performed according to the MRP model, using at least two biaxial accelerometers per polygon. After a brief illustration of the theoretical background of the method, numerical validations are presented, analysing the method's sensitivity to different possible sources of error. Quality indexes are defined for evaluating the approximation of the modal mass ratios obtained from a certain MRP model. The capability of the proposed model to be applied to real buildings is illustrated through two experimental applications. In the first one, a geometrically irregular reinforced concrete building is considered, using a calibrated Finite Element Model to validate the results of the method. The second application refers to a historical monumental masonry building, with a more complex geometry and less information available. In both cases, MRP models with a different number of rigid polygons per floor are compared.
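The key property the method exploits is that a modal mass ratio computed from the mass matrix and a mode shape is invariant to the unknown scaling of the mode shape, so unscaled ambient-vibration shapes suffice. This can be sketched as follows; the two-degree-of-freedom system is an illustrative assumption, not an example from the paper:

```python
import numpy as np

def modal_mass_ratio(phi, M, r):
    """Effective modal mass ratio of one mode: (phi^T M r)^2 divided by
    (phi^T M phi) * (r^T M r).  The ratio is invariant to the scaling
    of phi, so unscaled mode shapes from ambient vibrations suffice."""
    phi = np.asarray(phi, dtype=float)
    r = np.asarray(r, dtype=float)
    L = phi @ M @ r        # modal participation numerator
    m = phi @ M @ phi      # modal mass for this (arbitrary) scaling
    return L ** 2 / (m * (r @ M @ r))

# Illustrative 2-DOF shear building:
M = np.diag([2.0, 1.0])                      # lumped floor masses
K = np.array([[3.0, -1.0], [-1.0, 1.0]])     # storey stiffness matrix
_, modes = np.linalg.eig(np.linalg.inv(M) @ K)
r = np.ones(2)                               # influence vector
ratios = [modal_mass_ratio(modes[:, i], M, r) for i in range(2)]
```

For a complete set of M-orthogonal modes the ratios sum to one, which gives a built-in consistency check of the kind the paper's quality indexes formalize.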
Spring Small Grains Area Estimation
Palmer, W. F.; Mohler, R. J.
1986-01-01
SSG3 automatically estimates acreage of spring small grains from Landsat data. Report describes development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of technique to analysis of four years of data from United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysis.
Parameter estimation in plasmonic QED
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, decreasing the spontaneous emission rate of the NVCs slows the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, and therefore its vanishing is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. Besides, one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.
Quantity Estimation Of The Interactions
International Nuclear Information System (INIS)
Gorana, Agim; Malkaj, Partizan; Muda, Valbona
2007-01-01
In this paper we present some considerations about quantity estimation regarding the range of interaction and the conservation laws in various types of interactions. Our estimates are made from both classical and quantum points of view and concern the carriers, radii, ranges of influence, and intensities of the interactions.
CONDITIONS FOR EXACT CAVALIERI ESTIMATION
Directory of Open Access Journals (Sweden)
Mónica Tinajero-Bravo
2014-03-01
Exact Cavalieri estimation amounts to zero-variance estimation of an integral with systematic observations along a sampling axis. A sufficient condition is given, both in the continuous and the discrete cases, for exact Cavalieri sampling. The conclusions suggest improvements on the current stereological application of fractionator-type sampling.
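As context for the abstract above, Cavalieri estimation observes an integrand at equally spaced points with a random start and weights each observation by the spacing. The sketch below (a minimal illustration, not the paper's exactness condition) shows that the estimator is unbiased for a smooth integrand even when no single systematic sample is exact; the integrand x² and the period 0.1 are chosen only for the demonstration.

```python
import math
import random

def cavalieri_estimate(f, period, start):
    """Systematic (Cavalieri) estimate of the integral of f over [0, 1]:
    observe f at start, start + period, ... and weight each value by the period."""
    n = math.ceil((1.0 - start) / period)   # number of sample points in [0, 1)
    return period * sum(f(start + k * period) for k in range(n))

# With a uniformly random start in [0, period) the estimator is unbiased:
# averaging many estimates of the integral of x**2 (true value 1/3) shows this,
# even though a single systematic sample is generally not exact.
random.seed(0)
period = 0.1
estimates = [cavalieri_estimate(lambda x: x * x, period,
                                random.uniform(0.0, period))
             for _ in range(20000)]
mean = sum(estimates) / len(estimates)   # close to 1/3
```

Zero-variance (exact) Cavalieri sampling, the subject of the paper, is the special case in which every start value yields the same estimate.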
Optimization of Barron density estimates
Czech Academy of Sciences Publication Activity Database
Vajda, Igor; van der Meulen, E. C.
2001-01-01
Roč. 47, č. 5 (2001), s. 1867-1883 ISSN 0018-9448 R&D Projects: GA ČR GA102/99/1137 Grant - others:Copernicus(XE) 579 Institutional research plan: AV0Z1075907 Keywords : Barron estimator * chi-square criterion * density estimation Subject RIV: BD - Theory of Information Impact factor: 2.077, year: 2001
Stochastic Estimation via Polynomial Chaos
2015-10-01
AFRL-RW-EG-TR-2015-108: Stochastic Estimation via Polynomial Chaos, Douglas V. Nance, Air Force Research Laboratory (period covered 20-04-2015 to 07-08-2015). This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second-order stochastic processes.
Bayesian estimates of linkage disequilibrium
Directory of Open Access Journals (Sweden)
Abad-Grau María M
2007-06-01
Background: The maximum likelihood estimator of D' – a standard measure of linkage disequilibrium – is biased toward disequilibrium, and the bias is particularly evident in small samples and rare haplotypes. Results: This paper proposes a Bayesian estimator of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs) that increases the likelihood of equilibrium with increasing physical distance between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion: Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view of the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome-wide association studies.
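For reference, the plug-in estimator whose bias the abstract above addresses can be sketched in a few lines: Lewontin's D' is the raw disequilibrium D normalized by its maximum attainable magnitude given the allele frequencies. The frequencies below are invented for illustration; the Bayesian estimator of the paper is not reproduced here.

```python
def d_prime(p_ab, p_a, p_b):
    """Plug-in (maximum-likelihood-style) estimate of Lewontin's D' from the
    haplotype frequency p_ab and the allele frequencies p_a and p_b.  This is
    the estimator whose bias toward disequilibrium the paper corrects."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1.0 - p_b), (1.0 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1.0 - p_a) * (1.0 - p_b))
    return d / d_max if d_max > 0 else 0.0

dp = d_prime(0.50, 0.60, 0.70)   # D = 0.08, Dmax = 0.18, so D' = 4/9
```

In small samples the plug-in frequencies themselves are noisy, which is exactly where the estimator drifts toward |D'| = 1 and a prior favouring equilibrium helps.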
Reactivity estimation using digital nonlinear H∞ estimator for VHTRC experiment
International Nuclear Information System (INIS)
Suzuki, Katsuo; Nabeshima, Kunihiko; Yamane, Tsuyoshi
2003-01-01
On-line, real-time estimation of time-varying reactivity in a nuclear reactor is necessary for early detection of reactivity anomalies and safe operation. Using a digital nonlinear H∞ estimator, an experiment in real-time dynamic reactivity estimation was carried out in the Very High Temperature Reactor Critical Assembly (VHTRC) of the Japan Atomic Energy Research Institute. Technical issues of the experiment are described, such as reactivity insertion, data sampling frequency, the anti-aliasing filter, the experimental circuit, and the digitalized nonlinear H∞ reactivity estimator. We then discuss the experimental results obtained by the digital nonlinear H∞ estimator from sampled nuclear instrumentation signals for the power responses under various reactivity insertions. The estimated reactivity tracked the true reactivity with almost no delay and with sufficient accuracy, between 0.05 cent and 0.1 cent. The experiment shows that real-time reactivity estimation with a data sampling period of 10 ms can certainly be realized. From these results, it is concluded that the digital nonlinear H∞ reactivity estimator can be applied as an on-line, real-time reactivity meter for actual nuclear plants. (author)
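The classical baseline for such a reactivity meter is inverse point kinetics, which the H∞ estimator of the abstract improves upon. The sketch below uses a single delayed-neutron group with illustrative constants (not VHTRC data) and is only the textbook inversion, not the paper's estimator: simulate a neutron signal forward, then recover the inserted reactivity from the signal alone.

```python
# One-delayed-group point-kinetics constants (illustrative, not VHTRC values)
BETA, GEN_TIME, DECAY = 0.0065, 1.0e-3, 0.08
DT, STEPS = 1.0e-3, 5000
RHO_TRUE = 0.001  # constant inserted reactivity (dk/k)

# Forward-Euler simulation standing in for the measured neutron signal n(t)
n = [1.0]
c = [BETA * n[0] / (GEN_TIME * DECAY)]   # delayed-precursor equilibrium
for _ in range(STEPS):
    nk, ck = n[-1], c[-1]
    n.append(nk + DT * ((RHO_TRUE - BETA) / GEN_TIME * nk + DECAY * ck))
    c.append(ck + DT * (BETA / GEN_TIME * nk - DECAY * ck))

def estimate_reactivity(n):
    """Inverse point kinetics: rebuild the precursor density from n alone,
    then solve the neutron balance for reactivity at each time step."""
    ck = BETA * n[0] / (GEN_TIME * DECAY)
    rho = []
    for k in range(len(n) - 1):
        dndt = (n[k + 1] - n[k]) / DT
        rho.append(BETA + GEN_TIME * dndt / n[k] - GEN_TIME * DECAY * ck / n[k])
        ck += DT * (BETA / GEN_TIME * n[k] - DECAY * ck)
    return rho

rho_est = estimate_reactivity(n)   # recovers RHO_TRUE to round-off accuracy
```

With noisy instrumentation signals the raw derivative term makes this inversion fragile, which motivates the filtered H∞ formulation studied in the paper.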
DEFF Research Database (Denmark)
Tangmose, Sara; Thevissen, Patrick; Lynnerup, Niels
2015-01-01
A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimation. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores … are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate including a 95% prediction interval. The aim of this study was to evaluate the performance of TA … in the living on a full set of third molar scores. A cross-sectional sample of 854 panoramic radiographs, homogeneously distributed by sex and age (15.0-24.0 years), was randomly split in two: a reference sample for obtaining age estimates including a 95% prediction interval according to TA, and a validation …
UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY
Directory of Open Access Journals (Sweden)
Jean-Paul Jernot
2011-05-01
This paper deals with the estimation of the specific connectivity of a stationary random set in IRd. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a Boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature for the case where the measurement field is a right parallelotope. In this paper, that estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally an example from quantitative metallography (the specific connectivity of a population of sintered bronze particles) is given.
Laser cost experience and estimation
International Nuclear Information System (INIS)
Shofner, F.M.; Hoglund, R.L.
1977-01-01
This report addresses the question of estimating the capital and operating costs for LIS (Laser Isotope Separation) lasers, which have performance requirements well beyond the state of the mature art. This question is seen from different perspectives by political leaders, ERDA administrators, scientists, and engineers concerned with reducing LIS to economically successful commercial practice on a timely basis. Accordingly, this report attempts to provide "ballpark" estimators for capital and operating costs and useful design and operating information for lasers based on mature technology, and for their LIS analogs. It is written in basic terms and is intended to respond about equally to the perspectives of administrators, scientists, and engineers. Its major contributions are establishing the track record of current, mature, industrialized lasers (including capital and operating cost estimators, reliability, types of application, etc.) and, especially, evolving generalized estimating procedures for the capital and operating costs of new laser designs.
Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)
Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.
Granato, Gregory E.
2012-01-01
A nationwide study to better define triangular-hydrograph statistics for use with runoff-quality and flood-flow studies was done by the U.S. Geological Survey (USGS) in cooperation with the Federal Highway Administration. Although the triangular hydrograph is a simple linear approximation, the cumulative distribution of stormflow with a triangular hydrograph is a curvilinear S-curve that closely approximates the cumulative distribution of stormflows from measured data. The temporal distribution of flow within a runoff event can be estimated using the basin lagtime (the time from the centroid of rainfall excess to the centroid of the corresponding runoff hydrograph) and the hydrograph recession ratio (the ratio of the duration of the falling limb to that of the rising limb of the hydrograph). This report documents the results of the study, the methods used to estimate the variables, and electronic files that facilitate calculation of the variables. Ten viable multiple-linear regression equations were developed to estimate basin lagtimes from readily determined drainage-basin properties using data published in 37 stormflow studies. Regression equations using the basin lag factor (BLF, a variable calculated as the main-channel length, in miles, divided by the square root of the main-channel slope, in feet per mile) and two variables describing development in the drainage basin were selected as the best candidates, because each equation explains about 70 percent of the variability in the data. The variables describing development are the USGS basin development factor (BDF, a function of the amount of channel modifications, storm sewers, and curb-and-gutter streets in a basin) and the total impervious area variable (IMPERV). Two datasets were used to develop the regression equations. The primary dataset included data from 493 sites that have values for the BLF, BDF, and IMPERV variables. This dataset was used to develop the best-fit regression
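The quantities named in the abstract above can be sketched numerically. The snippet below computes the basin lag factor exactly as defined (BLF = L/√S) and builds a triangular hydrograph from a time-to-peak, a recession ratio, and a runoff volume; treating time-to-peak as a direct input is an illustrative simplification, since the report instead ties the hydrograph to basin lagtime through the centroid relationship.

```python
import math

def basin_lag_factor(channel_length_mi, channel_slope_ft_per_mi):
    """BLF = main-channel length (miles) divided by the square root of the
    main-channel slope (feet per mile), as defined in the report."""
    return channel_length_mi / math.sqrt(channel_slope_ft_per_mi)

def triangular_hydrograph(time_to_peak_hr, recession_ratio, runoff_volume):
    """Geometry of a triangular hydrograph: the falling limb lasts
    recession_ratio times the rising limb, and the peak flow follows from
    requiring the triangle's area to equal the runoff volume."""
    duration = time_to_peak_hr * (1.0 + recession_ratio)
    peak_flow = 2.0 * runoff_volume / duration   # area = 0.5 * base * height
    return duration, peak_flow

blf = basin_lag_factor(10.0, 25.0)              # 10 / sqrt(25) = 2.0
duration, peak = triangular_hydrograph(2.0, 2.0, 6.0)
```

Integrating this triangle from time zero gives the curvilinear S-curve of cumulative stormflow that the abstract describes.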
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
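The core mechanism behind such condition-number regularization can be sketched on the eigenvalue spectrum alone: shrinking the sample eigenvalues into an interval whose endpoints differ by at most the target condition number guarantees a well-conditioned estimator. In this simplified sketch the interval is anchored at the largest eigenvalue; the paper's estimator instead chooses the interval by maximum likelihood, and the spectrum below is invented for illustration.

```python
def clip_eigenvalues(eigvals, kappa_max):
    """Shrink sample-covariance eigenvalues into [tau, kappa_max * tau] so the
    reconstructed covariance estimator has condition number at most kappa_max.
    Here tau is fixed from the largest eigenvalue for simplicity; the paper
    derives the truncation interval from a maximum likelihood criterion."""
    top = max(eigvals)
    tau = top / kappa_max
    return [min(max(e, tau), top) for e in eigvals]

# Ill-conditioned sample spectrum, as arises in a "large p, small n" problem:
clipped = clip_eigenvalues([9.0, 1.0, 0.05, 1e-6], kappa_max=10.0)
cond = max(clipped) / min(clipped)   # now bounded by kappa_max
```

Recombining the clipped eigenvalues with the sample eigenvectors yields an invertible, well-conditioned matrix, which is exactly the property the "large p small n" applications need.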
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
Radiation dose estimates for radiopharmaceuticals
International Nuclear Information System (INIS)
Stabin, M.G.; Stubbs, J.B.; Toohey, R.E.
1996-04-01
Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs, and for several other organs of interest. The dose estimates were calculated using the MIRD technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series differs from the MIRD 5, or Reference Man, phantom in several respects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low-energy photons striking the marrow are also different. Other minor differences exist, but are not likely to significantly affect dose estimates calculated with the two phantoms. The assumptions which support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms.
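The MIRD calculation that underlies these tables is, at its core, a weighted sum: the dose to a target organ is the cumulated activity in each source organ times the corresponding S value. The sketch below shows only that bookkeeping step; the residence times and S values are invented for illustration and are not MIRDOSE3 output for any real radiopharmaceutical.

```python
def organ_dose(residence_times, s_factors):
    """Target-organ dose in the MIRD schema: the sum over source organs of the
    cumulated activity (residence time x administered activity, folded here
    into MBq*h per MBq administered) times the S value S(target <- source)
    in mGy per MBq*h."""
    return sum(residence_times[src] * s_factors[src] for src in residence_times)

# Hypothetical two-source example (illustrative numbers only):
dose = organ_dose({"liver": 4.0, "kidneys": 1.5},          # residence times, h
                  {"liver": 0.012, "kidneys": 0.004})      # S values
```

Everything phantom-specific (organ masses, absorbed fractions) is buried in the S values, which is why switching from the MIRD 5 phantom to the Cristy-Eckerman phantom changes the tabulated doses.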
Risk estimation using probability machines
2014-01-01
Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
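The idea of a "probability machine" in the abstract above — a nonparametric learner that outputs consistent conditional probabilities and counterfactual-style effect sizes — can be illustrated with a deliberately simple stand-in. The sketch below uses a k-nearest-neighbour probability estimate instead of the random forest of the paper, on data simulated from a logistic model; all sample sizes and the choice of k are illustrative assumptions.

```python
import math
import random

def knn_probability(x0, xs, ys, k):
    """Nonparametric estimate of P(Y=1 | x = x0): the fraction of positives
    among the k nearest neighbours of x0.  A simple stand-in for the
    random-forest probability machine discussed in the abstract."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in nearest) / k

# Simulate data from a logistic model, then recover conditional probabilities
# and a counterfactual-style effect size without assuming the logistic form.
random.seed(1)
xs = [random.gauss(0.0, 2.0) for _ in range(4000)]
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(-x)) else 0 for x in xs]

p0 = knn_probability(0.0, xs, ys, 201)   # true value: 0.5
p1 = knn_probability(1.0, xs, ys, 201)   # true value: about 0.73
effect = p1 - p0                         # effect of moving x from 0 to 1
```

Because no model form is specified, the same code works unchanged if the true data-generating mechanism is not logistic — the robustness to mis-specification that the conclusion of the abstract emphasizes.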
Boundary methods for mode estimation
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computational cost, to other popular mode estimation techniques found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation, briefly reviews other common mode estimation techniques, and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for both the MOG and k-means techniques is the Akaike Information Criterion (AIC).
NASA Software Cost Estimation Model: An Analogy Based Estimation Model
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown in terms of software size. Software development activities are also notorious for cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and k-nearest-neighbor prediction model performance on the same data set.
Likelihood estimators for multivariate extremes
Huser, Raphaël; Davison, Anthony C.; Genton, Marc G.
2015-01-01
The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
Likelihood estimators for multivariate extremes
Huser, Raphaël
2015-11-17
The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
Analytical estimates of structural behavior
Dym, Clive L
2012-01-01
Explicitly reintroducing the idea of modeling to the analysis of structures, Analytical Estimates of Structural Behavior presents an integrated approach to modeling and estimating the behavior of structures. With the increasing reliance on computer-based approaches in structural analysis, it is becoming even more important for structural engineers to recognize that they are dealing with models of structures, not with the actual structures. As tempting as it is to run innumerable simulations, closed-form estimates can be effectively used to guide and check numerical results, and to confirm phys
Phase estimation in optical interferometry
Rastogi, Pramod
2014-01-01
Phase Estimation in Optical Interferometry covers the essentials of phase-stepping algorithms used in interferometry and pseudointerferometric techniques. It presents the basic concepts and mathematics needed for understanding the phase estimation methods in use today. The first four chapters focus on phase retrieval from image transforms using a single frame. The next several chapters examine the local environment of a fringe pattern, give a broad picture of the phase estimation approach based on local polynomial phase modeling, cover temporal high-resolution phase evaluation methods, and pre
Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range
Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.
2013-12-01
The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 μm from the current 24 μm spatial resolution (FWHM). The 400 eV-1600 eV energy X-rays detected by this spectrometer primarily interact within the field free region of the CCD, producing electron clouds which will diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through the analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 μm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal to effective readout noise ratio achieved resulting in a worst-case spatial resolution measurement of 4.5±0.2 μm and 3.9±0.1 μm at 530 eV and 680 eV respectively. A method is described that allows the contribution of the X-ray spot size to be deconvolved from these
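The centroiding step at the heart of the abstract above is, in its simplest form, a charge centre-of-mass calculation over the pixels an event spreads into. The pixel charges below are invented for illustration; real pipelines additionally threshold out readout noise and correct for the non-linear sub-pixel response before quoting a position.

```python
def subpixel_centroid(charges):
    """Centre-of-mass of the charge collected in a 1-D run of pixels.
    Returns the event position in pixel units with sub-pixel precision.
    This is only the basic centroiding step: noise thresholding and
    sub-pixel response corrections are deliberately omitted."""
    total = sum(charges)
    return sum(i * q for i, q in enumerate(charges)) / total

# A soft X-ray event whose electron cloud is split across three 16 um pixels:
pos_px = subpixel_centroid([120.0, 800.0, 280.0])
pos_um = pos_px * 16.0
```

Because the diffusing electron cloud deliberately spreads charge over several pixels, the centroid carries more position information than the brightest pixel alone — which is how a 16 μm pixel can yield a measured resolution of a few μm.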
An Analytical Cost Estimation Procedure
National Research Council Canada - National Science Library
Jayachandran, Toke
1999-01-01
Analytical procedures that can be used to do a sensitivity analysis of a cost estimate, and to perform tradeoffs to identify input values that can reduce the total cost of a project, are described in the report...
Spectral unmixing: estimating partial abundances
CSIR Research Space (South Africa)
Debba, Pravesh
2009-01-01
… techniques is complicated when considering very similar spectral signatures. Iron-bearing oxide/hydroxide/sulfate minerals have similar spectral signatures. The study focuses on how to estimate the abundances of spectrally similar iron-bearing oxide…
50th Percentile Rent Estimates
Department of Housing and Urban Development — Rent estimates at the 50th percentile (or median) are calculated for all Fair Market Rent areas. Fair Market Rents (FMRs) are primarily used to determine payment...
LPS Catch and Effort Estimation
National Oceanic and Atmospheric Administration, Department of Commerce — Data collected from the LPS dockside (LPIS) and the LPS telephone (LPTS) surveys are combined to produce estimates of total recreational catch, landings, and fishing...
Exploratory shaft liner corrosion estimate
International Nuclear Information System (INIS)
Duncan, D.R.
1985-10-01
An estimate of expected corrosion degradation during the 100-year design life of the Exploratory Shaft (ES) is presented. The basis for the estimate is a brief literature survey of corrosion data, in addition to data taken by the Basalt Waste Isolation Project. The scope of the study covers the expected corrosion environment of the ES and the corrosion modes of general corrosion, pitting and crevice corrosion, dissimilar-metal corrosion, and environmentally assisted cracking. The expected internal and external environment of the shaft liner is described in detail and the estimated effects of each corrosion mode are given. The maximum general corrosion degradation was estimated to be 70 mils at the exterior and 48 mils at the interior, at the shaft bottom. Corrosion at welds or mechanical joints could be significant, depending on design. After a final corrosion allowance has been determined by the project, it will be added to the design criteria. 10 refs., 6 figs., 5 tabs
Project Cost Estimation for Planning
2010-02-26
For Nevada Department of Transportation (NDOT), there are far too many projects that ultimately cost much more than initially planned. Because project nominations are linked to estimates of future funding and the analysis of system needs, the inaccur...
Robust estimation and hypothesis testing
Tiku, Moti L
2004-01-01
In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we give statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of the sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics is covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...
Estimating Emissions from Railway Traffic
DEFF Research Database (Denmark)
Jørgensen, Morten W.; Sorenson, Spencer C.
1998-01-01
Several parameters of importance for estimating emissions from railway traffic are discussed, and typical results presented. Typical emissions factors from diesel engines and electrical power generation are presented, and the effect of differences in national electrical generation sources...
Travel time estimation using Bluetooth.
2015-06-01
The objective of this study was to investigate the feasibility of using a Bluetooth Probe Detection System (BPDS) to : estimate travel time in an urban area. Specifically, the study investigated the possibility of measuring overall congestion, the : ...
Estimating uncertainty in resolution tests
CSIR Research Space (South Africa)
Goncalves, DP
2006-05-01
… frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty. Paper 050404R received May 20, 2005; revised manuscript received Sep. 2, 2005; accepted for publication Sep. 9, 2005; published online May 10, 2006.
Estimating solar radiation in Ghana
International Nuclear Information System (INIS)
Anane-Fenin, K.
1986-04-01
Estimates of global radiation on a horizontal surface for 9 towns in Ghana, West Africa, are deduced from their sunshine data using two methods, developed by Angstrom and by Sabbagh. An appropriate regional parameter is determined with the first method and used to predict solar irradiation at all 9 stations with an accuracy better than 15%. Estimates of diffuse solar irradiation using the correlations of Page, of Lin and Jordan, and of three other authors are performed and the results examined. (author)
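The Angstrom-type method mentioned above relates daily global radiation to relative sunshine duration. The sketch below shows the standard form of that regression; the coefficient values 0.25 and 0.50 are common textbook defaults used only for illustration, not the regional parameter fitted for Ghana in the paper.

```python
def angstrom_global_radiation(h0, n_sunshine, day_length, a=0.25, b=0.50):
    """Angstrom-type estimate of daily global radiation on a horizontal
    surface: H = H0 * (a + b * n/N), where H0 is the extraterrestrial
    radiation, n the measured sunshine hours, and N the astronomical day
    length.  The regression coefficients a and b must be fitted locally;
    the defaults here are illustrative, not the Ghana values."""
    return h0 * (a + b * n_sunshine / day_length)

# Example: H0 = 35 MJ/m^2/day, 8 h of measured sunshine in a 12 h day
h = angstrom_global_radiation(35.0, 8.0, 12.0)
```

Fitting a and b against pyranometer data at a few stations, then applying them region-wide, is precisely what determining "an appropriate regional parameter" amounts to.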
The Psychology of Cost Estimating
Price, Andy
2016-01-01
Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, but only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded amazing findings about how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception rather than facts and data. These built-in biases directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help the decision maker and our organizations make fact-based decisions.
Estimating emissions from railway traffic
Energy Technology Data Exchange (ETDEWEB)
Joergensen, M.W.; Sorenson, C.
1997-07-01
The report discusses methods that can be used to estimate the emissions from various kinds of railway traffic. The methods are based on the estimation of the energy consumption of the train, so that comparisons can be made between electric and diesel driven trains. Typical values are given for the necessary traffic parameters, emission factors, and train loading. Detailed models for train energy consumption are presented, as well as empirically based methods using average train speed and distance between stops. (au)
Efficient, Differentially Private Point Estimators
Smith, Adam
2008-01-01
Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (furt...
Computer-Aided Parts Estimation
Cunningham, Adam; Smart, Robert
1993-01-01
In 1991, Ford Motor Company began deployment of CAPE (computer-aided parts estimating system), a highly advanced knowledge-based system designed to generate, evaluate, and cost automotive part manufacturing plans. CAPE is engineered on an innovative, extensible, declarative process-planning and estimating knowledge representation language, which underpins the CAPE kernel architecture. Many manufacturing processes have been modeled to date, but eventually every significant process in motor veh...
Guideline to Estimate Decommissioning Costs
Energy Technology Data Exchange (ETDEWEB)
Yun, Taesik; Kim, Younggook; Oh, Jaeyoung [KHNP CRI, Daejeon (Korea, Republic of)
2016-10-15
The primary objective of this work is to provide guidelines for estimating the decommissioning cost and to give stakeholders plausible information for understanding decommissioning activities in a reasonable manner, which eventually contributes to acquiring public acceptance for the nuclear power industry. Although several decommissioning cost estimates have been made for a few commercial nuclear power plants, the different technical, site-specific and economic assumptions used make it difficult to interpret those estimates and compare them with that of a relevant plant. Trustworthy cost estimates are crucial to planning a safe and economic decommissioning project. The typical approach is to break down the decommissioning project into a series of discrete and measurable work activities. Although plant-specific differences derived from the economic and technical assumptions make it difficult for a licensee to estimate reliable decommissioning costs, cost estimation is among the most crucial processes, since it encompasses the whole spectrum of activities from planning to the final evaluation of whether a decommissioning project has proceeded successfully from both safety and economic points of view. Hence, it is clear that tenacious efforts are needed to perform a decommissioning project successfully.
Comparison of density estimators. [Estimation of probability density functions
Energy Technology Data Exchange (ETDEWEB)
Kao, S.; Monahan, J.F.
1977-09-01
Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed and some simulation results are presented. The object is to compare the performance of the various methods in small samples and their sensitivity to changes in their parameters, and to attempt to discover at what point a sample is so small that density estimation is no longer worthwhile. (RWR)
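Two of the estimator families compared in such surveys can be sketched in a few lines of Python; the one-dimensional sample, bandwidth, and neighbor count below are illustrative choices, not values from the report:

```python
import math

def kernel_density(x, sample, h):
    """Gaussian kernel density estimate at x with bandwidth h."""
    n = len(sample)
    norm = n * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample) / norm

def knn_density(x, sample, k):
    """k-nearest-neighbour density estimate: k / (2 n R_k), where R_k is the
    distance from x to its k-th nearest sample point."""
    dists = sorted(abs(x - xi) for xi in sample)
    return k / (2 * len(sample) * dists[k - 1])

sample = [0.1, 0.2, 0.25, 0.4, 0.5, 0.6, 0.75, 0.8, 0.9, 1.0]
print(kernel_density(0.5, sample, h=0.2), knn_density(0.5, sample, k=3))
```

In small samples the two methods trade off differently: the kernel estimate is sensitive to the bandwidth h, while the nearest-neighbor estimate is sensitive to k, which is exactly the parameter-sensitivity question the comparison addresses.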
Weldon Spring historical dose estimate
International Nuclear Information System (INIS)
Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.
1986-07-01
This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr
Weldon Spring historical dose estimate
Energy Technology Data Exchange (ETDEWEB)
Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.
1986-07-01
This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr.
Simon, Patrick; Schneider, Peter
2017-08-01
In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We have employed general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. The GLAM ellipticity has useful properties for any chosen weight profile: the weighted ellipticity is identical to that of isophotes of elliptical images, and in the absence of noise and pixellation it is always an unbiased estimator of reduced shear. We show that moment-based techniques, adaptive or unweighted, are similar to a model-based approach in the sense that they can be seen as an imperfect fit of an elliptical profile to the image. Due to residuals in the fit, moment-based estimates of ellipticities are prone to underfitting bias when inferred from observed images. The estimation is fundamentally limited mainly by pixellation, which destroys information on the original, pre-seeing image. We give an optimised estimator for the pre-seeing GLAM ellipticity and quantify its bias for noise-free images. To deal with images where pixel noise is prominent, we consider a Bayesian approach to infer GLAM ellipticity where, similar to the noise-free case, the ellipticity posterior can be inconsistent with the true ellipticity if we do not properly account for our ignorance about fit residuals. This underfitting bias, quantified in the paper, does not vary with the overall noise level but changes with the pre-seeing brightness profile and the correlation or heterogeneity of pixel noise over the image. Furthermore, when inferring a constant ellipticity or, more relevantly, constant shear from a source sample with a distribution of intrinsic properties (sizes, centroid positions, intrinsic shapes), an additional, now noise-dependent bias arises towards low signal-to-noise if incorrect prior densities for the intrinsic properties are used. We discuss the origin of this
An improved estimation and focusing scheme for vector velocity estimation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Munk, Peter
1999-01-01
to reduce spatial velocity dispersion. Examples of different velocity vector conditions are shown using the Field II simulation program. A relative accuracy of 10.1 % is obtained for the lateral velocity estimates for a parabolic velocity profile for a flow perpendicular to the ultrasound beam and a signal...
Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust...
Estimating formwork striking time for concrete mixes
African Journals Online (AJOL)
eobe
In this study, we estimated the time for strength development in concrete cured up to 56 days. .... regression analysis using MS Excel 2016 software was performed on the ....
Moving Horizon Estimation and Control
DEFF Research Database (Denmark)
Jørgensen, John Bagterp
successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is the introduction and definition of the extended linear quadratic optimal control problem for the solution of numerical problems arising in moving horizon estimation and control...... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution....... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discusses prediction error methods for identification of linear models tailored for model predictive control....
Heuristic introduction to estimation methods
International Nuclear Information System (INIS)
Feeley, J.J.; Griffith, J.M.
1982-08-01
The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems
Estimation of effective wind speed
Østergaard, K. Z.; Brath, P.; Stoustrup, J.
2007-07-01
The wind speed has a huge impact on the dynamic response of a wind turbine. Because of this, many control algorithms use a measure of the wind speed to increase performance, e.g. by gain scheduling and feed-forward. Unfortunately, no accurate online measurement of the effective wind speed is available from direct measurements, which means that it must be estimated in order to make such control methods applicable in practice. In this paper a new method is presented for the estimation of the effective wind speed. First, the rotor speed and aerodynamic torque are estimated by a combined state and input observer. These two variables, combined with the measured pitch angle, are then used to calculate the effective wind speed by inversion of a static aerodynamic model.
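The final inversion step can be sketched as follows; the power-coefficient curve `cp` and all constants are hypothetical placeholders (a real turbine would use an identified look-up table), and bisection stands in for whatever numerical inversion the paper actually uses:

```python
import math

RHO, R = 1.225, 40.0                  # air density [kg/m^3], rotor radius [m]

def cp(lmbda, beta):
    """Hypothetical power-coefficient curve Cp(tip-speed ratio, pitch)."""
    return max(0.0, 0.45 * math.sin(math.pi * (lmbda - 2.0) / 11.0) - 0.01 * beta)

def aero_torque(v, omega, beta):
    """Static model: Q = P / omega with P = 0.5*rho*pi*R^2*Cp(lambda, beta)*v^3."""
    lmbda = omega * R / v             # tip-speed ratio
    return 0.5 * RHO * math.pi * R**2 * cp(lmbda, beta) * v**3 / omega

def estimate_wind_speed(q_hat, omega, beta, lo=3.0, hi=25.0):
    """Invert the static model by bisection: find v whose modelled torque
    matches the observer's torque estimate q_hat."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if aero_torque(mid, omega, beta) < q_hat:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Given the observer's torque estimate and the measured rotor speed and pitch, the returned v is the effective wind speed consistent with the static model.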
Estimation and valuation in accounting
Directory of Open Access Journals (Sweden)
Cicilia Ionescu
2014-03-01
Full Text Available The relationships of the enterprise with the external environment give rise to a range of informational needs. Satisfying those needs requires the production of coherent, comparable, relevant and reliable information included in the individual or consolidated financial statements. The International Financial Reporting Standards (IAS/IFRS) aim to ensure the comparability and relevance of accounting information, providing, among other things, details about the issue of accounting estimates and changes in accounting estimates. Valuation is a process used continually to assign values to the elements that are to be recognised in the financial statements. Most of the time, the values reflected in the books are clear: they are recorded in contracts with third parties, in supporting documents, etc. However, the uncertainties in which a reporting entity operates mean that, sometimes, the values assigned or attributable to some items of the financial statements must be determined by using estimates.
Integral Criticality Estimators in MCATK
Energy Technology Data Exchange (ETDEWEB)
Nolen, Steven Douglas [Los Alamos National Laboratory; Adams, Terry R. [Los Alamos National Laboratory; Sweezy, Jeremy Ed [Los Alamos National Laboratory
2016-06-14
The Monte Carlo Application ToolKit (MCATK) is a component-based software toolset for delivering customized particle transport solutions using the Monte Carlo method. Currently under development in the XCP Monte Carlo group at Los Alamos National Laboratory, the toolkit has the ability to estimate the k_eff and α eigenvalues for static geometries. This paper presents a description of the estimators and variance reduction techniques available in the toolkit and includes a preview of those slated for future releases. Along with the description of the underlying algorithms is a description of the available user inputs for controlling the iterations. The paper concludes with a comparison of the MCATK results with those provided by analytic solutions. The results match within expected statistical uncertainties and demonstrate MCATK’s usefulness in estimating these important quantities.
Order statistics & inference estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co
Methods for estimating the semivariogram
DEFF Research Database (Denmark)
Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle
2002-01-01
. In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions...
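The empirical (method-of-moments) semivariogram that the compared model-fitting approaches all start from can be sketched as follows; the coordinates, values, and distance bins in the test below are illustrative, not the Kattegat salinity data:

```python
import math

def empirical_semivariogram(coords, values, bins):
    """Method-of-moments estimator: for each distance bin (lo, hi), gamma_hat
    is the average of 0.5*(z_i - z_j)^2 over all point pairs whose
    separation distance falls inside the bin."""
    sums, counts = [0.0] * len(bins), [0] * len(bins)
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            for b, (lo, hi) in enumerate(bins):
                if lo <= d < hi:
                    sums[b] += 0.5 * (values[i] - values[j]) ** 2
                    counts[b] += 1
                    break
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]
```

A parametric model (spherical, exponential, etc.) would then be fitted to these binned estimates by least squares, or the parameters estimated directly by maximum likelihood, the two families of approaches the paper compares.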
Albedo estimation for scene segmentation
Energy Technology Data Exchange (ETDEWEB)
Lee, C H; Rosenfeld, A
1983-03-01
Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
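The compensation step can be sketched under a Lambertian shading assumption; the incidence angles are assumed to come from a separate surface-orientation estimate, and all pixel values below are illustrative:

```python
import math

def reflectivity_estimates(intensity, incidence_deg):
    """Undo Lambertian shading I = rho * cos(theta): divide each pixel's
    intensity by the cosine of its estimated incidence angle, so pixels on
    a curved surface of constant reflectivity map to the same value."""
    eps = 1e-3                         # guard against grazing angles
    return [i / max(math.cos(math.radians(t)), eps)
            for i, t in zip(intensity, incidence_deg)]

def threshold_segment(values, t):
    """Two-class histogram-style segmentation at threshold t."""
    return [1 if v >= t else 0 for v in values]
```

Two pixels of the same surface, one viewed head-on (intensity 0.5 at 0 degrees) and one on a steep part (0.25 at 60 degrees), yield identical reflectivity estimates and therefore land in the same segment.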
Estimation of strong ground motion
International Nuclear Information System (INIS)
Watabe, Makoto
1993-01-01
Fault model has been developed to estimate a strong ground motion in consideration of characteristics of seismic source and propagation path of seismic waves. There are two different approaches in the model. The first one is a theoretical approach, while the second approach is a semi-empirical approach. Though the latter is more practical than the former to be applied to the estimation of input motions, it needs at least the small-event records, the value of the seismic moment of the small event and the fault model of the large event
Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin
2011-01-01
Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
Multicollinearity and maximum entropy leuven estimator
Sudhanshu Mishra
2004-01-01
Multicollinearity is a serious problem in applied regression analysis. Q. Paris (2001) introduced the MEL estimator to resolve the multicollinearity problem. This paper improves the MEL estimator to the Modular MEL (MMEL) estimator and shows by Monte Carlo experiments that the MMEL estimator performs significantly better than the OLS and MEL estimators.
Unrecorded Alcohol Consumption: Quantitative Methods of Estimation
Razvodovsky, Y. E.
2010-01-01
unrecorded alcohol; methods of estimation. In this paper we focus on methods of estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimate of this level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.
International Nuclear Information System (INIS)
Zhang, W.; Zaehringer, M.; Ungar, K.; Hoffman, I.
2008-01-01
In this paper, the uncertainties of gamma-ray small peak analysis have been examined. As the intensity of a gamma-ray peak approaches its detection decision limit, derived parameters such as centroid channel energy, peak area, peak area uncertainty, baseline determination, and peak significance are statistically sensitive. The intercomparison exercise organized by the CTBTO provided an excellent opportunity for this to be studied. Near background levels, the false-positive and false-negative peak identification frequencies in artificial test spectra have been compared to statistically predictable limiting values. In addition, naturally occurring radon progeny were used to compare observed variance against nominal uncertainties. The results indicate that the applied fit algorithms do not always represent the best estimator. Understanding the statistically predicted peak-finding limit is important for data evaluation and analysis assessment. Furthermore, these results are useful to optimize analytical procedures to achieve the best results.
Collider Scaling and Cost Estimation
International Nuclear Information System (INIS)
Palmer, R.B.
1986-01-01
This paper deals with collider cost and scaling. The main points of the discussion are the following ones: 1) scaling laws and cost estimation: accelerating gradient requirements, total stored RF energy considerations, peak power consideration, average power consumption; 2) cost optimization; 3) Bremsstrahlung considerations; 4) Focusing optics: conventional, laser focusing or super disruption. 13 refs
Helicopter Toy and Lift Estimation
Shakerin, Said
2013-01-01
A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight. (Contains 1 figure.)
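The classroom estimate can be reproduced numerically with the lift formula; every number below is a made-up placeholder rather than a measurement of the actual toy:

```python
RHO = 1.225        # air density [kg/m^3]
AREA = 0.012       # assumed rotor disc area [m^2]
CL = 1.0           # assumed effective lift coefficient
V = 5.0            # assumed effective blade airspeed [m/s]
MASS = 0.010       # assumed toy mass [kg] (10 g)

lift = 0.5 * RHO * V**2 * AREA * CL    # L = 1/2 * rho * v^2 * A * C_L
weight = MASS * 9.81                   # W = m * g
print(f"lift {lift:.3f} N vs weight {weight:.3f} N -> flies: {lift > weight}")
```

With these placeholder numbers the computed lift exceeds the toy's weight, which is the kind of sanity check the exercise asks students to perform with their own measurements.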
Estimation of potential uranium resources
International Nuclear Information System (INIS)
Curry, D.L.
1977-09-01
Potential estimates, like reserves, are limited by the information on hand at the time and are not intended to indicate the ultimate resources. Potential estimates are based on geologic judgement, so their reliability is dependent on the quality and extent of geologic knowledge. Reliability differs for each of the three potential resource classes. It is greatest for probable potential resources because of the greater knowledge base resulting from the advanced stage of exploration and development in established producing districts where most of the resources in this class are located. Reliability is least for speculative potential resources because no significant deposits are known, and favorability is inferred from limited geologic data. Estimates of potential resources are revised as new geologic concepts are postulated, as new types of uranium ore bodies are discovered, and as improved geophysical and geochemical techniques are developed and applied. Advances in technology that permit the exploitation of deep or low-grade deposits, or the processing of ores of previously uneconomic metallurgical types, also will affect the estimates
An Improved Cluster Richness Estimator
Energy Technology Data Exchange (ETDEWEB)
Rozo, Eduardo; /Ohio State U.; Rykoff, Eli S.; /UC, Santa Barbara; Koester, Benjamin P.; /Chicago U. /KICP, Chicago; McKay, Timothy; /Michigan U.; Hao, Jiangang; /Michigan U.; Evrard, August; /Michigan U.; Wechsler, Risa H.; /SLAC; Hansen, Sarah; /Chicago U. /KICP, Chicago; Sheldon, Erin; /New York U.; Johnston, David; /Houston U.; Becker, Matthew R.; /Chicago U. /KICP, Chicago; Annis, James T.; /Fermilab; Bleem, Lindsey; /Chicago U.; Scranton, Ryan; /Pittsburgh U.
2009-08-03
Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²(ln L_X) = (0.86 ± 0.02)² to σ²(ln L_X) = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.
Estimation of Bridge Reliability Distributions
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
Estimation of Motion Vector Fields
DEFF Research Database (Denmark)
Larsen, Rasmus
1993-01-01
This paper presents an approach to the estimation of 2-D motion vector fields from time varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generate sample...... fields by means of stochastic relaxation implemented via the Gibbs sampler....
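The MAP-by-annealing idea can be sketched on a 1-D toy version of the model; a real motion field would use 2-D vector labels on an image grid with a coupled line process, and the energy, labels, and cooling schedule below are illustrative:

```python
import math, random

def anneal_field(obs, labels, lam=0.5, t0=1.0, cool=0.998, steps=4000, seed=0):
    """Single-site Gibbs updates under a geometric cooling schedule for a
    1-D piecewise-smooth field with energy
        E(x) = sum_i (x_i - obs_i)^2 + lam * sum_i |x_i - x_{i+1}|."""
    rng = random.Random(seed)
    x = [rng.choice(labels) for _ in obs]
    for s in range(steps):
        t = t0 * cool ** s
        i = rng.randrange(len(x))
        def local_energy(v):
            e = (v - obs[i]) ** 2
            if i > 0:
                e += lam * abs(v - x[i - 1])
            if i < len(x) - 1:
                e += lam * abs(v - x[i + 1])
            return e
        es = [local_energy(v) for v in labels]
        m = min(es)                                  # shift avoids exp underflow
        w = [math.exp(-(e - m) / t) for e in es]
        r, acc = rng.random() * sum(w), 0.0          # Gibbs: sample conditional
        for v, wv in zip(labels, w):
            acc += wv
            if r <= acc:
                x[i] = v
                break
    return x
```

As the temperature decreases, the sampler concentrates on low-energy configurations and the final state approximates the maximum a posteriori field.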
Multispacecraft current estimates at swarm
DEFF Research Database (Denmark)
Dunlop, M. W.; Yang, Y.-Y.; Yang, J.-Y.
2015-01-01
During the first several months of the three-spacecraft Swarm mission all three spacecraft came repeatedly into close alignment, providing an ideal opportunity for validating the proposed dual-spacecraft method for estimating current density from the Swarm magnetic field data. Two of the Swarm
Estimating Swedish biomass energy supply
International Nuclear Information System (INIS)
Johansson, J.; Lundqvist, U.
1999-01-01
Biomass is suggested to supply an increasing amount of energy in Sweden. There have been several studies estimating the potential supply of biomass energy, including that of the Swedish Energy Commission in 1995. The Energy Commission based its estimates of biomass supply on five other analyses which presented a wide variation in estimated future supply, in large part due to differing assumptions regarding important factors. In this paper, these studies are assessed, and the estimated potential biomass energy supplies are discussed with regard to prices, technical progress and energy policy. The supply of logging residues depends on the demand for wood products and is limited by ecological, technological, and economic restrictions. The supply of stemwood from early thinning for energy and of straw from cereal and oil seed production is mainly dependent upon economic considerations. One major factor for the supply of willow and reed canary grass is the size of arable land projected not to be needed for food and fodder production. Future supply of biomass energy depends on energy prices and technical progress, both of which are driven by energy policy priorities. Biomass energy has to compete with other energy sources as well as with alternative uses of biomass such as forest products and food production. Technical progress may decrease the costs of biomass energy and thus increase the competitiveness. Economic instruments, including carbon taxes and subsidies, and allocation of research and development resources, are driven by energy policy goals and can change the competitiveness of biomass energy
Estimates of wildland fire emissions
Yongqiang Liu; John J. Qu; Wanting Wang; Xianjun Hao
2013-01-01
Wildland fire emissions can significantly affect regional and global air quality, radiation, climate, and the carbon cycle. A fundamental and yet challenging prerequisite to understanding the environmental effects is to accurately estimate fire emissions. This chapter describes and analyzes fire emission calculations. Various techniques (field measurements, empirical...
State Estimation for Tensegrity Robots
Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas
2016-01-01
Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
Fuel Estimation Using Dynamic Response
National Research Council Canada - National Science Library
Hines, Michael S
2007-01-01
...'s simulated satellite (SimSAT) to known control inputs. With an iterative process, the moment of inertia of SimSAT about the yaw axis was estimated by matching a model of SimSAT to the measured angular rates...
Empirical estimates of the NAIRU
DEFF Research Database (Denmark)
Madsen, Jakob Brøchner
2005-01-01
equations. In this paper it is shown that a high proportion of the constant term is a statistical artefact and suggests a new method which yields approximately unbiased estimates of the time-invariant NAIRU. Using data for OECD countries it is shown that the constant-term correction lowers the unadjusted...
Online Wavelet Complementary velocity Estimator.
Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin
2018-02-01
In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high and low resolutions of the data. The proposed complementary estimator combines these two resolutions of velocity, acquired from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, by considering a fixed moving-horizon window as input to the wavelet filter. Because wavelet filters are used, it can be implemented as a parallel procedure. By this method the numerical velocity is estimated without the high noise of differentiators or the drifting bias of integration, and with less delay, which is suitable for active vibration control in high-precision mechatronic systems by Direct Velocity Feedback (DVF) methods. This method allows us to make velocity sensors with fewer mechanically moving parts, which makes them suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters with respect to stability and delay, and benchmarked them by integrating the velocity over a long time to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
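The differentiate-and-integrate fusion idea can be illustrated with a first-order complementary filter, a much simpler stand-in for the paper's wavelet filter banks; the time constant and signals below are arbitrary illustrations:

```python
def complementary_velocity(pos, acc, dt, tau=0.1):
    """Fuse velocity from differentiated position (noisy but drift-free)
    with velocity from integrated acceleration (smooth but drifting).
    tau sets the crossover: slow content comes from the position branch,
    fast content from the acceleration branch."""
    alpha = tau / (tau + dt)          # gain on the integrated-acceleration branch
    v, out, prev_p = 0.0, [], pos[0]
    for p, a in zip(pos, acc):
        v_diff = (p - prev_p) / dt    # finite-difference velocity
        v = alpha * (v + a * dt) + (1 - alpha) * v_diff
        prev_p = p
        out.append(v)
    return out
```

On clean constant-acceleration signals the fused estimate tracks the true velocity with only a small half-sample lag; the benefit over plain differentiation appears when the position channel is noisy and over plain integration when the accelerometer has bias.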
Load Estimation from Modal Parameters
DEFF Research Database (Denmark)
Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández
2007-01-01
In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF m...
Gini estimation under infinite variance
A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)
2018-01-01
textabstractWe study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
Software Cost-Estimation Model
Tausworthe, R. C.
1985-01-01
The software cost estimation model SOFTCOST provides an automated resource and schedule model for software development. It combines several cost models found in the open literature into one comprehensive set of algorithms, and compensates for nearly fifty implementation factors relating to the size of the task, the inherited baseline, the organizational and system environment, and the difficulty of the task.
Correlation Dimension Estimation for Classification
Czech Academy of Sciences Publication Activity Database
Jiřina, Marcel; Jiřina jr., M.
2006-01-01
Roč. 1, č. 3 (2006), s. 547-557 ISSN 1895-8648 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : correlation dimension * probability density estimation * classification * UCI MLR Subject RIV: BA - General Mathematics
Molecular pathology and age estimation.
Meissner, Christoph; Ritz-Timme, Stefanie
2010-12-15
Over the course of our lifetime a stochastic process leads to gradual alterations of biomolecules on the molecular level, a process that is called ageing. Important changes are observed on the DNA level as well as on the protein level and are the cause and/or consequence of our 'molecular clock', influenced by genetic as well as environmental parameters. These alterations on the molecular level may aid forensic medicine in estimating the age of a living person, a dead body or even skeletal remains for identification purposes. Four such important alterations have become the focus of molecular age estimation in the forensic community over the last two decades. The age-dependent accumulation of the 4977bp deletion of mitochondrial DNA and the attrition of telomeres along with ageing are two important processes at the DNA level. Among a variety of protein alterations, the racemisation of aspartic acid and advanced glycation end products have already been tested for forensic applications. At the moment the racemisation of aspartic acid represents the pinnacle of molecular age estimation for three reasons: an excellent standardization of sampling and methods, an evaluation of different variables in many published studies, and the highest accuracy of results. The three other mentioned alterations often lack standardized procedures, and published data are sparse and often have the character of pilot studies. Nevertheless, it is important to evaluate molecular methods for their suitability in forensic age estimation, because supplementary methods will help to extend and refine the accuracy and reliability of such estimates. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
23 CFR 635.115 - Agreement estimate.
2010-04-01
... CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.115 Agreement estimate. (a) Following the award of contract, an agreement estimate based on the contract unit prices and estimated quantities shall be...
On semiautomatic estimation of surface area
DEFF Research Database (Denmark)
Dvorak, J.; Jensen, Eva B. Vedel
2013-01-01
If the segmentation is correct the estimate is computed automatically, otherwise the expert performs the necessary measurements manually. In case of convex particles we suggest to base the semiautomatic estimation on the so-called flower estimator, a new local stereological estimator of particle surface area. For convex particles, the estimator is equal to four times the area of the support set (flower set) of the particle transect. We study the statistical properties of the flower estimator and compare its performance to that of two discretizations of the flower estimator, namely the pivotal estimator and the surfactor. For ellipsoidal particles, it is shown that the flower estimator is equal to the pivotal estimator based on support function measurements along four perpendicular rays. This result makes the pivotal estimator a powerful approximation to the flower estimator. In a simulation study of prolate...
Estimating sediment discharge: Appendix D
Gray, John R.; Simões, Francisco J. M.
2008-01-01
Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with
Estimating Foreign Exchange Reserve Adequacy
Directory of Open Access Journals (Sweden)
Abdul Hakim
2013-04-01
Accumulating foreign exchange reserves, despite their cost and their impacts on other macroeconomic variables, provides some benefits. This paper models such foreign exchange reserves. To measure the adequacy of foreign exchange reserves for imports, it uses the total reserves-to-import ratio (TRM). The chosen independent variables are gross domestic product growth, exchange rates, opportunity cost, and a dummy variable separating the pre- and post-1997 Asian financial crisis periods. To estimate the risky TRM value, this paper uses conditional Value-at-Risk (VaR), with the help of the Glosten-Jagannathan-Runkle (GJR) model to estimate the conditional volatility. The results suggest that all independent variables significantly influence TRM. They also suggest that both short- and long-run volatilities are evident, with additional evidence of asymmetric effects of negative and positive past shocks. The VaR values, calculated assuming both normal and t distributions, provide similar results, namely violations in 2005 and 2008.
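The GJR volatility recursion behind a conditional VaR of this kind can be sketched as follows. The parameter values, the initialization, and the normal 95% quantile are hypothetical placeholders, not estimates from the paper's TRM data:

```python
import numpy as np

def gjr_var(returns, omega, alpha, gamma, beta, z=1.645):
    """One-step-ahead 95% VaR from a GJR(1,1) volatility recursion:
    sigma2_t = omega + (alpha + gamma*I[r<0]) * r_{t-1}^2 + beta * sigma2_{t-1}.
    Negative past shocks receive the extra loading gamma (leverage effect)."""
    var_t = np.var(returns)          # initialize at the sample variance
    for r in returns:
        leverage = gamma if r < 0 else 0.0
        var_t = omega + (alpha + leverage) * r**2 + beta * var_t
    return z * np.sqrt(var_t)        # VaR reported as a positive loss quantile
```

A violation is then simply an observed loss exceeding the reported VaR; counting violations per year reproduces the kind of backtest summarized in the abstract.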
Organ volume estimation using SPECT
Zaidi, H
1996-01-01
Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of 1) fixed thresholding, 2) automatic thresholding, 3) attenuation, 4) scatter, and 5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are perform...
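The voxel-counting step with a fixed threshold can be illustrated on a synthetic phantom. The grid size, sphere radius, and threshold fraction below are assumptions for illustration, not the study's acquisition or correction parameters:

```python
import numpy as np

def threshold_volume(image, threshold_fraction, voxel_ml):
    """Count voxels above a fixed fraction of the image maximum and
    scale by the volume of one voxel (fixed-threshold segmentation)."""
    cutoff = threshold_fraction * image.max()
    return np.count_nonzero(image > cutoff) * voxel_ml

# Synthetic spherical "thyroid" phantom: radius 10 voxels in a 64^3 grid
z, y, x = np.mgrid[0:64, 0:64, 0:64]
inside = (x - 32)**2 + (y - 32)**2 + (z - 32)**2 < 100
phantom = np.where(inside, 100.0, 0.0)

vol = threshold_volume(phantom, 0.5, voxel_ml=1.0)  # close to (4/3)*pi*10^3
```

On real SPECT data the appropriate threshold depends on attenuation, scatter, and the reconstruction filter, which is precisely why the study compares fixed and automatic (gray-level histogram) thresholding.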
Comments on mutagenesis risk estimation
International Nuclear Information System (INIS)
Russell, W.L.
1976-01-01
Several hypotheses and concepts have tended to oversimplify the problem of mutagenesis and can be misleading when used for genetic risk estimation. These include: the hypothesis that radiation-induced mutation frequency depends primarily on the DNA content per haploid genome, the extension of this concept to chemical mutagenesis, the view that, since DNA is DNA, mutational effects can be expected to be qualitatively similar in all organisms, the REC unit, and the view that mutation rates from chronic irradiation can be theoretically and accurately predicted from acute irradiation data. Therefore, direct determination of frequencies of transmitted mutations in mammals continues to be important for risk estimation, and the specific-locus method in mice is shown to be not as expensive as is commonly supposed for many of the chemical testing requirements
Bayesian estimation in homodyne interferometry
International Nuclear Information System (INIS)
Olivares, Stefano; Paris, Matteo G A
2009-01-01
We address phase-shift estimation by means of a squeezed vacuum probe and homodyne detection. We analyse the Bayesian estimator, which is known to asymptotically saturate the classical Cramer-Rao bound to the variance, and discuss its convergence by looking at the a posteriori distribution as the number of measurements increases. We also suggest two feasible adaptive methods, acting on the squeezing parameter and/or the homodyne local oscillator phase, which allow us to optimize homodyne detection and approach the ultimate bound to precision imposed by the quantum Cramer-Rao theorem. The performances of our two-step methods are investigated by means of Monte Carlo simulated experiments with a small number of homodyne data, thus giving a quantitative meaning to the notion of asymptotic optimality.
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Cost Estimates and Investment Decisions
International Nuclear Information System (INIS)
Emhjellen, Kjetil; Emhjellen Magne; Osmundsen, Petter
2001-08-01
When evaluating new investment projects, oil companies traditionally use the discounted cashflow method. This method requires expected cashflows in the numerator and a risk adjusted required rate of return in the denominator in order to calculate net present value. The capital expenditure (CAPEX) of a project is one of the major cashflows used to calculate net present value. Usually the CAPEX is given by a single cost figure, with some indication of its probability distribution. In the oil industry and many other industries, it is common practice to report a CAPEX that is the estimated 50/50 (median) CAPEX instead of the estimated expected (expected value) CAPEX. In this article we demonstrate how the practice of using a 50/50 (median) CAPEX, when the cost distributions are asymmetric, causes project valuation errors and therefore may lead to wrong investment decisions with acceptance of projects that have negative net present values. (author)
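The median-versus-expected CAPEX point can be made concrete with a right-skewed (lognormal) cost distribution, for which the median lies below the mean. All figures below are hypothetical:

```python
import numpy as np

# Hypothetical right-skewed CAPEX distribution (lognormal, in $MM)
mu, sigma = np.log(100.0), 0.4
median_capex = np.exp(mu)                    # 50/50 estimate: 100.0
expected_capex = np.exp(mu + sigma**2 / 2)   # expected value: ~108.3

revenue_pv = 105.0                           # PV of expected net revenues
npv_with_median = revenue_pv - median_capex  # positive: project looks acceptable
npv_with_mean = revenue_pv - expected_capex  # negative: actually value-destroying
```

The sign flip between the two NPV figures is exactly the valuation error the article warns about: with asymmetric cost distributions, reporting the 50/50 CAPEX understates the expected cash outflow.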
Location Estimation using Delayed Measurements
DEFF Research Database (Denmark)
Bak, Martin; Larsen, Thomas Dall; Nørgård, Peter Magnus
1998-01-01
When combining data from various sensors it is vital to acknowledge possible measurement delays. Furthermore, the sensor fusion algorithm, often a Kalman filter, should be modified in order to handle the delay. The paper examines different possibilities for handling delays and applies a new technique to a sensor fusion system for estimating the location of an autonomous guided vehicle. The system fuses encoder and vision measurements in an extended Kalman filter. Results from experiments in a real environment are reported...
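One simple way to modify a Kalman filter for a measurement with a known delay, rewinding to the stored state at the measurement time, updating there, and re-propagating forward, can be sketched on a scalar random walk. This is an illustrative toy, not the paper's encoder/vision extended Kalman filter:

```python
class DelayedKalman:
    """Scalar random-walk Kalman filter that handles a measurement
    known to be `delay` steps old by rewinding to the stored state at
    the measurement time, updating there, and re-propagating."""

    def __init__(self, q, r, x0=0.0, p0=1.0):
        self.q, self.r = q, r            # process and measurement noise variances
        self.hist = [(x0, p0)]           # (state, covariance) stored per step

    def predict(self):
        x, p = self.hist[-1]
        self.hist.append((x, p + self.q))

    def update_delayed(self, z, delay):
        # Rewind to the filter state that was valid when z was taken
        k = len(self.hist) - 1 - delay
        x, p = self.hist[k]
        gain = p / (p + self.r)
        x, p = x + gain * (z - x), (1 - gain) * p
        self.hist[k] = (x, p)
        # Re-propagate the correction forward to the present time step
        for i in range(k + 1, len(self.hist)):
            self.hist[i] = (x, p + self.q)
            x, p = self.hist[i]

    @property
    def state(self):
        return self.hist[-1][0]
```

Storing a short history of states and covariances trades memory for correctness; for nonlinear models the re-propagation would rerun the EKF prediction step instead of the simple random-walk recursion used here.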
Prior information in structure estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav; Nedoma, Petr; Khailova, Natalia; Pavelková, Lenka
2003-01-01
Roč. 150, č. 6 (2003), s. 643-653 ISSN 1350-2379 R&D Projects: GA AV ČR IBS1075102; GA AV ČR IBS1075351; GA ČR GA102/03/0049 Institutional research plan: CEZ:AV0Z1075907 Keywords : prior knowledge * structure estimation * autoregressive models Subject RIV: BC - Control Systems Theory Impact factor: 0.745, year: 2003 http://library.utia.cas.cz/separaty/historie/karny-0411258.pdf
Radiation in space: risk estimates
International Nuclear Information System (INIS)
Fry, R.J.M.
2002-01-01
The complexity of radiation environments in space makes estimation of risks more difficult than for the protection of terrestrial population. In deep space the duration of the mission, position of the solar cycle, number and size of solar particle events (SPE) and the spacecraft shielding are the major determinants of risk. In low-earth orbit missions there are the added factors of altitude and orbital inclination. Different radiation qualities such as protons and heavy ions and secondary radiations inside the spacecraft such as neutrons of various energies, have to be considered. Radiation dose rates in space are low except for short periods during very large SPEs. Risk estimation for space activities is based on the human experience of exposure to gamma rays and to a lesser extent X rays. The doses of protons, heavy ions and neutrons are adjusted to take into account the relative biological effectiveness (RBE) of the different radiation types and thus derive equivalent doses. RBE values and factors to adjust for the effect of dose rate have to be obtained from experimental data. The influence of age and gender on the cancer risk is estimated from the data from atomic bomb survivors. Because of the large number of variables the uncertainties in the probability of the effects are large. Information needed to improve the risk estimates includes: (1) risk of cancer induction by protons, heavy ions and neutrons; (2) influence of dose rate and protraction, particularly on potential tissue effects such as reduced fertility and cataracts; and (3) possible effects of heavy ions on the central nervous system. Risk cannot be eliminated and thus there must be a consensus on what level of risk is acceptable. (author)
Properties of estimated characteristic roots
Bent Nielsen; Heino Bohn Nielsen
2008-01-01
Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis the problems appear when multiple roots are present, as this implies a non-differentiability so the δ-method does not apply, convergence rates are slow, and the asymptotic distribution is non-normal. In finite samples ...
Recent estimates of capital flight
Claessens, Stijn; Naude, David
1993-01-01
Researchers and policymakers have in recent years paid considerable attention to the phenomenon of capital flight. Researchers have focused on four questions: What concept should be used to measure capital flight? What figure for capital flight will emerge, using this measure? Can the occurrence and magnitude of capital flight be explained by certain (economic) variables? What policy changes can be useful to reverse capital flight? The authors focus strictly on presenting estimates of capital...
Effort Estimation in BPMS Migration
Drews, Christopher; Lantow, Birger
2018-01-01
Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation re...
Reactor core performance estimating device
International Nuclear Information System (INIS)
Tanabe, Akira; Yamamoto, Toru; Shinpuku, Kimihiro; Chuzen, Takuji; Nishide, Fusayo.
1995-01-01
The present invention autonomously simplifies a neural net model, thereby making it possible to conveniently estimate various quantities representing reactor core performance by a simple calculation in a short period of time. Namely, the reactor core performance estimation device comprises a nerve circuit net which divides the reactor core into a large number of spatial regions, receives various physical quantities for each region as input signals to the input nerve cells, and outputs estimated values of each quantity representing reactor core performance as output signals of the output nerve cells. In this case, the nerve circuit net (1) has the structure of an extended multi-layered model with direct coupling from an upstream layer to each of the downstream layers, (2) has a forgetting constant q in the correction equation for a joined load value ω using an inverse error propagation method, (3) learns various quantities representing reactor core performance determined using the physical models as teacher signals, (4) sets the joined load value ω to '0' when it decreases to less than a predetermined value during the learning described above, and (5) eliminates elements of the nerve circuit net for which all joined load values have decreased to 0. As a result, the neural net model autonomously simplifies itself. (I.S.)
Contact Estimation in Robot Interaction
Directory of Open Access Journals (Sweden)
Filippo D'Ippolito
2014-07-01
In the paper, safety issues are examined in a scenario in which a robot manipulator and a human perform the same task in the same workspace. During task execution, the human should be able to physically interact with the robot, and in this case an estimation algorithm for both the interaction forces and the contact point is proposed in order to guarantee safety conditions. The method, starting from residual joint torque estimation, allows both direct and adaptive computation of the contact point and force, based on a principle of equivalence of the contact forces. At the same time, all unintended contacts must be avoided, and a suitable post-collision strategy is considered to move the robot away from the collision area or else to reduce impact effects. Experimental tests have demonstrated the practical applicability of both the post-impact strategy and the estimation algorithms; furthermore, the experiments demonstrate the different behaviour resulting from adaptation of the contact point as opposed to direct calculation.
Statistical estimation of process holdup
International Nuclear Information System (INIS)
Harris, S.P.
1988-01-01
Estimates of potential process holdup and their random and systematic error variances are derived to improve the inventory difference (ID) estimate and its associated measure of uncertainty for a new process at the Savannah River Plant. Since the process is in a start-up phase, data have not yet accumulated for statistical modelling. The material produced in the facility will be a very pure, highly enriched 235U with very small isotopic variability. Therefore, data published in LANL's unclassified report on Estimation Methods for Process Holdup of Special Nuclear Materials were used as a starting point for the modelling process. LANL's data were gathered through a series of designed measurements of special nuclear material (SNM) holdup at two of their materials-processing facilities. They had also taken steps to improve the quality of the data through controlled, larger-scale experiments outside of LANL at highly enriched uranium processing facilities. The data they have accumulated are on an equipment-component basis. Our modelling has been restricted to the wet chemistry area. We have developed predictive models for each of our process components based on the LANL data. 43 figs
Abundance estimation and conservation biology
Nichols, J.D.; MacKenzie, D.I.
2004-01-01
Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al.,1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that our attention
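For closed populations, the simplest two-sample capture-recapture abundance estimator in this literature can be sketched as follows, using Chapman's bias-corrected form of the Lincoln-Petersen estimator; the session counts are made-up numbers:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator
    for a closed population: n1 animals marked in session 1, n2 caught
    in session 2, of which m2 were already marked."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical survey: 200 marked, 150 recaptured, 30 of them marked
n_hat = chapman_estimate(200, 150, 30)   # roughly 978 animals
```

The closure assumption this estimator relies on is exactly what the partially and completely open models (Darroch, 1959; Jolly, 1965; Seber, 1965) cited above were developed to relax.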
Estimating the Costs of Preventive Interventions
Foster, E. Michael; Porter, Michele M.; Ayers, Tim S.; Kaplan, Debra L.; Sandler, Irwin
2007-01-01
The goal of this article is to improve the practice and reporting of cost estimates of prevention programs. It reviews the steps in estimating the costs of an intervention and the principles that should guide estimation. The authors then review prior efforts to estimate intervention costs using a sample of well-known but diverse studies. Finally,…
Thermodynamics and life span estimation
International Nuclear Information System (INIS)
Kuddusi, Lütfullah
2015-01-01
In this study, the life span of people living in seven regions of Turkey is estimated by applying the first and second laws of thermodynamics to the human body. The people living in different regions of Turkey have different food habits. The first and second laws of thermodynamics are used to calculate the entropy generation rate per unit mass of a human due to these food habits. The lifetime entropy generation per unit mass of a human was previously found statistically. These two quantities, the lifetime entropy generation and the entropy generation rate, enable one to determine the life span of people living in the seven regions of Turkey with their different food habits. In order to estimate the life span, statistics from the Turkish Statistical Institute regarding the food habits of the people living in the seven regions of Turkey are used. The life spans of people who live in the Central Anatolia and Eastern Anatolia regions are the longest and shortest, respectively. Generally, the following inequality regarding the life span of people living in the seven regions of Turkey is found: Eastern Anatolia < Southeast Anatolia < Black Sea < Mediterranean < Marmara < Aegean < Central Anatolia. - Highlights: • The first and second laws of thermodynamics are applied to the human body. • The entropy generation of a human due to his food habits is determined. • The life span of Turks is estimated by using the entropy generation method. • Food habits of a human have an effect on his life span
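The estimation logic, dividing a fixed lifetime entropy-generation budget by a diet-dependent entropy generation rate, reduces to simple arithmetic. The numbers below are illustrative placeholders only, not the paper's values for the Turkish regions:

```python
# Illustrative figures (NOT from the paper): a fixed lifetime
# entropy-generation budget per unit body mass, divided by a region's
# diet-dependent entropy generation rate, gives an estimated life span.
lifetime_entropy = 11_000.0   # kJ/(kg K), accumulated over a lifetime
rate_region_a = 140.0         # kJ/(kg K year), lighter diet
rate_region_b = 150.0         # kJ/(kg K year), richer diet

lifespan_a = lifetime_entropy / rate_region_a   # ~78.6 years
lifespan_b = lifetime_entropy / rate_region_b   # ~73.3 years
```

The ordering of regions in the abstract follows directly from this ratio: the higher a region's diet-driven entropy generation rate, the sooner the fixed lifetime budget is exhausted.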
The estimation of genetic divergence
Holmquist, R.; Conroy, T.
1981-01-01
Consideration is given to the criticism of Nei and Tateno (1978) of the REH (random evolutionary hits) theory of genetic divergence in nucleic acids and proteins, and to their proposed alternative estimator of total fixed mutations designated X2. It is argued that the assumption of nonuniform amino acid or nucleotide substitution will necessarily increase REH estimates relative to those made for a model where each locus has an equal likelihood of fixing mutations, thus the resulting value will not be an overestimation. The relative values of X2 and measures calculated on the basis of the PAM and REH theories for the number of nucleotide substitutions necessary to explain a given number of observed amino acid differences between two homologous proteins are compared, and the smaller values of X2 are attributed to (1) a mathematical model based on the incorrect assumption that an entire structural gene is free to fix mutations and (2) the assumptions of different numbers of variable codons for the X2 and REH calculations. Results of a repeat of the computer simulations of Nei and Tateno are presented which, in contrast to the original results, confirm the REH theory. It is pointed out that while a negative correlation is observed between estimations of the fixation intensity per varion and the number of varions for a given pair of sequences, the correlation between the two fixation intensities and varion numbers of two different pairs of sequences need not be negative. Finally, REH theory is used to resolve a paradox concerning the high rate of covarion turnover and the nature of general function sites as permanent covarions.
Nonparametric e-Mixture Estimation.
Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru
2016-12-01
This study considers the common situation in data analysis when there are few observations of the distribution of interest or the target distribution, while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions-in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the m- and e-mixtures. The m-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the e-mixture is rarely used because of its difficulty of estimation, particularly for nonparametric models. The e-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for nonparametric modeling of the e-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.
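The contrast between the two mixtures can be sketched for two Gaussians: the m-mixture is the familiar convex combination of densities, while the e-mixture is a normalized weighted geometric mean, i.e. an average in the log-density domain. This is a standard information-geometry illustration, not the letter's nonparametric estimation algorithm:

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
p1, p2 = gaussian(x, -2.0, 1.0), gaussian(x, 2.0, 1.0)
w = 0.5

# m-mixture: convex combination of the densities (bimodal here)
m_mix = w * p1 + (1 - w) * p2

# e-mixture: normalized weighted geometric mean of the densities
# (stays unimodal here -- the maximum-entropy-flavoured average)
e_mix = p1**w * p2**(1 - w)
e_mix /= e_mix.sum() * dx
```

For these two components the m-mixture has two modes at roughly ±2, while the e-mixture collapses to a single Gaussian centered at 0, which illustrates why the e-mixture behaves as a well-tempered interpolant between the component distributions.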
Dose estimation by biological methods
International Nuclear Information System (INIS)
Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.
1997-01-01
Human beings are exposed to artificial radiation sources in mainly two ways: the first concerns occupationally exposed personnel (POE), and the second, persons requiring radiological treatment. A third, less common way is through accidents. In all these situations it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric chromosome analysis. The present work is part of the research toward validating the fluorescence in situ hybridization (FISH) technique, which allows the analysis of chromosome aberrations. (Author)
Stochastic estimation of electricity consumption
International Nuclear Information System (INIS)
Kapetanovic, I.; Konjic, T.; Zahirovic, Z.
1999-01-01
Electricity consumption forecasting is part of ensuring the stable functioning of the power system. It is very important for rational operation, for increasing the efficiency of control processes, and for development planning in all aspects of society. Forecasting on a scientific basis is a possible way to solve these problems. Among the different models that have been used in the area of forecasting, the stochastic approach, as part of quantitative modelling, takes a very important place in applications. ARIMA models and the Kalman filter, as stochastic estimators, have been treated together for electricity consumption forecasting. The main aim of this paper is therefore to present the stochastic forecasting approach using short time series. (author)
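A minimal sketch of one of the stochastic estimators mentioned, a scalar Kalman filter for a local-level (random-walk) model, illustrates one-step-ahead forecasting from a short series; the model form, noise variances, and the series below are illustrative choices, not the paper's:

```python
def kalman_local_level(y, q=1.0, r=4.0):
    """One-step-ahead forecasts from a local-level model:
    state x_t = x_{t-1} + w_t (variance q), observation y_t = x_t + v_t (variance r)."""
    x, p = y[0], 1.0          # initial state estimate and its variance
    forecasts = []
    for obs in y[1:]:
        # Predict: random-walk state, uncertainty inflated by q.
        p += q
        forecasts.append(x)   # forecast of the next observation
        # Update with the new observation.
        k = p / (p + r)       # Kalman gain
        x += k * (obs - x)
        p *= (1 - k)
    return forecasts

series = [100, 102, 101, 105, 107, 106, 110]   # illustrative consumption data
print(kalman_local_level(series))
```

Each forecast is a shrinkage of the running state estimate toward recent observations; the ratio q/r controls how quickly the filter tracks changes in the level.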
Size Estimates in Inverse Problems
Di Cristo, Michele
2014-01-06
Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem that is very useful in practical applications. When only a finite number of measurements is available, we try to recover some information about the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work, which can be expressed with the available boundary data.
Location Estimation of Mobile Devices
Directory of Open Access Journals (Sweden)
Kamil ŽIDEK
2009-06-01
Full Text Available This contribution describes a mathematical model (kinematics) of a mobile robot carriage. The model is fully parametric and is designed universally for a three- or four-wheeled carriage of any dimensions, under the following conditions: the back wheels are the driving wheels, and the front wheels set the angle of the robot's turn. The position of the front wheel gives the actual position of the robot, which is described by the coordinates x and y and by the angle α of the front wheel relative to the reference position. The main reason for implementing the model is indoor navigation, where an estimate of the robot's position is needed, especially after the robot turns. A further use is outdoor navigation, especially for refining GPS information.
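The front-wheel-steered kinematics described above can be sketched with the standard bicycle model; the wheelbase, speed, steering angle, and step size below are hypothetical, and the paper's exact parametrization may differ:

```python
import math

def step_pose(x, y, theta, v, alpha, L, dt):
    """Advance a bicycle-model pose one time step: reference point (x, y),
    heading theta, speed v, front-wheel steering angle alpha, wheelbase L."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(alpha) * dt   # turn rate induced by steering
    return x, y, theta

# Dead-reckon 100 small steps of a constant left turn.
x = y = theta = 0.0
for _ in range(100):
    x, y, theta = step_pose(x, y, theta, v=1.0, alpha=0.2, L=0.5, dt=0.05)
print(round(x, 3), round(y, 3), round(theta, 3))
```

Integrating this update between wheel-odometry readings gives exactly the kind of position estimate after a turn that the abstract motivates for indoor navigation.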
Estimation of the energy needs
International Nuclear Information System (INIS)
Ailleret
1955-01-01
The present report draws up a balance of present energy consumption and of the consumption estimated for the next twenty years. Present energy comes mainly from the consumption of coal, oil products and, essentially, hydroelectric energy. Market development stems essentially from the growth of industrial activity and from new applications that depend on the cost and distribution of electric energy. In this respect, atomic energy offers good industrial prospects as a complement to present energy resources in order to meet the new needs. (M.B.) [fr
Random Decrement Based FRF Estimation
DEFF Research Database (Denmark)
Brincker, Rune; Asmussen, J. C.
1997-01-01
to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...
Applied parameter estimation for chemical engineers
Englezos, Peter
2000-01-01
Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam
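The Gauss-Newton method for algebraic models covered in the book can be sketched for a simple two-parameter exponential model; the model, data, and starting point below are illustrative, and a production implementation would add step control and convergence checks:

```python
import math

def gauss_newton(xs, ys, a, b, iters=30):
    """Fit y = a*exp(b*x) by Gauss-Newton: linearize the residuals and
    solve the 2x2 normal equations (J^T J) d = J^T r at each iteration."""
    for _ in range(iters):
        # Jacobian of the model predictions with respect to (a, b).
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
        # Normal equations solved with Cramer's rule (2x2 system).
        s11 = sum(j1 * j1 for j1, _ in J)
        s12 = sum(j1 * j2 for j1, j2 in J)
        s22 = sum(j2 * j2 for _, j2 in J)
        g1 = sum(j1 * ri for (j1, _), ri in zip(J, r))
        g2 = sum(j2 * ri for (_, j2), ri in zip(J, r))
        det = s11 * s22 - s12 * s12
        da = (g1 * s22 - g2 * s12) / det
        db = (s11 * g2 - s12 * g1) / det
        a, b = a + da, b + db
    return a, b

# Noise-free data generated with a = 2, b = 0.5 is recovered from a rough start.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
print(gauss_newton(xs, ys, a=1.0, b=0.3))
```

The same structure (Jacobian, residuals, normal equations) carries over to the ODE and PDE model cases listed in the contents, with the predictions supplied by a numerical solver instead of a closed-form expression.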
Graph Sampling for Covariance Estimation
Chepuri, Sundeep Prabhakar
2017-04-25
In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.
Note on demographic estimates 1979.
1979-01-01
Based on UN projections, national projections, and the South Pacific Commission data, the ESCAP Population Division has compiled estimates of the 1979 population and demographic figures for the 38 member countries and associate members. The 1979 population is estimated at 2,400 million, 55% of the world total of 4,336 million. China comprises 39% of the region, India, 28%. China, India, Indonesia, Japan, Bangladesh, and Pakistan comprise 6 of the 10 largest countries in the world. China and India are growing at the rate of 1 million people per month. Between 1978 and 1979 Hong Kong experienced the highest rate of growth, 6.2%, Niue the lowest, 4.5%. Life expectancy at birth is 58.7 years in the ESCAP region, but is over 70 in Japan, Hong Kong, Australia, New Zealand, and Singapore. At 75.2 years, life expectancy in Japan is the highest in the world. By world standards, a high percentage of females aged 16-64 are economically active. More than half the women aged 15-64 are in the labor force in 10 of the ESCAP countries. The region is still 73% rural. By the end of the 20th century the population of the ESCAP region is projected at 3,272 million, a 36% increase over the 1979 total.
Practical global oceanic state estimation
Wunsch, Carl; Heimbach, Patrick
2007-06-01
The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data, satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS- 1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.
LOD estimation from DORIS observations
Stepanek, Petr; Filler, Vratislav; Buday, Michal; Hugentobler, Urs
2016-04-01
The difference between the astronomically determined duration of the day and 86400 seconds is called length of day (LOD). The LOD can also be understood as the daily rate of the difference between Universal Time UT1, based on the Earth's rotation, and International Atomic Time TAI. The LOD is estimated using various satellite geodesy techniques such as GNSS and SLR, while the absolute UT1-TAI difference is precisely determined by VLBI. Contrary to other IERS techniques, LOD estimation using DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) measurements did not achieve geodetic accuracy in the past, reaching only a precision at the level of several ms per day. However, recent experiments performed by the IDS (International DORIS Service) analysis centre at the Geodetic Observatory Pecny show a possibility to reach an accuracy around 0.1 ms per day when not adjusting the cross-track harmonics in the satellite orbit model. The paper presents the long-term LOD series determined from the DORIS solutions. The series are compared with C04 as the reference. Results are discussed in the context of the accuracy achieved with GNSS and SLR. Besides the multi-satellite DORIS solutions, the LOD series from the individual DORIS satellite solutions are also analysed.
CONSTRUCTING ACCOUNTING UNCERTAINTY ESTIMATES VARIABLE
Directory of Open Access Journals (Sweden)
Nino Serdarevic
2012-10-01
Full Text Available This paper presents research results on the financial reporting quality of firms in Bosnia and Herzegovina (BIH), utilizing the empirical relation between accounting conservatism, generated through critical accounting policy choices, and management's ability in making estimates, as well as the predictive power of domestic private-sector accounting. The primary research is conducted on firms' financial statements, constructing the CAPCBIH variable (Critical Accounting Policy Choices relevant in B&H), which represents a particular internal control system and risk assessment and which influences financial reporting positions in accordance with the specific business environment. I argue that firms' management possesses no relevant capacity to determine risks and the true consumption of economic benefits, leading to the creation of hidden reserves in inventories and accounts payable, and of latent losses for bad debt and asset revaluations. I draw special attention to recent IFRS convergences with US GAAP, especially in harmonizing with FAS 130, Reporting Comprehensive Income (in revised IAS 1), and FAS 157, Fair Value Measurement. The CAPCBIH variable performed very poorly, indicating a considerable failure to recognize environment specifics. Furthermore, I underline the importance of the revised ISAE and the reinforced role of auditors in assessing the relevance of management estimates.
International Nuclear Information System (INIS)
Pochin, E.E.
1980-01-01
In an increasing number of situations, it is becoming possible to obtain and compare numerical estimates of the biological risks involved in different alternative courses of action. In some cases these risks are similar in kind, as for example when the risk of inducing fatal cancer of the breast or stomach by x-ray screening of a population at risk is compared with the risk of such cancers proving fatal if not detected by a screening programme. In other cases in which it is important to attempt a comparison, the risks are dissimilar in type, as when the safety of occupations involving exposure to radiation or chemical carcinogens is compared with that of occupations in which the major risks are from lung disease or from accidental injury and death. Similar problems of assessing the relative severity of unlike effects occur in any attempt to compare the total biological harm associated with a given output of electricity derived from different primary fuel sources, with its contributions both of occupational and of public harm. In none of these instances is the numerical frequency of harmful effects alone an adequate measure of total biological detriment, nor is such detriment the only factor which should influence decisions. Estimations of risk appear important, however, since otherwise public health decisions are likely to be made on more arbitrary grounds, and public opinion will continue to be affected predominantly by the type rather than also by the size of risk. (author)
Variance function estimation for immunoassays
International Nuclear Information System (INIS)
Raab, G.M.; Thompson, R.; McKenzie, I.
1980-01-01
A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
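The variance-mean relationship can be fitted roughly as follows. The abstract does not give the program's exact parameterization or its modified likelihood method, so this sketch assumes a power-law form, var ≈ c·mean^θ, fitted by ordinary least squares on the log scale from within-set means and variances; all names and data are illustrative:

```python
import math

def fit_variance_function(replicate_sets):
    """Estimate (c, theta) in var ≈ c * mean**theta from many small sets
    of replicate responses, via least squares on log-variance vs log-mean."""
    pts = []
    for ys in replicate_sets:
        n = len(ys)
        m = sum(ys) / n
        v = sum((y - m) ** 2 for y in ys) / (n - 1)   # within-set variance
        if m > 0 and v > 0:
            pts.append((math.log(m), math.log(v)))
    # Ordinary least squares for log(v) = log(c) + theta * log(m).
    k = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    theta = (k * sxy - sx * sy) / (k * sxx - sx * sx)
    log_c = (sy - theta * sx) / k
    return math.exp(log_c), theta

# Synthetic duplicate measurements whose spread grows with the mean
# (theta near 2 corresponds to a roughly constant coefficient of variation).
sets = [[10.0, 10.8], [50.0, 46.0], [200.0, 216.0], [800.0, 736.0]]
print(fit_variance_function(sets))
```

The fitted function then supplies weights 1/var(mean) for weighted fitting of the dose-response curve, which is the use case the abstract describes.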
Information and crystal structure estimation
International Nuclear Information System (INIS)
Wilkins, S.W.; Commonwealth Scientific and Industrial Research Organization, Clayton; Varghese, J.N.; Steenstrup, S.
1984-01-01
The conceptual foundations of a general information-theoretic based approach to X-ray structure estimation are reexamined with a view to clarifying some of the subtleties inherent in the approach and to enhancing the scope of the method. More particularly, general reasons for choosing the minimum of the Shannon-Kullback measure for information as the criterion for inference are discussed, and it is shown that the minimum information (or maximum entropy) principle enters the present treatment of the structure estimation problem in at least two quite separate ways, and that three formally similar but conceptually quite different expressions for relative information appear at different points in the theory. One of these is the general Shannon-Kullback expression, while the second is a derived form pertaining only under the restrictive assumptions of the present stochastic model for allowed structures, and the third is a measure of the additional information involved in accepting a fluctuation relative to an arbitrary mean structure. (orig.)
Pierro, Marco; De Felice, Matteo; Maggioni, Enrico; Moser, David; Perotto, Alessandro; Spada, Francesco; Cornaro, Cristina
2017-04-01
The growing photovoltaic generation results in a stochastic variability of the electric demand that could compromise the stability of the grid and increase the amount of energy reserve and the energy imbalance cost. On a regional scale, solar power estimation and forecasting are becoming essential for Distribution System Operators, Transmission System Operators, energy traders, and aggregators of generation. Indeed, the estimation of regional PV power can be used for PV power supervision and real-time control of residual load. Mid-term PV power forecasts can be employed for transmission scheduling to reduce energy imbalance and the related cost of penalties, for residual load tracking, trading optimization, and secondary energy reserve assessment. In this context, a new upscaling method was developed and used for the estimation and mid-term forecast of the photovoltaic distributed generation in a small area in the north of Italy under the control of a local DSO. The method was based on spatial clustering of the PV fleet and on neural network models that take satellite or numerical weather prediction data (centered on cluster centroids) as input to estimate or predict the regional solar generation. It requires a low computational effort, and very little input information has to be provided by users. The power estimation model achieved a RMSE of 3% of installed capacity. The intra-day forecast (from 1 to 4 hours) obtained a RMSE of 5%-7%, while the one- and two-day forecasts achieved a RMSE of 7% and 7.5%, respectively. A model to estimate the forecast error and the prediction intervals was also developed. The photovoltaic production in the considered region provided 6.9% of the electric consumption in 2015. Since the PV penetration is very similar to the one observed at the national level (7.9%), this is a good case study to analyse the impact of PV generation on the electric grid and the effects of PV power forecast on transmission scheduling and on secondary reserve estimation. It appears that, already with 7% of PV
Xu, Wenbin; Dutta, Rishabh; Jonsson, Sigurjon
2015-02-03
A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7 cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2 km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.
Directory of Open Access Journals (Sweden)
Mani D R
2011-08-01
Full Text Available Abstract Background Clustering is a widely applicable pattern recognition method for discovering groups of similar observations in data. While there are a large variety of clustering algorithms, very few of these can enforce constraints on the variation of attributes for data points included in a given cluster. In particular, a clustering algorithm that can limit variation within a cluster according to that cluster's position (centroid location) can produce effective and optimal results in many important applications, ranging from clustering of silicon pixels or calorimeter cells in high-energy physics to label-free liquid chromatography based mass spectrometry (LC-MS) data analysis in proteomics and metabolomics. Results We present MEDEA (M-Estimator with DEterministic Annealing), a new M-estimator-based unsupervised algorithm that is designed to enforce position-specific constraints on variance during the clustering process. The utility of MEDEA is demonstrated by applying it to the problem of "peak matching"--identifying the common LC-MS peaks across multiple samples--in proteomic biomarker discovery. Using real-life datasets, we show that MEDEA not only outperforms current state-of-the-art model-based clustering methods, but also results in an implementation that is significantly more efficient, and hence applicable to much larger LC-MS data sets. Conclusions MEDEA is an effective and efficient solution to the problem of peak matching in label-free LC-MS data. The program implementing the MEDEA algorithm, including datasets, clustering results, and supplementary information is available from the author website at http://www.hephy.at/user/fru/medea/.
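The core idea of M-estimation in a clustering context, downweighting points far from a centroid so that outliers do not drag it away, can be sketched in one dimension. This is not the MEDEA algorithm itself (which adds deterministic annealing and position-specific variance constraints); the Huber threshold below is an arbitrary illustrative choice:

```python
def huber_centroid(values, c=1.0, iters=50):
    """Robust location estimate: iteratively reweighted mean with Huber
    weights, so points farther than c from the current centroid are
    downweighted rather than averaged in at full weight."""
    mu = sorted(values)[len(values) // 2]   # start from a median-like value
    for _ in range(iters):
        w = [1.0 if abs(v - mu) <= c else c / abs(v - mu) for v in values]
        mu = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0]   # five inliers, one gross outlier
print(round(huber_centroid(data), 2))
```

The plain mean of this data is 12.5, pulled far off by the outlier, while the M-estimate stays near the inlier cluster at about 10.2, which is why M-estimators suit peak matching in noisy LC-MS data.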
PHAZE, Parametric Hazard Function Estimation
International Nuclear Information System (INIS)
2002-01-01
1 - Description of program or function: Phaze performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogenous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time-period, or plant records covering a fixed time-period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). The final goals of the inference are a point estimate
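Two standard ingredients of this kind of inference, the maximum likelihood estimator of a constant hazard from time-censored data and the Laplace test for an increasing failure rate, can be sketched as follows; PHAZE's actual models and tests are richer (linear and Weibull hazards, confidence regions), and the failure times below are made up:

```python
import math

def constant_hazard_mle(failure_times, T):
    """MLE of a constant hazard from time-censored data: with n failures
    observed over a fixed window (0, T], the estimate is n / T."""
    return len(failure_times) / T

def laplace_trend_statistic(failure_times, T):
    """Laplace test statistic for an increasing hazard: large positive
    values indicate failures clustering late in the observation window;
    values within about +/-1.96 are consistent with a constant rate."""
    n = len(failure_times)
    return (sum(failure_times) / n - T / 2) / (T * math.sqrt(1.0 / (12 * n)))

times = [50.0, 210.0, 390.0, 610.0, 810.0, 950.0]   # hours, window T = 1000
print(constant_hazard_mle(times, 1000.0))            # failures per hour
print(round(laplace_trend_statistic(times, 1000.0), 3))
```

Under a homogeneous Poisson process the failure times, given n, are uniform on (0, T), which is what the Laplace statistic tests against; a significant positive value would motivate fitting one of the increasing-hazard models instead.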
Noise estimation of beam position monitors at RHIC
International Nuclear Information System (INIS)
Shen, X.; Bai, M.
2014-01-01
Beam position monitors (BPM) are used to record the average orbits and transverse turn-by-turn displacements of the beam centroid motion. The Relativistic Hadron Ion Collider (RHIC) has 160 BPMs for each plane in each of the Blue and Yellow rings: 72 dual-plane BPMs in the insertion regions (IR) and 176 single-plane modules in the arcs. Each BPM is able to acquire 1024 or 4096 consecutive turn-by-turn beam positions. Inevitably, there are broadband noisy signals in the turn-by-turn data due to BPM electronics as well as other sources. A detailed study of the BPM noise performance is critical for reliable optics measurement and beam dynamics analysis based on turn-by-turn data.
Spectral Unmixing Applied to Desert Soils for the Detection of Sub-Pixel Disturbances
2012-09-01
to use this information to produce a trafficability product providing the consumer with information that is helpful in navigating through areas of... The ring around the dome of the volcano is the result of a strombolian type of eruption (Sieh and Bursik, 1986; Sharp...
2012-09-01
sweep, which is towed behind the helicopter and produces a magnetic signature in order to force magnetic influence mines to detonate. The MH-60S...
Finite mixture models for sub-pixel coastal land cover classification
CSIR Research Space (South Africa)
Ritchie, Michaela C
2017-05-01
Full Text Available [Presentation slides] Finite Mixture Models for Sub-pixel Coastal Land Cover Classification. M. Ritchie, M. Lück-Vogel, P. Debba, V. Goodall. ISRSE-37, Tshwane, South Africa, 10 May 2017. Study area: Strand and Gordon's Bay on False Bay, South Africa, using WorldView-2 imagery. Land cover classes listed include .../Urban, Herbaceous Vegetation, Shadow, Sparse Vegetation, Water and Woody Vegetation; classifiers compared include Maximum Likelihood Classification (MLC), Gaussian Mixture Discriminant Analysis (GMDA) and t-distribution Mixture Discriminant Analysis.
Detection of Subpixel Submerged Mine-Like Targets in WorldView-2 Multispectral Imagery
2012-09-01
exploit the capabilities of the technology already in use, such as the WorldView-2. This is a complicated issue, as any capability has to be able to...
Quantifying sub-pixel urban impervious surface through fusion of optical and inSAR imagery
Yang, L.; Jiang, L.; Lin, H.; Liao, M.
2009-01-01
In this study, we explored the potential to improve urban impervious surface modeling and mapping with the synergistic use of optical and Interferometric Synthetic Aperture Radar (InSAR) imagery. We used a Classification and Regression Tree (CART)-based approach to test the feasibility and accuracy of quantifying Impervious Surface Percentage (ISP) using four spectral bands of SPOT 5 high-resolution geometric (HRG) imagery and three parameters derived from the European Remote Sensing (ERS)-2 Single Look Complex (SLC) SAR image pair. Validated by an independent ISP reference dataset derived from the 33 cm-resolution digital aerial photographs, results show that the addition of InSAR data reduced the ISP modeling error rate from 15.5% to 12.9% and increased the correlation coefficient from 0.71 to 0.77. Spatially, the improvement is especially noted in areas of vacant land and bare ground, which were incorrectly mapped as urban impervious surfaces when using the optical remote sensing data. In addition, the accuracy of ISP prediction using InSAR images alone is only marginally less than that obtained by using SPOT imagery. The finding indicates the potential of using InSAR data for frequent monitoring of urban settings located in cloud-prone areas.
Bayesian estimation methods in metrology
International Nuclear Information System (INIS)
Cox, M.G.; Forbes, A.B.; Harris, P.M.
2004-01-01
In metrology -- the science of measurement -- a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Organization for Standardization in 1995 published the Guide to the Expression of Uncertainty in Measurement, and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons in which an artefact is measured by a number of laboratories and their measurement results compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons are being undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches and the evaluation of key comparison data using Bayesian estimation methods
International Nuclear Information System (INIS)
Anon.
1982-01-01
The way nuclear power plants are built practically excludes accidents with serious consequences. This is ensured by careful selection of materials, control of fabrication and regular retesting, as well as by several independently operating safety systems. But the residual risk, a 'hypothetical' uncontrollable incident with catastrophic effects, is the main subject of the discussion on the peaceful utilization of nuclear power. This year's 'Annual Meeting on Nuclear Engineering' in Mannheim and the meeting 'Reactor Safety Research' in Cologne showed that risk studies so far have been too pessimistic. 'Best estimate' calculations suggest that core melt-down accidents only occur if almost all safety systems fail, that accidents proceed much more slowly, and that the release of radioactive fission products is several orders of magnitude lower than has been assumed until now. (orig.) [de
Neutron background estimates in GESA
Directory of Open Access Journals (Sweden)
Fernandes A.C.
2014-01-01
Full Text Available The SIMPLE project looks for nuclear recoil events generated by rare dark matter scattering interactions. Nuclear recoils are also produced by the more prevalent cosmogenic neutron interactions. While the rock overburden shields against (μ,n) neutrons to below 10^-8 cm^-2 s^-1, it itself contributes neutrons via radio-impurities. Additional shielding behaves similarly, both suppressing and contributing neutrons. We report on the Monte Carlo (MCNP) estimation of the on-detector neutron backgrounds for the SIMPLE experiment located in the GESA facility of the Laboratoire Souterrain à Bas Bruit, and its use in defining additional shielding for measurements, which has led to a reduction of the extrinsic neutron background to ~ 5 × 10^-3 evts/kgd. The calculated event rate induced by the neutron background is ~ 0.3 evts/kgd, with a dominant contribution from the detector container.
International Nuclear Information System (INIS)
Carlberg, R.G.
1990-01-01
The redshift dependence of the fraction of galaxies which are merging or strongly interacting is a steep function of Omega and depends on the ratio of the cutoff velocity for interactions to the pairwise velocity dispersion. For typical galaxies the merger rate is shown to vary as (1 + z)^m, where m is about 4.51 Omega^0.42, for Omega near 1 and a CDM-like cosmology. The index m has a relatively weak dependence on the maximum merger velocity, the mass of the galaxy, and the background cosmology, for small variations around a cosmology with a low redshift of galaxy formation, z of about 2. Estimates of m from optical and IRAS galaxies have found that m is about 3-4, but with very large uncertainties. If quasar evolution follows the evolution of galaxy merging and m for quasars is greater than 4, then Omega is greater than 0.8. 21 refs
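The quoted scaling can be sketched numerically; this is an illustrative reimplementation of the fitted power law (1 + z)^m with m ≈ 4.51 Omega^0.42, not the paper's code.

```python
def merger_rate_index(omega):
    """Power-law index m of the merger-rate evolution, using the
    quoted fit m = 4.51 * Omega**0.42 (valid for Omega near 1)."""
    return 4.51 * omega ** 0.42

def relative_merger_rate(z, omega):
    """Merger rate at redshift z relative to z = 0: (1 + z)**m."""
    return (1.0 + z) ** merger_rate_index(omega)

# For Omega = 1 the merger rate at z = 1 is 2**4.51, roughly 23 times
# the present-day rate.
for omega in (0.2, 1.0):
    print(f"Omega = {omega}: m = {merger_rate_index(omega):.2f}, "
          f"rate(z=1)/rate(z=0) = {relative_merger_rate(1.0, omega):.1f}")
```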
2007 Estimated International Energy Flows
Energy Technology Data Exchange (ETDEWEB)
Smith, C A; Belles, R D; Simon, A J
2011-03-10
An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (e.g. coal, petroleum, natural gas) through transformations such as electricity generation to end uses (e.g. residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.
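The two totals quoted above are the same quantity in different units; a quick conversion check, using the standard factor 1 quad (quadrillion BTU) ≈ 1.055 EJ:

```python
EJ_PER_QUAD = 1.055056  # exajoules per quadrillion BTU (standard factor)

def ej_to_quads(exajoules):
    """Convert exajoules to quadrillion BTU."""
    return exajoules / EJ_PER_QUAD

# 490 EJ comes out near 464 quad, consistent with the rounded
# "460 quadrillion BTU" in the abstract.
print(f"490 EJ ≈ {ej_to_quads(490):.0f} quad BTU")
```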
Data Handling and Parameter Estimation
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist
2016-01-01
Modelling is one of the key tools at the disposal of modern wastewater treatment professionals, researchers and engineers. It enables them to study and understand complex phenomena underlying the physical, chemical and biological performance of wastewater treatment plants at different temporal … engineers, and professionals. However, it is also expected that they will be useful both for graduate teaching as well as a stepping stone for academic researchers who wish to expand their theoretical interest in the subject. For the models selected to interpret the experimental data, this chapter uses available models from literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii) …
Model for traffic emissions estimation
Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.
A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.
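The two-component inventory described above can be sketched as follows; the function names and emission-factor values are illustrative assumptions, not the paper's parameters.

```python
def line_source_emission(length_km, vehicles_per_hour, emission_factor_g_per_vkm):
    """Hourly emission (g/h) of one main road treated as a line source
    with constant traffic parameters over the interval considered."""
    return length_km * vehicles_per_hour * emission_factor_g_per_vkm

def area_source_emission(population_density_per_km2, cell_area_km2, per_capita_factor_g):
    """Hourly emission (g/h) of a grid cell of distributed small traffic
    sources, taken proportional to the local population density."""
    return population_density_per_km2 * cell_area_km2 * per_capita_factor_g

# One 2.5 km artery at 1200 veh/h plus one 1 km^2 residential cell
# (all numbers illustrative).
total = (line_source_emission(2.5, 1200, 1.8)
         + area_source_emission(8000, 1.0, 0.05))
print(f"total emission: {total:.0f} g/h")
```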
Effort Estimation in BPMS Migration
Directory of Open Access Journals (Sweden)
Christopher Drews
2018-04-01
Usually, Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces pushing organizations to perform this step, e.g. maintenance costs of a legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation regarding the technical aspects of BPMS migration. The framework provides questions for BPMS comparison and an effort evaluation schema. The applicability of the framework is evaluated based on a simplified BPMS migration scenario.
Supplemental report on cost estimates
International Nuclear Information System (INIS)
1992-01-01
The Office of Management and Budget (OMB) and the U.S. Army Corps of Engineers have completed an analysis of the Department of Energy's (DOE) Fiscal Year (FY) 1993 budget request for its Environmental Restoration and Waste Management (ERWM) program. The results were presented to an interagency review group (IAG) of senior Administration officials for their consideration in the budget process. This analysis included evaluations of the underlying legal requirements and cost estimates on which the ERWM budget request was based. The major conclusions are contained in a separate report entitled "Interagency Review of the Department of Energy Environmental Restoration and Waste Management Program." This Corps supplemental report provides greater detail on the cost analysis.
Age Estimation in Forensic Sciences
Alkass, Kanar; Buchholz, Bruce A.; Ohtani, Susumu; Yamamoto, Toshiharu; Druid, Henrik; Spalding, Kirsty L.
2010-01-01
Age determination of unknown human bodies is important in the setting of a crime investigation or a mass disaster because the age at death, birth date, and year of death, as well as gender, can guide investigators to the correct identity among a large number of possible matches. Traditional morphological methods used by anthropologists to determine age are often imprecise, whereas chemical analysis of tooth dentin, such as aspartic acid racemization, has shown reproducible and more precise results. In this study, we analyzed teeth from Swedish individuals using both aspartic acid racemization and radiocarbon methodologies. The rationale behind using radiocarbon analysis is that aboveground testing of nuclear weapons during the cold war (1955–1963) caused an extreme increase in global levels of carbon-14 (14C), which has been carefully recorded over time. Forty-four teeth from 41 individuals were analyzed using aspartic acid racemization analysis of tooth crown dentin or radiocarbon analysis of enamel, and 10 of these were split and subjected to both radiocarbon and racemization analysis. Combined analysis showed that the two methods correlated well (R² = 0.66). Aspartic acid racemization also showed good precision, with an overall absolute error of 5.4 ± 4.2 years. Whereas radiocarbon analysis gives an estimated year of birth, racemization analysis indicates the chronological age of the individual at the time of death. We show how these methods in combination can also assist in the estimation of date of death of an unidentified victim. This strategy can be of significant assistance in forensic casework involving dead victim identification. PMID:19965905
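The way the two assays combine is simple arithmetic: bomb-pulse radiocarbon in enamel dates the year of birth, racemization in dentin gives the chronological age at death, and their sum estimates the year of death. A sketch with illustrative numbers (not the study's data), using the quoted overall absolute error of the racemization assay as the error band:

```python
def estimated_year_of_death(c14_birth_year, racemization_age_years,
                            racemization_error_years=5.4):
    """Combine a radiocarbon birth-year estimate with a racemization
    age-at-death estimate; returns (estimate, lower, upper)."""
    estimate = c14_birth_year + racemization_age_years
    return (estimate,
            estimate - racemization_error_years,
            estimate + racemization_error_years)

# Hypothetical case: enamel 14C places the birth year at 1965 and
# dentin racemization gives an age at death of 42 years.
est, lo, hi = estimated_year_of_death(1965, 42)
print(f"year of death ≈ {est:.0f} ({lo:.1f}-{hi:.1f})")
```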
Runoff estimation in residential areas
Directory of Open Access Journals (Sweden)
Meire Regina de Almeida Siqueira
2013-12-01
This study aimed to estimate the watershed runoff caused by extreme events that often result in the flooding of urban areas. The runoff of a residential area in the city of Guaratinguetá, São Paulo, Brazil was estimated using the Curve-Number method proposed by USDA-NRCS. The study also investigated current land use and land cover conditions, impermeable areas with pasture, and indications for the reforestation of those areas. Maps and satellite images of the Residential Riverside I Neighborhood were used to characterize the area. In addition to characterizing land use and land cover, the soil type infiltration capacity, the maximum local rainfall, and the type and quality of the drainage system were also investigated. The study showed that this neighborhood, developed in 1974, has an area of 792,700 m², a population of 1361 inhabitants, and a sloping area covered with degraded pasture (Guaratinguetá-Piagui Peak) located in front of the residential area. The residential area is located in a flat area near the Paraiba do Sul River, and has a poor drainage system with concrete pipes, mostly 0.60 m in diameter, with several openings that capture water and sediments from the adjacent sloping area. The Low Impact Development (LID) system appears to be a viable solution for this neighborhood drainage system. It can be concluded that the drainage system of the Guaratinguetá Riverside I Neighborhood has all of the conditions and characteristics that make it suitable for the implementation of a low impact urban drainage system. Reforestation of Guaratinguetá-Piagui Peak can reduce the basin's runoff by 50% and minimize flooding problems in the Beira Rio neighborhood.
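The Curve-Number method used in the study is the standard SCS-CN runoff equation, Q = (P - 0.2S)² / (P + 0.8S) with S = 25400/CN - 254 (metric units); a minimal sketch, with illustrative CN and rainfall values rather than the study's:

```python
def scs_runoff_mm(rainfall_mm, curve_number):
    """Direct runoff depth Q (mm) for a storm of depth P (mm) using
    the USDA-NRCS (SCS) Curve-Number method."""
    s = 25400.0 / curve_number - 254.0  # potential maximum retention (mm)
    ia = 0.2 * s                        # initial abstraction
    if rainfall_mm <= ia:
        return 0.0                      # all rainfall is abstracted
    return (rainfall_mm - ia) ** 2 / (rainfall_mm + 0.8 * s)

# An 80 mm storm on a largely impermeable residential catchment
# (CN = 85, illustrative) yields roughly 44 mm of direct runoff.
print(f"Q = {scs_runoff_mm(80.0, 85):.1f} mm")
```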