WorldWideScience

Sample records for subpixel centroid estimation

  1. Estimation of Subpixel Motion Using Bispectrum

    Directory of Open Access Journals (Sweden)

    El Mehdi Ismaili Aalaoui

    2008-01-01

    Full Text Available Motion estimation techniques are widely used in today's video processing systems. Frequently used techniques are frequency-domain motion estimation methods, most notably phase correlation (PC). If the image frames are corrupted by Gaussian noise, then cross-correlation and related techniques do not work well. In this paper, however, we have studied this topic from a different viewpoint. Our scheme is based on the bispectrum method for sub-pixel motion estimation of noisy image sequences. Experimental results show that our proposed method performs significantly better than the PC technique.
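    The phase correlation (PC) baseline mentioned above is standard enough to sketch; the following is a minimal illustration (not the paper's bispectrum method), assuming a pure global translation between frames and using a parabolic fit around the correlation peak for sub-pixel refinement.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Baseline phase correlation with parabolic sub-pixel peak refinement
    (illustrative sketch; assumes a pure translation between the frames)."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(axis):
        # fit a parabola through the peak and its two neighbours along `axis`
        n = corr.shape[axis]
        idx = list(peak)
        c0 = corr[tuple(idx)]
        idx[axis] = (peak[axis] - 1) % n; cm = corr[tuple(idx)]
        idx[axis] = (peak[axis] + 1) % n; cp = corr[tuple(idx)]
        denom = cm - 2.0 * c0 + cp
        delta = 0.0 if denom == 0 else 0.5 * (cm - cp) / denom
        shift = peak[axis] + delta
        return shift - n if shift > n / 2 else shift   # unwrap negative shifts

    return refine(0), refine(1)   # (row shift, column shift) in pixels
```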

  2. Bayesian centroid estimation for motif discovery.

    Science.gov (United States)

    Carvalho, Luis

    2013-01-01

    Biological sequences may contain patterns that signal important biomolecular functions; a classical example is regulation of gene expression by transcription factors that bind to specific patterns in genomic promoter regions. In motif discovery we are given a set of sequences that share a common motif and aim to identify not only the motif composition, but also the binding sites in each sequence of the set. We propose a new centroid estimator that arises from a refined and meaningful loss function for binding site inference. We discuss the main advantages of centroid estimation for motif discovery, including computational convenience, and how its principled derivation offers further insights about the posterior distribution of binding site configurations. We also illustrate, using simulated and real datasets, that the centroid estimator can differ from the traditional maximum a posteriori or maximum likelihood estimators.

  3. Estimating the Doppler centroid of SAR data

    DEFF Research Database (Denmark)

    Madsen, Søren Nørvang

    1989-01-01

    After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have...... attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR...... data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found...
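    As an illustration of the correlation Doppler estimator (CDE) family referred to above, the Doppler centroid can be read from the phase of the lag-one azimuth autocorrelation of the complex SAR data. The sketch below is the generic textbook form, assuming complex (I/Q) samples at the pulse repetition frequency; it is not the paper's SDE implementation.

```python
import numpy as np

def doppler_centroid_cde(raw, prf):
    """Generic correlation Doppler estimator sketch: the Doppler centroid is
    proportional to the phase of the lag-1 autocorrelation along azimuth.
    `raw` is a complex 2-D array (azimuth x range), `prf` in Hz."""
    acf = np.sum(raw[1:, :] * np.conj(raw[:-1, :]))    # lag-1 autocorrelation over all range bins
    return prf * np.angle(acf) / (2.0 * np.pi)          # estimate is ambiguous modulo the PRF
```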

  4. Sub-pixel Area Calculation Methods for Estimating Irrigated Areas

    Directory of Open Access Journals (Sweden)

    Suraj Pandey

    2007-10-01

    Full Text Available The goal of this paper was to develop and demonstrate practical methods for computing sub-pixel areas (SPAs) from coarse-resolution satellite sensor data. The methods were tested and verified using: (a) a global irrigated area map (GIAM) at 10-km resolution based primarily on AVHRR data, and (b) an irrigated area map for India at 500-m resolution based primarily on MODIS data. The sub-pixel irrigated areas (SPIAs) from coarse-resolution satellite sensor data were estimated by multiplying the full-pixel irrigated areas (FPIAs) with irrigated area fractions (IAFs). Three methods were presented for IAF computation: (a) Google Earth estimate (IAF-GEE); (b) high-resolution imagery (IAF-HRI); and (c) sub-pixel de-composition technique (IAF-SPDT). The IAF-GEE involved the use of "zoom-in views" of sub-meter to 4-meter very high resolution imagery (VHRI) from Google Earth and helped determine the total area available for irrigation (TAAI), or net irrigated area, which does not consider intensity or seasonality of irrigation. The IAF-HRI is a well-known method that uses finer-resolution data to determine SPAs of the coarser-resolution imagery. The IAF-SPDT is a unique and innovative method wherein SPAs are determined based on the precise location of every pixel of a class in a 2-dimensional brightness-greenness-wetness (BGW) feature-space plot of red band versus near-infrared band spectral reflectivity. The SPIAs computed using IAF-SPDT for the GIAM were within 2% of the SPIAs computed using the well-known IAF-HRI, and the fractions from the two methods were significantly correlated. The IAF-HRI and IAF-SPDT help to determine annualized or gross irrigated areas (AIA), which do consider intensity or seasonality (e.g., the sum of areas from season 1, season 2, and continuous year-round crops). The national census-based irrigated areas for the top 40 irrigated nations (which cover about 90% of global irrigation) were significantly better related (and had lesser uncertainties and errors when
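    The core bookkeeping described above is a simple product of full-pixel areas and irrigated area fractions; a minimal sketch with assumed, purely illustrative numbers:

```python
# Hypothetical example: sub-pixel irrigated area = full-pixel irrigated area x irrigated area fraction
full_pixel_area_km2 = 100.0   # FPIA of a coarse-resolution irrigated class (assumed value)
irrigated_fraction  = 0.42    # IAF from IAF-GEE, IAF-HRI, or IAF-SPDT (assumed value)
spia_km2 = full_pixel_area_km2 * irrigated_fraction   # sub-pixel irrigated area
print(spia_km2)   # 42.0 km^2
```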

  5. Ordinal Regression Based Subpixel Shift Estimation for Video Super-Resolution

    Directory of Open Access Journals (Sweden)

    Petrovic Nemanja

    2007-01-01

    Full Text Available We present a supervised learning-based approach for subpixel motion estimation which is then used to perform video super-resolution. The novelty of this work is the formulation of the problem of subpixel motion estimation in a ranking framework. The ranking formulation is a variant of the classification and regression formulations in which the ordering present in the class labels, namely the shift between patches, is explicitly taken into account. Finally, we demonstrate the applicability of our approach to super-resolving synthetically generated images with global subpixel shifts and to enhancing real video frames by accounting for both local integer and subpixel shifts.

  6. Subpixel urban land cover estimation: comparing cubist, random forests, and support vector regression

    Science.gov (United States)

    Jeffrey T. Walton

    2008-01-01

    Three machine learning subpixel estimation methods (Cubist, Random Forests, and support vector regression) were applied to estimate urban cover. Urban forest canopy cover and impervious surface cover were estimated from Landsat-7 ETM+ imagery using a higher resolution cover map resampled to 30 m as training and reference data. Three different band combinations (...

  7. Implementation and optimization of sub-pixel motion estimation on BWDSP platform

    Science.gov (United States)

    Jia, Shangzhu; Lang, Wenhui; Zeng, Feiyang; Liu, Yufu

    2017-08-01

    Sub-pixel motion estimation is a key technology in the inter-frame prediction stage of video coding and has an important influence on coding performance. In the latest video coding standard, H.265/HEVC, DCT-based interpolation filters are used for sub-pixel motion estimation, but they have very high computational complexity. In order to ensure the real-time performance of hardware coding, we combine the characteristics of the BWDSP architecture and use code-level optimization techniques to implement the sub-pixel motion estimation algorithm. Experimental results demonstrate that, in the BWDSP simulation environment, the proposed method significantly decreases the required clock cycles and thus improves the performance of the encoder.

  8. Comparison of estimates of hardwood bole volume using importance sampling, the centroid method, and some taper equations

    Science.gov (United States)

    Harry V., Jr. Wiant; Michael L. Spangler; John E. Baumgras

    2002-01-01

    Various taper systems and the centroid method were compared to unbiased volume estimates made by importance sampling for 720 hardwood trees selected throughout the state of West Virginia. Only the centroid method consistently gave volume estimates that did not differ significantly from those made by importance sampling, although some taper equations did well for most...

  9. An autocorrelation-based method for improvement of sub-pixel displacement estimation in ultrasound strain imaging.

    Science.gov (United States)

    Kim, Seungsoo; Aglyamov, Salavat R; Park, Suhyun; O'Donnell, Matthew; Emelianov, Stanislav Y

    2011-04-01

    In ultrasound strain and elasticity imaging, an accurate and cost-effective sub-pixel displacement estimator is required because strain/elasticity imaging quality relies on the displacement SNR, which can often be higher if more computational resources are provided. In this paper, we introduce an autocorrelation-based method to cost-effectively improve subpixel displacement estimation quality. To quantitatively evaluate the performance of the autocorrelation method, simulated and tissue-mimicking phantom experiments were performed. The computational cost of the autocorrelation method is also discussed. The results of our study suggest the autocorrelation method can be used for a real-time elasticity imaging system. © 2011 IEEE

  10. Doppler Centroid Estimation for Airborne SAR Supported by POS and DEM

    Directory of Open Access Journals (Sweden)

    CHENG Chunquan

    2015-05-01

    Full Text Available It is difficult to estimate the Doppler centroid frequency and modulation rate for airborne SAR using the traditional vector method because of unstable flight and complex terrain. In this paper, the impacts of POS, DEM and their errors on airborne SAR Doppler parameters are first analyzed qualitatively. Then an innovative vector method based on the range-coplanarity equation is presented to estimate the Doppler centroid, taking the POS and DEM as auxiliary data. The effectiveness of the proposed method is validated and analyzed via simulation experiments. The theoretical analysis and experimental results show that the method can be used to estimate the Doppler centroid with high accuracy even in the cases of high relief, unstable flight, and large-squint SAR.

  11. Estimation of Seismic Centroid Moment Tensor Using Ocean Bottom Pressure Gauges as Seismometers

    Science.gov (United States)

    Kubota, Tatsuya; Saito, Tatsuhiko; Suzuki, Wataru; Hino, Ryota

    2017-11-01

    We examined the dynamic pressure change at the seafloor to estimate the centroid moment tensor solutions of the largest and second largest foreshocks (Mw 7.2 and 6.5) of the 2011 Tohoku-Oki earthquake. The combination of onshore broadband seismograms and high-frequency (20-200 s) seafloor pressure records provided resolution of the horizontal locations of the centroids, consistent with the results of tsunami inversion using the long-period (≳10 min) seafloor pressure records, although the depth was not constrained well; by contrast, the source locations were poorly constrained by the onshore seismic data alone. Also, the waveforms synthesized from the estimated CMT solution demonstrated the validity of the theoretical relationship between pressure change and vertical acceleration at the seafloor. The results of this study suggest that offshore pressure records can be utilized as offshore seismograms, which would be greatly useful for revealing the source processes of offshore earthquakes.

  12. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    E. M. Ismaili Aalaoui

    2009-02-01

    Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail to work because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint in order to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving a third-order hologram and a matrix containing Dirac delta functions. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used a database freely available on the web.

  13. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    Ibn-Elhaj E

    2009-01-01

    Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail to work because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint in order to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving a third-order hologram and a matrix containing Dirac delta functions. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used a database freely available on the web.

  14. Fuzzy neural network model for the estimation of subpixel land cover composition

    Science.gov (United States)

    Binaghi, Elisabetta; Brivio, Pietro A.; Ghezzi, Pier P.; Rampini, Anna; Vicenzi, Massimo

    1998-12-01

    This paper reports on an experimental study designed for the in-depth investigation of how a supervised neuro-fuzzy classifier evaluates partial membership in land cover classes. The system is based on the Fuzzy Multilayer Perceptron model proposed by Pal and Mitra, to which modifications in the distance measures adopted for computing gradual membership in fuzzy classes are introduced. During the training phase, supervised learning is used to assign output class membership to pure training vectors (full membership in one land cover class); the model supports a procedure to automatically compute fuzzy output membership values for mixed training pixels. The classifier has been evaluated by conducting two experiments. The first employed simulated test images which include pure and mixed pixels of known geometry and radiometry. The second experiment was conducted on a highly complex real scene (the Venice lagoon, Italy) where water and wetland merge into one another at the sub-pixel level. The accuracy of the results produced by the classifier was evaluated and compared using evaluation tools specifically defined and implemented to extend conventional descriptive and analytical statistical estimators to the case of multi-membership in classes. The results obtained demonstrate, in the specific context of mixed pixels, that the classification benefits from the integration of neural and fuzzy techniques.

  15. Target Centroid Position Estimation of Phase-Path Volume Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Fengjun Hu

    2016-01-01

    Full Text Available To address the problem of easily losing track of the target when obstacles appear during intelligent robot target tracking, this paper proposes a target tracking algorithm that integrates a reduced-dimension optimal Kalman filtering algorithm, based on the phase-path volume integral, with the Camshift algorithm. After analyzing the defects of the Camshift algorithm and comparing its performance with the SIFT and Mean Shift algorithms, a Kalman filtering algorithm is used for fusion optimization to address these defects. Then, to counter the increased amount of calculation in the integrated algorithm, the dimension is reduced by using the phase-path volume integral instead of the Gaussian integral in the Kalman algorithm, reducing the number of sampling points in the filtering process without influencing the operational precision of the original algorithm. Finally, the target centroid position from each Camshift iteration is set as the observation value of the improved Kalman filtering algorithm to correct the predicted value; this yields an optimal estimate of the target centroid position and keeps the target tracked so that the robot can understand the environmental scene and react correctly and in time to changes. The experiments show that the improved algorithm proposed in this paper performs well in target tracking with obstructions and reduces the computational complexity of the algorithm through dimension reduction.

  16. Sub-pixel estimation of tree cover and bare surface densities using regression tree analysis

    Directory of Open Access Journals (Sweden)

    Carlos Augusto Zangrando Toneli

    2011-09-01

    Full Text Available Sub-pixel analysis is capable of generating continuous fields, which represent the spatial variability of certain thematic classes. The aim of this work was to develop numerical models to represent the variability of tree cover and bare surfaces within the study area. This research was conducted in the riparian buffer within a watershed of the São Francisco River in the North of Minas Gerais, Brazil. IKONOS and Landsat TM imagery were used with the GUIDE algorithm to construct the models. The results were two index images derived with regression trees for the entire study area, one representing tree cover and the other representing bare surface. The use of non-parametric and non-linear regression tree models presented satisfactory results to characterize wetland, deciduous and savanna patterns of forest formation.

  17. Detailed comparison of neuro-fuzzy estimation of subpixel land-cover composition from remotely sensed data

    Science.gov (United States)

    Baraldi, Andrea; Binaghi, Elisabetta; Blonda, Palma N.; Brivio, Pietro A.; Rampini, Anna

    1998-10-01

    Mixed pixels, which do not follow a known statistical distribution that could be parameterized, are a major source of inconvenience in the classification of remote sensing images. This paper reports on an experimental study designed for the in-depth investigation of how and why two neuro-fuzzy classification schemes, whose properties are complementary, estimate sub-pixel land cover composition from remotely sensed data. The first classifier is based on the fuzzy multilayer perceptron (FMLP) proposed by Pal and Mitra; the second classifier consists of a two-stage hybrid (TSH) learning scheme whose unsupervised first stage is based on the fully self-organizing simplified adaptive resonance theory clustering network proposed by Baraldi. Results of the two neuro-fuzzy classifiers are assessed by means of specific evaluation tools designed to extend conventional descriptive and analytical statistical estimators to the case of multi-membership in classes. When a synthetic data set consisting of pure and mixed pixels is processed by the two neuro-fuzzy classifiers, experimental results show that: i) the two neuro-fuzzy classifiers perform better than the traditional MLP; ii) the classification accuracies of the two neuro-fuzzy classifiers are comparable; and iii) the TSH classifier requires less background knowledge for training than the FMLP.

  18. Estimation of sub-pixel water area on Tibet plateau using multiple endmembers spectral mixture spectral analysis from MODIS data

    Science.gov (United States)

    Cui, Qian; Shi, Jiancheng; Xu, Yuanliu

    2011-12-01

    Water is the basic needs for human society, and the determining factor of stability of ecosystem as well. There are lots of lakes on Tibet Plateau, which will lead to flood and mudslide when the water expands sharply. At present, water area is extracted from TM or SPOT data for their high spatial resolution; however, their temporal resolution is insufficient. MODIS data have high temporal resolution and broad coverage. So it is valuable resource for detecting the change of water area. Because of its low spatial resolution, mixed-pixels are common. In this paper, four spectral libraries are built using MOD09A1 product, based on that, water body is extracted in sub-pixels utilizing Multiple Endmembers Spectral Mixture Analysis (MESMA) using MODIS daily reflectance data MOD09GA. The unmixed result is comparing with contemporaneous TM data and it is proved that this method has high accuracy.

  19. Nearest shrunken centroids via alternative genewise shrinkages.

    Directory of Open Access Journals (Sweden)

    Byeong Yeob Choi

    Full Text Available Nearest shrunken centroids (NSC) is a popular classification method for microarray data. NSC calculates centroids for each class and "shrinks" the centroids toward 0 using soft thresholding. Future observations are then assigned to the class with the minimum distance between the observation and the (shrunken) centroid. Under certain conditions the soft shrinkage used by NSC is equivalent to a LASSO penalty. However, this penalty can produce biased estimates when the true coefficients are large. In addition, NSC ignores the fact that multiple measures of the same gene are likely to be related to one another. We consider several alternative genewise shrinkage methods to address the aforementioned shortcomings of NSC. Three alternative penalties were considered: the smoothly clipped absolute deviation (SCAD), the adaptive LASSO (ADA), and the minimax concave penalty (MCP). We also showed that NSC can be performed in a genewise manner. Classification methods were derived for each alternative shrinkage method or alternative genewise penalty, and the performance of each new classification method was compared with that of conventional NSC on several simulated and real microarray data sets. Moreover, we applied the geometric mean approach to the alternative penalty functions. In general the alternative (genewise) penalties required fewer genes than NSC. The geometric mean of the class-specific prediction accuracies was improved, as well as the overall predictive accuracy in some cases. These results indicate that these alternative penalties should be considered when using NSC.
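    The soft thresholding ("shrinkage") at the heart of NSC has a simple closed form, sign(d)·max(|d| − Δ, 0), which is the LASSO-type step the abstract contrasts with SCAD, the adaptive LASSO and MCP. The sketch below uses illustrative variable names.

```python
import numpy as np

def soft_threshold(d, delta):
    """NSC-style soft thresholding of standardised class-centroid deviations d."""
    return np.sign(d) * np.maximum(np.abs(d) - delta, 0.0)

# Example: deviations of one gene's class centroids from the overall centroid
d = np.array([2.3, -0.4, 0.1])
print(soft_threshold(d, 0.5))   # [ 1.8 -0.  0.]  -- small deviations are shrunk to zero
```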

  20. Gridded Population of the World, Version 3 (GPWv3): Centroids

    Data.gov (United States)

    National Aeronautics and Space Administration — Gridded Population of the World, Version 3 (GPWv3) Centroids consists of estimates of human population counts and densities for the years 1990, 1995, 2000, 2005,...

  1. Spatial scaling of net primary productivity using subpixel landcover information

    Science.gov (United States)

    Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.

    2008-10-01

    Gridding the land surface into coarse homogeneous pixels may cause important biases in ecosystem model estimates of carbon budget components at local, regional and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is done on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP values obtained from calculations made at coarse spatial resolutions are multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to estimates from a coupled carbon-hydrology model (the BEPS-TerrainLab model) made at 1-km resolution over a watershed (the Baohe River Basin) located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as its spatial variability.

  2. Centroid of a Polygon--Three Views.

    Science.gov (United States)

    Shilgalis, Thomas W.; Benson, Carol T.

    2001-01-01

    Investigates the idea of the center of mass of a polygon and illustrates centroids of polygons. Connects physics, mathematics, and technology to produce results that serve to generalize the notion of centroid to polygons other than triangles. (KHR)
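    For reference, the centroid of a simple (non-self-intersecting) polygon, which generalizes the familiar triangle centroid discussed in the article, follows directly from the shoelace formula; this is the standard textbook result, not code from the article.

```python
def polygon_centroid(vertices):
    """Centroid (centre of mass of the enclosed area) of a simple polygon.
    `vertices` is a list of (x, y) tuples in order; uses the shoelace formula."""
    area2 = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area2 += cross                  # twice the signed area
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3.0 * area2), cy / (3.0 * area2)

print(polygon_centroid([(0, 0), (2, 0), (2, 2), (0, 2)]))   # (1.0, 1.0) for a 2x2 square
```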

  3. Spatial variability of extreme rainfall at radar subpixel scale

    Science.gov (United States)

    Peleg, Nadav; Marra, Francesco; Fatichi, Simone; Paschalis, Athanasios; Molnar, Peter; Burlando, Paolo

    2018-01-01

    Extreme rainfall is quantified in engineering practice using Intensity-Duration-Frequency (IDF) curves that are traditionally derived from rain gauges and more recently also from remote sensing instruments, such as weather radars. These instruments measure rainfall at different spatial scales: a rain gauge samples rainfall at the point scale, while a weather radar averages precipitation over a relatively large area, generally around 1 km². As such, a radar-derived IDF curve is representative of the mean areal rainfall over a given radar pixel and neglects the within-pixel rainfall variability. In this study, we quantify the subpixel variability of extreme rainfall by using a novel space-time rainfall generator (the STREAP model) that downscales in space the rainfall within a given radar pixel. The study was conducted using a unique radar data record (23 years) and a very dense rain-gauge network in the Eastern Mediterranean area (northern Israel). Radar IDF curves, together with an ensemble of point-based IDF curves representing the radar subpixel extreme rainfall variability, were developed by fitting Generalized Extreme Value (GEV) distributions to annual rainfall maxima. It was found that the mean areal extreme rainfall derived from the radar underestimates most of the extreme values computed for point locations within the radar pixel (on average, ∼70%). The subpixel variability of extreme rainfall was found to increase with longer return periods and shorter durations (e.g., from a maximum variability of 10% for a return period of 2 years and a duration of 4 h to 30% for a 50-year return period and 20 min duration). For the longer return periods, a considerable enhancement of extreme rainfall variability was found when stochastic (natural) climate variability was taken into account. Bounding the range of the subpixel extreme rainfall derived from radar IDF curves can be of major importance for different applications that require very local estimates of rainfall extremes.

  4. SHARE 2012: subpixel detection and unmixing experiments

    Science.gov (United States)

    Kerekes, John P.; Ludgate, Kyle; Giannandrea, AnneMarie; Raqueno, Nina G.; Goldberg, Daniel S.

    2013-05-01

    The quantitative evaluation of algorithms applied to remotely sensed hyperspectral imagery requires data sets with known ground truth. A recent data collection known as SHARE 2012, conducted by scientists in the Digital Imaging and Remote Sensing Laboratory at the Rochester Institute of Technology together with several outside collaborators, acquired hyperspectral data with this goal in mind. Several experiments were designed and deployed, and ground truth was collected to support algorithm evaluation. In this paper, we describe two experiments that addressed the particular needs for the evaluation of subpixel detection and unmixing algorithms. The subpixel detection experiment involved the deployment of dozens of nearly identical subpixel targets in a random spatial array. The subpixel targets were pieces of wood painted either green or yellow. They were sized to occupy about 5% to 20% of the 1 m pixels. The unmixing experiment used novel targets with prescribed fractions of different materials based on a geometric arrangement of subpixel patterns. These targets were made up of different fabrics of various colors. Whole-pixel swatches of the same materials were also deployed in the scene to provide in-scene endmembers. Alternatively, researchers can use the unmixing targets alone to derive endmembers from the mixed pixels. Field reflectance spectra were collected for all targets and adjacent background areas. While efforts are just now underway to evaluate the detection performance using the subpixel targets, initial results for the unmixing targets have demonstrated retrieved fractions that are close approximations to the geometric fractions. These data, together with the ground truth, are planned to be made available to the remote sensing research community for the evaluation and development of detection and unmixing algorithms.

  5. 2D Sub-Pixel Disparity Measurement Using QPEC / Medicis

    Directory of Open Access Journals (Sweden)

    M. Cournet

    2016-06-01

    Full Text Available In the frame of its earth observation missions, CNES created a library called QPEC and one of its launchers called Medicis. QPEC / Medicis is a sub-pixel two-dimensional stereo matching algorithm that works on an image pair. This tool is a block matching algorithm, which means that it is based on a local method. Moreover, it does not regularize the results found. It proposes several matching costs, such as the Zero-mean Normalised Cross-Correlation or statistical measures (the Mutual Information being one of them), and different match validation flags. QPEC / Medicis is able to compute a two-dimensional dense disparity map with sub-pixel precision. Hence, it is more versatile than disparity estimation methods found in the computer vision literature, which often assume an epipolar geometry. CNES uses Medicis, among other applications, during the in-orbit image quality commissioning of earth observation satellites. For instance, the Pléiades-HR 1A & 1B and the Sentinel-2 geometric calibrations are based on this block matching algorithm. Over the years, it has become a common tool in ground segments for in-flight monitoring purposes. For these two kinds of applications, the two-dimensional search and the local sub-pixel measure without regularization can be essential. This tool is also used to generate digital elevation models automatically, a purpose for which it was not initially dedicated. This paper deals with the QPEC / Medicis algorithm. It also presents some of its CNES applications (in-orbit commissioning, in-flight monitoring or digital elevation model generation). Medicis software is distributed outside CNES as well. This paper finally describes some of these external applications using Medicis, such as ground displacement measurement, or intra-oral scanning in the dental domain.

  6. Networks and centroid metrics for understanding football

    African Journals Online (AJOL)

    Gonçalo Dias

    games. However, it seems that the centroid metric, supported only by the position ... displaying and measuring the interpersonal relationships established by the .... according to game venue (games played home or away), a histogram-based ...

  7. Comparison of three sub-pixel computation approaches

    Science.gov (United States)

    Zhao, An; Zheng, Lin; Jiang, Meixin

    2005-10-01

    Sub-pixel classification is a difficult issue in the remote sensing field. Although many software packages or modules can be used to address this problem, their rationales, algorithms and methodologies differ, resulting in different methods being used for different purposes. This leaves many users confused when they want to detect mixed feature content within a pixel and use a sub-pixel approach for a practical application. It is therefore necessary to make an in-depth comparison of different sub-pixel methods so that RS&GIS users can choose the proper sub-pixel method for their specific applications. After reviewing the basic theories and methods for dealing with sub-pixels, this paper gives an introductory analysis of the principles, algorithms, parameters and computing processes of three sub-pixel calculation methods: Linear Unmixing in the ILWIS 3.0 platform, Erdas 8.5's Sub-pixel Classifier, and eCognition 3.0's Nearest Neighbor. A case study of the three sub-pixel methods was then made for flood monitoring in the Poyang Lake region of P.R. China with band-1 and band-2 data from NOAA AVHRR imagery. Finally, a theoretical, technological and practical comparison was made of these three sub-pixel methods in terms of their basic principles, the parameters to be set, the suitable application fields and their respective limitations. Opinions and comments on the use of the sub-pixel calculation results of these three methods are presented at the end, in the hope of providing a reference for future sub-pixel application studies by interested researchers.

  8. Angles-centroids fitting calibration and the centroid algorithm applied to reverse Hartmann test

    Science.gov (United States)

    Zhao, Zhu; Hui, Mei; Xia, Zhengzheng; Dong, Liquan; Liu, Ming; Liu, Xiaohua; Kong, Lingqin; Zhao, Yuejin

    2017-02-01

    In this paper, we develop an angles-centroids fitting (ACF) system and a centroid algorithm to calibrate the reverse Hartmann test (RHT) with sufficient precision. The essence of ACF calibration is to establish the relationship between ray angles and detector coordinates. Centroid computation is used to find correspondences between the rays of datum marks and detector pixels. Here, the point spread function of the RHT is treated as a circle of confusion (CoC), and the fitting of a CoC spot with a 2D Gaussian profile to identify the centroid forms the basis of the centroid algorithm. Theoretical and experimental results of the centroid computation demonstrate that the Gaussian fitting method has a smaller centroid shift, or the shift grows more slowly, when the quality of the image is reduced. In the ACF tests, the optical instrument alignments reach an overall accuracy of 0.1 pixel with the application of a laser spot centroid tracking program. Locating the crystal at different positions, the feasibility and accuracy of ACF calibration are further validated to a root-mean-square error of 10^-6 to 10^-4 rad in the calibration differences.
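    A hedged illustration of the Gaussian-fit centroiding step described above: fit an isotropic 2D Gaussian to a spot image by least squares and take the fitted centre as the centroid. The parameterisation and starting values are generic assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, amp, x0, y0, sigma, offset):
    """Isotropic 2-D Gaussian evaluated on (x, y) coordinate grids, flattened."""
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2)) + offset
    return g.ravel()

def fit_spot_centroid(spot):
    """Return the sub-pixel centroid (x0, y0) of a single bright spot image."""
    y, x = np.mgrid[0:spot.shape[0], 0:spot.shape[1]]
    total = spot.sum()
    # initial guess: intensity-weighted centroid and a spot width of a few pixels
    p0 = (spot.max() - spot.min(), (x * spot).sum() / total,
          (y * spot).sum() / total, 2.0, spot.min())
    popt, _ = curve_fit(gaussian2d, (x, y), spot.ravel(), p0=p0)
    return popt[1], popt[2]   # fitted (x0, y0)
```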

  9. Networks and centroid metrics for understanding football

    African Journals Online (AJOL)

    Gonçalo Dias

    ABSTRACT. This study aimed to verify the network of contacts resulting from the collective behaviour of professional football teams through the centroid method and networks as well, thereby providing detailed information about the match to coaches and sport analysts. For this purpose, 999 collective attacking actions from ...

  10. Improved subpixel monitoring of seasonal snow cover: a case study in the Alps

    OpenAIRE

    Veganzones, Miguel Angel; Dalla Mura, Mauro; Dumont, Marie; Zin, Isabella; Chanussot, Jocelyn

    2014-01-01

    The snow coverage area (SCA) is one of the most important parameters for cryospheric studies. The use of remote sensing imagery can complement field measurements by providing means to derive SCA with a high temporal frequency and covering large areas. Images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) are perhaps the most widely used data to retrieve SCA maps. Some MODIS-derived algorithms are available for subpixel SCA estimation, such as MODSCAG ...

  11. Improved Surface Reflectance from Remote Sensing Data with Sub-Pixel Topographic Information

    Directory of Open Access Journals (Sweden)

    Laure Roupioz

    2014-10-01

    Full Text Available Several methods currently exist to efficiently correct topographic effects on the radiance measured by satellites. Most of those methods use topographic information and satellite data at the same spatial resolution. In this study, the 30 m spatial resolution data of the Digital Elevation Model (DEM) from ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) are used to account for those topographic effects when retrieving land surface reflectance from satellite data at lower spatial resolution (e.g., 1 km). The methodology integrates the effects of sub-pixel topography on the estimation of the total irradiance received at the surface, considering direct, diffuse and terrain irradiance. The corrected total irradiance is then used to compute the topographically corrected surface reflectance. The proposed method has been developed to be applied to satellite data with various kilometric pixel sizes. In this study, it was tested and validated with synthetic Landsat data aggregated at 1 km. The results obtained after a sub-pixel topographic correction are compared with those obtained after a pixel-level topographic correction and show that, in rough terrain, the sub-pixel topographic correction method provides better results, even if it tends to slightly overestimate the retrieved land surface reflectance in some cases.

  12. Correlation techniques as applied to pose estimation in space station docking

    Science.gov (United States)

    Rollins, John M.; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-08-01

    The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not necessarily provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed on each connecting module at carefully surveyed positions. The appearance of a subset of spots must form a constellation of specific relative positions in the incoming image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th pixel, in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow and lighting irregularity compensation are discussed.

  13. FINGERPRINT MATCHING BASED ON PORE CENTROIDS

    Directory of Open Access Journals (Sweden)

    S. Malathi

    2011-05-01

    Full Text Available In recent years there has been exponential growth in the use of biometrics for user authentication applications. Automated fingerprint identification systems have become a popular tool in many security and law enforcement applications. Most of these systems rely on minutiae (ridge ending and bifurcation) features. With the advancement in sensor technology, high-resolution fingerprint images (1000 dpi) provide micro-level features (pores) that have proven to be useful for identification. In this paper, we propose a new strategy for fingerprint matching based on pores by reliably extracting the pore features. The extraction of pores is done by the marker-controlled watershed segmentation method, and the centroids of each pore are considered as feature vectors for matching two fingerprint images. Experimental results show that the proposed method has better performance, with lower false rates and higher accuracy.

  14. Computing the apparent centroid of radar targets

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C.E.

    1996-12-31

    A high-frequency multibounce radar scattering code was used as a simulation platform for demonstrating an algorithm to compute the apparent radar centroid (ARC) of specific radar targets. To illustrate this simulation process, several target models were used. Simulation results for a sphere model were used to determine the errors of approximation associated with the simulation, verifying the process. The severity of glint-induced tracking errors was also illustrated using a model of an F-15 aircraft. It was shown, in a deterministic manner, that the ARC of a target can fall well outside its physical extent. Finally, the apparent radar centroid simulation based on a ray casting procedure is well suited for use on most massively parallel computing platforms and could lead to the development of a near-real-time radar tracking simulation for applications such as endgame fuzing, survivability, and vulnerability analyses using specific radar targets and fuze algorithms.

  15. Subpixel jitter video restoration on board of micro-UAV

    Science.gov (United States)

    Szu, Harold H.; Buss, James R.; Garcia, Joseph P.; Breaux, Nancy A.; Kopriva, Ivica; Karangelen, Nicholas E.; Hsu, M.; Lee, Ting; Willey, Jeff; Shield, Gary; Brown, Steve; Robbins, R.; Hobday, John

    2004-04-01

    We review various image processing algorithms for micro-UAV EO/IR sub-pixel jitter restoration. Since the micro-UAV, Silver Fox, cannot afford isolation coupling mounting from the turbulent aerodynamics of the airframe, we explore smart real-time software to mitigate the sub-pixel jitter effect. We define jitter to be sub-pixel or small-amplitude vibrations up to one pixel, as opposed to motion blur over several pixels for which there already exists real time correction algorithms used on other platforms. We divide the set of jitter correction algorithms into several categories: They are real time, pseudo-real time, or non-real-time, but they are all standalone, i.e. without relying on a library storage or flight data basis on-board the UAV. The top of the list is demonstrated and reported here using real-world data and a truly unsupervised, real-time algorithm.

  16. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-03-23

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  17. Weight Adjustment Schemes for a Centroid Based Classifier

    National Research Council Canada - National Science Library

    Shankar, Shrikanth; Karypis, George

    2000-01-01

    .... Similarity based categorization algorithms such as k-nearest neighbor, generalized instance set and centroid based classification have been shown to be very effective in document categorization...

  18. Real-time subpixel-accuracy tracking of single mitochondria in neurons reveals heterogeneous mitochondrial motion.

    Science.gov (United States)

    Alsina, Adolfo; Lai, Wu Ming; Wong, Wai Kin; Qin, Xianan; Zhang, Min; Park, Hyokeun

    2017-11-04

    Mitochondria are essential for cellular survival and function. In neurons, mitochondria are transported to various subcellular regions as needed. Thus, defects in the axonal transport of mitochondria are related to the pathogenesis of neurodegenerative diseases, and the movement of mitochondria has been the subject of intense research. However, the inability to accurately track mitochondria with subpixel accuracy has hindered this research. Here, we report an automated method for tracking mitochondria based on the center of fluorescence. This tracking method, which is accurate to approximately one-tenth of a pixel, uses the centroid of an individual mitochondrion and provides information regarding the distance traveled between consecutive imaging frames, instantaneous speed, net distance traveled, and average speed. Importantly, this new tracking method enables researchers to observe both directed motion and undirected movement (i.e., in which the mitochondrion moves randomly within a small region, following a sub-diffusive motion). This method significantly improves our ability to analyze the movement of mitochondria and sheds light on the dynamic features of mitochondrial movement. Copyright © 2017 Elsevier Inc. All rights reserved.
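    A hedged sketch of the "center of fluorescence" idea: the sub-pixel position of a mitochondrion can be taken as the intensity-weighted centroid of a background-subtracted region of interest. The background estimate and ROI handling below are generic assumptions rather than the authors' pipeline.

```python
import numpy as np

def fluorescence_centroid(roi, background=None):
    """Intensity-weighted centroid (row, col) of a fluorescence ROI, with simple
    background subtraction; accuracy is limited by noise, not by the pixel grid."""
    if background is None:
        background = np.median(roi)                        # crude background estimate (assumption)
    w = np.clip(roi.astype(float) - background, 0.0, None)  # non-negative weights
    rows, cols = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total
```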

  19. Evaluating Fourier Cross-Correlation Sub-Pixel Registration in Landsat Images

    Directory of Open Access Journals (Sweden)

    Jaime Almonacid-Caballer

    2017-10-01

    Full Text Available Multi-temporal analysis is one of the main applications of remote sensing, and Landsat imagery has been one of its main resources for many years. However, the moderate spatial resolution (30 m) restricts its use for high-precision applications. In this paper, we simulate Landsat scenes to evaluate, by means of an exhaustive number of tests, a subpixel registration process based on phase correlation and the upsampling of the Fourier transform. From a high-resolution image (0.5 m), two sets of 121 synthetic images with fixed translations are created to simulate Landsat scenes (30 m). In this regard, the use of the point spread function (PSF) of the Landsat TM (Thematic Mapper) sensor in the downsampling process improves the results compared to those obtained by simple averaging. In the process of obtaining sub-pixel accuracy by upsampling the cross-correlation matrix by a certain factor, the limit of improvement is achieved at 0.1 pixels. We show that image size affects the cross-correlation results, but for images equal to or larger than 100 × 100 pixels similar accuracies are expected. The large dataset used in the tests allows us to describe the intra-pixel distribution of the errors obtained in the registration process and how they follow a waveform instead of random/stochastic behavior. The amplitude of this waveform, representing the highest expected error, is estimated at 1.88 m. Finally, a validation test is performed over a set of sub-pixel shorelines obtained from actual Landsat-5 TM, Landsat-7 ETM+ (Enhanced Thematic Mapper Plus) and Landsat-8 OLI (Operational Land Imager) scenes. The evaluation of the shoreline accuracy with respect to permanent seawalls, before and after the registration, shows the importance of the registration process and serves as a non-synthetic validation test that reinforces previous results.
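    A minimal, self-contained version of the registration step evaluated above (phase correlation with Fourier-domain upsampling) is sketched below. It globally zero-pads the cross-power spectrum by the upsampling factor, whereas efficient implementations upsample only a small neighbourhood of the peak; treat it as an illustration rather than the authors' code.

```python
import numpy as np

def fourier_subpixel_shift(ref, mov, upsample=10):
    """Estimate the (row, col) translation between two same-size images by phase
    correlation, upsampling the correlation surface by zero-padding the centred
    cross-power spectrum (assumes a pure translation and even image dimensions)."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    R /= np.abs(R) + 1e-12                               # normalised cross-power spectrum
    Rs = np.fft.fftshift(R)
    pad = [d * (upsample - 1) // 2 for d in R.shape]
    Rp = np.pad(Rs, [(pad[0], pad[0]), (pad[1], pad[1])])
    corr = np.abs(np.fft.ifft2(np.fft.ifftshift(Rp)))    # upsampled correlation surface
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    for axis, n in enumerate(corr.shape):                # unwrap shifts beyond half the frame
        if peak[axis] > n / 2:
            peak[axis] -= n
    return peak / upsample                               # shift in original-pixel units
```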

  20. Sub-Pixel Classification of MODIS EVI for Annual Mappings of Impervious Surface Areas

    Directory of Open Access Journals (Sweden)

    Narumasa Tsutsumida

    2016-02-01

    Full Text Available Regular monitoring of expanding impervious surface areas (ISAs) in urban areas is highly desirable. MODIS data can meet this demand in terms of frequent observations but are lacking in spatial detail, leading to the mixed land cover problem when per-pixel classifications are applied. To overcome this issue, this research develops and applies a spatio-temporal sub-pixel model to estimate ISAs on an annual basis during 2001–2013 in the Jakarta Metropolitan Area, Indonesia. A Random Forest (RF) regression inferred the ISA proportion from the 23 annual values of MODIS MOD13Q1 EVI and reference data in which this proportion was visually estimated from very high-resolution images in Google Earth over time at randomly selected locations. Annual maps of ISA proportion were generated and showed an average increase of 30.65 km²/year over 13 years. For comparison, a series of RF per-pixel classifications were also developed from the same reference data using a Boolean class constructed from different thresholds of ISA proportion. Results from the per-pixel models varied as these thresholds changed, suggesting difficulty in estimating actual ISAs. This research demonstrates the advantages of spatio-temporal sub-pixel analysis for annual ISA mapping and addresses the problem associated with the definition of thresholds in per-pixel approaches.
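    The regression step described above can be reproduced in outline with scikit-learn; the feature matrix of 23 EVI values per year and the reference ISA proportions below are random placeholders for the data the paper derives from MOD13Q1 and Google Earth.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X: one row per training pixel, 23 EVI composites for one year (placeholder data)
# y: reference ISA proportion in [0, 1] for the same pixels (placeholder data)
rng = np.random.default_rng(0)
X = rng.random((500, 23))
y = rng.random(500)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, y)

# Predict the ISA proportion for new pixels and clip to the valid fraction range
isa_fraction = np.clip(rf.predict(rng.random((10, 23))), 0.0, 1.0)
print(isa_fraction)
```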

  1. Sub-pixel measurement system for grid's width and period based on an improved partial area effect

    Science.gov (United States)

    Zhu, Feijia; Jin, Peng

    2017-12-01

    Based on the partial area effect of a charge-coupled device (CCD), a sub-pixel line detection algorithm is proposed to measure the width and the period of a metal grid. An optical pointing system is developed and applied to accurately measure the line width and the period of a grid. The grid's moving image is captured by the developed system. From the obtained images, one can determine the position of a line with sub-pixel resolution. By controlling the grid's movement and aiming at the grid, the absolute coordinates of a grating ruler are obtained. Simulated calculations and experiments are performed with recorded video images to validate the performance of the proposed algorithm. The results show that the precision of the proposed estimation algorithm can reach 0.025 pixels for a moving image.

  2. Optimization of the autocorrelation weighting function for the time-domain calculation of spectral centroids.

    Science.gov (United States)

    Heo, Seo; Hur, Don; Kim, Hyungsuk

    2015-03-01

    The spectral centroid of backscattered ultrasound provides important information about the attenuation properties of soft tissues and the Doppler effects of blood flow. Because the spectral centroid is originally defined from the power spectrum of backscattered ultrasound signals in the frequency domain, it is natural to calculate it after converting time-domain signals into spectral-domain signals using the fast Fourier transform (FFT). Recent research, however, derived time-domain equations for calculating the spectral centroid using Parseval's theorem, to avoid the calculation of the Fourier transform. That work only presented the final result, which showed that the proposed time-domain method was 4.4 times faster than the original FFT-based method, whereas the average estimation error was negligible. In this paper, we present an optimal design of the autocorrelation weighting function, which is used in the time-domain spectral centroid estimation process, to reduce the computational time significantly. We also carry out a comprehensive analysis of the computational complexities of the FFT-based and time-domain methods with respect to the length of the ultrasound signal segments. Simulation results using numerical phantoms show that, with the optimized autocorrelation weighting function, we only need approximately 3% of the full set of data points. In addition, because the proposed optimization technique requires a fixed number of data points to calculate the spectral centroid, the execution time remains constant as the length of the data segment increases, whereas the execution time of the conventional FFT-based method increases. The computational complexities of the proposed method and the conventional FFT-based method are O(N) and O(N log2 N), respectively.
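    For reference, the quantity being estimated is the spectral centroid of the backscattered power spectrum, f_c = Σ f_k·P(f_k) / Σ P(f_k). The frequency-domain baseline that the time-domain method avoids looks like the sketch below; the window choice and segment handling are illustrative assumptions.

```python
import numpy as np

def spectral_centroid_fft(segment, fs):
    """Frequency-domain spectral centroid of one RF segment (baseline method).
    segment: real-valued ultrasound samples, fs: sampling frequency in Hz."""
    spectrum = np.fft.rfft(segment * np.hanning(len(segment)))   # windowed spectrum
    power = np.abs(spectrum) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    return np.sum(freqs * power) / np.sum(power)                 # power-weighted mean frequency
```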

  3. Image Centroid Algorithms for Sun Sensors with Super Wide Field of View

    Directory of Open Access Journals (Sweden)

    ZHAN Yinhu

    2015-10-01

    Full Text Available The sun image centroid algorithm is one of the key technologies of celestial navigation using sun sensors, and it directly determines the precision of the sensors. Due to the limitations of centroid algorithms for the non-circular sun images of sun sensors with a large field of view, an ellipse fitting algorithm is first proposed for handling elliptical or sub-elliptical sun images. Then a spherical circle fitting algorithm is put forward. Based on the projection model and distortion model of the camera, the spherical circle fitting algorithm is used to obtain the edge points of the sun in object space, and then the centroid of the sun can be determined by fitting the edge points as a spherical circle. In order to estimate the precision of the spherical circle fitting algorithm, the centroid of the sun is projected back into image space. Theoretically, the spherical circle fitting algorithm no longer needs to take into account the shape of the sun image, so the algorithm is more precise. Results on real sun images demonstrate that the ellipse fitting algorithm is more suitable for sun images with a 70°~80.3° half angle of view, with a mean precision of about 0.075 pixels, while the spherical circle fitting algorithm is more suitable for sun images with a half angle of view larger than 80.3°, with a mean precision of about 0.082 pixels.

  4. An Adaptive Connectivity-based Centroid Algorithm for Node Positioning in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Aries Pratiarso

    2015-06-01

    Full Text Available In wireless sensor network applications, the positions of nodes are randomly distributed following the contour of the observation area. A simple solution that does not require any measurement tools is provided by range-free methods. However, such methods yield only coarse estimates of node positions. In this paper, we propose an Adaptive Connectivity-based Centroid (ACC) algorithm. This algorithm is a combination of the Centroid range-free algorithm and a hop-based connectivity algorithm. Nodes estimate their own positions based on the connectivity level between them and their reference nodes. Each node divides its communication range into several regions, each of which has a certain weight that depends on the received signal strength. The weighted values are used to obtain the estimated positions of the nodes. Simulation results show that the proposed algorithm has up to 3 meters of position estimation error in a 100x100 square meter observation area, with up to 3 hop counts for an 80-meter communication range. The proposed algorithm achieves an average positioning error up to 10 meters better than the Weighted Centroid algorithm. Keywords: adaptive, connectivity, centroid, range-free.
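    The weighting idea reduces to a weighted centroid of the anchor positions heard by the unknown node; the sketch below is a generic weighted-centroid estimator, with an assumed RSSI-to-weight mapping rather than the region-based ACC weighting of the paper.

```python
import numpy as np

def weighted_centroid(anchor_positions, rssi_dbm):
    """Estimate an unknown node's position as the weighted centroid of the anchors
    it can hear; a stronger received signal gives a larger weight (assumption)."""
    anchors = np.asarray(anchor_positions, dtype=float)              # shape (n, 2)
    weights = 10.0 ** (np.asarray(rssi_dbm, dtype=float) / 10.0)     # dBm -> linear power
    return (weights[:, None] * anchors).sum(axis=0) / weights.sum()

# Example: three anchors heard at different signal strengths
print(weighted_centroid([(0, 0), (10, 0), (0, 10)], [-60, -70, -75]))
```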

  5. Plasma Channel Diagnostic Based on Laser Centroid Oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2010-09-09

    A technique has been developed for measuring the properties of discharge-based plasma channels by monitoring the centroid location of a laser beam exiting the channel as a function of input alignment offset between the laser and the channel. The centroid position of low-intensity (<10^14 W cm^-2) laser pulses focused at the input of a hydrogen-filled capillary discharge waveguide was scanned and the exit positions recorded to determine the channel shape and depth with an accuracy of a few %. In addition, accurate alignment of the laser beam through the plasma channel can be provided by minimizing laser centroid motion at the channel exit as the channel depth is scanned either by scanning the plasma density or the discharge timing. The improvement in alignment accuracy provided by this technique will be crucial for minimizing electron beam pointing errors in laser plasma accelerators.

  6. Subpixel boundary backward substitution reconstruction algorithm for not uniform microscan to FPA and blind micromotion matching

    Science.gov (United States)

    Chen, Yi-nan; Jin, Wei-qi; Zhao, Lei; Gao, Mei-jing; Zhao, Lin

    2008-03-01

    For subpixel micro-scanning imaging, we propose a reconstruction algorithm based neither on interpolation nor on the super-resolution idea, but on a block-by-block method that recurses from the boundary to the centre when an additional narrowband boundary view-field diaphragm, whose radiation is known a priori, is present. The purpose of the predicted boundary values is to add conditions for resolving the non-uniqueness of the ill-posed inversion of the transition matrix of the destructed process. For a non-uniform scan factor, an improved algorithm associated with certain non-uniform motion variables is proposed. Additionally, attention is focused on the case of unknown subpixel motion, when the reconstructed images are blurred by motion parameter modulation and neighbouring-point aliasing because the assumed value of the micro-motion is not the correct one. Unlike other methods, in which image registration is accomplished before multi-frame restoration from undersampled sequences frame by frame, in this paper the 2-D motion vector is estimated from a single frame using only the blur character of the reconstructed grids. We demonstrate that once the estimated motion approaches the real one, the squared sum over all pixels of the unmatched image approximately descends to its minimum. The matching track, based on a recursive Newton secant approach, is optimized for high matching speed and precision by different strategies, including matching region hunting, matching direction choosing and convergence prejudgement. All iterative step lengths with respect to the motion parameters are substituted by suitable values derived from the statistical process and the one- or multi-secant solution. The simulations demonstrate the feasibility of the matching algorithm and the obvious resolution enhancement compared to the directly oversampled image.

  7. A Statistical Study of Beam Centroid Oscillations in a Solenoid Transport Channel

    Energy Technology Data Exchange (ETDEWEB)

    Lund, S; Wootton, C; Coleman, J; Lidia, S; Seidl, P

    2009-05-07

    A recent theory of transverse centroid oscillations in solenoidally focused beam transport lattices presented in Ref. [1] is applied to statistically analyze properties of the centroid orbit in the Neutralized Drift Compression Experiment (NDCX) at the Lawrence Berkeley National Laboratory. Contributions to the amplitude of the centroid oscillations from mechanical misalignments and initial centroid errors exiting the injector are analyzed. Measured values of the centroid appear consistent with expected alignment tolerances. Correction of these errors is discussed.

  8. Improved Correction Localization Algorithm Based on Dynamic Weighted Centroid for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xuejiao Chen

    2014-08-01

    Full Text Available For wireless sensor network applications that require location information for sensor nodes, node locations can be estimated by a number of localization algorithms. However, precise location information may be unavailable due to constraints on energy, computation, or terrain. An improved correction localization algorithm based on dynamic weighted centroid for wireless sensor networks is proposed in this paper. The idea is that each anchor node computes its position error through the neighbor anchor nodes within its range; this position error is transformed into a distance error, and a dynamic weight is computed from the distance between the unknown node and the anchor node together with the anchor node's distance error. Each unknown node then uses the coordinates of the anchor nodes within its range and the dynamic weights to compute its own coordinates. Simulation results show that the localization accuracy of the proposed algorithm is better than that of the traditional centroid localization algorithm and the weighted centroid localization algorithm; the position error of all three algorithms decreases as the radius increases, and the decreasing trend of our algorithm is the most significant.

  9. The subpixel resolution of optical-flow-based modal analysis

    Science.gov (United States)

    Javh, Jaka; Slavič, Janko; Boltežar, Miha

    2017-05-01

    This research looks at the possibilities for full-field, non-contact displacement measurements based on high-speed video analyses. A simplified gradient-based optical flow method, optimised for subpixel harmonic displacements, is used to predict the resolution potential. The simplification assumes an image-gradient linearity, producing a linear relation between the light intensity and the displacement in the direction of the intensity gradient. The simplicity of the method enables each pixel or small subset to be viewed as a sensor. The resolution potential and the effect of noise are explored theoretically and tested in a synthetic experiment, which is followed by a real experiment. The identified displacement can be smaller than a thousandth of a pixel and subpixel displacements are recognisable, even with high image noise. The resolution and the signal-to-noise ratio are influenced by the dynamic range of the camera, the subset size and the sampling length. Real-world experiments were performed to validate and demonstrate the method using a monochrome high-speed camera. One-dimensional mode shapes of a steel beam are recognisable even at the maximum displacement amplitude of 0.0008 pixel (equal to 0.2 μm) and multiple out-of-plane mode shapes are recognisable from the high-speed video of a vibrating cymbal.
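
    A minimal 1-D sketch of the simplified gradient-based relation described above, using a synthetic intensity profile: in the small-displacement limit, the intensity change divided by the local spatial gradient recovers a subpixel shift. The profile, shift and threshold are made-up values for illustration, not the authors' implementation:

        import numpy as np

        # Synthetic 1-D intensity profile and a copy shifted by a subpixel amount.
        x = np.arange(200, dtype=float)
        profile = np.exp(-((x - 100.0) / 15.0) ** 2)        # reference frame
        true_shift = 0.03                                    # pixels
        shifted = np.exp(-((x - 100.0 - true_shift) / 15.0) ** 2)

        # Linearised optical flow: I_shifted - I_ref ≈ -shift * dI/dx,
        # so every pixel with a usable gradient acts as a displacement sensor.
        grad = np.gradient(profile)
        mask = np.abs(grad) > 0.01 * np.abs(grad).max()      # keep well-conditioned pixels
        est = -np.mean((shifted - profile)[mask] / grad[mask])
        print(f"estimated shift: {est:.4f} px (true {true_shift} px)")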

  10. Determination of star bodies from p-centroid bodies

    Indian Academy of Sciences (India)

    In this paper, we prove that an origin-symmetric star body is uniquely determined by its p-centroid body. Furthermore, using spherical harmonics, we establish a result for non-symmetric star bodies. As an application, we show that there is a unique member of p⟨K⟩ characterized by having larger volume than any other ...

  11. Determination of star bodies from p-centroid bodies

    Indian Academy of Sciences (India)

    Abstract. In this paper, we prove that an origin-symmetric star body is uniquely determined by its p-centroid body. Furthermore, using spherical harmonics, we establish a result for non-symmetric star bodies. As an application, we show that there is a unique member of p〈K〉 characterized by having larger volume than any ...

  12. Networks and centroid metrics for understanding football | Gama ...

    African Journals Online (AJOL)

    This study aimed to verify the network of contacts resulting from the collective behaviour of professional football teams through the centroid method and networks as well, thereby providing detailed information about the match to coaches and sport analysts. For this purpose, 999 collective attacking actions from two teams were ...

  13. Thorough statistical comparison of machine learning regression models and their ensembles for sub-pixel imperviousness and imperviousness change mapping

    Science.gov (United States)

    Drzewiecki, Wojciech

    2017-12-01

    We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R2. These measures were also used to assess the estimates of imperviousness change intensity. The applicability for detection of relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results proved that the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models, or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detecting subtle changes of imperviousness inside the pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels if the relevant change threshold was set at one or three percent. Also, for a five percent change threshold, most of the algorithms did not ensure that the accuracy of the change map was higher than that of a random classifier. For a relevant change threshold of ten percent, all approaches performed satisfactorily.

  14. Characterizing Subpixel Spatial Resolution of a Hybrid CMOS Detector

    Science.gov (United States)

    Bray, Evan; Burrows, Dave; Chattopadhyay, Tanmoy; Falcone, Abraham; Hull, Samuel; Kern, Matthew; McQuaide, Maria; Wages, Mitchell

    2018-01-01

    The detection of X-rays is a unique process relative to other wavelengths, and allows for some novel features that increase the scientific yield of a single observation. Unlike lower photon energies, X-rays liberate a large number of electrons from the silicon absorber array of the detector. This number is usually on the order of several hundred to a thousand for moderate-energy X-rays. These electrons tend to diffuse outward into what is referred to as the charge cloud. This cloud can then be picked up by several pixels, forming a specific pattern based on the exact incident location. By conducting the first ever "mesh experiment" on a hybrid CMOS detector (HCD), we have experimentally determined the charge cloud shape and used it to characterize the responsivity of the detector with subpixel spatial resolution.
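
    The subpixel position of a split X-ray event can be estimated from the charge-weighted centre of gravity of the pixels that collect the charge cloud. A minimal sketch with a hypothetical 3x3 charge pattern (illustrative only; it is not the mesh-experiment calibration procedure described above):

        import numpy as np

        def event_centroid(island, origin=(0, 0)):
            """Charge-weighted centre of gravity of an event island, in pixel units (row, col)."""
            island = np.asarray(island, dtype=float)
            rows, cols = np.indices(island.shape)
            total = island.sum()
            return (origin[0] + (rows * island).sum() / total,
                    origin[1] + (cols * island).sum() / total)

        # Hypothetical 3x3 charge pattern (ADU) for a single split X-ray event.
        island = [[ 10,  60,  15],
                  [ 40, 800, 120],
                  [  5,  70,  20]]
        print(event_centroid(island))   # subpixel (row, col) within the island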

  15. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2015-02-01

    Full Text Available This study aimed to evaluate the efficiency of multiple centroids for studying the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bi-segmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). In this context, we used data on the evaluation of dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two repetitions carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it produced no ambiguous indications, provided that the ideotypes were defined according to the researcher's interest, facilitating data interpretation.

  16. Non-obtuse Remeshing with Centroidal Voronoi Tessellation

    KAUST Repository

    Yan, Dongming

    2015-12-03

    We present a novel remeshing algorithm that avoids triangles with small angles as well as triangles with large (obtuse) angles. Our solution is based on an extension of Centroidal Voronoi Tessellation (CVT). We augment the original CVT formulation by a penalty term that penalizes short Voronoi edges, while the CVT term helps to avoid small angles. Our results show significant improvements of the remeshing quality over the state of the art.
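
    A minimal sketch of the plain Lloyd iteration that underlies CVT: sites repeatedly move to the centroids of their Voronoi regions, approximated here by nearest-site assignment of dense sample points. The domain, sample count and iteration count are arbitrary, and the short-edge penalty term described above is not included:

        import numpy as np

        rng = np.random.default_rng(0)
        samples = rng.random((20000, 2))          # dense points approximating the unit square
        sites = rng.random((50, 2))               # initial Voronoi generators

        for _ in range(30):                       # Lloyd iterations
            # Assign each sample to its nearest site (discrete Voronoi regions).
            d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            # Move each site to the centroid of its region (skip empty regions).
            for k in range(len(sites)):
                region = samples[labels == k]
                if len(region):
                    sites[k] = region.mean(axis=0)

        print(sites[:5])                          # generators of a near-centroidal tessellation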

  17. Plasma channel diagnostic based on laser centroid oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Gonsalves, A. J.; Nakamura, K.; Lin, C.; Osterhoff, J.; Shiraishi, S.; Schroeder, C. B.; Geddes, C. G. R.; Tóth, Cs.; Esarey, E.; Leemans, W. P.

    2010-05-01

    A technique has been developed for measuring the properties of discharge-based plasma channels by monitoring the centroid location of a laser beam exiting the channel as a function of input alignment offset between the laser and the channel. Experiments were performed using low-intensity (<10^14 W cm^-2) laser pulses focused onto the entrance of a hydrogen-filled capillary discharge waveguide. Scanning the laser centroid position at the input of the channel and recording the exit position allows determination of the channel depth with an accuracy of a few percent, measurement of the transverse channel shape, and inference of the matched spot size. In addition, accurate alignment of the laser beam through the plasma channel is provided by minimizing laser centroid motion at the channel exit as the channel depth is scanned either by scanning the plasma density or the discharge timing. The improvement in alignment accuracy provided by this technique will be crucial for minimizing electron beam pointing errors in laser plasma accelerators.

  18. Star centroiding error compensation for intensified star sensors.

    Science.gov (United States)

    Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun

    2016-12-26

    A star sensor provides high-precision attitude information by capturing a stellar image; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. In the intensified star sensor, an image intensifier is utilized to improve the sensitivity, thereby improving the dynamic performance of the star sensor. However, the introduction of the image intensifier decreases the star centroiding accuracy, which in turn degrades the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce these influences. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on the orthographic projection through an analysis of the errors introduced by the image intensifier. Thereafter, the position errors at the target points based on the model are obtained by using the Levenberg-Marquardt (LM) optimization method. Finally, the nearest trigonometric interpolation method is presented to compensate for the arbitrary centroiding error over the image plane. Laboratory calibration results and night sky experiment results show that the compensation method effectively eliminates the error introduced by the image intensifier, thus remarkably improving the precision of intensified star sensors.

  19. Mapping the Spatial Distribution of Winter Crops at Sub-Pixel Level Using AVHRR NDVI Time Series and Neural Nets

    Directory of Open Access Journals (Sweden)

    Felix Rembold

    2013-03-01

    Full Text Available For large areas, it is difficult to assess the spatial distribution and inter-annual variation of crop acreages through field surveys. Such information, however, is of great value for governments, land managers, planning authorities, commodity traders and environmental scientists. Time series of coarse resolution imagery offer the advantage of global coverage at low costs, and are therefore suitable for large-scale crop type mapping. Due to their coarse spatial resolution, however, the problem of mixed pixels has to be addressed. Traditional hard classification approaches cannot be applied because of sub-pixel heterogeneity. We evaluate neural networks as a modeling tool for sub-pixel crop acreage estimation. The proposed methodology is based on the assumption that different cover type proportions within coarse pixels prompt changes in time profiles of remotely sensed vegetation indices like the Normalized Difference Vegetation Index (NDVI. Neural networks can learn the relation between temporal NDVI signatures and the sought crop acreage information. This learning step permits a non-linear unmixing of the temporal information provided by coarse resolution satellite sensors. For assessing the feasibility and accuracy of the approach, a study region in central Italy (Tuscany was selected. The task consisted of mapping the spatial distribution of winter crops abundances within 1 km AVHRR pixels between 1988 and 2001. Reference crop acreage information for network training and validation was derived from high resolution Thematic Mapper/Enhanced Thematic Mapper (TM/ETM+ images and official agricultural statistics. Encouraging results were obtained demonstrating the potential of the proposed approach. For example, the spatial distribution of winter crop acreage at sub-pixel level was mapped with a cross-validated coefficient of determination of 0.8 with respect to the reference information from high resolution imagery. For the eight years for which

  20. Evaluating Fourier Cross-Correlation Sub-Pixel Registration in Landsat Images

    National Research Council Canada - National Science Library

    Jaime Almonacid-Caballer; Josep E Pardo-Pascual; Luis A Ruiz

    2017-01-01

    .... In this paper, we simulate Landsat scenes to evaluate, by means of an exhaustive number of tests, a subpixel registration process based on phase correlation and the upsampling of the Fourier transform...

  1. Sub-pixel analysis to enhance the accuracy of evapotranspiration determined using MODIS images

    National Research Council Canada - National Science Library

    Abdalhaleem A Hassaballa; Abdul-Nasir Matori; Khalid A Al-Gaadi; Elkamil H Tola; Rangaswamy Madugundu

    2017-01-01

    ...) were recorded at the time of satellite overpass. In order to enhance the accuracy of the generated ET maps, MODIS images were subjected to sub-pixel analysis by assigning weights for different land surface cover...

  2. Autonomous Sub-Pixel Satellite Track Endpoint Determination for Space Based Images

    Energy Technology Data Exchange (ETDEWEB)

    Simms, L M

    2011-03-07

    An algorithm for determining satellite track endpoints with sub-pixel resolution in space-based images is presented. The algorithm allows for significant curvature in the imaged track due to rotation of the spacecraft capturing the image. The motivation behind the subpixel endpoint determination is first presented, followed by a description of the methodology used. Results from running the algorithm on real ground-based and simulated space-based images are shown to highlight its effectiveness.

  3. Quantifying Sub-Pixel Surface Water Coverage in Urban Environments Using Low-Albedo Fraction from Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Weiwei Sun

    2017-05-01

    Full Text Available The problem of mixed pixels negatively affects the delineation of accurate surface water in Landsat Imagery. Linear spectral unmixing has been demonstrated to be a powerful technique for extracting surface materials at a sub-pixel scale. Therefore, in this paper, we propose an innovative low albedo fraction (LAF method based on the idea of unconstrained linear spectral unmixing. The LAF stands on the “High Albedo-Low Albedo-Vegetation” model of spectral unmixing analysis in urban environments, and investigates the urban surface water extraction problem with the low albedo fraction map. Three experiments are carefully designed using Landsat TM/ETM+ images on the three metropolises of Wuhan, Shanghai, and Guangzhou in China, and per-pixel and sub-pixel accuracies are estimated. The results are compared against extraction accuracies from three popular water extraction methods including the normalized difference water index (NDWI, modified normalized difference water index (MNDWI, and automated water extraction index (AWEI. Experimental results show that LAF achieves a better accuracy when extracting urban surface water than both MNDWI and AWEI do, especially in boundary mixed pixels. Moreover, the LAF has the smallest threshold variations among the three methods, and the fraction threshold of 1 is a proper choice for LAF to obtain good extraction results. Therefore, the LAF is a promising approach for extracting urban surface water coverage.
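
    A minimal sketch of unconstrained linear spectral unmixing for a single pixel under the "High Albedo-Low Albedo-Vegetation" model described above; the low-albedo fraction is the coefficient on the low-albedo endmember. The endmember spectra and the mixed pixel are made up for illustration, and the fraction thresholding step is omitted:

        import numpy as np

        # Hypothetical endmember spectra (6 bands): high albedo, low albedo, vegetation.
        E = np.array([[0.35, 0.40, 0.45, 0.50, 0.55, 0.60],    # high albedo
                      [0.04, 0.05, 0.05, 0.06, 0.06, 0.07],    # low albedo (water/shadow)
                      [0.03, 0.06, 0.04, 0.40, 0.30, 0.15]]).T   # vegetation -> shape (6, 3)

        pixel = 0.6 * E[:, 1] + 0.3 * E[:, 2] + 0.1 * E[:, 0]  # synthetic mixed pixel

        # Unconstrained least-squares unmixing: pixel ≈ E @ fractions.
        fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
        laf = fractions[1]                                       # low-albedo fraction
        print(fractions, laf)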

  4. Subpixel Snow Cover Mapping from MODIS Data by Nonparametric Regression Splines

    Science.gov (United States)

    Akyurek, Z.; Kuter, S.; Weber, G. W.

    2016-12-01

    Spatial extent of snow cover is often considered as one of the key parameters in climatological, hydrological and ecological modeling due to its energy storage, high reflectance in the visible and NIR regions of the electromagnetic spectrum, significant heat capacity and insulating properties. A significant challenge in snow mapping by remote sensing (RS) is the trade-off between the temporal and spatial resolution of satellite imageries. In order to tackle this issue, machine learning-based subpixel snow mapping methods, like Artificial Neural Networks (ANNs), from low or moderate resolution images have been proposed. Multivariate Adaptive Regression Splines (MARS) is a nonparametric regression tool that can build flexible models for high dimensional and complex nonlinear data. Although MARS is not often employed in RS, it has various successful implementations such as estimation of vertical total electron content in ionosphere, atmospheric correction and classification of satellite images. This study is the first attempt in RS to evaluate the applicability of MARS for subpixel snow cover mapping from MODIS data. Total 16 MODIS-Landsat ETM+ image pairs taken over European Alps between March 2000 and April 2003 were used in the study. MODIS top-of-atmospheric reflectance, NDSI, NDVI and land cover classes were used as predictor variables. Cloud-covered, cloud shadow, water and bad-quality pixels were excluded from further analysis by a spatial mask. MARS models were trained and validated by using reference fractional snow cover (FSC) maps generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also developed. The mutual comparison of obtained MARS and ANN models was accomplished on independent test areas. The MARS model performed better than the ANN model with an average RMSE of 0.1288 over the independent test areas; whereas the average RMSE of the ANN model

  5. Mixture-Tuned, Clutter Matched Filter for Remote Detection of Subpixel Spectral Signals

    Science.gov (United States)

    Thompson, David R.; Mandrake, Lukas; Green, Robert O.

    2013-01-01

    Mapping localized spectral features in large images demands sensitive and robust detection algorithms. Two aspects of large images that can harm matched-filter detection performance are addressed simultaneously. First, multimodal backgrounds may thwart the typical Gaussian model. Second, outlier features can trigger false detections from large projections onto the target vector. Two state-of-the-art approaches are combined that independently address outlier false positives and multimodal backgrounds. The background clustering models multimodal backgrounds, and the mixture tuned matched filter (MT-MF) addresses outliers. Combining the two methods captures significant additional performance benefits. The resulting mixture tuned clutter matched filter (MT-CMF) shows effective performance on simulated and airborne datasets. The classical MNF transform was applied, followed by k-means clustering. Then, each cluster's mean, covariance, and the corresponding eigenvalues were estimated. This yields a cluster-specific matched filter estimate as well as a cluster-specific feasibility score to flag outlier false positives. The technology described is a proof of concept that may be employed in future target detection and mapping applications for remote imaging spectrometers. It is of most direct relevance to JPL proposals for airborne and orbital hyperspectral instruments. Applications include subpixel target detection in hyperspectral scenes for military surveillance. Earth science applications include mineralogical mapping, species discrimination for ecosystem health monitoring, and land use classification.
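
    A minimal sketch of the per-cluster matched-filter score that this kind of detector builds on, with a synthetic single-cluster background and an assumed target spectrum; the MNF transform, k-means clustering and the mixture-tuned feasibility test described above are omitted:

        import numpy as np

        rng = np.random.default_rng(1)
        bands = 30
        background = rng.normal(0.2, 0.02, size=(5000, bands))    # one background cluster
        target = np.linspace(0.1, 0.6, bands)                      # assumed target signature

        mu = background.mean(axis=0)
        cov = np.cov(background, rowvar=False)
        cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(bands))        # regularised inverse

        def matched_filter_score(x):
            """Classical matched-filter abundance estimate for spectrum x in this cluster."""
            d = target - mu
            return (x - mu) @ cov_inv @ d / (d @ cov_inv @ d)

        mixed = 0.95 * background[0] + 0.05 * target                # 5% subpixel target
        print(matched_filter_score(background[1]), matched_filter_score(mixed))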

  6. A Framework Based on 2-D Taylor Expansion for Quantifying the Impacts of Sub-Pixel Reflectance Variance and Covariance on Cloud Optical Thickness and Effective Radius Retrievals Based on the Bi-Spectral Method

    Science.gov (United States)

    Zhang, Z.; Werner, F.; Cho, H. -M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry

    2016-01-01

    to estimate the retrieval uncertainty from sub-pixel reflectance variations in operational satellite cloud products and to help understand the differences in τ and re retrievals between two instruments.

  7. A Framework Based on 2-D Taylor Expansion for Quantifying the Impacts of Subpixel Reflectance Variance and Covariance on Cloud Optical Thickness and Effective Radius Retrievals Based on the Bispectral Method

    Science.gov (United States)

    Zhang, Z.; Werner, F.; Cho, H.-M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, K.

    2016-01-01

    framework can be used to estimate the retrieval uncertainty from subpixel reflectance variations in operational satellite cloud products and to help understand the differences in t and re retrievals between two instruments.

  8. The generation of spatial population distributions from census centroid data.

    Science.gov (United States)

    Bracken, I; Martin, D

    1989-04-01

    "Census data are commonly used in geographical analysis and to inform planning purposes, though at the disaggregate level the basis of enumeration poses difficulties. In this paper an approach to surface generation is described that offers the prospect of revealing an underlying population distribution from centroid-based data which is independent of zonal geography. It is suggested that this can serve a wide variety of analytical, cartographic, and policy purposes, including the creation of spatial indicators of economic and social conditions and enhancing the value of census data. The approach is illustrated by reference to an analysis of part of the valleys of South Wales, in the United Kingdom." excerpt

  9. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-04-15

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.

  10. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-04-01

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.

  11. Subpixel Inundation Mapping Using Landsat-8 OLI and UAV Data for a Wetland Region on the Zoige Plateau, China

    Directory of Open Access Journals (Sweden)

    Haoming Xia

    2017-01-01

    Full Text Available Wetland inundation is crucial to the survival and prosperity of fauna and flora communities in wetland ecosystems. Even small changes in surface inundation may result in a substantial impact on the wetland ecosystem characteristics and function. This study presented a novel method for wetland inundation mapping at a subpixel scale in a typical wetland region on the Zoige Plateau, northeast Tibetan Plateau, China, by combining use of an unmanned aerial vehicle (UAV and Landsat-8 Operational Land Imager (OLI data. A reference subpixel inundation percentage (SIP map at a Landsat-8 OLI 30 m pixel scale was first generated using high resolution UAV data (0.16 m. The reference SIP map and Landsat-8 OLI imagery were then used to develop SIP estimation models using three different retrieval methods (Linear spectral unmixing (LSU, Artificial neural networks (ANN, and Regression tree (RT. Based on observations from 2014, the estimation results indicated that the estimation model developed with RT method could provide the best fitting results for the mapping wetland SIP (R2 = 0.933, RMSE = 8.73% compared to the other two methods. The proposed model with RT method was validated with observations from 2013, and the estimated SIP was highly correlated with the reference SIP, with an R2 of 0.986 and an RMSE of 9.84%. This study highlighted the value of high resolution UAV data and globally and freely available Landsat data in combination with the developed approach for monitoring finely gradual inundation change patterns in wetland ecosystems.

  12. Low-Frequency Centroid Moment Tensor Inversion of the 2015 Illapel Earthquake from Superconducting-Gravimeter Data

    Science.gov (United States)

    Zábranová, Eliška; Matyska, Ctirad

    2016-04-01

    After the 2015 Illapel earthquake the radial and spheroidal modes up to 1 mHz were registered by the network of superconducting gravimeters. These data provide a unique opportunity to obtain ultralow-frequency estimates of several centroid moment tensor components. We employ superconducting-gravimeter records of 60-h length and perform the joint inversion for the M_{rr}, (M_{\vartheta \vartheta }-M_{\varphi \varphi })/2 and M_{\vartheta \varphi } centroid moment tensor components from spheroidal modes up to 1 mHz. The M_{rr} component is also obtained from an independent inversion of the radial modes _0S_0 and _1S_0. Our results are consistent with the published solutions obtained from higher frequency data, thus suggesting a negligible slow afterslip phenomenon.

  13. Digital pupillometry and centroid shift changes after cataract surgery.

    Science.gov (United States)

    Kanellopoulos, Anastasios John; Asimellis, George; Georgiadou, Stella

    2015-02-01

    To compare postoperative changes in apparent photopic and mesopic pupil size and centration in relation to cornea reflection landmarks after cataract surgery. LaserVision.gr Clinical and Research Eye Institute, Athens, Greece. Prospective consecutive case study. Pupils were imaged for pupil size and corneal vertex location before and 1-month after cataract surgery. Digital analysis of pupil images was used to determine the Cartesian coordinates (nasal-temporal, horizontal axis, superior-inferior, vertical axis) of the first Purkinje reflection point (approximating the corneal intersection of the visual axis [corneal vertex]) to the pupil geometric center (approximating the corneal intersection of the line of sight [corneal apex]). Pupil size changes were measured, and the correlation between vertex-to-apex shift changes and postoperative pupil centroid shift was evaluated. The study evaluated 40 eyes. The pupil size (diameter) change corresponded to a relative reduction of -9.8% for photopic pupils and -9.1% for mesopic pupils; the difference was statistically significant (P = .045 and P = .011, respectively). Also, there was a reduction in the centroid shift (all eyes) from a mean of 0.12 mm preoperatively to 0.05 mm postoperatively as a result of the postoperative minus temporal horizontal difference between the corneal vertex and the apex. Cataract extraction surgery appears to affect pupil size and centration. Specifically, a smaller pupil and less temporal shift were recorded. These data may have clinical relevance in targeted intraoperative intraocular lens centration.

  14. Rapid determinations of centroid moment tensor in Turkey

    Science.gov (United States)

    Nakano, Masaru; Citak, Seckin; Kalafat, Dogan

    2015-04-01

    Rapid determination of the centroid moment tensor (CMT) of earthquakes, namely the source centroid location, focal mechanism, and magnitude, is important for early disaster response and for issuing tsunami warnings. Using the SWIFT system (Source parameter determinations based on Waveform Inversion of Fourier Transformed seismograms) developed by Nakano et al. (2008), we are developing an earthquake monitoring system in Turkey. Determinations of CMTs for background seismicity can also resolve the stress field in the crust, which may contribute to evaluating earthquake potential, developing scenarios for future disastrous earthquakes, or finding hidden faults in the crust. Using data from the regional network in Turkey, we have tried a waveform inversion for an M=4.4 earthquake that occurred about 50 km south of the Sea of Marmara, with a source location of 40.0N, 27.9E at 15 km depth (after the ANSS Comprehensive Catalog). We successfully obtained the CMT solution, showing a right-lateral strike-slip fault, one of whose nodal planes strikes ENE-WSW, corresponding to the strike of an active fault mapped here. This fault runs parallel to the North Anatolian fault, and large earthquakes of Ms 7.2 and 7.0 ruptured this fault in 1953 and 1964, respectively. Using the regional network data, we can determine CMTs for earthquakes as small as about magnitude 4; of course, the lower limit of magnitude depends on the data quality. In the research project of SATREPS - Earthquake and tsunami disaster mitigation in the Marmara region and disaster education in Turkey, we will develop a CMT determination system and a CMT catalogue for Turkey.

  15. Subpixel displacement measurement method based on the combination of particle swarm optimization and gradient algorithm

    Science.gov (United States)

    Guang, Chen; Qibo, Feng; Keqin, Ding; Zhan, Gao

    2017-10-01

    A subpixel displacement measurement method based on the combination of particle swarm optimization (PSO) and the gradient algorithm (GA) was proposed to optimize the accuracy and speed of the GA, yielding a subpixel displacement measurement method better suited to engineering practice. An initial integer-pixel value is obtained using the global searching ability of PSO, and gradient operators are then adopted for the subpixel displacement search. The method was compared with the GA on simulated speckle images and on rigid-body displacements of metal specimens. The results showed that the computational accuracy of the combined PSO and GA method reached 0.1 pixel in the simulated speckle images, and even 0.01 pixel for the metal specimen. The computational efficiency and anti-noise performance of the improved method were also markedly enhanced.
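
    A minimal 1-D sketch of the same two-stage structure (a global integer-pixel search followed by a local subpixel refinement). Here an exhaustive correlation search stands in for the PSO stage and a three-point parabolic fit of the correlation peak stands in for the gradient refinement, so it illustrates the idea rather than the authors' exact algorithm; the signals and the true shift are synthetic:

        import numpy as np

        def subpixel_shift_1d(ref, moved, max_shift=10):
            """Integer-pixel search by correlation, then parabolic subpixel refinement."""
            shifts = np.arange(-max_shift, max_shift + 1)
            scores = np.array([np.dot(ref, np.roll(moved, -s)) for s in shifts])
            k = scores.argmax()                                    # coarse (integer) stage
            if 0 < k < len(shifts) - 1:                            # 3-point parabola fit
                y0, y1, y2 = scores[k - 1], scores[k], scores[k + 1]
                delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
            else:
                delta = 0.0
            return shifts[k] + delta                               # subpixel estimate

        x = np.arange(256, dtype=float)
        ref = np.exp(-((x - 128.0) / 9.0) ** 2)
        moved = np.exp(-((x - 128.0 - 3.4) / 9.0) ** 2)            # true shift 3.4 px
        print(subpixel_shift_1d(ref, moved))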

  16. Simulating urban land cover changes at sub-pixel level in a coastal city

    Science.gov (United States)

    Zhao, Xiaofeng; Deng, Lei; Feng, Huihui; Zhao, Yanchuang

    2014-10-01

    The simulation of urban expansion or land cover changes is a major theme in both geographic information science and landscape ecology. Yet until now, almost all previous studies have been based on grid computations at the pixel level. With the prevalence of spectral mixture analysis in urban land cover research, the simulation of urban land cover at the sub-pixel level is being put on the agenda. This study provides a new approach to land cover simulation at the sub-pixel level. Landsat TM/ETM+ images of Xiamen city, China, from January 2002 and January 2007 were used to acquire land cover data through supervised classification. The two classified land cover datasets were then utilized to extract the transformation rules between 2002 and 2007 using logistic regression. The transformation probability of each land cover type in a given pixel was taken as its percentage in the same pixel after normalization, and cellular automata (CA) based grid computation was carried out to obtain the simulated land cover for 2007. The simulated 2007 sub-pixel land cover was verified against a validated sub-pixel land cover map obtained by spectral mixture analysis in our previous studies for the same date. Finally, the sub-pixel land cover for 2017 was simulated for urban planning and management. The results show that our method is useful for land cover simulation at the sub-pixel level. Although the simulation accuracy is not yet satisfactory for all land cover types, the approach provides an important idea and a good start for CA-based urban land cover simulation.

  17. Sentinel-2’s Potential for Sub-Pixel Landscape Feature Detection

    Directory of Open Access Journals (Sweden)

    Julien Radoux

    2016-06-01

    Full Text Available Land cover and land use maps derived from satellite remote sensing imagery are critical to support biodiversity and conservation, especially over large areas. With its 10 m to 20 m spatial resolution, Sentinel-2 is a promising sensor for the detection of a variety of landscape features of ecological relevance. However, many components of the ecological network are still smaller than the 10 m pixel, i.e., they are sub-pixel targets that stretch the sensor's resolution to its limit. This paper proposes a framework to empirically estimate the minimum object size for an accurate detection of a set of structuring landscape foreground/background pairs. The developed method combines a spectral separability analysis and an empirical point spread function estimation for Sentinel-2. The same approach was also applied to Landsat-8 and SPOT-5 (Take 5), which can be considered as similar in terms of spectral definition and spatial resolution, respectively. Results show that Sentinel-2 performs consistently on both aspects. A large number of indices have been tested along with the individual spectral bands, and target discrimination was possible in all but one case. Overall, results for Sentinel-2 highlight the critical importance of a good compromise between the spatial and the spectral resolution. For instance, the Sentinel-2 road detection limit was 3 m, and small water bodies are separable with a diameter larger than 11 m. In addition, the analysis of spectral mixtures draws attention to the uneven sensitivity of a variety of spectral indices. The proposed framework could be implemented to assess the fitness for purpose of future sensors within a large range of applications.

  18. Radial lens distortion correction with sub-pixel accuracy for X-ray micro-tomography.

    Science.gov (United States)

    Vo, Nghia T; Atwood, Robert C; Drakopoulos, Michael

    2015-12-14

    Distortion correction or camera calibration for an imaging system which is highly configurable and requires frequent disassembly for maintenance or replacement of parts needs a speedy method for recalibration. Here we present direct techniques for calculating distortion parameters of a non-linear model based on the correct determination of the center of distortion. These techniques are fast, very easy to implement, and accurate at sub-pixel level. The implementation at the X-ray tomography system of the I12 beamline, Diamond Light Source, which strictly requires sub-pixel accuracy, shows excellent performance in the calibration image and in the reconstructed images.

  19. Bayesian ISOLA: new tool for automated centroid moment tensor inversion

    Science.gov (United States)

    Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John

    2017-08-01

    We have developed a new, fully automated tool for the centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection where station components with various instrumental disturbances are rejected and full-waveform inversion in a space-time grid around a provided hypocentre. A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequency ranges. The method is tested on synthetic and observed data. It is applied on a data set from the Swiss seismic network and the results are compared with the existing high-quality MT catalogue. The software package programmed in Python is designed to be as versatile as possible in order to be applicable in various networks ranging from local to regional. The method can be applied either to the everyday network data flow, or to process large pre-existing earthquake catalogues and data sets.

  20. Photon counting imaging and centroiding with an electron-bombarded CCD using single molecule localisation software

    Energy Technology Data Exchange (ETDEWEB)

    Hirvonen, Liisa M.; Barber, Matthew J.; Suhling, Klaus, E-mail: klaus.suhling@kcl.ac.uk

    2016-06-01

    Photon event centroiding in photon counting imaging and single-molecule localisation in super-resolution fluorescence microscopy share many traits. Although photon event centroiding has traditionally been performed with simple single-iteration algorithms, we recently reported that iterative fitting algorithms originally developed for single-molecule localisation fluorescence microscopy work very well when applied to centroiding photon events imaged with an MCP-intensified CMOS camera. Here, we have applied these algorithms for centroiding of photon events from an electron-bombarded CCD (EBCCD). We find that centroiding algorithms based on iterative fitting of the photon events yield excellent results and allow fitting of overlapping photon events, a feature not reported before and an important aspect to facilitate an increased count rate and shorter acquisition times.
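
    A minimal sketch of iterative centroiding by fitting a 2-D Gaussian to a single photon event, in the spirit of single-molecule localisation fitting; the 9x9 event image is synthetic and the circular Gaussian model with a free width is an assumption, not the authors' exact fitting model:

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, amp, x0, y0, sigma, offset):
            x, y = coords
            return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
                    + offset).ravel()

        # Synthetic 9x9 photon event with a subpixel true centre and Poisson-like noise.
        yy, xx = np.mgrid[0:9, 0:9]
        rng = np.random.default_rng(2)
        truth = gauss2d((xx, yy), 500.0, 4.3, 3.7, 1.2, 10.0).reshape(9, 9)
        event = rng.poisson(truth).astype(float)

        p0 = (event.max(), 4.0, 4.0, 1.0, np.median(event))        # rough initial guess
        popt, _ = curve_fit(gauss2d, (xx, yy), event.ravel(), p0=p0)
        print("fitted centroid (x, y):", popt[1], popt[2])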

  1. Physical Simulator of Infrared Spectroradiometer with Spatial Resolution Enhancement Using Subpixel Image Registration and Processing

    Directory of Open Access Journals (Sweden)

    Lyalko, V.І.

    2015-11-01

    Full Text Available The mathematical and physical models of the new frame infrared spectroradiometer, based on a microbolometer array sensor with subpixel image registration, are presented. It is planned to include the radiometer in the onboard instrumentation of the future «Sich» satellite system for physical characterization of the land surface using enhanced-spatial-resolution infrared space imagery.

  2. Sub-Pixel Magnetic Field and Plasma Dynamics Derived from Photospheric Spectral Data

    Science.gov (United States)

    Rasca, Anthony P.; Chen, James; Pevtsov, Alexei A.

    2017-08-01

    Current high-resolution observations of the photosphere show small dynamic features at the resolving limit during emerging flux events. However, line-of-sight (LOS) magnetogram pixels only contain the net uncanceled magnetic flux, which is expected to increase for fixed regions as resolution limits improve. Using a new method with spectrographic images, we quantify distortions in photospheric absorption (or emission) lines caused by sub-pixel magnetic field and plasma dynamics in the vicinity of active regions and emerging flux events. Absorption lines—quantified by their displacement, width, asymmetry, and peakedness—have previously been used with Stokes I images from SOLIS/VSM to relate line distortions with sub-pixel plasma dynamics driven by solar flares or small-scale flux ropes. The method is extended to include the full Stokes parameters and relate inferred sub-pixel dynamics with small-scale magnetic fields. Our analysis is performed on several sets of spectrographic images taken by SOLIS/VSM while observing eruptive and non-eruptive active regions. We discuss the results of this application and their relevance for understanding magnetic fields signatures and coupled plasma properties on sub-pixel scales.
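
    A minimal sketch of quantifying a line profile by its first four moments (displacement, width, asymmetry/skewness and peakedness/kurtosis), using a synthetic absorption line as weight; the wavelength grid, line model and continuum level are made-up values and the actual SOLIS/VSM analysis pipeline is not reproduced:

        import numpy as np

        def line_moments(wavelength, intensity, continuum=1.0):
            """Moments of the absorption-depth profile: centre, width, skewness, kurtosis."""
            depth = np.clip(continuum - intensity, 0.0, None)   # absorption depth as weight
            w = depth / depth.sum()
            centre = np.sum(w * wavelength)                      # displacement
            var = np.sum(w * (wavelength - centre) ** 2)
            width = np.sqrt(var)
            skew = np.sum(w * (wavelength - centre) ** 3) / width ** 3   # asymmetry
            kurt = np.sum(w * (wavelength - centre) ** 4) / var ** 2     # peakedness
            return centre, width, skew, kurt

        lam = np.linspace(-1.0, 1.0, 401)                        # relative wavelength (Å)
        line = 1.0 - 0.6 * np.exp(-((lam - 0.05) / 0.12) ** 2)   # synthetic shifted line
        print(line_moments(lam, line))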

  3. Exploring the limits of identifying sub-pixel thermal features using ASTER TIR data

    Science.gov (United States)

    Vaughan, R.G.; Keszthelyi, L.P.; Davies, A.G.; Schneider, D.J.; Jaworowski, C.; Heasler, H.

    2010-01-01

    Understanding the characteristics of volcanic thermal emissions and how they change with time is important for forecasting and monitoring volcanic activity and potential hazards. Satellite instruments view volcanic thermal features across the globe at various temporal and spatial resolutions. Thermal features that may be a precursor to a major eruption, or indicative of important changes in an on-going eruption can be subtle, making them challenging to reliably identify with satellite instruments. The goal of this study was to explore the limits of the types and magnitudes of thermal anomalies that could be detected using satellite thermal infrared (TIR) data. Specifically, the characterization of sub-pixel thermal features with a wide range of temperatures is considered using ASTER multispectral TIR data. First, theoretical calculations were made to define a "thermal mixing detection threshold" for ASTER, which quantifies the limits of ASTER's ability to resolve sub-pixel thermal mixing over a range of hot target temperatures and % pixel areas. Then, ASTER TIR data were used to model sub-pixel thermal features at the Yellowstone National Park geothermal area (hot spring pools with temperatures from 40 to 90 °C) and at Mount Erebus Volcano, Antarctica (an active lava lake with temperatures from 200 to 800 °C). Finally, various sources of uncertainty in sub-pixel thermal calculations were quantified for these empirical measurements, including pixel resampling, atmospheric correction, and background temperature and emissivity assumptions.
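
    A worked sketch of the sub-pixel thermal mixing idea: the pixel-integrated radiance at one TIR wavelength is modelled as an area-weighted mix of Planck radiances from a hot target and a cooler background, and the hot-area fraction is recovered by inverting that mix. The wavelength, temperatures and fraction are illustrative, and band response, emissivity and atmospheric effects are ignored here:

        import numpy as np

        H, C, KB = 6.626e-34, 2.998e8, 1.381e-23    # Planck, speed of light, Boltzmann (SI)

        def planck(wavelength_m, temp_k):
            """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
            a = 2.0 * H * C ** 2 / wavelength_m ** 5
            return a / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

        lam = 10.6e-6                                # a TIR wavelength (metres)
        t_hot, t_bg = 500.0, 280.0                   # hot target and background temperatures
        f_true = 0.02                                # hot target covers 2% of the pixel

        # Forward model: pixel radiance is the area-weighted mix of the two components.
        l_pix = f_true * planck(lam, t_hot) + (1.0 - f_true) * planck(lam, t_bg)

        # Inversion for the sub-pixel hot fraction, assuming both temperatures are known.
        f_est = (l_pix - planck(lam, t_bg)) / (planck(lam, t_hot) - planck(lam, t_bg))
        print(f_est)                                  # ~0.02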

  4. Background Suppression and Feature Based Spectroscopy Methods for Subpixel Material Identification

    Science.gov (United States)

    2012-08-03

    were computed from five regions of the scene representing gravel, sand, grass, trees, and stressed vegetation. ... "built-up areas using neural networks and subpixel demixing methods on multispectral/hyperspectral data", Proceedings of the 23rd Annual Conference of

  5. Segmented separable footprint projector for digital breast tomosynthesis and its application for subpixel reconstruction.

    Science.gov (United States)

    Zheng, Jiabei; Fessler, Jeffrey A; Chan, Heang-Ping

    2017-03-01

    Digital forward and back projectors play a significant role in iterative image reconstruction. The accuracy of the projector affects the quality of the reconstructed images. Digital breast tomosynthesis (DBT) often uses the ray-tracing (RT) projector that ignores finite detector element size. This paper proposes a modified version of the separable footprint (SF) projector, called the segmented separable footprint (SG) projector, that calculates efficiently the Radon transform mean value over each detector element. The SG projector is specifically designed for DBT reconstruction because of the large height-to-width ratio of the voxels generally used in DBT. This study evaluates the effectiveness of the SG projector in reducing projection error and improving DBT reconstruction quality. We quantitatively compared the projection error of the RT and the SG projector at different locations and their performance in regular and subpixel DBT reconstruction. Subpixel reconstructions used finer voxels in the imaged volume than the detector pixel size. Subpixel reconstruction with RT projector uses interpolated projection views as input to provide adequate coverage of the finer voxel grid with the traced rays. Subpixel reconstruction with the SG projector, however, uses the measured projection views without interpolation. We simulated DBT projections of a test phantom using CatSim (GE Global Research, Niskayuna, NY) under idealized imaging conditions without noise and blur, to analyze the effects of the projectors and subpixel reconstruction without other image degrading factors. The phantom contained an array of horizontal and vertical line pair patterns (1 to 9.5 line pairs/mm) and pairs of closely spaced spheres (diameters 0.053 to 0.5 mm) embedded at the mid-plane of a 5-cm-thick breast tissue-equivalent uniform volume. The images were reconstructed with regular simultaneous algebraic reconstruction technique (SART) and subpixel SART using different projectors. The

  6. Seismotectonics of Morocco from regional centroid moment tensors

    Science.gov (United States)

    Villaseñor, Antonio; el Moudnib, Lahcen; Herrmann, Robert B.; Harnafi, Mimoun

    2014-05-01

    We have obtained new regional centroid moment tensors (RCMTs) for 35 earthquakes that occurred in Morocco and vicinity between 2008 and 2012. During this time period an unprecedented number of broadband stations (more than 100) were operating in the region, providing high-quality waveform data that were used to obtain RCMTs from waveform inversion. The main part of this dataset was composed of temporary broadband stations concurrently deployed in different seismic experiments (i.e., IberArray, PICASSO, Muenster, Bristol). The events analyzed in this study are moderate in size, ranging in moment magnitude Mw from 3.5 to 4.8. Their predominant mechanisms correspond to reverse and strike-slip faulting, although normal and "mixed" mechanisms are also observed. In spite of this variability in mechanism type, two major groups can be distinguished when the mechanisms are analyzed in terms of the orientation of the P (compression) axes. The first group, corresponding to earthquakes in the Atlas and NE Morocco, is characterized by near-horizontal P axes oriented in an approximately NW-SE direction that coincides with the direction of convergence between Africa and Eurasia. A small clockwise rotation of the orientation of the P axes is observed from eastern Morocco to the western Atlas. The second group corresponds to earthquakes in the western Rif, which are also characterized by horizontal P axes, but oriented in a SW-NE direction, almost perpendicular to the first group. These earthquakes are part of a cluster located north of Ouezzane. The mechanisms in this second cluster are consistent with recent GPS results showing that the western Rif is moving in a SW direction with respect to the African (Nubia) plate.

  7. Model Independent Analysis of Beam Centroid Dynamics in Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chun-xi

    2003-04-21

    Fundamental issues in Beam-Position-Monitor (BPM)-based beam dynamics observations are studied in this dissertation. The major topic is the Model-Independent Analysis (MIA) of beam centroid dynamics. Conventional beam dynamics analysis requires a certain machine model, which itself often needs to be refined by beam measurements. Instead of using any particular machine model, MIA relies on a statistical analysis of the vast amount of BPM data that often can be collected non-invasively during normal machine operation. There are two major parts in MIA. One is noise reduction and degrees-of-freedom analysis using a singular value decomposition of a BPM-data matrix, which constitutes a principal component analysis of BPM data. The other is a physical base decomposition of the BPM-data matrix based on the time structure of pulse-by-pulse beam and/or machine parameters. The combination of these two methods allows one to break the resolution limit set by individual BPMs and observe beam dynamics at more accurate levels. A physical base decomposition is particularly useful for understanding various beam dynamics issues. MIA improves observation and analysis of beam dynamics and thus leads to better understanding and control of beams in both linacs and rings. The statistical nature of MIA makes it potentially useful in other fields. Another important topic discussed in this dissertation is the measurement of a nonlinear Poincare section (one-turn) map in circular accelerators. The beam dynamics in a ring is intrinsically nonlinear. In fact, nonlinearities are a major factor that limits stability and influences the dynamics of halos. The Poincare section map plays a basic role in characterizing and analyzing such a periodic nonlinear system. Although many kinds of nonlinear beam dynamics experiments have been conducted, no direct measurement of a nonlinear map has been reported for a ring in normal operation mode. This dissertation analyzes various issues concerning map
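
    A minimal sketch of the SVD step in MIA: pulse-by-pulse BPM readings form a matrix whose leading singular vectors capture the coherent centroid modes while the remaining ones are dominated by BPM noise. The data below are synthetic (a made-up tune, mode amplitudes and noise level), so this only illustrates the noise-reduction idea, not the NDCX or linac analysis itself:

        import numpy as np

        rng = np.random.default_rng(3)
        n_pulses, n_bpms = 2000, 40
        s_bpm = np.linspace(0, 3 * np.pi, n_bpms)            # BPM positions along the lattice
        phase = 2 * np.pi * 0.31 * np.arange(n_pulses)        # tune-like pulse-by-pulse phase

        # Synthetic BPM-data matrix: two coherent centroid modes plus uncorrelated BPM noise.
        data = 0.05 * np.outer(np.cos(phase), np.sin(s_bpm))   # 50 um cosine-like mode
        data += 0.03 * np.outer(np.sin(phase), np.cos(s_bpm))  # 30 um sine-like mode
        data += rng.normal(0.0, 0.02, size=data.shape)         # 20 um BPM noise

        data -= data.mean(axis=0)                              # remove the closed orbit
        u, s, vt = np.linalg.svd(data, full_matrices=False)

        print(s[:5])             # the first two singular values stand above the noise floor
        rank = 2                 # keep the physical modes, discard the noise subspace
        cleaned = (u[:, :rank] * s[:rank]) @ vt[:rank]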

  8. Bayesian ISOLA: new tool for automated centroid moment tensor inversion

    Science.gov (United States)

    Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John

    2017-04-01

    Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or a low signal-to-noise ratio are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) The CMT inversion is fully automated; no user interaction is required, although the details of the process can later be inspected visually in many automatically plotted figures. (ii) The automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion. (iii) A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. (iv) A Bayesian approach is used, so not only the best solution is obtained, but also the posterior probability density function. (v) A space-time grid search, effectively combined with the least-squares inversion of moment tensor components, speeds up the inversion and allows more accurate results to be obtained than with stochastic methods. The method has been tested on synthetic and observed data. It has been tested by comparison with manually processed moment tensors of all events with M≥3 in the Swiss catalogue over 16 years, using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data. The software package, programmed in Python, has been designed to be as versatile as possible in

  9. Foot Bone in Vivo: Its Center of Mass and Centroid of Shape

    CERN Document Server

    Fan, Yifang; Fan, Yubo; Lin, Zhiyu; Lv, Changsheng

    2010-01-01

    This paper studies the geometrical shape of foot bones and their mass distribution, and establishes an assessment method for bone strength. Using spiral CT scanning with sub-millimeter accuracy, we analyze data from 384 foot bones in vivo and investigate the relationship between a bone's external shape and its internal structure. The analysis is based on the bone's center of mass and its centroid of shape. We observe a fairly precise superposition of the center of mass and the centroid of shape, indicating a possible biomechanical organization. We investigate two aspects of the geometrical shape: (i) the distance between the compact bone's centroid of shape and that of the whole bone, and (ii) the mean radius of bone tissue of the same density relative to the bone's centroid of shape. These quantities are used to interpret the influence of different physical exercises on bone strength, thereby contributing an alternative technique for assessing bone strength.

  10. Research on Centroid Position for Stairs Climbing Stability of Search and Rescue Robot

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2011-01-01

    Full Text Available This paper presents the relationship between the stability of stairs climbing and the centroid position of a search and rescue robot. The robot system is considered as a mass point-plane model, and its kinematic features are analyzed to find the relationship between the centroid position and the maximal pitch angle of stairs the robot can climb. A computable function for this relationship is given in this paper. During stairs climbing, there is a maximal stability-keeping angle that depends on the centroid position and the pitch angle of the stairs, and a numerical formula is developed for the relationship between the maximal stability-keeping angle, the centroid position and the pitch angle of the stairs. The experiment demonstrates the trustworthiness and correctness of the method presented in the paper.

  11. Robust Matching of Wavelet Features for Sub-Pixel Registration of Landsat Data

    Science.gov (United States)

    LeMoigne, Jacqueline; Netanyahu, Nathan S.; Masek, Jeffrey G.; Mount, David M.; Goward, Samuel; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    For many Earth and Space Science applications, automatic geo-registration at sub-pixel accuracy has become a necessity. In this work, we are focusing on building an operational system, which will provide a sub-pixel accuracy registration of Landsat-5 and Landsat-7 data. The input to our registration method consists of scenes that have been geometrically and radiometrically corrected. Such pre-processed scenes are then geo-registered relative to a database of Landsat chips. The method assumes a transformation composed of a rotation and a translation, and utilizes rotation- and translation-invariant wavelets to extract image features that are matched using statistically robust feature matching and a generalized Hausdorff distance metric. The registration process is described and results on four Landsat input scenes of the Washington, D.C. area are presented.

  12. Centroid theory of transverse electron-proton two-stream instability in a long proton bunch

    OpenAIRE

    Tai-Sen F. Wang; Channell, Paul J.; Robert J. Macek; Davidson, Ronald C.

    2003-01-01

    This paper presents an analytical investigation of the transverse electron-proton (e-p) two-stream instability in a proton bunch propagating through a stationary electron background. The equations of motion, including damping effects, are derived for the centroids of the proton beam and the electron cloud by considering Lorentzian and Gaussian frequency spreads for the particles. For a Lorentzian frequency distribution, we derive the asymptotic solution of the coupled linear centroid equation...

  13. The wireless sensor network (WSN triangle centroid localization algorithm based on RSSI

    Directory of Open Access Journals (Sweden)

    Zhang Chuan Wei

    2016-01-01

    Full Text Available Node localization is one of the key technologies in wireless sensor networks, and RSSI-based localization is currently an active research topic. To reduce the relatively large errors of RSSI-based localization, the paper presents a new localization method, RSSI-based triangle and centroid localization, which uses the triangle and centroid method to reduce the error of the RSSI measurement. Simulation experiments show that this algorithm can obviously improve the localization accuracy compared to trilateration.

  14. The efficiency of the centroid method compared to a simple average

    DEFF Research Database (Denmark)

    Eskildsen, Jacob Kjær; Kristensen, Kai; Nielsen, Rikke

    Based on empirical data as well as a simulation study, this paper gives recommendations with respect to situations where a simple average of the manifest indicators can be used as a close proxy for the centroid method and when it cannot.

  15. Sub-pixel mineral mapping using EO-1 Hyperion hyperspectral data

    OpenAIRE

    Kumar, C.; Shetty, A.; S Raval; Champatiray, P. K.; Sharma, R.

    2014-01-01

    This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in hostile mountainous terrain of Rajsamand district of Rajasthan, which hosts economic mineralization such as lead, zinc, and copper etc. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) and endmember extraction from reflectance image of surface minerals su...

  16. Variability of myocardial perfusion dark rim Gibbs artifacts due to sub-pixel shifts

    Directory of Open Access Journals (Sweden)

    Kellman Peter

    2009-05-01

    Full Text Available Abstract Background Gibbs ringing has been shown as a possible source of dark rim artifacts in myocardial perfusion studies. This type of artifact is usually described as transient, lasting a few heart beats, and localised in random segments of the myocardial wall. Dark rim artifacts are known to be unpredictably variable. This article aims to illustrate that a sub-pixel shift, i.e. a small displacement of the pixels with respect to the endocardial border, can result in different Gibbs ringing and hence different artifacts. Therefore a hypothesis for one cause of dark rim artifact variability is given based on the sub-pixel position of the endocardial border. This article also demonstrates the consequences for Gibbs artifacts when two different methods of image interpolation are applied (post-FFT interpolation, and pre-FFT zero-filling. Results Sub-pixel shifting of in vivo perfusion studies was shown to change the appearance of Gibbs artifacts. This effect was visible in the original uninterpolated images, and in the post-FFT interpolated images. The same shifted data interpolated by pre-FFT zero-filling exhibited much less variability in the Gibbs artifact. The in vivo findings were confirmed by phantom imaging and numerical simulations. Conclusion Unless pre-FFT zero-filling interpolation is performed, Gibbs artifacts are very dependent on the position of the subendocardial wall within the pixel. By introducing sub-pixel shifts relative to the endocardial border, some of the variability of the dark rim artifacts in different myocardial segments, in different patients and from frame to frame during first-pass perfusion due to cardiac and respiratory motion can be explained. Image interpolation by zero-filling can be used to minimize this dependency.
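
    The distinction between the two interpolation routes can be reproduced with a few lines of code. The sketch below is a simplified illustration rather than the authors' implementation: it performs pre-FFT zero-filling (zero-padding of k-space), which amounts to sinc interpolation and therefore samples the Gibbs ringing consistently regardless of where the endocardial border falls within the original pixel grid, whereas post-FFT interpolation of the magnitude image does not have this property.

        import numpy as np

        def zero_fill_interpolate(image, factor=4):
            # Pre-FFT zero-filling: pad the centred k-space with zeros, then
            # inverse-transform to a finer grid (sinc interpolation).
            k = np.fft.fftshift(np.fft.fft2(image))
            ny, nx = k.shape
            padded = np.zeros((ny * factor, nx * factor), dtype=complex)
            y0, x0 = (ny * factor - ny) // 2, (nx * factor - nx) // 2
            padded[y0:y0 + ny, x0:x0 + nx] = k
            out = np.fft.ifft2(np.fft.ifftshift(padded)) * factor ** 2
            return np.abs(out)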

  17. Radiographic measures of thoracic kyphosis in osteoporosis: Cobb and vertebral centroid angles

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, A.M.; Greig, A.M. [University of Melbourne, Centre for Health, Exercise and Sports Medicine, School of Physiotherapy, Victoria (Australia); University of Melbourne, Department of Medicine, Royal Melbourne Hospital, Victoria (Australia); Wrigley, T.V.; Tully, E.A.; Adams, P.E.; Bennell, K.L. [University of Melbourne, Centre for Health, Exercise and Sports Medicine, School of Physiotherapy, Victoria (Australia)

    2007-08-15

    Several measures can quantify thoracic kyphosis from radiographs, yet their suitability for people with osteoporosis remains uncertain. The aim of this study was to examine the validity and reliability of the vertebral centroid and Cobb angles in people with osteoporosis. Lateral radiographs of the thoracic spine were captured in 31 elderly women with osteoporosis. Thoracic kyphosis was measured globally (T1-T12) and regionally (T4-T9) using Cobb and vertebral centroid angles. Multisegmental curvature was also measured by fitting polynomial functions to the thoracic curvature profile. Canonical and Pearson correlations were used to examine correspondence; agreement between measures was examined with linear regression. Moderate to high intra- and inter-rater reliability was achieved (SEM = 0.9-4.0°). Concurrent validity of the simple measures was established against multisegmental curvature (r = 0.88-0.98). Strong association was observed between the Cobb and centroid angles globally (r = 0.84) and regionally (r = 0.83). Correspondence between measures was moderate for the Cobb method (r = 0.72), yet stronger for the centroid method (r = 0.80). The Cobb angle was 20% greater for regional measures due to the influence of endplate tilt. Regional Cobb and centroid angles are valid and reliable measures of thoracic kyphosis in people with osteoporosis. However, the Cobb angle is biased by endplate tilt, suggesting that the centroid angle is more appropriate for this population. (orig.)

  18. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution

    Science.gov (United States)

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-01

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  19. Sub-pixel mineral mapping using EO-1 Hyperion hyperspectral data

    Science.gov (United States)

    Kumar, C.; Shetty, A.; Raval, S.; Champatiray, P. K.; Sharma, R.

    2014-11-01

    This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using the Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in the hostile mountainous terrain of the Rajsamand district of Rajasthan, which hosts economic mineralization such as lead, zinc, and copper. The study encompasses pre-processing, data reduction, the Pixel Purity Index (PPI), and endmember extraction from the reflectance image for surface minerals such as illite, montmorillonite, phlogopite, dolomite and chlorite. These endmembers were then assessed against the USGS mineral spectral library and laboratory spectra of rock samples collected in the field for spectral inspection. Subsequently, the MTTCIMF algorithm was applied to the processed image to obtain a distribution map of each detected mineral. A virtual verification method, which uses the image information directly to evaluate the result, was adopted to assess the classified image, giving an overall accuracy of 68% and a kappa coefficient of 0.6. Sub-pixel mineral information of reasonable accuracy could be a valuable guide for the geological and exploration community before committing to expensive ground and/or laboratory work to discover economic deposits. Thus, the study demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping using the MTTCIMF algorithm in a cost- and time-effective manner.

  20. Design of interpolation functions for subpixel-accuracy stereo-vision systems.

    Science.gov (United States)

    Haller, Istvan; Nedevschi, Sergiu

    2012-02-01

    Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should therefore be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for generating interpolation functions than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions.
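
    For context, the traditional baseline that this work improves upon is a fixed interpolation function applied to the matching costs around the integer disparity, most commonly a parabola fit. A minimal sketch of that baseline (not the methodologies proposed in the paper) is:

        def parabolic_subpixel(cost, d):
            # Refine an integer disparity d using the costs at d-1, d, d+1;
            # returns the sub-pixel disparity at the parabola vertex.
            c0, c1, c2 = cost[d - 1], cost[d], cost[d + 1]
            denom = c0 - 2.0 * c1 + c2
            if denom == 0:
                return float(d)
            return d + 0.5 * (c0 - c2) / denom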

  1. Centroid Localization of Uncooperative Nodes in Wireless Networks Using a Relative Span Weighting Method

    Directory of Open Access Journals (Sweden)

    Christine Laurendeau

    2010-01-01

    Full Text Available Increasingly ubiquitous wireless technologies require novel localization techniques to pinpoint the position of an uncooperative node, whether the target is a malicious device engaging in a security exploit or a low-battery handset in the middle of a critical emergency. Such scenarios necessitate that a radio signal source be localized by other network nodes efficiently, using minimal information. We propose two new algorithms for estimating the position of an uncooperative transmitter, based on the received signal strength (RSS of a single target message at a set of receivers whose coordinates are known. As an extension to the concept of centroid localization, our mechanisms weigh each receiver's coordinates based on the message's relative RSS at that receiver, with respect to the span of RSS values over all receivers. The weights may decrease from the highest RSS receiver either linearly or exponentially. Our simulation results demonstrate that for all but the most sparsely populated wireless networks, our exponentially weighted mechanism localizes a target node within the regulations stipulated for emergency services location accuracy.
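
    The weighting idea can be sketched as follows in Python. The exact weighting functions of the two proposed algorithms are not reproduced here; the relative-span normalization and the linear/exponential decrease are illustrated under assumed forms, so all names and parameters below are hypothetical.

        import numpy as np

        def weighted_centroid(positions, rss, mode="exp", alpha=2.0):
            # positions: (N, 2) receiver coordinates; rss: RSS of the target
            # message at each receiver (dBm). Weights reflect each RSS relative
            # to the span of RSS values over all receivers.
            rss = np.asarray(rss, dtype=float)
            span = rss.max() - rss.min()
            rel = (rss - rss.min()) / span if span > 0 else np.ones_like(rss)
            w = rel if mode == "linear" else np.exp(alpha * (rel - 1.0))
            return (np.asarray(positions, dtype=float) * w[:, None]).sum(axis=0) / w.sum()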

  2. Lidar-based Evaluation of Sub-pixel Forest Structural Characteristics and Sun-sensor Geometries that Influence MODIS Leaf Area Index Product Accuracy and Retrieval Quality

    Science.gov (United States)

    Jensen, J.; Humes, K. S.

    2010-12-01

    Leaf Area Index (LAI) is an important structural component of vegetation because the foliar surface of plants largely controls the exchange of water, nutrients, and energy within terrestrial ecosystems. Because LAI is a key variable used to model water, energy, and biogeochemical cycles, Moderate Resolution Imaging Spectroradiometer (MODIS) LAI products are widely used in many studies to better understand and quantify exchanges between the terrestrial surface and the atmosphere. Within the last decade, significant resources and efforts have been invested toward MODIS LAI validation for a variety of biome types and a suite of published work has provided valuable feedback on the agreement between MODIS-derived LAI via radiative transfer (RT) inversion compared to multispectral-based empirical estimates of LAI. Our study provides an alternative assessment of the MODIS LAI product for a 58,000 ha evergreen needleleaf forest located in the western Rocky Mountain range in northern Idaho by using lidar data to model (R2=0.86, RMSE=0.76) and map fine-scale estimates of vegetation structure over a region for which multispectral LAI estimates were unacceptable. In an effort to provide feedback on algorithm performance, we evaluated the agreement between lidar-modeled and MODIS-retrieved LAI by specific MODIS LAI retrieval algorithm and product quality definitions. We also examined the sub-pixel vegetation structural conditions and satellite-sensor geometries that tend to influence MODIS LAI retrieval algorithm and product quality over our study area. Our results demonstrate a close agreement between lidar LAI and MODIS LAI retrieved using the main RT algorithm and consistently large MODIS LAI overestimates for pixels retrieved from a saturated set of RT solutions. Our evaluation also illuminated some conditions for which sub-pixel structural characteristics and sun-sensor geometries influenced retrieval quality and product agreement. These conditions include: 1) the

  3. Centroid vetting of transiting planet candidates from the Next Generation Transit Survey

    Science.gov (United States)

    Günther, Maximilian N.; Queloz, Didier; Gillen, Edward; McCormac, James; Bayliss, Daniel; Bouchy, Francois; Walker, Simon. R.; West, Richard G.; Eigmüller, Philipp; Smith, Alexis M. S.; Armstrong, David J.; Burleigh, Matthew; Casewell, Sarah L.; Chaushev, Alexander P.; Goad, Michael R.; Grange, Andrew; Jackman, James; Jenkins, James S.; Louden, Tom; Moyano, Maximiliano; Pollacco, Don; Poppenhaeger, Katja; Rauer, Heike; Raynard, Liam; Thompson, Andrew P. G.; Udry, Stéphane; Watson, Christopher A.; Wheatley, Peter J.

    2017-11-01

    The Next Generation Transit Survey (NGTS), operating in Paranal since 2016, is a wide-field survey to detect Neptunes and super-Earths transiting bright stars, which are suitable for precise radial velocity follow-up and characterization. Its sub-mmag photometric precision and ability to identify false positives are therefore crucial. In particular, variable background objects blended in the photometric aperture frequently mimic Neptune-sized transits and are costly in follow-up time. These objects can best be identified with the centroiding technique: if the photometric flux is lost off-centre during an eclipse, the flux centroid shifts towards the centre of the target star. Although this method has successfully been employed by the Kepler mission, it had previously not been implemented from the ground. We present a fully automated centroid vetting algorithm developed for NGTS, enabled by our high-precision autoguiding. Our method allows detecting centroid shifts with an average precision of 0.75 milli-pixel (mpix), and down to 0.25 mpix for specific targets, for a pixel size of 4.97 arcsec. The algorithm is now part of the NGTS candidate vetting pipeline and automatically employed for all detected signals. Further, we develop a joint Bayesian fitting model for all photometric and centroid data, allowing us to disentangle which object (target or background) is causing the signal, and what its astrophysical parameters are. We demonstrate our method on two NGTS objects of interest. These achievements make NGTS the first ground-based wide-field transit survey to successfully apply the centroiding technique for automated candidate vetting, enabling the production of a robust candidate list before follow-up.
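
    The centroiding idea itself is simple to express: compare flux-weighted centroids of the target aperture in and out of transit, and a shift significantly different from zero points to a blended background source. The sketch below is an illustrative simplification, not the NGTS pipeline, and assumes background-subtracted postage stamps as input.

        import numpy as np

        def flux_centroid(img):
            # Flux-weighted centroid (x, y) of a background-subtracted stamp.
            y, x = np.indices(img.shape)
            total = img.sum()
            return np.array([(x * img).sum() / total, (y * img).sum() / total])

        def centroid_shift(stamps_in_transit, stamps_out_of_transit):
            # Mean centroid difference between in- and out-of-transit stamps.
            cin = np.mean([flux_centroid(s) for s in stamps_in_transit], axis=0)
            cout = np.mean([flux_centroid(s) for s in stamps_out_of_transit], axis=0)
            return cin - cout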

  4. Integrating a Hive Triangle Pattern with Subpixel Analysis for Noncontact Measurement of Structural Dynamic Response by Using a Novel Image Processing Scheme

    Directory of Open Access Journals (Sweden)

    Yung-Chi Lu

    2014-01-01

    Full Text Available This work presents a digital image processing approach with a unique hive triangle pattern, integrating subpixel analysis for noncontact measurement of structural dynamic response data. The feasibility of the proposed approach is demonstrated by numerical simulation of a photography experiment. According to those results, the measured time-history displacement of the simulated image correlates well with the numerical solution. A small three-story frame is then mounted on a small shaker table, and a linear variable differential transformer (LVDT) is set on the second floor. Experimental results indicate that the relative errors between the LVDT data and the digital image correlation results are below 0.007% and 0.0205 for frequency and displacement, respectively. Additionally, the choice of image block affects the estimation accuracy of the measurement system. Importantly, the proposed approach for evaluating pattern center and size is highly promising for assigning the adaptive block for a digital image correlation method.

  5. First principles centroid molecular dynamics simulation of hydride in nanoporous C12A7:H-

    Science.gov (United States)

    Ikeda, Takashi

    2017-05-01

    Hydrides in nanoporous [Ca24Al28O64]4+(H-)4 (C12A7:H-) were investigated via first principles centroid molecular dynamics (CMD). The quality of our CMD simulations was assessed by examining the temperature dependence of the distribution of hydrides in the cages constituting the C12A7 framework. The vibrational states of C12A7:H- were analyzed by using the trajectories of the centroids generated in our CMD simulations. We find that the rattling motions of H- and D- behave qualitatively differently, resulting in non-trivial isotope effects, which are suggested to be detectable by using infrared and Raman spectroscopy.

  6. Centroid theory of transverse electron-proton two-stream instability in a long proton bunch

    Directory of Open Access Journals (Sweden)

    Tai-Sen F. Wang

    2003-01-01

    Full Text Available This paper presents an analytical investigation of the transverse electron-proton (e-p two-stream instability in a proton bunch propagating through a stationary electron background. The equations of motion, including damping effects, are derived for the centroids of the proton beam and the electron cloud by considering Lorentzian and Gaussian frequency spreads for the particles. For a Lorentzian frequency distribution, we derive the asymptotic solution of the coupled linear centroid equations in the time domain and study the e-p instability in proton bunches with nonuniform line densities. Examples are given for both uniform and parabolic proton line densities.

  7. Detection of sub-pixel fractures in X-ray dark-field tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lauridsen, Torsten; Feidenhans' l, Robert [University of Copenhagen, Niels Bohr Institute, Copenhagen (Denmark); Willner, Marian; Pfeiffer, Franz [Technische Universitaet Muenchen, Department of Physics and Institute of Medical Engineering, Garching (Germany); Bech, Martin [Lund University, Medical Radiation Physics, Lund (Sweden)

    2015-11-15

    We present a new method for detecting fractures in solid materials below the resolution given by the detector pixel size by using grating-based X-ray interferometry. The technique is particularly useful for detecting sub-pixel cracks in large samples where the size of the sample is preventing high-resolution μCT studies of the entire sample. The X-ray grating interferometer produces three distinct modality signals: absorption, phase and dark field. The method utilizes the unique scattering features of the dark-field signal. We have used tomograms reconstructed from each of the three signals to detect cracks in a model sample consisting of stearin. (orig.)

  8. Study on Zero-Doppler Centroid Control for GEO SAR Ground Observation

    Directory of Open Access Journals (Sweden)

    Yicheng Jiang

    2014-01-01

    Full Text Available In geosynchronous Earth orbit SAR (GEO SAR), Doppler centroid compensation is a key step in the imaging process, and it can be performed by attitude steering of the satellite platform. However, this zero-Doppler centroid control method does not work well when the look angle of the radar is outside an expected range. This paper first analyzes the Doppler properties of GEO SAR in the Earth rectangular coordinate system. Then, according to the actual conditions of GEO SAR ground observation, the effective range is given by the minimum and maximum possible look angles, which are directly related to the orbital parameters. Based on a vector analysis, a new approach for zero-Doppler centroid control in GEO SAR, performing the attitude steering by a combination of pitch and roll rotation, is put forward. This approach, which considers the Earth's rotation and elliptical orbit effects, can accurately reduce the residual Doppler centroid. The simulation results verify the correctness of the derived range of look angles and the proposed steering method.

  9. Oscillations of centroid position and surface area of soccer teams in small-sided games

    NARCIS (Netherlands)

    Frencken, Wouter; Lemmink, Koen; Delleman, Nico; Visscher, Chris

    2011-01-01

    There is a need for a collective variable that captures the dynamics of team sports like soccer at match level. The centroid positions and surface areas of two soccer teams potentially describe the coordinated flow of attacking and defending in small-sided soccer games at team level. The aim of the

  10. Centroid and Theoretical Rotation: Justification for Their Use in Q Methodology Research

    Science.gov (United States)

    Ramlo, Sue

    2016-01-01

    This manuscript's purpose is to introduce Q as a methodology before providing clarification about the preferred factor analytical choices of centroid and theoretical (hand) rotation. Stephenson, the creator of Q, designated that only these choices allowed for scientific exploration of subjectivity while not violating assumptions associated with…

  11. A double inequality for bounding Toader mean by the centroidal mean

    Indian Academy of Sciences (India)

    A double inequality for bounding Toader mean by the centroidal mean. Yun Hua (Department of Information Engineering, Weihai Vocational College, Weihai City, Shandong Province 264210, China) and Feng Qi (College of Mathematics, Inner Mongolia University for Nationalities, Tongliao City, Inner Mongolia ...

  12. A double inequality for bounding Toader mean by the centroidal mean

    Indian Academy of Sciences (India)

    Proceedings – Mathematical Sciences, Volume 124, Issue 4. A double inequality for bounding Toader mean by the centroidal mean. Yun Hua and Feng Qi.

  13. Characterizing charge centroids from lightning using a slow antenna network and the Levenberg-Marquardt inverse method

    Science.gov (United States)

    Lapierre, J. L.; Sonnenfeld, R. G.; Hager, W. W.; Morris, K.

    2011-12-01

    Researchers have long studied the copious and complex electric field waveforms caused by lightning. By combining electric-field measurements taken at many different locations on the ground simultaneously [Krehbiel et al., 1979], we hope to learn more about charge sources for lightning flashes. The Langmuir Electric Field Array (LEFA) is a network of nine field-change measurement stations (slow antennas) arranged around Langmuir Laboratory near Magdalena, New Mexico. Using the Levenberg-Marquardt (LM) method, we can invert the electric field data to determine the magnitude and position of the charge centroid removed from the cloud. We analyzed three return strokes (RS) following a dart leader from a storm that occurred on October 21st, 2011. RS 'A' occurred at 07:17:00.63 UT. The altitude of the charge centroid was estimated to be 5 km via LMA data. Because the LM method requires a prediction, the code was run with a wide range of values to verify the robustness of the method. Predictions varied from ±3 C for the charge magnitude and ±20 km N-S and E-W for the position (with the coordinate origin being the Langmuir Laboratory Annex). The LM method converged to a charge magnitude of -5.5 C and a centroid position of 3.3 km E-W and 12 km N-S for that RS. RS 'B' occurred at 07:20:05.9 UT. With an altitude of 4 km, the predictions were again varied: ±3 C, ±15 km N-S and E-W. Most runs converged to -27.5 C, 4 km E-W, and 10.9 km N-S. Finally, while results are best for events directly over the array, more distant events were also located successfully. RS 'C' occurred at 02:42:46.8 UT. Assuming an altitude of 5 km and varying the predictions as with RS 'A', the results converged to -9.2 C, 35.5 km E-W, and 9 km N-S. All of these results are broadly consistent with the LMA and the NLDN. By continuing this type of analysis, we hope to learn more about how lightning channels propagate and how the charges in the cloud respond to the sudden change in
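
    A minimal version of such an inversion can be written with an off-the-shelf Levenberg-Marquardt solver. The sketch below assumes a single point charge above a perfectly conducting ground (image-charge model) and a fixed centroid altitude, and fits the charge magnitude and horizontal position to the field changes observed at the stations; it is an illustration of the approach under those assumptions, not the LEFA processing code.

        import numpy as np
        from scipy.optimize import least_squares

        EPS0 = 8.854e-12  # vacuum permittivity (F/m)

        def field_change(params, stations, height):
            # Vertical E-field change at ground level caused by removing a point
            # charge q at (x, y, height) above a conducting ground (image charge).
            q, x, y = params
            r2 = (stations[:, 0] - x) ** 2 + (stations[:, 1] - y) ** 2
            return 2.0 * q * height / (4.0 * np.pi * EPS0 * (r2 + height ** 2) ** 1.5)

        def fit_charge_centroid(stations, observed, height, guess=(-5.0, 0.0, 0.0)):
            # Levenberg-Marquardt fit of charge magnitude (C) and position (m).
            residual = lambda p: field_change(p, stations, height) - observed
            return least_squares(residual, guess, method="lm").x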

  14. Modified centroid for estimating sand, silt, and clay from soil texture class

    Science.gov (United States)

    Models that require inputs of soil particle size commonly use soil texture class for input; however, texture classes do not represent the continuum of soil size fractions. Soil texture class and clay percentage are collected as a standard practice for many land management agencies (e.g., NRCS, BLM, ...

  15. Simulation of urban land surface temperature based on sub-pixel land cover in a coastal city

    Science.gov (United States)

    Zhao, Xiaofeng; Deng, Lei; Feng, Huihui; Zhao, Yanchuang

    2014-11-01

    The sub-pixel composition of urban land cover has been shown to correlate clearly with land surface temperature (LST), yet these relationships have seldom been used to simulate LST. In this study we provide a new approach to urban LST simulation based on sub-pixel land cover modeling. Landsat TM/ETM+ images of Xiamen city, China, from January of both 2002 and 2007 were used to derive land cover and then extract the transformation rule using logistic regression. After normalization, the transformation probability was taken as the land-cover percentage within each pixel. Cellular automata were then used to simulate sub-pixel land cover for 2007 and 2017. In parallel, the correlations between retrieved LST and the sub-pixel land cover obtained by spectral mixture analysis in 2002 were examined and a regression model was built. The regression model was then applied to the simulated 2007 land cover to model the LST of 2007. Finally, the LST of 2017 was simulated for urban planning and management. The results show that our method is useful for LST simulation. Although the simulation accuracy is not yet fully satisfactory, the approach provides an important idea and a good starting point for the modeling of urban LST.

  16. Characterizing sub-pixel landsat ETM plus fire severity on experimental fires in the Kruger National Park, South Africa

    CSIR Research Space (South Africa)

    Landmann, T

    2003-07-01

    Full Text Available Burn severity was quantitatively mapped using a unique linear spectral mixture model to determine sub-pixel abundances of different ashes and combustion completeness measured on the corresponding fire-affected pixels in Landsat data. A new burn...

  17. Bringing aerospace images into coincidence with subpixel accuracy by the local-correlation method

    Science.gov (United States)

    Potapov, A. S.; Malyshev, I. A.; Lutsiv, V. R.

    2004-05-01

    This paper proposes a local-correlation method that makes it possible to bring aerospace images into coincidence with subpixel accuracy after a preliminary rough juxtaposition, by transforming them relative to each other with a uniform projective transformation and additional mutual local displacements. The basis of the method is to establish a correspondence between the points of a pair of images by means of phase correlation of individual segments of the images. Propagating the measured shifts from reference points, for which the correspondence has been found, across the image pair, together with analysis at variable spatial resolution, makes the method workable even when the errors of the preliminary superposition are greater than the size of the correlation window. This paper presents the results of an experimental verification of the approach, using actual pairs of aerospace images as an example.
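
    The phase-correlation step at the heart of the method can be illustrated with a short routine: the cross-power spectrum of two segments yields a correlation peak whose position gives the local shift, and a small centroid around the peak refines it to sub-pixel precision. This is a generic sketch (assuming the peak lies away from the window border), not the authors' implementation.

        import numpy as np

        def phase_correlation_shift(a, b):
            # Translation of patch b relative to patch a from the phase of the
            # cross-power spectrum, with a 3x3 centroid refinement of the peak.
            F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
            F /= np.abs(F) + 1e-12
            corr = np.fft.fftshift(np.abs(np.fft.ifft2(F)))
            py, px = np.unravel_index(np.argmax(corr), corr.shape)
            win = corr[py - 1:py + 2, px - 1:px + 2]
            y, x = np.indices(win.shape)
            dy = (win * (y - 1)).sum() / win.sum()
            dx = (win * (x - 1)).sum() / win.sum()
            cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
            return py + dy - cy, px + dx - cx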

  18. A measure of variable planar locations anchored on the centroid of the vowel space

    DEFF Research Database (Denmark)

    Watt, Dominic; Fabricius, Anne

    2011-01-01

    This paper presents part of an ongoing research program which aims to apply mathematical and geometrical analytic methods to vowel formant data to enable the quantification of parameters of variation of interest to sociophoneticians. We open with an overview of recent research working towards a set...... of desiderata for choice of normalization algorithm(s) based on replicable procedures. We then present the principles of centroid-based normalization and account for its performance in recent road tests. In sections 4 and 5 we introduce a method that utilizes the centroid of the speaker’s vowel space...... as an anchor point or vertex for calculation of planar locations on formant plots, permitting quantification of the distribution of vowel tokens within the space. This information, along with details such as Euclidean distances, can then be used to precisely pinpoint the trajectories of diachronic change...

  19. Optimization of soy isoflavone extraction with different solvents using the simplex-centroid mixture design.

    Science.gov (United States)

    Yoshiara, Luciane Yuri; Madeira, Tiago Bervelieri; Delaroza, Fernanda; da Silva, Josemeyre Bonifácio; Ida, Elza Iouko

    2012-12-01

    The objective of this study was to optimize the extraction of different isoflavone forms (glycosidic, malonyl-glycosidic, aglycone and total) from defatted soy cotyledon flour using a simplex-centroid experimental design with four solvents of varying polarity (water, acetone, ethanol and acetonitrile). The obtained extracts were then analysed by high-performance liquid chromatography. The profile of the different soy isoflavone forms varied with the extraction solvent. By varying the solvent or solvent mixture, the extraction of the different isoflavone forms was optimized using the simplex-centroid mixture design. The special cubic model best fitted the data for the four solvents and their combinations. Glycosidic isoflavones were best extracted by the polar ternary mixture of water, acetone and acetonitrile; malonyl-glycosidic forms were better extracted with mixtures of water, acetone and ethanol; aglycone isoflavones were best extracted with a water and acetone mixture; and for total isoflavones the best solvent was the ternary mixture of water, acetone and ethanol.

  20. Architecture-Driven Level Set Optimization: From Clustering to Subpixel Image Segmentation.

    Science.gov (United States)

    Balla-Arabe, Souleymane; Gao, Xinbo; Ginhac, Dominique; Brost, Vincent; Yang, Fan

    2016-12-01

    Thanks to their effectiveness, active contour models (ACMs) are of great interest to computer vision scientists. The level set methods (LSMs) constitute the class of geometric active contours. Compared with other ACMs, in addition to subpixel accuracy, they have the intrinsic ability to automatically handle topological changes. Nevertheless, LSMs are computationally expensive. A solution to their time-consumption problem is hardware acceleration using massively parallel devices such as graphics processing units (GPUs). But the question is: what accuracy can we reach while keeping the algorithm well suited to a massively parallel architecture? In this paper, we attempt to push the compromise between speed and accuracy, efficiency and effectiveness, to a higher level than state-of-the-art methods. To this end, we designed a novel architecture-aware hybrid central processing unit (CPU)-GPU LSM for image segmentation. The initialization step, using the well-known k-means algorithm, is fast although executed on a CPU, while the evolution equation of the active contour is inherently local and therefore suitable for GPU-based acceleration. The incorporation of local statistics in the level set evolution allows our model to detect new boundaries that are not extracted by the clustering algorithm. Compared with some cutting-edge LSMs, the introduced model is faster, more accurate, less prone to local minima, and therefore suitable for automatic systems. Furthermore, it allows two-phase clustering algorithms to benefit from the numerous LSM advantages such as the ability to achieve robust and subpixel-accurate segmentation results with smooth and closed contours. Intensive experiments demonstrate, objectively and subjectively, the good performance of the introduced framework both in terms of speed and accuracy.

  1. Centroiding algorithms for high speed crossed strip readout of microchannel plate detectors.

    Science.gov (United States)

    Vallerga, John; Tremsin, Anton; Raffanti, Rick; Siegmund, Oswald

    2011-05-01

    Imaging microchannel plate (MCP) detectors with cross strip (XS) readout anodes require centroiding algorithms to determine the location of the amplified charge cloud from the incident radiation, be it photon or particle. We have developed a massively parallel XS readout electronic system that employs an amplifier and ADC for each strip and uses this digital data to calculate the centroid of each event in real time using a field programmable gate array (FPGA). Doing the calculations in real time in the front-end electronics using an FPGA enables a much higher input event rate, nearly two orders of magnitude faster, by avoiding the bandwidth limitations of the raw data transfer to a computer. We report on our detailed efforts to optimize the algorithms used on both an 18 mm and a 40 mm diameter XS MCP detector with a strip pitch of 640 microns, read out with multiple 32-channel "Preshape32" ASIC amplifiers (developed at Rutherford Appleton Laboratory). Each strip electrode is continuously digitized to 12 bits at 50 MHz with all 64 digital channels (128 for the 40 mm detector) transferred to a Xilinx Virtex 5 FPGA. We describe how events are detected in the continuous data stream and then multiplexed into firmware modules that spatially and temporally filter and weight the input after applying offset and gain corrections. We contrast a windowed "center of gravity" algorithm with a convolution with a special centroiding kernel in terms of resolution and distortion, and show results at event rates of 1 MHz.
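
    The windowed "center of gravity" option mentioned above reduces, per event axis, to a weighted mean of strip indices around the charge-cloud maximum. The following sketch is an illustrative software analogue of what such firmware computes, with offset/gain correction assumed to have been applied already; it is not the FPGA implementation itself.

        import numpy as np

        def strip_centroid(charges, window=5):
            # charges: offset/gain-corrected strip amplitudes for one event axis.
            # Only `window` strips around the maximum enter the weighted mean.
            charges = np.asarray(charges, dtype=float)
            peak = int(np.argmax(charges))
            half = window // 2
            lo, hi = max(0, peak - half), min(len(charges), peak + half + 1)
            idx = np.arange(lo, hi)
            w = charges[lo:hi].clip(min=0.0)
            return float((idx * w).sum() / w.sum())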

  2. A Prolog-based centroid algorithm for isovolume extraction from finite element torso simulations.

    Science.gov (United States)

    Russomanno, David J; Hicks, Kathryn

    2002-02-01

    Computer modeling and simulation of the human torso provides a rapid and non-invasive means to observe the effects of implanted defibrillators. The objective of this study was to improve a method of extracting data from an implanted defibrillator simulation for subsequent visualization. Electrical quantities, such as the potential and gradient fields, are computed at points throughout various regions of a three-dimensional (3-D) torso model via a finite element solution. Software is then implemented in the Prolog language to extract and visualize a subset of the data, from within any subregion of the model, satisfying a given declarative constraint. In past work, membership in these subsets had been determined solely by the electrical quantities at the vertices of the tetrahedral elements within the model along with an arbitrary choice made by the user. However, this study expands upon previous work to utilize an alternative means of classification, calculating the centroid of each tetrahedron and assigning electrical properties to these centroids based on the distances of each centroid to the four corners of the tetrahedron. After the modifications, it is expected that the extracted subsets of the model will represent the data in a more realistic and conservative manner and provide more insight into the process of defibrillation than previous methods of data extraction and visualization.
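
    The abstract describes assigning electrical properties to tetrahedron centroids from the centroid-to-corner distances but does not give the formula; a plausible reading is an inverse-distance weighting of the four vertex values, sketched below with hypothetical names (for a regular tetrahedron it reduces to the simple mean of the corner values).

        import numpy as np

        def centroid_value(vertices, values, eps=1e-12):
            # vertices: (4, 3) corner coordinates; values: (4,) corner quantities
            # (e.g. potential or gradient magnitude). Returns the centroid and an
            # inverse-distance-weighted value assigned to it.
            vertices = np.asarray(vertices, dtype=float)
            values = np.asarray(values, dtype=float)
            c = vertices.mean(axis=0)
            w = 1.0 / (np.linalg.norm(vertices - c, axis=1) + eps)
            return c, float((w * values).sum() / w.sum())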

  3. DONUTS: A science frame autoguiding algorithm with sub-pixel precision, capable of guiding on defocused stars

    OpenAIRE

    McCormac, J.; Pollacco, D.; Skillen, I.; Faedi, F.; Todd, I.; Watson, C. A.

    2013-01-01

    We present the DONUTS autoguiding algorithm, designed to fix stellar positions at the sub-pixel level for high-cadence time-series photometry, which is also capable of autoguiding on defocused stars. DONUTS was designed to calculate guide corrections from a series of science images and re-centre telescope pointing between each exposure. The algorithm has the unique ability of calculating guide corrections from under-sampled to heavily defocused point spread functions. We present the case for ...

  4. Quantifying Sub-Pixel Surface Water Coverage in Urban Environments Using Low-Albedo Fraction from Landsat Imagery

    OpenAIRE

    Weiwei Sun; Bo Du; Shaolong Xiong

    2017-01-01

    The problem of mixed pixels negatively affects the delineation of accurate surface water in Landsat Imagery. Linear spectral unmixing has been demonstrated to be a powerful technique for extracting surface materials at a sub-pixel scale. Therefore, in this paper, we propose an innovative low albedo fraction (LAF) method based on the idea of unconstrained linear spectral unmixing. The LAF stands on the “High Albedo-Low Albedo-Vegetation” model of spectral unmixing analysis in urban environment...

  5. Low-frequency centroid-moment-tensor inversion from superconducting-gravimeter data: The effect of seismic attenuation

    Science.gov (United States)

    Zábranová, Eliška; Matyska, Ctirad

    2014-10-01

    After the 2010 Maule and 2011 Tohoku earthquakes the spheroidal modes up to 1 mHz were clearly registered by the Global Geodynamic Project (GGP) network of superconducting gravimeters (SG). Fundamental parameters in synthetic calculations of the signals are the quality factors of the modes. We study the role of their uncertainties in the centroid-moment-tensor (CMT) inversions. First, we have inverted the SG data from selected GGP stations to jointly determine the quality factors of these normal modes and the three low-frequency CMT components, Mrr,(Mϑϑ-Mφφ)/2 and Mϑφ, that generate the observed SG signal. We have used several-days-long records to minimize the trade-off between the quality factors and the CMT but it was not eliminated completely. We have also inverted each record separately to get error estimates of the obtained parameters. Consequently, we have employed the GGP records of 60-h lengths for several published modal-quality-factor sets and inverted only the same three CMT components. The obtained CMT tensors are close to the solution from the joint Q-CMT inversion of longer records and resulting variability of the CMT components is smaller than differences among routine agency solutions. Reliable low-frequency CMT components can thus be obtained for any quality factors from the studied sets.

  6. Transport coefficients of normal liquid helium-4 calculated by path integral centroid molecular dynamics simulation

    Science.gov (United States)

    Imaoka, Haruna; Kinugawa, Kenichi

    2017-03-01

    Thermal conductivity, shear viscosity, and bulk viscosity of normal liquid 4He at 1.7-4.0 K are calculated using path integral centroid molecular dynamics (CMD) simulations. The calculated thermal conductivity and shear viscosity above lambda transition temperature are on the same order of magnitude as experimental values, while the agreement of shear viscosity is better. Above 2.3 K the CMD well reproduces the temperature dependences of isochoric shear viscosity and of the time integral of the energy current and off-diagonal stress tensor correlation functions. The calculated bulk viscosity, not known in experiments, is several times larger than shear viscosity.

  7. Shortcut in DIC error assessment induced by image interpolation used for subpixel shifting

    Science.gov (United States)

    Bornert, Michel; Doumalin, Pascal; Dupré, Jean-Christophe; Poilane, Christophe; Robert, Laurent; Toussaint, Evelyne; Wattrisse, Bertrand

    2017-04-01

    In order to characterize the errors of Digital Image Correlation (DIC) algorithms, sets of virtual images are often generated from a reference image by in-plane sub-pixel translations. This leads to the determination of the well-known S-shaped bias error curves and their corresponding random error curves. As the images are usually shifted using interpolation schemes similar to those used in the DIC algorithms themselves, the question of a possible bias in the quantification of the measurement uncertainties of DIC software packages arises, and it is the main question addressed in this paper. In this collaborative work, synthetic numerically shifted images are built with two methods: one based on interpolation of the reference image and the other based on the transformation of an analytic texture function. The images are analyzed using an in-house subset-based DIC software and the results are compared and discussed. The effect of image noise is also highlighted. The main result is that the a priori choices made to numerically shift the reference image modify the DIC results and may lead to wrong conclusions in terms of DIC error assessment.
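
    The two routes for building shifted test images can be mimicked with standard tools. The sketch below, an illustration under assumed parameters rather than the collaborative benchmark itself, contrasts a spline-interpolation shift with a Fourier-shift-theorem shift, the latter avoiding reuse of the same interpolant as the DIC code under test.

        import numpy as np
        from scipy.ndimage import shift as spline_shift

        def fourier_shift_image(img, dy, dx):
            # Shift an image by (dy, dx) pixels via the Fourier shift theorem.
            ky = np.fft.fftfreq(img.shape[0])[:, None]
            kx = np.fft.fftfreq(img.shape[1])[None, :]
            phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
            return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

        def make_shifted_pair(img, dx):
            # Reference/deformed pair for a pure sub-pixel translation dx,
            # generated two ways to expose interpolation-induced bias.
            return spline_shift(img, (0.0, dx), order=3), fourier_shift_image(img, 0.0, dx)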

  8. Parametric Study to Improve Subpixel Accuracy of Nitric Oxide Tagging Velocimetry with Image Preprocessing

    Directory of Open Access Journals (Sweden)

    Ravi Teja Vedula

    2017-01-01

    Full Text Available Biacetyl phosphorescence has been the commonly used molecular tagging velocimetry (MTV technique to investigate in-cylinder flow evolution and cycle-to-cycle variations in an optical engine. As the phosphorescence of biacetyl tracer deteriorates in the presence of oxygen, nitrogen was adopted as the working medium in the past. Recently, nitrous oxide MTV technique was employed to measure the velocity profile of an air jet. The authors here plan to investigate the potential application of this technique for engine flow studies. A possible experimental setup for this task indicated different permutations of image signal-to-noise ratio (SNR and laser line width. In the current work, a numerical analysis is performed to study the effect of these two factors on displacement error in MTV image processing. Also, several image filtering techniques were evaluated and the performance of selected filters was analyzed in terms of enhancing the image quality and minimizing displacement errors. The flow displacement error without image preprocessing was observed to be inversely proportional to SNR and directly proportional to laser line width. The mean filter resulted in the smallest errors for line widths smaller than 9 pixels. The effect of filter size on subpixel accuracy showed that error levels increased as the filter size increased.

  9. Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.

    Science.gov (United States)

    Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian

    2009-04-01

    Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.

  10. Subpixelic Measurement of Large 1D Displacements: Principle, Processing Algorithms, Performances and Software

    Science.gov (United States)

    Guelpa, Valérian; Laurent, Guillaume J.; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-01-01

    This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations—leading to high resolution—while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 μs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated which has to be compared with the 168 μm measurement range. PMID:24625736
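
    The twin-period principle can be condensed into a short 1D sketch: the phase of each grid gives a fine but periodically ambiguous position, and the phase difference between the two grids, whose beat period is much longer, selects the correct period. The routine below is a schematic reconstruction under assumed conventions, not the processing algorithms published with the paper.

        import numpy as np

        def grid_phase(signal, period):
            # Phase (rad) of the component at the given period along the line.
            n = np.arange(signal.size)
            return np.angle(np.sum(signal * np.exp(-2j * np.pi * n / period)))

        def twin_grid_position(signal, p1, p2):
            # Fine position from grid 1's phase; the phase difference between the
            # two grids resolves the integer number of periods (ambiguity removal).
            phi1, phi2 = grid_phase(signal, p1), grid_phase(signal, p2)
            beat = p1 * p2 / abs(p2 - p1)
            coarse = (np.mod(phi1 - phi2, 2 * np.pi) / (2 * np.pi)) * beat
            k = np.round((coarse - phi1 * p1 / (2 * np.pi)) / p1)
            return p1 * (k + phi1 / (2 * np.pi))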

  11. Modelling Perception of Structure and Affect in Music: Spectral Centroid and Wishart's Red Bird

    Directory of Open Access Journals (Sweden)

    Roger T. Dean

    2011-12-01

    Full Text Available Pearce (2011) provides a positive and interesting response to our article on time series analysis of the influences of acoustic properties on real-time perception of structure and affect in a section of Trevor Wishart's Red Bird (Dean & Bailes, 2010). We address the following topics raised in the response and our paper. First, we analyse in depth the possible influence of spectral centroid, a timbral feature of the acoustic stream distinct from the high-level general parameter we used initially, spectral flatness. We find that spectral centroid, like spectral flatness, is not a powerful predictor of real-time responses, though it does show some features that encourage its continued consideration. Second, we discuss further the issue of studying both individual responses and, as in our paper, group-averaged responses. We show that a multivariate Vector Autoregression model handles the grand average series quite similarly to those of individual members of our participant groups, and we analyse this in greater detail with a wide range of approaches in work which is in press and continuing. Lastly, we discuss the nature and intent of computational modelling of cognition using acoustic and music- or information-theoretic data streams as predictors, and how the music- or information-theoretic approaches may be applied to electroacoustic music, which is 'sound-based' rather than note-centred like Western classical music.
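
    For readers unfamiliar with the acoustic predictor examined here, the spectral centroid of a windowed audio frame is simply the amplitude-weighted mean frequency of its magnitude spectrum. A generic sketch (not the analysis pipeline used in the paper) is:

        import numpy as np

        def spectral_centroid(frames, sample_rate):
            # frames: (num_frames, frame_length) array of audio samples.
            # Returns the spectral centroid (Hz) of each frame.
            window = np.hanning(frames.shape[1])
            mags = np.abs(np.fft.rfft(frames * window, axis=1))
            freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sample_rate)
            return (mags * freqs).sum(axis=1) / (mags.sum(axis=1) + 1e-12)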

  12. MOBIUS-STRIP-LIKE COLUMNAR FUNCTIONAL CONNECTIONS ARE REVEALED IN SOMATO-SENSORY RECEPTIVE FIELD CENTROIDS.

    Directory of Open Access Journals (Sweden)

    James Joseph Wright

    2014-10-01

    Full Text Available Receptive fields of neurons in the forelimb region of areas 3b and 1 of primary somatosensory cortex, in cats and monkeys, were mapped using extracellular recordings obtained sequentially from nearly radial penetrations. Locations of the field centroids indicated the presence of a functional system, in which cortical homotypic representations of the limb surfaces are entwined in three-dimensional Mobius-strip-like patterns of synaptic connections. Boundaries of somatosensory receptive field in nested groups irregularly overlie the centroid order, and are interpreted as arising from the superposition of learned connections upon the embryonic order. Since the theory of embryonic synaptic self-organisation used to model these results was devised and earlier used to explain findings in primary visual cortex, the present findings suggest the theory may be of general application throughout cortex, and may reveal a modular functional synaptic system, which, only in some parts of the cortex, and in some species, is manifest as anatomical ordering into columns.

  13. Lifetime measurements in {sup 170}Yb using the generalized centroid difference method

    Energy Technology Data Exchange (ETDEWEB)

    Karayonchev, Vasil; Regis, Jean-Marc; Jolie, Jan; Dannhoff, Moritz; Saed-Samii, Nima; Blazhev, Andrey [Institute of Nuclear Physics, University of Cologne, Cologne (Germany)

    2016-07-01

    An experiment using the electronic γ-γ "fast-timing" technique was performed at the 10 MV Tandem Van-De-Graaff accelerator of the Institute for Nuclear Physics, Cologne, in order to measure lifetimes of the yrast states in {sup 170}Yb. The lifetime of the first 2{sup +} state was determined using the slope method, that is, by fitting an exponential decay to the "slope" seen in the energy-gated time-difference spectra. The value of τ=2.201(57) ns is in good agreement with the lifetimes measured using other techniques. The lifetimes of the first 4{sup +} and 6{sup +} states are determined for the first time. They are in the ps range and were measured using the generalized centroid difference method, an extension of the well-known centroid-shift method developed for fast-timing arrays. The derived reduced transition probability B(E2) values are compared with calculations done using the confined beta soft model and show good agreement within the experimental uncertainties.

  14. A Comparison Between the Centroid and the Yager Index Rank for Type Reduction of an Interval Type-2 Fuzzy Number

    Directory of Open Access Journals (Sweden)

    Juan Carlos Figueroa

    2016-05-01

    Full Text Available Context: There is a need for ranking and defuzzification of Interval Type-2 fuzzy sets (IT2FSs), in particular Interval Type-2 fuzzy numbers (IT2FNs). To do so, we apply the classical Yager Index Rank (YIR) for fuzzy sets to IT2FNs in order to find an alternative to the centroid of an IT2FN. Method: We use a simulation strategy to compare the results of the centroid and the YIR of an IT2FN. We simulate 1000 IT2FNs of three kinds: Gaussian, triangular, and non-symmetrical, and compare their centroids and YIRs. Results: After performing the simulations, we compute some statistics about their behavior, such as the degree of subsethood, equality, and the size of the Footprint of Uncertainty (FOU) of an IT2FN. The results show that the YIR is narrower than the centroid of an IT2FN. Conclusions: In general, the YIR is less complex to obtain than the centroid of an IT2FN, which is highly desirable in practical applications such as fuzzy decision making and control. Some other properties regarding its size and location are also discussed.

  15. A multi-resolution method for climate system modeling: Application of Spherical Centroidal Voronoi Tessellations

    Energy Technology Data Exchange (ETDEWEB)

    Ringler, Todd D [Los Alamos National Laboratory; Gunzburger, Max [FLORIDA STATE UNIV; Ju, Lili [UNIV OF SOUTH CAROLINA

    2008-01-01

    During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multi-resolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical Centroidal Voronoi Tessellations (SCVTs) offer one potential path toward the development of robust, multi-resolution climate system component models. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean-ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow-water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multi-resolution method and the challenges ahead.
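
    The essential construction, generators relaxed toward the density-weighted centroids of their Voronoi cells, can be illustrated in the plane with a Monte Carlo Lloyd iteration. The sketch below is a flat-space analogue under assumed inputs (a density function evaluated on sample points in the unit square), not the spherical mesh-generation software described in the report.

        import numpy as np

        def lloyd_cvt(points, density, samples=20000, iters=50, seed=0):
            # Move each generator to the density-weighted centroid of its
            # Voronoi cell, estimated from random samples in the unit square.
            rng = np.random.default_rng(seed)
            pts = np.array(points, dtype=float)
            for _ in range(iters):
                s = rng.random((samples, 2))
                w = density(s)                                  # user-defined density
                d2 = ((s[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
                owner = d2.argmin(axis=1)                       # nearest generator
                for k in range(len(pts)):
                    mask = owner == k
                    if mask.any():
                        pts[k] = (s[mask] * w[mask, None]).sum(axis=0) / w[mask].sum()
            return pts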

  16. Path integral centroid molecular dynamics simulations of semiinfinite slab and bulk liquid of para-hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Kinugawa, Kenichi [Nara Women's Univ., Nara (Japan). Dept. of Chemistry]

    1998-10-01

    Numerically solving a set of time-dependent Schroedinger equations for many-body quantum systems which involve, e.g., a number of hydrogen molecules, protons, and excess electrons at low temperature, where quantum effects evidently appear, has so far been unsuccessful. This undesirable situation is fatal for the investigation of real low-temperature chemical systems because they are essentially composed of many quantum degrees of freedom. However, if we use a new technique called 'path integral centroid molecular dynamics (CMD) simulation', proposed by Cao and Voth in 1994, the real-time semi-classical dynamics of many degrees of freedom can be computed by utilizing the techniques already developed in traditional classical molecular dynamics (MD) simulations. Therefore, the CMD simulation is expected to be a very powerful tool for quantum dynamics studies of real substances. (J.P.N.)

  17. Ranking Fuzzy Numbers with a Distance Method using Circumcenter of Centroids and an Index of Modality

    Directory of Open Access Journals (Sweden)

    P. Phani Bushan Rao

    2011-01-01

    Full Text Available Ranking fuzzy numbers is an important aspect of decision making in a fuzzy environment. Since their inception in 1965, many authors have proposed different methods for ranking fuzzy numbers. However, there is no method which gives a satisfactory result in all situations. Most of the methods proposed so far are non-discriminating and counterintuitive. This paper proposes a new method for ranking fuzzy numbers based on the Circumcenter of Centroids and uses an index of optimism to reflect the decision maker's optimistic attitude, as well as an index of modality that represents the neutrality of the decision maker. This method ranks various types of fuzzy numbers, including normal, generalized trapezoidal, and triangular fuzzy numbers, along with crisp numbers, with the particularity that crisp numbers are considered particular cases of fuzzy numbers.

  18. Franck–Condon factors and r-centroids for the diatomic fluorides of germanium and silicon

    Directory of Open Access Journals (Sweden)

    S. KANAGAPRABHA

    2008-05-01

    Full Text Available A suitable potential energy function was found by analysing the potential functions proposed by Morse, Mohammad and Rafi et al. for the A2Σ+–X2Π3/2 and B2Σ+–X2Π3/2 band systems of GeF and the 1Σ–1Π band system of SiF. It was found that the potential proposed by Rafi et al. is in close agreement with the Rydberg–Klein–Rees (R–K–R) potential. Using this potential, the wave functions were evaluated by the Wentzel–Kramers–Brillouin (W–K–B) method. The Franck–Condon factors and r-centroids were computed by a numerical integration technique. The results are compared with available theoretical values. The intensities of the various bands were investigated.

  19. Bayesian inference and interpretation of centroid moment tensors of the 2016 Kumamoto earthquake sequence, Kyushu, Japan

    Science.gov (United States)

    Hallo, Miroslav; Asano, Kimiyuki; Gallovič, František

    2017-09-01

    On April 16, 2016, Kumamoto prefecture in the Kyushu region, Japan, was devastated by a shallow M JMA 7.3 earthquake. The series of foreshocks started with an M JMA 6.5 foreshock 28 h before the mainshock. They originated in the Hinagu fault zone intersecting the mainshock Futagawa fault zone; hence, the tectonic background for this earthquake sequence is rather complex. Here we infer centroid moment tensors (CMTs) for 11 events with M JMA between 4.8 and 6.5, using strong motion records of the K-NET, KiK-net and F-net networks. We use the upgraded Bayesian full-waveform inversion code ISOLA-ObsPy, which takes into account the uncertainty of the velocity model. Such an approach allows us to reliably assess the uncertainty of the CMT parameters, including the centroid position. The solutions show significant systematic spatial and temporal variations throughout the sequence. Foreshocks are right-lateral, steeply dipping strike-slip events connected to the NE-SW shear zone. Those located close to the intersection of the Hinagu and Futagawa fault zones dip slightly to the ESE, while those in the southern area dip to the WNW. Contrarily, aftershocks are mostly normal dip-slip events, related to the N-S extensional tectonic regime. Most of the deviatoric moment tensors contain only a minor CLVD component, which can be attributed to the velocity model uncertainty. Nevertheless, two of the CMTs involve a significant CLVD component, which may reflect a complex rupture process. Decomposition of those moment tensors into two pure shear moment tensors suggests combined right-lateral strike-slip and normal dip-slip mechanisms, consistent with the tectonic setting of the intersection of the Hinagu and Futagawa fault zones.

  20. A Proposal to Speed up the Computation of the Centroid of an Interval Type-2 Fuzzy Set

    Directory of Open Access Journals (Sweden)

    Carlos E. Celemin

    2013-01-01

    Full Text Available This paper presents two new algorithms that speed up the centroid computation of an interval type-2 fuzzy set. The algorithms include precomputation of the main operations and initialization based on the concept of uncertainty bounds. Simulations over different kinds of footprints of uncertainty reveal that the new algorithms achieve computation time reductions with respect to the Enhanced-Karnik algorithm, ranging from 40 to 70%. The results suggest that the initialization used in the new algorithms effectively reduces the number of iterations to compute the extreme points of the interval centroid while precomputation reduces the computational cost of each iteration.
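
    For context, the quantity these algorithms accelerate is the interval centroid of an IT2FS, whose endpoints are conventionally found by a Karnik-Mendel-type switch-point iteration. The sketch below implements a plain, unaccelerated version of that iteration on a discretised domain (a generic illustration with a hypothetical triangular footprint of uncertainty; the paper's precomputation and initialization strategies are not reproduced):

        import numpy as np

        def km_endpoint(x, lower, upper, right=False, tol=1e-9, max_iter=100):
            """Karnik-Mendel-style iteration for one endpoint of the interval centroid.

            x      : discretised domain (ascending)
            lower  : lower membership function sampled on x
            upper  : upper membership function sampled on x
            right  : False -> left endpoint c_l, True -> right endpoint c_r
            """
            theta = 0.5 * (lower + upper)        # start from the average membership
            c = np.sum(x * theta) / np.sum(theta)
            for _ in range(max_iter):
                # Switch point: below it use one bound of the membership, above it the other.
                if right:
                    theta = np.where(x <= c, lower, upper)
                else:
                    theta = np.where(x <= c, upper, lower)
                c_new = np.sum(x * theta) / np.sum(theta)
                if abs(c_new - c) < tol:
                    return c_new
                c = c_new
            return c

        # Hypothetical triangular FOU: upper and lower membership functions on [0, 10].
        x = np.linspace(0.0, 10.0, 1001)
        upper_mf = np.clip(1.0 - np.abs(x - 5.0) / 4.0, 0.0, 1.0)
        lower_mf = 0.6 * np.clip(1.0 - np.abs(x - 5.0) / 3.0, 0.0, 1.0)

        c_l = km_endpoint(x, lower_mf, upper_mf, right=False)
        c_r = km_endpoint(x, lower_mf, upper_mf, right=True)
        print(f"interval centroid approx. [{c_l:.3f}, {c_r:.3f}]")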

  1. DONUTS: A Science Frame Autoguiding Algorithm with Sub-Pixel Precision, Capable of Guiding on Defocused Stars

    Science.gov (United States)

    McCormac, J.; Pollacco, D.; Skillen, I.; Faedi, F.; Todd, I.; Watson, C. A.

    2013-05-01

    We present the DONUTS autoguiding algorithm, designed to fix stellar positions at the sub-pixel level for high-cadence time-series photometry, and also capable of autoguiding on defocused stars. DONUTS was designed to calculate guide corrections from a series of science images and recentre telescope pointing between each exposure. The algorithm has the unique ability of calculating guide corrections from undersampled to heavily defocused point spread functions. We present the case for why such an algorithm is important for high precision photometry and give our results from off and on-sky testing. We discuss the limitations of DONUTS and the facilities where it soon will be deployed.
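
    The core idea of guiding from the science frames themselves — collapsing each frame to 1D x and y projections and cross-correlating them against a reference frame to recover a shift — can be sketched as follows (a generic illustration on a synthetic defocused star, not the DONUTS code; the parabolic sub-pixel refinement is one common choice):

        import numpy as np

        def projection_shift(reference, frame):
            """Estimate (dy, dx) of `frame` relative to `reference` from 1D projections."""
            shifts = []
            for axis in (1, 0):      # axis=1: collapse columns -> y profile; axis=0: rows -> x profile
                ref_prof = reference.sum(axis=axis)
                img_prof = frame.sum(axis=axis)
                ref_prof = ref_prof - ref_prof.mean()
                img_prof = img_prof - img_prof.mean()
                corr = np.correlate(img_prof, ref_prof, mode="full")
                peak = int(np.argmax(corr))
                # Parabolic interpolation around the discrete peak for a sub-pixel estimate.
                if 0 < peak < corr.size - 1:
                    y0, y1, y2 = corr[peak - 1], corr[peak], corr[peak + 1]
                    peak = peak + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
                shifts.append(peak - (ref_prof.size - 1))
            return tuple(shifts)     # (dy, dx)

        # Hypothetical test: a defocused (donut-shaped) star shifted by a known amount.
        yy, xx = np.mgrid[0:128, 0:128]
        def donut(cy, cx):
            r = np.hypot(yy - cy, xx - cx)
            return np.exp(-0.5 * ((r - 6.0) / 2.0) ** 2)

        ref = donut(64.0, 64.0)
        moved = donut(64.7, 62.4)    # true shift: dy = +0.7, dx = -1.6
        print(projection_shift(ref, moved))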

  2. Automated Multi-Peak Tracking Kymography (AMTraK: A Tool to Quantify Sub-Cellular Dynamics with Sub-Pixel Accuracy.

    Directory of Open Access Journals (Sweden)

    Anushree R Chaphalkar

    Full Text Available Kymographs or space-time plots are widely used in cell biology to reduce the dimensions of a time-series in microscopy for both qualitative and quantitative insight into spatio-temporal dynamics. While multiple tools for image kymography have been described before, quantification remains largely manual. Here, we describe a novel software tool for automated multi-peak tracking kymography (AMTraK), which uses peak information and distance minimization to track and automatically quantify kymographs, integrated in a GUI. The program takes fluorescence time-series data as an input and tracks contours in the kymographs based on intensity and gradient peaks. By integrating a branch-point detection method, it can be used to identify merging and splitting events of tracks, important in separation and coalescence events. In tests with synthetic images, we demonstrate sub-pixel positional accuracy of the program. We test the program by quantifying sub-cellular dynamics in rod-shaped bacteria, microtubule (MT) transport and vesicle dynamics. A time-series of E. coli cell division with labeled nucleoid DNA is used to identify the time-point and rate at which the nucleoid segregates. The mean velocity of microtubule (MT) gliding motility due to a recombinant kinesin motor is estimated as 0.5 μm/s, in agreement with published values, and comparable to estimates using software for nanometer precision filament-tracking. We proceed to employ AMTraK to analyze previously published time-series microscopy data where kymographs had been manually quantified: clathrin polymerization kinetics during vesicle formation and anterograde and retrograde transport in axons. AMTraK analysis not only reproduces the reported parameters, it also provides an objective and automated method for reproducible analysis of kymographs from in vitro and in vivo fluorescence microscopy time-series of sub-cellular dynamics.

  3. Automated Multi-Peak Tracking Kymography (AMTraK): A Tool to Quantify Sub-Cellular Dynamics with Sub-Pixel Accuracy.

    Science.gov (United States)

    Chaphalkar, Anushree R; Jain, Kunalika; Gangan, Manasi S; Athale, Chaitanya A

    2016-01-01

    Kymographs or space-time plots are widely used in cell biology to reduce the dimensions of a time-series in microscopy for both qualitative and quantitative insight into spatio-temporal dynamics. While multiple tools for image kymography have been described before, quantification remains largely manual. Here, we describe a novel software tool for automated multi-peak tracking kymography (AMTraK), which uses peak information and distance minimization to track and automatically quantify kymographs, integrated in a GUI. The program takes fluorescence time-series data as an input and tracks contours in the kymographs based on intensity and gradient peaks. By integrating a branch-point detection method, it can be used to identify merging and splitting events of tracks, important in separation and coalescence events. In tests with synthetic images, we demonstrate sub-pixel positional accuracy of the program. We test the program by quantifying sub-cellular dynamics in rod-shaped bacteria, microtubule (MT) transport and vesicle dynamics. A time-series of E. coli cell division with labeled nucleoid DNA is used to identify the time-point and rate at which the nucleoid segregates. The mean velocity of microtubule (MT) gliding motility due to a recombinant kinesin motor is estimated as 0.5 μm/s, in agreement with published values, and comparable to estimates using software for nanometer precision filament-tracking. We proceed to employ AMTraK to analyze previously published time-series microscopy data where kymographs had been manually quantified: clathrin polymerization kinetics during vesicle formation and anterograde and retrograde transport in axons. AMTraK analysis not only reproduces the reported parameters, it also provides an objective and automated method for reproducible analysis of kymographs from in vitro and in vivo fluorescence microscopy time-series of sub-cellular dynamics.
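
    The basic operation such a tool automates — locating intensity peaks along a kymograph line with sub-pixel precision and linking them across time points — can be sketched as below (a generic illustration on hypothetical 1D intensity profiles, not the AMTraK implementation):

        import numpy as np

        def subpixel_peaks(profile, min_height=0.0):
            """Return sub-pixel positions of local maxima in a 1D intensity profile.

            Each integer-pixel maximum is refined with a three-point parabola fit.
            """
            positions = []
            for i in range(1, profile.size - 1):
                if profile[i] >= min_height and profile[i] > profile[i - 1] and profile[i] > profile[i + 1]:
                    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
                    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
                    positions.append(i + delta)
            return np.array(positions)

        def link_nearest(prev_positions, new_positions, max_jump=3.0):
            """Greedy nearest-neighbour linking of peaks between consecutive time points."""
            links = []
            for p in prev_positions:
                if new_positions.size == 0:
                    break
                j = np.argmin(np.abs(new_positions - p))
                if abs(new_positions[j] - p) <= max_jump:
                    links.append((p, new_positions[j]))
            return links

        # Hypothetical kymograph lines at two time points: two Gaussian peaks each.
        x = np.arange(200)
        line_t0 = np.exp(-0.5 * ((x - 50.3) / 3.0) ** 2) + np.exp(-0.5 * ((x - 120.8) / 3.0) ** 2)
        line_t1 = np.exp(-0.5 * ((x - 51.1) / 3.0) ** 2) + np.exp(-0.5 * ((x - 119.9) / 3.0) ** 2)

        p0 = subpixel_peaks(line_t0, min_height=0.2)
        p1 = subpixel_peaks(line_t1, min_height=0.2)
        print("tracked displacements:", [b - a for a, b in link_nearest(p0, p1)])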

  4. Comment on “Centroid theory of transverse electron-proton two-stream instability in a long proton bunch”

    Directory of Open Access Journals (Sweden)

    D. V. Pestrikov

    2004-11-01

    Full Text Available We show that, due to inaccurate calculations with a Volterra integral equation, the results obtained in the commented paper are not correct. Such an awkwardness could have been avoided had the authors retained the initial conditions for the proton bunch centroid in their calculations.

  5. Left-right asymmetry of the Maxwell spot centroids in adults without and with dyslexia.

    Science.gov (United States)

    Le Floch, Albert; Ropars, Guy

    2017-10-25

    In human vision, the brain has to select one view of the world from our two eyes. However, the existence of a clear anatomical asymmetry providing an initial imbalance for normal neural development is still not understood. Using a so-called foveascope, we found that for a cohort of 30 normal adults, the two blue cone-free areas at the centre of the foveas are asymmetrical. The noise-stimulated afterimage dominant eye introduced here corresponds to the circular blue cone-free area, while the non-dominant eye corresponds to the diffuse and irregular elliptical outline. By contrast, we found that this asymmetry is absent or frustrated in a similar cohort of 30 adults with normal ocular status, but with dyslexia, i.e. with visual and phonological deficits. In this case, our results show that the two Maxwell centroid outlines are both circular but lead to an undetermined afterimage dominance with a coexistence of primary and mirror images. The interplay between the lack of asymmetry and the development in the neural maturation of the brain pathways suggests new implications in both fundamental and biomedical sciences. © 2017 The Author(s).

  6. Mitigation of Angle Tracking Errors Due to Color Dependent Centroid Shifts in SIM-Lite

    Science.gov (United States)

    Nemati, Bijan; An, Xin; Goullioud, Renaud; Shao, Michael; Shen, Tsae-Pyng; Wehmeier, Udo J.; Weilert, Mark A.; Wang, Xu; Werne, Thomas A.; Wu, Janet P.

    2010-01-01

    The SIM-Lite astrometric interferometer will search for Earth-size planets in the habitable zones of nearby stars. In this search the interferometer will monitor the astrometric position of candidate stars relative to nearby reference stars over the course of a 5-year mission. The elemental measurement is the angle between a target star and a reference star. This is a two-step process, in which the interferometer will each time need to use its controllable optics to align the starlight in the two arms with each other and with the metrology beams. The sensor for this alignment is an angle tracking CCD camera. Various constraints in the design of the camera subject it to systematic alignment errors when observing a star of one spectrum compared with a star of a different spectrum. This effect is called a Color Dependent Centroid Shift (CDCS) and has been studied extensively with SIM-Lite's SCDU testbed. Here we describe results from the simulation and testing of this error in the SCDU testbed, as well as effective ways that it can be reduced to acceptable levels.

  7. Optimization of Carboxymethyl-Xyloglucan-Based Tramadol Matrix Tablets Using Simplex Centroid Mixture Design

    Directory of Open Access Journals (Sweden)

    Ashwini R. Madgulkar

    2013-01-01

    Full Text Available The aim was to determine the release-modifying effect of carboxymethyl xyloglucan for oral drug delivery. Sustained release matrix tablets of tramadol HCl were prepared by the wet granulation method using carboxymethyl xyloglucan as the matrix-forming polymer. HPMC K100M was used in a small amount to control the burst effect, which is most commonly seen with natural hydrophilic polymers. A simplex centroid design with three independent variables and two dependent variables was employed to systematically optimize the drug release profile. Carboxymethyl xyloglucan, HPMC K100M, and dicalcium phosphate were taken as independent variables. The dependent variables selected were the percent of drug release at the 2nd hour and at the 8th hour. Response surface plots were developed, and optimum formulations were selected on the basis of desirability. The formulated tablets showed an anomalous release mechanism and followed matrix drug release kinetics, resulting in regulated and complete release from the tablets within 8 to 10 hours. The polymers carboxymethyl xyloglucan and HPMC K100M had a significant effect on drug release from the tablet. Polynomial mathematical models, generated for various response variables using multiple regression analysis, were found to be statistically significant. The statistical models developed for optimization were found to be valid.
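
    For reference, the runs of a simplex centroid design are the 2^q - 1 non-empty subsets of the q mixture components, each blended in equal proportions. A small sketch that enumerates these canonical design points (using the three independent variables named above as labels; the actual factor levels of the study are not reproduced) is:

        from itertools import combinations

        def simplex_centroid_design(components):
            """Enumerate the 2**q - 1 equal-proportion blends of a simplex centroid design."""
            runs = []
            q = len(components)
            for size in range(1, q + 1):
                for subset in combinations(range(q), size):
                    proportions = [1.0 / size if i in subset else 0.0 for i in range(q)]
                    runs.append(dict(zip(components, proportions)))
            return runs

        for run in simplex_centroid_design(["CM_xyloglucan", "HPMC_K100M", "dicalcium_phosphate"]):
            print(run)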

  8. Correction of sub-pixel topographical effects on land surface albedo retrieved from geostationary satellite (FengYun-2D) observations

    NARCIS (Netherlands)

    Roupioz, L.F.S.; Jia, L.; Nerry, F.; Menenti, M.

    2014-01-01

    The Qinghai-Tibetan Plateau is characterised by very strong relief, which affects albedo retrieval from satellite data. The objective of this study is to highlight the effects of subpixel topography and to account for those effects when retrieving land surface albedo from geostationary satellite observations.

  9. 2D wireless sensor network deployment based on Centroidal Voronoi Tessellation

    Science.gov (United States)

    Iliodromitis, Athanasios; Pantazis, George; Vescoukis, Vasileios

    2017-06-01

    In recent years, Wireless Sensor Networks (WSNs) have rapidly evolved and now comprise a powerful tool in monitoring and observation of the natural environment, among other fields. The use of WSNs is critical in early warning systems, which are of high importance today. In fact, WSNs are adopted more and more in various applications, e.g. for fire or deformation detection. The optimum deployment of sensors is a multi-dimensional problem, which has two main components: the network approach and the positioning approach. Although a lot of work has dealt with the issue, most of it emphasizes the network approach (communication, energy consumption) rather than the topography (positioning) of the sensors and the achievement of an ideal geometry. In some cases, it is hard or even impossible to achieve perfect geometry in node deployment. The ideal and desirable scenario of nodes arranged in a square or hexagonal grid would raise the cost of the network extremely, especially in unfriendly or hostile environments. In such environments the positions of the sensors have to be chosen among a list of possible points, which in most cases are randomly distributed. This constraint has to be taken into consideration during WSN planning. Full geographical coverage is, in some applications, of the same, if not greater, importance than network coverage. Cost is a crucial factor in network planning and, given that resources are often limited, what matters is to cover the whole area with the minimum number of sensors. This paper suggests a deployment method for nodes, in large-scale and high-density WSNs, based on Centroidal Voronoi Tessellation (CVT). It approximates the solution through the geometry of the random points and proposes a deployment plan, for the given characteristics of the study area, in order to achieve a deployment as near as possible to the ideal one.
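
    The centroidal Voronoi tessellation underlying such a deployment can be approximated with a simple Lloyd-type iteration; the sketch below uses a Monte Carlo estimate of the Voronoi-cell centroids over the unit square (a generic illustration of the CVT idea with a uniform density, not the paper's method, which additionally restricts nodes to a list of feasible points):

        import numpy as np

        def lloyd_cvt(n_nodes, n_samples=200_000, n_iter=50, seed=0):
            """Approximate a centroidal Voronoi tessellation of the unit square.

            Each iteration assigns random sample points to their nearest node and
            moves every node to the centroid of the samples it owns.
            """
            rng = np.random.default_rng(seed)
            nodes = rng.random((n_nodes, 2))
            samples = rng.random((n_samples, 2))
            for _ in range(n_iter):
                # Nearest-node assignment for every sample point.
                d2 = ((samples[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
                owner = d2.argmin(axis=1)
                for k in range(n_nodes):
                    cell = samples[owner == k]
                    if cell.size:
                        nodes[k] = cell.mean(axis=0)
            return nodes

        print(lloyd_cvt(n_nodes=8, n_samples=20_000, n_iter=30))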

  10. Spurious dianeutral mixing in a global ocean model using spherical centroidal voronoi tessellations

    Science.gov (United States)

    Zhao, Shimei; Liu, Yudi

    2016-12-01

    In order to quantitatively evaluate the spurious dianeutral mixing in the global ocean model MPAS-Ocean (Model for Prediction Across Scales), which uses spherical centroidal Voronoi tessellations and was developed jointly by the National Center for Atmospheric Research and the Los Alamos National Laboratory in the United States, we choose the z* vertical coordinate system in MPAS-Ocean, in which all physical mixing processes, such as convection adjustment and explicit diffusion parameter schemes, are omitted, using a linear equation of state. By calculating the Reference Potential Energy (RPE), the front evolution position, the time rate of RPE change, the probability density function distribution and the dimensionless parameter χ, from the perspectives of resolution, viscosity, Horizontal Grid Reynolds Number (HGRN) ReΔ, and momentum transmission scheme, using two idealized cases, overflow and baroclinic eddy channel, we qualitatively analyze the simulation results by comparison with the three non-isopycnal models in Ilicak et al. (2012), i.e., MITGCM, MOM, and ROMS. The results show that the spurious dianeutral mixing in MPAS-Ocean increases over time. The spurious dianeutral transport is directly proportional to the HGRN and is reduced by increasing the lateral viscosity or using a finer resolution to control the HGRN. When the HGRN is less than 10, spurious transport is reduced significantly. When using the proper viscosity closure, MPAS-Ocean performs better than MITGCM and MOM, and close to ROMS, in the 2D case without rotation, and much better than the above-mentioned three ocean models in 3D with rotation, due to the cell area difference between the hexagonal cell and the quadrilateral cell at the same resolution. Both the Zalesak (1979) flux-corrected transport scheme and the Leith closure in MPAS-Ocean play an excellent role in reducing spurious dianeutral mixing. The performance of the Leith scheme is preferable in the three-dimensional baroclinic eddy case.

  11. Centroid search optimization of cultural conditions affecting the production of extracellular proteinase by Pseudomonas fragi ATCC 4973.

    Science.gov (United States)

    Myhara, R M; Skura, B

    1990-10-01

    The production of extracellular proteinase by Pseudomonas fragi ATCC 4973 grown in a defined citrate medium, containing glutamine as the sole nitrogen source, was determined under varying cultural conditions. Simultaneous evaluation of cultural conditions using a 'centroid search' optimization technique showed that the optimum cultural conditions for proteinase production by Ps. fragi were: incubation temperature, 12.5 degrees C; incubation time, 38 h; initial pH, 6.8; organic nitrogen concentration, 314 mmol nitrogen/l (glutamine); a gas mixture containing 16.4% oxygen flowing over the medium (7.42 ppm dissolved oxygen). Oxygen was the major factor influencing proteinase production by Ps. fragi. The results may have applications in the storage of fluid milk. Centroid search optimization was shown to be suitable for microbiological experiments.

  12. Color capable sub-pixel resolving optofluidic microscope and its application to blood cell imaging for malaria diagnosis.

    Directory of Open Access Journals (Sweden)

    Seung Ah Lee

    Full Text Available Miniaturization of imaging systems can significantly benefit clinical diagnosis in challenging environments, where access to physicians and good equipment can be limited. The sub-pixel resolving optofluidic microscope (SROFM) offers high-resolution imaging in the form of an on-chip device, with the combination of microfluidics and inexpensive CMOS image sensors. In this work, we report on the implementation of color SROFM prototypes with a demonstrated optical resolution of 0.66 µm at their highest acuity. We applied the prototypes to perform color imaging of red blood cells (RBCs) infected with Plasmodium falciparum, a particularly harmful type of malaria parasite and one of the major causes of death in the developing world.

  13. An Investigation on the Use of Different Centroiding Algorithms and Star Catalogs in Astro-Geodetic Observations

    Science.gov (United States)

    Basoglu, Burak; Halicioglu, Kerem; Albayrak, Muge; Ulug, Rasit; Tevfik Ozludemir, M.; Deniz, Rasim

    2017-04-01

    In the last decade, the importance of high-precision geoid determination at the local or national level has been pointed out by the Turkish National Geodesy Commission. The Commission has also put the modernization of the national height system of Turkey on the agenda. Meanwhile, several projects have been realized in recent years. In Istanbul, a GNSS/levelling geoid was defined in 2005 for the metropolitan area of the city with an accuracy of ±3.5 cm. In order to achieve a better accuracy in this area, the project "Local Geoid Determination with Integration of GNSS/Levelling and Astro-Geodetic Data" has been conducted at Istanbul Technical University and Bogazici University KOERI since January 2016. The project is funded by The Scientific and Technological Research Council of Turkey. Within the scope of the project, modernization studies of the Digital Zenith Camera System are being carried out in terms of hardware components and software development. The main subjects are the star catalogues and the centroiding algorithm used to identify the stars in the zenithal star field. During the test observations of the Digital Zenith Camera System performed between 2013 and 2016, final results were calculated using the PSF method for star centroiding and the second USNO CCD Astrograph Catalogue (UCAC2) for the reference star positions. This study aims to investigate the position accuracy of the star images by comparing different centroiding algorithms and available star catalogues used in astro-geodetic observations conducted with the digital zenith camera system.
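
    Two of the centroiding approaches typically compared in such studies — an intensity-weighted centroid and a two-dimensional Gaussian PSF fit — can be sketched as follows (a generic illustration on a synthetic star image; neither the project's own software nor the exact PSF model used for the zenith camera is reproduced here):

        import numpy as np
        from scipy.optimize import curve_fit

        def weighted_centroid(image):
            """Intensity-weighted centroid (background-subtracted) of a star image."""
            img = image - np.median(image)
            img[img < 0] = 0.0
            yy, xx = np.indices(img.shape)
            total = img.sum()
            return (yy * img).sum() / total, (xx * img).sum() / total

        def gaussian_centroid(image):
            """Centroid from a 2D Gaussian PSF fit."""
            yy, xx = np.indices(image.shape)
            def model(coords, amp, cy, cx, sigma, offset):
                y, x = coords
                return amp * np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2)) + offset
            p0 = (image.max(), image.shape[0] / 2, image.shape[1] / 2, 2.0, np.median(image))
            popt, _ = curve_fit(model, (yy.ravel(), xx.ravel()), image.ravel(), p0=p0)
            return popt[1], popt[2]

        # Hypothetical star at (15.3, 20.7) with a sky background and Poisson noise.
        yy, xx = np.indices((32, 40))
        truth = 500.0 * np.exp(-((yy - 15.3) ** 2 + (xx - 20.7) ** 2) / (2 * 1.8 ** 2)) + 50.0
        star = np.random.default_rng(1).poisson(truth).astype(float)

        print("weighted centroid:", weighted_centroid(star))
        print("Gaussian fit     :", gaussian_centroid(star))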

  14. Franck-Condon factors and r-centroids for the B-X bands of 10B18O and 11B18O molecules

    Directory of Open Access Journals (Sweden)

    VOJISLAV BOJOVIC

    2005-05-01

    Full Text Available Franck–Condon factors and r-centroids have been calculated for the B2Σ+–X2Σ+ bands of the 10B18O and 11B18O isotopic molecules, assuming that both the B and X states follow a Morse potential curve. The calculated qn′n″ values are compared with observed band intensities, and the relationship between the r-centroids and the band positions has been determined and is discussed.

  15. Optimization of the fermentation media for sophorolipid production from Candida bombicola ATCC 22214 using a simplex centroid design.

    Science.gov (United States)

    Rispoli, Fred J; Badia, Daniel; Shah, Vishal

    2010-01-01

    This article describes the use of a simplex centroid mixture experimental design to optimize the fermentation medium in the production of sophorolipids (SLs) using Candida bombicola. In the first stage, 16 media ingredients were screened for the ones that have the most positive influence on the SL production. The sixteen ingredients that were chosen are five different carbohydrates (fructose, glucose, glycerol, lactose, and sucrose), five different nitrogen sources (malt extract, peptone extract, soytone, urea, and yeast extract), two lipid sources (mineral oil and oleic acid), two phosphorus sources (K(2)HPO(4) and KH(2)PO(4)), MgSO(4), and CaCl(2). Multiple regression analysis and centroid effect analysis were carried out to find the sugar, lipid, nitrogen source, phosphorus source, and metals having the most positive influence. Sucrose, malt extract, oleic acid, K(2)HPO(4), and CaCl(2) were selected for the second stage of experiments. An augmented simplex centroid design for five ingredients requiring 16 experiments was used for the optimization stage. This produced a quadratic model developed to help understand the interaction amongst the ingredients and find the optimal media concentrations. In addition, the top three results from the optimization experiments were used to obtain constraints that identify an optimal region. The model together with the optimal region constraints predicts the maximum production of SLs when the fermentation media is composed of sucrose, 125 g/L; malt extract, 25 g/L; oleic acid, 166.67 g/L; K(2)HPO(4), 1.5 g/L; and CaCl(2), 2.5 g/L. The optimal media was validated experimentally and a yield of 177 g/L was obtained. (c) 2010 American Institute of Chemical Engineers

  16. Observations of sensor bias dependent cluster centroid shifts in a prototype sensor for the LHCb Vertex Locator detector

    CERN Document Server

    Papadelis, Aras

    2006-01-01

    We present results from a recent beam test of a prototype sensor for the LHCb Vertex Locator detector, read out with the Beetle 1.3 front-end chip. We have studied the effect of the sensor bias voltage on the reconstructed cluster positions in a sensor placed in a 120 GeV pion beam at a 10° incidence angle. We find an unexplained systematic shift in the reconstructed cluster centroid when increasing the bias voltage on an already overdepleted sensor. The shift is independent of strip pitch and sensor thickness.

  17. Overall noise characteristics of reduced images on liquid crystal display and advantages of independent subpixel driving technology.

    Science.gov (United States)

    Yamazaki, Asumi; Ichikawa, Katsuhiro; Kodera, Yoshie; Funahashi, Masao

    2013-02-01

    During soft-copy diagnoses, medical images with large matrix sizes often need to be displayed as reduced images on liquid crystal displays (LCDs) because of the spatial resolution limitation of LCDs. A new technology, known as independent subpixel driving (ISD), was recently applied to clinical use with the aim of improving spatial resolution. The authors' study demonstrates the overall noise characteristics of images displayed on an LCD at various display magnifications, with and without ISD application. Measurements of the overall noise power spectra (NPS) of x-ray images displayed on the LCD were performed at varying display magnifications, with and without ISD. The NPS of displayed images in several display situations were also simulated based on hypothetical noise factors. The measured and simulated NPS showed that noise characteristics worsened when the display magnification was reduced, due to aliasing errors. The overall noise characteristics were attributed to luminance-value fluctuation converted from pixel values, image-interpolation effects, inherent noise, and blurring of the LCD. ISD improved the noise characteristics because it suppressed noise increments caused by aliasing errors. ISD offered noise-characteristic advantages for reduced images displayed on LCDs, particularly at low frequencies.

  18. PSICIC: noise and asymmetry in bacterial division revealed by computational image analysis at sub-pixel resolution.

    Directory of Open Access Journals (Sweden)

    Jonathan M Guberman

    2008-11-01

    Full Text Available Live-cell imaging by light microscopy has demonstrated that all cells are spatially and temporally organized. Quantitative, computational image analysis is an important part of cellular imaging, providing both enriched information about individual cell properties and the ability to analyze large datasets. However, such studies are often limited by the small size and variable shape of objects of interest. Here, we address two outstanding problems in bacterial cell division by developing a generally applicable, standardized, and modular software suite termed Projected System of Internal Coordinates from Interpolated Contours (PSICIC) that solves common problems in image quantitation. PSICIC implements interpolated-contour analysis for accurate and precise determination of cell borders and automatically generates internal coordinate systems that are superimposable regardless of cell geometry. We have used PSICIC to establish that the cell-fate determinant, SpoIIE, is asymmetrically localized during Bacillus subtilis sporulation, thereby demonstrating the ability of PSICIC to discern protein localization features at sub-pixel scales. We also used PSICIC to examine the accuracy of cell division in Escherichia coli and found a new role for the Min system in regulating division-site placement throughout the cell length, but only prior to the initiation of cell constriction. These results extend our understanding of the regulation of both asymmetry and accuracy in bacterial division while demonstrating the general applicability of PSICIC as a computational approach for quantitative, high-throughput analysis of cellular images.

  19. Experimental determination of Philodendron melinonii and Arabidopsis thaliana tissue microstructure and geometric modeling via finite-edge centroidal Voronoi tessellation

    Science.gov (United States)

    Faisal, Tanvir R.; Hristozov, Nicolay; Rey, Alejandro D.; Western, Tamara L.; Pasini, Damiano

    2012-09-01

    Plant petioles and stems are hierarchical cellular structures, displaying structural features defined at multiple length scales. One or more of the intermediate hierarchical levels consists of tissues, in which the cellular distribution is quasirandom. The current work focuses on the realistic modeling of plant tissue microstructures. The finite-edge centroidal Voronoi tessellation (FECVT) is here introduced to overcome the drawbacks of the semi-infinite edges of a typical Voronoi model. FECVT can generate a realistic model of a tissue microstructure, which might have finite edges at its border, be defined by a boundary contour of any shape, and include complex heterogeneity and cellular gradients. The centroid-based Voronoi tessellation is applied to model the microstructure of the Philodendron melinonii petiole and the Arabidopsis thaliana stem, which both display intense cellular gradients. FECVT coupled with a digital image processing algorithm is implemented to capture the nonperiodic microstructures of plant tissues. The results obtained via this method satisfactorily obey the geometric, statistical, and topological laws of naturally evolved cellular solids. The predicted models are also validated by experimental data.

  20. Toward image registration process: Using different interpolation methods in case of subpixel displacement

    Science.gov (United States)

    Flores Padilla, Deyanira; Jimenez-Hernández, Hugo; Reynosa Canseco, Jaqueline

    2016-09-01

    Interpolation of sampled data is required in many image processing methods, for example in the estimation of displacements that are smaller than one pixel. When calculating displacement in an image sequence with the Newton-Raphson numerical method, it is very common to use a linear interpolator because of its simplicity and speed. However, this interpolator generates discontinuous functions, so in theory it should not be combined with Newton-Raphson, which relies on derivatives. This work presents a comparative analysis of different interpolators, along with a comparison between "real-world" and image displacements and their relationship, with the purpose of identifying which interpolator offers the most accurate approximation when estimating displacement.
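
    The practical point — that the choice of interpolator affects how well a sub-pixel displacement can be recovered from sampled data — can be illustrated with a toy one-dimensional experiment (a generic sketch comparing a linear and a cubic interpolator by brute-force search, not the authors' Newton-Raphson experiments):

        import numpy as np
        from scipy.interpolate import interp1d, CubicSpline

        # Reference signal sampled on an integer grid.
        x = np.arange(0, 50)
        signal = np.sin(2 * np.pi * x / 17.0)

        true_shift = 0.37                  # sub-pixel displacement to recover
        shifted_exact = np.sin(2 * np.pi * (x - true_shift) / 17.0)

        for name, interpolator in [
            ("linear", interp1d(x, signal, kind="linear", fill_value="extrapolate")),
            ("cubic spline", CubicSpline(x, signal)),
        ]:
            # Brute-force search over candidate sub-pixel shifts: the estimated
            # displacement is the shift that best re-creates the observed signal.
            candidates = np.linspace(0.0, 1.0, 2001)
            errors = [np.sum((interpolator(x - s) - shifted_exact) ** 2) for s in candidates]
            estimate = candidates[int(np.argmin(errors))]
            print(f"{name:>12s}: estimated shift = {estimate:.4f} (true {true_shift})")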

  1. Alteração no método centroide de avaliação da adaptabilidade genotípica Alteration of the centroid method to evaluate genotypic adaptability

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2009-03-01

    Full Text Available The objective of this work was to modify the centroid method for evaluating the phenotypic adaptability and stability of genotypes, in order to give it greater biological meaning and to improve quantitative and qualitative aspects of its analysis. The modification consisted of adding three more ideotypes, defined according to the mean values of the genotypes in the environments. Data from an experiment on the dry matter production of 92 alfalfa (Medicago sativa) genotypes, carried out in a randomized block design with two replicates, were used. The genotypes were subjected to 20 cuts between November 2004 and June 2006, and each cut was considered an environment. The inclusion of the ideotypes with greater biological meaning (mean values in the environments) resulted in a graphical dispersion in the shape of an arrow pointing to the right, in which the most productive genotypes lie near the tip of the arrow. With the modification, only five genotypes were classified into the same classes as in the original centroid method. The arrow-shaped figure provides a direct comparison of the genotypes through the formation of a productivity gradient. The modified method retains the ease of interpretation of the results for genotype recommendation present in the original method and does not allow ambiguous interpretation of the results.

  2. Optimization of a fermented soy product formulation with a kefir culture and fiber using a simplex-centroid mixture design.

    Science.gov (United States)

    Baú, Tahis Regina; Garcia, Sandra; Ida, Elza Iouko

    2013-12-01

    The objective of this work was to optimize a fermented soy product formulation with a kefir culture and soy, oat and wheat fibers, and to evaluate the fiber and product characteristics. A simplex-centroid mixture design was used for the optimization. Soymilk, mixtures of soy, oat and wheat fiber, sucrose and an anti-foaming agent were used for the formulation, followed by thermal treatment, cooling and the addition of flavoring. Fermentation was performed at 25 °C with a kefir culture until a pH of 4.5 was obtained. The products were cooled, homogenized and stored for analysis. From the mathematical models, the response surfaces and the desirability function, an optimal fermented product containing 3% (w/w) soy fiber was formulated. Compared with the other formulations, the fermented soy product with 3% soy fiber had the best acidity, viscosity, syneresis, firmness and Lactococcus lactis count.

  3. High-position-resolution scintillation neutron-imaging detector by crossed-fiber readout with novel centroid-finding method

    CERN Document Server

    Katagiri, M; Sakasai, K; Matsubayashi, M; Birumachi, A; Takahashi, H; Nakazawa, M

    2002-01-01

    Aiming at high-position-resolution and high-counting-rate neutron imaging, a novel centroid-finding method is proposed for a scintillation neutron-imaging detector with crossed-fiber readout. Crossed wavelength-shifting fibers are arranged on and under the scintillator. Luminescences generated in the scintillator are emitted and detected by a few fibers surrounding the incident point of a neutron. In the novel method, X and Y positions of the incident neutron are decided by coincidence of a central signal and neighboring signals, respectively. By fundamental experiments using a ZnS:Ag/{sup 6}LiF scintillator of 0.5-mm thickness and crossed wavelength-shifting fibers with a size of 0.5 x 0.5 mm{sup 2}, it was confirmed that the position resolution is about 0.5 mm and the limitation of the neutron-counting rate is 320 kcps. (orig.)

  4. High-position-resolution scintillation neutron-imaging detector by crossed-fiber readout with novel centroid-finding method

    Energy Technology Data Exchange (ETDEWEB)

    Katagiri, M.; Toh, K.; Sakasai, K.; Matsubayashi, M.; Birumachi, A. [Advanced Science Research Center, JAERI, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Takahashi, H.; Nakazawa, M. [Department of Quantum Engineering and Science, University of Tokyo, Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2002-07-01

    Aiming at high-position-resolution and high-counting-rate neutron imaging, a novel centroid-finding method is proposed for a scintillation neutron-imaging detector with crossed-fiber readout. Crossed wavelength-shifting fibers are arranged on and under the scintillator. Luminescences generated in the scintillator are emitted and detected by a few fibers surrounding the incident point of a neutron. In the novel method, X and Y positions of the incident neutron are decided by coincidence of a central signal and neighboring signals, respectively. By fundamental experiments using a ZnS:Ag/{sup 6}LiF scintillator of 0.5-mm thickness and crossed wavelength-shifting fibers with a size of 0.5 x 0.5 mm{sup 2}, it was confirmed that the position resolution is about 0.5 mm and the limitation of the neutron-counting rate is 320 kcps. (orig.)

  5. Centroid stabilization for laser alignment to corner cubes: designing a matched filter

    Energy Technology Data Exchange (ETDEWEB)

    Awwal, Abdul A. S.; Bliss, Erlan; Brunton, Gordon; Kamm, Victoria Miller; Leach, Richard R.; Lowe-Webb, Roger; Roberts, Randy; Wilhelmsen, Karl

    2016-11-08

    Automation of image-based alignment of National Ignition Facility high energy laser beams is providing the capability of executing multiple target shots per day. One important alignment is beam centration through the second and third harmonic generating crystals in the final optics assembly (FOA), which employs two retroreflecting corner cubes as centering references for each beam. Beam-to-beam variations and systematic beam changes over time in the FOA corner cube images can lead to a reduction in accuracy as well as increased convergence durations for the template-based position detector. A systematic approach is described that maintains FOA corner cube templates and guarantees stable position estimation.
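
    At its core, such a template-based position detector is a matched filter: the image is correlated with a stored template and the correlation peak gives the position estimate. A minimal normalized cross-correlation sketch with sub-pixel refinement (a generic illustration using scikit-image's match_template on a synthetic ring-shaped target, not the NIF alignment code) is:

        import numpy as np
        from skimage.feature import match_template

        def locate(image, template):
            """Matched-filter position estimate: NCC peak plus parabolic sub-pixel refinement."""
            ncc = match_template(image, template)      # peak index = top-left corner of best match
            peak = np.unravel_index(np.argmax(ncc), ncc.shape)
            refined = []
            for axis, idx in enumerate(peak):
                if 0 < idx < ncc.shape[axis] - 1:
                    lo = ncc[tuple(np.subtract(peak, np.eye(2, dtype=int)[axis]))]
                    hi = ncc[tuple(np.add(peak, np.eye(2, dtype=int)[axis]))]
                    mid = ncc[peak]
                    idx = idx + 0.5 * (lo - hi) / (lo - 2.0 * mid + hi)
                refined.append(idx)
            # Convert the top-left corner to the template-centre position in the image.
            return tuple(r + (s - 1) / 2.0 for r, s in zip(refined, template.shape))

        # Hypothetical corner-cube-like template: a bright ring.
        yy, xx = np.mgrid[0:21, 0:21]
        template = np.exp(-0.5 * ((np.hypot(yy - 10, xx - 10) - 5.0) / 1.5) ** 2)

        image = np.zeros((120, 160))
        cy, cx = 63.4, 97.8                            # true centre of the target in the image
        iy, ix = np.mgrid[0:120, 0:160]
        image += np.exp(-0.5 * ((np.hypot(iy - cy, ix - cx) - 5.0) / 1.5) ** 2)

        print("estimated centre:", locate(image, template))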

  6. Dynamics of intracontinental convergence between the western Tarim basin and central Tien Shan constrained by centroid moment tensors of regional earthquakes

    Science.gov (United States)

    Huang, Guo-chin Dino; Roecker, Steven W.; Levin, Vadim; Wang, Haitao; Li, Zhihai

    2017-01-01

    Among the outstanding tectonic questions regarding the convergence between the Tien Shan and Tarim basin in northwestern China are the manner in which deformation is accommodated within their lithospheres, and the extent to which the Tarim lithosphere underthrusts the Tien Shan. In particular, the amount and type of deformation within the Tarim basin is poorly understood. It is also uncertain if the convergence between the Tarim and the Tien Shan takes place mainly along a discrete boundary, or if the Tarim lithosphere simply indents into the Kazakh shield, forming the Tien Shan through crustal thickening accommodated by a distributed series of thrust faults. In this study we use hypocentres from published earthquake catalogues and waveforms recorded by regional seismic networks to determine earthquake source parameters through regional centroid moment tensor inversion. The entire dataset consists of 160 earthquakes that occurred between 1969 and 2009, with moment magnitudes between 3.5 and 7, distributed throughout the central Tien Shan and northwestern Tarim Basin. The estimated focal depths of these earthquakes range from the near-surface to about 44 km. Focal mechanisms throughout much of the Tien Shan indicate active deformation accommodated by thrust faults from at least the upper crust to 30 km depth. South of the Tien Shan, the Jia-shi earthquake sequence within the Tarim basin suggests that both crustal shortening and localized flexure are part of a complicated process involving rotational convergence. Inside the Tarim basin, two earthquakes with thrust faulting mechanisms near the crust-mantle boundary beneath the Bachu uplift imply a brittle rheology of the lower crust. High-angle thrust events occur broadly across the Tien Shan, suggesting that the Tarim lithosphere as a whole is strong and indents into the Kazakh shield to create the mountain range.

  7. DESIGN OF DYADIC-INTEGER-COEFFICIENTS BASED BI-ORTHOGONAL WAVELET FILTERS FOR IMAGE SUPER-RESOLUTION USING SUB-PIXEL IMAGE REGISTRATION

    Directory of Open Access Journals (Sweden)

    P.B. Chopade

    2014-05-01

    Full Text Available This paper presents an image super-resolution scheme based on sub-pixel image registration and the design of a specific class of dyadic-integer-coefficient wavelet filters derived from the construction of a half-band polynomial. First, the integer-coefficient half-band polynomial is designed by the splitting approach. Next, this half-band polynomial is factorized and assigned a specific number of vanishing moments and roots to obtain the dyadic-integer-coefficient low-pass analysis and synthesis filters. The potential of these dyadic-integer-coefficient wavelet filters is explored in the field of image super-resolution using sub-pixel image registration. The two low-resolution frames are registered at a specific sub-pixel shift from one another to restore the resolution lost by the CCD array of the camera. The discrete wavelet transform (DWT) obtained from the designed coefficients is applied to these two low-resolution images to obtain the high-resolution image. The developed approach is validated by comparing the quality metrics with existing filter banks.

  8. From classical to quantum and back: Hamiltonian adaptive resolution path integral, ring polymer, and centroid molecular dynamics

    Science.gov (United States)

    Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.

    2017-12-01

    Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.

  9. Component optimization of dairy manure vermicompost, straw, and peat in seedling compressed substrates using simplex centroid design.

    Science.gov (United States)

    Yang, Longyuan; Cao, Hongliang; Yuan, Qiaoxia; Luo, Shuai; Liu, Zhigang

    2017-08-22

    Vermicomposting is a promising method for disposing of dairy manure, and dairy manure vermicompost (DMV) as a replacement for expensive peat is of high value in seedling compressed substrates. In this research, three main components, DMV, straw, and peat, are combined in the compressed substrates, and the effect of each individual component and the corresponding optimal ratio for seedling production are of interest. To address these issues, a simplex-centroid experimental mixture design is employed, and a cucumber seedling experiment is conducted to evaluate the compressed substrates. Results demonstrated that the mechanical strength and physicochemical properties of compressed substrates for cucumber seedlings can be well satisfied with a suitable mixture ratio of the components. Moreover, the optimal ratio of the components (DMV, straw, and peat) could be determined as 0.5917 : 0.1608 : 0.2475 when the weight coefficients of the three parameters (shoot length, root dry weight and aboveground dry weight) were 1 : 1 : 1. For a different purpose, the optimum ratio can be slightly changed on the basis of different weight coefficients. A compressed substrate is a lump with a certain mechanical strength, produced by applying mechanical pressure to the seedling substrates. It will not harm seedlings when they are bedded out, since the compressed substrate and the seedling are bedded out together. However, vermicompost and agricultural waste components have not previously been used in compressed substrates for vegetable seedling production. Thus, it is important to understand the effect of each individual component on seedling production and to determine the optimal ratio of the components.

  10. Performance of a class of multi-robot deploy and search strategies based on centroidal voronoi configurations

    Science.gov (United States)

    Guruprasad, K. R.; Ghose, Debasish

    2013-04-01

    This article considers a class of deploy and search strategies for multi-robot systems and evaluates their performance. The application framework used is deployment of a system of autonomous mobile robots equipped with required sensors in a search space to gather information. The lack of information about the search space is modelled as an uncertainty density distribution. The agents are deployed to maximise single-step search effectiveness. The centroidal Voronoi configuration, which achieves a locally optimal deployment, forms the basis for sequential deploy and search (SDS) and combined deploy and search (CDS) strategies. Completeness results are provided for both search strategies. The deployment strategy is analysed in the presence of constraints on robot speed and limit on sensor range for the convergence of trajectories with corresponding control laws responsible for the motion of robots. SDS and CDS strategies are compared with standard greedy and random search strategies on the basis of time taken to achieve reduction in the uncertainty density below a desired level. The simulation experiments reveal several important issues related to the dependence of the relative performances of the search strategies on parameters such as the number of robots, speed of robots and their sensor range limits.

  11. Direct assessment of quantum nuclear effects on hydrogen bond strength by constrained-centroid ab initio path integral molecular dynamics.

    Science.gov (United States)

    Walker, Brent; Michaelides, Angelos

    2010-11-07

    The impact of quantum nuclear effects on hydrogen (H-) bond strength has been inferred in earlier work from bond lengths obtained from path integral molecular dynamics (PIMD) simulations. To obtain a direct quantitative assessment of such effects, we use constrained-centroid PIMD simulations to calculate the free energy changes upon breaking the H-bonds in dimers of HF and water. Comparing ab initio simulations performed using PIMD and classical nucleus molecular dynamics (MD), we find smaller dissociation free energies with the PIMD method. Specifically, at 50 K, the H-bond in (HF)(2) is about 30% weaker when quantum nuclear effects are included, while that in (H(2)O)(2) is about 15% weaker. In a complementary set of simulations, we compare unconstrained PIMD and classical nucleus MD simulations to assess the influence of quantum nuclei on the structures of these systems. We find increased heavy atom distances, indicating weakening of the H-bond consistent with that observed by direct calculation of the free energies of dissociation.

  12. Optimization of Phenolic Antioxidant Extraction from Wuweizi (Schisandra chinensis) Pulp Using Random-Centroid Optimization Methodology

    Directory of Open Access Journals (Sweden)

    Xiong Yu

    2011-09-01

    Full Text Available The extraction optimization and composition analysis of polyphenols in the fresh pulp of Wuweizi (Schisandra chinensis) were investigated in this study. The extraction process of polyphenols from Wuweizi pulp was optimized using the Random-Centroid Optimization (RCO) methodology. Six factors, including the liquid-to-solid ratio, ethanol concentration, pH, temperature, heating time and number of extractions, and three extraction targets, polyphenol content, antioxidant activity and extract yield, were considered in the RCO program. Three sets of optimum proposed factor values were obtained, corresponding to the three extraction targets respectively. The set of optimum proposed factor values for polyphenol extraction was chosen for further experiments as follows: liquid-to-solid ratio (v/w) 8, ethanol 67.3% (v/v), initial pH 1.75, temperature 55 °C for 4 h, and extraction repeated 4 times. The Wuweizi polyphenol extract (WPE) was obtained with a yield of 16.37 mg/g and a composition of polyphenols 1.847 mg/g, anthocyanins 0.179 mg/g, sugar 9.573 mg/g and protein 0.327 mg/g. The WPE demonstrated high scavenging activity against DPPH radicals.

  13. A Framework for Quantifying the Impacts of Sub-Pixel Reflectance Variance and Covariance on Cloud Optical Thickness and Effective Radius Retrievals Based on the Bi-Spectral Method.

    Science.gov (United States)

    Zhang, Z; Werner, F.; Cho, H. -M.; Wind, Galina; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry

    2017-01-01

    The so-called bi-spectral method retrieves cloud optical thickness (τ) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near infrared (VIS/NIR) band and the other in a shortwave-infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved τ and re. In this study, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the τ and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the τ and re retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval.

  14. A framework for quantifying the impacts of sub-pixel reflectance variance and covariance on cloud optical thickness and effective radius retrievals based on the bi-spectral method

    Science.gov (United States)

    Zhang, Z.; Werner, F.; Cho, H.-M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry

    2017-02-01

    The so-called bi-spectral method retrieves cloud optical thickness (τ) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near infrared (VIS/NIR) band and the other in a shortwave-infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved τ and re. In this study, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the τ and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the τ and re retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval.
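
    The second-order Taylor expansion at the heart of this framework estimates how much the mean of the sub-pixel retrievals differs from the retrieval performed on pixel-mean reflectances, using the sub-pixel variances and the covariance of the two reflectances. A generic numerical sketch (with a stand-in retrieval function rather than an actual radiative-transfer look-up table) is:

        import numpy as np

        def taylor_bias(retrieval, r_vis_mean, r_swir_mean, var_vis, var_swir, cov, h=1e-3):
            """Second-order Taylor estimate of E[r(R)] - r(E[R]) caused by sub-pixel variability.

            `retrieval` maps (R_vis, R_swir) to a retrieved quantity (e.g. tau or r_e);
            the second derivatives are taken numerically at the pixel-mean reflectances.
            """
            f = retrieval
            d2_vis = (f(r_vis_mean + h, r_swir_mean) - 2 * f(r_vis_mean, r_swir_mean)
                      + f(r_vis_mean - h, r_swir_mean)) / h**2
            d2_swir = (f(r_vis_mean, r_swir_mean + h) - 2 * f(r_vis_mean, r_swir_mean)
                       + f(r_vis_mean, r_swir_mean - h)) / h**2
            d2_cross = (f(r_vis_mean + h, r_swir_mean + h) - f(r_vis_mean + h, r_swir_mean - h)
                        - f(r_vis_mean - h, r_swir_mean + h) + f(r_vis_mean - h, r_swir_mean - h)) / (4 * h**2)
            return 0.5 * (d2_vis * var_vis + d2_swir * var_swir) + d2_cross * cov

        # Stand-in retrieval with the qualitative behaviour of r_e(R_vis, R_swir):
        # nonlinear in both reflectances (purely illustrative, not a real LUT retrieval).
        def effective_radius(r_vis, r_swir):
            return 30.0 * (1.0 - r_swir) ** 1.5 / (0.2 + r_vis)

        bias = taylor_bias(effective_radius,
                           r_vis_mean=0.5, r_swir_mean=0.3,
                           var_vis=0.01, var_swir=0.004, cov=0.005)
        print(f"estimated sub-pixel bias in r_e: {bias:+.3f} micron")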

  15. A BAND SELECTION METHOD FOR SUB-PIXEL TARGET DETECTION IN HYPERSPECTRAL IMAGES BASED ON LABORATORY AND FIELD REFLECTANCE SPECTRAL COMPARISON

    Directory of Open Access Journals (Sweden)

    S. Sharifi hashjin

    2016-06-01

    Full Text Available In recent years, the development of target detection algorithms for hyperspectral images has received growing interest. In comparison to the classification field, few studies have been done on dimension reduction or band selection for target detection in hyperspectral images. This study presents a simple method to remove bad bands from the images in a supervised manner for sub-pixel target detection. The proposed method is based on comparing field and laboratory spectra of the target of interest to detect the bad bands. For evaluation, the target detection blind test dataset is used in this study. Experimental results show that the proposed method can improve the efficiency of the two well-known target detection methods, ACE and CEM.
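
    The band-removal rule described — discard bands in which the field spectrum of the target departs strongly from its laboratory spectrum — can be sketched in a few lines (hypothetical spectra and a hypothetical threshold; the paper's exact criterion may differ):

        import numpy as np

        def select_bands(lab_spectrum, field_spectrum, rel_threshold=0.15):
            """Keep bands whose field reflectance stays close to the laboratory reflectance.

            Bands where the relative difference exceeds the threshold (e.g. atmospheric
            absorption or badly calibrated channels) are flagged for removal.
            """
            rel_diff = np.abs(field_spectrum - lab_spectrum) / np.maximum(lab_spectrum, 1e-6)
            return rel_diff <= rel_threshold

        # Hypothetical 50-band spectra of the same target measured in the lab and in the field.
        rng = np.random.default_rng(3)
        lab = 0.3 + 0.2 * np.sin(np.linspace(0, 3, 50))
        field = lab * (1.0 + rng.normal(0.0, 0.05, 50))
        field[[12, 13, 30]] *= 0.4           # a few badly affected bands

        mask = select_bands(lab, field)
        print("bands removed:", np.where(~mask)[0])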

  16. Simulation of plume rise: Study the effect of stably stratified turbulence layer on the rise of a buoyant plume from a continuous source by observing the plume centroid

    Science.gov (United States)

    Bhimireddy, Sudheer Reddy; Bhaganagar, Kiran

    2016-11-01

    Buoyant plumes are common in the atmosphere when there is a difference in temperature or density between a source and its ambience. In a stratified environment, plume rise continues as long as a buoyancy difference exists between the plume and the ambience. In a calm, no-wind ambience, this plume rise is purely vertical, and entrainment happens because of the relative motion of the plume with respect to the ambience and also because of ambient turbulence. In this study, the plume centroid is defined as the plume's center of mass and is calculated from the kinematic equation which relates the rate of change of the centroid position to the plume rise velocity. The parameters used to describe the plume are the plume radius, the plume's vertical velocity and the local buoyancy of the plume. The plume rise velocity is calculated from the mass, momentum and heat conservation equations in their differential form. Our study focuses on the entrainment velocity, as it determines the extent of plume growth. This entrainment velocity is made up of a sum of fractions of the plume's relative velocity and the ambient turbulence. From the results, we studied the effect of turbulence on plume growth by observing the variation of the plume radius at different heights and the centroid height reached before the plume loses its buoyancy.
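
    A deliberately simplified set of equations in the spirit of such integral plume models — entrainment growing the radius, momentum driven by buoyancy, buoyancy eroded by stratification and dilution, and the centroid height following the kinematic equation — can be integrated as follows (a toy illustration with assumed coefficients, not the authors' formulation):

        import numpy as np
        from scipy.integrate import solve_ivp

        ALPHA = 0.1        # entrainment coefficient (fraction of the element's own velocity), assumed
        N_BV = 0.01        # ambient Brunt-Vaisala frequency, s^-1 (stable stratification), assumed

        def plume_element(t, state):
            """Toy Lagrangian model of a buoyant element: radius, velocity, buoyancy, centroid height."""
            radius, w, buoyancy, height = state
            entrain = ALPHA * abs(w)                                        # entrainment velocity
            d_radius = entrain
            d_w = buoyancy - 3.0 * entrain / radius * w                     # momentum, diluted by entrainment
            d_buoyancy = -N_BV**2 * w - 3.0 * entrain / radius * buoyancy   # stratification + dilution
            d_height = w                                                    # kinematic equation for the centroid
            return [d_radius, d_w, d_buoyancy, d_height]

        initial = [10.0, 0.5, 0.1, 0.0]            # R (m), w (m/s), reduced gravity (m/s^2), z (m)
        sol = solve_ivp(plume_element, (0.0, 2000.0), initial, max_step=1.0)

        radius, w, buoyancy, height = sol.y
        print(f"maximum centroid height ~ {height.max():.0f} m, "
              f"final radius ~ {radius[-1]:.0f} m")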

  17. Fusion of D-InSAR and sub-pixel image correlation measurements for coseismic displacement field estimation: application to the Kashmir earthquake (2005)

    OpenAIRE

    Yan, Y.; Trouvé, E.; Pinel, Virginie; Mauris, G.; Pathier, E.; Galichet, S.

    2012-01-01

    In geophysics, the uncertainty associated with model parameters or displacement measurements plays a crucial role in the understanding of geophysical phenomena. An emerging way to reduce the geodetic parameter uncertainty is to combine a large number of data provided by SAR images. However, the measurements by radar imagery are subject to both random and epistemic uncertainties. Probability theory is known as the appropriate theory for random uncertainty, but questionable for epistemic uncer...

  18. Correlation of centroid-based breast size, surface-based breast volume, and asymmetry-score-based breast symmetry in three-dimensional breast shape analysis

    Directory of Open Access Journals (Sweden)

    Henseler, Helga

    2016-06-01

    Full Text Available Objective: The aim of this study was to investigate correlations among the size, volume, and symmetry of the female breast after reconstruction based on previously published data. Methods: The centroid, namely the geometric center of a three-dimensional (3D) breast-landmark-based configuration, was used to calculate the size of the breast. The surface data of the 3D breast images were used to measure the volume. Breast symmetry was assessed by the Procrustes analysis method, which is based on the 3D coordinates of the breast landmarks to produce an asymmetry score. The relationship among the three measurements was investigated. For this purpose, the data of 44 patients who underwent unilateral breast reconstruction with an extended latissimus dorsi flap were analyzed. The breast was captured by a validated 3D imaging system using multiple cameras. Four landmarks on each breast and two landmarks marking the midline were used. Results: There was a significant positive correlation between the centroid-based breast size of the unreconstructed breast and the measured asymmetry (p=0.024; correlation coefficient, 0.34). There was also a significant relationship between the surface-based breast volume of the unaffected side and the overall asymmetry score (p<0.001; correlation coefficient, 0.556). An increase in size and especially in volume of the unreconstructed breast correlated positively with an increase in breast asymmetry in a linear relationship. Conclusions: In breast shape analysis, the use of more detailed surface-based data should be preferred to centroid-based size data. As the breast size increases, the latissimus dorsi flap for unilateral breast reconstruction increasingly falls short in terms of matching the healthy breast in a linear relationship. Other reconstructive options should be considered for larger breasts. Generally, plastic surgeons should view the two breasts as a single unit when assessing breast aesthetics and not view each

  19. The impact of the in-orbit background and the X-ray source intensity on the centroiding accuracy of the Swift X-ray telescope

    CERN Document Server

    Ambrosi, R M; Hill, J; Cheruvu, C; Abbey, A F; Short, A D T

    2002-01-01

    The optical components of the Swift Gamma Ray Burst Explorer X-ray Telescope (XRT), consisting of the JET-X spare flight mirror and a charge coupled device of the type used in the EPIC program, were used in a re-calibration study carried out at the Panter facility, which is part of the Max Planck Institute for Extraterrestrial Physics. The objective of this study was to check the focal length and the off-axis performance of the mirrors and to show that the half energy width (HEW) of the on-axis point spread function (PSF) was of the order of 16 arcsec at 1.5 keV (Nucl. Instr. and Meth. A 488 (2002) 543; SPIE 4140 (2000) 64) and that a centroiding accuracy better than 1 arcsec could be achieved within the 4 arcmin sampling area designated by the Burst Alert Telescope (Nucl. Instr. and Meth. A 488 (2002) 543). The centroiding accuracy of the Swift XRT's optical components was tested as a function of distance from the focus and off-axis position of the PSF (Nucl. Instr. and Meth. A 488 (2002) 543). The presence ...

  20. Statistical Results Concerning the Precision of the Methods of Correlation and Interpolation Sub-Pixel Used in Video PIV

    Science.gov (United States)

    1998-08-27

    ... particles that lead to false displacement estimates. ISL - R119/98 49 Numerical simulations 4.3.5 Convergence of the displacement error after ... by the Gaussian-function fitting method in autocorrelation (fig. 28e) and in cross-correlation (fig. 28f). These new curves ...

  1. Anatomy guided automated SPECT renal seed point estimation

    Science.gov (United States)

    Dwivedi, Shekhar; Kumar, Sailendra

    2010-04-01

    Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting ROIs from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, and the challenge is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating the seed point location of both kidneys using the anatomical location of the kidneys and bladder. The motivation for this work is based on the premise that the anatomical location of the bladder relative to the kidneys does not differ much between patients. A model is generated based on manual segmentation of the bladder and both kidneys on 10 patient datasets (including sum and max images). The centroid is estimated for the manually segmented bladder and kidneys. The relatively easier bladder segmentation is followed by feeding the bladder centroid coordinates into the model to generate seed points for the kidneys. The percentage errors observed in the centroid coordinates of the organs, between ground truth and the values estimated with our approach, are acceptable. Percentage errors of approximately 1%, 6% and 2% are observed in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, the ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.
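
    The centroid-plus-regression idea can be sketched as below; the masks, training centroids and offsets are synthetic placeholders rather than the study's patient data:

```python
import numpy as np

def centroid(mask):
    """Unweighted centroid (x, y, z) of a binary segmentation mask."""
    coords = np.argwhere(mask)
    return coords.mean(axis=0)

# --- training: centroids from manually segmented bladder and kidneys (synthetic here) ---
rng = np.random.default_rng(1)
bladder_cent = rng.uniform(40, 60, size=(10, 3))
left_kidney_cent = bladder_cent + np.array([-15.0, -5.0, 25.0]) + rng.normal(0, 1.0, (10, 3))

# Linear regression (with intercept) mapping the bladder centroid to a kidney seed point.
X = np.hstack([bladder_cent, np.ones((10, 1))])
coef, *_ = np.linalg.lstsq(X, left_kidney_cent, rcond=None)

# --- inference: segment the bladder in a new study, then predict the kidney seed point ---
new_mask = np.zeros((96, 96, 96), dtype=bool)
new_mask[45:55, 48:58, 50:60] = True          # stand-in bladder segmentation
b = centroid(new_mask)
seed = np.append(b, 1.0) @ coef
print("predicted left-kidney seed point:", np.round(seed, 1))
```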

  2. APPLICATION OF A LATTICE GAS MODEL FOR SUBPIXEL PROCESSING OF LOW-RESOLUTION IMAGES OF BINARY STRUCTURES

    Directory of Open Access Journals (Sweden)

    Zbisław Tabor

    2011-05-01

    Full Text Available In this study, an algorithm based on a lattice gas model is proposed as a tool for enhancing the quality of low-resolution images of binary structures. The analyzed low-resolution gray-level images are replaced with binary images in which the pixel size is decreased. The intensity in the pixels of these new images is determined by the corresponding gray-level intensities in the original low-resolution images. The white-phase pixels in the binary images are then assumed to be particles that interact with one another, interact with a properly defined external field, and are allowed to diffuse. The evolution is driven towards a state with maximal energy by the Metropolis algorithm. This state is used to estimate the imaged object. The performance of the proposed algorithm is compared with that of local and global thresholding methods.
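
    A minimal sketch of a Metropolis-style update for such a lattice-gas refinement is given below; the energy terms, temperature and coupling weights are illustrative and do not reproduce the paper's exact Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-resolution gray-level image (values taken as the white-phase fraction per coarse pixel).
lowres = np.array([[0.1, 0.8], [0.6, 0.2]])
up = 8                                              # sub-pixel refinement factor
field = np.kron(lowres, np.ones((up, up)))          # external field from the gray levels
n = field.shape[0]

# Initial binary image on the fine grid.
state = (rng.random(field.shape) < field).astype(int)

def energy_site(img, i, j):
    """External-field term plus nearest-neighbour coupling (illustrative weights)."""
    nb = img[(i - 1) % n, j] + img[(i + 1) % n, j] + img[i, (j - 1) % n] + img[i, (j + 1) % n]
    return img[i, j] * (2.0 * field[i, j] + 0.5 * nb)

T = 0.1                                             # fictive temperature
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
for _ in range(20000):
    # Propose a diffusion step: swap a site with one of its four neighbours.
    i, j = rng.integers(0, n, 2)
    di, dj = moves[rng.integers(4)]
    i2, j2 = (i + di) % n, (j + dj) % n
    if state[i, j] == state[i2, j2]:
        continue
    before = energy_site(state, i, j) + energy_site(state, i2, j2)
    state[i, j], state[i2, j2] = state[i2, j2], state[i, j]
    after = energy_site(state, i, j) + energy_site(state, i2, j2)
    dE = after - before
    # Drive towards maximal energy: accept increases, otherwise accept with Boltzmann probability.
    if dE < 0 and rng.random() >= np.exp(dE / T):
        state[i, j], state[i2, j2] = state[i2, j2], state[i, j]   # undo the swap

print("white-phase fraction per coarse cell:")
print(state.reshape(2, up, 2, up).mean(axis=(1, 3)).round(2))
```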

  3. The generalized centroid difference method for picosecond sensitive determination of lifetimes of nuclear excited states using large fast-timing arrays

    Energy Technology Data Exchange (ETDEWEB)

    Régis, J.-M., E-mail: regis@ikp.uni-koeln.de [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Mach, H. [Departamento de Física Atómica y Nuclear, Universidad Complutense, 28040 Madrid (Spain); Simpson, G.S. [Laboratoire de Physique Subatomique et de Cosmologie Grenoble, 53, rue des Martyrs, 38026 Grenoble Cedex (France); Jolie, J.; Pascovici, G.; Saed-Samii, N.; Warr, N. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Bruce, A. [School of Computing, Engineering and Mathematics, University of Brighton, Lewes Road, Brighton BN2 4GJ (United Kingdom); Degenkolb, J. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Fraile, L.M. [Departamento de Física Atómica y Nuclear, Universidad Complutense, 28040 Madrid (Spain); Fransen, C. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Ghita, D.G. [Horia Hulubei National Institute for Physics and Nuclear Engineering, 77125 Bucharest (Romania); and others

    2013-10-21

    A novel method for direct electronic “fast-timing” lifetime measurements of nuclear excited states via γ–γ coincidences using an array equipped with N∈N equally shaped very fast high-resolution LaBr3(Ce) scintillator detectors is presented. Analogous to the mirror-symmetric centroid difference method, the generalized centroid difference method provides two independent “start” and “stop” time spectra obtained by a superposition of the N(N−1) γ–γ time difference spectra of the N-detector fast-timing system. The two fast-timing array time spectra correspond to a forward and reverse gating of a specific γ–γ cascade. Provided that the energy response and the electronic time pick-off of the detectors are almost equal, a mean prompt response difference between start and stop events is calibrated and used as a single correction for lifetime determination. The combined fast-timing array's mean γ–γ time-walk characteristics can be determined for 40 keV

  4. Radar subpixel-scale rainfall variability and uncertainty: lessons learned from observations of a dense rain-gauge network

    Directory of Open Access Journals (Sweden)

    N. Peleg

    2013-06-01

    Full Text Available Runoff and flash flood generation are very sensitive to rainfall's spatial and temporal variability. The increasing use of radar and satellite data in hydrological applications, due to the sparse distribution of rain gauges over most catchments worldwide, requires furthering our knowledge of the uncertainties of these data. In 2011, a new super-dense network of rain gauges containing 14 stations, each with two side-by-side gauges, was installed within a 4 km² study area near Kibbutz Galed in northern Israel. This network was established for a detailed exploration of the uncertainties and errors regarding rainfall variability within a common pixel size of data obtained from remote sensing systems for timescales of 1 min to daily. In this paper, we present the analysis of the first year's record collected from this network and from the Shacham weather radar, located 63 km from the study area. The gauge–rainfall spatial correlation and uncertainty were examined along with the estimated radar error. The nugget parameter of the inter-gauge rainfall correlations was high (0.92) on the 1 min scale and increased as the timescale increased. The variance reduction factor (VRF), representing the uncertainty from averaging a number of rain stations per pixel, ranged from 1.6% for the 1 min timescale to 0.07% for the daily scale. It was also found that at least three rain stations are needed to adequately represent the rainfall (VRF < 5%) on a typical radar pixel scale. The difference between radar and rain gauge rainfall was mainly attributed to radar estimation errors, while the gauge sampling error contributed up to 20% to the total difference. The ratio of radar rainfall to gauge-areal-averaged rainfall, expressed by the error distribution scatter parameter, decreased from 5.27 dB for the 3 min timescale to 3.21 dB for the daily scale. The analysis of the radar errors and uncertainties suggest that a temporal scale of at least 10 min should be used for

  5. Aplicação do delineamento simplex-centroide no estudo da cinética da oxidação de biodiesel B100 em mistura com antioxidantes sintéticos The simplex-centroid design applied to study of the kinetics of the oxidation of B100 biodiesel in blend with synthetic antioxidants

    Directory of Open Access Journals (Sweden)

    Dionísio Borsato

    2010-01-01

    Full Text Available Antioxidants are an alternative to prevent or slow the degradation of the biofuel. In this study, the oxidative stability of B100 biodiesel from soybean oil was evaluated in the presence of three commercial synthetic antioxidants, butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butylhydroquinone (TBHQ), pure or blended, using a simplex-centroid mixture experimental design. The reaction order and rate constant were also calculated for all tests. The treatment containing pure TBHQ proved to be the most effective, as shown by the design, the optimum mix obtained and the rate constant. Binary and ternary mixtures containing TBHQ also showed an appreciable antioxidant effect.

  6. An Automated Approach for Sub-Pixel Registration of Landsat-8 Operational Land Imager (OLI) and Sentinel-2 Multi Spectral Instrument (MSI) Imagery

    Directory of Open Access Journals (Sweden)

    Lin Yan

    2016-06-01

    -points were extracted and had affine-transformation root-mean-square error fits of approximately 0.3 pixels at 10 m resolution and dense-matching prediction errors of similar magnitude. These results and visual assessment of the affine transformed data indicate that the methodology provides sub-pixel registration performance required for meaningful Landsat-8 OLI and Sentinel-2A MSI data comparison and combined data applications.

  7. Combined effect of carnosol, rosmarinic acid and thymol on the oxidative stability of soybean oil using a simplex centroid mixture design.

    Science.gov (United States)

    Saoudi, Salma; Chammem, Nadia; Sifaoui, Ines; Jiménez, Ignacio A; Lorenzo-Morales, Jacob; Piñero, José E; Bouassida-Beji, Maha; Hamdi, Moktar; L Bazzocchi, Isabel

    2017-08-01

    Oxidation taking place during the use of oil leads to the deterioration of both nutritional and sensorial qualities. Natural antioxidants from herbs and plants are rich in phenolic compounds and could therefore be more efficient than synthetic ones in preventing lipid oxidation reactions. This study was aimed at the valorization of Tunisian aromatic plants and their active compounds as new sources of natural antioxidant preventing oil oxidation. Carnosol, rosmarinic acid and thymol were isolated from Rosmarinus officinalis and Thymus capitatus by column chromatography and were analyzed by nuclear magnetic resonance. Their antioxidant activities were measured by DPPH, ABTS and FRAP assays. These active compounds were added to soybean oil in different proportions using a simplex-centroid mixture design. Antioxidant activity and oxidative stability of oils were determined before and after 20 days of accelerated oxidation at 60 °C. Results showed that bioactive compounds are effective in maintaining oxidative stability of soybean oil. However, the binary interaction of rosmarinic acid and thymol caused a reduction in antioxidant activity and oxidative stability of soybean oil. Optimum conditions for maximum antioxidant activity and oxidative stability were found to be an equal ternary mixture of carnosol, rosmarinic acid and thymol. © 2016 Society of Chemical Industry.

  8. Using a simplex centroid to study the effects of pH, temperature and lactulose on the viability of Bifidobacterium animalis subsp. lactis in a model system.

    Science.gov (United States)

    Altieri, Clelia; Bevilacqua, Antonio; Perricone, Marianne; Sinigaglia, Milena

    2013-10-01

    This paper reports on the effects of lactulose (0-10 g/l) on Bifidobacterium animalis subsp. lactis, along with the influence of pH (4.5-8.5) and temperature (15-45 °C); the three factors were combined through a simplex centroid. The experiments were performed in a laboratory medium, and the cell-count data were modeled through the Weibull equation to evaluate the first reduction time, the shape parameter and the death time. These fitting parameters were used as input values to build a desirability profile and a second-order model through the DoE (Design of Experiments) approach. The medium containing glucose was used as a control. The prebiotic enhanced the viability of the microbial target by prolonging the first reduction time and inducing a shoulder phase in the death kinetics; moreover, in some combinations the statistical analysis highlighted a kind of interaction with the pH. Copyright © 2013. Published by Elsevier Ltd.
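
    The Weibull death-kinetics fit can be sketched with the commonly used parameterization log10 N(t) − log10 N0 = −(t/δ)^p, where δ is the first reduction time and p the shape parameter; the counts below are synthetic, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, delta, p):
    """Weibull survival model: log10 N(t) - log10 N0 = -(t/delta)**p."""
    return -(t / delta) ** p

# Synthetic viable counts, as log10 CFU/ml relative to the initial count.
t = np.array([0, 2, 4, 6, 8, 10, 12, 14], dtype=float)     # days
log_reduction = np.array([0.0, -0.1, -0.4, -0.9, -1.6, -2.4, -3.3, -4.4])

(delta, p), _ = curve_fit(weibull_log_survival, t, log_reduction,
                          p0=(5.0, 1.5), bounds=(0, np.inf))
time_4d = delta * 4.0 ** (1.0 / p)       # time to a 4-log10 reduction ("death time")
print(f"first reduction time delta = {delta:.2f} d, shape p = {p:.2f}, 4D time = {time_4d:.2f} d")
```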

  9. Remote sensing estimates of impervious surfaces for hydrological modelling of changes in flood risk during high-intensity rainfall events

    DEFF Research Database (Denmark)

    Kaspersen, Per Skougaard; Fensholt, Rasmus; Drews, Martin

    This paper addresses the accuracy and applicability of medium resolution (MR) remote sensing estimates of impervious surfaces (IS) for urban land cover change analysis. Landsat-based vegetation indices (VI) are found to provide fairly accurate measurements of sub-pixel imperviousness for urban areas at different geographical locations within Europe, and to be applicable for cities with diverse morphologies and dissimilar climatic and vegetative conditions. Detailed data on urban land cover changes can be used to examine the diverse environmental impacts of past and present urbanisation...

  10. Kinematic fault slip model from joint inversion of teleseismic, GPS, InSAR and subpixel-correlation measurements of the 2010 El Mayor-Cucapah earthquake and postseismic deformation (Invited)

    Science.gov (United States)

    Fielding, E. J.; Wei, S.; Leprince, S.; Sladen, A.; Simons, M.; Avouac, J.; Briggs, R. W.; Hudnut, K. W.; Helmberger, D. V.; Hensley, S.; Hauksson, E.; Gonzalez-Garcia, J. J.; Herring, T.; Akciz, S. O.

    2010-12-01

    We use interferometric analysis of synthetic aperture radar (SAR) images (InSAR) and pixel tracking by subpixel correlation of SAR and optical images to map the fault ruptures and surface deformation of the 4 April 2010 El Mayor-Cucapah earthquake (Mw 7.2) in Baja California, Mexico. We then combine sampled InSAR and subpixel correlation results with GPS offsets at PBO stations and teleseismic waveforms in a joint inversion to produce a kinematic fault slip model. Pixel-tracking measurements from SPOT 2.5 m panchromatic images and from Envisat ASAR and ALOS PALSAR images measure large ground displacements close to fault ruptures, with a strong discontinuity where the rupture reached the surface. Optical image subpixel correlation measures horizontal displacements in both the east-west and north-south directions and shows the earthquake ruptured the Pescadores Fault in the southern Sierra Cucapah and the Borrego Fault in the central and northern edge of the mountain range. At the south end of the Sierra Cucapah, the fault ruptures fork into two subparallel strands with substantial slip on both visible. SAR image subpixel correlation measures horizontal deformation in the along-track direction of the satellite (approximately north or south) and in the radar line-of-sight direction. SAR along-track offsets, especially on ALOS images, show that there is a large amount of right-lateral slip (1-3 m) on a previously unmapped system of faults extending about 60 km to the southeast of the epicenter beneath the Colorado River Delta named the Indiviso Fault system. Aftershocks also extend approximately the same distance to the southeast. InSAR analyses of Envisat, ALOS and UAVSAR images, measure the surface displacements in the same radar line-of-sight as the range pixel tracking, but with much greater precision. Combination of SAR images from different directions allows the separation of the vertical and east components of the deformation, revealing the large normal fault
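
    The sub-pixel correlation step itself can be illustrated with an upsampled phase-correlation stand-in on a synthetic image pair (the actual SPOT and SAR offset-tracking chain used for this earthquake is considerably more involved):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift
from skimage.registration import phase_cross_correlation

# Synthetic "pre" image and a copy displaced by a known sub-pixel offset.
rng = np.random.default_rng(2)
pre = gaussian_filter(rng.random((256, 256)), sigma=2.0)
true_offset = (3.4, -1.7)                                 # (rows, cols) in pixels
post = shift(pre, true_offset, order=3, mode="wrap")

# Upsampled cross-correlation recovers the offset to a fraction of a pixel.
est, error, _ = phase_cross_correlation(pre, post, upsample_factor=100)
print("shift registering post onto pre:", est)            # ~[-3.4, 1.7]
```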

  11. Application example: Preliminary Results of ISOLA use to find moment tensor solutions and centroid depth applied to aftershocks of Mw=8.8 February 27 2010, Maule Earthquake

    Science.gov (United States)

    Nacif, S. V.; Sanchez, M. A.

    2013-05-01

    We selected seven aftershocks of the Maule earthquake between 33.5°S and 35°S, recorded from May to September, for single-source inversion. The data were provided by the XY Chile Ramp Experiment*, which was deployed after the great Maule earthquake. Waveform data are from 13 broad-band stations chosen from the 58 broad-band stations deployed by IRIS-PASCAL from April to September 2010. Stations are located above the normal-subduction section south of ~33.5°S. Events were located with an iterative code called Hypocenter, using the one-dimensional local model obtained above for the forearc region between 33°S and 35°S. We used ISOLA, a Fortran code with a Matlab interface, to obtain moment tensor solutions and the optimum position and time of the subevents. Depth values obtained by a grid search of the centroid position fall within a range compatible with the interplate seismogenic zone. Double-couple focal mechanism solutions (Figure 1) show four thrust events that can be associated with that zone; however, only one of them has strike, dip and rake of 358°, 27° and 101°, respectively, as would be expected for the interplate seismogenic zone. The other three events show strike-slip and normal double-couple focal mechanism solutions (Figure 1), which makes it difficult to associate them with the contact between the Nazca and South American plates. Nevertheless, their depths still allow the possibility of an origin there. * The facilities of the IRIS Data Management System, and specifically the IRIS Data Management Center, were used for access to the waveforms, metadata and products required in this study. The IRIS DMS is funded through the National Science Foundation, specifically the GEO Directorate, through the Instrumentation and Facilities Program of the National Science Foundation under Cooperative Agreement EAR-0552316. Some activities are supported by the National Science Foundation EarthScope Program under Cooperative Agreement EAR-0733069

  12. Cellular Phone Towers, Cell towers developed for Appraiser's Department in 2003. Location was based upon parcel centroids, and corrected to orthophotography. Probably includes towers other than cell towers (uncertain). Not published., Published in 2003, 1:1200 (1in=100ft) scale, Sedgwick County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Cellular Phone Towers dataset current as of 2003. Cell towers developed for Appraiser's Department in 2003. Location was based upon parcel centroids, and corrected...

  13. Targeted quantitative bioanalysis in plasma using liquid chromatography/high-resolution accurate mass spectrometry: an evaluation of global selectivity as a function of mass resolving power and extraction window, with comparison of centroid and profile modes.

    Science.gov (United States)

    Xia, Yuan-Qing; Lau, Jim; Olah, Timothy; Jemal, Mohammed

    2011-10-15

    There is a growing interest in exploring the use of liquid chromatography coupled with full-scan high resolution accurate mass spectrometry (LC/HRMS) in bioanalytical laboratories as an alternative to the current practice of using LC coupled with tandem mass spectrometry (LC/MS/MS). Therefore, we have investigated the theoretical and practical aspects of LC/HRMS as it relates to the quantitation of drugs in plasma, which is the most commonly used matrix in pharmacokinetics studies. In order to assess the overall selectivity of HRMS, we evaluated the potential interferences from endogenous plasma components by analyzing acetonitrile-precipitated blank human plasma extract using an LC/HRMS system under chromatographic conditions typically used for LC/MS/MS bioanalysis with the acquisition of total ion chromatograms (TICs) using 10 k and 20 k resolving power in both profile and centroid modes. From each TIC, we generated extracted ion chromatograms (EICs) of the exact masses of the [M + H]+ ions of 153 model drugs using different mass extraction windows (MEWs) and determined the number of plasma endogenous peaks detected in each EIC. Fewer endogenous peaks are detected using higher resolving power, narrower MEW, and centroid mode. A 20 k resolving power can be considered adequate for the selective determination of drugs in plasma. To achieve desired analyte EIC selectivity and simultaneously avoid missing data points in the analyte EIC peak, the MEW used should not be too wide or too narrow and should be a small fraction of the full width at half maximum (FWHM) of the profile mass peak. It is recommended that the optimum MEW be established during method development under the specified chromatographic and sample preparation conditions. In general, the optimum MEW, typically ≤ ±20 ppm for 20 k resolving power, is smaller for the profile mode when compared with the centroid mode. Copyright © 2011 John Wiley & Sons, Ltd.

  14. Comparison of the performances of land use regression modelling and dispersion modelling in estimating small-scale variations in long-term air pollution concentrations in a Dutch urban area.

    NARCIS (Netherlands)

    Beelen, R.M.J.; Voogt, M.; Duyzer, J.; Zandveld, P.; Hoek, G.

    2010-01-01

    The performance of a Land Use Regression (LUR) model and a dispersion model (URBIS - URBis Information System) was compared in a Dutch urban area. For the Rijnmond area, i.e. Rotterdam and surroundings, nitrogen dioxide (NO2) concentrations for 2001 were estimated for nearly 70 000 centroids of a

  15. A new technique for fire risk estimation in the wildland urban interface

    Science.gov (United States)

    Dasgupta, S.; Qu, J. J.; Hao, X.

    A novel technique based on the physical variable of pre-ignition energy is proposed for assessing fire risk in the Grassland-Urban-Interface. The physical basis lends meaning, a site- and season-independent applicability, and possibilities for computing spread rates and ignition probabilities, features that contemporary fire risk indices usually lack. The method requires estimates of grass moisture content and temperature. A constrained radiative-transfer inversion scheme on MODIS NIR-SWIR reflectances, which reduces solution ambiguity, is used for grass moisture retrieval, while MODIS land surface temperature and emissivity products are used for retrieving grass temperature. Subpixel urban contamination of the MODIS reflective and thermal signals over a Grassland-Urban-Interface pixel is corrected using periodic estimates of urban influence from high spatial resolution ASTER

  16. Comparação dos métodos de determinação da estabilidade oxidativa de biodiesel B100, em mistura com antioxidantes sintéticos: aplicação do delineamento simplex-centroide com variável de processo Comparison of methods for determination of oxidative stability of B100 biodiesel mixed with synthetic antioxidants: application of simplex-centroid design with process variable

    Directory of Open Access Journals (Sweden)

    João Rafael de Moraes Cini

    2013-01-01

    Full Text Available The Rancimat and accelerated stove tests were used to determine the oxidative stability of B100 biodiesel mixed with synthetic antioxidants. The predictive equations, with process variable, were obtained by applying a simplex-centroid design. Regardless of the antioxidant used, all assays carried out with the accelerated stove test presented storage time longer than 177.88 d, the greatest value obtained by applying the Rancimat test. The t test, applied to the parameters containing the process variable, showed a statistically significant difference (at the level of 5% between the methods used.

  17. Co-seismic displacements from differencing and sub-pixel correlation of multi-temporal LiDAR and cadastral surveys: application to the Greendale Fault, Canterbury, New Zealand

    Science.gov (United States)

    Duffy, B. G.; Van Dissen, R.; Quigley, M.; Litchfield, N. J.; McInnes, C.; Leprince, S.; Barrell, D.; Stahl, T. A.; Bilderback, E. L.

    2011-12-01

    Surface rupture on the dextral strike-slip Greendale fault during the 2010 Mw 7.1 Darfield (Canterbury), earthquake in New Zealand terminated in a releasing bend at the western end of the fault. Our first-ever co-seismic application of multi-temporal aerial LiDAR, coupled with cadastral surveying, real time kinematic GPS scarp profiling and offset mapping provides unprecedented documentation of surface displacements at the western end of the Greendale fault, particularly at the transition into the releasing bend. Cadastral trilateration data from the northern end of the releasing bend area demonstrate that the hanging wall (NE) side of the fault moved 1.5 m to the southeast while the footwall (SW) side of the fault moved 0.6 m to the southwest. This resulted in an oblique transtensional net slip of 2.5 m. At the southern end of the releasing bend, the north-side-down transtensional structure transitions into a north-side down transpressional structure. High-resolution absolute vertical motions associated with this transition, as well as relationships of drainage morphology to fault geometry, are captured by differencing of pre- and post-fault LiDAR. Vertical differencing reveals the distribution of vertical offsets, with some scarps defined that have vertical displacement gradients of only 1:1000. The geomorphology of these subtle vertical displacements reveals that the transition into the releasing bend is accommodated by a restraining stepover. Sub-pixel correlation of the pre-and post-earthquake LiDAR rasters using COSI-Corr (http://www.tectonics.caltech.edu/slip_history/spot_coseis/index.html) additionally reveal E-W shortening of approximately 0.8 m across a discontinuity that represents one side of the restraining stepover. This is consistent with the cadastral survey results. Our results demonstrate the utility of multi-temporal LiDAR for documenting both the vertical and horizontal components of co-seismic deformation.

  18. The Topological Weighted Centroid (TWC): A topological approach to the time-space structure of epidemic and pseudo-epidemic processes

    Science.gov (United States)

    Buscema, Massimo; Massini, Giulia; Sacco, Pier Luigi

    2018-02-01

    This paper offers the first systematic presentation of the topological approach to the analysis of epidemic and pseudo-epidemic spatial processes. We introduce the basic concepts and proofs, and test the approach on a diverse collection of case studies of historically documented epidemic and pseudo-epidemic processes. The approach is found to consistently provide reliable estimates of the structural features of epidemic processes, and to provide useful analytical insights and interpretations of fragmentary pseudo-epidemic processes. Although this analysis has to be regarded as preliminary, we find that the approach's basic tenets are strongly corroborated by this first test and warrant future research in this vein.

  19. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.

  20. A New Method to Define the VI-Ts Diagram Using Subpixel Vegetation and Soil Information: A Case Study over a Semiarid Agricultural Region in the North China Plain.

    Science.gov (United States)

    Sun, Zhigang; Wang, Qinxue; Matsushita, Bunkei; Fukushima, Takehiko; Ouyang, Zhu; Watanabe, Masataka

    2008-10-07

    The VI-Ts diagram determined by the scatter points of the vegetation index (VI) and surface temperature (Ts) has been widely applied in land surface studies. In the VI-Ts diagram, dry point is defined as a pixel with maximum Ts and minimum VI, while wet point is defined as a pixel with minimum Ts and maximum VI. If both dry and wet points can be obtained simultaneously, a triangular VI-Ts diagram can be readily defined. However, traditional methods cannot define an ideal VI-Ts diagram if there are no full ranges of land surface moisture and VI, such as during rainy season or in a period with a narrow VI range. In this study, a new method was proposed to define the VI-Ts diagram based on the subpixel vegetation and soil information, which was independent of the full ranges of land surface moisture and VI. In this method, a simple approach was firstly proposed to decompose Ts of a given pixel into two components, the surface temperatures of soil (Tsoil) and vegetation (Tveg), by means of Ts and VI information of neighboring pixels. The minimum Tveg and maximum Tsoil were then used to determine the wet and dry points respectively within a given sampling window. This method was tested over a 30 km × 30 km semiarid agricultural area in the North China Plain through 2003 using Advanced Spaceborne Thermal Emission Reflection Radiometer (ASTER) and MODerate-resolution Imaging Spectroradiometer (MODIS) data. The wet and dry points obtained from our proposed method and from a traditional method were compared with those obtained from ground data within the sampling window with the 30 km × 30 km size. Results show that Tsoil and Tveg can be obtained with acceptable accuracies, and that our proposed method can define reasonable VI-Ts diagrams over a semiarid agricultural region throughout the whole year, even for both cases of rainy season and narrow range of VI.
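
    The decomposition step can be illustrated with the usual linear mixing assumption Ts ≈ fv·Tveg + (1 − fv)·Tsoil solved by least squares over a window of neighbouring pixels; the window values below are synthetic, and the paper's exact procedure may differ in detail:

```python
import numpy as np

def decompose_ts(ts_window, fv_window):
    """Estimate component temperatures from Ts = fv*Tveg + (1 - fv)*Tsoil over a window."""
    fv = fv_window.ravel()
    A = np.column_stack([fv, 1.0 - fv])           # unknowns: [Tveg, Tsoil]
    (tveg, tsoil), *_ = np.linalg.lstsq(A, ts_window.ravel(), rcond=None)
    return tveg, tsoil

# Synthetic 3x3 window: fractional vegetation cover (from VI) and observed Ts [K].
fv = np.array([[0.2, 0.5, 0.8],
               [0.3, 0.6, 0.7],
               [0.1, 0.4, 0.9]])
tveg_true, tsoil_true = 301.0, 318.0
ts = fv * tveg_true + (1 - fv) * tsoil_true + np.random.default_rng(3).normal(0, 0.3, fv.shape)

tveg, tsoil = decompose_ts(ts, fv)
print(f"Tveg ~ {tveg:.1f} K, Tsoil ~ {tsoil:.1f} K")   # wet point from min Tveg, dry point from max Tsoil
```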

  1. A New Method to Define the VI-Ts Diagram Using Subpixel Vegetation and Soil Information: A Case Study over a Semiarid Agricultural Region in the North China Plain

    Directory of Open Access Journals (Sweden)

    Masataka Watanabe

    2008-10-01

    Full Text Available The VI-Ts diagram determined by the scatter points of the vegetation index (VI) and surface temperature (Ts) has been widely applied in land surface studies. In the VI-Ts diagram, dry point is defined as a pixel with maximum Ts and minimum VI, while wet point is defined as a pixel with minimum Ts and maximum VI. If both dry and wet points can be obtained simultaneously, a triangular VI-Ts diagram can be readily defined. However, traditional methods cannot define an ideal VI-Ts diagram if there are no full ranges of land surface moisture and VI, such as during rainy season or in a period with a narrow VI range. In this study, a new method was proposed to define the VI-Ts diagram based on the subpixel vegetation and soil information, which was independent of the full ranges of land surface moisture and VI. In this method, a simple approach was firstly proposed to decompose Ts of a given pixel into two components, the surface temperatures of soil (Tsoil) and vegetation (Tveg), by means of Ts and VI information of neighboring pixels. The minimum Tveg and maximum Tsoil were then used to determine the wet and dry points respectively within a given sampling window. This method was tested over a 30 km × 30 km semiarid agricultural area in the North China Plain through 2003 using Advanced Spaceborne Thermal Emission Reflection Radiometer (ASTER) and MODerate-resolution Imaging Spectroradiometer (MODIS) data. The wet and dry points obtained from our proposed method and from a traditional method were compared with those obtained from ground data within the sampling window with the 30 km × 30 km size. Results show that Tsoil and Tveg can be obtained with acceptable accuracies, and that our proposed method can define reasonable VI-Ts diagrams over a semiarid agricultural region throughout the whole year, even for both cases of rainy season and narrow range of VI.

  2. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    Science.gov (United States)

    Eppenhof, Koen A. J.; Pluim, Josien P. W.

    2017-02-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.

  3. Low cost subpixel method for vibration measurement

    Energy Technology Data Exchange (ETDEWEB)

    Ferrer, Belen [Department of Civil Engineering, Univ. Alicante P.O. Box, 99, 03080 Alicante (Spain); Espinosa, Julian; Perez, Jorge; Acevedo, Pablo; Mas, David [Inst. of Physics Applied to the Sciences and Technologies, Univ. Alicante P.O. Box, 99, 03080 Alicante (Spain); Roig, Ana B. [Department of Optics, Univ. Alicante P.O. Box, 99, 03080 Alicante (Spain)

    2014-05-27

    Traditional vibration measurement methods are based on devices that acquire local data by direct contact (accelerometers, GPS) or by laser beams (Doppler vibrometers). Our proposal uses video processing to obtain the vibration frequency directly from the scene, without the need of auxiliary targets or devices. Our video-vibrometer can obtain the vibration frequency at any point in the scene and can be implemented with low-cost devices, such as commercial cameras. Here we present the underlying theory and some experiments that support our technique.
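
    The frequency-extraction step can be sketched by taking the spectrum of a tracked intensity trace; the frame rate and vibration signal below are assumed for illustration:

```python
import numpy as np

fps = 240.0                                    # assumed camera frame rate [Hz]
t = np.arange(0, 2.0, 1.0 / fps)               # 2 s of video
# Synthetic per-frame intensity at one scene point, modulated by a 17.5 Hz vibration plus noise.
trace = 0.6 * np.sin(2 * np.pi * 17.5 * t) + 0.1 * np.random.default_rng(4).standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
print(f"dominant vibration frequency: {freqs[spectrum.argmax()]:.2f} Hz")
```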

  4. Center for Research on Infrared Detectors (CENTROID)

    Science.gov (United States)

    2006-09-30

    having been determined using a self-consistent Schrodinger-Poisson model. Finally, the presence of minibands leads to increased absorption due to the... and a change in d < 0.01 μm in one iteration, the self-consistent fitting calculation was stopped, giving reasonably precise values for the... on the MCT surface. Tellurium precipitates consist of crystalline clusters of Te, which are identified as such by their lattice constant and by a

  5. Attitude Estimation or Quaternion Estimation?

    Science.gov (United States)

    Markley, F. Landis

    2003-01-01

    The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.

  6. Estimating the macroseismic parameters of earthquakes in eastern Iran

    Science.gov (United States)

    Amini, H.; Gasperini, P.; Zare, M.; Vannucci, G.

    2017-10-01

    Macroseismic intensity values allow the assessment of macroseismic parameters of earthquakes such as location, magnitude, and fault orientation. This information is particularly useful for historical earthquakes whose parameters were estimated with low accuracy. Eastern Iran (56°-62°E, 29.5°-35.5°N), which is characterized by several active faults, was selected for this study. Among all earthquakes that occurred in this region, only 29 have some macroseismic information. Their intensity values were reported in various intensity scales. After collecting the descriptions, their intensity values were re-estimated in a uniform intensity scale. Thereafter, the Boxer method was applied to estimate the corresponding macroseismic parameters. Boxer estimates of macroseismic parameters for instrumental earthquakes (after 1964) were found to be consistent with those published by the Global Centroid Moment Tensor Catalog (GCMT). Therefore, this method was applied to estimate the location, magnitude, source dimension, and orientation of the earthquakes with macroseismic descriptions in the period 1066-2012. Macroseismic parameters seem to be more reliable than instrumental ones not only for historical earthquakes but also for instrumental earthquakes, especially those that occurred before 1960. Therefore, as a final result of this study we propose to use the macroseismically determined parameters when preparing a catalog for earthquakes before 1960.

  7. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    Science.gov (United States)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimation by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
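
    The contrast between an initial centroid-based estimate and a refined least-squares fit can be sketched as follows; a Kasa algebraic fit stands in for the refining step (the paper's Monte Carlo and optimal-circle methods are not reproduced), and the partial arc mimics single-scan coverage:

```python
import numpy as np

def fit_circle_kasa(x, y):
    """Algebraic (Kasa) least-squares circle fit: returns centre (a, b) and radius r."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    a, b, c = sol
    return a, b, np.sqrt(c + a**2 + b**2)

# Synthetic single-scan stem cross-section: points only on the side facing the scanner.
rng = np.random.default_rng(5)
theta = rng.uniform(-0.6 * np.pi, 0.6 * np.pi, 200)          # roughly 60% of the circumference
r_true, cx, cy = 0.18, 2.0, 3.0                              # true DBH = 0.36 m
x = cx + r_true * np.cos(theta) + rng.normal(0, 0.003, theta.size)
y = cy + r_true * np.sin(theta) + rng.normal(0, 0.003, theta.size)

# Centroid-based estimate: centre = centroid of the points, radius = mean distance to it.
cxc, cyc = x.mean(), y.mean()
r_centroid = np.hypot(x - cxc, y - cyc).mean()

a, b, r_lsq = fit_circle_kasa(x, y)
print(f"true DBH 0.360 m | centroid fit {2 * r_centroid:.3f} m | least-squares fit {2 * r_lsq:.3f} m")
```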

  8. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially...

  9. Epidemiology from Tweets: Estimating Misuse of Prescription Opioids in the USA from Social Media.

    Science.gov (United States)

    Chary, Michael; Genes, Nicholas; Giraud-Carrier, Christophe; Hanson, Carl; Nelson, Lewis S; Manini, Alex F

    2017-12-01

    The misuse of prescription opioids (MUPO) is a leading public health concern. Social media are playing an expanded role in public health research, but there are few methods for estimating established epidemiological metrics from social media. The purpose of this study was to demonstrate that the geographic variation of social media posts mentioning prescription opioid misuse strongly correlates with government estimates of MUPO in the last month. We wrote software to acquire publicly available tweets from Twitter from 2012 to 2014 that contained at least one keyword related to prescription opioid use (n = 3,611,528). A medical toxicologist and emergency physician curated the list of keywords. We used the semantic distance (SemD) to automatically quantify the similarity of meaning between tweets and identify tweets that mentioned MUPO. We defined the SemD between two words as the shortest distance between the two corresponding word-centroids. Each word-centroid represented all recognized meanings of a word. We validated this automatic identification with manual curation. We used Twitter metadata to estimate the location of each tweet. We compared our estimated geographic distribution with the 2013-2015 National Surveys on Drug Usage and Health (NSDUH). Tweets that mentioned MUPO formed a distinct cluster far away from semantically unrelated tweets. The state-by-state correlation between Twitter and NSDUH was highly significant across all NSDUH survey years. The correlation was strongest between Twitter and NSDUH data from those aged 18-25 (r = 0.94, p social media to provide insights for syndromic toxicosurveillance.

  11. Evaluation of the Airborne CASI/TASI Ts-VI Space Method for Estimating Near-Surface Soil Moisture

    Directory of Open Access Journals (Sweden)

    Lei Fan

    2015-03-01

    Full Text Available High spatial resolution airborne data with little sub-pixel heterogeneity were used to evaluate the suitability of the temperature/vegetation (Ts/VI) space method developed from satellite observations, and were explored to improve the performance of the Ts/VI space method for estimating soil moisture (SM). An evaluation of the airborne ΔTs/Fr space (incorporated with air temperature) revealed that normalized difference vegetation index (NDVI) saturation and disturbed pixels were hindering the appropriate construction of the space. The non-disturbed ΔTs/Fr space, which was modified by adjusting the NDVI saturation and eliminating the disturbed pixels, was clearly correlated with the measured SM. The SM estimations of the non-disturbed ΔTs/Fr space using the evaporative fraction (EF) and temperature vegetation dryness index (TVDI) were validated by using the SM measured at a depth of 4 cm, which was determined according to the land surface types. The validation results show that the EF approach provides superior estimates with a lower RMSE value (0.023 m3·m−3) and a higher correlation coefficient (0.68) than the TVDI. The application of the airborne ΔTs/Fr space shows that the two modifications proposed in this study strengthen the link between the ΔTs/Fr space and SM, which is important for improving the precision of the remote sensing Ts/VI space method for monitoring SM.

  12. Improve the Robustness of Range-Free Localization Methods on Wireless Sensor Networks using Recursive Position Estimation Algorithms

    Directory of Open Access Journals (Sweden)

    Gamantyo Hendrantoro

    2011-12-01

    Full Text Available The position of a sensor node in a wireless sensor network determines the accuracy of the sensed data. With knowledge of the sensor positions, the location of a sensed target can be estimated. Localization techniques find the position of a sensor node by considering its distances from nearby reference nodes. The Centroid Algorithm is a robust, simple and low-cost localization technique with no dependence on additional hardware. We propose a Recursive Position Estimation Algorithm to obtain more accurate node positioning in range-free localization. The simulation results show that this algorithm can increase position accuracy by up to 50%. The trade-off is that the smaller the number of reference nodes, the higher the computational time required. A new method based on the availability of sensor power control is proposed to optimize the estimated position.
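
    A minimal sketch of range-free centroid localization with one recursive pass, in which newly localized nodes are promoted to reference nodes, is given below; the topology, radio range and fallback behaviour are illustrative assumptions:

```python
import numpy as np

def centroid_localize(node_xy, refs_xy, radio_range):
    """Range-free Centroid Algorithm: average the positions of reference nodes within radio range
    (falling back to the mean of all references if none is heard)."""
    d = np.linalg.norm(refs_xy - node_xy, axis=1)
    in_range = refs_xy[d <= radio_range]
    return in_range.mean(axis=0) if len(in_range) else refs_xy.mean(axis=0)

rng = np.random.default_rng(6)
anchors = rng.uniform(0, 100, size=(8, 2))      # reference nodes with known positions
unknown = rng.uniform(0, 100, size=(20, 2))     # true positions of blind nodes (unknown in practice)
R = 45.0                                        # radio range [m]

# Pass 1: plain centroid localization from the original anchors.
est = np.array([centroid_localize(p, anchors, R) for p in unknown])

# Pass 2 (recursive position estimation): localized nodes join the reference set.
refs = np.vstack([anchors, est])
est2 = np.array([centroid_localize(p, refs, R) for p in unknown])

err1 = np.linalg.norm(est - unknown, axis=1).mean()
err2 = np.linalg.norm(est2 - unknown, axis=1).mean()
print(f"mean error: pass 1 = {err1:.1f} m, pass 2 = {err2:.1f} m")
```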

  13. Improve the Robustness of Range-Free Localization Methods on Wireless Sensor Networks using Recursive Position Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Prima Kristalina

    2013-09-01

    Full Text Available The position of a sensor node in a wireless sensor network determines the accuracy of the sensed data. With knowledge of the sensor positions, the location of a sensed target can be estimated. Localization techniques find the position of a sensor node by considering its distances from nearby reference nodes. The Centroid Algorithm is a robust, simple and low-cost localization technique with no dependence on additional hardware. We propose a Recursive Position Estimation Algorithm to obtain more accurate node positioning in range-free localization. The simulation results show that this algorithm can increase position accuracy by up to 50%. The trade-off is that the smaller the number of reference nodes, the higher the computational time required. A new method based on the availability of sensor power control is proposed to optimize the estimated position.

  14. Estimation of myocardial deformation using correlation image velocimetry.

    Science.gov (United States)

    Jacob, Athira; Krishnamurthi, Ganapathy; Mathur, Manikandan

    2017-04-05

    Tagged Magnetic Resonance (tMR) imaging is a powerful technique for determining cardiovascular abnormalities. One of the reasons for tMR not being used in routine clinical practice is the lack of easy-to-use tools for image analysis and strain mapping. In this paper, we introduce a novel interdisciplinary method based on correlation image velocimetry (CIV) to estimate cardiac deformation and strain maps from tMR images. CIV, a cross-correlation based pattern matching algorithm, analyses a pair of images to obtain the displacement field at sub-pixel accuracy with any desired spatial resolution. This first time application of CIV to tMR image analysis is implemented using an existing open source Matlab-based software called UVMAT. The method, which requires two main input parameters namely correlation box size (CB) and search box size (SB), is first validated using a synthetic grid image with grid sizes representative of typical tMR images. Phantom and patient images obtained from a Medical Imaging grand challenge dataset ( http://stacom.cardiacatlas.org/motion-tracking-challenge/ ) were then analysed to obtain cardiac displacement fields and strain maps. The results were then compared with estimates from Harmonic Phase analysis (HARP) technique. For a known displacement field imposed on both the synthetic grid image and the phantom image, CIV is accurate for 3-pixel and larger displacements on a 512 × 512 image with (CB, SB) = (25, 55) pixels. Further validation of our method is achieved by showing that our estimated landmark positions on patient images fall within the inter-observer variability in the ground truth. The effectiveness of our approach to analyse patient images is then established by calculating dense displacement fields throughout a cardiac cycle, and were found to be physiologically consistent. Circumferential strains were estimated at the apical, mid and basal slices of the heart, and were shown to compare favorably with those of HARP over the

  15. An improved implementation of block matching for motion estimation in ultrasound imaging

    Science.gov (United States)

    Cardoso, Fernando M.; Furuie, Sergio S.

    2017-03-01

    Ultrasound elastography has become an important procedure that provides information about tissue dynamics and may help in the detection of tissue abnormalities. Therefore, motion estimation in a sequence of ultrasound acquisitions is crucial to the quality of this information. We propose a novel algorithm to perform speckle tracking, which consists of an implementation of 2D block matching with two enhancements: sub-pixel linear interpolation and displacement propagation, which are able to increase resolution, reduce computation time and prevent kernel-mismatching errors. This method does not require any additional hardware and provides real-time information. The proposed technique was evaluated using four different numerical phantoms and its results were compared with the results from standard 2D block matching and optical flow. The proposed method outperformed the other two methods, providing an average error of 0.98 pixels, while standard 2D block matching and optical flow presented average errors of 2.50 and 10.03 pixels, respectively. The proposed algorithm was also assessed with four different physical phantoms, and a qualitative comparison showed that the proposed technique presented results that were compatible with the results from the built-in elastography mode of the ultrasound equipment (Ultrasonix Touch).
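
    The core of such a scheme can be sketched as block matching with sub-pixel refinement of the cost surface; a parabolic peak fit is used below in place of the paper's linear interpolation, and displacement propagation is omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def block_match_subpixel(ref, cur, center, kernel=8, search=6):
    """SAD block matching around `center`, refined per axis with a parabolic peak fit."""
    cy, cx = center
    block = ref[cy - kernel:cy + kernel + 1, cx - kernel:cx + kernel + 1]
    cost = np.empty((2 * search + 1, 2 * search + 1))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[cy + dy - kernel:cy + dy + kernel + 1,
                       cx + dx - kernel:cx + dx + kernel + 1]
            cost[dy + search, dx + search] = np.abs(block - cand).sum()
    iy, ix = np.unravel_index(cost.argmin(), cost.shape)

    def parabolic(c_m, c_0, c_p):             # sub-pixel offset of the minimum of a 3-point parabola
        denom = c_m - 2 * c_0 + c_p
        return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

    sub_y = parabolic(cost[iy - 1, ix], cost[iy, ix], cost[iy + 1, ix]) if 0 < iy < 2 * search else 0.0
    sub_x = parabolic(cost[iy, ix - 1], cost[iy, ix], cost[iy, ix + 1]) if 0 < ix < 2 * search else 0.0
    return (iy - search + sub_y, ix - search + sub_x)

# Synthetic speckle-like frame and a copy displaced by a known sub-pixel amount.
rng = np.random.default_rng(7)
frame0 = gaussian_filter(rng.random((128, 128)), sigma=1.5)
frame1 = shift(frame0, (2.3, -1.6), order=3, mode="wrap")

print("estimated displacement:", block_match_subpixel(frame0, frame1, center=(64, 64)))  # ~(2.3, -1.6)
```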

  16. Estimation of Aeolian Dune Migration Over Martian Surface Employing High Precision Photogrammetric Measurements

    Science.gov (United States)

    Kim, J.

    2017-07-01

    At the present time, arguments continue regarding the migration speeds of Martian dune fields and their correlation with atmospheric circulation. However, precise measurement of the spatial translation of Martian dunes has rarely been successful, owing to the technical difficulty of quantitatively observing the expected small surface migrations. Therefore, we developed a generic procedure to measure the migration of dune fields employing a high-accuracy photogrammetric processor and sub-pixel image correlator on 25-cm resolution High Resolution Imaging Science Experiment (HiRISE) images. The established algorithms have been tested over a few Martian dune fields. Consequently, migrations over well-known crater dune fields appeared to be almost static over considerable time spans and were weakly correlated with wind directions estimated by the Mars Climate Database. Only over some Martian dune fields, such as Kaiser crater, have meaningful migration speeds (> 1 m/year), considering the photogrammetric error residual, been detected. Currently, a technically improved processor that compensates for error residuals using time-series observations is under development and is expected to produce long-term migration speeds over Martian dune fields where regular HiRISE image acquisitions are available.

  17. Genetic Algorithm-based Affine Parameter Estimation for Shape Recognition

    Directory of Open Access Journals (Sweden)

    Yuxing Mao

    2014-06-01

    Full Text Available Shape recognition is a classically difficult problem because of the affine transformation between two shapes. The current study proposes an affine parameter estimation method for shape recognition based on a genetic algorithm (GA). The contributions of this study are focused on the extraction of affine-invariant features, the individual encoding scheme, and the fitness function construction policy for the GA. First, the affine-invariant characteristics of the centroid distance ratios (CDRs) of any two opposite contour points to the barycentre are analysed. Using different sampling intervals along the azimuth angle, different numbers of CDRs are computed for the two candidate shapes as their representations. Then, CDRs selected according to predesigned affine parameters are used to construct the fitness function. After that, the GA is used to search for the affine parameters with optimal matching between the candidate shapes, which serve as the actual description of the affine transformation between the shapes. Finally, the CDRs are resampled based on the estimated parameters to evaluate the similarity of the shapes for classification. The experimental results demonstrate the robust performance of the proposed method in shape recognition with translation, scaling, rotation and distortion.
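
    A minimal sketch of the centroid distance ratio (CDR) feature described above: for each sampled azimuth, the ratio of the distances from two roughly opposite contour points to the barycentre. The nearest-azimuth sampling and the number of angles are illustrative assumptions.

```python
# Sketch of centroid distance ratios (CDRs) for a closed contour.
import numpy as np

def centroid_distance_ratios(contour, n_angles=36):
    """contour: (N, 2) array of (x, y) boundary points. Returns n_angles CDRs."""
    c = contour.mean(axis=0)                        # barycentre, approximated from contour points
    rel = contour - c
    ang = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)
    dist = np.hypot(rel[:, 0], rel[:, 1])
    cdrs = []
    for k in range(n_angles):
        theta = 2 * np.pi * k / n_angles
        i = np.argmin(np.abs(np.angle(np.exp(1j * (ang - theta)))))          # point nearest theta
        j = np.argmin(np.abs(np.angle(np.exp(1j * (ang - theta - np.pi)))))  # opposite point
        cdrs.append(dist[i] / dist[j] if dist[j] > 0 else 0.0)
    return np.array(cdrs)
```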

  18. Using Landsat Vegetation Indices to Estimate Impervious Surface Fractions for European Cities

    Directory of Open Access Journals (Sweden)

    Per Skougaard Kaspersen

    2015-06-01

    Full Text Available Impervious surfaces (IS) are a key indicator of environmental quality, and mapping of urban IS is important for a wide range of applications including hydrological modelling, water management, urban and environmental planning and urban climate studies. This paper addresses the accuracy and applicability of vegetation indices (VI) from Landsat imagery to estimate IS fractions for European cities. The accuracy of three different measures of vegetation cover is examined for eight urban areas at different locations in Europe. The Normalized Difference Vegetation Index (NDVI) and Soil Adjusted Vegetation Index (SAVI) are converted to IS fractions using a regression modelling approach. Also, NDVI is used to estimate fractional vegetation cover (FR), and consequently IS fractions. All three indices provide fairly accurate estimates (MAEs ≈ 10%, MBEs < 2%) of sub-pixel imperviousness, and are found to be applicable for cities with dissimilar climatic and vegetative conditions. The VI/IS relationship across cities is examined by quantifying the MAEs and MBEs between all combinations of models and urban areas. Also, regional regression models are developed by compiling data from multiple cities to examine the potential for developing and applying a single regression model to estimate IS fractions for numerous urban areas without reducing the accuracy considerably. Our findings indicate that the models can be applied broadly for multiple urban areas, and that the accuracy is reduced only marginally by applying the regional models. SAVI is identified as a superior index for the development of regional quantification models. The findings of this study highlight that IS fractions, and spatiotemporal changes herein, can be mapped by use of simple regression models based on VIs from remote sensors, and that the method presented enables simple, accurate and resource-efficient quantification of IS.
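
    The regression modelling approach can be sketched as follows: NDVI is computed from red and near-infrared reflectance, a linear model is fitted against reference IS fractions, and the model is applied per pixel. The linear form and variable names below are assumptions, not the paper's exact model specification.

```python
# Sketch of estimating impervious surface (IS) fractions from NDVI by regression.
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-12)

def fit_is_model(ndvi_train, is_train):
    """Least-squares fit of IS fraction as a linear function of NDVI."""
    slope, intercept = np.polyfit(ndvi_train, is_train, deg=1)
    return slope, intercept

def predict_is(ndvi_image, slope, intercept):
    return np.clip(slope * ndvi_image + intercept, 0.0, 1.0)   # keep fractions in [0, 1]
```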

  19. Machine Learning on Images: Combining Passive Microwave and Optical Data to Estimate Snow Water Equivalent

    Science.gov (United States)

    Dozier, J.; Tolle, K.; Bair, N.

    2014-12-01

    We have a problem that may be a specific example of a generic one. The task is to produce spatiotemporally distributed estimates of snow water equivalent (SWE) in snow-dominated mountain environments, including those that lack on-the-ground measurements. Several independent methods exist, but all are problematic. The remotely sensed date of disappearance of snow from each pixel can be combined with a calculation of melt to reconstruct the accumulated SWE for each day back to the last significant snowfall. Comparison with streamflow measurements in mountain ranges where such data are available shows this method to be accurate, but the big disadvantage is that SWE can only be calculated retroactively after snow disappears, and even then only for areas with little accumulation during the melt season. Passive microwave sensors offer real-time global SWE estimates but suffer from several issues, notably signal loss in wet snow or in forests, saturation in deep snow, subpixel variability in the mountains owing to the large (~25 km) pixel size, and SWE overestimation in the presence of large grains such as depth hoar and surface hoar. Throughout the winter and spring, snow-covered area can be measured at sub-km spatial resolution with optical sensors, with accuracy and timeliness improved by interpolating and smoothing across multiple days. So the question is: how can we establish the relationship between reconstruction (available only after the snow goes away) and passive microwave and optical data to accurately estimate SWE during the snow season, when the information can help forecast spring runoff? Linear regression provides one answer, but can modern machine learning techniques (used to persuade people to click on web advertisements) adapt to improve forecasts of floods and droughts in areas where more than one billion people depend on snowmelt for their water resources?

  20. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter

    Directory of Open Access Journals (Sweden)

    Xuemin Cheng

    2016-04-01

    Full Text Available Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. Falsely matched points were removed by the modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results showed that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
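
    A rough sketch of the global-motion step described above, using OpenCV: features are detected and matched, false matches are rejected with RANSAC, and a similarity transform (translation, rotation, scale) is estimated between consecutive frames. ORB is used here as a freely available stand-in for SURF, and all parameters are assumptions.

```python
# Sketch of frame-to-frame global motion estimation with feature matching and
# RANSAC; ORB replaces SURF for illustration only.
import cv2
import numpy as np

def global_motion(prev_gray, curr_gray):
    """prev_gray, curr_gray: uint8 grayscale frames. Returns a 2x3 similarity matrix."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects falsely matched points while fitting the transform.
    M, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M   # translation in M[:, 2], rotation/scale in the 2x2 part
```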

  1. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    Science.gov (United States)

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. Falsely matched points were removed by the modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results showed that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.

  2. Research on the method of information system risk state estimation based on clustering particle filter

    Science.gov (United States)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing the correlation analysis of threat factors in risk assessment, a dynamic safety-risk assessment method based on particle filtering is proposed, with threat analysis at its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining these weights with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: all particles are clustered, and the centroid of each cluster is used as its representative in subsequent operations, reducing the amount of computation. Empirical results indicate that the method reasonably captures the mutual dependence and influence among risk elements. Under the circumstance of limited information, it provides a scientific basis for formulating a risk management control strategy.

  3. Research on the method of information system risk state estimation based on clustering particle filter

    Directory of Open Access Journals (Sweden)

    Cui Jia

    2017-05-01

    Full Text Available With the purpose of reinforcing the correlation analysis of threat factors in risk assessment, a dynamic safety-risk assessment method based on particle filtering is proposed, with threat analysis at its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining these weights with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: all particles are clustered, and the centroid of each cluster is used as its representative in subsequent operations, reducing the amount of computation. Empirical results indicate that the method reasonably captures the mutual dependence and influence among risk elements. Under the circumstance of limited information, it provides a scientific basis for formulating a risk management control strategy.
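
    The clustering step can be sketched as follows: the particle set is grouped with k-means and each cluster is replaced by a weighted centroid carrying the cluster's total weight, so the subsequent filtering operations run on far fewer representatives. The cluster count, state dimension and use of SciPy's k-means are assumptions.

```python
# Sketch of reducing a particle set to cluster centroids before the update step.
import numpy as np
from scipy.cluster.vq import kmeans2

def cluster_particles(particles, weights, k=10):
    """particles: (N, d) particle states; weights: (N,) normalised weights."""
    _, labels = kmeans2(particles, k, minit="++")
    rep_states, rep_weights = [], []
    for j in range(k):
        mask = labels == j
        if not mask.any():
            continue
        w = weights[mask]
        rep_states.append(np.average(particles[mask], axis=0, weights=w))  # weighted centroid
        rep_weights.append(w.sum())                                        # cluster weight
    return np.array(rep_states), np.array(rep_weights)
```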

  4. Source parameter estimates of echolocation clicks from wild pygmy killer whales (Feresa attenuata) (L)

    Science.gov (United States)

    Madsen, P. T.; Kerr, I.; Payne, R.

    2004-10-01

    Pods of the little-known pygmy killer whale (Feresa attenuata) in the northern Indian Ocean were recorded with a vertical hydrophone array connected to a digital recorder sampling at 320 kHz. Recorded clicks were directional, short (25 μs) transients with estimated source levels between 197 and 223 dB re. 1 μPa (pp). Spectra of clicks recorded close to or on the acoustic axis were bimodal, with peak frequencies between 45 and 117 kHz and centroid frequencies between 70 and 85 kHz. The clicks share characteristics of echolocation clicks from similar-sized, whistling delphinids, and have properties suited for the detection and classification of the prey targeted by this odontocete.
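
    For reference, the centroid frequency quoted above is the amplitude-weighted mean frequency of a click's power spectrum; a minimal sketch, assuming a 320 kHz sampling rate as in the recordings:

```python
# Sketch of computing the spectral centroid frequency of a recorded click.
import numpy as np

def centroid_frequency(click, fs=320_000):
    spectrum = np.abs(np.fft.rfft(click)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(click), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)  # centroid frequency in Hz
```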

  5. Solar resources estimation combining digital terrain models and satellite images techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bosch, J.L.; Batlles, F.J. [Universidad de Almeria, Departamento de Fisica Aplicada, Ctra. Sacramento s/n, 04120-Almeria (Spain); Zarzalejo, L.F. [CIEMAT, Departamento de Energia, Madrid (Spain); Lopez, G. [EPS-Universidad de Huelva, Departamento de Ingenieria Electrica y Termica, Huelva (Spain)

    2010-12-15

    One of the most important steps in making use of any renewable energy is to perform an accurate estimation of the resource to be exploited. In the design process of both active and passive solar energy systems, radiation data are required for the site, with proper spatial resolution. Generally, a network of radiometric stations is used in this evaluation, but when the stations are too dispersed or not available for the study area, satellite images can be utilized as indirect solar radiation measurements. Although satellite images cover wide areas with a good acquisition frequency, they usually have a poor spatial resolution limited by the size of the image pixel, and irradiation must be interpolated to evaluate solar irradiation at a sub-pixel scale. When pixels are located in flat and homogeneous areas, the correlation of solar irradiation is relatively high, and classic interpolation can provide a good estimation. However, in zones of complex topography, data interpolation is not adequate and the use of Digital Terrain Model (DTM) information can be helpful. In this work, daily solar irradiation is estimated for a wide mountainous area using a combination of Meteosat satellite images and a DTM, with the advantage of avoiding the necessity of ground measurements. The methodology utilizes a modified Heliosat-2 model and applies to all sky conditions; it also introduces a horizon calculation for the DTM points and accounts for the effect of snow cover. Model performance has been evaluated against data measured at 12 radiometric stations, with results in terms of the Root Mean Square Error (RMSE) of 10% and a Mean Bias Error (MBE) of +2%, both expressed as a percentage of the mean measured value. (author)

  6. Ensemble estimators for multivariate entropy estimation.

    Science.gov (United States)

    Sricharan, Kumar; Wei, Dennis; Hero, Alfred O

    2013-07-01

    The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow, of order O(T^(-γ/d)), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, and intrinsic dimension estimators, among others. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster dimension-invariant rate of O(T^(-1)). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample.
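
    A sketch of the base estimator being combined, a k-nearest-neighbour (Kozachenko-Leonenko) entropy estimate, followed by a naive uniform average over several k values standing in for the paper's optimally weighted ensemble (whose weights are obtained by convex optimisation):

```python
# Sketch of a k-NN differential entropy estimator and a simple (uniform-weight)
# ensemble over several k values; the optimal weighting is not reproduced here.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=3):
    """x: (N, d) samples. Returns an estimate of differential entropy in nats."""
    n, d = x.shape
    eps = cKDTree(x).query(x, k=k + 1)[0][:, -1]                  # distance to k-th neighbour
    log_vd = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)   # log volume of the unit ball
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(eps + 1e-300))

def ensemble_entropy(x, ks=(3, 5, 8, 12)):
    return np.mean([knn_entropy(x, k) for k in ks])               # uniform weights (assumption)
```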

  7. Application of the simplex-centroid design with a process variable in optimizing the production conditions of B100 biodiesel from sunflower oil.

    Directory of Open Access Journals (Sweden)

    Gabriel Henrique Dias

    2014-02-01

    Full Text Available The simplex-centroid design was applied to optimize the conditions for obtaining B100 biodiesel from sunflower oil, using different catalysts, with methanol and ethanol as the process variable. The transesterification reaction using methanol indicated sodium methoxide as the best catalyst, with a yield of 98.30%. Using ethanol as the process variable and KOH as the catalyst, the reaction yield was only 89.50%. Tests on the products obtained under the optimal conditions indicated that they complied with the parameters established by Brazilian and European Union legislation.

  8. Estimation of the Rotational Terms of the Dynamic Response Matrix

    Directory of Open Access Journals (Sweden)

    D. Montalvão

    2004-01-01

    Full Text Available The dynamic response of a structure can be described by both its translational and rotational receptances. The latter are frequently not considered because of the difficulties in applying a pure moment excitation or in measuring rotations. However, in general, this implies a reduction of up to 75% of the complete model. On the other hand, if a modification includes a rotational inertia, the rotational receptances of the unmodified system are needed. In one method, more commonly found in the literature, a so-called T-block is attached to the structure. Then, a force applied to an arm of the T-block generates a moment together with a force at the connection point. The T-block also allows for angular displacement measurements. Nevertheless, the results are often not quite satisfactory. In this work, an alternative method based upon coupling techniques is developed, in which rotational receptances are estimated without the need to apply a moment excitation. This is accomplished by introducing a rotational inertia modification when rotating the T-block. The force is then applied at its centroid. Several numerical and experimental examples are discussed so that the methodology can be clearly described. The advantages and limitations are identified within the practical application of the method.

  9. Detection, emission estimation and risk prediction of forest fires in China using satellite sensors and simulation models in the past three decades--an overview.

    Science.gov (United States)

    Zhang, Jia-Hua; Yao, Feng-Mei; Liu, Cheng; Yang, Li-Min; Boken, Vijendra K

    2011-08-01

    Forest fires have a major impact on ecosystems and greatly affect the amount of greenhouse gases and aerosols in the atmosphere. This paper presents an overview of forest fire detection, emission estimation, and fire risk prediction in China using satellite imagery, climate data, and various simulation models over the past three decades. Since the 1980s, remotely sensed data acquired by many satellites, such as NOAA/AVHRR, FY-series, MODIS, CBERS, and ENVISAT, have been widely utilized for detecting forest fire hot spots and burned areas in China. Several algorithms have been developed and applied for detecting forest fire hot spots at a sub-pixel level. With respect to modeling forest burning emissions, a remote sensing data-driven Net Primary Productivity (NPP) estimation model was developed for estimating forest biomass and fuel. In order to improve forest fire risk modeling in China, real-time meteorological data, such as surface temperature, relative humidity, wind speed and direction, have been used as model input for improving the prediction of forest fire occurrence and behavior. Shortwave infrared (SWIR) and near infrared (NIR) channels of satellite sensors have been employed for detecting live fuel moisture content (FMC), and the Normalized Difference Water Index (NDWI) was used for evaluating forest vegetation condition and moisture status.

  10. Detection, Emission Estimation and Risk Prediction of Forest Fires in China Using Satellite Sensors and Simulation Models in the Past Three Decades—An Overview

    Directory of Open Access Journals (Sweden)

    Cheng Liu

    2011-07-01

    Full Text Available Forest fires have a major impact on ecosystems and greatly affect the amount of greenhouse gases and aerosols in the atmosphere. This paper presents an overview of forest fire detection, emission estimation, and fire risk prediction in China using satellite imagery, climate data, and various simulation models over the past three decades. Since the 1980s, remotely sensed data acquired by many satellites, such as NOAA/AVHRR, FY-series, MODIS, CBERS, and ENVISAT, have been widely utilized for detecting forest fire hot spots and burned areas in China. Several algorithms have been developed and applied for detecting forest fire hot spots at a sub-pixel level. With respect to modeling forest burning emissions, a remote sensing data-driven Net Primary Productivity (NPP) estimation model was developed for estimating forest biomass and fuel. In order to improve forest fire risk modeling in China, real-time meteorological data, such as surface temperature, relative humidity, wind speed and direction, have been used as model input for improving the prediction of forest fire occurrence and behavior. Shortwave infrared (SWIR) and near infrared (NIR) channels of satellite sensors have been employed for detecting live fuel moisture content (FMC), and the Normalized Difference Water Index (NDWI) was used for evaluating forest vegetation condition and moisture status.

  11. Price and cost estimation

    Science.gov (United States)

    Stewart, R. D.

    1979-01-01

    The Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. This versatile and flexible tool significantly reduces computation time and errors, as well as the typing and reproduction time involved in the preparation of cost estimates.

  12. Estimating tail probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Carr, D.B.; Tolley, H.D.

    1982-12-01

    This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.

  13. An adaptive threshold method for improving astrometry of space debris CCD images

    Science.gov (United States)

    Sun, Rong-yu; Zhao, Chang-yin

    2014-06-01

    Optical survey is a main technique for observing space debris, and precisely measuring the positions of space debris is of great importance. Due to several factors, e.g. the angle of the object normal to the observer and the shape and attitude of the object, the observed characteristics of low Earth orbit space debris vary considerably. In optical CCD images of observed objects, the size and brightness vary, and it is therefore difficult to choose the threshold for centroid measurement and precise astrometry. Traditionally the threshold is set empirically and held constant during data reduction, which is obviously not suitable for space debris. Here we offer a solution for setting the threshold. Our method assumes that the PSF (point spread function) is Gaussian and estimates the signal flux by a direct two-dimensional Gaussian fit; a cubic spline interpolation is then performed to divide each initial pixel into several sub-pixels, after which the threshold is determined from the estimated signal flux and the sub-pixels above the threshold are selected to estimate the centroid. A trial observation of the fast-spinning satellite Ajisai was made, and the CCD frames obtained were used to test our algorithm. The calibration precision for various thresholds is obtained by comparing the observed equatorial positions with reference positions derived from the precise ephemeris of the satellite. The results indicate that our method reduces the total measurement errors and works effectively in improving the centroiding precision of space debris images.
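
    The flux-based thresholding and centroiding steps can be sketched as follows: a two-dimensional Gaussian fit estimates the signal amplitude and background, a threshold derived from the fit selects pixels, and an intensity-weighted centroid is computed. The threshold fraction is an assumption, and the cubic-spline sub-pixel subdivision used in the paper is omitted for brevity.

```python
# Sketch of adaptive thresholding from a 2D Gaussian fit, followed by an
# intensity-weighted centroid; the threshold fraction is an assumption.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset).ravel()

def adaptive_centroid(img, frac=0.2):
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    p0 = (img.max() - np.median(img), nx / 2, ny / 2, 2.0, np.median(img))
    popt, _ = curve_fit(gauss2d, (x, y), img.ravel().astype(float), p0=p0)
    amp, _, _, _, offset = popt
    thresh = offset + frac * amp                      # threshold derived from the fitted flux
    weights = np.where(img > thresh, img - offset, 0.0)
    return (x * weights).sum() / weights.sum(), (y * weights).sum() / weights.sum()
```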

  14. Determining an empirical estimate of the tracking inconsistency component for true astrometric uncertainties

    Science.gov (United States)

    Ramanjooloo, Yudish; Tholen, David J.; Fohring, Dora; Claytor, Zach; Hung, Denise

    2017-10-01

    The asteroid community is moving towards the implementation of a new astrometric reporting format. This new format will finally include complementary astrometric uncertainties in the reported observations. The availability of uncertainties will allow ephemeris predictions and orbit solutions to be constrained with greater reliability, thereby improving the efficiency of the community's follow-up and recovery efforts. Our current uncertainty model involves our uncertainties in centroiding on the trailed stars and asteroid and the uncertainty due to the astrometric solution. The accuracy of our astrometric measurements is reliant on how well we can minimise the offset between the spatial and temporal centroids of the stars and the asteroid. This offset is currently unmodelled and can be caused by variations in cloud transparency, the seeing, and tracking inconsistencies. The magnitude zero point of the image, which is affected by fluctuating weather conditions and the catalog bias in the photometric magnitudes, can serve as an indicator of the presence and thickness of clouds. Through comparison of the astrometric uncertainties to the orbit solution residuals, it became apparent that a component of the error analysis remained unaccounted for, as a result of cloud coverage and thickness, telescope tracking inconsistencies and variable seeing. This work will attempt to quantify the tracking inconsistency component. We have acquired a rich dataset with the University of Hawaii 2.24 metre telescope (UH-88 inch) that is well positioned to construct an empirical estimate of the tracking inconsistency component. This work is funded by NASA grant NXX13AI64G.

  15. On nonparametric hazard estimation.

    Science.gov (United States)

    Hobbs, Brian P

    The Nelson-Aalen estimator provides the basis for the ubiquitous Kaplan-Meier estimator, and therefore is an essential tool for nonparametric survival analysis. This article reviews martingale theory and its role in demonstrating that the Nelson-Aalen estimator is uniformly consistent for estimating the cumulative hazard function for right-censored continuous time-to-failure data.

  16. On nonparametric hazard estimation

    OpenAIRE

    Hobbs, Brian P.

    2015-01-01

    The Nelson-Aalen estimator provides the basis for the ubiquitous Kaplan-Meier estimator, and therefore is an essential tool for nonparametric survival analysis. This article reviews martingale theory and its role in demonstrating that the Nelson-Aalen estimator is uniformly consistent for estimating the cumulative hazard function for right-censored continuous time-to-failure data.
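
    A minimal sketch of the Nelson-Aalen estimator for right-censored data: at each observed event time, the cumulative hazard is incremented by the number of failures divided by the number of subjects still at risk. The input format below is an assumption.

```python
# Sketch of the Nelson-Aalen cumulative hazard estimator.
import numpy as np

def nelson_aalen(times, events):
    """times: observed times; events: 1 for failure, 0 for right-censoring.
    Returns (event times, cumulative hazard at those times)."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    t_out, h_out, cum_hazard, n_at_risk = [], [], 0.0, len(times)
    for t in np.unique(times):
        at_t = times == t
        d = events[at_t].sum()                  # failures at time t
        if d > 0:
            cum_hazard += d / n_at_risk         # Nelson-Aalen increment
            t_out.append(t)
            h_out.append(cum_hazard)
        n_at_risk -= at_t.sum()                 # drop failures and censorings from the risk set
    return np.array(t_out), np.array(h_out)
```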

  17. Estimating Uncertainty in Annual Forest Inventory Estimates

    Science.gov (United States)

    Ronald E. McRoberts; Veronica C. Lessard

    1999-01-01

    The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...

  18. Using lidar and effective LAI data to evaluate IKONOS and Landsat 7 ETM+ vegetation cover estimates in a ponderosa pine forest

    Science.gov (United States)

    Chen, X.; Vierling, Lee; Rowell, E.; DeFelice, Tom

    2004-01-01

    Structural and functional analyses of ecosystems benefit when high-accuracy vegetation coverages can be derived over large areas. In this study, we utilize IKONOS, Landsat 7 ETM+, and airborne scanning light detection and ranging (lidar) to quantify coniferous forest and understory grass coverages in a ponderosa pine (Pinus ponderosa) dominated ecosystem in the Black Hills of South Dakota. Linear spectral mixture analyses of IKONOS and ETM+ data were used to isolate spectral endmembers (bare soil, understory grass, and tree/shade) and calculate their subpixel fractional coverages. We then compared these endmember cover estimates to similar cover estimates derived from lidar data and field measures. The IKONOS-derived tree/shade fraction was significantly correlated with the field-measured canopy effective leaf area index (LAIe) (r2=0.55, p<0.001). The enhanced vegetation index (EVI) was negatively correlated with field-measured tree canopy effective LAI and lidar tree cover response (r2=0.30, r=−0.55 and r2=0.41, r=−0.64, respectively; p<0.001), and further analyses indicate a strong linear relationship between EVI and the IKONOS-derived grass fraction (r2=0.99, p<0.001). We also found that using EVI resulted in better agreement with the subpixel vegetation fractions in this ecosystem than using the normalized difference vegetation index (NDVI). Coarsening the IKONOS data to 30 m resolution imagery revealed a stronger relationship with lidar tree measures (r2=0.77, p<0.001) than at 4 m resolution (r2=0.58, p<0.001). Unmixed tree/shade fractions derived from 30 m resolution ETM+ imagery also showed a significant correlation with the lidar data (r2=0.66, p<0.001). These results demonstrate the power of using high resolution lidar data to validate spectral unmixing results of satellite imagery, and indicate that IKONOS data and Landsat 7 ETM+ data both can serve to make the important distinction between tree/shade coverage and exposed understory grass coverage during peak summertime greenness in a ponderosa pine forest ecosystem.

  19. Estimating constituent loads

    Science.gov (United States)

    Cohn, T.A.; DeLong, L.L.; Gilroy, E.J.; Hirsch, R.M.; Wells, D.K.

    1989-01-01

    This paper compares the bias and variance of three procedures that can be used with log linear regression models: the traditional rating curve estimator, a modified rating curve method, and a minimum variance unbiased estimator (MVUE). Analytical derivations of the bias and efficiency of all three estimators are presented. It is shown that for many conditions the traditional and the modified estimator can provide satisfactory estimates. However, other conditions exist where they have substantial bias and a large mean square error. These conditions commonly occur when sample sizes are small, or when loads are estimated during high-flow conditions. The MVUE, however, is unbiased and always performs nearly as well or better than the rating curve estimator or the modified estimator provided that the hypothesis of the log linear model is correct. Since an efficient unbiased estimator is available, there seems to be no reason to employ biased estimators. -from Authors

  20. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    Science.gov (United States)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.

  1. Estimating urban vegetation fraction across 25 cities in pan-Pacific using Landsat time series data

    Science.gov (United States)

    Lu, Yuhao; Coops, Nicholas C.; Hermosilla, Txomin

    2017-04-01

    Urbanization globally is consistently reshaping the natural landscape to accommodate the growing human population. Urban vegetation plays a key role in moderating environmental impacts caused by urbanization and is critically important for local economic, social and cultural development. The differing patterns of human population growth and the varying urban structures and development stages result in highly varied spatial and temporal vegetation patterns, particularly in the pan-Pacific region, which has some of the fastest urbanization rates globally. Yet spatially explicit temporal information on the amount and change of urban vegetation is rarely documented, particularly in less developed nations. Remote sensing offers an exceptional data source and a unique perspective to map urban vegetation and its change due to its consistency and ubiquitous nature. In this research, we assess the vegetation fractions of 25 cities across 12 pan-Pacific countries using annual gap-free Landsat surface reflectance products acquired from 1984 to 2012, using sub-pixel spectral unmixing approaches. Vegetation change trends were then analyzed using Mann-Kendall statistics and Theil-Sen slope estimators. The unmixing results successfully mapped urban vegetation for pixels located in urban parks, forested mountainous regions, as well as agricultural land (correlation coefficients ranging from 0.66 to 0.77). The greatest vegetation loss from 1984 to 2012 was found in Shanghai, Tianjin, and Dalian in China. In contrast, cities including Vancouver (Canada) and Seattle (USA) showed stable vegetation trends through time. Using temporal trend analysis, our results suggest that it is possible to reduce noise and outliers caused by phenological changes, particularly in cropland, using dense new Landsat time series approaches. We conclude that simple yet effective approaches of unmixing Landsat time series data for assessing spatial and temporal changes of urban vegetation at regional scales can provide
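
    The sub-pixel unmixing step can be sketched as a per-pixel non-negative least-squares problem with an approximate sum-to-one constraint on the endmember fractions; the endmember spectra and the weighting of the constraint are assumptions.

```python
# Sketch of per-pixel linear spectral unmixing with non-negativity and an
# (approximately enforced) sum-to-one constraint.
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, sum_weight=100.0):
    """pixel: (b,) reflectance; endmembers: (b, m) spectra, e.g. vegetation,
    impervious and soil columns. Returns m fractional abundances."""
    b, m = endmembers.shape
    A = np.vstack([endmembers, sum_weight * np.ones((1, m))])   # heavily weighted sum-to-one row
    y = np.concatenate([pixel, [sum_weight]])
    fractions, _ = nnls(A, y)
    return fractions
```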

  2. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    Science.gov (United States)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirements of autonomous orbit determination, this paper proposes a fast curve fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction, in order to achieve high-precision autonomous navigation. Firstly, exploiting the stable character of Earth ultraviolet radiance and using atmospheric radiative transfer modelling software, the paper simulates the Earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast, improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which both eliminates noise efficiently and extracts Earth ultraviolet limb features accurately. The Earth's centroid location in the simulated images is then estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate that the proposed method achieves sub-pixel Earth centroid location estimation and greatly enhances autonomous celestial navigation precision.
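
    The limb-based centroid estimate can be sketched with an algebraic least-squares circle fit (the Kasa fit) to the extracted edge points; it stands in for the paper's partial-limb least-squares fitting and returns sub-pixel centre coordinates.

```python
# Sketch of a least-squares circle fit to limb edge points, giving the disc
# centre (Earth centroid in the image) and radius.
import numpy as np

def fit_circle(xs, ys):
    """xs, ys: limb edge pixel coordinates. Returns (xc, yc, radius)."""
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    b = xs ** 2 + ys ** 2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return xc, yc, np.sqrt(c + xc ** 2 + yc ** 2)
```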

  3. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  4. Validation of an elastic registration technique to estimate anatomical lung modification in Non-Small-Cell Lung Cancer Tomotherapy

    Directory of Open Access Journals (Sweden)

    Persano Diego

    2011-04-01

    Full Text Available Background: The study of lung parenchyma anatomical modification is useful to estimate dose discrepancies during the radiation treatment of Non-Small-Cell Lung Cancer (NSCLC) patients. We propose and validate a method, based on free-form deformation and mutual information, to elastically register planning kVCT with daily MVCT images, to estimate lung parenchyma modification during Tomotherapy. Methods: We analyzed 15 registrations between the planning kVCT and 3 MVCT images for each of the 5 NSCLC patients. Image registration accuracy was evaluated by visual inspection and, quantitatively, by Correlation Coefficients (CC) and Target Registration Errors (TRE). Finally, a lung volume correspondence analysis was performed to specifically evaluate registration accuracy in the lungs. Results: Elastic registration was always satisfactory, both qualitatively and quantitatively: TRE after elastic registration (average value of 3.6 mm) remained comparable to and often smaller than the voxel resolution. Lung volume variations were well estimated by elastic registration (average volume and centroid errors of 1.78% and 0.87 mm, respectively). Conclusions: Our results demonstrate that this method is able to estimate lung deformations in thorax MVCT, with an accuracy within 3.6 mm, comparable to or smaller than the voxel dimension of the kVCT and MVCT images. It could be used to estimate lung parenchyma dose variations in thoracic Tomotherapy.

  5. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware-efficient neural flow estimator is described. The system is implemented using the switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network containing 96 synaptic weights which are updated using the LMS1 algorithm. An experimental chip has been designed that operates at 5 V...

  6. Sub-pixel spatial resolution wavefront phase imaging

    Science.gov (United States)

    Stahl, H. Philip (Inventor); Mooney, James T. (Inventor)

    2012-01-01

    A phase imaging method for an optical wavefront acquires a plurality of phase images of the optical wavefront using a phase imager. Each phase image is unique and is shifted with respect to another of the phase images by a known/controlled amount that is less than the size of the phase imager's pixels. The phase images are then combined to generate a single high-spatial resolution phase image of the optical wavefront.

  7. Simulation of Subpixel Atmospherically Degraded Target Detectability in Cluttered Scenes

    Science.gov (United States)

    2013-09-06

    Sub-pixel solid or gaseous target signatures are used to compute probability distributions of detection and false alarm; these are then converted into ROC curves.

  8. Physics-based Detection of Subpixel Targets in Hyperspectral Imagery

    Science.gov (United States)

    2007-01-01


  9. High resolution change estimation of soil moisture and its assimilation into a land surface model

    Science.gov (United States)

    Narayan, Ujjwal

    Near surface soil moisture plays an important role in hydrological processes including infiltration, evapotranspiration and runoff. These processes depend non-linearly on soil moisture and hence sub-pixel scale soil moisture variability characterization is important for accurate modeling of water and energy fluxes at the pixel scale. Microwave remote sensing has evolved as an attractive technique for global monitoring of near surface soil moisture. A radiative transfer model has been tested and validated for soil moisture retrieval from passive microwave remote sensing data under a full range of vegetation water content conditions. It was demonstrated that soil moisture retrieval errors of approximately 0.04 g/g gravimetric soil moisture are attainable with vegetation water content as high as 5 kg/m2. Recognizing the limitation of low spatial resolution associated with passive sensors, an algorithm that uses low resolution passive microwave (radiometer) and high resolution active microwave (radar) data to estimate soil moisture change at the spatial resolution of radar operation has been developed and applied to coincident Passive and Active L and S band (PALS) and Airborne Synthetic Aperture Radar (AIRSAR) datasets acquired during the Soil Moisture Experiments in 2002 (SMEX02) campaign with root mean square error of 10% and a 4 times enhancement in spatial resolution. The change estimation algorithm has also been used to estimate soil moisture change at 5 km resolution using AMSR-E soil moisture product (50 km) in conjunction with the TRMM-PR data (5 km) for a 3 month period demonstrating the possibility of high resolution soil moisture change estimation using satellite based data. Soil moisture change is closely related to precipitation and soil hydraulic properties. A simple assimilation framework has been implemented to investigate whether assimilation of surface layer soil moisture change observations into a hydrologic model will potentially improve it

  10. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Gilani, Syed Irtiza Ali

    2008-09-15

    correlation matrix is employed. This method is beneficial because it offers sub-pixel displacement estimation without interpolation, increased robustness to noise and limited computational complexity. Owing to all these advantages, the proposed technique is very suitable for the real-time implementation to solve the motion correction problem. (orig.)

  11. Estimating Mutual Information

    OpenAIRE

    Kraskov, A.; Stögbauer, H.; Grassberger, P.

    2003-01-01

    We present two classes of improved estimators for mutual information $M(X,Y)$, from samples of random points distributed according to some joint probability density $\mu(x,y)$. In contrast to conventional estimators based on binnings, they are based on entropy estimates from $k$-nearest neighbour distances. This means that they are data efficient (with $k=1$ we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have ...

  12. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  13. Unsupervised segmentation of heel-strike IMU data using rapid cluster estimation of wavelet features.

    Science.gov (United States)

    Yuwono, Mitchell; Su, Steven W; Moulton, Bruce D; Nguyen, Hung T

    2013-01-01

    When undertaking gait analysis, one of the most important factors to consider is heel-strike (HS). Signals from a waist-worn Inertial Measurement Unit (IMU) provide sufficient accelerometric and gyroscopic information for estimating gait parameters and identifying HS events. In this paper we propose a novel adaptive, unsupervised, and parameter-free identification method for the detection of HS events during gait episodes. Our proposed method allows the device to learn and adapt to the profile of the user without the need for supervision. The algorithm is completely parameter-free and requires no prior fine tuning. Autocorrelation features (ACF) of both the antero-posterior acceleration (aAP) and the medio-lateral acceleration (aML) are used to determine cadence episodes. The Discrete Wavelet Transform (DWT) features of signal peaks during cadence are extracted and clustered using Swarm Rapid Centroid Estimation (Swarm RCE). Left HS (LHS), right HS (RHS), and movement artifacts are clustered based on intra-cluster correlation. Initial pilot testing of the system on 8 subjects shows promising results, up to 84.3%±9.2% and 86.7%±6.9% average accuracy with 86.8%±9.2% and 88.9%±7.1% average precision for the segmentation of LHS and RHS, respectively.

  14. Real-Time Forecasting of Echo-Centroid Motion.

    Science.gov (United States)

    1979-01-01


  15. A Hybridized Centroid Technique for 3D Molodensky-Badekas ...

    African Journals Online (AJOL)

    Richannan


  16. Centroid-Based Document Classification Algorithms: Analysis & Experimental Results

    Science.gov (United States)

    2000-03-06


  17. Bayesian ISOLA: new tool for automated centroid moment tensor inversion

    Czech Academy of Sciences Publication Activity Database

    Vackář, J.; Burjánek, Jan; Gallovič, F.; Zahradník, J.; Clinton, J.

    2017-01-01

    Vol. 210, No. 2 (2017), pp. 693-705. ISSN 0956-540X. Institutional support: RVO:67985530. Keywords: inverse theory * waveform inversion * computational seismology * earthquake source observations * seismic noise. Subject RIV: DC - Seismology, Volcanology, Earth Structure. Impact factor: 2.414, year: 2016

  18. Hybridized centroid technique for 3D Molodensky-Badekas ...

    African Journals Online (AJOL)

    The results attained show that the Harmonic-Quadratic Mean produced reliable coordinate transformation results within the Ghana geodetic reference network and thus could serve as a practical alternative to the frequently used arithmetic mean. Keywords: Coordinate transformation, Molodensky-Badekas model, ...

  19. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...

  20. Adaptive Spectral Doppler Estimation

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2009-01-01

    In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence...

  1. Fast fundamental frequency estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2017-01-01

    Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the fundamental frequency, and the maximum likelihood (ML) estimator is the most accurate estimator in statistical terms. When the noise is assumed to be white and Gaussian, the ML estimator is identical to the non-linear least squares (NLS) estimator. Despite being optimal in a statistical sense, the NLS estimator has a high computational complexity. In this paper, we propose an algorithm for lowering this complexity significantly by showing that the NLS estimator can be computed efficiently by solving two Toeplitz-plus-Hankel systems of equations and by exploiting the recursive-in-order matrix structures...
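
    A sketch of the estimator being accelerated: for white Gaussian noise the NLS/ML estimate is well approximated by harmonic summation, i.e. picking the candidate fundamental frequency whose first few harmonics capture the most periodogram power. The grid, harmonic count and FFT size below are assumptions; the paper's fast Toeplitz-plus-Hankel solver is not reproduced here.

```python
# Sketch of fundamental frequency estimation by harmonic summation over a grid.
import numpy as np

def estimate_f0(x, fs, f0_grid, n_harm=5, nfft=2 ** 16):
    spec = np.abs(np.fft.rfft(x, nfft)) ** 2                   # periodogram
    scores = []
    for f0 in f0_grid:
        bins = np.round(f0 * np.arange(1, n_harm + 1) * nfft / fs).astype(int)
        bins = bins[bins < len(spec)]
        scores.append(spec[bins].sum())                        # power at the candidate's harmonics
    return f0_grid[int(np.argmax(scores))]
```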

  2. Coherence in quantum estimation

    Science.gov (United States)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space in two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.

  3. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.; Heemstra, F.J.

    1993-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  4. Estimation of Jump Tails

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Todorov, Victor

    We propose a new and flexible non-parametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple-to-implement set of estimating equations associated with the compensator for the jump measure, or its "intensity", that only utilizes...

  5. Estimation of vector velocity

    DEFF Research Database (Denmark)

    2000-01-01

    Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new estimator automatically compensates for the axial velocity when determining the transverse velocity by using fourth order moments rather than second order moments. The estimation is optimized by using a lag different from one in the estimation process, and noise artifacts are reduced by using averaging of RF samples. Further, compensation for the axial velocity can be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce spatial velocity dispersion.

  6. Transverse Spectral Velocity Estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2014-01-01

    array probe is used along with two different estimators based on the correlation of the received signal. They can estimate the velocity spectrum as a function of time, as for ordinary spectrograms, but they also work at a beam-to-flow angle of 90°. The approach is validated using simulations of pulsatile flow using the Womersley-Evans flow model. The relative bias of the mean estimated frequency is 13.6% and the mean relative standard deviation is 14.3% at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with an experimental scanner and a convex array transducer. A pump generated artificial femoral and carotid artery flow in the phantom. The estimated spectra degrade when the angle is different from 90°, but are usable down to 60° to 70°. Below this angle the traditional spectrum is best and should be used. The conventional approach can automatically be corrected...

  7. Fractional cointegration rank estimation

    DEFF Research Database (Denmark)

    Lasak, Katarzyna; Velasco, Carlos

    We consider cointegration rank estimation for a p-dimensional Fractional Vector Error Correction Model. We propose a new two-step procedure which allows testing for further long-run equilibrium relations with possibly different persistence levels. The first step consists in estimating the parameters of the model ... to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional...

  8. CHANNEL ESTIMATION TECHNIQUE

    DEFF Research Database (Denmark)

    2015-01-01

    A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.

  9. Comparison of ArcGIS and SAS Geostatistical Analyst to Estimate Population-Weighted Monthly Temperature for US Counties.

    Science.gov (United States)

    Xiaopeng, Qi; Liang, Wei; Barker, Laurie; Lekiachvili, Akaki; Xingyou, Zhang

    Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly or 30-day basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v 9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of models was assessed by comparing adjusted R2, mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias as compared to cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R2 range=0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both methods from ArcGIS and SAS are reliable for U.S. county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects.

  10. Methods for age estimation

    Directory of Open Access Journals (Sweden)

    D. Sümeyra Demirkıran

    2014-03-01

    Full Text Available The concept of age estimation plays an important role in both civil law and the regulation of criminal behaviors. In forensic medicine, age estimation is performed for individual requests as well as at the request of the court. This study aims to compile the methods of age estimation and to make recommendations for the solution of the problems encountered. In the radiological method, the epiphyseal lines of the bones and views of the teeth are used. In order to estimate age by comparing bone radiographs, the Greulich-Pyle Atlas (GPA), the Tanner-Whitehouse Atlas (TWA) and the “Adli Tıpta Yaş Tayini” (ATYT) books are used. Bone age is found to be on average 2 years older than chronologic age, especially in puberty, according to the forensic age estimations described in the ATYT book. For age estimation with teeth, the Demirjian method is used. Over time, different methods have been developed by modifying the Demirjian method; however, no fully accurate method has been found. Histopathological studies have been done on bone marrow cellularity and dermis cells, but no correlation was found between histopathological findings and chronologic age. Important ethical and legal issues arise with current age estimation methods, especially in the teenage period. Therefore it is necessary to prepare atlases of bone age compatible with our society by collecting the findings of the studies in Turkey. Another recommendation is to pay attention to age-raising court cases involving teenage women and to give special emphasis to birth and population records.

  11. Power system state estimation

    CERN Document Server

    Ahmad, Mukhtar

    2012-01-01

    State estimation is one of the most important functions in power system operation and control. This area is concerned with the overall monitoring, control, and contingency evaluation of power systems. It is mainly aimed at providing a reliable estimate of system voltages. State estimator information flows to control centers, where critical decisions are made concerning power system design and operations. This valuable resource provides thorough coverage of this area, helping professionals overcome challenges involving system quality, reliability, security, stability, and economy. Engineers are...

  12. Trawsfynydd Plutonium Estimate

    Energy Technology Data Exchange (ETDEWEB)

    Reid, Bruce D.; Gerlach, David C.; Heasler, Patrick G.; Livingston, J.

    2009-11-20

    This report serves to document an estimate of the cumulative plutonium production of the Trawsfynydd Unit II reactor (Traws II) over its operating life made using the Graphite Isotope Ratio Method (GIRM). The estimate of the plutonium production in Traws II provided in this report has been generated under blind conditions. In other words, the estimate of the Traws II plutonium production has been generated without knowledge of the plutonium production declared by the reactor operator (Nuclear Electric). The objective of this report is to demonstrate that the GIRM can be employed as an accurate tool to verify weapons materials production declarations.

  13. Adaptive warped kernel estimators

    OpenAIRE

    Chagny, Gaëlle

    2014-01-01

    In this work, we develop a method of adaptive nonparametric estimation, based on "warped" kernels. The aim is to estimate a real-valued function $s$ from a sample of random couples $(X,Y)$. We deal with transformed data $(\Phi(X),Y)$, with $\Phi$ a one-to-one function, to build a collection of kernel estimators. The data-driven bandwidth selection is done with a method inspired by Goldenshluger and Lepski (2011). The method makes it possible to handle various problems such as additive and multiplicative...

  14. Multidimensional kernel estimation

    CERN Document Server

    Milosevic, Vukasin

    2015-01-01

    Kernel estimation is one of the non-parametric methods used for estimation of a probability density function. Its first ROOT implementation, as part of the RooFit package, has one major issue: its evaluation time is extremely slow, making it almost unusable. The goal of this project was to create a new class (TKNDTree) which will follow the original idea of kernel estimation, greatly improve the evaluation time (using the TKTree class for storing the data and creating different user-controlled modes of evaluation) and add an interpolation option, for the 2D case, with the help of the new Delaunay2D class.

  15. Appropriate and Inappropriate Estimation Techniques

    OpenAIRE

    Sher, David

    2013-01-01

    Mode (also called MAP) estimation, mean estimation and median estimation are examined here to determine when they can be safely used to derive (posterior) cost minimizing estimates. (These are all Bayes procedures, using the mode, mean, or median of the posterior distribution.) It is found that modal estimation only returns cost minimizing estimates when the cost function is 0-1. If the cost function is a function of distance then mean estimation only returns cost minimizing estimates when the...
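
    As a numerical illustration of the point above (not taken from the paper), the sketch below draws a sample from a skewed posterior and checks which summary (mode, mean or median) minimises an expected 0-1, squared-distance or absolute-distance cost. The log-normal posterior, the tolerance used to relax the 0-1 cost, and all names are assumptions made for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        post = rng.lognormal(mean=0.0, sigma=0.75, size=100_000)  # skewed "posterior" sample

        # Candidate point estimates: crude mode from a histogram, plus mean and median.
        hist, edges = np.histogram(post, bins=200)
        mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
        mean, median = post.mean(), np.median(post)

        def expected_cost(estimate, sample, cost):
            """Monte Carlo estimate of the posterior expected cost of a point estimate."""
            return cost(np.abs(sample - estimate)).mean()

        costs = {
            "0-1 (within tolerance 0.05)": lambda d: (d > 0.05).astype(float),
            "squared distance": lambda d: d ** 2,
            "absolute distance": lambda d: d,
        }
        for name, cost in costs.items():
            scores = {"mode": expected_cost(mode, post, cost),
                      "mean": expected_cost(mean, post, cost),
                      "median": expected_cost(median, post, cost)}
            print(f"{name:28s} best estimator: {min(scores, key=scores.get)}")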

  16. Estimating abundance: Chapter 27

    Science.gov (United States)

    Royle, J. Andrew

    2016-01-01

    This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, such as removal sampling, covariate models, and distance sampling. These methods are regularly adopted for studies of reptiles, in order to estimate abundance from counts of marked individuals while accounting for imperfect detection. Thus, the chapter describes some classic closed population models for estimating abundance, with considerations for some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests some software for use in data analysis, such as the Windows-based program MARK, and provides an example of estimating abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).

  17. Cost function estimation

    DEFF Research Database (Denmark)

    Andersen, C K; Andersen, K; Kragh-Sørensen, P

    2000-01-01

    Statistical analysis of cost data is often difficult because of highly skewed data resulting from a few patients who incur high costs relative to the majority of patients. When the objective is to predict the cost for an individual patient, the literature suggests that one should choose a regression model based on the quality of its predictions. In exploring the econometric issues, the objective of this study was to estimate a cost function in order to estimate the annual health care cost of dementia. Using different models, health care costs were regressed on the degree of dementia, sex, age and other characteristics. Based on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs.
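
    A minimal sketch of a two-part cost model of the kind described above, assuming synthetic data and scikit-learn; the covariates, coefficients and the log-normal retransformation in the second part are illustrative choices, not the authors' specification.

        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        rng = np.random.default_rng(1)
        n = 5_000
        X = np.column_stack([rng.integers(0, 4, n),       # degree of dementia (0-3), made up
                             rng.integers(0, 2, n),       # sex
                             rng.normal(75, 8, n)])       # age
        p_any = 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * X[:, 0] + 0.02 * (X[:, 2] - 75.0))))
        has_cost = rng.random(n) < p_any
        cost = np.where(has_cost, np.exp(7.0 + 0.5 * X[:, 0] + rng.normal(0, 0.6, n)), 0.0)

        # Part 1: probability of incurring any cost (logistic regression).
        part1 = LogisticRegression(max_iter=1000).fit(X, has_cost)
        # Part 2: level of (log) cost among patients with positive costs.
        part2 = LinearRegression().fit(X[has_cost], np.log(cost[has_cost]))

        # Predicted cost = P(cost > 0) * E[cost | cost > 0], assuming log-normal errors.
        resid_var = np.var(np.log(cost[has_cost]) - part2.predict(X[has_cost]))
        predicted = part1.predict_proba(X)[:, 1] * np.exp(part2.predict(X) + 0.5 * resid_var)
        print("mean predicted annual cost:", round(float(predicted.mean()), 1))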

  18. Imperfect Channel State Estimation

    Directory of Open Access Journals (Sweden)

    Tao Qin

    2010-01-01

    The impact of imperfect channel state estimation is considered in a multiuser OFDM CR system. A simple back-off scheme is proposed, and simulation results are provided which show that the proposed scheme is very effective in mitigating the negative impact of channel estimation errors.

  19. Bridged Race Population Estimates

    Data.gov (United States)

    U.S. Department of Health & Human Services — Population estimates from "bridging" the 31 race categories used in Census 2000, as specified in the 1997 Office of Management and Budget (OMB) race and ethnicity...

  20. Estimation of food consumption

    Energy Technology Data Exchange (ETDEWEB)

    Callaway, J.M. Jr.

    1992-04-01

    The research reported in this document was conducted as a part of the Hanford Environmental Dose Reconstruction (HEDR) Project. The objective of the HEDR Project is to estimate the radiation doses that people could have received from operations at the Hanford Site. Information required to estimate these doses includes estimates of the amounts of potentially contaminated foods that individuals in the region consumed during the study period. In that general framework, the objective of the Food Consumption Task was to develop a capability to provide information about the parameters of the distribution(s) of daily food consumption for representative groups in the population for selected years during the study period. This report describes the methods and data used to estimate food consumption and presents the results developed for Phase I of the HEDR Project.

  1. Capital cost estimate

    Science.gov (United States)

    1975-01-01

    The capital cost estimate for the nuclear process heat source (NPHS) plant was made by: (1) using costs from the current commercial HTGR for electricity production as a base for items that are essentially the same and (2) development of new estimates for modified or new equipment that is specifically for the process heat application. Results are given in tabular form and cover the total investment required for each process temperature studied.

  2. Cost-Estimation Program

    Science.gov (United States)

    Cox, Brian

    1995-01-01

    COSTIT computer program estimates cost of electronic design by reading item-list file and file containing cost for each item. Accuracy of cost estimate based on accuracy of cost-list file. Written by use of AWK utility for Sun4-series computers running SunOS 4.x and IBM PC-series and compatible computers running MS-DOS. The Sun version (NPO-19587). PC version (NPO-19157).

  3. Estimating marginal external costs for road, rail and river transport in Colombia

    Directory of Open Access Journals (Sweden)

    Luis Gabriel Márquez Díaz

    2011-01-01

    Full Text Available This report presents the results of research regarding strategic freight transport network modelling in Colombia using external costs. The model uses sequential equilibrium between distribution and traffic assignment phases; it is national and inter-regional, involving strategic decision-making. The Colombian transport network consists of 27,469 km of roads, 11,257 km of navigable rivers, 2,192 km of railway lines and a set of centroid connectors for establishing a link with the zoning system (consisting of 70 internal areas and 8 external areas). Each link in the network involves internal costs (time and operation) and external costs (congestion, accidents, air pollution and CO2 emissions). Vehicle ownership costs were excluded from the internal cost analysis; costs such as noise, climate change and effects on the landscape were not studied among the external costs. Marginal costs for the network were estimated by two methods. First, it was assumed that an additional unit of demand did not affect equilibrium in the transport network, and marginal cost was then estimated as the sum of marginal costs over the links in the shortest path. The other approach assumed that an additional unit of demand changed the network equilibrium; marginal costs were then estimated by calculating the difference between the two equilibrium scenarios. The methods were applied to 7 selected routes covering the most important Colombian freight transport corridors. An average rate of 0.014 US$/ton/km was estimated for the external costs of highway transport, 0.000105 US$/ton/km for water transport and 0.001625 US$/ton/km for railroad transport (with environmental costs predominating, exceeding 90%).

  4. Operational Estimates of Surface Albedo, Vegetation Photosynthetic Activity and Surface Structure: An Overview of the GVM/SAI Activities

    Science.gov (United States)

    Verstraete, M. M.; Pinty, B.; Gobron, N.; Widlowski, J.

    2001-05-01

    The GVM Unit of the SAI derives reliable, accurate, quantitative information on the state and evolution of the biosphere from remote sensing data, using state of the art techniques. This information is provided to various services of the European Commission in support of the verification of compliance with national and international treaties, protocols and conventions, and to the scientific community in the framework of defined collaborations. Estimates of land surface albedo have been obtained from an analysis of monospectral but multiangular observations from the geostationary Meteosat platform. An analysis of these results has shown the continental scale impact of human activities (in particular biomass burning over large areas). An extension of this approach to the more advanced Meteosat Second Generation platform, to be launched in 2002, will yield more and better products. High performance yet very fast algorithms have been derived to optimally assess the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) of live green vegetation, which largely controls the productivity of plants and therefore their ability to sequester atmospheric carbon dioxide. These algorithms, typically used with multispectral but monoangular sensors such as AVHRR, SeaWiFS, or VEGETATION, have now been further developed to take advantage of the high spatial resolution or multiangular views offered by modern sensors such as the MISR on NASA's Terra platform. Recent advances in radiation transfer modeling and scientific collaborations with the cloud community have opened new vistas on the possibility of characterizing the structure of ecosystems at the sub-pixel scale on the basis of multiangular data, and may lead to improved land cover classifications and new applications.

  5. SAR-based Estimation of Glacial Extent and Velocity Fields on Isanotski Volcano, Aleutian Islands, Alaska

    Science.gov (United States)

    Sousa, D.; Lee, A.; Parker, O. P.; Pressler, Y.; Guo, S.; Osmanoglu, B.; Schmidt, C.

    2012-12-01

    Global studies show that Earth's glaciers are losing mass at increasing rates, creating a challenge for communities that rely on them as natural resources. Field observation of glacial environments is limited by cost and inaccessibility. Optical remote sensing is often precluded by cloud cover and seasonal darkness. Synthetic aperture radar (SAR) overcomes these obstacles by using microwave-frequency electromagnetic radiation to provide high resolution information on large spatial scales and in remote, atmospherically obscured environments. SAR is capable of penetrating clouds, operating in darkness, and discriminating between targets with ambiguous spectral signatures. This study evaluated the efficacy of two SAR Earth observation methods on small glaciers on Isanotski Volcano (Unimak Island, Aleutian Archipelago, USA). The local community on the island, the City of False Pass, relies on glacial melt for drinking water and hydropower. Two methods were used: (1) velocity field estimation based on Repeat Image Feature Tracking (RIFT) and (2) glacial boundary delineation based on interferometric coherence mapping. NASA Uninhabited Aerial Vehicle SAR (UAVSAR) single-polarized power images and JAXA Advanced Land Observing Satellite Phased Array type L-band SAR (ALOS PALSAR) single-look complex images were analyzed over the period 2008-2011. UAVSAR image pairs were coregistered to sub-pixel accuracy and processed with the Coregistration of Optically Sensed Images and Correlation (COSI-Corr) feature tracking module to derive glacial velocity field estimates. Maximum glacier velocities ranged from 28.9 meters/year to 58.3 meters/year. Glacial boundaries were determined from interferometric coherence of ALOS PALSAR data and subsequently refined with masking operations based on terrain slope and segment size. Accuracy was assessed against hand-digitized outlines from high resolution UAVSAR power images, yielding 83.0% producer's accuracy (errors of omission) and 86.1% user's accuracy (errors of commission).
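
    COSI-Corr uses its own correlators; the sketch below is only a generic illustration of the sub-pixel offset estimation idea underlying image pair coregistration and feature tracking: locate the peak of a cross-correlation surface and refine it with a parabolic fit. The array names and the self-test are made up for the example.

        import numpy as np
        from scipy.signal import fftconvolve

        def subpixel_offset(ref, target):
            """Estimate the (row, col) shift of `target` relative to `ref` to sub-pixel accuracy."""
            ref0 = ref - ref.mean()
            tgt0 = target - target.mean()
            corr = fftconvolve(tgt0, ref0[::-1, ::-1], mode="same")   # cross-correlation surface
            i, j = np.unravel_index(np.argmax(corr), corr.shape)

            def refine(c_m, c_0, c_p):
                # 1-D parabolic interpolation of the peak position.
                denom = c_m - 2 * c_0 + c_p
                return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

            di = refine(corr[i - 1, j], corr[i, j], corr[i + 1, j])
            dj = refine(corr[i, j - 1], corr[i, j], corr[i, j + 1])
            centre = np.array(corr.shape) // 2
            return (i + di - centre[0], j + dj - centre[1])

        # Tiny self-test: shift a random patch by (2, -3) pixels and recover the offset.
        rng = np.random.default_rng(2)
        img = rng.random((64, 64))
        shifted = np.roll(img, shift=(2, -3), axis=(0, 1))
        print(subpixel_offset(img, shifted))   # approximately (2.0, -3.0)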

  6. Estimating population size with correlated sampling unit estimates

    Science.gov (United States)

    David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey

    2003-01-01

    Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark–recapture or distance sampling methods occur...

  7. Contingent kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Scott Fortmann-Roe

    Full Text Available Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. Generally, in practice some form of error contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of errors. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, the bias arises because the size of the error can vary among points and some subset of points can be known to have smaller error than another subset or the form of the error may change among points. This paper proposes a "contingent kernel density estimation" technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to changing structure and magnitude of error. In this paper, equations for our contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked to demonstrate the utility of the method.
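
    The exact contingent-kernel form is defined in the paper; the sketch below only illustrates the general idea of a point-wise adapted kernel, widening each observation's bandwidth according to the size of the area it was reported from. The Gaussian kernel and the way the area size enters the bandwidth are assumptions made for the example.

        import numpy as np

        def contingent_kde(grid, points, base_bw, area_sizes):
            """Gaussian KDE whose per-point bandwidth grows with the reporting-area size."""
            # Per-point bandwidth: base smoothing plus a term reflecting positional error
            # from reporting a point at the centre of an area of the given size.
            bw = np.sqrt(base_bw ** 2 + (np.asarray(area_sizes) / 2.0) ** 2)
            diffs = (grid[:, None] - points[None, :]) / bw[None, :]
            kernels = np.exp(-0.5 * diffs ** 2) / (np.sqrt(2 * np.pi) * bw[None, :])
            return kernels.mean(axis=1)

        rng = np.random.default_rng(3)
        true_points = rng.normal(0.0, 1.0, 300)
        areas = rng.choice([0.1, 0.5, 2.0], size=300)            # varying reporting-area sizes
        reported = np.round(true_points / areas) * areas          # points snapped to area centres

        grid = np.linspace(-4, 4, 201)
        density = contingent_kde(grid, reported, base_bw=0.3, area_sizes=areas)
        print("integral of estimated density:",                   # close to 1
              round(float(density.sum() * (grid[1] - grid[0])), 3))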

  8. Transverse spectral velocity estimation.

    Science.gov (United States)

    Jensen, Jørgen

    2014-11-01

    A transverse oscillation (TO)-based method for calculating the velocity spectrum for fully transverse flow is described. Current methods yield the mean velocity at one position, whereas the new method reveals the transverse velocity spectrum as a function of time at one spatial location. A convex array probe is used along with two different estimators based on the correlation of the received signal. They can estimate the velocity spectrum as a function of time as for ordinary spectrograms, but they also work at a beam-to-flow angle of 90°. The approach is validated using simulations of pulsatile flow using the Womersley-Evans flow model. The relative bias of the mean estimated frequency is 13.6% and the mean relative standard deviation is 14.3% at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with an experimental scanner and a convex array transducer. A pump generated artificial femoral and carotid artery flow in the phantom. The estimated spectra degrade when the angle is different from 90°, but are usable down to 60° to 70°. Below this angle the traditional spectrum is best and should be used. The conventional approach can automatically be corrected for angles from 0° to 70° to give fully quantitative velocity spectra without operator intervention.

  9. Path-integral virial estimator based on the scaling of fluctuation coordinates: application to quantum clusters with fourth-order propagators.

    Science.gov (United States)

    Yamamoto, Takeshi M

    2005-09-08

    We first show that a simple scaling of fluctuation coordinates defined in terms of a given reference point gives the conventional virial estimator in discretized path integral, where different choices of the reference point lead to different forms of the estimator (e.g., centroid virial). The merit of this procedure is that it allows a finite-difference evaluation of the virial estimator with respect to temperature, which totally avoids the need of higher-order potential derivatives. We apply this procedure to energy and heat-capacity calculations of the (H2)22 and Ne13 clusters at low temperature using the fourth-order Takahashi-Imada [J. Phys. Soc. Jpn. 53, 3765 (1984)] and Suzuki [Phys. Lett. A 201, 425 (1995)] propagators. This type of calculation requires up to third-order potential derivatives if analytical virial estimators are used, but in practice only first-order derivatives suffice by virtue of the finite-difference scheme above. From the application to quantum clusters, we find that the fourth-order propagators do improve upon the primitive approximation, and that the choice of the reference point plays a vital role in reducing the variance of the virial estimator.

  10. Risk estimates for bone

    Energy Technology Data Exchange (ETDEWEB)

    Schlenker, R.A.

    1981-01-01

    The primary sources of information on the skeletal effects of internal emitters in humans are the US radium cases with occupational and medical exposures to 226/228Ra and the German patients injected with 224Ra primarily for treatment of ankylosing spondylitis and tuberculosis. During the past decade, dose-response data from both study populations have been used by committees, e.g., the BEIR committees, to estimate risks at low dose levels. NCRP Committee 57 and its task groups are now engaged in making risk estimates for internal emitters. This paper presents brief discussions of the radium data, the results of some new analyses and suggestions for expressing risk estimates in a form appropriate to radiation protection.

  11. Multi-Sensor Based Online Attitude Estimation and Stability Measurement of Articulated Heavy Vehicles

    Directory of Open Access Journals (Sweden)

    Qingyuan Zhu

    2018-01-01

    Full Text Available Articulated wheel loaders used in the construction industry are heavy vehicles and have poor stability and a high rate of accidents because of the unpredictable changes of their body posture, mass and centroid position in complex operation environments. This paper presents a novel distributed multi-sensor system for real-time attitude estimation and stability measurement of articulated wheel loaders to improve their safety and stability. Four attitude and heading reference systems (AHRS) are constructed using micro-electro-mechanical system (MEMS) sensors, and installed on the front body, rear body, rear axis and boom of an articulated wheel loader to detect its attitude. A complementary filtering algorithm is deployed for sensor data fusion in the system so that the steady state margin angle (SSMA) can be measured in real time and used as the judge index of rollover stability. Experiments are conducted on a prototype wheel loader, and results show that the proposed multi-sensor system is able to detect potential unstable states of an articulated wheel loader in real-time and with high accuracy.
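
    The paper's filter fuses data from four AHRS units; the sketch below is only a one-axis illustration of the complementary-filter idea it relies on, blending an integrated gyro rate with an accelerometer-derived tilt angle. The gain, the synthetic sensor model and all names are assumptions made for the example.

        import numpy as np

        def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
            """Fuse gyro rate (rad/s) and accelerometer tilt angle (rad) into one attitude angle."""
            angle = accel_angle[0]
            out = []
            for w, a in zip(gyro_rate, accel_angle):
                # High-pass the (drifting) integrated gyro, low-pass the (noisy) accelerometer.
                angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
                out.append(angle)
            return np.array(out)

        # Synthetic test: slow 0.1 rad/s pitch motion, gyro bias, accelerometer noise.
        rng = np.random.default_rng(4)
        dt, n = 0.01, 2000
        true_angle = 0.1 * np.arange(n) * dt
        gyro = np.full(n, 0.1) + 0.02 + rng.normal(0, 0.01, n)   # rate with bias + noise
        accel = true_angle + rng.normal(0, 0.05, n)              # noisy tilt from accelerometer
        est = complementary_filter(gyro, accel, dt)
        print("RMS error [rad]:", round(float(np.sqrt(np.mean((est - true_angle) ** 2))), 4))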

  12. Digital Quantum Estimation

    Science.gov (United States)

    Hassani, Majid; Macchiavello, Chiara; Maccone, Lorenzo

    2017-11-01

    Quantum metrology calculates the ultimate precision of all estimation strategies, measuring what is their root-mean-square error (RMSE) and their Fisher information. Here, instead, we ask how many bits of the parameter we can recover; namely, we derive an information-theoretic quantum metrology. In this setting, we redefine "Heisenberg bound" and "standard quantum limit" (the usual benchmarks in the quantum estimation theory) and show that the former can be attained only by sequential strategies or parallel strategies that employ entanglement among probes, whereas parallel-separable strategies are limited by the latter. We highlight the differences between this setting and the RMSE-based one.

  13. Foundations of estimation theory

    CERN Document Server

    Kubacek, L

    1988-01-01

    The application of estimation theory renders the processing of experimental results both rational and effective, and thus helps not only to make our knowledge more precise but to determine the measure of its reliability. As a consequence, estimation theory is indispensable in the analysis of the measuring processes and of experiments in general.The knowledge necessary for studying this book encompasses the disciplines of probability and mathematical statistics as studied in the third or fourth year at university. For readers interested in applications, comparatively detailed chapters

  14. An Improved Fst Estimator

    OpenAIRE

    Chen, Guanjie; Yuan, Ao; Shriner, Daniel; Tekola-Ayele, Fasil; Zhou, Jie; Amy R Bentley; Zhou, Yanxun; Wang, Chuntao; Newport, Melanie J; Adeyemo, Adebowale; Charles N Rotimi

    2015-01-01

    The fixation index Fst plays a central role in ecological and evolutionary genetic studies. The estimators of Wright (F̂st1), Weir and Cockerham (F̂st2), and Hudson et al. (F̂st3) are widely used to measure genetic differences among different populations, but all have limitations. We propose a minimum variance estimator F̂stm using F̂st1 and F̂st2. We tested F̂stm in simulations and applied it to 120 unrelated East African individuals from Ethiopia and 11 s...
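
    The weighting used for F̂stm is defined in the paper; shown below is only the generic minimum-variance combination of two correlated, unbiased estimators that such a construction builds on, with simulated per-locus values standing in for real Fst estimates. The data and the weight formula (basic variance algebra) are illustrative, not the authors' derivation.

        import numpy as np

        rng = np.random.default_rng(5)
        true_fst, n_loci = 0.05, 2000
        # Simulated per-locus estimates from two correlated, unbiased estimators.
        noise = rng.multivariate_normal([0, 0], [[0.004, 0.001], [0.001, 0.002]], size=n_loci)
        fst1 = true_fst + noise[:, 0]
        fst2 = true_fst + noise[:, 1]

        # Weight minimising Var(w*fst1 + (1-w)*fst2) for two correlated estimators.
        v1, v2 = fst1.var(ddof=1), fst2.var(ddof=1)
        cov = np.cov(fst1, fst2, ddof=1)[0, 1]
        w = (v2 - cov) / (v1 + v2 - 2 * cov)
        fst_m = w * fst1 + (1 - w) * fst2

        for name, est in [("fst1", fst1), ("fst2", fst2), ("combined", fst_m)]:
            print(f"{name:8s} mean={est.mean():.4f}  variance={est.var(ddof=1):.5f}")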

  15. Distribution load estimation - DLE

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A. [VTT Energy, Espoo (Finland)

    1996-12-31

    The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems

  16. Generalized estimating equations

    CERN Document Server

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields.Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th

  17. On Stein's unbiased risk estimate for reduced rank estimators

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2018-01-01

    Stein's unbiased risk estimate (SURE) is considered for matrix valued observables with low rank means. It is shown that SURE is applicable to a class of spectral function estimators including the reduced rank estimator.

  18. Iraqi Civilian Casualties Estimates

    Science.gov (United States)

    2008-03-13

    nonprofit organizations, or through statements made by officials to the press. Because these estimates are based on varying time periods and have been created using differing methodologies, readers should exercise caution when using these statistics and should look on them as guideposts rather than as statements of

  19. Biological dose estimation

    African Journals Online (AJOL)

    ... to this effect was found in at least 3 cases using biological dosimetric criteria, proving the ... The classification system described by Savage [3] was used to determine the ... [Table I: distance from radiation source, details of cytogenetic analysis, and biological and physical dose estimations.]

  20. On Gnostical Estimates

    Czech Academy of Sciences Publication Activity Database

    Fabián, Zdeněk

    2017-01-01

    Roč. 56, č. 2 (2017), s. 125-132 ISSN 0973-1377 Institutional support: RVO:67985807 Keywords : gnostic theory * statistics * robust estimates Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability http://www.ceser.in/ceserp/index.php/ijamas/article/view/4707

  1. Estimating Gender Wage Gaps

    Science.gov (United States)

    McDonald, Judith A.; Thornton, Robert J.

    2011-01-01

    Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…

  2. An Estimated Income Scale.

    Science.gov (United States)

    Nicholson, Everard

    The decision to develop an estimated income scale arose from a wish to prove or disprove the statement that colleges like Brown University may be headed toward a situation where the student body will consist of the rich and the poor, the traditional group of middle class having been eliminated. As the research proceeded, it became evident that an…

  3. Estimating Thermoelectric Water Use

    Science.gov (United States)

    Hutson, S. S.

    2012-12-01

    In 2009, the Government Accountability Office recommended that the U.S. Geological Survey (USGS) and Department of Energy-Energy Information Administration (DOE-EIA) jointly improve their thermoelectric water-use estimates. Since then, the annual mandatory reporting forms returned by powerplant operators to DOE-EIA have been revised twice to improve the water data. At the same time, the USGS began improving estimation of withdrawal and consumption. Because of the variation in amount and quality of water-use data across powerplants, the USGS adopted a hierarchy of methods for estimating water withdrawal and consumptive use for the approximately 1,300 water-using powerplants in the thermoelectric sector. About 800 of these powerplants have generation and cooling data, and the remaining 500 have generation data only, or sparse data. The preferred method is to accept DOE-EIA data following validation. This is the traditional USGS method and the best method if all operators follow best practices for measurement and reporting. However, in 2010, fewer than 200 powerplants reported thermodynamically realistic values of both withdrawal and consumption. Secondly, water use was estimated using linked heat and water budgets for the first group of 800 plants, and for some of the other 500 powerplants where data were sufficient for at least partial modeling using plant characteristics, electric generation, and fuel use. Thermodynamics, environmental conditions, and characteristics of the plant and cooling system constrain both the amount of heat discharged to the environment and the share of this heat that drives evaporation. Heat and water budgets were used to define reasonable estimates of withdrawal and consumption, including likely upper and lower thermodynamic limits. These results were used to validate the reported values at the 800 plants with water-use data, and reported values were replaced by budget estimates at most of these plants. Thirdly, at plants without valid

  4. Triangulation of the Source of Electromagnetic Pulses From Magnetometers in Peru, Leading to Estimation of the Areas of Possible Future Rupture Zones

    Science.gov (United States)

    Heraud, J. A.; Centa, V.; Vilchez, N.; Menendez, D.

    2014-12-01

    In a past presentation, a description was made of the process that led to the determination of the azimuth of arrival of electromagnetic pulses sensed by magnetometers of the Peru-Magneto network. Through triangulation using information from nearby stations in the network, the source of pressure, identified by the motion of positive charges produced locally and leading to currents, has been located. Such mechanical pressure points have been processed in a pseudo-mechanical analogue of the centroid, called the EM-centroid, as the source of pre-earthquake stress that leads to rupture and possibly to earthquakes, when the ruptures can be sensed in a seismic or cause-effect way. Thus, for certain areas of Peru, like the central coast or the south, we have been able to detect 8 EQs, and many more are tectonically related to possibly connected EM-pulse sources, in less than a one-year time period. With the above-described scheme, the point has been reached where, for some areas in Central and Southern Peru under certain conditions, "when" and "where" an earthquake will occur can be known most of the time. The possibility of determining the magnitude is not so straightforward, since not all EM pulses end up in a rupture and, besides, the magnitude of the seismic event depends on the extension of the rupture area, and this depends on the strain, the rock structure and the depth, not only on the stress. Estimates of the areas and connected rupture possibilities are analyzed from magnetometers donated through the collaboration of Quakefinder, and the results will be shown in videos as animated presentations of occurrence events in time, for several earthquakes in 2013 and 2014.

  5. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  6. Robust Wave Resource Estimation

    DEFF Research Database (Denmark)

    Lavelle, John; Kofoed, Jens Peter

    2013-01-01

    An assessment of the wave energy resource at the location of the Danish Wave Energy test Centre (DanWEC) is presented in this paper. The Wave Energy Converter (WEC) test centre is located at Hanstholm in the North West of Denmark. Information about the long term wave statistics of the resource is necessary for WEC developers, both to optimise the WEC for the site and to estimate its average yearly power production using a power matrix. The wave height and wave period sea state parameters are commonly characterized with a bivariate histogram. This paper presents bivariate histograms and kernel density estimates of the PDF as a function both of Hm0 and Tp, and of Hm0 and T0,2, together with the mean wave power per unit crest length, Pw, as a function of Hm0 and T0,2. The wave elevation parameters, from which the wave parameters are calculated, are filtered to correct or remove spurious data.
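
    A small sketch of the kind of sea-state summary described above: synthetic (Hm0, T0,2) pairs are binned into a bivariate histogram and a deep-water estimate of the wave power per unit crest length is evaluated. The synthetic data, the fixed ratio between the energy period and T0,2, and the deep-water assumption are all illustrative choices, not DanWEC measurements.

        import numpy as np

        RHO, G = 1025.0, 9.81  # sea-water density [kg/m^3], gravity [m/s^2]

        def wave_power(hm0, te):
            """Deep-water wave power per unit crest length [kW/m]."""
            return RHO * G ** 2 * hm0 ** 2 * te / (64 * np.pi) / 1000.0

        rng = np.random.default_rng(6)
        hm0 = rng.gamma(shape=2.0, scale=0.8, size=10_000)             # synthetic Hm0 values [m]
        t02 = 3.0 + 1.5 * np.sqrt(hm0) + rng.normal(0, 0.4, 10_000)    # synthetic T0,2 values [s]
        te = 1.2 * t02   # assumed nominal ratio between the energy period and T0,2

        # Bivariate histogram (scatter table) of the sea states.
        h_edges = np.arange(0.0, 8.5, 0.5)
        t_edges = np.arange(2.0, 12.5, 0.5)
        hist, _, _ = np.histogram2d(hm0, t02, bins=[h_edges, t_edges])

        print("most frequent sea-state bin (Hm0, T0,2 indices):",
              np.unravel_index(hist.argmax(), hist.shape))
        print("mean wave power [kW/m]:", round(float(wave_power(hm0, te).mean()), 2))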

  7. Distribution load estimation (DLE)

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A.; Lehtonen, M. [VTT Energy, Espoo (Finland)

    1998-08-01

    The load research has produced customer class load models to convert the customers' annual energy consumption to hourly load values. The reliability of load models applied from a nation-wide sample is limited in any specific network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to find improvements to the load models or, in general, improvements to the load estimates. In Distribution Load Estimation (DLE) the measurements from the network are utilized to improve the customer class load models. The results of DLE will be new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented

  8. Estimation and inferential statistics

    CERN Document Server

    Sahu, Pradip Kumar; Das, Ajit Kumar

    2015-01-01

    This book focuses on the meaning of statistical inference and estimation. Statistical inference is concerned with the problems of estimation of population parameters and testing hypotheses. Primarily aimed at undergraduate and postgraduate students of statistics, the book is also useful to professionals and researchers in statistical, medical, social and other disciplines. It discusses current methodological techniques used in statistics and related interdisciplinary areas. Every concept is supported with relevant research examples to help readers to find the most suitable application. Statistical tools have been presented by using real-life examples, removing the “fear factor” usually associated with this complex subject. The book will help readers to discover diverse perspectives of statistical theory followed by relevant worked-out examples. Keeping in mind the needs of readers, as well as constantly changing scenarios, the material is presented in an easy-to-understand form.

  9. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  10. Estimating Venezuela's Latent Inflation

    OpenAIRE

    Juan Carlos Bencomo; Hugo J. Montesinos; Hugo M. Montesinos; Jose Roberto Rondo

    2011-01-01

    Percent variation of the consumer price index (CPI) is the inflation indicator most widely used. This indicator, however, has some drawbacks. In addition to measurement errors of the CPI, there is a problem of incongruence between the definition of inflation as a sustained and generalized increase of prices and the traditional measure associated with the CPI. We use data from 1991 to 2005 to estimate a complementary indicator for Venezuela, the highest inflation country in Latin America. Late...

  11. Airborne Crowd Density Estimation

    Science.gov (United States)

    Meynberg, O.; Kuschk, G.

    2013-10-01

    This paper proposes a new method for estimating human crowd densities from aerial imagery. Applications benefiting from an accurate crowd monitoring system are mainly found in the security sector. Normally crowd density estimation is done through in-situ camera systems mounted on high locations although this is not appropriate in case of very large crowds with thousands of people. Using airborne camera systems in these scenarios is a new research topic. Our method uses a preliminary filtering of the whole image space by suitable and fast interest point detection resulting in a number of image regions, possibly containing human crowds. Validation of these candidates is done by transforming the corresponding image patches into a low-dimensional and discriminative feature space and classifying the results using a support vector machine (SVM). The feature space is spanned by texture features computed by applying a Gabor filter bank with varying scale and orientation to the image patches. For evaluation, we use 5 different image datasets acquired by the 3K+ aerial camera system of the German Aerospace Center during real mass events like concerts or football games. To evaluate the robustness and generality of our method, these datasets are taken from different flight heights between 800 m and 1500 m above ground (keeping a fixed focal length) and varying daylight and shadow conditions. The results of our crowd density estimation are evaluated against a reference data set obtained by manually labeling tens of thousands individual persons in the corresponding datasets and show that our method is able to estimate human crowd densities in challenging realistic scenarios.
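
    A compressed sketch of the classification stage described above, assuming scikit-image and scikit-learn: Gabor-filter responses at a few scales and orientations are reduced to summary statistics per image patch and fed to an SVM. The placeholder patches and all parameter values are assumptions for the example, not the 3K+ pipeline.

        import numpy as np
        from skimage.filters import gabor
        from sklearn.svm import SVC

        def gabor_features(patch, freqs=(0.1, 0.2, 0.3),
                           thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
            """Mean and variance of Gabor magnitude responses over a small filter bank."""
            feats = []
            for f in freqs:
                for t in thetas:
                    real, imag = gabor(patch, frequency=f, theta=t)
                    mag = np.hypot(real, imag)
                    feats.extend([mag.mean(), mag.var()])
            return np.array(feats)

        # Placeholder data: "crowd" patches are high-frequency texture, "background" is smooth.
        rng = np.random.default_rng(7)
        crowd = [rng.random((32, 32)) for _ in range(40)]
        background = [np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
                      + rng.normal(0, 0.02, (32, 32)) for _ in range(40)]
        X = np.array([gabor_features(p) for p in crowd + background])
        y = np.array([1] * 40 + [0] * 40)

        clf = SVC(kernel="rbf").fit(X[::2], y[::2])          # train on every other patch
        print("held-out accuracy:", clf.score(X[1::2], y[1::2]))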

  12. Estimating directional epistasis

    Science.gov (United States)

    Le Rouzic, Arnaud

    2014-01-01

    Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily-relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions—a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis from a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences. PMID:25071828

  13. Estimating Spectra from Photometry

    Science.gov (United States)

    Bryce Kalmbach, J.; Connolly, Andrew J.

    2017-12-01

    Measuring the physical properties of galaxies such as redshift frequently requires the use of spectral energy distributions (SEDs). SED template sets are, however, often small in number and cover limited portions of photometric color space. Here we present a new method to estimate SEDs as a function of color from a small training set of template SEDs. We first cover the mathematical background behind the technique before demonstrating our ability to reconstruct spectra based upon colors and then compare our results to other common interpolation and extrapolation methods. When the photometric filters and spectra overlap, we show that the error in the estimated spectra is reduced by more than 65% compared to the more commonly used techniques. We also show an expansion of the method to wavelengths beyond the range of the photometric filters. Finally, we demonstrate the usefulness of our technique by generating 50 additional SED templates from an original set of 10 and by applying the new set to photometric redshift estimation. We are able to reduce the photometric redshift standard deviation by at least 22.0% and the outlier-rejected bias by over 86.2% compared to the original set for z ≤ 3.

  14. Temperature estimation with ultrasound

    Science.gov (United States)

    Daniels, Matthew

    Hepatocellular carcinoma is the fastest growing type of cancer in the United States. In addition, the survival rate after one year is approximately zero without treatment. In many instances, patients with hepatocellular carcinoma may not be suitable candidates for the primary treatment options, i.e. surgical resection or liver transplantation. This has led to the development of minimally invasive therapies focused on destroying hepatocellular carcinoma by thermal or chemical methods. The focus of this dissertation is on the development of ultrasound-based image-guided monitoring options for minimally invasive therapies such as radiofrequency ablation. Ultrasound-based temperature imaging relies on relating the gradient of locally estimated tissue displacements to a temperature change. First, a realistic Finite Element Analysis/ultrasound simulation of ablation was developed. This allowed evaluation of the ability of ultrasound-based temperature estimation algorithms to track temperatures for three different ablation scenarios in the liver. It was found that 2-Dimensional block matching and a 6 second time step was able to accurately track the temperature over a 12 minute ablation procedure. Next, a tissue-mimicking phantom was constructed to determine the accuracy of the temperature estimation method by comparing estimated temperatures to that measured using invasive fiber-optic temperature probes. The 2-Dimensional block matching was able to track the temperature accurately over the entire 8 minute heating procedure in the tissue-mimicking phantom. Finally, two separate in-vivo experiments were performed. The first experiment examined the ability of our algorithm to track frame-to-frame displacements when external motion due to respiration and the cardiac cycle were considered. It was determined that a frame rate between 13 frames per second and 33 frames per second was sufficient to track frame-to-frame displacements between respiratory cycles. The second experiment examined

  15. Mixtures Estimation and Applications

    CERN Document Server

    Mengersen, Kerrie; Titterington, Mike

    2011-01-01

    This book uses the EM (expectation maximization) algorithm to simultaneously estimate the missing data and unknown parameter(s) associated with a data set. The parameters describe the component distributions of the mixture; the distributions may be continuous or discrete. The editors provide a complete account of the applications, mathematical structure and statistical analysis of finite mixture distributions along with MCMC computational methods, together with a range of detailed discussions covering the applications of the methods and features chapters from the leading experts on the subject
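
    As a compact illustration of the EM idea the book is built around (not code from the book), the sketch below fits a two-component, one-dimensional Gaussian mixture: the E-step computes responsibilities for the missing component labels, the M-step re-estimates the parameters. The data and initial values are arbitrary.

        import numpy as np

        rng = np.random.default_rng(8)
        data = np.concatenate([rng.normal(-2.0, 0.7, 400), rng.normal(1.5, 1.0, 600)])

        def normal_pdf(x, mu, sigma):
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        # Initial guesses for weights, means, standard deviations.
        w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

        for _ in range(200):
            # E-step: posterior probability (responsibility) of each component for each point.
            r = w[None, :] * normal_pdf(data[:, None], mu[None, :], sigma[None, :])
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters from the responsibility-weighted data.
            nk = r.sum(axis=0)
            w = nk / len(data)
            mu = (r * data[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((r * (data[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk)

        print("weights:", w.round(3), "means:", mu.round(3), "sigmas:", sigma.round(3))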

  16. Estimating Gear Teeth Stiffness

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2013-01-01

    The estimation of gear stiffness is important for determining the load distribution between the gear teeth when two sets of teeth are in contact. Two factors have a major influence on the stiffness: firstly the boundary condition through the gear rim size included in the stiffness calculation, and secondly the size of the contact. In the FE calculation the true gear tooth root profile is applied. The meshing stiffnesses of gears are highly non-linear; it is however found that the stiffness of an individual tooth can be expressed in a linear form assuming that the contact length is constant.

  17. Parameter estimation through ignorance.

    Science.gov (United States)

    Du, Hailiang; Smith, Leonard A

    2012-07-01

    Dynamical modeling lies at the heart of our understanding of physical systems. Its role in science is deeper than mere operational forecasting, in that it allows us to evaluate the adequacy of the mathematical structure of our models. Despite the importance of model parameters, there is no general method of parameter estimation outside linear systems. A relatively simple method of parameter estimation for nonlinear systems is introduced, based on variations in the accuracy of probability forecasts. It is illustrated on the logistic map, the Henon map, and the 12-dimensional Lorenz96 flow, and its ability to outperform linear least squares in these systems is explored at various noise levels and sampling rates. As expected, it is more effective when the forecast error distributions are non-Gaussian. The method selects parameter values by minimizing a proper, local skill score for continuous probability forecasts as a function of the parameter values. This approach is easier to implement in practice than alternative nonlinear methods based on the geometry of attractors or the ability of the model to shadow the observations. Direct measures of inadequacy in the model, the "implied ignorance," and the information deficit are introduced.
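
    A stripped-down sketch in the spirit of the method described above (not the authors' implementation): for each candidate parameter of the logistic map, an ensemble of perturbed states produces a one-step probability forecast, and the parameter is scored by the ignorance (negative log likelihood) of the noisy observations under a kernel-dressed ensemble. The noise level, ensemble size and kernel width are assumptions for the example.

        import numpy as np

        rng = np.random.default_rng(9)
        TRUE_A, NOISE = 3.7, 0.01

        def logistic(x, a):
            return a * x * (1.0 - x)

        # Noisy observations from the "true" system.
        x, obs = 0.3, []
        for _ in range(200):
            x = logistic(x, TRUE_A)
            obs.append(x + rng.normal(0, NOISE))
        obs = np.array(obs)

        def ignorance(a, obs, n_ens=64, kernel_sd=0.02):
            """Mean negative log-likelihood of one-step probability forecasts under parameter a."""
            total = 0.0
            for t in range(len(obs) - 1):
                ens = logistic(obs[t] + rng.normal(0, NOISE, n_ens), a)   # forecast ensemble
                # Kernel-dressed ensemble density evaluated at the next observation.
                dens = np.exp(-0.5 * ((obs[t + 1] - ens) / kernel_sd) ** 2)
                dens = dens.mean() / (kernel_sd * np.sqrt(2 * np.pi))
                total -= np.log(dens + 1e-300)
            return total / (len(obs) - 1)

        candidates = np.linspace(3.5, 3.9, 41)
        scores = [ignorance(a, obs) for a in candidates]
        print("estimated a:", candidates[int(np.argmin(scores))])  # should be close to 3.7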

  18. Tibio-femoral joint constraints for bone pose estimation during movement using multi-body optimization.

    Science.gov (United States)

    Bergamini, E; Pillet, H; Hausselle, J; Thoreux, P; Guerard, S; Camomilla, V; Cappozzo, A; Skalli, W

    2011-04-01

    When using skin markers and stereophotogrammetry for movement analysis, bone pose estimation may be performed using multi-body optimization with the intent of reducing the effect of soft tissue artefacts. When the joint of interest is the knee, improvement of this approach requires defining subject-specific relevant kinematic constraints. The aim of this work was to provide these constraints in the form of plausible values for the distances between origin and insertion of the main ligaments (ligament lengths), during loaded healthy knee flexion, taking into account the indeterminacies associated with landmark identification during anatomical calibration. Ligament attachment sites were identified through virtual palpation on digital bone templates. Attachments sites were estimated for six knee specimens by matching the femur and tibia templates to low-dose stereoradiography images. Movement data were obtained using stereophotogrammetry and pin markers. Relevant ligament lengths for the anterior and posterior cruciate, lateral collateral, and deep and superficial bundles of the medial collateral ligaments (ACL, PCL, LCL, MCLdeep, MCLsup) were calculated. The effect of landmark identification variability was evaluated performing a Monte Carlo simulation on the coordinates of the origin-insertion centroids. The ACL and LCL lengths were found to decrease, and the MCLdeep length to increase significantly during flexion, while variations in PCL and MCLsup length was concealed by the experimental indeterminacy. An analytical model is given that provides subject-specific plausible ligament length variations as functions of the knee flexion angle and that can be incorporated in a multi-body optimization procedure. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Top-of-the-Atmosphere Shortwave Flux Estimation from UV Observations: An Empirical Approach

    Science.gov (United States)

    Gupta, P.; Joiner, Joanna; Vasilkov, A.; Bhartia, P. K.; da Silva, Arlindo

    2012-01-01

    Measurements of top of the atmosphere (TOA) radiation are essential to the understanding of Earth's climate. Clouds, aerosols, and ozone (0,) are among the most important agents impacting the Earth's short-wave (SW) radiation budget. There are several sensors in orbit that provide independent information related to the Earth's SW radiation budget. Having coincident information from these sensors is important for understanding their potential contributions. The A-train constellation of satellites provides a unique opportunity to analyze near-simultaneous data from several of these sensors. They include the Ozone Monitoring Instrument (OMI), on the NASA Aura satellite, that makes TOA hyper-spectral measurements from ultraviolet (UV) to visible wavelengths, and Clouds and the Earth's Radiant Energy System (CERES) instrument, on the NASA Aqua satellite, that makes broadband measurements in both the long- and short-wave. OMI measurements have been successfully utilized to derive the information on trace gases (e.g., 0 1, NO" and SO,), clouds, and absorbing aerosols. TOA SW fluxes are estimated using a combination of data from CERES and the Aqua MODerate-resolution Imaging Spectroradiometer (MODIS). In this paper, OMI retrievals of cloud/aerosol parameters and 0 1 have been collocated with CERES TOA SW flux retrievals. We use this collocated data to develop a neural network that estimates TOA shortwave flux globally over ocean using data from OMI and meteorological analyses. This input data include the effective cloud fraction, cloud optical centroid pressure (OCP), total-column 0" and sun-satellite viewing geometry from OMI as well as wind speed and water vapor from the Goddard Earth Observing System 5 Modern Era Retrospective-analysis for Research and Applications (GEOS-5 MERRA) along with a climatology of chlorophyll content. We train the neural network using a subset of CERES retrievals of TOA SW flux as the target output (truth) and withhold a different subset of

  20. Comparison of variance estimators for meta-analysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  1. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  2. Wind turbine state estimation

    DEFF Research Database (Denmark)

    Knudsen, Torben

    2014-01-01

    Dynamic inflow is an effect which is normally not included in the models used for wind turbine control design. Therefore, potential improvement from including this effect exists. The objective in this project is to improve the methods previously developed for this and especially to verify the results using full-scale wind turbine data. The previously developed methods were based on extended Kalman filtering. This method has several drawbacks compared to unscented Kalman filtering, which has therefore been developed. The unscented Kalman filter was first tested on linear and non-linear test cases, which was successful. Then estimation of the wind turbine state, including dynamic inflow, was tested on a simulated NREL 5MW turbine. This worked perfectly with wind speeds from low to nominal wind speed, as the output prediction errors were white. In high wind where the pitch actuator

  3. Electric propulsion cost estimation

    Science.gov (United States)

    Palaszewski, B. A.

    1985-01-01

    A parametric cost model for mercury ion propulsion modules is presented. A detailed work breakdown structure is included. Cost estimating relationships were developed for the individual subsystems and the nonhardware items (systems engineering, software, etc.). Solar array and power processor unit (PPU) costs are the significant cost drivers. Simplification of both of these subsystems through applications of advanced technology (lightweight solar arrays and high-efficiency, self-radiating PPUs) can reduce costs. Comparison of the performance and cost of several chemical propulsion systems with the Hg ion module are also presented. For outer-planet missions, advanced solar electric propulsion (ASEP) trip times and O2/H2 propulsion trip times are comparable. A three-year trip time savings over the baselined NTO/MMH propulsion system is possible with ASEP.

  4. Remote remanence estimation (RRE)

    Science.gov (United States)

    Pratt, David A.; McKenzie, K. Blair; White, Tony S.

    2014-09-01

    The remote determination of magnetic remanence in rocks is a method that has largely been ignored because of the ambiguity associated with the estimation of both the Koenigsberger ratio and remanent magnetisation direction. Our research shows that the resultant magnetisation direction can be derived directly through inversion of magnetic data for an isolated magnetic anomaly from a compact magnetic source. The resultant magnetisation direction is a property of the target magnetic rocks and a robust inversion parameter. The departure angle of the resultant magnetisation vector from that of the inducing magnetic field is an important indicator of the existence of remanent magnetisation and the inversion process can detect departures that are not easily detected by visual inspection. This departure angle is called the apparent resultant rotation angle or ARRA. The induced field vector, remanent magnetisation vector and resultant magnetisation vector lie on the plane of a great circle. We find the intersection of the transformed polar wander vector trace with the great circle plane to obtain one or more possible solutions for the remanent magnetisation vector. Geological deduction will normally allow us to reduce the ambiguity for multiple solutions to obtain the most likely remanent magnetisation direction. Once the remanent magnetisation direction is established, it is then possible to determine the Koenigsberger ratio and magnetic susceptibility for the target. We illustrate the methodology using survey data over the Black Hill Norite which also has extensive palaeomagnetic data available for comparison with the inversion results. We then apply the remote remanence estimation (RRE) method to a systematic study of a large number of intrusive pipes in the Thomson Orogen, New South Wales. The corrected magnetic susceptibility and remanence properties, spatial distribution and underlying uncertainties are evaluated for their potential use by diamond explorers. The

  5. Intrapartum sonographic weight estimation.

    Science.gov (United States)

    Faschingbauer, F; Dammer, U; Raabe, E; Schneider, M; Faschingbauer, C; Schmid, M; Schild, R L; Beckmann, M W; Kehl, S; Mayr, A

    2015-10-01

    To evaluate the accuracy of intrapartum sonographic weight estimation (WE). This retrospective, cross-sectional study included 1958 singleton pregnancies. Inclusion criteria were singleton pregnancy with cephalic presentation, vaginal delivery and ultrasound examination with complete biometric parameters performed on the day of delivery during the latent or active phase of labor, and absence of chromosomal or structural anomalies. The accuracy of intrapartum WE was compared to a control group of fetuses delivered by primary cesarean section at our perinatal center and an ultrasound examination with complete biometric parameters performed within 3 days before delivery (n = 392). Otherwise, the same inclusion criteria as in the study group were applied. The accuracy of WE was compared between five commonly applied formulas using means of percentage errors (MPE), medians of absolute percentage errors (MAPE), and proportions of estimates within 10 % of actual birth weight. In the whole study group, all equations showed a systematic underestimation of fetal weight (negative MPEs). Overall, best MAPE and MPE values were found with the Hadlock II formula, using BPD, AC and FL as biometric parameters (Hadlock II, MPE: -1.28; MAPE: 6.52). MPEs differed significantly between WE in the study and control group for all evaluated formulas: in the control group, either no systematic error (Hadlock III, IV and V) or a significant overestimation (Hadlock I, II) was found. Regarding MAPEs, application of the Hadlock III (HC, AC, FL) and V (AC) formula resulted in significant lower values in the control group (Hadlock III, MAPE: 7.48 vs. 5.95, p = 0.0008 and Hadlock V, MAPE: 8.79 vs. 7.52, p = 0.0085). No significant differences were found for the other equations. A systematic underestimation of fetal weight has to be taken into account in sonographic WE performed intrapartum. Overall, the best results can be achieved with WE formulas using the BPD as the only head

  6. WAYS HIERARCHY OF ACCOUNTING ESTIMATES

    National Research Council Canada - National Science Library

    ŞERBAN CLAUDIU VALENTIN; NĂSTASIE MIHAELA - ANDREEA

    2015-01-01

    ... and practical areas, particularly in situations where we can not decide ourselves with certainty, it must be said that, in fact, we are dealing with estimates and in our case with an accounting estimate...

  7. Spread, estimators and nuisance parameters

    NARCIS (Netherlands)

    Heuvel, Edwin R. van den; Klaassen, Chris A.J.

    1997-01-01

    A general spread inequality for arbitrary estimators of a one-dimensional parameter is given. This finite-sample inequality yields bounds on the distribution of estimators in the presence of finite- or infinite-dimensional nuisance parameters.

  8. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying

    2018-01-11

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
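
    For context, the classical moment (Matheron) estimator that such estimating-equation frameworks generalize can be sketched as follows; the coordinates, data, and binning below are synthetic placeholders, not the hydrology dataset from the paper.

        # Minimal sketch: empirical (method-of-moments) variogram on synthetic 2-D data.
        import numpy as np

        def empirical_variogram(coords, values, bins):
            coords = np.asarray(coords, dtype=float)
            values = np.asarray(values, dtype=float)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            sq = 0.5 * (values[:, None] - values[None, :]) ** 2
            iu = np.triu_indices(len(values), k=1)      # unique pairs only
            d, sq = d[iu], sq[iu]
            return np.array([sq[(d >= lo) & (d < hi)].mean()
                             for lo, hi in zip(bins[:-1], bins[1:])])

        rng = np.random.default_rng(0)
        pts = rng.uniform(0, 10, size=(100, 2))
        z = np.sin(pts[:, 0]) + rng.normal(0, 0.1, 100)
        print(empirical_variogram(pts, z, bins=np.linspace(0, 5, 6)))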

  9. A new estimator for vector velocity estimation [medical ultrasonics

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2001-01-01

    A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new estimator automatically compensates for the axial velocity when determining the transverse velocity. The estimation is optimized by using a lag different from one in the estimation process, and noise artifacts are reduced by averaging RF samples. Further, compensation for the axial velocity can be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...

  10. An improved estimation and focusing scheme for vector velocity estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1999-01-01

    The full blood velocity vector must be estimated in medical ultrasound to give a correct depiction of the blood flow. This can be done by introducing a transversely oscillating pulse-echo ultrasound field, which makes the received signal influenced by a transverse motion. Such an approach was suggested in [1]. Here the conventional autocorrelation approach was used for estimating the transverse velocity and a compensation for the axial motion was necessary in the estimation procedure. This paper introduces a new estimator for determining the two-dimensional velocity vector and a new dynamic beamforming method. A modified autocorrelation approach employing fourth order moments of the input data is used for velocity estimation. The new estimator calculates the axial and lateral velocity component independently of each other. The estimation is optimized for differences in axial and lateral...

  11. Acquisition Cost/Price Estimating

    Science.gov (United States)

    1981-01-01

    SYSTEMS, RECOMMENDING COST GOALS FOR THOSE SYSTEMS, AND VALIDATING THOSE ESTIMATES THROUGH INDEPENDENT COSTING METHODS. INSTRUMENTS THROUGH WHICH SYSTEM...REVIEW AND VALIDATION, (3) RESEARCH AND METHODOLOGY AND (4) DATA ANALYSIS. THESE FUNCTIONAL THRUSTS ARE IN TURN FOCUSED TO ESTIMATING AND ANALYSIS ...ANALYSIS OF COST ISSUES -- TO PROVIDE CONSISTENCY AND COMPLETENESS OF ESTIMATES PREPARED BY OTHER FUNCTIONAL ACTIVITIES. MANAGERIAL 1. COST ANALYSIS HAS

  12. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  13. Improved tilt sensing in an LGS-based tomographic AO system based on instantaneous PSF estimation

    Science.gov (United States)

    Veran, Jean-Pierre

    2013-12-01

    Laser guide star (LGS)-based tomographic AO systems, such as Multi-Conjugate AO (MCAO), Multi-Object AO (MOAO) and Laser Tomography AO (LTAO), require natural guide stars (NGSs) to sense tip-tilt (TT) and possibly other low order modes, to get rid of the LGS-tilt indetermination problem. For example, NFIRAOS, the first-light facility MCAO system for the Thirty Meter Telescope requires three NGSs, in addition to six LGSs: two to measure TT and one to measure TT and defocus. In order to improve sky coverage, these NGSs are selected in a so-called technical field (2 arcmin in diameter for NFIRAOS), which is much larger than the on-axis science field (17x17 arcsec for NFIRAOS), on which the AO correction is optimized. Most times, the NGSs are far off-axis and thus poorly corrected by the high-order AO loop, resulting in spots with low contrast and high speckle noise. Accurately finding the position of such spots is difficult, even with advanced methods such as matched-filtering or correlation, because these methods rely on the knowledge of an average spot image, which is quite different from the instantaneous spot image, especially in case of poor correction. This results in poor tilt estimation, which, ultimately, impacts sky coverage. We propose to improve the estimation of the position of the NGS spots by using, for each frame, a current estimate of the instantaneous spot profile instead of an average profile. This estimate can be readily obtained by tracing wavefront errors in the direction of the NGS through the turbulence volume. The latter is already computed by the tomographic process from the LGS measurements as part of the high order AO loop. Computing such a wavefront estimate has actually already been proposed for the purpose of driving a deformable mirror (DM) in each NGS WFS, to optically correct the NGS spot, which does lead to improved centroiding accuracy. Our approach, however, is much simpler, because it does not require the complication of extra DMs

  14. Estimation of soil permeability

    Directory of Open Access Journals (Sweden)

    Amr F. Elhakim

    2016-09-01

    Full Text Available Soils are permeable materials because of the existence of interconnected voids that allow the flow of fluids when a difference in energy head exists. A good knowledge of soil permeability is needed for estimating the quantity of seepage under dams and dewatering to facilitate underground construction. Soil permeability, also termed hydraulic conductivity, is measured using several methods that include constant and falling head laboratory tests on intact or reconstituted specimens. Alternatively, permeability may be measured in the field using in situ borehole permeability testing (e.g. [2]) and field pumping tests. A less attractive method is to empirically deduce the coefficient of permeability from the results of simple laboratory tests such as the grain size distribution. Otherwise, soil permeability has been assessed from the cone/piezocone penetration tests (e.g. [13,14]). In this paper, the coefficient of permeability was measured using field falling head tests at different depths. Furthermore, the field coefficient of permeability was measured using pumping tests at the same site. The measured permeability values are compared to the values empirically deduced from the cone penetration test for the same location. Likewise, the coefficients of permeability are empirically obtained using correlations based on the index soil properties of the tested sand for comparison with the measured values.
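
    As a small worked example of the laboratory route mentioned above, a falling-head test gives the coefficient of permeability as k = (a*L)/(A*t) * ln(h1/h2). The sketch below only evaluates this standard relation with illustrative numbers; the values are not from the paper.

        # Minimal sketch: falling-head permeability, k in cm/s.
        import math

        def falling_head_permeability(a_cm2, L_cm, A_cm2, t_s, h1_cm, h2_cm):
            """a: standpipe area, L: specimen length, A: specimen area, t: elapsed time,
            h1/h2: initial/final heads."""
            return (a_cm2 * L_cm) / (A_cm2 * t_s) * math.log(h1_cm / h2_cm)

        print(falling_head_permeability(a_cm2=1.0, L_cm=15.0, A_cm2=80.0,
                                        t_s=600.0, h1_cm=100.0, h2_cm=65.0))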

  15. Cooperative photometric redshift estimation

    Science.gov (United States)

    Cavuoti, S.; Tortora, C.; Brescia, M.; Longo, G.; Radovich, M.; Napolitano, N. R.; Amaro, V.; Vellucci, C.

    2017-06-01

    In the modern galaxy surveys photometric redshifts play a central role in a broad range of studies, from gravitational lensing and dark matter distribution to galaxy evolution. Using a dataset of ~ 25,000 galaxies from the second data release of the Kilo Degree Survey (KiDS) we obtain photometric redshifts with five different methods: (i) Random forest, (ii) Multi Layer Perceptron with Quasi Newton Algorithm, (iii) Multi Layer Perceptron with an optimization network based on the Levenberg-Marquardt learning rule, (iv) the Bayesian Photometric Redshift model (or BPZ) and (v) a classical SED template fitting procedure (Le Phare). We show how SED fitting techniques could provide useful information on the galaxy spectral type which can be used to improve the capability of machine learning methods constraining systematic errors and reduce the occurrence of catastrophic outliers. We use such classification to train specialized regression estimators, by demonstrating that such hybrid approach, involving SED fitting and machine learning in a single collaborative framework, is capable to improve the overall prediction accuracy of photometric redshifts.

  16. Direct volume estimation without segmentation

    Science.gov (United States)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

    Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes including left ventricle (LV) and right ventricle (RV) are important clinical indicators of cardiac functions. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac functions and diagnosis of heart diseases. Conventional methods are dependent on an intermediate segmentation step which is obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible; automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching on learning based methods without segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the accessional step of segmentation and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible to be used for volume estimation of either joint bi-ventricles (LV and RV) or individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing with segmentation based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enables diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.

  17. Precision cosmological parameter estimation

    Science.gov (United States)

    Fendt, William Ashton, Jr.

    2009-09-01

    methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide incredible speed- ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there are no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.

  18. Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In many scenarios, a periodic signal of interest is often contaminated by different types of noise that may render many existing pitch estimation methods suboptimal, e.g., due to an incorrect white Gaussian noise assumption. In this paper, a method is established to estimate the pitch of such signals ... against different noise situations. The simulation results confirm that the proposed MVDR method outperforms the state-of-the-art weighted least squares (WLS) pitch estimator in colored noise and has robust pitch estimates against missing harmonics in some time-frames.

  19. Comparison of thermal, salt and dye tracing to estimate shallow flow velocities: Novel triple-tracer approach

    Science.gov (United States)

    Abrantes, João R. C. B.; Moruzzi, Rodrigo B.; Silveira, Alexandre; de Lima, João L. M. P.

    2018-02-01

    The accurate measurement of shallow flow velocities is crucial to understand and model the dynamics of sediment and pollutant transport by overland flow. In this study, a novel triple-tracer approach was used to re-evaluate and compare the traditional and well established dye and salt tracer techniques with the more recent thermal tracer technique in estimating shallow flow velocities. For this purpose a triple tracer (i.e. dyed-salted-heated water) was used. Optical and infrared video cameras and an electrical conductivity sensor were used to detect the tracers in the flow. Leading edge and centroid velocities of the tracers were measured and the correction factors used to determine the actual mean flow velocities from tracer measured velocities were compared and investigated. Experiments were carried out for different flow discharges (32-1813 ml s-1) on smooth acrylic, sand, stones and synthetic grass bed surfaces with 0.8, 4.4 and 13.2% slopes. The results showed that thermal tracers can be used to estimate shallow flow velocities, since the three techniques yielded very similar results without significant differences between them. The main advantages of the thermal tracer were that the movement of the tracer along the measuring section was more easily visible than it was in the real image videos and that it was possible to measure space-averaged flow velocities instead of only one velocity value, with the salt tracer. The correction factors used to determine the actual mean velocity of overland flow varied directly with Reynolds and Froude numbers, flow velocity and slope and inversely with flow depth and bed roughness. In shallow flows, velocity estimation using tracers entails considerable uncertainty and caution must be taken with these measurements, especially in field studies where these variables vary appreciably in space and time.
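
    The conversion from a tracer-derived velocity to a mean flow velocity reduces to multiplying by a correction factor; a minimal sketch follows. The factor value is a placeholder only, since the paper relates it to Reynolds and Froude numbers, slope, flow depth, and bed roughness.

        # Minimal sketch: mean overland-flow velocity from a tracer travel time.
        def mean_flow_velocity(reach_length_m, travel_time_s, correction_factor=0.7):
            """Mean velocity = correction_factor * tracer (leading-edge or centroid) velocity."""
            tracer_velocity = reach_length_m / travel_time_s
            return correction_factor * tracer_velocity

        print(mean_flow_velocity(reach_length_m=2.0, travel_time_s=8.5))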

  20. Modal mass estimation from ambient vibrations measurement: A method for civil buildings

    Science.gov (United States)

    Acunzo, G.; Fiorini, N.; Mori, F.; Spina, D.

    2018-01-01

    A new method for estimating the modal mass ratios of buildings from unscaled mode shapes identified from ambient vibrations is presented. The method is based on the Multi Rigid Polygons (MRP) model in which each floor of the building is ideally divided into several non-deformable polygons that move independently of each other. The whole mass of the building is concentrated in the centroids of the polygons and the experimental mode shapes are expressed in terms of rigid translations and rotations. In this way, the mass matrix of the building can be easily computed on the basis of simple information about the geometry and the materials of the structure. The modal mass ratios can then be obtained through the classical equations of structural dynamics. Ambient vibration measurements must be performed according to this MRP model, using at least two biaxial accelerometers per polygon. After a brief illustration of the theoretical background of the method, numerical validations are presented analysing the method's sensitivity to different possible sources of error. Quality indexes are defined for evaluating the approximation of the modal mass ratios obtained from a certain MRP model. The capability of the proposed model to be applied to real buildings is illustrated through two experimental applications. In the first one, a geometrically irregular reinforced concrete building is considered, using a calibrated Finite Element Model for validating the results of the method. The second application refers to a historical monumental masonry building, with a more complex geometry and with less information available. In both cases, MRP models with a different number of rigid polygons per floor are compared.
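
    A minimal sketch of the final step, assuming the classical effective-modal-mass relation m_eff = (phi' M r)^2 / (phi' M phi) divided by the total mass; the lumped floor masses and mode shape below are invented placeholders, not data from the paper's case studies.

        # Minimal sketch: modal mass ratio from an unscaled mode shape and a lumped mass matrix.
        import numpy as np

        def modal_mass_ratio(phi, M, r):
            """phi: unscaled mode shape, M: lumped mass matrix, r: rigid-body influence vector."""
            L = phi @ M @ r        # modal participation numerator
            m = phi @ M @ phi      # generalized (modal) mass
            return (L ** 2 / m) / (r @ M @ r)

        M = np.diag([120e3, 110e3, 100e3])   # floor masses in kg (placeholder values)
        phi = np.array([0.35, 0.70, 1.00])   # unscaled translational mode shape (placeholder)
        r = np.ones(3)
        print(f"modal mass ratio: {modal_mass_ratio(phi, M, r):.2f}")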

  1. Age estimation in the living

    DEFF Research Database (Denmark)

    Larsen, Sara Tangmose; Thevissen, Patrick; Lynnerup, Niels

    2015-01-01

    A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimations. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores...... are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate including a 95% prediction interval. The aim of this study was to evaluate the performance of TA...... unbiased age estimates which minimize the risk of wrongly estimating minors as adults. Furthermore, when corrected ad hoc, TA produces appropriate prediction intervals. As TA allows expansion with additional traits, i.e. stages of development of the left hand-wrist and the clavicle, it has a great...

  2. WAYS HIERARCHY OF ACCOUNTING ESTIMATES

    Directory of Open Access Journals (Sweden)

    ŞERBAN CLAUDIU VALENTIN

    2015-03-01

    Full Text Available Based, on the one hand, on the premise that an estimate is an approximate evaluation, and on the fact that the term estimate is increasingly common and used in a variety of both theoretical and practical areas, particularly in situations where we cannot decide with certainty, it must be said that we are, in fact, dealing with estimates and, in our case, with an accounting estimate. Completing, on the other hand, the idea above with the phrase "estimated value", which implies a value obtained from an evaluation process whose size is not exact but approximate, that is, close to the actual size, it becomes obvious that it is necessary to delimit the hierarchical relationship between evaluation and estimate while considering the context in which the evaluation activity is carried out at entity level.

  3. Salmonellosis Control: Estimated Economic Benefits

    OpenAIRE

    Roberts, Tanya

    1987-01-01

    Salmonellosis, a common human intestinal disorder primarily caused by contaminated meats and poultry, attacks an estimated two million Americans annually. Using a cost of illness approach, the medical costs and productivity losses alone were estimated to cost around one billion dollars in 1987. If pain and suffering, lost leisure time, and chronic disease costs could be quantified, the estimate would increase significantly. Other procedures for calculating the value of life could either raise...

  4. Estimated Blood Loss in Craniotomy

    OpenAIRE

    Sitohang, Diana; AM, Rachmawati; Arif, Mansyur

    2016-01-01

    Introduction: Estimated blood loss is an estimation of how much blood is lost during surgery. Surgical procedures require a preparation of blood stock, but the demand for blood is often larger than the actual blood used. This predicament happens because there is no blood requirement protocol being used. This study aims to determine the estimated blood loss during the craniotomy procedure and its conformity to blood units ordered for the craniotomy procedure. Methods: This study is a retrospective study...

  5. Estimating the Modified Allan Variance

    Science.gov (United States)

    Greenhall, Charles

    1995-01-01

    The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
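
    For reference, a direct textbook estimator of the modified Allan variance from phase data can be sketched as follows; this shows only the basic averaged second-difference estimator, not Greenhall's equivalent-degrees-of-freedom analysis.

        # Minimal sketch: modified Allan variance of phase data x[n] at averaging factor m.
        import numpy as np

        def mod_allan_variance(x, m, tau0):
            x = np.asarray(x, dtype=float)
            N = len(x)
            tau = m * tau0
            terms = []
            for j in range(N - 3 * m + 1):
                i = np.arange(j, j + m)
                second_diff = x[i + 2 * m] - 2.0 * x[i + m] + x[i]
                terms.append(second_diff.mean())      # average over m adjacent second differences
            terms = np.array(terms)
            return np.sum(terms ** 2) / (2.0 * tau ** 2 * len(terms))

        rng = np.random.default_rng(1)
        phase = np.cumsum(rng.normal(0, 1e-9, 10_000))   # synthetic phase record, seconds
        print(mod_allan_variance(phase, m=10, tau0=1.0))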

  6. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared...

  7. Multisensor estimation: New distributed algorithms

    Directory of Open Access Journals (Sweden)

    Plataniotis K. N.

    1997-01-01

    Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.

  8. Strichartz Estimates in Spherical Coordinates

    OpenAIRE

    Cho, Yonggeun; Lee, Sanghyuk

    2012-01-01

    In this paper we study Strichartz estimates for dispersive equations which are defined by radially symmetric pseudo-differential operators, and of which initial data belongs to spaces of Sobolev type defined in spherical coordinates. We obtain the space time estimates on the best possible range including the endpoint cases.

  9. Preliminary test-shrinkage estimators

    Directory of Open Access Journals (Sweden)

    H. H. Lemmer

    1983-03-01

    Full Text Available The advantages of using the very simple shrinkage estimator TL proposed by Lemmer rather than that proposed by Mehta and Srivivasan in the case of preliminary test estimators for parameters of the normal, binomial and Poisson distributions are examined.

  10. Estimating Supplies Program: Evaluation Report

    Science.gov (United States)

    2002-12-24

    Keep in mind that the number and variety of ... will not be indicative of the supplies needed to actually treat those patients. (U) It is also important to remember that ESP is estimation software.

  11. Estimating Probabilities in Recommendation Systems

    OpenAIRE

    Sun, Mingxuan; Lebanon, Guy; Kidwell, Paul

    2010-01-01

    Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon's book recommendations, Netflix's movie recommendations, and Pandora's music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computat...

  12. State estimation for large ensembles

    NARCIS (Netherlands)

    Gill, R.D.; Massar, S.

    2000-01-01

    We consider the problem of estimating the state of a large but finite number N of identical quantum systems. As N becomes large the problem simplifies dramatically. The only relevant measure of the quality of estimation becomes the mean quadratic error matrix. Here we present a bound on this quantity: a

  13. State estimation for large ensembles

    NARCIS (Netherlands)

    Gill, R.D.; Massar, S.

    1999-01-01

    We consider the problem of estimating the state of a large but finite number N of identical quantum systems. In the limit of large N the problem simplifies. In particular, the only relevant measure of the quality of the estimation is the mean quadratic error matrix. Here we present a bound on the mean

  14. Age estimation in competitive sports.

    Science.gov (United States)

    Timme, Maximilian; Steinacker, Jürgen Michael; Schmeling, Andreas

    2017-01-01

    To maintain the principle of sporting fairness and to protect the health of athletes, it is essential that age limits for youth sporting competitions are complied with. Forensic scientists have developed validated procedures for age estimation in living individuals. Methods have also been published for age estimation in competitive sports. These methods make use of the ossification stage of an epiphyseal plate to draw conclusions about an athlete's age. This article presents published work on the use of magnetic resonance imaging for age estimation in competitive sports. In addition, it looks at the effect on age estimation of factors such as an athlete's socioeconomic status, the use of hormones and anabolic substances as well as chronic overuse of the growth plates. Finally, recommendations on the components required for a valid age estimation procedure in competitive sports are suggested.

  15. UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY

    Directory of Open Access Journals (Sweden)

    Jean-Paul Jernot

    2011-05-01

    Full Text Available This paper deals with the estimation of the specific connectivity of a stationary random set in IRd. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature in the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally an example from quantitative metallography (specific connectivity of a population of sintered bronze particles is given.

  16. Estimating basin lagtime and hydrograph-timing indexes used to characterize stormflows for runoff-quality analysis

    Science.gov (United States)

    Granato, Gregory E.

    2012-01-01

    A nationwide study to better define triangular-hydrograph statistics for use with runoff-quality and flood-flow studies was done by the U.S. Geological Survey (USGS) in cooperation with the Federal Highway Administration. Although the triangular hydrograph is a simple linear approximation, the cumulative distribution of stormflow with a triangular hydrograph is a curvilinear S-curve that closely approximates the cumulative distribution of stormflows from measured data. The temporal distribution of flow within a runoff event can be estimated using the basin lagtime, (which is the time from the centroid of rainfall excess to the centroid of the corresponding runoff hydrograph) and the hydrograph recession ratio (which is the ratio of the duration of the falling limb to the rising limb of the hydrograph). This report documents results of the study, methods used to estimate the variables, and electronic files that facilitate calculation of variables. Ten viable multiple-linear regression equations were developed to estimate basin lagtimes from readily determined drainage basin properties using data published in 37 stormflow studies. Regression equations using the basin lag factor (BLF, which is a variable calculated as the main-channel length, in miles, divided by the square root of the main-channel slope in feet per mile) and two variables describing development in the drainage basin were selected as the best candidates, because each equation explains about 70 percent of the variability in the data. The variables describing development are the USGS basin development factor (BDF, which is a function of the amount of channel modifications, storm sewers, and curb-and-gutter streets in a basin) and the total impervious area variable (IMPERV) in the basin. Two datasets were used to develop regression equations. The primary dataset included data from 493 sites that have values for the BLF, BDF, and IMPERV variables. This dataset was used to develop the best-fit regression
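
    The basin lag factor named in the abstract is simple to compute from the definition given there (main-channel length L in miles, main-channel slope S in feet per mile); a sketch with illustrative inputs follows. The fitted regression coefficients themselves are in the USGS report and are not reproduced here.

        # Minimal sketch: basin lag factor BLF = L / sqrt(S).
        import math

        def basin_lag_factor(channel_length_mi, channel_slope_ft_per_mi):
            return channel_length_mi / math.sqrt(channel_slope_ft_per_mi)

        print(basin_lag_factor(channel_length_mi=12.5, channel_slope_ft_per_mi=40.0))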

  17. SDR Input Power Estimation Algorithms

    Science.gov (United States)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
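
    A rough sketch of what a "linear straight line" style estimator could look like, assuming input power is fit as a linear function of digital AGC counts and temperature over a restricted range; the calibration table below is hypothetical and is not the SCAN Testbed characterization data.

        # Minimal sketch: least-squares fit of input power vs. digital AGC and temperature.
        import numpy as np

        # hypothetical calibration points: (AGC counts, temperature degC, input power dBm)
        agc = np.array([1200.0, 1100.0, 1000.0, 900.0, 800.0])
        temp = np.array([20.0, 21.0, 22.0, 23.0, 24.0])
        power_dbm = np.array([-90.0, -85.0, -80.0, -75.0, -70.0])

        A = np.column_stack([agc, temp, np.ones_like(agc)])
        coef, *_ = np.linalg.lstsq(A, power_dbm, rcond=None)

        def estimate_input_power(agc_counts, temp_c):
            return coef @ np.array([agc_counts, temp_c, 1.0])

        print(estimate_input_power(950.0, 22.5))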

  18. SDR input power estimation algorithms

    Science.gov (United States)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  19. Radiation dose estimates for radiopharmaceuticals

    Energy Technology Data Exchange (ETDEWEB)

    Stabin, M.G.; Stubbs, J.B.; Toohey, R.E. [Oak Ridge Inst. of Science and Education, TN (United States). Radiation Internal Dose Information Center

    1996-04-01

    Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs, and several other organs of interest. The dose estimates were calculated using the MIRD Technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series is different from the MIRD 5, or Reference Man phantom in several aspects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low-energy photons striking the marrow are also different. Other minor differences exist, but are not likely to significantly affect dose estimates calculated with the two phantoms. Assumptions which support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms.
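
    The underlying MIRD-schema arithmetic is a sum over source organs of residence time times an S value (dose per unit cumulated activity). The sketch below only shows that bookkeeping with placeholder numbers; it is not MIRDOSE3 output and the values are illustrative only.

        # Minimal sketch: absorbed dose to one target organ under the MIRD schema.
        def organ_dose(administered_activity_mbq, residence_times_h, s_values_mgy_per_mbq_h):
            """residence_times_h and s_values keyed by source organ name."""
            return administered_activity_mbq * sum(
                residence_times_h[src] * s_values_mgy_per_mbq_h[src]
                for src in residence_times_h
            )

        residence = {"liver": 2.5, "kidneys": 0.8, "remainder": 5.0}            # hours (placeholder)
        s_to_liver = {"liver": 3.0e-3, "kidneys": 2.0e-4, "remainder": 5.0e-5}  # mGy/(MBq*h) (placeholder)
        print(organ_dose(370.0, residence, s_to_liver), "mGy to liver (illustrative only)")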

  20. Estimating equivalence with quantile regression

    Science.gov (United States)

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.

  1. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DOD and NASA especially as the software systems become larger and more complex. As an example MSL (Mars Scientific Laboratory) developed at the Jet Propulsion Laboratory launched with over 2 million lines of code making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
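
    As an illustration of estimation by analogy, one ingredient of such models, a k-nearest-neighbour effort predictor can be sketched as follows; the historical projects are invented placeholders and this is not the NASA Software Cost Model itself.

        # Minimal sketch: effort estimation by analogy (k nearest neighbours in standardized feature space).
        import numpy as np

        def knn_effort(features, efforts, query, k=3):
            X = np.asarray(features, dtype=float)
            mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
            Z = (X - mu) / sigma
            q = (np.asarray(query, dtype=float) - mu) / sigma
            idx = np.argsort(np.linalg.norm(Z - q, axis=1))[:k]   # k most similar past projects
            return float(np.mean(np.asarray(efforts, dtype=float)[idx]))

        # hypothetical historical projects: [KSLOC, team size] -> effort in person-months
        hist_X = [[50, 8], [120, 15], [30, 5], [200, 25], [80, 10]]
        hist_y = [300, 900, 150, 1600, 500]
        print(knn_effort(hist_X, hist_y, query=[100, 12], k=3))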

  2. Fast and Statistically Efficient Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2016-01-01

    Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric...

  3. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  4. Power Quality Indices Estimation Platform

    Directory of Open Access Journals (Sweden)

    Eliana I. Arango-Zuluaga

    2013-11-01

    Full Text Available An interactive platform for estimating the quality indices in single phase electric power systems is presented. It meets the IEEE 1459-2010 standard recommendations. The platform was developed in order to support teaching and research activities in electric power quality. The platform estimates the power quality indices from voltage and current signals using three different algorithms based on fast Fourier transform (FFT, wavelet packet transform (WPT and least squares method. The results show that the algorithms implemented are efficient for estimating the quality indices of the power and the platform can be used according to the objectives established. 
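
    One of the three routes named in the abstract is FFT-based; a minimal sketch of computing RMS voltage and total harmonic distortion from an FFT follows, using a synthetic 50 Hz waveform. The details are assumed for illustration and are not taken from the platform itself.

        # Minimal sketch: RMS and THD of a voltage waveform via the FFT.
        import numpy as np

        fs, f0 = 10_000.0, 50.0
        t = np.arange(0, 0.2, 1 / fs)
        v = 230 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t) + 15 * np.sin(2 * np.pi * 3 * f0 * t)

        V = np.fft.rfft(v) / len(v)
        freqs = np.fft.rfftfreq(len(v), 1 / fs)
        mag = 2 * np.abs(V)                                   # peak amplitude per bin
        fund = mag[np.argmin(np.abs(freqs - f0))]
        harmonics = [mag[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 10)]

        print("V_rms ~", np.sqrt(np.mean(v ** 2)))
        print("THD  ~", np.sqrt(np.sum(np.square(harmonics))) / fund)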

  5. Phase estimation in optical interferometry

    CERN Document Server

    Rastogi, Pramod

    2014-01-01

    Phase Estimation in Optical Interferometry covers the essentials of phase-stepping algorithms used in interferometry and pseudointerferometric techniques. It presents the basic concepts and mathematics needed for understanding the phase estimation methods in use today. The first four chapters focus on phase retrieval from image transforms using a single frame. The next several chapters examine the local environment of a fringe pattern, give a broad picture of the phase estimation approach based on local polynomial phase modeling, cover temporal high-resolution phase evaluation methods, and pre

  6. Real-time estimation of optical flow based on optimized haar wavelet features

    DEFF Research Database (Denmark)

    Salmen, Jan; Caup, Lukas; Igel, Christian

    2011-01-01

    -objective optimization. In this work, we build on a popular algorithm developed for real-time applications. It is originally based on the Census transform and benefits from this encoding for table-based matching and tracking of interest points. We propose to use the more universal Haar wavelet features instead of the Census transform within the same framework. The resulting approach is more flexible, in particular it allows for sub-pixel accuracy. For comparison with the original method and another baseline algorithm, we considered both popular benchmark datasets as well as a long synthetic video sequence. We

  7. 50th Percentile Rent Estimates

    Data.gov (United States)

    Department of Housing and Urban Development — Rent estimates at the 50th percentile (or median) are calculated for all Fair Market Rent areas. Fair Market Rents (FMRs) are primarily used to determine payment...

  8. Travel time estimation using Bluetooth.

    Science.gov (United States)

    2015-06-01

    The objective of this study was to investigate the feasibility of using a Bluetooth Probe Detection System (BPDS) to estimate travel time in an urban area. Specifically, the study investigated the possibility of measuring overall congestion, the ...

  9. Performance Bounds of Quaternion Estimators.

    Science.gov (United States)

    Xia, Yili; Jahanchahi, Cyrus; Nitta, Tohru; Mandic, Danilo P

    2015-12-01

    The quaternion widely linear (WL) estimator has been recently introduced for optimal second-order modeling of the generality of quaternion data, both second-order circular (proper) and second-order noncircular (improper). Experimental evidence exists of its performance advantage over the conventional strictly linear (SL) as well as the semi-WL (SWL) estimators for improper data. However, rigorous theoretical and practical performance bounds are still missing in the literature, yet this is crucial for the development of quaternion valued learning systems for 3-D and 4-D data. To this end, based on the orthogonality principle, we introduce a rigorous closed-form solution to quantify the degree of performance benefits, in terms of the mean square error, obtained when using the WL models. The cases when the optimal WL estimation can simplify into the SWL or the SL estimation are also discussed.

  10. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics is covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  11. Estimating Emissions from Railway Traffic

    DEFF Research Database (Denmark)

    Jørgensen, Morten W.; Sorenson, Spencer C.

    1998-01-01

    Several parameters of importance for estimating emissions from railway traffic are discussed, and typical results presented. Typical emissions factors from diesel engines and electrical power generation are presented, and the effect of differences in national electrical generation sources...

  12. LPS Catch and Effort Estimation

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Data collected from the LPS dockside (LPIS) and the LPS telephone (LPTS) surveys are combined to produce estimates of total recreational catch, landings, and fishing...

  13. Perceptual estimation obeys Occam's razor

    National Research Council Canada - National Science Library

    Gershman, Samuel J; Niv, Yael

    2013-01-01

    .... In a series of experiments, we tested this prediction by asking participants to estimate the number of colored circles on a computer screen, with the number of circles drawn from a color-specific distribution...

  14. Load Estimation from Modal Parameters

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández

    2007-01-01

    In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF...... the accuracy in the load estimation. Finally, the results of an experimental program carried out on a simple structure are presented....
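
    A minimal sketch of the frequency-domain idea, assuming the loads are recovered from measured responses through the pseudo-inverse of an FRF matrix, F(w) = pinv(H(w)) X(w), frequency line by frequency line; the matrices below are random placeholders, not the paper's experimental data.

        # Minimal sketch: load estimation from responses and an FRF matrix.
        import numpy as np

        def estimate_loads(H, X):
            """H: (n_freq, n_out, n_in) FRF matrix, X: (n_freq, n_out) measured responses."""
            return np.stack([np.linalg.pinv(H[k]) @ X[k] for k in range(H.shape[0])])

        rng = np.random.default_rng(0)
        H = rng.normal(size=(4, 3, 2)) + 1j * rng.normal(size=(4, 3, 2))   # placeholder FRFs
        F_true = np.array([[1.0, 0.5]] * 4)                                # known test loads
        X = np.einsum('kij,kj->ki', H, F_true)                             # simulated responses
        print(np.allclose(estimate_loads(H, X), F_true))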

  15. Estimating emissions from railway traffic

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, M.W.; Sorenson, C.

    1997-07-01

    The report discusses methods that can be used to estimate the emissions from various kinds of railway traffic. The methods are based on the estimation of the energy consumption of the train, so that comparisons can be made between electric and diesel driven trains. Typical values are given for the necessary traffic parameters, emission factors, and train loading. Detailed models for train energy consumption are presented, as well as empirically based methods using average train speed and distance between stops. (au)
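
    The bookkeeping behind such comparisons is essentially energy consumption multiplied by an emission factor, with the electricity-generation factor applied for electric traction; a sketch with placeholder numbers follows (the report's detailed train-energy models are not reproduced here).

        # Minimal sketch: trip emissions = energy consumption x emission factor.
        def train_emissions_kg(energy_kwh, emission_factor_g_per_kwh):
            return energy_kwh * emission_factor_g_per_kwh / 1000.0

        energy = 4500.0                                                  # kWh for the trip (placeholder)
        print("diesel NOx:  ", train_emissions_kg(energy, 10.0), "kg")   # placeholder factors
        print("electric NOx:", train_emissions_kg(energy, 1.5), "kg")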

  16. ATR Performance Estimation Seed Program

    Science.gov (United States)

    2015-09-28

    Research Performance Report, July 2014 - June 2015: ATR Performance Estimation Seed Program. Daniel A. Cook, Georgia Tech Research Institute, Sensors and Electromagnetic Applications Laboratory. ...term seed program to expand the Navy's efforts in performance prediction for MCM. The team included individuals from ARL/PSU, APL-UW, GTRI, and NSWC

  17. Estimating uncertainty in resolution tests

    CSIR Research Space (South Africa)

    Goncalves, DP

    2006-05-01

    Full Text Available frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty. Paper 050404R received May 20, 2005; revised manuscript received Sep. 2, 2005; accepted for publication Sep. 9, 2005; published online May 10, 2006.

  18. The Psychology of Cost Estimating

    Science.gov (United States)

    Price, Andy

    2016-01-01

    Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even as failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, but only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded amazing findings into how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception, rather than facts and data. These built-in biases to our thinking directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly, why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help the decision maker and our organizations make fact-based decisions.

  19. Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range

    Energy Technology Data Exchange (ETDEWEB)

    Soman, M.R., E-mail: m.r.soman@open.ac.uk [e2v centre for electronic imaging, The Open University, Walton Hall, Milton Keynes MK7 6AA (United Kingdom); Hall, D.J.; Tutt, J.H.; Murray, N.J.; Holland, A.D. [e2v centre for electronic imaging, The Open University, Walton Hall, Milton Keynes MK7 6AA (United Kingdom); Schmitt, T.; Raabe, J.; Schmitt, B. [Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland)

    2013-12-11

    The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 µm from the current 24 µm spatial resolution (FWHM). The 400 eV–1600 eV energy X-rays detected by this spectrometer primarily interact within the field free region of the CCD, producing electron clouds which will diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through the analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 µm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal to effective readout noise ratio achieved resulting in a worst-case spatial resolution measurement of 4.5±0.2 μm and 3.9±0.1 μm at 530 eV and 680 eV respectively. A method is described that allows the contribution of the X-ray spot size to be deconvolved from these
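    The centroiding idea described above can be illustrated with a generic intensity-weighted (charge-centroid style) estimate over the pixel window of a split event; the snippet below is a minimal sketch with assumed pixel values and threshold, not the specific algorithm used for the SAXES camera.

```python
import numpy as np

# Minimal sketch of charge-centroid sub-pixel localisation for a split X-ray event.
# 'pixels' is a small window (in ADU) around the event; 'pitch_um' is the pixel pitch.
# This is a generic intensity-weighted centroid, not the paper's specific algorithm.

def event_centroid(pixels, pitch_um=16.0, threshold=0.0):
    pixels = np.asarray(pixels, dtype=float)
    charge = np.where(pixels > threshold, pixels, 0.0)    # suppress read-noise pixels
    total = charge.sum()
    ys, xs = np.mgrid[0:charge.shape[0], 0:charge.shape[1]]
    x_c = (charge * xs).sum() / total                     # centroid in pixel units
    y_c = (charge * ys).sum() / total
    return x_c * pitch_um, y_c * pitch_um                 # position in micrometres

# A 3x3 event split mostly between two pixels (assumed ADU values)
window = [[0,   5,   0],
          [0, 620, 140],
          [0,  10,   3]]
print(event_centroid(window, pitch_um=16.0, threshold=4.0))
```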

  20. Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range

    Science.gov (United States)

    Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.

    2013-12-01

    The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 μm from the current 24 μm spatial resolution (FWHM). The 400 eV-1600 eV energy X-rays detected by this spectrometer primarily interact within the field free region of the CCD, producing electron clouds which will diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through the analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 μm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal to effective readout noise ratio achieved resulting in a worst-case spatial resolution measurement of 4.5±0.2 μm and 3.9±0.1 μm at 530 eV and 680 eV respectively. A method is described that allows the contribution of the X-ray spot size to be deconvolved from these

  1. Blind estimation of reverberation time.

    Science.gov (United States)

    Ratnam, Rama; Jones, Douglas L; Wheeler, Bruce C; O'Brien, William D; Lansing, Charissa R; Feng, Albert S

    2003-11-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
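    A minimal sketch of the core maximum-likelihood step is given below: a free-decay segment is modelled as zero-mean Gaussian noise with an exponentially decaying envelope, the decay per sample is estimated after profiling out the noise variance, and the result is converted to an RT60. The segment selection and order-statistics filtering of the full blind estimator are omitted, and all numeric values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

# Core ML step only: model a decay segment as y_n ~ N(0, sigma^2 * a^(2n)),
# profile out sigma^2, search the decay-per-sample 'a', and convert to RT60.

def estimate_rt60(segment, fs, rt_min=0.05):
    y = np.asarray(segment, dtype=float)
    n = np.arange(y.size)

    def neg_log_lik(log_a):
        # log-domain computation of the ML noise variance to avoid overflow
        log_terms = np.log(y**2 + 1e-300) - 2.0 * log_a * n
        log_sigma2 = logsumexp(log_terms) - np.log(y.size)
        return 0.5 * y.size * log_sigma2 + log_a * n.sum()

    lower = np.log(1e-3) / (fs * rt_min)                 # fastest decay considered
    res = minimize_scalar(neg_log_lik, bounds=(lower, -1e-9), method="bounded")
    return np.log(1e-3) / (fs * res.x)                   # RT60 in seconds

# Synthetic check: damped white noise with a known 0.5 s reverberation time
fs, rt_true = 8000, 0.5
t = np.arange(2 * fs)
x = np.random.randn(t.size) * np.exp(np.log(1e-3) * t / (fs * rt_true))
print(f"estimated RT60 ~ {estimate_rt60(x, fs):.2f} s")
```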

  2. Relaxation times estimation in MRI

    Science.gov (United States)

    Baselice, Fabio; Caivano, Rocchina; Cammarota, Aldo; Ferraioli, Giampaolo; Pascazio, Vito

    2014-03-01

    Magnetic Resonance Imaging is a very powerful technique for soft tissue diagnosis. At present, the clinical evaluation is mainly conducted exploiting the amplitude of the recorded MR image which, in some specific cases, is modified by using contrast enhancements. Nevertheless, spin-lattice (T1) and spin-spin (T2) relaxation times can play an important role in the diagnosis of many pathologies, such as cancer, Alzheimer's or Parkinson's disease. Different algorithms for relaxation time estimation have been proposed in the literature. In particular, the two most adopted approaches are based on Least Squares (LS) and on Maximum Likelihood (ML) techniques. As the amplitude noise is not zero mean, the first one produces a biased estimator, while the ML is unbiased but at the cost of high computational effort. Recently the attention has been focused on the estimation in the complex, instead of the amplitude, domain. The advantage of working with the real and imaginary decomposition of the available data is mainly the possibility of achieving higher quality estimations. Moreover, the zero mean complex noise makes the Least Squares estimation unbiased, achieving low computational times. First results of complex domain relaxation times estimation on real datasets are presented. In particular, a patient with an occipital lesion has been imaged on a 3.0T scanner. Globally, the evaluation of relaxation times allows us to establish a more precise topography of biologically active foci, also with respect to contrast enhanced images.
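    The complex-domain least-squares idea can be sketched as below: a mono-exponential decay with a constant phase is fitted jointly to the real and imaginary parts of synthetic multi-echo data. Echo times, signal level and noise are assumed values; this is an illustration of the approach, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch: least-squares T2 estimation from complex-valued multi-echo
# data, fitting a mono-exponential decay with a constant phase. Working on the
# real/imaginary parts keeps the noise zero-mean, so plain LS remains unbiased.

def model(te, s0, t2, phi):
    s = s0 * np.exp(-te / t2) * np.exp(1j * phi)
    return np.concatenate([s.real, s.imag])       # curve_fit needs a real vector

te = np.array([10., 30., 50., 70., 90., 110.])    # echo times in ms (assumed)
true = dict(s0=1000.0, t2=80.0, phi=0.4)
clean = true["s0"] * np.exp(-te / true["t2"]) * np.exp(1j * true["phi"])
noisy = clean + (np.random.randn(te.size) + 1j * np.random.randn(te.size)) * 5.0

popt, _ = curve_fit(model, te, np.concatenate([noisy.real, noisy.imag]),
                    p0=[noisy[0].real, 50.0, 0.0])
print(f"estimated T2 ~ {popt[1]:.1f} ms (true {true['t2']} ms)")
```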

  3. Parameter estimation in food science.

    Science.gov (United States)

    Dolan, Kirk D; Mishra, Dharmendra K

    2013-01-01

    Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem-computing y(t) given known parameters-has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem-estimation of parameters given measured y(t)-is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
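    Scaled sensitivity coefficients can be computed numerically as p·∂y/∂p; the sketch below does this by central finite differences for a simple first-order (log-linear) inactivation model with assumed parameter values, purely as an illustration of the idea.

```python
import numpy as np

# Sketch of scaled sensitivity coefficients X'_p = p * dy/dp for a simple first-order
# (log-linear) thermal inactivation model. Parameter values are illustrative only.
# Coefficients that are large and not proportional to each other suggest the
# parameters can be estimated independently from the data.

def log_survivors(t, d_ref, z, temp, temp_ref=121.1):
    """log10(N/N0) for first-order inactivation with D-value d_ref at temp_ref."""
    d_value = d_ref * 10 ** (-(temp - temp_ref) / z)
    return -t / d_value

def scaled_sensitivity(f, t, params, name, rel_step=1e-4, **kwargs):
    p = params[name]
    hi = dict(params, **{name: p * (1 + rel_step)})
    lo = dict(params, **{name: p * (1 - rel_step)})
    dydp = (f(t, **hi, **kwargs) - f(t, **lo, **kwargs)) / (2 * p * rel_step)
    return p * dydp

t = np.linspace(0.0, 10.0, 6)                      # heating time, minutes (assumed)
params = {"d_ref": 2.0, "z": 10.0}                 # assumed D- and z-values
for name in params:
    print(name, np.round(scaled_sensitivity(log_survivors, t, params, name,
                                            temp=118.0), 3))
```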

  4. Blind estimation of reverberation time

    Science.gov (United States)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.

    2003-11-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.

  5. Weak-lensing shear estimates with general adaptive moments, and studies of bias by pixellation, PSF distortions, and noise

    Science.gov (United States)

    Simon, Patrick; Schneider, Peter

    2017-08-01

    In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We have employed general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. The GLAM ellipticity has useful properties for any chosen weight profile: the weighted ellipticity is identical to that of isophotes of elliptical images, and in absence of noise and pixellation it is always an unbiased estimator of reduced shear. We show that moment-based techniques, adaptive or unweighted, are similar to a model-based approach in the sense that they can be seen as imperfect fit of an elliptical profile to the image. Due to residuals in the fit, moment-based estimates of ellipticities are prone to underfitting bias when inferred from observed images. The estimation is fundamentally limited mainly by pixellation which destroys information on the original, pre-seeing image. We give an optimised estimator for the pre-seeing GLAM ellipticity and quantify its bias for noise-free images. To deal with images where pixel noise is prominent, we consider a Bayesian approach to infer GLAM ellipticity where, similar to the noise-free case, the ellipticity posterior can be inconsistent with the true ellipticity if we do not properly account for our ignorance about fit residuals. This underfitting bias, quantified in the paper, does not vary with the overall noise level but changes with the pre-seeing brightness profile and the correlation or heterogeneity of pixel noise over the image. Furthermore, when inferring a constant ellipticity or, more relevantly, constant shear from a source sample with a distribution of intrinsic properties (sizes, centroid positions, intrinsic shapes), an additional, now noise-dependent bias arises towards low signal-to-noise if incorrect prior densities for the intrinsic properties are used. We discuss the origin of this
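    For reference, a generic weighted-quadrupole-moment ellipticity (the kind of moment-based estimate discussed above) can be written compactly as below; the snippet uses a fixed circular Gaussian weight and the chi ellipticity definition, and is an illustrative stand-in rather than the GLAM estimator itself.

```python
import numpy as np

# Minimal sketch of a weighted-quadrupole-moment ellipticity from a postage-stamp
# image, with a circular Gaussian weight re-centred on the weighted centroid.
# This is a generic moment estimator for illustration, not the GLAM estimator itself.

def weighted_ellipticity(img, sigma_w=3.0, n_iter=3):
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    xc, yc = img.shape[1] / 2.0, img.shape[0] / 2.0
    for _ in range(n_iter):                        # re-centre the weight a few times
        w = np.exp(-((xs - xc)**2 + (ys - yc)**2) / (2 * sigma_w**2))
        f = img * w
        norm = f.sum()
        xc, yc = (f * xs).sum() / norm, (f * ys).sum() / norm
    qxx = (f * (xs - xc)**2).sum() / norm
    qyy = (f * (ys - yc)**2).sum() / norm
    qxy = (f * (xs - xc) * (ys - yc)).sum() / norm
    chi = (qxx - qyy + 2j * qxy) / (qxx + qyy)     # complex ellipticity (chi definition)
    return chi

# Elliptical Gaussian test image (assumed axis ratios)
ys, xs = np.mgrid[0:33, 0:33]
test = np.exp(-(((xs - 16) / 4.0)**2 + ((ys - 16) / 2.5)**2) / 2)
print(weighted_ellipticity(test))
```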

  6. Weldon Spring historical dose estimate

    Energy Technology Data Exchange (ETDEWEB)

    Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.

    1986-07-01

    This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr.

  7. Sacro-femoral-pubic angle: a coronal parameter to estimate pelvic tilt.

    Science.gov (United States)

    Blondel, Benjamin; Schwab, Frank; Patel, Ashish; Demakakos, Jason; Moal, Bertrand; Farcy, Jean-Pierre; Lafage, Virginie

    2012-04-01

    Pelvic tilt is an established measure of position which has been tied to sagittal plane spinal deformity. Increased tilt is noted in the setting of the aging spine and sagittal malalignment syndromes such as flatback (compensatory mechanism). However, the femoral heads are often poorly visualized on sagittal films of scoliosis series in adults, limiting the ability to determine pelvic incidence and tilt. There is a need to establish a coronal plane (better visualization) pelvic parameter which correlates closely with pelvic tilt. This is a retrospective review of 71 adult patients (47 females and 24 males) with full-length standing spine radiographs. Visualization of all spinal and pelvic landmarks was available coronally and sagittally (including pelvis and acetabuli). Pelvic tilt was calculated through validated digital analysis software (SpineView(®)). A new parameter, the sacro-femoral-pubic angle (midpoint of S1 endplate to centroid of acetabuli to superior border of the pubic symphysis) was analyzed for correlation (and predictive ability) with sagittal pelvic tilt. The sacro-femoral-pubic angle (SFP angle) was highly correlated to PT, and according to this analysis, pelvic tilt could be estimated by the formula: PT = 75 - (SFP angle). A Pearson's correlation coefficient of 0.74 (p < 0.005) and predictive ability of 76% accuracy was obtained (±7.5°). The correlation and predictive ability was greater for males compared to females (male: r = 0.87 and predictive model = 93%; female: r = 0.67 and predictive model = 67%). The pelvic tilt is an essential measure in the context of radiographic evaluation of spinal deformity and malalignment. Given the routinely excellent visibility of coronal films this study established the SFP as a coronal parameter which can reliably estimate pelvic tilt. The high correlation and predictive ability of the SFP angle should prompt further study and clinical application when lateral radiographs do not permit

  8. Estimating formwork striking time for concrete mixes

    African Journals Online (AJOL)

    eobe

    In this study, we estimated the time for strength development in concrete cured up to 56 days. Water ... concrete made from crushed aggregate. Non-linear ... [The remainder of the preview consists of the column headings of Table 4 (Mix Proportion): grade (N/mm2), slump (mm), mix ratio, water-cement ratio (w/c), cement (kg/m3), water (kg/m3), fine aggregate (kg/m3), coarse aggregate.]

  9. Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach

    Science.gov (United States)

    Stevenson, Glenn A.

    2012-01-01

    For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…

  10. Multispacecraft current estimates at swarm

    DEFF Research Database (Denmark)

    Dunlop, M. W.; Yang, Y.-Y.; Yang, J.-Y.

    2015-01-01

    During the first several months of the three-spacecraft Swarm mission all three spacecraft came repeatedly into close alignment, providing an ideal opportunity for validating the proposed dual-spacecraft method for estimating current density from the Swarm magnetic field data. Two of the Swarm...... orbit the use of time-shifted positions allow stable estimates of current density to be made and can verify temporal effects as well as validating the interpretation of the current components as arising predominantly from field-aligned currents. In the case of four-spacecraft configurations we can resolve...... the full vector current and therefore can check the perpendicular as well as parallel current density components directly, together with the quality factor for the estimates directly (for the first time in situ at low Earth orbit)....

  11. Data Handling and Parameter Estimation

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist

    2016-01-01

    Models have also been used as an integral part of the comprehensive analysis and interpretation of data obtained from a range of experimental methods from the laboratory, as well as pilot-scale studies to characterise and study wastewater treatment plants. In this regard, models help to properly explain...... various kinetic parameters for different microbial groups and their activities in WWTPs by using parameter estimation techniques. Indeed, estimating parameters is an integral part of model development and application (Seber and Wild, 1989; Ljung, 1999; Dochain and Vanrolleghem, 2001; Omlin and Reichert, 1999; Brun et al., 2002; Sin et al., 2010) and can be broadly defined as follows: Given a model and a set of data/measurements from the experimental setup in question, estimate all or some of the parameters of the model using an appropriate statistical method. The focus of this chapter is to provide...

  12. Integral Criticality Estimators in MCATK

    Energy Technology Data Exchange (ETDEWEB)

    Nolen, Steven Douglas [Los Alamos National Laboratory; Adams, Terry R. [Los Alamos National Laboratory; Sweezy, Jeremy Ed [Los Alamos National Laboratory

    2016-06-14

    The Monte Carlo Application ToolKit (MCATK) is a component-based software toolset for delivering customized particle transport solutions using the Monte Carlo method. Currently under development in the XCP Monte Carlo group at Los Alamos National Laboratory, the toolkit has the ability to estimate the keff and α eigenvalues for static geometries. This paper presents a description of the estimators and variance reduction techniques available in the toolkit and includes a preview of those slated for future releases. Along with the description of the underlying algorithms is a description of the available user inputs for controlling the iterations. The paper concludes with a comparison of the MCATK results with those provided by analytic solutions. The results match within expected statistical uncertainties and demonstrate MCATK’s usefulness in estimating these important quantities.

  13. Scaling behaviour of entropy estimates

    Science.gov (United States)

    Schürmann, Thomas

    2002-02-01

    Entropy estimation of information sources is highly non-trivial for symbol sequences with strong long-range correlations. The rabbit sequence, related to the symbolic dynamics of the nonlinear circle map at the critical point as well as the logistic map at the Feigenbaum point, is known to produce long memory tails. For both dynamical systems the scaling behaviour of the block entropy of order n has been shown to increase ∝log n. In contrast to such probabilistic concepts, we investigate the scaling behaviour of certain non-probabilistic entropy estimation schemes suggested by Lempel and Ziv (LZ) in the context of algorithmic complexity and data compression. These are applied in a sequential manner with the scaling variable being the length N of the sequence. We determine the scaling law for the LZ entropy estimate applied to the case of the critical circle map and the logistic map at the Feigenbaum point in a binary partition.
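    One common LZ-style estimator (not necessarily the exact scheme analysed in the paper) parses the sequence into distinct LZ78 phrases and normalises the phrase count; a small sketch with assumed test sequences follows.

```python
import numpy as np

# A small Lempel-Ziv (LZ78) entropy estimate for a binary sequence: parse the string
# into distinct phrases and use the classical normalisation c * log2(N) / N, which
# converges to the entropy rate for stationary ergodic sources. This is one common
# LZ-style estimator, not necessarily the exact scheme analysed in the paper.

def lz78_entropy_rate(bits):
    s = "".join(str(int(b)) for b in bits)
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:                 # a new phrase ends here
            phrases.add(current)
            current = ""
    c = len(phrases) + (1 if current else 0)       # count a possible incomplete phrase
    n = len(s)
    return c * np.log2(n) / n                      # bits per symbol

rng = np.random.default_rng(0)
print("fair coin    :", round(lz78_entropy_rate(rng.integers(0, 2, 100_000)), 3))
print("biased p=0.9 :", round(lz78_entropy_rate(rng.random(100_000) < 0.9), 3))
print("constant     :", round(lz78_entropy_rate(np.zeros(100_000, dtype=int)), 3))
```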

  14. System for estimating fatigue damage

    Science.gov (United States)

    LeMonds, Jeffrey; Guzzo, Judith Ann; Liu, Shaopeng; Dani, Uttara Ashwin

    2017-03-14

    In one aspect, a system for estimating fatigue damage in a riser string is provided. The system includes a plurality of accelerometers which can be deployed along a riser string and a communications link to transmit accelerometer data from the plurality of accelerometers to one or more data processors in real time. With data from a limited number of accelerometers located at sensor locations, the system estimates an optimized current profile along the entire length of the riser including riser locations where no accelerometer is present. The optimized current profile is then used to estimate damage rates to individual riser components and to update a total accumulated damage to individual riser components. The number of sensor locations is small relative to the length of a deepwater riser string, and a riser string several miles long can be reliably monitored along its entire length by fewer than twenty sensor locations.

  15. Estimation and valuation in accounting

    Directory of Open Access Journals (Sweden)

    Cicilia Ionescu

    2014-03-01

    Full Text Available The relationships of the enterprise with the external environment give rise to a range of informational needs. Satisfying those needs requires the production of coherent, comparable, relevant and reliable information included in the individual or consolidated financial statements. International Financial Reporting Standards (IAS/IFRS) aim to ensure the comparability and relevance of the accounting information, providing, among other things, details about the issue of accounting estimates and changes in accounting estimates. Valuation is a process used continually in order to assign values to the elements that are to be recognised in the financial statements. Most of the time, the values reflected in the books are clear; they are recorded in the contracts with third parties, in the supporting documents, etc. However, the uncertainties in which a reporting entity operates mean that, sometimes, the values assigned or attributable to some items composing the financial statements must be determined by using estimates.

  16. Density estimation in wildlife surveys

    Science.gov (United States)

    Bart, Jonathan; Droege, Sam; Geissler, Paul E.; Peterjohn, Bruce G.; Ralph, C. John

    2004-01-01

    Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.

  17. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle

    2002-01-01

    In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... Modelling spatial variability, typically in terms of the semivariogram, is of great interest when the objective is to compute spatial predictions of parameters measured in space. Such parameters could be rainfall, temperature or concentrations of polluting agents in aquatic environments...... is insensitive to the choice of estimation method, but also that the uncertainties of predictions were reduced when applying maximum likelihood.
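    As a sketch of the least-squares side of such comparisons, the snippet below bins pairwise squared differences into an empirical semivariogram and fits a spherical model; the synthetic data, bin edges and model choice are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

# Sketch: empirical semivariogram from scattered 2-D observations, then a
# least-squares fit of a spherical model. All data and settings are assumed.

def empirical_semivariogram(coords, values, bins):
    d = pdist(coords)                                        # pairwise distances
    g = 0.5 * pdist(values[:, None], metric="sqeuclidean")   # 0.5 * (z_i - z_j)^2
    idx = np.digitize(d, bins)
    lags = np.array([d[idx == k].mean() for k in range(1, len(bins))])
    gamma = np.array([g[idx == k].mean() for k in range(1, len(bins))])
    return lags, gamma

def spherical(h, nugget, sill, rng_):
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_)**3)
    return np.where(h < rng_, inside, sill)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(300, 2))
values = np.sin(coords[:, 0] / 20) + 0.2 * rng.standard_normal(300)  # correlated field

lags, gamma = empirical_semivariogram(coords, values, bins=np.linspace(0, 60, 13))
popt, _ = curve_fit(spherical, lags, gamma, p0=[0.01, gamma.max(), 30.0],
                    bounds=(0, np.inf))
print("nugget, sill, range:", np.round(popt, 3))
```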

  18. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co

  19. Perceptual frames in frequency estimation.

    Science.gov (United States)

    Zyłowska, Aleksandra; Kossek, Marcin; Wawrzyniak, Małgorzata

    2014-02-01

    This study is an introductory investigation of cognitive frames, focused on perceptual frames divided into information and formal perceptual frames, which were studied based on sub-additivity of frequency estimations. It was postulated that different presentations of a response scale would result in different percentage estimates of time spent watching TV or using the Internet. The results supported the existence of perceptual frames that influence the perception process and indicated that information perceptual frames had a stronger effect than formal frames. The measures made it possible to explore the operation of perceptual frames and also outlined the relations between heuristics and cognitive frames.

  20. Multicollinearity and maximum entropy Leuven estimator

    OpenAIRE

    Sudhanshu Mishra

    2004-01-01

    Multicollinearity is a serious problem in applied regression analysis. Q. Paris (2001) introduced the MEL estimator to resolve the multicollinearity problem. This paper improves the MEL estimator to the Modular MEL (MMEL) estimator and shows by Monte Carlo experiments that MMEL estimator performs significantly better than OLS as well as MEL estimators.

  1. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    Keywords: unrecorded alcohol; methods of estimation. In this paper we focused on methods of estimation of the unrecorded alcohol consumption level. Present methods allow only an approximate estimation of the unrecorded alcohol consumption level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.

  2. Depth estimation via stage classification

    NARCIS (Netherlands)

    Nedović, V.; Smeulders, A.W.M.; Redert, A.; Geusebroek, J.M.

    2008-01-01

    We identify scene categorization as the first step towards efficient and robust depth estimation from single images. Categorizing the scene into one of the geometric classes greatly reduces the possibilities in subsequent phases. To that end, we introduce 15 typical 3D scene geometries, called

  3. Estimating latency from inhibitory input

    DEFF Research Database (Denmark)

    Levakova, Marie; Ditlevsen, Susanne; Lansky, Petr

    2014-01-01

    Stimulus response latency is the time period between the presentation of a stimulus and the occurrence of a change in the neural firing evoked by the stimulation. The response latency has been explored and estimation methods proposed mostly for excitatory stimuli, which means that the neuron reac...

  4. Helicopter Toy and Lift Estimation

    Science.gov (United States)

    Shakerin, Said

    2013-01-01

    A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight. (Contains 1 figure.)

  5. Age estimation of brown shrimp

    NARCIS (Netherlands)

    Campos, J.; Bio, A.; Freitas, V.; Moreiro, C.; van der Veer, H.W.

    2013-01-01

    In this study, 2 methods for age estimation of Crangon crangon were compared: one based on total length, the other based on the number of segments in the antennules, as suggested by Tiews’ findings (1954: Ber Deut Wiss Kommiss 13:235-269). Shrimps from populations near the species’ geographic edges,

  6. Performance of optimal registration estimators

    NARCIS (Netherlands)

    Pham, T.Q.; Bezuijen, M.; Van Vliet, L.J.; Schutte, K.; Luengo Hendriks, C.L.

    2005-01-01

    This paper derives a theoretical limit for image registration and presents an iterative estimator that achieves the limit. The variance of any parametric registration is bounded by the Cramer-Rao bound (CRB). This bound is signal-dependent and is proportional to the variance of input noise. Since

  7. [Estimating renal function with formulas

    NARCIS (Netherlands)

    Verhave, J.C.; Wetzels, J.F.M.; Bakker, S.J.; Gansevoort, R.T.

    2007-01-01

    A glomerular filtration rate (GFR) <60 ml/min/1.73 m2 is associated with an increased risk of cardiovascular disease and renal insufficiency. The formula of the 'Modification of diet in renal disease' (MDRD) study is derived from plasma-creatinine concentrations and estimates GFR based on age, sex

  8. Estimation of Motion Vector Fields

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    1993-01-01

    This paper presents an approach to the estimation of 2-D motion vector fields from time varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generate sample...

  9. Online Wavelet Complementary velocity Estimator.

    Science.gov (United States)

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-01-02

    In this paper, we have proposed a new online Wavelet Complementary velocity Estimator (WCE) over position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks which extract the high and low resolutions of the data. The proposed complementary estimator combines these two resolutions of velocities, which are acquired from numerical differentiation and integration of the position and acceleration sensors, by considering a fixed moving-horizon window as input to the wavelet filter. Because it uses wavelet filters, it can be implemented in a parallel procedure. With this method the velocity is estimated numerically without the high noise of differentiators or the drifting bias of integration, and with less delay, which is suitable for active vibration control in high-precision mechatronic systems by Direct Velocity Feedback (DVF) methods. This method allows us to make velocity sensors with fewer mechanically moving parts, which makes them suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters in terms of stability and delay, and benchmarked them by long-time integration of the estimated velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
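    A much simplified, first-order complementary filter conveys the underlying idea: blend the velocity obtained by differentiating position (drift-free but noisy) with the velocity obtained by integrating acceleration (smooth but drifting). The sketch below is a stand-in with assumed signals and crossover frequency; the paper's estimator replaces these fixed filters with wavelet filter banks.

```python
import numpy as np

# Simplified first-order complementary velocity estimator: integrate acceleration
# and continuously pull the estimate towards the differentiated position to remove
# drift. A stand-in for the idea; the paper uses wavelet filter banks instead.

def complementary_velocity(pos, acc, fs, f_cross=2.0):
    dt = 1.0 / fs
    alpha = 1.0 / (1.0 + 2 * np.pi * f_cross * dt)     # blend factor from crossover
    v_diff = np.gradient(pos, dt)                       # noisy but drift-free
    v_est = np.zeros_like(pos)
    for k in range(1, len(pos)):
        v_pred = v_est[k - 1] + acc[k] * dt             # acceleration integration
        v_est[k] = alpha * v_pred + (1 - alpha) * v_diff[k]
    return v_est

# Synthetic shaking-table-like motion with sensor noise (assumed values)
fs = 1000.0
t = np.arange(0, 5, 1 / fs)
true_v = 0.2 * 2 * np.pi * 3 * np.cos(2 * np.pi * 3 * t)
pos = 0.2 * np.sin(2 * np.pi * 3 * t) + 1e-4 * np.random.randn(t.size)
acc = np.gradient(true_v, 1 / fs) + 0.05 * np.random.randn(t.size)
v = complementary_velocity(pos, acc, fs)
print("RMS velocity error:", np.sqrt(np.mean((v - true_v) ** 2)))
```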

  10. Estimation of Bridge Reliability Distributions

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper

  11. Estimating uncertainty in map intersections

    Science.gov (United States)

    Ronald E. McRoberts; Mark A. Hatfield; Susan J. Crocker

    2009-01-01

    Traditionally, natural resource managers have asked the question "How much?" and have received sample-based estimates of resource totals or means. Increasingly, however, the same managers are now asking the additional question "Where?" and are expecting spatially explicit answers in the form of maps. Recent development of natural resource databases...

  12. Nonparametric estimation of ultrasound pulses

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Leeman, Sidney

    1994-01-01

    An algorithm for nonparametric estimation of 1D ultrasound pulses in echo sequences from human tissues is derived. The technique is a variation of the homomorphic filtering technique using the real cepstrum, and the underlying basis of the method is explained. The algorithm exploits a priori...

  13. State Estimation for Tensegrity Robots

    Science.gov (United States)

    Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas

    2016-01-01

    Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.

  14. An Improved Cluster Richness Estimator

    Energy Technology Data Exchange (ETDEWEB)

    Rozo, Eduardo; /Ohio State U.; Rykoff, Eli S.; /UC, Santa Barbara; Koester, Benjamin P.; /Chicago U. /KICP, Chicago; McKay, Timothy; /Michigan U.; Hao, Jiangang; /Michigan U.; Evrard, August; /Michigan U.; Wechsler, Risa H.; /SLAC; Hansen, Sarah; /Chicago U. /KICP, Chicago; Sheldon, Erin; /New York U.; Johnston, David; /Houston U.; Becker, Matthew R.; /Chicago U. /KICP, Chicago; Annis, James T.; /Fermilab; Bleem, Lindsey; /Chicago U.; Scranton, Ryan; /Pittsburgh U.

    2009-08-03

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²_ln L_X = (0.86 ± 0.02)² to σ²_ln L_X = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.

  15. Function Estimation Employing Exponential Splines

    Science.gov (United States)

    Dose, V.; Fischer, R.

    2005-11-01

    We introduce and discuss the use of the exponential spline family for Bayesian nonparametric function estimation. Exponential splines span the range of shapes between the limiting cases of traditional cubic spline and piecewise linear interpolation. They are therefore particularly suited for problems where both smooth and rapid function changes occur.

  16. Estimation of Synchronous Machine Parameters

    Directory of Open Access Journals (Sweden)

    Oddvar Hallingstad

    1980-01-01

    Full Text Available The present paper gives a short description of an interactive estimation program based on the maximum likelihood (ML method. The program may also perform identifiability analysis by calculating sensitivity functions and the Hessian matrix. For the short circuit test the ML method is able to estimate the q-axis subtransient reactance x''q, which is not possible by means of the conventional graphical method (another set of measurements has to be used. By means of the synchronization and close test, the ML program can estimate the inertial constant (M, the d-axis transient open circuit time constant (T'do, the d-axis subtransient o.c.t.c (T''do and the q-axis subtransient o.c.t.c (T''qo. In particular, T''qo is difficult to estimate by any of the methods at present in use. Parameter identifiability is thoroughly examined both analytically and by numerical methods. Measurements from a small laboratory machine are used.

  17. Expression-Invariant Age Estimation

    NARCIS (Netherlands)

    Alnajar, F.; Lou, Z.; Alvarez, J.; Gevers, T.; Valstar, M.; French, A.; Pridmore, T.

    2014-01-01

    In this paper, we investigate and exploit the influence of facial expressions on automatic age estimation. Different from existing approaches, our method jointly learns the age and expression by introducing a new graphical model with a latent layer between the age/expression labels and the features.

  18. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.

  19. Progress Toward Automated Cost Estimation

    Science.gov (United States)

    Brown, Joseph A.

    1992-01-01

    Report discusses efforts to develop standard system of automated cost estimation (ACE) and computer-aided design (CAD). Advantage of system is time saved and accuracy enhanced by automating extraction of quantities from design drawings, consultation of price lists, and application of cost and markup formulas.

  20. Properties of Estimated Characteristic Roots

    DEFF Research Database (Denmark)

    Nielsen, Bent; Nielsen, Heino Bohn

    Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis the problems appear...

  1. Estimating landscape resistance to dispersal

    Science.gov (United States)

    Graves, Tabitha A.; Chandler, Richard B.; Royle, J. Andrew; Beier, Paul; Kendall, Katherine C.

    2014-01-01

    Dispersal is an inherently spatial process that can be affected by habitat conditions in sites encountered by dispersers. Understanding landscape resistance to dispersal is important in connectivity studies and reserve design, but most existing methods use resistance functions with cost parameters that are subjectively chosen by the investigator. We develop an analytic approach allowing for direct estimation of resistance parameters that folds least cost path methods typically used in simulation approaches into a formal statistical model of dispersal distributions. The core of our model is a frequency distribution of dispersal distances expressed as least cost distance rather than Euclidean distance, and which includes terms for feature-specific costs to dispersal and sex (or other traits) of the disperser. The model requires only origin and settlement locations for multiple individuals, such as might be obtained from mark–recapture studies or parentage analyses, and maps of the relevant habitat features. To evaluate whether the model can estimate parameters correctly, we fit our model to data from simulated dispersers in three kinds of landscapes (in which resistance of environmental variables was categorical, continuous with a patchy configuration, or continuous in a trend pattern). We found maximum likelihood estimators of resistance and individual trait parameters to be approximately unbiased with moderate sample sizes. We applied the model to a small grizzly bear dataset to demonstrate how this approach could be used when the primary interest is in the prediction of costs and found that estimates were consistent with expectations based on bear ecology. Our method has important practical applications for testing hypotheses about dispersal ecology and can be used to inform connectivity planning efforts, via the resistance estimates and confidence intervals, which can be used to create a data-driven resistance surface.

  2. Techniques for estimating allometric equations.

    Science.gov (United States)

    Manaster, B J; Manaster, S

    1975-11-01

    Morphologists have long been aware that differential size relationships of variables can be of great value when studying shape. Allometric patterns have been the basis of many interpretations of adaptations, biomechanisms, and taxonomies. It is of importance that the parameters of the allometric equation be as accurate estimates as possible since they are so commonly used in such interpretations. Since the error term may come into the allometric relation either exponentially or additively, there are at least two methods of estimating the parameters of the allometric equation. That most commonly used assumes exponentiality of the error term, and operates by forming a linear function by a logarithmic transformation and then solving by the method of ordinary least squares. On the other hand, if the error term comes into the equation in an additive way, a nonlinear method may be used, searching the parameter space for those parameters which minimize the sum of squared residuals. Study of data on body weight and metabolism in birds explores the issues involved in discriminating between the two models by working through a specific example and shows that these two methods of estimation can yield highly different results. Not only minimizing the sum of squared residuals, but also the distribution and randomness of the residuals must be considered in determining which model more precisely estimates the parameters. In general there is no a priori way to tell which model will be best. Given the importance often attached to the parameter estimates, it may be well worth considerable effort to find which method of solution is appropriate for a given set of data.
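    The two estimation routes described above can be sketched directly: ordinary least squares on the log-transformed relation (multiplicative/exponential error) versus nonlinear least squares on the raw scale (additive error). The synthetic data below are assumptions chosen only to show that the two fits need not agree.

```python
import numpy as np
from scipy.optimize import curve_fit

# The two fits discussed above for y = a * x**b: (1) OLS on log-transformed data,
# appropriate when the error enters multiplicatively, and (2) nonlinear least
# squares on the raw scale, appropriate for additive error. Data are illustrative.

rng = np.random.default_rng(2)
x = rng.uniform(10, 1000, 80)                                 # e.g. body mass
y = 4.0 * x**0.72 * np.exp(0.1 * rng.standard_normal(80))     # multiplicative error

# (1) log-log OLS
b_log, log_a = np.polyfit(np.log(x), np.log(y), 1)
a_log = np.exp(log_a)

# (2) nonlinear least squares under an additive-error assumption
(a_nls, b_nls), _ = curve_fit(lambda x, a, b: a * x**b, x, y, p0=[1.0, 0.7])

print(f"log-log OLS : a = {a_log:.2f}, b = {b_log:.3f}")
print(f"nonlinear LS: a = {a_nls:.2f}, b = {b_nls:.3f}")
# Inspecting the residual pattern on both scales helps decide which error model fits.
```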

  3. Application of a finite size of the charge cloud shape generated by an X-ray photon inside the CCD

    CERN Document Server

    Tsunemi, H; Miyata, E

    2002-01-01

    A mesh experiment enables us to specify the X-ray landing position on a charge-coupled device (CCD) with subpixel resolution. By this experiment, we find that the final charge cloud shape generated by Ti-K X-ray photons (4.5 keV) in the CCD is about 1.5×1.1 μm² (standard deviation). An X-ray photon photoabsorbed in the CCD generates a number of electrons, forming an X-ray event. It becomes up to a 4-pixel-split event since the pixel size of the CCD used (12 μm square pixel) is bigger than the charge cloud size. Using the mesh experiment, we can determine the X-ray landing position on the CCD. In this way, we can compare the estimated X-ray landing position with the actual landing position on the CCD. Employing the charge cloud shape, we can improve the position resolution of the X-ray CCD by referring to the X-ray event pattern. We find that the position accuracy of our method is about 1.0 μm. We discuss our method, comparing it with the charge centroid method.

  4. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    as the corresponding sensitivity equations are discussed. Chapter 6 summarizes the main contribution of this thesis. It briefly discusses the pros and cons of using the extended linear quadratic control framework for solution of deterministic optimal control problems. Appendices. Appendix A demonstrates how quadratic...... successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is introduction and definition of the extended linear quadratic optimal control problem for solution of numerical problems arising in moving horizon estimation and control....... An efficient structure-employing methodology for solution of the extended linear quadratic optimal control problem is provided and it is discussed how this solution is employed in solution of constrained model predictive control problems as well as in the solution of nonlinear optimal control and estimation...

  5. Organ volume estimation using SPECT

    CERN Document Server

    Zaidi, H

    1996-01-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of 1) fixed thresholding, 2) automatic thresholding, 3) attenuation, 4) scatter, and 5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are perform...
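    The fixed-threshold voxel-counting step mentioned above can be sketched as follows; the threshold fraction, voxel size and synthetic phantom are assumptions, and the attenuation and scatter corrections of the paper are not reproduced.

```python
import numpy as np

# Sketch of the fixed-threshold volume estimate: count voxels whose reconstructed
# counts exceed a fraction of the maximum and multiply by the voxel volume.

def volume_ml(volume_counts, voxel_mm, threshold_frac=0.42):
    v = np.asarray(volume_counts, dtype=float)
    mask = v >= threshold_frac * v.max()
    voxel_ml = np.prod(voxel_mm) / 1000.0          # mm^3 -> millilitres
    return mask.sum() * voxel_ml

# Synthetic "thyroid" phantom: an ellipsoid of activity in a 64^3 volume
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
phantom = ((x / 10.0)**2 + (y / 6.0)**2 + (z / 14.0)**2 <= 1).astype(float) * 100
phantom += np.random.randn(*phantom.shape) * 3     # background noise
true_ml = 4 / 3 * np.pi * 10 * 6 * 14 * np.prod([3.0, 3.0, 3.0]) / 1000
print(f"estimated {volume_ml(phantom, (3.0, 3.0, 3.0)):.0f} ml, true ~{true_ml:.0f} ml")
```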

  6. Cost Estimates and Investment Decisions

    Energy Technology Data Exchange (ETDEWEB)

    Emhjellen, Kjetil; Emhjellen Magne; Osmundsen, Petter

    2001-08-01

    When evaluating new investment projects, oil companies traditionally use the discounted cashflow method. This method requires expected cashflows in the numerator and a risk adjusted required rate of return in the denominator in order to calculate net present value. The capital expenditure (CAPEX) of a project is one of the major cashflows used to calculate net present value. Usually the CAPEX is given by a single cost figure, with some indication of its probability distribution. In the oil industry and many other industries, it is common practice to report a CAPEX that is the estimated 50/50 (median) CAPEX instead of the estimated expected (expected value) CAPEX. In this article we demonstrate how the practice of using a 50/50 (median) CAPEX, when the cost distributions are asymmetric, causes project valuation errors and therefore may lead to wrong investment decisions with acceptance of projects that have negative net present values. (author)
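    A small numeric illustration of this point: for a right-skewed (here lognormal) cost distribution the median CAPEX lies below the expected CAPEX, so using the 50/50 figure can flip an apparently positive NPV. All figures are assumed for illustration.

```python
import numpy as np

# Numeric illustration: with a right-skewed cost distribution the 50/50 (median)
# CAPEX understates the expected CAPEX, which inflates the apparent NPV.
# Distribution and cashflow figures are assumed for illustration only.

rng = np.random.default_rng(3)
capex = rng.lognormal(mean=np.log(100.0), sigma=0.5, size=200_000)   # million USD

median_capex = np.median(capex)          # ~100
expected_capex = capex.mean()            # ~100 * exp(0.5**2 / 2) ~ 113

pv_revenues = 110.0                      # present value of net revenues (assumed)
print(f"median CAPEX   {median_capex:6.1f} -> NPV {pv_revenues - median_capex:+6.1f}")
print(f"expected CAPEX {expected_capex:6.1f} -> NPV {pv_revenues - expected_capex:+6.1f}")
```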

  7. Estimating Foreign Exchange Reserve Adequacy

    Directory of Open Access Journals (Sweden)

    Abdul Hakim

    2013-04-01

    Full Text Available Accumulating foreign exchange reserves, despite their cost and their impacts on other macroeconomics variables, provides some benefits. This paper models such foreign exchange reserves. To measure the adequacy of foreign exchange reserves for import, it uses total reserves-to-import ratio (TRM. The chosen independent variables are gross domestic product growth, exchange rates, opportunity cost, and a dummy variable separating the pre and post 1997 Asian financial crisis. To estimate the risky TRM value, this paper uses conditional Value-at-Risk (VaR, with the help of Glosten-Jagannathan-Runkle (GJR model to estimate the conditional volatility. The results suggest that all independent variables significantly influence TRM. They also suggest that the short and long run volatilities are evident, with the additional evidence of asymmetric effects of negative and positive past shocks. The VaR, which are calculated assuming both normal and t distributions, provide similar results, namely violations in 2005 and 2008.

  8. Estimation of joint position error.

    Science.gov (United States)

    Agostini, Valentina; Rosati, Samanta; Balestra, Gabriella; Trucco, Marco; Visconti, Lorenzo; Knaflitz, Marco

    2017-07-01

    Joint position error (JPE) is frequently used to assess proprioception in rehabilitation and sport science. During position-reposition tests the subject is asked to replicate a specific target angle (e.g. 30° of knee flexion) for a specific number of times. The aim of this study is to find an effective method to estimate JPE from the joint kinematic signal. Forty healthy subjects were tested to assess knee joint position sense. Three different methods of JPE estimation are described and compared using a hierarchical clustering approach. Overall, the 3 methods showed a high degree of similarity, ranging from 88% to 100%. We concluded that it is preferable to use the more user-independent method, in which the operator does not have to manually place "critical" markers.

  9. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  10. Location Estimation using Delayed Measurements

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Nørgård, Peter Magnus

    1998-01-01

    When combining data from various sensors it is vital to acknowledge possible measurement delays. Furthermore, the sensor fusion algorithm, often a Kalman filter, should be modified in order to handle the delay. The paper examines different possibilities for handling delays and applies a new...... technique to a sensor fusion system for estimating the location of an autonomous guided vehicle. The system fuses encoder and vision measurements in an extended Kalman filter. Results from experiments in a real environment are reported...

  11. Backgrounds in AFP Detector Estimation

    CERN Document Server

    Huang, Yicong

    2016-01-01

    The ATLAS Forward Proton (AFP) detectors aim to measure protons that are scattered in the ATLAS interaction point under very small angles (90–160 μrad). The diffractive protons detected by the AFP may be accompanied by beam halo. This report presents an estimation of the beam halo backgrounds in the AFP using low pile-up data, and position distributions of the backgrounds in the AFP.

  12. Position Estimation Using Image Derivative

    Science.gov (United States)

    Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato

    2015-01-01

    This paper describes an image processing algorithm to process Moon and/or Earth images. The theory presented is based on the fact that Moon hard edge points are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. Moon center and radius are then estimated by nonlinear least-squares using circular sigmoid functions. The proposed image processing has been applied and validated using real and synthetic Moon images.
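    A simplified stand-in for this pipeline fits a circle to candidate limb points by linear least squares (the algebraic Kasa fit); the paper itself uses circular sigmoid functions and a nonlinear solver, so the snippet below only illustrates the centre-and-radius estimation step on assumed synthetic points.

```python
import numpy as np

# Simplified stand-in: take limb candidate points (e.g. highest-gradient pixels)
# and fit a circle with a linear least-squares (Kasa) fit. The paper itself uses
# circular sigmoid functions and a nonlinear solver.

def fit_circle(xs, ys):
    # Solve x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, yc = -D / 2.0, -E / 2.0
    r = np.sqrt(xc**2 + yc**2 - F)
    return xc, yc, r

# Synthetic limb points: an arc of a circle plus pixel-level noise (assumed values)
theta = np.linspace(-0.8, 0.8, 60)
xs = 250.0 + 90.0 * np.cos(theta) + 0.5 * np.random.randn(theta.size)
ys = 180.0 + 90.0 * np.sin(theta) + 0.5 * np.random.randn(theta.size)
print(np.round(fit_circle(xs, ys), 2))      # should be close to (250, 180, 90)
```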

  13. Mode choice model parameters estimation

    OpenAIRE

    Strnad, Irena

    2010-01-01

    The present work focuses on parameter estimation of two mode choice models: multinomial logit and EVA 2 model, where four different modes and five different trip purposes are taken into account. Mode choice model discusses the behavioral aspect of mode choice making and enables its application to a traffic model. Mode choice model includes mode choice affecting trip factors by using each mode and their relative importance to choice made. When trip factor values are known, it...

  14. Estimating the coherence of noise

    Science.gov (United States)

    Wallman, Joel

    To harness the advantages of quantum information processing, quantum systems have to be controlled to within some maximum threshold error. Certifying whether the error is below the threshold is possible by performing full quantum process tomography, however, quantum process tomography is inefficient in the number of qubits and is sensitive to state-preparation and measurement errors (SPAM). Randomized benchmarking has been developed as an efficient method for estimating the average infidelity of noise to the identity. However, the worst-case error, as quantified by the diamond distance from the identity, can be more relevant to determining whether an experimental implementation is at the threshold for fault-tolerant quantum computation. The best possible bound on the worst-case error (without further assumptions on the noise) scales as the square root of the infidelity and can be orders of magnitude greater than the reported average error. We define a new quantification of the coherence of a general noise channel, the unitarity, and show that it can be estimated using an efficient protocol that is robust to SPAM. Furthermore, we also show how the unitarity can be used with the infidelity obtained from randomized benchmarking to obtain improved estimates of the diamond distance and to efficiently determine whether experimental noise is close to stochastic Pauli noise.

  15. Population entropies estimates of proteins

    Science.gov (United States)

    Low, Wai Yee

    2017-05-01

    The Shannon entropy equation provides a way to estimate variability of amino acids sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and thus, correction for alignment size bias is needed. In the current work, an R based package named EntropyCorrect that enables estimation of population entropy is presented and an empirical study on how well this new algorithm performs on simulated dataset of various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
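    As a sketch of the general idea only (EntropyCorrect itself is an R package; the Python code, residue alphabet, and frequencies below are assumptions, and the package's exact correction may differ), column entropies can be computed and extrapolated to the population by regressing subsample entropies on the inverse sample size:

```python
import numpy as np
from collections import Counter

# Shannon entropy of an alignment column, plus a simple subsampling-and-
# regression extrapolation: estimate mean entropy at several sample sizes,
# regress on 1/n, and take the intercept (n -> infinity) as the population
# entropy estimate.

def shannon_entropy(column):
    counts = np.array(list(Counter(column).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)
population = rng.choice(list("ACDEFGHIKL"), size=5000,
                        p=[0.3, 0.2, 0.1, 0.1, 0.08, 0.07, 0.06, 0.05, 0.03, 0.01])

sample_sizes = [20, 40, 80, 160, 320]
entropies = []
for n in sample_sizes:
    reps = [shannon_entropy(rng.choice(population, size=n, replace=False))
            for _ in range(50)]
    entropies.append(np.mean(reps))

slope, intercept = np.polyfit(1.0 / np.array(sample_sizes), entropies, 1)
print("extrapolated population entropy ≈", round(intercept, 3))
print("entropy of full population       =", round(shannon_entropy(population), 3))
```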

  16. Cognitive estimation and its assessment.

    Science.gov (United States)

    Gansler, David A; Varvaris, Mark; Swenson, Lance; Schretlen, David J

    2014-01-01

    We evaluated the internal consistency and construct and criterion validity of a 10-item revision of the Cognitive Estimation Task (CET-R) developed by Shallice and Evans to assess problem-solving hypothesis generation. The CET-R was administered to 216 healthy adults from the Aging, Brain Imaging, and Cognition study and 57 adult outpatients with schizophrenia. Exploratory and confirmatory factor analysis (EFA and CFA) of the healthy sample revealed that seven of the 10 CET-R items constitute a more internally consistent scale (CET-R-7). Though EFA indicated that two CET-R-7 dimensions might be present (length and speed/time estimation, respectively), CFA confirmed that a single factor best represents the seven items. The CET-R-7 was modeled best by crystallized intelligence, adequately by fluid intelligence, and inadequately by visuospatial problem solving. Performance on the CET-R-7 correlated significantly with the neuropsychological domains of speed and fluency, but not memory or executive function. Finally, CET-R performance differed by diagnosis, sex, and education, but not age. This study identified an internally consistent set of items that measures the construct of cognitive estimation. This construct relates to several important dimensions of psychological functioning, including crystallized and fluid intelligence, generativity, and self-monitoring. It also is sensitive to cognitive dysfunction in adults with schizophrenia.

  17. Learning headway estimation in driving.

    Science.gov (United States)

    Taieb-Maimon, Meirav

    2007-08-01

    The main purpose of the present study was to examine to what extent the ability to attain a required headway of 1 or 2 s can be improved through practical driving instruction under real traffic conditions and whether the learning is sustained after a period during which there has been no controlled training. The failure of drivers to estimate headways correctly has been demonstrated in previous studies. Two methods of training were used: time based (in seconds) and distance based (in a combination of meters and car lengths). For each method, learning curves were examined for 18 participants at speeds of 50, 80, and 100 km/hr. The results indicated that drivers were weak in estimating headway prior to training using both methods. The learning process was rapid for both methods and similar for all speeds; thus, after one trial with feedback, there was already a significant improvement. The learning was retained over time, for at least the 1 month examined in this study. Both the time and distance training of headway improved drivers' ability to attain required headways, with the learning being maintained over a retention interval. The learning process was based on perceptual cues from the driving scene and feedback from the experimenter, regardless of the formal training method. The implications of these results are that all drivers should be trained in headway estimation using an objective distance measuring device, which can be installed on driver instruction vehicles.

  18. Abundance estimation and Conservation Biology

    Directory of Open Access Journals (Sweden)

    Nichols, J. D.

    2004-06-01

    Full Text Available Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al., 1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that

  19. Abundance estimation and conservation biology

    Science.gov (United States)

    Nichols, J.D.; MacKenzie, D.I.

    2004-01-01

    Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al.,1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that our attention

  20. Parameter Estimation Using VLA Data

    Science.gov (United States)

    Venter, Willem C.

    The main objective of this dissertation is to extract parameters from multiple wavelength images, on a pixel-to-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The very large array (VLA) at Socorro in New Mexico was used to observe planetary nebula NGC 7027 at three different wavelengths, 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Due to the fact that the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time when the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques results in estimated parameters that are also not physically realizable. The estimated values for the temperature parameters are for example either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. This technique consists of two parts: in the first part the original observed data are smoothed using a running window and in the second part a similar smoothing of the estimated parameters

  1. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties... Estimated Duties § 141.102 When deposit of estimated duties, estimated taxes, or both not required. Entry or... duties, or estimated taxes, or both, as specifically noted: (a) Cigars and cigarettes. A qualified dealer...

  2. Estimating the Costs of Preventive Interventions

    Science.gov (United States)

    Foster, E. Michael; Porter, Michele M.; Ayers, Tim S.; Kaplan, Debra L.; Sandler, Irwin

    2007-01-01

    The goal of this article is to improve the practice and reporting of cost estimates of prevention programs. It reviews the steps in estimating the costs of an intervention and the principles that should guide estimation. The authors then review prior efforts to estimate intervention costs using a sample of well-known but diverse studies. Finally,…

  3. Empirical methods in the evaluation of estimators

    Science.gov (United States)

    Gerald S. Walton; C.J. DeMars; C.J. DeMars

    1973-01-01

    The authors discuss the problem of selecting estimators of density and survival by making use of data on a forest-defoliating larva, the spruce budworm. Various estimators are compared. The results show that, among the estimators considered, ratio-type estimators are superior in terms of bias and variance. The methods used in making comparisons, particularly simulation...

  4. Load Estimation by Frequency Domain Decomposition

    DEFF Research Database (Denmark)

    Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune

    2007-01-01

    When performing operational modal analysis the dynamic loading is unknown, however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in frequency domain are estimated...... and the errors on the estimated loads are determined....
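    The core inverse-filtering step can be sketched as follows (an illustration only, not the paper's procedure: a single-degree-of-freedom frequency response function and its parameters are assumed here, whereas the paper builds the transfer matrix from estimated modal properties):

```python
import numpy as np

# Frequency-domain load estimation by inverse filtering: given a measured
# response spectrum X(w) and a frequency response function H(w), the load
# spectrum is F(w) = X(w) / H(w).  H(w) below is an assumed single-degree-of-
# freedom receptance; in practice it would come from identified modal data.

fs, n = 1024.0, 4096
m, c, k = 1.0, 2.0, (2 * np.pi * 20.0) ** 2          # assumed modal parameters

true_force = np.random.default_rng(2).normal(0.0, 1.0, n)   # "unknown" load
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
w = 2 * np.pi * freqs
H = 1.0 / (-m * w**2 + 1j * c * w + k)               # receptance FRF

X = H * np.fft.rfft(true_force)                      # simulated measured response

# Inverse filtering, lightly regularised where |H| is very small.
eps = 1e-3 * np.abs(H).max()
F_est = X * np.conj(H) / (np.abs(H) ** 2 + eps**2)
force_est = np.fft.irfft(F_est, n)

print("relative reconstruction error:",
      np.linalg.norm(force_est - true_force) / np.linalg.norm(true_force))
```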

  5. On semiautomatic estimation of surface area

    DEFF Research Database (Denmark)

    Dvorak, J.; Jensen, Eva B. Vedel

    2013-01-01

    In this paper, we propose a semiautomatic procedure for estimation of particle surface area. It uses automatic segmentation of the boundaries of the particle sections and applies different estimators depending on whether the segmentation was judged by a supervising expert to be satisfactory....... If the segmentation is correct the estimate is computed automatically, otherwise the expert performs the necessary measurements manually. In case of convex particles we suggest to base the semiautomatic estimation on the so-called flower estimator, a new local stereological estimator of particle surface area....... For convex particles, the estimator is equal to four times the area of the support set (flower set) of the particle transect. We study the statistical properties of the flower estimator and compare its performance to that of two discretizations of the flower estimator, namely the pivotal estimator...

  6. Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA...... estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using......-squares (WLS) DOA estimator....
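    For orientation only, the simpler conventional route mentioned in the abstract (TDOA from a cross-correlation peak, then a DOA from the array geometry) looks like the sketch below; the geometry, sample rate, and signals are assumed, and this is not the minimum-variance phase-filtering estimator the paper proposes:

```python
import numpy as np

# Two-microphone DOA estimate: find the time-difference of arrival (TDOA) from
# the peak of the cross-correlation, then convert it to an angle with
# theta = arcsin(c * tau / d).

fs = 48000.0          # sample rate [Hz]
c = 343.0             # speed of sound [m/s]
d = 0.20              # microphone spacing [m]
true_theta = np.deg2rad(25.0)
true_delay = d * np.sin(true_theta) / c              # seconds

rng = np.random.default_rng(3)
n = 4096
src = rng.normal(0.0, 1.0, n)
delay_samples = int(round(true_delay * fs))
mic1 = src + 0.05 * rng.normal(size=n)
mic2 = np.roll(src, delay_samples) + 0.05 * rng.normal(size=n)

corr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(corr) - (n - 1)                      # delay in samples
tau = lag / fs
theta_est = np.arcsin(np.clip(c * tau / d, -1.0, 1.0))
print(f"estimated DOA: {np.degrees(theta_est):.1f} deg (true: 25.0 deg)")
```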

  7. Applied parameter estimation for chemical engineers

    CERN Document Server

    Englezos, Peter

    2000-01-01

    Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam
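    As a minimal sketch of the Gauss-Newton iteration for an algebraic model (the model, data, and starting guess below are made up for the example; step-size control and the statistical inference discussed in the book are omitted):

```python
import numpy as np

# Gauss-Newton fit of the algebraic model y = k1 * x / (k2 + x) (a Michaelis-
# Menten-type rate expression) to synthetic data: at each iteration, solve the
# linearised least-squares problem J * delta = r for the parameter step.

rng = np.random.default_rng(4)
x = np.linspace(0.1, 5.0, 30)
k_true = np.array([2.0, 0.8])
y = k_true[0] * x / (k_true[1] + x) + rng.normal(0.0, 0.02, x.size)

def model(k):
    return k[0] * x / (k[1] + x)

def jacobian(k):
    return np.column_stack([x / (k[1] + x),                 # d model / d k1
                            -k[0] * x / (k[1] + x) ** 2])   # d model / d k2

k = np.array([1.0, 1.0])                                    # initial guess
for _ in range(20):
    r = y - model(k)                                         # residuals
    J = jacobian(k)
    delta, *_ = np.linalg.lstsq(J, r, rcond=None)            # Gauss-Newton step
    k = k + delta
    if np.linalg.norm(delta) < 1e-10:
        break

print("estimated parameters:", k)
```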

  8. Mass Estimation and Its Applications

    Science.gov (United States)

    2012-02-23

    [Fragmented report excerpt: contour maps produced from the mass values estimated with h:d-Trees for a lattice of equally spaced points in the feature space; a reference to online coordinate boosting (3rd IEEE On-line Learning for Computer Vision Workshop, 2009) and to D.M. Rocke and D.L. Woodruff; and a discussion of topologically distinct iTree structures.]

  9. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-06

    Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem that is very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded object such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.

  10. Minimum Distance and Robust Estimation.

    Science.gov (United States)

    1979-10-05

    [Garbled report excerpt: definitions of distance measures between distribution functions K and L, including a maximal interval probability distance of the form |(K(b) − K(a)) − (L(b) − L(a))| and an integral distance of the form ∫(K(x) − L(x))² dL(x); simulation results are reported to show that minimum-distance estimators are competitive with some of the better estimators proposed thus far.]

  11. Estimation of cerebrospinal fluid protein

    Science.gov (United States)

    Pennock, C. A.; Passant, L. P.; Bolton, F. G.

    1968-01-01

    Three turbidometric methods and one method using ultraviolet spectrophotometry for estimating total cerebrospinal fluid protein have been examined. The necessity for preliminary dialysis renders the ultraviolet method unsuitable for routine use. The turbidometric method of Meulemans (1960) using a sulphosalicylic acid-sodium sulphate precipitating fluid is better than a method using sulphosalicylic acid alone which is affected by the albumin-globulin ratio, and has a greater sensitivity and better reproducibility than a method using trichloracetic acid as a precipitant. Turbidity may be measured with a spectrophotometer or an MRC grey wedge photometer with human or bovine albumin as a standard. This method deserves wider acceptance. PMID:5697354

  12. Tidal Power Plant Energy Estimation

    OpenAIRE

    Silva–Casarín R.; Hiriart–Le Bert G.; López–González J.

    2010-01-01

    In this paper a methodology is presented which allows a quick and simple means of estimating the potential energy that can be obtained from a tidal power plant. The evaluation is made using a normalised nomograph, which is a function of the area of the tidal basin against the electricity installed capacity to thus obtain the potential energy for any location. The results describe two means of operation, one of "flow tide" and the other "flow–ebb tides", with two tidal basin systems operating:...

  13. USER STORY SOFTWARE ESTIMATION: A SIMPLIFICATION OF SOFTWARE ESTIMATION MODEL WITH DISTRIBUTED EXTREME PROGRAMMING ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Ridi Ferdiana

    2011-01-01

    Full Text Available Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through estimation serves to gauge the complexity of the software, to estimate the human resources required, and to gain better visibility of execution and of the process model. Many software estimation techniques work well under certain conditions or at certain steps of software engineering, for example lines of code, function points, COCOMO, or use case points. This paper proposes another estimation technique called Distributed eXtreme Programming Estimation (DXP Estimation). DXP Estimation provides a basic technique for teams using the eXtreme Programming method in onsite or distributed development. To the writer's knowledge, this is the first estimation technique applied to the agile eXtreme Programming method.

  14. Regional PV power estimation and forecast to mitigate the impact of high photovoltaic penetration on electric grid.

    Science.gov (United States)

    Pierro, Marco; De Felice, Matteo; Maggioni, Enrico; Moser, David; Perotto, Alessandro; Spada, Francesco; Cornaro, Cristina

    2017-04-01

    The growing photovoltaic generation results in stochastic variability of the electric demand that could compromise the stability of the grid and increase the required energy reserve and the energy imbalance cost. On a regional scale, solar power estimation and forecasting are becoming essential for Distribution System Operators, Transmission System Operators, energy traders, and aggregators of generation. Indeed, the estimation of regional PV power can be used for PV power supervision and real-time control of the residual load. Mid-term PV power forecasts can be employed for transmission scheduling to reduce energy imbalance and the related cost of penalties, residual load tracking, trading optimization, and secondary energy reserve assessment. In this context, a new upscaling method was developed and used for estimation and mid-term forecasting of the distributed photovoltaic generation in a small area in the north of Italy under the control of a local DSO. The method was based on spatial clustering of the PV fleet and neural network models that take satellite or numerical weather prediction data (centered on cluster centroids) as input to estimate or predict the regional solar generation. It requires low computational effort, and very little input information needs to be provided by users. The power estimation model achieved an RMSE of 3% of installed capacity. Intra-day forecasts (from 1 to 4 hours) obtained an RMSE of 5%-7%, while the one- and two-day forecasts achieved RMSEs of 7% and 7.5%, respectively. A model to estimate the forecast error and the prediction intervals was also developed. The photovoltaic production in the considered region provided 6.9% of the electric consumption in 2015. Since the PV penetration is very similar to that observed at the national level (7.9%), this is a good case study for analysing the impact of PV generation on the electric grid and the effects of PV power forecasting on transmission scheduling and on secondary reserve estimation. It appears that, already with 7% of PV

  15. Graph Sampling for Covariance Estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-04-25

    In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.

  16. CONSTRUCTING ACCOUNTING UNCERTAINITY ESTIMATES VARIABLE

    Directory of Open Access Journals (Sweden)

    Nino Serdarevic

    2012-10-01

    Full Text Available This paper presents research results on the BIH firms’ financial reporting quality, utilizing empirical relation between accounting conservatism, generated in created critical accounting policy choices, and management abilities in estimates and prediction power of domicile private sector accounting. Primary research is conducted based on firms’ financial statements, constructing CAPCBIH (Critical Accounting Policy Choices relevant in B&H) variable that presents particular internal control system and risk assessment; and that influences financial reporting positions in accordance with specific business environment. I argue that firms’ management possesses no relevant capacity to determine risks and true consumption of economic benefits, leading to creation of hidden reserves in inventories and accounts payable; and latent losses for bad debt and assets revaluations. I draw special attention to recent IFRS convergences to US GAAP, especially in harmonizing with FAS 130 Reporting comprehensive income (in revised IAS 1) and FAS 157 Fair value measurement. CAPCBIH variable, resulted in very poor performance, presents considerable lack of recognizing environment specifics. Furthermore, I underline the importance of revised ISAE and re-enforced role of auditors in assessing relevance of management estimates.

  17. Clustering with position-specific constraints on variance: applying redescending M-estimators to label-free LC-MS data analysis.

    Science.gov (United States)

    Frühwirth, Rudolf; Mani, D R; Pyne, Saumyadipta

    2011-08-31

    Clustering is a widely applicable pattern recognition method for discovering groups of similar observations in data. While there are a large variety of clustering algorithms, very few of these can enforce constraints on the variation of attributes for data points included in a given cluster. In particular, a clustering algorithm that can limit variation within a cluster according to that cluster's position (centroid location) can produce effective and optimal results in many important applications ranging from clustering of silicon pixels or calorimeter cells in high-energy physics to label-free liquid chromatography based mass spectrometry (LC-MS) data analysis in proteomics and metabolomics. We present MEDEA (M-Estimator with DEterministic Annealing), an M-estimator based, new unsupervised algorithm that is designed to enforce position-specific constraints on variance during the clustering process. The utility of MEDEA is demonstrated by applying it to the problem of "peak matching"--identifying the common LC-MS peaks across multiple samples--in proteomic biomarker discovery. Using real-life datasets, we show that MEDEA not only outperforms current state-of-the-art model-based clustering methods, but also results in an implementation that is significantly more efficient, and hence applicable to much larger LC-MS data sets. MEDEA is an effective and efficient solution to the problem of peak matching in label-free LC-MS data. The program implementing the MEDEA algorithm, including datasets, clustering results, and supplementary information is available from the author website at http://www.hephy.at/user/fru/medea/.

  18. Clustering with position-specific constraints on variance: Applying redescending M-estimators to label-free LC-MS data analysis

    Directory of Open Access Journals (Sweden)

    Mani D R

    2011-08-01

    Full Text Available Abstract Background Clustering is a widely applicable pattern recognition method for discovering groups of similar observations in data. While there are a large variety of clustering algorithms, very few of these can enforce constraints on the variation of attributes for data points included in a given cluster. In particular, a clustering algorithm that can limit variation within a cluster according to that cluster's position (centroid location) can produce effective and optimal results in many important applications ranging from clustering of silicon pixels or calorimeter cells in high-energy physics to label-free liquid chromatography based mass spectrometry (LC-MS) data analysis in proteomics and metabolomics. Results We present MEDEA (M-Estimator with DEterministic Annealing), an M-estimator based, new unsupervised algorithm that is designed to enforce position-specific constraints on variance during the clustering process. The utility of MEDEA is demonstrated by applying it to the problem of "peak matching"--identifying the common LC-MS peaks across multiple samples--in proteomic biomarker discovery. Using real-life datasets, we show that MEDEA not only outperforms current state-of-the-art model-based clustering methods, but also results in an implementation that is significantly more efficient, and hence applicable to much larger LC-MS data sets. Conclusions MEDEA is an effective and efficient solution to the problem of peak matching in label-free LC-MS data. The program implementing the MEDEA algorithm, including datasets, clustering results, and supplementary information is available from the author website at http://www.hephy.at/user/fru/medea/.

  19. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin

    2015-02-03

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7  cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2  km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  20. Noise estimation of beam position monitors at RHIC

    Energy Technology Data Exchange (ETDEWEB)

    Shen, X. [Indiana Univ., Bloomington, IN (United States); Bai, M. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Lee, S. Y. [Indiana Univ., Bloomington, IN (United States)

    2014-02-10

    Beam position monitors (BPM) are used to record the average orbits and transverse turn-by-turn displacements of the beam centroid motion. The Relativistic Heavy Ion Collider (RHIC) has 160 BPMs for each plane in each of the Blue and Yellow rings: 72 dual-plane BPMs in the insertion regions (IR) and 176 single-plane modules in the arcs. Each BPM is able to acquire 1024 or 4096 consecutive turn-by-turn beam positions. Inevitably, there are broadband noisy signals in the turn-by-turn data due to BPM electronics as well as other sources. A detailed study of the BPM noise performance is critical for reliable optics measurement and beam dynamics analysis based on turn-by-turn data.

  1. A Class of Modified Ratio Estimators for Estimation of Population Variance

    Directory of Open Access Journals (Sweden)

    Subramani J.

    2015-05-01

    Full Text Available In this paper we have proposed a class of modified ratio type variance estimators for estimation of the population variance of the study variable using known parameters of the auxiliary variable. The bias and mean squared error of the proposed estimators are obtained, and the conditions under which the proposed estimators perform better than the traditional ratio type variance estimator and existing modified ratio type variance estimators are derived. Further, we have compared the proposed estimators with the traditional ratio type variance estimator and existing modified ratio type variance estimators for certain natural populations.
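    For context, the traditional ratio-type variance estimator that such proposals modify is commonly written as s²y·(S²x/s²x), where S²x is the known population variance of the auxiliary variable; a minimal sketch on synthetic data (the population model and sample sizes are assumed, and the paper's specific modified forms are not reproduced here):

```python
import numpy as np

# Classical ratio-type estimator of the population variance of y using a
# correlated auxiliary variable x with known population variance S2x:
#     S2y_hat = s2y * (S2x / s2x)

rng = np.random.default_rng(5)
N, n = 10000, 100
x_pop = rng.gamma(shape=4.0, scale=2.0, size=N)
y_pop = 3.0 + 1.5 * x_pop + rng.normal(0.0, 2.0, N)

S2x = x_pop.var(ddof=1)                  # known population variance of x
idx = rng.choice(N, size=n, replace=False)
xs, ys = x_pop[idx], y_pop[idx]

s2y, s2x = ys.var(ddof=1), xs.var(ddof=1)
ratio_estimate = s2y * S2x / s2x

print("true S2y            :", round(y_pop.var(ddof=1), 2))
print("sample variance s2y :", round(s2y, 2))
print("ratio-type estimate :", round(ratio_estimate, 2))
```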

  2. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...... linearity of the estimator is established under weak conditions. Indeed, we show that the bandwidth conditions employed are necessary in some cases. A bias-corrected version of the estimator is proposed and shown to be asymptotically linear under yet weaker bandwidth conditions. Consistency of an analog...... estimator of the asymptotic variance is also established. To establish the results, a novel result on uniform convergence rates for kernel estimators is obtained....

  3. Equating accelerometer estimates among youth

    DEFF Research Database (Denmark)

    Brazendale, Keith; Beets, Michael W; Bornstein, Daniel B

    2016-01-01

    OBJECTIVES: Different accelerometer cutpoints used by different researchers often yield vastly different estimates of moderate-to-vigorous intensity physical activity (MVPA). This is recognized as cutpoint non-equivalence (CNE), which reduces the ability to accurately compare youth MVPA across...... percent error was 12.6% (range: 1.3 to 30.1) and the proportion of variance explained ranged from 66.7% to 99.8%. Mean difference for the best performing prediction equation (VC from EV) was -0.110 min·d(-1) (limits of agreement (LOA), -2.623 to 2.402). The mean difference for the worst performing...... prediction equation (FR3 from PY) was 34.76 min·d(-1) (LOA, -60.392 to 129.910). CONCLUSIONS: For six different sets of published cutpoints, the use of this equating system can assist individuals attempting to synthesize the growing body of literature on Actigraph, accelerometry-derived MVPA.

  4. 2007 Estimated International Energy Flows

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C A; Belles, R D; Simon, A J

    2011-03-10

    An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (i.e. coal, petroleum, natural gas) through transformations such as electricity generation to end uses (i.e. residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.

  5. Neutron background estimates in GESA

    Directory of Open Access Journals (Sweden)

    Fernandes A.C.

    2014-01-01

    Full Text Available The SIMPLE project looks for nuclear recoil events generated by rare dark matter scattering interactions. Nuclear recoils are also produced by more prevalent cosmogenic neutron interactions. While the rock overburden shields against (μ,n) neutrons to below 10⁻⁸ cm⁻² s⁻¹, it itself contributes via radio-impurities. Additional shielding of these is similar, both suppressing and contributing neutrons. We report on the Monte Carlo (MCNP) estimation of the on-detector neutron backgrounds for the SIMPLE experiment located in the GESA facility of the Laboratoire Souterrain à Bas Bruit, and its use in defining additional shielding for measurements which have led to a reduction in the extrinsic neutron background to ∼ 5 × 10⁻³ evts/kgd. The calculated event rate induced by the neutron background is ∼ 0.3 evts/kgd, with a dominant contribution from the detector container.

  6. Interval estimates and their precision

    Science.gov (United States)

    Marek, Luboš; Vrabec, Michal

    2015-06-01

    A task very often met in practice is the computation of confidence interval bounds for the relative frequency under sampling without replacement. A typical situation includes preelection estimates and similar tasks. In other words, we build the confidence interval for the parameter value M in the parent population of size N on the basis of a random sample of size n. There are many ways to build this interval. We can use a normal or binomial approximation. More accurate values can be looked up in tables. We consider one more method, based on MS Excel calculations. In our paper we compare these different methods for specific values of M and we discuss when the considered methods are suitable. The aim of the article is not to publish new theoretical methods. This article aims to show that there is a very simple way to compute the confidence interval bounds without approximations, without tables and without other software costs.
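    The same comparison can be sketched outside of spreadsheet software (the population size, sample size, observed count, and confidence level below are made up, and this is an illustration of the general idea rather than the authors' calculation): an exact interval for M based on hypergeometric tail probabilities versus the normal approximation with finite-population correction.

```python
import numpy as np
from scipy.stats import hypergeom, norm

# Exact confidence interval for the number M of "successes" in a finite
# population under sampling without replacement, versus the normal
# approximation with finite-population correction.

N_pop, n, k, alpha = 2000, 200, 84, 0.05     # population, sample, observed, level

# Exact (Clopper-Pearson style) bounds: keep every candidate M that is not
# rejected by the corresponding one-sided hypergeometric tail test.
candidates = np.arange(0, N_pop + 1)
lower_ok = [M for M in candidates if hypergeom.sf(k - 1, N_pop, M, n) >= alpha / 2]
upper_ok = [M for M in candidates if hypergeom.cdf(k, N_pop, M, n) >= alpha / 2]
M_lo, M_hi = min(lower_ok), max(upper_ok)

# Normal approximation with finite-population correction.
p_hat = k / n
se = np.sqrt(p_hat * (1 - p_hat) / n * (1 - n / N_pop))
z = norm.ppf(1 - alpha / 2)
approx_lo, approx_hi = N_pop * (p_hat - z * se), N_pop * (p_hat + z * se)

print(f"exact 95% CI for M : [{M_lo}, {M_hi}]")
print(f"normal approx.     : [{approx_lo:.0f}, {approx_hi:.0f}]")
```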

  7. Overdiagnosis: epidemiologic concepts and estimation

    Science.gov (United States)

    Bae, Jong-Myon

    2015-01-01

    Overdiagnosis of thyroid cancer was propounded regarding the rapidly increasing incidence in South Korea. Overdiagnosis is defined as ‘the detection of cancers that would never have been found were it not for the screening test’, and may be an extreme form of lead bias due to indolent cancers, as is inevitable when conducting a cancer screening programme. Because it is solely an epidemiological concept, it can be estimated indirectly by phenomena such as a lack of compensatory drop in post-screening periods, or discrepancies between incidence and mortality. The erstwhile trials for quantifying the overdiagnosis in screening mammography were reviewed in order to secure the data needed to establish its prevalence in South Korea. PMID:25824531

  8. Equating accelerometer estimates among youth

    DEFF Research Database (Denmark)

    Brazendale, Keith; Beets, Michael W; Bornstein, Daniel B

    2016-01-01

    OBJECTIVES: Different accelerometer cutpoints used by different researchers often yield vastly different estimates of moderate-to-vigorous intensity physical activity (MVPA). This is recognized as cutpoint non-equivalence (CNE), which reduces the ability to accurately compare youth MVPA across......,112 Actigraph accelerometer data files from 21 worldwide studies (children 3-18 years, 61.5% female) were used to develop prediction equations for six sets of published cutpoints. Linear and non-linear modeling, using a leave one out cross-validation technique, was employed to develop equations to convert MVPA...... from one set of cutpoints into another. Bland Altman plots illustrate the agreement between actual MVPA and predicted MVPA values. RESULTS: Across the total sample, mean MVPA ranged from 29.7 MVPA min·d(-1) (Puyau) to 126.1 MVPA min·d(-1) (Freedson 3 METs). Across conversion equations, median absolute...

  9. Runoff estimation in residential area

    Directory of Open Access Journals (Sweden)

    Meire Regina de Almeida Siqueira

    2013-12-01

    Full Text Available This study aimed to estimate the watershed runoff caused by extreme events that often result in the flooding of urban areas. The runoff of a residential area in the city of Guaratinguetá, São Paulo, Brazil was estimated using the Curve-Number method proposed by USDA-NRCS. The study also investigated current land use and land cover conditions, impermeable areas with pasture and indications of the reforestation of those areas. Maps and satellite images of Residential Riverside I Neighborhood were used to characterize the area. In addition to characterizing land use and land cover, the definition of the soil type infiltration capacity, the maximum local rainfall, and the type and quality of the drainage system were also investigated. The study showed that this neighborhood, developed in 1974, has an area of 792,700 m², a population of 1361 inhabitants, and a sloping area covered with degraded pasture (Guaratinguetá-Piagui Peak) located in front of the residential area. The residential area is located in a flat area near the Paraiba do Sul River, and has a poor drainage system with concrete pipes, mostly 0.60 m in diameter, with several openings that capture water and sediments from the adjacent sloping area. The Low Impact Development (LID) system appears to be a viable solution for this neighborhood drainage system. It can be concluded that the drainage system of the Guaratinguetá Riverside I Neighborhood has all of the conditions and characteristics that make it suitable for the implementation of a low impact urban drainage system. Reforestation of Guaratinguetá-Piagui Peak can reduce the basin’s runoff by 50% and minimize flooding problems in the Beira Rio neighborhood.
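    The USDA-NRCS (SCS) Curve-Number relation applied in such studies is simple enough to sketch directly; the curve numbers and storm depth below are illustrative, not the values determined for this neighborhood:

```python
# SCS Curve-Number runoff: potential retention S = 25400/CN - 254 (mm), initial
# abstraction Ia = 0.2*S, and direct runoff Q = (P - Ia)^2 / (P - Ia + S)
# whenever rainfall P exceeds Ia (otherwise Q = 0).

def scs_runoff_mm(rainfall_mm: float, curve_number: float) -> float:
    s = 25400.0 / curve_number - 254.0      # potential maximum retention [mm]
    ia = 0.2 * s                            # initial abstraction [mm]
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Example: a 90 mm storm on a largely impervious residential area (CN ~ 90)
# versus the same storm after reforestation of the adjacent slope (CN ~ 70).
for cn in (90.0, 70.0):
    print(f"CN = {cn:.0f}: runoff = {scs_runoff_mm(90.0, cn):.1f} mm")
```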

  10. Modelling Firm Innovation using Panel Probit Estimators.

    OpenAIRE

    Mark N. Harris; Mark Rogers; Anthony Siouclis

    2001-01-01

    Firm-level innovation is investigated using three probit panel estimators, which control for unobserved heterogeneity, and a standard probit estimator. Results indicate the standard probit model is misspecified and that inter-firm networks are important for innovation.

  11. Econometric Analysis on Efficiency of Estimator

    OpenAIRE

    M Khoshnevisan; Kaymram, F.; Singh, Housila P.; Singh, Rajesh; Smarandache, Florentin

    2003-01-01

    This paper investigates the efficiency of an alternative to the ratio estimator under a superpopulation model with uncorrelated errors and a gamma-distributed auxiliary variable. Comparisons with the usual ratio and unbiased estimators are also made.

  12. Fish population density estimates - 1969 Fredericton District

    National Research Council Canada - National Science Library

    Hyatt, R.A

    1969-01-01

    Discussions with biologists concerning electroseining methods for fish population density estimates led to the conduct of a series of comparisons between the Peterson Index type of estimate (mark and recapture...
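    The Petersen (mark-recapture) index referred to in this record, in its Chapman bias-corrected form, is short enough to show directly; the catch counts below are invented for illustration:

```python
# Chapman's bias-corrected Petersen estimator of population size:
#     N_hat = (M + 1)(C + 1) / (R + 1) - 1
# where M fish are marked on the first pass, C are captured on the second
# pass, and R of those are recaptured marked fish.

def chapman_estimate(marked: int, captured: int, recaptured: int) -> float:
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

n_hat = chapman_estimate(marked=150, captured=120, recaptured=18)
print(f"estimated population size: {n_hat:.0f}")
```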

  13. River Forecasting Center Quantitative Precipitation Estimate Archive

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Radar indicated-rain gage verified and corrected hourly precipitation estimate on a corrected ~4km HRAP grid. This archive contains hourly estimates of precipitation...

  14. Simplified Life-Cycle Cost Estimation

    Science.gov (United States)

    Remer, D. S.; Lorden, G.; Eisenberger, I.

    1983-01-01

    Simple method for life-cycle cost (LCC) estimation avoids pitfalls inherent in formulations requiring separate estimates of inflation and interest rates. The method depends for its validity on the observation that interest and inflation rates closely track each other.

  15. Estimating Inter-Deployment Training Cycle Performances

    National Research Council Canada - National Science Library

    Eriskin, Levent

    2003-01-01

    ... (COMET) metrics. The objective was primarily to decide whether the COMET database can be used to estimate the performances of ships, and to build regression models to estimate Final Evaluation Problem (FEP...

  16. Spectral Unmixing Applied to Desert Soils for the Detection of Sub-Pixel Disturbances

    Science.gov (United States)

    2012-09-01

    [Fragmented thesis excerpt: figure captions on scattering and absorption (from Olsen, 2007) and on spectral regions (Elachi and Van Zyl, 2006), with a figure sourced from http://www.astro.cornell.edu/academics/courses/astro201; fractional-abundance values are listed for gopher till (0.0419, 0.026, 0.073), home hill clay (0.092, 0.0087, 0.137), and hard picnic area clay (0.0805, 0.154, 0.000).]

  17. Visual in-plane positioning of a Labeled target with subpixel Resolution: basics and application

    Directory of Open Access Journals (Sweden)

    Patrick Sandoz

    2017-05-01

    Full Text Available Vision is a convenient tool for position measurements. In this paper, we present several applications in which a reference pattern can be defined on the target for a priori knowledge of image features and further optimization by software. Selecting pseudoperiodic patterns leads to high resolution in absolute phase measurements. This method is adapted to position encoding of live cell culture boxes. Our goal is to capture each biological image along with its absolute highly accurate position regarding the culture box itself. Thus, it becomes straightforward to find again an already observed region of interest when a culture box is brought back to the microscope stage from the cell incubator where it was temporarily placed for cell culture. In order to evaluate the performance of this method, we tested it during a wound healing assay of human liver tumor-derived cells. In this case, the procedure enabled more accurate measurements of the wound healing rate than the usual method. It was also applied to the characterization of the in-plane vibration amplitude from a tapered probe of a shear force microscope. The amplitude was interpolated by a quartz tuning fork with an attached pseudo-periodic pattern. Nanometer vibration amplitude resolution is achieved by processing the pattern images. Such pictures were recorded by using a common 20x magnification lens.

  18. Subpixel translation of MEMS measured by discrete fourier transform analysis of CCD images

    NARCIS (Netherlands)

    Yamahata, C.; Sarajlic, Edin; Stranczl, M.; Krijnen, Gijsbertus J.M.; Gijs, M.A.M.

    2011-01-01

    We present a straightforward method for measuring in-plane linear displacements of microelectromechanical systems (MEMS) with subnanometer resolution. The technique is based on Fourier transform analysis of a video recorded with a Charge-Coupled Device (CCD) camera attached to an optical microscope
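    A generic flavour of such Fourier-based displacement measurement can be sketched as follows (this is a common phase-correlation variant with parabolic peak refinement on synthetic images, not necessarily the authors' exact procedure):

```python
import numpy as np

# Sub-pixel translation between two images by phase correlation: normalise the
# cross-power spectrum, invert it to get a sharp correlation peak, locate the
# integer peak, then refine each axis with a parabola through the peak and its
# two neighbours.

def parabolic_refine(y_minus, y_0, y_plus):
    denom = y_minus - 2.0 * y_0 + y_plus
    return 0.0 if denom == 0 else 0.5 * (y_minus - y_plus) / denom

def subpixel_shift(img_a, img_b):
    """Shift (dy, dx) of img_b relative to img_a, in pixels."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    ny, nx = corr.shape
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    dy = parabolic_refine(corr[(iy - 1) % ny, ix], corr[iy, ix], corr[(iy + 1) % ny, ix])
    dx = parabolic_refine(corr[iy, (ix - 1) % nx], corr[iy, ix], corr[iy, (ix + 1) % nx])

    sy = iy + dy if iy <= ny // 2 else iy + dy - ny    # wrap to signed shifts
    sx = ix + dx if ix <= nx // 2 else ix + dx - nx
    return sy, sx

# Demo: a smooth blob and the same blob translated by a known sub-pixel amount.
yy, xx = np.mgrid[0:128, 0:128].astype(float)
blob = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 6.0 ** 2))
img_a = blob(64.0, 64.0)
img_b = blob(64.0 + 3.3, 64.0 - 1.7)                   # true shift (+3.3, -1.7) px
print("estimated shift:", subpixel_shift(img_a, img_b))
```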

  19. Finite mixture models for sub-pixel coastal land cover classification

    CSIR Research Space (South Africa)

    Ritchie, Michaela C

    2017-05-01

    Full Text Available monitoring and management. A solution for this problem might be the spectral unmixing classification approach on medium resolution imagery (e.g. Landsat 8; Sentinel-2) which have no acquisition cost and are therefore affordable for operational use. Finite...

  20. Detection of Spatially Unresolved (Nominally Sub-Pixel) Submerged and Surface Targets Using Hyperspectral Data

    Science.gov (United States)

    2012-09-01

    [Thesis front-matter fragment (Naval Postgraduate School Master of Science in Remote Sensing Intelligence; signature page naming David Trask and Dan Boger, Chair, Department of Information Sciences), followed by an abstract fragment: a navy mine-sweeping team searched the river for days; no mine was ever found and the threat was deemed a hoax, but the cost to maritime shipping...]

  1. Retrieval of subpixel snow covered area, grain size, and albedo from MODIS

    OpenAIRE

    Painter, Thomas H.; Rittger, Karl; McKenzie, Ceretha; Slaughter, Peter; Davis, Robert E.; Dozier, Jeff

    2009-01-01

    The article of record as published may be found at http://dx.doi.org/10.1016/j.rse.2009.01.001 We describe and validate a model that retrieves fractional snow-covered area and the grain size and albedo of that snow from surface reflectance data (product MOD09GA) acquired by NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). The model analyzes the MODIS visible, near infrared, and shortwave infrared bands with multiple endmember spectral mixtures from a library of snow, vegeta...

  2. Automated tracking of colloidal clusters with sub-pixel accuracy and precision

    Science.gov (United States)

    van der Wel, Casper; Kraft, Daniela J.

    2017-02-01

    Quantitative tracking of features from video images is a basic technique employed in many areas of science. Here, we present a method for the tracking of features that partially overlap, in order to be able to track so-called colloidal molecules. Our approach implements two improvements into existing particle tracking algorithms. Firstly, we use the history of previously identified feature locations to successfully find their positions in consecutive frames. Secondly, we present a framework for non-linear least-squares fitting to summed radial model functions and analyze the accuracy (bias) and precision (random error) of the method on artificial data. We find that our tracking algorithm correctly identifies overlapping features with an accuracy below 0.2% of the feature radius and a precision of 0.1 to 0.01 pixels for a typical image of a colloidal cluster. Finally, we use our method to extract the three-dimensional diffusion tensor from the Brownian motion of colloidal dimers.
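    The simpler single-feature case of such model fitting, a sub-pixel centroid from a least-squares fit of a 2-D Gaussian to the pixel intensities, can be sketched as follows (synthetic data and an isotropic Gaussian model are assumed; the paper's summed radial models for overlapping features are not reproduced):

```python
import numpy as np
from scipy.optimize import least_squares

# Sub-pixel centroid of a single bright feature: fit a 2-D Gaussian plus
# background offset to a small image patch and read off the fitted center.

rng = np.random.default_rng(7)
yy, xx = np.mgrid[0:21, 0:21].astype(float)
true_cy, true_cx, true_sigma, true_amp = 10.37, 9.81, 2.4, 100.0

def gaussian(params):
    cy, cx, sigma, amp, offset = params
    return offset + amp * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2))

image = gaussian([true_cy, true_cx, true_sigma, true_amp, 5.0])
image += rng.normal(0.0, 1.0, image.shape)            # camera noise

def residuals(params):
    return (gaussian(params) - image).ravel()

fit = least_squares(residuals, x0=[10.0, 10.0, 2.0, image.max(), 0.0])
cy_hat, cx_hat = fit.x[:2]
print(f"centroid estimate: ({cy_hat:.3f}, {cx_hat:.3f})  true: ({true_cy}, {true_cx})")
```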

  3. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
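    One of the most common conventional bandwidth selectors, Silverman's rule of thumb, together with a plain Gaussian kernel density estimate, looks like the sketch below (the data are synthetic, and this is not necessarily one of the specific estimators benchmarked in the study):

```python
import numpy as np

# Silverman's rule of thumb for a Gaussian kernel,
#   h = 0.9 * min(std, IQR/1.34) * n**(-1/5),
# followed by a plain Gaussian kernel density estimate on a grid.

def silverman_bandwidth(x):
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(x.std(ddof=1), iqr / 1.34) * x.size ** (-0.2)

def gaussian_kde(x, grid, h):
    # density(grid) = average over data points of N(grid; x_i, h^2)
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(8)
data = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(1.5, 1.0, 600)])
h = silverman_bandwidth(data)
grid = np.linspace(-5, 6, 200)
density = gaussian_kde(data, grid, h)
print("Silverman bandwidth:", round(h, 3),
      "| density integrates to", round(np.trapz(density, grid), 3))
```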

  4. Calibrated Weighting for Small Area Estimation

    OpenAIRE

    Chambers, R. L.

    2005-01-01

    Calibrated weighting methods for estimation of survey population characteristics are widely used. At the same time, model-based prediction methods for estimation of small area or domain characteristics are becoming increasingly popular. This paper explores weighting methods based on the mixed models that underpin small area estimates to see whether they can deliver equivalent small area estimation performance when compared with standard prediction methods and superior population level estimat...

  5. Probabilistic Lane Estimation using Basis Curves

    OpenAIRE

    Huang, Albert S.; Teller, Seth

    2010-01-01

    Lane estimation for autonomous driving can be formulated as a curve estimation problem, where local sensor data provides partial and noisy observations of spatial curves. The number of curves to estimate may be initially unknown and many of the observations may be outliers or false detections (due e.g. to tree shadows or lens flare). The challenges lie in detecting lanes when and where they exist, and updating lane estimates as new observations are made. This paper ...

  6. Applications of Generalized Method of Moments Estimation

    OpenAIRE

    Wooldridge, Jeffrey M.

    2001-01-01

    I describe how the method of moments approach to estimation, including the more recent generalized method of moments (GMM) theory, can be applied to problems using cross section, time series, and panel data. Method of moments estimators can be attractive because in many circumstances they are robust to failures of auxiliary distributional assumptions that are not needed to identify key parameters. I conclude that while sophisticated GMM estimators are indispensable for complicated estimation ...

  7. Simple nonparametric estimators for unemployment duration analysis

    OpenAIRE

    Wichert, Laura; Wilke, Ralf A.

    2007-01-01

    "We consider an extension of conventional univariate Kaplan-Meier type estimators for the hazard rate and the survivor function to multivariate censored data with a censored random regressor. It is an Akritas (1994) type estimator which adapts the nonparametric conditional hazard rate estimator of Beran (1981) to more typical data situations in applied analysis. We show with simulations that the estimator has nice finite sample properties and our implementation appears to be fast. As an appli...

  8. Parameter Uncertainty in Exponential Family Tail Estimation

    OpenAIRE

    Landsman, Z.; Tsanakas, A.

    2012-01-01

    Actuaries are often faced with the task of estimating tails of loss distributions from just a few observations. Thus estimates of tail probabilities (reinsurance prices) and percentiles (solvency capital requirements) are typically subject to substantial parameter uncertainty. We study the bias and MSE of estimators of tail probabilities and percentiles, with focus on 1-parameter exponential families. Using asymptotic arguments it is shown that tail estimates are subject to significant positi...

  9. A RELATIVE STUDY ON COST ESTIMATION TECHNIQUES

    OpenAIRE

    K. Jayapratha; M. Muthamizharasan

    2017-01-01

    Software Cost Estimation is one of the most important parts of software development. It involves estimating the effort and the cost, in terms of money, to complete the software development. Software Cost Estimation becomes especially important when the lines of code for a project exceed a certain limit, and when software is deployed with too many bugs and uncovered requirements the project will remain incomplete. Software cost estimation of a project plays a vital role in its acceptance or rejection...

  10. Constrained map-based inventory estimation

    Science.gov (United States)

    Paul C. Van Deusen; Francis A. Roesch

    2007-01-01

    A region can conceptually be tessellated into polygons at different scales or resolutions. Likewise, samples can be taken from the region to determine the value of a polygon variable for each scale. Sampled polygons can be used to estimate values for other polygons at the same scale. However, estimates should be compatible across the different scales. Estimates are...

  11. Estimating Stability Class in the Field

    Science.gov (United States)

    Leonidas G. Lavdas

    1997-01-01

    A simple and easily remembered method is described for estimating cloud ceiling height in the field. Estimating ceiling height provides the means to estimate stability class, a parameter used to help determine Dispersion Index and Low Visibility Occurrence Risk Index, indices used as smoke management aids. Stability class is also used as an input to VSMOKE, an...

  12. Instrumental variable estimation based on grouped data

    NARCIS (Netherlands)

    Bekker, Paul A.; Ploeg, Jan van der

    2000-01-01

    The paper considers the estimation of the coefficients of a single equation in the presence of dummy instruments. We derive pseudo ML and GMM estimators based on moment restrictions induced either by the structural form or by the reduced form of the model. The performance of the estimators is

  13. Instrumental variable estimation based on grouped data

    NARCIS (Netherlands)

    Bekker, PA; van der Ploeg, Jan

    The paper considers the estimation of the coefficients of a single equation in the presence of dummy instruments. We derive pseudo ML and GMM estimators based on moment restrictions induced either by the structural form or by the reduced form of the model. The performance of the estimators is
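
    As background to both records above: two-stage least squares with a full set of group dummies as instruments reduces to the classical grouping (Wald-type) estimator on group means. A minimal single-regressor sketch follows; it does not reproduce the papers' pseudo ML or GMM estimators.

      import numpy as np

      def tsls_group_iv(y, x, groups):
          """Two-stage least squares slope with group dummies as instruments.

          Equivalent to a group-size-weighted regression of group means of y
          on group means of x (single regressor, no extra covariates).
          """
          y, x, groups = map(np.asarray, (y, x, groups))
          labels = np.unique(groups)
          ybar = np.array([y[groups == g].mean() for g in labels])
          xbar = np.array([x[groups == g].mean() for g in labels])
          wts = np.array([np.sum(groups == g) for g in labels], dtype=float)
          xc = xbar - np.average(xbar, weights=wts)      # center at the overall means
          yc = ybar - np.average(ybar, weights=wts)
          return np.sum(wts * xc * yc) / np.sum(wts * xc ** 2)   # slope estimate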

  14. Estimating Loan-to-value Distributions

    DEFF Research Database (Denmark)

    Korteweg, Arthur; Sørensen, Morten

    2016-01-01

    procedure to recover the price path for individual properties and produce selection-corrected estimates of historical CLTV distributions. Estimating our model with transactions of residential properties in Alameda, California, we find that 35% of single-family homes are underwater, compared to 19% estimated...... by existing approaches. Our results reduce the index revision problem and have applications for pricing mortgage-backed securities....

  15. Current Term Enrollment Estimates: Spring 2014

    Science.gov (United States)

    National Student Clearinghouse, 2014

    2014-01-01

    Current Term Enrollment Estimates, published every December and May by the National Student Clearinghouse Research Center, include national enrollment estimates by institutional sector, state, enrollment intensity, age group, and gender. Enrollment estimates are adjusted for Clearinghouse data coverage rates by institutional sector, state, and…

  16. Sharp Strichartz estimates in spherical coordinates

    OpenAIRE

    Schippa, Robert

    2016-01-01

    We prove almost sharp Strichartz estimates found after adding regularity in the spherical coordinates for Schrödinger-like equations. The estimates are sharp up to endpoints. The proof relies on estimates involving spherical averages. Sharpness is discussed making use of a modified Knapp-type example.
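
    For context (a standard fact, not a claim about the preprint's results), the classical Strichartz estimate for the free Schrödinger propagator reads

      \| e^{it\Delta} f \|_{L^q_t L^r_x(\mathbb{R} \times \mathbb{R}^d)} \lesssim \| f \|_{L^2(\mathbb{R}^d)},
      \qquad \frac{2}{q} + \frac{d}{r} = \frac{d}{2}, \quad q, r \ge 2, \quad (q, r, d) \ne (2, \infty, 2),

    and the record above concerns variants obtained after adding regularity in the spherical coordinates.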

  17. Bayesian techniques for surface fuel loading estimation

    Science.gov (United States)

    Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell

    2016-01-01

    A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...

  18. Efficient Estimation in Heteroscedastic Varying Coefficient Models

    Directory of Open Access Journals (Sweden)

    Chuanhua Wei

    2015-07-01

    This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator for the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.
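
    The conventional local-linear baseline that the proposed estimator improves upon can be sketched as a kernel-weighted least squares fit at each point u0; the single-regressor notation y = a(u) * x + error below is assumed for illustration, and the paper's efficiency-improving reweighting is not reproduced.

      import numpy as np

      def local_linear_coef(u, x, y, u0, h):
          """Local-linear estimate of the coefficient function a(u0) in y = a(u)*x + eps.

          Solves a Gaussian-kernel-weighted least squares in (a(u0), a'(u0)).
          """
          w = np.exp(-0.5 * ((u - u0) / h) ** 2)          # kernel weights around u0
          X = np.column_stack([x, x * (u - u0)])          # local-linear design
          W = np.diag(w)
          beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
          return beta[0]                                   # a_hat(u0)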

  19. Stability constant estimator user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Hay, B.P.; Castleton, K.J.; Rustad, J.R.

    1996-12-01

    The purpose of the Stability Constant Estimator (SCE) program is to estimate aqueous stability constants for 1:1 complexes of metal ions with ligands by using trends in existing stability constant data. Such estimates are useful to fill gaps in existing thermodynamic databases and to corroborate the accuracy of reported stability constant values.
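
    The record does not spell out how the trends are exploited. One common device for this kind of gap filling is a linear free-energy-style correlation between the log K values of two related metal ions across ligands with known constants, sketched here with hypothetical data; this is an assumption for illustration, not a description of the SCE program's algorithm.

      import numpy as np

      # Hypothetical log K values (1:1 complexes) for two related metal ions
      # across five ligands with measured constants.
      logK_metal_A = np.array([2.1, 3.4, 4.0, 5.2, 6.1])
      logK_metal_B = np.array([2.6, 3.9, 4.7, 5.9, 6.8])

      # Fit the trend and use it to estimate log K for metal B with a new ligand
      # whose constant is known only for metal A.
      slope, intercept = np.polyfit(logK_metal_A, logK_metal_B, deg=1)
      logK_A_new_ligand = 4.5
      print(slope * logK_A_new_ligand + intercept)   # estimated log K for metal B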

  20. Automating Recession Curve Displacement Recharge Estimation.

    Science.gov (United States)

    Smith, Brennan; Schwartz, Stuart

    2017-01-01

    Recharge estimation is an important and challenging element of groundwater management and resource sustainability. Many recharge estimation methods have been developed with varying data requirements, applicable to different spatial and temporal scales. The variability and inherent uncertainty in recharge estimation motivate the recommended use of multiple methods to estimate and bound regional recharge estimates. Despite the inherent limitations of using daily gauged streamflow, recession curve displacement methods provide a convenient first-order estimate as part of a multimethod hierarchical approach to estimate watershed-scale annual recharge. The implementation of recession curve displacement recharge estimation in the United States Geological Survey (USGS) RORA program relies on the subjective, operator-specific selection of baseflow recession events to estimate a gauge-specific recession index. This paper presents a parametric algorithm that objectively automates this tedious, subjective process, parameterizing and automating the implementation of recession curve displacement. Results using the algorithm reproduce regional estimates of groundwater recharge from the USGS Appalachian Valley and Piedmont Regional Aquifer-System Analysis, with an average absolute error of less than 2%. The algorithm facilitates consistent, completely automated estimation of annual recharge that complements more rigorous data-intensive techniques for recharge estimation. © 2016, National Ground Water Association.
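
    As a small, hedged piece of that workflow (the RORA program's event selection and displacement computation are considerably richer), the recession index K, the time for baseflow to decline one log cycle, can be estimated from a single uninterrupted recession segment of daily streamflow:

      import numpy as np

      def recession_index(q):
          """Recession index K (days per log cycle) from one recession segment.

          q : daily streamflow during an uninterrupted recession (strictly positive).
          Fits log10(q) = b0 + b1*t and returns K = -1/b1, the time for flow to
          decline one log cycle.
          """
          q = np.asarray(q, dtype=float)
          t = np.arange(len(q))
          slope = np.polyfit(t, np.log10(q), deg=1)[0]
          return -1.0 / slope

      # usage: a synthetic recession declining one log cycle in about 45 days
      q = 100.0 * 10 ** (-np.arange(60) / 45.0)
      print(recession_index(q))   # ~45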