WorldWideScience

Sample records for sampling distributions accuracy

  1. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    Science.gov (United States)

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps are highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level). When spatial interpolation is used to fill in missing values in non-sampled areas, accuracy improves remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level).
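
    The abstract above does not specify which interpolation algorithm was used; as a rough, hypothetical illustration of how spatial interpolation can fill in cattle-number estimates for non-sampled areas, the following inverse-distance-weighting sketch (all coordinates and counts are invented) shows the general idea.

        import numpy as np

        def idw_interpolate(sample_xy, sample_values, query_xy, power=2.0):
            """Inverse-distance-weighted estimate at each query location.

            sample_xy: (n, 2) coordinates of sampled parishes (hypothetical units)
            sample_values: (n,) cattle counts observed at those parishes
            query_xy: (m, 2) coordinates of non-sampled parishes
            """
            dists = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
            dists = np.maximum(dists, 1e-9)          # avoid division by zero
            weights = 1.0 / dists ** power
            weights /= weights.sum(axis=1, keepdims=True)
            return weights @ sample_values

        # Hypothetical example: 5 sampled parishes, 2 unsampled ones.
        rng = np.random.default_rng(0)
        sampled = rng.uniform(0, 100, size=(5, 2))
        counts = rng.integers(200, 2000, size=5).astype(float)
        unsampled = np.array([[20.0, 30.0], [80.0, 60.0]])
        print(idw_interpolate(sampled, counts, unsampled))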

  2. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
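
    The record above does not reproduce the algorithm itself; a standard way to draw correlated multivariate normal samples is via a Cholesky factor of the covariance matrix, with log-normal samples obtained by exponentiating normals defined on the log scale. The sketch below illustrates this under invented means and covariances; for a mix of normal and log-normal variables, only the log-normally distributed components would be exponentiated.

        import numpy as np

        def correlated_normal_samples(mean, cov, n, rng):
            """Draw n samples from a multivariate normal via Cholesky factorization."""
            L = np.linalg.cholesky(cov)              # cov = L @ L.T
            z = rng.standard_normal((n, len(mean)))
            return mean + z @ L.T

        def correlated_lognormal_samples(mean, cov, n, rng):
            """Log-normal samples: exponentiate correlated normals defined on the log scale."""
            return np.exp(correlated_normal_samples(mean, cov, n, rng))

        rng = np.random.default_rng(42)
        mu = np.array([0.0, 1.0])
        cov = np.array([[1.0, 0.8],
                        [0.8, 2.0]])
        x = correlated_normal_samples(mu, cov, 100_000, rng)
        y = correlated_lognormal_samples(mu, cov, 100_000, rng)
        print(np.cov(x, rowvar=False))               # approaches cov for large n
        print(np.corrcoef(np.log(y), rowvar=False))  # correlation preserved on log scale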

  3. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    Full Text Available In recent years, new approaches aimed to increase the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) has been the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km.
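
    As a hedged illustration of the final comparison step described above (estimated distribution functions tested against the population distribution with the Kolmogorov–Smirnov test), the following sketch uses SciPy's two-sample K-S test on invented positional-error data; it is not the authors' simulation code.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Hypothetical "population" of positional errors and a simulated sample drawn from it.
        population_errors = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
        sample_errors = rng.choice(population_errors, size=500, replace=False)

        # Kolmogorov-Smirnov comparison of the two empirical distributions.
        result = stats.ks_2samp(sample_errors, population_errors)
        print(f"D = {result.statistic:.4f}, p = {result.pvalue:.3f}")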

  4. Relationship between accuracy and number of samples on statistical quantity and contour map of environmental gamma-ray dose rate. Example of random sampling

    International Nuclear Information System (INIS)

    Matsuda, Hideharu; Minato, Susumu

    2002-01-01

    The accuracy of statistical quantities, such as the mean value, and of contour maps obtained by measurement of the environmental gamma-ray dose rate was evaluated by random sampling of 5 different model distribution maps constructed using the mean slope, -1.3, of power spectra calculated from actually measured values. The values were derived from 58 natural gamma dose rate data sets reported worldwide, with means ranging over 10-100 nGy/h and areas of 10⁻³-10⁷ km². The accuracy of the mean value was found to be around ±7% even for 60 or 80 samplings (the most frequent number), and the standard deviation had an accuracy less than 1/4-1/3 of that of the means. The correlation coefficient of the frequency distribution was found to be 0.860 or more for 200-400 samplings (the most frequent number), but that of the contour map was only 0.502-0.770. (K.H.)

  5. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs inspections, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, when allocating samples to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore, game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximating distribution, to secure statistical accuracy. The improved binomial approximation developed by J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for (1) approximate sample allocation with the correctly applied standard binomial approximation, (2) approximate sample allocation with the improved binomial approximation, and (3) sample allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)

  6. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel 'V-plot' methodology to display accuracy values.

    Science.gov (United States)

    Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.

  7. Comparing simulated and theoretical sampling distributions of the U3 person-fit statistic

    NARCIS (Netherlands)

    Emons, W.H.M.; Meijer, R.R.; Sijtsma, K.

    2002-01-01

    The accuracy with which the theoretical sampling distribution of van der Flier's person-fit statistic U3 approaches the empirical U3 sampling distribution is affected by the item discrimination. A simulation study showed that for tests with a moderate or a strong mean item discrimination, the Type I

  8. Comparing simulated and theoretical sampling distributions of the U3 person-fit statistic

    NARCIS (Netherlands)

    Emons, Wilco H.M.; Meijer, R.R.; Sijtsma, Klaas

    2002-01-01

    The accuracy with which the theoretical sampling distribution of van der Flier’s person-fit statistic U3 approaches the empirical U3 sampling distribution is affected by the item discrimination. A simulation study showed that for tests with a moderate or a strong mean item discrimination, the Type I

  9. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel ‘V-plot’ methodology to display accuracy values

    Science.gov (United States)

    Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Background Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. Methods and findings We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). Conclusion No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard. PMID:29387424

  10. Accuracy assessment of the National Forest Inventory map of Mexico: sampling designs and the fuzzy characterization of landscapes

    Directory of Open Access Journals (Sweden)

    Stéphane Couturier

    2009-10-01

    Full Text Available There is no record so far in the literature of a comprehensive method to assess the accuracy of regional-scale Land Cover/Land Use (LCLU) maps in the sub-tropical belt. The elevated biodiversity and the presence of highly fragmented classes hamper the use of sampling designs commonly employed in previous assessments of mainly temperate zones. A sampling design for assessing the accuracy of the Mexican National Forest Inventory (NFI) map at community level is presented. A pilot study was conducted on the Cuitzeo Lake watershed region covering 400 000 ha of the 2000 Landsat-derived map. Various sampling designs were tested in order to find a trade-off between operational costs, a good spatial distribution of the sample and the inclusion of all scarcely distributed classes (‘rare classes’). A two-stage sampling design, where the selection of Primary Sampling Units (PSU) was done under separate schemes for commonly and scarcely distributed classes, showed the best characteristics. A total of 2023 point secondary sampling units were verified against their NFI map label. Issues regarding the assessment strategy and trends of class confusions are discussed.

  11. Sensitivity of postplanning target and OAR coverage estimates to dosimetric margin distribution sampling parameters.

    Science.gov (United States)

    Xu, Huijun; Gordon, J James; Siebers, Jeffrey V

    2011-02-01

    A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with a prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere, resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs and accordingly characterized by an effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ.
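
    One common way to produce the isotropic sampling directions mentioned above (uniformly distributed on the unit sphere) is to normalize independent Gaussian triples; the sketch below does this and adds a rough, heuristic notion of an effective angular increment. It is an illustration only, not the authors' implementation.

        import numpy as np

        def isotropic_directions(n, rng):
            """Uniformly distributed unit vectors: normalize i.i.d. Gaussian triples."""
            v = rng.standard_normal((n, 3))
            return v / np.linalg.norm(v, axis=1, keepdims=True)

        rng = np.random.default_rng(7)
        dirs = isotropic_directions(1000, rng)
        # Rough heuristic for an effective angular increment (sphere area per sample),
        # not necessarily the paper's definition of omega_eff.
        omega_eff = np.degrees(np.sqrt(4 * np.pi / len(dirs)))
        print(dirs[:3], omega_eff)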

  12. Extending the alias Monte Carlo sampling method to general distributions

    International Nuclear Information System (INIS)

    Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.

    1991-01-01

    The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
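
    The paper's continuous-distribution extension is not reproduced here, but the underlying discrete alias method (the Walker/Vose construction) can be sketched as follows; the example distribution is invented.

        import numpy as np

        def build_alias_table(probs):
            """Walker/Vose alias table for a discrete distribution."""
            n = len(probs)
            prob = np.asarray(probs, dtype=float) * n / np.sum(probs)
            alias = np.zeros(n, dtype=int)
            small = [i for i, p in enumerate(prob) if p < 1.0]
            large = [i for i, p in enumerate(prob) if p >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                alias[s] = l
                prob[l] = prob[l] + prob[s] - 1.0
                (small if prob[l] < 1.0 else large).append(l)
            # Remaining entries keep their whole bin (prob 1) up to rounding.
            while large:
                prob[large.pop()] = 1.0
            while small:
                prob[small.pop()] = 1.0
            return prob, alias

        def alias_sample(prob, alias, n_draws, rng):
            """O(1) per draw: pick a bin uniformly, then keep it or take its alias."""
            n = len(prob)
            bins = rng.integers(0, n, size=n_draws)
            accept = rng.random(n_draws) < prob[bins]
            return np.where(accept, bins, alias[bins])

        rng = np.random.default_rng(3)
        p = [0.5, 0.3, 0.15, 0.05]
        prob, alias = build_alias_table(p)
        draws = alias_sample(prob, alias, 100_000, rng)
        print(np.bincount(draws) / len(draws))   # should approximate p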

  13. Analysis of spatial distribution of land cover maps accuracy

    Science.gov (United States)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations in accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. The spectral domain was incorporated as an explanatory feature space for classification accuracy interpolation for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain

  14. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    Science.gov (United States)

    Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...

  15. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    Science.gov (United States)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually considered a source of fluctuation in near-infrared spectral measurement. Chemometric methods have been extensively studied to correct the effect of temperature variations. However, temperature can be considered a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method was proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method was proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on the random sampling method and the proposed methods. The results from experimental studies showed that the prediction performance was improved by using the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods to improve prediction accuracy in near-infrared spectral measurement.

  16. Egg distribution and sampling of Diaprepes abbreviatus (Coleoptera: Curculionidae) on silver buttonwood

    International Nuclear Information System (INIS)

    Pena, J.E.; Mannion, C.; Amalin, D.; Hunsberger, A.

    2007-01-01

    Taylor's power law and Iwao's patchiness regression were used to analyze the spatial distribution of eggs of the Diaprepes root weevil, Diaprepes abbreviatus (L.), on silver buttonwood trees, Conocarpus erectus, during 1997 and 1998. Taylor's power law and Iwao's patchiness regression provided similar descriptions of the variance-mean relationship for egg distribution within trees. Sample size requirements were determined. Information presented in this paper should help to improve accuracy and efficiency in sampling of the weevil eggs in the future. (author)

  17. Accuracy and Effort of Interpolation and Sampling: Can GIS Help Lower Field Costs?

    Directory of Open Access Journals (Sweden)

    Greg Simpson

    2014-12-01

    Full Text Available Sedimentation is a problem for all reservoirs in the Black Hills of South Dakota. Before working on sediment removal, a survey on the extent and distribution of the sediment is needed. Two sample lakes were used to determine which of three interpolation methods gave the most accurate volume results. A secondary goal was to see if fewer samples could be taken while still providing similar results. The smaller samples would mean less field time and thus lower costs. Subsamples of 50%, 33% and 25% were taken from the total samples and evaluated for the lowest Root Mean Squared Error values. Throughout the trials, the larger sample sizes generally showed better accuracy than smaller samples. Graphing the sediment volume estimates of the full sample, 50%, 33% and 25% showed little improvement after a sample of approximately 40%–50% when comparing the asymptote of the separate samples. When we used smaller subsamples the predicted sediment volumes were normally greater than the full sample volumes. It is suggested that when planning future sediment surveys, workers plan on gathering data at approximately every 5.21 meters. These sample sizes can be cut in half and still retain relative accuracy if time savings are needed. Volume estimates may slightly suffer with these reduced sample sizes, but the field work savings can be of benefit. Results from these surveys are used in prioritization of available funds for reclamation efforts.

  18. Molecular Isotopic Distribution Analysis (MIDAs) with adjustable mass accuracy.

    Science.gov (United States)

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.
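
    MIDAs's own polynomial and Fourier-transform algorithms are not reproduced here; as a minimal illustration of the coarse-grained (unit-mass) case, the aggregated isotopic distribution of a molecule can be obtained by repeatedly convolving the isotope abundance vectors of its elements, as in this sketch (abundance values are standard published figures, rounded).

        import numpy as np

        # Isotope abundance vectors indexed by nominal mass offset (0, +1, +2, ...).
        ISOTOPES = {
            "C": [0.9893, 0.0107],
            "H": [0.999885, 0.000115],
            "N": [0.99636, 0.00364],
            "O": [0.99757, 0.00038, 0.00205],
            "S": [0.9499, 0.0075, 0.0425, 0.0, 0.0001],
        }

        def coarse_isotopic_distribution(formula):
            """Coarse-grained (unit-mass) isotopic distribution by repeated convolution.

            formula: dict such as {"C": 6, "H": 12, "O": 6} for glucose.
            Returns relative abundances for mass offsets 0, +1, +2, ... Da.
            """
            dist = np.array([1.0])
            for element, count in formula.items():
                for _ in range(count):
                    dist = np.convolve(dist, ISOTOPES[element])
            return dist / dist.sum()

        print(coarse_isotopic_distribution({"C": 6, "H": 12, "O": 6})[:5])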

  19. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P Box-Cox transformation) on all samples regardless of their distribution or adjusting the significance level of normality tests depending on sample size would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
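
    In the spirit of the simulation described above (but not the authors' code), the sketch below draws many small samples from Gaussian and lognormal populations and records how often the Shapiro-Wilk and D'Agostino-Pearson tests reject normality; the population parameters and significance level are invented.

        import numpy as np
        from scipy import stats

        def rejection_rate(sampler, test, n=30, reps=1000, alpha=0.05, rng=None):
            """Fraction of simulated samples for which `test` rejects normality."""
            rng = rng or np.random.default_rng(0)
            rejected = 0
            for _ in range(reps):
                x = sampler(rng, n)
                _, p = test(x)
                rejected += p < alpha
            return rejected / reps

        gauss = lambda rng, n: rng.normal(100.0, 15.0, n)      # Gaussian population
        lognorm = lambda rng, n: rng.lognormal(4.6, 0.4, n)    # non-Gaussian population

        for name, test in [("Shapiro-Wilk", stats.shapiro),
                           ("D'Agostino-Pearson", stats.normaltest)]:
            fp = rejection_rate(gauss, test, n=30)     # 1 - specificity
            tp = rejection_rate(lognorm, test, n=30)   # sensitivity
            print(f"{name}: sensitivity={tp:.2f}, 1-specificity={fp:.2f}")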

  20. Accuracy analysis of measurements on a stable power-law distributed series of events

    International Nuclear Information System (INIS)

    Matthews, J O; Hopcraft, K I; Jakeman, E; Siviour, G B

    2006-01-01

    We investigate how finite measurement time limits the accuracy with which the parameters of a stably distributed random series of events can be determined. The model process is generated by timing the emigration of individuals from a population that is subject to deaths and a particular choice of multiple immigration events. This leads to a scale-free discrete random process where customary measures, such as mean value and variance, do not exist. However, converting the number of events occurring in fixed time intervals to a 1-bit 'clipped' process allows the construction of well-behaved statistics that still retain vestiges of the original power-law and fluctuation properties. These statistics include the clipped mean and correlation function, from measurements of which both the power-law index of the distribution of events and the time constant of its fluctuations can be deduced. We report here a theoretical analysis of the accuracy of measurements of the mean of the clipped process. This indicates that, for a fixed experiment time, the error on measurements of the sample mean is minimized by an optimum choice of the number of samples. It is shown furthermore that this choice is sensitive to the power-law index and that the approach to Poisson statistics is dominated by rare events or 'outliers'. Our results are supported by numerical simulation

  1. Rational Arithmetic Mathematica Functions to Evaluate the Two-Sided One Sample K-S Cumulative Sampling Distribution

    Directory of Open Access Journals (Sweden)

    J. Randall Brown

    2007-06-01

    Full Text Available One of the most widely used goodness-of-fit tests is the two-sided one-sample Kolmogorov-Smirnov (K-S) test, which has been implemented by many computer statistical software packages. To calculate a two-sided p value (evaluate the cumulative sampling distribution), these packages use various methods including recursion formulae, limiting distributions, and approximations of unknown accuracy developed over thirty years ago. Based on an extensive literature search for the two-sided one-sample K-S test, this paper identifies an exact formula for sample sizes up to 31, six recursion formulae, and one matrix formula that can be used to calculate a p value. To ensure accurate calculation by avoiding catastrophic cancellation and eliminating rounding error, each of these formulae is implemented in rational arithmetic. For the six recursion formulae and the matrix formula, computational experience for sample sizes up to 500 shows that computational times are increasing functions of both the sample size and the number of digits in the numerator and denominator integers of the rational number test statistic. The computational times of the seven formulae vary immensely, but the Durbin recursion formula is almost always the fastest. Linear search is used to calculate the inverse of the cumulative sampling distribution (find the confidence interval half-width), and tables of calculated half-widths are presented for sample sizes up to 500. Using calculated half-widths as input, computational times for the fastest formula, the Durbin recursion formula, are given for sample sizes up to two thousand.
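
    The rational-arithmetic formulae are not reproduced here; as a floating-point illustration, recent SciPy versions expose the sampling distribution of the two-sided one-sample K-S statistic (scipy.stats.kstwo), so a p value can be evaluated as in this sketch (data and reference distribution are invented).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        x = rng.normal(0.0, 1.0, size=25)

        # Two-sided one-sample K-S statistic against the standard normal CDF.
        d = stats.kstest(x, "norm").statistic

        # Sampling distribution of D_n (floating point, not rational arithmetic).
        n = len(x)
        p_value = stats.kstwo.sf(d, n)
        print(f"D = {d:.4f}, p = {p_value:.4f}")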

  2. Computer Graphics Simulations of Sampling Distributions.

    Science.gov (United States)

    Gordon, Florence S.; Gordon, Sheldon P.

    1989-01-01

    Describes the use of computer graphics simulations to enhance student understanding of sampling distributions that arise in introductory statistics. Highlights include the distribution of sample proportions, the distribution of the difference of sample means, the distribution of the difference of sample proportions, and the distribution of sample…

  3. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…

  4. Process planning and accuracy distribution of marine power plant modularization

    Directory of Open Access Journals (Sweden)

    ZHANG Jinguo

    2018-02-01

    Full Text Available [Objectives] Modular shipbuilding can shorten the design and construction cycle, lower production costs and improve product quality, but it requires greater shipbuilding capability, especially for the installation of power plants. Because of such characteristics of modular shipbuilding as high-precision docking links, long equipment installation chains and quantitative docking interfaces, docking installation is very difficult, with high docking deviation and low installation accuracy leading to abnormal vibration of equipment. In order to solve this problem, [Methods] numerical calculation methods are used, on the basis of domestic shipbuilding capability, to analyze the accuracy distribution of modular installation. [Results] The results show that the accuracy distribution over the different docking links is reasonable and feasible, and that the setting of the adjusting allowance matches the requirements of shipbuilding. [Conclusions] This method provides a reference for the modular construction of marine power plants.

  5. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    The uncertainty evaluation with the statistical method is performed by repeating the transport calculation with sampling of the directly perturbed nuclear data. Hence, a reliable uncertainty result can be obtained by analyzing the results of the numerous transport calculations. One known problem in uncertainty analysis with the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data is proposed using a lognormal distribution. After that, criticality calculations with the sampled nuclear data are performed and the results are compared with those from the normal distribution conventionally used in previous studies. In this study, the statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors. Also, a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross section sampling was pursued with the normal and lognormal distributions. The uncertainties, which are caused by covariance of (n,.) cross sections, were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in the previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
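
    A minimal sketch of the core idea, sampling a strictly positive cross section from a lognormal whose mean and relative standard deviation match nominal values, is given below; the numbers are invented and this is not the program described in the abstract.

        import numpy as np

        def sample_cross_section(mean, rel_std, size, rng):
            """Lognormal samples whose mean and relative std match the nominal values.

            mean: nominal cross section (e.g. barns); rel_std: relative uncertainty.
            The lognormal guarantees strictly positive samples, unlike a normal
            distribution with a large standard deviation.
            """
            sigma2 = np.log(1.0 + rel_std ** 2)
            mu = np.log(mean) - 0.5 * sigma2
            return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)

        rng = np.random.default_rng(11)
        xs = sample_cross_section(mean=2.5, rel_std=0.4, size=200_000, rng=rng)
        print(xs.mean(), xs.std() / xs.mean(), (xs <= 0).any())   # ~2.5, ~0.4, False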

  6. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2013-01-01

    The uncertainty evaluation with the statistical method is performed by repeating the transport calculation with sampling of the directly perturbed nuclear data. Hence, a reliable uncertainty result can be obtained by analyzing the results of the numerous transport calculations. One known problem in uncertainty analysis with the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data is proposed using a lognormal distribution. After that, criticality calculations with the sampled nuclear data are performed and the results are compared with those from the normal distribution conventionally used in previous studies. In this study, the statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors. Also, a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross section sampling was pursued with the normal and lognormal distributions. The uncertainties, which are caused by covariance of (n,.) cross sections, were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in the previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.

  7. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R² = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  8. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Full Text Available Land use/land cover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices computed from reference samples. The size of the reference sample is usually defined by approximation with a binomial function, without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori. In case of divergence between the estimated and the a priori accuracy, the sampling error will deviate from the expected error. Determining the sample size with a pilot sample (the theoretically correct procedure) is justified when no accuracy estimate is available for the work area, given the utility of the remote sensing product.
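
    The binomial approximation being discussed is, in its usual form, n = z²p(1-p)/d²; the sketch below (with hypothetical accuracy values) shows how the required reference sample size shifts when the a priori accuracy differs from a pilot estimate.

        import math

        def binomial_sample_size(expected_accuracy, half_width, confidence=0.95):
            """Reference sample size from the usual binomial approximation.

            expected_accuracy: a priori (or pilot-estimated) map accuracy p.
            half_width: tolerated half-width d of the confidence interval.
            """
            # Two-sided z values for common confidence levels.
            z = {0.90: 1.644854, 0.95: 1.959964, 0.99: 2.575829}[confidence]
            p = expected_accuracy
            return math.ceil(z ** 2 * p * (1.0 - p) / half_width ** 2)

        # A priori accuracy of 0.85 versus a pilot estimate of 0.70, with d = 0.05:
        print(binomial_sample_size(0.85, 0.05), binomial_sample_size(0.70, 0.05))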

  9. Statistical distribution sampling

    Science.gov (United States)

    Johnson, E. S.

    1975-01-01

    Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.

  10. Micro-organism distribution sampling for bioassays

    Science.gov (United States)

    Nelson, B. A.

    1975-01-01

    The purpose of the sampling distribution is to characterize sample-to-sample variation so that statistical tests may be applied, to estimate error due to sampling (confidence limits), and to evaluate observed differences between samples. The distribution could be used for bioassays taken in hospitals, breweries, food-processing plants, and pharmaceutical plants.

  11. Diagnostic accuracy of language sample measures with Persian-speaking preschool children.

    Science.gov (United States)

    Kazemi, Yalda; Klee, Thomas; Stringer, Helen

    2015-04-01

    This study examined the diagnostic accuracy of selected language sample measures (LSMs) with Persian-speaking children. A pre-accuracy study followed by phase I and phase II studies is reported. Twenty-four Persian-speaking children, aged 42 to 54 months, with primary language impairment (PLI) were compared to 27 age-matched children without PLI on a set of measures derived from play-based, conversational language samples. Results showed that correlations between age and LSMs were not statistically significant in either group of children. However, a majority of LSMs differentiated children with and without PLI at the group level (phase I), while three of the measures exhibited good diagnostic accuracy at the level of the individual (phase II). We conclude that general LSMs are promising for distinguishing between children with and without PLI. Persian-specific measures are mainly helpful in identifying children without language impairment, while their ability to identify children with PLI is poor.

  12. The effect of sampling frequency on the accuracy of estimates of milk ...

    African Journals Online (AJOL)

    The results of this study support the five-weekly sampling procedure currently used by the South African National Dairy Cattle Performance Testing Scheme. However, replacement of proportional bulking of individual morning and evening samples with a single evening milk sample would not compromise accuracy provided ...

  13. Introducing a rainfall compound distribution model based on weather patterns sub-sampling

    Directory of Open Access Journals (Sweden)

    F. Garavaglia

    2010-06-01

    Full Text Available This paper presents a probabilistic model for daily rainfall, using sub-sampling based on meteorological circulation. We classified eight typical but contrasted synoptic situations (weather patterns) for France and surrounding areas, using a "bottom-up" approach, i.e. from the shape of the rain field to the synoptic situations described by geopotential fields. These weather patterns (WP) provide a discriminating variable that is consistent with French climatology, and allow seasonal rainfall records to be split into more homogeneous sub-samples in terms of meteorological genesis.

    First results show how the combination of seasonal and WP sub-sampling strongly influences the identification of the asymptotic behaviour of rainfall probabilistic models. Furthermore, with this level of stratification, an asymptotic exponential behaviour of each sub-sample appears as a reasonable hypothesis. This first part is illustrated with two daily rainfall records from SE of France.

    The distribution of the multi-exponential weather patterns (MEWP) is then defined as the composition, for a given season, of all WP sub-sample marginal distributions, weighted by the relative frequency of occurrence of each WP. This model is finally compared to Exponential and Generalized Pareto distributions, showing good features in terms of robustness and accuracy. These final statistical results are computed from a wide dataset of 478 rainfall records spread over the southern half of France. All these data cover the 1953–2005 period.
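
    As a simplified, hypothetical sketch of the compound idea (not the published MEWP implementation, which fits exponential tails above per-pattern thresholds), a seasonal exceedance curve can be composed as a frequency-weighted mixture of per-weather-pattern exponentials:

        import numpy as np

        def mewp_exceedance(x, wp_freqs, wp_scales):
            """Exceedance probability of a seasonal multi-exponential mixture.

            wp_freqs: relative frequency of each weather pattern in the season.
            wp_scales: exponential scale fitted to each pattern's rainfall sub-sample.
            """
            x = np.atleast_1d(x)[:, None]
            p = np.asarray(wp_freqs)[None, :]
            lam = np.asarray(wp_scales)[None, :]
            return (p * np.exp(-x / lam)).sum(axis=1)

        # Three hypothetical weather patterns for one season (scales in mm/day).
        print(mewp_exceedance([50.0, 100.0], [0.6, 0.3, 0.1], [8.0, 14.0, 30.0]))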

  14. ExSample. A library for sampling Sudakov-type distributions

    Energy Technology Data Exchange (ETDEWEB)

    Plaetzer, Simon

    2011-08-15

    Sudakov-type distributions are at the heart of generating radiation in parton showers as well as contemporary NLO matching algorithms along the lines of the POWHEG algorithm. In this paper, the C++ library ExSample is introduced, which implements adaptive sampling of Sudakov-type distributions for splitting kernels which are in general only known numerically. Besides the evolution variable, the splitting kernels can depend on an arbitrary number of other degrees of freedom to be sampled, and any number of further parameters which are fixed on an event-by-event basis. (orig.)

  15. ExSample. A library for sampling Sudakov-type distributions

    International Nuclear Information System (INIS)

    Plaetzer, Simon

    2011-08-01

    Sudakov-type distributions are at the heart of generating radiation in parton showers as well as contemporary NLO matching algorithms along the lines of the POWHEG algorithm. In this paper, the C++ library ExSample is introduced, which implements adaptive sampling of Sudakov-type distributions for splitting kernels which are in general only known numerically. Besides the evolution variable, the splitting kernels can depend on an arbitrary number of other degrees of freedom to be sampled, and any number of further parameters which are fixed on an event-by-event basis. (orig.)

  16. Detecting representative data and generating synthetic samples to improve learning accuracy with imbalanced data sets.

    Directory of Open Access Journals (Sweden)

    Der-Chiang Li

    Full Text Available It is difficult for learning models to achieve high classification performance with imbalanced data sets, because when one of the classes is much larger than the others, most machine learning and data mining classifiers are overly influenced by the larger classes and ignore the smaller ones. As a result, the classification algorithms often have poor learning performance due to slow convergence in the smaller classes. To balance such data sets, this paper presents a strategy that involves reducing the size of the majority data and generating synthetic samples for the minority data. In the reducing operation, we use the box-and-whisker plot approach to exclude outliers and the Mega-Trend-Diffusion method to find representative data from the majority data. To generate the synthetic samples, we propose a counterintuitive hypothesis to find the distribution shape of the minority data, and then produce samples according to this distribution. Four real datasets were used to examine the performance of the proposed approach. We used paired t-tests to compare the Accuracy, G-mean, and F-measure scores of the proposed data pre-processing (PPDP) method merged with the D3C method (PPDP+D3C) with those of the one-sided selection (OSS) method, the well-known SMOTEBoost (SB) study, the normal distribution-based oversampling (NDO) approach, and the proposed data pre-processing (PPDP) method alone. The results indicate that the classification performance of the proposed approach is better than that of the above-mentioned methods.

  17. Distributed database kriging for adaptive sampling (D2KAS)

    International Nuclear Information System (INIS)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-01-01

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters

  18. Effect of species rarity on the accuracy of species distribution models for reptiles and amphibians in southern California

    Science.gov (United States)

    Franklin, J.; Wejnert, K.E.; Hathaway, S.A.; Rochester, C.J.; Fisher, R.N.

    2009-01-01

    Aim: Several studies have found that more accurate predictive models of species' occurrences can be developed for rarer species; however, one recent study found the relationship between range size and model performance to be an artefact of sample prevalence, that is, the proportion of presence versus absence observations in the data used to train the model. We examined the effect of model type, species rarity class, species' survey frequency, detectability and manipulated sample prevalence on the accuracy of distribution models developed for 30 reptile and amphibian species. Location: Coastal southern California, USA. Methods: Classification trees, generalized additive models and generalized linear models were developed using species presence and absence data from 420 locations. Model performance was measured using sensitivity, specificity and the area under the curve (AUC) of the receiver-operating characteristic (ROC) plot based on twofold cross-validation, or on bootstrapping. Predictors included climate, terrain, soil and vegetation variables. Species were assigned to rarity classes by experts. The data were sampled to generate subsets with varying ratios of presences and absences to test for the effect of sample prevalence. Join count statistics were used to characterize spatial dependence in the prediction errors. Results: Species in classes with higher rarity were more accurately predicted than common species, and this effect was independent of sample prevalence. Although positive spatial autocorrelation remained in the prediction errors, it was weaker than was observed in the species occurrence data. The differences in accuracy among model types were slight. Main conclusions: Using a variety of modelling methods, more accurate species distribution models were developed for rarer than for more common species. This was presumably because it is difficult to discriminate suitable from unsuitable habitat for habitat generalists, and not as an artefact of the

  19. The Influence of Methods Massed Practice and Distributed Practice Model on The Speed and Accuracy of Service Tennis Courts

    Directory of Open Access Journals (Sweden)

    Desak Wiwin,

    2017-06-01

    Full Text Available The purpose of this study was to analyze (1) the effect of the massed practice method on the speed and accuracy of service, (2) the effect of the distributed practice method on the speed and accuracy of service, and (3) the difference in influence between the massed practice and distributed practice methods on the speed and accuracy of service. This research is quantitative, using a quasi-experimental method. The research design uses a non-randomized control group pretest-posttest design, and data were analysed using MANOVA. Data were collected by testing service speed (Dartfish) and service accuracy (Hewitt) during the pretest and posttest. The results are as follows: (1) the massed practice method has a significant effect on increasing the speed and accuracy of service; (2) the distributed practice method has a significant effect on increasing the speed and accuracy of service; (3) there is no significant difference in influence between the massed practice and distributed practice methods on the speed and accuracy of service. The conclusion of this research is that the massed practice and distributed practice methods both provide significant results, but the distributed practice method gives the greater improvement in the speed and accuracy of service.

  20. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
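
    As a small illustration of two of the estimators compared (not the paper's code), the maximum likelihood and moment estimators of the two-parameter exponential distribution can be written as follows; the simulated data are invented.

        import numpy as np

        def mle_two_param_exponential(x):
            """MLE: location is the sample minimum, scale is the mean excess over it."""
            x = np.asarray(x, dtype=float)
            loc = x.min()
            return loc, x.mean() - loc

        def moment_two_param_exponential(x):
            """Moment estimators: scale from the standard deviation, location from the mean."""
            x = np.asarray(x, dtype=float)
            scale = x.std(ddof=1)
            return x.mean() - scale, scale

        rng = np.random.default_rng(9)
        sample = 5.0 + rng.exponential(scale=2.0, size=200)   # true location 5, scale 2
        print(mle_two_param_exponential(sample))
        print(moment_two_param_exponential(sample))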

  1. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    Science.gov (United States)

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes using four sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinity, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, which allows it to be put into use in drug analysis.

  2. Adaptive Metropolis Sampling with Product Distributions

    Science.gov (United States)

    Wolpert, David H.; Lee, Chiu Fan

    2005-01-01

    The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution pi(z). It works by repeatedly sampling a separate proposal distribution T(x,x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t') : t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to pi. That estimate is the information-theoretically optimal mean-field approximation to pi. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
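
    The adaptive product-distribution update is not reproduced here; the baseline random-walk Metropolis sampler that it modifies can be sketched as follows, with an invented two-dimensional Gaussian target.

        import numpy as np

        def metropolis_hastings(log_pi, x0, n_steps, step, rng):
            """Random-walk Metropolis sampler for an unnormalized target pi(z).

            log_pi: function returning log pi(z); x0: starting point;
            step: std of the fixed, symmetric Gaussian proposal.
            """
            x = np.asarray(x0, dtype=float)
            chain = np.empty((n_steps, x.size))
            logp = log_pi(x)
            for t in range(n_steps):
                proposal = x + step * rng.standard_normal(x.size)
                logp_new = log_pi(proposal)
                if np.log(rng.random()) < logp_new - logp:   # accept/reject
                    x, logp = proposal, logp_new
                chain[t] = x
            return chain

        # Target: standard 2-D Gaussian (log density up to a constant).
        log_pi = lambda z: -0.5 * np.sum(z ** 2)
        rng = np.random.default_rng(2)
        samples = metropolis_hastings(log_pi, [3.0, -3.0], 20_000, step=0.8, rng=rng)
        print(samples[5_000:].mean(axis=0), samples[5_000:].std(axis=0))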

  3. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background The small overlap between independently developed gene signatures and the poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform and, using this data set, the effects of varying sample size on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, reaching 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that sub-type-specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample size generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement from the increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  4. Understanding the Sampling Distribution and the Central Limit Theorem.

    Science.gov (United States)

    Lewis, Charla P.

    The sampling distribution is a common source of misuse and misunderstanding in the study of statistics. The sampling distribution, underlying distribution, and the Central Limit Theorem are all interconnected in defining and explaining the proper use of the sampling distribution of various statistics. The sampling distribution of a statistic is…

  5. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    Science.gov (United States)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
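    The record's cubic-interpolation scheme is not reproduced here, but the key benefit it mentions, arbitrary frequency resolution, can be illustrated by evaluating the finite Fourier transform of sampled data directly on a user-chosen frequency grid (a simple rectangle-rule sketch; signal and grid are invented):

        import numpy as np

        def finite_fourier(t, x, freqs):
            """Approximate X(f) = integral of x(t) exp(-j*2*pi*f*t) dt from uniform samples."""
            dt = t[1] - t[0]
            kernel = np.exp(-2j * np.pi * np.outer(freqs, t))   # (n_freq, n_time)
            return kernel @ x * dt

        # A 2 Hz sine sampled for 5 s at 100 Hz, transformed on a 0.01 Hz grid.
        t = np.arange(0, 5, 0.01)
        x = np.sin(2 * np.pi * 2.0 * t)
        freqs = np.linspace(0, 5, 501)        # resolution chosen independently of N
        X = finite_fourier(t, x, freqs)
        print("peak near", freqs[np.argmax(np.abs(X))], "Hz")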

  6. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and used in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure, such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets respectively, both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was lower, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
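    A small numerical illustration of the point (not the paper's neural-network experiment): for an imbalanced population and a fixed classifier with assumed per-class accuracies, a class-balanced validation set misstates the overall accuracy that a proportional (random) sample estimates correctly. All numbers below are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        p_class = np.array([0.9, 0.1])            # class 0 abundant, class 1 rare
        acc_per_class = np.array([0.95, 0.70])    # assumed accuracy of some fixed classifier
        true_overall_acc = (p_class * acc_per_class).sum()          # 0.925

        def simulate_accuracy(labels):
            # Simulate whether each validation case is classified correctly.
            correct = rng.uniform(size=labels.size) < acc_per_class[labels]
            return correct.mean()

        n = 1000
        random_labels = rng.choice(2, size=n, p=p_class)   # proportional (random) sample
        stratified_labels = np.repeat([0, 1], n // 2)      # balanced (stratified) sample

        print("true overall accuracy: ", true_overall_acc)
        print("random-sample estimate:", simulate_accuracy(random_labels))
        print("stratified estimate:   ", simulate_accuracy(stratified_labels))  # ~0.825, biased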

  7. Micro/Nano-scale Strain Distribution Measurement from Sampling Moiré Fringes.

    Science.gov (United States)

    Wang, Qinghua; Ri, Shien; Tsuda, Hiroshi

    2017-05-23

    This work describes the measurement procedure and principles of a sampling moiré technique for full-field micro/nano-scale deformation measurements. The developed technique can be performed in two ways: using the reconstructed multiplication moiré method or the spatial phase-shifting sampling moiré method. When the specimen grid pitch is around 2 pixels, 2-pixel sampling moiré fringes are generated to reconstruct a multiplication moiré pattern for a deformation measurement. Both the displacement and strain sensitivities are twice as high as in the traditional scanning moiré method in the same wide field of view. When the specimen grid pitch is around or greater than 3 pixels, multi-pixel sampling moiré fringes are generated, and a spatial phase-shifting technique is combined for a full-field deformation measurement. The strain measurement accuracy is significantly improved, and automatic batch measurement is easily achievable. Both methods can measure the two-dimensional (2D) strain distributions from a single-shot grid image without rotating the specimen or scanning lines, as in traditional moiré techniques. As examples, the 2D displacement and strain distributions, including the shear strains of two carbon fiber-reinforced plastic specimens, were measured in three-point bending tests. The proposed technique is expected to play an important role in the non-destructive quantitative evaluations of mechanical properties, crack occurrences, and residual stresses of a variety of materials.

  8. A Story-Based Simulation for Teaching Sampling Distributions

    Science.gov (United States)

    Turner, Stephen; Dabney, Alan R.

    2015-01-01

    Statistical inference relies heavily on the concept of sampling distributions. However, sampling distributions are difficult to teach. We present a series of short animations that are story-based, with associated assessments. We hope that our contribution can be useful as a tool to teach sampling distributions in the introductory statistics…

  9. Precision and Accuracy of k0-NAA Method for Analysis of Multi Elements in Reference Samples

    International Nuclear Information System (INIS)

    Sri-Wardani

    2004-01-01

    The accuracy and precision of the k0-NAA method could be determined through the analysis of multiple elements contained in reference samples. The results for multiple elements in the SRM 1633b sample showed biases within 20% but with good accuracy and precision. The results for As, Cd and Zn in the CCQM-P29 rice flour sample were very good, with biases of 0.5-5.6%. (author)

  10. Succinct Sampling from Discrete Distributions

    DEFF Research Database (Denmark)

    Bringmann, Karl; Larsen, Kasper Green

    2013-01-01

    We revisit the classic problem of sampling from a discrete distribution: Given n non-negative w-bit integers x_1,...,x_n, the task is to build a data structure that allows sampling i with probability proportional to x_i. The classic solution is Walker's alias method that takes, when implemented...
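    For reference, a compact Python sketch of the classic alias method that the record revisits (table construction in O(n), sampling in O(1)); variable names and the example weights are our own.

        import random

        def build_alias_table(weights):
            n = len(weights)
            total = float(sum(weights))
            scaled = [w * n / total for w in weights]        # mean of scaled weights is 1
            prob, alias = [0.0] * n, [0] * n
            small = [i for i, s in enumerate(scaled) if s < 1.0]
            large = [i for i, s in enumerate(scaled) if s >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                prob[s], alias[s] = scaled[s], l
                scaled[l] -= 1.0 - scaled[s]                 # move the deficit from l to s
                (small if scaled[l] < 1.0 else large).append(l)
            for i in small + large:                          # leftovers are numerically ~1
                prob[i] = 1.0
            return prob, alias

        def alias_sample(prob, alias):
            i = random.randrange(len(prob))                  # pick a column uniformly
            return i if random.random() < prob[i] else alias[i]

        prob, alias = build_alias_table([10, 1, 5, 4])
        counts = [0] * 4
        for _ in range(100000):
            counts[alias_sample(prob, alias)] += 1
        print(counts)    # roughly proportional to 10:1:5:4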

  11. Evaluating the accuracy of sampling to estimate central line-days: simplification of the National Healthcare Safety Network surveillance methods.

    Science.gov (United States)

    Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S

    2013-03-01

    To evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data to estimate central line-days (CLDs). Obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants and CLABSI numerators and facility and location characteristics from the National Healthcare Safety Network (NHSN). Convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had CLD percentage error of less than or equal to ±5%. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.
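    A toy version of the evaluation described above, with synthetic counts rather than NHSN data: estimate the monthly central line-days from a two-day (day-pair) sample and compute the resulting percentage error.

        import numpy as np

        rng = np.random.default_rng(2)

        daily_cld = rng.poisson(lam=12, size=30)     # synthetic daily central-line counts
        actual_cld = daily_cld.sum()

        def estimate_from_day_pair(daily, day_a, day_b):
            # Scale the mean of the two sampled days up to the full month.
            return (daily[day_a] + daily[day_b]) / 2 * daily.size

        est = estimate_from_day_pair(daily_cld, 2, 16)
        pct_error = 100 * (est - actual_cld) / actual_cld
        print(f"actual={actual_cld}, estimated={est:.0f}, percentage error={pct_error:+.1f}%")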

  12. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most...
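    The communication-efficient distributed protocols themselves are beyond a short sketch, but the single-stream building block they generalize, reservoir sampling, is easy to show: it maintains a uniform sample of fixed size k from a stream of unknown length.

        import random

        def reservoir_sample(stream, k):
            """Keep a uniform random sample of k items from a stream of unknown length."""
            reservoir = []
            for i, item in enumerate(stream):
                if i < k:
                    reservoir.append(item)
                else:
                    j = random.randint(0, i)     # item i is kept with probability k/(i+1)
                    if j < k:
                        reservoir[j] = item
            return reservoir

        print(reservoir_sample(range(10**6), 5))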

  13. Estimates of laboratory accuracy and precision on Hanford waste tank samples

    International Nuclear Information System (INIS)

    Dodd, D.A.

    1995-01-01

    A review was performed on three sets of analyses generated in Battelle, Pacific Northwest Laboratories and three sets generated by Westinghouse Hanford Company, 222-S Analytical Laboratory. Laboratory accuracy and precision were estimated by analyte and are reported in tables. The data set used to generate this estimate is of limited size but does include the physical forms, liquid and solid, which are representative of samples from tanks to be characterized. This estimate was published as an aid to programs developing data quality objectives in which specified limits are established. Data resulting from routine analyses of waste matrices can be expected to be bounded by the precision and accuracy estimates of the tables. These tables do not preclude or discourage direct negotiations between program and laboratory personnel while establishing bounding conditions. Programmatic requirements different from those listed may be reliably met on specific measurements and matrices. It should be recognized, however, that these estimates are specific to waste tank matrices and may not be indicative of performance on samples from other sources.

  14. Accuracy criteria recommended for the certification of gravimetric coal-mine-dust samplers

    International Nuclear Information System (INIS)

    Bowman, J.D.; Bartley, D.L.; Breuer, G.M.; Doemeny, L.J.; Murdock, D.J.

    1984-07-01

    Procedures for testing the bias and precision of gravimetric coal-mine-dust sampling units are reviewed. Performance criteria for NIOSH certification of personal coal-mine-dust samplers are considered. The NIOSH criterion is an accuracy of 25% at the 95% confidence level. Size distributions of coal-mine dust are discussed. Methods for determining size distributions are described. Sampling and sizing methods are considered. Cyclone parameter estimation is discussed. Bias computations for general sampling units are noted. Recommended procedures for evaluating the bias and precision of gravimetric coal-mine-dust personal samplers are given. The authors conclude that when cyclones are operated at lower flow rates, the NIOSH accuracy criteria can be met.

  15. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Scheid Anika

    2012-07-01

    Full Text Available Abstract Background Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent stochastic context-free grammar (SCFG that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples, where neither of these two competing approaches generally outperforms the other. Results In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones, then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst

  16. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  17. The Influence of Methods Massed Practice and Distributed Practice Model on The Speed and Accuracy of Service Tennis Courts

    OpenAIRE

    Desak Wiwin; Edy Mintarto; Nurkholis Nurkholis

    2017-01-01

    The purpose of this study was to analyze (1) the effect of the massed practice method on the speed and accuracy of service, (2) the effect of the distributed practice method on the speed and accuracy of service, and (3) the influence of the massed practice and distributed practice methods on the speed and accuracy of service. The research is quantitative, using quasi-experimental methods. The research design uses a non-randomized control group...

  18. Assessment of Sr-90 in water samples: precision and accuracy

    International Nuclear Information System (INIS)

    Nisti, Marcelo B.; Saueia, Cátia H.R.; Castilho, Bruna; Mazzilli, Barbara P.

    2017-01-01

    The study of artificial radionuclide dispersion into the environment is very important for the control of nuclear waste discharges, nuclear accidents and nuclear weapons testing. The accidents at the Fukushima Daiichi Nuclear Power Plant and the Chernobyl Nuclear Power Plant released several radionuclides into the environment by aerial deposition and liquid discharge, with various levels of radioactivity. 90Sr was one of the elements released into the environment. 90Sr is produced by nuclear fission, with a physical half-life of 28.79 years and a decay energy of 0.546 MeV. The aims of this study are to evaluate the precision and accuracy of three methodologies for the determination of 90Sr in water samples: Cerenkov counting, the direct LSC method, and a method with radiochemical separation. The performance of the methodologies was evaluated by using two scintillation counters (Quantulus and Hidex). The parameters Minimum Detectable Activity (MDA) and Figure Of Merit (FOM) were determined for each method, and the precision and accuracy were checked using 90Sr standard solutions. (author)

  19. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
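    The simplest of the listed techniques, simulation of the urn experiment, is sketched below for the multivariate Wallenius distribution: balls are drawn one at a time without replacement, with probability proportional to the remaining weight of each colour (urn contents and weights are invented).

        import numpy as np

        rng = np.random.default_rng(3)

        def wallenius_urn(m, weights, n_draws):
            """Counts taken of each colour when drawing n_draws balls, one at a time,
            from an urn with m[i] balls of colour i and colour weights weights[i]."""
            m = np.asarray(m, dtype=float)
            w = np.asarray(weights, dtype=float)
            taken = np.zeros_like(m)
            for _ in range(n_draws):
                p = (m - taken) * w
                p /= p.sum()                     # odds proportional to remaining weight
                colour = rng.choice(len(m), p=p)
                taken[colour] += 1
            return taken.astype(int)

        # Three colours, 50 balls each, colour 0 twice as "heavy" as the others.
        print(wallenius_urn([50, 50, 50], [2.0, 1.0, 1.0], n_draws=60))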

  20. Wireless Technology Recognition Based on RSSI Distribution at Sub-Nyquist Sampling Rate for Constrained Devices.

    Science.gov (United States)

    Liu, Wei; Kulin, Merima; Kazaz, Tarik; Shahid, Adnan; Moerman, Ingrid; De Poorter, Eli

    2017-09-12

    Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies becomes increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and the strict condition of sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that multi-model distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.

  1. Improving diagnostic accuracy using agent-based distributed data mining system.

    Science.gov (United States)

    Sridhar, S

    2013-09-01

    The use of data mining techniques to improve the diagnostic system accuracy is investigated in this paper. The data mining algorithms aim to discover patterns and extract useful knowledge from facts recorded in databases. Generally, the expert systems are constructed for automating diagnostic procedures. The learning component uses the data mining algorithms to extract the expert system rules from the database automatically. Learning algorithms can assist the clinicians in extracting knowledge automatically. As the number and variety of data sources is dramatically increasing, another way to acquire knowledge from databases is to apply various data mining algorithms that extract knowledge from data. As data sets are inherently distributed, the distributed system uses agents to transport the trained classifiers and uses meta learning to combine the knowledge. Commonsense reasoning is also used in association with distributed data mining to obtain better results. Combining human expert knowledge and data mining knowledge improves the performance of the diagnostic system. This work suggests a framework of combining the human knowledge and knowledge gained by better data mining algorithms on a renal and gallstone data set.

  2. Impact Of Tissue Sampling On Accuracy Of Ki67 Immunohistochemistry Evaluation In Breast Cancer

    Directory of Open Access Journals (Sweden)

    Justinas Besusparis

    2016-06-01

    The sampling requirements were dependent on the heterogeneity of the biomarker expression. To achieve a coefficient of error of 10%, 5-6 cores were needed for homogeneous cases, while 11-12 cores were needed for heterogeneous cases. In a mixed tumor population, 8 TMA cores were required. Similarly, to achieve the same accuracy, approximately 4,000 nuclei must be counted when the intra-tumor heterogeneity is mixed/unknown. Tumors at the lower scale of proliferative activity would require larger sampling (10-12 TMA cores, or 5,000 nuclei) to achieve the same error measurement results as for highly proliferative tumors. Our data show that optimal tissue sampling for IHC biomarker evaluation is dependent on the heterogeneity of the tissue under study and needs to be determined on a per-use basis. We propose a method that can be applied to determine the TMA sampling strategy for specific biomarkers, tissues and study targets. In addition, our findings highlight the importance of high-capacity computer-based IHC measurement techniques to improve the accuracy of the testing.

  3. Climatic associations of British species distributions show good transferability in time but low predictive accuracy for range change.

    Directory of Open Access Journals (Sweden)

    Giovanni Rapacciuolo

    Full Text Available Conservation planners often wish to predict how species distributions will change in response to environmental changes. Species distribution models (SDMs are the primary tool for making such predictions. Many methods are widely used; however, they all make simplifying assumptions, and predictions can therefore be subject to high uncertainty. With global change well underway, field records of observed range shifts are increasingly being used for testing SDM transferability. We used an unprecedented distribution dataset documenting recent range changes of British vascular plants, birds, and butterflies to test whether correlative SDMs based on climate change provide useful approximations of potential distribution shifts. We modelled past species distributions from climate using nine single techniques and a consensus approach, and projected the geographical extent of these models to a more recent time period based on climate change; we then compared model predictions with recent observed distributions in order to estimate the temporal transferability and prediction accuracy of our models. We also evaluated the relative effect of methodological and taxonomic variation on the performance of SDMs. Models showed good transferability in time when assessed using widespread metrics of accuracy. However, models had low accuracy to predict where occupancy status changed between time periods, especially for declining species. Model performance varied greatly among species within major taxa, but there was also considerable variation among modelling frameworks. Past climatic associations of British species distributions retain a high explanatory power when transferred to recent time--due to their accuracy to predict large areas retained by species--but fail to capture relevant predictors of change. We strongly emphasize the need for caution when using SDMs to predict shifts in species distributions: high explanatory power on temporally-independent records

  4. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    Science.gov (United States)

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
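    A minimal random-walk (sequential sampling) simulation conveys the two manipulations, although it is not the authors' fitted model and all parameter values are invented: raising the threshold trades speed for accuracy, while shifting the starting point acts like a response-bias criterion.

        import numpy as np

        rng = np.random.default_rng(4)

        def random_walk_trial(drift, threshold, start, noise=1.0, dt=0.01, max_t=10.0):
            """Accumulate noisy evidence until an upper (+threshold) or lower (-threshold)
            boundary is crossed; `start` shifts the starting point (response bias)."""
            x, t = start, 0.0
            while abs(x) < threshold and t < max_t:
                x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
            return ("conflict" if x >= threshold else "no conflict"), t

        def run(drift, threshold, start, n=2000):
            results = [random_walk_trial(drift, threshold, start) for _ in range(n)]
            acc = np.mean([r[0] == "conflict" for r in results])  # drift > 0, so "conflict" is correct
            rt = np.mean([r[1] for r in results])
            return acc, rt

        for threshold in (0.8, 2.0):              # speed emphasis vs accuracy emphasis
            acc, rt = run(drift=1.0, threshold=threshold, start=0.0)
            print(f"threshold={threshold}: accuracy={acc:.2f}, mean RT={rt:.2f}s")

        acc, rt = run(drift=1.0, threshold=2.0, start=0.8)   # start shifted toward "conflict"
        print(f"biased start: accuracy={acc:.2f}, mean RT={rt:.2f}s")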

  5. Accuracy and Precision in Elemental Analysis of Environmental Samples using Inductively Coupled Plasma-Atomic Emission Spectrometry

    International Nuclear Information System (INIS)

    Quraishi, Shamsad Begum; Chung, Yong-Sam; Choi, Kwang Soon

    2005-01-01

    Inductively Coupled Plasma-Atomic Emission Spectrometry (ICP-AES), following microwave digestion, has been performed on different environmental Certified Reference Materials (CRMs). The analytical results show that the accuracy and precision of the ICP-AES analysis were acceptable and satisfactory in the case of the soil and hair CRM samples. The relative error of most of the elements in these two CRMs is within 10%, with few exceptions, and the coefficient of variation is also less than 10%. The z-score, as a measure of analytical performance, was also within the acceptable range (±2). ICP-AES was found to be an inadequate method for the air filter CRM due to incomplete dissolution, the low concentration of elements and the very low mass of the sample. However, real air filter samples could be analyzed with high accuracy and precision by increasing the sample mass during collection. (author)

  6. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as a research object, thirteen sample sets from different regions were arranged surrounding the road network, the spatial configuration of which was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by the terrain analysis. Based on the results of optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perception model on the basis of neural network approach was implemented. The comparison between these two models was carried out then. The results revealed that the proposed approach was practicable in optimizing soil sampling scheme. The optimal configuration was capable of gaining soil-landscape knowledge exactly, and the accuracy of optimal configuration was better than that of original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of road network, historical samples, and digital elevation data, which provided an effective means as well as a theoretical basis for determining the sampling configuration and displaying spatial distribution of soil organic matter with low cost and high efficiency.
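    A generic simulated-annealing sketch of the optimization step is given below; it selects sampling sites so as to maximize their minimum pairwise spacing, which is only a stand-in for the study's actual objective, data and road-network constraints.

        import math
        import random

        random.seed(5)

        # Hypothetical candidate sites (x, y) along a road network.
        candidates = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(200)]

        def min_pairwise_distance(sites):
            return min(math.dist(a, b) for i, a in enumerate(sites) for b in sites[i + 1:])

        def anneal(n_select=20, t0=10.0, cooling=0.995, steps=5000):
            current = random.sample(range(len(candidates)), n_select)
            score = min_pairwise_distance([candidates[i] for i in current])
            t = t0
            for _ in range(steps):
                proposal = current.copy()
                # Swap one selected site for a currently unselected one.
                proposal[random.randrange(n_select)] = random.choice(
                    [i for i in range(len(candidates)) if i not in current])
                new_score = min_pairwise_distance([candidates[i] for i in proposal])
                # Accept improvements always, worse moves with Boltzmann probability.
                if new_score > score or random.random() < math.exp((new_score - score) / t):
                    current, score = proposal, new_score
                t *= cooling
            return current, score

        best, best_score = anneal()
        print(f"selected {len(best)} sites, minimum pairwise spacing = {best_score:.1f}")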

  7. Assessment of Sr-90 in water samples: precision and accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Nisti, Marcelo B.; Saueia, Cátia H.R.; Castilho, Bruna; Mazzilli, Barbara P., E-mail: mbnisti@ipen.br, E-mail: chsaueia@ipen.br, E-mail: bcastilho@ipen.br, E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-11-01

    The study of artificial radionuclide dispersion into the environment is very important for the control of nuclear waste discharges, nuclear accidents and nuclear weapons testing. The accidents at the Fukushima Daiichi Nuclear Power Plant and the Chernobyl Nuclear Power Plant released several radionuclides into the environment by aerial deposition and liquid discharge, with various levels of radioactivity. 90Sr was one of the elements released into the environment. 90Sr is produced by nuclear fission, with a physical half-life of 28.79 years and a decay energy of 0.546 MeV. The aims of this study are to evaluate the precision and accuracy of three methodologies for the determination of 90Sr in water samples: Cerenkov counting, the direct LSC method, and a method with radiochemical separation. The performance of the methodologies was evaluated by using two scintillation counters (Quantulus and Hidex). The parameters Minimum Detectable Activity (MDA) and Figure Of Merit (FOM) were determined for each method, and the precision and accuracy were checked using 90Sr standard solutions. (author)

  8. Distribution-Preserving Stratified Sampling for Learning Problems.

    Science.gov (United States)

    Cervellera, Cristiano; Maccio, Danilo

    2017-06-09

    The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
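    A much-simplified, one-dimensional stand-in for the idea (not the paper's recursive binary partition algorithm): bin the data and allocate the subsample proportionally to the bin counts, so the selected points roughly follow the original distribution.

        import numpy as np

        rng = np.random.default_rng(6)

        def stratified_subsample(x, n_out, n_bins=32):
            """Draw about n_out points whose histogram follows that of x (1-D sketch)."""
            edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
            bin_of = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
            chosen = []
            for b in range(n_bins):
                idx = np.flatnonzero(bin_of == b)
                take = round(n_out * idx.size / x.size)       # proportional allocation
                if take:
                    chosen.append(rng.choice(idx, size=min(take, idx.size), replace=False))
            return x[np.concatenate(chosen)]

        data = rng.standard_normal(100_000) ** 3              # heavy-tailed "real" data
        sub = stratified_subsample(data, n_out=1000)
        print("data mean/std:     ", data.mean(), data.std())
        print("subsample mean/std:", sub.mean(), sub.std())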

  9. Simulating quantum correlations as a distributed sampling problem

    International Nuclear Information System (INIS)

    Degorre, Julien; Laplante, Sophie; Roland, Jeremie

    2005-01-01

    It is known that quantum correlations exhibited by a maximally entangled qubit pair can be simulated with the help of shared randomness, supplemented with additional resources, such as communication, postselection or nonlocal boxes. For instance, in the case of projective measurements, it is possible to solve this problem with protocols using one bit of communication or making one use of a nonlocal box. We show that this problem reduces to a distributed sampling problem. We give a new method to obtain samples from a biased distribution, starting with shared random variables following a uniform distribution, and use it to build distributed sampling protocols. This approach allows us to derive, in a simpler and unified way, many existing protocols for projective measurements, and extend them to positive operator value measurements. Moreover, this approach naturally leads to a local hidden variable model for Werner states

  10. Sampling from the normal and exponential distributions

    International Nuclear Information System (INIS)

    Chaplin, K.R.; Wills, C.A.

    1982-01-01

    Methods for generating random numbers from the normal and exponential distributions are described. These involve dividing each function into subregions, and for each of these developing a method of sampling usually based on an acceptance rejection technique. When sampling from the normal or exponential distribution, each subregion provides the required random value with probability equal to the ratio of its area to the total area. Procedures written in FORTRAN for the CYBER 175/CDC 6600 system are provided to implement the two algorithms
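    The report's subregion decomposition is not reproduced here, but the acceptance-rejection idea it relies on can be shown with the classic trick of generating a standard normal from an exponential envelope:

        import math
        import random

        def standard_normal_by_rejection():
            """Sample |Z| from Exp(1), accept with probability exp(-(x-1)^2/2),
            then attach a random sign; accepted values follow a standard normal."""
            while True:
                x = random.expovariate(1.0)               # proposal from the exponential
                if random.random() < math.exp(-0.5 * (x - 1.0) ** 2):
                    return x if random.random() < 0.5 else -x

        samples = [standard_normal_by_rejection() for _ in range(100_000)]
        mean = sum(samples) / len(samples)
        var = sum(s * s for s in samples) / len(samples) - mean ** 2
        print(f"mean={mean:.3f}, variance={var:.3f}")      # expect roughly 0 and 1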

  11. Distribution of age at menopause in two Danish samples

    DEFF Research Database (Denmark)

    Boldsen, J L; Jeune, B

    1990-01-01

    We analyzed the distribution of reported age at natural menopause in two random samples of Danish women (n = 176 and n = 150) to determine the shape of the distribution and to disclose any possible trends in the distribution parameters. It was necessary to correct the frequencies of the reported ages for the effect of differing ages at reporting. The corrected distribution of age at menopause differs from the normal distribution in the same way in both samples. Both distributions could be described by a mixture of two normal distributions. It appears that most of the parameters of the normal distribution mixtures remain unchanged over a 50-year time lag. The position of the distribution, that is, the mean age at menopause, however, increases slightly but significantly.

  12. Aspects of Students' Reasoning about Variation in Empirical Sampling Distributions

    Science.gov (United States)

    Noll, Jennifer; Shaughnessy, J. Michael

    2012-01-01

    Sampling tasks and sampling distributions provide a fertile realm for investigating students' conceptions of variability. A project-designed teaching episode on samples and sampling distributions was team-taught in 6 research classrooms (2 middle school and 4 high school) by the investigators and regular classroom mathematics teachers. Data…

  13. The accuracy of endometrial sampling in women with postmenopausal bleeding: a systematic review and meta-analysis

    NARCIS (Netherlands)

    van Hanegem, Nehalennia; Prins, Marileen M. C.; Bongers, Marlies Y.; Opmeer, Brent C.; Sahota, Daljit Singh; Mol, Ben Willem J.; Timmermans, Anne

    2016-01-01

    Postmenopausal bleeding (PMB) can be the first sign of endometrial cancer. In case of thickened endometrium, endometrial sampling is often used in these women. In this systematic review, we studied the accuracy of endometrial sampling for the diagnoses of endometrial cancer, atypical hyperplasia and

  14. Water sample-collection and distribution system

    Science.gov (United States)

    Brooks, R. R.

    1978-01-01

    Collection and distribution system samples water from six designated stations, filtered if desired, and delivers it to various analytical sensors. System may be controlled by Water Monitoring Data Acquisition System or operated manually.

  15. Enhanced conformational sampling using enveloping distribution sampling.

    Science.gov (United States)

    Lin, Zhixiong; van Gunsteren, Wilfred F

    2013-10-14

    To lessen the problem of insufficient conformational sampling in biomolecular simulations is still a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10∕12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to neglect of relevant helical configurations. In the simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, and sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smoothen the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.

  16. An Investigation of the Sampling Distribution of the Congruence Coefficient.

    Science.gov (United States)

    Broadbooks, Wendy J.; Elmore, Patricia B.

    This study developed and investigated an empirical sampling distribution of the congruence coefficient. The effects of sample size, number of variables, and population value of the congruence coefficient on the sampling distribution of the congruence coefficient were examined. Sample data were generated on the basis of the common factor model and…

  17. An Investigation of the Sampling Distributions of Equating Coefficients.

    Science.gov (United States)

    Baker, Frank B.

    1996-01-01

    Using the characteristic curve method for dichotomously scored test items, the sampling distributions of equating coefficients were examined. Simulations indicate that for the equating conditions studied, the sampling distributions of the equating coefficients appear to have acceptable characteristics, suggesting confidence in the values obtained…

  18. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Directory of Open Access Journals (Sweden)

    Heinz Gallaun

    2015-09-01

    Full Text Available Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of areas undergoing land cover changes, including provision of confidence intervals. We propose a two-stage sampling approach in order to keep accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and verification estimations rely on a high quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach we applied it for assessment of deforestation in an area characterized by frequent cloud cover and very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.
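    A stripped-down numeric sketch of the estimation step follows: a stratified reference sample is used to estimate the change-area proportion, and a bootstrap gives a confidence interval. The strata sizes and error rates are invented, and the real method adds a second sampling stage and visual interpretation of imagery.

        import numpy as np

        rng = np.random.default_rng(7)

        stratum_size = np.array([95_000, 5_000])        # map units mapped as "no change" / "change"
        true_change_rate = np.array([0.002, 0.70])      # true deforestation rate per stratum
        n_sample = np.array([300, 300])                 # reference sample size per stratum

        # Simulated reference labels (1 = change) for the sampled units.
        samples = [rng.binomial(1, p, n) for p, n in zip(true_change_rate, n_sample)]

        def area_proportion(samples):
            weights = stratum_size / stratum_size.sum()
            return sum(w * s.mean() for w, s in zip(weights, samples))

        def bootstrap_ci(samples, n_boot=2000, alpha=0.05):
            estimates = [
                area_proportion([rng.choice(s, size=s.size, replace=True) for s in samples])
                for _ in range(n_boot)
            ]
            return np.quantile(estimates, [alpha / 2, 1 - alpha / 2])

        print("estimated change proportion:", area_proportion(samples))
        print("95% bootstrap CI:", bootstrap_ci(samples))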

  19. Accuracy of sampling during mushroom cultivation

    NARCIS (Netherlands)

    Baars, J.J.P.; Hendrickx, P.M.; Sonnenberg, A.S.M.

    2015-01-01

    Experiments described in this report were performed to increase the accuracy of the analysis of the biological efficiency of Agaricus bisporus strains. Biological efficiency is a measure of the efficiency with which the mushroom strains use dry matter in the compost to produce mushrooms (expressed

  20. Enhanced Sampling in Free Energy Calculations: Combining SGLD with the Bennett's Acceptance Ratio and Enveloping Distribution Sampling Methods.

    Science.gov (United States)

    König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R

    2012-10-09

    One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.

  1. Accuracy in estimation of timber assortments and stem distribution - A comparison of airborne and terrestrial laser scanning techniques

    Science.gov (United States)

    Kankare, Ville; Vauhkonen, Jari; Tanhuanpää, Topi; Holopainen, Markus; Vastaranta, Mikko; Joensuu, Marianna; Krooks, Anssi; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto

    2014-11-01

    Detailed information about timber assortments and diameter distributions is required in forest management. Forest owners can make better decisions concerning the timing of timber sales and forest companies can utilize more detailed information to optimize their wood supply chain from forest to factory. The objective here was to compare the accuracies of high-density laser scanning techniques for the estimation of tree-level diameter distribution and timber assortments. We also introduce a method that utilizes a combination of airborne and terrestrial laser scanning in timber assortment estimation. The study was conducted in Evo, Finland. Harvester measurements were used as a reference for 144 trees within a single clear-cut stand. The results showed that accurate tree-level timber assortments and diameter distributions can be obtained, using terrestrial laser scanning (TLS) or a combination of TLS and airborne laser scanning (ALS). Saw log volumes were estimated with higher accuracy than pulpwood volumes. The saw log volumes were estimated with relative root-mean-squared errors of 17.5% and 16.8% with TLS and a combination of TLS and ALS, respectively. The respective accuracies for pulpwood were 60.1% and 59.3%. The differences in the bucking method used also caused some large errors. In addition, tree quality factors highly affected the bucking accuracy, especially with pulpwood volume.

  2. Log-stable concentration distributions of trace elements in biomedical samples

    International Nuclear Information System (INIS)

    Kubala-Kukus, A.; Kuternoga, E.; Braziewicz, J.; Pajek, M.

    2004-01-01

    In the present paper, which follows our earlier observation that the asymmetric and long-tailed concentration distributions of trace elements in biomedical samples, measured by the X-ray fluorescence techniques, can be modeled by the log-stable distributions, further specific aspects of this observation are discussed. First, we demonstrate that, typically, for a quite substantial fraction (10-20%) of trace elements studied in different kinds of biomedical samples, the measured concentration distributions are described in fact by the 'symmetric' log-stable distributions, i.e. the asymmetric distributions which are described by the symmetric stable distributions. This observation is, in fact, expected for the random multiplicative process, which models the concentration distributions of trace elements in the biomedical samples. The log-stable nature of concentration distribution of trace elements results in several problems of statistical nature, which have to be addressed in XRF data analysis practice. Consequently, in the present paper, the following problems, namely (i) the estimation of parameters for stable distributions and (ii) the testing of the log-stable nature of the concentration distribution by using the Anderson-Darling (A 2 ) test, especially for symmetric stable distributions, are discussed in detail. In particular, the maximum likelihood estimation and Monte Carlo simulation techniques were used, respectively, for estimation of stable distribution parameters and calculation of the critical values for the Anderson-Darling test. The discussed ideas are exemplified by the results of the study of trace element concentration distributions in selected biomedical samples, which were obtained by using the X-ray fluorescence (XRF, TXRF) methods

  3. Experimental determination of size distributions: analyzing proper sample sizes

    International Nuclear Information System (INIS)

    Buffo, A; Alopaeus, V

    2016-01-01

    The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used. (paper)
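    The record does not give its formulas, but the familiar textbook calculation conveys the idea: from a pilot sample's standard deviation, compute how many particles must be measured so that the mean size is within a chosen tolerance at a given confidence level.

        import math
        from statistics import NormalDist

        def required_sample_size(pilot_std, tolerance, confidence=0.95):
            """Smallest n such that the half-width of the CI for the mean is <= tolerance."""
            z = NormalDist().inv_cdf(0.5 + confidence / 2)
            return math.ceil((z * pilot_std / tolerance) ** 2)

        # Pilot droplet-size measurements with s = 18 um; mean wanted within +/- 2 um.
        print(required_sample_size(pilot_std=18.0, tolerance=2.0))   # about 312 particles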

  4. Improving shuffler assay accuracy

    International Nuclear Information System (INIS)

    Rinard, P.M.

    1995-01-01

    Drums of uranium waste should be disposed of in an economical and environmentally sound manner. The most accurate possible assays of the uranium masses in the drums are required for proper disposal. The accuracies of assays from a shuffler are affected by the type of matrix material in the drums. Non-hydrogenous matrices have little effect on neutron transport and accuracies are very good. If self-shielding is known to be a minor problem, good accuracies are also obtained with hydrogenous matrices when a polyethylene sleeve is placed around the drums. But for those cases where self-shielding may be a problem, matrices are hydrogenous, and uranium distributions are non-uniform throughout the drums, the accuracies are degraded. They can be greatly improved by determining the distributions of the uranium and then applying correction factors based on the distributions. This paper describes a technique for determining uranium distributions by using the neutron count rates in detector banks around the waste drum and solving a set of overdetermined linear equations. Other approaches were studied to determine the distributions and are described briefly. Implementation of this correction is anticipated on an existing shuffler next year
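    A schematic of the "overdetermined linear equations" step: if each detector bank's count rate is approximately a known linear response to the uranium mass in each drum segment, the segment masses can be recovered by least squares. The response matrix and masses below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(8)

        # Hypothetical response matrix: counts in 8 detector banks per unit mass
        # in each of 4 drum segments (in practice from calibration or modelling).
        R = rng.uniform(0.5, 2.0, size=(8, 4))

        true_mass = np.array([5.0, 0.5, 0.2, 3.0])              # non-uniform distribution
        counts = R @ true_mass + rng.normal(0, 0.05, size=8)    # measured bank count rates

        # Overdetermined system (8 equations, 4 unknowns) solved by least squares.
        est_mass, *_ = np.linalg.lstsq(R, counts, rcond=None)
        print("estimated segment masses:", np.round(est_mass, 2))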

  5. Distributed Secondary Control for DC Microgrid Applications with Enhanced Current Sharing Accuracy

    DEFF Research Database (Denmark)

    Lu, Xiaonan; Guerrero, Josep M.; Sun, Kai

    2013-01-01

    With the consideration of line resistances in a dc microgrid, the current sharing accuracy is lowered, since the dc output voltage cannot be exactly the same for different interfacing converters. Meanwhile, dc bus voltage deviation is introduced by using droop control. In this paper, a distributed secondary control method is proposed. Droop control is employed as the primary control method for load current sharing. Meanwhile, the dc output voltage and current in each module is transferred to the others by the low bandwidth communication (LBC) network. Average voltage and current controllers ... control diagram is accomplished and the requirement of distributed configuration in a microgrid is satisfied. The experimental validation based on a 2×2.2 kW prototype was implemented to demonstrate the proposed approach.

  6. Interference Imaging of Refractive Index Distribution in Thin Samples

    Directory of Open Access Journals (Sweden)

    Ivan Turek

    2004-01-01

    Full Text Available There are three versions of interference imaging of refractive index distribution in thin samples suggested in this contribution. These are based on imaging of interference field created by waves reflected from the front and the back sample surface or imaging of interference field of Michelson or Mach-Zehnder interferometer with the sample put in one of the interferometers arm. The work discusses the advantages and disadvantages of these techniques and presents the results of imaging of refrective index distribution in photorefractive record of a quasi-harmonic optical field in thin LiNbO3 crystal sample.

  7. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
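    For reference, the kappa statistic mentioned above can be computed from a confusion matrix of a presence/absence model as follows (the matrix values are arbitrary):

        import numpy as np

        def cohens_kappa(confusion):
            """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
            confusion = np.asarray(confusion, dtype=float)
            total = confusion.sum()
            observed = np.trace(confusion) / total
            chance = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
            return (observed - chance) / (1 - chance)

        # Rows: observed presence/absence; columns: predicted presence/absence.
        print(round(cohens_kappa([[40, 10],
                                  [15, 135]]), 3))    # about 0.68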

  8. Immediate Feedback on Accuracy and Performance: The Effects of Wireless Technology on Food Safety Tracking at a Distribution Center

    Science.gov (United States)

    Goomas, David T.

    2012-01-01

    The effects of wireless ring scanners, which provided immediate auditory and visual feedback, were evaluated to increase the performance and accuracy of order selectors at a meat distribution center. The scanners not only increased performance and accuracy compared to paper pick sheets, but were also instrumental in immediate and accurate data…

  9. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in the reliability definition follows a normal distribution, the conjugate prior of its distribution parameters (μ, h) is a normal-gamma distribution. With the help of the maximum entropy and moments-equivalence principles, the subjective information about the parameter and the sampling data of its independent variables are transformed into a Bayesian prior of (μ, h). The desired estimates are obtained from either the prior or the posterior, which is formed by combining the prior and sampling data. Computing methods are described and examples are presented as demonstrations.
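    A sketch of the conjugate update itself (the standard textbook formulas for a normal likelihood with a normal-gamma prior on (μ, h)); the prior settings are made up and the paper's maximum-entropy prior construction is not reproduced.

        import numpy as np

        def normal_gamma_update(x, mu0, kappa0, alpha0, beta0):
            """Posterior hyperparameters of the normal-gamma prior NG(mu0, kappa0, alpha0, beta0)
            for data x ~ Normal(mu, 1/h) with unknown mean mu and precision h."""
            x = np.asarray(x, dtype=float)
            n, xbar = x.size, x.mean()
            kappa_n = kappa0 + n
            mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
            alpha_n = alpha0 + n / 2
            beta_n = (beta0 + 0.5 * ((x - xbar) ** 2).sum()
                      + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n))
            return mu_n, kappa_n, alpha_n, beta_n

        data = np.random.default_rng(9).normal(10.0, 2.0, size=25)   # e.g. strength measurements
        mu_n, kappa_n, alpha_n, beta_n = normal_gamma_update(data, mu0=8.0, kappa0=1.0,
                                                             alpha0=2.0, beta0=2.0)
        print(f"posterior mean of mu: {mu_n:.2f}, posterior mean of h: {alpha_n / beta_n:.3f}")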

  10. Predictable return distributions

    DEFF Research Database (Denmark)

    Pedersen, Thomas Quistgaard

    ... trace out the entire distribution. A univariate quantile regression model is used to examine stock and bond return distributions individually, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that certain parts of the return distributions ... Out-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing...

  11. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Science.gov (United States)

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively, we discuss how

  12. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Directory of Open Access Journals (Sweden)

    Abhishek Mitra

    Full Text Available Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively

  13. Analyzing thematic maps and mapping for accuracy

    Science.gov (United States)

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
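
    The accuracy bookkeeping described here is straightforward to reproduce. The sketch below computes overall accuracy and the errors of commission and omission from a classification error matrix with rows as interpretation and columns as verification; the counts are invented purely for illustration.

```python
# Minimal sketch: accuracy measures from a classification error (confusion) matrix,
# rows = interpretation, columns = verification. Counts are invented.
import numpy as np

error_matrix = np.array([[50,  3,  2],
                         [ 4, 45,  6],
                         [ 1,  5, 40]])

total = error_matrix.sum()
correct = np.trace(error_matrix)                   # diagonal = correct classifications
overall_accuracy = correct / total

commission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=1)  # row-wise errors
omission   = 1 - np.diag(error_matrix) / error_matrix.sum(axis=0)  # column-wise errors

print(f"overall accuracy: {overall_accuracy:.3f}")
print("errors of commission per category:", np.round(commission, 3))
print("errors of omission per category:  ", np.round(omission, 3))
```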

  14. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    Science.gov (United States)

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

    We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in the time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of peak position of the cross-correlation and, therefore, improve the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. The strain of 3 μϵ within the spatial resolution of 1 cm at the position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
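
    The interpolation idea can be illustrated with synthetic data: zero padding in one domain refines the sampling of the cross-correlation in the other, which sharpens the estimate of its peak position. The sketch below is not the authors' processing chain; the traces, shift and padding factor are hypothetical.

```python
# Sketch: refine the peak position of a cross-correlation by zero padding its
# spectrum before the inverse FFT. Synthetic Gaussian traces stand in for the
# windowed OFDR data described above.
import numpy as np

n, pad_factor, true_shift = 64, 8, 10.3
x = np.arange(n)
ref = np.exp(-0.5 * ((x - n / 2) / 3.0) ** 2)                 # reference trace
meas = np.exp(-0.5 * ((x - n / 2 - true_shift) / 3.0) ** 2)   # shifted trace

# Circular cross-correlation via the FFT; its peak locates the shift only to the
# nearest whole sample.
xcorr = np.fft.ifft(np.fft.fft(meas) * np.conj(np.fft.fft(ref)))

# Zero-pad the spectrum of the cross-correlation, then inverse FFT: the
# correlation is resampled on a grid pad_factor times finer.
spec = np.fft.fftshift(np.fft.fft(xcorr))
spec = np.pad(spec, n * (pad_factor - 1) // 2)
xcorr_fine = np.fft.ifft(np.fft.ifftshift(spec))

coarse_peak = np.argmax(np.abs(xcorr))                   # whole-sample estimate
fine_peak = np.argmax(np.abs(xcorr_fine)) / pad_factor   # sub-sample estimate
print("coarse peak:", coarse_peak, " interpolated peak:", fine_peak)
```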

  15. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    Full Text Available In this paper, two sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two-sample case. The class of the inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.

  16. Using Language Sample Analysis in Clinical Practice: Measures of Grammatical Accuracy for Identifying Language Impairment in Preschool and School-Aged Children.

    Science.gov (United States)

    Eisenberg, Sarita; Guo, Ling-Yu

    2016-05-01

    This article reviews the existing literature on the diagnostic accuracy of two grammatical accuracy measures for differentiating children with and without language impairment (LI) at preschool and early school age based on language samples. The first measure, the finite verb morphology composite (FVMC), is a narrow grammatical measure that computes children's overall accuracy of four verb tense morphemes. The second measure, percent grammatical utterances (PGU), is a broader grammatical measure that computes children's accuracy in producing grammatical utterances. The extant studies show that FVMC demonstrates acceptable (i.e., 80 to 89% accurate) to good (i.e., 90% accurate or higher) diagnostic accuracy for children between 4;0 (years;months) and 6;11 in conversational or narrative samples. In contrast, PGU yields acceptable to good diagnostic accuracy for children between 3;0 and 8;11 regardless of sample types. Given the diagnostic accuracy shown in the literature, we suggest that FVMC and PGU can be used as one piece of evidence for identifying children with LI in assessment when appropriate. However, FVMC or PGU should not be used as therapy goals directly. Instead, when children are low in FVMC or PGU, we suggest that follow-up analyses should be conducted to determine the verb tense morphemes or grammatical structures that children have difficulty with.

  17. A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts

    Directory of Open Access Journals (Sweden)

    Hossein Hassani

    2015-08-01

    Full Text Available This paper introduces a complementary statistical test for distinguishing between the predictive accuracy of two sets of forecasts. We propose a non-parametric test founded upon the principles of the Kolmogorov-Smirnov (KS) test, referred to as the KS Predictive Accuracy (KSPA) test. The KSPA test is able to serve two distinct purposes. Initially, the test seeks to determine whether there exists a statistically significant difference between the distributions of forecast errors, and secondly it exploits the principles of stochastic dominance to determine whether the forecasts with the lower error also report a stochastically smaller error than forecasts from a competing model, and thereby enables distinguishing between the predictive accuracy of forecasts. We perform a simulation study for the size and power of the proposed test and report the results for different noise distributions, sample sizes and forecasting horizons. The simulation results indicate that the KSPA test is correctly sized, and robust in the face of varying forecasting horizons and sample sizes along with significant accuracy gains reported especially in the case of small sample sizes. Real world applications are also considered to illustrate the applicability of the proposed KSPA test in practice.
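
    The two stages of such a test are easy to mimic with standard tools. The sketch below illustrates the idea with scipy's two-sample KS test applied to absolute forecast errors; it is not the authors' reference implementation, and the error series are simulated.

```python
# Sketch of a KSPA-style comparison: (1) test whether two forecast-error
# distributions differ at all; (2) test whether model A's errors are
# stochastically smaller than model B's. The errors here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
errors_a = np.abs(rng.normal(0.0, 1.0, size=100))   # hypothetical |errors|, model A
errors_b = np.abs(rng.normal(0.0, 1.3, size=100))   # hypothetical |errors|, model B

# Stage 1: any difference between the two error distributions?
stage1 = stats.ks_2samp(errors_a, errors_b, alternative="two-sided")

# Stage 2: with A passed first, 'greater' tests the alternative that A's error CDF
# lies above B's, i.e. that A's errors tend to be stochastically smaller.
stage2 = stats.ks_2samp(errors_a, errors_b, alternative="greater")

print("stage 1 (two-sided) p-value:", stage1.pvalue)
print("stage 2 (one-sided) p-value:", stage2.pvalue)
```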

  18. Attenuation of species abundance distributions by sampling

    Science.gov (United States)

    Shimadzu, Hideyasu; Darnell, Ross

    2015-01-01

    Quantifying biodiversity aspects such as species presence/absence, richness and abundance is an important challenge to answer scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort for investigating large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper specifies the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how sampling bias is induced in the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies. PMID:26064626
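
    A toy version of the attenuation effect can be seen with binomial thinning, which is how random sub-sampling acts on per-species counts; the abundances and the sampling fraction below are invented and do not come from the paper.

```python
# Toy illustration (not the paper's model): sub-sampling as binomial thinning of a
# species abundance distribution. Rare species drop out, so the observed SAD is
# attenuated relative to the true one.
import numpy as np

rng = np.random.default_rng(42)
true_abundances = rng.lognormal(mean=2.0, sigma=1.5, size=200).astype(int) + 1
sampling_fraction = 0.1                                  # hypothetical effort

observed = rng.binomial(true_abundances, sampling_fraction)   # thinned counts

print("true species richness:    ", np.sum(true_abundances > 0))
print("observed species richness:", np.sum(observed > 0))
print("mean true abundance:      ", true_abundances.mean())
print("mean observed abundance (detected species):", observed[observed > 0].mean())
```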

  19. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model.

    Science.gov (United States)

    Plotnikov, Nikolay V

    2014-08-12

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
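
    For reference, the single-step linear response approximation mentioned above is commonly written as below (a hedged reminder of the standard form; the multistep variant applies the same estimate between successive intermediate potentials rather than directly between the coarse and fine ones):

```latex
\Delta G_{\mathrm{coarse}\rightarrow\mathrm{fine}} \approx
\tfrac{1}{2}\Big(
  \langle U_{\mathrm{fine}} - U_{\mathrm{coarse}} \rangle_{\mathrm{coarse}}
+ \langle U_{\mathrm{fine}} - U_{\mathrm{coarse}} \rangle_{\mathrm{fine}}
\Big)
```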

  20. Sampling Molecular Conformers in Solution with Quantum Mechanical Accuracy at a Nearly Molecular-Mechanics Cost.

    Science.gov (United States)

    Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano

    2016-09-13

    We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution.
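
    The population correction can be caricatured with a single-configuration Boltzmann reweighting: classical (MM) conformer populations are rescaled by the exponential of the QM-MM energy difference of each cluster representative. This is a crude stand-in for the thermodynamic perturbation step described above; all numbers are invented.

```python
# Hedged sketch: reweight MM conformer populations with Boltzmann factors of the
# QM-MM energy difference per conformer. Energies and populations are invented.
import numpy as np

kT = 0.593                                   # kcal/mol at ~298 K
p_mm = np.array([0.55, 0.30, 0.15])          # populations from clustering the MD run
delta_e = np.array([0.0, -0.8, 1.2])         # E_QM - E_MM per conformer (kcal/mol)

w = p_mm * np.exp(-delta_e / kT)
p_qm = w / w.sum()                           # perturbation-corrected populations
print(np.round(p_qm, 3))
```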

  1. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Directory of Open Access Journals (Sweden)

    Doo Yong Choi

    2016-04-01

    Full Text Available Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has a significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
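
    A minimal caricature of the scheme (not the authors' code) is a scalar Kalman filter on the DMA inflow whose sampling interval tightens when the normalized residual grows; the thresholds, noise levels and flow series below are hypothetical.

```python
# Sketch: scalar random-walk Kalman filter on flow data; the sampling interval is
# shortened when the normalized residual exceeds a threshold (possible burst).
import numpy as np

def kalman_burst_monitor(flow, q=0.5, r=2.0, threshold=3.0, dt_min=1, dt_max=15):
    x, p = flow[0], 1.0                        # state estimate and its variance
    dt, t, events = dt_max, 0, []
    while t < len(flow):
        p = p + q * dt                         # predict: variance grows with elapsed time
        innovation = flow[t] - x
        s = p + r                              # innovation variance
        k = p / s                              # Kalman gain
        x, p = x + k * innovation, (1 - k) * p
        z = innovation / np.sqrt(s)            # normalized residual
        if abs(z) > threshold:
            events.append((t, z))              # possible burst
            dt = dt_min                        # sample more frequently
        else:
            dt = min(dt_max, dt + 1)           # relax the sampling interval
        t += dt
    return events

rng = np.random.default_rng(0)
flow = np.concatenate([rng.normal(100, 2, 300),      # normal demand
                       rng.normal(130, 2, 100)])     # sustained extra flow (burst)
print(kalman_burst_monitor(flow)[:3])                # first flagged (time, residual) pairs
```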

  2. Meta-analysis for diagnostic accuracy studies: a new statistical model using beta-binomial distributions and bivariate copulas.

    Science.gov (United States)

    Kuss, Oliver; Hoyer, Annika; Solms, Alexander

    2014-01-15

    There are still challenges when meta-analyzing data from studies on diagnostic accuracy. This is mainly due to the bivariate nature of the response where information on sensitivity and specificity must be summarized while accounting for their correlation within a single trial. In this paper, we propose a new statistical model for the meta-analysis of diagnostic accuracy studies. This model uses beta-binomial distributions for the marginal numbers of true positives and true negatives and links these margins by a bivariate copula distribution. The new model comes with all the features of the current standard model, a bivariate logistic regression model with random effects, but has the additional advantages of a closed likelihood function and a larger flexibility for the correlation structure of sensitivity and specificity. In a simulation study, which compares three copula models and two implementations of the standard model, the Plackett and the Gauss copulas rarely perform worse and frequently perform better than the standard model. We use an example from a meta-analysis to judge the diagnostic accuracy of telomerase (a urinary tumor marker) for the diagnosis of primary bladder cancer for illustration.

  3. Diagnostic accuracy of serological diagnosis of hepatitis C and B using dried blood spot samples (DBS): two systematic reviews and meta-analyses.

    Science.gov (United States)

    Lange, Berit; Cohn, Jennifer; Roberts, Teri; Camp, Johannes; Chauffour, Jeanne; Gummadi, Nina; Ishizaki, Azumi; Nagarathnam, Anupriya; Tuaillon, Edouard; van de Perre, Philippe; Pichler, Christine; Easterbrook, Philippa; Denkinger, Claudia M

    2017-11-01

    Dried blood spots (DBS) are a convenient tool to enable diagnostic testing for viral diseases due to transport, handling and logistical advantages over conventional venous blood sampling. A better understanding of the performance of serological testing for hepatitis C (HCV) and hepatitis B virus (HBV) from DBS is important to enable more widespread use of this sampling approach in resource limited settings, and to inform the 2017 World Health Organization (WHO) guidance on testing for HBV/HCV. We conducted two systematic reviews and meta-analyses on the diagnostic accuracy of HCV antibody (HCV-Ab) and HBV surface antigen (HBsAg) from DBS samples compared to venous blood samples. MEDLINE, EMBASE, Global Health and Cochrane library were searched for studies that assessed diagnostic accuracy with DBS and agreement between DBS and venous sampling. Heterogeneity of results was assessed and where possible a pooled analysis of sensitivity and specificity was performed using a bivariate analysis with maximum likelihood estimate and 95% confidence intervals (95%CI). We conducted a narrative review on the impact of varying storage conditions or limits of detection in subsets of samples. The QUADAS-2 tool was used to assess risk of bias. For the diagnostic accuracy of HBsAg from DBS compared to venous blood, 19 studies were included in a quantitative meta-analysis, and 23 in a narrative review. Pooled sensitivity and specificity were 98% (95%CI:95%-99%) and 100% (95%CI:99-100%), respectively. For the diagnostic accuracy of HCV-Ab from DBS, 19 studies were included in a pooled quantitative meta-analysis, and 23 studies were included in a narrative review. Pooled estimates of sensitivity and specificity were 98% (CI95%:95-99) and 99% (CI95%:98-100), respectively. Overall quality of studies and heterogeneity were rated as moderate in both systematic reviews. HCV-Ab and HBsAg testing using DBS compared to venous blood sampling was associated with excellent diagnostic accuracy

  4. Accuracy of self-reported height, weight and waist circumference in a Japanese sample.

    Science.gov (United States)

    Okamoto, N; Hosono, A; Shibata, K; Tsujimura, S; Oka, K; Fujita, H; Kamiya, M; Kondo, F; Wakabayashi, R; Yamada, T; Suzuki, S

    2017-12-01

    Inconsistent results have been found in prior studies investigating the accuracy of self-reported waist circumference, and no study has investigated the validity of self-reported waist circumference among Japanese individuals. This study used the diagnostic standard of metabolic syndrome to assess the accuracy of individuals' self-reported height, weight and waist circumference in a Japanese sample. Study participants included 7,443 Japanese men and women aged 35-79 years. They participated in a cohort study's baseline survey between 2007 and 2011. Participants' height, weight and waist circumference were measured, and their body mass index was calculated. Self-reported values were collected through a questionnaire before the examination. Strong correlations between measured and self-reported values for height, weight and body mass index were detected. The correlation was lowest for waist circumference (men, 0.87; women, 0.73). Men significantly overestimated their waist circumference (mean difference, 0.8 cm), whereas women significantly underestimated theirs (mean difference, 5.1 cm). The sensitivity of self-reported waist circumference using the cut-off value of metabolic syndrome was 0.83 for men and 0.57 for women. Due to systematic and random errors, the accuracy of self-reported waist circumference was low. Therefore, waist circumference should be measured without relying on self-reported values, particularly in the case of women.

  5. Fluid sample collection and distribution system. [qualitative analysis of aqueous samples from several points

    Science.gov (United States)

    Brooks, R. L. (Inventor)

    1979-01-01

    A multipoint fluid sample collection and distribution system is provided wherein the sample inputs are made through one or more of a number of sampling valves to a progressive cavity pump which is not susceptible to damage by large unfiltered particles. The pump output is through a filter unit that can provide a filtered multipoint sample. An unfiltered multipoint sample is also provided. An effluent sample can be taken and applied to a second progressive cavity pump for pumping to a filter unit that can provide one or more filtered effluent samples. The second pump can also provide an unfiltered effluent sample. Means are provided to periodically back flush each filter unit without shutting off the whole system.

  6. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    Science.gov (United States)

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three types of typical normal distribution transformation methods termed the normal score, Johnson, and Box-Cox transformations were applied to compare the effects of spatial interpolation with normal distribution transformation data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. Three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging has a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy for determination of remediation boundaries.
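
    The normalisation step can be sketched as follows: transform the skewed concentrations (Box-Cox shown here) and then check skewness, kurtosis and a Kolmogorov-Smirnov statistic before any kriging is attempted. The data are simulated, not the site data from the study.

```python
# Sketch of the pre-processing step: Box-Cox transform of skewed concentration
# data, followed by normality checks. Concentrations are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
conc = rng.lognormal(mean=0.5, sigma=1.2, size=300)     # skewed "hot spot" data

transformed, lam = stats.boxcox(conc)

for label, data in [("raw", conc), ("Box-Cox", transformed)]:
    z = (data - data.mean()) / data.std(ddof=1)          # standardize before the KS test
    ks = stats.kstest(z, "norm")
    print(f"{label:8s} skew={stats.skew(data):6.2f} "
          f"kurtosis={stats.kurtosis(data):6.2f} KS p={ks.pvalue:.3f}")
print("Box-Cox lambda:", lam)
```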

  7. Sampling theorem for geometric moment determination and its application to a laser beam position detector.

    Science.gov (United States)

    Loce, R P; Jodoin, R E

    1990-09-10

    Using the tools of Fourier analysis, a sampling requirement is derived that assures that sufficient information is contained within the samples of a distribution to calculate accurately geometric moments of that distribution. The derivation follows the standard textbook derivation of the Whittaker-Shannon sampling theorem, which is used for reconstruction, but further insight leads to a coarser minimum sampling interval for moment determination. The need for fewer samples to determine moments agrees with intuition since less information should be required to determine a characteristic of a distribution compared with that required to construct the distribution. A formula for calculation of the moments from these samples is also derived. A numerical analysis is performed to quantify the accuracy of the calculated first moment for practical nonideal sampling conditions. The theory is applied to a high speed laser beam position detector, which uses the normalized first moment to measure raster line positional accuracy in a laser printer. The effects of the laser irradiance profile, sampling aperture, number of samples acquired, quantization, and noise are taken into account.
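
    The quantity at the heart of the detector, the normalized first moment, reduces to a few lines once the profile has been sampled; the Gaussian irradiance profile and the sampling interval below are hypothetical.

```python
# Sketch: normalized first moment (centroid) of a sampled irradiance profile,
# as used for beam-position detection. Profile and sampling interval are invented.
import numpy as np

dx = 0.05                                          # sampling interval (arbitrary units)
x = np.arange(0.0, 10.0, dx)                       # sample positions
profile = np.exp(-0.5 * ((x - 6.37) / 0.8) ** 2)   # sampled irradiance

zeroth_moment = np.sum(profile) * dx
first_moment = np.sum(x * profile) * dx
centroid = first_moment / zeroth_moment            # normalized first moment

print(f"estimated beam position: {centroid:.3f}")  # close to the true 6.37
```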

  8. Investigation of elemental distribution in lung samples by X-ray fluorescence microtomography

    International Nuclear Information System (INIS)

    Pereira, Gabriela R.; Rocha, Henrique S.; Lopes, Ricardo T.

    2007-01-01

    X-Ray Fluorescence Microtomography (XRFCT) is a suitable technique to find elemental distributions in heterogeneous samples. While x-ray transmission microtomography provides information about the linear attenuation coefficient distribution, XRFCT allows one to map the most important elements in the sample. X-ray fluorescence tomography is based on the use of the X-ray fluorescence emitted from the elements contained in a sample so as to give additional information to characterize the object under study. In this work a rat lung and two human lung tissue samples have been investigated in order to verify the efficiency of the system in determining the internal distribution of detected elements in these kinds of samples and to compare the elemental distribution in the lung tissue of an old human and a fetus. The experiments were performed at the X-Ray Fluorescence beamline (XRF) of the Brazilian Synchrotron Light Source (LNLS), Campinas, Brazil. A white beam was used for the excitation of the elements and the fluorescence photons were detected by an HPGe detector. All the tomographies have been reconstructed using a filtered-back projection algorithm. It was possible to visualize the distribution of high atomic number elements in both artificial and tissue samples. The quantities of Zn, Cu and Fe in the human lung tissue samples were compared, verifying that these elements have a higher concentration in the fetus tissue sample than in the adult tissue sample. (author)

  9. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
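
    The simulation strategy can be mimicked on synthetic data: draw repeated stratified random samples from a large population of screening records and compare the sampled category distribution with the population one. The stratum sizes, category proportions and sample size below are invented, not the NCSP figures.

```python
# Sketch: repeated stratified random sampling (proportional allocation) from a
# synthetic screening population, checking how well the sampled distribution of a
# 4-level density category matches the population distribution.
import numpy as np

rng = np.random.default_rng(3)
strata_sizes = {"metropolitan": 700_000, "urban": 450_000, "rural": 190_000}
population = {s: rng.choice(4, size=n, p=[0.07, 0.27, 0.40, 0.26])
              for s, n in strata_sizes.items()}              # density categories 0-3

total_n, n_reps, sample_size = sum(strata_sizes.values()), 1000, 4000
pop_dist = np.bincount(np.concatenate(list(population.values())), minlength=4) / total_n

errors = []
for _ in range(n_reps):
    sample = np.concatenate([rng.choice(population[s],
                                        size=round(sample_size * n / total_n))
                             for s, n in strata_sizes.items()])  # proportional allocation
    sample_dist = np.bincount(sample, minlength=4) / len(sample)
    errors.append(np.max(np.abs(sample_dist - pop_dist)))

print("max absolute deviation, 95th percentile:", np.percentile(errors, 95))
```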

  10. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  11. Fault Diagnosis in Condition of Sample Type Incompleteness Using Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Hui Yi

    2015-01-01

    Full Text Available Faulty samples are much harder to acquire than normal samples, especially in complicated systems. This leads to incompleteness for training sample types and furthermore a decrease of diagnostic accuracy. In this paper, the relationship between sample-type incompleteness and the classifier-based diagnostic accuracy is discussed first. Then, a support vector data description-based approach, which has taken the effects of sample-type incompleteness into consideration, is proposed to refine the construction of fault regions and increase the diagnostic accuracy for the condition of incomplete sample types. The effectiveness of the proposed method was validated on both a Gaussian distributed dataset and a practical dataset. Satisfactory results have been obtained.
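
    The construction of a fault region from normal data alone can be sketched with a one-class classifier. Scikit-learn has no SVDD class, so the closely related one-class SVM with an RBF kernel is used below as a stand-in; the data and hyperparameters are invented.

```python
# Sketch: learn a closed boundary around plentiful normal samples and flag points
# outside it as faults (one-class SVM as a stand-in for SVDD).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_train = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))   # normal data only
test = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(10, 2)),          # normal
                  rng.normal([3.0, 3.0], 0.5, size=(10, 2))])         # faulty (unseen type)

model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(normal_train)
predictions = model.predict(test)        # +1 = inside boundary (normal), -1 = fault

print(predictions)
```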

  12. Using Group Projects to Assess the Learning of Sampling Distributions

    Science.gov (United States)

    Neidigh, Robert O.; Dunkelberger, Jake

    2012-01-01

    In an introductory business statistics course, student groups used sample data to compare a set of sample means to the theoretical sampling distribution. Each group was given a production measurement with a population mean and standard deviation. The groups were also provided an excel spreadsheet with 40 sample measurements per week for 52 weeks…

  13. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

    The Monte Carlo method is commonly used to observe the overall distribution and to determine lower or upper bound values in a statistical approach when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of interest. A new method entitled 'Two Step Tail Area Sampling' is developed, which assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure: first, sampling is done at points separated by large intervals; second, sampling is done at points separated by small intervals around check points determined in the first step. Comparison with the Monte Carlo method shows that the results obtained from the new method converge to the analytic value faster than the Monte Carlo method when the number of calculations is the same for both methods. The new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of a pressurized light water nuclear reactor.

  14. The redshift distribution of cosmological samples: a forward modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina, E-mail: joerg.herbel@phys.ethz.ch, E-mail: tomasz.kacprzak@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch, E-mail: claudio.bruderer@phys.ethz.ch, E-mail: andrina.nicola@phys.ethz.ch [Institute for Astronomy, Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich (Switzerland)

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  15. The redshift distribution of cosmological samples: a forward modeling approach

    Science.gov (United States)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \\textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  16. The redshift distribution of cosmological samples: a forward modeling approach

    International Nuclear Information System (INIS)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-01-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  17. Spatial distribution sampling and Monte Carlo simulation of radioactive isotopes

    CERN Document Server

    Krainer, Alexander Michael

    2015-01-01

    This work focuses on the implementation of a program for random sampling of uniformly spatially distributed isotopes for Monte Carlo particle simulations, specifically FLUKA. With FLUKA it is possible to calculate the radionuclide production in high-energy fields. The decay of these nuclides, and therefore the resulting radiation field, can however only be simulated in the same geometry. This work provides the tool to simulate the decay of the produced nuclides in other geometries, so that the radiation field from an irradiated object can be simulated in arbitrary environments. The sampling of isotope mixtures was tested by simulating a 50/50 mixture of Cs-137 and Co-60. These isotopes are both well known and therefore provide a first reliable benchmark in that respect. The sampling of uniformly distributed coordinates was tested using the histogram test for various spatial distributions. The advantages and disadvantages of the program compared to standard methods are demonstrated in the real life ca...

  18. Distribution of analytes over TXRF reflectors

    International Nuclear Information System (INIS)

    Bernasconi, G.; Tajani, A.

    2000-01-01

    One of the most frequently used methods for trace element analysis in TXRF involves the evaporation of small amounts of aqueous solutions over flat reflectors. This method has the advantage of in-situ pre-concentration of the analytes, which together with the low background due to the total reflection in the substrate leads to excellent detection limits and a high signal-to-noise ratio. Spiking the liquid sample with an internal standard also provides a simple way to achieve multielemental quantitative analysis. However, the elements are not homogeneously distributed over the reflector after the liquid phase has been evaporated. This distribution may be different for the unknown elements and the internal standards and may influence the accuracy of the quantitative results. In this presentation we used μ-XRF techniques to map this distribution. Small (20 μl) drops of a binary solution were evaporated over silicon reflectors and then mapped using a focused X-ray beam with about 100 μm resolution. A typical ring structure showing some differences in the distribution of both elements has been observed. One of the reflectors was also measured in a TXRF setup, rotating it to different angles with respect to the X-ray beam (with constant incidence and take-off angles), and variations of the intensity ratio between the two elements were measured. This work shows the influence of the sample distribution and proposes methods to evaluate it. Assessing the limitations on the accuracy of the results due to the sample distribution would require more measurements; however, owing to the small size of typical TXRF samples and the tight geometry of TXRF setups, the influence of the sample distribution is not large. (author)

  19. Accuracy of micro four-point probe measurements on inhomogeneous samples: A probe spacing dependence study

    DEFF Research Database (Denmark)

    Wang, Fei; Petersen, Dirch Hjorth; Østerberg, Frederik Westergaard

    2009-01-01

    In this paper, we discuss a probe spacing dependence study in order to estimate the accuracy of micro four-point probe measurements on inhomogeneous samples. Based on sensitivity calculations, both sheet resistance and Hall effect measurements are studied for samples (e.g. laser annealed samples...... the probe spacing is smaller than 1/40 of the variation wavelength, micro four-point probes can provide an accurate record of local properties with less than 1% measurement error. All the calculations agree well with previous experimental results.......) with periodic variations of sheet resistance, sheet carrier density, and carrier mobility. With a variation wavelength of λ, probe spacings from 0.001λ to 100λ have been applied to characterize the local variations. The calculations show that the measurement error is highly dependent on the probe spacing. When...

  20. Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution

    Science.gov (United States)

    Samohyl, Robert Wayne

    2017-10-01

    This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States Standards ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, by suggesting the use of the hypergeometric distribution to calculate the parameters of sampling plans avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot quality percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk, and likewise, the same question arises with consumer risk which is necessarily associated with type II error. The resolution of these questions is new to the literature. The
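
    The computations the paper argues for are directly available in standard libraries. The sketch below evaluates the acceptance probability of a single attribute plan from the hypergeometric distribution and contrasts it with the binomial approximation; the plan parameters, AQL and LTPD are hypothetical.

```python
# Sketch: acceptance probability of a single attribute sampling plan
# (lot size N, sample size n, acceptance number c) from the hypergeometric
# distribution, compared with the binomial approximation.
from scipy import stats

N, n, c = 500, 50, 1                 # lot size, sample size, acceptance number

def p_accept_hypergeom(p_defective):
    D = round(N * p_defective)       # defectives in the lot
    return stats.hypergeom.cdf(c, N, D, n)

def p_accept_binomial(p_defective):
    return stats.binom.cdf(c, n, p_defective)

aql, ltpd = 0.01, 0.08               # hypothetical AQL and LTPD
print("producer risk (hypergeom):", 1 - p_accept_hypergeom(aql))
print("producer risk (binomial): ", 1 - p_accept_binomial(aql))
print("consumer risk (hypergeom):", p_accept_hypergeom(ltpd))
print("consumer risk (binomial): ", p_accept_binomial(ltpd))
```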

  1. Remifentanil maintains lower initial delayed nonmatching-to-sample accuracy compared to food pellets in male rhesus monkeys.

    Science.gov (United States)

    Hutsell, Blake A; Banks, Matthew L

    2017-12-01

    Emerging human laboratory and preclinical drug self-administration data suggest that a history of contingent abused drug exposure impairs performance in operant discrimination procedures, such as delayed nonmatching-to-sample (DNMTS), that are hypothesized to assess components of executive function. However, these preclinical discrimination studies have exclusively used food as the reinforcer and the effects of drugs as reinforcers in these operant procedures are unknown. The present study determined effects of contingent intravenous remifentanil injections on DNMTS performance hypothesized to assess 1 aspect of executive function, working memory. Daily behavioral sessions consisted of 2 components with sequential intravenous remifentanil (0, 0.01-1.0 μg/kg/injection) or food (0, 1-10 pellets) availability in nonopioid dependent male rhesus monkeys (n = 3). Remifentanil functioned as a reinforcer in the DNMTS procedure. Similar delay-dependent DNMTS accuracy was observed under both remifentanil- and food-maintained components, such that higher accuracies were maintained at shorter (0.1-1.0 s) delays and lower accuracies approaching chance performance were maintained at longer (10-32 s) delays. Remifentanil maintained significantly lower initial DNMTS accuracy compared to food. Reinforcer magnitude was not an important determinant of DNMTS accuracy for either remifentanil or food. These results extend the range of experimental procedures under which drugs function as reinforcers. Furthermore, the selective remifentanil-induced decrease in initial DNMTS accuracy is consistent with a selective impairment of attentional, but not memorial, processes.

  2. Sampling and assessment accuracy in mate choice: a random-walk model of information processing in mating decision.

    Science.gov (United States)

    Castellano, Sergio; Cermelli, Paolo

    2011-04-07

    Mate choice depends on mating preferences and on the manner in which mate-quality information is acquired and used to make decisions. We present a model that describes how these two components of mating decision interact with each other during a comparative evaluation of prospective mates. The model, with its well-explored precedents in psychology and neurophysiology, assumes that decisions are made by the integration over time of noisy information until a stopping-rule criterion is reached. Due to this informational approach, the model builds a coherent theoretical framework for developing an integrated view of functions and mechanisms of mating decisions. From a functional point of view, the model allows us to investigate speed-accuracy tradeoffs in mating decision at both population and individual levels. It shows that, under strong time constraints, decision makers are expected to make fast and frugal decisions and to optimally trade off population-sampling accuracy (i.e. the number of sampled males) against individual-assessment accuracy (i.e. the time spent for evaluating each mate). From the proximate-mechanism point of view, the model makes testable predictions on the interactions of mating preferences and choosiness in different contexts and it might be of compelling empirical utility for a context-independent description of mating preference strength.
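
    The core mechanism can be illustrated with a toy drift-to-bound simulation (not the authors' model): noisy quality information is accumulated until it reaches a decision criterion, and raising the criterion trades decision time for assessment accuracy. All parameter values are invented.

```python
# Toy simulation of sequential information integration: a random walk to a bound
# decides whether a prospective mate is accepted; higher bounds give higher
# accuracy but longer assessment times.
import numpy as np

def assess(true_quality, bound, drift_scale=0.1, noise=1.0, rng=None):
    """Random walk to +/- bound; returns (accepted?, number of time steps)."""
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0
    while abs(evidence) < bound:
        evidence += drift_scale * true_quality + rng.normal(0.0, noise)
        t += 1
    return evidence > 0, t

rng = np.random.default_rng(5)
for bound in (2.0, 5.0, 10.0):
    results = [assess(+1.0, bound, rng=rng) for _ in range(2000)]  # good male (quality +1)
    accuracy = np.mean([accepted for accepted, _ in results])
    mean_time = np.mean([t for _, t in results])
    print(f"bound={bound:4.1f}  P(accept good male)={accuracy:.2f}  mean steps={mean_time:.1f}")
```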

  3. Comparing distribution models for small samples of overdispersed counts of freshwater fish

    Science.gov (United States)

    Vaudor, Lise; Lamouroux, Nicolas; Olivier, Jean-Michel

    2011-05-01

    The study of species abundance often relies on repeated abundance counts whose number is limited by logistic or financial constraints. The distribution of abundance counts is generally right-skewed (i.e. with many zeros and few high values) and needs to be modelled for statistical inference. We used an extensive dataset involving about 100,000 fish individuals of 12 freshwater fish species collected in electrofishing points (7 m²) during 350 field surveys made in 25 stream sites, in order to compare the performance and the generality of four distribution models of counts (Poisson, negative binomial and their zero-inflated counterparts). The negative binomial distribution was the best model (Bayesian Information Criterion) for 58% of the samples (species-survey combinations) and was suitable for a variety of life histories, habitat, and sample characteristics. The performance of the models was closely related to samples' statistics such as total abundance and variance. Finally, we illustrated the consequences of a distribution assumption by calculating confidence intervals around the mean abundance, either based on the most suitable distribution assumption or on an asymptotical, distribution-free (Student's) method. Student's method generally corresponded to narrower confidence intervals, especially when there were few (≤3) non-null counts in the samples.
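
    The model comparison itself is routine with standard libraries; the sketch below fits intercept-only Poisson and negative binomial models to simulated overdispersed counts (not the electrofishing data) and compares them by BIC.

```python
# Sketch: compare Poisson and negative binomial fits to overdispersed count data
# by BIC. The counts are simulated, with many zeros and a few high values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
counts = rng.negative_binomial(n=0.5, p=0.1, size=60)    # overdispersed, many zeros
X = np.ones((len(counts), 1))                            # intercept-only model

poisson_fit = sm.Poisson(counts, X).fit(disp=0)
negbin_fit = sm.NegativeBinomial(counts, X).fit(disp=0)

print("Poisson BIC:          ", poisson_fit.bic)
print("Negative binomial BIC:", negbin_fit.bic)          # usually much lower here
```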

  4. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  5. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
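
    A rough illustration of the recovery problem (not the paper's algorithm) is given below for a one-dimensional Legendre basis under its natural uniform sampling, with scikit-learn's Lasso standing in for the ℓ1-minimization step; the sparse coefficient vector, sample size and regularization strength are all invented.

```python
# Rough sketch: recover a sparse Legendre polynomial chaos expansion from fewer
# random samples than basis functions, using Lasso as a stand-in for
# l1-minimization. All problem sizes and coefficients are invented.
import numpy as np
from numpy.polynomial.legendre import legval
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
order, n_samples = 20, 15                      # 21 unknown coefficients, 15 samples
true_coefs = np.zeros(order + 1)
true_coefs[[1, 3, 7]] = [1.0, 0.5, 0.25]       # sparse "true" PC coefficients

xi = rng.uniform(-1.0, 1.0, n_samples)         # natural sampling for Legendre
# Measurement matrix of (orthonormal) Legendre polynomials evaluated at the samples.
Psi = np.column_stack([legval(xi, np.eye(order + 1)[k]) * np.sqrt(2 * k + 1)
                       for k in range(order + 1)])
y = Psi @ true_coefs + 0.01 * rng.standard_normal(n_samples)

fit = Lasso(alpha=0.01, fit_intercept=False, max_iter=100_000).fit(Psi, y)
print("indices with recovered |coefficient| > 0.05:",
      np.flatnonzero(np.abs(fit.coef_) > 0.05))
```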

  6. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields in the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  7. Testing the accuracy of a Bayesian central-dose model for single-grain OSL, using known-age samples

    DEFF Research Database (Denmark)

    Guerin, Guillaume; Combès, Benoit; Lahaye, Christelle

    2015-01-01

    on multi-grain OSL age estimates, these samples are presumed to have been both well-bleached at burial, and unaffected by mixing after deposition. Two ways of estimating single-grain ages are then compared: the standard approach on the one hand, consisting of applying the Central Age Model to De values...... for well-bleached samples; (ii) dose recovery experiments do not seem to be a very reliable tool to estimate the accuracy of a SAR measurement protocol for age determination....

  8. Empirical Sampling Distributions of Equating Coefficients for Graded and Nominal Response Instruments.

    Science.gov (United States)

    Baker, Frank B.

    1997-01-01

    Examined the sampling distributions of equating coefficients produced by the characteristic curve method for tests using graded and nominal response scoring using simulated data. For both models and across all three equating situations, the sampling distributions were generally bell-shaped and peaked, and occasionally had a small degree of…

  9. Comparing the Accuracy of Copula-Based Multivariate Density Forecasts in Selected Regions of Support

    NARCIS (Netherlands)

    C.G.H. Diks (Cees); V. Panchenko (Valentyn); O. Sokolinskiy (Oleg); D.J.C. van Dijk (Dick)

    2013-01-01

    textabstractThis paper develops a testing framework for comparing the predictive accuracy of copula-based multivariate density forecasts, focusing on a specific part of the joint distribution. The test is framed in the context of the Kullback-Leibler Information Criterion, but using (out-of-sample)

  10. Comparing the accuracy of copula-based multivariate density forecasts in selected regions of support

    NARCIS (Netherlands)

    Diks, C.; Panchenko, V.; Sokolinskiy, O.; van Dijk, D.

    2013-01-01

    This paper develops a testing framework for comparing the predictive accuracy of copula-based multivariate density forecasts, focusing on a specific part of the joint distribution. The test is framed in the context of the Kullback-Leibler Information Criterion, but using (out-of-sample) conditional

  11. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.

    2003-01-01

    Monte Carlo analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed
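
    A hedged illustration of the accuracy/efficiency point for low probabilities (not taken from the report): crude Monte Carlo wastes almost all samples on the non-failure region, whereas importance sampling with a proposal shifted toward the rare event recovers the small probability with far fewer samples. The threshold and sample size are arbitrary.

```python
# Estimate the small exceedance probability P(X > 4), X ~ N(0,1), by crude
# Monte Carlo and by importance sampling with a proposal centered on the rare region.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, threshold = 100_000, 4.0
exact = norm.sf(threshold)                      # ~3.2e-5

# Crude Monte Carlo: almost no samples reach the rare region
x = rng.standard_normal(n)
crude = np.mean(x > threshold)

# Importance sampling: draw from N(threshold, 1), reweight by phi(y) / phi(y - threshold)
y = rng.normal(loc=threshold, scale=1.0, size=n)
w = norm.pdf(y) / norm.pdf(y, loc=threshold)
imp = np.mean((y > threshold) * w)

print(f"exact {exact:.3e}  crude {crude:.3e}  importance {imp:.3e}")
```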

  12. [Effects of sampling plot number on tree species distribution prediction under climate change].

    Science.gov (United States)

    Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu

    2013-05-01

    Based on the neutral landscapes under different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at landscape scale under climate change. The tree species distribution was predicted by the coupled modeling approach which linked an ecosystem process model with a forest landscape model, and three contingent scenarios and one reference scenario of sampling plot numbers were assumed. The differences between the three scenarios and the reference scenario under different degrees of landscape fragmentation were tested. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the tree species life history attributes. For the generalist species, the prediction of their distribution at landscape scale needed more plots. Except for the extreme specialist, landscape fragmentation degree also affected the effects of sampling plot number on the prediction. With the increase of simulation period, the effects of sampling plot number on the prediction of tree species distribution at landscape scale could be changed. For generalist species, more plots are needed for the long-term simulation.

  13. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    International Nuclear Information System (INIS)

    Kasviki, K.; Stamatelatos, I.E.; Yannakopoulou, E.; Papadopoulou, P.; Kalef-Ezra, J.

    2007-01-01

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen and thus total protein in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of small animal even in vivo with an equivalent radiation dose of about 40 mSv

  14. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kasviki, K. [Institute of Nuclear Technology and Radiation Protection, NCSR ' Demokritos' , Aghia Paraskevi, Attikis 15310 (Greece); Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece); Stamatelatos, I.E. [Institute of Nuclear Technology and Radiation Protection, NCSR ' Demokritos' , Aghia Paraskevi, Attikis 15310 (Greece)], E-mail: ion@ipta.demokritos.gr; Yannakopoulou, E. [Institute of Physical Chemistry, NCSR ' Demokritos' , Aghia Paraskevi, Attikis 15310 (Greece); Papadopoulou, P. [Institute of Technology of Agricultural Products, NAGREF, Lycovrissi, Attikis 14123 (Greece); Kalef-Ezra, J. [Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece)

    2007-10-15

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen and thus total protein in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of small animal even in vivo with an equivalent radiation dose of about 40 mSv.

  15. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.

    Directory of Open Access Journals (Sweden)

    Sophie Marchal

    Full Text Available Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scent presented in the sample during the task is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Also, our data should convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.

  16. A random sampling procedure for anisotropic distributions

    International Nuclear Information System (INIS)

    Nagrajan, P.S.; Sethulakshmi, P.; Raghavendran, C.P.; Bhatia, D.P.

    1975-01-01

    A procedure is described for sampling the scattering angle of neutrons as per specified angular distribution data. The cosine of the scattering angle is written as a double Legendre expansion in the incident neutron energy and a random number. The coefficients of the expansion are given for C, N, O, Si, Ca, Fe and Pb and these elements are of interest in dosimetry and shielding. (author)
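
    A sketch of the sampling recipe the abstract outlines, with hypothetical coefficients: the scattering cosine is evaluated as a double Legendre series in the rescaled incident energy and a uniform random number. The coefficient matrix, energy range, and mapping onto [-1, 1] below are placeholders, not the published data for C, N, O, Si, Ca, Fe or Pb.

```python
# Evaluate mu = sum_ij c[i, j] * P_i(e) * P_j(r), with e the rescaled incident
# energy and r a rescaled uniform random number, both mapped onto [-1, 1].
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)
c = np.array([[0.10, 0.30, 0.05],      # c[i, j] multiplies P_i(e) * P_j(r);
              [0.40, 0.10, 0.02],      # placeholder values, not the tabulated
              [0.05, 0.02, 0.01]])     # coefficients for any real nuclide

def sample_mu(energy_mev, e_max=14.0):
    e = 2.0 * energy_mev / e_max - 1.0          # map energy onto [-1, 1]
    r = 2.0 * rng.random() - 1.0                # map random number onto [-1, 1]
    return float(np.clip(legendre.legval2d(e, r, c), -1.0, 1.0))

print([round(sample_mu(2.0), 3) for _ in range(5)])
```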

  17. The effects of sampling on the efficiency and accuracy of k-mer indexes: Theoretical and empirical comparisons using the human genome.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2017-01-01

    One of the most common ways to search a sequence database for sequences that are similar to a query sequence is to use a k-mer index such as BLAST. A big problem with k-mer indexes is the space required to store the lists of all occurrences of all k-mers in the database. One method for reducing the space needed, and also query time, is sampling where only some k-mer occurrences are stored. Most previous work uses hard sampling, in which enough k-mer occurrences are retained so that all similar sequences are guaranteed to be found. In contrast, we study soft sampling, which further reduces the number of stored k-mer occurrences at a cost of decreasing query accuracy. We focus on finding highly similar local alignments (HSLA) over nucleotide sequences, an operation that is fundamental to biological applications such as cDNA sequence mapping. For our comparison, we use the NCBI BLAST tool with the human genome and human ESTs. When identifying HSLAs, we find that soft sampling significantly reduces both index size and query time with relatively small losses in query accuracy. For the human genome and HSLAs of length at least 100 bp, soft sampling reduces index size 4-10 times more than hard sampling and processes queries 2.3-6.8 times faster, while still achieving retention rates of at least 96.6%. When we apply soft sampling to the problem of mapping ESTs against the genome, we map more than 98% of ESTs perfectly while reducing the index size by a factor of 4 and query time by 23.3%. These results demonstrate that soft sampling is a simple but effective strategy for performing efficient searches for HSLAs. We also provide a new model for sampling with BLAST that predicts empirical retention rates with reasonable accuracy by modeling two key problem factors.
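
    A toy sketch of the soft-sampling idea (neither BLAST nor the paper's retention model is reproduced): store only every s-th occurrence of each k-mer, trading some retained hits for a much smaller index. The sequence, k, and sampling step are illustrative.

```python
# Build a k-mer occurrence index with an optional sampling step: step=1 stores
# every occurrence, a larger step keeps only 1 out of every `step` occurrences.
from collections import defaultdict

def build_index(seq, k=11, step=1):
    """Map each k-mer to a list of stored positions, keeping every `step`-th occurrence."""
    index, seen = defaultdict(list), defaultdict(int)
    for pos in range(len(seq) - k + 1):
        kmer = seq[pos:pos + k]
        if seen[kmer] % step == 0:
            index[kmer].append(pos)
        seen[kmer] += 1
    return index

seq = "ACGT" * 5000                      # toy stand-in for a genomic sequence
full = build_index(seq, step=1)          # exhaustive occurrence lists
soft = build_index(seq, step=4)          # soft sampling: 1 out of every 4 hits

n_full = sum(len(v) for v in full.values())
n_soft = sum(len(v) for v in soft.values())
print(f"stored occurrences: {n_full} -> {n_soft}")
```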

  18. High-accuracy measurements of snow Bidirectional Reflectance Distribution Function at visible and NIR wavelengths – comparison with modelling results

    Directory of Open Access Journals (Sweden)

    M. Dumont

    2010-03-01

    Full Text Available High-accuracy measurements of snow Bidirectional Reflectance Distribution Function (BRDF were performed for four natural snow samples with a spectrogonio-radiometer in the 500–2600 nm wavelength range. These measurements are one of the first sets of direct snow BRDF values over a wide range of lighting and viewing geometry. They were compared to BRDF calculated with two optical models. Variations of the snow anisotropy factor with lighting geometry, wavelength and snow physical properties were investigated. Results show that at wavelengths with small penetration depth, scattering mainly occurs in the very top layers and the anisotropy factor is controlled by the phase function. In this condition, forward scattering peak or double scattering peak is observed. In contrast at shorter wavelengths, the penetration of the radiation is much deeper and the number of scattering events increases. The anisotropy factor is thus nearly constant and decreases at grazing observation angles. The whole dataset is available on demand from the corresponding author.

  19. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
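
    A brief numerical illustration of the count model discussed above, using standard library routines rather than the paper's Stirling-based approximation: negative binomial quantiles for an assumed mean count and overdispersion, compared with the Poisson limit. The parameter values are hypothetical.

```python
# Quantiles of a negative binomial count model (mean m, overdispersion k;
# k -> infinity recovers the Poisson case).  Values are illustrative only.
from scipy.stats import nbinom, poisson

m, k = 10.0, 8.0                    # hypothetical mean fiber count and dispersion
p = k / (k + m)                     # scipy's (n, p) parameterization with n = k
lo, hi = nbinom.ppf([0.025, 0.975], k, p)
print("negative binomial 95% count interval:", (lo, hi))
print("Poisson 95% count interval:          ", tuple(poisson.ppf([0.025, 0.975], m)))
```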

  20. Illustrating Sampling Distribution of a Statistic: Minitab Revisited

    Science.gov (United States)

    Johnson, H. Dean; Evans, Marc A.

    2008-01-01

    Understanding the concept of the sampling distribution of a statistic is essential for the understanding of inferential procedures. Unfortunately, this topic proves to be a stumbling block for students in introductory statistics classes. In efforts to aid students in their understanding of this concept, alternatives to a lecture-based mode of…

  1. Brief report: accuracy and response time for the recognition of facial emotions in a large sample of children with autism spectrum disorders.

    Science.gov (United States)

    Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander

    2014-09-01

    The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion expression they rely on more deliberate, more time-consuming strategies in order to accurately recognize emotion expressions when compared to typically developing children. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching), and test phase consisting of basic emotion recognition, whereby they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD, controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy when compared to typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a similar level as typically developing children.

  2. Quality control on the accuracy of the total Beta activity index in different water sample matrices

    International Nuclear Information System (INIS)

    Pujol, L.; Pablo, M. A. de; Payeras, J.

    2013-01-01

    The standard ISO/IEC 17025:2005, on general requirements for the technical competence of testing and calibration laboratories, provides that a laboratory shall have quality control procedures for monitoring the validity of the tests and calibrations undertaken. In this paper, the experience of the Isotopic Applications Laboratory (CEDEX) in controlling the accuracy of the total beta activity index in samples of drinking water, inland waters and marine waters is presented. (Author)

  3. Simulation of the Sampling Distribution of the Mean Can Mislead

    Science.gov (United States)

    Watkins, Ann E.; Bargagliotti, Anna; Franklin, Christine

    2014-01-01

    Although the use of simulation to teach the sampling distribution of the mean is meant to provide students with sound conceptual understanding, it may lead them astray. We discuss a misunderstanding that can be introduced or reinforced when students who intuitively understand that "bigger samples are better" conduct a simulation to…

  4. Connecting Research to Teaching: Using Data to Motivate the Use of Empirical Sampling Distributions

    Science.gov (United States)

    Lee, Hollylynne S.; Starling, Tina T.; Gonzalez, Marggie D.

    2014-01-01

    Research shows that students often struggle with understanding empirical sampling distributions. Using hands-on and technology models and simulations of problems generated by real data help students begin to make connections between repeated sampling, sample size, distribution, variation, and center. A task to assist teachers in implementing…

  5. Impact of sampling interval in training data acquisition on intrafractional predictive accuracy of indirect dynamic tumor-tracking radiotherapy.

    Science.gov (United States)

    Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Akimoto, Mami; Miyabe, Yuki; Yokota, Kenji; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro

    2017-08-01

    To explore the effect of sampling interval of training data acquisition on the intrafractional prediction error of surrogate signal-based dynamic tumor-tracking using a gimbal-mounted linac. Twenty pairs of respiratory motions were acquired from 20 patients (ten lung, five liver, and five pancreatic cancer patients) who underwent dynamic tumor-tracking with the Vero4DRT. First, respiratory motions were acquired as training data for an initial construction of the prediction model before the irradiation. Next, additional respiratory motions were acquired for an update of the prediction model due to the change of the respiratory pattern during the irradiation. The time elapsed prior to the second acquisition of the respiratory motion was 12.6 ± 3.1 min. A four-axis moving phantom reproduced patients' three dimensional (3D) target motions and one dimensional surrogate motions. To predict the future internal target motion from the external surrogate motion, prediction models were constructed by minimizing residual prediction errors for training data acquired at 80 and 320 ms sampling intervals for 20 s, and at 500, 1,000, and 2,000 ms sampling intervals for 60 s using orthogonal kV x-ray imaging systems. The accuracies of prediction models trained with various sampling intervals were estimated based on training data with each sampling interval during the training process. The intrafractional prediction errors for various prediction models were then calculated on intrafractional monitoring images taken for 30 s at the constant sampling interval of a 500 ms fairly to evaluate the prediction accuracy for the same motion pattern. In addition, the first respiratory motion was used for the training and the second respiratory motion was used for the evaluation of the intrafractional prediction errors for the changed respiratory motion to evaluate the robustness of the prediction models. The training error of the prediction model was 1.7 ± 0.7 mm in 3D for all sampling

  6. Calculation of absolute protein-ligand binding free energy using distributed replica sampling.

    Science.gov (United States)

    Rodinger, Tomas; Howell, P Lynne; Pomès, Régis

    2008-10-21

    Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.

  7. Current distribution between petals in PF-FSJS sample

    International Nuclear Information System (INIS)

    Zani, L.

    2003-01-01

    Six Rogowski coils have been installed on each leg of each of the 12 petals in the PF-FSJS sample (poloidal field - full size joint sample) in order to diagnose the current. It appears that the Rogowski signals seem reliable for current distribution analysis (Ampere's law is checked and reproducibility is assured), but there are some limitations for qualitative diagnostics. In the series of transparencies, results are detailed for the PU1 position, for both left and right legs, and for various unique-angle shift (Δθ) configurations, but only the results for Δθ between 0 and -5 are consistent

  8. Pigeons exhibit higher accuracy for chosen memory tests than for forced memory tests in duration matching-to-sample.

    Science.gov (United States)

    Adams, Allison; Santi, Angelo

    2011-03-01

    Following training to match 2- and 8-sec durations of feederlight to red and green comparisons with a 0-sec baseline delay, pigeons were allowed to choose to take a memory test or to escape the memory test. The effects of sample omission, increases in retention interval, and variation in trial spacing on selection of the escape option and accuracy were studied. During initial testing, escaping the test did not increase as the task became more difficult, and there was no difference in accuracy between chosen and forced memory tests. However, with extended training, accuracy for chosen tests was significantly greater than for forced tests. In addition, two pigeons exhibited higher accuracy on chosen tests than on forced tests at the short retention interval and greater escape rates at the long retention interval. These results have not been obtained in previous studies with pigeons when the choice to take the test or to escape the test is given before test stimuli are presented. It appears that task-specific methodological factors may determine whether a particular species will exhibit the two behavioral effects that were initially proposed as potentially indicative of metacognition.

  9. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    Science.gov (United States)

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.

  10. Accuracy of recommended sampling and assay methods for the determination of plasma-free and urinary fractionated metanephrines in the diagnosis of pheochromocytoma and paraganglioma: a systematic review.

    Science.gov (United States)

    Därr, Roland; Kuhn, Matthias; Bode, Christoph; Bornstein, Stefan R; Pacak, Karel; Lenders, Jacques W M; Eisenhofer, Graeme

    2017-06-01

    To determine the accuracy of biochemical tests for the diagnosis of pheochromocytoma and paraganglioma. A search of the PubMed database was conducted for English-language articles published between October 1958 and December 2016 on the biochemical diagnosis of pheochromocytoma and paraganglioma using immunoassay methods or high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection for measurement of fractionated metanephrines in 24-h urine collections or plasma-free metanephrines obtained under seated or supine blood sampling conditions. Application of the Standards for Reporting of Diagnostic Studies Accuracy Group criteria yielded 23 suitable articles. Summary receiver operating characteristic analysis revealed sensitivities/specificities of 94/93% and 91/93% for measurement of plasma-free metanephrines and urinary fractionated metanephrines using high-performance liquid chromatography or immunoassay methods, respectively. Partial areas under the curve were 0.947 vs. 0.911. Irrespective of the analytical method, sensitivity was significantly higher for supine compared with seated sampling, 95 vs. 89% (p sampling compared with 24-h urine, 95 vs. 90% (p sampling, seated sampling, and urine. Test accuracy increased linearly from 90 to 93% for 24-h urine at prevalence rates of 0.0-1.0, decreased linearly from 94 to 89% for seated sampling and was constant at 95% for supine conditions. Current tests for the biochemical diagnosis of pheochromocytoma and paraganglioma show excellent diagnostic accuracy. Supine sampling conditions and measurement of plasma-free metanephrines using high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection provides the highest accuracy at all prevalence rates.

  11. Apparent density measurement by mercury pycnometry. Improved accuracy. Simplification of handling for possible application to irradiated samples

    International Nuclear Information System (INIS)

    Marlet, Bernard

    1978-12-01

    The accuracy of the apparent density measurement on massive samples of any geometrical shape has been improved and the method simplified. A standard deviation of ±1 to 5×10⁻³ g·ml⁻¹, according to the size and surface state of the sample, was obtained by the use of a flat ground stopper on a mercury pycnometer which fills itself under vacuum. This method saves considerable time and has been adapted to work in shielded cells for the measurement of radioactive materials, especially sintered uranium dioxide leaving the pile. The different parameters are analysed and criticized.

  12. Simulations of the Sampling Distribution of the Mean Do Not Necessarily Mislead and Can Facilitate Learning

    Science.gov (United States)

    Lane, David M.

    2015-01-01

    Recently Watkins, Bargagliotti, and Franklin (2014) discovered that simulations of the sampling distribution of the mean can mislead students into concluding that the mean of the sampling distribution of the mean depends on sample size. This potential error arises from the fact that the mean of a simulated sampling distribution will tend to be…

  13. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.

  14. Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.

    Science.gov (United States)

    Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas

    2002-01-01

    Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…

  15. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
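
    The closed-form method of the paper is not reproduced here; the sketch below is a generic simulation check of power at a candidate sample size for simple logistic regression, using assumed regression coefficients and statsmodels for the Wald test.

```python
# Simulate data from an assumed simple logistic model, fit it repeatedly, and
# count Wald-test rejections of the covariate effect to estimate power at n.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, beta0, beta1, alpha, n_sim = 150, -0.5, 0.6, 0.05, 500   # assumed design values

rejections = 0
for _ in range(n_sim):
    x = rng.standard_normal(n)
    p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
    y = rng.binomial(1, p)
    fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    rejections += fit.pvalues[1] < alpha        # p-value of the slope coefficient

print(f"estimated power at n={n}: {rejections / n_sim:.2f}")
```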

  16. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    Science.gov (United States)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.

  17. High accuracy FIONA-AFM hybrid imaging

    International Nuclear Information System (INIS)

    Fronczek, D.N.; Quammen, C.; Wang, H.; Kisker, C.; Superfine, R.; Taylor, R.; Erie, D.A.; Tessmer, I.

    2011-01-01

    Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation to these techniques is the ability to distinguish different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to ≥8 nm accuracy. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of the two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10 nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes. -- Research highlights: → Integration of fluorescent signals in AFM topography with high (<10 nm) accuracy. → Investigation of limitations and quantitative analysis of fluorescence-AFM image registration using quantum dots. → Fluorescence center tracking and display as localization probability distributions in AFM topography (FIONA-AFM). → Application of FIONA-AFM to a biological sample containing damaged DNA and the DNA repair proteins UvrA and UvrB conjugated to quantum dots.

  18. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique incorporated collimated scanning, combining experimental measurements and Monte Carlo simulations for the identification of inhomogeneities in large volume samples and the correction of their effect on the interpretation of gamma-spectrometry data. Corrections were applied for the effect of neutron self-shielding, gamma-ray attenuation, geometrical factor and heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole, demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is a technique capable of performing accurate non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)

  19. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
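
    A minimal sketch of single-bin flat-distribution sampling with the inverse-time schedule, under assumptions: a one-dimensional random walk over bins with a hypothetical target weight, a bias updated at every visit, and ln f switched to an M/t decay once the ordinary Wang-Landau halving reaches that curve. The bandpass updating schemes proposed in the paper are not implemented.

```python
# Single-bin flat-distribution sampling with an inverse-time (1/t-type) schedule.
import numpy as np

rng = np.random.default_rng(4)
M = 20                                                 # number of bins
w = np.exp(-0.5 * ((np.arange(M) - 5.0) / 3.0) ** 2)   # hypothetical target weights
bias = np.zeros(M)                                     # converges to ln w + const
hist = np.zeros(M)
lnf, s, inverse_time = 1.0, 0, False

for t in range(1, 200_001):
    s_new = (s + rng.choice([-1, 1])) % M              # propose a neighboring bin
    # Metropolis step for the biased (ideally flat) distribution w(s) * exp(-bias(s))
    if rng.random() < (w[s_new] / w[s]) * np.exp(bias[s] - bias[s_new]):
        s = s_new
    bias[s] += lnf                                     # single-bin update at the visited state
    hist[s] += 1
    if not inverse_time and t > M and lnf <= M / t:
        inverse_time = True                            # the M/t curve has caught up with ln f
    if inverse_time:
        lnf = M / t                                    # inverse-time decay of the update
    elif hist.sum() > 10 * M and hist.min() > 0.8 * hist.mean():
        lnf *= 0.5                                     # ordinary Wang-Landau stage: halve ln f
        hist[:] = 0.0                                  # reset the visit histogram

print("estimated ln w:", np.round(bias - bias.max(), 2))
print("exact ln w:    ", np.round(np.log(w) - np.log(w).max(), 2))
```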

  20. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2016-08-01

    Full Text Available Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were examined and a detailed analysis report was produced. According to the results, the prediction accuracy of concurrent tasks’ execution time can be improved, in particular for some regular jobs.

  1. Forecasting Value-at-Risk under Different Distributional Assumptions

    Directory of Open Access Journals (Sweden)

    Manuela Braione

    2016-01-01

    Full Text Available Financial asset returns are known to be conditionally heteroskedastic and generally non-normally distributed, fat-tailed and often skewed. These features must be taken into account to produce accurate forecasts of Value-at-Risk (VaR. We provide a comprehensive look at the problem by considering the impact that different distributional assumptions have on the accuracy of both univariate and multivariate GARCH models in out-of-sample VaR prediction. The set of analyzed distributions comprises the normal, Student, Multivariate Exponential Power and their corresponding skewed counterparts. The accuracy of the VaR forecasts is assessed by implementing standard statistical backtesting procedures used to rank the different specifications. The results show the importance of allowing for heavy-tails and skewness in the distributional assumption with the skew-Student outperforming the others across all tests and confidence levels.
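
    An illustrative one-step-ahead VaR computation for a univariate GARCH(1,1) with standardized Student-t innovations, one of the specifications the abstract compares; the parameter values and data point below are assumed rather than estimated.

```python
# One-step-ahead 99% VaR from a GARCH(1,1) variance recursion with standardized
# Student-t innovations.  All parameter values are hypothetical.
import numpy as np
from scipy.stats import t

omega, alpha, beta, nu = 0.02, 0.08, 0.90, 6.0   # assumed GARCH-t parameters
mu = 0.0                                         # assumed conditional mean (daily %)
r_t, sigma2_t = -1.5, 1.2                        # today's return and conditional variance

sigma2_next = omega + alpha * (r_t - mu) ** 2 + beta * sigma2_t
q = t.ppf(0.01, nu) * np.sqrt((nu - 2.0) / nu)   # 1% quantile of the *standardized* t
var_99 = -(mu + np.sqrt(sigma2_next) * q)        # 99% Value-at-Risk (loss reported as positive)
print(f"next-day 99% VaR: {var_99:.2f}%")
```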

  2. Measurement facilities and accuracy limits of sampling digital interferometers. Meresi lehetoesegek es hibaanalizis digitalis mintavetelezoe interferometeren

    Energy Technology Data Exchange (ETDEWEB)

    Czitrovszky, A.; Jani, P.; Szoter, L.

    1990-12-15

    We discuss the measurement facilities of a recently developed sampling digital interferometer for machine tool testing. As opposed to conventional interferometers, the present device allows digital storage of the complete motion information at sampling rates up to 4 kHz, so that displacement, velocity, acceleration and power density spectrum measurements can be performed. An estimation is given of the truncation, round-off, jitter and frequency-aliasing sources of error in the reconstructed motion parameters. On the basis of the Shannon sampling theory, optimal measurement conditions are defined for the case when the accuracy of the reconstructed motion and vibration is equal to the resolution of a conventional interferometer. 7 refs., 3 figs., 1 tab.

  3. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…

  4. Accuracy Enhancement of Raman Spectroscopy Using Complementary Laser-Induced Breakdown Spectroscopy (LIBS) with Geologically Mixed Samples.

    Science.gov (United States)

    Choi, Soojin; Kim, Dongyoung; Yang, Junho; Yoh, Jack J

    2017-04-01

    Quantitative Raman analysis was carried out with geologically mixed samples that have various matrices. In order to compensate for the matrix effect in the Raman shift, laser-induced breakdown spectroscopy (LIBS) analysis was performed. Raman spectroscopy revealed the geological materials contained in the mixed samples. However, the analysis of a mixture containing different matrices was inaccurate due to the weak signal of the Raman shift, interference, and the strong matrix effect. On the other hand, the LIBS quantitative analysis of atomic carbon and calcium in mixed samples showed high accuracy. In the case of the calcite and gypsum mixture, the coefficient of determination of atomic carbon using LIBS was 0.99, while that obtained using Raman was less than 0.9. Therefore, the geological composition of the mixed samples is first obtained using Raman, and the LIBS-based quantitative analysis is then applied to the Raman outcome in order to construct highly accurate univariate calibration curves. The study also focuses on a method to overcome matrix effects through the two complementary spectroscopic techniques of Raman spectroscopy and LIBS.

  5. Spatial distribution of single-nucleotide polymorphisms related to fungicide resistance and implications for sampling.

    Science.gov (United States)

    Van der Heyden, H; Dutilleul, P; Brodeur, L; Carisse, O

    2014-06-01

    Spatial distribution of single-nucleotide polymorphisms (SNPs) related to fungicide resistance was studied for Botrytis cinerea populations in vineyards and for B. squamosa populations in onion fields. Heterogeneity in this distribution was characterized by performing geostatistical analyses based on semivariograms and through the fitting of discrete probability distributions. Two SNPs known to be responsible for boscalid resistance (H272R and H272Y), both located on the B subunit of the succinate dehydrogenase gene, and one SNP known to be responsible for dicarboximide resistance (I365S) were chosen for B. cinerea in grape. For B. squamosa in onion, one SNP responsible for dicarboximide resistance (I365S homologous) was chosen. One onion field was sampled in 2009 and another one was sampled in 2010 for B. squamosa, and two vineyards were sampled in 2011 for B. cinerea, for a total of four sampled sites. Cluster sampling was carried on a 10-by-10 grid, each of the 100 nodes being the center of a 10-by-10-m quadrat. In each quadrat, 10 samples were collected and analyzed by restriction fragment length polymorphism polymerase chain reaction (PCR) or allele specific PCR. Mean SNP incidence varied from 16 to 68%, with an overall mean incidence of 43%. In the geostatistical analyses, omnidirectional variograms showed spatial autocorrelation characterized by ranges of 21 to 1 m. Various levels of anisotropy were detected, however, with variograms computed in four directions (at 0°, 45°, 90°, and 135° from the within-row direction used as reference), indicating that spatial autocorrelation was prevalent or characterized by a longer range in one direction. For all eight data sets, the β-binomial distribution was found to fit the data better than the binomial distribution. This indicates local aggregation of fungicide resistance among sampling units, as supported by estimates of the parameter θ of the β-binomial distribution of 0.09 to 0.23 (overall median value = 0

  6. Group Acceptance Sampling Plan for Lifetime Data Using Generalized Pareto Distribution

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2010-02-01

    Full Text Available In this paper, a group acceptance sampling plan (GASP is introduced for the situations when lifetime of the items follows the generalized Pareto distribution. The design parameters such as minimum group size and acceptance number are determined when the consumer’s risk and the test termination time are specified. The proposed sampling plan is compared with the existing sampling plan. It is concluded that the proposed sampling plan performs better than the existing plan in terms of minimum sample size required to reach the same decision.
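
    A hedged sketch of the general design logic for such a plan (the paper's tables and exact formulation are not reproduced): given a group size r, an acceptance number c, a test termination time and assumed generalized Pareto lifetime parameters, search for the smallest number of groups g whose lot acceptance probability does not exceed the consumer's risk. All numerical inputs are illustrative.

```python
# Search the minimum number of groups g such that the lot acceptance probability
# (at most c failures among g*r items by the termination time) stays below the
# consumer's risk, with item lifetimes following an assumed generalized Pareto law.
from scipy.stats import binom, genpareto

def min_groups(c, r, t0, shape, scale, consumer_risk=0.25, g_max=200):
    p = genpareto.cdf(t0, shape, scale=scale)     # P(an item fails before t0)
    for g in range(1, g_max + 1):
        if binom.cdf(c, g * r, p) <= consumer_risk:
            return g                              # acceptance probability small enough
    return None

print(min_groups(c=2, r=5, t0=1.0, shape=0.5, scale=2.0))
```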

  7. Laser ablation: Laser parameters: Frequency, pulse length, power, and beam character play significant roles with regard to sampling complex samples for ICP/MS analysis

    International Nuclear Information System (INIS)

    Smith, M.R.; Alexander, M.L.; Hartman, J.S.; Koppenaal, D.W.

    1996-01-01

    Inductively coupled plasma mass spectrometry is used to investigate the influence of laser parameters with regard to sampling complex matrices ranging from relatively homogeneous glasses to multi-phase sludge/slurry materials including radioactive Hanford tank waste. The resulting plume composition caused by the pulsed laser is evaluated as a function of wavelength, pulse energy, pulse length, focus, and beam power profiles. The authors' studies indicate that these parameters play varying and often synergistic roles regarding quantitative results. (In a companion paper, particle transport and size distribution studies are presented.) The work described here will illustrate other laser parameters, such as focusing and consequently power density and beam power profiles, which are shown to influence precision and accuracy. Representative sampling by the LA approach is largely dependent on the sample's optical properties as well as laser parameters. Experimental results indicate that optimal laser parameters, namely short wavelength (UV), relatively low power (300 mJ), low-to-sub-ns pulse lengths, and laser beams with reasonable power distributions (i.e., Gaussian or top-hat beam profiles), provide superior precision and accuracy. Remote LA-ICP/MS analyses of radioactive sludges are used to illustrate these optimal laser ablation sampling conditions.

  8. Spatial Distribution and Sampling Plans for Grapevine Plant Canopy-Inhabiting Scaphoideus titanus (Hemiptera: Cicadellidae) Nymphs.

    Science.gov (United States)

    Rigamonti, Ivo E; Brambilla, Carla; Colleoni, Emanuele; Jermini, Mauro; Trivellone, Valeria; Baumgärtner, Johann

    2016-04-01

    The paper deals with the study of the spatial distribution and the design of sampling plans for estimating nymph densities of the grape leafhopper Scaphoideus titanus Ball in vine plant canopies. In a reference vineyard sampled for model parameterization, leaf samples were repeatedly taken according to a multistage, stratified, random sampling procedure, and data were subjected to an ANOVA. There were no significant differences in density neither among the strata within the vineyard nor between the two strata with basal and apical leaves. The significant differences between densities on trunk and productive shoots led to the adoption of two-stage (leaves and plants) and three-stage (leaves, shoots, and plants) sampling plans for trunk shoots- and productive shoots-inhabiting individuals, respectively. The mean crowding to mean relationship used to analyze the nymphs spatial distribution revealed aggregated distributions. In both the enumerative and the sequential enumerative sampling plans, the number of leaves of trunk shoots, and of leaves and shoots of productive shoots, was kept constant while the number of plants varied. In additional vineyards data were collected and used to test the applicability of the distribution model and the sampling plans. The tests confirmed the applicability 1) of the mean crowding to mean regression model on the plant and leaf stages for representing trunk shoot-inhabiting distributions, and on the plant, shoot, and leaf stages for productive shoot-inhabiting nymphs, 2) of the enumerative sampling plan, and 3) of the sequential enumerative sampling plan. In general, sequential enumerative sampling was more cost efficient than enumerative sampling.

  9. Simulated tempering distributed replica sampling: A practical guide to enhanced conformational sampling

    Energy Technology Data Exchange (ETDEWEB)

    Rauscher, Sarah; Pomes, Regis, E-mail: pomes@sickkids.ca

    2010-11-01

    Simulated tempering distributed replica sampling (STDR) is a generalized-ensemble method designed specifically for simulations of large molecular systems on shared and heterogeneous computing platforms [Rauscher, Neale and Pomes (2009) J. Chem. Theor. Comput. 5, 2640]. The STDR algorithm consists of an alternation of two steps: (1) a short molecular dynamics (MD) simulation; and (2) a stochastic temperature jump. Repeating these steps thousands of times results in a random walk in temperature, which allows the system to overcome energetic barriers, thereby enhancing conformational sampling. The aim of the present paper is to provide a practical guide to applying STDR to complex biomolecular systems. We discuss the details of our STDR implementation, which is a highly-parallel algorithm designed to maximize computational efficiency while simultaneously minimizing network communication and data storage requirements. Using a 35-residue disordered peptide in explicit water as a test system, we characterize the efficiency of the STDR algorithm with respect to both diffusion in temperature space and statistical convergence of structural properties. Importantly, we show that STDR provides a dramatic enhancement of conformational sampling compared to a canonical MD simulation.
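
    A minimal sketch of the stochastic temperature-jump step that alternates with the MD segments in simulated tempering schemes such as STDR; the temperature ladder, units, and per-rung weights below are assumed, and the distributed, adaptive weight updates of STDR are not implemented.

```python
# Metropolis acceptance for a jump between neighboring temperature rungs in
# simulated tempering, given the current potential energy and per-rung weights f.
import numpy as np

rng = np.random.default_rng(5)
kB = 0.0019872041                         # kcal/(mol K), assuming these units
temps = np.array([300.0, 320.0, 341.0, 364.0, 388.0])   # hypothetical ladder
betas = 1.0 / (kB * temps)
f = np.zeros_like(temps)                  # placeholder weights (adapted on the fly in STDR)

def attempt_jump(m, energy):
    """Propose a move from temperature index m to a random neighboring rung."""
    n = m + rng.choice([-1, 1])
    if n < 0 or n >= len(temps):
        return m                          # reject moves off the ladder
    log_acc = (betas[m] - betas[n]) * energy + (f[n] - f[m])
    return n if np.log(rng.random()) < log_acc else m

print(attempt_jump(2, energy=-1250.0))    # hypothetical potential energy in kcal/mol
```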

  10. Detecting the Water-soluble Chloride Distribution of Cement Paste in a High-precision Way.

    Science.gov (United States)

    Chang, Honglei; Mu, Song

    2017-11-21

    To improve the accuracy of the chloride distribution along the depth of cement paste under cyclic wet-dry conditions, a new method is proposed to obtain a high-precision chloride profile. Firstly, paste specimens are molded, cured, and exposed to cyclic wet-dry conditions. Then, powder samples at different specimen depths are ground when the exposure age is reached. Finally, the water-soluble chloride content is detected using a silver nitrate titration method, and chloride profiles are plotted. The key to improving the accuracy of the chloride distribution along the depth is to exclude the error in the powderization, which is the most critical step for testing the distribution of chloride. Based on the above concept, the grinding method in this protocol can be used to grind powder samples automatically layer by layer from the surface inward, and it should be noted that a very thin grinding thickness (less than 0.5 mm) with a minimum error of less than 0.04 mm can be obtained. The chloride profile obtained by this method better reflects the chloride distribution in specimens, which helps researchers to capture the distribution features that are often overlooked. Furthermore, this method can be applied to studies in the field of cement-based materials, which require high chloride distribution accuracy.

  11. An efficient method of randomly sampling the coherent angular scatter distribution

    International Nuclear Information System (INIS)

    Williamson, J.F.; Morin, R.L.

    1983-01-01

    Monte Carlo simulations of photon transport phenomena require random selection of an interaction process at each collision site along the photon track. Possible choices are usually limited to photoelectric absorption and incoherent scatter as approximated by the Klein-Nishina distribution. A technique is described for sampling the coherent angular scatter distribution, for the benefit of workers in medical physics. (U.K.)

  12. An alternative phase-space distribution to sample initial conditions for classical dynamics simulations

    International Nuclear Information System (INIS)

    Garcia-Vela, A.

    2002-01-01

    A new quantum-type phase-space distribution is proposed in order to sample initial conditions for classical trajectory simulations. The phase-space distribution is obtained as the modulus of a quantum phase-space state of the system, defined as the direct product of the coordinate and momentum representations of the quantum initial state. The distribution is tested by sampling initial conditions which reproduce the initial state of the Ar-HCl cluster prepared by ultraviolet excitation, and by simulating the photodissociation dynamics by classical trajectories. The results are compared with those of a wave packet calculation, and with a classical simulation using an initial phase-space distribution recently suggested. A better agreement is found between the classical and the quantum predictions with the present phase-space distribution, as compared with the previous one. This improvement is attributed to the fact that the phase-space distribution propagated classically in this work resembles more closely the shape of the wave packet propagated quantum mechanically

  13. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    Science.gov (United States)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms; dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization were proven to be possible at constant time complexity
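
    For context, a toy version of the sampling-based benchmark that APM is compared against (not APM itself): a Gaussian setup error is propagated through an assumed one-dimensional dose profile by random sampling, yielding per-voxel expectation and standard deviation of dose. The profile, grid, and uncertainty magnitude are illustrative.

```python
# Propagate a Gaussian setup shift through a toy 1D dose profile by random
# sampling and collect the per-voxel mean and standard deviation of dose.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(-50.0, 50.0, 201)                      # voxel positions (mm)
nominal = np.exp(-0.5 * ((x - 5.0) / 12.0) ** 2)       # assumed pencil-beam-like profile
sigma_setup = 3.0                                      # assumed setup uncertainty (mm)

shifts = rng.normal(0.0, sigma_setup, size=5000)
scenarios = np.array([np.interp(x, x + s, nominal) for s in shifts])

exp_dose = scenarios.mean(axis=0)                      # expectation value of dose
std_dose = scenarios.std(axis=0)                       # standard deviation of dose
print("max expected dose:", round(exp_dose.max(), 3), "max std:", round(std_dose.max(), 3))
```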

  14. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-08-15

    The chord length sampling method in Monte Carlo simulations is used to model spherical particles with a random sampling technique in stochastic media. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.

  15. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung; Noh, Jae Man

    2015-01-01

    The chord length sampling method in Monte Carlo simulations is used to model spherical particles with a random sampling technique in stochastic media. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media
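
    A rough sketch of the basic chord-length-sampling idea that the two records above build on, not the authors' corrected algorithm: in an infinite binary stochastic medium with spherical inclusions of radius r and packing fraction phi, the distance travelled in the matrix before hitting a sphere is sampled from an exponential whose mean is the matrix mean chord length, commonly taken as 4r(1-phi)/(3phi); the assumptions (infinite medium, no boundary correction) are exactly what the proposed method improves upon.

        import math
        import random

        def sample_matrix_chord(radius, packing_fraction, rng=random.random):
            """Sample the matrix path length to the next sphere surface in classical
            chord length sampling (infinite-medium form, no boundary correction).
            Assumes the usual mean matrix chord length 4*r*(1-phi)/(3*phi)."""
            mean_chord = 4.0 * radius * (1.0 - packing_fraction) / (3.0 * packing_fraction)
            return -mean_chord * math.log(rng())

        def sample_sphere_chord(radius, rng=random.random):
            """Sample a chord length through a sphere for a ray entering at a random
            impact parameter: chord = 2*sqrt(r^2 - b^2) with b^2 uniform on [0, r^2]."""
            b2 = radius * radius * rng()
            return 2.0 * math.sqrt(radius * radius - b2)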

  16. On peculiarities of distribution of some elements in vegetation samples

    International Nuclear Information System (INIS)

    Bakiev, S.A.; Rakhmanov, J.; Khakimov, Z.M.; Turayev, S.

    2005-01-01

    This work is devoted to the neutron-activation analysis of medicines of vegetation origin and of some herbs, vegetables, fruits and cereals used in oriental medicine, in order to reveal peculiarities of the distribution of the studied elements in them and possible relations between this distribution and parameters of oriental medicine. Sampling of 85 species and their preparation for analysis, as well as the complex of necessary methodological studies, were performed, and a method of sample analysis for 14 macro- and microelements (Na, Al, Cl, K, Sc, Mn, Fe, Co, Cu, Zn, Br, J, La, Au) was developed. The studies carried out have enabled us to obtain data on the concentrations of these elements and to reveal peculiarities of their distribution in the samples of interest. Herbs, fruits and cereals with markedly higher concentrations (with respect to the mean values) of one or another element, which are perhaps concentrators of those elements, were revealed, as well as samples with lower concentrations of elements (see table). It is indicative that in the herbs only enhanced concentrations of elements are observed, whereas in the fruits and cereals only lowered concentrations are found. These results can be of interest for geochemical ecology, dietology and therapy, as well as for activities on the correction of the elemental content of ecosystems, including soils, and living organisms. It is suggested to continue the studies with an extension of the range of object types and analysed elements. Mathematical analysis of the obtained results was performed by comparing the concentrations of a number of elements in the different objects with the classifying parameters ('cold-hot' and 'dry-wet') of these objects according to oriental medicine. At the current stage of the studies no relation between these parameters and the concentrations has been found. This does not mean that such relations do not exist at all; they may be revealed with the extension and development of these studies.

  17. Internal Stress Distribution Measurement of TIG Welded SUS304 Samples Using Neutron Diffraction Technique

    Science.gov (United States)

    Muslih, M. Refai; Sumirat, I.; Sairun; Purwanta

    2008-03-01

    The distribution of residual stress in SUS304 samples that underwent TIG welding with four different electric currents has been measured. The welding was done in the middle part of the samples, which had previously been grooved with a milling machine. Before they were welded, the samples were annealed at 650 degrees Celsius for one hour. The annealing was done to eliminate the residual stress generated by the grooving process, so that the residual stress within the samples was produced merely by the welding process. The calculation of the distribution of residual stress was carried out by measuring the strains within the Fe(220) crystal planes of SUS304. The strain, Young modulus, and Poisson ratio of Fe(220) SUS304 were measured using the DN1-M neutron diffractometer. The Young modulus and Poisson ratio of the Fe(220) SUS304 sample were measured in situ. The calculations showed that the distribution of residual stress in SUS304 in the vicinity of the welded area is influenced both by the treatments applied during sample preparation and by the electric current used during the welding process.

  18. Feasibility and accuracy evaluation of three human papillomavirus assays for FTA card-based sampling: a pilot study in cervical cancer screening

    OpenAIRE

    Wang, Shao-Ming; Hu, Shang-Ying; Chen, Wen; Chen, Feng; Zhao, Fang-Hui; He, Wei; Ma, Xin-Ming; Zhang, Yu-Qing; Wang, Jian; Sivasubramaniam, Priya; Qiao, You-Lin

    2015-01-01

    Background Liquid-state specimen carriers are inadequate for sample transportation in large-scale screening projects in low-resource settings, which necessitates the exploration of novel non-hazardous solid-state alternatives. Studies investigating the feasibility and accuracy of a solid-state human papillomavirus (HPV) sampling medium in combination with different down-stream HPV DNA assays for cervical cancer screening are needed. Methods We collected two cervical specimens from 396 women, ...

  19. Accuracy and precision in thermoluminescence dosimetry

    International Nuclear Information System (INIS)

    Marshall, T.O.

    1984-01-01

    The question of accuracy and precision in thermoluminescent dosimetry, particularly in relation to lithium fluoride phosphor, is discussed. The more important sources of error, including those due to the detectors, the reader, annealing and dosemeter design, are identified and methods of reducing their effects on accuracy and precision to a minimum are given. Finally, the accuracy and precision achievable for three quite different applications are discussed, namely, for personal dosimetry, environmental monitoring and for the measurement of photon dose distributions in phantoms. (U.K.)

  20. Psychometric Properties and Diagnostic Accuracy of the Edinburgh Postnatal Depression Scale in a Sample of Iranian Women

    Directory of Open Access Journals (Sweden)

    Gholam Reza Kheirabadi

    2012-03-01

    Full Text Available Background: The Edinburgh Postnatal Depression Scale (EPDS) has been used as a reliable screening tool for postpartum depression in many countries. This study aimed to assess the psychometric properties and diagnostic accuracy of the EPDS in a sample of Iranian women. Methods: Using stratified sampling, 262 postpartum women (2 weeks-3 months after delivery) were selected from urban and rural health centers in the city of Isfahan. They were interviewed using the EPDS and the Hamilton depression rating scale (HDRS). Data were assessed using factor analysis, diagnostic analysis of the receiver operating characteristic (ROC) curve, Cronbach's alpha and the Pearson correlation coefficient. Results: The age of the participants ranged from 18 to 45 years (26.6±5.1). Based on a cut-off point of >13 for the HDRS, 18.3% of the participants scored above the threshold. The overall reliability (Cronbach's alpha) of the EPDS was 0.79. There was a significant correlation (r²=0.60, P<0.01) between the EPDS and the HDRS. Factor analysis showed that anhedonia and depression were the two explanatory factors. At a cut-off point of 12, the sensitivity of the questionnaire was 78% (95% CI: 73%-83%) and its specificity was 75% (95% CI: 72%-78%). Conclusion: The Persian version of the EPDS showed appropriate psychometric properties and diagnostic accuracy. It can be used by health system professionals for the detection, assessment and treatment of mothers with postpartum depression.

  1. Aspects of precision and accuracy in neutron activation analysis

    International Nuclear Information System (INIS)

    Heydorn, K.

    1980-03-01

    Analytical results without systematic errors and with accurately known random errors are normally distributed around their true values. Such results may be produced by means of neutron activation analysis both with and without radiochemical separation. When all sources of random variation are known a priori, their effect may be combined with the Poisson statistics characteristic of the counting process, and the standard deviation of a single analytical result may be estimated. The various steps of a complete neutron activation analytical procedure are therefore studied in detail with respect to determining their contribution to the overall variability of the final result. Verification of the estimated standard deviation is carried out by demonstrating the absence of significant unknown random errors through analysing, in replicate, samples covering the range of concentrations and matrices anticipated in actual use. Agreement between the estimated and the observed variability of replicate results is then tested by a simple statistic T based on the chi-square distribution. It is found that results from neutron activation analysis on biological samples can be brought into statistical control. In routine application of methods in statistical control the same statistical test may be used for quality control when some of the actual samples are analysed in duplicate. This analysis of precision serves to detect unknown or unexpected sources of variation of the analytical results, and both random and systematic errors have been discovered in practical trace element investigations in different areas of research. Particularly, at the ultratrace level of concentration where there are few or no standard reference materials for ascertaining the accuracy of results, the proposed quality control based on the analysis of precision combined with neutron activation analysis with radiochemical separation, with an a priori precision independent of the level of concentration, becomes a
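
    A hedged sketch of the kind of chi-square-based consistency check described above for duplicate results: each duplicate difference is standardised by its a priori uncertainties and the sum is compared with a chi-square distribution. The exact definition of Heydorn's T statistic is not reproduced in the abstract, so this is an illustrative analysis-of-precision style test, not necessarily the paper's formula.

        import numpy as np
        from scipy.stats import chi2

        def analysis_of_precision(a, b, sigma_a, sigma_b, alpha=0.05):
            """Test whether duplicate results (a_i, b_i) are consistent with their
            a priori standard deviations. If the error model is correct, T is
            approximately chi-square distributed with len(a) degrees of freedom."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            sigma_a, sigma_b = np.asarray(sigma_a, float), np.asarray(sigma_b, float)
            t = np.sum((a - b) ** 2 / (sigma_a ** 2 + sigma_b ** 2))
            p_value = chi2.sf(t, df=len(a))
            return t, p_value, p_value > alpha   # True -> no evidence of unknown errors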

  2. Data accuracy assessment using enterprise architecture

    Science.gov (United States)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  3. A nested sampling particle filter for nonlinear data assimilation

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-04-15

    We present an efficient nonlinear data assimilation filter that combines particle filtering with the nested sampling algorithm. Particle filters (PF) utilize a set of weighted particles as a discrete representation of probability distribution functions (PDF). These particles are propagated through the system dynamics and their weights are sequentially updated based on the likelihood of the observed data. Nested sampling (NS) is an efficient sampling algorithm that iteratively builds a discrete representation of the posterior distributions by focusing a set of particles to high-likelihood regions. This would allow the representation of the posterior PDF with a smaller number of particles and reduce the effects of the curse of dimensionality. The proposed nested sampling particle filter (NSPF) iteratively builds the posterior distribution by applying a constrained sampling from the prior distribution to obtain particles in high-likelihood regions of the search space, resulting in a reduction of the number of particles required for an efficient behaviour of particle filters. Numerical experiments with the 3-dimensional Lorenz63 and the 40-dimensional Lorenz96 models show that NSPF outperforms PF in accuracy with a relatively smaller number of particles. © 2013 Royal Meteorological Society.
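
    To make the nested-sampling ingredient referred to above concrete, here is a compact sketch under simplifying assumptions: the constrained prior sampling is done by naive rejection (redrawing from the prior until the likelihood constraint is met), which is only viable for cheap, low-dimensional problems, and the particle-filter coupling of the paper is not reproduced.

        import numpy as np

        def nested_sampling(log_likelihood, sample_prior, n_live=100, n_iter=1000, seed=0):
            """Minimal nested sampling loop: iteratively replace the worst live point
            by a prior draw with higher likelihood, shrinking the prior volume."""
            rng = np.random.default_rng(seed)
            live = [sample_prior(rng) for _ in range(n_live)]
            logl = np.array([log_likelihood(x) for x in live])
            dead_points, dead_logl = [], []
            for _ in range(n_iter):
                worst = int(np.argmin(logl))
                threshold = logl[worst]
                dead_points.append(live[worst])
                dead_logl.append(threshold)
                # naive constrained sampling: draw from the prior until L > threshold
                while True:
                    candidate = sample_prior(rng)
                    cand_logl = log_likelihood(candidate)
                    if cand_logl > threshold:
                        break
                live[worst], logl[worst] = candidate, cand_logl
            return dead_points, dead_logl   # posterior weights can be built from these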

  4. Accuracy of Endometrial Sampling in Endometrial Carcinoma: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Visser, Nicole C M; Reijnen, Casper; Massuger, Leon F A G; Nagtegaal, Iris D; Bulten, Johan; Pijnenborg, Johanna M A

    2017-10-01

    To assess the agreement between preoperative endometrial sampling and final diagnosis for tumor grade and subtype in patients with endometrial carcinoma. MEDLINE, EMBASE, ClinicalTrials.gov, and the Cochrane library were searched from inception to January 1, 2017, for studies that compared tumor grade and histologic subtype in preoperative endometrial samples and hysterectomy specimens. In eligible studies, the index test included office endometrial biopsy, hysteroscopic biopsy, or dilatation and curettage; the reference standard was hysterectomy. Outcome measures included tumor grade, histologic subtype, or both. Two independent reviewers assessed the eligibility of the studies. Risk of bias was assessed (Quality Assessment of Diagnostic Accuracy Studies). A total of 45 studies (12,459 patients) met the inclusion criteria. The pooled agreement rate on tumor grade was 0.67 (95% CI 0.60-0.75) and Cohen's κ was 0.45 (95% CI 0.34-0.55). Agreement between hysteroscopic biopsy and final diagnosis was higher (0.89, 95% CI 0.80-0.98) than for dilatation and curettage (0.70, 95% CI 0.60-0.79; P=.02); however, it was not significantly higher than for office endometrial biopsy (0.73, 95% CI 0.60-0.86; P=.08). The lowest agreement rate was found for grade 2 carcinomas (0.61, 95% CI 0.53-0.69). Downgrading was found in 25% and upgrading was found in 21% of the endometrial samples. Agreement on histologic subtypes was 0.95 (95% CI 0.94-0.97) and 0.81 (95% CI 0.69-0.92) for preoperative endometrioid and nonendometrioid carcinomas, respectively. Overall there is only moderate agreement on tumor grade between preoperative endometrial sampling and final diagnosis with the lowest agreement for grade 2 carcinomas.

  5. Data-driven importance distributions for articulated tracking

    DEFF Research Database (Denmark)

    Hauberg, Søren; Pedersen, Kim Steenstrup

    2011-01-01

    We present two data-driven importance distributions for particle filterbased articulated tracking; one based on background subtraction, another on depth information. In order to keep the algorithms efficient, we represent human poses in terms of spatial joint positions. To ensure constant bone le...... filter, where they improve both accuracy and efficiency of the tracker. In fact, they triple the effective number of samples compared to the most commonly used importance distribution at little extra computational cost....

  6. Assessment Of Accuracies Of Remote-Sensing Maps

    Science.gov (United States)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.

  7. Proper and Paradigmatic Metonymy as a Lens for Characterizing Student Conceptions of Distributions and Sampling

    Science.gov (United States)

    Noll, Jennifer; Hancock, Stacey

    2015-01-01

    This research investigates what students' use of statistical language can tell us about their conceptions of distribution and sampling in relation to informal inference. Prior research documents students' challenges in understanding ideas of distribution and sampling as tools for making informal statistical inferences. We know that these…

  8. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  9. Distribution and Origin of Amino Acids in Lunar Regolith Samples

    Science.gov (United States)

    Elsila, J. E.; Callahan, M. P.; Glavin, D. P.; Dworkin, J. P.; McLain, H. L.; Noble, S. K.; Gibson, E. K., Jr.

    2015-01-01

    The existence of organic compounds on the lunar surface has been a question of interest from the Apollo era to the present. Investigations of amino acids immediately after collection of lunar samples yielded inconclusive identifications, in part due to analytical limitations including insensitivity to certain compounds, an inability to separate enantiomers, and lack of compound-specific isotopic measurements. It was not possible to determine if the detected amino acids were indigenous to the lunar samples or the result of terrestrial contamination. Recently, we presented initial data from the analysis of amino acid abundances in 12 lunar regolith samples and discussed those results in the context of four potential amino acid sources [5]. Here, we expand on our previous work, focusing on amino acid abundances and distributions in seven regolith samples and presenting the first compound-specific carbon isotopic ratios measured for amino acids in a lunar sample.

  10. The Role of the Sampling Distribution in Understanding Statistical Inference

    Science.gov (United States)

    Lipson, Kay

    2003-01-01

    Many statistics educators believe that few students develop the level of conceptual understanding essential for them to apply correctly the statistical techniques at their disposal and to interpret their outcomes appropriately. It is also commonly believed that the sampling distribution plays an important role in developing this understanding.…

  11. Inference for Local Distributions at High Sampling Frequencies: A Bootstrap Approach

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Varneskov, Rasmus T.

    of "large" jumps. Our locally dependent wild bootstrap (LDWB) accommodates issues related to the stochastic scale and jumps, as well as accounting for a special block-wise dependence structure induced by sampling errors. We show that the LDWB replicates first and second-order limit theory from the usual...... empirical process and the stochastic scale estimate, respectively, as well as an asymptotic bias. Moreover, we design the LDWB to be sufficiently general to establish asymptotic equivalence between it and a nonparametric local block bootstrap, also introduced here, up to second-order distribution theory.... Finally, we introduce LDWB-aided Kolmogorov-Smirnov tests for local Gaussianity as well as local von-Mises statistics, with and without bootstrap inference, and establish their asymptotic validity using the second-order distribution theory. The finite sample performance of CLT and LDWB-aided local

  12. Acceptance Sampling Plans Based on Truncated Life Tests for Sushila Distribution

    Directory of Open Access Journals (Sweden)

    Amer Ibrahim Al-Omari

    2018-03-01

    Full Text Available An acceptance sampling plan problem based on truncated life tests, where the lifetime follows a Sushila distribution, is considered in this paper. For various acceptance numbers, confidence levels and values of the ratio between the fixed experiment time and the specified mean lifetime, the minimum sample sizes required to ascertain a specified mean life were found. The operating characteristic function values of the suggested sampling plans and the producer's risk are presented. Some tables are provided and the results are illustrated by an example of a real data set.
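
    A hedged sketch of how minimum sample sizes for truncated life tests are typically found, using the generic binomial argument with the lifetime CDF passed in as a function (the Sushila CDF itself is not reproduced here): the smallest n is sought such that the probability of observing at most c failures by the truncation time, when the true mean equals the specified mean, does not exceed 1 - P*.

        from math import comb

        def min_sample_size(cdf, t_over_mu, acceptance_number, confidence, n_max=1000):
            """Smallest n with sum_{i<=c} C(n,i) p^i (1-p)^(n-i) <= 1 - confidence,
            where p = cdf(t_over_mu) is the probability of failure by the truncation
            time t for an item whose mean life equals the specified mean.
            `cdf` is the lifetime CDF expressed in units of the mean life."""
            p = cdf(t_over_mu)
            for n in range(acceptance_number + 1, n_max + 1):
                accept_prob = sum(comb(n, i) * p**i * (1 - p)**(n - i)
                                  for i in range(acceptance_number + 1))
                if accept_prob <= 1 - confidence:
                    return n
            raise ValueError("no n <= n_max satisfies the requirement")

        # purely illustrative usage with an exponential lifetime (not the Sushila CDF):
        # import math; print(min_sample_size(lambda x: 1 - math.exp(-x), 0.5, 2, 0.95))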

  13. Teaching the Concept of the Sampling Distribution of the Mean

    Science.gov (United States)

    Aguinis, Herman; Branstetter, Steven A.

    2007-01-01

    The authors use proven cognitive and learning principles and recent developments in the field of educational psychology to teach the concept of the sampling distribution of the mean, which is arguably one of the most central concepts in inferential statistics. The proposed pedagogical approach relies on cognitive load, contiguity, and experiential…
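
    For readers who want a hands-on companion to the concept (this is a generic simulation, not the authors' teaching materials), a short experiment makes the sampling distribution of the mean concrete: repeated samples from a skewed population give a distribution of sample means that narrows roughly like 1/sqrt(n) and looks increasingly normal.

        import numpy as np

        def sampling_distribution_of_mean(n, n_samples=10_000, seed=0):
            """Draw n_samples samples of size n from a skewed (exponential) population
            and return the resulting sample means."""
            rng = np.random.default_rng(seed)
            draws = rng.exponential(scale=1.0, size=(n_samples, n))
            return draws.mean(axis=1)

        for n in (2, 10, 50):
            means = sampling_distribution_of_mean(n)
            # the standard error shrinks like sigma/sqrt(n); here sigma = 1
            print(f"n={n:3d}  mean={means.mean():.3f}  sd={means.std(ddof=1):.3f}  "
                  f"expected sd={1/np.sqrt(n):.3f}")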

  14. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    Science.gov (United States)

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.

  15. Improving the Accuracy of the Hyperspectral Model for Apple Canopy Water Content Prediction using the Equidistant Sampling Method.

    Science.gov (United States)

    Zhao, Huan-San; Zhu, Xi-Cun; Li, Cheng; Wei, Yu; Zhao, Geng-Xing; Jiang, Yuan-Mao

    2017-09-11

    The influence of the equidistant sampling method was explored in a hyperspectral model for the accurate prediction of the water content of apple tree canopy. The relationship between spectral reflectance and water content was explored using the sample partition methods of equidistant sampling and random sampling, and a stepwise regression model of the apple canopy water content was established. The results showed that the random sampling model was Y = 0.4797 - 721787.3883 × Z3 - 766567.1103 × Z5 - 771392.9030 × Z6; the equidistant sampling model was Y = 0.4613 - 480610.4213 × Z2 - 552189.0450 × Z5 - 1006181.8358 × Z6. After validation, the equidistant sampling method was shown to offer superior prediction ability. The calibration set coefficient of determination of 0.6599 and validation set coefficient of determination of 0.8221 were higher than those of the random sampling model by 9.20% and 10.90%, respectively. The root mean square error (RMSE) of 0.0365 and relative error (RE) of 0.0626 were lower than those of the random sampling model by 17.23% and 17.09%, respectively. Dividing the calibration set and validation set by the equidistant sampling method can improve the prediction accuracy of the hyperspectral model of apple canopy water content.
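
    A hedged sketch of the two sample-partition schemes compared above, with hypothetical array names (not the authors' protocol): after sorting samples by the response value, equidistant sampling places every k-th sample in the validation set, whereas random sampling draws the validation set at random.

        import numpy as np

        def equidistant_split(X, y, every_k=4):
            """Sort samples by the response and place every k-th one in the validation
            set; the rest form the calibration set (illustrative equidistant split)."""
            order = np.argsort(y)
            val_idx = order[::every_k]
            cal_idx = np.setdiff1d(order, val_idx)
            return (X[cal_idx], y[cal_idx]), (X[val_idx], y[val_idx])

        def random_split(X, y, val_fraction=0.25, seed=0):
            """Random calibration/validation split of comparable size for comparison."""
            rng = np.random.default_rng(seed)
            idx = rng.permutation(len(y))
            n_val = int(round(val_fraction * len(y)))
            val_idx, cal_idx = idx[:n_val], idx[n_val:]
            return (X[cal_idx], y[cal_idx]), (X[val_idx], y[val_idx])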

  16. Obtaining Samples Representative of Contaminant Distribution in an Aquifer

    International Nuclear Information System (INIS)

    Schalla, Ronald; Spane, Frank A.; Narbutovskih, Susan M.; Conley, Scott F.; Webber, William D.

    2002-01-01

    Historically, groundwater samples collected from monitoring wells have been assumed to provide average indications of contaminant concentrations within the aquifer over the well-screen interval. In-well flow circulation, heterogeneity in the surrounding aquifer, and the sampling method utilized, however, can significantly impact the representativeness of samples as contaminant indicators of actual conditions within the surrounding aquifer. This paper identifies the need and approaches essential for providing cost-effective and technically meaningful groundwater-monitoring results. Proper design of the well screen interval is critical. An accurate understanding of ambient (non-pumping) flow conditions within the monitoring well is essential for determining the contaminant distribution within the aquifer. The ambient in-well flow velocity, flow direction and volumetric flux rate are key to this understanding. Not only do the ambient flow conditions need to be identified for preferential flow zones, but also the probable changes that will be imposed under dynamic conditions that occur during groundwater sampling. Once the in-well flow conditions are understood, effective sampling can be conducted to obtain representative samples for specific depth zones or zones of interest. The question of sample representativeness has become an important issue as waste minimization techniques such as low flow purging and sampling are implemented to combat the increasing cost of well purging and sampling at many hazardous waste sites. Several technical approaches (e.g., well tracer techniques and flowmeter surveys) can be used to determine in-well flow conditions, and these are discussed with respect to both their usefulness and limitations. Proper fluid extraction methods using minimal, (low) volume and no purge sampling methods that are used to obtain representative samples of aquifer conditions are presented

  17. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    Science.gov (United States)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional - global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  18. Power distribution system reliability evaluation using dagger-sampling Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Y.; Zhao, S.; Ma, Y. [North China Electric Power Univ., Hebei (China). Dept. of Electrical Engineering

    2009-03-11

    A dagger-sampling Monte Carlo simulation method was used to evaluate power distribution system reliability. The dagger-sampling technique was used to record the failure of a component as an incident and to determine its occurrence probability by generating incident samples using random numbers. The dagger sampling technique was combined with the direct sequential Monte Carlo method to calculate average values of load point indices and system indices. Results of the 2 methods with simulation times of up to 100,000 years were then compared. The comparative evaluation showed that less computing time was required using the dagger-sampling technique due to its higher convergence speed. When simulation times were 1000 years, the dagger-sampling method required 0.05 seconds to accomplish an evaluation, while the direct method required 0.27 seconds. 12 refs., 3 tabs., 4 figs.
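
    A hedged sketch of the dagger-sampling idea described above, as it is generally presented in reliability texts rather than necessarily the authors' exact implementation: one uniform random number decides which, if any, of a block of floor(1/p) trials records a component failure, producing negatively correlated failure indicators and hence a lower-variance estimate than independent Bernoulli draws.

        import random

        def dagger_sample_failures(p, n_trials, rng=random.random):
            """Generate n_trials failure indicators for a component with failure
            probability p using dagger sampling: each uniform draw covers a block of
            floor(1/p) trials, and its value selects at most one failing trial."""
            block = int(1.0 / p)
            failures = []
            while len(failures) < n_trials:
                u = rng()
                indicators = [0] * block
                k = int(u / p)            # sub-interval hit by u
                if k < block:             # u in [k*p, (k+1)*p) -> trial k fails
                    indicators[k] = 1
                failures.extend(indicators)
            return failures[:n_trials]

    Averaging the indicators over many blocks reproduces the marginal failure probability p, while the induced negative correlation within a block is what speeds up convergence relative to independent sampling.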

  19. Eccentricity samples: Implications on the potential and the velocity distribution

    Directory of Open Access Journals (Sweden)

    Cubarsi R.

    2017-01-01

    Full Text Available Planar and vertical epicycle frequencies and the local angular velocity are related to the derivatives up to second order of the local potential and can be used to test the shape of the potential from stellar disc samples. These samples show a more complex velocity distribution than halo stars and should provide a more realistic test. We assume an axisymmetric potential allowing a mixture of independent ellipsoidal velocity distributions, of separable or Staeckel form in cylindrical or spherical coordinates. We prove that the values of the local constants are not consistent with a potential that is additively separable in cylindrical coordinates, nor with a spherically symmetric potential. The simplest potential that fits the local constants is used to show that the harmonic and non-harmonic terms of the potential are equally important. The same analysis is used to estimate the local constants. Two families of nested subsamples selected for decreasing planar and vertical eccentricities are used to bear out the relation between the mean squared planar and vertical eccentricities and the velocity dispersions of the subsamples. According to the first-order epicycle model, the radial and vertical velocity components provide accurate information on the planar and vertical epicycle frequencies. However, it is impossible to account for the asymmetric drift, which introduces a systematic bias in the estimation of the third constant. Under a more general model, when the asymmetric drift is taken into account, the rotation velocity dispersions together with their asymmetric drift provide the correct fit for the local angular velocity. The consistency of the results shows that this new method based on the distribution of eccentricities is worth using for kinematic stellar samples. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. 176011: Dynamics and Kinematics of Celestial Bodies and Systems]

  20. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
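
    A hedged sketch of the best-performing approach described above: fitting a parametric CDF (here a lognormal, via non-linear least squares) to the cumulative proportion of propagules recovered by the end of each sampling interval, rather than fitting the interval bounds directly. The variable names, the example data and the use of scipy.optimize.curve_fit are illustrative choices, not the authors' code.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import lognorm

        def fit_lognormal_to_cumulative(interval_upper_bounds, counts):
            """Fit a lognormal CDF to the cumulative proportion of propagules
            recovered by the end of each sampling interval."""
            t = np.asarray(interval_upper_bounds, float)
            cum_prop = np.cumsum(counts) / np.sum(counts)

            def cdf(t, shape, scale):
                return lognorm.cdf(t, s=shape, scale=scale)

            (shape, scale), _ = curve_fit(cdf, t, cum_prop, p0=(1.0, np.median(t)))
            return shape, scale

        # hypothetical data: counts of propagules recovered per sampling interval (hours)
        upper_bounds = [1, 2, 4, 8, 16, 32]
        counts = [5, 12, 20, 9, 3, 1]
        print(fit_lognormal_to_cumulative(upper_bounds, counts))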

  1. Distribution of pesticide residues in soil and uncertainty of sampling.

    Science.gov (United States)

    Suszter, Gabriela K; Ambrus, Árpád

    2017-08-03

    Pesticide residues were determined in about 120 soil cores taken randomly from the top 15 cm layer of two sunflower fields about 30 days after preemergence herbicide treatments. Samples were extracted with an acetone-ethyl acetate mixture and the residues were determined with GC-TSD. Residues of dimethenamid, pendimethalin, and prometryn ranged from 0.005 to 2.97 mg/kg. Their relative standard deviations (CV) were between 0.66 and 1.13. The relative frequency distributions of residues in soil cores were very similar to those observed in root and tuber vegetables grown in pesticide-treated soils. Based on all available information, a typical CV of 1.00 was estimated for pesticide residues in primary soil samples (soil cores). The corresponding expected relative uncertainty of sampling is 20% when composite samples of size 25 are taken. To obtain a reliable estimate of the average residues in the top 15 cm soil layer of a field, up to 8 independent replicate random samples should be taken. The improvement in the estimate of the actual residue level of the sampled field would be marginal if a larger number of samples were taken.
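
    The 20% figure quoted above follows from the usual assumption that the variability of primary samples averages out in a composite: with a typical CV of 1.00 for individual soil cores, a composite of n = 25 cores has an expected relative sampling uncertainty of about CV/sqrt(n). A tiny worked check:

        import math

        cv_primary = 1.00     # typical relative standard deviation of single soil cores
        n_cores = 25          # number of cores in a composite sample
        cv_composite = cv_primary / math.sqrt(n_cores)
        print(f"expected relative sampling uncertainty: {cv_composite:.0%}")   # 20%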

  2. Large Sample Neutron Activation Analysis: A Challenge in Cultural Heritage Studies

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Tzika, F.

    2007-01-01

    Large sample neutron activation analysis complements and significantly extends the analytical tools available for cultural heritage and authentication studies, providing unique applications of non-destructive, multi-element analysis of materials that are too precious to damage for sampling purposes, representative sampling of heterogeneous materials, or even analysis of whole objects. In this work, correction factors for neutron self-shielding, gamma-ray attenuation and the volume distribution of the activity in large volume samples composed of iron and ceramic material were derived. Moreover, the effect of inhomogeneity on the accuracy of the technique was examined

  3. Measuring radioactive half-lives via statistical sampling in practice

    Science.gov (United States)

    Lorusso, G.; Collins, S. M.; Jagan, K.; Hitt, G. W.; Sadek, A. M.; Aitken-Smith, P. M.; Bridi, D.; Keightley, J. D.

    2017-10-01

    The statistical sampling method for the measurement of radioactive decay half-lives exhibits intriguing features such as that the half-life is approximately the median of a distribution closely resembling a Cauchy distribution. Whilst initial theoretical considerations suggested that in certain cases the method could have significant advantages, accurate measurements by statistical sampling have proven difficult, for they require an exercise in non-standard statistical analysis. As a consequence, no half-life measurement using this method has yet been reported and no comparison with traditional methods has ever been made. We used a Monte Carlo approach to address these analysis difficulties, and present the first experimental measurement of a radioisotope half-life (211Pb) by statistical sampling in good agreement with the literature recommended value. Our work also focused on the comparison between statistical sampling and exponential regression analysis, and concluded that exponential regression achieves generally the highest accuracy.

  4. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  5. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used for anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. Quantities such as the conformity index COIN and COIN integrals are derived from the DVHs. This is achieved by using piecewise uniformly distributed sampling points, with the density in each region obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. For the application of this method a single preprocessing step is necessary, which requires only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points

  6. Multicounter neutron detector for examination of content and spatial distribution of fissile materials in bulk samples

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Starosta, W.; Zoltowski, T.

    1999-01-01

    A new neutron coincidence well-counter is presented. This experimental device can be applied for the passive assay of fissile and, in particular, plutonium-bearing materials. It consists of a set of 3He tubes placed inside a polyethylene moderator. Outputs from the tubes, first processed by preamplifier/amplifier/discriminator circuits, are then analysed using a correlator connected to a PC, and correlation techniques implemented in software. Such a neutron counter enables determination of the 240Pu effective mass in samples of small Pu content (i.e., where multiplication effects can be neglected) having a fairly big volume (up to 0.17 m3), provided the isotopic composition is known. For the determination of the neutron source distribution inside a sample, a heuristic method based on hierarchical cluster analysis was applied. As input parameters, the amplitudes and phases of the two-dimensional Fourier transformation of the count profile matrices for known point source distributions and for the examined samples were taken. Such matrices of count profiles are collected by scanning the sample with the detection head. In the clustering process, count profiles of unknown samples are fitted into dendrograms employing the 'proximity' criterion of the examined sample profile to standard sample profiles. The distribution of neutron sources in the examined sample is then evaluated on the basis of a comparison with standard source distributions. (author)

  7. Elemental distribution and sample integrity comparison of freeze-dried and frozen-hydrated biological tissue samples with nuclear microprobe

    Energy Technology Data Exchange (ETDEWEB)

    Vavpetič, P., E-mail: primoz.vavpetic@ijs.si [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Vogel-Mikuš, K. [Biotechnical Faculty, Department of Biology, University of Ljubljana, Jamnikarjeva 101, SI-1000 Ljubljana (Slovenia); Jeromel, L. [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Ogrinc Potočnik, N. [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); FOM-Institute AMOLF, Science Park 104, 1098 XG Amsterdam (Netherlands); Pongrac, P. [Biotechnical Faculty, Department of Biology, University of Ljubljana, Jamnikarjeva 101, SI-1000 Ljubljana (Slovenia); Department of Plant Physiology, University of Bayreuth, Universitätstr. 30, 95447 Bayreuth (Germany); Drobne, D.; Pipan Tkalec, Ž.; Novak, S.; Kos, M.; Koren, Š.; Regvar, M. [Biotechnical Faculty, Department of Biology, University of Ljubljana, Jamnikarjeva 101, SI-1000 Ljubljana (Slovenia); Pelicon, P. [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia)

    2015-04-01

    The analysis of biological samples in the frozen-hydrated state with the micro-PIXE technique at the Jožef Stefan Institute (JSI) nuclear microprobe has matured to a point that enables us to measure and examine frozen tissue samples routinely as a standard research method. A cryotome-cut slice of the frozen-hydrated biological sample is mounted between two thin foils and positioned on the sample holder. The temperature of the cold stage in the measuring chamber is kept below 130 K throughout the insertion of the samples and the proton beam exposure. The matrix of frozen-hydrated tissue consists mostly of ice. Sample deterioration during proton beam exposure is monitored during the experiment, as both Elastic Backscattering Spectrometry (EBS) and Scanning Transmission Ion Microscopy (STIM) in on–off axis geometry are recorded together with the events in two PIXE detectors and backscattered ions from the chopper in a single list-mode file. The aim of this experiment was to determine the differences and similarities between two kinds of biological sample preparation techniques for micro-PIXE analysis, namely freeze-drying and frozen-hydrated sample preparation, in order to evaluate the improvements, if any, in the elemental localisation of the latter technique. In the presented work, a standard micro-PIXE configuration for tissue mapping at JSI was used with five detection systems operating in parallel, with a proton beam cross section of 1.0 × 1.0 μm² and a beam current of 100 pA. The comparison of the resulting elemental distributions measured in the biological tissue prepared in the frozen-hydrated and in the freeze-dried state revealed differences in the elemental distribution of particular elements at the cellular level due to the morphology alteration in particular tissue compartments induced either by water removal in the lyophilisation process or by unsatisfactory preparation of samples for cutting and mounting during the shock-freezing phase of sample preparation.

  8. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Science.gov (United States)

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...

  9. Neutron multicounter detector for investigation of content and spatial distribution of fission materials in large volume samples

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Starosta, W.; Zoltowski, T.

    1998-01-01

    The experimental device is a neutron coincidence well counter. It can be applied for the passive assay of fissile, especially plutonium-bearing, materials. It consists of a set of 3He tubes placed inside a polyethylene moderator; outputs from the tubes, first processed by preamplifier/amplifier/discriminator circuits, are then analysed using a neutron correlator connected to a PC, and correlation techniques implemented in software. Such a neutron counter allows for the determination of the plutonium mass (240Pu effective mass) in nonmultiplying samples having a fairly big volume (up to 0.14 m3). For the determination of the neutron source distribution inside the sample, heuristic methods based on hierarchical cluster analysis are applied. As input parameters, the amplitudes and phases of the two-dimensional Fourier transformation of the count profile matrices for known point source distributions and for the examined samples are taken. Such matrices are collected by scanning the sample with the detection head. During the clustering process, count profiles for unknown samples are fitted into dendrograms using the 'proximity' criterion of the examined sample profile to standard sample profiles. The distribution of neutron sources in an examined sample is then evaluated on the basis of a comparison with standard source distributions. (author)

  10. Discrete Ziggurat: A time-memory trade-off for sampling from a Gaussian distribution over the integers

    NARCIS (Netherlands)

    Buchmann, J.; Cabarcas, D.; Göpfert, F.; Hülsing, A.T.; Weiden, P.; Lange, T.; Lauter, K.; Lisonek, P.

    2014-01-01

    Several lattice-based cryptosystems require to sample from a discrete Gaussian distribution over the integers. Existing methods to sample from such a distribution either need large amounts of memory or they are very slow. In this paper we explore a different method that allows for a flexible
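
    For orientation, a simple, memory-light but slow rejection sampler for a discrete Gaussian over the integers is sketched below; this is deliberately not the discrete Ziggurat algorithm of the paper, only the kind of baseline it aims to improve on, and the tail cut at tail*sigma is an assumption.

        import math
        import random

        def sample_discrete_gaussian(sigma, center=0.0, tail=12, rng=random.random):
            """Rejection-sample an integer x with probability proportional to
            exp(-(x - center)^2 / (2 sigma^2)), truncated to +/- tail*sigma."""
            lo = int(math.floor(center - tail * sigma))
            hi = int(math.ceil(center + tail * sigma))
            while True:
                x = lo + int(rng() * (hi - lo + 1))     # uniform integer proposal in [lo, hi]
                rho = math.exp(-((x - center) ** 2) / (2.0 * sigma * sigma))
                if rng() <= rho:
                    return x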

  11. Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.

    Science.gov (United States)

    Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua

    2016-09-05

    In a practical continuous-variable quantum key distribution system, finite sampling bandwidth of the employed analog-to-digital converter at the receiver's side may lead to inaccurate results of pulse peak sampling. Then, errors in the parameters estimation resulted. Subsequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts, i.e., a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may improve dramatically the data acquisition precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated through monitoring the change of the statistical power of the sampled data in the proposed scheme. This helps to resist against some practical attacks, such as the well-known local oscillator calibration attack.

  12. Sensitivity and accuracy of atomic absorption spectrophotometry for trace elements in marine biological samples

    International Nuclear Information System (INIS)

    Fukai, R.; Oregioni, B.

    1976-01-01

    During the course of 1974-75 atomic absorption spectrophotometry (AAS) has been used extensively in our laboratory for measuring various trace elements in marine biological materials in order to conduct homogeneity tests on the intercalibration samples for trace metal analysis as well as to obtain baseline data for trace elements in various kinds of marine organisms collected from different locations in the Mediterranean Sea. Several series of test experiments have been conducted on the current methodology in use in our laboratory to ensure satisfactory analytical performance in measuring a number of trace elements for which analytical problems have not completely been solved. Sensitivities of the techniques used were repeatedly checked for various elements and the accuracy of the analyses were always critically evaluated by analyzing standard reference materials. The results of these test experiments have uncovered critical points relevant to the application of the AAS to routine analysis

  13. Non-parametric adaptive importance sampling for the probability estimation of a launcher impact position

    International Nuclear Information System (INIS)

    Morio, Jerome

    2011-01-01

    Importance sampling (IS) is a useful simulation technique to estimate critical probabilities with better accuracy than Monte Carlo methods. It consists in generating random weighted samples from an auxiliary distribution rather than from the distribution of interest. The crucial part of this algorithm is the choice of an efficient auxiliary PDF that is able to generate more of the rare random events. In practice, the optimisation of this auxiliary distribution is often very difficult. In this article, we propose to approximate the optimal IS auxiliary density with non-parametric adaptive importance sampling (NAIS). We apply this technique to the probability estimation of a spatial launcher impact position, since this has become an increasingly important issue in the field of aeronautics.
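
    To make the importance-sampling mechanics above concrete, here is a generic IS estimator of a small exceedance probability with a fixed Gaussian auxiliary density; the adaptive, kernel-based construction of the auxiliary density used in NAIS is not reproduced, and the choice of shifting the proposal mean to the threshold is an illustrative assumption.

        import numpy as np
        from scipy.stats import norm

        def importance_sampling_probability(threshold, n=100_000, aux_mean=None, seed=0):
            """Estimate P(X > threshold) for X ~ N(0, 1) by sampling from an auxiliary
            normal centred near the rare region and reweighting by the likelihood ratio."""
            rng = np.random.default_rng(seed)
            mu = threshold if aux_mean is None else aux_mean   # shift proposal towards the rare event
            x = rng.normal(mu, 1.0, size=n)
            weights = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, mu, 1.0)   # f(x) / g(x)
            return np.mean((x > threshold) * weights)

        print(importance_sampling_probability(4.0))   # compare with 1 - Phi(4) ~ 3.2e-5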

  14. Diagnostic accuracy of liver fibrosis based on red cell distribution width (RDW) to platelet ratio with fibroscan in chronic hepatitis B

    Science.gov (United States)

    Sembiring, J.; Jones, F.

    2018-03-01

    The red cell distribution width (RDW) to platelet ratio (RPR) can predict liver fibrosis and cirrhosis in chronic hepatitis B with relatively high accuracy. The RPR was superior to other non-invasive methods for predicting liver fibrosis, such as the AST/ALT ratio, the AST to platelet ratio index and FIB-4. The aim of this study was to assess the diagnostic accuracy of liver fibrosis assessment using the RDW to platelet ratio in chronic hepatitis B patients, compared with Fibroscan. This cross-sectional study was conducted at Adam Malik Hospital from January to June 2015. We examined 34 chronic hepatitis B patients, recording RDW, platelet count, and Fibroscan results. Data were statistically analyzed. In the ROC analysis, the RPR had an accuracy of 72.3% (95% CI: 84.1%-97%). In this study, the RPR had a moderate ability to predict the fibrosis degree (p = 0.029, AUC > 70%). The RPR cutoff value was 0.0591, sensitivity and specificity were 71.4% and 60%, the positive predictive value (PPV) was 55.6% and the negative predictive value (NPV) was 75%, the positive likelihood ratio was 1.79 and the negative likelihood ratio was 0.48. The RPR has the ability to predict the degree of liver fibrosis in chronic hepatitis B patients with moderate accuracy.

  15. Different goodness of fit tests for Rayleigh distribution in ranked set sampling

    Directory of Open Access Journals (Sweden)

    Amer Al-Omari

    2016-03-01

    Full Text Available In this paper, different goodness of fit tests for the Rayleigh distribution are considered based on simple random sampling (SRS and ranked set sampling (RSS techniques. The performance of the suggested estimators is evaluated in terms of the power of the tests by using Monte Carlo simulation. It is found that the suggested RSS tests perform better than their counterparts  in SRS.
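
    A hedged sketch of the ranked set sampling scheme referred to above, in its balanced form with perfect ranking by the measured value itself (an idealisation): m sets of m units are drawn, each set is ranked, and the i-th ranked unit of the i-th set is measured. The Rayleigh population in the usage line mirrors the distribution studied in the record but is otherwise an arbitrary choice.

        import numpy as np

        def ranked_set_sample(population_sampler, set_size, n_cycles=1, seed=0):
            """Balanced ranked set sampling with perfect ranking: in each cycle draw
            set_size sets of set_size units, sort each set, and keep the i-th order
            statistic from the i-th set."""
            rng = np.random.default_rng(seed)
            sample = []
            for _ in range(n_cycles):
                for i in range(set_size):
                    units = np.sort(population_sampler(rng, set_size))
                    sample.append(units[i])        # i-th ranked unit of the i-th set
            return np.array(sample)

        # example: an RSS of size 4 from a Rayleigh population with unit scale
        rss = ranked_set_sample(lambda rng, k: rng.rayleigh(1.0, size=k), set_size=4)
        print(rss)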

  16. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    Science.gov (United States)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillations (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial signals of fluctuations into the random samples and weakens the BAO signals if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals is improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current measurements of cosmological parameters, this improvement should benefit future measurements of galaxy clustering.
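
    A hedged sketch of the basic recipe: fit a smooth n(z) to the measured redshift histogram and draw the random catalogue's redshifts from that smooth curve by inverse-CDF sampling, rather than resampling the data redshifts themselves. The "data" below are synthetic and the low-order polynomial is only a stand-in for whatever smooth function one prefers.

```python
import numpy as np

rng = np.random.default_rng(10)

# Pretend these are measured galaxy redshifts (synthetic, not BOSS data).
z_data = rng.beta(4, 6, size=50_000) * 0.8 + 0.1

# Fit a smooth n(z): here a low-order polynomial to a coarse histogram.
counts, edges = np.histogram(z_data, bins=40, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
coeffs = np.polyfit(centres, counts, deg=5)
z_grid = np.linspace(edges[0], edges[-1], 1000)
nz_smooth = np.clip(np.polyval(coeffs, z_grid), 0.0, None) + 1e-12

# Draw random-catalogue redshifts from the smooth n(z) by inverse-CDF sampling.
cdf = np.cumsum(nz_smooth)
cdf /= cdf[-1]
z_random = np.interp(rng.uniform(size=200_000), cdf, z_grid)

print(z_data.mean(), z_random.mean())
```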

  17. Determination and optimization of spatial samples for distributed measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Huo, Xiaoming (Georgia Institute of Technology, Atlanta, GA); Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong (Georgia Institute of Technology, Atlanta, GA)

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  18. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline on how to account for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, simple systematic sampling of records consistently ranked among the best performing methods across the range of conditions tested, whereas the other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. In the meantime, systematic sampling of records appears to be the most efficient correction method and should be advised in most cases.

  19. The behavior of Metropolis-coupled Markov chains when sampling rugged phylogenetic distributions.

    Science.gov (United States)

    Brown, Jeremy M; Thomson, Robert C

    2018-02-15

    Bayesian phylogenetic inference involves sampling from posterior distributions of trees, which sometimes exhibit local optima, or peaks, separated by regions of low posterior density. Markov chain Monte Carlo (MCMC) algorithms are the most widely used numerical method for generating samples from these posterior distributions, but they are susceptible to entrapment on individual optima in rugged distributions when they are unable to easily cross through or jump across regions of low posterior density. Ruggedness of posterior distributions can result from a variety of factors, including unmodeled variation in evolutionary processes and unrecognized variation in the true topology across sites or genes. Ruggedness can also become exaggerated when constraints are placed on topologies that require the presence or absence of particular bipartitions (often referred to as positive or negative constraints, respectively). These types of constraints are frequently employed when conducting tests of topological hypotheses (Bergsten et al. 2013; Brown and Thomson 2017). Negative constraints can lead to particularly rugged distributions when the data strongly support a forbidden clade, because monophyly of the clade can be disrupted by inserting outgroup taxa in many different ways. However, topological moves between the alternative disruptions are very difficult, because they require swaps between the inserted outgroup taxa while the data constrain taxa from the forbidden clade to remain close together on the tree. While this precise form of ruggedness is particular to negative constraints, trees with high posterior density can be separated by similarly complicated topological rearrangements, even in the absence of constraints.
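
    The entrapment problem and the Metropolis-coupling remedy can be seen on a toy one-dimensional target; the sketch below runs a cold chain plus three heated chains on a rugged two-mode density and proposes state swaps between adjacent temperatures. It is illustrative only and has nothing phylogenetic about it; the temperature ladder and proposal scale are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(x):
    # Rugged toy "posterior": two well-separated modes.
    return np.logaddexp(-0.5 * ((x + 4) / 0.5) ** 2, -0.5 * ((x - 4) / 0.5) ** 2)

temps = np.array([1.0, 0.5, 0.25, 0.1])   # chain 0 is the cold (sampling) chain
states = np.zeros(len(temps))
cold_trace = []

for it in range(20_000):
    # Within-chain Metropolis updates on the tempered targets p(x)^beta.
    for k, beta in enumerate(temps):
        prop = states[k] + rng.normal(scale=1.0)
        if np.log(rng.uniform()) < beta * (log_post(prop) - log_post(states[k])):
            states[k] = prop
    # Propose a swap of states between two adjacent chains.
    i = rng.integers(len(temps) - 1)
    j = i + 1
    log_alpha = (temps[i] - temps[j]) * (log_post(states[j]) - log_post(states[i]))
    if np.log(rng.uniform()) < log_alpha:
        states[i], states[j] = states[j], states[i]
    cold_trace.append(states[0])

cold_trace = np.array(cold_trace)
print("fraction of cold-chain samples near each mode:",
      np.mean(cold_trace < 0), np.mean(cold_trace > 0))
```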

  20. Sampling strategies for indoor radon investigations

    International Nuclear Information System (INIS)

    Prichard, H.M.

    1983-01-01

    Recent investigations prompted by concern about the environmental effects of residential energy conservation have produced many accounts of indoor radon concentrations far above background levels. In many instances, time-normalized annual exposures exceeded the 4 WLM per year standard currently used for uranium mining. Further investigations of indoor radon exposures are necessary to judge the extent of the problem and to estimate the practicality of health effects studies. A number of trends can be discerned as more indoor surveys are reported. It is becoming increasingly clear that local geological factors play a major, if not dominant, role in determining the distribution of indoor radon concentrations in a given area. Within a given locale, indoor radon concentrations tend to be log-normally distributed, and sample means differ markedly from one region to another. The appreciation of geological factors and the general log-normality of radon distributions will improve the accuracy of population dose estimates and facilitate the design of preliminary health effects studies. The relative merits of grab samples, short- and long-term integrated samples, and more complicated dose assessment strategies are discussed in the context of several types of epidemiological investigations. A new passive radon sampler with a 24 hour integration time is described and evaluated as a tool for pilot investigations.
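
    As a small illustration of why log-normality matters for dose estimates, the sketch below works from an assumed geometric mean and geometric standard deviation (made-up values, not survey results) and derives the arithmetic mean exposure and the fraction of homes above an illustrative 4 pCi/L guideline.

```python
import numpy as np
from scipy import stats

# Assumed parameters for illustration only (not survey results):
gm, gsd = 1.0, 2.5          # geometric mean (pCi/L) and geometric standard deviation
mu, sigma = np.log(gm), np.log(gsd)

dist = stats.lognorm(s=sigma, scale=np.exp(mu))

arithmetic_mean = dist.mean()     # exp(mu + sigma^2 / 2), drives population dose
frac_above_4 = dist.sf(4.0)       # fraction of homes above a 4 pCi/L guideline

rng = np.random.default_rng(3)
sample = dist.rvs(size=200, random_state=rng)   # a hypothetical grab-sample survey
print(arithmetic_mean, frac_above_4, np.exp(np.mean(np.log(sample))))
```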

  1. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar

    Science.gov (United States)

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-01-01

    Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates because, according to the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fused estimate from the individual radars is fed into an extended Kalman filter (EKF) to complete the first filtering stage. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering stage, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058

  2. An Investigation to Improve Classifier Accuracy for Myo Collected Data

    Science.gov (United States)

    2017-02-01

    [Only report front matter was recovered for this record, no abstract. The extracted contents list covers "Bad Samples Effect on Classification Accuracy", with subsections on Naïve Bayes (NB), Logistic Model Tree (LMT) and K-Nearest Neighbor classifier accuracy, and figure captions for the "Come" gesture, pitch feature, users 06 and 14, noting that all samples exhibit reversed movement.]

  3. Analysis of stationary power/amplitude distributions for multiple channels of sampled FBGs.

    Science.gov (United States)

    Xing, Ya; Zou, Xihua; Pan, Wei; Yan, Lianshan; Luo, Bin; Shao, Liyang

    2015-08-10

    The stationary power/amplitude distributions along the grating length for the multiple channels of a sampled fiber Bragg grating (SFBG) are analyzed. Unlike a uniform FBG, the SFBG has multiple channels in its reflection spectrum rather than a single one. The stationary power/amplitude distributions for these multiple channels are therefore analyzed using two different theoretical models. In the first model, the SFBG is regarded as a set of grating sections and non-grating sections that are alternately stacked, and a step-like distribution is obtained for the power/amplitude of each channel along the grating length. In the second model, the SFBG is decomposed into multiple uniform "ghost" gratings, and a continuous distribution is obtained for each ghost grating (i.e., each channel). A comparison shows that the distributions obtained with the two models are identical, demonstrating the equivalence between them. In addition, the impact of the duty cycle on the power/amplitude distributions of the multiple channels of the SFBG is presented.

  4. An Analysis of Spherical Particles Distribution Randomly Packed in a Medium for the Monte Carlo Implicit Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Yong; Kim, Song Hyun; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this study, as a preliminary step toward developing an implicit method of high accuracy, the distribution characteristics of spherical particles were evaluated by using explicit modeling techniques at various volume packing fractions. The study was performed to evaluate the implicitly simulated distribution of randomly packed spheres in a medium. First, an explicit modeling method to simulate randomly packed spheres in a hexahedral medium was proposed. The distribution characteristics of l_p and r_p, which are used in the particle position sampling, were estimated. The analysis shows that use of the direct exponential distribution, which is generally employed in implicit modeling, can bias the distribution of the spheres. It is expected that the findings of this study can be utilized to improve the accuracy of the implicit method. Spherical particles randomly distributed in a medium are utilized in radiation shields, fusion reactor blankets, and the fuels of VHTR reactors. Because of the difficulty of simulating such stochastic distributions, the Monte Carlo (MC) method has mainly been considered as the tool for analyzing particle transport. For the MC modeling of spherical particles, three methods are known: repeated structures, explicit modeling, and implicit modeling. The implicit method (also called the track-length sampling method) samples each spherical geometry (or the track length through the sphere) during the MC simulation. The implicit modeling method has the advantages of high computational efficiency and user convenience. However, it is noted that the implicit method has lower modeling accuracy in various finite media.
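
    As a rough illustration of the two modeling routes the abstract contrasts, the sketch below compares free paths drawn directly from an exponential distribution (the usual implicit, track-length-sampling assumption, with Σ = nπR²) against distances measured by ray tracing through an explicitly generated box of spheres. For simplicity the explicit spheres here are fully random with overlap allowed, the case where the exponential law is essentially exact apart from end effects; the bias discussed in the abstract arises for non-overlapping packings, which this sketch does not reproduce. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

R = 0.05                 # sphere radius (arbitrary units)
packing_fraction = 0.05  # volume packing fraction
n_density = packing_fraction / (4.0 / 3.0 * np.pi * R**3)  # spheres per unit volume
sigma = n_density * np.pi * R**2   # effective "macroscopic cross-section"

# Implicit route: draw the distance to the next sphere from an exponential PDF.
implicit_paths = rng.exponential(scale=1.0 / sigma, size=100_000)

# Explicit check: random (overlapping) sphere centres in a box; distance from the
# box face x = 0 to the first sphere entered along the +x direction.
box = 5.0
centres = rng.uniform(0.0, box, size=(int(n_density * box**3), 3))

def first_hit_distance(y, z):
    v = centres - np.array([0.0, y, z])
    miss = (v[:, 1] ** 2 + v[:, 2] ** 2) > R**2    # ray passes outside this sphere
    t = v[:, 0] - np.sqrt(np.maximum(R**2 - v[:, 1] ** 2 - v[:, 2] ** 2, 0.0))
    t[miss | (t <= 0)] = np.inf
    return t.min()

explicit_paths = np.array([first_hit_distance(*rng.uniform(R, box - R, size=2))
                           for _ in range(2_000)])
explicit_paths = explicit_paths[np.isfinite(explicit_paths)]

print(implicit_paths.mean(), explicit_paths.mean(), 1.0 / sigma)
```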

  5. Feasibility and accuracy evaluation of three human papillomavirus assays for FTA card-based sampling: a pilot study in cervical cancer screening.

    Science.gov (United States)

    Wang, Shao-Ming; Hu, Shang-Ying; Chen, Wen; Chen, Feng; Zhao, Fang-Hui; He, Wei; Ma, Xin-Ming; Zhang, Yu-Qing; Wang, Jian; Sivasubramaniam, Priya; Qiao, You-Lin

    2015-11-04

    Liquid-state specimen carriers are inadequate for sample transportation in large-scale screening projects in low-resource settings, which necessitates the exploration of novel non-hazardous solid-state alternatives. Studies investigating the feasibility and accuracy of a solid-state human papillomavirus (HPV) sampling medium in combination with different down-stream HPV DNA assays for cervical cancer screening are needed. We collected two cervical specimens from 396 women, aged 25-65 years, who were enrolled in a cervical cancer screening trial. One sample was stored using DCM preservative solution and the other was applied to a Whatman Indicating FTA Elute® card (FTA card). All specimens were processed using three HPV testing methods, including Hybrid capture 2 (HC2), careHPV™, and Cobas®4800 tests. All the women underwent a rigorous colposcopic evaluation that included using a microbiopsy protocol. Compared to the liquid-based carrier, the FTA card demonstrated comparable sensitivity for detecting high grade Cervical Intraepithelial Neoplasia (CIN) using HC2 (91.7 %), careHPV™ (83.3 %), and Cobas®4800 (91.7 %) tests. Moreover, the FTA card showed a higher specificity compared to a liquid-based carrier for HC2 (79.5 % vs. 71.6 %, P = 0.015), comparable specificity for careHPV™ (78.1 % vs. 73.0 %, P > 0.05), but lower specificity for the Cobas®4800 test (62.4 % vs. 69.9 %, P = 0.032). Generally, the FTA card-based sampling medium's accuracy was comparable with that of liquid-based medium for the three HPV testing assays. FTA cards are a promising sample carrier for cervical cancer screening. With further optimization, it can be utilized for HPV testing in areas of varying economic development.

  6. An Empirical Investigation of the Effects of Nonnormality upon the Sampling Distribution of the Product Moment Correlation Coefficient.

    Science.gov (United States)

    HJELM, HOWARD; NORRIS, RAYMOND C.

    The study empirically determined the effects of nonnormality upon some sampling distributions of the product moment correlation coefficient (PMCC). Sampling distributions of the PMCC were obtained by drawing numerous samples from control and experimental populations having various degrees of nonnormality and by calculating correlation coefficients…

  7. The accuracy of endometrial sampling in women with postmenopausal bleeding: a systematic review and meta-analysis.

    Science.gov (United States)

    van Hanegem, Nehalennia; Prins, Marileen M C; Bongers, Marlies Y; Opmeer, Brent C; Sahota, Daljit Singh; Mol, Ben Willem J; Timmermans, Anne

    2016-02-01

    Postmenopausal bleeding (PMB) can be the first sign of endometrial cancer. In case of a thickened endometrium, endometrial sampling is often used in these women. In this systematic review, we studied the accuracy of endometrial sampling for the diagnosis of endometrial cancer, atypical hyperplasia and endometrial disease (endometrial pathology, including benign polyps). We systematically searched the literature for studies comparing the results of endometrial sampling in women with postmenopausal bleeding with two different reference standards: blind dilatation and curettage (D&C) and hysteroscopy with histology. We assessed the quality of the detected studies by the QUADAS-2 tool. For each included study, we calculated the fraction of women in whom endometrial sampling failed. Furthermore, we extracted numbers of cases of endometrial cancer, atypical hyperplasia and endometrial disease that were identified or missed by endometrial sampling. We detected 12 studies reporting on 1029 women with postmenopausal bleeding: five studies with dilatation and curettage (D&C) and seven studies with hysteroscopy as a reference test. The weighted sensitivity of endometrial sampling with D&C as a reference for the diagnosis of endometrial cancer was 100% (range 100-100%) and 92% (71-100) for the diagnosis of atypical hyperplasia. Only one study reported sensitivity for endometrial disease, which was 76%. When hysteroscopy was used as a reference, weighted sensitivities of endometrial sampling were 90% (range 50-100), 82% (range 56-94) and 39% (21-69) for the diagnosis of endometrial cancer, atypical hyperplasia and endometrial disease, respectively. For all diagnoses studied and both reference tests, specificity was 98-100%. The weighted failure rate of endometrial sampling was 11% (range 1-53%), while insufficient samples were found in 31% (range 7-76%). In these women with insufficient or failed samples, an endometrial (pre) cancer was found in 7% (range 0-18%). In women with

  8. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies in part in the eyes of the beholder, nevertheless a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, Mk; k=>(1,…,K>), and help identify which model is most supported by the observed data, Y>˜=>(y˜1,…,y˜n>). Here, we introduce a new and robust estimator of the model evidence, p>(Y>˜|Mk>), which acts as normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p>(Y>˜|Mk>) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p>(Y>˜|Mk>) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and simplifies considerably scientific inquiry through hypothesis testing and model selection.

  9. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of this paper was to present the usefulness of the binomial distribution in the study of contingency tables and the problems of approximating the binomial distribution by the normal distribution (its limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in terms of contingency table units, based on their mathematical expressions, reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting from the computed confidence interval for a specified method (information such as the confidence interval boundaries, the percentage of experimental errors, the standard deviation of the experimental errors and the deviation relative to the significance level) was solved through the implementation of original algorithms in the PHP programming language. Expressions that contain two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of expressions of two binomial variables, in which the variation domain of one variable depends on the other variable, was a real problem because most software uses interpolation in graphical representation and the surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent the triangular surface plots graphically. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial distribution sample sizes and variables.
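
    The paper's PHP implementation is not reproduced here; as a minimal illustration of the kind of computation involved, the sketch below contrasts the normal-approximation (Wald) interval with the Wilson score interval for a single binomial proportion, where the approximation to normality is weakest near the boundaries.

```python
import math

def wald_interval(successes, n, z=1.96):
    """Normal-approximation interval; misbehaves for proportions near 0 or 1."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval; stays sensible near the boundaries."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

print(wald_interval(1, 30))     # lower limit clipped at 0
print(wilson_interval(1, 30))
```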

  10. Just add water: Accuracy of analysis of diluted human milk samples using mid-infrared spectroscopy.

    Science.gov (United States)

    Smith, R W; Adamkin, D H; Farris, A; Radmacher, P G

    2017-01-01

    To determine the maximum dilution of human milk (HM) that yields reliable results for protein, fat and lactose when analyzed by mid-infrared spectroscopy. De-identified samples of frozen HM were obtained. Milk was thawed and warmed (40°C) prior to analysis. Undiluted (native) HM was analyzed by mid-infrared spectroscopy for macronutrient composition: total protein (P), fat (F), carbohydrate (C); Energy (E) was calculated from the macronutrient results. Subsequent analyses were done with 1 : 2, 1 : 3, 1 : 5 and 1 : 10 dilutions of each sample with distilled water. Additional samples were sent to a certified lab for external validation. Quantitatively, F and P showed statistically significant but clinically non-critical differences in 1 : 2 and 1 : 3 dilutions. Differences at higher dilutions were statistically significant and deviated from native values enough to render those dilutions unreliable. External validation studies also showed statistically significant but clinically unimportant differences at 1 : 2 and 1 : 3 dilutions. The Calais Human Milk Analyzer can be used with HM samples diluted 1 : 2 and 1 : 3 and return results within 5% of values from undiluted HM. At a 1 : 5 or 1 : 10 dilution, however, results vary as much as 10%, especially with P and F. At the 1 : 2 and 1 : 3 dilutions these differences appear to be insignificant in the context of nutritional management. However, the accuracy and reliability of the 1 : 5 and 1 : 10 dilutions are questionable.

  11. Improving Statistics Education through Simulations: The Case of the Sampling Distribution.

    Science.gov (United States)

    Earley, Mark A.

    This paper presents a summary of action research investigating statistics students' understandings of the sampling distribution of the mean. With four sections of an introductory Statistics in Education course (n=98 students), a computer simulation activity (R. delMas, J. Garfield, and B. Chance, 1999) was implemented and evaluated to show…

  12. Use of spatially distributed time-integrated sediment sampling networks and distributed fine sediment modelling to inform catchment management.

    Science.gov (United States)

    Perks, M T; Warburton, J; Bracken, L J; Reaney, S M; Emery, S B; Hirst, S

    2017-11-01

    Under the EU Water Framework Directive, suspended sediment is omitted from environmental quality standards and compliance targets. This omission is partly explained by difficulties in assessing the complex dose-response of ecological communities. But equally, it is hindered by a lack of spatially distributed estimates of suspended sediment variability across catchments. In this paper, we demonstrate the inability of traditional, discrete sampling campaigns for assessing exposure to fine sediment. Sampling frequencies based on Environmental Quality Standard protocols, whilst reflecting typical manual sampling constraints, are unable to determine the magnitude of sediment exposure with an acceptable level of precision. Deviations from actual concentrations range between -35 and +20% based on the interquartile range of simulations. As an alternative, we assess the value of low-cost, suspended sediment sampling networks for quantifying suspended sediment transfer (SST). In this study of the 362 km² upland Esk catchment we observe that spatial patterns of sediment flux are consistent over the two year monitoring period across a network of 17 monitoring sites. This enables the key contributing sub-catchments of Butter Beck (SST: 1141 t km⁻² yr⁻¹) and Glaisdale Beck (SST: 841 t km⁻² yr⁻¹) to be identified. The time-integrated samplers offer a feasible alternative to traditional infrequent and discrete sampling approaches for assessing spatio-temporal changes in contamination. In conjunction with a spatially distributed diffuse pollution model (SCIMAP), time-integrated sediment sampling is an effective means of identifying critical sediment source areas in the catchment, which can better inform sediment management strategies for pollution prevention and control. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    Directory of Open Access Journals (Sweden)

    Francisco J Valverde-Albacete

    Full Text Available The most widely used measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing the classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis in which every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers.

  14. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

    Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.

    2001-01-01

    Growth characteristics of silica particles have been studied experimentally using in situ particle sampling technique from H 2 /O 2 /Tetraethylorthosilicate (TEOS) diffusion flame with carefully devised sampling probe. The particle morphology and the size comparisons are made between the particles sampled by the local thermophoretic method from the inside of the flame and by the electrostatic collector sampling method after the dilution sampling probe. The Transmission Electron Microscope (TEM) image processed data of these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurement. TEM image analysis of two sampling methods showed a good agreement with SMPS measurement. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution characteristics and morphology are mostly governed by the coagulation process and sintering process in the flame. As the flame temperature increases, the effect of coalescence or sintering becomes an important particle growth mechanism which reduces the coagulation process. However, if the flame temperature is not high enough to sinter the aggregated particles then the coagulation process is a dominant particle growth mechanism. In a certain flame condition a secondary particle formation is observed which results in a bimodal particle size distribution

  15. [Monitoring microbiological safety of small systems of water distribution. Comparison of two sampling programs in a town in central Italy].

    Science.gov (United States)

    Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A

    2005-01-01

    To determine the frequency of sampling in small water distribution systems (distribution. We carried out two sampling programs to monitor the water distribution system in a town in Central Italy between July and September 1992; the Poisson distribution assumption implied 4 water samples, the assumption of negative binomial distribution implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (sd= 5.29) for 21 samples and 3 coliforms/100 ml (sd= 6) for four samples. However the hypothesis of homogeneity was rejected (p-value samples (beta= 0.24) than with 21 (beta= 0.05). For this small network, determining the samples' size according to heterogeneity hypothesis strengthens the statement that water is drinkable compared with homogeneity assumption.

  16. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…

  17. Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach

    Science.gov (United States)

    Xiao, T.

    2012-12-01

    One of the most important components in urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design simple schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sampling size is crucial in implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.

  18. Probability Distribution and Deviation Information Fusion Driven Support Vector Regression Model and Its Application

    Directory of Open Access Journals (Sweden)

    Changhao Fan

    2017-01-01

    Full Text Available In modeling, only the information from the deviation between the output of the support vector regression (SVR) model and the training sample is usually considered, whereas other prior information about the training sample, such as its probability distribution, is ignored. Probability distribution information describes the overall distribution of the sample data in a training sample that contains different degrees of noise and potential outliers, and it helps in developing a high-accuracy model. To mine and use the probability distribution information of a training sample, a new support vector regression model that incorporates probability distribution information, the weighted SVR (PDISVR), is proposed. In the PDISVR model, the probability distribution of each sample is considered as its weight and is then introduced into the error coefficient and slack variables of the SVR. Thus, both the deviation and the probability distribution information of the training sample are used in the PDISVR model to eliminate the influence of noise and outliers in the training sample and to improve predictive performance. Furthermore, examples with different degrees of noise were employed to demonstrate the performance of PDISVR, which was then compared with that of three SVR-based methods. The results showed that PDISVR performs better than the three other methods.
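
    PDISVR modifies the SVR error coefficient and slack variables internally, which off-the-shelf libraries do not expose; a rough analogue with standard tools is to weight each training point by an estimate of its probability density so that outliers contribute less. The sketch below does this with scikit-learn's sample_weight; the data, bandwidth and weighting rule are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(6)

# Noisy training data with a few gross outliers.
X = np.linspace(0, 4 * np.pi, 200)[:, None]
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)
outliers = rng.choice(200, size=10, replace=False)
y[outliers] += rng.normal(0, 2.0, 10)

# Weight each sample by the estimated density of (x, y): outliers get low weight.
xy = np.column_stack([X.ravel(), y])
weights = np.exp(KernelDensity(bandwidth=0.5).fit(xy).score_samples(xy))
weights /= weights.max()

plain = SVR(C=10.0).fit(X, y)
weighted = SVR(C=10.0).fit(X, y, sample_weight=weights)

grid = np.linspace(0, 4 * np.pi, 400)[:, None]
truth = np.sin(grid).ravel()
for name, model in [("plain", plain), ("density-weighted", weighted)]:
    rmse = np.sqrt(np.mean((model.predict(grid) - truth) ** 2))
    print(name, round(rmse, 3))
```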

  19. Respiratory motion sampling in 4DCT reconstruction for radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chi Yuwei; Liang Jian; Qin Xu; Yan Di [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States); Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, Michigan 48073 (United States)

    2012-04-15

    Purpose: Phase-based and amplitude-based sorting techniques are commonly used in four-dimensional CT (4DCT) reconstruction. However, the effect of these sorting techniques on 4D dose calculation has not been explored. In this study, the authors investigated a candidate 4DCT sorting technique by comparing its 4D dose calculation accuracy with that of phase-based and amplitude-based sorting techniques. Method: An optimization model was formed using the organ motion probability density function (PDF) in the 4D dose convolution. The objective function for optimization was defined as the maximum difference between the expected 4D dose in the organ of interest and the 4D dose calculated using a 4DCT sorted by a candidate sampling method. The sorting samples, as optimization variables, were selected on the respiratory motion PDF assessed during the CT scanning. Breathing curves obtained from patients' 4DCT scanning, as well as 3D dose distributions from treatment planning, were used in the study. Given the objective function, a residual error analysis was performed, and k-means clustering was found to be an effective sampling scheme for improving the 4D dose calculation accuracy and to be independent of the patient-specific dose distribution. Results: Patient data analysis demonstrated that the k-means sampling was superior to the conventional phase-based and amplitude-based sorting and comparable to the optimal sampling results. For phase-based sorting, the residual error in 4D dose calculations may not be further reduced to an acceptable accuracy after a certain number of phases, while for amplitude-based sorting, k-means sampling, and the optimal sampling, the residual error in 4D dose calculations decreased rapidly as the number of 4DCT phases increased to 6. Conclusion: An innovative phase sorting method (k-means method) is presented in this study. The method is dependent only on the tumor motion PDF. It could provide a way to refine the phase sorting in 4DCT reconstruction and is effective
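
    A minimal sketch of the idea of PDF-driven phase selection: cluster a recorded breathing-amplitude trace with k-means and use the cluster centres as the sampled respiratory states, instead of equal-width amplitude bins. The trace is synthetic and the number of states is arbitrary; the full method additionally minimizes the residual 4D dose error, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Synthetic breathing trace: quasi-periodic amplitude with slow drift and noise.
t = np.arange(0, 120, 0.1)                       # 2 minutes sampled at 10 Hz
amp = np.cos(2 * np.pi * t / 4.0) ** 2 + 0.1 * np.sin(2 * np.pi * t / 37.0) \
      + rng.normal(0, 0.03, t.size)

# Equal-width amplitude bins vs k-means cluster centres as the sampled states.
n_states = 6
edges = np.linspace(amp.min(), amp.max(), n_states + 1)
equal_bin_centres = 0.5 * (edges[:-1] + edges[1:])

km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(amp[:, None])
kmeans_centres = np.sort(km.cluster_centers_.ravel())

# k-means places more states where the trace spends more time (end-exhale plateau),
# which is what reduces the residual error in the PDF-weighted dose sum.
print(np.round(equal_bin_centres, 3))
print(np.round(kmeans_centres, 3))
```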

  20. Characterization of spatial distribution of Tetranychus urticae in peppermint in California and implication for improving sampling plan.

    Science.gov (United States)

    Rijal, Jhalendra P; Wilson, Rob; Godfrey, Larry D

    2016-02-01

    Twospotted spider mite, Tetranychus urticae Koch, is an important pest of peppermint in California, USA. Spider mite feeding on peppermint leaves causes physiological changes in the plant, which, coupled with favorable environmental conditions, can lead to increased mite infestations. Significant yield loss can occur in the absence of pest monitoring and timely management. Understanding the within-field spatial distribution of T. urticae is critical for the development of a reliable sampling plan. The study reported here aims to characterize the spatial distribution of mite infestation in four commercial peppermint fields in northern California using two spatial techniques, the variogram and Spatial Analysis by Distance IndicEs (SADIE). Variogram analysis revealed strong evidence for a spatially dependent (aggregated) mite population in 13 of 17 sampling dates, and the physical distance of the aggregation reached a maximum of 7 m in peppermint fields. Using SADIE, 11 of 17 sampling dates showed an aggregated distribution pattern of mite infestation. Combining results from the variogram and SADIE analyses, spatial aggregation of T. urticae was evident in all four fields for all 17 sampling dates evaluated. Comparing spatial association using SADIE, ca. 62% of the total sampling pairs showed a positive association of mite spatial distribution patterns between two consecutive sampling dates, which indicates strong spatial and temporal stability of mite infestation in peppermint fields. These results are discussed in relation to the behavior of spider mite distribution within fields and the implications for improving sampling guidelines that are essential for effective pest monitoring and management.

  1. Evaluating sample allocation and effort in detecting population differentiation for discrete and continuously distributed individuals

    Science.gov (United States)

    Erin L. Landguth; Michael K. Schwartz

    2014-01-01

    One of the most pressing issues in spatial genetics concerns sampling. Traditionally, substructure and gene flow are estimated for individuals sampled within discrete populations. Because many species may be continuously distributed across a landscape without discrete boundaries, understanding sampling issues becomes paramount. Given large-scale, geographically broad...

  2. Feasibility and accuracy evaluation of three human papillomavirus assays for FTA card-based sampling: a pilot study in cervical cancer screening

    International Nuclear Information System (INIS)

    Wang, Shao-Ming; Hu, Shang-Ying; Chen, Wen; Chen, Feng; Zhao, Fang-Hui; He, Wei; Ma, Xin-Ming; Zhang, Yu-Qing; Wang, Jian; Sivasubramaniam, Priya; Qiao, You-Lin

    2015-01-01

    Liquid-state specimen carriers are inadequate for sample transportation in large-scale screening projects in low-resource settings, which necessitates the exploration of novel non-hazardous solid-state alternatives. Studies investigating the feasibility and accuracy of a solid-state human papillomavirus (HPV) sampling medium in combination with different down-stream HPV DNA assays for cervical cancer screening are needed. We collected two cervical specimens from 396 women, aged 25–65 years, who were enrolled in a cervical cancer screening trial. One sample was stored using DCM preservative solution and the other was applied to a Whatman Indicating FTA Elute® card (FTA card). All specimens were processed using three HPV testing methods, including Hybrid capture 2 (HC2), careHPV™, and Cobas®4800 tests. All the women underwent a rigorous colposcopic evaluation that included using a microbiopsy protocol. Compared to the liquid-based carrier, the FTA card demonstrated comparable sensitivity for detecting high grade Cervical Intraepithelial Neoplasia (CIN) using HC2 (91.7 %), careHPV™ (83.3 %), and Cobas®4800 (91.7 %) tests. Moreover, the FTA card showed a higher specificity compared to a liquid-based carrier for HC2 (79.5 % vs. 71.6 %, P = 0.015), comparable specificity for careHPV™ (78.1 % vs. 73.0 %, P > 0.05), but lower specificity for the Cobas®4800 test (62.4 % vs. 69.9 %, P = 0.032). Generally, the FTA card-based sampling medium’s accuracy was comparable with that of liquid-based medium for the three HPV testing assays. FTA cards are a promising sample carrier for cervical cancer screening. With further optimization, it can be utilized for HPV testing in areas of varying economic development

  3. Accuracy of Short Forms of the Dutch Wechsler Preschool and Primary Scale of Intelligence: Third Edition.

    Science.gov (United States)

    Hurks, Petra; Hendriksen, Jos; Dek, Joelle; Kooij, Andress

    2016-04-01

    This article investigated the accuracy of six short forms of the Dutch Wechsler Preschool and Primary Scale of Intelligence-Third edition (WPPSI-III-NL) in estimating intelligence quotient (IQ) scores in healthy children aged 4 to 7 years (N = 1,037). Overall accuracy for each short form was studied by comparing IQ equivalences based on the short forms with the original WPPSI-III-NL Full Scale IQ (FSIQ) scores. Next, our sample was divided into three groups: children performing below average, average, or above average, based on the WPPSI-III-NL FSIQ estimates of the original long form, to study the accuracy of the WPPSI-III-NL short forms at the tails of the FSIQ distribution. When the entire sample was studied, all IQ estimates of the WPPSI-III-NL short forms correlated highly with the FSIQ estimates of the original long form (all rs ≥ .83). Correlations decreased significantly when only the tails of the IQ distribution were studied (rs varied between .55 and .83). Furthermore, IQ estimates of the short forms deviated significantly from the FSIQ score of the original long form when the IQ estimates were based on short forms containing only two subtests. In contrast, unlike the short forms that contained two to four subtests, the Wechsler Abbreviated Scale of Intelligence short form (containing the subtests Vocabulary, Similarities, Block Design, and Matrix Reasoning) and the General Ability Index short form (containing the subtests Vocabulary, Similarities, Comprehension, Block Design, Matrix Reasoning, and Picture Concepts) produced less variation compared with the original FSIQ score. © The Author(s) 2015.

  4. Accelerated Enveloping Distribution Sampling: Enabling Sampling of Multiple End States while Preserving Local Energy Minima.

    Science.gov (United States)

    Perthold, Jan Walther; Oostenbrink, Chris

    2018-05-17

    Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
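
    For reference, the single-reference-state construction used in standard EDS, as I understand it from the EDS literature (the accelerated variant proposed in this paper modifies this construction), combines the N end-state Hamiltonians H_i with a smoothness parameter s and energy offsets E_i^R, and recovers free-energy differences by reweighting:

\[
H_{\mathrm{ref}}(\mathbf r) = -\frac{1}{\beta s}\,\ln \sum_{i=1}^{N} \exp\!\left[-\beta s\,\bigl(H_i(\mathbf r) - E_i^{\mathrm R}\bigr)\right],
\qquad
\Delta F_{BA} = -\frac{1}{\beta}\,\ln
\frac{\left\langle e^{-\beta\,(H_B - H_{\mathrm{ref}})}\right\rangle_{\mathrm{ref}}}
     {\left\langle e^{-\beta\,(H_A - H_{\mathrm{ref}})}\right\rangle_{\mathrm{ref}}}.
\]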

  5. ACCURACY ASSESSMENT OF COASTAL TOPOGRAPHY DERIVED FROM UAV IMAGES

    Directory of Open Access Journals (Sweden)

    N. Long

    2016-06-01

    Full Text Available To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with a similar accuracy. To evaluate the DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy is estimated by comparison with GNSS surveys. Two parameters are tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of the spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with a high vertical accuracy (< 10 cm). The georeferencing of the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to be improved when the spatial resolution of the images is decreased.

  6. Distributed Wireless Data Acquisition System with Synchronized Data Flow

    CERN Document Server

    Astakhova, N V; Dikoussar, N D; Eremin, G I; Gerasimov, A V; Ivanov, A I; Kryukov, Yu S; Mazny, N G; Ryabchun, O V; Salamatin, I M

    2006-01-01

    New methods are devised to provide succession of computer codes under changes in the class of problems and to integrate the drivers of special-purpose devices into the application. The scheme and methods worked out for constructing automation systems are used to develop a distributed wireless system intended for registration of the characteristics of pulse processes with a synchronized data flow transmitted over a radio channel. The equipment, with a sampling frequency of 20 kHz, allowed us to achieve a synchronization accuracy of up to ±50 μs. Modification of part of the equipment (sampling frequency) permits the accuracy to be improved to 0.1 μs. The obtained results can be applied to develop systems for monitoring various objects, as well as automation systems for experiments and automated process control systems.

  7. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  8. Diagnostic accuracy of detection and quantification of HBV-DNA and HCV-RNA using dried blood spot (DBS) samples - a systematic review and meta-analysis.

    Science.gov (United States)

    Lange, Berit; Roberts, Teri; Cohn, Jennifer; Greenman, Jamie; Camp, Johannes; Ishizaki, Azumi; Messac, Luke; Tuaillon, Edouard; van de Perre, Philippe; Pichler, Christine; Denkinger, Claudia M; Easterbrook, Philippa

    2017-11-01

    The detection and quantification of hepatitis B (HBV) DNA and hepatitis C (HCV) RNA in whole blood collected on dried blood spots (DBS) may facilitate access to diagnosis and treatment of HBV and HCV infection in resource-poor settings. We evaluated the diagnostic performance of DBS compared to venous blood samples for the detection and quantification of HBV-DNA and HCV-RNA in two systematic reviews and meta-analyses. We searched MEDLINE, Embase, Global Health, Web of Science, LILAC and the Cochrane library for studies that assessed diagnostic accuracy with DBS. Heterogeneity was assessed and, where appropriate, pooled estimates of sensitivity and specificity were generated using bivariate analyses with maximum likelihood estimates and 95% confidence intervals. We also conducted a narrative review of the impact of varying storage conditions or different cut-offs for detection from studies that examined these in a subset of samples. The QUADAS-2 tool was used to assess risk of bias. In the quantitative synthesis for the diagnostic accuracy of HBV-DNA using DBS, 521 citations were identified and 12 studies met the inclusion criteria. The overall quality of the studies was rated as low. The pooled estimates of sensitivity and specificity for HBV-DNA were 95% (95% CI: 83-99) and 99% (95% CI: 53-100), respectively. Of the two studies that reported on cut-offs and limit of detection (LoD), one reported a sensitivity of 98% for a cut-off of ≥2000 IU/ml and the other reported a LoD of 914 IU/ml using a commercial assay. Varying storage conditions for individual samples did not result in a significant variation of results. In the synthesis for the diagnostic accuracy of HCV-RNA using DBS, 15 studies met the inclusion criteria, including six studies in addition to those in a previously published review. The pooled sensitivity and specificity were 98% (95% CI: 95-99) and 98% (95% CI: 95-99.0), respectively

  9. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    Science.gov (United States)

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
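
    The original report tabulates the null distribution of the determinant directly; a quick way to reproduce the flavour of the experiment is to estimate, by Monte Carlo, the type I error rate of Bartlett's chi-square approximation to the same determinant-based sphericity statistic under a spherical multivariate normal population. The sample size, dimension and replicate count below are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

def bartlett_sphericity_pvalue(x):
    """Bartlett's chi-square approximation for testing that the population correlation
    matrix is the identity, based on the determinant of the sample correlation matrix."""
    n, p = x.shape
    r = np.corrcoef(x, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(r))
    df = p * (p - 1) / 2.0
    return stats.chi2.sf(chi2, df)

# Monte Carlo type I error rate under a spherical (identity-covariance) normal population.
n, p, alpha, reps = 50, 5, 0.05, 5_000
rejections = sum(bartlett_sphericity_pvalue(rng.normal(size=(n, p))) < alpha
                 for _ in range(reps))
print("empirical type I error:", rejections / reps)   # should be close to 0.05
```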

  10. The effects of spatial sampling choices on MR temperature measurements.

    Science.gov (United States)

    Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L

    2011-02-01

    The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle) with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative to the temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm³ resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm³ resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm³ resolution for the 4-mm circle heating case. Copyright © 2010 Wiley-Liss, Inc.

  11. Multiobjective Sampling Design for Calibration of Water Distribution Network Model Using Genetic Algorithm and Neural Network

    Directory of Open Access Journals (Sweden)

    Kourosh Behzadian

    2008-03-01

    Full Text Available In this paper, a novel multiobjective optimization model is presented for selecting optimal locations in a water distribution network (WDN) at which to install pressure loggers. The pressure data collected at the optimal locations will later be used in the calibration of the proposed WDN model. The objective functions consist of maximization of the calibrated model prediction accuracy and minimization of the total cost of the sampling design. In order to decrease the model run time, an optimization model has been developed using a multiobjective genetic algorithm and an adaptive neural network (MOGA-ANN). Neural networks (NNs) are initially trained after a number of initial GA generations and are periodically retrained and updated after generation of a specified number of full model-analyzed solutions. The trained NNs then replace the full fitness evaluation for some chromosomes as the GA progresses. A cache prevents repeated objective function evaluations of identical chromosomes within the GA. Optimal solutions are obtained as a Pareto-optimal front with respect to the two objective functions. Results show that incorporating NNs into the MOGA to approximate portions of the chromosomes' fitness in each generation leads to considerable savings in model run time and is promising for reducing run time in optimization models with significant computational effort.

  12. Sampling frequency of ciliated protozoan microfauna for seasonal distribution research in marine ecosystems.

    Science.gov (United States)

    Xu, Henglong; Yong, Jiang; Xu, Guangjian

    2015-12-30

    Sampling frequency is important to obtain sufficient information for temporal research of microfauna. To determine an optimal strategy for exploring the seasonal variation in ciliated protozoa, a dataset from the Yellow Sea, northern China was studied. Samples were collected with 24 (biweekly), 12 (monthly), 8 (bimonthly per season) and 4 (seasonally) sampling events. Compared to the 24 samplings (100%), the 12-, 8- and 4-samplings recovered 94%, 94%, and 78% of the total species, respectively. To reveal the seasonal distribution, the 8-sampling regime may result in >75% information of the seasonal variance, while the traditional 4-sampling may only explain sampling frequency, the biotic data showed stronger correlations with seasonal variables (e.g., temperature, salinity) in combination with nutrients. It is suggested that the 8-sampling events per year may be an optimal sampling strategy for ciliated protozoan seasonal research in marine ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Basic distribution free identification tests for small size samples of environmental data

    International Nuclear Information System (INIS)

    Federico, A.G.; Musmeci, F.

    1998-01-01

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain a small number of data points, and the assumption of normal distributions is often unrealistic. On the other hand, the diffusion of today's powerful personal computers opens new opportunities based on a massive use of CPU resources. The paper reviews the problem and introduces two feasible non-parametric approaches based on intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data.

  14. Bayesian view of single-qubit clocks, and an energy versus accuracy tradeoff

    Science.gov (United States)

    Gopalkrishnan, Manoj; Kandula, Varshith; Sriram, Praveen; Deshpande, Abhishek; Muralidharan, Bhaskaran

    2017-09-01

    We bring a Bayesian approach to the analysis of clocks. Using exponential distributions as priors for clocks, we analyze how well one can keep time with a single qubit freely precessing under a magnetic field. We find that, at least with a single qubit, quantum mechanics does not allow exact timekeeping, in contrast to classical mechanics, which does. We find the design of the single-qubit clock that leads to maximum accuracy. Further, we find an energy versus accuracy tradeoff—the energy cost is at least kBT times the improvement in accuracy as measured by the entropy reduction in going from the prior distribution to the posterior distribution. We propose a physical realization of the single-qubit clock using charge transport across a capacitively coupled quantum dot.

  15. Local indicators of geocoding accuracy (LIGA): theory and application

    Directory of Open Access Journals (Sweden)

    Jacquez Geoffrey M

    2009-10-01

    Full Text Available Abstract Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate the sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density

  16. A Proposal of New Spherical Particle Modeling Method Based on Stochastic Sampling of Particle Locations in Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Kim, Do Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jea Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to its high computational efficiency and user convenience, the implicit method has received attention; however, the implicit methods in previous studies have low accuracy at high packing fractions. In this study, a new implicit method, which can be used at any packing fraction with high accuracy, is proposed. An implicit modeling method for spherical-particle-distributed media in MC simulation is developed, and a new concept for spherical particle sampling is introduced to solve the problems of the previous implicit methods. The sampling method was verified by simulations in infinite and finite media. The results show that the implicit particle modeling with the proposed method was performed accurately over the whole range of packing fractions. It is expected that the proposed method can be efficiently utilized for spherical-particle-distributed media such as fusion reactor blankets, VHTR reactors, and shielding analyses.

  17. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    Science.gov (United States)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
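
    The contrast between the two sampling models can be made concrete with a small numerical sketch. The following snippet (illustrative only; the parameters are toy values and the calculation is not the paper's finite-key security proof) compares an upper bound on the total error count obtained from the hypergeometric model of simple random sampling with an upper confidence bound on the error rate obtained from the binomial model that underlies Bernoulli sampling.

```python
# Hedged illustration (not the paper's security analysis): compare statistical-
# fluctuation bounds from simple random sampling (hypergeometric model) and from
# Bernoulli sampling (binomial model) when estimating an error rate from a sample.
import numpy as np
from scipy.stats import hypergeom, binom

def upper_bound_hypergeom(N, n, k_obs, eps):
    """Smallest total error count K among N positions that is ruled out with
    probability > 1 - eps, given k_obs errors seen in a simple random sample of n."""
    for K in range(k_obs, N + 1):
        # P[observe <= k_obs errors | K errors in total]
        if hypergeom.cdf(k_obs, N, K, n) < eps:
            return K  # first K that is implausibly large
    return N

def upper_bound_binomial(n, k_obs, eps):
    """Upper confidence bound on the per-signal error probability p when each
    signal is sampled independently (Bernoulli/binomial model)."""
    lo, hi = k_obs / n, 1.0
    for _ in range(60):  # bisection on p
        p = 0.5 * (lo + hi)
        if binom.cdf(k_obs, n, p) < eps:
            hi = p
        else:
            lo = p
    return hi

N, n, k_obs, eps = 10_000, 1_000, 12, 1e-10   # toy numbers, not from the paper
print("hypergeometric bound on total errors:", upper_bound_hypergeom(N, n, k_obs, eps))
print("binomial bound on error rate:", upper_bound_binomial(n, k_obs, eps))
```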

  18. Spatial distribution and sequential sampling plans for Tuta absoluta (Lepidoptera: Gelechiidae) in greenhouse tomato crops.

    Science.gov (United States)

    Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino

    2015-09-01

    The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample for evaluating pest density. Action thresholds of 36 and 48%, 43 and 56%, and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits, respectively. Green's method was the more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with acceptable probabilities of error. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies, respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits, respectively. © 2014 Society of Chemical Industry.

  19. The measurement of radioactive microspheres in biological samples

    International Nuclear Information System (INIS)

    Mernagh, J.R.; Spiers, E.W.; Adiseshiah, M.

    1976-01-01

    Measurements of the distribution of radioactive microspheres are used in investigations of regional coronary blood flow, but the size and shape of the heart varies for different test animals, and the organ is frequently divided into smaller pieces for studies of regional perfusion. Errors are introduced by variations in the distribution of the radioactive source and the amount of Compton scatter in different samples. A technique has therefore been developed to allow the counting of these tissue samples in their original form, and correction factors have been derived to inter-relate the various counting geometries thus encountered. Dogs were injected with microspheres labelled with 141Ce, 51Cr or 85Sr. The tissue samples did not require remodelling to fit a standard container, and allowance was made for the inhomogeneous distribution in the blood samples. The activities in the centrifuged blood samples were correlated with those from the tissue samples by a calibration procedure involving comparisons of the counts from samples of microspheres embedded in sachets of gelatine, and similar samples mixed with blood and then centrifuged. The calibration data have indicated that 51Cr behaves anomalously, and its use as a label for microspheres may introduce unwarranted errors. A plane cylindrical 10 x 20 cm NaI detector was used, and a 'worst case' correction of 20% was found to be necessary for geometry effects. The accuracy of this method of correlating different geometries was tested by remodelling the same tissue sample into different sizes and comparing the results, and the validity of the technique was supported by agreement of the final results with previously published data. (U.K.)

  20. Assessing protein conformational sampling methods based on bivariate lag-distributions of backbone angles

    KAUST Repository

    Maadooliat, Mehdi; Gao, Xin; Huang, Jianhua Z.

    2012-01-01

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.

  2. Evaluation of precision and accuracy of neutron activation analysis method of environmental samples analysis

    International Nuclear Information System (INIS)

    Wardani, Sri; Rina M, Th.; L, Dyah

    2000-01-01

    The precision and accuracy of the neutron activation analysis (NAA) method used at P2TRR were evaluated by analyzing standard reference samples from the National Institute for Environmental Studies of Japan (NIES CRM No. 10, rice flour) and the National Bureau of Standards, USA (NBS SRM 1573a, tomato leaves). The qualitative NAA of these environmental reference materials identified multiple elements, namely: Br, Ca, Co, Cl, Cs, Gd, I, K, La, Mg, Mn, Na, Pa, Sb, Sm, Sr, Ta, Th, and Zn (19 elements) for SRM 1573a; As, Br, Cr, Cl, Ce, Co, Cs, Fe, Ga, Hg, K, Mn, Mg, Mo, Na, Ni, Pb, Rb, Sr, Se, Sc, Sb, Ti, and Zn (25 elements) for CRM No. 10a; Ag, As, Br, Cr, Cl, Ce, Cd, Co, Cs, Eu, Fe, Ga, Hg, K, Mg, Mn, Mo, Na, Nb, Pb, Rb, Sb, Sc, Th, Tl, and Zn (26 elements) for CRM No. 10b; and As, Br, Co, Cl, Ce, Cd, Ga, Hg, K, Mn, Mg, Mo, Na, Nb, Pb, Rb, Sb, Se, Tl, and Zn (20 elements) for CRM No. 10c. In the quantitative analysis, only some of the elements could be determined, namely: As, Co, Cd, Mo, Mn, and Zn. Compared with the NIES or NBS values, the results agreed within deviations of 3% to 15%. Overall, the results show that the method and facilities perform well, but the irradiation facility and the gamma-ray spectrometry software need further development and more systematic investigation.

  3. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    Science.gov (United States)

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires the presence of species in a sample to be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, and at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and the comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between the information gained, the accuracy of the estimates and the cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
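
    As a rough illustration of how incidence records can be folded into a count-based likelihood, the sketch below writes down a Poisson log-normal log-likelihood in which fully counted species contribute their count probabilities and presence-only species contribute the probability of being detected at all. This is a simplified stand-in for the authors' modified likelihood (it ignores, for example, the zero-truncation of unobserved species), and all numbers are made up.

```python
# Hedged sketch (not the authors' code): a Poisson log-normal species-abundance
# log-likelihood in which some species enter as counts and others only as
# presences (incidence). Integrals over the latent log-normal abundance are
# evaluated by Gauss-Hermite quadrature.
import numpy as np
from scipy.special import gammaln

NODES, WEIGHTS = np.polynomial.hermite_e.hermegauss(60)  # probabilists' Hermite rule

def pln_pmf(k, mu, sigma):
    """P(N = k) when N | lambda ~ Poisson(lambda) and log(lambda) ~ Normal(mu, sigma^2)."""
    lam = np.exp(mu + sigma * NODES)                      # quadrature points for lambda
    logp = k * np.log(lam) - lam - gammaln(k + 1)         # Poisson log-pmf at each node
    return np.sum(WEIGHTS * np.exp(logp)) / np.sqrt(2 * np.pi)

def log_lik(params, counts, n_incidence_only):
    """counts: observed abundances of fully counted species;
    n_incidence_only: species recorded as present but not counted."""
    mu, sigma = params
    p0 = pln_pmf(0, mu, sigma)                            # probability a species yields zero individuals
    ll = sum(np.log(pln_pmf(k, mu, sigma)) for k in counts)
    ll += n_incidence_only * np.log(1.0 - p0)             # presence-only contribution
    return ll

# toy usage with made-up data
counts = np.array([1, 1, 2, 3, 5, 8, 13, 40])
print(log_lik((0.5, 1.2), counts, n_incidence_only=6))
```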

  4. Fluorescence imaging of ion distributions in an inductively coupled plasma with laser ablation sample introduction

    International Nuclear Information System (INIS)

    Moses, Lance M.; Ellis, Wade C.; Jones, Derick D.; Farnsworth, Paul B.

    2015-01-01

    High-resolution images of the spatial distributions of Sc II, Ca II, and Ba II ion densities in the 10 mm upstream from the sampling cone in a laser ablation-inductively coupled plasma-mass spectrometer (LA-ICP-MS) were obtained using planar laser induced fluorescence. Images were obtained for each analyte as a function of the carrier gas flow rate with laser ablation (LA) sample introduction and compared to images with solution nebulization (SN) over the same range of flow rates. Additionally, images were obtained using LA at varying fluences and with varying amounts of helium added to a constant flow of argon gas. Ion profiles in SN images followed a pattern consistent with previous work: increasing gas flow caused a downstream shift in the ion profiles. When compared to SN, LA led to ion profiles that were much narrower radially and reached a maximum near the sampling cone at higher flow rates. Increasing the fluence led to ions formed in the ICP over greater axial and radial distances. The addition of He to the carrier gas prior to the ablation cell led to an upstream shift in the position of ionization and lower overall fluorescence intensities. - Highlights: • We map distributions of analytes in the ICP using laser ablation sample introduction. • We compare images from laser ablation with those from a pneumatic nebulizer. • We document the effects of water added to the laser ablation aerosol. • We compare distributions from a metal to those from crystalline solids. • We document the effect of laser fluence on ion distributions

  5. Effect of non-Poisson samples on turbulence spectra from laser velocimetry

    Science.gov (United States)

    Sree, Dave; Kjelgaard, Scott O.; Sellers, William L., III

    1994-01-01

    Spectral analysis of laser velocimetry (LV) data plays an important role in characterizing a turbulent flow and in estimating the associated turbulence scales, which can be helpful in validating theoretical and numerical turbulence models. The determination of turbulence scales is critically dependent on the accuracy of the spectral estimates. Spectral estimations from 'individual realization' laser velocimetry data are typically based on the assumption of a Poisson sampling process. What this Note has demonstrated is that the sampling distribution must be considered before spectral estimates are used to infer turbulence scales.

  6. Symbol synchronization and sampling frequency synchronization techniques in real-time DDO-OFDM systems

    Science.gov (United States)

    Chen, Ming; He, Jing; Cao, Zizheng; Tang, Jin; Chen, Lin; Wu, Xian

    2014-09-01

    In this paper, we propose and experimentally demonstrate symbol synchronization and sampling frequency synchronization techniques in a real-time direct-detection optical orthogonal frequency division multiplexing (DDO-OFDM) system, over 100-km standard single mode fiber (SSMF), using a cost-effective directly modulated distributed feedback (DFB) laser. The experimental results show that the proposed symbol synchronization based on a training sequence (TS) has low complexity and high accuracy even at a sampling frequency offset (SFO) of 5000 ppm. Meanwhile, the proposed pilot-assisted sampling frequency synchronization between the digital-to-analog converter (DAC) and the analog-to-digital converter (ADC) is capable of estimating SFOs with high accuracy. The technique can also compensate for SFO effects, leaving only a small residual SFO caused by errors in the SFO estimate and by a low-precision or unstable clock source. The two synchronization techniques are suitable for high-speed DDO-OFDM transmission systems.
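
    The training-sequence-based symbol synchronization can be pictured as a sliding correlation against a known preamble. The snippet below is a hedged toy sketch, not the authors' real-time implementation: the training sequence, noise levels and signal lengths are arbitrary.

```python
# Hedged sketch: locating the OFDM symbol start by cross-correlating the received
# samples with a known training sequence (TS) and picking the correlation peak.
import numpy as np

rng = np.random.default_rng(0)
ts = rng.choice([-1.0, 1.0], size=64)          # hypothetical known training sequence
payload = rng.normal(size=2000)
rx = np.concatenate([rng.normal(size=500), ts + 0.2 * rng.normal(size=64), payload])

# Sliding correlation of the received stream against the TS; the peak marks
# the estimated symbol boundary.
corr = np.abs(np.correlate(rx, ts, mode="valid"))
start = int(np.argmax(corr))
print("estimated TS start index:", start)       # expect ~500 for this toy signal
```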

  7. Evaluation of a new tear osmometer for repeatability and accuracy, using 0.5-microL (500-Nanoliter) samples.

    Science.gov (United States)

    Yildiz, Elvin H; Fan, Vincent C; Banday, Hina; Ramanathan, Lakshmi V; Bitra, Ratna K; Garry, Eileen; Asbell, Penny A

    2009-07-01

    To evaluate the repeatability and accuracy of a new tear osmometer that measures the osmolality of 0.5-microL (500-nanoliter) samples. Four standardized solutions were tested with 0.5-microL (500-nanoliter) samples for repeatability of measurements and comparability to the standardized technique. Two known standard salt solutions (290 mOsm/kg H2O, 304 mOsm/kg H2O), a normal artificial tear matrix sample (306 mOsm/kg H2O), and an abnormal artificial tear matrix sample (336 mOsm/kg H2O) were repeatedly tested (n = 20 each) for osmolality with use of the Advanced Instruments Model 3100 Tear Osmometer (0.5-microL [500-nanoliter] sample size) and the FDA-approved Advanced Instruments Model 3D2 Clinical Osmometer (250-microL sample size). Four standard solutions were used, with osmolality values of 290, 304, 306, and 336 mOsm/kg H2O. The respective precision data (mean and standard deviation) were: 291.8 +/- 4.4, 305.6 +/- 2.4, 305.1 +/- 2.3, and 336.4 +/- 2.2 mOsm/kg H2O. The percent recoveries for the 290 mOsm/kg H2O standard solution, the 304 mOsm/kg H2O reference solution, the normal value-assigned 306 mOsm/kg H2O sample, and the abnormal value-assigned 336 mOsm/kg H2O sample were 100.3%, 100.2%, 99.8%, and 100.3%, respectively. The repeatability data are in accordance with data obtained on clinical osmometers with use of larger sample sizes. All 4 samples tested on the tear osmometer have osmolality values that correlate well with the clinical instrument method. The tear osmometer is a suitable instrument for testing the osmolality of microliter-sized samples, such as tears, and therefore may be useful in diagnosing, monitoring, and classifying tear abnormalities such as the severity of dry eye disease.

  8. Attention failures versus misplaced diligence: separating attention lapses from speed-accuracy trade-offs.

    Science.gov (United States)

    Seli, Paul; Cheyne, James Allan; Smilek, Daniel

    2012-03-01

    In two studies of a GO-NOGO task assessing sustained attention, we examined the effects of (1) altering speed-accuracy trade-offs through instructions (emphasizing both speed and accuracy or accuracy only) and (2) auditory alerts distributed throughout the task. Instructions emphasizing accuracy reduced errors and changed the distribution of GO trial RTs. Additionally, correlations between errors and increasing RTs produced a U-function; excessively fast and slow RTs accounted for much of the variance of errors. Contrary to previous reports, alerts increased errors and RT variability. The results suggest that (1) standard instructions for sustained attention tasks, emphasizing speed and accuracy equally, produce errors arising from attempts to conform to the misleading requirement for speed, which become conflated with attention-lapse produced errors and (2) auditory alerts have complex, and sometimes deleterious, effects on attention. We argue that instructions emphasizing accuracy provide a more precise assessment of attention lapses in sustained attention tasks. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy.

    Directory of Open Access Journals (Sweden)

    Kieran A Monaghan

    Full Text Available Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices using the example of the BMWP, which has the best supporting data. Differences in body size between taxa from respective tolerance classes are a common feature of indicator systems; in some, they represent a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g. mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data

  10. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    Science.gov (United States)

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins depend entirely on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts, by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, bootstrap confidence interval methods, and bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies.

  11. English Verb Accuracy of Bilingual Cantonese-English Preschoolers

    Science.gov (United States)

    Rezzonico, Stefano; Goldberg, Ahuva; Milburn, Trelani; Belletti, Adriana; Girolametto, Luigi

    2017-01-01

    Purpose: Knowledge of verb development in typically developing bilingual preschoolers may inform clinicians about verb accuracy rates during the 1st 2 years of English instruction. This study aimed to investigate tensed verb accuracy in 2 assessment contexts in 4- and 5-year-old Cantonese-English bilingual preschoolers. Method: The sample included…

  12. Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.

    Science.gov (United States)

    Xiao, Dan; Balcom, Bruce J

    2012-07-01

    Spin-echo single point imaging has been employed for 1D T2 distribution mapping, but a simple extension to 2D is challenging since the time increase is n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T2 mapping in fluid-saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well defined intensity distributions in k-space that may be efficiently determined by new k-space sampling patterns that are developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T2-weighted images are fit to extract T2 distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high quality results. Copyright © 2012 Elsevier Inc. All rights reserved.
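
    The pixel-by-pixel fit mentioned above amounts to a discrete inverse Laplace transform of each decay curve. The following sketch (illustrative values, not the authors' processing chain) recovers a T2 amplitude spectrum from a synthetic multi-echo decay using Tikhonov-regularized non-negative least squares.

```python
# Hedged sketch: extracting a T2 distribution from a multi-echo decay curve by a
# regularized non-negative least-squares inversion, a discrete analogue of the
# inverse Laplace transform mentioned above.
import numpy as np
from scipy.optimize import nnls

te = np.linspace(0.002, 0.5, 64)                 # echo times in seconds (toy values)
t2_grid = np.logspace(-3, 0, 100)                # candidate T2 values in seconds
K = np.exp(-te[:, None] / t2_grid[None, :])      # exponential kernel matrix

# Synthetic decay from two T2 components plus noise.
true = 0.7 * np.exp(-te / 0.03) + 0.3 * np.exp(-te / 0.2)
signal = true + 0.005 * np.random.default_rng(1).normal(size=te.size)

# Tikhonov-regularized NNLS: augment the system with lam * I to damp spiky solutions.
lam = 0.1
A = np.vstack([K, lam * np.eye(t2_grid.size)])
b = np.concatenate([signal, np.zeros(t2_grid.size)])
amplitudes, _ = nnls(A, b)

peak_t2 = t2_grid[np.argmax(amplitudes)]
print(f"dominant T2 component near {peak_t2 * 1e3:.1f} ms")
```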

  13. A method for ion distribution function evaluation using escaping neutral atom kinetic energy samples

    International Nuclear Information System (INIS)

    Goncharov, P.R.; Ozaki, T.; Veshchev, E.A.; Sudo, S.

    2008-01-01

    A reliable method to evaluate the probability density function of escaping atom kinetic energies is required for the analysis of neutral particle diagnostic data used to study the fast ion distribution function in fusion plasmas. Digital processing of solid state detector signals is proposed in this paper as an improvement over the simple histogram approach. The probability density function for kinetic energies of neutral particles escaping from the plasma has been derived in a general form, taking into account the plasma ion energy distribution, electron capture and loss rates, superposition along the diagnostic sight line and the magnetic surface geometry. A pseudorandom number generator has been realized that enables a sample of escaping neutral particle energies to be simulated for given plasma parameters and experimental conditions. An empirical probability density estimation code has been developed and tested to reconstruct the probability density function from simulated samples, assuming Maxwellian and classical slowing-down plasma ion energy distribution shapes for different temperatures and different slowing-down times. The application of the developed probability density estimation code to the analysis of experimental data obtained by the novel Angular-Resolved Multi-Sightline Neutral Particle Analyzer has been studied to obtain the suprathermal particle distributions. The optimum bandwidth parameter selection algorithm has also been realized. (author)
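
    A minimal version of the empirical density-estimation step can be written with an off-the-shelf kernel density estimator. The sketch below is only indicative: it uses a Maxwell-derived toy sample and Silverman's rule rather than the optimal bandwidth-selection algorithm described in the paper.

```python
# Hedged sketch, not the diagnostic code itself: estimating an energy probability
# density from a sample of simulated "escaping atom" energies with a Gaussian
# kernel density estimator and a simple rule-of-thumb bandwidth.
import numpy as np
from scipy.stats import gaussian_kde, maxwell

rng = np.random.default_rng(2)
energies = maxwell.rvs(scale=1.0, size=5000, random_state=rng) ** 2  # toy energy-like sample

# Silverman's rule is one common bandwidth choice; an optimal-bandwidth search
# (as in the paper) would instead minimize an estimated integrated squared error.
kde = gaussian_kde(energies, bw_method="silverman")

grid = np.linspace(0.0, energies.max(), 200)
density = kde(grid)
print("estimated mode of the energy distribution:", grid[np.argmax(density)])
```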

  14. Vessel Sampling and Blood Flow Velocity Distribution With Vessel Diameter for Characterizing the Human Bulbar Conjunctival Microvasculature.

    Science.gov (United States)

    Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua

    2016-03-01

    This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histogram of the blood flow velocity and vessel diameter. A functional slitlamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distribution was normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean from the total vessel population. The distributions of the diameter and velocity were unimodal, but somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23). This study established the sampling size of the vessels and the distribution histogram of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
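
    The sampling-size argument reduces to asking when the standard error of the mean falls below 15% of the population mean. A hedged toy calculation (synthetic velocities, not the measured data) is sketched below.

```python
# Hedged toy sketch of the sampling-size argument above: for a synthetic set of
# vessel velocities, find how many vessels must be averaged before the standard
# error of the mean drops to 15% of the population mean.
import numpy as np

rng = np.random.default_rng(3)
velocities = rng.lognormal(mean=-0.7, sigma=0.5, size=500)  # hypothetical velocities (mm/s)

pop_mean, pop_sd = velocities.mean(), velocities.std(ddof=1)
for n in range(2, velocities.size + 1):
    if pop_sd / np.sqrt(n) <= 0.15 * pop_mean:
        print("vessels needed for SE <= 15% of mean:", n)
        break
```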

  15. A Monte Carlo Metropolis-Hastings Algorithm for Sampling from Distributions with Intractable Normalizing Constants

    KAUST Repository

    Liang, Faming; Jin, Ick-Hoon

    2013-01-01

    Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling, and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals. © 2013 Massachusetts Institute of Technology.

  17. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
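
    For readers unfamiliar with acceptance sampling by variables, the core decision rule is compact enough to sketch. The snippet below shows a generic single-sided k-method check under a normality assumption; the acceptance constant, specification limit and data are illustrative and are not taken from the NESC document or its calculators.

```python
# Hedged sketch of a basic "acceptance sampling by variables" decision (single
# upper specification limit, normality assumed); the k acceptance constant and
# the toy data are illustrative, not values from the NESC assessment.
import numpy as np
from scipy.stats import norm

def accept_lot(measurements, usl, k):
    """Accept if the sample mean sits at least k sample standard deviations
    below the upper specification limit."""
    x = np.asarray(measurements, dtype=float)
    xbar, s = x.mean(), x.std(ddof=1)
    return (usl - xbar) / s >= k

def estimated_nonconforming_fraction(measurements, usl):
    """Plug-in estimate of the fraction of the lot above the USL."""
    x = np.asarray(measurements, dtype=float)
    return norm.sf((usl - x.mean()) / x.std(ddof=1))

data = np.random.default_rng(4).normal(loc=9.2, scale=0.3, size=20)
print(accept_lot(data, usl=10.0, k=1.9))
print(estimated_nonconforming_fraction(data, usl=10.0))
```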

  18. A QUANTITATIVE EVALUATION OF THE WATER DISTRIBUTION IN A SOIL SAMPLE USING NEUTRON IMAGING

    Directory of Open Access Journals (Sweden)

    Jan Šácha

    2016-10-01

    Full Text Available This paper presents an empirical method, recently proposed by Kang et al., for correcting two-dimensional neutron radiography for water quantification in soil. The method was tested on data from neutron imaging of water infiltration in a soil sample. The raw data were affected by neutron scattering and by beam-hardening artefacts. Two strategies for identifying the correction parameters are proposed in this paper. The method has been further developed for the case of three-dimensional neutron tomography. In a related experiment, neutron imaging was used to record ponded-infiltration experiments in two artificial soil samples. Radiograms, i.e., two-dimensional projections of the sample, were acquired during infiltration. The amount of water and its distribution were calculated from the radiograms, in the form of two-dimensional water thickness maps. Tomograms were reconstructed from the corrected and uncorrected water thickness maps to obtain the 3D spatial distribution of the water content within the sample. Without the correction, the beam-hardening and scattering effects overestimated the water content values close to the perimeter of the sample, and at the same time underestimated the values close to the centre of the sample. The total water content of the entire sample was the same in both cases. The empirical correction method presented in this study is a relatively accurate, rapid and simple way to obtain quantitatively determined water content from two-dimensional and three-dimensional neutron images. However, an independent method for measuring the total water volume in the sample is needed in order to identify the correction parameters.

  19. Assessing Understanding of Sampling Distributions and Differences in Learning amongst Different Learning Styles

    Science.gov (United States)

    Beeman, Jennifer Leigh Sloan

    2013-01-01

    Research has found that students successfully complete an introductory course in statistics without fully comprehending the underlying theory or being able to exhibit statistical reasoning. This is particularly true for the understanding about the sampling distribution of the mean, a crucial concept for statistical inference. This study…

  20. Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations

    Science.gov (United States)

    Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong

    2017-08-01

    Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observed probabilities differ depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistics that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation, in conjunction with a quantitative evaluation that uses fracture simulations and verification against natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5 per square degree, then maximum accuracy occurs at a grid size of 1° × 1°.
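
    The weighting idea behind the estimator can be summarized in a few lines. The sketch below applies the classical Terzaghi weight 1/cos(delta) to fracture poles observed along a scanline and discards poles inside a blind zone; it illustrates the correction itself, not the gridded 3D-distribution estimator or the grid-size optimization studied in the paper.

```python
# Hedged sketch of the classical Terzaghi weighting: each fracture observed along
# a scanline is weighted by 1/cos(delta), where delta is the angle between the
# scanline and the fracture's pole; poles inside the blind zone are discarded.
import numpy as np

def terzaghi_weights(poles, scanline, blind_zone_deg=20.0):
    """poles: (n, 3) unit normals of observed fractures; scanline: unit vector.
    Returns the weight 1/cos(delta) per fracture, or 0 inside the blind zone."""
    poles = np.asarray(poles, dtype=float)
    cos_delta = np.abs(poles @ np.asarray(scanline, dtype=float))
    delta = np.degrees(np.arccos(np.clip(cos_delta, 0.0, 1.0)))
    weights = np.zeros_like(cos_delta)
    keep = delta < 90.0 - blind_zone_deg          # outside the blind zone
    weights[keep] = 1.0 / cos_delta[keep]
    return weights

poles = np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.7], [1.0, 0.0, 0.0]])
poles /= np.linalg.norm(poles, axis=1, keepdims=True)
print(terzaghi_weights(poles, scanline=[0.0, 0.0, 1.0]))
```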

  1. Seasonal phenology, spatial distribution, and sampling plan for the invasive mealybug Phenacoccus peruvianus (Hemiptera: Pseudococcidae).

    Science.gov (United States)

    Beltrá, A; Garcia-Marí, F; Soto, A

    2013-06-01

    Phenacoccus peruvianus Granara de Willink (Hemiptera: Pseudococcidae) is an invasive mealybug of Neotropical origin. In recent years it has invaded the Mediterranean Basin, causing significant damage to bougainvillea and other ornamental plants. This article examines its phenology, location on the plant and spatial distribution, and presents a sampling plan to determine P. peruvianus population density for the management of this mealybug in southern Europe. Six urban green spaces with bougainvillea plants were periodically surveyed between March 2008 and September 2010 in eastern Spain, sampling bracts, leaves, and twigs. Our results show that P. peruvianus abundance was high in spring and summer, declining to almost undetectable levels in autumn and winter. The mealybugs showed a preference for settling on bracts and there were no significant migrations between plant organs. P. peruvianus showed a highly aggregated distribution on bracts, leaves, and twigs. We recommend a binomial sampling of 200 leaves and an action threshold of 55% infested leaves for integrated pest management purposes on urban landscapes, and enumerative sampling for ornamental nursery management and additional biological studies.

  2. Actual distribution of Cronobacter spp. in industrial batches of powdered infant formula and consequences for performance of sampling strategies.

    Science.gov (United States)

    Jongenburger, I; Reij, M W; Boer, E P J; Gorris, L G M; Zwietering, M H

    2011-11-15

    The actual spatial distribution of microorganisms within a batch of food influences the results of sampling for microbiological testing when this distribution is non-homogeneous. In the case of pathogens being non-homogeneously distributed, it markedly influences public health risk. This study investigated the spatial distribution of Cronobacter spp. in powdered infant formula (PIF) at industrial batch scale for both a recalled batch as well as a reference batch. Additionally, the local spatial occurrence of clusters of Cronobacter cells was assessed, as well as the performance of typical sampling strategies to determine the presence of the microorganisms. The concentration of Cronobacter spp. was assessed in the course of the filling time of each batch, by taking samples of 333 g using the most probable number (MPN) enrichment technique. The occurrence of clusters of Cronobacter spp. cells was investigated by plate counting. From the recalled batch, 415 MPN samples were drawn. The expected heterogeneous distribution of Cronobacter spp. could be quantified from these samples, which showed no detectable level (detection limit of -2.52 log CFU/g) in 58% of samples, whilst in the remainder concentrations were found to be between -2.52 and 2.75 log CFU/g. The estimated average concentration in the recalled batch was -2.78 log CFU/g, with a standard deviation of 1.10 log CFU/g. The estimated average concentration in the reference batch was -4.41 log CFU/g, with 99% of the 93 samples being below the detection limit. In the recalled batch, clusters of cells occurred sporadically in 8 out of 2290 samples of 1 g taken. The two largest clusters contained 123 (2.09 log CFU/g) and 560 (2.75 log CFU/g) cells. Various sampling strategies were evaluated for the recalled batch. Taking more and smaller samples while keeping the total sampling weight constant considerably improved the performance of the sampling plans in detecting this type of contaminated batch. Compared to random sampling
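
    The effect of splitting a fixed total sample mass into more, smaller samples can be illustrated with a simple simulation of a heterogeneously contaminated batch. The parameters below are loosely inspired by the reported batch statistics but are otherwise arbitrary, and the model (independent log-normal concentrations with Poisson counts) is a simplification of the actual spatial distribution.

```python
# Hedged simulation sketch (illustrative parameters, not the batch data): how the
# probability of detecting a heterogeneously contaminated batch changes when the
# same total sample mass is split into more, smaller analytical samples.
import numpy as np

rng = np.random.default_rng(5)
MEAN_LOG10, SD_LOG10 = -2.8, 1.1        # log10 CFU/g, roughly in the spirit of the recalled batch

def detection_probability(n_samples, grams_per_sample, trials=20000):
    """P(at least one sample contains >= 1 CFU), with per-sample concentrations
    drawn from a log-normal distribution and counts drawn as Poisson."""
    conc = 10.0 ** rng.normal(MEAN_LOG10, SD_LOG10, size=(trials, n_samples))  # CFU/g
    counts = rng.poisson(conc * grams_per_sample)
    return np.mean(counts.sum(axis=1) > 0)

total_mass = 300.0                       # grams of product examined in every plan
for n in (1, 10, 30, 100):
    print(n, "samples of", total_mass / n, "g ->", detection_probability(n, total_mass / n))
```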

  3. Atmospheric aerosol sampling campaign in Budapest and K-puszta. Part 1. Elemental concentrations and size distributions

    International Nuclear Information System (INIS)

    Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Szabo, Gy.; Salma, I.

    2004-01-01

    Complete text of publication follows. Atmospheric aerosol samples were collected in a sampling campaign from 24 July to 1 August, 2003 in Hungary. The sampling was performed at two sites simultaneously: in Budapest (urban site) and K-puszta (remote area). Two PIXE International 7-stage cascade impactors were used for aerosol sampling, with a duration of 24 hours. These impactors separate the aerosol into 7 size ranges. The elemental concentrations of the samples were obtained by proton-induced X-ray emission (PIXE) analysis. Size distributions of the elements S, Si, Ca, W, Zn, Pb and Fe were investigated in K-puszta and in Budapest. Average rates (shown in Table 1) of the elemental concentrations were calculated for each stage (in %) from the obtained distributions. The elements can be grouped into two groups on the basis of these data. The majority of the particles containing Fe, Si, Ca, (Ti) are in the 2-8 μm size range (first group). These soil-origin elements were usually found in higher concentration in Budapest than in K-puszta (Fig. 1). The second group consisted of S, Pb and (W). The majority of these elements was found in the 0.25-1 μm size range and was much higher in Budapest than in K-puszta. W was measured only in samples collected in Budapest. Zn has a uniform distribution in Budapest and does not belong to the above mentioned groups. This work was supported by the National Research and Development Program (NRDP 3/005/2001). (author)

  4. High Accuracy Transistor Compact Model Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in their transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  5. Mechanical properties and filler distribution as a function filler content in silica filled PDMS samples

    International Nuclear Information System (INIS)

    Hawley, Marilyn E.; Wrobleski, Debra A.; Orler, E. Bruce; Houlton, Robert J.; Chitanvis, Kiran E.; Brown, Geoffrey W.; Hanson, David E.

    2004-01-01

    Atomic force microscopy (AFM) phase imaging and tensile stress-strain measurements are used to study a series of model compression-molded fumed-silica-filled polydimethylsiloxane (PDMS) samples with filler contents of zero, 20, 35, and 50 parts per hundred (phr) to determine the relationship between filler content and stress-strain properties. AFM phase imaging was used to determine filler size, degree of aggregation, and distribution within the soft PDMS matrix. A small tensile stage was used to measure mechanical properties. Samples were not pulled to break, in order to study Mullins and aging effects. Several identical 35 phr samples were subjected to an initial stress, and then one each was reevaluated over intervals up to 26 weeks to determine the degree to which these samples recovered their initial stress-strain behavior as a function of time. One sample was tested before and after heat treatment to determine whether heating accelerated recovery of the stress-strain behavior. The effect of filler surface treatment on mechanical properties was examined for two samples containing 35 phr filler, treated or untreated with hexamethyldisilazane (HMDZ), respectively. Fiduciary marks were used on several samples to determine permanent set. The 35 phr filler samples were found to give the optimum mechanical properties. A clear Mullins effect was seen. Within experimental error, no change was seen in mechanical behavior as a function of time or heat treatment. The mechanical properties of the sample containing the HMDZ-treated silica were adversely affected. AFM phase images revealed aggregation and nonuniform distribution of the filler for all samples. Finally, a permanent set of about 3 to 6 percent was observed for the 35 phr samples.

  6. Accuracy of reported food intake in a sample of 7-10 year-old children in Serbia.

    Science.gov (United States)

    Šumonja, S; Jevtić, M

    2016-09-01

    Children's ability to recall and report dietary intake is affected by age and cognitive skills. Dietary intake reporting accuracy in children is associated with age, weight status, cognitive, behavioural and social factors, and dietary assessment techniques. This study analysed the accuracy of 7-10 year-old children's reported food intake for one day. Validation study. The sample included 94 children aged 7-10 years (median = 9 years) from two elementary schools in a local community in Serbia. The 'My meals for one day' questionnaire was a combination of a 24-h recall and a food recognition form. It included recalls for five meals: breakfast at home; snack at home; lunch at home; snack at school and dinner at home. Parental reports were used as reference information about children's food intake for meals obtained at home, and observation was used to gain reference information for the school meal. Observed and reported amounts were used to calculate the omission rate, intrusion rate, corresponding, over-reported and unreported amounts of energy, correspondence rate and inflation ratio. The overall omission rate (37.5%) was higher than the overall intrusion rate (36.7%). The same food item (bread) was the most often correctly reported and the most often omitted food item for breakfast, lunch and dinner. Snack at school had the greatest mean correspondence rate (79.6%) and snack at home the highest mean inflation ratio (90.7%). Most errors in children's recalls were incorrectly reported amounts rather than food items. The questionnaire should be improved to facilitate accurate reporting of the amounts. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  7. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Minuk; Choi, Jong-su; Hong, Sup [Korea Research Insitute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-02-15

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.
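
    The general workflow, fitting a generalized Pareto distribution to threshold exceedances and letting an information criterion arbitrate the threshold choice, can be sketched as follows. This is a simplified proxy for the paper's AIC-based procedure (which uses the overall samples to fix the threshold); the data are synthetic and the per-threshold AIC scan is only an illustration.

```python
# Hedged sketch of the general idea (not the authors' exact criterion): pick a
# tail threshold for a generalized Pareto fit by scanning candidate thresholds
# and keeping the one with the smallest AIC of the fitted exceedance model.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)
data = rng.lognormal(mean=0.0, sigma=0.8, size=5000)     # toy heavy-tailed sample

best = None
for q in np.arange(0.80, 0.99, 0.01):                    # candidate threshold quantiles
    u = np.quantile(data, q)
    excess = data[data > u] - u
    c, loc, scale = genpareto.fit(excess, floc=0.0)      # fit GPD to exceedances
    loglik = genpareto.logpdf(excess, c, loc=0.0, scale=scale).sum()
    aic = 2 * 2 - 2 * loglik                             # two free parameters (shape, scale)
    if best is None or aic < best[0]:
        best = (aic, u, c, scale)

print("chosen threshold u=%.3f, shape=%.3f, scale=%.3f (AIC=%.1f)"
      % (best[1], best[2], best[3], best[0]))
```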

  9. Sampling procedures for inventory of commercial volume tree species in Amazon Forest.

    Science.gov (United States)

    Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R

    2017-01-01

    The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories; therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. To this end, the present study aims to evaluate the conventional sampling procedures and to introduce adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that the density, the spatial distribution and the zero-plots affect the consistency of the estimators, and that adaptive cluster sampling allows more accurate volumetric estimates to be obtained. We use data from a census carried out in Jamari National Forest, Brazil, where trees with diameters equal to or higher than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling and adaptive cluster sampling, whereby the accuracy of the volumetric estimation and the presence of zero-plots were evaluated. The sampling procedures applied to these species were affected by the low density of trees and the large number of zero-plots, whereas the adaptive clusters allowed the sampling effort to be concentrated in plots with trees and thus yielded more representative samples for estimating the commercial volume.

  10. Accuracy of magnetic resonance based susceptibility measurements

    Science.gov (United States)

    Erdevig, Hannah E.; Russek, Stephen E.; Carnicka, Slavka; Stupic, Karl F.; Keenan, Kathryn E.

    2017-05-01

    Magnetic Resonance Imaging (MRI) is increasingly used to map the magnetic susceptibility of tissue to identify cerebral microbleeds associated with traumatic brain injury and pathological iron deposits associated with neurodegenerative diseases such as Parkinson's and Alzheimer's disease. Accurate measurements of susceptibility are important for determining oxygen and iron content in blood vessels and brain tissue for use in noninvasive clinical diagnosis and treatment assessments. Induced magnetic fields with amplitudes on the order of 100 nT can be detected using MRI phase images. The induced field distributions can then be inverted to obtain quantitative susceptibility maps. The focus of this research was to determine the accuracy of MRI-based susceptibility measurements using simple phantom geometries and to compare the susceptibility measurements with magnetometry measurements where SI-traceable standards are available. The susceptibilities of paramagnetic salt solutions in cylindrical containers were measured as a function of orientation relative to the static MRI field. The observed induced fields as a function of orientation of the cylinder were in good agreement with simple models. The MRI susceptibility measurements were compared with SQUID magnetometry using NIST-traceable standards. MRI can accurately measure relative magnetic susceptibilities while SQUID magnetometry measures absolute magnetic susceptibility. Given the accuracy of moment measurements of tissue mimicking samples, and the need to look at small differences in tissue properties, the use of existing NIST standard reference materials to calibrate MRI reference structures is problematic and better reference materials are required.
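    For cylindrical containers of this sort, the orientation dependence of the internal induced field is commonly approximated by the infinite-cylinder expression below (a standard susceptibility-MRI result quoted here for context, not taken from the paper; Δχ is the susceptibility difference between the solution and its surroundings, and θ is the angle between the cylinder axis and B0):

    ```latex
    % Internal field shift of a long cylinder at angle \theta to B_0,
    % small-susceptibility limit (generic notation, not the paper's).
    \Delta B_{\mathrm{in}} \approx \frac{\Delta\chi \, B_0}{6}\left(3\cos^2\theta - 1\right)
    ```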

  11. Improved orientation sampling for indexing diffraction patterns of polycrystalline materials

    DEFF Research Database (Denmark)

    Larsen, Peter Mahler; Schmidt, Søren

    2017-01-01

    Orientation mapping is a widely used technique for revealing the microstructure of a polycrystalline sample. The crystalline orientation at each point in the sample is determined by analysis of the diffraction pattern, a process known as pattern indexing. A recent development in pattern indexing … in the presence of noise, it has very high computational requirements. In this article, the computational burden is reduced by developing a method for nearly optimal sampling of orientations. By using the quaternion representation of orientations, it is shown that the optimal sampling problem is equivalent to that of optimally distributing points on a four-dimensional sphere. In doing so, the number of orientation samples needed to achieve a desired indexing accuracy is significantly reduced. Orientation sets at a range of sizes are generated in this way for all Laue groups and are made available online for easy use.
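    As a rough illustration of the quaternion viewpoint, the sketch below draws uniformly random unit quaternions (Shoemake's method) and checks how well they cover orientation space; the authors' near-optimal deterministic point sets and Laue-group symmetry handling are not reproduced here.

    ```python
    import numpy as np

    def random_unit_quaternions(n, seed=None):
        """Uniform random orientations as unit quaternions (Shoemake's method)."""
        rng = np.random.default_rng(seed)
        u1, u2, u3 = rng.random((3, n))
        return np.stack([
            np.sqrt(1.0 - u1) * np.sin(2.0 * np.pi * u2),
            np.sqrt(1.0 - u1) * np.cos(2.0 * np.pi * u2),
            np.sqrt(u1) * np.sin(2.0 * np.pi * u3),
            np.sqrt(u1) * np.cos(2.0 * np.pi * u3),
        ], axis=1)

    # Coverage check: worst-case misorientation from random test orientations to
    # their nearest neighbour in the sampled set (no crystal symmetry applied).
    qs = random_unit_quaternions(5000, seed=0)
    tests = random_unit_quaternions(200, seed=1)
    dots = np.clip(np.abs(tests @ qs.T), 0.0, 1.0)
    nearest_deg = np.degrees(2.0 * np.arccos(dots.max(axis=1)))
    print(f"worst-case misorientation to nearest sample: {nearest_deg.max():.2f} deg")
    ```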

  12. Modified FlowCAM procedure for quantifying size distribution of zooplankton with sample recycling capacity.

    Directory of Open Access Journals (Sweden)

    Esther Wong

    Full Text Available We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: 1 prevents animals from settling and clogging with constant bubbling in the sample container; 2 prevents damage to sample animals and facilitates recycling by replacing the built-in peristaltic pump with an external syringe pump, in order to generate negative pressure, creates a steady flow by drawing air from the receiving conical flask (i.e. vacuum pump, and transfers plankton from the sample container toward the main flowcell of the imaging system and finally into the receiving flask; 3 aligns samples in advance of imaging and prevents clogging with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulties applying the standard FlowCAM procedure to studies where the number of individuals per sample is small, and since the FlowCAM can only image a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e. bootstrapping the sample in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with syringe pump and Field of View (FOV flowcells which can image all particles passing through the flow field; we note that these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis versus conventional manual measurements, and found that the size of an individual zooplankter can be estimated by the FlowCAM image system after ground truthing.

  13. DESIGN AND ANALYSIS FOR THEMATIC MAP ACCURACY ASSESSMENT: FUNDAMENTAL PRINCIPLES

    Science.gov (United States)

    Before being used in scientific investigations and policy decisions, thematic maps constructed from remotely sensed data should be subjected to a statistically rigorous accuracy assessment. The three basic components of an accuracy assessment are: 1) the sampling design used to s...

  14. Accuracy of reading liquid based cytology slides using the ThinPrep Imager compared with conventional cytology: prospective study

    Science.gov (United States)

    d'Assuncao, Jefferson; Irwig, Les; Macaskill, Petra; Chan, Siew F; Richards, Adele; Farnsworth, Annabelle

    2007-01-01

    Objective To compare the accuracy of liquid based cytology using the computerised ThinPrep Imager with that of manually read conventional cytology. Design Prospective study. Setting Pathology laboratory in Sydney, Australia. Participants 55 164 split sample pairs (liquid based sample collected after conventional sample from one collection) from consecutive samples of women choosing both types of cytology and whose specimens were examined between August 2004 and June 2005. Main outcome measures Primary outcome was accuracy of slides for detecting squamous lesions. Secondary outcomes were rate of unsatisfactory slides, distribution of squamous cytological classifications, and accuracy of detecting glandular lesions. Results Fewer unsatisfactory slides were found for imager read cytology than for conventional cytology (1.8% v 3.1%; Pcytology (7.4% v 6.0% overall and 2.8% v 2.2% for cervical intraepithelial neoplasia of grade 1 or higher). Among 550 patients in whom imager read cytology was cervical intraepithelial neoplasia grade 1 or higher and conventional cytology was less severe than grade 1, 133 of 380 biopsy samples taken were high grade histology. Among 294 patients in whom imager read cytology was less severe than cervical intraepithelial neoplasia grade 1 and conventional cytology was grade 1 or higher, 62 of 210 biopsy samples taken were high grade histology. Imager read cytology therefore detected 71 more cases of high grade histology than did conventional cytology, resulting from 170 more biopsies. Similar results were found when one pathologist reread the slides, masked to cytology results. Conclusion The ThinPrep Imager detects 1.29 more cases of histological high grade squamous disease per 1000 women screened than conventional cytology, with cervical intraepithelial neoplasia grade 1 as the threshold for referral to colposcopy. More imager read slides than conventional slides were satisfactory for examination and more contained low grade cytological

  15. An Empirical Consideration of the Use of R in Actively Constructing Sampling Distributions

    Science.gov (United States)

    Vaughn, Brandon K.

    2009-01-01

    In this paper, an interactive teaching approach to introduce the concept of sampling distributions using the statistical software program, R, is shown. One advantage of this approach is that the program R is freely available via the internet. Instructors can easily demonstrate concepts in class, outfit entire computer labs, and/or assign the…
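    The paper's examples use R; as a rough illustration of the same idea in another language, the sketch below builds an empirical sampling distribution of the mean by repeated resampling (the population, sample size and replication count are arbitrary).

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    population = rng.exponential(scale=2.0, size=100_000)   # a skewed population

    n, reps = 30, 5000
    sample_means = np.array([rng.choice(population, size=n).mean() for _ in range(reps)])

    # By the central limit theorem the sampling distribution of the mean is roughly
    # normal, centred on the population mean with spread sigma / sqrt(n).
    print(population.mean(), sample_means.mean())
    print(population.std() / np.sqrt(n), sample_means.std())
    ```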

  16. MCNPX calculations of dose rate distribution inside samples treated in the research gamma irradiating facility at CTEx

    Energy Technology Data Exchange (ETDEWEB)

    Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G., E-mail: tiagorusin@ime.eb.b, E-mail: rebello@ime.eb.b, E-mail: vellozo@cbpf.b, E-mail: renatoguedes@ime.eb.b [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Nuclear; Vital, Helio C., E-mail: vital@ctex.eb.b [Centro Tecnologico do Exercito (CTEx), Rio de Janeiro, RJ (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2011-07-01

    A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been daily used in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples that can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the items treated such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds), have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)

  17. MCNPX calculations of dose rate distribution inside samples treated in the research gamma irradiating facility at CTEx

    International Nuclear Information System (INIS)

    Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G.; Silva, Ademir X.

    2011-01-01

    A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been daily used in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples that can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the items treated such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds), have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)

  18. Missing data and the accuracy of magnetic-observatory hour means

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2009-09-01

    Full Text Available Analysis is made of the accuracy of magnetic-observatory hourly means constructed from definitive minute data having missing values (gaps). Bootstrap sampling from different data-gap distributions is used to estimate average errors on hourly means as a function of the number of missing data. Absolute and relative error results are calculated for horizontal-intensity, declination, and vertical-component data collected at high, medium, and low magnetic latitudes. For 90% complete coverage (10% missing data), average (RMS) absolute errors on hourly means are generally less than errors permitted by Intermagnet for minute data. As a rule of thumb, the average relative error for hourly means with 10% missing minute data is approximately equal to 10% of the hourly standard deviation of the source minute data.
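    A rough sketch of the bootstrap idea, using synthetic minute data and randomly placed gaps rather than the observed gap distributions analysed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def hourly_mean_error(minute_values, n_missing, n_boot=1000, rng=rng):
        """RMS error of the hourly mean when n_missing of 60 minutes are dropped,
        with gap positions drawn at random for each bootstrap replicate."""
        full_mean = minute_values.mean()
        errs = np.empty(n_boot)
        for i in range(n_boot):
            keep = rng.choice(60, size=60 - n_missing, replace=False)
            errs[i] = minute_values[keep].mean() - full_mean
        return np.sqrt(np.mean(errs ** 2))

    # Synthetic hour of minute data: slow trend plus noise (illustrative only).
    minutes = 50.0 + 0.05 * np.arange(60) + rng.normal(0.0, 1.5, size=60)
    for n_missing in (6, 15, 30):
        print(n_missing, "missing ->", round(hourly_mean_error(minutes, n_missing), 3))
    ```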

  19. Mechanical Properties Distribution within Polypropylene Injection Molded Samples: Effect of Mold Temperature under Uneven Thermal Conditions

    Directory of Open Access Journals (Sweden)

    Sara Liparoti

    2017-11-01

    Full Text Available The quality of the polymer parts produced by injection molding is strongly affected by the processing conditions. Uncontrolled deviations from the proper process parameters could significantly affect both internal structure and final material properties. In this work, to mimic an uneven temperature field, a strong asymmetric heating is applied during the production of injection-molded polypropylene samples. The morphology of the samples is characterized by optical and atomic force microscopy (AFM), whereas the distribution of mechanical modulus at different scales is obtained by Indentation and HarmoniX AFM tests. Results clearly show that the temperature differences between the two mold surfaces significantly affect the morphology distributions of the molded parts. This is due to both the uneven temperature field evolutions and to the asymmetric flow field. The final mechanical property distributions are determined by competition between the local molecular stretch and the local structuring achieved during solidification. The cooling rate changes affect internal structures in terms of relaxation/reorganization levels and give rise to an asymmetric distribution of mechanical properties.

  20. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
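    A minimal simulation of the stated signal model (a zero-mean Gaussian whose variance follows an inverse gamma distribution); the parameter values are invented, and the paper's marginal-likelihood estimation procedure is not reproduced.

    ```python
    import numpy as np
    from scipy.stats import invgamma

    rng = np.random.default_rng(7)

    # Model: variance ~ InvGamma(alpha, beta); EMG | variance ~ N(0, variance).
    alpha_true, beta_true = 4.0, 3.0
    n = 200_000
    variances = invgamma.rvs(alpha_true, scale=beta_true, size=n, random_state=rng)
    emg = rng.normal(0.0, np.sqrt(variances))

    # Sanity check: the marginal variance of the signal should equal beta / (alpha - 1).
    print("empirical Var[x]:", round(emg.var(), 3),
          " model Var[x]:", beta_true / (alpha_true - 1))
    ```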

  1. Effect of genetic architecture on the prediction accuracy of quantitative traits in samples of unrelated individuals.

    Science.gov (United States)

    Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C

    2018-06-01

    Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
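    A compact stand-in for the kind of additive whole-genome prediction being discussed: ridge regression on SNP dosage codes, which behaves like G-BLUP up to the choice of shrinkage. The simulated genotypes, the trait architecture (a small epistatic term) and the shrinkage value are illustrative only.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n_lines, n_snps = 1000, 5000
    X = rng.binomial(2, 0.3, size=(n_lines, n_snps)).astype(float)  # SNP dosages

    # Trait with an additive part and a purely epistatic (pairwise) part.
    additive = X[:, :20] @ rng.normal(0, 1, 20)
    epistatic = (X[:, 20] * X[:, 21]) * 2.0
    y = additive + epistatic + rng.normal(0, 2.0, n_lines)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    model = Ridge(alpha=n_snps)        # heavy shrinkage, G-BLUP-like behaviour
    model.fit(Xtr, ytr)
    acc = np.corrcoef(model.predict(Xte), yte)[0, 1]
    print(f"prediction accuracy (r): {acc:.2f}")
    ```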

  2. Simple method for highlighting the temperature distribution into a liquid sample heated by microwave power field

    International Nuclear Information System (INIS)

    Surducan, V.; Surducan, E.; Dadarlat, D.

    2013-01-01

    Microwave induced heating is widely used in medical treatments and in scientific and industrial applications. The temperature field inside a microwave heated sample is often inhomogeneous; therefore, multiple temperature sensors are required for an accurate result. Nowadays, non-contact (Infra Red thermography or microwave radiometry) or direct contact temperature measurement methods (expensive and sophisticated fiber optic temperature sensors transparent to microwave radiation) are mainly used. IR thermography gives only the surface temperature and cannot be used for measuring temperature distributions in cross sections of a sample. In this paper we present a very simple experimental method for highlighting the temperature distribution inside a cross section of a liquid sample heated by microwave radiation through a coaxial applicator. The proposed method is able to offer qualitative information about the heating distribution, using a temperature sensitive liquid crystal sheet. Inhomogeneities as small as 1-2 °C produced by the symmetry irregularities of the microwave applicator can be easily detected by visual inspection or by computer assisted color to temperature conversion. Therefore, the microwave applicator is tuned and verified with the described method until the temperature inhomogeneities are resolved.

  3. Study on the influence of X-ray tube spectral distribution on the analysis of bulk samples and thin films: Fundamental parameters method and theoretical coefficient algorithms

    International Nuclear Information System (INIS)

    Sitko, Rafal

    2008-01-01

    Knowledge of X-ray tube spectral distribution is necessary in theoretical methods of matrix correction, i.e. in both fundamental parameter (FP) methods and theoretical influence coefficient algorithms. Thus, the influence of X-ray tube distribution on the accuracy of the analysis of thin films and bulk samples is presented. The calculations are performed using experimental X-ray tube spectra taken from the literature and theoretical X-ray tube spectra evaluated by three different algorithms proposed by Pella et al. (X-Ray Spectrom. 14 (1985) 125-135), Ebel (X-Ray Spectrom. 28 (1999) 255-266), and Finkelshtein and Pavlova (X-Ray Spectrom. 28 (1999) 27-32). In this study, Fe-Cr-Ni system is selected as an example and the calculations are performed for X-ray tubes commonly applied in X-ray fluorescence analysis (XRF), i.e., Cr, Mo, Rh and W. The influence of X-ray tube spectra on FP analysis is evaluated when quantification is performed using various types of calibration samples. FP analysis of bulk samples is performed using pure-element bulk standards and multielement bulk standards similar to the analyzed material, whereas for FP analysis of thin films, the bulk and thin pure-element standards are used. For the evaluation of the influence of X-ray tube spectra on XRF analysis performed by theoretical influence coefficient methods, two algorithms for bulk samples are selected, i.e. Claisse-Quintin (Can. Spectrosc. 12 (1967) 129-134) and COLA algorithms (G.R. Lachance, Paper Presented at the International Conference on Industrial Inorganic Elemental Analysis, Metz, France, June 3, 1981) and two algorithms (constant and linear coefficients) for thin films recently proposed by Sitko (X-Ray Spectrom. 37 (2008) 265-272)

  4. A sampling device for counting insect egg clusters and measuring vertical distribution of vegetation

    Science.gov (United States)

    Robert L. Talerico; Robert W., Jr. Wilson

    1978-01-01

    The use of a vertical sampling pole that delineates known volumes and position is illustrated and demonstrated for counting egg clusters of N. sertifer. The pole can also be used to estimate vertical and horizontal coverage, distribution or damage of vegetation or foliage.

  5. Impact of marker ascertainment bias on genomic selection accuracy and estimates of genetic diversity.

    Directory of Open Access Journals (Sweden)

    Nicolas Heslot

    Full Text Available Genome-wide molecular markers are often being used to evaluate genetic diversity in germplasm collections and for making genomic selections in breeding programs. To accurately predict phenotypes and assay genetic diversity, molecular markers should assay a representative sample of the polymorphisms in the population under study. Ascertainment bias arises when marker data is not obtained from a random sample of the polymorphisms in the population of interest. Genotyping-by-sequencing (GBS) is rapidly emerging as a low-cost genotyping platform, even for the large, complex, and polyploid wheat (Triticum aestivum L.) genome. With GBS, marker discovery and genotyping occur simultaneously, resulting in minimal ascertainment bias. The previous platform of choice for whole-genome genotyping in many species such as wheat was DArT (Diversity Array Technology) and has formed the basis of most of our knowledge about cereals genetic diversity. This study compared GBS and DArT marker platforms for measuring genetic diversity and genomic selection (GS) accuracy in elite U.S. soft winter wheat. From a set of 365 breeding lines, 38,412 single nucleotide polymorphism GBS markers were discovered and genotyped. The GBS SNPs gave a higher GS accuracy than 1,544 DArT markers on the same lines, despite 43.9% missing data. Using a bootstrap approach, we observed significantly more clustering of markers and ascertainment bias with DArT relative to GBS. The minor allele frequency distribution of GBS markers had a deficit of rare variants compared to DArT markers. Despite the ascertainment bias of the DArT markers, GS accuracy for three traits out of four was not significantly different when an equal number of markers were used for each platform. This suggests that the gain in accuracy observed using GBS compared to DArT markers was mainly due to a large increase in the number of markers available for the analysis.

  6. Impact of Marker Ascertainment Bias on Genomic Selection Accuracy and Estimates of Genetic Diversity

    Science.gov (United States)

    Heslot, Nicolas; Rutkoski, Jessica; Poland, Jesse; Jannink, Jean-Luc; Sorrells, Mark E.

    2013-01-01

    Genome-wide molecular markers are often being used to evaluate genetic diversity in germplasm collections and for making genomic selections in breeding programs. To accurately predict phenotypes and assay genetic diversity, molecular markers should assay a representative sample of the polymorphisms in the population under study. Ascertainment bias arises when marker data is not obtained from a random sample of the polymorphisms in the population of interest. Genotyping-by-sequencing (GBS) is rapidly emerging as a low-cost genotyping platform, even for the large, complex, and polyploid wheat (Triticum aestivum L.) genome. With GBS, marker discovery and genotyping occur simultaneously, resulting in minimal ascertainment bias. The previous platform of choice for whole-genome genotyping in many species such as wheat was DArT (Diversity Array Technology) and has formed the basis of most of our knowledge about cereals genetic diversity. This study compared GBS and DArT marker platforms for measuring genetic diversity and genomic selection (GS) accuracy in elite U.S. soft winter wheat. From a set of 365 breeding lines, 38,412 single nucleotide polymorphism GBS markers were discovered and genotyped. The GBS SNPs gave a higher GS accuracy than 1,544 DArT markers on the same lines, despite 43.9% missing data. Using a bootstrap approach, we observed significantly more clustering of markers and ascertainment bias with DArT relative to GBS. The minor allele frequency distribution of GBS markers had a deficit of rare variants compared to DArT markers. Despite the ascertainment bias of the DArT markers, GS accuracy for three traits out of four was not significantly different when an equal number of markers were used for each platform. This suggests that the gain in accuracy observed using GBS compared to DArT markers was mainly due to a large increase in the number of markers available for the analysis. PMID:24040295

  7. The WAIS Melt Monitor: An automated ice core melting system for meltwater sample handling and the collection of high resolution microparticle size distribution data

    Science.gov (United States)

    Breton, D. J.; Koffman, B. G.; Kreutz, K. J.; Hamilton, G. S.

    2010-12-01

    Paleoclimate data are often extracted from ice cores by careful geochemical analysis of meltwater samples. The analysis of the microparticles found in ice cores can also yield unique clues about atmospheric dust loading and transport, dust provenance and past environmental conditions. Determination of microparticle concentration, size distribution and chemical makeup as a function of depth is especially difficult because the particle size measurement either consumes or contaminates the meltwater, preventing further geochemical analysis. Here we describe a microcontroller-based ice core melting system which allows the collection of separate microparticle and chemistry samples from the same depth intervals in the ice core, while logging and accurately depth-tagging real-time electrical conductivity and particle size distribution data. This system was designed specifically to support microparticle analysis of the WAIS Divide WDC06A deep ice core, but many of the subsystems are applicable to more general ice core melting operations. Major system components include: a rotary encoder to measure ice core melt displacement with 0.1 millimeter accuracy, a meltwater tracking system to assign core depths to conductivity, particle and sample vial data, an optical debubbler level control system to protect the Abakus laser particle counter from damage due to air bubbles, a Rabbit 3700 microcontroller which communicates with a host PC, collects encoder and optical sensor data and autonomously operates Gilson peristaltic pumps and fraction collectors to provide automatic sample handling, melt monitor control software operating on a standard PC allowing the user to control and view the status of the system, data logging software operating on the same PC to collect data from the melting, electrical conductivity and microparticle measurement systems. Because microparticle samples can easily be contaminated, we use optical air bubble sensors and high resolution ice core density

  8. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    Directory of Open Access Journals (Sweden)

    Junguo Hu

    Full Text Available Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  9. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    Science.gov (United States)

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  10. Studies on cellular distribution of elements in human hepatocellular carcinoma samples by molecular activation analysis

    International Nuclear Information System (INIS)

    Deng Guilong; Chen Chunying; Zhang Peiqun; Zhao Jiujiang; Chai Zhifang

    2005-01-01

    The distribution patterns of 17 elements in the subcellular fractions of nuclei, mitochondria, lysosome, microsome and cytosol of human hepatocellular carcinoma (HCC) and normal liver samples were investigated by using molecular activation analysis (MAA) and differential centrifugation. Their significant difference was checked by the Student's t-test. These elements exhibit inhomogeneous distributions in each subcellular fraction. Some elements have no significant difference between hepatocellular carcinoma and normal liver samples. However, the concentrations of Br, Ca, Cd and Cs are significantly higher in each component of hepatocarcinoma than in normal liver. The content of Fe in microsome of HCC is significantly lower, almost half of normal liver samples, but higher in other subcellular fractions than in those of normal tissues. The rare earth elements of La and Ce have the patterns similar to Fe. The concentrations of Sb and Zn in nuclei of HCC are obviously lower (P<0.05, P<0.05). The contents of K and Na are higher in cytosol of HCC (P<0.05). The distributions of Ba and Rb show no significant difference between two groups. The relationships of Fe, Cd and K with HCC were also discussed. The levels of some elements in subcellular fractions of tumor were quite different from those of normal liver, which suggested that trace elements might play important roles in the occurrence and development of hepatocellular carcinoma. (authors)
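    The group comparison described (a Student's t-test on element concentrations in a given subcellular fraction) amounts to the following; the concentration values are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    # Hypothetical Cd concentrations (ug/g) in nuclei of HCC vs normal liver samples.
    cd_hcc = np.array([0.42, 0.51, 0.47, 0.55, 0.44, 0.50])
    cd_normal = np.array([0.30, 0.28, 0.35, 0.33, 0.29, 0.31])

    t_stat, p_value = ttest_ind(cd_hcc, cd_normal)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> significant difference
    ```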

  11. Studies on cellular distribution of elements in human hepatocellular carcinoma samples by molecular activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Guilong, Deng [Chinese Academy of Sciences, Beijing (China). Inst. of High Energy Physics, Key Laboratory of Nuclear Analytical Techniques; Department of General Surgery, the Second Affiliated Hospital, School of Medicine, Zhejiang Univ., Hangzhou (China); Chunying, Chen; Peiqun, Zhang; Jiujiang, Zhao; Zhifang, Chai [Chinese Academy of Sciences, Beijing (China). Inst. of High Energy Physics, Key Laboratory of Nuclear Analytical Techniques; Yingbin, Liu; Jianwei, Wang; Bin, Xu; Shuyou, Peng [Department of General Surgery, the Second Affiliated Hospital, School of Medicine, Zhejiang Univ., Hangzhou (China)

    2005-07-15

    The distribution patterns of 17 elements in the subcellular fractions of nuclei, mitochondria, lysosome, microsome and cytosol of human hepatocellular carcinoma (HCC) and normal liver samples were investigated by using molecular activation analysis (MAA) and differential centrifugation. Their significant difference was checked by the Student's t-test. These elements exhibit inhomogeneous distributions in each subcellular fraction. Some elements have no significant difference between hepatocellular carcinoma and normal liver samples. However, the concentrations of Br, Ca, Cd and Cs are significantly higher in each component of hepatocarcinoma than in normal liver. The content of Fe in microsome of HCC is significantly lower, almost half of normal liver samples, but higher in other subcellular fractions than in those of normal tissues. The rare earth elements of La and Ce have the patterns similar to Fe. The concentrations of Sb and Zn in nuclei of HCC are obviously lower (P<0.05, P<0.05). The contents of K and Na are higher in cytosol of HCC (P<0.05). The distributions of Ba and Rb show no significant difference between two groups. The relationships of Fe, Cd and K with HCC were also discussed. The levels of some elements in subcellular fractions of tumor were quite different from those of normal liver, which suggested that trace elements might play important roles in the occurrence and development of hepatocellular carcinoma. (authors)

  12. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    Science.gov (United States)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

    Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, which is based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power by the MPFD with few initial experimental samples was proved in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy use.
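    The MPFD itself is not given in the abstract; the sketch below only illustrates the general relation such a model builds on, namely that sawing power follows from integrating an assumed tangential force density over the contact arc and multiplying by the peripheral speed. The force-density shape, contact angle and blade speed are made up.

    ```python
    import numpy as np

    def sawing_power(force_density, contact_angles, blade_speed):
        """Power (W) = total tangential force (N) x peripheral speed (m/s), with the
        total force obtained by trapezoidal integration of a tangential force
        density (N/rad) over the contact arc."""
        total_force = np.sum(0.5 * (force_density[1:] + force_density[:-1])
                             * np.diff(contact_angles))
        return total_force * blade_speed

    # Illustrative force density over a 30-degree contact arc, peaking mid-arc.
    angles = np.linspace(0.0, np.radians(30.0), 100)
    density = 400.0 * np.sin(np.pi * angles / angles[-1])   # N/rad, assumed shape
    print(f"predicted power: {sawing_power(density, angles, blade_speed=40.0):.0f} W")
    ```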

  13. Uncertainty assessment of integrated distributed hydrological models using GLUE with Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2008-01-01

    In recent years, there has been an increase in the application of distributed, physically-based and integrated hydrological models. Many questions regarding how to properly calibrate and validate distributed models and assess the uncertainty of the estimated parameters and the spatially … -site validation must complement the usual time validation. In this study, we develop, through an application, a comprehensive framework for multi-criteria calibration and uncertainty assessment of distributed physically-based, integrated hydrological models. A revised version of the generalized likelihood uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining …
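    A minimal sketch of the GLUE idea (Monte Carlo parameter sampling, an informal likelihood, a behavioural threshold, and output bounds from the behavioural runs); the toy model, likelihood measure and threshold are placeholders, and the Markov chain Monte Carlo refinement of the sampler is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    obs = np.array([2.1, 2.9, 3.8, 5.2, 4.1, 3.0])   # observed series (made up)
    t = np.arange(obs.size)

    def model(theta):                                 # toy stand-in for the hydrological model
        a, b = theta
        return a + b * t

    # 1) Monte Carlo sampling of parameter sets from uniform prior ranges.
    thetas = np.column_stack([rng.uniform(0, 4, 5000), rng.uniform(0, 1, 5000)])
    sims = np.array([model(th) for th in thetas])

    # 2) Informal likelihood (Nash-Sutcliffe efficiency) and a behavioural threshold.
    nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
    behavioural = sims[nse > 0.5]

    # 3) Output uncertainty bounds from the behavioural runs (GLUE proper would
    #    weight each run by its likelihood; plain quantiles are used here).
    print("behavioural sets:", len(behavioural))
    print("5-95% bounds:", np.quantile(behavioural, [0.05, 0.95], axis=0).round(2))
    ```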

  14. Evaluation of the Accuracy of Polymer Gels for Determining Electron Dose Distributions in the Presence of Small Heterogeneities.

    Science.gov (United States)

    Asl, R Ghahraman; Nedaie, H A; Banaee, N

    2017-12-01

    The aim of this study is to evaluate the application and accuracy of polymer gels for determining electron dose distributions in the presence of small heterogeneities made of bone and air. Different cylindrical phantoms containing MAGIC (Methacrylic and Ascorbic acid in Gelatin Initiated by Copper) normoxic polymer gel were used under the slab phantoms during irradiation. MR images of the irradiated gel phantoms were obtained to determine their R2 (spin-spin) relaxation maps for conversion to absorbed dose. One- and two-dimensional lateral dose profiles were acquired at depths of 1 and 4 cm for 8 and 15 MeV electron beams. The results were compared with the doses measured by a diode detector at the same positions. In addition, the dose distribution in the axial orientation was measured by the gel dosimeter. The slope and intercept for the R2 versus dose curve were 0.509 ± 0.002 Gy⁻¹ s⁻¹ and 4.581 ± 0.005 s⁻¹, respectively. No significant variation in dose-R2 response was seen for the two electron energies within the applied dose ranges. The mean dose difference between the measured gel dose profiles was smaller than 3% compared to those measured by the diode detector. These results provide further demonstration that electron dose distributions are significantly altered in the presence of tissue inhomogeneities such as bone and air cavity and that MAGIC gel is a useful tool for 3-dimensional dose visualization and qualitative assessment of tissue inhomogeneity effects in electron beam dosimetry.
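    Given the reported linear R2-dose calibration, converting an R2 map to dose is a direct inversion; the sketch below uses the slope and intercept quoted above and a made-up R2 array.

    ```python
    import numpy as np

    SLOPE = 0.509       # Gy^-1 s^-1, reported slope of the R2-dose calibration
    INTERCEPT = 4.581   # s^-1, reported R2 of the unirradiated gel

    def r2_to_dose(r2_map):
        """Invert the linear calibration R2 = SLOPE * dose + INTERCEPT."""
        return (r2_map - INTERCEPT) / SLOPE

    r2 = np.array([[4.6, 5.6, 6.6],
                   [5.1, 6.1, 7.1]])          # hypothetical R2 values in s^-1
    print(np.round(r2_to_dose(r2), 2))        # dose map in Gy
    ```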

  15. Actual distribution of Cronobacter spp. in industrial batches of powdered infant formula and consequences for performance of sampling strategies

    NARCIS (Netherlands)

    Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.

    2011-01-01

    The actual spatial distribution of microorganisms within a batch of food influences the results of sampling for microbiological testing when this distribution is non-homogeneous. In the case of pathogens being non-homogeneously distributed, it markedly influences public health risk. This study

  16. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels that have similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of interior voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading dose quality.
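    One way to realize the described scheme (keep all boundary voxels, cluster interior voxels by their influence-matrix signatures, draw a fixed fraction from each cluster) is sketched below with a toy influence matrix and k-means as the clustering step; the abstract does not specify the actual clustering algorithm or data structures, so these are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)

    # Toy influence matrix: dose to 2000 organ voxels from 50 beamlets.
    n_voxels, n_beamlets = 2000, 50
    influence = rng.gamma(shape=2.0, scale=1.0, size=(n_voxels, n_beamlets))
    is_boundary = rng.random(n_voxels) < 0.15          # boundary voxels are always kept

    def sample_voxels(influence, is_boundary, rate=0.10, n_clusters=40, rng=rng):
        """Keep all boundary voxels; cluster interior voxels by their influence-matrix
        rows and draw a fraction `rate` from each cluster."""
        interior = np.flatnonzero(~is_boundary)
        labels = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit_predict(
            influence[interior])
        chosen = [np.flatnonzero(is_boundary)]
        for c in range(n_clusters):
            members = interior[labels == c]
            if members.size == 0:
                continue
            k = max(1, int(round(rate * members.size)))
            chosen.append(rng.choice(members, size=k, replace=False))
        return np.concatenate(chosen)

    selected = sample_voxels(influence, is_boundary)
    print(f"{selected.size} of {n_voxels} voxels kept for optimization")
    ```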

  17. DISCLOSING THE RADIO LOUDNESS DISTRIBUTION DICHOTOMY IN QUASARS: AN UNBIASED MONTE CARLO APPROACH APPLIED TO THE SDSS-FIRST QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Balokovic, M. [Department of Astronomy, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Smolcic, V. [Argelander-Institut fuer Astronomie, Auf dem Hugel 71, D-53121 Bonn (Germany); Ivezic, Z. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Zamorani, G. [INAF-Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Schinnerer, E. [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, D-69117 Heidelberg (Germany); Kelly, B. C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States)

    2012-11-01

    We investigate the dichotomy in the radio loudness distribution of quasars by modeling their radio emission and various selection effects using a Monte Carlo approach. The existence of two physically distinct quasar populations, the radio-loud and radio-quiet quasars, is controversial and over the last decade a bimodal distribution of radio loudness of quasars has been both affirmed and disputed. We model the quasar radio luminosity distribution with simple unimodal and bimodal distribution functions. The resulting simulated samples are compared to a fiducial sample of 8300 quasars drawn from the SDSS DR7 Quasar Catalog and combined with radio observations from the FIRST survey. Our results indicate that the SDSS-FIRST sample is best described by a radio loudness distribution which consists of two components, with (12 {+-} 1)% of sources in the radio-loud component. On the other hand, the evidence for a local minimum in the loudness distribution (bimodality) is not strong and we find that previous claims for its existence were probably affected by the incompleteness of the FIRST survey close to its faint limit. We also investigate the redshift and luminosity dependence of the radio loudness distribution and find tentative evidence that at high redshift radio-loud quasars were rarer, on average louder, and exhibited a smaller range in radio loudness. In agreement with other recent work, we conclude that the SDSS-FIRST sample strongly suggests that the radio loudness distribution of quasars is not a universal function, and that more complex models than presented here are needed to fully explain available observations.

  18. DISCLOSING THE RADIO LOUDNESS DISTRIBUTION DICHOTOMY IN QUASARS: AN UNBIASED MONTE CARLO APPROACH APPLIED TO THE SDSS-FIRST QUASAR SAMPLE

    International Nuclear Information System (INIS)

    Baloković, M.; Smolčić, V.; Ivezić, Ž.; Zamorani, G.; Schinnerer, E.; Kelly, B. C.

    2012-01-01

    We investigate the dichotomy in the radio loudness distribution of quasars by modeling their radio emission and various selection effects using a Monte Carlo approach. The existence of two physically distinct quasar populations, the radio-loud and radio-quiet quasars, is controversial and over the last decade a bimodal distribution of radio loudness of quasars has been both affirmed and disputed. We model the quasar radio luminosity distribution with simple unimodal and bimodal distribution functions. The resulting simulated samples are compared to a fiducial sample of 8300 quasars drawn from the SDSS DR7 Quasar Catalog and combined with radio observations from the FIRST survey. Our results indicate that the SDSS-FIRST sample is best described by a radio loudness distribution which consists of two components, with (12 ± 1)% of sources in the radio-loud component. On the other hand, the evidence for a local minimum in the loudness distribution (bimodality) is not strong and we find that previous claims for its existence were probably affected by the incompleteness of the FIRST survey close to its faint limit. We also investigate the redshift and luminosity dependence of the radio loudness distribution and find tentative evidence that at high redshift radio-loud quasars were rarer, on average louder, and exhibited a smaller range in radio loudness. In agreement with other recent work, we conclude that the SDSS-FIRST sample strongly suggests that the radio loudness distribution of quasars is not a universal function, and that more complex models than presented here are needed to fully explain available observations.

  19. Ancylostoma caninum: calibration and comparison of diagnostic accuracy of flotation in tube, McMaster and FLOTAC in faecal samples of dogs.

    Science.gov (United States)

    Cringoli, Giuseppe; Rinaldi, Laura; Maurelli, Maria Paola; Morgoglione, Maria Elena; Musella, Vincenzo; Utzinger, Jürg

    2011-05-01

    We performed a calibration of flotation in tube, McMaster and FLOTAC to determine the optimal flotation solution (FS) and the influence of faecal preservation for the diagnosis of Ancylostoma caninum in dogs, and compared the accuracy of the three copromicroscopic techniques. Among nine different FS, sodium chloride and sodium nitrate performed best for detection and quantification of A. caninum eggs. Faecal samples, either fresh or preserved in formalin 5%, resulted in higher A. caninum egg counts, compared to frozen samples or preserved in formalin 10% or sodium acetate-acetic acid-formalin. FLOTAC consistently resulted in higher A. caninum eggs per gram of faeces (EPG) and lower coefficient of variation (CV) than McMaster and flotation in tube. The best results in terms of mean faecal egg counts (highest value, i.e. 117.0EPG) and CV (lowest value, i.e. 4.8%) were obtained with FLOTAC using sodium chloride and faecal samples preserved in formalin 5%. Our findings suggest that the FLOTAC technique should be considered for the diagnosis of A. caninum in dogs. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. The hybrid model for sampling multiple elastic scattering angular deflections based on Goudsmit-Saunderson theory

    Directory of Open Access Journals (Sweden)

    Wasaye Muhammad Abdul

    2017-01-01

    Full Text Available An algorithm for the Monte Carlo simulation of electron multiple elastic scattering based on the framework of SuperMC (Super Monte Carlo simulation program for nuclear and radiation process) is presented. This paper describes efficient and accurate methods by which the multiple scattering angular deflections are sampled. The Goudsmit-Saunderson theory of multiple scattering has been used for sampling angular deflections. Differential cross-sections of electrons and positrons by neutral atoms have been calculated by using Dirac partial wave program ELSEPA. The Legendre coefficients are accurately computed by using the Gauss-Legendre integration method. Finally, a novel hybrid method for sampling angular distribution has been developed. The model uses an efficient rejection sampling method for low-energy electrons (<500 keV) and longer path lengths (>500 mean free paths). For small path lengths, a simple, efficient and accurate analytical distribution function has been proposed. The latter uses adjustable parameters determined from fitting the Goudsmit-Saunderson angular distribution. A discussion of the sampling efficiency and accuracy of this newly developed algorithm is given. The efficiency of the rejection sampling algorithm is at least 50% for electron kinetic energies less than 500 keV and longer path lengths (>500 mean free paths). Monte Carlo simulation results are then compared with measured angular distributions of Ross et al. The comparison shows that our results are in good agreement with experimental measurements.
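    The rejection-sampling step can be illustrated as below for an angular distribution given as a Legendre series in mu = cos(theta); the coefficients are invented (chosen so the density stays non-negative) rather than derived from transport cross sections, and the analytical small-path-length model is not reproduced.

    ```python
    import numpy as np
    from numpy.polynomial.legendre import legval

    rng = np.random.default_rng(9)

    # Hypothetical coefficients of f(mu) = sum_l c_l P_l(mu); real values would
    # come from the Goudsmit-Saunderson expansion for the chosen path length.
    coeffs = np.array([0.5, 0.6, 0.3, 0.1])

    def sample_mu(n, coeffs, rng):
        """Rejection sampling of mu in [-1, 1] from a Legendre-series density."""
        grid = np.linspace(-1.0, 1.0, 2001)
        f_max = legval(grid, coeffs).max()             # envelope: uniform * f_max
        out = np.empty(0)
        while out.size < n:
            mu = rng.uniform(-1.0, 1.0, size=2 * (n - out.size))
            accept = rng.uniform(0.0, f_max, size=mu.size) < legval(mu, coeffs)
            out = np.concatenate([out, mu[accept]])
        return out[:n]

    mus = sample_mu(100_000, coeffs, rng)
    print("mean cos(theta):", mus.mean().round(3))     # ~0.4 for these coefficients
    ```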

  1. A system for on-line monitoring of light element concentration distributions in thin samples

    NARCIS (Netherlands)

    Brands, P.J.M.; Mutsaers, P.H.A.; Voigt, de M.J.A.

    1999-01-01

    At the Cyclotron Laboratory, a scanning proton microprobe is used to determine concentration distributions in biomedical samples. The data acquired in these measurements used to be analysed in a time consuming off-line analysis. To avoid the loss of valuable measurement and analysis time, DYANA was

  2. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    Full Text Available When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, the classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, statistics and probability value equations of non-parametric two-sample Kolmogorov-Smirnov test and parametric Student’s t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers, k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in the overall classification accuracies and produced more accurate classification maps when compared to the ground truth image.
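    A rough sketch of in-segment multiple sampling with the Kolmogorov-Smirnov similarity measure, for a single band and two hypothetical classes; the sample sizes, number of draws and the score-averaging rule are assumptions, not the authors' exact settings.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(2)

    def segment_class_score(segment_pixels, class_reference, n_draws=25,
                            sample_size=50, rng=rng):
        """Average two-sample KS p-value between repeated random draws from a
        segment and a reference sample of a candidate class (one band)."""
        p_values = []
        for _ in range(n_draws):
            draw = rng.choice(segment_pixels,
                              size=min(sample_size, segment_pixels.size),
                              replace=False)
            p_values.append(ks_2samp(draw, class_reference).pvalue)
        return float(np.mean(p_values))

    # Hypothetical single-band reflectance values.
    segment = rng.normal(0.32, 0.08, size=600)      # pixels of one image segment
    grass_ref = rng.normal(0.30, 0.06, size=300)    # training pixels for "grass"
    road_ref = rng.normal(0.15, 0.03, size=300)     # training pixels for "roads"

    print("grass score:", segment_class_score(segment, grass_ref))
    print("road score:", segment_class_score(segment, road_ref))
    # The segment is assigned to the class with the highest average score.
    ```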

  3. Molecular dynamics equation designed for realizing arbitrary density: Application to sampling method utilizing the Tsallis generalized distribution

    International Nuclear Information System (INIS)

    Fukuda, Ikuo; Nakamura, Haruki

    2010-01-01

    Several molecular dynamics techniques applying the Tsallis generalized distribution are presented. We have developed a deterministic dynamics to generate an arbitrary smooth density function ρ. It creates a measure-preserving flow with respect to the measure ρdω and realizes the density ρ under the assumption of ergodicity. It can thus be used to investigate physical systems that obey such a distribution density. Using this technique, the Tsallis distribution density based on a full energy function form along with the Tsallis index q ≥ 1 can be created. Because the effective support of the Tsallis distribution in phase space is broad compared with that of the conventional Boltzmann-Gibbs (BG) distribution, and because the corresponding energy-surface deformation does not change the energy minimum points, the dynamics enhances physical state sampling, in particular for a rugged energy surface spanned by a complicated system. Another feature of the Tsallis distribution is that it provides a greater degree of nonlinearity than the BG distribution in the deterministic dynamics equation, which is very useful for effectively attaining the ergodicity of the dynamical system constructed according to the scheme. Combining such methods with the reconstruction technique of the BG distribution, we can obtain the information consistent with the BG ensemble and create the corresponding free energy surface. We demonstrate several sampling results obtained from the systems typical for benchmark tests in MD and from biomolecular systems.

  4. Effects of XPS operational parameters on investigated sample surfaces

    International Nuclear Information System (INIS)

    Mrad, O.; Ismail, I.

    2013-04-01

    In this work, we studied the effects of the operating conditions of the X-ray photoelectron spectroscopy (XPS) analysis technique on the investigated samples. First, the performance of the whole system has been verified, as well as the accuracy of the analysis. Afterwards, the problem of the analysis of insulating samples caused by the charge buildup on the surface has been studied. A low-energy electron beam (<100 eV) has been applied to compensate the surface charge. The effect of X-rays on the samples has been assessed and was found to be nondestructive within the analysis time. The effect of low- and high-energy electron beams on the sample surface has been investigated. High-energy electrons were found to have a destructive effect on organic samples. The sample heating procedure has been tested and its effect on the chemical state of the surface was followed. Finally, the ion source was used to determine the element distribution and the chemical state at different depths of the sample. A method has been proposed to determine these depths (author).

  5. Testing the accuracy of clustering redshifts with simulations

    Science.gov (United States)

    Scottez, V.; Benoit-Lévy, A.; Coupon, J.; Ilbert, O.; Mellier, Y.

    2018-03-01

    We explore the accuracy of clustering-based redshift inference within the MICE2 simulation. This method uses the spatial clustering of galaxies between a spectroscopic reference sample and an unknown sample, and this study gives an estimate of the accuracy reachable with the method. First, we discuss the requirements on the number of objects in the two samples, confirming that this method does not require a representative spectroscopic sample for calibration. In the context of the next generation of cosmological surveys, we estimated that the density of the Quasi Stellar Objects in BOSS allows us to reach 0.2 per cent accuracy in the mean redshift. Secondly, we estimate individual redshifts for galaxies in the densest regions of colour space (~30 per cent of the galaxies) without using the photometric redshift procedure. The advantage of this procedure is threefold. It allows: (i) the use of cluster-zs for any field in astronomy, (ii) the possibility to combine photo-zs and cluster-zs to get an improved redshift estimation, and (iii) the use of cluster-zs to define tomographic bins for weak lensing. Finally, we explore this last option and build five cluster-z selected tomographic bins from redshift 0.2 to 1. We found a bias on the mean redshift estimate of 0.002 per bin. We conclude that cluster-zs could be used as a primary redshift estimator by the next generation of cosmological surveys.

  6. Accuracy, precision, and lower detection limits (a deficit reduction approach)

    International Nuclear Information System (INIS)

    Bishop, C.T.

    1993-01-01

    The evaluation of the accuracy, precision and lower detection limits of the determination of trace radionuclides in environmental samples can become quite sophisticated and time consuming. This in turn could add significant cost to the analyses being performed. In the present method, a "deficit reduction approach" has been taken to keep costs low but at the same time provide defensible data. In order to measure the accuracy of a particular method, reference samples are measured over the time period that the actual samples are being analyzed. Using a Lotus spreadsheet, data are compiled and an average accuracy is computed. If pairs of reference samples are analyzed, then precision can also be evaluated from the duplicate data sets. The standard deviation can be calculated if the reference concentrations of the duplicates are all in the same general range. Laboratory blanks are used to estimate the lower detection limits. The lower detection limit is calculated as 4.65 times the standard deviation of a set of blank determinations made over a given period of time. A Lotus spreadsheet is again used to compile data, and LDLs over different periods of time can be compared.
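
    A minimal sketch of the spreadsheet-style bookkeeping described above, with invented numbers: average accuracy from repeated reference-sample analyses, precision from duplicate pairs, and the lower detection limit as 4.65 times the standard deviation of the blanks.

```python
import numpy as np

reference_known = 12.0                                    # certified value of the reference sample
reference_measured = np.array([11.6, 12.3, 11.9, 12.4])   # repeated reference analyses
duplicates = np.array([[11.8, 12.1], [12.5, 12.2], [11.7, 11.9]])  # duplicate pairs
blanks = np.array([0.12, 0.08, 0.15, 0.10, 0.09])         # blank determinations over time

accuracy = np.mean(reference_measured / reference_known)          # average recovery
precision = np.std(duplicates[:, 0] - duplicates[:, 1], ddof=1)   # spread of duplicate differences
ldl = 4.65 * np.std(blanks, ddof=1)                               # lower detection limit as defined above

print(f"average accuracy (recovery) = {accuracy:.3f}")
print(f"duplicate precision (s.d. of differences) = {precision:.3f}")
print(f"lower detection limit = {ldl:.3f}")
```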

  7. Numerical simulation of permanent magnet method: Influence of experimental conditions on accuracy of j_C-distribution

    Energy Technology Data Exchange (ETDEWEB)

    Takayama, T., E-mail: takayama@yz.yamagata-u.ac.j [Faculty of Engineering, Yamagata University, 4-3-16, Johnan, Yonezawa, Yamagata 992-8510 (Japan); Kamitani, A.; Tanaka, A. [Graduate School of Science and Engineering, Yamagata University, 4-3-16, Johnan, Yonezawa, Yamagata 992-8510 (Japan)

    2010-11-01

    Influence of the magnet position on the determination of the distribution of the critical current density in a high-temperature superconducting (HTS) thin film has been investigated numerically. For this purpose, a numerical code has been developed for analyzing the shielding current density in a HTS sample. By using the code, the permanent magnet method is reproduced. The results of computations show that, even if the center of the permanent magnet is located near the film edge, the maximum repulsive force is roughly proportional to the critical current density. This means that the distribution of the critical current density in the HTS film can be estimated from the proportionality constants determined by using the relations between the maximum repulsive force and the critical current density.

  8. Production of vegetation samples containing radionuclides gamma emitters to attend the interlaboratory programs

    International Nuclear Information System (INIS)

    Souza, Poliana Santos de

    2016-01-01

    The production of environmental samples such as soil, sediment, water and vegetation containing radionuclides for intercomparison tests is a very important contribution to environmental monitoring, since laboratories that carry out such monitoring need to demonstrate that their results are reliable. The IRD National Intercomparison Program (PNI) produces and distributes environmental samples containing radionuclides used to check laboratory performance. This work demonstrates the feasibility of producing vegetation (grass) samples containing 60Co, 65Zn, 134Cs and 137Cs by the spiked sample method for the PNI. The preparation and the statistical tests followed the recommendations of ISO Guides 34 and 35. The grass samples were dried, ground and passed through a 250 μm sieve, and 500 g of vegetation was treated in each procedure. Samples were treated by two different procedures: 1) homogenizing the radioactive solution with the vegetation by hand and drying in an oven, and 2) homogenizing the radioactive solution with the vegetation in a rotary evaporator and drying in an oven. The theoretical activity concentration of the radionuclides in the grass ranged from 593 Bq/kg to 683 Bq/kg. After gamma spectrometry analysis, the results of both procedures were compared in terms of accuracy, precision, homogeneity and stability. The accuracy, precision and short-term stability of both methods were similar, but the evaporation method failed the homogeneity test for the radionuclides 60Co and 134Cs. Based on the comparison between procedures, the manual agitation procedure was chosen for producing the grass samples for the PNI. The accuracy of the chosen procedure, represented by the uncertainty relative to the theoretical value, ranged between -1.1 and 5.1%, and the precision between 0.6 and 6.5%. These results support the choice of this procedure for the production of grass samples for the PNI. (author)

  9. The Accuracy of GBM GRB Localizations

    Science.gov (United States)

    Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.

    2010-03-01

    We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on timescales of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and with the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.

  10. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    Science.gov (United States)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there are few studies of the confidence intervals that indicate the prediction accuracy of quantiles for the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there is little difference in the estimated quantiles between ML and PWM, while MOM gives distinctly different estimates.
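
    As a rough illustration of such Monte Carlo experiments (not the paper's code), the sketch below estimates a T-year quantile and its sampling distribution by repeated simulation and ML refitting; SciPy's genlogistic (a Type I generalized logistic) is used as a stand-in, since the hydrological GL parameterization and the MOM/PWM estimators differ from what is shown here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true = dict(c=1.5, loc=50.0, scale=10.0)   # "true" parameters of the stand-in distribution
T = 100                                    # return period
p = 1.0 - 1.0 / T                          # non-exceedance probability
n, n_sim = 50, 500                         # sample size and number of Monte Carlo repetitions

q_hat = np.empty(n_sim)
for i in range(n_sim):
    x = stats.genlogistic.rvs(size=n, random_state=rng, **true)
    c, loc, scale = stats.genlogistic.fit(x)              # ML refit on each synthetic sample
    q_hat[i] = stats.genlogistic.ppf(p, c, loc, scale)    # estimated T-year quantile

q_true = stats.genlogistic.ppf(p, **true)
rbias = np.mean(q_hat - q_true) / q_true
rrmse = np.sqrt(np.mean((q_hat - q_true) ** 2)) / q_true
ci = np.percentile(q_hat, [2.5, 97.5])                    # empirical 95% interval
print(f"RBIAS = {rbias:.3f}, RRMSE = {rrmse:.3f}, 95% interval for the T=100 quantile: {ci}")
```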

  11. A simulation study of likelihood inference procedures in rayleigh distribution with censored data

    International Nuclear Information System (INIS)

    Baklizi, S. A.; Baker, H. M.

    2001-01-01

    Inference procedures based on the likelihood function are considered for the one-parameter Rayleigh distribution with type 1 and type 2 censored data. Using simulation techniques, the finite sample performances of the maximum likelihood estimator and the large-sample likelihood interval estimation procedures based on the Wald, the Rao, and the likelihood ratio statistics are investigated. It appears that the maximum likelihood estimator is unbiased. The approximate variance estimates obtained from the asymptotic normal distribution of the maximum likelihood estimator are accurate under type 2 censored data, while they tend to be smaller than the actual variances for type 1 censored data of small size. It also appears that interval estimation based on the Wald and Rao statistics needs a much larger sample size than interval estimation based on the likelihood ratio statistic to attain reasonable accuracy. (authors). 15 refs., 4 tabs
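
    A minimal sketch (my illustration, not the authors' code) of ML estimation for the Rayleigh scale parameter under type 2 censoring, where only the r smallest of n observations are available, together with a likelihood-ratio interval obtained by a simple grid search; the closed-form MLE used is the standard result for this censoring scheme.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sigma_true, n, r = 2.0, 30, 20
x = np.sort(stats.rayleigh.rvs(scale=sigma_true, size=n, random_state=rng))[:r]  # observed order statistics

S = np.sum(x**2) + (n - r) * x[-1] ** 2      # censored observations contribute x_(r)^2 each
sigma2_mle = S / (2 * r)                     # closed-form MLE of sigma^2

def loglik(sigma2):
    """Log-likelihood of the type 2 censored Rayleigh sample (additive constants dropped)."""
    return np.sum(np.log(x)) - r * np.log(sigma2) - S / (2 * sigma2)

grid = np.linspace(0.3 * sigma2_mle, 3.0 * sigma2_mle, 2000)
lr = 2 * (loglik(sigma2_mle) - loglik(grid))
inside = grid[lr <= stats.chi2.ppf(0.95, df=1)]          # 95% likelihood ratio interval
print(f"sigma_hat = {np.sqrt(sigma2_mle):.3f}, "
      f"95% LR CI for sigma: [{np.sqrt(inside.min()):.3f}, {np.sqrt(inside.max()):.3f}]")
```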

  12. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    International Nuclear Information System (INIS)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J.

    2007-03-01

    The TRITGO code was developed for estimating the tritium production and distribution of high temperature gas cooled reactors (HTGR), especially GTMHR350 by General Atomics. In this study, the tritium production and distribution of NHDD was analyzed by using the TRITGO code. The code was improved with a simple method to calculate the tritium amount in the IS Loop. The improved TRITGO input for the sample calculation was prepared based on GTMHR600, because the NHDD has been designed with reference to GTMHR600; the GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS Loop is 0.56 Bq/g-H2. This is a very satisfying result considering that the tritium activity limit in the Japanese Regulation Guide is 5.6 Bq/g-H2. The basic system for analyzing tritium production and distribution using TRITGO was successfully constructed. However, some uncertainties remain in the tritium distribution models and in the suggested method for the IS Loop, and the current input was prepared not for NHDD but for GTMHR600. The qualitative analysis of the distribution model and the IS Loop model and the quantitative analysis of the input should be done in the future.

  13. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J

    2007-03-15

    The TRITGO code was developed for estimating the tritium production and distribution of high temperature gas cooled reactors (HTGR), especially GTMHR350 by General Atomics. In this study, the tritium production and distribution of NHDD was analyzed by using the TRITGO code. The code was improved with a simple method to calculate the tritium amount in the IS Loop. The improved TRITGO input for the sample calculation was prepared based on GTMHR600, because the NHDD has been designed with reference to GTMHR600; the GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS Loop is 0.56 Bq/g-H2. This is a very satisfying result considering that the tritium activity limit in the Japanese Regulation Guide is 5.6 Bq/g-H2. The basic system for analyzing tritium production and distribution using TRITGO was successfully constructed. However, some uncertainties remain in the tritium distribution models and in the suggested method for the IS Loop, and the current input was prepared not for NHDD but for GTMHR600. The qualitative analysis of the distribution model and the IS Loop model and the quantitative analysis of the input should be done in the future.

  14. Efficient sampling over rough energy landscapes with high barriers: A combination of metadynamics with integrated tempering sampling

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Y. Isaac [Institute of Theoretical and Computational Chemistry, College of Chemistry and Molecular Engineering, Peking University, Beijing 100871 (China); Zhang, Jun; Che, Xing; Yang, Lijiang; Gao, Yi Qin, E-mail: gaoyq@pku.edu.cn [Institute of Theoretical and Computational Chemistry, College of Chemistry and Molecular Engineering, Peking University, Beijing 100871 (China); Biodynamic Optical Imaging Center, Peking University, Beijing 100871 (China)

    2016-03-07

    In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and calculate overall thermodynamic properties using molecular dynamics simulations, we developed and implemented a sampling strategy that combines metadynamics with the (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked it in the calculation of the ϕ-ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C-H activation in solutions and to investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.
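
    As a minimal illustration of the integrated tempering sampling ingredient (a simplified sketch, not the authors' implementation), the code below builds the ITS effective potential from a sum of Boltzmann factors over a temperature ladder and the reweighting factor that restores the target-temperature ensemble; the toy double-well potential, the ladder, and the weights n_k are invented, and in practice the n_k are tuned iteratively.

```python
import numpy as np

betas = 1.0 / np.linspace(1.0, 3.0, 8)      # inverse temperatures of the ladder
nk = np.ones_like(betas) / len(betas)       # weights n_k (tuned iteratively in practice)
beta0 = betas[0]                            # target inverse temperature

def u(x):                                   # toy double-well potential with a high barrier
    return 6.0 * (x**2 - 1.0) ** 2

def u_eff(x):                               # U_eff = -(1/beta0) * ln sum_k n_k exp(-beta_k U)
    return -np.log(np.sum(nk[:, None] * np.exp(-betas[:, None] * u(x)), axis=0)) / beta0

def reweight(x):                            # weight restoring the beta0 (BG) ensemble, up to normalization
    return np.exp(-beta0 * u(x)) / np.sum(nk[:, None] * np.exp(-betas[:, None] * u(x)), axis=0)

x = np.linspace(-2.0, 2.0, 5)
print("U     :", u(x))
print("U_eff :", u_eff(x))                  # the barrier is lower on the effective surface
print("weight:", reweight(x))
```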

  15. Emotional state and its impact on voice authentication accuracy

    Science.gov (United States)

    Voznak, Miroslav; Partila, Pavol; Penhaker, Marek; Peterek, Tomas; Tomala, Karel; Rezac, Filip; Safarik, Jakub

    2013-05-01

    The paper deals with increasing the accuracy of voice authentication methods. The developed algorithm first extracts segmental parameters from the voice, such as the zero crossing rate, the fundamental frequency and Mel-frequency cepstral coefficients. Based on these parameters, a neural network classifier detects the speaker's emotional state. These parameters shape the distribution of neurons in Kohonen maps, forming clusters of neurons on the map that characterize a particular emotional state. Using regression analysis, we can calculate the function of the parameters of individual emotional states. This relationship increases voice authentication accuracy and prevents unjust rejection.

  16. Photon event distribution sampling: an image formation technique for scanning microscopes that permits tracking of sub-diffraction particles with high spatial and temporal resolutions.

    Science.gov (United States)

    Larkin, J D; Publicover, N G; Sutko, J L

    2011-01-01

    In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
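
    A minimal sketch (my illustration) of the image-formation and localization ideas described above: each detected photon keeps an estimated position of origin, a Gaussian uncertainty kernel is summed per photon to form the image, and the particle is localized from the photon positions themselves; the photon positions and kernel width are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
true_xy = np.array([2.5, 3.1])                             # hypothetical particle position (um)
photons = true_xy + rng.normal(0.0, 0.25, size=(200, 2))   # per-photon position estimates

# Localization from the photon positions themselves (multivariate-normal style):
mu = photons.mean(axis=0)
se = photons.std(axis=0, ddof=1) / np.sqrt(len(photons))
print(f"estimated position: {mu}, standard error per axis: {se}")

# Image formation: sum a Gaussian uncertainty kernel per photon on a fine grid.
grid = np.linspace(0.0, 6.0, 241)
gx, gy = np.meshgrid(grid, grid, indexing="ij")
sigma = 0.25                                               # kernel width ~ position uncertainty
image = np.zeros_like(gx)
for px, py in photons:
    image += np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * sigma**2))
print("peak of the summed-kernel image at:", np.unravel_index(image.argmax(), image.shape))
```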

  17. The Mann-Whitney U: A Test for Assessing Whether Two Independent Samples Come from the Same Distribution

    Directory of Open Access Journals (Sweden)

    Nadim Nachar

    2008-03-01

    Full Text Available It is often difficult, particularly when conducting research in psychology, to have access to large normally distributed samples. Fortunately, there are statistical tests to compare two independent groups that do not require large normally distributed samples. The Mann-Whitney U is one of these tests. In the following work, a summary of this test is presented, together with an explanation of the logic underlying it and of its application. Moreover, the strengths and weaknesses of the Mann-Whitney U are mentioned. One major limit of the Mann-Whitney U is that the type I error or alpha (α) is amplified in a situation of heteroscedasticity.
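
    A minimal sketch of applying the Mann-Whitney U test to two small, independent, non-normally distributed samples; the data are invented for illustration.

```python
import numpy as np
from scipy import stats

group_a = np.array([3.1, 4.7, 2.8, 5.9, 3.3, 4.1, 12.0])   # skewed scores, group A
group_b = np.array([6.2, 7.9, 5.4, 8.8, 6.7, 9.1])          # scores, group B

u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
# A small p suggests the two samples do not come from the same distribution,
# without assuming normality; note the heteroscedasticity caveat mentioned above.
```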

  18. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used to date. An a posteriori error estimator is rigorously obtained for one-dimensional systems which, for certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  19. Matrix effect on the detection limit and accuracy in total reflection X-ray fluorescence analysis of trace elements in environmental and biological samples

    International Nuclear Information System (INIS)

    Karjou, J.

    2007-01-01

    The effect of matrix contents on the detection limit of total reflection X-ray fluorescence analysis was experimentally investigated using a set of multielement standard solutions (500 ng/mL of each element) in variable concentrations of NH4NO3. It was found that high matrix concentrations, i.e. 0.1-10% NH4NO3, had a strong effect on the detection limits for all investigated elements, whereas no effect was observed at lower matrix concentrations, i.e. 0-0.1% NH4NO3. The effect of soil and blood sample masses on the detection limit was also studied. The results showed that the detection limit expressed in concentration units (μg/g) decreases with increasing sample mass, whereas the detection limit expressed in mass units (ng) increases with increasing sample mass. An optimal blood sample mass of ca. 200 μg was sufficient to improve the detection limit of Se determination by total reflection X-ray fluorescence. The capability of total reflection X-ray fluorescence to analyze different kinds of samples is discussed with respect to accuracy and detection limits based on certified and reference materials. Direct analysis of unknown water samples from several sources is also presented in this work.

  20. Diagnostic accuracy of slit skin smears in leprosy

    International Nuclear Information System (INIS)

    Naveed, T.; Shaikh, Z.

    2015-01-01

    To determine the diagnostic accuracy of slit skin smears in clinically suspected patients of leprosy, using histopathology as the gold standard. Study Design: Validation study. Place and Duration of Study: The study was carried out at Rawalpindi Leprosy Hospital, the Dermatology Department of Military Hospital (MH) and the Armed Forces Institute of Pathology (AFIP), Rawalpindi, from 18th August 2012 to 18th February 2013. Methods: Appropriate technical and ethical approval for the study and patient consent were obtained. All suspected patients of leprosy of any age and either gender having typical hypo-aesthetic or anesthetic, erythematous or hypo-pigmented scaly skin lesions on any part of the body were included in this study. Patients who had already received treatment for leprosy, patients with pure neural leprosy, patients not giving their consent for skin biopsy and patients with lepra reactions were excluded. Forty-eight patients fulfilling the inclusion criteria were included in the study. The sample size was calculated using the WHO sample size calculator, taking a confidence level of 95%, an absolute precision of 14% and an anticipated population proportion of 40%. A non-probability consecutive sampling technique was used to collect the sample. Results: Out of 48 clinically suspected patients of leprosy, skin biopsy confirmed the diagnosis in 34 patients (70.8%), and the slit skin smear had a diagnostic accuracy of 68.75% with a sensitivity of 55.8% and a specificity and positive predictive value of 100%. Conclusion: The study suggests that although slit skin smears are a rapid and inexpensive method of diagnosis, their diagnostic accuracy is low. (author)
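
    A minimal sketch reconstructing the 2x2 table implied by the figures reported above (48 patients, 34 biopsy-positive, sensitivity 55.8%, specificity 100%); the individual cell counts are inferred from those figures rather than taken verbatim from the paper.

```python
total, biopsy_positive = 48, 34
true_positive = 19          # ~55.8% of the 34 biopsy-positive cases detected by slit skin smear (inferred)
false_negative = biopsy_positive - true_positive
true_negative = total - biopsy_positive     # specificity 100% implies no false positives
false_positive = 0

sensitivity = true_positive / biopsy_positive
specificity = true_negative / (true_negative + false_positive)
ppv = true_positive / (true_positive + false_positive) if true_positive else float("nan")
accuracy = (true_positive + true_negative) / total

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"PPV={ppv:.1%}, diagnostic accuracy={accuracy:.2%}")   # ~55.9%, 100%, 100%, 68.75%
```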

  1. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    Science.gov (United States)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires hydrogeological properties to be assigned to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. The hydrogeological property is assumed to follow a multivariate distribution with spatial correlation. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be constructed; statistical sampling therefore plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
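
    A minimal sketch (my illustration, not the study's code) of the LULHS idea: stratified Latin Hypercube samples are mapped to standard normal scores and spatial correlation is imposed through the lower-triangular (Cholesky/LU) factor of the covariance matrix; the grid size and exponential covariance model are example assumptions.

```python
import numpy as np
from scipy import stats, spatial

rng = np.random.default_rng(4)
coords = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)  # 10x10 grid
dist = spatial.distance_matrix(coords, coords)
cov = np.exp(-dist / 3.0)                                  # exponential covariance, range ~3 cells
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(cov)))     # lower-triangular factor

def lhs_uniform(n_realizations, n_vars):
    """Latin Hypercube samples on (0,1): one stratum per realization for each variable."""
    u = (rng.random((n_realizations, n_vars)) + np.arange(n_realizations)[:, None]) / n_realizations
    for j in range(n_vars):
        rng.shuffle(u[:, j])
    return u

z = stats.norm.ppf(lhs_uniform(50, len(coords)))   # 50 realizations of uncorrelated normal scores
fields = z @ L.T                                    # impose spatial correlation (unconditional simulation)
print("realization 0, first row of the grid:", fields[0, :10].round(2))
```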

  2. 234Th distributions in coastal and open ocean waters by non-destructive β-counting

    International Nuclear Information System (INIS)

    Miller, L.A.; Svaeren, I.

    2003-01-01

    Non-destructive β-counting analyses of particulate and dissolved 234Th activities in seawater are simpler but no less precise than traditional radioanalytical methods. The inherent accuracy limitations of the non-destructive β-counting method, particularly in samples likely to be contaminated with anthropogenic nuclides, are alleviated by recounting the samples over several half-lives and fitting the counting data to the 234Th decay curve. Precision (including accuracy, estimated at an average of 3%) is better than 10% for particulate or 5% for dissolved samples. Thorium-234 distributions in the Skagerrak indicated a vigorous, presumably biological, particle export from the surface waters, and while bottom sediment resuspension was not an effective export mechanism, it did strip thorium from the dissolved phase. In the Greenland and Norwegian Seas, we saw clear evidence of particulate export from the surface waters, but at 75 m, total 234Th activities were generally in equilibrium with 238U. (author)
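
    A minimal sketch (with invented counting data) of the decay-curve approach described above: each sample is beta-counted repeatedly over several 234Th half-lives and the counts are fitted to a 234Th decay curve plus a constant background, separating the short-lived 234Th signal from longer-lived contamination.

```python
import numpy as np
from scipy.optimize import curve_fit

half_life_days = 24.1                       # 234Th half-life
lam = np.log(2) / half_life_days

def model(t, a0, background):
    """234Th decay curve plus a constant (long-lived) background."""
    return a0 * np.exp(-lam * t) + background

# Invented example counting data (counts per minute) at days after sampling:
t_obs = np.array([0.0, 7.0, 14.0, 28.0, 56.0, 90.0])
cpm = np.array([4.9, 4.1, 3.4, 2.4, 1.5, 1.1])

popt, pcov = curve_fit(model, t_obs, cpm, p0=(4.0, 1.0))
a0, bkg = popt
err = np.sqrt(np.diag(pcov))
print(f"initial 234Th rate = {a0:.2f} +/- {err[0]:.2f} cpm, background = {bkg:.2f} cpm")
```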

  3. Enhancement of accuracy in shape sensing of surgical needles using optical frequency domain reflectometry in optical fibers.

    Science.gov (United States)

    Parent, Francois; Loranger, Sebastien; Mandal, Koushik Kanti; Iezzi, Victor Lambin; Lapointe, Jerome; Boisvert, Jean-Sébastien; Baiad, Mohamed Diaa; Kadoury, Samuel; Kashyap, Raman

    2017-04-01

    We demonstrate a novel approach to enhance the precision of surgical needle shape tracking based on distributed strain sensing using optical frequency domain reflectometry (OFDR). The precision enhancement is provided by using optical fibers with high scattering properties. Shape tracking of surgical tools using the strain sensing properties of optical fibers has seen increased attention in recent years. Most of the investigations made in this field use fiber Bragg gratings (FBG), which can be used as discrete or quasi-distributed strain sensors. By using a truly distributed sensing approach (OFDR), preliminary results show that the attainable accuracy is comparable to accuracies reported in the literature using FBG sensors for tracking applications (~1 mm). We propose a technique that enhanced our accuracy by 47% using UV-exposed fibers, which have higher light scattering compared to unexposed standard single-mode fibers. Improving the experimental setup will further enhance the accuracy provided by shape tracking using OFDR and will contribute significantly to clinical applications.

  4. Calculation of the effective D-d neutron energy distribution incident on a cylindrical shell sample

    International Nuclear Information System (INIS)

    Gotoh, Hiroshi

    1977-07-01

    A method is proposed to calculate the effective energy distribution of neutrons incident on a cylindrical shell sample placed perpendicularly to the direction of the deuteron beam bombarding a deuterium metal target. The Monte Carlo method is used, and the Fortran program is included. (auth.)

  5. Failure-censored accelerated life test sampling plans for Weibull distribution under expected test time constraint

    International Nuclear Information System (INIS)

    Bai, D.S.; Chun, Y.R.; Kim, J.G.

    1995-01-01

    This paper considers the design of life-test sampling plans based on failure-censored accelerated life tests. The lifetime distribution of products is assumed to be Weibull with a scale parameter that is a log linear function of a (possibly transformed) stress. Two levels of stress higher than the use condition stress, high and low, are used. Sampling plans with equal expected test times at high and low test stresses which satisfy the producer's and consumer's risk requirements and minimize the asymptotic variance of the test statistic used to decide lot acceptability are obtained. The properties of the proposed life-test sampling plans are investigated

  6. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The more precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the non-probability sampling techniques, because the results of the study can then be generalized to the target population.
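
    A minimal sketch of the standard sample size calculation for estimating a proportion, using the factors listed above (confidence level, expected proportion, and required precision); the example numbers are illustrative.

```python
import math
from scipy import stats

def sample_size_proportion(p_expected, margin, confidence=0.95, population=None):
    """Sample size for estimating a proportion with the given precision (margin)."""
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    n = (z**2) * p_expected * (1 - p_expected) / margin**2
    if population is not None:                       # finite population correction
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size_proportion(0.30, 0.05))                   # large (effectively infinite) population
print(sample_size_proportion(0.30, 0.05, population=2000))  # finite study population of 2000
```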

  7. AN ASSESSMENT OF CITIZEN CONTRIBUTED GROUND REFERENCE DATA FOR LAND COVER MAP ACCURACY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    G. M. Foody

    2015-08-01

    Full Text Available It is now widely accepted that an accuracy assessment should be part of a thematic mapping programme. Authoritative good or best practices for accuracy assessment have been defined but are often impractical to implement. Key reasons for this situation are linked to the ground reference data used in the accuracy assessment. Typically, it is a challenge to acquire a large sample of high-quality reference cases in accordance with the sampling designs specified as good practice, and the data collected are normally imperfect to some degree, which limits their value to an accuracy assessment that implicitly assumes a gold-standard reference. Citizen sensors have great potential to aid aspects of accuracy assessment. In particular, they may be able to act as a source of ground reference data that may, for example, reduce sample size problems, but concerns about data quality remain. The relative strengths and limitations of citizen-contributed data for accuracy assessment are reviewed in the context of the authoritative good practices defined for studies of land cover by remote sensing. The article highlights some of the ways that citizen-contributed data have been used in accuracy assessment, as well as some of the problems that require further attention, and indicates some potential ways forward in the future.

  8. Exact run length distribution of the double sampling x-bar chart with estimated process parameters

    Directory of Open Access Journals (Sweden)

    Teoh, W. L.

    2016-05-01

    Full Text Available Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss crucial information about a control chart's performance. It is therefore important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution for the double sampling (DS) X-bar chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes early false alarms, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distribution between the optimal ARL-based and MRL-based DS X-bar charts with estimated process parameters is presented in this paper. Examples of applications are given to aid practitioners in selecting the best design scheme of the DS X-bar chart with estimated process parameters, based on their specific purpose.
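
    As a deliberately simplified illustration (not the paper's method, which handles the DS chart with estimated parameters), the sketch below shows why run length percentiles carry information the ARL alone hides: for a chart whose samples signal independently with probability p, the run length is geometric and its percentiles, including the MRL, follow directly.

```python
import math

def run_length_percentile(p_signal, q):
    """Smallest n with P(RL <= n) >= q for a geometric run length."""
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - p_signal))

p = 1.0 / 370.0                      # in-control signal probability (ARL0 ~ 370)
for q in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"{int(q*100):>2}th percentile of RL: {run_length_percentile(p, q)}")
# The low percentiles show how early a false alarm can occur even though the
# average run length is about 370; the 50th percentile is the MRL.
```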

  9. Environmental DNA method for estimating salamander distribution in headwater streams, and a comparison of water sampling methods.

    Science.gov (United States)

    Katano, Izumi; Harada, Ken; Doi, Hideyuki; Souma, Rio; Minamoto, Toshifumi

    2017-01-01

    Environmental DNA (eDNA) has recently been used for detecting the distribution of macroorganisms in various aquatic habitats. In this study, we applied an eDNA method to estimate the distribution of the Japanese clawed salamander, Onychodactylus japonicus, in headwater streams. Additionally, we compared eDNA detection with the hand-capturing method used for determining the distribution of O. japonicus. For eDNA detection, we designed a qPCR primer/probe set for O. japonicus using the 12S rRNA region. We detected the eDNA of O. japonicus at all sites (with the exception of one) where we also observed them by hand-capturing. Additionally, we detected eDNA at two sites where we were unable to observe individuals using the hand-capturing method. Moreover, we found that the eDNA concentrations and detection rates of the two water sampling areas (stream surface and under stones) were not significantly different, although the eDNA concentration in the water under stones was more varied than that at the surface. We therefore conclude that eDNA methods could be used to determine the distribution of macroorganisms inhabiting headwater systems by using samples collected from the surface of the water.

  10. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    Full Text Available In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.

  11. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted through pre-sampling of points in the failure region, and an importance sampling density is then constructed from the sample distribution in that region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
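
    A minimal sketch of the importance sampling step on a toy limit-state function (not the AP1000 model): the proposal density is centred in the failure region and each failing sample is weighted by the ratio of the nominal to the proposal density.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def limit_state(x):                 # toy performance function: failure when g(x) < 0
    return 5.0 - x[:, 0] - x[:, 1]

nominal = stats.multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))
proposal = stats.multivariate_normal(mean=[2.5, 2.5], cov=np.eye(2))  # centred near the failure region

n = 20_000
x = proposal.rvs(size=n, random_state=rng)
fail = limit_state(x) < 0.0
weights = nominal.pdf(x) / proposal.pdf(x)

p_f = np.mean(fail * weights)
se = np.std(fail * weights, ddof=1) / np.sqrt(n)
print(f"importance sampling estimate: {p_f:.2e} +/- {se:.1e}")
print(f"reference (exact for this toy case): {stats.norm.sf(5.0 / np.sqrt(2)):.2e}")
```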

  12. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  13. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    Science.gov (United States)

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.

  14. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Science.gov (United States)

    2010-07-01

    ... COAL MINE SAFETY AND HEALTH COAL MINE DUST SAMPLING DEVICES Requirements for Continuous Personal Dust... miner whose exposure is being monitored. (b) Accuracy. The ability of a CPDM to determine the true... levels tested, 0.2 to 4.0 mg/m3 for an 8-hour sampling period. (f) Testing conditions. Laboratory and...

  15. Reactor power distribution pattern judging device

    International Nuclear Information System (INIS)

    Ikehara, Tadashi.

    1992-01-01

    The judging device of the present invention comprises a power distribution readout system for taking in power values from fuel segments, a neural network having an experience-learning function that receives the power distribution values as input variables, maps them onto a desirable property and self-organizes the map, and a learning data base storing a plurality of learnt samples. The read power distribution is classified according to its similarity with one of the representative learnt power distributions, and the corresponding state of the reactor core is output as the result of the judgement. When an error is found in the classification, the erroneous cases are additionally learnt by using the experience-learning function, thereby improving the accuracy of the reactor core characteristic estimation. Since the device is mainly based on a neural network having a self-learning function and a pattern classification and judging function, a judging device with a human-like intuitive pattern recognition capability and a pattern experience and learning capability is obtained, making it possible to judge the state of the reactor core accurately. (N.H.)

  16. One-sample determination of glomerular filtration rate (GFR) in children. An evaluation based on 75 consecutive patients

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Lütken; Kanstrup, Inge-Lis; Henriksen, Jens Henrik Sahl

    2013-01-01

    the plasma radioactivity curve. The one-sample clearance was determined from a single plasma sample collected at 60, 90 or 120 min after injection according to the one-pool method. Results. The overall accuracy of one-sample clearance was excellent with mean numeric difference to the reference value of 0.......7-1.7 mL/min. In 64 children, the one-sample clearance was within ± 4 mL/min of the multiple-sample value. However, in 11 children the numeric difference exceeded 4 mL/min (4.4-19.5). Analysis of age, body size, distribution volume, indicator retention time, clearance level, curve fitting, and sampling...... fraction (15%) larger discrepancies are found. If an accurate clearance value is essential a multiple-sample determination should be performed....

  17. Testing an Automated Accuracy Assessment Method on Bibliographic Data

    Directory of Open Access Journals (Sweden)

    Marlies Olensky

    2014-12-01

    Full Text Available This study investigates automated data accuracy assessment as described in data quality literature for its suitability to assess bibliographic data. The data samples comprise the publications of two Nobel Prize winners in the field of Chemistry for a 10-year-publication period retrieved from the two bibliometric data sources, Web of Science and Scopus. The bibliographic records are assessed against the original publication (gold standard and an automatic assessment method is compared to a manual one. The results show that the manual assessment method reflects truer accuracy scores. The automated assessment method would need to be extended by additional rules that reflect specific characteristics of bibliographic data. Both data sources had higher accuracy scores per field than accumulated per record. This study contributes to the research on finding a standardized assessment method of bibliographic data accuracy as well as defining the impact of data accuracy on the citation matching process.

  18. Increased accuracy of starch granule type quantification using mixture distributions.

    Science.gov (United States)

    Tanaka, Emi; Ral, Jean-Phillippe F; Li, Sean; Gaire, Raj; Cavanagh, Colin R; Cullis, Brian R; Whan, Alex

    2017-01-01

    The proportion of granule types in wheat starch is an important characteristic that can affect its functionality. It is widely accepted that granules are either large, disc-shaped A-type granules or small, spherical B-type granules; additionally, there are some reports of tiny C-type granules. The differences between these granule types are due to their carbohydrate composition and crystallinity, which are highly, but not perfectly, correlated with granule size. A majority of the studies that have considered granule types analyse them based on a size threshold rather than chemical composition. This is understandable given the expense of separating starch into different types. While the use of a size threshold to classify granule type is a low-cost measure, it results in misclassification. We present an alternative, statistical method to quantify the proportion of granule types by fitting a mixture distribution, along with an R package, a web-based app and a video tutorial on how to use the web app to enable its straightforward application. Our results show that the reliability of the genotypic effects increases by approximately 60% when the proportions of A-type and B-type granules are estimated by the mixture distribution rather than by the standard size-threshold measure, although there was a marginal drop in reliability for C-type granules, likely due to the low observed genetic variance for C-type granules. The determination of the proportion of granule types from the size distribution is better achieved by using the mixing probabilities from the fit of the mixture distribution rather than a size threshold.
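
    A minimal sketch, in Python rather than the authors' R package, of estimating granule-type proportions as the mixing weights of a mixture fitted to log granule sizes instead of classifying by a hard size threshold; the size data are simulated.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Simulated log-diameters (um): large A-type, small B-type, tiny C-type granules.
log_sizes = np.concatenate([
    rng.normal(np.log(18.0), 0.25, 600),   # A-type
    rng.normal(np.log(5.0), 0.30, 350),    # B-type
    rng.normal(np.log(1.2), 0.35, 50),     # C-type
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(log_sizes)
order = np.argsort(gmm.means_.ravel())[::-1]          # largest mean first (A, B, C)
for label, k in zip("ABC", order):
    print(f"{label}-type: proportion {gmm.weights_[k]:.3f}, "
          f"mean diameter ~{np.exp(gmm.means_[k, 0]):.1f} um")
```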

  19. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    Science.gov (United States)

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  20. 21 CFR 809.40 - Restrictions on the sale, distribution, and use of OTC test sample collection systems for drugs...

    Science.gov (United States)

    2010-04-01

    ... OTC test sample collection systems for drugs of abuse testing. 809.40 Section 809.40 Food and Drugs... Restrictions on the sale, distribution, and use of OTC test sample collection systems for drugs of abuse testing. (a) Over-the-counter (OTC) test sample collection systems for drugs of abuse testing (§ 864.3260...

  1. Spatial distribution of grape root borer (Lepidoptera: Sesiidae) infestations in Virginia vineyards and implications for sampling.

    Science.gov (United States)

    Rijal, J P; Brewster, C C; Bergh, J C

    2014-06-01

    Grape root borer, Vitacea polistiformis (Harris) (Lepidoptera: Sesiidae) is a potentially destructive pest of grape vines, Vitis spp. in the eastern United States. After feeding on grape roots for ≈2 yr in Virginia, larvae pupate beneath the soil surface around the vine base. Adults emerge during July and August, leaving empty pupal exuviae on or protruding from the soil. Weekly collections of pupal exuviae from an ≈1-m-diameter weed-free zone around the base of a grid of sample vines in Virginia vineyards were conducted in July and August, 2008-2012, and their distribution was characterized using both nonspatial (dispersion) and spatial techniques. Taylor's power law showed a significant aggregation of pupal exuviae, based on data from 19 vineyard blocks. Combined use of geostatistical and Spatial Analysis by Distance IndicEs methods indicated evidence of an aggregated pupal exuviae distribution pattern in seven of the nine blocks used for those analyses. Grape root borer pupal exuviae exhibited spatial dependency within a mean distance of 8.8 m, based on the range values of best-fitted variograms. Interpolated and clustering index-based infestation distribution maps were developed to show the spatial pattern of the insect within the vineyard blocks. The temporal distribution of pupal exuviae showed that the majority of moths emerged during the 3-wk period spanning the third week of July and the first week of August. The spatial distribution of grape root borer pupal exuviae was used in combination with temporal moth emergence patterns to develop a quantitative and efficient sampling scheme to assess infestations.

  2. Spatial Distribution and Sampling Plans With Fixed Level of Precision for Citrus Aphids (Hom., Aphididae) on Two Orange Species.

    Science.gov (United States)

    Kafeshani, Farzaneh Alizadeh; Rajabpour, Ali; Aghajanzadeh, Sirous; Gholamian, Esmaeil; Farkhari, Mohammad

    2018-04-02

    Aphis spiraecola Patch, Aphis gossypii Glover, and Toxoptera aurantii Boyer de Fonscolombe are three important aphid pests of citrus orchards. In this study, the spatial distributions of the aphids on two orange species, Satsuma mandarin and Thomson navel, were evaluated using Taylor's power law and Iwao's patchiness regression. In addition, a fixed-precision sequential sampling plan was developed for each species on each host plant using Green's model at precision levels of 0.25 and 0.1. The results revealed that the spatial distribution parameters, and therefore the sampling plan, differed significantly with aphid and host plant species. Taylor's power law provides a better fit for the data than Iwao's patchiness regression. Except for T. aurantii on Thomson navel orange, the spatial distribution patterns of the aphids were aggregated on both citrus species; T. aurantii had a regular dispersion pattern on Thomson navel orange. The optimum sample size varied from 30 to 2,061 shoots on Satsuma mandarin and from 1 to 1,622 shoots on Thomson navel orange, depending on aphid species and the desired precision level. The calculated stop lines of the aphid species on Satsuma mandarin and Thomson navel orange ranged from 0.48 to 19 and from 0.19 to 80.4 aphids per 24 shoots, according to aphid species and desired precision level. The performance of the sampling plan was validated by resampling analysis using the Resampling for Validation of Sampling Plans (RVSP) software. This sampling program is useful for IPM programs targeting these aphids in citrus orchards.
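
    A minimal sketch (with invented count data) of the two standard calculations behind such plans: fitting Taylor's power law s² = a·m^b by regressing log variance on log mean, and computing the fixed-precision optimum sample size n = a·m^(b-2)/D², where D is the desired precision expressed as SE/mean; Green's stop lines themselves are not reproduced here.

```python
import numpy as np

# Mean and variance of aphid counts per shoot from several hypothetical fields:
means = np.array([0.8, 2.1, 4.5, 9.3, 15.0, 22.4])
variances = np.array([1.3, 5.2, 16.0, 52.1, 110.0, 201.0])

b, log_a = np.polyfit(np.log(means), np.log(variances), 1)   # log s^2 = log a + b log m
a = np.exp(log_a)
print(f"Taylor's power law: a = {a:.2f}, b = {b:.2f}  (b > 1 indicates aggregation)")

def optimum_sample_size(mean_density, precision):
    """Fixed-precision sample size from Taylor's coefficients."""
    return np.ceil(a * mean_density ** (b - 2.0) / precision**2)

for D in (0.25, 0.10):
    print(f"D = {D}: n at m = 2 -> {optimum_sample_size(2.0, D):.0f}, "
          f"n at m = 15 -> {optimum_sample_size(15.0, D):.0f}")
```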

  3. Accuracy and repeatability of anthropometric facial measurements using cone beam computed tomography

    NARCIS (Netherlands)

    Fourie, Zacharias; Damstra, Janalt; Gerrits, Peter O.; Ren, Yijin

    Objective: The purpose of this study was to determine the accuracy and repeatability of linear anthropometric measurements on the soft tissue surface model generated from cone beam computed tomography scans. Materials and Methods: The study sample consisted of seven cadaver heads. The accuracy and

  4. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    Science.gov (United States)

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco in spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were only lifted within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (grain size classification). This relatively high amount represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface on which the dust devils evolved also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of both sampled dust devils was comparable with respect to their vertical grain size distribution and relative particle load, although the two dust devils differed in their dimensions and intensities. A general trend of decreasing grain sizes with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.

  5. Effects of cognitive training on change in accuracy in inductive reasoning ability.

    Science.gov (United States)

    Boron, Julie Blaskewicz; Turiano, Nicholas A; Willis, Sherry L; Schaie, K Warner

    2007-05-01

    We investigated cognitive training effects on accuracy and number of items attempted in inductive reasoning performance in a sample of 335 older participants (M = 72.78 years) from the Seattle Longitudinal Study. We assessed the impact of individual characteristics, including chronic disease. The reasoning training group showed significantly greater gain in accuracy and number of attempted items than did the comparison group; gain was primarily due to enhanced accuracy. Reasoning training effects involved a complex interaction of gender, prior cognitive status, and chronic disease. Women with prior decline on reasoning but no heart disease showed the greatest accuracy increase. In addition, stable reasoning-trained women with heart disease demonstrated significant accuracy gain. Comorbidity was associated with less change in accuracy. The results support the effectiveness of cognitive training on improving the accuracy of reasoning performance.

  6. Extending the Matrix Element Method beyond the Born approximation: calculating event weights at next-to-leading order accuracy

    International Nuclear Information System (INIS)

    Martini, Till; Uwer, Peter

    2015-01-01

    In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method in NLO. We modify the recombination procedure used in jet algorithms, to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As further application and proof of concept, we apply the Matrix Element Method in NLO accuracy to the mass determination of top quarks produced in e+e− annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used in leading or next-to-leading order.

  7. A technique of evaluating most probable stochastic valuables from a small number of samples and their accuracies and degrees of confidence

    Energy Technology Data Exchange (ETDEWEB)

    Katoh, K. [Ibaraki Pref. Univ. Health Sci. (Japan)]

    1997-12-31

    A problem of estimating the stochastic characteristics of a population from a small number of samples is solved as an inverse problem, from the viewpoint of information theory and with Bayesian statistics. For both Poisson and Bernoulli processes, the most probable values of the characteristics of the mother population and their accuracies and degrees of confidence are successfully obtained. Mathematical expressions are given for the general case, where a limited amount of information and/or knowledge about the stochastic characteristics is available, and for the special case where no a priori information or knowledge is available. Mathematical properties of the obtained solutions and their practical application to radiation measurement are also discussed.
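
    As a minimal illustration in the same spirit (not the paper's derivation), the sketch below gives the Bayesian treatment of a Poisson counting rate from a single short measurement: with a flat prior the posterior is a gamma distribution, from which the most probable value, its accuracy, and a credible interval follow; the counts and counting time are invented.

```python
from scipy import stats

counts = 3           # observed counts in a single short measurement
t = 10.0             # counting time (e.g. minutes)

posterior = stats.gamma(a=counts + 1, scale=1.0 / t)   # posterior for the rate under a flat prior
mode = counts / t                                       # most probable rate
mean, sd = posterior.mean(), posterior.std()
lo, hi = posterior.interval(0.95)                       # 95% credible ("degree of confidence") interval
print(f"most probable rate = {mode:.2f}, posterior mean = {mean:.2f} +/- {sd:.2f}")
print(f"95% credible interval: [{lo:.2f}, {hi:.2f}] counts per unit time")
```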

  8. Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.

    Science.gov (United States)

    Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo

    2012-01-01

    The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
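
    A minimal sketch of the standard nonparametric (Mann-Whitney type) AUC and a partial AUC over a restricted false-positive range under simple random sampling, on simulated data; the TDS estimators proposed in the paper additionally weight by the empirical test-result distribution, which is not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
disease = rng.integers(0, 2, 400)                       # true disease status
marker = rng.normal(loc=disease * 1.2, scale=1.0)       # biomarker, shifted upward if diseased

auc = roc_auc_score(disease, marker)                    # nonparametric (Mann-Whitney) AUC

fpr, tpr, _ = roc_curve(disease, marker)
mask = fpr <= 0.2                                        # pAUC over FPR in [0, 0.2]
pauc = np.trapz(tpr[mask], fpr[mask])
print(f"AUC = {auc:.3f}, partial AUC (FPR <= 0.2) = {pauc:.3f}")
```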

  9. Do Shared Interests Affect the Accuracy of Budgets?

    Directory of Open Access Journals (Sweden)

    Ilse Maria Beuren

    2015-04-01

    The creation of budgetary slack is a phenomenon associated with various behavioral aspects. This study focuses on accuracy in budgeting when the benefit of the slack is shared between the unit manager and his/her assistant. In this study, accuracy is measured by the level of slack in the budget, and the benefit of slack represents a financial consideration for the manager and the assistant. The study aims to test how shared interests in budgetary slack affect the accuracy of budget reports in an organization. To this end, an experimental study was conducted with a sample of 90 employees in management and other leadership positions at a cooperative that has a variable compensation plan based on the achievement of organizational goals. The experiment conducted in this study is based on the study of Church, Hannan and Kuang (2012), which was conducted with a sample of undergraduate students in the United States and used a quantitative approach to analyze the results. In the first part of the experiment, the results show that when budgetary slack is not shared, managers tend to create greater slack when the assistant is not aware of the creation of slack; these managers thus generate a lower accuracy index than managers whose assistants are aware of the creation of slack. When budgetary slack is shared, there is higher average slack when the assistant is aware of the creation of slack. In the second part of the experiment, the accuracy index is higher for managers who prepare the budget with the knowledge that their assistants prefer larger slack values. However, the accuracy level differs between managers who know that their assistants prefer maximizing slack values and managers who do not know their assistants' preference regarding slack. These results contribute to the literature by presenting evidence of managers' behavior in the creation of budgetary slack in scenarios in which they share the benefits of slack with their assistants.

  10. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.

  11. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently
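
    The sparse-grid construction itself is not reproduced here, but the two-step idea described above—replace the expensive forward model by a cheap surrogate, then sample the parameter space with quasi-Monte Carlo points and accumulate posterior-weighted prediction densities—can be sketched in one dimension. The toy forward model, observation and Sobol' sample size below are illustrative assumptions; an ordinary polynomial fit stands in for the sparse-grid interpolant:

        import numpy as np
        from scipy.stats import norm, qmc

        # Toy "expensive" forward model and a synthetic noisy observation of its output
        forward = lambda k: np.exp(-k) + 0.1 * k**2
        obs, sigma = forward(1.3) + 0.02, 0.05

        # (1) Cheap surrogate of the forward model built from a few model runs
        #     (an ordinary 1-D polynomial fit standing in for the sparse-grid interpolant)
        nodes = np.linspace(0.0, 3.0, 9)
        coeffs = np.polyfit(nodes, forward(nodes), deg=6)
        surrogate = lambda k: np.polyval(coeffs, k)

        # (2) Quasi-Monte Carlo (Sobol') sampling of the parameter space,
        #     evaluated only on the surrogate, with posterior weights attached
        k = qmc.scale(qmc.Sobol(d=1, scramble=True, seed=1).random(2**12), 0.0, 3.0).ravel()
        w = norm.pdf(obs, loc=surrogate(k), scale=sigma)      # unnormalised posterior density

        pred = surrogate(k)                                   # prediction of interest
        pdf, edges = np.histogram(pred, bins=40, weights=w, density=True)
        print("posterior-mean prediction:", np.sum(pred * w) / np.sum(w))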

  12. Unit 06 - Sampling the World

    OpenAIRE

    Unit 06, CC in GIS; Parson, Charles; Nyerges, Timothy

    1990-01-01

    This unit begins the section on data acquisition by looking at how the infinite complexity of the real world can be discretized and sampled. It considers sampling techniques and associated issues of accuracy and standards.

  13. The dose dependency of the over-dispersion of quartz OSL single grain dose distributions

    DEFF Research Database (Denmark)

    Thomsen, Kristina Jørkov; Murray, Andrew S.; Jain, Mayank

    2012-01-01

    The use of single grain quartz OSL dating has become widespread over the past decade, particularly with application to samples likely to have been incompletely bleached before burial. By reducing the aliquot size to a single grain the probability of identifying the grain population most likely...... to have been well-bleached at deposition is maximised and thus the accuracy with which the equivalent dose can be determined is – at least in principle – improved. However, analysis of single grain dose distributions requires knowledge of the dispersion of the well-bleached part of the dose distribution....... This can be estimated by measurement of a suitable analogue, e.g. a well-bleached aeolian sample, but this requires such an analogue to be available, and in addition the assumptions that the sample is in fact a) well-bleached, and b) has a similar dose rate heterogeneity to the fossil deposit. Finally...

  14. Urine sampling techniques in symptomatic primary-care patients

    DEFF Research Database (Denmark)

    Holm, Anne; Aabenhus, Rune

    2016-01-01

    Background: Choice of urine sampling technique in urinary tract infection may impact diagnostic accuracy and thus lead to possible over- or undertreatment. Currently no evidence-based consensus exists regarding correct sampling technique of urine from women with symptoms of urinary tract infection ... a randomized or paired design to compare the result of urine culture obtained with two or more collection techniques in adult, female, non-pregnant patients with symptoms of urinary tract infection. We evaluated quality of the studies and compared accuracy based on dichotomized outcomes. Results: We included ... in infection rate between mid-stream-clean-catch, mid-stream-urine and random samples. Conclusions: At present, no evidence suggests that sampling technique affects the accuracy of the microbiological diagnosis in non-pregnant women with symptoms of urinary tract infection in primary care. However ...

  15. Relative accuracy of three common methods of parentage analysis in natural populations

    KAUST Repository

    Harrison, Hugo B.; Saenz Agudelo, Pablo; Planes, Serge; Jones, Geoffrey P.; Berumen, Michael L.

    2012-01-01

    Parentage studies and family reconstructions have become increasingly popular for investigating a range of evolutionary, ecological and behavioural processes in natural populations. However, a number of different assignment methods have emerged in common use and the accuracy of each may differ in relation to the number of loci examined, allelic diversity, incomplete sampling of all candidate parents and the presence of genotyping errors. Here, we examine how these factors affect the accuracy of three popular parentage inference methods (colony, famoz and an exclusion-Bayes' theorem approach by Christie (Molecular Ecology Resources, 2010a, 10, 115) to resolve true parent-offspring pairs using simulated data. Our findings demonstrate that accuracy increases with the number and diversity of loci. These were clearly the most important factors in obtaining accurate assignments explaining 75-90% of variance in overall accuracy across 60 simulated scenarios. Furthermore, the proportion of candidate parents sampled had a small but significant impact on the susceptibility of each method to either false-positive or false-negative assignments. Within the range of values simulated, colony outperformed FaMoz, which outperformed the exclusion-Bayes' theorem method. However, with 20 or more highly polymorphic loci, all methods could be applied with confidence. Our results show that for parentage inference in natural populations, careful consideration of the number and quality of markers will increase the accuracy of assignments and mitigate the effects of incomplete sampling of parental populations. © 2012 Blackwell Publishing Ltd.

  17. Air sampling with solid phase microextraction

    Science.gov (United States)

    Martos, Perry Anthony

    There is an increasing need for simple yet accurate air sampling methods. The acceptance of new air sampling methods requires compatibility with conventional chromatographic equipment, and the new methods have to be environmentally friendly, simple to use, yet with equal, or better, detection limits, accuracy and precision than standard methods. Solid phase microextraction (SPME) satisfies the conditions for new air sampling methods. Analyte detection limits, accuracy and precision of analysis with SPME are typically better than with any conventional air sampling methods. Yet, air sampling with SPME requires no pumps, solvents, is re-usable, extremely simple to use, is completely compatible with current chromatographic equipment, and requires a small capital investment. The first SPME fiber coating used in this study was poly(dimethylsiloxane) (PDMS), a hydrophobic liquid film, to sample a large range of airborne hydrocarbons such as benzene and octane. Quantification without an external calibration procedure is possible with this coating. Well understood are the physical and chemical properties of this coating, which are quite similar to those of the siloxane stationary phase used in capillary columns. The log of analyte distribution coefficients for PDMS are linearly related to chromatographic retention indices and to the inverse of temperature. Therefore, the actual chromatogram from the analysis of the PDMS air sampler will yield the calibration parameters which are used to quantify unknown airborne analyte concentrations (ppb v to ppm v range). The second fiber coating used in this study was PDMS/divinyl benzene (PDMS/DVB) onto which o-(2,3,4,5,6- pentafluorobenzyl) hydroxylamine (PFBHA) was adsorbed for the on-fiber derivatization of gaseous formaldehyde (ppb v range), with and without external calibration. The oxime formed from the reaction can be detected with conventional gas chromatographic detectors. Typical grab sampling times were as small as 5 seconds
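
    The calibration relation referred to above—the logarithm of the PDMS/air distribution coefficient being linear in the chromatographic retention index and in reciprocal temperature—can be written schematically as (the coefficients a, b and c are fitted constants, not given in this abstract):

        log K = a * RI + b / T + c

    so that the chromatogram of the sampler itself supplies RI and hence K, and the airborne concentration then follows from the usual equilibrium SPME relation C_air = n_extracted / (K * V_fiber), where n_extracted is the amount of analyte extracted onto the fibre and V_fiber is the coating volume.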

  18. [Effect of Characteristic Variable Extraction on Accuracy of Cu in Navel Orange Peel by LIBS].

    Science.gov (United States)

    Li, Wen-bing; Yao, Ming-yin; Huang, Lin; Chen, Tian-bing; Zheng, Jian-hong; Fan, Shi-quan; Liu, Mu-hua; He, Mu-hua; Lin, Jin-long; Ouyang, Jing-yi

    2015-07-01

    Heavy metal pollution in foodstuffs is increasingly serious, and conventional chemical analysis cannot keep pace with modern agricultural needs. Laser-induced breakdown spectroscopy (LIBS) is an emerging technology characterised by rapid and nondestructive detection, but its repeatability, sensitivity and accuracy still have much room for improvement. In this work, the heavy metal Cu in Gannan navel orange, a specialty fruit of Jiangxi, was determined by LIBS. The navel orange samples were first contaminated in the laboratory, and the spectra of the samples were collected by irradiating the peel with optimised LIBS parameters: the laser energy was set to 20 mJ, the spectral acquisition delay time to 1.2 μs, and the integration time to 2 ms. The true Cu concentration in the samples was obtained by atomic absorption spectroscopy (AAS). The characteristic variables Cu I 324.7 and Cu I 327.4 were extracted, and calibration models were constructed between the LIBS spectra and the true Cu concentration. The results show that the relative errors of the concentrations predicted by the three calibration models were 7.01% or less, reaching minima of 0.02%, 0.01% and 0.02%, respectively; the average relative errors were 2.33%, 3.10% and 26.3%. The tests show that different characteristic variables lead to different accuracies, so it is very important to choose a suitable characteristic variable. The work is also helpful for exploring the distribution of heavy metals between pulp and peel.

  19. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    Science.gov (United States)

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

    As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice defines a rank in the hierarchy for each outlier, which relates to sparsity in the distribution. In this study, we define a lower rank (first ranked), a medium rank (second ranked), and the highest rank (third ranked) outliers, respectively. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (highly sparse distribution), whereas those with the third-ranked outliers are nearby the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: Alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD highly accelerated the exploration of conformational space by expanding the edges. In contrast, the third-ranked OFLOOD reproduced local transitions among neighboring metastable states intensively. For quantitatively evaluations of sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.

  20. Accuracy Assessment of Digital Surface Models from Unmanned Aerial Vehicles’ Imagery on Glaciers

    Directory of Open Access Journals (Sweden)

    Saskia Gindraux

    2017-02-01

    The use of Unmanned Aerial Vehicles (UAVs) for photogrammetric surveying has recently gained enormous popularity. Images taken from UAVs are used for generating Digital Surface Models (DSMs) and orthorectified images. In the glaciological context, these can serve for quantifying ice volume change or glacier motion. This study focuses on the accuracy of UAV-derived DSMs. In particular, we analyze the influence of the number and disposition of Ground Control Points (GCPs) needed for georeferencing the derived products. A total of 1321 different DSMs were generated from eight surveys distributed on three glaciers in the Swiss Alps during winter, summer and autumn. The vertical and horizontal accuracy was assessed by cross-validation with thousands of validation points measured with a Global Positioning System. Our results show that the accuracy increases asymptotically with increasing number of GCPs until a certain density of GCPs is reached. We call this the optimal GCP density. The results indicate that DSMs built with this optimal GCP density have a vertical (horizontal) accuracy ranging between 0.10 and 0.25 m (0.03 and 0.09 m) across all datasets. In addition, the impact of the GCP distribution on the DSM accuracy was investigated. The local accuracy of a DSM decreases when increasing the distance to the closest GCP, typically at a rate of 0.09 m per 100-m distance. The impact of the glacier’s surface texture (ice or snow) was also addressed. The results show that besides cases with a surface covered by fresh snow, the surface texture does not significantly influence the DSM accuracy.

  1. Eggshells as an index of aedine mosquito production. 1: Distribution, movement and sampling of Aedes taeniorhynchus eggshells.

    Science.gov (United States)

    Ritchie, S A; Addison, D S; van Essen, F

    1992-03-01

    The distribution of Aedes taeniorhynchus eggshells in Florida mangrove basin forests was determined and used to design a sampling plan. Eggshells were found in 10/11 sites (91%), with a mean +/- SE density of 1.45 +/- 0.75/cc; density did not change significantly year to year. Highest densities were located on the sloping banks of hummocks, ponds and potholes. Eggshells were less clumped in distribution than eggs and larvae and thus required a smaller sample size for a given precision level. While eggshells were flushed from compact soil that was subject to runoff during heavy rain, mangrove peat, the dominant soil of eggshell-bearing sites, was less dense and had little runoff or eggshell flushing. We suggest that eggshell surveys could be used to identify Ae. taeniorhynchus oviposition sites and oviposition patterns.

  2. Distribution network monitoring : Interaction between EU legal conditions and state estimation accuracy

    NARCIS (Netherlands)

    Blaauwbroek, Niels; Kuiken, Dirk; Nguyen, Phuong H.; Vedder, Hans; Roggenkamp, Martha; Slootweg, Han

    2018-01-01

    The expected increase in uncertainty regarding energy consumption and production from intermittent distributed energy resources calls for advanced network control capabilities and (household) customer flexibility in the distribution network. Depending on the control applications deployed, grid

  4. Distribution of Heavy Metal Content Hg and Cr of Environmental Samples at Surabaya Area

    International Nuclear Information System (INIS)

    Agus Taftazani

    2007-01-01

    The Hg and Cr content of Surabaya river and coastal environmental samples has been determined using Instrumental Neutron Activation Analysis (INAA). The environmental samples were water, sediment, Eichhornia crassipes (Mart) Solmms, Rhizophora stylosa, Johnius (Johnieops) borneensis fish and Moolgarda delicate fish from 12 selected locations in the Surabaya area. Dried powders of the sediment and biotic samples and concentrated water samples were irradiated at a neutron flux of 1.05 x 10^11 n cm^-2 s^-1 for 12 hours. The analytical results show that the heavy metal concentrations in river water are below the limits of Perda Surabaya City No. 02/2004 for 4th-level water, namely Hg (0.005 ppm) and Cr (1.000 ppm). At all locations, the coastal water samples have Hg and Cr concentrations above the limits of Kepmen LH No. 51/2004, Hg (0.001 ppm) and Cr (0.005 ppm). The Hg concentration of the fish samples exceeds the threshold of Kep. Dirjen POM No.03725/B/SK/VII/89 on the maximum concentration of metal pollution in food. The concentrations of heavy metals in sediment, Eichhornia crassipes (Mart) Solmms and Rhizophora stylosa are not regulated, so heavy metal pollution in these matrices cannot be assessed against a standard. The Hg and Cr concentrations of the water samples are smaller than those of the biotic and sediment samples, and the distribution factor (F_d) is larger than the bioaccumulation factor (F_b). (author)

  5. Integrative analysis of single nucleotide polymorphisms and gene expression efficiently distinguishes samples from closely related ethnic populations

    Directory of Open Access Journals (Sweden)

    Yang Hsin-Chou

    2012-07-01

    Background: Ancestry informative markers (AIMs) are a type of genetic marker that is informative for tracing the ancestral ethnicity of individuals. Application of AIMs has gained substantial attention in population genetics, forensic sciences, and medical genetics. Single nucleotide polymorphisms (SNPs), the materials of AIMs, are useful for classifying individuals from distinct continental origins but cannot discriminate individuals with subtle genetic differences from closely related ancestral lineages. Proof-of-principle studies have shown that gene expression (GE) also is a heritable human variation that exhibits differential intensity distributions among ethnic groups. GE supplies ethnic information supplemental to SNPs; this motivated us to integrate SNP and GE markers to construct AIM panels with a reduced number of required markers and provide high accuracy in ancestry inference. Few studies in the literature have considered GE in this aspect, and none have integrated SNP and GE markers to aid classification of samples from closely related ethnic populations. Results: We integrated a forward variable selection procedure into flexible discriminant analysis to identify key SNP and/or GE markers with the highest cross-validation prediction accuracy. By analyzing genome-wide SNP and/or GE markers in 210 independent samples from four ethnic groups in the HapMap II Project, we found that average testing accuracies for a majority of classification analyses were quite high, except for SNP-only analyses that were performed to discern study samples containing individuals from two close Asian populations. The average testing accuracies ranged from 0.53 to 0.79 for SNP-only analyses and increased to around 0.90 when GE markers were integrated together with SNP markers for the classification of samples from closely related Asian populations. Compared to GE-only analyses, integrative analyses of SNP and GE markers showed comparable testing

  6. Diagnostic Accuracy of the Posttraumatic Stress Disorder Checklist–Civilian Version in a Representative Military Sample

    DEFF Research Database (Denmark)

    Karstoft, Karen-Inge; Andersen, Søren B.; Bertelsen, Mette

    2014-01-01

    This study aimed to assess the diagnostic accuracy of the Posttraumatic Stress Disorder Checklist-Civilian Version (PCL-C; Weathers, Litz, Herman, Huska, & Keane, 1993) and to establish the most accurate cutoff for prevalence estimation of posttraumatic stress disorder (PTSD) in a representative...

  7. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loids Pollution Based on Kriging Interpolation and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Zhenyi Jia

    2017-12-01

    Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results by the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have a higher accuracy, the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significant skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively. The estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.

  8. Target Price Accuracy

    Directory of Open Access Journals (Sweden)

    Alexander G. Kerl

    2011-04-01

    This study analyzes the accuracy of forecasted target prices within analysts' reports. We compute a measure for target price forecast accuracy that evaluates the ability of analysts to exactly forecast the ex-ante (unknown) 12-month stock price. Furthermore, we determine factors that explain this accuracy. Target price accuracy is negatively related to analyst-specific optimism and stock-specific risk (measured by volatility and price-to-book ratio). However, target price accuracy is positively related to the level of detail of each report, company size and the reputation of the investment bank. The potential conflicts of interest between an analyst and a covered company do not bias forecast accuracy.

  9. Method validation for control determination of mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry.

    Science.gov (United States)

    Torres, Daiane Placido; Martins-Teixeira, Maristela Braga; Cadore, Solange; Queiroz, Helena Müller

    2015-01-01

    A method for the determination of total mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS) has been validated following international foodstuff protocols in order to fulfill the Brazilian National Residue Control Plan. The experimental parameters have been previously studied and optimized according to specific legislation on validation and inorganic contaminants in foodstuff. Linearity, sensitivity, specificity, detection and quantification limits, precision (repeatability and within-laboratory reproducibility), robustness as well as accuracy of the method have been evaluated. Linearity of response was satisfactory for the two range concentrations available on the TDA AAS equipment, between approximately 25.0 and 200.0 μg kg(-1) (square regression) and 250.0 and 2000.0 μg kg(-1) (linear regression) of mercury. The residues for both ranges were homoscedastic and independent, with normal distribution. Correlation coefficients obtained for these ranges were higher than 0.995. Limits of quantification (LOQ) and of detection of the method (LDM), based on signal standard deviation (SD) for a low-in-mercury sample, were 3.0 and 1.0 μg kg(-1), respectively. Repeatability of the method was better than 4%. Within-laboratory reproducibility achieved a relative SD better than 6%. Robustness of the current method was evaluated and pointed sample mass as a significant factor. Accuracy (assessed as the analyte recovery) was calculated on basis of the repeatability, and ranged from 89% to 99%. The obtained results showed the suitability of the present method for direct mercury measurement in fresh fish and shrimp samples and the importance of monitoring the analysis conditions for food control purposes. Additionally, the competence of this method was recognized by accreditation under the standard ISO/IEC 17025.

  10. A generalized polynomial chaos based ensemble Kalman filter with high accuracy

    International Nuclear Information System (INIS)

    Li Jia; Xiu Dongbin

    2009-01-01

    As one of the most adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that affect the accuracy of EnKF in a negative manner. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach involve (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves a high accuracy at reduced computational cost, compared to the classical implementations of EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
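
    As a toy illustration of the two ingredients named above—(1) a gPC surrogate of the stochastic solution and (2) an arbitrarily large, essentially free ensemble drawn from that surrogate for the Kalman update—consider a scalar state written as a function of a standard normal germ. The model, observation and expansion order below are invented for illustration and do not correspond to the paper's examples:

        import numpy as np
        from numpy.polynomial.hermite_e import hermevander

        rng = np.random.default_rng(2)

        # Uncertain scalar state written as u = g(xi) with a standard normal germ xi
        g = lambda xi: np.sin(1.0 + 0.5 * xi) + 0.1 * xi**2

        # (1) Fit a degree-5 gPC (probabilists' Hermite) expansion from a few model runs
        xi_nodes = rng.standard_normal(64)
        coeffs, *_ = np.linalg.lstsq(hermevander(xi_nodes, 5), g(xi_nodes), rcond=None)

        # (2) Draw a huge, essentially free ensemble from the surrogate and update it
        #     with a perturbed-observation Kalman step (scalar state, H = identity)
        xi = rng.standard_normal(100_000)
        u_prior = hermevander(xi, 5) @ coeffs
        y_obs, r = 0.9, 0.05**2                               # observation and its error variance
        gain = np.var(u_prior) / (np.var(u_prior) + r)
        u_post = u_prior + gain * (y_obs + rng.normal(0.0, np.sqrt(r), xi.size) - u_prior)
        print("posterior mean / std:", u_post.mean(), u_post.std())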

  11. Statistics Refresher for Molecular Imaging Technologists, Part 2: Accuracy of Interpretation, Significance, and Variance.

    Science.gov (United States)

    Farrell, Mary Beth

    2018-06-01

    This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic is, the higher is the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability ( P ) value. Calculation of P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around a mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being
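
    A brief numerical companion to two of the ideas above—chance-corrected agreement via the Cohen κ-statistic and the 68-95-99.7 rule for normally distributed data—might look as follows (the reader scores are made up; scikit-learn's cohen_kappa_score is used for the κ computation):

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        # Interreader agreement: raw percent agreement vs. chance-corrected Cohen kappa
        reader1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
        reader2 = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])
        print("percent agreement:", np.mean(reader1 == reader2))
        print("Cohen kappa:      ", cohen_kappa_score(reader1, reader2))

        # The 68-95-99.7 rule checked empirically on a simulated normal distribution
        x = np.random.default_rng(0).normal(loc=100.0, scale=15.0, size=1_000_000)
        for k in (1, 2, 3):
            print(f"fraction within {k} SD:", np.mean(np.abs(x - 100.0) <= k * 15.0))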

  12. Extending the Matrix Element Method beyond the Born approximation: calculating event weights at next-to-leading order accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Martini, Till; Uwer, Peter [Humboldt-Universität zu Berlin, Institut für Physik,Newtonstraße 15, 12489 Berlin (Germany)

    2015-09-14

    In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method in NLO. We modify the recombination procedure used in jet algorithms, to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As further application and proof of concept, we apply the Matrix Element Method in NLO accuracy to the mass determination of top quarks produced in e⁺e⁻ annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used in leading or next-to-leading order.

  13. Measurement of 3D refractive index distribution by optical diffraction tomography

    Science.gov (United States)

    Chi, Weining; Wang, Dayong; Wang, Yunxin; Zhao, Jie; Rong, Lu; Yuan, Yuanyuan

    2018-01-01

    Optical Diffraction Tomography (ODT), as a novel 3D imaging technique, can obtain a 3D refractive index (RI) distribution to reveal the important optical properties of transparent samples. According to the theory of ODT, an optical diffraction tomography setup is built based on the Mach-Zehnder interferometer. The propagation direction of object beam is controlled by a 2D translation stage, and 121 holograms based on different illumination angles are recorded by a Charge-coupled Device (CCD). In order to prove the validity and accuracy of the ODT, the 3D RI profile of microsphere with a known RI is firstly measured. An iterative constraint algorithm is employed to improve the imaging accuracy effectively. The 3D morphology and average RI of the microsphere are consistent with that of the actual situation, and the RI error is less than 0.0033. Then, an optical element fabricated by laser with a non-uniform RI is taken as the sample. Its 3D RI profile is obtained by the optical diffraction tomography system.

  14. Modelling population distribution using remote sensing imagery and location-based data

    Science.gov (United States)

    Song, J.; Prishchepov, A. V.

    2017-12-01

    A detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution and city emergency management, and even for estimating pressure on the environment and human exposure and health risks. However, most studies use census data, as detailed dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional zone level. Firstly, urban functional zones within a city were mapped from high-resolution remote sensing images and POIs. The functional zone extraction workflow includes five parts: (1) urban land use classification; (2) segmenting images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) accuracy assessment with validation points (Fig. 1). Secondly, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between light digital number (DN) and population density at sampling points, and the two methods were used to predict the population distribution over the research area. The R² of the GWR models was on the order of 0.7, and the models typically showed more significant variation over the region than the traditional OLS model (Fig. 2). Validation with sampling points of population density demonstrated that the GWR predictions correlated well with the light values (Fig. 3). The results show that: (1) population density is not linearly correlated with light brightness in a global model; (2) VIIRS night-time light data can estimate population density when integrated with functional zones at the city level; (3) GWR is a robust model for mapping population distribution, and the adjusted R² of the GWR models was higher than that of the optimal OLS models.
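
    The essential difference between the global OLS fit and GWR is that GWR refits the light-population relationship at every location using distance-decaying weights. A minimal, self-contained sketch of that idea (the coordinates, light values and Gaussian bandwidth below are synthetic stand-ins, not the study's data):

        import numpy as np

        def gwr_predict(coords, X, y, coords_new, X_new, bandwidth):
            # At every prediction point, fit a weighted least-squares regression
            # with Gaussian distance weights (a minimal GWR).
            X = np.column_stack([np.ones(len(X)), X])          # add intercept
            Xn = np.column_stack([np.ones(len(X_new)), X_new])
            preds = []
            for c, xrow in zip(coords_new, Xn):
                d2 = np.sum((coords - c) ** 2, axis=1)
                w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # Gaussian kernel weights
                W = np.diag(w)
                beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # local coefficients
                preds.append(xrow @ beta)
            return np.array(preds)

        # Synthetic example: night-light DN predicting population density, with a
        # regression slope that drifts across the study area (spatial nonstationarity)
        rng = np.random.default_rng(5)
        coords = rng.uniform(0, 10, (200, 2))
        light = rng.uniform(0, 60, 200)
        pop = (1.0 + 0.3 * coords[:, 0]) * light + rng.normal(0, 5, 200)
        print(gwr_predict(coords, light[:, None], pop, coords[:5], light[:5, None], bandwidth=2.0))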

  15. Efficient sampling to determine distribution of fruit quality and yield in a commercial apple orchard

    DEFF Research Database (Denmark)

    Martinez Vega, Mabel Virginia; Wulfsohn, D.; Zamora, I.

    2012-01-01

    In situ assessment of fruit quality and yield can provide critical data for marketing and for logistical planning of the harvest, as well as for site-specific management. Our objective was to develop and validate efficient field sampling procedures for this purpose. We used the previously reported...... ‘fractionator’ tree sampling procedure and supporting handheld software (Gardi et al., 2007; Wulfsohn et al., 2012) to obtain representative samples of fruit from a 7.6-ha apple orchard (Malus ×domestica ‘Fuji Raku Raku’) in central Chile. The resulting sample consisted of 70 fruit on 56 branch segments...... of yield. Estimated marketable yield was 295.8±50.2 t. Field and packinghouse records indicated that of 348.2 t sent to packing (52.4 t or 15% higher than our estimate), 263.0 t was packed for export (32.8 t less or -12% error compared to our estimate). The estimated distribution of caliber compared very...

  16. TXRF 'measurements' of concentration distribution below the detection limit

    International Nuclear Information System (INIS)

    Kubala-Kukus, A.; Banas, D.; Braziewicz, J.; Majewska, U.; Mrowczynski, S.; Pajek, M.

    2000-01-01

    We demonstrate that the shape of the concentration distribution of an element in a set of samples, as measured by the TXRF method, can be determined even for concentrations below the detection limit (DL). This can be done when the measurements reporting concentrations below the DL are included properly in the analysis of the results. The method developed for such a correction is presented and discussed. It is demonstrated that this correction is particularly important when the studied concentrations are close to the DL of the method, which is a common case for TXRF. The precision of the developed correction is discussed in detail, using the results of numerical simulations of experiments for different concentration distributions and numbers of performed measurements. It is demonstrated that the factor which limits the accuracy of the correction is the number of measurements, not the correction procedure itself. The applicability and importance of the developed correction is demonstrated for routine TXRF analysis of different types of samples of bio-medical interest. (author)
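
    The paper's specific correction procedure is not given in this record; one common way to include "<DL" results when reconstructing a concentration distribution is a censored-likelihood fit, for example under a lognormal assumption. A hedged sketch of that generic approach only (the detected values, number of censored results and DL below are invented):

        import numpy as np
        from scipy import stats, optimize

        def fit_lognormal_with_censoring(detected, n_below_dl, dl):
            # Maximum-likelihood fit of a lognormal concentration distribution when
            # n_below_dl measurements only reported "< dl" (left-censored data).
            logx = np.log(detected)

            def neg_loglik(p):
                mu, sigma = p[0], abs(p[1]) + 1e-12
                ll = np.sum(stats.norm.logpdf(logx, mu, sigma))              # detected values
                ll += n_below_dl * stats.norm.logcdf(np.log(dl), mu, sigma)  # censored values
                return -ll

            res = optimize.minimize(neg_loglik, x0=[np.mean(logx), np.std(logx) + 0.1])
            mu, sigma = res.x[0], abs(res.x[1])
            return np.exp(mu + 0.5 * sigma**2), mu, sigma   # mean concentration, log-parameters

        detected = [1.4, 2.2, 0.9, 3.1, 1.1, 0.8]   # hypothetical results above the DL
        print(fit_lognormal_with_censoring(detected, n_below_dl=5, dl=0.7))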

  17. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loid)s Pollution Based on Kriging Interpolation and BP Neural Network.

    Science.gov (United States)

    Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao

    2017-12-26

    Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results by the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have a higher accuracy, the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significant skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively. The estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
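
    As a rough, non-authoritative analogue of the comparison described above, scikit-learn's GaussianProcessRegressor can stand in for ordinary Kriging and MLPRegressor for a BP neural network, with cross-validated MSE as the error measure; the 126 sample coordinates and geo-accumulation indices below are synthetic, not the Kunshan data:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        xy = rng.uniform(0, 10, size=(126, 2))                               # sample coordinates
        igeo = np.sin(xy[:, 0]) + 0.3 * xy[:, 1] + rng.normal(0, 0.2, 126)   # synthetic pollution index

        kriging = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(0.1),
                                           normalize_y=True)
        bp_net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)

        for name, model in [("kriging (GP)", kriging), ("BP network", bp_net)]:
            mse = -cross_val_score(model, xy, igeo, cv=5, scoring="neg_mean_squared_error").mean()
            print(f"{name:12s} cross-validated MSE: {mse:.4f}")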

  18. Effect of current distribution on the voltage-temperature characteristics: study of the NbTi PF-FSJS sample for ITER

    International Nuclear Information System (INIS)

    Zani, L.; Ciazynski, D.; Gislon, P.; Stepanov, B.; Huber, S.

    2004-01-01

    Various tests, either on full-size joint samples or on model coils, have confirmed that current distribution may play a crucial role in the electrical behaviour of cable-in-conduit conductors (CICC) under operating conditions. In order to evaluate its influence, CEA developed a code (ENSIC) whose main feature is a CICC electrical model comprising a discrete resistive network associated with superconducting lengths; longitudinal and transverse resistances are also modelled, representing either the joint or the conductor. In this paper we present the comparison of experimental results with ENSIC calculations for one International Thermonuclear Experimental Reactor (ITER) prototype sample relevant to the poloidal field (PF) coils: the PF full-size joint sample (PF-FSJS). For this purpose, the current distribution has been measured with a segmented Rogowski coil system. The effects of current distribution on the basic characteristics (T_CS, n-value, etc.) of the cable, compared to a single strand, are discussed. This study aims to shed light on the global strand state in a conductor and is also useful for evaluating intrinsic parameters that are hardly measurable (the effective inter-petal transverse contact resistance, for example), allowing further application to coils

  19. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model

    Science.gov (United States)

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M.; Phifer, Jeremy R.; Paluch, Andrew S.

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient ( R) was 0.6± 0.1 (ranking 35), and 72± 6 % of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
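
    The approximation described above—taking the distribution coefficient as the partition coefficient of the neutral species, obtained from its two solvation free energies—amounts to the following relation; the numerical inputs in the example are invented, not SAMPL5 values:

        import numpy as np

        R, T = 8.314462618e-3, 298.15          # gas constant in kJ/(mol K), temperature in K

        def log10_partition(dG_water, dG_cyclohexane):
            # log10 P(cyclohexane/water) of the neutral species from its solvation
            # free energies in the two solvents (kJ/mol; more negative = better solvated)
            return (dG_water - dG_cyclohexane) / (R * T * np.log(10.0))

        # Hypothetical solute that is better solvated in water than in cyclohexane
        print(log10_partition(dG_water=-25.0, dG_cyclohexane=-15.0))   # negative log P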

  20. Increased-accuracy numerical modeling of electron-optical systems with space-charge

    International Nuclear Information System (INIS)

    Sveshnikov, V.

    2011-01-01

    This paper presents a method for improving the accuracy of space-charge computation for electron-optical systems. The method proposes to divide the computational region into two parts: a near-cathode region in which analytical solutions are used and a basic one in which numerical methods compute the field distribution and trace electron ray paths. A numerical method is used for calculating the potential along the interface, which involves solving a non-linear equation. Preliminary results illustrating the improvement of accuracy and the convergence of the method for a simple test example are presented.

  1. Basic distribution free identification tests for small size samples of environmental data

    Energy Technology Data Exchange (ETDEWEB)

    Federico, A.G.; Musmeci, F. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Ambiente

    1998-01-01

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain few data points, and the assumption of normal distributions is often not realistic. On the other hand, the spread of today's powerful personal computers opens new opportunities based on massive use of CPU resources. The paper reviews the problem and introduces the feasibility of two non-parametric approaches based on the intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented, together with a case study based on the Chernobyl children contamination data.
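
    In the same spirit as the exhaustive-resampling and bootstrap tests described above, a distribution-free comparison of two small samples can be sketched as a permutation test on the difference in means (the data values below are invented):

        import numpy as np

        def permutation_test(a, b, n_perm=20_000, seed=0):
            # Two-sided permutation test of the difference in means between two
            # small samples, using no distributional assumption.
            rng = np.random.default_rng(seed)
            a, b = np.asarray(a, float), np.asarray(b, float)
            observed = abs(a.mean() - b.mean())
            pooled = np.concatenate([a, b])
            count = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)
                if abs(pooled[:a.size].mean() - pooled[a.size:].mean()) >= observed:
                    count += 1
            return (count + 1) / (n_perm + 1)   # permutation p-value

        print(permutation_test([5.1, 4.9, 5.6, 5.0], [6.2, 5.8, 6.0, 6.4, 5.9]))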

  2. BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal

    International Nuclear Information System (INIS)

    Sagar, B.

    1989-01-01

    1 - Description of program or function: Random samples for different distribution types are generated. The distribution types required for performance assessment modeling of geologic nuclear waste disposal are provided: uniform, log-uniform (base 10 or natural), normal, lognormal (base 10 or natural), exponential, Bernoulli, and user-defined continuous distributions. 2 - Method of solution: A linear congruential generator is used for uniform random numbers. A set of functions is used to transform the uniform distribution to the other distributions. Stratified, rather than random, sampling can be chosen. Truncated limits can be specified on many distributions, whose usual definition has an infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included
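
    The method of solution described above—uniform variates from a linear congruential generator, transformed to the requested distribution—can be sketched as follows; the LCG constants and the subset of distributions shown are illustrative choices, not necessarily those of BWIP-RANDOM-SAMPLING:

        import math

        def lcg(seed, a=1664525, c=1013904223, m=2**32):
            # Minimal linear congruential generator yielding uniforms on (0, 1)
            x = seed
            while True:
                x = (a * x + c) % m
                yield (x + 0.5) / m

        def sample(dist, n, seed=1, **p):
            # Transform LCG uniforms into a few of the distributions listed above
            u = lcg(seed)
            out = []
            for _ in range(n):
                v = next(u)
                if dist == "uniform":
                    out.append(p["lo"] + v * (p["hi"] - p["lo"]))
                elif dist == "loguniform":
                    out.append(10 ** (math.log10(p["lo"]) + v * (math.log10(p["hi"]) - math.log10(p["lo"]))))
                elif dist == "exponential":
                    out.append(-p["mean"] * math.log(1.0 - v))   # inverse-CDF transform
                elif dist == "bernoulli":
                    out.append(1 if v < p["prob"] else 0)
            return out

        print(sample("exponential", 5, mean=2.0))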

  3. Testing the accuracy of remote sensing land use maps

    Science.gov (United States)

    Vangenderen, J. L.; Lock, B. F.; Vass, P. A.

    1977-01-01

    Some of the main aspects that need to be considered in a remote sensing sampling design are: (1) the frequency that any one land use type (on the ground) is erroneously attributed to another class by the interpreter; (2) the frequency that the wrong land use (as observed on the ground) is erroneously included in any one class by the remote sensing interpreter; (3) the proportion of all land (as determined in the field) that is mistakenly attributed by the interpreter; and (4) the determination of whether the mistakes are random (so that the overall proportions are approximately correct) or subject to a persistent bias. A sampling and statistical testing procedure is presented which allows an approximate answer to each of these aspects. The concept developed and described incorporates the probability of making incorrect interpretations at particular prescribed accuracy levels, for a certain number of errors, for a particular sample size. It is considered that this approach offers a meaningful explanation of the interpretation accuracy level of an entire remote sensing land use survey.
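
    The statistical testing idea sketched above—judging whether the number of misinterpreted checkpoints is compatible with a prescribed accuracy level for a given sample size—is essentially a one-sided binomial test. A hedged sketch (the sample size, error count and claimed accuracy are invented):

        from scipy.stats import binom

        def map_accuracy_pvalue(n_checked, n_errors, claimed_accuracy):
            # Probability of seeing at least n_errors misinterpretations among
            # n_checked ground-checked points if the map really met the claimed
            # per-point accuracy.
            p_err = 1.0 - claimed_accuracy
            return binom.sf(n_errors - 1, n_checked, p_err)   # P(X >= n_errors)

        # 12 misinterpreted points out of 100 checked, against a claimed 95% accuracy:
        # a p-value below, say, 0.05 would lead to rejecting the 95% accuracy claim.
        print(map_accuracy_pvalue(100, 12, 0.95))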

  4. Systematic underestimation of the age of samples with saturating exponential behaviour and inhomogeneous dose distribution

    International Nuclear Information System (INIS)

    Brennan, B.J.

    2000-01-01

    In luminescence and ESR studies, a systematic underestimate of the (average) equivalent dose, and thus also the age, of a sample can occur when there is significant variation of the natural dose within the sample and some regions approach saturation. This is demonstrated explicitly for a material that exhibits a single-saturating-exponential growth of signal with dose. The result is valid for any geometry (e.g. a plain layer, spherical grain, etc.) and some illustrative cases are modelled, with the age bias exceeding 10% in extreme cases. If the dose distribution within the sample can be modelled accurately, it is possible to correct for the bias in the estimates of equivalent dose estimate and age. While quantifying the effect would be more difficult, similar systematic biases in dose and age estimates are likely in other situations more complex than the one modelled
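
    The bias follows from the concavity of the growth curve: averaging the saturating signal over an inhomogeneous dose field and then inverting gives less than the true mean dose (Jensen's inequality). A toy demonstration with a single-saturating-exponential response and an assumed linear dose gradient across the sample (the characteristic dose and dose range are invented):

        import numpy as np

        D0 = 100.0                                   # characteristic (saturation) dose, Gy
        signal = lambda D: 1.0 - np.exp(-D / D0)     # normalised single-exponential growth
        inverse = lambda I: -D0 * np.log(1.0 - I)    # dose read back off the growth curve

        # Assumed linear dose gradient across the sample; parts approach saturation
        depth_doses = np.linspace(50.0, 250.0, 1001)

        true_mean_dose = depth_doses.mean()
        apparent_dose = inverse(signal(depth_doses).mean())   # what a bulk measurement yields
        print(true_mean_dose, apparent_dose)                  # apparent < true -> age underestimate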

  5. Prediction uncertainty assessment of a systems biology model requires a sample of the full probability distribution of its parameters

    Directory of Open Access Journals (Sweden)

    Simon van Mourik

    2014-06-01

    Multi-parameter models in systems biology are typically ‘sloppy’: some parameters or combinations of parameters may be hard to estimate from data, whereas others are not. One might expect that parameter uncertainty automatically leads to uncertain predictions, but this is not the case. We illustrate this by showing that the prediction uncertainty of each of six sloppy models varies enormously among different predictions. Statistical approximations of parameter uncertainty may lead to dramatic errors in prediction uncertainty estimation. We argue that prediction uncertainty assessment must therefore be performed on a per-prediction basis using a full computational uncertainty analysis. In practice this is feasible by providing a model with a sample or ensemble representing the distribution of its parameters. Within a Bayesian framework, such a sample may be generated by a Markov Chain Monte Carlo (MCMC) algorithm that infers the parameter distribution based on experimental data. Matlab code for generating the sample (with the Differential Evolution Markov Chain sampler) and the subsequent uncertainty analysis using such a sample, is supplied as Supplemental Information.
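
    The per-prediction workflow described above—generate a parameter sample with MCMC, then push the whole sample through the model for each prediction of interest—can be sketched with a plain Metropolis sampler standing in for the Differential Evolution Markov Chain sampler mentioned in the abstract (the toy model and data are invented):

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy model y(t) = a * exp(-b t), noisy synthetic data, flat prior on (a, b)
        t_data = np.linspace(0.0, 2.0, 10)
        y_data = 2.0 * np.exp(-1.0 * t_data) + rng.normal(0.0, 0.05, t_data.size)
        log_post = lambda a, b: -0.5 * np.sum((y_data - a * np.exp(-b * t_data))**2) / 0.05**2

        # Plain Metropolis sampler producing an ensemble of parameter sets
        theta, chain = np.array([1.0, 0.5]), []
        lp = log_post(*theta)
        for _ in range(20_000):
            prop = theta + rng.normal(0.0, 0.05, 2)
            lp_prop = log_post(*prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        chain = np.array(chain[5_000:])                          # discard burn-in

        # Per-prediction uncertainty: push the whole parameter sample through the model
        prediction = chain[:, 0] * np.exp(-chain[:, 1] * 3.0)    # model output at t = 3
        print("prediction mean / sd:", prediction.mean(), prediction.std())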

  6. UV TO FAR-IR CATALOG OF A GALAXY SAMPLE IN NEARBY CLUSTERS: SPECTRAL ENERGY DISTRIBUTIONS AND ENVIRONMENTAL TRENDS

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Fernandez, Jonathan D.; Iglesias-Paramo, J.; Vilchez, J. M., E-mail: jonatan@iaa.es [Instituto de Astrofisica de Andalucia, Glorieta de la Astronomia s/n, 18008 Granada (Spain)

    2012-03-01

    In this paper, we present a sample of cluster galaxies devoted to study the environmental influence on the star formation activity. This sample of galaxies inhabits in clusters showing a rich variety in their characteristics and have been observed by the SDSS-DR6 down to M_B ≈ -18, and by the Galaxy Evolution Explorer AIS throughout sky regions corresponding to several megaparsecs. We assign the broadband and emission-line fluxes from ultraviolet to far-infrared to each galaxy performing an accurate spectral energy distribution for spectral fitting analysis. The clusters follow the general X-ray luminosity versus velocity dispersion trend of L_X ∝ σ_c^4.4. The analysis of the distributions of galaxy density counting up to the 5th nearest neighbor Σ_5 shows: (1) the virial regions and the cluster outskirts share a common range in the high density part of the distribution. This can be attributed to the presence of massive galaxy structures in the surroundings of virial regions. (2) The virial regions of massive clusters (σ_c > 550 km s⁻¹) present a Σ_5 distribution statistically distinguishable (≈96%) from the corresponding distribution of low-mass clusters (σ_c < 550 km s⁻¹). Both massive and low-mass clusters follow a similar density-radius trend, but the low-mass clusters avoid the high density extreme. We illustrate, with ABELL 1185, the environmental trends of galaxy populations. Maps of sky projected galaxy density show how low-luminosity star-forming galaxies appear distributed along more spread structures than their giant counterparts, whereas low-luminosity passive galaxies avoid the low-density environment. Giant passive and star-forming galaxies share rather similar sky regions with passive galaxies exhibiting more concentrated distributions.

  7. Distribution of blood types in a sample of 245 New Zealand non-purebred cats.

    Science.gov (United States)

    Cattin, R P

    2016-05-01

    To determine the distribution of feline blood types in a sample of non-pedigree, domestic cats in New Zealand, whether a difference exists in this distribution between domestic short haired and domestic long haired cats, and between the North and South Islands of New Zealand; and to calculate the risk of a random blood transfusion causing a severe transfusion reaction, and the risk of a random mating producing kittens susceptible to neonatal isoerythrolysis. The results of 245 blood typing tests in non-pedigree cats performed at the New Zealand Veterinary Pathology (NZVP) and Gribbles Veterinary Pathology laboratories between the beginning of 2009 and the end of 2014 were retrospectively collated and analysed. Cats that were identified as domestic short or long haired were included. For the cats tested at Gribbles Veterinary Pathology 62 were from the North Island, and 27 from the South Island. The blood type distribution differed between samples from the two laboratories (p=0.029), but not between domestic short and long haired cats (p=0.50), or between the North and South Islands (p=0.76). Of the 89 cats tested at Gribbles Veterinary Pathology, 70 (79%) were type A, 18 (20%) type B, and 1 (1%) type AB; for NZVP 139/156 (89.1%) cats were type A, 16 (10.3%) type B, and 1 (0.6%) type AB. It was estimated that 18.3-31.9% of random blood transfusions would be at risk of a transfusion reaction, and neonatal isoerythrolysis would be a risk in 9.2-16.1% of random matings between non-pedigree cats. The results from this study suggest that there is a high risk of complications for a random blood transfusion between non-purebred cats in New Zealand. Neonatal isoerythrolysis should be considered an important differential diagnosis in illness or mortality in kittens during the first days of life.
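
    The risk figures quoted above are broadly consistent with simple products of the reported type frequencies (a random A/B-mismatched transfusion, and a type B queen mated to a type A tom for neonatal isoerythrolysis). The sketch below reproduces that arithmetic; treating it as the authors' exact calculation is an assumption.

```python
# Type frequencies from the two laboratories (type AB ignored as negligible).
labs = {
    "NZVP":     {"A": 139 / 156, "B": 16 / 156},
    "Gribbles": {"A": 70 / 89,   "B": 18 / 89},
}

for name, p in labs.items():
    # Random donor/recipient pair mismatched between types A and B.
    transfusion_risk = p["A"] * p["B"] + p["B"] * p["A"]
    # Random mating: type B queen x type A tom puts the kittens at risk.
    isoerythrolysis_risk = p["B"] * p["A"]
    print(f"{name}: transfusion risk {transfusion_risk:.1%}, "
          f"neonatal isoerythrolysis risk {isoerythrolysis_risk:.1%}")
```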

  8. A weighted sampling algorithm for the design of RNA sequences with targeted secondary structure and nucleotide distribution.

    Science.gov (United States)

    Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme

    2013-07-01

    The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most of the current software uses similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to explicitly control the nucleotide distribution, such as the GC-content, in their sequences. However, the latter is an important criterion for large-scale applications as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. running time comparable or better than local search methods), seedless (we remove the bias of the seed in local search heuristics) and successfully generates high-quality sequences (i.e. thermodynamically stable) for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our glocal methodology overcomes both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
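
    As a much-simplified illustration of weighted sampling toward a target nucleotide composition (this is not the IncaRNAtion algorithm, which samples under structural constraints), the sketch below puts a tunable weight on G and C and adjusts it until sampled sequences reach a requested GC-content on average.

```python
import numpy as np

def sample_sequences(length, gc_weight, n_seq, rng):
    """Sample sequences where G and C carry weight `gc_weight` and A, U weight 1."""
    w = np.array([1.0, 1.0, gc_weight, gc_weight])   # A, U, G, C
    alphabet = np.array(list("AUGC"))
    idx = rng.choice(4, size=(n_seq, length), p=w / w.sum())
    return alphabet[idx]

def tune_gc_weight(length, target_gc, rng, lo=0.01, hi=100.0, iters=40):
    """Bisect (in log space) the G/C weight until sampled GC-content hits the target."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        gc = np.isin(sample_sequences(length, mid, 200, rng), list("GC")).mean()
        lo, hi = (mid, hi) if gc < target_gc else (lo, mid)
    return np.sqrt(lo * hi)

rng = np.random.default_rng(7)
w = tune_gc_weight(length=80, target_gc=0.65, rng=rng)
check = np.isin(sample_sequences(80, w, 1000, rng), list("GC")).mean()
print(f"tuned weight {w:.2f}, achieved GC-content {check:.2%}")
```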

  9. Effect of Repeated Microwave Disinfection on Surface Hardness and Dimensional Accuracy of Two Dental Stone Materials

    Directory of Open Access Journals (Sweden)

    Mahmood Robati Anaraki

    2015-01-01

    Full Text Available The evidence on the effect of microwave irradiation on the mechanical properties of stone casts is controversial. The present study was designed to evaluate the effect of repeated microwave disinfection on the surface hardness and dimensional accuracy of dental stone. In this in vitro study, 48 cylindrical stone samples were prepared using two type IV stone products to assess surface hardness, and 48 impressions were taken from a model and poured with these stones to assess dimensional accuracy. The samples were subsequently evaluated with a micro-hardness tester and a digital caliper after the stone samples had been exposed to 7 consecutive rounds of 900 watt (W) microwave irradiation for five minutes each time, after cooling. Data were analyzed by t-test and ANOVA. According to the results obtained, multiple microwave disinfections of the stone casts do not negatively affect their surface hardness and dimensional accuracy.   Key words: Dental stone; Dimensional accuracy; Hardness; Microwave

  10. Evaluation of NAA laboratory results in inter-comparison on determination of trace elements in food and environmental samples

    International Nuclear Information System (INIS)

    Diah Dwiana Lestiani; Syukria Kurniawati; Natalia Adventini

    2012-01-01

    An inter-comparison program is a good tool for improving quality and enhancing the accuracy and precision of analytical techniques. By participating in such a program, laboratories can demonstrate their capability and ensure the quality of the analysis results they generate. The Neutron Activation Analysis (NAA) laboratory at the National Nuclear Energy Agency of Indonesia (BATAN), Nuclear Technology Center for Materials and Radiometry (PTNBR), participated in inter-comparison tests organized by the NAA working group. Inter-comparison BATAN 2009 was the third inter-laboratory analysis test within that project. The participating laboratories were asked to analyze trace elements using neutron activation analysis as the primary technique. Three materials, representing foodstuff and environmental material samples, were distributed to the participants. Samples were irradiated in the rabbit facility of the G.A. Siwabessy reactor at a neutron flux of ~10^13 n·cm^-2·s^-1, and counted with an HPGe detector by gamma spectrometry. Several trace elements were detected in these samples. An accuracy and precision evaluation based on International Atomic Energy Agency (IAEA) criteria was applied. In this paper the PTNBR NAA laboratory results are evaluated. (author)
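
    The abstract does not spell out the IAEA criteria that were applied; the sketch below shows one commonly used form of IAEA proficiency-test checks (a trueness test with a 2.58 expanded-uncertainty factor and a relative-precision test). The 15% precision limit, the status labels and the example values are assumptions for illustration.

```python
import math

def iaea_check(x_lab, u_lab, x_ref, u_ref, precision_limit_pct=15.0):
    """One commonly used form of the IAEA proficiency-test criteria:
    trueness:  |x_lab - x_ref| <= 2.58 * sqrt(u_lab**2 + u_ref**2)
    precision: 100 * sqrt((u_ref/x_ref)**2 + (u_lab/x_lab)**2) <= limit
    The 15% limit and the status labels are illustrative assumptions."""
    trueness_ok = abs(x_lab - x_ref) <= 2.58 * math.hypot(u_lab, u_ref)
    precision_pct = 100.0 * math.hypot(u_ref / x_ref, u_lab / x_lab)
    precision_ok = precision_pct <= precision_limit_pct
    # Simplified status assignment for illustration only.
    if trueness_ok and precision_ok:
        status = "accepted"
    elif trueness_ok or precision_ok:
        status = "warning"
    else:
        status = "rejected"
    return trueness_ok, round(precision_pct, 1), status

# Hypothetical result: Zn in a foodstuff sample (mg/kg), lab vs reference value.
print(iaea_check(x_lab=28.4, u_lab=1.9, x_ref=30.0, u_ref=1.2))
```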

  11. Why choose Random Forest to predict rare species distribution with few samples in large undersampled areas? Three Asian crane species models provide supporting evidence

    Directory of Open Access Journals (Sweden)

    Chunrong Mi

    2017-01-01

    Full Text Available Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the above four models. Commonly used model performance metrics (Area under ROC (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation and (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid

  12. Why choose Random Forest to predict rare species distribution with few samples in large undersampled areas? Three Asian crane species models provide supporting evidence.

    Science.gov (United States)

    Mi, Chunrong; Huettmann, Falk; Guo, Yumin; Han, Xuesong; Wen, Lijia

    2017-01-01

    Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the above four models. Commonly used model performance metrics (Area under ROC (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation and (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
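
    Since the two records above describe the same workflow, a single minimal sketch is given here: presence/background classification with several learners, an ensemble forecast obtained by averaging predicted probabilities, and AUC as the performance metric. scikit-learn models stand in for the commercial TreeNet/CART implementations, and the synthetic data and settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical data: few presences (1) against many background points (0),
# with a handful of environmental covariates per location.
rng = np.random.default_rng(42)
X_bg = rng.normal(size=(2000, 6))
X_pr = rng.normal(loc=0.8, size=(60, 6))          # "few samples" of presence
X = np.vstack([X_bg, X_pr])
y = np.r_[np.zeros(len(X_bg)), np.ones(len(X_pr))]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=500, random_state=0),
    "BoostedTrees": GradientBoostingClassifier(random_state=0),
    "GLM":          LogisticRegression(max_iter=1000),
}
probs = {}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    probs[name] = m.predict_proba(X_te)[:, 1]
    print(f"{name:12s} AUC = {roc_auc_score(y_te, probs[name]):.3f}")

# Ensemble forecast: average the predicted probabilities of all models.
ensemble = np.mean(list(probs.values()), axis=0)
print(f"{'Ensemble':12s} AUC = {roc_auc_score(y_te, ensemble):.3f}")
```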

  13. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

    Full Text Available In order to improve the manufacturing precision of the shoe last machine, a new error-computing model is put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built. Then, the distribution of error over the whole workspace was analyzed and the position of maximum error was found. Finally, the sensitivities of the error parameters were analyzed at that maximum-error position and accuracy synthesis was conducted using the Monte Carlo method; taking the error sensitivity analysis into account, the accuracy budget was distributed among the main parts. Results show that the probability of the maximal volume error being less than 0.05 mm improved from 0.6592 for the old scheme to 0.7021 for the new scheme, so the precision of the system was clearly improved. The model can be used for the error analysis and accuracy synthesis of complex multi-branch motion-chain systems and to improve their manufacturing precision.

  14. Constructing a Watts-Strogatz network from a small-world network with symmetric degree distribution.

    Directory of Open Access Journals (Sweden)

    Mozart B C Menezes

    Full Text Available Though the small-world phenomenon is widespread in many real networks, it is still challenging to replicate a large network at the full scale for further study on its structure and dynamics when sufficient data are not readily available. We propose a method to construct a Watts-Strogatz network using a sample from a small-world network with symmetric degree distribution. Our method yields an estimated degree distribution which fits closely with that of a Watts-Strogatz network and leads into accurate estimates of network metrics such as clustering coefficient and degree of separation. We observe that the accuracy of our method increases as network size increases.
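
    One way to picture the construction (a sketch, not the authors' estimator): estimate the mean degree and rewiring probability from the available sampled network, then instantiate a full-scale Watts-Strogatz replica with networkx. The clustering-based estimate of p below, C(p) ≈ C(0)(1 − p)³, is an illustrative choice.

```python
import networkx as nx
import numpy as np

def fit_ws_parameters(observed, n_target):
    """Estimate (k, p) for a Watts-Strogatz model from an observed small-world
    network and build a full-scale replica.  The clustering-based estimate of p,
    C(p) ~ C(0) * (1 - p)**3 with C(0) = 3(k - 2) / (4(k - 1)), is illustrative
    and not necessarily the estimator used in the paper."""
    mean_degree = np.mean([d for _, d in observed.degree()])
    k = max(2, int(round(mean_degree / 2.0)) * 2)          # nearest even degree
    c_obs = nx.average_clustering(observed)
    c0 = 3.0 * (k - 2) / (4.0 * (k - 1)) if k > 2 else 0.0
    p = 1.0 - (c_obs / c0) ** (1.0 / 3.0) if c0 > 0 else 1.0
    p = float(np.clip(p, 0.0, 1.0))
    return k, p, nx.watts_strogatz_graph(n_target, k, p)

# Hypothetical sampled network standing in for the available (partial) data.
sample_net = nx.watts_strogatz_graph(2000, 10, 0.05, seed=1)
k_hat, p_hat, replica = fit_ws_parameters(sample_net, n_target=10000)
print("estimated k, p:", k_hat, round(p_hat, 3))
print("clustering, sample vs replica:",
      round(nx.average_clustering(sample_net), 3),
      round(nx.average_clustering(replica), 3))
```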

  15. Constructing a Watts-Strogatz network from a small-world network with symmetric degree distribution.

    Science.gov (United States)

    Menezes, Mozart B C; Kim, Seokjin; Huang, Rongbing

    2017-01-01

    Though the small-world phenomenon is widespread in many real networks, it is still challenging to replicate a large network at the full scale for further study on its structure and dynamics when sufficient data are not readily available. We propose a method to construct a Watts-Strogatz network using a sample from a small-world network with symmetric degree distribution. Our method yields an estimated degree distribution which fits closely with that of a Watts-Strogatz network and leads into accurate estimates of network metrics such as clustering coefficient and degree of separation. We observe that the accuracy of our method increases as network size increases.

  16. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    Science.gov (United States)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several fields of industry the product comes from more than one production line, and comparative life tests are required. This calls for sampling from the different production lines, from which the joint censoring scheme arises. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
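
    As a simplified, complete-sample illustration of the estimation targets mentioned above (the paper works with the more involved jointly type-II censored likelihood), the sketch below computes the maximum likelihood estimate of the Pareto parameters and a percentile bootstrap confidence interval for the shape.

```python
import numpy as np

def pareto_mle(x):
    """Complete-sample MLE for the Pareto(scale=x_m, shape=alpha) distribution."""
    x = np.asarray(x, dtype=float)
    x_m = x.min()
    alpha = len(x) / np.log(x / x_m).sum()
    return x_m, alpha

def bootstrap_ci(x, n_boot=5000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval for the shape parameter."""
    rng = np.random.default_rng(seed)
    stats = [pareto_mle(rng.choice(x, size=len(x), replace=True))[1]
             for _ in range(n_boot)]
    return tuple(np.percentile(stats, [(1 - level) / 2 * 100,
                                       (1 + level) / 2 * 100]))

# Hypothetical lifetimes from one production line (scale 1.0, shape 2.5).
rng = np.random.default_rng(3)
lifetimes = 1.0 + rng.pareto(2.5, size=60)
x_m_hat, alpha_hat = pareto_mle(lifetimes)
print("MLE (x_m, alpha):", round(x_m_hat, 3), round(alpha_hat, 3),
      " 95% bootstrap CI for alpha:", bootstrap_ci(lifetimes))
```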

  17. Distribution of polybrominated diphenyl ethers in Japanese autopsy tissue and body fluid samples.

    Science.gov (United States)

    Hirai, Tetsuya; Fujimine, Yoshinori; Watanabe, Shaw; Nakano, Takeshi

    2012-09-01

    Brominated flame retardants are components of many plastics and are used in products such as cars, textiles, televisions, and personal computers. Human exposure to polybrominated diphenyl ether (PBDE) flame retardants has increased exponentially during the last three decades. Our objective was to measure the body burden and distribution of PBDEs and to determine the concentrations of the predominant PBDE congeners in samples of liver, bile, adipose tissue, and blood obtained from Japanese autopsy cases. Tissues and body fluids obtained from 20 autopsy cases were analyzed. The levels of 25 PBDE congeners, ranging from tri- to hexa-BDEs, were assessed. The geometric means of the sum of the concentrations of PBDE congeners having detection frequencies >50 % (ΣPBDE) in the blood, liver, bile, and adipose tissue were 2.4, 2.6, 1.4, and 4.3 ng/g lipid, respectively. The most abundant congeners were BDE-47 and BDE-153, followed by BDE-100, BDE-99, and BDE-28+33. These concentrations of PBDE congeners were similar to other reports of human exposure in Japan but were notably lower than those reported in the USA. Significant positive correlations were observed between the concentrations of predominant congeners and ΣPBDE among the samples analyzed. The ΣPBDE concentration was highest in the adipose tissue, but PBDEs were distributed widely among the tissues and body fluids analyzed. The PBDE levels observed in the present study are similar to those reported in previous studies in Japan and significantly lower than those reported in the USA.

  18. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Science.gov (United States)

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness, and indeed necessity, of sampling bias correction within MaxEnt.

  19. Test Cost and Test Accuracy in Clinical Laboratories in Kampala, Uganda.

    Science.gov (United States)

    Amukele, Timothy K; Jones, Robert; Elbireer, Ali

    2018-04-25

    To assess the accuracy and costs of laboratory tests in Kampala, Uganda. A random selection of 78 laboratories tested external quality assurance samples at market rates. There were 40 moderate- to high-complexity and 38 low-complexity laboratories. Four percent (3/78) of these laboratories were accredited and 94% (73/78) were private. The 40 moderate- to high-complexity laboratories performed malaria blood smear, urine human chorionic gonadotropin (hCG), human immunodeficiency virus (HIV), syphilis, glucose, and three-panel tests: CBC, liver function tests, and kidney function tests. The 38 low-complexity laboratories performed malaria blood smear, urine hCG, and syphilis testing only. Hematology, HIV, syphilis, and malarial proficiency testing samples were prepared by accredited laboratories in Kampala. All other samples were provided by the Royal College of Pathologists of Australia. Overall, 77.1% of results were accurate (met target values). Accuracy varied widely by laboratory (50%-100%), test identity (malaria blood smear, 96%; serum urea nitrogen, 38%), and test type (quantitative: 66% [31%-89%], qualitative: 91% [68%-97%]). Test prices varied by up to 3,600%, and there was no correlation between test cost and accuracy (r2 = 0.02). There were large differences in accuracy and price across laboratories in Kampala. Price was not associated with quality.

  20. MAFsnp: A Multi-Sample Accurate and Flexible SNP Caller Using Next-Generation Sequencing Data

    Science.gov (United States)

    Hu, Jiyuan; Li, Tengfei; Xiu, Zidi; Zhang, Hong

    2015-01-01

    Most existing statistical methods developed for calling single nucleotide polymorphisms (SNPs) using next-generation sequencing (NGS) data are based on Bayesian frameworks, and there does not exist any SNP caller that produces p-values for calling SNPs in a frequentist framework. To fill in this gap, we develop a new method MAFsnp, a Multiple-sample based Accurate and Flexible algorithm for calling SNPs with NGS data. MAFsnp is based on an estimated likelihood ratio test (eLRT) statistic. In practical situations, the involved parameter is very close to the boundary of the parameter space, so standard large-sample theory is not suitable for evaluating the finite-sample distribution of the eLRT statistic. Observing that the distribution of the test statistic is a mixture of zero and a continuous part, we propose to model the test statistic with a novel two-parameter mixture distribution. Once the parameters in the mixture distribution are estimated, p-values can be easily calculated for detecting SNPs, and the multiple-testing corrected p-values can be used to control the false discovery rate (FDR) at any pre-specified level. With simulated data, MAFsnp is shown to have much better control of FDR than the existing SNP callers. Through the application to two real datasets, MAFsnp is also shown to outperform the existing SNP callers in terms of calling accuracy. An R package “MAFsnp” implementing the new SNP caller is freely available at http://homepage.fudan.edu.cn/zhangh/softwares/. PMID:26309201
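
    A stripped-down illustration of the two ingredients described above: p-values from a test statistic whose null distribution mixes a point mass at zero with a continuous part, followed by Benjamini-Hochberg FDR control. The 50:50 chi-square(1) mixture used here is the classical boundary approximation, not MAFsnp's estimated two-parameter mixture, and the simulated statistics are assumptions.

```python
import numpy as np
from scipy import stats

def mixture_pvalue(lrt, w0=0.5, df=1):
    """P-value for an LRT statistic whose null law is a mixture of a point mass
    at zero (weight w0) and a chi-square(df) (weight 1 - w0).  MAFsnp estimates
    the mixture from the data; the 50:50 chi2(1) form here is an assumption."""
    lrt = np.asarray(lrt, dtype=float)
    return np.where(lrt <= 0, 1.0, (1 - w0) * stats.chi2.sf(lrt, df))

def benjamini_hochberg(pvals, fdr=0.05):
    """Return a boolean mask of rejected hypotheses at the given FDR level."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    passed = p[order] <= fdr * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Hypothetical eLRT statistics: mostly null sites plus a few true SNPs.
rng = np.random.default_rng(5)
null_part = np.where(rng.random(9000) < 0.5, 0.0, rng.chisquare(1, 9000))
signal = rng.chisquare(1, 1000) + 12.0
p = mixture_pvalue(np.concatenate([null_part, signal]))
called = benjamini_hochberg(p, fdr=0.05)
print("sites called as SNPs:", called.sum())
```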

  1. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    Science.gov (United States)

    Ma, Y. C.; Liu, H. Y.; Yan, S. B.; Yang, Y. H.; Yang, M. W.; Li, J. M.; Tang, J.

    2013-05-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency.

  2. Verification of the Time Accuracy of a Magnetometer by Using a GPS Pulse Generator

    Directory of Open Access Journals (Sweden)

    Yasuhiro Minamoto

    2011-05-01

    Full Text Available The time accuracy of geomagnetic data is an important specification for one-second data distributions. We tested a procedure to verify the time accuracy of a fluxgate magnetometer by using a GPS pulse generator. The magnetometer was equipped with a high time resolution (100 Hz) output, so the data delay could be checked directly. The delay detected from one-second data by a statistical method was larger than those from 0.1-s- and 0.01-s-resolution data. The test of the time accuracy revealed the larger delay and was useful for verifying the quality of the data.

  3. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    Science.gov (United States)

    Drusano, George L.

    1991-01-01

    The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.

  4. Statistical models for quantifying diagnostic accuracy with multiple lesions per patient

    NARCIS (Netherlands)

    Zwinderman, Aeilko H.; Glas, Afina S.; Bossuyt, Patrick M.; Florie, Jasper; Bipat, Shandra; Stoker, Jaap

    2008-01-01

    We propose random-effects models to summarize and quantify the accuracy of the diagnosis of multiple lesions on a single image without assuming independence between lesions. The number of false-positive lesions was assumed to be distributed as a Poisson mixture, and the proportion of true-positive

  5. A Bayesian Method for Weighted Sampling

    OpenAIRE

    Lo, Albert Y.

    1993-01-01

    Bayesian statistical inference for sampling from weighted distribution models is studied. Small-sample Bayesian bootstrap clone (BBC) approximations to the posterior distribution are discussed. A second-order property for the BBC in unweighted i.i.d. sampling is given. A consequence is that BBC approximations to a posterior distribution of the mean and to the sampling distribution of the sample average, can be made asymptotically accurate by a proper choice of the random variables that genera...
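
    For readers unfamiliar with the idea, the sketch below shows the basic Bayesian bootstrap step (Dirichlet(1, ..., 1) weights on the observations) approximating the posterior of a mean; the BBC refinements discussed in the paper are not reproduced here, and the example data are assumptions.

```python
import numpy as np

def bayesian_bootstrap_mean(x, n_draws=4000, seed=0):
    """Posterior draws of the mean under the Bayesian bootstrap:
    each draw reweights the observations with Dirichlet(1, ..., 1) weights."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    weights = rng.dirichlet(np.ones(len(x)), size=n_draws)   # (n_draws, n)
    return weights @ x

# Hypothetical i.i.d. sample.
data = np.random.default_rng(11).gamma(shape=2.0, scale=3.0, size=40)
draws = bayesian_bootstrap_mean(data)
print("posterior mean and 95% interval for the mean:",
      round(draws.mean(), 2), np.round(np.percentile(draws, [2.5, 97.5]), 2))
```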

  6. The Effect of Pixel Size on the Accuracy of Orthophoto Production

    Science.gov (United States)

    Kulur, S.; Yildiz, F.; Selcuk, O.; Yildiz, M. A.

    2016-06-01

    In our country, orthophoto products are used by the public and private sectors for engineering services and infrastructure projects. Orthophotos are particularly preferred because their production is faster and more economical than vector-based digital photogrammetric production. Today, digital orthophotos provide the accuracy expected for engineering and infrastructure projects. In this study, the accuracy of orthophotos produced with pixel sizes at different sampling intervals is tested against the expectations of engineering and infrastructure projects.

  7. a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    Science.gov (United States)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques suffer from low measurement accuracy, complicated circuit structure and large errors, so high-precision time interval data cannot be obtained with them. In order to obtain higher quality remote sensing cloud images based on time interval measurement, a higher accuracy time interval measurement method is proposed. The method is based on charging a capacitor and simultaneously sampling the change of the capacitor voltage. Firstly, an approximate model of the capacitor voltage curve during the time of flight of the pulse is fitted to the sampled data. Then, the whole charging time is obtained from the fitted function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20 %.
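
    A rough sketch of the idea, assuming an RC-type charging curve (the abstract does not specify the exact circuit model): fit the sampled capacitor voltages with the charging function and recover the time interval from the fitted start time. The sampler rate, noise level and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def charging_curve(t, t0, tau, v_full):
    """Capacitor voltage for charging that starts at time t0 with time constant
    tau (an assumed RC model; the paper does not specify the exact circuit)."""
    return np.where(t < t0, 0.0, v_full * (1.0 - np.exp(-(t - t0) / tau)))

# Hypothetical A/D samples, times in nanoseconds (1 GS/s sampler): charging
# starts when the return pulse arrives and stops at readout.
rng = np.random.default_rng(2)
t = np.arange(0.0, 200.0, 1.0)
true_t0, true_tau, v_full = 37.3, 60.0, 3.3
v = charging_curve(t, true_t0, true_tau, v_full) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(charging_curve, t, v, p0=[30.0, 50.0, 3.0])
t_stop = t[-1]                                   # readout ends the charge
print(f"estimated interval: {t_stop - popt[0]:.2f} ns "
      f"(true: {t_stop - true_t0:.2f} ns)")
```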

  8. Design and analysis for thematic map accuracy assessment: Fundamental principles

    Science.gov (United States)

    Stephen V. Stehman; Raymond L. Czaplewski

    1998-01-01

    Land-cover maps are used in numerous natural resource applications to describe the spatial distribution and pattern of land-cover, to estimate areal extent of various cover classes, or as input into habitat suitability models, land-cover change analyses, hydrological models, and risk analyses. Accuracy assessment quantifies data quality so that map users may evaluate...

  9. The accuracy of remotely-sensed IWC: An assessment from MLS, TRMM and CloudSat statistics

    Science.gov (United States)

    Wu, D. L.; Heymsfield, A. J.

    2006-12-01

    Understanding climate change requires accurate global cloud ice water content (IWC) measurements. Satellite remote sensing has been the major tool to provide such global observations, but the accuracy of deduced IWC depends on knowledge of cloud microphysics learned from in-situ samples. Because only a limited number and variety of ice clouds have been measured by in-situ sensors, the knowledge of cloud microphysics is incomplete, and the IWC accuracy from remote sensing can vary from 30% to 200% from case to case. Recent observations from MLS, TRMM and CloudSat allow us to evaluate the consistency and accuracy of IWCs deduced from passive and active satellite techniques. In this study we conduct statistical analyses on the tropical and subtropical IWCs observed by MLS, TRMM and CloudSat. The probability density functions (PDFs) of IWC are found to depend on the volume size of averaging, and therefore data need to be averaged onto the same volume to allow fair comparisons. Showing measurement noise, bias and sensitivity, the PDF is a better characterization than an average for evaluating IWC accuracy, because an averaged IWC depends on the cloud-detection threshold, which can vary from sensor to sensor. Different thresholds will not only change the average value but also change cloud fraction and occurrence frequency. Our study shows that MLS and TRMM IWCs, despite large differences in sensitivity with little overlap, can still be compared through their PDFs. The two statistics are generally consistent within 50% at ~13 km, obeying an approximate lognormal distribution as suggested by some ground-based radar observations. MLS has sensitivity to IWC of 1-100 mg/m3 whereas TRMM can improve its sensitivity to IWC as low as 70 mg/m3 if the radar data are averaged properly for the equivalent volume of MLS samples. The proper statistical averaging requires full characteristics of IWC noise, which are not available for products normally derived from radar reflectivity, and therefore we

  10. THE RELIABILITY AND ACCURACY OF THE TRIPLE MEASUREMENTS OF ANALOG PROCESS VARIABLES

    Directory of Open Access Journals (Sweden)

    V. A. Anishchenko

    2017-01-01

    Full Text Available The increase in unit capacity of electric equipment, as well as the complication of technological processes and of the devices that control and manage them in power plants and substations, demonstrates the need to improve the reliability and accuracy of the measurement information characterizing the state of the objects being managed. This objective is particularly important for nuclear power plants, where the price of inaccurate measurement of critical process variables is particularly high and an error might lead to irreparable consequences. Improving the reliability and accuracy of measurements, along with the improvement of the element base, is provided by methods of operational validation. These methods are based on the use of information redundancy (structural, topological, temporal). In particular, information redundancy can be achieved by the simultaneous measurement of one analog variable by two devices (duplication) or three devices (triplication), i.e. triple redundancy. The problem of operational control of the triple-redundant system for measuring electrical analog variables (currents, voltages, active and reactive power and energy) is considered as a special case of signal processing by ordered sampling on the basis of the majority transformation and transformations close to the majority one. Difficulties in monitoring the reliability of measurements are associated with two tasks. First, one needs to justify the degree of truncation of the distributions of random measurement errors and the allowable residuals of the pairwise differences of the measurement results. The second task consists in forming the algorithm for joint processing of the set of separate measurements judged to be valid. The quality of control is characterized by the reliability (validity) and the accuracy of the measuring system. Taken separately, these indicators might lead to opposite results. A compromise solution is therefore proposed
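
    A simplified sketch of majority-style processing of a triplicated analog measurement: pairwise differences flag an inconsistent channel, and the remaining valid readings are combined. The allowable residual is an assumed tuning parameter, and the paper's full statistical treatment is not reproduced here.

```python
import numpy as np

def fuse_triple(readings, max_residual):
    """Fuse three redundant measurements of one analog variable.
    A channel is declared valid if it agrees with at least one other channel
    within `max_residual`; valid channels are combined (median if all valid)."""
    x = np.asarray(readings, dtype=float)
    agree = np.abs(x[:, None] - x[None, :]) <= max_residual
    np.fill_diagonal(agree, False)
    valid = agree.any(axis=1)
    if valid.sum() == 3:
        return np.median(x), valid          # all consistent: majority value
    if valid.sum() == 2:
        return x[valid].mean(), valid       # one channel rejected
    return np.nan, valid                    # no two channels agree: unreliable

print(fuse_triple([230.1, 230.4, 230.2], max_residual=1.0))   # all valid
print(fuse_triple([230.1, 230.4, 238.9], max_residual=1.0))   # third rejected
```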

  11. Distribution of local critical current along sample length and its relation to overall current in a long Bi2223/Ag superconducting composite tape

    International Nuclear Information System (INIS)

    Ochiai, S; Doko, D; Okuda, H; Oh, S S; Ha, D W

    2006-01-01

    The distribution of the local critical current and the n-value along the sample length and its relation to the overall critical current were studied experimentally and analytically for the bent multifilamentary Bi2223/Ag/Ag-Mg alloy superconducting composite tape. Then, based on the results, it was attempted to simulate on a computer the dependence of the critical current on the sample length. The main results are summarized as follows. The experimentally observed relation of the distributed local critical current and n-value to the overall critical current was described comprehensively with a simple voltage summation model, in which the sample was regarded as a one-dimensional series circuit. The sample length dependence of the critical current was reproduced on the computer by a Monte Carlo simulation incorporating the voltage summation model and the regression analysis results for the local critical current distribution and the relation of the n-value to the critical current
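
    A compact sketch of the voltage-summation picture described above: each segment contributes a power-law electric field E_i(I) = E_c (I/I_c,i)^(n_i), the segment voltages are summed in series, and the overall critical current of a long sample is the current at which the summed voltage reaches the usual electric-field criterion. The distributions of the local I_c and n-values below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import brentq

EC = 1e-6       # V/cm electric-field criterion
SEG_LEN = 1.0   # cm, length of one "local" segment

def overall_ic(local_ic, local_n):
    """Overall critical current of a series circuit of segments in which each
    segment follows E_i(I) = EC * (I / Ic_i)**n_i.  The overall Ic is the
    current at which the summed voltage reaches EC times the total length."""
    total_len = SEG_LEN * len(local_ic)

    def excess_voltage(current):
        return (EC * (current / local_ic) ** local_n * SEG_LEN).sum() \
               - EC * total_len

    return brentq(excess_voltage, 1e-3, 2.0 * local_ic.max())

# Monte Carlo over sample length: local Ic and n-value drawn per segment
# (the assumed distributions are purely illustrative).
rng = np.random.default_rng(8)
for n_segments in (5, 20, 80, 320):
    ic_samples = [
        overall_ic(np.clip(rng.normal(30.0, 2.0, n_segments), 1.0, None),
                   np.clip(rng.normal(18.0, 1.5, n_segments), 5.0, None))
        for _ in range(200)
    ]
    print(f"{n_segments:4d} segments: mean overall Ic = {np.mean(ic_samples):.2f} A")
```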

  12. Study on distributions and recoveries of tetrachlorodibenzo-p-dioxin and octachlorodibenzo-p-dioxin in a mm5 sampling train

    International Nuclear Information System (INIS)

    Finkel, J.M.; James, R.H.; Baughman, K.W.

    1990-12-01

    14C-dioxin tracers were used to evaluate whole MM5 sampling train recoveries of dioxin and to determine the distribution of dioxins spiked into a sampling train that was concurrently sampling emissions from a burn of either natural gas ('clean' burn) or kerosene ('dirty' burn). The spike tests were made with a pilot-scale furnace constructed and operated in the laboratory. Recovery of 14C-dioxin from the MM5 sampling train was determined by scintillation spectrometry. The experimental results indicate that the amount of spiked TCDD-14C recovered was approximately 85% during a natural gas test and 83% during a kerosene test. The amount of spiked OCDD-14C recovered was approximately 88% during a kerosene test. Also, the data indicate that during the kerosene tests OCDD-14C is collected primarily in the front half of the sampling train but TCDD-14C is often found in the XAD and the rear filter bell, riser and condenser of the sampling train. During the natural gas tests, TCDD-14C was primarily in the XAD. The distribution of the TCDD-14C in the kerosene tests was dependent on the rigid operation of the sampling train. The information from the study will be used to determine procedural areas that need improvements or modifications to allow the efficient collection and accurate determination of trace levels of dioxins and furans using the MM5 Method

  13. Speed and accuracy of visual motion discrimination by rats.

    Directory of Open Access Journals (Sweden)

    Pamela Reinagel

    Full Text Available Animals must continuously evaluate sensory information to select the preferable among possible actions in a given context, including the option to wait for more information before committing to another course of action. In experimental sensory decision tasks that replicate these features, reaction time distributions can be informative about the implicit rules by which animals determine when to commit and what to do. We measured reaction times of Long-Evans rats discriminating the direction of motion in a coherent random dot motion stimulus, using a self-paced two-alternative forced-choice (2-AFC) reaction time task. Our main findings are: (1) When motion strength was constant across trials, the error trials had shorter reaction times than correct trials; in other words, accuracy increased with response latency. (2) When motion strength was varied in randomly interleaved trials, accuracy increased with motion strength, whereas reaction time decreased. (3) Accuracy increased with reaction time for each motion strength considered separately, and in the interleaved motion strength experiment overall. (4) When stimulus duration was limited, accuracy improved with stimulus duration, whereas reaction time decreased. (5) Accuracy decreased with response latency after stimulus offset. This was the case for each stimulus duration considered separately, and in the interleaved duration experiment overall. We conclude that rats integrate visual evidence over time, but in this task the time of their response is governed more by elapsed time than by a criterion for sufficient evidence.

  14. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  15. Multi-level methods and approximating distribution functions

    International Nuclear Information System (INIS)

    Wilson, D.; Baker, R. E.

    2016-01-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
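
    To make the simulation side concrete, here is a minimal tau-leap sketch for a single birth-death species (production at rate k1, degradation at rate k2·X), in which each reaction channel fires a Poisson number of times per step. This illustrates the generic approximate scheme the passage refers to, not the multi-level estimator itself; the rate constants are arbitrary.

```python
import numpy as np

def tau_leap_birth_death(x0, k1, k2, t_end, tau, rng):
    """Tau-leap simulation of  0 -> X (rate k1)  and  X -> 0 (rate k2 * X).
    Each reaction channel fires Poisson(a_j(x) * tau) times per step."""
    x, t = x0, 0.0
    while t < t_end:
        births = rng.poisson(k1 * tau)
        deaths = rng.poisson(k2 * x * tau)
        x = max(x + births - deaths, 0)
        t += tau
    return x

rng = np.random.default_rng(4)
samples = np.array([tau_leap_birth_death(0, k1=10.0, k2=0.1, t_end=50.0,
                                         tau=0.1, rng=rng) for _ in range(2000)])
# Crude estimate of the species-count distribution and its mean.
print("sample mean:", samples.mean(), "  theoretical stationary mean:", 10.0 / 0.1)
print("estimated P(X > 120):", (samples > 120).mean())
```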

  16. Multi-level methods and approximating distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  17. State updating of a distributed hydrological model with Ensemble Kalman Filtering: Effects of updating frequency and observation network density on forecast accuracy

    Science.gov (United States)

    Rakovec, O.; Weerts, A.; Hazenberg, P.; Torfs, P.; Uijlenhoet, R.

    2012-12-01

    This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model (Rakovec et al., 2012a). The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of such an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property). Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. The uncertain precipitation model forcings were obtained using a time-dependent multivariate spatial conditional simulation method (Rakovec et al., 2012b), which is further made conditional on preceding simulations. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty. Rakovec, O., Weerts, A. H., Hazenberg, P., Torfs, P. J. J. F., and Uijlenhoet, R.: State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy, Hydrol. Earth Syst. Sci. Discuss., 9, 3961-3999, doi:10.5194/hessd-9-3961-2012, 2012a. Rakovec, O., Hazenberg, P., Torfs, P. J. J. F., Weerts, A. H., and Uijlenhoet, R.: Generating spatial precipitation ensembles: impact of
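
    For orientation, a bare-bones stochastic EnKF analysis step (perturbed observations, gain built from ensemble covariances) is sketched below. The state layout, observation operator and error levels are toy assumptions; the study's hydrological model and grid-based states are far richer than this.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step.
    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector (e.g. interior and outlet discharges)
    H : (n_obs, n_state) observation operator
    R : (n_obs, n_obs) observation-error covariance"""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                          # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    # Perturb observations so the analysis ensemble keeps the right spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(6)
n_state, n_ens = 20, 64
X_f = rng.normal(5.0, 1.0, (n_state, n_ens))           # forecast states
H = np.zeros((2, n_state)); H[0, 0] = H[1, 10] = 1.0   # two "gauges"
R = np.diag([0.2 ** 2, 0.2 ** 2])
y = np.array([6.1, 4.4])
X_a = enkf_update(X_f, y, H, R, rng)
print("forecast vs analysis mean at gauge 1:",
      round(X_f[0].mean(), 2), round(X_a[0].mean(), 2))
```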

  18. The accuracy of liquid-liquid phase transition temperatures determined from semiautomated light scattering measurements

    Science.gov (United States)

    Dean, Kevin M.; Babayco, Christopher B.; Sluss, Daniel R. B.; Williamson, J. Charles

    2010-08-01

    The synthetic-method determination of liquid-liquid coexistence curves using semiautomated light scattering instrumentation and stirred samples is based on identifying the coexistence curve transition temperatures (Tcx) from sudden changes in turbidity associated with droplet formation. Here we use a thorough set of such measurements to evaluate the accuracy of several different analysis methods reported in the literature for assigning Tcx. More than 20 samples each of weakly opalescent isobutyric acid+water and strongly opalescent aniline+hexane were tested with our instrumentation. Transmitted light and scattering intensities at 2°, 24°, and 90° were collected simultaneously as a function of temperature for each stirred sample, and the data were compared with visual observations and light scattering theory. We find that assigning Tcx to the onset of decreased transmitted light or increased 2° scattering has a potential accuracy of 0.01 K or better for many samples. However, the turbidity due to critical opalescence obscures the identification of Tcx from the light scattering data of near-critical stirred samples, and no simple rule of interpretation can be applied regardless of collection geometry. At best, when 90° scattering is collected along with transmitted or 2° data, the accuracy of Tcx is limited to 0.05 K for near-critical samples. Visual determination of Tcx remains the more accurate approach in this case.

  19. Efficient sampling to determine the distribution of fruit quality and yield in a commercial apple orchard

    DEFF Research Database (Denmark)

    Martinez, M.; Wulfsohn, Dvora-Laio; Zamora, I.

    2012-01-01

    In situ assessment of fruit quality and yield can provide critical data for marketing and for logistical planning of the harvest, as well as for site-specific management. Our objective was to develop and validate efficient field sampling procedures for this purpose. We used the previously reported...... 'fractionator' tree sampling procedure and supporting handheld software (Gardi et al., 2007; Wulfsohn et al., 2012) to obtain representative samples of fruit from a 7.6-ha apple orchard (Malus ×domestica 'Fuji Raku Raku') in central Chile. The resulting sample consisted of 70 fruit on 56 branch segments...... of yield. Estimated marketable yield was 295.8±50.2 t. Field and packinghouse records indicated that of 348.2 t sent to packing (52.4 t or 15% higher than our estimate), 263.0 t was packed for export (32.8 t less or -12% error compared to our estimate). The estimated distribution of caliber compared very...

  20. Selection of representative calibration sample sets for near-infrared reflectance spectroscopy to predict nitrogen concentration in grasses

    DEFF Research Database (Denmark)

    Shetty, Nisha; Rinnan, Åsmund; Gislum, René

    2012-01-01

    ) algorithm were used and compared. Both the Puchwein and CADEX methods provide a calibration set equally distributed in space, and both methods require a minimum of prior knowledge. The samples were also selected randomly using complete random, cultivar random (year fixed), year random (cultivar fixed......) and interaction (cultivar × year fixed) random procedures to see the influence of different factors on sample selection. Puchwein's method performed best with the lowest RMSEP, followed by CADEX, interaction random, year random, cultivar random and complete random. Out of 118 samples of the complete calibration set... effectively enhance the cost-effectiveness of NIR spectral analysis by reducing the number of analyzed samples in the calibration set by more than 80%, which substantially reduces the effort of laboratory analyses with no significant loss in prediction accuracy....
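
    For reference, the CADEX (Kennard-Stone) selection compared against Puchwein's method can be sketched as follows: start from the pair of most distant samples, then repeatedly add the candidate whose minimum distance to the already-selected set is largest, which spreads the calibration set evenly over the covariate (or spectral score) space. The PCA-score input is an assumption for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_select):
    """Kennard-Stone (CADEX) selection of n_select rows of X."""
    d = cdist(X, X)
    # Start with the two mutually most distant samples.
    selected = list(np.unravel_index(np.argmax(d), d.shape))
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_select:
        # Distance of every remaining candidate to its nearest selected sample.
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining.pop(int(np.argmax(min_d))))
    return np.array(selected)

# Hypothetical PCA scores of NIR spectra (rows = grass samples).
scores = np.random.default_rng(9).normal(size=(118, 5))
calib_idx = kennard_stone(scores, n_select=20)
print("selected calibration samples:", calib_idx)
```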

  1. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  2. A development of two-dimensional birefringence distribution measurement system with a sampling rate of 1.3 MHz

    Science.gov (United States)

    Onuma, Takashi; Otani, Yukitoshi

    2014-03-01

    A two-dimensional birefringence distribution measurement system with a sampling rate of 1.3 MHz is proposed. A polarization image sensor is developed as the core device of the system. It is composed of a pixelated polarizer array made from photonic crystal and a parallel read-out circuit with a multi-channel analog-to-digital converter specialized for two-dimensional polarization detection. By applying a phase-shifting algorithm with circularly polarized incident light, the birefringence phase difference and azimuthal angle can be measured. The performance of the system is demonstrated experimentally by measuring an actual birefringence distribution and a polarization device such as a Babinet-Soleil compensator.
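
    One common reconstruction for such pixelated-polarizer measurements with circularly polarized illumination is sketched below: the linear Stokes components are formed from the four polarizer intensities, the retardance follows from their magnitude and the azimuth from their ratio. This is an illustrative textbook scheme, not necessarily the exact algorithm of the instrument described above.

```python
import numpy as np

def birefringence_from_polarizer_pixels(i0, i45, i90, i135):
    """Recover retardance (phase difference) and fast-axis azimuth per pixel
    from four linear-polarizer intensities, assuming circularly polarized
    incident light and an ideal linear-retarder sample."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = (i0 - i90) / s0
    s2 = (i45 - i135) / s0
    retardance = np.arcsin(np.clip(np.hypot(s1, s2), 0.0, 1.0))
    azimuth = 0.5 * np.arctan2(-s1, s2)
    return retardance, azimuth

# Hypothetical pixel: sample with retardance 0.6 rad, fast axis at 20 degrees.
delta, phi = 0.6, np.deg2rad(20.0)
s1_true = -np.sin(2 * phi) * np.sin(delta)
s2_true = np.cos(2 * phi) * np.sin(delta)
I = lambda th: 0.5 * (1 + s1_true * np.cos(2 * th) + s2_true * np.sin(2 * th))
ret, az = birefringence_from_polarizer_pixels(I(0), I(np.pi / 4),
                                              I(np.pi / 2), I(3 * np.pi / 4))
print(round(float(ret), 3), round(float(np.rad2deg(az)), 1))  # ~0.6 rad, ~20 deg
```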

  3. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Science.gov (United States)

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  4. FIELD ACCURACY TEST OF RPAS PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    P. Barry

    2013-08-01

    Full Text Available Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data, to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 Ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically, with an 11.7 mm ground sample distance taken from a flight altitude above ground level of 90 m. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 Ha. This

  5. Effect of layer thickness and printing orientation on mechanical properties and dimensional accuracy of 3D printed porous samples for bone tissue engineering.

    Directory of Open Access Journals (Sweden)

    Arghavan Farzadi

    Full Text Available The powder-based inkjet 3D printing method is one of the most attractive solid freeform techniques. It involves a sequential layering process through which 3D porous scaffolds can be produced directly from computer-generated models. The quality of 3D-printed products is controlled by the build parameters. In this study, calcium sulfate based powders were used for porous scaffold fabrication. The printed scaffolds, with 0.8 mm pore size and different layer thicknesses and printing orientations, were subjected to a depowdering step. The effects of four layer thicknesses and of printing orientations parallel to X, Y and Z on the physical and mechanical properties of the printed scaffolds were investigated. It was observed that the compressive strength, toughness and Young's modulus of samples with 0.1125 and 0.125 mm layer thickness were higher than those of the others. Furthermore, the results of SEM and μCT analyses showed that samples with 0.1125 mm layer thickness printed in the X direction had better dimensional accuracy and were significantly closer to the CAD-based designs with the predefined pore size, porosity and pore interconnectivity.

  6. Discussion on accuracy of weld residual stress measurement by neutron diffraction. Influence of strain free reference

    International Nuclear Information System (INIS)

    Suzuki, Hiroshi; Akita, Koichi

    2012-01-01

    It is necessary to evaluate a strain-free reference, α0, to perform accurate stress measurement using neutron diffraction. In this study, the accuracy of neutron stress measurement was discussed quantitatively on the basis of α0 evaluations on a dissimilar metal butt-weld between a type 304 austenitic stainless steel and an A533B low alloy ferritic steel. A strain-free standard specimen and a 10 mm thick sliced specimen taken from the dissimilar metal butt-weld were utilized. In the lattice constant evaluation using the standard specimen, the average lattice constant derived from multiple hkl reflections was adopted as the stress-free reference, cancelling out intergranular strains. By comparing the lattice constant distributions of the individual reflections with the average lattice constant distribution in the standard specimen, the αFe 211 and γFe 311 reflections were judged to be suitable reflections for neutron strain measurement with reduced intergranular strain effects. The residual stress distribution in the sliced specimen evaluated using the α0 measured here exhibited higher accuracy than that measured using strain gauges. On the other hand, α0 distributions were also evaluated using the sliced specimen under the plane-stress condition. The existence of slight longitudinal residual stresses near the weld center decreased the accuracy of these α0 evaluations, which means that the thickness of the sliced specimen must be optimized for accurate α0 evaluation under the plane-strain condition. As a conclusion of this study, it was confirmed that procedures for accurate α0 evaluation, optimization of the measurement conditions, and multiple evaluations of the results play an important role in improving the accuracy of residual stress measurement using neutron diffraction. (author)

  7. Sampling effects on the identification of roadkill hotspots: Implications for survey design.

    Science.gov (United States)

    Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António

    2015-10-01

    Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of road-kills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted for between 6.8% (bats) and 29.7% (small birds) of road segments, concentrating up to 60% of roadkills (lizards, lagomorphs, carnivores). Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missing "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. The decay of spatial accuracy with increasing time interval between surveys was faster for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species. Copyright © 2015 Elsevier Ltd. All rights reserved.
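
    One reading of the hotspot rule above, with segments flagged when their kill count exceeds the upper 95% limit of a Poisson distribution fitted to the mean kills per segment, can be sketched as follows; taking the Poisson 97.5th percentile as that limit, and the simulated counts, are assumptions on our part.

```python
import numpy as np
from scipy import stats

def roadkill_hotspots(kills_per_segment, level=0.95):
    """Flag segments whose kill count exceeds the upper limit of a central
    `level` Poisson interval around the mean kills per segment.  (One reading
    of the criterion in the abstract; the exact limit used there may differ.)"""
    kills = np.asarray(kills_per_segment)
    lam = kills.mean()
    upper = stats.poisson.ppf(1 - (1 - level) / 2, lam)
    return kills > upper, upper

# Hypothetical kill counts for 74 consecutive 500-m segments.
rng = np.random.default_rng(10)
counts = rng.poisson(3.0, 74) + np.where(rng.random(74) < 0.15,
                                         rng.poisson(8.0, 74), 0)
hot, threshold = roadkill_hotspots(counts)
print(f"threshold {threshold:.0f} kills; "
      f"{hot.sum()} of {len(counts)} segments flagged as hotspots")
```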

  8. Effect of CT image size and resolution on the accuracy of rock property estimates

    Science.gov (United States)

    Bazaikin, Y.; Gurevich, B.; Iglauer, S.; Khachkova, T.; Kolyukhin, D.; Lebedev, M.; Lisitsa, V.; Reshetova, G.

    2017-05-01

    In order to study the effect of the micro-CT scan resolution and size on the accuracy of upscaled digital rock property estimation of core samples, Bentheimer sandstone images with resolutions varying from 0.9 μm to 24 μm are used. We statistically show that the correlation length of the pore-to-matrix distribution can be reliably determined for images with a resolution finer than 9 voxels per correlation length, and that the representative volume for this property is about 15³ correlation lengths. Similar resolution values for the statistically representative volume are also valid for the estimation of the total porosity, specific surface area, mean curvature, and topology of the pore space. Only the total porosity and the number of isolated pores are stably recovered, whereas the geometry and the topological measures of the pore space are strongly affected by the resolution change. We also simulate fluid flow in the pore space and estimate permeability and tortuosity of the sample. The results demonstrate that the representative volume for the transport property calculation should be greater than 50 correlation lengths of the pore-to-matrix distribution. On the other hand, permeability estimation based on the statistical analysis of equivalent realizations shows some weak influence of the resolution on the transport properties. The reason for this might be that the characteristic scale of the particular physical processes affects the result more strongly than the model (image) scale.
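
    To make the notion of a representative volume concrete, the sketch below estimates porosity over progressively larger subvolumes of a segmented (binary) CT image and checks where the estimate stabilizes; the random array and the sizes are made up for illustration and do not reproduce the authors' workflow.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical segmented micro-CT volume: 1 = pore voxel, 0 = matrix voxel
volume = (rng.random((200, 200, 200)) < 0.2).astype(np.uint8)

def porosity(subvolume):
    """Fraction of pore voxels in a binary pore/matrix image."""
    return subvolume.mean()

# Porosity of cubic subvolumes of increasing edge length,
# all anchored at the same corner of the image.
for edge in (25, 50, 100, 150, 200):
    sub = volume[:edge, :edge, :edge]
    print(f"edge {edge:3d} voxels -> porosity {porosity(sub):.4f}")
# The representative elementary volume is reached roughly where further
# growth of the subvolume no longer changes the estimate appreciably.
```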

  9. Precision and accuracy of multi-element analysis of aerosols using energy-dispersive x-ray fluorescence

    International Nuclear Information System (INIS)

    Adams, F.; Van Espen, P.

    1976-01-01

    Measurements have been carried out for the determination of the inherent errors of energy-dispersive X-ray fluorescence and for the evaluation of its precision and accuracy. The accuracy of the method is confirmed by independent determinations on the same samples using other analytical methods

  10. Effect of modulation of the particle size distributions in the direct solid analysis by total-reflection X-ray fluorescence

    Science.gov (United States)

    Fernández-Ruiz, Ramón; Friedrich K., E. Josue; Redrejo, M. J.

    2018-02-01

    The main goal of this work was to investigate, in a systematic way, the influence of the controlled modulation of the particle size distribution of a representative solid sample on the more relevant analytical parameters of the quantitative Direct Solid Analysis (DSA) by Total-reflection X-Ray Fluorescence (TXRF) method. In particular, accuracy, uncertainty, linearity and detection limits were correlated with the main parameters of the size distributions for the following elements: Al, Si, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Rb, Sr, Ba and Pb. In all cases strong correlations were found. The main conclusion of this work can be summarized as follows: modulating the particle shapes towards lower average sizes, together with minimizing the width of the particle size distributions, produces a strong improvement in accuracy and a reduction of the uncertainties and limits of detection for the DSA-TXRF methodology. These achievements allow the future use of the DSA-TXRF analytical methodology for the development of ISO norms and standardized protocols for the direct analysis of solids by means of TXRF.

  11. Testing the existence of non-Maxwellian electron distributions in H II regions after assessing atomic data accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, C. [Permanent address: Centro de Física, Instituto Venezolano de Investigaciones Científicas (IVIC), P.O. Box 20632, Caracas 1020A, Venezuela. (Venezuela, Bolivarian Republic of); Bautista, M. A., E-mail: claudio.mendozaguardia@wmich.edu, E-mail: manuel.bautista@wmich.edu [Department of Physics, Western Michigan University, Kalamazoo, MI 49008 (United States)

    2014-04-20

    The classic optical nebular diagnostics [N II], [O II], [O III], [S II], [S III], and [Ar III] are employed to search for evidence of non-Maxwellian electron distributions, namely κ distributions, in a sample of well-observed Galactic H II regions. By computing new effective collision strengths for all these systems and A-values when necessary (e.g., S II), and by comparing with previous collisional and radiative data sets, we have been able to obtain realistic estimates of the electron-temperature dispersion caused by the atomic data, which in most cases are not larger than ∼10%. If the uncertainties due to both observation and atomic data are then taken into account, it is plausible to determine for some nebulae a representative average temperature while in others there are at least two plasma excitation regions. For the latter, it is found that the diagnostic temperature differences in the high-excitation region, e.g., T_e(O III), T_e(S III), and T_e(Ar III), cannot be reconciled by invoking κ distributions. For the low-excitation region, it is possible in some, but not all, cases to arrive at a common, lower temperature for [N II], [O II], and [S II] with κ ≈ 10, which would then lead to significant abundance enhancements for these ions. An analytic formula is proposed to generate accurate κ-averaged excitation rate coefficients (better than 10% for κ ≥ 5) from temperature tabulations of the Maxwell-Boltzmann effective collision strengths.

  12. Assessment of crystalline disorder in cryo-milled samples of indomethacin using atomic pair-wise distribution functions

    DEFF Research Database (Denmark)

    Bøtker, Johan P; Karmwar, Pranav; Strachan, Clare J

    2011-01-01

    The aim of this study was to investigate the usefulness of the atomic pair-wise distribution function (PDF) to detect the extension of disorder/amorphousness induced into a crystalline drug using a cryo-milling technique, and to determine the optimal milling times to achieve amorphisation. The PDF ... to analyse the cryo-milled samples. The high similarity between the γ-indomethacin cryogenic ball milled samples and the crude γ-indomethacin indicated that milled samples retained residual order of the γ-form. The PDF analysis encompassed the capability of achieving a correlation with the physical ... properties determined from DSC, ss-NMR and stability experiments. Multivariate data analysis (MVDA) was used to visualize the differences in the PDF and XRPD data. The MVDA approach revealed that PDF is more efficient in assessing the introduced degree of disorder in γ-indomethacin after cryo-milling than ...

  13. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

    Full Text Available Accurate knowledge on snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and the performance is analysed. The scaling method is only applied if it is snowing. For rainfall the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of absolute snow depth error is reduced up to a factor 3.4 to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted and modelling performance of spatial snow distribution is improved.
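
    A minimal sketch of the kind of precipitation scaling described above: the precipitation field is multiplied by the ratio of a measured snow-depth map to its spatial mean, applied only where precipitation falls as snow (air temperature below a threshold). The field names, the threshold value and the scaling rule itself are assumptions for illustration, not the Alpine3D implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

precip = rng.gamma(2.0, 1.5, size=(100, 100))      # hypothetical precipitation field (mm)
snow_depth = rng.gamma(3.0, 0.5, size=(100, 100))  # hypothetical measured snow-depth map (m)
air_temp = rng.normal(-1.0, 2.0, size=(100, 100))  # hypothetical air temperature (deg C)

# Scaling factor: local snow depth relative to the domain mean
scale = snow_depth / snow_depth.mean()

# Apply the scaling only where it is snowing; keep rainfall unchanged
snowing = air_temp < 1.0                           # simple phase threshold (assumed)
precip_scaled = np.where(snowing, precip * scale, precip)

print("mean precip before:", round(precip.mean(), 3),
      "after:", round(precip_scaled.mean(), 3))
```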

  14. Multi-shelled q-ball imaging. Moment-based orientation distribution function

    International Nuclear Information System (INIS)

    Umezawa, Eizou; Yamaguchi, Kojiro; Yoshikawa, Mayo; Ohno, Kana; Yoshikawa, Emi

    2010-01-01

    q-ball imaging (QBI) reconstructs the orientation distribution function (ODF) that describes the probability for a spin to diffuse in a given direction, and it is capable of identifying intravoxel multiple fiber orientations. The local maxima of ODF are assumed to indicate fiber orientations, but there is a mismatch between the orientation of a fiber crossing and the local maxima. We propose a novel method, multi-shelled QBI (MS-QBI), that gives a new ODF based on the moment of the probability density function of diffusion displacement. We test the accuracy of the fiber orientation indicated by the new ODF and test fiber tracking using the new ODF. We performed tests using numerical simulation. To test the accuracy of fiber orientation, we assumed that 2 fibers cross and evaluated the deviation of the measured crossing angle from the actual angle. To test the fiber tracking, we used a numerical phantom of the cerebral hemisphere containing the corpus callosum, projection fibers, and superior longitudinal fasciculus. In the tests, we compared the results between MS-QBI and conventional QBI under the condition of approximately equal total numbers of diffusion signal samplings between the 2 methods and chose the interpolation parameter such that the stabilities of the results of the angular deviation for the 2 methods were the same. The absolute value of the mean angular deviation was smaller in MS-QBI than in conventional QBI. Using the moment-based ODF improved the accuracy of fiber pathways in fiber tracking but maintained the stability of the results. MS-QBI can more accurately identify intravoxel multiple fiber orientations than can QBI, without increasing sampling number. The high accuracy of MS-QBI will contribute to the improved tractography. (author)

  15. Examination of quantitative accuracy of PIXE analysis for atmospheric aerosol particle samples. PIXE analysis of NIST air particulate on filter media

    International Nuclear Information System (INIS)

    Saitoh, Katsumi; Sera, Koichiro

    2005-01-01

    In order to confirm accuracy of the direct analysis of filter samples containing atmospheric aerosol particles collected on a polycarbonate membrane filter by PIXE, we carried out PIXE analysis on a National Institute of Standards and Technology (NIST, USA) air particulate on filter media (SRM 2783). For 16 elements with NIST certified values determined by PIXE analysis - Na, Mg, Al, Si, S, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn and Pb - quantitative values were 80-110% relative to NIST certified values except for Na, Al, Si and Ni. Quantitative values of Na, Al and Si were 140-170% relative to NIST certified values, which were all high, and Ni was 64%. One possible reason why the quantitative values of Na, Al and Si were higher than the NIST certified values could be the difference in the X-ray spectrum analysis method used. (author)

  16. Multi-saline sample distillation apparatus for hydrogen isotope analyses : design and accuracy

    Science.gov (United States)

    Hassan, Afifa Afifi

    1981-01-01

    A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 °C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated. (USGS)

  17. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    Full Text Available The length distribution of catches represents a fundamental source of information for estimating growth and spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated based on samples of caught individuals. This work studies the optimum sample size of individuals at each fishing set in order to obtain a representative sample of the length distribution and the proportion of juveniles in the fishing set. To that end, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust size and juvenile estimations. Though the application of this work corresponds to the anchovy fishery, the procedure can be applied to any fishery, either for on-board or inland biometric measurements.
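
    The following Python sketch illustrates the general idea of choosing an optimum per-set sample size: lengths are resampled at increasing sample sizes and the spread of the estimated juvenile proportion is examined. The length distribution, juvenile cutoff and stopping criterion are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical lengths (cm) of all anchovies in one fishing set
lengths = rng.normal(12.5, 1.8, size=5000)
JUVENILE_CUTOFF_CM = 12.0   # assumed cutoff, for illustration only

def juvenile_sd(sample_size, n_boot=1000):
    """Bootstrap spread of the juvenile-proportion estimate for a given sample size."""
    estimates = [
        (rng.choice(lengths, size=sample_size, replace=False) < JUVENILE_CUTOFF_CM).mean()
        for _ in range(n_boot)
    ]
    return np.std(estimates)

for n in (25, 50, 100, 200, 400):
    print(f"sample size {n:3d} -> SD of juvenile proportion {juvenile_sd(n):.3f}")
# The 'optimum' size is where the SD stops decreasing appreciably
# relative to the extra measurement effort.
```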

  18. Concentration distribution of trace elements: from normal distribution to Levy flights

    International Nuclear Information System (INIS)

    Kubala-Kukus, A.; Banas, D.; Braziewicz, J.; Majewska, U.; Pajek, M.

    2003-01-01

    The paper discusses the nature of the concentration distributions of trace elements in biomedical samples, which were measured by using X-ray fluorescence techniques (XRF, TXRF). Our earlier observation that the lognormal distribution describes the measured concentration distributions well is explained here on more general grounds. In particular, the role of the random multiplicative process, which models the concentration distributions of trace elements in biomedical samples, is discussed in detail. It is demonstrated that the lognormal distribution, which appears when the multiplicative process is driven by a normal distribution, can be generalized to the so-called log-stable distribution. Such a distribution describes a random multiplicative process which is driven, instead of by the normal distribution, by a more general stable distribution, known as Lévy flights. The presented ideas are exemplified by the results of a study of trace element concentration distributions in selected biomedical samples, obtained by using the conventional (XRF) and total-reflection (TXRF) X-ray fluorescence methods. In particular, the first observation of a log-stable concentration distribution of trace elements is reported and discussed here in detail.
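
    A small simulation can make the mechanism referred to above concrete: a random multiplicative process driven by normal increments yields lognormal values, whereas driving it with a heavy-tailed stable distribution yields a log-stable one. This is a generic illustration, not the authors' model; the parameters are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_samples, n_steps = 2000, 100

# Multiplicative process driven by normal increments -> lognormal outcome
log_increments = rng.normal(0.0, 0.05, size=(n_samples, n_steps))
lognormal_like = np.exp(log_increments.sum(axis=1))

# Same process driven by heavy-tailed alpha-stable increments -> log-stable outcome
stable_increments = stats.levy_stable.rvs(alpha=1.7, beta=0.0, scale=0.05,
                                          size=(n_samples, n_steps), random_state=rng)
logstable_like = np.exp(stable_increments.sum(axis=1))

for name, vals in [("normal-driven ", lognormal_like), ("stable-driven ", logstable_like)]:
    q50, q99 = np.quantile(vals, [0.5, 0.99])
    print(f"{name}: median = {q50:.3f}, 99th percentile = {q99:.3f}")
```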

  19. Enhancement of the spectral selectivity of complex samples by measuring them in a frozen state at low temperatures in order to improve accuracy for quantitative analysis. Part II. Determination of viscosity for lube base oils using Raman spectroscopy.

    Science.gov (United States)

    Kim, Mooeung; Chung, Hoeil

    2013-03-07

    The use of selectivity-enhanced Raman spectra of lube base oil (LBO) samples achieved by the spectral collection under frozen conditions at low temperatures was effective for improving accuracy for the determination of the kinematic viscosity at 40 °C (KV@40). A collection of Raman spectra from samples cooled around -160 °C provided the most accurate measurement of KV@40. Components of the LBO samples were mainly long-chain hydrocarbons with molecular structures that were deformable when these were frozen, and the different structural deformabilities of the components enhanced spectral selectivity among the samples. To study the structural variation of components according to the change of sample temperature from cryogenic to ambient condition, n-heptadecane and pristane (2,6,10,14-tetramethylpentadecane) were selected as representative components of LBO samples, and their temperature-induced spectral features as well as the corresponding spectral loadings were investigated. A two-dimensional (2D) correlation analysis was also employed to explain the origin for the improved accuracy. The asynchronous 2D correlation pattern was simplest at the optimal temperature, indicating the occurrence of distinct and selective spectral variations, which enabled the variation of KV@40 of LBO samples to be more accurately assessed.

  20. Using Environmental DNA to Improve Species Distribution Models for Freshwater Invaders

    Directory of Open Access Journals (Sweden)

    Teja P. Muha

    2017-12-01

    Full Text Available Species Distribution Models (SDMs) have been reported as a useful tool for the risk assessment and modeling of the pathways of dispersal of freshwater invasive alien species (IAS). Environmental DNA (eDNA) is a novel tool that can help detect IAS at their early stage of introduction and additionally improve the data available for a more efficient management. SDMs rely on presence and absence of the species in the study area to infer the predictors affecting species distributions. Presence is verified once a species is detected, but confirmation of absence can be problematic because this depends both on the detectability of the species and the sampling strategy. eDNA is a technique that presents higher detectability and accuracy in comparison to conventional sampling techniques, and can effectively differentiate between presence or absence of specific species or entire communities by using a barcoding or metabarcoding approach. However, a number of potential biases can be introduced during (i) sampling, (ii) amplification, (iii) sequencing, or (iv) through the usage of bioinformatics pipelines. Therefore, it is important to report and conduct the field and laboratory procedures in a consistent way, by (i) introducing eDNA-independent observations, (ii) amplifying and sequencing control samples, (iii) achieving quality sequence reads by appropriate clean-up steps, (iv) controlling primer amplification preferences, (v) introducing PCR-free sequence capturing, (vi) estimating primer detection capabilities through controlled experiments and/or (vii) post-hoc introduction of "site occupancy-detection models." With eDNA methodology becoming increasingly routine, its use is strongly recommended to retrieve species distributional data for SDMs.

  1. Extraction X-ray fluorescence determination of gold in natural samples

    International Nuclear Information System (INIS)

    Dmitriev, S.N.; Shishkina, T.V.; Zhuravleva, E.L.; Chimehg, Zh.

    1990-01-01

    The behaviour of gold and other elements impeding its X-ray fluorescence (XRF) determination, namely, of zinc, lead, and arsenic, has been studied during their extraction by TBP from hydrochloric, nitric, and aqua regia solutions using solid extractant (SE(TBP)). Gold extraction from pulps after aqua regia leaching, with the gold distribution coefficient (D) being equal to about 10^4, was observed as the most favourable one for the quantitative and selective recovery of gold. For extraction from hydrochloric solutions the D_Au value does not depend on the gold content of initial solutions (10^-8 - 10^-4 M), but it decreases substantially with increasing extraction temperature (from 5×10^5 at 20 °C to 9×10^3 at 70 °C). An anomalously high distribution coefficient of lead (D_Pb = 10^3) was observed during extraction from hydrochloric solutions in the presence of chlorine. This fact could be explained by the formation of the chlorocomplexes of lead (IV). The XRF method of gold determination in natural samples has been developed, which includes the aqua regia decomposition of the samples, recovery of gold from the pulp after its leaching by SE(TBP) and back-extraction using a 0.025 M hot thiourea solution providing a thin sample film for secondary XRF spectrometry. For 25 g of the sample material the limit of determination is set at 0.01 g per ton (10^-6 %). The accuracy of the technique has been checked on different reference materials. The results agree within 10%. 16 refs.; 5 figs.; 1 tab

  2. Accuracy of predicting milk yield from alternative recording schemes

    NARCIS (Netherlands)

    Berry, D.P.; Olori, V.E.; Cromie, A.R.; Rath, M.; Veerkamp, R.F.; Dilon, P.

    2005-01-01

    The effect of reducing the frequency of official milk recording and the number of recorded samples per test-day on the accuracy of predicting daily yield and cumulative 305-day yield was investigated. A control data set consisting of 58 210 primiparous cows with milk test-day records every 4 weeks

  3. Distribution of Total Depressive Symptoms Scores and Each Depressive Symptom Item in a Sample of Japanese Employees.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Yamada, Hiroshi; Miyake, Hirotsugu; Furukawa, Toshiaki A

    2016-01-01

    In a previous study, we reported that the distribution of total depressive symptoms scores according to the Center for Epidemiologic Studies Depression Scale (CES-D) in a general population is stable throughout middle adulthood and follows an exponential pattern except for at the lowest end of the symptom score. Furthermore, the individual distributions of 16 negative symptom items of the CES-D exhibit a common mathematical pattern. To confirm the reproducibility of these findings, we investigated the distribution of total depressive symptoms scores and 16 negative symptom items in a sample of Japanese employees. We analyzed 7624 employees aged 20-59 years who had participated in the Northern Japan Occupational Health Promotion Centers Collaboration Study for Mental Health. Depressive symptoms were assessed using the CES-D. The CES-D contains 20 items, each of which is scored in four grades: "rarely," "some," "much," and "most of the time." The descriptive statistics and frequency curves of the distributions were then compared according to age group. The distribution of total depressive symptoms scores appeared to be stable from 30-59 years. The right tail of the distribution for ages 30-59 years exhibited a linear pattern with a log-normal scale. The distributions of the 16 individual negative symptom items of the CES-D exhibited a common mathematical pattern which displayed different distributions with a boundary at "some." The distributions of the 16 negative symptom items from "some" to "most" followed a linear pattern with a log-normal scale. The distributions of the total depressive symptoms scores and individual negative symptom items in a Japanese occupational setting show the same patterns as those observed in a general population. These results show that the specific mathematical patterns of the distributions of total depressive symptoms scores and individual negative symptom items can be reproduced in an occupational population.

  4. Examining Impulse-Variability Theory and the Speed-Accuracy Trade-Off in Children's Overarm Throwing Performance.

    Science.gov (United States)

    Molina, Sergio L; Stodden, David F

    2018-04-01

    This study examined variability in throwing speed and spatial error to test the prediction of an inverted-U function (i.e., impulse-variability [IV] theory) and the speed-accuracy trade-off. Forty-five 9- to 11-year-old children were instructed to throw at a specified percentage of maximum speed (45%, 65%, 85%, and 100%) and hit the wall target. Results indicated no statistically significant differences in variable error across the target conditions (p = .72), failing to support the inverted-U hypothesis. Spatial accuracy results indicated no statistically significant differences with mean radial error (p = .18), centroid radial error (p = .13), and bivariate variable error (p = .08) also failing to support the speed-accuracy trade-off in overarm throwing. As neither throwing performance variability nor accuracy changed across percentages of maximum speed in this sample of children as well as in a previous adult sample, current policy and practices of practitioners may need to be reevaluated.

  5. Precision time distribution within a deep space communications complex

    Science.gov (United States)

    Curtright, J. B.

    1972-01-01

    The Precision Time Distribution System (PTDS) at the Goldstone Deep Space Communications Complex is a practical application of existing technology to the solution of a local problem. The problem was to synchronize four station timing systems to a master source with a relative accuracy consistently and significantly better than 10 microseconds. The solution involved combining a precision timing source, an automatic error detection assembly and a microwave distribution network into an operational system. Upon activation of the completed PTDS two years ago, synchronization accuracy at Goldstone (two station relative) was improved by an order of magnitude. It is felt that the validation of the PTDS mechanization is now completed. Other facilities which have site dispersion and synchronization accuracy requirements similar to Goldstone may find the PTDS mechanization useful in solving their problem. At present, the two station relative synchronization accuracy at Goldstone is better than one microsecond.

  6. Distribution of 137Cs in samples of ocean bottom sediments of the baltic sea in 1982-1983

    International Nuclear Information System (INIS)

    Gedenov, L.I.; Flegontov, V.M.; Ivanova, L.M.; Kostandov, K.A.

    1986-01-01

    The concentration of Cs-137 in samples of ocean bottom sediments picked up in 1979 in the Gulf of Finland with a geological nozzle pipe varied within a wide interval of values. The results could indicate nonuniformity of the Cs-137 distribution in ocean bottom sediments as well as the penetration of significant amounts of Cs-137 to large depths. The main error resulted from the sampling technique employed because the upper part of the sediment could be lost. In 1982, a special ground-sampling device, with which the upper layer of sediments in the water layer close to the ocean bottom could be sampled, was tested in the Gulf of Finland and the Northeastern part of the Baltic Sea. The results of a layerwise determination of the Cs-137 concentration in samples of ocean bottom sediments of the Gulf of Finland and of the Baltic Sea are listed. The new soil-sampling device for picking samples of ocean sediments of undisturbed stratification will allow a correct determination of the radionuclide accumulation in the upper layers of ocean bottom sediments in the Baltic Sea

  7. Accuracy and detection limits for bioassay measurements in radiation protection. Statistical considerations

    International Nuclear Information System (INIS)

    Brodsky, A.

    1986-04-01

    This report provides statistical concepts and formulas for defining minimum detectable amount (MDA), bias and precision of sample analytical measurements of radioactivity for radiobioassay purposes. The defined statistical quantities and accuracy criteria were developed for use in standard performance criteria for radiobioassay, but are also useful in intralaboratory quality assurance programs. This report also includes a literature review and analysis of accuracy needs and accuracy recommendations of national and international scientific organizations for radiation or radioactivity measurements used for radiation protection purposes. Computer programs are also included for calculating the probabilities of passing or failing multiple analytical tests for different acceptable ranges of bias and precision
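
    As background for the MDA concept discussed in this report, a widely used approximation (following Currie's formulation) computes the detection limit from blank counting statistics. The sketch below is a generic illustration with assumed counting parameters, not the report's specific criteria or notation.

```python
import math

def minimum_detectable_activity(blank_counts, count_time_s, efficiency, yield_fraction=1.0):
    """Approximate MDA (Bq) from paired-blank counting statistics,
    using the common Currie-type expression L_D = 2.71 + 4.65 * sqrt(B)."""
    detection_limit_counts = 2.71 + 4.65 * math.sqrt(blank_counts)
    return detection_limit_counts / (count_time_s * efficiency * yield_fraction)

# Example with assumed values: 400 blank counts in 3600 s, 25% counting efficiency
print(f"MDA = {minimum_detectable_activity(400, 3600, 0.25):.4f} Bq")
```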

  8. Time accuracy requirements for fusion experiments: A case study at ASDEX Upgrade

    International Nuclear Information System (INIS)

    Raupp, Gerhard; Behler, Karl; Eixenberger, Horst; Fitzek, Michael; Kollotzek, Horst; Lohs, Andreas; Lueddecke, Klaus; Mueller, Peter; Merkel, Roland; Neu, Gregor; Schacht, Joerg; Schramm, Gerold; Treutterer, Wolfgang; Zasche, Dieter; Zehetbauer, Thomas

    2010-01-01

    To manage and operate a fusion device and measure meaningful data, an accurate and stable time is needed. As a benchmark, we suggest considering time accuracy as sufficient if it is better than typical data errors or process timescales. This allows one to distinguish application domains and choose appropriate time distribution methods. For ASDEX Upgrade, a standard NTP method provides Unix time for project and operation management tasks, and a dedicated time system generates and distributes a precise experiment time for physics applications. Applying the benchmark to ASDEX Upgrade shows that physics measurements tagged with experiment time meet the requirements, while correlation of NTP-tagged operation data with physics data tagged with experiment time remains problematic. Closer coupling of the two initially free-running time systems with daily re-sets was an efficient and satisfactory improvement. For ultimate accuracy and seamless integration, however, continuous adjustment of the experiment time clock frequency to NTP is needed, within frequency variation limits given by the benchmark.

  9. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    International Nuclear Information System (INIS)

    Ma, Y C; Liu, H Y; Yan, S B; Li, J M; Tang, J; Yang, Y H; Yang, M W

    2013-01-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency. (paper)
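
    The centroid ("center of gravity") step mentioned above can be illustrated with a few lines of Python: after smoothing, the position of a reflected Bragg pulse is estimated as an intensity-weighted average of the sample positions. The signal, smoothing choice and threshold are assumptions for illustration only, not the authors' interrogation code.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 1000)                       # time axis (arbitrary units)
signal = np.exp(-((t - 0.42) / 0.01) ** 2) + 0.05 * rng.normal(size=t.size)

# Simple moving-average smoothing before centroid finding
kernel = np.ones(9) / 9.0
smoothed = np.convolve(signal, kernel, mode="same")

# Centroid of the samples above a threshold (intensity-weighted mean position)
mask = smoothed > 0.3 * smoothed.max()
centroid = np.sum(t[mask] * smoothed[mask]) / np.sum(smoothed[mask])
print(f"estimated peak position: {centroid:.4f} (true 0.4200)")
```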

  10. A new method for determining the uranium and thorium distribution in volcanic rock samples using solid state nuclear track detectors

    International Nuclear Information System (INIS)

    Misdaq, M.A.; Bakhchi, A.; Ktata, A.; Koutit, A.; Lamine, J.; Ait nouh, F.; Oufni, L.

    2000-01-01

    A method based on using solid state nuclear track detectors (SSNTD) of the CR-39 and LR-115 type II kind, and on calculating the probabilities for the alpha particles emitted by the uranium and thorium series to reach and be registered on these films, was utilized for determining the uranium and thorium contents in various geological samples. The distribution of uranium and thorium in different volcanic rocks has been investigated using the fission track method. In this work, the uranium and thorium contents have been determined in different volcanic rock samples by using CR-39 and LR-115 type II solid state nuclear track detectors (SSNTD). The mean critical angles of etching of the solid state nuclear track detectors utilized have been calculated. A petrographical study of the volcanic rock thin layers studied has been conducted. The uranium and thorium distribution inside different rock thin layers has been studied. The mechanism of inclusion of the uranium and thorium nuclei inside the volcanic rock samples studied has been investigated. (author)

  11. 3D site specific sample preparation and analysis of 3D devices (FinFETs) by atom probe tomography.

    Science.gov (United States)

    Kambham, Ajay Kumar; Kumar, Arul; Gilbert, Matthieu; Vandervorst, Wilfried

    2013-09-01

    With the transition from planar to three-dimensional device architectures such as Fin field-effect-transistors (FinFETs), new metrology approaches are required to meet the needs of semiconductor technology. It is important to characterize the 3D-dopant distributions precisely as their extent, positioning relative to gate edges and absolute concentration determine the device performance in great detail. At present the atom probe has shown its ability to analyze dopant distributions in semiconductor and thin insulating materials with sub-nm 3D-resolution and good dopant sensitivity. However, so far most reports have dealt with planar devices or restricted the measurements to 2D test structures which represent only limited challenges in terms of localization and site specific sample preparation. In this paper we will discuss the methodology to extract the dopant distribution from real 3D-devices such as a 3D-FinFET device, requiring the sample preparation to be carried out at a site specific location with a positioning accuracy ∼50 nm. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Comparison of SHOX and associated elements duplications distribution between patients (Lėri-Weill dyschondrosteosis/idiopathic short stature) and population sample.

    Science.gov (United States)

    Hirschfeldova, Katerina; Solc, Roman

    2017-09-05

    The effect of heterozygous duplications of SHOX and associated elements on Lėri-Weill dyschondrosteosis (LWD) and idiopathic short stature (ISS) development is less distinct when compared to reciprocal deletions. The aim of our study was to compare frequency and distribution of duplications within SHOX and associated elements between population sample and LWD (ISS) patients. A preliminary analysis conducted on Czech population sample of 250 individuals compared to our previously reported sample of 352 ISS/LWD Czech patients indicated that rather than the difference in frequency of duplications it is the difference in their distribution. Particularly, there was an increased frequency of duplications residing to the CNE-9 enhancer in our LWD/ISS sample. To see whether the obtained data are consistent across published studies we made a literature survey to get published cases with SHOX or associated elements duplication and formed the merged LWD, the merged ISS, and the merged population samples. Relative frequency of particular region duplication in each of those merged samples were calculated. There was a significant difference in the relative frequency of CNE-9 enhancer duplications (11 vs. 3) and complete SHOX (exon1-6b) duplications (4 vs. 24) (p-value 0.0139 and p-value 0.000014, respectively) between the merged LWD sample and the merged population sample. We thus propose that partial SHOX duplications and small duplications encompassing CNE-9 enhancer could be highly penetrant alleles associated with ISS and LWD development. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. The depth distribution functions of the natural abundances of carbon isotopes in Alfisols thoroughly sampled by thin-layer sampling, and their relation to the dynamics of organic matter in these soils

    International Nuclear Information System (INIS)

    Becker-Heidmann, P.

    1989-01-01

    The aim of this study was to gain fundamental statements on the relationship between the depth distributions of the natural abundances of the 13C and 14C isotopes and the dynamics of the organic matter in Alfisols. For this purpose, six Alfisols were investigated: four forest soils from Northern Germany, two of them developed in Loess and two in glacial loam, one West German Loess soil used for fruit-growing and one agricultural granite-gneiss soil from the semiarid part of India. The soil was sampled as successive horizontal layers of 2 cm depth from an area of 0.5 to 1 m² size, starting from the organic horizon down to the C horizon or the lower part of the Bt. This kind of complete thin-layer-wise sampling was applied here for the first time. The carbon content and the natural abundances of the 13C and 14C isotopes of each sample were determined. The δ13C value was measured by mass spectrometry. A vacuum preparation line with an electronically controlled cooling unit was constructed for this purpose. For the determination of the 14C content, the sample carbon was transferred into benzene, and its activity was measured by liquid scintillation spectrometry. From the combination of the depth distribution functions of the 14C activity and the δ13C value, and with the aid of additional analyses like C/N ratio and particle size distribution, a conclusive interpretation as to the dynamics of the organic matter in the investigated Alfisols is given. (orig./BBR)

  14. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

    The signal reconstruction methods based on inverse modeling for the signal reconstruction of multifunctional sensors have been widely studied in recent years. To improve the accuracy, the reconstruction methods have become more and more complicated because of the increase in the model parameters and sample points. However, there is another factor that affects the reconstruction accuracy, the position of the sample points, which has not been studied. A reasonable selection of the sample points could improve the signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving the accuracy and decreasing the workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel-subclustering that distills groupings of the sample data and produces a representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function is a generalization of the distance metric obtained by mapping the data that are non-separable in the original space into homogeneous groups in the high-dimensional space. The method obtained the best results compared with the other three methods in the simulation. (paper)
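
    The kernel-induced distance mentioned above has a simple closed form in the implicit feature space: d(x, y)² = k(x, x) - 2 k(x, y) + k(y, y). The sketch below evaluates it for a Gaussian (RBF) kernel; the kernel choice and bandwidth are assumptions, since the abstract does not state them.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_distance(x, y, kernel=rbf_kernel):
    """Distance induced by the kernel's implicit feature map:
    d(x, y)^2 = k(x, x) - 2 k(x, y) + k(y, y)."""
    return np.sqrt(kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y))

a = np.array([0.2, 1.3])
b = np.array([0.9, 0.4])
print("Euclidean distance:     ", round(float(np.linalg.norm(a - b)), 4))
print("kernel-induced distance:", round(float(kernel_distance(a, b)), 4))
```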

  15. Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems

    Science.gov (United States)

    Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros

    2015-04-01

    surface of an M-dimensional, unit-radius hyper-sphere, (ii) relocating the N points on a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields. In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
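
    Of the stratified schemes compared above, Latin hypercube sampling is the easiest to sketch: each of the N equal-probability strata of every marginal is sampled exactly once and the columns are permuted independently. The code below is a generic LHS for independent uniform margins, not the study's implementation (which further transforms the points onto hyper-spheres and hyper-ellipsoids).

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Generate a Latin hypercube sample on the unit hypercube [0, 1]^n_dims."""
    rng = np.random.default_rng(seed)
    # One uniform draw inside each of the n_samples equal-width strata, per dimension
    u = rng.random((n_samples, n_dims))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    # Independently permute the strata in each dimension
    for d in range(n_dims):
        strata[:, d] = rng.permutation(strata[:, d])
    return strata

sample = latin_hypercube(10, 2, seed=5)
print(sample.round(3))
```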

  16. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the existing error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through hybrid pulse power characterization tests. The three models are evaluated, and the SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.
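
    For readers unfamiliar with the equivalent-circuit models named above, the sketch below simulates a first-order Thevenin model in discrete time (series resistance R0 plus one RC branch). The parameter values and the OCV curve are invented placeholders, not values identified from any cell or from the study.

```python
import numpy as np

# Assumed Thevenin-model parameters (placeholders, not identified values)
R0, R1, C1 = 0.010, 0.015, 2400.0      # ohm, ohm, farad
CAPACITY_AS = 2.5 * 3600.0             # cell capacity in ampere-seconds
DT = 1.0                               # time step (s)

def ocv(soc):
    """Toy open-circuit-voltage curve as a function of SOC (placeholder)."""
    return 3.2 + 0.9 * soc

def simulate(current, soc0=0.9):
    """Terminal voltage of a first-order Thevenin model under a current profile
    (positive current = discharge)."""
    soc, u1, v = soc0, 0.0, []
    a = np.exp(-DT / (R1 * C1))
    for i in current:
        u1 = a * u1 + R1 * (1.0 - a) * i          # RC-branch polarization voltage
        soc -= i * DT / CAPACITY_AS               # coulomb counting
        v.append(ocv(soc) - u1 - R0 * i)          # terminal voltage
    return np.array(v)

voltage = simulate(np.full(600, 1.25))            # 10 min of 1.25 A discharge
print(voltage[:5].round(4), "...", voltage[-1].round(4))
```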

  17. Elaboration of austenitic stainless steel samples with bimodal grain size distributions and investigation of their mechanical behavior

    Science.gov (United States)

    Flipon, B.; de la Cruz, L. Garcia; Hug, E.; Keller, C.; Barbe, F.

    2017-10-01

    Samples of 316L austenitic stainless steel with bimodal grain size distributions are elaborated using two distinct routes. The first one is based on powder metallurgy using spark plasma sintering of two powders with different particle sizes. The second route applies the reverse-annealing method: it consists in inducing martensitic phase transformation by plastic strain and further annealing in order to obtain two austenitic grain populations with different sizes. Microstructural analyses reveal that both methods are suitable to generate significant grain size contrast and to control this contrast according to the elaboration conditions. Mechanical properties under tension are then characterized for different grain size distributions. Crystal plasticity finite element modelling is further applied in a configuration of bimodal distribution to analyse the role played by coarse grains within a matrix of fine grains, considering not only their volume fraction but also their spatial arrangement.

  18. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
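
    The classical method-of-moments (Matheron) variogram estimator referenced above can be written down in a few lines; the sketch below bins squared increments by separation distance for 1-D sample locations. The data and bin choices are arbitrary illustrations, not the study's simulated fields.

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 100, 150))                    # sampling locations (m)
z = np.sin(x / 15.0) + 0.3 * rng.normal(size=x.size)     # hypothetical throughfall values

def empirical_variogram(x, z, bin_edges):
    """Matheron method-of-moments estimator:
    gamma(h) = 1 / (2 N(h)) * sum (z_i - z_j)^2 over pairs with |x_i - x_j| in the bin."""
    dists = np.abs(x[:, None] - x[None, :])
    sq_incr = (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(x.size, k=1)                    # unique pairs only
    d, g = dists[iu], sq_incr[iu]
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (d >= lo) & (d < hi)
        gamma.append(0.5 * g[m].mean() if m.any() else np.nan)
    return np.array(gamma)

edges = np.arange(0, 55, 5)
print(empirical_variogram(x, z, edges).round(3))
```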

  19. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    International Nuclear Information System (INIS)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A.; Caloba, L.P.; Mery, D.

    2004-01-01

    This work is a study to estimate the accuracy of classification of the main classes of weld defects detected by radiography test, such as: undercut, lack of penetration, porosity, slag inclusion, crack or lack of fusion. To carry out this work non-linear pattern classifiers were developed, using neural networks, and the largest number of radiographic patterns as possible was used as well as statistical inference techniques of random selection of samples with and without repositioning (bootstrap) in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)

  20. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A. [Federal Univ. of Rio de Janeiro, Dept., of Metallurgical and Materials Engineering, Rio de Janeiro (Brazil); Caloba, L.P. [Federal Univ. of Rio de Janeiro, Dept., of Electrical Engineering, Rio de Janeiro (Brazil); Mery, D. [Pontificia Unversidad Catolica de Chile, Escuela de Ingenieria - DCC, Dept. de Ciencia de la Computacion, Casilla, Santiago (Chile)

    2004-07-01

    This work is a study to estimate the accuracy of classification of the main classes of weld defects detected by radiography test, such as: undercut, lack of penetration, porosity, slag inclusion, crack or lack of fusion. To carry out this work non-linear pattern classifiers were developed, using neural networks, and the largest number of radiographic patterns as possible was used as well as statistical inference techniques of random selection of samples with and without repositioning (bootstrap) in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)
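
    The bootstrap accuracy-estimation step described in this abstract can be sketched generically: a classifier is repeatedly refit on resamples drawn with replacement and evaluated on the held-out ("out-of-bag") patterns. The classifier and data below are stand-ins (scikit-learn's MLP on synthetic features), not the authors' network or weld-defect data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X, y = make_classification(n_samples=600, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)

accuracies = []
for _ in range(30):                                   # bootstrap replications
    idx = rng.integers(0, len(X), size=len(X))        # resample with replacement
    oob = np.setdiff1d(np.arange(len(X)), idx)        # out-of-bag patterns
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
    clf.fit(X[idx], y[idx])
    accuracies.append(clf.score(X[oob], y[oob]))

print(f"bootstrap accuracy: {np.mean(accuracies):.3f} +/- {np.std(accuracies):.3f}")
```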

  1. An R package for spatial coverage sampling and random sampling from compact geographical strata by k-means

    NARCIS (Netherlands)

    Walvoort, D.J.J.; Brus, D.J.; Gruijter, de J.J.

    2010-01-01

    Both for mapping and for estimating spatial means of an environmental variable, the accuracy of the result will usually be increased by dispersing the sample locations so that they cover the study area as uniformly as possible. We developed a new R package for designing spatial coverage samples for
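
    The spatial-coverage idea behind the package (dispersing sample locations by clustering candidate grid points into compact strata with k-means and sampling the cluster centres) can be sketched in Python as well; this is a generic illustration, not the R package's algorithm in detail.

```python
import numpy as np
from sklearn.cluster import KMeans

# Candidate locations: a fine grid over a (hypothetical) rectangular study area
gx, gy = np.meshgrid(np.linspace(0, 1000, 60), np.linspace(0, 600, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])

n_samples = 25
km = KMeans(n_clusters=n_samples, n_init=10, random_state=0).fit(grid)

# Spatial coverage sample: the cluster centres (compact geographical strata);
# for random sampling within strata, one point per cluster could be drawn instead.
coverage_sample = km.cluster_centers_
print(coverage_sample[:5].round(1))
```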

  2. Estimating cyclopoid copepod species richness and geographical distribution (Crustacea) across a large hydrographical basin: comparing between samples from water column (plankton) and macrophyte stands

    Directory of Open Access Journals (Sweden)

    Gilmar Perbiche-Neves

    2014-06-01

    Full Text Available Species richness and geographical distribution of Cyclopoida freshwater copepods were analyzed along the "La Plata" River basin. Ninety-six samples were taken from 24 sampling sites, twelve sites for zooplankton in open waters and twelve sites for zooplankton within macrophyte stands, including reservoirs and lotic stretches. There were, on average, three species per sample in the plankton compared to five per sample in macrophytes. Six species were exclusive to the plankton, 10 to macrophyte stands, and 17 were common to both. Only one species was found in similar proportions in plankton and macrophytes, while five species were widely found in plankton, and thirteen in macrophytes. The distinction between species from open water zooplankton and macrophytes was supported by nonmetric multidimensional analysis. There was no distinct pattern of endemicity within the basin, and double sampling contributes to this result. This lack of sub-regional faunal differentiation is in accordance with other studies that have shown that cyclopoids generally have wide geographical distribution in the Neotropics and that some species there are cosmopolitan. This contrasts with other freshwater copepods such as Calanoida and some Harpacticoida. We conclude that sampling plankton and macrophytes together provided a more accurate estimate of the richness and geographical distribution of these organisms than sampling in either one of those zones alone.

  3. Using silver nano particles for sampling of toxic mercury vapors from industrial air sample

    Directory of Open Access Journals (Sweden)

    M. Osanloo

    2014-05-01

    Conclusion: The presented adsorbent is very useful for sampling trace amounts of mercury vapors from air. Moreover, it can be regenerated easily and is suitable for sampling at 25 to 70 °C. Due to oxidation of silver and the resulting reduction in the uptake capacity of the nanoparticles, an oven temperature of 245 °C is used for the recovery of metallic silver. The low amount of adsorbent, high absorbency, high repeatability of sampling, low cost and high accuracy are among the advantages of the presented method.

  4. Adaptive sampling based on the cumulative distribution function of order statistics to delineate heavy-metal contaminated soils using kriging

    International Nuclear Information System (INIS)

    Juang, K.-W.; Lee, D.-Y.; Teng, Y.-L.

    2005-01-01

    Correctly classifying 'contaminated' areas in soils, based on the threshold for a contaminated site, is important for determining effective clean-up actions. Pollutant mapping by means of kriging is increasingly being used for the delineation of contaminated soils. However, those areas where the kriged pollutant concentrations are close to the threshold have a high possibility for being misclassified. In order to reduce the misclassification due to the over- or under-estimation from kriging, an adaptive sampling using the cumulative distribution function of order statistics (CDFOS) was developed to draw additional samples for delineating contaminated soils, while kriging. A heavy-metal contaminated site in Hsinchu, Taiwan was used to illustrate this approach. The results showed that compared with random sampling, adaptive sampling using CDFOS reduced the kriging estimation errors and misclassification rates, and thus would appear to be a better choice than random sampling, as additional sampling is required for delineating the 'contaminated' areas. - A sampling approach was derived for drawing additional samples while kriging

  5. Mapping geomorphic process domains to predict hillslope sediment size distribution using remotely-sensed data and field sampling, Inyo Creek, California

    Science.gov (United States)

    Leclere, S.; Sklar, L. S.; Genetti, J. R.

    2014-12-01

    The size distribution of sediments produced on hillslopes and supplied to channels depends on the geomorphic processes that weather, detach and transport rock fragments down slopes. Little in the way of theory or data is available to predict patterns in hillslope size distributions at the catchment scale from topographic and geologic maps. Here we use aerial imagery and a variety of remote sensing techniques to map and categorize geomorphic landscape units (GLUs) by inferred sediment production process regime, across the steep mountain catchment of Inyo Creek, eastern Sierra Nevada, California. We also use field measurements of particle size and local geomorphic attributes to test and refine GLU determinations. Across the 2 km of relief in this catchment, landcover varies from bare bedrock cliffs at higher elevations to vegetated, regolith-covered convex slopes at lower elevations. Hillslope gradient could provide a simple index of sediment production process, from rock spallation and landsliding at highest slopes, to tree-throw and other disturbance-driven soil production processes at lowest slopes. However, many other attributes are needed for a more robust predictive model, including elevation, curvature, aspect, drainage area, and color. We combine tools from ArcGIS, ERDAS Imagine and Envi with groundtruthing field work to find an optimal combination of attributes for defining sediment production GLUs. Key challenges include distinguishing: weathered from freshly eroded bedrock, boulders from intact bedrock, and landslide deposits from talus slopes. We take advantage of emerging technologies that provide new ways of conducting fieldwork and comparing field data to mapping solutions. In particular, cellphone GPS is approaching the accuracy of dedicated GPS systems and the ability to geo-reference photos simplifies field notes and increases accuracy of later map creation. However, the predictive power of the GLU mapping approach is limited by inherent uncertainty

  6. Study on the contents of trace rare earth elements and their distribution in wheat and rice samples by RNAA

    International Nuclear Information System (INIS)

    Sun Jingxin; Zhao Hang; Wang Yuqi

    1994-01-01

    The concentrations of 8 REE (La, Ce, Nd, Sm, Eu, Tb, Yb and Lu) in wheat and rice samples have been determined by RNAA. The contents and distributions of REE in each part of the plants (i.e. root, leaf, stem, husk and seed) and in their host soils were studied, covering both samples to which rare earth elements had been applied in farming and control samples. The effects of applying rare earths on the uptake of REE by the plants, and the implications of REE accumulation in the grains for human health, were also discussed. (author) 9 refs.; 4 figs.; 4 tabs

  7. Use of neutron activation techniques for studying elements distribution. Applications in geochemistry, ecology and technology

    International Nuclear Information System (INIS)

    Flitsyan, E.S.

    1997-01-01

    The essence of the radiography method, whose sensitivity lies within a range of 10^-2 - 10^-7 g·mm^-2 and whose resolution extends from 0.01 to 100 μm, consists in determining the distributions of ionising irradiation sources on a surface under study by using the two-dimensional field of their irradiation registered by means of a photoemulsion or a dielectric track detector. Statistical analysis and computer processing of the radiographic images have allowed one to solve the two main tasks of the method, namely, the determination of the true distributions of ionising irradiation sources on the surface of an object under study and the quantitative estimation, within a given accuracy, of the content of an element under study in a sample. A complex of radiographic techniques based on the registration of the secondary irradiation of the activated nuclei, of the instantaneous products of the nuclear reactions, and of the fission fragments of the transuranium elements has been developed to solve a series of problems in geology and geochemistry, as well as for the analysis of technological and environmental samples. Upon designing and constructing special comparison standards modelling a sample in composition and structure, quantitative data on the distributions of more than forty elements in rock sections and those of the main microcomponents, including oxygen, in superconducting ceramics samples have also been obtained. (author)

  8. How social information can improve estimation accuracy in human groups.

    Science.gov (United States)

    Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-11-21

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.
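
    As a rough illustration of the ideas in this record (not the authors' calibrated model), the sketch below draws heavy-tailed log-estimates from a Cauchy distribution, lets each simulated subject move partway toward the group median, and compares collective error before and after the social update; the true value, scale and sensitivity parameter are all assumptions.

```python
# Toy sketch: Cauchy-distributed log-estimates, a simple move toward the
# group median as "social information", and collective error before/after.
import numpy as np

rng = np.random.default_rng(0)

true_value = 4000.0                      # hypothetical quantity to estimate
n_subjects = 200
# Individual log10-estimates scattered around the truth with heavy tails.
log_est = np.log10(true_value) + 0.3 * rng.standard_cauchy(n_subjects)

def collective_error(log_estimates, true_log):
    """Median absolute deviation of log-estimates from the true log-value."""
    return np.median(np.abs(log_estimates - true_log))

# Social update: each subject moves a fraction s toward the group median.
s = 0.5                                   # assumed sensitivity to social influence
group_median = np.median(log_est)
log_est_after = (1 - s) * log_est + s * group_median

print("error before social info:", collective_error(log_est, np.log10(true_value)))
print("error after social info: ", collective_error(log_est_after, np.log10(true_value)))
```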

  9. Calculation of life distributions, in particular Weibull distributions, from operational observations

    International Nuclear Information System (INIS)

    Rauhut, J.

    1982-01-01

    Established methods are presented by which life distributions of machine elements can be determined on the basis of laboratory experiments and operational observations. Practical observations are given special attention, as results estimated by conventional methods have not been accurate enough. As an introduction, the stochastic life concept, the general method of determining life distributions, various sampling methods, and the Weibull distribution are explained. Further, possible life testing schedules and maximum-likelihood estimates are discussed for the complete-sample case and for censored sampling without replacement in laboratory experiments. Finally, censored sampling with replacement in laboratory experiments is discussed; it is shown how suitable parameter estimates can be obtained for given life distributions by means of the maximum-likelihood method. (orig./RW) [de]
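
    Since the record discusses maximum-likelihood estimation of Weibull life distributions from censored observations, a minimal sketch of the likelihood construction may help; the failure and censoring times below are invented, and the approach (direct maximization of a right-censored Weibull log-likelihood) is a generic illustration rather than the specific testing schedules treated in the report.

```python
# Maximum-likelihood fit of a Weibull life distribution from right-censored
# operational data: observed failure times plus units still running.
import numpy as np
from scipy.optimize import minimize

failures = np.array([1200., 1800., 2300., 3100., 4100.])   # failure times (h)
censored = np.array([2500., 3500., 5000., 5000.])           # still running (h)

def neg_log_lik(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    # Weibull log-density for the observed failures
    z_f = failures / scale
    log_f = np.log(shape / scale) + (shape - 1) * np.log(z_f) - z_f**shape
    # log-survival contribution for the right-censored units
    log_s = -(censored / scale) ** shape
    return -(log_f.sum() + log_s.sum())

res = minimize(neg_log_lik, x0=[1.5, 3000.0], method="Nelder-Mead")
shape_hat, scale_hat = res.x
print(f"Weibull shape ~ {shape_hat:.2f}, scale ~ {scale_hat:.0f} h")
```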

  10. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-01-01

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by "attributes", involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications are clearly described, and the process is put in a form that allows systematic generalization
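
    For orientation only, the snippet below applies the attribute-sampling approximation commonly used in safeguards planning, n ≈ N(1 − β^(1/r)), to relate sample size to detection probability; it does not reproduce the multi-level relationships developed by the authors, and the numbers are hypothetical.

```python
# Standard attribute-sampling approximation: choose n so that at least one of
# r falsified items among N is detected with probability 1 - beta.
import math

def attribute_sample_size(N, r, beta):
    """Approximate sample size n ~ N * (1 - beta**(1/r))."""
    return math.ceil(N * (1.0 - beta ** (1.0 / r)))

# Example: 500 items, assume 20 would have to be falsified to divert a goal
# quantity, and require 95% detection probability.
print(attribute_sample_size(N=500, r=20, beta=0.05))   # -> about 70 items
```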

  11. Asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size.

    Science.gov (United States)

    Chen, Hua; Chen, Kun

    2013-07-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by T_m the m-th coalescent time, when m + 1 lineages coalesce into m lineages, and A_n(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, A_n(t), and the coalescence times, T_m, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n - A_n(t) follows a Poisson distribution, and as m → n, the scaled time n(n-1)T_m/(2N(0)) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference.
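
    A quick numerical check of the gamma approximation quoted above, assuming a constant population size N (the simplest case of the paper's framework): simulate standard coalescent waiting times and compare the scaled early coalescence time with a Gamma(n − m) distribution.

```python
# Simulate coalescent intervals (while k lineages remain, the waiting time is
# exponential with rate k(k-1)/(2N)) and compare the scaled T_m, for m close
# to n, with the Gamma(n - m) approximation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N, n, m, reps = 10_000, 100, 95, 20_000

scaled_Tm = np.empty(reps)
for i in range(reps):
    t = 0.0
    for k in range(n, m, -1):                      # k lineages -> k-1 lineages
        t += rng.exponential(2.0 * N / (k * (k - 1)))
    scaled_Tm[i] = n * (n - 1) * t / (2.0 * N)

gamma_approx = stats.gamma(a=n - m)                # shape n - m, unit scale
print("simulated mean/var:", scaled_Tm.mean(), scaled_Tm.var())
print("gamma mean/var:    ", gamma_approx.mean(), gamma_approx.var())
```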

  12. Role of interoceptive accuracy in topographical changes in emotion-induced bodily sensations

    Science.gov (United States)

    Jung, Won-Mo; Ryu, Yeonhee; Lee, Ye-Seul; Wallraven, Christian; Chae, Younbyoung

    2017-01-01

    The emotion-associated bodily sensation map is composed of a specific topographical distribution of bodily sensations to categorical emotions. The present study investigated whether or not interoceptive accuracy was associated with topographical changes in this map following emotion-induced bodily sensations. This study included 31 participants who observed short video clips containing emotional stimuli and then reported their sensations on the body map. Interoceptive accuracy was evaluated with a heartbeat detection task and the spatial patterns of bodily sensations to specific emotions, including anger, fear, disgust, happiness, sadness, and neutral, were visualized using Statistical Parametric Mapping (SPM) analyses. Distinct patterns of bodily sensations were identified for different emotional states. In addition, positive correlations were found between the magnitude of sensation in emotion-specific regions and interoceptive accuracy across individuals. A greater degree of interoceptive accuracy was associated with more specific topographical changes after emotional stimuli. These results suggest that the awareness of one’s internal bodily states might play a crucial role as a required messenger of sensory information during the affective process. PMID:28877218

  13. Optical diffraction tomography: accuracy of an off-axis reconstruction

    Science.gov (United States)

    Kostencka, Julianna; Kozacki, Tomasz

    2014-05-01

    Optical diffraction tomography is an increasingly popular method that allows for reconstruction of the three-dimensional refractive index distribution of semi-transparent samples using multiple measurements of the optical field transmitted through the sample for various illumination directions. The process of assembling the angular measurements is usually performed with one of two methods: the filtered backprojection (FBPJ) or the filtered backpropagation (FBPP) tomographic reconstruction algorithm. The former approach, although conceptually very simple, provides an accurate reconstruction for object regions located close to the plane of focus. However, since FBPJ ignores diffraction, its use for spatially extended structures is questionable. According to the theory of scattering, more precise restoration of a 3D structure should be achieved with the FBPP algorithm, which, unlike the former approach, incorporates diffraction. It is believed that with this method one can obtain a high-accuracy reconstruction in a large measurement volume exceeding the depth of focus of an imaging system. However, some studies have suggested that a considerable improvement of the FBPP results can be achieved by first propagating the transmitted fields back to the centre of the object. This, supposedly, enables a reduction of errors due to the approximated diffraction formulas used in FBPP. In our view this finding casts doubt on the quality of the FBPP reconstruction in regions far from the rotation axis. The objective of this paper is to investigate the limitations of the FBPP algorithm in terms of off-axis reconstruction and compare its performance with the FBPJ approach. Moreover, in this work we propose some modifications to the FBPP algorithm that allow for more precise restoration of a sample structure in off-axis locations. The research is based on extensive numerical simulations supported with the wave-propagation method.
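
    For readers unfamiliar with filtered backprojection, the short sketch below reconstructs a test object from simulated projections with scikit-image's radon/iradon; like FBPJ it ignores diffraction entirely, and the phantom and angular sampling are assumptions rather than the paper's optical setup.

```python
# Filtered-backprojection (FBPJ-style) reconstruction of a simulated sinogram.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)            # small test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)   # illumination angles

sinogram = radon(image, theta=theta)                   # simulated projections
reconstruction = iradon(sinogram, theta=theta)         # ramp-filtered backprojection

error = np.sqrt(np.mean((reconstruction - image) ** 2))
print("RMS reconstruction error:", error)
```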

  14. Evaluating a Bayesian approach to improve accuracy of individual photographic identification methods using ecological distribution data

    Directory of Open Access Journals (Sweden)

    Richard Stafford

    2011-04-01

    Full Text Available Photographic identification of individual organisms can be possible from natural body markings. Data from photo-ID can be used to estimate important ecological and conservation metrics such as population sizes, home ranges or territories. However, poor quality photographs or less well-studied individuals can result in a non-unique ID, potentially confounding several similar-looking individuals. Here we present a Bayesian approach that uses known data about previous sightings of individuals at specific sites as priors to help assess the problems of obtaining a non-unique ID. Using a simulation of individuals with different confidence of correct ID, we evaluate the accuracy of the Bayesian-modified (posterior) probabilities. However, in most cases, the accuracy of identification decreases. Although this technique is unsuccessful, it does demonstrate the importance of computer simulations in testing such hypotheses in ecology.
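
    The Bayesian update described in this record can be illustrated with a toy calculation in which a photo-matching likelihood for each candidate individual is combined with a prior built from previous sightings at the site; all names and numbers below are hypothetical.

```python
# Toy Bayesian photo-ID: posterior over candidate individuals given a photo
# at a site, combining match likelihoods with a site-specific sighting prior.
import numpy as np

candidates = ["A", "B", "C"]

# Likelihood of the observed markings given each candidate (photo matching).
likelihood = np.array([0.60, 0.55, 0.50])

# Prior: proportion of previous sightings of each candidate at this site.
sightings_at_site = np.array([12, 2, 1])
prior = sightings_at_site / sightings_at_site.sum()

posterior = likelihood * prior
posterior /= posterior.sum()

for name, p in zip(candidates, posterior):
    print(f"P(individual {name} | photo, site) = {p:.2f}")
```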

  15. A Sustainability-Oriented Multiobjective Optimization Model for Siting and Sizing Distributed Generation Plants in Distribution Systems

    Directory of Open Access Journals (Sweden)

    Guang Chen

    2013-01-01

    Full Text Available This paper proposes a sustainability-oriented multiobjective optimization model for siting and sizing DG plants in distribution systems. Life cycle exergy (LCE) is used as a unified indicator of the entire system’s environmental sustainability, and it is optimized as an objective function in the model. The other two objective functions are economic cost and expected power loss. Chance constraints are used to control the operation risks caused by the uncertain power loads and renewable energies. A semilinearized simulation method is proposed and combined with the Latin hypercube sampling (LHS) method to improve the efficiency of the probabilistic load flow (PLF) analysis, which is repeatedly performed to verify the chance constraints. A numerical study based on the modified IEEE 33-node system is performed to verify the proposed method. Numerical results show that the proposed semilinearized simulation method reduces the calculation time of the PLF analysis by about 93.3% while guaranteeing satisfactory accuracy. The results also indicate that the benefits of DG plants for environmental sustainability can be effectively reflected by the proposed model, which helps the planner make rational decisions towards sustainable development of the distribution system.
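
    A minimal sketch of the Latin hypercube sampling step used to speed up probabilistic load flow studies of this kind; the load and wind distributions and the quadratic "loss" function below are placeholders, not the paper's semilinearized network model.

```python
# Latin hypercube sampling of two uncertain inputs, mapped through assumed
# marginal distributions, then pushed through a placeholder "load flow".
import numpy as np
from scipy.stats import qmc, norm, weibull_min

n_samples, n_vars = 500, 2
sampler = qmc.LatinHypercube(d=n_vars, seed=42)
u = sampler.random(n_samples)                        # uniform samples in [0, 1)^2

load = norm(loc=3.7, scale=0.4).ppf(u[:, 0])         # MW, assumed normal load
wind = weibull_min(c=2.0, scale=1.5).ppf(u[:, 1])    # MW, assumed Weibull wind

def approx_power_loss(load_mw, wind_mw):
    """Placeholder for the (semi)linearized load-flow evaluation."""
    net = load_mw - wind_mw
    return 0.02 * net ** 2                           # toy quadratic loss model

losses = approx_power_loss(load, wind)
print("expected loss (MW):", losses.mean())
print("95th percentile   :", np.percentile(losses, 95))
```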

  16. A Distributed Wireless Camera System for the Management of Parking Spaces.

    Science.gov (United States)

    Vítek, Stanislav; Melničuk, Petr

    2017-12-28

    The importance of detection of parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine occupancy of the parking space based on the information from multiple cameras. The proposed system uses small camera modules based on Raspberry Pi Zero and a computationally efficient occupancy-detection algorithm based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at the rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. Reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces.

  17. A Distributed Wireless Camera System for the Management of Parking Spaces

    Directory of Open Access Journals (Sweden)

    Stanislav Vítek

    2017-12-01

    Full Text Available The importance of detection of parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine occupancy of the parking space based on the information from multiple cameras. The proposed system uses small camera modules based on Raspberry Pi Zero and a computationally efficient occupancy-detection algorithm based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at the rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. Reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces.
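
    A compact sketch of the HOG-plus-SVM occupancy classifier described in the two records above, using scikit-image and scikit-learn; the patch size, training data and labels are stand-ins rather than the authors' configuration.

```python
# HOG features per parking-space patch, fed to a linear SVM classifier.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

PATCH_SHAPE = (64, 64)   # assumed size of one parking-space image patch

def hog_features(gray_patch):
    patch = resize(gray_patch, PATCH_SHAPE, anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Placeholder training data: grayscale patches and 0/1 occupancy labels.
rng = np.random.default_rng(0)
patches = [rng.random((80, 60)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.array([hog_features(p) for p in patches])
clf = LinearSVC(C=1.0).fit(X, labels)

# Classify a new patch (1 = occupied, 0 = free under the assumed labeling).
print(clf.predict(hog_features(rng.random((80, 60))).reshape(1, -1)))
```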

  18. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    Science.gov (United States)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to carry out lunar surface sampling and to return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a rotating camera platform. Optical images of the sampling area are obtained by PCAM as two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. Lunar terrain can then be reconstructed based on photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. The observation program and the specific methods for solving the installation parameters are then introduced. The accuracy of the parameter solution is analyzed using observations obtained in the PCAM scientific validation experiment, which is used to test the authenticity of the PCAM detection process, ground data processing methods, product quality and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. Thus the measurement methods and parameter accuracy studied in this paper meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  19. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Energy Technology Data Exchange (ETDEWEB)

    Cardil Forradellas, A.; Molina Terrén, D.M.; Oliveres, J.; Castellnou, M.

    2016-07-01

    Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson’s SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass. (Author)
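
    The copula construction used in this record can be sketched as follows: fit Weibull marginals to diameters and heights, map the data to normal scores through the fitted marginal CDFs, and couple them with a Gaussian (Normal) copula. The Plackett copula used in the paper is not available in SciPy, so the copula family here is a stand-in, and the data are simulated rather than the Eucalyptus plot measurements.

```python
# Fit Weibull marginals for diameter (d) and height (h), then estimate a
# Gaussian copula on the normal scores and evaluate the joint log-likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
d = stats.weibull_min(c=3.0, scale=20.0).rvs(200, random_state=rng)   # diameters (cm)
h = 1.3 + 0.9 * d + rng.normal(0, 2.0, size=200)                      # heights (m)

# 1. Fit univariate Weibull marginals (location fixed at 0 for simplicity).
cd, _, sd = stats.weibull_min.fit(d, floc=0)
ch, _, sh = stats.weibull_min.fit(h, floc=0)

# 2. Transform data to normal scores through the fitted marginal CDFs.
z = np.column_stack([
    stats.norm.ppf(stats.weibull_min.cdf(d, cd, scale=sd)),
    stats.norm.ppf(stats.weibull_min.cdf(h, ch, scale=sh)),
])

# 3. The Gaussian copula parameter is the correlation of the normal scores.
rho = np.corrcoef(z.T)[0, 1]
print("estimated copula correlation:", round(rho, 3))

# Joint log-likelihood = marginal log-densities + Gaussian copula log-density.
cop = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
loglik = (stats.weibull_min.logpdf(d, cd, scale=sd).sum()
          + stats.weibull_min.logpdf(h, ch, scale=sh).sum()
          + (cop.logpdf(z) - stats.norm.logpdf(z).sum(axis=1)).sum())
print("joint log-likelihood:", round(loglik, 1))
```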

  20. Estimating the accuracy of geographical imputation

    Directory of Open Access Journals (Sweden)

    Boscoe Francis P

    2008-01-01

    Full Text Available Abstract Background To reduce the number of non-geocoded cases, researchers and organizations sometimes include cases geocoded to postal code centroids along with cases geocoded with the greater precision of a full street address. Some analysts then use the postal code to assign information to the cases from finer-level geographies such as a census tract. Assignment is commonly completed using either a postal centroid or a geographical imputation method which assigns a location by using both the demographic characteristics of the case and the population characteristics of the postal delivery area. To date no systematic evaluation of geographical imputation methods ("geo-imputation") has been completed. The objective of this study was to determine the accuracy of census tract assignment using geo-imputation. Methods Using a large dataset of breast, prostate and colorectal cancer cases reported to the New Jersey Cancer Registry, we determined how often cases were assigned to the correct census tract using alternate strategies of demographic-based geo-imputation, and using assignments obtained from postal code centroids. Assignment accuracy was measured by comparing the tract assigned with the tract originally identified from the full street address. Results Assigning cases to census tracts using the race/ethnicity population distribution within a postal code resulted in more correctly assigned cases than when using postal code centroids. The addition of age characteristics increased the match rates even further. Match rates were highly dependent on both the geographic distribution of race/ethnicity groups and population density. Conclusion Geo-imputation appears to offer some advantages and no serious drawbacks as compared with the alternative of assigning cases to census tracts based on postal code centroids. For a specific analysis, researchers will still need to consider the potential impact of geocoding quality on their results and evaluate
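
    A toy version of the demographic geo-imputation evaluated in this record: a case known only at the postal-code level is assigned to a census tract with probability proportional to that tract's population share for the case's race/ethnicity group; tract names and counts are invented for illustration.

```python
# Demographic geo-imputation: weighted random tract assignment within a
# postal code, using population shares by race/ethnicity group.
import numpy as np

rng = np.random.default_rng(7)

# Population counts by tract within one postal code, per race/ethnicity group.
tracts = ["tract_A", "tract_B", "tract_C"]
population = {
    "white":    np.array([4000, 1500,  500]),
    "black":    np.array([ 300, 2500, 1200]),
    "hispanic": np.array([ 600,  900, 2100]),
}

def impute_tract(group):
    counts = population[group]
    probs = counts / counts.sum()
    return rng.choice(tracts, p=probs)

# Impute tracts for a few cases with known race/ethnicity but no street address.
for group in ["white", "black", "hispanic"]:
    print(group, "->", impute_tract(group))
```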