WorldWideScience

Sample records for sample observed statistics

  1. Statistical distribution sampling

    Johnson, E. S.

    1975-01-01

    Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.

  2. Contributions to sampling statistics

    Conti, Pier; Ranalli, Maria

    2014-01-01

    This book contains a selection of the papers presented at the ITACOSM 2013 Conference, held in Milan in June 2013. ITACOSM is the biennial meeting of the Survey Sampling Group S2G of the Italian Statistical Society, intended as an international forum for scientific discussion of developments in the theory and application of survey sampling methodologies in the human and natural sciences. The book gathers research papers carefully selected from both invited and contributed sessions of the conference. As a whole, the book is a relevant contribution to various key aspects of sampling methodology and techniques: it deals with hot topics in sampling theory, such as calibration, quantile regression and multiple-frame surveys, and with innovative methodologies in important areas of both sampling theory and applications. Contributions cut across current sampling methodologies such as interval estimation for complex samples, randomized responses, bootstrap, weighting, modeling, imputation ...

  3. Sampling, Probability Models and Statistical Reasoning: Statistical Inference

    Mohan Delampady; V R Padmawar

    1996-05-01

    General Article. Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp. 49-58.

  4. Statistical sampling strategies

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized.
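
    As an illustration of one of the Monte Carlo selection methods named above, here is a minimal Python sketch of Latin hypercube sampling on the unit hypercube (the function name and the uniform-marginal assumption are ours, not the paper's):

      import numpy as np

      def latin_hypercube(n_samples, n_params, seed=None):
          """Latin hypercube sample: each parameter's range is split into
          n_samples equal strata, one point is drawn per stratum, and the
          strata are randomly permuted per parameter, so every 1-D margin
          is evenly covered."""
          rng = np.random.default_rng(seed)
          u = (rng.random((n_samples, n_params))
               + np.arange(n_samples)[:, None]) / n_samples
          for j in range(n_params):          # decouple the columns
              u[:, j] = rng.permutation(u[:, j])
          return u

      # Example: 10 parameter sets for a 3-parameter model.
      print(latin_hypercube(10, 3, seed=42))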

  5. Ecotoxicology statistical sampling

    Saona, G.

    2012-01-01

    This presentation introduces general concepts in ecotoxicological sampling design, such as the distribution of organic or inorganic contaminants, microbiological contamination, and the choice of sampling positions in an ecosystem for ecotoxicological bioassays.

  6. Statistical sampling plans

    Jaech, J.L.

    1984-01-01

    In auditing and in inspection, one selects a number of items by some set of procedures and performs measurements which are compared with the operator's values. This session considers the problem of how to select the samples to be measured and what kinds of measurements to make. In the inspection situation, the ultimate aim is to independently verify the operator's material balance. The effectiveness of the sampling plan in achieving this objective is briefly considered. The discussion focuses on the model plant.

  7. Statistical sampling for holdup measurement

    Picard, R.R.; Pillay, K.K.S.

    1986-01-01

    Nuclear materials holdup is a serious problem in many operating facilities. Estimating amounts of holdup is important for materials accounting and, sometimes, for process safety. Clearly, measuring holdup in all pieces of equipment is not a viable option in terms of time, money, and radiation exposure to personnel. Furthermore, 100% measurement is not only impractical but unnecessary for developing estimated values. Principles of statistical sampling are valuable in the design of cost-effective holdup monitoring plans and in quantifying uncertainties in holdup estimates. The purpose of this paper is to describe those principles and to illustrate their use.

  8. Evaluation of observables in statistical multifragmentation theories

    Cole, A.J.

    1989-01-01

    The canonical formulation of equilibrium statistical multifragmentation is examined. It is shown that the explicit construction of observables (average values) by sampling the partition probabilities is unnecessary insofar as closed expressions in the form of recursion relations can be obtained quite easily. Such expressions may conversely be used to verify the sampling algorithms.

  9. Sample Reuse in Statistical Remodeling.

    1987-08-01

    ... as the jackknife and bootstrap, is an expansion of the functional, T(Fn), or of its distribution function or both. Frangos and Schucany (1987a) used ... accelerated bootstrap. In the same report Frangos and Schucany demonstrated the small sample superiority of that approach over the proposals that take ... higher order terms of an Edgeworth expansion into account. In a second report Frangos and Schucany (1987b) examined the small sample performance of ...

  10. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the underestimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.

  11. 42 CFR 402.109 - Statistical sampling.

    2010-10-01

    ... or caused to be presented. (b) Prima facie evidence. The results of the statistical sampling study, if based upon an appropriate sampling and computed by valid statistical methods, constitute prima... § 402.1. (c) Burden of proof. Once CMS or OIG has made a prima facie case, the burden is on the...

  12. Staging Liver Fibrosis with Statistical Observers

    Brand, Jonathan Frieman

    Chronic liver disease is a worldwide health problem, and hepatic fibrosis (HF) is one of the hallmarks of the disease. Pathology diagnosis of HF is based on textural change in the liver as a lobular collagen network develops within portal triads. The scale of collagen lobules is characteristically on the order of 1 mm, which is close to the resolution limit of in vivo Gd-enhanced MRI. In this work the methods to collect training and testing images for a Hotelling observer are covered. An observer based on local texture analysis is trained and tested using wet-tissue phantoms. The technique is used to optimize the MRI sequence based on task performance. The final method developed is a two-stage model observer to classify fibrotic and healthy tissue in both phantoms and in vivo MRI images. The first-stage observer tests for the presence of local texture. Test statistics from the first observer are used to train the second-stage observer to globally sample the local observer results. A decision on the disease class is made for an entire MRI image slice using test statistics collected from the second observer. The techniques are tested on wet-tissue phantoms and in vivo clinical patient data.

  13. Statistical sampling approaches for soil monitoring

    Brus, D.J.

    2014-01-01

    This paper describes three statistical sampling approaches for regional soil monitoring: a design-based, a model-based and a hybrid approach. In the model-based approach a space-time model is exploited to predict global statistical parameters of interest such as the space-time mean. In the hybrid ...

  14. Statistical literacy and sample survey results

    McAlevey, Lynn; Sullivan, Charles

    2010-10-01

    Sample surveys are widely used in the social sciences and business. The news media almost daily quote from them, yet they are widely misused. Using students with prior managerial experience embarking on an MBA course, we show that common sample survey results are misunderstood even by those managers who have previously done a statistics course. In general, they fare no better than managers who have never studied statistics. There are implications for teaching, especially in business schools, as well as for consulting.

  15. Statistical Symbolic Execution with Informed Sampling

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
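
    A minimal sketch of the Bayesian estimation step described above, under our own simplifying assumptions (a uniform Beta(1, 1) prior over the event probability and independent path samples); this is not the Symbolic PathFinder implementation:

      from scipy import stats

      def posterior_event_probability(hits, trials, a=1.0, b=1.0):
          """Beta-Bernoulli posterior for the probability of reaching a
          target event (e.g., an assert violation): posterior mean and a
          95% credible interval."""
          post = stats.beta(a + hits, b + trials - hits)
          return post.mean(), post.interval(0.95)

      # Example: 37 of 10,000 sampled symbolic paths reach the event.
      mean, (lo, hi) = posterior_event_probability(37, 10_000)
      print(f"P(event) ~ {mean:.4f}, 95% interval ({lo:.4f}, {hi:.4f})")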

  16. Statistical aspects of food safety sampling

    Jongenburger, I.; Besten, den H.M.W.; Zwietering, M.H.

    2015-01-01

    In food safety management, sampling is an important tool for verifying control. Sampling by nature is a stochastic process. However, uncertainty regarding results is made even greater by the uneven distribution of microorganisms in a batch of food. This article reviews statistical aspects of ...

  17. Statistical benchmark for BosonSampling

    Walschaers, Mattia; Mayer, Klaus; Buchleitner, Andreas; Kuipers, Jack; Urbina, Juan-Diego; Richter, Klaus; Tichy, Malte Christopher

    2016-01-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church–Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics that go well beyond mere bunching or anti-bunching effects. (fast track communication)

  18. Statistical sampling method for releasing decontaminated vehicles

    Lively, J.W.; Ware, J.A.

    1996-01-01

    Earth moving vehicles (e.g., dump trucks, belly dumps) commonly haul radiologically contaminated materials from a site being remediated to a disposal site. Traditionally, each vehicle must be surveyed before being released. The logistical difficulties of implementing the traditional approach on a large scale demand that an alternative be devised. A statistical method (MIL-STD-105E, "Sampling Procedures and Tables for Inspection by Attributes") for assessing product quality from a continuous process was adapted to the vehicle decontamination process. This method produced a sampling scheme that automatically compensates for and accommodates fluctuating batch sizes and changing conditions without the need to modify or rectify the sampling scheme in the field. Vehicles are randomly selected (sampled) upon completion of the decontamination process to be surveyed for residual radioactive surface contamination. The frequency of sampling is based on the expected number of vehicles passing through the decontamination process in a given period and the confidence level desired. This process has been successfully used for 1 year at the former uranium mill site in Monticello, Utah (a CERCLA regulated clean-up site). The method forces improvement in the quality of the decontamination process and results in a lower likelihood that vehicles exceeding the surface contamination standards are offered for survey. Implementation of this statistical sampling method on the Monticello Project has resulted in more efficient processing of vehicles through decontamination and radiological release, saved hundreds of hours of processing time, provided a high level of confidence that release limits are met, and improved the radiological cleanliness of vehicles leaving the controlled site.
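
    The MIL-STD-105E tables themselves are not reproduced in the abstract; as a hedged illustration of the underlying arithmetic, the sketch below computes the smallest random sample giving a desired probability of catching at least one nonconforming vehicle (a binomial approximation; the numbers are ours):

      import math

      def vehicles_to_survey(p_nonconforming, confidence):
          """Smallest n with P(at least one nonconforming vehicle in the
          sample) >= confidence, assuming each vehicle independently
          exceeds release limits with probability p_nonconforming."""
          return math.ceil(math.log(1.0 - confidence)
                           / math.log(1.0 - p_nonconforming))

      # Example: detect a 5% nonconforming rate with 95% confidence.
      print(vehicles_to_survey(0.05, 0.95))  # -> 59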

  19. Innovations in Statistical Observations of Consumer Prices

    Olga Stepanovna Oleynik

    2016-10-01

    This article analyzes the innovative changes in the methodology of statistical surveys of consumer prices. These changes are reflected in the "Official statistical methodology for the organization of statistical observation of consumer prices for goods and services and the calculation of the consumer price index", approved by order of the Federal State Statistics Service of December 30, 2014, no. 734. The essence of the innovation is the use of mathematical methods in determining the range of trade and service objects to study, in calculating the sufficient number of observed price quotes based on price dispersion and on the share of the observed product (service) representative in consumer spending, and in the indicator of the complexity of price registration. The authors analyzed the mathematical calculations of the required number of quotations for observation in the Volgograd region in 2016, and the results of the calculations are compared with the number of quotes included in the monitoring. The authors believe that the implementation of these mathematical models substantially reduced the influence of subjective factors in the organization of consumer price monitoring, and therefore increased the objectivity of the resulting statistics on consumer prices and inflation. At the same time, the proposed methodology needs further improvement in its treatment of goods and services whose representatives have a minor share in consumer expenditure.
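
    The order no. 734 formulas are not given in the abstract; as a hedged sketch of how a sufficient number of price quotes can be derived from price dispersion, here is the standard sample-size calculation (our assumption, not the official methodology):

      import math

      def required_quotes(cv, rel_error, z=1.96):
          """Number of price quotes so that the relative sampling error of
          the mean price is at most rel_error at ~95% confidence, given
          the observed coefficient of variation cv of prices."""
          return math.ceil((z * cv / rel_error) ** 2)

      # Example: 30% price dispersion, 5% target relative error.
      print(required_quotes(0.30, 0.05))  # -> 139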

  20. Statistics and sampling in transuranic studies

    Eberhardt, L.L.; Gilbert, R.O.

    1980-01-01

    The existing data on transuranics in the environment exhibit remarkably high variability from sample to sample (coefficients of variation of 100% or greater). This chapter stresses the necessity of adequate sample size and suggests various ways to increase sampling efficiency. Objectives in sampling are regarded as being of great importance in making decisions as to sampling methodology. Four different classes of sampling methods are described: (1) descriptive sampling, (2) sampling for spatial pattern, (3) analytical sampling, and (4) sampling for modeling. A number of research needs are identified in the various sampling categories, along with several problems that appear to be common to two or more such areas.

  1. Statistical sampling methods for soils monitoring

    Ann M. Abbott

    2010-01-01

    Development of the best sampling design to answer a research question should be an interactive venture between the land manager or researcher and statisticians, and is the result of answering various questions. A series of questions that can be asked to guide the researcher in making decisions that will arrive at an effective sampling plan are described, and a case...

  2. Sampling, Probability Models and Statistical Reasoning -RE ...

    random sampling allows data to be modelled with the help of probability ... based on different trials to get an estimate of the experimental error ... research interests lie in the ... if e is indeed the true value of the proportion of defectives in the ...

  3. Statistical validation of earthquake related observations

    Kossobokov, V. G.

    2011-12-01

    The confirmed fractal nature of earthquakes and their distribution in space and time implies that many traditional estimations of seismic hazard (from term-less to short-term ones) are usually based on erroneous assumptions of easily tractable or, conversely, delicately designed models. The widespread practice of deceptive modeling, considered a "reasonable proxy" of the natural seismic process, leads to seismic hazard assessment of unknown quality, whose errors propagate non-linearly into the resulting estimates of risk and, eventually, into unexpected societal losses of unacceptable level. Studies aimed at forecast/prediction of earthquakes must include validation in retrospective (at least) and, eventually, prospective tests. In the absence of such control a suggested "precursor/signal" remains a "candidate", whose link to the target seismic event is a model assumption. Predicting in advance is the only decisive test of forecasts/predictions and, therefore, the score-card of any "established precursor/signal", represented by the empirical probabilities of alarms and failures-to-predict achieved in prospective testing, must prove statistical significance by rejecting the null hypothesis of random coincidental occurrence of the target earthquakes. We reiterate the suggestion of the so-called "Seismic Roulette" null hypothesis as the most adequate undisturbed random alternative accounting for the empirical spatial distribution of earthquakes: (i) consider a roulette wheel with as many sectors as the number of earthquake locations from a sample catalog representing the seismic locus, a sector per location; (ii) make your bet according to prediction (i.e., determine which locations are inside the area of alarm, and put one chip in each of the corresponding sectors); (iii) Nature turns the wheel; (iv) accumulate statistics of wins and losses along with the number of chips spent. If a precursor in charge of prediction exposes an imperfection of Seismic Roulette then, having in mind ...
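
    A minimal sketch of the significance calculation implied by the "Seismic Roulette" null hypothesis, under our simplified reading (under the null, each target earthquake lands in the alarmed sectors with probability equal to the alarmed fraction):

      from scipy import stats

      def seismic_roulette_pvalue(n_sectors, n_alarmed, n_targets, n_hits):
          """P(at least n_hits of n_targets fall in alarmed sectors by
          chance): survival function of Binomial(n_targets, tau) with
          tau = n_alarmed / n_sectors.  Small values reject random
          coincidental occurrence."""
          tau = n_alarmed / n_sectors
          return stats.binom.sf(n_hits - 1, n_targets, tau)

      # Example: 1000 catalog locations, 150 alarmed, 6 of 8 targets hit.
      print(seismic_roulette_pvalue(1000, 150, 8, 6))  # ~ 2e-4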

  4. Pierre Gy's sampling theory and sampling practice: heterogeneity, sampling correctness, and statistical process control

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practitioners ...

  5. Statistical compilation of NAPAP chemical erosion observations

    Mossotti, Victor G.; Eldeeb, A. Raouf; Reddy, Michael M.; Fries, Terry L.; Coombs, Mary Jane; Schmiermund, Ron L.; Sherwood, Susan I.

    2001-01-01

    In the mid-1980s, the National Acid Precipitation Assessment Program (NAPAP), in cooperation with the National Park Service (NPS) and the U.S. Geological Survey (USGS), initiated a Materials Research Program (MRP) that included a series of field and laboratory studies with the broad objective of providing scientific information on acid rain effects on calcareous building stone. Among the several effects investigated, the chemical dissolution of limestone and marble by rainfall was given particular attention because of the pervasive appearance of erosion effects on cultural materials situated outdoors. In order to track the chemical erosion of stone objects in the field and in the laboratory, the Ca2+ ion concentration was monitored in the runoff solution from a variety of test objects located both outdoors and under more controlled conditions in the laboratory. This report provides a graphical and statistical overview of the Ca2+ chemistry in the runoff solutions from (1) five urban and rural sites (DC, NY, NJ, NC, and OH) established by the MRP for materials studies over the period 1984 to 1989, (2) a subevent study at the New York MRP site, (3) an in situ study of limestone and marble monuments at Gettysburg, (4) laboratory experiments on calcite dissolution conducted by Baedecker, (5) laboratory simulations by Schmiermund, and (6) a laboratory investigation of the surface reactivity of calcareous stone conducted by Fries and Mossotti. The graphical representations provided a means for identifying erroneous data that can randomly appear in a database when field operations are semi-automated; a purged database suitable for the evaluation of quantitative models of stone erosion is appended to this report. An analysis of the sources of statistical variability in the data revealed that the rate of stone erosion is weakly dependent on the type of calcareous stone, the ambient temperature, and the H+ concentration delivered in the incident rain. The analysis also showed ...

  6. Measuring radioactive half-lives via statistical sampling in practice

    Lorusso, G.; Collins, S. M.; Jagan, K.; Hitt, G. W.; Sadek, A. M.; Aitken-Smith, P. M.; Bridi, D.; Keightley, J. D.

    2017-10-01

    The statistical sampling method for the measurement of radioactive decay half-lives exhibits intriguing features such as that the half-life is approximately the median of a distribution closely resembling a Cauchy distribution. Whilst initial theoretical considerations suggested that in certain cases the method could have significant advantages, accurate measurements by statistical sampling have proven difficult, for they require an exercise in non-standard statistical analysis. As a consequence, no half-life measurement using this method has yet been reported and no comparison with traditional methods has ever been made. We used a Monte Carlo approach to address these analysis difficulties, and present the first experimental measurement of a radioisotope half-life (211Pb) by statistical sampling in good agreement with the literature recommended value. Our work also focused on the comparison between statistical sampling and exponential regression analysis, and concluded that exponential regression achieves generally the highest accuracy.
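
    A toy Monte Carlo version of the statistical-sampling idea, under our own assumptions (two Poisson-fluctuating counts separated by a fixed interval; each pair yields one half-life estimate, and the median of the heavy-tailed, Cauchy-like estimate distribution tracks the true value); this is not the paper's measurement procedure:

      import numpy as np

      rng = np.random.default_rng(1)
      T_HALF = 36.1 / 60.0            # true 211Pb half-life, hours
      LAM = np.log(2) / T_HALF        # decay constant
      N0, dt = 5_000, 0.5             # mean counts; hours between counts

      c1 = rng.poisson(N0, size=100_000)
      c2 = rng.poisson(N0 * np.exp(-LAM * dt), size=100_000)
      estimates = dt * np.log(2) / np.log(c1 / c2)   # one estimate per pair
      print(f"median estimate {np.median(estimates):.3f} h "
            f"(true {T_HALF:.3f} h)")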

  7. Statistical hypothesis tests of some micrometeorological observations

    SethuRaman, S.; Tichler, J.

    1977-01-01

    Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and new chi-square values are computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values. Events with |g1| < 0.43 were approximately normal. Intermittency associated with the formation and breaking of internal gravity waves in surface-based inversions over water is thought to be the reason for the non-normality.
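
    In the spirit of the analysis above, a short sketch computing the coefficients of skewness and excess and a chi-square goodness-of-fit test of normality (the equal-probability binning is our choice, not necessarily the authors'):

      import numpy as np
      from scipy import stats

      def normality_summary(x, bins=10):
          """Return (g1, g2, chi2, p): skewness, excess, and a chi-square
          goodness-of-fit test against the fitted normal, using bins of
          equal probability under that normal."""
          x = np.asarray(x, dtype=float)
          g1, g2 = stats.skew(x), stats.kurtosis(x)   # g2 = excess
          cuts = stats.norm.ppf(np.arange(1, bins) / bins,
                                loc=x.mean(), scale=x.std(ddof=1))
          observed = np.bincount(np.searchsorted(cuts, x), minlength=bins)
          expected = len(x) / bins
          chi2 = ((observed - expected) ** 2 / expected).sum()
          # bins - 1 dof, minus 2 for the estimated mean and SD
          return g1, g2, chi2, stats.chi2.sf(chi2, df=bins - 3)

      print(normality_summary(np.random.default_rng(3).normal(size=500)))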

  8. Exclusive observables from a statistical simulation of energetic nuclear collisions

    Fai, G.

    1983-01-01

    Exclusive observables are calculated in the framework of a statistical model for medium-energy nuclear collisions. The collision system is divided into a few (participant/spectator) sources, that are assumed to disassemble independently. Sufficiently excited sources explode into pions, nucleons, and composite, possibly particle unstable, nuclei. The different final states compete according to their microcanonical weight. Less excited sources, and the unstable explosion products, deexcite via light-particle evaporation. The model has been implemented as a Monte Carlo computer code that is sufficiently efficient to permit generation of large event samples. Some illustrative applications are discussed. (author)

  9. A course in mathematical statistics and large sample theory

    Bhattacharya, Rabi; Patrangenaru, Victor

    2016-01-01

    This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics (parametric and nonparametric), and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Features: large sample theory with many worked examples, numerical calculations, and simulations to illustrate theory; appendices provide ready access to a number of standard results, with many proofs; solutions given to a number of selected exercises from Part I; Part II exercises with ...

  10. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
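
    The UCL95% described above reduces to the usual one-sided Student-t limit computed from the number of samples, their average, and standard deviation; a minimal sketch with made-up concentrations (not the SRNL data):

      import math
      from scipy import stats

      def ucl95(results):
          """One-sided upper 95% confidence limit on the mean:
          mean + t(0.95, n-1) * sd / sqrt(n)."""
          n = len(results)
          mean = sum(results) / n
          sd = math.sqrt(sum((r - mean) ** 2 for r in results) / (n - 1))
          return mean + stats.t.ppf(0.95, df=n - 1) * sd / math.sqrt(n)

      # Six hypothetical scrape-sample analyte concentrations (mg/kg).
      print(ucl95([4.1, 3.8, 4.6, 4.0, 4.3, 3.9]))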

  11. Statistical Analysis Of Tank 19F Floor Sample Results

    Harris, S.

    2010-01-01

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by a UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).

  12. The Role of the Sampling Distribution in Understanding Statistical Inference

    Lipson, Kay

    2003-01-01

    Many statistics educators believe that few students develop the level of conceptual understanding essential for them to apply correctly the statistical techniques at their disposal and to interpret their outcomes appropriately. It is also commonly believed that the sampling distribution plays an important role in developing this understanding.…

  13. Illustrating Sampling Distribution of a Statistic: Minitab Revisited

    Johnson, H. Dean; Evans, Marc A.

    2008-01-01

    Understanding the concept of the sampling distribution of a statistic is essential for the understanding of inferential procedures. Unfortunately, this topic proves to be a stumbling block for students in introductory statistics classes. In efforts to aid students in their understanding of this concept, alternatives to a lecture-based mode of…

  14. Statistical sampling techniques as applied to OSE inspections

    Davis, J.J.; Cote, R.W.

    1987-01-01

    The need has been recognized for statistically valid methods for gathering information during OSE inspections, and for interpretation of results, both from performance testing and from records reviews, interviews, etc. Battelle Columbus Division, under contract to DOE OSE, has performed and is continuing to perform work in the area of statistical methodology for OSE inspections. This paper presents some of the sampling methodology currently being developed for use during OSE inspections. Topics include population definition, sample size requirements, level of confidence, and practical logistical constraints associated with the conduct of an inspection based on random sampling. Sequential sampling schemes and sampling from finite populations are also discussed. The methods described are applicable to various data-gathering activities, ranging from the sampling and examination of classified documents to the sampling of Protective Force security inspectors for skill testing.

  15. Multivariate Statistical Inference of Lightning Occurrence, and Using Lightning Observations

    Boccippio, Dennis

    2004-01-01

    Two classes of multivariate statistical inference using TRMM Lightning Imaging Sensor, Precipitation Radar, and Microwave Imager observations are studied, using nonlinear classification neural networks as inferential tools. The very large and globally representative data sample provided by TRMM allows both training and validation (without overfitting) of neural networks with many degrees of freedom. In the first study, the flashing / non-flashing condition of storm complexes is diagnosed using radar, passive microwave and/or environmental observations as neural network inputs. The diagnostic skill of these simple lightning/no-lightning classifiers can be quite high over land (above 80% Probability of Detection; below 20% False Alarm Rate). In the second, passive microwave and lightning observations are used to diagnose radar reflectivity vertical structure. A priori diagnosis of hydrometeor vertical structure is highly important for improved rainfall retrieval from either orbital radars (e.g., the future Global Precipitation Mission "mothership") or radiometers (e.g., operational SSM/I and future Global Precipitation Mission passive microwave constellation platforms); we explore the incremental benefit to such diagnosis provided by lightning observations.

  16. Statistical sampling and modelling for cork oak and eucalyptus stands

    Paulo, M.J.

    2002-01-01

    This thesis focuses on the use of modern statistical methods to solve problems on sampling, optimal cutting time and agricultural modelling in Portuguese cork oak and eucalyptus stands. The results are contained in five chapters that have been submitted for publication

  17. Multivariate statistics high-dimensional and large-sample approximations

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic ...

  18. Statistical conditional sampling for variable-resolution video compression.

    Alexander Wong

    In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Field (CRF) modeling and statistical conditional sampling in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.

  19. Method for statistical data analysis of multivariate observations

    Gnanadesikan, R

    1997-01-01

    A practical guide for multivariate statistical techniques, now updated and revised. In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of interest ...

  20. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Elias Chaibub Neto

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
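
    A sketch of the multinomial-weighting formulation described above, applied to Pearson's correlation (in NumPy rather than the authors' R; variable names are ours):

      import numpy as np

      def bootstrap_corr_vectorized(x, y, n_boot=10_000, seed=None):
          """All bootstrap replications of Pearson's r at once: draw a
          (n_boot x n) matrix of multinomial counts, convert to weights,
          and form every sample moment as a matrix product instead of
          resampling the data."""
          rng = np.random.default_rng(seed)
          n = len(x)
          w = rng.multinomial(n, np.full(n, 1 / n), size=n_boot) / n
          mx, my = w @ x, w @ y
          cov = w @ (x * y) - mx * my
          vx = w @ (x * x) - mx ** 2
          vy = w @ (y * y) - my ** 2
          return cov / np.sqrt(vx * vy)

      rng = np.random.default_rng(7)
      x = rng.normal(size=50)
      y = 0.6 * x + 0.8 * rng.normal(size=50)
      reps = bootstrap_corr_vectorized(x, y, seed=7)
      print(np.percentile(reps, [2.5, 97.5]))   # bootstrap 95% CI for r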

  1. Statistical Methods and Tools for Hanford Staged Feed Tank Sampling

    Fountain, Matthew S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Brigantic, Robert T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-10-01

    This report summarizes work conducted by Pacific Northwest National Laboratory to technically evaluate the current approach to staged feed sampling of high-level waste (HLW) sludge to meet waste acceptance criteria (WAC) for transfer from tank farms to the Hanford Waste Treatment and Immobilization Plant (WTP). The current sampling and analysis approach is detailed in the document titled Initial Data Quality Objectives for WTP Feed Acceptance Criteria, 24590-WTP-RPT-MGT-11-014, Revision 0 (Arakali et al. 2011). The goal of this current work is to evaluate and provide recommendations to support a defensible, technical and statistical basis for the staged feed sampling approach that meets WAC data quality objectives (DQOs).

  2. Gene coexpression measures in large heterogeneous samples using count statistics.

    Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan

    2014-11-18

    With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.

  3. Weighted statistical parameters for irregularly sampled time series

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
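
    A minimal sketch of a density-adapting weighting scheme of this kind (our simple variant: each observation is weighted by half the gap to its neighbours, a trapezoidal rule; the published scheme also adapts to noise level):

      import numpy as np

      def interval_weights(t):
          """Normalized weights proportional to the time span each
          observation represents, damping clumps of measurements."""
          t = np.asarray(t, dtype=float)
          gaps = np.diff(t)
          w = np.empty_like(t)
          w[0], w[-1] = gaps[0] / 2, gaps[-1] / 2
          w[1:-1] = (gaps[:-1] + gaps[1:]) / 2
          return w / w.sum()

      def weighted_mean_var(x, w):
          m = np.sum(w * x)
          return m, np.sum(w * (x - m) ** 2)

      rng = np.random.default_rng(5)
      t = np.sort(rng.uniform(0, 100, 40))          # uneven sampling times
      x = np.sin(2 * np.pi * t / 25) + 0.1 * rng.standard_normal(40)
      print(weighted_mean_var(x, interval_weights(t)))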

  4. Statistical sampling applied to the radiological characterization of historical waste

    Zaffora Biagio

    2016-01-01

    The evaluation of the activity of radionuclides in radioactive waste is required for its disposal in final repositories. Easy-to-measure nuclides, like γ-emitters and high-energy X-rays, can be measured via non-destructive nuclear techniques from outside a waste package. Some radionuclides are difficult-to-measure (DTM) from outside a package because they are α- or β-emitters. The present article discusses the application of linear regression, scaling factors (SF) and the so-called "mean activity method" to estimate the activity of DTM nuclides on metallic waste produced at the European Organization for Nuclear Research (CERN). Various statistical sampling techniques including simple random sampling, systematic sampling, stratified and authoritative sampling are described and applied to two waste populations of activated copper cables. The bootstrap is introduced as a tool to estimate average activities and standard errors in waste characterization. The analysis of the DTM Ni-63 is used as an example. Experimental and theoretical values of SFs are calculated and compared. Guidelines for sampling historical waste using probabilistic and non-probabilistic sampling are finally given.
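
    As a sketch of the bootstrap step mentioned above (simple random sampling case; the Bq/g values are hypothetical, not CERN data):

      import numpy as np

      def bootstrap_mean_activity(activities, n_boot=5_000, seed=None):
          """Average specific activity of a sampled waste population and
          its bootstrap standard error."""
          rng = np.random.default_rng(seed)
          a = np.asarray(activities, dtype=float)
          reps = rng.choice(a, size=(n_boot, a.size)).mean(axis=1)
          return a.mean(), reps.std(ddof=1)

      ni63 = [12.0, 8.5, 15.2, 9.8, 11.4, 7.9, 13.6, 10.1]  # hypothetical
      mean, se = bootstrap_mean_activity(ni63, seed=11)
      print(f"mean Ni-63 activity {mean:.2f} Bq/g, bootstrap SE {se:.2f}")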

  5. STATISTICAL ANALYSIS OF SPORT MOVEMENT OBSERVATIONS: THE CASE OF ORIENTEERING

    K. Amouzandeh

    2017-09-01

    Study of movement observations is becoming more popular in several applications. Particularly, analyzing sport movement time series has been considered as a demanding area. However, most of the attempts made on analyzing movement sport data have focused on spatial aspects of movement to extract some movement characteristics, such as spatial patterns and similarities. This paper proposes statistical analysis of sport movement observations, which refers to analyzing changes in the spatial movement attributes (e.g. distance, altitude and slope) and non-spatial movement attributes (e.g. speed and heart rate) of athletes. As the case study, an example dataset of movement observations acquired during the "orienteering" sport is presented and statistically analyzed.

  6. Statistical Analysis of Sport Movement Observations: the Case of Orienteering

    Amouzandeh, K.; Karimipour, F.

    2017-09-01

    Study of movement observations is becoming more popular in several applications. Particularly, analyzing sport movement time series has been considered as a demanding area. However, most of the attempts made on analyzing movement sport data have focused on spatial aspects of movement to extract some movement characteristics, such as spatial patterns and similarities. This paper proposes statistical analysis of sport movement observations, which refers to analyzing changes in the spatial movement attributes (e.g. distance, altitude and slope) and non-spatial movement attributes (e.g. speed and heart rate) of athletes. As the case study, an example dataset of movement observations acquired during the "orienteering" sport is presented and statistically analyzed.

  7. Examination of statistical noise in SPECT image and sampling pitch

    Takaki, Akihiro; Soma, Tsutomu; Murase, Kenya; Watanabe, Hiroyuki; Murakami, Tomonori; Kawakami, Kazunori; Teraoka, Satomi; Kojima, Akihiro; Matsumoto, Masanori

    2008-01-01

    Statistical noise in single photon emission computed tomography (SPECT) images was examined for its relation to total count and to sampling pitch, using simulation and a phantom experiment to obtain projection data under defined conditions. The SPECT simulation assumed a virtual, homogeneous water column (20 cm diameter) as an absorbing mass. The phantom experiment used a 3D-Hoffman brain phantom (Data Spectrum Corp.) filled with 370 MBq of 99mTc-pertechnetate solution and a facing 2-detector SPECT machine with a low-energy/high-resolution collimator, E-CAM (Siemens). Projection data from the two methods were reconstructed through filtered back projection to make transaxial images. The noise was evaluated visually, by the root mean square uncertainty calculated from the average count and standard deviation (SD) in a region of interest (ROI) defined in the reconstructed images, and by the normalized mean square of the difference between each obtained slice and a reference image reconstructed with the common sampling pitch, for both the simulation and the phantom. In conclusion, the sampling pitch should be set in the machine to approximate the value calculated from the sampling theorem, even though the projection counts per angular direction are then smaller for the same total data-acquisition time. (R.T.)

  8. Statistical reexamination of analytical method on the observed electron spin (or nuclear) resonance curves

    Kim, J.W.

    1980-01-01

    Observed magnetic resonance curves are statistically reexamined. Typical models of resonance lines are the Lorentzian and Gaussian distribution functions. In the case of metallic, alloy or intermetallic compound samples, the observed resonance lines are superpositions of the absorption line and the dispersion line. Methods for analyzing superposed resonance lines are demonstrated. (author)

  9. In situ statistical observations of EMIC waves by Arase satellite

    Nomura, R.; Matsuoka, A.; Teramoto, M.; Nose, M.; Yoshizumi, M.; Fujimoto, A.; Shinohara, M.; Tanaka, Y.

    2017-12-01

    We present an in situ statistical survey of electromagnetic ion cyclotron (EMIC) waves observed by the Arase satellite from 3 March to 16 July 2017. We identified 64 events using the fluxgate magnetometer (MGF) on the satellite. EMIC waves are key phenomena for understanding the loss dynamics of MeV-energy electrons in the radiation belt. We will show the radial and latitudinal dependence of the wave occurrence rate and the wave parameters (frequency band, coherence, polarization, and ellipticity). In particular, EMIC waves observed at localized weak background magnetic fields will be discussed in relation to the wave excitation mechanism in the deep inner magnetosphere.

  10. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra and Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application in the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra and Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic, in order to improve its robustness under small samples. A simple simulation study shows that this third-moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book are used to illustrate the real-world performance of this statistic.

  11. Statistical sampling plan for the TRU waste assay facility

    Beauchamp, J.J.; Wright, T.; Schultz, F.J.; Haff, K.; Monroe, R.J.

    1983-08-01

    Due to limited space, there is a need to dispose appropriately of the Oak Ridge National Laboratory transuranic waste which is presently stored below ground in 55-gal (208-l) drums within weather-resistant structures. Waste containing less than 100 nCi/g of transuranics can be removed from present storage and buried, while waste containing greater than 100 nCi/g of transuranics must continue to be retrievably stored. To make the measurements needed to determine which drums can be buried, a transuranic Neutron Interrogation Assay System (NIAS) has been developed at Los Alamos National Laboratory; it can make the needed measurements much faster than previous techniques, which involved γ-ray spectroscopy. The previous techniques are reliable but time consuming. Therefore, a validation study has been planned to determine the ability of the NIAS to make adequate measurements. The validation of the NIAS will be based on a paired comparison of a sample of measurements made by the previous techniques and the NIAS. The purpose of this report is to describe the proposed sampling plan and the statistical analyses needed to validate the NIAS. 5 references, 4 figures, 5 tables.

  12. The nature of the redshift and directly observed quasar statistics.

    Segal, I E; Nicoll, J F; Wu, P; Zhou, Z

    1991-07-01

    The nature of the cosmic redshift is one of the most fundamental questions in modern science. Hubble's discovery of the apparent Expansion of the Universe is derived from observations on a small number of galaxies at very low redshifts. Today, quasar redshifts have a range more than 1000 times greater than those in Hubble's sample, and represent more than 100 times as many objects. A recent comprehensive compilation of published measurements provides the basis for a study indicating that quasar observations are not in good agreement with the original predictions of the Expanding Universe theory, but are well fit by the predictions of an alternative theory having fewer adjustable parameters.

  13. A Statistical Study of Interplanetary Type II Bursts: STEREO Observations

    Krupar, V.; Eastwood, J. P.; Magdalenic, J.; Gopalswamy, N.; Kruparova, O.; Szabo, A.

    2017-12-01

    Coronal mass ejections (CMEs) are the primary cause of the most severe and disruptive space weather events, such as solar energetic particle (SEP) events and geomagnetic storms at Earth. Interplanetary type II bursts are generated via the plasma emission mechanism by energetic electrons accelerated at CME-driven shock waves and hence identify CMEs that potentially cause space weather impact. As CMEs propagate outward from the Sun, radio emissions are generated at progressively lower frequencies corresponding to a decreasing ambient solar wind plasma density. We have performed a statistical study of 153 interplanetary type II bursts observed by the two STEREO spacecraft between March 2008 and August 2014. These events have been correlated with manually identified CMEs contained in the Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS) catalogue. Our results confirm that faster CMEs are more likely to produce interplanetary type II radio bursts. We have compared observed frequency drifts with white-light observations to estimate angular deviations of type II burst propagation directions from radial, and have found that interplanetary type II bursts preferentially arise from CME flanks. Finally, we discuss the visibility of radio emissions in relation to the CME propagation direction.

  14. Search for lesions in mammograms: Statistical characterization of observer responses

    Bochud, Francois O.; Abbey, Craig K.; Eckstein, Miguel P.

    2004-01-01

    We investigate human performance for visually detecting simulated microcalcifications and tumors embedded in x-ray mammograms as a function of signal contrast and the number of possible signal locations. Our results show that performance degradation with an increasing number of locations is well approximated by signal detection theory (SDT) with the usual Gaussian assumption. However, more stringent statistical analysis finds a departure from Gaussian assumptions for the detection of microcalcifications. We investigated whether these departures from the SDT Gaussian model could be accounted for by an increase in human internal response correlations arising from the image-pixel correlations present in 1/f spectrum backgrounds and/or observer internal response distributions that departed from the Gaussian assumption. Results were consistent with a departure from the Gaussian response distributions and suggested that the human observer internal responses were more compact than the Gaussian distribution. Finally, we conducted a free search experiment where the signal could appear anywhere within the image. Results show that human performance in a multiple-alternative forced-choice experiment can be used to predict performance in the clinically realistic free search experiment when the investigator takes into account the search area and the observers' inherent spatial imprecision in localizing the targets.

  15. Developing Students' Reasoning about Samples and Sampling Variability as a Path to Expert Statistical Thinking

    Garfield, Joan; Le, Laura; Zieffler, Andrew; Ben-Zvi, Dani

    2015-01-01

    This paper describes the importance of developing students' reasoning about samples and sampling variability as a foundation for statistical thinking. Research on expert-novice thinking as well as statistical thinking is reviewed and compared. A case is made that statistical thinking is a type of expert thinking, and as such, research…

  16. Statistics

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  17. Parametric statistical inference for discretely observed diffusion processes

    Pedersen, Asger Roer

    Part 1: Theoretical results. Part 2: Statistical applications of Gaussian diffusion processes in freshwater ecology.

  18. Statistical Sampling For In-Service Inspection Of Liquid Waste Tanks At The Savannah River Site

    Harris, S.

    2011-01-01

    Savannah River Remediation, LLC (SRR) is implementing a statistical sampling strategy for In-Service Inspection (ISI) of Liquid Waste (LW) Tanks at the United States Department of Energy's Savannah River Site (SRS) in Aiken, South Carolina. As a component of SRS's corrosion control program, the ISI program assesses tank wall structural integrity through the use of ultrasonic testing (UT). The statistical strategy for ISI is based on the random sampling of a number of vertically oriented unit areas, called strips, within each tank. The number of strips to inspect was determined so as to attain, over time, a high probability of observing at least one of the worst 5% in terms of pitting and corrosion across all tanks. The probability estimation to determine the number of strips to inspect was performed using the hypergeometric distribution. Statistical tolerance limits for pit depth and corrosion rates were calculated by fitting the lognormal distribution to the data. In addition to the strip sampling strategy, a single strip within each tank was identified to serve as the baseline for a longitudinal assessment of the tank's safe operational life. The statistical sampling strategy enables the ISI program to develop individual profiles of LW tank wall structural integrity that collectively provide a high confidence in their safety and integrity over operational lifetimes.
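
    The strip count can be reproduced from the hypergeometric argument in the abstract. The sketch below finds the smallest random sample that contains, at a chosen confidence, at least one of the worst 5% of strips; the population size and confidence level in the example are assumptions, not SRS figures.

```python
from math import comb

def strips_to_inspect(total_strips, worst_fraction=0.05, confidence=0.95):
    """Smallest random sample of strips that includes at least one of the
    worst `worst_fraction` strips with the stated confidence (hypergeometric).
    """
    worst = max(1, round(worst_fraction * total_strips))
    for n in range(1, total_strips + 1):
        # P(no worst strip in a random sample of n drawn without replacement)
        p_miss = comb(total_strips - worst, n) / comb(total_strips, n)
        if 1 - p_miss >= confidence:
            return n
    return total_strips

# smallest n for a hypothetical population of 400 strips across all tanks
print(strips_to_inspect(400))
```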

  19. Exact distributions of two-sample rank statistics and block rank statistics using computer algebra

    Wiel, van de M.A.

    1998-01-01

    We derive generating functions for various rank statistics and we use computer algebra to compute the exact null distribution of these statistics. We present various techniques for reducing time and memory space used by the computations. We use the results to write Mathematica notebooks for

  20. Statistical techniques for sampling and monitoring natural resources

    Hans T. Schreuder; Richard Ernst; Hugo Ramirez-Maldonado

    2004-01-01

    We present the statistical theory of inventory and monitoring from a probabilistic point of view. We start with the basics and show the interrelationships between designs and estimators illustrating the methods with a small artificial population as well as with a mapped realistic population. For such applications, useful open source software is given in Appendix 4....

  1. Audit sampling: A qualitative study on the role of statistical and non-statistical sampling approaches on audit practices in Sweden

    Ayam, Rufus Tekoh

    2011-01-01

    PURPOSE: The two approaches to audit sampling, statistical and nonstatistical, have been examined in this study. The overall purpose of the study is to explore the current extent to which statistical and nonstatistical sampling approaches are utilized by independent auditors during auditing practices. Moreover, the study also seeks to achieve two additional purposes; the first is to find out whether auditors utilize different sampling techniques when auditing SMEs (Small and Medium-Sized Ente...

  2. Sampling stored product insect pests: a comparison of four statistical sampling models for probability of pest detection

    Statistically robust sampling strategies form an integral component of grain storage and handling activities throughout the world. Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult due to species biology and behavioral characteristics. ...

  3. Statistical sampling strategies for survey of soil contamination

    Brus, D.J.

    2011-01-01

    This chapter reviews methods for selecting sampling locations in contaminated soils for three situations. In the first situation a global estimate of the soil contamination in an area is required. The result of the survey is a number or a series of numbers per contaminant, e.g. the estimated mean

  4. Small sample approach, and statistical and epidemiological aspects

    Offringa, Martin; van der Lee, Hanneke

    2011-01-01

    In this chapter, the design of pharmacokinetic studies and phase III trials in children is discussed. Classical approaches and relatively novel approaches, which may be more useful in the context of drug research in children, are discussed. The burden of repeated blood sampling in pediatric

  5. Statistical evaluations of current sampling procedures and incomplete core recovery

    Heasler, P.G.; Jensen, L.

    1994-03-01

    This document develops two formulas that describe the effects of incomplete recovery on core sampling results for the Hanford waste tanks. The formulas evaluate incomplete core recovery from a worst-case (i.e., biased) and best-case (i.e., unbiased) perspective. A core sampler is unbiased if the sample material recovered is a random sample of the material in the tank, while any sampler that preferentially recovers a particular type of waste over others is a biased sampler. There is strong evidence to indicate that the push-mode sampler presently used at the Hanford site is a biased one. The formulas presented here show the effects of incomplete core recovery on the accuracy of composition measurements, as functions of the vertical variability in the waste. These equations are evaluated using vertical variability estimates from previously sampled tanks (B110, U110, C109). Assuming that the values of vertical variability used in this study adequately describe the Hanford tank farm, one can use the formulas to compute the effect of incomplete recovery on the accuracy of an average constituent estimate. To determine acceptable recovery limits, we have assumed that the relative error of such an estimate should be no more than 20%

  6. Microvariability in AGNs: study of different statistical methods - I. Observational analysis

    Zibecchi, L.; Andruchow, I.; Cellone, S. A.; Carpintero, D. D.; Romero, G. E.; Combi, J. A.

    2017-05-01

    We present the results of a study of different statistical methods currently used in the literature to analyse the (micro)variability of active galactic nuclei (AGNs) from ground-based optical observations. In particular, we focus on the comparison between the results obtained by applying the so-called C and F statistics, which are based on the ratio of standard deviations and variances, respectively. The motivation for this is that the implementation of these methods leads to different and contradictory results, making the variability classification of the light curves of a certain source dependent on the statistics implemented. For this purpose, we re-analyse the results on an AGN sample observed along several sessions with the 2.15 m 'Jorge Sahade' telescope (CASLEO), San Juan, Argentina. For each AGN, we constructed the nightly differential light curves. We thus obtained a total of 78 light curves for 39 AGNs, and we then applied the statistical tests mentioned above, in order to re-classify the variability state of these light curves and in an attempt to find a suitable statistical methodology to study photometric (micro)variations. We conclude that, although the C criterion is not a proper statistical test, it could still be a suitable parameter to detect variability and that its application allows us to get more reliable variability results, in contrast with the F test.
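
    For concreteness, a minimal sketch of the two tests follows: C as the ratio of standard deviations of the AGN and check-star differential light curves, and F as the ratio of variances compared against an F critical value. The 2.576 cutoff (the 99.5% Gaussian quantile), the significance level, and the array names are assumptions drawn from the general microvariability literature, not from this paper.

```python
import numpy as np
from scipy import stats

def variability_tests(target, comparison, alpha=0.01):
    """C and F variability tests for differential light curves (a sketch).

    `target` and `comparison` are arrays of differential magnitudes of the
    AGN and of a check star, respectively (hypothetical inputs).
    """
    c = np.std(target, ddof=1) / np.std(comparison, ddof=1)
    f = np.var(target, ddof=1) / np.var(comparison, ddof=1)
    f_crit = stats.f.ppf(1 - alpha, len(target) - 1, len(comparison) - 1)
    return {"C": c, "C_variable": c > 2.576,   # common cutoff in this literature
            "F": f, "F_variable": f > f_crit}
```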

  7. Subclinical delusional ideation and appreciation of sample size and heterogeneity in statistical judgment.

    Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G

    2010-11-01

    Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.

  8. Statistical analysis of archeomagnetic samples of Teotihuacan, Mexico

    Soler-Arechalde, A. M.

    2012-12-01

    Teotihuacan was one of the most important metropolises of Mesoamerica during the Classic Period (1 to 600 AD). The city grew continuously in different stages that usually concluded with a ritual. Fire was an important element: natives would burn entire structures. An example of this is the Quetzalcoatl pyramid in La Ciudadela (350 AD), which was burned and had a new structure built over it; another is the Big Fire of 570 AD, which marks the city's end. These events are suitable for archaeomagnetic dating. The inclusion of ash in the stucco enhances the detrital-type magnetic signal, which also allows dating. This increases the number of samples to be processed as well as the number of dates. The samples have been analyzed according to their type (floor, wall, talud, or painting) and whether or not they were exposed to fire. Sequences of directions obtained in excavations under strict stratigraphic control will be shown. A sequence of images was used to analyze the improvement of the Teotihuacan secular variation curve through more than a decade of continuous work in the area.

  9. 50 CFR 222.404 - Observer program sampling.

    2010-10-01

    50 CFR 222.404 (2010-10-01), Wildlife and Fisheries, National Marine Fisheries Service, National Oceanic and Atmospheric Administration, Observer Requirement, § 222.404 Observer program sampling: (a) During the program design, NMFS would be guided by the...

  10. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the • number of samples required to achieve a specified confidence in characterization and clearance decisions • confidence in making characterization and clearance decisions for a specified number of samples for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
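
    As a worked illustration of the "number of samples for a specified confidence" calculation, the sketch below applies the standard formula for random sampling with a nonzero FNR. The contaminated fraction, confidence, and FNR values are assumptions, and the report's hotspot and CJR formulas are more involved than this.

```python
import math

def n_samples_for_clearance(confidence=0.95, contaminated_fraction=0.01,
                            fnr=0.10):
    """Random samples needed so that, if `contaminated_fraction` of the
    decision area were contaminated, at least one sample would read positive
    with the stated confidence, allowing for a false negative rate `fnr`
    (an illustrative formula, not the report's exact method).
    """
    p_detect_one = contaminated_fraction * (1.0 - fnr)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_detect_one))

print(n_samples_for_clearance())  # 332 samples under these assumed inputs
```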

  11. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures

    Scheid Anika

    2012-07-01

    worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case – without sacrificing much of the accuracy of the results. Conclusions Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms.

  12. The Statistics of Radio Astronomical Polarimetry: Disjoint, Superposed, and Composite Samples

    Straten, W. van [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Hawthorn, VIC 3122 (Australia); Tiburzi, C., E-mail: willem.van.straten@aut.ac.nz [Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn (Germany)

    2017-02-01

    A statistical framework is presented for the study of the orthogonally polarized modes of radio pulsar emission via the covariances between the Stokes parameters. To accommodate the typically heavy-tailed distributions of single-pulse radio flux density, the fourth-order joint cumulants of the electric field are used to describe the superposition of modes with arbitrary probability distributions. The framework is used to consider the distinction between superposed and disjoint modes, with particular attention to the effects of integration over finite samples. If the interval over which the polarization state is estimated is longer than the timescale for switching between two or more disjoint modes of emission, then the modes are unresolved by the instrument. The resulting composite sample mean exhibits properties that have been attributed to mode superposition, such as depolarization. Because the distinction between disjoint modes and a composite sample of unresolved disjoint modes depends on the temporal resolution of the observing instrumentation, the arguments in favor of superposed modes of pulsar emission are revisited, and observational evidence for disjoint modes is described. In principle, the four-dimensional covariance matrix that describes the distribution of sample mean Stokes parameters can be used to distinguish between disjoint modes, superposed modes, and a composite sample of unresolved disjoint modes. More comprehensive and conclusive interpretation of the covariance matrix requires more detailed consideration of various relevant phenomena, including temporally correlated subpulse modulation (e.g., jitter), statistical dependence between modes (e.g., covariant intensities and partial coherence), and multipath propagation effects (e.g., scintillation and scattering).

  13. Finite-sample instrumental variables inference using an asymptotically pivotal statistic

    Bekker, P; Kleibergen, F

    2003-01-01

    We consider the K-statistic, Kleibergen's (2002, Econometrica 70, 1781-1803) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Whereas Kleibergen (2002) especially analyzes the asymptotic behavior of the statistic, we focus on finite-sample properties in a

  14. Statistical analyses to support guidelines for marine avian sampling. Final report

    Kinlan, Brian P.; Zipkin, Elise; O'Connell, Allan F.; Caldow, Chris

    2012-01-01

    Interest in development of offshore renewable energy facilities has led to a need for high-quality, statistically robust information on marine wildlife distributions. A practical approach is described to estimate the amount of sampling effort required to have sufficient statistical power to identify species-specific “hotspots” and “coldspots” of marine bird abundance and occurrence in an offshore environment divided into discrete spatial units (e.g., lease blocks), where “hotspots” and “coldspots” are defined relative to a reference (e.g., regional) mean abundance and/or occurrence probability for each species of interest. For example, a location with average abundance or occurrence that is three times larger than the mean (3x effect size) could be defined as a “hotspot,” and a location that is three times smaller than the mean (1/3x effect size) as a “coldspot.” The choice of the effect size used to define hot and coldspots will generally depend on a combination of ecological and regulatory considerations. A method is also developed for testing the statistical significance of possible hotspots and coldspots. Both methods are illustrated with historical seabird survey data from the USGS Avian Compendium Database. Our approach consists of five main components: 1. A review of the primary scientific literature on statistical modeling of animal group size and avian count data to develop a candidate set of statistical distributions that have been used or may be useful to model seabird counts. 2. Statistical power curves for one-sample, one-tailed Monte Carlo significance tests of differences of observed small-sample means from a specified reference distribution. These curves show the power to detect "hotspots" or "coldspots" of occurrence and abundance at a range of effect sizes, given assumptions which we discuss. 3. A model selection procedure, based on maximum likelihood fits of models in the candidate set, to determine an appropriate statistical
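
    Component 2 can be illustrated with a small simulation: the Monte Carlo power of a one-sample, one-tailed test that a lease block's mean count exceeds a reference distribution. The negative binomial model, its parameter values, and the effect size below are assumptions for illustration, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_power(n, ref_mean=2.0, dispersion=0.5, effect=3.0,
             alpha=0.05, reps=2000, null_reps=2000):
    """Monte Carlo power of a one-sample, one-tailed test that a block's
    mean count exceeds a negative binomial reference (all values assumed).
    """
    def draw(mean, size):
        p = dispersion / (dispersion + mean)          # NB2 parameterization
        return rng.negative_binomial(dispersion, p, size)

    null_means = draw(ref_mean, (null_reps, n)).mean(axis=1)
    hits = 0
    for _ in range(reps):
        obs = draw(effect * ref_mean, n).mean()       # a 3x "hotspot" sample
        p_val = (null_means >= obs).mean()            # one-tailed MC p-value
        hits += p_val <= alpha
    return hits / reps

print(mc_power(n=10))  # power for 10 survey counts per block
```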

  15. Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.

    Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas

    2002-01-01

    Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…

  16. The statistical observation for frequency of occurrence of the anencephalus

    Chun, Yong Sun; Suh, Sung Hee; Lee, Chung Sik

    1974-01-01

    During the period from January 1967 to August 1974, 14,308 delivery cases of Korean women were observed in the Ewha Womans University Hospital to determine congenital anomalies, especially anencephalus. The results obtained are as follows: 1) Fetal congenital anomalies were detected in 109 cases (0.76%) of the total 14,308 delivery cases. 2) The 5 cases (23%) of anencephalus (84%) were attributed to medication with herbal and Occidental drugs in the early pregnancy period, within 3 months

  17. On the representativeness of behavior observation samples in classrooms.

    Tiger, Jeffrey H; Miller, Sarah J; Mevers, Joanna Lomas; Mintz, Joslyn Cynkus; Scheithauer, Mindy C; Alvarez, Jessica

    2013-01-01

    School consultants who rely on direct observation typically conduct observational samples (e.g., one 30-min observation per day) in the hope that the sample is representative of performance during the remainder of the day, but the representativeness of these samples is unclear. In the current study, we recorded the problem behavior of 3 referred students for 4 consecutive school days between 9:30 a.m. and 2:30 p.m. using duration recording in consecutive 10-min sessions. We then culled 10-min, 20-min, 30-min, and 60-min observations from the complete record and compared these observations to the true daily mean to assess their accuracy (i.e., how well individual observations represented the daily occurrence of target behaviors). The results indicated that when behavior occurred with low variability, the majority of brief observations were representative of the overall levels; however, when behavior occurred with greater variability, even 60-min observations did not accurately capture the true levels of behavior. © Society for the Experimental Analysis of Behavior.

  18. A Unimodal Model for Double Observer Distance Sampling Surveys.

    Earl F Becker

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
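
    A minimal sketch of such a unimodal detection function follows: a two-piece normal with a single apex and different spreads on either side, scaled to 1 at the apex. Covariates and the double-observer likelihood are omitted, and all parameter values are illustrative assumptions rather than the paper's fitted model.

```python
import numpy as np

def two_piece_normal(x, apex, sigma_left, sigma_right):
    """Unimodal detection function sketch: a two-piece normal with one apex,
    different spreads on each side, and value 1 at the apex.
    """
    x = np.asarray(x, dtype=float)
    sigma = np.where(x < apex, sigma_left, sigma_right)
    return np.exp(-0.5 * ((x - apex) / sigma) ** 2)

# detection probability can peak away from the transect line (e.g., 120 m out)
distances = np.linspace(0, 600, 7)
print(two_piece_normal(distances, apex=120.0, sigma_left=60.0, sigma_right=200.0))
```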

  19. Observations in the statistical analysis of NBG-18 nuclear graphite strength tests

    Hindley, Michael P.; Mitchell, Mark N.; Blaine, Deborah C.; Groenwold, Albert A.

    2012-01-01

    Highlights: ► Statistical analysis of NBG-18 nuclear graphite strength tests. ► Weibull and normal distributions are tested for all data. ► A bimodal distribution in the CS data is confirmed. ► The CS data set has the lowest variance. ► A combined data set is formed and has a Weibull distribution. - Abstract: The purpose of this paper is to report on the selection of a statistical distribution chosen to represent the experimental material strength of NBG-18 nuclear graphite. Three large sets of samples were tested during the material characterisation of the Pebble Bed Modular Reactor and Core Structure Ceramics materials. These sets of samples are tensile strength, flexural strength and compressive strength (CS) measurements. A relevant statistical fit is determined and the goodness of fit is also evaluated for each data set. The data sets are also normalised for ease of comparison, and combined into one representative data set. The validity of this approach is demonstrated. A second failure mode distribution is found in the CS test data. Identifying this failure mode supports similar observations made in the past. The success of fitting the Weibull distribution to the normalised data sets allows us to improve the basis for the estimates of the variability. This could also imply that the variability in the graphite strength for the different strength measures is based on the same flaw distribution and is thus a property of the material.

  20. Effect of the Target Motion Sampling Temperature Treatment Method on the Statistics and Performance

    Viitanen, Tuomas; Leppänen, Jaakko

    2014-06-01

    Target Motion Sampling (TMS) is a stochastic on-the-fly temperature treatment technique that is being developed as a part of the Monte Carlo reactor physics code Serpent. The method provides for modeling of arbitrary temperatures in continuous-energy Monte Carlo tracking routines with only one set of cross sections stored in the computer memory. Previously, only the performance of the TMS method in terms of CPU time per transported neutron has been discussed. Since the effective cross sections are not calculated at any point of a transport simulation with TMS, reaction rate estimators must be scored using sampled cross sections, which is expected to increase the variances and, consequently, to decrease the figures-of-merit. This paper examines the effects of the TMS on the statistics and performance in practical calculations involving reaction rate estimation with collision estimators. Against all expectations it turned out that the usage of sampled response values has no practical effect on the performance of reaction rate estimators when using TMS with elevated basis cross section temperatures (EBT), i.e. the usual way. With 0 Kelvin cross sections a significant increase in the variances of capture rate estimators was observed right below the energy region of unresolved resonances, but at these energies the figures-of-merit could be increased using a simple resampling technique to decrease the variances of the responses. It was, however, noticed that the usage of the TMS method increases the statistical deviances of all estimators, including the flux estimator, by tens of percent in the vicinity of very strong resonances. This effect is actually not related to the usage of sampled responses, but is instead an inherent property of the TMS tracking method and concerns both EBT and 0 K calculations.

  1. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    R. Eric Heidel

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
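
    As a worked example of an a priori calculation built from those components, the sketch below uses the familiar normal-approximation formula for a two-group comparison of means; the standardized effect size (which folds in its variance), alpha, and power values are illustrative assumptions, not the paper's.

```python
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """A priori sample size per group for a two-sample comparison of means
    (normal approximation, two-sided test); all inputs are illustrative.
    """
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5))  # 63 per group for a medium effect (d = 0.5)
```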

  2. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.

  3. Effect of model choice and sample size on statistical tolerance limits

    Duran, B.S.; Campbell, K.

    1980-03-01

    Statistical tolerance limits are estimates of large (or small) quantiles of a distribution, quantities which are very sensitive to the shape of the tail of the distribution. The exact nature of this tail behavior cannot be ascertained from small samples, so statistical tolerance limits are frequently computed using a statistical model chosen on the basis of theoretical considerations or prior experience with similar populations. This report illustrates the effects of such choices on the computations

  4. Improving Statistics Education through Simulations: The Case of the Sampling Distribution.

    Earley, Mark A.

    This paper presents a summary of action research investigating statistics students' understandings of the sampling distribution of the mean. With four sections of an introductory Statistics in Education course (n=98 students), a computer simulation activity (R. delMas, J. Garfield, and B. Chance, 1999) was implemented and evaluated to show…

  5. STATISTICAL LANDMARKS AND PRACTICAL ISSUES REGARDING THE USE OF SIMPLE RANDOM SAMPLING IN MARKET RESEARCHES

    CODRUŢA DURA

    2010-01-01

    The sample represents a particular segment of the statistical population chosen to represent it as a whole. The representativeness of the sample determines the accuracy of estimations made on the basis of calculating the research indicators and the inferential statistics. The method of random sampling is part of the probabilistic methods which can be used within marketing research, and it is characterized by the fact that it imposes the requirement that each unit belonging to the statistical population should have an equal chance of being selected for the sampling process. When simple random sampling is meant to be rigorously put into practice, it is recommended to use the technique of random number tables in order to configure the sample which will provide the information that the marketer needs. The paper also details the practical procedure implemented in order to create a sample for a marketing research study by generating random numbers using the facilities offered by Microsoft Excel.
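
    The random-number-table procedure described here is easy to mimic in code. The sketch below draws a simple random sample from a hypothetical customer frame (the frame size, seed, and sample size are assumptions), giving each unit an equal selection probability just as the abstract requires.

```python
import random

random.seed(42)  # reproducible draw, analogous to fixing a random number table

# hypothetical sampling frame of 5,000 customers, identified by ID
frame = list(range(1, 5001))

# simple random sample without replacement: every unit has an equal
# probability of selection
sample = random.sample(frame, k=200)
print(sorted(sample)[:10])
```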

  6. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    M.M. Mohie El-Din

    2011-10-01

    In this paper, two sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two samples case. The class of the inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.

  7. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.

    Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E

    2015-09-03

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
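
    The idea of down-weighting variable samples can be sketched numerically. The toy function below estimates a per-sample variance factor from residuals and combines its reciprocal with a crude observation-level precision weight; it is a conceptual illustration only, not the 'limma'/'voom' implementation, and all names and steps are invented for this sketch.

```python
import numpy as np

def combined_weights(log_cpm):
    """Conceptual sketch of sample- and observation-level weighting.

    `log_cpm` is a genes x samples matrix of log-expression values.
    """
    resid = log_cpm - log_cpm.mean(axis=1, keepdims=True)
    sample_var = (resid ** 2).mean(axis=0)             # per-sample variance factor
    sample_w = 1.0 / sample_var                        # down-weight noisy samples
    obs_var = resid.var(axis=1, keepdims=True) + 1e-8  # crude per-gene variance
    obs_w = 1.0 / obs_var                              # observation-level weight
    return obs_w * sample_w                            # broadcasts to genes x samples

rng = np.random.default_rng(0)
print(combined_weights(rng.normal(size=(100, 6))).shape)  # (100, 6)
```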

  8. Effect of the Target Motion Sampling temperature treatment method on the statistics and performance

    Viitanen, Tuomas; Leppänen, Jaakko

    2015-01-01

    Highlights: • Use of the Target Motion Sampling (TMS) method with collision estimators is studied. • The expected values of the estimators agree with NJOY-based reference. • In most practical cases also the variances of the estimators are unaffected by TMS. • Transport calculation slow-down due to TMS dominates the impact on figures-of-merit. - Abstract: Target Motion Sampling (TMS) is a stochastic on-the-fly temperature treatment technique that is being developed as a part of the Monte Carlo reactor physics code Serpent. The method provides for modeling of arbitrary temperatures in continuous-energy Monte Carlo tracking routines with only one set of cross sections stored in the computer memory. Previously, only the performance of the TMS method in terms of CPU time per transported neutron has been discussed. Since the effective cross sections are not calculated at any point of a transport simulation with TMS, reaction rate estimators must be scored using sampled cross sections, which is expected to increase the variances and, consequently, to decrease the figures-of-merit. This paper examines the effects of the TMS on the statistics and performance in practical calculations involving reaction rate estimation with collision estimators. Against all expectations it turned out that the usage of sampled response values has no practical effect on the performance of reaction rate estimators when using TMS with elevated basis cross section temperatures (EBT), i.e. the usual way. With 0 Kelvin cross sections a significant increase in the variances of capture rate estimators was observed right below the energy region of unresolved resonances, but at these energies the figures-of-merit could be increased using a simple resampling technique to decrease the variances of the responses. It was, however, noticed that the usage of the TMS method increases the statistical deviances of all estimators, including the flux estimator, by tens of percent in the vicinity of very

  9. Effects of (α,n) contaminants and sample multiplication on statistical neutron correlation measurements

    Dowdy, E.J.; Hansen, G.E.; Robba, A.A.; Pratt, J.C.

    1980-01-01

    The complete formalism for the use of statistical neutron fluctuation measurements for the nondestructive assay of fissionable materials has been developed. This formalism includes the effect of detector deadtime, neutron multiplicity, random neutron pulse contributions from (α,n) contaminants in the sample, and the sample multiplication of both fission-related and background neutrons

  10. Statistical evaluation of the data obtained from the K East Basin Sandfilter Backwash Pit samples

    Welsh, T.L.

    1994-01-01

    Samples were obtained from different locations in the K East Basin Sandfilter Backwash Pit to characterize the sludge material. These samples were analyzed chemically for elements, radionuclides, and residual compounds. The analytical results were statistically analyzed to determine the mean analyte content and the associated variability for each mean value.

  11. Supporting Students to Develop Concepts Underlying Sampling and to Shuttle Between Contextual and Statistical Spheres

    Bakker, A.; Dierdorp, A.; Maanen, J.A. van; Eijkelhof, H.M.C.

    2012-01-01

    To stimulate students’ shuttling between contextual and statistical spheres, we based tasks on professional practices. This article focuses on two tasks to support reasoning about sampling by students aged 16-17. The purpose of the tasks was to find out which smaller sample size would have been

  12. Sampling methods to the statistical control of the production of blood components.

    Pereira, Paulo; Seghatchian, Jerard; Caldeira, Beatriz; Santos, Paula; Castro, Rosa; Fernandes, Teresa; Xavier, Sandra; de Sousa, Gracinda; de Almeida E Sousa, João Paulo

    2017-12-01

    The control of blood component specifications is a requirement generalized in Europe by the European Commission Directives and in the US by the AABB standards. The use of a statistical process control methodology is recommended in the related literature, including the EDQM guideline. The reliability of the control depends on the sampling. However, a correct sampling methodology does not seem to be systematically applied. Commonly, the sampling is intended to comply only with the 1% specification for the produced blood components. Nevertheless, from a purely statistical viewpoint, this model could be argued not to be related to a consistent sampling technique. This could be a severe limitation on detecting abnormal patterns and on assuring that the production has a non-significant probability of producing nonconforming components. This article discusses what is happening in blood establishments. Three statistical methodologies are proposed: simple random sampling, sampling based on the proportion of a finite population, and sampling based on the inspection level. The empirical results demonstrate that these models are practicable in blood establishments, contributing to the robustness of sampling and related statistical process control decisions for the purposes they are suggested for. Copyright © 2017 Elsevier Ltd. All rights reserved.
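
    One of the three proposed models, sampling based on the proportion of a finite population, can be sketched with the usual finite-population correction. The lot size, expected nonconforming proportion, and margin below are assumptions for illustration, not the paper's values.

```python
import math

def finite_population_sample_size(N, p=0.01, margin=0.01, z=1.96):
    """Sample size for estimating a nonconforming proportion in a finite
    lot of N blood components (illustrative, with assumed inputs).
    """
    n0 = z ** 2 * p * (1 - p) / margin ** 2        # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))      # finite-population correction

print(finite_population_sample_size(N=2000))       # e.g., a month's production
```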

  13. Comparing simulated and theoretical sampling distributions of the U3 person-fit statistic

    Emons, W.H.M.; Meijer, R.R.; Sijtsma, K.

    2002-01-01

    The accuracy with which the theoretical sampling distribution of van der Flier's person-fit statistic U3 approaches the empirical U3 sampling distribution is affected by the item discrimination. A simulation study showed that for tests with a moderate or a strong mean item discrimination, the Type I

  14. Comparing simulated and theoretical sampling distributions of the U3 person-fit statistic

    Emons, Wilco H.M.; Meijer, R.R.; Sijtsma, Klaas

    2002-01-01

    The accuracy with which the theoretical sampling distribution of van der Flier’s person-fit statistic U3 approaches the empirical U3 sampling distribution is affected by the item discrimination. A simulation study showed that for tests with a moderate or a strong mean item discrimination, the Type I

  15. Repetitive Observation of Coniferous Samples in ESEM and SEM

    Tihlaříková, Eva; Neděla, Vilém

    2015-01-01

    Vol. 21, S3 (2015), pp. 1695-1696 ISSN 1431-9276 R&D Projects: GA ČR(CZ) GA14-22777S; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords: SEM * ESEM * biological samples * repetitive observation Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering Impact factor: 1.730, year: 2015

  16. Comparison of pure and 'Latinized' centroidal Voronoi tessellation against various other statistical sampling methods

    Romero, Vicente J.; Burkardt, John V.; Gunzburger, Max D.; Peterson, Janet S.

    2006-01-01

    A recently developed centroidal Voronoi tessellation (CVT) sampling method is investigated here to assess its suitability for use in statistical sampling applications. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. On several 2-D test problems CVT has recently been found to provide exceedingly effective and efficient point distributions for response surface generation. Additionally, for statistical function integration and estimation of response statistics associated with uniformly distributed random-variable inputs (uncorrelated), CVT has been found in initial investigations to provide superior point sets when compared against Latin hypercube and simple random Monte Carlo methods and Halton and Hammersley quasi-random sequence methods. In this paper, the performance of all these sampling methods and a new variant ('Latinized' CVT) are further compared for non-uniform input distributions. Specifically, given uncorrelated normal inputs in a 2-D test problem, statistical sampling efficiencies are compared for resolving various statistics of response: mean, variance, and exceedance probabilities
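
    A minimal sketch of CVT point generation on the unit hypercube follows, using Lloyd's algorithm with Monte Carlo centroid estimates: generators move to the centroids of their Voronoi regions until the set becomes highly uniform. The iteration count and probe-set size are arbitrary assumptions, and production implementations (and the 'Latinized' variant) differ from this sketch.

```python
import numpy as np

def cvt_sample(n_points, dim=2, iters=50, n_mc=20000, seed=0):
    """Centroidal Voronoi tessellation sampling on the unit hypercube via
    Lloyd's algorithm with Monte Carlo centroid estimates (a sketch).
    """
    rng = np.random.default_rng(seed)
    gens = rng.random((n_points, dim))                 # initial generators
    for _ in range(iters):
        mc = rng.random((n_mc, dim))                   # dense probe points
        d = ((mc[:, None, :] - gens[None, :, :]) ** 2).sum(axis=2)
        owner = d.argmin(axis=1)                       # nearest generator index
        for k in range(n_points):
            members = mc[owner == k]
            if len(members):
                gens[k] = members.mean(axis=0)         # move to region centroid
    return gens

print(cvt_sample(16))  # 16 highly uniform points in the unit square
```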

  17. Connecting HL Tau to the observed exoplanet sample

    Simbulan, Christopher; Tamayo, Daniel; Petrovich, Cristobal; Rein, Hanno; Murray, Norman

    2017-08-01

    The Atacama Large Millimeter/submilimeter Array (ALMA) recently revealed a set of nearly concentric gaps in the protoplanetary disc surrounding the young star HL Tauri (HL Tau). If these are carved by forming gas giants, this provides the first set of orbital initial conditions for planets as they emerge from their birth discs. Using N-body integrations, we have followed the evolution of the system for 5 Gyr to explore the possible outcomes. We find that HL Tau initial conditions scaled down to the size of typically observed exoplanet orbits naturally produce several populations in the observed exoplanet sample. First, for a plausible range of planetary masses, we can match the observed eccentricity distribution of dynamically excited radial velocity giant planets with eccentricities >0.2. Secondly, we roughly obtain the observed rate of hot Jupiters around FGK stars. Finally, we obtain a large efficiency of planetary ejections of ≈2 per HL Tau-like system, but the small fraction of stars observed to host giant planets makes it hard to match the rate of free-floating planets inferred from microlensing observations. In view of upcoming Gaia results, we also provide predictions for the expected mutual inclination distribution, which is significantly broader than the absolute inclination distributions typically considered by previous studies.

  18. Discrimination of handlebar grip samples by fourier transform infrared microspectroscopy analysis and statistics

    Zeyu Lin

    2017-01-01

    In this paper, the authors presented a study on the discrimination of handlebar grip samples, to provide effective forensic science services for hit-and-run traffic cases. 50 bicycle handlebar grip samples, 49 electric bike handlebar grip samples, and 96 motorcycle handlebar grip samples have been randomly collected by the local police in Beijing (China). Fourier transform infrared microspectroscopy (FTIR) was utilized as the analytical technique. Then, target absorption selection, data pretreatment, and discrimination of linked samples and unlinked samples were chosen as three steps to improve the discrimination of FTIR spectra collected from different handlebar grip samples. Principal component analysis and receiver operating characteristic curves were utilized to evaluate different data selection methods and different data pretreatment methods, respectively. It is possible to explore the evidential value of handlebar grip residue evidence through instrumental analysis and statistical treatments. It will provide a universal discrimination method for other forensic science samples as well.

  19. The Impact of Time Difference between Satellite Overpass and Ground Observation on Cloud Cover Performance Statistics

    Jędrzej S. Bojanowski

    2014-12-01

    Cloud property data sets derived from passive sensors onboard the polar orbiting satellites (such as the NOAA's Advanced Very High Resolution Radiometer) have global coverage and now span a climatological time period. Synoptic surface observations (SYNOP) are often used to characterize the accuracy of satellite-based cloud cover. Infrequent overpasses of polar orbiting satellites combined with the 3- or 6-h SYNOP frequency lead to collocation time differences of up to 3 h. The associated collocation error degrades the cloud cover performance statistics such as the Hanssen-Kuiper's discriminant (HK) by up to 45%. Limiting the time difference to 10 min, on the other hand, introduces a sampling error due to a lower number of corresponding satellite and SYNOP observations. This error depends on both the length of the validated time series and the SYNOP frequency. The trade-off between collocation and sampling error calls for an optimum collocation time difference. It depends, however, on cloud cover characteristics and SYNOP frequency, and cannot be generalized. Instead, a method is presented to reconstruct the unbiased (true) HK from HK affected by the collocation differences, which significantly (t-test p < 0.01) improves the validation results.
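
    The performance statistic at issue can be computed from a 2x2 contingency table of satellite versus SYNOP cloud detections, as sketched below; the counts shown are invented for illustration, not the paper's data.

```python
def hanssen_kuiper(hits, misses, false_alarms, correct_negatives):
    """Hanssen-Kuiper's discriminant (true skill statistic): HK = POD - POFD."""
    pod = hits / (hits + misses)                       # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)
    return pod - pofd

# illustrative contingency counts for satellite vs. SYNOP cloud flags
print(hanssen_kuiper(hits=620, misses=80, false_alarms=120, correct_negatives=180))
```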

  20. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. The sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
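
    To illustrate sizing on a simple proportion of agreement rather than kappa, the sketch below uses a one-sided one-sample proportion test. It is a stand-in for the paper's formula (which was derived from the kappa statistic under the common correlation model and a goodness-of-fit statistic), and all numeric inputs are assumptions.

```python
import math
from scipy.stats import norm

def n_for_agreement(p0, p1, alpha=0.05, power=0.80):
    """Subjects needed to show the proportion of agreement exceeds p0 when
    the true proportion is p1 (one-sided one-sample proportion test; an
    illustrative stand-in, not the paper's kappa-based formula).
    """
    za = norm.ppf(1 - alpha)
    zb = norm.ppf(power)
    num = za * math.sqrt(p0 * (1 - p0)) + zb * math.sqrt(p1 * (1 - p1))
    return math.ceil((num / (p1 - p0)) ** 2)

print(n_for_agreement(p0=0.70, p1=0.85))  # 50 paired ratings under these inputs
```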

  1. On Extrapolating Past the Range of Observed Data When Making Statistical Predictions in Ecology.

    Paul B Conn

    Ecologists are increasingly using statistical models to predict animal abundance and occurrence in unsampled locations. The reliability of such predictions depends on a number of factors, including sample size, how far prediction locations are from the observed data, and similarity of predictive covariates in locations where data are gathered to locations where predictions are desired. In this paper, we propose extending Cook's notion of an independent variable hull (IVH), developed originally for application with linear regression models, to generalized regression models as a way to help assess the potential reliability of predictions in unsampled areas. Predictions occurring inside the generalized independent variable hull (gIVH) can be regarded as interpolations, while predictions occurring outside the gIVH can be regarded as extrapolations worthy of additional investigation or skepticism. We conduct a simulation study to demonstrate the usefulness of this metric for limiting the scope of spatial inference when conducting model-based abundance estimation from survey counts. In this case, limiting inference to the gIVH substantially reduces bias, especially when survey designs are spatially imbalanced. We also demonstrate the utility of the gIVH in diagnosing problematic extrapolations when estimating the relative abundance of ribbon seals in the Bering Sea as a function of predictive covariates. We suggest that ecologists routinely use diagnostics such as the gIVH to help gauge the reliability of predictions from statistical models (such as generalized linear, generalized additive, and spatio-temporal regression models).
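
    For the linear-regression special case, Cook's IVH reduces to a leverage comparison, as sketched below: a new point counts as an interpolation if its leverage under (XᵀX)⁻¹ does not exceed the maximum training leverage. The generalized (gIVH) version for non-linear models is based on prediction variance instead; the data and names here are illustrative assumptions.

```python
import numpy as np

def inside_ivh(X, x_new):
    """Cook's independent variable hull check for a linear model: a
    prediction point is an interpolation if its leverage does not exceed
    the maximum leverage of the training design matrix X.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    train_lev = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # hat-matrix diagonal
    new_lev = np.einsum("ij,jk,ik->i", x_new, XtX_inv, x_new)
    return new_lev <= train_lev.max()

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(30), rng.uniform(0, 1, 30)])  # intercept + covariate
x_new = np.array([[1.0, 0.5], [1.0, 2.0]])                 # near center vs. far outside
print(inside_ivh(X, x_new))                                # expect [ True False]
```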

  2. A homogeneous sample of binary galaxies: Basic observational properties

    Karachentsev, I. D.

    1990-01-01

    A survey of optical characteristics for 585 binary systems, satisfying a condition of apparent isolation on the sky, is presented. Influences of various selection effects distorting the average parameters of the sample are noted. The pair components display mutual similarity over all the global properties: luminosity, diameter, morphological type, mass-to-luminosity ratio, angular momentum etc., which is not due only to selection effects. The observed correlations must be caused by a common origin of pair members. Some features (nuclear activity, color index) could acquire similarity during the synchronous evolution of double galaxies. Despite the observed isolation, the sample of double systems is seriously contaminated by accidental pairs, and also by members of groups and clusters. After removing false pairs, estimates of the orbital mass-to-luminosity ratio range from 0 to 30 f(solar), with the mean value (7.8 plus or minus 0.7) f(solar). Binary galaxies possess nearly circular orbits with a typical eccentricity e = 0.25, probably resulting from evolutionary selection driven by component mergers under dynamical friction. The double-galaxy population, with space abundance 0.12 plus or minus 0.02 and characteristic merger timescale 0.2 H^-1, may significantly influence the rate of dynamical evolution of galaxies.

  3. Statistical analysis of MMS observations of energetic electron escape observed at/beyond the dayside magnetopause

    Cohen, Ian J.; Mauk, Barry H.; Anderson, Brian J.; Westlake, Joseph H.; Sibeck, David G.; Turner, Drew L.; Fennell, Joseph F.; Blake, J. Bern; Jaynes, Allison N.; Leonard, Trevor W.; Baker, Daniel N.; Spence, Harlan E.; Reeves, Geoff D.; Giles, Barbara J.; Strangeway, Robert J.; Torbert, Roy B.; Burch, James L.

    2017-09-01

    Observations from the Energetic Particle Detector (EPD) instrument suite aboard the Magnetospheric Multiscale (MMS) spacecraft show that energetic (greater than tens of keV) magnetospheric particle escape into the magnetosheath occurs commonly across the dayside. This includes the surprisingly frequent observation of magnetospheric electrons in the duskside magnetosheath, an unexpected result given assumptions regarding magnetic drift shadowing. The 238 events identified in the 40 keV electron energy channel during the first MMS dayside season exhibit strongly anisotropic pitch angle distributions, indicating monohemispheric field-aligned streaming away from the magnetopause. A review of the extremely rich literature of energetic electron observations beyond the magnetopause is provided to place these new observations into historical context. Despite the extensive history of such research, these new observations provide a more comprehensive data set that includes unprecedented magnetic local time (MLT) coverage of the dayside equatorial magnetopause/magnetosheath. These data clearly highlight the common escape of energetic electrons along magnetic field lines concluded to have been reconnected across the magnetopause. While these streaming escape events agree with prior studies which show strong correlation with geomagnetic activity (suggesting a magnetotail source) and occur most frequently during periods of southward IMF, the high number of duskside events is unexpected and previously unobserved. Although the lowest electron energy channel was the focus of this study, the events reported here exhibit pitch angle anisotropies indicative of streaming up to 200 keV, which could represent the magnetopause loss of >1 MeV electrons from the outer radiation belt.

  4. Statistical analysis of MMS observations of energetic electron escape observed at/beyond the dayside magnetopause

    Cohen, Ian J.; Mauk, Barry H.; Anderson, Brian J.; Westlake, Joseph H.; Sibeck, David G.

    2017-01-01

    Here, observations from the Energetic Particle Detector (EPD) instrument suite aboard the Magnetospheric Multiscale (MMS) spacecraft show that energetic (greater than tens of keV) magnetospheric particle escape into the magnetosheath occurs commonly across the dayside. This includes the surprisingly frequent observation of magnetospheric electrons in the duskside magnetosheath, an unexpected result given assumptions regarding magnetic drift shadowing. The 238 events identified in the 40 keV electron energy channel during the first MMS dayside season exhibit strongly anisotropic pitch angle distributions, indicating monohemispheric field-aligned streaming away from the magnetopause. A review of the extremely rich literature of energetic electron observations beyond the magnetopause is provided to place these new observations into historical context. Despite the extensive history of such research, these new observations provide a more comprehensive data set that includes unprecedented magnetic local time (MLT) coverage of the dayside equatorial magnetopause/magnetosheath. These data clearly highlight the common escape of energetic electrons along magnetic field lines concluded to have been reconnected across the magnetopause. While these streaming escape events agree with prior studies which show strong correlation with geomagnetic activity (suggesting a magnetotail source) and occur most frequently during periods of southward IMF, the high number of duskside events is unexpected and previously unobserved. Although the lowest electron energy channel was the focus of this study, the events reported here exhibit pitch angle anisotropies indicative of streaming up to 200 keV, which could represent the magnetopause loss of >1 MeV electrons from the outer radiation belt.

  5. The application of statistical and/or non-statistical sampling techniques by internal audit functions in the South African banking industry

    D.P. van der Nest

    2015-03-01

    This article explores the use by internal audit functions of audit sampling techniques in order to test the effectiveness of controls in the banking sector. The article focuses specifically on the use of statistical and/or non-statistical sampling techniques by internal auditors. The focus of the research for this article was internal audit functions in the banking sector of South Africa. The results discussed in the article indicate that audit sampling is still used frequently as an audit evidence-gathering technique. Non-statistical sampling techniques are used more frequently than statistical sampling techniques for the evaluation of the sample. In addition, both techniques are regarded as important for the determination of the sample size and the selection of the sample items.

  6. Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.

    2013-01-01

    Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of

  7. Statistics

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  8. Statistical properties of the surface velocity field in the northern Gulf of Mexico sampled by GLAD drifters

    Mariano, A.J.; Ryan, E.H.; Huntley, H.S.; Laurindo, L.C.; Coelho, E.; Ozgokmen, TM; Berta, M.; Bogucki, D; Chen, S.S.; Curcic, M.; Drouin, K.L.; Gough, M; Haus, BK; Haza, A.C.; Hogan, P

    2016-01-01

    The Grand LAgrangian Deployment (GLAD) used multiscale sampling and GPS technology to observe time series of drifter positions with initial drifter separation of O(100 m) to O(10 km), and nominal 5 min sampling, during the summer and fall of 2012 in the northern Gulf of Mexico. Histograms of the velocity field and its statistical parameters are non-Gaussian; most are multimodal. The dominant periods for the surface velocity field are 1–2 days due to inertial oscillations, tides, and the sea b...

  9. Statistical inference for discrete-time samples from affine stochastic delay differential equations

    Küchler, Uwe; Sørensen, Michael

    2013-01-01

    Statistical inference for discrete time observations of an affine stochastic delay differential equation is considered. The main focus is on maximum pseudo-likelihood estimators, which are easy to calculate in practice. A more general class of prediction-based estimating functions is investigated...

  10. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the Chipilapa geothermal field, El Salvador, geo-statistical parameters were derived from the variogram calculated from the field data. The maximum correlation distance of the radon samples along the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which sets the spacing of the monitoring grid for future prospecting in the same area. From this, an optimization (minimum cost) of the field-sample spacing was derived by means of geo-statistical techniques, without losing detection of the anomaly. (Author)
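
    For illustration only (not from the record above): a minimal sketch of how a correlation range like the 121 m quoted can be read off an empirical variogram. The synthetic coordinates, the radon-like values, the lag width and the 95%-of-sill criterion for the practical range are all assumptions.

    ```python
    import numpy as np

    def empirical_variogram(coords, values, lag_width, n_lags):
        """Classical (Matheron) empirical semivariogram from scattered samples."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = 0.5 * (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)          # each pair counted once
        d, sq = d[iu], sq[iu]
        lags, gammas = [], []
        for k in range(n_lags):
            m = (d >= k * lag_width) & (d < (k + 1) * lag_width)
            if m.any():
                lags.append(d[m].mean())
                gammas.append(sq[m].mean())
        return np.array(lags), np.array(gammas)

    # Synthetic radon-like field on a survey area (illustration only)
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 500, size=(300, 2))         # metres
    values = np.sin(coords[:, 0] / 80.0) + 0.3 * rng.standard_normal(300)

    lags, gammas = empirical_variogram(coords, values, lag_width=25.0, n_lags=15)
    sill = gammas[-3:].mean()
    # Practical range: first lag where the variogram reaches ~95% of the sill
    range_m = lags[np.argmax(gammas >= 0.95 * sill)]
    print(f"estimated correlation range = {range_m:.0f} m")
    ```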

  11. Effect of the absolute statistic on gene-sampling gene-set analysis methods.

    Nam, Dougu

    2017-06-01

    Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, the simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of the absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated by power, false-positive rate, and receiver operating characteristic curves for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
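
    As a rough illustration of the approach described above, the following sketch scores a gene set by the mean absolute t-statistic and builds a gene-sampling null distribution; the data, set size and permutation count are invented for the example and do not come from the paper.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    expr = rng.standard_normal((2000, 10))       # 2000 genes x 10 samples
    groups = np.array([0] * 5 + [1] * 5)         # two conditions, 5 replicates each
    gene_set = np.arange(50)                     # indices of the tested gene set

    # Gene-level statistic: absolute two-sample t (captures bidirectional change)
    t, _ = stats.ttest_ind(expr[:, groups == 0], expr[:, groups == 1], axis=1)
    score = np.abs(t)

    observed = score[gene_set].mean()
    # Gene-sampling null: mean |t| of randomly drawn gene sets of the same size
    null = np.array([score[rng.choice(len(score), size=len(gene_set),
                                      replace=False)].mean()
                     for _ in range(5000)])
    p = (1 + np.sum(null >= observed)) / (1 + len(null))
    print(f"gene-set p-value (absolute statistic, gene sampling): {p:.4f}")
    ```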

  12. Statistical characteristics of L1 carrier phase observations from four low-cost GPS receivers

    Cederholm, Jens Peter

    2010-01-01

    Statistical properties of L1 carrier phase observations from four low-cost GPS receivers are investigated through a case study. The observations are collected on a zero baseline with a frequency of 1 Hz and processed with a double difference model. The carrier phase residuals from an ambiguity...

  13. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-01-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm
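
    For orientation, a minimal numerical sketch of a channelized Hotelling observer SNR computed from channel outputs. The random channel profiles, image ensembles and signal are illustrative assumptions; the record's closed-form MAP covariance approximation is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_pix, n_ch, n_img = 64 * 64, 6, 200

    # Hypothetical channel profiles (rows), stand-ins for band-pass channels
    U = rng.standard_normal((n_ch, n_pix)) / np.sqrt(n_pix)

    # Illustrative image ensembles: signal-absent and signal-present
    signal = np.zeros(n_pix)
    signal[:50] = 0.5
    imgs0 = rng.standard_normal((n_img, n_pix))
    imgs1 = rng.standard_normal((n_img, n_pix)) + signal

    v0, v1 = imgs0 @ U.T, imgs1 @ U.T            # channel outputs, (n_img, n_ch)
    dmean = v1.mean(0) - v0.mean(0)
    S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))      # intra-class channel covariance

    w = np.linalg.solve(S, dmean)                # Hotelling template in channel space
    snr = np.sqrt(dmean @ w)                     # observer SNR (detectability index)
    print(f"CHO SNR = {snr:.2f}")
    ```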

  14. Statistics

    2005-01-01

    For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown on the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees.

  15. Statistical characterization of a large geochemical database and effect of sample size

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements in the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States) and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements passed the test for either a normal or a lognormal distribution on the declustered data set, partly because of the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively
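
    A toy sketch of the kink idea described above: locate a change of slope on a normal Q-Q plot of log-concentrations as a hint of a subpopulation. The two-component lognormal mixture, smoothing window and detection rule are assumptions for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Illustrative mixture: a background population plus an enriched subpopulation
    conc = np.concatenate([rng.lognormal(1.0, 0.3, 9000),
                           rng.lognormal(3.0, 0.4, 1000)])

    logc = np.sort(np.log(conc))
    q = stats.norm.ppf((np.arange(len(logc)) + 0.5) / len(logc))  # theoretical quantiles

    # Local slope of the Q-Q curve; a sharp change ("kink") suggests a subpopulation
    window = 500
    slope = np.gradient(logc, q)
    smooth = np.convolve(slope, np.ones(window) / window, mode="same")
    kink = np.argmax(np.abs(np.gradient(smooth)))
    print(f"kink near the {100 * kink / len(logc):.0f}th percentile "
          f"(log-concentration = {logc[kink]:.2f})")
    ```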

  16. DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data

    Harris, S.P. [Westinghouse Savannah River Company, AIKEN, SC (United States)

    1997-09-18

    This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF Mock-up test data for evaluation of two new analytical methods which use insert samples from the existing HydragardTM sampler. The first is a new hydrofluoric acid-based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Either new DWPF analytical method could result in a two- to three-fold improvement in sample analysis time. Both new methods use the existing HydragardTM sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and the Melter Feed Tank (MFT) samples. The insert sample is named after the initial trials which placed the container inside the sample (peanut) vials. Samples in small 3 ml containers (inserts) are analyzed by either the Cold Chem method or a modified fusion method. The current analytical method uses a HydragardTM sample station to obtain nearly full 15 ml peanut vials. The samples are prepared by a multi-step process for Inductively Coupled Plasma (ICP) analysis by drying, vitrification, grinding and finally dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, thus eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid due to the decreased sample size and the fact that no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated as PX-7. The Mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock

  17. Chemometric and Statistical Analyses of ToF-SIMS Spectra of Increasingly Complex Biological Samples

    Berman, E S; Wu, L; Fortson, S L; Nelson, D O; Kulp, K S; Wu, K J

    2007-10-24

    Characterizing and classifying molecular variation within biological samples is critical for determining fundamental mechanisms of biological processes that will lead to new insights including improved disease understanding. Towards these ends, time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used to examine increasingly complex samples of biological relevance, including monosaccharide isomers, pure proteins, complex protein mixtures, and mouse embryo tissues. The complex mass spectral data sets produced were analyzed using five common statistical and chemometric multivariate analysis techniques: principal component analysis (PCA), linear discriminant analysis (LDA), partial least squares discriminant analysis (PLSDA), soft independent modeling of class analogy (SIMCA), and decision tree analysis by recursive partitioning. PCA was found to be a valuable first step in multivariate analysis, providing insight both into the relative groupings of samples and into the molecular basis for those groupings. For the monosaccharides, pure proteins and protein mixture samples, all of LDA, PLSDA, and SIMCA were found to produce excellent classification given a sufficient number of compound variables calculated. For the mouse embryo tissues, however, SIMCA did not produce as accurate a classification. The decision tree analysis was found to be the least successful for all the data sets, providing neither as accurate a classification nor chemical insight for any of the tested samples. Based on these results we conclude that as the complexity of the sample increases, so must the sophistication of the multivariate technique used to classify the samples. PCA is a preferred first step for understanding ToF-SIMS data that can be followed by either LDA or PLSDA for effective classification analysis. This study demonstrates the strength of ToF-SIMS combined with multivariate statistical and chemometric techniques to classify increasingly complex biological samples.
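
    A minimal sketch of the preferred workflow named above (PCA for dimensionality reduction followed by LDA for classification), here using scikit-learn on invented stand-in data; the shapes, preprocessing and component count are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    # Stand-in for ToF-SIMS peak intensities: 90 spectra x 500 peaks, 3 classes
    X = rng.standard_normal((90, 500))
    y = np.repeat([0, 1, 2], 30)
    X += y[:, None] * rng.standard_normal(500) * 0.3   # inject class structure

    # PCA first (dimensionality reduction and insight), then LDA for classification
    clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                        LinearDiscriminantAnalysis())
    acc = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
    ```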

  18. Statistics

    2001-01-01

    For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products.

  19. Statistics

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products.

  20. Statistics

    1999-01-01

    For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products.

  1. Statistics of counter-streaming solar wind suprathermal electrons at solar minimum: STEREO observations

    B. Lavraud

    2010-01-01

    Previous work has shown that solar wind suprathermal electrons can display a number of features in terms of their anisotropy. Of importance is the occurrence of counter-streaming electron patterns, i.e., with "beams" both parallel and anti-parallel to the local magnetic field, which is believed to shed light on the heliospheric magnetic field topology. In the present study, we use STEREO data to obtain the statistical properties of counter-streaming suprathermal electrons (CSEs) in the vicinity of corotating interaction regions (CIRs) during the period March–December 2007. Because this period corresponds to a minimum of solar activity, the results are unrelated to the sampling of large-scale coronal mass ejections, which can lead to CSEs owing to their closed magnetic field topology. The present study statistically confirms that CSEs are primarily the result of suprathermal electron leakage from the compressed CIR into the upstream regions, with the combined occurrence of halo depletion at 90° pitch angle. The occurrence rate of CSEs is found to be about 15–20% on average during the period analyzed (depending on the criteria used), but superposed epoch analysis demonstrates that CSEs are preferentially observed both before and after the passage of the stream interface (with peak occurrence rate >35% in the trailing high-speed stream), as well as both inside and outside CIRs. The results quantitatively show that CSEs are common in the solar wind during solar minimum, but they suggest that such distributions would be much more common if pitch angle scattering were absent. We further argue (1) that the formation of shocks contributes to the occurrence of enhanced counter-streaming sunward-directed fluxes, but does not appear to be a necessary condition, and (2) that the presence of small-scale transients with closed-field topologies likely also contributes to the occurrence of counter-streaming patterns, but only in the slow solar wind prior to

  2. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods have proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described
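
    For concreteness, a minimal parallel tempering sketch in the spirit of the description above, on an invented bimodal one-dimensional target; the temperature ladder, proposal scale and swap schedule are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def energy(x):
        """-log p(x) for a bimodal target with modes near x = -3 and x = +3."""
        return -np.logaddexp(-(x - 3.0) ** 2, -(x + 3.0) ** 2)

    temps = np.array([1.0, 2.0, 4.0, 8.0])        # temperature ladder
    x = np.zeros(len(temps))                      # one walker per temperature
    cold = []                                     # samples from the T = 1 chain

    for step in range(20000):
        # Metropolis update within each temperature
        prop = x + 0.5 * rng.standard_normal(len(temps))
        accept = np.log(rng.random(len(temps))) < (energy(x) - energy(prop)) / temps
        x = np.where(accept, prop, x)
        # Occasionally attempt to swap configurations of neighbouring temperatures
        if step % 10 == 0:
            i = int(rng.integers(len(temps) - 1))
            logr = (1 / temps[i] - 1 / temps[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
            if np.log(rng.random()) < logr:
                x[i], x[i + 1] = x[i + 1], x[i]
        cold.append(x[0])

    cold = np.array(cold[5000:])                  # discard burn-in
    print(f"fraction of T=1 samples in the x > 0 mode: {(cold > 0).mean():.2f}")
    ```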

  3. Adaptive sampling rate control for networked systems based on statistical characteristics of packet disordering.

    Li, Jin-Na; Er, Meng-Joo; Tan, Yen-Kheng; Yu, Hai-Bin; Zeng, Peng

    2015-09-01

    This paper investigates an adaptive sampling rate control scheme for networked control systems (NCSs) subject to packet disordering. The main objectives of the proposed scheme are (a) to avoid the heavy packet disordering existing in communication networks and (b) to stabilize NCSs with packet disordering, transmission delay and packet loss. First, a novel sampling rate control algorithm based on statistical characteristics of disordering entropy is proposed; secondly, an augmented closed-loop NCS that consists of a plant, a sampler and a state-feedback controller is transformed into an uncertain and stochastic system, which facilitates the controller design. Then, a sufficient condition for stochastic stability in terms of Linear Matrix Inequalities (LMIs) is given. Moreover, an adaptive tracking controller is designed such that the sampling period tracks a desired sampling period, which represents a significant contribution. Finally, experimental results are given to illustrate the effectiveness and advantages of the proposed scheme.
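
    The paper's exact definition of disordering entropy and its control law are not reproduced above, so the following is only a hedged sketch: a Shannon entropy over packet-displacement counts and a simple proportional adjustment of the sampling period, both invented for illustration.

    ```python
    import numpy as np
    from collections import Counter

    def disorder_entropy(seq_nums):
        """Shannon entropy of packet displacement counts (illustrative definition)."""
        disp = np.asarray(seq_nums) - np.arange(len(seq_nums))
        counts = np.array(list(Counter(disp.tolist()).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def adapt_period(period, h, target=1.0, gain=0.2, lo=0.01, hi=1.0):
        """Lengthen the sampling period when disordering is high, shorten when low."""
        return float(np.clip(period * (1 + gain * (h - target)), lo, hi))

    received = [0, 2, 1, 3, 6, 4, 5, 7, 9, 8]     # sequence numbers in arrival order
    h = disorder_entropy(received)
    print(f"disorder entropy = {h:.2f} bits, "
          f"next period = {adapt_period(0.05, h):.3f} s")
    ```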

  4. The autoradiographic observation of neutron activated plant samples

    Koyama, Motoko; Tanizaki, Yoshiyuki

    2003-01-01

    Imaging Plate (IP) is a radiography apparatus based on photostimulable luminescence. IP has several advantages over X-ray film, for example high sensitivity, wide latitude and high fidelity for radiations. The high sensitivity of IP makes it possible to observe the distribution of short-lived nuclides. We obtained autoradiographs of Azuki bean cuttings. In the basal region of Azuki bean cuttings, the intensity of autoradiographs of indole acetic acid (IAA)-treated samples was higher than that of water- and gibberellin (GA)-treated ones. The high-intensity parts of IAA-treated cuttings extended upwards. The intense imaging of the basal region treated with IAA indicated that high elemental concentrations were present for adventitious root formation. The measurement results by γ-ray spectrometry showed that the Ca content in the basal region of the Azuki bean cuttings increased with IAA treatment. It seems that the cell division for adventitious root formation needs Ca. In Azuki bean epicotyls, the Ca content increased toward the basal region, while the Mg content increased toward the upper region. (author)

  5. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.

  6. Statistics

    2003-01-01

    For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees on energy products.

  7. Statistics

    2004-01-01

    For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees.

  8. Statistics

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products.

  9. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical

  10. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    David C Pavlacky

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous

  11. DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data

    Harris, S.P.

    1997-01-01

    This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF Mock-up test data for evaluation of two new analytical methods which use insert samples from the existing HydragardTM sampler. The first is a new hydrofluoric acid-based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Both new methods use the existing HydragardTM sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and the Melter Feed Tank (MFT) samples. Samples in small 3 ml containers (inserts) are analyzed by either the Cold Chem method or a modified fusion method. The current analytical method uses a HydragardTM sample station to obtain nearly full 15 ml peanut vials. The samples are prepared by a multi-step process for Inductively Coupled Plasma (ICP) analysis by drying, vitrification, grinding and finally dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, thus eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid due to the decreased sample size and the fact that no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated as PX-7. The Mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock-up Facility using 3 ml inserts and 15 ml peanut vials. A number of the insert samples were analyzed by Cold Chem and compared with full peanut vial samples analyzed by the current methods. The remaining inserts were analyzed by

  12. A preliminary study on identification of Thai rice samples by INAA and statistical analysis

    Kongsri, S.; Kukusamude, C.

    2017-09-01

    This study aims to investigate the elemental compositions of 93 Thai rice samples using instrumental neutron activation analysis (INAA) and to identify rice according to type and cultivar using statistical analysis. As, Mg, Cl, Al, Br, Mn, K, Rb and Zn in Thai jasmine rice and Sung Yod rice samples were successfully determined by INAA. The accuracy and precision of the INAA method were verified with SRM 1568a Rice Flour. All elements were found to be in good agreement with the certified values. The precisions in terms of %RSD were lower than 7%. The LODs were in the range of 0.01 to 29 mg kg-1. The concentrations of the 9 elements distributed in the Thai rice samples were evaluated and used as chemical indicators to identify the type of rice sample. The results showed that the Mg, Cl, As, Br, Mn, K, Rb and Zn concentrations differ significantly between Thai jasmine rice and Sung Yod rice samples, whereas there was no evidence at the 95% confidence level that the Al concentrations differ. Our results may provide preliminary information for the discrimination of rice samples and may form a useful database of Thai rice.

  13. Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module

    Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); McKay, James; Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Farmer, Ben; Conrad, Jan [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Roebber, Elinore [McGill University, Department of Physics, Montreal, QC (Canada); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Collaboration: The GAMBIT Scanner Workgroup

    2017-11-15

    We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics. (orig.)

  14. New Hybrid Monte Carlo methods for efficient sampling. From physics to biology and statistics

    Akhmatskaya, Elena; Reich, Sebastian

    2011-01-01

    We introduce a class of novel hybrid methods for detailed simulations of large complex systems in physics, biology, materials science and statistics. These generalized shadow Hybrid Monte Carlo (GSHMC) methods combine the advantages of stochastic and deterministic simulation techniques. They utilize a partial momentum update to retain some of the dynamical information, employ modified Hamiltonians to overcome exponential performance degradation with the system's size, and make use of the multi-scale nature of complex systems. Variants of GSHMC were developed for atomistic simulation, particle simulation and statistics: GSHMC (a thermodynamically consistent implementation of constant-temperature molecular dynamics), MTS-GSHMC (multiple-time-stepping GSHMC), meso-GSHMC (a Metropolis-corrected dissipative particle dynamics (DPD) method), and the generalized shadow Hamiltonian Monte Carlo, GSHmMC (a GSHMC for statistical simulations). All of these are compatible with other enhanced sampling techniques and suitable for massively parallel computing, allowing for a range of multi-level parallel strategies. A brief description of the GSHMC approach, examples of its application on high performance computers and comparison with other existing techniques are given. Our approach is shown to resolve such problems as resonance instabilities of the MTS methods and non-preservation of thermodynamic equilibrium properties in DPD, and to outperform known methods in sampling efficiency by an order of magnitude. (author)
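
    A minimal sketch of the partial momentum update at the heart of generalized hybrid Monte Carlo, on a standard normal target; the shadow-Hamiltonian correction that distinguishes GSHMC proper is omitted, and the refresh angle and leapfrog settings are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def grad_U(q):
        return q                                  # target: standard normal, U = q**2 / 2

    def leapfrog(q, p, eps=0.2, steps=10):
        p -= 0.5 * eps * grad_U(q)
        for _ in range(steps - 1):
            q += eps * p
            p -= eps * grad_U(q)
        q += eps * p
        p -= 0.5 * eps * grad_U(q)
        return q, p

    q, p, phi = 0.0, float(rng.standard_normal()), 0.3   # phi: partial-refresh angle
    samples = []
    for _ in range(20000):
        # Partial momentum update: rotate old momentum towards fresh Gaussian noise
        p = np.cos(phi) * p + np.sin(phi) * rng.standard_normal()
        H0 = 0.5 * (q * q + p * p)
        q_new, p_new = leapfrog(q, p)
        H1 = 0.5 * (q_new * q_new + p_new * p_new)
        if np.log(rng.random()) < H0 - H1:        # Metropolis test on the Hamiltonian
            q, p = q_new, p_new
        else:
            p = -p                                # momentum flip on rejection (GHMC)
        samples.append(q)

    print(f"mean {np.mean(samples):+.3f}, variance {np.var(samples):.3f} (target: 0, 1)")
    ```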

  15. Parameter sampling capabilities of sequential and simultaneous data assimilation: II. Statistical analysis of numerical results

    Fossum, Kristian; Mannseth, Trond

    2014-01-01

    We assess and compare parameter sampling capabilities of one sequential and one simultaneous Bayesian, ensemble-based, joint state-parameter (JS) estimation method. In the companion paper, part I (Fossum and Mannseth 2014 Inverse Problems 30 114002), analytical investigations lead us to propose three claims, essentially stating that the sequential method can be expected to outperform the simultaneous method for weakly nonlinear forward models. Here, we assess the reliability and robustness of these claims through statistical analysis of results from a range of numerical experiments. Samples generated by the two approximate JS methods are compared to samples from the posterior distribution generated by a Markov chain Monte Carlo method, using four approximate measures of distance between probability distributions. Forward-model nonlinearity is assessed from a stochastic nonlinearity measure allowing for sufficiently large model dimensions. Both toy models (with low computational complexity, and where the nonlinearity is fairly easy to control) and two-phase porous-media flow models (corresponding to down-scaled versions of problems to which the JS methods have been frequently applied recently) are considered in the numerical experiments. Results from the statistical analysis show strong support of all three claims stated in part I. (paper)
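
    The four distance measures used in the study are not specified above; as one common choice, the sketch below computes the energy distance between two sample sets (e.g., approximate-method draws versus reference MCMC draws). The sample sets are invented.

    ```python
    import numpy as np

    def energy_distance(a, b):
        """Energy distance between two multivariate sample sets (rows = draws)."""
        def mean_pdist(x, y):
            return np.mean(np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1))
        return 2 * mean_pdist(a, b) - mean_pdist(a, a) - mean_pdist(b, b)

    rng = np.random.default_rng(7)
    mcmc = rng.standard_normal((500, 3))                 # reference posterior draws
    approx = rng.standard_normal((500, 3)) * 1.2 + 0.1   # ensemble-method draws
    print(f"energy distance to reference: {energy_distance(approx, mcmc):.3f}")
    ```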

  16. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
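
    A hedged sketch of the algebraic sample size adjustment idea discussed above, assuming the commonly described linear rescaling of a chi-square fit statistic to a nominal sample size (here N = 500) before computing its p-value; the simulated statistics and degrees of freedom are invented, and this is not the RUMM implementation.

    ```python
    import numpy as np
    from scipy import stats

    def adjusted_p(chi2, df, n, n_adj=500):
        """Rescale a chi-square fit statistic to a nominal sample size, then test."""
        return stats.chi2.sf(chi2 * n_adj / n, df)

    n, df, n_items = 2500, 8, 25
    # Illustrative item fit statistics, inflated in proportion to the large sample
    chi2_vals = stats.chi2.rvs(df, size=n_items, random_state=0) * (n / 500)

    p_raw = stats.chi2.sf(chi2_vals, df)
    p_adj = adjusted_p(chi2_vals, df, n)
    alpha = 0.05 / n_items                        # Bonferroni across the 25 items
    print(f"items flagged without adjustment: {(p_raw < alpha).sum()}, "
          f"with downward adjustment to N = 500: {(p_adj < alpha).sum()}")
    ```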

  17. IR Observations of a Complete Unbiased Sample of Bright Seyfert Galaxies

    Malkan, Matthew; Bendo, George; Charmandaris, Vassilis; Smith, Howard; Spinoglio, Luigi; Tommasin, Silvia

    2008-03-01

    IR spectra will measure the 2 main energy-generating processes by which galactic nuclei shine: black hole accretion and star formation. Both of these play roles in galaxy evolution, and they appear connected. To obtain a complete sample of AGN, covering the range of luminosities and column-densities, we will combine 2 complete all-sky samples with complementary selections, minimally biased by dust obscuration: the 116 IRAS 12um AGN and the 41 Swift/BAT hard X-ray AGN. These galaxies have been extensively studied across the entire EM spectrum. Herschel observations have been requested and will be synergistic with the Spitzer database. IRAC and MIPS imaging will allow us to separate the nuclear and galactic continua. We are completing full IR observations of the local AGN population, most of which have already been done. The only remaining observations we request are 10 IRS/HIRES, 57 MIPS-24 and 30 IRAC pointings. These high-quality observations of bright AGN in the bolometric-flux-limited samples should be completed, for the high legacy value of complete uniform datasets. We will measure quantitatively the emission at each wavelength arising from stars and from accretion in each galactic center. Since our complete samples come from flux-limited all-sky surveys in the IR and HX, we will calculate the bi-variate AGN and star formation luminosity functions for the local population of active galaxies, for comparison with higher redshifts. Our second aim is to understand the physical differences between AGN classes. This requires statistical comparisons of full multiwavelength observations of complete representative samples. If the difference between Sy1s and Sy2s is caused by orientation, their isotropic properties, including those of the surrounding galactic centers, should be similar. In contrast, if they are different evolutionary stages following a galaxy encounter, then we may find observational evidence that the circumnuclear ISM of Sy2s is relatively younger.

  18. New complete sample of identified radio sources. Part 2. Statistical study

    Soltan, A.

    1978-01-01

    The complete sample of radio sources with known redshifts selected in Paper I is studied. Source counts in the sample and the luminosity-volume test show that both quasars and galaxies are subject to evolution. Luminosity functions for different ranges of redshifts are obtained. Due to many uncertainties, only simplified models of the evolution are tested. An exponential decline of the luminosity with time for all the bright sources is in good agreement both with the luminosity-volume test and the N(S) relation in the entire range of observed flux densities. It is shown that sources in the sample are randomly distributed on scales greater than about 17 Mpc. (author)

  19. The Sloan Digital Sky Survey Quasar Lens Search. IV. Statistical Lens Sample from the Fifth Data Release

    Inada, Naohisa; Oguri, Masamune; Shin, Min-Su; Kayo, Issha; Strauss, Michael A.; Hennawi, Joseph F.; Morokuma, Tomoki; Becker, Robert H.; White, Richard L.; Kochanek, Christopher S.; Gregg, Michael D.

    2010-05-01

    We present the second report of our systematic search for strongly lensed quasars from the data of the Sloan Digital Sky Survey (SDSS). From extensive follow-up observations of 136 candidate objects, we find 36 lenses in the full sample of 77,429 spectroscopically confirmed quasars in the SDSS Data Release 5. We then define a complete sample of 19 lenses, including 11 from our previous search in the SDSS Data Release 3, from the sample of 36,287 quasars with i < 19.1 in the redshift range 0.6 < z < 2.2, where we require the lenses to have image separations of 1'' < θ < 20'' and i-band magnitude differences between the two images smaller than 1.25 mag. Among the 19 lensed quasars, 3 have quadruple-image configurations, while the remaining 16 show double images. This lens sample constrains the cosmological constant to be Ω_Λ = 0.84 +0.06/−0.08 (stat.) +0.09/−0.07 (syst.) assuming a flat universe, which is in good agreement with other cosmological observations. We also report the discoveries of 7 binary quasars with separations ranging from 1.1'' to 16.6'', which are identified in the course of our lens survey. This study concludes the construction of our statistical lens sample in the full SDSS-I data set.

  20. MULTI-LEVEL SAMPLING APPROACH FOR CONTINUOUS LOSS DETECTION USING ITERATIVE WINDOW AND STATISTICAL MODEL

    Mohd Fo'ad Rohani; Mohd Aizaini Maarof; Ali Selamat; Houssain Kettani

    2010-01-01

    This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using an iterative window. The method defines LoSS based on the Second Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate the self-similarity parameter, since it is fast and more accurate in comparison with other estimation methods known in the literature. A probability of LoSS detection is introduced to measure continuous LoSS detection performance...
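
    The paper's Optimization Method estimator is not reproduced here; as a hedged stand-in, the sketch below estimates the Hurst parameter with the aggregated-variance method over successive (iterative) windows and flags LoSS when the estimate leaves the self-similar range 0.5 < H < 1.

    ```python
    import numpy as np

    def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32)):
        """Aggregated-variance Hurst estimate: Var(X^(m)) ~ m^(2H - 2)."""
        v = [np.var(x[:len(x) // m * m].reshape(-1, m).mean(axis=1)) for m in scales]
        slope = np.polyfit(np.log(scales), np.log(v), 1)[0]
        return 1 + slope / 2

    rng = np.random.default_rng(9)
    traffic = rng.pareto(1.5, 50000)          # heavy-tailed stand-in for traffic counts

    window = 5000
    for start in range(0, len(traffic) - window + 1, window):   # iterative windows
        h = hurst_aggvar(traffic[start:start + window])
        status = "self-similar" if 0.5 < h < 1.0 else "LoSS"
        print(f"window {start // window}: H = {h:.2f} ({status})")
    ```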

  1. Filtering a statistically exactly solvable test model for turbulent tracers from partial observations

    Gershgorin, B.; Majda, A.J.

    2011-01-01

    A statistically exactly solvable model for passive tracers is introduced as a test model for the authors' Nonlinear Extended Kalman Filter (NEKF) as well as other filtering algorithms. The model involves a Gaussian velocity field and a passive tracer governed by the advection-diffusion equation with an imposed mean gradient. The model has direct relevance to engineering problems, such as the spread of pollutants in the air or contaminants in the water, as well as to climate change problems concerning the transport of greenhouse gases such as carbon dioxide, whose strongly intermittent probability distributions are consistent with actual observations of the atmosphere. One of the attractive properties of the model is the existence of an exact statistical solution. In particular, this unique feature of the model provides an opportunity to design and test fast and efficient algorithms for real-time data assimilation based on rigorous mathematical theory for a turbulence model problem with many active spatiotemporal scales. Here, we extensively study the performance of the NEKF, which uses the exact first- and second-order nonlinear statistics without any approximations due to linearization. The roles of partial and sparse observations, the frequency of observations and the observation noise strength in recovering the true signal, its spectrum, and its fat-tail probability distribution are the central issues discussed here. The results of our study provide useful guidelines for filtering realistic turbulent systems with passive tracers through partial observations.
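
    As a much-reduced illustration of filtering from partial observations (a scalar linear-Gaussian toy, not the NEKF or the tracer model above), the following sketch applies a Kalman filter that only receives an observation every fifth step; all model parameters are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Scalar AR(1) signal observed sparsely with noise
    a, q, r = 0.95, 0.1, 0.5          # dynamics, process var, observation var
    T, obs_every = 200, 5             # observe only every 5th step (partial obs)

    x_true, x_est, P = 0.0, 0.0, 1.0
    err = []
    for t in range(T):
        x_true = a * x_true + np.sqrt(q) * rng.standard_normal()
        # Forecast step
        x_est, P = a * x_est, a * a * P + q
        # Analysis step, only when an observation is available
        if t % obs_every == 0:
            y = x_true + np.sqrt(r) * rng.standard_normal()
            K = P / (P + r)                       # Kalman gain
            x_est, P = x_est + K * (y - x_est), (1 - K) * P
        err.append((x_est - x_true) ** 2)

    print(f"RMSE with 1-in-{obs_every} observations: {np.sqrt(np.mean(err)):.3f}")
    ```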

  2. The Bologna complete sample of nearby radio sources. II. Phase referenced observations of faint nuclear sources

    Liuzzo, E.; Giovannini, G.; Giroletti, M.; Taylor, G. B.

    2009-10-01

    Aims: To study statistical properties of different classes of sources, it is necessary to observe a sample that is free of selection effects. To do this, we initiated a project to observe a complete sample of radio galaxies selected from the B2 Catalogue of Radio Sources and the Third Cambridge Revised Catalogue (3CR), with no selection constraint on the nuclear properties. We named this sample “the Bologna Complete Sample” (BCS). Methods: We present new VLBI observations at 5 and 1.6 GHz for 33 sources drawn from a sample not biased toward orientation. By combining these data with those in the literature, information on the parsec-scale morphology is available for a total of 76 of 94 radio sources with a range in radio power and kiloparsec-scale morphologies. Results: The fraction of two-sided sources at milliarcsecond resolution is high (30%), compared to the fraction found in VLBI surveys selected at centimeter wavelengths, as expected from the predictions of unified models. The parsec-scale jets are generally found to be straight and to line up with the kiloparsec-scale jets. A few peculiar sources are discussed in detail.

  3. Constrained statistical inference: sample-size tables for ANOVA and regression

    Leonard eVanbrabant

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an order-constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).
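
    A toy Monte Carlo in the spirit of the simulations described above (not the article's design): estimate the power of the order-constrained hypothesis mu1 > mu2 > mu3 for several group sizes, producing a miniature sample-size table. The effect sizes and per-comparison criterion are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def power_ordered(n, effects=(0.5, 0.2, 0.0), sims=2000):
        """Monte Carlo power for the order-constrained H: mu1 > mu2 > mu3."""
        crit = 1.645                                # one-sided 5% per comparison
        se = np.sqrt(2.0 / n)                       # SE of a difference of two means
        hits = 0
        for _ in range(sims):
            m = [rng.standard_normal(n).mean() + e for e in effects]
            hits += (m[0] - m[1]) / se > crit and (m[1] - m[2]) / se > crit
        return hits / sims

    # Scan group sizes to build a miniature "sample-size table"
    for n in (20, 40, 80, 160):
        print(f"n per group = {n:4d}: power = {power_ordered(n):.2f}")
    ```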

  4. Statistical methods for detecting differentially abundant features in clinical metagenomic samples.

    James Robert White

    2009-04-01

    Numerous studies are currently underway to characterize the microbial communities inhabiting our world. These studies aim to dramatically expand our understanding of the microbial biosphere and, more importantly, hope to reveal the secrets of the complex symbiotic relationship between us and our commensal bacterial microflora. An important prerequisite for such discoveries is computational tools that are able to rapidly and accurately compare large datasets generated from complex bacterial communities to identify features that distinguish them. We present a statistical method for comparing clinical metagenomic samples from two treatment populations on the basis of count data (e.g., as obtained through sequencing) to detect differentially abundant features. Our method, Metastats, employs the false discovery rate to improve specificity in high-complexity environments, and separately handles sparsely-sampled features using Fisher's exact test. Under a variety of simulations, we show that Metastats performs well compared to previously used methods, and significantly outperforms other methods for features with sparse counts. We demonstrate the utility of our method on several datasets including a 16S rRNA survey of obese and lean human gut microbiomes, COG functional profiles of infant and mature gut microbiomes, and bacterial and viral metabolic subsystem data inferred from random sequencing of 85 metagenomes. The application of our method to the obesity dataset reveals differences between obese and lean subjects not reported in the original study. For the COG and subsystem datasets, we provide the first statistically rigorous assessment of the differences between these populations. The methods described in this paper are the first to address clinical metagenomic datasets comprising samples from multiple subjects. Our methods are robust across datasets of varied complexity and sampling level. While designed for metagenomic applications, our software
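
    In the spirit of the feature-wise testing described above, the sketch below (Python with SciPy; the feature counts are fabricated, and this is a simplified illustration rather than the Metastats implementation) applies Fisher's exact test to sparse features and a Benjamini-Hochberg false-discovery-rate adjustment across features.

      import numpy as np
      from scipy.stats import fisher_exact

      def bh_fdr(pvals):
          """Benjamini-Hochberg adjusted p-values."""
          p = np.asarray(pvals, float)
          order = np.argsort(p)
          ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
          q = np.minimum.accumulate(ranked[::-1])[::-1]
          out = np.empty_like(q)
          out[order] = np.minimum(q, 1.0)
          return out

      # per feature: (count, all other counts) in each of the two populations
      features = {"OTU_1": ((5, 2000), (40, 1800)),
                  "OTU_2": ((0, 2005), (3, 1837))}
      pvals = [fisher_exact([[a, b], [c, d]])[1]
               for (a, b), (c, d) in features.values()]
      print(dict(zip(features, bh_fdr(pvals))))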

  5. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

    GREER DA; THIEN MG

    2012-01-12

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), has previously presented the results of mixing performance in two different sizes of small scale DSTs to support scale-up estimates of full scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high level waste to the WTP. WRPS has performed small scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results, which leads to the conclusion that the two scales of small DST are behaving similarly and that full scale performance is predictable, will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission will be accomplished successfully. This paper will focus on the analytical data collected from mixing, sampling, and batch transfer testing from the small scale mixing demonstration tanks and how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data will be provided. The paper will then discuss the processing and manipulation of the data which is necessary to begin evaluating sampling and batch transfer performance. This discussion will also include the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests. The

  6. Statistical assessment of fish behavior from split-beam hydro-acoustic sampling

    McKinstry, Craig A.; Simmons, Mary Ann; Simmons, Carver S.; Johnson, Robert L.

    2005-01-01

    Statistical methods are presented for using echo-traces from split-beam hydro-acoustic sampling to assess fish behavior in response to a stimulus. The data presented are from a study designed to assess the response of free-ranging, lake-resident fish, primarily kokanee (Oncorhynchus nerka) and rainbow trout (Oncorhynchus mykiss), to high-intensity strobe lights, and was conducted at Grand Coulee Dam on the Columbia River in Northern Washington State. The lights were deployed immediately upstream from the turbine intakes, in a region exposed to daily alternating periods of high and low flows. The study design included five down-looking split-beam transducers positioned in a line at incremental distances upstream from the strobe lights, and treatments applied in randomized pseudo-replicate blocks. Statistical methods included the use of odds-ratios from fitted loglinear models. Fish-track velocity vectors were modeled using circular probability distributions. Both analyses are depicted graphically. Study results suggest large increases of fish activity in the presence of the strobe lights, most notably at night and during periods of low flow. The lights also induced notable bimodality in the angular distributions of the fish track velocity vectors. Statistical summaries are presented along with interpretations on fish behavior.
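
    Angular data such as the fish-track velocity directions above are naturally handled with circular distributions. A minimal sketch (Python with SciPy; the headings are simulated, not study data) fits a von Mises distribution; the bimodal light-induced distributions reported here would call for a mixture of two such components.

      import numpy as np
      from scipy.stats import vonmises

      # hypothetical fish-track headings in radians
      angles = vonmises.rvs(3.0, loc=0.8, size=200, random_state=1)

      # fscale=1 keeps the standard circular parameterisation:
      # loc is the mean direction, kappa the concentration
      kappa, loc, scale = vonmises.fit(angles, fscale=1)
      print(f"kappa = {kappa:.2f}, mean direction = {loc:.2f} rad")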

  7. Apparatus for observing a sample with a particle beam and an optical microscope

    2010-01-01

    An apparatus for observing a sample (1) with a TEM column and an optical high resolution scanning microscope (10). The sample position when observing the sample with the TEM column differs from the sample position when observing the sample with the optical microscope in that in the latter case the

  8. Sparse Power-Law Network Model for Reliable Statistical Predictions Based on Sampled Data

    Alexander P. Kartun-Giles

    2018-04-01

    A projective network model is a model that enables predictions to be made based on a subsample of the network data, with the predictions remaining unchanged if a larger sample is taken into consideration. An exchangeable model is a model that does not depend on the order in which nodes are sampled. Despite a large variety of non-equilibrium (growing) and equilibrium (static) sparse complex network models that are widely used in network science, how to reconcile sparseness (constant average degree) with the desired statistical properties of projectivity and exchangeability is currently an outstanding scientific problem. Here we propose a network process with hidden variables which is projective and can generate sparse power-law networks. Despite the model not being exchangeable, it can be closely related to exchangeable uncorrelated networks, as indicated by its information-theoretic characterization and its network entropy. The use of the proposed network process as a null model is here tested on real data, indicating that the model offers a promising avenue for statistical network modelling.

  9. Statistical behavior of foreshock Langmuir waves observed by the Cluster wideband data plasma wave receiver

    K. Sigsbee

    2004-07-01

    We present the statistics of Langmuir wave amplitudes in the Earth's foreshock using Cluster Wideband Data (WBD) Plasma Wave Receiver electric field waveforms from spacecraft 2, 3 and 4 on 26 March 2002. The largest amplitude Langmuir waves were observed by Cluster near the boundary between the foreshock and solar wind, in agreement with earlier studies. The characteristics of the waves were similar for all three spacecraft, suggesting that variations in foreshock structure must occur on scales greater than the 50-100 km spacecraft separations. The electric field amplitude probability distributions constructed using waveforms from the Cluster WBD Plasma Wave Receiver generally followed the log-normal statistics predicted by stochastic growth theory for the event studied. Comparison with WBD receiver data from 17 February 2002, when spacecraft 4 was set in a special manual gain mode, suggests non-optimal auto-ranging of the instrument may have had some influence on the statistics.
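
    The log-normal prediction of stochastic growth theory is straightforward to check on a sample of wave amplitudes: take logarithms and apply a normality test. A minimal sketch (Python with SciPy; the amplitudes are simulated, not Cluster data):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      amps = rng.lognormal(mean=-1.0, sigma=0.8, size=2000)  # hypothetical mV/m

      # if amplitudes are log-normal, their logs should pass a normality test
      stat, p = stats.normaltest(np.log10(amps))
      print(f"normality of log-amplitudes: stat = {stat:.2f}, p = {p:.3f}")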

  11. NEON terrestrial field observations: designing continental scale, standardized sampling

    R. H. Kao; C.M. Gibson; R. E. Gallery; C. L. Meier; D. T. Barnett; K. M. Docherty; K. K. Blevins; P. D. Travers; E. Azuaje; Y. P. Springer; K. M. Thibault; V. J. McKenzie; M. Keller; L. F. Alves; E. L. S. Hinckley; J. Parnell; D. Schimel

    2012-01-01

    Rapid changes in climate and land use and the resulting shifts in species distributions and ecosystem functions have motivated the development of the National Ecological Observatory Network (NEON). Integrating across spatial scales from ground sampling to remote sensing, NEON will provide data for users to address ecological responses to changes in climate, land use,...

  12. [Statistical study of the incidence of agenesis in a sample of 1529 subjects].

    Lo Muzio, L; Mignogna, M D; Bucci, P; Sorrentino, F

    1989-09-01

    Following a short review of the main aetiopathogenetic theories on dental agenesis, an original statistical study of this pathology is reported. 1529 orthopantomographs of juveniles aged between 7 and 14 years were examined. 79 cases of hypodontia were observed (5.2%), 32 in males (4.05%) and 47 in females (6.78%). The most frequently affected tooth was the second premolar, with an incidence of 58.9%, followed by the lateral incisor with an incidence of 26.38%. This is in agreement with the international literature.

  13. Determination of Sr-90 in milk samples from the study of statistical results

    Otero-Pazos Alberto

    2017-01-01

    The determination of 90Sr in milk samples is a main objective of radiation monitoring laboratories because of its environmental importance. In this paper the activity concentration of 39 milk samples was obtained through radiochemical separation based on selective retention of Sr in a cationic resin (Dowex 50WX8, 50-100 mesh) and subsequent determination by a low-level gas proportional counter. The results were checked by measuring the Sr concentration with the flame atomic absorption spectroscopy technique, to finally obtain the mass of 90Sr. The data obtained were then given a statistical treatment using linear regressions. A reliable estimate of the mass of 90Sr was obtained based, first, on the gravimetric technique and, second, on the counts per minute of the third measurement at 90Sr-90Y equilibrium, without having to perform the full analysis. These estimates have been verified with 19 milk samples, obtaining overlapping results. The novelty of the manuscript is the possibility of determining the concentration of 90Sr in milk samples without the need to perform the third measurement at equilibrium.

  14. Statistical issues in reporting quality data: small samples and casemix variation.

    Zaslavsky, A M

    2001-12-01

    To present two key statistical issues that arise in analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (interunit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
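
    The 'shrinkage' estimation mentioned above can be sketched as a simple empirical-Bayes weighting: each unit's observed mean is pulled toward the overall mean, and units with smaller samples are pulled harder. The variance components and unit data below are hypothetical.

      import numpy as np

      def shrink_unit_means(means, ns, sigma2_within, tau2_between):
          """Shrinkage estimates of unit-level quality scores."""
          means, ns = np.asarray(means, float), np.asarray(ns, float)
          sampling_var = sigma2_within / ns
          weight = tau2_between / (tau2_between + sampling_var)  # = reliability
          grand = np.average(means, weights=ns)
          return grand + weight * (means - grand), weight

      est, rel = shrink_unit_means([0.62, 0.90, 0.78], [12, 400, 60],
                                   sigma2_within=0.04, tau2_between=0.002)
      print(est, rel)

    The weight applied to each unit equals the reliability of its estimate, which connects directly to the reliability thresholds quoted above.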

  15. TRAN-STAT, Issue No. 3, January 1978. Topics discussed: some statistical aspects of compositing field samples

    Gilbert, R.O.

    1978-01-01

    Some statistical aspects of compositing field samples of soils for determining the content of Pu are discussed. Some of the potential problems involved in pooling samples are reviewed. This is followed by more detailed discussions and examples of compositing designs, adequacy of mixing, statistical models and their role in compositing, and related topics

  16. Generalizability of causal inference in observational studies under retrospective convenience sampling.

    Hu, Zonghui; Qin, Jing

    2018-05-20

    Many observational studies adopt what we call retrospective convenience sampling (RCS). With the sample size in each arm prespecified, RCS randomly selects subjects from the treatment-inclined subpopulation into the treatment arm and those from the control-inclined into the control arm. Samples in each arm are representative of the respective subpopulation, but the proportion of the two subpopulations is usually not preserved in the sample data. We show in this work that, under RCS, existing causal effect estimators actually estimate the treatment effect over the sample population instead of the underlying study population. We investigate how to correct existing methods for consistent estimation of the treatment effect over the underlying population. Although RCS is adopted in medical studies for ethical and cost-effective purposes, it also has a substantial advantage for statistical inference: when the tendency to receive treatment is low in a study population, treatment effect estimators under RCS, with proper correction, are more efficient than their counterparts under random sampling. These properties are investigated both theoretically and through numerical demonstration.

  17. THE SLOAN DIGITAL SKY SURVEY QUASAR LENS SEARCH. IV. STATISTICAL LENS SAMPLE FROM THE FIFTH DATA RELEASE

    Inada, Naohisa; Oguri, Masamune; Shin, Min-Su; Kayo, Issha; Fukugita, Masataka; Strauss, Michael A.; Gott, J. Richard; Hennawi, Joseph F.; Morokuma, Tomoki; Becker, Robert H.; Gregg, Michael D.; White, Richard L.; Kochanek, Christopher S.; Chiu, Kuenley; Johnston, David E.; Clocchiatti, Alejandro; Richards, Gordon T.; Schneider, Donald P.; Frieman, Joshua A.

    2010-01-01

    We present the second report of our systematic search for strongly lensed quasars from the data of the Sloan Digital Sky Survey (SDSS). From extensive follow-up observations of 136 candidate objects, we find 36 lenses in the full sample of 77,429 spectroscopically confirmed quasars in the SDSS Data Release 5. We then define a complete sample of 19 lenses, including 11 from our previous search in the SDSS Data Release 3, from the sample of 36,287 quasars, and derive Ω_Λ = 0.84 +0.06/-0.08 (stat.) +0.09/-0.07 (syst.) assuming a flat universe, which is in good agreement with other cosmological observations. We also report the discoveries of seven binary quasars with separations ranging from 1.1'' to 16.6'', which are identified in the course of our lens survey. This study concludes the construction of our statistical lens sample in the full SDSS-I data set.

  18. The Statistics of Emission and Detection of Neutrons and Photons from Fissile Samples for Safeguard Applications

    Enqvist, Andreas

    2008-03-01

    One particular purpose of nuclear safeguards, in addition to accounting for known materials, is the detection, identification and quantification of unknown materials, to prevent accidental and clandestine transport and use of nuclear materials. This can be achieved in a non-destructive way through the various physical and statistical properties of particle emission and detection from such materials. This thesis addresses some fundamental aspects of nuclear materials and the way they can be detected and quantified by such methods. Factorial moments or multiplicities have long been used within the safeguards area. These are low-order moments of the underlying number distributions of emission and detection. One objective of the present work was to determine the full probability distribution and its dependence on the sample mass and the detection process. Derivation and analysis of the full probability distribution and its dependence on the above factors constitutes the first part of the thesis. Another possibility of identifying unknown samples lies in the information in the 'fingerprints' (pulse shape distribution) left by a detected neutron or photon. A study of the statistical properties of the interaction of the incoming radiation (neutrons and photons) with the detectors constitutes the second part of the thesis. The interaction between fast neutrons and organic scintillation detectors is derived, and compared to Monte Carlo simulations. An experimental approach is also addressed in which cross correlation measurements were made using liquid scintillation detectors. First the dependence of the pulse height distribution on the energy and collision number of an incoming neutron was derived analytically and compared to numerical simulations. Then an algorithm was elaborated which can discriminate neutron pulses from photon pulses. The resulting cross correlation graphs are analyzed, and it is discussed whether they can be used in applications to distinguish possible sample

  20. Statistical model for degraded DNA samples and adjusted probabilities for allelic drop-out

    Tvedebrink, Torben; Eriksen, Poul Svante; Mogensen, Helle Smidt

    2012-01-01

    DNA samples found at a scene of crime or obtained from the debris of a mass disaster accident are often subject to degradation. When using the STR DNA technology, the DNA profile is observed via a so-called electropherogram (EPG), where the alleles are identified as signal peaks above a certain... data from degraded DNA, where cases with varying amounts of DNA and levels of degradation are investigated.

  2. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    Edjabou, Vincent Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona

    2015-01-01

    Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10-50 waste fractions, organised according to a three-level (tiered) approach facilitating comparison of the waste data between individual sub-areas with different fractionation (waste...

  3. Autonomous spatially adaptive sampling in experiments based on curvature, statistical error and sample spacing with applications in LDA measurements

    Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.

    2015-06-01

    Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest, insufficient sample density at important features, or both. A new adaptive sampling technique is presented that directs sample collection in proportion to local information content, adequately capturing short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined uncertainty level overall, while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.

  4. Statistic analyses of the color experience according to the age of the observer.

    Hunjet, Anica; Parac-Osterman, Durdica; Vucaj, Edita

    2013-04-01

    The psychological experience of color is a genuine state of communication between the environment and the observer, and it depends on the light source, the viewing angle, and in particular on the observer and his or her health condition. Hering's theory, the theory of opponent processes, supposes that the cones situated in the retina of the eye are not sensitive to three separate chromatic domains (red, green and purple-blue), but produce signals based on the principle of opposed pairs of colors. Support for this theory comes from the fact that certain disorders of color vision, which include blindness to certain colors, cause blindness to pairs of opponent colors. This paper presents the experience of blue and yellow tones according to the age of the observer. To test for statistically significant differences in the color experience according to the color of the background, we use the following statistical tests: the Mann-Whitney U test, Kruskal-Wallis ANOVA and the median test. The differences were shown to be statistically significant for older observers (older than 35 years).
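
    The nonparametric tests named above are all available in SciPy. A minimal sketch with simulated (not study) color-matching scores for three age groups:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      young = rng.normal(2.0, 1.0, 40)    # hypothetical scores
      middle = rng.normal(2.3, 1.0, 40)
      older = rng.normal(3.1, 1.2, 40)

      print(stats.mannwhitneyu(young, older, alternative="two-sided"))
      print(stats.kruskal(young, middle, older))
      stat, p, grand_median, table = stats.median_test(young, middle, older)
      print(f"median test: stat = {stat:.2f}, p = {p:.4f}")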

  5. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    Edjabou, Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn; Petersen, Claus; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2015-01-01

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered) approach facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, e.g. detailed, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single

  7. Finite-sample instrumental variables inference using an asymptotically pivotal statistic

    Bekker, P.; Kleibergen, F.R.

    2001-01-01

    The paper considers the K-statistic, Kleibergen’s (2000) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Compared to the AR-statistic, this K-statistic shows improved asymptotic efficiency in terms of degrees of freedom in overidentified models and yet it shares,

  9. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km2 in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
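
    Inter-amount times invert the usual sampling: instead of recording amounts over fixed time intervals, one records the time needed to accumulate each successive fixed amount. A sketch of how they can be computed from a regularly sampled discharge series (Python; the series, sampling interval and volume increment are hypothetical):

      import numpy as np

      def inter_amount_times(q, dt, dv):
          """Times to accumulate successive amounts dv from discharge q
          sampled at interval dt (crossings resolved to the sampling step)."""
          v = np.cumsum(q) * dt                      # cumulative volume
          levels = np.arange(dv, v[-1], dv)          # successive thresholds
          t_cross = np.searchsorted(v, levels) * dt  # threshold crossing times
          return np.diff(np.concatenate(([0.0], t_cross)))

      rng = np.random.default_rng(8)
      q = 0.5 + rng.gamma(0.3, 2.0, size=5000)       # flashy flow over a base flow
      iat = inter_amount_times(q, dt=900.0, dv=5e4)  # 15-min data, fixed volume step
      print(iat.mean(), np.percentile(iat, [10, 50, 90]))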

  10. Time-of-Flight Measurements as a Possible Method to Observe Anyonic Statistics

    Umucalılar, R. O.; Macaluso, E.; Comparin, T.; Carusotto, I.

    2018-06-01

    We propose a standard time-of-flight experiment as a method for observing the anyonic statistics of quasiholes in a fractional quantum Hall state of ultracold atoms. The quasihole states can be stably prepared by pinning the quasiholes with localized potentials, and a measurement of the mean square radius of the freely expanding cloud, which is related to the average total angular momentum of the initial state, offers direct signatures of the statistical phase. Our proposed method is validated by Monte Carlo calculations for ν = 1/2 and 1/3 fractional quantum Hall liquids containing a realistic number of particles. Extensions to quantum Hall liquids of light and to non-Abelian anyons are briefly discussed.

  11. Seasonal rationalization of river water quality sampling locations: a comparative study of the modified Sanders and multivariate statistical approaches.

    Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar

    2016-02-01

    ...approach outperforms FA/PCA when limited water quality and extensive watershed information is available. The available water quality dataset is limited, and the FA/PCA-based approach fails to identify monitoring locations with higher variation, as these multivariate statistical approaches are data-driven. The priority/hierarchy and number of sampling sites designed by the modified Sanders approach are well justified by the land use practices and observed river basin characteristics of the study area.

  12. Comparison of long-term Moscow and Danish NLC observations: statistical results

    P. Dalin

    2006-11-01

    Noctilucent clouds (NLC) are the highest clouds in the Earth's atmosphere, observed close to the mesopause at 80–90 km altitudes. Systematic NLC observations conducted in Moscow for the period of 1962–2005 and in Denmark for 1983–2005 are compared, and statistical results both for seasonally summarized NLC parameters and for individual NLC appearances are described. Careful attention is paid to the weather conditions during each season of observations. This turns out to be a very important factor both for the NLC case study and for long-term data set analysis. Time series of seasonal values show moderate similarity (taking into account the weather conditions), but, at the same time, the comparison of individual cases of NLC occurrence reveals substantial differences. There are positive trends in the Moscow and Danish normalized NLC brightness as well as a nearly zero trend in the Moscow normalized NLC occurrence frequency, but these long-term changes are not statistically significant. The quasi-ten-year cycle in NLC parameters is about 1 year shorter than the solar cycle during the same period. The characteristic scale of NLC fields is estimated for the first time and is found to be less than 800 km.

  13. Early pack-off diagnosis in drilling using an adaptive observer and statistical change detection

    Willersrud, Anders; Imsland, Lars; Blanke, Mogens

    2015-01-01

    ...in the well. A model-based adaptive observer is used to estimate these friction parameters as well as flow rates. Detecting changes to these estimates can then be used for pack-off diagnosis, which, due to measurement noise, is done using statistical change detection. Isolation of incident type and location... is done using a multivariate generalized likelihood ratio test, determining the change direction of the estimated mean values. The method is tested on simulated data from the commercial high-fidelity multi-phase simulator OLGA, where three different pack-offs at different locations and with different...

  14. A statistical method to get surface level air-temperature from satellite observations of precipitable water

    Pankajakshan, T.; Shikauchi, A; Sugimori, Y.; Kubota, M.

    ...Ta and precipitable water. The rms errors of the SSMI-Ta in this case are found to be reduced to 1.0°C. Satellite-derived surface-level meteorological parameters are considered to be a better alternative to sparse ship observations...

  15. A statistic to estimate the variance of the histogram-based mutual information estimator based on dependent pairs of observations

    Moddemeijer, R

    In the case of two signals with independent pairs of observations (x(n), y(n)), a statistic to estimate the variance of the histogram-based mutual information estimator has been derived earlier. We present such a statistic for dependent pairs. To derive this statistic it is necessary to avail of a

  16. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  17. Further observations on comparison of immunization coverage by lot quality assurance sampling and 30 cluster sampling.

    Singh, J; Jain, D C; Sharma, R S; Verghese, T

    1996-06-01

    Lot Quality Assurance Sampling (LQAS) and standard EPI methodology (30-cluster sampling) were used to evaluate immunization coverage in a Primary Health Center (PHC) where coverage levels were reported to be more than 85%. Of 27 sub-centers (lots) evaluated by LQAS, only 2 were accepted for child coverage, whereas none was accepted for tetanus toxoid (TT) coverage in mothers. LQAS data were combined to obtain an estimate of coverage in the entire population; 41% (95% CI 36-46) of infants were immunized appropriately for their ages, while 42% (95% CI 37-47) of their mothers had received a second/booster dose of TT. TT coverage in 149 contemporary mothers sampled in the EPI survey was also 42% (95% CI 31-52). Although results by the two sampling methods were consistent with each other, a big gap was evident between reported coverage (in children as well as mothers) and survey results. LQAS was found to be operationally feasible, but it cost 40% more and required 2.5 times more time than the EPI survey. LQAS, therefore, is not a good substitute for the current EPI methodology to evaluate immunization coverage in a large administrative area. However, LQAS has potential as a method to monitor health programs on a routine basis in small population sub-units, especially in areas with high and heterogeneously distributed immunization coverage.
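
    The LQAS decision rule itself is a small binomial calculation: a lot is accepted if at most d unvaccinated children are found among n sampled. The sketch below (Python with SciPy; n = 19 and d = 3 are illustrative values, not necessarily this study's design) traces the probability of acceptance across true coverage levels.

      from scipy.stats import binom

      def lqas_accept_prob(n, d, p):
          """P(lot accepted) when true coverage is p: at most d of n
          sampled children are unvaccinated."""
          return binom.cdf(d, n, 1.0 - p)

      for coverage in (0.5, 0.7, 0.85, 0.95):
          print(coverage, round(lqas_accept_prob(19, 3, coverage), 3))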

  18. Statistical survey of day-side magnetospheric current flow using Cluster observations: magnetopause

    E. Liebert

    2017-05-01

    We present a statistical survey of current structures observed by the Cluster spacecraft at high-latitude day-side magnetopause encounters in the close vicinity of the polar cusps. Making use of the curlometer technique and the fluxgate magnetometer data, we calculate the 3-D current densities and investigate the magnetopause current direction, location, and magnitude during varying solar wind conditions. We find that the orientation of the day-side current structures is in accordance with existing magnetopause current models. Based on the ambient plasma properties, we distinguish five different transition regions at the magnetopause surface and observe distinctive current properties for each region. Additionally, we find that the location of currents varies with respect to the onset of the changes in the plasma environment during magnetopause crossings.

  19. Observer variability in the assessment of type and dysplasia of colorectal adenomas, analyzed using kappa statistics

    Jensen, P; Krogsgaard, M R; Christiansen, J

    1995-01-01

    ...of adenomas were assessed twice by three experienced pathologists, with an interval of two months. Results were analyzed using kappa statistics. RESULTS: For agreement between first and second assessment (both type and grade of dysplasia), kappa values for the three specialists were 0.5345, 0.9022, and 0... The kappa values for Observer A vs. B and Observer C vs. B were 0.3480 and 0.3770, respectively (both type and dysplasia). Values for type were better than for dysplasia, but agreement was only fair to moderate. CONCLUSION: The intraobserver agreement was moderate to almost perfect, but the interobserver agreement was only fair to moderate. A simpler classification system or a centralization of assessments would probably increase kappa values.
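
    Kappa statistics of the kind used above can be computed directly from an inter-observer agreement table. A minimal sketch (Python; the 3x3 table is fabricated, not the study's data):

      import numpy as np

      def cohens_kappa(table):
          """Cohen's kappa from a square agreement table (rows: observer A,
          columns: observer B)."""
          t = np.asarray(table, float)
          n = t.sum()
          po = np.trace(t) / n                     # observed agreement
          pe = (t.sum(0) * t.sum(1)).sum() / n**2  # agreement expected by chance
          return (po - pe) / (1.0 - pe)

      table = [[22, 5, 1],
               [6, 18, 4],
               [2, 3, 14]]   # hypothetical adenoma type assignments
      print(round(cohens_kappa(table), 4))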

  20. The enhanced greenhouse signal versus natural variations in observed climate time series: a statistical approach

    Schoenwiese, C D [J.W. Goethe Univ., Frankfurt (Germany). Inst. for Meteorology and Geophysics

    1996-12-31

    It is a well-known fact that human activities lead to an atmospheric concentration increase of some IR-active trace gases (greenhouse gases, GHG) and that this influence enhances the 'greenhouse effect'. However, there are major quantitative and regional uncertainties in the related climate model projections, and the observational data reflect the whole complex of both anthropogenic and natural forcing of the climate system. This contribution aims at the separation of the anthropogenic enhanced greenhouse signal in observed global surface air temperature data from other forcings, using statistical methods such as multiple (multiforced) regressions and neural networks. The competitive natural forcings considered are volcanic and solar activity, and in addition the ENSO (El Niño/Southern Oscillation) mechanism. This analysis will be extended also to the NAO (North Atlantic Oscillation) and anthropogenic sulfate formation in the troposphere

  1. Statistical Analysis of Langmuir Waves Associated with Type III Radio Bursts: I. Wind Observations

    Vidojević S.

    2011-12-01

    Interplanetary electron beams are unstable in the solar wind, and they generate Langmuir waves at the local plasma frequency or its harmonic. Radio observations of the waves in the range 4-256 kHz, observed in 1994-2010 with the WAVES experiment onboard the WIND spacecraft, are statistically analyzed. A subset of 36 events with Langmuir waves and type III bursts occurring at the same time was selected. After removal of the background, the remaining power spectral density is modeled by the Pearson system of probability distributions (types I, IV and VI). The Stochastic Growth Theory (SGT) predicts a log-normal distribution for the power spectral density of the Langmuir waves. Our results indicate that SGT possibly requires further verification.

  3. Statistical properties of a utility measure of observer performance compared to area under the ROC curve

    Abbey, Craig K.; Samuelson, Frank W.; Gallas, Brandon D.; Boone, John M.; Niklason, Loren T.

    2013-03-01

    The receiver operating characteristic (ROC) curve has become a common tool for evaluating diagnostic imaging technologies, and the primary endpoint of such evaluations is the area under the curve (AUC), which integrates sensitivity over the entire false-positive range. An alternative figure of merit for ROC studies is expected utility (EU), which focuses on the relevant region of the ROC curve as defined by disease prevalence and the relative utility of the task. However, if this measure is to be used, it must also have desirable statistical properties to keep the burden of observer performance studies as low as possible. Here, we evaluate effect size and variability for EU and AUC. We use two observer performance studies recently submitted to the FDA to compare the EU and AUC endpoints. The studies were conducted using the multi-reader multi-case methodology in which all readers score all cases in all modalities. ROC curves from the study were used to generate both the AUC and EU values for each reader and modality. The EU measure was computed assuming an iso-utility slope of 1.03. We find mean effect sizes, the reader-averaged difference between modalities, to be roughly 2.0 times as big for EU as for AUC. The standard deviation across readers is only roughly 1.4 times as large, so the ratio of effect size to variability favors EU, suggesting better statistical properties for the EU endpoint. In a simple power analysis of paired comparison across readers, the utility measure required 36% fewer readers on average to achieve 80% statistical power compared to AUC.
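
    Both figures of merit can be computed from an empirical ROC curve. The sketch below (Python; the ROC curve is synthetic, and the EU formula shown is one simple utility-at-best-operating-point reading with iso-utility slope beta, not necessarily the exact definition used in the study) contrasts the two endpoints.

      import numpy as np

      def auc_trapezoid(fpf, tpf):
          """Area under an empirical ROC curve by the trapezoid rule."""
          f, t = np.asarray(fpf), np.asarray(tpf)
          order = np.argsort(f)
          f, t = f[order], t[order]
          return np.sum((f[1:] - f[:-1]) * (t[1:] + t[:-1]) / 2.0)

      def expected_utility(fpf, tpf, beta=1.03):
          """Utility at the best operating point for iso-utility slope beta."""
          return np.max(np.asarray(tpf) - beta * np.asarray(fpf))

      fpf = np.linspace(0.0, 1.0, 101)
      tpf = fpf ** 0.35                 # hypothetical smooth ROC curve
      print(auc_trapezoid(fpf, tpf), expected_utility(fpf, tpf))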

  4. Characteristics of electrostatic solitary waves observed in the plasma sheet boundary: Statistical analyses

    H. Kojima

    1999-01-01

    We present the characteristics of the Electrostatic Solitary Waves (ESW) observed by the Geotail spacecraft in the plasma sheet boundary layer, based on statistical analyses. We also discuss the results with reference to a model of ESW generation due to electron beams, which is proposed by computer simulations. In this generation model, the nonlinear evolution of Langmuir waves excited by electron bump-on-tail instabilities leads to the formation of isolated electrostatic potential structures corresponding to "electron holes" in phase space. The statistical analyses of the Geotail data, which we conducted under the assumption that the polarity of ESW potentials is positive, show that most ESW propagate in the same direction as the electron beams observed simultaneously by the plasma instrument. Further, we also find that the ESW potential energy is much smaller than the background electron thermal energy, and that the ESW potential widths are typically shorter than 60 local electron Debye lengths if we assume that the ESW potentials travel at the same velocity as the electron beams. These results are very consistent with the ESW generation model in which the nonlinear evolution of the electron bump-on-tail instability leads to the formation of electron holes in phase space.

  5. Mathematical background and attitudes toward statistics in a sample of Spanish college students.

    Carmona, José; Martínez, Rafael J; Sánchez, Manuel

    2005-08-01

    To examine the relation of mathematical background and initial attitudes toward statistics among Spanish college students in the social sciences, the Survey of Attitudes Toward Statistics was given to 827 students. Multivariate analyses tested the effects of two indicators of mathematical background (amount of exposure and achievement in previous courses) on the four subscales. The analysis suggested that grades in previous courses are more related to initial attitudes toward statistics than the number of mathematics courses taken. Mathematical background was related to students' affective responses to statistics but not to their valuing of statistics. Implications for possible research are discussed.

  6. Statistics of EMIC Rising Tones Observed by the Van Allen Probes

    Sigsbee, K. M.; Kletzing, C.; Smith, C. W.; Santolik, O.

    2017-12-01

    We will present results from an ongoing statistical study of electromagnetic ion cyclotron (EMIC) wave rising tones observed by the Van Allen Probes. Using data from the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) fluxgate magnetometer, we have identified orbits by both Van Allen Probes with EMIC wave events from the start of the mission in fall 2012 through fall 2016. Orbits with EMIC wave events were further examined for evidence of rising tones. Most EMIC wave rising tones were found during H+ band EMIC wave events. In Fourier time-frequency power spectrograms of the fluxgate magnetometer data, H+ band rising tones generally took the form of triggered emission type events, where the discrete rising tone structures rapidly rise in frequency out of the main band of observed H+ EMIC waves. A smaller percentage of EMIC wave rising tone events were found in the He+ band, where rising tones may appear as discrete structures with a positive slope embedded within the main band of observed He+ EMIC waves, similar in appearance to whistler-mode chorus elements. Understanding the occurrence rate and properties of rising tone EMIC waves will provide observational context for theoretical studies indicating that EMIC waves exhibiting non-linear behavior, such as rising tones, may be more effective at scattering radiation belt electrons than ordinary EMIC waves.

  7. Statistical retrieval of thin liquid cloud microphysical properties using ground-based infrared and microwave observations

    Marke, Tobias; Ebell, Kerstin; Löhnert, Ulrich; Turner, David D.

    2016-12-01

    In this article, liquid water cloud microphysical properties are retrieved by a combination of microwave and infrared ground-based observations. Clouds containing liquid water occur frequently in most climate regimes and play a significant role in terms of interaction with radiation. Small perturbations in the amount of liquid water contained in the cloud can cause large variations in the radiative fluxes. This effect is enhanced for thin clouds (low liquid water path, LWP), making accurate knowledge of cloud properties crucial. Due to large relative errors in retrieving low LWP values from observations in the microwave domain, and a high sensitivity of infrared methods when the LWP is low, a synergistic retrieval based on a neural network approach is built to estimate both LWP and cloud effective radius (reff). These statistical retrievals can be applied without high computational demand but imply constraints like prior information on cloud phase and cloud layering. The neural network retrievals are able to retrieve LWP and reff for thin clouds with a mean relative error of 9% and 17%, respectively. This is demonstrated using synthetic observations of a microwave radiometer (MWR) and a spectrally highly resolved infrared interferometer. The accuracy and robustness of the synergistic retrievals is confirmed by a low bias in a radiative closure study for the downwelling shortwave flux, even for marginally invalid scenes. Also, broadband infrared radiance observations, in combination with the MWR, have the potential to retrieve LWP with a higher accuracy than a MWR-only retrieval.

  8. Can Low Frequency Measurements Be Good Enough? - A Statistical Assessment of Citizen Hydrology Streamflow Observations

    Davids, J. C.; Rutten, M.; Van De Giesen, N.

    2016-12-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and relatively accurate but expensive monitoring equipment at limited numbers of sites. Consequently, the spatial coverage of the data is limited and costs are high. Achieving adequate maintenance of sophisticated monitoring equipment often exceeds local technical and resource capacity, and permanently deployed monitoring equipment is susceptible to vandalism, theft, and other hazards. Rather than using expensive, vulnerable installations at a few points, SmartPhones4Water (S4W), a form of Citizen Hydrology, leverages widely available mobile technology to gather hydrologic data at many sites in a manner that is repeatable and scalable. However, there is currently a limited understanding of the impact of decreased observational frequency on the accuracy of key streamflow statistics like minimum flow, maximum flow, and runoff. As a first step towards evaluating the tradeoffs between traditional continuous monitoring approaches and emerging Citizen Hydrology methods, we randomly selected 50 active U.S. Geological Survey (USGS) streamflow gauges in California. We used historical 15 minute flow data from 01/01/2008 through 12/31/2014 to develop minimum flow, maximum flow, and runoff values (7 year total) for each gauge. In order to mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, along with their respective distributions, from 50 subsample iterations with four different subsampling intervals (i.e. daily, three day, weekly, and monthly). Based on our results we conclude that, depending on the types of questions being asked, and the watershed characteristics, Citizen Hydrology streamflow measurements can provide useful and accurate information. Depending on watershed characteristics, minimum flows were reasonably estimated with subsample intervals ranging from
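
    The effect of lower observation frequency can be sketched by repeatedly thinning a high-frequency record and looking at the spread of the resulting statistics. This simplified variant (Python; synthetic flow data, systematic thinning with a random phase rather than the study's exact bootstrap-with-replacement procedure) shows how the minimum, maximum and mean of a 15-minute record degrade under daily sampling.

      import numpy as np

      def subsample_stats(q, step, reps, rng):
          """Thin q to every `step`-th value at a random phase, reps times."""
          out = []
          for _ in range(reps):
              s = q[rng.integers(step)::step]
              out.append((s.min(), s.max(), s.mean()))  # mean ~ runoff proxy
          return np.array(out)

      rng = np.random.default_rng(12)
      q = 1.0 + rng.gamma(0.4, 3.0, size=96 * 365)  # hypothetical year of 15-min flows
      daily = subsample_stats(q, step=96, reps=50, rng=rng)
      print("true min/max/mean:", q.min(), q.max(), q.mean())
      print("daily-sampled minima span:", daily[:, 0].min(), daily[:, 0].max())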

  9. MILLIMETER OBSERVATIONS OF A SAMPLE OF HIGH-REDSHIFT OBSCURED QUASARS

    Martinez-Sansigre, Alejo; Karim, Alexander; Schinnerer, Eva

    2009-01-01

    We present observations at 1.2 mm with the Max-Planck Millimetre Bolometer Array (MAMBO-II) of a sample of z ≳ 2 radio-intermediate obscured quasars, as well as CO observations of two sources with the Plateau de Bure Interferometer. The typical rms noise achieved by the MAMBO observations is 0.55 mJy beam^-1, and five out of 21 sources (24%) are detected at a significance of ≥3σ. Stacking all sources leads to a statistical detection of ⟨S_1.2mm⟩ = 0.96 ± 0.11 mJy, and stacking only the non-detections also yields a statistical detection, with ⟨S_1.2mm⟩ = 0.51 ± 0.13 mJy. At the typical redshift of the sample, z = 2, 1 mJy corresponds to a far-infrared luminosity L_FIR ∼ 4 x 10^12 L_sun. If the far-infrared luminosity is powered entirely by star formation, and not by active-galactic-nucleus-heated dust, then the characteristic inferred star formation rate is ∼700 M_sun yr^-1. This far-infrared luminosity implies a dust mass of M_d ∼ 3 x 10^8 M_sun, which is expected to be distributed on ∼kpc scales. We estimate that such large dust masses on kpc scales can plausibly cause the obscuration of the quasars. Combining our observations at 1.2 mm with mid- and far-infrared data, and additional observations for two objects at 350 μm using SHARC-II, we present dust spectral energy distributions (SEDs) and derive a mean SED for our sample. This mean SED is not well fitted by clumpy torus models unless additional extinction and far-infrared re-emission due to cool dust are included. This additional extinction can be consistently achieved by the mass of cool dust responsible for the far-infrared emission, provided the bulk of the dust is within a radius ∼2-3 kpc. Comparison of our sample to other samples of z ∼ 2 quasars suggests that obscured quasars have, on average, higher far-infrared luminosities than unobscured quasars. There is a hint that the host galaxies of obscured quasars must have higher cool-dust masses and are therefore often

  10. Sampling

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  11. Application of binomial and multinomial probability statistics to the sampling design process of a global grain tracing and recall system

    Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...

  12. Exploring Tree Age & Diameter to Illustrate Sample Design & Inference in Observational Ecology

    Casady, Grant M.

    2015-01-01

    Undergraduate biology labs often explore the techniques of data collection but neglect the statistical framework necessary to express findings. Students can be confused about how to use their statistical knowledge to address specific biological questions. Growth in the area of observational ecology requires that students gain experience in…

  13. Estimation of Peaking Factor Uncertainty due to Manufacturing Tolerance using Statistical Sampling Method

    Lee, Kyung Hoon; Park, Ho Jin; Lee, Chung Chan; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The purpose of this paper is to study the effect on output parameters in the lattice physics calculation of input uncertainties such as manufacturing deviations from nominal values for material composition and geometric dimensions. In nuclear design and analysis, lattice physics calculations are usually employed to generate lattice parameters for the nodal core simulation and pin power reconstruction. These lattice parameters, which consist of homogenized few-group cross-sections, assembly discontinuity factors, and form-functions, can be affected by input uncertainties arising from three different sources: 1) multi-group cross-section uncertainties, 2) uncertainties associated with methods and modeling approximations utilized in lattice physics codes, and 3) fuel/assembly manufacturing uncertainties. In this paper, data provided by the light water reactor (LWR) uncertainty analysis in modeling (UAM) benchmark have been used as the manufacturing uncertainties. First, the effect of each input parameter was investigated through sensitivity calculations at the fuel assembly level. Then, the uncertainty in the prediction of the peaking factor due to the most sensitive input parameter was estimated using the statistical sampling method, often called the brute force method. For our analysis, the two-dimensional transport lattice code DeCART2D and its ENDF/B-VII.1 based 47-group library were used to perform the lattice physics calculation. Sensitivity calculations were performed in order to study the influence of manufacturing tolerances on the lattice parameters. The manufacturing tolerance with the largest influence on k-inf is the fuel density; the second most sensitive parameter is the outer clad diameter.
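
    The brute-force statistical sampling described above reduces to a simple loop: perturb the manufacturing inputs within their tolerances, re-evaluate the model, and take statistics over the outputs. The sketch below uses a hypothetical closed-form surrogate in place of DeCART2D, and the nominal values and tolerances are illustrative, not the UAM benchmark's.

```python
# Brute-force propagation of manufacturing tolerances to a peaking factor.
# peaking_factor() is a made-up surrogate model, not a lattice physics code.
import numpy as np

rng = np.random.default_rng(0)

def peaking_factor(fuel_density, clad_od):
    """Hypothetical smooth response to the two most sensitive inputs."""
    return 1.45 * (fuel_density / 10.4) ** 0.3 * (9.5e-3 / clad_od) ** 0.1

N = 1000
# Illustrative 1-sigma manufacturing tolerances about nominal values.
fuel_density = rng.normal(10.4, 10.4 * 0.005, N)      # g/cm^3
clad_od      = rng.normal(9.5e-3, 9.5e-3 * 0.002, N)  # m

fq = peaking_factor(fuel_density, clad_od)
print(f"peaking factor: mean={fq.mean():.4f}, rel. std={fq.std() / fq.mean():.3%}")
```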

  14. Statistical analysis of tiny SXR flares observed by SphinX

    Gryciuk, Magdalena; Siarkowski, Marek; Sylwester, Janusz; Kepa, Anna; Gburek, Szymon; Mrozek, Tomasz; Podgórski, Piotr

    2015-08-01

    The Solar Photometer in X-rays (SphinX) was designed to observe soft X-ray solar emission in the energy range between ~1 keV and 15 keV with a resolution better than 0.5 keV. The instrument operated from February until November 2009 aboard the CORONAS-Photon satellite, during a phase of exceptionally deep minimum of solar activity. Here we use SphinX data for the analysis of micro-flares and brightenings. Despite the very low activity, more than a thousand small X-ray events have been recognized by semi-automatic inspection of SphinX light curves. A catalogue of the temporal and physical characteristics of these events is presented and discussed, and results of the statistical analysis of the catalogue data are given.

  15. Statistical Study of the Properties of Magnetosheath Lion Roars using MMS observations

    Giagkiozis, S.; Wilson, L. B., III

    2017-12-01

    Intense whistler-mode waves of very short duration are frequently encountered in the magnetosheath. These emissions have been linked to mirror mode waves and the Earth's bow shock. They can efficiently transfer energy between different plasma populations. These electromagnetic waves are commonly referred to as Lion roars (LR), due to the sound generated when the signals are sonified. They are generally observed during dips of the magnetic field that are anti-correlated with increases of density. Using MMS data, we have identified more than 1750 individual LR burst intervals. Each emission was band-pass filtered and further split into >35,000 subintervals, for which the direction of propagation and the polarization were calculated. The analysis of subinterval properties provides a more accurate representation of their true nature than the more commonly used time- and frequency-averaged dynamic spectra analysis. The results of the statistical analysis of the wave properties will be presented.

  16. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Uncertainty evaluation with a statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. One known problem in uncertainty analysis with the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors such as negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using a lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors, and a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross section sampling was pursued with both the normal and lognormal distributions. The uncertainties caused by the covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
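
    The lognormal fix described in this record can be made concrete with moment matching: choose the lognormal parameters so that the sampled cross sections reproduce the nominal mean m and standard deviation s, which guarantees strictly positive samples. The parameter formulas below are the standard lognormal moment relations; the numerical values are illustrative, not from the study.

```python
# Moment-matched lognormal sampling of a cross section vs. naive normal sampling.
import numpy as np

rng = np.random.default_rng(1)

m, s = 2.0, 1.2   # nominal cross section (barns) and a deliberately large 1-sigma

# Standard lognormal moment matching: E[X] = m, Var[X] = s^2.
sigma2 = np.log(1.0 + (s / m) ** 2)
mu = np.log(m) - 0.5 * sigma2

xs_logn = rng.lognormal(mu, np.sqrt(sigma2), 100_000)
xs_norm = rng.normal(m, s, 100_000)

for name, xs in (("lognormal", xs_logn), ("normal", xs_norm)):
    print(f"{name:9s} mean={xs.mean():.3f} std={xs.std():.3f} "
          f"negative samples={(xs < 0).sum()}")
```

    With s/m = 0.6 the normal draw produces thousands of negative (unphysical) cross sections, while the lognormal draw produces none at the same mean and spread.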

  17. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2013-01-01

    Uncertainty evaluation with a statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. One known problem in uncertainty analysis with the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors such as negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using a lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors, and a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross section sampling was pursued with both the normal and lognormal distributions. The uncertainties caused by the covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.

  18. 7 CFR 52.38a - Definitions of terms applicable to statistical sampling.

    2010-01-01

    ... the number of defects (or defectives), which exceed the sample unit tolerance (“T”), in a series of... accumulation of defects (or defectives) allowed to exceed the sample unit tolerance (“T”) in any sample unit or consecutive group of sample units. (ii) CuSum value. The accumulated number of defects (or defectives) that...
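
    The CuSum bookkeeping defined in this excerpt can be sketched generically: defects in each sample unit beyond the tolerance T accumulate, and the lot is acceptable while the accumulated value stays within an allowed limit L. This is a generic CuSum accumulation for illustration only, not a restatement of the exact 7 CFR 52.38a procedure, parts of which are elided above; T and L below are invented values.

```python
# Generic CuSum acceptance sketch: excess defects over a per-unit tolerance T
# accumulate; the lot is rejected if the CuSum ever exceeds the allowed limit L.
def cusum_values(defects_per_unit, T):
    cusum, trace = 0, []
    for d in defects_per_unit:
        cusum = max(0, cusum + d - T)   # only the excess over T accumulates
        trace.append(cusum)
    return trace

units = [2, 5, 1, 7, 3]   # defects found in consecutive sample units
T, L = 3, 4               # per-unit tolerance and allowed CuSum limit (invented)
trace = cusum_values(units, T)
print(trace, "->", "accept" if max(trace) <= L else "reject")
```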

  19. The outlier sample effects on multivariate statistical data processing geochemical stream sediment survey (Moghangegh region, North West of Iran)

    Ghanbari, Y.; Habibnia, A.; Memar, A.

    2009-01-01

    In a geochemical stream sediment survey of the Moghangegh region in northwest Iran (1:50,000 sheet), 152 samples were collected. After analysis and processing of the data, it was revealed that the Yb, Sc, Ni, Li, Eu, Cd, Co, and As contents in one sample were far higher than in the other samples. After flagging this sample as an outlier, the effect of this sample on multivariate statistical data processing was investigated to assess the destructive effects of outlier samples in geochemical exploration. Pearson and Spearman correlation coefficient methods and cluster analysis were used for the multivariate studies, and scatter plots of selected elements together with regression profiles are given for the 152- and 151-sample cases and the results compared. After examining the multivariate statistical results, it was realized that the existence of outlier samples may produce the following kinds of relations between elements: - a true relation between two elements, neither of which has an outlier value in the outlier sample; - a false relation between two elements, one of which has an outlier value in the outlier sample; - a completely false relation between two elements, both of which have outlier values in the outlier sample
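
    The spurious relations listed above are easy to reproduce numerically: a single extreme sample can manufacture a strong Pearson correlation between two otherwise independent elements, while the rank-based Spearman coefficient barely moves. The sketch below uses simulated concentrations (151 ordinary samples plus one outlier, echoing the survey's sample counts); the element names and values are illustrative.

```python
# Effect of one outlier sample on Pearson vs. Spearman correlation.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(7)
ni = rng.normal(40.0, 5.0, 151)    # ppm, illustrative
cd = rng.normal(0.3, 0.05, 151)    # ppm, generated independently of Ni

# One outlier sample with extreme values in both elements.
ni_o = np.append(ni, 400.0)
cd_o = np.append(cd, 3.0)

print("151 samples:  Pearson=%.2f  Spearman=%.2f"
      % (pearsonr(ni, cd)[0], spearmanr(ni, cd)[0]))
print("152 samples:  Pearson=%.2f  Spearman=%.2f"
      % (pearsonr(ni_o, cd_o)[0], spearmanr(ni_o, cd_o)[0]))
```

    The Pearson coefficient jumps from roughly zero to near 1 once the outlier is included, a "complete false relation" in the record's terms, while Spearman changes only marginally.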

  20. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics, arising 1) from the evolution of the official algorithms used to process the data and 2) from differences relative to other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  1. Improvement of vertical velocity statistics measured by a Doppler lidar through comparison with sonic anemometer observations

    Bonin, Timothy A.; Newman, Jennifer F.; Klein, Petra M.; Chilson, Phillip B.; Wharton, Sonia

    2016-12-01

    Since turbulence measurements from Doppler lidars are being increasingly used within wind energy and boundary-layer meteorology, it is important to assess and improve the accuracy of these observations. While turbulent quantities are measured by Doppler lidars in several different ways, the simplest and most frequently used statistic is the vertical velocity variance (w'²) from zenith stares. However, the competing effects of signal noise and resolution volume limitations, which respectively increase and decrease w'², reduce the accuracy of these measurements. Herein, an established method that utilises the autocovariance of the signal to remove noise is evaluated, and its skill in correcting for volume-averaging effects in the calculation of w'² is also assessed. Additionally, this autocovariance technique is further refined by defining the amount of lag time to use for the most accurate estimates of w'². Through comparison of observations from two Doppler lidars and sonic anemometers on a 300 m tower, the autocovariance technique is shown to generally improve estimates of w'². After the autocovariance technique is applied, values of w'² from the Doppler lidars are generally in close agreement (R² ≈ 0.95-0.98) with those calculated from sonic anemometer measurements.
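
    The autocovariance noise-removal technique evaluated in this record rests on a simple fact: uncorrelated instrument noise inflates only the lag-0 autocovariance, so fitting the autocovariance over the first few nonzero lags and extrapolating back to lag 0 recovers a noise-corrected variance. A minimal sketch on synthetic data follows; the AR(1) "turbulence", the noise level, the number of lags, and the quadratic fit are all our assumptions, not the paper's exact configuration.

```python
# Noise-corrected variance via autocovariance extrapolation to lag 0.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic correlated signal (AR(1), stationary variance ~0.92) plus white noise.
n, phi = 20_000, 0.95
w = np.zeros(n)
for i in range(1, n):
    w[i] = phi * w[i - 1] + rng.normal(0.0, 0.3)
noisy = w + rng.normal(0.0, 0.5, n)        # noise adds ~0.25 to the variance

def autocov(x, lag):
    x = x - x.mean()
    return np.mean(x[: len(x) - lag] * x[lag:])

lags = np.arange(1, 6)                     # first few nonzero lags only
acov = [autocov(noisy, k) for k in lags]
corrected = np.polyval(np.polyfit(lags, acov, 2), 0.0)

print(f"raw variance:       {noisy.var():.3f}")   # inflated by noise
print(f"corrected variance: {corrected:.3f}")     # close to the true value
print(f"true variance:      {w.var():.3f}")
```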

  2. Statistical analysis of temperature data sampled at Station-M in the Norwegian Sea

    Lorentzen, Torbjørn

    2014-02-01

    The paper analyzes sea temperature data sampled at Station-M in the Norwegian Sea. The data cover the period 1948-2010. The following questions are addressed: What type of stochastic process characterizes the temperature series? Are there any changes or patterns which indicate climate change? Are there any characteristics in the data which can be linked to the shrinking sea ice in the Arctic area? Can the series be modeled consistently and applied in forecasting of future sea temperature? The paper applies the following methods: augmented Dickey-Fuller tests for unit roots and stationarity, ARIMA models for univariate modeling, cointegration and error-correction models for estimating the short- and long-term dynamics of non-stationary series, Granger-causality tests for analyzing the interaction pattern between the deep and upper layer temperatures, and simultaneous equation systems for forecasting future temperature. The paper shows that temperature at 2000 m Granger-causes temperature at 150 m, and that the 2000 m series can serve as an important information carrier for the long-term development of the sea temperature in this geographical area. Descriptive statistics show that the temperature level has been on a positive trend since the beginning of the 1980s, as is also measured in most of the oceans in the North Atlantic. The analysis shows that the temperature series are cointegrated, which means they share the same long-term stochastic trend and do not diverge too far from each other. The measured long-term temperature increase is one of the factors that can explain the shrinking summer sea ice in the Arctic region; the analysis shows a significant negative correlation between the shrinking sea ice and the sea temperature at Station-M. The paper shows that the temperature forecasts are conditioned on the properties of the stochastic processes, the causality pattern between the variables, and the specification of the model
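
    The unit-root and ARIMA steps named in this record map directly onto standard library calls. The sketch below, assuming the statsmodels package is available, runs an augmented Dickey-Fuller test and fits a once-differenced ARIMA model to a synthetic trending series standing in for the Station-M temperatures.

```python
# ADF unit-root test followed by ARIMA fitting on a synthetic trending series.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)

# Random walk with small drift: non-stationary, like a warming temperature series.
temp = 7.0 + np.cumsum(rng.normal(0.01, 0.1, 744))   # e.g., monthly values

adf_stat, pvalue, *_ = adfuller(temp)
print(f"ADF statistic = {adf_stat:.2f}, p-value = {pvalue:.3f}")  # large p: unit root

# Difference once (d=1) so the ARIMA model works on a stationary series.
fit = ARIMA(temp, order=(1, 1, 0)).fit()
print(fit.params)
```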

  3. Assessment of statistical uncertainty in the quantitative analysis of solid samples in motion using laser-induced breakdown spectroscopy

    Cabalin, L.M.; Gonzalez, A. [Department of Analytical Chemistry, University of Malaga, E-29071 Malaga (Spain); Ruiz, J. [Department of Applied Physics I, University of Malaga, E-29071 Malaga (Spain); Laserna, J.J., E-mail: laserna@uma.e [Department of Analytical Chemistry, University of Malaga, E-29071 Malaga (Spain)

    2010-08-15

    Statistical uncertainty in the quantitative analysis of solid samples in motion by laser-induced breakdown spectroscopy (LIBS) has been assessed. For this purpose, a LIBS demonstrator was designed and constructed in our laboratory. The LIBS system consisted of a laboratory-scale conveyor belt, a compact optical module and a Nd:YAG laser operating at 532 nm. The speed of the conveyor belt was variable and could be adjusted up to a maximum speed of 2 m s^-1. Statistical uncertainty in the analytical measurements was estimated in terms of precision (reproducibility and repeatability) and accuracy. The results obtained by LIBS on shredded scrap samples under real conditions have demonstrated that the analytical precision and accuracy of LIBS is dependent on the sample geometry, position on the conveyor belt and surface cleanliness. Flat, relatively clean scrap samples exhibited acceptable reproducibility and repeatability; by contrast, samples with an irregular shape or a dirty surface exhibited a poor relative standard deviation.

  4. Assessment of statistical uncertainty in the quantitative analysis of solid samples in motion using laser-induced breakdown spectroscopy

    Cabalín, L. M.; González, A.; Ruiz, J.; Laserna, J. J.

    2010-08-01

    Statistical uncertainty in the quantitative analysis of solid samples in motion by laser-induced breakdown spectroscopy (LIBS) has been assessed. For this purpose, a LIBS demonstrator was designed and constructed in our laboratory. The LIBS system consisted of a laboratory-scale conveyor belt, a compact optical module and a Nd:YAG laser operating at 532 nm. The speed of the conveyor belt was variable and could be adjusted up to a maximum speed of 2 m s^-1. Statistical uncertainty in the analytical measurements was estimated in terms of precision (reproducibility and repeatability) and accuracy. The results obtained by LIBS on shredded scrap samples under real conditions have demonstrated that the analytical precision and accuracy of LIBS is dependent on the sample geometry, position on the conveyor belt and surface cleanliness. Flat, relatively clean scrap samples exhibited acceptable reproducibility and repeatability; by contrast, samples with an irregular shape or a dirty surface exhibited a poor relative standard deviation.

  5. Assessment of statistical uncertainty in the quantitative analysis of solid samples in motion using laser-induced breakdown spectroscopy

    Cabalin, L.M.; Gonzalez, A.; Ruiz, J.; Laserna, J.J.

    2010-01-01

    Statistical uncertainty in the quantitative analysis of solid samples in motion by laser-induced breakdown spectroscopy (LIBS) has been assessed. For this purpose, a LIBS demonstrator was designed and constructed in our laboratory. The LIBS system consisted of a laboratory-scale conveyor belt, a compact optical module and a Nd:YAG laser operating at 532 nm. The speed of the conveyor belt was variable and could be adjusted up to a maximum speed of 2 m s^-1. Statistical uncertainty in the analytical measurements was estimated in terms of precision (reproducibility and repeatability) and accuracy. The results obtained by LIBS on shredded scrap samples under real conditions have demonstrated that the analytical precision and accuracy of LIBS is dependent on the sample geometry, position on the conveyor belt and surface cleanliness. Flat, relatively clean scrap samples exhibited acceptable reproducibility and repeatability; by contrast, samples with an irregular shape or a dirty surface exhibited a poor relative standard deviation.

  6. Survey of statistical and sampling needs for environmental monitoring of commercial low-level radioactive waste disposal facilities

    Eberhardt, L.L.; Thomas, J.M.

    1986-07-01

    This project was designed to develop guidance for implementing 10 CFR Part 61 and to determine the overall needs for sampling and statistical work in characterizing, surveying, monitoring, and closing commercial low-level waste sites. When cost-effectiveness and statistical reliability are of prime importance, then double sampling, compositing, and stratification (with optimal allocation) are identified as key issues. If the principal concern is avoiding questionable statistical practice, then the applicability of kriging (for assessing spatial pattern), methods for routine monitoring, and use of standard textbook formulae in reporting monitoring results should be reevaluated. Other important issues identified include sampling for estimating model parameters and the use of data from left-censored (less than detectable limits) distributions
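
    Of the key issues named above, stratification with optimal allocation is the most compact to illustrate: under Neyman allocation the sampling budget goes to each stratum in proportion to its size times its standard deviation. A small sketch follows, with invented strata and variances.

```python
# Neyman (optimal) allocation of a fixed sampling budget across strata.
import numpy as np

N_h = np.array([500, 300, 200])    # stratum sizes (e.g., grid cells per sub-area)
s_h = np.array([2.0, 5.0, 12.0])   # anticipated std dev of contaminant per stratum
n_total = 60                       # total measurement budget

n_h = n_total * (N_h * s_h) / np.sum(N_h * s_h)
print(np.round(n_h).astype(int))   # most samples go to the most variable strata
```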

  7. Statistical evolution of quiet-Sun small-scale magnetic features using Sunrise observations

    Anusha, L. S.; Solanki, S. K.; Hirzberger, J.; Feller, A.

    2017-02-01

    The evolution of small magnetic features in quiet regions of the Sun provides a unique window for probing solar magneto-convection. Here we analyze small-scale magnetic features in the quiet Sun, using the high-resolution, seeing-free observations from the Sunrise balloon-borne solar observatory. Our aim is to understand the contribution of different physical processes, such as splitting, merging, emergence and cancellation of magnetic fields, to the rearrangement, addition and removal of magnetic flux in the photosphere. We have employed a statistical approach for the analysis, and the evolution studies are carried out using a feature-tracking technique. In this paper we provide a detailed description of the newly developed feature-tracking algorithm and present the results of a statistical study of several physical quantities. The results on the fractions of the flux in emergence, appearance, splitting, merging, disappearance and cancellation qualitatively agree with other recent studies. To summarize, the total flux gained in unipolar appearance is an order of magnitude larger than the total flux gained in emergence. On the other hand, bipolar cancellation contributes nearly as much to the loss of magnetic flux as unipolar disappearance. The total flux lost in cancellation is nearly six to eight times larger than the total flux gained in emergence. One big difference between our study and previous similar studies is that, thanks to the higher spatial resolution of Sunrise, we can track features with fluxes as low as 9 × 10^14 Mx. This flux is nearly an order of magnitude lower than the smallest fluxes of the features tracked in the highest-resolution previous studies based on Hinode data. The area and flux of the magnetic features follow power-law type distributions, while the lifetimes show either power-law or exponential type distributions depending on the exact definitions used to define various birth and death events. We have

  8. Statistical Sampling Handbook for Student Aid Programs: A Reference for Non-Statisticians. Winter 1984.

    Office of Student Financial Assistance (ED), Washington, DC.

    A manual on sampling is presented to assist audit and program reviewers, project officers, managers, and program specialists of the U.S. Office of Student Financial Assistance (OSFA). For each of the following types of samples, definitions and examples are provided, along with information on advantages and disadvantages: simple random sampling,…

  9. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
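
    The conical structure described in this abstract can be watched numerically: in a single-spike model, as the ratio of dimension to (sample size × spike size) grows, the leading sample eigenvector swings away from its population counterpart by an increasingly large angle. The simulation below is a minimal illustration with invented parameters, not a reproduction of the paper's regimes.

```python
# Angle between sample and population eigenvectors in a single-spike model.
import numpy as np

rng = np.random.default_rng(11)

def sample_angle(d, n, spike):
    u = np.zeros(d)
    u[0] = 1.0                                   # population eigenvector
    # Rows ~ N(0, spike * u u^T + I): a spike along u plus isotropic noise.
    X = np.sqrt(spike) * rng.normal(size=(n, 1)) * u + rng.normal(size=(n, d))
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    cos = min(abs(Vt[0] @ u), 1.0)               # leading sample eigenvector vs. u
    return np.degrees(np.arccos(cos))

for d in (100, 1_000, 10_000):                   # HDLSS regime: d grows, n stays small
    angles = [sample_angle(d, n=20, spike=50.0) for _ in range(20)]
    print(f"d={d:6d}  d/(n*spike)={d / (20 * 50.0):5.1f}  "
          f"mean angle={np.mean(angles):5.1f} deg")
```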

  10. MANAGERIAL DECISION IN INNOVATIVE EDUCATION SYSTEMS STATISTICAL SURVEY BASED ON SAMPLE THEORY

    Gheorghe SĂVOIU

    2012-12-01

    Before formulating the statistical hypotheses and the econometric testing itself, a breakdown of some of the technical issues is required. These relate to managerial decision-making in innovative educational systems: the educational managerial phenomenon is tested through statistical and mathematical methods, specifically the significant differences in the perception of current qualities, knowledge, experience, behaviour and desirable health, obtained through a questionnaire applied to a stratified population in the educational environment, comprising respondents engaged either in educational activities or in simultaneously managerial and educational activities. The details of the research, which is focused on survey theory and turns the questionnaires and the statistical data processed from them into a working tool, are summarized below.

  11. Characteristics of high altitude oxygen ion energization and outflow as observed by Cluster: a statistical study

    Nilsson, H.; Waara, M.; Arvelius, S.; Yamauchi, M.; Lundin, R. [Inst. of Space Physics, Kiruna (Sweden); Marghitu, O. [Max-Planck-Inst. fuer Extraterrestriche Physik, Garching (Germany); Inst. for Space Sciences, Bucharest (Romania); Bouhram, M. [Max-Planck-Inst. fuer Extraterrestriche Physik, Garching (Germany); CETP-CNRS, Saint-Maur (France); Hobara, Y. [Inst. of Space Physics, Kiruna (Sweden); Univ. of Sheffield, Sheffield (United Kingdom); Reme, H.; Sauvaud, J.A.; Dandouras, I. [Centre d' Etude Spatiale des Rayonnements, Toulouse (France); Balogh, A. [Imperial Coll. of Science, Technology and Medicine, London (United Kingdom); Kistler, L.M. [Univ. of New Hampshire, Durham (United States); Klecker, B. [Max-Planck-Inst. fuer Extraterrestriche Physik, Garching (Germany); Carlson, C.W. [Space Science Lab., Univ. of California, Berkeley (United States); Bavassano-Cattaneo, M.B. [Ist. di Fisica dello Spazio Interplanetario, Roma (Italy); Korth, A. [Max-Planck-Inst. fuer Sonnensystemforschung, Katlenburg-Lindau (Germany)

    2006-07-01

    The results of a statistical study of oxygen ion outflow using Cluster data obtained at high altitude above the polar cap is reported. Moment data for both hydrogen ions (H+) and oxygen ions (O+) from 3 years (2001-2003) of spring orbits (January to May) have been used. The altitudes covered were mainly in the range 5-12 R_E geocentric distance. It was found that O+ is significantly transversely energized at high altitudes, indicated both by high perpendicular temperatures for low magnetic field values as well as by a tendency towards higher perpendicular than parallel temperature distributions for the highest observed temperatures. The O+ parallel bulk velocity increases with altitude, in particular for the lowest observed altitude intervals. O+ parallel bulk velocities in excess of 60 km s^-1 were found mainly at higher altitudes corresponding to magnetic field strengths of less than 100 nT. For the highest observed parallel bulk velocities of O+ the thermal velocity exceeds the bulk velocity, indicating that the beam-like character of the distribution is lost. The parallel bulk velocity of the H+ and O+ was found to typically be close to the same throughout the observation interval when the H+ bulk velocity was calculated for all pitch-angles. When the H+ bulk velocity was calculated for upward moving particles only, the H+ parallel bulk velocity was typically higher than that of O+. The parallel bulk velocity is close to the same for a wide range of relative abundances of the two ion species, including when the O+ ions dominate. The thermal velocity of O+ was always well below that of H+. Thus perpendicular energization that is more effective for O+ takes place, but this is not enough to explain the close to similar parallel velocities. Further parallel acceleration must occur. The results presented constrain the models of perpendicular heating and parallel

  12. Characteristics of high altitude oxygen ion energization and outflow as observed by Cluster: a statistical study

    H. Nilsson

    2006-05-01

    The results of a statistical study of oxygen ion outflow using Cluster data obtained at high altitude above the polar cap is reported. Moment data for both hydrogen ions (H+) and oxygen ions (O+) from 3 years (2001-2003) of spring orbits (January to May) have been used. The altitudes covered were mainly in the range 5-12 R_E geocentric distance. It was found that O+ is significantly transversely energized at high altitudes, indicated both by high perpendicular temperatures for low magnetic field values as well as by a tendency towards higher perpendicular than parallel temperature distributions for the highest observed temperatures. The O+ parallel bulk velocity increases with altitude, in particular for the lowest observed altitude intervals. O+ parallel bulk velocities in excess of 60 km s^-1 were found mainly at higher altitudes corresponding to magnetic field strengths of less than 100 nT. For the highest observed parallel bulk velocities of O+ the thermal velocity exceeds the bulk velocity, indicating that the beam-like character of the distribution is lost. The parallel bulk velocity of the H+ and O+ was found to typically be close to the same throughout the observation interval when the H+ bulk velocity was calculated for all pitch-angles. When the H+ bulk velocity was calculated for upward moving particles only, the H+ parallel bulk velocity was typically higher than that of O+. The parallel bulk velocity is close to the same for a wide range of relative abundances of the two ion species, including when the O+ ions dominate. The thermal velocity of O+ was always well below that of H+. Thus perpendicular energization that is more effective for O+ takes place, but this is not enough to explain the close to similar parallel velocities. Further

  13. Characteristics of high altitude oxygen ion energization and outflow as observed by Cluster: a statistical study

    H. Nilsson

    2006-05-01

    The results of a statistical study of oxygen ion outflow using Cluster data obtained at high altitude above the polar cap is reported. Moment data for both hydrogen ions (H+) and oxygen ions (O+) from 3 years (2001-2003) of spring orbits (January to May) have been used. The altitudes covered were mainly in the range 5-12 R_E geocentric distance. It was found that O+ is significantly transversely energized at high altitudes, indicated both by high perpendicular temperatures for low magnetic field values as well as by a tendency towards higher perpendicular than parallel temperature distributions for the highest observed temperatures. The O+ parallel bulk velocity increases with altitude, in particular for the lowest observed altitude intervals. O+ parallel bulk velocities in excess of 60 km s^-1 were found mainly at higher altitudes corresponding to magnetic field strengths of less than 100 nT. For the highest observed parallel bulk velocities of O+ the thermal velocity exceeds the bulk velocity, indicating that the beam-like character of the distribution is lost. The parallel bulk velocity of the H+ and O+ was found to typically be close to the same throughout the observation interval when the H+ bulk velocity was calculated for all pitch-angles. When the H+ bulk velocity was calculated for upward moving particles only, the H+ parallel bulk velocity was typically higher than that of O+. The parallel bulk velocity is close to the same for a wide range of relative abundances of the two ion species, including when the O+ ions dominate. The thermal velocity of O+ was always well below that of H+. Thus perpendicular energization that is more effective for O+ takes place, but this is not enough to explain the close to similar parallel velocities. Further parallel acceleration must occur. The results presented constrain the models of perpendicular heating and parallel acceleration. In particular centrifugal acceleration of the outflowing ions, which may

  14. Statistical and observational research of solar flare for total spectra and geometrical features

    Nishimoto, S.; Watanabe, K.; Imada, S.; Kawate, T.; Lee, K. S.

    2017-12-01

    Impulsive energy release phenomena such as solar flares sometimes affect the solar-terrestrial environment. Usually, soft X-ray flux (GOES class) is used as the index of flare scale; however, the magnitude of the effect on the solar-terrestrial environment is not proportional to that scale. To identify the relationship between solar flare phenomena and their influence on the solar-terrestrial environment, we need to understand the full spectrum of solar flares. There is a solar flare irradiance model named the Flare Irradiance Spectral Model (FISM) (Chamberlin et al., 2006, 2007, 2008). FISM can estimate solar flare spectra with high wavelength resolution. However, this model cannot express the time evolution of the emitted plasma during a solar flare, and has low accuracy at the short wavelengths that strongly affect and/or control the total flare spectrum. For the purpose of obtaining the time evolution of total solar flare spectra, we are performing a statistical analysis of the electromagnetic data of solar flares. In this study, we select solar flare events larger than M-class from the Hinode flare catalogue (Watanabe et al., 2012). First, we focus on the EUV emission observed by SDO/EVE. We examined the intensities and time evolutions of five EUV lines for 55 flare events. As a result, we found a positive correlation between the "soft X-ray flux" and the "EUV peak flux" for all EUV lines. Moreover, we found that hot lines peak earlier than cool lines in the EUV light curves. We also examined the hard X-ray data obtained by RHESSI. When we analyzed 163 events, we found a good correlation between the "hard X-ray intensity" and the "soft X-ray flux". Because the geometrical features of solar flares seem to affect these time evolutions, we also looked into flare ribbons observed by SDO/AIA. We examined 21 flare events and found a positive correlation between the "GOES duration" and the "ribbon length". We also found positive correlation between the "ribbon

  15. A cost-saving statistically based screening technique for focused sampling of a lead-contaminated site

    Moscati, A.F. Jr.; Hediger, E.M.; Rupp, M.J.

    1986-01-01

    High concentrations of lead in soils along an abandoned railroad line prompted a remedial investigation to characterize the extent of contamination across a 7-acre site. Contamination was thought to be spotty across the site, reflecting its past use in battery recycling operations at discrete locations. A screening technique was employed to delineate the more highly contaminated areas by testing a statistically determined minimum number of random samples from each of seven discrete site areas. The approach not only quickly identified those site areas which would require more extensive grid sampling, but also provided a statistically defensible basis for excluding other site areas from further consideration, thus saving the cost of additional sample collection and analysis. The reduction in the number of samples collected in "clean" areas of the site ranged from 45 to 60%
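
    The "statistically determined minimum number of random samples" per site area can be sketched with the standard normal-approximation sample-size formula n = (z·sigma/E)²; the confidence level, anticipated variability, and target margin of error below are invented for illustration, not the values used at this site.

```python
# Minimum sample size for estimating a mean to within a margin of error E.
import math

z = 1.96         # 95% confidence
sigma = 250.0    # anticipated std dev of soil lead (mg/kg), assumed
E = 100.0        # acceptable margin of error (mg/kg), assumed

n = math.ceil((z * sigma / E) ** 2)
print(f"minimum random samples per site area: {n}")   # 25 under these assumptions
```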

  16. Using student models to generate feedback in a university course on statistical sampling

    Tacoma, S.G.|info:eu-repo/dai/nl/411923080; Drijvers, P.H.M.|info:eu-repo/dai/nl/074302922; Boon, P.B.J.|info:eu-repo/dai/nl/203374207

    2017-01-01

    Due to the complexity of the topic and a lack of individual guidance, introductory statistics courses at university are often challenging. Automated feedback might help to address this issue. In this study, we explore the use of student models to provide feedback. The research question is how

  17. Constrained statistical inference : sample-size tables for ANOVA and regression

    Vanbrabant, Leonard; Van De Schoot, Rens; Rosseel, Yves

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3} and

  18. Spatial scan statistics to assess sampling strategy of antimicrobial resistance monitoring programme

    Vieira, Antonio; Houe, Hans; Wegener, Henrik Caspar

    2009-01-01

    The collection and analysis of data on antimicrobial resistance in human and animal populations are important for establishing a baseline of the occurrence of resistance and for determining trends over time. In animals, targeted monitoring with a stratified sampling plan is normally used. However... sampled by the Danish Integrated Antimicrobial Resistance Monitoring and Research Programme (DANMAP), by identifying spatial clusters of samples and detecting areas with significantly high or low sampling rates. These analyses were performed for each year and for the total 5-year study period for all... by an antimicrobial monitoring program.

  19. Structure of Small and Medium-Sized Business: Results of Total Statistic Observations in Russia

    Iuliia S. Pinkovetskaia

    2018-03-01

    The aim of the research is the estimation of regularities and tendencies characteristic of the modern sectoral structure of small and medium-sized business in Russia. The subject of the research is the set of processes of structural change in the types of economic activity of such enterprises, as well as the differentiation of the number of employees per enterprise. The research methodology included consideration of aggregates of small and medium-sized business entities formed according to sectoral and territorial features. The initial data were the official statistical information obtained in the course of the total observation of the activities of small and medium-sized businesses in 2010 and 2015. The study was conducted on indicators characterizing the full range of legal entities and individual entrepreneurs in the country. The materiality of structural changes was assessed on the basis of the Ryabtsev index. Modeling of the differentiation of the number of employees per enterprise was based on fitting normal distribution density functions. The working hypothesis was that the differentiation of the number of employees depends on the six main types of economic activity and on the subjects of Russia. The results showed that there were no significant structural changes over the period from 2010 to 2015, either in the number of enterprises or in the number of their employees. Based on the modeling results, average values of the number of employees were established for the six main types of activity, as well as intervals of variation of these indicators for the aggregates of small and medium-sized enterprises located in the majority of the country's subjects. The results of the research can be used in scientific work related to justifying the expected number of enterprises and their employees, and the formation of

  20. Comparative statistical analysis of carcinogenic and non-carcinogenic effects of uranium in groundwater samples from different regions of Punjab, India.

    Saini, Komal; Singh, Parminder; Bajwa, Bikramjit Singh

    2016-12-01

    An LED fluorimeter has been used for microanalysis of uranium concentration in groundwater samples collected from six districts of South West (SW), West (W) and North East (NE) Punjab, India. The average uranium content in water samples from SW Punjab is observed to be higher than the WHO and USEPA recommended safe limit of 30 µg l^-1 as well as the AERB proposed limit of 60 µg l^-1, whereas for the W and NE regions of Punjab the average uranium concentration was within the AERB recommended limit of 60 µg l^-1. The average value observed in SW Punjab is around 3-4 times that observed in W Punjab, and more than 17 times the average value observed in the NE region of Punjab. Statistical analyses of the carcinogenic as well as non-carcinogenic risks due to uranium have been carried out for each studied district.

  1. Cloud-based solution to identify statistically significant MS peaks differentiating sample categories.

    Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B

    2013-03-23

    Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. Multiple challenges in protein MS data analysis remain: large-scale and complex data set management; MS peak identification and indexing; and high-dimensional peak differential analysis with concurrent statistical tests and the associated false discovery rate (FDR). "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets and identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The presented web application supports large-scale MS data uploading and analysis online with a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.

  2. Mapping cell populations in flow cytometry data for cross‐sample comparison using the Friedman–Rafsky test statistic as a distance measure

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu

    2015-01-01

    Flow cytometry (FCM) is a fluorescence‐based single‐cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap‐FR, a novel method for cell population mapping across FCM samples. FlowMap‐FR is based on the Friedman–Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap‐FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap‐FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap‐FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap‐FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap‐FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback–Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL‐distance in distinguishing
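
    The Friedman-Rafsky construction behind FlowMap-FR is compact enough to sketch: pool the two samples, build a Euclidean minimum spanning tree, and count the edges that join points from different samples. Equivalent distributions interleave and give many cross-sample edges; distinct populations give few. This is a generic FR edge count for illustration, not the FlowMap-FR implementation.

```python
# Friedman-Rafsky style cross-sample edge count on a minimum spanning tree.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)

def fr_cross_edges(X, Y):
    """Number of MST edges joining points from different samples."""
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    mst = minimum_spanning_tree(squareform(pdist(Z))).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

same = fr_cross_edges(rng.normal(0, 1, (100, 3)), rng.normal(0, 1, (100, 3)))
diff = fr_cross_edges(rng.normal(0, 1, (100, 3)), rng.normal(2, 1, (100, 3)))
print(f"cross edges, identical distributions: {same}")   # high: populations mix
print(f"cross edges, shifted distributions:   {diff}")   # low: populations separate
```

    In the full FR test this count is standardized against its permutation mean and variance; the sketch keeps only the raw count to show the mechanics.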

  3. Mapping cell populations in flow cytometry data for cross-sample comparison using the Friedman-Rafsky test statistic as a distance measure.

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu; Scheuermann, Richard H

    2016-01-01

    Flow cytometry (FCM) is a fluorescence-based single-cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap-FR, a novel method for cell population mapping across FCM samples. FlowMap-FR is based on the Friedman-Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap-FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap-FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap-FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap-FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap-FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback-Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL-distance in distinguishing equivalent from nonequivalent cell

  4. Statistical observation on autopsy cases of malignancy at the Japanese Red Cross, Nagasaki Atomic Bomb Hospital

    Takahara, O; Toyoda, S; Tsuno, S; Mukai, H; Uemura, S [Nagasaki Atomic Bomb Hospital (Japan)

    1976-09-01

    Statistical observation was made of autopsy cases of atomic-bomb survivors in Nagasaki. The total number of autopsy cases at the Japanese Red Cross, Nagasaki Atomic Bomb Hospital from the opening of the hospital in 1968 to December 1975 was 1,486 (autopsy rate, 65.1%), of which 880 were atomic bomb survivors (autopsy rate, 68.0%). Cases of malignancy totaled 829, of which 528 were atomic bomb survivors. Cases of malignancy were divided into three groups: those exposed within 2 km of the explosion point, those exposed at more than 2 km or who entered the city after the explosion, and a non-exposed group. The relationship between the main malignancies and exposure was discussed, and the following results were obtained. 1) An obvious relationship was found between exposure and acute and chronic medullary leukemia. 2) Malignant lymphoma was scarcely correlated with exposure, but its occurrence rate was higher than the mean rate in Japan, reflecting the fact that the disease is geographically more common in this region. 3) No obvious relationship was found between exposure and stomach cancer, lung cancer, cancer of the large intestine, or double cancer, but the occurrence rate of hepatic cancer was higher than the mean rate in Japan in all three groups; the reason was supposed to be a geographical factor. 4) Thyroid gland cancer cases were few among females in the group exposed within 2 km, and prostate cancer cases were few in the group exposed within 2 km, but their occurrence rates were specifically high.

  5. Dental Calculus Links Statistically to Angina Pectoris: 26-Year Observational Study.

    Söder, Birgitta; Meurman, Jukka H; Söder, Per-Östen

    2016-01-01

    Dental infections, such as periodontitis, associate with atherosclerosis and its complications. We studied a cohort followed up since 1985 for the incidence of angina pectoris, with the hypothesis that calculus accumulation, a proxy for poor oral hygiene, is linked to this symptom, in a Swedish prospective cohort study of 1676 randomly selected subjects followed up for 26 years. In 1985 all subjects underwent a clinical oral examination and answered a questionnaire assessing background variables such as socio-economic status and pack-years of smoking. Using data from the Center of Epidemiology, Swedish National Board of Health and Welfare, we analyzed the association of oral health parameters with the prevalence of in-hospital verified angina pectoris, classified according to the WHO International Classification of Diseases, using descriptive statistics and logistic regression analysis. Of the 1676 subjects, 51 (28 women/23 men) had been diagnosed with angina pectoris at a mean age of 59.8 ± 2.9 years. No difference was observed in age and gender between patients with angina pectoris and subjects without. Neither was there any difference between the groups in education level, smoking habits (in pack-years), Gingival index or Plaque index. Angina pectoris patients had their first maxillary molar tooth (d. 16) extracted significantly more often than the other subjects (p = 0.02). Patients also showed significantly higher dental calculus index values than subjects without angina pectoris (p = 0.01). Multiple regression analysis showed an odds ratio of 2.21 (95% confidence interval 1.17-4.17) for the association between high calculus index and angina pectoris (p = 0.015). Our study hypothesis was confirmed by showing for the first time that a high dental calculus score indeed associated with the incidence of angina pectoris in this cohort study.

  6. Statistical observation on autopsy cases of malignancy at the Japanese Red Cross, Nagasaki Atomic Bomb Hospital

    Takahara, Osamu; Toyoda, Shigeki; Tsuno, Sumio; Mukai, Hideaki; Uemura, Seiji

    1976-01-01

    Statistical observation was made of autopsy cases of atomic-bomb survivors in Nagasaki. The total number of autopsy cases at the Japanese Red Cross, Nagasaki Atomic Bomb Hospital from the opening of the hospital in 1968 to December 1975 was 1,486 (autopsy rate, 65.1%), of which 880 were atomic bomb survivors (autopsy rate, 68.0%). Cases of malignancy totaled 829, of which 528 were atomic bomb survivors. Cases of malignancy were divided into three groups: those exposed within 2 km of the explosion point, those exposed at more than 2 km or who entered the city after the explosion, and a non-exposed group. The relationship between the main malignancies and exposure was discussed, and the following results were obtained. 1) An obvious relationship was found between exposure and acute and chronic medullary leukemia. 2) Malignant lymphoma was scarcely correlated with exposure, but its occurrence rate was higher than the mean rate in Japan, reflecting the fact that the disease is geographically more common in this region. 3) No obvious relationship was found between exposure and stomach cancer, lung cancer, cancer of the large intestine, or double cancer, but the occurrence rate of hepatic cancer was higher than the mean rate in Japan in all three groups; the reason was supposed to be a geographical factor. 4) Thyroid gland cancer cases were few among females in the group exposed within 2 km, and prostate cancer cases were few in the group exposed within 2 km, but their occurrence rates were specifically high. (Tsunoda, M.)

  7. Statistics of AUV's Missions for Operational Ocean Observation at the South Brazilian Bight.

    dos Santos, F. A.; São Tiago, P. M.; Oliveira, A. L. S. C.; Barmak, R. B.; Miranda, T. C.; Guerra, L. A. A.

    2016-02-01

    The high costs and logistical limitations of ship-based data collection represent an obstacle to persistent in-situ data collection. Satellite-operated Autonomous Underwater Vehicles (AUVs), or gliders (as these AUVs are generally known in the scientific community), are presented as an inexpensive and reliable alternative for long-term, real-time ocean monitoring of important parameters such as temperature, salinity, water quality and acoustics. This work focuses on the performance statistics and the reliability for continuous operation of a fleet of seven gliders navigating in the Santos Basin, Brazil, since March 2013. The gliders' performance was evaluated by the number of standby days versus the number of operating days, the number of missions interrupted due to (1) equipment failure, (2) weather, or (3) accident versus the number of successful missions, and the amount and quality of data collected. From the start of operations in March 2013 to the preparation of this work (July 2015), a total of 16 glider missions were accomplished, with gliders operating during 728 of the 729 days elapsed. Of this total, 11 missions were successful, 3 missions were interrupted due to equipment failure, and 2 gliders were lost. Most of the identified issues were observed in communication with the glider (when recovery was necessary) or in the optode sensors (when remote settings solved the problem). The average duration of a successful mission was 103 days, while interrupted ones ended on average after 7 days. The longest mission lasted 139 days, performing 859 continuous profiles and covering a distance of 2,734 km. The two projects together performed 6,856 dives, providing an average of 9.5 profiles per day, or one profile every 2.5 hours, during 2 consecutive years.

  8. Statistical Correlation of Low-Altitude ENA Emissions with Geomagnetic Activity from IMAGE MENA Observations

    Mackler, D. A.; Jahn, J.- M.; Perez, J. D.; Pollock, C. J.; Valek, P. W.

    2016-01-01

    Plasma sheet particles transported Earthward during times of active magnetospheric convection can interact with exospheric/thermospheric neutrals through charge exchange. The resulting Energetic Neutral Atoms (ENAs) are free to leave the influence of the magnetosphere and can be remotely detected. ENAs associated with low-altitude (300-800 km) ion precipitation in the high-latitude atmosphere/ionosphere are termed low-altitude emissions (LAEs). Remotely observed LAEs are highly nonisotropic in velocity space such that the pitch angle distribution at the time of charge exchange is near 90°. The Geomagnetic Emission Cone of LAEs can be mapped spatially, showing where proton energy is deposited during times of varying geomagnetic activity. In this study we present a statistical look at the correlation between LAE flux (intensity and location) and geomagnetic activity. The LAE data are from the MENA imager on the IMAGE satellite over the declining phase of solar cycle 23 (2000-2005). The SYM-H, AE, and Kp indices are used to describe geomagnetic activity. The goal of the study is to evaluate properties of LAEs in ENA images and determine if those images can be used to infer properties of ion precipitation. Results indicate a general positive correlation to LAE flux for all three indices, with the SYM-H showing the greatest sensitivity. The magnetic local time distribution of LAEs is centered about midnight and spreads with increasing activity. The invariant latitude for all indices has a slightly negative correlation. The combined results indicate LAE behavior similar to that of ion precipitation.

  9. Statistical correlation of low-altitude ENA emissions with geomagnetic activity from IMAGE/MENA observations

    Mackler, D. A.; Jahn, J.-M.; Perez, J. D.; Pollock, C. J.; Valek, P. W.

    2016-03-01

    Plasma sheet particles transported Earthward during times of active magnetospheric convection can interact with exospheric/thermospheric neutrals through charge exchange. The resulting Energetic Neutral Atoms (ENAs) are free to leave the influence of the magnetosphere and can be remotely detected. ENAs associated with low-altitude (300-800 km) ion precipitation in the high-latitude atmosphere/ionosphere are termed low-altitude emissions (LAEs). Remotely observed LAEs are highly nonisotropic in velocity space such that the pitch angle distribution at the time of charge exchange is near 90°. The Geomagnetic Emission Cone of LAEs can be mapped spatially, showing where proton energy is deposited during times of varying geomagnetic activity. In this study we present a statistical look at the correlation between LAE flux (intensity and location) and geomagnetic activity. The LAE data are from the MENA imager on the IMAGE satellite over the declining phase of solar cycle 23 (2000-2005). The SYM-H, AE, and Kp indices are used to describe geomagnetic activity. The goal of the study is to evaluate properties of LAEs in ENA images and determine if those images can be used to infer properties of ion precipitation. Results indicate a general positive correlation to LAE flux for all three indices, with the SYM-H showing the greatest sensitivity. The magnetic local time distribution of LAEs is centered about midnight and spreads with increasing activity. The invariant latitude for all indices has a slightly negative correlation. The combined results indicate LAE behavior similar to that of ion precipitation.

  10. TRAN-STAT: statistics for environmental studies, Number 22. Comparison of soil-sampling techniques for plutonium at Rocky Flats

    Gilbert, R.O.; Bernhardt, D.E.; Hahn, P.B.

    1983-01-01

    A summary of a field soil sampling study conducted around the Rocky Flats, Colorado plant in May 1977 is presented. Several different soil sampling techniques that had been used in the area were applied at four different sites. One objective was to compare the average 239-240Pu concentration values obtained by the various soil sampling techniques used. There was also interest in determining whether there are differences in the reproducibility of the various techniques and how the techniques compared with the proposed EPA technique of sampling to 1 cm depth. Statistically significant differences in average concentrations between the techniques were found. The differences could be largely related to differences in sampling depth, the primary physical variable distinguishing the techniques. The reproducibility of the techniques was evaluated by comparing coefficients of variation. Differences between coefficients of variation were not statistically significant. Average (median) coefficients ranged from 21 to 42 percent for the five sampling techniques. A laboratory study indicated that various sample treatment and particle sizing techniques could increase the concentration of plutonium in the less than 10 micrometer size fraction by up to a factor of about 4 compared to the 2 mm size fraction.

  11. Hybrid algorithm of ensemble transform and importance sampling for assimilation of non-Gaussian observations

    Shin'ya Nakano

    2014-05-01

    A hybrid algorithm that combines the ensemble transform Kalman filter (ETKF) and the importance sampling approach is proposed. Since the ETKF assumes a linear Gaussian observation model, the estimate obtained by the ETKF can be biased in cases with nonlinear or non-Gaussian observations. The particle filter (PF) is based on the importance sampling technique and is applicable to problems with nonlinear or non-Gaussian observations. However, the PF usually requires an unrealistically large sample size in order to achieve a good estimate, and thus it is computationally prohibitive. In the proposed hybrid algorithm, we obtain a proposal distribution similar to the posterior distribution by using the ETKF. A large number of samples are then drawn from the proposal distribution, and these samples are weighted to approximate the posterior distribution according to the importance sampling principle. Since importance sampling provides an estimate of the probability density function (PDF) without assuming linearity or Gaussianity, we can resolve the bias due to the nonlinear or non-Gaussian observations. Finally, in the next forecast step, we reduce the sample size to achieve computational efficiency based on the Gaussian assumption, while we use a relatively large number of samples in the importance sampling in order to capture the non-Gaussian features of the posterior PDF. The use of the ETKF is also beneficial in terms of the computational simplicity of generating a number of random samples from the proposal distribution and of weighting each of the samples. The proposed algorithm is not necessarily effective when the ensemble lies far from the true state; however, monitoring the effective sample size and tuning the factor for covariance inflation can resolve this problem. In this paper, the proposed hybrid algorithm is introduced and its performance is evaluated through experiments with non-Gaussian observations.
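
    The core of the scheme is the importance-sampling correction applied to samples drawn from the Gaussian proposal produced by the ETKF. The sketch below illustrates that step only; it is not the authors' code, and the function and variable names, together with the assumptions that the prior is Gaussian and the log-likelihood is supplied as a vectorized callable, are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_is_step(mu_q, cov_q, mu_b, cov_b, log_lik, n_samp=10_000):
    """One importance-sampling correction, as sketched above.

    mu_q, cov_q : mean/covariance of the Gaussian proposal (ETKF analysis)
    mu_b, cov_b : mean/covariance of the Gaussian prior (forecast)
    log_lik     : vectorized x -> log p(y | x); may be non-Gaussian
    """
    x = rng.multivariate_normal(mu_q, cov_q, size=n_samp)

    def log_gauss(pts, m, c):
        d = pts - m
        return -0.5 * np.einsum('ij,jk,ik->i', d, np.linalg.inv(c), d)

    # importance weights: prior * likelihood / proposal (up to a constant)
    logw = log_lik(x) + log_gauss(x, mu_b, cov_b) - log_gauss(x, mu_q, cov_q)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w**2)   # effective sample size, worth monitoring
    return w @ x, w, ess       # posterior mean estimate, weights, ESS
```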

  12. Technical note: Instantaneous sampling intervals validated from continuous video observation for behavioral recording of feedlot lambs.

    Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L

    2017-11-01

    When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set, whereas instantaneous sampling can provide similar results while increasing the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined if a time interval accurately estimated behaviors: 1) R² ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute toward the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
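
    The three regression criteria translate directly into a small validation routine. The sketch below is our illustration, not the authors' code: it assumes each lamb's behavior is stored as a 1-min binary record over the 840-min observation window (names and data layout are hypothetical), and it requires SciPy ≥ 1.6 for `intercept_stderr`.

```python
import numpy as np
from scipy import stats

def validate_interval(records, interval_min):
    """records: list of 840-element 0/1 arrays (1 = behavior seen that
    minute), one per lamb; interval_min: scan interval to test (minutes)."""
    true_budget = np.array([np.asarray(r).mean() for r in records])
    scan_budget = np.array([np.asarray(r)[::interval_min].mean()
                            for r in records])
    res = stats.linregress(scan_budget, true_budget)
    n = len(records)
    # Criterion 1: R^2 >= 0.90
    ok_r2 = res.rvalue**2 >= 0.90
    # Criterion 2: slope not statistically different from 1 (P > 0.05)
    p_slope = 2 * stats.t.sf(abs((res.slope - 1.0) / res.stderr), df=n - 2)
    # Criterion 3: intercept not statistically different from 0 (P > 0.05)
    p_icpt = 2 * stats.t.sf(abs(res.intercept / res.intercept_stderr),
                            df=n - 2)
    return ok_r2 and p_slope > 0.05 and p_icpt > 0.05
```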

  13. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of Poisson statistics.

    De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos

    2014-06-01

    Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method based on Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR, using dilutions of an integration standard and samples from 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need for a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
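
    The Poisson logic can be sketched compactly: if integration events are distributed randomly across replicates, the probability that a replicate is negative is exp(-λ), so λ can be recovered from the observed fraction of negative reactions, with a CI propagated from the binomial count. The following is our illustrative sketch of that principle, not the published assay code, and it assumes at least one negative and one positive replicate.

```python
import math
from scipy import stats

def poisson_quantify(n_replicates, n_negative, conf=0.95):
    """Mean integration events per reaction from the count of negative
    replicates: P(negative) = exp(-lam)  =>  lam = -ln(p_negative)."""
    p_neg = n_negative / n_replicates
    lam = -math.log(p_neg)
    # Clopper-Pearson CI on p_neg, propagated through -log (monotone)
    a = (1 - conf) / 2
    lo = stats.beta.ppf(a, n_negative, n_replicates - n_negative + 1)
    hi = stats.beta.ppf(1 - a, n_negative + 1, n_replicates - n_negative)
    return lam, -math.log(hi), -math.log(lo)  # estimate, lower, upper

# e.g. 12 negative reactions out of 42 replicates:
print(poisson_quantify(42, 12))
```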

  14. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to the static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and an operational variational data assimilation system, to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions and to allow more observations to be assimilated without the need for strict background checks that would eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the

  15. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier algorithms have drawbacks: the optimization loop comprises three phases and relies on empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select points located in the feasible region with high model uncertainty, as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not clustered within extremely small regions, as in super-EGO. The performance of the proposed method, including the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
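
    For illustration, the two ingredients such a criterion balances can be written down in a few lines. The sketch below shows a standard expected-improvement infill criterion and a simple constraint-boundary criterion for a kriging surrogate; the abstract does not specify how the united criterion weights them by the mean squared error, so the weighting shown is our guess, and all names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sd, f_best):
    """Classic EI for minimization; mu, sd are the surrogate's predictive
    mean and standard deviation at candidate points."""
    sd = np.maximum(sd, 1e-12)
    z = (f_best - mu) / sd
    return (f_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

def boundary_criterion(g_mu, g_sd):
    """Largest where the constraint surrogate g(x) is most uncertain about
    the sign of g, i.e. near the constraint boundary g(x) = 0."""
    g_sd = np.maximum(g_sd, 1e-12)
    return norm.pdf(g_mu / g_sd)

def united_criterion(mu, sd, f_best, g_mu, g_sd, mse, mse_max):
    """Illustrative blend: lean on exploration (EI) where the surrogate's
    MSE is large, and on boundary refinement where it is small."""
    w = np.clip(mse / mse_max, 0.0, 1.0)
    return (w * expected_improvement(mu, sd, f_best)
            + (1.0 - w) * boundary_criterion(g_mu, g_sd))
```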

  16. Refilling process in the plasmasphere: a 3-D statistical characterization based on Cluster density observations

    G. Lointier

    2013-02-01

    The Cluster mission offers an excellent opportunity to investigate the evolution of the plasma population in a large part of the inner magnetosphere, explored near the orbit's perigee, over a complete solar cycle. The WHISPER sounder, on board each satellite of the mission, is particularly suitable for studying the electron density in this region, between 0.2 and 80 cm−3. Compiling WHISPER observations during 1339 perigee passes distributed over more than three years of the Cluster mission, we present first results of a statistical analysis dedicated to the study of electron density morphology and dynamics along and across magnetic field lines between L = 2 and L = 10. In this study, we examine a specific topic: the refilling of the plasmasphere and trough regions during extended periods of quiet magnetic conditions. To do so, we survey the evolution of the ap index during the days preceding each perigee crossing and sort the electron density profiles along the orbit into three classes, namely after respectively less than 2 days, between 2 and 4 days, and more than 4 days of quiet magnetic conditions (ap ≤ 15 nT) following an active episode (ap > 15 nT). This leads to three independent data subsets. Comparisons between density distributions in the 3-D plasmasphere and trough regions at the three stages of quiet magnetosphere provide novel views of the distribution of matter inside the inner magnetosphere during several days of low activity. Clear signatures of a refilling process inside an expanding plasmasphere in formation are noted. A plasmapause-like boundary, at L ~ 6 for all MLT sectors, is formed after 3 to 4 days and expands somewhat further after that. In the outer part of the plasmasphere (L ~ 8), latitudinal profiles of median density values vary essentially according to the MLT sector considered rather than according to the refilling duration. The shape of these density profiles indicates that magnetic flux tubes are not

  17. Observed ozone exceedances in Italy: statistical analysis and modelling in the period 2002-2015

    Falasca, Serena; Curci, Gabriele; Candeloro, Luca; Conte, Annamaria; Ippoliti, Carla

    2017-04-01

    concentrations. On the other hand, high-temperature events have similar duration and higher mean temperature with respect to recent years, pointing out that temperature is not the only driver of high-ozone events. The statistical model confirms a significant impact of the meteorological variables (positive for temperature and pressure, negative for humidity and wind speed) on the probability of ozone events. Significant predictors are also the altitude (negative) and the number of inhabitants (positive). The observed recent decreasing trend is explained by the introduction of the Euro regulations, rather than by natural variability. However, we find an inversion of the trend for the more recent period under Euro 6 (from September 2014), but we cautiously await confirmation from additional data, at least for the year 2016.

  18. Statistical power to detect genetic (co)variance of complex traits using SNP data in unrelated samples.

    Peter M Visscher

    2014-04-01

    We have recently developed analysis methods (GREML) to estimate the genetic variance of a complex trait/disease and the genetic correlation between two complex traits/diseases using genome-wide single nucleotide polymorphism (SNP) data in unrelated individuals. Here we use analytical derivations and simulations to quantify the sampling variance of the estimate of the proportion of phenotypic variance captured by all SNPs for quantitative traits and case-control studies. We also derive the approximate sampling variance of the estimate of a genetic correlation in a bivariate analysis, when two complex traits are either measured on the same or different individuals. We show that the sampling variance is inversely proportional to the number of pairwise contrasts in the analysis and to the variance in SNP-derived genetic relationships. For bivariate analysis, the sampling variance of the genetic correlation additionally depends on the harmonic mean of the proportion of variance explained by the SNPs for the two traits and the genetic correlation between the traits, and depends on the phenotypic correlation when the traits are measured on the same individuals. We provide an online tool for calculating the power of detecting genetic (co)variation using genome-wide SNP data. The new theory and online tool will be helpful in planning experimental designs to estimate the missing heritability that has not yet been fully revealed through genome-wide association studies, and to estimate the genetic overlap between complex traits (diseases), in particular when the traits (diseases) are not measured on the same samples.
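
    The headline result, that the sampling variance scales with the inverse of the squared sample size times the variance in SNP-derived relationships, lends itself to a back-of-the-envelope calculator. The sketch below is our reading of that approximation (roughly var(ĥ²) ≈ 2 / (n² var(A_ij)), with var(A_ij) ≈ 2×10⁻⁵ often quoted for common SNPs in unrelated samples); it is not the authors' online tool, and the default constant is an assumption.

```python
import math

def se_h2_snp(n, var_rel=2e-5):
    """Approximate SE of SNP-based heritability from n unrelated
    individuals; var_rel is the variance of the off-diagonal SNP-derived
    relationships. SE ~ sqrt(2 / (n^2 * var_rel)), i.e. ~316/n here."""
    return math.sqrt(2.0 / (n**2 * var_rel))

def n_for_target_se(target_se, var_rel=2e-5):
    """Sample size needed to reach a target standard error."""
    return math.ceil(math.sqrt(2.0 / var_rel) / target_se)

print(se_h2_snp(4000))        # ~0.079
print(n_for_target_se(0.05))  # ~6325 individuals
```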

  19. The Orientation of Gastric Biopsy Samples Improves the Inter-observer Agreement of the OLGA Staging System.

    Cotruta, Bogdan; Gheorghe, Cristian; Iacob, Razvan; Dumbrava, Mona; Radu, Cristina; Bancila, Ion; Becheanu, Gabriel

    2017-12-01

    Evaluation of the severity and extension of gastric atrophy and intestinal metaplasia is recommended to identify subjects at high risk for gastric cancer. The inter-observer agreement for the assessment of gastric atrophy is reported to be low. The aim of the study was to evaluate the inter-observer agreement for the assessment of severity and extension of gastric atrophy using oriented and unoriented gastric biopsy samples. Furthermore, the quality of biopsy specimens in oriented and unoriented samples was analyzed. A total of 35 subjects with dyspeptic symptoms referred for gastrointestinal endoscopy who agreed to enter the study were prospectively enrolled. The OLGA/OLGIM gastric biopsy protocol was used. From each subject two sets of biopsies were obtained (four from the antrum, two oriented and two unoriented; two from the gastric incisure, one oriented and one unoriented; four from the gastric body, two oriented and two unoriented). The orientation of the biopsy samples was performed using nitrocellulose filters (Endokit®, BioOptica, Milan, Italy). The samples were blindly examined by two experienced pathologists. Inter-observer agreement was evaluated using the kappa statistic for inter-rater agreement. The quality of the histopathology specimens, judged by the identification of the lamina propria, was compared between oriented and unoriented samples. Samples with detectable lamina propria mucosae were defined as good-quality specimens. Categorical data were analyzed using the chi-square test, and a two-sided p value <0.05 was considered statistically significant. A total of 350 biopsy samples were analyzed (175 oriented / 175 unoriented). The kappa index values for oriented/unoriented OLGA 0/I/II/III and IV stages were 0.62/0.13, 0.70/0.20, 0.61/0.06, 0.62/0.46, and 0.77/0.50, respectively. For OLGIM 0/I/II/III stages the kappa index values for oriented/unoriented samples were 0.83/0.83, 0.88/0.89, 0.70/0.88 and 0.83/1, respectively. No case of OLGIM IV

  20. Statistical properties of the surface velocity field in the northern Gulf of Mexico sampled by GLAD drifters

    Mariano, A.J.; Ryan, E.H.; Huntley, H.S.; Laurindo, L.C.; Coelho, E.; Ozgokmen, TM; Berta, M.; Bogucki, D; Chen, S.S.; Curcic, M.; Drouin, K.L.; Gough, M; Haus, BK; Haza, A.C.; Hogan, P; Iskandarani, M; Jacobs, G; Kirwan Jr., A.D.; Laxague, N; Lipphardt Jr., B.; Magaldi, M.G.; Novelli, G.; Reniers, A.J.H.M.; Restrepo, J.M.; Smith, C; Valle-Levinson, A.; Wei, M.

    2016-01-01

    The Grand LAgrangian Deployment (GLAD) used multiscale sampling and GPS technology to observe time series of drifter positions with initial drifter separation of O(100 m) to O(10 km), and nominal 5 min sampling, during the summer and fall of 2012 in the northern Gulf of Mexico. Histograms of the

  1. Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality

    Hirsch, Robert M.

    1988-01-01

    This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
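
    As a concrete illustration of the estimator at the center of this paper: the Hodges-Lehmann estimate of a step trend is the median of all pairwise differences between post-change and pre-change observations, and a seasonal variant forms those differences only within seasons before pooling. The sketch below is our own minimal rendering of that idea, not the paper's code.

```python
import numpy as np

def hodges_lehmann_step(before, after):
    """Median of all pairwise differences after[i] - before[j]."""
    diffs = np.subtract.outer(np.asarray(after), np.asarray(before))
    return np.median(diffs)

def seasonal_hodges_lehmann(before_by_season, after_by_season):
    """Seasonal variant: pairwise differences are formed only within the
    same season (e.g. month) and pooled before taking the median, so the
    seasonal cycle does not masquerade as a step change."""
    pooled = [np.subtract.outer(np.asarray(a), np.asarray(b)).ravel()
              for b, a in zip(before_by_season, after_by_season)]
    return np.median(np.concatenate(pooled))

# e.g. monthly total phosphorus, two years before vs. two years after:
# seasonal_hodges_lehmann(monthly_before, monthly_after)
```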

  2. Statistical analysis of biomechanical properties of the adult skull and age-related structural changes by sex in a Japanese forensic sample.

    Torimitsu, Suguru; Nishida, Yoshifumi; Takano, Tachio; Koizumi, Yoshinori; Makino, Yohsuke; Yajima, Daisuke; Hayakawa, Mutsumi; Inokuchi, Go; Motomura, Ayumi; Chiba, Fumiko; Otsuka, Katsura; Kobayashi, Kazuhiro; Odo, Yuriko; Iwase, Hirotaro

    2014-01-01

    The purpose of this research was to investigate the biomechanical properties of the adult human skull and the structural changes that occur with age in both sexes. The heads of 94 Japanese cadavers (54 male, 40 female) autopsied in our department were used in this research. A total of 376 cranial samples, four from each skull, were collected. Sample fracture load was measured by a bending test. A statistically significant negative correlation between sample fracture load and cadaver age was found. This indicates that the stiffness of cranial bones in Japanese individuals decreases with age, and the risk of skull fracture thus probably increases with age. Prior to the bending test, the sample mass, the sample thickness, the ratio of sample thickness to cadaver stature (ST/CS), and the sample density were measured and calculated. Significant negative correlations between cadaver age and sample thickness, ST/CS, and sample density were observed only among the female samples. Computed tomographic (CT) images of 358 cranial samples were available. The computed tomography value (CT value) of cancellous bone, a quantitative scale for radiodensity, as well as the cancellous bone thickness and cortical bone thickness, were measured and calculated. Significant negative correlations between cadaver age and both the CT value and cortical bone thickness were observed only among the female samples. These findings suggest that the skull is substantially affected by the decreased bone metabolism resulting from osteoporosis. Therefore, osteoporosis prevention and treatment may increase cranial stiffness and reinforce the skull structure, leading to a decrease in the risk of skull fractures. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Statistical inference for the additive hazards model under outcome-dependent sampling.

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo

    2015-09-01

    Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk from radon exposure.

  4. Relationship between accuracy and number of samples on statistical quantity and contour map of environmental gamma-ray dose rate. Example of random sampling

    Matsuda, Hideharu; Minato, Susumu

    2002-01-01

    The accuracy of statistical quantities, such as the mean value, and of contour maps obtained by measurement of the environmental gamma-ray dose rate was evaluated by random sampling from 5 different model distribution maps generated using the mean slope, -1.3, of power spectra calculated from actually measured values. The underlying values were derived from 58 natural gamma dose rate data sets reported worldwide, with means ranging over 10-100 nGy/h and areas over 10⁻³-10⁷ km². The accuracy of the mean value was found to be around ±7% even for 60 or 80 samplings (the most frequent numbers), and the standard deviation was estimated with an accuracy within 1/4-1/3 of the means. The correlation coefficient of the frequency distribution was found to be 0.860 or more for 200-400 samplings (the most frequent numbers), but for the contour map only 0.502-0.770. (K.H.)
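
    The relationship between the number of random samples and the accuracy of the estimated mean is easy to reproduce in a Monte Carlo spirit. The sketch below is our illustration, not the paper's method: it draws random samples from a synthetic dose-rate field and reports the 95th-percentile relative error of the sample mean, with a lognormal field standing in for the paper's spectrum-based model maps.

```python
import numpy as np

rng = np.random.default_rng(1)

def sampling_accuracy(field, n_samples, n_trials=1000):
    """95th-percentile relative error of the mean of n random samples
    drawn (without replacement) from a gridded dose-rate field."""
    values = np.asarray(field).ravel()
    true_mean = values.mean()
    means = np.array([rng.choice(values, size=n_samples,
                                 replace=False).mean()
                      for _ in range(n_trials)])
    return np.percentile(np.abs(means - true_mean) / true_mean, 95)

# Synthetic stand-in field; try the sample sizes discussed above:
field = rng.lognormal(mean=3.0, sigma=0.3, size=(100, 100))
for n in (60, 80, 200, 400):
    print(n, sampling_accuracy(field, n))
```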

  5. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
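
    An a priori power calculation of the kind the paper's tool performs can be approximated with standard two-sample t-test power machinery. The sketch below is our stand-in, not the published tool: it converts a 10% group difference into Cohen's d from an assumed mean and SD of the morphometric measure, and the example values are purely illustrative.

```python
from statsmodels.stats.power import TTestIndPower

def n_per_group(measure_mean, measure_sd, pct_diff=0.10,
                power=0.8, alpha=0.05):
    """Subjects per group needed to detect a pct_diff group difference
    in a morphometric measure with a two-sample t-test."""
    effect_size = (pct_diff * measure_mean) / measure_sd  # Cohen's d
    return TTestIndPower().solve_power(effect_size=effect_size,
                                       power=power, alpha=alpha)

# e.g. cortical thickness ~2.5 mm with SD 0.15 mm (illustrative values):
print(n_per_group(2.5, 0.15))
```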

  6. Transit timing observations from Kepler. VI. Potentially interesting candidate systems from fourier-based statistical tests

    Steffen, J.H.; Ford, E.B.; Rowe, J.F.

    2012-01-01

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  7. TRANSIT TIMING OBSERVATIONS FROM KEPLER. VI. POTENTIALLY INTERESTING CANDIDATE SYSTEMS FROM FOURIER-BASED STATISTICAL TESTS

    Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.; Borucki, William J.; Bryson, Steve; Caldwell, Douglas A.; Jenkins, Jon M.; Koch, David G.; Sanderfer, Dwight T.; Seader, Shawn; Twicken, Joseph D.; Fabrycky, Daniel C.; Holman, Matthew J.; Welsh, William F.; Batalha, Natalie M.; Ciardi, David R.; Kjeldsen, Hans; Prša, Andrej

    2012-01-01

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  8. Dynamical 'in situ' observation of biological samples using variable pressure scanning electron microscope

    Nedela, V

    2008-01-01

    The topic of this work is the possibility of in-situ observation of non-conductive biological samples, free of charging artefacts, under dynamically changing environmental conditions. The observed biological sample, the tongue of a rat, was placed on a cooled Peltier stage. We studied the visibility of the topographical structure during transitions between the liquid and gas states of water in the specimen chamber of the VP SEM.

  9. Statistically Optimized Inversion Algorithm for Enhanced Retrieval of Aerosol Properties from Spectral Multi-Angle Polarimetric Satellite Observations

    Dubovik, O; Herman, M.; Holdak, A.; Lapyonok, T.; Taure, D.; Deuze, J. L.; Ducos, F.; Sinyuk, A.

    2011-01-01

    The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns), which is not common in satellite observations. The POLDER imager on board the PARASOL microsatellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for profound utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as a statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels where absorption by gas is minimal. The total number of such observations by PARASOL always exceeds a hundred over each pixel, and the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at the retrieval of an extended set of parameters affecting the measured radiation.

  10. Adaptive list sequential sampling method for population-based observational studies

    Hof, Michel H.; Ravelli, Anita C. J.; Zwinderman, Aeilko H.

    2014-01-01

    In population-based observational studies, non-participation and delayed response to the invitation to participate are complications that often arise during the recruitment of a sample. When both are not properly dealt with, the composition of the sample can be different from the desired

  11. Observed Characteristics and Teacher Quality: Impacts of Sample Selection on a Value Added Model

    Winters, Marcus A.; Dixon, Bruce L.; Greene, Jay P.

    2012-01-01

    We measure the impact of observed teacher characteristics on student math and reading proficiency using a rich dataset from Florida. We expand upon prior work by accounting directly for nonrandom attrition of teachers from the classroom in a sample selection framework. We find evidence that sample selection is present in the estimation of the…

  12. An empirical test of Maslow's theory of need hierarchy using hologeistic comparison by statistical sampling.

    Davis-Sharts, J

    1986-10-01

    Maslow's hierarchy of basic human needs provides a major theoretical framework in nursing science. The purpose of this study was to empirically test Maslow's need theory, specifically at the levels of physiological and security needs, using a hologeistic comparative method. Thirty cultures taken from the 60 cultural units in the Health Relations Area Files (HRAF) Probability Sample were found to have data available for examining hypotheses about thermoregulatory (physiological) and protective (security) behaviors practiced prior to sleep onset. The findings demonstrate there is initial worldwide empirical evidence to support Maslow's need hierarchy.

  13. Some properties of the representation of the quasilocal observables in statistical mechanics and quantum field theory

    Testard, D.; Centre National de la Recherche Scientifique, 13 - Marseille

    1977-09-01

    For a finite non-zero temperature state in statistical mechanics, it is proved that the factor obtained in the corresponding representation of the quasilocal algebra has the property of Araki. The same result also holds for the 'wedge algebras' of a hermitian scalar Wightman field.

  14. Adaptive sampling based on the cumulative distribution function of order statistics to delineate heavy-metal contaminated soils using kriging

    Juang, K.-W.; Lee, D.-Y.; Teng, Y.-L.

    2005-01-01

    Correctly classifying 'contaminated' areas in soils, based on the threshold for a contaminated site, is important for determining effective clean-up actions. Pollutant mapping by means of kriging is increasingly being used for the delineation of contaminated soils. However, areas where the kriged pollutant concentrations are close to the threshold have a high probability of being misclassified. In order to reduce the misclassification due to over- or under-estimation from kriging, an adaptive sampling method using the cumulative distribution function of order statistics (CDFOS) was developed to draw additional samples for delineating contaminated soils while kriging. A heavy-metal contaminated site in Hsinchu, Taiwan was used to illustrate this approach. The results showed that, compared with random sampling, adaptive sampling using CDFOS reduced the kriging estimation errors and misclassification rates, and thus would appear to be a better choice than random sampling when additional sampling is required for delineating the 'contaminated' areas. - A sampling approach was derived for drawing additional samples while kriging
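
    The intuition behind adaptive sampling here, that locations whose kriged estimates sit close to the threshold are the ones most likely to be misclassified, can be sketched in a few lines. The function below is our simplified stand-in for the CDFOS criterion, not the paper's method; it ranks candidate locations by how few kriging standard deviations separate the estimate from the threshold.

```python
import numpy as np

def additional_sample_sites(z_kriged, krige_sd, threshold, n_new):
    """Return indices of the n_new candidate locations whose kriged
    estimates are most ambiguous relative to the contamination threshold
    (smallest |estimate - threshold| in kriging-SD units)."""
    z = np.asarray(z_kriged, dtype=float)
    sd = np.asarray(krige_sd, dtype=float)
    ambiguity = np.abs(z - threshold) / np.maximum(sd, 1e-12)
    return np.argsort(ambiguity)[:n_new]
```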

  15. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  16. [Contradiction and intention of actual situation and statistical observation on home custody of mental patients].

    Kanekawa, Hideo

    2012-01-01

    Actual Situation and Statistical Observation on Home Custody of Mental Patients (1918) by Kure and Kashida has diverse content but contains many contradictions. The book is a record of investigations performed by 15 psychiatrists regarding the home custody of mental patients in 15 prefectures between 1910 and 1916. It is written in archaic Japanese and contains a mixture of old Kanji characters and Katakana, so few people have read the entire book in recent years. We thoroughly read the book over 2 years and presented the results of our investigation and analysis. The contents were initially published in the Tokyo Journal of Medical Sciences as a series of 4 articles, and published as a book in 1918. The Department of the Interior distributed 100 copies of the book to relevant personnel. Until its dissolution in 1947, the Department of the Interior included the Police Department and had a great deal of authority. The Health and Welfare Ministry became independent from the Department of the Interior in 1938. Therefore, mental institutions were under the supervision of the police force for many years. At the time, an important task for police officers was to search for infectious disease patients and to seclude and restrain them. Thus, home custody of mental patients was also supervised under the direction of the Police Department. The book is a record of an external investigation performed by psychiatrists on home custody supervised by the police. When investigating the conditions, one of the psychiatrists obtained a copy of "Documents for mental patients under confinement" at the local police station. These documents included records of hearings by the police, applications for confinement submitted by family members, and detailed specifications and drawings of the confinement room. With a local photographer, they traveled deep into the mountains to investigate the conditions under which mental patients were living. The book

  17. An X-Ray/SDSS Sample: Observational Characterization of The Outflowing Gas

    Perna, Michele; Brusa, M.; Lanzuisi, G.; Mignoli, M.

    2016-10-01

    Powerful ionised AGN-driven outflows, commonly detected both locally and at high redshift, are invoked to contribute to the co-evolution of SMBHs and galaxies through feedback phenomena. Our recent works (Brusa+2015; 2016; Perna+2015a,b) have shown that the XMM-COSMOS targets with evidence of outflows collected so far (~10 sources) appear to be associated with low X-ray bolometric corrections (Lbol/LX ~ 18), in spite of their spread in obscuration, in their locations on the SFR-Mstar diagram, and in their radio emission. Higher statistical significance is required to validate a connection between outflow phenomena and X-ray loudness. Moreover, in order to validate their binding nature to the galaxy fate, it is crucial to correctly determine the outflow energetics. This requires time-consuming integral field spectroscopic (IFS) observations, which are, at present, mostly limited to high-luminosity objects. The study of SDSS data offers a complementary strategy to IFS efforts. I will present a physical and demographic characterization of the AGN-galaxy system during the feedback phase, obtained by studying a sample of 500 X-ray/SDSS AGNs, combining optical spectral properties (velocity dispersion) and X-ray properties (intrinsic X-ray luminosity, obscuration and X-ray bolometric correction) to determine what drives ionised winds. Several diagnostic line ratios have been used to infer the physical properties of the ionised outflowing gas. The knowledge of these properties can reduce the present uncertainties in the outflow energetics by a factor of ten, improving our understanding of the AGN outflow phenomenon and its impact on galaxy evolution.

  18. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several fields of industry, products come from more than one production line, and comparative life tests are required. This problem requires sampling from the different production lines, from which the joint censoring scheme arises. In this article we consider the Pareto lifetime distribution with a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.

  19. How does observation uncertainty influence which stream water samples are most informative for model calibration?

    Wang, Ling; van Meerveld, Ilja; Seibert, Jan

    2016-04-01

    Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However when overflow

  20. Observer-Based Stabilization of Spacecraft Rendezvous with Variable Sampling and Sensor Nonlinearity

    Zhuoshi Li

    2013-01-01

    This paper addresses the observer-based control problem of spacecraft rendezvous with a nonuniform sampling period. The relative dynamic model is based on the classical Clohessy-Wiltshire equation, and sensor nonlinearity and sampling are considered together in a unified framework. The purpose of this paper is to perform observer-based controller synthesis using sampled and saturated output measurements, such that the resulting closed-loop system is exponentially stable. A time-dependent Lyapunov functional is developed which depends on time and on the upper bound of the sampling period, and which does not grow along the input update times. The controller design problem is solved in terms of the linear matrix inequality method, and the obtained results are less conservative than those obtained using traditional Lyapunov functionals. Finally, a numerical simulation example is built to show the validity of the developed sampled-data control strategy.

  1. Some statistical and sampling needs for detecting spills or migration at commercial low-level radioactive waste disposal sites

    Thomas, J.M.; Eberhardt, L.L.; Skalski, J.R.; Simmons, M.A.

    1984-05-01

    As part of a larger study funded by the US Nuclear Regulatory Commission we have been investigating field sampling strategies and compositing as a means of detecting spills or migration at commercial low-level radioactive waste disposal sites. The overall project is designed to produce information for developing guidance on implementing 10 CFR part 61. Compositing (pooling samples) for detection is discussed first, followed by our development of a statistical test to allow a decision as to whether any component of a composite exceeds a prescribed maximum acceptable level. The question of optimal field sampling designs and an Apple computer program designed to show the difficulties in constructing efficient field designs and using compositing schemes are considered. 6 references, 3 figures, 3 tables
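
    A back-of-the-envelope version of the compositing decision can illustrate the idea. Since an n-aliquot composite is the mean of its components and concentrations are nonnegative, a composite at or below L/n guarantees that no single component exceeds the maximum acceptable level L; values above L/n make an exceedance possible. The sketch below is our simplified stand-in for the statistical test developed in the report, with an assumed normal measurement error.

```python
from scipy.stats import norm

def composite_flags_exceedance(composite_meas, n, limit, meas_sd,
                               alpha=0.05):
    """One-sided screen on an n-sample composite: if the composite is at
    or below limit/n, no component can exceed `limit`; flag composites
    whose measured value significantly exceeds that trigger."""
    trigger = limit / n
    z = (composite_meas - trigger) / meas_sd
    return z > norm.ppf(1 - alpha)  # True -> retest individual samples
```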

  2. Statistical survey of widely spread out solar electron events observed with STEREO and ACE with special attention to anisotropies

    Dresing, N.; Gómez-Herrero, R.; Heber, B.; Klassen, A.; Malandraki, O.; Dröge, W.; Kartavykh, Y.

    2014-07-01

    Context. In February 2011, the two STEREO spacecraft reached a separation of 180 degrees in longitude, offering a complete view of the Sun for the first time ever. When the full solar surface is visible, the source active regions of solar energetic particle (SEP) events can be identified unambiguously. STEREO, in combination with near-Earth observatories such as ACE or SOHO, provides three well-separated viewpoints, which form an unprecedented platform from which to investigate the longitudinal variations of SEP events. Aims: We present an ensemble of SEP events that were observed between 2009 and mid-2013 by at least two spacecraft and that show a remarkably wide particle spread in longitude (wide-spread events). The main selection criterion for these events was a longitudinal separation of at least 80 degrees between the active region and the spacecraft magnetic footpoint for the most widely separated spacecraft. We investigate the events statistically in terms of peak intensities, onset delays, and rise times, and determine the longitudinal spread of the events, i.e., the range in longitude filled by SEPs during the events. Energetic electron anisotropies are investigated to distinguish the source and transport mechanisms that lead to the observed wide particle spreads. Methods: According to the anisotropy distributions, we divided the events into three classes corresponding to different source and transport scenarios. One potential mechanism for wide-spread events is efficient perpendicular transport in the interplanetary medium; a competing scenario is a wide particle spread occurring close to the Sun. In the latter case, the observations at 1 AU during the early phase of the events are expected to show significant anisotropies because of the wide injection range at the Sun and particle focusing during the outward propagation, while in the former case only low anisotropies are anticipated. Results: We find events for both of these scenarios in our sample that match the

  3. STATISTICAL ANALYSIS OF FLARING LOOPS OBSERVED BY NOBEYAMA RADIOHELIOGRAPH. I. COMPARISON OF LOOPTOP AND FOOTPOINTS

    Huang Guangli; Nakajima, Hiroshi

    2009-01-01

    Twenty-four events with looplike structures at 17 and 34 GHz are selected from the flare list of the Nobeyama Radioheliograph. We obtained the brightness temperatures at 17 and 34 GHz, the polarization degrees at 17 GHz, and the power-law spectral indices at the radio peak time for one looptop (LT) and two footpoints (FPs) of each event. We also calculated the magnetic field strengths and the column depths of nonthermal electrons in the LT and FPs of each event, using equations modified from the gyrosynchrotron equations of Dulk. The main statistical results from those data are summarized as follows. (1) The spectral indices, the brightness temperatures at 17 and 34 GHz, the polarization degrees at 17 GHz, the calculated magnetic field strengths, and the calculated column densities of nonthermal electrons are always positively correlated between the LT and the two FPs of the selected events. (2) About one-half of the events have the brightest LT at 17 and 34 GHz. (3) The spectral indices in the two FPs are larger (softer) than those in the corresponding LT in most events. (4) The calculated magnetic field strengths in the two FPs are always larger than those in the corresponding LT. (5) Most of the events have the same positive or negative polarization sense in the LT and the two FPs. (6) The brightness temperatures at 17 and 34 GHz in each of the LT and the two FPs statistically decrease with their spectral indices and the calculated magnetic field strengths, but increase with their calculated column densities of nonthermal electrons. Moreover, we discuss possible causes of these statistical results.

  4. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.
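
    As a small companion to the concepts described above, the sketch below computes a confidence interval for a population mean from a sample, using the t-distribution (which approaches the standard normal for large samples). It is our illustration, not from the article, and the data are simulated.

```python
import numpy as np
from scipy import stats

def mean_ci(sample, confidence=0.95):
    """Confidence interval for the population mean from sample data."""
    x = np.asarray(sample, dtype=float)
    m, se = x.mean(), stats.sem(x)
    half = se * stats.t.ppf((1 + confidence) / 2, df=len(x) - 1)
    return m - half, m + half

# e.g. 30 simulated lengths of stay, in days:
rng = np.random.default_rng(42)
print(mean_ci(rng.normal(5.0, 1.5, size=30)))
```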

  5. Classic (Nonquantic) Algorithm for Observations and Measurements Based on Statistical Strategies of Particles Fields

    Savastru, D.; Dontu, Simona; Savastru, Roxana; Sterian, Andreea Rodica

    2013-01-01

    Our knowledge of our surroundings is acquired through observations and measurements, but both are affected by errors (noise). Therefore, one of the first tasks is to try to eliminate the noise by constructing instruments of high accuracy. However, any real observed and measured system is characterized by natural limits due to the deterministic nature of the measured information. The present work is dedicated to the identification of these limits. We have analyzed some algorithms for selection and ...

  6. PLANETARY CANDIDATES OBSERVED BY KEPLER IV: PLANET SAMPLE FROM Q1-Q8 (22 MONTHS)

    Burke, Christopher J.; Mullally, F.; Rowe, Jason F.; Thompson, Susan E.; Coughlin, Jeffrey L.; Caldwell, Douglas A.; Jenkins, Jon M.; Bryson, Stephen T.; Haas, Michael R.; Batalha, Natalie M.; Borucki, William J.; Christiansen, Jessie L.; Ciardi, David R.; Still, Martin; Barclay, Thomas; Chaplin, William J.; Clarke, Bruce D.; Cochran, William D.; Demory, Brice-Olivier; Esquerdo, Gilbert A.

    2014-01-01

    We provide updates to the Kepler planet candidate sample based upon nearly two years of high-precision photometry (i.e., Q1-Q8). From an initial list of nearly 13,400 threshold crossing events, 480 new host stars are identified from their flux time series as consistent with hosting transiting planets. Potential transit signals are subjected to further analysis using the pixel-level data, which allows background eclipsing binaries to be identified through small image position shifts during transit. We also re-evaluate Kepler Objects of Interest (KOIs) 1-1609, which were identified early in the mission, using substantially more data to test for background false positives and to find additional multiple systems. Combining the new and previous KOI samples, we provide updated parameters for 2738 Kepler planet candidates distributed across 2017 host stars. From the combined Kepler planet candidates, 472 are new from the Q1-Q8 data examined in this study. The new Kepler planet candidates represent ~40% of the sample with R_P ~ 1 R_Earth and ~40% of the low equilibrium temperature (T_eq < 300 K) sample. We review the known biases in the current sample of Kepler planet candidates relevant to evaluating planet population statistics with the current Kepler planet candidate sample.

  7. Statistical examination of the aerosols loading over Kano-Nigeria: the Satellite observation analysis

    Moses E. Emetere

    2016-07-01

    The problem of underestimating or overestimating the aerosol loading over Kano is fast becoming a global challenge. Health outcomes from the extensive effects of aerosol pollution have started manifesting in Kano. The aim of this research is to estimate the aerosol loading and retention over Kano. Thirteen years of aerosol optical depth (AOD) data were obtained from the Multi-angle Imaging SpectroRadiometer (MISR). Statistical tools, as well as an analytically derived model for aerosol loading, were used to obtain the aerosol retention and loading over the area. It was discovered that the average aerosol retention over Kano is 4.9%. The atmospheric constants over Kano were documented. Given the volume of aerosols over Kano, it is necessary to adapt the ITU model used for signal budgeting.

  8. Statistical characteristics of Doppler spectral width as observed by the conjugate SuperDARN radars

    K. Hosokawa

    We performed a statistical analysis of the occurrence distribution of Doppler spectral width around the dayside high-latitude ionosphere using data from the conjugate radar pair composed of the CUTLASS Iceland-East radar in the Northern Hemisphere and the SENSU Syowa-East radar in the Southern Hemisphere. Three types of spectral width distribution were identified: (1) an exponential-like distribution at the lower magnetic latitudes (below 72°), (2) a Gaussian-like distribution over a few degrees of magnetic latitude, centered on 78°, and (3) another type of distribution at the higher magnetic latitudes (above 80°). The first two are considered to represent geophysical regimes such as the LLBL and the cusp, respectively, because they are similar to the spectral width distributions within the LLBL and the cusp as classified by Baker et al. (1995). The distribution found above 80° magnetic latitude has been clarified for the first time in this study. This distribution has similarities to the exponential-like distribution in the lower-latitude part, although clear differences also exist in their characteristics. These three spectral width distributions are commonly identified in conjugate hemispheres. The latitudinal transition from one distribution to another exhibits basically the same trend between the two hemispheres. There is, however, an interhemispheric difference in the form of the distribution around the cusp latitudes, such that spectral width values obtained from Syowa-East are larger than those from Iceland-East. On the basis of the spectral width characteristics, the average locations of the cusp and the open/closed field line boundary are estimated statistically.

    Key words. Ionosphere (ionosphere-magnetosphere interactions; plasma convection) – Magnetospheric physics (magnetopause, cusp, and boundary layers)

  9. Comparative statistical analysis of carcinogenic and non-carcinogenic effects of uranium in groundwater samples from different regions of Punjab, India

    Saini, Komal; Singh, Parminder; Bajwa, Bikramjit Singh

    2016-01-01

    An LED fluorimeter has been used for microanalysis of uranium concentration in groundwater samples collected from six districts of South West (SW), West (W) and North East (NE) Punjab, India. The average uranium content in water samples of SW Punjab is observed to be higher than the WHO and USEPA recommended safe limit of 30 µg l⁻¹ as well as the AERB proposed limit of 60 µg l⁻¹. For the W and NE regions of Punjab, in contrast, the average uranium concentration was within the AERB recommended limit of 60 µg l⁻¹. The average value observed in SW Punjab is around 3–4 times that observed in W Punjab, and more than 17 times the average value observed in the NE region of Punjab. Carcinogenic as well as non-carcinogenic risks due to uranium have been statistically evaluated for each studied district. - Highlights: • Uranium levels in groundwater samples have been assessed in different regions of Punjab. • A comparative study of carcinogenic and non-carcinogenic effects of uranium has been done. • Wide variation has been found across different geological regions. • South West Punjab was found to be the worst affected by uranium contamination of its water. • For the west and north east regions of Punjab, uranium levels in groundwater lay within recommended safe limits.

  10. Continuous quality control of the blood sampling procedure using a structured observation scheme

    Seemann, T. L.; Nybo, M.

    2015-01-01

    Background: An important preanalytical factor is the blood sampling procedure and its adherence to the guidelines, i.e. CLSI and ISO 15189, in order to ensure a consistent quality of the blood collection. Therefore, it is critically important to introduce quality control on this part of the process. As suggested by the EFLM working group on the preanalytical phase, we introduced continuous quality control of the blood sampling procedure using a structured observation scheme to monitor the quality of blood sampling performed on an everyday basis. Materials and methods: Based on our own routines the EFLM ... Conclusion: It is possible to establish continuous quality control of blood sampling. It has been well accepted by the staff, and we have already been able to identify critical areas in the sampling process. We find that continuous auditing increases focus on the quality of blood collection, which ensures ...

  11. The Atmospheric Scanning Electron Microscope with open sample space observes dynamic phenomena in liquid or gas.

    Suga, Mitsuo; Nishiyama, Hidetoshi; Konyuba, Yuji; Iwamatsu, Shinnosuke; Watanabe, Yoshiyuki; Yoshiura, Chie; Ueda, Takumi; Sato, Chikara

    2011-12-01

    Although conventional electron microscopy (EM) requires samples to be in vacuum, most chemical and physical reactions occur in liquid or gas. The Atmospheric Scanning Electron Microscope (ASEM) can observe dynamic phenomena in liquid or gas under atmospheric pressure in real time. An electron-permeable window made of pressure-resistant 100 nm-thick silicon nitride (SiN) film, set into the bottom of the open ASEM sample dish, allows an electron beam to be projected from underneath the sample. A detector positioned below captures backscattered electrons. Using the ASEM, we observed the radiation-induced self-organization process of particles, as well as phenomena accompanying volume change, including evaporation-induced crystallization. Using the electrochemical ASEM dish, we observed tree-like electrochemical depositions on the cathode. In silver nitrate solution, we observed silver depositions near the cathode forming incidental internal voids. The heated ASEM dish allowed observation of patterns of contrast in melting and solidifying solder. Finally, to demonstrate its applicability for monitoring and control of industrial processes, silver paste and solder paste were examined at high throughput. High resolution, imaging speed, flexibility, adaptability, and ease of use facilitate the observation of previously difficult-to-image phenomena, and make the ASEM applicable to various fields.

  12. THE ORIGIN OF THE INFRARED EMISSION IN RADIO GALAXIES. II. ANALYSIS OF MID- TO FAR-INFRARED SPITZER OBSERVATIONS OF THE 2JY SAMPLE

    Dicken, D.; Tadhunter, C.; Axon, D.; Morganti, R.; Inskip, K. J.; Holt, J.; Delgado, R. Gonzalez; Groves, B.

    2009-01-01

    We present an analysis of deep mid- to far-infrared (MFIR) Spitzer photometric observations of the southern 2Jy sample of powerful radio sources (0.05 < z < 0.7), undertaking a statistical investigation of the links between radio jet, active galactic nucleus (AGN), starburst activity and MFIR emission.

  13. Development of the Large-Scale Statistical Analysis System of Satellites Observations Data with Grid Datafarm Architecture

    Yamamoto, K.; Murata, K.; Kimura, E.; Honda, R.

    2006-12-01

    ... number of files and the elapsed time, parallel and distributed processing shortened the elapsed time to one fifth of that of sequential processing. In another experiment, in which each file was smaller than 100 KB, sequential processing was faster; the elapsed time to scan one file was within one second, which implies that disk swapping took place on each node during parallel processing. We note that the operation became unstable when the number of files exceeded 1000. To overcome problem (iii), we developed an original data class. This class supports reading data files in various formats: it defines a schema for every type of data, encapsulates the structure of the data files, and converts them into a common internal format. In addition, the class provides a time re-sampling function, so users can easily convert multiple arrays with different time resolutions onto a common time grid. Finally, using the Gfarm, we achieved a high-performance environment for large-scale statistical data analyses. It should be noted that the present method is effective only when individual data files are sufficiently large. At present, we are restructuring the new Gfarm environment with 8 nodes, each with an Athlon 64 X2 dual-core 2 GHz CPU, 2 GB of memory and 1.2 TB of disk (RAID0). Our original class is to be implemented on the new Gfarm environment. In the present talk, we show the latest results from applying the present system to analyses of very large numbers of satellite observation data files.
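
    The time re-sampling function described above invites a short illustration. The sketch below is a minimal, hypothetical stand-in — it bin-averages two series with different native time resolutions onto a common grid — and is not the authors' Gfarm data class; the function name, the bin-averaging rule, and the NaN convention are all assumptions.

        import numpy as np

        def resample_to_common_grid(t, x, t_grid, dt):
            """Average the samples (t, x) into bins of width dt centred
            on t_grid; bins containing no samples are returned as NaN."""
            out = np.full(len(t_grid), np.nan)
            for i, tc in enumerate(t_grid):
                mask = (t >= tc - dt / 2) & (t < tc + dt / 2)
                if mask.any():
                    out[i] = x[mask].mean()
            return out

        # Two instruments sampled at 1 s and 4 s are re-gridded to 10 s bins.
        t1, x1 = np.arange(0, 100, 1.0), np.random.rand(100)
        t2, x2 = np.arange(0, 100, 4.0), np.random.rand(25)
        t_grid = np.arange(5.0, 100.0, 10.0)
        y1 = resample_to_common_grid(t1, x1, t_grid, 10.0)
        y2 = resample_to_common_grid(t2, x2, t_grid, 10.0)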

  14. Combining censored and uncensored data in a U-statistic: design and sample size implications for cell therapy research.

    Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O

    2011-01-01

    The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, requiring a less precise substitution in this subset of participants. A score function based on the U-statistic can address these issues of (1) intercurrent SCEs and (2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
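
    To make the scoring idea concrete, here is a bare-bones sketch of a pairwise (U-statistic-style) score under one common convention — an event outranks any continuous change, and a later event outranks an earlier one. The field names and the exact ranking hierarchy are assumptions, not necessarily the authors' rule.

        import numpy as np

        def pairwise_score(patients):
            """Score each patient against every other: +1 for a win, -1 for a
            loss, 0 for a tie. Events (SCEs) outrank the continuous endpoint;
            between two events, the later one is the better outcome."""
            n = len(patients)
            scores = np.zeros(n)
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    a, b = patients[i], patients[j]
                    if a['event'] and b['event']:
                        s = np.sign(a['event_time'] - b['event_time'])
                    elif a['event'] != b['event']:
                        s = -1.0 if a['event'] else 1.0
                    else:
                        s = np.sign(a['delta'] - b['delta'])
                    scores[i] += s
            return scores

        patients = [
            {'event': False, 'event_time': None, 'delta': 5.0},   # improved
            {'event': True,  'event_time': 90.0, 'delta': None},  # early SCE
            {'event': False, 'event_time': None, 'delta': -2.0},  # worsened
        ]
        print(pairwise_score(patients))  # [ 2., -2.,  0.]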

  15. A statistical approach for identifying the ionospheric footprint of magnetospheric boundaries from SuperDARN observations

    G. Lointier

    2008-02-01

    Full Text Available Identifying and tracking the projection of magnetospheric regions on the high-latitude ionosphere is of primary importance for studying the Solar Wind-Magnetosphere-Ionosphere system and for space weather applications. By its unique spatial coverage and temporal resolution, the Super Dual Auroral Radar Network (SuperDARN) provides key parameters, such as the Doppler spectral width, which allows the monitoring of the ionospheric footprint of some magnetospheric boundaries in near real-time. In this study, we present the first results of a statistical approach for monitoring these magnetospheric boundaries. The singular value decomposition is used as a data reduction tool to describe the backscattered echoes with a small set of parameters. One of these is strongly correlated with the Doppler spectral width, and can thus be used as a proxy for it. Based on this, we propose a Bayesian classifier for identifying the spectral width boundary, which is classically associated with the Polar Cap boundary. The results are in good agreement with previous studies. Two advantages of the method are the possibility of applying it in near real-time and its capacity to select the appropriate threshold level for the boundary detection.
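
    The two-step recipe — SVD for data reduction, then a Bayesian two-class decision on the resulting proxy — can be sketched as follows. The Gaussian class models, their parameters, and the 0.5 posterior threshold are illustrative assumptions; the paper's actual class definitions and priors may differ.

        import numpy as np

        def svd_proxy(X):
            """Project each observation onto the leading right-singular vector,
            reducing a (records x features) echo matrix to one parameter."""
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            return Xc @ Vt[0]

        def bayes_boundary(proxy, lats, mu0, sd0, mu1, sd1, prior1=0.5):
            """Two-class Gaussian Bayes rule on the proxy; returns the lowest
            latitude whose posterior favours class 1 (cusp-like echoes)."""
            def gauss(x, mu, sd):
                return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
            p1 = prior1 * gauss(proxy, mu1, sd1)
            p0 = (1.0 - prior1) * gauss(proxy, mu0, sd0)
            post1 = p1 / (p0 + p1)
            hits = lats[post1 > 0.5]
            return hits.min() if hits.size else None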

  16. Statistical analysis of midlatitude spread F using multi-station digisonde observations

    Bhaneja, P.; Earle, G. D.; Bullett, T. W.

    2018-01-01

    A comprehensive statistical study of midlatitude spread F (MSF) is presented for five midlatitude stations in the North American sector. These stations include Ramey AFB, Puerto Rico (18.5°N, 67.1°W, -14° declination angle), Wallops Island, Virginia (37.95°N, 75.5°W, -11° declination angle), Dyess, Texas (32.4°N, 99.8°W, 6.9° declination angle), Boulder, Colorado (40°N, 105.3°W, 10° declination angle), and Vandenberg AFB, California (34.8°N, 120.5°W, 13° declination angle). Pattern recognition algorithms are used to determine the presence of both range and frequency spread F. Data from 1996 to 2011 are analyzed, covering all of Solar Cycle 23 and the beginning of Solar Cycle 24. Variations with respect to season and solar activity are presented, including the effects of the extended minimum between cycles 23 and 24.

  17. CRITICAL HEIGHT FOR THE DESTABILIZATION OF SOLAR PROMINENCES: STATISTICAL RESULTS FROM STEREO OBSERVATIONS

    Liu Kai; Wang Yuming; Wang Shui; Shen Chenglong, E-mail: ymwang@ustc.edu.cn [CAS Key Laboratory of Geospace Environment, Department of Geophysics and Planetary Sciences, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2012-01-10

    At which height is a prominence inclined to be unstable, or where is the most probable critical height for prominence destabilization? This question was statistically studied based on 362 solar limb prominences well recognized by the Solar Limb Prominence Catcher and Tracker from 2007 April to the end of 2009. We found that about 71% were disrupted prominences (DPs), of which about 42% did not erupt successfully and about 89% experienced a sudden destabilization process. After a comprehensive analysis of the DPs, we discovered the following: (1) Most DPs become unstable at a height of 0.06-0.14 R_⊙ from the solar surface, and there are two most probable critical heights at which a prominence is very likely to become unstable: the first is 0.13 R_⊙ and the second is 0.19 R_⊙. (2) An upper limit for the erupting velocity of eruptive prominences (EPs) exists, which decreases following a power law with increasing height and mass; accordingly, the kinetic energy of EPs has an upper limit too, which decreases as the critical height increases. (3) Stable prominences are generally longer and heavier than DPs, and not higher than 0.4 R_⊙. (4) About 62% of the EPs were associated with coronal mass ejections (CMEs); but there is no difference in apparent properties between EPs associated with CMEs and those that are not.

  18. CRITICAL HEIGHT FOR THE DESTABILIZATION OF SOLAR PROMINENCES: STATISTICAL RESULTS FROM STEREO OBSERVATIONS

    Liu Kai; Wang Yuming; Wang Shui; Shen Chenglong

    2012-01-01

    At which height is a prominence inclined to be unstable, or where is the most probable critical height for prominence destabilization? This question was statistically studied based on 362 solar limb prominences well recognized by the Solar Limb Prominence Catcher and Tracker from 2007 April to the end of 2009. We found that about 71% were disrupted prominences (DPs), of which about 42% did not erupt successfully and about 89% experienced a sudden destabilization process. After a comprehensive analysis of the DPs, we discovered the following: (1) Most DPs become unstable at a height of 0.06-0.14 R_⊙ from the solar surface, and there are two most probable critical heights at which a prominence is very likely to become unstable: the first is 0.13 R_⊙ and the second is 0.19 R_⊙. (2) An upper limit for the erupting velocity of eruptive prominences (EPs) exists, which decreases following a power law with increasing height and mass; accordingly, the kinetic energy of EPs has an upper limit too, which decreases as the critical height increases. (3) Stable prominences are generally longer and heavier than DPs, and not higher than 0.4 R_⊙. (4) About 62% of the EPs were associated with coronal mass ejections (CMEs); but there is no difference in apparent properties between EPs associated with CMEs and those that are not.

  19. Fault diagnosis of downhole drilling incidents using adaptive observers and statistical change detection

    Willersrud, Anders; Blanke, Mogens; Imsland, Lars

    2015-01-01

    Downhole abnormal incidents during oil and gas drilling cause costly delays, and may also potentially lead to dangerous scenarios. Different incidents will cause changes to different parts of the physics of the process. Estimating the changes in physical parameters, and correlating these with changes expected from various defects, can be used to diagnose faults as they develop. This paper shows how estimated friction parameters and flow rates can detect and isolate the type of incident, as well as isolating the position of a defect. Estimates are shown to be subject to non-Gaussian, t-distributed noise, and a dedicated multivariate statistical change detection approach is used that detects and isolates faults by detecting simultaneous changes in estimated parameters and flow rates. The properties of the multivariate diagnosis method are analyzed, and it is shown how detection and false alarm probabilities are assessed and optimized using data-based learning to obtain thresholds for hypothesis testing. Data from a 1400 m horizontal flow loop are used to test the method, and successful diagnosis of the incidents drillstring washout (pipe leakage), lost circulation, gas influx, and drill bit plugging is demonstrated.

  20. Integrating observation and statistical forecasts over sub-Saharan Africa to support Famine Early Warning

    Funk, Chris; Verdin, James P.; Husak, Gregory

    2007-01-01

    Famine early warning in Africa presents unique challenges and rewards. Hydrologic extremes must be tracked and anticipated over complex and changing climate regimes. The successful anticipation and interpretation of hydrologic shocks can initiate effective government response, saving lives and softening the impacts of droughts and floods. While both monitoring and forecast technologies continue to advance, discontinuities between monitoring and forecast systems inhibit effective decision making. Monitoring systems typically rely on high-resolution satellite remote-sensed normalized difference vegetation index (NDVI) and rainfall imagery. Forecast systems provide information on a variety of scales and in a variety of formats. Non-meteorologists are often unable or unwilling to connect the dots between these disparate sources of information. To mitigate these problems, researchers at UCSB's Climate Hazard Group, NASA GIMMS and USGS/EROS are implementing a NASA-funded integrated decision support system that combines the monitoring of precipitation and NDVI with statistical one-to-three month forecasts. We present the monitoring/forecast system, assess its accuracy, and demonstrate its application in food-insecure sub-Saharan Africa.

  1. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    Pegg, E.C., E-mail: elise.pegg@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Mellon, S.J., E-mail: stephen.mellon@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Salmon, G. [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Alvand, A., E-mail: abtin.alvand@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Pandit, H., E-mail: hemant.pandit@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Murray, D.W., E-mail: david.murray@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Gill, H.S., E-mail: richie.gill@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom)

    2012-10-15

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes, including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers, and one observer took four sets of measurements, to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average increases of 27% and 11.2% for intra- and inter-observer reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Significantly less time (15 s) was required to take measurements using the ASM model than manually. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements.
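
    For readers who want to reproduce this kind of comparison, one standard inter-observer reliability coefficient is the intraclass correlation ICC(2,1) (two-way random effects, absolute agreement). The abstract does not state which coefficient was used, so the sketch below is a generic illustration, not the paper's method.

        import numpy as np

        def icc_2_1(Y):
            """ICC(2,1): two-way random effects, absolute agreement, single
            measurement, for an (n subjects x k observers) matrix."""
            Y = np.asarray(Y, dtype=float)
            n, k = Y.shape
            grand = Y.mean()
            row_m, col_m = Y.mean(axis=1), Y.mean(axis=0)
            msr = k * np.sum((row_m - grand) ** 2) / (n - 1)   # between subjects
            msc = n * np.sum((col_m - grand) ** 2) / (k - 1)   # between observers
            mse = np.sum((Y - row_m[:, None] - col_m[None, :] + grand) ** 2) \
                  / ((n - 1) * (k - 1))                        # residual
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)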

  2. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    Pegg, E.C.; Mellon, S.J.; Salmon, G.; Alvand, A.; Pandit, H.; Murray, D.W.; Gill, H.S.

    2012-01-01

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes, including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers, and one observer took four sets of measurements, to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average increases of 27% and 11.2% for intra- and inter-observer reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Significantly less time (15 s) was required to take measurements using the ASM model than manually. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements.

  3. Observation of statistics of screening for unruptured cerebral aneurysms in Tochigi Prefecture

    Matsumoto, Eiji; Shinoda, Souji; Masuzawa, Toshio; Nakamura, Kouichi

    2002-01-01

    Screening for unruptured cerebral aneurysms (UCAs) using magnetic resonance imaging (MRI) and angiography (MRA) is prevalent in Japan. To reveal the prevalence of UCAs found during screening, we collected the screening results for 1999 in Tochigi prefecture. In the prefecture, whose population was about 2 million, 26 institutions offered screening in 1999, and 5,222 persons had been screened, corresponding to 0.26% of all inhabitants of Tochigi prefecture. Of the 26 institutions, 24 cooperated in this study, and data were collected for 4,961 persons. We investigated the prevalence of UCAs and compared it with that of subarachnoid hemorrhage (SAH) in Japan using existing statistics. UCAs were found in 143 (2.9%) of the 4,961 cases, 69 men and 74 women, with a mean age of 59.2 years. The prevalence of UCAs at screening and the prevalence of SAH in Japan correlate, in that prevalence increases with age for both. However, after the age of 75, the prevalence of SAH decreases. People found with UCAs at screening were mainly in their 50s, but the number of those found with SAH increased gradually after that age. The rate of screening of women was lower than that of men, although both the prevalence of UCAs at screening and of SAH is higher in women than in men. We recommend that middle-aged persons, in their 40s and older, should request screening for UCAs. (author)

  4. Testing University Rankings Statistically: Why this Perhaps is not such a Good Idea after All. Some Reflections on Statistical Power, Effect Size, Random Sampling and Imaginary Populations

    Schneider, Jesper Wiborg

    2012-01-01

    In this paper we discuss and question the use of statistical significance tests in relation to university rankings, as recently suggested. We outline the assumptions behind, and interpretations of, statistical significance tests and relate this to examples from the recent SCImago Institutions Rankings.

  5. THE zCOSMOS-SINFONI PROJECT. I. SAMPLE SELECTION AND NATURAL-SEEING OBSERVATIONS

    Mancini, C.; Renzini, A. [INAF-OAPD, Osservatorio Astronomico di Padova, Vicolo Osservatorio 5, I-35122 Padova (Italy); Foerster Schreiber, N. M.; Hicks, E. K. S.; Genzel, R.; Tacconi, L.; Davies, R. [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Cresci, G. [Osservatorio Astrofisico di Arcetri (OAF), INAF-Firenze, Largo E. Fermi 5, I-50125 Firenze (Italy); Peng, Y.; Lilly, S.; Carollo, M.; Oesch, P. [Institute of Astronomy, Department of Physics, Eidgenossische Technische Hochschule, ETH Zurich CH-8093 (Switzerland); Vergani, D.; Pozzetti, L.; Zamorani, G. [INAF-Bologna, Via Ranzani, I-40127 Bologna (Italy); Daddi, E. [CEA-Saclay, DSM/DAPNIA/Service d' Astrophysique, F-91191 Gif-Sur Yvette Cedex (France); Maraston, C. [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, PO1 3HE Portsmouth (United Kingdom); McCracken, H. J. [IAP, 98bis bd Arago, F-75014 Paris (France); Bouche, N. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Shapiro, K. [Aerospace Research Laboratories, Northrop Grumman Aerospace Systems, Redondo Beach, CA 90278 (United States); and others

    2011-12-10

    The zCOSMOS-SINFONI project is aimed at studying the physical and kinematical properties of a sample of massive z ≈ 1.4-2.5 star-forming galaxies, through SINFONI near-infrared integral field spectroscopy (IFS), combined with the multiwavelength information from the zCOSMOS (COSMOS) survey. The project is based on one hour of natural-seeing observations per target, and adaptive optics (AO) follow-up for a major part of the sample, which includes 30 galaxies selected from the zCOSMOS/VIMOS spectroscopic survey. This first paper presents the sample selection, and the global physical characterization of the target galaxies from multicolor photometry, i.e., star formation rate (SFR), stellar mass, age, etc. The Hα integrated properties, such as flux, velocity dispersion, and size, are derived from the natural-seeing observations, while the follow-up AO observations will be presented in the next paper of this series. Our sample appears to be well representative of star-forming galaxies at z ≈ 2, covering a wide range in mass and SFR. The Hα integrated properties of the 25 Hα-detected galaxies are similar to those of other IFS samples at the same redshifts. Good agreement is found among the SFRs derived from Hα luminosity and other diagnostic methods, provided the extinction affecting the Hα luminosity is about twice that affecting the continuum. A preliminary kinematic analysis, based on the maximum observed velocity difference across the source and on the integrated velocity dispersion, indicates that the sample splits nearly 50-50 into rotation-dominated and velocity-dispersion-dominated galaxies, in good agreement with previous surveys.

  6. Statistic Analyses of the Color Experience According to the Age of the Observer

    Hunjet, Anica; Parac-Osterman, Đurđica; Vučaj, Edita

    2013-01-01

    The psychological experience of color is a real state of communication between the environment and the observer; it depends on the light source, the viewing angle, and particularly on the observer and his or her health condition. Hering's theory, or the theory of opponent processes, supposes that the cones situated in the retina of the eye are not sensitive to the three chromatic domains (red, green and purple-blue) independently, but produce a signal based on the principle of opponency ...

  7. Second multiannual statistics of the meteorological observations at Ispra (1959-1973)

    Bollini, G.

    1975-01-01

    The mean and extreme values, taken over a period of 15 years at the Meteorological Observatory near Ispra, are combined into 40 tables and 18 graphs for the following elements: cloud amount, sunshine and solar radiation, temperature, relative humidity, water vapour pressure, precipitation, atmospheric phenomena, atmospheric pressure, wind and some other observations. The purpose of this work is to present a climatological base for the requirements of the Joint Nuclear Research Center; detailed safety analyses are reported in the monographs listed after the index.

  8. Carcinogenesis in Hiroshima and Nagasaki. Observations from ABCC-JNIH pathology and statistical studies

    Zeldis, L J; Jablon, S; Ishida, Morihiro

    1963-01-01

    Studies in Hiroshima and Nagasaki of a possible carcinogenic effect of radiation in survivors of the atomic bombings are included in programs conducted jointly by the Atomic Bomb Casualty Commission (ABCC) and the Japanese National Institute of Health (JNIH), with the collaboration of physicians and medical organizations in both cities. In order to cope with the epidemiologic problems that attend these, in common with other studies of human populations, ABCC-JNIH programs are now oriented to the intensive surveillance of health, morbidity, and mortality, principally in known, fixed cohorts of the survivors. The data reported here are derived from 3 interrelated programs in Hiroshima and Nagasaki: the JNIH-ABCC Life Span Study, Tumor Registry Studies, and Joint ABCC-JNIH Pathology Studies. The population samples utilized in these studies are defined, along with a summary of pertinent information concerning their exposure to ionizing radiation.

  9. Carcinogenesis in Hiroshima and Nagasaki. Observations from ABCC-JNIH pathology and statistical studies

    Zeldis, L J; Jablon, S; Ishida, Morihiro

    1963-01-01

    Studies in Hiroshima and Nagasaki of a possible carcinogenic effect of radiation in survivors of the atomic bombings are included in programs conducted jointly by the Atomic Bomb Casualty Commission (ABCC) and the Japanese National Institute of Health (JNIH), with the collaboration of physicians and medical organizations in both cities. In order to cope with the epidemiologic problems that attend these, in common with other studies of human populations, ABCC-JNIH programs are now oriented to the intensive surveillance of health, morbidity, and mortality, principally in known, fixed cohorts of the survivors. The data reported here are derived from 3 interrelated programs in Hiroshima and Nagasaki: the JNIH-ABCC Life Span Study, Tumor Registry Studies, and Joint ABCC-JNIH Pathology Studies. The population samples utilized in these studies are defined, along with a summary of pertinent information concerning their exposure to ionizing radiation. 11 references, 2 figures, 10 tables.

  10. Observing System Simulation Experiments for the assessment of temperature sampling strategies in the Mediterranean Sea

    F. Raicich

    2003-01-01

    Full Text Available For the first time in the Mediterranean Sea, various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs) and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter-spring 2000) is also studied. The data impact is quantified by the error reduction relative to the free run. The most effective sampling strategies determine 25–40% error reduction, depending on the season, the geographic area and the depth range. A qualitative relationship can be recognized, in terms of the spread of information from the data positions, between basin circulation features and spatial patterns of the error-reduction fields, as a function of different spatial and seasonal characteristics of the dynamics. The largest error reductions are observed when samplings are characterized by extensive spatial coverage, as in the cases of AXBTs and the combination of XBTs and surface temperatures. The sampling strategy adopted during the TOP is characterized by little impact, as a consequence of a sampling frequency that is too low. Key words. Oceanography: general (marginal and semi-enclosed seas; numerical modelling)
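
    The impact metric used above — error reduction relative to the free run — reduces to a one-line computation once the twin-experiment fields are in hand. The sketch below assumes an RMS error norm, which the abstract does not specify.

        import numpy as np

        def error_reduction(truth, free_run, assimilated):
            """Percent error reduction of the assimilation run relative to
            the free run, with an (assumed) RMS norm over the field."""
            truth = np.asarray(truth, dtype=float)
            e_free = np.sqrt(np.mean((np.asarray(free_run) - truth) ** 2))
            e_assim = np.sqrt(np.mean((np.asarray(assimilated) - truth) ** 2))
            return 100.0 * (1.0 - e_assim / e_free)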

  11. Observing System Simulation Experiments for the assessment of temperature sampling strategies in the Mediterranean Sea

    F. Raicich

    Full Text Available For the first time in the Mediterranean Sea, various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs) and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter-spring 2000) is also studied.

    The data impact is quantified by the error reduction relative to the free run. The most effective sampling strategies determine 25–40% error reduction, depending on the season, the geographic area and the depth range. A qualitative relationship can be recognized, in terms of the spread of information from the data positions, between basin circulation features and spatial patterns of the error-reduction fields, as a function of different spatial and seasonal characteristics of the dynamics. The largest error reductions are observed when samplings are characterized by extensive spatial coverage, as in the cases of AXBTs and the combination of XBTs and surface temperatures. The sampling strategy adopted during the TOP is characterized by little impact, as a consequence of a sampling frequency that is too low.

    Key words. Oceanography: general (marginal and semi-enclosed seas; numerical modelling)

  12. STATISTICS OF FLARING LOOPS OBSERVED BY THE NOBEYAMA RADIOHELIOGRAPH. III. ASYMMETRY OF TWO FOOTPOINT EMISSIONS

    Huang Guangli; Song Qiwu; Huang Yu

    2010-01-01

    Two footpoint (FP) emissions are compared in a total of 24 events with loop-like structures imaged by the Nobeyama Radioheliograph (NoRH), which are divided into two groups: when the optically thin radio spectrum in the looptop is harder than those in the two FPs (group 1) and when it is softer than those in at least one FP (group 2). There are always correlative variations of the brightness temperatures and polarization degrees, the spectral indices, the column densities of nonthermal electrons, and magnetic field strengths in the two FPs. The maximum differences of these parameters in the two FPs may reach one or two orders of magnitude (except the polarization degree). The logarithm of the ratio of the magnetic field strengths in the two FPs is always anti-correlated with the logarithms of the ratios of the brightness temperatures in the two FPs, but correlated with the differences of the spectral indices in the two FPs. Only two anti-correlations exist in group 1, between the difference of the absolute polarization degrees in the two FPs and the logarithm of the ratio of the brightness temperatures in the two FPs and between the difference of the spectral indices in the two FPs and the logarithm of the ratio of the column densities of nonthermal electrons in the two FPs. Only two positive correlations appear in group 1, between the difference of the absolute polarization degrees in the two FPs and the relative ratio of magnetic field strengths in the two FPs and between the logarithm of the ratio of the column densities of nonthermal electrons in the two FPs and the logarithm of the ratio of the brightness temperatures in the two FPs. These four statistics in group 2 are just opposite to those in group 1, which may be directly explained by gyrosynchrotron theory. Moreover, the asymmetry of the two FP emissions in group 2 is more evident than that in group 1, which may be explained by two different kinds of flare models in the two groups of events.

  13. Effect of carboxymethylcellulose on the rheological and filtration properties of bentonite clay samples determined by experimental planning and statistical analysis

    B. M. A. Brito

    Full Text Available Over the past few years, considerable research has been conducted using the techniques of mixture design and statistical modeling. Through this methodology, applications in various technological fields have been found and optimized, especially in clay technology, leading to greater efficiency and reliability. This work studied the influence of carboxymethylcellulose on the rheological and filtration properties of bentonite dispersions to be applied in water-based drilling fluids, using experimental planning and statistical analysis for clay mixtures. The dispersions were prepared according to Petrobras standard EP-1EP-00011-A, which deals with the testing of water-based drilling fluid viscosifiers for oil prospecting. The clay mixtures were transformed into sodic compounds, and carboxymethylcellulose additives of high and low molar mass were added in order to improve their rheology and filtrate volume. Experimental planning and statistical analysis were used to verify the effect. Regression models were calculated for the relation between the compositions and the following rheological properties: apparent viscosity, plastic viscosity, and filtrate volume. The significance and validity of the models were confirmed. The results showed that the 3D response surfaces of the compositions with high molecular weight carboxymethylcellulose added were the ones that most contributed to the rise in apparent viscosity and plastic viscosity, and that those with low molecular weight were the ones that most helped in the reduction of the filtrate volume. Another important observation is that experimental planning and statistical analysis can be used as an important auxiliary tool to optimize the rheological properties and filtrate volume of bentonite clay dispersions for use in drilling fluids when carboxymethylcellulose is added.

  14. Regression-based statistical mediation and moderation analysis in clinical research: Observations, recommendations, and implementation.

    Hayes, Andrew F; Rockwood, Nicholas J

    2017-11-01

    There have been numerous treatments in the clinical research literature about various design, analysis, and interpretation considerations when testing hypotheses about mechanisms and contingencies of effects, popularly known as mediation and moderation analysis. In this paper we address the practice of mediation and moderation analysis using linear regression in the pages of Behaviour Research and Therapy and offer some observations and recommendations, debunk some popular myths, describe some new advances, and provide an example of mediation, moderation, and their integration as conditional process analysis using the PROCESS macro for SPSS and SAS. Our goal is to nudge clinical researchers away from historically significant but increasingly old school approaches toward modifications, revisions, and extensions that characterize more modern thinking about the analysis of the mechanisms and contingencies of effects.
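
    As a concrete illustration of the regression-based mediation logic discussed above, the following sketch estimates the indirect effect a·b from two least-squares fits and builds a percentile bootstrap confidence interval — the same general recipe PROCESS automates. It is a bare-bones sketch, not the PROCESS macro itself.

        import numpy as np

        def indirect_effect(x, m, y):
            """Indirect effect a*b: a from regressing m on x, b from
            regressing y on x and m jointly."""
            a = np.polyfit(x, m, 1)[0]
            X = np.column_stack([np.ones_like(x), x, m])
            b = np.linalg.lstsq(X, y, rcond=None)[0][2]
            return a * b

        def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
            """Percentile bootstrap confidence interval for the indirect effect."""
            rng = np.random.default_rng(seed)
            n, ests = len(x), []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)
                ests.append(indirect_effect(x[idx], m[idx], y[idx]))
            return np.quantile(ests, [alpha / 2, 1 - alpha / 2])

        # Synthetic example: x -> m -> y with a = 0.5 and b = 0.4.
        rng = np.random.default_rng(1)
        x = rng.normal(size=200)
        m = 0.5 * x + rng.normal(size=200)
        y = 0.4 * m + 0.2 * x + rng.normal(size=200)
        print(indirect_effect(x, m, y), bootstrap_ci(x, m, y))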

  15. Inferring spatial clouds statistics from limited field-of-view, zenith observations

    Sun, C.H.; Thorne, L.R. [Sandia National Labs., Livermore, CA (United States)

    1996-04-01

    Many of the Cloud and Radiation Testbed (CART) measurements produce a time series of zenith observations, but spatial averages are often the desired data product. One possible approach to deriving spatial averages from temporal averages is to invoke Taylor's hypothesis where and when it is valid. Taylor's hypothesis states that when the turbulence is small compared with the mean flow, the covariance in time is related to the covariance in space by the speed of the mean flow. For cloud fields, Taylor's hypothesis would apply when the "local" turbulence is small compared with the advective flow (mean wind). The objective of this study is to determine under what conditions Taylor's hypothesis holds or does not hold true for broken cloud fields.
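
    Taylor's hypothesis as stated above translates directly into code: under frozen advection at mean wind speed U, a temporal lag τ maps to a spatial separation r = Uτ, so a temporal autocorrelation function can be relabelled as a spatial one. A minimal sketch, with the validity caveats discussed above left to the user:

        import numpy as np

        def temporal_to_spatial_acf(series, dt, wind_speed):
            """Autocorrelation of a zenith time series, relabelled as spatial
            autocorrelation via r = U * tau (Taylor's frozen-turbulence
            hypothesis). Returns (separation in metres, correlation)."""
            x = np.asarray(series, dtype=float)
            x -= x.mean()
            acf = np.correlate(x, x, mode='full')[len(x) - 1:]
            acf /= acf[0]
            tau = np.arange(len(acf)) * dt
            return tau * wind_speed, acf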

  16. Cluster observations of near-Earth magnetospheric lobe plasma densities – a statistical study

    K. R. Svenes

    2008-09-01

    Full Text Available The Cluster mission has enabled a study of the near-Earth magnetospheric lobes throughout the waning part of solar cycle 23. During the first seven years of the mission the satellites crossed this region of space regularly from about July to October. We have obtained new and more accurate plasma densities in this region based on spacecraft potential measurements from the EFW instrument. The plasma density measurements are found by converting the potential measurements using a functional relationship between these two parameters. Our observations have shown that throughout this period a full two thirds of the measurements were contained in the range 0.007–0.092 cm⁻³, irrespective of solar wind conditions or geomagnetic activity. In fact, the most probable density encountered was 0.047 cm⁻³, staying roughly constant throughout the entire observation period. The plasma population in this region seems to reflect an equilibrium situation in which the density is independent of the solar wind conditions or geomagnetic activity. However, the high-density tail of the population (n_e > 0.2 cm⁻³) seemed to decrease with the waning solar cycle. This points to a source region influenced by the diminishing solar UV/EUV intensity. Noting that the quiet-time polar wind has just such a development, and that it is magnetically coupled to the lobes, it seems reasonable to assume that it is a prominent source for the lobe plasma.
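
    The potential-to-density conversion mentioned above can be sketched with an assumed exponential calibration; the functional form and the constants below are illustrative placeholders, not the EFW calibration derived in the paper.

        import numpy as np

        def density_from_potential(v_sc, a=200.0, b=2.5):
            """Electron density (cm^-3) from spacecraft potential (V) through
            an assumed exponential calibration n_e = a * exp(-v_sc / b); the
            form and constants are placeholders, not the paper's fit."""
            return a * np.exp(-np.asarray(v_sc, dtype=float) / b)

        print(density_from_potential([5.0, 10.0, 20.0]))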

  17. A high-resolution open biomass burning emission inventory based on statistical data and MODIS observations in mainland China

    Xu, Y.; Fan, M.; Huang, Z.; Zheng, J.; Chen, L.

    2017-12-01

    Open biomass burning, which has adverse effects on air quality and human health, is an important source of gas and particulate matter (PM) in China. Current emission estimates for open biomass burning are generally based on a single source (either statistical data or satellite-derived data) and thus contain large uncertainty due to the limitations of the data. In this study, to quantify the amount of open biomass burning in 2015, we established a new estimation method for open biomass burning activity levels by combining bottom-up statistical data and top-down MODIS observations, and three sub-category sources using different activity data were considered. For open crop residue burning, the "best estimate" of activity data was obtained by averaging the statistical data from China statistical yearbooks and satellite observations from the MODIS burned area product MCD64A1, weighted by their uncertainties. For forest and grassland fires, activity levels were represented by the combination of statistical data and the MODIS active fire product MCD14ML. Using fire radiative power (FRP), which is considered a better indicator of active fire level, as the spatial allocation surrogate, coarse gridded emissions were reallocated onto a 3 km × 3 km grid to obtain a high-resolution emission inventory. Our results showed that emissions of CO, NOx, SO2, NH3, VOCs, PM2.5, PM10, BC and OC in mainland China were 6607, 427, 84, 79, 1262, 1198, 1222, 159 and 686 Gg/yr, respectively. Among all provinces of China, Henan, Shandong and Heilongjiang were the top three contributors to the total emissions. The open biomass burning emission inventory developed in this study, with its high resolution, could support air quality modeling and policy-making for pollution control.
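
    The "best estimate" step — averaging the statistical and satellite-derived activity data weighted by their uncertainties — reads naturally as an inverse-variance weighted mean. The sketch below implements that plain reading; whether the authors used exactly this weighting is an assumption, and the example numbers are invented.

        import numpy as np

        def best_estimate(x_stat, u_stat, x_sat, u_sat):
            """Inverse-variance weighted mean of a statistical estimate
            (x_stat +/- u_stat) and a satellite estimate (x_sat +/- u_sat),
            with the combined 1-sigma uncertainty."""
            w_stat, w_sat = 1.0 / u_stat**2, 1.0 / u_sat**2
            est = (w_stat * x_stat + w_sat * x_sat) / (w_stat + w_sat)
            return est, np.sqrt(1.0 / (w_stat + w_sat))

        # E.g. burned area: 120 +/- 30 kha from yearbooks, 90 +/- 20 kha from MCD64A1.
        print(best_estimate(120.0, 30.0, 90.0, 20.0))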

  18. Uncertainties in observational data on organic aerosol: An annual perspective of sampling artifacts in Beijing, China

    Cheng, Yuan; He, Ke-bin

    2015-01-01

    Current understanding of organic aerosol (OA) is challenged by the large gap between simulation results and observational data. Based on six campaigns conducted in a representative mega city in China, this study provided an annual perspective of the uncertainties in observational OA data caused by sampling artifacts. Our results suggest that for the commonly-used sampling approach that involves collection of particles on a bare quartz filter, the positive artifact could result in a 20–40 % overestimation of OA concentrations. Based on an evaluation framework that includes four criteria, an activated carbon denuder was demonstrated to be able to effectively eliminate the positive artifact with a long useful time of at least one month, and hence it was recommended to be a good choice for routine measurement of carbonaceous aerosol. - Highlights: • Positive artifact can cause an overestimation of OA concentrations by up to 40%. • It remains a challenge to measure semivolatile OA based on filter sampling. • The positive artifact can be effectively removed by an ACM denuder. • The ACM denuder is small in size, easy to use and multi-functional. • The ACM denuder is recommended for routine measurement of OA. - Accounting for sampling artifacts can help to bridge the gap between simulated and observed OA concentrations.

  19. A method for three-dimensional quantitative observation of the microstructure of biological samples

    Wang, Pengfei; Chen, Dieyan; Ma, Wanyun; Wu, Hongxin; Ji, Liang; Sun, Jialin; Lv, Danyu; Zhang, Lu; Li, Ying; Tian, Ning; Zheng, Jinggao; Zhao, Fengying

    2009-07-01

    Contemporary biology has entered the era of cell and molecular biology, and researchers now study the mechanisms of all kinds of biological phenomena at the microscopic level. Accurate description of the microstructure of biological samples is an urgent need in many biomedical experiments. This paper introduces a method for 3-dimensional quantitative observation of the microstructure of living biological samples based on two-photon laser scanning microscopy (TPLSM). TPLSM is a novel kind of fluorescence microscopy, which excels in its low optical damage, high resolution, deep penetration depth and suitability for 3-dimensional (3D) imaging. Fluorescently stained samples were observed by TPLSM, and their original shapes were then obtained through 3D image reconstruction. The spatial distribution of all objects in the samples, as well as their volumes, could be derived by image segmentation and mathematical calculation. Thus the 3-dimensionally and quantitatively depicted microstructure of the samples was finally derived. We applied this method to quantitative analysis of the spatial distribution of chromosomes in meiotic mouse oocytes at metaphase, with promising results.

  20. Variable Sampling Composite Observer Based Frequency Locked Loop and its Application in Grid Connected System

    ARUN, K.

    2016-05-01

    Full Text Available A modified digital signal processing procedure is described for the on-line estimation of the DC, fundamental and harmonic components of a periodic signal. A frequency locked loop (FLL) incorporated within the parallel structure of observers is proposed to accommodate a wide range of frequency drift. The frequency error generated under drifting frequencies is used to change the sampling frequency of the composite observer, so that the number of samples per cycle of the periodic waveform remains constant. A standard coupled oscillator with automatic gain control is used as the numerically controlled oscillator (NCO) to generate the enabling pulses for the digital observer. The NCO gives an integer multiple of the fundamental frequency, making it suitable for power quality applications. Another observer, with DC and second harmonic blocks in the feedback path, acts as a filter and reduces the double-frequency content. A systematic study of the FLL is done and a method is proposed to design the controller. The performance of the FLL is validated through simulation and experimental studies. To illustrate applications of the new FLL, the estimation of individual harmonics from a nonlinear load and the design of a variable sampling resonant controller for a single-phase grid-connected inverter are presented.
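
    The variable-sampling idea — slave the sampling rate to the estimated fundamental so the samples-per-cycle count stays fixed — is compact enough to sketch directly. The clamp range and 128 samples per cycle below are illustrative choices, not values from the paper.

        def next_sampling_rate(f_estimated, samples_per_cycle=128,
                               f_min=45.0, f_max=55.0):
            """Sampling rate that keeps the samples-per-cycle count constant
            as the fundamental drifts; the clamp guards against bad estimates."""
            f = min(max(f_estimated, f_min), f_max)
            return samples_per_cycle * f

        print(next_sampling_rate(50.0), next_sampling_rate(50.5))  # 6400.0 6464.0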

  1. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Q. Zhang

    2018-02-01

    Full Text Available River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling – in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) – are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2), and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb–Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5% of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods.
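
    For orientation, here is what the (biased, per the results above) Lomb–Scargle slope estimator looks like in practice: compute the periodogram of the irregular series, then regress log-power on log-frequency. The frequency-grid choices are assumptions; the paper evaluates refinements of this baseline.

        import numpy as np
        from scipy.signal import lombscargle

        def spectral_slope(t, y, n_freq=200):
            """Spectral slope beta from irregular samples: Lomb-Scargle
            periodogram followed by a log-log regression, P(f) ~ f**(-beta)."""
            t = np.asarray(t, dtype=float)
            y = np.asarray(y, dtype=float) - np.mean(y)
            f_lo = 1.0 / (t.max() - t.min())
            f_hi = 0.5 / np.median(np.diff(np.sort(t)))   # pseudo-Nyquist
            f = np.logspace(np.log10(f_lo), np.log10(f_hi), n_freq)
            pgram = lombscargle(t, y, 2 * np.pi * f, normalize=True)
            return -np.polyfit(np.log10(f), np.log10(pgram), 1)[0]

        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0.0, 1000.0, 500))
        print(spectral_slope(t, rng.normal(size=500)))  # white noise: beta near 0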

  2. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5% of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods, for a wide range of prescribed β values and gap distributions.

  3. STATISTICS OF FLARING LOOPS OBSERVED BY NOBEYAMA RADIOHELIOGRAPH. II. SPECTRAL EVOLUTION

    Huang Guangli; Nakajima, Hiroshi

    2009-01-01

    The spectral evolution of solar microwave bursts is studied in 10 impulsive events with loop-like structures, selected from the flare list of the Nobeyama Radioheliograph. Most events have a brighter and harder looptop (LT) with a maximum time later than at least one of the two footpoints (FPs), and have a common feature of the spectral evolution in the LT and the two FPs. There are five simple impulsive bursts with the well-known pattern of soft-hard-soft or soft-hard-harder (SHH). We find for the first time that the other five events have multiple subpeaks in their impulsive phase, and mostly have a new feature of hard-soft-hard (HSH) in each subpeak, while the well-known tendency of SHH is still maintained in the total spectral evolution of these events. All of these features in the spectral evolution of the 10 selected events are consistent with the full-Sun observations of the Nobeyama Radio Polarimeters in these events. The new feature of HSH may be explained by the thermal free-free emission before, during, and after these bursts, together with multiple injections of nonthermal electrons, while the SHH pattern in the total duration may be directly caused by the trapping effect.

  4. Economic Statistical Design of a Variable Sampling Interval X̄ Control Chart Based on a Surrogate Variable Using Genetic Algorithms

    Lee Tae-Hoon

    2016-12-01

    Full Text Available In many cases, an X̄ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors the measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be more efficiently controlled using the surrogate. In this paper, we present a model for the economic statistical design of a VSI (variable sampling interval) X̄ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on the performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts using single or multiple assignable cause assumptions, such as the VSS (variable sample size) X̄ control chart, EWMA and CUSUM charts, and so on.
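
    The VSI mechanism itself is simple to state in code: a standardized sample mean in the central zone triggers the long sampling interval, the warning zone the short one, and a point beyond the control limit raises an alarm. The limits and intervals below are generic textbook choices, not the economically optimal values the paper obtains with genetic algorithms; in the paper's setting, z would be computed from the surrogate variable.

        def vsi_interval(z, k_warn=1.0, k_ctrl=3.0, h_long=2.0, h_short=0.25):
            """Variable-sampling-interval rule for a standardized sample mean z
            (hours until the next sample, or None for an out-of-control alarm)."""
            if abs(z) >= k_ctrl:
                return None              # alarm: search for an assignable cause
            return h_long if abs(z) < k_warn else h_short

        print(vsi_interval(0.4), vsi_interval(1.8), vsi_interval(3.2))  # 2.0 0.25 None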

  5. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (θ) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as (1-2θ)⁴.
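
    The two scaling relations quoted above are easy to turn into a back-of-the-envelope calculator; the functions below restate them directly and add nothing beyond the abstract.

        def effective_pairs(n):
            """Sib pairs in a sibship of size n: information grows roughly as
            n*(n-1)/2, up to sibships of size 6 (per the abstract)."""
            return n * (n - 1) // 2

        def power_attenuation(theta):
            """Approximate attenuation of power with recombination fraction
            theta, (1 - 2*theta)**4."""
            return (1.0 - 2.0 * theta) ** 4

        # A sibship of 4 acts like ~6 pairs; theta = 0.05 retains ~66% power.
        print(effective_pairs(4), power_attenuation(0.05))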

  6. Particle System Based Adaptive Sampling on Spherical Parameter Space to Improve the MDL Method for Construction of Statistical Shape Models

    Rui Xu

    2013-01-01

    Minimum description length (MDL) based group-wise registration is a state-of-the-art method for determining the corresponding points of 3D shapes in the construction of statistical shape models (SSMs). However, it suffers from the problem that the determined corresponding points do not spread uniformly on the original shapes, since they are obtained by uniformly sampling the aligned shape on the parameterized space of the unit sphere. We propose a particle-system based method that obtains adaptive sampling positions on the unit sphere to resolve this problem. A set of particles is placed on the unit sphere to construct a particle system whose energy is related to the distortions of the parameterized meshes. By minimizing this energy, each particle is moved on the unit sphere. When the system becomes steady, the particles are treated as vertices to build a spherical mesh, which is then relaxed to slightly adjust the vertices and obtain optimal sampling positions. We used 47 cases of (left and right) lungs and 50 cases of livers, (left and right) kidneys, and spleens for evaluation. Experiments showed that the proposed method resolves the problem of the original MDL method and performs better in the generalization and specificity tests.
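
    The core of the particle-system idea, minimizing a repulsive energy on the unit sphere and reprojecting after each step, can be sketched in a few lines. In the paper the energy is coupled to the distortion of the parameterized meshes; the plain inverse-distance repulsion below merely stands in for that term, and the step size is an arbitrary choice.

```python
import numpy as np

def relax_particles_on_sphere(n=100, iters=500, step=0.005, seed=0):
    """Spread n particles on the unit sphere by gradient descent on the
    pairwise repulsion energy sum(1/|p_i - p_j|), projecting back onto
    the sphere after every step (a stand-in for the distortion-coupled
    energy used in the paper)."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(iters):
        diff = p[:, None, :] - p[None, :, :]            # pairwise differences
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)                     # ignore self-pairs
        force = (diff / d[..., None] ** 3).sum(axis=1)  # -grad of the energy
        p += step * force
        p /= np.linalg.norm(p, axis=1, keepdims=True)   # back onto the sphere
    return p
```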

  7. Predicted versus observed cosmic-ray-produced noble gases in lunar samples: improved Kr production ratios

    Regnier, S.; Hohenberg, C.M.; Marti, K.; Reedy, R.C.

    1979-01-01

    New sets of cross sections for the production of krypton isotopes from targets of Rb, Sr, Y, and Zr were constructed, primarily on the basis of experimental excitation functions for Kr production from Y. These cross sections were used to calculate galactic-cosmic-ray and solar-proton production rates for Kr isotopes in the moon. Spallation Kr data obtained from ilmenite separates of rocks 10017 and 10047 are reported. Production rates and isotopic ratios for cosmogenic Kr observed in ten well-documented lunar samples, and in ilmenite separates and bulk samples from several lunar rocks with long but unknown irradiation histories, were compared with predicted rates and ratios. The agreements were generally quite good. Erosion of rock surfaces affected rates or ratios only for near-surface samples, where solar-proton production is important. There were considerable spreads in predicted-to-observed production rates of 83Kr, due at least in part to uncertainties in chemical abundances. The 78Kr/83Kr ratios were predicted quite well for samples with a wide range of Zr/Sr abundance ratios. The calculated 80Kr/83Kr ratios were greater than the observed ratios when production by the 79Br(n,γ) reaction was included, but were slightly undercalculated if the Br reaction was omitted; these results suggest that Br(n,γ)-produced Kr is not retained well by lunar rocks. The productions of 81Kr and 82Kr were overcalculated by approximately 10% relative to 83Kr. Predicted-to-observed 84Kr/83Kr ratios scattered considerably, possibly because of uncertainties in corrections for trapped and fission components and in cross sections for 84Kr production. Most predicted 84Kr and 86Kr production rates were lower than observed. Shielding depths of several Apollo 11 rocks were determined from the measured 78Kr/83Kr ratios of ilmenite separates. 4 figures, 5 tables

  8. Using polarimetric radar observations and probabilistic inference to develop the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), a novel microphysical parameterization framework

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.

    2016-12-01

    Microphysical parameterization schemes have reached an impressive level of sophistication: numerous prognostic hydrometeor categories, and either size-resolved (bin) particle size distributions or multiple prognostic moments of the size distribution. Yet uncertainty in the model representation of microphysical processes, and of the effects of microphysics on numerical simulations of weather, has not shown an improvement commensurate with this sophistication. We posit that this may be caused by the unconstrained assumptions of these schemes, such as ad hoc parameter value choices and structural uncertainties (e.g., the choice of a particular form for the size distribution). We present work on the development and observational constraint of a novel microphysical parameterization approach, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), which seeks to address these sources of uncertainty. Our framework avoids unnecessary a priori assumptions and instead relies on observations to provide probabilistic constraint of the scheme structure and of sensitivities to environmental and microphysical conditions. We harness the rich microphysical information content of polarimetric radar observations to develop and constrain BOSS within a Bayesian inference framework using a Markov chain Monte Carlo sampler (see Kumjian et al., this meeting, for details on the development of an associated polarimetric forward operator). Our work shows how knowledge of microphysical processes is provided by polarimetric radar observations of diverse weather conditions, and which processes remain highly uncertain even after considering observations.
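
    The Bayesian-constraint step can be illustrated with the simplest possible MCMC machinery. The random-walk Metropolis sketch below is generic, not the sampler used for BOSS: the caller supplies a log-posterior combining priors on the process-rate parameters with a likelihood built from (here unspecified) polarimetric radar observations via a forward operator.

```python
import numpy as np

def metropolis(log_post, theta0, n_steps=5000, prop_scale=0.1, seed=0):
    """Minimal random-walk Metropolis sampler.  log_post maps a
    parameter vector to the (unnormalized) log posterior density."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + prop_scale * rng.normal(size=theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```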

  9. Classification of bladder cancer cell lines using Raman spectroscopy: a comparison of excitation wavelength, sample substrate and statistical algorithms

    Kerr, Laura T.; Adams, Aine; O'Dea, Shirley; Domijan, Katarina; Cullen, Ivor; Hennelly, Bryan M.

    2014-05-01

    Raman microspectroscopy can be applied to the urinary bladder for highly accurate classification and diagnosis of bladder cancer. The technique can be applied in vitro to bladder epithelial cells obtained from urine cytology, or in vivo as an "optical biopsy" to provide results in real time with higher sensitivity and specificity than current clinical methods. However, there is a high degree of variability across experimental parameters, which needs to be standardised before the technique can be used in an everyday clinical environment. In this study, we investigate different laser wavelengths (473 nm and 532 nm), sample substrates (glass, fused silica and calcium fluoride) and multivariate statistical methods in order to gain insight into how these experimental parameters impact the sensitivity and specificity of Raman cytology.

  10. Spatial statistics of hydrography and water chemistry in a eutrophic boreal lake based on sounding and water samples.

    Leppäranta, Matti; Lewis, John E; Heini, Anniina; Arvola, Lauri

    2018-06-04

    Spatial variability, an essential characteristic of lake ecosystems, has often been neglected in field research and monitoring. In this study, we apply spatial statistical methods to the key physics and chemistry variables and chlorophyll a over eight sampling dates in two consecutive years in a large (area 103 km²) eutrophic boreal lake in southern Finland. On the four summer sampling dates, the water body was vertically and horizontally heterogeneous except for color and DOC; on the two winter ice-covered dates, DO was vertically stratified; while on the two autumn dates, no significant spatial differences in any of the measured variables were found. Chlorophyll a concentration was one order of magnitude lower under the ice cover than in open water. The Moran statistic for spatial correlation was significant for chlorophyll a and NO2+NO3-N in all summer situations, and for dissolved oxygen and pH in three cases. In summer, the mass centers of the chemicals were within 1.5 km of the geometric center of the lake, and the 2nd-moment radius ranged over 3.7-4.1 km, compared with 3.9 km for the homogeneous situation. The lateral length scales of the studied variables were 1.5-2.5 km, about 1 km longer in the surface layer. The detected spatial "noise" strongly suggests that besides vertical variation, the horizontal variation in eutrophic lakes in particular should be considered when the ecosystems are monitored.

  11. Experimental observations of Lagrangian sand grain kinematics under bedload transport: statistical description of the step and rest regimes

    Guala, M.; Liu, M.

    2017-12-01

    The kinematics of sediment particles is investigated by non-intrusive imaging methods to provide a statistical description of bedload transport in conditions near the threshold of motion. In particular, we focus on the cyclic transition between motion and rest regimes to quantify the waiting time statistics inferred to be responsible for anomalous diffusion, which have so far proven elusive. Despite obvious limitations in the spatio-temporal domain of the observations, we are able to identify the probability distributions of the particle step time and length, velocity, acceleration, and waiting time, and thus to distinguish which quantities exhibit well-converged mean values, based on the thickness of their respective tails. The experimental results shown here for four different transport conditions highlight the importance of the waiting time distribution and represent a benchmark dataset for the stochastic modeling of bedload transport.

  12. Mitigating Observation Perturbation Sampling Errors in the Stochastic EnKF

    Hoteit, Ibrahim

    2015-03-17

    The stochastic ensemble Kalman filter (EnKF) updates its ensemble members with observations perturbed by noise sampled from the distribution of the observational errors. This has been shown to introduce noise into the system, which may become pronounced when the ensemble size is smaller than the rank of the observational error covariance, as is often the case in real oceanic and atmospheric data assimilation applications. This work introduces an efficient serial scheme to mitigate the impact of observation perturbation sampling in the analysis step of the EnKF, which should provide more accurate ensemble estimates of the analysis error covariance matrices. The new scheme is simple to implement within the serial EnKF algorithm, requiring only the approximation of the EnKF sample forecast error covariance matrix by a matrix of one rank less. The new EnKF scheme is implemented and tested with the Lorenz-96 model. Numerical experiments are conducted to compare its performance with the EnKF and two standard deterministic EnKFs. This study shows that the new scheme enhances the behavior of the EnKF and may lead to better performance than the deterministic EnKFs, even when implemented with relatively small ensembles.
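
    The analysis step whose sampling noise the paper targets is the classic perturbed-observation update. The sketch below forms the covariances explicitly, which is fine for toy problems of Lorenz-96 size but is not the serial, one-observation-at-a-time formulation the paper modifies.

```python
import numpy as np

def enkf_analysis(X, y, H, R, seed=0):
    """Stochastic EnKF analysis step with perturbed observations.

    X : (n, N) forecast ensemble; y : (m,) observations;
    H : (m, n) observation operator; R : (m, m) obs-error covariance.
    Each member sees its own noisy copy of y; that noise is the
    sampling error the paper's serial scheme aims to mitigate.
    """
    N = X.shape[1]
    rng = np.random.default_rng(seed)
    A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
    Pf = A @ A.T / (N - 1)                             # sample forecast cov.
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                         # updated ensemble
```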

  13. Mitigating Observation Perturbation Sampling Errors in the Stochastic EnKF

    Hoteit, Ibrahim; Pham, D.-T.; El Gharamti, Mohamad; Luo, X.

    2015-01-01

    The stochastic ensemble Kalman filter (EnKF) updates its ensemble members with observations perturbed by noise sampled from the distribution of the observational errors. This has been shown to introduce noise into the system, which may become pronounced when the ensemble size is smaller than the rank of the observational error covariance, as is often the case in real oceanic and atmospheric data assimilation applications. This work introduces an efficient serial scheme to mitigate the impact of observation perturbation sampling in the analysis step of the EnKF, which should provide more accurate ensemble estimates of the analysis error covariance matrices. The new scheme is simple to implement within the serial EnKF algorithm, requiring only the approximation of the EnKF sample forecast error covariance matrix by a matrix of one rank less. The new EnKF scheme is implemented and tested with the Lorenz-96 model. Numerical experiments are conducted to compare its performance with the EnKF and two standard deterministic EnKFs. This study shows that the new scheme enhances the behavior of the EnKF and may lead to better performance than the deterministic EnKFs, even when implemented with relatively small ensembles.

  14. Statistical searches for microlensing events in large, non-uniformly sampled time-domain surveys: A test using palomar transient factory data

    Price-Whelan, Adrian M.; Agüeros, Marcel A. [Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027 (United States); Fournier, Amanda P. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States); Street, Rachel [Las Cumbres Observatory Global Telescope Network, Inc., 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Ofek, Eran O. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Covey, Kevin R. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Levitan, David; Sesar, Branimir [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Laher, Russ R.; Surace, Jason, E-mail: adrn@astro.columbia.edu [Spitzer Science Center, California Institute of Technology, Mail Stop 314-6, Pasadena, CA 91125 (United States)

    2014-01-20

    Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ∼20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ∼40 times in the R band, ∼2300 deg² have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations (1.1 × 10⁹ light curves), uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
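
    The statistic the authors settle on is simple to compute. For a light curve of magnitudes, the von Neumann ratio compares the mean squared successive difference to the sample variance: uncorrelated noise gives values near 2, while a smooth correlated excursion such as a microlensing bump pulls the ratio well below that. A minimal version:

```python
import numpy as np

def von_neumann_ratio(mag):
    """von Neumann ratio eta = mean squared successive difference over
    the sample variance.  eta ~ 2 for uncorrelated noise; smooth,
    correlated variability (e.g. a microlensing bump) gives eta << 2."""
    mag = np.asarray(mag, dtype=float)
    return np.mean(np.diff(mag) ** 2) / np.var(mag, ddof=1)
```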

  15. Estimation of sampling error uncertainties in observed surface air temperature change in China

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and warming began thereafter, accelerating in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  16. The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power

    Fraley, R. Chris; Vazire, Simine

    2014-01-01

    The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF): the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false-positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher-power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
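
    The field-average numbers quoted above can be reproduced, approximately, with a textbook power calculation. The sketch below uses the Fisher z approximation for a correlation coefficient and takes r = 0.2 as a stand-in for the "typical" effect size; it is illustrative, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def power_two_sided_r(n, r=0.2, alpha=0.05):
    """Approximate power of a two-sided test for a correlation r with
    n subjects, via the Fisher z transform."""
    z_eff = np.arctanh(r) * np.sqrt(n - 3)       # noncentrality
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return (1 - stats.norm.cdf(z_crit - z_eff)
            + stats.norm.cdf(-z_crit - z_eff))

print(round(power_two_sided_r(104), 2))  # ~0.5 at the field-average N of 104
```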

  17. Sensitivity of the Hydrogen Epoch of Reionization Array and its build-out stages to one-point statistics from redshifted 21 cm observations

    Kittiwisit, Piyanat; Bowman, Judd D.; Jacobs, Daniel C.; Beardsley, Adam P.; Thyagarajan, Nithyanandan

    2018-03-01

    We present a baseline sensitivity analysis of the Hydrogen Epoch of Reionization Array (HERA) and its build-out stages to one-point statistics (variance, skewness, and kurtosis) of redshifted 21 cm intensity fluctuations from the Epoch of Reionization (EoR), based on realistic mock observations. By developing a full-sky 21 cm light-cone model, taking into account the proper field of view and frequency bandwidth, utilizing a realistic measurement scheme, and assuming perfect foreground removal, we show that HERA will be able to recover the statistics of the sky model with high sensitivity by averaging over measurements from multiple fields. All build-out stages will be able to detect variance, while skewness and kurtosis should be detectable for HERA128 and larger. We identify sample variance as the limiting constraint on the measurements at the end of reionization. The sensitivity can be further improved by performing frequency windowing. In addition, we find that strong sample-variance fluctuation in the kurtosis measured from an individual field of observation indicates the presence of outlying cold or hot regions in the underlying fluctuations, a feature that can potentially be used as an EoR bubble indicator.
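
    The one-point statistics themselves are straightforward to extract from a simulated or observed brightness-temperature cube. A minimal per-channel computation (assuming axis 0 is frequency) might look like this:

```python
import numpy as np
from scipy import stats

def one_point_stats(cube):
    """Variance, skewness and (excess) kurtosis of a 21 cm
    brightness-temperature cube, per frequency channel (axis 0)."""
    flat = cube.reshape(cube.shape[0], -1)       # (channels, pixels)
    return (flat.var(axis=1),
            stats.skew(flat, axis=1),
            stats.kurtosis(flat, axis=1))
```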

  18. Impact of sampling frequency in the analysis of tropospheric ozone observations

    M. Saunois

    2012-08-01

    Measurements of ozone vertical profiles are valuable for the evaluation of atmospheric chemistry models and contribute to the understanding of the processes controlling the distribution of tropospheric ozone. The longest record of ozone vertical profiles is provided by ozone sondes, which have a typical frequency of 4 to 12 profiles a month. Here we quantify the uncertainty introduced by low-frequency sampling in the determination of means and trends. To do this, the high-frequency MOZAIC (Measurements of OZone, water vapor, carbon monoxide and nitrogen oxides by in-service AIrbus airCraft) profiles over airports, such as Frankfurt, have been subsampled at two typical ozone sonde frequencies of 4 and 12 profiles per month. We found the lowest sampling uncertainty on seasonal means at 700 hPa over Frankfurt, around 5% for a frequency of 12 profiles per month and 10% for a frequency of 4 profiles per month. However, the uncertainty can reach up to 15 and 29% at the lowest altitude levels. As a consequence, the sampling uncertainty at the lowest frequency can be higher than the typical 10% accuracy of the ozone sondes and should be carefully considered for observation comparison and model evaluation. We found that the 95% confidence limit on the seasonal mean derived from the subsamples is similar to the sampling uncertainty and suggest using it as an estimate of the sampling uncertainty. Similar results are found at six other Northern Hemisphere sites. We show that the sampling substantially impacts the inter-annual variability and the trend derived over the period 1998-2008, both in magnitude and in sign, throughout the troposphere. A tropical case is also discussed using the MOZAIC profiles taken over Windhoek, Namibia between 2005 and 2008. For this site, we found that the sampling uncertainty in the free troposphere is around 8 and 12% at 12 and 4 profiles a month, respectively.
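
    The subsampling experiment is easy to emulate on any high-frequency record. The sketch below draws sonde-like subsamples (a fixed number of profiles per 30-day block) from a near-daily series and reports the spread of the resulting seasonal means; it is a schematic of the approach, not the MOZAIC processing itself.

```python
import numpy as np

def subsampling_uncertainty(values, per_month, n_draws=1000, seed=0):
    """Relative spread (%) of the mean of a near-daily record `values`
    (one season, 1-D array) when only `per_month` samples are drawn
    from each 30-day block, mimicking ozone sonde frequencies."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(values, max(1, len(values) // 30))
    means = np.empty(n_draws)
    for i in range(n_draws):
        picks = [rng.choice(b, size=min(per_month, len(b)), replace=False)
                 for b in blocks]
        means[i] = np.concatenate(picks).mean()
    return 100.0 * means.std() / values.mean()
```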

  19. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    Corsaro, Enrico; De Ridder, Joris

    2015-09-01

    The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars' power spectra, coupled to the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role for studying stellar oscillations of different flavor with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights in stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, hence of free parameters that can be involved in the fitting models. Efficiency and robustness in performing the analysis is what may be needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noise red giant stars by exploiting recent Kepler datasets and a new criterion for the detection of an oscillation mode based on the computation of the Bayesian evidence. Preliminary results for frequencies and lifetimes of single oscillation modes, together with acoustic glitches, are therefore presented.
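
    Nested sampling itself reduces to a short loop: repeatedly replace the worst live point with a prior draw of higher likelihood while the enclosed prior volume shrinks geometrically, accumulating the evidence Z along the way. The toy below uses brute-force rejection to draw the replacement point and omits the final live-point contribution to Z; production codes such as the one described here use far more efficient constrained sampling.

```python
import numpy as np

def nested_sampling(loglike, ndim, n_live=200, n_iter=2000, seed=0):
    """Toy nested sampling over a unit-cube prior, returning log Z."""
    rng = np.random.default_rng(seed)
    live = rng.uniform(size=(n_live, ndim))
    ll = np.array([loglike(p) for p in live])
    log_z, log_x = -np.inf, 0.0                 # evidence, log prior volume
    for _ in range(n_iter):
        worst = np.argmin(ll)
        log_x_new = log_x - 1.0 / n_live        # volume shrinks by e^(-1/n)
        log_w = np.log(np.exp(log_x) - np.exp(log_x_new)) + ll[worst]
        log_z = np.logaddexp(log_z, log_w)      # accumulate evidence
        log_x = log_x_new
        while True:                             # rejection-sample above L*
            p = rng.uniform(size=ndim)
            if loglike(p) > ll[worst]:
                live[worst], ll[worst] = p, loglike(p)
                break
    return log_z
```

    The mode-detection criterion mentioned above then amounts to comparing log Z for models with and without the candidate oscillation mode.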

  20. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    Corsaro Enrico

    2015-01-01

    The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars' power spectra, coupled to the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role for studying stellar oscillations of different flavor with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights in stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, hence of free parameters that can be involved in the fitting models. Efficiency and robustness in performing the analysis is what may be needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noise red giant stars by exploiting recent Kepler datasets and a new criterion for the detection of an oscillation mode based on the computation of the Bayesian evidence. Preliminary results for frequencies and lifetimes of single oscillation modes, together with acoustic glitches, are therefore presented.

  1. A statistical study of gravity waves from radiosonde observations at Wuhan (30° N, 114° E), China

    S. D. Zhang

    2005-03-01

    Several works concerning the dynamical and thermal structures and inertial gravity wave activity in the troposphere and lower stratosphere (TLS) from radiosonde observations have been reported before, but they concentrated on either equatorial or polar regions. In this paper, the background atmosphere and gravity wave activity in the TLS over Wuhan (30° N, 114° E), a mid-latitude station, are statistically studied using data from radiosonde observations taken twice daily, at 08:00 and 20:00 LT, between 2000 and 2002. The monthly averaged temperature and horizontal winds exhibit the essential dynamic and thermal structures of the background atmosphere. To avoid the extreme values of background winds and temperature in the height range of 11-18 km, we studied gravity waves separately in two height regions: one from the ground surface to 10 km (lower part) and the other within 18-25 km (upper part). In total, 791 and 1165 quasi-monochromatic inertial gravity waves were extracted from our data set for the lower and upper parts, respectively. The gravity wave parameters (intrinsic frequencies, amplitudes, wavelengths, intrinsic phase velocities and wave energies) are calculated and statistically studied. The statistical results revealed that 49.4% of gravity waves in the lower part propagated upward, compared with 76.4% in the upper part. Moreover, the average wave amplitudes and energies are less than those at lower latitudes, which indicates that the gravity wave parameters have a latitudinal dependence. The correlated temporal evolution of the monthly averaged wave energies in the lower and upper parts, and a subsequent quantitative analysis, strongly suggest that at the observation site, dynamical instability (strong wind shear) induced by the tropospheric jet is the main excitation source of inertial gravity waves in the TLS.

  2. SWIFT X-RAY OBSERVATIONS OF CLASSICAL NOVAE. II. THE SUPER SOFT SOURCE SAMPLE

    Schwarz, Greg J. [American Astronomical Society, 2000 Florida Avenue, NW, Suite 400, Washington, DC 20009-1231 (United States); Ness, Jan-Uwe [XMM-Newton Science Operations Centre, ESAC, Apartado 78, 28691 Villanueva de la Canada, Madrid (Spain); Osborne, J. P.; Page, K. L.; Evans, P. A.; Beardmore, A. P. [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom); Walter, Frederick M. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794-3800 (United States); Andrew Helton, L. [SOFIA Science Center, USRA, NASA Ames Research Center, M.S. N211-3, Moffett Field, CA 94035 (United States); Woodward, Charles E. [Minnesota Institute of Astrophysics, 116 Church Street S.E., University of Minnesota, Minneapolis, MN 55455 (United States); Bode, Mike [Astrophysics Research Institute, Liverpool John Moores University, Birkenhead CH41 1LD (United Kingdom); Starrfield, Sumner [School of Earth and Space Exploration, Arizona State University, P.O. Box 871404, Tempe, AZ 85287-1404 (United States); Drake, Jeremy J., E-mail: Greg.Schwarz@aas.org [Smithsonian Astrophysical Observatory, 60 Garden Street, MS 3, Cambridge, MA 02138 (United States)

    2011-12-01

    The Swift gamma-ray burst satellite is an excellent facility for studying novae. Its rapid response time and sensitive X-ray detector provide an unparalleled opportunity to investigate the previously poorly sampled evolution of novae in the X-ray regime. This paper presents Swift observations of 52 Galactic/Magellanic Cloud novae. We include the X-Ray Telescope (0.3-10 keV) instrument count rates and the UltraViolet and Optical Telescope (1700-8000 Å) filter photometry. Also included in the analysis are the publicly available pointed observations of 10 additional novae in the X-ray archives. This is the largest X-ray sample of Galactic/Magellanic Cloud novae yet assembled and consists of 26 novae with Super Soft X-ray emission, 19 from Swift observations. The data set shows that the faster novae have an early hard X-ray phase that is usually missing in slower novae. The Super Soft X-ray phase occurs earlier and does not last as long in fast novae compared to slower novae. All the Swift novae with sufficient observations show that novae are highly variable, with rapid variability and different periodicities. In the majority of cases, nuclear burning ceases less than three years after the outburst begins. Previous relationships, such as the nuclear burning duration versus t₂ or the expansion velocity of the ejecta, and the nuclear burning duration versus the orbital period, are shown to be poorly correlated with the full sample, indicating that additional factors beyond the white dwarf mass and binary separation play important roles in the evolution of a nova outburst. Finally, we confirm two optical phenomena that are correlated with strong, soft X-ray emission and can be used to further increase the efficiency of X-ray campaigns.

  3. SWIFT X-RAY OBSERVATIONS OF CLASSICAL NOVAE. II. THE SUPER SOFT SOURCE SAMPLE

    Schwarz, Greg J.; Ness, Jan-Uwe; Osborne, J. P.; Page, K. L.; Evans, P. A.; Beardmore, A. P.; Walter, Frederick M.; Andrew Helton, L.; Woodward, Charles E.; Bode, Mike; Starrfield, Sumner; Drake, Jeremy J.

    2011-01-01

    The Swift gamma-ray burst satellite is an excellent facility for studying novae. Its rapid response time and sensitive X-ray detector provide an unparalleled opportunity to investigate the previously poorly sampled evolution of novae in the X-ray regime. This paper presents Swift observations of 52 Galactic/Magellanic Cloud novae. We include the X-Ray Telescope (0.3-10 keV) instrument count rates and the UltraViolet and Optical Telescope (1700-8000 Å) filter photometry. Also included in the analysis are the publicly available pointed observations of 10 additional novae in the X-ray archives. This is the largest X-ray sample of Galactic/Magellanic Cloud novae yet assembled and consists of 26 novae with Super Soft X-ray emission, 19 from Swift observations. The data set shows that the faster novae have an early hard X-ray phase that is usually missing in slower novae. The Super Soft X-ray phase occurs earlier and does not last as long in fast novae compared to slower novae. All the Swift novae with sufficient observations show that novae are highly variable, with rapid variability and different periodicities. In the majority of cases, nuclear burning ceases less than three years after the outburst begins. Previous relationships, such as the nuclear burning duration versus t₂ or the expansion velocity of the ejecta, and the nuclear burning duration versus the orbital period, are shown to be poorly correlated with the full sample, indicating that additional factors beyond the white dwarf mass and binary separation play important roles in the evolution of a nova outburst. Finally, we confirm two optical phenomena that are correlated with strong, soft X-ray emission and can be used to further increase the efficiency of X-ray campaigns.

  4. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm ... applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter proving two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to large protein data. The suggested methodology is fairly general ...
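
    The core ABC-MCMC idea is compact: propose a parameter, simulate data, and accept only when summary statistics of the simulation land within a tolerance eps of the observed summaries. The sketch below follows the standard likelihood-free MCMC recipe rather than the authors' exact algorithm; simulate, summary, eps and the prior are all left to the caller.

```python
import numpy as np

def abc_mcmc(simulate, summary, s_obs, theta0, eps, n_steps=5000,
             prop_scale=0.1, log_prior=lambda t: 0.0, seed=0):
    """Likelihood-free MCMC: with a symmetric proposal, accept when the
    simulated summaries are within eps of the observed ones and the
    prior ratio passes a Metropolis test."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    chain = []
    for _ in range(n_steps):
        prop = theta + prop_scale * rng.normal(size=theta.shape)
        s_sim = summary(simulate(prop, rng))
        close = np.linalg.norm(np.asarray(s_sim) - np.asarray(s_obs)) < eps
        if close and np.log(rng.uniform()) < log_prior(prop) - log_prior(theta):
            theta = prop
        chain.append(theta.copy())
    return np.array(chain)
```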

  5. Soft X-Ray Observations of a Complete Sample of X-Ray--selected BL Lacertae Objects

    Perlman, Eric S.; Stocke, John T.; Wang, Q. Daniel; Morris, Simon L.

    1996-01-01

    We present the results of ROSAT PSPC observations of the X-ray-selected BL Lacertae objects (XBLs) in the complete Einstein Extended Medium Sensitivity Survey (EMSS) sample. None of the objects is resolved in its respective PSPC image, but all are easily detected. All BL Lac objects in this sample are well fitted by single power laws. Their X-ray spectra exhibit a variety of spectral slopes, with best-fit energy power-law spectral indices between α = 0.5-2.3. The PSPC spectra of this sample are slightly steeper than those typical of flat radio-spectrum quasars. Because almost all of the individual PSPC spectral indices are equal to or slightly steeper than the overall optical-to-X-ray spectral indices for these same objects, we infer that BL Lac soft X-ray continua are dominated by steep-spectrum synchrotron radiation from a broad X-ray jet, rather than flat-spectrum inverse Compton radiation linked to the narrower radio/millimeter jet. The softness of the X-ray spectra of these XBLs revives the possibility proposed by Guilbert, Fabian, & McCray (1983) that BL Lac objects are lineless because the circumnuclear gas cannot be heated sufficiently to permit two stable gas phases, the cooler of which would comprise the broad emission-line clouds. Because unified schemes predict that hard self-Compton radiation is beamed only into a small solid angle in BL Lac objects, the steep-spectrum synchrotron tail controls the temperature of the circumnuclear gas at r ≤ 10¹⁸ cm and prevents broad-line cloud formation. We use these new ROSAT data to recalculate the X-ray luminosity function and cosmological evolution of the complete EMSS sample by determining accurate K-corrections for the sample and estimating the effects of variability and possible incompleteness in the sample. Our analysis confirms that XBLs are evolving "negatively," opposite in sense to quasars, with Ve/Va = 0.331±0.060. The statistically significant difference between the values for X...

  6. The contribution of simple random sampling to observed variations in faecal egg counts.

    Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I

    2012-09-10

    It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured with a haemocytometer conform to a Poisson process. Similarly, parasite eggs in a faecal suspension conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasite diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown, from a theoretical perspective, to give variable results that inevitably arise from the random distribution of parasite eggs in a well-mixed faecal sample. The Poisson processes that lead to this variability are described, with illustrative examples of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
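
    The width of those confidence intervals follows directly from the Poisson model. For a McMaster-style count, an exact Poisson interval on the raw count is scaled by the technique's multiplication factor; the factor of 50 below is a common McMaster value and should be replaced by that of the actual protocol.

```python
from scipy import stats

def epg_confidence_interval(eggs_counted, factor=50, conf=0.95):
    """Exact Poisson confidence interval for eggs per gram, given the
    raw number of eggs counted and the multiplication factor of the
    counting technique (50 is a common McMaster value)."""
    a = 1.0 - conf
    lo = stats.chi2.ppf(a / 2, 2 * eggs_counted) / 2 if eggs_counted else 0.0
    hi = stats.chi2.ppf(1 - a / 2, 2 * (eggs_counted + 1)) / 2
    return factor * lo, factor * hi

# 4 eggs counted -> nominal 200 epg, but roughly (54, 512) epg at 95%
print(epg_confidence_interval(4))
```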

  7. Evolution in Cloud Population Statistics of the MJO: From AMIE Field Observations to Global Cloud-Permitting Models

    Zhang, Chidong [Univ. of Miami, Coral Gables, FL (United States)

    2016-08-14

    Motivated by the success of the AMIE/DYNAMO field campaign, which collected unprecedented observations of cloud and precipitation from the tropical Indian Ocean in October 2011 to March 2012, this project explored how such observations can be applied to assist the development of global cloud-permitting models through evaluating and correcting model biases in cloud statistics. The main accomplishments of this project fall into four categories: generating observational products for model evaluation, using AMIE/DYNAMO observations to validate global model simulations, using AMIE/DYNAMO observations in numerical studies with cloud-permitting models, and providing leadership in the field. Results from this project provide valuable information for building a seamless bridge between the DOE ASR program's component on process-level understanding of cloud processes in the tropics and the RGCM focus on global variability and regional extremes. In particular, experience gained from this project should be directly applicable to the evaluation and improvement of ACME, especially as it transitions to a non-hydrostatic variable-resolution model.

  8. Observation of a physical matrix effect during cold vapour generation measurement of mercury in emissions samples

    Brown, Richard J.C., E-mail: richard.brown@npl.co.uk; Webb, William R.; Goddard, Sharon L.

    2014-05-01

    Highlights: • A matrix effect for CV-AFS measurement of mercury in emissions samples is reported. • It results from the differing efficiencies of liberation of reduced mercury. • There is a good correlation between solution density and the size of the effect. • Several methods to overcome the bias are presented and discussed. - Abstract: The observation of a physical matrix effect during the cold vapour generation-atomic fluorescence measurement of mercury in emissions samples is reported. The effect arises from the differing efficiencies of liberation of reduced mercury from solution as the matrix of the solution under test varies. As a result, peak area to peak height ratios decrease as matrix concentration increases, passing through a minimum, before the ratio increases again as matrix concentration increases further. In the test matrices examined, acidified potassium dichromate and sodium chloride solutions, the possible biases caused by differences between the calibration standard matrix and the test sample matrix were as large as 2.8% (relative), corresponding to peak area to peak height ratios of 45 for calibration standards and 43.75 for matrix samples. For the system considered, there is a good correlation between the density of the matrix and the point of optimum liberation of dissolved mercury for both matrix types. Several methods employing matrix matching and mathematical correction to overcome the bias are presented and their relative merits discussed; the most promising is the use of peak area, rather than peak height, for quantification.

  9. Alfvén waves in the foreshock propagating upstream in the plasma rest frame: statistics from Cluster observations

    Y. Narita

    2004-07-01

    We statistically study various properties of low-frequency waves, such as frequencies, wave numbers, phase velocities, and polarization, in the plasma rest frame in the terrestrial foreshock. Using Cluster observations, the wave telescope (k-filtering) technique is applied to investigate wave numbers and rest-frame frequencies. We find that most of the foreshock waves propagate upstream along the magnetic field at a phase velocity close to the Alfvén velocity. We identify frequencies around 0.1 Ωcp and wave numbers around 0.1 Ωcp/VA, where Ωcp is the proton cyclotron frequency and VA is the Alfvén velocity. Our results confirm the conclusions drawn from ISEE observations and strongly support the existence of Alfvén waves in the foreshock.

  10. Statistical Analyses of High-Resolution Aircraft and Satellite Observations of Sea Ice: Applications for Improving Model Simulations

    Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.

    2012-12-01

    Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better that provide new details on the properties and morphology of the ice pack across basin scales. For example the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent

  11. A statistical method for predicting splice variants between two groups of samples using GeneChip® expression array data

    Olson James M

    2006-04-01

    Background: Alternative splicing of pre-messenger RNA results in RNA variants with combinations of selected exons. It is one of the essential biological functions and regulatory components in higher eukaryotic cells. Some of these variants are detectable with the Affymetrix GeneChip®, which uses multiple oligonucleotide probes (i.e. a probe set), since the target sequences for the multiple probes are adjacent within each gene. Hybridization intensity from a probe correlates with the abundance of the corresponding transcript. Although the multiple-probe feature of the current GeneChip® was designed to assess expression values of individual genes, it also measures transcriptional abundance for sub-regions of a gene sequence. This additional capacity motivated us to develop a method to predict alternative splicing, taking advantage of extensive repositories of GeneChip® gene expression array data. Results: We developed a two-step approach to predict alternative splicing from GeneChip® data. First, we clustered the probes from a probe set into pseudo-exons based on similarity of probe intensities and physical adjacency. A pseudo-exon is defined as a sequence in the gene within which multiple probes have comparable probe intensity values. Second, for each pseudo-exon, we assessed the statistical significance of the difference in probe intensity between two groups of samples. Differentially expressed pseudo-exons are predicted to be alternatively spliced. We applied our method to empirical data generated from GeneChip® Hu6800 arrays, which include 7129 probe sets and twenty probes per probe set. The dataset consists of sixty-nine medulloblastoma (27 metastatic and 42 non-metastatic) samples and four cerebellum samples as normal controls. We predicted that 577 genes would be alternatively spliced when we compared normal cerebellum samples to medulloblastomas, and that thirteen genes would be alternatively spliced when we compared metastatic
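
    The two-step procedure can be sketched compactly. The clustering rule below (group physically adjacent probes whose log intensities stay within a tolerance of the running cluster mean) is a simplification of the authors' similarity criterion, and a plain t-test stands in for their significance assessment.

```python
import numpy as np
from scipy import stats

def pseudo_exons(probe_means, tol=0.5):
    """Step 1: group adjacent probes with comparable mean intensities."""
    groups, current = [], [0]
    for i in range(1, len(probe_means)):
        if abs(probe_means[i] - np.mean(probe_means[current])) < tol:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

def splice_pvalues(expr_a, expr_b, tol=0.5):
    """Step 2: per pseudo-exon test between two sample groups.
    expr_a, expr_b: (samples, probes) log-intensity matrices for one
    probe set; small p-values flag candidate splice variants."""
    overall = np.vstack([expr_a, expr_b]).mean(axis=0)
    return [stats.ttest_ind(expr_a[:, g].mean(axis=1),
                            expr_b[:, g].mean(axis=1)).pvalue
            for g in pseudo_exons(overall, tol)]
```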

  12. Observation of ferromagnetic resonance in a microscopic sample using magnetic resonance force microscopy

    Zhang, Z.; Hammel, P.C.; Wigen, P.E.

    1996-01-01

    We report the observation of a ferromagnetic resonance signal arising from a microscopic (∼20 μm × 40 μm) particle of thin (3 μm) yttrium iron garnet film using magnetic resonance force microscopy (MRFM). The large signal intensity in the resonance spectra suggests that MRFM could become a powerful microscopic ferromagnetic resonance technique with micron or sub-micron resolution. We also observe a very strong non-resonance signal, which occurs in the field regime where the sample magnetization readily reorients in response to the modulation of the magnetic field. This signal will be the main noise source in applications where a magnet is mounted on the cantilever. copyright 1996 American Institute of Physics

  13. Solar Ion Processing of Itokawa Grains: Reconciling Model Predictions with Sample Observations

    Christoffersen, Roy; Keller, L. P.

    2014-01-01

    Analytical TEM observations of Itokawa grains reported to date show complex solar wind ion processing effects in the outer 30-100 nm of pyroxene and olivine grains. The effects include loss of long-range structural order, formation of isolated internal cavities or "bubbles", and other nanoscale compositional/microstructural variations. None of the effects described so far, however, includes complete ion-induced amorphization. To link the array of observed relationships to grain surface exposure times, we have adapted our previous numerical model for progressive solar ion processing effects in lunar regolith grains to the Itokawa samples. The model uses SRIM ion collision damage and implantation calculations within the framework of a constant-deposited-energy model for amorphization. Inputs include experimentally measured amorphization fluences, a π-steradian variable ion incidence geometry as required for a rotating asteroid, and a numerical flux-versus-velocity solar wind spectrum.

  14. Fully automated gamma spectrometry gauge observing possible radioactive contamination of melting-shop samples

    Kroos, J.; Westkaemper, G.; Stein, J.

    1999-01-01

    At Salzgitter AG, several monitoring systems have been installed to check scrap transport by rail and by car. At present, scrap transported by ship is reloaded onto wagons and monitored afterwards. In the future, a detection system will be mounted on a crane so that scrap can be checked directly upon departure of the ship. Furthermore, at the Salzgitter AG Central Chemical Laboratory, a fully automated gamma spectrometry gauge is installed to check melting-shop samples for possible radioactive contamination of the products. The gamma spectrometer is integrated into the automated OE spectrometry line and tests melting-shop samples after the OE spectrometry is performed. With this technique, the specific activity of selected nuclides and the dose rate are determined. The activity check is part of the release procedure, and the corresponding measurement data are stored in a database for quality management purposes. (author)

  15. The price of electricity from private power producers: Stage 2, Expansion of sample and preliminary statistical analysis

    Comnes, G.A.; Belden, T.N.; Kahn, E.P.

    1995-02-01

    The market for long-term bulk power is becoming increasingly competitive and mature. Given that many privately developed power projects have been or are being developed in the US, it is possible to begin to evaluate the performance of the market by analyzing its revealed prices. Using a consistent method, this paper presents levelized contract prices for a sample of privately developed US generation properties. The sample includes 26 projects with a total capacity of 6,354 MW. Contracts are described in terms of their choice of technology, choice of fuel, treatment of fuel price risk, geographic location, dispatchability, expected dispatch niche, and size. The contract price analysis shows that gas technologies clearly stand out as the most attractive. At an 80% capacity factor, coal projects have an average 20-year levelized price of $0.092/kWh, whereas natural gas combined cycle and/or cogeneration projects have an average price of $0.069/kWh. Within each technology type subsample, however, there is considerable variation. Prices for natural gas combustion turbines and one wind project are also presented. A preliminary statistical analysis is conducted to understand the relationship between price and four categories of explanatory factors including product heterogeneity, geographic heterogeneity, economic and technological change, and other buyer attributes (including avoided costs). Because of residual price variation, we are unable to accept the hypothesis that electricity is a homogeneous product. Instead, the analysis indicates that buyer value still plays an important role in the determination of price for competitively-acquired electricity.

  16. Continuous quality control of the blood sampling procedure using a structured observation scheme.

    Seemann, Tine Lindberg; Nybo, Mads

    2016-10-15

    An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. The questionnaire from the EFLM Working Group for the Preanalytical Phase was adapted to local procedures. A pilot study of three months duration was conducted. Based on this, corrective actions were implemented and a follow-up study was conducted. All phlebotomists at the Department of Clinical Biochemistry and Pharmacology were observed. Three blood collections by each phlebotomist were observed at each session conducted at the phlebotomy ward and the hospital wards, respectively. Error frequencies were calculated for the phlebotomy ward and the hospital wards and for the two study phases. A total of 126 blood drawings by 39 phlebotomists were observed in the pilot study, while 84 blood drawings by 34 phlebotomists were observed in the follow-up study. In the pilot study, the three major error items were hand hygiene (42% error), mixing of samples (22%), and order of draw (21%). Minor significant differences were found between the two settings. After focus on the major aspects, the follow-up study showed significant improvement for all three items at both settings (P < 0.01, P < 0.01, and P = 0.01, respectively). Continuous quality control of the phlebotomy procedure revealed a number of items not conducted in compliance with the local phlebotomy guideline. It supported significant improvements in the adherence to the recommended phlebotomy procedures and facilitated documentation of the phlebotomy quality.

  17. Statistical evaluation of fatty acid profile and cholesterol content in fish (common carp) lipids obtained by different sample preparation procedures.

    Spiric, Aurelija; Trbovic, Dejana; Vranic, Danijela; Djinovic, Jasna; Petronijevic, Radivoj; Matekalo-Sverak, Vesna

    2010-07-05

    the second principal component (PC2) is recorded by C18:3 n-3, and C20:3 n-6, being present in a higher amount in the samples treated by the modified Soxhlet extraction, while C22:5 n-3, C20:3 n-3, C22:1 and C20:4, C16 and C18 negatively influence the score values of the PC2, showing significantly increased level in the samples treated by ASE method. Hotelling's paired T-square test used on the first three principal components for confirmation of differences in individual fatty acid content obtained by ASE and Soxhlet method in carp muscle showed statistically significant difference between these two data sets (T(2)=161.308, p<0.001). Copyright 2010 Elsevier B.V. All rights reserved.

  18. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
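
    The contrast between the two chart types is easiest to see in the limit formulas. A conventional p-chart uses only within-subgroup binomial variation, so its limits shrink as 1/sqrt(n); Laney's p' chart rescales them by sigma_z, the between-subgroup variation of the standardized proportions. A sketch:

```python
import numpy as np

def p_chart_limits(counts, sizes, laney=True):
    """Lower/upper control limits for proportion data.  With laney=True
    the classic p-chart limits are widened by sigma_z, estimated from
    the moving range of the z-scores (d2 = 1.128 for ranges of two)."""
    counts = np.asarray(counts, dtype=float)
    sizes = np.asarray(sizes, dtype=float)
    pbar = counts.sum() / sizes.sum()
    se = np.sqrt(pbar * (1 - pbar) / sizes)      # within-subgroup sigma
    sigma_z = 1.0
    if laney:
        z = (counts / sizes - pbar) / se
        sigma_z = np.mean(np.abs(np.diff(z))) / 1.128
    return pbar - 3 * se * sigma_z, pbar + 3 * se * sigma_z
```

    With very large subgroup sizes, se collapses toward zero and nearly every point breaches the classic limits; sigma_z restores the between-subgroup variation those limits ignore.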

  19. Statistical energy as a tool for binning-free, multivariate goodness-of-fit tests, two-sample comparison and unfolding

    Aslan, B.; Zech, G.

    2005-01-01

    We introduce the novel concept of statistical energy as a statistical tool. We define the statistical energy of statistical distributions in a way analogous to the energy of electric charge distributions. Charges of opposite sign are in a state of minimum energy if they are equally distributed. This property is used to check whether two samples belong to the same parent distribution, to define goodness-of-fit tests, and to unfold distributions distorted by measurement. The approach is binning-free and especially powerful in multidimensional applications.
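
    For two samples the energy statistic takes a simple closed form: assign opposite charges 1/n and -1/m to the two samples and sum a distance-dependent potential over all pairs. The 1-D sketch below uses the logarithmic distance favored in this approach, with a small regularizer against zero distances; significance would in practice come from permutation of the pooled sample.

```python
import numpy as np

def energy_statistic(x, y):
    """Two-sample statistical energy with R(d) = -log(d).  Small values
    indicate the samples are compatible with one parent distribution."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n, m = len(x), len(y)
    def r(a, b):
        return -np.log(np.abs(a[:, None] - b[None, :]) + 1e-12)
    rxx = r(x, x)
    ryy = r(y, y)
    same_x = (rxx.sum() - np.trace(rxx)) / 2     # i < j pairs within x
    same_y = (ryy.sum() - np.trace(ryy)) / 2     # i < j pairs within y
    return same_x / n**2 + same_y / m**2 - r(x, y).sum() / (n * m)
```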

  20. ATCA observations of the MACS-Planck Radio Halo Cluster Project. II. Radio observations of an intermediate redshift cluster sample

    Martinez Aviles, G.; Johnston-Hollitt, M.; Ferrari, C.; Venturi, T.; Democles, J.; Dallacasa, D.; Cassano, R.; Brunetti, G.; Giacintucci, S.; Pratt, G. W.; Arnaud, M.; Aghanim, N.; Brown, S.; Douspis, M.; Hurier, J.; Intema, H. T.; Langer, M.; Macario, G.; Pointecouteau, E.

    2018-04-01

    Aim. A fraction of galaxy clusters host diffuse radio sources whose origins are investigated through multi-wavelength studies of cluster samples. We investigate the presence of diffuse radio emission in a sample of seven galaxy clusters in the largely unexplored intermediate redshift range (0.3 < z < ...). The data are available at the CDS via http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A94

  1. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    Reiser, I; Lu, Z

    2014-01-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on the bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models: channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching (TM) observer. Different sample sizes were generated by randomly selecting a subset of image pairs (N=20,40,60,80). Observer performance was quantified as the proportion of correct responses (PC). Bias was quantified as the relative difference in PC between 20 and 80 image pairs. Results: For N=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for the CHO (Gabor), 7% for the CHO (LG), and 3% for the TM observer. The relative standard deviation, σ(PC)/PC, at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task.
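
    The CHO pipeline in the abstract reduces to a few linear-algebra steps: project each ROI onto the channel set, estimate the average within-class channel covariance, and form the Hotelling template. The sketch below returns the detectability SNR; channel construction (Gabor or Laguerre-Gauss) is left to the caller, and no internal noise term is included.

```python
import numpy as np

def cho_snr(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability.

    signal_imgs, noise_imgs : (n_images, h, w) ROI stacks;
    channels : (n_channels, h, w) channel functions."""
    U = channels.reshape(channels.shape[0], -1)           # (n_ch, n_pix)
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U.T  # channel outputs
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U.T
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))               # intra-class cov.
    w = np.linalg.solve(S, vs.mean(axis=0) - vn.mean(axis=0))
    ts, tn = vs @ w, vn @ w                               # test statistics
    return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))
```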

  2. Monitoring larval populations of the Douglas-fir tussock moth and the western spruce budworm on permanent plots: sampling methods and statistical properties of data

    A.R. Mason; H.G. Paul

    1994-01-01

    Procedures for monitoring larval populations of the Douglas-fir tussock moth and the western spruce budworm are recommended based on many years' experience in sampling these species in eastern Oregon and Washington. It is shown that statistically reliable estimates of larval density can be made for a population by sampling host trees in a series of permanent plots in a...

  3. Classroom Research: Assessment of Student Understanding of Sampling Distributions of Means and the Central Limit Theorem in Post-Calculus Probability and Statistics Classes

    Lunsford, M. Leigh; Rowell, Ginger Holmes; Goodson-Espy, Tracy

    2006-01-01

    We applied a classroom research model to investigate student understanding of sampling distributions of sample means and the Central Limit Theorem in post-calculus introductory probability and statistics courses. Using a quantitative assessment tool developed by previous researchers and a qualitative assessment tool developed by the authors, we…

  4. DISK IMAGING SURVEY OF CHEMISTRY WITH SMA. II. SOUTHERN SKY PROTOPLANETARY DISK DATA AND FULL SAMPLE STATISTICS

    Oeberg, Karin I.; Qi Chunhua; Andrews, Sean M.; Espaillat, Catherine; Wilner, David J.; Fogel, Jeffrey K. J.; Bergin, Edwin A.; Pascucci, Ilaria; Kastner, Joel H.

    2011-01-01

    This is the second in a series of papers based on data from DISCS, a Submillimeter Array observing program aimed at spatially and spectrally resolving the chemical composition of 12 protoplanetary disks. We present data on six Southern-sky sources (IM Lup, SAO 206462 (HD 135344b), HD 142527, AS 209, AS 205, and V4046 Sgr), which complement the six sources in the Taurus star-forming region reported previously. CO 2-1 and HCO+ 3-2 emission are detected and resolved in all disks and show velocity patterns consistent with Keplerian rotation. Where detected, the emission from DCO+ 3-2, N2H+ 3-2, H2CO 3(03)-2(02) and 4(14)-3(13), HCN 3-2, and CN 2(33/4/2)-1(22/3/1) is also generally spatially resolved. The detection rates are highest toward the M and K stars, while the F star SAO 206462 has only weak CN and HCN emission, and H2CO alone is detected toward HD 142527. These findings, together with the statistics from the previous Taurus disks, support the hypothesis that high detection rates of many small molecules depend on the presence of a cold and protected disk midplane, which is less common around F and A stars compared to M and K stars. Disk-averaged variations in the proposed radiation tracer CN/HCN are found to be small, despite a two-orders-of-magnitude range of spectral types and accretion rates. In contrast, the resolved images suggest that the CN/HCN emission ratio varies with disk radius in at least two of the systems. There are no clear observational differences in the disk chemistry between the classical/full T Tauri disks and transitional disks. Furthermore, the observed line emission does not depend on the measured accretion luminosities or the number of infrared lines detected, which suggests that the chemistry outside of 100 AU is not coupled to the physical processes that drive the chemistry in the innermost few AU.

  5. RHAPSODY. I. STRUCTURAL PROPERTIES AND FORMATION HISTORY FROM A STATISTICAL SAMPLE OF RE-SIMULATED CLUSTER-SIZE HALOS

    Wu, Hao-Yi; Hahn, Oliver; Wechsler, Risa H.; Mao, Yao-Yuan; Behroozi, Peter S.

    2013-01-01

    We present the first results from the RHAPSODY cluster re-simulation project: a sample of 96 'zoom-in' simulations of dark matter halos of mass 10^14.8±0.05 h^-1 M☉, selected from a 1 h^-3 Gpc^3 volume. This simulation suite is the first to resolve this many halos with ~5 × 10^6 particles per halo in the cluster mass regime, allowing us to statistically characterize the distribution of and correlation between halo properties at fixed mass. We focus on the properties of the main halos and how they are affected by formation history, which we track back to z = 12, over five decades in mass. We give particular attention to the impact of the formation history on the density profiles of the halos. We find that the deviations from the Navarro-Frenk-White (NFW) model and the Einasto model depend on formation time. Late-forming halos tend to have considerable deviations from both models, partly due to the presence of massive subhalos, while early-forming halos deviate less but still significantly from the NFW model and are better described by the Einasto model. We find that the halo shapes depend only moderately on formation time. Departure from spherical symmetry impacts the density profiles through the anisotropic distribution of massive subhalos. Further evidence of the impact of subhalos is provided by analyzing the phase-space structure. A detailed analysis of the properties of the subhalo population in RHAPSODY is presented in a companion paper.

  6. Statistical correlation of spectral broadening in VLF transmitter signal and low-frequency ionospheric turbulence from observation on DEMETER satellite

    A. Rozhnoi

    2008-10-01

    In our earlier papers we reported a depression of VLF transmitter signals over the epicenters of large earthquakes, observed on the French DEMETER satellite, which can be considered a new method of global diagnostics of seismic influence on the ionosphere. In the present paper we investigate a possible interaction between the VLF signal and ionospheric turbulence using an additional characteristic of the VLF signal: spectrum broadening. This characteristic is important for estimating the type of interaction, linear or nonlinear scattering. Our main results are the following:
    – There are two zones of increased spectrum broadening, centered near magnetic latitudes Φ=±10° and Φ=±40°. Based on previous case-study research and ground ionosonde registrations, this is probably evidence of nonlinear (active) scattering of the VLF signal on ionospheric turbulence. However, the occurrence rate of spectrum broadening in the middle-latitude area is higher than in the near-equatorial zone (~15–20% in the near-equatorial zone in comparison with ~100% in the middle-latitude area), which probably coincides with the rate of ionospheric turbulence.
    – From two years of observational statistics in three selected low-latitude regions and one middle-latitude region inside the reception area of the VLF signal from the NWC transmitter, we find a correlation of spectrum broadening neither with ion-cyclotron noise (f=150–500 Hz), which possibly means poor representation of the turbulence by this noise due to its mixture with natural ELF emission (which correlates with whistlers), nor with magnetic storm activity.
    – We find a rather evident correlation of ion-cyclotron frequency noise and VLF signal depression, and a weak correlation of spectrum broadening, with seismicity in the middle-latitude region over Japan. But in the low-latitude regions we do not find such a correlation. The statistical decrease of the VLF signal supports our previous case-study results. However rather weak spectrum broadening

  7. Small Sample Statistics for Incomplete Nonnormal Data: Extensions of Complete Data Formulae and a Monte Carlo Comparison

    Savalei, Victoria

    2010-01-01

    Incomplete nonnormal data are common occurrences in applied research. Although these two problems are often dealt with separately by methodologists, they often co-occur. Very little has been written about statistics appropriate for evaluating models with such data. This article extends several existing statistics for complete nonnormal data to…

  8. Alfvén waves in the foreshock propagating upstream in the plasma rest frame: statistics from Cluster observations

    Y. Narita

    2004-07-01

    We statistically study various properties of low-frequency waves, such as frequencies, wave numbers, phase velocities, and polarization, in the plasma rest frame in the terrestrial foreshock. Using Cluster observations, the wave telescope (k-filtering) technique is applied to investigate wave numbers and rest-frame frequencies. We find that most of the foreshock waves propagate upstream along the magnetic field at a phase velocity close to the Alfvén velocity. We identify that frequencies are around 0.1 Ωcp and wave numbers are around 0.1 Ωcp/VA, where Ωcp is the proton cyclotron frequency and VA is the Alfvén velocity. Our results confirm the conclusions drawn from ISEE observations and strongly support the existence of Alfvén waves in the foreshock.

  9. Statistics of high-altitude and high-latitude O+ ion outflows observed by Cluster/CIS

    A. Korth

    2005-07-01

    The persistent outflows of O+ ions observed by the Cluster CIS/CODIF instrument were studied statistically in the high-altitude (from 3 up to 11 RE) and high-latitude (from 70 to ~90° invariant latitude, ILAT) polar region. The principal results are: (1) outflowing O+ ions with energies of more than 1 keV are observed above 10 RE geocentric distance and above 85° ILAT; (2) at 6-8 RE geocentric distance, the latitudinal distribution of O+ ion outflow is consistent with velocity filter dispersion from a source equatorward of and below the spacecraft (e.g. the cusp/cleft); (3) however, at 8-12 RE geocentric distance the distribution of O+ outflows cannot be explained by velocity filter only. The results suggest that additional energization or acceleration processes for outflowing O+ ions occur at high altitudes and high latitudes in the dayside polar region. Keywords. Magnetospheric physics (magnetospheric configuration and dynamics; solar wind-magnetosphere interactions)

  10. Fundamental statistical relationships between monthly and daily meteorological variables: Temporal downscaling of weather based on a global observational dataset

    Sommer, Philipp; Kaplan, Jed

    2016-04-01

    Accurate modelling of large-scale vegetation dynamics, hydrology, and other environmental processes requires meteorological forcing on daily timescales. While meteorological data with high temporal resolution are becoming increasingly available, simulations for the future or distant past are limited by lack of data and poor performance of climate models, e.g., in simulating daily precipitation. To overcome these limitations, we may temporally downscale monthly summary data to a daily time step using a weather generator. Parameterization of such statistical models has traditionally been based on a limited number of observations. Recent developments in the archiving, distribution, and analysis of "big data" datasets provide new opportunities for the parameterization of a temporal downscaling model that is applicable over a wide range of climates. Here we parameterize a WGEN-type weather generator using more than 50 million individual daily meteorological observations, from over 10,000 stations covering all continents, based on the Global Historical Climatology Network (GHCN) and Synoptic Cloud Reports (EECRA) databases. Using the resulting "universal" parameterization and driven by monthly summaries, we downscale mean temperature (minimum and maximum), cloud cover, and total precipitation, to daily estimates. We apply a hybrid gamma-generalized Pareto distribution to calculate daily precipitation amounts, which overcomes much of the inability of earlier weather generators to simulate high amounts of daily precipitation. Our globally parameterized weather generator has numerous applications, including vegetation and crop modelling for paleoenvironmental studies.
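
    A hedged sketch of how a hybrid gamma-generalized Pareto amount model can be sampled (our illustration; the parameter values and the crossover threshold are placeholders, not the paper's fitted values): draw wet-day amounts from a gamma body and replace draws beyond a threshold with generalized Pareto exceedances, fattening the extreme tail that plain gamma weather generators underestimate.

      import numpy as np
      from scipy import stats

      def sample_daily_precip(n, gam_shape=0.8, gam_scale=6.0, threshold=20.0,
                              gpd_shape=0.2, gpd_scale=8.0,
                              rng=np.random.default_rng(42)):
          """Draw n wet-day precipitation amounts (mm) from a hybrid model:
          gamma below `threshold`, generalized Pareto exceedances above it."""
          amounts = stats.gamma.rvs(gam_shape, scale=gam_scale, size=n,
                                    random_state=rng)
          heavy = amounts > threshold
          # splice the tail: replace heavy gamma draws with GPD exceedances
          amounts[heavy] = threshold + stats.genpareto.rvs(
              gpd_shape, scale=gpd_scale, size=heavy.sum(), random_state=rng)
          return amounts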

  11. Radar Observations of Asteroid 101955 Bennu and the OSIRIS-REx Sample Return Mission

    Nolan, M. C.; Benner, L.; Giorgini, J. D.; Howell, E. S.; Kerr, R.; Lauretta, D. S.; Magri, C.; Margot, J. L.; Scheeres, D. J.

    2017-12-01

    On September 24, 2023, the OSIRIS-REx spacecraft will return a sample of asteroid (101955) Bennu to the Earth. We chose the target of this mission in part because of the work we did over more than a decade using the Arecibo and Goldstone planetary radars to observe this asteroid. We observed Bennu (then known as 1999 RQ36) at Arecibo and Goldstone in 1999 and 2005, and at Arecibo in 2011. Radar imaging from the first two observing epochs provided a shape and size for Bennu, which greatly simplified mission planning. We know that the spacecraft will encounter a roundish asteroid 500 m in diameter with a distinct equatorial ridge [Nolan et al., 2013]. Bennu does not have the dramatic concavities seen in Itokawa and comet 67P/Churyumov-Gerasimenko, the Hayabusa and Rosetta mission targets, respectively, which would have been obvious in radar imaging. Further radar ranging in 2011 provided a detection of the Yarkovsky effect, allowing us to constrain Bennu's mass and bulk density from radar measurement of non-gravitational forces acting on its orbit [Chesley et al., 2014]. The 2011 observations were particularly challenging, occurring during a management transition at the Arecibo Observatory, and would not have been possible without significant extra cooperation between the old and new managing organizations. As a result, we can predict Bennu's position to within a few km over the next 100 years, until its close encounter with the Earth in 2135. We know its shape to within ± 10 m (1σ) on the long and intermediate axes and ± 52 m on the polar diameter, and its pole orientation to within 5 degrees. The bulk density is 1260 ± 70 kg/m3 and the rotation is retrograde with a 4.297 ± 0.002 h period. The OSIRIS-REx team is using these constraints to preplan the initial stages of proximity operations and dramatically reduce risk. The Figure shows the model and Arecibo radar images from 1999 (left), 2005 (center), and 2011 (right). Bennu is the faint dot near the center of

  12. Direct comparison of observed magnitude-redshift relations in complete galaxy samples with systematic predictions of alternative redshift-distance laws

    Segal, I.E.

    1989-01-01

    The directly observed average apparent magnitude (or, in one case, angular diameter) as a function of redshift in each of a number of large complete galaxy samples is compared with the predictions of hypothetical redshift-distance power laws, as a systematic statistical question. Due account is taken of observational flux limits by an entirely objective and reproducible optimal statistical procedure, and no assumptions are made regarding the distribution of the galaxies in space. The laws considered are of the form z ∝ r^p, where r denotes the distance, for p = 1, 2, and 3. The comparative fits of the various redshift-distance laws are similar in all the samples. Overall, the cubic law fits better than the linear law, but each shows substantial systematic deviations from observation. The quadratic law fits extremely well except at high redshifts in some of the samples, where no power law fits closely and the correlation of apparent magnitude with redshift is small or negative. In all cases, the luminosity function required for theoretical prediction was estimated from the sample by the non-parametric procedure ROBUST, whose intrinsic neutrality as programmed was checked by comprehensive computer simulations. (author)
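
    For orientation, the link between a redshift-distance power law and the predicted magnitude-redshift relation can be written as follows (a simplified derivation in our notation, ignoring K-corrections and assuming Euclidean flux dilution; not Segal's exact treatment). With $z \propto r^{p}$ and apparent magnitude $m = M + 5\log_{10} r + \mathrm{const}$ for a standard candle,

      $m(z) = M + \frac{5}{p}\,\log_{10} z + \mathrm{const},$

    so each hypothesis p = 1, 2, 3 predicts a slope of 5/p magnitudes per decade of redshift, which is what the sample comparisons test after allowing for the flux limits and the estimated luminosity function.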

  13. A Statistical Analysis of Langmuir Wave-Electron Correlations Observed by the CHARM II Auroral Sounding Rocket

    Dombrowski, M. P.; Labelle, J. W.; Kletzing, C.; Bounds, S. R.; Kaeppler, S. R.

    2014-12-01

    Langmuir-mode electron plasma waves are frequently observed by spacecraft in active plasma environments such as the ionosphere. Ionospheric Langmuir waves may be excited by the bump-on-tail instability generated by impinging beams of electrons traveling parallel to the background magnetic field (B). The Correlation of High-frequencies and Auroral Roar Measurement (CHARM II) sounding rocket was launched into a substorm at 9:49 UT on 17 February 2010, from the Poker Flat Research Range in Alaska. The primary instruments included the University of Iowa Wave-Particle Correlator (WPC), the Dartmouth High-Frequency Experiment (HFE), several charged particle detectors, low-frequency wave instruments, and a magnetometer. The HFE is a receiver system which effectively yields continuous (100% duty cycle) electric-field waveform measurements from 100 kHz to 5 MHz, and which had its detection axis aligned nominally parallel to B. The HFE output was fed on-payload to the WPC, which uses a phase-locked loop to track the incoming wave frequency with the most power, then sorting incoming electrons at eight energy levels into sixteen wave-phase bins. CHARM II encountered several regions of strong Langmuir wave activity throughout its 15-minute flight, and the WPC showed wave-lock and statistically significant particle correlation distributions during several time periods. We show results of an in-depth analysis of the CHARM II WPC data for the entire flight, including statistical analysis of correlations which show evidence of direct interaction with the Langmuir waves, indicating (at various times) trapping of particles and both driving and damping of Langmuir waves by particles. In particular, the sign of the gradient in particle flux appears to correlate with the phase relation between the electrons and the wave field, with possible implications for the wave physics.

  14. NDT oriented equipment for observing the Doppler broadening of radiation produced by the annihilation of positrons in cylindrical samples

    Coleman, C.F.; Smith, F.A.; Hughes, A.E.

    1976-11-01

    This report describes the development of equipment for measuring annihilation line broadening in cylindrical samples a few millimetres in diameter, suitable for use in fatigue testing programs. A detached positron source is employed, allowing the samples to be scanned both longitudinally (resolution approximately 1 cm) and in azimuth. Some of the advantages of and problems associated with this configuration are discussed. The statistical precision of a number of parameters

  15. Dark Matter Profiles in Dwarf Galaxies: A Statistical Sample Using High-Resolution Hα Velocity Fields from PCWI

    Relatores, Nicole C.; Newman, Andrew B.; Simon, Joshua D.; Ellis, Richard; Truong, Phuongmai N.; Blitz, Leo

    2018-01-01

    We present high quality Hα velocity fields for a sample of nearby dwarf galaxies (log M/M⊙ = 8.4-9.8) obtained as part of the Dark Matter in Dwarf Galaxies survey. The purpose of the survey is to investigate the cusp-core discrepancy by quantifying the variation of the inner slope of the dark matter distributions of 26 dwarf galaxies, which were selected as likely to have regular kinematics. The data were obtained with the Palomar Cosmic Web Imager, located on the Hale 5m telescope. We extract rotation curves from the velocity fields and use optical and infrared photometry to model the stellar mass distribution. We model the total mass distribution as the sum of a generalized Navarro-Frenk-White dark matter halo along with the stellar and gaseous components. We present the distribution of inner dark matter density profile slopes derived from this analysis. For a subset of galaxies, we compare our results to an independent analysis based on CO observations. In future work, we will compare the scatter in inner density slopes, as well as their correlations with galaxy properties, to theoretical predictions for dark matter core creation via supernovae feedback.

  16. Sample processing, protocol, and statistical analysis of the time-of-flight secondary ion mass spectrometry (ToF-SIMS) of protein, cell, and tissue samples.

    Barreto, Goncalo; Soininen, Antti; Sillat, Tarvo; Konttinen, Yrjö T; Kaivosoja, Emilia

    2014-01-01

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is increasingly being used in the analysis of biological samples. For example, it has been applied to distinguish healthy and osteoarthritic human cartilage. This chapter discusses the ToF-SIMS principle and instrumentation, including the three modes of analysis in ToF-SIMS. ToF-SIMS sets certain requirements for the samples to be analyzed; for example, the samples have to be vacuum compatible. Accordingly, sample processing steps for different biological samples, i.e., proteins, cells, frozen and paraffin-embedded tissues, and extracellular matrix, are presented. Multivariate analysis of the ToF-SIMS data and the necessary data preprocessing steps (peak selection, data normalization, mean-centering, and scaling and transformation) are discussed in this chapter.
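
    A hedged sketch of the generic preprocessing steps listed above (our illustration in Python, not the chapter's protocol): normalization to total counts, mean-centering, and unit-variance scaling ahead of multivariate analysis such as PCA.

      import numpy as np

      def preprocess(spectra):
          """spectra: (n_samples, n_peaks) array of ToF-SIMS peak intensities."""
          x = spectra / spectra.sum(axis=1, keepdims=True)  # normalize each spectrum
          x = x - x.mean(axis=0)                            # mean-center each peak
          return x / x.std(axis=0, ddof=1)                  # autoscale to unit variance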

  17. New color-photographic observation of thermoluminescence from sliced rock samples

    Hashimoto, Tetsuo; Kimura, Kenichi; Koyanagi, Akira; Takahashi, Kuniaki; Sotobayashi, Takeshi

    1983-01-01

    A new observation technique has been established for thermoluminescence photography using extremely high-sensitivity color films. Considering future application to the geological fields, a granite was selected as the testing material. The sliced specimens (0.5-0.7 mm in thickness), which were irradiated with a 60Co source, were mounted on a heater fitted with a thermocouple connected to a microcomputer for measuring the temperature. The samples were heated in the temperature range of 80-400 °C while the camera shutter was operated under microcomputer control. Four commercially available films (Kodak-1000 (ASA), Kodak-400, Sakura-400, and Fuji-400) gave detectable color images of artificial thermoluminescence above a total absorbed dose of 880 Gy (88 krad). Specimens irradiated up to 8.4 kGy (840 krad) made it easy to distinguish the appearance of the thermoluminescence images depending on the kind of white mineral constituents. Moreover, these color images changed with the heating temperature. Sakura-400 film produced the most colorful images in terms of color tone, although Kodak-1000 film showed the highest sensitivity. Using Kodak-1000, a characteristic color image due to natural thermoluminescence was observed on Precambrian granite that had been exposed to natural radiation alone since its formation. This simple technique, which obtains surface information reflecting impurities and local crystal defects in addition to small mineral constituents, was named the thermoluminescence color imaging (TLCI) technique by the authors, and its versatile applications are discussed. (author)

  18. Comparing identified and statistically significant lipids and polar metabolites in 15-year-old serum and dried blood spot samples for longitudinal studies

    Kyle, Jennifer E.; Casey, Cameron P.; Stratton, Kelly G.; Zink, Erika M.; Kim, Young-Mo; Zheng, Xueyun; Monroe, Matthew E.; Weitz, Karl K.; Bloodsworth, Kent J.; Orton, Daniel J.; Ibrahim, Yehia M.; Moore, Ronald J.; Lee, Christine G.; Pedersen, Catherine; Orwoll, Eric; Smith, Richard D.; Burnum-Johnson, Kristin E.; Baker, Erin S. (Pacific Northwest National Laboratory, Richland, WA, USA; Oregon Health and Science University, Portland, OR, USA; Portland Veterans Affairs Medical Center, Portland, OR, USA)

    2017-02-05

    The use of dried blood spots (DBS) has many advantages over traditional plasma and serum samples, such as the smaller blood volume required, storage at room temperature, and ability for sampling in remote locations. However, understanding the robustness of different analytes in DBS samples is essential, especially in older samples collected for longitudinal studies. Here we analyzed DBS samples collected in 2000-2001 and stored at room temperature and compared them to matched serum samples stored at -80°C to determine if they could be effectively used as specific time points in a longitudinal study following metabolic disease. Four hundred small molecules were identified in both the serum and DBS samples using gas chromatography-mass spectrometry (GC-MS), liquid chromatography-MS (LC-MS) and LC-ion mobility spectrometry-MS (LC-IMS-MS). The identified polar metabolites overlapped well between the sample types, though only one statistically significant polar metabolite in a case-control study was conserved, indicating that degradation occurring in the DBS samples affects quantitation. Differences in the lipid identifications indicated that some oxidation occurs in the DBS samples. However, thirty-six statistically significant lipids correlated in both sample types, indicating that lipid quantitation was more stable across the sample types.

  19. Statistical Power and Optimum Sample Allocation Ratio for Treatment and Control Having Unequal Costs Per Unit of Randomization

    Liu, Xiaofeng

    2003-01-01

    This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
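
    Although the record is truncated here, the single-level version of this result is standard and worth stating for orientation (our notation; the article's multilevel formula may differ). Minimizing the variance of the treatment-control contrast, $\sigma^2(1/n_T + 1/n_C)$, subject to the budget constraint $c_T n_T + c_C n_C = B$ gives

      $\frac{n_T}{n_C} = \sqrt{\frac{c_C}{c_T}},$

    so the condition that is cheaper per unit of randomization receives proportionally more units.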

  20. Statistical Machines for Trauma Hospital Outcomes Research: Application to the PRospective, Observational, Multi-Center Major Trauma Transfusion (PROMMTT) Study.

    Sara E Moore

    Improving the treatment of trauma, a leading cause of death worldwide, is of great clinical and public health interest. This analysis introduces flexible statistical methods for estimating center-level effects on individual outcomes in the context of highly variable patient populations, such as those of the PRospective, Observational, Multi-center Major Trauma Transfusion study. Ten US level I trauma centers enrolled a total of 1,245 trauma patients who survived at least 30 minutes after admission and received at least one unit of red blood cells. Outcomes included death, multiple organ failure, substantial bleeding, and transfusion of blood products. The centers involved were classified as either large- or small-volume based on the number of massive transfusion patients enrolled during the study period. We focused on estimation of parameters inspired by causal inference, specifically estimated impacts on patient outcomes related to the volume of the trauma hospital that treated them. We defined this association as the change in mean outcomes of interest that would be observed if, contrary to fact, subjects from large-volume sites were treated at small-volume sites (the effect of treatment among the treated). We estimated this parameter using three different methods, some of which use data-adaptive machine learning tools to derive the outcome models, minimizing residual confounding by reducing model misspecification. Unadjusted and adjusted estimates sometimes differed dramatically, demonstrating the need to account for differences in patient characteristics in comparisons between centers. In addition, the estimators based on robust adjustment methods showed potential impacts of hospital volume. For instance, we estimated a survival benefit for patients who were treated at large-volume sites, which was not apparent in simpler, unadjusted comparisons. By removing arbitrary modeling decisions from the estimation process and concentrating

  1. MODELING PLANETARY SYSTEM FORMATION WITH N-BODY SIMULATIONS: ROLE OF GAS DISK AND STATISTICS COMPARED TO OBSERVATIONS

    Liu Huigen; Zhou Jilin; Wang Su

    2011-01-01

    During the late stage of planet formation, when Mars-sized cores appear, interactions among planetary cores can excite their orbital eccentricities, accelerate their merging, and thus sculpt their final orbital architecture. This study contributes to the final assembling of planetary systems with N-body simulations, including the type I or II migration of planets and gas accretion of massive cores in a viscous disk. Statistics on the final distributions of planetary masses, semimajor axes, and eccentricities are derived and are comparable to those of the observed systems. Our simulations predict some new orbital signatures of planetary systems around solar-mass stars: 36% of the surviving planets are giant planets (>10 M⊕). Most of the massive giant planets (>30 M⊕) are located at 1-10 AU. Terrestrial planets are distributed more or less evenly at smaller semimajor axes, some in highly eccentric orbits (e > 0.3-0.4). The average eccentricity (~0.15) of the giant planets (>10 M⊕) is greater than that (~0.05) of the terrestrial planets (≤10 M⊕).

  2. Investigation of a sample of carbon-enhanced metal-poor stars observed with FORS and GMOS

    Caffau, E.; Gallagher, A. J.; Bonifacio, P.; Spite, M.; Duffau, S.; Spite, F.; Monaco, L.; Sbordone, L.

    2018-06-01

    Aims: Carbon-enhanced metal-poor (CEMP) stars represent a sizeable fraction of all known metal-poor stars in the Galaxy. Their formation and composition remains a significant topic of investigation within the stellar astrophysics community. Methods: We analysed a sample of low-resolution spectra of 30 dwarf stars, obtained using the visual and near UV FOcal Reducer and low dispersion Spectrograph for the Very Large Telescope (FORS/VLT) of the European Southern Observatory (ESO) and the Gemini Multi-Object Spectrographs (GMOS) at the GEMINI telescope, to derive their metallicity and carbon abundance. Results: We derived C and Ca from all spectra, and Fe and Ba from the majority of the stars. Conclusions: We have extended the population statistics of CEMP stars and have confirmed that in general, stars with a high C abundance belonging to the high C band show a high Ba-content (CEMP-s or -r/s), while stars with a normal C abundance or that are C-rich, but belong to the low C band, are normal in Ba (CEMP-no). Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 099.D-0791.Based on observations obtained at the Gemini Observatory (processed using the Gemini IRAF package), which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil).Tables 1 and 2 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/614/A68

  3. Spectral properties of blazars. I. Objects observed in the far-ultraviolet. II. An X-ray observed sample

    Ghisellini, G.; Maraschi, L.; Treves, A.; Tanzi, E. G. (Milano Università, Italy; CNR, Istituto di Fisica Cosmica, Milan, Italy)

    1986-01-01

    All blazars observed with the IUE are studied and shown to form a well-defined subgroup according to their spectral properties. These properties are discussed with respect to theoretical models and are compared with those of quasars. Radio, ultraviolet, and X-ray fluxes are used to construct composite spectral indices, and systematic differences between X-ray selected and otherwise selected objects are discussed. It is confirmed that X-ray selected objects have flatter overall spectra, and are therefore weaker radio emitters relative to their X-ray emission than objects selected otherwise. It is found that X-ray selected blazars have the same average X-ray luminosity as blazars selected otherwise and are underluminous at UV and radio frequencies. This finding is used to argue that the radio-weak, X-ray selected BL Lac objects are, in terms of space density, the dominant members of the blazar population. The results are interpreted in the framework of synchrotron emission models involving relativistic plasma jets. 134 references

  4. Spectral properties of blazars. I. Objects observed in the far-ultraviolet. II. An X-ray observed sample

    Ghisellini, G.; Maraschi, L.; Treves, A.; Tanzi, E. G.

    1986-11-01

    All blazars observed with the IUE are studied and shown to form a well-defined subgroup according to their spectral properties. These properties are discussed with respect to theoretical models and are compared with those of quasars. Radio, ultraviolet, and X-ray fluxes are used to construct composite spectral indices, and systematic differences between X-ray selected and otherwise selected objects are discussed. It is confirmed that X-ray selected objects have flatter overall spectra, and are therefore weaker radio emitters relative to their X-ray emission than objects selected otherwise. It is found that X-ray selected blazars have the same average X-ray luminosity as blazars selected otherwise and are underluminous at UV and radio frequencies. This finding is used to argue that the radio-weak, X-ray selected BL Lac objects are, in terms of space density, the dominant members of the blazar population. The results are interpreted in the framework of synchrotron emission models involving relativistic plasma jets. 134 references.

  5. Observations related to hydrogen in powder and single crystal samples of YBa2Cu3O7-δ

    Porath, D.; Grayevsky, A.; Kaplan, N.; Shaltiel, D.; Yaron, U.; Walker, E.

    1994-01-01

    New observations related to the hydrogenation of YBa2Cu3O7-δ (YBCO) are reported: (a) The effect of sample preparation on the H concentration in "uncharged" YBCO samples is investigated, and it is shown through nuclear magnetic resonance measurements that samples of YBCO prepared by "standard" solid-state reaction procedures may contain ab initio up to 0.2 H atoms per formula unit. (b) It is demonstrated that one may introduce up to 0.3 H atoms per formula unit into single-crystal samples of YBCO without destroying the macroscopic crystal. The significance of the above observations is discussed briefly. (orig.)

  6. The statistical properties of spread F observed at Hainan station during the declining period of the 23rd solar cycle

    G. J. Wang

    2010-06-01

    The temporal variations of the low-latitude nighttime spread F (SF) observed by the DPS-4 digisonde at the low-latitude Hainan station (geog. 19.5° N, 109.1° E; dip lat. 9.5° N) during the declining solar cycle 23, from March 2002 to February 2008, are studied. The spread F measured by the digisonde was classified into four types: frequency SF (FSF), range SF (RSF), mixed SF (MSF), and strong range SF (SSF). The statistical results show that MSF and SSF are the outstanding irregularities in Hainan; MSF mainly occurs during summer and low solar activity years, whereas SSF mainly occurs during equinoxes and high solar activity years. SSF has a diurnal peak before midnight and usually appears during 20:00-02:00 LT, whereas MSF peaks near or after midnight and occurs during 22:00-06:00 LT. The time of maximum occurrence of SSF is later in summer than in the equinoxes, a delay that can be caused by the later reversal time of the E×B drift in summer. The sunspot number (SSN) dependence of each SF type differs by season. FSF is independent of SSN in every season; RSF is positively correlated with SSN during the equinoxes and summer and shows no relationship in winter; MSF depends significantly on SSN during summer and winter and is unrelated to SSN during the equinoxes; SSF clearly increases with SSN during the equinoxes and summer, while it is independent of SSN in winter. The occurrence numbers of each SF type and of total SF follow the same trend, increasing as Kp increases from 0 to 1 and then decreasing with further increases in Kp. The correlation with Kp is negative for RSF, MSF, SSF, and total SF, but unclear for FSF.

  7. Forecasting Japanese encephalitis incidence from historical morbidity patterns: Statistical analysis with 27 years of observation in Assam, India.

    Handique, Bijoy K; Khan, Siraj A; Mahanta, J; Sudhakar, S

    2014-09-01

    Japanese encephalitis (JE) is one of the dreaded mosquito-borne viral diseases, mostly prevalent in south Asian countries including India. Early warning of the disease in terms of disease intensity is crucial for taking adequate and appropriate intervention measures. The present study was carried out in Dibrugarh district in the state of Assam, located in the northeastern region of India, to assess the accuracy of selected forecasting methods based on historical morbidity patterns of JE incidence during the past 22 years (1985-2006). Four selected forecasting methods, viz. seasonal average (SA), seasonal adjustment with the last three observations (SAT), a modified method adjusting long-term and cyclic trend (MSAT), and autoregressive integrated moving average (ARIMA), were employed to assess the accuracy of each of the forecasting methods. The forecasting methods were validated for five consecutive years from 2007 to 2012, and the accuracy of each method was assessed. The method utilising seasonal adjustment with long-term and cyclic trend emerged as the best of the four selected forecasting methods and outperformed even the statistically more advanced ARIMA method. The peak of disease incidence could be predicted effectively with all the methods, but there are significant variations in the magnitude of forecast errors among the selected methods. As expected, variation in forecasts at the primary health centre (PHC) level is wide compared with that of district-level forecasts. The study showed that the adopted forecasting techniques could reasonably forecast the intensity of JE cases at the PHC level without considering external variables. The results indicate that an understanding of the long-term and cyclic trend of disease intensity will improve the accuracy of the forecasts, but there is a need to make the forecast models more robust to explain sudden variation in disease intensity with detailed analysis of parasite and host population
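
    As a hedged sketch of the simplest methods compared above (our reading of the method names; the paper's exact definitions may differ), the seasonal average forecasts each month as the mean of that calendar month over past years, and a seasonal-adjustment variant rescales it by how the most recent observations compare with their own seasonal means:

      import numpy as np

      def seasonal_average_forecast(history):
          """history: (n_years, 12) array of monthly case counts.
          Returns a 12-month forecast for the coming year (SA-style)."""
          return history.mean(axis=0)

      def seasonally_adjusted_forecast(history, recent):
          """Scale the seasonal average by the ratio of the last k observed
          months (`recent`) to their seasonal means (SAT-style); assumes
          `recent` covers the last k calendar months of the cycle."""
          base = history.mean(axis=0)
          k = len(recent)                       # typically the last 3 months
          ratio = recent.sum() / base[-k:].sum()
          return base * ratio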

  8. Small Sample Properties of the Wilcoxon Signed Rank Test with Discontinuous and Dependent Observations

    Nadine Chlass; Jens J. Krueger

    2007-01-01

    This Monte Carlo study investigates the sensitivity of the Wilcoxon signed rank test to certain assumption violations in small samples. Emphasis is put on within-sample dependence, between-sample dependence, and the presence of ties. Our results show that both assumption violations induce severe size distortions and entail power losses. Surprisingly, these consequences vary substantially with other properties the data may display. The results provided are particularly relevant for experimental set...
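
    A hedged sketch of the kind of size experiment such a study runs (our illustration; the paper's data-generating processes and parameters are richer): estimate the empirical size of the signed rank test at a nominal 5% level when within-sample dependence, here an AR(1) process, violates the independence assumption.

      import numpy as np
      from scipy.stats import wilcoxon

      def empirical_size(n=15, rho=0.5, n_rep=2000, alpha=0.05,
                         rng=np.random.default_rng(1)):
          rejections = 0
          for _ in range(n_rep):
              e = rng.standard_normal(n)
              x = np.empty(n)
              x[0] = e[0]
              for t in range(1, n):            # AR(1) draws: dependent, zero median
                  x[t] = rho * x[t - 1] + e[t]
              if wilcoxon(x).pvalue < alpha:
                  rejections += 1
          return rejections / n_rep            # near 0.05 only if size is correct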

  9. Radio Observations of the S5 Sample

    Jun Liu; Xiang Liu

    density in flux monitoring, standard deviation, the modulation index, the variability amplitude, and the reduced χ². The last three columns (m, Y, χ²_red) are used to judge the degree of variability, and the definitions of these parameters are described in Kraus et al. (2003). We studied the statistics of the variability and ...

  10. Continuous quality control of the blood sampling procedure using a structured observation scheme

    Seemann, Tine Lindberg; Nybo, Mads

    2016-01-01

    INTRODUCTION: An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. MATERIALS AND METHODS: The questionnaire...

  11. The novel programmable riometer for in-depth ionospheric and magnetospheric observations (PRIAMOS) using direct sampling DSP techniques

    Dekoulis, G.; Honary, F.

    2005-01-01

    This paper describes the feasibility study and simulation results for the unique multi-frequency, multi-bandwidth Programmable Riometer for In-depth Ionospheric And Magnetospheric ObservationS (PRIAMOS), based on direct sampling digital signal processing (DSP) techniques. This novel architecture is based on sampling the cosmic noise wavefront at the antenna. It eliminates the use of any intermediate frequency (IF) mixer stages (-6 dB) and the noise balancing technique (-3 dB), providing a m...

  12. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
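
    A minimal Python sketch of the two-regime idea behind KSTEST (ours, not the original FORTRAN): compute the one-sample Kolmogorov-Smirnov statistic against a hypothesized CDF, then take the significance level from Smirnov's asymptotic series for large samples, or from an exact finite-sample distribution standing in for Birnbaum's tables otherwise.

      import math
      import numpy as np
      from scipy.stats import kstwo  # exact small-sample distribution of D_n

      def ks_statistic(data, cdf):
          """One-sample KS statistic D_n = sup_x |F_n(x) - F(x)|."""
          x = np.sort(np.asarray(data))
          n = len(x)
          f = cdf(x)
          d_plus = np.max(np.arange(1, n + 1) / n - f)
          d_minus = np.max(f - np.arange(0, n) / n)
          return max(d_plus, d_minus)

      def smirnov_asymptotic_sf(d, n, terms=100):
          """P(D_n > d) from Smirnov's asymptotic series (upper tail, large n)."""
          t = d * math.sqrt(n)
          s = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * t * t)
                        for k in range(1, terms + 1))
          return min(1.0, max(0.0, s))

      def ks_significance(d, n):
          if n > 80:                   # large-sample branch, as in KSTEST
              return smirnov_asymptotic_sf(d, n)
          return kstwo.sf(d, n)        # exact finite-n branch

    For example, ks_significance(ks_statistic(data, lambda x: x), len(data)) tests data against the standard uniform distribution.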

  13. Statistical analysis of fluoride levels in human urine and drinking water samples of a fluorinated area of Punjab (Pakistan)

    Qayyum, M.; Zaman, W.U.; Rehman, R.; Ahmad, B.; Ahmad, M.; Ali, S.; Murtaza, S

    2013-01-01

    Increasing fluoride levels in the drinking water of fluorinated areas of the world lead to fluorosis. For bio-monitoring of fluorosis patients, fluoride levels were determined in drinking water and human urine samples of different individuals having dental fluorosis and bony deformities from a fluorotic area of Punjab (Sham Ki Bhatiyan, Pakistan) and then compared with reference samples from a non-fluorotic area (Queens Road, Lahore, Pakistan) using ion selective electrode methodology. Fluoride levels in the fluorinated area differ significantly from the control group (p < 0.05). In drinking water and human urine samples, fluoride levels in the fluorinated area were 136.192 ± 67.836 and 94.484 ± 36.572 µmol L^-1, respectively, whereas in the control samples, fluoride concentrations were 19.306 ± 2.109 and 47.154 ± 22.685 µmol L^-1 in water and urine samples, correspondingly. Pearson's correlation data indicated that human urine and water fluoride concentrations have a significant positive dose-response relationship with the prevalence of dental and skeletal fluorosis in fluorotic areas having higher fluoride levels in drinking water. (author)

  14. Primary Dendrite Array Morphology: Observations from Ground-based and Space Station Processed Samples

    Tewari, Surendra; Rajamure, Ravi; Grugel, Richard; Erdmann, Robert; Poirier, David

    2012-01-01

    The influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, "Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST)". Two aluminum-7 wt pct silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K/cm (MICAST6) and 28 K/cm (MICAST7). Directional solidification involved a growth-speed step increase (MICAST6: from 5 to 50 µm/s) and a speed decrease (MICAST7: from 20 to 10 µm/s). The distribution and morphology of primary dendrites are currently being characterized in these samples, and also in samples solidified on Earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented.

  15. Hail statistics in Western Europe based on a hybrid cell-tracking algorithm combining radar signals with hailstone observations

    Fluck, Elody

    2015-04-01

    With hail damage estimated at billions of euros for a single event (e.g., hailstorm Andreas on 27/28 July 2013), hail constitutes one of the major atmospheric risks in various parts of Europe. The project HAMLET (Hail Model for Europe), in cooperation with the insurance company Tokio Millennium Re, aims at estimating hail probability, hail hazard and, combined with vulnerability, hail risk for several European countries (Germany, Switzerland, France, the Netherlands, Austria, Belgium, and Luxembourg). Hail signals are obtained from radar reflectivity, since this proxy is available with high temporal and spatial resolution. The focus in the first step is on Germany and France for the periods 2005-2013 and 1999-2013, respectively; in the next step, the methods will be transferred and extended to other regions. The cell-tracking algorithm TRACE2D was adjusted and applied to two-dimensional radar reflectivity data from different radars operated by European weather services such as the German Weather Service (DWD) and the French weather service (Météo-France). Strong convective cells are detected as three or more connected pixels above 45 dBZ (reflectivity cores, RCs) in a radar scan. The algorithm then tries to find the same RCs in the next 5-minute radar scan and thus tracks the RC centers over time and space. Additional information about hailstone diameters provided by the ESWD (European Severe Weather Database) is used to determine the hail intensity of the detected hail swaths. Maximum hailstone diameters are interpolated along and close to the individual hail tracks, giving an estimate of mean diameters for the detected hail swaths. Furthermore, a stochastic event set is created by randomizing the parameters obtained from the
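
    A minimal sketch (ours, not the TRACE2D code) of the core-detection step the abstract describes: flag reflectivity cores as groups of at least three connected pixels above 45 dBZ in a 2-D radar scan and return their centers, so that successive 5-minute scans can be matched core-by-core.

      import numpy as np
      from scipy import ndimage

      def reflectivity_cores(scan_dbz, threshold=45.0, min_pixels=3):
          """Return center-of-mass coordinates of connected cores above threshold."""
          mask = scan_dbz > threshold
          labels, n = ndimage.label(mask)       # 4-connected components
          centers = []
          for i in range(1, n + 1):
              if (labels == i).sum() >= min_pixels:
                  centers.append(ndimage.center_of_mass(mask, labels, i))
          return centers  # match cores between scans by nearest-center distance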

  16. Continuous quality control of the blood sampling procedure using a structured observation scheme

    Seemann, T. L.; Nybo, M.

    2015-01-01

    …All observations were performed by the same person (TLS). Results: Already after three months, critical issues could be pinpointed where corrective or educational steps are necessary, for example hand hygiene. However, at the meeting we will be able to present results from a six-month observation period...

  17. Long term observation on absolute lymphocyte counts in the adult health study sample, Hiroshima and Nagasaki

    Oesterle, S.N.; Norman, J.E. Jr.

    1980-01-01

    Total peripheral blood lymphocytes were evaluated by age and exposure status in the Adult Health Study population during three examination cycles between 1958 and 1972. No radiation effect was observed, but a significant drop in the absolute lymphocyte counts of those aged 70 years and over and a corresponding maximum for persons aged 50-59 was observed. (author)

  18. 7 CFR 52.38c - Statistical sampling procedures for lot inspection of processed fruits and vegetables by attributes.

    2010-01-01

    ... FRUITS AND VEGETABLES, PROCESSED PRODUCTS THEREOF, AND CERTAIN OTHER PROCESSED FOOD PRODUCTS ... Section 52.38c, Agriculture Regulations: Statistical sampling procedures for lot inspection of processed fruits and vegetables by attributes. (a) General. Single sampling plans shall be used...

  19. Statistical properties of mean stand biomass estimators in a LIDAR-based double sampling forest survey design.

    H.E. Anderson; J. Breidenbach

    2007-01-01

    Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...
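
    For orientation, the two-phase (double-sampling) regression estimator such designs rely on can be written as follows (standard survey-sampling notation, ours rather than the paper's): with a large first-phase sample measuring only the LIDAR metric $x$ and a small second-phase subsample with field-measured biomass $y$,

      $\hat{\bar{y}}_{\mathrm{reg}} = \bar{y}_2 + \hat{\beta}\,(\bar{x}_1 - \bar{x}_2),$

    where $\bar{x}_1$ is the first-phase mean, $\bar{x}_2$ and $\bar{y}_2$ are subsample means, and $\hat{\beta}$ is the fitted slope of biomass on the LIDAR metric; the stronger the correlation, the larger the efficiency gain over a single-stage field survey.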

  20. Statistics 101 for Radiologists.

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
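
    As a hedged illustration of the diagnostic-test measures the review defines (our sketch, not the article's code), all of them follow from the four cells of a 2x2 confusion matrix:

      def diagnostic_measures(tp, fp, fn, tn):
          """Compute test metrics from true/false positive and negative counts."""
          sens = tp / (tp + fn)                 # sensitivity: P(test+ | disease+)
          spec = tn / (tn + fp)                 # specificity: P(test- | disease-)
          acc = (tp + tn) / (tp + fp + fn + tn)
          return {"sensitivity": sens, "specificity": spec, "accuracy": acc,
                  "LR+": sens / (1 - spec),     # positive likelihood ratio
                  "LR-": (1 - sens) / spec}     # negative likelihood ratio

      # e.g. 90 true positives, 10 false negatives, 20 false positives, 80 true negatives
      print(diagnostic_measures(tp=90, fp=20, fn=10, tn=80))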

  1. Continuous quality control of the blood sampling procedure using a structured observation scheme

    Lindberg Seemann, Tine; Nybo, Mads

    2016-01-01

    INTRODUCTION: An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. MATERIALS AND METHODS: The questionnaire from the EFLM Working Group for the Preanalytical Phase was adapted to local procedures. A pilot study of three months' duration was conducted. Based on this, corrective actions were implemented and a ...

  2. Judging statistical models of individual decision making under risk using in- and out-of-sample criteria.

    Drichoutis, Andreas C; Lusk, Jayson L

    2014-01-01

    Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample.

  3. Judging statistical models of individual decision making under risk using in- and out-of-sample criteria.

    Andreas C Drichoutis

    Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample.

  4. Individualized Sampling Parameters for Behavioral Observations: Enhancing the Predictive Validity of Competing Stimulus Assessments

    DeLeon, Iser G.; Toole, Lisa M.; Gutshall, Katharine A.; Bowman, Lynn G.

    2005-01-01

    Recent studies have used pretreatment analyses, termed competing stimulus assessments, to identify items that most effectively displace the aberrant behavior of individuals with developmental disabilities. In most studies, there appeared to have been no systematic basis for selecting the sampling period (ranging from 30 s to 10 min) in which items…

  5. Dependability of Data Derived from Time Sampling Methods with Multiple Observation Targets

    Johnson, Austin H.; Chafouleas, Sandra M.; Briesch, Amy M.

    2017-01-01

    In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in…

  6. A large sample of Kohonen-selected SDSS quasars with weak emission lines: selection effects and statistical properties

    Meusinger, H.; Balafkan, N.

    2014-08-01

    Aims: A tiny fraction of the quasar population shows remarkably weak emission lines. Several hypotheses have been developed, but the weak line quasar (WLQ) phenomenon still remains puzzling. The aim of this study was to create a sizeable sample of WLQs and WLQ-like objects and to evaluate various properties of this sample. Methods: We performed a search for WLQs in the spectroscopic data from the Sloan Digital Sky Survey Data Release 7 based on Kohonen self-organising maps for nearly 10^5 quasar spectra. The final sample consists of 365 quasars in the redshift range z = 0.6-4.2 (mean z = 1.50 ± 0.45) and includes in particular a subsample of 46 WLQs with Mg II equivalent widths below the selection cut; particular attention was paid to selection effects. Results: The WLQs have, on average, significantly higher luminosities, Eddington ratios, and accretion rates. About half of the excess comes from a selection bias, but an intrinsic excess remains, probably caused primarily by higher accretion rates. The spectral energy distribution shows a bluer continuum at rest-frame wavelengths ≳1500 Å. The variability in the optical and UV is relatively low, even taking the variability-luminosity anti-correlation into account. The percentage of radio-detected quasars and of core-dominant radio sources is significantly higher than for the control sample, whereas the mean radio-loudness is lower. Conclusions: The properties of our WLQ sample can be consistently understood assuming that it consists of a mix of quasars at the beginning of a stage of increased accretion activity and of beamed radio-quiet quasars. The higher luminosities and Eddington ratios in combination with a bluer spectral energy distribution can be explained by hotter continua, i.e. higher accretion rates. If quasar activity consists of subphases with different accretion rates, a change towards a higher rate is probably accompanied by an only slow development of the broad line region. The composite WLQ spectrum can be reasonably matched by the

  7. Condenser: a statistical aggregation tool for multi-sample quantitative proteomic data from Matrix Science Mascot Distiller™.

    Knudsen, Anders Dahl; Bennike, Tue; Kjeldal, Henrik; Birkelund, Svend; Otzen, Daniel Erik; Stensballe, Allan

    2014-05-30

    We describe Condenser, a freely available, comprehensive open-source tool for merging multidimensional quantitative proteomics data from the Matrix Science Mascot Distiller Quantitation Toolbox into a common format ready for subsequent bioinformatic analysis. A number of different relative quantitation technologies, such as metabolic (15)N and amino acid stable isotope incorporation, label-free and chemical-label quantitation are supported. The program features multiple options for curative filtering of the quantified peptides, allowing the user to choose data quality thresholds appropriate for the current dataset, and ensure the quality of the calculated relative protein abundances. Condenser also features optional global normalization, peptide outlier removal, multiple testing and calculation of t-test statistics for highlighting and evaluating proteins with significantly altered relative protein abundances. Condenser provides an attractive addition to the gold-standard quantitative workflow of Mascot Distiller, allowing easy handling of larger multi-dimensional experiments. Source code, binaries, test data set and documentation are available at http://condenser.googlecode.com/.

  8. Groundwater sampling from shallow boreholes (PP and PR) and groundwater observation tubes (PVP) at Olkiluoto in 2004

    Hirvonen, H. [Teollisuuden Voima Oyj, Eurajoki (Finland)

    2005-11-15

    Groundwater sampling from the shallow boreholes and groundwater observation tubes was performed in summer 2004 (PP2, PP3, PP7, PP8, PR1, PVP1, PVP3A, PVP3B, PVP4A and PVP4B) and in autumn 2004 (PP2, PP3, PP5, PP7, PP8, PP9, PP36, PP37, PP39, PR1, PR2, PVP1, PVP3A, PVP3B, PVP4A, PVP8A, PVP9A, PVP9B, PVP10B, PVP11, PVP12, PVP13, PVP14 and PVP20). The results from previous samplings have been used in the hydrogeochemical baseline characterization at Olkiluoto, and some of the latest results have also been part of the ONKALO monitoring program. This study contains data on preliminary pumping of the sampling points, pumping for groundwater sampling, and chemical analyses in the laboratory. This study also includes a comparison with analytical results obtained between 1995 and 2004. The total dissolved solids (TDS) of the groundwater samples were mainly below 1000 mg/L. According to Davis's TDS classification, these waters were fresh waters. The only exception was the water sample from shallow borehole PP7 (1400 mg/L and 1450 mg/L), which was brackish. Several different groundwater types were observed, but the most common water type was Ca-HCO3 (five samples). Analytical results from 1995-2003 were compared. During 2001-2003, in groundwater samples from sampling points PVP1, PVP9A and PP7 all measured main parameters changed considerably, but from summer 2003 to autumn 2004 the greatest alterations occurred in PR2, PVP1, PVP3A and PVP3B waters. These changes can be seen in almost all parameters. For other samples, only minor changes in results were observed during the reference period. (orig.)

  9. Preliminary observations on the metal content in some milk samples from an acid geoenvironment

    Alhonen, P.

    1997-12-01

    Full Text Available The metal content of some milk samples was analyzed from areas of acid sulphate soils along the course of the river Kyrönjoki in western Finland. Comparative analyses were made with samples from the Artjärvi-Porlammi area. The variations of the analyzed metals Al, Ba, Ca, Cr, Cu, Fe, K, Mg, Mo, Na, Sr and Zn are not great in either area, except for Al, which is clearly associated with the acid environment in the Kyrönjoki valley. The proportions of these elements in milk are relatively high compared with data from the literature. It is obvious that they show environmental contamination. Under acid conditions, the metals in milk may create serious geomedical problems.

  10. Dynamical "in situ" observation of biological samples using variable pressure scanning electron microscope

    Neděla, Vilém

    2008-01-01

    Vol. 126 (2008), 012046:1-4, ISSN 1742-6588. [Electron Microscopy and Analysis Group Conference 2007 (EMAG 2007). Glasgow, 03.09.2007-07.09.2007] R&D Projects: GA ČR(CZ) GA102/05/0886; GA AV ČR KJB200650602 Institutional research plan: CEZ:AV0Z20650511 Keywords: biological sample * VP-SEM * dynamical experiments Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

  11. Methane hydrate distribution from prolonged and repeated formation in natural and compacted sand samples: X-ray CT observations

    Rees, E.V.L.; Kneafsey, T.J.; Seol, Y.

    2010-07-01

    To study physical properties of methane gas hydrate-bearing sediments, it is necessary to synthesize laboratory samples due to the limited availability of cores from natural deposits. X-ray computed tomography (CT) and other observations have shown gas hydrate to occur in a number of morphologies over a variety of sediment types. To aid in understanding formation and growth patterns of hydrate in sediments, methane hydrate was repeatedly formed in laboratory-packed sand samples and in a natural sediment core from the Mount Elbert Stratigraphic Test Well. CT scanning was performed during hydrate formation and decomposition steps, and periodically while the hydrate samples remained under stable conditions for up to 60 days. The investigation revealed the impact of water saturation on location and morphology of hydrate in both laboratory and natural sediments during repeated hydrate formations. Significant redistribution of hydrate and water in the samples was observed over both the short and long term.

  12. Maxima estimate of non gaussian process from observation of time history samples

    Borsoi, L.

    1987-01-01

    The problem constitutes a formidable task but is essential for industrial applications: extreme value design, fatigue analysis, etc. Even in the linear Gaussian case, the ergodicity of the process does not prevent the observation duration needing to be long enough to make reliable estimates; as is well known, this duration is closely related to the process autocorrelation. A subterfuge, which distorts the problem a little, consists in considering a periodic random process and adjusting the observation duration to a complete period. In the nonlinear case, the stated problem is all the more important as time-history simulation is presently the only practicable way of analysing structures. Thus it is always interesting to fit a tractable model to raw time-history observations. In some cases this can be done with a Gumbel-Poisson model. Then the difficulty is to make reliable estimates of the parameters involved in the model. Unfortunately, it seems that even the use of sophisticated Bayesian methods does not permit reducing the necessary observation duration as much as desired. One of the difficulties lies in process ergodicity, which is often assumed on physical grounds but is not always rigorously established. Another difficulty is the confusion between hidden information, which can be extracted, and missing information, which cannot. Finally, it must be recalled that the obligation to consider sufficiently long time histories is not always a burden, given current reductions in computing costs. (orig./HP)
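
    A tractable extreme-value model of the kind mentioned above can be fitted directly to block maxima extracted from a time history. The sketch below is a minimal illustration, not the paper's Gumbel-Poisson procedure: the AR(1) surrogate process, the block length, and the quantile level are all assumed for the example.

```python
# Minimal sketch: fit a Gumbel distribution to block maxima of a simulated
# time history. The AR(1) process and block length are illustrative assumptions.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)

# Illustrative stationary (near-Gaussian) time history: an AR(1) process.
n, phi = 200_000, 0.9
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Block maxima: one maximum per "period" of the record.
block = 2_000
maxima = x[: n // block * block].reshape(-1, block).max(axis=1)

# Fit the Gumbel model and estimate an extreme quantile (e.g. 99%).
loc, scale = gumbel_r.fit(maxima)
print("99% quantile of the block maximum:", gumbel_r.ppf(0.99, loc, scale))
```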

  13. Planetary Candidates Observed by Kepler IV: Planet Sample from Q1-Q8 (22 Months)

    Burke, Christopher J.; Christensen, Jessie L.; Ciardi, David R.; Morton, Timothy D.; Shporer, Avi

    2014-01-01

    We provide updates to the Kepler planet candidate sample based upon nearly two years of high-precision photometry (i.e., Q1-Q8). From an initial list of nearly 13,400 threshold crossing events, 480 new host stars are identified from their flux time series as consistent with hosting transiting planets. Potential transit signals are subjected to further analysis using the pixel-level data, which allows background eclipsing binaries to be identified through small image position shifts during tra...

  14. Multiparametric statistics

    Serdobolskii, Vadim Ivanovich

    2007-01-01

    This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size and possibly much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters. This theory opens a way to the solution of central problems of multivariate statistics, which until now have not been solved. Traditional statistical methods based on the idea of an infinite sample often break down in the solution of real problems and, depending on the data, can be inefficient, unstable and even inapplicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...

  15. High-resolution observations of quasars from the Parkes ±4° sample

    Booth, R.S.; Spencer, R.E.; Stannard, D.; Baath, L.B.

    1979-01-01

    VLBI observations of 20 compact quasars have been made between Jodrell Bank and Onsala at a frequency of 1666 MHz. Twelve of the quasars have inverted or peaked spectra at centimetre wavelengths and these are all unresolved, having angular diameters of < 0.015 arcsec. Two out of five quasars with overall flat spectra are partially resolved on this scale size, as are three steep-spectrum quasars. (author)

  16. A statistical rationale for establishing process quality control limits using fixed sample size, for critical current verification of SSC superconducting wire

    Pollock, D.A.; Brown, G.; Capone, D.W. II; Christopherson, D.; Seuntjens, J.M.; Woltz, J.

    1992-01-01

    This work has demonstrated the statistical concepts behind the XBAR-R method for determining sample limits to verify billet I_c performance and process uniformity. Using a preliminary population estimate for μ and σ from a stable production lot of only 5 billets, we have shown that reasonable sensitivity to systematic process drift and random within-billet variation may be achieved by using per-billet subgroup sizes of moderate proportions. The effects of subgroup size (n) and sampling risk (α and β) on the calculated control limits have been shown to be important factors that need to be carefully considered when selecting the actual number of measurements to be used per billet, for each supplier process. Given the present method of testing, in which individual wire samples are ramped to I_c only once, with measurement uncertainty due to repeatability and reproducibility (typically > 1.4%), large subgroups (i.e. > 30 per billet) appear to be unnecessary, except as an inspection tool to confirm wire process history for each spool. The introduction of the XBAR-R method or a similar Statistical Quality Control procedure is recommended for use in the superconducting wire production program, particularly when the program transitions from requiring tests for all pieces of wire to sampling each production unit
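
    For readers unfamiliar with the XBAR-R construction, the sketch below computes the usual X-bar and R control limits from per-billet subgroups of critical-current measurements. It is a generic control-chart illustration on assumed data, not the SSC verification procedure itself; the subgroup size n = 5 and the simulated measurement values are illustrative.

```python
# Generic X-bar/R control-limit sketch on simulated per-billet subgroups.
# Control-chart constants for subgroup size n = 5 are the standard tabulated
# values; the data are illustrative, not SSC wire results.
import numpy as np

A2, D3, D4 = 0.577, 0.0, 2.114  # standard constants for n = 5

rng = np.random.default_rng(1)
subgroups = rng.normal(loc=2750.0, scale=15.0, size=(20, 5))  # 20 billets x 5 samples

xbar = subgroups.mean(axis=1)                           # per-billet means
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)  # per-billet ranges

xbarbar, rbar = xbar.mean(), ranges.mean()
print("X-bar limits:", xbarbar - A2 * rbar, xbarbar + A2 * rbar)
print("R limits:   ", D3 * rbar, D4 * rbar)
```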

  17. Observed mass distribution of spontaneous fission fragments from samples of lime - an SSNTD study

    Paul, D; Ghose, D; Sastri, R C

    1999-01-01

    SSNTDs are among the most commonly used detectors in studies involving nuclear phenomena. The ease with which they register alpha particles and fission fragments makes them particularly suitable for studies where stable, long exposures are needed to extract reliable information. Studies of alpha-emitting nuclides in the environment are important because such nuclides are carcinogenic. Lime samples from Silchar in Assam, in eastern India, have shown the presence of spontaneous fission fragments besides alphas. In the present study we look at the ratio of the average mass distribution of these fission fragments, which gives an indication of the presence of traces of transuranic elements.

  18. Superwind Outflow in Seyfert Galaxies? : Optical Observations of an Edge-On Sample

    Colbert, E.; Gallimore, J.; Baum, S.; O'Dea, C.; Lehnert, M.

    1994-12-01

    Large-scale galactic winds (superwinds) are commonly found flowing out of the nuclear region of ultraluminous infrared and powerful starburst galaxies. Stellar winds and supernovae from the nuclear starburst are thought to provide the energy to drive these superwinds. The outflowing gas escapes along the rotation axis, sweeping up and shock-heating clouds in the halo, which produces optical line emission, X-rays and radio synchrotron emission. These features can most easily be studied in edge-on systems, so that the wind emission is not confused with that from the disk. Diffuse radio emission has been found (Baum et al. 1993, ApJ, 419, 553) to extend out to kpc scales in a number of edge-on Seyfert galaxies. We have therefore launched a systematic search for superwind outflows in Seyferts. We present here narrow-band optical images and optical spectra for a sample of edge-on Seyferts. These data have been used to estimate the frequency of occurrence of superwinds. Approximately half of the sample objects show evidence for extended emission-line regions which are preferentially oriented perpendicular to the galaxy disk. It is possible that these emission-line regions may be energized by a superwind outflow from a circumnuclear starburst, although there may also be a contribution from the AGN itself. A goal of this work is to find a diagnostic that can be used to distinguish between large-scale outflows that are driven by starbursts and those that are driven by an AGN. The presence of starburst-driven superwinds in Seyferts, if established, would have important implications for the connection between starburst galaxies and AGN.

  19. Triacylglycerol Analysis in Human Milk and Other Mammalian Species: Small-Scale Sample Preparation, Characterization, and Statistical Classification Using HPLC-ELSD Profiles.

    Ten-Doménech, Isabel; Beltrán-Iturat, Eduardo; Herrero-Martínez, José Manuel; Sancho-Llopis, Juan Vicente; Simó-Alfonso, Ernesto Francisco

    2015-06-24

    In this work, a method for the separation of triacylglycerols (TAGs) present in human milk and from other mammalian species by reversed-phase high-performance liquid chromatography using a core-shell particle packed column with UV and evaporative light-scattering detectors is described. Under optimal conditions, a mobile phase containing acetonitrile/n-pentanol at 10 °C gave an excellent resolution among more than 50 TAG peaks. A small-scale method for fat extraction in these milks (particularly of interest for human milk samples) using minimal amounts of sample and reagents was also developed. The proposed extraction protocol and the traditional method were compared, giving similar results, with respect to the total fat and relative TAG contents. Finally, a statistical study based on linear discriminant analysis on the TAG composition of different types of milks (human, cow, sheep, and goat) was carried out to differentiate the samples according to their mammalian origin.
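
    To make the final classification step concrete, the sketch below runs a linear discriminant analysis on relative TAG profiles, in the spirit of the study's statistical comparison. The synthetic feature matrix, the number of TAG peaks, and the class labels are illustrative assumptions, not the paper's data.

```python
# Illustrative LDA classification of relative TAG profiles by mammalian
# origin. The feature matrix and labels are synthetic stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_per_class, n_tags = 12, 20
labels = np.repeat(["human", "cow", "sheep", "goat"], n_per_class)

# Relative TAG abundances: class-specific means plus noise, rows sum to 1.
X = np.abs(rng.normal(loc=1.0, scale=0.2, size=(4 * n_per_class, n_tags)))
X += np.repeat(rng.uniform(0, 0.8, size=(4, n_tags)), n_per_class, axis=0)
X /= X.sum(axis=1, keepdims=True)

lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, X, labels, cv=4).mean())
```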

  20. Observer-based output feedback control of networked control systems with non-uniform sampling and time-varying delay

    Meng, Su; Chen, Jie; Sun, Jian

    2017-10-01

    This paper investigates the problem of observer-based output feedback control for networked control systems with non-uniform sampling and time-varying transmission delay. The sampling intervals are assumed to vary within a given interval. The transmission delay belongs to a known interval. A discrete-time model is first established, which contains time-varying delay and norm-bounded uncertainties coming from non-uniform sampling intervals. It is then converted to an interconnection of two subsystems in which the forward channel is delay-free. The scaled small gain theorem is used to derive the stability condition for the closed-loop system. Moreover, the observer-based output feedback controller design method is proposed by utilising a modified cone complementary linearisation algorithm. Finally, numerical examples illustrate the validity and superiority of the proposed method.

  1. Independent assessment of matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) sample preparation quality: A novel statistical approach for quality scoring.

    Kooijman, Pieter C; Kok, Sander J; Weusten, Jos J A M; Honing, Maarten

    2016-05-05

    Preparation of samples according to an optimized method is crucial for accurate determination of polymer sample characteristics by Matrix-Assisted Laser Desorption Ionization (MALDI) analysis. Sample preparation conditions such as matrix choice, cationization agent, deposition technique or even the deposition volume should be chosen to suit the sample of interest. Many sample preparation protocols have been developed and employed, yet finding the optimal sample preparation protocol remains a challenge. Because an objective comparison between the results of diverse protocols is not possible, "gut-feeling" or "good enough" is often decisive in the search for an optimum. This implies that sub-optimal protocols are used, leading to a loss of mass spectral information quality. To address this problem a novel analytical strategy based on MALDI imaging and statistical data processing was developed in which eight parameters were formulated to objectively quantify the quality of sample deposition and optimal MALDI matrix composition and finally sum up to an overall quality score of the sample deposition. These parameters can be established in a fully automated way using commercially available mass spectrometry imaging instruments without any hardware adjustments. With the newly developed analytical strategy the highest quality MALDI spots were selected, resulting in more reproducible and more valuable spectra for PEG in a variety of matrices. Moreover, our method enables an objective comparison of sample preparation protocols for any analyte and opens up new fields of investigation by presenting MALDI performance data in a clear and concise way.

  2. Herschel and SCUBA-2 observations of dust emission in a sample of Planck cold clumps

    Juvela, Mika; He, Jinhua; Pattle, Katherine; Liu, Tie; Bendo, George; Eden, David J.; Fehér, Orsolya; Fich, Michel; Fuller, Gary; Hirano, Naomi; Kim, Kee-Tae; Li, Di; Liu, Sheng-Yuan; Malinen, Johanna; Marshall, Douglas J.; Paradis, Deborah; Parsons, Harriet; Pelkonen, Veli-Matti; Rawlings, Mark G.; Ristorcelli, Isabelle; Samal, Manash R.; Tatematsu, Ken'ichi; Thompson, Mark; Traficante, Alessio; Wang, Ke; Ward-Thompson, Derek; Wu, Yuefang; Yi, Hee-Weon; Yoo, Hyunju

    2018-04-01

    Context. Analysis of all-sky Planck submillimetre observations and the IRAS 100 μm data has led to the detection of a population of Galactic cold clumps. The clumps can be used to study star formation and dust properties in a wide range of Galactic environments. Aims: Our aim is to measure dust spectral energy distribution (SED) variations as a function of the spatial scale and the wavelength. Methods: We examined the SEDs at large scales using IRAS, Planck, and Herschel data. At smaller scales, we compared JCMT/SCUBA-2 850 μm maps with Herschel data that were filtered using the SCUBA-2 pipeline. Clumps were extracted using the Fellwalker method, and their spectra were modelled as modified blackbody functions. Results: According to IRAS and Planck data, most fields have dust colour temperatures T_C ≈ 14-18 K and opacity spectral index values of β = 1.5-1.9. The clumps and cores identified in SCUBA-2 maps have T ≈ 13 K and similar β values. There are some indications of the dust emission spectrum becoming flatter at wavelengths longer than 500 μm. In fits involving Planck data, the significance is limited by the uncertainty of the corrections for CO line contamination. The fits to the SPIRE data give a median β value that is slightly above 1.8. In the joint SPIRE and SCUBA-2 850 μm fits, the value decreases to β ≈ 1.6. Most of the observed T-β anticorrelation can be explained by noise. Conclusions: The typical submillimetre opacity spectral index β of cold clumps is found to be ≈ 1.7. This is above the values of diffuse clouds, but lower than in some previous studies of dense clumps. There is only tentative evidence of a T-β anticorrelation and of β decreasing at millimetre wavelengths.

  3. On the Use of Biomineral Oxygen Isotope Data to Identify Human Migrants in the Archaeological Record: Intra-Sample Variation, Statistical Methods and Geographical Considerations.

    Emma Lightfoot

    Full Text Available Oxygen isotope analysis of archaeological skeletal remains is an increasingly popular tool to study past human migrations. It is based on the assumption that human body chemistry preserves the δ18O of precipitation in such a way as to be a useful technique for identifying migrants and, potentially, their homelands. In this study, the first such global survey, we draw on published human tooth enamel and bone bioapatite data to explore the validity of using oxygen isotope analyses to identify migrants in the archaeological record. We use human δ18O results to show that there are large variations in human oxygen isotope values within a population sample. This may relate to physiological factors influencing the preservation of the primary isotope signal, or to human activities (such as brewing, boiling, stewing, differential access to water sources and so on) causing variation in ingested water and food isotope values. We compare the number of outliers identified using various statistical methods. We determine that the most appropriate method for identifying migrants is dependent on the data but is likely to be the IQR or the median absolute deviation from the median under most archaeological circumstances. Finally, through a spatial assessment of the dataset, we show that the degree of overlap in human isotope values from different locations across Europe is such that identifying individuals' homelands on the basis of oxygen isotope analysis alone is not possible for the regions analysed to date. Oxygen isotope analysis is a valid method for identifying first-generation migrants from an archaeological site when used appropriately; however, it is difficult to identify migrants using statistical methods for a sample size of less than c. 25 individuals. In the absence of local previous analyses, each sample should be treated as an individual dataset and statistical techniques can be used to identify migrants, but in most cases pinpointing a specific …
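
    The two outlier screens favoured above are easy to state concretely: flag values outside the 1.5×IQR fences, or beyond k median absolute deviations from the median. The sketch below applies both to a toy set of δ18O values; the data and the k = 3 cut-off are illustrative assumptions.

```python
# IQR and MAD outlier screens on illustrative delta18O values.
import numpy as np

d18o = np.array([26.1, 26.5, 25.9, 26.3, 26.0, 26.4, 27.9, 26.2, 24.3, 26.6])

# IQR fences: anything outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is flagged.
q1, q3 = np.percentile(d18o, [25, 75])
iqr = q3 - q1
iqr_outliers = (d18o < q1 - 1.5 * iqr) | (d18o > q3 + 1.5 * iqr)

# MAD screen: |x - median| / (1.4826 * MAD) > k, with k = 3 assumed here.
med = np.median(d18o)
mad = np.median(np.abs(d18o - med))
mad_outliers = np.abs(d18o - med) / (1.4826 * mad) > 3  # 1.4826: Gaussian consistency

print("IQR flags:", np.where(iqr_outliers)[0])
print("MAD flags:", np.where(mad_outliers)[0])
```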

  4. Arctic-HYCOS: a Large Sample observing system for estimating freshwater fluxes in the drainage basin of the Arctic Ocean

    Pietroniro, Al; Korhonen, Johanna; Looser, Ulrich; Hardardóttir, Jórunn; Johnsrud, Morten; Vuglinsky, Valery; Gustafsson, David; Lins, Harry F.; Conaway, Jeffrey S.; Lammers, Richard; Stewart, Bruce; Abrate, Tommaso; Pilon, Paul; Sighomnou, Daniel; Arheimer, Berit

    2015-04-01

    The Arctic region is an important regulating component of the global climate system, and it has experienced considerable change during recent decades. More than 10% of the world's river runoff flows to the Arctic Ocean, and there is evidence of changes in its freshwater balance. However, about 30% of the Arctic basin is still ungauged, with differing monitoring practices and data availability among the countries in the region. A consistent system for monitoring and sharing hydrological information throughout the Arctic region is thus of the highest interest for further studies and monitoring of the freshwater flux to the Arctic Ocean. The purpose of the Arctic-HYCOS project is to allow for the collection and sharing of hydrological data. A preliminary set of 616 stations was identified with long-term daily discharge data available, and around 250 of these already provide data online in near real time. This large sample will be used in the following scientific analyses: 1) to evaluate the freshwater flux to the Arctic Ocean and Seas, 2) to monitor changes and enhance understanding of the hydrological regime, and 3) to estimate flows in ungauged regions and develop models for enhanced hydrological prediction in the Arctic region. The project is intended as a component of the WMO (World Meteorological Organization) WHYCOS (World Hydrological Cycle Observing System) initiative, covering the area of the expansive transnational Arctic basin with participation from Canada, Denmark, Finland, Iceland, Norway, the Russian Federation, Sweden and the United States of America. The overall objective is to regularly collect, manage and share high-quality data from a defined basic network of hydrological stations in the Arctic basin. The project focuses on collecting data on discharge and possibly sediment transport and temperature. Data should be provisional in near-real time if available, whereas time series of historical data should be provided once quality assurance has been completed. The …

  5. The Investigation of Construct Validity of Diagnostic and Statistical Manual of Mental Disorder-5 Personality Traits on Iranian sample with Antisocial and Borderline Personality Disorders.

    Amini, Mehdi; Pourshahbaz, Abbas; Mohammadkhani, Parvaneh; Ardakani, Mohammad-Reza Khodaie; Lotfi, Mozhgan

    2014-12-01

    The goal of this study was to examine the construct validity of the diagnostic and statistical manual of mental disorders-5 (DSM-5) conceptual model of antisocial and borderline personality disorders (PDs). More specifically, the aim was to determine whether the DSM-5 five-factor structure of pathological personality trait domains replicated in an independently collected sample that differs culturally from the derivation sample. The study was conducted on a sample of 346 individuals with antisocial (n = 122) and borderline (n = 130) PDs, and nonclinical subjects (n = 94). Participants were randomly selected from prisoners, out-patient and in-patient clients, recruited from Tehran prisons and from the clinical psychology and psychiatry clinics of Razi and Taleghani Hospitals, Tehran, Iran. The SCID-II-PQ, the SCID-II, and the DSM-5 Personality Trait Rating Form (Clinician's PTRF) were used to diagnose PDs and to assess pathological traits. The data were analyzed by exploratory factor analysis, which revealed a 5-factor solution for the DSM-5 personality traits. Results showed that the DSM-5 has adequate construct validity in an Iranian sample with antisocial and borderline PDs. The factors were similar in number to those found in other studies, but differed in content. Exploratory factor analysis revealed five homogeneous components of antisocial and borderline PDs, which may represent personality, behavioral, and affective features central to the disorders. Furthermore, the present study helps in understanding the adequacy of the DSM-5 dimensional approach to the evaluation of personality pathology, specifically in an Iranian sample.

  7. Towards the harmonization between National Forest Inventory and Forest Condition Monitoring. Consistency of plot allocation and effect of tree selection methods on sample statistics in Italy.

    Gasparini, Patrizia; Di Cosmo, Lucio; Cenni, Enrico; Pompei, Enrico; Ferretti, Marco

    2013-07-01

    In the frame of a process aiming at harmonizing the National Forest Inventory (NFI) and the ICP Forests Level I Forest Condition Monitoring (FCM) in Italy, we investigated (a) the long-term consistency between FCM sample points (a subsample of the first NFI, 1985, NFI_1) and recent forest area estimates (after the second NFI, 2005, NFI_2) and (b) the effect of the tree selection method (tree-based or plot-based) on sample composition and defoliation statistics. The two investigations were carried out on 261 and 252 FCM sites, respectively. Results show that some individual forest categories (larch and stone pine, Norway spruce, other coniferous, beech, temperate oaks and cork oak forests) are over-represented and others (hornbeam and hophornbeam, other deciduous broadleaved and holm oak forests) are under-represented in the FCM sample. This is probably due to a change in forest cover, which has increased by 1,559,200 ha from 1985 to 2005. In the case of a shift from a tree-based to a plot-based selection method, 3,130 (46.7%) of the original 6,703 sample trees will be abandoned, and 1,473 new trees will be selected. The balance between the exclusion of former sample trees and the inclusion of new ones will be particularly unfavourable for conifers (with only 16.4% of excluded trees replaced by new ones) and less so for deciduous broadleaves (with 63.5% of excluded trees replaced). The total number of tree species surveyed will not be impacted, while the number of trees per species will, and the resulting (plot-based) sample composition will have a much larger frequency of deciduous broadleaved trees. The newly selected trees have, in general, smaller diameter at breast height (DBH) and defoliation scores. Given the larger rate of turnover, the deciduous broadleaved part of the sample will be more impacted. Our results suggest that both a revision of the FCM network to account for forest area change and a plot-based approach to permit statistical inference and avoid bias in the tree sample are needed.

  8. Statistics Clinic

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  9. Interplanetary scintillation observations of an unbiased sample of 90 Ooty occultation radio sources at 326.5 MHz

    Banhatti, D.G.; Ananthakrishnan, S.

    1989-01-01

    We present 327-MHz interplanetary scintillation (IPS) observations of an unbiased sample of 90 extragalactic radio sources selected from the ninth Ooty lunar occultation list. The sources are brighter than 0.75 Jy at 327 MHz and lie outside the galactic plane. We derive values of the fraction of scintillating flux density and of the equivalent Gaussian diameter for the scintillating structure. Various correlations are found between the observed parameters. In particular, the scintillating component weakens and broadens with increasing largest angular size, and stronger scintillators have more compact scintillating components. (author)

  10. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling, as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. In particular, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with SFR … For the first time, we show optimally sampled galaxy-wide IMFs (OSGIMFs) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided, allowing the calculation of the IGIMF and OSGIMF as functions of the galaxy-wide SFR and metallicity. A copy of the Python code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
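
    To convey the idea of optimal sampling, the sketch below deterministically discretizes a single-slope (Salpeter) IMF: starting from the upper mass limit, each successive bin is chosen to contain exactly one star, whose mass is the mass integral over the bin. This is a simplified toy under assumed parameters (fixed m_max, one power-law segment), not the GalIMF module itself.

```python
# Toy "optimal sampling" of a single power-law (Salpeter) IMF. The actual
# GalIMF code handles multi-segment IMFs and the m_max-M_ecl relation.
import numpy as np

ALPHA, M_MIN, M_MAX = 2.35, 0.08, 150.0   # Salpeter slope, mass limits (M_sun)

def optimally_sample(m_ecl):
    """Return deterministic stellar masses for a cluster of mass m_ecl."""
    a1, a2 = 1.0 - ALPHA, 2.0 - ALPHA
    # Normalize xi(m) = k * m^-ALPHA so the total mass integral equals m_ecl.
    k = m_ecl * a2 / (M_MAX ** a2 - M_MIN ** a2)
    stars, m_hi = [], M_MAX
    while m_hi > M_MIN:
        # Lower bin edge m_lo such that the bin holds exactly one star:
        # N(m_lo, m_hi) = (k / a1) * (m_hi^a1 - m_lo^a1) = 1.
        base = m_hi ** a1 + (ALPHA - 1.0) / k
        m_lo = max(base ** (1.0 / a1), M_MIN)
        if m_lo >= m_hi:
            break
        # The star's mass is the mass integral over the bin.
        stars.append(k * (m_hi ** a2 - m_lo ** a2) / a2)
        m_hi = m_lo
    return np.array(stars)

masses = optimally_sample(m_ecl=1000.0)
print(len(masses), "stars; three most massive:", masses[:3])
```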

  11. Understanding Statistics - Cancer Statistics

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  12. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I² statistic.

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²_GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²_GX), the stronger the dilution. When, additionally, all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. …
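
    The adapted statistic has the same form as the usual I² from meta-analysis, applied to the SNP-exposure estimates: I²_GX = (Q_GX - (L - 1)) / Q_GX, where Q_GX is Cochran's Q computed from the L variant-exposure associations. The sketch below computes it from summary data; the numbers are illustrative, not from a real MR study.

```python
# I^2_GX from summary SNP-exposure estimates (illustrative numbers).
import numpy as np

beta_gx = np.array([0.12, 0.10, 0.15, 0.08, 0.11, 0.09])  # SNP-exposure betas
se_gx = np.array([0.02, 0.03, 0.02, 0.04, 0.03, 0.02])    # their standard errors

w = 1.0 / se_gx**2
beta_bar = np.sum(w * beta_gx) / np.sum(w)         # inverse-variance weighted mean
q = np.sum(w * (beta_gx - beta_bar) ** 2)          # Cochran's Q
i2_gx = max(0.0, (q - (len(beta_gx) - 1)) / q)     # adapted I^2 statistic

# Values near 1 indicate little NOME violation (little expected dilution).
print(f"I2_GX = {i2_gx:.2f}")
```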

  13. New low-spin states of 122Xe observed via high-statistics β-decay of 122Cs

    Jigmeddorj, B.; Garrett, P. E.; Andreoiu, C.; Ball, G. C.; Bruhn, T.; Cross, D. S.; Garnsworthy, A. B.; Hadinia, B.; Moukaddam, M.; Park, J.; Pore, J. L.; Radich, A. J.; Rajabali, M. M.; Rand, E. T.; Rizwan, U.; Svensson, C. E.; Voss, P.; Wang, Z. M.; Wood, J. L.; Yates, S. W.

    2018-05-01

    Excited states of 122Xe were studied via the β+/EC decay of 122Cs with the 8π γ-ray spectrometer at the TRIUMF-ISAC facility. Compton-suppressed HPGe detectors were used for measurements of γ-ray intensities, γ-γ coincidences, and γ-γ angular correlations. Two sets of data were collected to optimize the decays of the ground (21.2 s) and isomeric (3.7 min) states of 122Cs. The data collected have enabled the observation of about 505 new transitions and about 250 new levels, including 51 new low-spin states. Spin assignments have been made for 58 low-spin states based on the deduced β-decay feeding and γ-γ angular correlation analyses.

  14. Geostatistical estimation of forest biomass in interior Alaska combining Landsat-derived tree cover, sampled airborne lidar and field observations

    Babcock, Chad; Finley, Andrew O.; Andersen, Hans-Erik; Pattison, Robert; Cook, Bruce D.; Morton, Douglas C.; Alonzo, Michael; Nelson, Ross; Gregoire, Timothy; Ene, Liviu; Gobakken, Terje; Næsset, Erik

    2018-06-01

    The goal of this research was to develop and examine the performance of a geostatistical coregionalization modeling approach for combining field inventory measurements, strip samples of airborne lidar and Landsat-based remote sensing data products to predict aboveground biomass (AGB) in interior Alaska's Tanana Valley. The proposed modeling strategy facilitates pixel-level mapping of AGB density predictions across the entire spatial domain. Additionally, the coregionalization framework allows for statistically sound estimation of total AGB for arbitrary areal units within the study area, a key advance to support diverse management objectives in interior Alaska. This research focuses on appropriate characterization of prediction uncertainty in the form of posterior predictive coverage intervals and standard deviations. Using the framework detailed here, it is possible to quantify estimation uncertainty for any spatial extent, ranging from pixel-level predictions of AGB density to estimates of AGB stocks for the full domain. The lidar-informed coregionalization models consistently outperformed their counterpart lidar-free models in terms of point-level predictive performance and total AGB precision. Additionally, the inclusion of Landsat-derived forest cover as a covariate further improved estimation precision in regions with lower lidar sampling intensity. Our findings also demonstrate that model-based approaches that do not explicitly account for residual spatial dependence can grossly underestimate uncertainty, resulting in falsely precise estimates of AGB. On the other hand, in a geostatistical setting, residual spatial structure can be modeled within a Bayesian hierarchical framework to obtain statistically defensible assessments of uncertainty for AGB estimates.

  15. Feasibility of recruiting a diverse sample of men who have sex with men: observation from Nanjing, China.

    Weiming Tang

    Full Text Available Respondent-driven sampling (RDS) has been well recognized as a method for sampling from hard-to-reach populations such as commercial sex workers, drug users, and men who have sex with men. However, the feasibility of this sampling strategy in terms of recruiting a diverse spectrum of these hidden populations is not yet well understood in developing countries. In a cross-sectional study in Nanjing city of Jiangsu province of China, 430 MSM, including 9 seeds, were recruited using RDS over a 14-week study period. Information regarding socio-demographic characteristics and sexual risk behavior was collected, and testing was done for HIV and syphilis. Duration, completion, participant characteristics and the equilibrium of key factors were used to assess the feasibility of RDS. Homophily of key variables, socio-demographic distribution and social network size were used as indicators of diversity. In the study sample, adjusted HIV and syphilis prevalence were 6.6% and 14.6%, respectively. The majority (96.3%) of the participants were recruited by members of their own social network. Although there was a tendency for recruitment within the same self-identified group (homosexuals recruited 60.0% homosexuals), considerable cross-group recruitment (bisexuals recruited 52.3% homosexuals) was also seen. Homophily of the self-identified sexual orientations was 0.111 for homosexuals. Upon completion of the recruitment process, participant characteristics and the equilibrium of key factors indicated that RDS was feasible for sampling MSM in Nanjing. Participants recruited by RDS were found to be diverse after assessing the homophily of key variables in successive waves of recruitment, the proportion of characteristics after reaching equilibrium, and the social network size. The observed design effects were nearly the same as or even better than the theoretical design effect of 2. RDS was found to be an efficient and feasible sampling method for recruiting a diverse …

  16. Identifying ecological "sweet spots" underlying cyanobacteria functional group dynamics from long-term observations using a statistical machine learning approach

    Nelson, N.; Munoz-Carpena, R.; Phlips, E. J.

    2017-12-01

    Diversity in the eco-physiological adaptations of cyanobacteria genera creates challenges for water managers who are tasked with developing appropriate actions for controlling not only the intensity and frequency of cyanobacteria blooms, but also reducing the potential for blooms of harmful taxa (e.g., toxin producers, N2 fixers). Compounding these challenges, the efficacy of nutrient management strategies (phosphorus-only versus nitrogen-and-phosphorus) for cyanobacteria bloom abatement is the subject of an ongoing debate, which increases uncertainty associated with bloom mitigation decision-making. In this work, we analyze a unique long-term (17-year) dataset composed of monthly observations of cyanobacteria genera abundances, zooplankton abundances, water quality, and flow from Lake George, a bloom-impacted flow-through lake of the St. Johns River (FL, USA). Using the Random Forests machine learning algorithm, an assumption-free ensemble modeling approach, the dataset was evaluated to quantify and characterize relationships between environmental conditions and seven cyanobacteria groupings: five genera (Anabaena, Cylindrospermopsis, Lyngbya, Microcystis, and Oscillatoria) and two functional groups (N2 fixers and non-fixers). Results highlight the selectivity of nitrogen in describing genera and functional group dynamics, and potential for physical effects to limit the efficacy of nutrient management as a mechanism for cyanobacteria bloom mitigation.
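
    To give a sense of how such an assumption-free ensemble analysis proceeds, the sketch below fits a Random Forest regressor to monthly environmental predictors of a cyanobacteria abundance index and ranks the predictors by permutation importance. The variable names, synthetic data, and model settings are illustrative assumptions, not the Lake George analysis.

```python
# Illustrative Random Forest + permutation importance on synthetic
# monthly environmental data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
n = 204  # e.g. 17 years of monthly observations
X = rng.normal(size=(n, 4))            # [TN, TP, flow, temperature], standardized
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)  # abundance index

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=30, random_state=0)

for name, score in zip(["TN", "TP", "flow", "temperature"], imp.importances_mean):
    print(f"{name}: {score:.3f}")
```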

  17. Statistical Similarities Between WSA-ENLIL+Cone Model and MAVEN in Situ Observations From November 2014 to March 2016

    Lentz, C. L.; Baker, D. N.; Jaynes, A. N.; Dewey, R. M.; Lee, C. O.; Halekas, J. S.; Brain, D. A.

    2018-02-01

    Normal solar wind flows and intense solar transient events interact directly with the upper Martian atmosphere due to the absence of an intrinsic global planetary magnetic field. Since the launch of the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission, there are now new means to directly observe solar wind parameters at the planet's orbital location for limited time spans. Due to MAVEN's highly elliptical orbit, in situ measurements cannot be taken while MAVEN is inside Mars' magnetosheath. To model solar wind conditions during these atmospheric and magnetospheric passages, this research project utilized the solar wind forecasting capabilities of the WSA-ENLIL+Cone model. The model was used to simulate solar wind parameters that included magnetic field magnitude, plasma particle density, dynamic pressure, proton temperature, and velocity during a four Carrington rotation-long segment. An additional simulation that lasted 18 Carrington rotations was then conducted. The precision of each simulation was examined for intervals when MAVEN was in the upstream solar wind, that is, with no exospheric or magnetospheric phenomena altering in situ measurements. It was determined that generalized, extensive simulations have comparable prediction capabilities as shorter, more comprehensive simulations. Generally, this study aimed to quantify the loss of detail in long-term simulations and to determine if extended simulations can provide accurate, continuous upstream solar wind conditions when there is a lack of in situ measurements.

  18. Statistical Computing

    Kunte, Sudhakar

    Elements of statistical computing are discussed in this series, including inference and finite population sampling. … The toss by which a captain gets the option to decide whether to field first or bat first may of course not be fair, in the sense that the team which wins … Two methods of drawing a random number between 0 and 1 are described.

  19. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. The approach was originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning the large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that the previously advocated standard error …
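
    To make the pseudo-value construction concrete, the sketch below computes jackknife pseudo-observations for a survival probability at a fixed time point, pseudo_i = n * S(t) - (n - 1) * S_{-i}(t), using a hand-rolled Kaplan-Meier estimator. The simulated data, censoring rate, and time point are illustrative assumptions, not the paper's setting.

```python
# Jackknife pseudo-observations for a survival probability S(t0).
import numpy as np

def km_survival(time, event, t):
    """Kaplan-Meier estimate of S(t) for right-censored data."""
    s = 1.0
    for u in np.sort(np.unique(time[event == 1])):
        if u > t:
            break
        at_risk = np.sum(time >= u)
        deaths = np.sum((time == u) & (event == 1))
        s *= 1.0 - deaths / at_risk
    return s

rng = np.random.default_rng(3)
n = 50
time = rng.exponential(10.0, n)
event = (rng.uniform(size=n) < 0.7).astype(int)   # roughly 30% censoring

t0 = 5.0
s_full = km_survival(time, event, t0)
pseudo = np.array([
    n * s_full - (n - 1) * km_survival(np.delete(time, i), np.delete(event, i), t0)
    for i in range(n)
])
# The pseudo-values can now serve as responses in a GEE/regression model.
print(pseudo.mean(), s_full)
```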

  20. [Influence of an observer in the haemolysis produced during the extraction of blood samples in primary care].

    Bel-Peña, N; Mérida-de la Torre, F J

    2015-01-01

    To check whether an intervention based on direct observation and complementary information given to nurses helps reduce haemolysis when drawing blood specimens. A random sampling study in primary care centres in the Serranía de Málaga health management area, using a cross-sectional, longitudinal pre- and post-intervention design. The study period was from August 2012 to January 2015. The level of free haemoglobin was measured by direct spectrophotometry in the specimens extracted. It was then checked whether the intervention influenced the level of haemolysis, and whether this effect was maintained over time. The mean haemolysis measured pre-intervention was 17%, and after the intervention it was 6.1%. A year later, and under the same conditions, the frequency of haemolysis was measured again in the samples analysed, and the percentage was 9%. These results are low compared to the level obtained pre-intervention, but higher than the levels obtained immediately after the intervention. The transport and analysis conditions were the same. An intervention based on direct and informative observation of the process of collecting blood samples contributes significantly to reducing the level of haemolysis, and this effect is maintained over time, although the intervention needs to be repeated to maintain its effectiveness. Audits and continuing education programmes are useful for quality assurance procedures and maintain the level of care needed for good quality of care.

  1. USING STATISTICAL SURVEY IN ECONOMICS

    Delia TESELIOS

    2012-01-01

    Full Text Available A statistical survey is an effective method of statistical investigation that involves gathering quantitative data. It is often preferred in statistical reports because of the information that can be obtained about an entire population by observing only a part of it. Because of the information they provide, surveys are therefore used in many research areas. In economics, statistics are used in decision making, in choosing competitive strategies, in the analysis of economic phenomena, and in the formulation of forecasts. The economic study presented in this paper illustrates how simple random sampling is used to analyze the parking space situation in a given locality.
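
    A minimal sketch of the simple-random-sampling idea follows: the occupancy proportion of a population of parking spaces is estimated from a without-replacement sample, with a finite population correction in the standard error. The population size, sample size, and true occupancy rate are all invented for illustration.

```python
# Estimating a population proportion from a simple random sample.
import numpy as np

rng = np.random.default_rng(7)
N = 2000                                  # parking spaces in the locality (assumed)
population = rng.uniform(size=N) < 0.65   # True = occupied (unknown in practice)

n = 200
sample = rng.choice(population, size=n, replace=False)
p_hat = sample.mean()

fpc = (N - n) / (N - 1)                   # finite population correction
se = np.sqrt(fpc * p_hat * (1 - p_hat) / n)
print(f"occupied: {p_hat:.3f} +/- {1.96 * se:.3f} (95% CI)")
```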

  2. In-Sample Confidence Bands and Out-of-Sample Forecast Bands for Time-Varying Parameters in Observation Driven Models

    Blasques, F.; Koopman, S.J.; Lasak, K.A.; Lucas, A.

    2016-01-01

    We study the performances of alternative methods for calculating in-sample confidence and out-of-sample forecast bands for time-varying parameters. The in-sample bands reflect parameter uncertainty, while the out-of-sample bands reflect not only parameter uncertainty but also innovation uncertainty.

  3. A direct observation method for auditing large urban centers using stratified sampling, mobile GIS technology and virtual environments.

    Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth

    2017-02-16

    With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years; and second, we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs sufficiently well and has the advantage of financial and temporal efficiency when auditing a large city.

  4. Comparing observer models and feature selection methods for a task-based statistical assessment of digital breast tomsynthesis in reconstruction space

    Park, Subok; Zhang, George Z.; Zeng, Rongping; Myers, Kyle J.

    2014-03-01

    A task-based assessment of image quality for digital breast tomosynthesis (DBT) can be done in either the projected or reconstructed data space. As the choice of observer models and feature selection methods can vary depending on the type of task and the data statistics, we previously investigated the performance of two channelized-Hotelling observer models in conjunction with 2D Laguerre-Gauss (LG) and two implementations of partial least squares (PLS) channels, along with that of the Hotelling observer, in binary detection tasks involving DBT projections. The difference between these observers lies in how the spatial correlation in DBT angular projections is incorporated in the observer's strategy to perform the given task. In the current work, we extend our method to the reconstructed data space of DBT. We investigate how various model observers, including the aforementioned, compare in performing the binary detection of a spherical signal embedded in structured breast phantoms using DBT slices reconstructed via filtered back projection. We explore how well the model observers incorporate the spatial correlation between different numbers of reconstructed DBT slices while varying the number of projections. For this, relatively small and large scan angles (24° and 96°) are used for comparison. Our results indicate that 1) given a particular scan angle, the number of projections needed to achieve the best performance for each observer is similar across all observer/channel combinations, i.e., Np = 25 for scan angle 96° and Np = 13 for scan angle 24°, and 2) given these sufficient numbers of projections, the number of slices for each observer to achieve the best performance differs depending on the channel/observer types, which is more pronounced in the narrow scan angle case.
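
    For orientation, the sketch below builds a generic channelized Hotelling observer for a binary detection task: images are projected onto a small set of channels, and the observer template comes from the channel-space class means and pooled covariance. The random channel matrix, toy signal, and white-noise backgrounds are illustrative stand-ins for the study's LG/PLS channels and structured breast phantoms.

```python
# Generic channelized Hotelling observer (CHO) sketch for binary detection.
import numpy as np

rng = np.random.default_rng(13)
n_pix, n_ch, n_train = 64 * 64, 10, 200

U = rng.normal(size=(n_pix, n_ch))                 # placeholder channel matrix
signal = np.zeros(n_pix); signal[2016:2080] = 0.5  # toy signal profile

g0 = rng.normal(size=(n_train, n_pix))             # signal-absent images
g1 = rng.normal(size=(n_train, n_pix)) + signal    # signal-present images

v0, v1 = g0 @ U, g1 @ U                            # channel outputs
s_bar = 0.5 * (np.cov(v0.T) + np.cov(v1.T))        # pooled channel covariance
w = np.linalg.solve(s_bar, v1.mean(0) - v0.mean(0))  # CHO template

t0, t1 = v0 @ w, v1 @ w                            # decision variables
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))
print("detectability SNR:", snr)
```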

  5. Building a stronger child dental health system in Australia: statistical sampling masks the burden of dental disease distribution in Australian children.

    Tennant, M; Kruger, E

    2014-01-01

    In Australia, over the past 30 years, the prevalence of dental decay in children has reduced significantly; today 60-70% of all 12-year-olds are caries free, and only 10% of children have more than two decayed teeth. However, many studies continue to report a small but significant subset of children suffering severe levels of decay. The present study applies Monte Carlo simulation to examine, at the national level, 12-year-old decayed, missing or filled teeth (DMFT) counts, and to shed light on both the statistical limitations of Australia's reporting to date and the problem of targeting high-risk children. A simulation for 273,000 Australian 12-year-old children found that moving between different levels of geographic clustering produced different statistical influences that drive different conclusions. At the coarse scale (i.e. state level), the gross averaging of the non-normally distributed disease burden masks the small subset of disease-bearing children. At much higher acuity (i.e. local government area level), the risk of low numbers in the sample becomes a significant issue. The results clearly highlight, first, the importance of care when examining the existing data and, second, opportunities for far greater targeting of services to children in need. The sustainability (and fairness) of universal coverage systems needs to be examined to ensure they remain highly targeted at disease burden, and not just focused on the children who are easy to reach (and who suffer the least disease).
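
    The masking effect described above can be reproduced with a few lines of simulation: a zero-inflated, right-skewed DMFT distribution has a benign national mean, while small-area means scatter widely. The distributional parameters below are invented for illustration and are not the paper's calibrated values.

```python
# Monte Carlo illustration: skewed decay burden masked by coarse averaging.
import numpy as np

rng = np.random.default_rng(11)
n_children = 273_000
caries_free = rng.uniform(size=n_children) < 0.65          # ~65% caries free
dmft = np.where(caries_free, 0, rng.negative_binomial(1, 0.35, n_children))

print("national mean DMFT:", dmft.mean())                  # masks the tail
print("share with >2 decayed teeth:", (dmft > 2).mean())

# Small areas: split into "local government areas" of ~500 children each.
areas = dmft[: n_children // 500 * 500].reshape(-1, 500)
means = areas.mean(axis=1)
print("small-area means range:", means.min(), "-", means.max())  # sampling noise
```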

  6. Short time-scale optical variability properties of the largest AGN sample observed with Kepler/K2

    Aranzana, E.; Körding, E.; Uttley, P.; Scaringi, S.; Bloemen, S.

    2018-05-01

    We present the first short time-scale (~hours to days) optical variability study of a large sample of active galactic nuclei (AGNs) observed with the Kepler/K2 mission. The sample contains 252 AGN observed over four campaigns with ~30 min cadence, selected from the Million Quasar Catalogue with R magnitude <19. We performed time series analysis to determine their variability properties by means of the power spectral densities (PSDs) and applied Monte Carlo techniques to find the best model parameters that fit the observed power spectra. A power-law model is sufficient to describe all the PSDs of our sample. A variety of power-law slopes were found, indicating that there is not a universal slope for all AGNs. We find that the rest-frame amplitude variability in the frequency range of 6 × 10⁻⁶ to 10⁻⁴ Hz varies from 1 to 10 per cent, with an average of 1.7 per cent. We explore correlations between the variability amplitude and key parameters of the AGN, finding a significant correlation of rest-frame short-term variability amplitude with redshift. We attribute this effect to the known `bluer when brighter' variability of quasars combined with the fixed bandpass of Kepler data. This study also enables us to distinguish between Seyferts and blazars and confirm AGN candidates. For our study, we have compared results obtained from light curves extracted using different aperture sizes and with and without detrending. We find that limited detrending of the optimal photometric precision light curve is the best approach, although some systematic effects still remain present.
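
    As a rough illustration of the PSD fitting step, here is a minimal periodogram slope estimate for an evenly sampled light curve (Python/numpy; the toy red-noise curve and the 30-min cadence are stand-ins, and this omits the Monte Carlo distortion corrections the authors apply):

        import numpy as np

        def powerlaw_slope(flux, dt):
            flux = (flux - flux.mean()) / flux.mean()        # fractional variability
            freqs = np.fft.rfftfreq(flux.size, d=dt)[1:]
            power = (np.abs(np.fft.rfft(flux))**2)[1:]
            slope, intercept = np.polyfit(np.log10(freqs), np.log10(power), 1)
            return slope

        rng = np.random.default_rng(2)
        lc = np.cumsum(rng.normal(size=4000)) + 1000.0       # red-noise-like toy curve
        print("fitted PSD slope:", round(powerlaw_slope(lc, dt=30 * 60), 2))  # ~ -2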

  7. Statistical inference

    Rohatgi, Vijay K

    2003-01-01

    Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

  8. Statistical Observations of The Patients With Vertigo in The Oto-Rhino-Laryngological Department of The Ryukyu University Hospital in 1980

    勢理客, 友子; 名嘉嶺, 苗子; 喜友名, 千佳子; 又吉, 重光; 野田, 寛; Serikyaku, Tomoko; Nakamine, Naeko; Kiyuna, Chikako; Matayoshi, Shigemitsu; Noda, Yutaka; 琉球大学医学部附属病院耳鼻咽喉科

    1982-01-01

    Statistical analyses were presented regarding the 69 patients with vertigo seen in the Oto-Rhino-Laryngological Department of the Ryukyus University Hospital, and the following features were observed: 1. There was an average of 7.7 patients with vertigo per month, an increase compared with the average of 4.9 patients per month in 1979. 2. Many of the patients were in the third to fifth decades of age in both sexes, and female patients were 2.5 times more numerous than male patients. 3. The p...

  9. Observed Statistics of Extreme Waves

    2006-12-01

    Only reference fragments of this abstract survive in the record: for the hardware used, the reader is referred to Tinder (2000) and Ardhuin et al. (2003), and SHOWEX is noted to have taken place during a particularly active hurricane season. [Tinder, C. V. (2000). Swell Transformation Across the Continental Shelf. Master's Thesis, Naval Postgraduate School, Monterey.]

  10. The interprocess NIR sampling as an alternative approach to multivariate statistical process control for identifying sources of product-quality variability.

    Marković, Snežana; Kerč, Janez; Horvat, Matej

    2017-03-01

    We present a new approach to identifying sources of variability within a manufacturing process: NIR measurements of samples of intermediate material after each consecutive unit operation (the interprocess NIR sampling technique). In addition, we summarize the development of a multivariate statistical process control (MSPC) model for the production of an enteric-coated pellet product of the proton-pump inhibitor class. By developing provisional NIR calibration models, the identification of critical process points yields results comparable to the established MSPC modeling procedure. Both approaches are shown to lead to the same conclusion, identifying parameters of extrusion/spheronization and characteristics of lactose as having the greatest influence on the end product's enteric-coating performance. The proposed approach enables quicker and easier identification of variability sources during the manufacturing process, especially in cases where historical process data are not readily available. In the presented case, changes in lactose characteristics influence the performance of the extrusion/spheronization process step. The pellet cores produced using one (less suitable) lactose source were on average larger and more fragile, leading to breakage of the cores during subsequent fluid-bed operations. These results were confirmed by additional experimental analyses illuminating the underlying mechanism of fracture of oblong pellets during the pellet-coating process, which leads to compromised film coating.
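
    For readers unfamiliar with MSPC, the following is a generic, textbook-style sketch of one of its building blocks: a PCA model fitted on in-control batches, with Hotelling's T² flagging unusual batches (Python/numpy; the data and variable count are invented, and this is not the authors' NIR calibration model):

        import numpy as np

        def pca_t2(X_train, X_new, n_pc=2):
            mu, sd = X_train.mean(0), X_train.std(0)
            Z = (X_train - mu) / sd
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            scores_var = (s[:n_pc]**2) / (len(Z) - 1)         # variance per PC
            T = ((X_new - mu) / sd) @ Vt[:n_pc].T             # project new batches
            return np.sum(T**2 / scores_var, axis=1)          # Hotelling T^2 per batch

        rng = np.random.default_rng(3)
        normal = rng.normal(size=(50, 10))                    # 50 in-control batches
        odd = normal[:3] + np.array([4] + [0] * 9)            # shifted 'lactose' variable
        print("T^2, in-control:", pca_t2(normal, normal[:3]).round(1))
        print("T^2, shifted   :", pca_t2(normal, odd).round(1))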

  11. Neutron-induced ⁶³Ni activity and microscopic observation of copper samples exposed to the Hiroshima atomic bomb

    Shizuma, Kiyoshi, E-mail: shizuma@hiroshima-u.ac.jp [Quantum Energy Applications, Department of Mechanical Science and Engineering, Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527 (Japan); Endo, Satoru [Quantum Energy Applications, Department of Mechanical Science and Engineering, Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527 (Japan); Shinozaki, Kenji [Materials Joining Science and Engineering, Department of Mechanical Science and Engineering, Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527 (Japan); Fukushima, Hiroshi [Materials Physics, Department of Mechanical Science and Engineering, Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527 (Japan)

    2013-05-01

    Fast-neutron activation data for ⁶³Ni in copper samples exposed to the Hiroshima atomic bomb are important in evaluating neutron doses to the survivors. Until now, accelerator mass spectrometry (AMS) and liquid scintillation counting methods have been applied to ⁶³Ni measurements, and data have been accumulated within 1500 m of the hypocenter. The slope of the activation curve versus distance shows reasonable agreement with the calculated result; however, data near the hypocenter are scarce. In the present work, two copper samples obtained from the Atomic Bomb Dome (155 m from the hypocenter) and the Bank of Japan building (392 m) were used for ⁶³Ni beta-ray measurement with a Si surface-barrier detector. Additionally, microscopic observation of the metal surfaces was performed for the first time. Only an upper limit on ⁶³Ni production was obtained for the copper sample from the Atomic Bomb Dome. The ⁶³Ni measurement for the Bank of Japan building shows reasonable agreement with the AMS measurement and with fast-neutron activation calculations based on the Dosimetry System 2002 (DS02) neutrons.

  12. LOCAL BENCHMARKS FOR THE EVOLUTION OF MAJOR-MERGER GALAXIES-SPITZER OBSERVATIONS OF A K-BAND SELECTED SAMPLE

    Xu, C. Kevin; Cheng Yiwen; Lu Nanyao; Mazzarella, Joseph M.; Cutri, Roc; Domingue, Donovan; Huang Jiasheng; Gao Yu; Sun, W.-H.; Surace, Jason

    2010-01-01

    We present Spitzer observations for a sample of close major-merger galaxy pairs (KPAIR sample) selected from cross-matches between the Two Micron All Sky Survey and Sloan Digital Sky Survey Data Release 3. The goals are to study the star formation activity in these galaxies and to set a local benchmark for the cosmic evolution of close major mergers. The Spitzer KPAIR sample (27 pairs, 54 galaxies) includes all spectroscopically confirmed spiral-spiral (S+S) and spiral-elliptical (S+E) pairs in a parent sample that is complete for primaries brighter than K = 12.5 mag, projected separations of 5 h⁻¹ kpc ≤ s ≤ 20 h⁻¹ kpc, and mass ratios ≤ 2.5. The Spitzer data, consisting of images in seven bands (3.6, 4.5, 5.8, 8, 24, 70, 160 μm), show very diverse IR emission properties. Compared to single spiral galaxies in a control sample, only spiral galaxies in S+S pairs show significantly enhanced specific star formation rate (sSFR = SFR/M), whereas spiral galaxies in S+E pairs do not. Furthermore, the SFR enhancement of spiral galaxies in S+S pairs is highly mass-dependent: only those with M ≳ 10^10.5 M⊙ show significant enhancement. Relatively low-mass (M ~ 10^10 M⊙) spirals in S+S pairs have about the same SFR/M as their counterparts in the control sample, while those with M ~ 10^11 M⊙ have on average a ~3 times higher SFR/M than single spirals. There is evidence for a correlation between the global star formation activities (but not the nuclear activities) of the component galaxies in massive S+S major-merger pairs (the Holmberg effect). There is no significant difference in the SFR/M between the primaries and the secondaries, nor between spirals at smaller and larger pair separations (SEP); the implied contribution of close major-merger pairs to the local star formation rate density is 2.54 × 10⁻⁴ M⊙ yr⁻¹ Mpc⁻³.

  13. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-05-11

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools: it allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
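
    The conditioning step that EmpiriciSN relies on reduces, for each Gaussian, to the standard conditional-Gaussian formulas. A self-contained sketch of that core (Python/numpy; single Gaussian with toy numbers, whereas XDGMM applies this per mixture component and also reweights the components):

        import numpy as np

        def condition_gaussian(mu, cov, known_idx, known_vals):
            """Mean and covariance of unknown dims given known dims."""
            known_idx = np.asarray(known_idx)
            unk_idx = np.setdiff1d(np.arange(len(mu)), known_idx)
            S_uu = cov[np.ix_(unk_idx, unk_idx)]
            S_uk = cov[np.ix_(unk_idx, known_idx)]
            S_kk = cov[np.ix_(known_idx, known_idx)]
            gain = S_uk @ np.linalg.inv(S_kk)
            mu_cond = mu[unk_idx] + gain @ (known_vals - mu[known_idx])
            cov_cond = S_uu - gain @ S_uk.T
            return mu_cond, cov_cond

        # Toy: predict an 'SN parameter' (dim 0) from two 'host properties' (dims 1-2).
        mu = np.array([0.0, 1.0, 2.0])
        cov = np.array([[1.0, 0.6, 0.3], [0.6, 1.0, 0.2], [0.3, 0.2, 1.0]])
        print(condition_gaussian(mu, cov, [1, 2], np.array([1.5, 1.5])))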

  14. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools: it allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  15. Impulsive IR-multiphoton dissociation of acrolein: observation of non-statistical product vibrational excitation in CO ( v=1-12) by time resolved IR fluorescence spectroscopy

    Chowdhury, P. K.

    2000-10-01

    On IR-multiphoton excitation, vibrationally highly excited acrolein molecules undergo concerted dissociation generating CO and ethylene. The vibrationally excited products, CO and ethylene, are detected immediately following the CO₂ laser pulse by observing IR fluorescence at 4.7 and 3.2 μm, respectively. The nascent CO is formed with significant vibrational excitation, with a Boltzmann population distribution for the v = 1-12 levels corresponding to T_v = 12,950 ± 50 K. The average vibrational energy in the product CO is found to be 26 kcal mol⁻¹, in contrast to its statistical share of 5 kcal mol⁻¹ available from the product energy distribution. The nascent vibrationally excited ethylene either dissociates by absorbing further infrared laser photons from the tail of the CO₂ laser pulse or relaxes by collisional deactivation. The ethylene IR-fluorescence excitation spectrum showed structure in the quasi-continuum, with a facile resonance at 10.53 μm corresponding to the 10P(14) CO₂ laser line, which explains the higher acetylene yield observed at higher pressure. A hydrogen-atom transfer mechanism followed by an impulsive C-C bond break in the acrolein transition state may be responsible for such a non-statistical product energy distribution.

  16. Update on the non-prewhitening model observer in computed tomography for the assessment of the adaptive statistical and model-based iterative reconstruction algorithms

    Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.

    2014-08-01

    The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done using a model observer that yields a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although computing the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we tuned the NPW model observer by replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed leading to the TTF. As expected, the first results showed a dependency of the TTF on image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be contrast- and noise-dependent when using the lung kernel. These results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
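
    The NPW figure of merit being tuned here has a compact frequency-domain form, SNR² = (Σ |S·TTF|²)² / Σ (|S·TTF|² · NPS). A hedged one-dimensional stand-in for the 2D integrals (Python/numpy; all curves below are toy shapes, not the phantom measurements):

        import numpy as np

        def npw_snr(signal_spectrum, ttf, nps, df):
            w2 = (np.abs(signal_spectrum) * ttf) ** 2   # |expected detected signal|^2
            num = (np.sum(w2) * df) ** 2
            den = np.sum(w2 * nps) * df
            return np.sqrt(num / den)

        f = np.linspace(0.01, 1.0, 200)                 # spatial frequency (cycles/mm)
        S = np.exp(-10 * f**2)                          # toy signal spectrum (a blob)
        ttf = 1.0 / (1.0 + (f / 0.4) ** 2)              # toy resolution curve
        nps = 0.05 * (1 + f)                            # toy noise power spectrum
        print("NPW SNR:", round(npw_snr(S, ttf, nps, f[1] - f[0]), 2))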

  17. A Catalog Sample of Low-mass Galaxies Observed in X-Rays with Central Candidate Black Holes

    Nucita, A. A.; Manni, L.; Paolis, F. De; Giordano, M.; Ingrosso, G., E-mail: nucita@le.infn.it [Department of Mathematics and Physics “E. De Giorgi”, University of Salento, Via per Arnesano, CP 193, I-73100, Lecce (Italy)

    2017-03-01

    We present a sample of X-ray-selected candidate black holes in 51 low-mass galaxies with z ≤ 0.055 and masses up to 10¹⁰ M⊙, obtained by cross-correlating the NASA-Sloan Atlas with the 3XMM catalog. We have also searched the available catalogs for radio counterparts of the black hole candidates and find that 19 of the previously selected sources also have a radio counterpart. Our results show that about 37% of the galaxies in our sample host an X-ray source (associated with a radio counterpart) spatially coincident with the galaxy center, in agreement with other recent works. For these nuclear sources, the X-ray/radio fundamental plane relation allows one to estimate the mass of the (central) candidate black holes, which are in the range 10⁴-2 × 10⁸ M⊙ (with a median value of ≃3 × 10⁷ M⊙ and eight candidates having masses below 10⁷ M⊙). This result, while suggesting that X-ray-emitting black holes in low-mass galaxies may have had a key role in the evolution of such systems, makes it even more urgent to explain how such massive objects formed in galaxies. Of course, dedicated follow-up observations in the X-ray and radio bands, as well as in the optical, are necessary in order to confirm our results.

  18. Metal-poor dwarf galaxies in the SIGRID galaxy sample. I. H II region observations and chemical abundances

    Nicholls, David C.; Dopita, Michael A.; Sutherland, Ralph S.; Jerjen, Helmut; Kewley, Lisa J.; Basurah, Hassan

    2014-01-01

    In this paper we present the results of observations of 17 H II regions in thirteen galaxies from the SIGRID sample of isolated gas-rich irregular dwarf galaxies. The spectra of all but one of the galaxies exhibit the auroral [O III] 4363 Å line, from which we calculate the electron temperature, T_e, and gas-phase oxygen abundance. Five of the objects are blue compact dwarf galaxies, of which four have not previously been analyzed spectroscopically. We include one unusual galaxy which exhibits no evidence of the [N II] λλ6548,6584 Å lines, suggesting a particularly low metallicity (< Z⊙/30). We compare the electron-temperature-based abundances with those derived using eight of the new strong-line diagnostics presented by Dopita et al. Using a method derived from first principles for calculating total oxygen abundance, we show that the discrepancy between the T_e-based and strong-line gas-phase abundances has now been reduced to within ∼0.07 dex. The chemical abundances are consistent with what is expected from the luminosity-metallicity relation. We derive estimates of the electron densities and find them to be between ∼5 and ∼100 cm⁻³. We find no evidence for a nitrogen plateau for objects in this sample with metallicities 0.5 Z⊙ > Z > 0.15 Z⊙.
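
    The T_e step can be illustrated with the classic low-density [O III] diagnostic (as given in Osterbrock & Ferland's textbook), R = [I(4959) + I(5007)]/I(4363) ≈ 7.90 exp(3.29 × 10⁴/T_e), so T_e ≈ 3.29 × 10⁴ / ln(R/7.90). A sketch with made-up line intensities (Python; the coefficients are the textbook low-density approximation, not this paper's first-principles method):

        import numpy as np

        def te_oiii(i4959, i5007, i4363):
            """Electron temperature (K) from the [O III] ratio, low-density limit."""
            R = (i4959 + i5007) / i4363
            return 3.29e4 / np.log(R / 7.90)

        print("T_e = %.0f K" % te_oiii(1.33, 4.00, 0.05))   # R ~ 107 -> ~12,600 K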

  19. Metal-poor dwarf galaxies in the SIGRID galaxy sample. I. H II region observations and chemical abundances

    Nicholls, David C.; Dopita, Michael A.; Sutherland, Ralph S.; Jerjen, Helmut; Kewley, Lisa J. [Research School of Astronomy and Astrophysics, Australian National University, Cotter Road, Weston ACT 2611 (Australia); Basurah, Hassan, E-mail: David.Nicholls@anu.edu.au [Astronomy Department, King Abdulaziz University, P.O. Box 80203 Jeddah (Saudi Arabia)

    2014-05-10

    In this paper we present the results of observations of 17 H II regions in thirteen galaxies from the SIGRID sample of isolated gas-rich irregular dwarf galaxies. The spectra of all but one of the galaxies exhibit the auroral [O III] 4363 Å line, from which we calculate the electron temperature, T_e, and gas-phase oxygen abundance. Five of the objects are blue compact dwarf galaxies, of which four have not previously been analyzed spectroscopically. We include one unusual galaxy which exhibits no evidence of the [N II] λλ6548,6584 Å lines, suggesting a particularly low metallicity (< Z⊙/30). We compare the electron-temperature-based abundances with those derived using eight of the new strong-line diagnostics presented by Dopita et al. Using a method derived from first principles for calculating total oxygen abundance, we show that the discrepancy between the T_e-based and strong-line gas-phase abundances has now been reduced to within ∼0.07 dex. The chemical abundances are consistent with what is expected from the luminosity-metallicity relation. We derive estimates of the electron densities and find them to be between ∼5 and ∼100 cm⁻³. We find no evidence for a nitrogen plateau for objects in this sample with metallicities 0.5 Z⊙ > Z > 0.15 Z⊙.

  20. How to Make Nothing Out of Something: Analyses of the Impact of Study Sampling and Statistical Interpretation in Misleading Meta-Analytic Conclusions

    Michael Robert Cunningham

    2016-10-01

    The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger, Wood, Stiff, & Chatzisarantis, 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter, Kofler, Forster, and McCullough's (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: the exclusion of many of the best depletion studies based on idiosyncratic criteria, and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data-analysis issues: questionable interpretation of the results of trim-and-fill and funnel-plot asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test (PET) and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. meta-analysis results actually indicate that there is a real depletion effect, contrary to their title.

  1. Statistical analyses of in-situ and soil-sample measurements for radionuclides in surface soil near the 116-K-2 trench

    Gilbert, R.O.; Klover, W.J.

    1988-09-01

    Radiation detection surveys are used at the US Department of Energy's Hanford Reservation near Richland, Washington, to determine areas that need posting as radiation zones or to measure dose rates in the field. The relationship between measurements made by sodium iodide (NaI) detectors mounted on the mobile Road Monitor vehicle and those made by hand-held GM P-11 probes and Micro-R meters is of particular interest, because the Road Monitor can survey land areas in much less time than hand-held detectors. Statistical regression methods are used here to develop simple equations to predict GM P-11 probe gross gamma count-per-minute (cpm) and Micro-R meter μR/h measurements on the basis of NaI gross gamma count-per-second (cps) measurements obtained using the Road Monitor. These equations were estimated using data collected near the 116-K-2 Trench in the 100-K area on the Hanford Reservation. Equations are also obtained for estimating upper and lower limits within which the GM P-11 or Micro-R meter measurement corresponding to a given NaI Road Monitor measurement at a new location is expected to fall with high probability. An equation and limits for predicting GM P-11 measurements on the basis of Micro-R meter measurements are also estimated. We also estimate an equation that may be useful for approximating the ⁹⁰Sr measurement of a surface soil sample on the basis of a spectroscopy measurement for ¹³⁷Cs in that sample. 3 refs., 16 figs., 44 tabs
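
    The prediction-limit construction described above is the standard one for simple linear regression. A minimal sketch (Python with numpy/scipy; the count-rate data are invented placeholders, not the 116-K-2 measurements):

        import numpy as np
        from scipy import stats

        def prediction_interval(x, y, x_new, conf=0.95):
            """OLS fit of y on x, with a prediction interval for a new x."""
            n = len(x)
            b1, b0 = np.polyfit(x, y, 1)
            resid = y - (b0 + b1 * x)
            s = np.sqrt(np.sum(resid**2) / (n - 2))
            se = s * np.sqrt(1 + 1 / n +
                             (x_new - x.mean())**2 / np.sum((x - x.mean())**2))
            t = stats.t.ppf(0.5 + conf / 2, n - 2)
            yhat = b0 + b1 * x_new
            return yhat, yhat - t * se, yhat + t * se

        rng = np.random.default_rng(4)
        nai_cps = rng.uniform(100, 2000, 30)                  # Road Monitor NaI, cps
        gm_cpm = 0.8 * nai_cps + rng.normal(0, 50, 30)        # GM P-11 probe, cpm
        print(prediction_interval(nai_cps, gm_cpm, x_new=1000.0))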

  2. The structure of Diagnostic and Statistical Manual of Mental Disorders (4th edition, text revision) personality disorder symptoms in a large national sample.

    Trull, Timothy J; Vergés, Alvaro; Wood, Phillip K; Jahng, Seungmin; Sher, Kenneth J

    2012-10-01

    We examined the latent structure underlying the criteria for DSM-IV-TR (American Psychiatric Association, 2000, Diagnostic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: Author.) personality disorders in a large nationally representative sample of U.S. adults. Personality disorder symptom data were collected using a structured diagnostic interview from approximately 35,000 adults assessed over two waves of data collection in the National Epidemiologic Survey on Alcohol and Related Conditions. Our analyses suggested that a seven-factor solution provided the best fit for the data, and these factors were marked primarily by one or at most two personality disorder criteria sets. A series of regression analyses that used external validators tapping Axis I psychopathology, treatment for mental health problems, functioning scores, interpersonal conflict, and suicidal ideation and behavior provided support for the seven-factor solution. We discuss these findings in the context of previous studies that have examined the structure underlying the personality disorder criteria as well as the current proposals for DSM-5 personality disorders. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  3. Fractionation of metals in street sediment samples by using the BCR sequential extraction procedure and multivariate statistical elucidation of the data

    Kartal, Senol; Aydin, Zeki; Tokalioglu, Serife

    2006-01-01

    The concentrations of metals (Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) in street sediment samples were determined by flame atomic absorption spectrometry (FAAS) using the modified BCR (European Community Bureau of Reference) sequential extraction procedure. Following the BCR protocol for extracting the metals from the relevant target phases, a 1.0 g specimen of the sample was treated sequentially with 0.11 M acetic acid (exchangeable and bound to carbonates), 0.5 M hydroxylamine hydrochloride (bound to iron and manganese oxides), and 8.8 M hydrogen peroxide plus 1 M ammonium acetate (bound to sulphides and organics). The residue was treated with aqua regia solution for recovery studies, although this step is not part of the BCR procedure. The mobility sequence based on the sum of the BCR sequential extraction stages was: Cd ∼ Zn (∼90%) > Pb (∼84%) > Cu (∼75%) > Mn (∼70%) > Co (∼57%) > Ni (∼43%) > Cr (∼40%) > Fe (∼17%). Enrichment factors, as criteria for examining the impact of anthropogenic emission sources of heavy metals, were calculated, and the most highly enriched elements in the dust samples were Cd, Pb, and Zn, averaging 190, 111, and 20, respectively. Correlation analysis (CA) and principal component analysis (PCA) were applied to the data matrix to evaluate the analytical results and to identify the possible pollution sources of the metals. PCA revealed that the sampling area was mainly influenced by three pollution sources, namely traffic, industrial, and natural sources. The results show that chemical sequential extraction is a valuable operational tool. Validation of the analytical results was checked by both recovery studies and analysis of a standard reference material (NIST SRM 2711 Montana Soil).
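
    Two of the summary quantities in this abstract are simple to compute once the step concentrations are in hand: the mobile percentage over BCR steps 1-3 and an enrichment factor normalized to a reference element. A sketch with invented concentrations (Python; the choice of Fe as the reference element and the background values are assumptions for illustration):

        def bcr_mobility(steps_mg_kg, residual_mg_kg):
            """Percent of pseudo-total released in BCR steps 1-3 (the 'mobile' part)."""
            total = sum(steps_mg_kg) + residual_mg_kg
            return 100.0 * sum(steps_mg_kg) / total

        def enrichment_factor(c_x, c_ref, bg_x, bg_ref):
            """EF = (X/ref in sample) / (X/ref in background soil/crust)."""
            return (c_x / c_ref) / (bg_x / bg_ref)

        # Cd in a street sediment: acid-soluble, reducible, oxidizable, then residual.
        print("Cd mobile fraction: %.0f%%" % bcr_mobility([1.2, 0.9, 0.6], 0.3))
        print("Cd EF: %.0f" % enrichment_factor(c_x=3.0, c_ref=25000.0,
                                                bg_x=0.1, bg_ref=45000.0))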

  4. A statistical analysis of angular distribution of neutrino events observed in Kamiokande II and IMB detectors from supernova SN 1987 A

    Krivoruchenko, M.I.

    1989-01-01

    A detailed statistical analysis of the angular distribution of neutrino events observed in the Kamiokande II and IMB detectors at 07:35 UT on 23 February 1987 is carried out. Distribution functions of the mean scattering angles in the reactions ν̄_e p → e⁺n and νe → νe are constructed, taking into account multiple Coulomb scattering and the experimental angular errors. The Smirnov and Wald-Wolfowitz run tests are used to test the hypothesis that the angular distributions of events from the two detectors agree with each other. We test, using the Kolmogorov and Mises statistical criteria, the hypothesis that the recorded events all represent ν̄_e p → e⁺n inelastic scattering. The Neyman-Pearson test is then applied to each event to test the hypothesis ν̄_e p → e⁺n against the alternative νe → νe. The hypotheses that the number of elastic events equals s = 0, 1, 2, ... against the alternatives s ≠ 0, 1, 2, ... are tested on the basis of the generalized likelihood ratio criterion. Confidence intervals for the number of elastic events are also constructed. The current supernova models fail to give a satisfactory account of the angular distribution data. (orig.)
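
    For the two-detector comparison, a modern equivalent of the two-sample Smirnov test is readily available. A toy example (Python/scipy; synthetic isotropic angles with sample sizes of the order of the reported event counts, not the actual SN 1987A data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        angles_k2 = np.degrees(np.arccos(rng.uniform(-1, 1, 11)))   # ~isotropic, n=11
        angles_imb = np.degrees(np.arccos(rng.uniform(-1, 1, 8)))   # ~isotropic, n=8
        stat, p = stats.ks_2samp(angles_k2, angles_imb)
        print("KS statistic %.2f, p-value %.2f" % (stat, p))        # large p: consistent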

  5. A statistical study of sporadic sodium layer observed by Sodium lidar at Hefei (31.8° N, 117.3° E)

    X.-K. Dou

    2009-06-01

    Sodium lidar observations of sporadic sodium layers (SSLs) during the past 3 years at a mid-latitude location (Hefei, China, 31.8° N, 117.3° E) are reported in this paper. From 64 SSL events detected in about 900 h of observation, an SSL occurrence rate of 1 event every 14 h at our location was obtained. This result, combined with previous studies, reveals that SSL occurrence can be relatively frequent at some mid-latitude locations. Statistical analysis of the main parameters of the 64 SSL events was performed. By examining the corresponding data from an ionosonde, a considerable correlation was found, with a Pearson coefficient of 0.66, between the seasonal variations of SSLs and those of sporadic E (Es) during nighttime, in line with the research by Nagasawa and Abo (1995). From a comparison between observations from the University of Science and Technology of China (USTC) lidar and from the Wuhan Institute of Physics and Mathematics (WIPM) lidar (Wuhan, China, 31° N, 114° E), the minimum horizontal extent of some events was estimated to be over 500 km.

  6. Statistical study on the variations of OH and O2 rotational temperatures observed by SATI at King Sejong Station (62.22S, 58.78W), Antarctica

    Kim, J.; Kim, J. H.; Jee, G.; Lee, C.; Kim, Y.

    2017-12-01

    The Spectral Airglow Temperature Imager (SATI) installed at King Sejong Station (62.22S, 58.78W), Antarctica, has continuously measured the airglow emissions from the OH (6-2) Meinel and O2 (0-1) atmospheric bands since 2002, in order to investigate the dynamics of the polar MLT region. The measurements allow us to derive the rotational temperature at the peak emission heights, about 87 km and 94 km for the OH and O2 airglows, respectively. In this study, we briefly introduce an improved analysis technique that modifies the original analysis code; the major change compared to the original program is an improved function for finding the exact center position in the observed image. In addition, we present the results of a statistical investigation of the periodic variations of the temperatures of the two layers during the period 2002 through 2011, and compare our results with temperatures measured by satellite.

  7. Arecibo Radar Observation of Near-Earth Asteroids: Expanded Sample Size, Determination of Radar Albedos, and Measurements of Polarization Ratios

    Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.

    2017-10-01

    The Near-Earth Asteroid (NEA) population ranges in size from a few meters to more than 10 kilometers. NEAs have a wide variety of taxonomic classes, surface features, and shapes, including spheroids, binary objects, contact binaries, and elongated as well as irregular bodies. Using the Arecibo Observatory planetary radar system, we have measured apparent rotation rate, radar reflectivity, apparent diameter, and radar albedo for over 350 NEAs. The radar albedo is defined as the radar cross-section divided by the geometric cross-section. If a shape model is available, the actual cross-section is known at the time of the observation; otherwise we derive a geometric cross-section from a measured diameter. When radar imaging was available, the diameter was measured from the apparent range depth. When it was not, we used continuous wave (CW) bandwidth radar measurements in conjunction with the period of the object. The CW bandwidth provides the apparent rotation rate which, given an independent rotation measurement such as from lightcurves, constrains the size of the object. We assumed an equatorial view unless we knew the pole orientation, which gives a lower limit on the diameter. The CW spectrum also provides the polarization ratio, which is the ratio of the SC and OC cross-sections. We confirm the trend found by Benner et al. (2008) that taxonomic types E and V have very high polarization ratios. We have obtained a larger sample and can analyze additional trends with spin, size, rotation rate, taxonomic class, polarization ratio, and radar albedo to interpret the origin of the NEAs and their dynamical processes. The distribution of radar albedo and polarization ratio at the smallest diameters (≤50 m) differs from the distribution of larger objects (>50 m), although the sample size is limited. Additionally, we find more moderate radar albedos for the smallest NEAs when compared to those with diameters 50-150 m. We will present additional trends we
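
    The bandwidth-based size constraint mentioned above follows from the rigid-rotator relation B = 4πD cos δ / (λP), so an assumed equatorial view (cos δ = 1) yields a lower limit on D. A hedged sketch (Python; the 12.6 cm wavelength is Arecibo's S-band, while the bandwidth and period values are invented):

        import numpy as np

        def diameter_lower_limit(bandwidth_hz, period_s, wavelength_m=0.126):
            """D >= B*lambda*P / (4*pi), assuming an equatorial view (cos delta = 1)."""
            return bandwidth_hz * wavelength_m * period_s / (4.0 * np.pi)

        # e.g. a 2 Hz CW bandwidth and a 5-minute rotation period:
        print("D >= %.0f m" % diameter_lower_limit(2.0, 300.0))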

  8. Next generation sensing platforms for extended deployments in large-scale, multidisciplinary, adaptive sampling and observational networks

    Cross, J. N.; Meinig, C.; Mordy, C. W.; Lawrence-Slavas, N.; Cokelet, E. D.; Jenkins, R.; Tabisola, H. M.; Stabeno, P. J.

    2016-12-01

    New autonomous sensors have dramatically increased the resolution and accuracy of oceanographic data collection, enabling rapid sampling over extremely fine scales. Innovative new autonomous platforms such as floats, gliders, drones, and crawling moorings leverage the full potential of these new sensors by extending spatiotemporal reach across varied environments. During 2015 and 2016, the Innovative Technology for Arctic Exploration Program at the Pacific Marine Environmental Laboratory (PMEL) tested several new types of fully autonomous platforms with increased speed, durability, and power and payload capacity, designed to deliver cutting-edge ecosystem assessment sensors to remote or inaccessible environments. The Expendable Ice-Tracking (EXIT) float developed by NOAA PMEL is moored near bottom during the ice-free season and released on an autonomous timer beneath the ice during the following winter. The float collects a rapid profile during ascent and continues to collect critical, poorly accessible under-ice data until melt, when the data are transmitted via satellite. The autonomous Oculus sub-surface glider developed by the University of Washington and PMEL has a large power and payload capacity and an enhanced buoyancy engine. This 'coastal truck' is designed for the rapid water-column ascent required by optical imaging systems. The Saildrone is a solar- and wind-powered unmanned surface vessel (USV) developed by Saildrone, Inc. in partnership with PMEL. This large-payload (200 lbs), fast (1-7 kts), durable (46 kts winds) platform was equipped with 15 sensors designed for ecosystem assessment during 2016, including passive and active acoustic systems specially redesigned for autonomous vehicle deployments. The sensors deployed on these platforms achieved rigorous accuracy and precision standards. These innovative platforms provide new sampling capabilities and cost efficiencies in high-resolution sensor deployment.

  9. TOWARD CHARACTERIZATION OF THE TYPE IIP SUPERNOVA PROGENITOR POPULATION: A STATISTICAL SAMPLE OF LIGHT CURVES FROM Pan-STARRS1

    Sanders, N. E.; Soderberg, A. M.; Chornock, R.; Berger, E.; Challis, P.; Drout, M.; Kirshner, R. P.; Lunnan, R.; Marion, G. H.; Margutti, R.; McKinnon, R.; Milisavljevic, D. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Gezari, S. [Department of Astronomy, University of Maryland, College Park, MD 20742-2421 (United States); Betancourt, M. [Department of Statistics, University of Warwick, Coventry (United Kingdom); Foley, R. J. [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States); Narayan, G. [National Optical Astronomy Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); Rest, A. [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Kankare, E.; Mattila, S. [Finnish Centre for Astronomy with ESO (FINCA), University of Turku, Väisäläntie 20, 21500 Piikkiö (Finland); Smartt, S. J., E-mail: nsanders@cfa.harvard.edu [Astrophysics Research Centre, School of Mathematics and Physics, Queens University, BT7 1NN, Belfast (United Kingdom); and others

    2015-02-01

    In recent years, wide-field sky surveys providing deep multiband imaging have presented a new path for indirectly characterizing the progenitor populations of core-collapse supernovae (SNe): systematic light-curve studies. We assemble a set of 76 grizy-band Type IIP SN light curves from Pan-STARRS1, obtained over a constant survey program of 4 yr and classified using both spectroscopy and machine-learning-based photometric techniques. We develop and apply a new Bayesian model for the full multiband evolution of each light curve in the sample. We find no evidence of a subpopulation of fast-declining explosions (historically referred to as ''Type IIL'' SNe). However, we identify a highly significant relation between the plateau phase decay rate and peak luminosity among our SNe IIP. These results argue in favor of a single parameter, likely determined by initial stellar mass, predominantly controlling the explosions of red supergiants. This relation could also be applied for SN cosmology, offering a standardizable candle good to an intrinsic scatter of ≲ 0.2 mag. We compare each light curve to physical models from hydrodynamic simulations to estimate progenitor initial masses and other properties of the Pan-STARRS1 Type IIP SN sample. We show that correction of systematic discrepancies between modeled and observed SN IIP light-curve properties and an expanded grid of progenitor properties are needed to enable robust progenitor inferences from multiband light-curve samples of this kind. This work will serve as a pathfinder for photometric studies of core-collapse SNe to be conducted through future wide-field transient searches.

  10. A statistical study of diurnal, seasonal and solar cycle variations of F-region and topside auroral upflows observed by EISCAT between 1984 and 1996

    C. Foster

    A statistical analysis of F-region and topside auroral ion upflow events is presented. The study is based on observations from EISCAT Common Programmes (CP-1 and CP-2) made between 1984 and 1996, and Common Programme 7 (CP-7) observations taken between 1990 and 1995. The occurrence frequency of ion upflow events (IUEs) is examined over the altitude range 200 to 500 km, using field-aligned observations from CP-1 and CP-2. The study is extended in altitude with vertical measurements from CP-7. Ion upflow events were identified by consideration of both velocity and flux, with threshold values of 100 m s⁻¹ and 10¹³ m⁻² s⁻¹, respectively. The frequency of occurrence of IUEs is seen to increase with increasing altitude. Further analysis of the field-aligned observations reveals that the number and nature of ion upflow events vary diurnally and with season and solar activity. In particular, the diurnal distribution of upflows is strongly dependent on solar cycle. Furthermore, events identified by the velocity selection criterion dominated at solar minimum, whilst events identified by the upward field-aligned flux criterion dominated at solar maximum. The study also provides a quantitative estimate of the proportion of upflows that are associated with enhanced plasma temperature: between 50 and 60% of upflows are simultaneous with enhanced ion temperature, and approximately 80% of events are associated with either increased F-region ion or electron temperatures.

    Key words. Ionosphere (auroral ionosphere; particle acceleration)
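
    The event selection rule stated in this abstract (velocity above 100 m s⁻¹ or field-aligned flux above 10¹³ m⁻² s⁻¹) can be expressed directly. An illustrative sketch (Python/numpy; the time series are synthetic, and the flux is approximated here as electron density times field-aligned velocity):

        import numpy as np

        def flag_upflows(v_par, n_e, v_thresh=100.0, flux_thresh=1e13):
            """Flag samples exceeding either the velocity or the flux threshold."""
            flux = n_e * v_par                       # field-aligned ion flux, m^-2 s^-1
            return (v_par > v_thresh) | (flux > flux_thresh)

        rng = np.random.default_rng(7)
        v = rng.normal(0, 60, 1000)                  # m/s, field-aligned velocity
        v[400:420] += 400                            # an injected upflow burst
        ne = np.full(1000, 1e11)                     # m^-3, electron density
        print("flagged samples:", int(flag_upflows(v, ne).sum()))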

  11. First study of correlation between oleic acid content and SAD gene polymorphism in olive oil samples through statistical and Bayesian modeling analyses.

    Ben Ayed, Rayda; Ennouri, Karim; Ercişli, Sezai; Ben Hlima, Hajer; Hanana, Mohsen; Smaoui, Slim; Rebai, Ahmed; Moreau, Fabienne

    2018-04-10

    Virgin olive oil is appreciated for its particular aroma and taste and is recognized worldwide for its nutritional value and health benefits. Olive oil contains a vast range of healthy compounds, such as monounsaturated free fatty acids, especially oleic acid. The SAD.1 polymorphism, localized in the stearoyl-acyl carrier protein desaturase gene (SAD), was genotyped and shown to be associated with the oleic acid composition of olive oil samples. However, the effect of polymorphisms in fatty-acid-related genes on the distribution of monounsaturated and saturated fatty acids in Tunisian olive oil varieties is not understood. Seventeen Tunisian olive-tree varieties were selected for fatty acid content analysis by gas chromatography. The association of SAD.1 genotypes with fatty acid composition was studied by statistical and Bayesian modeling analyses. Interestingly, fatty acid content analysis showed that some Tunisian virgin olive oil varieties could be classified as functional foods and nutraceuticals due to their particular richness in oleic acid. In fact, the TT-SAD.1 genotype was found to be associated with a higher proportion of monounsaturated fatty acids (MUFA), mainly oleic acid (C18:1) (r = -0.79). A SAD.1 association with the oleic acid composition of olive oil was identified among the studied varieties. This correlation fluctuated between the studied varieties, which might elucidate the variability in lipidic composition among them, reflecting genetic diversity through differences in gene expression and biochemical pathways. The SAD locus would represent an excellent marker for discriminating among virgin olive oils based on lipidic composition.

  12. Understanding of sample distributions for a course on statistics for engineers

    Lidia Retamal P

    2007-04-01

    In this work we describe a contextual didactic approach, using the software @risk, for teaching sample distributions in a statistics course for engineers. Using the theory of semiotic functions, developed at the Universidad de Granada in Spain, we characterize the meaning elements of the main properties of sample distributions. Then, using algebraic and simulation problems, we determine and evaluate the different errors and difficulties that emerge when the students simulate processes in engineering. After considering the meaning elements obtained from the students' answers, we propose simulation as a first intuitive approach towards the construction of the meaning of sample distributions for small and large samples, using graphic language with computer support, and subsequently analyze with the students the algebraic form, depending on the nature of the random variables.

  13. UMTRA water sampling technical (peer) review: Responses to observations, comments, and recommendations submitted by Don Messinger (Roy F. Weston, Inc.)

    1993-08-01

    An independent technical review (peer review) was conducted during the period of September 15-17, 1992. The review was conducted by C. Warren Ankerberg (Geraghty and Miller, Inc., Tampa, Florida) and Don Messinger (Roy F. Weston, Inc., West Chester, Pennsylvania). The review was held at Jacobs Engineering in Albuquerque, New Mexico, and at the Shiprock, New Mexico, site. The peer review included a review of written documentation [water sampling standard operating procedures (SOPs)], an inspection of technical reports and other deliverables, a review of staff qualifications and training, and a field visit to evaluate the compliance of field procedures with the SOPs. Upon completion of the peer review, each reviewer independently prepared a report of findings from the review, listing findings and recommended actions. This document responds to the observations, comments, and recommendations submitted by Don Messinger following his review. The format of this document is to present the findings and recommendations verbatim from Mr. Messinger's report, followed by responses from the UMTRA Project staff. Included in the responses from the UMTRA Project staff are recommended changes in SOPs and strategies for implementing the changes.

  14. THE ORIGIN OF THE INFRARED EMISSION IN RADIO GALAXIES. II. ANALYSIS OF MID- TO FAR-INFRARED SPITZER OBSERVATIONS OF THE 2JY SAMPLE

    Dicken, D.; Tadhunter, C.; Axon, D.; Morganti, R.; Inskip, K. J.; Holt, J.; Groves, B.; Delgado, R. Gonzalez

    2009-01-01

    We present an analysis of deep mid- to far-infrared (MFIR) Spitzer photometric observations of the southern 2Jy sample of powerful radio sources (0.05 < z < 0.7), conducting a statistical investigation of the links between radio jet, active galactic nucleus (AGN), starburst activity and MFIR properties. This is part of an ongoing extensive study of powerful radio galaxies that benefits from both complete optical emission line information and a uniquely high detection rate in the far-infrared (far-IR). We find tight correlations between the MFIR and [O III]λ5007 emission luminosities, which are significantly better than those between MFIR and extended radio luminosities, or between radio and [O III] luminosities. Since [O III] is a known indicator of intrinsic AGN power, these correlations confirm AGN illumination of the circumnuclear dust as the primary heating mechanism for the dust producing thermal MFIR emission at both 24 and 70 μm. We demonstrate that AGN heating is energetically feasible, and identify the narrow-line region clouds as the most likely location of the cool, far-IR emitting dust. Starbursts make a major contribution to the heating of the cool dust in only 15%-28% of our targets. We also investigate the orientation dependence of the continuum properties, finding that the broad- and narrow-line objects in our sample with strong emission lines have similar distributions of MFIR luminosities and colors. Therefore our results are entirely consistent with the orientation-based unified schemes for powerful radio galaxies. However, the weak line radio galaxies form a separate class of objects with intrinsically low-luminosity AGNs in which both the optical emission lines and the MFIR continuum are weak.

  15. Analysis of statistical misconception in terms of statistical reasoning

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone in the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information, and to draw conclusions from it. Developing this skill can be done through various levels of education. However, the skill remains low, because many people, students included, assume that statistics is just the ability to count and use formulas. Students still have a negative attitude toward coursework related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of a mathematics education department who had taken a descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. Taking 65 as the minimum value for standard achievement of course competence, the students' mean values fall below the standard. The results of the misconception study emphasize which subtopics should be considered. Based on the assessment results, students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining the concept to be used in solving a problem. In statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  16. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.
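
    A minimal random-walk Metropolis sketch of the MCMC machinery discussed above, targeting a generic unnormalized log-posterior (Python/numpy; the toy 2D Gaussian target stands in for the cloud/moisture model). It illustrates why such a sampler can jump into regions, e.g. cloudy states, that gradient-based increments from a clear background cannot reach:

        import numpy as np

        def metropolis(log_post, x0, n_steps=5000, step=0.5, seed=0):
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, float)
            lp = log_post(x)
            chain = np.empty((n_steps, x.size))
            for i in range(n_steps):
                prop = x + step * rng.normal(size=x.size)     # random-walk proposal
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:       # accept/reject
                    x, lp = prop, lp_prop
                chain[i] = x
            return chain

        # Toy target: a correlated 2D Gaussian 'posterior'.
        cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
        log_post = lambda x: -0.5 * x @ cov_inv @ x
        chain = metropolis(log_post, [3.0, -3.0])
        print("posterior mean estimate:", chain[1000:].mean(axis=0).round(2))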

  17. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 1: Method

    Norris, Peter M.; Da Silva, Arlindo M.

    2016-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.

  18. Validation of Refractivity Profiles Retrieved from FORMOSAT-3/COSMIC Radio Occultation Soundings: Preliminary Results of Statistical Comparisons Utilizing Balloon-Borne Observations

    Hiroo Hayashi

    2009-01-01

    The GPS radio occultation (RO) soundings by the FORMOSAT-3/COSMIC (Taiwan's Formosa Satellite Mission #3/Constellation Observing System for Meteorology, Ionosphere and Climate) satellites launched in mid-April 2006 are compared with high-resolution balloon-borne (radiosonde and ozonesonde) observations. This paper presents preliminary results of validation of the COSMIC RO measurements in terms of refractivity through the troposphere and lower stratosphere. With the use of COSMIC RO soundings within 2 hours and 300 km of sonde profiles, statistical comparisons between the collocated refractivity profiles are performed for some tropical regions (Malaysia and Western Pacific islands), where moisture-rich air is expected in the lower troposphere, and for both northern and southern polar areas with a very dry troposphere. The results of the comparisons show good agreement between COSMIC RO and sonde refractivity profiles throughout the troposphere (1 - 1.5% difference at most), with a positive bias generally becoming larger at progressively higher altitudes in the lower stratosphere (1 - 2% difference around 25 km), and a very small standard deviation (about 0.5% or less) for a few kilometers below the tropopause level. A large standard deviation of fractional differences in the lowermost troposphere, which reaches as much as 3.5 - 5% at 3 km, is seen in the tropics, while a much smaller standard deviation (1 - 2% at most) is evident throughout the polar troposphere.
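
    The comparison statistic used above, fractional refractivity differences aggregated into a mean bias and standard deviation per level, can be sketched as follows (Python/numpy; the profiles and error model are synthetic placeholders, not COSMIC or sonde data):

        import numpy as np

        def frac_diff_stats(n_ro, n_sonde):
            """n_ro, n_sonde: (n_pairs, n_levels) collocated refractivity profiles."""
            d = 100.0 * (n_ro - n_sonde) / n_sonde   # fractional difference, percent
            return d.mean(axis=0), d.std(axis=0)

        rng = np.random.default_rng(6)
        levels = np.linspace(0, 30, 61)                        # km
        sonde = 300.0 * np.exp(-levels / 7.0)                  # toy refractivity profile
        ro = sonde * (1 + rng.normal(0.002, 0.01, (40, 61)))   # 40 collocations
        bias, sd = frac_diff_stats(ro, sonde[None, :])
        print("bias @ 25 km: %.1f%%, sd: %.1f%%" % (bias[50], sd[50]))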

  19. Statistical analysis of dust signals observed by ROSINA/COPS onboard of the Rosetta spacecraft at comet 67P/Churyumov-Gerasimenko

    Tzou, Chia-Yu; altwegg, kathrin; Bieler, Andre; Calmonte, Ursina; Gasc, Sébastien; Le Roy, Léna; Rubin, Martin

    2016-10-01

    ROSINA is the in situ Rosetta Orbiter Spectrometer for Ion and Neutral Analysis on board Rosetta, one of the cornerstone missions of the European Space Agency (ESA), which orbited and landed on the Jupiter-family comet 67P/Churyumov-Gerasimenko (67P). ROSINA consists of two mass spectrometers and a pressure sensor. The Reflectron Time of Flight Spectrometer (RTOF) and the Double Focusing Mass Spectrometer (DFMS) complement each other in mass and time resolution. The Comet Pressure Sensor (COPS) provides density measurements of the neutral molecules in the cometary coma of 67P. COPS has two gauges: a nude gauge that measures the total neutral density and a ram gauge that measures the dynamic pressure from the comet. Combining the two, COPS is also capable of providing gas-dynamic information such as the gas velocity and gas temperature of the coma. After Rosetta started orbiting 67P in August 2014, COPS observed diurnal and seasonal variations of the neutral gas density in the coma. Surprisingly, in addition to these major density variation patterns, COPS occasionally observed small spikes in the density that are associated with dust. These dust signals can be interpreted as the result of cometary dust releasing volatiles while being heated up near COPS. A statistical analysis of the dust signals detected by COPS will be presented.

  20. CMEs in the Heliosphere: I. A Statistical Analysis of the Observational Properties of CMEs Detected in the Heliosphere from 2007 to 2017 by STEREO/HI-1

    Harrison, R. A.; Davies, J. A.; Barnes, D.; Byrne, J. P.; Perry, C. H.; Bothmer, V.; Eastwood, J. P.; Gallagher, P. T.; Kilpua, E. K. J.; Möstl, C.; Rodriguez, L.; Rouillard, A. P.; Odstrčil, D.

    2018-05-01

    We present a statistical analysis of coronal mass ejections (CMEs) imaged by the Heliospheric Imager (HI) instruments on board NASA's twin-spacecraft STEREO mission between April 2007 and August 2017 for STEREO-A and between April 2007 and September 2014 for STEREO-B. The analysis exploits a catalogue that was generated within the FP7 HELCATS project. Here, we focus on the observational characteristics of CMEs imaged in the heliosphere by the inner (HI-1) cameras, while following papers will present analyses of CME propagation through the entire HI fields of view. More specifically, in this paper we present distributions of the basic observational parameters - namely occurrence frequency, central position angle (PA) and PA span - derived from nearly 2000 detections of CMEs in the heliosphere by HI-1 on STEREO-A or STEREO-B from the minimum between Solar Cycles 23 and 24 to the maximum of Cycle 24; STEREO-A analysis includes a further 158 CME detections from the descending phase of Cycle 24, by which time communication with STEREO-B had been lost. We compare heliospheric CME characteristics with properties of CMEs observed at coronal altitudes, and with sunspot number. As expected, heliospheric CME rates correlate with sunspot number, and are not inconsistent with coronal rates once instrumental factors/differences in cataloguing philosophy are considered. As well as being more abundant, heliospheric CMEs, like their coronal counterparts, tend to be wider during solar maximum. Our results confirm previous coronagraph analyses suggesting that CME launch sites do not simply migrate to higher latitudes with increasing solar activity. At solar minimum, CMEs tend to be launched from equatorial latitudes, while at maximum, CMEs appear to be launched over a much wider latitude range; this has implications for understanding the CME/solar source association. Our analysis provides some supporting evidence for the systematic dragging of CMEs to lower latitude as they propagate
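
    One of the statistical checks mentioned above, the correlation of heliospheric CME rate with sunspot number, can be sketched as follows; synthetic monthly series stand in for the HELCATS HI-1 catalogue and the sunspot record.

```python
# Sketch: Pearson correlation between a monthly CME detection rate and
# sunspot number, on synthetic data mimicking an ~11-year cycle.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
months = 120
ssn = 80 + 70 * np.sin(2 * np.pi * np.arange(months) / 132)  # sunspot proxy
cme_rate = 0.15 * ssn + rng.normal(0, 3, months)             # rate tracks SSN

r, p = pearsonr(ssn, cme_rate)
print(f"Pearson r = {r:.2f}, p = {p:.1e}")
```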

  1. Statistical thermodynamics

    Lim, Gyeong Hui

    2008-03-01

    This book consists of 15 chapters: basic concepts and the meaning of statistical thermodynamics; Maxwell-Boltzmann statistics; ensembles; thermodynamic functions and fluctuations; statistical dynamics of independent particle systems; ideal molecular systems; chemical equilibrium and chemical reaction rates in ideal gas mixtures; classical statistical thermodynamics; the ideal lattice model; lattice statistics and non-ideal lattice models; imperfect-gas theory applied to liquids; the theory of solutions; the statistical thermodynamics of interfaces; the statistical thermodynamics of high-molecule systems; and quantum statistics.

  2. Geology of the Alarcón Rise Based on 1-m Resolution Bathymetry and ROV Observations and Sampling

    Clague, D. A.; Caress, D. W.; Lundsten, L.; Martin, J. F.; Paduan, J. B.; Portner, R. A.; Bowles, J. A.; Castillo, P. R.; Dreyer, B. M.; Guardado-France, R.; Nieves-Cardoso, C.; Rivera-Huerta, H.; Santa Rosa-del Rio, M.; Spelz-Madero, R.

    2012-12-01

    Alarcón Rise is a ~50 km-long segment of the northernmost East Pacific Rise, bounded on the north and south by the Pescadero and Tamayo Fracture Zones. In April 2012, the MBARI AUV D. Allan B. completed a 1.5-3.1-km wide bathymetric map along the neovolcanic zone between the two fracture zones during 10 surveys. A single AUV survey was also completed on Alarcón Seamount, a near-ridge seamount with 4 offset calderas. Bathymetric data have 1 m lateral and 0.2 m vertical resolution. The maps guided 8 dives of the ROV Doc Ricketts on the ridge and 1 on the seamount. The morphology of the rise changes dramatically along strike and includes an inflated zone, centered ~14 km from the southern end, paved by a young sheet flow erupted from an 8-km-long en echelon fissure system. A young flat-topped volcano and an older shield volcano occur near the center of the ridge segment. Areas nearer the fracture zones are mainly pillow mounds and ridges, some strongly cut by faults and fissures, but others have few structural disruptions. More than 150 of the 194 lava samples recovered from the neovolcanic zone are aphyric to plagioclase-phyric to ultraphyric N-MORB with glass MgO ranging up to 8.5%. The basal centimeter from 87 short cores contains common limu o Pele and adequate foraminifers to provide minimum radiocarbon ages for the underlying lava flows. A rugged lava dome of rhyolite (based on glass compositions) is surrounded by large pillow flows of dacite, centered ~8 km from the north end of the Rise. Pillow flows are steeply uptilted for 2-3 km north and south of the dome, possibly reflecting intrusion of viscous rhyolitic dikes along strike. Near the southern end of this deformed zone, an andesite flow crops out in a fault scarp. Mapping data also reveal the presence of about 110 apparent hydrothermal chimney structures as tall as 18 m, scattered along roughly the central half of the Rise. Subsequent ROV dives observed 70 of these structures and found active venting at 22 of them.

  3. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.
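
    A minimal sketch of the bias decomposition described above: subsample a "truth" field once by the orbit/swath (temporal component) and again by successful retrievals only (instrumental component). The field, masks and thresholds are illustrative assumptions.

```python
# Sketch: separating a total sampling bias into temporal and instrumental
# components. `truth` stands in for a MERRA-like field.
import numpy as np

rng = np.random.default_rng(3)
truth = 280 + rng.normal(0, 5, 10000)         # "all times/places" temperatures
overpass = rng.random(truth.size) < 0.3       # scenes seen by the orbit/swath
# retrievals fail preferentially in cold (cloudy) scenes -> instrumental bias
retrieved = overpass & (truth > 274)

full_mean = truth.mean()
temporal_bias = truth[overpass].mean() - full_mean    # orbit/swath only
total_bias = truth[retrieved].mean() - full_mean      # orbit + failed retrievals
instrumental_bias = total_bias - temporal_bias
print(f"temporal {temporal_bias:+.2f} K, instrumental {instrumental_bias:+.2f} K")
```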

  4. Two-Sample Statistics for Testing the Equality of Survival Functions Against Improper Semi-parametric Accelerated Failure Time Alternatives: An Application to the Analysis of a Breast Cancer Clinical Trial

    BROËT, PHILIPPE; TSODIKOV, ALEXANDER; DE RYCKE, YANN; MOREAU, THIERRY

    2010-01-01

    This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests. PMID:15293627

  5. Two-sample statistics for testing the equality of survival functions against improper semi-parametric accelerated failure time alternatives: an application to the analysis of a breast cancer clinical trial.

    Broët, Philippe; Tsodikov, Alexander; De Rycke, Yann; Moreau, Thierry

    2004-06-01

    This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests.
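
    The score tests above arise from a time-dependent Cox model and are not reproduced here; as a hedged stand-in, the classical two-sample log-rank test that such score statistics generalize can be run with the lifelines package on synthetic survival data.

```python
# Sketch: classical two-sample log-rank test on synthetic survival data,
# a baseline for the short-/long-term score tests described above.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(4)
t_a = rng.exponential(10.0, 100)     # survival times, arm A
t_b = rng.exponential(14.0, 100)     # arm B: longer survival
e_a = rng.random(100) < 0.8          # ~20% censoring
e_b = rng.random(100) < 0.8

res = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
print(res.test_statistic, res.p_value)
```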

  6. Micro-organism distribution sampling for bioassays

    Nelson, B. A.

    1975-01-01

    The purpose of the sampling distribution is to characterize sample-to-sample variation so that statistical tests may be applied, to estimate error due to sampling (confidence limits), and to evaluate observed differences between samples. The distribution could be used for bioassays taken in hospitals, breweries, food-processing plants, and pharmaceutical plants.
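
    As a sketch of the "error due to sampling (confidence limits)" idea, an exact (Garwood) Poisson confidence interval for a single colony count can be computed from chi-square quantiles; the count below is hypothetical.

```python
# Sketch: exact two-sided Poisson confidence interval for a colony count.
from scipy.stats import chi2

def poisson_ci(k, conf=0.95):
    """Exact (Garwood) CI for a Poisson mean given an observed count k."""
    a = 1.0 - conf
    lo = 0.0 if k == 0 else chi2.ppf(a / 2, 2 * k) / 2
    hi = chi2.ppf(1 - a / 2, 2 * (k + 1)) / 2
    return lo, hi

print(poisson_ci(12))  # e.g. 12 colonies on a plate -> (6.20, 20.96)
```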

  7. Statistical intervals a guide for practitioners

    Hahn, Gerald J

    2011-01-01

    Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction, with numerous examples and simple math, on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on the probability of meeting a specified threshold value, and prediction intervals to include an observation in a future sample. Also has an appendix containing computer subroutines for nonparametric stati
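
    One of the interval types covered by the book, a normal-theory prediction interval for a single future observation, can be sketched as follows on made-up sample data.

```python
# Sketch: 95% prediction interval for one future observation,
# assuming approximately normal data.
import numpy as np
from scipy.stats import t

x = np.array([9.8, 10.2, 10.0, 9.9, 10.4, 10.1, 9.7, 10.3])
n, xbar, s = x.size, x.mean(), x.std(ddof=1)
tcrit = t.ppf(0.975, n - 1)                # two-sided 95%
half = tcrit * s * np.sqrt(1 + 1 / n)      # prediction, not confidence, width
print(f"95% PI for a future observation: {xbar - half:.2f} to {xbar + half:.2f}")
```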

  8. Activity of invasive slug Limax maximus in relation to climate conditions based on citizen's observations and novel regularization based statistical approaches.

    Morii, Yuta; Ohkubo, Yusaku; Watanabe, Sanae

    2018-05-13

    Citizen science is a powerful tool that can be used to resolve the problems of introduced species. An amateur naturalist and author of this paper, S. Watanabe, recorded the total number of Limax maximus (Limacidae, Pulmonata) individuals along a fixed census route almost every day for two years on Hokkaido Island, Japan. L. maximus is an invasive slug considered a pest of horticultural and agricultural crops. We investigated how weather conditions correlate with the intensity of slug activity using recently developed statistical analyses, applied here for the first time in ecology: Bayesian regularization regression with comparisons among Laplace, Horseshoe and Horseshoe+ priors. The slug counts were compared with meteorological data from 5:00 in the morning on the day of observation (OT- and OD-models) and from the day before observation (DBOD-models). The OT- and OD-models were better supported than the DBOD-models based on WAIC scores, and the meteorological predictors selected in the OT-, OD- and DBOD-models differed. In the OT-models, the probability of slug appearance increased on mornings with higher than the 20-year-average humidity (%) and lower than average wind velocity (m/s) and precipitation (mm). The OD-models showed a pattern similar to the OT-models in the probability of slug appearance, but also suggested other meteorological predictors of slug activity, for example a positive effect of solar radiation (MJ). Five meteorological predictors were selected as effective factors for the counts in the DBOD-models: mean and highest temperature (°C), wind velocity (m/s), precipitation amount (mm) and atmospheric pressure (hPa). The DBOD-models will therefore be valuable for predicting slug activity in the future, much like a weather forecast.
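
    The paper's fully Bayesian regularized regressions are not reproduced here; as a lightweight stand-in, the lasso below is the MAP estimate under a Laplace prior, applied to synthetic weather predictors whose names are illustrative assumptions.

```python
# Sketch: sparse regression of a (log-scale) slug activity index on
# standardized weather predictors; lasso = Laplace-prior MAP estimate.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 300
X = rng.normal(size=(n, 5))  # humidity, wind, precip, temperature, pressure
beta_true = np.array([1.2, -0.8, -0.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(0, 1.0, n)

model = Lasso(alpha=0.1).fit(StandardScaler().fit_transform(X), y)
print(dict(zip(["humidity", "wind", "precip", "temp", "pressure"],
               np.round(model.coef_, 2))))
```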

  9. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 2: Sensitivity tests and results

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    Part 1 of this series presented a Monte Carlo Bayesian method for constraining a complex statistical model of global circulation model (GCM) sub-gridcolumn moisture variability using high-resolution Moderate Resolution Imaging Spectroradiometer (MODIS) cloud data, thereby permitting parameter estimation and cloud data assimilation for large-scale models. This article performs some basic testing of this new approach, verifying that it does indeed reduce mean and standard deviation biases significantly with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud-top pressure and that it also improves the simulated rotational–Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the Ozone Monitoring Instrument (OMI). Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows non-gradient-based jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast, where the background state has a clear swath. This article also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in passive-radiometer-retrieved cloud observables on cloud vertical structure, beyond cloud-top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification from Riishojgaard provides some help in this respect, by

  10. Monte Carlo Bayesian Inference on a Statistical Model of Sub-gridcolumn Moisture Variability Using High-resolution Cloud Observations . Part II; Sensitivity Tests and Results

    da Silva, Arlindo M.; Norris, Peter M.

    2013-01-01

    Part I presented a Monte Carlo Bayesian method for constraining a complex statistical model of GCM sub-gridcolumn moisture variability using high-resolution MODIS cloud data, thereby permitting large-scale model parameter estimation and cloud data assimilation. This part performs some basic testing of this new approach, verifying that it does indeed significantly reduce mean and standard deviation biases with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud top pressure, and that it also improves the simulated rotational-Ramman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the OMI instrument. Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows finite jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. This paper also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in the cloud observables on cloud vertical structure, beyond cloud top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard (1998) provides some help in this respect, by better honoring inversion structures in the background state.
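
    A toy illustration of the point about finite jumps: random-walk Metropolis on a two-mode target can move from a "clear" mode into a separated "cloudy" mode, which infinitesimal equilibrium perturbations cannot. This is purely schematic and not the paper's assimilation model.

```python
# Sketch: random-walk Metropolis crossing between two well-separated modes.
import numpy as np

rng = np.random.default_rng(6)

def log_target(x):
    # mixture: a narrow "clear" mode at 0 and a "cloudy" mode at 5
    return np.logaddexp(-0.5 * (x / 0.3) ** 2, -0.5 * ((x - 5) / 0.5) ** 2)

x, chain = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(0, 2.0)  # finite jump proposal
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop
    chain.append(x)

chain = np.array(chain)
print("fraction of samples in 'cloudy' mode:", np.mean(chain > 2.5))
```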

  11. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 2: Sensitivity Tests and Results

    Norris, Peter M.; da Silva, Arlindo M.

    2016-01-01

    Part 1 of this series presented a Monte Carlo Bayesian method for constraining a complex statistical model of global circulation model (GCM) sub-gridcolumn moisture variability using high-resolution Moderate Resolution Imaging Spectroradiometer (MODIS) cloud data, thereby permitting parameter estimation and cloud data assimilation for large-scale models. This article performs some basic testing of this new approach, verifying that it does indeed reduce mean and standard deviation biases significantly with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud-top pressure and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the Ozone Monitoring Instrument (OMI). Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows non-gradient-based jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast, where the background state has a clear swath. This article also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in passive-radiometer-retrieved cloud observables on cloud vertical structure, beyond cloud-top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification from Riishojgaard provides some help in this respect, by

  12. Ground-Based Global Navigation Satellite System (GNSS) Compact Observation Data (1-second sampling, sub-hourly files) from NASA CDDIS

    National Aeronautics and Space Administration — This dataset consists of ground-based Global Navigation Satellite System (GNSS) Observation Data (1-second sampling, sub-hourly files) from the NASA Crustal Dynamics...

  13. Statistical Optics

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  14. Explorations in statistics: the log transformation.

    Curran-Everett, Douglas

    2018-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
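
    A minimal sketch of the checks described above: estimating the Box-Cox lambda (a value near 0 supports the log transform) and bootstrapping the mean of the log-transformed observations; the data are synthetic.

```python
# Sketch: log transform of skewed data, Box-Cox check, bootstrap of the mean.
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(7)
y = rng.lognormal(mean=1.0, sigma=0.8, size=60)  # skewed observations

y_log = np.log(y)                                # candidate transformation
_, lam = boxcox(y)                               # Box-Cox MLE of lambda
print(f"Box-Cox lambda = {lam:.2f} (near 0 supports the log transform)")

# bootstrap distribution of the mean of the transformed data
boot = np.array([rng.choice(y_log, y_log.size, replace=True).mean()
                 for _ in range(5000)])
print(f"bootstrap SE of mean(log y): {boot.std(ddof=1):.3f}")
```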

  15. Home sampling for sexually transmitted infections and HIV in men who have sex with men: a prospective observational study.

    Martin Fisher

    Full Text Available To determine uptake of home sampling kit (HSK for STI/HIV compared to clinic-based testing, whether the availability of HSK would increase STI testing rates amongst HIV infected MSM, and those attending a community-based HIV testing clinic compared to historical control. Prospective observational study in three facilities providing STI/HIV testing services in Brighton, UK was conducted. Adult MSM attending/contacting a GUM clinic requesting an STI screen (group 1, HIV infected MSM attending routine outpatient clinic (group 2, and MSM attending a community-based rapid HIV testing service (group 3 were eligible. Participants were required to have no symptomatology consistent with STI and known to be immune to hepatitis A and B (group 1. Eligible men were offered a HSK to obtain self-collected specimens as an alternative to routine testing. HSK uptake compared to conventional clinic-based STI/HIV testing in group 1, increase in STI testing rates due to availability of HSK compared to historical controls in group 2 and 3, and HSK return rates in all settings were calculated. Among the 128 eligible men in group 1, HSK acceptance was higher (62.5% (95% CI: 53.5-70.9 compared to GUM clinic-based testing (37.5% (95% CI: 29.1-46.5, (p = 0.0004. Two thirds of eligible MSM offered an HSK in all three groups accepted it, but HSK return rates varied (highest in group 1, 77.5%, lowest in group 3, 16%. HSK for HIV testing was acceptable to 81% of men in group 1. Compared to historical controls, availability of HSK increased the proportion of MSM testing for STIs in group 2 but not in group 3. HSK for STI/HIV offers an alternative to conventional clinic-based testing for MSM seeking STI screening. It significantly increases STI testing uptake in HIV infected MSM. HSK could be considered as an adjunct to clinic-based services to further improve STI/HIV testing in MSM.

  16. The SENSE-Isomorphism Theoretical Image Voxel Estimation (SENSE-ITIVE) Model for Reconstruction and Observing Statistical Properties of Reconstruction Operators

    Bruce, Iain P.; Karaman, M. Muge; Rowe, Daniel B.

    2012-01-01

    The acquisition of sub-sampled data from an array of receiver coils has become a common means of reducing data acquisition time in MRI. Of the various techniques used in parallel MRI, SENSitivity Encoding (SENSE) is one of the most common, making use of a complex-valued weighted least squares estimation to unfold the aliased images. It was recently shown in Bruce et al. [Magn. Reson. Imag. 29(2011):1267–1287] that when the SENSE model is represented in terms of a real-valued isomorphism, it assumes a skew-symmetric covariance between receiver coils, as well as an identity covariance structure between voxels. In this manuscript, we show that not only is the skew-symmetric coil covariance unlike that of real data, but the estimated covariance structure between voxels over a time series of experimental data is not an identity matrix. As such, a new model, entitled SENSE-ITIVE, is described with both revised coil and voxel covariance structures. Both the SENSE and SENSE-ITIVE models are represented in terms of real-valued isomorphisms, allowing for a statistical analysis of reconstructed voxel means, variances, and correlations resulting from the use of different coil and voxel covariance structures in the reconstruction processes. It is shown through both theoretical and experimental illustrations that the misspecification of the coil and voxel covariance structures in the SENSE model results in a lower standard deviation in each voxel of the reconstructed images, and thus an artificial increase in SNR, compared to the standard deviation and SNR of the SENSE-ITIVE model, where both the coil and voxel covariances are appropriately accounted for. It is also shown that there are differences in the correlations induced by the reconstruction operations of both models, and consequently there are differences in the correlations estimated throughout the course of reconstructed time series. These differences in correlations could result in meaningful
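
    The core SENSE unfolding step referred to above is a complex-valued weighted least squares estimate, rho_hat = (S^H Psi^-1 S)^-1 S^H Psi^-1 y. A minimal sketch for one aliased voxel pair, with assumed coil sensitivities and an identity coil covariance:

```python
# Sketch: SENSE unfolding for one aliased voxel pair (acceleration 2).
import numpy as np

rng = np.random.default_rng(8)
S = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))  # 4 coils, 2 voxels
Psi = np.eye(4, dtype=complex)                 # coil noise covariance (assumed)
rho_true = np.array([1.0 + 0.5j, 0.3 - 0.2j])  # true voxel values
y = S @ rho_true + 0.01 * (rng.normal(size=4) + 1j * rng.normal(size=4))

# rho_hat = (S^H Psi^-1 S)^-1 S^H Psi^-1 y
W = np.linalg.inv(Psi)
rho_hat = np.linalg.solve(S.conj().T @ W @ S, S.conj().T @ W @ y)
print(np.round(rho_hat, 3))  # close to rho_true
```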

  17. Design of a Polynomial Fuzzy Observer Controller With Sampled-Output Measurements for Nonlinear Systems Considering Unmeasurable Premise Variables

    Liu, Chuang; Lam, H. K.

    2015-01-01

    In this paper, we propose a polynomial fuzzy observer controller for nonlinear systems, where the design is achieved through the stability analysis of polynomial-fuzzy-model-based (PFMB) observer-control system. The polynomial fuzzy observer estimates the system states using estimated premise variables. The estimated states are then employed by the polynomial fuzzy controller for the feedback control of nonlinear systems represented by the polynomial fuzzy model. The system stability of the P...

  18. Balanced sampling

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling

  19. Cancer Statistics

    Cancer has a major impact on society in ... success of efforts to control and manage cancer. Statistics at a Glance: The Burden of Cancer in ...

  20. AUTOMATED SOLAR FLARE STATISTICS IN SOFT X-RAYS OVER 37 YEARS OF GOES OBSERVATIONS: THE INVARIANCE OF SELF-ORGANIZED CRITICALITY DURING THREE SOLAR CYCLES

    Aschwanden, Markus J.; Freeland, Samuel L.

    2012-01-01

    We analyzed the soft X-ray light curves from the Geostationary Operational Environmental Satellites over the last 37 years (1975-2011) and measured with an automated flare detection algorithm over 300,000 solar flare events (amounting to ≈5 times higher sensitivity than the NOAA flare catalog). We find a power-law slope of α_F = 1.98 ± 0.11 for the (background-subtracted) soft X-ray peak fluxes that is invariant through three solar cycles and agrees with the theoretical prediction α_F = 2.0 of the fractal-diffusive self-organized criticality (FD-SOC) model. For the soft X-ray flare rise times, we find a power-law slope of α_T = 2.02 ± 0.04 during solar cycle minima years, which is also consistent with the prediction α_T = 2.0 of the FD-SOC model. During solar cycle maxima years, the power-law slope is steeper, in the range α_T ≈ 2.0-5.0, which can be modeled by a solar-cycle-dependent flare pile-up bias effect. These results corroborate the FD-SOC model, which predicts a power-law slope of α_E = 1.5 for flare energies and thus rules out significant nanoflare heating. While the FD-SOC model predicts the probability distribution functions of spatio-temporal scaling laws of nonlinear energy dissipation processes, additional physical models are needed to derive the scaling laws between the geometric SOC parameters and the observed emissivity in different wavelength regimes, as we derive here for soft X-ray emission. The FD-SOC model also yields statistical probabilities for solar flare forecasting.
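
    A sketch of how such a power-law slope can be estimated by maximum likelihood above a threshold x_min; the flux sample is synthetic and the estimator is the standard Hill/MLE form, not necessarily the authors' fitting procedure.

```python
# Sketch: MLE of a power-law slope alpha for values above x_min.
import numpy as np

rng = np.random.default_rng(9)
alpha_true, xmin, n = 1.98, 1.0, 100000
x = xmin * (1 - rng.random(n)) ** (-1 / (alpha_true - 1))  # inverse-CDF draw

alpha_hat = 1 + n / np.log(x / xmin).sum()                 # Hill/MLE estimator
err = (alpha_hat - 1) / np.sqrt(n)                         # asymptotic std error
print(f"alpha = {alpha_hat:.3f} +/- {err:.3f}")
```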

  1. Using NASA Earth Observing Satellites and Statistical Model Analysis to Monitor Vegetation and Habitat Rehabilitation in Southwest Virginia's Reclaimed Mine Lands

    Tate, Z.; Dusenge, D.; Elliot, T. S.; Hafashimana, P.; Medley, S.; Porter, R. P.; Rajappan, R.; Rodriguez, P.; Spangler, J.; Swaminathan, R. S.; VanGundy, R. D.

    2014-12-01

    The majority of the population in southwest Virginia depends economically on coal mining. In 2011, coal mining generated $2,000,000 in tax revenue to Wise County alone. However, surface mining completely removes land cover and leaves the land exposed to erosion. The destruction of the forest cover directly impacts local species, as some are displaced and others perish in the mining process. Even though surface mining has a negative impact on the environment, land reclamation efforts are in place to either restore mined areas to their natural vegetated state or to transform these areas for economic purposes. This project aimed to monitor the progress of land reclamation and the effect on the return of local species. By incorporating NASA Earth observations, such as Landsat 8 Operational Land Imager (OLI) and Landsat 5 Thematic Mapper (TM), the re-vegetation process in reclamation sites was estimated through a time-series analysis using the Normalized Difference Vegetation Index (NDVI). A continuous source of cloud-free images was accomplished by utilizing the Spatial and Temporal Adaptive Reflectance Fusion Model (STAR-FM). This model developed synthetic Landsat imagery by integrating the high-frequency temporal information from Terra/Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) and high-resolution spatial information from Landsat sensors. In addition, the Maximum Entropy Modeling (MaxENT) eco-niche model was used to estimate the adaptation of animal species to the newly formed habitats. By combining factors such as land type, precipitation from the Tropical Rainfall Measuring Mission (TRMM), and slope from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), the MaxENT model produced a statistical analysis of the probability of species habitat. Altogether, the project compiled ecological information which can be used to identify suitable habitats for local species in reclaimed mined areas.
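
    The NDVI computation underlying the time-series analysis above is simple to sketch; the reflectance stacks below are synthetic stand-ins for the Landsat/STAR-FM imagery.

```python
# Sketch: NDVI time series and a linear re-greening trend over a site.
import numpy as np

rng = np.random.default_rng(10)
red = rng.uniform(0.05, 0.25, size=(24, 100, 100))  # 24 monthly scenes
nir = rng.uniform(0.20, 0.60, size=(24, 100, 100))

ndvi = (nir - red) / (nir + red)                    # in [-1, 1]
site_mean = ndvi.mean(axis=(1, 2))                  # one value per scene
trend = np.polyfit(np.arange(site_mean.size), site_mean, 1)[0]
print(f"NDVI trend: {trend:+.4f} per scene (positive = re-greening)")
```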

  2. SDO/AIA Observations of Quasi-periodic Fast (~1000 km/s) Propagating (QFP) Waves as Evidence of Fast-mode Magnetosonic Waves in the Low Corona: Statistics and Implications

    Liu, W.; Ofman, L.; Title, A. M.; Zhao, J.; Aschwanden, M. J.

    2011-12-01

    Recent EUV imaging observations from SDO/AIA led to the discovery of quasi-periodic fast (~2000 km/s) propagating (QFP) waves in active regions (Liu et al. 2011). They were interpreted as fast-mode magnetosonic waves and reproduced in 3D MHD simulations (Ofman et al. 2011). Since then, we have extended our study to a sample of more than a dozen such waves observed during the SDO mission (2010/04-now). We will present the statistical properties of these waves, including the following: (1) Their projected speeds measured in the plane of the sky are about 400-2200 km/s, which, as the lower limits of their true speeds in 3D space, fall in the expected range of coronal Alfven or fast-mode speeds. (2) They usually originate near flare kernels, often in the wake of a coronal mass ejection, and propagate in narrow funnels of coronal loops that serve as waveguides. (3) These waves are launched repeatedly with quasi-periodicities in the 30-200 second range, often lasting for more than one hour; some frequencies coincide with those of the quasi-periodic pulsations (QPPs) in the accompanying flare, suggestive of a common excitation mechanism. We obtained the k-omega diagrams and dispersion relations of these waves using Fourier analysis. We estimate their energy fluxes and discuss their contribution to coronal heating as well as their diagnostic potential for coronal seismology.

  3. NEAR-ULTRAVIOLET PROPERTIES OF A LARGE SAMPLE OF TYPE Ia SUPERNOVAE AS OBSERVED WITH THE Swift UVOT

    Milne, Peter A.; Brown, Peter J.; Roming, Peter W. A.; Vanden Berk, Daniel; Holland, Stephen T.; Immler, Stefan; Bufano, Filomena; Gehrels, Neil; Filippenko, Alexei V.; Ganeshalingam, Mohan; Li Weidong; Stritzinger, Maximilian; Phillips, Mark M.; Hicken, Malcolm; Kirshner, Robert P.; Challis, Peter J.; Mazzali, Paolo; Schmidt, Brian P.

    2010-01-01

    We present ultraviolet (UV) and optical photometry of 26 Type Ia supernovae (SNe Ia) observed from 2005 March to 2008 March with the NASA Swift Ultraviolet and Optical Telescope (UVOT). The dataset consists of 2133 individual observations, making it by far the most complete study of the UV emission from SNe Ia to date. Grouping the SNe into three subclasses as derived from optical observations, we investigate the evolution of the colors of these SNe, finding a high degree of homogeneity within the normal subclass, but dramatic differences between that group and the subluminous and SN 2002cx-like groups. For the normal events, the redder UV filters on UVOT (u, uvw1) show more homogeneity than do the bluer UV filters (uvm2, uvw2). Searching for purely UV characteristics to determine existing optically based groupings, we find the peak width to be a poor discriminant, but we do see a variation in the time delay between peak emission and the late, flat phase of the light curves. The UV light curves peak a few days before the B band for most subclasses (as was previously reported by Jha et al.), although the SN 2002cx-like objects peak at a very early epoch in the UV. That group also features the bluest emission observed among SNe Ia. As the observational campaign is ongoing, we discuss the critical times to observe, as determined by this study, in order to maximize the scientific output of future observations.

  4. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude whether the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events.

  5. VLBI observations of the nuclei of a mixed sample of bright galaxies and quasars at 327 MHz

    Ananthakrishnan, S.; Kulkarni, V.K.

    1989-01-01

    The first VLBI observations using the Ooty telescope are presented. An array consisting of telescopes at Ooty (India), Crimea (USSR), Torun (Poland), Westerbork (Netherlands) and Jodrell Bank (United Kingdom) was operated in 1983 December at a frequency of 327 MHz. Nearby galaxies, compact quasars and SS433 were observed in this pilot experiment. Most of the galaxies were found to be well resolved. The structure of SS433 (visible only on the shortest baseline) is consistent with that obtained in previous high-frequency VLBI work. The visibilities of the compact quasars indicate that large-scale scattering may be taking place in the interplanetary medium. (author)

  6. Clinical use of fungal PCR from deep tissue samples in the diagnosis of invasive fungal diseases: a retrospective observational study.

    Ala-Houhala, M; Koukila-Kähkölä, P; Antikainen, J; Valve, J; Kirveskari, J; Anttila, V-J

    2018-03-01

    To assess the clinical use of panfungal PCR for diagnosis of invasive fungal diseases (IFDs). We focused on deep tissue samples. We first described the design of panfungal PCR, which is in clinical use at Helsinki University Hospital. Next we retrospectively evaluated the results of 307 fungal PCR tests performed from 2013 to 2015. Samples were taken from normally sterile tissues and fluids. The patient population was unselected. We classified the likelihood of IFD according to the criteria of the European Organization for Research and Treatment of Cancer/Invasive Fungal Infections Cooperative Group and the National Institute of Allergy and Infectious Diseases Mycoses Study Group (EORTC/MSG), comparing the fungal PCR results to the likelihood of IFD along with culture and microscopy results. There were 48 positive (16%) and 259 negative (84%) PCR results. The sensitivity and specificity of PCR for diagnosing IFDs were 60.5% and 91.7%, respectively, while the negative predictive value and positive predictive value were 93.4% and 54.2%, respectively. The concordance between PCR and culture results was 86%, and between PCR and microscopy 87%. Of the 48 patients with positive PCR results, 23 had a proven or probable IFD. Fungal PCR can be useful for diagnosing IFDs in deep tissue samples. It is beneficial to combine fungal PCR with culture and microscopy.
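
    The reported summary measures follow from a 2x2 table of PCR result against EORTC/MSG-classified IFD status; a sketch with hypothetical counts (not the study's data):

```python
# Sketch: diagnostic summary measures from a 2x2 confusion table.
def diagnostic_summary(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# hypothetical counts for illustration only
print(diagnostic_summary(tp=26, fp=22, fn=17, tn=242))
```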

  7. FAINT RADIO-SOURCES WITH PEAKED SPECTRA. I. VLA OBSERVATIONS OF A NEW SAMPLE WITH INTERMEDIATE FLUX-DENSITIES

    SNELLEN, IAG; ZHANG, M; SCHILIZZI, RT; ROTTGERING, HJA; DEBRUYN, AG; MILEY, GK

    We present 2 and 20 cm observations with the VLA of 25 candidate peaked spectrum radio sources. These data combined with those from earlier surveys have allowed us to construct radio spectra spanning a range of frequency from 0.3 to 15 GHz. Ten of the 25 sources are found to be variable with no

  8. Laser biostimulation effects on invertebral disks: histological evidence on intra-observer samples. Retrospective double-blind study.

    Tramontana, Alfonso; Sorge, Roberto; Page, Juan Carlos Miangolarra

    2016-12-30

    Background and aims: Intervertebral disk degeneration is a pathological process determined by a decrease of mucopolysaccharides in the nucleus pulposus, with consequent dehydration and degeneration of the elastic fibers in the annulus fibrosus of the disk. The laser is a therapeutic tool that has biostimulation effects on the treated tissues, with an increase of oxidative phosphorylation and production of ATP and an acceleration of mucopolysaccharide synthesis, with consequent rehydration, biostimulation and production of new elastic fibers. The goal of this project is to study whether laser stimulation may treat degenerated intervertebral disks. Materials and methods: 60 subjects with the same anthropometric parameters were selected and divided into two randomized groups. 30 subjects underwent laser stimulation, whereas 30 underwent placebo. All 60 subjects underwent discectomy surgery, and the intraoperative findings were examined in a lab, studying the positivity of the PAS reaction and the presence of potential newly formed elastic fibers. Results: A higher number of mucopolysaccharides and young newly formed elastic fibers was shown in the group treated with laser irradiation, a statistically significant difference compared to the placebo group, supporting laser biostimulation of degenerated intervertebral disks.

  9. Detachment of Tertiary Dendrite Arms during Controlled Directional Solidification in Aluminum - 7 wt Percent Silicon Alloys: Observations from Ground-based and Microgravity Processed Samples

    Grugel, Richard N.; Erdman, Robert; Van Hoose, James R.; Tewari, Surendra; Poirier, David

    2012-01-01

    Electron Back Scattered Diffraction results from cross-sections of directionally solidified aluminum 7wt% silicon alloys unexpectedly revealed tertiary dendrite arms that were detached and mis-oriented from their parent arm. More surprisingly, the same phenomenon was observed in a sample similarly processed in the quiescent microgravity environment aboard the International Space Station (ISS) in support of the joint US-European MICAST investigation. The work presented here includes a brief introduction to MICAST and the directional solidification facilities, and their capabilities, available aboard the ISS. Results from the ground-based and microgravity processed samples are compared and possible mechanisms for the observed tertiary arm detachment are suggested.

  10. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    Guignard, P.A.; Chan, W. (Royal Melbourne Hospital, Parkville (Australia). Dept. of Nuclear Medicine)

    1984-09-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves, acquired at rest and during exercise with a nuclear stethoscope, were evaluated: three- and five-point time smoothing, Fourier filtering preserving one to four harmonics (H), truncated-curve Fourier filtering, and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned.

  11. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    Guignard, P.A.; Chan, W.

    1984-01-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves, acquired at rest and during exercise with a nuclear stethoscope, were evaluated: three- and five-point time smoothing, Fourier filtering preserving one to four harmonics (H), truncated-curve Fourier filtering, and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned. (author)
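
    A minimal sketch of the harmonic filtering evaluated above: keep the DC term plus the first H harmonics of the periodic time-activity curve and invert the FFT. The curve below is synthetic; 2H-3H was the recommended compromise.

```python
# Sketch: Fourier filtering of a periodic time-activity curve.
import numpy as np

def fourier_filter(curve, n_harmonics):
    """Keep DC + the first n_harmonics of a periodic curve."""
    spec = np.fft.rfft(curve)
    spec[n_harmonics + 1:] = 0.0
    return np.fft.irfft(spec, n=curve.size)

rng = np.random.default_rng(11)
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
curve = 100 - 30 * np.cos(t) + 8 * np.sin(2 * t) + rng.normal(0, 3, t.size)
smooth = fourier_filter(curve, n_harmonics=3)
print(curve.min().round(1), smooth.min().round(1))  # noisy vs filtered minimum
```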

  12. The statistical-inference approach to generalized thermodynamics

    Lavenda, B.H.; Scherer, C.

    1987-01-01

    Limit theorems, such as the central-limit theorem and the weak law of large numbers, are applicable to statistical thermodynamics for sufficiently large sample sizes of independent and identically distributed observations performed on extensive thermodynamic (chance) variables. The estimation of the intensive thermodynamic quantities is a problem in parametric statistical estimation. The normal approximation to the Gibbs distribution is justified by the analysis of large deviations. Statistical thermodynamics is generalized to include the statistical estimation of variance as well as mean values.

  13. A statistical rationale for establishing process quality control limits using fixed sample size, for critical current verification of SSC superconducting wire

    Pollock, D.A.; Brown, G.; Capone, D.W. II; Christopherson, D.; Seuntjens, J.M.; Woltz, J.

    1992-03-01

    The purpose of this paper is to demonstrate a statistical method for verifying superconducting wire process stability as represented by I_c. The paper does not propose changing the I_c testing frequency for wire during Phase 1 of the present Vendor Qualification Program. The actual statistical limits demonstrated for one supplier's data are not expected to be suitable for all suppliers. However, the method used to develop the limits, and the potential for process improvement through their use, may be applied equally. Implementing the demonstrated method implies that the current practice of testing all pieces of wire from each billet, for the purpose of detecting manufacturing process errors (i.e. missing a heat-treatment cycle for a part of the billet, etc.), can be replaced by other less costly process control measures. As used in this paper, process control limits for critical current are quantitative indicators of the source manufacturing process uniformity. The limits serve as alarms indicating the need for manufacturing process investigation.
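
    A sketch of fixed-sample-size control limits for billet-mean I_c in the spirit of the method above; the numbers are synthetic, and real limits would have to be derived from a supplier's own qualification data.

```python
# Sketch: 3-sigma control limits for billet-mean critical current.
import numpy as np

rng = np.random.default_rng(12)
billet_means = rng.normal(loc=2750, scale=35, size=30)  # I_c, arbitrary units

center = billet_means.mean()
sigma = billet_means.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma
print(f"center={center:.0f}, LCL={lcl:.0f}, UCL={ucl:.0f}")

new_billet = 2600.0
print("investigate process" if not lcl <= new_billet <= ucl else "in control")
```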

  14. The Effect of Unequal Samples, Heterogeneity of Covariance Matrices, and Number of Variables on Discriminant Analysis Classification Tables and Related Statistics.

    Spearing, Debra; Woehlke, Paula

    To assess the effect on discriminant analysis in terms of correct classification into two groups, the following parameters were systematically altered using Monte Carlo techniques: sample sizes; proportions of one group to the other; number of independent variables; and covariance matrices. The pairing of the off diagonals (or covariances) with…
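
    A sketch of this kind of Monte Carlo experiment using scikit-learn's linear discriminant analysis, with unequal group sizes and heterogeneous covariances; all settings are illustrative.

```python
# Sketch: Monte Carlo study of two-group LDA classification under
# unequal sample sizes and unequal covariance matrices.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(14)

def trial(n1=100, n2=20, scale2=3.0, p=4):
    X1 = rng.normal(0.0, 1.0, (n1, p))
    X2 = rng.normal(0.7, scale2, (n2, p))  # shifted mean, inflated covariance
    X = np.vstack([X1, X2])
    y = np.r_[np.zeros(n1), np.ones(n2)]
    return LinearDiscriminantAnalysis().fit(X, y).score(X, y)

acc = [trial() for _ in range(200)]
print(f"mean apparent accuracy: {np.mean(acc):.3f}")
```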

  15. Statistical analysis of aging trend of mechanical properties in ethylene propylene rubber-insulated safety-related cables sampled from containments (Denryoko Chuo Kenkyusho Hokoku, December 2013 issue)

    Fuse, Norikazu; Kanegami, Masaki; Misaka, Hideki; Homma, Hiroya; Okamoto, Tatsuki

    2013-01-01

    As for polymeric insulations used in nuclear power plant safety cables, it is known that the present prediction model sometimes estimates the service life conservatively. In order to sophisticate the model to reflect the aging actually observed in containments, discrepancies between the predictions and reality need to be clarified. In the present paper, a statistical analysis has been carried out on the various aging states of insulations removed from domestic containments. Aging in the operational environment is found to be slower than expected from accelerated aging test results. The temperature dependence of the estimated lifetime on an Arrhenius plot also suggests that the dominant elementary chemical reaction differs between the two aging conditions, resulting in apparent differences in activation energies and pre-exponential factors. The following two issues need to be clarified to sophisticate the model: the temperature-driven change in the predominant degradation chemistry, and the effect of the low-oxygen-concentration environment in boiling water reactor type containments. (author)
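
    The activation-energy comparison rests on an Arrhenius fit of log lifetime against inverse temperature; a minimal sketch with illustrative accelerated-aging data:

```python
# Sketch: activation energy from an Arrhenius fit, ln(t) = Ea/(R*T) + const.
import numpy as np

R = 8.314  # J/(mol K)
T = np.array([393.0, 408.0, 423.0, 438.0])         # aging temperatures, K
life = np.array([12000.0, 4200.0, 1600.0, 650.0])  # hours to end-point

slope, intercept = np.polyfit(1.0 / T, np.log(life), 1)
Ea = slope * R  # J/mol, since the slope of ln(t) vs 1/T is Ea/R
print(f"activation energy ~ {Ea / 1000:.0f} kJ/mol")
```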

  16. Understanding Computational Bayesian Statistics

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  17. Usage Statistics

    MedlinePlus usage statistics (https://medlineplus.gov/usestatistics.html): quarterly user statistics of page views and unique visitors, beginning with Oct-Dec 1998.

  18. Mathematical statistics

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  19. Statistical forecast of seasonal discharge in Central Asia using observational records: development of a generic linear modelling tool for operational water resource management

    H. Apel

    2018-04-01

    Full Text Available The semi-arid regions of Central Asia crucially depend on the water resources supplied by the mountainous areas of the Tien Shan and Pamir and Altai mountains. During the summer months the snow-melt- and glacier-melt-dominated river discharge originating in the mountains provides the main water resource available for agricultural production, but also for storage in reservoirs for energy generation during the winter months. Thus a reliable seasonal forecast of the water resources is crucial for sustainable management and planning of water resources. In fact, seasonal forecasts are mandatory tasks of all national hydro-meteorological services in the region. In order to support the operational seasonal forecast procedures of hydro-meteorological services, this study aims to develop a generic tool for deriving statistical forecast models of seasonal river discharge based solely on observational records. The generic model structure is kept as simple as possible in order to be driven by meteorological and hydrological data readily available at the hydro-meteorological services, and to be applicable for all catchments in the region. As snow melt dominates summer runoff, the main meteorological predictors for the forecast models are monthly values of winter precipitation and temperature, satellite-based snow cover data, and antecedent discharge. This basic predictor set was further extended by multi-monthly means of the individual predictors, as well as composites of the predictors. Forecast models are derived based on these predictors as linear combinations of up to four predictors. A user-selectable number of the best models is extracted automatically by the developed model fitting algorithm, which includes a test for robustness by a leave-one-out cross-validation. Based on the cross-validation the predictive uncertainty was quantified for every prediction model. Forecasts of the mean seasonal discharge of the period April to September are derived

  20. Statistical forecast of seasonal discharge in Central Asia using observational records: development of a generic linear modelling tool for operational water resource management

    Apel, Heiko; Abdykerimova, Zharkinay; Agalhanova, Marina; Baimaganbetov, Azamat; Gavrilenko, Nadejda; Gerlitz, Lars; Kalashnikova, Olga; Unger-Shayesteh, Katy; Vorogushyn, Sergiy; Gafurov, Abror

    2018-04-01

    The semi-arid regions of Central Asia crucially depend on the water resources supplied by the mountainous areas of the Tien Shan and Pamir and Altai mountains. During the summer months the snow-melt- and glacier-melt-dominated river discharge originating in the mountains provides the main water resource available for agricultural production, but also for storage in reservoirs for energy generation during the winter months. Thus a reliable seasonal forecast of the water resources is crucial for sustainable management and planning of water resources. In fact, seasonal forecasts are mandatory tasks of all national hydro-meteorological services in the region. In order to support the operational seasonal forecast procedures of hydro-meteorological services, this study aims to develop a generic tool for deriving statistical forecast models of seasonal river discharge based solely on observational records. The generic model structure is kept as simple as possible in order to be driven by meteorological and hydrological data readily available at the hydro-meteorological services, and to be applicable for all catchments in the region. As snow melt dominates summer runoff, the main meteorological predictors for the forecast models are monthly values of winter precipitation and temperature, satellite-based snow cover data, and antecedent discharge. This basic predictor set was further extended by multi-monthly means of the individual predictors, as well as composites of the predictors. Forecast models are derived based on these predictors as linear combinations of up to four predictors. A user-selectable number of the best models is extracted automatically by the developed model fitting algorithm, which includes a test for robustness by a leave-one-out cross-validation. Based on the cross-validation the predictive uncertainty was quantified for every prediction model. Forecasts of the mean seasonal discharge of the period April to September are derived every month from
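
    A minimal sketch of the tool's core idea as described above: exhaustive search over linear models with up to four predictors, ranked by leave-one-out cross-validation error. Predictors and discharge values are synthetic stand-ins.

```python
# Sketch: best-subset linear forecast models (1..4 predictors) ranked by
# leave-one-out cross-validated RMSE.
import itertools
import numpy as np

rng = np.random.default_rng(13)
n, p = 25, 8                                 # 25 years, 8 candidate predictors
X = rng.normal(size=(n, p))                  # precip, temp, snow cover, Q, ...
y = 1.5 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(0, 0.5, n)  # seasonal discharge

def loo_rmse(Xs, y):
    errs = []
    for i in range(len(y)):
        m = np.ones(len(y), bool)
        m[i] = False                         # leave one year out
        A = np.column_stack([np.ones(m.sum()), Xs[m]])
        coef, *_ = np.linalg.lstsq(A, y[m], rcond=None)
        pred = np.concatenate([[1.0], Xs[i]]) @ coef
        errs.append((pred - y[i]) ** 2)
    return np.sqrt(np.mean(errs))

results = []
for k in range(1, 5):                        # models with 1..4 predictors
    for combo in itertools.combinations(range(p), k):
        results.append((loo_rmse(X[:, combo], y), combo))
print(sorted(results)[:3])                   # best models by LOO RMSE
```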