International Nuclear Information System (INIS)
Clerc, F; Njiki-Menga, G-H; Witschger, O
2013-01-01
Most of the measurement strategies suggested at the international level to assess workplace exposure to nanomaterials rely on devices that measure airborne particle concentrations in real time (according to different metrics). Since none of the available aerosol instruments can distinguish a particle of interest from the background aerosol, the statistical analysis of time-resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature, ranging from simple qualitative analysis of graphs to the implementation of more complex statistical models. To date, there is still no consensus on a particular approach, and the search for an appropriate and robust method is ongoing. In this context, this exploratory study investigates a statistical method for analysing time-resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, particle number concentration data were used from a workplace study that investigated the potential for inhalation exposure during cleanout operations, by sandpapering, of a reactor producing nanocomposite thin films. In this workplace study, the background issue was addressed through near-field and far-field approaches, and several size-integrated and time-resolved devices were used. The analysis presented here focuses only on data obtained with two handheld condensation particle counters: one measuring at the source of the released particles, the other measuring in parallel in the far field. The Bayesian probabilistic approach allows a probabilistic modelling of the data series, and the observed task is modelled in the form of probability distributions. The probability distributions derived from time-resolved data obtained at the source can then be compared with those derived from the time-resolved data obtained in the far field, leading in a...
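As an illustrative sketch of the comparison the abstract describes, the toy model below treats log-concentrations from each counter as normal and, under a flat prior, draws posterior samples of the two means to estimate the probability that the near-field (source) mean exceeds the far-field (background) mean. The CPC readings, the lognormal assumption, and the function name are all invented for illustration; the study's actual Bayesian model is richer than this.

```python
import math
import random
import statistics

def prob_source_above_background(near, far, n_draws=20000, seed=1):
    """Crude Bayesian comparison of two time-resolved concentration series.

    Assumes log-concentrations are roughly normal; with a flat prior the
    posterior of each mean is approximately Normal(xbar, s / sqrt(n)).
    Returns the Monte Carlo estimate of P(mean_near > mean_far).
    """
    rng = random.Random(seed)
    ln_near = [math.log(x) for x in near]
    ln_far = [math.log(x) for x in far]
    m1 = statistics.mean(ln_near)
    s1 = statistics.stdev(ln_near) / math.sqrt(len(ln_near))
    m2 = statistics.mean(ln_far)
    s2 = statistics.stdev(ln_far) / math.sqrt(len(ln_far))
    hits = sum(rng.gauss(m1, s1) > rng.gauss(m2, s2) for _ in range(n_draws))
    return hits / n_draws

# Hypothetical CPC readings (particles/cm^3): at the source vs far field
near_field = [12000, 15000, 13500, 16000, 14200, 15800]
far_field = [3000, 3200, 2900, 3100, 3050, 3150]
print(prob_source_above_background(near_field, far_field))
```

A value near 1 indicates the observed task measurably raises concentrations above background; the full analysis would model the whole time series, not just the means.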
Dispersal of potato cyst nematodes measured using historical and spatial statistical analyses.
Banks, N C; Hodda, M; Singh, S K; Matveeva, E M
2012-06-01
Rates and modes of dispersal of potato cyst nematodes (PCNs) were investigated. Analysis of records from eight countries suggested that PCNs spread a mean distance of 5.3 km/year radially from the site of first detection, and spread 212 km over ≈40 years before detection. Data from four countries with more detailed histories of invasion were analyzed further, using distance from first detection, distance from previous detection, distance from nearest detection, straight line distance, and road distance. Linear distance from first detection was significantly related to the time since the first detection. Estimated rate of spread was 5.7 km/year, and did not differ statistically between countries. Time between the first detection and estimated introduction date varied between 0 and 20 years, and differed among countries. Road distances from nearest and first detection were statistically significantly related to time, and gave slightly higher estimates for rate of spread of 6.0 and 7.9 km/year, respectively. These results indicate that the original site of introduction of PCNs may act as a source for subsequent spread and that this may occur at a relatively constant rate over time regardless of whether this distance is measured by road or by a straight line. The implications of this constant radial rate of dispersal for biosecurity and pest management are discussed, along with the effects of control strategies.
Measurements and statistical analyses of indoor radon concentrations in Tokyo and surrounding areas
International Nuclear Information System (INIS)
Sugiura, Shiroharu; Suzuki, Takashi; Inokoshi, Yukio
1995-01-01
Since the UNSCEAR report published in 1982, radiation exposure of the respiratory tract due to radon and its progeny has been regarded as the single largest contributor to the natural radiation exposure of the general public. In Japan, radon gas concentrations in many types of buildings have been surveyed by national and private institutes. We also carried out measurements of radon gas concentrations in different types of residential buildings in Tokyo and its adjoining prefectures from October 1988 to September 1991, to evaluate the potential radiation risk to the people living there. One or two simplified passive radon monitors were set up in each of the 34 residential buildings located in the above-mentioned area for an exposure period of 3 months each. Comparing the average concentrations in buildings of different materials and structures, those in the steel-reinforced concrete buildings were always higher than those in the wooden and the prefabricated mortared buildings. Radon concentrations proved to be higher in autumn and winter, and lower in spring and summer. Radon concentrations in an underground room of a steel-reinforced concrete building showed the highest values throughout our investigation, and statistically significant seasonal variation was detected by the X-11 method developed by the U.S. Bureau of the Census. The values measured in a room on the first floor of the same building also showed seasonal variation, but the phase of variation was different. A further multivariate analysis suggested that building material and structure are the most important factors determining radon concentration levels, ahead of other factors such as the age of the building and the use of ventilators. (author)
International Nuclear Information System (INIS)
Kaufmann, R.K.; Kauppi, H.; Stock, J.H.
2006-01-01
Comparing statistical estimates for the long-run temperature effect of doubled CO2 with those generated by climate models raises the question: is the long-run temperature effect of doubled CO2 estimated from the instrumental temperature record using statistical techniques consistent with the transient climate response, the equilibrium climate sensitivity, or the effective climate sensitivity? Here, we attempt to answer the question of what statistical analyses of the observational record measure by using these same statistical techniques to estimate the temperature effect of a doubling in the atmospheric concentration of carbon dioxide from seventeen simulations run for the Coupled Model Intercomparison Project 2 (CMIP2). The results indicate that the temperature effect estimated by the statistical methodology is consistent with the transient climate response and that this consistency is relatively unaffected by sample size or the increase in radiative forcing in the sample.
Statistical Measures of Marksmanship
National Research Council Canada - National Science Library
Johnson, Richard
2001-01-01
.... This report describes objective statistical procedures to measure both rifle marksmanship accuracy, the proximity of an array of shots to the center of mass of a target, and marksmanship precision...
Statistical analyses of extreme food habits
International Nuclear Information System (INIS)
Breuninger, M.; Neuhaeuser-Berthold, M.
2000-01-01
This report summarizes the results of the project ''Statistical analyses of extreme food habits'', commissioned by the National Office for Radiation Protection as a contribution to the amendment of the ''General Administrative Regulation to paragraph 45 of the Decree on Radiation Protection: determination of the radiation exposure by emission of radioactive substances from facilities of nuclear technology''. Its aim is to determine whether the calculation of the radiation dose ingested via food intake by 95% of the population, as planned in a provisional draft, overestimates the true exposure. If such an overestimation exists, its magnitude should be determined. The existence of this overestimation was demonstrated, but its magnitude could only be estimated roughly. To identify its real extent, it is necessary to include the specific activities of the nuclides, which were not available for this investigation. In addition, the report shows how the consumption amounts of different food groups influence each other and which connections between these amounts should be taken into account in order to estimate the radiation exposure as precisely as possible. (orig.) [de
Fundamental data analyses for measurement control
International Nuclear Information System (INIS)
Campbell, K.; Barlich, G.L.; Fazal, B.; Strittmatter, R.B.
1987-02-01
A set of measurement control data analyses was selected for use by analysts responsible for maintaining measurement quality of nuclear materials accounting instrumentation. The analyses consist of control charts for bias and precision and statistical tests used as analytic supplements to the control charts. They provide the desired detection sensitivity and yet can be interpreted locally, quickly, and easily. The control charts provide for visual inspection of data and enable an alert reviewer to spot problems possibly before statistical tests detect them. The statistical tests are useful for automating the detection of departures from the controlled state or from the underlying assumptions (such as normality). 8 refs., 3 figs., 5 tabs
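A minimal sketch of the bias control chart described above, using 3-sigma Shewhart limits estimated from a baseline of standard measurements. The baseline values are invented; a production system would also run the supplementary statistical tests (e.g. for trends and normality) that the abstract mentions.

```python
import statistics

def control_limits(baseline):
    """3-sigma Shewhart control limits estimated from a baseline period."""
    m = statistics.mean(baseline)
    s = statistics.stdev(baseline)
    return m - 3 * s, m + 3 * s

def out_of_control(values, lo, hi):
    """Indices of points falling outside the control limits."""
    return [i for i, v in enumerate(values) if not lo <= v <= hi]

# Hypothetical instrument bias data (measured minus reference, grams)
baseline = [0.02, -0.01, 0.00, 0.03, -0.02, 0.01, -0.03, 0.02]
lo, hi = control_limits(baseline)
print(out_of_control([0.01, -0.02, 0.15, 0.00], lo, hi))  # → [2]
```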
Applied statistics a handbook of BMDP analyses
Snell, E J
1987-01-01
This handbook is a realization of a long-term goal of BMDP Statistical Software. As the software supporting statistical analysis has grown in breadth and depth to the point where it can serve many of the needs of accomplished statisticians, it can also serve as an essential support to those needing to expand their knowledge of statistical applications. Statisticians should not be handicapped by heavy computation or by the lack of needed options. When Applied Statistics: Principles and Examples by Cox and Snell appeared, we at BMDP were impressed with the scope of the applications discussed and felt that many statisticians eager to expand their capabilities in handling such problems could profit from having the solutions carried further, to get them started and guided to a more advanced level in problem solving. Who would be better to undertake that task than the authors of Applied Statistics? A year or two later discussions with David Cox and Joyce Snell at Imperial College indicated that a wedding of the proble...
International Nuclear Information System (INIS)
Gilbert, R.O.; Klover, W.J.
1988-09-01
Radiation detection surveys are used at the US Department of Energy's Hanford Reservation near Richland, Washington, to determine areas that need posting as radiation zones or to measure dose rates in the field. The relationship between measurements made by Sodium Iodide (NaI) detectors mounted on the mobile Road Monitor vehicle and those made by hand-held GM P-11 probes and Micro-R meters is of particular interest because the Road Monitor can survey land areas in much less time than hand-held detectors. Statistical regression methods are used here to develop simple equations to predict GM P-11 probe gross gamma count-per-minute (cpm) and Micro-R-Meter μR/h measurements on the basis of NaI gross gamma count-per-second (cps) measurements obtained using the Road Monitor. These equations were estimated using data collected near the 116-K-2 Trench in the 100-K area on the Hanford Reservation. Equations are also obtained for estimating upper and lower limits within which the GM P-11 or Micro-R-Meter measurement corresponding to a given NaI Road Monitor measurement at a new location is expected to fall with high probability. An equation and limits for predicting GM P-11 measurements on the basis of Micro-R-Meter measurements are also estimated. Also, we estimate an equation that may be useful for approximating the 90Sr measurement of a surface soil sample on the basis of a spectroscopy measurement for 137Cs on that sample. 3 refs., 16 figs., 44 tabs
Counting statistics in radioactivity measurements
International Nuclear Information System (INIS)
Martin, J.
1975-01-01
The application of statistical methods to radioactivity measurement problems is analyzed in several chapters devoted successively to: the statistical nature of radioactivity counts; the application to radioactive counting of two theoretical probability distributions, Poisson's distribution law and the Laplace-Gauss law; true counting laws; corrections related to the nature of the apparatus; statistical techniques in gamma spectrometry [fr
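The Poisson nature of radioactive counts mentioned above leads directly to the standard error propagation for a background-subtracted count rate: since var(N) = N for a Poisson count, the rate variances add as N/t². A sketch with invented counts:

```python
import math

def net_count_rate(gross_counts, t_gross, bkg_counts, t_bkg):
    """Net count rate and its 1-sigma uncertainty from Poisson statistics."""
    rate = gross_counts / t_gross - bkg_counts / t_bkg
    sigma = math.sqrt(gross_counts / t_gross**2 + bkg_counts / t_bkg**2)
    return rate, sigma

# 4000 gross counts in 100 s against 900 background counts in 100 s
rate, sigma = net_count_rate(4000, 100.0, 900, 100.0)
print(rate, sigma)  # 31.0 0.7
```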
Measurement and statistics for teachers
Van Blerkom, Malcolm
2008-01-01
Written in a student-friendly style, Measurement and Statistics for Teachers shows teachers how to use measurement and statistics wisely in their classes. Although there is some discussion of theory, emphasis is given to the practical, everyday uses of measurement and statistics. The second part of the text provides more complete coverage of basic descriptive statistics and their use in the classroom than in any text now available. Comprehensive and accessible, Measurement and Statistics for Teachers includes: short vignettes showing concepts in action; numerous classroom examples; highlighted vocabulary; boxes summarizing related concepts; end-of-chapter exercises and problems; six full chapters devoted to the essential topic of Classroom Tests; instruction on how to carry out informal assessments, performance assessments, and portfolio assessments, and how to use and interpret standardized tests; and a five-chapter section on Descriptive Statistics, giving instructors the option of more thoroughly teaching basic measur...
Lidar measurements of plume statistics
DEFF Research Database (Denmark)
Ejsing Jørgensen, Hans; Mikkelsen, T.
1993-01-01
of measured crosswind concentration profiles, the following statistics were obtained: 1) Mean profile, 2) Root mean square profile, 3) Fluctuation intensities, and 4) Intermittency factors. Furthermore, some experimentally determined probability density functions (pdf's) of the fluctuations are presented. All...... the measured statistics are referred to a fixed and a 'moving' frame of reference, the latter being defined as a frame of reference from which the (low frequency) plume meander is removed. Finally, the measured statistics are compared with statistics on concentration fluctuations obtained with a simple puff...
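Two of the statistics listed above, fluctuation intensity and intermittency factor, are simple functionals of a concentration time series. A sketch with an invented crosswind record (intermittency here taken as the fraction of samples with concentration above a threshold):

```python
import statistics

def plume_stats(conc, threshold=0.0):
    """Fluctuation intensity (sigma/mean) and intermittency factor
    (fraction of samples above threshold) for a concentration series."""
    m = statistics.mean(conc)
    intensity = statistics.pstdev(conc) / m
    gamma = sum(c > threshold for c in conc) / len(conc)
    return intensity, gamma

series = [0.0, 0.0, 1.2, 3.4, 2.8, 0.0, 4.1, 0.5]  # invented lidar samples
intensity, gamma = plume_stats(series)
print(round(intensity, 2), gamma)
```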
Statistical measures of galaxy clustering
International Nuclear Information System (INIS)
Porter, D.H.
1988-01-01
Consideration is given to the large-scale distribution of galaxies and ways in which this distribution may be statistically measured. Galaxy clustering is hierarchical in nature, so that the positions of clusters of galaxies are themselves spatially clustered. A simple identification of groups of galaxies would be an inadequate description of the true richness of galaxy clustering. Current observations of the large-scale structure of the universe and modern theories of cosmology may be studied with a statistical description of the spatial and velocity distributions of galaxies. 8 refs
Hydrometeorological and statistical analyses of heavy rainfall in Midwestern USA
Thorndahl, S.; Smith, J. A.; Krajewski, W. F.
2012-04-01
During the last two decades the mid-western states of the United States of America have been heavily afflicted by flood-producing rainfall. Several of these storms seem to have similar hydrometeorological properties in terms of pattern, track, evolution, life cycle, clustering, etc., which raises the question of whether it is possible to derive general characteristics of the space-time structures of these heavy storms. This is important in order to understand hydrometeorological features, e.g. how storms evolve and with what frequency we can expect extreme storms to occur. In the literature, most studies of extreme rainfall are based on point measurements (rain gauges). However, with high-resolution, high-quality radar observation periods now exceeding two decades, it is possible to carry out long-term spatio-temporal statistical analyses of extremes. This makes it possible to link return periods to distributed rainfall estimates and to study the precipitation structures which cause floods. However, statistical frequency analyses of rainfall based on radar observations introduce challenges in converting radar reflectivity observations to "true" rainfall that do not arise in traditional analyses of rain gauge data. It is, for example, difficult to distinguish reflectivity from high-intensity rain from reflectivity from other hydrometeors such as hail, especially using single-polarization radars, which are used in this study. Furthermore, reflectivity from the bright band (melting layer) should be discarded and anomalous propagation should be corrected in order to produce valid statistics of extreme radar rainfall. Other challenges include combining observations from several radars into one mosaic, bias correction against rain gauges, range correction, Z-R relationships, etc. The present study analyzes radar rainfall observations from 1996 to 2011 based on the American NEXRAD network of radars over an area covering parts of Iowa, Wisconsin, Illinois, and...
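One of the conversion steps mentioned above, turning radar reflectivity into rain rate through a Z-R relationship, is a one-line power-law inversion. The coefficients below (a = 300, b = 1.4, a pair often quoted for convective rainfall with NEXRAD data) are an assumption to be calibrated per storm type, not values from this study:

```python
def reflectivity_to_rainrate(dbz, a=300.0, b=1.4):
    """Invert the Z-R power law Z = a * R**b (Z in mm^6/m^3, R in mm/h)."""
    z = 10.0 ** (dbz / 10.0)  # dBZ -> linear reflectivity
    return (z / a) ** (1.0 / b)

print(round(reflectivity_to_rainrate(40.0), 1))  # mm/h at 40 dBZ
```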
Statistical sampling for holdup measurement
International Nuclear Information System (INIS)
Picard, R.R.; Pillay, K.K.S.
1986-01-01
Nuclear materials holdup is a serious problem in many operating facilities. Estimating amounts of holdup is important for materials accounting and, sometimes, for process safety. Clearly, measuring holdup in all pieces of equipment is not a viable option in terms of time, money, and radiation exposure to personnel. Furthermore, 100% measurement is not only impractical but unnecessary for developing estimated values. Principles of statistical sampling are valuable in the design of cost-effective holdup monitoring plans and in quantifying uncertainties in holdup estimates. The purpose of this paper is to describe those principles and to illustrate their use
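A minimal sketch of the sampling idea above: measure holdup on a simple random sample of equipment items and scale up, with a finite-population-corrected uncertainty. The glovebox counts and gram values are invented; real plans would typically stratify by equipment type.

```python
import math
import statistics

def holdup_estimate(sample_grams, population_size):
    """Scale per-item holdup measurements from a simple random sample to
    the full population, with a 1-sigma uncertainty including the finite
    population correction."""
    n = len(sample_grams)
    mean = statistics.mean(sample_grams)
    se = statistics.stdev(sample_grams) / math.sqrt(n)
    fpc = math.sqrt(1 - n / population_size)
    return population_size * mean, population_size * se * fpc

# Hypothetical holdup measurements (g U) on 8 gloveboxes out of 40
total, sigma = holdup_estimate([12.0, 8.5, 15.2, 9.8, 11.1, 14.0, 7.9, 10.5], 40)
print(round(total, 1), round(sigma, 1))
```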
Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures
Energy Technology Data Exchange (ETDEWEB)
Udey, Ruth Norma [Michigan State Univ., East Lansing, MI (United States)
2013-01-01
Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.
Social Media Analyses for Social Measurement.
Schober, Michael F; Pasek, Josh; Guggenheim, Lauren; Lampe, Cliff; Conrad, Frederick G
2016-01-01
Demonstrations that analyses of social media content can align with measurement from sample surveys have raised the question of whether survey research can be supplemented or even replaced with less costly and burdensome data mining of already-existing or "found" social media content. But just how trustworthy such measurement can be (say, to replace official statistics) is unknown. Survey researchers and data scientists approach key questions from starting assumptions and analytic traditions that differ on, for example, the need for representative samples drawn from frames that fully cover the population. New conversations between these scholarly communities are needed to understand the potential points of alignment and non-alignment. Across these approaches, there are major differences in (a) how participants (survey respondents and social media posters) understand the activity they are engaged in; (b) the nature of the data produced by survey responses and social media posts, and the inferences that are legitimate given the data; and (c) practical and ethical considerations surrounding the use of the data. Estimates are likely to align to differing degrees depending on the research topic and the populations under consideration, the particular features of the surveys and social media sites involved, and the analytic techniques for extracting opinions and experiences from social media. Traditional population coverage may not be required for social media content to effectively predict social phenomena to the extent that social media content distills or summarizes broader conversations that are also measured by surveys.
Statistical analyses of conserved features of genomic islands in bacteria.
Guo, F-B; Xia, Z-K; Wei, W; Zhao, H-L
2014-03-17
We performed statistical analyses of five conserved features of genomic islands of bacteria. Analyses were made based on 104 known genomic islands, which were identified by comparative methods. Four of these features, sequence size, abnormal G+C content, flanking tRNA gene, and embedded mobility gene, are frequently investigated. One relatively new feature, G+C homogeneity, was also investigated. Among the 104 known genomic islands, 88.5% were found to fall in the typical length of 10-200 kb and 80.8% had G+C deviations with absolute values larger than 2%. For the 88 genomic islands whose hosts have been sequenced and annotated, 52.3% of them were found to have flanking tRNA genes and 64.7% had embedded mobility genes. For the homogeneity feature, 85% had an h homogeneity index less than 0.1, indicating that their G+C content is relatively uniform. Taking all five features into account, 87.5% of the 88 genomic islands had three of them. Only one genomic island had only one conserved feature and none of the genomic islands had zero features. These statistical results should help to understand the general structure of known genomic islands. We found that larger genomic islands tend to have relatively small G+C deviations in absolute value. For example, the absolute G+C deviations of 9 genomic islands longer than 100,000 bp were all less than 5%. This is a novel but reasonable result given that larger genomic islands face greater restrictions on their G+C contents, in order to maintain the stable G+C content of the recipient genome.
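The G+C deviation feature above is straightforward to compute for a candidate island. The sequence and host value below are toy inputs for illustration; real analyses would use annotated genomes:

```python
def gc_content(seq):
    """G+C percentage of a DNA sequence."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)

def gc_deviation(island_seq, host_gc_percent):
    """Signed G+C deviation of a putative island from its host genome;
    most known islands show an absolute deviation above 2%."""
    return gc_content(island_seq) - host_gc_percent

island = "ATGCGC" * 100 + "AT" * 50  # toy 700 bp sequence
print(round(gc_deviation(island, 50.0), 1))  # percentage points
```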
Statistical reliability analyses of two wood plastic composite extrusion processes
International Nuclear Information System (INIS)
Crookston, Kevin A.; Mark Young, Timothy; Harper, David; Guess, Frank M.
2011-01-01
Estimates of the reliability of wood plastic composites (WPC) are explored for two industrial extrusion lines. The goal of the paper is to use parametric and non-parametric analyses to examine potential differences in the WPC metrics of reliability for the two extrusion lines that may be helpful for use by the practitioner. A parametric analysis of the extrusion lines reveals some similarities and disparities in the best models; however, a non-parametric analysis reveals unique and insightful differences between Kaplan-Meier survival curves for the modulus of elasticity (MOE) and modulus of rupture (MOR) of the WPC industrial data. The distinctive non-parametric comparisons indicate the source of the differences in strength between the 10.2% and 48.0% fractiles [3,183-3,517 MPa] for MOE and for MOR between the 2.0% and 95.1% fractiles [18.9-25.7 MPa]. Distribution fitting as related to selection of the proper statistical methods is discussed with relevance to estimating the reliability of WPC. The ability to detect statistical differences in the product reliability of WPC between extrusion processes may benefit WPC producers in improving product reliability and safety of this widely used house-decking product. The approach can be applied to many other safety and complex system lifetime comparisons.
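The Kaplan-Meier survival curves compared above for MOE and MOR can be computed from scratch in a few lines. The strength data below are invented (a handful of failure and censored observations), not the industrial dataset of the paper:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; events[i] is 1 for an observed
    failure at times[i], 0 for a right-censored observation.
    Returns (time, survival) pairs at each failure time."""
    pts = sorted(zip(times, events))
    at_risk = len(pts)
    survival = 1.0
    curve = []
    i = 0
    while i < len(pts):
        t = pts[i][0]
        deaths = sum(e for tt, e in pts if tt == t)
        n_at_t = sum(1 for tt, _ in pts if tt == t)
        if deaths:
            survival *= 1 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= n_at_t
        i += n_at_t
    return curve

# Hypothetical MOR data (MPa); 0 marks a specimen removed before failure
print(kaplan_meier([19.0, 20.5, 21.0, 22.3, 23.1], [1, 1, 0, 1, 1]))
```

Comparing two such curves (one per extrusion line) is the non-parametric contrast the paper draws.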
Methodology development for statistical evaluation of reactor safety analyses
International Nuclear Information System (INIS)
Mazumdar, M.; Marshall, J.A.; Chay, S.C.; Gay, R.
1976-07-01
In February 1975, Westinghouse Electric Corporation, under contract to the Electric Power Research Institute, started a one-year program to develop methodology for the statistical evaluation of nuclear-safety-related engineering analyses. The objectives of the program were to develop an understanding of the relative efficiencies of various computational methods which can be used to compute probability distributions of output variables due to input parameter uncertainties in analyses of design basis events for nuclear reactors, and to develop methods for obtaining reasonably accurate estimates of these probability distributions at an economically feasible level. A series of tasks was set up to accomplish these objectives. Two of the tasks were to investigate the relative efficiencies and accuracies of various Monte Carlo and analytical techniques for obtaining such estimates for a simple thermal-hydraulic problem whose output variable of interest is given in a closed-form relationship of the input variables, and to repeat the above study on a thermal-hydraulic problem in which the relationship between the predicted variable and the inputs is described by a short-running computer program. The purpose of the present report is to document the results of the investigations completed under these tasks, giving the rationale for the choices of techniques and problems, and to present interim conclusions
Statistical inference based on divergence measures
Pardo, Leandro
2005-01-01
The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, in spite of the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach.Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data with emphasis is on problems in contingency tables and loglinear models using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...
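The divergence statistics discussed above include the familiar chi-square and likelihood-ratio statistics as special cases of the Cressie-Read power-divergence family. A sketch with invented die-roll counts (lam = 1 recovers Pearson's chi-square, and the lam → 0 limit the likelihood-ratio G statistic):

```python
import math

def power_divergence(observed, expected, lam):
    """Cressie-Read power-divergence statistic; lam=1 gives Pearson's
    chi-square, the lam -> 0 limit the likelihood-ratio G statistic."""
    if abs(lam) < 1e-12:
        return 2 * sum(o * math.log(o / e)
                       for o, e in zip(observed, expected) if o > 0)
    return (2 / (lam * (lam + 1))) * sum(
        o * ((o / e) ** lam - 1) for o, e in zip(observed, expected))

observed = [16, 18, 16, 14, 12, 12]  # invented counts
expected = [16, 16, 16, 16, 16, 8]
print(power_divergence(observed, expected, 1.0))  # Pearson chi-square
```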
Statistical aspects in radioactivity measurements
International Nuclear Information System (INIS)
Hoetzl, H.
1979-10-01
This report contains a summary of basic concepts and formulae important for the treatment of errors and for calculating lower limits of detection in radioactivity measurements. Special attention has been paid to practical application and examples which are of interest for scientists working in this field. (orig./HP) [de
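One formula of the kind the report covers is Currie's classic detection-limit result for a gross count paired with an equal-length background count. The sketch below uses k = 1.645 (5% false-positive and false-negative rates); treat it as the textbook approximation, not the report's exact formulae:

```python
import math

def currie_limits(bkg_counts):
    """Currie decision threshold L_C and detection limit L_D (in counts)
    for a measurement paired with an equal-length background count.
    With sigma_0 = sqrt(2B): L_C = 1.645*sigma_0, L_D = 2.71 + 2*L_C."""
    sigma0 = math.sqrt(2 * bkg_counts)
    l_c = 1.645 * sigma0
    l_d = 2.71 + 2 * l_c
    return l_c, l_d

l_c, l_d = currie_limits(100.0)  # 100 background counts
print(round(l_c, 1), round(l_d, 1))
```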
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.
Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg
2009-11-01
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
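The correlation power computations described above can be approximated in a few lines with the Fisher z transform (one of the approximations G*Power itself offers for correlation tests). The sample size below is chosen to show a familiar landmark: n ≈ 84 gives roughly 80% power for detecting rho = 0.3 at alpha = .05, two-sided.

```python
import math
from statistics import NormalDist

def power_pearson_r(r_alt, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 against
    rho = r_alt, via the Fisher z transform: z = atanh(r) * sqrt(n - 3)."""
    nd = NormalDist()
    z_alt = math.atanh(r_alt) * math.sqrt(n - 3)
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return 1 - nd.cdf(z_crit - z_alt) + nd.cdf(-z_crit - z_alt)

print(round(power_pearson_r(0.3, 84), 2))
```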
Non-Statistical Methods of Analysing of Bankruptcy Risk
Directory of Open Access Journals (Sweden)
Pisula Tomasz
2015-06-01
The article focuses on assessing the effectiveness of a non-statistical approach to bankruptcy modelling in enterprises operating in the logistics sector. In order to describe the issue more comprehensively, the aforementioned prediction of the possible negative results of business operations was carried out for companies functioning in the Polish region of Podkarpacie, and in Slovakia. The bankruptcy predictors selected for the assessment of companies operating in the logistics sector included 28 financial indicators characterizing these enterprises in terms of their financial standing and management effectiveness. The purpose of the study was to identify factors (models) describing the bankruptcy risk in enterprises in the context of their forecasting effectiveness in a one-year and two-year time horizon. In order to assess their practical applicability the models were carefully analysed and validated. The usefulness of the models was assessed in terms of their classification properties, and the capacity to accurately identify enterprises at risk of bankruptcy and healthy companies as well as proper calibration of the models to the data from training sample sets.
A weighted U statistic for association analyses considering genetic heterogeneity.
Wei, Changshuai; Elston, Robert C; Lu, Qing
2016-07-20
Converging evidence suggests that common complex diseases with the same or similar clinical manifestations could have different underlying genetic etiologies. While current research interests have shifted toward uncovering rare variants and structural variations predisposing to human diseases, the impact of heterogeneity in genetic studies of complex diseases has been largely overlooked. Most of the existing statistical methods assume the disease under investigation has a homogeneous genetic effect and could, therefore, have low power if the disease undergoes heterogeneous pathophysiological and etiological processes. In this paper, we propose a heterogeneity-weighted U (HWU) method for association analyses considering genetic heterogeneity. HWU can be applied to various types of phenotypes (e.g., binary and continuous) and is computationally efficient for high-dimensional genetic data. Through simulations, we showed the advantage of HWU when the underlying genetic etiology of a disease was heterogeneous, as well as the robustness of HWU against different model assumptions (e.g., phenotype distributions). Using HWU, we conducted a genome-wide analysis of nicotine dependence from the Study of Addiction: Genetics and Environments dataset. The genome-wide analysis of nearly one million genetic markers took 7 h, identifying heterogeneous effects of two new genes (i.e., CYP3A5 and IKBKB) on nicotine dependence. Copyright © 2016 John Wiley & Sons, Ltd.
Statistical and extra-statistical considerations in differential item functioning analyses
Directory of Open Access Journals (Sweden)
G. K. Huysamen
2004-10-01
This article briefly describes the main procedures for performing differential item functioning (DIF) analyses and points out some of the statistical and extra-statistical implications of these methods. Research findings on the sources of DIF, including those associated with translated tests, are reviewed. As DIF analyses are oblivious of correlations between a test and relevant criteria, the elimination of differentially functioning items does not necessarily improve predictive validity or reduce any predictive bias. The implications of the results of past DIF research for test development in the multilingual and multi-cultural South African society are considered.
Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio
2013-03-01
To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a markedly lower score. Overall, only a few of the studies applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.
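Since the chi-square test was the test most often performed in the reviewed studies, a minimal worked example may help; the 2x2 table below is invented for illustration and does not come from the reviewed studies:

```python
# Hypothetical 2x2 table: drug prescribed (yes/no) in two age groups.
table = [[30, 70],   # children:    prescribed / not prescribed
         [50, 50]]   # adolescents: prescribed / not prescribed

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = 0.0
for i, row in enumerate(table):
    for j, obs in enumerate(row):
        exp = row_totals[i] * col_totals[j] / n
        chi2 += (obs - exp) ** 2 / exp

print(round(chi2, 3))  # → 8.333
```

With 1 degree of freedom, a statistic this large corresponds to p < 0.01, so the prescription rates in the two groups would be judged to differ.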
UMTS signal measurements with digital spectrum analysers
International Nuclear Information System (INIS)
Licitra, G.; Palazzuoli, D.; Ricci, A. S.; Silvi, A. M.
2004-01-01
The launch of the Universal Mobile Telecommunications System (UMTS), the most recent mobile telecommunications standard, has imposed the requirement of updating measurement instrumentation and methodologies. In order to define the most reliable measurement procedure, which is aimed at assessing the exposure to electromagnetic fields, modern spectrum analysers' features for correct signal characterisation have been reviewed. (authors)
Automated statistical modeling of analytical measurement systems
International Nuclear Information System (INIS)
Jacobson, J.J.
1992-01-01
The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated by the analysis of control standards. Control standards are samples which are made up at precisely known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision, and to verify that a measurement system is operating satisfactorily. The processing of control standards provides this ability.
Temporal scaling and spatial statistical analyses of groundwater level fluctuations
Sun, H.; Yuan, L., Sr.; Zhang, Y.
2017-12-01
Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, likely due to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics, expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, after removing the seasonal cycle, through temporal scaling and spatial statistical analyses. First, using time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying that the fractal-scaling behavior changes with time and location. Hence, we can distinguish a potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increments of groundwater level fluctuations exhibit a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can depict well the transient dynamics (i.e., the fractal, non-Gaussian property) of groundwater levels, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics may therefore provide useful information and quantification for further understanding the nature of complex dynamics in hydrology.
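The TS-LHE itself is more elaborate than can be shown here, but the underlying idea, estimating a Hurst exponent from how the spread of increments grows with lag, can be sketched on a synthetic random walk standing in for a groundwater level series (all data below are simulated, not from the study):

```python
import math
import random
import statistics

random.seed(42)

# Synthetic series standing in for daily groundwater levels: an
# ordinary random walk, whose true Hurst exponent is H = 0.5.
steps = [random.gauss(0.0, 1.0) for _ in range(4096)]
level = [0.0]
for s in steps:
    level.append(level[-1] + s)

# std(x[t+lag] - x[t]) ~ lag**H, so H is the slope in log-log space.
lags = [1, 2, 4, 8, 16, 32]
log_lag, log_sd = [], []
for lag in lags:
    diffs = [level[i + lag] - level[i] for i in range(len(level) - lag)]
    log_lag.append(math.log(lag))
    log_sd.append(math.log(statistics.pstdev(diffs)))

# Ordinary least-squares slope of log std against log lag.
mx = sum(log_lag) / len(log_lag)
my = sum(log_sd) / len(log_sd)
H = sum((x - mx) * (y - my) for x, y in zip(log_lag, log_sd)) \
    / sum((x - mx) ** 2 for x in log_lag)

print(round(H, 2))  # close to 0.5 for an ordinary random walk
```

A persistent (H > 0.5) or anti-persistent (H < 0.5) series would shift this estimate, which is the kind of location- and time-dependent behavior the study reports.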
Statistical analysis of angular correlation measurements
International Nuclear Information System (INIS)
Oliveira, R.A.A.M. de.
1986-01-01
Obtaining the multipole mixing ratio, δ, of γ transitions from angular correlation measurements is a statistical problem characterized by the small number of angles at which observations are made and by the limited counting statistics. The nonexistence of a sufficient statistic for the estimator of δ is shown. Three different estimators for δ were constructed and their properties of consistency, bias and efficiency were tested. Tests were also performed on experimental results obtained in γ-γ directional correlation measurements. (Author) [pt
Using statistical inference for decision making in best estimate analyses
International Nuclear Information System (INIS)
Sermer, P.; Weaver, K.; Hoppe, F.; Olive, C.; Quach, D.
2008-01-01
For broad classes of safety analysis problems, one needs to make decisions when faced with randomly varying quantities which are also subject to errors. The means for doing this involves a statistical approach which takes into account the nature of the physical problems, and the statistical constraints they impose. We describe the methodology for doing this which has been developed at Nuclear Safety Solutions, and we draw some comparisons to other methods which are commonly used in Canada and internationally. Our methodology has the advantages of being robust and accurate and compares favourably to other best estimate methods. (author)
Additional methodology development for statistical evaluation of reactor safety analyses
International Nuclear Information System (INIS)
Marshall, J.A.; Shore, R.W.; Chay, S.C.; Mazumdar, M.
1977-03-01
The project described is motivated by the desire for methods to quantify uncertainties and to identify conservatisms in nuclear power plant safety analysis. The report examines statistical methods useful for assessing the probability distribution of output response from complex nuclear computer codes, considers sensitivity analysis and several other topics, and also sets the path for using the developed methods for realistic assessment of the design basis accident
Error calculations statistics in radioactive measurements
International Nuclear Information System (INIS)
Verdera, Silvia
1994-01-01
Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of good radiopharmaceutical practices and quality assurance. The concept of error and its classification into systematic and random errors. Statistical fundamentals: probability theory, population distributions (Bernoulli, Poisson, Gauss, Student's t), the χ² test, and error propagation based on analysis of variance. Bibliography; z table, t-test table, Poisson index, χ² test table.
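The Poisson statistics and error-propagation ideas listed above reduce, for a simple net-count measurement, to a few lines; the counts below are illustrative:

```python
import math

# Counting statistics: a recorded count N is Poisson distributed,
# so its standard deviation is sqrt(N).
gross = 10_000   # counts from sample + background (illustrative)
bkg = 3_600      # counts from background alone, same counting time

net = gross - bkg
# Errors add in quadrature for a difference of independent counts.
sigma_net = math.sqrt(gross + bkg)
rel_err = sigma_net / net

print(net, round(sigma_net, 1), f"{100 * rel_err:.1f}%")  # → 6400 116.6 1.8%
```

Note that the uncertainty of the net count depends on both the gross and the background counts, which is why long background measurements matter for weak sources.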
Statistical considerations for grain-size analyses of tills
Jacobs, A.M.
1971-01-01
Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the new measurements and the standard population.
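Under one plausible reading of the description (an assumption, not taken from the paper), the acceptance boundary in the (standard deviation, mean difference) plane is |d| = t·sqrt(s0²/n0 + s²/m), which traces a hyperbola for each number of subsamples m. A sketch with invented numbers:

```python
import math

# Assumed form of the t-test boundary; all numbers are illustrative.
t_crit = 2.05      # critical t value for the chosen significance level
s0, n0 = 4.0, 30   # standard population: std deviation and sample size
m = 5              # subsamples taken from the new sample

def inside(d, s):
    """True if (std dev s, mean difference d) lies between the branches."""
    bound = t_crit * math.sqrt(s0 ** 2 / n0 + s ** 2 / m)
    return abs(d) <= bound

print(inside(2.0, 3.0))  # small mean shift: measurements accepted
print(inside(8.0, 3.0))  # large mean shift: measurements repeated
```

For small s the bound flattens toward t·s0/√n0, matching the paper's remark that the critical segment is approximately a horizontal line, so the test reduces to comparing means.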
Statistical analysis of ultrasonic measurements in concrete
Chiang, Chih-Hung; Chen, Po-Chih
2002-05-01
Stress wave techniques such as measurements of ultrasonic pulse velocity are often used to evaluate concrete quality in structures. For proper interpretation of measurement results, the dependence of pulse transit time on the average acoustic impedance and the material homogeneity along the sound path need to be examined. Semi-direct measurement of pulse velocity can be more convenient than through-transmission measurement, as it is not necessary to access both sides of concrete floors or walls. A novel measurement scheme is proposed and verified based on statistical analysis. It is shown that semi-direct measurements are very effective for gathering a large amount of pulse velocity data from concrete reference specimens. The variability of the measurements is comparable with that reported by the American Concrete Institute for either break-off or pullout tests.
Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.
Veldkamp, Coosje L S; Nuijten, Michèle B; Dominguez-Alvarez, Linda; van Assen, Marcel A L M; Wicherts, Jelte M
2014-01-01
Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
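The automated consistency check described can be illustrated for the simplest case, a reported z statistic, where the p-value is recomputable with the standard normal tail (t and F statistics need an incomplete-beta function and are omitted here); the reported values below are hypothetical, not drawn from the surveyed articles:

```python
import math

# Statcheck-style consistency check, sketched for a z statistic only.
def two_sided_p(z):
    """Two-sided p-value of a standard normal z statistic."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def consistent(z, reported_p, tol=0.0005):
    """True if the reported p matches the recomputed p within tol."""
    return abs(two_sided_p(z) - reported_p) < tol

# Hypothetical reported results.
print(consistent(1.96, 0.05))    # matches the recomputed p-value
print(consistent(1.96, 0.012))   # inconsistent: likely a reporting error
```

The second case is the kind of gross inconsistency the authors flag as potentially affecting decisions about statistical significance.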
Social Media Analyses for Social Measurement
Schober, Michael F.; Pasek, Josh; Guggenheim, Lauren; Lampe, Cliff; Conrad, Frederick G.
2016-01-01
Demonstrations that analyses of social media content can align with measurement from sample surveys have raised the question of whether survey research can be supplemented or even replaced with less costly and burdensome data mining of already-existing or “found” social media content. But just how trustworthy such measurement can be—say, to replace official statistics—is unknown. Survey researchers and data scientists approach key questions from starting assumptions and analytic traditions that differ on, for example, the need for representative samples drawn from frames that fully cover the population. New conversations between these scholarly communities are needed to understand the potential points of alignment and non-alignment. Across these approaches, there are major differences in (a) how participants (survey respondents and social media posters) understand the activity they are engaged in; (b) the nature of the data produced by survey responses and social media posts, and the inferences that are legitimate given the data; and (c) practical and ethical considerations surrounding the use of the data. Estimates are likely to align to differing degrees depending on the research topic and the populations under consideration, the particular features of the surveys and social media sites involved, and the analytic techniques for extracting opinions and experiences from social media. Traditional population coverage may not be required for social media content to effectively predict social phenomena to the extent that social media content distills or summarizes broader conversations that are also measured by surveys. PMID:27257310
Statistical methods towards more efficient infiltration measurements.
Franz, T; Krebs, P
2006-01-01
A comprehensive knowledge of the infiltration situation in a catchment is required for operation and maintenance. Due to the high expenditures involved, optimisation of the necessary measurement campaigns is essential. Methods based on multivariate statistics were developed to improve the information yield of measurements by identifying appropriate gauge locations. The methods impose few constraints on the data required. They were successfully tested on real and artificial data. For suitable catchments, it is estimated that the optimisation potential amounts to an accuracy improvement of up to 30% compared with non-optimised gauge distributions. Besides this, a correlation between independent reach parameters and dependent infiltration rates could be identified, which is not dominated by the groundwater head.
"What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"
Ozturk, Elif
2012-01-01
The present paper reviews two motivations for conducting "what if" analyses using Excel and "R" to understand statistical significance tests in the context of sample size. "What if" analyses can be used to teach students what statistical significance tests really do, and in applied research either prospectively to estimate what sample size…
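The core "what if" idea is that, for a fixed effect size, the test statistic grows with √n, so the same effect flips from "non-significant" to "significant" as the hypothetical sample size increases. A minimal sketch using a one-sample z approximation (the numbers are illustrative, and the paper's own spreadsheets are not reproduced here):

```python
import math

def two_sided_p(z):
    """Two-sided p-value of a standard normal z statistic."""
    return math.erfc(abs(z) / math.sqrt(2.0))

d = 0.2  # a fixed, small standardized mean difference (illustrative)
for n in (25, 100, 400):
    z = d * math.sqrt(n)   # test statistic grows with sqrt(n)
    print(n, round(two_sided_p(z), 4))
# n grows → p shrinks: 0.3173, 0.0455, 0.0001
```

The effect size never changes; only the sample size does, which is exactly the lesson such "what if" exercises are meant to convey.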
Statistical contact angle analyses; "slow moving" drops on a horizontal silicon-oxide surface.
Schmitt, M; Grub, J; Heib, F
2015-06-01
Sessile drop experiments on horizontal surfaces are commonly used to characterise surface properties in science and in industry. The advancing angle and the receding angle are measurable on every solid. Especially on horizontal surfaces, even the notions themselves are critically questioned by some authors. Building a standard, reproducible and valid method of measuring and defining specific (advancing/receding) contact angles is an important challenge of surface science. Recently we have developed two/three approaches, by sigmoid fitting, and by independent and dependent statistical analyses, which are practicable for the determination of specific angles/slopes when inclining the sample surface. These approaches lead to contact angle data which are independent of the "user skills" and subjectivity of the operator, which is also urgently needed for evaluating dynamic measurements of contact angles. We will show in this contribution that the slightly modified procedures are also applicable for finding specific angles in experiments on horizontal surfaces. As an example, droplets on a flat, freshly cleaned silicon-oxide surface (wafer) are dynamically measured by the sessile drop technique while the volume of the liquid is increased/decreased. The triple points, the time, and the contact angles during the advancing and the receding of the drop, obtained by high-precision drop shape analysis, are statistically analysed. As stated in the previous contribution, the procedure is called "slow movement" analysis due to the small covered distance and the dominance of data points with low velocity. Even the smallest variations in velocity, such as the minimal advancing motion during the withdrawal of the liquid, are identifiable, which confirms the flatness and chemical homogeneity of the sample surface and the high sensitivity of the presented approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
Applications of measure theory to statistics
Pantsulaia, Gogi
2016-01-01
This book aims to give mathematically rigorous sense to the notions of objectivity and subjectivity for consistent estimation in a Polish group, using the concept of Haar null sets in the corresponding group. This new approach – naturally dividing the class of all consistent estimates of an unknown parameter in a Polish group into disjoint classes of subjective and objective estimates – helps the reader to clarify some conjectures arising in the criticism of null hypothesis significance testing. The book also acquaints readers with the theory of infinite-dimensional Monte Carlo integration, recently developed for estimating the values of infinite-dimensional Riemann integrals over infinite-dimensional rectangles. The book is addressed both to graduate students and to researchers active in the fields of analysis, measure theory, and mathematical statistics.
Statistics Analysis Measures Painting of Cooling Tower
Directory of Open Access Journals (Sweden)
A. Zacharopoulou
2013-01-01
This study refers to the cooling tower of Megalopolis (constructed 1975) and its protection from a corrosive environment. The maintenance of the cooling tower took place in 2008. The cooling tower was badly damaged by corrosion of the reinforcement. Parabolic cooling towers at electrical power plants are a typical example of construction exposed to an especially aggressive environment. The protection of cooling towers is usually achieved through organic coatings. Because of the different environmental impacts on the internal and external sides of the cooling tower, a different paint application system is required for each. The present study refers to the damage caused by the corrosion process. The corrosive environments, the application of the painting, the quality control process, the measurements and statistical analysis, and the results are discussed in this study. In the quality control process the following measurements were taken into consideration: (1) examination of the adhesion with the cross-cut test, (2) examination of the film thickness, and (3) control of the pull-off resistance for concrete substrates and paintings. Finally, this study refers to the correlations of measurements, the analysis of failures in relation to the quality of repair, and the rehabilitation of the cooling tower. This study also made a first attempt to apply specific corrosion inhibitors in such a large structure.
Poulos, M. J.; Pierce, J. L.; McNamara, J. P.; Flores, A. N.; Benner, S. G.
2015-12-01
Terrain aspect alters the spatial distribution of insolation across topography, driving eco-pedo-hydro-geomorphic feedbacks that can alter landform evolution and result in valley asymmetries for a suite of land surface characteristics (e.g. slope length and steepness, vegetation, soil properties, and drainage development). Asymmetric valleys serve as natural laboratories for studying how landscapes respond to climate perturbation. In the semi-arid montane granodioritic terrain of the Idaho batholith, Northern Rocky Mountains, USA, prior works indicate that reduced insolation on northern (pole-facing) aspects prolongs snow pack persistence, and is associated with thicker, finer-grained soils, that retain more water, prolong the growing season, support coniferous forest rather than sagebrush steppe ecosystems, stabilize slopes at steeper angles, and produce sparser drainage networks. We hypothesize that the primary drivers of valley asymmetry development are changes in the pedon-scale water-balance that coalesce to alter catchment-scale runoff and drainage development, and ultimately cause the divide between north and south-facing land surfaces to migrate northward. We explore this conceptual framework by coupling land surface analyses with statistical modeling to assess relationships and the relative importance of land surface characteristics. Throughout the Idaho batholith, we systematically mapped and tabulated various statistical measures of landforms, land cover, and hydroclimate within discrete valley segments (n=~10,000). We developed a random forest based statistical model to predict valley slope asymmetry based upon numerous measures (n>300) of landscape asymmetries. Preliminary results suggest that drainages are tightly coupled with hillslopes throughout the region, with drainage-network slope being one of the strongest predictors of land-surface-averaged slope asymmetry. When slope-related statistics are excluded, due to possible autocorrelation, valley
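The study ranks more than 300 landscape asymmetry measures as predictors of valley slope asymmetry using a random forest. As a dependency-free stand-in for that machinery, candidate predictors can be ranked by absolute correlation with the target; all variable names and data below are hypothetical and purely illustrative:

```python
import math
import random

random.seed(7)

# Synthetic predictors standing in for mapped valley-segment measures.
n = 500
drainage_slope = [random.gauss(0, 1) for _ in range(n)]
vegetation = [random.gauss(0, 1) for _ in range(n)]
noise = [random.gauss(0, 1) for _ in range(n)]

# Target: slope asymmetry driven mostly by drainage-network slope,
# echoing the study's preliminary finding.
asym = [2.0 * d + 0.5 * v + random.gauss(0, 1)
        for d, v in zip(drainage_slope, vegetation)]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

predictors = {"drainage_slope": drainage_slope,
              "vegetation": vegetation,
              "noise": noise}
ranking = sorted(predictors, key=lambda k: -abs(corr(predictors[k], asym)))
print(ranking[0])  # drainage_slope should dominate, echoing the study
```

A random forest additionally captures nonlinear and interacting effects, which is why the authors prefer it for their full set of measures; the correlation ranking above only conveys the screening idea.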
Fundamental data analyses for measurement control
International Nuclear Information System (INIS)
Campbell, K.; Barlich, G.L.; Fazal, B.; Strittmatter, R.B.
1989-01-01
An important aspect of a complete measurement control program for special nuclear materials is the analysis of data from periodic control measurements of known standards. This chapter covers the following topics: basic algorithms, including an introduction and terminology, the standard case (known mean and standard deviation), Shewhart control charts, and a sequential test for bias; modifications for nonstandard cases, including modifications for a changing (decaying) standard value, for deteriorating measurement precision, and for repeated measurements; and maintenance information, including estimation of the historical standard deviation (for the standard case and when changing with time), normality and outliers, and other tests of randomness.
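The standard case (known mean and standard deviation) with a Shewhart-style chart amounts to flagging control measurements outside fixed k-sigma limits; the historical values and measurements below are illustrative:

```python
# Minimal Shewhart-style check on control-standard measurements,
# assuming known historical mean and standard deviation.
hist_mean, hist_sd = 50.0, 0.8   # illustrative historical values

def out_of_control(x, k=3.0):
    """Flag a control measurement outside the k-sigma limits."""
    return abs(x - hist_mean) > k * hist_sd

measurements = [49.6, 50.5, 51.1, 53.2, 50.0]
flags = [out_of_control(x) for x in measurements]
print(flags)  # only 53.2 exceeds the 3-sigma limits of 47.6 .. 52.4
```

The sequential test for bias mentioned above would instead accumulate evidence across successive control measurements rather than judge each one alone.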
SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit
Directory of Open Access Journals (Sweden)
Annie Chu
2009-04-01
The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been proven that these resources can successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a somewhat new component that concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, and one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Some examples in SOCR Analyses' non-parametric category are the Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis testing models include contingency tables, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and the utilization of SOCR Analyses.
Schmitt, M; Groß, K; Grub, J; Heib, F
2015-06-01
Contact angle determination by the sessile drop technique is essential to characterise surface properties in science and in industry. Different specific angles can be observed on every solid, correlated with the advancing or the receding of the triple line. Different procedures and definitions for the determination of specific angles exist, which are often not comprehensible or reproducible. Therefore one of the most important tasks in this area is to build standard, reproducible and valid methods for determining advancing/receding contact angles. This contribution introduces novel techniques to analyse dynamic contact angle measurements (sessile drop) in detail which are applicable to axisymmetric and non-axisymmetric drops. Not only the recently presented fit solution by sigmoid function and the independent analysis of the different parameters (inclination, contact angle, velocity of the triple point) but also the dependent analysis is explained in detail for the first time. These approaches lead to contact angle data and different means of access to specific contact angles which are independent of the "user skills" and subjectivity of the operator. As an example, the motion behaviour of droplets on flat silicon-oxide surfaces after different surface treatments is dynamically measured by the sessile drop technique when inclining the sample plate. The triple points, the inclination angles, the downhill (advancing motion) and the uphill angles (receding motion) obtained by high-precision drop shape analysis are independently and dependently statistically analysed. Due to the small covered distance, the dependent analysis yields reliable contact angle determination, characterised by small deviations of the computed values. In addition to the detailed introduction of these novel analytical approaches and the fit solution, special motion relations for drops on inclined surfaces and detailed relations concerning the reactivity of the freshly cleaned silicon wafer surface resulting in acceleration
Statistics for Radiation Measurement. Chapter 5
Energy Technology Data Exchange (ETDEWEB)
Lötter, M. G. [Department of Medical Physics, University of the Free State, Bloemfontein (South Africa)
2014-12-15
Measurement errors are of three general types: (i) blunders, (ii) systematic errors or accuracy of measurements, and (iii) random errors or precision of measurements. Blunders produce grossly inaccurate results and experienced observers easily detect their occurrence. Examples in radiation counting or measurements include the incorrect setting of the energy window, counting heavily contaminated samples, using contaminated detectors for imaging or counting, obtaining measurements of high activities, resulting in count rates that lead to excessive dead time effects, and selecting the wrong patient orientation during imaging. Although some blunders can be detected as outliers or by duplicate samples and measurements, blunders should be avoided by careful, meticulous and dedicated work. This is especially important where results will determine the diagnosis or treatment of patients.
Kanda, Junya
2016-01-01
The Transplant Registry Unified Management Program (TRUMP) made it possible for members of the Japan Society for Hematopoietic Cell Transplantation (JSHCT) to analyze large sets of national registry data on autologous and allogeneic hematopoietic stem cell transplantation. However, as the processes used to collect transplantation information are complex and have differed over time, the background of these processes should be understood when using TRUMP data. Previously, information on the HLA locus of patients and donors had been collected using a questionnaire-based free-description method, resulting in some input errors. To correct minor but significant errors and provide accurate HLA matching data, the use of a Stata or EZR/R script offered by the JSHCT is strongly recommended when analyzing HLA data in the TRUMP dataset. The HLA mismatch direction, the mismatch counting method, and the different impacts of HLA mismatches by stem cell source are other important factors in the analysis of HLA data. Additionally, researchers should understand the statistical analyses specific to hematopoietic stem cell transplantation, such as competing risk, landmark analysis, and time-dependent analysis, to correctly analyze transplant data. The data center of the JSHCT can be contacted if statistical assistance is required.
Methods for analysing cardiovascular studies with repeated measures
Cleophas, T. J.; Zwinderman, A. H.; van Ouwerkerk, B. M.
2009-01-01
Background. Repeated measurements in a single subject are generally more similar than unrepeated measurements in different subjects. Unrepeated analyses of repeated data cause underestimation of treatment effects. Objective. To review methods adequate for the analysis of cardiovascular studies with repeated measures.
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most
Hayslett, H T
1991-01-01
Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the
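The normal approximation to the binomial mentioned in this record can be illustrated numerically; a short sketch comparing an exact binomial tail probability with the continuity-corrected normal approximation:

```python
import math
from scipy import stats

n, p = 100, 0.5
k = 60  # P(X >= 60) for X ~ Binomial(100, 0.5)

exact = stats.binom.sf(k - 1, n, p)             # exact tail probability
mu = n * p
sigma = math.sqrt(n * p * (1 - p))
# Normal approximation with continuity correction (k - 0.5)
approx = stats.norm.sf((k - 0.5 - mu) / sigma)

print(f"exact={exact:.4f}, normal approx={approx:.4f}")
```

The two values agree to about three decimal places here, since np and n(1-p) are both large.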
Statistical analyses of the magnet data for the advanced photon source storage ring magnets
International Nuclear Information System (INIS)
Kim, S.H.; Carnegie, D.W.; Doose, C.; Hogrefe, R.; Kim, K.; Merl, R.
1995-01-01
The statistics of the measured magnetic data of 80 dipole, 400 quadrupole, and 280 sextupole magnets of conventional resistive designs for the APS storage ring is summarized. In order to accommodate the vacuum chamber, the curved dipole has a C-type cross section and the quadrupole and sextupole cross sections have 180 degrees and 120 degrees symmetries, respectively. The data statistics include the integrated main fields, multipole coefficients, magnetic and mechanical axes, and roll angles of the main fields. The average and rms values of the measured magnet data meet the storage ring requirements
Some Statistics for Measuring Large-Scale Structure
Brandenberger, Robert H.; Kaplan, David M.; Ramsey, Stephen A.
1993-01-01
Good statistics for measuring large-scale structure in the Universe must be able to distinguish between different models of structure formation. In this paper, two- and three-dimensional "counts in cells" statistics and a new "discrete genus statistic" are applied to toy versions of several popular theories of structure formation: the random phase cold dark matter model, cosmic string models, and the global texture scenario. All three statistics appear quite promising in terms of differentiating betw...
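A counts-in-cells statistic of the kind named above can be sketched in a few lines: partition points into cells and compare the variance of the counts with the Poisson (unclustered) expectation, for which variance equals mean. This toy example uses a synthetic uniform point set, not survey data:

```python
import numpy as np

def counts_in_cells(points, n_cells, box=1.0):
    """Histogram 2-D points into an n_cells x n_cells grid; return cell counts."""
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=n_cells, range=[[0, box], [0, box]])
    return H.ravel()

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(4000, 2))   # unclustered toy "universe"
counts = counts_in_cells(pts, 10)

mean, var = counts.mean(), counts.var()
# For a Poisson point process var/mean ~ 1; clustering (e.g. cosmic
# strings) would drive this ratio well above 1.
print(f"mean={mean:.1f}, var/mean={var / mean:.2f}")
```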
Statistical analyses in the study of solar wind-magnetosphere coupling
International Nuclear Information System (INIS)
Baker, D.N.
1985-01-01
Statistical analyses provide a valuable method for establishing initially the existence (or lack of existence) of a relationship between diverse data sets. Statistical methods also allow one to make quantitative assessments of the strengths of observed relationships. This paper reviews the essential techniques and underlying statistical bases for the use of correlative methods in solar wind-magnetosphere coupling studies. Techniques of visual correlation and time-lagged linear cross-correlation analysis are emphasized, but methods of multiple regression, superposed epoch analysis, and linear prediction filtering are also described briefly. The long history of correlation analysis in the area of solar wind-magnetosphere coupling is reviewed with the assessments organized according to data averaging time scales (minutes to years). It is concluded that these statistical methods can be very useful first steps, but that case studies and various advanced analysis methods should be employed to understand fully the average response of the magnetosphere to solar wind input. It is clear that many workers have not always recognized underlying assumptions of statistical methods and thus the significance of correlation results can be in doubt. Long-term averages (greater than or equal to 1 hour) can reveal gross relationships, but only when dealing with high-resolution data (1 to 10 min) can one reach conclusions pertinent to magnetospheric response time scales and substorm onset mechanisms
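The time-lagged linear cross-correlation technique emphasized in this review can be sketched directly; the following illustrative example recovers a known lag from a synthetic driver/response pair (not solar wind data):

```python
import numpy as np

def lagged_xcorr(x, y, max_lag):
    """Pearson correlation of x against y for each lag in [-max_lag, max_lag]."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xi, yi = x[:len(x) - lag], y[lag:]
        else:
            xi, yi = x[-lag:], y[:lag]
        out[lag] = np.corrcoef(xi, yi)[0, 1]
    return out

rng = np.random.default_rng(2)
driver = rng.normal(size=500)                               # e.g. a solar wind input
response = np.roll(driver, 3) + 0.3 * rng.normal(size=500)  # lagged response + noise

corrs = lagged_xcorr(driver, response, 10)
best = max(corrs, key=corrs.get)
print(f"best lag = {best}, r = {corrs[best]:.3f}")
```

As the review cautions, a significant peak like this establishes correlation, not a physical mechanism.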
Using Inequality Measures to Incorporate Environmental Justice into Regulatory Analyses
Abstract: Formally evaluating how specific policy measures influence environmental justice is challenging, especially in the context of regulatory analyses in which quantitative comparisons are the norm. However, there is a large literature on developing and applying quantitative...
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
Energy Technology Data Exchange (ETDEWEB)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
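The first three procedures listed in this abstract can be sketched on a single synthetic scatterplot: a correlation coefficient for linear patterns, a rank correlation for monotonic patterns, and a Kruskal-Wallis test on bins of the input for trends in central tendency. Illustrative only, not the two-phase flow model analyses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 300)
y = (x - 0.5) ** 2 + 0.01 * rng.normal(size=300)  # non-linear, non-monotonic pattern

r, r_p = stats.pearsonr(x, y)        # (1) linear relationship
rho, rho_p = stats.spearmanr(x, y)   # (2) monotonic relationship
# (3) trend in central tendency: Kruskal-Wallis across quintile bins of x
bins = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
groups = [y[bins == b] for b in range(5)]
h, h_p = stats.kruskal(*groups)

print(f"Pearson r={r:.2f}, Spearman rho={rho:.2f}, Kruskal-Wallis p={h_p:.2e}")
```

This illustrates the abstract's point: the simpler procedures (1) and (2) can miss a strong U-shaped dependence that the binned test (3) detects, a Type II error for the wrong procedure.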
Attitudes toward statistics in medical postgraduates: measuring, evaluating and monitoring.
Zhang, Yuhai; Shang, Lei; Wang, Rui; Zhao, Qinbo; Li, Chanjuan; Xu, Yongyong; Su, Haixia
2012-11-23
In medical training, statistics is considered a very difficult course to learn and teach. Current studies have found that students' attitudes toward statistics can influence their learning process. Measuring, evaluating and monitoring the changes of students' attitudes toward statistics are important. Few studies have focused on the attitudes of postgraduates, especially medical postgraduates. Our purpose was to understand current attitudes regarding statistics held by medical postgraduates and explore their effects on students' achievement. We also wanted to explore the influencing factors and the sources of these attitudes and monitor their changes after a systematic statistics course. A total of 539 medical postgraduates enrolled in a systematic statistics course completed the pre-form of the Survey of Attitudes Toward Statistics -28 scale, and 83 postgraduates were selected randomly from among them to complete the post-form scale after the course. Most medical postgraduates held positive attitudes toward statistics, but they thought statistics was a very difficult subject. The attitudes mainly came from experiences in a former statistical or mathematical class. Age, level of statistical education, research experience, specialty and mathematics basis may influence postgraduate attitudes toward statistics. There were significant positive correlations between course achievement and attitudes toward statistics. In general, student attitudes showed negative changes after completing a statistics course. The importance of student attitudes toward statistics must be recognized in medical postgraduate training. To make sure all students have a positive learning environment, statistics teachers should measure their students' attitudes and monitor their change of status during a course. Some necessary assistance should be offered for those students who develop negative attitudes.
Attitudes toward statistics in medical postgraduates: measuring, evaluating and monitoring
2012-01-01
Background In medical training, statistics is considered a very difficult course to learn and teach. Current studies have found that students’ attitudes toward statistics can influence their learning process. Measuring, evaluating and monitoring the changes of students’ attitudes toward statistics are important. Few studies have focused on the attitudes of postgraduates, especially medical postgraduates. Our purpose was to understand current attitudes regarding statistics held by medical postgraduates and explore their effects on students’ achievement. We also wanted to explore the influencing factors and the sources of these attitudes and monitor their changes after a systematic statistics course. Methods A total of 539 medical postgraduates enrolled in a systematic statistics course completed the pre-form of the Survey of Attitudes Toward Statistics −28 scale, and 83 postgraduates were selected randomly from among them to complete the post-form scale after the course. Results Most medical postgraduates held positive attitudes toward statistics, but they thought statistics was a very difficult subject. The attitudes mainly came from experiences in a former statistical or mathematical class. Age, level of statistical education, research experience, specialty and mathematics basis may influence postgraduate attitudes toward statistics. There were significant positive correlations between course achievement and attitudes toward statistics. In general, student attitudes showed negative changes after completing a statistics course. Conclusions The importance of student attitudes toward statistics must be recognized in medical postgraduate training. To make sure all students have a positive learning environment, statistics teachers should measure their students’ attitudes and monitor their change of status during a course. Some necessary assistance should be offered for those students who develop negative attitudes. PMID:23173770
Statistical analyses of the data on occupational radiation expousure at JPDR
International Nuclear Information System (INIS)
Kato, Shohei; Anazawa, Yutaka; Matsuno, Kenji; Furuta, Toshishiro; Akiyama, Isamu
1980-01-01
In the statistical analyses of the data on occupational radiation exposure at JPDR, the following statistical features were obtained. (1) The individual doses followed a log-normal distribution. (2) In the distribution of doses from one job in the controlled area, the logarithm of the mean (μ) depended on the exposure rate r (mR/h), and σ correlated with the nature of the job and was normally distributed. These relations were as follows: μ = 0.48 ln r − 0.24, σ = 1.2 ± 0.58. (3) For data containing different groups, the distribution of doses showed a polygonal line on log-normal probability paper. (4) Under dose limitation, the distribution of doses showed an asymptotic curve along the limit on log-normal probability paper. (author)
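The log-normal finding in point (1) is straightforward to check on dose data: the logarithm of a log-normal sample is normal, so μ and σ are the mean and standard deviation of log(dose). A sketch on simulated doses (the parameters here are illustrative, not the JPDR values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical individual doses, generated log-normally for illustration
mu_true, sigma_true = 0.5, 1.2
doses = rng.lognormal(mean=mu_true, sigma=sigma_true, size=2000)

# Fit on the log scale
log_d = np.log(doses)
mu_hat, sigma_hat = log_d.mean(), log_d.std(ddof=1)

# Goodness of fit: Kolmogorov-Smirnov against the fitted normal
ks_stat, ks_p = stats.kstest(log_d, 'norm', args=(mu_hat, sigma_hat))
print(f"mu={mu_hat:.2f}, sigma={sigma_hat:.2f}, KS p={ks_p:.2f}")
```

Plotting sorted log-doses against normal quantiles reproduces the "straight line on log-normal probability paper" check used in the study.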
Energy Technology Data Exchange (ETDEWEB)
Amidan, Brett G.; Pulsipher, Brent A.; Matzke, Brett D.
2009-12-17
number of zeros. Using QQ plots, these data characteristics show a lack of normality in the data after contamination. Normality is improved when looking at log(CFU/cm2). Variance component analysis (VCA) and analysis of variance (ANOVA) were used to estimate the amount of variance due to each source and to determine which sources of variability were statistically significant. In general, the sampling methods interacted with the across-event variability and with the across-room variability. For this reason, it was decided to do analyses for each sampling method individually. The between-event variability and between-room variability were significant for each method, except for the between-event variability for the swabs. For both the wipes and vacuums, the within-room standard deviation was much larger (26.9 for wipes and 7.086 for vacuums) than the between-event standard deviation (6.552 for wipes and 1.348 for vacuums) and the between-room standard deviation (6.783 for wipes and 1.040 for vacuums). The swabs' between-room standard deviation was 0.151, while both the within-room and between-event standard deviations were less than 0.10 (all measurements in CFU/cm2).
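The between/within decomposition reported here can be sketched for the simplest case, a balanced one-way random-effects layout, using the classical ANOVA mean-squares estimators. The data below are simulated for illustration and do not reproduce the study's design or numbers:

```python
import numpy as np

def variance_components(data):
    """Between- and within-group variance components from a balanced
    one-way layout (rows = groups), via ANOVA mean squares."""
    data = np.asarray(data, float)
    k, n = data.shape
    grand = data.mean()
    group_means = data.mean(axis=1)
    ms_between = n * ((group_means - grand) ** 2).sum() / (k - 1)
    ms_within = ((data - group_means[:, None]) ** 2).sum() / (k * (n - 1))
    var_within = ms_within
    var_between = max((ms_between - ms_within) / n, 0.0)  # truncate at zero
    return var_between, var_within

rng = np.random.default_rng(5)
# Simulated log(CFU/cm2)-like data: 8 rooms x 30 samples per room,
# between-room sd = 2, within-room sd = 5 (hypothetical values)
room_effect = rng.normal(0, 2.0, size=(8, 1))
obs = room_effect + rng.normal(0, 5.0, size=(8, 30))

vb, vw = variance_components(obs)
print(f"between-room sd ~ {np.sqrt(vb):.2f}, within-room sd ~ {np.sqrt(vw):.2f}")
```

Crossed designs with events, rooms, and methods, as in the study, need a full VCA model rather than this one-way sketch.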
Measuring radioactive half-lives via statistical sampling in practice
Lorusso, G.; Collins, S. M.; Jagan, K.; Hitt, G. W.; Sadek, A. M.; Aitken-Smith, P. M.; Bridi, D.; Keightley, J. D.
2017-10-01
The statistical sampling method for the measurement of radioactive decay half-lives exhibits intriguing features such as that the half-life is approximately the median of a distribution closely resembling a Cauchy distribution. Whilst initial theoretical considerations suggested that in certain cases the method could have significant advantages, accurate measurements by statistical sampling have proven difficult, for they require an exercise in non-standard statistical analysis. As a consequence, no half-life measurement using this method has yet been reported and no comparison with traditional methods has ever been made. We used a Monte Carlo approach to address these analysis difficulties, and present the first experimental measurement of a radioisotope half-life (211Pb) by statistical sampling in good agreement with the literature recommended value. Our work also focused on the comparison between statistical sampling and exponential regression analysis, and concluded that exponential regression achieves generally the highest accuracy.
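The median property underlying statistical sampling has a simple toy analogue: for an exponential decay, the median of individual decay times equals the half-life, so a sample median is a direct half-life estimator. The sketch below uses the 211Pb literature half-life as the simulation input; it illustrates the median idea only, not the paper's Monte Carlo analysis of Cauchy-like ratio distributions:

```python
import numpy as np

half_life = 36.1  # minutes; 211Pb literature value used as the "true" input

rng = np.random.default_rng(6)
tau = half_life / np.log(2)                      # mean lifetime
decay_times = rng.exponential(tau, size=100_000)  # simulated individual decays

# For an exponential distribution the median is tau*ln(2) = T1/2,
# so the sample median estimates the half-life directly.
t_half_est = np.median(decay_times)
print(f"estimated half-life: {t_half_est:.2f} min")
```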
arXiv Statistical Analyses of Higgs- and Z-Portal Dark Matter Models
Ellis, John; Marzola, Luca; Raidal, Martti
2018-06-12
We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2 and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. We find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing $\gtrsim 100$ GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.
The intervals method: a new approach to analyse finite element outputs using multivariate statistics
Directory of Open Access Journals (Sweden)
Jordi Marcé-Nogué
2017-10-01
Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.
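The variable-generation step described in the Methods can be sketched directly: bin per-element stress values into intervals and express each interval as a percentage of total area. The stresses, areas, and interval bounds below are synthetic placeholders, not the armadillo data:

```python
import numpy as np

def interval_variables(stress, areas, bounds):
    """Intervals' method sketch: percentage of total element area whose
    stress falls in each interval [bounds[i], bounds[i+1])."""
    stress, areas = np.asarray(stress), np.asarray(areas)
    idx = np.digitize(stress, bounds[1:-1])  # interval index per element
    total = areas.sum()
    return np.array([areas[idx == i].sum() / total * 100
                     for i in range(len(bounds) - 1)])

# Toy "mandible": per-element von Mises stress and element areas (hypothetical)
rng = np.random.default_rng(7)
stress = rng.gamma(shape=2.0, scale=5.0, size=500)  # MPa
areas = rng.uniform(0.5, 1.5, size=500)             # mm^2

bounds = [0, 5, 10, 15, 20, np.inf]                 # stress intervals
vars_pct = interval_variables(stress, areas, bounds)
print(vars_pct)
```

One such row per specimen would then feed a PCA or discriminant analysis, as in the paper; because the variables are area percentages, the result does not depend on the mesh resolution.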
The intervals method: a new approach to analyse finite element outputs using multivariate statistics
De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep
2017-01-01
Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
The estimation of the measurement results with using statistical methods
International Nuclear Information System (INIS)
Velychko, O. (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T. (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)
2015-01-01
A number of international standards and guides describe various statistical methods that can be applied to the management, control and improvement of processes for the purpose of analysing technical measurement results. An analysis of the international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. For this analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results are constructed
The estimation of the measurement results with using statistical methods
Velychko, O.; Gordiyenko, T.
2015-02-01
A number of international standards and guides describe various statistical methods that can be applied to the management, control and improvement of processes for the purpose of analysing technical measurement results. An analysis of the international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. For this analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results are constructed.
Directory of Open Access Journals (Sweden)
H. Kojima
1999-01-01
We present the characteristics of the Electrostatic Solitary Waves (ESW) observed by the Geotail spacecraft in the plasma sheet boundary layer based on statistical analyses. We also discuss the results with reference to a model of ESW generation due to electron beams, which has been proposed on the basis of computer simulations. In this generation model, the nonlinear evolution of Langmuir waves excited by electron bump-on-tail instabilities leads to the formation of isolated electrostatic potential structures corresponding to "electron holes" in phase space. The statistical analyses of the Geotail data, which we conducted under the assumption that the polarity of ESW potentials is positive, show that most ESW propagate in the same direction as the electron beams that are simultaneously observed by the plasma instrument. Further, we also find that the ESW potential energy is much smaller than the background electron thermal energy and that the ESW potential widths are typically shorter than 60 times the local electron Debye length when we assume that the ESW potentials travel at the same velocity as the electron beams. These results are very consistent with the ESW generation model in which the nonlinear evolution of the electron bump-on-tail instability leads to the formation of electron holes in phase space.
Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics
DEFF Research Database (Denmark)
Khanmohammadi, Mahdieh
This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach based on differences of statistical measures within a section and the same measures between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling...
A weighted U-statistic for genetic association analyses of sequencing data.
Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing
2014-12-01
With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, high-dimensional sequencing data pose a great challenge for statistical analysis. Association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption about the underlying disease model or phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed the commonly used sequence kernel association test (SKAT) when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL4 and very low density lipoprotein cholesterol.
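The general flavor of a weighted U-statistic association test can be sketched as follows: a pairwise phenotype-similarity kernel is weighted by a frequency-weighted genotype-similarity kernel, and significance is assessed by permutation. This is a hedged illustration of the idea on simulated rare variants, not the exact WU-SEQ formulation:

```python
import numpy as np

def weighted_u(geno, pheno, weights):
    """Pairwise phenotype similarity weighted by (frequency-weighted)
    genotype similarity; a generic weighted-U sketch."""
    K = (geno * weights) @ geno.T          # genotype similarity kernel
    np.fill_diagonal(K, 0.0)               # U-statistic: exclude i == j pairs
    z = (pheno - pheno.mean()) / pheno.std()
    H = np.outer(z, z)                     # phenotype similarity
    return (K * H).sum() / K.sum()

def permutation_p(geno, pheno, weights, n_perm=500, seed=0):
    """Permutation p-value for the weighted U-statistic."""
    rng = np.random.default_rng(seed)
    obs = weighted_u(geno, pheno, weights)
    perms = [weighted_u(geno, rng.permutation(pheno), weights)
             for _ in range(n_perm)]
    return (1 + sum(p >= obs for p in perms)) / (n_perm + 1)

rng = np.random.default_rng(8)
geno = rng.binomial(1, 0.05, size=(200, 30)).astype(float)  # rare variants
beta = np.zeros(30); beta[:5] = 1.5                          # 5 causal variants
pheno = geno @ beta + rng.normal(size=200)
weights = 1.0 / (geno.mean(axis=0) + 1e-3)  # up-weight rarer variants

p_val = permutation_p(geno, pheno, weights)
print(f"permutation p = {p_val:.4f}")
```

Because the phenotype enters only through a similarity kernel, no parametric model of the trait distribution is assumed, which is the key robustness property claimed in the abstract.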
Emoto, K.; Saito, T.; Shiomi, K.
2017-12-01
Short-period (2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
Directory of Open Access Journals (Sweden)
Sunando Roy
2009-10-01
Feline immunodeficiency virus (FIV) and human immunodeficiency virus (HIV) are recently identified lentiviruses that cause progressive immune decline and ultimately death in infected cats and humans. It is of great interest to understand how to prevent immune system collapse caused by these lentiviruses. We recently described that disease caused by a virulent FIV strain in cats can be attenuated if animals are first infected with a feline immunodeficiency virus derived from a wild cougar. The detailed temporal tracking of cat immunological parameters in response to two viral infections resulted in high-dimensional datasets containing variables that exhibit strong co-variation. Initial analyses of these complex data using univariate statistical techniques did not account for interactions among immunological response variables and therefore potentially obscured significant effects between infection state and immunological parameters. Here, we apply a suite of multivariate statistical tools, including Principal Component Analysis, MANOVA and Linear Discriminant Analysis, to temporal immunological data resulting from FIV superinfection in domestic cats. We investigated the co-variation among immunological responses, the differences in immune parameters among four groups of five cats each (uninfected, single and dual infected animals), and the "immune profiles" that discriminate among them over the first four weeks following superinfection. Dual infected cats mount an immune response by 24 days post superinfection that is characterized by elevated levels of CD8 and CD25 cells and increased expression of IL4, IFNgamma and FAS. This profile discriminates dual infected cats from cats infected with FIV alone, which show high IL-10 and lower numbers of CD8 and CD25 cells. Multivariate statistical analyses demonstrate both the dynamic nature of the immune response to FIV single and dual infection and the development of a unique immunological profile in dual
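The first of the multivariate tools named above, Principal Component Analysis, can be sketched from scratch via an SVD of the centred data matrix. The "immune marker" matrix below is simulated with a built-in group difference, purely to illustrate how PCA exposes co-varying group structure; it is not the study's data:

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD of the centred data matrix; returns component scores
    and the fraction of variance explained per component."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    explained = (s ** 2) / (s ** 2).sum()
    return scores, explained[:n_components]

# Toy immune-parameter matrix: 20 subjects x 6 correlated markers (hypothetical)
rng = np.random.default_rng(9)
group = np.repeat([0, 1], 10)  # e.g. single- vs dual-infected
X = rng.normal(size=(20, 6)) + group[:, None] * np.array([2, 2, 2, 0, 0, 0])

scores, explained = pca(X)
print(f"variance explained: {explained}")
```

PC1 absorbs the correlated shift in the first three markers, which is exactly the kind of joint effect a marker-by-marker univariate analysis can obscure.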
Measurement assurance program for FTIR analyses of deuterium oxide samples
International Nuclear Information System (INIS)
Johnson, S.R.; Clark, J.P.
1997-01-01
Analytical chemistry measurements require an installed, criterion-based assessment program to identify and control sources of error. Such a program should also gauge the uncertainty of the data. A self-assessment of long-established quality control practices was performed against the characteristics of a comprehensive measurement assurance program, and opportunities for improvement were identified. This paper discusses the efforts to transform quality control practices into a complete measurement assurance program. The resulting program heightened the laboratory's confidence in the data it generated by providing real-time statistical information to control and determine measurement quality
DEFF Research Database (Denmark)
Frandsen, Tove Faber; Nicolaisen, Jeppe
2017-01-01
Using statistical methods to analyse digital material for patterns makes it possible to detect patterns in big data that we would otherwise not be able to detect. This paper seeks to exemplify this fact by statistically analysing a large corpus of references in systematic reviews. The aim...
Using Inequality Measures to Incorporate Environmental Justice into Regulatory Analyses
Harper, Sam; Ruder, Eric; Roman, Henry A.; Geggel, Amelia; Nweke, Onyemaechi; Payne-Sturges, Devon; Levy, Jonathan I.
2013-01-01
Formally evaluating how specific policy measures influence environmental justice is challenging, especially in the context of regulatory analyses in which quantitative comparisons are the norm. However, there is a large literature on developing and applying quantitative measures of health inequality in other settings, and these measures may be applicable to environmental regulatory analyses. In this paper, we provide information to assist policy decision makers in determining the viability of using measures of health inequality in the context of environmental regulatory analyses. We conclude that quantification of the distribution of inequalities in health outcomes across social groups of concern, considering both within-group and between-group comparisons, would be consistent with both the structure of regulatory analysis and the core definition of environmental justice. Appropriate application of inequality indicators requires thorough characterization of the baseline distribution of exposures and risks, leveraging data generally available within regulatory analyses. Multiple inequality indicators may be applicable to regulatory analyses, and the choice among indicators should be based on explicit value judgments regarding the dimensions of environmental justice of greatest interest. PMID:23999551
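One of the simplest inequality indicators of the kind discussed here is the Gini coefficient, computed from the Lorenz curve of the exposure or risk distribution. A minimal sketch with hypothetical exposure values (the paper itself weighs several candidate indicators, not only this one):

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array (0 = perfect equality)."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    lorenz = np.cumsum(x) / x.sum()          # cumulative share of total burden
    return (n + 1 - 2 * lorenz.sum()) / n

equal = np.full(100, 5.0)                            # burden shared evenly
skewed = np.r_[np.full(90, 1.0), np.full(10, 50.0)]  # burden concentrated in 10%

print(f"Gini equal={gini(equal):.3f}, skewed={gini(skewed):.3f}")
```

As the abstract notes, the choice among such indicators (Gini, Atkinson-type indices, between- vs within-group decompositions) embeds value judgments about which dimension of inequality matters, so the index choice should be explicit in a regulatory analysis.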
Statistical analysis of complex systems with nonclassical invariant measures
Fratalocchi, Andrea
2011-01-01
I investigate the problem of finding a statistical description of a complex many-body system whose invariant measure cannot be constructed stemming from classical thermodynamics ensembles. By taking solitons as a reference system and by employing a
Use of Statistics for Data Evaluation in Environmental Radioactivity Measurements
International Nuclear Information System (INIS)
Sutarman
2001-01-01
Counting statistics provide corrections to environmental radioactivity measurement results. Statistics provides formulas to determine the standard deviation (S_B) and the minimum detectable concentration (MDC) according to the Poisson distribution. Both formulas depend on the background count rate, counting time, counting efficiency, gamma intensity, and sample size. A long background counting time results in a relatively low S_B and MDC, which can yield relatively accurate measurement results. (author)
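The dependence described above can be sketched with a Currie-style MDC formula; the exact formula and coefficients used in this record are not given, so the sketch below assumes the common 95%-confidence form and illustrative instrument parameters:

```python
import math

def minimum_detectable_concentration(bg_rate, t, efficiency, intensity, mass):
    """Currie-style MDC sketch (95% confidence, Poisson counts assumed):
    MDC = (2.71 + 4.65*sqrt(bg_rate*t)) / (t * efficiency * intensity * mass).
    Units: bg_rate in counts/s, t in s, mass in kg -> result in Bq/kg."""
    return (2.71 + 4.65 * math.sqrt(bg_rate * t)) / (t * efficiency * intensity * mass)

# Lengthening the background counting time lowers the MDC (the point made above):
short = minimum_detectable_concentration(0.05, 3_600, 0.3, 0.85, 0.5)
long_ = minimum_detectable_concentration(0.05, 36_000, 0.3, 0.85, 0.5)
print(f"1 h count: MDC={short:.3f} Bq/kg, 10 h count: MDC={long_:.3f} Bq/kg")
```

For a background-dominated measurement the MDC scales roughly as 1/sqrt(t), which is why long background counts pay off.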
Statistical analyses to support guidelines for marine avian sampling. Final report
Kinlan, Brian P.; Zipkin, Elise; O'Connell, Allan F.; Caldow, Chris
2012-01-01
distribution to describe counts of a given species in a particular region and season. 4. Using a large database of historical at-sea seabird survey data, we applied this technique to identify appropriate statistical distributions for modeling a variety of species, allowing the distribution to vary by season. For each species and season, we used the selected distribution to calculate and map retrospective statistical power to detect hotspots and coldspots, and map p-values from Monte Carlo significance tests of hotspots and coldspots, in discrete lease blocks designated by the U.S. Department of Interior, Bureau of Ocean Energy Management (BOEM). 5. Because our definition of hotspots and coldspots does not explicitly include variability over time, we examine the relationship between the temporal scale of sampling and the proportion of variance captured in time series of key environmental correlates of marine bird abundance, as well as available marine bird abundance time series, and use these analyses to develop recommendations for the temporal distribution of sampling to adequately represent both short-term and long-term variability. We conclude by presenting a schematic "decision tree" showing how this power analysis approach would fit in a general framework for avian survey design, and discuss implications of model assumptions and results. We discuss avenues for future development of this work, and recommendations for practical implementation in the context of siting and wildlife assessment for offshore renewable energy development projects.
Analysis of neutron flux measurement systems using statistical functions
International Nuclear Information System (INIS)
Pontes, Eduardo Winston
1997-01-01
This work develops an integrated analysis for neutron flux measurement systems using the concepts of cumulants and spectra. Its major contribution is the generalization of Campbell's theorem in the form of spectra in the frequency domain, and its application to the analysis of neutron flux measurement systems. Campbell's theorem, in its generalized form, constitutes an important tool, not only to find the nth-order frequency spectra of the radiation detector, but also in the system analysis. The radiation detector, an ionization chamber for neutrons, is modeled for cylindrical, plane and spherical geometries. The detector current pulses are characterized by a vector of random parameters, and the associated charges, statistical moments and frequency spectra of the resulting current are calculated. A computer program is developed to apply the proposed methodology. To integrate the associated electronics into the analysis, the signal processor is studied in both analog and digital configurations. The analysis is unified by developing the concept of equivalent systems, which can be used to describe the cumulants and spectra in analog or digital systems. The noise in the signal processor input stage is analysed in terms of its second-order spectrum. Mathematical expressions are presented for cumulants and spectra up to fourth order, for important cases of filter positioning relative to the detector spectra. Unbiased conventional estimators for the cumulants are used, and, to evaluate system precision and response time, expressions are developed for their variances. Finally, some possibilities for obtaining the neutron radiation flux as a function of cumulants are discussed. In summary, this work proposes analysis tools which make possible important decisions in the design of better neutron flux measurement systems. (author)
Bayesian statistics in radionuclide metrology: measurement of a decaying source
International Nuclear Information System (INIS)
Bochud, F. O.; Bailat, C.J.; Laedermann, J.P.
2007-01-01
The most intuitive way of defining a probability is perhaps through the frequency at which it appears when a large number of trials are realized in identical conditions. The probability derived from the obtained histogram characterizes the so-called frequentist or conventional statistical approach. In this sense, probability is defined as a physical property of the observed system. By contrast, in Bayesian statistics, a probability is not a physical property or a directly observable quantity, but a degree of belief or an element of inference. The goal of this paper is to show how Bayesian statistics can be used in radionuclide metrology and what its advantages and disadvantages are compared with conventional statistics. This is performed through the example of a yttrium-90 source typically encountered in environmental surveillance measurements. Because of the very low activity of this kind of source and the short half-life of the radionuclide, this measurement takes several days, during which the source decays significantly. Several methods are proposed to compute simultaneously the number of unstable nuclei at a given reference time, the decay constant and the background. Asymptotically, all approaches give the same result. However, Bayesian statistics produces coherent estimates and confidence intervals in a much smaller number of measurements. Apart from the conceptual understanding of statistics, the main difficulty that could deter radionuclide metrologists from using Bayesian statistics is the complexity of the computation. (authors)
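A minimal numerical sketch of the kind of estimation described: a grid-based Bayesian posterior for the initial source count rate and the background of a decaying source, with the decay constant held fixed at the known Y-90 value. All counts, rates and grids below are synthetic assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily counts from a decaying Y-90 source (half-life ~64 h)
# plus a constant background; units are counts per 1-day bin.
half_life_days = 64.1 / 24.0
lam = np.log(2) / half_life_days
days = np.arange(10)
true_a0, true_bkg = 200.0, 30.0        # hypothetical source and background counts/day
mean = true_a0 * np.exp(-lam * days) + true_bkg
counts = rng.poisson(mean)

# Grid posterior over (a0, bkg) with flat priors and known decay constant.
a0_grid = np.linspace(50, 400, 200)
bkg_grid = np.linspace(5, 80, 150)
A0, B = np.meshgrid(a0_grid, bkg_grid, indexing="ij")
logpost = np.zeros_like(A0)
for t, n in zip(days, counts):
    mu = A0 * np.exp(-lam * t) + B
    logpost += n * np.log(mu) - mu     # Poisson log-likelihood (n! term dropped)
post = np.exp(logpost - logpost.max())
post /= post.sum()

a0_hat = (post.sum(axis=1) * a0_grid).sum()   # posterior mean of a0
bkg_hat = (post.sum(axis=0) * bkg_grid).sum() # posterior mean of background
```

The posterior means recover the simulated source and background jointly from one short series, which is the coherent small-sample behaviour the abstract attributes to the Bayesian approach.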
Statistical learning modeling method for space debris photometric measurement
Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen
2016-03-01
Photometric measurement is an important way to identify space debris, but present methods of photometric measurement impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed, based on the global consistency of the star image, and the statistical information of star images is used to eliminate measurement noise. First, the known stars on the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to evaluate the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
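The train/test least-squares scheme described can be sketched as follows, using a synthetic star catalogue and a single zero-point parameter as a stand-in for the paper's (unspecified) photometric model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Known catalogue magnitudes and measured instrumental fluxes for stars on
# one image (synthetic data standing in for a real star catalogue).
n_stars = 40
true_zp = 25.0                                     # hypothetical zero point
catalogue_mag = rng.uniform(10, 16, n_stars)
flux = 10 ** (-0.4 * (catalogue_mag - true_zp))
flux *= 1 + 0.01 * rng.standard_normal(n_stars)    # 1% measurement noise

# Split the known stars into training and testing sets.
train, test = np.arange(0, 30), np.arange(30, 40)

# Least-squares fit of the model: mag = zp - 2.5*log10(flux).
inst_mag = -2.5 * np.log10(flux)
zp_fit = np.mean(catalogue_mag[train] - inst_mag[train])

# Measurement accuracy evaluated on the held-out testing stars.
pred = zp_fit + inst_mag[test]
rms = np.sqrt(np.mean((pred - catalogue_mag[test]) ** 2))
```

With 1% flux noise the held-out RMS lands near 0.01 mag; the paper's ~0.1 mag figure presumably reflects a richer model and real imaging noise.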
Statistical analyses of incidents on onshore gas transmission pipelines based on PHMSA database
International Nuclear Information System (INIS)
Lam, Chio; Zhou, Wenxing
2016-01-01
This article reports statistical analyses of the mileage and pipe-related incident data for onshore gas transmission pipelines in the US between 2002 and 2013, collected by the Pipeline and Hazardous Materials Safety Administration of the US Department of Transportation. The analysis indicates that there are approximately 480,000 km of gas transmission pipelines in the US, approximately 60% of which were more than 45 years old as of 2013. Eighty percent of the pipelines are Class 1 pipelines, and about 20% are Classes 2 and 3 pipelines. It is found that third-party excavation, external corrosion, material failure and internal corrosion are the four leading failure causes, responsible for more than 75% of the total incidents. The 12-year average rate of rupture equals 3.1 × 10⁻⁵ per km-year due to all failure causes combined. External corrosion is the leading cause of ruptures: the 12-year average rupture rate due to external corrosion equals 1.0 × 10⁻⁵ per km-year and is twice the rupture rate due to third-party excavation or material failure. The study provides insights into the current state of gas transmission pipelines in the US and baseline failure statistics for the quantitative risk assessments of such pipelines. - Highlights: • Analyze PHMSA pipeline mileage and incident data between 2002 and 2013. • Focus on gas transmission pipelines. • Leading causes for pipeline failures are identified. • Provide baseline failure statistics for risk assessments of gas transmission pipelines.
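The reported average rupture rate can be illustrated as a simple exposure calculation. The rupture count below is back-calculated from the published rate and mileage, so it is an assumption rather than a figure taken from the article:

```python
import math

# rate = ruptures / (mileage * years). Inputs chosen to be consistent with
# the ~480,000 km system and the reported 3.1e-5 per km-year rate.
km, years = 480_000.0, 12.0
exposure = km * years                   # exposure in km-years
ruptures = round(3.1e-5 * exposure)     # ~179 ruptures over 2002-2013 (assumed)

rate = ruptures / exposure
# Approximate 95% Poisson interval on the count, propagated to the rate.
lo = (ruptures - 1.96 * math.sqrt(ruptures)) / exposure
hi = (ruptures + 1.96 * math.sqrt(ruptures)) / exposure
```

Even with ~180 events over twelve years, the normal-approximation interval spans roughly 2.7 to 3.6 × 10⁻⁵ per km-year, a useful reminder of the statistical width behind a single baseline rate.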
Csillik, O.; Evans, I. S.; Drăguţ, L.
2015-03-01
Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are also acceptable in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
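A minimal version of the skewness-minimizing Box-Cox selection (numpy only, with a grid search standing in for the authors' ArcGIS script tool, and a lognormal sample standing in for real slope gradients) might look like:

```python
import numpy as np

def skewness(x):
    """Sample skewness (population-moment form)."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return np.mean((x - m) ** 3) / s ** 3

def boxcox(x, lam):
    """Box-Cox transform for strictly positive x."""
    return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1) / lam

def best_lambda(x, grid=np.linspace(-2, 2, 401)):
    """Grid search for the Box-Cox lambda minimizing |skewness|."""
    skews = [abs(skewness(boxcox(x, l))) for l in grid]
    return grid[int(np.argmin(skews))]

# Long-tailed synthetic "slope gradient" sample; since it is lognormal,
# lambda near 0 (the log transform) should approximately symmetrize it.
rng = np.random.default_rng(2)
slopes = rng.lognormal(mean=1.0, sigma=0.6, size=5000)
lam = best_lambda(slopes)
```

The same evaluate-then-transform-if-required pattern generalizes: run the search per study area and apply the selected transform only when the raw skewness is unacceptable.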
Statistical analyses of the performance of Macedonian investment and pension funds
Directory of Open Access Journals (Sweden)
Petar Taleski
2015-10-01
The foundation of post-modern portfolio theory is creating a portfolio based on a desired target return. This applies specifically to the performance of investment and pension funds that must provide a rate of return meeting payment requirements. A desired target return is the goal of an investment or pension fund and the primary benchmark used to measure performance, for dynamic monitoring, and for evaluation of the risk–return ratio of investment funds. The analysis in this paper is based on monthly returns of Macedonian investment and pension funds (June 2011 - June 2014). The analysis utilizes basic but highly informative statistical characteristics such as skewness, kurtosis, the Jarque–Bera test, and Chebyshev's inequality. The objective of this study is to perform a thorough analysis, utilizing the above-mentioned and other statistical techniques (Sharpe, Sortino, omega, upside potential, Calmar, Sterling), to draw relevant conclusions regarding the risks and characteristic moments of Macedonian investment and pension funds. Pension funds are the second largest segment of the financial system and have great potential for further growth due to constant inflows from pension insurance. The importance of investment funds for the financial system in the Republic of Macedonia is still small, although open-end investment funds have been the fastest growing segment of the financial system. Statistical analysis has shown that pension funds delivered a significantly positive volatility-adjusted risk premium in the analyzed period, more so than investment funds.
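Several of the named statistics are straightforward to compute from monthly returns. The sketch below uses synthetic returns, and its conventions (sqrt(12) annualization, zero target for Sortino) are common assumptions rather than choices documented in the paper:

```python
import numpy as np

def sharpe(r, rf=0.0):
    """Annualized Sharpe ratio from monthly returns (sqrt(12) scaling)."""
    ex = r - rf
    return np.sqrt(12) * ex.mean() / ex.std(ddof=1)

def sortino(r, target=0.0):
    """Sortino ratio: excess return over downside deviation."""
    downside = np.minimum(r - target, 0.0)
    dd = np.sqrt(np.mean(downside ** 2))
    return np.sqrt(12) * (r.mean() - target) / dd

def jarque_bera(r):
    """Jarque-Bera statistic from sample skewness and excess kurtosis."""
    n = len(r)
    m, s = r.mean(), r.std()
    skew = np.mean((r - m) ** 3) / s ** 3
    kurt = np.mean((r - m) ** 4) / s ** 4
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

rng = np.random.default_rng(3)
monthly = rng.normal(0.006, 0.02, 36)   # 36 months of hypothetical fund returns
```

With only 36 monthly observations, as in the June 2011 to June 2014 window, the Jarque-Bera statistic has limited power, which is one reason the paper complements it with Chebyshev-type bounds.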
Estimation of measurement variance in the context of environment statistics
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment and its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics would be required to produce higher quality statistical information. For this, timely, reliable and comparable data are needed. The lack of proper and uniform definitions and of unambiguous classifications poses serious problems in procuring quality data, and these cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered here is two-stage sampling.
Measures of trajectory ensemble disparity in nonequilibrium statistical dynamics
International Nuclear Information System (INIS)
Crooks, Gavin E; Sivak, David A
2011-01-01
Many interesting divergence measures between conjugate ensembles of nonequilibrium trajectories can be experimentally determined from the work distribution of the process. Herein, we review the statistical and physical significance of several of these measures, in particular the relative entropy (dissipation), Jeffreys divergence (hysteresis), Jensen–Shannon divergence (time-asymmetry), Chernoff divergence (work cumulant generating function), and Rényi divergence
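As a concrete example of one listed measure, the Jensen-Shannon divergence (the hysteresis measure) between histogrammed forward and reverse work distributions can be computed directly. The two Gaussian-shaped histograms below are synthetic stand-ins for measured work distributions:

```python
import numpy as np

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (in nats) between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Histogrammed forward and (sign-reversed) reverse work distributions of a
# hypothetical driven process; more dissipation means larger divergence.
bins = np.linspace(-5, 10, 60)
centers = 0.5 * (bins[:-1] + bins[1:])
fwd = np.exp(-0.5 * (centers - 2.0) ** 2)   # forward work peaked at +2
rev = np.exp(-0.5 * (centers + 1.0) ** 2)   # reverse work peaked at -1
js = jensen_shannon(fwd, rev)
```

The Jensen-Shannon divergence is bounded by ln 2, which makes it convenient for comparing the time-asymmetry of different protocols on a common scale.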
The Use of Statistical Process Control Tools for Analysing Financial Statements
Directory of Open Access Journals (Sweden)
Niezgoda Janusz
2017-06-01
This article presents a proposed application of one type of modified Shewhart control chart to the monitoring of changes in the aggregated level of financial ratios. The x̅ control chart has been used as the basis of the analysis. The examined variable from the sample in this chart is normally the arithmetic mean. The author proposes to substitute it with a synthetic measure determined from selected ratios. As the ratios are expressed in different units and have different characters, the author applies standardisation. The results of selected comparative analyses are presented for both bankrupt and non-bankrupt firms. They indicate the possibility of using control charts as an auxiliary tool in financial analyses.
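The proposed substitution, standardizing heterogeneous ratios and charting a synthetic measure with Shewhart-style 3-sigma limits, can be sketched as follows. The panel of firms and ratios is synthetic, and the equal-weight aggregation is an assumption standing in for the author's synthetic measure:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical panel: 20 firms x 4 financial ratios. Standardize each
# ratio (z-scores), then aggregate into one synthetic measure per firm.
ratios = rng.normal(size=(20, 4))
z = (ratios - ratios.mean(axis=0)) / ratios.std(axis=0)
synthetic = z.mean(axis=1)

# Shewhart-style control limits on the synthetic measure.
center = synthetic.mean()
sigma = synthetic.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma
out_of_control = (synthetic > ucl) | (synthetic < lcl)
```

Standardisation is what makes the aggregation meaningful: without it, a ratio measured in percent and one measured in days would dominate or vanish from the synthetic measure purely by scale.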
Statistical analysis of lightning electric field measured under Malaysian condition
Salimi, Behnam; Mehranzamir, Kamyar; Abdul-Malek, Zulkurnain
2014-02-01
Lightning is an electrical discharge during thunderstorms that can occur either within clouds (inter-cloud) or between clouds and ground (cloud-to-ground). Lightning characteristics and their statistical information are the foundation for the design of lightning protection systems as well as for the calculation of lightning radiated fields. Nowadays, there are various techniques to detect lightning signals and to determine the various parameters produced by a lightning flash, each with its own claimed performance. In this paper, the characteristics of captured broadband electric fields generated by cloud-to-ground lightning discharges in the south of Malaysia are analyzed. A total of 130 cloud-to-ground lightning flashes from 3 separate thunderstorm events (each lasting about 4-5 hours) were examined. Statistical analyses of the following signal parameters are presented: preliminary breakdown pulse train time duration, time interval between preliminary breakdown and return stroke, stroke multiplicity, and the percentage of single-stroke flashes. The BIL model is also introduced to characterize the lightning signature patterns. The statistical analyses show that about 79% of lightning signals fit well with the BIL model. The maximum and minimum preliminary breakdown time durations of the observed lightning signals are 84 ms and 560 µs, respectively. The results show that 7.6% of the flashes were single-stroke flashes, and the maximum number of strokes recorded was 14 strokes per flash. A preliminary breakdown signature can be identified in more than 95% of the flashes.
International Nuclear Information System (INIS)
Edjabou, Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn; Petersen, Claus; Scheutz, Charlotte; Astrup, Thomas Fruergaard
2015-01-01
Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total, 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level hierarchy (tiered approach), facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, i.e. in detail, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single
Testing Genetic Pleiotropy with GWAS Summary Statistics for Marginal and Conditional Analyses.
Deng, Yangqing; Pan, Wei
2017-12-01
There is growing interest in testing genetic pleiotropy, which occurs when a single genetic variant influences multiple traits. Several methods have been proposed; however, these methods have some limitations. First, all of the proposed methods are based on the use of individual-level genotype and phenotype data; in contrast, for logistical and other reasons, only summary statistics of univariate SNP-trait associations are typically available, based on meta- or mega-analyzed large genome-wide association study (GWAS) data. Second, existing tests are based on marginal pleiotropy, which cannot distinguish between direct and indirect associations of a single genetic variant with multiple traits due to correlations among the traits. Hence, it is useful to consider conditional analysis, in which a subset of traits is adjusted for another subset of traits. For example, in spite of the substantial lowering of low-density lipoprotein cholesterol (LDL) with statin therapy, some patients still maintain high residual cardiovascular risk, and, for these patients, it might be helpful to reduce their triglyceride (TG) level. For this purpose, in order to identify new therapeutic targets, it would be useful to identify genetic variants with pleiotropic effects on LDL and TG after adjusting the latter for LDL; otherwise, a pleiotropic effect of a genetic variant detected by a marginal model could simply be due to its association with LDL only, given the well-known correlation between the two types of lipids. Here, we develop a new pleiotropy testing procedure based only on GWAS summary statistics that can be applied for both marginal and conditional analysis. Although the main technical development is based on published union-intersection testing methods, care is needed in specifying conditional models to avoid invalid statistical estimation and inference. In addition to the previously used likelihood ratio test, we also propose using generalized estimating equations under the
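The marginal union-intersection idea, declaring pleiotropy only when a variant is associated with both traits, reduces for two independent traits to testing the weaker of the two summary z-scores. This sketch assumes trait independence and omits the conditional-analysis machinery the paper develops:

```python
import math

def norm_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def pleiotropy_pvalue(z1, z2):
    """Union-intersection test of marginal pleiotropy from two GWAS summary
    z-scores (independent traits assumed): the variant must be associated
    with BOTH traits, so the test statistic is min(|z1|, |z2|)."""
    t = min(abs(z1), abs(z2))
    return 2 * norm_sf(t)   # two-sided p-value for the weaker association

# A variant strongly associated with both traits vs. with only one:
p_both = pleiotropy_pvalue(5.2, 4.8)
p_one = pleiotropy_pvalue(5.2, 0.3)
```

The asymmetry is the point: a huge z-score on one trait cannot compensate for a null z-score on the other, which is what distinguishes pleiotropy testing from a global association test.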
Statistical evaluation and measuring strategy for extremely small line shifts
International Nuclear Information System (INIS)
Hansen, P.G.
1978-01-01
For a measuring situation limited by counting statistics, but where the level of precision is such that possible systematic errors are a major concern, it is proposed to determine the position of a spectral line from a measured line segment by applying a bias correction to the centre of gravity of the segment. This procedure is statistically highly efficient and not sensitive to small errors in assumptions about the line shape. The counting strategy for an instrument that takes data point by point is also considered. It is shown that an optimum ("two-point") strategy exists; a scan of the central part of the line is 68% efficient by this standard. (Auth.)
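The bias-corrected centre-of-gravity idea can be illustrated on a Gaussian line measured over an off-centre window. The line shape, window, and single correction step are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def centroid(x, y):
    """Centre of gravity of samples y taken at positions x."""
    return np.sum(x * y) / np.sum(y)

# Gaussian line of known shape, sampled point by point over a window that
# is NOT centred on the peak, so the raw centre of gravity is biased.
sigma, true_pos = 1.0, 0.0
x = np.linspace(-1.0, 2.0, 31)          # asymmetric measurement window
y = np.exp(-0.5 * ((x - true_pos) / sigma) ** 2)

raw = centroid(x, y)                    # biased towards the window centre

# Bias correction: evaluate the same centroid on the assumed line shape
# placed at the raw estimate, and subtract the predicted offset.
model = np.exp(-0.5 * ((x - raw) / sigma) ** 2)
bias = centroid(x, model) - raw
corrected = raw - bias
```

In this example one correction step removes roughly half of the window-induced bias; in practice the step could be iterated, and the abstract's claim is that the result is insensitive to small errors in the assumed shape.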
Tomlinson, Alan; Hair, Mario; McFadyen, Angus
2013-10-01
Dry eye is a multifactorial disease whose diagnosis and treatment monitoring require a broad spectrum of test measures. However, studies have typically reported improvements in individual measures with treatment. Alternative approaches involve multiple, combined outcomes being assessed by different statistical analyses. In order to assess the effect of various statistical approaches to the use of single and combined test measures in dry eye, this review reanalyzed measures from two previous studies (osmolarity, evaporation, tear turnover rate, and lipid film quality). These analyses assessed the measures as single variables within groups, pre- and post-intervention with a lubricant supplement, by creating combinations of these variables, and by validating these combinations with the combined sample of data from all groups of dry eye subjects. The effectiveness of single measures and combinations in the diagnosis of dry eye was also considered. Copyright © 2013. Published by Elsevier Inc.
Directory of Open Access Journals (Sweden)
Sean Ekins
Dispensing and dilution processes may profoundly influence estimates of the biological activity of compounds. Published data show that Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude, with no correlation or ranking of datasets. We generated computational 3D pharmacophores based on data derived by both acoustic and tip-based transfer. The computed pharmacophores differ significantly depending upon dispensing and dilution methods. The acoustic dispensing-derived pharmacophore correctly identified active compounds in a subsequent test set where the tip-based method failed. Data from acoustic dispensing generates a pharmacophore containing two hydrophobic features, one hydrogen bond donor and one hydrogen bond acceptor. This is consistent with X-ray crystallography studies of ligand-protein interactions and automatically generated pharmacophores derived from this structural data. In contrast, the tip-based data suggest a pharmacophore with two hydrogen bond acceptors, one hydrogen bond donor and no hydrophobic features. This pharmacophore is inconsistent with the X-ray crystallographic studies and automatically generated pharmacophores. In short, traditional dispensing processes are another important source of error in high-throughput screening that impacts computational and statistical analyses. These findings have far-reaching implications in biological research.
Statistical Measure Of Association Between Smoking And Lung ...
African Journals Online (AJOL)
Statistical Measure Of Association Between Smoking And Lung Cancer In Abakaliki, Ebonyi State Nigeria. ... East African Journal of Public Health ... To investigate the havoc caused by all these on people, a questionnaire was distributed among smokers and non-smokers in various areas of specialization and habitation.
Measuring University Students' Approaches to Learning Statistics: An Invariance Study
Chiesi, Francesca; Primi, Caterina; Bilgin, Ayse Aysin; Lopez, Maria Virginia; del Carmen Fabrizio, Maria; Gozlu, Sitki; Tuan, Nguyen Minh
2016-01-01
The aim of the current study was to provide evidence that an abbreviated version of the Approaches and Study Skills Inventory for Students (ASSIST) was invariant across different languages and educational contexts in measuring university students' learning approaches to statistics. Data were collected on samples of university students attending…
Concepts and recent advances in generalized information measures and statistics
Kowalski, Andres M
2013-01-01
Since the introduction of the information measure widely known as Shannon entropy, quantifiers based on information theory and concepts such as entropic forms and statistical complexities have proven to be useful in diverse scientific research fields. This book contains introductory tutorials suitable for the general reader, together with chapters dedicated to the basic concepts of the most frequently employed information measures or quantifiers and their recent applications to different areas, including physics, biology, medicine, economics, communication and social sciences. As these quantif
McKinley, C. C.; Scudder, R.; Thomas, D. J.
2016-12-01
The Neodymium Isotopic composition (Nd IC) of oxide coatings has been applied as a tracer of water mass composition and used to address fundamental questions about past ocean conditions. The leached authigenic oxide coating from marine sediment is widely assumed to reflect the dissolved trace metal composition of the bottom water interacting with sediment at the seafloor. However, recent studies have shown that readily reducible sediment components, in addition to trace metal fluxes from the pore water, are incorporated into the bottom water, influencing the trace metal composition of leached oxide coatings. This challenges the prevailing application of the authigenic oxide Nd IC as a proxy of seawater composition. Therefore, it is important to identify the component end-members that create sediments of different lithology and determine if, or how, they might contribute to the Nd IC of oxide coatings. To investigate lithologic influence on the results of sequential leaching, we selected two sites with complete bulk sediment statistical characterization. Site U1370, in the South Pacific Gyre, is predominantly composed of rhyolite (~60%) and has a distinguishable (~10%) Fe-Mn oxyhydroxide component (Dunlea et al., 2015). Site 1149, near the Izu-Bonin Arc, is predominantly composed of dispersed ash (~20-50%) and eolian dust from Asia (~50-80%) (Scudder et al., 2014). We perform a two-step leaching procedure: a 14 mL leach of 0.02 M hydroxylamine hydrochloride (HH) in 20% acetic acid buffered to pH 4 for one hour, targeting metals bound to the Fe- and Mn-oxide fractions, and a second HH leach for 12 hours, designed to remove any remaining oxides from the residual component. We analyze all three resulting fractions for a large suite of major, trace and rare earth elements; a subset of the samples is also analyzed for Nd IC. We use multivariate statistical analyses of the resulting geochemical data to identify how each component of the sediment partitions across the sequential
Review of Statistical Analyses Resulting from Performance of HLDWD-DWPF-005
International Nuclear Information System (INIS)
Beck, R.S.
1997-01-01
The Engineering Department at the Defense Waste Processing Facility (DWPF) has reviewed two reports from the Statistical Consulting Section (SCS) involving the statistical analysis of test results for the analysis of small sample inserts (references 1 and 2). The test results cover two proposed analytical methods: a room temperature hydrofluoric acid preparation (Cold Chem) and a sodium peroxide/sodium hydroxide fusion modified for insert samples (Modified Fusion). The reports support implementation of the proposed small sample containers and analytical methods at DWPF. Hydragard sampler valve performance was typical of previous results (reference 3). Using an element from each major feed stream, lithium from the frit and iron from the sludge, the sampler was determined to deliver a uniform mixture into either sample container. The lithium to iron ratios were equivalent for the standard 15 ml vial and the 3 ml insert. The proposed methods provide analyses equivalent to the current methods. The biases associated with the proposed methods on a vitrified basis are less than 5% for major elements. The sum of oxides for the proposed methods compares favorably with the sum of oxides for the conventional methods. However, the average sum of oxides for the Cold Chem method was 94.3%, which is below the minimum required recovery of 95%. Both proposed methods, Cold Chem and Modified Fusion, will initially be required to provide an accurate analysis that routinely meets the 95% and 105% average sum of oxides limits for the Product Composition Control System (PCCS). Issues to be resolved during phased implementation are as follows: (1) determine the calcine/vitrification factor for radioactive feed; (2) evaluate the covariance matrix change against process operating ranges to determine optimum sample size; (3) evaluate sources of low sum of oxides; and (4) improve remote operability of production versions of equipment and instruments for installation in 221-S. The specifics of
International Nuclear Information System (INIS)
Bithell, J.F.; Stone, R.A.
1989-01-01
This paper sets out to show that the epidemiological methods most commonly used can be improved. When analysing geographical data it is necessary to consider location. The most obvious quantification of location is ranked distance, though other measures which may be more meaningful in relation to aetiology may be substituted. A test based on distance ranks, the "Poisson maximum test", depends on the maximum of the observed relative risk in regions of increasing size, but with the significance level adjusted for selection. Applying this test to data from Sellafield and Sizewell shows that the excess of leukaemia incidence observed at Seascale, near Sellafield, is not an artefact due to data selection by region, and that the excess probably results from a genuine, if as yet unidentified, cause (there being little evidence of any other locational association once the Seascale cases have been removed). So far as Sizewell is concerned, geographical proximity to the nuclear power station does not seem particularly important. (author)
Event group importance measures for top event frequency analyses
International Nuclear Information System (INIS)
1995-01-01
Three traditional importance measures, risk reduction, partial derivative, and variance reduction, have been extended to permit analyses of the relative importance of groups of underlying failure rates to the frequencies of resulting top events. The partial derivative importance measure was extended by assessing the contribution of a group of events to the gradient of the top event frequency. Given the moments of the distributions that characterize the uncertainties in the underlying failure rates, the expectation value of the top event frequency, its variance, and all of the new group importance measures can be quantified exactly for two familiar cases: (1) when all underlying failure rates are presumed independent, and (2) when pairs of failure rates based on common data are treated as being equal (totally correlated). In these cases, the new importance measures, which can also be applied to assess the importance of individual events, obviate the need for Monte Carlo sampling. The event group importance measures are illustrated using a small example problem and demonstrated by applications made as part of a major reactor facility risk assessment. These illustrations and applications indicate both the utility and the versatility of the event group importance measures.
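Two of the extended measures, group risk reduction and the group partial-derivative (gradient) importance, can be sketched for a toy top-event expression built from three hypothetical minimal cut sets. The cut-set structure and rates are assumptions for illustration only:

```python
# Top-event frequency as a sum of minimal-cut-set products of basic-event
# failure rates (a toy model; the measures follow the text).
def top_freq(rates):
    a, b, c, d = rates
    return a * b + c * d + a * d        # three hypothetical cut sets

def group_risk_reduction(rates, group):
    """Risk-reduction importance of a group: the drop in top-event
    frequency when every rate in the group is set to zero."""
    reduced = [0.0 if i in group else r for i, r in enumerate(rates)]
    return top_freq(rates) - top_freq(reduced)

def group_partial_derivative(rates, group, eps=1e-8):
    """Gradient-based group importance: sum of partial derivatives of the
    top-event frequency with respect to the rates in the group."""
    total = 0.0
    for i in group:
        bumped = list(rates)
        bumped[i] += eps
        total += (top_freq(bumped) - top_freq(rates)) / eps
    return total

rates = [1e-3, 2e-3, 5e-4, 4e-3]        # hypothetical failure rates (per yr)
rr = group_risk_reduction(rates, {0, 1})
```

Setting a whole group of rates to zero at once captures joint contributions (here the cut sets a·b and a·d both vanish), which is exactly what summing single-event importances would miss.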
Event group importance measures for top event frequency analyses
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-07-31
Three traditional importance measures, risk reduction, partial derivative, and variance reduction, have been extended to permit analyses of the relative importance of groups of underlying failure rates to the frequencies of resulting top events. The partial derivative importance measure was extended by assessing the contribution of a group of events to the gradient of the top event frequency. Given the moments of the distributions that characterize the uncertainties in the underlying failure rates, the expectation values of the top event frequency, its variance, and all of the new group importance measures can be quantified exactly for two familiar cases: (1) when all underlying failure rates are presumed independent, and (2) when pairs of failure rates based on common data are treated as being equal (totally correlated). In these cases, the new importance measures, which can also be applied to assess the importance of individual events, obviate the need for Monte Carlo sampling. The event group importance measures are illustrated using a small example problem and demonstrated by applications made as part of a major reactor facility risk assessment. These illustrations and applications indicate both the utility and the versatility of the event group importance measures.
MASER: Measuring, Analysing, Simulating low frequency Radio Emissions.
Cecconi, B.; Le Sidaner, P.; Savalle, R.; Bonnin, X.; Zarka, P. M.; Louis, C.; Coffre, A.; Lamy, L.; Denis, L.; Griessmeier, J. M.; Faden, J.; Piker, C.; André, N.; Genot, V. N.; Erard, S.; King, T. A.; Mafi, J. N.; Sharlow, M.; Sky, J.; Demleitner, M.
2017-12-01
The MASER (Measuring, Analysing and Simulating Radio Emissions) project provides a comprehensive infrastructure dedicated to low frequency radio emissions (typically Radioastronomie de Nançay and the CDPP deep archive. These datasets include Cassini/RPWS, STEREO/Waves, WIND/Waves, Ulysses/URAP, ISEE3/SBH, Voyager/PRA, the Nançay Decameter Array (Routine, NewRoutine, JunoN), the RadioJove archive, the Swedish Viking mission, Interball/POLRAD... MASER also includes a Python software library for reading raw data.
Lowe, David J.; Pearce, Nicholas J. G.; Jorgensen, Murray A.; Kuehn, Stephen C.; Tryon, Christian A.; Hayward, Chris L.
2017-11-01
We define tephras and cryptotephras and their components (mainly ash-sized particles of glass ± crystals in distal deposits) and summarize the basis of tephrochronology as a chronostratigraphic correlational and dating tool for palaeoenvironmental, geological, and archaeological research. We then document and appraise recent advances in analytical methods used to determine the major, minor, and trace elements of individual glass shards from tephra or cryptotephra deposits to aid their correlation and application. Protocols developed recently for the electron probe microanalysis of major elements in individual glass shards help to improve data quality and standardize reporting procedures. A narrow electron beam (diameter ∼3-5 μm) can now be used to analyze smaller glass shards than previously attainable. Reliable analyses of 'microshards' (defined here as glass shards T2 test). Randomization tests can be used where distributional assumptions such as multivariate normality underlying parametric tests are doubtful. Compositional data may be transformed and scaled before being subjected to multivariate statistical procedures including calculation of distance matrices, hierarchical cluster analysis, and PCA. Such transformations may make the assumption of multivariate normality more appropriate. A sequential procedure using Mahalanobis distance and the Hotelling two-sample T2 test is illustrated using glass major element data from trachytic to phonolitic Kenyan tephras. All these methods require a broad range of high-quality compositional data which can be used to compare 'unknowns' with reference (training) sets that are sufficiently complete to account for all possible correlatives, including tephras with heterogeneous glasses that contain multiple compositional groups. Currently, incomplete databases are tending to limit correlation efficacy. The development of an open, online global database to facilitate progress towards integrated, high
Papageorgiou, Spyridon N; Papadopoulos, Moschos A; Athanasiou, Athanasios E
2014-02-01
Ideally, meta-analyses (MAs) should consolidate the characteristics of orthodontic research in order to produce an evidence-based answer. However, severe flaws are frequently observed in most of them. The aim of this study was to evaluate the statistical methods, the methodology, and the quality characteristics of orthodontic MAs and to assess their reporting quality in recent years. Electronic databases were searched for MAs (with or without a proper systematic review) in the field of orthodontics, indexed up to 2011. The AMSTAR tool was used for quality assessment of the included articles. Data were analyzed with Student's t-test, one-way ANOVA, and generalized linear modelling. Risk ratios with 95% confidence intervals were calculated to represent changes over the years in the reporting of key items associated with quality. A total of 80 MAs with 1086 primary studies were included in this evaluation. Using the AMSTAR tool, 25 (27.3%) of the MAs were found to be of low quality, 37 (46.3%) of medium quality, and 18 (22.5%) of high quality. Specific characteristics like explicit protocol definition, extensive searches, and quality assessment of included trials were associated with a higher AMSTAR score. Model selection and dealing with heterogeneity or publication bias were often problematic in the identified reviews. The number of published orthodontic MAs is constantly increasing, while their overall quality ranges from low to medium. Although the number of MAs of medium and high level seems lately to be rising, several other aspects need improvement to increase their overall quality.
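The risk ratios with 95% confidence intervals used to track changes in reporting can be computed with the standard log-scale (Katz) approximation. The counts below are hypothetical, not figures from the study.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of event proportions a/n1 vs b/n2 with a 95%
    confidence interval built on the log scale (Katz method)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(rr)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# hypothetical example: a key item reported in 30/40 recent MAs vs 15/40 older MAs
rr, lo, hi = risk_ratio_ci(30, 40, 15, 40)
```

A confidence interval excluding 1 would indicate a statistically discernible change in reporting of that item between the two periods.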
Realistic thermodynamic and statistical-mechanical measures for neural synchronization.
Kim, Sang-Yoon; Lim, Woochang
2014-04-15
Synchronized brain rhythms, associated with diverse cognitive functions, have been observed in electrical recordings of brain activity. Neural synchronization may be well described by using the population-averaged global potential VG in computational neuroscience. The time-averaged fluctuation of VG plays the role of a "thermodynamic" order parameter O used for describing the synchrony-asynchrony transition in neural systems. Population spike synchronization may be well visualized in the raster plot of neural spikes. The degree of neural synchronization seen in the raster plot is well measured in terms of a "statistical-mechanical" spike-based measure Ms introduced by considering the occupation and the pacing patterns of spikes. The global potential VG is also used to give a reference global cycle for the calculation of Ms. Hence, VG becomes an important collective quantity because it is associated with the calculation of both O and Ms. However, it is practically difficult to obtain VG directly in real experiments. To overcome this difficulty, instead of VG, we employ the instantaneous population spike rate (IPSR), which can be obtained in experiments, and develop realistic thermodynamic and statistical-mechanical measures, based on the IPSR, to permit practical characterization of neural synchronization in both computational and experimental neuroscience. In particular, more accurate characterization of weak sparse spike synchronization can be achieved in terms of the realistic statistical-mechanical IPSR-based measure, in comparison with the conventional measure based on VG.
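The instantaneous population spike rate at the heart of the proposed measures can be estimated from a spike raster by binning spikes across the population and smoothing with a Gaussian kernel. This is a generic sketch; the bin width, kernel width, and toy raster are assumptions, not the authors' exact estimator.

```python
import math

def ipsr(spike_trains, t_max, dt=1.0, sigma=2.0):
    """Instantaneous population spike rate: bin all spikes across the
    population, then smooth with a Gaussian kernel of width sigma.
    Times are in ms; the result is in spikes per ms per neuron."""
    n_bins = int(t_max / dt)
    counts = [0.0] * n_bins
    for train in spike_trains:
        for t in train:
            b = int(t / dt)
            if 0 <= b < n_bins:
                counts[b] += 1
    half = int(4 * sigma / dt)  # truncate the kernel at 4 sigma
    kernel = [math.exp(-0.5 * ((k * dt) / sigma) ** 2)
              for k in range(-half, half + 1)]
    norm = sum(kernel)
    n = len(spike_trains)
    rate = []
    for i in range(n_bins):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < n_bins:
                acc += w * counts[idx]
        rate.append(acc / (norm * n * dt))
    return rate

# three neurons spiking near-synchronously around t = 10 ms and t = 30 ms
trains = [[10.0, 30.0], [11.0, 29.0], [10.0, 31.0]]
r = ipsr(trains, t_max=40.0)
```

Peaks in the smoothed rate mark the synchronous population bursts visible in the raster, which is what the IPSR-based measures then quantify.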
Signal processing and statistical analysis of spaced-based measurements
International Nuclear Information System (INIS)
Iranpour, K.
1996-05-01
The report deals with data obtained by the ROSE rocket project. This project was designed to investigate the low altitude auroral instabilities in the electrojet region. The spectral and statistical analyses indicate the existence of unstable waves in the ionized gas in the region. An experimentally obtained dispersion relation for these waves was established. It was demonstrated that the characteristic phase velocities are much lower than expected from the standard theoretical results. The analysis of the ROSE data indicates the cascading of energy from lower to higher frequencies. 44 refs., 54 figs
CLASSIFYING BENIGN AND MALIGNANT MASSES USING STATISTICAL MEASURES
Directory of Open Access Journals (Sweden)
B. Surendiran
2011-11-01
Breast cancer is the primary and most common disease found in women, causing the second highest rate of death after lung cancer. The digital mammogram is an X-ray of the breast captured for analysis, interpretation, and diagnosis. According to the Breast Imaging Reporting and Data System (BIRADS), benign and malignant masses can be differentiated by their shape, size, and density, which is how radiologists visualize mammograms. According to BIRADS mass shape characteristics, benign masses tend to be round, oval, or lobular in shape, while malignant masses are lobular or irregular in shape. Measuring regular and irregular shapes mathematically is a difficult task, since there is no single measure that differentiates the various shapes. In this paper, the malignant and benign masses present in mammograms are classified using Hue, Saturation and Value (HSV) weight-function-based statistical measures. The weight function is robust against noise and captures the degree of gray content of the pixel. The statistical measures use the gray weight value instead of the gray pixel value to effectively discriminate masses. The 233 mammograms from the Digital Database for Screening Mammography (DDSM) benchmark dataset were used. The PASW data mining modeler was used to construct a neural network for identifying the importance of the statistical measures. Based on the important statistical measures obtained, a C5.0 tree was constructed with a 60-40 data split. The experimental results are encouraging and agree with the standard specified by the American College of Radiology BIRADS system.
Statistical modeling of optical attenuation measurements in continental fog conditions
Khan, Muhammad Saeed; Amin, Muhammad; Awan, Muhammad Saleem; Minhas, Abid Ali; Saleem, Jawad; Khan, Rahimdad
2017-03-01
Free-space optics is an innovative technology that uses the atmosphere as a propagation medium to provide higher data rates. These links are heavily affected by the atmospheric channel, mainly by fog and clouds that scatter and even block the modulated beam of light from reaching the receiver end, imposing severe attenuation. A comprehensive statistical study of fog effects and a deep physical understanding of the fog phenomena are very important for suggesting improvements (reliability and efficiency) in such communication systems. In this regard, six months of real-time measured fog attenuation data are considered and statistically investigated. A detailed statistical analysis of each fog event in that period is presented; the best probability density functions are selected on the basis of the Akaike information criterion, while the estimates of unknown parameters are computed by the maximum likelihood estimation technique. The results show that most fog attenuation events follow a normal mixture distribution and some follow the Weibull distribution.
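Akaike-based selection between candidate distributions amounts to comparing maximized log-likelihoods penalized by parameter count. The two candidate fits and the synthetic attenuation sample below are illustrative assumptions, not the paper's data or candidate set.

```python
import math
import random

def aic_normal(x):
    """AIC of a normal fit (MLE: sample mean and variance), k = 2 parameters."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    ll = -0.5 * n * (math.log(2 * math.pi * var) + 1)  # maximized log-likelihood
    return 2 * 2 - 2 * ll

def aic_exponential(x):
    """AIC of an exponential fit (MLE rate: 1/mean), k = 1 parameter."""
    n = len(x)
    lam = n / sum(x)
    ll = n * math.log(lam) - lam * sum(x)
    return 2 * 1 - 2 * ll

# synthetic 'fog attenuation' sample (dB/km); illustrative only
random.seed(7)
atten = [random.gauss(12.0, 2.0) for _ in range(500)]
best = min([("normal", aic_normal(atten)), ("exponential", aic_exponential(atten))],
           key=lambda kv: kv[1])
```

The distribution with the lowest AIC wins; in practice one would compare the full candidate set (normal mixture, Weibull, lognormal, ...) rather than just two families.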
Definition and measurement of statistical gloss parameters from curved objects
Energy Technology Data Exchange (ETDEWEB)
Kuivalainen, Kalle; Oksman, Antti; Peiponen, Kai-Erik
2010-09-20
Gloss standards are commonly defined for gloss measurement from flat surfaces, and, accordingly, glossmeters are typically developed for flat objects. However, gloss inspection of convex, concave, and small products is also important. In this paper, we define statistical gloss parameters for curved objects and measure gloss data on convex and concave surfaces using two different diffractive-optical-element-based glossmeters. Examples of measurements with the two diffractive-optical-element-based glossmeters are given for convex and concave aluminum pipe samples with and without paint. The defined gloss parameters for curved objects are useful in the characterization of the surface quality of metal pipes and other objects.
Definition and measurement of statistical gloss parameters from curved objects
International Nuclear Information System (INIS)
Kuivalainen, Kalle; Oksman, Antti; Peiponen, Kai-Erik
2010-01-01
Gloss standards are commonly defined for gloss measurement from flat surfaces, and, accordingly, glossmeters are typically developed for flat objects. However, gloss inspection of convex, concave, and small products is also important. In this paper, we define statistical gloss parameters for curved objects and measure gloss data on convex and concave surfaces using two different diffractive-optical-element-based glossmeters. Examples of measurements with the two diffractive-optical-element-based glossmeters are given for convex and concave aluminum pipe samples with and without paint. The defined gloss parameters for curved objects are useful in the characterization of the surface quality of metal pipes and other objects.
Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati
2017-09-01
One of the important skills required of any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This will enable them to arrive at a conclusion and make a significant contribution and decision for society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on testing of hypothesis which required them to solve the problems using the confidence interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems and is listed as one of the difficult concepts for students to grasp. The objectives of this study are to explore students' perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students' perceived and actual ability were measured based on instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is developed based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving statistical problems using the confidence interval and p-value approaches even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students attempted these problems correctly, for reasons that include their lack of understanding of confidence intervals and probability values.
Essentials of Excel, Excel VBA, SAS and Minitab for statistical and financial analyses
Lee, Cheng-Few; Chang, Jow-Ran; Tai, Tzu
2016-01-01
This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, MINITAB, and SAS. Every chapter in this textbook engages the reader with data of individual stock, stock indices, options, and futures. One studies and uses statistics to learn how to study, analyze, and understand a data set of particular interest. Some of the more popular statistical programs that have been developed to use statistical and computational methods to analyze data sets are SAS, SPSS, and MINITAB. Of those, we look at MINITAB and SAS in this textbook. One of the main reasons to use MINITAB is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel to do statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities...
Energy Technology Data Exchange (ETDEWEB)
NONE
2011-07-01
This report is an assessment of the current model and presentation form of bioenergy statistics. It proposes a revision and enhancement of both the collection and the representation of data. In the context of market development, both for energy in general and for bioenergy in particular, and of government targets, good bioenergy statistics form the basis for following up the objectives and means. (eb)
Jeffrey P. Prestemon
2009-01-01
Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
Statistical methods for assessing agreement between continuous measurements
DEFF Research Database (Denmark)
Sokolowski, Ineta; Hansen, Rikke Pilegaard; Vedsted, Peter
Background: Clinical research often involves the study of agreement amongst observers. Agreement can be measured in different ways, and one can obtain quite different values depending on which method one uses. Objective: We review the approaches that have been discussed to assess the agreement between continuous measures and discuss their strengths and weaknesses. Different methods are illustrated using actual data from the `Delay in diagnosis of cancer in general practice´ project in Aarhus, Denmark. Subjects and Methods: We use the weighted kappa statistic, intraclass correlation coefficient (ICC), concordance coefficient, Bland-Altman limits of agreement and percentage of agreement to assess the agreement between patient-reported delay and doctor-reported delay in diagnosis of cancer in general practice. Key messages: The correct statistical approach is not obvious. Many studies give the product...
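Of the methods listed, the Bland-Altman limits of agreement are the most compact to state: the mean difference (bias) and bias ± 1.96 SD of the paired differences. The delay numbers below are invented for illustration, not the Aarhus project data.

```python
import math

def bland_altman(x, y):
    """Bland-Altman limits of agreement between two continuous
    measurements of the same quantity: the mean difference (bias)
    and bias +/- 1.96 times the SD of the differences."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    bias = sum(d) / n
    sd = math.sqrt(sum((v - bias) ** 2 for v in d) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# illustrative patient-reported vs doctor-reported delay (days)
patient = [30, 45, 10, 60, 25, 90, 14, 40]
doctor  = [28, 50, 12, 55, 25, 95, 10, 38]
bias, lo, hi = bland_altman(patient, doctor)
```

A small bias with narrow limits indicates good agreement; wide limits reveal disagreement that a single correlation coefficient can hide.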
Development of 3D statistical mandible models for cephalometric measurements
International Nuclear Information System (INIS)
Kim, Sung Goo; Yi, Won Jin; Hwang, Soon Jung; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il; Hong, Helen; Yoo, Ji Hyun
2012-01-01
The aim of this study was to provide sex-matched three-dimensional (3D) statistical shape models of the mandible, which would provide cephalometric parameters for 3D treatment planning and cephalometric measurements in orthognathic surgery. The subjects used to create the 3D shape models of the mandible included 23 males and 23 females. The mandibles were segmented semi-automatically from 3D facial CT images. Each individual mandible shape was reconstructed as a 3D surface model, which was parameterized to establish correspondence between different individual surfaces. The principal component analysis (PCA) applied to all mandible shapes produced a mean model and characteristic models of variation. The cephalometric parameters were measured directly from the mean models to evaluate the 3D shape models. The means of the measured parameters were compared with those from other conventional studies. The male and female 3D statistical mean models were developed from 23 individual mandibles, respectively. The male and female characteristic shapes of variation produced by PCA showed a large variability included in the individual mandibles. The cephalometric measurements from the developed models were very close to those from some conventional studies. We described the construction of 3D mandibular shape models and presented the application of the 3D mandibular template in cephalometric measurements. Optimal reference models determined from variations produced by PCA could be used for craniofacial patients with various types of skeletal shape.
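The PCA step, a mean shape plus principal modes of variation over corresponded surfaces, reduces for landmark vectors to an eigen-decomposition of the sample covariance. The four two-dimensional "shapes" below are toy data (real mandible models use thousands of corresponded surface points), and the power-iteration solver is a minimal sketch.

```python
def pca_shape_model(shapes):
    """Build a point-distribution shape model: the mean shape plus the
    leading principal mode of variation, found by power iteration on
    the (uncentered-by-n) covariance X^T X of the centered shapes."""
    n, d = len(shapes), len(shapes[0])
    mean = [sum(s[j] for s in shapes) / n for j in range(d)]
    centered = [[s[j] - mean[j] for j in range(d)] for s in shapes]
    v = [1.0] * d
    for _ in range(200):
        # w = X^T (X v): one power-iteration step on X^T X
        xv = [sum(c[j] * v[j] for j in range(d)) for c in centered]
        w = [sum(centered[i][j] * xv[i] for i in range(n)) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

# toy 2D 'landmark' shapes varying almost entirely along the first coordinate
shapes = [[0.0, 0.0], [2.0, 0.1], [4.0, -0.1], [6.0, 0.0]]
mean, mode = pca_shape_model(shapes)
```

New shapes are then expressed as the mean plus weighted sums of the modes, which is what makes the model usable as a statistical template for cephalometric measurement.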
Development of 3D statistical mandible models for cephalometric measurements
Energy Technology Data Exchange (ETDEWEB)
Kim, Sung Goo; Yi, Won Jin; Hwang, Soon Jung; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il [School of Dentistry, Seoul National University, Seoul (Korea, Republic of); Hong, Helen; Yoo, Ji Hyun [Division of Multimedia Engineering, Seoul Women' s University, Seoul (Korea, Republic of)
2012-09-15
The aim of this study was to provide sex-matched three-dimensional (3D) statistical shape models of the mandible, which would provide cephalometric parameters for 3D treatment planning and cephalometric measurements in orthognathic surgery. The subjects used to create the 3D shape models of the mandible included 23 males and 23 females. The mandibles were segmented semi-automatically from 3D facial CT images. Each individual mandible shape was reconstructed as a 3D surface model, which was parameterized to establish correspondence between different individual surfaces. The principal component analysis (PCA) applied to all mandible shapes produced a mean model and characteristic models of variation. The cephalometric parameters were measured directly from the mean models to evaluate the 3D shape models. The means of the measured parameters were compared with those from other conventional studies. The male and female 3D statistical mean models were developed from 23 individual mandibles, respectively. The male and female characteristic shapes of variation produced by PCA showed a large variability included in the individual mandibles. The cephalometric measurements from the developed models were very close to those from some conventional studies. We described the construction of 3D mandibular shape models and presented the application of the 3D mandibular template in cephalometric measurements. Optimal reference models determined from variations produced by PCA could be used for craniofacial patients with various types of skeletal shape.
Buttigieg, Pier Luigi; Ramette, Alban Nicolas
2014-01-01
The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynami...
Statistical measures of Planck scale signal correlations in interferometers
Energy Technology Data Exchange (ETDEWEB)
Hogan, Craig J. [Univ. of Chicago, Chicago, IL (United States); Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Kwon, Ohkyung [Univ. of Chicago, Chicago, IL (United States)
2015-06-22
A model-independent statistical framework is presented to interpret data from systems where the mean time derivative of positional cross correlation between world lines, a measure of spreading in a quantum geometrical wave function, is measured with a precision smaller than the Planck time. The framework provides a general way to constrain possible departures from perfect independence of classical world lines, associated with Planck scale bounds on positional information. A parametrized candidate set of possible correlation functions is shown to be consistent with the known causal structure of the classical geometry measured by an apparatus, and the holographic scaling of information suggested by gravity. Frequency-domain power spectra are derived that can be compared with interferometer data. As a result, simple projections of sensitivity for specific experimental set-ups suggest that measurements will directly yield constraints on a universal time derivative of the correlation function, and thereby confirm or rule out a class of Planck scale departures from classical geometry.
Gene coexpression measures in large heterogeneous samples using count statistics.
Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan
2014-11-18
With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
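The counting idea can be illustrated with a simplified statistic: slide a window across the samples and count windows in which the two genes' expression ranks agree exactly, or are exactly reversed (capturing negative association). This is not the paper's W statistics, just a minimal sketch of local rank-pattern counting with invented expression values.

```python
def local_concordance(x, y, w=4):
    """Simplified count statistic for coexpression: the number of
    length-w windows in which the within-window expression ranks of
    the two genes match exactly or are exactly reversed. A sketch of
    the rank-pattern-counting idea, not the paper's W statistics."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return tuple(r)

    hits = 0
    for s in range(len(x) - w + 1):
        rx, ry = ranks(x[s:s + w]), ranks(y[s:s + w])
        if rx == ry or rx == tuple(w - 1 - r for r in ry):
            hits += 1
    return hits

# two hypothetical genes co-regulated only in the first half of the samples
g1 = [1, 3, 2, 5, 4, 6, 9, 1, 7, 2]
g2 = [2, 4, 3, 6, 5, 7, 3, 8, 1, 9]
score = local_concordance(g1, g2, w=4)
```

Because only local rank patterns are counted, association confined to a subset of pooled samples still produces a signal that a global correlation coefficient would dilute.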
International Nuclear Information System (INIS)
Lassahn, G.D.; Taylor, D.J.N.
1982-08-01
Analyses of uncertainty components inherent in pulsed-neutron-activation (PNA) measurements in general and the Loss-of-Fluid-Test (LOFT) system in particular are given. Due to the LOFT system's unique conditions, previously-used techniques were modified to make the volocity measurement. These methods render a useful, cost-effective measurement with an estimated uncertainty of 11% of reading
DEFF Research Database (Denmark)
Denwood, M.J.; McKendrick, I.J.; Matthews, L.
Introduction. There is an urgent need for a method of analysing FECRT data that is computationally simple and statistically robust. A method for evaluating the statistical power of a proposed FECRT study would also greatly enhance the current guidelines. Methods. A novel statistical framework has...... been developed that evaluates observed FECRT data against two null hypotheses: (1) the observed efficacy is consistent with the expected efficacy, and (2) the observed efficacy is inferior to the expected efficacy. The method requires only four simple summary statistics of the observed data. Power...... that the notional type 1 error rate of the new statistical test is accurate. Power calculations demonstrate a power of only 65% with a sample size of 20 treatment and control animals, which increases to 69% with 40 control animals or 79% with 40 treatment animals. Discussion. The method proposed is simple...
Statistical characteristics of surrogate data based on geophysical measurements
Directory of Open Access Journals (Sweden)
V. Venema
2006-01-01
In this study, the statistical properties of a range of measurements are compared with those of their surrogate time series. Seven different records are studied, amongst others, historical time series of mean daily temperature, daily rain sums and runoff from two rivers, and cloud measurements. Seven different algorithms are used to generate the surrogate time series. The best-known method is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm, which is able to reproduce the measured distribution as well as the power spectrum. Using this setup, the measurements and their surrogates are compared with respect to their power spectrum, increment distribution, structure functions, annual percentiles and return values. It is found that the surrogates that reproduce the power spectrum and the distribution of the measurements are able to closely match the increment distributions and the structure functions of the measurements, but this often does not hold for surrogates that only mimic the power spectrum of the measurement. However, even the best performing surrogates do not have asymmetric increment distributions, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found deviations of the structure functions on small scales.
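The core IAAFT loop alternates between imposing the measured amplitude spectrum and the measured value distribution. A minimal NumPy sketch, assuming NumPy is available; the test signal is arbitrary:

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterative amplitude adjusted Fourier transform surrogate:
    alternately impose the original amplitude spectrum (step 1) and
    the original value distribution via rank-ordering (step 2)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    amp = np.abs(np.fft.rfft(x))   # target amplitude spectrum
    sorted_x = np.sort(x)          # target value distribution
    s = rng.permutation(x)         # random shuffle as starting point
    for _ in range(n_iter):
        # step 1: keep the phases of s, replace amplitudes by the target
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(amp * np.exp(1j * phases), n=len(x))
        # step 2: rank-order the original values onto s
        s = sorted_x[np.argsort(np.argsort(s))]
    return s

t = np.arange(128)
x = np.sin(2 * np.pi * t / 16) + 0.1 * np.sin(2 * np.pi * t / 5)
surr = iaaft(x)
```

Ending on the rank-ordering step makes the surrogate's distribution exactly right while its spectrum is only approximately matched; ending on the spectral step reverses the trade-off.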
Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.
2012-12-01
Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better that provide new details on the properties and morphology of the ice pack across basin scales. For example, the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent
Validating Future Force Performance Measures (Army Class): Concluding Analyses
2016-06-01
Table 3.10. Descriptive Statistics and Intercorrelations for LV Final Predictor Factor Scores ... Table 4.7. Descriptive Statistics for Analysis Criteria ... Soldier attrition and performance: Dependability (Non-Delinquency), Adjustment, Physical Conditioning, Leadership, Work Orientation, and Agreeableness
Ellis, Barbara G.; Dick, Steven J.
1996-01-01
Employs the statistics-documentation portion of a word-processing program's grammar-check feature together with qualitative analyses to determine that Henry Watterson, long-time editor of the "Louisville Courier-Journal," was probably the South's famed Civil War correspondent "Shadow." (TB)
International Nuclear Information System (INIS)
Beck, W.
1984-01-01
The complexity of computer programs for the solution of scientific and technical problems gives rise to a number of questions. Typical questions concern the strengths and weaknesses of computer programs, the propagation of uncertainties among the input data, the sensitivity of the output data to the input data, and the replacement of complex models by simpler ones that provide equivalent results within certain ranges. Such questions have a general practical significance; principal answers may be found by statistical methods based on the Monte Carlo method. In this report, suitable statistical methods are chosen, described and evaluated. They are implemented in the modular program system STAR, which is a component of the program system RSYST. The design of STAR accommodates users with different levels of knowledge of data processing and statistics, a variety of statistical methods with their generating and evaluating procedures, the processing of large data sets in complex structures, and coupling to other components of RSYST and to programs outside RSYST, and it allows the system to be easily modified and enlarged. Four examples are given, which demonstrate the application of STAR. (orig.) [de]
International Nuclear Information System (INIS)
Kleijnen, J.P.C.; Helton, J.C.
1999-01-01
Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked
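The sequence of increasingly complex pattern tests described above can be sketched with standard SciPy statistics. The data below are a simulated nonmonotonic input/output relationship, not the paper's two-phase fluid flow model; the bin counts and sample size are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical Monte Carlo sample: input x, output y with a nonmonotonic effect
x = rng.uniform(0.0, 1.0, 500)
y = (x - 0.5) ** 2 + rng.normal(0.0, 0.01, 500)

# (i) linear relationship: Pearson correlation coefficient
r, _ = stats.pearsonr(x, y)

# (ii) monotonic relationship: Spearman rank correlation coefficient
rho, _ = stats.spearmanr(x, y)

# (iii) trend in central tendency: Kruskal-Wallis across quartile bins of x
edges = np.quantile(x, [0.0, 0.25, 0.5, 0.75, 1.0])
groups = [y[(x >= lo) & (x <= hi)] for lo, hi in zip(edges[:-1], edges[1:])]
h_stat, p_kw = stats.kruskal(*groups)

# (v) deviation from randomness: chi-square test on a coarse occupancy grid
counts, _, _ = np.histogram2d(x, y, bins=4)
chi2, p_chi2, dof, expected = stats.chi2_contingency(counts)

print(f"pearson={r:.2f} spearman={rho:.2f} kruskal_p={p_kw:.1e}")
```

A quadratic relationship like this one is exactly the case the sequence is designed for: the correlation-based tests (i)-(ii) see almost nothing, while the binned Kruskal-Wallis and chi-square tests flag the effect strongly.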
Statistical analysis of complex systems with nonclassical invariant measures
Fratalocchi, Andrea
2011-02-28
I investigate the problem of finding a statistical description of a complex many-body system whose invariant measure cannot be constructed stemming from classical thermodynamics ensembles. By taking solitons as a reference system and by employing a general formalism based on the Ablowitz-Kaup-Newell-Segur scheme, I demonstrate how to build an invariant measure and, within a one-dimensional phase space, how to develop a suitable thermodynamics. A detailed example is provided with a universal model of wave propagation, with reference to a transparent potential sustaining gray solitons. The system shows a rich thermodynamic scenario, with a free-energy landscape supporting phase transitions and controllable emergent properties. I finally discuss the origin of such behavior, trying to identify common denominators in the area of complex dynamics.
Statistical analyses of the color experience according to the age of the observer.
Hunjet, Anica; Parac-Osterman, Durdica; Vucaj, Edita
2013-04-01
The psychological experience of color is a real state of the communication between the environment and color, and it depends on the source of the light, the angle of view, and in particular on the observer and his health condition. Hering's theory, or the theory of opponent processes, supposes that the cones situated in the retina of the eye are not sensitive to three chromatic domains (red, green and purple-blue) but produce a signal based on the principle of opposed pairs of colors. Support for this theory comes from the fact that certain disorders of color eyesight, which include blindness to certain colors, cause blindness to pairs of opponent colors. This paper presents a demonstration of the experience of blue and yellow tones according to the age of the observer. To test for statistically significant differences in color experience according to the color of the background, the following statistical tests were used: the Mann-Whitney U test, Kruskal-Wallis ANOVA and the median test. It was proven that the differences are statistically significant in elderly persons (older than 35 years).
International Nuclear Information System (INIS)
Allen, Bruce; Creighton, Jolien D.E.; Flanagan, Eanna E.; Romano, Joseph D.
2003-01-01
In a previous paper (paper I), we derived a set of near-optimal signal detection techniques for gravitational wave detectors whose noise probability distributions contain non-Gaussian tails. The methods modify standard methods by truncating or clipping sample values which lie in those non-Gaussian tails. The methods were derived, in the frequentist framework, by minimizing false alarm probabilities at fixed false detection probability in the limit of weak signals. For stochastic signals, the resulting statistic consisted of a sum of an autocorrelation term and a cross-correlation term; it was necessary to discard 'by hand' the autocorrelation term in order to arrive at the correct, generalized cross-correlation statistic. In the present paper, we present an alternative derivation of the same signal detection techniques from within the Bayesian framework. We compute, for both deterministic and stochastic signals, the probability that a signal is present in the data, in the limit where the signal-to-noise ratio squared per frequency bin is small, where the signal is nevertheless strong enough to be detected (integrated signal-to-noise ratio large compared to 1), and where the total probability in the non-Gaussian tail part of the noise distribution is small. We show that, for each model considered, the resulting probability is to a good approximation a monotonic function of the detection statistic derived in paper I. Moreover, for stochastic signals, the new Bayesian derivation automatically eliminates the problematic autocorrelation term
Rapid objective measurement of gamma camera resolution using statistical moments.
Hander, T A; Lancaster, J L; Kopp, D T; Lasher, J C; Blumhardt, R; Fox, P T
1997-02-01
An easy and rapid method for the measurement of the intrinsic spatial resolution of a gamma camera was developed. The measurement is based on the first and second statistical moments of regions of interest (ROIs) applied to bar phantom images. This leads to an estimate of the modulation transfer function (MTF) and the full-width-at-half-maximum (FWHM) of a line spread function (LSF). Bar phantom images were acquired using four large field-of-view (LFOV) gamma cameras (Scintronix, Picker, Searle, Siemens). The following factors important for routine measurements of gamma camera resolution with this method were tested: ROI placement and shape, phantom orientation, spatial sampling, and procedural consistency. A 0.2% coefficient of variation (CV) between repeat measurements of MTF was observed for a circular ROI. CVs of less than 2% were observed for measured MTF values for bar orientations ranging from -10 degrees to +10 degrees with respect to the x and y axes of the camera acquisition matrix. A 256 x 256 matrix (1.6 mm pixel spacing) was judged sufficient for routine measurements, giving an estimate of the FWHM to within 0.1 mm of manufacturer-specified values (3% difference). Under simulated clinical conditions, the variation in measurements attributable to procedural effects yielded a CV of less than 2% in newer generation cameras. The moments method for determining MTF correlated well with a peak-valley method, with an average difference of 0.03 across the range of spatial frequencies tested (0.11-0.17 line pairs/mm, corresponding to 4.5-3.0 mm bars). When compared with the NEMA method for measuring intrinsic spatial resolution, the moments method was found to be within 4% of the expected FWHM.
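The moments-to-FWHM step can be sketched for the simplest case. The snippet below assumes a Gaussian LSF sampled at the 1.6 mm pixel spacing mentioned in the abstract; the profile itself is synthetic, and the 2*sqrt(2*ln 2) conversion holds only under the Gaussian assumption, not for an arbitrary LSF:

```python
import numpy as np

# Synthetic line spread function (LSF): Gaussian profile at 1.6 mm pixels
pixel_mm = 1.6
x = np.arange(64) * pixel_mm
true_sigma = 1.5  # mm, hypothetical intrinsic blur
lsf = np.exp(-0.5 * ((x - x.mean()) / true_sigma) ** 2)

# First moment (centroid) and second central moment of the profile
w = lsf / lsf.sum()
mu = np.sum(w * x)
var = np.sum(w * (x - mu) ** 2)

# For a Gaussian LSF, FWHM = 2*sqrt(2*ln 2)*sigma (about 2.355*sigma)
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * np.sqrt(var)
print(f"estimated FWHM = {fwhm:.2f} mm")
```

Because moments are integrals over the whole ROI rather than a fit to a peak, this estimate is robust to sampling, which is consistent with the low CVs the study reports.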
International Nuclear Information System (INIS)
Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe
2013-01-01
Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physics modelling and safety analysis capability, based on a code package consisting of best-estimate 3D neutronics (PANTHER), system thermal-hydraulics (RELAP5), core sub-channel thermal-hydraulics (COBRA-3C), and fuel thermal-mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies have been developed to perform and license reactor safety analysis and core reload design, based on the deterministic bounding approach. Following recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physics modelling and licensing safety analyses. In this paper, the TE multi-physics modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistics or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
Cohen, Alan A; Milot, Emmanuel; Li, Qing; Legault, Véronique; Fried, Linda P; Ferrucci, Luigi
2014-09-01
Measuring physiological dysregulation during aging could be a key tool both to understand underlying aging mechanisms and to predict clinical outcomes in patients. However, most existing indices are either circular or hard to interpret biologically. Recently, we showed that statistical distance of 14 common blood biomarkers (a measure of how strange an individual's biomarker profile is) was associated with age and mortality in the WHAS II data set, validating its use as a measure of physiological dysregulation. Here, we extend the analyses to other data sets (WHAS I and InCHIANTI) to assess the stability of the measure across populations. We found that the statistical criteria used to determine the original 14 biomarkers produced diverging results across populations; in other words, had we started with a different data set, we would have chosen a different set of markers. Nonetheless, the same 14 markers (or the subset of 12 available for InCHIANTI) produced highly similar predictions of age and mortality. We include analyses of all combinatorial subsets of the markers and show that results do not depend much on biomarker choice or data set, but that more markers produce a stronger signal. We conclude that statistical distance as a measure of physiological dysregulation is stable across populations in Europe and North America. Copyright © 2014 Elsevier Inc. All rights reserved.
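The "statistical distance" of a biomarker profile can be sketched as a Mahalanobis-type distance from a reference population; this is an assumption about the measure the abstract describes, and the population, biomarker count, and profiles below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reference population: n individuals x p biomarkers
n, p = 200, 14
ref = rng.normal(0.0, 1.0, (n, p))

# Reference mean and covariance define the "normal" biomarker profile
mu = ref.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))

def statistical_distance(profile):
    """Mahalanobis distance of one biomarker profile from the reference."""
    d = profile - mu
    return float(np.sqrt(d @ cov_inv @ d))

typical = statistical_distance(mu)         # 0 by construction
aberrant = statistical_distance(mu + 3.0)  # a "strange" dysregulated profile
```

The measure depends on which reference population defines mu and the covariance, which is exactly the stability question the study addresses across WHAS I, WHAS II and InCHIANTI.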
Statistical Measures to Quantify Similarity between Molecular Dynamics Simulation Trajectories
Directory of Open Access Journals (Sweden)
Jenny Farmer
2017-11-01
Full Text Available Molecular dynamics (MD) simulation is commonly employed to explore protein dynamics. Despite the disparate timescales between functional mechanisms and MD trajectories, functional differences are often inferred from differences in conformational ensembles between two proteins in structure-function studies that investigate the effect of mutations. A common measure to quantify differences in dynamics is the root mean square fluctuation (RMSF) about the average position of residues defined by Cα-atoms. Using six MD trajectories describing three native/mutant pairs of beta-lactamase, we make comparisons with additional measures that include Jensen-Shannon divergence, modifications of Kullback-Leibler divergence, and local p-values from 1-sample Kolmogorov-Smirnov tests. These additional measures require knowing a probability density function, which we estimate using a nonparametric maximum entropy method that quantifies rare events well. The same measures are applied to distance fluctuations between pairs of Cα-atoms. Results from several implementations for quantitative comparison of a pair of MD trajectories are presented, based on fluctuations of one-residue and residue-residue local dynamics. We conclude that there is almost always a statistically significant difference between pairs of 100 ns all-atom simulations on moderate-sized proteins, as evident from extraordinarily low p-values.
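Two of the measures named above, Jensen-Shannon divergence and the two-sample Kolmogorov-Smirnov test, can be sketched on simulated fluctuation samples. Note the paper estimates densities with a maximum entropy method; the histogram estimate below is a simpler stand-in, and the two "trajectories" are synthetic Gaussians, not MD data:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# Hypothetical per-residue fluctuation samples from two trajectories
native = rng.normal(1.0, 0.2, 5000)  # e.g. Calpha displacement magnitudes
mutant = rng.normal(1.1, 0.3, 5000)

# Jensen-Shannon divergence between histogram-estimated densities
bins = np.linspace(0.0, 2.5, 60)
p, _ = np.histogram(native, bins=bins, density=True)
q, _ = np.histogram(mutant, bins=bins, density=True)
jsd = jensenshannon(p, q) ** 2  # jensenshannon() returns the sqrt distance

# Two-sample Kolmogorov-Smirnov test on the raw samples
ks_stat, p_value = ks_2samp(native, mutant)
```

With thousands of frames per trajectory, even a small shift in the fluctuation distribution yields a minuscule p-value, which mirrors the paper's conclusion that trajectory pairs are almost always "statistically different".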
Statistical methods for analysing the relationship between bank profitability and liquidity
Boguslaw Guzik
2006-01-01
The article analyses the most popular methods for the empirical estimation of the relationship between bank profitability and liquidity. Owing to the fact that profitability depends on various factors (both economic and non-economic), a simple correlation coefficient, two-dimensional (profitability/liquidity) graphs or models where profitability depends only on liquidity variable do not provide good and reliable results. Quite good results can be obtained only when multifactorial profitabilit...
Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins
2014-11-01
Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic(®), Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements, obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using O3d software were identical.
Directory of Open Access Journals (Sweden)
Mohammad D. AL-Tahat
2012-01-01
Full Text Available This paper provides a review and introduction on agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (eight latent constructs: manufacturing equipment and technology, processes technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed. Hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire. Statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA analysis, and relationships between agile components are tested. The results of this study prove that the agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to be agile.
DEFF Research Database (Denmark)
Edjabou, Vincent Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona
2015-01-01
Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10-50 waste fractions, organised according to a three-level (tiered) approach, facilitating comparison of the waste data between individual sub-areas with different fractionation (waste...
Chemometric and Statistical Analyses of ToF-SIMS Spectra of Increasingly Complex Biological Samples
Energy Technology Data Exchange (ETDEWEB)
Berman, E S; Wu, L; Fortson, S L; Nelson, D O; Kulp, K S; Wu, K J
2007-10-24
Characterizing and classifying molecular variation within biological samples is critical for determining fundamental mechanisms of biological processes that will lead to new insights including improved disease understanding. Towards these ends, time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used to examine increasingly complex samples of biological relevance, including monosaccharide isomers, pure proteins, complex protein mixtures, and mouse embryo tissues. The complex mass spectral data sets produced were analyzed using five common statistical and chemometric multivariate analysis techniques: principal component analysis (PCA), linear discriminant analysis (LDA), partial least squares discriminant analysis (PLSDA), soft independent modeling of class analogy (SIMCA), and decision tree analysis by recursive partitioning. PCA was found to be a valuable first step in multivariate analysis, providing insight both into the relative groupings of samples and into the molecular basis for those groupings. For the monosaccharides, pure proteins and protein mixture samples, all of LDA, PLSDA, and SIMCA were found to produce excellent classification given a sufficient number of compound variables calculated. For the mouse embryo tissues, however, SIMCA did not produce as accurate a classification. The decision tree analysis was found to be the least successful for all the data sets, providing neither as accurate a classification nor chemical insight for any of the tested samples. Based on these results we conclude that as the complexity of the sample increases, so must the sophistication of the multivariate technique used to classify the samples. PCA is a preferred first step for understanding ToF-SIMS data that can be followed by either LDA or PLSDA for effective classification analysis. This study demonstrates the strength of ToF-SIMS combined with multivariate statistical and chemometric techniques to classify increasingly complex biological samples
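The recommended first step, PCA, can be sketched with plain NumPy via SVD of the mean-centered data matrix. The "peak-intensity" matrix below is simulated with an invented two-class structure, not ToF-SIMS data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical peak-intensity matrix: 30 spectra x 100 m/z channels,
# with the second half of the samples shifted in the first five channels
X = rng.normal(0.0, 1.0, (30, 100))
X[15:, :5] += 5.0

# PCA via SVD of the mean-centered data
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                   # sample coordinates ("scores")
explained = S**2 / np.sum(S**2)  # fraction of variance per component

# The first component should separate the two sample groups
pc1_gap = abs(scores[:15, 0].mean() - scores[15:, 0].mean())
```

Plotting the first two score columns gives the kind of grouping view the study uses for insight, while the corresponding rows of Vt (the loadings) indicate which channels drive the grouping; classification itself is then handed to a supervised method such as LDA or PLSDA.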
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
International Nuclear Information System (INIS)
Tsuji, Hirokazu; Yokoyama, Norio; Nakajima, Hajime; Kondo, Tatsuo
1993-05-01
Statistical analyses were conducted using the cyclic crack growth rate data for pressure vessel steels stored in the JAERI Material Performance Database (JMPD), and comparisons were made of the variability and/or reproducibility of data obtained by ΔK-increasing and by ΔK-constant type tests. Based on the results of the statistical analyses, it was concluded that ΔK-constant type tests are generally superior to the commonly used ΔK-increasing type from the viewpoint of variability and/or reproducibility of the data. This tendency was more pronounced in tests conducted in simulated LWR primary coolants than in those in air. (author)
Statistical method for quality control in presence of measurement errors
International Nuclear Information System (INIS)
Lauer-Peccoud, M.R.
1998-01-01
In a quality inspection of a set of items, where the measurements of the quality characteristic of each item are contaminated by random errors, one can take wrong decisions which are damaging to quality. So it is important to control the risks in such a way that a final quality level is insured. We consider that an item is defective or not according to whether the value G of its quality characteristic is larger or smaller than a given level g 0 . We assume that, due to the lack of precision of the measurement instrument, the measurement M of this characteristic is expressed by f(G) + ξ, where f is an increasing function such that the value f(g 0 ) is known, and ξ is a random error with mean zero and given variance. First we study the problem of determining a critical measure m such that a specified quality target is reached after the classification of a lot of items, where each item is accepted or rejected depending on whether its measurement is smaller or greater than m. Then we analyse the problem of testing the global quality of a lot from the measurements for a sample of items taken from the lot. For these two kinds of problems and for different quality targets, we propose solutions, with emphasis on the case where the function f is linear and the error ξ and the variable G are Gaussian. Simulation results allow one to appreciate the efficiency of the different control procedures considered and their robustness with respect to deviations from the assumptions used in the theoretical derivations. (author)
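The linear Gaussian case highlighted in the abstract lends itself to a minimal Monte Carlo sketch of the risk trade-off behind the critical measure m. All constants below (slope, intercept, variances, thresholds) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear Gaussian case: f(G) = a*G + b, with Gaussian G and error xi
a, b = 2.0, 1.0
g0 = 0.0  # defect threshold on the true characteristic G (defective if G > g0)
G = rng.normal(0.0, 1.0, 100_000)
xi = rng.normal(0.0, 0.5, 100_000)
M = a * G + b + xi  # observed measurement

def risks(m):
    """False-accept and false-reject rates for a critical measure m."""
    accept = M < m
    defective = G > g0
    false_accept = float(np.mean(accept & defective))
    false_reject = float(np.mean(~accept & ~defective))
    return false_accept, false_reject

# Tightening m below f(g0) trades false accepts for false rejects
fa_tight, fr_tight = risks(a * g0 + b - 1.0)
fa_loose, fr_loose = risks(a * g0 + b + 1.0)
```

Sweeping m over a grid and reading off the two risk curves is the simulation counterpart of choosing m to meet a specified quality target.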
Detailed modeling of the statistical uncertainty of Thomson scattering measurements
International Nuclear Information System (INIS)
Morton, L A; Parke, E; Hartog, D J Den
2013-01-01
The uncertainty of electron density and temperature fluctuation measurements is determined by statistical uncertainty introduced by multiple noise sources. In order to quantify these uncertainties precisely, a simple but comprehensive model was made of the noise sources in the MST Thomson scattering system and of the resulting variance in the integrated scattered signals. The model agrees well with experimental and simulated results. The signal uncertainties are then used by our existing Bayesian analysis routine to find the most likely electron temperature and density, with confidence intervals. In the model, photonic noise from scattered light and plasma background light is multiplied by the noise enhancement factor (F) of the avalanche photodiode (APD). Electronic noise from the amplifier and digitizer is added. The amplifier response function shapes the signal and induces correlation in the noise. The data analysis routine fits a characteristic pulse to the digitized signals from the amplifier, giving the integrated scattered signals. A finite digitization rate loses information and can cause numerical integration error. We find a formula for the variance of the scattered signals in terms of the background and pulse amplitudes, and three calibration constants. The constants are measured easily under operating conditions, resulting in accurate estimation of the scattered signals' uncertainty. We measure F ≈ 3 for our APDs, in agreement with other measurements for similar APDs. This value is wavelength-independent, simplifying analysis. The correlated noise we observe is reproduced well using a Gaussian response function. Numerical integration error can be made negligible by using an interpolated characteristic pulse, allowing digitization rates as low as the detector bandwidth. The effect of background noise is also determined
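The structure of such a variance model can be illustrated in one plausible form: photonic noise proportional to the scattered plus background amplitudes, multiplied by the APD noise enhancement factor F, plus an additive electronic term. This is a hedged sketch consistent with the noise sources listed, not the paper's exact three-constant expression, and the constants below are invented:

```python
# Noise-model sketch for an integrated Thomson scattering signal:
# photon noise from scattered light and background, scaled by the APD
# excess-noise factor F, plus electronic (amplifier + digitizer) noise.
F = 3.0        # APD noise enhancement factor (value quoted in the text)
k = 0.8        # hypothetical calibration constant: variance per unit amplitude
sigma_e = 5.0  # hypothetical electronic RMS noise

def signal_variance(pulse, background):
    """Variance of an integrated scattered signal under the sketch model."""
    return F * k * (pulse + background) + sigma_e**2

# Photon noise dominates for bright signals; electronic noise for dim ones
var_bright = signal_variance(1000.0, 50.0)
var_dim = signal_variance(10.0, 50.0)
```

Once the calibration constants are measured under operating conditions, a formula of this shape gives per-pulse uncertainties that a Bayesian fit can use as weights, which is how the paper feeds signal uncertainties into its temperature and density estimation.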
Siegel, Joshua S; Power, Jonathan D; Dubis, Joseph W; Vogel, Alecia C; Church, Jessica A; Schlaggar, Bradley L; Petersen, Steven E
2014-05-01
Subject motion degrades the quality of task functional magnetic resonance imaging (fMRI) data. Here, we test two classes of methods to counteract the effects of motion in task fMRI data: (1) a variety of motion regressions and (2) motion censoring ("motion scrubbing"). In motion regression, various regressors based on realignment estimates were included as nuisance regressors in general linear model (GLM) estimation. In motion censoring, volumes in which head motion exceeded a threshold were withheld from GLM estimation. The effects of each method were explored in several task fMRI data sets and compared using indicators of data quality and signal-to-noise ratio. Motion censoring decreased variance in parameter estimates within- and across-subjects, reduced residual error in GLM estimation, and increased the magnitude of statistical effects. Motion censoring performed better than all forms of motion regression and also performed well across a variety of parameter spaces, in GLMs with assumed or unassumed response shapes. We conclude that motion censoring improves the quality of task fMRI data and can be a valuable processing step in studies involving populations with even mild amounts of head movement. Copyright © 2013 Wiley Periodicals, Inc.
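Motion censoring as described above can be sketched in a few lines: volumes whose motion estimate exceeds a threshold are simply withheld from the GLM fit. The framewise displacement trace, threshold, and design matrix below are all simulated assumptions, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical framewise displacement (FD, mm) per fMRI volume
n_vols = 300
fd = rng.gamma(2.0, 0.1, n_vols)  # mostly small values
fd[[40, 41, 150]] = 1.5           # injected motion spikes

# Motion censoring: withhold high-motion volumes from GLM estimation
threshold_mm = 0.9
keep = fd <= threshold_mm

# Fit the GLM (ordinary least squares) on the retained volumes only
X = np.column_stack([np.ones(n_vols), np.sin(np.linspace(0, 12, n_vols))])
y = X @ np.array([2.0, 0.5]) + rng.normal(0.0, 0.3, n_vols)
beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
```

Dropping the spiky volumes removes their leverage on the parameter estimates, which is why censoring reduces within- and across-subject variance relative to regressing motion out.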
Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.
Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi
2013-12-01
Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, the data generated by mass spectrometry have many missing values, which result when a compound is absent from a sample or is present but at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare power and estimation of a mixture model to an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, while estimates were unbiased with the mixture model except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
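The two missingness mechanisms being contrasted can be made concrete with a small simulation; the abundances, detection limit, and absence rate below are invented, and no model fitting is shown, only the data-generating distinction between the AFT and mixture views:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical log-abundances of one compound across 2000 samples
n = 2000
conc = rng.normal(5.0, 1.0, n)

# Two missingness mechanisms, as in the mixture model:
absent = rng.random(n) < 0.2       # compound truly absent (point mass)
censored = ~absent & (conc < 4.0)  # present but below the detection limit

observed = conc.copy()
observed[absent | censored] = np.nan

# An AFT analysis treats every NaN as censored; the mixture model
# separates the point-mass proportion from the censored fraction
missing_frac = float(np.mean(np.isnan(observed)))
point_mass_frac = float(absent.mean())
censored_frac = float(censored.mean())
```

Because the observed data cannot distinguish an absent compound from a censored one sample by sample, the two mechanisms are confounded in `missing_frac`, which is the source of the AFT model's estimation bias when a true point mass exists.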
2011-03-01
"NHTSA selected the vehicle footprint (the measure of a vehicles wheelbase multiplied by its average track width) as the attribute upon which to base the CAFE standards for model year 2012-2016 passenger cars and light trucks. These standards are ...
Laser Beam Caustic Measurement with Focal Spot Analyser
DEFF Research Database (Denmark)
Olsen, Flemming Ove; Gong, Hui; Bagger, Claus
2005-01-01
In industrial applications of high power CO2-lasers the caustic characteristics of the laser beam have great effects on the performance of the lasers. A well-defined, highly intense focused spot is essential for reliable production results. This paper presents a focal spot analyser that is developed...
Statistical shape modeling based renal volume measurement using tracked ultrasound
Pai Raikar, Vipul; Kwartowitz, David M.
2017-03-01
Autosomal dominant polycystic kidney disease (ADPKD) is the fourth most common cause of kidney transplant worldwide, accounting for 7-10% of all cases. Although ADPKD usually progresses over many decades, accurate risk prediction is an important task.1 Identifying patients with progressive disease is vital to providing them the new treatments being developed and enabling them to enter clinical trials for new therapy. Among other factors, total kidney volume (TKV) is a major biomarker predicting the progression of ADPKD. The Consortium for Radiologic Imaging Studies in Polycystic Kidney Disease (CRISP)2 has shown that TKV is an early and accurate measure of cystic burden and likely growth rate. It is strongly associated with loss of renal function.3 While ultrasound (US) has proven an excellent tool for diagnosing the disease, monitoring short-term changes using ultrasound has been shown not to be accurate. This is attributed to high operator variability and poor reproducibility as compared to tomographic modalities such as CT and MR (gold standard). Ultrasound has emerged as a standout modality for intra-procedural imaging, and methods for spatial localization have afforded us the ability to track 2D ultrasound in the physical space in which it is being used. In addition to this, the vast amount of recorded tomographic data can be used to generate statistical shape models that allow us to extract clinical value from archived image sets. In this work, we aim at improving the prognostic value of US in managing ADPKD by assessing the accuracy of using statistical shape model augmented US data to predict TKV, with the end goal of monitoring short-term changes.
Towards a best practice of modeling unit of measure and related statistical metadata
Grossmann, Wilfried
2011-01-01
Data and metadata exchange between organizations requires a common language for describing structure and content of statistical data and metadata. The SDMX consortium develops content oriented guidelines (COG) recommending harmonized cross-domain concepts and terminology to increase the efficiency of (meta-) data exchange. A recent challenge is a recommended code list for the unit of measure. Based on examples from SDMX sponsor organizations this paper analyses the diversity of "unit of measure" as used in practice, including potential breakdowns and interdependencies of the respective meta-
Directory of Open Access Journals (Sweden)
A. V. Nikitenko
2014-04-01
Full Text Available Purpose. The definition and analysis of the probabilistic and spectral characteristics of the random current in the regenerative braking mode of DC electric rolling stock are considered in this paper. Methodology. Elements and methods of probability theory (particularly the theory of stationary and non-stationary processes) and methods of sampling theory are used for processing the regenerated current data arrays by PC. Findings. The regenerated current records were obtained from locomotives and trains on Ukrainian railways and from trams in Poland. It was established that the current has both continuous and jump-like variations in time (especially in trams). For the random current in the regenerative braking mode, the functions of mathematical expectation, dispersion and standard deviation are calculated. Histograms, probabilistic characteristics and correlation functions are calculated and plotted for this current too. It was established that the current of the regenerative braking mode can be considered a stationary and non-ergodic process. The spectral analysis of these records and the "tail part" of the correlation function revealed weak periodic (or low-frequency) components, known as interharmonics. Originality. First, the theory of non-stationary random processes was adapted for the analysis of the recuperated current, which has both continuous and jump-like variations in time. Second, the presence of interharmonics in the stochastic process of the regenerated current was identified for the first time. Finally, the patterns of temporal change of the current correlation function are defined too. This makes it possible to apply the correlation function method in the identification of electric traction system devices. Practical value. The results of the probabilistic and statistical analysis of the recuperated current allow one to estimate the quality of the recovered energy and the energy quality indices of electric rolling stock in the
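The basic characteristics computed in such a study (mean, standard deviation, and the correlation function whose tail reveals an interharmonic) can be sketched on a synthetic record. The signal below is invented noise plus a weak low-frequency sinusoid standing in for an interharmonic; the sampling rate and amplitudes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for a regenerated-current record: broadband noise
# plus a weak low-frequency periodic component (an "interharmonic")
fs = 1000.0  # hypothetical sampling rate, Hz
t = np.arange(4096) / fs
current = rng.normal(0.0, 1.0, t.size) + 0.4 * np.sin(2 * np.pi * 7.0 * t)

# Basic probabilistic characteristics
mean = current.mean()
std = current.std()

# Normalised (circular) sample autocorrelation function via FFT
c = current - mean
acf = np.fft.irfft(np.abs(np.fft.rfft(c)) ** 2)[: c.size // 2]
acf /= acf[0]

# The periodic component survives in the ACF tail, where white noise decays
tail_peak = np.abs(acf[100:300]).max()
```

The white-noise part of the ACF collapses toward zero after lag 0, so any persistent oscillation in the tail is the signature of a periodic (interharmonic) component, which is the effect the abstract reports in the "tail part" of the correlation function.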
Directory of Open Access Journals (Sweden)
Le Bao
2014-11-01
Full Text Available Endogenous retroviruses (ERVs are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that are insensitive to differences between query and reference and that are amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.
International Nuclear Information System (INIS)
2005-01-01
For the years 2004 and 2005 the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics published in the Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown on the back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees
International Nuclear Information System (INIS)
2001-01-01
For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity consumption, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity consumption, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
1999-01-01
For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity consumption, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
Analysing the Severity and Frequency of Traffic Crashes in Riyadh City Using Statistical Models
Directory of Open Access Journals (Sweden)
Saleh Altwaijri
2012-12-01
Full Text Available Traffic crashes in Riyadh city cause losses in the form of deaths, injuries and property damage, in addition to the pain and social tragedy affecting the victims' families. In 2005, a total of 47,341 injury traffic crashes occurred in Riyadh city (19% of all KSA crashes), and 9% of those crashes were severe. Road safety in Riyadh city may have been adversely affected by: high car ownership, migration of people to Riyadh city, a high number of daily trips (about 6 million), high incomes, low-cost petrol, drivers of many different nationalities, young drivers, and tremendous population growth, which together create a high level of mobility and transport activity in the city. The primary objective of this paper is therefore to explore factors affecting the severity and frequency of road crashes in Riyadh city using appropriate statistical models, with the aim of establishing effective safety policies ready to be implemented to reduce the severity and frequency of road crashes in Riyadh city. Crash data for Riyadh city were collected from the Higher Commission for the Development of Riyadh (HCDR) for a period of five years, from 1425H to 1429H (roughly corresponding to 2004-2008). Crash data were classified into three categories: fatal, serious-injury and slight-injury. Two nominal response models were developed for the injury-related crash data: a standard multinomial logit model (MNL) and a mixed logit model. Due to a severe underreporting problem for slight-injury crashes, binary and mixed binary logistic regression models were also estimated for two categories of severity: fatal and serious crashes. For frequency, two count models, such as Negative Binomial (NB) models, were employed, and the unit of analysis was the 168 HAIs (wards) in Riyadh city. Ward-level crash data are disaggregated by severity of the crash (such as fatal and serious-injury crashes). The results from both multinomial and binary response models are found to be fairly consistent but
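The binary severity model mentioned above can be sketched in miniature. The following is not the paper's estimation code: it fits a plain binary logistic regression (fatal vs. serious) by gradient descent on invented data, with a single hypothetical predictor:

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Binary logistic regression (1 = fatal, 0 = serious crash) fitted by
    plain gradient descent; one predictor plus an intercept."""
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))  # predicted P(fatal)
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / len(xs)  # average-gradient update
        b1 -= lr * g1 / len(xs)
    return b0, b1

# Hypothetical data: predictor = speed-limit class, outcome = fatal (1) or not.
xs = [1, 1, 2, 2, 3, 3, 4, 4]
ys = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)  # b1 > 0: severity rises with the predictor
```

A production analysis would instead use a statistics package with standard errors and, as in the paper, mixed-logit extensions for unobserved heterogeneity.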
The analyses of measured nuclide concentration in project ISTC 2670
International Nuclear Information System (INIS)
Chrapciak, V.
2006-01-01
In this article, experiments with WWER-440 fuel are analysed and compared with theoretical results from the new version of the SCALE 5 code: nuclide compositions - measured at the Kurchatov Institute for 3.6% fuel - measured at Dimitrovgrad for 3.6% fuel (project ISTC 2670). The focus is on the TRITON and ORIGEN-S modules. (Authors)
Measurement assurance program for LSC analyses of tritium samples
International Nuclear Information System (INIS)
Levi, G.D. Jr.; Clark, J.P.
1997-01-01
Liquid Scintillation Counting (LSC) for Tritium is done on 600 to 800 samples daily as part of a contamination control program at the Savannah River Site's Tritium Facilities. The tritium results from the LSCs are used: to release items as radiologically clean; to establish radiological control measures for workers; and to characterize waste. The following is a list of the sample matrices that are analyzed for tritium: filter paper smears, aqueous, oil, oily rags, ethylene glycol, ethyl alcohol, freon and mercury. Routine and special causes of variation in standards, counting equipment, environment, operators, counting times, samples, activity levels, etc. produce uncertainty in the LSC measurements. A comprehensive analytical process measurement assurance program such as JTIPMAP™ has been implemented. The process measurement assurance program is being used to quantify and control many of the sources of variation and provide accurate estimates of the overall measurement uncertainty associated with the LSC measurements. The paper will describe LSC operations, process improvements, quality control and quality assurance programs along with future improvements associated with the implementation of the process measurement assurance program
International Nuclear Information System (INIS)
Kleijnen, J.P.C.; Helton, J.C.
1999-01-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analyses include: (i) Type I errors are unavoidable, (ii) Type II errors can occur when inappropriate analysis procedures are used, (iii) physical explanations should always be sought for why statistical procedures identify variables as being important, and (iv) the identification of important variables tends to be stable for independent Latin hypercube samples
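The first two detection steps (linear vs. monotonic relationships) can be illustrated with a small sketch; this is an assumed toy example, not the authors' implementation, and it omits tie handling in the ranking:

```python
import math

def pearson(x, y):
    """Pearson correlation: sensitive to *linear* input-output relationships."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def ranks(x):
    """Ranks of the values (1 = smallest); no tie handling in this sketch."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Rank correlation: sensitive to any *monotonic* relationship."""
    return pearson(ranks(x), ranks(y))

# Monotonic but nonlinear relation: Pearson understates it, Spearman is exact.
x = [1, 2, 3, 4, 5]
y = [v ** 3 for v in x]
```

Here `pearson(x, y)` is below 1 while `spearman(x, y)` equals 1, mirroring why the procedure escalates from correlation to rank correlation when scanning scatterplots.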
International Nuclear Information System (INIS)
2003-01-01
For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees on energy products
International Nuclear Information System (INIS)
2004-01-01
For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity consumption, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
Measurement System Analyses - Gauge Repeatability and Reproducibility Methods
Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej
2018-02-01
The submitted article focuses on a detailed explanation of the average and range method (the Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (the Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages were discussed. One difference between the methods is the calculation of the variation components. The AIAG method calculates the variation components based on standard deviation (so the sum of the variation components does not total 100 %), whereas the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part-to-part variation, EV and AV) gives the total variation of 100 %. Acceptance of both methods in the professional community, future use, and acceptance by the manufacturing industry were also discussed. Nowadays, the AIAG method is the leading one in industry.
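The 100 % point is simple arithmetic and can be checked directly. Below is a minimal sketch with assumed (invented) standard deviations for the equipment (EV), appraiser (AV) and part-to-part (PV) components:

```python
import math

# Assumed standard deviations of the variation components from a
# hypothetical GRR study (same units as the measurement).
ev, av, pv = 2.0, 1.0, 5.0

total_var = ev ** 2 + av ** 2 + pv ** 2   # variances are additive
total_sd = math.sqrt(total_var)

# AIAG-style percentage contributions, based on standard deviations:
sd_pct = [100 * s / total_sd for s in (ev, av, pv)]
# honest-GRR-style contributions, based on variances:
var_pct = [100 * s ** 2 / total_var for s in (ev, av, pv)]

print(sum(sd_pct) > 100)   # std-based shares overshoot 100 %
print(sum(var_pct))        # variance-based shares total exactly 100 %
```

Because standard deviations are not additive while variances are, only the variance-based shares can partition the total variation exactly.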
Measurement of the Dead-Time in a Multichannel Analyser
DEFF Research Database (Denmark)
Mortensen, L.; Olsen, J.
1973-01-01
By means of two simple measurements, three different dead-times are determined: the normal dead-time, a dead-time coming from pile-up, and a dead-time due to the finite width of the timing pulses.
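Once a total dead-time is known, measured count rates can be corrected. As a hedged illustration (the standard non-paralyzable correction, not the specific method of this report), with invented numbers:

```python
def true_rate(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: n = m / (1 - m * tau),
    where m is the measured rate and tau the total dead-time per count."""
    loss = measured_rate * dead_time  # fraction of time the analyser is busy
    if loss >= 1.0:
        raise ValueError("measured rate saturates the analyser")
    return measured_rate / (1.0 - loss)

# Hypothetical example: 9000 counts/s measured with 10 us total dead-time.
n = true_rate(9000, 10e-6)  # corrected rate, a bit under 9900 counts/s
```

In practice the three dead-time contributions identified in the abstract (normal, pile-up, timing-pulse width) would be summed into the effective `dead_time`.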
How Do Adults Perceive, Analyse and Measure Slope?
Duncan, Bruce; Chick, Helen
2013-01-01
Slope is a mathematical concept that is both fundamental to the study of advanced calculus and commonly perceived in everyday life. The measurement of the steepness of terrain as a ratio is an example of an everyday application of the concept of slope. In this study, a group of pre-service teachers were tested for their capacity to mathematize the…
SARDA HITL Preliminary Human Factors Measures and Analyses
Hyashi, Miwa; Dulchinos, Victoria
2012-01-01
Human factors data collected during the SARDA HITL Simulation Experiment include a variety of subjective measures, including the NASA TLX, questionnaire items on situational awareness, advisory usefulness, UI usability, and controller trust. Preliminary analysis of the TLX data indicates that workload may not be adversely affected by use of the advisories; additionally, the controllers' subjective ratings of the advisories may suggest acceptance of the tool.
Measurement uncertainty of liquid chromatographic analyses visualized by Ishikawa diagrams.
Meyer, Veronika R
2003-09-01
Ishikawa, or cause-and-effect, diagrams help to visualize the parameters that influence a chromatographic analysis. They therefore facilitate setting up the uncertainty budget of the analysis, which can then be expressed in mathematical form. If the uncertainty is calculated as the Gaussian sum of all uncertainty parameters, it is necessary to quantitate them all, a task that is usually not practical. The other possible approach is to use the intermediate precision as a basis for the uncertainty calculation. In this case, it is at least necessary to consider the uncertainty of the purity of the reference material in addition to the precision data. The Ishikawa diagram is then very simple, and so is the uncertainty calculation. This advantage comes at the cost of losing information about the parameters that influence the measurement uncertainty.
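The "Gaussian sum" mentioned above is the root-sum-of-squares combination of standard uncertainties. A minimal sketch, with an assumed budget of only two components (intermediate precision plus reference-material purity, values invented):

```python
import math

def combined_uncertainty(*components):
    """Gaussian (root-sum-of-squares) combination of independent
    standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in components))

# Hypothetical budget (relative %): intermediate precision and purity.
u_precision = 0.8
u_purity = 0.3
u_c = combined_uncertainty(u_precision, u_purity)
U = 2 * u_c  # expanded uncertainty with coverage factor k = 2
```

Note how the smaller component barely moves the result: this is why the simplified, precision-based budget in the abstract often suffices in practice.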
Measurement and analyses of molten Ni-Co alloy density
Institute of Scientific and Technical Information of China (English)
XIAO Feng; K. MUKAI; FANG Liang; FU Ya; YANG Ren-hui
2006-01-01
With the advent of powerful mathematical modeling techniques for material phenomena, there is renewed interest in reliable data for the density of Ni-based superalloys. Up to now, there have been few reports on the density of molten Ni-Co alloy. In order to obtain more accurate density data for molten Ni-Co alloy, its density was measured with a modified sessile drop method, and the accommodation of different atoms in the molten alloy was analyzed. The density of the alloy is found to decrease with increasing temperature and Co concentration. The molar volume of molten Ni-Co alloy increases with increasing Co concentration. The measured molar volume shows a positive deviation from the linear (ideal-mixing) molar volume, and this deviation from ideal mixing increases with increasing Co concentration over the experimental concentration range.
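The "deviation from the linear molar volume" is computed against the ideal-mixing line between the pure components. A sketch with hypothetical pure-component molar volumes (the numbers below are invented, not the paper's data):

```python
# Assumed molar volumes (cm^3/mol) of pure molten Ni and Co at one temperature.
V_NI, V_CO = 7.5, 7.6  # hypothetical values for illustration only

def ideal_molar_volume(x_co):
    """Linear (ideal-mixing) molar volume for Co mole fraction x_co."""
    return (1 - x_co) * V_NI + x_co * V_CO

def deviation(v_measured, x_co):
    """Positive result = positive deviation from ideal mixing,
    as reported for molten Ni-Co."""
    return v_measured - ideal_molar_volume(x_co)

# A measured volume above the ideal line gives a positive deviation:
d = deviation(7.60, 0.5)  # ideal line at x_co = 0.5 is 7.55
```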
Likert scales, levels of measurement and the "laws" of statistics.
Norman, Geoff
2010-12-01
Reviewers of research reports frequently criticize the choice of statistical methods. While some of these criticisms are well-founded, the use of parametric methods such as analysis of variance, regression and correlation is frequently faulted because: (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. In this paper, I dissect these arguments, and show that many studies, dating back to the 1930s, consistently show that parametric statistics are robust with respect to violations of these assumptions. Hence, challenges like those above are unfounded, and parametric methods can be utilized without concern for "getting the wrong answer".
Fundamentals of gamma-ray measurements and radiometric analyses
International Nuclear Information System (INIS)
Hochel, R.C.
1990-01-01
There are four primary modes of radioactive decay. All can be measured using various types of detectors and are the basis of many analytical techniques and much of what we know about the nucleus and its structure. Alpha particle emission occurs mostly in heavy nuclei of atomic number Z greater than 82, such as Po, Ra, Th and U. Beta particles are simply electrons. They are emitted from the nucleus with a distribution of energies ranging from 0 to 3 MeV. Gamma-rays are photons with energies ranging from a few keV to 10 MeV or more. They usually follow alpha or beta decay and, depending on their energy, can have considerable range in matter. Neutrons are emitted in fission processes and also from a few of the highly excited fission product nuclei. Fission neutrons typically have energies of 1-2 MeV. Like gamma-rays, they have long ranges. The energies involved in nuclear decay processes are much higher than anything encountered in, say, chemical reactions; gamma-rays sit at the very top of the electromagnetic spectrum, about a million times more energetic than visible light. As a result, these particles always produce ionization, either directly or indirectly, as they pass through matter. It is this ionization which is the basis of all radiation detectors
International Nuclear Information System (INIS)
Bennett, J.T.; Crowder, C.A.; Connolly, M.J.
1994-01-01
Gas samples from drums of radioactive waste at the Department of Energy (DOE) Idaho National Engineering Laboratory are being characterized for 29 volatile organic compounds to determine the feasibility of storing the waste in DOE's Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. Quality requirements for the gas chromatography (GC) and GC/mass spectrometry chemical methods used to analyze the waste are specified in the Quality Assurance Program Plan for the WIPP Experimental Waste Characterization Program. Quality requirements consist of both objective criteria (data quality objectives, DQOs) and statistical criteria (process control). The DQOs apply to routine sample analyses, while the statistical criteria serve to determine and monitor the precision and accuracy (P&A) of the analysis methods and are also used to assign upper confidence limits to measurement results close to action levels. After over two years and more than 1000 sample analyses there are two general conclusions concerning the two approaches to quality control: (1) Objective criteria (e.g., ± 25% precision, ± 30% accuracy) based on customer needs and the usually prescribed criteria for similar EPA-approved methods are consistently attained during routine analyses. (2) Statistical criteria based on short-term method performance are almost an order of magnitude more stringent than the objective criteria and are difficult to satisfy following the same routine laboratory procedures which satisfy the objective criteria. A more cost-effective and representative approach to establishing statistical method performance criteria would be either to utilize a moving average of P&A from control samples over a several-month time period, or to determine within-sample variation by one-way analysis of variance of several months' replicate sample analysis results, or both. Confidence intervals for results near action levels could also be determined by replicate analysis of the sample in
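The proposed moving average of P&A control-sample results is straightforward to sketch; the recoveries below are invented for illustration, not laboratory data:

```python
def moving_average(values, window):
    """Trailing moving average, e.g. for tracking control-sample
    precision-and-accuracy results over several months."""
    out = []
    for i in range(window - 1, len(values)):
        out.append(sum(values[i - window + 1:i + 1]) / window)
    return out

# Hypothetical monthly recoveries (%) of a control sample:
recoveries = [98, 101, 97, 103, 99, 100]
smoothed = moving_average(recoveries, 3)
```

Smoothing over a several-month window, as the abstract suggests, damps the short-term variation that otherwise makes the statistical criteria overly stringent.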
STATISTICAL ANALYSIS OF THE HEAVY NEUTRAL ATOMS MEASURED BY IBEX
International Nuclear Information System (INIS)
Park, Jeewoo; Kucharek, Harald; Möbius, Eberhard; Galli, André; Livadiotis, George; Fuselier, Steve A.; McComas, David J.
2015-01-01
We investigate the directional distribution of heavy neutral atoms in the heliosphere by using heavy neutral maps generated with the IBEX-Lo instrument over three years from 2009 to 2011. The interstellar neutral (ISN) O and Ne gas flow was found in the first-year heavy neutral map at 601 keV and its flow direction and temperature were studied. However, due to the low counting statistics, researchers have not treated the full sky maps in detail. The main goal of this study is to evaluate the statistical significance of each pixel in the heavy neutral maps to get a better understanding of the directional distribution of heavy neutral atoms in the heliosphere. Here, we examine three statistical analysis methods: the signal-to-noise filter, the confidence limit method, and the cluster analysis method. These methods allow us to exclude background and identify the areas where the heavy neutral signal is statistically significant, and they also allow the consistent detection of heavy neutral atom structures. The main emission feature expands toward lower longitude and higher latitude from the observational peak of the ISN O and Ne gas flow. We call this emission the extended tail. It may be an imprint of the secondary oxygen atoms generated by charge exchange between ISN hydrogen atoms and oxygen ions in the outer heliosheath
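A signal-to-noise filter of the kind named above can be sketched as keeping only pixels whose counts exceed the background by several Poisson standard deviations. This is an assumed simplification for illustration (uniform known background, invented counts), not the paper's pipeline:

```python
import math

def significant_pixels(counts, background, k=3.0):
    """Return indices of pixels whose counts exceed `background`
    by more than k Poisson standard deviations (sigma = sqrt(background))."""
    sigma = math.sqrt(background) if background > 0 else 1.0
    return [i for i, c in enumerate(counts)
            if (c - background) / sigma > k]

# Hypothetical strip of map pixels: mostly background (about 9 counts)
# with one bright source pixel.
counts = [8, 10, 9, 30, 7, 11]
print(significant_pixels(counts, background=9.0))  # [3]
```

The confidence-limit and cluster methods in the paper refine this idea by propagating per-pixel uncertainty and by requiring spatially contiguous detections.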
CAPABILITY ASSESSMENT OF MEASURING EQUIPMENT USING STATISTIC METHOD
Directory of Open Access Journals (Sweden)
Pavel POLÁK
2014-10-01
Full Text Available Capability assessment of the measuring device is one of the methods of process quality control. Only if the measuring device is capable can the capability of the measurement process, and consequently of the production process, be assessed. This paper deals with assessment of the capability of the measuring device using the indices Cg and Cgk.
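One common formulation of the Cg and Cgk indices compares a fraction of the tolerance (often 20 %) against the gauge's spread and bias; conventions for the fraction and the sigma multiple vary between references, so the formulas and numbers below are an assumed illustration:

```python
def cg(tolerance, s, k=0.2):
    """Gauge capability: fraction k of the tolerance vs. the 6-sigma
    spread of repeated measurements of a reference part."""
    return (k * tolerance) / (6 * s)

def cgk(tolerance, s, x_bar, x_ref, k=0.2):
    """As cg, but penalized by the gauge bias |x_bar - x_ref|
    (mean of repeated readings vs. the reference value)."""
    return (k * tolerance / 2 - abs(x_bar - x_ref)) / (3 * s)

# Hypothetical gauge study: tolerance 0.6 mm, s = 0.004 mm,
# reference part 10.000 mm read on average as 10.002 mm.
print(cg(0.6, 0.004))                 # 5.0
print(cgk(0.6, 0.004, 10.002, 10.0))  # lower than cg because of the bias
```

A gauge is typically judged capable when both indices exceed a threshold such as 1.33; with no bias, `cgk` collapses to `cg`.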
Monaco, E.; Memmolo, V.; Ricci, F.; Boffa, N. D.; Maio, L.
2015-03-01
Maintenance approaches based on sensorised structures and Structural Health Monitoring (SHM) systems have for many years been among the most promising innovations in the field of aerostructures, especially where composite materials (fibre-reinforced resins) are concerned. Layered materials still suffer today from drastic reductions of the maximum allowable stress values during the design phase, as well as from costly, recurrent inspections during the life-cycle phase, which prevent their structural and economic potential from being fully exploited in today's aircraft. These penalizing measures are necessary mainly to account for the possible presence of undetected hidden flaws within the layered sequence (delaminations) or in bonded areas (partial disbonds). To relax design and maintenance constraints, a system based on sensors permanently installed on the structure to detect and locate eventual flaws (an SHM system) can be considered, once its effectiveness and reliability have been statistically demonstrated via a rigorous definition and evaluation of the Probability Of Detection (POD) function. This paper presents an experimental approach with a statistical procedure for evaluating the detection threshold of a guided-wave-based SHM system oriented to delamination detection on a typical composite layered wing panel. The experimental tests are mainly oriented to characterizing the statistical distribution of measurements and damage metrics, as well as the system's detection capability, using this approach. Numerical simulation cannot substitute for the part of the POD experimental tests in which the noise in the system response is crucial. Results of the experiments are presented and analyzed in the paper.
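A detection threshold of the kind evaluated here is often set from the statistical distribution of the damage metric on an undamaged structure. The sketch below uses an assumed mean-plus-n-sigma rule on invented baseline values; it is not the authors' procedure:

```python
import math

def detection_threshold(noise_metrics, n_sigma=3.0):
    """Threshold set n_sigma above the mean of the damage metric measured
    on a pristine (no-damage) structure, so that healthy readings
    rarely trigger an alarm."""
    n = len(noise_metrics)
    mean = sum(noise_metrics) / n
    var = sum((v - mean) ** 2 for v in noise_metrics) / (n - 1)
    return mean + n_sigma * math.sqrt(var)

# Hypothetical damage-metric values recorded on an undamaged panel:
baseline = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13]
threshold = detection_threshold(baseline)
# A measurement above `threshold` would be flagged as a possible delamination.
```

A full POD study then relates exceedances of this threshold to actual flaw sizes across many repeated experiments.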
Directory of Open Access Journals (Sweden)
Susmaga Robert
2018-03-01
Full Text Available The paper considers particular interestingness measures, called confirmation measures (also known as Bayesian confirmation measures), used for the evaluation of “if evidence, then hypothesis” rules. The agreement of such measures with a statistically sound (significant) dependency between the evidence and the hypothesis in data is thoroughly investigated. The popular confirmation measures were not defined to possess such a form of agreement. However, in error-prone environments, a potential lack of agreement may lead to undesired effects, e.g. when a measure indicates either strong confirmation or strong disconfirmation while in fact there is only a weak dependency between the evidence and the hypothesis. In order to detect and prevent such situations, the paper employs a coefficient that assesses the level of dependency between the evidence and the hypothesis in data, and introduces a method of quantifying the level of agreement (referred to as concordance) between this coefficient and the measure being analysed. The concordance is characterized and visualised using specialized histograms, scatter-plots, etc. Moreover, risk-related interpretations of the concordance are introduced. Using a set of 12 confirmation measures, the paper presents experiments designed to establish the actual concordance as well as other useful characteristics of the measures.
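The two quantities being compared can be illustrated on a 2x2 contingency table: a simple confirmation measure (the difference measure, P(H|E) - P(H), one common choice among the 12 studied) alongside a dependency coefficient (here the phi coefficient, an assumed stand-in for the coefficient used in the paper):

```python
import math

def confirmation_s(a, b, c, d):
    """Difference confirmation measure c(E,H) = P(H|E) - P(H) from a 2x2
    table: a = #(E and H), b = #(E and not H),
    c = #(not E and H), d = #(not E and not H)."""
    n = a + b + c + d
    p_h_given_e = a / (a + b)
    p_h = (a + c) / n
    return p_h_given_e - p_h

def phi(a, b, c, d):
    """Phi coefficient: strength of dependency between E and H in the data."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den if den else 0.0

# Independent data: both the measure and the coefficient are zero.
print(confirmation_s(25, 25, 25, 25), phi(25, 25, 25, 25))
# Dependent data: both become positive, i.e. they agree in sign here.
print(confirmation_s(40, 10, 10, 40), phi(40, 10, 10, 40))
```

The paper's concordance analysis asks, in effect, how often and how strongly such pairs of values agree across the space of possible tables.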
Hopewell, Sally; Witt, Claudia M; Linde, Klaus; Icke, Katja; Adedire, Olubusola; Kirtley, Shona; Altman, Douglas G
2018-01-11
Selective reporting of outcomes in clinical trials is a serious problem. We aimed to investigate the influence of the peer review process within biomedical journals on the reporting of primary outcome(s) and statistical analyses within reports of randomised trials. Each month, PubMed (May 2014 to April 2015) was searched to identify primary reports of randomised trials published in six high-impact general and 12 high-impact specialty journals. The corresponding author of each trial was invited to complete an online survey asking about changes made to their manuscript as part of the peer review process. Our main aims were to assess: (1) the nature and extent of changes made as part of the peer review process in relation to reporting of the primary outcome(s) and/or primary statistical analysis; (2) how often authors followed these requests; and (3) whether this was related to specific journal or trial characteristics. Of 893 corresponding authors who were invited to take part in the online survey, 258 (29%) responded. The majority of trials were multicentre (n = 191; 74%); the median sample size was 325 (IQR 138 to 1010). The primary outcome was clearly defined in 92% (n = 238), of which the direction of treatment effect was statistically significant in 49%. On a 1-10 Likert scale, the majority responded that they were satisfied with the overall handling (mean 8.6, SD 1.5) and quality of peer review (mean 8.5, SD 1.5) of their manuscript. Only 3% (n = 8) said that the editor or peer reviewers had asked them to change or clarify the trial's primary outcome. However, 27% (n = 69) reported that they were asked to change or clarify the statistical analysis of the primary outcome; most had fulfilled the request, the main motivation being to improve the statistical methods (n = 38; 55%) or avoid rejection (n = 30; 44%). Overall, there was little association between authors being asked to make this change and the type of journal, intervention, significance of the
Directory of Open Access Journals (Sweden)
Viviane Moura Rocha
2015-04-01
This paper presents a topographic analysis of the fields of consumer loyalty and loyalty programs, widely studied in the last decades and still relevant in the marketing literature. After the identification of 250 scientific papers published in the last ten years in indexed journals, a subset of 76 was chosen and their 3223 references were extracted. The journals in which these papers were published, their key words, abstracts, authors, institutions of origin and citation patterns were identified and analyzed using bibliometrics, spatial statistics techniques and network analyses. The results allow the identification of the central components of the field, as well as its main authors, journals, institutions and countries that intermediate the diffusion of knowledge, which contributes to the understanding of the constitution of the field by researchers and students.
High order statistical signatures from source-driven measurements of subcritical fissile systems
International Nuclear Information System (INIS)
Mattingly, J.K.
1998-01-01
This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements
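The claim that successively higher-order counting statistics are progressively more sensitive can be illustrated with the simplest second-order signature: the excess of the variance-to-mean ratio of gate counts over the Poisson value (the Feynman-Y statistic). This is a generic sketch with made-up count records, not the report's actual signatures:

```python
from statistics import mean, pvariance

def feynman_y(counts):
    """Second-order counting statistic: the excess of the variance-to-mean
    ratio over the Poisson value of 1. A purely random (Poisson) source
    gives Y near 0; correlated fission chains push Y above 0."""
    m = mean(counts)
    return pvariance(counts, m) / m - 1.0

# Gate counts from an uncorrelated source vs. overdispersed counts
# from correlated chains (illustrative numbers, not the report's data)
baseline = [2, 4, 5, 8, 3, 7, 5, 6]
correlated = [1, 9, 2, 8, 1, 9, 2, 8]
print(feynman_y(baseline) < feynman_y(correlated))  # True
```

Both records share the same mean, so a first-order (count-rate) signature cannot separate them; the second-order statistic can.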
Buttigieg, Pier Luigi; Ramette, Alban
2014-12-01
The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynamic, web-based resource providing accessible descriptions of numerous multivariate techniques relevant to microbial ecologists. A combination of interactive elements allows users to discover and navigate between methods relevant to their needs and examine how they have been used by others in the field. We have designed GUSTA ME to become a community-led and -curated service, which we hope will provide a common reference and forum to discuss and disseminate analytical techniques relevant to the microbial ecology community. © 2014 The Authors. FEMS Microbiology Ecology published by John Wiley & Sons Ltd on behalf of Federation of European Microbiological Societies.
Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor
2015-11-30
Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important. Copyright © 2015 John Wiley & Sons, Ltd.
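The attenuation factor discussed above has a simple closed form under the classical additive error model Q = T + e; the sketch below (with illustrative variances, not the paper's estimates) shows how folding unmodelled within-person time variation into the error term shrinks it:

```python
# Classical additive error model: Q = T + e, with e independent of T.
# The attenuation factor lambda = var(T) / (var(T) + var(e)) is the
# multiplicative bias of a regression slope when Q replaces T.
def attenuation_factor(var_true, var_error):
    return var_true / (var_true + var_error)

# If within-person (time) variation is treated as part of the error,
# the apparent attenuation is stronger: modelling exposure as fixed
# when it varies over time misstates the instrument's accuracy.
fixed = attenuation_factor(var_true=1.0, var_error=0.5)
time_varying = attenuation_factor(var_true=1.0, var_error=0.5 + 0.3)

print(round(fixed, 3))         # 0.667
print(round(time_varying, 3))  # 0.556
```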
International Nuclear Information System (INIS)
Agus, M; Penna, M P; Peró-Cebollero, M; Guàrdia-Olmos, J
2015-01-01
Numerous studies have examined students' difficulties in understanding some notions related to statistical problems. Some authors observed that the presentation of distinct visual representations could increase statistical reasoning, supporting the principle of graphical facilitation. But other researchers disagree with this viewpoint, emphasising the impediments related to the use of illustrations that could overload the cognitive system with irrelevant information. In this work we aim at comparing probabilistic statistical reasoning across two different formats of problem presentation: graphical and verbal-numerical. We conceived and presented five pairs of homologous simple problems in the verbal-numerical and graphical formats to 311 undergraduate Psychology students (n=156 in Italy and n=155 in Spain) without statistical expertise. The purpose of our work was to evaluate the effect of graphical facilitation in probabilistic statistical reasoning. Each undergraduate solved every pair of problems in both formats, with problem presentation orders and sequences varied. Data analyses highlighted that the effect of graphical facilitation is infrequent in psychology undergraduates. This effect is related to many factors (such as knowledge, abilities, attitudes, and anxiety); moreover, it might be considered the result of an interaction between individual and task characteristics.
Bayesian statistical evaluation of peak area measurements in gamma spectrometry
International Nuclear Information System (INIS)
Silva, L.; Turkman, A.; Paulino, C.D.
2010-01-01
We analyze results from determinations of peak areas for a radioactive source containing several radionuclides. The statistical analysis was performed using Bayesian methods based on the usual Poisson model for observed counts. This model does not appear to be a very good assumption for the counting system under investigation, even though it is not questioned as a whole by the inferential procedures adopted. We conclude that, in order to avoid incorrect inferences on relevant quantities, one must proceed to a further study that allows us to include missing influence parameters and to select a model explaining the observed data much better.
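For reference, the standard Poisson counting model that the authors question admits a conjugate Bayesian analysis: with a Gamma(a, b) prior on the count rate, the posterior after n measurements is again a Gamma distribution. A minimal sketch with made-up peak-area counts and a Jeffreys-type prior (not the paper's data or prior choice):

```python
from math import sqrt

def poisson_gamma_posterior(counts, prior_shape=0.5, prior_rate=0.0):
    """Conjugate Bayesian update for a Poisson rate: with a Gamma(a, b)
    prior and n observed counts summing to s, the posterior is
    Gamma(a + s, b + n). Returns its mean and standard deviation."""
    a = prior_shape + sum(counts)
    b = prior_rate + len(counts)
    return a / b, sqrt(a) / b

# Repeated count measurements of one peak (illustrative values)
counts = [102, 98, 110, 95, 105]
mean, sd = poisson_gamma_posterior(counts)
print(round(mean, 2), round(sd, 2))
```

A check of whether the observed scatter of the counts exceeds what this posterior predicts is exactly the kind of model criticism the abstract calls for.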
The measurement and role of government procurement in macroeconomic statistics
Sumit Dey-Chowdhury; Geoff Tily
2007-01-01
Details the measurement and role of government procurement in the UK National Accounts, including existing data collections, and identifies specific initiatives. This article details the measurement and role of government procurement in the UK National Accounts. The need for an accurate estimate has increased following both internal and external users' analytical requirements, in particular the development of measures of market sector gross value added, emphasis on government productivit...
Statistical measure of ensemble non reproducibility and correction to Bell's inequality
International Nuclear Information System (INIS)
Khrennikov, A.
2000-01-01
In this work we analyse the proof of Bell's inequality and demonstrate that this inequality is related to one particular model of probability theory, namely Kolmogorov's measure-theoretical axiomatics of 1933. We find a (numerical) statistical correction to Bell's inequality. Such an additional term ε_φ on the right-hand side of Bell's inequality can be considered as a probability invariant of a quantum state φ. This is a measure of non-reproducibility of hidden variables in different runs of experiments. Experiments to verify Bell's inequality can be considered as just experiments to estimate the constant ε_φ. It seems that Bell's inequality could not be used as a crucial reason to deny local realism.
Counting statistics in low level radioactivity measurements with fluctuating counting efficiency
International Nuclear Information System (INIS)
Pazdur, M.F.
1976-01-01
A divergence between the probability distribution of the number of nuclear disintegrations and the number of observed counts, caused by counting efficiency fluctuation, is discussed. The negative binomial distribution is proposed to describe the probability distribution of the number of counts, instead of the Poisson distribution, which is assumed to hold for the number of nuclear disintegrations only. From actual measurements the r.m.s. amplitude of counting efficiency fluctuation is estimated. Some consequences of counting efficiency fluctuation are investigated and the corresponding formulae are derived: (1) for the detection limit as a function of the number of partial measurements and the relative amplitude of counting efficiency fluctuation, and (2) for the optimum allocation of the number of partial measurements between sample and background. (author)
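The negative binomial proposal follows from mixing a Poisson distribution over a fluctuating efficiency: if the relative r.m.s. amplitude of the fluctuation is δ, the variance of the observed counts inflates from the Poisson value m to m + (δm)², which is the negative binomial mean-variance law. A small sketch of this relation (illustrative numbers only):

```python
def count_variance(mean_counts, rel_eff_fluct):
    """Variance of observed counts when the counting efficiency
    fluctuates with relative r.m.s. amplitude rel_eff_fluct.
    Mixing a Poisson over the fluctuating rate gives
    var = m + (delta * m)**2, i.e. overdispersion relative to the
    pure Poisson case var = m (the negative binomial law)."""
    m = mean_counts
    return m + (rel_eff_fluct * m) ** 2

m = 10_000
print(count_variance(m, 0.0))   # pure Poisson: var = m
print(count_variance(m, 0.01))  # a 1% efficiency fluctuation doubles it
```

For large m the quadratic term dominates, which is why even small efficiency fluctuations matter at high count rates.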
Wavelet Statistical Analysis of Low-Latitude Geomagnetic Measurements
Papa, A. R.; Akel, A. F.
2009-05-01
Following previous works by our group (Papa et al., JASTP, 2006), where we analyzed a series of records acquired at the Vassouras National Geomagnetic Observatory in Brazil for the month of October 2000, we introduce a wavelet analysis for the same type of data and for other periods. It is well known that wavelets allow a more detailed study in several senses: the time window for analysis can be drastically reduced compared to other traditional methods (Fourier, for example), while at the same time allowing almost continuous tracking of both the amplitude and frequency of signals as time goes by. This advantage opens some possibilities for potentially useful forecasting methods of the type also advanced by our group in previous works (see, for example, Papa and Sosman, JASTP, 2008). However, the simultaneous statistical analysis of both time series (in our case amplitude and frequency) is a challenging matter, and it is in this sense that we have found what we consider our main goal. Some possible trends for future works are advanced.
Distortion in fingerprints: a statistical investigation using shape measurement tools.
Sheets, H David; Torres, Anne; Langenburg, Glenn; Bush, Peter J; Bush, Mary A
2014-07-01
Friction ridge impression appearance can be affected due to the type of surface touched and pressure exerted during deposition. Understanding the magnitude of alterations, regions affected, and systematic/detectable changes occurring would provide useful information. Geometric morphometric techniques were used to statistically characterize these changes. One hundred and fourteen prints were obtained from a single volunteer and impressed with heavy, normal, and light pressure on computer paper, soft gloss paper, 10-print card stock, and retabs. Six hundred prints from 10 volunteers were rolled with heavy, normal, and light pressure on soft gloss paper and 10-print card stock. Results indicate that while different substrates/pressure levels produced small systematic changes in fingerprints, the changes were small in magnitude: roughly the width of one ridge. There were no detectable changes in the degree of random variability of prints associated with either pressure or substrate. In conclusion, the prints transferred reliably regardless of pressure or substrate. © 2014 American Academy of Forensic Sciences.
Discussant Remarks on Session: Statistical Aspects of Measuring the Internet
Energy Technology Data Exchange (ETDEWEB)
Cottrell, Les
1999-04-02
These remarks will briefly summarize what we learn from the talks in this session, and add some more areas in Internet Measurement that may provide challenges for statisticians. It will also point out some reasons why statisticians may be interested in working in this area.
Measuring social capital through multivariate analyses for the IQ-SC.
Campos, Ana Cristina Viana; Borges, Carolina Marques; Vargas, Andréa Maria Duarte; Gomes, Viviane Elisangela; Lucas, Simone Dutra; Ferreira e Ferreira, Efigênia
2015-01-20
Social capital can be viewed as a societal process that works toward the common good as well as toward the good of the collective, based on trust, reciprocity, and solidarity. Our study aimed to present two multivariate statistical analyses to examine the formation of latent classes of social capital using the IQ-SC and to identify the most important factors in building an indicator of individual social capital. A cross-sectional study was conducted in 2009 among working adolescents supported by a Brazilian NGO. The sample consisted of 363 individuals, and data were collected using the World Bank Questionnaire for measuring social capital. First, the participants were grouped by a segmentation analysis using the Two Step Cluster method, based on the Euclidean distance and the centroid criterion for aggregating answers. Using specific weights for each item, discriminant analysis was used to validate the cluster analysis in an attempt to maximize the variance among the groups with respect to the variance within the clusters. "Community participation" and "trust in one's neighbors" contributed significantly to the development of the model with two distinct discriminant functions (p < 0.001). The majority of cases (95.0%) and non-cases (93.1%) were correctly classified by discriminant analysis. The two multivariate analyses (segmentation analysis and canonical discriminant analysis), used together, can be considered good choices for measuring social capital. Our results indicate that it is possible to form three social capital groups (low, medium and high) using the IQ-SC.
Lind, Mads V; Savolainen, Otto I; Ross, Alastair B
2016-08-01
Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is advantageous compared with single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery, and how MS-based results can be used for increasing biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.
Wang, W. L.; Tsui, K. L.; Lo, S. M.; Liu, S. B.
2018-01-01
Crowded transportation hubs such as metro stations are thought to be ideal places for the development and spread of epidemics. However, because of their complex spatial layouts and confined environments with large numbers of highly mobile individuals, it is difficult to quantify human contacts in such settings, and disease spreading dynamics there were less explored in previous studies. Owing to the heterogeneity and dynamic nature of human interactions, an increasing number of studies have demonstrated the importance of contact distance and length of contact in transmission probabilities. In this study, we show how detailed information on contact and exposure patterns can be obtained by statistical analyses of microscopic crowd simulation data. To be specific, a pedestrian simulation model, CityFlow, was employed to reproduce individuals' movements in a metro station based on site survey data, and values and distributions of individual contact rate and exposure in different simulation cases were obtained and analyzed. Interestingly, a Weibull distribution fitted the histogram values of individual-based exposure in each case very well. Moreover, we found that both individual contact rate and exposure had a linear relationship with the average crowd density of the environment. The results obtained in this paper can provide reference for epidemic studies in complex and confined transportation hubs and refine existing disease spreading models.
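A Weibull fit of the kind reported for individual exposure can be reproduced generically by maximum likelihood. The sketch below solves the standard profile equation for the shape parameter by bisection, using only the standard library and made-up positive exposure values (not the simulation data):

```python
from math import log
from statistics import fmean

def weibull_mle(xs, lo=0.01, hi=50.0, iters=80):
    """Maximum-likelihood Weibull fit (shape k, scale lam).
    The profile equation g(k) = sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x)
    is increasing in k with a unique root, so bisection suffices."""
    mlnx = fmean(log(x) for x in xs)
    def g(k):
        num = sum(x ** k * log(x) for x in xs)
        den = sum(x ** k for x in xs)
        return num / den - 1.0 / k - mlnx
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    # Given k, the MLE of the scale has a closed form
    lam = fmean(x ** k for x in xs) ** (1.0 / k)
    return k, lam

# Illustrative positive exposure-like values (assumed, not from the study)
data = [0.8, 1.1, 1.4, 1.9, 2.3, 2.7, 3.4, 4.1, 5.2, 6.8]
k, lam = weibull_mle(data)
print(round(k, 2), round(lam, 2))
```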
Precision Measurement and Calibration. Volume 1. Statistical Concepts and Procedures
1969-02-01
[OCR-damaged excerpt; recoverable fragments include the references "... When Both Variables are Subject to Error," Biometrics, Vol. 5, No. 3 (1950); R. D. Stiehler, G. G. Richey, and J. Mandel, "Measurement of..."; "Analysis of extreme values," Ann. Math. Stat., 21 (1950), 488-506; W. J. Dixon, "Processing data for outliers," Biometrics, 9 (1953), 74-89; and a figure captioned "Schematic Representation of Hierarchies of Standards Laboratories."]
Forbes, Valery E; Aufderheide, John; Warbritton, Ryan; van der Hoeven, Nelly; Caspers, Norbert
2007-03-01
This study presents results of the effects of bisphenol A (BPA) on adult egg production, egg hatchability, egg development rates and juvenile growth rates in the freshwater gastropod, Marisa cornuarietis. We observed no adult mortality, substantial inter-snail variability in reproductive output, and no effects of BPA on reproduction during 12 weeks of exposure to 0, 0.1, 1.0, 16, 160 or 640 microg/L BPA. We observed no effects of BPA on egg hatchability or timing of egg hatching. Juveniles showed good growth in the control and all treatments, and there were no significant effects of BPA on this endpoint. Our results do not support previous claims of enhanced reproduction in Marisa cornuarietis in response to exposure to BPA. Statistical power analysis indicated high levels of inter-snail variability in the measured endpoints and highlighted the need for sufficient replication when testing treatment effects on reproduction in M. cornuarietis with adequate power.
Measurement and Statistics of Application Business in Complex Internet
Wang, Lei; Li, Yang; Li, Yipeng; Wu, Shuhang; Song, Shiji; Ren, Yong
Owing to their independent topologies and autonomic routing mechanisms, the logical networks formed by Internet application business behavior exert a significant influence on the physical networks. In this paper, the backbone traffic of TUNET (Tsinghua University Networks) is measured; furthermore, the two most important application businesses, HTTP and P2P, are analyzed at the IP-packet level. It is shown that uplink HTTP and P2P packet behavior presents spatio-temporal power-law characteristics with exponents 1.25 and 1.53, respectively. Downlink HTTP packet behavior also presents power-law characteristics, but with a smaller exponent γ = 0.82, which differs from traditional complex-networks research results. Moreover, the downlink P2P packet distribution presents an approximate power-law, which means that flow equilibrium actually profits little from the distributed peer-to-peer mechanism.
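Power-law exponents such as those quoted can be estimated from traffic data with the standard continuous maximum-likelihood formula. This sketch uses synthetic heavy-tailed data drawn by inverse-CDF sampling, not TUNET measurements:

```python
import random
from math import log

def powerlaw_exponent(xs, xmin):
    """Continuous maximum-likelihood estimator of a power-law exponent:
    alpha = 1 + n / sum(ln(x_i / xmin)) over the tail x_i >= xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(log(x / xmin) for x in tail)

# Synthetic sample from p(x) ~ x^(-alpha), x >= xmin, via the inverse CDF
# x = xmin * (1 - u)^(-1/(alpha - 1))  (illustrative, not traffic data)
random.seed(1)
xmin, alpha_true = 1.0, 2.25
sample = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
          for _ in range(20000)]
print(round(powerlaw_exponent(sample, xmin), 2))
```

The estimator recovers the generating exponent to within its standard error of about (alpha - 1)/sqrt(n).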
Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.
2010-01-01
This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…
Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R
2016-12-01
MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²_GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²_GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We
Laurinaviciene, Aida; Plancoulaine, Benoit; Baltrusaityte, Indra; Meskauskas, Raimundas; Besusparis, Justinas; Lesciute-Krilaviciene, Daiva; Raudeliunas, Darius; Iqbal, Yasir; Herlin, Paulette; Laurinavicius, Arvydas
2014-01-01
Digital immunohistochemistry (IHC) is one of the most promising applications brought by new-generation image analysis (IA). While conventional IHC staining quality is monitored by semi-quantitative visual evaluation of tissue controls, IA may require more sensitive measurement. We designed an automated system to digitally monitor IHC multi-tissue controls, based on SQL-level integration of the laboratory information system with image and statistical analysis tools. Consecutive sections of a TMA containing 10 cores of breast cancer tissue were used as tissue controls in routine Ki67 IHC testing. The Ventana slide label barcode ID was sent to the LIS to register the serial section sequence. The slides were stained and scanned (Aperio ScanScope XT), and IA was performed by the Aperio/Leica Colocalization and Genie Classifier/Nuclear algorithms. SQL-based integration ensured automated statistical analysis of the IA data by the SAS Enterprise Guide project. Factor analysis and plot visualizations were performed to explore slide-to-slide variation of the Ki67 IHC staining results in the control tissue. Slide-to-slide intra-core IHC staining analysis revealed rather significant variation of the variables reflecting the sample size, while Brown and Blue Intensity were relatively stable. To further investigate this variation, the IA results from the 10 cores were aggregated to minimize tissue-related variance. Factor analysis revealed an association between the variables reflecting the sample size detected by IA and Blue Intensity. Since the main feature to be extracted from the tissue controls was staining intensity, we further explored the variation of the intensity variables in the individual cores. MeanBrownBlue Intensity ((Brown+Blue)/2) and DiffBrownBlue Intensity (Brown-Blue) were introduced to better contrast the absolute intensity and the colour balance variation in each core; relevant factor scores were extracted. Finally, tissue-related factors of IHC staining variance were
International Nuclear Information System (INIS)
Pirson, A.S.; George, J.; Krug, B.; Vander Borght, T.; Van Laere, K.; Jamart, J.; D'Asseler, Y.; Minoshima, S.
2009-01-01
Fully automated analysis programs have been applied more and more to aid the reading of regional cerebral blood flow SPECT studies. They are increasingly based on the comparison of the patient study with a normal database. In this study, we evaluate the ability of Three-Dimensional Stereotactic Surface Projection (3D-SSP) to isolate effects of age and gender in a previously studied normal population. The results were also compared with those obtained using Statistical Parametric Mapping (SPM99). Methods: Eighty-nine 99mTc-ECD SPECT studies performed in carefully screened healthy volunteers (46 females, 43 males; age 20-81 years) were analysed using 3D-SSP. A multivariate analysis based on the general linear model was performed with regions as intra-subject factor, gender as inter-subject factor and age as covariate. Results: Both age and gender had a significant interaction effect with regional tracer uptake. An age-related decline (p < 0.001) was found in the anterior cingulate gyrus, left frontal association cortex and left insula. Bilateral occipital association and left primary visual cortical uptake showed a significant relative increase with age (p < 0.001). Concerning the gender effect, women showed higher uptake (p < 0.01) in the parietal and right sensorimotor cortices. An age by gender interaction (p < 0.01) was only found in the left medial frontal cortex. The results were consistent with those obtained with SPM99. Conclusion: 3D-SSP analysis of normal rCBF variability is consistent with the literature and other automated voxel-based techniques, which highlight the effects of both age and gender. (authors)
Interpretation of the results of statistical measurements. [search for basic probability model
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
Brouwer, D.; Meijer, R.R.; Zevalkink, D.J.
2013-01-01
Several researchers have emphasized that item response theory (IRT)-based methods should be preferred over classical approaches in measuring change for individual patients. In the present study we discuss and evaluate the use of IRT-based statistics to measure statistically significant individual
Shim, Wonsik "Jeff"; McClure, Charles R.; Fraser, Bruce T.; Bertot, John Carlo
This manual provides a beginning approach for research libraries to better describe the use and users of their networked services. The manual also aims to increase the visibility and importance of developing such statistics and measures. Specific objectives are: to identify selected key statistics and measures that can describe use and users of…
Hoover, F. A.; Bowling, L. C.; Prokopy, L. S.
2015-12-01
Urban stormwater is an on-going management concern in municipalities of all sizes. In both combined and separated sewer systems, pollutants from stormwater runoff enter the natural waterway system during heavy rain events. Urban flooding during frequent and more intense storms is also a growing concern. Therefore, stormwater best-management practices (BMPs) are being implemented in efforts to reduce and manage stormwater pollution and overflow. The majority of BMP water quality studies focus on the small-scale, individual effects of the BMP, and the change in water quality directly from the runoff of these infrastructures. At the watershed scale, it is difficult to establish statistically whether or not these BMPs are making a difference in water quality, given that watershed-scale monitoring is often costly and time consuming, relying on significant sources of funds, which a city may not have. Hence, there is a need to quantify the level of sampling needed to detect the water quality impact of BMPs at the watershed scale. In this study, a power analysis was performed on data from an urban watershed in Lafayette, Indiana, to determine the frequency of sampling required to detect a significant change in water quality measurements. Using the R platform, results indicate that detecting a significant change in watershed-level water quality would require hundreds of weekly measurements, even when improvement is present. The second part of this study investigates whether the difficulty in demonstrating water quality change represents a barrier to adoption of stormwater BMPs. Semi-structured interviews of community residents and organizations in Chicago, IL are being used to investigate residents' understanding of water quality and best management practices and to identify their attitudes and perceptions towards stormwater BMPs. Second-round interviews will examine how information on uncertainty in water quality improvements influences their BMP attitudes and perceptions.
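The finding that hundreds of measurements may be needed follows from a standard sample-size calculation: for a small standardized effect d, the required n grows as 1/d². A sketch using the normal approximation to a two-sample test (the test form and effect sizes here are illustrative assumptions, not the study's actual power analysis):

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """Samples per group needed to detect a standardized mean
    difference d with a two-sided two-sample z-test
    (normal approximation to the t-test)."""
    z = NormalDist().inv_cdf
    return 2.0 * ((z(1 - alpha / 2) + z(power)) / d) ** 2

# A subtle water-quality change (d = 0.1 SD) needs on the order of
# 1600 measurements per period; a large change (d = 0.8 SD) only ~25.
print(round(n_per_group(0.1)), round(n_per_group(0.8)))
```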
Melsen, W G; Rovers, M M; Bonten, M J M; Bootsma, M C J|info:eu-repo/dai/nl/304830305
Variance between studies in a meta-analysis will exist. This heterogeneity may be of clinical, methodological or statistical origin. The last of these is quantified by the I² statistic. We investigated, using simulated studies, the accuracy of I² in the assessment of heterogeneity and the
Concepts for measuring maintenance performance and methods for analysing competing failure modes
International Nuclear Information System (INIS)
Cooke, Roger; Paulsen, Jette
1997-01-01
Measurement of maintenance performance is done on the basis of component history data in which service sojourns are distinguished according to whether they terminate in corrective or preventive maintenance. From the viewpoint of data analysis, corrective and preventive maintenance constitute competing failure modes. This article examines ways to assess maintenance performance without introducing statistical assumptions, then introduces a plausible statistical model for describing the interaction of preventive and corrective maintenance, and finally illustrates these with examples from the Nordic TUD data system.
Statistical Process Control Charts for Measuring and Monitoring Temporal Consistency of Ratings
Omar, M. Hafidz
2010-01-01
Methods of statistical process control were briefly investigated in the field of educational measurement as early as 1999. However, only the use of a cumulative sum chart was explored. In this article other methods of statistical quality control are introduced and explored. In particular, methods in the form of Shewhart mean and standard deviation…
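A Shewhart mean chart of the kind the article introduces flags rating periods whose subgroup mean falls outside 3-sigma control limits. The sketch below uses hypothetical weekly rating means and an assumed process standard deviation, not data from the article:

```python
from statistics import fmean

def xbar_limits(subgroup_means, sigma, n):
    """3-sigma Shewhart limits for means of subgroups of n ratings,
    given an assumed process standard deviation sigma: the subgroup
    mean has standard error sigma / sqrt(n)."""
    center = fmean(subgroup_means)
    half = 3.0 * sigma / n ** 0.5
    return center - half, center, center + half

# Hypothetical weekly mean ratings from panels of n = 4 raters
weekly = [3.1, 2.9, 3.0, 3.2, 2.8, 3.0, 3.6]
lo, center, hi = xbar_limits(weekly, sigma=0.2, n=4)
out_of_control = [w for w in weekly if not lo <= w <= hi]
print(out_of_control)  # [3.6]
```

A point outside the limits signals a week whose ratings drifted beyond what chance variation explains, which is the temporal-consistency check the article pursues.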
Measured sections and analyses of uranium host rocks of the Dockum Group, New Mexico and Texas
International Nuclear Information System (INIS)
Dickson, R.E.; Drake, D.P.; Reese, T.J.
1977-02-01
This report presents 27 measured sections from the Dockum Group of Late Triassic age, in the southern High Plains of eastern New Mexico and northwestern Texas. Many of the measured sections are only partial; the intent in those cases was to measure the parts of sections that had prominent sandstone/conglomerate beds or that had uranium deposits. No attempt was made to relate rock color to a rock color chart; rock colors are therefore approximate. Modal analyses (by thin-section examination) of sandstone and conglomerate samples and gamma-ray spectrometric analyses of the samples are presented in appendices
International Nuclear Information System (INIS)
Mizokami, Shinya; Hotta, Akitoshi; Kudo, Yoshiro; Yonehara, Tadashi; Watada, Masayuki; Sakaba, Hiroshi
2009-01-01
Current licensing practice in Japan consists of using conservative boundary and initial conditions (BIC), assumptions and analytical codes. The safety analyses for licensing purposes are inherently deterministic. Therefore, conservative BIC and assumptions, such as single failure, must be employed for the analyses. However, using conservative analytical codes is not considered essential. The standard committee of the Atomic Energy Society of Japan (AESJ) drew up the standard for using best estimate codes for safety analyses in 2008, after three years of discussions reflecting recent domestic and international findings. (author)
Reese, Hayne W.
1997-01-01
Recommends that when repeated-measures Latin-square designs are used to counterbalance treatments across a procedural variable or to reduce the number of treatment combinations given to each participant, effects be analyzed statistically, and that in all uses, researchers consider alternative interpretations of the variance associated with the…
Tay, Louis; Drasgow, Fritz
2012-01-01
Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…
Directory of Open Access Journals (Sweden)
Gyslain Giguère
2006-03-01
Full Text Available In this tutorial, we demonstrate how to use the Aggregate and Restructure procedures available in SPSS (versions 11 and up to prepare data files for repeated-measures analyses. In the first two sections of the tutorial, we briefly describe the Aggregate and Restructure procedures. In the final section, we present an example in which the data from a fictional lexical decision task are prepared for analysis using a mixed-design ANOVA. The tutorial demonstrates that the presented method is the most efficient way to prepare data for repeated-measures analyses in SPSS.
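The wide-to-long restructuring that the SPSS Restructure wizard performs can be mimicked in plain Python. The field names below (`id`, `condition`, `value`) are illustrative, not those used in the tutorial:

```python
def restructure_wide_to_long(rows, id_var, value_vars):
    """Convert one row per case (wide) into one row per case-condition pair
    (long), the layout required for repeated-measures analyses."""
    long_rows = []
    for row in rows:
        for var in value_vars:
            long_rows.append({id_var: row[id_var],
                              "condition": var,
                              "value": row[var]})
    return long_rows

# Example: RTs from a fictional lexical decision task, one row per participant.
wide = [{"id": 1, "word": 450, "nonword": 520},
        {"id": 2, "word": 430, "nonword": 510}]
long_form = restructure_wide_to_long(wide, "id", ["word", "nonword"])
```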
Harris, Joshua D; Brand, Jefferson C; Cote, Mark P; Dhawan, Aman
2017-08-01
Within the health care environment, there has been a recent and appropriate trend towards emphasizing the value of care provision. Reduced cost and higher quality improve the value of care. Quality is a challenging, heterogeneous, variably defined concept. At the core of quality is the patient's outcome, quantified by a vast assortment of subjective and objective outcome measures. There has been a recent evolution towards evidence-based medicine in health care, clearly elucidating the role of high-quality evidence across groups of patients and studies. Synthetic studies, such as systematic reviews and meta-analyses, are at the top of the evidence-based medicine hierarchy. Thus, these investigations may be the best potential source of guiding diagnostic, therapeutic, prognostic, and economic medical decision making. Systematic reviews critically appraise and synthesize the best available evidence to provide a conclusion statement (a "take-home point") in response to a specific answerable clinical question. A meta-analysis uses statistical methods to quantitatively combine data from single studies. Meta-analyses should be performed with methodologically high-quality, homogeneous studies (Level I or II evidence, i.e., randomized studies) to minimize confounding variable bias. When it is known that the literature is inadequate, or a recent systematic review has already been performed with a demonstration of insufficient data, then a new systematic review does not add anything meaningful to the literature. PROSPERO registration and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines assist authors in the design and conduct of systematic reviews and should always be used. Complete transparency of the conduct of the review permits reproducibility and improves fidelity of the conclusions. Pooling of data from overly dissimilar investigations should be avoided. This particularly applies to Level IV evidence, that is, noncomparative investigations.
Lin, W.; Ren, P.; Zheng, H.; Liu, X.; Huang, M.; Wada, R.; Qu, G.
2018-05-01
The experimental measures of the multiplicity derivatives—the moment parameters, the bimodal parameter, the fluctuation of maximum fragment charge number (normalized variance of Zmax, or NVZ), the Fisher exponent (τ), and the Zipf law parameter (ξ)—are examined to search for the liquid-gas phase transition in nuclear multifragmentation processes within the framework of the statistical multifragmentation model (SMM). The sensitivities of these measures are studied. All these measures predict a critical signature at or near the critical point, both for the primary and secondary fragments. Among these measures, the total multiplicity derivative and the NVZ provide accurate measures for the critical point from the final cold fragments as well as the primary fragments. The present study will provide a guide for future experiments and analyses in the study of the nuclear liquid-gas phase transition.
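Two of the measures named in this abstract are simple to state concretely. A minimal sketch (the event-by-event Zmax values and grid of the control parameter are made up for illustration; SMM event generation is not reproduced):

```python
import statistics

def nvz(zmax_per_event):
    """Normalized variance of the maximum fragment charge (NVZ):
    event-to-event variance of Zmax divided by its mean."""
    return statistics.pvariance(zmax_per_event) / statistics.mean(zmax_per_event)

def multiplicity_derivative(multiplicity, control):
    """Central-difference derivative of mean multiplicity with respect to a
    control parameter (e.g. excitation energy), at interior grid points;
    a peak in this derivative signals the transition region."""
    return [(multiplicity[i + 1] - multiplicity[i - 1]) /
            (control[i + 1] - control[i - 1])
            for i in range(1, len(control) - 1)]
```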
Santos, José António; Galante-Oliveira, Susana; Barroso, Carlos
2011-03-01
The current work presents an innovative statistical approach to model ordinal variables in environmental monitoring studies. An ordinal variable has values that can only be compared as "less", "equal" or "greater", and it is not possible to have information about the size of the difference between two particular values. The example of an ordinal variable under this study is the vas deferens sequence (VDS) used in imposex (superimposition of male sexual characters onto prosobranch females) field assessment programmes for monitoring tributyltin (TBT) pollution. The statistical methodology presented here is the ordered logit regression model. It assumes that the VDS is an ordinal variable whose values match up with a process of imposex development that can be considered continuous in both biological and statistical senses and can be described by a latent, non-observable continuous variable. This model was applied to the case study of Nucella lapillus imposex monitoring surveys conducted on the Portuguese coast between 2003 and 2008 to evaluate the temporal evolution of TBT pollution in this country. In order to produce more reliable conclusions, the proposed model includes covariates that may influence the imposex response besides TBT (e.g. the shell size). The model also provides an analysis of the environmental risk associated with TBT pollution by estimating the probability of the occurrence of females with VDS ≥ 2 in each year, according to OSPAR criteria. We consider that the proposed application of this statistical methodology has great potential in environmental monitoring whenever there is the need to model variables that can only be assessed through an ordinal scale of values.
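The ordered (cumulative) logit model described above maps a latent continuous variable, cut at increasing thresholds, onto ordinal categories. A minimal sketch (the linear predictor and cutpoints below are illustrative values, not fitted estimates from the survey data):

```python
import math

def ordered_logit_probs(eta, cutpoints):
    """Category probabilities under a cumulative (ordered) logit model.

    eta: linear predictor (e.g. an effect of TBT exposure and shell size).
    cutpoints: increasing thresholds on the latent continuous scale.
    Returns P(Y = j) for the J = len(cutpoints) + 1 ordered categories.
    """
    cdf = [1 / (1 + math.exp(-(k - eta))) for k in cutpoints]  # P(Y <= j)
    cdf.append(1.0)
    return [cdf[0]] + [cdf[j] - cdf[j - 1] for j in range(1, len(cdf))]

def prob_at_least(eta, cutpoints, j):
    """P(Y >= j), e.g. the OSPAR-style risk P(VDS >= 2)."""
    return sum(ordered_logit_probs(eta, cutpoints)[j:])
```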
Zhang, Zhang
2012-03-22
Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. © 2012 Zhang et al; licensee BioMed Central Ltd.
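The bootstrapping idea behind CDC's significance assessment can be illustrated with a toy bias score. The uniform expectation below is a deliberate stand-in for CDC's composition-adjusted expectation, so this is a sketch of the resampling logic only, not the published measure:

```python
import math
import random
from collections import Counter

def usage_deviation(codons):
    """Toy bias score: Euclidean distance between the observed codon
    frequencies and a uniform expectation. (CDC instead uses an
    expectation adjusted for background nucleotide composition.)"""
    counts = Counter(codons)
    n = len(codons)
    expected = 1 / len(counts)
    return math.sqrt(sum((c / n - expected) ** 2 for c in counts.values()))

def bootstrap_pvalue(codons, n_boot=500, seed=0):
    """Significance by resampling: generate sequences of the same length
    under the null of unbiased usage over the codon kinds present, and
    count how often the null score reaches the observed score."""
    rng = random.Random(seed)
    kinds = sorted(set(codons))
    obs = usage_deviation(codons)
    hits = sum(
        usage_deviation([rng.choice(kinds) for _ in codons]) >= obs
        for _ in range(n_boot)
    )
    return hits / n_boot
```

A perfectly balanced sequence yields a p-value of 1, while a strongly skewed one yields a p-value near 0.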
Directory of Open Access Journals (Sweden)
Zhang Zhang
2012-03-01
Full Text Available Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.
Energy Technology Data Exchange (ETDEWEB)
Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef, E-mail: ulm@mit.edu
2014-01-15
According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: •A new method of clinker characterization •Combination of electron probe technique with cluster analysis •Simultaneous assessment of phase abundance, composition and bulk chemistry •Experimental validation performed on industrial clinkers.
International Nuclear Information System (INIS)
Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef
2014-01-01
According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: •A new method of clinker characterization •Combination of electron probe technique with cluster analysis •Simultaneous assessment of phase abundance, composition and bulk chemistry •Experimental validation performed on industrial clinkers
Directory of Open Access Journals (Sweden)
R. Eric Heidel
2016-01-01
Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
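The a priori sample size calculation described above can be made concrete for the common case of comparing two group means. This sketch uses the standard normal approximation (the exact t-based calculation yields a slightly larger n), with the five components reduced to effect size, alpha, and power:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """A priori sample size per group for a two-sided, two-sample
    comparison of means, via the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist()
    n = 2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / effect_size) ** 2
    return math.ceil(n)
```

For a medium effect (d = 0.5) at alpha = 0.05 and 80% power, this gives about 63 participants per group.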
Measurement and analysis of radioactive substances; Mesure et analyse de substances radioactives
Energy Technology Data Exchange (ETDEWEB)
NONE
2001-07-01
Here are gathered the abstracts presented to the 3. summer university of the year 2001 whose main themes were the destructive (5 conferences) and nondestructive (8 conferences) analyses applied to nuclear industry. The points of view of different organisms (as DSIN: Directorate for the Safety of Nuclear Installations, IPSN: Institute of Nuclear Protection and Safety, OPRI: Office of Protection against Ionizing Radiations, TUI: Institute for Transuranium Elements, COGEMA, EDF: Electric Utilities, ANDRA: French National Agency for Radioactive Waste Management, CRLC Val d'Aurelle, France) concerning the needs involved in nuclear facilities control, the methods of radionuclide speciation in use internationally, the measurements and analyses of radioactive substances are given too as well as some general concepts concerning 1)the laser-matter interaction 2)the ions production 3)the quality applied to the measurements and analyses 4)the standard in activity metrology. (O.M.)
On divergence of finite measures and their applicability in statistics and information theory
Czech Academy of Sciences Publication Activity Database
Vajda, Igor; Stummer, W.
2009-01-01
Roč. 44, č. 2 (2009), s. 169-187 ISSN 0233-1888 R&D Projects: GA MŠk(CZ) 1M0572; GA ČR(CZ) GA102/07/1131 Institutional research plan: CEZ:AV0Z10750506 Keywords : Local and global divergences of finite measures * Divergences of sigma-finite measures * Statistical censoring * Pinsker's inequality, Ornstein's distance * Differential power entropies Subject RIV: BD - Theory of Information Impact factor: 0.759, year: 2009 http://library.utia.cas.cz/separaty/2009/SI/vajda-on divergence of finite measures and their applicability in statistics and information theory.pdf
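Pinsker's inequality, listed among this record's keywords, bounds total variation by the Kullback-Leibler divergence: D(P‖Q) ≥ 2·V(P, Q)² in nats. A quick numeric sketch for discrete distributions (not the generalized finite-measure setting of the paper):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P||Q) in nats, discrete case
    (terms with p_i = 0 contribute zero)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def total_variation(p, q):
    """Total variation distance V(P, Q) = (1/2) * sum |p_i - q_i|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))
```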
Barellini, A; Bogi, L; Licitra, G; Silvi, A M; Zari, A
2009-12-01
Air traffic control (ATC) primary radars are 'classical' radars that use echoes of radiofrequency (RF) pulses from aircraft to determine their position. High-power RF pulses radiated from radar antennas may produce high electromagnetic field levels in the surrounding area. The measurement of electromagnetic fields produced by RF-pulsed radar by means of a swept-tuned spectrum analyser is investigated here. Measurements have been carried out both in the laboratory and in situ on signals generated by an ATC primary radar.
Mathur, Sunil; Sadana, Ajit
2015-12-01
We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.
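The Wilcoxon signed-rank statistic used as a comparator above can be sketched as follows (zero differences dropped, average ranks for ties). This is the classical comparator, not the authors' distance-based statistic:

```python
def signed_rank_statistic(x, y):
    """W+ of the Wilcoxon signed-rank test for paired samples: rank the
    absolute differences, then sum the ranks of the positive differences.
    A robust, rank-based alternative to the paired t-test."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a block of tied absolute differences
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block (1-based)
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)
```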
International Nuclear Information System (INIS)
Czyzykiewicz, R.
2007-02-01
The analysing power measurements for the p⃗p→ppη reaction studied in this dissertation are used in the determination of the reaction mechanism of the η meson production in nucleon-nucleon collisions. Measurements have been performed in the close-to-threshold energy region at beam momenta of p_beam = 2.010 and 2.085 GeV/c, corresponding to excess energies of Q = 10 and 36 MeV, respectively. The experiments were realised by means of the cooler synchrotron and storage ring COSY along with a cluster jet target. For the registration of the reaction products the COSY-11 facility has been used. The identification of the η meson has been performed with the missing mass method. The results for the angular dependence of the analysing power, combined with the hitherto determined isospin dependence of the total cross section for η meson production in nucleon-nucleon collisions, reveal a statistically significant indication that the excitation of the nucleon to the S₁₁ resonance, the process which intermediates the production of the η meson, is predominantly due to the exchange of a π meson between the colliding nucleons. The determined values of the analysing power at both excess energies are consistent with zero, implying that the η meson is produced predominantly in the s-wave at both excess energies. (orig.)
Directory of Open Access Journals (Sweden)
Putnik-Delić Marina I.
2010-01-01
Full Text Available Eleven sugar beet genotypes were tested for their capacity to tolerate drought. Plants were grown in semi-controlled conditions, in the greenhouse, and watered daily. After 90 days, water deficit was imposed by the cessation of watering, while the control plants continued to be watered up to 80% of FWC. Five days later, the concentration of free proline in leaves was determined. The analysis was done in three replications. Statistical analysis was performed using STATISTICA 9.0, Minitab 15, and R 2.11.1. Differences between genotypes were statistically processed by Duncan's test. Because of the non-normality of the data distribution and the heterogeneity of variances in different groups, two types of transformations of the raw data were applied. For this type of data, the Johnson transformation was more appropriate in eliminating non-normality than the Box-Cox transformation. Based on both transformations, it may be concluded that in all genotypes except for 10, the concentration of free proline differs significantly between the treatment (drought) and the control.
Sound source measurement by using a passive sound insulation and a statistical approach
Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.
2015-10-01
This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments by reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequency, the sound insulation given by the passive sound insulation system alone. The developed measurement technique has been validated by means of numerical simulations and by measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.
Statistical Determination of Impact of Property Attributes for Weak Measurement Scales
Directory of Open Access Journals (Sweden)
Doszyń Mariusz
2017-12-01
Full Text Available Many property attributes are measured on weak scales (nominal and ordinal scales). For example, land allocation in the development plan is measured on a nominal scale, and such categories as proximity, equipment, access to means of communication, location, and soil and water conditions are measured on an ordinal scale. The use of statistical measures appropriate for interval or quotient scales is wrong in such cases. Therefore, the article presents statistical measures that allow specifying the impact of the attributes on real estate prices and that can be used for the weaker scales, mainly the ordinal scale. In the empirical illustration, the proposed measures are calculated using an actual database of transaction prices.
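One family of statistics appropriate for ordinal scales is rank correlation. The sketch below computes Kendall's tau-a between an ordinal attribute (coded as ordered integers) and prices; it illustrates the kind of measure the article advocates rather than its specific proposal, and with heavily tied ordinal data the tie-corrected tau-b would usually be preferred:

```python
def kendall_tau_a(x, y):
    """Kendall's tau-a rank correlation: (concordant - discordant) pairs
    divided by the total number of pairs. Valid for ordinal data because
    it uses only the ordering of values, never their differences."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```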
Directory of Open Access Journals (Sweden)
Carles Comas
2015-04-01
Full Text Available Aim of study: Understanding inter- and intra-specific competition for water is crucial in drought-prone environments. However, little is known about the spatial interdependencies for water uptake among individuals in mixed stands. The aim of this work was to compare water uptake patterns during a drought episode in two common Mediterranean tree species, Quercus ilex L. and Pinus halepensis Mill., using the isotope composition of xylem water (δ18O, δ2H) as a hydrological marker. Area of study: The study was performed in a mixed stand, sampling a total of 33 oaks and 78 pines (plot area = 888 m²). We tested the hypothesis that both species uptake water differentially along the soil profile, thus showing different levels of tree-to-tree interdependency, depending on whether neighbouring trees belong to one species or the other. Material and Methods: We used pair-correlation functions to study intra-specific point-tree configurations and the bivariate pair correlation function to analyse the inter-specific spatial configuration. Moreover, the isotopic composition of xylem water was analysed as a mark point pattern. Main results: Values for Q. ilex (δ18O = –5.3 ± 0.2‰, δ2H = –54.3 ± 0.7‰) were significantly lower than for P. halepensis (δ18O = –1.2 ± 0.2‰, δ2H = –25.1 ± 0.8‰), pointing to a greater contribution of deeper soil layers to water uptake by Q. ilex. Research highlights: Point-process analyses revealed spatial intra-specific dependencies among neighbouring pines, showing neither oak-oak nor oak-pine interactions. This supports niche segregation for water uptake between the two species.
A high-statistics measurement of the pp→nn charge-exchange reaction at 875 MeV/c
International Nuclear Information System (INIS)
Lamanna, M.; Ahmidouch, A.; Birsa, R.; Bradamante, F.; Bressan, A.; Bressani, T.; Dalla Torre-Colautti, S.; Giorgi, M.; Heer, E.; Hess, R.; Kunne, R.A.; Lechanoine-Le Luc, C.; Martin, A.; Mascarini, C.; Masoni, A.; Penzo, A.; Rapin, D.; Schiavon, P.; Tessarotto, F.
1995-01-01
A new measurement of the differential cross section and of the analysing power A₀ₙ of the charge-exchange reaction pp→nn at 875 MeV/c is presented. The A₀ₙ data cover the entire angular range and constitute a considerable improvement over previously published data, both in the forward and in the backward hemisphere. The cross-section data cover only the backward region, but are unique at this energy. A careful study of the long-term drifts of the apparatus has made it possible to fully exploit the good statistics of the data. ((orig.))
Ortolano, Gaetano; Visalli, Roberto; Godard, Gaston; Cirrincione, Rosolino
2018-06-01
We present a new ArcGIS®-based tool developed in the Python programming language for calibrating EDS/WDS X-ray element maps, with the aim of acquiring quantitative information of petrological interest. The calibration procedure is based on a multiple linear regression technique that takes into account interdependence among elements and is constrained by the stoichiometry of minerals. The procedure requires an appropriate number of spot analyses for use as internal standards and provides several test indexes for a rapid check of calibration accuracy. The code is based on an earlier image-processing tool designed primarily for classifying minerals in X-ray element maps; the original Python code has now been enhanced to yield calibrated maps of mineral end-members or the chemical parameters of each classified mineral. The semi-automated procedure can be used to extract a dataset that is automatically stored within queryable tables. As a case study, the software was applied to an amphibolite-facies garnet-bearing micaschist. The calibrated images obtained for both anhydrous (i.e., garnet and plagioclase) and hydrous (i.e., biotite) phases show a good fit with corresponding electron microprobe analyses. This new GIS-based tool package can thus find useful application in petrology and materials science research. Moreover, the huge quantity of data extracted opens new opportunities for the development of a thin-section microchemical database that, using a GIS platform, can be linked with other major global geoscience databases.
Directory of Open Access Journals (Sweden)
Jing-Chzi Hsieh
2016-05-01
Full Text Available The recycled Kevlar®/polyester/low-melting-point polyester (recycled Kevlar®/PET/LPET) nonwoven geotextiles are immersed in neutral, strong acid, and strong alkali solutions, respectively, at different temperatures for four months. Their tensile strength is then tested according to various immersion periods at various temperatures, in order to determine their durability to chemicals. For the purpose of analyzing the possible factors that influence the mechanical properties of geotextiles under diverse environmental conditions, the experimental results and statistical analyses are incorporated in this study. Therefore, the influences of the content of recycled Kevlar® fibers, the implementation of thermal treatment, and the immersion periods on the tensile strength of recycled Kevlar®/PET/LPET nonwoven geotextiles are examined, after which their influential levels are statistically determined by performing multiple regression analyses. According to the results, the tensile strength of nonwoven geotextiles can be enhanced by adding recycled Kevlar® fibers and by thermal treatment.
Performance evaluation of CT measurements made on step gauges using statistical methodologies
DEFF Research Database (Denmark)
Angel, J.; De Chiffre, L.; Kruth, J.P.
2015-01-01
In this paper, a study is presented in which statistical methodologies were applied to evaluate the measurement of step gauges on an X-ray computed tomography (CT) system. In particular, the effects of step gauge material density and orientation were investigated. The step gauges consist of uni- and bidirectional lengths. By confirming the repeatability of measurements made on the test system, the number of required scans in the design of experiment (DOE) was reduced. The statistical model was checked using model adequacy principles; model adequacy checking is an important step in validating…
International Nuclear Information System (INIS)
Brodsky, A.
1986-04-01
This report provides statistical concepts and formulas for defining minimum detectable amount (MDA), bias and precision of sample analytical measurements of radioactivity for radiobioassay purposes. The defined statistical quantities and accuracy criteria were developed for use in standard performance criteria for radiobioassay, but are also useful in intralaboratory quality assurance programs. This report also includes a literature review and analysis of accuracy needs and accuracy recommendations of national and international scientific organizations for radiation or radioactivity measurements used for radiation protection purposes. Computer programs are also included for calculating the probabilities of passing or failing multiple analytical tests for different acceptable ranges of bias and precision
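A widely used formulation of the MDA concept defined in this report is Currie's detection limit for a paired-blank counting measurement; the sketch below uses that standard form (5% false-positive and false-negative rates), and the report's exact definitions may differ:

```python
import math

def minimum_detectable_amount(background_counts, efficiency, count_time):
    """Currie-style MDA for a paired-blank measurement:
        L_D = 2.71 + 4.65 * sqrt(B)      (detection limit, in counts)
        MDA = L_D / (efficiency * time)  (converted to activity units)
    background_counts: blank counts B observed in the same counting time.
    """
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * count_time)
```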
Franco, Ana; Gaillard, Vinciane; Cleeremans, Axel; Destrebecqz, Arnaud
2015-12-01
Statistical learning can be used to extract the words from continuous speech. Gómez, Bion, and Mehler (Language and Cognitive Processes, 26, 212-223, 2011) proposed an online measure of statistical learning: They superimposed auditory clicks on a continuous artificial speech stream made up of a random succession of trisyllabic nonwords. Participants were instructed to detect these clicks, which could be located either within or between words. The results showed that, over the length of exposure, reaction times (RTs) increased more for within-word than for between-word clicks. This result has been accounted for by means of statistical learning of the between-word boundaries. However, even though statistical learning occurs without an intention to learn, it nevertheless requires attentional resources. Therefore, this process could be affected by a concurrent task such as click detection. In the present study, we evaluated the extent to which the click detection task indeed reflects successful statistical learning. Our results suggest that the emergence of RT differences between within- and between-word click detection is neither systematic nor related to the successful segmentation of the artificial language. Therefore, instead of being an online measure of learning, the click detection task seems to interfere with the extraction of statistical regularities.
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
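The algebraic sample-size adjustment discussed can be sketched as a linear rescaling of the chi-square fit statistic; the exact adjustment implemented in RUMM is not reproduced here, so treat this as an assumption-laden illustration of the idea:

```python
def adjusted_chi_square(chi2, n, n_adjusted):
    """Rescale a chi-square fit statistic as if it had been computed on a
    sample of size n_adjusted instead of n. Downward adjustment (n_adjusted
    < n) shrinks the statistic and so guards against Type I errors with
    large samples."""
    return chi2 * n_adjusted / n
```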
A Modified Jonckheere Test Statistic for Ordered Alternatives in Repeated Measures Design
Directory of Open Access Journals (Sweden)
Hatice Tül Kübra AKDUR
2016-09-01
Full Text Available In this article, a new test based on the Jonckheere test [1] for randomized blocks with dependent observations within blocks is presented. A weighted sum of per-block statistics, rather than the unweighted sum proposed by Jonckheere, is used. For Jonckheere-type statistics, the main assumption is the independence of observations within each block; in a repeated measures design, this assumption is violated. The weighted Jonckheere-type statistic is therefore used for dependent observations with different variance-covariance structures, under an ordered alternative hypothesis within each block of the design. The proposed statistic is also compared to the existing Jonckheere-based test in terms of Type I error rates by Monte Carlo simulation. For strong correlations, the circular bootstrap version of the proposed Jonckheere test provides lower Type I error rates.
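The per-block Jonckheere-Terpstra statistic and its weighted sum across blocks can be sketched as follows. The weights here are an arbitrary illustrative choice (the article derives them from the variance-covariance structure):

```python
def jonckheere_block(groups):
    """Jonckheere-Terpstra statistic for one block: over every ordered pair
    of groups (u < v), count observations in group v exceeding observations
    in group u; ties contribute 1/2. Large values support an increasing
    trend across the ordered groups."""
    stat = 0.0
    for u in range(len(groups)):
        for v in range(u + 1, len(groups)):
            for a in groups[u]:
                for b in groups[v]:
                    if b > a:
                        stat += 1
                    elif b == a:
                        stat += 0.5
    return stat

def weighted_jonckheere(blocks, weights):
    """Weighted sum of per-block statistics (weights are hypothetical here)."""
    return sum(w * jonckheere_block(b) for w, b in zip(weights, blocks))
```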
The measurement of statistical reasoning in verbal-numerical and graphical forms: a pilot study
International Nuclear Information System (INIS)
Agus, M; Penna, M P; Peró-Cebollero, M; Guàrdia-Olmos, J
2013-01-01
Numerous subjects have trouble understanding various concepts connected to statistical problems. Research reports that students' ability to solve problems (including statistical problems) can be influenced by the way the tasks are presented. In this work we aim to devise an original and simple instrument able to assess statistical reasoning on uncertainty and on association, with respect to two different forms of task presentation: pictorial-graphical and verbal-numerical. We conceived eleven pairs of simple problems in the verbal-numerical and pictorial-graphical forms and presented them to 47 undergraduate students. The purpose of our work was to evaluate the goodness and reliability of these problems for the assessment of statistical reasoning. Each subject solved each pair of tasks in the verbal-numerical and in the pictorial-graphical form, in different task presentation orders. Data analyses highlighted that six of the eleven pairs of problems appear useful and adequate to estimate statistical reasoning on uncertainty, and that there is no effect due to the order of presentation of the verbal-numerical and pictorial-graphical forms.
Yokoyama, Shozo; Takenaka, Naomi
2005-04-01
Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at approximately 510-540 nm (green- or middle wavelength-sensitive [MWS] pigments) and at approximately 560 nm (red- or long wavelength-sensitive [LWS] pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.
Wafer, Lucas; Kloczewiak, Marek; Luo, Yin
2016-07-01
Analytical ultracentrifugation-sedimentation velocity (AUC-SV) is often used to quantify high molar mass species (HMMS) present in biopharmaceuticals. Although these species are often present in trace quantities, they have received significant attention due to their potential immunogenicity. Commonly, AUC-SV data is analyzed as a diffusion-corrected, sedimentation coefficient distribution, or c(s), using SEDFIT to numerically solve Lamm-type equations. SEDFIT also utilizes maximum entropy or Tikhonov-Phillips regularization to further allow the user to determine relevant sample information, including the number of species present, their sedimentation coefficients, and their relative abundance. However, this methodology has several, often unstated, limitations, which may impact the final analysis of protein therapeutics. These include regularization-specific effects, artificial "ripple peaks," and spurious shifts in the sedimentation coefficients. In this investigation, we experimentally verified that an explicit Bayesian approach, as implemented in SEDFIT, can largely correct for these effects. Clear guidelines on how to implement this technique and interpret the resulting data, especially for samples containing micro-heterogeneity (e.g., differential glycosylation), are also provided. In addition, we demonstrated how the Bayesian approach can be combined with F statistics to draw more accurate conclusions and rigorously exclude artifactual peaks. Numerous examples with an antibody and an antibody-drug conjugate were used to illustrate the strengths and drawbacks of each technique.
Gregoire, Alexandre David
2011-07-01
The goal of this research was to accurately predict the ultimate compressive load of impact damaged graphite/epoxy coupons using a Kohonen self-organizing map (SOM) neural network and multivariate statistical regression analysis (MSRA). An optimized use of these data treatment tools allowed the generation of a simple, physically understandable equation that predicts the ultimate failure load of an impacted damaged coupon based uniquely on the acoustic emissions it emits at low proof loads. Acoustic emission (AE) data were collected using two 150 kHz resonant transducers which detected and recorded the AE activity given off during compression to failure of thirty-four impacted 24-ply bidirectional woven cloth laminate graphite/epoxy coupons. The AE quantification parameters duration, energy and amplitude for each AE hit were input to the Kohonen self-organizing map (SOM) neural network to accurately classify the material failure mechanisms present in the low proof load data. The number of failure mechanisms from the first 30% of the loading for twenty-four coupons were used to generate a linear prediction equation which yielded a worst case ultimate load prediction error of 16.17%, just outside of the +/-15% B-basis allowables, which was the goal for this research. Particular emphasis was placed upon the noise removal process which was largely responsible for the accuracy of the results.
Two-dimensional Kikuchi patterns of Si as measured using an electrostatic analyser
Energy Technology Data Exchange (ETDEWEB)
Vos, Maarten, E-mail: maarten.vos@anu.edu.au [Electronic Materials Engineering Department, Research School of Physics and Engineering, The Australian National University, Canberra 2601 (Australia); Winkelmann, Aimo [Bruker Nano GmbH, Am Studio 2D, Berlin 12489 (Germany)
2016-12-15
We present Kikuchi patterns of Si single crystals measured with an electrostatic analyser, where the kinetic energy of the diffracted electron is known with sub-eV precision. Two-dimensional patterns are acquired by rotating the crystal under computer control. This makes detailed comparison of calculated and measured distributions possible with precise knowledge of the energy of the scattered electrons. The case of Si is used to validate the method, and these experiments provide a detailed comparison of measured and calculated Kikuchi patterns. In this way, we can gain more insight into Kikuchi pattern formation in non-energy resolved measurements of conventional electron backscatter diffraction (EBSD) and electron channeling patterns (ECP). It was possible to identify the influence of channeling of the incoming beam on the measured Kikuchi pattern. The effect of energy loss on the Kikuchi pattern was established, and it is demonstrated that, under certain conditions, the channeling features have a different dependence on the energy loss compared to the Kikuchi lines. - Highlights: • Two-dimensional Kikuchi patterns measured for Silicon with electrostatic analyser. • Good agreement obtained with dynamical theory of diffraction. • Channeling effects of the incoming beam are identified.
International Nuclear Information System (INIS)
Kolluri, Srinivas Sahan; Esfahani, Iman Janghorban; Garikiparthy, Prithvi Sai Nadh; Yoo, Chang Kyoo
2015-01-01
Our aim was to analyze, monitor, and predict the outcomes of processes in a full-scale seawater reverse osmosis (SWRO) desalination plant using multivariate statistical techniques. Multivariate analysis of variance (MANOVA) was used to investigate the performance and efficiencies of two SWRO processes, namely, pore controllable fiber filter-reverse osmosis (PCF-SWRO) and sand filtration-ultra filtration-reverse osmosis (SF-UF-SWRO). Principal component analysis (PCA) was applied to monitor the two SWRO processes. PCA monitoring revealed that the SF-UF-SWRO process could be analyzed reliably with a low number of outliers and disturbances. Partial least squares (PLS) analysis was then conducted to predict which of the seven input parameters of feed flow rate, PCF/SF-UF filtrate flow rate, temperature of feed water, turbidity of feed, pH, reverse osmosis (RO) flow rate, and pressure had a significant effect on the outcome variables of permeate flow rate and concentration. Root mean squared errors (RMSEs) of the PLS models for permeate flow rates were 31.5 and 28.6 for the PCF-SWRO process and SF-UF-SWRO process, respectively, while RMSEs of permeate concentrations were 350.44 and 289.4, respectively. These results indicate that the SF-UF-SWRO process can be modeled more accurately than the PCF-SWRO process, because the RMSE values of permeate flow rate and concentration obtained using a PLS regression model of the SF-UF-SWRO process were lower than those obtained for the PCF-SWRO process.
Energy Technology Data Exchange (ETDEWEB)
Kolluri, Srinivas Sahan; Esfahani, Iman Janghorban; Garikiparthy, Prithvi Sai Nadh; Yoo, Chang Kyoo [Kyung Hee University, Yongin (Korea, Republic of)
2015-08-15
Deng, Xueliang; Nie, Suping; Deng, Weitao; Cao, Weihua
2018-04-01
In this study, we compared the following four different gridded monthly precipitation products: the National Centers for Environmental Prediction version 2 (NCEP-2) reanalysis data, the satellite-based Climate Prediction Center Morphing technique (CMORPH) data, the merged satellite-gauge Global Precipitation Climatology Project (GPCP) data, and the merged satellite-gauge-model data from the Beijing Climate Center Merged Estimation of Precipitation (BMEP). We evaluated the performances of these products using monthly precipitation observations spanning the period of January 2003 to December 2013 from a dense, national, rain gauge network in China. Our assessment involved several statistical techniques, including spatial pattern, temporal variation, bias, root-mean-square error (RMSE), and correlation coefficient (CC) analysis. The results show that NCEP-2, GPCP, and BMEP generally overestimate monthly precipitation at the national scale and CMORPH underestimates it. However, all of the datasets successfully characterized the northwest to southeast increase in the monthly precipitation over China. Because they include precipitation gauge information from the Global Telecommunication System (GTS) network, GPCP and BMEP have much smaller biases, lower RMSEs, and higher CCs than NCEP-2 and CMORPH. When the seasonal and regional variations are considered, NCEP-2 has a larger error over southern China during the summer. CMORPH poorly reproduces the magnitude of the precipitation over southeastern China and the temporal correlation over western and northwestern China during all seasons. BMEP has a lower RMSE and higher CC than GPCP over eastern and southern China, where the station network is dense. In contrast, BMEP has a lower CC than GPCP over western and northwestern China, where the gauge network is relatively sparse.
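The bias, RMSE and correlation coefficient used in such assessments are standard summary statistics; a minimal sketch with hypothetical monthly totals (the values are invented for illustration, not taken from the datasets compared above):

```python
import math

def bias(est, obs):
    """Mean difference, estimate minus observation."""
    return sum(e - o for e, o in zip(est, obs)) / len(obs)

def rmse(est, obs):
    """Root-mean-square error."""
    return math.sqrt(sum((e - o) ** 2 for e, o in zip(est, obs)) / len(obs))

def corr(est, obs):
    """Pearson correlation coefficient."""
    n = len(obs)
    me, mo = sum(est) / n, sum(obs) / n
    cov = sum((e - me) * (o - mo) for e, o in zip(est, obs))
    se = math.sqrt(sum((e - me) ** 2 for e in est))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (se * so)

gauge = [10.0, 20.0, 30.0, 40.0]    # hypothetical monthly gauge totals (mm)
gridded = [12.0, 19.0, 33.0, 44.0]  # hypothetical gridded estimates (mm)
print(bias(gridded, gauge), round(rmse(gridded, gauge), 3), round(corr(gridded, gauge), 3))
```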
Statistical measurement of power spectrum density of large aperture optical component
International Nuclear Information System (INIS)
Xu Jiancheng; Xu Qiao; Chai Liqun
2010-01-01
According to the requirement of ICF, a method based on statistical theory has been proposed to measure the power spectrum density (PSD) of large aperture optical components. The method breaks the large-aperture wavefront into small regions, and obtains the PSD of the large-aperture wavefront by weighted averaging of the PSDs of the regions, where the weight factor is each region's area. Simulation and experiment demonstrate the effectiveness of the proposed method. They also show that the PSDs of the large-aperture wavefront obtained by the statistical method and by the sub-aperture stitching method agree well when the number of small regions is no less than 8 x 8. The statistical method is not sensitive to translation-stage errors and environmental instabilities, so it is appropriate for PSD measurement during the process of optical fabrication. (authors)
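The area-weighted averaging at the heart of the method can be sketched as follows (assuming each region's PSD has already been estimated on a common spatial-frequency grid; all names and values are illustrative):

```python
def weighted_psd(region_psds, areas):
    """PSD of the large-aperture wavefront as the area-weighted average of the
    small-region PSDs (all assumed sampled on the same frequency grid)."""
    total_area = sum(areas)
    n_freq = len(region_psds[0])
    return [sum(area * psd[k] for psd, area in zip(region_psds, areas)) / total_area
            for k in range(n_freq)]

# Two regions with areas 1 and 3; the larger region dominates the average.
print(weighted_psd([[1.0, 2.0], [3.0, 4.0]], [1.0, 3.0]))  # -> [2.5, 3.5]
```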
Measurement of the analysing power of elastic proton-proton scattering at 582 MeV
International Nuclear Information System (INIS)
Berdoz, A.; Favier, B.; Foroughi, F.; Weddigen, C.
1984-01-01
The authors have measured the analysing power of elastic proton-proton scattering at 582 MeV for 14 angles from 20° to 80° CM. The angular range was limited to >20° by the energy loss of the recoil protons. The experiment was performed at the PM1 beam line at SIN. A beam intensity of about 10⁸ particles s⁻¹ was used. (Auth.)
New measurements in plutonium L X ray emission spectrum using an electron probe micro-analyser
International Nuclear Information System (INIS)
Bobin, J.L.; Despres, J.
1966-01-01
Further studies by means of an electron-probe micro-analyser allowed the authors of report CEA-R--1798 to compile a more extensive plutonium X-ray spectrum table. Measurements of the excitation potentials of the plutonium L II and L III levels have also been achieved. Some remarks on the performance of the apparatus (such as spectrograph sensitivity, resolving power and accuracy) can be found in the appendix. (authors) [fr]
The ICF has made a difference to functioning and disability measurement and statistics.
Madden, Rosamond H; Bundy, Anita
2018-02-12
Fifteen years after the publication of the International Classification of Functioning, Disability and Health (ICF), we investigated: How ICF applications align with ICF aims, contents and principles, and how the ICF has been used to improve measurement of functioning and related statistics. In a scoping review, we investigated research published 2001-2015 relating to measurement and statistics for evidence of: a change in thinking; alignment of applications with ICF specifications and philosophy; and the emergence of new knowledge. The ICF is used in diverse applications, settings and countries, with processes largely aligned with the ICF and intended to improve measurement and statistics: new national surveys, information systems and ICF-based instruments; and international efforts to improve disability data. Knowledge is growing about the components and interactions of the ICF model, the diverse effects of the environment on functioning, and the meaning and measurement of participation. The ICF provides specificity and a common language in the complex world of functioning and disability and is stimulating new thinking, new applications in measurement and statistics, and the assembling of new knowledge. Nevertheless, the field needs to mature. Identified gaps suggest ways to improve measurement and statistics to underpin policies, services and outcomes. Implications for Rehabilitation The ICF offers a conceptualization of functioning and disability that can underpin assessment and documentation in rehabilitation, with a growing body of experience to draw on for guidance. Experience with the ICF reminds practitioners to consider all the domains of participation, the effect of the environment on participation and the importance of involving clients/patients in assessment and service planning. Understanding the variability of functioning within everyday environments and designing interventions for removing barriers in various environments is a vital part of
International Nuclear Information System (INIS)
Barellini, A.; Bogi, L.; Licitra, G.; Silvi, A. M.; Zari, A.
2009-01-01
Air traffic control (ATC) primary radars are 'classical' radars that use echoes of radiofrequency (RF) pulses from aircraft to determine their position. High-power RF pulses radiated from radar antennas may produce high electromagnetic field levels in the surrounding area. Measurement of electromagnetic fields produced by RF-pulsed radar by means of a swept-tuned spectrum analyser are investigated here. Measurements have been carried out both in the laboratory and in situ on signals generated by an ATC primary radar. (authors)
Tuneable diode laser gas analyser for methane measurements on a large scale solid oxide fuel cell
Lengden, Michael; Cunningham, Robert; Johnstone, Walter
2011-10-01
A new in-line, real time gas analyser is described that uses tuneable diode laser spectroscopy (TDLS) for the measurement of methane in solid oxide fuel cells. The sensor has been tested on an operating solid oxide fuel cell (SOFC) in order to demonstrate the fast response and accuracy of the technology compared with a gas chromatograph. The advantages of using a TDLS system for process control in a large-scale, distributed power SOFC unit are described. In future work, the addition of new laser sources and wavelength modulation will allow the simultaneous measurement of methane, water vapour, carbon dioxide and carbon monoxide concentrations.
International Nuclear Information System (INIS)
Kawano, Takao
2014-01-01
It is known that radiation is detected at random and that radiation counts fluctuate statistically. In the present study, a radiation measurement experiment was performed to understand the randomness and statistical fluctuation of radiation counts. In the measurement, three natural radiation sources were used. The sources were fabricated from potassium chloride chemicals, chemical fertilizers and kelp. These materials contain naturally occurring potassium-40, which is a radionuclide. Nine teachers from high schools, junior high schools and elementary schools participated in the radiation measurement experiment. Each participant measured the 1-min integration counts of radiation five times using GM survey meters, and 45 sets of data were obtained for the respective natural radiation sources. It was found that the frequency of occurrence of radiation counts was distributed according to a Gaussian distribution curve, although the 45 data sets of radiation counts superficially appeared to fluctuate without any pattern. (author)
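The behaviour described, Poisson-distributed counts whose histogram approaches a Gaussian at moderate count rates, can be reproduced numerically. A sketch (the 100 counts-per-minute rate is an assumption for illustration; the generator is Knuth's classic multiplication method):

```python
import math
import random
import statistics

random.seed(1)
RATE = 100  # hypothetical mean 1-min count from a potassium-40 source

def poisson(lam):
    """Knuth's multiplication method for a Poisson deviate (fine for moderate lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

# 1000 repeated 1-min counts: for a Poisson process the variance equals the
# mean, and at this rate the count histogram is close to a Gaussian curve.
counts = [poisson(RATE) for _ in range(1000)]
m = statistics.mean(counts)
v = statistics.pvariance(counts)
print(round(m), round(v))  # both near 100
```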
One of the main uses of biomarker measurements is to compare different populations to each other and to assess risk in comparison to established parameters. This is most often done using summary statistics such as central tendency, variance components, confidence intervals, excee...
International Nuclear Information System (INIS)
Dowdy, E.J.; Hansen, G.E.; Robba, A.A.; Pratt, J.C.
1980-01-01
The complete formalism for the use of statistical neutron fluctuation measurements for the nondestructive assay of fissionable materials has been developed. This formalism includes the effect of detector deadtime, neutron multiplicity, random neutron pulse contributions from (α,n) contaminants in the sample, and the sample multiplication of both fission-related and background neutrons
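One building block of such fluctuation formalisms is the excess variance-to-mean ratio (Feynman-Y) of gated neutron counts; the sketch below shows only this basic statistic, not the full formalism with deadtime, multiplicity and (α,n) corrections developed in the work:

```python
import statistics

def feynman_y(gate_counts):
    """Excess of the variance-to-mean ratio over 1: near 0 for uncorrelated
    (Poisson) pulse trains, positive when fission chains correlate counts
    within a counting gate."""
    mean = statistics.mean(gate_counts)
    return statistics.pvariance(gate_counts) / mean - 1.0

# Strongly clustered gate counts give a large positive Y.
print(feynman_y([0, 10, 0, 10]))  # -> 4.0
```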
Watts, A.L.; Lilienfeld, S.O.; Edens, J.F.; Douglas, K.S.; Skeem, J.L.; Verschuere, B.; LoPilato, A.C.
2016-01-01
Given that psychopathy is associated with narcissism, lack of insight, and pathological lying, the assumption that the validity of self-report psychopathy measures is compromised by response distortion has been widespread. We examined the statistical effects (moderation, suppression) of response
Statistics & Input-Output Measures for School Libraries in Colorado, 2002.
Colorado State Library, Denver.
This document presents statistics and input-output measures for K-12 school libraries in Colorado for 2002. Data are presented by type and size of school, i.e., high schools (six categories ranging from 2,000 and over to under 300), junior high/middle schools (five categories ranging from 1,000-1,999 to under 300), elementary schools (four…
Directory of Open Access Journals (Sweden)
Wang Xiaoqiang
2012-04-01
Full Text Available Abstract Background Quantitative trait loci (QTL detection on a huge amount of phenotypes, like eQTL detection on transcriptomic data, can be dramatically impaired by the statistical properties of interval mapping methods. One of these major outcomes is the high number of QTL detected at marker locations. The present study aims at identifying and specifying the sources of this bias, in particular in the case of analysis of data issued from outbred populations. Analytical developments were carried out in a backcross situation in order to specify the bias and to propose an algorithm to control it. The outbred population context was studied through simulated data sets in a wide range of situations. The likelihood ratio test was firstly analyzed under the "one QTL" hypothesis in a backcross population. Designs of sib families were then simulated and analyzed using the QTL Map software. On the basis of the theoretical results in backcross, parameters such as the population size, the density of the genetic map, the QTL effect and the true location of the QTL, were taken into account under the "no QTL" and the "one QTL" hypotheses. A combination of two non parametric tests - the Kolmogorov-Smirnov test and the Mann-Whitney-Wilcoxon test - was used in order to identify the parameters that affected the bias and to specify how much they influenced the estimation of QTL location. Results A theoretical expression of the bias of the estimated QTL location was obtained for a backcross type population. We demonstrated a common source of bias under the "no QTL" and the "one QTL" hypotheses and qualified the possible influence of several parameters. Simulation studies confirmed that the bias exists in outbred populations under both the hypotheses of "no QTL" and "one QTL" on a linkage group. The QTL location was systematically closer to marker locations than expected, particularly in the case of low QTL effect, small population size or low density of markers, i
Directory of Open Access Journals (Sweden)
Deverick J Anderson
Full Text Available The rate of community-acquired Clostridium difficile infection (CA-CDI) is increasing. While receipt of antibiotics remains an important risk factor for CDI, studies related to acquisition of C. difficile outside of hospitals are lacking. As a result, risk factors for exposure to C. difficile in community settings have been inadequately studied. To identify novel environmental risk factors for CA-CDI, we performed a population-based retrospective cohort study of patients with CA-CDI from 1/1/2007 through 12/31/2014 in a 10-county area in central North Carolina. 360 Census Tracts in these 10 counties were used as the demographic Geographic Information System (GIS) base-map. Longitude and latitude (X, Y) coordinates were generated from patient home addresses and overlaid onto Census Tract polygons using ArcGIS; ArcView was used to assess "hot spots" or clusters of CA-CDI. We then constructed a mixed hierarchical model to identify environmental variables independently associated with increased rates of CA-CDI. A total of 1,895 unique patients met our criteria for CA-CDI. The mean patient age was 54.5 years; 62% were female and 70% were Caucasian. 402 (21%) patient addresses were located in "hot spots" or clusters of CA-CDI (p < 0.001). "Hot spot" census tracts were scattered throughout the 10 counties. After adjusting for clustering and population density, age ≥ 60 years (p = 0.03), race (p < 0.001), proximity to a livestock farm (p = 0.01), proximity to farming raw materials services (p = 0.02), and proximity to a nursing home (p = 0.04) were independently associated with increased rates of CA-CDI. Our study is the first to use spatial statistics and mixed models to identify important environmental risk factors for acquisition of C. difficile and adds to the growing evidence that farm practices may put patients at risk for important drug-resistant infections.
Rai, Hari Mohan; Saxena, Shailendra K.; Mishra, Vikash; Kumar, Rajesh; Sagdeo, P. R.
2017-08-01
Magnetodielectric (MD) materials have attracted considerable attention due to their intriguing physics and potential future applications. However, the intrinsicality of the MD effect is always a major concern in such materials as the MD effect may arise also due to the MR (magnetoresistance) effect. In the present case study, we report an experimental approach to analyse and separate the intrinsic and MR dominated contributions of the MD phenomenon. For this purpose, polycrystalline samples of LaGa1-xAxO3 (A = Mn/Fe) have been prepared by solid state reaction method. The purity of their structural phase (orthorhombic) has been validated by refining the X-ray diffraction data. The RTMD (room temperature MD) response has been recorded over a frequency range of 20 Hz to 10 MHz. In order to analyse the intrinsicality of the MD effect, FDMR (frequency dependent MR) by means of IS (impedance spectroscopy) and dc MR measurements in four probe geometry have been carried out at RT. A significant RTMD effect has been observed in selected Mn/Fe doped LaGaO3 (LGO) compositions. The mechanism of MR free/intrinsic MD effect, observed in Mn/Fe doped LGO, has been understood speculatively in terms of modified cell volume associated with the reorientation/retransformation of spin-coupled Mn/Fe orbitals due to the application of magnetic field. The present analysis suggests that in order to justify the intrinsic/resistive origin of the MD phenomenon, FDMR measurements are more useful than measuring only dc MR or analysing the trends of magnetic field dependent change in the dielectric constant and tanδ. On the basis of the present case study, we propose that IS (FDMR) alone can be used as an effective experimental tool to detect and analyse the resistive and intrinsic parts contributing to the MD phenomenon.
International Nuclear Information System (INIS)
Tauhata, L.; Vianna, M.E.; Oliveira, A.E. de; Clain, A.F.; Ferreira, A.C.M.; Bernardes, E.M.
2000-01-01
The accuracy and precision of radionuclide analyses in environmental samples are widely demanded internationally because of their consequences for decisions on environmental pollution and impact, and on internal and external population exposure. These measurement characteristics of the laboratories can be shown clearly using intercomparison data, owing to the existence of a reference value and the requirement of three determinations for each analysis. In intercomparison studies, accuracy in radionuclide assays of low-level environmental samples has usually been the main focus of performance evaluation; it can be estimated from the deviation between the experimental laboratory mean value and the reference value. The repeatability of laboratory measurements, or their standard deviation, is seldom included in performance evaluation. In order to show the influence of the uncertainties on performance evaluation of the laboratories, data from 22 intercomparison runs, which distributed 790 spiked environmental samples to 20 Brazilian participant laboratories, were compared using the U.S. EPA 'Normalised Standard Deviation' as the statistical criterion for performance evaluation. This criterion mainly takes into account laboratory accuracy; the same data were then classified by a normalised standard deviation modified by a weight factor that includes the individual laboratory uncertainty. The results show a relative decrease in laboratory performance in each radionuclide assay: 1.8% for Zn-65, 2.8% for K-40, 3.4% for Co-60, 3.7% for Cs-134, 4.0% for Cs-137, 4.4% for Th and natural U, 4.5% for H-3, 6.3% for Ba-133, 8.6% for Sr-90, 10.6% for gross alpha, 10.9% for Ru-106, 11.1% for Ra-226, 11.5% for gross beta and 13.6% for Ra-228. The changes in the parameters of the statistical distribution function were negligible and the distribution remained Gaussian for all radionuclides analysed. Data analyses in terms of
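The normalised-deviation idea expresses a laboratory's result as its distance from the reference value in units of an expected standard deviation. A sketch (the ±3 acceptance limit and the omission of the individual-uncertainty weight factor are simplifying assumptions for illustration):

```python
def normalised_deviation(lab_mean, reference, sigma_ref):
    """Laboratory mean minus reference value, in units of the expected
    standard deviation for that radionuclide and sample matrix."""
    return (lab_mean - reference) / sigma_ref

def acceptable(lab_mean, reference, sigma_ref, limit=3.0):
    """Flag performance as acceptable when within +/- `limit` normalised
    units (the 3-unit limit here is an illustrative choice)."""
    return abs(normalised_deviation(lab_mean, reference, sigma_ref)) <= limit

# A lab reporting 105 Bq/kg against a 100 Bq/kg reference with sigma = 5:
print(normalised_deviation(105, 100, 5), acceptable(105, 100, 5))  # -> 1.0 True
```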
Statistics Hacks Tips & Tools for Measuring the World and Beating the Odds
Frey, Bruce
2008-01-01
Want to calculate the probability that an event will happen? Be able to spot fake data? Prove beyond doubt whether one thing causes another? Or learn to be a better gambler? You can do that and much more with 75 practical and fun hacks packed into Statistics Hacks. These cool tips, tricks, and mind-boggling solutions from the world of statistics, measurement, and research methods will not only amaze and entertain you, but will give you an advantage in several real-world situations, including business.
Schulz, Marcus; Neumann, Daniel; Fleet, David M; Matthies, Michael
2013-12-01
During the last decades, marine pollution with anthropogenic litter has become a worldwide major environmental concern. Standardized monitoring of litter since 2001 on 78 beaches selected within the framework of the Convention for the Protection of the Marine Environment of the North-East Atlantic (OSPAR) has been used to identify temporal trends of marine litter. Based on statistical analyses of this dataset a two-part multi-criteria evaluation system for beach litter pollution of the North-East Atlantic and the North Sea is proposed. Canonical correlation analyses, linear regression analyses, and non-parametric analyses of variance were used to identify different temporal trends. A classification of beaches was derived from cluster analyses and served to define different states of beach quality according to abundances of 17 input variables. The evaluation system is easily applicable and relies on the above-mentioned classification and on significant temporal trends implied by significant rank correlations. Copyright © 2013 Elsevier Ltd. All rights reserved.
Markovic, Gabriela; Schult, Marie-Louise; Bartfai, Aniko; Elg, Mattias
2017-01-31
Progress in early cognitive recovery after acquired brain injury is uneven and unpredictable, making the evaluation of rehabilitation complex; moreover, time-series measurements are subject to change arising from ordinary process variation. The aim was to evaluate the feasibility of using a time-series method, statistical process control, in early cognitive rehabilitation. Participants were 27 patients with acquired brain injury undergoing interdisciplinary rehabilitation of attention within 4 months post-injury. The outcome measure, the Paced Auditory Serial Addition Test, was analysed using statistical process control, which identifies if and when change occurs in the process, here according to 3 patterns: rapid, steady or stationary performers. The statistical process control method was adjusted, in terms of constructing the baseline and the total number of measurement points, in order to measure a process undergoing change. Statistical process control methodology proved feasible for use in early cognitive rehabilitation, since it provides information about change in a process, thus enabling adjustment of the individual treatment response. Together with the results indicating discernible subgroups that respond differently to rehabilitation, statistical process control could be a valid tool in clinical decision-making. This study is a starting-point in understanding the rehabilitation process using a real-time-measurements approach.
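A common statistical process control construction for individual time-series measurements is the XmR (individuals) chart, whose limits are the baseline mean ± 2.66 times the mean moving range. A sketch with hypothetical PASAT scores (the paper's adjusted baseline construction is not reproduced here):

```python
def xmr_limits(baseline):
    """Individuals (XmR) chart limits: centre line +/- 2.66 * mean moving
    range, computed from a baseline series of single measurements."""
    centre = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

# Hypothetical PASAT baseline scores; a later score above the upper limit
# would signal real change rather than ordinary process variation.
lo, centre, hi = xmr_limits([30, 32, 31, 33, 30])
print(round(lo, 2), centre, round(hi, 2))
```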
Statistical methods for quality assurance basics, measurement, control, capability, and improvement
Vardeman, Stephen B
2016-01-01
This undergraduate statistical quality assurance textbook clearly shows with real projects, cases and data sets how statistical quality control tools are used in practice. Among the topics covered is a practical evaluation of measurement effectiveness for both continuous and discrete data. Gauge Reproducibility and Repeatability methodology (including confidence intervals for Repeatability, Reproducibility and the Gauge Capability Ratio) is thoroughly developed. Process capability indices and corresponding confidence intervals are also explained. In addition to process monitoring techniques, experimental design and analysis for process improvement are carefully presented. Factorial and Fractional Factorial arrangements of treatments and Response Surface methods are covered. Integrated throughout the book are rich sets of examples and problems that help readers gain a better understanding of where and how to apply statistical quality control tools. These large and realistic problem sets in combination with the...
A simple program to measure and analyse tree rings using Excel, R and SigmaScan
Hietz, Peter
2011-01-01
I present new software that links a program for image analysis (SigmaScan), one for spreadsheets (Excel) and one for statistical analysis (R) for applications of tree-ring analysis. The first macro measures ring widths marked by the user on scanned images, stores raw and detrended data in Excel and calculates the distance to the pith and inter-series correlations. A second macro measures darkness along a defined path to identify the latewood–earlywood transition in conifers, and a third shows the potential for automatic detection of ring boundaries. Written in Visual Basic for Applications, the code takes advantage of existing programs and is consequently very economical and relatively simple to adjust to the requirements of specific projects, or to expand by making use of already available code. PMID:26109835
Matenaers, Cyrill; Popper, Bastian; Rieger, Alexandra; Wanke, Rüdiger; Blutke, Andreas
2018-01-01
The accuracy of quantitative stereological analysis tools such as the (physical) disector method substantially depends on the precise determination of the thickness of the analyzed histological sections. One conventional method for measurement of histological section thickness is to re-embed the section of interest vertically to its original section plane. The section thickness is then measured in a subsequently prepared histological section of this orthogonally re-embedded sample. However, the orthogonal re-embedding (ORE) technique is quite work- and time-intensive and may produce inaccurate section thickness measurement values due to unintentional slightly oblique (non-orthogonal) positioning of the re-embedded sample-section. Here, an improved ORE method is presented, allowing for determination of the factual section plane angle of the re-embedded section, and correction of measured section thickness values for oblique (non-orthogonal) sectioning. For this, the analyzed section is mounted flat on a foil of known thickness (calibration foil) and both the section and the calibration foil are then vertically (re-)embedded. The section angle of the re-embedded section is then calculated from the deviation of the measured section thickness of the calibration foil and its factual thickness, using basic geometry. To find a practicable, fast, and accurate alternative to ORE, the suitability of spectral reflectance (SR) measurement for determination of plastic section thicknesses was evaluated. Using a commercially available optical reflectometer (F20, Filmetrics®, USA), the thicknesses of 0.5 μm thick semi-thin Epon (glycid ether)-sections and of 1-3 μm thick plastic sections (glycolmethacrylate/ methylmethacrylate, GMA/MMA), as regularly used in physical disector analyses, could precisely be measured within few seconds. Compared to the measured section thicknesses determined by ORE, SR measures displayed less than 1% deviation. Our results prove the applicability
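The geometry behind the calibration-foil correction can be illustrated directly: an oblique cut inflates apparent thickness by a factor of 1/cos(theta), so a foil of known thickness reveals the tilt angle, which then corrects the section measurement. The thickness values below are hypothetical, and the formulas assume a simple planar oblique cut:

```python
import math

def section_angle(foil_true_um, foil_measured_um):
    """Tilt angle (radians) of the re-embedded section, inferred from
    the apparent thickening of a calibration foil of known thickness."""
    return math.acos(foil_true_um / foil_measured_um)

def corrected_thickness(section_measured_um, theta):
    """Correct a measured section thickness for oblique sectioning."""
    return section_measured_um * math.cos(theta)

# Hypothetical numbers: a 15.0 um foil appears 15.5 um thick,
# so the block was cut slightly off-orthogonal
theta = section_angle(15.0, 15.5)
t_corrected = corrected_thickness(1.05, theta)
```

Even a modest tilt of this size biases a ~1 um section measurement by a few percent, which matters for disector-based counts.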
Statistical Analysis and Comparison of Harmonics Measured in Offshore Wind Farms
DEFF Research Database (Denmark)
Kocewiak, Lukasz Hubert; Hjerrild, Jesper; Bak, Claus Leth
2011-01-01
The paper shows statistical analysis of harmonic components measured in different offshore wind farms. Harmonic analysis is a complex task that requires many aspects, such as measurements, data processing, modeling and validation, to be taken into consideration. The paper describes the measurement process...... and shows sophisticated analysis of representative harmonic measurements from the Avedøre Holme, Gunfleet Sands and Burbo Bank wind farms. The nature of the generation and behavior of harmonic components in offshore wind farms is clearly presented and explained based on a probabilistic approach. Some issues regarding...... commonly applied standards are also put forward in the discussion. Based on measurements and data analysis, it is shown that a general overview of wind farm harmonic behaviour cannot be fully obtained from single-value measurements as suggested in the standards, but requires more descriptive...
A new support measure to quantify the impact of local optima in phylogenetic analyses.
Brammer, Grant
2011-09-29
Phylogenetic analyses are often incorrectly assumed to have stabilized to a single optimum. However, a set of trees from a phylogenetic analysis may contain multiple distinct local optima, with each optimum providing different levels of support for each clade. For situations with multiple local optima, we propose p-support, a clade support measure that shows the impact optima have on a final consensus tree. Our p-support measure is implemented in our PeakMapper software package. We study our approach on two published, large-scale biological tree collections. PeakMapper shows that each data set contains multiple local optima. p-support shows that both datasets contain clades in the majority consensus tree that are supported by only a subset of the local optima. Clades with low p-support are most likely to benefit from further investigation. These tools provide researchers with new information regarding phylogenetic analyses beyond what is provided by other support measures alone.
Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning
Directory of Open Access Journals (Sweden)
Chuan Li
2016-06-01
Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-value Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults.
International Nuclear Information System (INIS)
Pinotti, E.; Brenna, M.; Puppin, E.
2008-01-01
In magneto-optical Kerr measurements of the Barkhausen noise, a magnetization jump ΔM due to a domain reversal produces a variation ΔI of the intensity of a laser beam reflected by the sample, which is the physical quantity actually measured. Due to the non-uniform beam intensity profile, the magnitude of ΔI depends both on ΔM and on its position within the laser spot. This could distort the statistical distribution p(ΔI) of the measured ΔI with respect to the true distribution p(ΔM) of the magnetization jumps ΔM. In this work the exact relationship between the two distributions is derived in a general form, which is then applied to some possible beam profiles. It is shown that in most cases the usual Gaussian beam produces a negligible statistical distortion. Moreover, for small ΔI the noise of the experimental setup can also distort the statistical distribution p(ΔI), by erroneously rejecting small ΔI as noise. This effect has been calculated for white noise, and it is shown that it is relatively small but not totally negligible as the measured ΔI approaches the detection limit.
Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain
2017-10-01
We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
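A minimal sketch of the multi-time-step idea described above: with independent position noise of variance b on each frame, the measured finite-difference velocity variance behaves as var_true + 2b/Δt², so evaluating it at several time steps and fitting a straight line against 1/Δt² recovers the noise-free velocity variance as the intercept. All numbers here are synthetic, not from the paper's experiment:

```python
import random
import statistics

random.seed(7)
TRUE_U_VAR = 4.0   # variance of the true velocity (synthetic)
NOISE_VAR = 0.01   # variance b of the per-frame position noise
N = 20000

def measured_velocity_variance(dt):
    """Variance of finite-difference velocities when each of the two
    positions carries independent noise of variance NOISE_VAR:
    var_meas = var_true + 2*NOISE_VAR/dt**2."""
    us = [random.gauss(0.0, TRUE_U_VAR ** 0.5)
          + (random.gauss(0.0, NOISE_VAR ** 0.5)
             - random.gauss(0.0, NOISE_VAR ** 0.5)) / dt
          for _ in range(N)]
    return statistics.pvariance(us)

dts = [0.05, 0.1, 0.2]
xs = [1.0 / dt ** 2 for dt in dts]
ys = [measured_velocity_variance(dt) for dt in dts]

# Least-squares line var_meas = intercept + slope * (1/dt^2);
# the intercept estimates the noise-free velocity variance
xm, ym = statistics.mean(xs), statistics.mean(ys)
slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
         / sum((x - xm) ** 2 for x in xs))
intercept = ym - slope * xm
```

The smallest time step is the noisiest, which is exactly why single-Δt PTV statistics overestimate velocity fluctuations near the noise floor.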
Design and Analyses of High Aspect Ratio Nozzles for Distributed Propulsion Acoustic Measurements
Dippold, Vance F., III
2016-01-01
A series of three convergent round-to-rectangular high-aspect ratio nozzles were designed for acoustics measurements. The nozzles have exit area aspect ratios of 8:1, 12:1, and 16:1. With septa inserts, these nozzles will mimic an array of distributed propulsion system nozzles, as found on hybrid wing-body aircraft concepts. Analyses were performed for the three nozzle designs and showed that the flow through the nozzles was free of separated flow and shocks. The exit flow was mostly uniform with the exception of a pair of vortices at each span-wise end of the nozzle.
A conceptual framework for analysing and measuring land-use intensity
DEFF Research Database (Denmark)
Erb, Karl-Heinz; Haberl, Helmut; Jepsen, Martin Rudbeck
2013-01-01
Large knowledge gaps currently exist that limit our ability to understand and characterise dynamics and patterns of land-use intensity: in particular, a comprehensive conceptual framework and a system of measurement are lacking. This situation hampers the development of a sound understanding...... of the mechanisms, determinants, and constraints underlying changes in land-use intensity. On the basis of a review of approaches for studying land-use intensity, we propose a conceptual framework to quantify and analyse land-use intensity. This framework integrates three dimensions: (a) input intensity, (b) output...
The IBAS image analyser and its use in particle size measurement
International Nuclear Information System (INIS)
Snelling, K.W.
1984-10-01
The Kontron image analyser (IBAS) is used at Winfrith primarily for size analysis of aerosol particles. The system incorporates two computers, IBAS 1 for system communication and control, and IBAS 2 containing the main image memories. The first is accessed via a keyboard or digitiser tablet, and output can be displayed on a monitor or in printed form. The contents of the image memories are displayed on a colour monitor. Automatic image analysis is described, with typical applications, including the measurement of monodisperse particles, sodium fire aerosols, reactor crud particles and cadmium-silver aerosol particles. (U.K.)
α-induced reactions on 115In: Cross section measurements and statistical model analysis
Kiss, G. G.; Szücs, T.; Mohr, P.; Török, Zs.; Huszánk, R.; Gyürky, Gy.; Fülöp, Zs.
2018-05-01
Background: α-nucleus optical potentials are basic ingredients of statistical model calculations used in nucleosynthesis simulations. While the nucleon+nucleus optical potential is fairly well known, for the α+nucleus optical potential several different parameter sets exist, and large deviations, sometimes reaching even an order of magnitude, are found between the cross section predictions calculated using different parameter sets. Purpose: A measurement of the radiative α-capture and the α-induced reaction cross sections on the nucleus 115In at low energies allows a stringent test of statistical model predictions. Since experimental data are scarce in this mass region, this measurement can be an important input to test the global applicability of α+nucleus optical model potentials and further ingredients of the statistical model. Methods: The reaction cross sections were measured by means of the activation method. The produced activities were determined by off-line detection of the γ rays and characteristic x rays emitted during the electron capture decay of the produced Sb isotopes. The 115In(α,γ)119Sb and 115In(α,n)118mSb reaction cross sections were measured between Ec.m. = 8.83 and 15.58 MeV, and the 115In(α,n)118gSb reaction was studied between Ec.m. = 11.10 and 15.58 MeV. The theoretical analysis was performed within the statistical model. Results: The simultaneous measurement of the (α,γ) and (α,n) cross sections allowed us to determine a best-fit combination of all parameters for the statistical model. The α+nucleus optical potential is identified as the most important input for the statistical model. The best fit is obtained for the new Atomki-V1 potential, and good reproduction of the experimental data is also achieved for the first version of the Demetriou potentials and the simple McFadden-Satchler potential. The nucleon optical potential, the γ-ray strength function, and the level density parametrization are also
Pion-induced fission of 209Bi and 119Sn: measurements, calculations, analyses and comparison
International Nuclear Information System (INIS)
Rana, M.A.; Sher, G.; Manzoor, S.; Shehzad, M.I.
2011-01-01
Cross-sections for the π−-induced fission of 209Bi and 119Sn have been measured using the most sensitive CR-39 solid-state nuclear track detector. In the experiments, target–detector stacks were exposed to negative pions of energy 500, 672, 1068, and 1665 MeV at the Brookhaven National Laboratory, USA. An important aspect of the present paper is the comparison of the pion-induced fission fragment spectra of the above-mentioned nuclei with the spontaneous fission fragment spectra of 252Cf. This comparison is made in terms of fission fragment track lengths in the CR-39 detectors. Measurement results are compared with calculations by the Monte Carlo and statistical weight function methods using the computer code CEM95. Agreement between measurements and calculations is fairly good for 209Bi target nuclei, whereas it is poor for 119Sn. The possibility of reliable calculations with the code CEM95, comparable with measurements of pion-induced fission in intermediate and heavy nuclei, is explored by employing the various systematics available in the code. The energy dependence of pion-induced fission in 119Sn and 209Bi is analysed using a newly defined parameter, the geometric-size-normalized fission cross-section (χfg). It is found that the collective nuclear excitations that may lead to fission become more probable for both 209Bi and 119Sn nuclei as the energy of the negative pions increases from 500 to 1665 MeV. (author)
Accuracy of a new partial coherence interferometry analyser for biometric measurements.
Holzer, M P; Mamusa, M; Auffarth, G U
2009-06-01
Precise biometry is an essential preoperative measurement for refractive surgery as well as cataract surgery. A new device based on partial coherence interferometry technology was tested and evaluated for measurement accuracy. In a prospective study, 200 eyes of 100 healthy phakic volunteers were examined with a functional prototype of the new ALLEGRO BioGraph (Wavelight AG)/LENSTAR LS 900 (Haag Streit AG) biometer and with the IOLMaster V.5 (Carl Zeiss Meditec AG). As recommended by the manufacturers, repeated measurements were performed with both devices and the results compared using Spearman correlation calculations (WinSTAT). Spearman correlation showed high correlations for axial length and keratometry measurements between the two devices tested. Anterior chamber depth, however, had a lower correlation between the two biometry devices. In addition, the mean values of the anterior chamber depth differed (IOLMaster 3.48 (SD 0.42) mm versus BioGraph/LENSTAR 3.64 (SD 0.26) mm); however, this difference was not statistically significant (p>0.05, t test). The new biometer provided results that correlated very well with those of the IOLMaster. The ALLEGRO BioGraph/LENSTAR LS 900 is a precise device containing additional features that will be helpful tools for any cataract or refractive surgeon.
Statistical x-ray computed tomography imaging from photon-starved measurements
Chang, Zhiqian; Zhang, Ruoqiao; Thibault, Jean-Baptiste; Sauer, Ken; Bouman, Charles
2013-03-01
Dose reduction in clinical X-ray computed tomography (CT) causes a low signal-to-noise ratio (SNR) in photon-sparse situations. Statistical iterative reconstruction algorithms have the advantage of retaining image quality while reducing input dosage, but they meet their limits of practicality when significant portions of the sinogram approach photon starvation. Corruption by electronic noise leads to measured photon counts taking on negative values, posing a problem for the log() operation in the preprocessing of data. In this paper, we propose two categories of projection correction methods: an adaptive denoising filter and Bayesian inference. The denoising filter is easy to implement and preserves local statistics, but it introduces correlation between channels and may affect image resolution. Bayesian inference is a point-wise estimation based on measurements and prior information. Both approaches help improve diagnostic image quality at dramatically reduced dosage.
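One simple positivity-preserving correction in the spirit of the point-wise Bayesian inference mentioned above (not the authors' exact estimator) treats each pre-log measurement as a Gaussian observation of a nonnegative signal and replaces it with the truncated-normal posterior mean, which is always positive, so the subsequent log() stays defined:

```python
import math

def positive_posterior_mean(y, sigma):
    """Posterior mean of a nonnegative true signal lam given a
    measurement y = lam + e, with e ~ N(0, sigma^2) electronic noise
    and a flat prior on lam >= 0 (mean of a truncated normal):
    E[lam | y] = y + sigma * phi(y/sigma) / Phi(y/sigma)."""
    z = y / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return y + sigma * phi / Phi

# Photon-starved channels can report zero or negative counts;
# well-exposed channels are left essentially unchanged
counts = [-3.0, 0.0, 5.0, 120.0]
corrected = [positive_posterior_mean(y, sigma=4.0) for y in counts]
```

For large counts the correction vanishes, while near-zero and negative readings are pulled to small positive values instead of being clipped, which avoids the bias of simple thresholding.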
Reliability of corneal dynamic scheimpflug analyser measurements in virgin and post-PRK eyes.
Chen, Xiangjun; Stojanovic, Aleksandar; Hua, Yanjun; Eidet, Jon Roger; Hu, Di; Wang, Jingting; Utheim, Tor Paaske
2014-01-01
To determine the measurement reliability of CorVis ST, a dynamic Scheimpflug analyser, in virgin and post-photorefractive keratectomy (PRK) eyes and compare the results between these two groups. Forty virgin eyes and 42 post-PRK eyes underwent CorVis ST measurements performed by two technicians. Repeatability was evaluated by comparing three consecutive measurements by technician A. Reproducibility was determined by comparing the first measurement by technician A with one performed by technician B. Intraobserver and interobserver intraclass correlation coefficients (ICCs) were calculated. Univariate analysis of covariance (ANCOVA) was used to compare measured parameters between virgin and post-PRK eyes. The intraocular pressure (IOP), central corneal thickness (CCT) and 1st applanation time demonstrated good intraobserver repeatability and interobserver reproducibility (ICC ≧ 0.90) in virgin and post-PRK eyes. The deformation amplitude showed a good or close to good repeatability and reproducibility in both groups (ICC ≧ 0.88). The CCT correlated positively with 1st applanation time (r = 0.437 and 0.483, respectively, p < 0.05) and negatively with deformation amplitude (r = -0.384 and -0.375, respectively, p < 0.05) in both groups. Compared to post-PRK eyes, virgin eyes showed longer 1st applanation time (7.29 ± 0.21 vs. 6.96 ± 0.17 ms, p < 0.05) and lower deformation amplitude (1.06 ± 0.07 vs. 1.17 ± 0.08 mm, p < 0.05). CorVis ST demonstrated reliable measurements for CCT, IOP, and 1st applanation time, as well as relatively reliable measurement for deformation amplitude, in both virgin and post-PRK eyes. There were differences in 1st applanation time and deformation amplitude between virgin and post-PRK eyes, which may reflect corneal biomechanical changes occurring after the surgery in the latter.
How to statistically analyze nano exposure measurement results: using an ARIMA time series approach
International Nuclear Information System (INIS)
Klein Entink, Rinke H.; Fransman, Wouter; Brouwer, Derk H.
2011-01-01
Measurement strategies for exposure to nano-sized particles differ from traditional integrated sampling methods for exposure assessment by the use of real-time instruments. The resulting measurement series is a time series, where typically the sequential measurements are not independent from each other but show a pattern of autocorrelation. This article addresses the statistical difficulties when analyzing real-time measurements for exposure assessment to manufactured nano objects. To account for autocorrelation patterns, Autoregressive Integrated Moving Average (ARIMA) models are proposed. A simulation study shows the pitfalls of using a standard t-test and the application of ARIMA models is illustrated with three real-data examples. Some practical suggestions for the data analysis of real-time exposure measurements conclude this article.
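The pitfall of applying an independence-based t-test to autocorrelated real-time measurements can be demonstrated with a simulated AR(1) series, a simplified stand-in for the fitted ARIMA models (all parameters here are invented): the naive standard error of the mean, which assumes independent samples, badly understates the true sampling variability of the mean.

```python
import random
import statistics

random.seed(1)

def ar1_series(n, rho, sigma=1.0):
    """AR(1) process x_t = rho*x_{t-1} + e_t, a simple model of the
    autocorrelation seen in real-time exposure measurements."""
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + random.gauss(0.0, sigma)
        out.append(x)
    return out

def naive_se(xs):
    """Standard error of the mean assuming independent samples."""
    return statistics.stdev(xs) / len(xs) ** 0.5

rho, n, reps = 0.8, 200, 400
# Empirical spread of the mean over many replicate series
means = [statistics.mean(ar1_series(n, rho)) for _ in range(reps)]
true_se = statistics.stdev(means)
# What a plain t-test would assume from a single series
apparent_se = naive_se(ar1_series(n, rho))
```

With rho = 0.8 the true standard error is roughly three times the naive one, so a standard t-test would flag spurious "significant" concentration differences, which motivates the ARIMA-based analysis.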
Developing standard exercises and statistics to measure the impact of cyber defenses
Berninger, Matthew L.
2014-01-01
As companies seek protection from cyber attacks, justifying proper levels of investment in cyber security is essential. Like all investments, cyber defense costs must be weighed against their expected benefits. While some cyber investment models exist that can relate costs and benefits, these models are largely untested with experimental data. This research develops an experimental framework and statistics for testing and measuring the efficacy of cyber mitigation methods,...
Directory of Open Access Journals (Sweden)
Hamid Reza Marateb
2014-01-01
Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two clustering examples on ordinal variables, a more challenging type of variable for analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is granted. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables.
Measuring streetscape complexity based on the statistics of local contrast and spatial frequency.
Directory of Open Access Journals (Sweden)
André Cavalcante
Streetscapes are basic urban elements which play a major role in the livability of a city. The visual complexity of streetscapes is known to influence how people behave in such built spaces. However, how and which characteristics of a visual scene influence our perception of complexity have yet to be fully understood. This study proposes a method to evaluate the complexity perceived in streetscapes based on the statistics of local contrast and spatial frequency. Here, 74 streetscape images from four cities, including daytime and nighttime scenes, were ranked for complexity by 40 participants. Image processing was then used to locally segment contrast and spatial frequency in the streetscapes. The statistics of these characteristics were extracted and later combined to form a single objective measure. The direct use of statistics revealed structural or morphological patterns in streetscapes related to the perception of complexity. Furthermore, in comparison to conventional measures of visual complexity, the proposed objective measure exhibits a higher correlation with the opinion of the participants. Also, the performance of this method is more robust regarding different time scenarios.
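The local-contrast half of such a measure can be sketched as patch-wise RMS contrast statistics over a grayscale image; the spatial-frequency component and the paper's exact combination rule are omitted, and the tiny synthetic "images" below are purely illustrative:

```python
import statistics

def local_contrast_map(img, patch=2):
    """RMS contrast (standard deviation of intensity) computed on
    non-overlapping square patches of a grayscale image given as
    nested lists of pixel values."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            vals = [img[r + i][c + j]
                    for i in range(patch) for j in range(patch)]
            out.append(statistics.pstdev(vals))
    return out

def complexity_score(img):
    """Single objective measure: mean of the local-contrast statistics
    (a stand-in for the combined contrast/frequency measure)."""
    return statistics.mean(local_contrast_map(img))

flat = [[100] * 8 for _ in range(8)]                     # blank wall
busy = [[255 if (r + c) % 2 else 0 for c in range(8)]    # fine texture
        for r in range(8)]
```

A uniform facade scores zero while a finely textured one scores high, matching the intuition that cluttered streetscapes are perceived as more complex.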
A portable analyser for the measurement of ammonium in marine waters.
Amornthammarong, Natchanon; Zhang, Jia-Zhong; Ortner, Peter B; Stamates, Jack; Shoemaker, Michael; Kindel, Michael W
2013-03-01
A portable ammonium analyser was developed and used to measure in situ ammonium in the marine environment. The analyser incorporates an improved LED photodiode-based fluorescence detector (LPFD). This system is more sensitive and considerably smaller than previous systems and incorporates a pre-filtering subsystem enabling measurements in turbid, sediment-laden waters. Over the typical range for ammonium in marine waters (0–10 μM), the response is linear (r(2) = 0.9930) with a limit of detection (S/N ratio > 3) of 10 nM. The working range for marine waters is 0.05–10 μM. Repeatability is 0.3% (n = 10) at an ammonium level of 2 μM. Results from automated operation in 15 min cycles over 16 days had good overall precision (RSD = 3%, n = 660). The system was field tested at three shallow South Florida sites. Diurnal cycles and possibly a tidal influence were expressed in the concentration variability observed.
International Nuclear Information System (INIS)
Vincent, C.H.
1982-01-01
Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low-radioactivity samples for background. (orig.)
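The anomaly this abstract addresses (negative net-count estimates after background subtraction) can be illustrated with a minimal grid-based Bayesian sketch. This is not the estimator derived in the paper; it assumes a flat prior on the positive source rate and treats the measured background rate as exactly known:

```python
import math

def posterior_mean_source(gross, background_rate, mu_max=50.0, n=2000):
    """Posterior mean of a positive source rate s, given `gross` observed
    counts that are Poisson with mean s + background_rate. Flat prior on
    s >= 0 evaluated on a grid; the background rate is taken as known."""
    grid = [mu_max * (i + 0.5) / n for i in range(n)]
    log_like = [gross * math.log(s + background_rate) - (s + background_rate)
                for s in grid]
    peak = max(log_like)
    weights = [math.exp(v - peak) for v in log_like]
    return sum(s * w for s, w in zip(grid, weights)) / sum(weights)

# Even when the gross count falls below the expected background,
# the posterior mean stays positive instead of going negative.
low = posterior_mean_source(gross=3, background_rate=5.0)
high = posterior_mean_source(gross=20, background_rate=5.0)
```

A conventional net estimate for the first case would be 3 − 5 = −2; the posterior mean instead remains a small positive number, which is the qualitative behaviour the abstract describes.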
Statistical MOSFET Parameter Extraction with Parameter Selection for Minimal Point Measurement
Directory of Open Access Journals (Sweden)
Marga Alisjahbana
2013-11-01
Full Text Available A method is presented to statistically extract MOSFET model parameters from a minimal number of transistor I(V) characteristic curve measurements, taken during fabrication process monitoring. It includes a sensitivity analysis of the model, test/measurement point selection, and a parameter extraction experiment on the process data. The actual extraction is based on a linear error model, the sensitivity of the MOSFET model with respect to the parameters, and Newton-Raphson iterations. Simulated results showed good accuracy of parameter extraction and I(V) curve fit for parameter deviations of up to 20% from nominal values, including for a process shift of 10% from nominal.
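A minimal sketch of the sensitivity-based Newton (Gauss-Newton) iteration this abstract describes, using a textbook square-law saturation model rather than the paper's full MOSFET model; the parameter names, starting values and measurement points are illustrative:

```python
def id_sat(k, vth, vgs):
    # Square-law saturation current; assumes vgs > vth in the fit region.
    return 0.5 * k * (vgs - vth) ** 2

def extract(points, k=1.5e-4, vth=0.6, iters=30):
    """Gauss-Newton extraction of (k, vth) from (vgs, id) measurement
    points, using the model's analytic sensitivities as the Jacobian."""
    for _ in range(iters):
        resid = [i_meas - id_sat(k, vth, v) for v, i_meas in points]
        jac = [(0.5 * (v - vth) ** 2, -k * (v - vth)) for v, _ in points]
        # Normal equations (J^T J) dx = J^T r, solved in closed form (2x2).
        a = sum(j0 * j0 for j0, _ in jac)
        b = sum(j0 * j1 for j0, j1 in jac)
        c = sum(j1 * j1 for _, j1 in jac)
        g0 = sum(j0 * r for (j0, _), r in zip(jac, resid))
        g1 = sum(j1 * r for (_, j1), r in zip(jac, resid))
        det = a * c - b * b
        if abs(det) < 1e-30:
            break
        k += (c * g0 - b * g1) / det
        vth += (a * g1 - b * g0) / det
    return k, vth

# Three-point "measurement" generated from k = 2e-4, vth = 0.7:
pts = [(v, id_sat(2e-4, 0.7, v)) for v in (1.0, 1.5, 2.0)]
k_hat, vth_hat = extract(pts)
```

With exact data and a nearby starting guess the iteration recovers the generating parameters; real extraction additionally weights the points by the linear error model.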
Measurement time and statistics for a noise thermometer with a synthetic-noise reference
White, D. R.; Benz, S. P.; Labenski, J. R.; Nam, S. W.; Qu, J. F.; Rogalla, H.; Tew, W. L.
2008-08-01
This paper describes methods for reducing the statistical uncertainty in measurements made by noise thermometers using digital cross-correlators and, in particular, for thermometers using pseudo-random noise for the reference signal. First, a discrete-frequency expression for the correlation bandwidth for conventional noise thermometers is derived. It is shown how an alternative frequency-domain computation can be used to eliminate the spectral response of the correlator and increase the correlation bandwidth. The corresponding expressions for the uncertainty in the measurement of pseudo-random noise in the presence of uncorrelated thermal noise are then derived. The measurement uncertainty in this case is less than that for true thermal-noise measurements. For pseudo-random sources generating a frequency comb, an additional small reduction in uncertainty is possible, but at the cost of increasing the thermometer's sensitivity to non-linearity errors. A procedure is described for allocating integration times to further reduce the total uncertainty in temperature measurements. Finally, an important systematic error arising from the calculation of ratios of statistical variables is described.
A Statistical Study of Eiscat Electron and Ion Temperature Measurements In The E-region
Hussey, G.; Haldoupis, C.; Schlegel, K.; Bösinger, T.
Motivated by the large EISCAT data base, which covers over 15 years of common programme operation, and previous statistical work with EISCAT data (e.g., C. Haldoupis, K. Schlegel, and G. Hussey, Auroral E-region electron density gradients measured with EISCAT, Ann. Geophysicae, 18, 1172-1181, 2000), a detailed statistical analysis of electron and ion EISCAT temperature measurements has been undertaken. This study was specifically concerned with the statistical dependence of heating events on other ambient parameters such as the electric field and electron density. The results showed previously reported dependences, such as the electron temperature being directly correlated with the ambient electric field and inversely related to the electron density. However, these correlations were found to be also dependent upon altitude. There was also evidence of the so-called "Schlegel effect" (K. Schlegel, Reduced effective recombination coefficient in the disturbed polar E-region, J. Atmos. Terr. Phys., 44, 183-185, 1982); that is, the heated electron gas leads to increases in electron density through a reduction in the recombination rate. This paper will present the statistical heating results and attempt to offer physical explanations and interpretations of the findings.
International Nuclear Information System (INIS)
Lee, Tae-Hong; Kim, Seong-Jang; Kim, In-Ju; Kim, Yong-Ki; Kim, Dong-Soo; Park, Kyung-Pil
2007-01-01
Statistical parametric mapping (SPM) and statistical probabilistic anatomical mapping (SPAM) were applied to basal/acetazolamide Tc-99m ECD brain perfusion SPECT images in patients with middle cerebral artery (MCA) stenosis to assess the efficacy of endovascular stenting of the MCA. Enrolled in the study were 11 patients (8 men and 3 women, mean age 54.2 ± 6.2 years) who had undergone endovascular stent placement for MCA stenosis. Using SPM and SPAM analyses, we compared the number of significant voxels and cerebral counts in basal and acetazolamide SPECT images before and after stenting, and assessed the perfusion changes and cerebral vascular reserve index (CVRI). The numbers of hypoperfusion voxels in SPECT images were decreased from 10,083 ± 8,326 to 4,531 ± 5,091 in basal images (P = 0.0317) and from 13,398 ± 14,222 to 7,699 ± 10,199 in acetazolamide images (P = 0.0142) after MCA stenting. On SPAM analysis, the increases in cerebral counts were significant in acetazolamide images (90.9 ± 2.2 to 93.5 ± 2.3, P = 0.0098) but not in basal images (91 ± 2.7 to 92 ± 2.6, P = 0.1602). The CVRI also showed a statistically significant increase from before stenting (median 0.32; 95% CI -2.19 to 2.37) to after stenting (median 1.59; 95% CI -0.85 to 4.16; P = 0.0068). This study revealed the usefulness of voxel-based analysis of basal/acetazolamide brain perfusion SPECT after MCA stent placement, and showed that SPM and SPAM analyses of basal/acetazolamide Tc-99m brain SPECT could be used to evaluate the short-term hemodynamic efficacy of successful MCA stent placement. (orig.)
Statistical analysis concerning broad band measurements of radio frequency electromagnetic fields
International Nuclear Information System (INIS)
Lubritto, C.; D'Onofrio, A.; Palmieri, A.; Sabbarese, C.; Terrasi, F.; Petraglia, A.; Pinto, G.; Romano, G.
2002-01-01
Electromagnetic fields (EMF) currently represent one of the most common and fastest growing environmental factors influencing human life. Public concern about so-called electromagnetic pollution is continually increasing because of the booming use of mobile phones over the past decade in business, commerce and social life. Moreover, the upcoming third-generation mobile systems will increase the use of all communication technologies, including fax, e-mail and Internet access. This extensive use has been accompanied by public debate about possible adverse effects on human health. In particular, there are concerns related to the emission of radiofrequency radiation from cellular phones and from base stations. Due to this very fast and wide development of cellular telephony, more and more data are becoming available from monitoring, measuring and predicting electromagnetic fields, as requested by the laws governing the authorization to install antennas and apparatus. The size of the resulting database is consistent enough that statistics can be carried out with a high degree of confidence: in particular, the statistical analysis in this paper focuses on data collected during about 1000 check measurements of electromagnetic field values performed by a private company at 167 different sites located in almost all Italian regions. One of the aims was to find the most critical factors for the measurements, besides the field conformation: position in space, logistic conditions, technology employed, distance from the centre of the antenna, etc. The first step of the study deals with the building of a database filled with information relevant to the measurements. In a second step, by means of appropriate statistical procedures, the electromagnetic field is evaluated and the different measurement procedures are critically reviewed.
A Geometrical-Statistical Approach to Outlier Removal for TDOA Measurements
Compagnoni, Marco; Pini, Alessia; Canclini, Antonio; Bestagini, Paolo; Antonacci, Fabio; Tubaro, Stefano; Sarti, Augusto
2017-08-01
The curse of outlier measurements in estimation problems is a well-known issue in a variety of fields. Therefore, outlier removal procedures, which enable the identification of spurious measurements within a set, have been developed for many different scenarios and applications. In this paper, we propose a statistically motivated outlier removal algorithm for time differences of arrival (TDOAs), or equivalently range differences (RDs), acquired at sensor arrays. The method exploits the TDOA-space formalism and works by only knowing the relative sensor positions. As the proposed method is completely independent of the application for which the measurements are used, it can be reliably used to identify outliers within a set of TDOA/RD measurements in different fields (e.g. acoustic source localization, sensor synchronization, radar, remote sensing, etc.). The proposed outlier removal algorithm is validated by means of synthetic simulations and real experiments.
Statistical analysis with measurement error or misclassification strategy, method and application
Yi, Grace Y
2017-01-01
This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...
Simulation of the Impact of New Ocean Surface Wind Measurements on H*Wind Analyses
Miller, Timothy; Atlas, Robert; Black, Peter; Chen, Shuyi; Hood, Robbie; Johnson, James; Jones, Linwood; Ruf, Chris; Uhlhorn, Eric
2008-01-01
The H*Wind analysis, a product of the Hurricane Research Division of NOAA's Atlantic Oceanographic and Meteorological Laboratory, brings together wind measurements from a variety of observation platforms into an objective analysis of the distribution of surface wind speeds in a tropical cyclone. This product is designed to improve understanding of the extent and strength of the wind field, and to improve the assessment of hurricane intensity. See http://www.aoml.noaa.gov/hrd/data sub/wind.html. The Hurricane Imaging Radiometer (HIRAD) is a new passive microwave remote sensor for hurricane observations that is currently under development by NASA Marshall Space Flight Center, NOAA Hurricane Research Division, the University of Central Florida and the University of Michigan. HIRAD is being designed to enhance the current real-time airborne ocean surface winds observation capabilities of NOAA and USAF Weather Squadron hurricane hunter aircraft using the operational airborne Stepped Frequency Microwave Radiometer (SFMR). Unlike SFMR, which measures wind speed and rain rate along the ground track directly beneath the aircraft, HIRAD will provide images of the surface wind and rain field over a wide swath (approximately 3 x the aircraft altitude, or approximately 2 km from space). The instrument is described in a separate paper presented at this conference. The present paper describes a set of Observing System Simulation Experiments (OSSEs) in which measurements from the new instrument as well as those from existing instruments (air, surface, and space-based) are simulated from the output of a numerical model from the University of Miami, and those results are used to construct H*Wind analyses. Evaluations will be presented on the relative impact of HIRAD and other instruments on H*Wind analyses, including the use of HIRAD from 2 aircraft altitudes and from a space-based platform.
Spriestersbach, Albert; Röhrig, Bernd; du Prel, Jean-Baptist; Gerhold-Ay, Aslihan; Blettner, Maria
2009-09-01
Descriptive statistics are an essential part of biometric analysis and a prerequisite for the understanding of further statistical evaluations, including the drawing of inferences. When data are well presented, it is usually obvious whether the author has collected and evaluated them correctly and in keeping with accepted practice in the field. Statistical variables in medicine may be of either the metric (continuous, quantitative) or categorical (nominal, ordinal) type. Easily understandable examples are given. Basic techniques for the statistical description of collected data are presented and illustrated with examples. The goal of a scientific study must always be clearly defined. The definition of the target value or clinical endpoint determines the level of measurement of the variables in question. Nearly all variables, whatever their level of measurement, can be usefully presented graphically and numerically. The level of measurement determines what types of diagrams and statistical values are appropriate. There are also different ways of presenting combinations of two independent variables graphically and numerically. The description of collected data is indispensable. If the data are of good quality, valid and important conclusions can already be drawn when they are properly described. Furthermore, data description provides a basis for inferential statistics.
Concepts for measuring maintenance performance and methods for analysing competing failure modes
DEFF Research Database (Denmark)
Cooke, R.; Paulsen, J.L.
1997-01-01
competing failure modes. This article examines ways to assess maintenance performance without introducing statistical assumptions, then introduces a plausible statistical model for describing the interaction of preventive and corrective maintenance, and finally illustrates these with examples from...
Directory of Open Access Journals (Sweden)
Ian T. Kracalik
2012-11-01
Full Text Available We compared a local clustering and a cluster morphology statistic using anthrax outbreaks in large (cattle) and small (sheep and goats) domestic ruminants across Kazakhstan. The Getis-Ord (Gi*) statistic and a multidirectional optimal ecotope algorithm (AMOEBA) were compared using 1st, 2nd and 3rd order Rook contiguity matrices. Multivariate statistical tests were used to evaluate the environmental signatures between clusters and non-clusters from the AMOEBA and Gi* tests. A logistic regression was used to define a risk surface for anthrax outbreaks and to compare agreement between clustering methodologies. Tests revealed differences in the spatial distribution of clusters as well as in the total number of clusters for AMOEBA in large ruminants (n = 149) and in small ruminants (n = 9). In contrast, Gi* revealed fewer large ruminant clusters (n = 122) and more small ruminant clusters (n = 61). Significant environmental differences were found between groups using the Kruskal-Wallis and Mann-Whitney U tests. Logistic regression was used to model the presence/absence of anthrax outbreaks and define a risk surface for large ruminants to compare with the cluster analyses. The model predicted 32.2% of the landscape as high risk. Approximately 75% of AMOEBA clusters corresponded to predicted high risk, compared with ~64% of Gi* clusters. In general, AMOEBA predicted more irregularly shaped clusters of outbreaks in both livestock groups, while Gi* tended to predict larger, circular clusters. Here we provide an evaluation of both tests and a discussion of the use of each to detect environmental conditions associated with anthrax outbreak clusters in domestic livestock. These findings illustrate important differences in spatial statistical methods for defining local clusters and highlight the importance of selecting appropriate levels of data aggregation.
International Nuclear Information System (INIS)
Kennedy, G.
1981-01-01
The question of whether to use Poisson or Ruark-Devol statistics for radioactivity measurements in which the counting time is long compared to the half-life is discussed. Experimental data are presented which are well described by Poisson statistics. The applications of Ruark-Devol statistics are found to be very limited, in disagreement with earlier publications. (author)
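The Poisson result this abstract supports can be checked with a short simulation: if the number of atoms in a sample is itself Poisson distributed, the total number of detected decays over a counting time much longer than the half-life remains Poisson (mean ≈ variance), by the thinning property. The detection efficiency, sample size and seed below are arbitrary:

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth's multiplication method; adequate for moderate lam.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(7)
MEAN_ATOMS, EFFICIENCY, TRIALS = 100.0, 0.3, 20000
counts = []
for _ in range(TRIALS):
    atoms = sample_poisson(rng, MEAN_ATOMS)   # atoms in this sample
    # Counting time >> half-life: every atom decays; each decay is
    # detected independently with probability EFFICIENCY.
    counts.append(sum(1 for _ in range(atoms) if rng.random() < EFFICIENCY))
mean = sum(counts) / TRIALS
var = sum((c - mean) ** 2 for c in counts) / TRIALS
```

The sample mean and variance both come out near 30, the Poisson signature; a fixed initial number of atoms would instead give binomial (sub-Poisson) variance.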
Mihajilov-Krstev, Tatjana M; Denić, Marija S; Zlatković, Bojan K; Stankov-Jovanović, Vesna P; Mitić, Violeta D; Stojanović, Gordana S; Radulović, Niko S
2015-04-01
In Serbia, delicatessen fruit alcoholic drinks are produced from autochthonous fruit-bearing species such as cornelian cherry, blackberry, elderberry, wild strawberry, European wild apple, European blueberry and blackthorn fruits. There are no chemical data on many of these and herein we analysed volatile minor constituents of these rare fruit distillates. Our second goal was to determine possible chemical markers of these distillates through a statistical/multivariate treatment of the herein obtained and previously reported data. Detailed chemical analyses revealed a complex volatile profile of all studied fruit distillates with 371 identified compounds. A number of constituents were recognised as marker compounds for a particular distillate. Moreover, 33 of them represent newly detected flavour constituents in alcoholic beverages or, in general, in foodstuffs. With the aid of multivariate analyses, these volatile profiles were successfully exploited to infer the origin of raw materials used in the production of these spirits. It was also shown that all fruit distillates possessed weak antimicrobial properties. It seems that the aroma of these highly esteemed wild-fruit spirits depends on the subtle balance of various minor volatile compounds, whereby some of them are specific to a certain type of fruit distillate and enable their mutual distinction. © 2014 Society of Chemical Industry.
International Nuclear Information System (INIS)
Boning, Duane S.; Chung, James E.
1998-01-01
Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both yield and performance of advanced circuits. Of particular concern are spatial or pattern-dependencies within individual chips; such systematic variation within the chip can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments including design of test structures and test masks to gather electrical or optical data, techniques for statistical decomposition and analysis of the data, and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules are guidelines for the use of 'dummy fill' or 'metal fill' to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the improvements to uniformity possible versus the effect of increased capacitance due to additional metal
Statistics and error considerations at the application of SSND T-technique in radon measurement
International Nuclear Information System (INIS)
Jonsson, G.
1993-01-01
Plastic films are used for the detection of alpha particles from disintegrating radon and radon daughter nuclei. After etching, there are tracks (cones) or holes in the film as a result of the exposure. The step from a counted number of tracks/holes per surface unit of the film to a reliable value of the radon and radon daughter level is surrounded by statistical considerations of various kinds. Some of them are the number of counted tracks, the length of the time of exposure, the season of the exposure, the etching technique and the method of counting the tracks or holes. The number of background tracks on an unexposed film increases the error of the measured radon level. Some of the mentioned effects of a statistical nature will be discussed in the report. (Author)
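The main counting-statistics point, that both the counted tracks and the background tracks of an unexposed film contribute to the error of the net track density, can be summarized in a few lines; the numbers in the example are illustrative, not calibration values from the report:

```python
import math

def net_track_density(tracks, area_cm2, bg_density, bg_sd):
    """Net track density (tracks/cm^2) and its standard deviation.
    The counted tracks carry a Poisson error sd = sqrt(N); the background
    track density of an unexposed film carries its own uncertainty, and
    the two combine in quadrature."""
    gross = tracks / area_cm2
    gross_sd = math.sqrt(tracks) / area_cm2
    net = gross - bg_density
    return net, math.sqrt(gross_sd ** 2 + bg_sd ** 2)

net, sd = net_track_density(tracks=400, area_cm2=1.0,
                            bg_density=50.0, bg_sd=10.0)
```

With 400 counted tracks the relative Poisson error alone is 1/√400 = 5%; the background term raises the net-density uncertainty from 20 to about 22.4 tracks/cm².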
Assessment of the GPC Control Quality Using Non–Gaussian Statistical Measures
Directory of Open Access Journals (Sweden)
Domański Paweł D.
2017-06-01
Full Text Available This paper presents an alternative approach to the task of control performance assessment. Various statistical measures based on Gaussian and non-Gaussian distribution functions are evaluated. The analysis starts with the review of control error histograms followed by their statistical analysis using probability distribution functions. Simulation results obtained for a control system with the generalized predictive controller algorithm are considered. The proposed approach using Cauchy and Lévy α-stable distributions shows robustness against disturbances and enables effective control loop quality evaluation. Tests of the predictive algorithm prove its ability to detect the impact of the main controller parameters, such as the model gain, the dynamics or the prediction horizon.
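The practical motivation for the non-Gaussian measures in this abstract can be seen in a small sketch: for heavy-tailed (Cauchy-like) control errors the ordinary standard deviation is dominated by outliers, while a quantile-based scale stays stable. The error generator, sample size and seed are arbitrary:

```python
import math
import random
import statistics

def scales(errors):
    """Gaussian scale (standard deviation) vs a robust half-interquartile
    scale for the same control-error sample."""
    s = sorted(errors)
    n = len(s)
    half_iqr = (s[(3 * n) // 4] - s[n // 4]) / 2
    return statistics.pstdev(errors), half_iqr

rng = random.Random(3)
# Standard Cauchy samples via the inverse CDF: tan(pi * (u - 1/2)).
errors = [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(10001)]
std, robust = scales(errors)
```

For a standard Cauchy distribution the quartiles sit at ±1, so the half-IQR estimate comes out close to 1, while the standard deviation is essentially meaningless for such a sample (its expectation diverges), which is why histogram-based, non-Gaussian measures are needed.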
Time-of-Flight Measurements as a Possible Method to Observe Anyonic Statistics
Umucalılar, R. O.; Macaluso, E.; Comparin, T.; Carusotto, I.
2018-06-01
We propose a standard time-of-flight experiment as a method for observing the anyonic statistics of quasiholes in a fractional quantum Hall state of ultracold atoms. The quasihole states can be stably prepared by pinning the quasiholes with localized potentials and a measurement of the mean square radius of the freely expanding cloud, which is related to the average total angular momentum of the initial state, offers direct signatures of the statistical phase. Our proposed method is validated by Monte Carlo calculations for ν = 1/2 and 1/3 fractional quantum Hall liquids containing a realistic number of particles. Extensions to quantum Hall liquids of light and to non-Abelian anyons are briefly discussed.
International Nuclear Information System (INIS)
Lucisano, Marilia Pacifico; Nelson-Filho, Paulo; Silva, Raquel Assed Bezerra da; Silva, Lea Assed Bezerra da; Battaglino, Ricardo; Watanabe, Plauto Christopher Aranha
2013-01-01
Precise techniques for the measurement of maxillary bone mineral density (BMD) are useful for the early diagnosis of systemic diseases. The aim of this study was to compare in vivo the efficacy of dual energy x-ray absorptiometry (DXA) and radiographic densitometry for the measurement of BMD after systemic administration of sodium alendronate. Wistar rats were randomly allocated to a control group (n = 5), which received distilled water, and a sodium alendronate group (n = 8), which received two doses of chemically pure sodium alendronate (1 mg/kg) per week. After 8 weeks, the animals were euthanized, the tibias were removed, and the BMD of the proximal tibial metaphysis was analyzed radiographically and by DXA. The data were subjected to statistical analysis by the Kruskal-Wallis test at a significance level of 5%. Both of the techniques revealed that the alendronate-treated group had a significantly higher BMD (p < 0.05) than the control group after 8 weeks of treatment. Comparing the groups with and without alendronate therapy revealed increases of 14.9% and 29.6% in BMD, as detected radiographically and by DXA, respectively. In conclusion, both of the methods were able to detect an increase in BMD of the proximal tibial metaphysis after alendronate therapy. (author)
Energy Technology Data Exchange (ETDEWEB)
Lucisano, Marilia Pacifico; Nelson-Filho, Paulo; Silva, Raquel Assed Bezerra da; Silva, Lea Assed Bezerra da, E-mail: nelson@forp.usp.br [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Fac. de Odontologia. Dept. de Clinica Pediatrica, Preventiva e Odontologia Comunitaria; Morse, Leslie [Department of Physical Medicine and Rehabilitation, School of Medicine, Harvard Univ., Boston, MA (United States); Battaglino, Ricardo [Department of Skeletal Biology, Forsyth Institute, Cambridge, MA (United States); Watanabe, Plauto Christopher Aranha [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Fac. de Odontologia. Dept. de Morfologia, Estomacologia e Fisiologia
2013-05-15
A precise Higgs mass measurement at the ILC and test beam data analyses with CALICE
International Nuclear Information System (INIS)
Ruan, Manqi
2008-01-01
Utilizing Monte Carlo tools and test-beam data, some basic detector performance properties are studied for the International Linear Collider (ILC). The contributions of this thesis are mainly twofold: first, a study of the Higgs mass and cross section measurements at the ILC (with full simulation of the e⁺e⁻ → HZ → Hμμ channel and backgrounds); and second, an analysis of test-beam data of the Calorimeter for Linear Collider Experiment (CALICE). For a most general type of Higgs particle with a mass of 120 GeV, setting the center-of-mass energy to 230 GeV and with an integrated luminosity of 500 fb⁻¹, a precision of 38.4 MeV is obtained in a model-independent analysis of the Higgs boson mass measurement, while the cross section could be measured to 5%; if we make some assumptions about the Higgs boson's decay, for example a Standard Model Higgs boson with a dominant invisible decay mode, the measurement result can be improved by 25% (achieving a mass measurement precision of 29 MeV and a cross section measurement precision of 4%). For the CALICE test-beam data analysis, our work is mainly focused upon two aspects: data quality checks and the track-free ECAL angular measurement. Data quality checks aim to detect strange signals or unexpected phenomena in the test-beam data so that one knows quickly how good the overall data-taking quality is. They also serve to classify all the data and give useful information for the later offline data analyses. The track-free ECAL angular resolution algorithm is designed to precisely measure the direction of a photon, a very important component in determining the direction of the neutral components of jets. We found that the angular resolution can be well fitted as a function of the square root of the beam energy (in a similar way as for the energy resolution), with a precision of approximately 80 mrad/√(E/GeV). (author)
Directory of Open Access Journals (Sweden)
Bavdaž Mojca
2015-12-01
Full Text Available Response burden in business surveys has long been a concern for National Statistical Institutes (NSIs for three types of reasons: political reasons, because response burden is part of the total administrative burden governments impose on businesses; methodological reasons, because an excessive response burden may reduce data quality and increase data-collection costs; and strategic reasons, because it affects relations between the NSIs and the business community. This article investigates NSI practices concerning business response burden measurement and reduction actions based on a survey of 41 NSIs from 39 countries. Most NSIs monitor at least some burden aspects and have implemented some actions to reduce burden, but large differences exist between NSIs’ methodologies for burden measurement and actions taken to reduce burden. Future research should find ways to deal with methodological differences in burden conceptualization, operationalization, and measurement, and provide insights into the effectiveness and efficiency of burden-reduction actions.
Veterinary antimicrobial-usage statistics based on standardized measures of dosage
DEFF Research Database (Denmark)
Jensen, Vibeke Frøkjær; Jacobsen, Erik; Bager, Flemming
2004-01-01
In human medicine, the defined daily dose is used as a technical measure of drug usage, which is independent of the variations in the potency of the active compound and the formulation of the pharmaceutical product, therefore providing a measure of the relative importance of different drugs. A national system of animal defined daily doses (ADD) for each age-group and species has been defined in VetStat (the Danish national system monitoring veterinary therapeutic drug use). The usage is further standardized according to the number of animals in the target population, acquired from production data on the national level or from herd size by species and age in the Danish central husbandry register (CHR). Statistics based on standardized measures of VetStat data can be used for comparison of drug usage between different herds, veterinary practices, or geographic regions (allowing subdivision...
International Nuclear Information System (INIS)
Hahn, G.T.
1977-01-01
Substantial progress was made in three important areas: crack propagation and arrest theory, two-dimensional dynamic crack propagation analyses, and a laboratory test method for the material property data base. The major findings were as follows: Measurements of run-arrest events lent support to the dynamic, energy conservation theory of crack arrest. A two-dimensional, dynamic, finite-difference analysis, including inertia forces and thermal gradients, was developed. The analysis was successfully applied to run-arrest events in DCB (double-cantilever-beam) and SEN (single-edge notched) test pieces. A simplified procedure for measuring K_D and K_Im values with ordinary and duplex DCB specimens was demonstrated. The procedure employs a dynamic analysis of the crack length at arrest and requires no special instrumentation. The new method was applied to "duplex" specimens to measure the large K_D values displayed by A533B steel above the nil-ductility temperature. K_D crack velocity curves and K_Im values of two heats of A533B steel were measured, along with the corresponding values of the plane strain fracture toughness associated with static initiation (K_Ic), dynamic initiation (K_Id), and the static stress intensity at crack arrest (K_Ia). Possible relations among these toughness indices are identified. During the past year the principal investigators of the participating groups reached agreement on a crack arrest theory appropriate for the pressure vessel problem. 7 figures
Hincks, Ian; Granade, Christopher; Cory, David G.
2018-01-01
The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
Influence of the statistical distribution of bioassay measurement errors on the intake estimation
International Nuclear Information System (INIS)
Lee, T. Y; Kim, J. K
2006-01-01
The purpose of this study is to provide guidance for selecting an error distribution by analyzing the influence of the statistical distribution of bioassay measurement errors on the intake estimation. For this purpose, intakes were estimated by the maximum likelihood method for the cases of normal and lognormal error distributions, and the estimated intakes under the two distributions were compared. According to the results of this study, when measurement results for lung retention are somewhat greater than the limit of detection, the distribution type has a negligible influence on the results. For measurement results of the daily excretion rate, however, the results obtained under the assumption of a lognormal distribution were 10% higher than those obtained under the assumption of a normal distribution. In view of these facts, when the uncertainty component is governed by counting statistics, the distribution type has no influence on the intake estimation; when other components predominate, it is clearly desirable to estimate the intake assuming a lognormal distribution
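The normal-vs-lognormal contrast above can be illustrated with a minimal sketch. The measurement values and predicted excretion fractions below are hypothetical, not the study's data: under normal errors with equal variance the maximum likelihood intake is a least-squares fit through the origin, while under lognormal errors it is the geometric mean of the measurement-to-prediction ratios.

```python
import math

def intake_mle_normal(measurements, fractions):
    # Normal errors with equal variance: the MLE of intake I in m_i = I*f_i + e_i
    # is the least-squares slope through the origin.
    num = sum(m * f for m, f in zip(measurements, fractions))
    den = sum(f * f for f in fractions)
    return num / den

def intake_mle_lognormal(measurements, fractions):
    # Lognormal errors: ln(m_i) ~ N(ln(I*f_i), s^2), so the MLE of I is the
    # geometric mean of the ratios m_i / f_i.
    logs = [math.log(m / f) for m, f in zip(measurements, fractions)]
    return math.exp(sum(logs) / len(logs))

# Hypothetical daily excretion measurements (Bq) and predicted excretion fractions:
m = [12.0, 8.5, 5.1]
f = [0.012, 0.008, 0.005]
i_norm = intake_mle_normal(m, f)
i_logn = intake_mle_lognormal(m, f)
```

The two estimators diverge as the scatter of m_i/f_i grows, which is the mechanism behind the 10% difference reported for the excretion data.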
Abbey, Craig K.; Samuelson, Frank W.; Gallas, Brandon D.; Boone, John M.; Niklason, Loren T.
2013-03-01
The receiver operating characteristic (ROC) curve has become a common tool for evaluating diagnostic imaging technologies, and the primary endpoint of such evaluations is the area under the curve (AUC), which integrates sensitivity over the entire false positive range. An alternative figure of merit for ROC studies is expected utility (EU), which focuses on the relevant region of the ROC curve as defined by disease prevalence and the relative utility of the task. However, if this measure is to be used, it must also have desirable statistical properties to keep the burden of observer performance studies as low as possible. Here, we evaluate effect size and variability for EU and AUC. We use two observer performance studies recently submitted to the FDA to compare the EU and AUC endpoints. The studies were conducted using the multi-reader multi-case methodology, in which all readers score all cases in all modalities. ROC curves from the study were used to generate both the AUC and EU values for each reader and modality. The EU measure was computed assuming an iso-utility slope of 1.03. We find mean effect sizes, the reader-averaged difference between modalities, to be roughly 2.0 times as large for EU as for AUC. The standard deviation across readers is roughly 1.4 times as large, suggesting better statistical properties for the EU endpoint. In a simple power analysis of paired comparisons across readers, the utility measure required 36% fewer readers on average to achieve 80% statistical power compared to AUC.
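The contrast between the two endpoints can be sketched on a toy empirical ROC curve (the points below are invented for illustration, not the FDA study data): AUC integrates the whole curve, while EU rewards only the best operating point for a given iso-utility slope.

```python
def auc_trapezoid(fpf, tpf):
    # Trapezoidal area under an empirical ROC curve; points ordered by FPF.
    return sum((fpf[i + 1] - fpf[i]) * (tpf[i + 1] + tpf[i]) / 2.0
               for i in range(len(fpf) - 1))

def expected_utility(fpf, tpf, beta=1.03):
    # EU focuses on the clinically relevant region: the operating point that
    # maximizes TPF - beta*FPF, with beta the iso-utility slope.
    return max(t - beta * f for f, t in zip(fpf, tpf))

# Hypothetical empirical ROC points:
fpf = [0.0, 0.1, 0.3, 1.0]
tpf = [0.0, 0.6, 0.85, 1.0]
auc = auc_trapezoid(fpf, tpf)
eu = expected_utility(fpf, tpf)
```

Because EU depends only on the single best operating point, a modality improvement concentrated in the low-FPF region moves EU more than AUC, consistent with the larger effect sizes reported above.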
The statistical interpretations of counting data from measurements of low-level radioactivity
International Nuclear Information System (INIS)
Donn, J.J.; Wolke, R.L.
1977-01-01
The statistical model appropriate to measurements of low-level or background-dominant radioactivity is examined and the derived relationships are applied to two practical problems involving hypothesis testing: 'Does the sample exhibit a net activity above background?' and 'Is the activity of the sample below some preselected limit?'. In each of these cases, the appropriate decision rule is formulated, procedures are developed for estimating the preset count which is necessary to achieve a desired probability of detection, and a specific sequence of operations is provided for the worker in the field. (author)
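A Currie-style version of the first decision rule ("net activity above background?") can be sketched as follows. The equal-counting-time assumption and the choice k = 1.645 (about 5% false-positive risk) are illustrative conventions, not details taken from the paper.

```python
import math

def critical_net_counts(background_counts, k=1.645):
    # For equal sample and background counting times, the net-count decision
    # threshold is approximately k * sqrt(2 * B), with B the background counts.
    return k * math.sqrt(2.0 * background_counts)

def net_activity_detected(gross_counts, background_counts, k=1.645):
    # Decision rule: report a net activity above background only if the
    # observed net counts exceed the critical value.
    net = gross_counts - background_counts
    return net > critical_net_counts(background_counts, k)
```

With 50 background counts the critical net value is about 16.5 counts, so a gross count of 80 triggers detection while 55 does not.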
DEFF Research Database (Denmark)
Sathe, Ameya
This report is prepared as a written contribution to the Remote Sensing Summer School, that is organized by the Department of Wind Energy, Technical University of Denmark. It provides an overview of the state-of-the-art with regards to estimating turbulence statistics from lidar measurements...... configuration. The so-called velocity Azimuth Display (VAD) and the Doppler Beam Swinging (DBS) methods of post processing the lidar data are investigated in greater details, partly due to their wide use in commercial lidars. It is demonstrated that the VAD or DBS techniques result in introducing significant...
Statistical approaches to forecast gamma dose rates by using measurements from the atmosphere
International Nuclear Information System (INIS)
Jeong, H.J.; Hwang, W. T.; Kim, E.H.; Han, M.H.
2008-01-01
In this paper, the results obtained by inter-comparing several statistical techniques for estimating gamma dose rates, namely an exponential moving average model, a seasonal exponential smoothing model and an artificial neural network model, are reported. Seven years of gamma dose rate data measured in Daejeon City, Korea, were divided into two parts to develop the models and to validate the predictions generated by the techniques mentioned above. The artificial neural network model shows the best forecasting capability among the three statistical models. The reason the artificial neural network model provides superior predictions is likely its capacity for non-linear approximation. To fill in gamma dose rates when missing data occur in an environmental monitoring system, the moving average model and the seasonal exponential smoothing model may be better choices because they are faster and easier to apply than the artificial neural network model. These kinds of statistical approaches will be helpful for real-time control of radioactive emissions or for environmental quality assessment. (authors)
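The simpler of the smoothing approaches can be sketched in a few lines. The smoothing constant and the dose rate series below are illustrative; the paper's seasonal model additionally carries a seasonal component.

```python
def ses_forecast(series, alpha=0.3):
    # Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1};
    # the final smoothed level serves as the one-step-ahead forecast.
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

# Hypothetical hourly gamma dose rates (uSv/h):
rates = [0.110, 0.112, 0.111, 0.115, 0.113]
next_rate = ses_forecast(rates)
```

This kind of recursive update is what makes the smoothing models attractive for gap-filling: each forecast needs only the previous level and the newest observation.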
Evaluating anemometer drift: A statistical approach to correct biases in wind speed measurement
Azorin-Molina, Cesar; Asin, Jesus; McVicar, Tim R.; Minola, Lorenzo; Lopez-Moreno, Juan I.; Vicente-Serrano, Sergio M.; Chen, Deliang
2018-05-01
Recent studies on observed wind variability have revealed a decline (termed "stilling") of near-surface wind speed during the last 30-50 years over many mid-latitude terrestrial regions, particularly in the Northern Hemisphere. The well-known impact of cup anemometer drift (i.e., wear on the bearings) on the observed weakening of wind speed has been mentioned as a potential contributor to the declining trend. However, to date, no research has quantified its contribution to stilling based on measurements, most likely due to the lack of quantification of the ageing effect. In this study, a 3-year field experiment (2014-2016) with 10-minute paired wind speed measurements from one new and one malfunctioning (i.e., worn bearings) SEAC SV5 cup anemometer, a model used by the Spanish Meteorological Agency in automatic weather stations since the mid-1980s, was conducted to assess for the first time the role of anemometer drift in wind speed measurement. The results showed a statistically significant impact of anemometer drift on wind speed measurements, with the old anemometer measuring lower wind speeds than the new one. Biases show a marked temporal pattern and a clear dependency on wind speed, with both weak and strong winds causing significant biases. This pioneering quantification of biases has allowed us to define two regression models that correct up to 37% of the artificial bias in wind speed due to measurement with an old anemometer.
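The regression-based correction can be sketched as a simple least-squares mapping from worn-anemometer to new-anemometer speeds. The paired speeds below are invented for illustration; the study fits its own two models to the 10-minute field data.

```python
def fit_linear(x, y):
    # Ordinary least squares for y ~ a + b*x (worn -> new anemometer mapping).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical paired 10-min mean speeds (m/s): worn vs. new anemometer.
old_speed = [2.0, 4.0, 6.0, 8.0]
new_speed = [2.3, 4.5, 6.7, 8.9]
a, b = fit_linear(old_speed, new_speed)
corrected = [a + b * v for v in old_speed]  # drift-corrected series
```

A slope above 1 with a small positive intercept would indicate the worn instrument increasingly under-reads at higher speeds, matching the wind-speed-dependent bias described above.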
Riemann, Bryan L; Lininger, Monica R
2018-01-01
To describe the concepts of measurement reliability and minimal important change. All measurements have some magnitude of error. Because clinical practice involves measurement, clinicians need to understand measurement reliability. The reliability of an instrument is integral in determining whether a change in patient status is meaningful. Measurement reliability is the extent to which a test result is consistent and free of error. Three perspectives of reliability (relative reliability, systematic bias, and absolute reliability) are often reported. However, absolute reliability statistics, such as the minimal detectable difference, are most relevant to clinicians because they provide an expected error estimate. The minimal important difference is the smallest change in a treatment outcome that the patient would identify as important. Clinicians should use absolute reliability characteristics, preferably the minimal detectable difference, to determine the extent of error around a patient's measurement. The minimal detectable difference, coupled with an appropriately estimated minimal important difference, can assist the practitioner in identifying clinically meaningful changes in patients.
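The minimal detectable change can be computed directly from a test's reliability via the standard error of measurement; the SD and ICC below are hypothetical illustration values, not figures from the article.

```python
import math

def minimal_detectable_change(sd, icc, z=1.96):
    # SEM = SD * sqrt(1 - ICC); MDC at 95% confidence = z * sqrt(2) * SEM,
    # i.e., the smallest change exceeding expected test-retest error.
    sem = sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# Hypothetical outcome measure: between-subject SD = 5.0 points, ICC = 0.84.
mdc95 = minimal_detectable_change(5.0, 0.84)
```

A perfectly reliable test (ICC = 1) has MDC = 0; as reliability drops, the change a clinician can confidently call real grows.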
Nuclear analyses in biology and medical science. Measuring nuclei instead of atoms
International Nuclear Information System (INIS)
De Goeij, J.J.M.
1996-01-01
A brief overview is given of the use of nuclear analyses in life sciences. Features of nuclear analytical methods (NAMs) are grouped into four categories: physical basis, isotopic analyses rather than elemental analyses, no interference of electronic and molecular structure, and penetrating character of nuclear radiation. Obstacles in applying NAMs in the life sciences are outlined. 1 tab
Zimmerman, Richard K
2013-12-16
Health care worker (HCW) influenza vaccination rates are modest. This paper provides a detailed ethical analysis of the major options to increase HCW vaccination rates, comparing how major ethical theories would address the options. The main categories of interventions to raise rates include education, incentives, easy access, competition with rewards, assessment and feedback, declination, mandates with alternative infection control measures, and mandates with administrative action as consequences. The aforementioned interventions, except mandates, arouse little ethical controversy. However, these efforts are time and work intensive and rarely achieve vaccination rates higher than about 70%. The primary concerns voiced about mandates are loss of autonomy, injustice, lack of due process, and subsuming the individual for institutional ends. Proponents of mandates argue that they are ethical based on beneficence, non-maleficence, and duty. A number of professional associations support mandates. Arguments by analogy can be made from mandates for HCW vaccination against other diseases. The ethical systems used in the analyses include evolutionary ethics, utilitarianism, principlism (autonomy, beneficence, non-maleficence, and justice), Kantianism, and altruism. Across these systems, the most commonly preferred options are easy access, assessment and feedback, declinations, and mandates with infection control measures as consequences for non-compliance. Given the ethical imperatives of non-maleficence and beneficence, the limited success of less intensive interventions, and the need to put patient safety ahead of HCW convenience, mandates with additional infection control measures as consequences for non-compliance are preferred. For those who opt out of vaccination due to conscience concerns, such mandates provide a means to remain employed but not put patient safety at risk. Copyright © 2013 Elsevier Ltd. All rights reserved.
A comparison of hand-wrist bone and cervical vertebral analyses in measuring skeletal maturation.
Gandini, Paola; Mancini, Marta; Andreani, Federico
2006-11-01
To compare skeletal maturation as measured by hand-wrist bone analysis and by cervical vertebral analysis. A radiographic hand-wrist bone analysis and cephalometric cervical vertebral analysis of 30 patients (14 males and 16 females; 7-18 years of age) were examined. The hand-wrist bone analysis was evaluated by the Bjork index, whereas the cervical vertebral analysis was assessed by the cervical vertebral maturation stage (CVMS) method. To define vertebral stages, the analysis consisted of both cephalometric (13 points) and morphologic evaluation of three cervical vertebrae (concavity of second, third, and fourth vertebrae and shape of third and fourth vertebrae). These measurements were then compared with the hand-wrist bone analysis, and the results were statistically analyzed by the Cohen kappa concordance index. The same procedure was repeated after 6 months and showed identical results. The Cohen kappa index obtained (mean +/- SD) was 0.783 +/- 0.098, which is in the significant range. The results show a concordance of 83.3%, considering that the estimated percentage for each case is 23.3%. The results also show a correlation of CVMS I with Bjork stages 1-3 (interval A), CVMS II with Bjork stage 4 (interval B), CVMS III with Bjork stage 5 (interval C), CVMS IV with Bjork stages 6 and 7 (interval D), and CVMS V with Bjork stages 8 and 9 (interval E). Vertebral analysis on a lateral cephalogram is as valid as the hand-wrist bone analysis with the advantage of reducing the radiation exposure of growing subjects.
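The Cohen kappa concordance index used to compare the two staging methods can be sketched as follows (the ratings below are a toy example, not the study's 30-patient data):

```python
def cohen_kappa(ratings_a, ratings_b):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement),
    # with chance agreement computed from the two raters' category marginals.
    n = len(ratings_a)
    cats = set(ratings_a) | set(ratings_b)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Toy example: two methods assigning maturation stages to 6 subjects.
kappa = cohen_kappa([1, 2, 3, 3, 2, 1], [1, 2, 3, 2, 2, 1])
```

Values around 0.78, as reported above, indicate substantial agreement beyond chance between the vertebral and hand-wrist stagings.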
Directory of Open Access Journals (Sweden)
Hossein Bevrani, PhD
2011-09-01
Objective: The aim of this study is to explore the confirmatory factor analysis results of the Persian adaptation of the Statistics Anxiety Measure (SAM), proposed by Earp. Method: The validity and reliability assessments of the scale were performed on 298 college students chosen randomly from Tabriz University in Iran. Confirmatory factor analysis (CFA) was carried out to determine the factor structure of the Persian adaptation of the SAM. Results: As expected, the second-order model provided a better fit to the data than the three alternative models. Conclusions: Hence, the SAM provides an equally valid measure for use among college students. The study both expands and adds support to the existing body of math anxiety literature.
Baldewijns, Greet; Luca, Stijn; Nagels, William; Vanrumste, Bart; Croonenborghs, Tom
2015-01-01
It has been shown that gait speed and transfer times are good measures of functional ability in the elderly. However, data currently acquired by systems that measure either gait speed or transfer times in the homes of elderly people require manual reviewing by healthcare workers. This reviewing process is time-consuming. To alleviate this burden, this paper proposes the use of statistical process control (SPC) methods to automatically detect both positive and negative changes in transfer times. Three SPC techniques, the tabular CUSUM, the standardized CUSUM and the EWMA, known for their ability to detect small shifts in the data, are evaluated on simulated transfer times. This analysis shows that the EWMA is the best-suited method, with a detection accuracy of 82% and an average detection time of 9.64 days.
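An EWMA control chart of the kind evaluated above can be sketched in a few lines. The smoothing weight, control-limit width, and transfer-time series are illustrative choices, not the paper's parameters.

```python
import math

def ewma_alarms(series, target, sigma, lam=0.2, L=3.0):
    # EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}, started at the target;
    # alarm when z leaves target +/- L*sigma*sqrt(lam/(2-lam)) (asymptotic limits).
    limit = L * sigma * math.sqrt(lam / (2.0 - lam))
    z = target
    alarms = []
    for t, x in enumerate(series):
        z = lam * x + (1.0 - lam) * z
        if abs(z - target) > limit:
            alarms.append(t)
    return alarms

# Hypothetical daily transfer times (s): stable, then slowed by 10 s.
times = [10.0] * 5 + [20.0] * 10
alarm_days = ewma_alarms(times, target=10.0, sigma=1.0)
```

Because the EWMA pools information across days, it flags sustained small shifts that a single-day threshold would miss, which is why it suits slow functional decline.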
Statistical Validation for Clinical Measures: Repeatability and Agreement of Kinect™-Based Software.
Lopez, Natalia; Perez, Elisa; Tello, Emanuel; Rodrigo, Alejandro; Valentinuzzi, Max E
2018-01-01
The rehabilitation process is a fundamental stage for the recovery of people's capabilities. However, the evaluation of the process is performed by physiatrists and medical doctors, mostly based on their observations, that is, a subjective appreciation of the patient's evolution. This paper proposes a platform for tracking the movement of an individual's upper limb using Kinect sensor(s), to be applied to the patient during the rehabilitation process. The main contribution is the development of quantifying software and the statistical validation of its performance, repeatability, and clinical use in the rehabilitation process. The software determines joint angles and upper limb trajectories for the construction of a specific rehabilitation protocol and quantifies the treatment evolution. In turn, the information is presented via a graphical interface that allows the recording, storage, and reporting of the patient's data. For clinical purposes, the software output is statistically validated with three different methodologies, comparing the measures with a goniometer in terms of agreement and repeatability. The agreement of joint angles measured with the proposed software and the goniometer is evaluated with Bland-Altman plots; all measurements fell well within the limits of agreement, indicating interchangeability of the two techniques. Additionally, the results of the Bland-Altman analysis of repeatability show 95% confidence. Finally, the physiotherapists' qualitative assessment shows encouraging results for clinical use. The main conclusion is that the software is capable of offering a clinical history of the patient and is useful for quantification of the rehabilitation success. The simplicity, low cost, and visualization possibilities enhance the use of the Kinect software for rehabilitation and other applications, and the experts' opinion endorses the choice of our approach for clinical practice. Comparison of the new measurement technique with established
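The Bland-Altman computation behind the agreement validation is short enough to sketch directly (the joint-angle values below are toy numbers, not the study's measurements):

```python
import statistics

def bland_altman(method_a, method_b):
    # Bias (mean difference) and 95% limits of agreement (bias +/- 1.96*SD).
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy elbow angles (degrees): software vs. goniometer.
software = [30.0, 45.0, 60.0, 90.0]
goniometer = [29.0, 44.0, 61.0, 89.0]
bias, low, high = bland_altman(software, goniometer)
```

If nearly all paired differences fall inside [low, high] and that interval is clinically narrow, the two instruments can be used interchangeably, which is the criterion applied above.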
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
Directory of Open Access Journals (Sweden)
Michael Robert Cunningham
2016-10-01
The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger, Wood, Stiff, and Chatzisarantis, 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter, Kofler, Forster, and McCullough's (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim-and-fill and funnel plot asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test (PET) and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. meta-analysis results actually indicate that there is a real depletion effect, contrary to their title.
MRI volumetric measurement of hippocampal formation based on statistic parametric mapping
International Nuclear Information System (INIS)
Hua Jianming; Jiang Biao; Zhou Jiong; Zhang Weimin
2010-01-01
Objective: To study MRI volumetric measurement of the hippocampal formation using statistical parametric mapping (SPM) software and to discuss the value of the method as applied to Alzheimer's disease (AD). Methods: The SPM software was used to segment the three-dimensional MRI brain image into gray matter, white matter and CSF. The bilateral hippocampal formations in both the AD group and the normal control group were delineated and the volumes were measured. The SPM method was compared with the conventional method based on regions of interest (ROI), which is the gold standard of volume measurement. The time needed to measure the volume by the two methods was recorded and compared with a two-independent-samples t test. Moreover, 7 physicians measured the left hippocampal formation of the same control subject with both methods. The frequency distribution and dispersion of the data acquired with the two methods were evaluated using the coefficient of variation. Results: (1) The volume of the bilateral hippocampal formations measured by the SPM method was (1.88 ± 0.07) cm3 and (1.93 ± 0.08) cm3 respectively in the AD group, and (2.99 ± 0.07) cm3 and (3.02 ± 0.06) cm3 in the control group. The volume of the bilateral hippocampal formations measured by the ROI method was (1.87 ± 0.06) cm3 and (1.91 ± 0.09) cm3 in the AD group, and (2.97 ± 0.08) cm3 and (3.00 ± 0.05) cm3 in the control group. There was no significant difference between the SPM method and the conventional ROI method in either the AD group or the control group (t=1.500, 1.617, 1.095, 1.889, P>0.05). However, the time needed for delineation and volume measurement differed significantly: (38.1 ± 2.0) min for the SPM measurement versus (55.4 ± 2.4) min for the ROI measurement (t=-25.918, P<0.05). The frequency distributions of the hippocampal formation volumes measured by the SPM method and the ROI method were different: the CV for SPM was 7% and the CV for ROI was 19%. Conclusions: The borders of
Pestel, G; Fukui, K; Higashi, M; Schmidtmann, I; Werner, C
2018-06-01
An ideal non-invasive monitoring system should provide accurate and reproducible measurements of clinically relevant variables that enables clinicians to guide therapy accordingly. The monitor should be rapid, easy to use, readily available at the bedside, operator-independent, cost-effective and should have a minimal risk and side effect profile for patients. An example is the introduction of pulse oximetry, which has become established for non-invasive monitoring of oxygenation worldwide. A corresponding non-invasive monitoring of hemodynamics and perfusion could optimize the anesthesiological treatment to the needs in individual cases. In recent years several non-invasive technologies to monitor hemodynamics in the perioperative setting have been introduced: suprasternal Doppler ultrasound, modified windkessel function, pulse wave transit time, radial artery tonometry, thoracic bioimpedance, endotracheal bioimpedance, bioreactance, and partial CO 2 rebreathing have been tested for monitoring cardiac output or stroke volume. The photoelectric finger blood volume clamp technique and respiratory variation of the plethysmography curve have been assessed for monitoring fluid responsiveness. In this manuscript meta-analyses of non-invasive monitoring technologies were performed when non-invasive monitoring technology and reference technology were comparable. The primary evaluation criterion for all studies screened was a Bland-Altman analysis. Experimental and pediatric studies were excluded, as were all studies without a non-invasive monitoring technique or studies without evaluation of cardiac output/stroke volume or fluid responsiveness. Most studies found an acceptable bias with wide limits of agreement. Thus, most non-invasive hemodynamic monitoring technologies cannot be considered to be equivalent to the respective reference method. Studies testing the impact of non-invasive hemodynamic monitoring technologies as a trend evaluation on outcome, as well as
Thailand's Low-Carbon Scenario 2050: The AIM/CGE analyses of CO2 mitigation measures
International Nuclear Information System (INIS)
Thepkhun, Panida; Limmeechokchai, Bundit; Fujimori, Shinichiro; Masui, Toshihiko; Shrestha, Ram M.
2013-01-01
Climate change and CO2 mitigation have become increasingly important environmental issues. Recently Thailand has proposed policies on GHG mitigation such as Thailand's Nationally Appropriate Mitigation Action (NAMA), which aims at GHG mitigation in the energy sector. This study used a computable general equilibrium model, the "AIM/CGE" model, to analyse GHG mitigation measures under emission trading and carbon capture and storage (CCS) technology in Thailand. Results show that an international free emission trading policy can drive more GHG reduction by decreasing energy supply and demand and increasing emission prices. CCS technologies would balance emission reduction, but they would reduce energy efficiency improvement and renewable energy utilization. In terms of energy security, the policy options in this study would improve energy security, reduce energy import dependency, and yield co-benefits of GHG mitigation in the form of improved local air quality. The results are also informative for GHG mitigation policy in other developing countries. -- Highlights: •A Computable General Equilibrium (CGE) model was used to analyze GHG mitigation policies in Thailand. •The CCS and emission trading will increase GHG mitigation in Thailand. •The 30% GHG mitigation target with 50% emission trading will give the best result in GDP. •The share of biomass resource and energy efficiency will decrease with CCS. •The emission trading will play an important role in decreasing fossil consumption and increasing renewable energy utilization
International Nuclear Information System (INIS)
2010-05-01
This document reports various analyses performed within the frame of the preparation and filming of a TV documentary on the Belgian National Institute for Radio-elements. It reports gamma radiation measurements performed in the vicinity of the institute, discusses the possible origin of the increase observed near the institute, and presents analyses of sludge samples from a wastewater treatment works as well as analyses of milk, cabbage, mosses and sediments collected by residents
Bonin, Timothy A.; Newman, Jennifer F.; Klein, Petra M.; Chilson, Phillip B.; Wharton, Sonia
2016-12-01
Since turbulence measurements from Doppler lidars are being increasingly used within wind energy and boundary-layer meteorology, it is important to assess and improve the accuracy of these observations. While turbulent quantities are measured by Doppler lidars in several different ways, the simplest and most frequently used statistic is the vertical velocity variance (w'^2) from zenith stares. However, the competing effects of signal noise and resolution volume limitations, which respectively increase and decrease w'^2, reduce the accuracy of these measurements. Herein, an established method that utilises the autocovariance of the signal to remove noise is evaluated and its skill in correcting for volume-averaging effects in the calculation of w'^2 is also assessed. Additionally, this autocovariance technique is further refined by defining the amount of lag time to use for the most accurate estimates of w'^2. Through comparison of observations from two Doppler lidars and sonic anemometers on a 300 m tower, the autocovariance technique is shown to generally improve estimates of w'^2. After the autocovariance technique is applied, values of w'^2 from the Doppler lidars are generally in close agreement (R^2 ≈ 0.95-0.98) with those calculated from sonic anemometer measurements.
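The idea behind the lag-based noise removal can be sketched directly: because instrument noise is uncorrelated from sample to sample, it inflates only the lag-0 autocovariance, so the autocovariance at a small nonzero lag approximates the noise-free variance. The lag choice and the toy series below are illustrative, not the paper's refined procedure.

```python
import statistics

def autocovariance(x, lag):
    # Biased (1/N) sample autocovariance at the given lag.
    n = len(x)
    m = statistics.mean(x)
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n

def noise_filtered_variance(w, lag=1):
    # White instrument noise contributes only at lag 0, so the autocovariance
    # at a small nonzero lag estimates the atmospheric vertical velocity variance.
    return autocovariance(w, lag)
```

In practice the lag must stay small relative to the turbulence integral scale, so that the true signal is still almost perfectly correlated while the noise has already decorrelated.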
International Nuclear Information System (INIS)
Weise, K.
1998-01-01
When a contribution of a particular type of nuclear radiation is to be detected, for instance a spectral line of interest for some purpose of radiation protection, and when quantities that cannot be determined by repeated measurements or by counting nuclear radiation events, such as influence quantities, must be taken into account together with their uncertainties, then the conventional statistics of event frequencies is not sufficient for defining the decision threshold, the detection limit, and the limits of a confidence interval. These characteristic limits are therefore redefined on the basis of Bayesian statistics for a wider applicability and in such a way that the usual practice remains as far as possible unaffected. The principle of maximum entropy is applied to establish probability distributions from the available information. Quantiles of these distributions are used for defining the characteristic limits. Such a distribution must not, however, be interpreted as a distribution of event frequencies like the Poisson distribution; it rather expresses the actual state of incomplete knowledge of a physical quantity. The different definitions and interpretations and their quantitative consequences are presented and discussed with two examples. The new approach provides a theoretical basis for the DIN 25482-10 standard presently in preparation for general applications of the characteristic limits. (orig.)
da Costa Lobato, Tarcísio; Hauser-Davis, Rachel Ann; de Oliveira, Terezinha Ferreira; Maciel, Marinalva Cardoso; Tavares, Maria Regina Madruga; da Silveira, Antônio Morais; Saraiva, Augusto Cesar Fonseca
2015-02-15
The Amazon area has been increasingly suffering from anthropogenic impacts, especially due to the construction of hydroelectric power plant reservoirs. The analysis and categorization of the trophic status of these reservoirs are of interest to indicate man-made changes in the environment. In this context, the present study aimed to categorize the trophic status of a hydroelectric power plant reservoir located in the Brazilian Amazon by constructing a novel Water Quality Index (WQI) and Trophic State Index (TSI) for the reservoir using major ion concentrations and physico-chemical water parameters determined in the area and taking into account the sampling locations and the local hydrological regimes. After applying statistical analyses (factor analysis and cluster analysis) and establishing a rule base of a fuzzy system to these indicators, the results obtained by the proposed method were then compared to the generally applied Carlson and a modified Lamparelli trophic state index (TSI), specific for trophic regions. The categorization of the trophic status by the proposed fuzzy method was shown to be more reliable, since it takes into account the specificities of the study area, while the Carlson and Lamparelli TSI do not, and, thus, tend to over or underestimate the trophic status of these ecosystems. The statistical techniques proposed and applied in the present study, are, therefore, relevant in cases of environmental management and policy decision-making processes, aiding in the identification of the ecological status of water bodies. With this, it is possible to identify which factors should be further investigated and/or adjusted in order to attempt the recovery of degraded water bodies. Copyright © 2014 Elsevier B.V. All rights reserved.
Rannik, Ü.; Haapanala, S.; Shurpali, N. J.; Mammarella, I.; Lind, S.; Hyvönen, N.; Peltola, O.; Zahniser, M.; Martikainen, P. J.; Vesala, T.
2015-01-01
Four gas analysers capable of measuring nitrous oxide (N2O) concentration with the response time necessary for eddy covariance flux measurements were operated from spring until winter 2011 over a field cultivated with reed canary grass (RCG, Phalaris arundinacea, L.), a perennial bioenergy crop in eastern Finland. The instruments were TGA100A (Campbell Scientific Inc.), CW-TILDAS-CS (Aerodyne Research Inc.), N2O / CO-23d (Los Gatos Research Inc.) and QC-TILDAS-76-CS (Aerodyne Research Inc.). The period with high emissions, lasting for about 2 weeks after fertilization in late May, was characterized by emissions up to 2 orders of magnitude higher, whereas during the rest of the campaign the N2O fluxes were small, from 0.01 to 1 nmol m-2 s-1. Two instruments, CW-TILDAS-CS and N2O / CO-23d, determined the N2O exchange with only a minor systematic difference throughout the campaign, when operated simultaneously. TGA100A produced the cumulatively highest N2O estimates (with 29% higher values during the period when all instruments were operational). QC-TILDAS-76-CS obtained 36% lower fluxes than CW-TILDAS-CS during the first period, including the emission episode, whereas its correspondence with the other instruments during the rest of the campaign was good. The reasons for the systematic differences were not identified, suggesting a further need for detailed evaluation of instrument performance under field conditions, with emphasis on stability, calibration and any other factors that can systematically affect the accuracy of flux measurements. The instrument CW-TILDAS-CS was characterized by the lowest noise level (with a standard deviation of around 0.12 ppb at a 10 Hz sampling rate) as compared to N2O / CO-23d and QC-TILDAS-76-CS (around 0.50 ppb) and TGA100A (around 2 ppb). We identified that for all instruments except CW-TILDAS-CS the random error due to instrumental noise was an important source of uncertainty at the 30 min averaging level and the total stochastic error was frequently
Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo
2016-01-01
The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons, or Marshall and Palmer). However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields drop size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to the expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using these data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link. The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, the correlation between the scaling factor and the rain rate, and the temporal variability of the drop size
An instrument for the high-statistics measurement of plastic scintillating fibers
International Nuclear Information System (INIS)
Buontempo, S.; Ereditato, A.; Marchetti-Stasi, F.; Riccardi, F.; Strolin, P.
1994-01-01
There is today widespread use of plastic scintillating fibers in particle physics, mainly for calorimetric and tracking applications. In the case of calorimeters, we have to cope with very massive detectors and a large quantity of scintillating fibers. The CHORUS Collaboration has built a new detector to search for ν μ - ν τ oscillations in the CERN neutrino beam. A crucial role in the detector is played by the high-energy-resolution calorimeter. For its construction more than 400 000 scintillating plastic fibers have been used. In this paper we report on the design and performance of a new instrument for the high-statistics measurement of fiber properties, in terms of light yield and light attenuation length. The instrument has been successfully used to test about 3% of the total number of fibers before the construction of the calorimeter. (orig.)
Statistical evaluation of failures and repairs of the V-1 measuring and control system
International Nuclear Information System (INIS)
Laurinec, R.; Korec, J.; Mitosinka, J.; Zarnovican, V.
1984-01-01
A failure record card system was introduced for evaluating the reliability of the measurement and control equipment of the V-1 nuclear power plant. The SPU-800 microcomputer system is used for recording data on magnetic tape and their transmission to the central data processing department. The data are used for evaluating the reliability of components and circuits and a selection is made of the most failure-prone components, and the causes of failures are evaluated as are failure identification, repair and causes of outages. The system provides monthly, annual and total assessment data since the system was commissioned. The results of the statistical evaluation of failures are used for planning preventive maintenance and for determining optimal repair intervals. (E.S.)
High-statistics measurement of the η → 3π0 decay at the Mainz Microtron
Prakhov, S.; Abt, S.; Achenbach, P.; Adlarson, P.; Afzal, F.; Aguar-Bartolomé, P.; Ahmed, Z.; Ahrens, J.; Annand, J. R. M.; Arends, H. J.; Bantawa, K.; Bashkanov, M.; Beck, R.; Biroth, M.; Borisov, N. S.; Braghieri, A.; Briscoe, W. J.; Cherepnya, S.; Cividini, F.; Collicott, C.; Costanza, S.; Denig, A.; Dieterle, M.; Downie, E. J.; Drexler, P.; Ferretti Bondy, M. I.; Fil'kov, L. V.; Fix, A.; Gardner, S.; Garni, S.; Glazier, D. I.; Gorodnov, I.; Gradl, W.; Gurevich, G. M.; Hamill, C. B.; Heijkenskjöld, L.; Hornidge, D.; Huber, G. M.; Käser, A.; Kashevarov, V. L.; Kay, S.; Keshelashvili, I.; Kondratiev, R.; Korolija, M.; Krusche, B.; Lazarev, A.; Lisin, V.; Livingston, K.; Lutterer, S.; MacGregor, I. J. D.; Manley, D. M.; Martel, P. P.; McGeorge, J. C.; Middleton, D. G.; Miskimen, R.; Mornacchi, E.; Mushkarenkov, A.; Neganov, A.; Neiser, A.; Oberle, M.; Ostrick, M.; Otte, P. B.; Paudyal, D.; Pedroni, P.; Polonski, A.; Ron, G.; Rostomyan, T.; Sarty, A.; Sfienti, C.; Sokhoyan, V.; Spieker, K.; Steffen, O.; Strakovsky, I. I.; Strandberg, B.; Strub, Th.; Supek, I.; Thiel, A.; Thiel, M.; Thomas, A.; Unverzagt, M.; Usov, Yu. A.; Wagner, S.; Walford, N. K.; Watts, D. P.; Werthmüller, D.; Wettig, J.; Witthauer, L.; Wolfes, M.; Zana, L. A.; A2 Collaboration at MAMI
2018-06-01
The largest statistics to date of 7 × 10^6 η → 3π0 decays, based on 6.2 × 10^7 η mesons produced in the γp → ηp reaction, has been accumulated by the A2 Collaboration at the Mainz Microtron, MAMI. It allowed a detailed study of the η → 3π0 dynamics beyond its conventional parametrization with just the quadratic slope parameter α and enabled, for the first time, a measurement of the second-order term and a better understanding of the cusp structure in the neutral decay. The present data are also compared to recent theoretical calculations that predict a nonlinear dependence on the quadratic distance from the Dalitz-plot center.
A statistical study of high-altitude electric fields measured on the Viking satellite
International Nuclear Information System (INIS)
Lindqvist, P.A.; Marklund, G.T.
1990-01-01
Characteristics of high-altitude data from the Viking electric field instrument are presented in a statistical study based on 109 Viking orbits. The study focuses in particular on the signatures of, and relationships between, various parameters measured by the electric field instrument, such as the parallel and transverse (to B) components of the electric field and the electric field variability. A major goal of the Viking mission was to investigate the occurrence and properties of parallel electric fields and their role in the auroral acceleration process. The results in this paper on the altitude distribution of the electric field variability confirm earlier findings on the distribution of small-scale electric fields and indicate the presence of parallel fields up to about 11,000 km altitude. The directly measured parallel electric field is also investigated in some detail. It is in general directed upward with an average value of 1 mV/m, but depends on, for example, altitude and plasma density. Possible sources of error in the measurement of the parallel field are also considered and accounted for.
Statistical analysis of x-ray stress measurement by centroid method
International Nuclear Information System (INIS)
Kurita, Masanori; Amano, Jun; Sakamoto, Isao
1982-01-01
The X-ray technique allows a nondestructive and rapid measurement of residual stresses in metallic materials. The centroid method has an advantage over other X-ray methods in that it can determine the angular position of a diffraction line, from which the stress is calculated, even with an asymmetrical line profile. An equation was derived for the standard deviation σ_p of the angular position of a diffraction line caused by statistical fluctuation, which is a fundamental source of scatter in X-ray stress measurements. This equation shows that an increase of X-ray counts by a factor of k results in a decrease of σ_p by a factor of 1/√k. It also shows that σ_p increases rapidly as the angular range used in calculating the centroid increases. It is therefore important to calculate the centroid using the narrow angular range between the two ends of the diffraction line, where it starts to deviate from the straight background line. By using quenched structural steels JIS S35C and S45C, the residual stresses and their standard deviations were calculated by the centroid, parabola, Gaussian curve, and half-width methods, and the results were compared. The centroid of a diffraction line was affected greatly by the background line used. The standard deviation of the stress measured by the centroid method was found to be the largest among the four methods. (author)
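The two quantitative statements, Poisson error propagation for the centroid position and the 1/√k scaling with total counts, can be reproduced in a short sketch. The line profile, background level, and grid below are synthetic assumptions, not the measured S35C/S45C data.

```python
import math

def centroid_and_sigma(two_theta, counts, background):
    """Centroid of a diffraction line and its statistical standard deviation.

    Background is treated as known (noise-free); the measured counts carry
    Poisson variance Var(n_i) = n_i, propagated into the centroid position.
    """
    net = [n - b for n, b in zip(counts, background)]
    total = sum(net)
    c = sum(x * n for x, n in zip(two_theta, net)) / total
    # Var(c) = sum((x_i - c)^2 * Var(n_i)) / total^2
    var = sum((x - c) ** 2 * n for x, n in zip(two_theta, counts)) / total ** 2
    return c, math.sqrt(var)

# Illustrative symmetric line profile on a 2-theta grid (assumed data).
x = [154.0 + 0.1 * i for i in range(21)]
bkg = [200.0] * 21
peak = [200.0 + 1000.0 * math.exp(-((xi - 155.0) ** 2) / 0.08) for xi in x]

c1, s1 = centroid_and_sigma(x, peak, bkg)
# Multiplying every count by k = 4 leaves the centroid unchanged and
# halves sigma_p, the 1/sqrt(k) scaling stated in the abstract.
c2, s2 = centroid_and_sigma(x, [4 * n for n in peak], [4 * b for b in bkg])
```

Widening the summation range beyond the line would add background-dominated terms with large (x_i - c)^2 weights, which is why σ_p grows rapidly with the angular range used.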
Statistical measurement of the gamma-ray source-count distribution as a function of energy
Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.
2017-01-01
Photon-count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so-far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.
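A toy generative model helps make the dN/dS language concrete: draw source fluxes from a broken power law and Poisson-sample photon counts per source. The break flux, slopes, flux range, and exposure below are illustrative assumptions, not the Fermi-LAT values fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Broken power law dN/dS: slope n1 below the break, n2 above (assumed values).
s_break, n1, n2 = 1e-10, 1.9, 2.6

def sample_broken_power_law(n, s_min=1e-12, s_max=1e-8):
    """Inverse-CDF sampling of source fluxes on a log grid (numerical)."""
    grid = np.logspace(np.log10(s_min), np.log10(s_max), 4000)
    pdf = np.where(grid < s_break,
                   (grid / s_break) ** -n1,
                   (grid / s_break) ** -n2)
    cdf = np.cumsum(pdf * np.gradient(grid))
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, grid)

fluxes = sample_broken_power_law(5000)       # ph cm^-2 s^-1, assumed units
exposure = 1e11                              # cm^2 s, assumed
counts = rng.poisson(fluxes * exposure)      # photon counts per source
```

Histogramming such counts over sky pixels yields the 1pPDF, whose shape encodes the dN/dS parameters; most sources sit below the break, which is why undetected populations still shape the count statistics.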
Directory of Open Access Journals (Sweden)
Todd C. Pataky
2016-11-01
One-dimensional (1D) kinematic, force, and EMG trajectories are often analyzed using zero-dimensional (0D) metrics like local extrema. Recently, whole-trajectory 1D methods have emerged in the literature as alternatives. Since 0D and 1D methods can yield qualitatively different results, the two approaches may appear to be theoretically distinct. The purposes of this paper were (a) to clarify that 0D and 1D approaches are actually just special cases of a more general region-of-interest (ROI) analysis framework, and (b) to demonstrate how ROIs can augment statistical power. We first simulated millions of smooth, random 1D datasets to validate theoretical predictions of the 0D, 1D and ROI approaches and to emphasize how ROIs provide a continuous bridge between 0D and 1D results. We then analyzed a variety of public datasets to demonstrate potential effects of ROIs on biomechanical conclusions. Results showed, first, that a priori ROI particulars can qualitatively affect the biomechanical conclusions that emerge from analyses and, second, that ROIs derived from exploratory/pilot analyses can detect smaller biomechanical effects than are detectable using full 1D methods. We recommend regarding ROIs, like data filtering particulars and Type I error rate, as parameters which can affect hypothesis testing results, and thus as sensitivity analysis tools to ensure arbitrary decisions do not influence scientific interpretations. Last, we describe open-source Python and MATLAB implementations of 1D ROI analysis for arbitrary experimental designs ranging from one-sample t tests to MANOVA.
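The core ROI idea, restricting a 1D test-statistic trajectory to an a priori window before taking its 0D summary, can be sketched without the random-field-theory thresholds that the paper's open-source implementations provide. Everything below (synthetic trajectories, smoothing kernel, ROI bounds, effect location) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
t_nodes = np.linspace(0, 100, 101)   # 1D time domain (% of movement cycle)

def smooth_noise(n):
    """n smooth random 1D trajectories (Gaussian noise, Gaussian kernel)."""
    w = rng.standard_normal((n, 101))
    kern = np.exp(-np.linspace(-3, 3, 31) ** 2)
    return np.apply_along_axis(lambda r: np.convolve(r, kern, "same"), 1, w)

# Group B carries a simulated effect inside 40-60% of the cycle (assumed).
a = smooth_noise(10)
b = smooth_noise(10) + np.where((t_nodes > 40) & (t_nodes < 60), 0.8, 0.0)

# Pointwise two-sample t trajectory: the 1D statistic.
na, nb = len(a), len(b)
sp = np.sqrt(((na - 1) * a.var(0, ddof=1) + (nb - 1) * b.var(0, ddof=1))
             / (na + nb - 2))
t_traj = (b.mean(0) - a.mean(0)) / (sp * np.sqrt(1 / na + 1 / nb))

roi = (t_nodes >= 40) & (t_nodes <= 60)  # a priori region of interest
t_max_roi = t_traj[roi].max()            # ROI summary (between 0D and 1D)
t_max_full = t_traj.max()                # full-1D summary
```

Shrinking the ROI toward a single node recovers a 0D analysis, while growing it to the whole domain recovers the full 1D analysis; the smaller search region also lowers the critical threshold, which is the power gain the paper quantifies.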
Testing of a "smart-pebble" for measuring particle transport statistics
Kitsikoudis, Vasileios; Avgeris, Loukas; Valyrakis, Manousos
2017-04-01
This paper presents preliminary results from novel experiments aiming to assess coarse sediment transport statistics for a range of transport conditions, via the use of an innovative "smart-pebble" device. This device is a waterproof sphere with a diameter of 7 cm, equipped with a number of sensors that provide information about the velocity, acceleration and positioning of the "smart-pebble" within the flow field. A series of specifically designed experiments is carried out to monitor the entrainment of a "smart-pebble" for fully developed, uniform, turbulent flow conditions over a hydraulically rough bed. Specifically, the bed surface is configured in three sections, each of them consisting of well-packed glass beads of slightly increasing size in the downstream direction. The first section has a streamwise length of L1 = 150 cm and a bead size of D1 = 15 mm, the second section has a length of L2 = 85 cm and a bead size of D2 = 22 mm, and the third bed section has a length of L3 = 55 cm and a bead size of D3 = 25.4 mm. Two cameras monitor the area of interest to provide additional information regarding the "smart-pebble" movement. Three-dimensional flow measurements are obtained with the aid of an acoustic Doppler velocimeter along a measurement grid to assess the flow forcing field. A wide range of flow rates near and above the threshold of entrainment is tested, while using four distinct densities for the "smart-pebble", which can affect its transport speed and total momentum. The acquired data are analyzed to derive Lagrangian transport statistics, and the implications of such an important experiment for the transport of particles by rolling are discussed. The flow conditions for the initiation of motion, particle accelerations and equilibrium particle velocities (translating into transport rates), and statistics of particle impact and its motion can be extracted from the acquired data, which can be further compared to develop meaningful insights for sediment transport
International Nuclear Information System (INIS)
Wenrich-Verbeek, K.J.; Suits, V.J.
1979-01-01
This report presents the chemical analyses and statistical evaluation of 62 water samples collected in the north-central part of New Mexico near Rio Ojo Caliente. Both spring and surface-water samples were taken throughout the Rio Ojo Caliente drainage basin above and a few miles below the town of La Madera. A high U concentration (15 μg/l) found in the water of the Rio Ojo Caliente near La Madera, Rio Arriba County, New Mexico, during a regional sampling-technique study in August 1975 by the senior author, was investigated further in May 1976 to determine whether stream waters could be effectively used to trace the source of a U anomaly. A detailed study of the tributaries to the Rio Ojo Caliente, involving 29 samples, was conducted during a moderate discharge period, May 1976, so that small tributaries would contain water. This study isolated Canada de la Cueva as the tributary contributing the anomalous U, so that in May 1977, an extremely low discharge period due to the 1977 drought, an additional 33 samples were taken to further define the anomalous area. 6 references, 3 figures, 6 tables
Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.
1983-01-01
The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical tests results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs, and input and output data.
A new support measure to quantify the impact of local optima in phylogenetic analyses.
Brammer, Grant; Sul, Seung-Jin; Williams, Tiffani L
2011-01-01
Phylogenetic analyses are often incorrectly assumed to have stabilized to a single optimum. However, a set of trees from a phylogenetic analysis may contain multiple distinct local optima, with each optimum providing different levels of support
Berenson, Mark L.; Koppel, Nicole B.; Lord, Richard A.; Chapdelaine, Laura L.
2018-01-01
Typically, the core-required undergraduate business statistics course covers a broad spectrum of topics with applications pertaining to all functional areas of business. The recently updated American Statistical Association's GAISE (Guidelines for Assessment and Instruction in Statistics Education) College Report once again stresses the…
International Nuclear Information System (INIS)
Basu, I.; Chen, M.; Loeck, M.; Al-Samman, T.; Molodov, D.A.
2016-01-01
One of the key aspects influencing microstructural design pathways in metallic systems is grain boundary motion. The present work introduces a method by means of which direct measurement of the grain boundary mobility vs. misorientation dependence is made possible. The technique utilizes datasets acquired by means of serial electron backscatter diffraction (EBSD) measurements. The experimental EBSD measurements are collectively analyzed, whereby the datasets were used to obtain grain boundary mobility and grain aspect ratio with respect to grain boundary misorientation. The proposed method is further validated using cellular automata (CA) simulations. Single crystal aluminium was cold rolled and scratched in order to nucleate random orientations. Subsequent annealing at 300 °C resulted in grains growing, in the direction normal to the scratch, into a single deformed orientation. Growth selection was observed, wherein the boundaries with misorientations close to the Σ7 CSL orientation relationship (38° 〈111〉) migrated considerably faster. The obtained boundary mobility distribution exhibited a non-monotonic behavior with a maximum corresponding to a misorientation of 38° ± 2° about 〈111〉 axes ± 4°, which was 10–100 times higher than the mobility values of random high angle boundaries. Correlation with the grain aspect ratio values indicated a strong growth anisotropy displayed by the fast growing grains. The observations have been discussed in terms of the influence of grain boundary character on grain boundary motion during recrystallization. - Highlights: • Statistical microstructure method to measure grain boundary mobility during recrystallization • Method implementation independent of material or crystal structure • Mobility of the Σ7 boundaries in 5N Al was calculated as 4.7 × 10^-8 m^4/(J·s). • Pronounced growth selection in the recrystallizing nuclei in Al • Boundary mobility values during recrystallization 2–3 orders of magnitude
On the role of complex phases in the quantum statistics of weak measurements
International Nuclear Information System (INIS)
Hofmann, Holger F
2011-01-01
Weak measurements carried out between quantum state preparation and post-selection result in complex values for self-adjoint operators, corresponding to complex conditional probabilities for the projections on specific eigenstates. In this paper it is shown that the complex phases of these weak conditional probabilities describe the dynamic response of the system to unitary transformations. Quantum mechanics thus unifies the statistical overlap of different states with the dynamical structure of transformations between these states. Specifically, it is possible to identify the phase of weak conditional probabilities directly with the action of a unitary transform that maximizes the overlap of initial and final states. This action provides a quantitative measure of how much quantum correlations can diverge from the deterministic relations between physical properties expected from classical physics or hidden variable theories. In terms of quantum information, the phases of weak conditional probabilities thus represent the logical tension between sets of three quantum states that is at the heart of quantum paradoxes. (paper)
Statistics on Near Wall Structures and Shear Stress Distribution from 3D Holographic Measurement.
Sheng, J.; Malkiel, E.; Katz, J.
2007-11-01
Digital Holographic Microscopy performs 3D velocity measurements in the near-wall region of a turbulent boundary layer in a square channel over a smooth wall at Re_τ = 1,400. Resolution of ~1 μm over a sample volume of 1.5 × 2 × 1.5 mm (x^+ = 50, y^+ = 60, z^+ = 50) is sufficient for resolving buffer-layer and lower-log-layer structures, and for measuring instantaneous wall shear stress distributions from velocity gradients in the viscous sublayer. Results, based on 700 instantaneous realizations, provide detailed statistics on the spatial distribution of both wall stress components along with characteristic flow structures. Conditional sampling based on maxima and minima of wall shear stresses, as well as examination of instantaneous flow structures, leads to the development of a conceptual model for a characteristic flow phenomenon that seems to generate extreme stress events. This structure develops as an initially spanwise vortex element rises away from the surface, due to a local disturbance, causing a local stress minimum. Due to increasing velocity with elevation, this element bends downstream, forming a pair of inclined streamwise vortices, aligned at 45° to the freestream, with ejection-like flow between them. Entrainment of high streamwise momentum on the outer sides of this vortex pair generates streamwise shear stress maxima, 70 δ_ν downstream, which are displaced laterally by 35 δ_ν from the local minimum.
Semi-automated measurement of anatomical structures using statistical and morphological priors
Ashton, Edward A.; Du, Tong
2004-05-01
Rapid, accurate and reproducible delineation and measurement of arbitrary anatomical structures in medical images is a widely held goal, with important applications in both clinical diagnostics and, perhaps more significantly, pharmaceutical trial evaluation. This process requires the ability first to localize a structure within the body, and then to find a best approximation of the structure's boundaries within a given scan. Structures that are tortuous and small in cross section, such as the hippocampus in the brain or the abdominal aorta, present a particular challenge. Their apparent shape and position can change significantly from slice to slice, and accurate prior shape models for such structures are often difficult to form. In this work, we have developed a system that makes use of both a user-defined shape model and a statistical maximum likelihood classifier to identify and measure structures of this sort in MRI and CT images. Experiments show that this system can reduce analysis time by 75% or more with respect to manual tracing with no loss of precision or accuracy.
Haberman, Shelby J.
2004-01-01
Statistical and measurement properties are examined for features used in essay assessment to determine the generalizability of the features across populations, prompts, and individuals. Data are employed from TOEFL® and GMAT® examinations and from writing for Criterion®.
Davids, J. C.; Rutten, M.; Van De Giesen, N.
2016-12-01
Hydrologic data has traditionally been collected with permanent installations of sophisticated and relatively accurate but expensive monitoring equipment at limited numbers of sites. Consequently, the spatial coverage of the data is limited and costs are high. Achieving adequate maintenance of sophisticated monitoring equipment often exceeds local technical and resource capacity, and permanently deployed monitoring equipment is susceptible to vandalism, theft, and other hazards. Rather than using expensive, vulnerable installations at a few points, SmartPhones4Water (S4W), a form of Citizen Hydrology, leverages widely available mobile technology to gather hydrologic data at many sites in a manner that is repeatable and scalable. However, there is currently a limited understanding of the impact of decreased observational frequency on the accuracy of key streamflow statistics like minimum flow, maximum flow, and runoff. As a first step towards evaluating the tradeoffs between traditional continuous monitoring approaches and emerging Citizen Hydrology methods, we randomly selected 50 active U.S. Geological Survey (USGS) streamflow gauges in California. We used historical 15 minute flow data from 01/01/2008 through 12/31/2014 to develop minimum flow, maximum flow, and runoff values (7 year total) for each gauge. In order to mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, along with their respective distributions, from 50 subsample iterations with four different subsampling intervals (i.e. daily, three day, weekly, and monthly). Based on our results we conclude that, depending on the types of questions being asked, and the watershed characteristics, Citizen Hydrology streamflow measurements can provide useful and accurate information. Depending on watershed characteristics, minimum flows were reasonably estimated with subsample intervals ranging from
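The subsampling experiment described above can be mimicked on a synthetic record: compute reference statistics from a dense 15-minute series, then bootstrap lower-frequency subsamples with replacement and recompute the same statistics. The synthetic flow series, iteration count, and subsample intervals below are assumptions standing in for the USGS gauge data.

```python
import numpy as np

rng = np.random.default_rng(42)

# One year of synthetic 15-min flows (m^3/s): seasonal cycle + noise, assumed.
n = 96 * 365
q = np.exp(0.6 * np.sin(np.arange(n) * 2 * np.pi / n)
           + 0.3 * rng.standard_normal(n))

def subsample_stats(q, step, n_iter=50):
    """Bootstrap randomized subsampling with replacement.

    Each iteration draws len(q)//step observations (mimicking a lower
    observation frequency) and returns (min, max, mean) per iteration.
    """
    out = []
    for _ in range(n_iter):
        idx = rng.integers(0, len(q), size=len(q) // step)
        s = q[idx]
        out.append((s.min(), s.max(), s.mean()))
    return np.array(out)

true_stats = (q.min(), q.max(), q.mean())     # reference from the full record
daily = subsample_stats(q, 96)                # ~daily Citizen Hydrology visits
monthly = subsample_stats(q, 96 * 30)         # ~monthly visits
```

Comparing the distributions in `daily` and `monthly` against `true_stats` shows the expected pattern: subsampled minima are biased high and maxima biased low, with the bias growing as the observation interval lengthens.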
International Nuclear Information System (INIS)
Fatmi, H.; Ababou, R.; Matray, J.M.; Joly, C.
2010-01-01
Document available in extended abstract form only. This paper presents methods of statistical analysis and interpretation of hydrogeological signals in clayey formations, e.g., pore water pressure and atmospheric pressure. The purpose of these analyses is to characterize the hydraulic behaviour of this type of formation in the case of a deep repository of Mid-Level/High-Level and Long-lived radioactive wastes, and to study the evolution of the geologic formation and its EDZ (Excavation Damaged Zone) during the excavation of galleries. We focus on galleries Ga98 and Ga03 in the sites of Mont Terri (Jura, Switzerland) and Tournemire (France, Aveyron), through data collected in the BPP-1 and PH2 boreholes, respectively. The Mont Terri site, crossing the Aalenian Opalinus clay-stone, is an underground laboratory managed by an international consortium, namely the Mont Terri project (Switzerland). The Tournemire site, crossing the Toarcian clay-stone, is an Underground Research facility managed by IRSN (France). We have analysed pore water and atmospheric pressure signals at these sites, sometimes in correlation with other data. The methods of analysis are based on the theory of stationary random signals (correlation functions, Fourier spectra, transfer functions, envelopes), and on multi-resolution wavelet analysis (adapted to nonstationary and evolutionary signals). These methods are also combined with filtering techniques, and they can be used for single signals as well as pairs of signals (cross-analyses). The objective of this work is to exploit pressure measurements in selected boreholes from the two compacted clay sites, in order to: - evaluate phenomena affecting the measurements (earth tides, barometric pressures...); - estimate hydraulic properties (specific storage...) of the clay-stones prior to excavation works and compare them with those estimated by pulse or slug tests on shorter time scales; - analyze the effects of drift excavation on pore pressures
Zamani, Pouya
2017-08-01
Traditional ratio measures of efficiency, including feed conversion ratio (FCR), gross milk efficiency (GME), gross energy efficiency (GEE) and net energy efficiency (NEE), may have statistical problems, including high correlations with milk yield. Residual energy intake (REI) or residual feed intake (RFI) is another criterion, proposed to overcome the problems attributed to the traditional ratio criteria, but it does not account for production or intake levels. For example, the same REI value could be considerable for low-producing and negligible for high-producing cows. The aim of this study was to propose a new measure of efficiency to overcome the problems attributed to the previous criteria. A total of 1478 monthly records of 268 lactating Holstein cows were used for this study. In addition to FCR, GME, GEE, NEE and REI, a new criterion called proportional residual energy intake (PREI) was calculated as the REI to net energy intake ratio and defined as the proportion of net energy intake lost as REI. The PREI had an average of -0·02 and a range of -0·36 to 0·27, meaning that the least efficient cow lost 0·27 of her net energy intake as REI, while the most efficient animal saved 0·36 of her net energy intake through lower REI. Traditional ratio criteria (FCR, GME, GEE and NEE) had high correlations with milk and fat-corrected milk yields (absolute values from 0·469 to 0·816), while the REI and PREI had low correlations (0·000 to 0·069) with milk production. The results showed that the traditional ratio criteria (FCR, GME, GEE and NEE) are highly influenced by production traits, while the REI and PREI are independent of production level. Moreover, the PREI adjusts the REI magnitude for intake level. It seems that the PREI could be considered a worthwhile measure of efficiency for future studies.
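The construction of REI and PREI reduces to a residual from a linear regression of net energy intake on energy sinks, divided by intake. A minimal sketch with simulated records follows; the regressors (milk energy, metabolic body weight), coefficients, units, and sample sizes are all assumptions, not the study's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                  # hypothetical monthly cow records

milk_energy = rng.normal(30.0, 5.0, n)   # net energy in milk (MJ/d, assumed)
bw_meta = rng.normal(130.0, 8.0, n)      # metabolic body weight (kg^0.75, assumed)
# Simulated net energy intake with random inefficiency around the sinks.
nei = 0.6 * milk_energy + 0.45 * bw_meta + rng.normal(0.0, 4.0, n)

# REI: residual of regressing intake on the energy sinks (standard RFI/REI idea).
X = np.column_stack([np.ones(n), milk_energy, bw_meta])
beta, *_ = np.linalg.lstsq(X, nei, rcond=None)
rei = nei - X @ beta

# PREI: the proposed criterion, REI expressed as a proportion of intake.
prei = rei / nei
```

Because the OLS residual is orthogonal to the regressors, REI (and hence PREI) is essentially uncorrelated with milk energy, which mirrors the low correlations the abstract reports for REI and PREI versus the high ones for the ratio criteria.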
Energy Technology Data Exchange (ETDEWEB)
Bassant, Marie-Helene
1971-01-15
The aim of this work was to study the statistical properties of the amplitude of the electroencephalographic signal. The experimental method is described (implantation of electrodes, acquisition and treatment of data). The program of the mathematical analysis is given (calculation of probability density functions, study of stationarity) and the validity of the tests discussed. The results concern ten rabbits. Strips of EEG of 40 s were sampled at very short intervals (500 μs). The probability density functions established for different brain structures (especially the dorsal hippocampus) and areas were compared during sleep, arousal and visual stimulation. Using a χ² test, it was found that the Gaussian distribution assumption was rejected in 96.7 per cent of the cases. For a given physiological state, there was no mathematical reason to reject the assumption of stationarity (in 96 per cent of the cases). (author)
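A χ² goodness-of-fit test against a fitted normal distribution, of the kind applied to the amplitude samples above, can be sketched as follows. The surrogate data (a bimodal mixture standing in for non-Gaussian EEG amplitudes), the bin layout, and the critical value are illustrative assumptions.

```python
import math
import random

random.seed(0)
# Surrogate amplitude samples: a deliberately non-Gaussian mixture (assumed).
xs = [random.gauss(-2, 1) if random.random() < 0.5 else random.gauss(2, 1)
      for _ in range(20000)]

def chi2_normality(xs, n_bins=20):
    """Pearson chi-squared statistic of the sample against a fitted normal."""
    n = len(xs)
    mu = sum(xs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    # Bins spanning mu +/- 4 sd; expected counts from the fitted normal CDF.
    edges = [mu + sd * (-4 + 8 * i / n_bins) for i in range(n_bins + 1)]
    cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))
    stat = 0.0
    for lo, hi in zip(edges, edges[1:]):
        expected = n * (cdf(hi) - cdf(lo))
        observed = sum(1 for x in xs if lo <= x < hi)
        if expected > 5:                 # usual rule of thumb for chi-squared
            stat += (observed - expected) ** 2 / expected
    return stat

stat = chi2_normality(xs)
# Critical value chi2_{0.99} for roughly bins-1-2 degrees of freedom (~33, assumed).
reject_gaussian = stat > 33.4
```

With strongly bimodal data the statistic exceeds the critical value by orders of magnitude, the same kind of decisive rejection of Gaussianity the abstract reports for the EEG amplitudes.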
Ben Ayed, Rayda; Ennouri, Karim; Ercişli, Sezai; Ben Hlima, Hajer; Hanana, Mohsen; Smaoui, Slim; Rebai, Ahmed; Moreau, Fabienne
2018-04-10
Virgin olive oil is appreciated for its particular aroma and taste and is recognized worldwide for its nutritional value and health benefits. Olive oil contains a vast range of healthy compounds, notably monounsaturated free fatty acids, especially oleic acid. The SAD.1 polymorphism, localized in the stearoyl-acyl carrier protein desaturase gene (SAD), was genotyped and shown to be associated with the oleic acid composition of olive oil samples. However, the effect of polymorphisms in fatty acid-related genes on the distribution of monounsaturated and saturated fatty acids in Tunisian olive oil varieties is not understood. Seventeen Tunisian olive-tree varieties were selected for fatty acid content analysis by gas chromatography. The association of SAD.1 genotypes with fatty acid composition was studied by statistical and Bayesian modeling analyses. Fatty acid content analysis showed, interestingly, that some Tunisian virgin olive oil varieties could be classified as functional foods and nutraceuticals owing to their particular richness in oleic acid. In fact, the TT-SAD.1 genotype was found to be associated with a higher proportion of monounsaturated fatty acids (MUFA), mainly oleic acid (C18:1) (r = -0.79). An SAD.1 association with the oleic acid composition of olive oil was thus identified among the studied varieties. This correlation fluctuated between the studied varieties, which might elucidate the variability in lipidic composition among them, reflecting genetic diversity through differences in gene expression and biochemical pathways. The SAD locus would represent an excellent marker for identifying interesting lipidic compositions amongst virgin olive oils.
Shin, Yong Beom; Kim, Seong-Jang; Kim, In-Ju; Kim, Yong-Ki; Kim, Dong-Soo; Park, Jae Heung; Yeom, Seok-Ran
2006-06-01
Statistical parametric mapping (SPM) was applied to brain perfusion single photon emission computed tomography (SPECT) images in patients with traumatic brain injury (TBI) to investigate regional cerebral abnormalities compared to age-matched normal controls. Thirteen patients with TBI who underwent brain perfusion SPECT were included in this study (10 males, three females, mean age 39.8 +/- 18.2 years, range 21-74). SPM2 software implemented in MATLAB 5.3 was used for spatial pre-processing and analysis and to determine the quantitative differences between TBI patients and age-matched normal controls. Three large voxel clusters of significantly decreased cerebral blood perfusion were found in patients with TBI. The largest cluster was an area including the medial frontal gyrus (voxel number 3642, peak Z-values = 4.31, 4.27, p = 0.000) in both hemispheres. The second largest cluster was an area including the cingulate gyrus and anterior cingulate gyrus of the left hemisphere (voxel number 381, peak Z-values = 3.67, 3.62, p = 0.000). Other clusters were the parahippocampal gyrus (voxel number 173, peak Z-value = 3.40, p = 0.000) and hippocampus (voxel number 173, peak Z-value = 3.23, p = 0.001) in the left hemisphere. The false discovery rate (FDR) was less than 0.04. From this study, group and individual analyses with SPM2 could clearly identify the perfusion abnormalities of brain SPECT in patients with TBI. Group analysis showed a hypoperfusion pattern in areas including the medial frontal gyrus of both hemispheres and the cingulate gyrus, anterior cingulate gyrus, parahippocampal gyrus and hippocampus of the left hemisphere compared to age-matched normal controls. However, these findings deserve further investigation in a larger number of patients to allow a better validation of objective SPM analysis in patients with TBI.
International Nuclear Information System (INIS)
Stanley, T.D.; Stinnett, R.W.
1981-01-01
The absence of direct measurements of magnetically insulated line voltage has necessitated reliance on inferred voltages based on theoretical calculations and current measurements. This paper presents some of the first direct measurements of magnetically insulated transmission line peak voltages. These measurements were made on the Sandia National Laboratories HydraMITE facility. The peak voltage is measured by observing the energy of negative ions produced at the line cathode and accelerated through the line voltage. The ion energy and the charge-to-mass ratio are measured using the Thomson parabola mass spectrometry technique, which uses parallel E and B fields to deflect the ions. The deflected ions are detected using a microchannel plate coupled to a phosphor screen and photographic film. The Thomson parabola results are compared to Faraday cup measurements and to voltages calculated from current measurements. In addition, the significance of the observed positive ions is discussed.
Models and error analyses of measuring instruments in accountability systems in safeguards control
International Nuclear Information System (INIS)
Dattatreya, E.S.
1977-05-01
Essentially three types of measuring instruments are used in plutonium accountability systems: (1) bubblers, for measuring the total volume of liquid in the holding tanks; (2) coulometers, titration apparatus and calorimeters, for measuring the concentration of plutonium; and (3) spectrometers, for measuring isotopic composition. These three classes of instruments are modeled and analyzed. Finally, the uncertainty in the estimate of the total plutonium in the holding tank is determined.
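Since the plutonium inventory of a tank is (volume × concentration), the combined uncertainty follows from first-order error propagation over the bubbler and coulometer/calorimeter errors. A minimal sketch, with purely illustrative numbers (not taken from the report):

```python
import math

def total_pu_uncertainty(volume, sigma_v, conc, sigma_c):
    """Delta-method propagation for M = V * c with independent instrument errors:
    (sigma_M / M)^2 = (sigma_V / V)^2 + (sigma_c / c)^2."""
    mass = volume * conc
    rel = math.sqrt((sigma_v / volume) ** 2 + (sigma_c / conc) ** 2)
    return mass, mass * rel

# Hypothetical tank: 1000 L +/- 5 L of solution at 2.0 +/- 0.04 g/L plutonium
mass, sigma_m = total_pu_uncertainty(1000.0, 5.0, 2.0, 0.04)   # ~2000 g +/- ~41 g
```

The concentration term dominates here (2% relative vs. 0.5% for volume), which is why the concentration instruments get the closest modeling attention.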
Kromhout, D.
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample.
International Nuclear Information System (INIS)
Lee, Kai-Yan; Fung, Chi-Hang Fred; Chau, H F
2013-01-01
We investigate the necessary and sufficient condition for a convex cone of positive semidefinite operators to be fixed by a unital quantum operation ϕ acting on finite-dimensional quantum states. By reducing this problem to the problem of simultaneous diagonalization of the Kraus operators associated with ϕ, we can completely characterize the kinds of quantum states that are fixed by ϕ. Our work has several applications. It gives a simple proof of the structural characterization of a unital quantum operation that acts on finite-dimensional quantum states—a result not explicitly mentioned in earlier studies. It also provides a necessary and sufficient condition for determining what kind of measurement statistics is preserved by a unital quantum operation. Finally, our result clarifies and extends the work of Størmer by giving a proof of a reduction theorem on the unassisted and entanglement-assisted classical capacities, coherent information, and minimal output Renyi entropy of a unital channel acting on a finite-dimensional quantum state. (paper)
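The fixed-point structure can be checked numerically on a small example. The sketch below is an illustration, not the paper's construction: it uses a unital qubit channel with commuting, hence simultaneously diagonalizable, Kraus operators, so every state diagonal in the shared eigenbasis is fixed while coherences are damped.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
p = 0.3
kraus = [np.sqrt(p) * I2, np.sqrt(1 - p) * Z]   # phase-flip channel (unital)

def apply_channel(kraus_ops, rho):
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

assert np.allclose(apply_channel(kraus, I2), I2)   # unital: identity is fixed

rho_diag = np.diag([0.7, 0.3])                     # diagonal in the shared eigenbasis
rho_coh = np.array([[0.5, 0.5], [0.5, 0.5]])       # maximal coherence
diag_is_fixed = np.allclose(apply_channel(kraus, rho_diag), rho_diag)
coh_out = apply_channel(kraus, rho_coh)            # off-diagonals shrink by factor 2p-1
```

Measurement statistics in the Z basis are therefore preserved by this channel, while X-basis statistics are not, matching the abstract's characterization of which statistics a unital operation preserves.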
Plasma convection in the magnetotail lobes: statistical results from Cluster EDI measurements
Directory of Open Access Journals (Sweden)
S. Haaland
2008-08-01
A major part of the plasma in the Earth's magnetotail is populated through transport of plasma from the solar wind via the magnetotail lobes. In this paper, we present a statistical study of plasma convection in the lobes for different directions of the interplanetary magnetic field and for different geomagnetic disturbance levels. The data set used in this study consists of roughly 340 000 one-minute vector measurements of the plasma convection from the Cluster Electron Drift Instrument (EDI), obtained during the period February 2001 to June 2007. The results show that both convection magnitude and direction are largely controlled by the interplanetary magnetic field (IMF). For a southward IMF, there is a strong convection towards the central plasma sheet, with convection velocities around 10 km s^{−1}. During periods of northward IMF, the lobe convection is almost stagnant. A B_{y}-dominated IMF causes a rotation of the convection patterns in the tail, with oppositely directed dawn-dusk components of the convection in the northern and southern lobes. Our results also show that there is an overall persistent duskward component, which is most likely a result of conductivity gradients at the footpoints of the magnetic field lines in the ionosphere.
Energy Technology Data Exchange (ETDEWEB)
Mekkaoui, Abdessamad [IEK-4 Forschungszentrum Juelich 52428 (Germany)
2013-07-01
A method to derive stochastic differential equations for intermittent plasma density dynamics in magnetic fusion edge plasmas is presented. It uses the measured first four moments (mean, variance, skewness and kurtosis) and the correlation time of turbulence to write a Pearson equation for the probability distribution function of the fluctuations. The Fokker-Planck equation is then used to derive a Langevin equation for the plasma density fluctuations. Theoretical expectations are used as constraints to fix the nonlinearity structure of the stochastic differential equation. In particular, when quadratically nonlinear dynamics is assumed, it is shown that the plasma density is driven by a multiplicative Wiener process and evolves on the turbulence correlation time scale, while the linear growth is quadratically damped by the fluctuation level. Strong criteria for statistical discrimination of experimental time series are proposed as an alternative to the kurtosis-skewness scaling. This scaling is broadly used in the contemporary literature to characterize edge turbulence, but it is inappropriate because a large family of distributions can share the same scaling. Strong criteria allow us to focus on the relevant candidate distribution and approach a nonlinear structure of the edge turbulence model.
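The quadratically damped, multiplicatively driven dynamics described above can be illustrated with an Euler-Maruyama integration. Parameter values and the positivity floor are arbitrary assumptions for the sketch, not fitted to any experiment:

```python
import numpy as np

def simulate_density(n0=1.0, gamma=1.0, beta=1.0, sigma=0.5,
                     dt=1e-3, steps=100_000, seed=1):
    """Euler-Maruyama integration of dn = (gamma*n - beta*n**2) dt + sigma*n dW."""
    rng = np.random.default_rng(seed)
    n = np.empty(steps)
    n[0] = n0
    for i in range(1, steps):
        drift = gamma * n[i - 1] - beta * n[i - 1] ** 2   # linear growth, quadratic damping
        dW = rng.normal(0.0, np.sqrt(dt))                 # Wiener increment
        n[i] = max(n[i - 1] + drift * dt + sigma * n[i - 1] * dW, 1e-12)
    return n

n = simulate_density()
burn = n[20_000:]                                          # discard the transient
skew = ((burn - burn.mean()) ** 3).mean() / burn.std() ** 3   # positively skewed, Gamma-like
```

The stationary density of this SDE is Gamma-like and positively skewed, the kind of intermittent statistics the Pearson-equation fit is meant to capture.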
PAFit: A Statistical Method for Measuring Preferential Attachment in Temporal Complex Networks.
Directory of Open Access Journals (Sweden)
Thong Pham
Preferential attachment is a stochastic process that has been proposed to explain certain topological features characteristic of complex networks from diverse domains. The systematic investigation of preferential attachment is an important area of research in network science, not only for the theoretical matter of verifying whether this hypothesized process is operative in real-world networks, but also for the practical insights that follow from knowledge of its functional form. Here we describe a maximum-likelihood-based estimation method for the measurement of preferential attachment in temporal complex networks. We call the method PAFit, and implement it in an R package of the same name. PAFit constitutes an advance over previous methods primarily because we based it on a nonparametric statistical framework that enables attachment kernel estimation free of any assumptions about its functional form. We show that this results in PAFit outperforming the popular methods of Jeong and Newman in Monte Carlo simulations. What is more, we found that the application of PAFit to a publicly available Flickr social network dataset yielded clear evidence for a deviation of the attachment kernel from the popularly assumed log-linear form. Independent of our main work, we provide a correction to a consequential error in Newman's original method, which had evidently gone unnoticed since its publication over a decade ago.
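The nonparametric idea behind PAFit, estimating the attachment kernel A_k from attachment events and degree exposures without assuming A_k = k^α, can be sketched in a few lines. This toy version is not the PAFit algorithm itself (which is maximum-likelihood-based); it grows a network under a linear kernel and recovers it empirically:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(42)

degrees = [1, 1]                   # two nodes joined by one edge
attach_count = defaultdict(int)    # edges received by nodes while at degree k
exposure = defaultdict(float)      # node-steps spent at degree k
for _ in range(5000):
    for k in degrees:
        exposure[k] += 1.0
    probs = np.array(degrees, dtype=float)
    target = rng.choice(len(degrees), p=probs / probs.sum())   # linear kernel A_k = k
    attach_count[degrees[target]] += 1
    degrees[target] += 1
    degrees.append(1)              # newcomer arrives with degree 1

# Nonparametric kernel estimate (up to an overall constant): events per unit exposure
A_hat = {k: attach_count[k] / exposure[k]
         for k in sorted(attach_count) if exposure[k] > 50}
ratio = A_hat[4] / A_hat[2]        # ~2 for a linear kernel
```

Because A_hat is estimated per degree class, a deviation from log-linearity (as found for the Flickr data) would show up directly as curvature in A_hat versus k.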
Basu, Aritra; Mao, S. A.; Fletcher, Andrew; Kanekar, Nissim; Shukurov, Anvar; Schnitzeler, Dominic; Vacca, Valentina; Junklewitz, Henrik
2018-03-01
Deriving the Faraday rotation measure (RM) of quasar absorption line systems, which are tracers of high-redshift galaxies intervening background quasars, is a powerful tool for probing magnetic fields in distant galaxies. Statistically comparing the RM distributions of two quasar samples, with and without absorption line systems, allows one to infer magnetic field properties of the intervening galaxy population. Here, we have derived the analytical form of the probability distribution function (PDF) of RM produced by a single galaxy with an axisymmetric large-scale magnetic field. We then further determine the PDF of RM for one random sight line traversing each galaxy in a population with a large-scale magnetic field prescription. We find that the resulting PDF of RM is dominated by a Lorentzian with a width that is directly related to the mean axisymmetric large-scale field strength ⟨B0⟩ of the galaxy population if the dispersion of B0 within the population is smaller than ⟨B0⟩. Provided that RMs produced by the intervening galaxies have been successfully isolated from other RM contributions along the line of sight, our simple model suggests that ⟨B0⟩ in galaxies probed by quasar absorption line systems can be measured within ≈50 per cent accuracy without additional constraints on the magneto-ionic medium properties of the galaxies. Finally, we discuss quasar sample selection criteria that are crucial to reliably interpret observations, and argue that within the limitations of the current database of absorption line systems, high-metallicity damped Lyman-α absorbers are best suited to study galactic dynamo action in distant disc galaxies.
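A Lorentzian RM distribution has undefined mean and variance, so its width must be read off robustly. A small sketch with illustrative numbers only (w here just stands in for the ⟨B0⟩-dependent width of the text):

```python
import numpy as np

rng = np.random.default_rng(7)
w_true = 25.0                                # Lorentzian half-width at half-maximum (rad m^-2)
rm = w_true * rng.standard_cauchy(10_000)    # RMs of 10 000 sight lines

# Sample mean/std diverge for a Cauchy; quartiles are robust: IQR = 2 * half-width
q1, q3 = np.percentile(rm, [25, 75])
w_hat = (q3 - q1) / 2.0                      # recovers w_true to a few per cent
```

This is why quantile-based (rather than moment-based) statistics are the natural tool when comparing the RM distributions of the two quasar samples.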
Clinician-patient communication measures: drilling down into assumptions, approaches, and analyses.
Street, Richard L; Mazor, Kathleen M
2017-08-01
To critically examine properties of clinician-patient communication measures and offer suggestions for selecting measures appropriate to the purposes of research or clinical practice assessment. We analyzed different types of communication measures by focusing on their ontological properties. We describe their relative advantages and disadvantages with respect to different types of research questions. Communication measures vary along dimensions of reporter (observer vs. participant), focus of measurement (behavior, meaning, or quality), target, and timing. Observer-coded measures of communication behavior function well as dependent variables (e.g., evaluating communication skill interventions, examining variability related to gender or race), but are less effective as predictors of perceptions and health outcomes. Measures of participants' judgments (e.g., what the communication means or how well it was done) capture patients' or clinicians' experiences (e.g., satisfaction) and can be useful for predicting outcomes, especially in longitudinal designs. In the absence of a theoretically coherent set of measures that could be used across research programs and applied settings, users should take steps to select measures with properties that are optimally matched to specific questions. Quality assessments of clinician-patient communication should take into account the timing of the assessment and use measures that drill down into specific aspects of patient experience to mitigate ceiling effects. Copyright © 2017 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Tendal, Britta; Higgins, Julian P T; Jüni, Peter
2009-01-01
difference (SMD), the protocols for the reviews and the trial reports (n=45) were retrieved. DATA EXTRACTION: Five experienced methodologists and five PhD students independently extracted data from the trial reports for calculation of the first SMD result in each review. The observers did not have access to the reviews but to the protocols, where the relevant outcome was highlighted. The agreement was analysed at both trial and meta-analysis level, pairing the observers in all possible ways (45 pairs, yielding 2025 pairs of trials and 450 pairs of meta-analyses). Agreement was defined as SMDs that differed less than 0.1 in their point estimates or confidence intervals. RESULTS: The agreement was 53% at trial level and 31% at meta-analysis level. Including all pairs, the median disagreement was SMD=0.22 (interquartile range 0.07-0.61). The experts agreed somewhat more than the PhD students at trial level (61…
Czech Academy of Sciences Publication Activity Database
Matějů, Petr; Vitásková, Anna
2006-01-01
Vol. 42, No. 3 (2006), pp. 493-516. ISSN 0038-0288. R&D Projects: GA MPS(CZ) 1J/005/04-DP2; GA ČR(CZ) GA403/03/0340. Institutional research plan: CEZ:AV0Z70280505. Keywords: social capital * trust * comparative analysis. Subject RIV: AO - Sociology, Demography. Impact factor: 0.128, year: 2006
Directory of Open Access Journals (Sweden)
Maxim I. Galchenko
2014-01-01
In this article we consider a case study in the analysis of educational statistics data, namely the results of a survey of professional development course students, carried out using specialized software. The need for extended statistical processing of the results is shown, together with the scheme for carrying out the analysis. Conclusions on the studied case are presented.
Measuring the Success of an Academic Development Programme: A Statistical Analysis
Smith, L. C.
2009-01-01
This study uses statistical analysis to estimate the impact of first-year academic development courses in microeconomics, statistics, accountancy, and information systems, offered by the University of Cape Town's Commerce Academic Development Programme, on students' graduation performance relative to that achieved by mainstream students. The data…
DEFF Research Database (Denmark)
Tybjærg-Hansen, Anne
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies…
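The univariate core of regression calibration can be demonstrated with simulated data and two repeat measurements. The numbers and the simple slope estimator below are illustrative assumptions, not the paper's multivariate meta-analysis machinery:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta_true, sigma_u = 5000, 1.0, 0.8

x = rng.normal(size=n)                     # true risk factor
w1 = x + sigma_u * rng.normal(size=n)      # two error-prone repeat measurements
w2 = x + sigma_u * rng.normal(size=n)
y = beta_true * x + 0.5 * rng.normal(size=n)

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

beta_naive = slope(w1, y)                  # attenuated towards zero

# Regression calibration: the repeats yield the error variance, hence the
# attenuation factor lambda = var(X) / var(W), and the corrected slope.
var_u = np.var(w1 - w2, ddof=1) / 2.0
lam = (np.var(w1, ddof=1) - var_u) / np.var(w1, ddof=1)
beta_rc = beta_naive / lam                 # close to the true slope
```

The meta-analysis extension of the paper essentially pools such lambda estimates across studies (study-specific, averaged, or empirical Bayes) rather than computing one per dataset.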
International Nuclear Information System (INIS)
El-Arabi, A.M.Abd El-Gabar M.; Khalifa, Ibrahim H.
2002-01-01
Factor and cluster analyses, as well as the Pearson correlation coefficient, have been applied to geochemical data obtained from phosphorite and phosphatic rocks of the Duwi Formation exposed at the Red Sea coast, Nile Valley and Western Desert. Sixty-six of the 71 collected samples were analysed for SiO₂, TiO₂, Al₂O₃, Fe₂O₃, CaO, MgO, Na₂O, K₂O, P₂O₅, Sr, U and Pb by XRF, and their mineral constituents were determined by XRD techniques. In addition, the natural radioactivity of the phosphatic samples due to their uranium, thorium and potassium contents was measured by gamma-spectrometry. The uranium content in the phosphate rocks with P₂O₅ > 15% (average of 106.6 ppm) is higher than in rocks with lower P₂O₅ content, and it increases with increasing P₂O₅ and CaO, whereas it is not related to changes in SiO₂, TiO₂, Al₂O₃, Fe₂O₃, MgO, Na₂O and K₂O concentrations. Factor analysis and the Pearson correlation coefficient revealed that uranium behaves geochemically in different ways in the phosphatic sediments and phosphorites of the Red Sea, Nile Valley and Western Desert. In the Red Sea and Western Desert phosphorites, uranium occurs mainly in the oxidized U⁶⁺ state, where it seems to be fixed by the phosphate ion, forming secondary uranium phosphate minerals such as phosphuranylite. In the Nile Valley phosphorites, ionic substitution of Ca²⁺ by U⁴⁺ is the main factor controlling the concentration of uranium in the phosphate rocks. Moreover, fixation of U⁶⁺ by the phosphate ion and adsorption of uranium on phosphate minerals play subordinate roles.
Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis
2016-07-01
A toolkit has been developed for calculating the 3-dimensional biologically effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. The toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a (version 7.10) was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit has the capability of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculation of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g. coronal, sagittal and transverse). The different elements of the toolkit are presented, and the important steps of its calculations are illustrated. The toolkit is applied to brain, head & neck and prostate cancer patients who received primary and boost phases, in order to demonstrate its capability in calculating BED distributions, as well as in measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
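For the standard linear-quadratic model, the per-phase BED for N equal fractions is BED = D(1 + d/(α/β)) with d = D/N, and multi-phase BEDs add voxel-wise. A minimal sketch (the α/β value, doses and fraction numbers are illustrative; the toolkit's exact "true" and "approximate" formulas are not reproduced here):

```python
import numpy as np

def bed(total_dose, n_fractions, alpha_beta):
    """Biologically effective dose for n_fractions equal fractions of d = D/n Gy."""
    d = total_dose / n_fractions
    return total_dose * (1.0 + d / alpha_beta)

# Two-phase treatment: per-voxel physical doses (Gy) for primary and boost phases
primary = np.array([50.0, 48.0, 30.0])     # delivered in 25 fractions
boost = np.array([20.0, 18.0, 2.0])        # delivered in 10 fractions
total_bed = bed(primary, 25, alpha_beta=10.0) + bed(boost, 10, alpha_beta=10.0)
```

Summing per phase, each with its own fraction size, is what distinguishes a multi-phase BED calculation from naively applying the formula to the summed physical dose.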
An exploration of the use of simple statistics to measure consensus and stability in Delphi studies
Directory of Open Access Journals (Sweden)
Dixon John
2007-11-01
Background: The criteria for stopping Delphi studies are often subjective. This study aimed to examine whether consensus and stability in the Delphi process can be ascertained by descriptive evaluation of trends in participants' views. Methods: A three-round email-based Delphi required participants (n = 12) to verify their level of agreement with 8 statements, write comments on each if they considered it necessary, and rank the statements for importance. Each statement was analysed quantitatively by the percentage of agreement ratings, the importance rankings and the number of comments made, and qualitatively using thematic analysis. Importance rankings between rounds were compared by calculating kappa values to observe trends in how the process impacts on subjects' views. Results: Evolution of consensus was shown by an increase in agreement percentages, convergence of the range and standard deviations of importance ratings, and a decrease in the number of comments made. Stability was demonstrated by a trend of increasing kappa values. Conclusion: Following the original use of Delphi in the social sciences, Delphi is suggested to be an effective way to gain and measure group consensus in healthcare. However, the proposed analytical process should be followed to ensure maximum validity of results in Delphi methodology for improved evidence of consensual decision-making.
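Stability between rounds, as measured by kappa, can be computed directly from paired rankings. A sketch with made-up rankings, using plain Cohen's kappa on categorical importance ranks (the study's exact weighting scheme is not specified here):

```python
import numpy as np

def cohens_kappa(r1, r2, n_cat):
    """Cohen's kappa between two rounds of categorical ratings of the same items."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    p_o = np.trace(obs)                          # observed agreement
    p_e = obs.sum(axis=1) @ obs.sum(axis=0)      # chance agreement from the marginals
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical importance ranks (categories 0-3) of 8 statements in two rounds
round2 = [0, 1, 2, 3, 3, 2, 1, 0]
round3 = [0, 1, 2, 3, 3, 2, 0, 0]                # one statement re-ranked
kappa = cohens_kappa(round2, round3, 4)          # high kappa -> stable views
```

A rising sequence of such kappa values across successive round pairs is the stability signal the study proposes as a stopping criterion.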
Directory of Open Access Journals (Sweden)
Tomas TOMKO
2016-06-01
The evaluation of measured data in vibration diagnosis is a problematic process for timeline constructors. The complexity of such an evaluation is compounded by the fact that it involves a large amount of disparate measurement data. One of the most effective analytical approaches when dealing with large amounts of data is to use multidimensional statistical methods, which can provide a picture of the current state of the machinery. The more such methods are used, the more precise the statistical analysis of the measurement data, making it possible to obtain a better picture of the current condition of the machinery.
Petocz, Peter; Sowey, Eric
2012-01-01
The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…
Lyons, L.
2016-01-01
Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.
Freedman, Laurence S; Midthune, Douglas; Carroll, Raymond J; Commins, John M; Arab, Lenore; Baer, David J; Moler, James E; Moshfegh, Alanna J; Neuhouser, Marian L; Prentice, Ross L; Rhodes, Donna; Spiegelman, Donna; Subar, Amy F; Tinker, Lesley F; Willett, Walter; Kipnis, Victor
2015-11-01
Most statistical methods that adjust analyses for dietary measurement error treat an individual's usual intake as a fixed quantity. However, usual intake, if defined as average intake over a few months, varies over time. We describe a model that accounts for such variation and for the proximity of biomarker measurements to self-reports within the framework of a meta-analysis, and apply it to the analysis of data on energy, protein, potassium, and sodium from a set of five large validation studies of dietary self-report instruments using recovery biomarkers as reference instruments. We show that this time-varying usual intake model fits the data better than the fixed usual intake assumption. Using this model, we estimated attenuation factors and correlations with true longer-term usual intake for single and multiple 24-hour dietary recalls (24HRs) and food frequency questionnaires (FFQs) and compared them with those obtained under the "fixed" method. Compared with the fixed method, the estimates using the time-varying model showed slightly larger values of the attenuation factor and correlation coefficient for FFQs and smaller values for 24HRs. In some cases, the difference between the fixed method estimate and the new estimate for multiple 24HRs was substantial. With the new method, while four 24HRs had higher estimated correlations with truth than a single FFQ for absolute intakes of protein, potassium, and sodium, for densities the correlations were approximately equal. Accounting for the time element in dietary validation is potentially important, and points toward the need for longer-term validation studies.
International Nuclear Information System (INIS)
Sekimoto, H.
1987-01-01
The kerma heat production density, tritium production density, and dose in a lithium-fluoride pile with a deuterium-tritium neutron source were calculated with a data processing code, UFO, from the pulse height distribution of a miniature NE213 neutron spectrometer, and compared with the values calculated with a Monte Carlo code, MORSE-CV. The UFO and MORSE-CV values agreed within the statistical error (less than 6%) of the MORSE-CV calculations, except at the outermost point in the pile. The MORSE-CV values were slightly smaller than the UFO values in almost all cases, and this tendency increased with increasing distance from the neutron source.
Rudolff, Andrea S; Moens, Yves P S; Driessen, Bernd; Ambrisko, Tamas D
2014-07-01
To assess agreement between infrared (IR) analysers and a refractometer for measurements of isoflurane, sevoflurane and desflurane concentrations, and to demonstrate the effect of customized calibration of IR analysers. In vitro experiment. Six IR anaesthetic monitors (Datex-Ohmeda) and a single portable refractometer (Riken). Both devices were calibrated following the manufacturer's recommendations. Gas samples were collected at the common gas outlets of anaesthesia machines. A range of agent concentrations was produced by stepwise changes in dial settings: isoflurane (0-5% in 0.5% increments), sevoflurane (0-8% in 1% increments), or desflurane (0-18% in 2% increments). Oxygen flow was 2 L minute⁻¹. The orders of testing IR analysers, agents and dial settings were randomized. Duplicate measurements were performed at each setting. The entire procedure was repeated 24 hours later. Bland-Altman analysis was performed. Measurements on day 1 were used to yield calibration equations (IR measurements as dependent and refractometry measurements as independent variables), which were used to modify the IR measurements on day 2. Bias ± limits of agreement for isoflurane, sevoflurane and desflurane were 0.2 ± 0.3, 0.1 ± 0.4 and 0.7 ± 0.9 volume%, respectively. There were significant linear relationships between differences and means for all agents. The IR analysers became less accurate at higher gas concentrations. After customized calibration, the bias became almost zero and the limits of agreement became narrower. If similar IR analysers are used in research studies, they need to be calibrated against a reference method using the agent in question at multiple calibration points overlapping the range of interest. © 2013 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesia and Analgesia.
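The Bland-Altman quantities reported above (bias and 95% limits of agreement, i.e. bias ± 1.96 SD of the differences) are straightforward to compute from paired readings. The numbers below are invented for illustration, not the study's data; note the difference growing with concentration, mirroring the proportional bias the authors found:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(method_a) - np.asarray(method_b)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Paired isoflurane readings (volume %): IR analyser vs refractometer (hypothetical)
ir = np.array([0.6, 1.1, 1.7, 2.2, 2.8, 3.3, 3.9, 4.4, 5.1])
refr = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
bias, lo, hi = bland_altman(ir, refr)
```

A regression of differences on means would then test for the significant proportional relationship the study reports.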
Progress in the methods for analyses and measurements of environmental radionuclides
International Nuclear Information System (INIS)
1984-01-01
The tenth seminar on the environment of the National Institute of Radiological Sciences was held in Chiba on December 9 and 10, 1982, under the joint auspices of the Japan Health Physics Society. Recent progress in measuring techniques for environmental radioactive substances has been remarkable. The Japanese data on environmental radiation presented to the UN Scientific Committee on the Effects of Atomic Radiation have been held in very high esteem because of their reliability, which is due to this progress in measuring techniques. However, the field advances steadily and changes rapidly; hence this seminar was planned. This report describes the history of the analysis and measurement of environmental radioactivity; the methods of sampling and pretreatment for environmental specimens such as gaseous radionuclides, atmospheric dust, soil, agricultural products, sea water and sea bottom sediment, marine life, foods and living bodies; progress in chemical separation processes; the automation of analysis and measurement; progress in the analysis of low-level, long half-life nuclides; the manual for analysis and measurement; and the quality of the analysis and measurement and its assurance. (Kako, I.)
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
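Of the correction methods listed, regression calibration is the simplest to illustrate: in a validation subset where both the error-prone modeled exposure z and a gold-standard measurement x are available, E[x|z] is estimated and substituted for z in the main analysis. A minimal sketch with hypothetical variable names, not taken from any of the reviewed studies:

```python
import numpy as np

def regression_calibration(z_main, z_val, x_val):
    """Replace the error-prone exposure z in the main study with the
    fitted E[x | z] estimated from a validation subset."""
    slope, intercept = np.polyfit(z_val, x_val, 1)  # linear calibration model
    return intercept + slope * np.asarray(z_main, dtype=float)

# Invented validation data: true exposure is exactly twice the surrogate
calibrated = regression_calibration([1.0, 2.0], [1, 2, 3, 4], [2, 4, 6, 8])
```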
Dai, Qi; Yang, Yanchun; Wang, Tianming
2008-10-15
Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all the ideas for sequence comparison try to use the information on the k-word distributions, a Markov model, or both. Motivated by adding k-word distributions to a Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our achievements with those based on alignment or alignment-free methods. We grouped our experiments into two sets. The first one, performed via ROC (receiver operating characteristic) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences in a database and discriminate functionally related regulatory sequences from unrelated sequences. The second one aims at assessing how well our statistical measures are suited for phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, intended to incorporate k-word distributions into a Markov model, are more efficient.
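The k-word idea underlying such measures can be illustrated with a plain k-mer frequency comparison; the cosine similarity below is a generic stand-in, not the wre.k.r or S2.k.r statistic itself:

```python
from collections import Counter
from math import sqrt

def kword_freqs(seq, k):
    """Relative frequencies of overlapping k-words in a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_similarity(f, g):
    """Cosine similarity between two sparse k-word frequency vectors."""
    dot = sum(v * g.get(w, 0.0) for w, v in f.items())
    norm = sqrt(sum(v * v for v in f.values())) * sqrt(sum(v * v for v in g.values()))
    return dot / norm

# Two invented short DNA sequences compared on their 2-word distributions
sim = cosine_similarity(kword_freqs("ACGTACGTAC", 2), kword_freqs("ACGTACGAAC", 2))
```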
Neutron activation analyses and half-life measurements at the usgs triga reactor
Larson, Robert E.
Neutron activation of materials followed by gamma spectroscopy using high-purity germanium detectors is an effective method for measuring nuclear beta decay half-lives and for detecting trace amounts of elements present in materials. This research explores applications of neutron activation analysis (NAA) in two parts. Part 1. High Precision Methods for Measuring Decay Half-Lives, Chapters 1 through 8. Part one develops research methods and data analysis techniques for making high precision measurements of nuclear beta decay half-lives. The change in the electron capture half-life of 51Cr in pure chromium versus chromium mixed in a gold lattice structure is explored, and the 97Ru electron capture decay half-life is compared for ruthenium in a pure crystal versus ruthenium in a rutile oxide state, RuO2. In addition, the beta-minus decay half-life of 71mZn is measured and compared with new high precision findings. Density Functional Theory is used to explain the measured magnitude of changes in electron capture half-life arising from changes in the surrounding lattice electron configuration. Part 2. Debris Collection Nuclear Diagnostic at the National Ignition Facility, Chapters 9 through 11. Part two explores the design and development of a solid debris collector for use as a diagnostic tool at the National Ignition Facility (NIF). NAA measurements are performed on NIF post-shot debris collected on witness plates in the NIF chamber. In this application NAA is used to detect and quantify trace amounts of gold from the hohlraum and germanium from the pellet in the debris collected after a NIF shot. The design of a solid debris collector based on material x-ray ablation properties is given, and calculations are done to predict performance and results for the collection and measurement of trace amounts of gold and germanium from dissociated hohlraum debris.
Spatial analyses of cost efficient measures to reduce N-leaching
DEFF Research Database (Denmark)
Jacobsen, Brian H.; Abildtrup, Jens; Ørum, Jens Erik
The Nitrate Directive has only been implemented satisfactorily in a few EU countries. The Commission has accepted the Danish implementation of the directive based on the Plan for the Aquatic Environment II. The costs of this plan have been calculated at 70 million €, or 2.0 € per kg N in reduced leaching. The farmers have paid 60% of the costs. The paper then describes an example of a regional analysis covering the River Basin of Ringkøbing Fjord in Denmark, which indicates the type of calculations needed to find the measures and costs required to comply with parts of the Water Framework Directive (WFD). The analysis shows that the geographical position of the measures is very important in order to achieve the expected nutrient reduction. The current income varies a lot in the river basin, and this might influence the choice of cost-effective measures to reduce nutrient load. Furthermore a close...
Energy Technology Data Exchange (ETDEWEB)
Valat, J; Stern, T E
1964-07-01
The rapid measurement of anti-reactivities, in particular very low ones (i.e. a few tens of β), appears to be an interesting approach to the automatic start-up of a reactor and its optimisation. With this in view, the present report explores the various methods studied, essentially from the point of view of the time required to make the measurement with a given statistical accuracy, especially as far as very low reactivities are concerned. The statistical analysis is applied in turn to: the natural background-noise methods (auto-correlation and spectral density); the sinusoidal excitation methods for the reactivity or the source, with synchronous detection; and the periodic source-excitation method using pulsed neutrons. Finally, the statistical analysis leads to the suggestion of a new method of source excitation using random neutronic square waves, combined with an inter-correlation between the random excitation and the resulting output. (authors)
Biosensor-based analyser. Measurement of glucose, sucrose, lactose, L-lactate and alcohol
Energy Technology Data Exchange (ETDEWEB)
Williams, F.T. Jr. (YSI, Inc., Yellow Springs, OH (United States))
1992-05-01
This paper describes an instrument, the YSI 2700, for the measurement of glucose, sucrose, lactose, L-lactate, and alcohol by means of biosensors. Each biosensor consists of an amperometric, hydrogen peroxide sensitive electrode combined with an immobilized oxidase enzyme trapped between two membranes. Each biosensor differs from the others only in its enzyme layer. The instrument can be used to measure these analytes in complex sample matrices: often directly (e.g. in whole blood and fermentations), after dilution with water (e.g. in molasses and corn syrup), or after extraction into water (e.g. in cheese and cereal products). (orig.)
Riley, Richard D.
2017-01-01
An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
An on-ice measurement approach to analyse the biomechanics of ice hockey skating.
Directory of Open Access Journals (Sweden)
Erica Buckeridge
Skating is a fundamental movement in ice hockey; however little research has been conducted within the field of hockey skating biomechanics due to the difficulties of on-ice data collection. In this study a novel on-ice measurement approach was tested for reliability, and subsequently implemented to investigate the forward skating technique, as well as technique differences across skill levels. Nine high caliber (High) and nine low caliber (Low) hockey players performed 30 m forward skating trials. A 3D accelerometer was mounted to the right skate for the purpose of stride detection, with the 2nd and 6th strides defined as acceleration and steady-state, respectively. The activity of five lower extremity muscles was recorded using surface electromyography. Biaxial electro-goniometers were used to quantify hip and knee angles, and in-skate plantar force was measured using instrumented insoles. Reliability was assessed with the coefficient of multiple correlation, which demonstrated moderate (r>0.65) to excellent (r>0.95) scores across selected measured variables. Greater plantar-flexor muscle activity and hip extension were evident during acceleration strides, while steady state strides exhibited greater knee extensor activity and hip abduction range of motion (p<0.05). High caliber exhibited greater hip range of motion and forefoot force application (p<0.05). The successful implementation of this on-ice mobile measurement approach offers potential for athlete monitoring, biofeedback and training advice.
An on-ice measurement approach to analyse the biomechanics of ice hockey skating.
Buckeridge, Erica; LeVangie, Marc C; Stetter, Bernd; Nigg, Sandro R; Nigg, Benno M
2015-01-01
Skating is a fundamental movement in ice hockey; however little research has been conducted within the field of hockey skating biomechanics due to the difficulties of on-ice data collection. In this study a novel on-ice measurement approach was tested for reliability, and subsequently implemented to investigate the forward skating technique, as well as technique differences across skill levels. Nine high caliber (High) and nine low caliber (Low) hockey players performed 30 m forward skating trials. A 3D accelerometer was mounted to the right skate for the purpose of stride detection, with the 2nd and 6th strides defined as acceleration and steady-state, respectively. The activity of five lower extremity muscles was recorded using surface electromyography. Biaxial electro-goniometers were used to quantify hip and knee angles, and in-skate plantar force was measured using instrumented insoles. Reliability was assessed with the coefficient of multiple correlation, which demonstrated moderate (r>0.65) to excellent (r>0.95) scores across selected measured variables. Greater plantar-flexor muscle activity and hip extension were evident during acceleration strides, while steady state strides exhibited greater knee extensor activity and hip abduction range of motion (p<0.05). High caliber exhibited greater hip range of motion and forefoot force application (p<0.05). The successful implementation of this on-ice mobile measurement approach offers potential for athlete monitoring, biofeedback and training advice.
Using Floating Car Data to Analyse the Effects of ITS Measures and Eco-Driving
Directory of Open Access Journals (Sweden)
Alvaro Garcia-Castro
2014-11-01
The road transportation sector is responsible for around 25% of total man-made CO2 emissions worldwide. Considerable efforts are therefore underway to reduce these emissions using several approaches, including improved vehicle technologies, traffic management and changing driving behaviour. Detailed traffic and emissions models are used extensively to assess the potential effects of these measures. However, if the input and calibration data are not sufficiently detailed there is an inherent risk that the results may be inaccurate. This article presents the use of Floating Car Data to derive useful speed and acceleration values in the process of traffic model calibration as a means of ensuring more accurate results when simulating the effects of particular measures. The data acquired includes instantaneous GPS coordinates to track and select the itineraries, and speed and engine performance extracted directly from the on-board diagnostics system. Once the data is processed, the variations in several calibration parameters can be analyzed by comparing the base case model with the measure application scenarios. Depending on the measure, the results show changes of up to 6.4% in maximum speed values, and reductions of nearly 15% in acceleration and braking levels, especially when eco-driving is applied.
Pestman, Wiebe R
2009-01-01
This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.
Kappa statistic to measure agreement beyond chance in free-response assessments.
Carpentier, Marc; Combescure, Christophe; Merlini, Laura; Perneger, Thomas V
2017-04-19
The usual kappa statistic requires that all observations be enumerated. However, in free-response assessments, only positive (or abnormal) findings are notified, but negative (or normal) findings are not. This situation occurs frequently in imaging or other diagnostic studies. We propose here a kappa statistic that is suitable for free-response assessments. We derived the equivalent of Cohen's kappa statistic for two raters under the assumption that the number of possible findings for any given patient is very large, as well as a formula for sampling variance that is applicable to independent observations (for clustered observations, a bootstrap procedure is proposed). The proposed statistic was applied to a real-life dataset, and compared with the common practice of collapsing observations within a finite number of regions of interest. The free-response kappa is computed from the total numbers of discordant (b and c) and concordant positive (d) observations made in all patients, as 2d/(b + c + 2d). In 84 full-body magnetic resonance imaging procedures in children that were evaluated by 2 independent raters, the free-response kappa statistic was 0.820. Aggregation of results within regions of interest resulted in overestimation of agreement beyond chance. The free-response kappa provides an estimate of agreement beyond chance in situations where only positive findings are reported by raters.
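The closing formula translates directly into code; b, c and d follow the abstract's notation (discordant counts for each rater and concordant positives), and the example counts are invented:

```python
def free_response_kappa(b, c, d):
    """Free-response kappa for two raters: 2d / (b + c + 2d),
    where b and c are discordant counts and d the concordant positives."""
    return 2 * d / (b + c + 2 * d)

# Invented example: 9 concordant positives, one discordance per rater
kappa = free_response_kappa(b=1, c=1, d=9)  # = 18 / 20 = 0.9
```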
Requirements on the provisional safety analyses and technical comparison of safety measures
International Nuclear Information System (INIS)
2010-04-01
The concept of a Geological Underground Repository (SGT) was adopted by the Swiss Federal Council on April 2nd, 2008. It fixes the goals and the safety-technical criteria as well as the procedures for the choice of the site for an underground repository. Those responsible for waste management evaluate possible site regions according to the present status of geological knowledge and based on the safety criteria defined in SGT as well as on technical feasibility. In a first step, they propose geological repository sites for high level (HAA) and for low and intermediate level (SMA) radioactive wastes and justify their choice in a report delivered to the Swiss Federal Office of Energy. The Swiss Federal Council reviews the choices presented and, in the case of a positive evaluation, approves them and considers them as an initial orientation. In a second step, based on the possible sites according to step 1, the waste management institution responsible has to narrow down the repositories chosen for HAA and SMA by taking into account safety aspects, technical feasibility as well as spatial planning and socio-economic aspects. In making this choice, safety aspects have the highest priority. The criteria used for the evaluation in the first step have to be defined using provisional quantitative safety analyses. On the basis of the whole appraisal, including spatial planning and socio-economic aspects, those responsible for waste management propose at least two repository sites for HAA- and SMA-waste. Their selection is then reviewed by the authorities and, in the case of a positive assessment, the selection is taken as an intermediate result. The remaining sites are further studied to examine the site choice and the delivery of a request for a design license. If necessary, the requested geological knowledge has to be confirmed by new investigations. Based on the results of the choosing process and a positive evaluation by the safety authorities, the Swiss Federal Council has to
FY 2016 Status Report: CIRFT Testing Data Analyses and Updated Curvature Measurements
Energy Technology Data Exchange (ETDEWEB)
Wang, Jy-An John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Hong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-08-01
This report provides a detailed description of FY15 test result corrections/analysis based on the FY16 Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) test program methodology update used to evaluate the vibration integrity of spent nuclear fuel (SNF) under normal transportation conditions. The CIRFT consists of a U-frame testing setup and a real-time curvature measurement method. The three-component U-frame setup of the CIRFT has two rigid arms and linkages to a universal testing machine. The curvature of rod bending is obtained through a three-point deflection measurement method. Three linear variable differential transformers (LVDTs) are used and clamped to the side connecting plates of the U-frame to capture the deformation of the rod. The contact-based measurement, or three-LVDT-based curvature measurement system, on SNF rods has been proven to be quite reliable in CIRFT testing. However, how the LVDT head contacts the SNF rod may have a significant effect on the curvature measurement, depending on the magnitude and direction of rod curvature. It has been demonstrated that the contact/curvature issues can be corrected by using a correction on the sensor spacing. The sensor spacing defines the separation of the three LVDT probes and is a critical quantity in calculating the rod curvature once the deflections are obtained. The sensor spacing correction can be determined by using chisel-type probes. The method has been critically examined this year and has been shown to be difficult to implement in a hot cell environment, and thus cannot be implemented effectively. A correction based on the proposed equivalent gauge-length has the required flexibility and accuracy and can be appropriately used as a correction factor. The correction method based on the equivalent gauge length has been successfully demonstrated in CIRFT data analysis for the dynamic tests conducted on Limerick (LMK) (17 tests), North Anna (NA) (6 tests), and Catawba mixed oxide (MOX
MEASUREMENTS IN A LIQUID ATOMISER SPRAY USING THE PHASE-DOPPLER PARTICLE ANALYSER
Directory of Open Access Journals (Sweden)
R HADEF
2000-12-01
Experiments have been carried out at atmospheric conditions using a water atomiser spray. A phase-Doppler anemometer was used to perform measurements of droplet size, velocity and concentration, and photographs were taken. The results showed that the small particles, with low turbulence, occupied the central core of the jet, displaying a Gaussian profile for the axial velocity component. The large particles were deflected towards the outer edges of the jet, due to their higher initial momentum, and displayed relatively high levels of turbulence. The spatial distributions of the measured variables were nearly symmetrical about the x-axis and, although the number density of the droplets is very high in the central region, most of the atomised liquid was present at the edges of the spray.
Hébert-Losier, Kim; Beaven, C Martyn
2014-07-01
Jump tests are often used to assess the effect of interventions because their outcomes are reported to be valid indicators of functional performance. In this study, we examined the reproducibility of performance parameters from 3 common jump tests obtained using the commercially available Kistler Measurement, Analysis and Reporting Software (MARS). On 2 separate days, 32 men performed 3 squat jumps (SJs), 3 countermovement jumps (CMJs), and 3 standing long jumps (LJs) on a Kistler force-plate. On both days, the performance measures from the best jump of each series were extracted using the MARS. Changes in the mean scores, intraclass correlation coefficients (ICCs), and coefficients of variation (CVs) were computed to quantify the between-day reproducibility of each parameter. Moreover, the reproducibility quantifiers specific to the 3 separate jumps were compared using nonparametric tests. Overall, acceptable between-day reproducibility (mean ± SD of ICC and CV) was found using the MARS for SJ (0.88 ± 0.06 and 7.1 ± 3.8%), CMJ (0.84 ± 0.17 and 5.9 ± 4.1%), and LJ (0.80 ± 0.13 and 8.1 ± 4.1%) measures, except for parameters directly relating to the rate of force development (i.e., time to maximal force) and change in momentum during countermovement (i.e., negative force impulse), where reproducibility was lower. A greater proportion of the performance measures from the standing LJs had low ICCs and/or high CV values, most likely owing to the complex nature of the LJ test. Practitioners and researchers can use most of the jump test parameters from the MARS with confidence to quantify changes in the functional ability of individuals over time, except for those relating to the rate of force development or change in momentum during the countermovement phases of jumps.
New method for eliminating the statistical bias in highly turbulent flow measurements
International Nuclear Information System (INIS)
Nakao, S.I.; Terao, Y.; Hirata, K.I.; Kitakyushu Industrial Research Institute, Fukuoka, Japan)
1987-01-01
A simple method was developed for eliminating statistical bias which can be applied to highly turbulent flows with sparse and nonuniform seeding conditions. Unlike the methods proposed so far, a weighting function was determined based on the idea that the statistical bias could be eliminated if the asymmetric form of the probability density function of the velocity data were corrected. Moreover, data more than three standard deviations away from the mean were discarded to remove the apparent turbulent intensity resulting from noise. The present method was applied to data obtained in the wake of a block, which provided local turbulent intensities up to about 120 percent; it was found to eliminate the statistical bias with high accuracy. 9 references
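The three-standard-deviation rejection step described above can be sketched as follows (a single-pass version on invented data; the weighting-function part of the method is not reproduced here):

```python
import numpy as np

def trim_outliers(samples, n_sigma=3.0):
    """Discard samples more than n_sigma standard deviations from the
    mean, as a noise-rejection step before computing flow statistics."""
    v = np.asarray(samples, dtype=float)
    mean, sd = v.mean(), v.std(ddof=1)
    return v[np.abs(v - mean) <= n_sigma * sd]

# Invented velocity record: 50 plausible samples plus one noise spike
data = [10.0] * 50 + [1000.0]
kept = trim_outliers(data)
```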
Statistical Pattern Recognition
Webb, Andrew R
2011-01-01
Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions. It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,
Measurements and analyses of cosmic-ray exposure rates perturbed by various environmental objects
International Nuclear Information System (INIS)
Fukaya, Mitsuharu; Minato, Susumu
1988-01-01
One-dimensional intensity distributions of cosmic rays transmitted through various large structural objects were measured to examine the feasibility of 'cosmic-ray radiography'. 1) For the rectangular building, (a) bulk density estimation by comparison of the observed distribution with the calculated one, and (b) edge detection by a differential method, were found to be possible. 2) For the stairs in the subway station, the relation between the intensities and the stair depths could be interpreted by a simple model. These findings indicate that it is possible to correlate transmitted cosmic-ray intensity distributions with the structure and/or the physical quantities of large structural objects. (author)
Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary
2015-06-30
Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies - and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. All together, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase
International Nuclear Information System (INIS)
Baloch, M.J.
2003-01-01
Nine upland cotton varieties/strains were tested over 36 environments in Pakistan so as to determine the stability of their yield performance. The regression coefficient (b) was used as a measure of adaptability, whereas parameters such as the coefficient of determination (r²) and the sum of squared deviations from regression (s²d) were used as measures of stability. Although the regression coefficients (b) of all varieties did not deviate significantly from the unit slope, the varieties CRIS-5A, BII-89, DNH-40 and Rehmani gave b values closer to unity, implying their better adaptation. The lower s²d and higher r² of CRIS-121 and DNH-40 suggest that both of these are fairly stable. The results indicate that, generally, the adaptability and stability parameters are independent of each other, inasmuch as not all of the parameters simultaneously favoured one variety over the other, excepting the variety DNH-40, which was stable according to the majority of the parameters. Principal component analysis revealed that the first two components (latent roots) account for about 91.4% of the total variation. The latent vectors of the first principal component (PCA1) were small and positive, which also suggests that most of the varieties were quite adaptive to all of the test environments. (author)
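The adaptability and stability parameters used here (b, r² and s²d) come from regressing each variety's yield on an environmental index; the following is a generic sketch with invented numbers, not the study's data:

```python
import numpy as np

def stability_parameters(yields, env_index):
    """Slope b (adaptability), coefficient of determination r2, and
    sum of squared deviations from regression s2d (stability)."""
    y = np.asarray(yields, dtype=float)
    x = np.asarray(env_index, dtype=float)
    b, a = np.polyfit(x, y, 1)            # slope and intercept
    resid = y - (a + b * x)
    s2d = float(np.sum(resid ** 2))
    r2 = 1.0 - s2d / float(np.sum((y - y.mean()) ** 2))
    return b, r2, s2d

# Perfectly linear (invented) response: b = 2, r2 = 1, s2d = 0
b, r2, s2d = stability_parameters([3.0, 5.0, 7.0, 9.0], [1.0, 2.0, 3.0, 4.0])
```

A variety with b near 1 and small s²d would, under this model, be both well adapted and stable across environments.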
International Nuclear Information System (INIS)
Jeffers, Nicholas; Nolan, Kevin; Stafford, Jason; Donnelly, Brian
2014-01-01
Piezoelectric fans have been studied extensively and are seen as a promising technology for thermal management due to their ability to provide quiet, reliable cooling with low power consumption. The fluid mechanics of an unconfined piezoelectric fan are complex, which is why the majority of the literature to date confines the fan in an attempt to simplify the flow field. This paper investigates the fluid mechanics of an unconfined fan operating in its first vibration mode. The piezoelectric fan used in this study measures 12.7 mm × 70 mm and resonates at 92.5 Hz in air. A custom-built experimental facility was developed to capture the fan's flow field using phase-locked Particle Image Velocimetry (PIV). The phase-locked PIV results are presented in terms of vorticity and show the formation of a horseshoe vortex. A three-dimensional λ2 criterion constructed from interpolated PIV measurements was used to identify the vortex core in the vicinity of the fan. This analysis clearly identified the formation of a horseshoe vortex that turns into a hairpin vortex before it breaks up due to a combination of vortex shedding and flow along the fan blade. The results presented in this paper contribute to both the fluid dynamics and heat transfer literature concerning first-mode fan oscillation.
International Nuclear Information System (INIS)
Lacey, G.; Thenoux, G.; Rodriguez-Roa, F.
2008-01-01
In accordance with the present development of empirical-mechanistic tools, this paper presents an alternative to traditional analysis methods for flexible pavements using a three-dimensional finite element formulation based on a linear-elastic perfectly-plastic Drucker-Prager model for granular soil layers and a linear-elastic stress-strain law for the asphalt layer. From the sensitivity analysis performed, it was found that variations of ±4° in the internal friction angle of granular soil layers did not significantly affect the analyzed pavement response. On the other hand, a null dilation angle is conservatively proposed for design purposes. The use of a Light Falling Weight Deflectometer is also proposed as an effective and practical tool for on-site elastic modulus determination of granular soil layers. However, the stiffness value obtained from the tested layer should be corrected when the measured peak deflection and the peak force do not occur at the same time. In addition, some practical observations are given to achieve successful field measurements. The importance of using a 3D FE analysis to predict the maximum tensile strain at the bottom of the asphalt layer (related to pavement fatigue) and the maximum vertical compressive strain transmitted to the top of the granular soil layers (related to rutting) is also shown. (author)
Dancer, Diane; Morrison, Kellie; Tarr, Garth
2015-01-01
Peer-assisted study session (PASS) programs have been shown to positively affect students' grades in a majority of studies. This study extends that analysis in two ways: by controlling for ability and other factors, with a focus on international students, and by presenting results for PASS in business statistics. Ordinary least squares, random effects…
Statistical analysis of longitudinal quality of life data with missing measurements
Zwinderman, A. H.
1992-01-01
The statistical analysis of longitudinal quality of life data in the presence of missing data is discussed. In cancer trials missing data are generated due to the fact that patients die, drop out, or are censored. These missing data are problematic in the monitoring of the quality of life during the
Short-term statistics of waves measured off Ratnagiri, eastern Arabian Sea
Digital Repository Service at National Institute of Oceanography (India)
Amrutha, M.M.; SanilKumar, V.
coast of India have been analyzed to study the short-term statistics of waves covering a full one-year period. The study indicates that the values of the observed maximum wave height as a function of duration are not consistent with the theoretical...
Alsahli, Mohammad M. M.
Kuwait sea surface temperature (SST) and water clarity are important water characteristics that influence the entire Kuwait coastal ecosystem. The spatial and temporal distributions of these important water characteristics should be well understood to gain better knowledge of this productive coastal environment. The aim of this project was therefore to study the spatial and temporal distributions of: Kuwait SST using Moderate Resolution Imaging Spectroradiometer (MODIS) images collected from January 2003 to July 2007; and Kuwait Secchi disk depth (SDD), a water clarity measure, using Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and MODIS data collected from November 1998 to October 2004 and January 2003 to June 2007, respectively. Kuwait SST was modeled based on the linear relationship between level 2 MODIS SST data and in situ SST data. MODIS SST images showed a significant relationship with in situ SST data (r² = 0.98, n = 118, RMSE = 0.7 °C). Kuwait SST images derived from MODIS data exhibited three spatial patterns of Kuwait SST across the year that were mainly attributed to the northwestern counterclockwise water circulation of the Arabian Gulf, and wind direction and intensity. The temporal variation of Kuwait SST was greatly influenced by the seasonal variation of solar intensity and air temperatures. Kuwait SDD was measured through two steps: first, computing the diffuse light attenuation coefficient at 490 nm, Kd(490), and 488 nm, Kd(488), derived from SeaWiFS and MODIS, respectively, using a semi-analytical algorithm; second, establishing two SDD models based on the empirical relationship of Kd(490) and Kd(488) with in situ SDD data. Kd(490) and Kd(488) showed a significant relationship with in situ SDD data (r² = 0.67 and r² = 0.68, respectively). Kuwait SDD images showed distinct spatial and temporal patterns of Kuwait water clarity that were mainly attributed to three factors: the Shatt Al-Arab discharge, water circulation, and coastal
Analyses on the measurement of leakage currents in CdZnTe radiation detectors
International Nuclear Information System (INIS)
Mescher, M.J.; Hoburg, J.F.; Schlesinger, T.E.; James, R.B.
1999-01-01
Models that place design constraints on devices used to measure leakage currents in high-resistivity semiconductor materials are presented. If these design constraints are met, the models can be used to quantitatively predict the surface sheet resistance of devices that are dominated by surface leakage currents. As a result, a means is provided to directly compare passivation techniques developed to decrease surface leakage currents. Furthermore, these models illustrate the necessity of including relevant geometrical data on sample size, shape and electrode configuration when reporting results of surface passivation techniques. The models specifically examine the case where a dc potential is applied across two electrodes on the surface of a semiconductor substrate that has a surface layer of lower resistivity than the bulk material. The authors describe several of the more common configurations used in analyzing passivation techniques for compounds of Cd₁₋ₓZnₓTe (CZT) used for room-temperature radiation detection.
Measurement of the CKM angle γ from a combination of B±→Dh± analyses
International Nuclear Information System (INIS)
Aaij, R.; Abellan Beteta, C.; Adeva, B.; Adinolfi, M.; Adrover, C.; Affolder, A.; Ajaltouni, Z.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A.A.; Amato, S.; Amerio, S.; Amhis, Y.; Anderlini, L.; Anderson, J.; Andreassen, R.
2013-01-01
A combination of three LHCb measurements of the CKM angle γ is presented. The decays B±→DK± and B±→Dπ± are used, where D denotes an admixture of D⁰ and D̄⁰ mesons, decaying into K⁺K⁻, π⁺π⁻, K±π∓, K±π∓π±π∓, K⁰ₛπ⁺π⁻, or K⁰ₛK⁺K⁻ final states. All measurements use a dataset corresponding to 1.0 fb⁻¹ of integrated luminosity. Combining results from B±→DK± decays alone, a best-fit value of γ = 72.0° is found, and confidence intervals are set: γ ∈ [56.4, 86.7]° at 68% CL and γ ∈ [42.6, 99.6]° at 95% CL. The best-fit value of γ found from a combination of results from B±→Dπ± decays alone is γ = 18.9°, and the confidence intervals γ ∈ [7.4, 99.2]° ∪ [167.9, 176.4]° at 68% CL are set, with no constraint at 95% CL. The combination of results from B±→DK± and B±→Dπ± decays gives a best-fit value of γ = 72.6°, and the confidence intervals γ ∈ [55.4, 82.3]° at 68% CL and γ ∈ [40.2, 92.7]° at 95% CL are set. All values are expressed modulo 180° and are obtained taking into account the effect of D⁰–D̄⁰ mixing
Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun
2017-11-01
This study applied open-source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an automated computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by recomputing prior studies on the epidemiological measurement of diseases caused either by heavy metal exposure in the environment or by clinical complications in hospital. The simulation validity was confirmed with commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance for a data amount of more than 230K sets. Both setups reached an efficiency of about 10⁵ sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.
Wen, Dazhi; Kuang, Yuanwen; Zhou, Guoyi
2004-01-01
Air pollution has been a major problem in the Pearl River Delta of south China, particularly during the last two decades. Emissions of air pollutants from industries have already led to damage to natural communities and environments across a wide range of the Delta area. Leaf parameters such as chlorophyll fluorescence, leaf area (LA), dry weight (DW) and leaf mass per area (LMA) have previously been used as specific indexes of environmental stress. This study aims to determine in situ whether the daily variation of chlorophyll fluorescence and other ecophysiological parameters in seedlings of three woody species, Ilex rotunda, Ficus microcarpa and Machilus chinensis, could be used alone or in combination with other measurements as sensitivity indexes for diagnosis under air pollution stress and, hence, for choosing suitable tree species for urban afforestation in the Delta area. Five seedlings of each species were transplanted into pot containers after acclimation under shading conditions. Chlorophyll fluorescence measurements were made in situ with a portable fluorometer (OS-30, Opti-Sciences, U.S.A.). Ten random samples of leaves were picked from each species for LA measurements by area meter (CI-203, CID, Inc., U.S.A.). DW was determined after the leaf samples were dried to a constant weight at 65 °C. LMA was calculated as the ratio DW/LA. Leaf N content was analyzed according to the Kjeldahl method, and the extraction of pigments was carried out according to Lin et al. The daily mean Fv/Fm (Fv is the variable fluorescence and Fm is the maximum fluorescence) analysis showed that Ilex rotunda and Ficus microcarpa were more resistant to pollution stress, followed by Machilus chinensis, implying that the efficiency of photosystem II in I. rotunda was less affected by air pollutants than in the other two species. Little difference in the daily change of Fv/Fm in I. rotunda between the polluted and the clean site was also observed. However, a relatively large
Ma, Junjun; Xiong, Xiong; He, Feng; Zhang, Wei
2017-04-01
Stock price fluctuation is studied in this paper from an intrinsic-time perspective. Events, directional changes (DC) and overshoots, are taken as the time scale of the price series. Under this directional-change law, the corresponding statistical properties and parameter estimation are tested on the Chinese stock market. Furthermore, a directional-change trading strategy is proposed for investing in the market portfolio in the Chinese stock market, and both in-sample and out-of-sample performance are compared among the different methods of model parameter estimation. We conclude that the DC method can capture important fluctuations in the Chinese stock market and earn a profit due to the statistical property that the average upturn overshoot size is bigger than the average downturn directional-change size. The optimal parameter of the DC method is not fixed, and we obtained a 1.8% annual excess return with this DC-based trading strategy.
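The directional-change dissection of a price series that underlies this intrinsic-time approach can be sketched roughly as follows. The threshold value, the random-walk price path and the initial "up" mode are illustrative assumptions, not the authors' calibration for the Chinese market.

```python
import math
import random

random.seed(3)
# Hypothetical price path: a geometric random walk standing in for an index.
price, prices = 100.0, []
for _ in range(5000):
    price *= math.exp(random.gauss(0.0002, 0.01))
    prices.append(price)

def directional_changes(prices, theta):
    """Dissect a price series into alternating directional-change events.

    A downturn (upturn) event is confirmed when the price has moved theta
    away from the last local maximum (minimum). The initial 'up' mode is
    an arbitrary starting assumption."""
    events, ext, mode = [], prices[0], "up"
    for p in prices[1:]:
        if mode == "up":
            if p > ext:
                ext = p                      # extend the upward overshoot
            elif (ext - p) / ext >= theta:   # downturn DC confirmed
                events.append(("down", ext, p))
                mode, ext = "down", p
        else:
            if p < ext:
                ext = p                      # extend the downward overshoot
            elif (p - ext) / ext >= theta:   # upturn DC confirmed
                events.append(("up", ext, p))
                mode, ext = "up", p
    return events

evts = directional_changes(prices, theta=0.02)
print(f"{len(evts)} directional changes at a 2% threshold")
```

The segment between one confirmed event and the next extreme is the overshoot whose average size the abstract compares across up and down moves.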
Raymond, Mark R; Clauser, Brian E; Furman, Gail E
2010-10-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
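A minimal sketch of the kind of rating adjustment discussed above: in a balanced design, OLS with rater dummy variables reduces to subtracting each rater's mean deviation from the grand mean. The rater count, stringency values and score scale are hypothetical, and the generalizability-theory analysis of the study is omitted.

```python
import random

random.seed(4)
# Hypothetical balanced design: 6 standardized-patient raters each rate the
# same 40 examinees; observed rating = true score + rater stringency + noise.
n_raters, n_exam = 6, 40
stringency = [random.gauss(0.0, 0.5) for _ in range(n_raters)]
true_scores = [random.gauss(7.0, 1.0) for _ in range(n_exam)]
ratings = [[true_scores[e] + stringency[r] + random.gauss(0.0, 0.3)
            for e in range(n_exam)] for r in range(n_raters)]

grand = sum(sum(row) for row in ratings) / (n_raters * n_exam)
# Balanced-design OLS with rater dummies: remove each rater's mean
# deviation from the grand mean.
adjusted = [[ratings[r][e] - (sum(ratings[r]) / n_exam - grand)
             for e in range(n_exam)] for r in range(n_raters)]

def rater_mean_spread(mat):
    means = [sum(row) / len(row) for row in mat]
    return max(means) - min(means)

spread_before = rater_mean_spread(ratings)
spread_after = rater_mean_spread(adjusted)
print(f"rater-mean spread before adjustment: {spread_before:.2f}")
print(f"rater-mean spread after adjustment:  {spread_after:.6f}")
```

After adjustment the rater means coincide, so the between-rater component of measurement error is removed while within-rater noise remains.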
Sileshi, G
2006-10-01
Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz's Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros, even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model corrected for overdispersion and the standard negative binomial model provided a better description of the probability distribution of counts for seven of the 11 insect taxa than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common phenomena in insect count data. If not properly modelled, these properties can invalidate normal-distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. It is therefore recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
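The model comparison described above can be illustrated with a small standard-library sketch: simulated overdispersed counts are fitted with a Poisson model (maximum likelihood) and a negative binomial model (method of moments, used here as a simple stand-in for maximum likelihood), and the fits are compared by AIC. The simulation parameters are hypothetical, not the insect datasets of the study.

```python
import math
import random

random.seed(0)

def poisson_draw(lam):
    # Inverse-CDF Poisson sampler (adequate for moderate lam)
    x, pmf = 0, math.exp(-lam)
    cdf, u = pmf, random.random()
    while u > cdf:
        x += 1
        pmf *= lam / x
        cdf += pmf
    return x

# Overdispersed, zero-heavy counts via a gamma-Poisson (i.e. negative
# binomial) mixture; a hypothetical stand-in for field insect counts.
r_true, p_true = 0.8, 0.25
counts = [poisson_draw(random.gammavariate(r_true, (1 - p_true) / p_true))
          for _ in range(500)]

n = len(counts)
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)

# Poisson fit: the MLE of lambda is the sample mean (1 parameter)
ll_pois = sum(c * math.log(mean) - mean - math.lgamma(c + 1) for c in counts)
aic_pois = 2 * 1 - 2 * ll_pois

# Negative binomial fit by method of moments (2 parameters): var = mu + mu^2/r
r = mean ** 2 / (var - mean)
p = r / (r + mean)
ll_nb = sum(math.lgamma(c + r) - math.lgamma(r) - math.lgamma(c + 1)
            + r * math.log(p) + c * math.log(1 - p) for c in counts)
aic_nb = 2 * 2 - 2 * ll_nb

print(f"zero fraction: {sum(c == 0 for c in counts) / n:.2f}")
print(f"AIC Poisson: {aic_pois:.1f}, AIC negative binomial: {aic_nb:.1f}")
```

With genuine overdispersion, the negative binomial's lower AIC signals the better description, mirroring the paper's conclusion for most of the insect datasets.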
Racial measurement and statistical field in Brazilian census (1872-1940): a convergent approach
Directory of Open Access Journals (Sweden)
Alexandre de Paiva Rio Camargo
2009-12-01
This paper investigates the meanings given to racial classification in several Brazilian censuses. It proposes a convergent analysis of the socio-political conventions established for the inclusion (1872, 1890, 1940) or omission (1920) of the racial item in the surveys at different historical moments, and of the emergence of the technical community of statisticians amid a changing census paradigm. As a method, it assumes a circularity between the social system of racial classification, the interpretive chapters on national identity commissioned to introduce the censuses – "O povo brasileiro e sua evolução" by Oliveira Vianna (1920) and "A cultura brasileira" by Fernando de Azevedo (1940) – and the role played by technical requirements in the positions taken by the statisticians. The article analyzes the reports written by the census committees and organizers at the verbal level, in order to compare the arguments presented with the information and cross-tabulations obtained at the matrix level. It takes the 1940 census as a turning point in statistical activity because it reveals the structural conflict between the policy function primarily reserved for statistics and the consecration of technical competence. Accordingly, this paper addresses the gradual release of statistical ideology from political propaganda about color.
Zhang, Zhang; Li, Jun; Cui, Peng; Ding, Feng; Li, Ang; Townsend, Jeffrey P; Yu, Jun
2012-01-01
measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have
How to statistically analyze nano exposure measurement results: Using an ARIMA time series approach
Klein Entink, R.H.; Fransman, W.; Brouwer, D.H.
2011-01-01
Measurement strategies for exposure to nano-sized particles differ from traditional integrated sampling methods for exposure assessment by the use of real-time instruments. The resulting measurement series is a time series, where typically the sequential measurements are not independent from each
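One way to see why serial dependence in real-time measurements matters, in the spirit of the time-series approach described above (sketched here with a plain AR(1) rather than a full ARIMA model): the naive standard error of the mean concentration understates the true uncertainty unless the autocorrelation is accounted for. All numbers are hypothetical.

```python
import math
import random

random.seed(1)
# Hypothetical particle-number concentration series (#/cm^3) with AR(1)
# dependence, mimicking sequential real-time instrument readings.
phi_true, n = 0.7, 2000
x, series = 0.0, []
for _ in range(n):
    x = phi_true * x + random.gauss(0.0, 1.0)
    series.append(10000.0 + 500.0 * x)

mean = sum(series) / n
var = sum((v - mean) ** 2 for v in series) / (n - 1)

# Lag-1 autocorrelation: the Yule-Walker estimate of phi for an AR(1)
num = sum((series[i] - mean) * (series[i + 1] - mean) for i in range(n - 1))
den = sum((v - mean) ** 2 for v in series)
phi = num / den

se_naive = math.sqrt(var / n)          # assumes independent readings
n_eff = n * (1 - phi) / (1 + phi)      # large-n effective sample size for AR(1)
se_corrected = math.sqrt(var / n_eff)  # accounts for the autocorrelation
print(f"phi = {phi:.2f}, naive SE = {se_naive:.1f}, corrected SE = {se_corrected:.1f}")
```

With positive autocorrelation the corrected standard error is substantially larger than the naive one, which is the core reason sequential real-time readings cannot be treated as independent samples.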
The Length of a Pestle: A Class Exercise in Measurement and Statistical Analysis.
O'Reilly, James E.
1986-01-01
Outlines the simple exercise of measuring the length of an object as a concrete paradigm of the entire process of making chemical measurements and treating the resulting data. Discusses the procedure, significant figures, measurement error, spurious data, rejection of results, precision and accuracy, and student responses. (TW)
Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye
2016-01-13
A framework for establishing a standard reference scale for texture is proposed, based on multivariate statistical analysis of instrumental measurements and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework is verified by establishing a standard reference scale for the texture attribute hardness using well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were selected to construct the hardness standard reference scale. The results indicate that the regression between the estimated sensory value and the instrumentally measured value is significant (R² = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for establishing quantitative standard reference scales for food texture characteristics.
Directory of Open Access Journals (Sweden)
L. Bressan
2016-01-01
reconstructed sea level (RSL), the background slope (BS) and the control function (CF). These functions are examined through a traditional spectral fast Fourier transform (FFT) analysis and also through a statistical analysis, showing that they can be characterised by probability distribution functions (PDFs) such as the Student's t distribution (IS and RSL) and the beta distribution (CF). As an example, the method has been applied to data from the tide-gauge station of Siracusa, Italy.
International Nuclear Information System (INIS)
Souza, R.C.; Jones, B.G.
1986-01-01
An experimental study of particles suspended in fully developed turbulent water flow in a vertical pipe was performed. Three series of experiments were conducted to investigate the statistical behaviour of particles in non-dilute turbulent suspension flow, for two particle densities and particle sizes, and for several particle volume loadings ranging from 0 to 1 percent. The mean free-fall velocity of the particles was determined at these various particle volume loadings, and the phenomenon of cluster formation was observed. The precise volume loading which gives the maximum relative settling velocity was observed to depend on particle density and size. (E.G.)
Sumner, Jeremy G; Taylor, Amelia; Holland, Barbara R; Jarvis, Peter D
2017-12-01
Recently there has been renewed interest in phylogenetic inference methods based on phylogenetic invariants, alongside the related Markov invariants. Broadly speaking, both these approaches give rise to polynomial functions of sequence site patterns that, in expectation value, either vanish for particular evolutionary trees (in the case of phylogenetic invariants) or have well-understood transformation properties (in the case of Markov invariants). While both approaches have been valued for their intrinsic mathematical interest, it is not clear how they relate to each other, and to what extent they can be used as practical tools for inference of phylogenetic trees. In this paper, by focusing on the special case of binary sequence data and quartets of taxa, we are able to view these two different polynomial-based approaches within a common framework. To motivate the discussion, we present three desirable statistical properties that we argue any invariant-based phylogenetic method should satisfy: (1) sensible behaviour under reordering of input sequences; (2) stability as the taxa evolve independently according to a Markov process; and (3) explicit dependence on the assumption of a continuous-time process. Motivated by these statistical properties, we develop and explore several new phylogenetic inference methods. In particular, we develop a statistically bias-corrected version of the Markov invariants approach which satisfies all three properties. We also extend previous work by showing that the phylogenetic invariants can be implemented in such a way as to satisfy property (3). A simulation study shows that, in comparison to other methods, our new proposed approach based on bias-corrected Markov invariants is extremely powerful for phylogenetic inference. The binary case is of particular theoretical interest as, in this case only, the Markov invariants can be expressed as linear combinations of the phylogenetic invariants. A wider implication of this is that, for
Savastru, D.; Dontu, Simona; Savastru, Roxana; Sterian, Andreea Rodica
2013-01-01
Our knowledge of our surroundings is achieved through observations and measurements, but both are influenced by errors (noise). Therefore, one of the first tasks is to try to eliminate the noise by constructing instruments with high accuracy. However, any real observed and measured system is characterized by natural limits due to the deterministic nature of the measured information. The present work is dedicated to the identification of these limits. We have analyzed some algorithms for selection and ...
Improved radiograph measurement inter-observer reliability by use of statistical shape models
Energy Technology Data Exchange (ETDEWEB)
Pegg, E.C., E-mail: elise.pegg@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Mellon, S.J., E-mail: stephen.mellon@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Salmon, G. [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Alvand, A., E-mail: abtin.alvand@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Pandit, H., E-mail: hemant.pandit@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Murray, D.W., E-mail: david.murray@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Gill, H.S., E-mail: richie.gill@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom)
2012-10-15
Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers, and one observer took four sets of measurements, to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average increases of 27% and 11.2% for intra- and inter-observer reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Significantly less time (15 s) was required to take measurements using the ASM model than manually. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements.
Energy Technology Data Exchange (ETDEWEB)
Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others
2013-09-01
We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.
Measurement Matters: Comparing Old and New Definitions of Rape in Federal Statistical Reporting.
Bierie, David M; Davis-Siegel, James C
2015-10-01
National statistics on the incidence of rape play an important role in the work of policymakers and academics. The Uniform Crime Reports (UCR) have provided some of the most widely used and influential statistics on the incidence of rape across the United States over the past 80 years. The definition of rape used by UCR changed in 2012 to include substantially more types of sexual assault. This article draws on 20 years of data from the National Incident-Based Reporting System to describe the impact this definitional change will have on estimates of the incidence of rape and trends over time. Drawing on time series as well as panel random effects methodologies, we show that 40% of sexual assaults have been excluded by the prior definition and that the magnitude of this error has grown over time. However, the overall trend in rape over time (year-to-year change) was not substantially different when comparing events meeting the prior definition and the subgroups of sexual assault that will now be counted. © The Author(s) 2014.
Measuring the data universe data integration using statistical data and metadata exchange
Stahl, Reinhold
2018-01-01
This richly illustrated book provides an easy-to-read introduction to the challenges of organizing and integrating modern data worlds, explaining the contribution of public statistics and the ISO standard SDMX (Statistical Data and Metadata Exchange). As such, it is a must for data experts as well as those aspiring to become one. Today, exponentially growing data worlds are increasingly determining our professional and private lives. The rapid increase in the amount of globally available data, fueled by search engines and social networks but also by new technical possibilities such as Big Data, offers great opportunities. But whatever the undertaking – driving the blockchain revolution or making smartphones even smarter – success will be determined by how well it is possible to integrate, i.e. to collect, link and evaluate, the required data. One crucial factor in this is the introduction of a cross-domain order system in combination with a standardization of the data structure. Using everyday examples, th...
DEFF Research Database (Denmark)
Gade, Anders Christian; Siebein, G. W.; Chiang, W.
1993-01-01
as for entire rooms. Measurement data from all three teams were used in the models to assess the sensitivity of the models to expected variations in measurements. The results were compared with the previous work of Barron, Gade, and Hook, among others. [Work supported by the National Science Foundation and Concert...
Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf
2017-06-01
Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
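The testing idea in the abstract above can be illustrated with a minimal sketch: a plain two-sample permutation test on the difference in means between two devices' readings. This is not the GAMLSS boosting procedure the authors describe, only the simpler permutation principle it builds on, and the pigmentation-style readings below are invented for illustration.

```python
import random

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test on the absolute difference in means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabeling of devices
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)            # add-one-corrected p-value

# Hypothetical skin-pigmentation readings from two devices
device_a = [34.1, 35.0, 33.8, 34.6, 35.2, 34.4]
device_b = [36.9, 37.4, 36.5, 37.1, 37.8, 36.7]
p = permutation_test(device_a, device_b)
```

A small p-value here would indicate a systematic offset (bias) between the devices; detecting differences in random error would require permuting a scale statistic instead of the mean.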
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
Directory of Open Access Journals (Sweden)
Rafdzah Zaki
2013-06-01
Full Text Available Objective(s): Reliability measures precision, or the extent to which test results can be replicated. This is the first ever systematic review to identify statistical methods used to measure the reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analysis and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched between 2007 and 2009 to look for reliability studies. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. Results: The Intra-class Correlation Coefficient (ICC) is the most popular method, used in 25 (60%) studies, followed by the comparison of means (8; 19%). Out of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and types of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This study finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue, and to be able to correctly perform analysis in reliability studies.
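For readers unfamiliar with the review's most popular method, a one-way random-effects ICC(1,1) can be computed directly from its ANOVA definition. The test-retest data below are hypothetical; as the review notes, published work should also report the ICC type and its confidence interval.

```python
def icc_oneway(ratings):
    """ICC(1,1): one-way random-effects, single-measurement reliability.
    `ratings` is a list of per-subject lists, one value per measurement."""
    n = len(ratings)             # number of subjects
    k = len(ratings[0])          # measurements per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subj_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical test-retest readings from one blood-pressure monitor
data = [[120, 122], [135, 133], [110, 111], [128, 129], [142, 140]]
icc = icc_oneway(data)
```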
Duncan, Fiona; Haigh, Carol
2013-10-01
To explore and improve the quality of continuous epidural analgesia for pain relief using Statistical Process Control tools. Measuring the quality of pain management interventions is complex. Intermittent audits do not accurately capture the results of quality improvement initiatives. The failure rate for one intervention, epidural analgesia, is approximately 30% in everyday practice, so it is an important area for improvement. Continuous measurement and analysis are required to understand the multiple factors involved in providing effective pain relief. Process control and quality improvement methods were used: routine, prospectively acquired data collection started in 2006. Patients were asked about their pain and side effects of treatment. Statistical Process Control methods were applied for continuous data analysis. A multidisciplinary group worked together to identify reasons for variation in the data and instigated ideas for improvement. The key measure for improvement was a reduction in the percentage of patients with an epidural in severe pain. The baseline control charts illustrated the recorded variation in the rate of several processes and outcomes for 293 surgical patients. The mean pain score (VNRS) was four. There was no special cause variation when data were stratified by surgeons, clinical area or patients who had experienced pain before surgery. Fifty-seven per cent of patients were hypotensive on the first day after surgery. We were able to demonstrate a significant improvement in the failure rate of epidurals as the project continued with quality improvement interventions. Statistical Process Control is a useful tool for measuring and improving the quality of pain management. The applications of Statistical Process Control methods offer the potential to learn more about the process of change and outcomes in an Acute Pain Service, both locally and nationally. We have been able to develop measures for improvement and benchmarking in routine care that
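A minimal sketch of the Statistical Process Control idea behind such charts: 3-sigma limits for a proportion (p) chart flag periods whose severe-pain rate shows special-cause variation. The 30% baseline echoes the failure rate mentioned above, but the subgroup size and monthly rates here are invented.

```python
def p_chart_limits(p_bar, n):
    """3-sigma control limits for a proportion (p) chart
    with average proportion p_bar and subgroup size n."""
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5
    lcl = max(0.0, p_bar - 3 * sigma)   # lower control limit, clipped at 0
    ucl = min(1.0, p_bar + 3 * sigma)   # upper control limit, clipped at 1
    return lcl, ucl

# Hypothetical: 30% baseline severe-pain rate, ~25 audited patients per month
lcl, ucl = p_chart_limits(p_bar=0.30, n=25)
# Monthly proportions outside the limits signal special-cause variation
special_cause = [m for m in (0.28, 0.35, 0.62, 0.31) if not lcl <= m <= ucl]
```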
Statistical properties of four effect-size measures for mediation models.
Miočević, Milica; O'Rourke, Holly P; MacKinnon, David P; Brown, Hendricks C
2018-02-01
This project examined the performance of classical and Bayesian estimators of four effect size measures for the indirect effect in a single-mediator model and a two-mediator model. Compared to the proportion and ratio mediation effect sizes, standardized mediation effect-size measures were relatively unbiased and efficient in the single-mediator model and the two-mediator model. Percentile and bias-corrected bootstrap interval estimates of ab/s_Y and ab(s_X)/s_Y in the single-mediator model outperformed interval estimates of the proportion and ratio effect sizes in terms of power, Type I error rate, coverage, imbalance, and interval width. For the two-mediator model, standardized effect-size measures were superior to the proportion and ratio effect-size measures. Furthermore, it was found that Bayesian point and interval summaries of posterior distributions of standardized effect-size measures reduced excessive relative bias for certain parameter combinations. The standardized effect-size measures are the best effect-size measures for quantifying mediated effects.
Energy Technology Data Exchange (ETDEWEB)
NONE
2010-05-15
This document reports various analyses performed within the frame of the preparation and filming of a TV documentary on the Belgian National Institute of Radio-elements. It reports gamma radiation measurements performed in the vicinity of the institute, discusses the possible origin of their increase in the vicinity of the institute, and presents analyses of sludge samples from a wastewater treatment plant and analyses of milk, cabbage, mosses and sediments collected by residents
Methods for Measurement and Statistical Analysis of the Frangibility of Strengthened Glass
Directory of Open Access Journals (Sweden)
Zhongzhi Tang
2015-06-01
Full Text Available Chemically strengthened glass features a surface compression and a balancing central tension (CT) in the interior of the glass. A greater CT is usually associated with a higher level of stored elastic energy in the glass. During a fracture event, release of a greater amount of stored energy can lead to frangibility, i.e., shorter crack branching distances, smaller fragment size, and ejection of small fragments from the glass. In this paper, the frangibility and fragmentation behaviors of a series of chemically strengthened glass samples are studied using two different manual testing methods and an automated tester. Both immediate and delayed fracture events were observed. A statistical method is proposed to determine the probability of frangible fracture for glasses ion exchanged under a specific set of conditions, and analysis is performed to understand the dependence of frangibility probability on sample thickness, CT, and testing method. We also propose a more rigorous set of criteria for qualifying frangibility.
A Statistical Approach for Selecting Buildings for Experimental Measurement of HVAC Needs
Directory of Open Access Journals (Sweden)
Malinowski Paweł
2017-03-01
Full Text Available This article presents a statistical methodology for selecting representative buildings for experimentally evaluating the performance of HVAC systems, especially in terms of energy consumption. The proposed approach is based on the k-means method. The algorithm for this method is conceptually simple, allowing it to be easily implemented. The method can be applied to large quantities of data with unknown distributions. The method was tested using numerical experiments to determine the hourly, daily, and yearly heat values and the domestic hot water demands of residential buildings in Poland. Due to its simplicity, the proposed approach is very promising for use in engineering applications and is applicable to testing the performance of many HVAC systems.
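The k-means step at the core of the proposed selection method can be sketched in a few lines. The two-dimensional building descriptors below (scaled floor area and annual heat demand) are invented stand-ins for the real feature set; the building nearest each centroid then serves as the cluster's representative for experimental measurement.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=1):
    """Plain Lloyd's k-means; returns centroids and cluster labels."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)           # initialize from the data
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j]))
                  for p in points]
        for j in range(k):                      # recompute cluster means
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, labels

# Hypothetical (floor area, heat demand) descriptors, two obvious groups
buildings = [(0.5, 0.6), (0.6, 0.7), (0.55, 0.65),
             (2.0, 2.2), (2.1, 2.3), (1.9, 2.1)]
centroids, labels = kmeans(buildings, k=2)
# The building nearest each centroid is that cluster's representative
reps = [min(buildings, key=lambda p: dist2(p, c)) for c in centroids]
```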
Health and human rights: a statistical measurement framework using household survey data in Uganda.
Wesonga, Ronald; Owino, Abraham; Ssekiboobo, Agnes; Atuhaire, Leonard; Jehopio, Peter
2015-05-03
Health is intertwined with human rights, as is clearly reflected in the right to life. Promotion of health practices in the context of human rights can be accomplished if there is a better understanding of the level of human rights observance. In this paper, we evaluate and present an appraisal of the possibility of applying household surveys to study the determinants of health and human rights, and also derive the probability that human rights are observed; an important ingredient in the national planning framework. Data from the Uganda National Governance Baseline Survey were used. A conceptual framework for predictors of a hybrid dependent variable was developed and both bivariate and multivariate statistical techniques employed. Multivariate post-estimation computations were derived after evaluations of the significance of coefficients of health and human rights predictors. Findings show that the household characteristics of respondents considered in this study were statistically significant (p < 0.05) predictors of health and human rights observance. For example, a unit increase in respondents' schooling level results in an increase of about 34% in the level of positively assessing human rights observance. Additionally, the study establishes, through the three models presented, that household assessment of health and human rights observance was 20%, which also represents how much of the entire continuum of human rights is demanded. The findings propose important evidence for monitoring and evaluation of health in the context of human rights using household survey data. They provide a benchmark for health and human rights assessments with a focus on international and national development plans to achieve socio-economic transformation and health in society.
Statistical length of DNA based on AFM image measured by a computer
International Nuclear Information System (INIS)
Chen Xinqing; Qiu Xijun; Zhang Yi; Hu Jun; Wu Shiying; Huang Yibo; Ai Xiaobai; Li Minqian
2001-01-01
Taking advantage of image processing technology, the contour length of DNA molecules was measured automatically by a computer. Based on the AFM image of DNA, the topography of DNA was simulated into a curve. Then the DNA length was measured automatically by an inserting mode. It was shown that the experimental length of a naturally deposited DNA (180.4 ± 16.4 nm) was well consistent with the theoretical length (185.0 nm). Compared with other methods, the present approach had the advantages of precision and automatism. The stretched DNA was also measured. It was shown that the experimental length (343.6 ± 20.7 nm) was much longer than the theoretical length (307.0 nm). This result indicated that the stretching process had a distinct effect on the DNA length. However, the method provided here avoided the DNA-stretching effect
International Nuclear Information System (INIS)
Adams, T.; Batra, P.; Bugel, Leonard G.; Camilleri, Leslie Loris; Conrad, Janet Marie; Fisher, Peter H.; Formaggio, Joseph Angelo; Karagiorgi, Georgia S.
2009-01-01
We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass), to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived Parton Distribution Functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events which is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parameterized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint at 'Beyond the Standard Model' physics
Directory of Open Access Journals (Sweden)
Dai Guangyao
2018-01-01
Full Text Available Cirrus clouds affect the energy budget and hydrological cycle of the earth’s atmosphere. The Tibetan Plateau (TP) plays a significant role in the global and regional climate. Optical and geometrical properties of cirrus clouds in the TP were measured in July-August 2014 by lidar and radiosonde. The statistics and temperature dependences of the corresponding properties are analyzed. The cirrus cloud formations are discussed with respect to temperature deviation and dynamic processes.
Jourdren, Laurent; Delaveau, Thierry; Marquenet, Emelie; Jacq, Claude; Garcia, Mathilde
2010-07-01
Recent improvements in microscopy technology allow detection of single molecules of RNA, but tools for large-scale automatic analyses of particle distributions are lacking. An increasing number of imaging studies emphasize the importance of mRNA localization in the definition of cell territory or the biogenesis of cell compartments. CORSEN is a new tool dedicated to three-dimensional (3D) distance measurements from imaging experiments, especially developed to assess the minimal distance between RNA molecules and cellular compartment markers. CORSEN includes a 3D segmentation algorithm allowing the extraction and the characterization of the cellular objects to be processed--surface determination, aggregate decomposition--for minimal distance calculations. CORSEN's main contribution lies in exploratory statistical analysis, cell population characterization, and high-throughput assays that are made possible by the implementation of a batch process analysis. We highlighted CORSEN's utility for the study of relative positions of mRNA molecules and mitochondria: CORSEN clearly discriminates mRNA localized to the vicinity of mitochondria from those that are translated on free cytoplasmic polysomes. Moreover, it quantifies the cell-to-cell variations of mRNA localization and emphasizes the necessity for statistical approaches. This method can be extended to assess the evolution of the distance between specific mRNAs and other cellular structures in different cellular contexts. CORSEN was designed for the biologist community with the concern to provide an easy-to-use and highly flexible tool that can be applied for diverse distance quantification issues.
International Nuclear Information System (INIS)
Leavell, W.H.; Mullens, J.A.
1981-01-01
A computational algorithm has been developed to measure transient, phase-interface velocity in two-phase, steam-water systems. The algorithm will be used to measure the transient velocity of steam-water mixtures during simulated PWR reflood experiments. By utilizing signals produced by two spatially separated impedance probes immersed in a two-phase mixture, the algorithm computes the average transit time of mixture fluctuations moving between the two probes. This transit time is computed by first measuring the phase shift between the two probe signals after transformation to the frequency domain and then computing the phase-shift slope by a weighted least-squares fitting technique. Our algorithm, which has been tested with both simulated and real data, is able to accurately track velocity transients as fast as 4 m/s/s
Energy Technology Data Exchange (ETDEWEB)
Bedington, Robert, E-mail: r.bedington@nus.edu.sg; Kataria, Dhiren; Smith, Alan
2015-09-01
The CATS (Cylindrical And Tiny Spectrometer) electrostatic optics geometry features multiple nested cylindrical analysers to simultaneously measure multiple energies of electron and multiple energies of ion in a configuration that is targeted at miniaturisation and MEMS fabrication. In the prototyped model, two configurations of cylindrical analyser were used, featuring terminating side-plates that caused particle trajectories to either converge (C type) or diverge (D type) in the axial direction. Simulations show how these different electrode configurations affect the particle focussing and instrument parameters; C-type providing greater throughputs but D-type providing higher resolving powers. The simulations were additionally used to investigate unexpected plate spacing variations in the as-built model, revealing that the k-factors are most sensitive to the width of the inter-electrode spacing at its narrowest point. - Highlights: • A new nested cylindrical miniaturised electrostatic analyser geometry is described. • “Converging” (C) and “diverging” (D) type channel properties are investigated. • C channels are shown to have greater throughputs and D greater resolving powers. • Plate factors are shown to be sensitive to the minimum in inter-electrode spacing.
Directory of Open Access Journals (Sweden)
Susanne Unverzagt
Full Text Available This study is an in-depth analysis to explain statistical heterogeneity in a systematic review of implementation strategies to improve guideline adherence of primary care physicians in the treatment of patients with cardiovascular diseases. The systematic review included randomized controlled trials from a systematic search in MEDLINE, EMBASE, CENTRAL, conference proceedings and registers of ongoing studies. Implementation strategies were shown to be effective, with substantial heterogeneity of treatment effects across all investigated strategies. The primary aim of this study was to explain the different effects of eligible trials and to identify methodological and clinical effect modifiers. Random effects meta-regression models were used to simultaneously assess the influence of multimodal implementation strategies and effect modifiers on physician adherence. Effect modifiers included the staff responsible for implementation, level of prevention and definition of the primary outcome, unit of randomization, duration of follow-up and risk of bias. Six clinical and methodological factors were investigated as potential effect modifiers of the efficacy of different implementation strategies on guideline adherence in primary care practices on the basis of information from 75 eligible trials. Five effect modifiers were able to explain a substantial amount of statistical heterogeneity. Physician adherence was improved by 62% (95% confidence interval (CI) 29 to 104%) or 29% (95% CI 5 to 60%) in trials where other non-medical professionals or nurses, respectively, were included in the implementation process. Improvement of physician adherence was more successful in primary and secondary prevention of cardiovascular diseases (around 30%; 95% CI -2 to 71% and 31%; 95% CI 9 to 57%, respectively) compared to tertiary prevention. This study aimed to identify effect modifiers of implementation strategies on physician adherence. Especially the cooperation of different health
Statistical classification of road pavements using near field vehicle rolling noise measurements.
Paulo, Joel Preto; Coelho, J L Bento; Figueiredo, Mário A T
2010-10-01
Low noise surfaces have been increasingly considered as a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement which implements the Close-Proximity method. A set of features, characterizing the properties of the road pavement, was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those that are most relevant in predicting the type of pavement, while reducing the computational cost. A set of different types of road pavement segments were tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.
K2 and K2*: efficient alignment-free sequence similarity measurement based on Kendall statistics.
Lin, Jie; Adjeroh, Donald A; Jiang, Bing-Hua; Jiang, Yue
2018-05-15
Alignment-free sequence comparison methods can compute the pairwise similarity between a huge number of sequences much faster than sequence-alignment based methods. We propose a new non-parametric alignment-free sequence comparison method, called K2, based on the Kendall statistics. Compared to other state-of-the-art alignment-free comparison methods, K2 demonstrates competitive performance in generating the phylogenetic tree, in evaluating functionally related regulatory sequences, and in computing the edit distance (similarity/dissimilarity) between sequences. Furthermore, the K2 approach is much faster than the other methods. An improved method, K2*, is also proposed, which is able to determine the appropriate algorithmic parameter (length) automatically, without first considering different values. Comparative analysis with the state-of-the-art alignment-free sequence similarity methods demonstrates the superiority of the proposed approaches, especially with increasing sequence length, or increasing dataset sizes. The K2 and K2* approaches are implemented in the R language as a package and are freely available for open access (http://community.wvu.edu/daadjeroh/projects/K2/K2_1.0.tar.gz). yueljiang@163.com. Supplementary data are available at Bioinformatics online.
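The underlying idea, rank-correlating k-mer count vectors with a Kendall statistic, can be sketched as follows. This simplification is not the published K2 implementation (which is an R package), and the toy sequences are invented; it only shows why similar sequences score higher than dissimilar ones.

```python
from itertools import product

def kmer_counts(seq, k=2):
    """Count all 4**k DNA k-mers of `seq` in a fixed canonical order."""
    kmers = [''.join(p) for p in product('ACGT', repeat=k)]
    return [sum(1 for i in range(len(seq) - k + 1) if seq[i:i + k] == km)
            for km in kmers]

def kendall_tau(x, y):
    """Kendall rank correlation (tau-a) between two count vectors."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) // 2)

a = "ACGTACGTACGT"          # toy sequences for illustration only
b = "ACGTACGTACGA"          # one substitution away from a
c = "GGGGCCCCGGGG"          # compositionally unrelated to a
sim_ab = kendall_tau(kmer_counts(a), kmer_counts(b))
sim_ac = kendall_tau(kmer_counts(a), kmer_counts(c))
```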
Behr, Guilherme A; Patel, Jay P; Coote, Marg; Moreira, Jose C F; Gelain, Daniel P; Steiner, Meir; Frey, Benicio N
2017-05-01
Previous studies have reported that salivary concentrations of certain hormones correlate with their respective serum levels. However, most of these studies did not control for potential blood contamination in saliva. In the present study we developed a statistical method to test the amount of blood contamination that needs to be avoided in saliva samples for the following hormones: cortisol, estradiol, progesterone, testosterone and oxytocin. Saliva and serum samples were collected from 38 healthy, medication-free women (mean age=33.8±7.3yr.; range=19-45). Serum and salivary hormonal levels and the amount of transferrin in saliva samples were determined using enzyme immunoassays. Salivary transferrin levels did not correlate with salivary cortisol or estradiol (up to 3mg/dl), but they were positively correlated with salivary testosterone, progesterone and oxytocin (p < 0.05). Salivary transferrin should therefore be measured alongside these hormones in order to determine the level of blood contamination that might affect specific hormonal salivary concentrations. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
A statistical examination of the practical problems of measurement in accountancy tanks
International Nuclear Information System (INIS)
Davies, W.; Good, P.T.; Hamlin, A.G.
1979-01-01
In the first part of the paper the general problems of measurement in large accountancy tanks are considered. The generalized tank is assumed to have an extended geometry for the avoidance of criticality, to be fitted with pneumatic level indicating devices and with temperature sensors, and to contain liquid to be accounted, such as that derived from irradiated fuel elements, which is sufficiently active to generate appreciable heat and also radiolytic gases. Possible uncertainties contributed to the final measurement of fissile material contained in or discharged from the tank by the effects of hydrostatic heads, temperature, radiolysis, surface tension, and drainage are considered in detail. The magnitude of these is established from practical data and the errors combined in order to estimate the best possible performance, which, under the specified conditions, appears to be about ±0.3% (1 sigma). The implications for the design of large accountancy tanks are considered, with particular reference to the design of accountancy tanks for future plants where the above precision may not be adequate. The second part of the paper considers practical approaches to the problem of ensuring that actual performance of the measuring system approaches the best possible as closely as possible. In particular, a system of operation in which the accountancy tank is utilized essentially as a fixed volume, with the measuring systems restricted to determining small variations from this nominal volume, offers considerable promise
A statistical approach for measuring dislocations in 2D photonic crystals
DEFF Research Database (Denmark)
Malureanu, Radu; Frandsen, Lars Hagedorn
2008-01-01
In this paper, a comparison between the placement accuracy of lattice atoms in photonic crystal structures fabricated with different lithographic techniques is made. Using atomic force microscopy measurements and self-developed algorithms for calculating the hole positions within less than 0.01nm...
Velthof, G.L.; Oenema, O.
1995-01-01
Accurate estimates of total nitrous oxide (N2O) losses from grasslands derived from flux-chamber measurements are hampered by the large spatial and temporal variability of N2O fluxes from these sites. In this study, four methods for the calculation o
Quantum statistics measurements using 2-, 3- and 4-pion Bose-Einstein correlations
CERN. Geneva
2014-01-01
We also present measurements of the source radii with 3-pion Bose-Einstein cumulants in pp, p-Pb, and Pb-Pb collisions. The resulting comparisons of the radii in all three systems at similar multiplicity have implications for the hydrodynamic modeling of high-energy collisions.
International Nuclear Information System (INIS)
Felipe, A.; Martin, M.; Valdes, T.
1992-01-01
The quantification of radiological environmental contamination is usually carried out by means of sample measurements around the emission points. These data are subject to the so-called Lower Limit of Detection, which leaves them statistically censored. The following topics have been included in our work: (a) correction of the mean values of the radiological contamination levels by estimation of their distribution; (b) development of the computer programs to carry out the former correction of estimators; (c) estimation of the existing correlation among the different types of measurements. (author)
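A minimal illustration of why censoring matters (not the distribution-estimation correction the abstract develops): simple substitution rules for values below the detection limit bracket the attainable sample mean, which is what distributional corrections improve upon. The readings and the detection limit below are invented.

```python
def censored_mean(values, lod, method="half"):
    """Mean of left-censored data via simple substitution.
    Non-detects (None) are replaced by 0, LOD/2, or LOD."""
    subs = {"zero": 0.0, "half": lod / 2, "lod": lod}[method]
    filled = [v if v is not None else subs for v in values]
    return sum(filled) / len(filled)

# Hypothetical activity readings (Bq/kg); None = below the detection limit 0.5
readings = [1.2, None, 0.8, None, 2.0, 0.6, None, 1.5]
lo = censored_mean(readings, 0.5, "zero")    # lower bound on the mean
mid = censored_mean(readings, 0.5, "half")   # common LOD/2 convention
hi = censored_mean(readings, 0.5, "lod")     # upper bound on the mean
```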
International Nuclear Information System (INIS)
Walder, A.J.; Freedman, P.A.
1992-01-01
An inductively coupled plasma source was coupled to a magnetic sector mass analyser equipped with seven Faraday detectors. An electrostatic filter located between the plasma source and the magnetic sector was used to create a double focusing system. Isotopic ratio measurements of uranium and lead standards revealed levels of internal and external precision comparable to those obtained using thermal ionization mass spectrometry. An external precision of 0.014% was obtained from the 235U:238U measurement of six samples of a National Bureau of Standards (NBS) Standard Reference Material (SRM) U-500, while an RSD of 0.022% was obtained from the 206Pb:204Pb measurement of six samples of NBS SRM Pb-981. Measured isotopic ratios deviated from the NBS value by approximately 0.9% per atomic mass unit. This deviation approximates to a linear function of mass bias and can therefore be corrected for by the analysis of standards. The analysis of NBS SRM Sr-987 revealed superior levels of internal and external precision. The normalization of the 87Sr:86Sr ratio to the 86Sr:88Sr ratio reduced the RSD to approximately 0.008%. The measured ratio was within 0.01% of the NBS value and the day-to-day reproducibility was consistent within one standard deviation. (author)
Energy Technology Data Exchange (ETDEWEB)
Stoven, G.; Klann, R.; Zhong, Z.; Nuclear Engineering Division
2007-08-28
The OSMOSE program is a collaboration on reactor physics experiments between the United States Department of Energy and the French Commissariat à l'Energie Atomique (CEA). At the working level, it is a collaborative effort between the Argonne National Laboratory and the CEA Cadarache Research Center. The objective of this program is to measure very accurate integral reaction rates in representative spectra for the actinides important to future nuclear system designs, and to provide the experimental data for improving the basic nuclear data files. The main outcome of the OSMOSE measurement program will be an experimental database of reactivity-worth measurements in different neutron spectra for the heavy nuclides. This database can then be used as a benchmark to verify and validate reactor analysis codes. The OSMOSE program (Oscillation in Minerve of isotopes in Eupraxic Spectra) aims at improving neutronic predictions of advanced nuclear fuels through oscillation measurements in the MINERVE facility on samples containing the following separated actinides: 232Th, 233U, 234U, 235U, 236U, 238U, 237Np, 238Pu, 239Pu, 240Pu, 241Pu, 242Pu, 241Am, 243Am, 244Cm, and 245Cm. The first part of this report provides an overview of the experimental protocol and the typical processing of a series of experimental results which is currently performed at CEA-Cadarache. In the second part of the report, improvements to this technique are presented, as well as the program that was created to process oscillation measurement results from the MINERVE facility in the future.
International Nuclear Information System (INIS)
Engstroem, L.
1983-01-01
This paper reports the relative population of the levels 3p, 3d, 4d, 5d, 4f, 5g, 6g, 6h, 7h, 7i, 8i and 8k in Na-like sulfur, S VI, after beam-foil excitation at an energy of 3 MeV. For the first time the ANDC technique has been used to obtain the relative efficiency calibration of the detection system at discrete points in the wavelength interval 400-5000 Å, from the analyses of measured decay curves. The advantages and limitations of this method are discussed. The populations obtained with this new technique are compared to previous measurements in multiply ionized atoms. The preferential population of the 3p and 3d levels observed in other Na-like ions is now accurately established. For the higher lying levels an almost constant population is observed. (Auth.)
Directory of Open Access Journals (Sweden)
Jin-song BAO
2008-03-01
Pasting properties are among the most important characteristics of starch, determining its applications in food processing and other industries. The pasting temperature derived from the Rapid Visco Analyser (RVA; Newport Scientific) is, in most cases, overestimated by the Thermocline for Windows software. Here, two methods facilitating accurate measurement of pasting temperature by RVA are described. One is to change the parameter settings to 'screen' for the true point where the pasting viscosity begins to increase; the other is to manually record the time (T1) when the pasting viscosity begins to increase and to calculate the pasting temperature for rice flour with the formula (45/3.8)×(T1−1)+50. The latter method gave a manually determined pasting temperature that was significantly correlated with the gelatinization temperature measured by differential scanning calorimetry.
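The manual formula above is easy to mechanize. A minimal sketch, assuming the RVA heating profile that the formula implies (a 1-minute hold at 50 °C followed by a 45 °C ramp over 3.8 minutes); the function name and test times are illustrative:

```python
def pasting_temperature(t1_minutes):
    """Convert the manually recorded time T1 (min) at which RVA viscosity
    starts to rise into a pasting temperature (deg C) for rice flour:
    (45/3.8) deg C per minute of ramp, after a 1-minute hold at 50 deg C."""
    return (45.0 / 3.8) * (t1_minutes - 1.0) + 50.0

print(pasting_temperature(1.0))  # start of the ramp: 50.0
print(pasting_temperature(4.8))  # end of the ramp: 95.0
```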
Harrison, J; Hodson, A W; Skillen, A W; Stappenbeck, R; Agius, L; Alberti, K G
1988-03-01
Methods are described for the analysis of glucose, lactate, pyruvate, alanine, glycerol, 3-hydroxybutyrate and acetoacetate in perchloric acid extracts of human blood, using the Cobas Bio centrifugal analyser fitted with a fluorimetric attachment. Intra-assay and inter-assay coefficients of variation ranged from 1.9 to 7.9% and from 1.0 to 7.2% respectively. Correlation coefficients ranged from 0.96 to 0.99 against established continuous-flow and manual spectrophotometric methods. All seven metabolites can be measured using a single perchloric acid extract of 20 microliter of blood. The versatility of the assays is such that as little as 100 pmol pyruvate, 3-hydroxybutyrate or as much as 15 nmol glucose can be measured in the same 20 microliter extract.
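The quoted intra- and inter-assay precision figures are coefficients of variation; a short sketch of how such a CV is computed from replicate results (the replicate values here are hypothetical, not from the paper):

```python
import statistics

def coefficient_of_variation(values):
    # CV (%) = sample standard deviation / mean * 100
    return statistics.stdev(values) / statistics.mean(values) * 100.0

replicates = [5.1, 5.3, 4.9, 5.2, 5.0]  # hypothetical glucose replicates, mmol/L
print(round(coefficient_of_variation(replicates), 2))
```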
Murillo, Lourdes García; Cortese, Samuele; Anderson, David; Martino, Adriana Di; Castellanos, Francisco Xavier
2015-01-01
Introduction Our aim was to assess differences in movement measures in Attention-Deficit/Hyperactivity Disorder (ADHD) vs. typically developing (TD) controls. Methods We performed meta-analyses of published studies on motion measures contrasting ADHD with controls. We also conducted a case-control study with children/adolescents (n=61 TD, n=62 ADHD) and adults (n=30 TD, n=19 ADHD) using the McLean Motion Activity Test, semi-structured diagnostic interviews and the Behavior Rating Inventory of Executive Function and Conners (Parent, Teacher, Self) Rating Scales. Results Meta-analyses revealed medium-to-large effect sizes for actigraph (standardized mean difference [SMD]: 0.64, 95% confidence interval [CI]: 0.43, 0.85) and motion tracking system (SMD: 0.92, 95% CI: 0.65, 1.20) measures in differentiating individuals with ADHD from controls. Effect sizes were similar in studies of children/adolescents (SMD: 0.75, 95% CI: 0.50, 1.01) and of adults (SMD: 0.73, 95% CI: 0.46, 1.00). In our sample, ADHD groups differed significantly in number of Head Movements (p=0.02 in children; p=0.002 in adults), Displacement (p=0.009/pADHD (d=0.45, 95% CI: 0.08, 0.82). In the concurrent go/no-go task, reaction time variability was significantly greater in ADHD (pADHD even in adults. Our results suggest that objective locomotion measures may be particularly useful in evaluating adults with possible ADHD. PMID:25770940
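For readers unfamiliar with the SMD metric used in the meta-analysis, this is the standard pooled-SD standardized mean difference with its usual large-sample 95% CI. The group summary numbers in the demo are invented, not taken from the study:

```python
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d (pooled-SD standardized mean difference) with an
    approximate large-sample 95% confidence interval."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

d, ci = smd_with_ci(10.0, 2.0, 50, 9.0, 2.0, 50)  # hypothetical ADHD vs TD scores
print(f"SMD = {d:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```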
Taneja, S R; Gupta, R C; Kumar, Jagdish; Thariyan, K K; Verma, Sanjeev
2005-01-01
A clinical chemistry analyser is a high-performance microcontroller-based photometric biochemical analyser used to measure various blood biochemical parameters such as blood glucose, urea, protein, bilirubin, and so forth, and also to follow the enzyme kinetics observed while performing other biochemical tests such as ALT (alanine aminotransferase), amylase, and AST (aspartate aminotransferase). These tests are of great significance in biochemistry and are used for diagnostic purposes and for classifying various disorders and diseases such as diabetes, liver malfunction, and renal diseases. An inexpensive clinical chemistry analyser developed by the authors is described in this paper. It is an open system in which any reagent kit available in the market can be used. The system is based on the principle of absorbance-transmittance photometry. The design is built around an 80C31 microcontroller with RAM, EPROM, and peripheral interface devices. The developed system incorporates a light source, an optical module, interference filters of various wavelengths, a Peltier device for maintaining the required temperature of the mixture in the flow cell, a peristaltic pump for sample aspiration, a graphic LCD for displaying blood parameters, patient test results and kinetic test graphs, a 40-column mini thermal printer, and a 32-key keyboard for executing various functions. Laboratory evaluation of the instrument covered the versatility of the analyser, the flexibility of the software, and the treatment of samples. The prototype was tested and evaluated successfully over 1000 blood samples for seventeen blood parameters. The evaluation was carried out at the Department of Biochemistry, Government Medical College and Hospital. The test results were found to be comparable with those of other standard instruments.
Directory of Open Access Journals (Sweden)
Wei Cao
2017-09-01
Coal mining has brought a series of ecological and environmental problems to permafrost regions. Taking the Muli coal-mining area as an example, this article analyses the environmental harm caused by coal mining, and its mitigation measures, in the permafrost regions of the Qinghai–Tibet Plateau. The influence of open-pit mining on the permafrost surrounding the pit is analysed using numerical simulation. The results show that (1) given the interrelation between coal mining and the permafrost environment, the main environmental harms are changes to the permafrost and to the natural environment of cold regions; (2) once the surface temperature rises due to open-pit mining, the permafrost will degrade progressively over the life of the exploitation; accounting for solar radiation and the climatic and geological conditions around the pit edge, the maximum thaw depth will exceed 2 m; and (3) protection measures are proposed to avoid adverse impacts of coal mining on the permafrost environment. This provides a scientific basis for resource development and environmental protection in cold regions.
Siegelman, Noam; Bogaerts, Louisa; Kronenfeld, Ofer; Frost, Ram
2017-10-07
From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible "statistical" properties that are the object of learning. Much less attention has been given to defining what "learning" is in the context of "statistical learning." One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative forced-choice questions that follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and that it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures into our theory of SL.
Energy Technology Data Exchange (ETDEWEB)
Bruschewski, Martin; Schiffer, Heinz-Peter [Technische Universitaet Darmstadt, Institute of Gas Turbines and Aerospace Propulsion, Darmstadt (Germany); Freudenhammer, Daniel [Technische Universitaet Darmstadt, Institute of Fluid Mechanics and Aerodynamics, Center of Smart Interfaces, Darmstadt (Germany); Buchenberg, Waltraud B. [University Medical Center Freiburg, Medical Physics, Department of Radiology, Freiburg (Germany); Grundmann, Sven [University of Rostock, Institute of Fluid Mechanics, Rostock (Germany)
2016-05-15
Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75% is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented. (orig.)
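A toy numerical illustration of the effect described: when the image region containing the flow carries more noise than the artifact-free background, the conventional background-based estimate understates the measurement uncertainty. All numbers below are synthetic, chosen only to show the direction of the bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic velocity data: quiet background vs. a noisier flow region.
background = rng.normal(0.0, 1.0, size=10000)   # artifact-free background
flow = 2.0 + rng.normal(0.0, 2.0, size=10000)   # image region with the flow sample

sigma_bg = background.std(ddof=1)    # conventional, background-based estimate
sigma_flow = flow.std(ddof=1)        # estimate taken from the flow region itself

print(f"background-based: {sigma_bg:.2f}, flow-region-based: {sigma_flow:.2f}")
```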
DEFF Research Database (Denmark)
Edberg, Anna; Freyhult, Eva; Sand, Salomon
- and inter-national data excerpts. For example, major PCA loadings helped deciphering both shared and disparate features, relating to food groups, across Danish and Swedish preschool consumers. Data interrogation, reliant on the above-mentioned composite techniques, disclosed one outlier dietary prototype...... prototype with the latter property was identified also in the Danish data material, but without low consumption of Vegetables or Fruit & berries. The second MDA-type of data interrogation involved Supervised Learning, also known as Predictive Modelling. These exercises involved the Random Forest (RF...... not elaborated on in-depth, output from several analyses suggests a preference for energy-based consumption data for Cluster Analysis and Predictive Modelling, over those appearing as weight....
Raffelt, David A.; Smith, Robert E.; Ridgway, Gerard R.; Tournier, J-Donald; Vaughan, David N.; Rose, Stephen; Henderson, Robert; Connelly, Alan
2015-01-01
In brain regions containing crossing fibre bundles, voxel-average diffusion MRI measures such as fractional anisotropy (FA) are difficult to interpret, and lack within-voxel single fibre population specificity. Recent work has focused on the development of more interpretable quantitative measures that can be associated with a specific fibre population within a voxel containing crossing fibres (herein we use fixel to refer to a specific fibre population within a single voxel). Unfortunately, traditional 3D methods for smoothing and cluster-based statistical inference cannot be used for voxel-based analysis of these measures, since the local neighbourhood for smoothing and cluster formation can be ambiguous when adjacent voxels may have different numbers of fixels, or ill-defined when they belong to different tracts. Here we introduce a novel statistical method to perform whole-brain fixel-based analysis called connectivity-based fixel enhancement (CFE). CFE uses probabilistic tractography to identify structurally connected fixels that are likely to share underlying anatomy and pathology. Probabilistic connectivity information is then used for tract-specific smoothing (prior to the statistical analysis) and enhancement of the statistical map (using a threshold-free cluster enhancement-like approach). To investigate the characteristics of the CFE method, we assessed sensitivity and specificity using a large number of combinations of CFE enhancement parameters and smoothing extents, using simulated pathology generated with a range of test-statistic signal-to-noise ratios in five different white matter regions (chosen to cover a broad range of fibre bundle features). The results suggest that CFE input parameters are relatively insensitive to the characteristics of the simulated pathology. We therefore recommend a single set of CFE parameters that should give near optimal results in future studies where the group effect is unknown. We then demonstrate the proposed method
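CFE's enhancement step is modelled on threshold-free cluster enhancement (TFCE). A 1-D sketch of plain TFCE conveys the idea (extent^E × height^H integrated over thresholds); CFE itself replaces the spatial neighbourhood with tractography-derived connectivity, which this toy code does not attempt. Parameter defaults follow the common TFCE choices:

```python
import numpy as np

def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """Threshold-free cluster enhancement on a 1-D statistic map: each
    position accumulates extent**E * h**H * dh over every threshold h
    at which it sits inside a supra-threshold run."""
    out = np.zeros_like(stat, dtype=float)
    for h in np.arange(dh, stat.max() + dh / 2, dh):
        above = stat >= h
        i = 0
        while i < len(stat):
            if above[i]:
                j = i
                while j < len(stat) and above[j]:
                    j += 1
                out[i:j] += (j - i) ** E * h ** H * dh  # run [i, j) has extent j-i
                i = j
            else:
                i += 1
    return out

scores = tfce_1d(np.array([0.0, 1.0, 1.0, 0.0, 3.0]))
print(scores)  # the tall isolated peak outscores the broad low run
```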
International Nuclear Information System (INIS)
Roach, J.F.
1992-01-01
The electrical insulation system of the SSC long dipole magnets is reviewed and potential dielectric failure modes are discussed. Electrical insulation fabrication and assembly issues with respect to rate-production manufacturability are addressed. The automation required for rate assembly of electrical insulation components will require critical online visual and dielectric screening tests to ensure production quality. Storage and assembly areas must be designed to prevent foreign particles from becoming entrapped in the insulation during critical coil winding, molding, and collaring operations. All hand-assembly procedures involving dielectrics must be performed with rigorous attention to their impact on insulation integrity. Individual dipole magnets must have a sufficiently low probability of electrical insulation failure under all normal and fault-mode voltage conditions such that the series of magnets in the SSC rings has an acceptable Mean Time Between Failures (MTBF) with respect to dielectric failure events. Statistical models appropriate for breakdown failure analysis of large electrical systems are applied to the SSC magnet rings. The MTBF of the SSC system is related to the failure database for individual dipole magnet samples.
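The series-system reliability argument can be made concrete. Assuming independent, exponentially distributed dielectric failures (an assumption for illustration; the paper applies more detailed breakdown-statistics models), failure rates add across the ring:

```python
def system_mtbf(n_units, mtbf_single_hours):
    """Series system of independent units with exponential failure times:
    the system failure rate is n times the single-unit rate, so the
    system MTBF is the single-unit MTBF divided by the unit count."""
    return mtbf_single_hours / n_units

# Hypothetical numbers: many dipoles, each individually very reliable.
print(system_mtbf(8000, 1e9))  # 125000.0 hours
```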
Directory of Open Access Journals (Sweden)
Özlem TÜRKŞEN
2018-03-01
Some experimental designs are composed of replicated response measures in which the replications cannot be identified exactly and may carry uncertainty beyond randomness. Classical regression analysis may then be improper for modelling the designed data because the probabilistic modelling assumptions are violated. In this case, fuzzy regression analysis can be used as a modelling tool. In this study, the replicated response values are newly formed into fuzzy numbers by using descriptive statistics of the replications and the golden ratio. The main aim of the study is to obtain the most suitable fuzzy model for replicated response measures through fuzzification of the replicated values, taking into account the data structure of the replications in a statistical framework. Here, the response and the unknown model coefficients are considered triangular type-1 fuzzy numbers (TT1FNs), whereas the inputs are crisp. Predicted fuzzy models are obtained according to the proposed fuzzification rules by using the Fuzzy Least Squares (FLS) approach. The performances of the predicted fuzzy models are compared using the Root Mean Squared Error (RMSE) criterion. A data set from the literature, called the wheel cover component data set, is used to illustrate the performance of the proposed approach, and the obtained results are discussed. The calculations show that the combined formulation of the descriptive statistics and the golden ratio is the most preferable fuzzification rule according to the well-known decision-making method TOPSIS.
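One plausible reading of such a fuzzification rule, shown purely as an assumed construction (the paper defines and compares its own rules): take the mean of the replications as the centre of a triangular type-1 fuzzy number and scale the sample standard deviation by the golden ratio to obtain the spreads:

```python
import statistics

GOLDEN_RATIO = (1 + 5 ** 0.5) / 2

def fuzzify_replicates(replicates):
    """Form a symmetric triangular fuzzy number (left, centre, right)
    from replicated response values: centre = mean of the replications,
    spread = sample standard deviation scaled down by the golden ratio."""
    centre = statistics.mean(replicates)
    spread = statistics.stdev(replicates) / GOLDEN_RATIO
    return (centre - spread, centre, centre + spread)

print(fuzzify_replicates([9.0, 10.0, 11.0]))
```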
Energy Technology Data Exchange (ETDEWEB)
Chang, Wen-Kuei; Hong, Tianzhen
2013-01-01
Occupancy profile is one of the driving factors behind discrepancies between the measured and simulated energy consumption of buildings. The frequencies of occupants leaving their offices and the corresponding durations of absences have significant impact on energy use and the operational controls of buildings. This study used statistical methods to analyze the occupancy status, based on measured lighting-switch data in five-minute intervals, for a total of 200 open-plan (cubicle) offices. Five typical occupancy patterns were identified based on the average daily 24-hour profiles of the presence of occupants in their cubicles. These statistical patterns were represented by a one-square curve, a one-valley curve, a two-valley curve, a variable curve, and a flat curve. The key parameters that define the occupancy model are the average occupancy profile together with probability distributions of absence duration, and the number of times an occupant is absent from the cubicle. The statistical results also reveal that the number of absence occurrences decreases as total daily presence hours decrease, and the duration of absence from the cubicle decreases as the frequency of absence increases. The developed occupancy model captures the stochastic nature of occupants moving in and out of cubicles, and can be used to generate a more realistic occupancy schedule. This is crucial for improving the evaluation of the energy saving potential of occupancy based technologies and controls using building simulations. Finally, to demonstrate the use of the occupancy model, weekday occupant schedules were generated and discussed.
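A stochastic occupancy schedule of the kind described can be sampled from a simple two-state Markov chain in five-minute steps. The transition probabilities below are illustrative placeholders; the paper derives the presence profiles and absence-duration distributions from the measured lighting-switch data:

```python
import random

random.seed(42)

def generate_schedule(p_leave=0.05, p_return=0.30, steps=108):
    """Sample presence (1) / absence (0) in 5-minute steps over a 9-hour
    day (108 steps). p_leave / p_return are the per-step probabilities
    of leaving and of returning to the cubicle."""
    present, schedule = True, []
    for _ in range(steps):
        schedule.append(1 if present else 0)
        if present and random.random() < p_leave:
            present = False
        elif not present and random.random() < p_return:
            present = True
    return schedule

day = generate_schedule()
print(f"presence fraction: {sum(day) / len(day):.2f}")
```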
Statistical analysis of vehicle loads measured with three different vehicle weighing devices
CSIR Research Space (South Africa)
Mkhize, ZQP
2005-07-01
This study introduces a new scale for weighing individual tyres of slow moving vehicles. The new technology... that vehicles exert on pavements plays a vital part in the deterioration of the structural and functional capacity of the road. It also influences the safety of the vehicles, especially when vehicles are operated under overloaded and/or inappropriately loaded...
International Nuclear Information System (INIS)
Bettella, D; Murari, A; Stamp, M; Testa, D
2003-01-01
Direct measurements of the isotope composition of tokamak plasmas are in general quite difficult and have therefore been performed very seldom. On the other hand, the importance of this measurement will increase as future experiments focus progressively on plasmas approaching reactor conditions. In this paper, we report for the first time encouraging experimental evidence supporting a new method to determine the radial profile of the density ratio n{sub H}/(n{sub H} + n{sub D}), based on neutral particle analyser (NPA) measurements. The measurements have been performed in JET with the ISotope SEParator (ISEP), an NPA device specifically developed to measure the energy spectra of the three hydrogen isotopes with very high accuracy and low cross-talk. The data presented here have been collected in two different experimental conditions. In the first case, the density ratio was kept constant during the discharge. The isotope ratio derived from the ISEP has been compared with the results of visible spectroscopy at the edge and with the isotope composition derived from an Alfven eigenmode active diagnostic (AEAD) system at about half the minor radius for the discharges reported in this paper. A preliminary evaluation of the effects of additional heating on the measurements has also been carried out. In the second set of experiments, the isotope composition of deuterium plasmas was abruptly changed with suitable short blips of hydrogen, in order to assess the capability of the method to study the transport of the hydrogen isotope species. Future developments of the methodology and its application to the evaluation of hydrogen transport coefficients are also briefly discussed. The results obtained so far motivate further development of the technique, which constitutes one of the few candidate diagnostic approaches viable for ITER
Directory of Open Access Journals (Sweden)
E. A. K. Ford
2008-02-01
Data from the Fabry-Perot Interferometers at KEOPS (Sweden), Sodankylä (Finland), and Svalbard (Norway) have been analysed for gravity wave activity on all the clear nights from 2000 to 2006. A total of 249 nights were available from KEOPS, 133 from Sodankylä and 185 from the Svalbard FPI. A Lomb-Scargle analysis was performed on each of these nights to identify the periods of any wave activity during the night. Comparisons between many nights of data allow the general characteristics of the waves present in the high-latitude upper thermosphere to be determined. Comparisons were made between the different parameters, the atomic oxygen intensities and the thermospheric winds and temperatures, and for each parameter the distribution of wave frequencies was determined. No dependence of the number of waves on geomagnetic activity levels, or on position in the solar cycle, was found. All the FPIs have had different detectors at various times, producing different time resolutions of the data, so comparisons between the different years, and between data from different sites, showed how the time resolution determines which waves are observed. In addition to the cutoff due to the Nyquist frequency, poor-resolution observations significantly reduce the number of short-period waves (<1 h period) that may be detected with confidence. The length of the dataset, usually determined by the length of the night, was the main factor influencing the number of long-period waves (>5 h) detected. Comparisons between the number of gravity waves detected at KEOPS and Sodankylä over all the seasons showed a proportion of waves to nights used that was similar for both sites, as expected since the two sites are at similar latitudes and therefore at similar locations with respect to the auroral oval, confirming this as a likely source region. Svalbard showed fewer waves with short periods than KEOPS data for a season when both had the same time resolution data
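The Lomb-Scargle periodogram used for the night-by-night wave search handles unevenly sampled FPI time series directly. A compact NumPy implementation of the classical estimator, run here on a synthetic 2-hour wave (all data below are simulated, not FPI measurements):

```python
import numpy as np

def lomb_scargle(t, y, omegas):
    """Classical Lomb-Scargle periodogram for unevenly sampled data
    (mean-subtracted, unnormalized); omegas are angular frequencies."""
    y = y - y.mean()
    power = []
    for w in omegas:
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power.append(0.5 * ((c @ y) ** 2 / (c @ c) + (s @ y) ** 2 / (s @ s)))
    return np.array(power)

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 10.0, 200))                       # observation times, hours
y = np.sin(2 * np.pi * t / 2.0) + 0.3 * rng.normal(size=200)   # 2-hour wave + noise
periods = np.linspace(0.5, 6.0, 200)                           # trial periods, hours
power = lomb_scargle(t, y, 2 * np.pi / periods)
print(f"detected period ~ {periods[np.argmax(power)]:.1f} h")
```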
Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition
Dahl, Milo D.
2013-01-01
Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
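The second-order two-point correlations at the core of this analysis reduce, per component pair, to a normalized cross-correlation over time lags; the lag of the peak divided by the probe separation gives a phase (convection) velocity. A minimal sketch on synthetic signals (the real computation runs over PIV velocity fields):

```python
import numpy as np

def space_time_correlation(u1, u2, max_lag):
    """Normalized two-point cross-correlation R(k) between velocity
    fluctuation histories at two points, for time lags k = 0..max_lag."""
    u1 = u1 - u1.mean()
    u2 = u2 - u2.mean()
    norm = np.sqrt((u1 @ u1) * (u2 @ u2))
    return {k: float(u1[:len(u1) - k] @ u2[k:]) / norm for k in range(max_lag + 1)}

rng = np.random.default_rng(1)
sig = rng.normal(size=4000)
u1, u2 = sig[3:], sig[:-3]   # downstream point sees the same pattern 3 steps later
corr = space_time_correlation(u1, u2, max_lag=10)
print(max(corr, key=corr.get))  # 3
```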
International Nuclear Information System (INIS)
Hoo, Christopher M.; Doan, Trang; Starostin, Natasha; West, Paul E.; Mecartney, Martha L.
2010-01-01
Optimal deposition procedures are determined for nanoparticle size characterization by atomic force microscopy (AFM). Accurate nanoparticle size distribution analysis with AFM requires non-agglomerated nanoparticles on a flat substrate. The deposition of polystyrene (100 nm), silica (300 and 100 nm), gold (100 nm), and CdSe quantum dot (2-5 nm) nanoparticles by spin coating was optimized for size distribution measurements by AFM. Factors influencing deposition include spin speed, concentration, solvent, and pH. A comparison using spin coating, static evaporation, and a new fluid cell deposition method for depositing nanoparticles is also made. The fluid cell allows for a more uniform and higher density deposition of nanoparticles on a substrate at laminar flow rates, making nanoparticle size analysis via AFM more efficient and also offers the potential for nanoparticle analysis in liquid environments.
Using complete measurement statistics for optimal device-independent randomness evaluation
International Nuclear Information System (INIS)
Nieto-Silleras, O; Pironio, S; Silman, J
2014-01-01
The majority of recent works investigating the link between non-locality and randomness, e.g. in the context of device-independent cryptography, do so with respect to some specific Bell inequality, usually the CHSH inequality. However, the joint probabilities characterizing the measurement outcomes of a Bell test are richer than just the degree of violation of a single Bell inequality. In this work we show how to take this extra information into account in a systematic manner in order to optimally evaluate the randomness that can be certified from non-local correlations. We further show that taking into account the complete set of outcome probabilities is equivalent to optimizing over all possible Bell inequalities, thereby allowing us to determine the optimal Bell inequality for certifying the maximal amount of randomness from a given set of non-local correlations. (paper)
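To make the "richer than a single Bell inequality" point concrete, here is how the CHSH value itself is assembled from the joint outcome probabilities of the four setting pairs. The distributions below are the ideal quantum ones saturating the Tsirelson bound, used purely as an example:

```python
import math

def correlator(p):
    """E = p(a = b) - p(a != b) from the joint outcome distribution
    {(a, b): probability} of one measurement-setting pair."""
    return sum(q if a == b else -q for (a, b), q in p.items())

def chsh_value(E):
    """CHSH combination S = E00 + E01 + E10 - E11; |S| <= 2 for local
    models, |S| <= 2*sqrt(2) for quantum correlations."""
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

# Ideal quantum joint probabilities at the maximal violation:
c2, s2 = math.cos(math.pi / 8) ** 2, math.sin(math.pi / 8) ** 2
aligned = {(0, 0): c2 / 2, (1, 1): c2 / 2, (0, 1): s2 / 2, (1, 0): s2 / 2}
anti = {(0, 0): s2 / 2, (1, 1): s2 / 2, (0, 1): c2 / 2, (1, 0): c2 / 2}
E = {(0, 0): correlator(aligned), (0, 1): correlator(aligned),
     (1, 0): correlator(aligned), (1, 1): correlator(anti)}
print(f"S = {chsh_value(E):.3f}")  # 2.828, the Tsirelson bound
```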
A Comprehensive Statistically-Based Method to Interpret Real-Time Flowing Measurements
Energy Technology Data Exchange (ETDEWEB)
Keita Yoshioka; Pinan Dawkrajai; Analis A. Romero; Ding Zhu; A. D. Hill; Larry W. Lake
2007-01-15
With the recent development of temperature measurement systems, continuous temperature profiles can be obtained with high precision. Small temperature changes can be detected by modern temperature measuring instruments such as fiber optic distributed temperature sensor (DTS) in intelligent completions and will potentially aid the diagnosis of downhole flow conditions. In vertical wells, since elevational geothermal changes make the wellbore temperature sensitive to the amount and the type of fluids produced, temperature logs can be used successfully to diagnose the downhole flow conditions. However, geothermal temperature changes along the wellbore being small for horizontal wells, interpretations of a temperature log become difficult. The primary temperature differences for each phase (oil, water, and gas) are caused by frictional effects. Therefore, in developing a thermal model for horizontal wellbore, subtle temperature changes must be accounted for. In this project, we have rigorously derived governing equations for a producing horizontal wellbore and developed a prediction model of the temperature and pressure by coupling the wellbore and reservoir equations. Also, we applied Ramey's model (1962) to the build section and used an energy balance to infer the temperature profile at the junction. The multilateral wellbore temperature model was applied to a wide range of cases at varying fluid thermal properties, absolute values of temperature and pressure, geothermal gradients, flow rates from each lateral, and the trajectories of each build section. With the prediction models developed, we present inversion studies of synthetic and field examples. These results are essential to identify water or gas entry, to guide flow control devices in intelligent completions, and to decide if reservoir stimulation is needed in particular horizontal sections. This study will complete and validate these inversion studies.
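The junction energy balance mentioned above amounts to a mixing-cup temperature. A minimal sketch with invented stream values (mass rate, heat capacity, temperature); the report's full model couples this with the wellbore and reservoir equations:

```python
def junction_temperature(streams):
    """Steady-state energy balance at a multilateral junction:
    T_mix = sum(m * cp * T) / sum(m * cp) over the merging streams.
    Each stream is (mass rate kg/s, heat capacity J/(kg K), temperature K)."""
    num = sum(m * cp * T for m, cp, T in streams)
    den = sum(m * cp for m, cp, _ in streams)
    return num / den

# Hypothetical laterals: a warmer main bore mixing with a cooler branch.
print(junction_temperature([(10.0, 4200.0, 350.0), (5.0, 4200.0, 340.0)]))
```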
Directory of Open Access Journals (Sweden)
A. D. Love
2010-01-01
Raw materials used in cement manufacturing normally have varying chemical compositions and require regular analyses for plant control purposes. This is achieved by using several analytical instruments, such as XRF and ICP. The values obtained for the major elements Ca, Si, Fe and Al are used to calculate the plant control parameters Lime Saturation Factor (LSF), Silica Ratio (SR) and Alumina Modulus (AM). These plant control parameters are used to regulate the mixing and blending of various raw meal components and to operate the plant optimally. Any errors and large fluctuations in these plant parameters not only influence the quality of the cement produced, but also have a major effect on the cost of production of cement clinker through their influence on the energy consumption and residence time in the kiln. This paper looks at the role that statistical variances in the analytical measurements of the major elements Ca, Si, Fe and Al can have on the ultimate LSF, SR and AM values calculated from these measurements. The influence of too high and too low values of the LSF, SR and AM on clinker quality and energy consumption is discussed, and acceptable variances in these three parameters, based on plant experience, are established. The effect of variances in the LSF, SR and AM parameters on the production costs is then analysed, and it is shown that variations of as large as 30% and as little as 5% can potentially occur. The LSF calculation incorporates most chemical elements and is therefore prone to the largest number of variations due to statistical variances in the analytical determinations of the chemical elements. Despite all these variations in LSF values, they actually produced the smallest influence on the production cost of the clinker. It is therefore concluded that the LSF value is the most practical parameter for plant control purposes.
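The three control moduli are computed directly from the oxide analyses. The formulas below are the widely used textbook definitions (the plant may apply site-specific variants), with illustrative oxide percentages:

```python
def plant_control_parameters(cao, sio2, al2o3, fe2o3):
    """Standard cement-chemistry control moduli from oxide percentages:
    LSF = 100 * CaO / (2.8 SiO2 + 1.2 Al2O3 + 0.65 Fe2O3)
    SR  = SiO2 / (Al2O3 + Fe2O3)
    AM  = Al2O3 / Fe2O3"""
    lsf = 100.0 * cao / (2.8 * sio2 + 1.2 * al2o3 + 0.65 * fe2o3)
    sr = sio2 / (al2o3 + fe2o3)
    am = al2o3 / fe2o3
    return lsf, sr, am

# Hypothetical raw meal analysis (wt %):
lsf, sr, am = plant_control_parameters(cao=65.0, sio2=21.0, al2o3=5.5, fe2o3=3.0)
print(f"LSF={lsf:.1f}  SR={sr:.2f}  AM={am:.2f}")
```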
Understanding Statistics - Cancer Statistics
Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.
On Some Statistical Properties of GRBs with Measured Redshifts Having Peaks in Optical Light Curves
Directory of Open Access Journals (Sweden)
Grigorii Beskin
2013-01-01
We studied the subset of optical light curves of gamma-ray bursts with measured redshifts and well-sampled R-band data that have clearly detected peaks. Among 43 such events, 11 are prompt optical peaks (P), coincident with gamma-ray activity, 22 are purely afterglows (A), and 10 more carry the signatures of an underlying activity (A(U)). We studied pair correlations of their gamma-ray and optical parameters, e.g. total energetics, peak optical luminosities, and durations. The main outcome of our study is the detection of source-frame correlations between both optical peak luminosity and total energy and the redshift for classes A and A(U), and the absence of such a correlation for class P events. This result seems to provide evidence of the cosmological evolution of a medium around the burst defining class A and A(U) energetics, and the absence of cosmological evolution of the internal properties of GRB engines. We also discuss some other prominent correlations.
Smith, G. L.; Bess, T. D.; Minnis, P.
1983-01-01
The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.
Directory of Open Access Journals (Sweden)
Cécile Noel
2016-01-01
Full Text Available Hydrocarbon-contaminated aquifers can be successfully remediated through enhanced biodegradation. However, in situ monitoring of the treatment by piezometers is expensive and invasive, and may be insufficient because the information provided is restricted to vertical profiles at discrete locations. An alternative method was tested in order to improve the robustness of the monitoring. Geophysical methods, electrical resistivity (ER) and induced polarization (IP), were combined with gas analyses, CO2 concentration and its carbon isotopic ratio, to develop a less invasive methodology for monitoring enhanced biodegradation of hydrocarbons. The field implementation of this monitoring methodology, which lasted from February 2014 until June 2015, was carried out at a BTEX-polluted site under aerobic biotreatment. Geophysical monitoring shows a more conductive and chargeable area which corresponds to the contaminated zone. In this area, high CO2 emissions have been measured, with an isotopic signature demonstrating that the main source of CO2 on this site is the biodegradation of hydrocarbon fuels. Moreover, the evolution of geochemical and geophysical data over a year appears to track the seasonal variation of bacterial activity. Combining geophysics with gas analyses thus promises to provide a new methodology for in situ monitoring.
Directory of Open Access Journals (Sweden)
Jihye Kim
2013-09-01
Full Text Available Gene set analysis is a powerful tool for interpreting a genome-wide association study result and is gaining popularity. Comparison of the gene sets obtained for a variety of traits measured from a single genetic epidemiology dataset may give insights into the biological mechanisms underlying these traits. Based on the previously published single nucleotide polymorphism (SNP) genotype data on 8,842 individuals enrolled in the Korea Association Resource project, we performed a series of systematic genome-wide association analyses for 49 quantitative traits of basic epidemiological, anthropometric, or blood chemistry parameters. Each analysis result was subjected to subsequent gene set analyses based on Gene Ontology (GO) terms using the gene set analysis software GSA-SNP, identifying a set of GO terms significantly associated with each trait (p_corr < 0.05). Pairwise comparison of the traits in terms of the semantic similarity of their GO sets revealed surprising cases where phenotypically uncorrelated traits showed high similarity in terms of biological pathways. For example, the pH level was related to 7 other traits that showed low phenotypic correlations with it. A literature survey implies that these traits may be regulated partly by common pathways that involve neuronal or nerve systems.
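The pairwise trait comparison described above can be caricatured as follows. The paper used GO semantic similarity; plain Jaccard overlap of the significant GO term sets serves here as a simpler stand-in, and the GO IDs and trait names are illustrative placeholders.

```python
def jaccard(set_a, set_b):
    """Jaccard similarity of two GO term sets: |A ∩ B| / |A ∪ B|."""
    if not set_a and not set_b:
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

# Mock significant-GO-term sets per trait (invented IDs).
go_sets = {
    "pH": {"GO:0007268", "GO:0019226", "GO:0050877"},
    "trait_x": {"GO:0007268", "GO:0050877", "GO:0006811"},
    "trait_y": {"GO:0008152"},
}

# All pairwise similarities between traits.
pairs = {}
traits = sorted(go_sets)
for i, a in enumerate(traits):
    for b in traits[i + 1:]:
        pairs[(a, b)] = jaccard(go_sets[a], go_sets[b])
```

Here "pH" and "trait_x" share two of four distinct terms (similarity 0.5) despite being, hypothetically, phenotypically uncorrelated — the kind of pattern the study looked for, though with a semantic rather than set-overlap measure.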
Energy Technology Data Exchange (ETDEWEB)
Weber, T., E-mail: thomas.weber@physik.uni-erlangen.de [University of Erlangen-Nuremberg, ECAP - Erlangen Center for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany); Bartl, P.; Durst, J. [University of Erlangen-Nuremberg, ECAP - Erlangen Center for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany); Haas, W. [University of Erlangen-Nuremberg, ECAP - Erlangen Center for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany); University of Erlangen-Nuremberg, Pattern Recognition Lab, Martensstr. 3, 91058 Erlangen (Germany); Michel, T.; Ritter, A.; Anton, G. [University of Erlangen-Nuremberg, ECAP - Erlangen Center for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany)
2011-08-21
Over the last decades, phase-contrast imaging using a Talbot-Lau grating interferometer has become possible even with a low-brilliance X-ray source. With its potential to increase soft-tissue contrast, this method is on its way into medical imaging. For this purpose, knowledge of the underlying physics of the technique is necessary. With this paper, we would like to contribute to the understanding of grating-based phase-contrast imaging by presenting measurements and simulations of the noise behaviour of the differential phases. The measurements were done using a microfocus X-ray tube with a hybrid, photon-counting, semiconductor Medipix2 detector. The simulations were performed with our in-house developed phase-contrast simulation tool 'SPHINX', which combines both wave and particle contributions of the simulated photons. The results obtained by both methods show the same behaviour. Increasing the number of photons leads to a linear decrease of the standard deviation of the phase. The number of phase steps used has no influence on the standard deviation if the total number of photons is held constant. Furthermore, the probability density function (pdf) of the reconstructed differential phases was analysed. It turned out that the so-called von Mises distribution is the physically correct pdf, which was also confirmed by measurements. This information advances the understanding of grating-based phase-contrast imaging and can be used to improve image quality.
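The reported noise behaviour lends itself to a quick numerical check. The sketch below is a toy Monte Carlo of phase-stepping retrieval under a plain Poisson counting model (not the SPHINX tool used in the paper): the standard deviation of the retrieved differential phase shrinks as the photon number grows, and at fixed total counts it is insensitive to the number of phase steps. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def phase_std(n_total, n_steps, visibility=0.3, true_phase=0.7, trials=4000):
    """Std of the retrieved phase over many noisy phase-stepping scans."""
    k = np.arange(n_steps)
    # Ideal stepping curve, split over n_steps exposures of n_total/n_steps counts.
    mean_counts = (n_total / n_steps) * (
        1.0 + visibility * np.cos(2 * np.pi * k / n_steps + true_phase))
    counts = rng.poisson(mean_counts, size=(trials, n_steps))
    # The first Fourier component of the stepping curve carries the phase.
    phases = np.angle(counts @ np.exp(-2j * np.pi * k / n_steps))
    return phases.std()

s_low = phase_std(n_total=1e4, n_steps=8)
s_high = phase_std(n_total=4e4, n_steps=8)    # 4x photons -> roughly half the std
s_steps = phase_std(n_total=1e4, n_steps=16)  # same total photons, more steps
```

In this simplified model quadrupling the photons roughly halves the phase noise, and doubling the number of steps at constant total counts leaves it unchanged, in line with the step-count independence the paper reports.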
Energy Technology Data Exchange (ETDEWEB)
Lawrie, Scott R., E-mail: scott.lawrie@stfc.ac.uk [ISIS Neutron and Muon Facility, STFC Rutherford Appleton Laboratory, Harwell Oxford, OX11 0QX (United Kingdom); John Adams Institute for Accelerator Science, Department of Physics, University of Oxford (United Kingdom); Faircloth, Daniel C.; Letchford, Alan P.; Perkins, Mike; Whitehead, Mark O.; Wood, Trevor [ISIS Neutron and Muon Facility, STFC Rutherford Appleton Laboratory, Harwell Oxford, OX11 0QX (United Kingdom)
2015-04-08
In order to facilitate the testing of advanced H{sup −} ion sources for the ISIS and Front End Test Stand (FETS) facilities at the Rutherford Appleton Laboratory (RAL), a Vessel for Extraction and Source Plasma Analyses (VESPA) has been constructed. This will perform the first detailed plasma measurements on the ISIS Penning-type H{sup −} ion source using emission spectroscopy techniques. In addition, the 30-year-old extraction optics have been re-designed from the ground up in order to fully transport the beam. Using multiple beam and plasma diagnostic devices, the ultimate aim is to improve H{sup −} production efficiency and subsequent transport for either long-term ISIS user operations or high-power FETS requirements. The VESPA will also accommodate and test a new scaled-up Penning H{sup −} source design. This paper details the VESPA design, construction and commissioning, as well as initial beam and spectroscopy results.
van Heeswijk, Miriam M; Lambregts, Doenja M J; Maas, Monique; Lahaye, Max J; Ayas, Z; Slenter, Jos M G M; Beets, Geerard L; Bakers, Frans C H; Beets-Tan, Regina G H
2017-06-01
The apparent diffusion coefficient (ADC) is a potential prognostic imaging marker in rectal cancer. Typically, mean ADC values are used, derived from precise manual whole-volume tumor delineations by experts. The aim was first to explore whether non-precise circular delineation combined with histogram analysis can be a less cumbersome alternative for acquiring similar ADC measurements, and second to explore whether histogram analyses provide additional prognostic information. Thirty-seven patients who underwent a primary staging MRI including diffusion-weighted imaging (DWI; b0, 25, 50, 100, 500, 1000; 1.5 T) were included. Volumes-of-interest (VOIs) were drawn on b1000-DWI: (a) precise delineation, manually tracing tumor boundaries (2 expert readers), and (b) non-precise delineation, drawing circular VOIs with a wide margin around the tumor (2 non-experts). Mean ADC and histogram metrics (mean, min, max, median, SD, skewness, kurtosis, 5th-95th percentiles) were derived from the VOIs and delineation time was recorded. Measurements were compared between the two methods and correlated with prognostic outcome parameters. Median delineation time was reduced from 47-165 s (precise) to 21-43 s (non-precise). The 45th percentile of the non-precise delineation showed the best correlation with the mean ADC from the precise delineation as the reference standard (ICC 0.71-0.75). None of the mean ADC or histogram parameters showed significant prognostic value; only the total tumor volume (VOI) was significantly larger in patients with positive clinical N stage and mesorectal fascia involvement. When performing non-precise tumor delineation, histogram analysis (specifically the 45th ADC percentile) may be used as an alternative to obtain ADC values similar to those from precise whole-tumor delineation. Histogram analyses did not provide additional prognostic information.
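The histogram metrics listed above are straightforward to compute once a VOI is available; a minimal sketch on synthetic ADC values (ADC is typically reported in units of 10^-3 mm^2/s, and the voxel values here are random placeholders, not patient data):

```python
import numpy as np

rng = np.random.default_rng(2)
adc_voxels = rng.normal(1.1, 0.25, 5000)   # mock ADC values inside a VOI

def histogram_metrics(adc):
    """Location, spread, shape, and percentile metrics of a VOI's ADC values."""
    adc = np.asarray(adc, dtype=float)
    mean = adc.mean()
    m3 = ((adc - mean) ** 3).mean()        # third central moment for skewness
    return {
        "mean": float(mean),
        "median": float(np.median(adc)),
        "min": float(adc.min()),
        "max": float(adc.max()),
        "sd": float(adc.std(ddof=1)),
        "skewness": float(m3 / adc.std() ** 3),
        "p5": float(np.percentile(adc, 5)),
        "p45": float(np.percentile(adc, 45)),  # the percentile highlighted above
        "p95": float(np.percentile(adc, 95)),
    }

m = histogram_metrics(adc_voxels)
```

On real data the same function would simply be applied to the voxels inside the circular (non-precise) VOI, and `m["p45"]` compared against the expert mean ADC.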
International Nuclear Information System (INIS)
Hertzler, C.L.; Atwood, C.L.; Harris, G.A.
1989-09-01
A search was made of statistical literature that might be applicable in environmental assessment contexts, when some of the measured quantities are reported as less than detectable (LTD). Over 60 documents were reviewed, and the findings are described in this report. The methodological areas considered are parameter estimation (point estimates and confidence intervals), tolerance intervals and prediction intervals, regression, trend analysis, comparisons of populations (including two-sample comparisons and analysis of variance), and goodness of fit tests. The conclusions are summarized at the end of the report. 68 refs., 1 tab
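As one concrete instance of the method families surveyed, the sketch below estimates a normal mean and SD by maximum likelihood when observations below a detection limit are reported only as LTD (left-censored): detected values contribute the density, censored values contribute the CDF mass below the limit. The data, limit, and coarse grid search are all invented for illustration; the report itself covers many more techniques.

```python
import math
import random

random.seed(3)
TRUE_MU, TRUE_SD, LOD = 10.0, 2.0, 9.0
data = [random.gauss(TRUE_MU, TRUE_SD) for _ in range(800)]
observed = [x for x in data if x >= LOD]          # quantified values
n_censored = sum(1 for x in data if x < LOD)      # reported only as "< LOD"

def norm_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def log_lik(mu, sd):
    """Censored-normal log-likelihood (additive constants dropped)."""
    ll = n_censored * math.log(norm_cdf(LOD, mu, sd))
    for x in observed:
        ll += -0.5 * ((x - mu) / sd) ** 2 - math.log(sd)
    return ll

# A coarse grid search stands in for a proper optimizer.
mu_hat, sd_hat = max(((mu / 10, sd / 10)
                      for mu in range(80, 121) for sd in range(10, 41)),
                     key=lambda p: log_lik(*p))
```

Unlike naive substitution rules (e.g. replacing LTDs by LOD/2), this likelihood treatment uses the censored observations without inventing values for them.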
Bruña, Ricardo; Poza, Jesús; Gómez, Carlos; García, María; Fernández, Alberto; Hornero, Roberto
2012-06-01
Alzheimer's disease (AD) is the most common cause of dementia. Over the last few years, a considerable effort has been devoted to exploring new biomarkers. Nevertheless, a better understanding of brain dynamics is still required to optimize therapeutic strategies. In this regard, the characterization of mild cognitive impairment (MCI) is crucial, due to the high conversion rate from MCI to AD. However, only a few studies have focused on the analysis of magnetoencephalographic (MEG) rhythms to characterize AD and MCI. In this study, we assess the ability of several parameters derived from information theory to describe spontaneous MEG activity from 36 AD patients, 18 MCI subjects and 26 controls. Three entropies (Shannon, Tsallis and Rényi entropies), one disequilibrium measure (based on Euclidean distance, ED) and three statistical complexities (based on the López-Ruiz-Mancini-Calbet complexity, LMC) were used to estimate the irregularity and statistical complexity of MEG activity. Statistically significant differences between AD patients and controls were obtained with all parameters, and a validation procedure was applied to assess classification performance. The accuracies reached 83.9% and 65.9% to discriminate AD and MCI subjects from controls, respectively. Our findings suggest that MCI subjects exhibit an intermediate pattern of abnormalities between normal aging and AD. Furthermore, the proposed parameters provide a new description of brain dynamics in AD and MCI.
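As an illustration of the information-theoretic parameters named above, the sketch below evaluates the Shannon, Rényi, and Tsallis entropies on a normalized probability distribution. In the study such distributions would be derived from MEG activity; here they are synthetic placeholders, and the entropic index q = 2 is an arbitrary choice.

```python
import numpy as np

def shannon(p):
    """Shannon entropy H = -sum p ln p (zero-probability bins ignored)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def renyi(p, q):
    """Rényi entropy of order q != 1; tends to Shannon as q -> 1."""
    return float(np.log((p ** q).sum()) / (1.0 - q))

def tsallis(p, q):
    """Tsallis entropy of index q != 1."""
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

p_flat = np.full(8, 1 / 8)                 # maximally irregular distribution
p_peaked = np.array([0.93] + [0.01] * 7)   # highly regular distribution

h_flat, h_peaked = shannon(p_flat), shannon(p_peaked)
r_flat = renyi(p_flat, 2.0)
t_flat = tsallis(p_flat, 2.0)
```

A flat distribution maximizes all three entropies (for a uniform distribution over N bins, both Shannon and Rényi equal ln N), while a peaked, regular distribution scores lower — the direction of the irregularity contrast exploited in the study.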
International Nuclear Information System (INIS)
Hanot, C.; Riaud, P.; Absil, O.; Mennesson, B.; Martin, S.; Liewer, K.; Loya, F.; Mawet, D.; Serabyn, E.
2011-01-01
A new 'self-calibrated' statistical analysis method has been developed for the reduction of nulling interferometry data. The idea is to use the statistical distributions of the fluctuating null depth and beam intensities to retrieve the astrophysical null depth (or, equivalently, the object's visibility) in the presence of fast atmospheric fluctuations. The approach yields an accuracy much better (about an order of magnitude) than is presently possible with standard data reduction methods, because the astrophysical null depth accuracy is no longer limited by the magnitude of the instrumental phase and intensity errors but by uncertainties on their probability distributions. This approach was tested on the sky with the two-aperture fiber nulling instrument mounted on the Palomar Hale telescope. Using our new data analysis approach alone, with no observations of calibrators, we find that error bars on the astrophysical null depth as low as a few 10^-4 can be obtained in the near-infrared, which means that null depths lower than 10^-3 can be reliably measured. This statistical analysis is not specific to our instrument and may be applicable to other interferometers.
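The self-calibration idea can be caricatured numerically: treat the time series of measured null depths as a distribution, and fit that distribution rather than averaging it. The model below (Gaussian phase jitter only, instantaneous null = Na + (δφ/2)², fitted by a grid search on the empirical CDF) is a deliberate oversimplification of the real instrument, and every number is invented.

```python
import math
import random

random.seed(4)
NA_TRUE, SIGMA_PHI = 2e-3, 0.3   # true astrophysical null, phase jitter (rad)
obs = sorted(NA_TRUE + (SIGMA_PHI * random.gauss(0.0, 1.0) / 2.0) ** 2
             for _ in range(4000))

# Empirical CDF sampled at 200 quantiles.
idx = range(10, 4000, 20)
q = [obs[i] for i in idx]
emp = [(i + 0.5) / 4000.0 for i in idx]

def model_cdf(n, na, sigma):
    """CDF of Na + (sigma*Z/2)^2 with Z standard normal."""
    if n <= na:
        return 0.0
    return math.erf(math.sqrt(2.0 * (n - na)) / sigma)

def misfit(na, sigma):
    return sum((model_cdf(x, na, sigma) - e) ** 2 for x, e in zip(q, emp))

naive_mean = sum(obs) / len(obs)   # biased high by ~sigma^2/4
na_hat, sigma_hat = min(((na * 1e-4, s * 1e-2)
                         for na in range(0, 51) for s in range(20, 41)),
                        key=lambda p: misfit(*p))
```

The plain average of the null-depth series is dominated by the phase-jitter bias, while the distribution fit recovers a value close to the true astrophysical null — the qualitative point of the self-calibrated approach, stripped of the intensity-fluctuation terms the real method also models.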
Nick, Todd G
2007-01-01
Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
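A minimal illustration of the descriptive measures discussed in the chapter, using Python's standard statistics module on an invented right-skewed sample (hospital lengths of stay, say), including a log transformation to tame the long tail:

```python
import math
import statistics

lengths_of_stay = [1, 2, 2, 3, 3, 3, 4, 5, 7, 30]   # right-skewed data (days)

# Measures of location and spread.
mean = statistics.mean(lengths_of_stay)
median = statistics.median(lengths_of_stay)
sd = statistics.stdev(lengths_of_stay)

# A log transform pulls in the long right tail, bringing mean and median closer.
logged = [math.log(x) for x in lengths_of_stay]
log_mean = statistics.mean(logged)
log_median = statistics.median(logged)
```

The single outlier (30 days) drags the mean (6 days) far from the median (3 days); after the transformation the two nearly coincide — the kind of pattern descriptive statistics are meant to uncover before any inferential analysis.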
Karunathilaka, Sanjeewa R; Kia, Ali-Reza Fardin; Srigley, Cynthia; Chung, Jin Kyu; Mossoba, Magdi M
2016-10-01
A rapid tool for evaluating authenticity was developed and applied to the screening of extra virgin olive oil (EVOO) retail products by using Fourier-transform near infrared (FT-NIR) spectroscopy in combination with univariate and multivariate data analysis methods. Using disposable glass tubes, spectra for 62 reference EVOO, 10 edible oil adulterants, 20 blends consisting of EVOO spiked with adulterants, 88 retail EVOO products and other test samples were rapidly measured in the transmission mode without any sample preparation. The univariate conformity index (CI) and the multivariate supervised soft independent modeling of class analogy (SIMCA) classification tool were used to analyze the various olive oil products which were tested for authenticity against a library of reference EVOO. Better discrimination between the authentic EVOO and some commercial EVOO products was observed with SIMCA than with CI analysis. Approximately 61% of all EVOO commercial products were flagged by SIMCA analysis, suggesting that further analysis be performed to identify quality issues and/or potential adulterants. Due to its simplicity and speed, FT-NIR spectroscopy in combination with multivariate data analysis can be used as a complementary tool to conventional official methods of analysis to rapidly flag EVOO products that may not belong to the class of authentic EVOO. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
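A bare-bones sketch of the SIMCA-style classification described above: fit a principal-component model to the authentic class only, then flag test samples whose residual distance from that model is large. The 'spectra' here are short synthetic vectors, the two-component model and the mean + 3 SD threshold are arbitrary choices, and nothing reproduces the paper's actual reference library.

```python
import numpy as np

rng = np.random.default_rng(5)

# Mock "spectra": the authentic class varies along one latent direction.
direction = rng.normal(size=50)
direction /= np.linalg.norm(direction)
authentic = rng.normal(0, 1, (62, 1)) * direction + rng.normal(0, 0.05, (62, 50))

# PCA model of the authentic class via SVD of the centered data.
mu = authentic.mean(axis=0)
_, _, vt = np.linalg.svd(authentic - mu, full_matrices=False)
components = vt[:2]                     # 2-component class model

def residual_distance(spectrum):
    """Distance from the spectrum to its projection onto the class model."""
    centered = spectrum - mu
    projected = components.T @ (components @ centered)
    return float(np.linalg.norm(centered - projected))

# Acceptance threshold from the training residuals (mean + 3 SD).
train_res = np.array([residual_distance(s) for s in authentic])
threshold = train_res.mean() + 3 * train_res.std()

genuine_test = rng.normal(0, 1) * direction + rng.normal(0, 0.05, 50)
adulterated = genuine_test + 0.5 * rng.normal(size=50)   # off-model variation

flag_adulterated = residual_distance(adulterated) > threshold
```

The adulterated sample picks up variation outside the class subspace and is flagged; real SIMCA implementations additionally use within-model leverage and formal F-test-based limits rather than this simple SD rule.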
Leake, M. A.
1982-01-01
Planetary imagery techniques, errors in measurement or degradation assignment, and statistical formulas are presented with respect to cratering data. Base map photograph preparation, measurement of crater diameters and sampled area, and instruments used are discussed. Possible uncertainties, such as Sun angle, scale factors, degradation classification, and biases in crater recognition are discussed. The mathematical formulas used in crater statistics are presented.
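A compact sketch of the standard crater-statistics bookkeeping such studies rely on: the cumulative count N(>D) per unit area, with the usual sqrt(N) counting uncertainty. The diameters and mapped area below are invented.

```python
import math

diameters_km = [0.6, 0.7, 0.9, 1.1, 1.3, 1.8, 2.4, 3.5, 5.0, 8.2]
area_km2 = 1.0e4                      # sampled area of the base map
bins_km = [0.5, 1.0, 2.0, 4.0, 8.0]   # lower diameter limits

def cumulative_density(d_min):
    """Crater density N(>D) per km^2, with sqrt(N) counting error."""
    n = sum(1 for d in diameters_km if d > d_min)
    density = n / area_km2
    error = math.sqrt(n) / area_km2 if n else 0.0
    return n, density, error

table = {d: cumulative_density(d) for d in bins_km}
```

A real study would also propagate the measurement uncertainties the abstract discusses (scale factors, Sun angle, recognition biases), which this counting-error sketch ignores.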
International Nuclear Information System (INIS)
Zeituni, Carlos A.; Moura, Eduardo S.; Rostelato, Maria Elisa C.M.; Manzoli, Jose E.; Moura, Joao Augusto; Feher, Anselmo; Karam, Dib
2009-01-01
In order to provide the dosimetry for iodine-125 seed production in Brazil, Harshaw thermoluminescent dosimeters (TLD-100) will be used. Even when TLD-100 dosimeters from the same fabrication batch are measured, their responses are not identical; as a consequence, they must be measured one by one. These dosimeters are LiF micro-cubes (1 mm x 1 mm x 1 mm). Irradiations were performed using iodine-125 seeds to guarantee the same absorbed dose of 5 Gy in each dosimeter. A solid water phantom was used, with three concentric circles of 20 mm, 50 mm and 70 mm diameter. The angular positions used were 0 deg, 30 deg, 60 deg and 90 deg, giving 2 positions at 0 deg and 90 deg and 4 positions at 30 deg and 60 deg. This complete procedure was carried out five times in order to compare the data and minimize the systematic error. The iodine-125 seed used in the experiment was removed after each measurement and re-inserted rotated by 180 deg so that the systematic error was minimized. This paper also presents a brief discussion of the statistical differences in the measurements and the calculation procedure used to determine the systematic error. (author)
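The one-by-one characterization described above can be sketched as an individual sensitivity correction: repeated readings of each chip at the same dose yield a per-chip factor that puts all chips on a common scale. The readings below are invented numbers in arbitrary charge units, not data from the study.

```python
# rows: 5 repeated irradiations at the same dose; columns: 4 TLD chips
readings = [
    [102.0, 95.0, 110.0, 98.0],
    [101.0, 96.0, 108.0, 99.0],
    [103.0, 94.0, 111.0, 97.0],
    [100.0, 95.5, 109.0, 98.5],
    [104.0, 94.5, 112.0, 97.5],
]

n_runs, n_chips = len(readings), len(readings[0])
chip_mean = [sum(run[j] for run in readings) / n_runs for j in range(n_chips)]
batch_mean = sum(chip_mean) / n_chips

# Sensitivity factor: multiply a chip's reading by this to put all chips on
# a common scale; a perfectly average chip has factor 1.0.
factors = [batch_mean / m for m in chip_mean]

corrected = [[readings[i][j] * factors[j] for j in range(n_chips)]
             for i in range(n_runs)]
```

After correction, every chip's mean reading coincides with the batch mean, so residual scatter reflects measurement noise rather than chip-to-chip sensitivity differences.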
International Nuclear Information System (INIS)
Saubamea, B.
1998-12-01
This thesis presents a new method to measure the temperature of ultracold atoms from the spatial autocorrelation function of the atomic wave-packets. We thus determine the temperature of metastable helium-4 atoms cooled by velocity-selective dark resonance, a method known to cool atoms below the temperature associated with the emission or absorption of a single photon by an atom at rest, namely the recoil temperature. This cooling mechanism prepares each atom in a coherent superposition of two wave-packets with opposite mean momenta, which are initially superimposed and then drift apart. By measuring the temporal decay of their overlap, we gain access to the Fourier transform of the momentum distribution of the atoms. Using this method, we can measure temperatures as low as 5 nK, 800 times lower than the recoil temperature. Moreover, we study in detail the exact shape of the momentum distribution and compare the experimental results with two different theoretical approaches: a quantum Monte Carlo simulation and an analytical model based on Lévy statistics. We compare the calculated line shape with the one deduced from simulations, and each theoretical model with experimental data. A very good agreement is found with each approach. We thus demonstrate the validity of the statistical model of sub-recoil cooling and give the first experimental evidence of some of its characteristics: the absence of a steady state, the self-similarity, and the non-Lorentzian shape of the momentum distribution of the cooled atoms. All these aspects are related to the non-ergodicity of sub-recoil cooling. (author)