#### Sample records for performing inferential statistics

1. Performing Inferential Statistics Prior to Data Collection

Science.gov (United States)

Trafimow, David; MacDonald, Justin A.

2017-01-01

Typically, in education and psychology research, the investigator collects data and subsequently performs descriptive and inferential statistics. For example, a researcher might compute group means and use the null hypothesis significance testing procedure to draw conclusions about the populations from which the groups were drawn. We propose an…

2. A Statistical Primer: Understanding Descriptive and Inferential Statistics

OpenAIRE

Gillian Byrne

2007-01-01

As libraries and librarians move more towards evidence‐based decision making, the amount of data being generated in libraries is growing. Understanding the basics of statistical analysis is crucial for evidence‐based practice (EBP), in order to correctly design and analyze research as well as to evaluate the research of others. This article covers the fundamentals of descriptive and inferential statistics, from hypothesis construction to sampling to common statistical techniques including chi‐square, co...

3. Descriptive and inferential statistical methods used in burns research.

Science.gov (United States)

Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars

2010-05-01

Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range) and survey the use of inferential methods (statistical tests) in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11 (22%) were randomised controlled trials, 18 (35%) were cohort studies, 11 (22%) were case-control studies and 11 (22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49 (96%) articles. Data dispersion was calculated by standard deviation in 30 (59%). Standard error of the mean was quoted in 19 (37%). The statistical software product was named in 33 (65%). Of the 49 articles that used inferential statistics, the tests were named in 47 (96%). The six most common tests used (Student's t-test (53%), analysis of variance/covariance (33%), chi-squared test (27%), Wilcoxon and Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43 (88%) and the exact significance levels were reported in 28 (57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals…
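As a hedged illustration, the five most frequently reported tests in the abstract above can each be run in a few lines with SciPy. All data below are synthetic stand-ins invented for demonstration, not values from the surveyed articles.

```python
# Synthetic illustration of the five tests most often reported in the
# surveyed Burns articles. All values below are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tbsa_a = rng.normal(25, 8, 30)   # % total body surface area, group A (synthetic)
tbsa_b = rng.normal(31, 8, 30)   # % total body surface area, group B (synthetic)
tbsa_c = rng.normal(28, 8, 30)   # a third group, for the ANOVA example

t_stat, p_t = stats.ttest_ind(tbsa_a, tbsa_b)            # Student's t-test
f_stat, p_f = stats.f_oneway(tbsa_a, tbsa_b, tbsa_c)     # one-way ANOVA
u_stat, p_u = stats.mannwhitneyu(tbsa_a, tbsa_b)         # Mann-Whitney U

infection = [[12, 18], [5, 25]]                          # 2x2 counts (synthetic)
chi2, p_chi, dof, _ = stats.chi2_contingency(infection)  # chi-squared test
odds, p_fisher = stats.fisher_exact(infection)           # Fisher's exact test

for name, p in [("t-test", p_t), ("ANOVA", p_f), ("Mann-Whitney", p_u),
                ("chi-squared", p_chi), ("Fisher exact", p_fisher)]:
    print(f"{name:>13}: p = {p:.4f}")
```

Each call returns a test statistic and a p value; the contingency-table tests additionally accept raw counts rather than per-subject measurements.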

4. An introduction to inferential statistics: A review and practical guide

Energy Technology Data Exchange (ETDEWEB)

Marshall, Gill, E-mail: gill.marshall@cumbria.ac.u [Faculty of Health, Medical Sciences and Social Care, University of Cumbria, Lancaster LA1 3JD (United Kingdom); Jonker, Leon [Faculty of Health, Medical Sciences and Social Care, University of Cumbria, Lancaster LA1 3JD (United Kingdom)

2011-02-15

Building on the first part of this series regarding descriptive statistics, this paper demonstrates why it is advantageous for radiographers to understand the role of inferential statistics in deducing conclusions from a sample and their application to a wider population. This is necessary so radiographers can understand the work of others, can undertake their own research and evidence base their practice. This article explains p values and confidence intervals. It introduces the common statistical tests that comprise inferential statistics, and explains the use of parametric and non-parametric statistics. To do this, the paper reviews relevant literature, and provides a checklist of points to consider before and after applying statistical tests to a data set. The paper provides a glossary of relevant terms and the reader is advised to refer to this when any unfamiliar terms are used in the text. Together with the information provided on descriptive statistics in an earlier article, it can be used as a starting point for applying statistics in radiography practice and research.
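The p values and confidence intervals the article explains can be computed directly. A minimal sketch on synthetic readings (all numbers, including the reference value of 1.0, are assumptions for illustration):

```python
# Sketch: a 95% confidence interval for a mean, and a one-sample p value.
# The readings and the reference value 1.0 are synthetic assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
doses = rng.normal(1.2, 0.3, 40)   # synthetic measurements

mean = doses.mean()
sem = stats.sem(doses)             # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(doses) - 1,
                                   loc=mean, scale=sem)
t_stat, p = stats.ttest_1samp(doses, popmean=1.0)
print(f"mean = {mean:.3f}, 95% CI = ({ci_low:.3f}, {ci_high:.3f}), p = {p:.4f}")
```

The interval uses the t distribution with n - 1 degrees of freedom, which is the usual small-sample choice when the population variance is unknown.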

5. An introduction to inferential statistics: A review and practical guide

International Nuclear Information System (INIS)

Marshall, Gill; Jonker, Leon

2011-01-01

Building on the first part of this series regarding descriptive statistics, this paper demonstrates why it is advantageous for radiographers to understand the role of inferential statistics in deducing conclusions from a sample and their application to a wider population. This is necessary so radiographers can understand the work of others, can undertake their own research and evidence base their practice. This article explains p values and confidence intervals. It introduces the common statistical tests that comprise inferential statistics, and explains the use of parametric and non-parametric statistics. To do this, the paper reviews relevant literature, and provides a checklist of points to consider before and after applying statistical tests to a data set. The paper provides a glossary of relevant terms and the reader is advised to refer to this when any unfamiliar terms are used in the text. Together with the information provided on descriptive statistics in an earlier article, it can be used as a starting point for applying statistics in radiography practice and research.

6. The Development of Introductory Statistics Students' Informal Inferential Reasoning and Its Relationship to Formal Inferential Reasoning

Science.gov (United States)

Jacob, Bridgette L.

2013-01-01

The difficulties introductory statistics students have with formal statistical inference are well known in the field of statistics education. "Informal" statistical inference has been studied as a means to introduce inferential reasoning well before and without the formalities of formal statistical inference. This mixed methods study…

7. [The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].

Science.gov (United States)

Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel

2017-01-01

Statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference means drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was drawn. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. To choose a statistical test, three aspects must be taken into account: the research design, the number of measurements and the measurement scale of the variables. Statistical tests fall into two sets, parametric and non-parametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test makes it easier for readers to understand and apply the results.
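The parametric-versus-non-parametric choice described above can be sketched as a normality-gated comparison. This is a simplification (real test selection also weighs design and measurement scale, as the abstract notes), and the two groups below are synthetic:

```python
# Sketch: gate the choice between a parametric and a non-parametric test
# on a normality check, as the protocol suggests. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
group_a = rng.normal(120.0, 15.0, 35)        # roughly normal (synthetic)
group_b = rng.exponential(20.0, 35) + 100.0  # clearly skewed (synthetic)

def compare(a, b, alpha=0.05):
    # parametric only if both samples pass a Shapiro-Wilk normality check
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "Student's t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

name, p = compare(group_a, group_b)
print(f"chosen: {name}, p = {p:.4f}")
```

With the skewed second sample, the helper will usually fall back to the non-parametric Mann-Whitney U test.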

8. The research protocol VI: How to choose the appropriate statistical test. Inferential statistics

Directory of Open Access Journals (Sweden)

Eric Flores-Ruiz

2017-10-01

Statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference means drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was drawn. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. To choose a statistical test, three aspects must be taken into account: the research design, the number of measurements and the measurement scale of the variables. Statistical tests fall into two sets, parametric and non-parametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test makes it easier for readers to understand and apply the results.

9. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

Science.gov (United States)

Tuuli, Methodius G; Odibo, Anthony O

2011-08-01

The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

10. Applying contemporary philosophy in mathematics and statistics education : The perspective of inferentialism

NARCIS (Netherlands)

Schindler, Maike; Mackrell, Kate; Pratt, Dave; Bakker, A.

2017-01-01

Schindler, M., Mackrell, K., Pratt, D., & Bakker, A. (2017). Applying contemporary philosophy in mathematics and statistics education: The perspective of inferentialism. In G. Kaiser (Ed.). Proceedings of the 13th International Congress on Mathematical Education, ICME-13

11. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

Science.gov (United States)

Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

1999-01-01

Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on the assumptions and limitations of the methods reviewed. Several methods are available to analyse FNI data, indicating that none is optimal for all purposes. In order to make optimal use of the available methods, it is important to know their limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149
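One of the modelling steps mentioned, global normalization, is commonly implemented as proportional scaling: each scan is rescaled so that its global mean equals a common target. A toy version on synthetic data (the scan matrix and the target value of 100 are assumptions, not values from the paper):

```python
# Toy proportional-scaling global normalization: rescale each synthetic
# "scan" so its global (whole-volume) mean equals a common target.
import numpy as np

rng = np.random.default_rng(2)
scans = rng.gamma(4.0, 25.0, size=(5, 1000))  # 5 synthetic scans x 1000 voxels
target = 100.0                                 # assumed common global value

global_means = scans.mean(axis=1, keepdims=True)  # per-scan global mean
scaled = scans * (target / global_means)
print(scaled.mean(axis=1))                        # every global mean is now 100
```

After scaling, between-scan differences in overall signal level no longer confound voxel-wise comparisons, which is the point of the normalization step.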

12. Inferential Statistics from Black Hispanic Breast Cancer Survival Data

Directory of Open Access Journals (Sweden)

Hafiz M. R. Khan

2014-01-01

In this paper we test statistical probability models for breast cancer survival data by race and ethnicity. Data were collected from breast cancer patients diagnosed in the United States during the years 1973–2009. We selected a stratified random sample of Black Hispanic female patients from the Surveillance, Epidemiology, and End Results (SEER) database to derive the statistical probability models. We used three common model-building criteria, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Deviance Information Criterion (DIC), to measure goodness of fit, and found that the survival data of Black Hispanic female patients were best fit by the exponentiated exponential probability model. A novel Bayesian method was used to derive the posterior density function for the model parameters as well as the predictive inference for future response. We specifically focused on the Black Hispanic race. A Markov chain Monte Carlo (MCMC) method was used for obtaining summary results for the posterior parameters. Additionally, we report predictive intervals for future survival times. These findings would be of great significance in treatment planning and healthcare resource allocation.
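The information-criterion comparison step can be sketched with SciPy on synthetic survival times. Exponential and Weibull are used here purely as illustrative candidates (SciPy has no named exponentiated exponential distribution, though `exponweib` with its second shape fixed at 1 reduces to it), and all data are invented:

```python
# Sketch: compare two candidate survival models by AIC on synthetic data.
# Exponential vs Weibull are illustrative stand-ins for the paper's models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
surv = rng.weibull(1.5, 200) * 60.0   # synthetic survival times (months)

def aic(loglik, k):
    return 2 * k - 2 * loglik         # lower AIC = better fit/complexity trade-off

c, loc, scale = stats.weibull_min.fit(surv, floc=0)
ll_weib = stats.weibull_min.logpdf(surv, c, loc, scale).sum()

loc_e, scale_e = stats.expon.fit(surv, floc=0)
ll_exp = stats.expon.logpdf(surv, loc_e, scale_e).sum()

aic_weib, aic_exp = aic(ll_weib, 2), aic(ll_exp, 1)
best = "Weibull" if aic_weib < aic_exp else "exponential"
print(f"AIC Weibull = {aic_weib:.1f}, AIC exponential = {aic_exp:.1f} -> {best}")
```

Because the synthetic data were drawn from a Weibull with shape 1.5, the Weibull model should win the comparison despite its extra parameter.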

13. A Response to White and Gorard: Against Inferential Statistics: How and Why Current Statistics Teaching Gets It Wrong

Science.gov (United States)

Nicholson, James; Ridgway, Jim

2017-01-01

White and Gorard make important and relevant criticisms of some of the methods commonly used in social science research, but go further by criticising the logical basis of inferential statistical tests. This paper comments briefly on matters on which we broadly agree with them, and more fully on matters where we disagree. We agree that too little…

14. Inferential statistics, power estimates, and study design formalities continue to suppress biomedical innovation

OpenAIRE

Kern, Scott E.

2014-01-01

Innovation is the direct intended product of certain styles in research, but not of others. Fundamental conflicts between descriptive vs inferential statistics, deductive vs inductive hypothesis testing, and exploratory vs pre-planned confirmatory research designs have been played out over decades, with winners and losers and consequences. Longstanding warnings from both academics and research-funding interests have failed to influence effectively the course of these battles. The NIH publicly...

15. Surface Area of Patellar Facets: Inferential Statistics in the Iraqi Population

Directory of Open Access Journals (Sweden)

Ahmed Al-Imam

2017-01-01

Background. The patella is the largest sesamoid bone in the body; its three-dimensional complexity necessitates biomechanical perfection. Numerous pathologies occur at the patellofemoral unit which may end in degenerative changes. This study aims to test for statistical correlation between the surface areas of the patellar facets and other patellar morphometric parameters. Materials and Methods. Forty dry human patellae were studied. The morphometry of each patella was measured using a digital Vernier caliper, an electronic balance, and the image analysis software ImageJ. The patellar facetal surface area was correlated with patellar weight, height, width, and thickness. Results. Inferential statistics proved the existence of a linear correlation between total facetal surface area and patellar weight, height, width, and thickness. The correlation was strongest for surface area versus patellar weight. The lateral facetal area was consistently larger than the medial facetal area; the p value was <0.001 (one-tailed t-test) for right patellae, and likewise <0.001 (one-tailed t-test) for left patellae. Conclusion. These data are vital for restoring the normal biomechanics of the patellofemoral unit; they should be consulted during knee surgeries and implant design, and are of indispensable anthropometric, interethnic, and biometric value.
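The two analyses described, a linear correlation and a one-tailed paired comparison, can be sketched as follows. All patellar values are synthetic stand-ins constructed to have the qualitative properties reported (a linear weight-area link, lateral facets larger than medial):

```python
# Sketch: Pearson correlation plus a one-tailed paired t-test, mirroring
# the patellar analyses above. All measurements are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
weight = rng.normal(14.0, 2.5, 40)               # patellar weight (g), synthetic
area = 30.0 * weight + rng.normal(0, 30.0, 40)   # total facetal area, synthetic

r, p_corr = stats.pearsonr(weight, area)         # strength of the linear link

medial = rng.normal(480.0, 40.0, 40)             # medial facet area, synthetic
lateral = medial + rng.normal(60.0, 15.0, 40)    # lateral larger by construction
t_stat, p_one = stats.ttest_rel(lateral, medial, alternative="greater")

print(f"r = {r:.2f} (p = {p_corr:.2g}), one-tailed paired p = {p_one:.2g}")
```

The `alternative="greater"` argument makes the paired t-test one-tailed, matching the directional hypothesis that the lateral facet is larger.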

16. Against Inferential Statistics: How and Why Current Statistics Teaching Gets It Wrong

Science.gov (United States)

White, Patrick; Gorard, Stephen

2017-01-01

Recent concerns about a shortage of capacity for statistical and numerical analysis skills among social science students and researchers have prompted a range of initiatives aiming to improve teaching in this area. However, these projects have rarely re-evaluated the content of what is taught to students and have instead focussed primarily on…

17. On optimal feedforward and ILC: the role of feedback for optimal performance and inferential control

NARCIS (Netherlands)

van Zundert, J.C.D.; Oomen, T.A.E

2017-01-01

The combination of feedback control with inverse-model feedforward control or iterative learning control is known to yield high performance. The aim of this paper is to clarify the role of feedback in the design of feedforward controllers, with specific attention to the inferential situation. Recent…

18. Why inferential statistics are inappropriate for development studies and how the same data can be better used

OpenAIRE

Ballinger, Clint

2011-01-01

The purpose of this paper is twofold: 1) to highlight the widely ignored but fundamental problem of 'superpopulations' for the use of inferential statistics in development studies. We do not dwell on this problem, however, as it has been sufficiently discussed in older papers by statisticians that social scientists have nevertheless long chosen to ignore; the interested reader can turn to those for greater detail. 2) to show that descriptive statistics both avoid the problem of s...

19. On inferentialism

Science.gov (United States)

2017-12-01

This article is a critical commentary on inferentialism in mathematics education. In the first part, I comment on some of the major shortcomings that inferentialists see in the theoretical underpinnings of representationalist, empiricist, and socioconstructivist mathematics education theories. I discuss in particular the criticism that inferentialism makes of the social dimension as conceptualized by socioconstructivism and the question related to the objectivity of knowledge. In the second part, I discuss some of the theoretical foundations of inferentialism in mathematics education and try to answer the question of whether or not inferentialism overcomes the individual-social divide. In the third part, I speculate on what I think inferentialism accomplishes and what I think it does not.

20. Inferential, non-parametric statistics to assess the quality of probabilistic forecast systems

NARCIS (Netherlands)

Maia, A.H.N.; Meinke, H.B.; Lennox, S.; Stone, R.C.

2007-01-01

Many statistical forecast systems are available to interested users. To be useful for decision making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanism and its statistical manifestation have been firmly established, the forecasts must…

1. Selecting the most appropriate inferential statistical test for your quantitative research study.

Science.gov (United States)

Bettany-Saltikov, Josette; Whittaker, Victoria Jane

2014-06-01

To discuss the issues and processes relating to the selection of the most appropriate statistical test. A review of basic research concepts together with a number of clinical scenarios is used to illustrate this. Quantitative nursing research generally features the use of empirical data, which necessitates the selection of both descriptive statistics and inferential statistical tests. Different types of research questions can be answered by different types of research designs, which in turn need to be matched to a specific statistical test(s). Discursive paper. This paper discusses the issues relating to the selection of the most appropriate statistical test and makes some recommendations as to how these might be dealt with. When conducting empirical quantitative studies, a number of key issues need to be considered. Considerations for selecting the most appropriate statistical tests are discussed and flow charts provided to facilitate this process. When nursing clinicians and researchers conduct quantitative research studies, it is crucial that the most appropriate statistical test is selected to enable valid conclusions to be made. © 2013 John Wiley & Sons Ltd.
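Flow-chart logic of this kind can be expressed as a small decision helper. The rules below are a simplified sketch of the usual textbook mapping, not a reproduction of the paper's actual charts:

```python
# Toy decision helper: outcome scale, number of groups, pairing, and
# normality map to a candidate test. A simplified sketch, not the
# paper's actual flow charts.
def suggest_test(outcome: str, n_groups: int, normal: bool = True,
                 paired: bool = False) -> str:
    if outcome == "categorical":
        return "chi-squared test (Fisher's exact if counts are small)"
    if n_groups == 2:
        if paired:
            return "paired t-test" if normal else "Wilcoxon signed-rank test"
        return "independent t-test" if normal else "Mann-Whitney U test"
    return "one-way ANOVA" if normal else "Kruskal-Wallis test"

print(suggest_test("continuous", 2))                # independent t-test
print(suggest_test("continuous", 3, normal=False))  # Kruskal-Wallis test
```

A real selection procedure would also ask about the measurement scale (ordinal vs interval) and the dependency structure of the observations.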

2. Inferentializing Semantics

Czech Academy of Sciences Publication Activity Database

Peregrin, Jaroslav

2010-01-01

Vol. 39, No. 3 (2010), pp. 255-274. ISSN 0022-3611. R&D Projects: GA ČR(CZ) GA401/07/0904. Institutional research plan: CEZ:AV0Z90090514. Keywords: inference; proof theory; model theory; inferentialism; semantics. Subject RIV: AA - Philosophy; Religion

3. Inferential Statistics in "Language Teaching Research": A Review and Ways Forward

Science.gov (United States)

Lindstromberg, Seth

2016-01-01

This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997-2015) of "Language Teaching Research" (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence…

4. Inferential statistics of electron backscatter diffraction data from within individual crystalline grains

DEFF Research Database (Denmark)

Bachmann, Florian; Hielscher, Ralf; Jupp, Peter E.

2010-01-01

Highly concentrated distributed crystallographic orientation measurements within individual crystalline grains are analysed by means of ordinary statistics neglecting their spatial reference. Since crystallographic orientations are modelled as left cosets of a given subgroup of SO(3), the non-spatial statistical analysis adapts ideas borrowed from the Bingham quaternion distribution on S³. Special emphasis is put on the mathematical definition and the numerical determination of a 'mean orientation' characterizing the crystallographic grain, as well as on distinguishing several types of symmetry of the orientation distribution with respect to the mean orientation, like spherical, prolate or oblate symmetry. Applications to simulated as well as to experimental data are presented. All computations have been done with the free and open-source texture toolbox MTEX.
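The 'mean orientation' idea can be sketched, deliberately ignoring the crystal-symmetry cosets (a real simplification of what MTEX does), as the principal eigenvector of the 4x4 quaternion scatter matrix, a standard quaternion-averaging construction. The clustered orientations below are synthetic:

```python
# Sketch: mean orientation of a tight cluster of unit quaternions as the
# principal eigenvector of the scatter matrix. Crystal symmetry (the
# coset structure) is ignored here; all orientations are synthetic.
import numpy as np

rng = np.random.default_rng(5)
base = np.array([1.0, 0.0, 0.0, 0.0])            # reference orientation
qs = base + 0.05 * rng.normal(size=(100, 4))     # tightly clustered copies
qs /= np.linalg.norm(qs, axis=1, keepdims=True)  # back onto the unit sphere
qs[qs[:, 0] < 0] *= -1                           # resolve the q ~ -q ambiguity

scatter = qs.T @ qs / len(qs)                    # 4x4 orientation scatter matrix
eigvals, eigvecs = np.linalg.eigh(scatter)
mean_q = eigvecs[:, -1]                          # eigenvector of largest eigenvalue
if mean_q[0] < 0:
    mean_q = -mean_q
print(np.round(mean_q, 3))                       # close to the base orientation
```

The spread of the remaining eigenvalues of the scatter matrix is what distinguishes the spherical, prolate, and oblate symmetry types mentioned in the abstract.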

5. Imaging of neural oscillations with embedded inferential and group prevalence statistics

Science.gov (United States)

2018-01-01

Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors for source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be…

6. The Cost of Thinking about False Beliefs: Evidence from Adults' Performance on a Non-Inferential Theory of Mind Task

Science.gov (United States)

Apperly, Ian A.; Back, Elisa; Samson, Dana; France, Lisa

2008-01-01

Much of what we know about other people's beliefs comes non-inferentially from what people tell us. Developmental research suggests that 3-year-olds have difficulty processing such information: they suffer interference from their own knowledge of reality when told about someone's false belief (e.g., [Wellman, H. M., & Bartsch, K. (1988). Young…

7. Predictive capacity of a non-radioisotopic local lymph node assay using flow cytometry, LLNA:BrdU-FCM: Comparison of a cutoff approach and inferential statistics.

Science.gov (United States)

Kim, Da-Eun; Yang, Hyeri; Jang, Won-Hee; Jung, Kyoung-Mi; Park, Miyoung; Choi, Jin Kyu; Jung, Mi-Sook; Jeon, Eun-Young; Heo, Yong; Yeo, Kyung-Wook; Jo, Ji-Hoon; Park, Jung Eun; Sohn, Soo Jung; Kim, Tae Sung; Ahn, Il Young; Jeong, Tae-Cheon; Lim, Kyung-Min; Bae, SeungJin

2016-01-01

In order for a novel test method to be applied for regulatory purposes, its reliability and relevance, i.e., reproducibility and predictive capacity, must be demonstrated. Here, we examine the predictive capacity of a novel non-radioisotopic local lymph node assay, LLNA:BrdU-FCM (5-bromo-2'-deoxyuridine flow cytometry), with a cutoff approach and inferential statistics as prediction models. Twenty-two reference substances in OECD TG429 were tested with a concurrent positive control, 25% hexylcinnamaldehyde (PC), and the stimulation index (SI), representing the fold increase in lymph node cells over the vehicle control, was obtained. The optimal cutoff SI (2.7 ≤ cutoff < 3.5), with respect to predictive capacity, was obtained by a receiver operating characteristic (ROC) curve, which produced 90.9% accuracy for the 22 substances. To address the inter-test variability in responsiveness, SI values standardized against the PC were employed to obtain the optimal percentage cutoff (42.6 ≤ cutoff < 57.3% of PC), which produced 86.4% accuracy. A test substance may be diagnosed as a sensitizer if a statistically significant increase in SI is elicited. The parametric one-sided t-test and the non-parametric Wilcoxon rank-sum test produced 77.3% accuracy. Similarly, a test substance could be defined as a sensitizer if the SI means of the vehicle control and of the low, middle, and high concentrations were statistically significantly different, tested using ANOVA or Kruskal-Wallis with post hoc analysis (Dunnett or DSCF (Dwass-Steel-Critchlow-Fligner), respectively, depending on the equal-variance test), producing 81.8% accuracy. The absolute SI-based cutoff approach produced the best predictive capacity; however, the discordant decisions between prediction models need to be examined further. Copyright © 2015 Elsevier Inc. All rights reserved.
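The cutoff-optimization step can be sketched as an accuracy-maximizing scan over candidate thresholds, which is the ROC-style selection the abstract describes. The stimulation indices below are synthetic, not the study's measurements:

```python
# Sketch: choose the stimulation-index (SI) cutoff that maximizes
# classification accuracy. All SI values are synthetic.
import numpy as np

rng = np.random.default_rng(6)
si_neg = rng.lognormal(0.1, 0.3, 50)   # non-sensitizers: SI near 1 (synthetic)
si_pos = rng.lognormal(1.3, 0.4, 50)   # sensitizers: clearly elevated SI
si = np.concatenate([si_neg, si_pos])
label = np.concatenate([np.zeros(50), np.ones(50)])   # 1 = sensitizer

cutoffs = np.sort(si)                   # every observed SI is a candidate cutoff
accuracy = np.array([((si >= c) == label).mean() for c in cutoffs])
best_cut = cutoffs[accuracy.argmax()]
print(f"best cutoff SI = {best_cut:.2f}, accuracy = {accuracy.max():.2%}")
```

In the study, the analogous scan was done against the 22 reference substances' known sensitizer/non-sensitizer labels; maximizing other criteria (e.g. Youden's J) along the ROC curve is an equally common choice.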

8. EDI Performance Statistics

Data.gov (United States)

U.S. Department of Health & Human Services — This section contains statistical information and reports related to the percentage of electronic transactions being sent to Medicare contractors in the formats...

9. Ludics, dialogue and inferentialism

Directory of Open Access Journals (Sweden)

Alain Lecomte

2013-12-01

In this paper, we try to show that Ludics, a (pre-)logical framework invented by J.-Y. Girard, enables us to rethink some of the relationships between Philosophy, Semantics and Pragmatics. In particular, Ludics helps to shed light on the nature of dialogue and to articulate features of Brandom's inferentialism.

10. A Framework to Support Research on Informal Inferential Reasoning

Science.gov (United States)

Zieffler, Andrew; Garfield, Joan; delMas, Robert; Reading, Chris

2008-01-01

Informal inferential reasoning is a relatively recent concept in the research literature. Several research studies have defined this type of cognitive process in slightly different ways. In this paper, a working definition of informal inferential reasoning based on an analysis of the key aspects of statistical inference, and on research from…

11. Attitude towards statistics and performance among post-graduate students

Science.gov (United States)

Rosli, Mira Khalisa; Maat, Siti Mistima

2017-05-01

Mastering statistics is a necessity for students, especially post-graduates involved in research. The purpose of this research was to identify attitudes towards statistics among post-graduates and to determine the relationship between attitude towards statistics and performance among post-graduates of the Faculty of Education, UKM, Bangi. 173 post-graduate students were chosen randomly to participate in the study. These students were registered in the Research Methodology II course introduced by the faculty. A survey of attitude towards statistics using a 5-point Likert scale was used for data collection. The instrument consists of four components: affective, cognitive competency, value and difficulty. The data were analyzed using SPSS version 22 to produce descriptive and inferential statistics output. The results showed a moderate, positive relationship between attitude towards statistics and students' performance. In conclusion, educators need to assess students' attitudes towards the course to accomplish the learning outcomes.
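For ordinal Likert responses like these, an attitude-performance relationship would typically be assessed with a rank correlation. A sketch on synthetic data, where only the sample size (173) mirrors the study and every value is invented:

```python
# Sketch: rank correlation between Likert-scale attitude and performance.
# All values are synthetic; only n = 173 mirrors the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
attitude = rng.integers(1, 6, 173).astype(float)         # 5-point Likert scores
performance = 50 + 4 * attitude + rng.normal(0, 8, 173)  # synthetic course marks

rho, p = stats.spearmanr(attitude, performance)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```

Spearman's rho is preferred over Pearson's r here because Likert scores are ordinal; a rho in the middle of the 0-1 range would correspond to the "moderate, positive" relationship the study reports.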

12. From inferential statistics to climate knowledge

OpenAIRE

H. N. Maia, A.; Meinke, H.

2006-01-01

Climate variability and change are risk factors for climate-sensitive activities such as agriculture. Managing these risks requires "climate knowledge", i.e. a sound understanding of the causes and consequences of climate variability and knowledge of potential management options that are suitable in light of the climatic risks posed. Often such information about prognostic variables (e.g. yield, rainfall, run-off) is provided in probabilistic terms (e.g. via cumulative dis...

13. Conditionals and inferential connections: A hypothetical inferential theory.

Science.gov (United States)

Douven, Igor; Elqayam, Shira; Singmann, Henrik; van Wijnbergen-Huitink, Janneke

2018-03-01

Intuition suggests that for a conditional to be evaluated as true, there must be some kind of connection between its component clauses. In this paper, we formulate and test a new psychological theory to account for this intuition. We combined previous semantic and psychological theorizing to propose that the key to the intuition is a relevance-driven, satisficing-bounded inferential connection between antecedent and consequent. To test our theory, we created a novel experimental paradigm in which participants were presented with a soritical series of objects, notably colored patches (Experiments 1 and 4) and spheres (Experiment 2), or both (Experiment 3), and were asked to evaluate related conditionals embodying non-causal inferential connections (such as "If patch number 5 is blue, then so is patch number 4"). All four experiments displayed a unique response pattern, in which (largely determinate) responses were sensitive to parameters determining inference strength, as well as to consequent position in the series, in a way analogous to belief bias. Experiment 3 showed that this guaranteed relevance can be suppressed, with participants reverting to the defective conditional. Experiment 4 showed that this pattern can be partly explained by a measure of inference strength. This pattern supports our theory's "principle of relevant inference" and "principle of bounded inference," highlighting the dual processing characteristics of the inferential connection. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

14. The Relationship between Test Anxiety and Academic Performance of Students in Vital Statistics Course

Directory of Open Access Journals (Sweden)

Shirin Iranfar

2013-12-01

Full Text Available Introduction: Test anxiety is a common phenomenon among students and one of the problems of the educational system. The present study investigated test anxiety in the vital statistics course and its association with the academic performance of students at Kermanshah University of Medical Sciences. This descriptive-analytical study sampled students in the nursing and midwifery, paramedicine, and health faculties who had taken the vital statistics course; they were selected through the census method. The Sarason questionnaire was used to measure test anxiety. Data were analyzed by descriptive and inferential statistics. The findings indicated no significant correlation between test anxiety and the score in the vital statistics course.

15. Comparative Gender Performance in Business Statistics.

Science.gov (United States)

Mogull, Robert G.

1989-01-01

Comparative performance of male and female students in introductory and intermediate statistics classes was examined for over 16 years at a state university. Gender means from 97 classes and 1,609 males and 1,085 females revealed a probabilistic--although statistically insignificant--superior performance by female students that appeared to…

16. Statistical learning methods: Basics, control and performance

Energy Technology Data Exchange (ETDEWEB)

Zimmermann, J. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de

2006-04-01

The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms.

17. Statistical learning methods: Basics, control and performance

International Nuclear Information System (INIS)

Zimmermann, J.

2006-01-01

The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms.

18. The use of statistics in real and simulated investigations performed by undergraduate health sciences' students

OpenAIRE

Pimenta, Rui; Nascimento, Ana; Vieira, Margarida; Costa, Elísio

2010-01-01

In previous works, we evaluated the statistical reasoning ability acquired by health sciences’ students carrying out their final undergraduate project. We found that these students achieved a good level of statistical literacy and reasoning in descriptive statistics. However, concerning inferential statistics the students did not reach a similar level. Statistics educators therefore call for more effective ways to learn statistics, such as project-based investigations. These can be simulat...

19. [Inferential evaluation of intimacy based on observation of interpersonal communication].

Science.gov (United States)

Kimura, Masanori

2015-06-01

How do people inferentially evaluate others' levels of intimacy with friends? We examined the inferential evaluation of intimacy based on the observation of interpersonal communication. In Experiment 1, participants (N = 41) responded to questions after observing conversations between friends. Results indicated that participants inferentially evaluated not only goodness of communication, but also intimacy between friends, using an expressivity heuristic approach. In Experiment 2, we investigated how inferential evaluation of intimacy was affected by prior information about relationships and by individual differences in face-to-face interactional ability. Participants (N = 64) were divided into prior- and no-prior-information groups and all performed the same task as in Experiment 1. Additionally, their interactional ability was assessed. In the prior-information group, individual differences had no effect on inferential evaluation of intimacy. On the other hand, in the no-prior-information group, face-to-face interactional ability partially influenced evaluations of intimacy. Finally, we discuss the fact that to understand one's social environment, it is important to observe others' interpersonal communications.

20. Performance modeling, loss networks, and statistical multiplexing

CERN Document Server

Mazumdar, Ravi

2009-01-01

This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I

1. An introduction to inferentialism in mathematics education

Science.gov (United States)

Derry, Jan

2017-12-01

This paper introduces the philosophical work of Robert Brandom, termed inferentialism, which underpins this collection and argues that it offers rich theoretical resources for reconsidering many of the challenges and issues that have arisen in mathematics education. Key to inferentialism is the privileging of the inferential over the representational in an account of meaning; of direct concern here is the theoretical relevance of this to the process by which learners gain knowledge. Inferentialism requires that the correct application of a concept be understood in terms of inferential articulation; simply put, a concept has meaning only as part of a set of related concepts. The paper explains how Brandom's account of meaning is inextricably tied to freedom, and how it is our responsiveness to reasons involving norms that makes humans a distinctive life form. In an educational context, norms function to delimit the domain in which knowledge is acquired, and it is here that the neglect of our responsiveness to reasons is significant, not only for Brandom but also for Vygotsky, with implications for how knowledge is understood in mathematics classrooms. The paper explains the technical terms in Brandom's account of meaning, such as deontic scorekeeping, illustrating these through examples to show how the inferential articulation of a concept, and thus its correct application, is made visible. Inferentialism fosters the possibility of overcoming some of the thorny old problems that have seen those on the side of facts and disciplines opposed to those whose primary concern is the meaning making of learners.

2. Performance modeling, stochastic networks, and statistical multiplexing

CERN Document Server

Mazumdar, Ravi R

2013-01-01

This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan

3. A statistical model for predicting muscle performance

Science.gov (United States)

Byerly, Diane Leslie De Caix

The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
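The abstract's core idea, fitting an AR(5) model to an SEMG signal and summarizing it by the mean magnitude of the model's poles, can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: the synthetic signal and the Yule-Walker fitting route are assumptions.

```python
import numpy as np

def ar_pole_magnitudes(signal, order=5):
    """Fit an AR(order) model via the Yule-Walker equations and
    return the magnitudes of its poles."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates r[0..order] (guarantees a stable model)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Solve the Yule-Walker system R a = r[1:], with R Toeplitz
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    # Poles are the roots of z^p - a1*z^(p-1) - ... - ap
    poles = np.roots(np.concatenate(([1.0], -a)))
    return np.abs(poles)

# Synthetic demo signal standing in for an SEMG recording
rng = np.random.default_rng(0)
sig = np.sin(0.3 * np.arange(500)) + 0.5 * rng.standard_normal(500)
mags = ar_pole_magnitudes(sig, order=5)
print(mags.mean())  # mean average magnitude of AR poles
```

In the study, a statistic of this kind computed per repetition is regressed against the repetition count to predict Rmax; the regression step is omitted here.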

4. Back to basics: an introduction to statistics.

Science.gov (United States)

Halfens, R J G; Meijers, J M M

2013-05-01

In the second in the series, Professor Ruud Halfens and Dr Judith Meijers give an overview of statistics, both descriptive and inferential. They describe the first principles of statistics, including some relevant inferential tests.

5. Statistical analysis in MSW collection performance assessment.

Science.gov (United States)

Teixeira, Carlos Afonso; Avelino, Catarina; Ferreira, Fátima; Bentes, Isabel

2014-09-01

The increase in Municipal Solid Waste (MSW) generated over recent years forces waste managers to pursue more effective collection schemes that are technically viable, environmentally effective, and economically sustainable. The assessment of MSW services using performance indicators plays a crucial role in improving service quality. In this work, we focus on the relevance of regular system monitoring as a service assessment tool. In particular, we select and test a core set of MSW collection performance indicators (effective collection distance, effective collection time, and effective fuel consumption) that highlights collection system strengths and weaknesses and supports pro-active management decision-making and strategic planning. A statistical analysis was conducted with data collected in the mixed collection system of the Oporto Municipality, Portugal, during one year, one week per month. This analysis provides an operational assessment of the collection circuits and supports effective short-term municipal collection strategies at the level of, e.g., collection frequency, timetables, and type of containers. Copyright © 2014 Elsevier Ltd. All rights reserved.

6. The use of regularization in inferential measurements

International Nuclear Information System (INIS)

Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.

1999-01-01

Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution.

7. Inferentialism and the Compositionality of Meaning

Czech Academy of Sciences Publication Activity Database

Peregrin, Jaroslav

2009-01-01

Vol. 1, No. 1 (2009), pp. 154-181. ISSN 1877-3095. R&D Projects: GA ČR (CZ) GA401/07/0904. Institutional research plan: CEZ:AV0Z90090514. Keywords: inferentialism * compositionality * semantics. Subject RIV: AA - Philosophy; Religion

8. Inferential misconceptions and replication crisis

Directory of Open Access Journals (Sweden)

Norbert Hirschauer

2016-12-01

Full Text Available Misinterpretations of the p value and the introduction of bias through arbitrary analytical choices have been discussed in the literature for decades. Nonetheless, they seem to have persisted in empirical research, and criticisms of p value misuses have increased in the recent past due to the non-replicability of many studies. Unfortunately, the critical concerns that have been raised in the literature are scattered over many disciplines, often linguistically confusing, and differing in their main reasons for criticisms. Misuses and misinterpretations of the p value are currently being discussed intensely under the label “replication crisis” in many academic disciplines and journals, ranging from specialized scientific journals to Nature and Science. In a drastic response to the crisis, the editors of the journal Basic and Applied Social Psychology even decided to ban the use of p values from future publications at the beginning of 2015, a fact that has certainly added fuel to the discussions in the relevant scientific forums. Finally, in early March, the American Statistical Association released a brief formal statement on p values that explicitly addresses misuses and misinterpretations. In this context, we systematize the most serious flaws related to the p value and discuss suggestions of how to prevent them and reduce the rate of false discoveries in the future.

9. Statistics Anxiety, Trait Anxiety, Learning Behavior, and Academic Performance

Science.gov (United States)

Macher, Daniel; Paechter, Manuela; Papousek, Ilona; Ruggeri, Kai

2012-01-01

The present study investigated the relationship between statistics anxiety, individual characteristics (e.g., trait anxiety and learning strategies), and academic performance. Students enrolled in a statistics course in psychology (N = 147) filled in a questionnaire on statistics anxiety, trait anxiety, interest in statistics, mathematical…

10. A Model of Statistics Performance Based on Achievement Goal Theory.

Science.gov (United States)

Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

2003-01-01

Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

11. Two degree of freedom PID based inferential control of continuous bioreactor for ethanol production.

Science.gov (United States)

Pachauri, Nikhil; Singh, Vijander; Rani, Asha

2017-05-01

12. Descriptive and inferential statistics for the SANREM CRSP project database

OpenAIRE

Villca, E.I.

2008-01-01

This working paper presents survey data, discusses methodology, and draws some conclusions about livelihood strategies in Bolivia in the Jatun Mayu watershed. LTRA-3 (Watershed-based NRM for Small-scale Agriculture)

13. Appraisal of within- and between-laboratory reproducibility of non-radioisotopic local lymph node assay using flow cytometry, LLNA:BrdU-FCM: comparison of OECD TG429 performance standard and statistical evaluation.

Science.gov (United States)

Yang, Hyeri; Na, Jihye; Jang, Won-Hee; Jung, Mi-Sook; Jeon, Jun-Young; Heo, Yong; Yeo, Kyung-Wook; Jo, Ji-Hoon; Lim, Kyung-Min; Bae, SeungJin

2015-05-05

Mouse local lymph node assay (LLNA, OECD TG429) is an alternative test replacing conventional guinea pig tests (OECD TG406) for skin sensitization testing, but the use of a radioisotopic agent, (3)H-thymidine, deters its active dissemination. The new non-radioisotopic LLNA, LLNA:BrdU-FCM, employs a non-radioisotopic analog, 5-bromo-2'-deoxyuridine (BrdU), and flow cytometry. For an analogous method, the OECD TG429 performance standard (PS) advises that two reference compounds be tested repeatedly and that the ECt (threshold) values obtained fall within acceptable ranges to prove within- and between-laboratory reproducibility. However, this criterion is somewhat arbitrary, and the sample size for ECt is less than 5, raising concerns about insufficient reliability. Here, we explored various statistical methods to evaluate the reproducibility of LLNA:BrdU-FCM with the stimulation index (SI), the raw data for ECt calculation, produced by 3 laboratories. Descriptive statistics along with graphical representation of SI were presented. For inferential statistics, parametric and non-parametric methods were applied to test the reproducibility of the SI of a concurrent positive control, and the robustness of the results was investigated. Descriptive statistics and graphical representation of SI alone could illustrate the within- and between-laboratory reproducibility. Inferential statistics employing parametric and nonparametric methods drew similar conclusions. While all labs passed the within- and between-laboratory reproducibility criteria given by the OECD TG429 PS based on ECt values, statistical evaluation based on SI values showed that only two labs succeeded in achieving within-laboratory reproducibility. For the two labs that satisfied within-lab reproducibility, between-laboratory reproducibility could also be attained based on inferential as well as descriptive statistics. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
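The parametric/non-parametric comparison of SI across laboratories described above can be sketched with SciPy. The SI values below are invented for illustration, and the paper's exact tests are not specified here; a one-way ANOVA and its Kruskal-Wallis counterpart stand in for them.

```python
from scipy import stats

# Hypothetical stimulation-index (SI) values for a concurrent positive
# control, measured repeatedly in three labs (illustrative numbers only)
si_lab1 = [4.1, 3.8, 4.5, 4.0, 3.9]
si_lab2 = [4.3, 4.6, 4.2, 4.8, 4.4]
si_lab3 = [3.7, 4.0, 3.6, 4.1, 3.8]

# Parametric check of between-laboratory reproducibility: one-way ANOVA
f_stat, p_anova = stats.f_oneway(si_lab1, si_lab2, si_lab3)

# Non-parametric counterpart: Kruskal-Wallis H-test
h_stat, p_kw = stats.kruskal(si_lab1, si_lab2, si_lab3)

print(f"ANOVA p = {p_anova:.3f}, Kruskal-Wallis p = {p_kw:.3f}")
```

When the two families of tests agree, as the abstract reports they did, the conclusion is robust to the normality assumption.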

14. Statistical evaluation of diagnostic performance topics in ROC analysis

CERN Document Server

Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

2016-01-01

Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...

15. A hierarchical inferential method for indoor scene classification

Directory of Open Access Journals (Sweden)

Jiang Jingzhe

2017-12-01

Full Text Available Indoor scene classification forms a basis for scene interaction for service robots. The task is challenging because the layout and decoration of a scene vary considerably. Previous studies on knowledge-based methods commonly ignore the importance of visual attributes when constructing the knowledge base. These shortcomings restrict the performance of classification. The structure of a semantic hierarchy was proposed to describe similarities of different parts of scenes in a fine-grained way. Besides the commonly used semantic features, visual attributes were also introduced to construct the knowledge base. Inspired by the processes of human cognition and the characteristics of indoor scenes, we proposed an inferential framework based on the Markov logic network. The framework is evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.

16. Fault detection in IRIS reactor secondary loop using inferential models

International Nuclear Information System (INIS)

Perillo, Sergio R.P.; Upadhyaya, Belle R.; Hines, J. Wesley

2013-01-01

Fault detection algorithms are well-suited to the remote deployment of small and medium reactors, such as the IRIS, and to the development of new small modular reactors (SMRs). However, an extensive number of tests must still be performed for new engineering aspects and components that are not yet proven technology in current PWRs, and these present technological challenges for deployment, since many of the plant's features cannot be proven until a prototype plant is built. In this work, an IRIS plant simulation platform was developed using a Simulink® model. The dynamic simulation was used to obtain inferential models, which were then used to detect faults artificially added to the secondary system simulations. The implementation of the data-driven models and the results are discussed.

17. Self-assessed performance improves statistical fusion of image labels

International Nuclear Information System (INIS)

Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

2014-01-01

Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance

18. Self-assessed performance improves statistical fusion of image labels

Energy Technology Data Exchange (ETDEWEB)

Bryan, Frederick W., E-mail: frederick.w.bryan@vanderbilt.edu; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); Reich, Daniel S. [Translational Neuroradiology Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, Maryland 20892 (United States); Landman, Bennett A. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); and Radiology and Radiological Sciences, Vanderbilt University, Nashville, Tennessee 37235 (United States)

2014-03-15

Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance

19. Improved custom statistics visualization for CA Performance Center data

CERN Document Server

Talevi, Iacopo

2017-01-01

The main goal of my project is to understand and experiment the possibilities that CA Performance Center (CA PC) offers for creating custom applications to display stored information through interesting visual means, such as maps. In particular, I have re-written some of the network statistics web pages in order to fetch data from new statistics modules in CA PC, which has its own API, and stop using the RRD data.

20. Inferentialism as an alternative to socioconstructivism in mathematics education

Science.gov (United States)

Noorloos, Ruben; Taylor, Samuel D.; Bakker, Arthur; Derry, Jan

2017-12-01

The purpose of this article is to draw the attention of mathematics education researchers to a relatively new semantic theory called inferentialism, as developed by the philosopher Robert Brandom. Inferentialism is a semantic theory which explains concept formation in terms of the inferences individuals make in the context of an intersubjective practice of acknowledging, attributing, and challenging one another's commitments. The article argues that inferentialism can help to overcome certain problems that have plagued the various forms of constructivism, and socioconstructivism in particular. Despite the range of socioconstructivist positions on offer, there is reason to think that versions of these problems will continue to haunt socioconstructivism. The problems are that socioconstructivists (i) have not come to a satisfactory resolution of the social-individual dichotomy, (ii) are still threatened by relativism, and (iii) have been vague in their characterization of what construction is. We first present these problems; then we introduce inferentialism, and finally we show how inferentialism can help to overcome the problems. We argue that inferentialism (i) contains a powerful conception of norms that can overcome the social-individual dichotomy, (ii) draws attention to the reality that constrains our inferences, and (iii) develops a clearer conception of learning in terms of the mastering of webs of reasons. Inferentialism therefore represents a powerful alternative theoretical framework to socioconstructivism.

1. A Divergence Statistics Extension to VTK for Performance Analysis

Energy Technology Data Exchange (ETDEWEB)

Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

2015-02-01

This report follows the series of previous documents [PT08, BPRT09b, PT09, BPT09, PT10, PB13], where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order, and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel, and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we explain below and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
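The notion of a divergence statistic, a distance-like discrepancy between an observed empirical distribution and a theoretical ideal, can be illustrated in a few lines. The report itself works in C++ inside VTK; this Python sketch with invented counts shows only the concept, using the Kullback-Leibler divergence as one common choice.

```python
import numpy as np

# Hypothetical observed histogram (e.g. task times binned into 5 buckets)
observed_counts = np.array([18, 22, 30, 20, 10])
empirical = observed_counts / observed_counts.sum()

# Theoretical "ideal" distribution: a perfectly uniform load
theoretical = np.full(5, 0.2)

# Kullback-Leibler divergence D(empirical || theoretical):
# zero iff the two distributions coincide, larger means more discrepancy
kl = float(np.sum(empirical * np.log(empirical / theoretical)))
print(f"KL divergence = {kl:.4f}")
```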

2. THESEE-3, Orgel Reactor Performance and Statistic Hot Channel Factors

International Nuclear Information System (INIS)

Chambaud, B.

1974-01-01

1 - Nature of physical problem solved: The code applies to a heavy-water moderated organic-cooled reactor channel. Different fuel cluster models can be used (circular or hexagonal patterns). The code gives coolant temperatures and velocities and cladding temperatures throughout the channel and also channel performances, such as power, outlet temperature, boiling and burn-out safety margins (see THESEE-1). In a further step, calculations are performed with statistical values obtained by random retrieval of geometrical input data and taking into account construction tolerances, vibrations, etc. The code evaluates the mean value and standard deviation for the more important thermal and hydraulic parameters. 2 - Method of solution: First step calculations are performed for nominal values of parameters by solving iteratively the non-linear system of equations which give the pressure drops in subchannels of the current zone (see THESEE-1). Then a Gaussian probability distribution of possible statistical values of the geometrical input data is assumed. A random number generation routine determines the statistical case. Calculations are performed in the same way as for the nominal case. In the case of several channels, statistical performances must be adjusted to equalize the normal pressure drop. A special subroutine (AVERAGE) then determines the mean value and standard deviation, and thus probability functions of the most significant thermal and hydraulic results. 3 - Restrictions on the complexity of the problem: Maximum 7 fuel clusters, each divided into 10 axial zones. Fuel bundle geometries are restricted to the following models - circular pattern 6/7, 18/19, 36/67 rods, with or without fillers. The fuel temperature distribution is not studied. The probability distribution of the statistical input is assumed to be a Gaussian function. The principle of random retrieval of statistical values is correct, but some additional correlations could be found from a more

3. Inferential Role and the Ideal of Deductive Logic

Directory of Open Access Journals (Sweden)

Thomas Hofweber

2010-11-01

Full Text Available Although there is a prima facie strong case for a close connection between the meaning and inferential role of certain expressions, this connection seems seriously threatened by the semantic and logical paradoxes which rely on these inferential roles. Some philosophers have drawn radical conclusions from the paradoxes for the theory of meaning in general, and for which sentences in our language are true. I criticize these overreactions, and instead propose to distinguish two conceptions of inferential role. This distinction is closely tied to two conceptions of deductive logic, and it is the key, I argue, for understanding first the connection between meaning and inferential role, and second what the paradoxes show more generally.

4. Beyond the Story Map: Inferential Comprehension via Character Perspective

Science.gov (United States)

McTigue, Erin; Douglass, April; Wright, Katherine L.; Hodges, Tracey S.; Franks, Amanda D.

2015-01-01

Inferential comprehension requires both emotional intelligence and cognitive skills; however, instructional comprehension strategies typically underemphasize the emotional contribution. This article documents an intervention used with diverse third-grade students that centers on teaching story comprehension through character perspective-taking…

5. Statistical inference for the lifetime performance index based on generalised order statistics from exponential distribution

Science.gov (United States)

2015-04-01

In manufacturing industries, the lifetime of an item is usually characterised by a random variable X and considered to be satisfactory if X exceeds a given lower lifetime limit L. The probability of a satisfactory item is then ηL := P(X ≥ L), called conforming rate. In industrial companies, however, the lifetime performance index, proposed by Montgomery and denoted by CL, is widely used as a process capability index instead of the conforming rate. Assuming a parametric model for the random variable X, we show that there is a connection between the conforming rate and the lifetime performance index. Consequently, the statistical inferences about ηL and CL are equivalent. Hence, we restrict ourselves to statistical inference for CL based on generalised order statistics, which contains several ordered data models such as usual order statistics, progressively Type-II censored data and records. Various point and interval estimators for the parameter CL are obtained and optimal critical regions for the hypothesis testing problems concerning CL are proposed. Finally, two real data-sets on the lifetimes of insulating fluid and ball bearings, due to Nelson (1982) and Caroni (2002), respectively, and a simulated sample are analysed.
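Under the exponential model used in this record, the conforming rate and the lifetime performance index are linked one-to-one, ηL = exp(CL − 1) with CL = 1 − L/μ, so estimating one estimates the other. A minimal sketch for a complete (uncensored) sample, with illustrative data:

```python
import math

def c_l_hat(lifetimes, L):
    """Estimate the lifetime performance index C_L = 1 - L/mu for
    exponentially distributed lifetimes (complete sample; mu is the
    mean lifetime, estimated by its MLE, the sample mean)."""
    mu_hat = sum(lifetimes) / len(lifetimes)
    return 1.0 - L / mu_hat

def conforming_rate(c_l):
    # One-to-one link under the exponential model:
    # eta_L = P(X >= L) = exp(C_L - 1).
    return math.exp(c_l - 1.0)

data = [2.3, 1.1, 4.7, 3.5, 0.9, 2.8]   # illustrative lifetimes
cl = c_l_hat(data, L=0.5)
eta = conforming_rate(cl)
```

For censored or record data the point estimator changes (that is what the generalised-order-statistics machinery in the paper handles), but the CL-to-ηL link is the same.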

6. Computational statistics handbook with Matlab

CERN Document Server

Martinez, Wendy L

2007-01-01

Prefaces Introduction What Is Computational Statistics? An Overview of the Book Probability Concepts Introduction Probability Conditional Probability and Independence Expectation Common Distributions Sampling Concepts Introduction Sampling Terminology and Concepts Sampling Distributions Parameter Estimation Empirical Distribution Function Generating Random Variables Introduction General Techniques for Generating Random Variables Generating Continuous Random Variables Generating Discrete Random Variables Exploratory Data Analysis Introduction Exploring Univariate Data Exploring Bivariate and Trivariate Data Exploring Multidimensional Data Finding Structure Introduction Projecting Data Principal Component Analysis Projection Pursuit EDA Independent Component Analysis Grand Tour Nonlinear Dimensionality Reduction Monte Carlo Methods for Inferential Statistics Introduction Classical Inferential Statistics Monte Carlo Methods for Inferential Statist...

7. Statistical analysis of RHIC beam position monitors performance

Science.gov (United States)

Calaga, R.; Tomás, R.

2004-04-01

A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
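The SVD side of such an identification can be sketched on synthetic turn-by-turn data: a healthy BPM is well represented by the few dominant coherent modes, while a faulty one stands out in the reconstruction residual. The tune, noise level, and fault model below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
turns, n_bpm = 1000, 12
phase = rng.uniform(0, 2 * np.pi, n_bpm)         # synthetic betatron phases
t = np.arange(turns)
X = np.cos(0.28 * 2 * np.pi * t[:, None] + phase)  # coherent betatron signal
X += 0.01 * rng.standard_normal(X.shape)         # sensor noise
X[:, 5] = rng.standard_normal(turns)             # BPM 5: pure noise (faulty)

# Subtract the mean orbit, decompose, and project onto the dominant modes.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                            # keep dominant modes
recon = (U[:, :k] * s[:k]) @ Vt[:k]
residual = np.sqrt(((Xc - recon) ** 2).mean(axis=0))
faulty = int(np.argmax(residual))                # flags the noise-only BPM
```

Two modes suffice here because a single betatron line seen at different phases spans a two-dimensional space; real RHIC data need more care with mode count and thresholds.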

9. Adaptive inferential sensors based on evolving fuzzy models.

Science.gov (United States)

Angelov, Plamen; Kordon, Arthur

2010-04-01

A new approach to the design and use of inferential sensors in the process industry is proposed in this paper, based on the recently introduced concept of evolving fuzzy models (EFMs). They address the challenge that the modern process industry faces today, namely, developing adaptive, self-calibrating online inferential sensors that reduce maintenance costs while retaining high precision and interpretability/transparency. The proposed methodology makes it possible for inferential sensors to recalibrate automatically, which significantly reduces the life-cycle effort of their maintenance. This is achieved by the adaptive and flexible open-structure EFM used. The novelty of this paper lies in the following: (1) the overall concept of inferential sensors with evolving and self-developing structure from the data streams; (2) the new methodology for online automatic selection of input variables that are most relevant for the prediction; (3) the technique to detect automatically a shift in the data pattern using the age of the clusters (and fuzzy rules); (4) the online standardization technique used by the learning procedure of the evolving model; and (5) the application of this innovative approach to several real-life industrial processes from the chemical industry (evolving inferential sensors, namely, eSensors, were used for predicting the chemical properties of different products in The Dow Chemical Company, Freeport, TX). It should be noted, however, that the methodology and conclusions of this paper are valid for the broader area of chemical and process industries in general. The results demonstrate that well-interpretable inferential sensors with simple structure can be designed automatically from the data stream in real time to predict various process variables of interest. The proposed approach can be used as a basis for the development of a new generation of adaptive and evolving inferential sensors that can address the

10. Inferential reasoning by exclusion in children (Homo sapiens).

Science.gov (United States)

Hill, Andrew; Collier-Baker, Emma; Suddendorf, Thomas

2012-08-01

The cups task is the most widely adopted forced-choice paradigm for comparative studies of inferential reasoning by exclusion. In this task, subjects are presented with two cups, one of which has been surreptitiously baited. When the empty cup is shaken or its interior shown, it is possible to infer by exclusion that the alternative cup contains the reward. The present study extends the existing body of comparative work to include human children (Homo sapiens). Like chimpanzees (Pan troglodytes) that were tested with the same equipment and near-identical procedures, children aged three to five made apparent inferences using both visual and auditory information, although the youngest children showed the least-developed ability in the auditory modality. However, unlike chimpanzees, children of all ages used causally irrelevant information in a control test designed to examine the possibility that their apparent auditory inferences were the product of contingency learning (the duplicate cups test). Nevertheless, the children's ability to reason by exclusion was corroborated by their performance on a novel verbal disjunctive syllogism test, and we found preliminary evidence consistent with the suggestion that children used their causal-logical understanding to reason by exclusion in the cups task, but subsequently treated the duplicate cups information as symbolic or communicative, rather than causal. Implications for future comparative research are discussed.

11. Crop identification technology assessment for remote sensing. (CITARS) Volume 9: Statistical analysis of results

Science.gov (United States)

Davis, B. J.; Feiveson, A. H.

1975-01-01

Results are presented of CITARS data processing in raw form. Tables of descriptive statistics are given along with descriptions and results of inferential analyses. The inferential results are organized by questions which CITARS was designed to answer.

12. Gas energy meter for inferential determination of thermophysical properties of a gas mixture at multiple states of the gas

Science.gov (United States)

Morrow, Thomas B [San Antonio, TX; Kelner, Eric [San Antonio, TX; Owen, Thomas E [Helotes, TX

2008-07-08

A gas energy meter that acquires the data and performs the processing for an inferential determination of one or more gas properties, such as heating value, molecular weight, or density. The meter has a sensor module that acquires temperature, pressure, CO2, and speed of sound data. Data is acquired at two different states of the gas, which eliminates the need to determine the concentration of nitrogen in the gas. A processing module receives this data and uses it to perform a "two-state" inferential algorithm.

13. Statistical performance evaluation of ECG transmission using wireless networks.

Science.gov (United States)

Shakhatreh, Walid; Gharaibeh, Khaled; Al-Zaben, Awad

2013-07-01

This paper presents a simulation of the transmission of biomedical signals (using an ECG signal as an example) over wireless networks. The effects of channel impairments, including SNR, path-loss exponent and path delay, and of network impairments, such as packet loss probability, on the diagnosability of the received ECG signal are investigated. The ECG signal is transmitted through a wireless network system composed of two communication protocols: an 802.15.4 ZigBee protocol and an 802.11b protocol. The performance of the transmission is evaluated using higher-order statistics such as kurtosis and negative entropy, in addition to common measures such as PRD, RMS and cross-correlation.
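The distortion metrics named here are simple to compute. A sketch using a crude synthetic waveform; the PRD definition and the excess-kurtosis estimator are standard, but the signal and noise level are invented:

```python
import numpy as np

def prd(original, received):
    """Percentage root-mean-square difference between two signals."""
    return 100.0 * np.sqrt(np.sum((original - received) ** 2)
                           / np.sum(original ** 2))

def excess_kurtosis(x):
    """Fourth-moment-based spikiness measure (0 for a Gaussian)."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

t = np.linspace(0.0, 1.0, 500)
ecg = np.sin(2 * np.pi * 5 * t) ** 3          # crude ECG-like waveform
rng = np.random.default_rng(42)
received = ecg + 0.05 * rng.standard_normal(ecg.size)   # channel noise

distortion = prd(ecg, received)   # grows as the channel degrades
spikiness = excess_kurtosis(received)
```

Comparing such statistics between the transmitted and received signals is one way to quantify whether channel impairments have altered the waveform's diagnostic shape.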

14. Statistical Analysis of EGFR Structures’ Performance in Virtual Screening

Science.gov (United States)

Li, Yan; Li, Xiang; Dong, Zigang

2015-01-01

In this work the ability of EGFR structures to distinguish true inhibitors from decoys in docking and MM-PBSA is assessed by statistical procedures. The docking performance depends critically on the receptor conformation and bound state. The enrichment of known inhibitors is well correlated with the difference between EGFR structures rather than the bound-ligand property. The optimal structures for virtual screening can be selected based purely on the complex information, and a mixed combination of distinct EGFR conformations is recommended for ensemble docking. In MM-PBSA, a variety of EGFR structures perform comparably well in the scoring and ranking of known inhibitors, indicating that the choice of the receptor structure has little effect on the screening. PMID:26476847

15. Magnetic resonance imaging of the wrist: Diagnostic performance statistics

International Nuclear Information System (INIS)

Hobby, Jonathan L.; Tom, Brian D.M.; Bearcroft, Philip W.P.; Dixon, Adrian K.

2001-01-01

AIM: To review the published diagnostic performance statistics for magnetic resonance imaging (MRI) of the wrist for tears of the triangular fibrocartilage complex, the intrinsic carpal ligaments, and for osteonecrosis of the carpal bones. MATERIALS AND METHODS: We used Medline and Embase to search the English language literature. Studies evaluating the diagnostic performance of MRI of the wrist in living patients with surgical confirmation of MR findings were identified. RESULTS: We identified 11 studies reporting the diagnostic performance of MRI for tears of the triangular fibrocartilage complex for a total of 410 patients, six studies for the scapho-lunate ligament (159 patients), six studies for the luno-triquetral ligament (142 patients) and four studies (56 patients) for osteonecrosis of the carpal bones. CONCLUSIONS: Magnetic resonance imaging is an accurate means of diagnosing tears of the triangular fibrocartilage and carpal osteonecrosis. Although MRI is highly specific for tears of the intrinsic carpal ligaments, its sensitivity is low. The diagnostic performance of MRI in the wrist is improved by using high-resolution T2* weighted 3D gradient echo sequences. Using current imaging techniques without intra-articular contrast medium, magnetic resonance imaging cannot reliably exclude tears of the intrinsic carpal ligaments. Hobby, J.L. (2001)

16. A statistical approach to nuclear fuel design and performance

Science.gov (United States)

Cunning, Travis Andrew

As CANDU fuel failures can have significant economic and operational consequences on the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case---a hypothesized 80% reactor outlet header break loss of coolant accident. Using a Monte Carlo technique for input generation, 10^5 independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimension-reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance
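The workflow described above — fit a distribution to manufacturing measurements, sample it, and push the samples through a performance code — can be sketched with a placeholder model standing in for ELESTRES; the measurements and the response function are invented for illustration:

```python
import random
import statistics

random.seed(7)
# Hypothetical pellet-density measurements (g/cm^3) standing in for a
# manufacturing data set; a normal distribution is fitted to them.
measured = [10.55, 10.60, 10.58, 10.62, 10.57, 10.61, 10.59, 10.56]
mu = statistics.fmean(measured)
sigma = statistics.stdev(measured)

def fuel_model(density):
    # Placeholder for the fuel performance code: a monotone response
    # standing in for, say, a peak temperature vs. pellet density.
    return 900.0 + 150.0 * (density - 10.6)

# Brute-force Monte Carlo propagation of the fitted input distribution.
outputs = [fuel_model(random.gauss(mu, sigma)) for _ in range(100_000)]
out_mean = statistics.fmean(outputs)
out_sd = statistics.stdev(outputs)
```

Comparing the resulting output distribution against an acceptance limit then gives a probabilistic margin rather than a single limit-of-envelope number.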

17. Inferential ecosystem models, from network data to prediction

Science.gov (United States)

James S. Clark; Pankaj Agarwal; David M. Bell; Paul G. Flikkema; Alan Gelfand; Xuanlong Nguyen; Eric Ward; Jun Yang

2011-01-01

Recent developments suggest that predictive modeling could begin to play a larger role not only for data analysis, but also for data collection. We address the example of efficient wireless sensor networks, where inferential ecosystem models can be used to weigh the value of an observation against the cost of data collection. Transmission costs make observations …

18. Inferential Style, School Teachers, and Depressive Symptoms in College Students

Directory of Open Access Journals (Sweden)

Caroline M. Pittard

2018-04-01

Full Text Available Depressive symptoms affect around half of students at some point during college. According to the hopelessness theory of depression, making negative inferences about stressful events is a vulnerability for developing depression. Negative and socioemotional teaching behavior can be stressors that are associated with depression in school students. First-time college freshmen completed the Cognitive Style Questionnaire (CSQ), Teaching Behavior Questionnaire (TBQ), and Center for Epidemiological Studies Depression Scale (CES-D). While completing the TBQ, participants reported on a teacher from their education prior to college. Multiple regression analysis found significant effects of the independent variables (four teaching behavior types, inferential style, and interactions between the four teaching behavior types and inferential style) on the dependent variable (depressive symptoms). More specifically, negative and socioemotional teaching behavior were positively associated with depressive symptoms, and instructional and organizational teaching behavior were negatively associated with depressive symptoms. Both organizational and negative teaching behavior interacted significantly with inferential style. Organizational and negative teaching behavior shared different relationships with depressive symptoms depending upon an individual's level of inferential style. Promotion of instructional and organizational teaching behavior in school, as well as the reduction of negative teaching behavior, may be useful in reducing students' depressive symptoms.

19. Inferentialism in mathematics education : introduction to a special issue

NARCIS (Netherlands)

Bakker, Arthur; Hußmann, Stephan

2017-01-01

Inferentialism, as developed by the philosopher Robert Brandom (1994, 2000), is a theory of meaning. The theory has wide-ranging implications in various fields but this special issue concentrates on the use and content of concepts. The key idea, relevant to mathematics education research, is that

20. Probability Theory Plus Noise: Descriptive Estimation and Inferential Judgment.

Science.gov (United States)

Costello, Fintan; Watts, Paul

2018-01-01

We describe a computational model of two central aspects of people's probabilistic reasoning: descriptive probability estimation and inferential probability judgment. This model assumes that people's reasoning follows standard frequentist probability theory, but it is subject to random noise. This random noise has a regressive effect in descriptive probability estimation, moving probability estimates away from normative probabilities and toward the center of the probability scale. This random noise has an anti-regressive effect in inferential judgment, however. These regressive and anti-regressive effects explain various reliable and systematic biases seen in people's descriptive probability estimation and inferential probability judgment. This model predicts that these contrary effects will tend to cancel out in tasks that involve both descriptive estimation and inferential judgment, leading to unbiased responses in those tasks. We test this model by applying it to one such task, described by Gallistel et al. Participants' median responses in this task were unbiased, agreeing with normative probability theory over the full range of responses. Our model captures the pattern of unbiased responses in this task, while simultaneously explaining systematic biases away from normatively correct probabilities seen in other tasks.
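The regressive effect described here can be simulated directly: if each stored event is read out with some flip probability d, the expected descriptive estimate is (1 − 2d)p + d, pulled toward 0.5. The flip-noise mechanism follows the published model; the simulation parameters below are ours:

```python
import random

def noisy_estimate(p, d, n_events=10_000, seed=3):
    """Simulate descriptive estimation under a probability-plus-noise
    model: each stored event is read out with flip probability d, so
    the expected estimate is (1 - 2d) * p + d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_events):
        occurred = rng.random() < p           # true event outcome
        if rng.random() < d:                  # read-out noise flips it
            occurred = not occurred
        hits += occurred
    return hits / n_events

p, d = 0.9, 0.1
est = noisy_estimate(p, d)
expected = (1 - 2 * d) * p + d   # analytic prediction: regressed toward 0.5
```

A true probability of 0.9 is thus reported, on average, as about 0.82, illustrating the regression toward the center of the scale.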

1. Inferential backbone assignment for sparse data

International Nuclear Information System (INIS)

Vitek, Olga; Bailey-Kellogg, Chris; Craig, Bruce; Vitek, Jan

2006-01-01

This paper develops an approach to protein backbone NMR assignment that effectively assigns large proteins while using limited sets of triple-resonance experiments. Our approach handles proteins with large fractions of missing data and many ambiguous pairs of pseudoresidues, and provides a statistical assessment of confidence in global and position-specific assignments. The approach is tested on an extensive set of experimental and synthetic data of up to 723 residues, with match tolerances of up to 0.5 ppm for C α and C β resonance types. The tests show that the approach is particularly helpful when data contain experimental noise and require large match tolerances. The keys to the approach are an empirical Bayesian probability model that rigorously accounts for uncertainty in the data at all stages in the analysis, and a hybrid stochastic tree-based search algorithm that effectively explores the large space of possible assignments

2. Statistics

CERN Document Server

Hayslett, H T

1991-01-01

Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

3. Statistics

Science.gov (United States)

Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

4. Brain Evolution and Human Neuropsychology: The Inferential Brain Hypothesis

Science.gov (United States)

Koscik, Timothy R.; Tranel, Daniel

2013-01-01

Collaboration between human neuropsychology and comparative neuroscience has generated invaluable contributions to our understanding of human brain evolution and function. Further cross-talk between these disciplines has the potential to continue to revolutionize these fields. Modern neuroimaging methods could be applied in a comparative context, yielding exciting new data with the potential of providing insight into brain evolution. Conversely, incorporating an evolutionary base into the theoretical perspectives from which we approach human neuropsychology could lead to novel hypotheses and testable predictions. In the spirit of these objectives, we present here a new theoretical proposal, the Inferential Brain Hypothesis, whereby the human brain is thought to be characterized by a shift from perceptual processing to inferential computation, particularly within the social realm. This shift is believed to be a driving force for the evolution of the large human cortex. PMID:22459075

5. Statistics

International Nuclear Information System (INIS)

2005-01-01

For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004.) The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO 2 -emissions, Supplies and total consumption of electricity GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees

6. Evaluation of the performance of Moses statistical engine adapted to ...

African Journals Online (AJOL)

... of Moses statistical engine adapted to English-Arabic language combination. ... of Artificial Intelligence (AI) dedicated to Natural Language Processing (NLP). ... and focuses on SMT, then introducing the features of the open source Moses ...

7. Regularization methods for inferential sensing in nuclear power plants

International Nuclear Information System (INIS)

Hines, J.W.; Gribok, A.V.; Attieh, I.; Uhrig, R.E.

2000-01-01

Inferential sensing is the use of information related to a plant parameter to infer its actual value. The most common method of inferential sensing uses a mathematical model to infer a parameter value from correlated sensor values. Collinearity in the predictor variables leads to an ill-posed problem that causes inconsistent results when data based models such as linear regression and neural networks are used. This chapter presents several linear and non-linear inferential sensing methods including linear regression and neural networks. Both of these methods can be modified from their original form to solve ill-posed problems and produce more consistent results. We will compare these techniques using data from Florida Power Corporation's Crystal River Nuclear Power Plant to predict the drift in a feedwater flow sensor. According to a report entitled 'Feedwater Flow Measurement in U.S. Nuclear Power Generation Stations' that was commissioned by the Electric Power Research Institute, venturi meter fouling is 'the single most frequent cause' for derating in Pressurized Water Reactors. This chapter presents several viable solutions to this problem. (orig.)
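Why collinearity makes such data-based models inconsistent, and how regularization helps, can be sketched with ridge (Tikhonov) regression on two nearly identical synthetic "sensor" signals; the data are synthetic, not plant measurements:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
base = rng.standard_normal(n)
# Two nearly collinear "sensor" readings predicting a target parameter.
X = np.column_stack([base, base + 1e-3 * rng.standard_normal(n)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(n)

# Ordinary least squares: the near-singular normal equations make the
# individual coefficients unstable from one data set to the next.
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Ridge (Tikhonov) regularization: a small penalty on the weights makes
# the problem well-posed and the solution reproducible.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
```

Both fits predict y about equally well, but only the regularized coefficients are stable, which is the consistency property an inferential sensor needs for drift monitoring.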

8. Statistics

International Nuclear Information System (INIS)

2001-01-01

For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

9. Statistics

International Nuclear Information System (INIS)

2000-01-01

For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

10. Statistics

International Nuclear Information System (INIS)

1999-01-01

For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

11. Statistics

International Nuclear Information System (INIS)

2003-01-01

For the year 2002, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees on energy products.

12. Statistics

International Nuclear Information System (INIS)

2004-01-01

For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees on energy products.

13. Statistics

International Nuclear Information System (INIS)

2000-01-01

For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products.

14. Statistical cluster analysis and diagnosis of nuclear system level performance

International Nuclear Information System (INIS)

Teichmann, T.; Levine, M.M.; Samanta, P.K.; Kato, W.Y.

1985-01-01

The complexity of individual nuclear power plants and the importance of maintaining reliable and safe operations make it desirable to complement the deterministic analyses of these plants by corresponding statistical surveys and diagnoses. Based on such investigations, one can then explore, statistically, the anticipation, prevention, and, when necessary, the control of such failures and malfunctions. This paper, and the accompanying one by Samanta et al., describes some of the initial steps in exploring the feasibility of setting up such a program on an integrated and global (industry-wide) basis. The conceptual statistical and data framework was originally outlined in BNL/NUREG-51609, NUREG/CR-3026, and the present work aims at showing how some important elements might be implemented in a practical way (albeit using hypothetical or simulated data)

15. Statistical inference

CERN Document Server

Rohatgi, Vijay K

2003-01-01

Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

16. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics.

Science.gov (United States)

Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona

2017-01-01

In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural

18. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics

Directory of Open Access Journals (Sweden)

Manuela Paechter

2017-07-01

Full Text Available In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in

19. Performance Monitoring System: Summary of Lock Statistics. Revision 1.

Science.gov (United States)

1985-12-01

No abstract is recoverable for this record; the source text consists of OCR fragments of lock traffic tables (Arkansas River, Forrell Lock and Dam, upbound tonnage statistics).

20. Translator’s inferential excursions, with imagination in the background

Directory of Open Access Journals (Sweden)

Bożena Tokarz

2014-01-01

Full Text Available In a literary work, signals that trigger the reader's inferential excursions allow the reader's imagination to identify with and control the represented world. They constitute an important element of the sense-generating mechanism. Thanks to imagination, the translator imitates the inferential mechanism of the original on various levels of the text's structure, activating the imagination of the reader. The translator's imagination is bi- or multivalent in having linguistic-semiotic, literary, and cultural qualities. Although it manifests itself in language, it goes beyond the boundaries of language. Imagination is a form of consciousness which has no object of its own, and a medium connecting specific non-imaginary knowledge with representations. It constitutes a mind faculty shaped on the basis of sensory and mental perception. It is derived from individual principles of perception and of cognitive data processing. It usually requires a stimulus to activate the capabilities of the imagining subject. As a mind faculty, imagination is based on the mental capability common to all people, which is the ability to create chains of associations. The translator's respect for inferential excursions in the original text is necessary for retaining the original meaning, regardless of whether they occur on the phonetic-phonological level (as in Ionesco's The Chairs, or on the level of image-semantic and syntactic relations (as in translation of Apollinaire's Zone, or on the level of syntax (as in translation of Mrożek's short stories into Slovenian, or on the level of cultural communication (as in Slovenian translation of Gombrowicz's Trans-Atlantic.

1. A statistical analysis on the leak detection performance of ...

Chinedu Duru

2017-11-09

Nov 9, 2017 ... of underground and overground pipelines with wireless sensor networks through the .... detection performance analysis of pipeline leakage. This study and ..... case and apply to all materials transported through the pipeline.

2. Statistical and Machine Learning Models to Predict Programming Performance

OpenAIRE

Bergin, Susan

2006-01-01

This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

3. Descriptive statistics.

Science.gov (United States)

Nick, Todd G

2007-01-01

Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
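
The measures of location and spread that the chapter describes can be sketched in a few lines. The sample below is invented (hypothetical lengths of stay for ten patients); it shows why the median is reported alongside the mean when an outlier is present:

```python
import statistics

# Hypothetical sample: lengths of hospital stay (days) for 10 patients.
stays = [3, 5, 4, 8, 21, 6, 5, 7, 4, 9]

mean = statistics.mean(stays)      # measure of location, pulled up by the outlier 21
median = statistics.median(stays)  # robust measure of location
sd = statistics.stdev(stays)       # sample standard deviation, a measure of spread

print(f"mean={mean:.1f}, median={median:.1f}, sd={sd:.2f}")
```

The gap between the mean (7.2) and the median (5.5) is itself a useful descriptive signal of skew.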

4. Artificial Intelligence for Inferential Control of Crude Oil Stripping Process

Directory of Open Access Journals (Sweden)

Mehdi Ebnali

2018-01-01

Full Text Available Stripper columns are used for sweetening crude oil, and they must hold the product hydrogen sulfide content as near the set points as possible in the face of upsets. Since product quality cannot be measured easily and economically online, the control of product quality is often achieved by maintaining a suitable tray temperature near its set point. The tray temperature control method, however, is not a proper option for a multi-component stripping column because the tray temperature does not correspond exactly to the product composition. To overcome this problem, secondary measurements can be used to infer the product quality and adjust the values of the manipulated variables. In this paper, we have used a novel inferential control approach based on an adaptive network fuzzy inference system (ANFIS) for the stripping process. ANFIS with different learning algorithms is used for modeling the process and building a composition estimator to estimate the composition of the bottom product. The developed estimator is tested, and the results show that the predictions made by the ANFIS structure are in good agreement with the results of simulation by the ASPEN HYSYS process simulation package. In addition, inferential control by the implementation of an ANFIS-based online composition estimator in a cascade control scheme is superior to the traditional tray temperature control method, based on lower integral time absolute error and lower duty consumption in the reboiler.

5. A new ore reserve estimation method, Yang Chizhong filtering and inferential measurement method, and its application

International Nuclear Information System (INIS)

Wu Jingqin.

1989-01-01

The Yang Chizhong filtering and inferential measurement method is a new method used for variable statistics of ore deposits. In order to apply this theory to estimate uranium ore reserves under the circumstances of regular or irregular prospecting grids, small ore bodies, few sampling points, and complex occurrence, the author has used this method to estimate the ore reserves in five ore bodies of two deposits and achieved satisfactory results. It is demonstrated that, compared with the traditional block measurement method, this method is simple and clear in formula, convenient in application, rapid in calculation, accurate in results, less expensive, and of high economic benefit. The procedure and experience in the application of this method and the preliminary evaluation of its results are mainly described

6. Why do people show minimal knowledge updating with task experience: inferential deficit or experimental artifact?

Science.gov (United States)

Hertzog, Christopher; Price, Jodi; Burpee, Ailis; Frentzel, William J; Feldstein, Simeon; Dunlosky, John

2009-01-01

Students generally do not have highly accurate knowledge about strategy effectiveness for learning, such as that imagery is superior to rote repetition. During multiple study-test trials using both strategies, participants' predictions about performance on List 2 do not markedly differ for the two strategies, even though List 1 recall is substantially greater for imagery. Two experiments evaluated whether such deficits in knowledge updating about the strategy effects were due to an experimental artifact or to inaccurate inferences about the effects the strategies had on recall. Participants studied paired associates on two study-test trials--they were instructed to study half using imagery and half using rote repetition. Metacognitive judgements tapped the quality of inferential processes about the strategy effects during the List 1 test and tapped gains in knowledge about the strategies across lists. One artifactual explanation--noncompliance with strategy instructions--was ruled out, whereas manipulations aimed at supporting the data available to inferential processes improved but did not fully repair knowledge updating.

7. Use of statistical process control in evaluation of academic performance

Directory of Open Access Journals (Sweden)

Ezequiel Gibbon Gautério

2014-05-01

Full Text Available The aim of this article was to study some indicators of academic performance (number of students per class, dropout rate, failure rate and scores obtained by the students) to identify a pattern of behavior that would enable improvements in the teaching-learning process to be implemented. The sample was composed of five classes of undergraduate courses in Engineering. The data were collected for three years. Initially, an exploratory analysis with analytical and graphical techniques was performed. An analysis of variance and Tukey's test investigated some sources of variability. This information was used in the construction of control charts. We have found evidence that classes with more students are associated with higher failure rates and lower mean scores. Moreover, when the course was later in the curriculum, the students had higher scores. The results showed that although some special causes interfering in the process were detected, it was possible to stabilize the process and to monitor it.
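
The monitoring step described above can be sketched as a Shewhart-style individuals chart. All numbers below are invented, and the sample standard deviation is used as a simplification of the usual moving-range estimate of sigma:

```python
import statistics

# Hypothetical mean exam score of one engineering class, over 12 consecutive terms.
term_means = [6.1, 5.8, 6.4, 6.0, 5.9, 6.3, 6.2, 5.7, 6.0, 6.1, 5.9, 6.2]

center = statistics.mean(term_means)
sigma = statistics.stdev(term_means)

# Classic 3-sigma control limits around the center line.
ucl = center + 3 * sigma
lcl = center - 3 * sigma

# Points outside the limits would flag special causes worth investigating.
out_of_control = [x for x in term_means if not lcl <= x <= ucl]
print(f"CL={center:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, special causes: {out_of_control}")
```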

8. Testing the performance of a blind burst statistic

Energy Technology Data Exchange (ETDEWEB)

Vicere, A [Istituto di Fisica, Universita di Urbino (Italy); Calamai, G [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Campagna, E [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Conforto, G [Istituto di Fisica, Universita di Urbino (Italy); Cuoco, E [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Dominici, P [Istituto di Fisica, Universita di Urbino (Italy); Fiori, I [Istituto di Fisica, Universita di Urbino (Italy); Guidi, G M [Istituto di Fisica, Universita di Urbino (Italy); Losurdo, G [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Martelli, F [Istituto di Fisica, Universita di Urbino (Italy); Mazzoni, M [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Perniola, B [Istituto di Fisica, Universita di Urbino (Italy); Stanga, R [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Vetrano, F [Istituto di Fisica, Universita di Urbino (Italy)

2003-09-07

In this work, we estimate the performance of a method for the detection of burst events in the data produced by interferometric gravitational wave detectors. We compute the receiver operating characteristics in the specific case of a simulated noise having the spectral density expected for Virgo, using test signals taken from a library of possible waveforms emitted during the collapse of the core of type II supernovae.
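
A receiver operating characteristic of the kind estimated here is traced by sweeping a detection threshold over the test statistic and recording the false-alarm and detection probabilities. The scores below are invented stand-ins for detection statistics from noise-only and signal-plus-noise trials:

```python
# Hypothetical detection statistics under noise-only and signal+noise trials.
noise_scores  = [0.2, 0.5, 0.3, 0.8, 0.4, 0.6, 0.1, 0.7]
signal_scores = [0.9, 1.4, 0.7, 1.1, 1.3, 0.8, 1.0, 1.2]

def roc_point(threshold):
    """False-alarm and detection probabilities at a given threshold."""
    pfa = sum(s >= threshold for s in noise_scores) / len(noise_scores)
    pdet = sum(s >= threshold for s in signal_scores) / len(signal_scores)
    return pfa, pdet

# Sweeping the threshold traces the ROC curve point by point.
for thr in [0.5, 0.8, 1.1]:
    pfa, pdet = roc_point(thr)
    print(f"thr={thr}: P_fa={pfa:.3f}, P_det={pdet:.3f}")
```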

9. Statistical Analysis of the Grid Connected Photovoltaic System Performance Ratio

Directory of Open Access Journals (Sweden)

Javier Vilariño-García

2017-05-01

Full Text Available A methodology based on analysis of variance and Tukey's method is applied to a data set of solar radiation in the plane of the photovoltaic modules and the corresponding values of power delivered to the grid, recorded at 10-minute intervals from sunrise to sunset during the 52 weeks of the year 2013. These data were obtained through a monitoring system located in a photovoltaic plant of 10 MW rated power in Cordoba, consisting of 16 transformers and 98 inverters. The analysis of variance compares the mean performance ratios of the transformer centers to detect, at a significance level of 5%, whether at least one differs from the rest; Tukey's test then identifies which center or centers fall below average because of a fault, so that the fault can be detected and corrected.
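
The first stage of that procedure, the one-way ANOVA F test, can be computed by hand. The performance-ratio values and center labels below are invented for illustration:

```python
import statistics

# Hypothetical weekly performance ratios (%) for three transformer centers.
groups = {
    "T1": [80.1, 81.3, 79.8, 80.6],
    "T2": [80.4, 79.9, 81.0, 80.2],
    "T3": [75.2, 76.1, 74.8, 75.5],  # suspected faulty center
}

all_vals = [v for g in groups.values() for v in g]
grand = statistics.mean(all_vals)
k, n = len(groups), len(all_vals)

# Between-group and within-group sums of squares for one-way ANOVA.
ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups.values())
ss_within = sum((v - statistics.mean(g)) ** 2 for g in groups.values() for v in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k-1},{n-k}) = {f_stat:.1f}")  # a large F: at least one center differs
```

A post-hoc procedure such as Tukey's HSD would then identify which center is the outlier.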

10. Spacecraft control center automation using the generic inferential executor (GENIE)

Science.gov (United States)

Hartley, Jonathan; Luczak, Ed; Stump, Doug

1996-01-01

The increasing requirement to dramatically reduce the cost of mission operations led to increased emphasis on automation technology. The expert system technology used at the Goddard Space Flight Center (MD) is currently being applied to the automation of spacecraft control center activities. The generic inferential executor (GENIE) is a tool which allows pass automation applications to be constructed. The pass script templates constructed encode the tasks necessary to mimic flight operations team interactions with the spacecraft during a pass. These templates can be configured with data specific to a particular pass. Animated graphical displays illustrate the progress during the pass. The first GENIE application automates passes of the solar, anomalous and magnetospheric particle explorer (SAMPEX) spacecraft.

11. Psychometric Evaluation of the Italian Adaptation of the Test of Inferential and Creative Thinking

Science.gov (United States)

Faraci, Palmira; Hell, Benedikt; Schuler, Heinz

2016-01-01

This article describes the psychometric properties of the Italian adaptation of the "Analyse des Schlussfolgernden und Kreativen Denkens" (ASK; Test of Inferential and Creative Thinking) for measuring inferential and creative thinking. The study aimed to (a) supply evidence for the factorial structure of the instrument, (b) describe its…

12. Negative inferential style, emotional clarity, and life stress: integrating vulnerabilities to depression in adolescence.

Science.gov (United States)

Stange, Jonathan P; Alloy, Lauren B; Flynn, Megan; Abramson, Lyn Y

2013-01-01

Negative inferential style and deficits in emotional clarity have been identified as vulnerability factors for depression in adolescence, particularly when individuals experience high levels of life stress. However, previous research has not integrated these characteristics when evaluating vulnerability to depression. In the present study, a racially diverse community sample of 256 early adolescents (ages 12 and 13) completed a baseline visit and a follow-up visit 9 months later. Inferential style, emotional clarity, and depressive symptoms were assessed at baseline, and intervening life events and depressive symptoms were assessed at follow-up. Hierarchical linear regressions indicated that there was a significant three-way interaction between adolescents' weakest-link negative inferential style, emotional clarity, and intervening life stress predicting depressive symptoms at follow-up, controlling for initial depressive symptoms. Adolescents with low emotional clarity and high negative inferential styles experienced the greatest increases in depressive symptoms following life stress. Emotional clarity buffered against the impact of life stress on depressive symptoms among adolescents with negative inferential styles. Similarly, negative inferential styles exacerbated the impact of life stress on depressive symptoms among adolescents with low emotional clarity. These results provide evidence of the utility of integrating inferential style and emotional clarity as constructs of vulnerability in combination with life stress in the identification of adolescents at risk for depression. They also suggest the enhancement of emotional clarity as a potential intervention technique to protect against the effects of negative inferential styles and life stress on depression in early adolescence.

13. Effects of Stress and Working Memory Capacity on Foreign Language Readers' Inferential Processing during Comprehension

Science.gov (United States)

Rai, Manpreet K.; Loschky, Lester C.; Harris, Richard Jackson; Peck, Nicole R.; Cook, Lindsay G.

2011-01-01

Although stress is frequently claimed to impede foreign language (FL) reading comprehension, it is usually not explained how. We investigated the effects of stress, working memory (WM) capacity, and inferential complexity on Spanish FL readers' inferential processing during comprehension. Inferences, although necessary for reading comprehension,…

14. A Genetic-Neuro-Fuzzy inferential model for diagnosis of tuberculosis

Directory of Open Access Journals (Sweden)

Mumini Olatunji Omisore

2017-01-01

Full Text Available Tuberculosis is a social, re-emerging infectious disease with medical implications throughout the globe. Despite efforts, the coverage of tuberculosis disease (with HIV prevalence) in Nigeria rose from 2.2% in 1991 to 22% in 2013, and the orthodox diagnostic methods available for tuberculosis face a number of challenges which can, if measures are not taken, increase the spread rate; hence, there is a need for aid in diagnosis of the disease. This study proposes a technique for intelligent diagnosis of TB using a Genetic-Neuro-Fuzzy Inferential method to provide a decision support platform that can assist medical practitioners in administering accurate, timely, and cost-effective diagnosis of tuberculosis. Performance evaluation, using a case study of 10 patients from St. Francis Catholic Hospital Okpara-In-Land (Delta State, Nigeria), shows sensitivity and accuracy results of 60% and 70% respectively, which are within the acceptable range predefined by domain experts.

15. Can We Use Polya’s Method to Improve Students’ Performance in the Statistics Classes?

Directory of Open Access Journals (Sweden)

Indika Wickramasinghe

2015-01-01

Full Text Available In this study, Polya’s problem-solving method is introduced in a statistics class in an effort to enhance students’ performance. Teaching the method was applied to one of the two introductory-level statistics classes taught by the same instructor, and a comparison was made between the performances in the two classes. The results indicate there was a significant improvement of the students’ performance in the class in which Polya’s method was introduced.
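
The abstract does not state which test established the significant difference between the two classes; a two-sample Welch t-test is one common choice for such a comparison. The scores below are invented:

```python
import math
import statistics

# Hypothetical final-exam scores for the two sections (Polya-taught vs. control).
polya   = [78, 85, 81, 90, 74, 88, 83, 79, 86, 82]
control = [70, 75, 68, 80, 72, 77, 71, 74, 69, 73]

def welch_t(a, b):
    """Welch's t statistic: mean difference over its standard error."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(polya, control)
print(f"Welch t = {t:.2f}")  # compare against the t distribution for a p-value
```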

16. Differences in game-related statistics of basketball performance by game location for men's winning and losing teams.

Science.gov (United States)

Gómez, Miguel A; Lorenzo, Alberto; Barakat, Rubén; Ortega, Enrique; Palao, José M

2008-02-01

The aim of the present study was to identify game-related statistics that differentiate winning and losing teams according to game location. The sample included 306 games of the 2004-2005 regular season of the Spanish professional men's league (ACB League). The independent variables were game location (home or away) and game result (win or loss). The game-related statistics registered were free throws (successful and unsuccessful), 2- and 3-point field goals (successful and unsuccessful), offensive and defensive rebounds, blocks, assists, fouls, steals, and turnovers. Descriptive and inferential analyses were done (one-way analysis of variance and discriminate analysis). The multivariate analysis showed that winning teams differ from losing teams in defensive rebounds (SC = .42) and in assists (SC = .38). Similarly, winning teams differ from losing teams when they play at home in defensive rebounds (SC = .40) and in assists (SC = .41). On the other hand, winning teams differ from losing teams when they play away in defensive rebounds (SC = .44), assists (SC = .30), successful 2-point field goals (SC = .31), and unsuccessful 3-point field goals (SC = -.35). Defensive rebounds and assists were the only game-related statistics common to all three analyses.
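
The discriminant analysis reported above can be sketched with a minimal two-class, two-variable Fisher discriminant on defensive rebounds and assists. All team numbers here are invented, and the explicit 2x2 inverse stands in for a general linear solve:

```python
# Invented per-game (defensive rebounds, assists) pairs for winning and losing teams.
wins   = [(28, 15), (30, 17), (27, 16), (31, 18), (29, 14)]
losses = [(22, 11), (24, 10), (23, 12), (21, 13), (25, 11)]

def mean_vec(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in (0, 1)]

def scatter(rows, m):
    """Within-group scatter matrix contribution for one group."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        d = [r[0] - m[0], r[1] - m[1]]
        for i in (0, 1):
            for j in (0, 1):
                s[i][j] += d[i] * d[j]
    return s

m_w, m_l = mean_vec(wins), mean_vec(losses)
sw = [[a + b for a, b in zip(r1, r2)]
      for r1, r2 in zip(scatter(wins, m_w), scatter(losses, m_l))]

# Fisher direction: solve sw @ w = (m_w - m_l), here via the explicit 2x2 inverse.
diff = [m_w[0] - m_l[0], m_w[1] - m_l[1]]
det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
w = [(sw[1][1] * diff[0] - sw[0][1] * diff[1]) / det,
     (sw[0][0] * diff[1] - sw[1][0] * diff[0]) / det]

# Projecting each team onto the discriminant axis separates wins from losses.
project = lambda r: w[0] * r[0] + w[1] * r[1]
print("win projections :", [round(project(r), 2) for r in wins])
print("loss projections:", [round(project(r), 2) for r in losses])
```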

17. Inferential smart sensing for feedwater flowrate in PWRs

International Nuclear Information System (INIS)

Na, M. G.; Hwang, I. J.; Lee, Y. J.

2006-01-01

The feedwater flowrate measured by Venturi flow meters in most pressurized water reactors can be over-measured because of fouling phenomena that make corrosion products accumulate in the Venturi meters. Therefore, in this work, two methods, a support vector regression method and a fuzzy modeling method, combined with a sequential probability ratio test, are used to accurately estimate the feedwater flowrate online and to monitor the status of the existing hardware sensors. The data for training the support vector machines and the fuzzy model are selected by using a subtractive clustering scheme, so that informative data are used from among all acquired data. The proposed inferential sensing and monitoring algorithm is verified using real plant data acquired from Yonggwang Nuclear Power Plant Unit 3. The simulations showed that the root mean squared error and the relative maximum error are small, and that the proposed method detects the degradation of an existing hardware sensor early. (authors)

18. The role of working memory in inferential sentence comprehension.

Science.gov (United States)

Pérez, Ana Isabel; Paolieri, Daniela; Macizo, Pedro; Bajo, Teresa

2014-08-01

Existing literature on inference making is large and varied. Trabasso and Magliano (Discourse Process 21(3):255-287, 1996) proposed the existence of three types of inferences: explicative, associative and predictive. In addition, the authors suggested that these inferences were related to working memory (WM). In the present experiment, we investigated whether WM capacity plays a role in our ability to answer comprehension sentences that require text information based on these types of inferences. Participants with high and low WM span read two narratives with four paragraphs each. After each paragraph was read, they were presented with four true/false comprehension sentences. One required verbatim information and the other three implied explicative, associative and predictive inferential information. Results demonstrated that only the explicative and predictive comprehension sentences required WM: participants with high verbal WM were more accurate in giving explanations and also faster at making predictions relative to participants with low verbal WM span; in contrast, no WM differences were found in the associative comprehension sentences. These results are interpreted in terms of the causal nature underlying these types of inferences.

19. Inferential revision in narrative texts: An ERP study.

Science.gov (United States)

Pérez, Ana; Cain, Kate; Castellanos, María C; Bajo, Teresa

2015-11-01

We evaluated the process of inferential revision during text comprehension in adults. Participants with high or low working memory read short texts, in which the introduction supported two plausible concepts (e.g., 'guitar/violin'), although one was more probable ('guitar'). There were three possible continuations: a neutral sentence, which did not refer back to either concept; a no-revise sentence, which referred to a general property consistent with either concept (e.g., '…beautiful curved body'); and a revise sentence, which referred to a property that was consistent with only the less likely concept (e.g., '…matching bow'). Readers took longer to read the sentence in the revise condition, indicating that they were able to evaluate their comprehension and detect a mismatch. In a final sentence, a target noun referred to the alternative concept supported in the revise condition (e.g., 'violin'). ERPs indicated that both working memory groups were able to evaluate their comprehension of the text (P3a), but only high working memory readers were able to revise their initial incorrect interpretation (P3b) and integrate the new information (N400) when reading the revise sentence. Low working memory readers had difficulties inhibiting the no-longer-relevant interpretation and thus failed to revise their situation model, and they experienced problems integrating semantically related information into an accurate memory representation.

20. The t-test: An Influential Inferential Tool in Chaplaincy and Other Healthcare Research.

Science.gov (United States)

Jankowski, Katherine R B; Flannelly, Kevin J; Flannelly, Laura T

2018-01-01

The t-test developed by William S. Gosset (also known as Student's t-test and the two-sample t-test) is commonly used to compare one sample mean on a measure with another sample mean on the same measure. The outcome of the t-test is used to draw inferences about how different the samples are from each other. It is probably one of the most frequently relied upon statistics in inferential research. It is easy to use: a researcher can calculate the statistic with three simple tools: paper, pen, and a calculator. A computer program can quickly calculate the t-test for large samples. The ease of use can result in the misuse of the t-test. This article discusses the development of the original t-test, basic principles of the t-test, two additional types of t-tests (the one-sample t-test and the paired t-test), and recommendations about what to consider when using the t-test to draw inferences in research.
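The two-sample t-test described above is indeed simple enough to compute by hand; a stdlib-only sketch of the equal-variance (Student's) form follows, with invented sample values for illustration:

```python
from statistics import mean, variance

def student_t(sample_a, sample_b):
    """Two-sample Student's t statistic (equal-variance form),
    using the pooled variance of the two samples."""
    na, nb = len(sample_a), len(sample_b)
    # Pooled variance: sample variances weighted by degrees of freedom
    sp2 = ((na - 1) * variance(sample_a) + (nb - 1) * variance(sample_b)) \
          / (na + nb - 2)
    return (mean(sample_a) - mean(sample_b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Invented measurements for two groups
a = [5.1, 4.9, 5.6, 5.3, 5.0]
b = [4.4, 4.7, 4.1, 4.5, 4.3]
t = student_t(a, b)
```

The resulting t is compared against the t distribution with na + nb - 2 degrees of freedom to obtain a p-value; identical samples give t = 0.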

1. Humans make efficient use of natural image statistics when performing spatial interpolation.

Science.gov (United States)

D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

2013-12-16

Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.

2. Global health business: the production and performativity of statistics in Sierra Leone and Germany.

Science.gov (United States)

Erikson, Susan L

2012-01-01

The global push for health statistics and electronic digital health information systems is about more than tracking health incidence and prevalence. It is also experienced on the ground as means to develop and maintain particular norms of health business, knowledge, and decision- and profit-making that are not innocent. Statistics make possible audit and accountability logics that undergird the management of health at a distance and that are increasingly necessary to the business of health. Health statistics are inextricable from their social milieus, yet as business artifacts they operate as if they are freely formed, objectively originated, and accurate. This article explicates health statistics as cultural forms and shows how they have been produced and performed in two very different countries: Sierra Leone and Germany. In both familiar and surprising ways, this article shows how statistics and their pursuit organize and discipline human behavior, constitute subject positions, and reify existing relations of power.

3. Business Statistics: A Comparison of Student Performance in Three Learning Modes

Science.gov (United States)

Simmons, Gerald R.

2014-01-01

The purpose of this study was to compare the performance of three teaching modes and age groups of business statistics sections in terms of course exam scores. The research questions were formulated to determine the performance of the students within each teaching mode, to compare each mode in terms of exam scores, and to compare exam scores by…

4. The CEO performance effect : Statistical issues and a complex fit perspective

NARCIS (Netherlands)

2012-01-01

How CEOs affect strategy and performance is important to strategic management research. We show that sophisticated statistical analysis alone is problematic for establishing the magnitude and causes of CEO impact on performance. We discuss three problem areas that substantially distort the

5. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

Science.gov (United States)

Hansen, John P

2003-01-01

Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies, all the data from a population are often not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.
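As a concrete instance of the sample-to-population inference discussed above, a normal-approximation confidence interval for a mean can be computed with the standard library alone (the data are invented; for small samples a t quantile would replace the 1.96 used here):

```python
from math import sqrt
from statistics import mean, stdev

def normal_ci(sample, z=1.96):
    """Confidence interval for the population mean using the normal
    approximation; z = 1.96 gives approximately 95% coverage."""
    m = mean(sample)
    se = stdev(sample) / sqrt(len(sample))   # standard error of the mean
    return (m - z * se, m + z * se)

# Invented quality-improvement measurements
data = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]
low, high = normal_ci(data)
```

The interval (low, high) is the range of population means compatible with the sample at the chosen confidence level.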

6. The relationship between magical thinking, inferential confusion and obsessive-compulsive symptoms.

Science.gov (United States)

Goods, N A R; Rees, C S; Egan, S J; Kane, R T

2014-01-01

Inferential confusion is an under-researched faulty reasoning process in obsessive-compulsive disorder (OCD). Based on an overreliance on imagined possibilities, it shares similarities with the extensively researched construct of thought-action fusion (TAF). While TAF has been proposed as a specific subset of the broader construct of magical thinking, the relationship between inferential confusion and magical thinking is unexplored. The present study investigated this relationship and hypothesised that magical thinking would partially mediate the relationship between inferential confusion and obsessive-compulsive symptoms. A non-clinical sample of 201 participants (M = 34.94, SD = 15.88) was recruited via convenience sampling. Regression analyses supported the hypothesised mediating relationship: magical thinking partially mediated the relationship between inferential confusion and OC symptoms. Interestingly, inferential confusion had the strongest relationship with OC symptoms of the predictor variables. Results suggest that inferential confusion can impact OC symptoms both directly and indirectly (via magical thinking). Future studies with clinical samples should further investigate these constructs to determine whether similar patterns emerge, as this may eventually inform which cognitive errors to target in the treatment of OCD.

7. High performance statistical computing with parallel R: applications to biology and climate modelling

International Nuclear Information System (INIS)

Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

2006-01-01

Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity comes a new problem: the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable, high performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines

8. Nursing students' attitudes toward statistics: Effect of a biostatistics course and association with examination performance.

Science.gov (United States)

Kiekkas, Panagiotis; Panagiotarou, Aliki; Malja, Alvaro; Tahirai, Daniela; Zykai, Rountina; Bakalis, Nick; Stefanopoulos, Nikolaos

2015-12-01

Although statistical knowledge and skills are necessary for promoting evidence-based practice, health sciences students have expressed anxiety about statistics courses, which may hinder their learning of statistical concepts. To evaluate the effects of a biostatistics course on nursing students' attitudes toward statistics and to explore the association between these attitudes and their performance in the course examination. One-group quasi-experimental pre-test/post-test design. Undergraduate nursing students of the fifth or higher semester of studies, who attended a biostatistics course. Participants were asked to complete the pre-test and post-test forms of The Survey of Attitudes Toward Statistics (SATS)-36 scale at the beginning and end of the course respectively. Pre-test and post-test scale scores were compared, while correlations between post-test scores and participants' examination performance were estimated. Among 156 participants, post-test scores of the overall SATS-36 scale and of the Affect, Cognitive Competence, Interest and Effort components were significantly higher than pre-test ones, indicating that the course was followed by more positive attitudes toward statistics. Among 104 students who participated in the examination, higher post-test scores of the overall SATS-36 scale and of the Affect, Difficulty, Interest and Effort components were significantly but weakly correlated with higher examination performance. Students' attitudes toward statistics can be improved through appropriate biostatistics courses, while positive attitudes contribute to higher course achievements and possibly to improved statistical skills in later professional life. Copyright © 2015 Elsevier Ltd. All rights reserved.

9. Statistical inference a short course

CERN Document Server

Panik, Michael J

2012-01-01

A concise, easily accessible introduction to descriptive and inferential techniques. Statistical Inference: A Short Course offers a concise presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author conducts tests on the assumptions of randomness and normality, and provides nonparametric methods when parametric approaches might not work. The book also explores how to determine a confidence interval for a population median while also providing coverage of ratio estimation, randomness, and causal

10. Statistical Control Charts: Performances of Short Term Stock Trading in Croatia

Directory of Open Access Journals (Sweden)

Dumičić Ksenija

2015-03-01

Background: The stock exchange, as a regulated financial market, reflects the economic development level of modern economies. The stock market indicates the mood of investors in the development of a country and is an important ingredient for growth. Objectives: This paper aims to introduce an additional statistical tool to support the decision-making process in stock trading, and it investigates the use of statistical process control (SPC) methods in the stock trading process. Methods/Approach: The individual (I), exponentially weighted moving average (EWMA) and cumulative sum (CUSUM) control charts were used to generate trade signals. The open and the average prices of CROBEX10 index stocks on the Zagreb Stock Exchange were used in the analysis. The capabilities of statistical control charts for short-run stock trading were analysed. Results: The control chart analysis produced too many signals to buy or sell stocks, most of which are considered false alarms, so statistical control charts proved to be of limited use in stock trading or in portfolio analysis. Conclusions: The presence of non-normality and autocorrelation has a great impact on the performance of statistical control charts. It is assumed that if these two problems were solved, the use of statistical control charts in portfolio analysis could be greatly improved.
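Of the three chart types named in the abstract, the EWMA statistic is the simplest to sketch; the recursion below is the standard one, z_t = lam * x_t + (1 - lam) * z_{t-1} (the smoothing constant and price series are invented, not taken from the CROBEX10 data):

```python
def ewma(series, lam=0.2, start=None):
    """Exponentially weighted moving average used in EWMA control charts.

    `lam` is the smoothing constant; `start` is the chart's centre line
    (defaults to the first observation)."""
    z = start if start is not None else series[0]
    out = []
    for x in series:
        z = lam * x + (1 - lam) * z   # z_t = lam*x_t + (1-lam)*z_{t-1}
        out.append(z)
    return out

# Invented open prices; an out-of-control signal would be an EWMA value
# crossing control limits placed around the centre line.
smoothed = ewma([50, 52, 49, 55])
```

In a control-chart application, each smoothed value is compared against upper and lower control limits derived from the process standard deviation.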

11. SAS and R data management, statistical analysis, and graphics

CERN Document Server

Kleinman, Ken

2009-01-01

An All-in-One Resource for Using SAS and R to Carry out Common Tasks. Provides a path between languages that is easier than reading complete documentation. SAS and R: Data Management, Statistical Analysis, and Graphics presents an easy way to learn how to perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferential procedures, regression analysis, and the creation of graphics, along with more complex applicat

12. Using R for Data Management, Statistical Analysis, and Graphics

CERN Document Server

Horton, Nicholas J

2010-01-01

This title offers quick and easy access to key elements of documentation. It includes worked examples across a wide variety of applications, tasks, and graphics. "Using R for Data Management, Statistical Analysis, and Graphics" presents an easy way to learn how to perform an analytical task in R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation and the vast number of add-on packages. Organized by short, clear descriptive entries, the book covers many common tasks, such as data management, descriptive summaries, inferential proc

13. Bayesian statistical evaluation of peak area measurements in gamma spectrometry

International Nuclear Information System (INIS)

Silva, L.; Turkman, A.; Paulino, C.D.

2010-01-01

We analyze results from determinations of peak areas for a radioactive source containing several radionuclides. The statistical analysis was performed using Bayesian methods based on the usual Poisson model for observed counts. This model does not appear to be a very good assumption for the counting system under investigation, even though it is not called into question as a whole by the inferential procedures adopted. We conclude that, in order to avoid incorrect inferences on relevant quantities, a further study is needed that includes the missing influence parameters and selects a model that explains the observed data much better.
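For the usual Poisson model for observed counts mentioned above, the Bayesian update has a simple conjugate form when a Gamma prior is placed on the count rate (the prior parameters and counts below are invented, not from the paper):

```python
def gamma_poisson_update(alpha, beta, counts):
    """Conjugate Bayesian update for a Poisson rate with a Gamma(alpha, beta)
    prior: after observing `counts` in unit-time intervals, the posterior is
    Gamma(alpha + sum(counts), beta + len(counts))."""
    return alpha + sum(counts), beta + len(counts)

# Weak prior, then five invented counting intervals for one peak
a_post, b_post = gamma_poisson_update(1.0, 0.1, [48, 52, 47, 51, 50])
posterior_mean = a_post / b_post   # point estimate of the count rate
```

The posterior mean shrinks the raw average count toward the prior, with the shrinkage vanishing as more intervals are observed.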

14. Effects of Concept Mapping Strategy on Learning Performance in Business and Economics Statistics

Science.gov (United States)

Chiou, Chei-Chang

2009-01-01

A concept map (CM) is a hierarchically arranged, graphic representation of the relationships among concepts. Concept mapping (CMING) is the process of constructing a CM. This paper examines whether a CMING strategy can be useful in helping students to improve their learning performance in a business and economics statistics course. A single…

15. Exploring Statistics Anxiety: Contrasting Mathematical, Academic Performance and Trait Psychological Predictors

Science.gov (United States)

Bourne, Victoria J.

2018-01-01

Statistics anxiety is experienced by a large number of psychology students, and previous research has examined a range of potential correlates, including academic performance, mathematical ability and psychological predictors. These varying predictors are often considered separately, although there may be shared variance between them. In the…

16. Flipping the Classroom and Student Performance in Advanced Statistics: Evidence from a Quasi-Experiment

Science.gov (United States)

Touchton, Michael

2015-01-01

I administer a quasi-experiment using undergraduate political science majors in statistics classes to evaluate whether "flipping the classroom" (the treatment) alters students' applied problem-solving performance and satisfaction relative to students in a traditional classroom environment (the control). I also assess whether general…

17. Changes in Math Prerequisites and Student Performance in Business Statistics: Do Math Prerequisites Really Matter?

OpenAIRE

Jeffrey J. Green; Courtenay C. Stone; Abera Zegeye; Thomas A. Charles

2007-01-01

We use a binary probit model to assess the impact of several changes in math prerequisites on student performance in an undergraduate business statistics course. While the initial prerequisites did not necessarily provide students with the necessary math skills, our study, the first to examine the effect of math prerequisite changes, shows that these changes were deleterious to student performance. Our results helped convince the College of Business to change the math prerequisite again begin...

18. European downstream oil industry safety performance. Statistical summary of reported incidents 2009

International Nuclear Information System (INIS)

Burton, A.; Den Haan, K.H.

2010-10-01

The sixteenth such report by CONCAWE, this issue includes statistics on work-related personal injuries for the European downstream oil industry's own employees as well as contractors for the year 2009. Data were received from 33 companies representing more than 97% of the European refining capacity. Trends over the last sixteen years are highlighted and the data are also compared to similar statistics from related industries. In addition, this report presents the results of the first Process Safety Performance Indicator data gathering exercise amongst the CONCAWE membership.

19. Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems

Energy Technology Data Exchange (ETDEWEB)

Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)

2009-08-15

Heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system modeling, for estimating the average air temperature in multi-zone space heating systems is developed. This modeling technique combines the advantage of the expert knowledge of fuzzy inference systems (FISs) with the learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive network based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems, in terms of energy efficiency and thermal comfort. The average air temperature estimates obtained with the developed model agree strongly with the experimental results. (author)

20. Performance in College Chemistry: a Statistical Comparison Using Gender and Jungian Personality Type

Science.gov (United States)

Greene, Susan V.; Wheeler, Henry R.; Riley, Wayne D.

This study sorted college introductory chemistry students by gender and Jungian personality type. It recognized differences from the general population distribution and statistically compared the students' grades with their Jungian personality types. Data from 577 female students indicated that ESFP (extroverted, sensory, feeling, perceiving) and ENFP (extroverted, intuitive, feeling, perceiving) profiles performed poorly at statistically significant levels when compared with the distribution of females enrolled in introductory chemistry. The comparable analysis using data from 422 male students indicated that the poorly performing male profiles were ISTP (introverted, sensory, thinking, perceiving) and ESTP (extroverted, sensory, thinking, perceiving). ESTJ (extroverted, sensory, thinking, judging) female students withdrew from the course at a statistically significant level. For both genders, INTJ (introverted, intuitive, thinking, judging) students were the best performers. By examining the documented characteristics of Jungian profiles that correspond with poorly performing students in chemistry, one may more effectively assist the learning process and the retention of these individuals in the fields of natural science, engineering, and technology.

Directory of Open Access Journals (Sweden)

Thilina Indrajie Wickramaarachchi

2014-10-01

2. Statistical modelling of networked human-automation performance using working memory capacity.

Science.gov (United States)

Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

2014-01-01

This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
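Of the three modelling approaches named in the summary, the classical linear regression baseline has a simple closed form (this sketch is illustrative only; the working-memory span and performance numbers are invented, not from the experiment):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)   # slope from covariance/variance
    a = my - b * mx                        # intercept through the means
    return a, b

# e.g. predict supervisory task performance from WM span (invented data)
wm_span = [2.0, 3.0, 4.0, 5.0, 6.0]
score = [55.0, 61.0, 64.0, 72.0, 78.0]
intercept, slope = linear_fit(wm_span, score)
```

A positive slope would correspond to the reported finding that higher WM capacity predicts better networked supervisory performance near the experimental conditions.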

3. Statistical analyses of the performance of Macedonian investment and pension funds

Directory of Open Access Journals (Sweden)

Petar Taleski

2015-10-01

The foundation of post-modern portfolio theory is creating a portfolio based on a desired target return. This specifically applies to the performance of investment and pension funds that must provide a rate of return meeting the payment requirements of investment funds. A desired target return is the goal of an investment or pension fund. It is the primary benchmark used to measure performance and to dynamically monitor and evaluate the risk–return ratio of investment funds. The analysis in this paper is based on monthly returns of Macedonian investment and pension funds (June 2011 - June 2014). The analysis utilizes the basic but highly informative statistical characteristics of skewness, kurtosis, the Jarque–Bera test, and Chebyshev's inequality. The objective of this study is to perform a thorough analysis, utilizing the above-mentioned and other statistical techniques (Sharpe, Sortino, omega, upside potential, Calmar, Sterling), to draw relevant conclusions regarding the risks and characteristic moments of Macedonian investment and pension funds. Pension funds are the second-largest segment of the financial system and have great potential for further growth due to constant inflows from pension insurance. The importance of investment funds for the financial system in the Republic of Macedonia is still small, although open-end investment funds have been the fastest-growing segment of the financial system. Statistical analysis has shown that pension funds delivered a significantly positive volatility-adjusted risk premium in the analyzed period, more so than investment funds.
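The Jarque–Bera test mentioned in the abstract combines sample skewness and kurtosis into a single normality statistic; a stdlib-only sketch (the return series is invented, not the funds' actual data):

```python
from statistics import mean

def jarque_bera(x):
    """Jarque-Bera normality statistic: JB = n/6 * (S**2 + (K - 3)**2 / 4),
    where S is the sample skewness and K the sample kurtosis."""
    n = len(x)
    m = mean(x)
    m2 = sum((v - m) ** 2 for v in x) / n   # second central moment
    m3 = sum((v - m) ** 3 for v in x) / n   # third central moment
    m4 = sum((v - m) ** 4 for v in x) / n   # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

# Monthly returns of a hypothetical fund (invented numbers)
jb = jarque_bera([0.012, -0.004, 0.021, 0.003, -0.015, 0.008, 0.005, -0.002])
```

Under normality, JB is approximately chi-squared with 2 degrees of freedom, so large values signal departure from the normal distribution.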

4. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data.

Science.gov (United States)

Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin

2017-06-01

Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, the sample covariance may not be a precise estimate if the sample size is very limited, which is usually the case for data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https
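The T2-statistic in the abstract builds on the classic Hotelling T^2 form; a bare-bones two-variable version is sketched below, with an invented covariance matrix standing in for the STRING/HitPredict-derived matrix, which the abstract does not reproduce:

```python
def hotelling_t2(xbar, mu0, cov, n):
    """One-sample Hotelling T^2 for two variables:
    T^2 = n * d' * inv(S) * d, with d = xbar - mu0 and S a 2x2 covariance."""
    d = [xbar[0] - mu0[0], xbar[1] - mu0[1]]
    (a, b), (c, e) = cov
    det = a * e - b * c                               # 2x2 determinant
    inv = [[e / det, -b / det], [-c / det, a / det]]  # 2x2 inverse
    return n * sum(d[i] * inv[i][j] * d[j]
                   for i in range(2) for j in range(2))

# Mean log-ratios of two proteins tested against a null of no change
# (all numbers invented for illustration)
t2 = hotelling_t2([0.8, 0.5], [0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], n=6)
```

Positive off-diagonal covariance encodes the association between the two proteins, which is exactly what an independence-assuming competitive null would discard.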

5. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

International Nuclear Information System (INIS)

Tang Jie; Nett, Brian E; Chen Guanghong

2009-01-01

Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

6. Implementation of Statistical Process Control: Evaluating the Mechanical Performance of a Candidate Silicone Elastomer Docking Seal

Science.gov (United States)

Oravec, Heather Ann; Daniels, Christopher C.

2014-01-01

The National Aeronautics and Space Administration has been developing a novel docking system to meet the requirements of future exploration missions to low-Earth orbit and beyond. A dynamic gas pressure seal is located at the main interface between the active and passive mating components of the new docking system. This seal is designed to operate in the harsh space environment, but must also perform within strict loading requirements while maintaining an acceptable leak rate. In this study, a candidate silicone elastomer seal was designed, and multiple subscale test articles were manufactured for evaluation purposes. The force required to fully compress each test article at room temperature was quantified and found to be below the maximum allowable load for the docking system. However, a significant amount of scatter was observed in the test results. Due to the stochastic nature of the mechanical performance of this candidate docking seal, a statistical process control technique was implemented to isolate unusual compression behavior from typical mechanical performance. The results of this statistical analysis indicated a lack of process control, suggesting a variation in the manufacturing phase of the process. Further investigation revealed that changes in the manufacturing molding process had occurred which may have influenced the mechanical performance of the seal. This knowledge improves the chances that this and future space seals will satisfy or exceed design specifications.

7. A laboratory evaluation of the influence of weighing gauges performance on extreme events statistics

Science.gov (United States)

Colli, Matteo; Lanza, Luca

2014-05-01

The effects of inaccurate ground-based rainfall measurements on the information derived from rain records are not yet well documented in the literature. La Barbera et al. (2002) investigated the propagation of the systematic mechanical errors of tipping bucket type rain gauges (TBR) into the most common statistics of rainfall extremes, e.g. in the assessment of the return period T (or the related non-exceedance probability) of short-duration/high-intensity events. Colli et al. (2012) and Lanza et al. (2012) extended the analysis to a 22-year-long precipitation data set obtained from a virtual weighing type gauge (WG). The artificial WG time series was derived from real precipitation data measured at the meteo-station of the University of Genova by modelling the weighing gauge output as a linear dynamic system. This approximation was previously validated with dedicated laboratory experiments and is based on the evidence that the accuracy of WG measurements under real-world/time-varying rainfall conditions is mainly affected by the dynamic response of the gauge (as revealed during the last WMO Field Intercomparison of Rainfall Intensity Gauges). The investigation is now completed by analyzing actual measurements performed by two common weighing gauges, the OTT Pluvio2 load-cell gauge and the GEONOR T-200 vibrating-wire gauge, since both these instruments demonstrated very good performance in previous constant flow rate calibration efforts. A laboratory dynamic rainfall generation system was arranged and validated in order to simulate a number of precipitation events with variable reference intensities. Such artificial events were generated from real-world rainfall intensity (RI) records obtained from the meteo-station of the University of Genova so that the statistical structure of the time series is preserved. The influence of the WG RI measurement accuracy on the associated extreme events statistics is analyzed by comparing the original intensity

8. Teachers' Literal and Inferential Talk in Early Childhood and Special Education Classrooms

Science.gov (United States)

Sembiante, Sabrina F.; Dynia, Jaclyn M.; Kaderavek, Joan N.; Justice, Laura M.

2018-01-01

Research Findings: This study examined preschool teachers' literal talk (LT) and inferential talk (IT) during shared book readings in early childhood education (ECE) and early childhood special education (ECSE) classrooms. We aimed to characterize and compare teachers' LT and IT in these 2 classroom contexts and determine whether differences in LT…

9. A comparison of linear and nonlinear statistical techniques in performance attribution.

Science.gov (United States)

Chan, N H; Genovese, C R

2001-01-01

Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes have been developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

10. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

Science.gov (United States)

Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

2012-01-01

Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.

11. Performance evaluation of a hybrid-passive landfill leachate treatment system using multivariate statistical techniques

Energy Technology Data Exchange (ETDEWEB)

Wallace, Jack, E-mail: jack.wallace@ce.queensu.ca [Department of Civil Engineering, Queen’s University, Ellis Hall, 58 University Avenue, Kingston, Ontario K7L 3N6 (Canada); Champagne, Pascale, E-mail: champagne@civil.queensu.ca [Department of Civil Engineering, Queen’s University, Ellis Hall, 58 University Avenue, Kingston, Ontario K7L 3N6 (Canada); Monnier, Anne-Charlotte, E-mail: anne-charlotte.monnier@insa-lyon.fr [National Institute for Applied Sciences – Lyon, 20 Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

2015-01-15

Highlights: • Performance of a hybrid passive landfill leachate treatment system was evaluated. • 33 Water chemistry parameters were sampled for 21 months and statistically analyzed. • Parameters were strongly linked and explained most (>40%) of the variation in data. • Alkalinity, ammonia, COD, heavy metals, and iron were criteria for performance. • Eight other parameters were key in modeling system dynamics and criteria. - Abstract: A pilot-scale hybrid-passive treatment system operated at the Merrick Landfill in North Bay, Ontario, Canada, treats municipal landfill leachate and provides for subsequent natural attenuation. Collected leachate is directed to a hybrid-passive treatment system, followed by controlled release to a natural attenuation zone before entering the nearby Little Sturgeon River. The study presents a comprehensive evaluation of the performance of the system using multivariate statistical techniques to determine the interactions between parameters, major pollutants in the leachate, and the biological and chemical processes occurring in the system. Five parameters (ammonia, alkalinity, chemical oxygen demand (COD), “heavy” metals of interest, with atomic weights above calcium, and iron) were set as criteria for the evaluation of system performance based on their toxicity to aquatic ecosystems and importance in treatment with respect to discharge regulations. System data for a full range of water quality parameters over a 21-month period were analyzed using principal components analysis (PCA), as well as principal components (PC) and partial least squares (PLS) regressions. PCA indicated a high degree of association for most parameters with the first PC, which explained a high percentage (>40%) of the variation in the data, suggesting strong statistical relationships among most of the parameters in the system. Regression analyses identified 8 parameters (set as independent variables) that were most frequently retained for modeling
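The PCA screening step described above, checking how much of the variation the first principal component absorbs, can be sketched in a few lines of linear algebra. The helper and the synthetic "water chemistry" columns below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def pc1_variance_fraction(X):
    """Fraction of total variance captured by the first principal component
    (PCA via SVD on column-centered data)."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    return float(var[0] / var.sum())

rng = np.random.default_rng(0)
# Five strongly associated synthetic columns sharing one common driver.
driver = rng.normal(size=(50, 1))
X = np.hstack([driver + 0.1 * rng.normal(size=(50, 1)) for _ in range(5)])
print(pc1_variance_fraction(X) > 0.4)  # True: PC1 dominates, as in the study
```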

12. FREQFIT: Computer program which performs numerical regression and statistical chi-squared goodness of fit analysis

International Nuclear Information System (INIS)

Hofland, G.S.; Barton, C.C.

1990-01-01

The computer program FREQFIT is designed to perform regression and statistical chi-squared goodness of fit analysis on one-dimensional or two-dimensional data. The program features an interactive user dialogue, numerous help messages, an option for screen or line printer output, and the flexibility to use practically any commercially available graphics package to create plots of the program's results. FREQFIT is written in Microsoft QuickBASIC, for IBM-PC compatible computers. A listing of the QuickBASIC source code for the FREQFIT program, a user manual, and sample input data, output, and plots are included. 6 refs., 1 fig
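The Pearson chi-squared goodness-of-fit statistic that a FREQFIT-style program reports can be sketched in plain Python (a generic statistic; FREQFIT's own QuickBASIC routines are not reproduced here, and degrees of freedom are left to the caller):

```python
def chi_squared_statistic(observed, expected):
    """Pearson chi-squared goodness-of-fit statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Observed counts close to expectation give a small statistic.
print(chi_squared_statistic([48, 52], [50, 50]))  # 0.16
```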

13. The statistical analysis techniques to support the NGNP fuel performance experiments

Energy Technology Data Exchange (ETDEWEB)

Pham, Binh T., E-mail: Binh.Pham@inl.gov; Einerson, Jeffrey J.

2013-10-15

This paper describes the development and application of statistical analysis techniques to support the Advanced Gas Reactor (AGR) experimental program on Next Generation Nuclear Plant (NGNP) fuel performance. The experiments conducted in the Idaho National Laboratory’s Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel temperature) is regulated by the He–Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the NGNP Data Management and Analysis System for automated processing and qualification of the AGR measured data. The neutronic and thermal code simulation results are used for comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the fuel temperature within a given range.
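The control-charting technique used above to warn of thermocouple failures can be illustrated with an individuals (I) chart. The d2 = 1.128 constant and the synthetic readings are standard assumptions for the sketch, not NDMAS internals:

```python
import statistics

def control_limits(readings):
    """Individuals-chart control limits from the average moving range
    (half-width = 3 * MRbar / d2, with d2 = 1.128 for a moving range of 2)."""
    mr = [abs(b - a) for a, b in zip(readings, readings[1:])]
    center = statistics.mean(readings)
    half_width = 3.0 * statistics.mean(mr) / 1.128
    return center - half_width, center + half_width

def out_of_control(readings, lcl, ucl):
    """Indices of readings outside the limits, e.g. a failing thermocouple."""
    return [i for i, r in enumerate(readings) if not lcl <= r <= ucl]

stable = [1000.0, 1001.0, 999.0, 1000.5, 999.5, 1000.0]  # synthetic temps (C)
lcl, ucl = control_limits(stable)
print(out_of_control(stable + [1010.0], lcl, ucl))  # [6]: the spike is flagged
```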

14. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratory Zonal Atmospheric Model

International Nuclear Information System (INIS)

Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

1978-06-01

Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

15. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

Science.gov (United States)

Angeler, David G; Viedma, Olga; Moreno, José M

2009-11-01

Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinate of neighborhood matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-)communities.

16. Statistical properties of a utility measure of observer performance compared to area under the ROC curve

Science.gov (United States)

Abbey, Craig K.; Samuelson, Frank W.; Gallas, Brandon D.; Boone, John M.; Niklason, Loren T.

2013-03-01

The receiver operating characteristic (ROC) curve has become a common tool for evaluating diagnostic imaging technologies, and the primary endpoint of such evaluations is the area under the curve (AUC), which integrates sensitivity over the entire false positive range. An alternative figure of merit for ROC studies is expected utility (EU), which focuses on the relevant region of the ROC curve as defined by disease prevalence and the relative utility of the task. However, if this measure is to be used, it must also have desirable statistical properties to keep the burden of observer performance studies as low as possible. Here, we evaluate effect size and variability for EU and AUC. We use two observer performance studies recently submitted to the FDA to compare the EU and AUC endpoints. The studies were conducted using the multi-reader multi-case methodology in which all readers score all cases in all modalities. ROC curves from the study were used to generate both the AUC and EU values for each reader and modality. The EU measure was computed assuming an iso-utility slope of 1.03. We find mean effect sizes, the reader-averaged difference between modalities, to be roughly 2.0 times as large for EU as for AUC. The standard deviation across readers is roughly 1.4 times as large, suggesting better statistical properties for the EU endpoint. In a simple power analysis of paired comparison across readers, the utility measure required 36% fewer readers on average to achieve 80% statistical power compared to AUC.
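The two endpoints can be contrasted on a toy empirical ROC curve. The EU helper below is a simplified stand-in (the best achievable TPF - beta*FPF at the stated iso-utility slope of 1.03); the published definition also carries prevalence and utility constants omitted here, and the ROC points are invented:

```python
def auc_trapezoid(fpf, tpf):
    """Area under an empirical ROC curve by the trapezoidal rule."""
    pts = list(zip(fpf, tpf))
    return sum((f2 - f1) * (t1 + t2) / 2.0
               for (f1, t1), (f2, t2) in zip(pts, pts[1:]))

def expected_utility_summary(fpf, tpf, beta=1.03):
    """Simplified utility-oriented ROC summary: max over operating points of
    TPF - beta * FPF (prevalence/utility constants of the full EU omitted)."""
    return max(t - beta * f for f, t in zip(fpf, tpf))

fpf = [0.0, 0.1, 0.3, 1.0]  # invented operating points
tpf = [0.0, 0.6, 0.8, 1.0]
print(auc_trapezoid(fpf, tpf), expected_utility_summary(fpf, tpf))
```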

17. Predicting energy performance of a net-zero energy building: A statistical approach

International Nuclear Information System (INIS)

Kneifel, Joshua; Webb, David

2016-01-01

Highlights: • A regression model is applied to actual energy data from a net-zero energy building. • The model is validated through a rigorous statistical analysis. • Comparisons are made between model predictions and those of a physics-based model. • The model is a viable baseline for evaluating future models from the energy data. - Abstract: Performance-based building requirements have become more prevalent because they give freedom in building design while still maintaining or exceeding the energy performance required by prescriptive-based requirements. In order to determine if building designs reach target energy efficiency improvements, it is necessary to estimate the energy performance of a building using predictive models and different weather conditions. Physics-based whole building energy simulation modeling is the most common approach. However, these physics-based models include underlying assumptions and require significant amounts of information in order to specify the input parameter values. An alternative approach to test the performance of a building is to develop a statistically derived predictive regression model using post-occupancy data that can accurately predict energy consumption and production based on a few common weather-based factors, thus requiring less information than simulation models. A regression model based on measured data should be able to predict the energy performance of a building for a given day as long as the weather conditions are similar to those during the data collection time frame. This article uses data from the National Institute of Standards and Technology (NIST) Net-Zero Energy Residential Test Facility (NZERTF) to develop and validate a regression model to predict the energy performance of the NZERTF using two weather variables aggregated to the daily level, applies the model to estimate the energy performance of hypothetical NZERTFs located in different cities in the Mixed-Humid Climate Zone, and compares these
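A statistical baseline of this kind reduces to an ordinary least-squares fit on daily aggregates. A minimal sketch with two hypothetical weather predictors; the numbers are invented for illustration, not NZERTF data:

```python
import numpy as np

# Hypothetical daily aggregates: outdoor temperature (C) and solar irradiance
# (kWh/m^2/day) as the two weather variables; net energy use (kWh) as response.
temp = np.array([5.0, 10.0, 20.0, 25.0, 0.0])
solar = np.array([2.1, 3.5, 5.8, 6.0, 1.5])
energy = 40.0 - 1.0 * temp - 2.0 * solar  # synthetic, exactly linear for the demo

# Ordinary least squares with an intercept: energy ~ b0 + b1*temp + b2*solar.
A = np.column_stack([np.ones_like(temp), temp, solar])
coef, *_ = np.linalg.lstsq(A, energy, rcond=None)
print(np.round(coef, 2))  # recovers [40. -1. -2.] on this noise-free example
```

On real post-occupancy data the residuals, not shown here, carry the validation burden the abstract emphasizes.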

18. The Heuristic Value of p in Inductive Statistical Inference

Directory of Open Access Journals (Sweden)

Joachim I. Krueger

2017-06-01

Many statistical methods yield the probability of the observed data – or data more extreme – under the assumption that a particular hypothesis is true. This probability is commonly known as 'the' p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say.

19. The Heuristic Value of p in Inductive Statistical Inference.

Science.gov (United States)

Krueger, Joachim I; Heck, Patrick R

2017-01-01

Many statistical methods yield the probability of the observed data - or data more extreme - under the assumption that a particular hypothesis is true. This probability is commonly known as 'the' p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say.
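The kind of simulation the abstract describes can be sketched as follows. All settings (effect size 0.5, n = 30, a 50% base rate of true effects) are illustrative choices, not the paper's design:

```python
import math
import random

random.seed(1)

def p_value(z):
    """One-sided p for a standard-normal test statistic (normal tail via erfc)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Toy simulation: half the studies test a true effect (delta = 0.5), n = 30.
n, delta, sims = 30, 0.5, 2000
true_given_sig = []
for _ in range(sims):
    effect = random.random() < 0.5
    mean = sum(random.gauss(delta if effect else 0.0, 1.0) for _ in range(n)) / n
    if p_value(mean * math.sqrt(n)) < 0.05:
        true_given_sig.append(effect)

# Among significant results, most come from true effects: p acts as a useful cue.
print(sum(true_given_sig) / len(true_given_sig))
```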

20. Effect of altitude on physiological performance: a statistical analysis using results of international football games.

Science.gov (United States)

McSharry, Patrick E

2007-12-22

To assess the effect of altitude on match results and physiological performance of a large and diverse population of professional athletes. Statistical analysis of international football (soccer) scores and results. FIFA extensive database of 1460 football matches in 10 countries spanning over 100 years. Altitude had a statistically significant negative impact on physiological performance, as revealed through the overall underperformance of low-altitude teams when playing against high-altitude teams in South America. High-altitude teams score more and concede fewer goals with increasing altitude difference. Each additional 1000 m of altitude difference increases the goal difference by about half of a goal. The probability of the home team winning for two teams from the same altitude is 0.537, whereas this rises to 0.825 for a home team with an altitude difference of 3695 m (such as Bolivia v Brazil) and falls to 0.213 when the altitude difference is -3695 m (such as Brazil v Bolivia). Altitude provides a significant advantage for high-altitude teams when playing international football games at both low and high altitudes. Lowland teams are unable to acclimatise to high altitude, reducing physiological performance. As physiological performance does not protect against the effect of altitude, better predictors of individual susceptibility to altitude illness would facilitate team selection.
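The headline estimate lends itself to a one-line calculation. A sketch using the abstract's figure of roughly half a goal per 1000 m of altitude difference; the linear extrapolation is a simplification for illustration, not the paper's fitted model:

```python
def expected_goal_difference(altitude_diff_m):
    """Expected shift in goal difference for the higher-altitude team, using
    the study's estimate of ~0.5 goals per 1000 m of altitude gap.
    Linear extrapolation is an illustrative simplification."""
    return 0.5 * altitude_diff_m / 1000.0

# Bolivia hosting Brazil: an altitude gap of about 3695 m.
print(expected_goal_difference(3695))  # about 1.85 goals
```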

1. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment.

Science.gov (United States)

Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C

2015-02-01

Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis methods, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance, with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined. © The Author(s) 2014.
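One of the repeatability summaries such metrology frameworks standardize is the repeatability coefficient, RC = 2.77 times the within-subject standard deviation (2.77 being the usual 1.96·sqrt(2) rounding). A sketch on invented test-retest data, not from any cited study:

```python
import math
import statistics

def repeatability_coefficient(replicates):
    """RC = 2.77 * within-subject SD: roughly 95% of test-retest differences
    are expected to fall within +/- RC.

    replicates: list of per-subject measurement lists (>= 2 values each).
    """
    # Pool the within-subject sample variances across subjects.
    wvar = statistics.mean(statistics.variance(r) for r in replicates)
    return 2.77 * math.sqrt(wvar)

# Invented test-retest biomarker readings for three subjects.
scans = [[10.1, 10.3], [8.0, 8.4], [12.2, 12.0]]
print(round(repeatability_coefficient(scans), 3))  # 0.554
```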

2. Effect of the Target Motion Sampling Temperature Treatment Method on the Statistics and Performance

Science.gov (United States)

Viitanen, Tuomas; Leppänen, Jaakko

2014-06-01

Target Motion Sampling (TMS) is a stochastic on-the-fly temperature treatment technique that is being developed as a part of the Monte Carlo reactor physics code Serpent. The method provides for modeling of arbitrary temperatures in continuous-energy Monte Carlo tracking routines with only one set of cross sections stored in the computer memory. Previously, only the performance of the TMS method in terms of CPU time per transported neutron has been discussed. Since the effective cross sections are not calculated at any point of a transport simulation with TMS, reaction rate estimators must be scored using sampled cross sections, which is expected to increase the variances and, consequently, to decrease the figures-of-merit. This paper examines the effects of TMS on statistics and performance in practical calculations involving reaction rate estimation with collision estimators. Against all expectations, it turned out that the use of sampled response values has no practical effect on the performance of reaction rate estimators when using TMS with elevated basis cross section temperatures (EBT), i.e. the usual way. With 0 Kelvin cross sections, a significant increase in the variances of capture rate estimators was observed right below the energy region of unresolved resonances, but at these energies the figures-of-merit could be increased using a simple resampling technique to decrease the variances of the responses. It was, however, noticed that the usage of the TMS method increases the statistical deviations of all estimators, including the flux estimator, by tens of percent in the vicinity of very strong resonances. This effect is not related to the usage of sampled responses, but is instead an inherent property of the TMS tracking method and concerns both EBT and 0 K calculations.
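The figure-of-merit trade-off the abstract discusses follows the generic Monte Carlo definition, FOM = 1 / (sigma^2 * T), where sigma is the relative statistical error of an estimator and T the CPU time. A sketch of that definition (generic, not Serpent's internal bookkeeping):

```python
def figure_of_merit(rel_error, cpu_time):
    """Monte Carlo figure of merit, FOM = 1 / (sigma^2 * T).

    Halving the variance at fixed CPU time doubles the FOM; inflated variances
    from sampled responses would lower it at equal cost.
    """
    return 1.0 / (rel_error ** 2 * cpu_time)

# A 1% relative error in 100 s beats a 1.2% error in the same time.
print(figure_of_merit(0.01, 100.0) > figure_of_merit(0.012, 100.0))  # True
```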

3. Teaching Statistics Using Classic Psychology Research: An Activities-Based Approach

Science.gov (United States)

Holmes, Karen Y.; Dodd, Brett A.

2012-01-01

In this article, we discuss a collection of active learning activities derived from classic psychology studies that illustrate the appropriate use of descriptive and inferential statistics. (Contains 2 tables.)

4. Inferential framework for non-stationary dynamics: theory and applications

International Nuclear Information System (INIS)

Duggento, Andrea; Luchinsky, Dmitri G; McClintock, Peter V E; Smelyanskiy, Vadim N

2009-01-01

An extended Bayesian inference framework is presented, aiming to infer time-varying parameters in non-stationary nonlinear stochastic dynamical systems. The convergence of the method is discussed. The performance of the technique is studied using, as an example, signal reconstruction for a system of neurons modeled by FitzHugh–Nagumo oscillators: it is applied to reconstruction of the model parameters and elements of the measurement matrix, as well as to inference of the time-varying parameters of the non-stationary system. It is shown that the proposed approach is able to reconstruct unmeasured (hidden) variables of the system, to determine the model parameters, to detect stepwise changes of control parameters for each oscillator and to track the continuous evolution of the control parameters in the adiabatic limit

5. The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments

International Nuclear Information System (INIS)

Pham, Binh T.; Einerson, Jeffrey J.

2010-01-01

This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory's Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.

6. Inference and the Introductory Statistics Course

Science.gov (United States)

Pfannkuch, Maxine; Regan, Matt; Wild, Chris; Budgett, Stephanie; Forbes, Sharleen; Harraway, John; Parsonage, Ross

2011-01-01

This article sets out some of the rationale and arguments for making major changes to the teaching and learning of statistical inference in introductory courses at our universities by changing from a norm-based, mathematical approach to more conceptually accessible computer-based approaches. The core problem of the inferential argument with its…
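One example of the conceptually accessible, computer-based approach the authors advocate is resampling: a percentile bootstrap interval replaces the norm-based formula. A small illustrative sketch (the data are invented):

```python
import random

def bootstrap_ci(data, n_resamples=10000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean."""
    rng = random.Random(seed)
    n = len(data)
    means = sorted(
        sum(rng.choice(data) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

sample = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]
low, high = bootstrap_ci(sample)  # interval around the sample mean of 12.6
```

The procedure makes the inferential logic visible (repeat the sampling, look at the spread) without requiring normal-theory derivations.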

7. Multileaf collimator performance monitoring and improvement using semiautomated quality control testing and statistical process control

International Nuclear Information System (INIS)

Létourneau, Daniel; McNiven, Andrea; Keller, Harald; Wang, An; Amin, Md Nurul; Pearce, Jim; Norrlinger, Bernhard; Jaffray, David A.

2014-01-01

Purpose: High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. Methods: The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3–4 times/week over a period of 10–11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ±0.5 and ±1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. Results: The precision of the MLC performance monitoring QC test and the MLC itself was within ±0.22 mm for most MLC leaves
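The per-leaf individuals control charts with measurement-based limits and ±0.5/±1 mm specifications can be sketched generically as follows (a common moving-range formulation with invented leaf-position errors, not the actual monitoring-system code):

```python
def individuals_chart_limits(values):
    """Control limits for an individuals (I) chart from the average
    moving range: CL = mean, UCL/LCL = mean +/- 2.66 * MRbar."""
    n = len(values)
    mean = sum(values) / n
    mrbar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
    return mean - 2.66 * mrbar, mean, mean + 2.66 * mrbar

def flag_points(values, spec=0.5):
    """Flag leaf-position errors (mm) that are out of control or out of spec."""
    lcl, cl, ucl = individuals_chart_limits(values)
    return [i for i, v in enumerate(values)
            if v < lcl or v > ucl or abs(v) > spec]

# Hypothetical daily position errors (mm) for one MLC leaf
errors = [0.05, -0.02, 0.08, 0.01, -0.04, 0.06, 0.75, 0.03]
flagged = flag_points(errors)  # point 6 exceeds the 0.5 mm specification
```

Note that a point can be out of specification while remaining inside the statistically derived control limits, which is why the system tracks both criteria.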

8. Multileaf collimator performance monitoring and improvement using semiautomated quality control testing and statistical process control.

Science.gov (United States)

Létourneau, Daniel; Wang, An; Amin, Md Nurul; Pearce, Jim; McNiven, Andrea; Keller, Harald; Norrlinger, Bernhard; Jaffray, David A

2014-12-01

High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3-4 times/week over a period of 10-11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ± 0.5 and ± 1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. The precision of the MLC performance monitoring QC test and the MLC itself was within ± 0.22 mm for most MLC leaves and the majority of the

9. LHCb: Statistical Comparison of CPU performance for LHCb applications on the Grid

CERN Multimedia

Graciani, R

2009-01-01

The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application that performs the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects (tracks, clusters,…) that can be used for later physics analysis of the data. Both applications are based on the Gaudi and LHCb software frameworks. Gauss uses Pythia and Geant as underlying libraries for the simulation of the collision and the later passage of the generated particles through the LHCb detector, while Brunel makes use of LHCb-specific code to process the data from each sub-detector. Both applications are CPU bound. Large Monte Carlo productions or data reconstructions running on the Grid are an ideal benchmark to compare the performance of the different CPU models for each case. Since the processed events are only statistically comparable, only statistical comparison of the...

10. The Statistical Analysis of Relation between Compressive and Tensile/Flexural Strength of High Performance Concrete

Directory of Open Access Journals (Sweden)

Kępniak M.

2016-12-01

This paper addresses the tensile and flexural strength of HPC (high performance concrete). The aim of the paper is to analyse the efficiency of models proposed in different codes. In particular, three design procedures, from the ACI 318 [1], Eurocode 2 [2] and the Model Code 2010 [3], are considered. The associations between the design tensile strength of concrete obtained from these three codes and compressive strength are compared with experimental results of tensile strength and flexural strength using statistical tools. Experimental results of tensile strength were obtained in the splitting test. Based on this comparison, conclusions are drawn about the fit between the design methods and the test data. The comparison shows that the tensile and flexural strength of HPC depend on additional influential factors, not only on compressive strength.
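For reference, the Eurocode 2 relation between mean tensile and compressive strength can be sketched as below. The formulas are quoted from memory of EN 1992-1-1 Table 3.1 and should be verified against the code; the strength values are illustrative:

```python
import math

def ec2_mean_tensile_strength(fck):
    """Mean axial tensile strength f_ctm (MPa) from the characteristic
    compressive strength f_ck (MPa), per the Eurocode 2 relations
    (quoted from memory; verify against EN 1992-1-1 Table 3.1)."""
    if fck <= 50.0:
        return 0.30 * fck ** (2.0 / 3.0)
    fcm = fck + 8.0  # mean compressive strength (MPa)
    return 2.12 * math.log(1.0 + fcm / 10.0)

fctm_hpc = ec2_mean_tensile_strength(90.0)  # an HPC-range concrete
```

The switch to a logarithmic expression above 50 MPa reflects exactly the paper's point: for HPC, tensile strength grows much more slowly than a pure power of compressive strength would suggest.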

11. The nano-mechanical signature of Ultra High Performance Concrete by statistical nanoindentation techniques

International Nuclear Information System (INIS)

Sorelli, Luca; Constantinides, Georgios; Ulm, Franz-Josef; Toutlemonde, Francois

2008-01-01

Advances in engineering the microstructure of cementitious composites have led to the development of fiber reinforced Ultra High Performance Concretes (UHPC). The scope of this paper is twofold, first to characterize the nano-mechanical properties of the phases governing the UHPC microstructure by means of a novel statistical nanoindentation technique; then to upscale those nanoscale properties, by means of continuum micromechanics, to the macroscopic scale of engineering applications. In particular, a combined investigation of nanoindentation, scanning electron microscope (SEM) and X-ray Diffraction (XRD) indicates that the fiber-matrix transition zone is relatively defect free. On this basis, a four-level multiscale model with defect free interfaces allows to accurately determine the composite stiffness from the measured nano-mechanical properties. Besides evidencing the dominant role of high density calcium silicate hydrates and the stiffening effect of residual clinker, the suggested model may become a useful tool for further optimizing cement-based engineered composites

12. Statistics in the pharmacy literature.

Science.gov (United States)

Lee, Charlene M; Soin, Herpreet K; Einarson, Thomas R

2004-09-01

Research in statistical methods is essential for maintenance of high quality of the published literature. To update previous reports of the types and frequencies of statistical terms and procedures in research studies of selected professional pharmacy journals. We obtained all research articles published in 2001 in 6 journals: American Journal of Health-System Pharmacy, The Annals of Pharmacotherapy, Canadian Journal of Hospital Pharmacy, Formulary, Hospital Pharmacy, and Journal of the American Pharmaceutical Association. Two independent reviewers identified and recorded descriptive and inferential statistical terms/procedures found in the methods, results, and discussion sections of each article. Results were determined by tallying the total number of times, as well as the percentage, that each statistical term or procedure appeared in the articles. One hundred forty-four articles were included. Ninety-eight percent employed descriptive statistics; of these, 28% used only descriptive statistics. The most common descriptive statistical terms were percentage (90%), mean (74%), standard deviation (58%), and range (46%). Sixty-nine percent of the articles used inferential statistics, the most frequent being chi(2) (33%), Student's t-test (26%), Pearson's correlation coefficient r (18%), ANOVA (14%), and logistic regression (11%). Statistical terms and procedures were found in nearly all of the research articles published in pharmacy journals. Thus, pharmacy education should aim to provide current and future pharmacists with an understanding of the common statistical terms and procedures identified to facilitate the appropriate appraisal and consequential utilization of the information available in research articles.
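As a reminder of how the most frequent inferential test in this survey works, here is a from-scratch Pearson chi-square for a 2×2 table (counts are hypothetical; the closed-form p-value uses the 1-df chi-square survival function):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, p-value for 1 df)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2.0))  # chi-square survival function, 1 df
    return stat, p

# Hypothetical counts: outcome (rows) vs. treatment group (columns)
stat, p = chi2_2x2(30, 10, 20, 20)
```

Being able to reproduce such a calculation by hand is one concrete form of the basic statistical literacy the authors recommend for pharmacists.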

13. Oil pipeline performance review 1995, 1996, 1997, 1998 : Technical/statistical report

International Nuclear Information System (INIS)

2000-12-01

This document provides a summary of the pipeline performance and reportable pipeline failures of liquid hydrocarbon pipelines in Canada for the years 1995 through 1998. The year 1994 was the last one for which the Oil Pipeline Performance Review (OPPR) was published on an annual basis. The OPPR will continue to be published until such time as the Pipeline Risk Assessment Sub-Committee (PRASC) has obtained enough pipeline failure data to be aggregated into a meaningful report. The shifts in the mix of reporting pipeline companies are apparent in the data presented, comparing the volumes transported and the traffic volume during the previous ten-year period. Another table presents a summary of the failures which occurred during the period under consideration, 1995-1998, allowing for a comparison with the data for the previous ten-year period. From both a current and an historical perspective, this document provides a statistical review of the performance of the pipelines, covering refined petroleum product pipelines, clean oil pipelines and High Vapour Pressure (HVP) pipelines downstream of battery limits. Classified as reportable are spills of 1.5 cubic metres or more of liquid hydrocarbons, any amount of HVP material, and any incident involving an injury, a death, a fire, or an explosion. For those companies that responded to the survey, the major items, including the number of failures and volumes released, are accurate. Samples of the forms used for collecting the information are provided within the document. 6 tabs., 1 fig

14. Statistical process control as a tool for controlling operating room performance: retrospective analysis and benchmarking.

Science.gov (United States)

Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao

2010-10-01

There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after stabilizing the process. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I. There are some measures that can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon can be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
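The ARL(0) computed in step four is, for a Shewhart-type chart, the reciprocal of the false-alarm probability. A minimal sketch under a normality assumption (the ±3-sigma default is generic, not taken from this study):

```python
import math

def shewhart_arl0(k=3.0):
    """In-control average run length of a Shewhart chart with +/- k-sigma
    limits: ARL0 = 1 / alpha, where alpha is the false-alarm rate."""
    phi = 0.5 * (1.0 + math.erf(k / math.sqrt(2.0)))  # standard normal CDF
    alpha = 2.0 * (1.0 - phi)
    return 1.0 / alpha

arl0 = shewhart_arl0()  # roughly 370 in-control samples between false alarms
```

A longer ARL(0) means fewer spurious signals while the process is stable, which matters when benchmarking surgeons against one another.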

15. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

Science.gov (United States)

Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

2015-02-01

Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
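A standard random-effects pooling of per-study performance estimates, of the kind such a meta-analysis builds on, is the DerSimonian-Laird method. A compact sketch with invented repeatability estimates and variances (the paper's own point is that corrections beyond this standard method may be needed when studies are small):

```python
def dersimonian_laird(estimates, variances):
    """DerSimonian-Laird random-effects pooling of study-level estimates
    of a performance metric; returns (pooled estimate, tau-squared)."""
    w = [1.0 / v for v in variances]               # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    k = len(estimates)
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)             # between-study variance
    wstar = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wstar, estimates)) / sum(wstar)
    return pooled, tau2

pooled, tau2 = dersimonian_laird([0.10, 0.16, 0.30], [0.002, 0.004, 0.008])
```

When the between-study variance tau-squared is positive, the random-effects weights shrink toward equality, so a single large study dominates the pooled repeatability estimate less than under a fixed-effect model.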

16. Neural Networks. Diagnostic and inferential measurements; Reti neurali. Diagnostica e misure inferenziali

Energy Technology Data Exchange (ETDEWEB)

Bonavita, N. [Apc Group Leader, Abb Industria, Genua (Italy); Parisini, T. [Milan Politecnico, Milan (Italy). Dipt. di Elettronica e Informazione

2000-09-01

In this work, the use of neural approximating networks is described in the context of fault diagnosis of industrial plants, with particular emphasis on the technique of inferential measurements. The proposed methodology is related to the current literature, emphasizing advantages and disadvantages of the analytical redundancy concept. The use of neural approximators for the generation of inferential measurements is described in the context of industrial distributed control systems.

17. Cognitive stimulation of pupils with Down syndrome: A study of inferential talk during book-sharing.

Science.gov (United States)

Engevik, Liv Inger; Næss, Kari-Anne B; Hagtvet, Bente E

2016-08-01

In the education of pupils with Down syndrome, "simplifying" literal talk and concrete stimulation have typically played a dominant role. This explorative study investigated the extent to which teachers stimulated abstract cognitive functions via inferential talk during book-sharing and how pupils with Down syndrome responded. Dyadic interactions (N=7) were videotaped, transcribed and coded to identify levels of abstraction in teacher utterances and to evaluate the adequacy of pupil responses. One-third of the teachers' utterances contained high levels of abstraction and promoted inferential talk. Six of the seven children predominantly responded in ways which revealed inferential thinking. Dialog excerpts highlighted individual, contextual and interactional factors contributing to variations in the findings. Contrary to previous claims, the children with Down syndrome in the current sample appear able to draw inferences beyond the "here-and-now" with teacher support. This finding highlights the educational relevance and importance of higher-order cognitive stimulation of pupils with intellectual disabilities, to foster independent metacognitive skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

18. Cross-categorization of legal concepts across boundaries of legal systems: in consideration of inferential links

DEFF Research Database (Denmark)

Glückstad, Fumiko Kano; Herlau, Tue; Schmidt, Mikkel Nørgaard

2014-01-01

This work contrasts Giovanni Sartor's view of inferential semantics of legal concepts (Sartor in Artif Intell Law 17:217–251, 2009) with a probabilistic model of theory formation (Kemp et al. in Cognition 114:165–196, 2010). The work further explores possibilities of implementing Kemp's probabilistic model of theory formation (… and Griffiths in Behav Brain Sci 4:629–640, 2001), i.e., the Infinite Relational Model (IRM) first introduced by Kemp et al. (The twenty-first national conference on artificial intelligence, 2006; Cognition 114:165–196, 2010), and its extended model, i.e., the normal-… to the International Standard Classification of Education. The main contribution of this work is the proposal of a conceptual framework for the cross-categorization approach that, inspired by Sartor (Artif Intell Law 17:217–251, 2009), attempts to explain the reasoner's inferential mechanisms.

19. Turking Statistics: Student-Generated Surveys Increase Student Engagement and Performance

Science.gov (United States)

Whitley, Cameron T.; Dietz, Thomas

2018-01-01

Thirty years ago, Hubert M. Blalock Jr. published an article in "Teaching Sociology" about the importance of teaching statistics. We honor Blalock's legacy by assessing how using Amazon Mechanical Turk (MTurk) in statistics classes can enhance student learning and increase statistical literacy among social science graduate students. In…

20. Effect of the Target Motion Sampling temperature treatment method on the statistics and performance

International Nuclear Information System (INIS)

Viitanen, Tuomas; Leppänen, Jaakko

2015-01-01

Highlights: • Use of the Target Motion Sampling (TMS) method with collision estimators is studied. • The expected values of the estimators agree with the NJOY-based reference. • In most practical cases the variances of the estimators are also unaffected by TMS. • Transport calculation slow-down due to TMS dominates the impact on figures-of-merit. - Abstract: Target Motion Sampling (TMS) is a stochastic on-the-fly temperature treatment technique that is being developed as a part of the Monte Carlo reactor physics code Serpent. The method provides for modeling of arbitrary temperatures in continuous-energy Monte Carlo tracking routines with only one set of cross sections stored in the computer memory. Previously, only the performance of the TMS method in terms of CPU time per transported neutron has been discussed. Since the effective cross sections are not calculated at any point of a transport simulation with TMS, reaction rate estimators must be scored using sampled cross sections, which is expected to increase the variances and, consequently, to decrease the figures-of-merit. This paper examines the effects of TMS on the statistics and performance in practical calculations involving reaction rate estimation with collision estimators. Contrary to expectations, it turned out that the usage of sampled response values has no practical effect on the performance of reaction rate estimators when using TMS with elevated basis cross section temperatures (EBT), i.e. the usual way. With 0 Kelvin cross sections a significant increase in the variances of capture rate estimators was observed right below the energy region of unresolved resonances, but at these energies the figures-of-merit could be increased using a simple resampling technique to decrease the variances of the responses. It was, however, noticed that the usage of the TMS method increases the statistical deviations of all estimators, including the flux estimator, by tens of percent in the vicinity of very
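The figure-of-merit referred to above is the standard Monte Carlo efficiency measure, which makes the trade-off between variance and CPU time explicit (the numbers below are illustrative, not from the paper):

```python
def figure_of_merit(rel_std, cpu_time):
    """Monte Carlo figure of merit: FOM = 1 / (sigma_rel^2 * T).
    Either a larger variance or a longer run time reduces the FOM."""
    return 1.0 / (rel_std ** 2 * cpu_time)

fom_ref = figure_of_merit(0.0010, 100.0)  # reference run
fom_tms = figure_of_merit(0.0010, 130.0)  # same variance, 30% slower tracking
```

This is why the paper concludes that the transport slow-down, rather than estimator variance, dominates the impact of TMS on the figures-of-merit.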

1. Dynamic statistical optimization of GNSS radio occultation bending angles: advanced algorithm and performance analysis

Science.gov (United States)

Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.

2015-08-01

We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
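Statistical optimization of bending angles blends an observed profile with a background profile, weighted by their estimated error covariances. A scalar toy version (all values invented) showing the variance-weighted update at the heart of the method:

```python
def optimize_bending_angle(background, observed, var_b, var_o):
    """Scalar analogue of statistically optimized initialization:
    weight background and observation by their error variances."""
    gain = var_b / (var_b + var_o)  # trust the observation less if var_o is large
    return background + gain * (observed - background)

# At high altitudes the observation is noisy, so the background dominates
alpha = optimize_bending_angle(background=1.00e-5, observed=1.40e-5,
                               var_b=1.0e-12, var_o=9.0e-12)
```

The paper's contribution is precisely in estimating those error covariances dynamically and realistically (geographically varying uncertainties, realistic correlation matrices) rather than using fixed simplified values.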

2. Inferential Processor.

Science.gov (United States)

1982-01-01

… Groupe d'Intelligence Artificielle, Universite d'Aix-Marseille, Luminy, France, September 1975. 4. Clocksin, W.F. and C.S. Mellish … for reasoning in higher-order logics such as the first-order predicate calculus; the latter is required for applications in artificial intelligence … analysis and evaluation of intelligence reports, the preparation and analysis of tactical methods and principles, the formulation and interpretation of …

3. Long-Term Propagation Statistics and Availability Performance Assessment for Simulated Terrestrial Hybrid FSO/RF System

Directory of Open Access Journals (Sweden)

Fiser Ondrej

2011-01-01

Long-term monthly and annual statistics of the attenuation of electromagnetic waves that have been obtained from 6 years of measurements on a free space optical path, 853 meters long, with a wavelength of 850 nm and on a precisely parallel radio path with a frequency of 58 GHz are presented. All the attenuation events observed are systematically classified according to the hydrometeor type causing the particular event. Monthly and yearly propagation statistics on the free space optical path and radio path are obtained. The influence of individual hydrometeors on attenuation is analysed. The obtained propagation statistics are compared to the calculated statistics using ITU-R models. The calculated attenuation statistics both at 850 nm and 58 GHz underestimate the measured statistics for higher attenuation levels. The availability performance of a simulated hybrid FSO/RF system is analysed based on the measured data.
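Availability performance of such a link can be estimated directly from measured attenuation samples against the system fade margin. A toy sketch (sample values and margin are invented):

```python
def availability(attenuation_db, margin_db):
    """Fraction of samples in which the link attenuation stays within
    the fade margin (i.e., the link is considered available)."""
    ok = sum(1 for a in attenuation_db if a <= margin_db)
    return ok / len(attenuation_db)

# Hypothetical one-minute attenuation samples (dB) on an FSO path
samples = [2.1, 3.4, 1.8, 12.6, 2.9, 25.3, 3.1, 2.2, 4.0, 2.7]
avail = availability(samples, margin_db=10.0)
```

In a hybrid FSO/RF system the same calculation is applied jointly: an outage occurs only when both paths exceed their respective margins at the same time, which is why the hybrid availability exceeds either single-path value.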

4. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

Energy Technology Data Exchange (ETDEWEB)

GREER DA; THIEN MG

2012-01-12

The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), has previously presented the results of mixing performance in two different sizes of small scale DSTs to support scale-up estimates of full scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high level waste to the WTP. WRPS has performed small scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results, which leads to the conclusion that the two scales of small DST behave similarly and that full scale performance is predictable, will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission will be accomplished successfully. This paper will focus on the analytical data collected from mixing, sampling, and batch transfer testing in the small scale mixing demonstration tanks and how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data will be provided. The paper will then discuss the processing and manipulation of the data which is necessary to begin evaluating sampling and batch transfer performance. This discussion will also include the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests. The

5. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

Science.gov (United States)

Xu, Yue; Zhao, Tingchen; Li, Ding

2018-01-01

An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and this behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good accordance with the test data, validating high simulation accuracy.

6. The Impact of Time Difference between Satellite Overpass and Ground Observation on Cloud Cover Performance Statistics

Directory of Open Access Journals (Sweden)

Jędrzej S. Bojanowski

2014-12-01

Cloud property data sets derived from passive sensors onboard the polar orbiting satellites (such as the NOAA's Advanced Very High Resolution Radiometer) have global coverage and now span a climatological time period. Synoptic surface observations (SYNOP) are often used to characterize the accuracy of satellite-based cloud cover. Infrequent overpasses of polar orbiting satellites combined with the 3- or 6-h SYNOP frequency lead to collocation time differences of up to 3 h. The associated collocation error degrades the cloud cover performance statistics such as the Hanssen-Kuiper's discriminant (HK) by up to 45%. Limiting the time difference to 10 min, on the other hand, introduces a sampling error due to a lower number of corresponding satellite and SYNOP observations. This error depends on both the length of the validated time series and the SYNOP frequency. The trade-off between collocation and sampling error calls for an optimum collocation time difference. It depends, however, on cloud cover characteristics and SYNOP frequency, and cannot be generalized. Instead, a method is presented to reconstruct the unbiased (true) HK from HK affected by the collocation differences, which significantly (t-test p < 0.01) improves the validation results.
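The Hanssen-Kuiper's discriminant used as the headline statistic here is the hit rate minus the false-alarm rate from a 2×2 cloud/no-cloud contingency table. A minimal sketch with invented counts:

```python
def hanssen_kuiper(hits, false_alarms, misses, correct_negatives):
    """Hanssen-Kuiper discriminant (true skill statistic):
    HK = hit rate - false-alarm rate, ranging from -1 to 1."""
    pod = hits / (hits + misses)                              # hit rate
    pofd = false_alarms / (false_alarms + correct_negatives)  # false-alarm rate
    return pod - pofd

# Hypothetical satellite-vs-SYNOP cloud detection counts
hk = hanssen_kuiper(hits=420, false_alarms=60,
                    misses=80, correct_negatives=440)
```

Collocation time differences inflate both the miss and false-alarm counts (the cloud field changes between observations), which is the mechanism by which HK is degraded by up to 45% in the study above.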

7. Examining the Performance of Statistical Downscaling Methods: Toward Matching Applications to Data Products

Science.gov (United States)

Dixon, K. W.; Lanzante, J. R.; Adams-Smith, D.

2017-12-01

Several challenges exist when seeking to use future climate model projections in a climate impacts study. A not uncommon approach is to utilize climate projection data sets derived from more than one future emissions scenario and from multiple global climate models (GCMs). The range of future climate responses represented in the set is sometimes taken to be indicative of levels of uncertainty in the projections. Yet, GCM outputs are deemed to be unsuitable for direct use in many climate impacts applications. GCM grids typically are viewed as being too coarse. Additionally, regional or local-scale biases in a GCM's simulation of the contemporary climate that may not be problematic from a global climate modeling perspective may be unacceptably large for a climate impacts application. Statistical downscaling (SD) of climate projections - a type of post-processing that uses observations to inform the refinement of GCM projections - is often used in an attempt to account for GCM biases and to provide additional spatial detail. "What downscaled climate projection is the best one to use" is a frequently asked question, but one that is not always easy to answer, as it can be dependent on stakeholder needs and expectations. Here we present results from a perfect model experimental design illustrating how SD method performance can vary not only by SD method, but how performance can also vary by location, season, climate variable of interest, amount of projected climate change, SD configuration choices, and whether one is interested in central tendencies or the tails of the distribution. Awareness of these factors can be helpful when seeking to determine the suitability of downscaled climate projections for specific climate impacts applications. It also points to the potential value of considering more than one SD data product in a study, so as to acknowledge uncertainties associated with the strengths and weaknesses of different downscaling methods.

8. Application of descriptive statistics in analysis of experimental data

OpenAIRE

2008-01-01

Statistics today represents a group of scientific methods for the quantitative and qualitative investigation of variation in mass phenomena. In fact, statistics comprises the methods used for the collection, analysis, presentation and interpretation of the data necessary for reaching certain conclusions. Statistical analysis is divided into descriptive statistical analysis and inferential statistics. The values which represent the results of an experiment, and which are the subj...

9. Using Facebook Data to Turn Introductory Statistics Students into Consultants

Science.gov (United States)

2017-01-01

Facebook provides businesses and organizations with copious data describing how users interact with their pages. These data afford an excellent opportunity to turn introductory statistics students into consultants who analyze the Facebook data using descriptive and inferential statistics. This paper details a semester-long project that…

10. Applying Statistical Process Quality Control Methodology to Educational Settings.

Science.gov (United States)

Blumberg, Carol Joyce

A subset of Statistical Process Control (SPC) methodology known as control charting is introduced. SPC methodology is a collection of graphical and inferential statistics techniques used to study the progress of phenomena over time. The types of control charts covered are the X-bar (mean), R (range), X (individual observations), MR (moving…

11. Illustrating Sampling Distribution of a Statistic: Minitab Revisited

Science.gov (United States)

Johnson, H. Dean; Evans, Marc A.

2008-01-01

Understanding the concept of the sampling distribution of a statistic is essential for the understanding of inferential procedures. Unfortunately, this topic proves to be a stumbling block for students in introductory statistics classes. In efforts to aid students in their understanding of this concept, alternatives to a lecture-based mode of…

12. Review of Statistical Analyses Resulting from Performance of HLDWD-DWPF-005

International Nuclear Information System (INIS)

Beck, R.S.

1997-01-01

The Engineering Department at the Defense Waste Processing Facility (DWPF) has reviewed two reports from the Statistical Consulting Section (SCS) involving the statistical analysis of test results for analysis of small sample inserts (references 1 and 2). The test results cover two proposed analytical methods, a room temperature hydrofluoric acid preparation (Cold Chem) and a sodium peroxide/sodium hydroxide fusion modified for insert samples (Modified Fusion). The reports support implementation of the proposed small sample containers and analytical methods at DWPF. Hydragard sampler valve performance was typical of previous results (reference 3). Using an element from each major feed stream, lithium from the frit and iron from the sludge, the sampler was determined to deliver a uniform mixture to either sample container. The lithium to iron ratios were equivalent for the standard 15 ml vial and the 3 ml insert. The proposed methods provide analyses equivalent to those of the current methods. The biases associated with the proposed methods on a vitrified basis are less than 5% for major elements. The sum of oxides for the proposed method compares favorably with the sum of oxides for the conventional methods. However, the average sum of oxides for the Cold Chem method was 94.3%, which is below the minimum required recovery of 95%. Both proposed methods, Cold Chem and Modified Fusion, will at first be required to provide an accurate analysis that will routinely meet the 95% to 105% average sum of oxides limits for the Product Composition Control System (PCCS). Issues to be resolved during phased implementation are as follows: (1) determine the calcine/vitrification factor for radioactive feed; (2) evaluate the covariance matrix change against process operating ranges to determine optimum sample size; (3) evaluate sources of the low sum of oxides; and (4) improve remote operability of production versions of equipment and instruments for installation in 221-S. The specifics of

13. Introduction of a Journal Excerpt Activity Improves Undergraduate Students' Performance in Statistics

Science.gov (United States)

Rabin, Laura A.; Nutter-Upham, Katherine E.

2010-01-01

We describe an active learning exercise intended to improve undergraduate students' understanding of statistics by grounding complex concepts within a meaningful, applied context. Students in a journal excerpt activity class read brief excerpts of statistical reporting from published research articles, answered factual and interpretive questions,…

14. The Effects of Pre-Lecture Quizzes on Test Anxiety and Performance in a Statistics Course

Science.gov (United States)

Brown, Michael J.; Tallon, Jennifer

2015-01-01

The purpose of our study was to examine the effects of pre-lecture quizzes in a statistics course. Students (N = 70) from 2 sections of an introductory statistics course served as participants in this study. One section completed pre-lecture quizzes whereas the other section did not. Completing pre-lecture quizzes was associated with improved exam…

15. Effect of Task Presentation on Students' Performances in Introductory Statistics Courses

Science.gov (United States)

Tomasetto, Carlo; Matteucci, Maria Cristina; Carugati, Felice; Selleri, Patrizia

2009-01-01

Research on academic learning indicates that many students experience major difficulties with introductory statistics and methodology courses. We hypothesized that students' difficulties may depend in part on the fact that statistics tasks are commonly viewed as related to the threatening domain of math. In two field experiments which we carried…

16. Data Collection Manual for Academic and Research Library Network Statistics and Performance Measures.

Science.gov (United States)

Shim, Wonsik "Jeff"; McClure, Charles R.; Fraser, Bruce T.; Bertot, John Carlo

This manual provides a beginning approach for research libraries to better describe the use and users of their networked services. The manual also aims to increase the visibility and importance of developing such statistics and measures. Specific objectives are: to identify selected key statistics and measures that can describe use and users of…

17. Development of a statistical shape model of multi-organ and its performance evaluation

International Nuclear Information System (INIS)

Nakada, Misaki; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru

2010-01-01

Existing statistical shape modeling methods for a single organ cannot take into account the correlation between neighboring organs. This study focuses on a level set distribution model and proposes two modeling methods for multiple organs that can take this correlation into account. The first method combines the level set functions of multiple organs into a vector, then analyses the distribution of the vectors in a training dataset by principal component analysis to build a statistical shape model of the multiple organs. The second method constructs a statistical shape model for each organ independently and assembles the component scores of the different organs in a training dataset into a vector; the distribution of these vectors is then analysed to build a statistical shape model of multiple organs. This paper shows results of applying the proposed methods, trained on 15 abdominal CT volumes, to 8 unknown CT volumes. (author)

18. Inferential monitoring of global change impact on biodiversity through remote sensing and species distribution modeling

Science.gov (United States)

Sangermano, Florencia

2009-12-01

The world is experiencing rapid changes in both climate and land cover, the main factors affecting global biodiversity. These changes may affect ecosystems by altering species distributions, population sizes, and community compositions, which emphasizes the need for rapid assessment of biodiversity status for conservation and management purposes. Current approaches to monitoring biodiversity rely mainly on long-term observations of predetermined sites, which require large amounts of time, money and personnel. To overcome the problems associated with current field monitoring methods, the main objective of this dissertation is the development of a framework for inferential monitoring of the impact of global change on biodiversity, based on remotely sensed data coupled with species distribution modeling techniques. Several research pieces were performed independently to fulfill this goal. First, species distribution modeling was used to identify the ranges of 6362 birds, mammals and amphibians in South America. Chapter 1 compares the power of different presence-only species distribution methods for modeling distributions of species with different response curves to environmental gradients and different sample sizes. There is large variability in the power of the methods for modeling habitat suitability and species ranges, showing the importance of performing, when possible, a preliminary gradient analysis of the species distribution before selecting the method to be used. Chapter 2 presents a new methodology for the redefinition of species range polygons. Using a method capable of establishing the uncertainty in the definition of existing range polygons, the automated procedure identifies the relative importance of bioclimatic variables for the species, predicts their ranges and generates a quality assessment report to explore prediction errors. Analysis using independent validation data shows the power of this

19. Experiments performed with a functional model based on statistical discrimination in mixed nuclear radiation field

International Nuclear Information System (INIS)

Valcov, N.; Celarel, A.; Purghel, L.

1999-01-01

By using the statistical discrimination technique, the components of an ionization current due to a mixed radiation field may be simultaneously measured. A functional model, including a serially manufactured gamma-ray ratemeter, was developed as an intermediate step in the design of specialised nuclear instrumentation, in order to check the concept of the statistical discrimination method. The obtained results are in good agreement with the estimations of the statistical discrimination method. The main characteristics of the functional model are the following: - dynamic range of measurement: >300:1; - simultaneous measurement of natural radiation background and gamma-ray fields; - accuracy (for equal exposure rates from gammas and natural radiation background): 17%, for both radiation fields; - minimum detectable exposure rate: 2 μR/h. (authors)

20. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

International Nuclear Information System (INIS)

Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

2010-01-01

When constructing a statistical point cloud model, we usually need to calculate corresponding points, and the constructed statistical model differs depending on the method used to calculate them. This article examines the effect that different methods of calculating corresponding points have on statistical models of human organs. We validated the performance of the statistical models by registering an organ surface in a 3D medical image. We compare two methods for calculating corresponding points. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces. The second, the entropy-based particle system, chooses corresponding points by statistically evaluating a number of curved surfaces. With these methods we constructed the statistical models, and using these models we conducted registration with the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods of calculating corresponding points affect the statistical model through the change in probability density at each point. (author)

1. Performance evaluation of CT measurements made on step gauges using statistical methodologies

DEFF Research Database (Denmark)

Angel, J.; De Chiffre, L.; Kruth, J.P.

2015-01-01

In this paper, a study is presented in which statistical methodologies were applied to evaluate the measurement of step gauges on an X-ray computed tomography (CT) system. In particular, the effects of step gauge material density and orientation were investigated. The step gauges consist of uni- and bidirectional lengths. By confirming the repeatability of measurements made on the test system, the number of required scans in the design of experiment (DOE) was reduced. The statistical model was checked using model adequacy principles; model adequacy checking is an important step in validating...

Science.gov (United States)

Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

2015-10-01

Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
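The sensitivity, specificity, accuracy, and likelihood-ratio concepts reviewed above all follow from a 2×2 table of test results against disease status. The sketch below computes them from hypothetical counts (invented for illustration, not from the review):

```python
# Hypothetical 2x2 diagnostic table:
#                 disease present   disease absent
# test positive        tp = 90          fp = 30
# test negative        fn = 10          tn = 170
tp, fp, fn, tn = 90, 30, 10, 170

sensitivity = tp / (tp + fn)                   # 0.90: true positive rate
specificity = tn / (tn + fp)                   # 0.85: true negative rate
accuracy = (tp + tn) / (tp + fp + fn + tn)     # overall fraction correct
lr_positive = sensitivity / (1 - specificity)  # LR+: odds shift for a positive test
lr_negative = (1 - sensitivity) / specificity  # LR-: odds shift for a negative test

print(sensitivity, specificity, round(lr_positive, 2))  # -> 0.9 0.85 6.0
```

An LR+ of 6.0 means a positive result multiplies the pre-test odds of disease by six, which is how likelihood ratios connect test performance to individual patients.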

3. Quality of reporting statistics in two Indian pharmacology journals

OpenAIRE

2011-01-01

Objective: To evaluate the reporting of statistical methods in articles published in two Indian pharmacology journals. Materials and Methods: All original articles published since 2002 were downloaded from the journals' (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)) websites. These articles were evaluated on the basis of the appropriateness of descriptive statistics and inferential statistics. Descriptive statistics was evaluated on the basis of...

4. Using inferential sensors for quality control of Everglades Depth Estimation Network water-level data

Science.gov (United States)

Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul

2016-09-29

The Everglades Depth Estimation Network (EDEN), with over 240 real-time gaging stations, provides hydrologic data for freshwater and tidal areas of the Everglades. These data are used to generate daily water-level and water-depth maps of the Everglades that are used to assess biotic responses to hydrologic change resulting from the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. The generation of EDEN daily water-level and water-depth maps depends on high-quality real-time data from water-level stations. Real-time data are automatically checked for outliers by assigning minimum and maximum thresholds for each station. Small errors in the real-time data, such as gradual drift of malfunctioning pressure transducers, are more difficult to identify immediately with visual inspection of time-series plots and may only be identified during on-site inspections of the stations. Correcting these small errors in the data is often time consuming, and water-level data may not be finalized for several months. To provide daily water-level and water-depth maps on a near real-time basis, EDEN needed an automated process to identify errors in water-level data and to provide estimates for missing or erroneous water-level data. The Automated Data Assurance and Management (ADAM) software uses inferential sensor technology often used in industrial applications. Rather than installing a redundant sensor to measure a process, such as an additional water-level station, inferential (or virtual) sensors were developed for each station that make accurate estimates of the process measured by the hard sensor (the water-level gaging station). The inferential sensors in the ADAM software are empirical models that use inputs from one or more proximal stations. The advantage of ADAM is that it provides a redundant signal to the sensor in the field without the environmental threats associated with field conditions at stations (flood or hurricane, for example). In the

5. Performance of the S-χ² Statistic for Full-Information Bifactor Models

Science.gov (United States)

Li, Ying; Rupp, Andre A.

2011-01-01

This study investigated the Type I error rate and power of the multivariate extension of the S-χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…

6. Course Modality Choice and Student Performance in Business Statistics Courses in Post Secondary Institutions

Science.gov (United States)

2011-01-01

Limited research has been conducted on the role of course modality choice (face-to-face [FTF] or online [OL]) on course grades. At the study site, an independent college, the research problem was the lack of research on the proportions of undergraduate students who completed a statistics course as part of their academic program, in either OL or…

7. Characteristics and Performance of Students in an Online Section of Business Statistics

Science.gov (United States)

Dutton, John; Dutton, Marilyn

2005-01-01

We compare students in online and lecture sections of a business statistics class taught simultaneously by the same instructor using the same content, assignments, and exams in the fall of 2001. Student data are based on class grades, registration records, and two surveys. The surveys asked for information on preparedness, reasons for section…

8. Student Performance in an Introductory Business Statistics Course: Does Delivery Mode Matter?

Science.gov (United States)

Haughton, Jonathan; Kelly, Alison

2015-01-01

Approximately 600 undergraduates completed an introductory business statistics course in 2013 in one of two learning environments at Suffolk University, a mid-sized private university in Boston, Massachusetts. The comparison group completed the course in a traditional classroom-based environment, whereas the treatment group completed the course in…

9. Flipped Statistics Class Results: Better Performance than Lecture over One Year Later

Science.gov (United States)

Winquist, Jennifer R.; Carlson, Keith A.

2014-01-01

In this paper, we compare an introductory statistics course taught using a flipped classroom approach to the same course taught using a traditional lecture based approach. In the lecture course, students listened to lecture, took notes, and completed homework assignments. In the flipped course, students read relatively simple chapters and answered…

10. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

Science.gov (United States)

McCray, Wilmon Wil L., Jr.

The research was prompted by a need to assess the process improvement, quality management and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science degree programs (e.g., software engineering, computer science, and information technology) at U.S. colleges and universities, techniques that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost and predict future outcomes. The study identifies and provides a detailed discussion of the gap analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis that identifies the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization

11. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

Energy Technology Data Exchange (ETDEWEB)

Li, Ke; Tang, Jie [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States)

2014-04-15

Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Given the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to differ from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work was to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a "redder" NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ dose^(−β) with exponent β ≈ 0.25, which violates the classical σ ∝ dose^(−0.5) power law of FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial
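The σ ∝ dose^(−β) relationship described above can be estimated from repeated-scan noise measurements by a least-squares fit in log-log space, where the power law becomes a straight line with slope −β. The sketch below uses synthetic (σ, dose) pairs generated with β = 0.25 to mirror the reported MBIR behavior; they are not measured data.

```python
import math

# Synthetic noise-vs-dose data following sigma = k * dose**(-beta) with beta = 0.25.
doses = [5, 12.5, 25, 50, 100, 200, 300]          # mAs levels, as in the study design
sigmas = [40.0 * d ** -0.25 for d in doses]       # hypothetical noise standard deviations

# In log space: log(sigma) = log(k) - beta * log(dose), so beta = -slope.
xs = [math.log(d) for d in doses]
ys = [math.log(s) for s in sigmas]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
beta = -num / den
print(round(beta, 3))  # -> 0.25
```

For FBP the same fit would recover β ≈ 0.5, so comparing fitted exponents is a compact way to quantify how strongly a nonlinear reconstruction departs from classical noise scaling.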

12. Performance of Generating Plant: Managing the Changes. Part 2: Thermal Generating Plant Unavailability Factors and Availability Statistics

Energy Technology Data Exchange (ETDEWEB)

Curley, G. Michael [North American Electric Reliability Corporation (United States)]; Mandula, Jiri [International Atomic Energy Agency (IAEA)]

2008-05-15

The WEC Committee on the Performance of Generating Plant (PGP) has been collecting and analysing power plant performance statistics worldwide for more than 30 years and has produced regular reports, which include examples of advanced techniques and methods for improving power plant performance through benchmarking. A series of reports from the various working groups was issued in 2008. This reference presents the results of Working Group 2 (WG2). WG2's main task is to facilitate the collection and input on an annual basis of power plant performance data (unit-by-unit and aggregated data) into the WEC PGP database. The statistics will be collected for steam, nuclear, gas turbine and combined cycle, hydro and pump storage plant. WG2 will also oversee the ongoing development of the availability statistics database, including the contents, the required software, security issues and other important information. The report is divided into two sections: Thermal generating, combined cycle/co-generation, combustion turbine, hydro and pumped storage unavailability factors and availability statistics; and nuclear power generating units.

13. INFLUENCE OF STOCHASTIC NOISE STATISTICS ON KALMAN FILTER PERFORMANCE BASED ON VIDEO TARGET TRACKING

Institute of Scientific and Technical Information of China (English)

Chen Ken; Napolitano; Zhang Yun; Li Dong

2010-01-01

The system stochastic noises involved in Kalman filtering are assumed to be ideally white and Gaussian distributed. In this research, efforts are exerted on exploring the influence of the noise statistics on Kalman filtering from the perspective of video target tracking quality. The correlation of tracking precision to both the process and measurement noise covariance is investigated; the signal-to-noise power density ratio is defined; the contribution of predicted states and measured outputs to Kalman filter behavior is discussed; and the relative sensitivity of tracking precision is derived and applied in this study case. The findings are expected to pave the way for future study of how actual noise statistics that deviate from the assumed ones impact the optimality of the Kalman filter and degrade its performance in video tracking applications.
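A minimal scalar Kalman filter makes the role of the assumed noise statistics concrete: the gain at every step is set by the balance between the process noise covariance q and the measurement noise covariance r. This is an illustrative sketch with invented numbers, not the tracker used in the paper.

```python
# Scalar Kalman filter estimating a constant value from noisy measurements.
# q: assumed process noise variance; r: assumed measurement noise variance.
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    x, p = x0, p0
    for z in measurements:
        p = p + q              # predict: state variance grows by process noise
        k = p / (p + r)        # Kalman gain: large q/r trusts measurements more
        x = x + k * (z - x)    # update with the measurement innovation
        p = (1 - k) * p        # posterior variance shrinks after the update
    return x

zs = [1.1, 0.9, 1.05, 0.98, 1.02]        # hypothetical noisy observations of a true value 1.0
print(round(kalman_1d(zs, q=1e-4, r=0.1), 2))
```

Rerunning with a q far larger than the true process noise makes the filter chase measurement noise, while an overly small q makes it sluggish, which is exactly the mistuning effect the abstract investigates.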

14. Statistical Diagnosis Method of Conductor Motions in Superconducting Magnets to Predict their Quench Performance

CERN Document Server

Khomenko, B A; Rijllart, A; Sanfilippo, S; Siemko, A

2001-01-01

Premature training quenches are usually caused by the transient energy released within the magnet coil as it is energised. Two distinct varieties of disturbance exist, thought to be electrical and mechanical in origin. The first type of disturbance comes from non-uniform current distribution in superconducting cables, whereas the second usually originates from conductor motions or micro-fractures of insulating materials under the action of Lorentz forces. In general, all of these mechanical events produce a rapid variation of the voltages in the so-called quench antennas and across the magnet coil, called spikes. A statistical method to treat the spatial localisation and the time occurrence of spikes is presented. It allows identification of the mechanical weak points in the magnet without the need to increase the current to provoke a quench. Prediction of the quench level from detailed analysis of the spike statistics can be expected.

15. Performance comparison of multi-detector detection statistics in targeted compact binary coalescence GW search

OpenAIRE

Haris, K; Pai, Archana

2016-01-01

A global network of advanced interferometric gravitational wave (GW) detectors is expected to be online soon. Coherent observation of a GW from a distant compact binary coalescence (CBC) with a network of interferometers located on different continents gives crucial information about the source, such as its location and polarization. In this paper we compare different multi-detector network detection statistics for CBC searches. In maximum likelihood ratio (MLR) based detection appro...

16. Optimization of the gas turbine-modular helium reactor using statistical methods to maximize performance without compromising system design margins

International Nuclear Information System (INIS)

Lommers, L.J.; Parme, L.L.; Shenoy, A.S.

1995-07-01

This paper describes a statistical approach for determining the impact of system performance and design uncertainties on power plant performance. The objectives of this design approach are to ensure that adequate margin is provided, that excess margin is minimized, and that full advantage can be taken of unconsumed margin. It is applicable to any thermal system in which these factors are important. The method is demonstrated using the Gas Turbine Modular Helium Reactor as an example. The quantitative approach described allows the characterization of plant performance and the specification of the system design requirements necessary to achieve the desired performance with high confidence. Performance variations due to design evolution, in-service degradation, and basic performance uncertainties are considered. The impact of all performance variabilities is combined using Monte Carlo analysis to predict the range of expected operation.

17. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

Science.gov (United States)

Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

2015-09-01

Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
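A Weibull-type saccharification curve and its two parameters can be illustrated with a short fit. The time points and parameter values below are synthetic, not data from the paper; the curve form y(t) = 1 − exp(−(t/λ)^n) is the standard Weibull cumulative form the abstract refers to:

```python
import numpy as np

# Synthetic saccharification time course generated from a Weibull curve
# (lambda_true = 24 h, n_true = 0.8), where y(t) is fractional conversion:
# y(t) = 1 - exp(-(t/lam)**n). Values are illustrative, not experimental.
lam_true, n_true = 24.0, 0.8
t = np.array([2.0, 4.0, 8.0, 12.0, 24.0, 48.0, 72.0])   # hours
y = 1.0 - np.exp(-(t / lam_true) ** n_true)

# Weibull-plot linearization: ln(-ln(1-y)) = n*ln(t) - n*ln(lambda),
# so a straight-line fit recovers both parameters at once.
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - y)), 1)
n_hat = slope
lam_hat = np.exp(-intercept / slope)
print(f"n = {n_hat:.3f}, lambda = {lam_hat:.1f} h")
```

Here λ (the characteristic time) is the time at which conversion reaches 1 − e⁻¹ ≈ 63%, which is why it summarizes overall saccharification performance.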

18. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

Science.gov (United States)

Breunig, Nancy A.

Despite increasing criticism of statistical significance testing by researchers, particularly following the publication of the 1994 American Psychological Association style manual, statistical significance test results remain popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

19. An inferential and descriptive statistical examination of the relationship between cumulative work metrics and injury in Major League Baseball pitchers.

Science.gov (United States)

Karakolis, Thomas; Bhan, Shivam; Crotin, Ryan L

2013-08-01

In Major League Baseball (MLB), games pitched, total innings pitched, total pitches thrown, innings pitched per game, and pitches thrown per game are used to measure cumulative work. Often, pitchers are allocated limits based on pitches thrown per game and total innings pitched in a season in an attempt to prevent future injuries. To date, the efficacy of these cumulative work metrics in predicting injuries remains in question. It was hypothesized that the cumulative work metrics would be significant predictors of future injury in MLB pitchers. Correlations between cumulative work for pitchers during 2002-07 and injury days in the following seasons were examined using regression analyses to test this hypothesis. Each metric was then "binned" into smaller cohorts to examine trends in the associated risk of injury for each cohort. During the study time period, 27% of pitchers were injured after a season in which they pitched. Although some interesting trends were noticed during the binning process, the regression analyses found that no cumulative work metric was a significant predictor of future injury. It was concluded that management of a pitcher's playing schedule based on these cumulative work metrics alone cannot be an effective means of preventing injury. These findings indicate that an integrated approach to injury prevention is required, likely involving advanced cumulative work metrics and biomechanical assessment.
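The kind of negative regression finding reported here can be sketched with synthetic data. The workload range and injury-day distribution below are invented; the point is only to show how a near-zero correlation between a workload metric and subsequent injury days looks numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pitcher-seasons: total pitches thrown vs. injury days in
# the following season. The injury days are generated independently of
# workload, mimicking a metric with no predictive value.
pitches = rng.uniform(1000, 3500, 200)
injury_days = rng.exponential(20.0, 200)   # unrelated to workload

r = np.corrcoef(pitches, injury_days)[0, 1]
r2 = r ** 2
print(f"r = {r:+.3f}, R^2 = {r2:.4f}")     # typically near zero
```

An R² this small means the metric explains almost none of the variance in injury days, which is the statistical content of "not a significant predictor".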

20. An Application of Interactive Computer Graphics to the Study of Inferential Statistics and the General Linear Model

Science.gov (United States)

1991-09-01

matrix, the Regression Sum of Squares (SSR) and Error Sum of Squares (SSE) are also displayed as a percentage of the Total Sum of Squares (SSTO)... vector when the student compares the SSR to the SSE. In addition to the plot, the actual values of SSR, SSE, and SSTO are also provided. Figure 3 gives the... (figure: projection of Y onto the estimation space and the error space, with SSR, SSE, and SSTO indicated)
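The sum-of-squares decomposition behind this display (SSTO = SSR + SSE, with SSR the squared length of the projection of Y onto the estimation space and SSE the squared length of the residual) can be verified numerically. The data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic simple-linear-regression data.
x = np.linspace(0.0, 10.0, 30)
y = 2.0 + 1.5 * x + rng.normal(0.0, 1.0, x.size)

b1, b0 = np.polyfit(x, y, 1)     # slope, intercept
y_hat = b0 + b1 * x              # projection of y onto the estimation space

ssto = np.sum((y - y.mean()) ** 2)      # total sum of squares
ssr  = np.sum((y_hat - y.mean()) ** 2)  # regression sum of squares
sse  = np.sum((y - y_hat) ** 2)         # error sum of squares

print(f"SSR = {ssr:.1f} ({100 * ssr / ssto:.1f}% of SSTO), SSE = {sse:.1f}")
```

The percentages of SSTO shown here are exactly what the interactive display reports; SSR/SSTO is the familiar R².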

1. Using the Coefficient of Confidence to Make the Philosophical Switch from a Posteriori to a Priori Inferential Statistics

Science.gov (United States)

Trafimow, David

2017-01-01

There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…

2. A study of the feasibility of statistical analysis of airport performance simulation

Science.gov (United States)

Myers, R. H.

1982-01-01

The feasibility of conducting a statistical analysis of simulation experiments to study airport capacity is investigated. First, the form of the distribution of airport capacity is studied. Since the distribution is non-Gaussian, it is important to determine the effect of this distribution on standard analysis of variance techniques and power calculations. Next, power computations are made in order to determine how economical simulation experiments would be if designed to detect capacity changes from condition to condition. Many of the conclusions drawn result from Monte Carlo techniques.
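A Monte Carlo power computation of this kind can be sketched as follows. The skewed (gamma) capacity distribution, sample sizes, and effect size are illustrative assumptions, not values from the study; the critical value 2.0 is an approximation to the two-sided 5% t threshold at these degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(3)

def welch_t(a, b):
    """Welch two-sample t statistic (unequal variances)."""
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (b.mean() - a.mean()) / np.sqrt(va + vb)

def power(shift, n_runs=30, trials=2000, crit=2.0):
    """Fraction of simulated experiments that detect a capacity shift
    between two conditions with non-Gaussian (gamma) capacity."""
    hits = sum(abs(welch_t(rng.gamma(4.0, 5.0, n_runs),
                           rng.gamma(4.0, 5.0, n_runs) + shift)) > crit
               for _ in range(trials))
    return hits / trials

pw = power(5.0)
print(f"estimated power for a capacity shift of 5: {pw:.2f}")
```

Repeating this over candidate run counts shows how many simulation replications are needed before a given capacity change is reliably detectable, which is the economy question the abstract raises.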

3. Evaluation of Bending Strength of Carburized Gears Based on Inferential Identification of Principal Surface Layer Defects

Science.gov (United States)

Masuyama, Tomoya; Inoue, Katsumi; Yamanaka, Masashi; Kitamura, Kenichi; Saito, Tomoyuki

The high load capacity of carburized gears originates mainly from the hardened layer and the induced residual stress. On the other hand, surface decarburization, which causes a nonmartensitic layer, and inclusions such as oxides and segregation act as latent defects that considerably reduce fatigue strength. In this connection, the authors have proposed a strength-evaluation formula that separately quantifies the influence of each defect. However, which principal defect limits the strength of gears with several different defects remained unclarified. This study presents a method for inferential identification of principal defects based on test results of carburized gears made of SCM420 clean steel, gears with both an artificial notch and a nonmartensitic layer at the tooth fillet, and so forth. It clarifies practical uses of the presented methods and shows that the strength of carburized gears can be evaluated by focusing on the principal defect size.

4. Assessment and Certification of Neonatal Incubator Sensors through an Inferential Neural Network

Directory of Open Access Journals (Sweden)

José Medeiros de Araújo

2013-11-01

Full Text Available Measurement and diagnostic systems based on electronic sensors have become increasingly essential in the standardization of hospital equipment. The technical standard IEC (International Electrotechnical Commission) 60601-2-19 establishes requirements for neonatal incubators and specifies the calibration procedure and validation tests for such devices using sensor systems. This paper proposes a new procedure based on an inferential neural network to evaluate and calibrate a neonatal incubator. The proposal presents significant advantages over the standard calibration process: the number of sensors is drastically reduced, and it runs with the incubator under operation. Since the sensors used in the new calibration process are already installed in the commercial incubator, no additional hardware is necessary, and the calibration necessity can be diagnosed in real time without the presence of technical professionals in the neonatal intensive care unit (NICU). Experimental tests involving the aforementioned calibration system are carried out in a commercial incubator in order to validate the proposal.

5. On the use of recognition in inferential decision making: An overview of the debate

Directory of Open Access Journals (Sweden)

Rudiger F. Pohl

2011-07-01

Full Text Available I describe and discuss the sometimes heated controversy surrounding the recognition heuristic (RH) as a model of inferential decision making. After briefly recapitulating the history of the RH up to its current version, I critically evaluate several specific assumptions and predictions of the RH and its surrounding framework: recognition as a memory-based process; the RH as a cognitive process model; proper conditions of testing the RH; measures of using the RH; reasons for not using the RH; the RH as a non-compensatory strategy; evidence for a less-is-more effect (LIME); and the RH as part of the toolbox. The collection of these controversial issues may help to better understand the debate, to further sharpen the RH theory, and to develop ideas for future research.

6. Development of 4S and related technologies. (3) Statistical evaluation of safety performance of 4S on ULOF event

International Nuclear Information System (INIS)

Ishii, Kyoko; Matsumiya, Hisato; Horie, Hideki; Miyagi, Kazumi

2009-01-01

The purpose of this work is to evaluate quantitatively and statistically the safety performance of the Super-Safe, Small, and Simple reactor (4S) by analyzing it with the ARGO code, a plant dynamics code for a sodium-cooled fast reactor. In this evaluation, an Anticipated Transient Without Scram (ATWS) is assumed, and an Unprotected Loss of Flow (ULOF) event is selected as a typical ATWS case. After a metric concerned with the safety design is defined as a performance factor, a Phenomena Identification Ranking Table (PIRT) is produced in order to select the plausible phenomena that affect the metric. A sensitivity analysis is then performed for the parameters related to the selected plausible phenomena. Finally, the metric is evaluated with statistical methods to determine whether it satisfies the given safety acceptance criteria. The result is as follows: the Cumulative Damage Fraction (CDF) for the cladding is defined as the metric, and the statistical estimate of the one-sided upper tolerance limit of 95 percent probability at a 95 percent confidence level for the CDF is within the safety acceptance criterion, CDF < 0.1. The result shows that the 4S safety performance is acceptable in the ULOF event. (author)
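The one-sided 95/95 upper tolerance limit used here can be illustrated with the standard nonparametric order-statistic argument: with n independent code runs, the sample maximum bounds the 95th percentile with confidence 1 − 0.95ⁿ, so 59 runs suffice (the familiar Wilks criterion). The beta-distributed CDF samples below are an arbitrary synthetic stand-in:

```python
import numpy as np

# Smallest n with 1 - 0.95**n >= 0.95 (nonparametric 95/95, first order).
n = 1
while 1.0 - 0.95 ** n < 0.95:
    n += 1
print("runs needed:", n)   # 59

# Illustration with synthetic cumulative-damage-fraction samples:
rng = np.random.default_rng(4)
cdf_samples = rng.beta(2.0, 50.0, n)   # hypothetical CDF values, all << 0.1
upper_95_95 = cdf_samples.max()        # 95/95 upper tolerance bound
print(f"95/95 upper bound on CDF: {upper_95_95:.4f}")
```

If this bound falls below the acceptance criterion (CDF < 0.1 in the study), the metric passes with 95% probability at 95% confidence.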

7. Analytical review based on statistics on good and poor financial performance of LPD in Bangli regency.

Science.gov (United States)

Yasa, I. B. A.; Parnata, I. K.; Susilawati, N. L. N. A. S.

2018-01-01

This study aims to apply an analytical review model to analyze the influence of GCG, accounting conservatism, financial distress models and company size on the good and poor financial performance of LPDs in Bangli Regency. Ordinal regression analysis is used to perform the analytical review, so that the influence of and relationships between variables to be considered for further audit can be obtained. Respondents in this study were the 159 LPDs in Bangli Regency, of which 100 were randomly selected as samples. The test results found that GCG and company size have a significant effect on both good and poor financial performance, while the conservatism and financial distress models have no significant effect. The four variables together account for 58.8% of overall financial performance, while the remaining 41.2% is influenced by other variables. Size, FDM and accounting conservatism are the variables recommended for further audit.

8. Statistical techniques for automating the detection of anomalous performance in rotating machinery

International Nuclear Information System (INIS)

Piety, K.R.; Magette, T.E.

1979-01-01

The level of technology utilized in automated systems that monitor industrial rotating equipment and the potential of alternative surveillance methods are assessed. It is concluded that changes in surveillance methodology would upgrade ongoing programs and yet still be practical for implementation. An improved anomaly recognition methodology is formulated and implemented on a minicomputer system. The effectiveness of the monitoring system was evaluated in laboratory tests on a small rotor assembly, using vibrational signals from both displacement probes and accelerometers. Time and frequency domain descriptors are selected to compose an overall signature that characterizes the monitored equipment. Limits for normal operation of the rotor assembly are established automatically during an initial learning period. Thereafter, anomaly detection is accomplished by applying an approximate statistical test to each signature descriptor. As demonstrated over months of testing, this monitoring system is capable of detecting anomalous conditions while exhibiting a false alarm rate below 0.5%.
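The learn-then-test scheme described here can be sketched minimally: limits for each signature descriptor are learned from a baseline period, then new signatures are flagged when any descriptor leaves its learned band. The two descriptors and all values below are synthetic placeholders, not actual vibration features:

```python
import numpy as np

rng = np.random.default_rng(5)

# Baseline "learning period": 500 signatures of two hypothetical
# descriptors (e.g. an RMS level and a spectral band power).
baseline = rng.normal(loc=[1.0, 0.3], scale=[0.05, 0.02], size=(500, 2))
mu = baseline.mean(axis=0)
sigma = baseline.std(axis=0, ddof=1)

def anomalous(signature, k=4.0):
    """Flag a signature if any descriptor deviates more than k sigma
    from its learned baseline (an approximate per-descriptor test)."""
    return bool(np.any(np.abs((signature - mu) / sigma) > k))

print(anomalous(np.array([1.01, 0.31])))   # within the learned band
print(anomalous(np.array([1.40, 0.31])))   # first descriptor elevated
```

A wide threshold like 4σ per descriptor is one simple way to keep the overall false-alarm rate low, in the spirit of the sub-0.5% rate reported.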

9. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

Science.gov (United States)

Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

2017-01-01

This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…

10. Cosmological Non-Gaussian Signature Detection: Comparing Performance of Different Statistical Tests

Directory of Open Access Journals (Sweden)

O. Forni

2005-09-01

Full Text Available Currently, it appears that the best method for non-Gaussianity detection in the cosmic microwave background (CMB) consists in calculating the kurtosis of the wavelet coefficients. We know that wavelet-kurtosis outperforms other methods such as the bispectrum, the genus, ridgelet-kurtosis, and curvelet-kurtosis on an empirical basis, but relatively few studies have compared other transform-based statistics, such as extreme values, or more recent tools such as higher criticism (HC), or proposed "best possible" choices for such statistics. In this paper, we consider two models for transform-domain coefficients: (a) a power-law model, which seems suited to the wavelet coefficients of simulated cosmic strings, and (b) a sparse mixture model, which seems suitable for the curvelet coefficients of filamentary structure. For model (a), if power-law behavior holds with finite 8th moment, excess kurtosis is an asymptotically optimal detector, but if the 8th moment is not finite, a test based on extreme values is asymptotically optimal. For model (b), if the transform coefficients are very sparse, a recent test, higher criticism, is an optimal detector, but if they are dense, kurtosis is an optimal detector. Empirical wavelet coefficients of simulated cosmic strings have power-law character with infinite 8th moment, while curvelet coefficients of the simulated cosmic strings are not very sparse. In all cases, excess kurtosis seems to be an effective test in moderate-resolution imagery.
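The excess-kurtosis detector at the center of this comparison is easy to demonstrate: Gaussian coefficients give excess kurtosis near zero, while heavy-tailed coefficients give a large positive value. The Laplace distribution below is a stand-in for heavy-tailed transform coefficients, not a model from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def excess_kurtosis(x):
    """Sample excess kurtosis: E[z^4] - 3, zero for a Gaussian."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

gauss = rng.normal(size=100_000)            # Gaussian coefficients
heavy = rng.laplace(size=100_000)           # heavy-tailed stand-in

kg = excess_kurtosis(gauss)
kh = excess_kurtosis(heavy)
print(f"gaussian: {kg:+.3f}, heavy-tailed: {kh:+.3f}")
```

Thresholding this statistic is the wavelet-kurtosis test; its optimality, as the abstract notes, depends on the tail behavior (finiteness of the 8th moment) of the coefficient distribution.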

11. Fisher statistics for analysis of diffusion tensor directional information.

Science.gov (United States)

Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P

2012-04-30

A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample, and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and the associated p-value, giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups; in the hilus, however, significant differences were detected between groups, demonstrating the utility of this framework for statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
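The basic descriptive quantities of this framework, the mean direction vector and the mean resultant length, can be computed directly from a set of unit vectors. The synthetic "principal diffusion directions" below stand in for ROI tensor directions and are not data from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_direction(vectors):
    """Mean direction and mean resultant length of a set of direction
    vectors (normalized to unit length). A mean resultant length near 1
    indicates tightly clustered directions."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    resultant = v.sum(axis=0)
    R = np.linalg.norm(resultant)
    return resultant / R, R / len(v)

# Synthetic directions scattered around the z-axis.
base = np.array([0.0, 0.0, 1.0])
samples = base + rng.normal(0.0, 0.1, size=(50, 3))

mean_dir, rbar = mean_direction(samples)
print("mean direction:", np.round(mean_dir, 3), " mean resultant:", round(rbar, 3))
```

Inferential steps such as Watson's F-test then compare these resultant lengths between groups to ask whether two sets of directions share a common mean orientation.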

12. Applied multivariate statistical analysis

CERN Document Server

Härdle, Wolfgang Karl

2015-01-01

Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

13. Relationship between school culture and students\\' performance in ...

African Journals Online (AJOL)

... a longer history of offering French subject characterized by high expectations for and recognition of academic and co-curricula achievement, parental involvement, ... standard deviations) and inferential statistics (Pearson's product moment ...

14. Effects of Working Memory Capacity and Content Familiarity on Literal and Inferential Comprehension in L2 Reading

Science.gov (United States)

Alptekin, Cem; Ercetin, Gulcan

2011-01-01

This study examines the effects of working memory capacity and content familiarity on literal and inferential comprehension in second language (L2) reading. Participants were 62 Turkish university students with an advanced English proficiency level. Working memory capacity was measured through a computerized version of a reading span test, whereas…

15. Statistical properties of indicators of first-year performance at university

African Journals Online (AJOL)

and tails of bivariate distributions composed of university average performance and a school .... Superimposed on the probability histogram in Figure 1 is this density estimate (solid line). ..... (1999b) was adjusted by adding a small ... average FYWUM is larger than 50% for the first time, and the Grade 12 average mark is.

16. Do Different Mental Models Influence Cybersecurity Behavior? Evaluations via Statistical Reasoning Performance

Directory of Open Access Journals (Sweden)

Gary L. Brase

2017-11-01

Full Text Available Cybersecurity research often describes people as understanding internet security in terms of metaphorical mental models (e.g., disease risk, physical security risk, or criminal behavior risk). However, little research has directly evaluated if this is an accurate or productive framework. To assess this question, two experiments asked participants to respond to a statistical reasoning task framed in one of four different contexts (cybersecurity, plus the above alternative models). Each context was also presented using either percentages or natural frequencies, and these tasks were followed by a behavioral likelihood rating. As in previous research, consistent use of natural frequencies promoted correct Bayesian reasoning. There was little indication, however, that any of the alternative mental models generated consistently better understanding or reasoning over the actual cybersecurity context. There was some evidence that different models had some effects on patterns of responses, including the behavioral likelihood ratings, but these effects were small, as compared to the effect of the numerical format manipulation. This points to a need to improve the content of actual internet security warnings, rather than working to change the models users have of warnings.

17. Do Different Mental Models Influence Cybersecurity Behavior? Evaluations via Statistical Reasoning Performance.

Science.gov (United States)

Brase, Gary L; Vasserman, Eugene Y; Hsu, William

2017-01-01

Cybersecurity research often describes people as understanding internet security in terms of metaphorical mental models (e.g., disease risk, physical security risk, or criminal behavior risk). However, little research has directly evaluated if this is an accurate or productive framework. To assess this question, two experiments asked participants to respond to a statistical reasoning task framed in one of four different contexts (cybersecurity, plus the above alternative models). Each context was also presented using either percentages or natural frequencies, and these tasks were followed by a behavioral likelihood rating. As in previous research, consistent use of natural frequencies promoted correct Bayesian reasoning. There was little indication, however, that any of the alternative mental models generated consistently better understanding or reasoning over the actual cybersecurity context. There was some evidence that different models had some effects on patterns of responses, including the behavioral likelihood ratings, but these effects were small, as compared to the effect of the numerical format manipulation. This points to a need to improve the content of actual internet security warnings, rather than working to change the models users have of warnings.
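The natural-frequency framing that promoted correct Bayesian reasoning in these experiments can be shown side by side with the percentage framing. The email-filtering numbers below are invented for illustration, not the stimuli from the paper:

```python
# Natural-frequency framing: out of 1000 emails, 10 are phishing; a
# filter flags 9 of the 10 phishing emails and also flags 99 of the 990
# legitimate ones. What fraction of flagged emails are phishing?
flagged_phishing = 9
flagged_legit = 99
p_phishing_given_flag = flagged_phishing / (flagged_phishing + flagged_legit)
print(f"P(phishing | flagged) = {p_phishing_given_flag:.3f}")  # 9/108 ≈ 0.083

# The same answer via Bayes' rule in percentage terms, which people
# find much harder to reason about:
prior, sensitivity, false_pos = 0.01, 0.9, 0.1
posterior = (prior * sensitivity
             / (prior * sensitivity + (1 - prior) * false_pos))
assert abs(posterior - p_phishing_given_flag) < 1e-12
```

The frequency version reduces the Bayesian update to a single ratio of counts, which is the mechanism usually credited for the format effect.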

18. Statistical physics of fracture: scientific discovery through high-performance computing

International Nuclear Information System (INIS)

Kumar, Phani; Nukala, V V; Simunovic, Srdan; Mills, Richard T

2006-01-01

The paper presents state-of-the-art algorithmic developments for simulating the fracture of disordered quasi-brittle materials using discrete lattice systems. Large scale simulations are often required to obtain accurate scaling laws; however, due to computational complexity, simulations using the traditional algorithms were limited to small system sizes. We have developed two algorithms: a multiple sparse Cholesky downdating scheme for simulating 2D random fuse model systems, and a block-circulant preconditioner for simulating 3D random fuse model systems. Using these algorithms, we were able to simulate fracture of the largest ever lattice system sizes (L = 1024 in 2D, and L = 64 in 3D) with extensive statistical sampling. Our recent simulations on 1024 processors of Cray-XT3 and IBM Blue-Gene/L have further enabled us to explore fracture of 3D lattice systems of size L = 200, which is a significant computational achievement. These largest ever numerical simulations have enhanced our understanding of the physics of fracture; in particular, we analyze damage localization and its deviation from percolation behavior, scaling laws for damage density, universality of the fracture strength distribution, the size effect on the mean fracture strength, and finally the scaling of crack surface roughness.

19. Clues as information, the semiotic gap, and inferential investigative processes, or making a (very small) contribution to the new discipline, Forensic Semiotics

DEFF Research Database (Denmark)

Sørensen, Bent; Thellefsen, Torkild Leo; Thellefsen, Martin Muderspach

2017-01-01

In this article, we try to contribute to the new discipline Forensic Semiotics – a discipline introduced by the Canadian polymath Marcel Danesi. We focus on clues as information and criminal investigative processes as inferential. These inferential (and Peircean) processes have a certain complexity...

20. Noisy EEG signals classification based on entropy metrics. Performance assessment using first and second generation statistics.

Science.gov (United States)

Cuesta-Frau, David; Miró-Martínez, Pau; Jordán Núñez, Jorge; Oltra-Crespo, Sandra; Molina Picó, Antonio

2017-08-01

This paper evaluates the performance of first generation entropy metrics, featuring the well known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and of what can be considered an evolution of these, Fuzzy Entropy (FuzzyEn), in the electroencephalogram (EEG) signal classification context. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records, and a realistic range of amplitudes for interfering artifacts, this work optimises and assesses the robustness of these metrics against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing best, and that noise and muscular artifacts are the most confounding factors. By contrast, there is wide variability as regards initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts. Copyright © 2017 Elsevier Ltd. All rights reserved.
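Sample Entropy, one of the metrics compared here, can be sketched compactly: count template matches of length m and m+1 under a Chebyshev tolerance r and return −ln of their ratio. This is a straightforward (not speed-optimised) implementation; low values indicate regular signals, high values irregular ones:

```python
import numpy as np

def sampen(x, m=2, r_factor=0.2):
    """Sample Entropy SampEn(m, r) with r = r_factor * std(x).
    Counts template matches of length m and m+1 (Chebyshev distance,
    self-matches excluded) and returns -ln(A/B)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    N = len(x)

    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        c = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d <= r)
        return c

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(8)
regular = np.sin(np.linspace(0.0, 40.0 * np.pi, 2000))   # predictable
noisy = rng.normal(size=2000)                            # unpredictable

se_sine, se_noise = sampen(regular), sampen(noisy)
print(f"SampEn(sine) = {se_sine:.3f}, SampEn(noise) = {se_noise:.3f}")
```

The sensitivity of such estimates to the initialization parameters m and r is exactly the variability issue the abstract reports.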

1. Introduction to statistics using interactive MM*Stat elements

CERN Document Server

Härdle, Wolfgang Karl; Rönz, Bernd

2015-01-01

MM*Stat, together with its enhanced online version with interactive examples, offers a flexible tool that facilitates the teaching of basic statistics. It covers all the topics found in introductory descriptive statistics courses, including simple linear regression and time series analysis, the fundamentals of inferential statistics (probability theory, random sampling and estimation theory), and inferential statistics itself (confidence intervals, testing). MM*Stat is also designed to help students rework class material independently and to promote comprehension with the help of additional examples. Each chapter starts with the necessary theoretical background, which is followed by a variety of examples. The core examples are based on the content of the respective chapter, while the advanced examples, designed to deepen students’ knowledge, also draw on information and material from previous chapters. The enhanced online version helps students grasp the complexity and the practical relevance of statistical...

2. Nuclear power plant performance statistics. Comparison with fossil-fired units

International Nuclear Information System (INIS)

Tabet, C.; Laue, H.J.; Qureshi, A.; Skjoeldebrand, R.; White, D.

1983-01-01

The joint UNIPEDE/World Energy Conference Committee on Availability of Thermal Generating Plants has a mandate to study the availability of thermal plants and the different factors that influence it. This has led to the collection and publication at the Congress of the World Energy Conference (WEC) every third year of availability and unavailability factors to be used in systems reliability studies and operations and maintenance planning. For nuclear power plants the joint UNIPEDE/WEC Committee relies on the IAEA to provide availability and unavailability data. The IAEA has published an annual report with operating data from nuclear plants in its Member States since 1971, covering in addition back data from the early 1960s. These reports have developed over the years and in the early 1970s the format was brought into close conformity with that used by UNIPEDE and WEC to report performance of fossil-fired generating plants. Since 1974 an annual analytical summary report has been prepared. In 1981 all information on operating experience with nuclear power plants was placed in a computer file for easier reference. The computerized Power Reactor Information System (PRIS) ensures that data are easily retrievable and at its present level it remains compatible with various national systems. The objectives for the IAEA data collection and evaluation have developed significantly since 1970. At first, the IAEA primarily wanted to enable the individual power plant operator to compare the performance of his own plant with that of others of the same type; when enough data had been collected, they provided the basis for assessment of the fundamental performance parameters used in economic project studies; now, the data base merits being used in setting availability objectives for power plant operations. (author)

3. Statistical multi-model approach for performance assessment of cooling tower

International Nuclear Information System (INIS)

Pan, Tian-Hong; Shieh, Shyan-Shu; Jang, Shi-Shang; Tseng, Wen-Hung; Wu, Chan-Wei; Ou, Jenq-Jang

2011-01-01

This paper presents a data-driven, model-based assessment strategy to investigate the performance of a cooling tower. In order to achieve this objective, the operations of a cooling tower are first characterized using a data-driven multiple-model method, which represents the system as a set of local models in the form of linear equations. A fuzzy c-means clustering algorithm is used to classify operating data into several groups from which the local models are built. The developed models are then applied to predict the performance of the system based on design input parameters provided by the manufacturer. The tower characteristics are also investigated using the proposed models via the effects of the water/air flow ratio. The predicted results agree well with the tower characteristics calculated from actual measured operating data from an industrial plant. By comparison with the design characteristic curve provided by the manufacturer, the effectiveness of the cooling tower can then be obtained. A case study conducted in a commercial plant demonstrates the validity of the proposed approach. It should be noted that this is the first attempt to assess the deviation of cooling efficiency from the original design value using operating data for an industrial-scale process. Moreover, the evaluation need not interrupt the normal operation of the cooling tower, which should be of particular interest in industrial applications.
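The clustering step that partitions operating data into groups for local models can be sketched with a minimal fuzzy c-means implementation. The two-regime "operating data" below are synthetic, and this bare-bones version (fixed iteration count, no convergence check) is only an illustration of the algorithm, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(9)

def fuzzy_cmeans(X, c, m=2.0, iters=100):
    """Minimal fuzzy c-means: returns cluster centers and the soft
    membership matrix U (rows sum to 1), from which local linear
    models could then be fitted per cluster."""
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Synthetic two-regime operating data (e.g. flow ratio vs. load).
X = np.vstack([rng.normal([1.0, 2.0], 0.1, (100, 2)),
               rng.normal([3.0, 5.0], 0.1, (100, 2))])
centers, U = fuzzy_cmeans(X, c=2)
print(np.round(centers, 2))
```

Each cluster then gets its own local linear model, and the soft memberships blend the local predictions into one multiple-model estimate.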

4. Box-Behnken statistical design to optimize thermal performance of energy storage systems

Science.gov (United States)

2018-05-01

Latent heat thermal storage (LHTS) is a technology that can help to reduce energy consumption for cooling applications, where the cold is stored in phase change materials (PCMs). In the present study a comprehensive theoretical and experimental investigation is performed on an LHTS system containing RT25 as the PCM. Process optimization of the experimental conditions (inlet air temperature and velocity, and number of slabs) was carried out by means of the Box-Behnken design (BBD) of response surface methodology (RSM). Two parameters (cooling time and COP value) were chosen as the responses. Both responses were significantly influenced by the combined effect of inlet air temperature with velocity and number of slabs. Simultaneous optimization was performed on the basis of the desirability function to determine the optimal conditions for the cooling time and COP value. The maximum cooling time (186 min) and COP value (6.04) were found at the optimum process conditions, i.e. an inlet temperature of 32.5, an air velocity of 1.98 and a slab number of 7.
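A Box-Behnken design for three factors, as used here, places runs at the midpoints of the cube edges (±1 on each pair of factors, the third at 0) plus replicated center points. A coded design matrix can be generated generically; the three center points below are a common but arbitrary choice:

```python
import numpy as np
from itertools import combinations

def box_behnken(k, n_center=3):
    """Coded Box-Behnken design for k factors: for each pair of factors,
    all four (+/-1, +/-1) combinations with the remaining factors at 0,
    plus n_center replicated center points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k] * n_center
    return np.array(runs)

# Three factors (inlet temperature, air velocity, slab count) give
# 12 edge runs plus the center points:
design = box_behnken(3)
print(design.shape)   # (15, 3)
```

The coded levels −1/0/+1 are then mapped to the physical factor ranges, and a quadratic response-surface model is fitted to the measured responses (cooling time and COP) before desirability-based optimization.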

5. Closed loop statistical performance analysis of N-K knock controllers

Science.gov (United States)

Peyton Jones, James C.; Shayestehmanesh, Saeed; Frey, Jesse

2017-09-01

The closed loop performance of engine knock controllers cannot be rigorously assessed from single experiments or simulations because knock behaves as a random process, so the closed loop response is itself a sample from a random distribution. In this work a new method is proposed for computing the distributions and expected values of the closed loop response, both in steady state and in response to disturbances. The method takes as its inputs the control law and the knock propensity characteristic of the engine, which is mapped from open loop steady state tests. The method is applicable to the 'n-k' class of knock controllers, in which the control action is a function only of the number of cycles n since the last control move and the number k of knock events that have occurred in this time. A Cumulative Summation (CumSum) based controller falls within this category, and the method is used to investigate the performance of this controller in a deeper and more rigorous way than has previously been possible. The results are validated against extensive Monte Carlo simulations, which confirm both the validity of the method and its high computational efficiency.
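The 'n-k' CumSum controller class described above can be illustrated with a toy Monte Carlo simulation. Everything below (the knock-propensity map, target knock rate, thresholds and step size) is an invented stand-in for the engine characteristics the paper maps from open-loop tests:

```python
import random

def knock_prob(advance):
    # hypothetical knock-propensity map: more spark advance -> more knock
    return min(1.0, max(0.0, 0.01 + 0.03 * advance))

def simulate_cumsum_controller(cycles=20000, target=0.05, h=1.0, step=0.5, seed=1):
    """One realisation of a CumSum ('n-k') knock controller on a random knock process."""
    rng = random.Random(seed)
    advance, s, history = 5.0, 0.0, []
    for _ in range(cycles):
        knock = rng.random() < knock_prob(advance)
        s += (1.0 - target) if knock else -target   # CumSum of (knock - target rate)
        if s >= h:                                  # knocking too often: retard spark
            advance, s = advance - step, 0.0
        elif s <= -h:                               # long knock-free run: advance spark
            advance, s = advance + step, 0.0
        advance = min(10.0, max(0.0, advance))
        history.append(advance)
    return history

hist = simulate_cumsum_controller()
steady = hist[len(hist) // 2:]
mean_advance = sum(steady) / len(steady)
```

The paper's point is precisely that many such random realisations (or its analytic method) are needed before the expected advance and its distribution can be stated; a single trace like this one is only a sample.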

6. Box-Behnken statistical design to optimize thermal performance of energy storage systems

Science.gov (United States)

2017-11-01

Latent heat thermal storage (LHTS) is a technology that can help reduce energy consumption in cooling applications, where the cold is stored in phase change materials (PCMs). In the present study a comprehensive theoretical and experimental investigation is performed on an LHTS system containing RT25 as the PCM. Optimization of the experimental conditions (inlet air temperature, air velocity and number of slabs) was carried out by means of a Box-Behnken design (BBD) within response surface methodology (RSM). Two responses were chosen: cooling time and COP. Both responses were significantly influenced by the combined effect of inlet air temperature with velocity and number of slabs. Simultaneous optimization based on the desirability function was performed to determine the optimal conditions for cooling time and COP. The maximum cooling time (186 min) and COP (6.04) were found at the optimum process conditions: inlet temperature 32.5, air velocity 1.98 and slab number 7.

7. Practical Statistics

CERN Document Server

Lyons, L.

2016-01-01

Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

8. Design and performance characteristics of solar adsorption refrigeration system using parabolic trough collector: Experimental and statistical optimization technique

International Nuclear Information System (INIS)

Abu-Hamdeh, Nidal H.; Alnefaie, Khaled A.; Almitani, Khalid H.

2013-01-01

Highlights: • The success of using olive waste/methanol as an adsorbent/adsorbate pair. • The experimental gross cycle coefficient of performance obtained was COPa = 0.75. • Optimization showed that expanding the adsorbent mass within a certain range increases the COP. • The statistical optimization led to an optimum tank volume between 0.2 and 0.3 m³. • Increasing the collector area within a certain range increased the COP. - Abstract: The current work demonstrates a developed model of a solar adsorption refrigeration system with specific requirements and specifications. The scheme can be employed as a refrigerator and cooler unit suitable for remote areas. The unit runs on a parabolic trough solar collector (PTC) and uses olive waste as the adsorbent with methanol as the adsorbate. Cooling production, COP (coefficient of performance) and COPa (cycle gross coefficient of performance) were used to assess the system performance. The system's optimum design parameters were arrived at through statistical and experimental methods. The lowest temperature attained in the refrigerated space was 4 °C, at an ambient temperature of 27 °C. The temperature decreased steadily from 20:30 – when the actual cooling started – until it reached 4 °C at 01:30 the next day, after which it rose again. The highest COPa obtained was 0.75

9. Joint statistics of partial sums of ordered exponential variates and performance of GSC RAKE receivers over rayleigh fading channel

KAUST Repository

Nam, Sungsik

2011-08-01

Spread spectrum receivers with generalized selection combining (GSC) RAKE reception have been proposed and studied as alternatives to the two classical fundamental schemes, maximal ratio combining and selection combining, because the number of diversity paths increases with the transmission bandwidth. Previous work on the performance of GSC RAKE receivers based on the signal-to-noise ratio focused on developing methodologies to derive exact closed-form expressions for various performance measures. However, some open problems related to the performance evaluation of GSC RAKE receivers remain to be solved, such as an exact analysis of the capture probability and an exact assessment of the impact of self-interference. The major difficulty in these problems is deriving certain joint statistics of ordered exponential variates. With this motivation in mind, we capitalize in this paper on new order statistics results to derive exact closed-form expressions for the capture probability and outage probability of GSC RAKE receivers subject to self-interference over independent and identically distributed Rayleigh fading channels, and compare them with those of partial RAKE receivers. © 2011 IEEE.
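The GSC-versus-partial-RAKE comparison rests on order statistics of exponential variates, since per-path SNRs under Rayleigh fading are exponentially distributed. A Monte Carlo sketch of the outage-probability comparison, with illustrative parameters:

```python
import random

def outage_probs(L=5, Lc=2, gamma_th=1.0, mean_snr=1.0, trials=100000, seed=7):
    """Monte Carlo outage probabilities: GSC (best Lc of L paths) vs partial RAKE."""
    rng = random.Random(seed)
    out_gsc = out_prake = 0
    for _ in range(trials):
        snrs = [rng.expovariate(1.0 / mean_snr) for _ in range(L)]
        if sum(sorted(snrs, reverse=True)[:Lc]) < gamma_th:  # combine Lc strongest paths
            out_gsc += 1
        if sum(snrs[:Lc]) < gamma_th:                        # combine first Lc paths
            out_prake += 1
    return out_gsc / trials, out_prake / trials

p_gsc, p_prake = outage_probs()
```

With two combined unit-mean exponential paths and γ_th = 1, the partial-RAKE outage has the closed form 1 − 2e⁻¹ ≈ 0.264, which the simulation reproduces; GSC, combining the ordered (strongest) paths, is strictly better, which is the motivation for deriving its joint order statistics exactly.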

10. Statistical analyses of variability/reproducibility of environmentally assisted cyclic crack growth rate data utilizing JAERI Material Performance Database (JMPD)

International Nuclear Information System (INIS)

Tsuji, Hirokazu; Yokoyama, Norio; Nakajima, Hajime; Kondo, Tatsuo

1993-05-01

Statistical analyses were conducted using the cyclic crack growth rate data for pressure vessel steels stored in the JAERI Material Performance Database (JMPD), and comparisons were made between the variability and/or reproducibility of data obtained by ΔK-increasing and by ΔK-constant type tests. Based on the results of the statistical analyses, it was concluded that ΔK-constant type tests are generally superior to the commonly used ΔK-increasing type from the viewpoint of variability and/or reproducibility of the data. This tendency was more pronounced in tests conducted in simulated LWR primary coolants than in those conducted in air. (author)

11. Identification of robust statistical downscaling methods based on a comprehensive suite of performance metrics for South Korea

Science.gov (United States)

Eum, H. I.; Cannon, A. J.

2015-12-01

Climate models are a key tool for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, particularly for regions characterized by complex terrain such as the Korean Peninsula. A downscaling procedure is therefore essential for assessing regional impacts of climate change. Statistical downscaling methods are widely used, mainly because of their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis (CFSR) data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Using a split-sampling scheme, all methods are calibrated against observational station data for the 19 years from 1973 to 1991 and tested on the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distributions, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR, and all methods lead to large improvements across all performance metrics. When TOPSIS is applied to the seasonal performance metrics, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is treated as near-perfect climate data in climate studies. Therefore, the
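TOPSIS, used above to rank the downscaling methods, scores each alternative by its closeness to an ideal solution: vector-normalise the decision matrix, apply weights, locate the ideal and anti-ideal points, then take the ratio of distances. A compact sketch (the weights and criteria here are placeholders, not the study's performance metrics):

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS ranking: rows are alternatives, columns are criteria.

    benefit[j] is True when larger values of criterion j are better.
    """
    n_cri = len(matrix[0])
    # vector-normalise each criterion column, then weight
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_cri)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_cri)] for row in matrix]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos, d_neg = math.dist(row, ideal), math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))   # relative closeness in [0, 1]
    return scores
```

An alternative that dominates on every criterion scores exactly 1; the ranking of the downscaling methods in the study is the ordering of these closeness scores.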

12. The ‘39 steps’: an algorithm for performing statistical analysis of data on energy intake and expenditure

Directory of Open Access Journals (Sweden)

John R. Speakman

2013-03-01

The epidemics of obesity and diabetes have aroused great interest in the analysis of energy balance, with the use of organisms ranging from nematode worms to humans. Although generating energy-intake or -expenditure data is relatively straightforward, the most appropriate way to analyse the data has been an issue of contention for many decades. In the last few years, a consensus has been reached regarding the best methods for analysing such data. To facilitate using these best-practice methods, we present here an algorithm that provides a step-by-step guide for analysing energy-intake or -expenditure data. The algorithm can be used to analyse data from either humans or experimental animals, such as small mammals or invertebrates. It can be used in combination with any commercial statistics package; however, to assist with analysis, we have included detailed instructions for performing each step in three popular statistics packages (SPSS, MINITAB and R). We also provide interpretations of the results obtained at each step. We hope that this algorithm will assist in the statistically appropriate analysis of such data, a field in which there has been much confusion and some controversy.
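A central recommendation of this consensus is to treat body mass as a covariate in a regression/ANCOVA model of energy expenditure, rather than dividing expenditure by mass. A minimal sketch on simulated data (the effect sizes, mass range and group labels are invented for illustration):

```python
import random

def ols(X, y):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                                  # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):                        # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

rng = random.Random(3)
X, y = [], []
for group in (0, 1):                                      # e.g. control vs treatment
    for _ in range(50):
        mass = rng.uniform(20, 40)                        # body mass covariate (invented units)
        ee = 5 + 0.6 * mass + 2.0 * group + rng.gauss(0, 0.5)  # synthetic energy expenditure
        X.append([1.0, mass, float(group)])
        y.append(ee)

beta = ols(X, y)   # [intercept, mass slope, mass-adjusted group effect]
```

The coefficient on the group indicator is the mass-adjusted treatment effect, which is what the algorithm's ANCOVA steps deliver; a per-gram ratio would conflate this effect with differences in body mass.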

13. Statistical Techniques For Real-time Anomaly Detection Using Spark Over Multi-source VMware Performance Data

Energy Technology Data Exchange (ETDEWEB)

Solaimani, Mohiuddin [Univ. of Texas-Dallas, Richardson, TX (United States); Iftekhar, Mohammed [Univ. of Texas-Dallas, Richardson, TX (United States); Khan, Latifur [Univ. of Texas-Dallas, Richardson, TX (United States); Thuraisingham, Bhavani [Univ. of Texas-Dallas, Richardson, TX (United States); Ingram, Joey Burton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

2015-09-01

Anomaly detection refers to the identification of an irregular or unusual pattern which deviates from what is standard, normal, or expected. Such deviating patterns typically correspond to samples of interest and are assigned different labels in different domains, such as outliers, anomalies, exceptions, or malware. Detecting anomalies in fast, voluminous streams of data is a formidable challenge. This paper presents a novel, generic, real-time distributed anomaly detection framework for heterogeneous streaming data where anomalies appear as a group. We have developed a distributed statistical approach to build a model and later use it to detect anomalies. As a case study, we investigate group anomaly detection for a VMware-based cloud data center, which maintains a large number of virtual machines (VMs). We have built our framework using Apache Spark to achieve higher throughput and lower data processing time on streaming data. We developed a window-based statistical anomaly detection technique to detect anomalies that appear sporadically. We then relaxed this constraint, with higher accuracy, by implementing a cluster-based technique to detect both sporadic and continuous anomalies. We conclude that our cluster-based technique outperforms other statistical techniques with higher accuracy and lower processing time.
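The window-based statistical technique mentioned above can be sketched as a trailing-window z-score detector; this is a generic single-machine illustration, not the authors' Spark implementation:

```python
import math
import random
from collections import deque

class WindowAnomalyDetector:
    """Flag a point more than `thresh` standard deviations from the trailing-window mean."""

    def __init__(self, window=30, thresh=3.0):
        self.buf = deque(maxlen=window)
        self.thresh = thresh

    def observe(self, x):
        flagged = False
        if len(self.buf) == self.buf.maxlen:     # only score once the window is full
            mu = sum(self.buf) / len(self.buf)
            var = sum((v - mu) ** 2 for v in self.buf) / (len(self.buf) - 1)
            sd = math.sqrt(var) or 1e-12
            flagged = abs(x - mu) / sd > self.thresh
        self.buf.append(x)
        return flagged

rng = random.Random(0)
det = WindowAnomalyDetector()
normal_flags = sum(det.observe(rng.gauss(0, 1)) for _ in range(200))  # baseline stream
spike_flag = det.observe(15.0)                                        # injected anomaly
```

In the paper's streaming setting, each Spark micro-batch would play the role of a window and the statistics would be computed per VM metric; the cluster-based variant replaces the single mean/variance with per-cluster models.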

14. Study designs, use of statistical tests, and statistical analysis software choice in 2015: Results from two Pakistani monthly Medline indexed journals.

Science.gov (United States)

Shaikh, Masood Ali

2017-09-01

Assessment of research articles in terms of the study designs used, statistical tests applied and statistical analysis programmes employed helps determine the research activity profile and trends in the country. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and the use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. The results indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were, respectively, the most common study design, inferential statistical analysis, and statistical analysis software programme. These results echo a previously published assessment of these two journals for the year 2014.

15. Pre-service primary school teachers’ knowledge of informal statistical inference

NARCIS (Netherlands)

de Vetten, Arjen; Schoonenboom, Judith; Keijzer, Ronald; van Oers, Bert

2018-01-01

The ability to reason inferentially is increasingly important in today's society. It is hypothesized here that engaging primary school students in informal statistical inference (ISI), defined as making generalizations without the use of formal statistical tests, will help them acquire the

16. Forecasting of a ground-coupled heat pump performance using neural networks with statistical data weighting pre-processing

Energy Technology Data Exchange (ETDEWEB)

Esen, Hikmet; Esen, Mehmet [Department of Mechanical Education, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey); Inalli, Mustafa [Department of Mechanical Engineering, Faculty of Engineering, Firat University, 23279 Elazig (Turkey); Sengur, Abdulkadir [Department of Electronic and Computer Science, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey)

2008-04-15

The objective of this work is to improve the performance of an artificial neural network (ANN) with a statistical weighted pre-processing (SWP) method in learning to predict ground-coupled heat pump (GCHP) system performance from a minimum data set. Experimental studies were completed to obtain training and test data. Air temperatures entering/leaving the condenser unit, water-antifreeze solution temperatures entering/leaving the horizontal ground heat exchangers, and ground temperatures (at 1 and 2 m) were used as the input layer, while the output is the coefficient of performance (COP) of the system. Statistical measures, such as the root-mean squared error (RMS), the coefficient of multiple determination (R²) and the coefficient of variation (cov), were used to compare predicted and actual values for model validation. For the SCG6 algorithm, the ANN-only structure gave RMS = 0.074, R² = 0.9999 and cov = 2.22, while the SWP-ANN structure gave RMS = 0.002, R² = 0.9999 and cov = 0.076. The simulation results show that SWP-based networks can be used as an alternative approach for these systems. Therefore, instead of the limited experimental data found in the literature, faster and simpler solutions are obtained using hybridized structures such as SWP-ANN. (author)
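The three validation statistics quoted above (RMS, R², cov) can be computed as below; the record does not spell out its exact formulas, so these are the common textbook definitions:

```python
import math

def fit_metrics(actual, predicted):
    """Root-mean-square error, coefficient of determination R^2,
    and coefficient of variation (per cent of the actual mean)."""
    n = len(actual)
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    rms = math.sqrt(ss_res / n)
    r2 = 1.0 - ss_res / ss_tot
    cov = 100.0 * rms / mean_a
    return rms, r2, cov
```

A perfect model gives RMS = 0, R² = 1 and cov = 0, which matches the paper's pattern of near-zero RMS/cov and R² ≈ 1 for the better (SWP-ANN) structure.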

17. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

Science.gov (United States)

Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

2018-03-01

Over the last decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, namely bagged CART, stochastic gradient boosting and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and support vector machine with linear kernel. To do so, each method was run ten times and three validation techniques were used to estimate the accuracy statistics: cross validation, independent validation and validation with the full training data set. Moreover, the statistical significance of differences between the classification methods was assessed using ANOVA and Tukey tests. In general, the results showed that random forest, by a marginal difference over bagged CART and stochastic gradient boosting, is the best performing method, although based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.

18. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics

Directory of Open Access Journals (Sweden)

Haejoon Jung

2018-01-01

As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for the future communication systems to alleviate the pressure of scarce spectrum resources. For this reason, in this paper, we consider multi-hop M2M communications, where a machine-type communication (MTC) device with the limited transmit power relays to help other devices using mmWave. To be specific, we focus on hop distance statistics and their impacts on system performances in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Different from microwave systems, in mmWave communications, wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aiming at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming the blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performances (e.g., outage probability, hop count, and transmit energy) of the mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.

19. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics.

Science.gov (United States)

Jung, Haejoon; Lee, In-Ho

2018-01-12

As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for the future communication systems to alleviate the pressure of scarce spectrum resources. For this reason, in this paper, we consider multi-hop M2M communications, where a machine-type communication (MTC) device with the limited transmit power relays to help other devices using mmWave. To be specific, we focus on hop distance statistics and their impacts on system performances in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Different from microwave systems, in mmWave communications, wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aiming at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming the blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performances (e.g., outage probability, hop count, and transmit energy) of the mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.
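The effect of blockage on per-hop progress described in this record can be illustrated with a small Monte Carlo model: candidate relays placed at random within radio range, each link of length d surviving blockage with probability e^(−βd) (a common exponential LOS model), and a routing rule that picks the farthest line-of-sight relay. All parameters are illustrative, not the paper's:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's algorithm for a Poisson-distributed relay count."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def mean_hop_distance(beta, density=1.0, radius=5.0, trials=20000, seed=9):
    """Farthest-LOS-neighbour routing; a link of length d stays LOS w.p. exp(-beta*d)."""
    rng = random.Random(seed)
    hops = []
    for _ in range(trials):
        n = poisson(rng, density * radius)           # candidate relays within range
        best = 0.0
        for _ in range(n):
            d = rng.uniform(0.0, radius)
            if rng.random() < math.exp(-beta * d):   # link survives blockage
                best = max(best, d)
        if best > 0.0:
            hops.append(best)
    return sum(hops) / len(hops)

no_blockage = mean_hop_distance(beta=0.0)   # blockage-free baseline statistics
blocked = mean_hop_distance(beta=0.5)       # mmWave-style blockage
```

Setting β = 0 recovers the blockage-free baseline the paper uses for comparison; positive β shortens the expected hop, which in turn inflates end-to-end hop count and transmit energy.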

20. Mediator effect of statistical process control between Total Quality Management (TQM) and business performance in Malaysian Automotive Industry

Science.gov (United States)

Ahmad, M. F.; Rasi, R. Z.; Zakuan, N.; Hisyamudin, M. N. N.

2015-12-01

In today's highly competitive market, Total Quality Management (TQM) is a vital management tool for ensuring that a company can succeed in its business. In order to survive in the global market, with intense competition amongst regions and enterprises, the adoption of tools and techniques is essential for improving business performance. Findings on the relationship between TQM and business performance have been consistent. However, only a few previous studies have examined the mediating effect of statistical process control (SPC) between TQM and business performance. A mediator is a third variable that changes the association between an independent variable and an outcome variable. This study proposes a TQM performance model with SPC as a mediator, estimated with structural equation modelling, which provides a more comprehensive model for developing countries, specifically Malaysia. A questionnaire was prepared and sent to 1500 companies from the automotive industry and related vendors in Malaysia, giving a 21.8 per cent response rate. The significant mediating effect found between TQM practices and business performance shows that SPC is an important tool and technique in TQM implementation. The results indicate that SPC partially mediates the relationship between TQM and business performance, with an indirect effect (IE) of 0.25, which can be categorised as a strong mediating effect.
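The reported mediation structure (TQM → SPC → business performance, indirect effect = a·b) can be sketched numerically. The data below are synthetic and the path values invented; the study itself estimated these paths with structural equation modelling on survey data:

```python
import random

def cov(u, v):
    """Sample covariance (cov(u, u) gives the variance)."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

def indirect_effect(x, m, y):
    """Indirect effect a*b: X -> M regression, then M -> Y controlling for X."""
    a = cov(m, x) / cov(x, x)                                   # path X -> M
    det = cov(m, m) * cov(x, x) - cov(m, x) ** 2
    b = (cov(y, m) * cov(x, x) - cov(y, x) * cov(m, x)) / det   # path M -> Y given X
    return a * b

rng = random.Random(5)
x = [rng.gauss(0, 1) for _ in range(2000)]                      # TQM practices (synthetic)
m = [0.6 * xi + rng.gauss(0, 0.5) for xi in x]                  # SPC adoption, true a = 0.6
y = [0.5 * mi + 0.3 * xi + rng.gauss(0, 0.5)                    # business performance
     for mi, xi in zip(m, x)]                                   # true b = 0.5, direct c' = 0.3

ie = indirect_effect(x, m, y)   # true indirect effect a*b = 0.30
```

Because the direct path c' is non-zero here, the mediation is partial, mirroring the study's conclusion that SPC only partially mediates the TQM-performance link.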

1. Understanding Statistics - Cancer Statistics

Science.gov (United States)

Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

2. Do narcissism and emotional intelligence win us friends? : modeling dynamics of peer popularity using inferential network analysis

OpenAIRE

Czarna, Anna; Leifeld, Philip; Śmieja-Nęcka, Magdalena; Dufner, Michael; Salovey, Peter

2016-01-01

This research investigated effects of narcissism and emotional intelligence (EI) on popularity in social networks. In a longitudinal field study, we examined the dynamics of popularity in 15 peer groups in two waves (N = 273). We measured narcissism, ability EI, and explicit and implicit self-esteem. In addition, we measured popularity at zero acquaintance and 3 months later. We analyzed the data using inferential network analysis (temporal exponential random graph modeling, TERGM) accounting...

3. Performance of statistical process control methods for regional surgical site infection surveillance: a 10-year multicentre pilot study.

Science.gov (United States)

Baker, Arthur W; Haridy, Salah; Salem, Joseph; Ilieş, Iulian; Ergai, Awatef O; Samareh, Aven; Andrianas, Nicholas; Benneyan, James C; Sexton, Daniel J; Anderson, Deverick J

2017-11-24

Traditional strategies for surveillance of surgical site infections (SSI) have multiple limitations, including delayed and incomplete outbreak detection. Statistical process control (SPC) methods address these deficiencies by combining longitudinal analysis with graphical presentation of data. We performed a pilot study within a large network of community hospitals to evaluate the performance of SPC methods for detecting SSI outbreaks. We applied conventional Shewhart and exponentially weighted moving average (EWMA) SPC charts to 10 previously investigated SSI outbreaks that occurred from 2003 to 2013. We compared the results of SPC surveillance to the results of traditional SSI surveillance methods. Then, we analysed the performance of modified SPC charts constructed with different outbreak detection rules, EWMA smoothing factors and baseline SSI rate calculations. Conventional Shewhart and EWMA SPC charts both detected 8 of the 10 SSI outbreaks analysed, in each case prior to the date of traditional detection. Among detected outbreaks, conventional Shewhart chart detection occurred a median of 12 months prior to outbreak onset and 22 months prior to traditional detection. Conventional EWMA chart detection occurred a median of 7 months prior to outbreak onset and 14 months prior to traditional detection. Modified Shewhart and EWMA charts additionally detected several outbreaks earlier than conventional SPC charts. Shewhart and EWMA charts had low false-positive rates when used to analyse separate control hospital SSI data. Our findings illustrate the potential usefulness and feasibility of real-time SPC surveillance of SSI to rapidly identify outbreaks and improve patient safety. Further study is needed to optimise SPC chart selection and calculation, statistical outbreak detection rules and the process for reacting to signals of potential outbreaks.
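The Shewhart and EWMA constructions used in this study can be sketched on a toy SSI rate series; the baseline data and control-limit width below are invented for illustration:

```python
def shewhart_limits(baseline, width=3.0):
    """Centre line and Shewhart control limits from a baseline (in-control) period."""
    n = len(baseline)
    mu = sum(baseline) / n
    sd = (sum((x - mu) ** 2 for x in baseline) / (n - 1)) ** 0.5
    return mu, mu - width * sd, mu + width * sd

def ewma(series, lam=0.2):
    """Exponentially weighted moving average of a rate series (lam = smoothing factor)."""
    z, out = series[0], []
    for x in series:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

# hypothetical monthly SSI counts per 100 procedures
baseline = [2.0, 3.0, 2.0, 4.0, 3.0, 2.0, 3.0, 4.0, 2.0, 3.0]
mu, lcl, ucl = shewhart_limits(baseline)
signals = [x > ucl for x in baseline + [9.0]]   # outbreak-level month appended
```

A Shewhart chart signals on a single point outside the limits; the EWMA chart, by smoothing, is more sensitive to small sustained shifts, which is why the paper evaluates both, with different smoothing factors and baseline calculations.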

4. A statistical approach to estimating effects of performance shaping factors on human error probabilities of soft controls

International Nuclear Information System (INIS)

Kim, Yochan; Park, Jinkyun; Jung, Wondea; Jang, Inseok; Hyun Seong, Poong

2015-01-01

Despite recent efforts toward data collection in support of human reliability analysis, there remains a lack of empirical basis for determining the effects of performance shaping factors (PSFs) on human error probabilities (HEPs). To enhance this empirical basis, a statistical methodology using logistic regression with stepwise variable selection was proposed, and the effects of the PSFs on HEPs related to soft controls were estimated with this methodology. For this estimation, more than 600 human error opportunities related to soft controls in a computerized control room were obtained through laboratory experiments. From the eight PSF surrogates and combinations of these variables, procedure quality, practice level, and operation type were identified as significant factors for screen switch and mode conversion errors. The contributions of these significant factors to HEPs were also estimated in a multiplicative form. The usefulness and limitations of the experimental data and the techniques employed are discussed herein, and we believe that the logistic regression and stepwise variable selection methods provide a way to estimate the effects of PSFs on HEPs in an objective manner. - Highlights: • It is necessary to develop an empirical basis for the effects of the PSFs on the HEPs. • A statistical method using a logistic regression and variable selection was proposed. • The effects of PSFs on the HEPs of soft controls were empirically investigated. • The significant factors were identified and their effects were estimated
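The core of the approach (logistic regression of error/no-error outcomes on PSF variables, with effects expressed as multipliers on the odds of error) can be sketched on synthetic data. The PSF, its effect size and the fitting details below are invented for illustration, and the stepwise selection step is omitted:

```python
import math
import random

def logistic_fit(X, y, lr=1.0, epochs=1500):
    """Logistic regression by batch gradient descent; each row of X starts with a 1."""
    n, k = len(X), len(X[0])
    beta = [0.0] * k
    for _ in range(epochs):
        grad = [0.0] * k
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(b * v for b, v in zip(beta, xi))))
            for j in range(k):
                grad[j] += (p - yi) * xi[j]
        beta = [b - lr * g / n for b, g in zip(beta, grad)]
    return beta

rng = random.Random(11)
X, y = [], []
for _ in range(800):                                 # simulated error opportunities
    poor_procedure = rng.random() < 0.5              # hypothetical PSF surrogate
    logit = -3.0 + 1.2 * poor_procedure              # true odds multiplier exp(1.2) ~ 3.3
    p = 1.0 / (1.0 + math.exp(-logit))
    X.append([1.0, float(poor_procedure)])
    y.append(1 if rng.random() < p else 0)

beta = logistic_fit(X, y)
odds_multiplier = math.exp(beta[1])   # multiplicative effect of the PSF on the odds of error
```

Exponentiating a fitted coefficient yields the multiplicative contribution of that PSF, which is the "multiplicative form" in which the paper reports the significant factors.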

5. Quality of reporting statistics in two Indian pharmacology journals.

Science.gov (United States)

2011-04-01

To evaluate the reporting of statistical methods in articles published in two Indian pharmacology journals. All original articles published since 2002 were downloaded from the websites of the journals (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)). These articles were evaluated for the appropriateness of their descriptive and inferential statistics. Descriptive statistics were evaluated on the basis of the reporting of the method of description and of central tendencies. Inferential statistics were evaluated on the basis of whether the assumptions of the statistical methods were fulfilled and the statistical tests were appropriate. Values are described as frequencies, percentages, and 95% confidence intervals (CI) around the percentages. Inappropriate descriptive statistics were observed in 150 (78.1%, 95% CI 71.7-83.3%) articles. The most common reason was the use of mean ± SEM in place of "mean (SD)" or "mean ± SD." The most common statistical method used was one-way ANOVA (58.4%). Information regarding checking of the assumptions of the statistical tests was mentioned in only two articles. An inappropriate statistical test was observed in 61 (31.7%, 95% CI 25.6-38.6%) articles, most commonly the use of a two-group test for three or more groups. Articles published in the two Indian pharmacology journals are not devoid of statistical errors.

6. Statistical Analysis of Research Data | Center for Cancer Research

Science.gov (United States)

Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018 from 9 a.m.-5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C on the Bethesda Campus. SARD is designed to provide an overview on the general principles of statistical analysis of research data.  The first day will feature univariate data analysis, including descriptive statistics, probability distributions, one- and two-sample inferential statistics.

7. Statistical Model and Performance Analysis of a Novel Multilevel Polarization Modulation in Local “Twisted” Fibers

Directory of Open Access Journals (Sweden)

Pierluigi Perrone

2017-01-01

Transmission demand continues to grow, and higher capacity optical communication systems are required to meet this ever-increasing need for communication services economically. This article expands and deepens the study of a novel optical communication system for high-capacity Local Area Networks (LANs), based on twisted optical fibers. The complete statistical behavior of this system is presented; the system is designed for more efficient use of the fiber single-channel capacity by adopting an unconventional multilevel polarization modulation (called "bands of polarization"). Starting from simulation results, a possible reference mathematical model is proposed. Finally, the system performance is analyzed in the presence of shot noise (coherent detection) or thermal noise (direct detection).

8. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

Science.gov (United States)

Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

2013-03-01

To evaluate the quality of data reporting and the statistical methods used in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a lower score; few studies applied appropriate statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.

9. Walking performance: correlation between energy cost of walking and walking participation. new statistical approach concerning outcome measurement.

Directory of Open Access Journals (Sweden)

Marco Franceschini

Full Text Available Walking ability, though important for quality of life and participation in social and economic activities, can be adversely affected by neurological disorders such as Spinal Cord Injury, Stroke, Multiple Sclerosis or Traumatic Brain Injury. The aim of this study is to evaluate whether the energy cost of walking (CW), in a mixed group of chronic patients with neurological diseases almost 6 months after discharge from rehabilitation wards, can predict walking performance and any walking restriction in community activities, as indicated by Walking Handicap Scale (WHS) categories. One hundred and seven subjects were included in the study: 31 suffering from Stroke, 26 from Spinal Cord Injury and 50 from Multiple Sclerosis. The multivariable binary logistic regression analysis produced a statistical model with good characteristics of fit and good predictability. This model generated a cut-off value of 0.40, which enabled us to classify 85.0% of cases correctly. Our research reveals that, in our subjects, CW is the only predictor of walking performance in the community, as indicated by the WHS score. We also identified a cut-off value of CW that distinguishes those who can walk in the community from those who cannot. In particular, these values could be used to predict the ability to walk in the community at discharge from the rehabilitation units, and to adjust the rehabilitative treatment to improve performance.

10. Adaptive statistical iterative reconstruction and Veo: assessment of image quality and diagnostic performance in CT colonography at various radiation doses.

Science.gov (United States)

Yoon, Min A; Kim, Se Hyung; Lee, Jeong Min; Woo, Hyoun Sik; Lee, Eun Sun; Ahn, Se Jin; Han, Joon Koo

2012-01-01

To evaluate the diagnostic performance of computed tomography (CT) colonography (CTC) reconstructed with different levels of adaptive statistical iterative reconstruction (ASiR, GE Healthcare) and Veo (model-based iterative reconstruction, GE Healthcare) at various tube currents in detection of polyps in porcine colon phantoms. Five porcine colon phantoms with 46 simulated polyps were scanned at different radiation doses (10, 30, and 50 mA s) and were reconstructed using filtered back projection (FBP), ASiR (20%, 40%, and 60%) and Veo. Eleven data sets for each phantom (10-mA s FBP, 10-mA s 20% ASiR, 10-mA s 40% ASiR, 10-mA s 60% ASiR, 10-mA s Veo, 30-mA s FBP, 30-mA s 20% ASiR, 30-mA s 40% ASiR, 30-mA s 60% ASiR, 30-mA s Veo, and 50-mA s FBP) yielded a total of 55 data sets. Polyp detection sensitivity and confidence level of 2 independent observers were evaluated with the McNemar test, the Fisher exact test, and receiver operating characteristic curve analysis. Comparative analyses of overall image quality score, measured image noise, and interpretation time were also performed. Per-polyp detection sensitivities and specificities were highest in 10-mA s Veo, 30-mA s FBP, 30-mA s 60% ASiR, and 50-mA s FBP (sensitivity, 100%; specificity, 100%). The area-under-the-curve values for the overall performance of each data set were also highest (1.000) at 50-mA s FBP, 30-mA s FBP, 30-mA s 60% ASiR, and 10-mA s Veo. Images reconstructed with ASiR showed statistically significant improvement in per-polyp detection sensitivity as the percentage level of ASiR increased (10-mA s FBP vs 10-mA s 20% ASiR, P = 0.011; 10-mA s FBP vs 10-mA s 40% ASiR, P = 0.000; 10-mA s FBP vs 10-mA s 60% ASiR, P = 0.000; 10-mA s 20% ASiR vs 40% ASiR, P = 0.034). Overall image quality score was highest at 30-mA s Veo and 50-mA s FBP. The quantitative measurement of the image noise was lowest at 30-mA s Veo and second lowest at 10-mA s Veo. There was a trend of decrease in time
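The McNemar test used above compares paired per-polyp detection sensitivities between two reconstruction settings. A minimal sketch of the exact (binomial) form of the test, using hypothetical discordant-pair counts rather than the study's data:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar test on discordant pair counts:
    b = polyps detected under setting A only, c = under setting B only.
    Under H0 (equal sensitivities), b ~ Binomial(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # two-sided p value: double the smaller tail, capped at 1
    tail = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical counts: 9 polyps found only by 60% ASiR, 1 only by FBP
p = mcnemar_exact(9, 1)  # ≈ 0.021, i.e. a significant paired difference
```

The exact form is preferred over the chi-square approximation when discordant counts are small, as they typically are with 46 polyps per phantom.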

11. Have Basic Mathematical Skills Grown Obsolete in the Computer Age: Assessing Basic Mathematical Skills and Forecasting Performance in a Business Statistics Course

Science.gov (United States)

Noser, Thomas C.; Tanner, John R.; Shah, Situl

2008-01-01

The purpose of this study was to measure the comprehension of basic mathematical skills of students enrolled in statistics classes at a large regional university, and to determine if the scores earned on a basic math skills test are useful in forecasting student performance in these statistics classes, and to determine if students' basic math…

12. Risk assessment of student performance in the International Foundations of Medicine Clinical Science Examination by the use of statistical modeling.

Science.gov (United States)

David, Michael C; Eley, Diann S; Schafer, Jennifer; Davies, Leo

2016-01-01

The primary aim of this study was to assess the predictive validity of cumulative grade point average (GPA) for performance in the International Foundations of Medicine (IFOM) Clinical Science Examination (CSE). A secondary aim was to develop a strategy for identifying students at risk of performing poorly in the IFOM CSE as determined by the National Board of Medical Examiners' International Standard of Competence. Final year medical students from an Australian university medical school took the IFOM CSE as a formative assessment. Measures included overall IFOM CSE score as the dependent variable, cumulative GPA as the predictor, and the factors age, gender, year of enrollment, international or domestic status of student, and language spoken at home as covariates. Multivariable linear regression was used to measure predictor and covariate effects. Optimal thresholds of risk assessment were based on receiver-operating characteristic (ROC) curves. Cumulative GPA (nonstandardized regression coefficient [B]: 81.83; 95% confidence interval [CI]: 68.13 to 95.53) and international status (B: -37.40; 95% CI: -57.85 to -16.96) from 427 students were found to be statistically significantly associated with IFOM CSE performance. Cumulative GPAs of 5.30 (area under ROC [AROC]: 0.77; 95% CI: 0.72 to 0.82) and 4.90 (AROC: 0.72; 95% CI: 0.66 to 0.78) were identified as thresholds of significant risk for domestic and international students, respectively. Using cumulative GPA as a predictor of IFOM CSE performance and accounting for differences in international status, it is possible to identify students who are at risk of failing to satisfy the National Board of Medical Examiners' International Standard of Competence.
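Reading an "optimal threshold" off an ROC curve, as in the GPA cutoffs above, is commonly done by maximizing Youden's J = TPR - FPR. A self-contained sketch with hypothetical GPA scores and pass/fail labels (not the study's data):

```python
def roc_points(scores, labels):
    """ROC operating points (threshold, FPR, TPR); labels are 0/1."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        pts.append((thr, fp / neg, tp / pos))
    return pts

def youden_threshold(scores, labels):
    """Cutoff maximizing Youden's J = TPR - FPR, the usual
    'optimal threshold' read off an ROC curve."""
    return max(roc_points(scores, labels), key=lambda p: p[2] - p[1])[0]

# Hypothetical cumulative GPAs with pass (1) / fail (0) outcomes
gpa = [4.1, 4.8, 5.0, 5.3, 5.6, 6.2, 6.5, 4.5]
passed = [0, 0, 0, 1, 1, 1, 1, 0]
thr = youden_threshold(gpa, passed)  # → 5.3
```

In practice the cutoff would be chosen on the fitted risk scores, with the AROC and its confidence interval reported alongside, as in the abstract.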

13. a Statistical Analysis on the System Performance of a Bluetooth Low Energy Indoor Positioning System in a 3d Environment

Science.gov (United States)

Haagmans, G. G.; Verhagen, S.; Voûte, R. L.; Verbree, E.

2017-09-01

Since GPS tends to fail for indoor positioning purposes, alternative methods like indoor positioning systems (IPS) based on Bluetooth low energy (BLE) are developing rapidly. Generally, IPS are deployed in environments covered with obstacles such as furniture, walls, people and electronics influencing the signal propagation. The major factor influencing system performance, and hence the ability to acquire optimal positioning results, is the geometry of the beacons. This geometry is limited by the available infrastructure that can be deployed (number of beacons, base stations and tags), which leads to the following challenge: given a limited number of beacons, where should they be placed in a specified indoor environment, such that the geometry contributes to optimal positioning results? This paper aims to propose a statistical model that is able to select the optimal configuration that satisfies the user requirements in terms of precision. The model requires the definition of a chosen 3D space (in our case 7 × 10 × 6 meters), the number of beacons, possible user tag locations and a performance threshold (e.g. required precision). For any given set of beacon and receiver locations, the precision and the internal and external reliability can be determined beforehand. As validation, the modelled precision has been compared with observed precision results. The measurements have been performed with an IPS of BlooLoc at a chosen set of user tag locations for a given geometric configuration. Eventually, the model is able to select the optimal geometric configuration out of millions of possible configurations based on a performance threshold (e.g. required precision).
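The claim that precision can be determined beforehand for any beacon/receiver geometry reduces to the cofactor matrix of the positioning design matrix. A pure-Python sketch of a range-based dilution-of-precision (DOP) computation, with hypothetical beacon coordinates for a room of roughly the stated dimensions (no clock term, obstacles, or signal model assumed):

```python
import math

def invert(m):
    """Gauss-Jordan inverse of a small square matrix (list of lists)."""
    n = len(m)
    a = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        a[col] = [v / a[col][col] for v in a[col]]
        for r in range(n):
            if r != col and a[r][col]:
                a[r] = [v - a[r][col] * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

def dop(beacons, tag):
    """Geometry-only dilution of precision: unit line-of-sight vectors
    form the design matrix A; cofactor matrix Q = (A^T A)^-1;
    DOP = sqrt(trace Q)."""
    A = []
    for b in beacons:
        d = math.dist(b, tag)
        A.append([(t - bi) / d for bi, t in zip(b, tag)])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(3)]
           for i in range(3)]
    Q = invert(AtA)
    return math.sqrt(Q[0][0] + Q[1][1] + Q[2][2])

# Hypothetical room (7 x 10 x 6 m): four ceiling-corner beacons
beacons = [(0, 0, 6), (7, 0, 6), (0, 10, 6), (7, 10, 6)]
pdop = dop(beacons, (3.5, 5.0, 1.0))  # precision factor for this geometry
```

Evaluating such a DOP (scaled by the ranging noise) at every candidate tag location is one way to rank candidate beacon configurations before deployment, in the spirit of the model described above.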

14. A PERFORMANCE COMPARISON BETWEEN ARTIFICIAL NEURAL NETWORKS AND MULTIVARIATE STATISTICAL METHODS IN FORECASTING FINANCIAL STRENGTH RATING IN TURKISH BANKING SECTOR

Directory of Open Access Journals (Sweden)

MELEK ACAR BOYACIOĞLU

2013-06-01

Full Text Available Financial strength rating indicates the fundamental financial strength of a bank. The aim of financial strength rating is to measure a bank’s fundamental financial strength excluding external factors. External factors can stem from the working environment or can be linked with outside protective support mechanisms. The evaluation thus seeks a rating of the bank free from outside supportive factors. The financial fundamentals, franchise value, variety of assets and working environment of a bank are also evaluated in this context. In this study, a model has been developed in order to predict the financial strength rating of Turkish banks. The methodology of this study is as follows: selecting variables to be used in the model, creating a data set, choosing the techniques to be used and evaluating the classification success of the techniques. It is concluded that the artificial neural network shows better performance in classifying financial strength ratings than the multivariate statistical methods in the training set. On the other hand, no meaningful difference could be found in the validation set, in which the prediction performances of the employed techniques were tested.

15. SIMPLIFIED PREDICTIVE MODELS FOR CO₂ SEQUESTRATION PERFORMANCE ASSESSMENT RESEARCH TOPICAL REPORT ON TASK #3 STATISTICAL LEARNING BASED MODELS

Energy Technology Data Exchange (ETDEWEB)

Mishra, Srikanta; Schuetter, Jared

2014-11-01

We compare two approaches for building a statistical proxy model (metamodel) for CO₂ geologic sequestration from the results of full-physics compositional simulations. The first approach involves a classical Box-Behnken or Augmented Pairs experimental design with a quadratic polynomial response surface. The second approach uses a space-filling maximin Latin hypercube sampling or maximum entropy design with a choice of five different metamodeling techniques: quadratic polynomial, kriging with constant and quadratic trend terms, multivariate adaptive regression spline (MARS) and additivity and variance stabilization (AVAS). Simulation results for CO₂ injection into a reservoir-caprock system with 9 design variables (and 97 samples) were used to generate the data for developing the proxy models. The fitted models were validated using an independent data set and a cross-validation approach for three different performance metrics: total storage efficiency, CO₂ plume radius and average reservoir pressure. The Box-Behnken–quadratic polynomial metamodel performed the best, followed closely by the maximin LHS–kriging metamodel.
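The space-filling maximin Latin hypercube design mentioned above can be sketched in a few lines: generate many random LHS designs and keep the one with the largest minimum pairwise distance. Sizes here are illustrative (the report used 97 samples in 9 design variables):

```python
import random

def latin_hypercube(n: int, d: int, rng: random.Random):
    """One LHS: each of the d columns is a random permutation of the n
    strata, jittered inside the stratum, so every 1/n slice of every
    axis contains exactly one point."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))  # n points in [0, 1)^d

def maximin_lhs(n: int, d: int, tries: int = 50, seed: int = 0):
    """Among `tries` random LHS designs, keep the one maximizing the
    minimum pairwise distance (the space-filling criterion)."""
    rng = random.Random(seed)

    def min_dist(pts):
        return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                   for i, p in enumerate(pts) for q in pts[i + 1:])

    return max((latin_hypercube(n, d, rng) for _ in range(tries)),
               key=min_dist)

design = maximin_lhs(n=20, d=9)  # 20 sample points in 9 variables
```

Each design point would then be scaled from the unit cube to the physical ranges of the 9 design variables before being handed to the simulator.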

16. Measurement of volatile organic compounds emitted in libraries and archives: an inferential indicator of paper decay?

Directory of Open Access Journals (Sweden)

Gibson Lorraine T

2012-05-01

Full Text Available Abstract Background A sampling campaign of indoor air was conducted to assess the typical concentration of indoor air pollutants in 8 National Libraries and Archives across the U.K. and Ireland. At each site, two locations were chosen that contained various objects in the collection (paper, parchment, microfilm, photographic material, etc.) and one location was chosen to act as a sampling reference location (placed in a corridor or entrance hallway). Results Of the locations surveyed, no measurable levels of sulfur dioxide were detected and low formaldehyde vapour concentrations were measured throughout. Acetic and formic acids were measured in all locations with, for the most part, higher acetic acid levels in areas with objects compared to reference locations. A large variety of volatile organic compounds (VOCs) was measured in all locations, in variable concentrations; however, furfural was the only VOC to be identified consistently at higher concentration in locations with paper-based collections, compared to those locations without objects. To cross-reference the sampling data with VOCs emitted directly from books, further studies were conducted to assess emissions from paper using solid phase microextraction (SPME) fibres and a newly developed method of analysis: collection of VOCs onto a polydimethylsiloxane (PDMS) elastomer strip. Conclusions In this study acetic acid and furfural levels were consistently higher in concentration when measured in locations which contained paper-based items. It is therefore suggested that both acetic acid and furfural (and possibly also trimethylbenzenes, ethyltoluene, decane and camphor) may be present in the indoor atmosphere as a result of cellulose degradation and together may act as an inferential non-invasive marker for the deterioration of paper. Direct VOC sampling was successfully achieved using SPME fibres, and analytes found in the indoor air were also identified as emissive by-products from paper. Finally a new non

17. Evidence-based orthodontics. Current statistical trends in published articles in one journal.

Science.gov (United States)

Law, Scott V; Chudasama, Dipak N; Rinchuse, Donald J

2010-09-01

To ascertain the number, type, and overall usage of statistics in American Journal of Orthodontics and Dentofacial Orthopedics (AJODO) articles for 2008. These data were then compared to data from three previous years: 1975, 1985, and 2003. The frequency and distribution of statistics used in the AJODO original articles for 2008 were dichotomized into those using statistics and those not using statistics. Statistical procedures were then broadly divided into descriptive statistics (mean, standard deviation, range, percentage) and inferential statistics (t-test, analysis of variance). Descriptive statistics were used to make comparisons. In 1975, 1985, 2003, and 2008, AJODO published 72, 87, 134, and 141 original articles, respectively. The percentage of original articles using statistics was 43.1% in 1975, 75.9% in 1985, 94.0% in 2003, and 92.9% in 2008; the proportion of original articles using statistics stayed relatively the same from 2003 to 2008, with only a small 1.1% decrease. The percentage of articles using inferential statistical analyses was 23.7% in 1975, 74.2% in 1985, 92.9% in 2003, and 84.4% in 2008. Comparing AJODO publications in 2003 and 2008, there was an 8.5% increase in articles using only descriptive statistics (from 7.1% to 15.6%), and an 8.5% decrease in articles using inferential statistics (from 92.9% to 84.4%).

18. Risk assessment of student performance in the International Foundations of Medicine Clinical Science Examination by the use of statistical modeling

Directory of Open Access Journals (Sweden)

David MC

2016-12-01

Full Text Available Michael C David,1 Diann S Eley,2 Jennifer Schafer,2 Leo Davies,3 1School of Public Health, 2School of Medicine, The University of Queensland, Herston, QLD, 3Sydney Medical School, The University of Sydney, NSW, Australia Purpose: The primary aim of this study was to assess the predictive validity of cumulative grade point average (GPA) for performance in the International Foundations of Medicine (IFOM) Clinical Science Examination (CSE). A secondary aim was to develop a strategy for identifying students at risk of performing poorly in the IFOM CSE as determined by the National Board of Medical Examiners’ International Standard of Competence. Methods: Final year medical students from an Australian university medical school took the IFOM CSE as a formative assessment. Measures included overall IFOM CSE score as the dependent variable, cumulative GPA as the predictor, and the factors age, gender, year of enrollment, international or domestic status of student, and language spoken at home as covariates. Multivariable linear regression was used to measure predictor and covariate effects. Optimal thresholds of risk assessment were based on receiver-operating characteristic (ROC) curves. Results: Cumulative GPA (nonstandardized regression coefficient [B]: 81.83; 95% confidence interval [CI]: 68.13 to 95.53) and international status (B: –37.40; 95% CI: –57.85 to –16.96) from 427 students were found to be statistically significantly associated with IFOM CSE performance. Cumulative GPAs of 5.30 (area under ROC [AROC]: 0.77; 95% CI: 0.72 to 0.82) and 4.90 (AROC: 0.72; 95% CI: 0.66 to 0.78) were identified as thresholds of significant risk for domestic and international students, respectively. Conclusion: Using cumulative GPA as a predictor of IFOM CSE performance and accounting for differences in international status, it is possible to identify students who are at risk of failing to satisfy the National Board of Medical Examiners’ International

19. Evaluation of Solid Rocket Motor Component Data Using a Commercially Available Statistical Software Package

Science.gov (United States)

Stefanski, Philip L.

2015-01-01

Commercially available software packages today allow users to quickly perform the routine evaluations of (1) descriptive statistics to numerically and graphically summarize both sample and population data, (2) inferential statistics that draw conclusions about a given population from samples taken of it, (3) probability determinations that can be used to generate estimates of reliability allowables, and finally (4) the setup of designed experiments and analysis of their data to identify significant material and process characteristics for application in both product manufacturing and performance enhancement. This paper presents examples of analysis and experimental design work that has been conducted using Statgraphics® statistical software to obtain useful information with regard to solid rocket motor propellants and internal insulation material. Data were obtained from a number of programs (Shuttle, Constellation, and Space Launch System) and sources that include solid propellant burn rate strands, tensile specimens, sub-scale test motors, full-scale operational motors, rubber insulation specimens, and sub-scale rubber insulation analog samples. Besides facilitating the experimental design process to yield meaningful results, statistical software has demonstrated its ability to quickly perform complex data analyses and yield significant findings that might otherwise have gone unnoticed. One caveat to these successes is that useful results derive not only from the inherent power of the software package, but also from the skill and understanding of the data analyst.

20. The use and misuse of statistical methodologies in pharmacology research.

Science.gov (United States)

Marino, Michael J

2014-01-01

Descriptive, exploratory, and inferential statistics are necessary components of hypothesis-driven biomedical research. Despite the ubiquitous need for these tools, the emphasis on statistical methods in pharmacology has become dominated by inferential methods, often chosen more for the availability of user-friendly software than from any understanding of the data set or the critical assumptions of the statistical tests. Such frank misuse of statistical methodology and the quest to reach the mystical α < 0.05 reflect deficiencies in statistical training. Perhaps more critically, a poor understanding of statistical tools limits the conclusions that may be drawn from a study by divorcing the investigator from their own data. The net result is a decrease in quality of and confidence in research findings, fueling recent controversies over the reproducibility of high-profile findings and effects that appear to diminish over time. The recent development of "omics" approaches, leading to the production of massive higher-dimensional data sets, has amplified these issues, making it clear that new approaches are needed to appropriately and effectively mine this type of data. Unfortunately, statistical education in the field has not kept pace. This commentary provides a foundation for an intuitive understanding of statistics that fosters an exploratory approach and an appreciation for the assumptions of various statistical tests that hopefully will increase the correct use of statistics, the application of exploratory data analysis, and the use of statistical study design, with the goal of increasing reproducibility and confidence in the literature. Copyright © 2013. Published by Elsevier Inc.

1. Joint statistics of partial sums of ordered exponential variates and performance of GSC RAKE receivers over rayleigh fading channel

KAUST Repository

Nam, Sungsik; Hasna, Mazen Omar; Alouini, Mohamed-Slim

2011-01-01

-interference on GSC RAKE receivers. The major difficulty in these problems is to derive some joint statistics of ordered exponential variates. With this motivation in mind, we capitalize in this paper on some new order statistics results to derive exact closed

2. Subjectivism as an unavoidable feature of ecological statistics

Directory of Open Access Journals (Sweden)

Martínez–Abraín, A.

2014-12-01

Full Text Available We approach here the handling of previous information when performing statistical inference in ecology, both when dealing with model specification and selection, and when dealing with parameter estimation. We compare the perspectives of this problem from the frequentist and Bayesian schools, including objective and subjective Bayesians. We show that the issue of making use of previous information and making a priori decisions is not only a reality for Bayesians but also for frequentists. However, the latter tend to overlook this because of the common difficulty of having previous information available on the magnitude of the effect that is thought to be biologically relevant. This prior information should be fed into a priori power tests when looking for the necessary sample sizes to couple statistical and biological significances. Ecologists should make a greater effort to make use of available prior information because this is their most legitimate contribution to the inferential process. Parameter estimation and model selection would benefit if this was done, allowing a more reliable accumulation of knowledge, and hence progress, in the biological sciences.
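The a priori power test the authors advocate, fed with the smallest biologically relevant effect size, fixes the sample size before data collection. A sketch using the standard normal-approximation formula for a two-sample t test (the effect size d, α, and power are the analyst's choices, not values from the article):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05,
                power: float = 0.8) -> int:
    """A priori sample size per group for a two-sample t test
    (normal approximation): n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2,
    where d is the smallest effect judged biologically relevant."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2
                     / effect_size ** 2)

# A biologically relevant effect of d = 0.5 at alpha = 0.05 and 80% power
n = n_per_group(0.5)  # → 63 subjects per group
```

This is exactly the step where prior ecological knowledge enters the frequentist workflow: the chosen d encodes what effect magnitude is deemed worth detecting, coupling statistical and biological significance as the abstract argues.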

3. Impact of Autocorrelation on Principal Components and Their Use in Statistical Process Control

DEFF Research Database (Denmark)

Vanhatalo, Erik; Kulahci, Murat

2015-01-01

A basic assumption when using principal component analysis (PCA) for inferential purposes, such as in statistical process control (SPC), is that the data are independent in time. In many industrial processes, frequent sampling and process dynamics make this assumption unrealistic rendering sampled...

4. Human capital accumulation and its effect on agribusiness performance: the case of China.

Science.gov (United States)

Udimal, Thomas Bilaliib; Jincai, Zhuang; Ayamba, Emmanuel Caesar; Sarpong, Patrick Boateng

2017-09-01

This study investigates the effect of accumulated human capital on the performance of agribusinesses in China. Four hundred and fifty agribusiness owners were interviewed for the study. Growth in sales over the last 5 years was used as a measure of performance. The following variables were reviewed and captured as constituting human capital: education, being raised in the area, having entrepreneur parents, attending business seminars/trade fairs, managerial experience, similar work experience, cooperative membership, and training. A logit regression model and inferential statistics were used to analyze the data. The logit regression model was used to analyze the effect of accumulated human capital on growth in sales. The inferential statistics, on the other hand, were used to measure the association between age, education, sex, provinces, and the categories of growth. Our study found that having entrepreneur parents, attending business seminars/trade fairs, managerial experience, similar work experience, education, and training each displayed a statistically significant positive effect on growth in sales.

5. Statistical evaluation of the performance of gridded monthly precipitation products from reanalysis data, satellite estimates, and merged analyses over China

Science.gov (United States)

Deng, Xueliang; Nie, Suping; Deng, Weitao; Cao, Weihua

2018-04-01

In this study, we compared the following four different gridded monthly precipitation products: the National Centers for Environmental Prediction version 2 (NCEP-2) reanalysis data, the satellite-based Climate Prediction Center Morphing technique (CMORPH) data, the merged satellite-gauge Global Precipitation Climatology Project (GPCP) data, and the merged satellite-gauge-model data from the Beijing Climate Center Merged Estimation of Precipitation (BMEP). We evaluated the performances of these products using monthly precipitation observations spanning the period of January 2003 to December 2013 from a dense, national, rain gauge network in China. Our assessment involved several statistical techniques, including spatial pattern, temporal variation, bias, root-mean-square error (RMSE), and correlation coefficient (CC) analysis. The results show that NCEP-2, GPCP, and BMEP generally overestimate monthly precipitation at the national scale and CMORPH underestimates it. However, all of the datasets successfully characterized the northwest to southeast increase in the monthly precipitation over China. Because they include precipitation gauge information from the Global Telecommunication System (GTS) network, GPCP and BMEP have much smaller biases, lower RMSEs, and higher CCs than NCEP-2 and CMORPH. When the seasonal and regional variations are considered, NCEP-2 has a larger error over southern China during the summer. CMORPH poorly reproduces the magnitude of the precipitation over southeastern China and the temporal correlation over western and northwestern China during all seasons. BMEP has a lower RMSE and higher CC than GPCP over eastern and southern China, where the station network is dense. In contrast, BMEP has a lower CC than GPCP over western and northwestern China, where the gauge network is relatively sparse.
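The bias, RMSE, and correlation-coefficient comparisons above are straightforward to compute per station or grid cell. A minimal sketch with hypothetical monthly totals (not the study's data):

```python
import math

def bias(est, obs):
    """Mean over/under-estimation of the product relative to gauges."""
    return sum(e - o for e, o in zip(est, obs)) / len(obs)

def rmse(est, obs):
    """Root-mean-square error of the product against gauges."""
    return math.sqrt(sum((e - o) ** 2 for e, o in zip(est, obs)) / len(obs))

def corr(est, obs):
    """Pearson correlation coefficient (temporal agreement)."""
    n = len(obs)
    me, mo = sum(est) / n, sum(obs) / n
    cov = sum((e - me) * (o - mo) for e, o in zip(est, obs))
    return cov / math.sqrt(sum((e - me) ** 2 for e in est)
                           * sum((o - mo) ** 2 for o in obs))

# Hypothetical monthly precipitation totals (mm): gauges vs. one product
gauge = [30.0, 55.0, 80.0, 120.0, 95.0, 40.0]
product = [35.0, 50.0, 90.0, 110.0, 105.0, 45.0]
b = bias(product, gauge)  # → 2.5 (slight overestimate, mm)
r = rmse(product, gauge)
c = corr(product, gauge)
```

Aggregating these three metrics by season and sub-region reproduces the kind of evaluation reported for NCEP-2, CMORPH, GPCP, and BMEP above.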

6. Early warning of limit-exceeding concentrations of cyanobacteria and cyanotoxins in drinking water reservoirs by inferential modelling.

Science.gov (United States)

Recknagel, Friedrich; Orr, Philip T; Bartkow, Michael; Swanepoel, Annelie; Cao, Hongqing

2017-11-01

7. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study.

Science.gov (United States)

Nour-Eldein, Hebatallah

2016-01-01

Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. To determine the statistical methods and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, the deficiencies were: no prior sample size calculation 19/60 (31.7%), application of wrong statistical tests 17/60 (28.3%), incomplete documentation of statistics 59/60 (98.3%), reporting P value without test statistics 32/60 (53.3%), no reporting of confidence intervals with effect size measures 12/60 (20.0%), use of mean (standard deviation) to describe ordinal/nonnormal data 8/60 (13.3%), and errors related to interpretation, mainly conclusions unsupported by the study data 5/60 (8.3%). Inferential statistics were used in the majority of FM articles. Data analysis and reporting of statistics are areas for improvement in FM research articles.

8. Spatial analysis statistics, visualization, and computational methods

CERN Document Server

Oyana, Tonny J

2015-01-01

An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

9. Performance of oil industry cross-country pipelines in Western Europe: statistical summary of reported spillages, 1979

Energy Technology Data Exchange (ETDEWEB)

de Waal, A.; Hayward, P.; Panisi, C.; Groenhof, J.

This report presents statistical data relating to spillages from oil industry cross-country pipelines during the calendar year 1979, with comments and comparisons for the five year period 1975-1979. (Copyright (c) CONCAWE 1980.)

10. The Role of Statistics and Research Methods in the Academic Success of Psychology Majors: Do Performance and Enrollment Timing Matter?

Science.gov (United States)

Freng, Scott; Webber, David; Blatter, Jamin; Wing, Ashley; Scott, Walter D.

2011-01-01

Comprehension of statistics and research methods is crucial to understanding psychology as a science (APA, 2007). However, psychology majors sometimes approach methodology courses with derision or anxiety (Onwuegbuzie & Wilson, 2003; Rajecki, Appleby, Williams, Johnson, & Jeschke, 2005); consequently, students may postpone…

11. Equivalent statistics and data interpretation.

Science.gov (United States)

Francis, Gregory

2017-08-01

Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
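The paper's equivalence point can be made concrete: for a two-sample t test with equal, known group sizes, the t statistic and Cohen's d are deterministic transforms of one another (d = t·√(2/n)), so reporting either conveys the same information about the data. A sketch (illustrative numbers):

```python
import math

def cohens_d_from_t(t: float, n_per_group: int) -> float:
    """Two-sample t test, equal groups of size n: d = t * sqrt(2 / n).
    With the sample sizes known, t (and hence the p value) and d
    carry the same information about the data set."""
    return t * math.sqrt(2 / n_per_group)

def t_from_cohens_d(d: float, n_per_group: int) -> float:
    """Inverse transform: the mapping is one-to-one."""
    return d * math.sqrt(n_per_group / 2)

t_obs = 2.5
d = cohens_d_from_t(t_obs, 20)  # ≈ 0.79
assert math.isclose(t_from_cohens_d(d, 20), t_obs)  # lossless round trip
```

What differs between frameworks is therefore not the information used but its interpretation: the same t maps to a p value, a confidence interval for d, or a Bayes factor, which is exactly the article's argument for choosing the analysis to fit the inference.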

12. Experimental statistics

CERN Document Server

Natrella, Mary Gibbons

1963-01-01

Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

13. Using the expected detection delay to assess the performance of different multivariate statistical process monitoring methods for multiplicative and drift faults.

Science.gov (United States)

Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Peng, Kaixiang

2017-03-01

The expected detection delay (EDD) index was recently developed to measure the performance of multivariate statistical process monitoring (MSPM) methods for constant additive faults. This paper, based on a statistical investigation of the T²- and Q-test statistics, extends the EDD index to the multiplicative and drift fault cases. As well, it is used to assess the performance of common MSPM methods that adopt these two test statistics. Based on how they use the measurement space, these methods can be divided into two groups: those which consider the complete measurement space, for example, principal component analysis-based methods, and those which only consider some subspace that reflects changes in key performance indicators, such as partial least squares-based methods. Furthermore, a generic form in which these methods use the T²- and Q-test statistics is given. With the extended EDD index, the performance of these methods in detecting drift and multiplicative faults is assessed using both numerical simulations and the Tennessee Eastman process. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
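For readers unfamiliar with the two test statistics, here is a hedged sketch of how T² and Q are typically computed in PCA-based monitoring; the toy data and variable names are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# 500 samples of a 5-variable process with correlated variables (toy data)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))
Xc = X - X.mean(axis=0)

# PCA via SVD; retain k principal components
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
P = Vt[:k].T                     # loadings (5 x k), orthonormal columns
lam = s[:k]**2 / (len(X) - 1)    # retained eigenvalues

x = Xc[0]                        # one centred observation to monitor
t_scores = P.T @ x
T2 = np.sum(t_scores**2 / lam)   # Hotelling's T² inside the PC subspace
resid = x - P @ t_scores
Q = resid @ resid                # Q statistic (squared prediction error)
```

T² monitors variation inside the retained subspace, while Q monitors the residual space, which is why the two are often used together.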

14. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

Science.gov (United States)

Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

2017-01-01

If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
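The recommended logit-scale meta-analysis can be sketched as follows; the per-study C-statistics, their standard errors, and the choice of DerSimonian-Laird pooling are illustrative assumptions, not data from the paper:

```python
import numpy as np

# hypothetical per-study C-statistics and their standard errors
c = np.array([0.72, 0.78, 0.81, 0.69, 0.75])
se_c = np.array([0.020, 0.030, 0.025, 0.040, 0.020])

# delta method: se(logit C) ~ se(C) / (C * (1 - C))
y = np.log(c / (1 - c))
se_y = se_c / (c * (1 - c))

# DerSimonian-Laird random-effects pooling on the logit scale
w = 1 / se_y**2
ybar = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - ybar)**2)
df = len(y) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (se_y**2 + tau2)
pooled_logit = np.sum(w_star * y) / np.sum(w_star)
pooled_c = 1 / (1 + np.exp(-pooled_logit))  # back-transform to the C scale
```

Pooling on the logit scale and back-transforming keeps the pooled estimate inside (0, 1) and better satisfies the between-study normality assumption the authors examine.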

15. Statistical thermodynamics

International Nuclear Information System (INIS)

Lim, Gyeong Hui

2008-03-01

This book consists of 15 chapters, which are basic conception and meaning of statistical thermodynamics, Maxwell-Boltzmann's statistics, ensemble, thermodynamics function and fluctuation, statistical dynamics with independent particle system, ideal molecular system, chemical equilibrium and chemical reaction rate in ideal gas mixture, classical statistical thermodynamics, ideal lattice model, lattice statistics and nonideal lattice model, imperfect gas theory on liquid, theory on solution, statistical thermodynamics of interface, statistical thermodynamics of a high molecule system and quantum statistics

16. Do Narcissism and Emotional Intelligence Win Us Friends? Modeling Dynamics of Peer Popularity Using Inferential Network Analysis.

Science.gov (United States)

Czarna, Anna Z; Leifeld, Philip; Śmieja, Magdalena; Dufner, Michael; Salovey, Peter

2016-09-27

This research investigated effects of narcissism and emotional intelligence (EI) on popularity in social networks. In a longitudinal field study, we examined the dynamics of popularity in 15 peer groups in two waves (N = 273). We measured narcissism, ability EI, and explicit and implicit self-esteem. In addition, we measured popularity at zero acquaintance and 3 months later. We analyzed the data using inferential network analysis (temporal exponential random graph modeling, TERGM) accounting for self-organizing network forces. People high in narcissism were popular, but increased less in popularity over time than people lower in narcissism. In contrast, emotionally intelligent people increased more in popularity over time than less emotionally intelligent people. The effects held when we controlled for explicit and implicit self-esteem. These results suggest that narcissism is rather disadvantageous and that EI is rather advantageous for long-term popularity. © 2016 by the Society for Personality and Social Psychology, Inc.

17. Inferential comprehension of 3-6 year olds within the context of story grammar: a scoping review.

Science.gov (United States)

Filiatrault-Veilleux, Paméla; Bouchard, Caroline; Trudeau, Natacha; Desmarais, Chantal

2015-01-01

The ability to make inferences plays a crucial role in reading comprehension and the educational success of school-aged children. However, it starts to unfold much earlier than school entry and literacy. Given that it is likely to be targeted in speech language therapy, it would be useful for clinicians to have access to information about a developmental sequence of inferential comprehension. Yet, at this time, there is no clear proposition of the way in which this ability develops in young children prior to school entry. To reduce the knowledge gap with regards to inferential comprehension in young children by conducting a scoping review of the literature. The two objectives of this research are: (1) to describe typically developing children's comprehension of causal inferences targeting elements of story grammar, with the goal of proposing milestones in the development of this ability; and (2) to highlight key elements of the methodology used to gather this information in a paediatric population. A total of 16 studies from six databases that met the inclusion criteria were qualitatively analysed in the context of a scoping review. This methodological approach was used to identify common themes and gaps in the knowledge base to achieve the intended objectives. Results permit the description of key elements in the development of six types of causal inference targeting elements of story grammar in children between 3 and 6 years old. Results also demonstrate the various methods used to assess this ability in young children and highlight particularly interesting procedures for use with this younger population. These findings point to the need for additional studies to understand this ability better and to develop strategies to stimulate an evidence-based developmental sequence in children from an early age. © 2015 The Authors. International Journal of Language & Communication Disorders published by John Wiley & Sons Ltd on behalf of Royal College of Speech and

18. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

Directory of Open Access Journals (Sweden)

Hamid Reza Marateb

2014-01-01

Full Text Available Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables.

19. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

Science.gov (United States)

Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

2014-01-01

Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565

20. Computational Performance Optimisation for Statistical Analysis of the Effect of Nano-CMOS Variability on Integrated Circuits

Directory of Open Access Journals (Sweden)

Zheng Xie

2013-01-01

Full Text Available The intrinsic variability of nanoscale VLSI technology must be taken into account when analyzing circuit designs to predict likely yield. Monte-Carlo- (MC-) and quasi-MC- (QMC-) based statistical techniques do this by analysing many randomised or quasirandomised copies of circuits. The randomisation must model forms of variability that occur in nano-CMOS technology, including “atomistic” effects without intradie correlation and effects with intradie correlation between neighbouring devices. A major problem is the computational cost of carrying out sufficient analyses to produce statistically reliable results. The use of principal components analysis, behavioural modeling, and an implementation of “Statistical Blockade” (SB) is shown to be capable of achieving significant reduction in the computational costs. A computation time reduction of 98.7% was achieved for a commonly used asynchronous circuit element. Replacing MC by QMC analysis can achieve further computation reduction, and this is illustrated for more complex circuits, with the results being compared with those of transistor-level simulations. The “yield prediction” analysis of SRAM arrays is taken as a case study, where the arrays contain up to 1536 transistors modelled using parameters appropriate to 35 nm technology. It is reported that savings of up to 99.85% in computation time were obtained.
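As a minimal illustration of how MC yield estimation works and how its precision scales with sample count, the toy sketch below estimates yield against a timing budget; the response-surface coefficients and thresholds are invented, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# hypothetical response surface: gate delay vs threshold-voltage deviation
dvt = rng.normal(0.0, 0.03, n)            # per-copy Vt variation (V)
delay = 1.0 + 8.0*dvt + 40.0*dvt**2       # ns, illustrative model only

spec = 1.3                                # ns timing budget
yield_est = np.mean(delay < spec)         # fraction of copies meeting spec

# the MC standard error shrinks only as 1/sqrt(n), which is why
# rare-event ("blockade"-style) techniques are needed for tail yields
se = np.sqrt(yield_est * (1 - yield_est) / n)
```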

1. Optimization of Biodiesel-Diesel Blended Fuel Properties and Engine Performance with Ether Additive Using Statistical Analysis and Response Surface Methods

Directory of Open Access Journals (Sweden)

Obed M. Ali

2015-12-01

Full Text Available In this study, the fuel properties and engine performance of blended palm biodiesel-diesel using diethyl ether as additive have been investigated. The properties of B30 blended palm biodiesel-diesel fuel were measured and analyzed statistically with the addition of 2%, 4%, 6% and 8% (by volume) diethyl ether additive. The engine tests were conducted at increasing engine speeds from 1500 rpm to 3500 rpm and under constant load. Optimization of independent variables was performed using the desirability approach of the response surface methodology (RSM) with the goal of minimizing emissions and maximizing performance parameters. The experiments were designed using a statistical tool known as design of experiments (DoE) based on RSM.

2. Cancer Statistics

Science.gov (United States)

... What Is Cancer? Cancer Statistics Cancer Disparities Cancer Statistics Cancer has a major impact on society in ... success of efforts to control and manage cancer. Statistics at a Glance: The Burden of Cancer in ...

3. National transportation statistics 2010

Science.gov (United States)

2010-01-01

National Transportation Statistics presents statistics on the U.S. transportation system, including its physical components, safety record, economic performance, the human and natural environment, and national security. This is a large online documen...

4. Children's inferential styles, 5-HTTLPR genotype, and maternal expressed emotion-criticism: An integrated model for the intergenerational transmission of depression.

Science.gov (United States)

Gibb, Brandon E; Uhrlass, Dorothy J; Grassia, Marie; Benas, Jessica S; McGeary, John

2009-11-01

The authors tested a model for the intergenerational transmission of depression integrating specific genetic (5-HTTLPR), cognitive (inferential style), and environmental (mother depressive symptoms and expressed-emotion criticism [EE-Crit]) risk factors. Supporting the hypothesis that maternal depression is associated with elevated levels of stress in children's lives, mothers with a history of major depressive disorder (MDD) exhibited higher depressive symptoms across a 6-month multiwave follow-up than mothers with no depression history. In addition, partially supporting our hypothesis, levels of maternal criticism during the follow-up were significantly related to mothers' current depressive symptoms but not to history of MDD. Finally, the authors found support for an integrated Gene x Cognition x Environment model of risk. Specifically, among children with negative inferential styles regarding their self-characteristics, there was a clear dose response of 5-HTTLPR genotype moderating the relation between maternal criticism and children's depressive symptoms, with the highest depressive symptoms during the follow-up observed among children carrying 2 copies of the 5-HTTLPR lower expressing alleles (short [S] or long [LG]) who also exhibited negative inferential styles for self-characteristics and who experienced high levels of EE-Crit. In contrast, children with positive inferential styles exhibited low depressive symptoms regardless of 5-HTTLPR genotype or level of maternal criticism. PsycINFO Database Record 2009 APA, all rights reserved.

5. Application of pedagogy reflective in statistical methods course and practicum statistical methods

Science.gov (United States)

Julie, Hongki

2017-08-01

The subjects Elementary Statistics, Statistical Methods, and Statistical Methods Practicum aim to equip Mathematics Education students with descriptive and inferential statistics. Students' understanding of descriptive and inferential statistics is important in the Mathematics Education Department, especially for those whose final projects involve quantitative research. In quantitative research, students are required to present and describe quantitative data appropriately, to draw conclusions from the data, and to establish the relationships between the independent and dependent variables defined in their research. In fact, when students carried out final projects involving quantitative research, it was still not rare to find them making mistakes in drawing conclusions and choosing the hypothesis-testing procedure, and consequently reaching incorrect conclusions. This is a fatal mistake for anyone doing quantitative research. Several outcomes resulted from implementing reflective pedagogy in the teaching and learning process of the Statistical Methods and Statistical Methods Practicum courses: 1. Twenty-two students passed the course and one student did not. 2. The highest grade, an A, was achieved by 18 students. 3. All students reported that they could develop a critical stance and build care for one another through the learning process in this course. 4. All students agreed that, through the learning process they underwent in the course, they could build care for one another.

6. Uncertainties in repository performance from spatial variability of hydraulic conductivities - statistical estimation and stochastic simulation using PROPER

International Nuclear Information System (INIS)

Lovius, L.; Norman, S.; Kjellbert, N.

1990-02-01

An assessment has been made of the impact of spatial variability on the performance of a KBS-3 type repository. The uncertainties in geohydrologically related performance measures have been investigated using conductivity data from one of the Swedish study sites. The analysis was carried out with the PROPER code and the FSCF10 submodel. (authors)

7. STATCAT, Statistical Analysis of Parametric and Non-Parametric Data

International Nuclear Information System (INIS)

David, Hugh

1990-01-01

1 - Description of program or function: A suite of 26 programs designed to facilitate the appropriate statistical analysis and data handling of parametric and non-parametric data, using classical and modern univariate and multivariate methods. 2 - Method of solution: Data is read entry by entry, using a choice of input formats, and the resultant data bank is checked for out-of-range, rare, extreme or missing data. The completed STATCAT data bank can be treated by a variety of descriptive and inferential statistical methods, and modified, using other standard programs as required

8. New Closed-Form Results on Ordered Statistics of Partial Sums of Gamma Random Variables and its Application to Performance Evaluation in the Presence of Nakagami Fading

KAUST Repository

Nam, Sung Sik

2017-06-19

Complex wireless transmission systems require multi-dimensional joint statistical techniques for performance evaluation. Here, we first present exact closed-form results on order statistics of arbitrary partial sums of Gamma random variables, together with closed-form results for the core functions specialized to independent and identically distributed Nakagami-m fading channels, based on a moment generating function-based unified analytical framework. Neither of these exact closed-form results has previously been published in the literature. In addition, a feasible application example of the newly derived closed-form results is presented. In particular, we analyze the outage performance of finger replacement schemes over Nakagami fading channels as an application of our method. Note that these analysis results are directly applicable to several applications, such as millimeter-wave communication systems in which an antenna diversity scheme operates using a finger-replacement-like combining scheme, and other fading scenarios. Note also that the statistical results can provide potential solutions for ordered statistics in any other research topic based on Gamma distributions, or in other advanced wireless communications research in the presence of Nakagami fading.
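The ordered partial sums treated analytically in the paper can be approximated by simulation; the sketch below (parameters chosen arbitrarily) estimates the outage of a GSC-style combiner that sums the k largest of L i.i.d. Gamma-distributed channel power gains:

```python
import numpy as np

rng = np.random.default_rng(5)
m, L, k, trials = 2.0, 5, 3, 100_000

# i.i.d. Nakagami-m fading: channel power gains are Gamma(shape=m, scale=1/m),
# normalised so each gain has unit mean
g = rng.gamma(shape=m, scale=1/m, size=(trials, L))

# GSC-style combining: per trial, sum the k largest of the L ordered gains
top_k = np.sort(g, axis=1)[:, -k:]
combined = top_k.sum(axis=1)

threshold = 1.0
outage = np.mean(combined < threshold)   # empirical outage probability
```

The exact closed-form expressions in the paper replace exactly this kind of Monte-Carlo estimate with analytic formulas.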

9. Statistical methods for assays with limits of detection: Serum bile acid as a differentiator between patients with normal colons, adenomas, and colorectal cancer

Directory of Open Access Journals (Sweden)

Bonnie LaFleur

2011-01-01

Full Text Available In analytic chemistry a detection limit (DL) is the lowest measurable amount of an analyte that can be distinguished from a blank; many biomedical measurement technologies exhibit this property. From a statistical perspective, these data present inferential challenges because instead of precise measures, one only has information that the value is somewhere between 0 and the DL (below detection limit, BDL). Substitution of BDL values, with 0 or the DL can lead to biased parameter estimates and a loss of statistical power. Statistical methods that make adjustments when dealing with these types of data, often called left-censored data, are available in many commercial statistical packages. Despite this availability, the use of these methods is still not widespread in biomedical literature. We have reviewed the statistical approaches of dealing with BDL values, and used simulations to examine the performance of the commonly used substitution methods and the most widely available statistical methods. We have illustrated these methods using a study undertaken at the Vanderbilt-Ingram Cancer Center, to examine the serum bile acid levels in patients with colorectal cancer and adenoma. We have found that the modern methods for BDL values identify disease-related differences that are often missed, with statistically naive approaches.
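A minimal sketch of the contrast the authors study, comparing naive substitution with a censored-normal maximum-likelihood fit; the synthetic data, detection limit, and helper function are assumptions for illustration:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
true_mu, true_sd, DL = 2.0, 1.0, 1.5
x = rng.normal(true_mu, true_sd, 200)

detected = x[x >= DL]          # values above the detection limit
n_bdl = (x < DL).sum()         # count of below-detection-limit values

# naive substitution: replace every BDL value with DL/2
mu_sub = (detected.sum() + n_bdl * DL / 2) / len(x)

# censored-normal MLE: detected values contribute the density,
# BDL values contribute only P(X < DL)
def nll(theta):
    mu, log_sd = theta
    sd = np.exp(log_sd)        # parameterise sd on the log scale
    ll = stats.norm.logpdf(detected, mu, sd).sum()
    ll += n_bdl * stats.norm.logcdf(DL, mu, sd)
    return -ll

res = optimize.minimize(nll, x0=[detected.mean(), 0.0])
mu_mle = res.x[0]
```

The MLE uses the partial information carried by the BDL count rather than inventing values for it, which is why such likelihood-based methods tend to outperform substitution.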

10. To improve the quality of the statistical analysis of papers published in the Journal of the Korean Society for Therapeutic Radiology and Oncology

International Nuclear Information System (INIS)

Park, Hee Chul; Choi, Doo Ho; Ahn, Song Vogue

2008-01-01

To improve the quality of the statistical analysis of papers published in the Journal of the Korean Society for Therapeutic Radiology and Oncology (JKOSTRO) by evaluating commonly encountered errors. Materials and Methods: Papers published in the JKOSTRO from January 2006 to December 2007 were reviewed for methodological and statistical validity using a modified version of Ahn's checklist. A statistician reviewed individual papers and evaluated the list items in the checklist for each paper. To avoid potential assessment errors by a statistician lacking expertise in the field of radiation oncology, the editorial board of the JKOSTRO also reviewed each checklist for the individual articles. A frequency analysis of the list items was performed using SAS (version 9.0, SAS Institute, NC, USA) software. Results: A total of 73 papers including 5 case reports and 68 original articles were reviewed. Inferential statistics were used in 46 papers. The most commonly adopted statistical methodology was survival analysis (58.7%). Only 19% of papers were free of statistical errors. Errors of omission were encountered in 34 (50.0%) papers. Errors of commission were encountered in 35 (51.5%) papers. Twenty-one papers (30.9%) had both errors of omission and commission. Conclusion: A variety of statistical errors were encountered in papers published in the JKOSTRO. The current study suggests that a more thorough review of the statistical analysis is needed for manuscripts submitted to the JKOSTRO.

11. Statistical Real-time Model for Performance Prediction of Ship Detection from Microsatellite Electro-Optical Imagers

Directory of Open Access Journals (Sweden)

Lapierre, Fabian D.

2010-01-01

Full Text Available Maritime vessels longer than 45 meters are required to operate an Automatic Identification System (AIS) used by vessel traffic services. However, when a vessel shuts down its AIS, there is no means of detecting it in the open sea. In this paper, we use Electro-Optical (EO) imagers for noncooperative vessel detection when the AIS is not operational. Compared to radar sensors, EO sensors have lower cost, lower payload, and lower computational processing load. The EO sensors are mounted on LEO microsatellites. We propose a real-time statistical methodology to estimate sensor Receiver Operating Characteristic (ROC) curves that does not require computation of the entire image received at the sensor. We then illustrate the use of this methodology to design a simple simulator that can help sensor manufacturers optimize the design of EO sensors for maritime applications.

12. Analysis of Statistical Methods Currently used in Toxicology Journals.

Science.gov (United States)

Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

2014-09-01

Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted based on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.
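The workflow the survey found to be underused, checking normality and equal variance before a one-way ANOVA, can be sketched with SciPy; the dose-group data below are invented, with n = 6 per group matching the modal sample size reported:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# three hypothetical dose groups with n = 6 each
control = rng.normal(10.0, 1.0, 6)
low     = rng.normal(10.5, 1.0, 6)
high    = rng.normal(13.0, 1.0, 6)

# assumption checks that the survey found were rarely reported
resid = np.concatenate([g - g.mean() for g in (control, low, high)])
_, p_norm = stats.shapiro(resid)             # normality of residuals
_, p_var = stats.levene(control, low, high)  # equality of variances

# omnibus one-way ANOVA
f, p = stats.f_oneway(control, low, high)
```

In practice a significant omnibus result would be followed by a post-hoc procedure with multiplicity control, another step the survey flags as often omitted.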

13. Dry deposition of reactive nitrogen to European ecosystems: a comparison of inferential models across the NitroEurope network

Directory of Open Access Journals (Sweden)

C. R. Flechard

2011-03-01

Full Text Available Inferential models have long been used to determine pollutant dry deposition to ecosystems from measurements of air concentrations and as part of national and regional atmospheric chemistry and transport models, and yet models still suffer very large uncertainties. An inferential network of 55 sites throughout Europe for atmospheric reactive nitrogen (Nr) was established in 2007, providing ambient concentrations of gaseous NH3, NO2, HNO3 and HONO and aerosol NH4+ and NO3− as part of the NitroEurope Integrated Project.

Network results providing modelled inorganic Nr dry deposition to the 55 monitoring sites are presented, using four existing dry deposition routines, revealing inter-model differences and providing ensemble average deposition estimates. Dry deposition is generally largest over forests in regions with large ambient NH3 concentrations, exceeding 30–40 kg N ha−1 yr−1 over parts of the Netherlands and Belgium, while some remote forests in Scandinavia receive less than 2 kg N ha−1 yr−1. Turbulent Nr deposition to short vegetation ecosystems is generally smaller than to forests due to reduced turbulent exchange, but also because NH3 inputs to fertilised, agricultural systems are limited by the presence of a substantial NH3 source in the vegetation, leading to periods of emission as well as deposition.

Differences between models reach a factor 2–3 and are often greater than differences between monitoring sites. For soluble Nr gases such as NH3 and HNO3, the non-stomatal pathways are responsible for most of the annual uptake over many surfaces, especially the non-agricultural land uses, but parameterisations of the sink strength vary considerably among models. For aerosol NH4
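The basic inferential calculation behind such deposition estimates multiplies a modelled deposition velocity by the measured air concentration. The sketch below uses invented but plausible values; the velocity, concentration, and NH3-to-N mass conversion are assumptions for illustration, not network results:

```python
# inferential dry deposition: flux = deposition velocity x air concentration
vd = 0.012        # m s^-1, hypothetical NH3 deposition velocity over forest
conc = 2.0e-6     # g m^-3 (2 micrograms NH3 per m^3, hypothetical)

flux = vd * conc                                      # g NH3 m^-2 s^-1
seconds_per_year = 3.15e7
annual_nh3 = flux * seconds_per_year * 1.0e4 / 1.0e3  # kg NH3 ha^-1 yr^-1
annual_n = annual_nh3 * 14.0 / 17.0                   # kg N ha^-1 yr^-1
```

With these inputs the result lands in the single-digit kg N ha⁻¹ yr⁻¹ range quoted for intermediate sites; the inter-model spread discussed above comes almost entirely from how vd is parameterised.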

14. System identification for robust and inferential control : with applications to ILC and precision motion systems

NARCIS (Netherlands)

Oomen, T.A.E.

2010-01-01

Feedback control is able to improve the performance of systems in the presence of uncertain dynamical behavior and disturbances. Although a properly designed controller can cope with large uncertainty, certain knowledge regarding the system behavior is crucial for control design. Hence, high

15. Comparing Two Inferential Approaches to Handling Measurement Error in Mixed-Mode Surveys

Directory of Open Access Journals (Sweden)

Buelens Bart

2017-06-01

Full Text Available Nowadays, sample survey data collection strategies combine web, telephone, face-to-face, or other modes of interviewing in a sequential fashion. The measurement bias of survey estimates of means and totals is composed of different mode-dependent measurement errors, as each data collection mode has its own associated measurement error. This article contains an appraisal of two recently proposed methods of inference in this setting. The first is a calibration adjustment to the survey weights so as to balance the survey response to a prespecified distribution of the respondents over the modes. The second is a prediction method that seeks to correct measurements towards a benchmark mode. The two methods are motivated differently but coincide in some circumstances and agree in terms of required assumptions. The methods are applied to the Labour Force Survey in the Netherlands and are found to provide almost identical estimates of the number of unemployed. Each method has its own specific merits. Both can be applied easily in practice as they do not require additional data collection beyond the regular sequential mixed-mode survey, an attractive feature for national statistical institutes and other survey organisations.
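The first method, calibrating weights to a prespecified mode distribution, can be sketched in its simplest post-stratification form; the respondent counts and target shares below are invented for illustration:

```python
import numpy as np

# hypothetical sequential mixed-mode respondents and a target mode distribution
modes = np.array(["web"]*600 + ["tel"]*250 + ["f2f"]*150)
target = {"web": 0.50, "tel": 0.30, "f2f": 0.20}

n = len(modes)
w = np.ones(n)
for mode, share in target.items():
    mask = modes == mode
    w[mask] = share * n / mask.sum()   # post-stratification weight per mode

# after calibration the weighted mode shares match the target exactly
web_share = w[modes == "web"].sum() / w.sum()
```

Full calibration in practice would balance further auxiliary variables simultaneously (e.g. via raking), but the mode-balancing idea is the same.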

16. Falling in the elderly: Do statistical models matter for performance criteria of fall prediction? Results from two large population-based studies.

Science.gov (United States)

Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Fantino, Bruno; Levinoff, Elise J; Allali, Gilles; Beauchet, Olivier

2016-01-01

To compare performance criteria (i.e., sensitivity, specificity, positive predictive value, negative predictive value, area under the receiver operating characteristic curve and accuracy) of linear and non-linear statistical models for fall risk in older community-dwellers. Participants were recruited in two large population-based studies, "Prévention des Chutes, Réseau 4" (PCR4, n=1760, cross-sectional design, retrospective collection of falls) and "Prévention des Chutes Personnes Agées" (PCPA, n=1765, cohort design, prospective collection of falls). Six linear statistical models (i.e., logistic regression, discriminant analysis, Bayes network algorithm, decision tree, random forest, boosted trees), three non-linear statistical models corresponding to artificial neural networks (multilayer perceptron, genetic algorithm and neuroevolution of augmenting topologies [NEAT]) and the adaptive neuro-fuzzy inference system (ANFIS) were used. Falls ≥1 characterizing fallers and falls ≥2 characterizing recurrent fallers were used as outcomes. Data of studies were analyzed separately and together. NEAT and ANFIS had better performance criteria compared to other models. The highest performance criteria were reported with NEAT when using the PCR4 database and falls ≥1, and with both NEAT and ANFIS when pooling data together and using falls ≥2. However, sensitivity and specificity were unbalanced. Sensitivity was higher than specificity when identifying fallers, whereas the converse was found when predicting recurrent fallers. Our results showed that NEAT and ANFIS were non-linear statistical models with the best performance criteria for the prediction of falls, but their sensitivity and specificity were unbalanced, underscoring that the models should be used respectively for the screening of fallers and the diagnosis of recurrent fallers. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
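The performance criteria compared in this study all derive from a 2x2 confusion matrix; the labels and predictions below are invented for illustration:

```python
import numpy as np

# hypothetical observed faller status (1 = faller) and model predictions
y_true = np.array([1,1,1,1,0,0,0,0,0,0,1,0,1,0,0,1,0,0,1,0])
y_pred = np.array([1,1,0,1,0,0,1,0,0,0,1,0,0,0,1,1,0,0,1,1])

tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives
tn = np.sum((y_pred == 0) & (y_true == 0))   # true negatives

sensitivity = tp / (tp + fn)                 # screening emphasises this
specificity = tn / (tn + fp)                 # diagnosis emphasises this
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / len(y_true)
```

The imbalance the authors report is visible in exactly these quantities: a model can trade sensitivity against specificity without changing accuracy much.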

17. Experimental uncertainty estimation and statistics for data having interval uncertainty.

Energy Technology Data Exchange (ETDEWEB)

Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

2007-05-01

This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
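As a taste of the interval statistics the report surveys: the sample mean of interval-valued data is itself an interval, obtained by averaging endpoints. This is a sketch of the general idea only, not the report's algorithms; other statistics, such as the upper bound of the variance, are much harder to compute in general, which is part of the computability question the report summarizes.

```python
def interval_mean(intervals):
    """Sample mean of interval-valued data: average the endpoints.

    intervals: list of (lo, hi) pairs with lo <= hi.
    Returns the (lower, upper) bounds of the mean.
    """
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

m = interval_mean([(1.0, 2.0), (3.0, 5.0), (2.0, 2.0)])  # → (2.0, 3.0)
```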

18. Correlation of accident statistics to whiplash performance parameters using the RID 3D and BioRID dummy

NARCIS (Netherlands)

Cappon, H.J.; Hell, W.; Hoschopf, H.; Muser, M.; Song, E.; Wismans, J.S.H.M.

2005-01-01

Injury criteria are crucial in whiplash protection evaluations. Therefore, the real-life rear impact performance of eight car seats was compared with various injury criteria using linear correlation techniques. Two dummies, BioRID and RID 3D, and two types of pulses were used: generic and car

19. Statistical modelling of citation exchange between statistics journals.

Science.gov (United States)

Varin, Cristiano; Cattelan, Manuela; Firth, David

2016-01-01

Rankings of scholarly journals based on citation data are often met with scepticism by the scientific community. Part of the scepticism is due to disparity between the common perception of journals' prestige and their ranking based on citation counts. A more serious concern is the inappropriate use of journal rankings to evaluate the scientific influence of researchers. The paper focuses on analysis of the table of cross-citations among a selection of statistics journals. Data are collected from the Web of Science database published by Thomson Reuters. Our results suggest that modelling the exchange of citations between journals is useful to highlight the most prestigious journals, but also that journal citation data are characterized by considerable heterogeneity, which needs to be properly summarized. Inferential conclusions require care to avoid potential overinterpretation of insignificant differences between journal ratings. Comparison with published ratings of institutions from the UK's research assessment exercise shows strong correlation at aggregate level between assessed research quality and journal citation 'export scores' within the discipline of statistics.
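The paper itself fits a Stigler-type (Bradley-Terry) model to the cross-citation table; the crude ratio below is only a hypothetical illustration of the "export score" idea, with an assumed matrix convention (C[i][j] = citations received by journal i from journal j):

```python
def export_scores(C, names):
    """Naive export score: fraction of a journal's citation exchange it exports."""
    k = len(names)
    scores = {}
    for i, name in enumerate(names):
        exported = sum(C[i][j] for j in range(k) if j != i)  # others citing i
        imported = sum(C[j][i] for j in range(k) if j != i)  # i citing others
        scores[name] = exported / (exported + imported)
    return scores

# Toy 2-journal table: "A" is cited 10 times by "B" and cites "B" 5 times.
scores = export_scores([[0, 10], [5, 0]], ["A", "B"])
```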

20. Statistical performance of image cytometry for DNA, lipids, cytokeratin, & CD45 in a model system for circulating tumor cell detection.

Science.gov (United States)

Futia, Gregory L; Schlaepfer, Isabel R; Qamar, Lubna; Behbakht, Kian; Gibson, Emily A

2017-07-01

Detection of circulating tumor cells (CTCs) in a blood sample is limited by the sensitivity and specificity of the biomarker panel used to identify CTCs over other blood cells. In this work, we present Bayesian theory that shows how test sensitivity and specificity set the rarity of cell that a test can detect. We perform our calculation of sensitivity and specificity on our image cytometry biomarker panel by testing on pure disease-positive (D+) populations (MCF7 cells) and pure disease-negative (D-) populations (leukocytes). In this system, we performed multi-channel confocal fluorescence microscopy to image biomarkers of DNA, lipids, CD45, and cytokeratin. Using custom software, we segmented our confocal images into regions of interest consisting of individual cells and computed the image metrics of total signal, second spatial moment, spatial frequency second moment, and the product of the spatial-spatial frequency moments. We present our analysis of these 16 features. The best performing of the 16 features produced an average separation of three standard deviations between D+ and D- and an average detectable rarity of ∼1 in 200. We performed multivariable regression and feature selection to combine multiple features for increased performance and showed an average separation of seven standard deviations between the D+ and D- populations, making our average detectable rarity ∼1 in 480. Histograms and receiver operating characteristic (ROC) curves for these features and regressions are presented. We conclude that simple regression analysis holds promise to further improve the separation of rare cells in cytometry applications. © 2017 International Society for Advancement of Cytometry.
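The Bayesian point in the abstract can be sketched with Bayes' rule: at prevalence p, the probability that a positive event is a true CTC is PPV = sens·p / (sens·p + (1−spec)(1−p)), so specificity is what limits the detectable rarity. The numbers below are illustrative, not the paper's:

```python
def ppv(sens, spec, prevalence):
    """Posterior probability that a test-positive cell is truly positive."""
    p_pos = sens * prevalence + (1 - spec) * (1 - prevalence)
    return sens * prevalence / p_pos

# Even at 99% specificity, a 1-in-200 target cell yields PPV of only ~0.32,
# i.e. most positives are still false positives.
v = ppv(sens=0.95, spec=0.99, prevalence=1 / 200)
```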

1. Evaluating performance measures to determine training effectiveness

International Nuclear Information System (INIS)

Klemm, R.W.; Feiza, A.S.

1987-01-01

This research was conceived and dedicated to helping the CECo training organization become a more integrated part of the corporate business. The target population for this study was nuclear and fossil generating station employees who directly impacted the production of electricity. The target sample (n = 150) included: instrument, mechanical, and electrical maintenance personnel; control room operators; engineers, radiation chemists, and other technical specialists; and equipment operators and attendants. A total of four instruments were utilized by this study. Three instruments were administered to the generating station personnel: a demographic form, a learning style profile, and a motivational style profile. The focal instrument, a performance skills rating form, was administered to supervisory personnel. Data analysis consisted of three major parts. Part one established internal consistency through Cronbach alpha statistics. Part two provided summary statistics and breakdown tables for important variables. Part three provided inferential statistics responding to the research questions. All six Performance Skills variables discriminated significantly between the trained and non-trained groups (p < .001). In all cases, the mean value for the trained group exceeded the mean value for the non-trained group. Implications for further research indicate that training does have a quantifiable effect on job performance
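The internal-consistency statistic named here, Cronbach's alpha, is α = k/(k−1)·(1 − Σ item variances / variance of totals). A minimal sketch with an illustrative data layout, not the study's instruments:

```python
def cronbach_alpha(items):
    """items: k lists, one per item, each holding scores for the same respondents."""
    k, n = len(items), len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Two perfectly correlated items give alpha = 1.0:
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
```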

2. Statistical Analysis of Long-Term Trend of Performance, Production and Cultivated Area of 17 Field Crops Khorasan Razavi Province

Directory of Open Access Journals (Sweden)

H. Zareabyaneh

2014-12-01

Full Text Available Any planning for the future requires estimates of future conditions, which can be obtained by studying changes over time series. In this study, changes in the yield, production and cultivated area of 17 field crops of Khorasan Razavi province over a 25-year period were examined with the Mann-Kendall test, Sen's slope estimator and linear regression. The three tests showed that trends were significant at the 0.01 or 0.05 level for 76.5% of yield series, 88.2% of cultivated-area series and 55.8% of production series. Yield trends were increasing for 58.8% of crops, decreasing for 17.7% and non-significant for 23.5%. Similarly, cultivated area increased for 23.5% of crops, decreased for 64.7% and showed no significant trend for 11.8%. Production increased significantly for 29.4% of crops and decreased significantly for 29.4%. Closer analysis showed that the yield, production and cultivated area of three crops (cotton, grain and tomatoes) increased significantly. All three methods indicated that the strongest negative trends in yield and cultivated area were for pea and melon, respectively, while the strongest positive trends were found in the production of tomatoes and grain, the yield of onions, potatoes and tomatoes, and the cultivated area of tomatoes. Linear and nonparametric trends for the province's major crops (wheat, barley, sugar beet, cotton, melons, watermelons and tomatoes) were significant at the 0.01 level, underlining the importance of these crops in the gross provincial product.
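The two nonparametric tools named here can be sketched compactly: the Mann-Kendall S statistic sums the signs of all pairwise differences, and Sen's slope is the median of all pairwise slopes. The significance test additionally needs the variance of S and a normal approximation, omitted in this sketch:

```python
from statistics import median

def mann_kendall_s(y):
    """Mann-Kendall S: sum of sign(y[j] - y[i]) over all pairs i < j."""
    return sum((y[j] > y[i]) - (y[j] < y[i])
               for i in range(len(y)) for j in range(i + 1, len(y)))

def sens_slope(y):
    """Sen's slope estimator: median of pairwise slopes (unit time steps assumed)."""
    return median((y[j] - y[i]) / (j - i)
                  for i in range(len(y)) for j in range(i + 1, len(y)))

s = mann_kendall_s([1, 2, 3, 4])   # → 6 (every pair increases)
b = sens_slope([1, 2, 3, 4])       # → 1.0
```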

3. Mobile Jump Assessment (mJump): A Descriptive and Inferential Study.

Science.gov (United States)

Mateos-Angulo, Alvaro; Galán-Mercant, Alejandro; Cuesta-Vargas, Antonio

2015-08-26

Vertical jump tests are used in athletics and rehabilitation to measure physical performance in people of different age ranges and fitness levels. Jumping ability can be analyzed through different variables; the most commonly used are flight time and jump height. They can be obtained by a variety of measuring devices, but most are limited to laboratory use only. The current generation of smartphones contains inertial sensors that are able to record kinematic variables for human motion analysis, and they are easily accessible and portable tools for clinical use. The aim of this study was to describe and analyze, in a cohort of healthy people, the kinematic characteristics obtained with the inertial sensor incorporated in the iPhone 4S, lower limb strength measured with a handheld dynamometer, and the jump variables (flight time and jump height) obtained with a contact mat in the squat jump and countermovement jump tests. A cross-sectional study was conducted on a population of healthy young adults. Twenty-seven participants performed three trials (n=81 jumps) of squat jump and countermovement jump tests. Acceleration variables were measured through a smartphone's inertial sensor. Additionally, jump variables from a contact mat and lower limb dynamometry were collected. In the present study, the kinematic variables derived from acceleration through the inertial sensor of an iPhone 4S smartphone, dynamometry of the lower limbs with a handheld dynamometer, and the height and flight time with a contact mat have been described in vertical jump tests in a cohort of young healthy subjects. The execution has been described, examined and identified in a squat jump test and countermovement jump test through acceleration variables obtained with the smartphone. The built-in iPhone 4S inertial sensor is able to measure acceleration variables while performing vertical jump tests for the squat jump and countermovement jump in healthy young adults. The acceleration
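The standard contact-mat conversion from flight time to jump height (a general ballistic relation, not specific to this study): the jumper rises for half the flight time, so h = g·t²/8.

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height(flight_time_s):
    """Jump height (m) from contact-mat flight time (s): h = g * t^2 / 8."""
    return G * flight_time_s ** 2 / 8

h = jump_height(0.5)  # a 0.5 s flight corresponds to roughly 0.31 m
```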

4. Industrial statistics with Minitab

CERN Document Server

Cintas, Pere Grima; Llabres, Xavier Tort-Martorell

2012-01-01

Industrial Statistics with MINITAB demonstrates the use of MINITAB as a tool for performing statistical analysis in an industrial context. This book covers introductory industrial statistics, exploring the most commonly used techniques alongside those that serve to give an overview of more complex issues. A plethora of examples in MINITAB are featured along with case studies for each of the statistical techniques presented. Industrial Statistics with MINITAB: Provides comprehensive, user-friendly practical guidance on the essential statistical methods applied in industry. Explores

5. Movement Remote Command and Control in the Military Technical Systems through the Inferential Method

Directory of Open Access Journals (Sweden)

Grigore Dumitru

2017-04-01

Full Text Available The research described in the paper at hand approaches the interaction between bio-signals and remote movement control by introducing an original mathematical model of psychophysiological inference from EDA response bio-signals. The experiments were performed using a design consisting of two different bio-signal techniques, in order to obtain, in the variables corresponding to each of them, the same type of electrical behaviour. In order to establish the projective functions, a direct measurement method was used on the levels of epidermal potential in alternating current, of base (SPL) and response (SPR) type, the acquisition being executed with an integrated technical system patented by the author in 2013.

6. Ten years statistics of wind direction and wind velocity measurements performed at the Karlsruhe Nuclear Research Center

International Nuclear Information System (INIS)

Becker, M.; Dilger, H.

1979-06-01

The measurements of wind direction and wind velocity performed at 60 m and 200 m height were evaluated for one year each and frequency distributions of the measured values were established. The velocity was divided into 1 m/s steps and the direction into 10° sectors. The frequency distribution of the wind direction reveals three maxima located in the southwest, northeast and north, respectively. The maximum of the frequency distribution of the wind velocity occurs between 4 and 5 m/s at 200 m height and between 3 and 4 m/s at 60 m height. (orig.)
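The binning described above (1 m/s velocity steps, 10° direction sectors) amounts to a joint frequency table; a sketch with hypothetical records:

```python
from collections import Counter

def wind_frequency(records):
    """records: (direction_deg, speed_m_s) pairs -> Counter keyed by (sector, step)."""
    table = Counter()
    for direction, speed in records:
        sector = int(direction % 360) // 10  # 36 sectors of 10 degrees
        step = int(speed)                    # 1 m/s bins
        table[(sector, step)] += 1
    return table

freq = wind_frequency([(5, 4.2), (359, 3.9), (8, 4.8)])
```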

7. New adaptive statistical iterative reconstruction ASiR-V: Assessment of noise performance in comparison to ASiR.

Science.gov (United States)

De Marco, Paolo; Origgi, Daniela

2018-03-01

To assess the noise characteristics of the new adaptive statistical iterative reconstruction (ASiR-V) in comparison to ASiR. A water phantom was acquired with common clinical scanning parameters, at five different levels of CTDI_vol. Images were reconstructed with different kernels (STD, SOFT, and BONE), different IR levels (40%, 60%, and 100%) and different slice thicknesses (ST) (0.625 and 2.5 mm), both for ASiR-V and ASiR. Noise properties were investigated and the noise power spectrum (NPS) was evaluated. ASiR-V significantly reduced noise relative to FBP: noise reduction was in the range 23%-60% for the 0.625 mm ST and 12%-64% for the 2.5 mm ST. Above 2 mGy, noise reduction for ASiR-V had no dependence on dose. Noise reduction for ASiR-V depends on ST, being greater for the STD and SOFT kernels at 2.5 mm. For the STD kernel, ASiR-V has greater noise reduction than ASiR for both ST. For the SOFT kernel, results vary according to dose and ST, while for the BONE kernel ASiR-V shows less noise reduction. NPS for the CT Revolution has dose-dependent behavior at lower doses. NPS for ASiR-V and ASiR is similar, showing a shift toward lower frequencies as the IR level increases for the STD and SOFT kernels. The NPS differs between ASiR-V and ASiR with the BONE kernel. NPS for ASiR-V appears to be ST dependent, having a shift toward lower frequencies for the 2.5 mm ST. ASiR-V showed greater noise reduction than ASiR for the STD and SOFT kernels, while keeping the same NPS. For the BONE kernel, ASiR-V presents a completely different behavior, with less noise reduction and a modified NPS. Noise properties of ASiR-V are dependent on reconstruction slice thickness. The noise properties of ASiR-V suggest the need for further measurements and efforts to establish new CT protocols to optimize clinical imaging. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
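A common way to estimate a 2D NPS, sketched under illustrative assumptions (an ensemble of noise-only ROIs, uniform pixel spacing); this is the textbook estimator, not necessarily the authors' exact protocol:

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """2D noise power spectrum: scaled ensemble mean of |FFT|^2 of detrended ROIs."""
    rois = np.asarray(rois, dtype=float)
    _, ny, nx = rois.shape
    spectra = [np.abs(np.fft.fft2(r - r.mean())) ** 2 for r in rois]
    return (pixel_mm ** 2 / (nx * ny)) * np.mean(spectra, axis=0)
```

By Parseval's theorem this NPS sums to the (pixel-area-scaled) noise power, so a shift toward lower frequencies at equal noise magnitude reflects a coarser noise texture, which is the behavior compared between ASiR-V and ASiR above.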

8. Usage Statistics

Science.gov (United States)

MedlinePlus usage statistics (https://medlineplus.gov/usestatistics.html), reporting page views and unique visitors by quarter since Oct-Dec 1998.

9. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2009 and since 1971

Energy Technology Data Exchange (ETDEWEB)

Davis, P.M.; Dubois, J.; Gambardella, F.; Sanchez-Garcia, E.; Uhlig, F.

2011-05-15

CONCAWE has collected 39 years of spillage data on European cross-country oil pipelines. At about 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 870 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2009 and a full historical perspective since 1971. The performance over the whole 39 years is analysed in various ways including gross and net spillage volumes and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by in line tools (intelligence pigs) is also reported. 5 spillage incidents were reported in 2009, corresponding to 0.14 spillages per 1000 km of line, well below the 5-year average of 0.28 and the long-term running average of 0.53, which has been steadily decreasing over the years from a value of 1.2 in the mid 70s. There were no fires, fatalities or injuries connected with these spills. 4 incidents were due to mechanical failure and 1 was connected to past third party activities. Over the long term, third party activities remain the main cause of spillage incidents although mechanical failures have increased in recent years, a trend that needs to be scrutinised in years to come.

10. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2007 and since 1971

International Nuclear Information System (INIS)

2009-11-01

CONCAWE has collected 37 years of spillage data on European cross-country oil pipelines. At over 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 800 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2007 and a full historical perspective since 1971. The performance over the whole 37 years is analysed in various ways including gross and net spillage volumes and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by intelligence pigs is also reported. 9 spillage incidents were reported in 2007, corresponding to 0.28 spillages per 1000 km of line, just under the 5-year average and well below the long-term running average of 0.55, which has been steadily decreasing over the years from a value of 1.2 in the mid 70s. There were no fires, fatalities or injuries connected with these spills. 1 incident was due to mechanical failure, 2 incidents to corrosion and 6 were connected to third party activities. Over the long term, third party activities are the main cause of spillage incidents.

11. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2010 and since 1971

International Nuclear Information System (INIS)

Davis, P.M.; Dubois, J.; Gambardella, F.; Sanchez-Garcia, E.; Uhlig, F.

2011-12-01

CONCAWE has collected 40 years of spillage data on European cross-country oil pipelines. At about 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 800 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2010 and a full historical perspective since 1971. The performance over the whole 40 years is analysed in various ways, including gross and net spillage volumes, and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by in-line tools (intelligence pigs) is also reported. 4 spillage incidents were reported in 2010, corresponding to 0.12 spillages per 1000 km of line, well below the 5-year average of 0.25 and the long-term running average of 0.52, which has been steadily decreasing over the years from a value of 1.2 in the mid-70s. There were no fires, fatalities or injuries connected with these spills. 2 incidents were due to mechanical failure, 1 to external corrosion, and 1 was connected to past third party activities. Over the long term, third party activities remain the main cause of spillage incidents although mechanical failures have increased in recent years, a trend that needs to be scrutinised in years to come.

12. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2010 and since 1971

Energy Technology Data Exchange (ETDEWEB)

Davis, P.M.; Dubois, J.; Gambardella, F.; Sanchez-Garcia, E.; Uhlig, F.

2011-12-15

CONCAWE has collected 40 years of spillage data on European cross-country oil pipelines. At about 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 800 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2010 and a full historical perspective since 1971. The performance over the whole 40 years is analysed in various ways, including gross and net spillage volumes, and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by in-line tools (intelligence pigs) is also reported. 4 spillage incidents were reported in 2010, corresponding to 0.12 spillages per 1000 km of line, well below the 5-year average of 0.25 and the long-term running average of 0.52, which has been steadily decreasing over the years from a value of 1.2 in the mid-70s. There were no fires, fatalities or injuries connected with these spills. 2 incidents were due to mechanical failure, 1 to external corrosion, and 1 was connected to past third party activities. Over the long term, third party activities remain the main cause of spillage incidents although mechanical failures have increased in recent years, a trend that needs to be scrutinised in years to come.

13. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2007 and since 1971

Energy Technology Data Exchange (ETDEWEB)

NONE

2009-11-15

CONCAWE has collected 37 years of spillage data on European cross-country oil pipelines. At over 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 800 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2007 and a full historical perspective since 1971. The performance over the whole 37 years is analysed in various ways including gross and net spillage volumes and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by intelligence pigs is also reported. 9 spillage incidents were reported in 2007, corresponding to 0.28 spillages per 1000 km of line, just under the 5-year average and well below the long-term running average of 0.55, which has been steadily decreasing over the years from a value of 1.2 in the mid 70s. There were no fires, fatalities or injuries connected with these spills. 1 incident was due to mechanical failure, 2 incidents to corrosion and 6 were connected to third party activities. Over the long term, third party activities are the main cause of spillage incidents.

14. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2008 and since 1971

Energy Technology Data Exchange (ETDEWEB)

Davis, P.M.; Dubois, J.; Gambardella, F.; Uhlig, F.

2010-06-15

CONCAWE has collected 38 years of spillage data on European cross-country oil pipelines. At over 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 780 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2008 and a full historical perspective since 1971. The performance over the whole 38 years is analysed in various ways including gross and net spillage volumes and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by in line tools (intelligence pigs) is also reported. 12 spillage incidents were reported in 2008, corresponding to 0.34 spillages per 1000 km of line, somewhat above the 5-year average of 0.28 but well below the long-term running average of 0.54, which has been steadily decreasing over the years from a value of 1.2 in the mid 70s. There were no fires, fatalities or injuries connected with these spills. 7 incidents were due to mechanical failure, 1 incident to corrosion and 4 were connected to third party activities. Over the long term, third party activities remain the main cause of spillage incidents although mechanical failures have increased in recent years, a trend that needs to be scrutinised in years to come.

15. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2009 and since 1971

International Nuclear Information System (INIS)

Davis, P.M.; Dubois, J.; Gambardella, F.; Sanchez-Garcia, E.; Uhlig, F.

2011-05-01

CONCAWE has collected 39 years of spillage data on European cross-country oil pipelines. At about 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 870 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2009 and a full historical perspective since 1971. The performance over the whole 39 years is analysed in various ways including gross and net spillage volumes and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by in line tools (intelligence pigs) is also reported. 5 spillage incidents were reported in 2009, corresponding to 0.14 spillages per 1000 km of line, well below the 5-year average of 0.28 and the long-term running average of 0.53, which has been steadily decreasing over the years from a value of 1.2 in the mid 70s. There were no fires, fatalities or injuries connected with these spills. 4 incidents were due to mechanical failure and 1 was connected to past third party activities. Over the long term, third party activities remain the main cause of spillage incidents although mechanical failures have increased in recent years, a trend that needs to be scrutinised in years to come.

16. Mathematical statistics

CERN Document Server

Pestman, Wiebe R

2009-01-01

This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

17. Frog Statistics

Science.gov (United States)

Whole Frog Project and Virtual Frog Dissection web-access statistics (wwwstats output), filtered to exclude duplicate or extraneous accesses; the reported byte counts under-represent the bytes actually requested.

18. Act first, think later: the presence and absence of inferential planning in problem solving.

Science.gov (United States)

Ormerod, Thomas C; Macgregor, James N; Chronicle, Edward P; Dewald, Andrew D; Chu, Yun

2013-10-01

Planning is fundamental to successful problem solving, yet individuals sometimes fail to plan even one step ahead when it lies within their competence to do so. In this article, we report two experiments in which we explored variants of a ball-weighing puzzle, a problem that has only two steps, yet nonetheless yields performance consistent with a failure to plan. The results fit a computational model in which a solver's attempts are determined by two heuristics: maximization of the apparent progress made toward the problem goal and minimization of the problem space in which attempts are sought. The effectiveness of these heuristics was determined by lookahead, defined operationally as the number of steps evaluated in a planned move. Where move outcomes cannot be visualized but must be inferred, planning is constrained to the point where some individuals apply zero lookahead, which with n-ball problems yields seemingly irrational unequal weighs. Applying general-purpose heuristics with or without lookahead accounts for a range of rational and irrational phenomena found with insight and noninsight problems.

19. Coupling 2D/3D registration method and statistical model to perform 3D reconstruction from partial x-rays images data.

Science.gov (United States)

Cresson, T; Chav, R; Branchaud, D; Humbert, L; Godbout, B; Aubert, B; Skalli, W; De Guise, J A

2009-01-01

3D reconstruction of the spine from frontal and sagittal radiographs is extremely challenging. The overlying features of soft tissues and air cavities interfere with image processing, and it is difficult to obtain information accurate enough to reconstruct complete 3D models. To overcome these problems, the proposed method efficiently combines the partial information contained in two images of a patient with a statistical 3D spine model generated from a database of scoliotic patients. The algorithm operates through two simultaneous iterating processes. The first one generates a personalized vertebra model using a 2D/3D registration process with bone boundaries extracted from the radiographs, while the other infers the position and the shape of the other vertebrae from the current estimate of the registration process using a statistical 3D model. Experimental evaluations have shown good performance of the proposed approach in terms of accuracy and robustness when compared to CT-scan.

20. Statistical model based iterative reconstruction (MBIR) in clinical CT systems. Part II. Experimental assessment of spatial resolution performance

Energy Technology Data Exchange (ETDEWEB)

Li, Ke; Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Garrett, John; Ge, Yongshuai [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States)

2014-07-15

Purpose: Statistical model based iterative reconstruction (MBIR) methods have been introduced to clinical CT systems and are being used in some clinical diagnostic applications. The purpose of this paper is to experimentally assess the unique spatial resolution characteristics of this nonlinear reconstruction method and identify its potential impact on the detectabilities and the associated radiation dose levels for specific imaging tasks. Methods: The thoracic section of a pediatric phantom was repeatedly scanned 50 or 100 times using a 64-slice clinical CT scanner at four different dose levels (CTDI_vol = 4, 8, 12, 16 mGy). Both filtered backprojection (FBP) and MBIR (Veo®, GE Healthcare, Waukesha, WI) were used for image reconstruction and results were compared with one another. Eight test objects in the phantom with contrast levels ranging from 13 to 1710 HU were used to assess spatial resolution. The axial spatial resolution was quantified with the point spread function (PSF), while the z resolution was quantified with the slice sensitivity profile. Both were measured locally on the test objects and in the image domain. The dependence of spatial resolution on contrast and dose levels was studied. The study also features a systematic investigation of the potential trade-off between spatial resolution and locally defined noise and their joint impact on the overall image quality, which was quantified by the image domain-based channelized Hotelling observer (CHO) detectability index d′. Results: (1) The axial spatial resolution of MBIR depends on both radiation dose level and image contrast level, whereas it is supposedly independent of these two factors in FBP. The axial spatial resolution of MBIR always improved with an increasing radiation dose level and/or contrast level. (2) The axial spatial resolution of MBIR became equivalent to that of FBP at some transitional contrast level, above which MBIR demonstrated superior spatial resolution than

1. Investigation and statistical modeling of InAs-based double gate tunnel FETs for RF performance enhancement

Science.gov (United States)

Poorvasha, S.; Lakshmi, B.

2018-05-01

In this paper, the RF performance of InAs-based double gate (DG) tunnel field effect transistors (TFETs) is investigated both qualitatively and quantitatively. The investigation is carried out by varying the geometrical and doping parameters of the TFETs to extract various RF parameters: unity gain cut-off frequency (f t), maximum oscillation frequency (f max), intrinsic gain, and admittance (Y) parameters. An asymmetric gate oxide is introduced in the gate-drain overlap and compared with that of DG TFETs. A higher ON-current (I ON) of about 0.2 mA and a lower leakage current (I OFF) of 29 fA are achieved for the DG TFET with gate-drain overlap. Due to the increase in transconductance (g m), higher f t and intrinsic gain are attained for the DG TFET with gate-drain overlap. A higher f max of 985 GHz is obtained for a drain doping of 5 × 10^17 cm^-3 because of the reduced gate-drain capacitance (C gd) in the DG TFET with gate-drain overlap. In terms of Y-parameters, gate oxide thickness variation offers better performance due to the reduced values of C gd. A second-order numerical polynomial model is generated for all the RF responses as a function of the geometrical and doping parameters. The simulation results are compared with this numerical model, and the predicted values match the simulated values. Project supported by the Department of Science and Technology, Government of India under SERB Scheme (No. SERB/F/2660).
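A second-order polynomial response model of the kind described can be sketched, in one dimension, as an ordinary least-squares quadratic fit of an RF response against a single design parameter. The fitting routine and data below are illustrative assumptions, not the authors' model:

```python
# Illustrative second-order (quadratic) response model y = c0 + c1*x + c2*x^2,
# fitted by ordinary least squares; the data points are made up for the sketch.

def fit_quadratic(xs, ys):
    # build the 3x3 normal equations A c = b for the monomial basis [1, x, x^2]
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    # solve by Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, 3))) / A[r][r]
    return coeffs

# exact quadratic data: y = 2 + 3x + 0.5x^2, so the fit should recover it
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 3.0 * x + 0.5 * x * x for x in xs]
c0, c1, c2 = fit_quadratic(xs, ys)
print(round(c0, 6), round(c1, 6), round(c2, 6))
```

The paper's model is multivariate (several geometrical and doping parameters at once); extending the basis to cross-terms follows the same normal-equations pattern.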

2. DEVELOPMENT OF A LOW COST INFERENTIAL NATURAL GAS ENERGY FLOW RATE PROTOTYPE RETROFIT MODULE

Energy Technology Data Exchange (ETDEWEB)

E. Kelner; D. George; T. Morrow; T. Owen; M. Nored; R. Burkey; A. Minachi

2005-05-01

In 1998, Southwest Research Institute began a multi-year project to develop a working prototype instrument module for natural gas energy measurement. The module will be used to retrofit a natural gas custody transfer flow meter for energy measurement, at a cost an order of magnitude lower than a gas chromatograph. Development and evaluation of the prototype energy meter in 2002-2003 included: (1) refinement of the algorithm used to infer properties of the natural gas stream, such as heating value; (2) evaluation of potential sensing technologies for nitrogen content, improvements in carbon dioxide measurements, and improvements in ultrasonic measurement technology and signal processing for improved speed of sound measurements; (3) design, fabrication and testing of a new prototype energy meter module incorporating these algorithm and sensor refinements; and (4) laboratory and field performance tests of the original and modified energy meter modules. Field tests of the original energy meter module have provided results in close agreement with an onsite gas chromatograph. The original algorithm has also been tested at a field site as a stand-alone application using measurements from in situ instruments, and has demonstrated its usefulness as a diagnostic tool. The algorithm has been revised to use measurement technologies existing in the module to measure the gas stream at multiple states and infer nitrogen content. The instrumentation module has also been modified to incorporate recent improvements in CO{sub 2} and sound speed sensing technology. Laboratory testing of the upgraded module has identified additional testing needed to attain the target accuracy in sound speed measurements and heating value.

3. A Comparison of the Performance of Advanced Statistical Techniques for the Refinement of Day-ahead and Longer NWP-based Wind Power Forecasts

Science.gov (United States)

Zack, J. W.

2015-12-01

Predictions from Numerical Weather Prediction (NWP) models are the foundation for wind power forecasts for day-ahead and longer forecast horizons. The NWP models directly produce three-dimensional wind forecasts on their respective computational grids. These can be interpolated to the location and time of interest. However, these direct predictions typically contain significant systematic errors ("biases"). This is due to a variety of factors, including the limited space-time resolution of the NWP models and shortcomings in the models' representation of physical processes. It has become common practice to attempt to improve the raw NWP forecasts by statistically adjusting them through a procedure that is widely known as Model Output Statistics (MOS). The challenge is to identify complex patterns of systematic errors and then use this knowledge to adjust the NWP predictions. The MOS-based improvements are the basis for much of the value added by commercial wind power forecast providers. There are an enormous number of statistical approaches that can be used to generate the MOS adjustments to the raw NWP forecasts. In order to obtain insight into the potential value of some of the newer and more sophisticated statistical techniques, often referred to as "machine learning methods", a MOS-method comparison experiment has been performed for wind power generation facilities in 6 wind resource areas of California. The underlying NWP models that provided the raw forecasts were the two primary operational models of the US National Weather Service: the GFS and NAM models. The focus was on 1- and 2-day-ahead forecasts of the hourly wind-based generation. The statistical methods evaluated included: (1) screening multiple linear regression, which served as a baseline method, (2) artificial neural networks, (3) a decision-tree approach called random forests, (4) gradient boosted regression based upon a decision-tree algorithm, (5) support vector regression and (6) analog ensemble
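With a single predictor, the baseline screening multiple linear regression reduces to fitting observed generation = a + b × raw forecast on a training set and applying the fit to new raw forecasts. The sketch below, with invented numbers, illustrates this simplest form of MOS adjustment:

```python
# Minimal one-predictor MOS (model output statistics) correction sketch:
# fit observed power = a + b * raw NWP forecast on training pairs, then
# apply the fit to correct a new raw forecast. All values are made up.

def fit_mos(raw, obs):
    n = len(raw)
    mx = sum(raw) / n
    my = sum(obs) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, obs))
    sxx = sum((x - mx) ** 2 for x in raw)
    b = sxy / sxx
    a = my - b * mx
    return a, b

# training data with a systematic bias: the model over-forecasts by 25%
raw_train = [10.0, 20.0, 30.0, 40.0, 50.0]
obs_train = [8.0, 16.0, 24.0, 32.0, 40.0]   # obs = 0.8 * raw exactly

a, b = fit_mos(raw_train, obs_train)
corrected = a + b * 25.0    # correct a new raw forecast of 25 MW
print(round(a, 6), round(b, 6), round(corrected, 6))  # 0.0 0.8 20.0
```

The "screening" part of the real method selects which of many candidate predictors enter the regression; the machine-learning alternatives in the list replace the linear map with more flexible function families.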

4. Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.

Science.gov (United States)

Pandolfi, Maurizio; Carreras, Giulia

2018-06-07

It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis tested is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential statistics that take into account the prior probability that the hypothesis tested is true, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.
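The authors' point can be made concrete with Bayes' theorem: given a significance level α, a power, and a pre-study prior P(H), the probability that the hypothesis is true after a "significant" result is power·P(H) / (power·P(H) + α·(1 − P(H))). The numbers below are illustrative, not from the paper:

```python
# Probability that a tested hypothesis is true given a "significant" result,
# by Bayes' theorem. alpha = significance level, power = P(significant | H true),
# prior = pre-study probability that H is true. Values are illustrative.

def posterior_given_significant(prior, alpha=0.05, power=0.8):
    true_pos = power * prior
    false_pos = alpha * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

# a plausible mainstream hypothesis vs. a precarious CAM-style hypothesis
print(round(posterior_given_significant(0.50), 3))   # 0.941
print(round(posterior_given_significant(0.01), 3))   # 0.139
```

With a prior of 1%, even a significant result at α = 0.05 leaves the hypothesis more likely false than true, which is exactly the situation the authors argue applies to most CAM trials.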

5. Statistics for economics

CERN Document Server

2012-01-01

Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analy...

6. Performance studies of GooFit on GPUs vs RooFit on CPUs while estimating the statistical significance of a new physical signal

Science.gov (United States)

2017-10-01

In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented in both the ROOT/RooFit and GooFit frameworks, with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B + → J/ψϕK +. GooFit is an open data analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-ups with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process using multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood-ratio test statistic in different situations in which Wilks' theorem may or may not apply because its regularity conditions are not satisfied.
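The Wilks-theorem behaviour probed in the abstract can be illustrated with a toy Monte Carlo that has nothing to do with GooFit itself: for a Gaussian mean test with known σ, the statistic −2 ln λ = n·x̄²/σ² should follow a χ² distribution with 1 degree of freedom under the null, so about 5% of toy experiments should exceed the critical value 3.841. This sketch, with invented parameters, checks that:

```python
# Toy Monte Carlo check of Wilks' theorem for a Gaussian mean test:
# under H0 (mu = 0, sigma known), -2 ln(lambda) = n * xbar^2 / sigma^2
# follows a chi-square with 1 dof, so P(stat > 3.841) should be ~0.05.
import random

random.seed(42)
n, sigma, trials = 20, 1.0, 5000
exceed = 0
for _ in range(trials):
    xbar = sum(random.gauss(0.0, sigma) for _ in range(n)) / n
    stat = n * xbar * xbar / sigma ** 2   # -2 ln(likelihood ratio)
    if stat > 3.841:                      # chi2_1 critical value at 5%
        exceed += 1
print(exceed / trials)
```

The interesting cases explored in the paper are the ones where this asymptotic χ² behaviour fails, e.g. when a parameter sits on the boundary of its allowed range under the null.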

7. DESIGNING CURRICULUM, CAPACITY OF INNOVATION, AND PERFORMANCES: A STUDY ON THE PESANTRENS IN NORTH SUMATRA

Directory of Open Access Journals (Sweden)

Jafar Syahbuddin Ritonga

2016-06-01

Curriculum Design, Innovation Capacity, and Performance: A Study of Pesantrens in North Sumatra. Many articles and studies on the Pesantren have been published. Unfortunately, almost all of them discuss the Pesantren exclusively from the perspective of Islam. This paper tries to offer a new perspective on the Pesantren: as a business entity. This research paper aims to understand how Pesantrens' performance is influenced by curriculum design at different levels of innovation capacity. The data are analyzed using descriptive and inferential statistics, namely frequencies, multiple regression and hierarchical regression. This paper analyzes the influence of curriculum design on the performance of the Pesantren and the important effects of innovation capacity. It reveals that the influence of designing a modern Islamic curriculum on a Pesantren's performance is expected to vary according to the Pesantren's level of innovation capacity.

8. Statistical physics

CERN Document Server

2012-01-01

This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity and the modern theory of critical phenomena. Beyond that attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.

9. Statistical optics

CERN Document Server

Goodman, Joseph W

2015-01-01

This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems This book covers a variety of statistical problems in optics, including both theory and applications.  The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i

10. Harmonic statistics

Energy Technology Data Exchange (ETDEWEB)

Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

2017-05-15

The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

11. Harmonic statistics

International Nuclear Information System (INIS)

Eliazar, Iddo

2017-01-01

The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

12. Statistical methods

CERN Document Server

Szulc, Stefan

1965-01-01

Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

13. Histoplasmosis Statistics

Science.gov (United States)


14. Statistics in a nutshell

CERN Document Server

Boslaugh, Sarah

2013-01-01

Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.

15. Microbial Performance of Food Safety Control and Assurance Activities in a Fresh Produce Processing Sector Measured Using a Microbial Assessment Scheme and Statistical Modeling

DEFF Research Database (Denmark)

Njage, Patrick Murigu Kamau; Sawe, Chemutai Tonui; Onyango, Cecilia Moraa

2017-01-01

A microbial assessment scheme and statistical modeling were used to systematically assess the microbial performance of core control and assurance activities in five Kenyan fresh produce processing and export companies. Generalized linear mixed models and correlated random-effects joint models for multivariate clustered ... the maximum safety level for environmental samples. Escherichia coli was detected in five of the six CSLs, including the final product. Among the processing-environment samples, the hand or glove swabs of personnel revealed a higher level of predicted contamination with E. coli, and 80% of the factories were ... of contamination with coliforms in water at the inlet than in the final rinse water. Four (80%) of the five assessed processors had poor to unacceptable counts of Enterobacteriaceae on processing surfaces. Personnel-, equipment-, and product-related hygiene measures to improve the performance of preventive ...

16. A study of adult e-learning in higher distance education: a statistical analysis of students' performance in financial accounting at the Spanish University for Distance Education (UNED)

Directory of Open Access Journals (Sweden)

2013-07-01

This article aims to establish a mathematical model to measure the performance of e-learning in adult distance education in the field of financial economics and accounting. As an innovative methodology, a linear regression approach was applied to test correlations between the variables in the model. A teacher innovation network was implemented for e-learning using different types of material. Data were compiled by means of an opinion survey evaluating the usage of the four elements that comprised the design factors of the e-learning model applied at UNED. Null-hypothesis contrasts, carried out by means of linear regression analysis, validated the established model. However, binary logistic analysis and the statistical contrast of the group demonstrated that the effect of e-learning on performance was not as high as had been expected.

17. The Impact of Home Environment Factors on Academic Performance of Senior Secondary School Students in Garki Area District, Abuja - Nigeria

Directory of Open Access Journals (Sweden)

L. T. Dzever

2015-12-01

The study examined the impact of home environment factors on the academic performance of public secondary school students in Garki Area District, Abuja, Nigeria. The stratified sampling technique was used to select 300 students from six public schools, while the simple random sampling technique was used to administer the questionnaire. The study utilized a descriptive survey research design. Data on students' academic performance were obtained from their scores in four selected school subjects. The data were analyzed using descriptive and inferential statistical techniques: Pearson product-moment correlation and multiple regression analysis (ANOVA). The results revealed a positive and significant relationship between permissive parenting style and academic performance (p < 0.05). The study also identified income, educational background and occupational level, as well as permissive parenting style, as the main predictive variables influencing students' academic performance.
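The Pearson product-moment correlation used in studies like this one can be computed in a few lines; the home-environment and exam scores below are invented for illustration:

```python
# Pearson product-moment correlation coefficient, as used to relate a
# home-environment score to an academic-performance score (toy data).

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

home_score = [3, 5, 2, 8, 7, 4]
exam_score = [55, 60, 48, 75, 70, 58]
print(round(pearson_r(home_score, exam_score), 3))
```

A correlation this close to 1 would be unusual for real survey data; the toy values were chosen to make the arithmetic easy to follow, not to mimic the study's effect sizes.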

18. Executive Remuneration and the Financial Performance of Quoted Firms: The Nigerian Experience

Directory of Open Access Journals (Sweden)

Sunday OGBEIDE

2016-12-01

This study examined executive remuneration and firm performance in Nigeria. Specifically, it sought to ascertain the nexus between executive remuneration, firm size and board size and the performance of quoted companies. The population of the study consists of all the firms quoted as at 31st December, 2014. A sample of sixty (60) companies excluding non-financial firms was selected for the period 2013-2014. Summary statistics such as descriptive statistics, correlation and Granger causality tests were used. Inferential statistics, using panel Estimated Generalized Least Squares (EGLS) with fixed effects, were used for empirical validation, after diagnostic tests were applied to strengthen the study. The study found that executive remuneration is related to firm performance but impacted it negatively, though not statistically significantly. Firm size was found not to have a significant positive relationship with firm performance, though it has a causal relationship with it. Board size was found to affect firm performance negatively and was not statistically significant. On this basis, the study suggests that the executive remuneration of quoted firms should be set in a flexible manner, enabling shareholders to know the causal relationship between what executives are paid and how that influences performance.

19. Statistical Diversions

Science.gov (United States)

Petocz, Peter; Sowey, Eric

2012-01-01

The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…

20. Statistical Diversions

Science.gov (United States)

Petocz, Peter; Sowey, Eric

2008-01-01

In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…

1. Scan Statistics

CERN Document Server

Glaz, Joseph

2009-01-01

Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.

2. Analysis of Statistical Performance Measures

National Research Council Canada - National Science Library

Zoltowski, Michael D

2004-01-01

When only a limited number of snapshots is available for estimating the spatial correlation matrix, a low-rank solution of the MVDR equations, obtained via a small number of iterations of Conjugate Gradients (CG...

3. Study design and statistical analysis of data in human population studies with the micronucleus assay.

Science.gov (United States)

Ceppi, Marcello; Gallo, Fabio; Bonassi, Stefano

2011-01-01

The most common study design performed in population studies based on the micronucleus (MN) assay is the cross-sectional study, which is largely performed to evaluate the DNA-damaging effects of exposure to genotoxic agents in the workplace, in the environment, as well as from diet or lifestyle factors. Sample size is still a critical issue in the design of MN studies, since most recent studies considering gene-environment interaction often require a sample size of several hundred subjects, which is in many cases difficult to achieve. The control of confounding is another major threat to the validity of causal inference. The most popular confounders considered in population studies using MN are age, gender and smoking habit. Extensive attention is given to the assessment of effect modification, given the increasing inclusion of biomarkers of genetic susceptibility in the study design. Selected issues concerning the statistical treatment of data have been addressed in this mini-review, starting from data description, which is a critical step of statistical analysis, since it allows possible errors in the dataset to be detected and the validity of assumptions required for more complex analyses to be checked. Basic issues dealing with the statistical analysis of biomarkers are extensively evaluated, including methods to explore the dose-response relationship between two continuous variables and inferential analysis. A critical approach to the use of parametric and non-parametric methods is presented, before addressing the issue of the most suitable multivariate models to fit MN data. In the last decade, the quality of statistical analysis of MN data has certainly evolved, although even nowadays only a small number of studies apply the Poisson model, which is the most suitable method for the analysis of MN data.
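The Poisson assumption behind the recommended model can be checked on count data with a simple dispersion statistic: for Poisson counts the variance equals the mean, so the index D = (n − 1)·s²/x̄ behaves approximately like a χ² variable with n − 1 degrees of freedom. The counts below are invented for illustration:

```python
# Simple check of the Poisson assumption for micronucleus (MN) count data:
# for Poisson counts, variance ~= mean, so the dispersion index
# D = (n - 1) * variance / mean is approximately chi-square with n-1 dof.
# The counts are invented for illustration.

def dispersion_index(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return (n - 1) * var / mean, var / mean

mn_counts = [2, 4, 3, 5, 1, 3, 2, 4, 3, 3]   # MN per 1000 cells, toy values
d, ratio = dispersion_index(mn_counts)
print(round(d, 3), round(ratio, 3))  # 4.0 0.444
```

A variance-to-mean ratio far above 1 would signal overdispersion, in which case the mini-review's concerns apply and a plain Poisson model would understate the uncertainty.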

4. Semiconductor statistics

CERN Document Server

Blakemore, J S

1962-01-01

Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co

5. Statistical Physics

CERN Document Server

Wannier, Gregory Hugh

1966-01-01

Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for

6. Statistics Clinic

Science.gov (United States)

Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

2014-01-01

Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

7. Applied statistics for economists

CERN Document Server

Lewis, Margaret

2012-01-01

This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.

8. Image Statistics

Energy Technology Data Exchange (ETDEWEB)

Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2017-08-08

In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.

9. Accident Statistics

Data.gov (United States)

Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...

10. CMS Statistics

Data.gov (United States)

U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

11. WPRDC Statistics

Data.gov (United States)

Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Data about the usage of the WPRDC site and its various datasets, obtained by combining Google Analytics statistics with information from the WPRDC's data portal.

12. Multiparametric statistics

CERN Document Server

2007-01-01

This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size or even much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters. This theory opens a way to the solution of central problems of multivariate statistics, which up until now have not been solved. Traditional statistical methods based on the idea of infinite sampling often break down in the solution of real problems and, depending on the data, can be inefficient, unstable and even inapplicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...

13. Gonorrhea Statistics

Science.gov (United States)


14. Reversible Statistics

DEFF Research Database (Denmark)

2004-01-01

The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work...... within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit...... in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...

15. Vital statistics

CERN Document Server

MacKenzie, Dana

2004-01-01

The drawbacks of using 19th-century mathematics in physics and astronomy are illustrated. To continue expanding our knowledge of the cosmos, scientists will have to come to terms with modern statistics. Some researchers have deliberately started importing techniques that are used in medical research. However, physicists need to identify the brand of statistics that will be suitable for them, and make a choice between the Bayesian and the frequentist approaches. (Edited abstract).

16. Classification of the medicinal plants of the genus Atractylodes using high-performance liquid chromatography with diode array and tandem mass spectrometry detection combined with multivariate statistical analysis.

Science.gov (United States)

Cho, Hyun-Deok; Kim, Unyong; Suh, Joon Hyuk; Eom, Han Young; Kim, Junghyun; Lee, Seul Gi; Choi, Yong Seok; Han, Sang Beom

2016-04-01

Analytical methods using high-performance liquid chromatography with diode array and tandem mass spectrometry detection were developed for the discrimination of the rhizomes of four Atractylodes medicinal plants: A. japonica, A. macrocephala, A. chinensis, and A. lancea. A quantitative study was performed, selecting five bioactive components, including atractylenolide I, II, III, eudesma-4(14),7(11)-dien-8-one and atractylodin, on twenty-six Atractylodes samples of various origins. Sample extraction was optimized to sonication with 80% methanol for 40 min at room temperature. High-performance liquid chromatography with diode array detection was established using a C18 column with a water/acetonitrile gradient system at a flow rate of 1.0 mL/min, and the detection wavelength was set at 236 nm. Liquid chromatography with tandem mass spectrometry was applied to certify the reliability of the quantitative results. The developed methods were validated by ensuring specificity, linearity, limit of quantification, accuracy, precision, recovery, robustness, and stability. Results showed that cangzhu contained higher amounts of atractylenolide I and atractylodin than baizhu, and especially atractylodin contents showed the greatest variation between baizhu and cangzhu. Multivariate statistical analysis, such as principal component analysis and hierarchical cluster analysis, were also employed for further classification of the Atractylodes plants. The established method was suitable for quality control of the Atractylodes plants. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

17. A statistical approach to evaluate the performance of cardiac biomarkers in predicting death due to acute myocardial infarction: time-dependent ROC curve

Science.gov (United States)

Karaismailoğlu, Eda; Dikmen, Zeliha Günnur; Akbıyık, Filiz; Karaağaoğlu, Ahmet Ergun

2018-04-30

Background/aim: Myoglobin, cardiac troponin T, B-type natriuretic peptide (BNP), and creatine kinase isoenzyme MB (CK-MB) are frequently used biomarkers for evaluating risk of patients admitted to an emergency department with chest pain. Recently, time-dependent receiver operating characteristic (ROC) analysis has been used to evaluate the predictive power of biomarkers where disease status can change over time. We aimed to determine the best set of biomarkers for estimating cardiac death during follow-up. We also obtained optimal cut-off values of these biomarkers, which differentiate between patients with and without risk of death. A web tool was developed to estimate time intervals in risk. Materials and methods: A total of 410 patients admitted to the emergency department with chest pain and shortness of breath were included. Cox regression analysis was used to determine an optimal set of biomarkers for estimating cardiac death and to combine the significant biomarkers. Time-dependent ROC analysis was performed to evaluate the performance of the significant biomarkers and the combined biomarker over 240 h. The bootstrap method was used to assess statistical significance, and the Youden index was used to determine optimal cut-off values. Results: Myoglobin and BNP were significant by multivariate Cox regression analysis. Areas under the time-dependent ROC curves of myoglobin and BNP were about 0.80 during 240 h, and that of the combined biomarker (myoglobin + BNP) increased to 0.90 during the first 180 h. Conclusion: Although myoglobin is not clinically specific to a cardiac event, in our study both myoglobin and BNP were found to be statistically significant for estimating cardiac death. Using this combined biomarker may increase the power of prediction. Our web tool can be useful for evaluating the risk status of new patients and helping clinicians make decisions.
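The Youden index mentioned in the abstract is simple enough to sketch: it selects the cut-off that maximizes sensitivity + specificity - 1. The biomarker values and outcomes below are hypothetical illustration data, not from the study.

```python
def youden_cutoff(values, outcomes):
    """Return the cut-off maximizing Youden's J for a binary outcome.

    values   -- biomarker measurements
    outcomes -- 1 if the event (e.g. cardiac death) occurred, else 0
    """
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        # classify "positive" as value >= cut
        tp = sum(1 for v, o in zip(values, outcomes) if v >= cut and o == 1)
        fn = sum(1 for v, o in zip(values, outcomes) if v < cut and o == 1)
        tn = sum(1 for v, o in zip(values, outcomes) if v < cut and o == 0)
        fp = sum(1 for v, o in zip(values, outcomes) if v >= cut and o == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1  # Youden's J
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical biomarker concentrations and event indicators
values = [10, 20, 30, 40, 50, 60, 70, 80]
outcomes = [0, 0, 0, 1, 0, 1, 1, 1]
cut, j = youden_cutoff(values, outcomes)
```

In a time-dependent setting, as in the study, this calculation would be repeated at each follow-up time using the ROC curve estimated for that time point.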

18. Missing data in randomized clinical trials for weight loss: scope of the problem, state of the field, and performance of statistical methods.

Directory of Open Access Journals (Sweden)

Mai A Elobeid

2009-08-01

Full Text Available Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. We searched PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates being approximated by an exponential decay curve e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward.
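The fitted dropout model above is simple enough to restate in code. This sketch uses the reported estimate λ = 0.0088 per week; the retained fraction at week t is e^(-λt), and the drop-out rate is its complement.

```python
import math

# Survival (non-dropout) fraction under the exponential decay model
# reported in the meta-analysis, with lambda_ = 0.0088 per week.
def retention(t_weeks, lambda_=0.0088):
    return math.exp(-lambda_ * t_weeks)

def dropout(t_weeks, lambda_=0.0088):
    return 1.0 - retention(t_weeks, lambda_)

# At one year (52 weeks) the predicted drop-out rate is about 37%,
# matching the figure quoted in the abstract.
one_year_dropout = dropout(52)
```

A planner could invert the same formula to estimate how much over-enrollment is needed to retain a target sample size at a given study duration.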

19. Storytelling, statistics and hereditary thought: the narrative support of early statistics.

Science.gov (United States)

López-Beltrán, Carlos

2006-03-01

This paper's main contention is that certain methodological developments in science which are apparently distant and unrelated can be seen as part of a sequential story. Focusing on general inferential and epistemological matters, the paper links occurrences separated in both time and space, connected by formal and representational issues rather than social or disciplinary links. It focuses on a few limited aspects of several cognitive practices in medical and biological contexts separated by geography, disciplines and decades, but connected by long-term transdisciplinary representational and inferential structures and constraints. The paper intends to show that a given set of knowledge claims, based on organizing empirical data statistically, can be seen to have been underpinned by a previous, more familiar, and probably more natural, narrative handling of similar evidence. To achieve this, the paper moves from medicine in France in the late eighteenth and early nineteenth century to the second half of the nineteenth century in England among gentleman naturalists, following its subject: the shift from narrative depiction of hereditary transmission of physical peculiarities to posterior statistical articulations of the same phenomena. Some early defenders of heredity as an important (if not the most important) causal presence in the understanding of life adopted singular narratives, in the form of case stories from medical and natural history traditions, to flesh out a special kind of causality peculiar to heredity. This work tries to reconstruct historically the rationale that drove the use of such narratives. It then shows that when this rationale was methodologically challenged, its basic narrative and probabilistic underpinnings were transferred to the statistical quantificational tools that took their place.

20. Statistical nuclear reactions

International Nuclear Information System (INIS)

Hilaire, S.

2001-01-01

A review of the statistical model of nuclear reactions is presented. The main relations are described, together with the ingredients necessary to perform practical calculations. In addition, a substantial overview of the width fluctuation correction factor is given. (author)

1. Statistical optics

Science.gov (United States)

Goodman, J. W.

This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

2. Statistical mechanics

CERN Document Server

Schwabl, Franz

2006-01-01

The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...

3. Statistical mechanics

CERN Document Server

2015-01-01

Statistical mechanics is self-sufficient, written in a lucid manner, keeping in mind the examination system of the universities. The need to study this subject and its relation to thermodynamics is discussed in detail. Starting from the Liouville theorem, statistical mechanics is developed thoroughly. All three types of statistical distribution functions are derived separately, with their range of applications and limitations. Non-interacting ideal Bose and Fermi gases are discussed thoroughly. Properties of liquid He-II and the corresponding models are depicted. White dwarfs and condensed matter physics, transport phenomena - thermal and electrical conductivity, Hall effect, magnetoresistance, viscosity, diffusion, etc. - are discussed. A basic understanding of the Ising model is given to explain phase transitions. The book ends with detailed coverage of the method of ensembles (namely microcanonical, canonical and grand canonical) and their applications. Various numerical and conceptual problems ar...

4. Statistical physics

CERN Document Server

Guénault, Tony

2007-01-01

In this revised and enlarged second edition of an established text Tony Guénault provides a clear and refreshingly readable introduction to statistical physics, an essential component of any first degree in physics. The treatment itself is self-contained and concentrates on an understanding of the physical ideas, without requiring a high level of mathematical sophistication. A straightforward quantum approach to statistical averaging is adopted from the outset (easier, the author believes, than the classical approach). The initial part of the book is geared towards explaining the equilibrium properties of a simple isolated assembly of particles. Thus, several important topics, for example an ideal spin-½ solid, can be discussed at an early stage. The treatment of gases gives full coverage to Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein statistics. Towards the end of the book the student is introduced to a wider viewpoint and new chapters are included on chemical thermodynamics, interactions in, for exam...

5. Statistical Physics

CERN Document Server

Mandl, Franz

1988-01-01

The Manchester Physics Series General Editors: D. J. Sandiford; F. Mandl; A. C. Phillips Department of Physics and Astronomy, University of Manchester Properties of Matter B. H. Flowers and E. Mendoza Optics Second Edition F. G. Smith and J. H. Thomson Statistical Physics Second Edition F. Mandl Electromagnetism Second Edition I. S. Grant and W. R. Phillips Statistics R. J. Barlow Solid State Physics Second Edition J. R. Hook and H. E. Hall Quantum Mechanics F. Mandl Particle Physics Second Edition B. R. Martin and G. Shaw The Physics of Stars Second Edition A. C. Phillips Computing for Scient

6. AP statistics

CERN Document Server

Levine-Wissing, Robin

2012-01-01

All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep

7. Statistical mechanics

CERN Document Server

Davidson, Norman

2003-01-01

Clear and readable, this fine text assists students in achieving a grasp of the techniques and limitations of statistical mechanics. The treatment follows a logical progression from elementary to advanced theories, with careful attention to detail and mathematical development, and is sufficiently rigorous for introductory or intermediate graduate courses.Beginning with a study of the statistical mechanics of ideal gases and other systems of non-interacting particles, the text develops the theory in detail and applies it to the study of chemical equilibrium and the calculation of the thermody

8. PathMAPA: a tool for displaying gene expression and performing statistical tests on metabolic pathways at multiple levels for Arabidopsis

Directory of Open Access Journals (Sweden)

Ma Ligeng

2003-11-01

Full Text Available Abstract Background To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications to date, however, complex pathways have been displayed with static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need to provide useful tools that can generate pathways without manually building images and allow gene expression data to be integrated and analyzed at pathway levels for such experimental organisms as Arabidopsis. Results We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expression for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. Conclusion PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at pathway level. The first feature allows for the periodical updating of genomic data for pathways, while the second feature can provide insight into how treatments affect relevant pathways for the selected experiment(s).

9. Statistical Computing

inference and finite population sampling. Sudhakar Kunte. Elements of statistical computing are discussed in this series. ... which captain gets an option to decide whether to field first or bat first ... may of course not be fair, in the sense that the team which wins ... describe two methods of drawing a random number between 0.

10. Statistical thermodynamics

CERN Document Server

Schrödinger, Erwin

1952-01-01

Nobel Laureate's brilliant attempt to develop a simple, unified standard method of dealing with all cases of statistical thermodynamics - classical, quantum, Bose-Einstein, Fermi-Dirac, and more. The work also includes discussions of the Nernst theorem, Planck's oscillator, fluctuations, the n-particle problem, the problem of radiation, and much more.

11. Energy Statistics

International Nuclear Information System (INIS)

Anon.

1994-01-01

For the years 1992 and 1993, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period. The tables and figures shown in this publication are: Changes in the volume of GNP and energy consumption; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Price development of principal oil products; Fuel prices for power production; Total energy consumption by source; Electricity supply; Energy imports by country of origin in 1993; Energy exports by recipient country in 1993; Consumer prices of liquid fuels; Consumer prices of hard coal and natural gas, prices of indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer and Excise taxes and turnover taxes included in consumer prices of some energy sources

12. Statistical Optics

Science.gov (United States)

Goodman, Joseph W.

2000-07-01

The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

13. Statistical utilitarianism

OpenAIRE

Pivato, Marcus

2013-01-01

We show that, in a sufficiently large population satisfying certain statistical regularities, it is often possible to accurately estimate the utilitarian social welfare function, even if we only have very noisy data about individual utility functions and interpersonal utility comparisons. In particular, we show that it is often possible to identify an optimal or close-to-optimal utilitarian social choice using voting rules such as the Borda rule, approval voting, relative utilitarianism, or a...

14. Spatial Statistical Data Fusion (SSDF)

Science.gov (United States)

Braverman, Amy J.; Nguyen, Hai M.; Cressie, Noel

2013-01-01

As remote sensing for scientific purposes has transitioned from an experimental technology to an operational one, the selection of instruments has become more coordinated, so that the scientific community can exploit complementary measurements. However, technological and scientific heterogeneity across devices means that the statistical characteristics of the data they collect are different. The challenge addressed here is how to combine heterogeneous remote sensing data sets in a way that yields optimal statistical estimates of the underlying geophysical field, and provides rigorous uncertainty measures for those estimates. Different remote sensing data sets may have different spatial resolutions, different measurement error biases and variances, and other disparate characteristics. A state-of-the-art spatial statistical model was used to relate the true, but not directly observed, geophysical field to noisy, spatial aggregates observed by remote sensing instruments. The spatial covariances of the true field and the covariances of the true field with the observations were modeled. The observations are spatial averages of the true field values, over pixels, with different measurement noise superimposed. A kriging framework is used to infer optimal (minimum mean squared error and unbiased) estimates of the true field at point locations from pixel-level, noisy observations. A key feature of the spatial statistical model is the spatial mixed effects model that underlies it. The approach models the spatial covariance function of the underlying field using linear combinations of basis functions of fixed size. Approaches based on kriging require the inversion of very large spatial covariance matrices, and this is usually done by making simplifying assumptions about spatial covariance structure that simply do not hold for geophysical variables. In contrast, this method does not require these assumptions, and is also computationally much faster. This method is

15. Energy statistics

International Nuclear Information System (INIS)

Anon.

1989-01-01

World data from the United Nation's latest Energy Statistics Yearbook, first published in our last issue, are completed here. The 1984-86 data were revised and 1987 data added for world commercial energy production and consumption, world natural gas plant liquids production, world LP-gas production, imports, exports, and consumption, world residual fuel oil production, imports, exports, and consumption, world lignite production, imports, exports, and consumption, world peat production and consumption, world electricity production, imports, exports, and consumption (Table 80), and world nuclear electric power production

16. Tips and Tricks for Successful Application of Statistical Methods to Biological Data.

Science.gov (United States)

Schlenker, Evelyn

2016-01-01

This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premises and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
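As a minimal sketch of the parametric route the chapter describes, the pooled two-sample Student's t statistic can be computed by hand; the group measurements below are hypothetical illustration values.

```python
import math
from statistics import mean, variance

def t_statistic(a, b):
    """Pooled two-sample Student's t statistic.

    Assumes both groups are independent, normally distributed, and share
    a common variance (the classical Student's t conditions).
    """
    na, nb = len(a), len(b)
    # pooled estimate of the common variance, df = na + nb - 2
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical continuous measurements from two treatment groups
group1 = [5.1, 4.9, 5.4, 5.0, 5.2]
group2 = [4.6, 4.4, 4.7, 4.5, 4.8]
t = t_statistic(group1, group2)  # compare against a t distribution with df = 8
```

For skewed data, the chapter's advice would be to log-transform before applying this statistic, or to fall back on a rank-based (nonparametric) test such as Mann-Whitney.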

17. Statistical analysis of subjective preferences for video enhancement

Science.gov (United States)

Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

2010-02-01

Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.

18. Using iterative cluster merging with improved gap statistics to perform online phenotype discovery in the context of high-throughput RNAi screens

Directory of Open Access Journals (Sweden)

Sun Youxian

2008-06-01

Full Text Available Abstract Background The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods which aim to identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks, i.e. restoring pre-defined biologically meaningful phenotypes, differentiating novel phenotypes from known ones and distinguishing novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian Mixture Model (GMM) is employed to estimate the distribution of each existing phenotype, and then used as reference distribution in gap statistics. This method is broadly applicable to a number of different types of

19. National Statistical Commission and Indian Official Statistics*

a good collection of official statistics of that time. With more .... statistical agencies and institutions to provide details of statistical activities .... ing several training programmes. .... ful completion of Indian Statistical Service examinations, the.

20. An inferentialist perspective on the coordination of actions and reasons involved in making a statistical inference

Science.gov (United States)

Bakker, Arthur; Ben-Zvi, Dani; Makar, Katie

2017-12-01

To understand how statistical and other types of reasoning are coordinated with actions to reduce uncertainty, we conducted a case study in vocational education that involved statistical hypothesis testing. We analyzed an intern's research project in a hospital laboratory in which reducing uncertainties was crucial to make a valid statistical inference. In his project, the intern, Sam, investigated whether patients' blood could be sent through pneumatic post without influencing the measurement of particular blood components. We asked, in the process of making a statistical inference, how are reasons and actions coordinated to reduce uncertainty? For the analysis, we used the semantic theory of inferentialism, specifically, the concept of webs of reasons and actions—complexes of interconnected reasons for facts and actions; these reasons include premises and conclusions, inferential relations, implications, motives for action, and utility of tools for specific purposes in a particular context. Analysis of interviews with Sam, his supervisor and teacher as well as video data of Sam in the classroom showed that many of Sam's actions aimed to reduce variability, rule out errors, and thus reduce uncertainties so as to arrive at a valid inference. Interestingly, the decisive factor was not the outcome of a t test but of the reference change value, a clinical chemical measure of analytic and biological variability. With insights from this case study, we expect that students can be better supported in connecting statistics with context and in dealing with uncertainty.

1. Client Financial Support for Mitigating Cost Factors Affecting ...

African Journals Online (AJOL)

Sultan

Descriptive and inferential statistics were used to analyse data obtained. Findings ... improved SSC business performance and hence commensurate contribution to national economy. Keywords: ... March 2014 (The Australian Performance.

2. Stupid statistics!

Science.gov (United States)

Tellinghuisen, Joel

2008-01-01

The method of least squares is probably the most powerful data analysis tool available to scientists. Toward a fuller appreciation of that power, this work begins with an elementary review of statistics fundamentals, and then progressively increases in sophistication as the coverage is extended to the theory and practice of linear and nonlinear least squares. The results are illustrated in application to data analysis problems important in the life sciences. The review of fundamentals includes the role of sampling and its connection to probability distributions, the Central Limit Theorem, and the importance of finite variance. Linear least squares are presented using matrix notation, and the significance of the key probability distributions-Gaussian, chi-square, and t-is illustrated with Monte Carlo calculations. The meaning of correlation is discussed, including its role in the propagation of error. When the data themselves are correlated, special methods are needed for the fitting, as they are also when fitting with constraints. Nonlinear fitting gives rise to nonnormal parameter distributions, but the 10% Rule of Thumb suggests that such problems will be insignificant when the parameter is sufficiently well determined. Illustrations include calibration with linear and nonlinear response functions, the dangers inherent in fitting inverted data (e.g., Lineweaver-Burk equation), an analysis of the reliability of the van't Hoff analysis, the problem of correlated data in the Guggenheim method, and the optimization of isothermal titration calorimetry procedures using the variance-covariance matrix for experiment design. The work concludes with illustrations on assessing and presenting results.
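The matrix-notation linear least squares and the variance-covariance machinery the abstract describes can be sketched briefly in NumPy. This is a hedged toy illustration on simulated calibration data; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration data: y = a + b*x with Gaussian noise.
x = np.linspace(0.0, 10.0, 20)
y = 1.5 + 0.8 * x + rng.normal(0.0, 0.2, size=x.size)

# Design matrix for the linear model with parameters [a, b].
A = np.column_stack([np.ones_like(x), x])
beta, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)

# Parameter variance-covariance matrix: s^2 * (A^T A)^{-1},
# with s^2 estimated from the residual sum of squares.
dof = x.size - 2
s2 = np.sum((y - A @ beta) ** 2) / dof
cov = s2 * np.linalg.inv(A.T @ A)
se = np.sqrt(np.diag(cov))  # standard errors of intercept and slope
```

The diagonal of `cov` gives the parameter variances; the off-diagonal term quantifies the intercept-slope correlation that matters for propagation of error.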

3. Statistical Literacy: Simulations with Dolphins

Science.gov (United States)

Strayer, Jeremy; Matuszewski, Amber

2016-01-01

In this article, Strayer and Matuszewski present a six-phase strategy that teachers can use to help students develop a conceptual understanding of inferential hypothesis testing through simulation. As Strayer and Matuszewski discuss the strategy, they describe each phase in general, explain how they implemented the phase while teaching their…

4. Assessing Adult Learner’s Numeracy as Related to Gender and Performance in Arithmetic

Directory of Open Access Journals (Sweden)

2014-07-01

5. Student Performance and Success Factors in Learning Business Statistics in Online vs. On-Ground Classes Using a Web-Based Assessment Platform

Science.gov (United States)

Shotwell, Mary; Apigian, Charles H.

2015-01-01

This study aimed to quantify the influence of student attributes, coursework resources, and online assessments on student learning in business statistics. Surveys were administered to students at the completion of both online and on-ground classes, covering student perception and utilization of internal and external academic resources, as well as…

6. Statistical Analysis of Past Catalytic Data on Oxidative Methane Coupling for New Insights into the Composition of High-Performance Catalysts

Czech Academy of Sciences Publication Activity Database

Zavyalova, U.; Holeňa, Martin; Schlögl, R.; Baerns, M.

2011-01-01

Vol. 3, No. 12 (2011), pp. 1935-1947 ISSN 1867-3880 Institutional research plan: CEZ:AV0Z10300504 Keywords: catalyst development * heterogeneous catalysis * methane * oxidative coupling * catalyst composition * statistical analysis Subject RIV: IN - Informatics, Computer Science Impact factor: 5.207, year: 2011

7. A simulation study to evaluate the performance of five statistical monitoring methods when applied to different time-series components in the context of control programs for endemic diseases

DEFF Research Database (Denmark)

Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq

2017-01-01

Disease monitoring and surveillance play a crucial role in control and eradication programs, as it is important to track implemented strategies in order to reduce and/or eliminate a specific disease. The objectives of this study were to assess the performance of different statistical monitoring......, decreases and constant sero-prevalence levels (referred to as events). Two state-space models were used to model the time series, and different statistical monitoring methods (such as the univariate process control algorithms: the Shewhart Control Chart, Tabular Cumulative Sums, and the V-mask) and monitoring...... of noise in the baseline was greater for the Shewhart Control Chart and Tabular Cumulative Sums than for the V-Mask and trend-based methods. The performance of the different statistical monitoring methods varied when monitoring increases and decreases in disease sero-prevalence. Combining two or more...
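Two of the monitoring methods named above, the Shewhart control chart and the tabular CUSUM, can be sketched in a few lines. This is a hedged, stdlib-only illustration on a made-up sero-prevalence series, not the study's implementation:

```python
import statistics

def shewhart_limits(baseline, k=3.0):
    """Shewhart control limits from an in-control baseline: mean +/- k*sd."""
    mu = statistics.fmean(baseline)
    sd = statistics.stdev(baseline)
    return mu - k * sd, mu + k * sd

def tabular_cusum(series, target, sd, k=0.5, h=5.0):
    """One-sided upper tabular CUSUM; returns indices where C+ exceeds h*sd.

    k is the slack (in sd units) and h the decision interval (in sd units)."""
    c_plus, alarms = 0.0, []
    for i, x in enumerate(series):
        c_plus = max(0.0, c_plus + (x - target - k * sd))
        if c_plus > h * sd:
            alarms.append(i)
    return alarms

# Hypothetical weekly sero-prevalence (%): a stable baseline, then an increase.
baseline = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 9.7, 10.0]
shifted = [10.1, 10.0, 11.5, 12.0, 12.4, 12.8]

lo, hi = shewhart_limits(baseline)
mu = statistics.fmean(baseline)
sd = statistics.stdev(baseline)
alarms = tabular_cusum(shifted, target=mu, sd=sd)  # flags the upward shift
```

The CUSUM accumulates small departures from the target, which is why it tends to detect sustained shifts earlier than a point-wise Shewhart rule.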

9. To What Extent Is Mathematical Ability Predictive of Performance in a Methodology and Statistics Course? Can an Action Research Approach Be Used to Understand the Relevance of Mathematical Ability in Psychology Undergraduates?

Science.gov (United States)

Bourne, Victoria J.

2014-01-01

Research methods and statistical analysis is typically the least liked and most anxiety provoking aspect of a psychology undergraduate degree, in large part due to the mathematical component of the content. In this first cycle of a piece of action research, students' mathematical ability is examined in relation to their performance across…

10. Not the Norm: The Potential of Tree Analysis of Performance Data from Students in a Foundation Mathematics Module

Science.gov (United States)

Kirby, Nicola; Dempster, Edith

2015-01-01

Quantitative methods of data analysis usually involve inferential statistics, and are not well known for their ability to reflect the intricacies of a diverse student population. The South African tertiary education sector is characterised by extreme inequality and diversity. Foundation programmes address issues of inequality of access by…

11. Principles for statistical inference on big spatio-temporal data from climate models

KAUST Repository

Castruccio, Stefano; Genton, Marc G.

2018-02-24

The vast increase in size of modern spatio-temporal datasets has prompted statisticians working in environmental applications to develop new and efficient methodologies that are still able to achieve inference for nontrivial models within an affordable time. Climate model outputs push the limits of inference for Gaussian processes, as their size can easily exceed 10 billion data points. Drawing from our experience in a set of previous works, we provide three principles for the statistical analysis of such large datasets that leverage recent methodological and computational advances. These principles emphasize the need for embedding distributed and parallel computing in the inferential process.

13. Statistical Methods for Comparative Phenomics Using High-Throughput Phenotype Microarrays

KAUST Repository

Sturino, Joseph

2010-01-24

We propose statistical methods for comparing phenomics data generated by the Biolog Phenotype Microarray (PM) platform for high-throughput phenotyping. Instead of the routinely used visual inspection of data with no sound inferential basis, we develop two approaches. The first approach is based on quantifying the distance between mean or median curves from two treatments and then applying a permutation test; we also consider a permutation test applied to areas under mean curves. The second approach employs functional principal component analysis. Properties of the proposed methods are investigated on both simulated data and data sets from the PM platform.
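The first approach described, a permutation test on the distance between mean curves, can be sketched as follows. This is a hedged illustration on simulated growth curves, not the authors' code:

```python
import numpy as np

def mean_curve_permutation_test(group_a, group_b, n_perm=2000, seed=1):
    """Permutation test on the L2 distance between mean curves.

    group_a, group_b: arrays of shape (n_subjects, n_timepoints).
    Returns the observed distance and a permutation p-value."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([group_a, group_b])
    n_a = group_a.shape[0]

    def distance(a, b):
        return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

    observed = distance(group_a, group_b)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])  # shuffle treatment labels
        if distance(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Hypothetical growth curves for two treatments; treatment B grows faster.
t = np.linspace(0.0, 1.0, 10)
rng = np.random.default_rng(0)
a = 1.0 * t + rng.normal(0.0, 0.05, (8, 10))
b = 1.5 * t + rng.normal(0.0, 0.05, (8, 10))
obs, p = mean_curve_permutation_test(a, b)
```

Because label shuffling preserves the curves themselves, the test needs no distributional assumptions, which matches the abstract's complaint about inference-free visual inspection.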

14. Childhood Cancer Statistics

Science.gov (United States)

Childhood Cancer Statistics – Graphs and Infographics: Number of Diagnoses, Incidence Rates ...

15. Impact of a short biostatistics course on knowledge and performance of postgraduate scholars: Implications for training of African doctors and biomedical researchers.

Science.gov (United States)

Chima, S C; Nkwanyana, N M; Esterhuizen, T M

2015-12-01

This study was designed to evaluate the impact of a short biostatistics course on the knowledge and performance of statistical analysis by biomedical researchers in Africa. It is recognized that knowledge of biostatistics is essential for understanding and interpreting modern scientific literature and for active participation in the global research enterprise. Unfortunately, it has been observed that the basic education of African scholars may be deficient in applied mathematics, including biostatistics. Forty university-affiliated biomedical researchers from South Africa volunteered for a 4-day short course where participants were exposed to lectures on descriptive and inferential biostatistics and practical training on using a statistical software package for data analysis. A quantitative questionnaire was used to evaluate participants' statistical knowledge and performance pre- and post-course. Changes in knowledge and performance were measured using objective and subjective criteria. Data from completed questionnaires were captured and analyzed using the Statistical Package for the Social Sciences. Participants' pre- and post-course data were compared using nonparametric Wilcoxon signed-rank tests for nonnormally distributed variables. A P ... researchers in this cohort and highlights the potential benefits of short courses in biostatistics to improve the knowledge and skills of biomedical researchers and scholars in Africa.
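The paired pre-/post-course comparison via a Wilcoxon signed-rank test can be illustrated with a small exact-test sketch. This is a hypothetical, stdlib-only implementation on made-up scores, not the study's SPSS analysis:

```python
from itertools import product

def rank_abs(diffs):
    """Mid-ranks of absolute differences (average ranks for ties)."""
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of tied rank positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def wilcoxon_signed_rank_exact(pre, post):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.

    Drops zero differences; enumerates all 2^n sign patterns, so keep n small."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    ranks = rank_abs(diffs)
    total_rank = n * (n + 1) / 2
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    extreme = 0
    for signs in product([0, 1], repeat=n):  # exact null distribution of W+
        w = sum(r for s, r in zip(signs, ranks) if s)
        if min(w, total_rank - w) <= min(w_plus, total_rank - w_plus):
            extreme += 1
    return w_plus, extreme / 2 ** n

# Hypothetical pre- and post-course test scores for 8 participants:
pre = [50, 55, 60, 58, 62, 48, 53, 57]
post = [65, 70, 72, 66, 75, 60, 64, 68]
w_plus, p = wilcoxon_signed_rank_exact(pre, post)
```

With all eight participants improving, the exact two-sided p-value is 2/256, the smallest attainable at this sample size.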

16. Operation statistics of KEKB

International Nuclear Information System (INIS)

Kawasumi, Takeshi; Funakoshi, Yoshihiro

2008-01-01

The KEKB accelerator has been operated since December 1998. We achieved the design peak luminosity of 10.00/nb/s; the present record is 17.12/nb/s. Detailed data on KEKB operation are important to evaluate KEKB performance and to suggest directions for performance enhancement. To estimate accelerator availability, we have classified all KEKB machine time into the following seven categories: (1) Physics Run, (2) Machine Study, (3) Machine Tuning, (4) Beam Tuning, (5) Trouble, (6) Maintenance, and (7) Others. In this paper we report the operation statistics of the KEKB accelerator. (author)
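An availability estimate from such a category breakdown can be sketched as follows; the hours and the availability definition here are hypothetical illustrations, not KEKB's published figures:

```python
# Hypothetical machine-time breakdown (hours) in the seven categories above:
machine_time = {
    "Physics Run": 5000, "Machine Study": 600, "Machine Tuning": 400,
    "Beam Tuning": 300, "Trouble": 250, "Maintenance": 350, "Others": 100,
}
total = sum(machine_time.values())

# One common definition: availability = time not lost to faults / total time.
availability = (total - machine_time["Trouble"]) / total
```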

17. Does Tacit Knowledge Predict Organizational Performance? A Scrutiny of Firms in the Upstream Sector in Nigeria

Directory of Open Access Journals (Sweden)

Vincent I.O Odiri

2016-02-01

This paper examined tacit knowledge to see whether, when properly put to use, it can lead to improved performance by upstream-sector firms in Nigeria. Knowledge, we believe, is vital to both corporate entities and individuals, and it encompasses both the explicit and the tacit. This paper focused on the tacit aspect, which resides in the psyche of the individual possessing it. In spite of the central role it plays, tacit knowledge has been downplayed by most firms. We adopted a survey research design via questionnaires administered to 504 employees randomly selected from 3 different oil firms. The data obtained were analyzed using inferential statistics, and multicollinearity diagnostics of tacit knowledge and organizational performance were performed. The results suggest that tacit knowledge is linearly correlated with organizational performance, implying that tacit knowledge predicts organizational performance. This study is significant in that the findings would be useful to the management of firms, as they divulge how tacit knowledge, when properly harnessed, can lead to increased performance. Most prior studies in this area were conducted in other countries; hence ours is one of the first in Nigeria to examine tacit knowledge and organizational performance.

18. Statistics for Learning Genetics

Science.gov (United States)

Charles, Abigail Sheena

This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and, more directly, students' perceptions of, and performance in, doing statistically-based genetics problems. This issue is at the emerging edge of modern college-level genetics instruction, and this study attempts to identify key theoretical components for creating a specialized biological statistics curriculum. The goal of this curriculum will be to prepare biology students with the skills for assimilating quantitatively-based genetic processes, increasingly at the forefront of modern genetics. To this end, two college-level classes at two universities were surveyed, one located in the northeastern US and the other in the West Indies. The sample comprised 42 students, and a supplementary interview was administered to 9 selected students. Interviews were also administered to professors in the field in order to gain insight into the teaching of statistics in genetics. Key findings indicated that students had very little to no background in statistics (55%). Although students did perform well on exams, with 60% of the population receiving an A or B grade, 77% of them did not offer good explanations on a probability question associated with the normal distribution provided in the survey. The scope and presentation of the applicable statistics/mathematics in some of the most used textbooks in genetics teaching, as well as in the genetics syllabi used by instructors, do not help the issue. It was found that the textbooks often either did not give effective explanations for students or completely left out certain topics. The omission of certain statistically/mathematically oriented topics was also true of the genetics syllabi reviewed for this study. Nonetheless
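The kind of normal-distribution probability question mentioned above can be answered directly with the Python standard library; the numbers here are a hypothetical example, not the survey item:

```python
from statistics import NormalDist

# Hypothetical genetics-style question: a quantitative trait is distributed
# Normal(mean=100, sd=15); what fraction of the population scores above 115?
trait = NormalDist(mu=100, sigma=15)
p_above = 1 - trait.cdf(115)  # one standard deviation above the mean
```

The answer, about 0.159, is the familiar "roughly 16% beyond one sigma" fact that the surveyed students struggled to explain.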

19. Long-term performance of grid-connected photovoltaic plant - Appendix 1: normalised annual statistics; Langzeitverhalten von netzgekoppelten Photovoltaikanlagen 2 (LZPV2). Anhang 1: Normierte Jahresstatistiken

Energy Technology Data Exchange (ETDEWEB)

Renken, C.; Haeberlin, H.

2003-07-01

This is the second part of a four-part final report for the Swiss Federal Office of Energy (SFOE) made by the University of Applied Sciences in Burgdorf, Switzerland. This report presents the findings of a project begun in 1992 that monitored the performance of around 40 photovoltaic (PV) installations in Switzerland, including the demonstration installation on Mont Soleil and three test installations using modern thin-film technologies. The specific performance of the plant and reductions in yield caused mostly by increasing soiling of the modules over the years were monitored. This extensive first appendix to the report describes the plant monitored in detail, presents the results of various performance measurements made and discusses the two monitoring concepts used. The specific yields over the years are presented in graphical form. Also, the meteorological equipment installed at the University of Applied Sciences in Burgdorf that was used to provide reference values is described.

20. Long-term performance of grid-connected photovoltaic plant - Appendix 2: normalised monthly statistics; Langzeitverhalten von netzgekoppelten Photovoltaikanlagen 2 (LZPV2). Anhang 2: Normierte Monatsstatistiken

Energy Technology Data Exchange (ETDEWEB)

Renken, C.; Haeberlin, H.

2003-07-01

This is the third part of a four-part final report for the Swiss Federal Office of Energy (SFOE) made by the University of Applied Sciences in Burgdorf, Switzerland. This report presents the findings of a project begun in 1992 that monitored the performance of around 40 photovoltaic (PV) installations in Switzerland. This extensive second appendix to the report describes the eight installations that were monitored in detail, including - amongst others - the demonstration installations on Mont Soleil in the Jura mountains and on the Jungfraujoch in the Alps as well as three test installations using modern thin-film technologies in Burgdorf. The normalised monthly specific performance of these installations was monitored. The report presents the various performance figures in graphical form.

1. MODELO ESTADÍSTICO PARA ASOCIAR VARIABLES DEL ALUMNO CON SU RENDIMIENTO ESCOLAR / STATISTICAL MODEL TO ASSOCIATE VARIABLES OF THE STUDENT WITH HIS SCHOOL PERFORMANCE

Directory of Open Access Journals (Sweden)

Ely Rosas

2018-04-01

The main objective of this study was to determine associations between categorical variables pertaining to the student and his school performance at governmental schools of the Gómez and Marcano municipalities of Nueva Esparta state, by fitting a column-effects association model. The investigation was correlational in nature, with a field design, based on applications to a reality of the educational context. As the main results obtained by fitting the model in reference, the variables associated with school performance in the Gómez municipality were: recreational activities, frequent use of a computer at home, and use of the Internet outside the home to do homework. In the Marcano municipality, they were: having Internet at home, the place where the student plays video games, and the number of times he eats per day. In both municipalities, the student's good feelings when going to school and mastery of mathematical operations were also linked to school performance.

2. MQSA National Statistics

Science.gov (United States)

... Standards Act and Program: MQSA Insights, MQSA National Statistics; archived Scorecard Statistics for 2018, 2017, and 2016 ...

3. State Transportation Statistics 2014

Science.gov (United States)

2014-12-15

The Bureau of Transportation Statistics (BTS) presents State Transportation Statistics 2014, a statistical profile of transportation in the 50 states and the District of Columbia. This is the 12th annual edition of State Transportation Statistics, a ...

4. New Closed-Form Results on Ordered Statistics of Partial Sums of Gamma Random Variables and its Application to Performance Evaluation in the Presence of Nakagami Fading

KAUST Repository

Nam, Sung Sik; Ko, Young-Chai; Alouini, Mohamed-Slim

2017-01-01

in the literature. In addition, a feasible application example in which our newly derived closed-form results can be applied is presented. In particular, we analyze the outage performance of finger replacement schemes over Nakagami fading channels.

5. Transportation Statistics Annual Report, 2017

Science.gov (United States)

2018-01-01

The Transportation Statistics Annual Report describes the Nation's transportation system, the system's performance, its contributions to the economy, and its effects on people and the environment. This 22nd edition of the report is based on infor...

6. Transportation statistics annual report, 2015

Science.gov (United States)

2016-01-01

The Transportation Statistics Annual Report describes the Nation's transportation system, the system's performance, its contributions to the economy, and its effects on people and the environment. This 20th edition of the report is base...

7. Transportation statistics annual report, 2013

Science.gov (United States)

2014-01-01

The Transportation Statistics Annual Report describes the Nation's transportation system, the system's performance, its contributions to the economy, and its effects on people and the environment. This 18th edition of the report is base...

8. [A study on breakfast and school performance in a group of adolescents].

Science.gov (United States)

Herrero Lozano, R; Fillat Ballesteros, J C

2006-01-01

To know the relationship between breakfast, from a qualitative perspective, and school performance. The study was performed in 141 students (70 males and 71 females) aged 12-13 years, in the 1st grade of Mandatory Secondary Education (ESO) at an institute in Saragossa, by means of recall of the previous day's breakfast. Breakfast quality was assessed according to the criteria of the Kid study: GOOD QUALITY: contains at least one food from each of the dairy, cereals, and fruit groups. IMPROVABLE QUALITY: lacks one of the groups. INSUFFICIENT QUALITY: lacks two groups. POOR QUALITY: does not have breakfast. We considered that quality was improved only when a mid-morning snack with a food different from those taken at breakfast was added. The average mark at the end of the school year was the criterion used to assess school performance. Statistical analysis of the data gathered for the present study was done with SPSS software, comprising descriptive and inferential statistics. For the analysis of the global significance of the differences, analysis of variance was applied, followed by post hoc tests with the Bonferroni and Tukey methods to detect the specific groups explaining global significance. The average mark systematically increases as breakfast quality increases, from an average score of 5.63 in the group with a poor-quality breakfast to 7.73 in the group with a good-quality breakfast. An analysis of variance was performed to study the statistical significance of the mean differences between groups. The outcomes yield significant global differences between groups (p value = 0.001), i.e., the average mark varies significantly according to breakfast quality. When the pooled quality of breakfast and mid-morning snack is analyzed, the average mark systematically increases as breakfast-snack quality increases, from an average mark of 5.77 in the group with poor or insufficient quality up to 7.61 in the group with
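The analysis-of-variance step described above can be sketched with a stdlib-only F statistic. The marks below are invented to mirror the poor/improvable/good grouping and are not the study's data:

```python
from statistics import fmean

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic (between-group MS / within-group MS).
    Stdlib-only sketch; a p-value would need an F distribution (e.g. SciPy)."""
    all_vals = [x for g in groups for x in g]
    grand = fmean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (fmean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - fmean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical average marks by breakfast quality (poor / improvable / good):
poor = [5.1, 5.6, 5.9, 5.4]
improvable = [6.2, 6.8, 6.5, 6.9]
good = [7.5, 7.9, 7.6, 7.8]
f_stat = one_way_anova_f(poor, improvable, good)  # large F -> group means differ
```

A large F like this would then motivate the Bonferroni/Tukey post hoc comparisons the abstract mentions, to locate which quality groups differ.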

9. Renyi statistics in equilibrium statistical mechanics

International Nuclear Information System (INIS)

Parvan, A.S.; Biro, T.S.

2010-01-01

The Renyi statistics in the canonical and microcanonical ensembles is examined both in general and in particular for the ideal gas. In the microcanonical ensemble, the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. Using exact analytical results for the ideal gas, it is shown that in the canonical ensemble, taking the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore, it satisfies the requirements of equilibrium thermodynamics, i.e., the thermodynamical potential of the statistical ensemble is a homogeneous function of first degree of its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamical relations as those stemming from the Boltzmann-Gibbs statistics in this limit.
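For reference, the Rényi entropy underlying the Rényi statistics, and its convergence to the Boltzmann-Gibbs (Shannon) form as the order approaches 1, can be checked numerically:

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy H_a = (1/(1-a)) * ln(sum p_i^a), for a != 1.
    As a -> 1 it converges to the Shannon (Boltzmann-Gibbs) entropy."""
    return math.log(sum(pi ** alpha for pi in p)) / (1 - alpha)

def shannon_entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

probs = [0.5, 0.25, 0.125, 0.125]
h_shannon = shannon_entropy(probs)         # = 1.75 * ln 2
h_near_one = renyi_entropy(probs, 1.0001)  # numerically close to Shannon
```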

10. Sampling, Probability Models and Statistical Reasoning: Statistical Inference

Mohan Delampady; V. R. Padmawar. Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp. 49-58.

11. A Novel High Performance Liquid Chromatographic Method for Determination of Nystatin in Pharmaceutical Formulations by Box-Behnken Statistical Experiment Design.

Science.gov (United States)

Shokraneh, Farnaz; Asgharian, Ramin; Abdollahpour, Assem; Ramin, Mehdi; Montaseri, Ali; Mahboubi, Arash

2015-01-01

In this study, a novel high-performance liquid chromatography method for the assay of nystatin in oral and vaginal tablets was optimized and validated using a Box-Behnken experimental design. The method was performed in isocratic mode on an RP-18 column (30 °C) using a mobile phase consisting of an ammonium acetate 0.05 M buffer/methanol mixture (30:70) and a flow rate of 1.0 mL/min. The specificity, linearity, precision, accuracy, LOD, and LOQ of the method were validated. The method was linear over the range of 5-500 µg/mL with an acceptable correlation coefficient (r(2) = 0.9996). The method's limit of detection (LOD) and limit of quantification (LOQ) were 0.01 and 0.025 µg/mL, respectively. The results indicate that this validated method can be used as an alternative method for the assay of nystatin.
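The linearity check behind a reported r² can be sketched as follows; the peak areas below are invented for illustration and are not the validated method's data:

```python
import numpy as np

# Hypothetical calibration: nystatin peak area vs concentration (ug/mL),
# checking linearity over 5-500 ug/mL as in a method validation.
conc = np.array([5.0, 50.0, 100.0, 250.0, 500.0])
area = np.array([12.1, 121.5, 240.8, 602.0, 1205.5])  # made-up responses

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot  # acceptance criteria often require ~0.999+
```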

12. Statistical methods used in the public health literature and implications for training of public health professionals.

Science.gov (United States)

Hayat, Matthew J; Powell, Amanda; Johnson, Tessa; Cadwell, Betsy L

2017-01-01

Statistical literacy and knowledge is needed to read and understand the public health literature. The purpose of this study was to quantify basic and advanced statistical methods used in public health research. We randomly sampled 216 published articles from seven top tier general public health journals. Studies were reviewed by two readers and a standardized data collection form completed for each article. Data were analyzed with descriptive statistics and frequency distributions. Results were summarized for statistical methods used in the literature, including descriptive and inferential statistics, modeling, advanced statistical techniques, and statistical software used. Approximately 81.9% of articles reported an observational study design and 93.1% of articles were substantively focused. Descriptive statistics in table or graphical form were reported in more than 95% of the articles, and statistical inference reported in more than 76% of the studies reviewed. These results reveal the types of statistical methods currently used in the public health literature. Although this study did not obtain information on what should be taught, information on statistical methods being used is useful for curriculum development in graduate health sciences education, as well as making informed decisions about continuing education for public health professionals.

13. Quality of statistical reporting in developmental disability journals.

Science.gov (United States)

Namasivayam, Aravind K; Yan, Tina; Wong, Wing Yiu Stephanie; van Lieshout, Pascal

2015-12-01

Null hypothesis significance testing (NHST) dominates quantitative data analysis, but its use is controversial and has been heavily criticized. The American Psychological Association has advocated the reporting of effect sizes (ES), confidence intervals (CIs), and statistical power analysis to complement NHST results to provide a more comprehensive understanding of research findings. The aim of this paper is to carry out a sample survey of statistical reporting practices in two journals with the highest h5-index scores in the areas of developmental disability and rehabilitation. Using a checklist that includes critical recommendations by American Psychological Association, we examined 100 randomly selected articles out of 456 articles reporting inferential statistics in the year 2013 in the Journal of Autism and Developmental Disorders (JADD) and Research in Developmental Disabilities (RDD). The results showed that for both journals, ES were reported only half the time (JADD 59.3%; RDD 55.87%). These findings are similar to psychology journals, but are in stark contrast to ES reporting in educational journals (73%). Furthermore, a priori power and sample size determination (JADD 10%; RDD 6%), along with reporting and interpreting precision measures (CI: JADD 13.33%; RDD 16.67%), were the least reported metrics in these journals, but not dissimilar to journals in other disciplines. To advance the science in developmental disability and rehabilitation and to bridge the research-to-practice divide, reforms in statistical reporting, such as providing supplemental measures to NHST, are clearly needed.
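The effect sizes and confidence intervals the authors recommend alongside NHST can be computed with the standard library. This is a hedged sketch on hypothetical group scores; note the t critical value must be looked up externally, since the stdlib has no t distribution:

```python
import math
from statistics import fmean, stdev

def cohens_d(a, b):
    """Cohen's d with pooled SD: an effect size to report alongside NHST."""
    na, nb = len(a), len(b)
    sp = math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                   / (na + nb - 2))
    return (fmean(a) - fmean(b)) / sp

def mean_diff_ci(a, b, t_crit):
    """CI for the mean difference; t_crit comes from a t table
    with df = len(a) + len(b) - 2."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    diff = fmean(a) - fmean(b)
    return diff - t_crit * se, diff + t_crit * se

# Hypothetical intervention vs control outcome scores:
treat = [14.0, 15.5, 16.0, 14.5, 15.0, 16.5]
ctrl = [12.0, 13.0, 12.5, 13.5, 12.0, 13.0]
d = cohens_d(treat, ctrl)
lo, hi = mean_diff_ci(treat, ctrl, t_crit=2.228)  # two-sided 95%, df = 10
```

Reporting d together with the interval (lo, hi) conveys both the magnitude and the precision of the effect, which a bare p-value does not.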

14. Measurement and statistics for teachers

CERN Document Server

Van Blerkom, Malcolm

2008-01-01

Written in a student-friendly style, Measurement and Statistics for Teachers shows teachers how to use measurement and statistics wisely in their classes. Although there is some discussion of theory, emphasis is given to the practical, everyday uses of measurement and statistics. The second part of the text provides more complete coverage of basic descriptive statistics and their use in the classroom than in any text now available. Comprehensive and accessible, Measurement and Statistics for Teachers includes: short vignettes showing concepts in action; numerous classroom examples; highlighted vocabulary; boxes summarizing related concepts; end-of-chapter exercises and problems; six full chapters devoted to the essential topic of classroom tests; instruction on how to carry out informal assessments, performance assessments, and portfolio assessments, and how to use and interpret standardized tests; and a five-chapter section on descriptive statistics, giving instructors the option of more thoroughly teaching basic measur...

15. Statistical exploration of dataset examining key indicators influencing housing and urban infrastructure investments in megacities

Directory of Open Access Journals (Sweden)

2018-06-01

Lagos, by UN standards, has attained megacity status, with the attendant challenges of living up to that titanic position; regrettably, it struggles with its present stock of housing and infrastructural facilities to match its new status. A questionnaire instrument was used to gather the dataset, based on a survey of the perceptions of construction professionals residing within the state. The statistical exploration contains data on the state of the housing and urban infrastructural deficit, key indicators spurring investment by government to upturn the deficit, and improvement mechanisms to tackle the infrastructural dearth. Descriptive and inferential statistics were used to present the dataset. The dataset, when analyzed, can be useful for policy makers, local and international governments, world funding bodies, researchers, and infrastructural investors. Keywords: Construction, Housing, Megacities, Population, Urban infrastructures

16. Statistical characteristics of aberrations of human eyes after small incision lenticule extraction surgery and analysis of visual performance with individual eye model.

Science.gov (United States)

Lou, Qiqi; Wang, Yan; Wang, Zhaoqi; Liu, Yongji; Zhang, Lin; Fang, Hui

2015-09-01

Preoperative and postoperative wavefront aberrations of 73 myopic eyes treated with small incision lenticule extraction surgery are analyzed in this paper. Twenty-eight postoperative individual eye models are constructed to investigate the visual acuity (VA) of human eyes. Results show that in the photopic condition, residual defocus, residual astigmatism, and higher-order aberrations are relatively small: all eyes reach a VA of 0.8 or better, and 89.3% of eyes reach a VA of 1.0 or better. In the scotopic condition, the residual defocus and the higher-order aberrations are, respectively, 1.9 and 8.5 times those in the photopic condition, and defocus becomes the main factor attenuating visual performance.

17. Performance assessment and beamline diagnostics based on evaluation of temporal information from infrared spectral datasets by means of R Environment for statistical analysis.

Science.gov (United States)

Banas, Krzysztof; Banas, Agnieszka; Gajda, Mariusz; Kwiatek, Wojciech M; Pawlicki, Bohdan; Breese, Mark B H

2014-07-15

Assessment of the performance and up-to-date diagnostics of scientific equipment is one of the key components of work in contemporary laboratories. The most reliable checks are performed by real test experiments while varying the experimental conditions (typically, in the case of infrared spectroscopic measurements: the size of the beam aperture, the duration of the experiment, the spectral range, the scanner velocity, etc.). On the other hand, the stability of the instrument response over time is another key element of great value. Source stability (or easily predictable temporal changes, similar to those observed in the case of synchrotron radiation-based sources working in non-top-up mode) and detector stability (especially in the case of liquid nitrogen- or liquid helium-cooled detectors) should be monitored. In these cases, recorded datasets (spectra) include additional variables, such as the time stamp at which a particular spectrum was recorded (in the case of time trial experiments). A favorable approach to evaluating these data is building a hyperspectral object that consists of all spectra and all additional parameters at which those spectra were recorded. Taking into account that these datasets can be considerably large, there is a need for tools for semiautomatic data evaluation and information extraction. The Comprehensive R Archive Network--the open-source R environment--with its flexibility and growing potential, fits these requirements nicely. In this paper, examples of practical implementation of methods available in R for real-life Fourier transform infrared (FTIR) spectroscopic data problems are presented. This approach could, however, easily be adapted to many other laboratory scenarios with other spectroscopic techniques.

18. BUDGETING, BUDGETARY CONTROL AND PERFORMANCE EVALUATION: Evidence from Hospitality Firms in Nigeria

Directory of Open Access Journals (Sweden)

Patrick Egbunike

2017-12-01

Full Text Available This study was carried out with the view to address two fundamental issues: first, to determine whether there is any association between budget, budgetary control and performance evaluation; second, to ascertain whether there is any significant variation in the budget, budgetary control and performance evaluation measures of hospitality firms in Nigeria. The study employed a descriptive design, with primary data (a questionnaire) as the major source of data collection. The questionnaire was administered to a total of six hundred (600) employees of ten (10) selected hospitality firms in Nigeria. The data obtained were analyzed using both descriptive and inferential statistics. Findings indicated that budget and budgetary control could serve as an avenue through which hospitality firms in Nigeria can be evaluated. In addition, it was revealed that there is a significant variation in the budget, budgetary control and performance evaluation of hospitality firms in Nigeria. On the basis of the findings, it was recommended that hospitality firms in Nigeria should carry out performance evaluation on every aspect of their budget and budgetary activities as a way of ensuring that budgeted outcomes are met. Also, budgetary costs should be a basis for choosing the most-fit performance evaluation technique for hospitality firms since such

19. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

Science.gov (United States)

2014-05-01

To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial
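The two quantities at the heart of this review, statistical power and the experiment-wise Type I error rate, can be sketched in a few lines (a minimal illustration using a normal approximation; the sample size and test count below are hypothetical, not taken from the reviewed papers):

```python
from math import sqrt
from scipy.stats import norm

def experimentwise_error(alpha: float, n_tests: int) -> float:
    """Probability of at least one false positive across n independent tests."""
    return 1.0 - (1.0 - alpha) ** n_tests

def two_sample_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Normal-approximation power of a two-sided, two-sample comparison
    for standardized effect size d (Cohen's d)."""
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    return norm.cdf(d * sqrt(n_per_group / 2.0) - z_crit)

# Roughly 15 uncorrected tests per paper at alpha = .05 already push the
# experiment-wise error rate past .5, near the median of .54 reported above.
print(round(experimentwise_error(0.05, 15), 3))   # → 0.537

# The classic benchmark: 64 participants per group give about 80% power
# to detect a medium effect (d = 0.5).
print(round(two_sample_power(0.5, 64), 2))        # → 0.81
```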

20. SOCR: Statistics Online Computational Resource

Directory of Open Access Journals (Sweden)

Ivo D. Dinov

2006-10-01

Full Text Available The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

1. Project Leadership and Quality Performance of Construction Projects

Directory of Open Access Journals (Sweden)

SPG Buba

2017-05-01

Full Text Available Background: The construction industry in Nigeria is pigeonholed by poor quality of construction products as a result of the inherent corruption in the country. Lack of purposeful leadership and inappropriate choice of leadership styles in the industry have been blamed for project failure; abandoned and failed projects, which litter every corner of the country, are most predominant in the public sector. Objectives: The objective of this paper is to assess the impact of leadership styles on the quality performance criteria of public projects in Nigeria. Methodology: A total of 43 questionnaires were distributed to 3 key groups of respondents (Quantity Surveyors, Builders, and Architects) who are project managers in Nigeria. Descriptive and inferential statistics were used to analyse the data using the Statistical Package for the Social Sciences (SPSS). A Likert scale was used to measure the independent variables (leadership style: facilitative, coaching, delegating and directing) and the level of achievement of projects on the dependent variables (the quality and function performance criteria: achieving the highest aesthetic quality, and a functional building that fits its purpose). Findings: The study revealed that directing is the major leadership style used by project managers in Nigeria. Directing also has the most impact on the quality performance indicators, with the greatest relative influence on achieving the highest aesthetic quality and a functional building that fits its purpose. Conclusion/Recommendation/Way forward: Understanding the relationship between the directing leadership style and the performance criteria of achieving the highest aesthetic quality and a functional building that fits its purpose will be beneficial to the Nigerian construction environment.

2. Statistical-learning strategies generate only modestly performing predictive models for urinary symptoms following external beam radiotherapy of the prostate: A comparison of conventional and machine-learning methods

International Nuclear Information System (INIS)

Yahya, Noorazrul; Ebert, Martin A.; Bulsara, Max; House, Michael J.; Kennedy, Angel; Joseph, David J.; Denham, James W.

2016-01-01

Purpose: Given the paucity of available data concerning radiotherapy-induced urinary toxicity, it is important to ensure derivation of the most robust models with superior predictive performance. This work explores multiple statistical-learning strategies for prediction of urinary symptoms following external beam radiotherapy of the prostate. Methods: The performance of logistic regression, elastic-net, support-vector machine, random forest, neural network, and multivariate adaptive regression splines (MARS) to predict urinary symptoms was analyzed using data from 754 participants accrued by TROG03.04-RADAR. Predictive features included dose-surface data, comorbidities, and medication-intake. Four symptoms were analyzed: dysuria, haematuria, incontinence, and frequency, each with three definitions (grade ≥ 1, grade ≥ 2 and longitudinal) with event rate between 2.3% and 76.1%. Repeated cross-validations producing matched models were implemented. A synthetic minority oversampling technique was utilized in endpoints with rare events. Parameter optimization was performed on the training data. Area under the receiver operating characteristic curve (AUROC) was used to compare performance using sample size to detect differences of ≥0.05 at the 95% confidence level. Results: Logistic regression, elastic-net, random forest, MARS, and support-vector machine were the highest-performing statistical-learning strategies in 3, 3, 3, 2, and 1 endpoints, respectively. Logistic regression, MARS, elastic-net, random forest, neural network, and support-vector machine were the best, or were not significantly worse than the best, in 7, 7, 5, 5, 3, and 1 endpoints. The best-performing statistical model was for dysuria grade ≥ 1 with AUROC ± standard deviation of 0.649 ± 0.074 using MARS. For longitudinal frequency and dysuria grade ≥ 1, all strategies produced AUROC>0.6 while all haematuria endpoints and longitudinal incontinence models produced AUROC<0.6. Conclusions

3. Statistical-learning strategies generate only modestly performing predictive models for urinary symptoms following external beam radiotherapy of the prostate: A comparison of conventional and machine-learning methods

Energy Technology Data Exchange (ETDEWEB)

Yahya, Noorazrul, E-mail: noorazrul.yahya@research.uwa.edu.au [School of Physics, University of Western Australia, Western Australia 6009, Australia and School of Health Sciences, National University of Malaysia, Bangi 43600 (Malaysia); Ebert, Martin A. [School of Physics, University of Western Australia, Western Australia 6009, Australia and Department of Radiation Oncology, Sir Charles Gairdner Hospital, Western Australia 6008 (Australia); Bulsara, Max [Institute for Health Research, University of Notre Dame, Fremantle, Western Australia 6959 (Australia); House, Michael J. [School of Physics, University of Western Australia, Western Australia 6009 (Australia); Kennedy, Angel [Department of Radiation Oncology, Sir Charles Gairdner Hospital, Western Australia 6008 (Australia); Joseph, David J. [Department of Radiation Oncology, Sir Charles Gairdner Hospital, Western Australia 6008, Australia and School of Surgery, University of Western Australia, Western Australia 6009 (Australia); Denham, James W. [School of Medicine and Public Health, University of Newcastle, New South Wales 2308 (Australia)

2016-05-15

Purpose: Given the paucity of available data concerning radiotherapy-induced urinary toxicity, it is important to ensure derivation of the most robust models with superior predictive performance. This work explores multiple statistical-learning strategies for prediction of urinary symptoms following external beam radiotherapy of the prostate. Methods: The performance of logistic regression, elastic-net, support-vector machine, random forest, neural network, and multivariate adaptive regression splines (MARS) to predict urinary symptoms was analyzed using data from 754 participants accrued by TROG03.04-RADAR. Predictive features included dose-surface data, comorbidities, and medication-intake. Four symptoms were analyzed: dysuria, haematuria, incontinence, and frequency, each with three definitions (grade ≥ 1, grade ≥ 2 and longitudinal) with event rate between 2.3% and 76.1%. Repeated cross-validations producing matched models were implemented. A synthetic minority oversampling technique was utilized in endpoints with rare events. Parameter optimization was performed on the training data. Area under the receiver operating characteristic curve (AUROC) was used to compare performance using sample size to detect differences of ≥0.05 at the 95% confidence level. Results: Logistic regression, elastic-net, random forest, MARS, and support-vector machine were the highest-performing statistical-learning strategies in 3, 3, 3, 2, and 1 endpoints, respectively. Logistic regression, MARS, elastic-net, random forest, neural network, and support-vector machine were the best, or were not significantly worse than the best, in 7, 7, 5, 5, 3, and 1 endpoints. The best-performing statistical model was for dysuria grade ≥ 1 with AUROC ± standard deviation of 0.649 ± 0.074 using MARS. For longitudinal frequency and dysuria grade ≥ 1, all strategies produced AUROC>0.6 while all haematuria endpoints and longitudinal incontinence models produced AUROC<0.6. Conclusions
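A cross-validated AUROC comparison of the kind performed above can be sketched with scikit-learn (a toy example on synthetic data; the dataset, features and model settings are illustrative, not those of the RADAR study):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a toxicity endpoint with a moderate event rate.
X, y = make_classification(n_samples=754, n_features=20, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    # Cross-validation with AUROC as the comparison metric.
    aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUROC = {aucs.mean():.3f} ± {aucs.std():.3f}")
```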

4. The foundations of statistics

CERN Document Server

Savage, Leonard J

1972-01-01

Classic analysis of the foundations of statistics and development of personal probability, one of the greatest controversies in modern statistical thought. Revised edition. Calculus, probability, statistics, and Boolean algebra are recommended.

5. State Transportation Statistics 2010

Science.gov (United States)

2011-09-14

The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2010, a statistical profile of transportation in the 50 states and the District of Col...

6. State Transportation Statistics 2012

Science.gov (United States)

2013-08-15

The Bureau of Transportation Statistics (BTS), a part of the U.S. Department of Transportation's (USDOT) Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2012, a statistical profile of transportation ...

7. Adrenal Gland Tumor: Statistics

Science.gov (United States)

Approved by the Cancer.Net Editorial Board. A primary adrenal gland tumor is very uncommon; exact statistics are not available for this type of tumor ...

8. State transportation statistics 2009

Science.gov (United States)

2009-01-01

The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2009, a statistical profile of transportation in the 50 states and the District ...

9. State Transportation Statistics 2011

Science.gov (United States)

2012-08-08

The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2011, a statistical profile of transportation in the 50 states and the District of Col...

10. Neuroendocrine Tumor: Statistics

Science.gov (United States)

Approved by the Cancer.Net Editorial Board. ... the body. It is important to remember that statistics on the survival rates for people with a ...

11. State Transportation Statistics 2013

Science.gov (United States)

2014-09-19

The Bureau of Transportation Statistics (BTS), a part of the U.S. Department of Transportation's (USDOT) Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2013, a statistical profile of transportatio...

12. BTS statistical standards manual

Science.gov (United States)

2005-10-01

The Bureau of Transportation Statistics (BTS), like other federal statistical agencies, establishes professional standards to guide the methods and procedures for the collection, processing, storage, and presentation of statistical data. Standards an...

13. Statistical Analysis of Data for Timber Strengths

DEFF Research Database (Denmark)

Sørensen, John Dalsgaard; Hoffmeyer, P.

Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull...
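Fitting the candidate distribution types named above and comparing goodness of fit can be sketched with SciPy (synthetic strength values stand in for the timber data; fixing the location parameter at zero yields the 2-parameter Weibull):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic bending-strength sample (MPa), standing in for the real data.
strengths = rng.lognormal(mean=3.7, sigma=0.2, size=1000)

candidates = {
    "Normal": stats.norm,
    "Lognormal": stats.lognorm,
    "2-parameter Weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    # Fix the location at 0 to obtain the 2-parameter Weibull variant.
    params = dist.fit(strengths, floc=0) if "Weibull" in name else dist.fit(strengths)
    ks = stats.kstest(strengths, dist.cdf, args=params)
    print(f"{name}: KS statistic = {ks.statistic:.4f}")
```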

14. Performance in grade 12 mathematics and science predicts student nurses' performance in first year science modules at a university in the Western Cape.

Science.gov (United States)

Mthimunye, Katlego D T; Daniels, Felicity M

2017-10-26

15. Postural changes and pain in the academic performance of elementary school students

Directory of Open Access Journals (Sweden)

Maria Homéria Leite de Morais Sampaio

Full Text Available Abstract Postural changes and pain in the spine of children and adolescents of school age are influenced by a habitually incorrect sitting position, misuse of furniture and the weight of the backpack. The aim of this study was to investigate postural changes and pain in relation to the academic performance of elementary school students. It was a cross-sectional study with a descriptive and analytical approach, performed from March to June 2008. The subjects were 83 elementary students, aged 8 to 12 years, in Kindergarten and Elementary Education at Paulo Sarasate Municipal School, Ceará. In the physical examination, an evaluation form based on Global Postural Reeducation (the Souchard method) was used, which included the variables compromised anterior, posterior and superior shoulder muscle chains, and pain; for academic performance, a semi-structured questionnaire covered the variables behavior, attendance and performance. The data were stored in the Statistical Package for the Social Sciences (SPSS), version 18.0. In the descriptive analysis, absolute and relative frequencies were used; in the inferential analysis, the Mann-Whitney test was applied to verify the existence of significant differences in changes between groups A and B, at a significance level of 5%, and the F statistical test was used to compare postural changes and pain across the three grades. Results: the majority of the students presented postural changes, such as forward head, lifted shoulders, dorsal hyperkyphosis and pain, occurring predominantly in the anterior chain when compared with the posterior and superior chains. These changes in both groups were statistically significant only in subjects of the fifth grade with satisfactory academic performance and behavior. It was concluded that there was no association between postural changes and school performance, although performance was influenced by pain.
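The Mann-Whitney comparison at a 5% significance level used above takes only a few lines with SciPy (the scores below are hypothetical, not the study's data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical pain scores (0-10 scale) for two groups of students.
group_a = [2, 3, 5, 4, 6, 3, 2, 5]
group_b = [6, 7, 5, 8, 9, 6, 7, 8]

stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
if p < 0.05:  # 5% significance level
    print("significant difference between groups")
```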

16. Multimodal integration in statistical learning

DEFF Research Database (Denmark)

Mitchell, Aaron; Christiansen, Morten Hyllekvist; Weiss, Dan

2014-01-01

Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally ... facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.

17. Potential errors and misuse of statistics in studies on leakage in endodontics.

Science.gov (United States)

Lucena, C; Lopez, J M; Pulgar, R; Abalos, C; Valderrama, M J

2013-04-01

To assess the quality of the statistical methodology used in studies of leakage in Endodontics, and to compare the results found using appropriate versus inappropriate inferential statistical methods. The search strategy used the descriptors 'root filling', 'microleakage', 'dye penetration', 'dye leakage', 'polymicrobial leakage' and 'fluid filtration' for the time interval 2001-2010 in journals within the categories 'Dentistry, Oral Surgery and Medicine' and 'Materials Science, Biomaterials' of the Journal Citation Report. All retrieved articles were reviewed to find potential pitfalls in statistical methodology that may be encountered during study design, data management or data analysis. The database included 209 papers. In all the studies reviewed, the statistical methods used were appropriate for the category attributed to the outcome variable, but in 41% of the cases the chi-square test or parametric methods were subsequently applied inappropriately. In 2% of the papers, no statistical test was used. In 99% of cases, a statistically 'significant' or 'not significant' effect was reported as a main finding, whilst only 1% also presented an estimate of the magnitude of the effect. When the appropriate statistical methods were applied in the studies with originally inappropriate data analysis, the conclusions changed in 19% of the cases. Statistical deficiencies in leakage studies may affect their results and interpretation and might be one of the reasons for the poor agreement amongst the reported findings. Therefore, more effort should be made to standardize statistical methodology. © 2012 International Endodontic Journal.
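Reporting an estimate of the magnitude of the effect alongside the test result, as only 1% of the reviewed papers did, requires little extra work (the 2×2 counts below are hypothetical):

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical counts: leakage vs. no leakage for two sealer materials.
table = np.array([[30, 10],
                  [15, 25]])

chi2, p, dof, expected = chi2_contingency(table)
# Effect-size estimates to accompany the p-value:
odds_ratio, p_exact = fisher_exact(table)
cramers_v = np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))

print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
print(f"odds ratio = {odds_ratio:.2f}, Cramér's V = {cramers_v:.2f}")
```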

18. Statistical decay of giant resonances

International Nuclear Information System (INIS)

Dias, H.; Teruya, N.; Wolynec, E.

1986-01-01

Statistical calculations to predict the neutron spectrum resulting from the decay of Giant Resonances are discussed. The dependence of the results on the optical potential parametrization and on the level density of the residual nucleus is assessed. A Hauser-Feshbach calculation is performed for the decay of the monopole giant resonance in 208 Pb using the experimental levels of 207 Pb from a recent compilation. The calculated statistical decay is in excellent agreement with recent experimental data, showing that the decay of this resonance is dominantly statistical, as predicted by continuum RPA calculations. (Author) [pt

19. Statistical decay of giant resonances

International Nuclear Information System (INIS)

Dias, H.; Teruya, N.; Wolynec, E.

1986-02-01

Statistical calculations to predict the neutron spectrum resulting from the decay of Giant Resonances are discussed. The dependence of the results on the optical potential parametrization and on the level density of the residual nucleus is assessed. A Hauser-Feshbach calculation is performed for the decay of the monopole giant resonance in 208 Pb using the experimental levels of 207 Pb from a recent compilation. The calculated statistical decay is in excellent agreement with recent experimental data, showing that decay of this resonance is dominantly statistical, as predicted by continuum RPA calculations. (Author) [pt

20. A statistical study of the performance of the Hakamada-Akasofu-Fry version 2 numerical model in predicting solar shock arrival times at Earth during different phases of solar cycle 23

Energy Technology Data Exchange (ETDEWEB)

McKenna-Lawlor, S.M.P. [National Univ. of Ireland, Maynooth, Co. Kildare (Ireland). Space Technology Ireland; Fry, C.D. [Exploration Physics International, Inc., Huntsville, AL (United States); Dryer, M. [Exploration Physics International, Inc., Huntsville, AL (United States); NOAA Space Environment Center, Boulder, CO (United States); Heynderickx, D. [D-H Consultancy, Leuven (Belgium); Kecskemety, K. [KFKI Research Institute for Particle and Nuclear Physics, Budapest (Hungary); Kudela, K. [Institute of Experimental Physics, Kosice (Slovakia); Balaz, J. [National Univ. of Ireland, Maynooth, Co. Kildare (Ireland). Space Technology Ireland; Institute of Experimental Physics, Kosice (Slovakia)

2012-07-01

The performance of the Hakamada-Akasofu-Fry, version 2 (HAFv.2) numerical model, which provides predictions of solar shock arrival times at Earth, was subjected to a statistical study to investigate those solar/interplanetary circumstances under which the model performed well/poorly during key phases (rise/maximum/decay) of solar cycle 23. In addition to analyzing elements of the overall data set (584 selected events) associated with particular cycle phases, subsets were formed such that those events making up a particular sub-set showed common characteristics. The statistical significance of the results obtained using the various sets/subsets was generally very low and these results were not significant as compared with the hit by chance rate (50 %). This implies a low level of confidence in the predictions of the model with no compelling result encouraging its use. However, the data suggested that the success rates of HAFv.2 were higher when the background solar wind speed at the time of shock initiation was relatively fast. Thus, in scenarios where the background solar wind speed is elevated and the calculated success rate significantly exceeds the rate by chance, the forecasts could provide potential value to the customer. With the composite statistics available for solar cycle 23, the calculated success rate at high solar wind speed, although clearly above 50 %, was indicative rather than conclusive. The RMS error estimated for shock arrival times for every cycle phase and for the composite sample was in each case significantly better than would be expected for a random data set. Also, the parameter "Probability of Detection, yes" (PODy), which gives the proportion of yes observations that were correctly forecast (i.e. the ratio between the shocks correctly predicted and all the shocks observed), yielded values for the rise/maximum/decay phases of the cycle and using the composite sample of 0.85, 0.64, 0.79 and 0.77, respectively. The

1. A statistical study of the performance of the Hakamada-Akasofu-Fry version 2 numerical model in predicting solar shock arrival times at Earth during different phases of solar cycle 23

Directory of Open Access Journals (Sweden)

S. M. P. McKenna-Lawlor

2012-02-01

Full Text Available The performance of the Hakamada-Akasofu-Fry, version 2 (HAFv.2) numerical model, which provides predictions of solar shock arrival times at Earth, was subjected to a statistical study to investigate those solar/interplanetary circumstances under which the model performed well/poorly during key phases (rise/maximum/decay) of solar cycle 23. In addition to analyzing elements of the overall data set (584 selected events) associated with particular cycle phases, subsets were formed such that those events making up a particular sub-set showed common characteristics. The statistical significance of the results obtained using the various sets/subsets was generally very low and these results were not significant as compared with the hit by chance rate (50%). This implies a low level of confidence in the predictions of the model with no compelling result encouraging its use. However, the data suggested that the success rates of HAFv.2 were higher when the background solar wind speed at the time of shock initiation was relatively fast. Thus, in scenarios where the background solar wind speed is elevated and the calculated success rate significantly exceeds the rate by chance, the forecasts could provide potential value to the customer. With the composite statistics available for solar cycle 23, the calculated success rate at high solar wind speed, although clearly above 50%, was indicative rather than conclusive. The RMS error estimated for shock arrival times for every cycle phase and for the composite sample was in each case significantly better than would be expected for a random data set. Also, the parameter "Probability of Detection, yes" (PODy), which gives the proportion of yes observations that were correctly forecast (i.e. the ratio between the shocks correctly predicted and all the shocks observed), yielded values for the rise/maximum/decay phases of the cycle and using the composite sample of 0.85, 0.64, 0.79 and 0.77, respectively. The statistical
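The verification measures quoted above reduce to simple ratios; a sketch of PODy and the RMS arrival-time error on hypothetical forecast data:

```python
from math import sqrt

def pod_yes(hits: int, misses: int) -> float:
    """Probability of Detection (yes): shocks correctly predicted
    divided by all shocks observed."""
    return hits / (hits + misses)

def rms_error(predicted_hours, observed_hours) -> float:
    """RMS error of shock arrival times, in hours."""
    return sqrt(sum((p - o) ** 2 for p, o in zip(predicted_hours, observed_hours))
                / len(predicted_hours))

# Hypothetical sample: 17 of 20 observed shocks correctly forecast.
print(round(pod_yes(17, 3), 2))                        # → 0.85
print(round(rms_error([40, 55, 62], [44, 50, 60]), 2))  # → 3.87
```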

2. Statistics in Schools

Science.gov (United States)

The Statistics in Schools program educates students about the value and everyday use of statistics, providing resources for teaching and learning with real-life data. Explore the site for standards-aligned, classroom-ready activities, including math and history activities.

3. Transport Statistics - Transport - UNECE

Science.gov (United States)

UNECE Transport: Transport Statistics. Areas of work include the Working Party on Transport Statistics (WP.6), with its terms of reference, meetings and events.

4. Generalized quantum statistics

International Nuclear Information System (INIS)

Chou, C.

1992-01-01

In the paper, a non-anyonic generalization of quantum statistics is presented, in which Fermi-Dirac statistics (FDS) and Bose-Einstein statistics (BES) appear as two special cases. The new quantum statistics, which is characterized by the dimension of its single particle Fock space, contains three consistent parts, namely the generalized bilinear quantization, the generalized quantum mechanical description and the corresponding statistical mechanics

5. Handbook of tables for order statistics from lognormal distributions with applications

CERN Document Server

Balakrishnan, N

1999-01-01

Lognormal distributions are one of the most commonly studied models in the statistical literature while being most frequently used in the applied literature. The lognormal distributions have been used in problems arising from such diverse fields as hydrology, biology, communication engineering, environmental science, reliability, agriculture, medical science, mechanical engineering, material science, and pharmacology. Though the lognormal distributions have been around from the beginning of this century (see Chapter 1), much of the work concerning inferential methods for the parameters of lognormal distributions has been done in the recent past. Most of these methods of inference, particularly those based on censored samples, involve extensive use of numerical methods to solve some nonlinear equations. Order statistics and their moments have been discussed quite extensively in the literature for many distributions. It is very well known that the moments of order statistics can be derived explicitly only...

6. Statistical hypothesis testing and common misinterpretations: Should we abandon p-value in forensic science applications?

Science.gov (United States)

Taroni, F; Biedermann, A; Bozza, S

2016-02-01

Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debates in the literature. More recently, controversial discussion was initiated by an editorial decision of a scientific journal [1] to refuse any paper submitted for publication containing null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

7. Reasoning with data an introduction to traditional and Bayesian statistics using R

CERN Document Server

Stanton, Jeffrey M

2017-01-01

Engaging and accessible, this book teaches readers how to use inferential statistical thinking to check their assumptions, assess evidence about their beliefs, and avoid overinterpreting results that may look more promising than they really are. It provides step-by-step guidance for using both classical (frequentist) and Bayesian approaches to inference. Statistical techniques covered side by side from both frequentist and Bayesian approaches include hypothesis testing, replication, analysis of variance, calculation of effect sizes, regression, time series analysis, and more. Students also get a complete introduction to the open-source R programming language and its key packages. Throughout the text, simple commands in R demonstrate essential data analysis skills using real-data examples. The companion website provides annotated R code for the book's examples, in-class exercises, supplemental reading lists, and links to online videos, interactive materials, and other resources.
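The frequentist/Bayesian pairing that this book teaches can be illustrated on a single binomial outcome. A hedged sketch follows; the data (14 successes in 20 trials) and the uniform Beta(1,1) prior are invented for illustration:

```python
from math import comb

# Illustrative data: 14 successes in 20 trials; null hypothesis p = 0.5.
n, k = 20, 14

# Frequentist view: one-sided binomial p-value, P(X >= k | p = 0.5).
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Bayesian view: with a uniform Beta(1, 1) prior, the posterior for p is
# Beta(k + 1, n - k + 1), whose mean is (k + 1) / (n + 2).
posterior_mean = (k + 1) / (n + 2)

print(round(p_value, 4))        # 0.0577
print(round(posterior_mean, 3))  # 0.682
```

The two numbers answer different questions, which is exactly the contrast the book develops: the p-value concerns the data under a fixed hypothesis, while the posterior mean summarizes belief about the parameter given the data.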

8. PERFORMANCE

Directory of Open Access Journals (Sweden)

M Cilli

2014-10-01

Full Text Available This study aimed to investigate the kinematic and kinetic changes caused by jumping movements during a dynamic warm-up when resistance, produced using different percentages of body weight, is applied in horizontal and vertical directions. The group of subjects consisted of 35 voluntary male athletes (19 basketball and 16 volleyball players; age: 23.4 ± 1.4 years; training experience: 9.6 ± 2.7 years; height: 177.2 ± 5.7 cm; body weight: 69.9 ± 6.9 kg) studying Physical Education, who had a jump training background and who were training for 2 hours on 4 days a week. A dynamic warm-up protocol containing seven specific resistance movements, with resistance corresponding to different percentages of body weight (2%, 4%, 6%, 8%, 10%), was applied randomly on non-consecutive days. Effects of the different warm-up protocols were assessed by pre-/post-exercise changes in jump height in the countermovement jump (CMJ) and the squat jump (SJ), measured using a force platform, and by changes in hip and knee joint angles at the end of the eccentric phase, measured using a video camera. A significant increase in jump height was observed in the dynamic resistance warm-up conducted with different percentages of body weight (p<0.05). In jump movements before and after the warm-up, while no significant difference between the vertical ground reaction forces applied by the athletes was observed (p>0.05), in some cases of resistance a significant reduction was observed in hip and knee joint angles (p<0.05). The dynamic resistance warm-up method was found to cause changes in the kinematics of jumping movements, as well as an increase in jump height values. As a result, dynamic warm-up exercises could be applicable with resistance corresponding to 6-10% of body weight applied in horizontal and vertical directions in order to increase jump performance acutely.

9. STARTING BLOCK PERFORMANCE IN SPRINTERS: A STATISTICAL METHOD FOR IDENTIFYING DISCRIMINATIVE PARAMETERS OF THE PERFORMANCE AND AN ANALYSIS OF THE EFFECT OF PROVIDING FEEDBACK OVER A 6-WEEK PERIOD

Directory of Open Access Journals (Sweden)

Sylvie Fortier

2005-06-01

Full Text Available The purpose of this study was twofold: (a) to examine if kinetic and kinematic parameters of the sprint start could differentiate elite from sub-elite sprinters and (b) to investigate whether providing feedback (FB) about selected parameters could improve starting block performance of intermediate sprinters over a 6-week training period. Twelve male sprinters, assigned to an elite or a sub-elite group, participated in Experiment 1. Eight intermediate sprinters participated in Experiment 2. All athletes were required to perform three sprint starts at maximum intensity followed by a 10-m run. To detect differences between elite and sub-elite groups, comparisons were made using t-tests for independent samples. Parameters reaching a significant group difference were retained for the linear discriminant analysis (LDA). The LDA yielded four discriminative kinetic parameters. Feedback about these selected parameters was given to sprinters in Experiment 2. For this experiment, data acquisition was divided into three periods. The first six sessions were without specific FB, whereas the following six sessions were enriched by kinetic FB. Finally, athletes underwent a retention session (without FB) 4 weeks after the twelfth session. Even though differences were found in the time to front peak force, the time to rear peak force, and the front peak force in the retention session, the results of the present study showed that providing FB about selected kinetic parameters differentiating elite from sub-elite sprinters did not improve the starting block performance of intermediate sprinters.
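The group comparison step above uses t-tests for independent samples. A minimal stdlib sketch of Welch's t statistic follows; the reaction-time numbers are invented, not the study's measurements:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances allowed)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = math.sqrt(va / na + vb / nb)                # standard error of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical block reaction times in seconds, purely illustrative.
elite = [0.138, 0.142, 0.135, 0.140, 0.137, 0.139]
sub_elite = [0.155, 0.149, 0.158, 0.152, 0.160, 0.154]

t = welch_t(elite, sub_elite)
print(t < 0)  # True: the elite group's mean is lower, so t is negative
```

In practice the statistic is compared against a t distribution with Welch-Satterthwaite degrees of freedom to obtain the p-value; only variables that reach significance here would feed the subsequent LDA.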

10. National Statistical Commission and Indian Official Statistics

Author Affiliations. T J Rao1. C. R. Rao Advanced Institute of Mathematics, Statistics and Computer Science (AIMSCS) University of Hyderabad Campus Central University Post Office, Prof. C. R. Rao Road Hyderabad 500 046, AP, India.

11. Statistical test of anarchy

International Nuclear Information System (INIS)

Gouvea, Andre de; Murayama, Hitoshi

2003-01-01

'Anarchy' is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |U_e3|^2, the remaining unknown 'angle' of the leptonic mixing matrix.
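The KS test used here measures the largest gap between an empirical distribution and a reference one. A minimal sketch of the one-sample KS statistic follows, against a uniform reference; the sample values are invented, not neutrino data:

```python
def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the maximum absolute gap
    between the empirical CDF of `sample` and the reference `cdf`."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # The empirical CDF jumps from i/n to (i+1)/n at x; check the gap on both sides.
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

def uniform_cdf(x):
    return min(max(x, 0.0), 1.0)

sample = [0.1, 0.25, 0.4, 0.55, 0.7, 0.9]
print(round(ks_statistic(sample, uniform_cdf), 4))  # 0.1333
```

The statistic is then converted to a probability (the "KS probability" quoted in the abstract) via the Kolmogorov distribution; in the paper the reference distributions come from the Haar measure rather than the uniform used in this toy example.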

12. A method for screening active components from Chinese herbs by cell membrane chromatography-offline-high performance liquid chromatography/mass spectrometry and an online statistical tool for data processing.

Science.gov (United States)

Cao, Yan; Wang, Shaozhan; Li, Yinghua; Chen, Xiaofei; Chen, Langdong; Wang, Dongyao; Zhu, Zhenyu; Yuan, Yongfang; Lv, Diya

2018-03-09

Cell membrane chromatography (CMC) has been successfully applied to screen bioactive compounds from Chinese herbs for many years, and some offline and online two-dimensional (2D) CMC-high performance liquid chromatography (HPLC) hyphenated systems have been established to perform screening assays. However, the requirement of sample preparation steps for the second-dimensional analysis in offline systems and the need for an interface device and technical expertise in the online system limit their extensive use. In the present study, an offline 2D CMC-HPLC analysis combined with the XCMS (various forms of chromatography coupled to mass spectrometry) Online statistical tool for data processing was established. First, our previously reported online 2D screening system was used to analyze three Chinese herbs that were reported to have potential anti-inflammatory effects, and two binding components were identified. By contrast, the proposed offline 2D screening method with XCMS Online analysis was applied, and three more ingredients were discovered in addition to the two compounds revealed by the online system. Then, cross-validation of the three compounds was performed, and they were confirmed to be included in the online data as well, but were not identified there because of their low concentrations and lack of credible statistical approaches. Last, pharmacological experiments showed that these five ingredients could inhibit IL-6 release and IL-6 gene expression on LPS-induced RAW cells in a dose-dependent manner. Compared with previous 2D CMC screening systems, this newly developed offline 2D method needs no sample preparation steps for the second-dimensional analysis, and it is sensitive, efficient, and convenient. It will be applicable in identifying active components from Chinese herbs and practical in discovery of lead compounds derived from herbs. Copyright © 2018 Elsevier B.V. All rights reserved.

13. Descriptive statistics: the specification of statistical measures and their presentation in tables and graphs. Part 7 of a series on evaluation of scientific publications.

Science.gov (United States)

Spriestersbach, Albert; Röhrig, Bernd; du Prel, Jean-Baptist; Gerhold-Ay, Aslihan; Blettner, Maria

2009-09-01

Descriptive statistics are an essential part of biometric analysis and a prerequisite for the understanding of further statistical evaluations, including the drawing of inferences. When data are well presented, it is usually obvious whether the author has collected and evaluated them correctly and in keeping with accepted practice in the field. Statistical variables in medicine may be of either the metric (continuous, quantitative) or categorical (nominal, ordinal) type. Easily understandable examples are given. Basic techniques for the statistical description of collected data are presented and illustrated with examples. The goal of a scientific study must always be clearly defined. The definition of the target value or clinical endpoint determines the level of measurement of the variables in question. Nearly all variables, whatever their level of measurement, can be usefully presented graphically and numerically. The level of measurement determines what types of diagrams and statistical values are appropriate. There are also different ways of presenting combinations of two independent variables graphically and numerically. The description of collected data is indispensable. If the data are of good quality, valid and important conclusions can already be drawn when they are properly described. Furthermore, data description provides a basis for inferential statistics.
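The metric/categorical distinction above determines which descriptive summaries are appropriate. A small illustrative sketch (all values invented):

```python
from statistics import mean, median, stdev
from collections import Counter

# Metric (continuous) variable: summarize with location and spread.
ages = [34, 41, 29, 50, 38, 45, 33]
print(round(mean(ages), 1))   # 38.6
print(median(ages))           # 38
print(round(stdev(ages), 1))  # sample standard deviation

# Categorical (nominal) variable: summarize with frequencies; the mean is meaningless here.
blood_groups = ["A", "O", "O", "B", "A", "O", "AB"]
print(Counter(blood_groups).most_common(1)[0])  # modal category with its count: ('O', 3)
```

Graphically the same rule applies: histograms and box plots suit the metric variable, bar charts suit the categorical one.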

14. The Relationship between Teacher Competence, Emotional Intelligence and Teacher Performance in Madrasah Tsanawiyah at the District of Serang, Banten

Science.gov (United States)

Wahyuddin, Wawan

2016-01-01

This study examines the relationship of teacher competence and the emotional intelligence held by teachers to teacher performance in Madrasah Tsanawiyah at the district of Serang, Banten. This research was conducted with the quantitative method, through descriptive and inferential analysis. The samples of the research were teachers…

15. Statistical theory of dynamo

Science.gov (United States)

Kim, E.; Newton, A. P.

2012-04-01

One major problem in dynamo theory is the multi-scale nature of the MHD turbulence, which requires statistical theory in terms of probability distribution functions. In this contribution, we present the statistical theory of magnetic fields in a simplified mean field α-Ω dynamo model by varying the statistical property of alpha, including marginal stability and intermittency, and then utilize observational data of solar activity to fine-tune the mean field dynamo model. Specifically, we first present a comprehensive investigation into the effect of the stochastic parameters in a simplified α-Ω dynamo model. Through considering the manifold of marginal stability (the region of parameter space where the mean growth rate is zero), we show that stochastic fluctuations are conducive to dynamo. Furthermore, by considering the cases of fluctuating alpha that are periodic and Gaussian coloured random noise with identical characteristic time-scales and fluctuating amplitudes, we show that the transition to dynamo is significantly facilitated for stochastic alpha with random noise. Furthermore, we show that probability density functions (PDFs) of the growth-rate, magnetic field and magnetic energy can provide a wealth of useful information regarding the dynamo behaviour/intermittency. Finally, the precise statistical property of the dynamo, such as temporal correlation and fluctuating amplitude, is found to be dependent on the distribution of the fluctuations of the stochastic parameters. We then use observations of solar activity to constrain parameters relating to the α effect in stochastic α-Ω nonlinear dynamo models. This is achieved through performing a comprehensive statistical comparison by computing PDFs of solar activity from observations and from our simulation of the mean field dynamo model. The observational data that are used are the time history of solar activity inferred from C14 data over the past 11000 years on a long time scale and direct observations of sunspots.

16. Statistics For Dummies

CERN Document Server

Rumsey, Deborah

2011-01-01

The fun and easy way to get down to business with statistics Stymied by statistics? No fear ? this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first semester statistics course.

17. Recreational Boating Statistics 2012

Data.gov (United States)

Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

18. Recreational Boating Statistics 2013

Data.gov (United States)

Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

19. Statistical data analysis handbook

National Research Council Canada - National Science Library

Wall, Francis J

1986-01-01

It must be emphasized that this is not a textbook on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...

20. CMS Program Statistics

Data.gov (United States)

U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...

1. Recreational Boating Statistics 2011

Data.gov (United States)

Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

2. Uterine Cancer Statistics

Science.gov (United States)

... the most commonly diagnosed gynecologic cancer. U.S. Cancer Statistics Data Visualizations Tool The Data Visualizations tool makes ...

3. Tuberculosis Data and Statistics

Science.gov (United States)

... Mortality and Morbidity Weekly Reports Data and Statistics Decrease in Reported Tuberculosis Cases MMWR 2010; 59 ( ...

4. National transportation statistics 2011

Science.gov (United States)

2011-04-01

Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics : (BTS), National Transportation Statistics presents information on the U.S. transportation system, including : its physical components, safety reco...

5. National Transportation Statistics 2008

Science.gov (United States)

2009-01-08

Compiled and published by the U.S. Department of Transportations Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record...

6. Mental Illness Statistics

Science.gov (United States)

... Research shows that mental illnesses are common in ... of mental illnesses, such as suicide and disability. ...

7. School Violence: Data & Statistics

Science.gov (United States)


8. Caregiver Statistics: Demographics

Science.gov (United States)

... needs and services are wide-ranging and complex, statistics may vary from study to study. ...

9. Aortic Aneurysm Statistics

Science.gov (United States)


10. Alcohol Facts and Statistics

Science.gov (United States)

... Alcohol Use in the United States: ... 1238–1245, 2004. PMID: 15010446 National Center for Statistics and Analysis. 2014 Crash Data Key Findings (Traffic ...

11. National Transportation Statistics 2009

Science.gov (United States)

2010-01-21

Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record, ...

12. Statistics for Finance

DEFF Research Database (Denmark)

Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...

13. Principles of applied statistics

National Research Council Canada - National Science Library

Cox, D. R; Donnelly, Christl A

2011-01-01

.... David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation...

14. Applying contemporary statistical techniques

CERN Document Server

Wilcox, Rand R

2003-01-01

Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible. * Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods * Covers the latest developments on multiple comparisons * Includes recent advances...

15. Interactive statistics with ILLMO

NARCIS (Netherlands)

Martens, J.B.O.S.

2014-01-01

Progress in empirical research relies on adequate statistical analysis and reporting. This article proposes an alternative approach to statistical modeling that is based on an old but mostly forgotten idea, namely Thurstone modeling. Traditional statistical methods assume that either the measured

16. Ethics in Statistics

Science.gov (United States)

Lenard, Christopher; McCarthy, Sally; Mills, Terence

2014-01-01

There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…

17. Youth Sports Safety Statistics

Science.gov (United States)

... 6):794-799. 31 American Heart Association. CPR statistics. www.heart.org/HEARTORG/CPRAndECC/WhatisCPR/CPRFactsandStats/CPRpercent20Statistics_ ... Mental Health Services Administration, Center for Behavioral Health Statistics and Quality. (January 10, 2013). The DAWN Report: ...

18. THE INFLUENCE OF ORGANIZATIONAL CLIMATE, TRANSFORMATIONAL LEADERSHIP, AND WORK MOTIVATION ON TEACHER JOB PERFORMANCE

Directory of Open Access Journals (Sweden)

K. Kartini

2017-07-01

Full Text Available This research aimed at investigating the influence of organizational climate, transformational leadership, and work motivation on teacher job performance at Pondok Modern Tazakka, Batang, Central Java. The research used a quantitative approach with a survey method. The sample in this research consisted of 55 randomly selected teachers. The data were analyzed using descriptive statistics and inferential statistics by means of path analysis. (1) Organizational climate has a positive direct effect on teacher performance, with path coefficient (py1) = 0.257 and t-count 2.963 > t-table 1.684; (2) transformational leadership has a positive direct effect on teacher performance, with path coefficient (py2) = 0.489 and t-count 5.164 > t-table 1.684; (3) work motivation has a positive direct effect on teacher performance, with path coefficient (py3) = 0.261 and t-count 2.42 > t-table 1.684; (4) organizational climate has a positive direct effect on work motivation, with path coefficient (p31) = 0.391 and t-count 3.990 > t-table 1.684; and (5) transformational leadership has a positive direct effect on work motivation, with path coefficient (p32) = 0.526 and t-count 5.376 > t-table 1.684. In conclusion, organizational climate, transformational leadership, and work motivation have a direct effect on teacher job performance; organizational climate and transformational leadership also have a direct effect on teacher work motivation. Therefore, to improve teacher job performance, organizational climate, transformational leadership, and work motivation must all be considered for improvement.
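The significance decisions above all follow one rule: a path coefficient is retained when its t-count exceeds the critical t-table value. That rule can be sketched directly on the coefficients quoted in the abstract (the critical value 1.684 is the one the abstract uses):

```python
# Significance rule from the abstract: an effect is significant when t-count > t-table.
T_TABLE = 1.684  # critical value quoted in the abstract

# (path coefficient, t-count) pairs, as reported in the abstract.
paths = {
    "climate -> performance":    (0.257, 2.963),
    "leadership -> performance": (0.489, 5.164),
    "motivation -> performance": (0.261, 2.420),
    "climate -> motivation":     (0.391, 3.990),
    "leadership -> motivation":  (0.526, 5.376),
}

for name, (coef, t_count) in paths.items():
    verdict = "significant" if t_count > T_TABLE else "not significant"
    print(f"{name}: coefficient={coef}, t={t_count} -> {verdict}")
```

Every t-count exceeds 1.684, which is why all five paths are reported as positive direct effects.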

19. Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain

Science.gov (United States)

Yim, Kyoung Hoon; Han, Kyoung Ah; Park, Soo Young

2010-01-01

Background Statistical analysis is essential in regard to obtaining objective reliability for medical research. However, medical researchers do not have enough statistical knowledge to properly analyze their study data. To help understand and potentially alleviate this problem, we have analyzed the statistical methods and errors of articles published in the Korean Journal of Pain (KJP), with the intention to improve the statistical quality of the journal. Methods All the articles, except case reports and editorials, published from 2004 to 2008 in the KJP were reviewed. The types of applied statistical methods and errors in the articles were evaluated. Results One hundred and thirty-nine original articles were reviewed. Inferential statistics and descriptive statistics were used in 119 papers and 20 papers, respectively. Only 20.9% of the papers were free from statistical errors. The most commonly adopted statistical method was the t-test (21.0%) followed by the chi-square test (15.9%). Errors of omission were encountered 101 times in 70 papers. Among the errors of omission, "no statistics used even though statistical methods were required" was the most common (40.6%). The errors of commission were encountered 165 times in 86 papers, among which "parametric inference for nonparametric data" was the most common (33.9%). Conclusions We found various types of statistical errors in the articles published in the KJP. This suggests that meticulous attention should be given not only in applying statistical procedures but also in the reviewing process to improve the value of the article. PMID:20552071
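The most common error of commission above, parametric inference for nonparametric data, arises when a t-test is applied where a rank-based test such as the Mann-Whitney U would be appropriate. A minimal sketch of the U statistic itself follows; the sample values are invented:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: counts pairwise 'wins' of sample a over sample b
    (ties contribute 0.5). Rank-based, so it needs no normality assumption."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

group_a = [3, 5, 8, 9]  # illustrative scores
group_b = [1, 2, 4, 6]

u = mann_whitney_u(group_a, group_b)
print(u)                                 # 13.0
print(len(group_a) * len(group_b) - u)   # complementary statistic U' = 3.0
```

Because U depends only on the ordering of the observations, it remains valid for skewed or ordinal data where the t-test's normality assumption fails; the smaller of U and U' is compared against tabulated critical values.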

20. Statistical summary 1990-91

International Nuclear Information System (INIS)

1991-01-01

The information contained in this statistical summary leaflet summarizes in bar charts or pie charts Nuclear Electric's performance in 1990-91 in the areas of finance, plant and plant operations, safety, commercial operations and manpower. It is intended that the information will provide a basis for comparison in future years. The leaflet also includes a summary of Nuclear Electric's environmental policy statement. (UK)