WorldWideScience

Sample records for perform inferential statistical

  1. Performing Inferential Statistics Prior to Data Collection

    Science.gov (United States)

    Trafimow, David; MacDonald, Justin A.

    2017-01-01

    Typically, in education and psychology research, the investigator collects data and subsequently performs descriptive and inferential statistics. For example, a researcher might compute group means and use the null hypothesis significance testing procedure to draw conclusions about the populations from which the groups were drawn. We propose an…

  2. A Statistical Primer: Understanding Descriptive and Inferential Statistics

    OpenAIRE

    Gillian Byrne

    2007-01-01

    As libraries and librarians move more towards evidence‐based decision making, the amount of data being generated in libraries is growing. Understanding the basics of statistical analysis is crucial for evidence‐based practice (EBP), in order to correctly design and analyze research as well as to evaluate the research of others. This article covers the fundamentals of descriptive and inferential statistics, from hypothesis construction to sampling to common statistical techniques including chi‐square, co...

  3. Descriptive and inferential statistical methods used in burns research.

    Science.gov (United States)

    Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars

    2010-05-01

    Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11 (22%) were randomised controlled trials, 18 (35%) were cohort studies, 11 (22%) were case control studies and 11 (22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49 (96%) articles. Data dispersion was calculated by standard deviation in 30 (59%). Standard error of the mean was quoted in 19 (37%). The statistical software product was named in 33 (65%). Of the 49 articles that used inferential statistics, the tests were named in 47 (96%). The six most common tests used (Student's t-test (53%), analysis of variance/covariance (33%), chi-square test (27%), Wilcoxon and Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43 (88%) and the exact significance levels were reported in 28 (57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals.
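    The descriptive measures this survey counts most often (mean, SD, and standard error of the mean) take only a few lines of Python's standard library to compute. The burn-size values below are hypothetical, purely for illustration:

```python
import math
import statistics

# Hypothetical total-burn-surface-area values (%) for eight patients
tbsa = [12.5, 8.0, 22.3, 15.1, 9.8, 30.2, 18.6, 11.4]

mean = statistics.mean(tbsa)        # central tendency
sd = statistics.stdev(tbsa)         # dispersion among patients (sample SD)
sem = sd / math.sqrt(len(tbsa))     # precision of the estimated mean

print(f"mean = {mean:.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
```

    The SD/SEM distinction matters for the reporting practices the article tallies: SD describes spread across patients, while SEM (always smaller) describes uncertainty in the mean itself.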

  4. The Development of Introductory Statistics Students' Informal Inferential Reasoning and Its Relationship to Formal Inferential Reasoning

    Science.gov (United States)

    Jacob, Bridgette L.

    2013-01-01

    The difficulties introductory statistics students have with formal statistical inference are well known in the field of statistics education. "Informal" statistical inference has been studied as a means to introduce inferential reasoning well before and without the formalities of formal statistical inference. This mixed methods study…

  5. [The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].

    Science.gov (United States)

    Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel

    2017-01-01

    The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference means drawing conclusions about a population from tests performed on data obtained from a sample of that population. Statistical tests are used to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was drawn. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.
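    The three aspects the abstract names (design, number of measurements, measurement scale), plus the parametric/nonparametric split, can be sketched as a toy decision rule. The function and its categories are illustrative simplifications, not an exhaustive guide; real test choice also weighs sample size, variance homogeneity, and more:

```python
def choose_test(scale, groups, paired, normal):
    """Toy lookup for a statistical test.

    scale  -- "nominal", "ordinal", or "continuous"
    groups -- number of groups being compared
    paired -- True for repeated/paired measurements
    normal -- True if the data look normally distributed
    """
    if scale == "nominal":
        return "chi-square test"
    if scale == "continuous" and normal:      # parametric branch
        if groups == 2:
            return "paired t-test" if paired else "independent t-test"
        return "one-way ANOVA"
    if groups == 2:                           # nonparametric branch
        return "Wilcoxon signed-rank test" if paired else "Mann-Whitney U test"
    return "Kruskal-Wallis test"

print(choose_test("continuous", 2, False, True))   # independent t-test
```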

  6. The research protocol VI: How to choose the appropriate statistical test. Inferential statistics

    Directory of Open Access Journals (Sweden)

    Eric Flores-Ruiz

    2017-10-01

    Full Text Available The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference means drawing conclusions about a population from tests performed on data obtained from a sample of that population. Statistical tests are used to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was drawn. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.

  7. An introduction to inferential statistics: A review and practical guide

    International Nuclear Information System (INIS)

    Marshall, Gill; Jonker, Leon

    2011-01-01

    Building on the first part of this series regarding descriptive statistics, this paper demonstrates why it is advantageous for radiographers to understand the role of inferential statistics in deducing conclusions from a sample and their application to a wider population. This is necessary so radiographers can understand the work of others, can undertake their own research and evidence base their practice. This article explains p values and confidence intervals. It introduces the common statistical tests that comprise inferential statistics, and explains the use of parametric and non-parametric statistics. To do this, the paper reviews relevant literature, and provides a checklist of points to consider before and after applying statistical tests to a data set. The paper provides a glossary of relevant terms and the reader is advised to refer to this when any unfamiliar terms are used in the text. Together with the information provided on descriptive statistics in an earlier article, it can be used as a starting point for applying statistics in radiography practice and research.
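    As a concrete illustration of the confidence intervals this guide explains, here is a large-sample (z-based) 95% interval for a mean, using only the standard library. The measurements are made up, and for small samples a t critical value would replace z:

```python
import math
from statistics import NormalDist, mean, stdev

# Hypothetical repeated readings (arbitrary units)
data = [4.8, 5.1, 5.0, 4.9, 5.2, 5.3, 4.7, 5.0, 5.1, 4.9,
        5.0, 5.2, 4.8, 5.1, 5.0, 4.9, 5.3, 5.0, 4.8, 5.1,
        5.0, 4.9, 5.2, 5.0, 4.7, 5.1, 5.0, 5.2, 4.9, 5.0]

m = mean(data)
se = stdev(data) / math.sqrt(len(data))   # standard error of the mean
z = NormalDist().inv_cdf(0.975)           # two-sided 95% -> z close to 1.96
ci = (m - z * se, m + z * se)

print(f"mean = {m:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

    The interval, not the point estimate alone, is what lets a reader judge the precision of the sample-to-population inference the paper describes.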

  8. An introduction to inferential statistics: A review and practical guide

    Energy Technology Data Exchange (ETDEWEB)

    Marshall, Gill, E-mail: gill.marshall@cumbria.ac.u [Faculty of Health, Medical Sciences and Social Care, University of Cumbria, Lancaster LA1 3JD (United Kingdom); Jonker, Leon [Faculty of Health, Medical Sciences and Social Care, University of Cumbria, Lancaster LA1 3JD (United Kingdom)

    2011-02-15

    Building on the first part of this series regarding descriptive statistics, this paper demonstrates why it is advantageous for radiographers to understand the role of inferential statistics in deducing conclusions from a sample and their application to a wider population. This is necessary so radiographers can understand the work of others, can undertake their own research and evidence base their practice. This article explains p values and confidence intervals. It introduces the common statistical tests that comprise inferential statistics, and explains the use of parametric and non-parametric statistics. To do this, the paper reviews relevant literature, and provides a checklist of points to consider before and after applying statistical tests to a data set. The paper provides a glossary of relevant terms and the reader is advised to refer to this when any unfamiliar terms are used in the text. Together with the information provided on descriptive statistics in an earlier article, it can be used as a starting point for applying statistics in radiography practice and research.

  9. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    Science.gov (United States)

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on the assumptions and limitations of the methods reviewed. Several methods are available to analyse FNI data, indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know their limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149
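    One standard global-normalization approach covered in such reviews is proportional scaling: rescaling every scan so its global mean hits a common target, which removes scan-to-scan differences in overall signal. A minimal sketch; the target of 50 follows a common PET convention, and the "voxel" values are invented:

```python
def proportional_scale(scan, target=50.0):
    """Rescale voxel values so the scan's global mean equals `target`."""
    global_mean = sum(scan) / len(scan)
    return [v * target / global_mean for v in scan]

scan = [38.0, 55.0, 61.0, 47.0, 52.0]   # toy "voxel" intensities
scaled = proportional_scale(scan)
print(sum(scaled) / len(scaled))        # global mean is now 50 (up to float rounding)
```

    The assumption that global signal is uninteresting is itself a modeling choice, which is exactly the kind of hidden assumption the paper asks readers to keep in view.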

  10. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    Science.gov (United States)

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  11. Applying contemporary philosophy in mathematics and statistics education : The perspective of inferentialism

    NARCIS (Netherlands)

    Schindler, Maike; Mackrell, Kate; Pratt, Dave; Bakker, A.

    2017-01-01

    Schindler, M., Mackrell, K., Pratt, D., & Bakker, A. (2017). Applying contemporary philosophy in mathematics and statistics education: The perspective of inferentialism. In G. Kaiser (Ed.). Proceedings of the 13th International Congress on Mathematical Education, ICME-13

  12. A Response to White and Gorard: Against Inferential Statistics: How and Why Current Statistics Teaching Gets It Wrong

    Science.gov (United States)

    Nicholson, James; Ridgway, Jim

    2017-01-01

    White and Gorard make important and relevant criticisms of some of the methods commonly used in social science research, but go further by criticising the logical basis for inferential statistical tests. This paper comments briefly on matters we broadly agree on with them and more fully on matters where we disagree. We agree that too little…

  13. Inferential statistics, power estimates, and study design formalities continue to suppress biomedical innovation

    OpenAIRE

    Kern, Scott E.

    2014-01-01

    Innovation is the direct intended product of certain styles in research, but not of others. Fundamental conflicts between descriptive vs inferential statistics, deductive vs inductive hypothesis testing, and exploratory vs pre-planned confirmatory research designs have been played out over decades, with winners and losers and consequences. Longstanding warnings from both academics and research-funding interests have failed to influence effectively the course of these battles. The NIH publicly...

  14. On inferentialism

    Science.gov (United States)

    Radford, Luis

    2017-12-01

    This article is a critical commentary on inferentialism in mathematics education. In the first part, I comment on some of the major shortcomings that inferentialists see in the theoretical underpinnings of representationalist, empiricist, and socioconstructivist mathematics education theories. I discuss in particular the criticism that inferentialism makes of the social dimension as conceptualized by socioconstructivism and the question related to the objectivity of knowledge. In the second part, I discuss some of the theoretical foundations of inferentialism in mathematics education and try to answer the question of whether or not inferentialism overcomes the individual-social divide. In the third part, I speculate on what I think inferentialism accomplishes and what I think it does not.

  15. Why inferential statistics are inappropriate for development studies and how the same data can be better used

    OpenAIRE

    Ballinger, Clint

    2011-01-01

    The purpose of this paper is twofold: 1) to highlight the widely ignored but fundamental problem of 'superpopulations' for the use of inferential statistics in development studies. We do not dwell on this problem, however, as it has been sufficiently discussed in older papers by statisticians that social scientists have nevertheless long chosen to ignore; the interested reader can turn to those for greater detail. 2) to show that descriptive statistics both avoid the problem of s...

  16. A Framework to Support Research on Informal Inferential Reasoning

    Science.gov (United States)

    Zieffler, Andrew; Garfield, Joan; delMas, Robert; Reading, Chris

    2008-01-01

    Informal inferential reasoning is a relatively recent concept in the research literature. Several research studies have defined this type of cognitive process in slightly different ways. In this paper, a working definition of informal inferential reasoning based on an analysis of the key aspects of statistical inference, and on research from…

  17. Surface Area of Patellar Facets: Inferential Statistics in the Iraqi Population

    Directory of Open Access Journals (Sweden)

    Ahmed Al-Imam

    2017-01-01

    Full Text Available Background. The patella is the largest sesamoid bone in the body; its three-dimensional complexity necessitates biomechanical perfection. Numerous pathologies occur at the patellofemoral unit which may end in degenerative changes. This study aims to test for statistical correlation between the surface areas of the patellar facets and other patellar morphometric parameters. Materials and Methods. Forty dry human patellae were studied. The morphometry of each patella was measured using a digital Vernier caliper, an electronic balance, and the image analysis software ImageJ. The patellar facetal surface area was correlated with patellar weight, height, width, and thickness. Results. Inferential statistics demonstrated a linear correlation of total facetal surface area with patellar weight, height, width, and thickness. The correlation was strongest for surface area versus patellar weight. The lateral facetal area was consistently larger than the medial facetal area (p < 0.001, one-tailed t-test, for both right and left patellae). Conclusion. These data are vital for the restoration of the normal biomechanics of the patellofemoral unit; they are to be consulted during knee surgeries and implant design, and are of indispensable anthropometric, interethnic, and biometric value.
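    A one-tailed paired comparison like the lateral-versus-medial test reported here can also be run as a Monte Carlo sign-flip (permutation) test on the paired differences, with no distributional assumptions. The area values below are invented for illustration, not the study's data:

```python
import random
import statistics

# Hypothetical paired facet areas (mm^2): lateral consistently larger
lateral = [612.0, 580.5, 645.2, 598.7, 630.1, 571.9, 655.4, 603.3]
medial  = [541.2, 510.8, 570.6, 533.4, 559.9, 505.1, 581.0, 529.7]
diffs = [l - m for l, m in zip(lateral, medial)]

def sign_flip_p(diffs, n_perm=20000, seed=1):
    """One-tailed p: P(mean of randomly sign-flipped diffs >= observed mean)."""
    rng = random.Random(seed)
    observed = statistics.mean(diffs)
    hits = 0
    for _ in range(n_perm):
        flipped = [d * rng.choice((-1.0, 1.0)) for d in diffs]
        if statistics.mean(flipped) >= observed:
            hits += 1
    return hits / n_perm

p = sign_flip_p(diffs)
print(f"one-tailed p approx {p:.4f}")   # all 8 diffs positive -> p near (1/2)**8
```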

  18. Attitude towards statistics and performance among post-graduate students

    Science.gov (United States)

    Rosli, Mira Khalisa; Maat, Siti Mistima

    2017-05-01

    Mastering statistics is a necessity for students, especially post-graduates involved in research. The purpose of this research was to identify the attitude towards statistics among post-graduates and to determine the relationship between attitude towards statistics and the performance of post-graduates at the Faculty of Education, UKM, Bangi. A total of 173 post-graduate students were chosen randomly to participate in the study. These students were registered in the Research Methodology II course offered by the faculty. A survey of attitude toward statistics using a 5-point Likert scale was used for data collection. The instrument consists of four components: affective, cognitive competency, value and difficulty. The data were analyzed using SPSS version 22 to produce descriptive and inferential statistics output. The results showed a moderate, positive relationship between attitude towards statistics and students' performance. In conclusion, educators need to assess students' attitude towards the course to accomplish the learning outcomes.
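    A positive attitude-performance relationship of the kind reported here is typically quantified with Pearson's r. A from-scratch sketch on made-up attitude/grade scores (the study itself used SPSS; this is only an illustration of the statistic):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up data: attitude score (Likert sum) vs. course grade (%)
attitude = [62, 75, 58, 80, 70, 66, 73, 59]
grade    = [68, 81, 60, 78, 74, 65, 79, 66]

r = pearson_r(attitude, grade)
print(f"r = {r:.2f}")   # positive: higher attitude tends to go with higher grades
```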

  19. On optimal feedforward and ILC : the role of feedback for optimal performance and inferential control

    NARCIS (Netherlands)

    van Zundert, J.C.D.; Oomen, T.A.E.

    2017-01-01

    The combination of feedback control with inverse model feedforward control or iterative learning control is known to yield high performance. The aim of this paper is to clarify the role of feedback in the design of feedforward controllers, with specific attention to the inferential situation. Recent

  20. Conditionals and inferential connections: A hypothetical inferential theory.

    Science.gov (United States)

    Douven, Igor; Elqayam, Shira; Singmann, Henrik; van Wijnbergen-Huitink, Janneke

    2018-03-01

    Intuition suggests that for a conditional to be evaluated as true, there must be some kind of connection between its component clauses. In this paper, we formulate and test a new psychological theory to account for this intuition. We combined previous semantic and psychological theorizing to propose that the key to the intuition is a relevance-driven, satisficing-bounded inferential connection between antecedent and consequent. To test our theory, we created a novel experimental paradigm in which participants were presented with a soritical series of objects, notably colored patches (Experiments 1 and 4) and spheres (Experiment 2), or both (Experiment 3), and were asked to evaluate related conditionals embodying non-causal inferential connections (such as "If patch number 5 is blue, then so is patch number 4"). All four experiments displayed a unique response pattern, in which (largely determinate) responses were sensitive to parameters determining inference strength, as well as to consequent position in the series, in a way analogous to belief bias. Experiment 3 showed that this guaranteed relevance can be suppressed, with participants reverting to the defective conditional. Experiment 4 showed that this pattern can be partly explained by a measure of inference strength. This pattern supports our theory's "principle of relevant inference" and "principle of bounded inference," highlighting the dual processing characteristics of the inferential connection. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  1. [Inferential evaluation of intimacy based on observation of interpersonal communication].

    Science.gov (United States)

    Kimura, Masanori

    2015-06-01

    How do people inferentially evaluate others' levels of intimacy with friends? We examined the inferential evaluation of intimacy based on the observation of interpersonal communication. In Experiment 1, participants (N = 41) responded to questions after observing conversations between friends. Results indicated that participants inferentially evaluated not only goodness of communication, but also intimacy between friends, using an expressivity heuristic approach. In Experiment 2, we investigated how inferential evaluation of intimacy was affected by prior information about relationships and by individual differences in face-to-face interactional ability. Participants (N = 64) were divided into prior- and no-prior-information groups and all performed the same task as in Experiment 1. Additionally, their interactional ability was assessed. In the prior-information group, individual differences had no effect on inferential evaluation of intimacy. On the other hand, in the no-prior-information group, face-to-face interactional ability partially influenced evaluations of intimacy. Finally, we discuss the fact that to understand one's social environment, it is important to observe others' interpersonal communications.

  2. The Relationship between Test Anxiety and Academic Performance of Students in Vital Statistics Course

    Directory of Open Access Journals (Sweden)

    Shirin Iranfar

    2013-12-01

    Full Text Available Introduction: Test anxiety is a common phenomenon among students and one of the problems of the educational system. The present study was conducted to investigate test anxiety in the vital statistics course and its association with the academic performance of students at Kermanshah University of Medical Sciences. This descriptive-analytical study included students of the nursing and midwifery, paramedicine, and health faculties who had taken the vital statistics course; they were selected through the census method. The Sarason questionnaire was used to assess test anxiety. Data were analyzed by descriptive and inferential statistics. The findings indicated no significant correlation between test anxiety and the score in the vital statistics course.

  3. Inferentializing Semantics

    Czech Academy of Sciences Publication Activity Database

    Peregrin, Jaroslav

    2010-01-01

    Vol. 39, No. 3 (2010), pp. 255-274 ISSN 0022-3611 R&D Projects: GA ČR(CZ) GA401/07/0904 Institutional research plan: CEZ:AV0Z90090514 Keywords: inference * proof theory * model theory * inferentialism * semantics Subject RIV: AA - Philosophy; Religion

  4. Computational statistics handbook with Matlab

    CERN Document Server

    Martinez, Wendy L

    2007-01-01

    Prefaces
    Introduction: What Is Computational Statistics?; An Overview of the Book
    Probability Concepts: Introduction; Probability; Conditional Probability and Independence; Expectation; Common Distributions
    Sampling Concepts: Introduction; Sampling Terminology and Concepts; Sampling Distributions; Parameter Estimation; Empirical Distribution Function
    Generating Random Variables: Introduction; General Techniques for Generating Random Variables; Generating Continuous Random Variables; Generating Discrete Random Variables
    Exploratory Data Analysis: Introduction; Exploring Univariate Data; Exploring Bivariate and Trivariate Data; Exploring Multidimensional Data
    Finding Structure: Introduction; Projecting Data; Principal Component Analysis; Projection Pursuit EDA; Independent Component Analysis; Grand Tour; Nonlinear Dimensionality Reduction
    Monte Carlo Methods for Inferential Statistics: Introduction; Classical Inferential Statistics; Monte Carlo Methods for Inferential Statist...
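    The Monte Carlo methods for inferential statistics that close the book's listed contents can be illustrated with a percentile bootstrap confidence interval. This sketch is in Python rather than the book's MATLAB, and the data are arbitrary:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample with replacement, take quantiles."""
    rng = random.Random(seed)
    reps = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

data = [23.1, 19.8, 27.4, 21.0, 24.6, 18.9, 26.3, 22.7, 20.5, 25.2]
lo, hi = bootstrap_ci(data)
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

    Unlike the classical intervals of the preceding chapter, the same resampling loop works for medians, trimmed means, or any other statistic, which is the method's main appeal.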

  5. An introduction to inferentialism in mathematics education

    Science.gov (United States)

    Derry, Jan

    2017-12-01

    This paper introduces the philosophical work of Robert Brandom, termed inferentialism, which underpins this collection, and argues that it offers rich theoretical resources for reconsidering many of the challenges and issues that have arisen in mathematics education. Key to inferentialism is the privileging of the inferential over the representational in an account of meaning; and of direct concern here is the theoretical relevance of this to the process by which learners gain knowledge. Inferentialism requires that the correct application of a concept is to be understood in terms of inferential articulation; simply put, a concept has meaning only as part of a set of related concepts. The paper explains how Brandom's account of meaning is inextricably tied to freedom, and that it is our responsiveness to reasons involving norms which makes humans a distinctive life form. In an educational context, norms function to delimit the domain in which knowledge is acquired, and it is here that the neglect of our responsiveness to reasons is significant, not only for Brandom but also for Vygotsky, with implications for how knowledge is understood in mathematics classrooms. The paper explains the technical terms in Brandom's account of meaning, such as deontic scorekeeping, illustrating these through examples to show how the inferential articulation of a concept, and thus its correct application, is made visible. Inferentialism fosters the possibility of overcoming some of the thorny old problems that have set those on the side of facts and disciplines against those whose primary concern is the meaning-making of learners.

  6. Back to basics: an introduction to statistics.

    Science.gov (United States)

    Halfens, R J G; Meijers, J M M

    2013-05-01

    In the second in the series, Professor Ruud Halfens and Dr Judith Meijers give an overview of statistics, both descriptive and inferential. They describe the first principles of statistics, including some relevant inferential tests.

  7. Inferential Role and the Ideal of Deductive Logic

    Directory of Open Access Journals (Sweden)

    Thomas Hofweber

    2010-11-01

    Full Text Available Although there is a prima facie strong case for a close connection between the meaning and inferential role of certain expressions, this connection seems seriously threatened by the semantic and logical paradoxes which rely on these inferential roles. Some philosophers have drawn radical conclusions from the paradoxes for the theory of meaning in general, and for which sentences in our language are true. I criticize these overreactions, and instead propose to distinguish two conceptions of inferential role. This distinction is closely tied to two conceptions of deductive logic, and it is the key, I argue, for understanding first the connection between meaning and inferential role, and second what the paradoxes show more generally.

  8. The use of statistics in real and simulated investigations performed by undergraduate health sciences' students

    OpenAIRE

    Pimenta, Rui; Nascimento, Ana; Vieira, Margarida; Costa, Elísio

    2010-01-01

    In previous works, we evaluated the statistical reasoning ability acquired by health sciences’ students carrying out their final undergraduate project. We found that these students achieved a good level of statistical literacy and reasoning in descriptive statistics. However, concerning inferential statistics the students did not reach a similar level. Statistics educators therefore claim for more effective ways to learn statistics such as project based investigations. These can be simulat...

  9. Inferentialism as an alternative to socioconstructivism in mathematics education

    Science.gov (United States)

    Noorloos, Ruben; Taylor, Samuel D.; Bakker, Arthur; Derry, Jan

    2017-12-01

    The purpose of this article is to draw the attention of mathematics education researchers to a relatively new semantic theory called inferentialism, as developed by the philosopher Robert Brandom. Inferentialism is a semantic theory which explains concept formation in terms of the inferences individuals make in the context of an intersubjective practice of acknowledging, attributing, and challenging one another's commitments. The article argues that inferentialism can help to overcome certain problems that have plagued the various forms of constructivism, and socioconstructivism in particular. Despite the range of socioconstructivist positions on offer, there is reason to think that versions of these problems will continue to haunt socioconstructivism. The problems are that socioconstructivists (i) have not come to a satisfactory resolution of the social-individual dichotomy, (ii) are still threatened by relativism, and (iii) have been vague in their characterization of what construction is. We first present these problems; then we introduce inferentialism, and finally we show how inferentialism can help to overcome the problems. We argue that inferentialism (i) contains a powerful conception of norms that can overcome the social-individual dichotomy, (ii) draws attention to the reality that constrains our inferences, and (iii) develops a clearer conception of learning in terms of the mastering of webs of reasons. Inferentialism therefore represents a powerful alternative theoretical framework to socioconstructivism.

  10. Adaptive inferential sensors based on evolving fuzzy models.

    Science.gov (United States)

    Angelov, Plamen; Kordon, Arthur

    2010-04-01

    A new technique for the design and use of inferential sensors in the process industry is proposed in this paper, based on the recently introduced concept of evolving fuzzy models (EFMs). They address the challenge that the modern process industry faces today, namely, to develop adaptive and self-calibrating online inferential sensors that reduce maintenance costs while keeping high precision and interpretability/transparency. The proposed new methodology makes it possible for inferential sensors to recalibrate automatically, which significantly reduces the life-cycle effort of their maintenance. This is achieved by the adaptive and flexible open-structure EFM used. The novelty of this paper lies in the following: (1) the overall concept of inferential sensors with evolving and self-developing structure from the data streams; (2) the new methodology for online automatic selection of input variables that are most relevant for the prediction; (3) the technique to detect automatically a shift in the data pattern using the age of the clusters (and fuzzy rules); (4) the online standardization technique used by the learning procedure of the evolving model; and (5) the application of this innovative approach to several real-life industrial processes from the chemical industry (evolving inferential sensors, namely, eSensors, were used for predicting the chemical properties of different products in The Dow Chemical Company, Freeport, TX). It should be noted, however, that the methodology and conclusions of this paper are valid for the broader area of chemical and process industries in general. The results demonstrate that well-interpretable inferential sensors with simple structure can automatically be designed from the data stream in real time, to predict various process variables of interest. The proposed approach can be used as a basis for the development of a new generation of adaptive and evolving inferential sensors that can address the

  11. Two degree of freedom PID based inferential control of continuous bioreactor for ethanol production.

    Science.gov (United States)

    Pachauri, Nikhil; Singh, Vijander; Rani, Asha

    2017-05-01

    This article presents the development of an inferential control scheme based on an adaptive linear neural network (ADALINE) soft sensor for the control of a fermentation process. The ethanol concentration of the bioreactor is estimated from the temperature profile of the process using the soft sensor. The prediction accuracy of the ADALINE is enhanced by retraining it with immediate past measurements. The ADALINE and retrained ADALINE are used along with PID and 2-DOF-PID controllers, leading to the APID, A2PID, RAPID and RA2PID inferential controllers. Further, the parameters of the 2-DOF-PID are optimized using the non-dominated sorting genetic algorithm-II and used with the retrained ADALINE soft sensor, which leads to the RAN2PID inferential controller. Simulation results demonstrate that the performance of the proposed RAN2PID controller is better than that of the other designed controllers in terms of qualitative and quantitative performance indices. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
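    The 2-DOF-PID mentioned above differs from a standard PID in that the setpoint is weighted separately in the proportional and derivative terms, which decouples setpoint tracking from disturbance rejection. A minimal discrete-time sketch (the gains, setpoint weights b and c, and the sampling scheme here are illustrative assumptions, not the controller tuned in the article):

```python
def make_2dof_pid(Kp, Ki, Kd, b, c, dt):
    """Return a step function u = f(r, y) implementing a discrete 2-DOF PID:
    u = Kp*(b*r - y) + Ki*integral(r - y) + Kd*d/dt(c*r - y)."""
    state = {"integral": 0.0, "d_prev": None}

    def step(r, y):
        state["integral"] += (r - y) * dt          # integral acts on the plain error
        e_d = c * r - y                            # derivative acts on weighted error
        d = 0.0 if state["d_prev"] is None else (e_d - state["d_prev"]) / dt
        state["d_prev"] = e_d
        return Kp * (b * r - y) + Ki * state["integral"] + Kd * d

    return step
```

With b = c = 1 this reduces to an ordinary PID; b < 1 softens the proportional kick on setpoint changes, and c = 0 avoids the derivative kick entirely.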

  12. Ludics, dialogue and inferentialism

    Directory of Open Access Journals (Sweden)

    Alain Lecomte

    2013-12-01

    Full Text Available In this paper, we try to show that Ludics, a (pre-)logical framework invented by J.-Y. Girard, enables us to rethink some of the relationships between Philosophy, Semantics and Pragmatics. In particular, Ludics helps to shed light on the nature of dialogue and to articulate features of Brandom's inferentialism.

  13. The relationship between magical thinking, inferential confusion and obsessive-compulsive symptoms.

    Science.gov (United States)

    Goods, N A R; Rees, C S; Egan, S J; Kane, R T

    2014-01-01

    Inferential confusion is an under-researched faulty reasoning process in obsessive-compulsive disorder (OCD). Based on an overreliance on imagined possibilities, it shares similarities with the extensively researched construct of thought-action fusion (TAF). While TAF has been proposed as a specific subset of the broader construct of magical thinking, the relationship between inferential confusion and magical thinking is unexplored. The present study investigated this relationship and hypothesised that magical thinking would partially mediate the relationship between inferential confusion and obsessive-compulsive symptoms. A non-clinical sample of 201 participants (M = 34.94, SD = 15.88) was recruited via convenience sampling. Regression analyses supported the hypothesised mediating relationship: magical thinking partially mediated the relationship between inferential confusion and OC symptoms. Interestingly, inferential confusion had a stronger relationship with OC symptoms than the other predictor variables. The results suggest that inferential confusion can impact OC symptoms both directly and indirectly (via magical thinking). Future studies with clinical samples should further investigate these constructs to determine whether similar patterns emerge, as this may eventually inform which cognitive errors to target in the treatment of OCD.

  14. The use of regularization in inferential measurements

    International Nuclear Information System (INIS)

    Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.

    1999-01-01

    Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution. (author)

  15. Probability Theory Plus Noise: Descriptive Estimation and Inferential Judgment.

    Science.gov (United States)

    Costello, Fintan; Watts, Paul

    2018-01-01

    We describe a computational model of two central aspects of people's probabilistic reasoning: descriptive probability estimation and inferential probability judgment. This model assumes that people's reasoning follows standard frequentist probability theory, but is subject to random noise. This random noise has a regressive effect in descriptive probability estimation, moving probability estimates away from normative probabilities and toward the center of the probability scale. In inferential judgment, however, this random noise has an anti-regressive effect. These regressive and anti-regressive effects explain various reliable and systematic biases seen in people's descriptive probability estimation and inferential probability judgment. The model predicts that these contrary effects will tend to cancel out in tasks that involve both descriptive estimation and inferential judgment, leading to unbiased responses in those tasks. We test this model by applying it to one such task, described by Gallistel et al. Participants' median responses in this task were unbiased, agreeing with normative probability theory over the full range of responses. Our model captures the pattern of unbiased responses in this task, while simultaneously explaining the systematic biases away from normatively correct probabilities seen in other tasks. Copyright © 2018 Cognitive Science Society, Inc.
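    The regressive effect described in the abstract above can be reproduced with a toy simulation, assuming the simplest possible noise model: each Bernoulli observation is misread with some fixed probability d (an illustrative assumption; the paper's noise model may be specified differently):

```python
import random

random.seed(1)

def noisy_estimate(p, d, n=100_000):
    """Estimate an event probability p from n observations,
    each of which is misread (flipped) with probability d."""
    hits = 0
    for _ in range(n):
        x = random.random() < p          # the true event
        if random.random() < d:          # random read/memory noise
            x = not x
        hits += x
    return hits / n

p, d = 0.9, 0.2
est = noisy_estimate(p, d)
# E[est] = p + d*(1 - 2*p) = 0.74: the estimate is pulled from the
# true value 0.9 toward the center (0.5) of the probability scale.
```
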

  16. Inferentialism and the Compositionality of Meaning

    Czech Academy of Sciences Publication Activity Database

    Peregrin, Jaroslav

    2009-01-01

    Roč. 1, č. 1 (2009), s. 154-181 ISSN 1877-3095 R&D Projects: GA ČR(CZ) GA401/07/0904 Institutional research plan: CEZ:AV0Z90090514 Keywords : inferentialism * compositionality * semantics Subject RIV: AA - Philosophy ; Religion

  17. Appraisal of within- and between-laboratory reproducibility of non-radioisotopic local lymph node assay using flow cytometry, LLNA:BrdU-FCM: comparison of OECD TG429 performance standard and statistical evaluation.

    Science.gov (United States)

    Yang, Hyeri; Na, Jihye; Jang, Won-Hee; Jung, Mi-Sook; Jeon, Jun-Young; Heo, Yong; Yeo, Kyung-Wook; Jo, Ji-Hoon; Lim, Kyung-Min; Bae, SeungJin

    2015-05-05

    The mouse local lymph node assay (LLNA, OECD TG429) is an alternative test replacing the conventional guinea pig tests (OECD TG406) for skin sensitization, but its use of a radioisotopic agent, (3)H-thymidine, deters its active dissemination. A new non-radioisotopic LLNA, LLNA:BrdU-FCM, employs a non-radioisotopic analog, 5-bromo-2'-deoxyuridine (BrdU), and flow cytometry. For an analogous method, the OECD TG429 performance standard (PS) advises that two reference compounds be tested repeatedly and that the ECt (threshold) values obtained must fall within acceptable ranges to prove within- and between-laboratory reproducibility. However, these criteria are somewhat arbitrary and the sample size for ECt is less than 5, raising concerns about insufficient reliability. Here, we explored various statistical methods to evaluate the reproducibility of LLNA:BrdU-FCM with the stimulation index (SI), the raw data for ECt calculation, produced by 3 laboratories. Descriptive statistics along with a graphical representation of the SI are presented. For inferential statistics, parametric and non-parametric methods were applied to test the reproducibility of the SI of a concurrent positive control, and the robustness of the results was investigated. Descriptive statistics and graphical representation of the SI alone could illustrate the within- and between-laboratory reproducibility. Inferential statistics employing parametric and nonparametric methods drew similar conclusions. While all labs passed the within- and between-laboratory reproducibility criteria given by the OECD TG429 PS based on ECt values, statistical evaluation based on SI values showed that only two labs succeeded in achieving within-laboratory reproducibility. For those two labs that satisfied within-lab reproducibility, between-laboratory reproducibility could also be attained based on inferential as well as descriptive statistics. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Negative inferential style, emotional clarity, and life stress: integrating vulnerabilities to depression in adolescence.

    Science.gov (United States)

    Stange, Jonathan P; Alloy, Lauren B; Flynn, Megan; Abramson, Lyn Y

    2013-01-01

    Negative inferential style and deficits in emotional clarity have been identified as vulnerability factors for depression in adolescence, particularly when individuals experience high levels of life stress. However, previous research has not integrated these characteristics when evaluating vulnerability to depression. In the present study, a racially diverse community sample of 256 early adolescents (ages 12 and 13) completed a baseline visit and a follow-up visit 9 months later. Inferential style, emotional clarity, and depressive symptoms were assessed at baseline, and intervening life events and depressive symptoms were assessed at follow-up. Hierarchical linear regressions indicated that there was a significant three-way interaction between adolescents' weakest-link negative inferential style, emotional clarity, and intervening life stress predicting depressive symptoms at follow-up, controlling for initial depressive symptoms. Adolescents with low emotional clarity and high negative inferential styles experienced the greatest increases in depressive symptoms following life stress. Emotional clarity buffered against the impact of life stress on depressive symptoms among adolescents with negative inferential styles. Similarly, negative inferential styles exacerbated the impact of life stress on depressive symptoms among adolescents with low emotional clarity. These results provide evidence of the utility of integrating inferential style and emotional clarity as constructs of vulnerability in combination with life stress in the identification of adolescents at risk for depression. They also suggest the enhancement of emotional clarity as a potential intervention technique to protect against the effects of negative inferential styles and life stress on depression in early adolescence.

  19. Gas energy meter for inferential determination of thermophysical properties of a gas mixture at multiple states of the gas

    Science.gov (United States)

    Morrow, Thomas B [San Antonio, TX; Kelner, Eric [San Antonio, TX; Owen, Thomas E [Helotes, TX

    2008-07-08

    A gas energy meter that acquires the data and performs the processing for an inferential determination of one or more gas properties, such as heating value, molecular weight, or density. The meter has a sensor module that acquires temperature, pressure, CO2, and speed of sound data. Data is acquired at two different states of the gas, which eliminates the need to determine the concentration of nitrogen in the gas. A processing module receives this data and uses it to perform a "two-state" inferential algorithm.

  20. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    Science.gov (United States)

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.
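    As a concrete illustration of the normal-distribution machinery described above, the following sketch computes a normal-approximation 95% confidence interval for a mean using only the Python standard library (the wait-time data are made up for illustration):

```python
from statistics import NormalDist, mean, stdev
import math

# Hypothetical sample: patient wait times in minutes at one clinic
sample = [12.1, 9.8, 11.4, 10.6, 13.0, 10.9, 12.4, 11.7, 10.2, 11.9]

m, s, n = mean(sample), stdev(sample), len(sample)
z = NormalDist().inv_cdf(0.975)          # ~1.96 for a two-sided 95% interval
half_width = z * s / math.sqrt(n)        # z times the standard error of the mean
ci = (m - half_width, m + half_width)
```

For a sample this small a t-based interval would be wider and more appropriate; the normal approximation is used here only because it matches the standard normal distribution discussed in the article.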

  1. Inferential Style, School Teachers, and Depressive Symptoms in College Students

    Directory of Open Access Journals (Sweden)

    Caroline M. Pittard

    2018-04-01

    Full Text Available Depressive symptoms affect around half of students at some point during college. According to the hopelessness theory of depression, making negative inferences about stressful events is a vulnerability for developing depression. Negative and socio-emotional teaching behavior can be stressors that are associated with depression in school students. First-time college freshmen completed the Cognitive Style Questionnaire (CSQ), Teaching Behavior Questionnaire (TBQ), and Center for Epidemiological Studies Depression Scale (CES-D). While completing the TBQ, participants reported on a teacher from their education prior to college. Multiple regression analysis found significant effects of the independent variables (the four teaching behavior types, inferential style, and the interactions between the four teaching behavior types and inferential style) on the dependent variable (depressive symptoms). More specifically, negative and socio-emotional teaching behavior were positively associated with depressive symptoms, and instructional and organizational teaching behavior were negatively associated with depressive symptoms. Both organizational and negative teaching behavior interacted significantly with inferential style: organizational and negative teaching behavior had different relationships with depressive symptoms depending on an individual's level of inferential style. Promotion of instructional and organizational teaching behavior in school, as well as the reduction of negative teaching behavior, may be useful in reducing students' depressive symptoms.

  2. Study designs, use of statistical tests, and statistical analysis software choice in 2015: Results from two Pakistani monthly Medline indexed journals.

    Science.gov (United States)

    Shaikh, Masood Ali

    2017-09-01

    Assessment of research articles in terms of the study designs used, the statistical tests applied, and the statistical analysis programmes employed helps determine a country's research activity profile and trends. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. The results indicate that the cross-sectional study design, bivariate inferential analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were, respectively, the most common study design, inferential statistical analysis, and statistical analysis software programme. These results echo a previously published assessment of these two journals for the year 2014.

  3. Regularization methods for inferential sensing in nuclear power plants

    International Nuclear Information System (INIS)

    Hines, J.W.; Gribok, A.V.; Attieh, I.; Uhrig, R.E.

    2000-01-01

    Inferential sensing is the use of information related to a plant parameter to infer its actual value. The most common method of inferential sensing uses a mathematical model to infer a parameter value from correlated sensor values. Collinearity in the predictor variables leads to an ill-posed problem that causes inconsistent results when data-based models such as linear regression and neural networks are used. This chapter presents several linear and non-linear inferential sensing methods, including linear regression and neural networks. Both of these methods can be modified from their original form to solve ill-posed problems and produce more consistent results. We compare these techniques using data from Florida Power Corporation's Crystal River Nuclear Power Plant to predict the drift in a feedwater flow sensor. According to a report entitled 'Feedwater Flow Measurement in U.S. Nuclear Power Generation Stations', commissioned by the Electric Power Research Institute, venturi meter fouling is 'the single most frequent cause' of derating in Pressurized Water Reactors. This chapter presents several viable solutions to this problem. (orig.)
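    The instability caused by collinearity, and the stabilizing effect of regularization, can be demonstrated in a few lines. The sketch below uses ridge regression (Tikhonov regularization) on synthetic, nearly collinear data; the data and penalty value are illustrative assumptions, not the plant data discussed in the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly collinear with x1
y = 3.0 * x1 + rng.normal(scale=0.1, size=n)
X = np.column_stack([x1, x2])

def fit(X, y, lam=0.0):
    """Solve (X'X + lam*I) beta = X'y; lam=0 is ordinary least squares."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

beta_ols = fit(X, y)              # ill-conditioned: coefficients can be unstable
beta_ridge = fit(X, y, lam=10.0)  # the penalty spreads weight across the
                                  # collinear pair, giving repeatable values
```

Both fits predict y about equally well; the difference is that the ridge coefficients barely move when the data are resampled, which is the consistency property inferential sensing needs.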

  4. Introduction to statistics using interactive MM*Stat elements

    CERN Document Server

    Härdle, Wolfgang Karl; Rönz, Bernd

    2015-01-01

    MM*Stat, together with its enhanced online version with interactive examples, offers a flexible tool that facilitates the teaching of basic statistics. It covers all the topics found in introductory descriptive statistics courses, including simple linear regression and time series analysis, the fundamentals of inferential statistics (probability theory, random sampling and estimation theory), and inferential statistics itself (confidence intervals, testing). MM*Stat is also designed to help students rework class material independently and to promote comprehension with the help of additional examples. Each chapter starts with the necessary theoretical background, which is followed by a variety of examples. The core examples are based on the content of the respective chapter, while the advanced examples, designed to deepen students’ knowledge, also draw on information and material from previous chapters. The enhanced online version helps students grasp the complexity and the practical relevance of statistical...

  5. Crop identification technology assessment for remote sensing. (CITARS) Volume 9: Statistical analysis of results

    Science.gov (United States)

    Davis, B. J.; Feiveson, A. H.

    1975-01-01

    Results of CITARS data processing are presented in raw form. Tables of descriptive statistics are given along with descriptions and results of inferential analyses. The inferential results are organized by the questions that CITARS was designed to answer.

  6. Statistical inference

    CERN Document Server

    Rohatgi, Vijay K

    2003-01-01

    Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

  7. Evidence-based orthodontics. Current statistical trends in published articles in one journal.

    Science.gov (United States)

    Law, Scott V; Chudasama, Dipak N; Rinchuse, Donald J

    2010-09-01

    To ascertain the number, type, and overall usage of statistics in American Journal of Orthodontics and Dentofacial Orthopedics (AJODO) articles for 2008. These data were then compared to data from three previous years: 1975, 1985, and 2003. The original articles published in the AJODO in 2008 were dichotomized into those using statistics and those not using statistics. Statistical procedures were then broadly divided into descriptive statistics (mean, standard deviation, range, percentage) and inferential statistics (t-test, analysis of variance). Descriptive statistics were used to make comparisons. In 1975, 1985, 2003, and 2008, the AJODO published 72, 87, 134, and 141 original articles, respectively. The percentage of original articles using statistics was 43.1% in 1975, 75.9% in 1985, 94.0% in 2003, and 92.9% in 2008; the proportion of original articles using statistics stayed relatively constant from 2003 to 2008, with only a small 1.1% decrease. The percentage of articles using inferential statistical analyses was 23.7% in 1975, 74.2% in 1985, 92.9% in 2003, and 84.4% in 2008. Comparing AJODO publications in 2003 and 2008, there was an 8.5% increase in articles using only descriptive statistics (from 7.1% to 15.6%) and an 8.5% decrease in articles using inferential statistics (from 92.9% to 84.4%).

  8. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study.

    Science.gov (United States)

    Nour-Eldein, Hebatallah

    2016-01-01

    Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. The aim was to determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles published between 2010 and 2014. This was a cross-sectional study. All 66 FM research articles published over 5 years by FM authors affiliated with Suez Canal University were screened by the researcher between May and August 2015. The types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of the FM articles. Inferential methods were recorded in 62/66 (93.9%) of the FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, the errors and deficiencies were: no prior sample size calculation 19/60 (31.7%), application of wrong statistical tests 17/60 (28.3%), incomplete documentation of statistics 59/60 (98.3%), reporting P value without test statistics 32/60 (53.3%), no reporting of confidence intervals with effect size measures 12/60 (20.0%), use of mean (standard deviation) to describe ordinal/nonnormal data 8/60 (13.3%), and errors of interpretation, mainly conclusions not supported by the study data 5/60 (8.3%). Inferential statistics were used in the majority of FM articles. Data analysis and the reporting of statistics are areas for improvement in FM research articles.

  9. Against Inferential Statistics: How and Why Current Statistics Teaching Gets It Wrong

    Science.gov (United States)

    White, Patrick; Gorard, Stephen

    2017-01-01

    Recent concerns about a shortage of capacity for statistical and numerical analysis skills among social science students and researchers have prompted a range of initiatives aiming to improve teaching in this area. However, these projects have rarely re-evaluated the content of what is taught to students and have instead focussed primarily on…

  10. Statistics in the pharmacy literature.

    Science.gov (United States)

    Lee, Charlene M; Soin, Herpreet K; Einarson, Thomas R

    2004-09-01

    Research in statistical methods is essential for maintenance of high quality of the published literature. To update previous reports of the types and frequencies of statistical terms and procedures in research studies of selected professional pharmacy journals. We obtained all research articles published in 2001 in 6 journals: American Journal of Health-System Pharmacy, The Annals of Pharmacotherapy, Canadian Journal of Hospital Pharmacy, Formulary, Hospital Pharmacy, and Journal of the American Pharmaceutical Association. Two independent reviewers identified and recorded descriptive and inferential statistical terms/procedures found in the methods, results, and discussion sections of each article. Results were determined by tallying the total number of times, as well as the percentage, that each statistical term or procedure appeared in the articles. One hundred forty-four articles were included. Ninety-eight percent employed descriptive statistics; of these, 28% used only descriptive statistics. The most common descriptive statistical terms were percentage (90%), mean (74%), standard deviation (58%), and range (46%). Sixty-nine percent of the articles used inferential statistics, the most frequent being chi(2) (33%), Student's t-test (26%), Pearson's correlation coefficient r (18%), ANOVA (14%), and logistic regression (11%). Statistical terms and procedures were found in nearly all of the research articles published in pharmacy journals. Thus, pharmacy education should aim to provide current and future pharmacists with an understanding of the common statistical terms and procedures identified to facilitate the appropriate appraisal and consequential utilization of the information available in research articles.
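    The chi-square test that tops the inferential list above is simple enough to compute by hand. A minimal sketch for a 2x2 contingency table, using made-up counts (e.g., medication adherence versus whether pharmacist counseling was received):

```python
# Hypothetical 2x2 table: rows = adherent / non-adherent,
# columns = counseled / not counseled
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
grand_total = sum(row_totals)

chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (observed[i][j] - expected) ** 2 / expected

# With 1 degree of freedom, chi2 > 3.84 rejects independence at alpha = 0.05.
```
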

  11. Application of pedagogy reflective in statistical methods course and practicum statistical methods

    Science.gov (United States)

    Julie, Hongki

    2017-08-01

    The subjects Elementary Statistics, Statistical Methods, and Statistical Methods Practicum aim to equip Mathematics Education students with descriptive and inferential statistics. Students' understanding of descriptive and inferential statistics is important in the Mathematics Education Department, especially for those whose final project involves quantitative research. In quantitative research, students are required to present and describe quantitative data in an appropriate manner, to draw conclusions from their data, and to relate the independent and dependent variables defined in their research. In fact, when students carried out final projects involving quantitative research, it was not rare to find students making mistakes in the steps of drawing conclusions and errors in choosing the hypothesis-testing procedure. As a result, they reached incorrect conclusions, a fatal mistake for those doing quantitative research. Several things were gained from the implementation of reflective pedagogy in the teaching-learning process of the Statistical Methods and Statistical Methods Practicum courses, namely: 1. Twenty-two students passed the course and one student did not. 2. The highest grade achieved was an A, earned by 18 students. 3. According to all students, the learning process in this course developed their critical stance and built a caring for each other. 4. All students agreed that through the learning process they underwent in the course, they could build a caring for each other.

  12. Psychometric Evaluation of the Italian Adaptation of the Test of Inferential and Creative Thinking

    Science.gov (United States)

    Faraci, Palmira; Hell, Benedikt; Schuler, Heinz

    2016-01-01

    This article describes the psychometric properties of the Italian adaptation of the "Analyse des Schlussfolgernden und Kreativen Denkens" (ASK; Test of Inferential and Creative Thinking) for measuring inferential and creative thinking. The study aimed to (a) supply evidence for the factorial structure of the instrument, (b) describe its…

  13. The Heuristic Value of p in Inductive Statistical Inference

    Directory of Open Access Journals (Sweden)

    Joachim I. Krueger

    2017-06-01

    Full Text Available Many statistical methods yield the probability of the observed data – or data more extreme – under the assumption that a particular hypothesis is true. This probability is commonly known as 'the' p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say.

  14. The Heuristic Value of p in Inductive Statistical Inference.

    Science.gov (United States)

    Krueger, Joachim I; Heck, Patrick R

    2017-01-01

    Many statistical methods yield the probability of the observed data - or data more extreme - under the assumption that a particular hypothesis is true. This probability is commonly known as 'the' p -value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p -value has been subjected to much speculation, analysis, and criticism. We explore how well the p -value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p -value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p -value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say.
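    The behaviour of the p-value as a heuristic cue can be seen even in a toy setup. The simulation below (a simple coin-weight test, not the authors' simulation design) draws p-values from an exact two-sided binomial test; they are roughly uniform under the null hypothesis and concentrate near zero under the alternative:

```python
import math
import random

random.seed(7)

def binom_p_two_sided(k, n, p0=0.5):
    """Exact two-sided binomial p-value: total probability of all
    outcomes no more likely than the observed count k under p0."""
    pmf = lambda i: math.comb(n, i) * p0**i * (1 - p0)**(n - i)
    obs = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= obs + 1e-12)

def experiment(p_true, n=50):
    k = sum(random.random() < p_true for _ in range(n))
    return binom_p_two_sided(k, n)

null_ps = [experiment(0.5) for _ in range(200)]   # H0 true
alt_ps = [experiment(0.7) for _ in range(200)]    # H0 false
# A small p-value is a useful (though fallible) cue that H0 is false.
```
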

  15. The Cost of Thinking about False Beliefs: Evidence from Adults' Performance on a Non-Inferential Theory of Mind Task

    Science.gov (United States)

    Apperly, Ian A.; Back, Elisa; Samson, Dana; France, Lisa

    2008-01-01

    Much of what we know about other people's beliefs comes non-inferentially from what people tell us. Developmental research suggests that 3-year-olds have difficulty processing such information: they suffer interference from their own knowledge of reality when told about someone's false belief (e.g., [Wellman, H. M., & Bartsch, K. (1988). Young…

  16. Statistical inference a short course

    CERN Document Server

    Panik, Michael J

    2012-01-01

    A concise, easily accessible introduction to descriptive and inferential techniques Statistical Inference: A Short Course offers a concise presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author conducts tests on the assumption of randomness and normality, provides nonparametric methods when parametric approaches might not work. The book also explores how to determine a confidence interval for a population median while also providing coverage of ratio estimation, randomness, and causal

  17. A hierarchical inferential method for indoor scene classification

    Directory of Open Access Journals (Sweden)

    Jiang Jingzhe

    2017-12-01

    Full Text Available Indoor scene classification forms a basis for scene interaction for service robots. The task is challenging because the layout and decoration of a scene vary considerably. Previous studies on knowledge-based methods commonly ignore the importance of visual attributes when constructing the knowledge base. These shortcomings restrict the performance of classification. The structure of a semantic hierarchy was proposed to describe similarities of different parts of scenes in a fine-grained way. Besides the commonly used semantic features, visual attributes were also introduced to construct the knowledge base. Inspired by the processes of human cognition and the characteristics of indoor scenes, we proposed an inferential framework based on the Markov logic network. The framework is evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.

  18. Brain Evolution and Human Neuropsychology: The Inferential Brain Hypothesis

    Science.gov (United States)

    Koscik, Timothy R.; Tranel, Daniel

    2013-01-01

    Collaboration between human neuropsychology and comparative neuroscience has generated invaluable contributions to our understanding of human brain evolution and function. Further cross-talk between these disciplines has the potential to continue to revolutionize these fields. Modern neuroimaging methods could be applied in a comparative context, yielding exciting new data with the potential of providing insight into brain evolution. Conversely, incorporating an evolutionary base into the theoretical perspectives from which we approach human neuropsychology could lead to novel hypotheses and testable predictions. In the spirit of these objectives, we present here a new theoretical proposal, the Inferential Brain Hypothesis, whereby the human brain is thought to be characterized by a shift from perceptual processing to inferential computation, particularly within the social realm. This shift is believed to be a driving force for the evolution of the large human cortex. PMID:22459075

  19. Effects of Stress and Working Memory Capacity on Foreign Language Readers' Inferential Processing during Comprehension

    Science.gov (United States)

    Rai, Manpreet K.; Loschky, Lester C.; Harris, Richard Jackson; Peck, Nicole R.; Cook, Lindsay G.

    2011-01-01

    Although stress is frequently claimed to impede foreign language (FL) reading comprehension, it is usually not explained how. We investigated the effects of stress, working memory (WM) capacity, and inferential complexity on Spanish FL readers' inferential processing during comprehension. Inferences, although necessary for reading comprehension,…

  20. Beyond the Story Map: Inferential Comprehension via Character Perspective

    Science.gov (United States)

    McTigue, Erin; Douglass, April; Wright, Katherine L.; Hodges, Tracey S.; Franks, Amanda D.

    2015-01-01

    Inferential comprehension requires both emotional intelligence and cognitive skills; however, instructional comprehension strategies typically underemphasize the emotional contribution. This article documents an intervention used by diverse third grade students that centers on teaching story comprehension through character perspective-taking…

  1. Illustrating Sampling Distribution of a Statistic: Minitab Revisited

    Science.gov (United States)

    Johnson, H. Dean; Evans, Marc A.

    2008-01-01

    Understanding the concept of the sampling distribution of a statistic is essential for the understanding of inferential procedures. Unfortunately, this topic proves to be a stumbling block for students in introductory statistics classes. In efforts to aid students in their understanding of this concept, alternatives to a lecture-based mode of…

  2. The t-test: An Influential Inferential Tool in Chaplaincy and Other Healthcare Research.

    Science.gov (United States)

    Jankowski, Katherine R B; Flannelly, Kevin J; Flannelly, Laura T

    2018-01-01

    The t-test developed by William S. Gosset (also known as Student's t-test and the two-sample t-test) is commonly used to compare one sample mean on a measure with another sample mean on the same measure. The outcome of the t-test is used to draw inferences about how different the samples are from each other. It is probably one of the most frequently relied upon statistics in inferential research. It is easy to use: a researcher can calculate the statistic with three simple tools: paper, pen, and a calculator. A computer program can quickly calculate the t-test for large samples. The ease of use can result in the misuse of the t-test. This article discusses the development of the original t-test, basic principles of the t-test, two additional types of t-tests (the one-sample t-test and the paired t-test), and recommendations about what to consider when using the t-test to draw inferences in research.
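
    As a quick illustration of the record above, a two-sample Student's t-test can be run with SciPy. This is a sketch with invented scores, not data from any cited study:

```python
import numpy as np
from scipy import stats

# Invented scores for two independent samples (illustrative only)
group_a = np.array([23.0, 25.5, 21.0, 27.5, 24.0, 26.0])
group_b = np.array([20.0, 19.5, 22.0, 18.5, 21.5, 20.5])

# Student's two-sample t-test (equal_var=True gives the classic pooled-variance form)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

    Here the sample means differ by roughly four points, several times the standard error of the difference, so the test rejects the null hypothesis of equal population means at conventional levels.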

  3. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly since the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  4. Pre-service primary school teachers’ knowledge of informal statistical inference

    NARCIS (Netherlands)

    de Vetten, Arjen; Schoonenboom, Judith; Keijzer, Ronald; van Oers, Bert

    2018-01-01

    The ability to reason inferentially is increasingly important in today's society. It is hypothesized here that engaging primary school students in informal statistical inference (ISI), defined as making generalizations without the use of formal statistical tests, will help them acquire the

  5. Human capital accumulation and its effect on agribusiness performance: the case of China.

    Science.gov (United States)

    Udimal, Thomas Bilaliib; Jincai, Zhuang; Ayamba, Emmanuel Caesar; Sarpong, Patrick Boateng

    2017-09-01

    This study investigates the effect of accumulated human capital on the performance of agribusinesses in China. Four hundred fifty agribusiness owners were interviewed for the study. Growth in sales over the last 5 years was used as a measure of performance. The following variables were reviewed and captured as constituting human capital: education, being raised in the area, parents being entrepreneurs, attending business seminars/trade fairs, managerial experience, similar work experience, cooperative membership, and training. A logit regression model and inferential statistics were used to analyze the data. The logit regression model was used to analyze the effect of accumulated human capital on growth in sales; the inferential statistics, on the other hand, were used to measure the association between age, education, sex, province, and the categories of growth. Our study found that having entrepreneur parents, attending business seminars/trade fairs, managerial experience, similar work experience, education, and training each display a statistically significant positive effect on growth in sales.
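
    The logit-model workflow the record describes can be sketched in a few lines. The data, effect size, and single predictor (trade-fair attendance) below are invented for illustration, not taken from the study, and the fit uses plain gradient descent rather than the authors' software:

```python
import numpy as np

rng = np.random.default_rng(7)
# Invented data: does attending trade fairs (x = 1) predict sales growth (y = 1)?
n = 300
x = rng.binomial(1, 0.5, n).astype(float)
true_logit = -0.5 + 1.2 * x                       # assumed true effect
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Fit the logit model by gradient descent on the negative log-likelihood
X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(20000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.05 * X.T @ (p - y) / n                 # gradient step
print(w)  # w[1] estimates the log-odds effect of attendance
```

    With a positive true effect built into the simulation, the fitted slope should come out positive, mirroring the kind of positive human-capital effects the study reports.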

  6. Application of descriptive statistics in analysis of experimental data

    OpenAIRE

    Mirilović Milorad; Pejin Ivana

    2008-01-01

    Statistics today represents a group of scientific methods for the quantitative and qualitative investigation of variations in mass phenomena. In fact, statistics comprises a group of methods that are used for the accumulation, analysis, presentation and interpretation of data necessary for reaching certain conclusions. Statistical analysis is divided into descriptive statistical analysis and inferential statistics. The values which represent the results of an experiment, and which are the subj...

  7. Descriptive statistics.

    Science.gov (United States)

    Nick, Todd G

    2007-01-01

    Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
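
    The measures of location and spread discussed in the chapter can be computed directly. The sample below is invented:

```python
import numpy as np

# Invented sample of a quantitative variable (e.g., ages in years)
data = np.array([34, 29, 41, 38, 35, 52, 30, 44, 37, 40])

location = {"mean": data.mean(), "median": np.median(data)}
# ddof=1 gives the sample standard deviation (n - 1 in the denominator)
spread = {"sd": data.std(ddof=1), "iqr": np.percentile(data, 75) - np.percentile(data, 25)}
print(location, spread)
```

    The median (37.5) sits below the mean (38.0) because the single large value 52 pulls the mean upward, which is exactly the kind of pattern descriptive statistics are meant to uncover before any inferential analysis.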

  8. Inferential Statistics from Black Hispanic Breast Cancer Survival Data

    Directory of Open Access Journals (Sweden)

    Hafiz M. R. Khan

    2014-01-01

    Full Text Available In this paper we test statistical probability models for breast cancer survival data by race and ethnicity. Data were collected from breast cancer patients diagnosed in the United States during the years 1973–2009. We selected a stratified random sample of Black Hispanic female patients from the Surveillance Epidemiology and End Results (SEER) database to derive the statistical probability models. We used three common model-building criteria, the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Deviance Information Criterion (DIC), to measure goodness of fit, and found that the Black Hispanic female patients' survival data better fit the exponentiated exponential probability model. A novel Bayesian method was used to derive the posterior density function for the model parameters as well as to derive the predictive inference for future response. We specifically focused on the Black Hispanic race. The Markov Chain Monte Carlo (MCMC) method was used for obtaining the summary results of posterior parameters. Additionally, we reported predictive intervals for future survival times. These findings would be of great significance in treatment planning and healthcare resource allocation.
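
    Model comparison by information criteria, as used in the record above, can be sketched with SciPy. The survival times below are simulated, not the SEER data, and the candidate models (exponential vs. gamma) are stand-ins for the paper's exponentiated exponential family:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Invented survival times in months; NOT the SEER data used in the paper
times = rng.gamma(shape=2.0, scale=12.0, size=200)

def aic(loglik, k):
    # Akaike Information Criterion: 2k - 2*log-likelihood; smaller is better
    return 2 * k - 2 * loglik

# Fit candidate models by maximum likelihood and compare AIC
expon_params = stats.expon.fit(times, floc=0)   # 1 free parameter (scale)
gamma_params = stats.gamma.fit(times, floc=0)   # 2 free parameters (shape, scale)
aic_expon = aic(stats.expon.logpdf(times, *expon_params).sum(), 1)
aic_gamma = aic(stats.gamma.logpdf(times, *gamma_params).sum(), 2)
print(aic_expon, aic_gamma)
```

    Because the data were drawn from a gamma distribution with shape 2, the gamma model's AIC comes out lower despite its extra parameter, which is the logic behind preferring the exponentiated exponential model in the paper.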

  9. Why do people show minimal knowledge updating with task experience: inferential deficit or experimental artifact?

    Science.gov (United States)

    Hertzog, Christopher; Price, Jodi; Burpee, Ailis; Frentzel, William J; Feldstein, Simeon; Dunlosky, John

    2009-01-01

    Students generally do not have highly accurate knowledge about strategy effectiveness for learning, such as that imagery is superior to rote repetition. During multiple study-test trials using both strategies, participants' predictions about performance on List 2 do not markedly differ for the two strategies, even though List 1 recall is substantially greater for imagery. Two experiments evaluated whether such deficits in knowledge updating about the strategy effects were due to an experimental artifact or to inaccurate inferences about the effects the strategies had on recall. Participants studied paired associates on two study-test trials--they were instructed to study half using imagery and half using rote repetition. Metacognitive judgements tapped the quality of inferential processes about the strategy effects during the List 1 test and tapped gains in knowledge about the strategies across lists. One artifactual explanation--noncompliance with strategy instructions--was ruled out, whereas manipulations aimed at supporting the data available to inferential processes improved but did not fully repair knowledge updating.

  10. Inferentialism in mathematics education : introduction to a special issue

    NARCIS (Netherlands)

    Bakker, Arthur; Hußmann, Stephan

    2017-01-01

    Inferentialism, as developed by the philosopher Robert Brandom (1994, 2000), is a theory of meaning. The theory has wide-ranging implications in various fields but this special issue concentrates on the use and content of concepts. The key idea, relevant to mathematics education research, is that

  11. Quality of reporting statistics in two Indian pharmacology journals.

    Science.gov (United States)

    Jaykaran; Yadav, Preeti

    2011-04-01

    To evaluate the reporting of statistical methods in articles published in two Indian pharmacology journals. All original articles published since 2002 were downloaded from the websites of the journals (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)). These articles were evaluated for the appropriateness of their descriptive and inferential statistics. Descriptive statistics were evaluated on the basis of the reporting of the method of description and measures of central tendency. Inferential statistics were evaluated on the basis of whether the assumptions of the statistical methods were fulfilled and whether the tests were appropriate. Values are described as frequencies, percentages, and 95% confidence intervals (CI) around the percentages. Inappropriate descriptive statistics were observed in 150 (78.1%, 95% CI 71.7-83.3%) articles. The most common reason was the use of mean ± SEM in place of "mean (SD)" or "mean ± SD." The most common statistical method used was one-way ANOVA (58.4%). Information on checking the assumptions of a statistical test was mentioned in only two articles. An inappropriate statistical test was observed in 61 (31.7%, 95% CI 25.6-38.6%) articles, most commonly the use of a two-group test for three or more groups. Articles published in these two Indian pharmacology journals are not devoid of statistical errors.
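
    The most common error reported, mean ± SEM in place of mean ± SD, is easy to demonstrate. The measurements below are invented:

```python
import numpy as np

# Invented measurements from one experimental group
x = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9])
n = len(x)

sd = x.std(ddof=1)      # describes the spread of the observations
sem = sd / np.sqrt(n)   # describes the precision of the mean; shrinks as n grows
print(f"mean = {x.mean():.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
# Reporting mean +/- SEM makes the data look less variable than mean +/- SD
```

    Since the SEM is the SD divided by the square root of the sample size, quoting it as if it were a measure of data spread understates variability by a factor of √n.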

  12. Quality of reporting statistics in two Indian pharmacology journals

    OpenAIRE

    Jaykaran,; Yadav, Preeti

    2011-01-01

    Objective: To evaluate the reporting of the statistical methods in articles published in two Indian pharmacology journals. Materials and Methods: All original articles published since 2002 were downloaded from the journals' (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)) website. These articles were evaluated on the basis of appropriateness of descriptive statistics and inferential statistics. Descriptive statistics was evaluated on the basis of...

  13. Teaching Statistics Using Classic Psychology Research: An Activities-Based Approach

    Science.gov (United States)

    Holmes, Karen Y.; Dodd, Brett A.

    2012-01-01

    In this article, we discuss a collection of active learning activities derived from classic psychology studies that illustrate the appropriate use of descriptive and inferential statistics. (Contains 2 tables.)

  14. Inferential ecosystem models, from network data to prediction

    Science.gov (United States)

    James S. Clark; Pankaj Agarwal; David M. Bell; Paul G. Flikkema; Alan Gelfand; Xuanlong Nguyen; Eric Ward; Jun Yang

    2011-01-01

    Recent developments suggest that predictive modeling could begin to play a larger role not only for data analysis, but also for data collection. We address the example of efficient wireless sensor networks, where inferential ecosystem models can be used to weigh the value of an observation against the cost of data collection. Transmission costs make observations ‘‘...

  15. Applying Statistical Process Quality Control Methodology to Educational Settings.

    Science.gov (United States)

    Blumberg, Carol Joyce

    A subset of Statistical Process Control (SPC) methodology known as Control Charting is introduced. SPC methodology is a collection of graphical and inferential statistics techniques used to study the progress of phenomena over time. The types of control charts covered are the X̄ (mean), R (range), X (individual observations), MR (moving…
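
    A minimal X̄-R charting sketch is given below. The subgroup data are invented, and A2, D3, D4 are the standard tabulated control-chart constants for subgroups of size 4:

```python
import numpy as np

# Invented subgroups of 4 repeated measurements (e.g., weekly quiz scores)
subgroups = np.array([
    [72, 75, 70, 74],
    [68, 71, 73, 70],
    [74, 76, 72, 75],
    [69, 72, 70, 71],
    [73, 70, 74, 72],
])

xbar = subgroups.mean(axis=1)                       # subgroup means
r = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges
xbarbar, rbar = xbar.mean(), r.mean()

# Standard control-chart constants for subgroup size n = 4
A2, D3, D4 = 0.729, 0.0, 2.282
xbar_limits = (xbarbar - A2 * rbar, xbarbar + A2 * rbar)
r_limits = (D3 * rbar, D4 * rbar)
print(xbar_limits, r_limits)
```

    Points falling outside these limits signal that the process has drifted, which is how control charting turns routine monitoring into an inferential statement about stability over time.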

  16. Analysis of Statistical Methods Currently used in Toxicology Journals.

    Science.gov (United States)

    Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

    2014-09-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had a sample size of less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either a normality or an equal-variance test. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.
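
    The assumption checks the survey found missing, normality and equal variances before a one-way ANOVA, can be sketched with SciPy. The three treatment groups below are simulated, not from any surveyed paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three invented treatment groups (e.g., a dose-response experiment)
groups = [rng.normal(10 + shift, 2.0, size=12) for shift in (0, 1, 3)]

# Check assumptions first: normality per group, then equal variances
normal_ok = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
variances_ok = stats.levene(*groups).pvalue > 0.05

if normal_ok and variances_ok:
    result = stats.f_oneway(*groups)   # parametric one-way ANOVA
else:
    result = stats.kruskal(*groups)    # non-parametric fallback
print(result)
```

    Running the checks first, and falling back to Kruskal-Wallis when they fail, is precisely the practice the survey found lacking in most sampled papers.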

  17. Translator’s inferential excursions, with imagination in the background

    Directory of Open Access Journals (Sweden)

    Bożena Tokarz

    2014-01-01

    Full Text Available In a literary work, signals that trigger the reader's inferential excursions allow the reader's imagination to identify with and control the represented world. They constitute an important element of the sense-generating mechanism. Thanks to imagination, the translator imitates the inferential mechanism of the original on various levels of the text's structure, activating the imagination of the reader. The translator's imagination is bi- or multivalent in having linguistic-semiotic, literary, and cultural qualities. Although it manifests itself in language, it goes beyond the boundaries of language. Imagination is a form of consciousness which has no object of its own, and a medium connecting specific non-imaginary knowledge with representations. It constitutes a mind faculty shaped on the basis of sensory and mental perception. It is derived from individual principles of perception and cognition data processing. It usually requires a stimulus to activate the capabilities of the imagining subject. As a mind faculty, imagination is based on the mental capability common to all people, which is the ability to create chains of associations. The translator's respect for inferential excursions in the original text is necessary for retaining the original meaning, regardless of whether they occur on the phonetic-phonological level (as in Ionesco's The Chairs), or on the level of image-semantic and syntactic relations (as in the translation of Apollinaire's Zone), or on the level of syntax (as in the translation of Mrożek's short stories into Slovenian), or on the level of cultural communication (as in the Slovenian translation of Gombrowicz's Trans-Atlantic).

  18. Using Facebook Data to Turn Introductory Statistics Students into Consultants

    Science.gov (United States)

    Childers, Adam F.

    2017-01-01

    Facebook provides businesses and organizations with copious data that describe how users are interacting with their page. This data affords an excellent opportunity to turn introductory statistics students into consultants to analyze the Facebook data using descriptive and inferential statistics. This paper details a semester-long project that…

  19. Selecting the most appropriate inferential statistical test for your quantitative research study.

    Science.gov (United States)

    Bettany-Saltikov, Josette; Whittaker, Victoria Jane

    2014-06-01

    To discuss the issues and processes relating to the selection of the most appropriate statistical test. A review of the basic research concepts together with a number of clinical scenarios is used to illustrate this. Quantitative nursing research generally features the use of empirical data which necessitates the selection of both descriptive statistics and statistical tests. Different types of research questions can be answered by different types of research designs, which in turn need to be matched to a specific statistical test(s). Discursive paper. This paper discusses the issues relating to the selection of the most appropriate statistical test and makes some recommendations as to how these might be dealt with. When conducting empirical quantitative studies, a number of key issues need to be considered. Considerations for selecting the most appropriate statistical tests are discussed and flow charts provided to facilitate this process. When nursing clinicians and researchers conduct quantitative research studies, it is crucial that the most appropriate statistical test is selected to enable valid conclusions to be made. © 2013 John Wiley & Sons Ltd.
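
    The flow-chart logic the paper describes can be caricatured as a tiny decision function for the two-group case. This is a simplified sketch, not the authors' actual flow charts, and real test selection weighs more criteria (measurement level, sample size, outliers):

```python
def choose_two_group_test(paired: bool, approximately_normal: bool) -> str:
    """Toy decision flow for comparing two groups on a continuous outcome."""
    if paired:
        return "paired t-test" if approximately_normal else "Wilcoxon signed-rank test"
    return "independent t-test" if approximately_normal else "Mann-Whitney U test"

print(choose_two_group_test(paired=False, approximately_normal=False))
# Two independent, non-normal samples -> Mann-Whitney U test
```

    Encoding the decision explicitly makes the selection auditable, which is the point of the paper's flow charts.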

  20. SAS and R data management, statistical analysis, and graphics

    CERN Document Server

    Kleinman, Ken

    2009-01-01

    An All-in-One Resource for Using SAS and R to Carry out Common TasksProvides a path between languages that is easier than reading complete documentationSAS and R: Data Management, Statistical Analysis, and Graphics presents an easy way to learn how to perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferential procedures, regression analysis, and the creation of graphics, along with more complex applicat

  1. Using R for Data Management, Statistical Analysis, and Graphics

    CERN Document Server

    Horton, Nicholas J

    2010-01-01

    This title offers quick and easy access to key elements of documentation. It includes worked examples across a wide variety of applications, tasks, and graphics. "Using R for Data Management, Statistical Analysis, and Graphics" presents an easy way to learn how to perform an analytical task in R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation and vast number of add-on packages. Organized by short, clear descriptive entries, the book covers many common tasks, such as data management, descriptive summaries, inferential proc

  2. Predictive capacity of a non-radioisotopic local lymph node assay using flow cytometry, LLNA:BrdU-FCM: Comparison of a cutoff approach and inferential statistics.

    Science.gov (United States)

    Kim, Da-Eun; Yang, Hyeri; Jang, Won-Hee; Jung, Kyoung-Mi; Park, Miyoung; Choi, Jin Kyu; Jung, Mi-Sook; Jeon, Eun-Young; Heo, Yong; Yeo, Kyung-Wook; Jo, Ji-Hoon; Park, Jung Eun; Sohn, Soo Jung; Kim, Tae Sung; Ahn, Il Young; Jeong, Tae-Cheon; Lim, Kyung-Min; Bae, SeungJin

    2016-01-01

    In order for a novel test method to be applied for regulatory purposes, its reliability and relevance, i.e., reproducibility and predictive capacity, must be demonstrated. Here, we examine the predictive capacity of a novel non-radioisotopic local lymph node assay, LLNA:BrdU-FCM (5-bromo-2'-deoxyuridine-flow cytometry), with a cutoff approach and inferential statistics as prediction models. 22 reference substances in OECD TG429 were tested with a concurrent positive control, hexylcinnamaldehyde 25% (PC), and the stimulation index (SI) representing the fold increase in lymph node cells over the vehicle control was obtained. The optimal cutoff SI (2.7 ≤ cutoff < 3.5), with respect to predictive capacity, was obtained by a receiver operating characteristic curve, which produced 90.9% accuracy for the 22 substances. To address the inter-test variability in responsiveness, SI values standardized against the PC were employed to obtain the optimal percentage cutoff (42.6 ≤ cutoff < 57.3% of PC), which produced 86.4% accuracy. A test substance may be diagnosed as a sensitizer if a statistically significant increase in SI is elicited. The parametric one-sided t-test and non-parametric Wilcoxon rank-sum test produced 77.3% accuracy. Similarly, a test substance could be defined as a sensitizer if the SI means of the vehicle control and of the low, middle, and high concentrations were statistically significantly different, which was tested using ANOVA or Kruskal-Wallis with post hoc analysis (Dunnett or DSCF (Dwass-Steel-Critchlow-Fligner), respectively, depending on the equal variance test), producing 81.8% accuracy. The absolute SI-based cutoff approach produced the best predictive capacity; however, the discordant decisions between prediction models need to be examined further. Copyright © 2015 Elsevier Inc. All rights reserved.
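
    The cutoff-scan idea behind the optimal SI threshold can be sketched in a few lines. The SI values and sensitizer labels below are invented, not the 22 OECD TG429 reference substances, and real work would use a full ROC analysis as the authors did:

```python
import numpy as np

# Invented stimulation indices (SI) and true labels (1 = sensitizer)
si = np.array([1.2, 1.8, 2.1, 2.5, 2.9, 3.1, 3.6, 4.2, 5.0, 7.5])
label = np.array([0,   0,   0,   0,   1,   0,   1,   1,   1,   1])

# Scan candidate cutoffs; classify as "sensitizer" when SI >= cutoff
best = max(
    ((cut, ((si >= cut) == label).mean()) for cut in si),
    key=lambda pair: pair[1],
)
print(f"best cutoff = {best[0]}, accuracy = {best[1]:.2f}")
```

    On these toy data the scan settles on the cutoff that misclassifies only one substance; choosing the threshold that maximizes accuracy over observed values is the discrete core of a ROC-based analysis.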

  3. Inference and the Introductory Statistics Course

    Science.gov (United States)

    Pfannkuch, Maxine; Regan, Matt; Wild, Chris; Budgett, Stephanie; Forbes, Sharleen; Harraway, John; Parsonage, Ross

    2011-01-01

    This article sets out some of the rationale and arguments for making major changes to the teaching and learning of statistical inference in introductory courses at our universities by changing from a norm-based, mathematical approach to more conceptually accessible computer-based approaches. The core problem of the inferential argument with its…

  4. Inferential, non-parametric statistics to assess the quality of probabilistic forecast systems

    NARCIS (Netherlands)

    Maia, A.H.N.; Meinke, H.B.; Lennox, S.; Stone, R.C.

    2007-01-01

    Many statistical forecast systems are available to interested users. To be useful for decision making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanism and its statistical manifestation have been firmly established, the forecasts must

  5. Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)

    2009-08-15

    Heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system modeling, for estimating the average air temperature in multi-zone space heating systems is developed. This modeling technique combines the expert knowledge of fuzzy inference systems (FISs) with the learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive-network-based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems, in terms of energy efficiency and thermal comfort. The average air temperature results estimated by the developed model are strongly in agreement with the experimental results. (author)

  6. Fault detection in IRIS reactor secondary loop using inferential models

    International Nuclear Information System (INIS)

    Perillo, Sergio R.P.; Upadhyaya, Belle R.; Hines, J. Wesley

    2013-01-01

    The development of fault detection algorithms is well-suited to the remote deployment of small and medium reactors, such as the IRIS, and to the development of new small modular reactors (SMRs). However, an extensive number of tests must still be performed on new engineering aspects and components that are not yet proven technology in current PWRs; these present technological challenges for deployment, since many of the design's features cannot be validated until a prototype plant is built. In this work, an IRIS plant simulation platform was developed using a Simulink® model. The dynamic simulation was utilized to obtain inferential models that were used to detect faults artificially added to the secondary system simulations. The implementation of data-driven models and the results are discussed. (author)
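
    The residual-based idea behind such inferential fault detection can be sketched with ordinary least squares. The sensor names, coefficients, and fault magnitude below are invented, not taken from the IRIS Simulink model:

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented plant data: predict one sensor from two correlated ones
n = 200
x1 = rng.normal(300, 5, n)                        # e.g., feedwater temperature
x2 = rng.normal(60, 2, n)                         # e.g., steam flow
y = 0.8 * x1 + 1.5 * x2 + rng.normal(0, 1.0, n)   # sensor to be monitored

# Inferential model: least-squares fit on fault-free training data
X = np.column_stack([x1, x2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
threshold = 4 * np.std(y - X @ coef)              # residual alarm limit

# A drifted reading (+20 units on the monitored sensor at nominal conditions)
nominal = np.array([300.0, 60.0, 1.0])
measured = 0.8 * 300 + 1.5 * 60 + 20              # true value plus a fault
residual = abs(measured - nominal @ coef)
print(residual > threshold)
```

    The model's prediction stands in for the healthy sensor; when the measured value departs from it by more than the residual threshold, a fault is declared without any extra hardware.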

  7. Statistical Analysis of Research Data | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018 from 9 a.m.-5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C on the Bethesda Campus. SARD is designed to provide an overview of the general principles of statistical analysis of research data. The first day will feature univariate data analysis, including descriptive statistics, probability distributions, and one- and two-sample inferential statistics.

  8. The use and misuse of statistical methodologies in pharmacology research.

    Science.gov (United States)

    Marino, Michael J

    2014-01-01

    Descriptive, exploratory, and inferential statistics are necessary components of hypothesis-driven biomedical research. Despite the ubiquitous need for these tools, the emphasis on statistical methods in pharmacology has become dominated by inferential methods, often chosen more for the availability of user-friendly software than for any understanding of the data set or the critical assumptions of the statistical tests. Such frank misuse of statistical methodology and the quest to reach the mystical α < 0.05 reflect a lack of statistical training. Perhaps more critically, a poor understanding of statistical tools limits the conclusions that may be drawn from a study by divorcing the investigator from their own data. The net result is a decrease in quality and confidence in research findings, fueling recent controversies over the reproducibility of high-profile findings and effects that appear to diminish over time. The recent development of "omics" approaches, leading to the production of massive higher-dimensional data sets, has amplified these issues, making it clear that new approaches are needed to appropriately and effectively mine this type of data. Unfortunately, statistical education in the field has not kept pace. This commentary provides a foundation for an intuitive understanding of statistics that fosters an exploratory approach and an appreciation for the assumptions of various statistical tests, with the hope of increasing the correct use of statistics, the application of exploratory data analysis, and the use of statistical study design, with the goal of increasing reproducibility and confidence in the literature. Copyright © 2013. Published by Elsevier Inc.

  9. Bayesian statistical evaluation of peak area measurements in gamma spectrometry

    International Nuclear Information System (INIS)

    Silva, L.; Turkman, A.; Paulino, C.D.

    2010-01-01

    We analyze results from determinations of peak areas for a radioactive source containing several radionuclides. The statistical analysis was performed using Bayesian methods based on the usual Poisson model for observed counts. This model does not appear to be a very good assumption for the counting system under investigation, even though it is not questioned as a whole by the inferential procedures adopted. We conclude that, in order to avoid incorrect inferences on relevant quantities, one must proceed to a further study that includes the missing influence parameters and selects a model that explains the observed data much better.
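
    The usual Poisson model for counts admits a simple conjugate Bayesian treatment, sketched below. The counts and the weakly informative Gamma prior are invented for illustration, not the authors' spectrometry data or method:

```python
import numpy as np

# Invented counts from repeated fixed-time acquisitions
counts = np.array([102, 98, 110, 95, 105, 101, 99, 107])

# Conjugate Bayesian update for a Poisson rate:
# Gamma(a0, b0) prior  ->  Gamma(a0 + sum(counts), b0 + n) posterior
a0, b0 = 1.0, 0.001            # weakly informative prior (an assumption)
a_post = a0 + counts.sum()
b_post = b0 + len(counts)

posterior_mean = a_post / b_post             # point estimate of the rate
posterior_sd = np.sqrt(a_post) / b_post      # posterior standard deviation
print(f"rate ~ {posterior_mean:.1f} +/- {posterior_sd:.1f} counts per acquisition")
```

    If the counts were over-dispersed relative to Poisson, as the paper suspects for its counting system, this posterior would be misleadingly narrow, which is exactly why the authors call for a model with additional influence parameters.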

  10. A new ore reserve estimation method, Yang Chizhong filtering and inferential measurement method, and its application

    International Nuclear Information System (INIS)

    Wu Jingqin.

    1989-01-01

    Yang Chizhong filtering and inferential measurement is a new method used for the variable statistics of ore deposits. In order to apply this theory to estimating uranium ore reserves under the circumstances of regular or irregular prospecting grids, small ore bodies, few sampling points, and complex occurrence, the author has used this method to estimate the ore reserves in five ore bodies of two deposits and achieved satisfactory results. It is demonstrated that, compared with the traditional block measurement method, this method is simple and clear in its formulas, convenient to apply, rapid in calculation, and accurate in its results, at lower cost and with higher economic benefits. The procedure and experience in the application of this method and a preliminary evaluation of its results are described.

  11. Neural Networks. Diagnostic and inferential measurements; Reti neurali. Diagnostica e misure inferenziali

    Energy Technology Data Exchange (ETDEWEB)

    Bonavita, N. [Apc Group Leader, Abb Industria, Genua (Italy); Parisini, T. [Milan Politecnico, Milan (Italy). Dipt. di Elettronica e Informazione

    2000-09-01

    In this work, the use of neural approximating networks is described in the context of fault diagnosis of industrial plants, with particular emphasis on the technique of inferential measurements. The proposed methodology is situated with respect to the current literature, highlighting advantages and disadvantages of the analytical redundancy concept. The use of neural approximators for the generation of inferential measurements is described in the context of industrial distributed control systems.

  12. Storytelling, statistics and hereditary thought: the narrative support of early statistics.

    Science.gov (United States)

    López-Beltrán, Carlos

    2006-03-01

    This paper's main contention is that some basically methodological developments in science which are apparently distant and unrelated can be seen as part of a sequential story. Focusing on general inferential and epistemological matters, the paper links occurrences separated both in time and space by formal and representational issues rather than social or disciplinary links. It focuses on a few limited aspects of several cognitive practices in medical and biological contexts separated by geography, disciplines and decades, but connected by long-term transdisciplinary representational and inferential structures and constraints. The paper intends to show that a given set of knowledge claims, based on the statistical organization of empirical data, can be seen to have been underpinned by a previous, more familiar, and probably more natural, narrative handling of similar evidence. To achieve this, the paper moves from medicine in France in the late eighteenth and early nineteenth century to gentleman naturalists in England in the second half of the nineteenth century, following its subject: the shift from narrative depiction of the hereditary transmission of physical peculiarities to later statistical articulations of the same phenomena. Some early defenders of heredity as an important (if not the most important) causal presence in the understanding of life adopted singular narratives, in the form of case stories from the medical and natural history traditions, to flesh out a special kind of causality peculiar to heredity. This work tries to reconstruct historically the rationale that drove the use of such narratives. It then shows that when this rationale was methodologically challenged, its basic narrative and probabilistic underpinnings were transferred to the statistical quantification tools that took their place.

  13. Cognitive stimulation of pupils with Down syndrome: A study of inferential talk during book-sharing.

    Science.gov (United States)

    Engevik, Liv Inger; Næss, Kari-Anne B; Hagtvet, Bente E

    2016-08-01

    In the education of pupils with Down syndrome, "simplifying" literal talk and concrete stimulation have typically played a dominant role. This explorative study investigated the extent to which teachers stimulated abstract cognitive functions via inferential talk during book-sharing and how pupils with Down syndrome responded. Dyadic interactions (N=7) were videotaped, transcribed and coded to identify levels of abstraction in teacher utterances and to evaluate the adequacy of pupil responses. One-third of the teachers' utterances contained high levels of abstraction and promoted inferential talk. Six of the seven children predominantly responded in ways which revealed inferential thinking. Dialog excerpts highlighted individual, contextual and interactional factors contributing to variations in the findings. Contrary to previous claims, the children with Down syndrome in the current sample appear able to draw inferences beyond the "here-and-now" with teacher support. This finding highlights the educational relevance and importance of higher-order cognitive stimulation of pupils with intellectual disabilities, to foster independent metacognitive skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    Science.gov (United States)

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…

  15. Inferential Statistics in "Language Teaching Research": A Review and Ways Forward

    Science.gov (United States)

    Lindstromberg, Seth

    2016-01-01

    This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997-2015) of "Language Teaching Research" (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence…

  16. Imaging of neural oscillations with embedded inferential and group prevalence statistics

    Science.gov (United States)

    2018-01-01

    Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise ratios are challenging factors for source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be

  17. Cross-categorization of legal concepts across boundaries of legal systems: in consideration of inferential links

    DEFF Research Database (Denmark)

    Glückstad, Fumiko Kano; Herlau, Tue; Schmidt, Mikkel Nørgaard

    2014-01-01

    This work contrasts Giovanni Sartor’s view of inferential semantics of legal concepts (Sartor in Artif Intell Law 17:217–251, 2009) with a probabilistic model of theory formation (Kemp et al. in Cognition 114:165–196, 2010). The work further explores possibilities of implementing Kemp’s probabilistic model of theory formation, i.e., the Infinite Relational Model (IRM) first introduced by Kemp et al. (The twenty-first national conference on artificial intelligence, 2006; Cognition 114:165–196, 2010), and its extended model, applied to the International Standard Classification of Education. The main contribution of this work is the proposal of a conceptual framework for the cross-categorization approach that, inspired by Sartor (Artif Intell Law 17:217–251, 2009), attempts to explain reasoners’ inferential mechanisms.

  18. A Genetic-Neuro-Fuzzy inferential model for diagnosis of tuberculosis

    Directory of Open Access Journals (Sweden)

    Mumini Olatunji Omisore

    2017-01-01

    Full Text Available Tuberculosis is a social, re-emerging infectious disease with medical implications throughout the globe. Despite efforts, the coverage of tuberculosis disease (with HIV prevalence) in Nigeria rose from 2.2% in 1991 to 22% in 2013, and the orthodox diagnosis methods available for tuberculosis have faced a number of challenges which can, if measures are not taken, increase the spread rate; hence, there is a need to aid diagnosis of the disease. This study proposes a technique for intelligent diagnosis of TB using a Genetic-Neuro-Fuzzy inferential method to provide a decision-support platform that can assist medical practitioners in administering accurate, timely, and cost-effective diagnosis of tuberculosis. A performance evaluation, using a case study of 10 patients from St. Francis Catholic Hospital, Okpara-In-Land (Delta State, Nigeria), shows sensitivity and accuracy results of 60% and 70% respectively, which are within the acceptable range predefined by domain experts.

  19. Inferential statistics of electron backscatter diffraction data from within individual crystalline grains

    DEFF Research Database (Denmark)

    Bachmann, Florian; Hielscher, Ralf; Jupp, Peter E.

    2010-01-01

    Highly concentrated distributed crystallographic orientation measurements within individual crystalline grains are analysed by means of ordinary statistics neglecting their spatial reference. Since crystallographic orientations are modelled as left cosets of a given subgroup of SO(3), the non-spatial statistical analysis adapts ideas borrowed from the Bingham quaternion distribution. Special emphasis is put on the mathematical definition and the numerical determination of a `mean orientation' characterizing the crystallographic grain, as well as on distinguishing several types of symmetry of the orientation distribution with respect to the mean orientation, like spherical, prolate or oblate symmetry. Applications to simulated as well as to experimental data are presented. All computations have been done with the free and open-source texture toolbox MTEX.

  20. Comparative Gender Performance in Business Statistics.

    Science.gov (United States)

    Mogull, Robert G.

    1989-01-01

    Comparative performance of male and female students in introductory and intermediate statistics classes was examined for over 16 years at a state university. Gender means from 97 classes and 1,609 males and 1,085 females revealed a probabilistic--although statistically insignificant--superior performance by female students that appeared to…

  1. Using inferential sensors for quality control of Everglades Depth Estimation Network water-level data

    Science.gov (United States)

    Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul

    2016-09-29

    The Everglades Depth Estimation Network (EDEN), with over 240 real-time gaging stations, provides hydrologic data for freshwater and tidal areas of the Everglades. These data are used to generate daily water-level and water-depth maps of the Everglades that are used to assess biotic responses to hydrologic change resulting from the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. The generation of EDEN daily water-level and water-depth maps is dependent on high-quality real-time data from water-level stations. Real-time data are automatically checked for outliers by assigning minimum and maximum thresholds for each station. Small errors in the real-time data, such as gradual drift of malfunctioning pressure transducers, are more difficult to immediately identify with visual inspection of time-series plots and may only be identified during on-site inspections of the stations. Correcting these small errors in the data often is time-consuming, and water-level data may not be finalized for several months. To provide daily water-level and water-depth maps on a near real-time basis, EDEN needed an automated process to identify errors in water-level data and to provide estimates for missing or erroneous water-level data. The Automated Data Assurance and Management (ADAM) software uses inferential sensor technology often used in industrial applications. Rather than installing a redundant sensor to measure a process, such as an additional water-level station, inferential sensors, or virtual sensors, were developed for each station that make accurate estimates of the process measured by the hard sensor (water-level gaging station). The inferential sensors in the ADAM software are empirical models that use inputs from one or more proximal stations. The advantage of ADAM is that it provides a redundant signal to the sensor in the field without the environmental threats associated with field conditions at stations (flood or hurricane, for example). 
In the

  2. Clues as information, the semiotic gap, and inferential investigative processes, or making a (very small) contribution to the new discipline, Forensic Semiotics

    DEFF Research Database (Denmark)

    Sørensen, Bent; Thellefsen, Torkild Leo; Thellefsen, Martin Muderspach

    2017-01-01

    In this article, we try to contribute to the new discipline Forensic Semiotics – a discipline introduced by the Canadian polymath Marcel Danesi. We focus on clues as information and criminal investigative processes as inferential. These inferential (and Peircean) processes have a certain complexity...

  3. Impact of Autocorrelation on Principal Components and Their Use in Statistical Process Control

    DEFF Research Database (Denmark)

    Vanhatalo, Erik; Kulahci, Murat

    2015-01-01

    A basic assumption when using principal component analysis (PCA) for inferential purposes, such as in statistical process control (SPC), is that the data are independent in time. In many industrial processes, frequent sampling and process dynamics make this assumption unrealistic rendering sampled...

  4. Differences in game-related statistics of basketball performance by game location for men's winning and losing teams.

    Science.gov (United States)

    Gómez, Miguel A; Lorenzo, Alberto; Barakat, Rubén; Ortega, Enrique; Palao, José M

    2008-02-01

    The aim of the present study was to identify game-related statistics that differentiate winning and losing teams according to game location. The sample included 306 games of the 2004-2005 regular season of the Spanish professional men's league (ACB League). The independent variables were game location (home or away) and game result (win or loss). The game-related statistics registered were free throws (successful and unsuccessful), 2- and 3-point field goals (successful and unsuccessful), offensive and defensive rebounds, blocks, assists, fouls, steals, and turnovers. Descriptive and inferential analyses were performed (one-way analysis of variance and discriminant analysis). The multivariate analysis showed that winning teams differ from losing teams in defensive rebounds (SC = .42) and in assists (SC = .38). Similarly, winning teams differ from losing teams when they play at home in defensive rebounds (SC = .40) and in assists (SC = .41). On the other hand, winning teams differ from losing teams when they play away in defensive rebounds (SC = .44), assists (SC = .30), successful 2-point field goals (SC = .31), and unsuccessful 3-point field goals (SC = -.35). Defensive rebounds and assists were the only game-related statistics common to all three analyses.
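    The one-way analysis of variance used in such studies compares between-group to within-group variability. A hedged sketch of the F statistic it computes, with invented rebound counts (not the ACB data):

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group MS / within-group MS."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical defensive-rebound counts for winning vs. losing teams
winners = [26, 29, 31, 27, 30]
losers = [22, 24, 21, 25, 23]
f = one_way_anova_f(winners, losers)
```

A large F indicates that the group means differ by more than within-group scatter would explain; the p-value would then come from the F distribution with (k−1, n−k) degrees of freedom.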

  5. Artificial Intelligence for Inferential Control of Crude Oil Stripping Process

    Directory of Open Access Journals (Sweden)

    Mehdi Ebnali

    2018-01-01

    Full Text Available Stripper columns are used for sweetening crude oil, and they must hold product hydrogen sulfide content as near the set points as possible in the face of upsets. Since product quality cannot be measured easily and economically online, the control of product quality is often achieved by maintaining a suitable tray temperature near its set point. The tray temperature control method, however, is not a proper option for a multi-component stripping column because the tray temperature does not correspond exactly to the product composition. To overcome this problem, secondary measurements can be used to infer the product quality and adjust the values of the manipulated variables. In this paper, we have used a novel inferential control approach based on an adaptive network fuzzy inference system (ANFIS) for the stripping process. ANFIS with different learning algorithms is used for modeling the process and building a composition estimator to estimate the composition of the bottom product. The developed estimator is tested, and the results show that the predictions made by the ANFIS structure are in good agreement with the results of simulation by the ASPEN HYSYS process simulation package. In addition, inferential control by the implementation of an ANFIS-based online composition estimator in a cascade control scheme is superior to the traditional tray temperature control method, with lower integral time absolute error and lower duty consumption in the reboiler.

  6. STATCAT, Statistical Analysis of Parametric and Non-Parametric Data

    International Nuclear Information System (INIS)

    David, Hugh

    1990-01-01

    1 - Description of program or function: A suite of 26 programs designed to facilitate the appropriate statistical analysis and data handling of parametric and non-parametric data, using classical and modern univariate and multivariate methods. 2 - Method of solution: Data is read entry by entry, using a choice of input formats, and the resultant data bank is checked for out-of-range, rare, extreme or missing data. The completed STATCAT data bank can be treated by a variety of descriptive and inferential statistical methods, and modified, using other standard programs as required.

  7. Teachers' Literal and Inferential Talk in Early Childhood and Special Education Classrooms

    Science.gov (United States)

    Sembiante, Sabrina F.; Dynia, Jaclyn M.; Kaderavek, Joan N.; Justice, Laura M.

    2018-01-01

    Research Findings: This study examined preschool teachers' literal talk (LT) and inferential talk (IT) during shared book readings in early childhood education (ECE) and early childhood special education (ECSE) classrooms. We aimed to characterize and compare teachers' LT and IT in these 2 classroom contexts and determine whether differences in LT…

  8. Tips and Tricks for Successful Application of Statistical Methods to Biological Data.

    Science.gov (United States)

    Schlenker, Evelyn

    2016-01-01

    This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing the use of parametric tests. Alternatively, with skewed data, nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premises and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
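    The log transformation mentioned above can be checked directly: if a right-skewed sample becomes symmetric after taking logarithms, parametric tests on the transformed data become defensible. A minimal sketch using an invented, exactly geometric series (so its logarithms are evenly spaced and skew-free):

```python
import math
from statistics import mean, stdev

def sample_skewness(xs):
    """Adjusted Fisher-Pearson sample skewness; near 0 for symmetric data."""
    n, m, s = len(xs), mean(xs), stdev(xs)
    g1 = sum((x - m) ** 3 for x in xs) / (n * s ** 3)
    return g1 * math.sqrt(n * (n - 1)) / (n - 2)

raw = [2.0 ** k for k in range(10)]   # 1, 2, 4, ..., 512: strongly right-skewed
logged = [math.log(x) for x in raw]   # evenly spaced values: symmetric

skew_raw = sample_skewness(raw)
skew_log = sample_skewness(logged)
```

Here `skew_raw` is strongly positive while `skew_log` is essentially zero, the pattern that justifies running a t test or ANOVA on the log scale.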

  9. Fisher statistics for analysis of diffusion tensor directional information.

    Science.gov (United States)

    Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P

    2012-04-30

    A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample, and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and associated p-value, giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups; however, in the hilus, significant (p < 0.05) directional differences were detected, demonstrating the utility of Fisher statistics for statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
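    In the Fisher framework, the mean direction is the normalized resultant of the unit direction vectors, and the resultant length measures how tightly the directions cluster. A small illustrative sketch (the direction vectors are invented, not from the study; κ uses the standard approximation (n−1)/(n−R)):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def fisher_summary(vectors):
    """Mean direction, mean resultant length, and concentration ~ (n-1)/(n-R)."""
    n = len(vectors)
    resultant = tuple(sum(v[i] for v in vectors) for i in range(3))
    r = math.sqrt(sum(c * c for c in resultant))
    mean_dir = tuple(c / r for c in resultant)
    kappa = (n - 1) / (n - r)
    return mean_dir, r / n, kappa

# Hypothetical principal diffusion directions from one ROI (unit vectors)
dirs = [normalize(v) for v in [(0.99, 0.10, 0.10), (1.0, 0.0, 0.0), (0.98, -0.14, 0.14)]]
mean_dir, rbar, kappa = fisher_summary(dirs)
```

A mean resultant length near 1 (large κ) indicates coherent tissue orientation in the ROI; group comparisons such as Watson's F test build on these summaries.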

  10. Relationship between school culture and students\\' performance in ...

    African Journals Online (AJOL)

    ... a longer history of offering French subject characterized by high expectations for and recognition of academic and co-curricula achievement, parental involvement, ... standard deviations) and inferential statistics (Pearson's product moment ...

  11. Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.

    Science.gov (United States)

    Pandolfi, Maurizio; Carreras, Giulia

    2018-06-07

    It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis tested is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential statistics that consider the prior probability that the hypothesis tested is true, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.
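    The point about prior probability can be made quantitative with Bayes' theorem: the probability that a "statistically significant" result reflects a true hypothesis depends on the prior plausibility of that hypothesis. A minimal sketch (the power, α, and prior values below are illustrative assumptions, not data from the paper):

```python
def prob_true_given_significant(prior, power=0.8, alpha=0.05):
    """Posterior probability the hypothesis is true, given a significant result.

    Bayes' rule: P(H1 | sig) = power*prior / (power*prior + alpha*(1 - prior)).
    """
    return (power * prior) / (power * prior + alpha * (1 - prior))

# A well-grounded hypothesis vs. a scientifically implausible (CAM-like) one
plausible = prob_true_given_significant(prior=0.5)
implausible = prob_true_given_significant(prior=0.01)
```

With a 50% prior, a significant result is convincing (~0.94); with a 1% prior, the same "significant" p-value leaves the hypothesis probably false (~0.14), which is the paper's core argument.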

  12. Inferential reasoning by exclusion in children (Homo sapiens).

    Science.gov (United States)

    Hill, Andrew; Collier-Baker, Emma; Suddendorf, Thomas

    2012-08-01

    The cups task is the most widely adopted forced-choice paradigm for comparative studies of inferential reasoning by exclusion. In this task, subjects are presented with two cups, one of which has been surreptitiously baited. When the empty cup is shaken or its interior shown, it is possible to infer by exclusion that the alternative cup contains the reward. The present study extends the existing body of comparative work to include human children (Homo sapiens). Like chimpanzees (Pan troglodytes) that were tested with the same equipment and near-identical procedures, children aged three to five made apparent inferences using both visual and auditory information, although the youngest children showed the least-developed ability in the auditory modality. However, unlike chimpanzees, children of all ages used causally irrelevant information in a control test designed to examine the possibility that their apparent auditory inferences were the product of contingency learning (the duplicate cups test). Nevertheless, the children's ability to reason by exclusion was corroborated by their performance on a novel verbal disjunctive syllogism test, and we found preliminary evidence consistent with the suggestion that children used their causal-logical understanding to reason by exclusion in the cups task, but subsequently treated the duplicate cups information as symbolic or communicative, rather than causal. Implications for future comparative research are discussed. 2012 APA, all rights reserved

  13. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data; Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results; Construct testable hypotheses that require inferential statistical analysis; Process spatial data, extract explanatory variables, conduct statisti...

  14. Children's inferential styles, 5-HTTLPR genotype, and maternal expressed emotion-criticism: An integrated model for the intergenerational transmission of depression.

    Science.gov (United States)

    Gibb, Brandon E; Uhrlass, Dorothy J; Grassia, Marie; Benas, Jessica S; McGeary, John

    2009-11-01

    The authors tested a model for the intergenerational transmission of depression integrating specific genetic (5-HTTLPR), cognitive (inferential style), and environmental (mother depressive symptoms and expressed-emotion criticism [EE-Crit]) risk factors. Supporting the hypothesis that maternal depression is associated with elevated levels of stress in children's lives, mothers with a history of major depressive disorder (MDD) exhibited higher depressive symptoms across a 6-month multiwave follow-up than mothers with no depression history. In addition, partially supporting our hypothesis, levels of maternal criticism during the follow-up were significantly related to mothers' current depressive symptoms but not to history of MDD. Finally, the authors found support for an integrated Gene x Cognition x Environment model of risk. Specifically, among children with negative inferential styles regarding their self-characteristics, there was a clear dose response of 5-HTTLPR genotype moderating the relation between maternal criticism and children's depressive symptoms, with the highest depressive symptoms during the follow-up observed among children carrying 2 copies of the 5-HTTLPR lower expressing alleles (short [S] or long [LG]) who also exhibited negative inferential styles for self-characteristics and who experienced high levels of EE-Crit. In contrast, children with positive inferential styles exhibited low depressive symptoms regardless of 5-HTTLPR genotype or level of maternal criticism. PsycINFO Database Record 2009 APA, all rights reserved.

  15. Statistical modelling of citation exchange between statistics journals.

    Science.gov (United States)

    Varin, Cristiano; Cattelan, Manuela; Firth, David

    2016-01-01

    Rankings of scholarly journals based on citation data are often met with scepticism by the scientific community. Part of the scepticism is due to disparity between the common perception of journals' prestige and their ranking based on citation counts. A more serious concern is the inappropriate use of journal rankings to evaluate the scientific influence of researchers. The paper focuses on analysis of the table of cross-citations among a selection of statistics journals. Data are collected from the Web of Science database published by Thomson Reuters. Our results suggest that modelling the exchange of citations between journals is useful to highlight the most prestigious journals, but also that journal citation data are characterized by considerable heterogeneity, which needs to be properly summarized. Inferential conclusions require care to avoid potential overinterpretation of insignificant differences between journal ratings. Comparison with published ratings of institutions from the UK's research assessment exercise shows strong correlation at aggregate level between assessed research quality and journal citation 'export scores' within the discipline of statistics.

  16. Subjectivism as an unavoidable feature of ecological statistics

    Directory of Open Access Journals (Sweden)

    Martínez–Abraín, A.

    2014-12-01

    Full Text Available We approach here the handling of previous information when performing statistical inference in ecology, both when dealing with model specification and selection, and when dealing with parameter estimation. We compare perspectives on this problem from the frequentist and Bayesian schools, including objective and subjective Bayesians. We show that the issue of making use of previous information and making a priori decisions is a reality not only for Bayesians but also for frequentists. However, the latter tend to overlook this because prior information on the magnitude of the effect that is thought to be biologically relevant is commonly unavailable. This prior information should be fed into a priori power tests when looking for the sample sizes necessary to couple statistical and biological significance. Ecologists should make a greater effort to make use of available prior information because this is their most legitimate contribution to the inferential process. Parameter estimation and model selection would benefit if this were done, allowing a more reliable accumulation of knowledge, and hence progress, in the biological sciences.
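    An a priori power test of the kind advocated here turns a biologically relevant effect size into a required sample size. A rough sketch using the common normal approximation for a two-group comparison (the effect size 0.5 is an invented example of a "biologically relevant" magnitude):

```python
import math
from statistics import NormalDist

def sample_size_two_groups(effect_size, alpha=0.05, power=0.8):
    """Per-group n for a two-sample comparison, normal approximation:

    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, with d the standardized effect.
    """
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)

# Prior knowledge says only a standardized effect of d >= 0.5 matters biologically
n = sample_size_two_groups(0.5)
```

The exact t-based calculation gives a slightly larger n, but the approximation shows how the a priori effect size drives the design.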

  17. Statistics 101 for Radiologists.

    Science.gov (United States)

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
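The diagnostic-accuracy measures reviewed (sensitivity, specificity, accuracy, likelihood ratios) all derive from a 2×2 table of test result versus disease status; a small sketch with hypothetical counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test-efficacy measures from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)          # P(test+ | disease)
    specificity = tn / (tn + fp)          # P(test- | no disease)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    lr_positive = sensitivity / (1 - specificity)
    return sensitivity, specificity, accuracy, lr_positive

# Hypothetical counts: 90 true positives, 10 false positives,
# 10 false negatives, 90 true negatives.
sens, spec, acc, lr = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
print(f"sens={sens:.2f} spec={spec:.2f} acc={acc:.2f} LR+={lr:.1f}")
```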

  18. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    Directory of Open Access Journals (Sweden)

    Hamid Reza Marateb

    2014-01-01

Full Text Available Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are reviewed using several medical examples. We present two clustering examples on ordinal variables, the more challenging variable type to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-Interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: choosing a clustering algorithm appropriate to the measurement scale of the variables in the study yields high performance. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables.

  19. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    Science.gov (United States)

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are reviewed using several medical examples. We present two clustering examples on ordinal variables, the more challenging variable type to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-Interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: choosing a clustering algorithm appropriate to the measurement scale of the variables in the study yields high performance. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
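The abstract does not spell out how the ordinal-to-interval conversion was done; one common device is normal-quantile (ridit-style) scoring, which maps each ordinal level to the normal quantile of its mid-cumulative proportion. A sketch under that assumption, with invented 10-level ratings:

```python
from statistics import NormalDist

def ordinal_to_interval(values):
    """Map ordinal levels to an interval scale via normal quantiles of
    mid-cumulative proportions (ridit-style scoring)."""
    n = len(values)
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    scores, cum = {}, 0
    for level in sorted(counts):
        mid = (cum + counts[level] / 2) / n   # midpoint of the level's band
        scores[level] = NormalDist().inv_cdf(mid)
        cum += counts[level]
    return [scores[v] for v in values]

# Hypothetical 10-level ordinal ratings for ten cases.
converted = ordinal_to_interval([1, 1, 2, 3, 3, 3, 5, 8, 9, 10])
print([round(s, 2) for s in converted])
```

The converted scores preserve the ordering of the levels but live on an interval scale, so interval-scale clustering methods can be applied afterward.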

  20. From inferential statistics to climate knowledge

    OpenAIRE

    H. N. Maia, A.; Meinke, H.

    2006-01-01

    International audience; Climate variability and change are risk factors for climate sensitive activities such as agriculture. Managing these risks requires "climate knowledge", i.e. a sound understanding of causes and consequences of climate variability and knowledge of potential management options that are suitable in light of the climatic risks posed. Often such information about prognostic variables (e.g. yield, rainfall, run-off) is provided in probabilistic terms (e.g. via cumulative dis...

  1. Evaluation of Solid Rocket Motor Component Data Using a Commercially Available Statistical Software Package

    Science.gov (United States)

    Stefanski, Philip L.

    2015-01-01

Commercially available software packages today allow users to quickly perform the routine evaluations of (1) descriptive statistics to numerically and graphically summarize both sample and population data, (2) inferential statistics that draw conclusions about a given population from samples taken of it, (3) probability determinations that can be used to generate estimates of reliability allowables, and finally (4) the setup of designed experiments and analysis of their data to identify significant material and process characteristics for application in both product manufacturing and performance enhancement. This paper presents examples of analysis and experimental design work that has been conducted using Statgraphics® statistical software to obtain useful information with regard to solid rocket motor propellants and internal insulation material. Data were obtained from a number of programs (Shuttle, Constellation, and Space Launch System) and sources that include solid propellant burn rate strands, tensile specimens, sub-scale test motors, full-scale operational motors, rubber insulation specimens, and sub-scale rubber insulation analog samples. Besides facilitating the experimental design process to yield meaningful results, statistical software has demonstrated its ability to quickly perform complex data analyses and yield significant findings that might otherwise have gone unnoticed. One caveat to these successes is that useful results not only derive from the inherent power of the software package, but also from the skill and understanding of the data analyst.
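The descriptive summaries in item (1) — mean, dispersion, and a confidence interval for the mean — can be sketched in a few lines of stdlib Python. The burn-rate figures below are invented for illustration, and a t-based interval (appropriate at n = 8) would be somewhat wider than this normal approximation.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def describe(sample, confidence=0.95):
    """Descriptive summary plus a normal-approximation CI for the mean."""
    m, s, n = mean(sample), stdev(sample), len(sample)
    half = NormalDist().inv_cdf(0.5 + confidence / 2) * s / sqrt(n)
    return {"n": n, "mean": m, "sd": s, "ci": (m - half, m + half)}

# Invented burn-rate strand measurements (in/s), for illustration only.
burn_rates = [0.368, 0.372, 0.365, 0.370, 0.374, 0.369, 0.371, 0.367]
summary = describe(burn_rates)
print(f"mean={summary['mean']:.4f}  sd={summary['sd']:.4f}  "
      f"95% CI=({summary['ci'][0]:.4f}, {summary['ci'][1]:.4f})")
```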

  2. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    Science.gov (United States)

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement.
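The power figures surveyed above can be reproduced in outline: for a two-group comparison, power depends on the per-group n, the effect size, and α. A stdlib sketch under the normal approximation; the sample size and effect sizes below are illustrative, not from the review.

```python
from math import sqrt
from statistics import NormalDist

def power_two_groups(n_per_group, d, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of effect size
    d (Cohen's d), using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(d * sqrt(n_per_group / 2) - z_alpha)

# With n = 32 per group, large effects are detected almost surely while
# small effects rarely are -- the same pattern the review reports.
for d, label in [(0.2, "small"), (0.5, "medium"), (0.8, "large")]:
    print(f"{label} effect (d={d}): power = {power_two_groups(32, d):.2f}")
```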

  3. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large-buffer and many-sources asymptotics that play an important role in understanding statistical multiplexing.

  4. Statistical learning methods: Basics, control and performance

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de

    2006-04-01

    The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms.

  5. Statistical learning methods: Basics, control and performance

    International Nuclear Information System (INIS)

    Zimmermann, J.

    2006-01-01

    The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms

  6. Spacecraft control center automation using the generic inferential executor (GENIE)

    Science.gov (United States)

    Hartley, Jonathan; Luczak, Ed; Stump, Doug

    1996-01-01

    The increasing requirement to dramatically reduce the cost of mission operations led to increased emphasis on automation technology. The expert system technology used at the Goddard Space Flight Center (MD) is currently being applied to the automation of spacecraft control center activities. The generic inferential executor (GENIE) is a tool which allows pass automation applications to be constructed. The pass script templates constructed encode the tasks necessary to mimic flight operations team interactions with the spacecraft during a pass. These templates can be configured with data specific to a particular pass. Animated graphical displays illustrate the progress during the pass. The first GENIE application automates passes of the solar, anomalous and magnetospheric particle explorer (SAMPEX) spacecraft.

  7. Statistics Anxiety, Trait Anxiety, Learning Behavior, and Academic Performance

    Science.gov (United States)

    Macher, Daniel; Paechter, Manuela; Papousek, Ilona; Ruggeri, Kai

    2012-01-01

    The present study investigated the relationship between statistics anxiety, individual characteristics (e.g., trait anxiety and learning strategies), and academic performance. Students enrolled in a statistics course in psychology (N = 147) filled in a questionnaire on statistics anxiety, trait anxiety, interest in statistics, mathematical…

  8. Inferential smart sensing for feedwater flowrate in PWRs

    International Nuclear Information System (INIS)

    Na, M. G.; Hwang, I. J.; Lee, Y. J.

    2006-01-01

The feedwater flowrate measured by Venturi flow meters in most pressurized water reactors can be over-measured because of fouling phenomena that make corrosion products accumulate in the Venturi meters. Therefore, in this work, two methods, a support vector regression method and a fuzzy modeling method, combined with a sequential probability ratio test, are used to accurately estimate the feedwater flowrate online and to monitor the status of the existing hardware sensors. The data for training the support vector machines and the fuzzy model are selected with a subtractive clustering scheme so that informative data are drawn from among all acquired data. The proposed inferential sensing and monitoring algorithm is verified using real plant data acquired from Yonggwang Nuclear Power Plant Unit 3. In the simulations, the root mean squared error and the relative maximum error were small, and the proposed method detected the degradation of an existing hardware sensor early. (authors)
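The sequential probability ratio test used for sensor monitoring can be sketched on the residuals between the inferred and the measured flowrate; all numbers below (noise level, drift size, error rates) are hypothetical, not plant values.

```python
from math import log

def sprt_drift(residuals, sigma, shift, alpha=0.01, beta=0.01):
    """Wald SPRT on Gaussian residuals: H0 mean 0 (healthy sensor) vs
    H1 mean `shift` (drifting sensor). Returns a decision or 'undecided'."""
    upper = log((1 - beta) / alpha)   # cross above: accept H1 (degraded)
    lower = log(beta / (1 - alpha))   # cross below: accept H0 (healthy)
    llr = 0.0
    for r in residuals:
        llr += (shift / sigma**2) * (r - shift / 2)  # Gaussian LLR step
        if llr >= upper:
            return "degraded"
        if llr <= lower:
            return "healthy"
    return "undecided"

# Residuals hovering near the postulated drift trip the test quickly.
print(sprt_drift([0.9, 1.1, 1.0, 1.2], sigma=0.3, shift=1.0))
```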

  9. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
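The report's basic case, descriptive statistics on interval data, can be illustrated for the mean, whose tight bounds come from taking every datum at the corresponding endpoint; other statistics (notably the variance) are harder to bound, which is the computability question the report summarizes. The readings below are invented.

```python
def interval_mean(intervals):
    """Tight bounds on the sample mean when each datum is an interval
    (lo, hi) rather than a point estimate."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

# Hypothetical measurements, each known only to within an interval.
readings = [(1.0, 1.2), (0.9, 1.5), (1.1, 1.3)]
lo, hi = interval_mean(readings)
print(f"mean lies in [{lo:.3f}, {hi:.3f}]")
```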

  10. Self-assessed performance improves statistical fusion of image labels

    Energy Technology Data Exchange (ETDEWEB)

    Bryan, Frederick W., E-mail: frederick.w.bryan@vanderbilt.edu; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); Reich, Daniel S. [Translational Neuroradiology Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, Maryland 20892 (United States); Landman, Bennett A. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); and Radiology and Radiological Sciences, Vanderbilt University, Nashville, Tennessee 37235 (United States)

    2014-03-15

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance
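The contrast at the heart of the result — simple majority voting versus voting weighted by each rater's self-assessed confidence — can be sketched for a single voxel; the labels and confidences below are invented.

```python
from collections import Counter

def majority_vote(labels):
    """Label chosen by the most raters, ignoring confidence."""
    return Counter(labels).most_common(1)[0][0]

def weighted_vote(labels, weights):
    """Label with the largest total self-assessed confidence."""
    totals = {}
    for label, w in zip(labels, weights):
        totals[label] = totals.get(label, 0.0) + w
    return max(totals, key=totals.get)

# Two confident raters say "cord"; three unsure raters say "background".
labels = ["cord", "cord", "background", "background", "background"]
confidences = [0.9, 0.8, 0.2, 0.3, 0.1]
print(majority_vote(labels), "vs", weighted_vote(labels, confidences))
```

Here the unweighted vote follows the unsure majority while the confidence-weighted vote follows the confident minority, which is the kind of difference the study quantifies.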

  11. Self-assessed performance improves statistical fusion of image labels

    International Nuclear Information System (INIS)

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance

  12. Writing-Reading Relationships: Effectiveness of Writing Activities As Pre-Reading Tasks to Enhance L2 Inferential Reading Comprehension

    Directory of Open Access Journals (Sweden)

    Thilina Indrajie Wickramaarachchi

    2014-10-01

Full Text Available The study examines the interaction between reading and writing processes in general, and more specifically the impact of pre-reading tasks incorporating writing tasks (referred to as “prw tasks”) in helping the development of inferential reading comprehension. A sample of 70 first-year ESL students of the University of Kelaniya was selected, with one group (the experimental group) engaging in “prw tasks” while the other group (the control group) performed the tasks without a pre-reading component. The intervention ran for six sessions (one hour per session). At the end of each session, the performance of the two groups was measured, and the test scores were analyzed using the data analysis package SPSS to determine the effectiveness of the intervention. The results indicated that the experimental group performed significantly better than the control group, which indicates the effectiveness of the prw tasks in improving reading comprehension.
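The group comparison reported (run in SPSS) presumably rests on a two-sample test; a stdlib sketch of Welch's t statistic with invented comprehension scores. The p-value would come from a t distribution with Welch–Satterthwaite degrees of freedom, which the standard library does not provide.

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples, without
    assuming equal variances."""
    se2 = stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / se2 ** 0.5

# Invented comprehension scores for the two groups.
experimental = [14, 15, 13, 16, 15, 14]
control = [12, 11, 13, 12, 10, 12]
print(f"t = {welch_t(experimental, control):.2f}")
```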

  13. An inferentialist perspective on the coordination of actions and reasons involved in making a statistical inference

    Science.gov (United States)

    Bakker, Arthur; Ben-Zvi, Dani; Makar, Katie

    2017-12-01

    To understand how statistical and other types of reasoning are coordinated with actions to reduce uncertainty, we conducted a case study in vocational education that involved statistical hypothesis testing. We analyzed an intern's research project in a hospital laboratory in which reducing uncertainties was crucial to make a valid statistical inference. In his project, the intern, Sam, investigated whether patients' blood could be sent through pneumatic post without influencing the measurement of particular blood components. We asked, in the process of making a statistical inference, how are reasons and actions coordinated to reduce uncertainty? For the analysis, we used the semantic theory of inferentialism, specifically, the concept of webs of reasons and actions—complexes of interconnected reasons for facts and actions; these reasons include premises and conclusions, inferential relations, implications, motives for action, and utility of tools for specific purposes in a particular context. Analysis of interviews with Sam, his supervisor and teacher as well as video data of Sam in the classroom showed that many of Sam's actions aimed to reduce variability, rule out errors, and thus reduce uncertainties so as to arrive at a valid inference. Interestingly, the decisive factor was not the outcome of a t test but of the reference change value, a clinical chemical measure of analytic and biological variability. With insights from this case study, we expect that students can be better supported in connecting statistics with context and in dealing with uncertainty.
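The reference change value that proved decisive is a standard clinical-chemistry quantity: the smallest difference between two serial results on the same patient that is unlikely to be analytical plus within-subject biological variation alone. A sketch of the usual formula; the CVs below are illustrative, not from Sam's laboratory.

```python
from math import sqrt
from statistics import NormalDist

def reference_change_value(cv_analytical, cv_within_subject, confidence=0.95):
    """RCV (%) = sqrt(2) * z * sqrt(CVa^2 + CVi^2), with CVs in percent."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided z
    return sqrt(2) * z * sqrt(cv_analytical**2 + cv_within_subject**2)

# Illustrative CVs: 3% analytical, 6% within-subject biological.
print(f"RCV = {reference_change_value(3.0, 6.0):.1f}%")
```

A difference between pre- and post-pneumatic-post measurements smaller than the RCV would be within expected variation, which is the kind of reasoning that settled Sam's inference.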

  14. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

    Science.gov (United States)

    Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

    2013-03-01

To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four received a low score. Only a minority of the studies applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.

  15. Statistical evaluation of diagnostic performance topics in ROC analysis

    CERN Document Server

    Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

    2016-01-01

    Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...
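The quantity at the center of ROC analysis, the area under the curve, equals the probability that a randomly chosen diseased case scores higher than a randomly chosen healthy one (the Mann–Whitney form); a sketch with invented classifier scores:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as P(pos > neg), counting ties as half (Mann-Whitney form)."""
    wins = sum((p > q) + 0.5 * (p == q)
               for p in pos_scores for q in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Invented test scores for diseased and healthy cases.
diseased = [0.9, 0.8, 0.7, 0.6]
healthy = [0.5, 0.4, 0.7, 0.2]
print(f"AUC = {roc_auc(diseased, healthy):.3f}")
```

An AUC of 0.5 means the test is no better than chance; 1.0 means perfect separation of the two groups.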

  16. Randomized Algorithms for Scalable Machine Learning

    OpenAIRE

    Kleiner, Ariel Jacob

    2012-01-01

    Many existing procedures in machine learning and statistics are computationally intractable in the setting of large-scale data. As a result, the advent of rapidly increasing dataset sizes, which should be a boon yielding improved statistical performance, instead severely blunts the usefulness of a variety of existing inferential methods. In this work, we use randomness to ameliorate this lack of scalability by reducing complex, computationally difficult inferential problems to larger sets o...

  17. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics

    Science.gov (United States)

    Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona

    2017-01-01

In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term, students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural equation model.

  18. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics.

    Science.gov (United States)

    Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona

    2017-01-01

In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term, students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural equation model.

  19. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics

    Directory of Open Access Journals (Sweden)

    Manuela Paechter

    2017-07-01

    Full Text Available In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. 
Statistics anxiety led to higher procrastination in

  20. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
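
The logistic-regression idea in the abstract above can be sketched with a Bradley-Terry-style model: latent item scores are fitted to pairwise preference counts by maximum likelihood, yielding a ranking comparable to a Thurstone scale. The counts, learning rate and iteration budget below are invented for illustration.

```python
import math

# Hypothetical pairwise-preference counts: wins[i][j] = number of times
# item i was preferred over item j (e.g. three video-enhancement levels).
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
n = len(wins)

# Fit latent quality scores by maximising the logistic (Bradley-Terry
# style) likelihood with plain gradient ascent; score[0] is fixed at 0
# so the scale is identified, mirroring Thurstone's arbitrary origin.
scores = [0.0] * n
for _ in range(2000):
    grad = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # P(i preferred over j) under the logistic model
            p = 1.0 / (1.0 + math.exp(scores[j] - scores[i]))
            grad[i] += wins[i][j] - (wins[i][j] + wins[j][i]) * p
    for i in range(1, n):  # item 0 stays anchored at 0
        scores[i] += 0.01 * grad[i]

ranking = sorted(range(n), key=lambda i: -scores[i])
print(ranking)  # → [0, 1, 2]
```

Unlike raw Thurstone scaling, the fitted model's standard machinery (likelihood ratios, Wald tests) then gives the significance statements the abstract is after.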

  1. Evaluating performance measures to determine training effectiveness

    International Nuclear Information System (INIS)

    Klemm, R.W.; Feiza, A.S.

    1987-01-01

    This research was conceived and dedicated to helping the CECo training organization become a more integrated part of the corporate business. The target population for this study was nuclear and fossil generating station employees who directly impacted the production of electricity. The target sample (n = 150) included: instrument, mechanical, and electrical maintenance personnel; control room operators; engineers, radiation chemists, and other technical specialists; and equipment operators and attendants. A total of four instruments were utilized by this study. Three instruments were administered to the generating station personnel. These included a demographic form, a learning style profile, and a motivational style profile. The focal instrument, a performance skills rating form, was administered to supervisory personnel. Data analysis consisted of three major parts. Part one established internal consistency through Cronbach alpha statistics. Part two provides summary statistics and breakdown tables for important variables. Part three provides inferential statistics responding to the research questions. All six Performance Skills variables discriminated significantly between the trained and non-trained groups (p < .001). In all cases, the mean value for the trained group exceeded the mean value for the non-trained group. Implications for further research indicate that training does have a quantifiable effect on job performance.

  2. Effects of Working Memory Capacity and Content Familiarity on Literal and Inferential Comprehension in L2 Reading

    Science.gov (United States)

    Alptekin, Cem; Ercetin, Gulcan

    2011-01-01

    This study examines the effects of working memory capacity and content familiarity on literal and inferential comprehension in second language (L2) reading. Participants were 62 Turkish university students with an advanced English proficiency level. Working memory capacity was measured through a computerized version of a reading span test, whereas…

  3. Research Article Special Issue

    African Journals Online (AJOL)

    2016-05-15

    May 15, 2016 ... was performed using statistical software SPSS 11 and paired T-test, ANOVA, Pearson correlation. .... with a pilot study of 30 people. In order to comply ... Data analysis was performed at the level of inferential statistics. (t-test ...

  4. Executive Remuneration and the Financial Performance of Quoted Firms: The Nigerian Experience

    Directory of Open Access Journals (Sweden)

    Sunday OGBEIDE

    2016-12-01

    Full Text Available This study examined executive remuneration and firms’ performance in Nigeria. Specifically, the study sought to ascertain the nexus between executive remuneration, firm size and board size variables and the performance of quoted companies. The population of the study consists of all the quoted firms as at 31st December, 2014. A sample of sixty (60) companies excluding non-financial firms was selected for the period 2013 and 2014. Summary statistics such as descriptive statistics, correlation and Granger causality tests were used. Inferential statistics, using panel Generalized Least Squares (EGLS) with fixed effects, were used for the purpose of empirical validation, after the application of diagnostic tests to enhance the study. The study ascertained that executive remuneration has a relationship with firm performance but impacted it negatively, though not statistically significantly. Firm size was found not to have a significant positive relationship with firms’ performance, though it has a causal relationship with the performance of the firms. Board size was found to affect the performance of firms negatively and is statistically not significant. Premised on this, the study suggests that the executive remuneration of quoted firms should be pegged constantly in a flexible manner. This will enable shareholders to know the causal relationship between what is paid to the executives and how that influences performance.

  5. A Model of Statistics Performance Based on Achievement Goal Theory.

    Science.gov (United States)

    Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

    2003-01-01

    Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

  6. Client Financial Support for Mitigating Cost Factors Affecting ...

    African Journals Online (AJOL)

    Sultan

    Descriptive and inferential statistics were used to analyse data obtained. Findings ... improved SSC business performance and hence commensurate contribution to national economy. Keywords: ... March 2014 (The Australian Performance.

  7. Descriptive statistics: the specification of statistical measures and their presentation in tables and graphs. Part 7 of a series on evaluation of scientific publications.

    Science.gov (United States)

    Spriestersbach, Albert; Röhrig, Bernd; du Prel, Jean-Baptist; Gerhold-Ay, Aslihan; Blettner, Maria

    2009-09-01

    Descriptive statistics are an essential part of biometric analysis and a prerequisite for the understanding of further statistical evaluations, including the drawing of inferences. When data are well presented, it is usually obvious whether the author has collected and evaluated them correctly and in keeping with accepted practice in the field. Statistical variables in medicine may be of either the metric (continuous, quantitative) or categorical (nominal, ordinal) type. Easily understandable examples are given. Basic techniques for the statistical description of collected data are presented and illustrated with examples. The goal of a scientific study must always be clearly defined. The definition of the target value or clinical endpoint determines the level of measurement of the variables in question. Nearly all variables, whatever their level of measurement, can be usefully presented graphically and numerically. The level of measurement determines what types of diagrams and statistical values are appropriate. There are also different ways of presenting combinations of two independent variables graphically and numerically. The description of collected data is indispensable. If the data are of good quality, valid and important conclusions can already be drawn when they are properly described. Furthermore, data description provides a basis for inferential statistics.

  8. Parametric Estimation of Load for Air Force Data Centers

    Science.gov (United States)

    2015-03-27

    …inferential statistics. The purpose of descriptive statistics is to describe and summarize the characteristics of a sample. To accomplish that… Descriptive Statistical Analysis… Inferential Statistical Analysis… Contrasting with Sixty Percent

  9. The role of working memory in inferential sentence comprehension.

    Science.gov (United States)

    Pérez, Ana Isabel; Paolieri, Daniela; Macizo, Pedro; Bajo, Teresa

    2014-08-01

    Existing literature on inference making is large and varied. Trabasso and Magliano (Discourse Process 21(3):255-287, 1996) proposed the existence of three types of inferences: explicative, associative and predictive. In addition, the authors suggested that these inferences were related to working memory (WM). In the present experiment, we investigated whether WM capacity plays a role in our ability to answer comprehension sentences that require text information based on these types of inferences. Participants with high and low WM span read two narratives with four paragraphs each. After each paragraph was read, they were presented with four true/false comprehension sentences. One required verbatim information and the other three implied explicative, associative and predictive inferential information. Results demonstrated that only the explicative and predictive comprehension sentences required WM: participants with high verbal WM were more accurate in giving explanations and also faster at making predictions relative to participants with low verbal WM span; in contrast, no WM differences were found in the associative comprehension sentences. These results are interpreted in terms of the causal nature underlying these types of inferences.

  10. Statistical Methods for Comparative Phenomics Using High-Throughput Phenotype Microarrays

    KAUST Repository

    Sturino, Joseph

    2010-01-24

    We propose statistical methods for comparing phenomics data generated by the Biolog Phenotype Microarray (PM) platform for high-throughput phenotyping. Instead of the routinely used visual inspection of data with no sound inferential basis, we develop two approaches. The first approach is based on quantifying the distance between mean or median curves from two treatments and then applying a permutation test; we also consider a permutation test applied to areas under mean curves. The second approach employs functional principal component analysis. Properties of the proposed methods are investigated on both simulated data and data sets from the PM platform.
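
The first approach described above can be sketched in miniature: summarise each growth curve by its area, compare treatment means, and obtain a permutation p-value. The curves, group sizes and noise level are all made up; a real PM analysis would use the platform's kinetic readings.

```python
import random

random.seed(0)

def area(curve):
    """Area under a curve sampled at unit intervals (trapezoid rule)."""
    return sum((a + b) / 2 for a, b in zip(curve, curve[1:]))

def mean_area(group):
    return sum(area(c) for c in group) / len(group)

# Hypothetical phenotype readings: treatment B grows higher than A.
noisy = lambda base: [v + random.gauss(0, 0.2) for v in base]
group_a = [noisy([0, 1, 2, 2, 2]) for _ in range(10)]
group_b = [noisy([0, 1, 3, 4, 4]) for _ in range(10)]

observed = abs(mean_area(group_a) - mean_area(group_b))

# Permutation test: shuffle group labels and rebuild the statistic.
combined = group_a + group_b
count = 0
n_perm = 2000
for _ in range(n_perm):
    random.shuffle(combined)
    stat = abs(mean_area(combined[:10]) - mean_area(combined[10:]))
    if stat >= observed:
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(round(p_value, 4))  # very small: the mean curves differ
```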

  11. Assessing Adult Learner’s Numeracy as Related to Gender and Performance in Arithmetic

    Directory of Open Access Journals (Sweden)

    Adeneye O. A. Awofala

    2014-07-01

    Full Text Available The study investigated adult learner numeracy as related to gender and performance in arithmetic among 32 Nigerian adult learners from one government accredited adult literacy centre in Lagos State using the quantitative research method within the blueprint of descriptive survey design. Data collected were analysed using the descriptive statistics of percentages, mean, and standard deviation and inferential statistics of factor analysis, independent samples t-test, and multiple regression analysis. Findings revealed that numeracy skill assessed by the numeracy self-assessment scale was a multi-dimensional construct (numeracy in everyday life, numeracy in workplace, and numeracy in mathematical tasks). Adult learners showed average numeracy strength as gender differences in perception of numeracy skills and performance in arithmetic among adult learners reached zero-tolerance level. Numeracy in workplace and numeracy in mathematical tasks made statistically significant contributions to the variance in adult learners’ performance in arithmetic. Based on this base line study, it was thus recommended that future studies in Nigeria should investigate adult learners’ numeracy skills using more robust and psychometrically sound instruments such as the Adult Literacy and Life Skills Survey (ALLS) and the International Adult Literacy Survey (IALS).

  12. Statistical inference for the lifetime performance index based on generalised order statistics from exponential distribution

    Science.gov (United States)

    Vali Ahmadi, Mohammad; Doostparast, Mahdi; Ahmadi, Jafar

    2015-04-01

    In manufacturing industries, the lifetime of an item is usually characterised by a random variable X and considered to be satisfactory if X exceeds a given lower lifetime limit L. The probability of a satisfactory item is then ηL := P(X ≥ L), called the conforming rate. In industrial companies, however, the lifetime performance index, proposed by Montgomery and denoted by CL, is widely used as a process capability index instead of the conforming rate. Assuming a parametric model for the random variable X, we show that there is a connection between the conforming rate and the lifetime performance index. Consequently, the statistical inferences about ηL and CL are equivalent. Hence, we restrict ourselves to statistical inference for CL based on generalised order statistics, which contain several ordered data models such as usual order statistics, progressively Type-II censored data and records. Various point and interval estimators for the parameter CL are obtained and optimal critical regions for the hypothesis testing problems concerning CL are proposed. Finally, two real data-sets on the lifetimes of insulating fluid and ball bearings, due to Nelson (1982) and Caroni (2002), respectively, and a simulated sample are analysed.
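
For the exponential case the connection mentioned in the abstract can be made concrete: with mean lifetime theta, CL = (mu - L)/sigma = 1 - L/theta and ηL = P(X ≥ L) = exp(-L/theta) = exp(CL - 1), so the two indices are one-to-one. A simulated illustration (all parameter values invented):

```python
import math
import random

random.seed(42)

# Exponential lifetimes with mean theta; lower lifetime limit L.
theta, L = 5.0, 1.0
sample = [random.expovariate(1 / theta) for _ in range(5000)]

theta_hat = sum(sample) / len(sample)                 # MLE of the mean
c_l_hat = 1 - L / theta_hat                           # estimated C_L
eta_hat = sum(x >= L for x in sample) / len(sample)   # empirical P(X >= L)

# The empirical conforming rate should closely track exp(C_L - 1).
print(f"empirical eta: {eta_hat:.3f}, exp(C_L - 1): {math.exp(c_l_hat - 1):.3f}")
```

Because the map between ηL and CL is monotone, a confidence interval or test for one index translates directly into one for the other, which is the equivalence the authors exploit.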

  13. Statistical analysis of RHIC beam position monitors performance

    Science.gov (United States)

    Calaga, R.; Tomás, R.

    2004-04-01

    A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.

  14. Statistical analysis of RHIC beam position monitors performance

    Directory of Open Access Journals (Sweden)

    R. Calaga

    2004-04-01

    Full Text Available A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
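
A toy version of the SVD idea in the abstracts above: healthy BPMs share a common low-rank betatron signal, so a monitor whose residual after a low-rank reconstruction is large stands out as faulty. The tune, noise level and the choice of BPM 7 as the broken one are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Turns x BPMs matrix: every healthy monitor sees the same oscillation
# (a rank-2 mode, since sin(wt + phase) splits into sin/cos components)
# plus small noise; BPM 7 "malfunctions" and returns pure noise.
turns, n_bpm = 200, 12
phase = rng.uniform(0, 2 * np.pi, n_bpm)
t = np.arange(turns)[:, None]
data = np.sin(2 * np.pi * 0.31 * t + phase) + 0.05 * rng.normal(size=(turns, n_bpm))
data[:, 7] = rng.normal(size=turns)  # broken monitor: uncorrelated signal

# Keep the dominant two modes and flag the largest-residual column.
u, s, vt = np.linalg.svd(data, full_matrices=False)
rank2 = (u[:, :2] * s[:2]) @ vt[:2]
residual = np.sqrt(((data - rank2) ** 2).mean(axis=0))

suspect = int(np.argmax(residual))
print(suspect)  # → 7
```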

  15. Equivalent statistics and data interpretation.

    Science.gov (United States)

    Francis, Gregory

    2017-08-01

    Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
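
The two-sample equivalence claim above can be checked directly: with known n1 and n2, Cohen's d is a deterministic rescaling of t, d = t * sqrt(1/n1 + 1/n2), so either statistic recovers the other. The data below are simulated; pooled-variance formulas are the usual ones.

```python
import math
import random

random.seed(3)

# Two simulated samples with a modest true mean difference.
a = [random.gauss(0.0, 1.0) for _ in range(40)]
b = [random.gauss(0.5, 1.0) for _ in range(50)]

def mean(x):
    return sum(x) / len(x)

def var(x):
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

n1, n2 = len(a), len(b)
sp2 = ((n1 - 1) * var(a) + (n2 - 1) * var(b)) / (n1 + n2 - 2)  # pooled variance
t = (mean(b) - mean(a)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))   # t statistic
d = (mean(b) - mean(a)) / math.sqrt(sp2)                       # Cohen's d

# One statistic is a deterministic rescaling of the other.
print(abs(d - t * math.sqrt(1 / n1 + 1 / n2)) < 1e-12)  # → True
```

Since the mapping is invertible given the sample sizes, the p value, the confidence interval of d, and a Bayes factor computed from t all rest on the same information; only their interpretations differ, which is the paper's point.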

  16. THESEE-3, Orgel Reactor Performance and Statistic Hot Channel Factors

    International Nuclear Information System (INIS)

    Chambaud, B.

    1974-01-01

    1 - Nature of physical problem solved: The code applies to a heavy-water moderated organic-cooled reactor channel. Different fuel cluster models can be used (circular or hexagonal patterns). The code gives coolant temperatures and velocities and cladding temperatures throughout the channel and also channel performances, such as power, outlet temperature, boiling and burn-out safety margins (see THESEE-1). In a further step, calculations are performed with statistical values obtained by random retrieval of geometrical input data and taking into account construction tolerances, vibrations, etc. The code evaluates the mean value and standard deviation for the more important thermal and hydraulic parameters. 2 - Method of solution: First step calculations are performed for nominal values of parameters by solving iteratively the non-linear system of equations which give the pressure drops in subchannels of the current zone (see THESEE-1). Then a Gaussian probability distribution of possible statistical values of the geometrical input data is assumed. A random number generation routine determines the statistical case. Calculations are performed in the same way as for the nominal case. In the case of several channels, statistical performances must be adjusted to equalize the normal pressure drop. A special subroutine (AVERAGE) then determines the mean value and standard deviation, and thus probability functions of the most significant thermal and hydraulic results. 3 - Restrictions on the complexity of the problem: Maximum 7 fuel clusters, each divided into 10 axial zones. Fuel bundle geometries are restricted to the following models - circular pattern 6/7, 18/19, 36/67 rods, with or without fillers. The fuel temperature distribution is not studied. The probability distribution of the statistical input is assumed to be a Gaussian function. The principle of random retrieval of statistical values is correct, but some additional correlations could be found from a more

  17. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

  18. Do narcissism and emotional intelligence win us friends? Modeling dynamics of peer popularity using inferential network analysis

    OpenAIRE

    Czarna, Anna; Leifeld, Philip; Śmieja-Nęcka, Magdalena; Dufner, Michael; Salovey, Peter

    2016-01-01

    This research investigated effects of narcissism and emotional intelligence (EI) on popularity in social networks. In a longitudinal field study, we examined the dynamics of popularity in 15 peer groups in two waves (N = 273). We measured narcissism, ability EI, and explicit and implicit self-esteem. In addition, we measured popularity at zero acquaintance and 3 months later. We analyzed the data using inferential network analysis (temporal exponential random graph modeling, TERGM) accounting...

  19. Statistical methods for assays with limits of detection: Serum bile acid as a differentiator between patients with normal colons, adenomas, and colorectal cancer

    Directory of Open Access Journals (Sweden)

    Bonnie LaFleur

    2011-01-01

    Full Text Available In analytic chemistry a detection limit (DL) is the lowest measurable amount of an analyte that can be distinguished from a blank; many biomedical measurement technologies exhibit this property. From a statistical perspective, these data present inferential challenges because instead of precise measures, one only has information that the value is somewhere between 0 and the DL (below detection limit, BDL). Substitution of BDL values with 0 or the DL can lead to biased parameter estimates and a loss of statistical power. Statistical methods that make adjustments when dealing with these types of data, often called left-censored data, are available in many commercial statistical packages. Despite this availability, the use of these methods is still not widespread in biomedical literature. We have reviewed the statistical approaches of dealing with BDL values, and used simulations to examine the performance of the commonly used substitution methods and the most widely available statistical methods. We have illustrated these methods using a study undertaken at the Vanderbilt-Ingram Cancer Center, to examine the serum bile acid levels in patients with colorectal cancer and adenoma. We have found that the modern methods for BDL values identify disease-related differences that are often missed, with statistically naive approaches.
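
A small simulation in the spirit of the abstract above, contrasting DL/2 substitution with a censored-likelihood (Tobit-style) estimate for normal data. All parameter values, the detection limit and the crude grid search are invented for illustration; a real analysis would use the packaged routines the authors mention.

```python
import math
import random

random.seed(7)

# Simulated assay: true N(1, 1) values, detection limit DL = 0.8.
mu_true, sigma_true, DL = 1.0, 1.0, 0.8
raw = [random.gauss(mu_true, sigma_true) for _ in range(400)]
obs = [x if x >= DL else None for x in raw]   # None marks a BDL value

# Naive approach: substitute DL/2 for every BDL value (biases the mean up,
# because the replaced values sit well below DL/2 on average here).
sub = [x if x is not None else DL / 2 for x in obs]
sub_mean = sum(sub) / len(sub)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def loglik(mu, sigma):
    """Left-censored normal log-likelihood (additive constants dropped)."""
    ll = 0.0
    for x in obs:
        if x is None:
            ll += math.log(phi((DL - mu) / sigma))       # P(X < DL)
        else:
            ll += -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)
    return ll

# Crude grid-search MLE; fine for a demo, a real analysis would optimise.
grid_mu = [i * 0.04 for i in range(51)]           # 0.00 .. 2.00
grid_sigma = [0.6 + i * 0.04 for i in range(21)]  # 0.60 .. 1.40
mle_mu, mle_sigma = max(((m, s) for m in grid_mu for s in grid_sigma),
                        key=lambda ms: loglik(*ms))

print(f"substitution mean: {sub_mean:.2f}, censored-MLE mean: {mle_mu:.2f}")
```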

  20. Statistical exploration of dataset examining key indicators influencing housing and urban infrastructure investments in megacities

    Directory of Open Access Journals (Sweden)

    Adedeji O. Afolabi

    2018-06-01

    Full Text Available Lagos, by UN standards, has attained megacity status, with the attendant challenges of living up to that titanic position; regrettably it struggles with its present stock of housing and infrastructural facilities to match its new status. Based on a survey of the perceptions of construction professionals residing within the state, a questionnaire instrument was used to gather the dataset. The statistical exploration contains a dataset on the state of the housing and urban infrastructural deficit, key indicators spurring investment by government to upturn the deficit, and improvement mechanisms to tackle the infrastructural dearth. Descriptive statistics and inferential statistics were used to present the dataset. The dataset when analyzed can be useful for policy makers, local and international governments, world funding bodies, researchers and infrastructural investors. Keywords: Construction, Housing, Megacities, Population, Urban infrastructures

  1. Improved custom statistics visualization for CA Performance Center data

    CERN Document Server

    Talevi, Iacopo

    2017-01-01

    The main goal of my project is to understand and experiment with the possibilities that CA Performance Center (CA PC) offers for creating custom applications to display stored information through interesting visual means, such as maps. In particular, I have re-written some of the network statistics web pages in order to fetch data from new statistics modules in CA PC, which has its own API, and to stop using the RRD data.

  2. Inferential revision in narrative texts: An ERP study.

    Science.gov (United States)

    Pérez, Ana; Cain, Kate; Castellanos, María C; Bajo, Teresa

    2015-11-01

    We evaluated the process of inferential revision during text comprehension in adults. Participants with high or low working memory read short texts, in which the introduction supported two plausible concepts (e.g., 'guitar/violin'), although one was more probable ('guitar'). There were three possible continuations: a neutral sentence, which did not refer back to either concept; a no-revise sentence, which referred to a general property consistent with either concept (e.g., '…beautiful curved body'); and a revise sentence, which referred to a property that was consistent with only the less likely concept (e.g., '…matching bow'). Readers took longer to read the sentence in the revise condition, indicating that they were able to evaluate their comprehension and detect a mismatch. In a final sentence, a target noun referred to the alternative concept supported in the revise condition (e.g., 'violin'). ERPs indicated that both working memory groups were able to evaluate their comprehension of the text (P3a), but only high working memory readers were able to revise their initial incorrect interpretation (P3b) and integrate the new information (N400) when reading the revise sentence. Low working memory readers had difficulties inhibiting the no-longer-relevant interpretation and thus failed to revise their situation model, and they experienced problems integrating semantically related information into an accurate memory representation.

  3. An audit of the statistics and the comparison with the parameter in the population

    Science.gov (United States)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed to closely estimate the statistics for particular parameters has long been an issue. Although a sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters for a particular population. Meanwhile, a p-value of less than 0.05 is widely used as a guideline for inferential evidence. Therefore, this study audited results analyzed from various subsamples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis and eight subsamples for each statistical analysis were analyzed. Results found that the statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.
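
The audit's core idea, drawing subsamples at growing fractions of a population and checking how closely the sample statistic tracks the parameter, can be sketched with invented data:

```python
import random
import statistics

random.seed(5)

# Invented "population" and its parameter of interest (the mean).
population = [random.gauss(50, 10) for _ in range(10_000)]
param = statistics.fmean(population)

# Audit subsamples covering 5%, 15% and 35% of the population.
errs = []
for frac in (0.05, 0.15, 0.35):
    sub = random.sample(population, int(frac * len(population)))
    errs.append(abs(statistics.fmean(sub) - param))
    print(f"{frac:.0%} sample: |error| = {errs[-1]:.3f}")
```

As in the audit, the absolute error shrinks as the sampling fraction grows, and how small is "close enough" depends on the variable type and the population size.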

  4. Early warning of limit-exceeding concentrations of cyanobacteria and cyanotoxins in drinking water reservoirs by inferential modelling.

    Science.gov (United States)

    Recknagel, Friedrich; Orr, Philip T; Bartkow, Michael; Swanepoel, Annelie; Cao, Hongqing

    2017-11-01

    An early warning scheme is proposed that runs ensembles of inferential models for predicting the cyanobacterial population dynamics and cyanotoxin concentrations in drinking water reservoirs on a diel basis driven by in situ sonde water quality data. When the 10- to 30-day-ahead predicted concentrations of cyanobacteria cells or cyanotoxins exceed pre-defined limit values, an early warning automatically activates an action plan considering in-lake control, e.g. intermittent mixing and ad hoc water treatment in water works, respectively. Case studies of the sub-tropical Lake Wivenhoe (Australia) and the Mediterranean Vaal Reservoir (South Africa) demonstrate that ensembles of inferential models developed by the hybrid evolutionary algorithm HEA are capable of forecasting cyanobacteria and cyanotoxins up to 30 days ahead using data collected in situ. The resulting models for Dolichospermum circinale displayed validity for up to 10 days ahead, whilst concentrations of Cylindrospermopsis raciborskii and microcystins were successfully predicted up to 30 days ahead. Implementing the proposed scheme for drinking water reservoirs enhances current water quality monitoring practices by solely utilising in situ monitoring data, in addition to cyanobacteria and cyanotoxin measurements. Access to routinely measured cyanotoxin data allows for the development of models that explicitly predict cyanotoxin concentrations and thereby avoid inadvertently modelling and predicting non-toxic cyanobacterial strains. Copyright © 2017 Elsevier B.V. All rights reserved.
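
The alert step of such a scheme reduces to comparing forecast concentrations against the pre-defined limit value; a hypothetical sketch, with the threshold and forecast values invented:

```python
# Assumed alert threshold for cyanobacteria cell counts (cells/mL);
# a real scheme would take this from the relevant drinking-water guideline.
LIMIT = 10_000

def alerts(forecasts, limit=LIMIT):
    """Return the forecast days (1-based) whose predicted concentration
    reaches or exceeds the limit, i.e. the days that trigger the action plan."""
    return [day for day, conc in enumerate(forecasts, start=1) if conc >= limit]

# Hypothetical 5-day-ahead forecast from the model ensemble.
forecast = [4_000, 8_500, 12_000, 15_000, 9_000]
print(alerts(forecast))  # → [3, 4]
```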

  5. EDI Performance Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — This section contains statistical information and reports related to the percentage of electronic transactions being sent to Medicare contractors in the formats...

  6. Impact of a short biostatistics course on knowledge and performance of postgraduate scholars: Implications for training of African doctors and biomedical researchers.

    Science.gov (United States)

    Chima, S C; Nkwanyana, N M; Esterhuizen, T M

    2015-12-01

    This study was designed to evaluate the impact of a short biostatistics course on the knowledge and performance of statistical analysis by biomedical researchers in Africa. It is recognized that knowledge of biostatistics is essential for understanding and interpreting modern scientific literature and for active participation in the global research enterprise. Unfortunately, it has been observed that the basic education of African scholars may be deficient in applied mathematics, including biostatistics. Forty university-affiliated biomedical researchers from South Africa volunteered for a 4-day short course in which participants were exposed to lectures on descriptive and inferential biostatistics and practical training on using a statistical software package for data analysis. A quantitative questionnaire was used to evaluate participants' statistical knowledge and performance pre- and post-course. Changes in knowledge and performance were measured using objective and subjective criteria. Data from completed questionnaires were captured and analyzed using the Statistical Package for the Social Sciences. Participants' pre- and post-course data were compared using nonparametric Wilcoxon signed-rank tests for non-normally distributed variables. The comparison showed improved statistical knowledge and performance among the researchers in this cohort and highlights the potential benefits of short courses in biostatistics for improving the knowledge and skills of biomedical researchers and scholars in Africa.

  7. Humans make efficient use of natural image statistics when performing spatial interpolation.

    Science.gov (United States)

    D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

    2013-12-16

    Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors, and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system efficiently exploits the statistical structure of natural images.
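
    The "local mean" heuristic against which human performance is compared can be stated concretely. The sketch below is hypothetical illustration code, not from the study: it estimates a missing center pixel as the mean of its neighbors in a patch.

```python
import numpy as np

def estimate_missing_pixel(patch):
    """Estimate the missing center pixel of a square patch as the mean of
    the surrounding pixels (the 'local mean' heuristic)."""
    patch = np.asarray(patch, dtype=float)
    c = patch.shape[0] // 2
    mask = np.ones(patch.shape, dtype=bool)
    mask[c, c] = False              # exclude the missing center pixel
    return patch[mask].mean()

# On a smooth intensity gradient the local mean recovers the center well.
patch = np.arange(9, dtype=float).reshape(3, 3)   # true center value is 4
estimate = estimate_missing_pixel(patch)
```

    The study's optimal observers go beyond this by conditioning on the local contrast statistics of natural images; the heuristic above is only the baseline they outperform.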

  8. Statistical methods used in the public health literature and implications for training of public health professionals.

    Science.gov (United States)

    Hayat, Matthew J; Powell, Amanda; Johnson, Tessa; Cadwell, Betsy L

    2017-01-01

    Statistical literacy and knowledge are needed to read and understand the public health literature. The purpose of this study was to quantify the basic and advanced statistical methods used in public health research. We randomly sampled 216 published articles from seven top-tier general public health journals. Studies were reviewed by two readers and a standardized data collection form was completed for each article. Data were analyzed with descriptive statistics and frequency distributions. Results were summarized for the statistical methods used in the literature, including descriptive and inferential statistics, modeling, advanced statistical techniques, and statistical software used. Approximately 81.9% of articles reported an observational study design and 93.1% of articles were substantively focused. Descriptive statistics in table or graphical form were reported in more than 95% of the articles, and statistical inference was reported in more than 76% of the studies reviewed. These results reveal the types of statistical methods currently used in the public health literature. Although this study did not obtain information on what should be taught, information on the statistical methods being used is useful for curriculum development in graduate health sciences education, as well as for making informed decisions about continuing education for public health professionals.

  9. Reasoning with data an introduction to traditional and Bayesian statistics using R

    CERN Document Server

    Stanton, Jeffrey M

    2017-01-01

    Engaging and accessible, this book teaches readers how to use inferential statistical thinking to check their assumptions, assess evidence about their beliefs, and avoid overinterpreting results that may look more promising than they really are. It provides step-by-step guidance for using both classical (frequentist) and Bayesian approaches to inference. Statistical techniques covered side by side from both frequentist and Bayesian approaches include hypothesis testing, replication, analysis of variance, calculation of effect sizes, regression, time series analysis, and more. Students also get a complete introduction to the open-source R programming language and its key packages. Throughout the text, simple commands in R demonstrate essential data analysis skills using real-data examples. The companion website provides annotated R code for the book's examples, in-class exercises, supplemental reading lists, and links to online videos, interactive materials, and other resources.

  10. Handbook of tables for order statistics from lognormal distributions with applications

    CERN Document Server

    Balakrishnan, N

    1999-01-01

    Lognormal distributions are one of the most commonly studied models in the statistical literature while being most frequently used in the applied literature. The lognormal distributions have been used in problems arising from such diverse fields as hydrology, biology, communication engineering, environmental science, reliability, agriculture, medical science, mechanical engineering, material science, and pharmacology. Though the lognormal distributions have been around from the beginning of this century (see Chapter 1), much of the work concerning inferential methods for the parameters of lognormal distributions has been done in the recent past. Most of these methods of inference, particularly those based on censored samples, involve extensive use of numerical methods to solve some nonlinear equations. Order statistics and their moments have been discussed quite extensively in the literature for many distributions. It is very well known that the moments of order statistics can be derived explicitly only...

  11. A Divergence Statistics Extension to VTK for Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report follows the series of previous documents [PT08, BPRT09b, PT09, BPT09, PT10, PB13], where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
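
    The report's C++ snippets are not reproduced in this listing, and it does not specify here which divergence the engine computes. As an illustrative stand-in, the Python sketch below computes the Kullback-Leibler divergence between an observed empirical distribution and a theoretical "ideal" one (the choice of KL is an assumption, not taken from the report):

```python
import numpy as np

def kl_divergence(empirical_counts, theoretical_probs):
    """Kullback-Leibler divergence D(P_emp || P_theo), in nats, between an
    observed empirical distribution and a theoretical one."""
    p = np.asarray(empirical_counts, dtype=float)
    p = p / p.sum()                  # normalise counts to a distribution
    q = np.asarray(theoretical_probs, dtype=float)
    nz = p > 0                       # terms with p = 0 contribute nothing
    return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))

observed = [18, 22, 19, 21, 20, 20]   # 120 rolls of a die
fair = [1 / 6] * 6                    # theoretical "ideal" distribution
d = kl_divergence(observed, fair)     # small, non-negative discrepancy
```

    Like the distance-style discrepancy the report describes, this quantity is zero exactly when the empirical and theoretical distributions coincide.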

  12. Not the Norm: The Potential of Tree Analysis of Performance Data from Students in a Foundation Mathematics Module

    Science.gov (United States)

    Kirby, Nicola; Dempster, Edith

    2015-01-01

    Quantitative methods of data analysis usually involve inferential statistics, and are not well known for their ability to reflect the intricacies of a diverse student population. The South African tertiary education sector is characterised by extreme inequality and diversity. Foundation programmes address issues of inequality of access by…

  13. Principles for statistical inference on big spatio-temporal data from climate models

    KAUST Repository

    Castruccio, Stefano; Genton, Marc G.

    2018-01-01

    The vast increase in size of modern spatio-temporal datasets has prompted statisticians working in environmental applications to develop new and efficient methodologies that are still able to achieve inference for nontrivial models within an affordable time. Climate model outputs push the limits of inference for Gaussian processes, as their size can easily be larger than 10 billion data points. Drawing from our experience in a set of previous work, we provide three principles for the statistical analysis of such large datasets that leverage recent methodological and computational advances. These principles emphasize the need of embedding distributed and parallel computing in the inferential process.

  14. Principles for statistical inference on big spatio-temporal data from climate models

    KAUST Repository

    Castruccio, Stefano

    2018-02-24

    The vast increase in size of modern spatio-temporal datasets has prompted statisticians working in environmental applications to develop new and efficient methodologies that are still able to achieve inference for nontrivial models within an affordable time. Climate model outputs push the limits of inference for Gaussian processes, as their size can easily be larger than 10 billion data points. Drawing from our experience in a set of previous work, we provide three principles for the statistical analysis of such large datasets that leverage recent methodological and computational advances. These principles emphasize the need of embedding distributed and parallel computing in the inferential process.

  15. Assessment and Certification of Neonatal Incubator Sensors through an Inferential Neural Network

    Directory of Open Access Journals (Sweden)

    José Medeiros de Araújo

    2013-11-01

    Measurement and diagnostic systems based on electronic sensors have been increasingly essential in the standardization of hospital equipment. The technical standard IEC (International Electrotechnical Commission) 60601-2-19 establishes requirements for neonatal incubators and specifies the calibration procedure and validation tests for such devices using sensor systems. This paper proposes a new procedure based on an inferential neural network to evaluate and calibrate a neonatal incubator. The proposal presents significant advantages over the standard calibration process, i.e., the number of sensors is drastically reduced, and it runs with the incubator under operation. Since the sensors used in the new calibration process are already installed in the commercial incubator, no additional hardware is necessary, and the calibration necessity can be diagnosed in real time without the presence of technical professionals in the neonatal intensive care unit (NICU). Experimental tests involving the aforementioned calibration system are carried out in a commercial incubator in order to validate the proposal.

  16. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks, with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important…
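
    The sample-path methodology for queueing models can be illustrated with a minimal example (not taken from the monograph): simulating the waiting times of an M/M/1 queue along one sample path via Lindley's recursion, and comparing the estimate with the closed-form mean wait rho/(mu - lam).

```python
import random

def mm1_mean_wait(lam, mu, n=200_000, seed=1):
    """Average waiting time in an M/M/1 queue, estimated along one sample
    path via Lindley's recursion: W[k+1] = max(0, W[k] + S[k] - A[k+1])."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        s = rng.expovariate(mu)    # service time of the current customer
        a = rng.expovariate(lam)   # inter-arrival time to the next customer
        w = max(0.0, w + s - a)    # waiting time of the next customer
        total += w
    return total / n

lam, mu = 0.5, 1.0                       # utilisation rho = lam/mu = 0.5
sim = mm1_mean_wait(lam, mu)             # sample-path estimate
theory = (lam / mu) / (mu - lam)         # closed-form M/M/1 mean wait
```

    The closed-form value follows from the Pollaczek-Khinchine formula specialised to exponential service times; the sample-path estimate converges to it as n grows.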

  17. Study design and statistical analysis of data in human population studies with the micronucleus assay.

    Science.gov (United States)

    Ceppi, Marcello; Gallo, Fabio; Bonassi, Stefano

    2011-01-01

    The most common study design in population studies based on the micronucleus (MN) assay is the cross-sectional study, which is largely performed to evaluate the DNA-damaging effects of exposure to genotoxic agents in the workplace, in the environment, as well as from diet or lifestyle factors. Sample size is still a critical issue in the design of MN studies, since most recent studies considering gene-environment interaction often require a sample size of several hundred subjects, which is in many cases difficult to achieve. The control of confounding is another major threat to the validity of causal inference. The most popular confounders considered in population studies using MN are age, gender and smoking habit. Extensive attention is given to the assessment of effect modification, given the increasing inclusion of biomarkers of genetic susceptibility in the study design. Selected issues concerning the statistical treatment of data are addressed in this mini-review, starting from data description, which is a critical step of statistical analysis, since it allows possible errors in the dataset to be detected and the validity of assumptions required for more complex analyses to be checked. Basic issues in the statistical analysis of biomarkers are extensively evaluated, including methods to explore the dose-response relationship between two continuous variables and inferential analysis. A critical approach to the use of parametric and non-parametric methods is presented, before addressing the issue of the most suitable multivariate models to fit MN data. In the last decade, the quality of statistical analysis of MN data has certainly evolved, although even nowadays only a small number of studies apply the Poisson model, which is the most suitable method for the analysis of MN data.
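
    Whether the Poisson model is adequate for count data of this kind can itself be checked. The sketch below uses simulated counts and an illustrative helper (neither taken from the review) to run a chi-square goodness-of-fit test against a Poisson model:

```python
import numpy as np
from scipy import stats

def poisson_gof(counts):
    """Chi-square goodness of fit of count data to a Poisson model whose
    rate is estimated by its maximum-likelihood value, the sample mean."""
    counts = np.asarray(counts)
    n, lam = counts.size, counts.mean()
    kmax = int(counts.max())
    observed = np.bincount(counts, minlength=kmax + 1).astype(float)
    # expected frequencies for 0..kmax-1, with the right tail lumped into
    # the last bin so that observed and expected totals agree exactly
    expected = stats.poisson.pmf(np.arange(kmax + 1), lam) * n
    expected[kmax] = stats.poisson.sf(kmax - 1, lam) * n
    # ddof=1 removes one degree of freedom for the estimated rate
    return stats.chisquare(observed, expected, ddof=1)

rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=500)   # simulated counts, one per subject
result = poisson_gof(counts)
```

    In practice adjacent bins with very small expected frequencies would also be merged before testing; the sketch keeps only the minimal tail lump.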

  18. Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations

    Science.gov (United States)

    Garrett, Karen A.; Allison, David B.

    2015-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106

  19. Challenges and approaches to statistical design and inference in high-dimensional investigations.

    Science.gov (United States)

    Gadbury, Gary L; Garrett, Karen A; Allison, David B

    2009-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.

  20. The CEO performance effect : Statistical issues and a complex fit perspective

    NARCIS (Netherlands)

    Blettner, D.P.; Chaddad, F.R.; Bettis, R.

    2012-01-01

    How CEOs affect strategy and performance is important to strategic management research. We show that sophisticated statistical analysis alone is problematic for establishing the magnitude and causes of CEO impact on performance. We discuss three problem areas that substantially distort the…

  1. On the use of recognition in inferential decision making: An overview of the debate

    Directory of Open Access Journals (Sweden)

    Rudiger F. Pohl

    2011-07-01

    I describe and discuss the sometimes heated controversy surrounding the recognition heuristic (RH) as a model of inferential decision making. After briefly recapitulating the history of the RH up to its current version, I critically evaluate several specific assumptions and predictions of the RH and its surrounding framework: recognition as a memory-based process; the RH as a cognitive process model; proper conditions for testing the RH; measures of RH use; reasons for not using the RH; the RH as a non-compensatory strategy; evidence for a less-is-more effect (LIME); and the RH as part of the toolbox. The collection of these controversial issues may help to better understand the debate, to further sharpen RH theory, and to develop ideas for future research.

  2. Biostatistics primer: part I.

    Science.gov (United States)

    Overholser, Brian R; Sowinski, Kevin M

    2007-12-01

    Biostatistics is the application of statistics to biologic data. The field of statistics can be broken down into 2 fundamental parts: descriptive and inferential. Descriptive statistics are commonly used to categorize, display, and summarize data. Inferential statistics can be used to make predictions based on a sample obtained from a population or some large body of information. It is these inferences that are used to test specific research hypotheses. This 2-part review will outline important features of descriptive and inferential statistics as they apply to commonly conducted research studies in the biomedical literature. Part 1 in this issue will discuss fundamental topics of statistics and data analysis. Additionally, some of the most commonly used statistical tests found in the biomedical literature will be reviewed in Part 2 in the February 2008 issue.
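
    The descriptive/inferential distinction drawn in this primer can be made concrete in a few lines. The sketch below uses hypothetical data (not from the review): descriptive statistics summarize the samples, and an inferential test generalizes from the samples to the populations they came from.

```python
import numpy as np
from scipy import stats

# Hypothetical samples from two treatment groups.
group_a = np.array([4.1, 5.0, 4.6, 4.8, 5.2, 4.4, 4.9, 5.1])
group_b = np.array([5.6, 6.1, 5.8, 5.4, 6.3, 5.9, 6.0, 5.7])

# Descriptive statistics: categorize, display, and summarize the data.
desc = {"mean_a": group_a.mean(), "sd_a": group_a.std(ddof=1),
        "mean_b": group_b.mean(), "sd_b": group_b.std(ddof=1)}

# Inferential statistics: use the samples to test a hypothesis about the
# populations (here, a two-sample t test of equal population means).
t_stat, p_value = stats.ttest_ind(group_a, group_b)
```

    The descriptive numbers say what was observed; only the inferential step licenses a conclusion about the larger populations, which is what research hypotheses are tested against.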

  3. Potential errors and misuse of statistics in studies on leakage in endodontics.

    Science.gov (United States)

    Lucena, C; Lopez, J M; Pulgar, R; Abalos, C; Valderrama, M J

    2013-04-01

    To assess the quality of the statistical methodology used in studies of leakage in Endodontics, and to compare the results found using appropriate versus inappropriate inferential statistical methods. The search strategy used the descriptors 'root filling', 'microleakage', 'dye penetration', 'dye leakage', 'polymicrobial leakage' and 'fluid filtration' for the time interval 2001-2010 in journals within the categories 'Dentistry, Oral Surgery and Medicine' and 'Materials Science, Biomaterials' of the Journal Citation Report. All retrieved articles were reviewed to find potential pitfalls in statistical methodology that may be encountered during study design, data management or data analysis. The database included 209 papers. In all the studies reviewed, the statistical methods used were appropriate for the category attributed to the outcome variable, but in 41% of the cases the chi-square test or parametric methods were subsequently selected inappropriately. In 2% of the papers, no statistical test was used. In 99% of cases, a statistically 'significant' or 'not significant' effect was reported as a main finding, whilst only 1% also presented an estimation of the magnitude of the effect. When the appropriate statistical methods were applied in the studies with originally inappropriate data analysis, the conclusions changed in 19% of the cases. Statistical deficiencies in leakage studies may affect their results and interpretation and might be one of the reasons for the poor agreement amongst the reported findings. Therefore, more effort should be made to standardize statistical methodology. © 2012 International Endodontic Journal.
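
    One simple guard against the inappropriate parametric-test selection the review describes is to check distributional assumptions before choosing a test. The sketch below is an illustrative workflow, not taken from the review; the data are simulated stand-ins for skewed leakage scores.

```python
import numpy as np
from scipy import stats

def compare_groups(x, y, alpha=0.05):
    """Pick a two-sample test after a Shapiro-Wilk normality check on each
    group; fall back to a non-parametric test when normality is doubtful."""
    _, p_x = stats.shapiro(x)
    _, p_y = stats.shapiro(y)
    if p_x > alpha and p_y > alpha:
        _, p = stats.ttest_ind(x, y)
        return "t-test", p
    _, p = stats.mannwhitneyu(x, y, alternative="two-sided")
    return "Mann-Whitney U", p

rng = np.random.default_rng(42)
leak_a = rng.exponential(1.0, 40)   # skewed, illustrative leakage scores
leak_b = rng.exponential(2.0, 40)
chosen, p = compare_groups(leak_a, leak_b)
```

    Formal normality pre-testing is itself debated; the sketch only makes the parametric versus non-parametric choice explicit rather than prescribing a workflow.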

  4. High performance statistical computing with parallel R: applications to biology and climate modelling

    International Nuclear Information System (INIS)

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity, comes a new problem - the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable high performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines

  5. Quality of statistical reporting in developmental disability journals.

    Science.gov (United States)

    Namasivayam, Aravind K; Yan, Tina; Wong, Wing Yiu Stephanie; van Lieshout, Pascal

    2015-12-01

    Null hypothesis significance testing (NHST) dominates quantitative data analysis, but its use is controversial and has been heavily criticized. The American Psychological Association has advocated the reporting of effect sizes (ES), confidence intervals (CIs), and statistical power analysis to complement NHST results to provide a more comprehensive understanding of research findings. The aim of this paper is to carry out a sample survey of statistical reporting practices in two journals with the highest h5-index scores in the areas of developmental disability and rehabilitation. Using a checklist that includes critical recommendations by the American Psychological Association, we examined 100 randomly selected articles out of 456 articles reporting inferential statistics in the year 2013 in the Journal of Autism and Developmental Disorders (JADD) and Research in Developmental Disabilities (RDD). The results showed that for both journals, ES were reported only about half the time (JADD 59.3%; RDD 55.87%). These findings are similar to psychology journals, but are in stark contrast to ES reporting in educational journals (73%). Furthermore, a priori power and sample size determination (JADD 10%; RDD 6%), along with reporting and interpreting precision measures (CI: JADD 13.33%; RDD 16.67%), were the least reported metrics in these journals, but not dissimilar to journals in other disciplines. To advance the science in developmental disability and rehabilitation and to bridge the research-to-practice divide, reforms in statistical reporting, such as providing supplemental measures to NHST, are clearly needed.
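
    Supplementing NHST with an effect size and a confidence interval, as the survey advocates, takes only a few lines. The sketch below uses hypothetical data and a large-sample normal approximation for the CI (an illustrative choice; exact noncentral-t intervals also exist):

```python
import math
import numpy as np
from scipy import stats

def cohens_d_with_ci(x, y, conf=0.95):
    """Cohen's d for two independent samples, with a large-sample
    normal-approximation confidence interval (Hedges & Olkin form)."""
    nx, ny = len(x), len(y)
    sp = math.sqrt(((nx - 1) * np.var(x, ddof=1) +
                    (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / sp          # standardized mean difference
    # approximate standard error of d
    se = math.sqrt((nx + ny) / (nx * ny) + d ** 2 / (2 * (nx + ny)))
    z = stats.norm.ppf(0.5 + conf / 2)
    return d, (d - z * se, d + z * se)

scores_a = [23, 25, 28, 30, 26, 27, 24, 29]   # hypothetical outcome scores
scores_b = [20, 22, 21, 25, 23, 19, 24, 22]
d, (lo, hi) = cohens_d_with_ci(scores_a, scores_b)
```

    Reporting d with its interval conveys both the magnitude of the effect and the precision of the estimate, which a bare p-value does not.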

  6. To improve the quality of the statistical analysis of papers published in the Journal of the Korean Society for Therapeutic Radiology and Oncology

    International Nuclear Information System (INIS)

    Park, Hee Chul; Choi, Doo Ho; Ahn, Song Vogue

    2008-01-01

    To improve the quality of the statistical analyses published in the Journal of the Korean Society for Therapeutic Radiology and Oncology (JKOSTRO) by evaluating commonly encountered errors. Materials and Methods: Papers published in the JKOSTRO from January 2006 to December 2007 were reviewed for methodological and statistical validity using a modified version of Ahn's checklist. A statistician reviewed individual papers and evaluated the list items in the checklist for each paper. To avoid potential assessment errors by a statistician who lacks expertise in the field of radiation oncology, the editorial board of the JKOSTRO reviewed each checklist for individual articles. A frequency analysis of the list items was performed using SAS (version 9.0, SAS Institute, NC, USA) software. Results: A total of 73 papers, including 5 case reports and 68 original articles, were reviewed. Inferential statistics were used in 46 papers. The most commonly adopted statistical methodology was survival analysis (58.7%). Only 19% of papers were free of statistical errors. Errors of omission were encountered in 34 (50.0%) papers. Errors of commission were encountered in 35 (51.5%) papers. Twenty-one papers (30.9%) had both errors of omission and commission. Conclusion: A variety of statistical errors were encountered in papers published in the JKOSTRO. The current study suggests that a more thorough review of the statistical analysis is needed for manuscripts submitted to the JKOSTRO.

  7. Designing Better Graphs by Including Distributional Information and Integrating Words, Numbers, and Images

    Science.gov (United States)

    Lane, David M.; Sandor, Aniko

    2009-01-01

    Statistical graphs are commonly used in scientific publications. Unfortunately, graphs in psychology journals rarely portray distributional information beyond central tendency, and few graphs portray inferential statistics. Moreover, those that do portray inferential information generally do not portray it in a way that is useful for interpreting…

  8. Performance in College Chemistry: a Statistical Comparison Using Gender and Jungian Personality Type

    Science.gov (United States)

    Greene, Susan V.; Wheeler, Henry R.; Riley, Wayne D.

    This study sorted college introductory chemistry students by gender and Jungian personality type. It recognized differences from the general population distribution and statistically compared the students' grades with their Jungian personality types. Data from 577 female students indicated that ESFP (extroverted, sensory, feeling, perceiving) and ENFP (extroverted, intuitive, feeling, perceiving) profiles performed poorly at statistically significant levels when compared with the distribution of females enrolled in introductory chemistry. The comparable analysis using data from 422 male students indicated that the poorly performing male profiles were ISTP (introverted, sensory, thinking, perceiving) and ESTP (extroverted, sensory, thinking, perceiving). ESTJ (extroverted, sensory, thinking, judging) female students withdrew from the course at a statistically significant level. For both genders, INTJ (introverted, intuitive, thinking, judging) students were the best performers. By examining the documented characteristics of Jungian profiles that correspond with poorly performing students in chemistry, one may more effectively assist the learning process and the retention of these individuals in the fields of natural science, engineering, and technology.

  9. DESIGNING CURRICULUM, CAPACITY OF INNOVATION, AND PERFORMANCES: A STUDY ON THE PESANTRENS IN NORTH SUMATRA

    Directory of Open Access Journals (Sweden)

    Jafar Syahbuddin Ritonga

    2016-06-01

    Desain Kurikulum, Kemampuan Inovasi dan Performa: Studi Pesantren di Sumatera Utara [Designing Curriculum, Capacity of Innovation, and Performance: A Study on the Pesantrens in North Sumatra]. Many articles and studies on pesantrens have been published. Unfortunately, almost all of them discuss the pesantren exclusively from the perspective of Islam. This paper tries to offer a new perspective on the pesantren: as a business entity. This research paper aims to examine how pesantrens' performances are influenced by curriculum design at different levels of capacity for innovation. The data are analyzed using descriptive and inferential statistics, namely frequencies, multiple regression and hierarchical regression. The paper analyzes the influence of curriculum design on the performance of pesantrens and the important effects of capacity for innovation on it. It reveals that the influence of designing a modern Islamic curriculum on a pesantren's performance is expected to vary according to the pesantren's level of capacity for innovation.

  10. WASP (Write a Scientific Paper) using Excel 9: Analysis of variance.

    Science.gov (United States)

    Grech, Victor

    2018-06-01

    Analysis of variance (ANOVA) may be required by researchers as an inferential statistical test when more than two means require comparison. This paper explains how to perform ANOVA in Microsoft Excel. Copyright © 2018 Elsevier B.V. All rights reserved.
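
    The same one-way ANOVA that the paper performs in Excel can also be scripted. The sketch below uses scipy as an equivalent, freely scriptable route, with illustrative data not taken from the paper:

```python
from scipy import stats

# Hypothetical measurements of one outcome under three conditions.
group1 = [85, 86, 88, 75, 78, 94, 98, 79]
group2 = [91, 92, 93, 85, 87, 84, 82, 88]
group3 = [79, 78, 88, 94, 92, 85, 83, 81]

# One-way ANOVA tests H0: all three population means are equal,
# avoiding the inflated error rate of running pairwise t tests.
f_stat, p_value = stats.f_oneway(group1, group2, group3)
```

    A significant F statistic indicates that at least one mean differs; identifying which pairs differ then requires a post-hoc comparison.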

  11. The Impact of Home Environment Factors on Academic Performance of Senior Secondary School Students in Garki Area District, Abuja - Nigeria

    Directory of Open Access Journals (Sweden)

    L. T. Dzever

    2015-12-01

    The study examined the impact of home environment factors on the academic performance of public secondary school students in Garki Area District, Abuja, Nigeria. The stratified sampling technique was used to select 300 students from six public schools, while the simple random sampling technique was used to administer the questionnaire. The study utilized a descriptive survey research design. Data on students' academic performance were obtained from students' scores in four selected school subjects. The data obtained were analyzed using descriptive and inferential statistical techniques: Pearson product-moment correlation and multiple regression analysis (ANOVA). The results revealed a positive and significant relationship between permissive parenting style and academic performance (p < 0.05). The study also identified income, educational background and occupational level, as well as permissive parenting style, as the main predictive variables influencing students' academic performance.

  12. Statistical Literacy: Simulations with Dolphins

    Science.gov (United States)

    Strayer, Jeremy; Matuszewski, Amber

    2016-01-01

    In this article, Strayer and Matuszewski present a six-phase strategy that teachers can use to help students develop a conceptual understanding of inferential hypothesis testing through simulation. As Strayer and Matuszewski discuss the strategy, they describe each phase in general, explain how they implemented the phase while teaching their…

  13. Project Leadership and Quality Performance of Construction Projects

    Directory of Open Access Journals (Sweden)

    SPG Buba

    2017-05-01

    Full Text Available Background: The construction industry in Nigeria is characterized by poor quality of construction products as a result of the inherent corruption in the country. Lack of purposeful leadership and inappropriate choice of leadership styles in the industry have been blamed for project failure; abandoned and failed projects, most prevalent in the public sector, litter every corner of the country. Objectives: The objective of this paper is to assess the impact of leadership styles on the quality performance criteria of public projects in Nigeria. Methodology: A total of 43 questionnaires were distributed to 3 key groups of respondents (quantity surveyors, builders and architects) who act as project managers in Nigeria. Descriptive and inferential statistics were used to analyse the data, using the Statistical Package for the Social Sciences (SPSS). A Likert scale was used to measure the independent variables (leadership styles: facilitative, coaching, delegating and directing) and the level of achievement of projects on the dependent variables (the quality and function performance criteria: achieving the highest aesthetic quality, and a functional building that fits its purpose). Findings: The study revealed that directing is the leadership style most used by project managers in Nigeria. It is also the style with the greatest impact on the quality performance indicators, having the strongest relative influence on achieving the highest aesthetic quality and a functional building that fits its purpose. Conclusion/Recommendation/Way forward: Understanding the relationship between the directing leadership style and these performance criteria will be beneficial to the Nigerian construction environment.

  14. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
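    The study's key predictor, the mean magnitude of the AR poles, comes from fitting an autoregressive model to each SEMG window. A minimal numpy sketch of that computation follows; the 5th-order choice matches the abstract, but the least-squares fitting approach and the synthetic stand-in signal are assumptions, not details from the study:

    ```python
    import numpy as np

    def ar_mean_pole_magnitude(x, order=5):
        """Fit an AR(order) model by least squares and return the mean
        magnitude of its poles (roots of the characteristic polynomial)."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        N = len(x)
        # Regression x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]
        X = np.column_stack([x[order - j : N - j] for j in range(1, order + 1)])
        coefs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
        # Poles are the roots of z^p - a1*z^(p-1) - ... - ap
        poles = np.roots(np.concatenate(([1.0], -coefs)))
        return np.abs(poles).mean()

    # Synthetic stand-in for an SEMG window: a stable AR(2) process
    rng = np.random.default_rng(0)
    e = rng.normal(0, 1, 2000)
    x = np.zeros(2000)
    for t in range(2, 2000):
        x[t] = 0.5 * x[t - 1] - 0.25 * x[t - 2] + e[t]

    m = ar_mean_pole_magnitude(x, order=5)
    ```

    In the study, this scalar per repetition is then regressed against the repetition count to predict Rmax; here the sketch stops at the pole-magnitude feature itself.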

  15. Dry deposition of reactive nitrogen to European ecosystems: a comparison of inferential models across the NitroEurope network

    Directory of Open Access Journals (Sweden)

    C. R. Flechard

    2011-03-01

    Full Text Available Inferential models have long been used to determine pollutant dry deposition to ecosystems from measurements of air concentrations, as well as within national and regional atmospheric chemistry and transport models, and yet these models still suffer from very large uncertainties. An inferential network of 55 sites throughout Europe for atmospheric reactive nitrogen (Nr) was established in 2007, providing ambient concentrations of gaseous NH3, NO2, HNO3 and HONO and of aerosol NH4+ and NO3− as part of the NitroEurope Integrated Project.

    Network results providing modelled inorganic Nr dry deposition to the 55 monitoring sites are presented, using four existing dry deposition routines, revealing inter-model differences and providing ensemble average deposition estimates. Dry deposition is generally largest over forests in regions with large ambient NH3 concentrations, exceeding 30–40 kg N ha−1 yr−1 over parts of the Netherlands and Belgium, while some remote forests in Scandinavia receive less than 2 kg N ha−1 yr−1. Turbulent Nr deposition to short vegetation ecosystems is generally smaller than to forests due to reduced turbulent exchange, but also because NH3 inputs to fertilised, agricultural systems are limited by the presence of a substantial NH3 source in the vegetation, leading to periods of emission as well as deposition.

    Differences between models reach a factor 2–3 and are often greater than differences between monitoring sites. For soluble Nr gases such as NH3 and HNO3, the non-stomatal pathways are responsible for most of the annual uptake over many surfaces, especially the non-agricultural land uses, but parameterisations of the sink strength vary considerably among models. For aerosol NH4

  16. Statistical analysis in MSW collection performance assessment.

    Science.gov (United States)

    Teixeira, Carlos Afonso; Avelino, Catarina; Ferreira, Fátima; Bentes, Isabel

    2014-09-01

    The increase in Municipal Solid Waste (MSW) generated over the last years forces waste managers to pursue more effective collection schemes that are technically viable, environmentally effective and economically sustainable. The assessment of MSW services using performance indicators plays a crucial role in improving service quality. In this work, we focus on the relevance of regular system monitoring as a service assessment tool. In particular, we select and test a core set of MSW collection performance indicators (effective collection distance, effective collection time and effective fuel consumption) that highlights collection system strengths and weaknesses and supports pro-active management decision-making and strategic planning. A statistical analysis was conducted with data collected in the mixed collection system of Oporto Municipality, Portugal, during one year, one week per month. This analysis provides an operational assessment of collection circuits and supports effective short-term municipal collection strategies regarding, e.g., collection frequency, timetables and type of containers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Can We Use Polya’s Method to Improve Students’ Performance in the Statistics Classes?

    Directory of Open Access Journals (Sweden)

    Indika Wickramasinghe

    2015-01-01

    Full Text Available In this study, Polya’s problem-solving method was introduced in a statistics class in an effort to enhance students’ performance. The method was applied in one of two introductory-level statistics classes taught by the same instructor, and a comparison was made between the performances of the two classes. The results indicate a significant improvement in the performance of the students in the class in which Polya’s method was introduced.

  18. Statistical modelling of networked human-automation performance using working memory capacity.

    Science.gov (United States)

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
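    Two of the three modelling approaches the study compares, classical linear regression and Gaussian-process regression, can be sketched side by side. The data below are invented stand-ins for WM-capacity scores and supervisory performance, and the RBF kernel with its hyperparameters is an assumption for illustration, not the authors' configuration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical data: working-memory capacity score vs. task performance
    wm = np.linspace(0, 10, 40)
    perf = 0.6 * wm + np.sin(wm) + rng.normal(0, 0.3, wm.size)

    # 1) Classical linear regression (closed-form least squares)
    A = np.column_stack([wm, np.ones_like(wm)])
    slope, intercept = np.linalg.lstsq(A, perf, rcond=None)[0]

    # 2) Gaussian-process regression with a squared-exponential (RBF) kernel
    def rbf(a, b, length=1.5):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

    K = rbf(wm, wm) + 0.09 * np.eye(wm.size)   # add noise variance 0.3**2
    xs = np.linspace(0, 10, 100)
    gp_mean = rbf(xs, wm) @ np.linalg.solve(K, perf)   # posterior mean
    ```

    The linear model extrapolates a single trend; the GP posterior mean can also track the nonlinear component of the relationship, which mirrors the study's point about robust prediction beyond strictly linear effects.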

  19. How to analyze germination of species with empty seeds using contemporary statistical methods?

    Directory of Open Access Journals (Sweden)

    Denise Garcia de Santana

    2018-02-01

    Full Text Available ABSTRACT Statistical analysis is considered an important tool for scientific studies, including those on seeds. However, seed scientists and statisticians often disagree on the nature of the variables addressed in germination experiments. Statisticians consider the number of germinated seeds to be a binomially distributed variable, whereas seed scientists convert it into a percentage and often analyze it as a normally distributed variable. The requirement of normality restricts the models of analysis of variance that can be used, and lack of fit pushes researchers toward nonparametric tests, which are known for their inferential problems. Generalized Linear Models (GLMs) can provide a better fit to germination variables for any species, including Lychnophora ericoides Mart., because they allow a wider range of probability distributions with fewer requirements. Here we suggest the use of relative germination, in addition to absolute germination, for species with seed development problems, such as L. ericoides and others from the campos rupestres. This paper introduces the most current statistical advancements and increases the possibilities for their application in seed science research.
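    The GLM the authors recommend treats germinated counts as binomial rather than converting them to percentages. A minimal sketch of a binomial GLM with a logit link, fitted by iteratively reweighted least squares, is shown below; the germination counts and the moisture covariate are invented for illustration, not data from the paper:

    ```python
    import numpy as np

    # Hypothetical germination counts: n seeds sown, y germinated, at
    # increasing substrate-moisture levels (illustrative numbers only)
    moisture = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    n = np.array([50, 50, 50, 50, 50])
    y = np.array([5, 12, 24, 35, 44])

    # Binomial GLM with logit link, fitted by IRLS (Fisher scoring)
    X = np.column_stack([np.ones_like(moisture), moisture])
    beta = np.zeros(2)
    for _ in range(25):
        eta = X @ beta
        p = 1 / (1 + np.exp(-eta))
        W = n * p * (1 - p)                    # IRLS weights
        z = eta + (y - n * p) / W              # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

    p_hat = 1 / (1 + np.exp(-(X @ beta)))      # fitted germination probabilities
    ```

    No percentage transformation or normality assumption is needed: the counts keep their binomial variance structure, which is exactly the advantage the abstract describes.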

  20. Business Statistics: A Comparison of Student Performance in Three Learning Modes

    Science.gov (United States)

    Simmons, Gerald R.

    2014-01-01

    The purpose of this study was to compare the performance of three teaching modes and age groups of business statistics sections in terms of course exam scores. The research questions were formulated to determine the performance of the students within each teaching mode, to compare each mode in terms of exam scores, and to compare exam scores by…

  1. A comparison of linear and nonlinear statistical techniques in performance attribution.

    Science.gov (United States)

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and on three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes have been developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  2. PENGARUH PENGUASAAN KONSEP TEOREMA PYTHAGORAS TERHADAP KEMAMPUAN MENYELESAIKAN SOAL-SOAL BANGUN RUANG SISI DATAR PADA SISWA KELAS VIII MTS NEGERI BALANG-BALANG

    Directory of Open Access Journals (Sweden)

    Syahrida Zaerani

    2017-12-01

    Abstract: This study aims to determine the impact of the mastery of Pythagoras' Theorem concepts on the ability of grade VIII students of MTs Negeri Balang-Balang to solve polyhedron problems. This is a quantitative study with an ex post facto design. The statistical population consisted of all 204 grade VIII students in MTs Negeri Balang-Balang, from which a sample of 102 respondents was taken by simple random sampling. The research instrument was a test on Pythagorean and polyhedron problems. Descriptive and inferential statistical analyses were performed. The descriptive analysis shows that both the mastery of Pythagoras' Theorem concepts and the ability to solve polyhedron problems fall within the 'high' category. The inferential analysis, using simple linear regression, shows that there is an impact of the mastery of Pythagoras' Theorem concepts on the polyhedron problem-solving ability of grade VIII students of MTs Negeri Balang-Balang.

  3. [A study on breakfast and school performance in a group of adolescents].

    Science.gov (United States)

    Herrero Lozano, R; Fillat Ballesteros, J C

    2006-01-01

    To know the relationship between breakfast, from a qualitative perspective, and school performance. The study was performed in 141 students (70 males and 71 females) aged 12-13 years, in the 1st grade of Mandatory Secondary Education (ESO) at an institute in Saragossa, by means of recall of the previous day's breakfast. Breakfast quality was assessed according to the criteria of the Kid study: GOOD QUALITY: contains at least one food from each of the dairy, cereals and fruit groups. IMPROVABLE QUALITY: lacks one of the groups. INSUFFICIENT QUALITY: lacks two groups. POOR QUALITY: does not have breakfast. We considered that quality was improved only when a mid-morning snack with a food different from those taken at breakfast was added. The average mark at the end of the school year was the criterion used to assess school performance. Statistical analysis of the data gathered for the present study was done with SPSS software, comprising descriptive and inferential statistics. For the analysis of the global significance of the differences, the analysis of variance method was applied, followed by post hoc tests with Bonferroni's and Tukey's methods to detect the specific groups explaining global significance. The average mark increases systematically as breakfast quality increases, from an average score of 5.63 in the group with a poor quality breakfast to 7.73 in the group with a good quality breakfast. An analysis of variance was performed to study the statistical significance of the mean differences between the groups. The outcomes yield significant global differences between groups (p value = 0.001), i.e., the average mark varies significantly according to breakfast quality.
When the pooled quality of breakfast and mid-morning snack is analyzed, the average mark increases systematically as breakfast-snack quality increases, from an average mark of 5.77 in the group with poor or insufficient quality up to 7.61 in the group with

  4. Statistical Control Charts: Performances of Short Term Stock Trading in Croatia

    Directory of Open Access Journals (Sweden)

    Dumičić Ksenija

    2015-03-01

    Full Text Available Background: The stock exchange, as a regulated financial market, reflects the economic development level of modern economies. The stock market indicates the mood of investors regarding the development of a country and is an important ingredient for growth. Objectives: This paper aims to introduce an additional statistical tool to support decision-making in stock trading, and it investigates the use of statistical process control (SPC) methods in the stock trading process. Methods/Approach: The individual (I), exponentially weighted moving average (EWMA) and cumulative sum (CUSUM) control charts were used to generate trade signals. The open and average prices of CROBEX10 index stocks on the Zagreb Stock Exchange were used in the analysis, and the capability of statistical control charts for short-run stock trading was assessed. Results: The control chart analysis produced too many signals to buy or sell stocks, most of which are considered false alarms, so statistical control charts proved not very useful for stock trading or portfolio analysis. Conclusions: The presence of non-normality and autocorrelation has a great impact on the performance of statistical control charts. It is assumed that if these two problems were solved, the use of statistical control charts in portfolio analysis could be greatly improved.
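    Of the three charts the authors test, the EWMA chart is the easiest to sketch. Below is a minimal numpy implementation applied to a synthetic price series with an injected level shift; the smoothing constant, control-limit width and phase-I calibration are textbook defaults, not the paper's settings, and the data are not CROBEX10 prices:

    ```python
    import numpy as np

    def ewma_chart(x, mu, sigma, lam=0.2, L=3.0):
        """EWMA control chart: smooth the series and flag points outside
        the time-varying control limits as trade signals."""
        x = np.asarray(x, dtype=float)
        z = np.empty(len(x))
        prev = mu
        for t, xt in enumerate(x):
            prev = lam * xt + (1 - lam) * prev   # EWMA recursion
            z[t] = prev
        i = np.arange(1, len(x) + 1)
        half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        ucl, lcl = mu + half_width, mu - half_width
        return z, lcl, ucl, (z > ucl) | (z < lcl)

    rng = np.random.default_rng(42)
    calm = rng.normal(0.0, 1.0, 50)        # phase I: in-control observations
    shifted = rng.normal(3.0, 1.0, 50)     # phase II: sustained 3-sigma shift
    series = np.concatenate([calm, shifted])

    # Calibrate limits on the in-control segment, then monitor everything
    z, lcl, ucl, signals = ewma_chart(series, calm.mean(), calm.std(ddof=1))
    ```

    On autocorrelated, non-normal stock prices the same chart fires far more often, which is exactly the false-alarm problem the paper reports.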

  5. Statistical analyses of the performance of Macedonian investment and pension funds

    Directory of Open Access Journals (Sweden)

    Petar Taleski

    2015-10-01

    Full Text Available The foundation of post-modern portfolio theory is creating a portfolio based on a desired target return. This specifically applies to the performance of investment and pension funds that must provide a rate of return meeting payment requirements. A desired target return is the goal of an investment or pension fund; it is the primary benchmark used to measure performance and to dynamically monitor and evaluate the risk-return ratio of investment funds. The analysis in this paper is based on the monthly returns of Macedonian investment and pension funds (June 2011 - June 2014). It utilizes basic but highly informative statistical characteristic moments such as skewness and kurtosis, the Jarque-Bera test, and Chebyshev's inequality. The objective of this study is to perform a thorough analysis, utilizing the above-mentioned and other statistical techniques (Sharpe, Sortino, omega, upside potential, Calmar, Sterling), to draw relevant conclusions regarding the risks and characteristic moments of Macedonian investment and pension funds. Pension funds are the second largest segment of the financial system and have great potential for further growth due to constant inflows from pension insurance. The importance of investment funds for the financial system in the Republic of Macedonia is still small, although open-end investment funds have been the fastest growing segment of the financial system. The statistical analysis has shown that pension funds delivered a significantly positive volatility-adjusted risk premium in the analyzed period, more so than investment funds.
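    The characteristic moments and risk-adjusted ratios the study relies on are all short formulas. A minimal numpy version is sketched below; the monthly returns are invented, not the Macedonian fund data, and the risk-free rate is assumed to be zero:

    ```python
    import numpy as np

    def moments_and_ratios(r, rf=0.0):
        """Skewness, kurtosis, Jarque-Bera statistic, Sharpe and Sortino
        ratios for a vector of periodic returns."""
        r = np.asarray(r, dtype=float)
        n = r.size
        m, s = r.mean(), r.std(ddof=0)
        skew = ((r - m) ** 3).mean() / s**3
        kurt = ((r - m) ** 4).mean() / s**4          # Pearson kurtosis (normal = 3)
        jb = n / 6 * (skew**2 + (kurt - 3) ** 2 / 4) # Jarque-Bera statistic
        excess = r - rf
        sharpe = excess.mean() / excess.std(ddof=1)
        downside = np.minimum(excess, 0.0)           # losses only, for Sortino
        sortino = excess.mean() / np.sqrt((downside**2).mean())
        return skew, kurt, jb, sharpe, sortino

    # Hypothetical monthly returns (illustrative numbers only)
    r = np.array([0.012, -0.004, 0.021, 0.008, -0.015, 0.010, 0.003, -0.002,
                  0.017, 0.005, -0.009, 0.011])
    skew, kurt, jb, sharpe, sortino = moments_and_ratios(r)
    ```

    A large Jarque-Bera statistic signals departure from normality, which is why the study pairs it with downside-focused ratios such as Sortino rather than relying on Sharpe alone.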

  6. THE INFLUENCE OF ORGANIZATIONAL CLIMATE, TRANSFORMATIONAL LEADERSHIP, AND WORK MOTIVATION ON TEACHER JOB PERFORMANCE

    Directory of Open Access Journals (Sweden)

    K. Kartini

    2017-07-01

    Full Text Available This research investigated the influence of organizational climate, transformational leadership, and work motivation on teacher job performance at Pondok Modern Tazakka, Batang, Central Java. The research used a quantitative approach with the survey method; the sample comprised 55 teachers selected randomly. The data were analyzed using descriptive statistics and inferential statistics (path analysis). (1) Organizational climate has a positive direct effect on teacher performance, with path coefficient py1 = 0.257 and t-count 2.963 > t-table 1.684; (2) transformational leadership has a positive direct effect on teacher performance, with path coefficient py2 = 0.489 and t-count 5.164 > t-table 1.684; (3) work motivation has a positive direct effect on teacher performance, with path coefficient py3 = 0.261 and t-count 2.42 > t-table 1.684; (4) organizational climate has a positive direct effect on work motivation, with path coefficient p31 = 0.391 and t-count 3.990 > t-table 1.684; and (5) transformational leadership has a positive direct effect on work motivation, with path coefficient p32 = 0.526 and t-count 5.376 > t-table 1.684. The conclusion is that organizational climate, transformational leadership, and work motivation have a direct effect on teacher job performance, and that organizational climate and transformational leadership also have a direct effect on teacher work motivation. Therefore, to improve teacher job performance, organizational climate, transformational leadership, and work motivation must all be considered.

  7. Power of mental health nursing research: a statistical analysis of studies in the International Journal of Mental Health Nursing.

    Science.gov (United States)

    Gaskin, Cadeyrn J; Happell, Brenda

    2013-02-01

    Having sufficient power to detect effect sizes of an expected magnitude is a core consideration when designing studies in which inferential statistics will be used. The main aim of this study was to investigate the statistical power in studies published in the International Journal of Mental Health Nursing. From volumes 19 (2010) and 20 (2011) of the journal, studies were analysed for their power to detect small, medium, and large effect sizes, according to Cohen's guidelines. The power of the 23 studies included in this review to detect small, medium, and large effects was 0.34, 0.79, and 0.94, respectively. In 90% of papers, no adjustments for experiment-wise error were reported. With a median of nine inferential tests per paper, the mean experiment-wise error rate was 0.51. A priori power analyses were only reported in 17% of studies. Although effect sizes for correlations and regressions were routinely reported, effect sizes for other tests (χ²-tests, t-tests, ANOVA/MANOVA) were largely absent from the papers. All types of effect sizes were infrequently interpreted. Researchers are strongly encouraged to conduct power analyses when designing studies, and to avoid scattergun approaches to data analysis (i.e. undertaking large numbers of tests in the hope of finding 'significant' results). Because reviewing effect sizes is essential for determining the clinical significance of study findings, researchers would better serve the field of mental health nursing if they reported and interpreted effect sizes. © 2012 The Authors. International Journal of Mental Health Nursing © 2012 Australian College of Mental Health Nurses Inc.
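    Power figures like the review's 0.34/0.79/0.94 for small, medium and large effects can be approximated by simulation. The sketch below Monte-Carlo-estimates two-sample t-test power at Cohen's benchmark effect sizes; the group size of 50 and the normal-approximation critical value are illustrative assumptions, not the review's parameters:

    ```python
    import numpy as np

    def simulated_power(d, n_per_group, alpha=0.05, reps=4000, seed=0):
        """Monte Carlo estimate of two-sample t-test power for effect size d."""
        rng = np.random.default_rng(seed)
        crit = 1.96  # two-sided critical value, normal approximation
        hits = 0
        for _ in range(reps):
            a = rng.normal(0.0, 1.0, n_per_group)
            b = rng.normal(d, 1.0, n_per_group)       # true difference = d SDs
            sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)  # pooled SD
            t = (b.mean() - a.mean()) / (sp * np.sqrt(2 / n_per_group))
            hits += abs(t) > crit
        return hits / reps

    # Cohen's guidelines: d = 0.2 (small), 0.5 (medium), 0.8 (large)
    powers = {d: simulated_power(d, n_per_group=50) for d in (0.2, 0.5, 0.8)}
    ```

    The simulation makes the review's central point concrete: with realistic sample sizes, power to detect small effects is far below the conventional 0.80 target even when power for large effects is comfortable.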

  8. Inferential comprehension of 3-6 year olds within the context of story grammar: a scoping review.

    Science.gov (United States)

    Filiatrault-Veilleux, Paméla; Bouchard, Caroline; Trudeau, Natacha; Desmarais, Chantal

    2015-01-01

    The ability to make inferences plays a crucial role in reading comprehension and the educational success of school-aged children. However, it starts to unfold much earlier than school entry and literacy. Given that it is likely to be targeted in speech language therapy, it would be useful for clinicians to have access to information about a developmental sequence of inferential comprehension. Yet, at this time, there is no clear proposition of the way in which this ability develops in young children prior to school entry. To reduce the knowledge gap with regards to inferential comprehension in young children by conducting a scoping review of the literature. The two objectives of this research are: (1) to describe typically developing children's comprehension of causal inferences targeting elements of story grammar, with the goal of proposing milestones in the development of this ability; and (2) to highlight key elements of the methodology used to gather this information in a paediatric population. A total of 16 studies from six databases that met the inclusion criteria were qualitatively analysed in the context of a scoping review. This methodological approach was used to identify common themes and gaps in the knowledge base to achieve the intended objectives. Results permit the description of key elements in the development of six types of causal inference targeting elements of story grammar in children between 3 and 6 years old. Results also demonstrate the various methods used to assess this ability in young children and highlight particularly interesting procedures for use with this younger population. These findings point to the need for additional studies to understand this ability better and to develop strategies to stimulate an evidence-based developmental sequence in children from an early age. © 2015 The Authors. International Journal of Language & Communication Disorders published by John Wiley & Sons Ltd on behalf of Royal College of Speech and

  9. Postural changes and pain in the academic performance of elementary school students

    Directory of Open Access Journals (Sweden)

    Maria Homéria Leite de Morais Sampaio

    Full Text Available Abstract: Postural changes and pain in the spine of children and adolescents of school age are influenced by habitually incorrect sitting posture, misuse of furniture and the weight of the backpack. The aim of this study was to examine postural changes and pain in relation to the academic performance of elementary school students. It was a cross-sectional study with a descriptive and analytical approach. The subjects were 83 elementary students, aged 8 to 12 years, in Kindergarten and Elementary Education at Paulo Sarasate Municipal School, Ceará, and the study was performed from March to June 2008. The physical examination used an evaluation form based on Global Postural Reeducation (the Souchard method), which included the variables compromised anterior, posterior and superior shoulder muscle chains, and pain; academic performance was assessed with a semi-structured questionnaire covering behavior, attendance and performance. The data were stored in the Statistical Package for the Social Sciences (SPSS, version 18.0). The descriptive analysis used absolute and relative frequencies; the inferential analysis applied the Mann-Whitney test, to verify significant differences in changes between groups A and B at a significance level of 5%, and the F test, to compare postural changes and pain across the three grades. Results: the majority of the students presented postural changes, such as forward head, lifted shoulders and dorsal hyperkyphosis, as well as pain, which occurred predominantly in the anterior chain compared with the posterior and superior chains. These changes were statistically significant in both groups only in subjects of the fifth grade with satisfactory academic performance and behavior. It was concluded that there was no association between postural changes and school performance, although performance was influenced by pain.

  10. Performance in grade 12 mathematics and science predicts student nurses' performance in first year science modules at a university in the Western Cape.

    Science.gov (United States)

    Mthimunye, Katlego D T; Daniels, Felicity M

    2017-10-26

    The demand for highly qualified and skilled nurses is increasing in South Africa, as well as around the world. A background in science can be a significant advantage for students wishing to enrol in an undergraduate nursing qualification, because nursing as a profession is grounded in scientific evidence. The aim of this study was to investigate the predictive validity of grade 12 mathematics and science for the academic performance of first-year student nurses in science modules. A quantitative research method using a cross-sectional predictive design was employed. The participants were first-year Bachelor of Nursing students enrolled at a university in the Western Cape, South Africa. Descriptive and inferential statistics were performed using the IBM Statistical Package for the Social Sciences, version 24. Descriptive analysis of all variables was performed, together with Spearman's rank correlation test to describe the relationships among the study variables. Standard multiple linear regression analysis was performed to determine the predictive validity of grade 12 mathematics and science for performance in first-year science modules. The results showed that grade 12 physical science is not a significant predictor (p = 0.062) of performance in first-year science modules. The multiple linear regression revealed that grade 12 mathematics and life science grades explained 37.1% to 38.1% (R² = 0.381, adjusted R² = 0.371) of the variation in first-year science grade distributions. Based on these results it is evident that performance in the grade 12 mathematics (β = 2.997) and life science (β = 3.175) subjects is a significant predictor (p < 0.001) of performance in first-year science modules for student nurses at the university identified for this study.

  11. Nursing students' attitudes toward statistics: Effect of a biostatistics course and association with examination performance.

    Science.gov (United States)

    Kiekkas, Panagiotis; Panagiotarou, Aliki; Malja, Alvaro; Tahirai, Daniela; Zykai, Rountina; Bakalis, Nick; Stefanopoulos, Nikolaos

    2015-12-01

    Although statistical knowledge and skills are necessary for promoting evidence-based practice, health sciences students have expressed anxiety about statistics courses, which may hinder their learning of statistical concepts. To evaluate the effects of a biostatistics course on nursing students' attitudes toward statistics and to explore the association between these attitudes and their performance in the course examination. One-group quasi-experimental pre-test/post-test design. Undergraduate nursing students of the fifth or higher semester of studies, who attended a biostatistics course. Participants were asked to complete the pre-test and post-test forms of The Survey of Attitudes Toward Statistics (SATS)-36 scale at the beginning and end of the course respectively. Pre-test and post-test scale scores were compared, while correlations between post-test scores and participants' examination performance were estimated. Among 156 participants, post-test scores of the overall SATS-36 scale and of the Affect, Cognitive Competence, Interest and Effort components were significantly higher than pre-test ones, indicating that the course was followed by more positive attitudes toward statistics. Among 104 students who participated in the examination, higher post-test scores of the overall SATS-36 scale and of the Affect, Difficulty, Interest and Effort components were significantly but weakly correlated with higher examination performance. Students' attitudes toward statistics can be improved through appropriate biostatistics courses, while positive attitudes contribute to higher course achievements and possibly to improved statistical skills in later professional life. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Statistical hypothesis testing and common misinterpretations: Should we abandon p-value in forensic science applications?

    Science.gov (United States)

    Taroni, F; Biedermann, A; Bozza, S

    2016-02-01

    Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debate in the literature. More recently, controversial discussion was initiated by the editorial decision of a scientific journal [1] to refuse any paper submitted for publication that contains null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
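The p-value at the centre of this debate can be made concrete with a permutation test, which estimates the probability, under the null hypothesis of exchangeable groups, of a difference at least as large as the one observed. The data below are invented:

```python
import random
from statistics import mean

random.seed(7)
a = [random.gauss(0.0, 1.0) for _ in range(25)]   # invented "control" measurements
b = [random.gauss(0.8, 1.0) for _ in range(25)]   # invented "case" measurements
observed = abs(mean(b) - mean(a))

# Under the null hypothesis the group labels are exchangeable, so repeatedly
# reshuffle them and count differences at least as extreme as the observed one.
pooled = a + b
count = 0
n_perm = 2000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = abs(mean(pooled[25:]) - mean(pooled[:25]))
    if diff >= observed:
        count += 1
p_value = count / n_perm
print(f"observed difference = {observed:.2f}, permutation p = {p_value:.4f}")
```

The p-value is a statement about data under the null model, not the probability that the null hypothesis is true, which is precisely the misinterpretation the record discusses.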

  13. The Relationship Between Internal Motivation And Customer Satisfaction Of Posthaste Purchase

    Directory of Open Access Journals (Sweden)

    Farbodsouri

    2017-06-01

    Full Text Available The purpose of this study is performing an empirical study in the field of marketing in order to examine the relationship between internal motivation and customer satisfaction of posthaste purchase in stores that researcher made model have been used.In this model the relationship between internal motivation and customer satisfaction of posthaste purchase was investigated. To review research sample of 385 customers was selected as a stepwise clustering. Data gathered by standard questionnaire with 21 questions that its validity and reliability was confirmed and distributed among the statistical population. Data analysis was performed using descriptive statistics and inferential statistics. At the level of Descriptive statistics indices such as frequency and frequency percentage were used and in inferential statistics correlation methods structural equation modeling path analysis has been done using the spss and lisrel software. The results of the analysis showing the existence of a significant and positive impact of internal motivation on customer satisfaction of posthaste buying. In general the ability of the Tehran Hyperstar chain store to increase customers satisfaction of posthaste buying must pay special attention to the effective internal motivation of customers the impulsivity the pleasure of shopping fashion involvement personal identity p050.

  14. Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain

    Science.gov (United States)

    Yim, Kyoung Hoon; Han, Kyoung Ah; Park, Soo Young

    2010-01-01

    Background Statistical analysis is essential in regard to obtaining objective reliability for medical research. However, medical researchers do not have enough statistical knowledge to properly analyze their study data. To help understand and potentially alleviate this problem, we have analyzed the statistical methods and errors of articles published in the Korean Journal of Pain (KJP), with the intention to improve the statistical quality of the journal. Methods All the articles, except case reports and editorials, published from 2004 to 2008 in the KJP were reviewed. The types of applied statistical methods and errors in the articles were evaluated. Results One hundred and thirty-nine original articles were reviewed. Inferential statistics and descriptive statistics were used in 119 papers and 20 papers, respectively. Only 20.9% of the papers were free from statistical errors. The most commonly adopted statistical method was the t-test (21.0%) followed by the chi-square test (15.9%). Errors of omission were encountered 101 times in 70 papers. Among the errors of omission, "no statistics used even though statistical methods were required" was the most common (40.6%). The errors of commission were encountered 165 times in 86 papers, among which "parametric inference for nonparametric data" was the most common (33.9%). Conclusions We found various types of statistical errors in the articles published in the KJP. This suggests that meticulous attention should be given not only in applying statistical procedures but also in the reviewing process to improve the value of the article. PMID:20552071
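The most frequent error of commission reported above, "parametric inference for nonparametric data", is avoided by rank-based tests. A minimal Mann-Whitney U computation on invented skewed samples (a full analysis would also convert U to a p-value):

```python
import random

def mann_whitney_u(x, y):
    """U statistic: number of (x_i, y_j) pairs with x_i < y_j (ties count 0.5)."""
    u = 0.0
    for a in x:
        for b in y:
            if a < b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

random.seed(1)
x = [random.expovariate(1.0) for _ in range(30)]   # skewed, clearly non-normal sample
y = [random.expovariate(0.5) for _ in range(30)]   # same shape, larger scale
u = mann_whitney_u(x, y)
print(f"U = {u} out of {len(x) * len(y)} pairs")   # U near n*m/2 would mean no shift
```

Because U depends only on ranks, it stays valid for data like these where a t-test's normality assumption fails.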

  15. Exploring Statistics Anxiety: Contrasting Mathematical, Academic Performance and Trait Psychological Predictors

    Science.gov (United States)

    Bourne, Victoria J.

    2018-01-01

    Statistics anxiety is experienced by a large number of psychology students, and previous research has examined a range of potential correlates, including academic performance, mathematical ability and psychological predictors. These varying predictors are often considered separately, although there may be shared variance between them. In the…

  16. Statistical Analysis of EGFR Structures’ Performance in Virtual Screening

    Science.gov (United States)

    Li, Yan; Li, Xiang; Dong, Zigang

    2015-01-01

    In this work, the ability of EGFR structures to distinguish true inhibitors from decoys in docking and MM-PBSA is assessed by statistical procedures. The docking performance depends critically on the receptor conformation and bound state. The enrichment of known inhibitors is well correlated with the difference between EGFR structures rather than the bound-ligand property. The optimal structures for virtual screening can be selected based purely on the complex information. A mixed combination of distinct EGFR conformations is also recommended for ensemble docking. In MM-PBSA, a variety of EGFR structures have identically good performance in the scoring and ranking of known inhibitors, indicating that the choice of the receptor structure has little effect on the screening. PMID:26476847
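The structure-discrimination assessment described above rests on enrichment: how much more often known inhibitors appear at the top of a score ranking than chance would predict. A minimal sketch with invented scores and labels (the real study's docking scores and statistical tests are not reproduced here):

```python
import random

def enrichment_factor(scores, labels, top_frac=0.1):
    """EF = actives found in the top fraction / actives expected there at random."""
    ranked = sorted(zip(scores, labels))          # lower score = better docking pose
    n_top = max(1, int(top_frac * len(ranked)))
    found = sum(lab for _, lab in ranked[:n_top])
    expected = sum(labels) * top_frac
    return found / expected

random.seed(8)
actives = [(random.gauss(-8.0, 1.0), 1) for _ in range(50)]    # true inhibitors dock better
decoys = [(random.gauss(-6.0, 1.0), 0) for _ in range(950)]    # decoys dock worse on average
pool = actives + decoys
scores = [s for s, _ in pool]
labels = [l for _, l in pool]
print(f"EF(10%) = {enrichment_factor(scores, labels):.1f}")
```

An EF of 1 means the receptor structure ranks inhibitors no better than chance; higher values indicate useful discrimination.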

  17. Evaluation of Bending Strength of Carburized Gears Based on Inferential Identification of Principal Surface Layer Defects

    Science.gov (United States)

    Masuyama, Tomoya; Inoue, Katsumi; Yamanaka, Masashi; Kitamura, Kenichi; Saito, Tomoyuki

    High load capacity of carburized gears originates mainly from the hardened layer and induced residual stress. On the other hand, surface decarburization, which causes a nonmartensitic layer, and inclusions such as oxides and segregation act as latent defects which considerably reduce fatigue strength. In this connection, the authors have proposed a strength-evaluation formula that separately quantifies the influence of each defect. However, the principal defect that limits the strength of gears with several different defects remains unclarified. This study presents a method of inferential identification of principal defects based on test results of carburized gears made of SCM420 clean steel, gears with both an artificial notch and a nonmartensitic layer at the tooth fillet, and so forth. The practical use of the presented method is clarified, and the strength of carburized gears can be evaluated by focusing on the principal defect size.

  18. Descriptive and inferential statistics for the SANREM CRSP project database

    OpenAIRE

    Villca, E.I.

    2008-01-01

    This working paper presents survey data, discusses methodology, and draws some conclusions about livelihood strategies in Bolivia in the Jatun Mayu watershed. LTRA-3 (Watershed-based NRM for Small-scale Agriculture)

  19. European downstream oil industry safety performance. Statistical summary of reported incidents 2009

    International Nuclear Information System (INIS)

    Burton, A.; Den Haan, K.H.

    2010-10-01

    The sixteenth such report by CONCAWE, this issue includes statistics on work-related personal injuries for the European downstream oil industry's own employees as well as contractors for the year 2009. Data were received from 33 companies representing more than 97% of the European refining capacity. Trends over the last sixteen years are highlighted and the data are also compared to similar statistics from related industries. In addition, this report presents the results of the first Process Safety Performance Indicator data gathering exercise amongst the CONCAWE membership.

  20. BUDGETING, BUDGETARY CONTROL AND PERFORMANCE EVALUATION: Evidence from Hospitality Firms in Nigeria

    Directory of Open Access Journals (Sweden)

    Patrick Egbunike

    2017-12-01

    Full Text Available This study was carried out with a view to addressing two fundamental issues: first, to determine if there is any association between budget, budgetary control and performance evaluation; second, to ascertain if there is any significant variation in the budget, budgetary control and performance evaluation measures of hospitality firms in Nigeria. The study employed a descriptive design, and primary data (a questionnaire) was the major source of data collection. The questionnaire was administered to a total of six hundred (600) employees of ten (10) selected hospitality firms in Nigeria. The data obtained were analyzed using both descriptive and inferential statistics. Findings indicated that budget and budgetary control could serve as an avenue through which hospitality firms in Nigeria can be evaluated. In addition, it was revealed that there is a significant variation in the budget, budgetary control and performance evaluation of hospitality firms in Nigeria. On the basis of the findings, it was recommended that hospitality firms in Nigeria should carry out performance evaluation on every aspect of their budget and budgetary activities as a way of ensuring that budgeted outcomes are met. Also, budgetary costs should be a basis for choosing the most-fit performance evaluation technique for hospitality firms since such

  1. Inference comprehension in text reading: Performance of individuals with right- versus left-hemisphere lesions and the influence of cognitive functions.

    Science.gov (United States)

    Silagi, Marcela Lima; Radanovic, Marcia; Conforto, Adriana Bastos; Mendonça, Lucia Iracema Zanotto; Mansur, Leticia Lessa

    2018-01-01

    Right-hemisphere lesions (RHL) may impair inference comprehension. However, comparative studies between left-hemisphere lesions (LHL) and RHL are rare, especially regarding reading comprehension. Moreover, further knowledge of the influence of cognition on inferential processing in this task is needed. To compare the performance of patients with RHL and LHL on an inference reading comprehension task. We also aimed to analyze the effects of lesion site and to verify correlations between cognitive functions and performance on the task. Seventy-five subjects were equally divided into the groups RHL, LHL, and control group (CG). The Implicit Management Test was used to evaluate inference comprehension. In this test, subjects read short written passages and subsequently answer five types of questions (explicit, logical, distractor, pragmatic, and other), which require different types of inferential reasoning. The cognitive functional domains of attention, memory, executive functions, language, and visuospatial abilities were assessed using the Cognitive Linguistic Quick Test (CLQT). The LHL and RHL groups presented difficulties in inferential comprehension in comparison with the CG. However, the RHL group presented lower scores than the LHL group on logical, pragmatic and other questions. A covariance analysis did not show any effect of lesion site within the hemispheres. Overall, all cognitive domains were correlated with all the types of questions from the inference test (especially logical, pragmatic, and other). Attention and visuospatial abilities affected the scores of both the RHL and LHL groups, and only memory influenced the performance of the RHL group. Lesions in either hemisphere may cause difficulties in making inferences during reading. However, processing more complex inferences was more difficult for patients with RHL than for those with LHL, which suggests that the right hemisphere plays an important role in tasks with higher comprehension demands.

  2. Inference comprehension in text reading: Performance of individuals with right- versus left-hemisphere lesions and the influence of cognitive functions.

    Directory of Open Access Journals (Sweden)

    Marcela Lima Silagi

    Full Text Available Right-hemisphere lesions (RHL) may impair inference comprehension. However, comparative studies between left-hemisphere lesions (LHL) and RHL are rare, especially regarding reading comprehension. Moreover, further knowledge of the influence of cognition on inferential processing in this task is needed. To compare the performance of patients with RHL and LHL on an inference reading comprehension task. We also aimed to analyze the effects of lesion site and to verify correlations between cognitive functions and performance on the task. Seventy-five subjects were equally divided into the groups RHL, LHL, and control group (CG). The Implicit Management Test was used to evaluate inference comprehension. In this test, subjects read short written passages and subsequently answer five types of questions (explicit, logical, distractor, pragmatic, and other), which require different types of inferential reasoning. The cognitive functional domains of attention, memory, executive functions, language, and visuospatial abilities were assessed using the Cognitive Linguistic Quick Test (CLQT). The LHL and RHL groups presented difficulties in inferential comprehension in comparison with the CG. However, the RHL group presented lower scores than the LHL group on logical, pragmatic and other questions. A covariance analysis did not show any effect of lesion site within the hemispheres. Overall, all cognitive domains were correlated with all the types of questions from the inference test (especially logical, pragmatic, and other). Attention and visuospatial abilities affected the scores of both the RHL and LHL groups, and only memory influenced the performance of the RHL group. Lesions in either hemisphere may cause difficulties in making inferences during reading. However, processing more complex inferences was more difficult for patients with RHL than for those with LHL, which suggests that the right hemisphere plays an important role in tasks with higher comprehension

  3. A systems thinking approach to eliminate delays on building ...

    African Journals Online (AJOL)

    It is obvious that the performance of firms and their market competitiveness hinge on project delivery time. Many approaches have been used to reduce the effect of the potential factors of delay on project delivery time. In this study, the system approach has been employed and validated. Inferential statistical analysis was ...

  4. PRIS-STATISTICS: Power Reactor Information System Statistical Reports. User's Manual

    International Nuclear Information System (INIS)

    2013-01-01

    The IAEA developed the Power Reactor Information System (PRIS)-Statistics application to assist PRIS end users with generating statistical reports from PRIS data. Statistical reports provide an overview of the status, specification and performance results of every nuclear power reactor in the world. This user's manual was prepared to facilitate the use of the PRIS-Statistics application and to provide guidelines and detailed information for each report in the application. Statistical reports support analyses of nuclear power development and strategies, and the evaluation of nuclear power plant performance. The PRIS database can be used for comprehensive trend analyses and benchmarking against best performers and industrial standards.

  5. Spatial Statistical Data Fusion (SSDF)

    Science.gov (United States)

    Braverman, Amy J.; Nguyen, Hai M.; Cressie, Noel

    2013-01-01

    As remote sensing for scientific purposes has transitioned from an experimental technology to an operational one, the selection of instruments has become more coordinated, so that the scientific community can exploit complementary measurements. However, technological and scientific heterogeneity across devices means that the statistical characteristics of the data they collect are different. The challenge addressed here is how to combine heterogeneous remote sensing data sets in a way that yields optimal statistical estimates of the underlying geophysical field, and provides rigorous uncertainty measures for those estimates. Different remote sensing data sets may have different spatial resolutions, different measurement error biases and variances, and other disparate characteristics. A state-of-the-art spatial statistical model was used to relate the true, but not directly observed, geophysical field to noisy, spatial aggregates observed by remote sensing instruments. The spatial covariances of the true field and the covariances of the true field with the observations were modeled. The observations are spatial averages of the true field values, over pixels, with different measurement noise superimposed. A kriging framework is used to infer optimal (minimum mean squared error and unbiased) estimates of the true field at point locations from pixel-level, noisy observations. A key feature of the spatial statistical model is the spatial mixed effects model that underlies it. The approach models the spatial covariance function of the underlying field using linear combinations of basis functions of fixed size. Approaches based on kriging require the inversion of very large spatial covariance matrices, and this is usually done by making simplifying assumptions about spatial covariance structure that simply do not hold for geophysical variables. In contrast, this method does not require these assumptions, and is also computationally much faster. This method is
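The fixed-size basis-function idea in this record, estimating the field through a small number r of basis coefficients so that only an r x r system is solved rather than an n x n covariance matrix inverted, can be illustrated with a 1-D toy. Everything below (basis, field, noise level) is invented, and the sketch omits the measurement-bias modeling and kriging variances of the actual method:

```python
import math
import random

def gaussian_basis(centers, width):
    """A small set of fixed Gaussian bump functions."""
    return [lambda s, c=c: math.exp(-((s - c) / width) ** 2) for c in centers]

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[p] = m[p], m[k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n + 1):
                m[i][j] -= f * m[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

random.seed(2)
basis = gaussian_basis([0.1, 0.3, 0.5, 0.7, 0.9], width=0.2)  # r = 5 basis functions
truth = lambda s: math.sin(2 * math.pi * s)                   # unobserved "geophysical field"
sites = [i / 200 for i in range(200)]                         # n = 200 noisy observations
obs = [truth(s) + random.gauss(0, 0.1) for s in sites]

# Normal equations (S'S) eta = S'z: an r x r solve, independent of n.
S = [[f(s) for f in basis] for s in sites]
r = len(basis)
StS = [[sum(S[i][a] * S[i][b] for i in range(len(sites))) for b in range(r)] for a in range(r)]
Stz = [sum(S[i][a] * obs[i] for i in range(len(sites))) for a in range(r)]
eta = solve(StS, Stz)

predict = lambda s: sum(e * f(s) for e, f in zip(eta, basis))
err = math.sqrt(sum((predict(s) - truth(s)) ** 2 for s in sites) / len(sites))
print(f"rms |prediction - truth| = {err:.3f}")
```

The point of the construction is the scaling: adding more observations grows only the sums feeding the r x r system, never the size of the matrix being inverted.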

  6. Investigate The Relationship Between Brand Equity Brand Loyalty And Customer Satisfaction

    Directory of Open Access Journals (Sweden)

    Farbod Souri

    2017-06-01

    Full Text Available The purpose of this study was to conduct an empirical study in the field of marketing in order to investigate the relationship between brand equity, brand loyalty and customer satisfaction in Refah stores, in which the model of Nam and colleagues (2011) has been used. In this model, the relationship between brand equity and both brand loyalty and customer satisfaction is evaluated. A research sample of 384 customers was selected by stepwise clustering. Data were gathered by a standard questionnaire with 23 questions, whose validity and reliability were confirmed, and distributed among the statistical population. Data analysis was performed using descriptive and inferential statistics. At the descriptive level, indices such as frequency and frequency percentage were used; for inferential statistics, correlation methods, structural equation modeling and path analysis were carried out using the SPSS and LISREL software. The results of the analysis show a significant and positive relationship of brand equity with customer satisfaction and loyalty. In general, the ability of the Refah store to increase customer satisfaction and the loyalty of customers to the Refah brand is associated with its brand equity (p < 0.05).

  7. Performance of Generating Plant: Managing the Changes. Part 2: Thermal Generating Plant Unavailability Factors and Availability Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Curley, G. Michael [North American Electric Reliability Corporation (United States)]; Mandula, Jiri [International Atomic Energy Agency (IAEA)]

    2008-05-15

    The WEC Committee on the Performance of Generating Plant (PGP) has been collecting and analysing power plant performance statistics worldwide for more than 30 years and has produced regular reports, which include examples of advanced techniques and methods for improving power plant performance through benchmarking. A series of reports from the various working groups was issued in 2008. This reference presents the results of Working Group 2 (WG2). WG2's main task is to facilitate the collection and input on an annual basis of power plant performance data (unit-by-unit and aggregated data) into the WEC PGP database. The statistics will be collected for steam, nuclear, gas turbine and combined cycle, hydro and pump storage plant. WG2 will also oversee the ongoing development of the availability statistics database, including the contents, the required software, security issues and other important information. The report is divided into two sections: Thermal generating, combined cycle/co-generation, combustion turbine, hydro and pumped storage unavailability factors and availability statistics; and nuclear power generating units.

  8. Magnetic resonance imaging of the wrist: Diagnostic performance statistics

    International Nuclear Information System (INIS)

    Hobby, Jonathan L.; Tom, Brian D.M.; Bearcroft, Philip W.P.; Dixon, Adrian K.

    2001-01-01

    AIM: To review the published diagnostic performance statistics for magnetic resonance imaging (MRI) of the wrist for tears of the triangular fibrocartilage complex, the intrinsic carpal ligaments, and for osteonecrosis of the carpal bones. MATERIALS AND METHODS: We used Medline and Embase to search the English language literature. Studies evaluating the diagnostic performance of MRI of the wrist in living patients with surgical confirmation of MR findings were identified. RESULTS: We identified 11 studies reporting the diagnostic performance of MRI for tears of the triangular fibrocartilage complex for a total of 410 patients, six studies for the scapho-lunate ligament (159 patients), six studies for the luno-triquetral ligament (142 patients) and four studies (56 patients) for osteonecrosis of the carpal bones. CONCLUSIONS: Magnetic resonance imaging is an accurate means of diagnosing tears of the triangular fibrocartilage and carpal osteonecrosis. Although MRI is highly specific for tears of the intrinsic carpal ligaments, its sensitivity is low. The diagnostic performance of MRI in the wrist is improved by using high-resolution T2* weighted 3D gradient echo sequences. Using current imaging techniques without intra-articular contrast medium, magnetic resonance imaging cannot reliably exclude tears of the intrinsic carpal ligaments.
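The diagnostic performance statistics reviewed in this record reduce to a 2 x 2 table against the surgical gold standard. The counts below are invented, chosen only to echo the low-sensitivity/high-specificity pattern reported for the intrinsic carpal ligaments:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical MRI-versus-surgery counts for intrinsic carpal ligament tears:
tp, fn, tn, fp = 18, 12, 95, 5   # many tears missed (FN), few false alarms (FP)
sensitivity, specificity = sens_spec(tp, fn, tn, fp)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

A test with this profile confirms tears it finds but, as the record concludes, cannot reliably exclude them.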

  9. Progressive statistics for studies in sports medicine and exercise science.

    Science.gov (United States)

    Hopkins, William G; Marshall, Stephen W; Batterham, Alan M; Hanin, Juri

    2009-01-01

    Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.
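Two of the recommendations above, log transformation to deal with nonuniform error and reporting precision of estimation rather than a bare test result, can be sketched together: analyse on the log scale, then back-transform to a ratio effect with confidence limits. The data are invented and the t critical value is a rounded approximation:

```python
import math
import random
from statistics import mean, stdev

random.seed(3)
# Invented positively skewed performance measures (e.g. times), log-normal by design.
control = [math.exp(random.gauss(0.0, 0.3)) for _ in range(20)]
treated = [math.exp(random.gauss(0.2, 0.3)) for _ in range(20)]

# Analyse on the log scale, where errors are closer to uniform.
log_c = [math.log(v) for v in control]
log_t = [math.log(v) for v in treated]
diff = mean(log_t) - mean(log_c)
se = math.sqrt(stdev(log_t) ** 2 / len(log_t) + stdev(log_c) ** 2 / len(log_c))
t90 = 1.69   # rounded two-sided 90% t critical value for roughly 38 df

# Back-transform: the effect becomes a factor with asymmetric confidence limits.
ratio = math.exp(diff)
lo, hi = math.exp(diff - t90 * se), math.exp(diff + t90 * se)
print(f"effect = x{ratio:.2f}, 90% CI x{lo:.2f} to x{hi:.2f}")
```

Reporting the interval communicates both the magnitude and the precision of the effect, which is the inferential style the record advocates.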

  10. Does Tacit Knowledge Predict Organizational Performance? A Scrutiny of Firms in the Upstream Sector in Nigeria

    Directory of Open Access Journals (Sweden)

    Vincent I.O Odiri

    2016-02-01

    Full Text Available This paper examined tacit knowledge so as to see whether tacit knowledge when properly put to use can lead to improved performance by upstream sector firms in Nigeria. Knowledge as we believe, is very vital to both corporate entities and individuals. Knowledge encompasses both explicit and tacit. This paper focused on one aspect of knowledge – ‘tacit’ which is in the psyche or brain of the individual possessing it. Inspite of the central role it plays, tacit knowledge has been downplayed by most firms. However, we adopted a survey research design via questionnaires administered to 504 employees randomly selected from 3 different oil firms. The data obtained were analyzed using inferential statistics. Also, multi-collinearity diagnoses of tacit knowledge and organizational performance was performed. The result suggests that tacit knowledge is linearly correlated with organizational performance. This implies that tacit knowledge predicts organizational performance. This study is significant in that the findings would be useful to management of firms, as it divulge how tacit knowledge when properly harnessed can lead to increased performance. Most prior studies in this area were conducted in other countries, hence our study is one of the first in Nigeria that examined tacit knowledge and organizational performance.

  11. Statistical performance evaluation of ECG transmission using wireless networks.

    Science.gov (United States)

    Shakhatreh, Walid; Gharaibeh, Khaled; Al-Zaben, Awad

    2013-07-01

    This paper presents simulation of the transmission of biomedical signals (using an ECG signal as an example) over wireless networks. An investigation of the effects of channel impairments, including SNR, path-loss exponent and path delay, and of network impairments such as packet loss probability, on the diagnosability of the received ECG signal is presented. The ECG signal is transmitted through a wireless network system composed of two communication protocols: an 802.15.4 ZigBee protocol and an 802.11b protocol. The performance of the transmission is evaluated using higher-order statistics parameters such as kurtosis and negative entropy, in addition to common techniques such as the PRD, RMS and cross-correlation.
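The evaluation metrics named in this record (PRD as a fidelity measure, and higher-order statistics such as kurtosis) can be sketched as follows; the "ECG" here is a crude synthetic pulse train and the channel is reduced to additive noise:

```python
import math
import random

def prd(orig, recv):
    """Percentage root-mean-square difference between original and received signal."""
    num = sum((a - b) ** 2 for a, b in zip(orig, recv))
    den = sum(a ** 2 for a in orig)
    return 100.0 * math.sqrt(num / den)

def kurtosis(x):
    """Excess kurtosis: 0 for a Gaussian, large for spiky signals like ECG."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m4 / (m2 ** 2) - 3.0

random.seed(4)
t = [i / 360 for i in range(1080)]                             # three "beats" at 360 samples/s
ecg = [math.exp(-((ti % 1 - 0.5) ** 2) / 0.001) for ti in t]   # crude QRS-like pulse train
recv = [v + random.gauss(0, 0.01) for v in ecg]                # channel reduced to additive noise

print(f"PRD = {prd(ecg, recv):.2f}%  kurtosis = {kurtosis(recv):.2f}")
```

A low PRD and a preserved high kurtosis suggest the spiky morphology survived transmission, which is what diagnosability assessments look for.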

  12. Global health business: the production and performativity of statistics in Sierra Leone and Germany.

    Science.gov (United States)

    Erikson, Susan L

    2012-01-01

    The global push for health statistics and electronic digital health information systems is about more than tracking health incidence and prevalence. It is also experienced on the ground as means to develop and maintain particular norms of health business, knowledge, and decision- and profit-making that are not innocent. Statistics make possible audit and accountability logics that undergird the management of health at a distance and that are increasingly necessary to the business of health. Health statistics are inextricable from their social milieus, yet as business artifacts they operate as if they are freely formed, objectively originated, and accurate. This article explicates health statistics as cultural forms and shows how they have been produced and performed in two very different countries: Sierra Leone and Germany. In both familiar and surprising ways, this article shows how statistics and their pursuit organize and discipline human behavior, constitute subject positions, and reify existing relations of power.

  13. A statistical approach to nuclear fuel design and performance

    Science.gov (United States)

    Cunning, Travis Andrew

    As CANDU fuel failures can have significant economic and operational consequences on the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case---a hypothesized 80% reactor outlet header break loss of coolant accident. Using a Monte Carlo technique for input generation, 105 independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimensional reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. 
    In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance.
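The workflow in this record (fit probability distributions to manufacturing data, Monte Carlo-sample the model inputs, then compare the output distribution with an acceptance limit) can be sketched as follows. The data and the linear "fuel model" are invented stand-ins, not the ELESTRES or ELOCA codes:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(5)
# Invented manufacturing measurements standing in for a pellet-density dataset (g/cm^3).
measured_density = [random.gauss(10.6, 0.05) for _ in range(500)]
fit = NormalDist(mean(measured_density), stdev(measured_density))  # fitted input distribution

def fuel_model(density):
    """Stand-in performance model: output grows with pellet density, plus model noise."""
    return 40.0 + 12.0 * (density - 10.6) + random.gauss(0, 0.5)

# Monte Carlo: sample inputs from the fitted distribution and run the model.
inputs = fit.samples(10_000)
trials = [fuel_model(d) for d in inputs]

limit = 50.0   # invented acceptance criterion on the model output
failures = sum(1 for v in trials if v > limit)
print(f"mean output = {mean(trials):.1f}, trials past limit: {failures}/{len(trials)}")
```

When the output distribution sits well clear of the limit, as here by construction, the margin between manufacturing variability and the failure criterion is made explicit instead of being hidden inside limit-of-envelope conservatism.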

  14. Long-Term Propagation Statistics and Availability Performance Assessment for Simulated Terrestrial Hybrid FSO/RF System

    Directory of Open Access Journals (Sweden)

    Fiser Ondrej

    2011-01-01

    Full Text Available Long-term monthly and annual statistics of the attenuation of electromagnetic waves that have been obtained from 6 years of measurements on a free space optical path, 853 meters long, with a wavelength of 850 nm and on a precisely parallel radio path with a frequency of 58 GHz are presented. All the attenuation events observed are systematically classified according to the hydrometeor type causing the particular event. Monthly and yearly propagation statistics on the free space optical path and radio path are obtained. The influence of individual hydrometeors on attenuation is analysed. The obtained propagation statistics are compared to the calculated statistics using ITU-R models. The calculated attenuation statistics both at 850 nm and 58 GHz underestimate the measured statistics for higher attenuation levels. The availability performance of a simulated hybrid FSO/RF system is analysed based on the measured data.
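The availability figures in this record derive from exceedance statistics: the fraction of time the path attenuation stays within the link's fade margin. A minimal sketch with an invented attenuation time series:

```python
import random

def availability(attenuation_db, margin_db):
    """Fraction of samples in which attenuation does not exceed the fade margin."""
    ok = sum(1 for a in attenuation_db if a <= margin_db)
    return ok / len(attenuation_db)

random.seed(9)
# Invented per-minute attenuation samples for one month: mostly clear air,
# with an exponential tail standing in for occasional fog/rain fades.
samples = [random.expovariate(0.5) for _ in range(60 * 24 * 30)]

for margin in (6.0, 12.0, 18.0):
    print(f"margin {margin:4.1f} dB -> availability {100 * availability(samples, margin):.2f}%")
```

Sweeping the margin like this reproduces the shape of an attenuation exceedance curve; a hybrid FSO/RF link is then available whenever at least one of the two paths is within its own margin.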

  15. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
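    One of the figures of merit named above, the relative root-mean-square error, is simple to compute. A minimal sketch follows, with a synthetic array standing in for the cadaver-head reconstructions (the study's quality factor, which folds in noise and spatial resolution, is not reproduced here).

```python
import numpy as np

def relative_rmse(recon, reference):
    """Relative root-mean-square error of a reconstruction against a
    reference image: RMS of the error divided by the RMS of the reference."""
    recon = np.asarray(recon, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.sqrt(np.mean((recon - reference) ** 2)) / np.sqrt(np.mean(reference ** 2))

# Toy check: a reference "image" and a slightly noisy reconstruction of it
rng = np.random.default_rng(4)
reference = rng.uniform(0, 1, size=(64, 64))
recon = reference + rng.normal(0, 0.05, size=(64, 64))
print(relative_rmse(recon, reference))
```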

  16. Do Narcissism and Emotional Intelligence Win Us Friends? Modeling Dynamics of Peer Popularity Using Inferential Network Analysis.

    Science.gov (United States)

    Czarna, Anna Z; Leifeld, Philip; Śmieja, Magdalena; Dufner, Michael; Salovey, Peter

    2016-09-27

    This research investigated effects of narcissism and emotional intelligence (EI) on popularity in social networks. In a longitudinal field study, we examined the dynamics of popularity in 15 peer groups in two waves (N = 273). We measured narcissism, ability EI, and explicit and implicit self-esteem. In addition, we measured popularity at zero acquaintance and 3 months later. We analyzed the data using inferential network analysis (temporal exponential random graph modeling, TERGM) accounting for self-organizing network forces. People high in narcissism were popular, but increased less in popularity over time than people lower in narcissism. In contrast, emotionally intelligent people increased more in popularity over time than less emotionally intelligent people. The effects held when we controlled for explicit and implicit self-esteem. These results suggest that narcissism is rather disadvantageous and that EI is rather advantageous for long-term popularity. © 2016 by the Society for Personality and Social Psychology, Inc.

  17. Multivariate Statistical Inference of Lightning Occurrence, and Using Lightning Observations

    Science.gov (United States)

    Boccippio, Dennis

    2004-01-01

    Two classes of multivariate statistical inference using TRMM Lightning Imaging Sensor, Precipitation Radar, and Microwave Imager observations are studied, using nonlinear classification neural networks as inferential tools. The very large and globally representative data sample provided by TRMM allows both training and validation (without overfitting) of neural networks with many degrees of freedom. In the first study, the flashing / non-flashing condition of storm complexes is diagnosed using radar, passive microwave and/or environmental observations as neural network inputs. The diagnostic skill of these simple lightning/no-lightning classifiers can be quite high over land (above 80% Probability of Detection; below 20% False Alarm Rate). In the second, passive microwave and lightning observations are used to diagnose radar reflectivity vertical structure. Because a priori diagnosis of hydrometeor vertical structure is highly important for improved rainfall retrieval from either orbital radars (e.g., the future Global Precipitation Mission "mothership") or radiometers (e.g., operational SSM/I and future Global Precipitation Mission passive microwave constellation platforms), we explore the incremental benefit to such diagnosis provided by lightning observations.
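    The skill scores quoted above come from a 2x2 contingency table of classifier predictions versus observations. A minimal sketch using one common convention is given below; note that "False Alarm Rate" is sometimes defined against all non-events instead, and the convention used in the study is not specified in this abstract.

```python
import numpy as np

def pod_far(predicted, observed):
    """Probability of Detection and false-alarm score for a binary
    lightning / no-lightning classifier (ratio convention assumed)."""
    predicted = np.asarray(predicted, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    hits = np.sum(predicted & observed)
    misses = np.sum(~predicted & observed)
    false_alarms = np.sum(predicted & ~observed)
    pod = hits / (hits + misses)           # fraction of events detected
    far = false_alarms / (hits + false_alarms)  # fraction of alarms that are false
    return pod, far

# Toy contingency check: 8 storm complexes
pred = [1, 1, 1, 0, 0, 1, 0, 0]
obs  = [1, 1, 0, 0, 1, 1, 0, 0]
print(pod_far(pred, obs))  # (0.75, 0.25)
```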

  18. A study on the use of Gumbel approximation with the Bernoulli spatial scan statistic.

    Science.gov (United States)

    Read, S; Bath, P A; Willett, P; Maheswaran, R

    2013-08-30

    The Bernoulli version of the spatial scan statistic is a well established method of detecting localised spatial clusters in binary labelled point data, a typical application being the epidemiological case-control study. A recent study suggests the inferential accuracy of several versions of the spatial scan statistic (principally the Poisson version) can be improved, at little computational cost, by using the Gumbel distribution, a method now available in SaTScan(TM) (www.satscan.org). We study in detail the effect of this technique when applied to the Bernoulli version and demonstrate that it is highly effective, albeit with some increase in false alarm rates at certain significance thresholds. We explain how this increase is due to the discrete nature of the Bernoulli spatial scan statistic and demonstrate that it can affect even small p-values. Despite this, we argue that the Gumbel method is actually preferable for very small p-values. Furthermore, we extend previous research by running benchmark trials on 12 000 synthetic datasets, thus demonstrating that the overall detection capability of the Bernoulli version (i.e. ratio of power to false alarm rate) is not noticeably affected by the use of the Gumbel method. We also provide an example application of the Gumbel method using data on hospital admissions for chronic obstructive pulmonary disease. Copyright © 2013 John Wiley & Sons, Ltd.
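    The Gumbel method described above replaces the discrete rank-based Monte Carlo p-value with a continuous tail probability from a Gumbel distribution fitted to the replicate maxima. The sketch below illustrates the mechanics only: the null maxima here are synthetic stand-ins, whereas in SaTScan they would come from random relabellings of the case/control marks, and the observed value is hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in for the Monte Carlo null distribution of the maximum scan
# statistic: maxima over 50 candidate zones in each of 999 replications.
null_max = rng.normal(size=(999, 50)).max(axis=1)

observed = 4.2  # hypothetical observed maximum log-likelihood ratio

# Rank-based Monte Carlo p-value: discrete, resolution limited to 1/1000
p_mc = (1 + np.sum(null_max >= observed)) / (1 + len(null_max))

# Gumbel approximation: fit to the replicate maxima, then read off a
# continuous tail probability, which can resolve p-values below 1/1000.
loc, scale = stats.gumbel_r.fit(null_max)
p_gumbel = stats.gumbel_r.sf(observed, loc, scale)

print(p_mc, p_gumbel)
```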

  19. The Relationship between Teacher Competence, Emotional Intelligence, and Teacher Performance in Madrasah Tsanawiyah at the District of Serang, Banten

    Science.gov (United States)

    Wahyuddin, Wawan

    2016-01-01

    This study examines the relationship of the teacher competence and emotional intelligence held by teachers to the performance of Madrasah Tsanawiyah teachers in the district of Serang, Banten. The research was conducted with a quantitative method, through descriptive and inferential analysis. The research samples were teachers…

  20. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.

  1. Effects of Concept Mapping Strategy on Learning Performance in Business and Economics Statistics

    Science.gov (United States)

    Chiou, Chei-Chang

    2009-01-01

    A concept map (CM) is a hierarchically arranged, graphic representation of the relationships among concepts. Concept mapping (CMING) is the process of constructing a CM. This paper examines whether a CMING strategy can be useful in helping students to improve their learning performance in a business and economics statistics course. A single…

  2. Analysis of the Influence of Leadership and Communication on Employee Performance (Analisis Pengaruh Kepemimpinan dan Komunikasi terhadap Kinerja Pegawai)

    Directory of Open Access Journals (Sweden)

    Tombang Simatupang

    2013-03-01

    Full Text Available This paper analyses the influence of leadership and communication on employees’ performance. Employing inferential statistics, this research finds that leadership and communication are among the key success factors for improving organizational performance, and thus service to the public. This research focuses on one public organization within the Directorate General of Treasury, Ministry of Finance. Given the similar contexts facing other public organizations, it is expected that the findings shed light on issues such as organizational performance through enhancing leadership skill and communication.

  3. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data.

    Science.gov (United States)

    Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin

    2017-06-01

    Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and produce false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, the sample covariance may not be a precise estimate if the sample size is very limited, which is usually the case for data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions, in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https
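    The core idea, a Hotelling-style T2 statistic whose covariance comes from external knowledge rather than the (small) sample, can be sketched as follows. Everything here is an illustrative assumption: the pathway size, the expression ratios, the confidence matrix (in the study it would be built from STRING or HitPredict scores), the common-variance scaling, and the chi-square null, since the paper's exact construction is not given in this abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical pathway of 4 proteins measured in n = 6 paired samples;
# values are log2 expression ratios (treated vs. control) with a real shift.
n, p = 6, 4
X = rng.normal(loc=0.5, scale=0.4, size=(n, p))

# Knowledge-based covariance: off-diagonal entries play the role of
# interaction confidence scores rescaled to [0, 1]; these particular
# numbers are invented for illustration, not retrieved values.
conf = np.array([
    [1.0, 0.9, 0.4, 0.2],
    [0.9, 1.0, 0.6, 0.3],
    [0.4, 0.6, 1.0, 0.7],
    [0.2, 0.3, 0.7, 1.0],
])
Sigma = 0.4 ** 2 * conf  # assumed common variance times confidence matrix

# Hotelling-style T2 under a self-contained null (mean ratio vector = 0)
xbar = X.mean(axis=0)
T2 = n * xbar @ np.linalg.solve(Sigma, xbar)

# Treating the covariance as known, T2 is chi-square with p degrees of freedom
p_value = stats.chi2.sf(T2, df=p)
print(T2, p_value)
```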

  4. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    Science.gov (United States)

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
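    A Weibull-type saccharification curve with a characteristic time λ and shape n can be fitted to hydrolysis data as sketched below. The parameterization used here (a standard cumulative Weibull form) and the synthetic yield data are assumptions for illustration; the paper's exact model equation is not reproduced in this abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed Weibull-type saccharification curve: yield rises toward y_max
# with characteristic time lam (the abstract's lambda) and shape n.
def weibull_yield(t, y_max, lam, n):
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

# Synthetic hydrolysis data: glucose yield (%) vs time (h), with noise
t = np.array([2, 4, 8, 12, 24, 48, 72], dtype=float)
rng = np.random.default_rng(7)
y_obs = weibull_yield(t, 80.0, 10.0, 0.8) + rng.normal(scale=1.0, size=t.size)

# Nonlinear least-squares fit; bounds keep lam and n positive
popt, _ = curve_fit(weibull_yield, t, y_obs, p0=[70, 8, 1],
                    bounds=([0, 0, 0], [200, 100, 5]))
y_max_hat, lam_hat, n_hat = popt
print(y_max_hat, lam_hat, n_hat)
```

    The fitted λ (here `lam_hat`) is the time at which the yield reaches about 63% of its asymptote, which is what makes it a convenient single-number summary of overall saccharification performance.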

  5. Flipping the Classroom and Student Performance in Advanced Statistics: Evidence from a Quasi-Experiment

    Science.gov (United States)

    Touchton, Michael

    2015-01-01

    I administer a quasi-experiment using undergraduate political science majors in statistics classes to evaluate whether "flipping the classroom" (the treatment) alters students' applied problem-solving performance and satisfaction relative to students in a traditional classroom environment (the control). I also assess whether general…

  6. Predicting energy performance of a net-zero energy building: A statistical approach

    International Nuclear Information System (INIS)

    Kneifel, Joshua; Webb, David

    2016-01-01

    Highlights: • A regression model is applied to actual energy data from a net-zero energy building. • The model is validated through a rigorous statistical analysis. • Comparisons are made between model predictions and those of a physics-based model. • The model is a viable baseline for evaluating future models from the energy data. - Abstract: Performance-based building requirements have become more prevalent because they give freedom in building design while still maintaining or exceeding the energy performance required by prescriptive-based requirements. In order to determine whether building designs reach target energy efficiency improvements, it is necessary to estimate the energy performance of a building using predictive models and different weather conditions. Physics-based whole building energy simulation modeling is the most common approach. However, these physics-based models include underlying assumptions and require significant amounts of information in order to specify the input parameter values. An alternative approach to test the performance of a building is to develop a statistically derived predictive regression model using post-occupancy data that can accurately predict energy consumption and production based on a few common weather-based factors, thus requiring less information than simulation models. A regression model based on measured data should be able to predict the energy performance of a building for a given day as long as the weather conditions are similar to those during the data collection time frame. This article uses data from the National Institute of Standards and Technology (NIST) Net-Zero Energy Residential Test Facility (NZERTF) to develop and validate a regression model to predict the energy performance of the NZERTF using two weather variables aggregated to the daily level, applies the model to estimate the energy performance of hypothetical NZERTFs located in different cities in the Mixed-Humid Climate Zone, and compares these
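    A daily-level regression of energy performance on two weather variables can be sketched as ordinary least squares. The data below are synthetic and the two regressors (mean outdoor temperature and daily solar irradiance) are assumptions for illustration; the abstract does not name the NZERTF model's actual variables or coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily data: the two regressors stand in for the (unnamed
# here) weather variables aggregated to the daily level.
days = 365
temp = rng.normal(12, 9, days)      # mean outdoor temperature, deg C
solar = rng.uniform(1, 7, days)     # solar irradiance, kWh/m^2/day
net_energy = 5.0 + 0.8 * temp - 2.5 * solar + rng.normal(0, 1.5, days)

# Ordinary least squares: intercept plus the two weather variables
X = np.column_stack([np.ones(days), temp, solar])
beta, *_ = np.linalg.lstsq(X, net_energy, rcond=None)

# Goodness of fit (R^2) on the training data
pred = X @ beta
ss_res = np.sum((net_energy - pred) ** 2)
ss_tot = np.sum((net_energy - net_energy.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(beta, r2)
```

    Once fitted, the same `beta` can be applied to weather series from other cities, which is the transfer step the abstract describes, valid only insofar as those conditions resemble the training period.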

  7. Statistics for economics

    CERN Document Server

    Naghshpour, Shahdad

    2012-01-01

    Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analy...

  8. Self-Concept, Values Orientation, and Teaching Performance Among Hospitality Educators

    Directory of Open Access Journals (Sweden)

    Joy D. Jocson

    2014-02-01

    Full Text Available This survey-correlational study aimed to investigate the self-concept, values orientation, and teaching performance among hospitality educators of the West Visayas State University System. The study was conducted in January 2013 and utilized 42 randomly selected hospitality educators as participants. The simple random sampling method was used in the selection of the participants. Three (3) standardized and published data-gathering instruments were adapted to obtain the data for the study. To ascertain the degree of self-concept, Girdano and Everly’s (1979) Self-perception Test instrument was used. In determining the predominant values orientation, Rokeach’s (1973) Value Survey Form used by Rabago (1988) was utilized. To ascertain the level of teaching performance, the WVSU F-PES was employed. Frequency counts, rank, percentage analyses, mean scores, and standard deviations were employed as descriptive statistics; while the t-test for independent samples, one-way ANOVA, and Pearson’s Product Moment Coefficient of Correlation (Pearson’s r) were employed as inferential statistics. The criterion for the acceptance or rejection of the null hypotheses was set at the .05 alpha level. The results of the study revealed that, generally, the hospitality educators had outstanding teaching performance and strong self-concept. Family security, salvation, and happiness were their most important terminal values while social recognition, a world of beauty, and pleasure were their least important values. Loving, responsible, and honest were their most important instrumental values and imaginative, ambitious, and clean were their least important values. In terms of teaching performance, no significant differences existed when hospitality educators were classified according to sex, age, civil status, educational attainment, status of employment and number of years in teaching. Significant differences existed in the degree of self-concept among hospitality educators grouped

  9. Using the expected detection delay to assess the performance of different multivariate statistical process monitoring methods for multiplicative and drift faults.

    Science.gov (United States)

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Peng, Kaixiang

    2017-03-01

    The use of the expected detection delay (EDD) index to measure the performance of multivariate statistical process monitoring (MSPM) methods for constant additive faults has recently been developed. This paper, based on a statistical investigation of the T²- and Q-test statistics, extends the EDD index to the multiplicative and drift fault cases. As well, it is used to assess the performance of common MSPM methods that adopt these two test statistics. Based on how they use the measurement space, these methods can be divided into two groups: those which consider the complete measurement space, for example, principal component analysis-based methods, and those which only consider some subspace that reflects changes in key performance indicators, such as partial least squares-based methods. Furthermore, a generic form for their use of the T²- and Q-test statistics is given. With the extended EDD index, the performance of these methods in detecting drift and multiplicative faults is assessed using both numerical simulations and the Tennessee Eastman process. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
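    The EDD idea, the average number of samples after fault onset until the monitoring statistic first crosses its control limit, can be estimated by simulation. The sketch below uses a deliberately simplified setting (identity covariance, a known-covariance T² with a chi-square limit, and an arbitrary drift rate); the paper's generic T²/Q formulation for PCA- and PLS-based methods is more general.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

p = 3            # number of monitored variables
alpha = 0.01     # per-sample false alarm rate
threshold = stats.chi2.ppf(1 - alpha, df=p)  # T2 control limit (known covariance)
drift_rate = 0.05  # drift fault: the mean grows by 0.05 per sample in each variable

def detection_delay(max_steps=1000):
    """Samples after fault onset until T2 first exceeds its limit."""
    for k in range(1, max_steps + 1):
        x = rng.normal(size=p) + drift_rate * k  # drifting mean, identity covariance
        t2 = x @ x                               # T2 reduces to ||x||^2 when Sigma = I
        if t2 > threshold:
            return k
    return max_steps

# Expected detection delay: average over many fault realizations
edd = np.mean([detection_delay() for _ in range(2000)])
print(edd)
```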

  10. A Novel Approach in Facilitating Aviation Emergency Procedure Learning and Recall through an Intuitive Pictorial System

    National Research Council Canada - National Science Library

    Estrada, Arthur; Keeley, Jennifer A; LeDuc, Patricia A; Bass, Julie M; Rouse, Tiffany N; Ramiccio, John G; Rowe, Terri L

    2007-01-01

    ...: the Intuitive Pictorial System (IPS). Descriptive and inferential statistics, along with correlation, were used to assess the study data, which determined statistically significant differences between the IPS and traditional training methods...

  11. Performance evaluation of a hybrid-passive landfill leachate treatment system using multivariate statistical techniques

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, Jack, E-mail: jack.wallace@ce.queensu.ca [Department of Civil Engineering, Queen’s University, Ellis Hall, 58 University Avenue, Kingston, Ontario K7L 3N6 (Canada); Champagne, Pascale, E-mail: champagne@civil.queensu.ca [Department of Civil Engineering, Queen’s University, Ellis Hall, 58 University Avenue, Kingston, Ontario K7L 3N6 (Canada); Monnier, Anne-Charlotte, E-mail: anne-charlotte.monnier@insa-lyon.fr [National Institute for Applied Sciences – Lyon, 20 Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2015-01-15

    Highlights: • Performance of a hybrid passive landfill leachate treatment system was evaluated. • 33 Water chemistry parameters were sampled for 21 months and statistically analyzed. • Parameters were strongly linked and explained most (>40%) of the variation in data. • Alkalinity, ammonia, COD, heavy metals, and iron were criteria for performance. • Eight other parameters were key in modeling system dynamics and criteria. - Abstract: A pilot-scale hybrid-passive treatment system operated at the Merrick Landfill in North Bay, Ontario, Canada, treats municipal landfill leachate and provides for subsequent natural attenuation. Collected leachate is directed to a hybrid-passive treatment system, followed by controlled release to a natural attenuation zone before entering the nearby Little Sturgeon River. The study presents a comprehensive evaluation of the performance of the system using multivariate statistical techniques to determine the interactions between parameters, major pollutants in the leachate, and the biological and chemical processes occurring in the system. Five parameters (ammonia, alkalinity, chemical oxygen demand (COD), “heavy” metals of interest, with atomic weights above calcium, and iron) were set as criteria for the evaluation of system performance based on their toxicity to aquatic ecosystems and importance in treatment with respect to discharge regulations. System data for a full range of water quality parameters over a 21-month period were analyzed using principal components analysis (PCA), as well as principal components (PC) and partial least squares (PLS) regressions. PCA indicated a high degree of association for most parameters with the first PC, which explained a high percentage (>40%) of the variation in the data, suggesting strong statistical relationships among most of the parameters in the system. Regression analyses identified 8 parameters (set as independent variables) that were most frequently retained for modeling
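    The PCA step described above, with a first principal component explaining over 40% of the variation across correlated water chemistry parameters, can be sketched as follows. The data here are synthetic stand-ins for the 33-parameter, 21-month monitoring matrix: six illustrative parameters sharing a common latent "leachate strength" factor, which is an assumption for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic monitoring matrix: 40 sampling events x 6 parameters that
# load on one shared factor, mimicking the strong PC1 in the abstract.
n_samples, n_params = 40, 6
strength = rng.normal(size=n_samples)                # latent factor
loadings = np.array([0.9, 0.8, 0.85, 0.7, 0.75, 0.6])
X = np.outer(strength, loadings) + 0.4 * rng.normal(size=(n_samples, n_params))

# PCA via SVD of the standardized data matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)  # variance fraction per component

print(explained[0])  # fraction of variance carried by PC1
```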

  12. Descriptive data analysis.

    Science.gov (United States)

    Thompson, Cheryl Bagley

    2009-01-01

    This 13th article of the Basics of Research series is first in a short series on statistical analysis. These articles will discuss creating your statistical analysis plan, levels of measurement, descriptive statistics, probability theory, inferential statistics, and general considerations for interpretation of the results of a statistical analysis.

  13. LHCb: Statistical Comparison of CPU performance for LHCb applications on the Grid

    CERN Multimedia

    Graciani, R

    2009-01-01

    The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application performing the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects that can be used for later physics analysis of the data (tracks, clusters, …). Both applications are based on the Gaudi and LHCb software frameworks. Gauss uses Pythia and Geant as underlying libraries for the simulation of the collision and the later passage of the generated particles through the LHCb detector, while Brunel makes use of LHCb-specific code to process the data from each sub-detector. Both applications are CPU bound. Large Monte Carlo productions or data reconstructions running on the Grid are an ideal benchmark to compare the performance of the different CPU models for each case. Since the processed events are only statistically comparable, only statistical comparison of the...

  14. The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments

    International Nuclear Information System (INIS)

    Pham, Binh T.; Einerson, Jeffrey J.

    2010-01-01

    This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory's Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.

  15. The statistical analysis techniques to support the NGNP fuel performance experiments

    Energy Technology Data Exchange (ETDEWEB)

    Pham, Binh T., E-mail: Binh.Pham@inl.gov; Einerson, Jeffrey J.

    2013-10-15

    This paper describes the development and application of statistical analysis techniques to support the Advanced Gas Reactor (AGR) experimental program on Next Generation Nuclear Plant (NGNP) fuel performance. The experiments conducted in the Idaho National Laboratory’s Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel temperature) is regulated by the He–Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the NGNP Data Management and Analysis System for automated processing and qualification of the AGR measured data. The neutronic and thermal code simulation results are used for comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the fuel temperature within a given range.
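    Of the three techniques named above, control charting is the most direct warning mechanism for thermocouple failures. A minimal Shewhart-style sketch follows; the temperatures, window lengths, and fault magnitude are invented for illustration and are not taken from the AGR data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated daily thermocouple readings (deg C): stable around a target
# fuel temperature, then a step change mimicking sensor deterioration.
baseline = rng.normal(1000.0, 3.0, size=60)   # in-control training period
faulty = rng.normal(1020.0, 3.0, size=10)     # drifted / failing sensor
readings = np.concatenate([baseline, faulty])

# Shewhart-style 3-sigma control limits from the in-control window
mu, sigma = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = mu + 3 * sigma, mu - 3 * sigma

# Indices flagged for investigation (possible thermocouple failure)
out_of_control = np.where((readings > ucl) | (readings < lcl))[0]
print(out_of_control)
```

    In the NDMAS setting, flagged indices would trigger a closer look at the thermocouple and a cross-check against the regression model relating calculated fuel temperatures to the readings.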

  16. Turking Statistics: Student-Generated Surveys Increase Student Engagement and Performance

    Science.gov (United States)

    Whitley, Cameron T.; Dietz, Thomas

    2018-01-01

    Thirty years ago, Hubert M. Blalock Jr. published an article in "Teaching Sociology" about the importance of teaching statistics. We honor Blalock's legacy by assessing how using Amazon Mechanical Turk (MTurk) in statistics classes can enhance student learning and increase statistical literacy among social science graduate students. In…

  17. Handbook of statistical methods single subject design

    CERN Document Server

    Satake, Eiki; Maxwell, David L

    2008-01-01

    This book is a practical guide to the most commonly used approaches to analyzing and interpreting single-subject data. It arranges the methodologies in a logical sequence, using an array of research studies from the existing published literature to illustrate specific applications. The book provides a brief discussion of each approach, such as the visual, inferential, and probabilistic models, the applications for which it is intended, and a step-by-step illustration of the test as used in an actual research study.

  18. WASP (Write a Scientific Paper) using Excel - 8: t-Tests.

    Science.gov (United States)

    Grech, Victor

    2018-06-01

    t-Testing is a common component of inferential statistics when comparing two means. This paper explains the central limit theorem and the concept of the null hypothesis as well as types of errors. On the practical side, this paper outlines how different t-tests may be performed in Microsoft Excel, for different purposes, both statically as well as dynamically, with Excel's functions. Copyright © 2018 Elsevier B.V. All rights reserved.
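As a rough illustration of what a two-sample t-test computes (the equal-variance case, one of the variants Excel's T.TEST function offers), the pooled t statistic can be sketched in a few lines of Python; the p-value would then come from the t distribution with n1 + n2 - 2 degrees of freedom.

```python
from math import sqrt

def pooled_t(a, b):
    """Two-sample Student's t statistic assuming equal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

t = pooled_t([1, 2, 3, 4, 5], [3, 4, 5, 6, 7])  # t = -2.0 on 8 degrees of freedom
```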

  19. Changes in Math Prerequisites and Student Performance in Business Statistics: Do Math Prerequisites Really Matter?

    OpenAIRE

    Jeffrey J. Green; Courtenay C. Stone; Abera Zegeye; Thomas A. Charles

    2007-01-01

    We use a binary probit model to assess the impact of several changes in math prerequisites on student performance in an undergraduate business statistics course. While the initial prerequisites did not necessarily provide students with the necessary math skills, our study, the first to examine the effect of math prerequisite changes, shows that these changes were deleterious to student performance. Our results helped convince the College of Business to change the math prerequisite again begin...
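A binary probit model maps a linear predictor through the standard normal CDF. A minimal sketch (with made-up coefficients purely for illustration, not those estimated in the study) shows how a fitted model would convert a predictor such as a math placement score into a predicted probability of passing business statistics.

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def probit_probability(x, beta0, beta1):
    """P(y = 1 | x) under a probit model with linear predictor beta0 + beta1*x."""
    return norm_cdf(beta0 + beta1 * x)

# Hypothetical coefficients: a score of 70 gives a linear predictor of 0,
# hence a predicted pass probability of exactly 0.5.
p = probit_probability(70, beta0=-3.5, beta1=0.05)
```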

  20. A laboratory evaluation of the influence of weighing gauges performance on extreme events statistics

    Science.gov (United States)

    Colli, Matteo; Lanza, Luca

    2014-05-01

    The effects of inaccurate ground-based rainfall measurements on the information derived from rain records are not yet well documented in the literature. La Barbera et al. (2002) investigated the propagation of the systematic mechanical errors of tipping-bucket rain gauges (TBR) into the most common statistics of rainfall extremes, e.g. in the assessment of the return period T (or the related non-exceedance probability) of short-duration/high-intensity events. Colli et al. (2012) and Lanza et al. (2012) extended the analysis to a 22-year-long precipitation data set obtained from a virtual weighing-type gauge (WG). The artificial WG time series was obtained based on real precipitation data measured at the meteo-station of the University of Genova, modelling the weighing gauge output as a linear dynamic system. This approximation was previously validated with dedicated laboratory experiments and is based on the evidence that the accuracy of WG measurements under real-world, time-varying rainfall conditions is mainly affected by the dynamic response of the gauge (as revealed during the last WMO Field Intercomparison of Rainfall Intensity Gauges). The investigation is now completed by analyzing actual measurements performed by two common weighing gauges, the OTT Pluvio2 load-cell gauge and the GEONOR T-200 vibrating-wire gauge, since both instruments demonstrated very good performance in previous constant-flow-rate calibration efforts. A laboratory dynamic rainfall generation system has been arranged and validated in order to simulate a number of precipitation events with variable reference intensities. Such artificial events were generated based on real-world rainfall intensity (RI) records obtained from the meteo-station of the University of Genova, so that the statistical structure of the time series is preserved. The influence of the accuracy of the WG RI measurements on the associated extreme events statistics is analyzed by comparing the original intensity
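The abstract models the weighing gauge output as a linear dynamic system. A minimal sketch of that idea is a discrete first-order low-pass filter applied to a rainfall-intensity series (the smoothing constant below is illustrative, not a calibrated gauge parameter): short, intense bursts in the input are attenuated in the output, which is precisely how dynamic response can bias extreme-value statistics.

```python
def first_order_response(intensity, alpha):
    """Discrete first-order linear system: y[k] = alpha*y[k-1] + (1-alpha)*x[k]."""
    y, out = 0.0, []
    for x in intensity:
        y = alpha * y + (1.0 - alpha) * x
        out.append(y)
    return out

# A short high-intensity burst surrounded by drizzle (mm/h, synthetic).
burst = [2, 2, 60, 60, 2, 2]
measured = first_order_response(burst, alpha=0.5)
peak_attenuation = max(burst) - max(measured)  # the gauge underestimates the peak
```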

  1. Fungsi Manajemen Kepala Sekolah, Motivasi, dan Kinerja Guru

    Directory of Open Access Journals (Sweden)

    Rita Lisnawati

    2018-01-01

    Nowadays, there is intense competition in the world of education. The output generated by schools should be in accordance with national education goals. The principal, as manager, plays an important role in achieving these goals. In addition, teachers are expected not only to transfer knowledge but also to deliver maximum performance, and to achieve maximum performance they must be driven by high motivation. This research aims to determine the level of the principal's management function, of teacher motivation, and of teacher performance, and how strongly teacher motivation influences teacher performance. Data were collected through a questionnaire distributed to respondents, observations, interviews, and documentation. Questionnaire responses were measured on a 4-point Likert scale, and validity was tested using the Pearson product-moment correlation. Descriptive statistical analysis was used to answer the research questions, while the research hypothesis relating teacher motivation to teacher performance was tested using simple linear regression inferential analysis. The results showed that: (1) the principal's management function is in the high category, with an average respondent value of 72.92; (2) teacher motivation is in the high category, with an average score of 60 < μ ≤ 80; (3) teacher performance is in the high category, with an average score of 60 < μ ≤ 80; and (4) teacher motivation affects teacher performance, with a linear regression coefficient of determination of 68.82%.

  2. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    Science.gov (United States)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to conduct a study that assesses the process improvement, quality management, and analytical techniques taught to undergraduate and graduate students in systems engineering and computing science disciplines (e.g., software engineering, computer science, and information technology) at U.S. colleges and universities that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods, and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The research study identifies and provides a detailed discussion of the gap-analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis which identifies the gaps that exist between the SEI's "healthy ingredients" of a process performance model and courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. 
The research also includes a Monte Carlo simulation optimization

  3. Preparing for the first meeting with a statistician.

    Science.gov (United States)

    De Muth, James E

    2008-12-15

    Practical statistical issues that should be considered when performing data collection and analysis are reviewed. The meeting with a statistician should take place early in the research development before any study data are collected. The process of statistical analysis involves establishing the research question, formulating a hypothesis, selecting an appropriate test, sampling correctly, collecting data, performing tests, and making decisions. Once the objectives are established, the researcher can determine the characteristics or demographics of the individuals required for the study, how to recruit volunteers, what type of data are needed to answer the research question(s), and the best methods for collecting the required information. There are two general types of statistics: descriptive and inferential. Presenting data in a more palatable format for the reader is called descriptive statistics. Inferential statistics involve making an inference or decision about a population based on results obtained from a sample of that population. In order for the results of a statistical test to be valid, the sample should be representative of the population from which it is drawn. When collecting information about volunteers, researchers should only collect information that is directly related to the study objectives. Important information that a statistician will require first is an understanding of the type of variables involved in the study and which variables can be controlled by researchers and which are beyond their control. Data can be presented in one of four different measurement scales: nominal, ordinal, interval, or ratio. Hypothesis testing involves two mutually exclusive and exhaustive statements related to the research question. Statisticians should not be replaced by computer software, and they should be consulted before any research data are collected. 
When preparing to meet with a statistician, the pharmacist researcher should be familiar with the steps
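The descriptive/inferential split described above can be made concrete: descriptive statistics summarize the sample itself, while an inferential step makes a statement about the population the sample was drawn from. The sketch below uses a normal-approximation 95% confidence interval for the mean as the inferential step (an illustrative choice; small samples would call for the t distribution).

```python
from math import sqrt
from statistics import mean, stdev

def describe_and_infer(sample):
    """Descriptive summary plus a normal-approximation 95% CI for the population mean."""
    m, s = mean(sample), stdev(sample)         # descriptive: summarize the sample
    half_width = 1.96 * s / sqrt(len(sample))  # inferential: bound the population mean
    return m, s, (m - half_width, m + half_width)

m, s, ci = describe_and_infer([4, 5, 6, 5, 4, 6])
```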

  4. Practical Statistics

    CERN Document Server

    Lyons, L.

    2016-01-01

    Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

  5. The Influence of Business Environmental Dynamism, Complexity and Munificence on Performance of Small and Medium Enterprises in Kenya

    Directory of Open Access Journals (Sweden)

    Washington Oduor Okeyo

    2014-08-01

    Full Text Available The main purpose of this article is to examine how the business environment affects small and medium enterprises. The paper is motivated by the important contributions small and medium enterprises make in many countries, especially Kenya, towards job creation, poverty reduction and economic development. Literature however argues that the effectiveness of these contributions is conditioned by the state of business environmental factors such as politics, economy, socio-culture, technology, ecology and laws/regulations. Dynamism, complexity and munificence of these factors are therefore vital to the achievement of organizational objectives and overall performance. Even so, a review of the literature reveals contradictory views regarding the effect of these factors on the performance of organizations. Furthermore, studies focusing on these factors in the Kenyan context, particularly with regard to their effect on the performance of small and medium firms, are scarce. This article bridges this gap based on a study of 800 manufacturing organizations in Nairobi, Kenya. A sample of 150 enterprises was selected through stratification by business sector followed by simple random sampling. The research design was a cross-sectional survey in which data were collected using a structured questionnaire over a period of one month, at the end of which 95 organizations responded, giving a response rate of 64%. Reliability and validity of the instrument were determined through Cronbach’s alpha tests and expert reviews. The Statistical Package for the Social Sciences was used to determine normality through descriptive statistics, and study hypotheses were tested using inferential statistics. The study established that the business environment had an overall impact on organizational performance. Specifically, dynamism, complexity and munificence each had a direct influence on the enterprises in the study. Furthermore the combined effect on performance was found to be greater than that of dynamism and

  6. Development of 4S and related technologies. (3) Statistical evaluation of safety performance of 4S on ULOF event

    International Nuclear Information System (INIS)

    Ishii, Kyoko; Matsumiya, Hisato; Horie, Hideki; Miyagi, Kazumi

    2009-01-01

    The purpose of this work is to evaluate quantitatively and statistically the safety performance of the Super-Safe, Small, and Simple reactor (4S) by analyzing it with the ARGO code, a plant dynamics code for a sodium-cooled fast reactor. In this evaluation, an Anticipated Transient Without Scram (ATWS) is assumed, and an Unprotected Loss of Flow (ULOF) event is selected as a typical ATWS case. After a metric concerned with safety design is defined as a performance factor, a Phenomena Identification Ranking Table (PIRT) is produced in order to select the plausible phenomena that affect the metric. Then a sensitivity analysis is performed for the parameters related to the selected plausible phenomena. Finally, the metric is evaluated with statistical methods to determine whether it satisfies the given safety acceptance criteria. The result is as follows: the Cumulative Damage Fraction (CDF) for the cladding is defined as the metric, and the statistical estimate of the one-sided upper tolerance limit of 95 percent probability at a 95 percent confidence level in CDF is within the safety acceptance criterion, CDF < 0.1. The result shows that the 4S safety performance is acceptable in the ULOF event. (author)
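The 95/95 one-sided upper tolerance limit mentioned in the abstract is commonly obtained in nuclear safety analysis (as a general technique; the abstract does not state which variant this specific ARGO workflow used) from Wilks' nonparametric formula: with n random code runs, the largest observed value bounds the 95th percentile with 95% confidence once 1 - 0.95^n >= 0.95, which first holds at n = 59.

```python
def wilks_first_order_n(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample maximum is a one-sided upper
    tolerance limit with the given coverage and confidence (Wilks' formula)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

n_runs = wilks_first_order_n()  # classic 95/95 answer: 59 runs
```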

  7. Maternal Risk Factors for Singleton Preterm Births and Survival at ...

    African Journals Online (AJOL)

    Context: Risk factors for and survival of singleton preterm births may vary ... factors and survival‑to‑discharge rate for singleton preterm births at the University of ... Statistical analysis involved descriptive and inferential statistics at 95% level of ...

  8. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When constructing a statistical point cloud model, we usually need to compute corresponding points, and the resulting statistical model differs depending on the method used to compute them. This article examines how the method used to compute corresponding points affects statistical models of human organs. We validated the performance of each statistical model by registering it to an organ surface in a 3D medical image. We compare two methods for computing corresponding points. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces. The second, the entropy-based particle system, chooses corresponding points by statistically analyzing a number of curved surfaces. Using these methods we constructed the statistical models and performed registration against the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods for computing corresponding points affect the statistical model through changes in the probability density at each point. (author)

  9. Industrial statistics with Minitab

    CERN Document Server

    Cintas, Pere Grima; Llabres, Xavier Tort-Martorell

    2012-01-01

    Industrial Statistics with MINITAB demonstrates the use of MINITAB as a tool for performing statistical analysis in an industrial context. This book covers introductory industrial statistics, exploring the most commonly used techniques alongside those that serve to give an overview of more complex issues. A plethora of examples in MINITAB are featured along with case studies for each of the statistical techniques presented. Industrial Statistics with MINITAB: Provides comprehensive, user-friendly practical guidance on the essential statistical methods applied in industry. Explores

  10. Inferential misconceptions and replication crisis

    Directory of Open Access Journals (Sweden)

    Norbert Hirschauer

    2016-12-01

    Full Text Available Misinterpretations of the p value and the introduction of bias through arbitrary analytical choices have been discussed in the literature for decades. Nonetheless, they seem to have persisted in empirical research, and criticisms of p value misuses have increased in the recent past due to the non-replicability of many studies. Unfortunately, the critical concerns that have been raised in the literature are scattered over many disciplines, often linguistically confusing, and differing in their main reasons for criticisms. Misuses and misinterpretations of the p value are currently being discussed intensely under the label “replication crisis” in many academic disciplines and journals, ranging from specialized scientific journals to Nature and Science. In a drastic response to the crisis, the editors of the journal Basic and Applied Social Psychology even decided to ban the use of p values from future publications at the beginning of 2015, a fact that has certainly added fuel to the discussions in the relevant scientific forums. Finally, in early March, the American Statistical Association released a brief formal statement on p values that explicitly addresses misuses and misinterpretations. In this context, we systematize the most serious flaws related to the p value and discuss suggestions of how to prevent them and reduce the rate of false discoveries in the future.

  11. Inferential backbone assignment for sparse data

    International Nuclear Information System (INIS)

    Vitek, Olga; Bailey-Kellogg, Chris; Craig, Bruce; Vitek, Jan

    2006-01-01

    This paper develops an approach to protein backbone NMR assignment that effectively assigns large proteins while using limited sets of triple-resonance experiments. Our approach handles proteins with large fractions of missing data and many ambiguous pairs of pseudoresidues, and provides a statistical assessment of confidence in global and position-specific assignments. The approach is tested on an extensive set of experimental and synthetic data of up to 723 residues, with match tolerances of up to 0.5 ppm for C α and C β resonance types. The tests show that the approach is particularly helpful when data contain experimental noise and require large match tolerances. The keys to the approach are an empirical Bayesian probability model that rigorously accounts for uncertainty in the data at all stages in the analysis, and a hybrid stochastic tree-based search algorithm that effectively explores the large space of possible assignments

  12. What Do Deep Statistical Analysis on Gaming Motivation and Game Characteristics Clusters Reveal about Targeting Demographics when Designing Gamified Contents?

    Directory of Open Access Journals (Sweden)

    Alireza Tavakkoli

    2015-06-01

    Full Text Available This paper presents the comprehensive results of the study of a cohort of college graduate and undergraduate students who participated in playing a Massively Multiplayer Online Role Playing Game (MMORPG), a gameplay experience rich in social interaction as well as intellectual and aesthetic features. We present the full results of the study in the form of inferential statistics and a review of our descriptive statistics previously reported in [46]. Separate one-way independent-measures multivariate analyses of variance (MANOVAs) were used to analyze the data from several instruments to determine if there were statistically significant differences, first by gender, then by age group, and then by degree. Moreover, a one-way repeated-measures analysis of variance (ANOVA) was used to determine if there was a statistically significant difference between the 5 gaming clusters on the Game Characteristic Survey. Follow-up paired-samples t-tests were used to see if there was a statistically significant difference between each of the 10 possible combinations of paired clusters. Our results support the hypotheses and outline the features that may need to be taken into account to support tailoring gamified educational content targeting a certain demographic. Sections 1, 2, and 3 below from our previous study [46] are included because this is the second part of a two-part study. [46] Tavakkoli, A., Loffredo, D., Ward, M., Sr. (2014). "Insights from Massively Multiplayer Online Role Playing Games to Enhance Gamification in Education", Journal of Systemics, Cybernetics, and Informatics, 12(4), 66-78.
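The one-way ANOVA used in the study can be sketched on synthetic numbers: partition variability into between-group and within-group sums of squares and form the F ratio (the repeated-measures variant in the study additionally partitions out subject effects).

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of groups of observations."""
    all_obs = [x for g in groups for x in g]
    grand = sum(all_obs) / len(all_obs)
    means = [sum(g) / len(g) for g in groups]
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_b = len(groups) - 1
    df_w = len(all_obs) - len(groups)
    return (ssb / df_b) / (ssw / df_w)

f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # F = 3.0 on (2, 6) df
```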

  13. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    Science.gov (United States)

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  14. Transitivity performance, relational hierarchy knowledge and awareness: results of an instructional framing manipulation.

    Science.gov (United States)

    Kumaran, Dharshan; Ludwig, Hans

    2013-12-01

    The transitive inference (TI) paradigm has been widely used to examine the role of the hippocampus in generalization. Here we consider a surprising feature of experimental findings in this task: the relatively poor transitivity performance and levels of hierarchy knowledge achieved by adult human subjects. We focused on the influence of the task instructions on participants' subsequent performance--a single-word framing manipulation which either specified the relation between items as transitive (i.e., OLD-FRAME: choose which item is "older") or left it ambiguous (i.e., NO-FRAME: choose which item is "correct"). We show a marked but highly specific effect of manipulating prior knowledge through instruction: transitivity performance and levels of relational hierarchy knowledge were enhanced, but premise performance unchanged. Further, we show that hierarchy recall accuracy, but not conventional awareness scores, was a significant predictor of inferential performance across the entire group of participants. The current study has four main implications: first, our findings establish the importance of the task instructions, and prior knowledge, in the TI paradigm--suggesting that they influence the size of the overall hypothesis space (e.g., to favor a linear hierarchical structure over other possibilities in the OLD-FRAME). Second, the dissociable effects of the instructional frame on premise and inference performance provide evidence for the operation of distinct underlying mechanisms (i.e., an associative mechanism vs. relational hierarchy knowledge). Third, our findings suggest that a detailed measurement of hierarchy recall accuracy may be a more sensitive index of relational hierarchy knowledge, than conventional awareness score--and should be used in future studies investigating links between awareness and inferential performance. Finally, our study motivates an experimental setting that ensures robust hierarchy learning across participants

  15. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment.

    Science.gov (United States)

    Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C

    2015-02-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis method, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or not possible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance with technical, radiological, and statistical experts developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
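Two of the metrology quantities named above can be sketched from replicate measurements (synthetic numbers): bias as the mean deviation from a known reference, and the repeatability coefficient as 2.77 times the within-subject standard deviation, the 2.77 being approximately 1.96·√2, a convention in quantitative imaging metrology.

```python
from math import sqrt

def bias(measured, truth):
    """Mean deviation of measurements from a known reference value."""
    return sum(m - truth for m in measured) / len(measured)

def repeatability_coefficient(replicates_per_subject):
    """RC = 2.77 * within-subject SD, pooled over each subject's replicate list."""
    num, den = 0.0, 0
    for reps in replicates_per_subject:
        m = sum(reps) / len(reps)
        num += sum((r - m) ** 2 for r in reps)
        den += len(reps) - 1
    return 2.77 * sqrt(num / den)

rc = repeatability_coefficient([[10.0, 12.0], [20.0, 22.0]])
b = bias([10.0, 12.0], truth=11.0)  # zero bias in this synthetic case
```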

  16. Key Characteristics of Rehabilitation Quality Improvement Publications: Scoping Review From 2010 to 2016.

    Science.gov (United States)

    Jesus, Tiago S; Papadimitriou, Christina; Pinho, Cátia S; Hoenig, Helen

    2017-09-28

    To characterize the peer-reviewed quality improvement (QI) literature in rehabilitation. Five electronic databases were searched for English-language articles from 2010 to 2016. Keywords for QI and safety management were searched for in combination with keywords for rehabilitation content and journals. Secondary searches (eg, references-list scanning) were also performed. Two reviewers independently selected articles using working definitions of rehabilitation and QI study types; of 1016 references, 112 full texts were assessed for eligibility. Reported study characteristics including study focus, study setting, use of inferential statistics, stated limitations, and use of improvement cycles and theoretical models were extracted by 1 reviewer, with a second reviewer consulted whenever inferences or interpretation were involved. Fifty-nine empirical rehabilitation QI studies were found: 43 reporting on local QI activities, 7 reporting on QI effectiveness research, 8 reporting on QI facilitators or barriers, and 1 systematic review of a specific topic. The number of publications had significant yearly growth between 2010 and 2016 (P=.03). Among the 43 reports on local QI activities, 23.3% did not explicitly report any study limitations; 39.5% did not use inferential statistics to measure the QI impact; 95.3% did not cite/mention the appropriate reporting guidelines; only 18.6% reported multiple QI cycles; just over 50% reported using a model to guide the QI activity; and only 7% reported the use of a particular theoretical model. Study sites and focuses were diverse; however, nearly a third (30.2%) examined early mobilization in intensive care units. The number of empirical, peer-reviewed rehabilitation QI publications is growing but remains a tiny fraction of rehabilitation research publications. Rehabilitation QI studies could be strengthened by greater use of extant models and theory to guide the QI work, consistent reporting of study limitations, and use of

  17. National transportation statistics 2010

    Science.gov (United States)

    2010-01-01

    National Transportation Statistics presents statistics on the U.S. transportation system, including its physical components, safety record, economic performance, the human and natural environment, and national security. This is a large online documen...

  18. Implementation of Statistical Process Control: Evaluating the Mechanical Performance of a Candidate Silicone Elastomer Docking Seal

    Science.gov (United States)

    Oravec, Heather Ann; Daniels, Christopher C.

    2014-01-01

    The National Aeronautics and Space Administration has been developing a novel docking system to meet the requirements of future exploration missions to low-Earth orbit and beyond. A dynamic gas pressure seal is located at the main interface between the active and passive mating components of the new docking system. This seal is designed to operate in the harsh space environment, but is also to perform within strict loading requirements while maintaining an acceptable level of leak rate. In this study, a candidate silicone elastomer seal was designed, and multiple subscale test articles were manufactured for evaluation purposes. The force required to fully compress each test article at room temperature was quantified and found to be below the maximum allowable load for the docking system. However, a significant amount of scatter was observed in the test results. Due to the stochastic nature of the mechanical performance of this candidate docking seal, a statistical process control technique was implemented to isolate unusual compression behavior from typical mechanical performance. The results of this statistical analysis indicated a lack of process control, suggesting a variation in the manufacturing phase of the process. Further investigation revealed that changes in the manufacturing molding process had occurred which may have influenced the mechanical performance of the seal. This knowledge improves the chance of this and future space seals to satisfy or exceed design specifications.
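The statistical process control technique described above can be sketched generically (the compression loads below are synthetic, not the seal data): estimate the center line and 3-sigma control limits from an in-control baseline, then flag subsequent measurements falling outside the limits as unusual behavior worth investigating.

```python
from statistics import mean, stdev

def control_limits(baseline, k=3.0):
    """Lower limit, center line, and upper limit at k sigma from baseline data."""
    m, s = mean(baseline), stdev(baseline)
    return m - k * s, m, m + k * s

def out_of_control(points, lcl, ucl):
    """Indices of points falling outside the control limits."""
    return [i for i, p in enumerate(points) if p < lcl or p > ucl]

baseline = [10.0, 10.2, 9.8, 10.1, 9.9]         # e.g., compression loads (kN)
lcl, center, ucl = control_limits(baseline)
flags = out_of_control([10.1, 11.2], lcl, ucl)  # the 11.2 reading is flagged
```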

  19. Data, model, conclusion, doing it again

    NARCIS (Netherlands)

    Molenaar, W.

    1998-01-01

    This paper explores the robustness of conclusions from a statistical model against variations in model choice (rather than variations in random sampling and random assignment to treatments, which are the usual variations covered by inferential statistics). After the problem formulation in section 1,

  20. Measures of difference and significance in the era of computer simulations, meta-analysis, and big data

    NARCIS (Netherlands)

    Heijungs, R.; Henriksson, P.J.G.; Guinée, J.B.

    2016-01-01

    In traditional research, repeated measurements lead to a sample of results, and inferential statistics can be used to not only estimate parameters, but also to test statistical hypotheses concerning these parameters. In many cases, the standard error of the estimates decreases (asymptotically) with

  1. Optimization of Biodiesel-Diesel Blended Fuel Properties and Engine Performance with Ether Additive Using Statistical Analysis and Response Surface Methods

    Directory of Open Access Journals (Sweden)

    Obed M. Ali

    2015-12-01

    Full Text Available In this study, the fuel properties and engine performance of blended palm biodiesel-diesel using diethyl ether as an additive have been investigated. The properties of B30 blended palm biodiesel-diesel fuel were measured and analyzed statistically with the addition of 2%, 4%, 6% and 8% (by volume) diethyl ether additive. The engine tests were conducted at increasing engine speeds from 1500 rpm to 3500 rpm and under constant load. Optimization of the independent variables was performed using the desirability approach of the response surface methodology (RSM) with the goal of minimizing emissions and maximizing performance parameters. The experiments were designed using a statistical tool known as design of experiments (DoE) based on RSM.
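The response-surface idea can be sketched in one dimension with synthetic numbers (the actual study optimizes several variables with DoE software): fit a quadratic through three design points and take the stationary point as the candidate optimum.

```python
def quadratic_through(p1, p2, p3):
    """Coefficients (b0, b1, b2) of y = b0 + b1*x + b2*x^2 through three points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    s12 = (y2 - y1) / (x2 - x1)            # slope between first two points
    s23 = (y3 - y2) / (x3 - x2)            # slope between last two points
    b2 = (s23 - s12) / (x3 - x1)           # second divided difference
    b1 = s12 - b2 * (x1 + x2)
    b0 = y1 - b1 * x1 - b2 * x1 ** 2
    return b0, b1, b2

def stationary_point(b1, b2):
    """x where dy/dx = 0; a maximum when b2 < 0."""
    return -b1 / (2.0 * b2)

# Synthetic response (e.g., brake power) at 2%, 4%, 6% additive.
b0, b1, b2 = quadratic_through((2, 10), (4, 14), (6, 12))
x_opt = stationary_point(b1, b2)  # about 4.33% additive
```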

  2. Statistics in a nutshell

    CERN Document Server

    Boslaugh, Sarah

    2013-01-01

    Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.

  3. Oil pipeline performance review 1995, 1996, 1997, 1998 : Technical/statistical report

    International Nuclear Information System (INIS)

    2000-12-01

    This document provides a summary of the pipeline performance and reportable pipeline failures of liquid hydrocarbon pipelines in Canada, for the years 1995 through 1998. The year 1994 was the last one for which the Oil Pipeline Performance Review (OPPR) was published on an annual basis. The OPPR will continue to be published until such time as the Pipeline Risk Assessment Sub-Committee (PRASC) has obtained enough pipeline failure data to be aggregated into a meaningful report. The shifts in the mix of reporting pipeline companies are apparent in the data presented, comparing the volumes transported and the traffic volume during the previous ten-year period. Another table presents a summary of the failures which occurred during the period under consideration, 1995-1998, allowing for a comparison with the data for the previous ten-year period. From the current perspective and from an historical context, this document provides a statistical review of the performance of the pipelines, covering refined petroleum product pipelines, clean oil pipelines and High Vapour Pressure (HVP) pipelines downstream of battery limits. Classified as reportable are spills of 1.5 cubic metres or more of liquid hydrocarbons, any amount of HVP material, and any incident involving an injury, a death, a fire, or an explosion. For those companies that responded to the survey, the major items, including number of failures and volumes released, are accurate. Samples of the forms used for collecting the information are provided within the document. 6 tabs., 1 fig

  4. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    Science.gov (United States)

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
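
    The quantities the paper computes in Excel can be reproduced in a few lines. A sketch with illustrative data, using the t critical value for n − 1 = 7 degrees of freedom taken from a standard table:

```python
# Standard error of the mean and a 95% confidence interval, mirroring the
# descriptive-statistics steps the paper walks through in Excel.
import math
import statistics

data = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9, 4.2, 4.4]   # illustrative sample
n = len(data)
mean = statistics.mean(data)
sd = statistics.stdev(data)          # sample SD (n - 1 denominator)
se = sd / math.sqrt(n)               # standard error of the mean

t_crit = 2.365                       # t(0.975, df = 7) from a t-table
ci = (mean - t_crit * se, mean + t_crit * se)
print(round(mean, 3), round(se, 4), tuple(round(v, 3) for v in ci))
```

    The confidence interval is the bridge from descriptive to inferential statistics: it quantifies the uncertainty of the sample mean as an estimate of the population mean.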

  5. Whey or Casein Hydrolysate with Carbohydrate for Metabolism and Performance in Cycling.

    Science.gov (United States)

    Oosthuyse, T; Carstens, M; Millen, A M E

    2015-07-01

    The protein type most suitable for ingestion during endurance exercise is undefined. This study compared co-ingestion of either 15 g/h whey or casein hydrolysate with 63 g/h fructose: maltodextrin (0.8:1) on exogenous carbohydrate oxidation, exercise metabolism and performance. 2 h postprandial, 8 male cyclists ingested either: carbohydrate-only, carbohydrate-whey hydrolysate, carbohydrate-casein hydrolysate or placebo-water in a crossover, double-blind design during 2 h of exercise at 60% Wmax followed by a 16-km time trial. Data were evaluated by magnitude-based inferential statistics. Exogenous carbohydrate oxidation, measured from (13)CO2 breath enrichment, was not substantially influenced by co-ingestion of either protein hydrolysate. However, only co-ingestion of carbohydrate-casein hydrolysate substantially decreased (98% very likely decrease) total carbohydrate oxidation (mean±SD, 242±44; 258±47; 277±33 g for carbohydrate-casein, carbohydrate-whey and carbohydrate-only, respectively) and substantially increased (93% likely increase) total fat oxidation (92±14; 83±27; 73±19 g) compared with carbohydrate-only. Furthermore, only carbohydrate-casein hydrolysate ingestion resulted in a faster time trial (-3.6%; 90% CI: ±3.2%) compared with placebo-water (95% likely benefit). However, neither protein hydrolysate enhanced time trial performance when compared with carbohydrate-only. Under the conditions of this study, ingesting carbohydrate-casein, but not carbohydrate-whey hydrolysate, favourably alters metabolism during prolonged moderate-strenuous cycling without substantially altering cycling performance compared with carbohydrate-only. © Georg Thieme Verlag KG Stuttgart · New York.

  6. 78 FR 32652 - Agency Information Collections Activities: Proposed Collection; Comment Request

    Science.gov (United States)

    2013-05-31

    ... descriptive and inferential techniques appropriate to answering questions about outcomes and impacts..., Bureau of Labor Statistics. Request for Comments In accordance with the Paperwork Reduction Act, comments...

  7. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    Science.gov (United States)

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
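
    The recommended transformation is straightforward to apply before pooling. A sketch using hypothetical study-level C-statistics and standard errors, with a simple inverse-variance (fixed-effect) pool standing in for the full random-effects meta-analysis:

```python
# Pool C-statistics across validation studies on the logit scale, as the
# paper recommends, then back-transform. Study values and SEs below are
# hypothetical.
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

c_stats = [0.72, 0.68, 0.80, 0.75]    # C-statistic per study
se_logit = [0.10, 0.12, 0.15, 0.09]   # SE of logit(C) per study

weights = [1 / s ** 2 for s in se_logit]          # inverse-variance weights
pooled_logit = (sum(w * logit(c) for w, c in zip(weights, c_stats))
                / sum(weights))
pooled_c = inv_logit(pooled_logit)                # back to the C scale
print(round(pooled_c, 3))
```

    Pooling on the logit scale keeps the synthesis inside (0, 1) and makes the between-study distribution closer to normal, which is exactly why the paper recommends it over pooling raw C-statistics.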

  8. A review of statistical modelling and inference for electrical capacitance tomography

    International Nuclear Information System (INIS)

    Watzenig, D; Fox, C

    2009-01-01

    Bayesian inference applied to electrical capacitance tomography, or other inverse problems, provides a framework for quantified model fitting. Estimation of unknown quantities of interest is based on the posterior distribution over the unknown permittivity and unobserved data, conditioned on measured data. Key components in this framework are a prior model requiring a parametrization of the permittivity and a normalizable prior density, the likelihood function that follows from a decomposition of measurements into deterministic and random parts, and numerical simulation of noise-free measurements. Uncertainty in recovered permittivities arises from measurement noise, measurement sensitivities, model inaccuracy, discretization error and a priori uncertainty; each of these sources may be accounted for and in some cases taken advantage of. Estimates or properties of the permittivity can be calculated as summary statistics over the posterior distribution using Markov chain Monte Carlo sampling. Several modified Metropolis–Hastings algorithms are available to speed up this computationally expensive step. The bias in estimates that is induced by the representation of unknowns may be avoided by design of a prior density. The differing purpose of applications means that there is no single 'Bayesian' analysis. Further, differing solutions will use different modelling choices, perhaps influenced by the need for computational efficiency. We solve a reference problem of recovering the unknown shape of a constant permittivity inclusion in an otherwise uniform background. Statistics calculated in the reference problem give accurate estimates of inclusion area, and other properties, when using measured data. The alternatives available for structuring inferential solutions in other applications are clarified by contrasting them against the choice we made in our reference solution. (topical review)
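
    The Markov chain Monte Carlo step can be illustrated with a minimal random-walk Metropolis-Hastings sampler on a toy one-dimensional target (a standard normal standing in for the posterior over permittivity), with posterior summaries computed from the chain:

```python
# Minimal random-walk Metropolis-Hastings sampler with posterior summary
# statistics, illustrating the MCMC step described above on a toy target.
import math
import random

random.seed(1)

def log_target(x):
    return -0.5 * x * x                  # N(0,1) log-density, up to a constant

x, samples = 0.0, []
for _ in range(20000):
    prop = x + random.gauss(0.0, 1.0)    # symmetric random-walk proposal
    delta = log_target(prop) - log_target(x)
    if random.random() < math.exp(min(0.0, delta)):
        x = prop                         # accept; otherwise keep current state
    samples.append(x)

burned = samples[2000:]                  # discard burn-in
post_mean = sum(burned) / len(burned)
post_var = sum((s - post_mean) ** 2 for s in burned) / len(burned)
```

    In the tomography setting the state is a permittivity parametrization rather than a scalar, and each target evaluation requires a forward simulation of noise-free measurements, which is why the review discusses modified Metropolis-Hastings variants to reduce that cost.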

  9. Statistical cluster analysis and diagnosis of nuclear system level performance

    International Nuclear Information System (INIS)

    Teichmann, T.; Levine, M.M.; Samanta, P.K.; Kato, W.Y.

    1985-01-01

    The complexity of individual nuclear power plants and the importance of maintaining reliable and safe operations makes it desirable to complement the deterministic analyses of these plants by corresponding statistical surveys and diagnoses. Based on such investigations, one can then explore, statistically, the anticipation, prevention, and when necessary, the control of such failures and malfunctions. This paper, and the accompanying one by Samanta et al., describe some of the initial steps in exploring the feasibility of setting up such a program on an integrated and global (industry-wide) basis. The conceptual statistical and data framework was originally outlined in BNL/NUREG-51609, NUREG/CR-3026, and the present work aims at showing how some important elements might be implemented in a practical way (albeit using hypothetical or simulated data)

  10. 78 FR 49516 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Science.gov (United States)

    2013-08-14

    ... descriptive and inferential techniques appropriate to answering questions about outcomes and impacts... Estimates, United States.'' U.S. Department of Labor, Bureau of Labor Statistics. Request for Comments In...

  11. Introductory Life Science Mathematics and Quantitative Neuroscience Courses

    Science.gov (United States)

    Duffus, Dwight; Olifer, Andrei

    2010-01-01

    We describe two sets of courses designed to enhance the mathematical, statistical, and computational training of life science undergraduates at Emory College. The first course is an introductory sequence in differential and integral calculus, modeling with differential equations, probability, and inferential statistics. The second is an…

  12. Utilization of cocoyam production technologies among women ...

    African Journals Online (AJOL)

    The study analysed utilization of improved cocoyam production technologies among women in Abia State, Nigeria. A multistage random sampling technique was used to select sixty (60) women. Data for the study were collected using a structured questionnaire and analysed with descriptive statistics and inferential statistics ...

  13. Effect of the Target Motion Sampling Temperature Treatment Method on the Statistics and Performance

    Science.gov (United States)

    Viitanen, Tuomas; Leppänen, Jaakko

    2014-06-01

    Target Motion Sampling (TMS) is a stochastic on-the-fly temperature treatment technique that is being developed as a part of the Monte Carlo reactor physics code Serpent. The method provides for modeling of arbitrary temperatures in continuous-energy Monte Carlo tracking routines with only one set of cross sections stored in the computer memory. Previously, only the performance of the TMS method in terms of CPU time per transported neutron has been discussed. Since the effective cross sections are not calculated at any point of a transport simulation with TMS, reaction rate estimators must be scored using sampled cross sections, which is expected to increase the variances and, consequently, to decrease the figures-of-merit. This paper examines the effects of TMS on the statistics and performance in practical calculations involving reaction rate estimation with collision estimators. Against all expectations, it turned out that the usage of sampled response values has no practical effect on the performance of reaction rate estimators when using TMS with elevated basis cross section temperatures (EBT), i.e. the usual way. With 0 Kelvin cross sections a significant increase in the variances of capture rate estimators was observed right below the energy region of unresolved resonances, but at these energies the figures-of-merit could be increased using a simple resampling technique to decrease the variances of the responses. It was, however, noticed that the usage of the TMS method increases the statistical deviations of all estimators, including the flux estimator, by tens of percent in the vicinity of very strong resonances. This effect is actually not related to the usage of sampled responses, but is instead an inherent property of the TMS tracking method and concerns both EBT and 0 K calculations.
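
    The figure-of-merit used in such comparisons is FOM = 1/(R²T), where R is the relative statistical error of an estimator and T the computing time; larger variance at fixed run time lowers it. A sketch with illustrative numbers (not the paper's results):

```python
# Monte Carlo figure-of-merit, FOM = 1 / (R^2 * T): a variance increase at
# equal run time shows up directly as a lower FOM. Numbers are illustrative.
def fom(rel_err, cpu_time_s):
    return 1.0 / (rel_err ** 2 * cpu_time_s)

fom_ebt = fom(0.010, 600.0)   # 1.0% relative error in 600 s (EBT-style run)
fom_0k = fom(0.018, 600.0)    # higher variance with 0 K basis cross sections
print(fom_ebt > fom_0k)       # -> True
```

    Because FOM scales with 1/R², even a modest variance increase (here 1.0% to 1.8% relative error) cuts the figure-of-merit by more than a factor of three.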

  14. Frequency of use, awareness, and attitudes toward side effects of anabolic-androgenic steroids consumption among male medical students in Iran.

    Science.gov (United States)

    Fayyazi Bordbar, Mohammad Reza; Abdollahian, Ebrahim; Samadi, Roya; Dolatabadi, Hamid

    2014-11-01

    This study was conducted to determine the frequency of anabolic-androgenic steroid consumption among male university students, along with their awareness, attitudes, and the role of sports activities. The present descriptive study was conducted on 271 volunteers in 2008. The data, collected by self-report questionnaires, were analyzed by descriptive and inferential statistics. The prevalence of consumption was 3.3%, and it was significantly higher in those with a history of bodybuilding or athletic performance. The overall awareness rate was low, and the attitude was too optimistic. It seems that unawareness, incorrect attitudes, and a history of athletic performance increase the risk of consumption.

  15. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratory Zonal Atmospheric Model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

  16. Inferential monitoring of global change impact on biodiversity through remote sensing and species distribution modeling

    Science.gov (United States)

    Sangermano, Florencia

    2009-12-01

    The world is suffering from rapid changes in both climate and land cover, which are the main factors affecting global biodiversity. These changes may affect ecosystems by altering species distributions, population sizes, and community compositions, which emphasizes the need for a rapid assessment of biodiversity status for conservation and management purposes. Current approaches to monitoring biodiversity rely mainly on long-term observations of predetermined sites, which require large amounts of time, money and personnel to be executed. In order to overcome problems associated with current field monitoring methods, the main objective of this dissertation is the development of a framework for inferential monitoring of the impact of global change on biodiversity based on remotely sensed data coupled with species distribution modeling techniques. Several research pieces were performed independently in order to fulfill this goal. First, species distribution modeling was used to identify the ranges of 6362 birds, mammals and amphibians in South America. Chapter 1 compares the power of different presence-only species distribution methods for modeling distributions of species with different response curves to environmental gradients and sample sizes. It was found that there is large variability in the power of the methods for modeling habitat suitability and species ranges, showing the importance of performing, when possible, a preliminary gradient analysis of the species distribution before selecting the method to be used. Chapter 2 presents a new methodology for the redefinition of species range polygons. Using a method capable of establishing the uncertainty in the definition of existing range polygons, the automated procedure identifies the relative importance of bioclimatic variables for the species, predicts their ranges and generates a quality assessment report to explore prediction errors. Analysis using independent validation data shows the power of this

  17. Statistical properties of a utility measure of observer performance compared to area under the ROC curve

    Science.gov (United States)

    Abbey, Craig K.; Samuelson, Frank W.; Gallas, Brandon D.; Boone, John M.; Niklason, Loren T.

    2013-03-01

    The receiver operating characteristic (ROC) curve has become a common tool for evaluating diagnostic imaging technologies, and the primary endpoint of such evaluations is the area under the curve (AUC), which integrates sensitivity over the entire false positive range. An alternative figure of merit for ROC studies is expected utility (EU), which focuses on the relevant region of the ROC curve as defined by disease prevalence and the relative utility of the task. However, if this measure is to be used, it must also have desirable statistical properties to keep the burden of observer performance studies as low as possible. Here, we evaluate effect size and variability for EU and AUC. We use two observer performance studies recently submitted to the FDA to compare the EU and AUC endpoints. The studies were conducted using the multi-reader multi-case methodology in which all readers score all cases in all modalities. ROC curves from the study were used to generate both the AUC and EU values for each reader and modality. The EU measure was computed assuming an iso-utility slope of 1.03. We find mean effect sizes, the reader-averaged difference between modalities, to be roughly 2.0 times as large for EU as for AUC. The standard deviation across readers is roughly 1.4 times as large, suggesting better statistical properties for the EU endpoint. In a simple power analysis of paired comparison across readers, the utility measure required 36% fewer readers on average to achieve 80% statistical power compared to AUC.
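
    The two endpoints can be computed from the same empirical ROC points. A sketch using hypothetical operating points, with AUC obtained by the trapezoidal rule and EU taken here as the maximum of TPF − beta·FPF along the curve (a common simplification of the utility formulation, using the study's iso-utility slope of 1.03):

```python
# Trapezoidal AUC and an expected-utility style figure of merit computed
# from a few hypothetical empirical ROC operating points.
fpf = [0.0, 0.1, 0.25, 0.5, 1.0]    # false positive fractions
tpf = [0.0, 0.55, 0.75, 0.9, 1.0]   # true positive fractions

# AUC: trapezoidal rule over successive ROC segments.
auc = sum((f2 - f1) * (t1 + t2) / 2
          for f1, f2, t1, t2 in zip(fpf, fpf[1:], tpf, tpf[1:]))

# EU (simplified): best achievable TPF - beta*FPF along the curve.
beta = 1.03                          # iso-utility slope from the study
eu = max(t - beta * f for f, t in zip(fpf, tpf))
print(round(auc, 5), round(eu, 4))
```

    Unlike AUC, which integrates over the whole false-positive range, the EU maximum is attained at a single operating point, which is how the measure concentrates on the clinically relevant region of the curve.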

  18. DOI: 10.18697/ajfand.77.16680 11477 AN ADAPTIVE HOUSEHOLD ...

    African Journals Online (AJOL)

    Rebecca Awuah

    4Research Assistant, Department of Arts & Sciences, Ashesi University, Berekuso, ... using inferential statistical methods, then obtaining a random and ... and without birth certificates, voter registration, or social security identities; dwellings.

  19. Research Article Special Issue

    African Journals Online (AJOL)

    2017-02-15

    Feb 15, 2017 ... MA of Clinical Psychology –University of Medical Sciences North ... physical symptoms caused some social and psychological .... information obtained during the investigation, descriptive and inferential statistical methods.

  20. Assessing Major Adjustment Problems of Freshman Students in ...

    African Journals Online (AJOL)

    Ethiopian Journal of Education and Sciences ... The data was analyzed by using both descriptive and inferential statistical methods. ... in Jimma University experience social adjustment problems than educational and personalpsychological, ...

  1. Strategic environment and bank performance; (Empirical study of bank listed in Indonesian stock exchange period 2011-2015

    Directory of Open Access Journals (Sweden)

    Mursalim Nohong

    2017-03-01

    This study aimed to explain the interaction between the macroeconomic and internal environments and the performance of banks in Indonesia. The analysed data were obtained from 10 banks over a 5-year observation period, using descriptive and inferential analysis through the PLS program. The results showed that the BI rate is the most significant indicator in measuring changes in the macro environment, the efficiency ratio for the internal environment variable, and ROA for the performance variable. Further analysis showed that changes in the macro environment do not significantly influence the efficiency and performance of the banking system. However, efficiency, as measured by the BOPO ratio, has a significant effect on performance.

  2. An Analysis of Research Trends in Dissertations and Theses Studying Blended Learning

    Science.gov (United States)

    Drysdale, Jeffery S.; Graham, Charles R.; Spring, Kristian J.; Halverson, Lisa R.

    2013-01-01

    This article analyzes the research of 205 doctoral dissertations and masters' theses in the domain of blended learning. A summary of trends regarding the growth and context of blended learning research is presented. Methodological trends are described in terms of qualitative, inferential statistics, descriptive statistics, and combined approaches…

  3. The Development and Demonstration of Multiple Regression Models for Operant Conditioning Questions.

    Science.gov (United States)

    Fanning, Fred; Newman, Isadore

    Based on the assumption that inferential statistics can make the operant conditioner more sensitive to possible significant relationships, regression models were developed to test the statistical significance between slopes and Y intercepts of the experimental and control group subjects. These results were then compared to the traditional operant…

  4. Developing Sampling Frame for Case Study: Challenges and Conditions

    Science.gov (United States)

    Ishak, Noriah Mohd; Abu Bakar, Abu Yazid

    2014-01-01

    Because of its reliance on statistical analysis, the issue of random sampling is pertinent to any quantitative study. Unlike in quantitative studies, the elimination of inferential statistical analysis allows qualitative researchers to be more creative in dealing with the sampling issue. Since results from a qualitative study cannot be generalized to the bigger population,…

  5. Investigating the role of organizational happiness inteachers ...

    African Journals Online (AJOL)

    In order to collect the required data, the Oxford Happiness questionnaire and Maslach Burnout Inventory (MBI) questionnaire have been used. In order to analyze the data and examine the hypotheses, descriptive statistic indices including mean and standard deviation as well as inferential statistics such as Pearson's ...

  6. Statistical Techniques For Real-time Anomaly Detection Using Spark Over Multi-source VMware Performance Data

    Energy Technology Data Exchange (ETDEWEB)

    Solaimani, Mohiuddin [Univ. of Texas-Dallas, Richardson, TX (United States); Iftekhar, Mohammed [Univ. of Texas-Dallas, Richardson, TX (United States); Khan, Latifur [Univ. of Texas-Dallas, Richardson, TX (United States); Thuraisingham, Bhavani [Univ. of Texas-Dallas, Richardson, TX (United States); Ingram, Joey Burton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Anomaly detection refers to the identification of an irregular or unusual pattern which deviates from what is standard, normal, or expected. Such deviated patterns typically correspond to samples of interest and are assigned different labels in different domains, such as outliers, anomalies, exceptions, or malware. Detecting anomalies in fast, voluminous streams of data is a formidable challenge. This paper presents a novel, generic, real-time distributed anomaly detection framework for heterogeneous streaming data where anomalies appear as a group. We have developed a distributed statistical approach to build a model and later use it to detect anomalies. As a case study, we investigate group anomaly detection for a VMware-based cloud data center, which maintains a large number of virtual machines (VMs). We have built our framework using Apache Spark to get higher throughput and lower data processing time on streaming data. We have developed a window-based statistical anomaly detection technique to detect anomalies that appear sporadically. We then relaxed this constraint with higher accuracy by implementing a cluster-based technique to detect sporadic and continuous anomalies. We conclude that our cluster-based technique outperforms other statistical techniques with higher accuracy and lower processing time.
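
    The window-based statistical technique mentioned above can be sketched without Spark: score each new point against the mean and standard deviation of a sliding window of recent history and flag large z-scores. Data and threshold here are illustrative; the paper's distributed, cluster-based variant is not reproduced:

```python
# Window-based statistical anomaly detection on a stream: flag points whose
# z-score against a sliding window of recent history exceeds a threshold.
import math

stream = [10, 11, 9, 10, 12, 10, 11, 10, 9, 11, 30, 10, 11, 10]
window, threshold = 8, 3.0
anomalies = []

for i in range(window, len(stream)):
    hist = stream[i - window:i]
    mu = sum(hist) / window
    sd = math.sqrt(sum((v - mu) ** 2 for v in hist) / (window - 1))
    z = (stream[i] - mu) / sd if sd > 0 else 0.0
    if abs(z) > threshold:
        anomalies.append((i, stream[i]))

print(anomalies)                     # -> [(10, 30)]
```

    A known weakness of the purely windowed variant, which motivates the paper's cluster-based extension, is that a flagged outlier then inflates the window statistics for the next few points, masking continuous anomalies.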

  7. Identification of robust statistical downscaling methods based on a comprehensive suite of performance metrics for South Korea

    Science.gov (United States)

    Eum, H. I.; Cannon, A. J.

    2015-12-01

    Climate models are a key provider for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, in particular for regions characterized by complex terrain such as the Korean peninsula. Therefore, a downscaling procedure is essential to assess regional impacts of climate change. Numerous statistical downscaling methods have been used, mainly due to their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Under a split-sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested on the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distributions, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR, and all methods lead to large improvements in representing all performance metrics. According to the seasonal performance metrics evaluated, when TOPSIS is applied, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is recognized as near-perfect climate data in climate studies. Therefore, the
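
    The TOPSIS step ranks methods by their closeness to an ideal point in metric space. A compact sketch over hypothetical per-method scores on three benefit-type metrics (the study's actual metric values are not given in the abstract):

```python
# TOPSIS ranking: normalize the decision matrix, locate ideal/anti-ideal
# points, and rank alternatives by relative closeness. Scores are
# hypothetical (rows: methods; columns: benefit-type performance metrics).
import math

methods = ["BCSD", "BCCA", "MACA", "BCCI"]
scores = [[0.80, 0.70, 0.60],
          [0.75, 0.65, 0.70],
          [0.90, 0.85, 0.80],
          [0.78, 0.72, 0.66]]

# 1. Vector-normalize each column.
norms = [math.sqrt(sum(row[j] ** 2 for row in scores)) for j in range(3)]
norm_scores = [[row[j] / norms[j] for j in range(3)] for row in scores]

# 2. Ideal and anti-ideal points (all metrics treated as benefits here).
ideal = [max(r[j] for r in norm_scores) for j in range(3)]
worst = [min(r[j] for r in norm_scores) for j in range(3)]

# 3. Closeness coefficient d- / (d+ + d-): higher is better.
def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

closeness = [dist(r, worst) / (dist(r, ideal) + dist(r, worst))
             for r in norm_scores]
best = methods[closeness.index(max(closeness))]
print(best)
```

    With these illustrative scores one method dominates every metric, so its closeness coefficient is exactly 1; in practice weights per metric and cost-type criteria (to be minimized) are added to the same scheme.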

  8. Role Of Non-Governmental Organizations Leadership In The Implementation Of Community Development Projects In Arumeru District Tanzania

    Directory of Open Access Journals (Sweden)

    Rajabu Ally Mtunge

    2015-08-01

    The purpose of this study was to examine the role of leadership in the implementation of community development projects by local non-governmental organizations in Arumeru District, Tanzania. The study applied a survey design covering a sample of 46 respondents, including the District Executive Director, District Social Workers, Non-Governmental Organization leaders, workers and volunteers, and community members in Arumeru District, Tanzania. The study employed a simple random sampling technique to ensure each individual an equal chance of being involved in the study, as inferential statistics were considered. Data were collected from a sample of 46 NGO employees using a semi-structured questionnaire with both closed and open-ended questions. The collected data were analyzed using both descriptive and inferential statistics. The descriptive statistical tools used included frequencies, means and standard deviations, while the inferential statistical tool used was correlation. The Statistical Package for the Social Sciences (SPSS), version 19, was used for analyzing the data collected. The study achieved a response of 46 out of a sample of 47, representing a response rate of 97.87%. The results show that a significant number of NGOs (34.8%) had not completed their projects, 21.7% stated that fewer than five projects were complete, and 43.5% of the respondents confirmed that more than five projects were not completed over the last one year. Regarding the influence of leadership on project implementation, Spearman's rank correlation revealed a very strong positive correlation (0.910) between leadership vision and implementation of community development projects, a strong positive correlation between communication and implementation of community development projects (rho = 0.730, n = 46, p = .001), a strong positive correlation between commitment and implementation of community development projects which was statistically significant (rs = .601, p = .000), and a positive correlation between accountability and
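
    Spearman's rank correlation, the inferential tool used in the study, can be computed from scratch. A sketch on a small hypothetical sample with no tied ranks (the study's raw questionnaire scores are not available):

```python
# Spearman's rank correlation via the classic formula
# rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)), valid when there are no ties.
def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

leadership = [3, 7, 5, 9, 4, 8, 6, 2]   # hypothetical leadership-vision scores
completion = [2, 8, 4, 9, 3, 7, 6, 1]   # hypothetical project-completion scores

rx, ry = ranks(leadership), ranks(completion)
n = len(rx)
d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
rho = 1 - 6 * d2 / (n * (n * n - 1))
print(round(rho, 3))
```

    Because the statistic operates on ranks rather than raw values, it captures any monotone association, which is why it suits ordinal questionnaire data like the Likert-style scores in this study.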

  9. Effect of Task Presentation on Students' Performances in Introductory Statistics Courses

    Science.gov (United States)

    Tomasetto, Carlo; Matteucci, Maria Cristina; Carugati, Felice; Selleri, Patrizia

    2009-01-01

    Research on academic learning indicates that many students experience major difficulties with introductory statistics and methodology courses. We hypothesized that students' difficulties may depend in part on the fact that statistics tasks are commonly viewed as related to the threatening domain of math. In two field experiments which we carried…

  10. Deterrents to the Success of Micro and Small Enterprises in Akaki ...

    African Journals Online (AJOL)

    descriptive survey) and qualitative (exploratory research) methods. ... The quantitative data were analyzed using appropriate descriptive and inferential statistics while the qualitative data were analyzed by using content analysis and narration ...

  11. Air Force Recruitment: A Geographic Perspective

    National Research Council Canada - National Science Library

    Ross, Jason J

    2000-01-01

    ... of relying upon propensity studies. Descriptive and inferential statistics were used to create and evaluate both non-spatial and spatially autocorrelated models to determine the best method for predicting recruitment...

  12. Mode transition and change in variable use in perceptual learning

    NARCIS (Netherlands)

    Hajnal, A; Grocki, M; Jacobs, DM; Zaal, FTJM; Michaels, CF

    2006-01-01

    Runeson, Juslin, and Olsson (2000) proposed (a) that perceptual learning entails a transition from an inferential to a direct-perceptual mode of apprehension, and (b) that relative confidence-the difference between estimated and actual performance-indicates whether apprehension is inferential or

  13. Mode transition and change in variable use in perceptual learning

    NARCIS (Netherlands)

    Hajnal, A.; Grocki, M.; Jacobs, D.M.; Zaal, F.T.J.M.; Michaels, C.F.

    2006-01-01

    Runeson, Juslin, and Olsson (2000) proposed (a) that perceptual learning entails a transition from an inferential to a direct-perceptual mode of apprehension, and (b) that relative confidence - the difference between estimated and actual performance - indicates whether apprehension is inferential or

  14. The effects of motor vehicle accidents on careers and the work performance of victims

    Directory of Open Access Journals (Sweden)

    Johanna C. Diedericks

    2014-04-01

    Research purpose: The purpose of this study was to contribute to research on the effects of the injuries by investigating the relationship between the severity of the injuries and the careers and growth potential of victims. Motivation for the study: Employers could use the information on the effects of the injuries on the careers of victims to plan interventions and job accommodations to retain employees and to manage their well-being and performance. Research design, approach and method: The author conducted a quantitative survey on a purposive sample (N = 199) of adult victims of motor vehicle accidents in 2010 in South Africa. She used descriptive and inferential statistics to analyse the data. Main findings: The author observed a number of significant relationships between the effects of the different injuries on the careers and growth potential of victims. Practical/managerial implications: Organisations and managers need to recognise the physical and psychological effects of injuries victims sustain in motor accidents and the associated responsibility of organisations to accommodate these employees. Contribution/value-add: The findings of the study can add to the literature and provide insights into the consequences of the injuries. They also provide information that can assist organisations to create an awareness of job accommodation and employee wellness of accident victims.

  15. Is It True That "Blonds Have More Fun"?

    Science.gov (United States)

    Bonsangue, Martin V.

    1992-01-01

    Describes the model for decision making used in inferential statistics and real-world applications that parallel the statistical model. Discusses two activities that ask students to write about a personal decision-making experience and create a mock trial in which the class makes the decision of guilt or innocence. (MDH)

  16. The Future Impact of Vietnam Era Veterans on Inpatient Acute Care and Mental Health Product Lines at a Veterans Affairs Medical Center

    National Research Council Canada - National Science Library

    Parker, Robert

    2000-01-01

    .... This retrospective study uses descriptive statistics, inferential statistics, and trend analysis to observe, describe, explain, predict, test, and evaluate hypotheses associated with the relationship between non-VEV and VEV admissions. The results from this will be used to assist in developing a forecasting methodology using a best curve fit model.
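
    The "best curve fit" forecasting approach this record describes can be sketched with an ordinary least-squares trend fit. The yearly admission counts below are hypothetical placeholders; the study's actual data are not public.

```python
import numpy as np

# Hypothetical yearly admission counts (illustrative only)
years = np.arange(1990, 2000, dtype=float)
admissions = np.array([120, 131, 145, 150, 162, 171, 180, 186, 195, 201], dtype=float)

# Least-squares linear trend: admissions ~ a * year + b
a, b = np.polyfit(years, admissions, 1)

# Extrapolate the fitted trend one year beyond the data
forecast_2000 = a * 2000 + b
print(f"slope = {a:.2f} per year, forecast for 2000 = {forecast_2000:.0f}")
```

    In practice a "best fit" would be chosen among several candidate curves (linear, polynomial, exponential) using a held-out period or an information criterion rather than in-sample residuals alone, since higher-order fits always reduce in-sample error.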

  17. Effects of Climate Change on Poultry Production in Ondo State ...

    African Journals Online (AJOL)

    The study assessed the effects of climate change on poultry production in Ondo State, Nigeria. Eighty three (83) poultry farmers were interviewed to elicit relevant information in line with the objectives of the study. Descriptive statistics and inferential statistical tools were used for data analysis. Findings revealed that majority ...

  18. Training Needs of Vocational Forestry Staff in Ogun State Nigeria ...

    African Journals Online (AJOL)

    These concerns gave rise to this study, with specific objectives to assess the level of knowledge and level of skills of vocational staff in forestry activities. Data were collected using a simple random sampling technique in the selection of 50% of vocational staff, totaling 143 respondents. Descriptive statistics and inferential statistics were ...

  19. Identifying Androgen Receptor-Independent Mechanisms of Prostate Cancer Resistance to Second-Generation Antiandrogen Therapy

    Science.gov (United States)

    2016-08-01

    topics in descriptive and inferential statistics including hypothesis testing, regression, and multivariate analysis. I look forward to continuing my...analysis, and CRISPR/Cas9 gene editing. To facilitate my clinical-translational skills, I took part in a statistics seminar course that covered... statistics and data science training with further coursework this year offered at MSKCC and through massive open online courses (MOOCs

  20. Earth Observation System Flight Dynamics System Covariance Realism

    Science.gov (United States)

    Zaidi, Waqar H.; Tracewell, David

    2016-01-01

    This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: collection of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
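
    A standard covariance realism test statistic of the kind this record alludes to is the squared Mahalanobis distance of the state error, which is chi-square distributed with n degrees of freedom when the predicted covariance is realistic. The sketch below checks this with synthetic errors; the covariance matrix is an invented example, not EOS data.

```python
import numpy as np

rng = np.random.default_rng(9)

# If the predicted covariance P truly describes the state errors, then the
# squared Mahalanobis distance e^T P^{-1} e is chi-square with n dof
n, trials = 3, 10_000
P = np.diag([4.0, 1.0, 0.25])                       # hypothetical 3x3 covariance
errors = rng.multivariate_normal(np.zeros(n), P, size=trials)

# One squared Mahalanobis distance per sampled error vector
m2 = np.einsum("ij,jk,ik->i", errors, np.linalg.inv(P), errors)
print(m2.mean())  # close to n when the covariance is realistic
```

    In an operational assessment, each propagated covariance would be tested against observed orbit-determination errors this way, and a sample mean far from n would flag an unrealistic (over- or under-sized) covariance.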

  1. The quantitative failure of human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, C.T.

    1995-07-01

    This philosophical treatise argues the merits of Human Reliability Analysis (HRA) in the context of the nuclear power industry. In fact, the author attacks historic and current HRA as having failed to inform policy makers who make decisions based on the risk that humans contribute to systems performance. He argues for an HRA based on Bayesian (fact-based) inferential statistics, which advocates a systems analysis process that employs cogent heuristics when using opinion, and tempers itself with a rational debate over the weight given to subjective and empirical probabilities.

  2. Applied statistics for economists

    CERN Document Server

    Lewis, Margaret

    2012-01-01

    This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.

  3. Joint statistics of partial sums of ordered exponential variates and performance of GSC RAKE receivers over rayleigh fading channel

    KAUST Repository

    Nam, Sungsik

    2011-08-01

    Spread spectrum receivers with generalized selection combining (GSC) RAKE reception were proposed and have been studied as alternatives to the classical two fundamental schemes: maximal ratio combining and selection combining because the number of diversity paths increases with the transmission bandwidth. Previous work on performance analyses of GSC RAKE receivers based on the signal to noise ratio focused on the development of methodologies to derive exact closed-form expressions for various performance measures. However, some open problems related to the performance evaluation of GSC RAKE receivers still remain to be solved such as the exact performance analysis of the capture probability and an exact assessment of the impact of self-interference on GSC RAKE receivers. The major difficulty in these problems is to derive some joint statistics of ordered exponential variates. With this motivation in mind, we capitalize in this paper on some new order statistics results to derive exact closed-form expressions for the capture probability and outage probability of GSC RAKE receivers subject to self-interference over independent and identically distributed Rayleigh fading channels, and compare it to that of partial RAKE receivers. © 2011 IEEE.
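
    The order statistics at the heart of this record (the Lc largest of L i.i.d. exponential path SNRs under Rayleigh fading) are easy to explore by Monte Carlo. The sketch below estimates the outage probability of a GSC RAKE receiver and compares it with pure selection combining; all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def gsc_outage(L=5, Lc=3, mean_snr=1.0, threshold=1.0, n_trials=200_000):
    """Monte Carlo outage probability of GSC over i.i.d. Rayleigh fading:
    per-path SNRs are exponential, and GSC sums the Lc largest of L paths."""
    snr = rng.exponential(mean_snr, size=(n_trials, L))
    snr.sort(axis=1)                      # ascending order per trial
    combined = snr[:, -Lc:].sum(axis=1)   # sum of the Lc strongest paths
    return np.mean(combined < threshold)  # P(combined SNR below threshold)

p_gsc = gsc_outage(Lc=3)   # generalized selection combining, 3 of 5 paths
p_sc = gsc_outage(Lc=1)    # classical selection combining, best path only
print(p_gsc, p_sc)
```

    The simulation reproduces the qualitative point motivating the closed-form analysis: combining more of the ordered paths strictly lowers the outage probability.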

  4. Analysis of cost efficiency in food crop production among small ...

    African Journals Online (AJOL)

    The analytical tools were descriptive statistics involving the use of frequency tables and inferential ... The study recommended farmers education on fundamental farm ... plan, evaluate and appraise their farm business activities among others.

  5. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

    Science.gov (United States)

    Angeler, David G; Viedma, Olga; Moreno, José M

    2009-11-01

    Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinate of neighborhood matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-) communities.
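
    The time lag analysis the record evaluates can be sketched in a few lines: compute community dissimilarity for every pair of samples, then regress dissimilarity on the square root of the time lag; a positive slope indicates directional change. The community matrix below is simulated with an imposed trend and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical community matrix: 20 yearly samples x 10 species abundances,
# with a directional trend so dissimilarity grows with time lag
T, S = 20, 10
trend = np.linspace(0, 5, T)[:, None] * rng.random(S)
community = rng.poisson(5, size=(T, S)) + trend

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return np.abs(a - b).sum() / (a + b).sum()

lags, dissim = [], []
for lag in range(1, T):
    for t in range(T - lag):
        lags.append(lag)
        dissim.append(bray_curtis(community[t], community[t + lag]))

# TLA: regress dissimilarity on sqrt(time lag); a significantly positive
# slope is read as evidence of directional community change
slope, intercept = np.polyfit(np.sqrt(lags), dissim, 1)
print(f"slope = {slope:.4f}")
```

    Note the limitation the record emphasizes: the regression says *that* the community drifts, but unlike RDA-PCNM it cannot say which species drive the drift or separate several overlapping temporal patterns.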

  6. Study of statistical properties of hybrid statistic in coherent multi-detector compact binary coalescences Search

    OpenAIRE

    Haris, K; Pai, Archana

    2015-01-01

    In this article, we revisit the problem of coherent multi-detector search of gravitational wave from compact binary coalescence with Neutron stars and Black Holes using advanced interferometers like LIGO-Virgo. Based on the loss of optimal multi-detector signal-to-noise ratio (SNR), we construct a hybrid statistic as the best of maximum-likelihood-ratio (MLR) statistics tuned for face-on and face-off binaries. The statistical properties of the hybrid statistic are studied. The performance of this ...

  7. Perceived Sleep Quality, Mood States, and Their Relationship With Performance Among Brazilian Elite Athletes During a Competitive Period.

    Science.gov (United States)

    Brandt, Ricardo; Bevilacqua, Guilherme G; Andrade, Alexandro

    2017-04-01

    Brandt, R, Bevilacqua, GG, and Andrade, A. Perceived sleep quality, mood states, and their relationship with performance among Brazilian elite athletes during a competitive period. J Strength Cond Res 31(4): 1033-1039, 2017-We described the perceived sleep quality and mood states of elite athletes during a competitive period, and clarified their relationship to athletes' sport performance. Participants were 576 Brazilian elite athletes (404 men and 172 women) of individual and team sports. Mood states were evaluated using the Brunel Mood Scale, whereas perceived sleep quality was evaluated using a single question ("How would you evaluate the quality of your sleep in the last few days?"). Evaluations of mood state and sleep quality were performed up to 60 minutes before national and international sports competitions began. Descriptive and inferential statistics (including logistic regression) were used to evaluate the relationship of sleep quality and mood states with performance (i.e., winning or losing). Athletes typically had good sleep quality and mood states similar to the Iceberg profile (i.e., high vigor and low tension, depression, anger, fatigue, and mental confusion). The Wald test revealed that sleep, anger, tension, and vigor predicted athletes' performance. Specifically, poor sleep quality and low vigor and anger decreased the odds of winning, whereas higher tension increased these odds. The Hosmer-Lemeshow test indicated that the results were sufficiently generalizable. Overall, we observed a significant relationship between sleep and mood states, which in turn both significantly influenced athletes' sports performance. Thus, coaching staff and athletes should monitor athletes' sleep quality before competitions to ensure athletes are in the optimal condition for performance.
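
    The logistic regression used in this record (predicting winning or losing from sleep quality and mood scores) can be sketched as follows. The athlete data are simulated and the coefficients are invented for illustration; the fit uses plain Newton-Raphson on the log-likelihood rather than a statistics package.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical pre-competition data for 200 athletes: sleep quality (1-5)
# and vigor score (0-16); win probability rises with both (illustrative only)
n = 200
sleep = rng.integers(1, 6, n).astype(float)
vigor = rng.integers(0, 17, n).astype(float)
true_logit = -3.0 + 0.5 * sleep + 0.15 * vigor
won = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Logistic regression fitted by Newton-Raphson
X = np.column_stack([np.ones(n), sleep, vigor])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                      # Bernoulli variance weights
    grad = X.T @ (won - p)               # score vector
    hess = X.T @ (X * W[:, None])        # observed information
    beta += np.linalg.solve(hess, grad)

print(beta)  # intercept, sleep coefficient, vigor coefficient
```

    A Wald test of the kind reported in the record then divides each fitted coefficient by its standard error (the square root of the corresponding diagonal of the inverse Hessian) to judge whether sleep or a mood dimension predicts the outcome.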

  8. Assessment of food insecurity and coping mechanisms among ...

    African Journals Online (AJOL)

    Assessment of food insecurity and coping mechanisms among pastoral households ... The main tools of analysis for this study include descriptive and inferential statistics ... as well as extended veterinary service and disease control programs.

  9. LITTERFALL AND NUTRIENT RETURNS IN ISOLATED STANDS ...

    African Journals Online (AJOL)

    Dr Osondu

    collected were analyzed in the laboratory and the results were subjected to both descriptive and inferential statistical .... sub-equatorial climate of Af Koppen's ..... rhythms long-term changes. Smithsonian Institution, Washington DC. 468 pp.

  10. AFRREV, 9(1), S/NO 36, JANUARY, 2015

    African Journals Online (AJOL)

    EMILIA

    conducted on accounting students and the analysis will be by inferential statistics –. Analysis of ... insensitivity to the nature of Biology when planning instructional activities in the classroom. ..... Teaching effectively, Owerri: Career Publishers.

  11. Welfare Status of Rural Women Agro-processors' Participants in the ...

    African Journals Online (AJOL)

    User

    Descriptive and inferential statistics were used to analyze data. Results ... nutritive quality and market acceptability, Oil palm processing and marketing, .... community driven project in Nigeria, reported that most of the respondents in the.

  12. The current pattern of gestational age-related anthropometric ...

    African Journals Online (AJOL)

    Descriptive and inferential statistics were appropriately applied .... weeks. OFC increased steadily all through ..... Nigerian community: The comparison of results from bivariate and multivariate analyses. J Trop Pediatr 1986;32(6):295-300. 18.

  13. COgnitive behavioural therapy versus standardised medical care for adults with Dissociative non-Epileptic Seizures (CODES): statistical and economic analysis plan for a randomised controlled trial.

    Science.gov (United States)

    Robinson, Emily J; Goldstein, Laura H; McCrone, Paul; Perdue, Iain; Chalder, Trudie; Mellers, John D C; Richardson, Mark P; Murray, Joanna; Reuber, Markus; Medford, Nick; Stone, Jon; Carson, Alan; Landau, Sabine

    2017-06-06

    Dissociative seizures (DSs), also called psychogenic non-epileptic seizures, are a distressing and disabling problem for many patients in neurological settings with high and often unnecessary economic costs. The COgnitive behavioural therapy versus standardised medical care for adults with Dissociative non-Epileptic Seizures (CODES) trial is an evaluation of a specifically tailored psychological intervention with the aims of reducing seizure frequency and severity and improving psychological well-being in adults with DS. The aim of this paper is to report in detail the quantitative and economic analysis plan for the CODES trial, as agreed by the trial steering committee. The CODES trial is a multicentre, pragmatic, parallel group, randomised controlled trial performed to evaluate the clinical effectiveness and cost-effectiveness of 13 sessions of cognitive behavioural therapy (CBT) plus standardised medical care (SMC) compared with SMC alone for adult outpatients with DS. The objectives and design of the trial are summarised, and the aims and procedures of the planned analyses are illustrated. The proposed analysis plan addresses statistical considerations such as maintaining blinding, monitoring adherence with the protocol, describing aspects of treatment and dealing with missing data. The formal analysis approach for the primary and secondary outcomes is described, as are the descriptive statistics that will be reported. This paper provides transparency to the planned inferential analyses for the CODES trial prior to the extraction of outcome data. It also provides an update to the previously published trial protocol and guidance to those conducting similar trials. ISRCTN registry ISRCTN05681227 (registered on 5 March 2014); ClinicalTrials.gov NCT02325544 (registered on 15 December 2014).

  14. Have Basic Mathematical Skills Grown Obsolete in the Computer Age: Assessing Basic Mathematical Skills and Forecasting Performance in a Business Statistics Course

    Science.gov (United States)

    Noser, Thomas C.; Tanner, John R.; Shah, Situl

    2008-01-01

    The purpose of this study was to measure the comprehension of basic mathematical skills of students enrolled in statistics classes at a large regional university, and to determine if the scores earned on a basic math skills test are useful in forecasting student performance in these statistics classes, and to determine if students' basic math…

  15. Self-Esteem and Academic Achievement of High School Students

    Science.gov (United States)

    Moradi Sheykhjan, Tohid; Jabari, Kamran; Rajeswari, K.

    2014-01-01

    The primary purpose of this study was to determine the influence of self-esteem on academic achievement among high school students in Miandoab City of Iran. The methodology of the research is descriptive and correlational; descriptive and inferential statistics were used to analyze the data. The statistical population includes male and female high…

  16. Design and performance characteristics of solar adsorption refrigeration system using parabolic trough collector: Experimental and statistical optimization technique

    International Nuclear Information System (INIS)

    Abu-Hamdeh, Nidal H.; Alnefaie, Khaled A.; Almitani, Khalid H.

    2013-01-01

    Highlights: • The success of using an olive waste/methanol adsorbent/adsorbate pair. • The experimental gross cycle coefficient of performance obtained was COPa = 0.75. • Optimization showed that expanding the adsorbent mass to a certain range increases the COP. • The statistical optimization led to an optimum tank volume between 0.2 and 0.3 m³. • Increasing the collector area over a certain range increased the COP. - Abstract: The current work demonstrates a developed model of a solar adsorption refrigeration system with specific requirements and specifications. The scheme can be employed as a refrigerator and cooler unit suitable for remote areas. The unit runs on a parabolic trough solar collector (PTC) and uses olive waste as adsorbent with methanol as adsorbate. Cooling production, COP (coefficient of performance), and COPa (cycle gross coefficient of performance) were used to assess the system performance. The system's design optimum parameters in this study were arrived at through statistical and experimental methods. The lowest temperature attained in the refrigerated space was 4 °C while the corresponding ambient temperature was 27 °C. The temperature started to decrease steadily at 20:30, when the actual cooling started, until it reached 4 °C at 01:30 the next day, after which it rose again. The highest COPa obtained was 0.75

  17. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    Science.gov (United States)

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  18. Falling in the elderly: Do statistical models matter for performance criteria of fall prediction? Results from two large population-based studies.

    Science.gov (United States)

    Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Fantino, Bruno; Levinoff, Elise J; Allali, Gilles; Beauchet, Olivier

    2016-01-01

    To compare performance criteria (i.e., sensitivity, specificity, positive predictive value, negative predictive value, area under receiver operating characteristic curve and accuracy) of linear and non-linear statistical models for fall risk in older community-dwellers. Participants were recruited in two large population-based studies, "Prévention des Chutes, Réseau 4" (PCR4, n=1760, cross-sectional design, retrospective collection of falls) and "Prévention des Chutes Personnes Agées" (PCPA, n=1765, cohort design, prospective collection of falls). Six linear statistical models (i.e., logistic regression, discriminant analysis, Bayes network algorithm, decision tree, random forest, boosted trees), three non-linear statistical models corresponding to artificial neural networks (multilayer perceptron, genetic algorithm and neuroevolution of augmenting topologies [NEAT]) and the adaptive neuro-fuzzy inference system (ANFIS) were used. Falls ≥1 characterizing fallers and falls ≥2 characterizing recurrent fallers were used as outcomes. Data of studies were analyzed separately and together. NEAT and ANFIS had better performance criteria compared to other models. The highest performance criteria were reported with NEAT when using the PCR4 database and falls ≥1, and with both NEAT and ANFIS when pooling data together and using falls ≥2. However, sensitivity and specificity were unbalanced. Sensitivity was higher than specificity when identifying fallers, whereas the converse was found when predicting recurrent fallers. Our results showed that NEAT and ANFIS were non-linear statistical models with the best performance criteria for the prediction of falls but their sensitivity and specificity were unbalanced, underscoring that the models should be used respectively for the screening of fallers and the diagnosis of recurrent fallers. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
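
    The performance criteria this record compares across models can all be derived from a confusion matrix plus a ranking of scores. Here is an illustrative re-implementation (not the paper's code); the example labels and risk scores at the bottom are invented.

```python
import numpy as np

def performance_criteria(y_true, score, threshold=0.5):
    """Sensitivity, specificity, PPV, NPV, accuracy, and AUC for a binary
    risk score. AUC uses the Mann-Whitney rank-sum identity (no tied scores)."""
    y_true = np.asarray(y_true)
    score = np.asarray(score, dtype=float)
    pred = score >= threshold
    pos, neg = y_true == 1, y_true == 0
    tp, fn = np.sum(pred & pos), np.sum(~pred & pos)
    tn, fp = np.sum(~pred & neg), np.sum(pred & neg)
    ranks = score.argsort().argsort() + 1          # rank of each score, 1..n
    n1, n0 = pos.sum(), neg.sum()
    auc = (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
    return {
        "sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp), "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(y_true), "auc": auc,
    }

# Invented example: 3 fallers and 3 non-fallers with hypothetical risk scores
print(performance_criteria([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.4, 0.6, 0.2, 0.1]))
```

    The record's closing point follows directly from these definitions: a model with high sensitivity but low specificity suits screening (few missed fallers), while the reverse balance suits confirmatory diagnosis.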

  19. PERAN KEPUASAN DALAM MEMEDIASI PENGARUH KOMUNITAS MEREK TERHADAP LOYALITAS PENGGUNA HARLEY-DAVIDSON DI KOTA DENPASAR

    Directory of Open Access Journals (Sweden)

    I Gede Nandya Oktora Panasea

    2013-10-01

    Full Text Available This study empirically explores the influence of brand community on loyalty when mediated by satisfaction. The data in this research were collected from observation, interviews, and questionnaires. The data analysis methods used in this research were descriptive and inferential statistics. Descriptive statistics such as frequency distributions were used to describe the characteristics of the respondents, while inferential statistics such as the Baron and Kenny steps and the Sobel test were used to test the hypotheses. The results showed that satisfaction partially mediates the effect of brand community on loyalty of Harley-Davidson users in Denpasar. This shows that satisfaction is an essential element for the brand community to realize brand loyalty.
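
    The Sobel test used in this record checks whether the indirect effect a*b (path from brand community to satisfaction, times path from satisfaction to loyalty) differs from zero. A minimal sketch follows; the coefficients and standard errors are hypothetical, not the study's estimates.

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Sobel z-statistic for an indirect (mediated) effect a*b, where a is
    the X->M path and b the M->Y path, with their standard errors."""
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

# Hypothetical path estimates (illustrative values only)
z, p = sobel_test(a=0.45, se_a=0.10, b=0.50, se_b=0.12)
print(f"z = {z:.2f}, p = {p:.4f}")
```

    A significant Sobel z alongside a still-significant direct effect is what the Baron and Kenny steps read as *partial* mediation, the pattern the record reports.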

  20. Topics in computer simulations of statistical systems

    International Nuclear Information System (INIS)

    Salvador, R.S.

    1987-01-01

    Several computer simulations studying a variety of topics in statistical mechanics and lattice gauge theories are performed. The first study describes a Monte Carlo simulation performed on Ising systems defined on Sierpinski carpets of dimensions between one and four. The critical coupling and the exponent γ are measured as a function of dimension. The Ising gauge theory in d = 4 - ε, for ε → 0⁺, is then studied by performing a Monte Carlo simulation for the theory defined on fractals. A high-statistics Monte Carlo simulation for the three-dimensional Ising model is presented for lattices of sizes 8³ to 44³. All the data obtained agree completely, within statistical errors, with the forms predicted by finite-size scaling. Finally, a method to estimate numerically the partition function of statistical systems is developed.
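
    The Monte Carlo simulations this record describes are of the Metropolis type. A minimal two-dimensional sketch (much smaller than the lattices in the study, and on a square lattice rather than a fractal) shows the essential update rule and the ordered/disordered behavior on either side of the critical coupling:

```python
import numpy as np

rng = np.random.default_rng(3)

def ising_metropolis(L=16, beta=0.3, sweeps=300):
    """Metropolis Monte Carlo for the 2D Ising model on an L x L periodic
    lattice, started from an ordered state; returns |magnetization|."""
    spins = np.ones((L, L), dtype=int)
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb          # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1              # accept the flip
    return abs(spins.mean())

# Magnetization stays large above the 2D critical coupling (beta_c ~ 0.4407)
# and decays toward zero well below it
m_ordered, m_disordered = ising_metropolis(beta=0.6), ising_metropolis(beta=0.2)
print(m_ordered, m_disordered)
```

    Measuring such observables across lattice sizes, and collapsing them onto finite-size-scaling forms, is exactly the high-statistics program the record summarizes for the 3D model.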

  1. The power and statistical behaviour of allele-sharing statistics when ...

    Indian Academy of Sciences (India)

    Unknown

    3Human Genetics Division, School of Medicine, University of Southampton, Southampton SO16 6YD, UK. Abstract ... that the statistic S-#alleles gives good performance for recessive ... (H50) of the families are linked to the single marker. The.

  2. Research Article Special Issue

    African Journals Online (AJOL)

    2016-05-15

    May 15, 2016 ... Journal of Fundamental and Applied Sciences is licensed under a ... Cronbach alpha for participative management (0.85) and social capital (0.87). For the analysis, descriptive and inferential statistical methods (Pearson ...

  3. Study on the relationship between participatory management in ...

    African Journals Online (AJOL)

    Journal of Fundamental and Applied Sciences ... The aim of this study was to examine the relationship between social capital growths with use of ... For the analysis, descriptive and inferential statistical methods (Pearson correlation coefficient, ...

  4. Ischemic priapism in South‑East Nigeria: Presentation, management ...

    African Journals Online (AJOL)

    Statistical Analysis Used: The data were analyzed descriptively and inferentially using Statistical Package for Social Sciences (SPSS version 16, SPSS Inc., Chicago IL, USA) with P < 0.05. Results: Mean age was 30.5 years (standard deviation [SD] =1.63), range: 14–79 years. Onset to presentation interval ranged from 6 h ...

  5. 2012 Aerospace Medical Certification Statistical Handbook

    Science.gov (United States)

    2013-12-01

    2012 Aerospace Medical Certification Statistical Handbook. Valerie J. Skaggs; Ann I. Norris. Civil Aerospace Medical Institute, Federal Aviation Administration, December 2013. Excerpt from the condition-frequency table: Hayfever 14,477 (2.49%); Asthma 12,558 (2.16%); other general heart pathology (abnormal ECG, open heart surgery, etc.) and Wolff-Parkinson-White syndrome also listed.

  6. Reliability modeling of degradation of products with multiple performance characteristics based on gamma processes

    International Nuclear Information System (INIS)

    Pan Zhengqiang; Balakrishnan, Narayanaswamy

    2011-01-01

    Many highly reliable products usually have a complex structure, with their reliability being evaluated by two or more performance characteristics. In certain physical situations, the degradation of these performance characteristics is always positive and strictly increasing. In such a case, the gamma process is usually considered as the degradation process due to its independent, non-negative increments. In this paper, we suppose that a product has two dependent performance characteristics and that their degradation can be modeled by gamma processes. For such a bivariate degradation involving two performance characteristics, we propose to use a bivariate Birnbaum-Saunders distribution and its marginal distributions to approximate the reliability function. An inferential method for the corresponding model parameters is then developed. Finally, for an illustration of the proposed model and method, a numerical example about fatigue cracks is discussed and some computational results are presented.
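
    The gamma degradation process underlying this record is easy to simulate: increments over a step dt are Gamma(alpha*dt, scale), so every sample path is non-negative and strictly increasing. The sketch below estimates reliability (the probability that degradation stays below a failure threshold) for a single characteristic by Monte Carlo; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical gamma-process parameters and failure threshold
alpha, scale, dt = 2.0, 0.5, 0.1
threshold, horizon, n_paths = 8.0, 10.0, 20_000

# Independent Gamma(alpha*dt, scale) increments; cumulative sums give
# monotone-increasing degradation paths
steps = int(horizon / dt)
increments = rng.gamma(alpha * dt, scale, size=(n_paths, steps))
paths = increments.cumsum(axis=1)

# Reliability at time t = fraction of paths still below the threshold
reliability_mid = np.mean(paths[:, steps // 2] < threshold)   # t ~ 5
reliability_late = np.mean(paths[:, -1] < threshold)          # t = 10
print(reliability_mid, reliability_late)
```

    The paper's contribution is the bivariate case: with two *dependent* characteristics, the joint survival probability is approximated with a bivariate Birnbaum-Saunders distribution instead of being simulated marginally as above.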

  7. Análisis de rendimiento de las tecnologías Plinq y Linq en sistemas informáticos

    Directory of Open Access Journals (Sweden)

    José Guerra

    2015-12-01

    The research is based on analyzing the performance of the integrated data query languages LINQ and PLINQ for computer systems development. The performance measurement includes four indicators: response time, percentage of processor use, RAM usage, and input/output operations to hard drives. The values for the indicators are obtained through a software module embedded in a purpose-built prototype that integrates both LINQ and PLINQ. With the data for each indicator, and applying descriptive and inferential statistics, it was determined that PLINQ offers better performance (81.75%) than the LINQ integrated query language (61.50%). In the execution of complex queries a difference of 61.25% was obtained, while in simple queries LINQ stood out, with a 33% difference.

  8. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    Science.gov (United States)

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
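
    A standard way to pool repeatability estimates across studies, in the spirit of the meta-analysis this record reviews, is the DerSimonian-Laird random-effects method. The sketch below is a generic implementation with invented study-level inputs; it is not the alternative small-sample method the paper proposes.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method:
    estimate between-study variance tau^2 from Cochran's Q, then reweight."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1 / v
    fixed = np.sum(w * effects) / np.sum(w)         # fixed-effect estimate
    q = np.sum(w * (effects - fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical per-study repeatability coefficients and their variances
pooled, se, tau2 = dersimonian_laird([0.10, 0.14, 0.08, 0.20],
                                     [0.001, 0.002, 0.0015, 0.003])
print(f"pooled = {pooled:.3f}, SE = {se:.4f}, tau^2 = {tau2:.5f}")
```

    As the record notes, with only a handful of small technical-performance studies the assumptions behind this standard estimator can fail, which motivates the alternative approaches evaluated in the paper's simulations.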

  9. ["R"--project for statistical computing]

    DEFF Research Database (Denmark)

    Dessau, R.B.; Pipper, Christian Bressen

    2008-01-01

    An introduction to the R project for statistical computing (www.R-project.org) is presented. The main topics are: 1. To make the professional community aware of "R" as a potent and free software for graphical and statistical analysis of medical data; 2. Simple well-known statistical tests are fairly easy to perform in R, but more complex modelling requires programming skills; 3. R is seen as a tool for teaching statistics and implementing complex modelling of medical data among medical professionals. Publication date: 2008/1/28

  10. The Effect of Education and Job Satisfaction on Employee Performance at PT Matahari Department Store, Makassar Branch

    Directory of Open Access Journals (Sweden)

    Wiwik Hestyana

    2017-10-01

    Full Text Available This study aims to examine the effect of education on employee performance, and the effect of job satisfaction on employee performance. This research was conducted at Mall Panakukkang, PT. Matahari Department Store Makassar branch; from a population of 400 people, the researchers took 40 people as a sample. This quantitative study used primary and secondary data, collected through interviews and observations supported by a questionnaire. The analytical methods use inferential statistics with parametric data types, using SPSS (Statistical Package for the Social Sciences) for Windows version 13.0 and Microsoft Office Excel 2007. The results show that neither education nor job satisfaction has a positive and significant effect on employee performance (education: t = 1.874; job satisfaction: t = -0.462; critical t-value = 4.150). This means that education and job satisfaction do not have a positive and significant impact on the performance of employees of PT. Matahari Department Store Makassar Branch.

  11. Untitled

    African Journals Online (AJOL)

    Both descriptive and inferential statistical techniques were applied for ... RESULTS: Three hundred thirty four (98 %) of the study participants believed that ... 1960s after the discovery and use of new ... some pilot eradication projects in few.

  12. Factors influencing awareness and attendance of traditional oral ...

    African Journals Online (AJOL)

    Data were recorded using SPSS version 16 software. ... Conclusion: The study showed moderate awareness of traditional oral care .... Descriptive and inferential statistics were used as ..... C. Pilot survey of oral health-related quality of life: a.

  13. Assessing the Knowledge, Self-Efficacy and Health Behaviors of Male Beneficiaries Assigned to the National Capital Area Regarding Participation in Prostate Screening

    National Research Council Canada - National Science Library

    Moore, Angelo

    2002-01-01

    .... The study used both inferential and descriptive statistics to report findings. The majority (93%) of the participants were very knowledgeable about prostate cancer and prostate cancer screening as indicated by high scores on the knowledge scales...

  14. The Incidence of Teenage Pregnancy in Ekiti State, Nigeria | Odu ...

    African Journals Online (AJOL)

    Face and content validities of the instrument was determined by test experts and the ... The inferential statistical technique used for data analysis in this study was ... electronic media regulate and censor sex related programmes and movies.

  15. Introduction of a Journal Excerpt Activity Improves Undergraduate Students' Performance in Statistics

    Science.gov (United States)

    Rabin, Laura A.; Nutter-Upham, Katherine E.

    2010-01-01

    We describe an active learning exercise intended to improve undergraduate students' understanding of statistics by grounding complex concepts within a meaningful, applied context. Students in a journal excerpt activity class read brief excerpts of statistical reporting from published research articles, answered factual and interpretive questions,…

  16. The ‘39 steps’: an algorithm for performing statistical analysis of data on energy intake and expenditure

    Directory of Open Access Journals (Sweden)

    John R. Speakman

    2013-03-01

    Full Text Available The epidemics of obesity and diabetes have aroused great interest in the analysis of energy balance, with the use of organisms ranging from nematode worms to humans. Although generating energy-intake or -expenditure data is relatively straightforward, the most appropriate way to analyse the data has been an issue of contention for many decades. In the last few years, a consensus has been reached regarding the best methods for analysing such data. To facilitate using these best-practice methods, we present here an algorithm that provides a step-by-step guide for analysing energy-intake or -expenditure data. The algorithm can be used to analyse data from either humans or experimental animals, such as small mammals or invertebrates. It can be used in combination with any commercial statistics package; however, to assist with analysis, we have included detailed instructions for performing each step for three popular statistics packages (SPSS, MINITAB and R). We also provide interpretations of the results obtained at each step. We hope that this algorithm will assist in the statistically appropriate analysis of such data, a field in which there has been much confusion and some controversy.
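One of the best-practice steps such an algorithm typically prescribes is analysis of covariance (ANCOVA) with body mass as a covariate, rather than ratio-based "per gram" normalisation; a minimal sketch with simulated data follows (the dataset and effect sizes are invented, and statsmodels is assumed available):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 30
# Hypothetical mouse data: expenditure rises with body mass, and the
# 'treated' group burns ~2 kJ/day more at any given mass.
mass = rng.uniform(20, 35, n)                      # body mass, g
group = np.repeat(["control", "treated"], n // 2)
expenditure = 0.8 * mass + (group == "treated") * 2.0 + rng.normal(0, 1, n)

df = pd.DataFrame({"expenditure": expenditure, "mass": mass, "group": group})
# ANCOVA: the group coefficient compares groups at a common body mass
model = smf.ols("expenditure ~ mass + C(group)", data=df).fit()
print(model.params)
```

The coefficient on `C(group)[T.treated]` is the mass-adjusted group difference, which is what the consensus methods recommend reporting instead of expenditure divided by mass.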

  17. Survey datasets on women participation in green jobs in the construction industry

    Directory of Open Access Journals (Sweden)

    Adedeji O. Afolabi

    2018-04-01

    Full Text Available The unique qualities of women can make them bearers of solutions towards achieving sustainability and dealing with the dangers attributed to climate change. This attitudinal study utilized a questionnaire instrument to obtain the perceptions of female construction professionals. Using a well-structured questionnaire, data were obtained on women participating in green jobs in the construction industry. Descriptive statistics were performed on the collected data and presented in tables and mean scores (MS). In addition, inferential statistics in the form of categorical regression were performed on the data to determine the level of influence (beta factor) the identified barriers had on the level of participation in green jobs. When analyzed, the survey data reveal barriers and socio-economic benefits that can guide policies and actions on attracting, retaining and exploring the capabilities of women in green jobs.

  18. Statistical nuclear reactions

    International Nuclear Information System (INIS)

    Hilaire, S.

    2001-01-01

    A review of the statistical model of nuclear reactions is presented. The main relations are described, together with the ingredients necessary to perform practical calculations. In addition, a substantial overview of the width fluctuation correction factor is given. (author)

  19. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on the two-sample mean difference test under different situations: (1) the discrepancy between the analytical power and…

  20. Evaluation of the performance of Moses statistical engine adapted to ...

    African Journals Online (AJOL)

    ... of Moses statistical engine adapted to English-Arabic language combination. ... of Artificial Intelligence (AI) dedicated to Natural Language Processing (NLP). ... and focuses on SMT, then introducing the features of the open source Moses ...

  1. Review of Statistical Analyses Resulting from Performance of HLDWD-DWPF-005

    International Nuclear Information System (INIS)

    Beck, R.S.

    1997-01-01

    The Engineering Department at the Defense Waste Processing Facility (DWPF) has reviewed two reports from the Statistical Consulting Section (SCS) involving the statistical analysis of test results for analysis of small sample inserts (references 1 & 2). The test results cover two proposed analytical methods, a room temperature hydrofluoric acid preparation (Cold Chem) and a sodium peroxide/sodium hydroxide fusion modified for insert samples (Modified Fusion). The reports support implementation of the proposed small sample containers and analytical methods at DWPF. Hydragard sampler valve performance was typical of previous results (reference 3). Using an element from each major feed stream, lithium from the frit and iron from the sludge, the sampler was determined to deliver a uniform mixture in either sample container. The lithium to iron ratios were equivalent for the standard 15 ml vial and the 3 ml insert. The proposed methods provide equivalent analyses as compared to the current methods. The biases associated with the proposed methods on a vitrified basis are less than 5% for major elements. The sum of oxides for the proposed method compares favorably with the sum of oxides for the conventional methods. However, the average sum of oxides for the Cold Chem method was 94.3%, which is below the minimum required recovery of 95%. Both proposed methods, Cold Chem and Modified Fusion, will be required at first to provide an accurate analysis which will routinely meet the 95% and 105% average sum of oxides limits for the Product Composition Control System (PCCS). Issues to be resolved during phased implementation are as follows: (1) Determine the calcine/vitrification factor for radioactive feed; (2) Evaluate the covariance matrix change against process operating ranges to determine optimum sample size; (3) Evaluate sources for the low sum of oxides; and (4) Improve remote operability of production versions of equipment and instruments for installation in 221-S. The specifics of

  2. FREQFIT: Computer program which performs numerical regression and statistical chi-squared goodness of fit analysis

    International Nuclear Information System (INIS)

    Hofland, G.S.; Barton, C.C.

    1990-01-01

    The computer program FREQFIT is designed to perform regression and statistical chi-squared goodness of fit analysis on one-dimensional or two-dimensional data. The program features an interactive user dialogue, numerous help messages, an option for screen or line printer output, and the flexibility to use practically any commercially available graphics package to create plots of the program's results. FREQFIT is written in Microsoft QuickBASIC, for IBM-PC compatible computers. A listing of the QuickBASIC source code for the FREQFIT program, a user manual, and sample input data, output, and plots are included. 6 refs., 1 fig
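A regression followed by a chi-squared goodness-of-fit check, in the spirit of FREQFIT, can be sketched as follows (the binned data and power-law model are invented for illustration; this is a Python sketch, not the QuickBASIC program itself):

```python
import numpy as np
from scipy import stats

# Hypothetical binned frequency data: counts observed at each bin midpoint
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
observed = np.array([120.0, 58.0, 31.0, 14.0, 8.0])

# Fit a power law y = a * x**b by linear regression in log-log space
res = stats.linregress(np.log(x), np.log(observed))
expected = np.exp(res.intercept) * x ** res.slope

# Chi-squared goodness-of-fit statistic; degrees of freedom lose one
# per fitted parameter (a and b here)
chi2 = np.sum((observed - expected) ** 2 / expected)
dof = len(x) - 2
p_value = stats.chi2.sf(chi2, dof)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```

A large p-value means the regression model cannot be rejected for these bins; a small one flags a poor fit even when the regression itself converged.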

  3. Statistical analyses of variability/reproducibility of environmentally assisted cyclic crack growth rate data utilizing JAERI Material Performance Database (JMPD)

    International Nuclear Information System (INIS)

    Tsuji, Hirokazu; Yokoyama, Norio; Nakajima, Hajime; Kondo, Tatsuo

    1993-05-01

    Statistical analyses were conducted by using the cyclic crack growth rate data for pressure vessel steels stored in the JAERI Material Performance Database (JMPD), and comparisons were made on variability and/or reproducibility of the data between obtained by ΔK-increasing and by ΔK-constant type tests. Based on the results of the statistical analyses, it was concluded that ΔK-constant type tests are generally superior to the commonly used ΔK-increasing type ones from the viewpoint of variability and/or reproducibility of the data. Such a tendency was more pronounced in the tests conducted in simulated LWR primary coolants than those in air. (author)
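A variability comparison of the kind reported here can be sketched with Levene's test for equality of variances (the data below are simulated under the assumption that ΔK-constant tests scatter less; this is an illustration, not the JMPD analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical log crack-growth-rate residuals: the dK-constant test type
# is assumed here to scatter half as much as the dK-increasing type.
dk_increasing = rng.normal(0.0, 0.30, 40)
dk_constant = rng.normal(0.0, 0.15, 40)

# Levene's test: H0 is equal variances between the two test types
stat, p = stats.levene(dk_increasing, dk_constant)
ratio = np.var(dk_increasing, ddof=1) / np.var(dk_constant, ddof=1)
print(f"variance ratio = {ratio:.2f}, Levene p = {p:.4f}")
```

Levene's test is used rather than the classical F-test because it stays reliable when the residuals are not exactly normal, which is common for fatigue data.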

  4. Optimization of the gas turbine-modular helium reactor using statistical methods to maximize performance without compromising system design margins

    International Nuclear Information System (INIS)

    Lommers, L.J.; Parme, L.L.; Shenoy, A.S.

    1995-07-01

    This paper describes a statistical approach for determining the impact of system performance and design uncertainties on power plant performance. The objectives of this design approach are to ensure that adequate margin is provided, that excess margin is minimized, and that full advantage can be taken of unconsumed margin. It is applicable to any thermal system in which these factors are important. The method is demonstrated using the Gas Turbine Modular Helium Reactor as an example. The quantitative approach described allows the characterization of plant performance and the specification of the system design requirements necessary to achieve the desired performance with high confidence. Performance variations due to design evolution, in-service degradation, and basic performance uncertainties are considered. The impact of all performance variabilities is combined using Monte Carlo analysis to predict the range of expected operation.
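The Monte Carlo combination of performance variabilities can be sketched as follows; the plant model and uncertainty magnitudes are invented placeholders, not GT-MHR design values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical plant model: net output = thermal power * cycle efficiency
# minus auxiliary load; each input carries its own assumed uncertainty.
thermal_mw = rng.normal(600.0, 6.0, n)      # reactor thermal power, MW
efficiency = rng.normal(0.47, 0.01, n)      # cycle efficiency
aux_mw = rng.normal(12.0, 1.0, n)           # house load, MW

net_mw = thermal_mw * efficiency - aux_mw

# Characterise the expected operating range rather than a single point
p5, p50, p95 = np.percentile(net_mw, [5, 50, 95])
print(f"median {p50:.1f} MW, 90% band [{p5:.1f}, {p95:.1f}] MW")
```

The percentile band is what lets a designer quantify margin: the distance between the 5th percentile and the performance requirement is the margin held with 95% confidence.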

  5. Fiscal federalism and equity in the state joint local governments ...

    African Journals Online (AJOL)

    This resulted in allocating revenues to some LGAs more than they were allocated ... and descriptive and inferential statistical methods of analysis tools such as ratios, ... proportions of direct and indirect allocations made to the LGAs during the ...

  6. Effect of the Target Motion Sampling temperature treatment method on the statistics and performance

    International Nuclear Information System (INIS)

    Viitanen, Tuomas; Leppänen, Jaakko

    2015-01-01

    Highlights: • Use of the Target Motion Sampling (TMS) method with collision estimators is studied. • The expected values of the estimators agree with NJOY-based reference. • In most practical cases also the variances of the estimators are unaffected by TMS. • Transport calculation slow-down due to TMS dominates the impact on figures-of-merit. - Abstract: Target Motion Sampling (TMS) is a stochastic on-the-fly temperature treatment technique that is being developed as a part of the Monte Carlo reactor physics code Serpent. The method provides for modeling of arbitrary temperatures in continuous-energy Monte Carlo tracking routines with only one set of cross sections stored in the computer memory. Previously, only the performance of the TMS method in terms of CPU time per transported neutron has been discussed. Since the effective cross sections are not calculated at any point of a transport simulation with TMS, reaction rate estimators must be scored using sampled cross sections, which is expected to increase the variances and, consequently, to decrease the figures-of-merit. This paper examines the effects of TMS on the statistics and performance in practical calculations involving reaction rate estimation with collision estimators. Against all expectations, it turned out that the usage of sampled response values has no practical effect on the performance of reaction rate estimators when using TMS with elevated basis cross section temperatures (EBT), i.e. the usual way. With 0 Kelvin cross sections a significant increase in the variances of capture rate estimators was observed right below the energy region of unresolved resonances, but at these energies the figures-of-merit could be increased using a simple resampling technique to decrease the variances of the responses. It was, however, noticed that the usage of the TMS method increases the statistical deviations of all estimators, including the flux estimator, by tens of percent in the vicinity of very

  7. Statistical learning in social action contexts.

    Science.gov (United States)

    Monroy, Claire; Meyer, Marlene; Gerson, Sarah; Hunnius, Sabine

    2017-01-01

    Sensitivity to the regularities and structure contained within sequential, goal-directed actions is an important building block for generating expectations about the actions we observe. Until now, research on statistical learning for actions has solely focused on individual action sequences, but many actions in daily life involve multiple actors in various interaction contexts. The current study is the first to investigate the role of statistical learning in tracking regularities between actions performed by different actors, and whether the social context characterizing their interaction influences learning. That is, are observers more likely to track regularities across actors if they are perceived as acting jointly as opposed to in parallel? We tested adults and toddlers to explore whether social context guides statistical learning and-if so-whether it does so from early in development. In a between-subjects eye-tracking experiment, participants were primed with a social context cue between two actors who either shared a goal of playing together ('Joint' condition) or stated the intention to act alone ('Parallel' condition). In subsequent videos, the actors performed sequential actions in which, for certain action pairs, the first actor's action reliably predicted the second actor's action. We analyzed predictive eye movements to upcoming actions as a measure of learning, and found that both adults and toddlers learned the statistical regularities across actors when their actions caused an effect. Further, adults with high statistical learning performance were sensitive to social context: those who observed actors with a shared goal were more likely to correctly predict upcoming actions. In contrast, there was no effect of social context in the toddler group, regardless of learning performance. These findings shed light on how adults and toddlers perceive statistical regularities across actors depending on the nature of the observed social situation and the

  8. Exploring the Relationship between Islamic Personality and Parenting Styles

    Directory of Open Access Journals (Sweden)

    NOORAINI OTHMAN

    2013-12-01

    Full Text Available This study was conducted to identify the parenting styles which influence the Islamic personality among students from one of the secondary schools in the Batu Pahat district, Johor. Using a stratified random sampling technique, a total of 302 students from form 1 to form 6 of the secondary school were chosen as the respondents. This correlational study used a questionnaire as an instrument for its data collection. The data analysis was done using descriptive and inferential statistical analysis with the help of the Statistical Package for the Social Sciences (SPSS) version 17.0. Descriptive analysis results show that a medium level of authoritative parenting style, a high level of authoritarian parenting style and a low level of permissive parenting style were practiced. Inferential analysis results show the existence of a significant relationship between parenting styles (authoritarian, authoritative and permissive) and Islamic personality.

  9. A nonparametric spatial scan statistic for continuous data.

    Science.gov (United States)

    Jung, Inkyung; Cho, Ho Jin

    2015-10-20

    Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compare the performance of the method with parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
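The core of such a nonparametric scan can be sketched by sliding a window over the study region and applying the Wilcoxon rank-sum test inside versus outside each window (a one-dimensional toy example with simulated skewed data; a real scan statistic assesses significance by Monte Carlo permutation rather than a single test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical continuous observations at 100 locations on a line;
# locations 40-59 form an elevated cluster with a skewed distribution.
values = rng.lognormal(0.0, 0.5, 100)
values[40:60] *= 2.0

# Scan fixed-width windows; for each, rank-sum test inside vs outside,
# keeping the window with the largest |standardised statistic|.
best = max(
    (abs(stats.ranksums(values[s:s + 20],
                        np.r_[values[:s], values[s + 20:]]).statistic), s)
    for s in range(0, 81)
)
print(f"strongest cluster starts at location {best[1]}")
```

Because ranks are unaffected by monotone transformations, the same scan works unchanged for skewed or heavy-tailed data, which is exactly the advantage the abstract reports over the normal-based statistic.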

  10. Effect of altitude on physiological performance: a statistical analysis using results of international football games.

    Science.gov (United States)

    McSharry, Patrick E

    2007-12-22

    To assess the effect of altitude on match results and physiological performance of a large and diverse population of professional athletes. Statistical analysis of international football (soccer) scores and results. FIFA extensive database of 1460 football matches in 10 countries spanning over 100 years. Altitude had a statistically significant negative impact on physiological performance, as revealed through the overall underperformance of low altitude teams when playing against high altitude teams in South America. High altitude teams score more and concede fewer goals with increasing altitude difference. Each additional 1000 m of altitude difference increases the goal difference by about half of a goal. The probability of the home team winning for two teams from the same altitude is 0.537, whereas this rises to 0.825 for a home team with an altitude difference of 3695 m (such as Bolivia v Brazil) and falls to 0.213 when the altitude difference is -3695 m (such as Brazil v Bolivia). Altitude provides a significant advantage for high altitude teams when playing international football games at both low and high altitudes. Lowland teams are unable to acclimatise to high altitude, reducing physiological performance. As physiological performance does not protect against the effect of altitude, better predictors of individual susceptibility to altitude illness would facilitate team selection.
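The reported win probabilities are roughly consistent with a logistic dependence on altitude difference; the coefficients below are back-fitted from two of the quoted numbers purely for illustration and are not taken from the paper:

```python
import math

# Back-fitted illustrative coefficients (assumptions, not from the paper):
# p(0 m) = 0.537 and p(+3695 m) = 0.825 pin down intercept and slope.
a = math.log(0.537 / (1 - 0.537))
b = (math.log(0.825 / (1 - 0.825)) - a) / 3695.0

def home_win_probability(altitude_diff_m: float) -> float:
    """Probability the home team wins, given its altitude advantage in metres."""
    return 1.0 / (1.0 + math.exp(-(a + b * altitude_diff_m)))

for d in (-3695, 0, 3695):
    print(d, round(home_win_probability(d), 3))
```

With these two-point coefficients the symmetric prediction for -3695 m comes out near 0.22, reasonably close to the paper's reported 0.213, which suggests a logistic curve is a fair summary of the effect.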

  11. Impact of diabetes continuing education on health care ...

    African Journals Online (AJOL)

    Methods: A pre- and post-intervention study was carried out in Mukalla City, ... This is an Open Access article that uses a funding model which does not charge .... Finally, after the pilot test, Cronbach's alpha ... inferential statistics were applied.

  12. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics

    Directory of Open Access Journals (Sweden)

    Haejoon Jung

    2018-01-01

    Full Text Available As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for the future communication systems to alleviate the pressure of scarce spectrum resources. For this reason, in this paper, we consider multi-hop M2M communications, where a machine-type communication (MTC) device with the limited transmit power relays to help other devices using mmWave. To be specific, we focus on hop distance statistics and their impacts on system performances in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Different from microwave systems, in mmWave communications, wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aiming at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming the blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performances (e.g., outage probability, hop count, and transmit energy) of the mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.

  13. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics.

    Science.gov (United States)

    Jung, Haejoon; Lee, In-Ho

    2018-01-12

    As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for the future communication systems to alleviate the pressure of scarce spectrum resources. For this reason, in this paper, we consider multi-hop M2M communications, where a machine-type communication (MTC) device with the limited transmit power relays to help other devices using mmWave. To be specific, we focus on hop distance statistics and their impacts on system performances in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Different from microwave systems, in mmWave communications, wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aiming at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming the blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performances (e.g., outage probability, hop count, and transmit energy) of the mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.
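The per-hop progress limitation from blockage can be illustrated with a small simulation; the exponential line-of-sight model exp(-d/β) and all parameter values are assumptions for the sketch, not the routing strategies analysed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 20_000
comm_range = 100.0     # max per-hop distance without blockage, m (assumed)
beta = 50.0            # LOS decay constant of the blockage model, m (assumed)

def hop_distance(rng):
    """One greedy hop: pick the farthest candidate relay with a LOS link.

    Ten candidate relays are dropped uniformly within range; each link of
    length d is unblocked with probability exp(-d / beta).
    """
    relays = np.sort(rng.uniform(0.0, comm_range, 10))[::-1]  # farthest first
    for d in relays:
        if rng.random() < np.exp(-d / beta):   # link unblocked?
            return d                           # greedy: farthest LOS relay
    return 0.0                                 # outage: no LOS relay at all

hops = np.array([hop_distance(rng) for _ in range(n_trials)])
print(f"mean hop progress {hops.mean():.1f} m, "
      f"outage rate {(hops == 0).mean():.4f}")
```

Comparing the mean against the blockage-free baseline (where the farthest relay would always be usable) quantifies how much blockage shortens each hop, mirroring the baseline-versus-blocked comparison the paper makes analytically.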

  14. Measurement and statistics for teachers

    CERN Document Server

    Van Blerkom, Malcolm

    2008-01-01

    Written in a student-friendly style, Measurement and Statistics for Teachers shows teachers how to use measurement and statistics wisely in their classes. Although there is some discussion of theory, emphasis is given to the practical, everyday uses of measurement and statistics. The second part of the text provides more complete coverage of basic descriptive statistics and their use in the classroom than in any text now available.Comprehensive and accessible, Measurement and Statistics for Teachers includes:Short vignettes showing concepts in action Numerous classroom examples Highlighted vocabulary Boxes summarizing related concepts End-of-chapter exercises and problems Six full chapters devoted to the essential topic of Classroom Tests Instruction on how to carry out informal assessments, performance assessments, and portfolio assessments, and how to use and interpret standardized tests A five-chapter section on Descriptive Statistics, giving instructors the option of more thoroughly teaching basic measur...

  15. Multimodal integration in statistical learning

    DEFF Research Database (Denmark)

    Mitchell, Aaron; Christiansen, Morten Hyllekvist; Weiss, Dan

    2014-01-01

    Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally … facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.

  16. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

    Full Text Available The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  17. Statistical decay of giant resonances

    International Nuclear Information System (INIS)

    Dias, H.; Teruya, N.; Wolynec, E.

    1986-01-01

    Statistical calculations to predict the neutron spectrum resulting from the decay of Giant Resonances are discussed. The dependence of the results on the optical potential parametrization and on the level density of the residual nucleus is assessed. A Hauser-Feshbach calculation is performed for the decay of the monopole giant resonance in 208 Pb using the experimental levels of 207 Pb from a recent compilation. The calculated statistical decay is in excellent agreement with recent experimental data, showing that the decay of this resonance is dominantly statistical, as predicted by continuum RPA calculations. (Author) [pt

  18. Statistical decay of giant resonances

    International Nuclear Information System (INIS)

    Dias, H.; Teruya, N.; Wolynec, E.

    1986-02-01

    Statistical calculations to predict the neutron spectrum resulting from the decay of Giant Resonances are discussed. The dependence of the results on the optical potential parametrization and on the level density of the residual nucleus is assessed. A Hauser-Feshbach calculation is performed for the decay of the monopole giant resonance in 208 Pb using the experimental levels of 207 Pb from a recent compilation. The calculated statistical decay is in excellent agreement with recent experimental data, showing that decay of this resonance is dominantly statistical, as predicted by continuum RPA calculations. (Author) [pt

  19. WASP (Write a Scientific Paper) using Excel -5: Quartiles and standard deviation.

    Science.gov (United States)

    Grech, Victor

    2018-03-01

    The almost inevitable descriptive statistics exercise that is undergone once data collection is complete, prior to inferential statistics, requires the acquisition of basic descriptors which may include standard deviation and quartiles. This paper provides pointers as to how to do this in Microsoft Excel™ and explains the relationship between the two. Copyright © 2018 Elsevier B.V. All rights reserved.
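The same descriptors can be reproduced outside Excel; in the sketch below (Python with NumPy, as an illustration), the default linear interpolation of np.percentile matches Excel's QUARTILE.INC, and the ddof argument switches between the sample SD (STDEV.S) and the population SD (STDEV.P):

```python
import numpy as np

data = np.array([2, 4, 4, 4, 5, 5, 7, 9], dtype=float)

# Quartiles: numpy's default linear interpolation matches Excel's
# QUARTILE.INC behaviour.
q1, q2, q3 = np.percentile(data, [25, 50, 75])

# Standard deviation: ddof=1 is the sample SD (Excel STDEV.S),
# ddof=0 the population SD (Excel STDEV.P).
sd_sample = np.std(data, ddof=1)
sd_pop = np.std(data, ddof=0)

print(q1, q2, q3)          # 4.0 4.5 5.5
print(round(sd_pop, 1))    # 2.0 for this classic example
```

The interquartile range (q3 - q1) and the SD describe spread on the same data but react very differently to outliers, which is the relationship the paper walks through in Excel.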

  20. Nurses' accuracy in estimating backrest elevation

    African Journals Online (AJOL)

    2007-02-08

    Feb 8, 2007 ... study, compared an intended backrest elevation of 45°. Department of ... the breakpoint of the bed frame using an angle finder. Backrest ... Inferential statistics in the form of correlational analysis was .... pilot study. Am J Crit ...

  1. perceived factors influencing the choice of antenatal care

    African Journals Online (AJOL)

    Osondu

    http://dx.doi.org/10.4314/ejesm.v5i4.6 ... analyzed using descriptive and inferential statistical tools. .... overall health status of the population has ... inequalities have become the central goals of ... distance to health facility, onset of labour at.

  2. Statistical learning in high energy and astrophysics

    International Nuclear Information System (INIS)

    Zimmermann, J.

    2005-01-01

    This thesis studies the performance of statistical learning methods in high energy and astrophysics, where they have become a standard tool in physics analysis. They are used to perform complex classification or regression by intelligent pattern recognition. This kind of artificial intelligence is achieved by the principle "learning from examples": the examples describe the relationship between detector events and their classification. The application of statistical learning methods is motivated either by the lack of knowledge about this relationship or by tight time restrictions. In the first case, learning from examples is the only possibility, since no theory is available that would allow an algorithm to be built in the classical way. In the second case, a classical algorithm exists but is too slow to cope with the time restrictions. It is therefore replaced by a pattern recognition machine which implements a fast statistical learning method. But even in applications where some kind of classical algorithm had done a good job, statistical learning methods have proven themselves by their remarkable performance. This thesis gives an introduction to statistical learning methods and how they are applied correctly in physics analysis. Their flexibility and high performance will be discussed by showing intriguing results from high energy and astrophysics. These include the development of highly efficient triggers, powerful purification of event samples and exact reconstruction of hidden event parameters. The presented studies also show typical problems in the application of statistical learning methods. They should be only a second choice in all cases where an algorithm based on prior knowledge exists. Some examples in physics analyses are found where these methods are not used in the right way, leading either to wrong predictions or bad performance. Physicists also often hesitate to profit from these methods because they fear that statistical learning methods cannot be controlled in a

  3. Statistical learning in high energy and astrophysics

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J.

    2005-06-16

    This thesis studies the performance of statistical learning methods in high energy and astrophysics, where they have become a standard tool in physics analysis. They are used to perform complex classification or regression by intelligent pattern recognition. This kind of artificial intelligence is achieved by the principle "learning from examples": the examples describe the relationship between detector events and their classification. The application of statistical learning methods is motivated either by the lack of knowledge about this relationship or by tight time restrictions. In the first case, learning from examples is the only possibility, since no theory is available that would allow an algorithm to be built in the classical way. In the second case, a classical algorithm exists but is too slow to cope with the time restrictions. It is therefore replaced by a pattern recognition machine which implements a fast statistical learning method. But even in applications where some kind of classical algorithm had done a good job, statistical learning methods have proven themselves by their remarkable performance. This thesis gives an introduction to statistical learning methods and how they are applied correctly in physics analysis. Their flexibility and high performance will be discussed by showing intriguing results from high energy and astrophysics. These include the development of highly efficient triggers, powerful purification of event samples and exact reconstruction of hidden event parameters. The presented studies also show typical problems in the application of statistical learning methods. They should be only a second choice in all cases where an algorithm based on prior knowledge exists. Some examples in physics analyses are found where these methods are not used in the right way, leading either to wrong predictions or bad performance. Physicists also often hesitate to profit from these methods because they fear that statistical learning methods cannot

  4. Performance Monitoring System: Summary of Lock Statistics. Revision 1.

    Science.gov (United States)

    1985-12-01

    [Scanned table fragment: upbound lock statistics (kilotons, average operations) for Arkansas River locks, including Forrell Lock; "NO DATA RECORDED FOR THIS LOCK" noted for Lock and Dam 2 auxiliary. Figures not reliably recoverable from the scan.]

  5. Status of Breast Self-Examination Performance among Women Referring to Health Centers of Tabriz, Iran

    Directory of Open Access Journals (Sweden)

    Farshbaf-Khalili Azizeh

    2014-07-01

    Full Text Available Objective: Breast cancer is the most common type of cancer and the second principal cause of deaths from cancer in women. Breast self-examination (BSE) is an inexpensive screening method and is carried out by women themselves. The purpose of this study was to examine the status of breast self-examination performance among women referring to health centers of Tabriz, Iran. Materials and Methods: This study was a descriptive, cross-sectional research carried out on 400 women aged 20-50 years. The samples were recruited randomly from among female clients of health centers in Tabriz. A questionnaire and an observational checklist were used to elicit socio-demographic information and the status of BSE performance among women. Content validity was used for validation and Cronbach's alpha was calculated (0.80) for reliability of the instrument. Descriptive and inferential statistics were used to analyze data through SPSS software. Results: The findings of this research showed that only 18.8% of women performed BSE. Among them, 46.67% performed BSE monthly, and 40% at the end of menstruation. The initiation age of BSE in 77% was between 21-30 years of age. Notably, 54.7% of them had received no advice on BSE from physicians and midwives. The majority of women did not perform the various steps of BSE. The quality of this screening was very desirable in 2 (0.5%), desirable in 5 (1.3%), average in 19 (4.8%), undesirable in 36 (9%), and very undesirable in 338 (84.5%) women. Chi-square test showed a significant relationship between the quality of BSE performance and level of education, employment, breastfeeding quality, and family history of breast cancer (P < 0.05). Conclusion: The findings showed that the status of BSE performance was very poor. Therefore, to encourage women to use BSE correctly and regularly, education programs should be performed through various media including television, radio, and leaflets. The role of health personnel in this
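
The chi-square test of independence used in the study above can be sketched for a single 2 × 2 table. The counts below are hypothetical (the paper's actual cross-tabulations are not reproduced in the abstract), and the p-value uses the df = 1 identity P(χ² > x) = erfc(√(x/2)) instead of a statistics library:

```python
import math

# Hypothetical 2x2 table: rows = education level, columns = BSE quality
#                acceptable  poor
observed = [[10, 90],     # up to secondary education
            [30, 70]]     # tertiary education

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Expected counts under independence: (row total * column total) / grand total
expected = [[r * c / grand for c in col_totals] for r in row_totals]

# Pearson chi-square statistic (no Yates continuity correction)
chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(2) for j in range(2))

# For df = 1, the chi-square survival function is erfc(sqrt(x / 2))
p_value = math.erfc(math.sqrt(chi2 / 2))

print(f"chi2 = {chi2:.2f}, p = {p_value:.5f}")
```

With these hypothetical counts, chi² = 12.5 and p < 0.05, so the null hypothesis of independence would be rejected at the 5% level.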

  6. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    Science.gov (United States)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and this behavioral simulation model doesn't include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and operates successfully on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good accordance with the test data, validating the high simulation accuracy.

  7. Assessment of food crop farmers' participation and performance in ...

    African Journals Online (AJOL)

    List of farmers who participated in WAAPP collected from N.R.C.R.I WAAPP Coordinating office, Umudike and Agricultural Innovation Platform (AIP), Umudike served as the study population. A structured questionnaire was used to elicit information from the farmers. Data were analyzed using descriptive and inferential ...

  8. Forecasting of a ground-coupled heat pump performance using neural networks with statistical data weighting pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Esen, Hikmet; Esen, Mehmet [Department of Mechanical Education, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey); Inalli, Mustafa [Department of Mechanical Engineering, Faculty of Engineering, Firat University, 23279 Elazig (Turkey); Sengur, Abdulkadir [Department of Electronic and Computer Science, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey)

    2008-04-15

    The objective of this work is to improve the performance of an artificial neural network (ANN) with a statistical weighted pre-processing (SWP) method in learning to predict ground-coupled heat pump (GCHP) system behavior with a minimum data set. Experimental studies were completed to obtain training and test data. Air temperatures entering/leaving the condenser unit, the water-antifreeze solution entering/leaving the horizontal ground heat exchangers and ground temperatures (1 and 2 m) were used as the input layer, while the output is the coefficient of performance (COP) of the system. Statistical measures, such as the root-mean-squared (RMS) error, the coefficient of multiple determination (R{sup 2}) and the coefficient of variation (cov), were used to compare predicted and actual values for model validation. It is found that the RMS value is 0.074, the R{sup 2} value is 0.9999 and the cov value is 2.22 for the SCG6 algorithm of the ANN-only structure. It is also found that the RMS value is 0.002, the R{sup 2} value is 0.9999 and the cov value is 0.076 for the SCG6 algorithm of the SWP-ANN structure. The simulation results show that SWP-based networks can be used as an alternative in these systems. Therefore, instead of the limited experimental data found in the literature, faster and simpler solutions are obtained using hybridized structures such as SWP-ANN. (author)
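
The three validation statistics named in the abstract can be computed directly; a small Python sketch with hypothetical measured and predicted COP values (the formulas below are the standard definitions of RMS error, R², and the coefficient of variation of the error, which may differ in detail from the authors' conventions):

```python
import math

# Hypothetical measured vs. ANN-predicted COP values
actual    = [3.10, 3.25, 3.40, 3.05, 3.30, 3.50]
predicted = [3.12, 3.20, 3.45, 3.00, 3.33, 3.48]

n = len(actual)
mean_actual = sum(actual) / n

# Root-mean-squared error between predictions and measurements
rms = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Coefficient of multiple determination: R^2 = 1 - SS_res / SS_tot
ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
ss_tot = sum((a - mean_actual) ** 2 for a in actual)
r2 = 1 - ss_res / ss_tot

# Coefficient of variation of the error, expressed in percent
cov = 100 * rms / mean_actual

print(f"RMS = {rms:.4f}, R^2 = {r2:.4f}, cov = {cov:.2f}%")
```

Lower RMS and cov, and R² closer to 1, indicate a better fit, which is how the paper ranks the ANN-only structure against the SWP-ANN structure.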

  9. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

    Energy Technology Data Exchange (ETDEWEB)

    GREER DA; THIEN MG

    2012-01-12

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), has previously presented the results of mixing performance in two different sizes of small scale DSTs to support scale-up estimates of full scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high level waste to the WTP. WRPS has performed small scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results, which leads to the conclusion that the two scales of small DST behave similarly and that full scale performance is predictable, will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission will be accomplished successfully. This paper will focus on the analytical data collected from mixing, sampling, and batch transfer testing in the small scale mixing demonstration tanks and how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data will be provided. The paper will then discuss the processing and manipulation of the data which is necessary to begin evaluating sampling and batch transfer performance. This discussion will also include the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests. The

  10. Applying Statistical Mechanics to pixel detectors

    International Nuclear Information System (INIS)

    Pindo, Massimiliano

    2002-01-01

    Pixel detectors, being made of a large number of active cells of the same kind, can be considered as significant sets to which Statistical Mechanics variables and methods can be applied. By properly redefining well-known statistical parameters so that they match the ones that actually characterize pixel detectors, an analysis of the way they work can be performed in a totally new perspective. A deeper understanding of pixel detectors is attained, helping in the evaluation and comparison of their intrinsic characteristics and performance

  11. Inferential Processor.

    Science.gov (United States)

    1982-01-01

    ... Groupe d'Intelligence Artificielle, Universite d'Aix-Marseille, Luminy, France, September 1975. 4. Clocksin, W.F. and C.S. Mellish ... for reasoning in higher-order logics such as the first-order predicate calculus; the latter is required for applications in artificial intelligence ... analysis and evaluation of intelligence reports, the preparation and analysis of tactical methods and principles, the formulation and interpretation of ...

  12. Development of a statistical shape model of multi-organ and its performance evaluation

    International Nuclear Information System (INIS)

    Nakada, Misaki; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru

    2010-01-01

    Existing statistical shape modeling methods for an organ cannot take into account the correlation between neighboring organs. This study focuses on a level set distribution model and proposes two modeling methods for multiple organs that can take into account the correlation between neighboring organs. The first method combines the level set functions of multiple organs into a vector. It then analyses the distribution of the vectors of a training dataset by principal component analysis and builds a statistical shape model of the multiple organs. The second method constructs a statistical shape model for each organ independently and assembles the component scores of the different organs in a training dataset to generate a vector. It analyses the distribution of these vectors to build a statistical shape model of multiple organs. This paper shows results of applying the proposed methods, trained on 15 abdominal CT volumes, to 8 unknown CT volumes. (author)
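
The first modeling approach described above — concatenating per-organ shape representations into one vector and applying principal component analysis — can be sketched in a few lines of NumPy. The toy "shapes" below are hypothetical low-dimensional vectors standing in for stacked level set functions, with the second organ deliberately correlated with the first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 15 "subjects", each described by a stacked
# vector of two organs' shape values (here just 6 numbers per organ)
n_subjects, dim_per_organ = 15, 6
organ_a = rng.normal(0.0, 1.0, (n_subjects, dim_per_organ))
organ_b = 0.8 * organ_a + rng.normal(0.0, 0.3, (n_subjects, dim_per_organ))
vectors = np.hstack([organ_a, organ_b])   # one combined vector per subject

# PCA: center the vectors, then take the singular value decomposition
mean_shape = vectors.mean(axis=0)
centered = vectors - mean_shape
_, s, vt = np.linalg.svd(centered, full_matrices=False)

# Eigenvalues of the covariance matrix and cumulative variance explained
eigenvalues = s ** 2 / (n_subjects - 1)
explained = np.cumsum(eigenvalues) / eigenvalues.sum()

# Any training shape can be approximated from the first k modes
k = 5
scores = centered @ vt[:k].T              # per-subject component scores
reconstruction = mean_shape + scores @ vt[:k]

print(explained[:k])
```

Because the two "organs" co-vary, a few leading modes capture most of the joint variation, which is the point of building a combined model instead of independent per-organ models.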

  13. A INFLUÊNCIA DO TREINAMENTO RESISTIDO DE 20 SEMANAS NO NÍVEL DE DESENVOLVIMENTO MOTOR EM IDOSAS DA UNATI – UNIVERSIDADE ABERTA DA TERCEIRA IDADE – ESEFFEGO

    Directory of Open Access Journals (Sweden)

    Fabrício Galdino Magalhães

    2016-06-01

    Full Text Available Objective: To evaluate the influence of a 20-week resistance training program on the motor development of active senior women from the Open University of the Third Age (UNATI/UEG). Methods: This is a non-controlled clinical trial conducted with senior women, who did not present physical/mental limitations, with registration in UNATI/UEG. The evaluation of motor skills (body schema, temporal and spatial organization) of the Motor Scale for the Elderly (EMTI) contained in the motor evaluation Guide for Seniors was performed before and after the resistance training intervention over 20 weeks, with three weekly sessions of 50 minutes each. The data analysis was performed using descriptive and inferential statistics, with a significance level of p

  14. Delusions as performance failures.

    Science.gov (United States)

    Gerrans, P

    2001-08-01

    Delusions are explanations of anomalous experiences. A theory of delusion requires an explanation of both the anomalous experience and the apparently irrational explanation generated by the delusional subject. Hence, we require a model of rational belief formation against which the belief formation of delusional subjects can be evaluated. I first describe such a model, distinguishing procedural from pragmatic rationality. Procedural rationality is the use of rules or procedures, deductive or inductive, that produce an inferentially coherent set of propositions. Pragmatic rationality is the use of procedural rationality in context. I then apply the distinction to the explanation of the Capgras and the Cotard delusions. I then argue that delusions are failures of pragmatic rationality. I examine the nature of these failures employing the distinction between performance and competence familiar from Chomskian linguistics. This approach to the irrationality of delusions reconciles accounts in which the explanation of the anomalous experience exhausts the explanation of delusion, accounts that appeal to further deficits within the reasoning processes of delusional subjects, and accounts that argue that delusions are not beliefs at all. (Respectively, one-stage, two-stage, and expressive accounts.) In paradigm cases that concern cognitive neuropsychiatry the irrationality of delusional subjects should be thought of as a performance deficit in pragmatic rationality.

  15. A Survey of Statistical Capstone Projects

    Science.gov (United States)

    Martonosi, Susan E.; Williams, Talithia D.

    2016-01-01

    In this article, we highlight the advantages of incorporating a statistical capstone experience in the undergraduate curriculum, where students perform an in-depth analysis of real-world data. Capstone experiences develop statistical thinking by allowing students to engage in a consulting-like experience that requires skills outside the scope of…

  16. Gender Dimensions of Rural Livelihoods in Artisanal and Small ...

    African Journals Online (AJOL)

    Information collected include: socio-demographic characteristics of miners, income level of miners, as well as perceived health and environmental impacts of mining activities. Data were analysed using both descriptive and inferential statistics. Findings revealed significant differences in the socio-economic characteristics, ...

  17. Download this PDF file

    African Journals Online (AJOL)

    abp

    2016-05-13

    May 13, 2016 ... Descriptive and inferential statistical analysis was carried out depending on the nature of variables. .... Sampling technique .... Bivariate and multivariate binary logistic regressions were applied when the variables are normally .... through the traditional maternal and child health care programs.

  18. Sustainability of Marketing Food Crops through the Internet in Lagos ...

    African Journals Online (AJOL)

    Purposive sampling technique was used to select 85 questionnaire respondents and communication officer of an online farm produce shopping mall while snowball was adopted for agricultural engineers. The data collected through questionnaire were analysed using descriptive and inferential statistics while the interviews ...

  19. Constraints to feedback provision on forestry-related technologies ...

    African Journals Online (AJOL)

    This paper ascertained the constraints to feedback provision on forestry-related technologies. Interview schedule was used to elicit information from 163 randomly selected respondents. Descriptive (frequencies, percentages) and inferential (Chi square and Ordinary Least square regression) statistics were used to analyse ...

  20. Modern applied statistics with S-plus

    CERN Document Server

    Venables, W N

    1994-01-01

    S-Plus is a powerful environment for statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-Plus to perform statistical analyses and provides both an introduction to the use of S-Plus and a course in modern statistical methods. The aim of the book is to show how to use S-Plus as a powerful and graphical system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-Plus, and both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets.

  1. proportion: A comprehensive R package for inference on single Binomial proportion and Bayesian computations

    Directory of Open Access Journals (Sweden)

    M. Subbiah

    2017-01-01

    Full Text Available Extensive statistical practice has shown the importance and relevance of the inferential problem of estimating probability parameters in a binomial experiment, especially on the issues of competing intervals from frequentist, Bayesian, and bootstrap approaches. The package, written in the free R environment and presented in this paper, tries to take care of the issues just highlighted by pooling a number of widely available and well-performing methods and introducing essential variations to them. A wide range of functions helps users with differing skills to estimate, evaluate, and summarize, numerically and graphically, various measures adopting either the frequentist or the Bayesian paradigm.
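
The package itself is written in R, but the core contrast it addresses — competing confidence intervals for a binomial proportion — is easy to illustrate in plain Python; a sketch of two of the standard frequentist intervals, Wald and Wilson, at the 95% level:

```python
import math

Z95 = 1.959963984540054  # two-sided 95% standard normal quantile

def wald_interval(x, n, z=Z95):
    """Classic Wald interval: p_hat +/- z * sqrt(p_hat (1 - p_hat) / n)."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_interval(x, n, z=Z95):
    """Wilson score interval: better coverage, never leaves [0, 1]."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# A sparse outcome (1 success in 20 trials) exposes the difference:
wald = wald_interval(1, 20)
wilson = wilson_interval(1, 20)
print("Wald:  ", wald)    # lower bound is negative -- an impossible proportion
print("Wilson:", wilson)  # stays inside [0, 1]
```

This is exactly the kind of small-sample pathology that motivates pooling many interval methods, as the package does.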

  2. Heuristic versus statistical physics approach to optimization problems

    International Nuclear Information System (INIS)

    Jedrzejek, C.; Cieplinski, L.

    1995-01-01

    Optimization is a crucial ingredient of many calculation schemes in science and engineering. In this paper we assess several classes of methods: heuristic algorithms; methods directly relying on statistical physics, such as the mean-field method and simulated annealing; and Hopfield-type neural networks and genetic algorithms, partly related to statistical physics. We perform the analysis for three types of problems: (1) the Travelling Salesman Problem, (2) vector quantization, and (3) the traffic control problem in multistage interconnection networks. In general, heuristic algorithms perform better (except for genetic algorithms) and much faster, but have to be specific for every problem. The key to improving the performance could be to include heuristic features into general purpose statistical physics methods. (author)
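
Of the statistical-physics methods named above, simulated annealing is the easiest to sketch on the Travelling Salesman Problem; a toy Python version with 2-opt moves on hypothetical city coordinates (the cooling schedule and temperatures are illustrative choices, not tuned values from the paper):

```python
import math
import random

random.seed(42)

# Hypothetical cities on a circle, so the optimal tour is the circle order
n = 12
cities = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
          for i in range(n)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % n]])
               for i in range(n))

tour = list(range(n))
random.shuffle(tour)
initial_length = tour_length(tour)

best, best_length = tour[:], initial_length
temperature = 1.0
while temperature > 1e-3:
    # Propose a 2-opt move: reverse a random segment of the tour
    i, j = sorted(random.sample(range(n), 2))
    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    delta = tour_length(candidate) - tour_length(tour)
    # Metropolis criterion: always accept improvements, sometimes accept worse
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        tour = candidate
        if tour_length(tour) < best_length:
            best, best_length = tour[:], tour_length(tour)
    temperature *= 0.995  # geometric cooling schedule

print(f"initial {initial_length:.3f} -> best {best_length:.3f}")
```

The accept-worse-moves-at-high-temperature step is what lets the method escape local minima, in contrast to a greedy heuristic that only ever accepts improvements.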

  3. Leadership preparation in engineering: A study of perceptions of leadership attributes, preparedness, and policy implications

    Science.gov (United States)

    Latorre, Julia Talarico

    Perceptions of engineers and leaders in the field of engineering regarding leadership preparation for engineers were evaluated in this dissertation. More specifically, engineers' and leaders' perceptions of leadership preparation and the necessary skills of leaders in technical fields were studied. The design and analyses of the study were divided into two parts: (1) data on employment and college enrollment for engineers in New York State (NYS) were plotted using Geographic Information Systems (GIS) in order to better understand the relevance of leadership preparation in engineering; (2) perceptions regarding engineering leadership preparedness were analyzed using descriptive and inferential statistical methods, and engineers' perceptions regarding the importance of chosen leadership attributes were analyzed using inferential statistics and Generalizability Theory (G-theory). Responses to open-ended questions regarding the importance of leadership or management training for engineers, and responses discussing possible implications of increasing leadership or management training for engineers, were also examined. Possible implications of the study, and suggestions for future research, were also included.

  4. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hoffmeyer, P.

    Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull
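
A standard way to fit the 2-parameter Weibull named above is median-rank regression on the linearized CDF, ln(−ln(1−F)) = k·ln x − k·ln λ. The strength data below are simulated from a known Weibull rather than taken from the study's 6700 specimens, so the fit can be checked against the truth:

```python
import math
import random

random.seed(7)

# Simulate hypothetical timber strengths from a known 2-parameter Weibull
true_shape, true_scale = 5.0, 40.0   # shape k, scale lambda (e.g. MPa)
n = 200
strengths = sorted(true_scale * (-math.log(1 - random.random())) ** (1 / true_shape)
                   for _ in range(n))

# Plotting positions F_i = (i - 0.5) / n, then ordinary least squares on
# y = ln(-ln(1 - F)) against x = ln(strength); slope = k, intercept = -k ln(lambda)
xs = [math.log(s) for s in strengths]
ys = [math.log(-math.log(1 - (i + 0.5) / n)) for i in range(n)]

mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

shape_hat = slope
scale_hat = math.exp(-intercept / slope)
print(f"shape ~ {shape_hat:.2f} (true {true_shape}), "
      f"scale ~ {scale_hat:.1f} (true {true_scale})")
```

Maximum-likelihood fitting is the other common choice; the regression version shown here has the advantage of doubling as the familiar Weibull probability plot.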

  5. The statistical-inference approach to generalized thermodynamics

    International Nuclear Information System (INIS)

    Lavenda, B.H.; Scherer, C.

    1987-01-01

    Limit theorems, such as the central-limit theorem and the weak law of large numbers, are applicable to statistical thermodynamics for sufficiently large sample sizes of independent and identically distributed observations performed on extensive thermodynamic (chance) variables. The estimation of the intensive thermodynamic quantities is a problem in parametric statistical estimation. The normal approximation to the Gibbs distribution is justified by the analysis of large deviations. Statistical thermodynamics is generalized to include the statistical estimation of variance as well as mean values.
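
The role of the central-limit theorem here — sample means of many i.i.d. observations becoming approximately normal — can be checked numerically; a Python sketch with uniform draws standing in for observations of an extensive variable (the distribution choice is illustrative, not the paper's):

```python
import math
import random
import statistics

random.seed(1)

# Each "measurement" is the mean of n i.i.d. uniform(0, 1) draws
n_draws, n_samples = 100, 2000
sample_means = [statistics.fmean(random.random() for _ in range(n_draws))
                for _ in range(n_samples)]

# CLT prediction: mean 1/2, standard deviation sqrt(1/12) / sqrt(n),
# since a uniform(0, 1) variable has variance 1/12
predicted_mu = 0.5
predicted_sigma = math.sqrt(1 / 12) / math.sqrt(n_draws)

observed_mu = statistics.fmean(sample_means)
observed_sigma = statistics.stdev(sample_means)
print(observed_mu, observed_sigma, predicted_sigma)
```

The observed spread of the sample means shrinks as 1/√n, which is the sense in which intensive-quantity estimates sharpen with sample size.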

  6. The nano-mechanical signature of Ultra High Performance Concrete by statistical nanoindentation techniques

    International Nuclear Information System (INIS)

    Sorelli, Luca; Constantinides, Georgios; Ulm, Franz-Josef; Toutlemonde, Francois

    2008-01-01

    Advances in engineering the microstructure of cementitious composites have led to the development of fiber reinforced Ultra High Performance Concretes (UHPC). The scope of this paper is twofold, first to characterize the nano-mechanical properties of the phases governing the UHPC microstructure by means of a novel statistical nanoindentation technique; then to upscale those nanoscale properties, by means of continuum micromechanics, to the macroscopic scale of engineering applications. In particular, a combined investigation of nanoindentation, scanning electron microscope (SEM) and X-ray Diffraction (XRD) indicates that the fiber-matrix transition zone is relatively defect free. On this basis, a four-level multiscale model with defect free interfaces allows to accurately determine the composite stiffness from the measured nano-mechanical properties. Besides evidencing the dominant role of high density calcium silicate hydrates and the stiffening effect of residual clinker, the suggested model may become a useful tool for further optimizing cement-based engineered composites

  7. Renyi statistics in equilibrium statistical mechanics

    International Nuclear Information System (INIS)

    Parvan, A.S.; Biro, T.S.

    2010-01-01

    The Renyi statistics in the canonical and microcanonical ensembles is examined both in general and in particular for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. Using the exact analytical results for the ideal gas, it is shown that in the canonical ensemble, taking the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore, it satisfies the requirements of equilibrium thermodynamics, i.e. the thermodynamical potential of the statistical ensemble is a homogeneous function of first degree of its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamical relations as those stemming from the Boltzmann-Gibbs statistics in this limit.
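
For reference, the Renyi entropy that generates these statistics is, for a discrete distribution {p_i} and a parameter q (a standard definition; the paper's exact conventions may differ):

```latex
S_q^{R} = \frac{1}{1-q}\,\ln\!\left(\sum_i p_i^{\,q}\right),
\qquad q > 0,\; q \neq 1 ,
```

and in the limit q → 1 it reduces to the Boltzmann-Gibbs (Shannon) entropy S = −Σ_i p_i ln p_i, which is consistent with the equivalence to Boltzmann-Gibbs statistics described in the abstract.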

  8. Data Collection Manual for Academic and Research Library Network Statistics and Performance Measures.

    Science.gov (United States)

    Shim, Wonsik "Jeff"; McClure, Charles R.; Fraser, Bruce T.; Bertot, John Carlo

    This manual provides a beginning approach for research libraries to better describe the use and users of their networked services. The manual also aims to increase the visibility and importance of developing such statistics and measures. Specific objectives are: to identify selected key statistics and measures that can describe use and users of…

  9. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    Science.gov (United States)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the recent decade, analyzing remotely sensed imagery has become one of the most common and widely used procedures in environmental studies. In this context, supervised image classification techniques play a central role. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less-applied image classification methods, including Bagged CART, the stochastic gradient boosting model and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and support vector machine with a linear kernel. To do so, each method was run ten times and three validation techniques were used to estimate the accuracy statistics, consisting of cross validation, independent validation and validation with the total training data. Moreover, using ANOVA and the Tukey test, the statistical significance of differences between the classification methods was assessed. In general, the results showed that random forest, with a marginal difference compared to Bagged CART and the stochastic gradient boosting model, is the best performing method, whilst based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.
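
The evaluation protocol above — run each classifier repeatedly under cross-validation and compare mean accuracies — can be illustrated without remote-sensing data. The sketch below compares two deliberately simple stand-in classifiers (nearest centroid vs. 1-nearest-neighbour) on hypothetical 2-D points; these are not the paper's five methods, just placeholders for the protocol:

```python
import math
import random

random.seed(3)

# Hypothetical two-class 2-D data: Gaussian blobs around (0, 0) and (2, 2)
def make_point(label):
    cx, cy = (0.0, 0.0) if label == 0 else (2.0, 2.0)
    return (cx + random.gauss(0, 1), cy + random.gauss(0, 1)), label

data = [make_point(i % 2) for i in range(200)]
random.shuffle(data)

def nearest_centroid(train, x):
    # Classify by distance to each class's mean point
    cents = {}
    for (px, py), lab in train:
        sx, sy, c = cents.get(lab, (0.0, 0.0, 0))
        cents[lab] = (sx + px, sy + py, c + 1)
    return min(cents, key=lambda lab: math.dist(
        x, (cents[lab][0] / cents[lab][2], cents[lab][1] / cents[lab][2])))

def one_nn(train, x):
    # Classify by the label of the single closest training point
    return min(train, key=lambda item: math.dist(x, item[0]))[1]

def cross_validate(classifier, data, k=5):
    fold = len(data) // k
    accs = []
    for f in range(k):
        test = data[f * fold:(f + 1) * fold]
        train = data[:f * fold] + data[(f + 1) * fold:]
        correct = sum(classifier(train, x) == lab for x, lab in test)
        accs.append(correct / len(test))
    return sum(accs) / k

acc_centroid = cross_validate(nearest_centroid, data)
acc_1nn = cross_validate(one_nn, data)
print(f"nearest centroid: {acc_centroid:.3f}, 1-NN: {acc_1nn:.3f}")
```

In the paper's setting, the per-fold (and per-run) accuracies produced this way are exactly the samples that an ANOVA plus Tukey test would compare across methods.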

  10. Statistics for Learning Genetics

    Science.gov (United States)

    Charles, Abigail Sheena

    This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and, more directly, students' perceptions of, and performance in, doing statistically-based genetics problems. This issue is at the emerging edge of modern college-level genetics instruction, and this study attempts to identify key theoretical components for creating a specialized biological statistics curriculum. The goal of this curriculum will be to prepare biology students with the skills for assimilating quantitatively-based genetic processes, increasingly at the forefront of modern genetics. To fulfill this, two college-level classes at two universities were surveyed. One university was located in the northeastern US and the other in the West Indies. There was a sample size of 42 students, and a supplementary interview was administered to a select 9 students. Interviews were also administered to professors in the field in order to gain insight into the teaching of statistics in genetics. Key findings indicated that students had very little to no background in statistics (55%). Although students did perform well on exams, with 60% of the population receiving an A or B grade, 77% of them did not offer good explanations on a probability question associated with the normal distribution provided in the survey. The scope and presentation of the applicable statistics/mathematics in some of the most used textbooks in genetics teaching, as well as the genetics syllabi used by instructors, do not help the issue. It was found that the textbooks often either did not give effective explanations for students or completely left out certain topics. The omission of certain statistically/mathematically oriented topics was also seen in the genetics syllabi reviewed for this study. Nonetheless
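
Normal-distribution probabilities of the kind the survey question asked about (the specific question is not reproduced in the abstract) need nothing more than the error function; a short Python sketch of the classic 68-95-99.7 rule:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Phi(x) for a normal distribution, computed via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Probability that an observation falls within 1, 2, and 3 SDs of the mean
within = [normal_cdf(k) - normal_cdf(-k) for k in (1, 2, 3)]
print([round(p, 4) for p in within])   # -> [0.6827, 0.9545, 0.9973]
```

These three probabilities are precisely the kind of quantitative fact a specialized biological statistics curriculum would expect students to connect to phenotype distributions.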

  11. Nonsuicidal Self-Injury: Exploring the Connection among Race, Ethnic Identity, and Ethnic Belonging

    Science.gov (United States)

    Wester, Kelly L.; Trepal, Heather C.

    2015-01-01

    This study examined race and ethnic identity in relation to nonsuicidal self-injury (NSSI). Participants included freshmen at 2 universities, who were predominantly female. Final inferential statistics examined differences across Caucasian, African American, Hispanic, Asian American, and Multiracial students, finding African Americans and Asian…

  12. Assessment of Indigenous Knowledge Application among Livestock ...

    African Journals Online (AJOL)

    This study investigated the application of indigenous knowledge among livestock farmers in Southern Ijaw Local Government Area of Bayelsa State. A structured questionnaire was administered to one hundred and fifty four respondents in the study area. The data were analyzed using descriptive and inferential statistics.

  13. Ethno-Veterinary Practices Amongst Small–Holder Farmers In Ekiti ...

    African Journals Online (AJOL)

    Pre-tested structured and unstructured interview schedules were used to collect quantitative data, while Focused Group Discussions (FGDs) were used to elicit qualitative data from the respondents. Frequency distribution, percentages, means and standard deviation were used to describe the data. Inferential statistics such ...

  14. Tomato farmers adoption level of postharvest value addition ...

    African Journals Online (AJOL)

    The study examined tomato farmers' adoption level of postharvest value addition technology and its constraints in Surulere Area of Oyo state. 160 tomato farmers were randomly selected and interviewed through structured interview schedule. Data obtained were subjected to descriptive and inferential statistics. Results ...

  15. Labour productivity and resource use efficiency amongst ...

    African Journals Online (AJOL)

    The study examined labour productivity and resource efficiency amongst smallholder cocoa farmers in Abia State, Nigeria. A purposive random sampling technique was adopted in selecting 60 cocoa farmers from three agricultural zones in the State. The analytical techniques used involve inferential statistics like means, ...

  16. Performance evaluation of CT measurements made on step gauges using statistical methodologies

    DEFF Research Database (Denmark)

    Angel, J.; De Chiffre, L.; Kruth, J.P.

    2015-01-01

    In this paper, a study is presented in which statistical methodologies were applied to evaluate the measurement of step gauges on an X-ray computed tomography (CT) system. In particular, the effects of step gauge material density and orientation were investigated. The step gauges consist of uni- and bidirectional lengths. By confirming the repeatability of measurements made on the test system, the number of required scans in the design of experiment (DOE) was reduced. The statistical model was checked using model adequacy principles; model adequacy checking is an important step in validating...

  17. Statistical analysis of angular correlation measurements

    International Nuclear Information System (INIS)

    Oliveira, R.A.A.M. de.

    1986-01-01

    Obtaining the multipole mixing ratio, δ, of γ transitions in angular correlation measurements is a statistical problem characterized by the small number of angles at which the observation is made and by the limited counting statistics, α. The nonexistence of a sufficient statistic for the estimator of δ is shown. Three different estimators for δ were constructed and their properties of consistency, bias and efficiency were tested. Tests were also performed on experimental results obtained in γ-γ directional correlation measurements. (Author) [pt

  18. Statistical monitoring of linear antenna arrays

    KAUST Repository

    Harrou, Fouzi

    2016-11-03

    The paper concerns the problem of monitoring linear antenna arrays using the generalized likelihood ratio (GLR) test. When an abnormal event (fault) affects an array of antenna elements, the radiation pattern changes and significant deviation from the desired design performance specifications can result. In this paper, the detection of faults is addressed from a statistical point of view as a fault detection problem. Specifically, a statistical method based on the GLR principle is used to detect potential faults in linear arrays. To assess the strength of the GLR-based monitoring scheme, three case studies involving different types of faults were performed. Simulation results clearly show the effectiveness of the GLR-based fault-detection method for monitoring the performance of linear antenna arrays.
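The GLR idea described in the abstract can be illustrated with a minimal sketch: for residuals that are nominally zero-mean Gaussian with known noise level σ, the log-GLR for an unknown mean shift reduces to n·x̄²/(2σ²), and a fault (mean shift) inflates the statistic. All data below are synthetic; this is an illustration of the principle, not the paper's method.

```python
import numpy as np

def glr_mean_shift(x, sigma=1.0):
    """Log generalized likelihood ratio for H0: mean 0 vs H1: unknown mean,
    with known noise std sigma. Large values indicate a fault (mean shift)."""
    n = len(x)
    return n * np.mean(x) ** 2 / (2 * sigma ** 2)

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 200)  # nominal residuals
faulty = rng.normal(0.5, 1.0, 200)   # residuals with a mean shift (fault)

# The faulty segment yields a much larger GLR statistic than the healthy one.
assert glr_mean_shift(faulty) > glr_mean_shift(healthy)
```

In practice the statistic is compared against a threshold chosen for a target false-alarm rate.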

  19. Statistical assessment of numerous Monte Carlo tallies

    International Nuclear Information System (INIS)

    Kiedrowski, Brian C.; Solomon, Clell J.

    2011-01-01

    Four tests are developed to assess the statistical reliability of collections of tallies that number in thousands or greater. To this end, the relative-variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality. (author)

  20. Narrative Review of Statistical Reporting Checklists, Mandatory Statistical Editing, and Rectifying Common Problems in the Reporting of Scientific Articles.

    Science.gov (United States)

    Dexter, Franklin; Shafer, Steven L

    2017-03-01

    Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. First, these studies show that instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally poorly assess statistical quality. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.

  1. A Comparison of the Performance of Advanced Statistical Techniques for the Refinement of Day-ahead and Longer NWP-based Wind Power Forecasts

    Science.gov (United States)

    Zack, J. W.

    2015-12-01

    Predictions from Numerical Weather Prediction (NWP) models are the foundation for wind power forecasts for day-ahead and longer forecast horizons. The NWP models directly produce three-dimensional wind forecasts on their respective computational grids. These can be interpolated to the location and time of interest. However, these direct predictions typically contain significant systematic errors ("biases"). This is due to a variety of factors including the limited space-time resolution of the NWP models and shortcomings in the model's representation of physical processes. It has become common practice to attempt to improve the raw NWP forecasts by statistically adjusting them through a procedure that is widely known as Model Output Statistics (MOS). The challenge is to identify complex patterns of systematic errors and then use this knowledge to adjust the NWP predictions. The MOS-based improvements are the basis for much of the value added by commercial wind power forecast providers. There are an enormous number of statistical approaches that can be used to generate the MOS adjustments to the raw NWP forecasts. In order to obtain insight into the potential value of some of the newer and more sophisticated statistical techniques often referred to as "machine learning methods" a MOS-method comparison experiment has been performed for wind power generation facilities in 6 wind resource areas of California. The underlying NWP models that provided the raw forecasts were the two primary operational models of the US National Weather Service: the GFS and NAM models. The focus was on 1- and 2-day ahead forecasts of the hourly wind-based generation. 
The statistical methods evaluated included: (1) screening multiple linear regression, which served as a baseline method; (2) artificial neural networks; (3) a decision-tree approach called random forests; (4) gradient boosted regression based upon a decision-tree algorithm; (5) support vector regression; and (6) analog ensemble
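The baseline method in that comparison, multiple linear regression MOS, can be sketched as follows. The predictors, coefficients, and the synthetic "observed" series below are illustrative assumptions, not the study's data; the point is only that a regression fitted on historical forecast/observation pairs removes systematic NWP bias.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical training set: raw NWP wind speed plus a second predictor,
# with a synthetic "observed" series that embeds a systematic bias.
nwp_speed = rng.uniform(2, 14, 300)
nwp_dir = rng.uniform(0, 360, 300)
observed = 0.8 * nwp_speed - 1.5 + rng.normal(0, 0.5, 300)

# MOS as multiple linear regression: fit coefficients by least squares.
X = np.column_stack([np.ones_like(nwp_speed), nwp_speed, nwp_dir])
coef, *_ = np.linalg.lstsq(X, observed, rcond=None)

corrected = X @ coef
raw_rmse = np.sqrt(np.mean((nwp_speed - observed) ** 2))
mos_rmse = np.sqrt(np.mean((corrected - observed) ** 2))
assert mos_rmse < raw_rmse  # the regression removes the systematic bias
```

The machine-learning methods listed above replace the linear map with a nonlinear one, but the train-on-pairs, correct-the-raw-forecast structure is the same.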

  2. The best motivator priorities parents choose via analytical hierarchy process

    Science.gov (United States)

    Farah, R. N.; Latha, P.

    2015-05-01

    Motivation is probably the most important factor that educators can target in order to improve learning. Numerous cross-disciplinary theories have been postulated to explain motivation. While each of these theories has some truth, no single theory seems to adequately explain all human motivation. The fact is that human beings in general, and pupils in particular, are complex creatures with complex needs and desires. In this paper, the Analytic Hierarchy Process (AHP) is proposed as an emerging solution to large, dynamic and complex real-world multi-criteria decision-making problems, applied here to parents selecting the most suitable motivator when choosing a school for their children. Data were analyzed using SPSS 17.0 ("Statistical Package for Social Science") software, with both descriptive and inferential statistical testing. Descriptive statistics were used to characterize the demographic factors of the pupil and parent respondents. Inferential statistics, such as one-way ANOVA, were used to test significance, and the data were used to calculate the AHP weightings for the criteria considered by parents: school principals, teachers, pupils and parents. The moderating factor was the selection of schools in Ampang based on "Standard Kualiti Pendidikan Malaysia" (SKPM). School principals were found to be the best motivator for parents in choosing a school for their pupils, followed by teachers, parents and pupils.
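The AHP weighting step mentioned in the abstract can be sketched as follows: priorities are the normalized principal eigenvector of a pairwise comparison matrix, and a consistency ratio guards against contradictory judgments. The matrix below is a hypothetical illustration on Saaty's 1-9 scale, not the study's data.

```python
import numpy as np

# Hypothetical pairwise comparisons for four motivator criteria:
# principals, teachers, parents, pupils (Saaty 1-9 scale).
A = np.array([
    [1,   2,   3,   5],
    [1/2, 1,   2,   3],
    [1/3, 1/2, 1,   2],
    [1/5, 1/3, 1/2, 1],
], dtype=float)

# AHP priority weights: principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency index and ratio (random index RI ~ 0.90 for n = 4, per Saaty's table).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90

assert w[0] == max(w)  # principals get the largest weight in this example
assert cr < 0.10       # judgments are acceptably consistent
```

A CR above about 0.10 would signal that the pairwise judgments should be revisited before the weights are used.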

  3. 14 CFR 298.61 - Reporting of traffic statistics.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Reporting of traffic statistics. 298.61... Requirements § 298.61 Reporting of traffic statistics. (a) Each commuter air carrier and small certificated air... statistics shall be compiled in terms of each flight stage as actually performed. The detail T-100 data shall...

  4. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    Science.gov (United States)

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  5. Understanding Statistics - Cancer Statistics

    Science.gov (United States)

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  6. Student and Professor Gender Effects in Introductory Business Statistics

    Science.gov (United States)

    Haley, M. Ryan; Johnson, Marianne F.; Kuennen, Eric W.

    2007-01-01

    Studies have yielded highly mixed results as to differences in male and female student performance in statistics courses; the role that professors play in these differences is even less clear. In this paper, we consider the impact of professor and student gender on student performance in an introductory business statistics course taught by…

  7. Dynamic statistical optimization of GNSS radio occultation bending angles: advanced algorithm and performance analysis

    Science.gov (United States)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.

    2015-08-01

    We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
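The statistical optimization step described above combines a background profile and an observed profile weighted by their respective error covariances B and O, via xa = xb + B(B + O)^(-1)(y - xb). The following is a minimal sketch with synthetic one-dimensional "profiles"; all covariances and numbers are illustrative assumptions, not the OPSv5.6 or new-algorithm values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
truth = np.linspace(1.0, 2.0, n)

# Illustrative error covariances: correlated background errors, small
# uncorrelated observation errors.
lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 0.04 * np.exp(-lag / 3.0)
O = 0.0025 * np.eye(n)

xb = truth + 0.3                       # background with a systematic bias
y = truth + rng.normal(0, 0.05, n)     # accurate but noisy observations

# Optimal linear combination (statistical optimization).
xa = xb + B @ np.linalg.solve(B + O, y - xb)

# The optimized profile sits closer to the truth than the background.
assert np.linalg.norm(xa - truth) < np.linalg.norm(xb - truth)
```

Because O is much smaller than B here, the analysis is pulled strongly toward the observations; in the high-altitude initialization problem the weighting varies with altitude as the observation noise grows.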

  8. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    1963-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  9. Perception of Farm Succession Planning by Poultry Farmers in ...

    African Journals Online (AJOL)

    This study assessed poultry farm characteristics and poultry farmers' perception of farm succession planning in southwest Nigeria. A multistage sampling procedure was used in selecting poultry farmers in Oyo and Osun states. Data were analyzed using descriptive and inferential statistics. Results reveal that poultry farmers ...

  10. Role of Youths in Agricultural Development in Makurdi Local ...

    African Journals Online (AJOL)

    The study investigated the role of youths in agricultural development in Makurdi Local Government area (LGA) of Benue State. Interview schedules were used to collect data from 120 youths selected through random sampling procedure from Makurdi LGA. Descriptive and inferential statistics namely, mean and factor ...

  11. Transportability of Equivalence-Based Programmed Instruction: Efficacy and Efficiency in a College Classroom

    Science.gov (United States)

    Fienup, Daniel M.; Critchfield, Thomas S.

    2011-01-01

    College students in a psychology research-methods course learned concepts related to inferential statistics and hypothesis decision making. One group received equivalence-based instruction on conditional discriminations that were expected to promote the emergence of many untaught, academically useful abilities (i.e., stimulus equivalence group). A…

  12. Tourism Potentials of Ekiti State, Nigeria | Kayode | Journal of ...

    African Journals Online (AJOL)

    The data collected for the study were analyzed using descriptive and inferential statistics. The study revealed that there are natural features, historical and religious monuments. Monuments identified during the study include Oke Ewo War Center, Kosegbe stone, Ero Shrine, Orinlasa god's shrine among others. Ekiti State ...

  13. Quantum-statistical kinetic equations

    International Nuclear Information System (INIS)

    Loss, D.; Schoeller, H.

    1989-01-01

    Considering a homogeneous normal quantum fluid consisting of identical interacting fermions or bosons, the authors derive an exact quantum-statistical generalized kinetic equation with a collision operator given as explicit cluster series where exchange effects are included through renormalized Liouville operators. This new result is obtained by applying a recently developed superoperator formalism (Liouville operators, cluster expansions, symmetrized projectors, P q -rule, etc.) to nonequilibrium systems described by a density operator ρ(t) which obeys the von Neumann equation. By means of this formalism a factorization theorem is proven (being essential for obtaining closed equations), and partial resummations (leading to renormalized quantities) are performed. As an illustrative application, the quantum-statistical versions (including exchange effects due to Fermi-Dirac or Bose-Einstein statistics) of the homogeneous Boltzmann (binary collisions) and Choh-Uhlenbeck (triple collisions) equations are derived

  14. Statistical process control as a tool for controlling operating room performance: retrospective analysis and benchmarking.

    Science.gov (United States)

    Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao

    2010-10-01

    There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after a stabilized process. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I. There are some measures that can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon can be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
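The stabilization idea, excluding out-of-control points before benchmarking, can be sketched with a Shewhart individuals chart, whose 3-sigma limits are estimated from the average moving range (d2 = 1.128 for ranges of two). The operative times below are synthetic, not the study's data.

```python
import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals-chart limits: centre line at the mean, 3-sigma
    limits with sigma estimated as mean moving range / 1.128 (d2 for n=2)."""
    x = np.asarray(x, dtype=float)
    sigma = np.abs(np.diff(x)).mean() / 1.128
    cl = x.mean()
    return cl - 3 * sigma, cl, cl + 3 * sigma

rng = np.random.default_rng(2)
op_times = rng.normal(60, 5, 50)  # hypothetical operative times (minutes)
op_times[25] = 95                 # one out-of-control case

lcl, cl, ucl = individuals_chart_limits(op_times)
out = (op_times < lcl) | (op_times > ucl)
stable = op_times[~out]           # "stabilized" process with outliers excluded
assert out[25]                    # the injected case is flagged
```

In phase I this exclude-and-recompute step may be iterated until no points fall outside the limits, after which the stabilized limits serve as the benchmark.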

  15. Non-linear time series extreme events and integer value problems

    CERN Document Server

    Turkman, Kamil Feridun; Zea Bermudez, Patrícia

    2014-01-01

    This book offers a useful combination of probabilistic and statistical tools for analyzing nonlinear time series. Key features of the book include a study of the extremal behavior of nonlinear time series and a comprehensive list of nonlinear models that address different aspects of nonlinearity. Several inferential methods, including quasi likelihood methods, sequential Markov Chain Monte Carlo Methods and particle filters, are also included so as to provide an overall view of the available tools for parameter estimation for nonlinear models. A chapter on integer time series models based on several thinning operations, which brings together all recent advances made in this area, is also included. Readers should have attended a prior course on linear time series, and a good grasp of simulation-based inferential methods is recommended. This book offers a valuable resource for second-year graduate students and researchers in statistics and other scientific areas who need a basic understanding of nonlinear time ...

  16. Pengaruh Work-life Balance Dan Lingkungan Kerja Terhadap Kepuasan Kerja Karyawan (Studi Pada Perawat RS Lavalette Malang Tahun 2016)

    OpenAIRE

    Maslichah, Nur Intan; Hidayat, Kadarisman

    2017-01-01

    This study aims to identify and explain the influence among its variables: work-life balance, physical work environment, non-physical work environment, and job satisfaction. The research is explanatory, using a quantitative approach. A sample of 63 was taken; data analysis comprised descriptive statistical analysis and inferential statistical analysis, including classical assumption tests, multiple linear regression analysis, and hypothesis testing. Based on...

  17. PENGARUH BAURAN PEMASARAN TERHADAP KEPUTUSAN BERKUNJUNG WISATAWAN DI MUSEUM GEOLOGI BANDUNG

    Directory of Open Access Journals (Sweden)

    Yuliana Pinaringsih Kristiutami

    2016-11-01

    ABSTRACT This research examines the impact of the marketing mix on the decision to visit Museum Geologi Bandung. The research used an inferential method with quantitative analysis; the inferential method was chosen to minimize distortion of the sample data, because the sample is large. Quantitative analysis is analysis based on numerical data processed using statistical methods. The results of the analysis show that the marketing mix has a positive and significant influence on the decision to visit, and that its influence is greater than that of the other variables.   Keyword : Marketing mix, decision to visit.

  18. Probing NWP model deficiencies by statistical postprocessing

    DEFF Research Database (Denmark)

    Rosgaard, Martin Haubjerg; Nielsen, Henrik Aalborg; Nielsen, Torben S.

    2016-01-01

    The objective in this article is twofold. On one hand, a Model Output Statistics (MOS) framework for improved wind speed forecast accuracy is described and evaluated. On the other hand, the approach explored identifies unintuitive explanatory value from a diagnostic variable in an operational... Based on the statistical model candidates inferred from the data, the lifted index NWP model diagnostic is consistently found among the NWP model predictors of the best performing statistical models across sites.

  19. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    Science.gov (United States)

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ(2) test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  20. Ontology matching evaluation : A statistical perspective

    NARCIS (Netherlands)

    Mohammadi, M.; Hofman, W.J.; Tan, Y.H.

    2016-01-01

    This paper proposes statistical approaches to test if the difference between two ontology matchers is real. Specifically, the performances of the matchers over multiple data sets are obtained and, based on these performances, the conclusion can be drawn whether one method is better than the other.

  2. The Statistical Analysis of Relation between Compressive and Tensile/Flexural Strength of High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Kępniak M.

    2016-12-01

    Full Text Available This paper addresses the tensile and flexural strength of HPC (high performance concrete). The aim of the paper is to analyse the efficiency of the models proposed in different codes. In particular, three design procedures are considered, from the ACI 318 [1], Eurocode 2 [2] and the Model Code 2010 [3]. The associations between the design tensile strength of concrete obtained from these three codes and compressive strength are compared with experimental results for tensile strength and flexural strength using statistical tools. Experimental results for tensile strength were obtained in the splitting test. Based on this comparison, conclusions are drawn about the fit between the design methods and the test data. The comparison shows that the tensile strength and flexural strength of HPC depend on more influential factors than compressive strength alone.
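As a rough illustration of why code predictions can diverge for HPC, the sketch below compares two commonly cited design expressions: the Eurocode 2 mean tensile strength and the ACI 318 modulus of rupture. The formulas are quoted from memory of the codes and should be verified against the current editions before any design use.

```python
import math

def ec2_fctm(fck):
    """Eurocode 2 mean tensile strength (MPa): 0.30*fck^(2/3) up to C50/60,
    logarithmic expression above, with fcm = fck + 8 MPa (quoted from memory)."""
    if fck <= 50:
        return 0.30 * fck ** (2 / 3)
    return 2.12 * math.log(1 + (fck + 8) / 10)

def aci_fr(fc):
    """ACI 318 modulus of rupture (MPa): 0.62 * sqrt(f'c) (quoted from memory)."""
    return 0.62 * math.sqrt(fc)

# For an 80 MPa HPC the two expressions give noticeably different estimates.
print(round(ec2_fctm(80), 2), round(aci_fr(80), 2))
```

The gap between such predictions for the same compressive strength is exactly the kind of discrepancy the statistical comparison in the paper quantifies against splitting-test data.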

  3. National Statistical Commission and Indian Official Statistics*

    Indian Academy of Sciences (India)

    IAS Admin

    ... a good collection of official statistics of that time ... statistical agencies and institutions to provide details of statistical activities ... training programmes ... completion of Indian Statistical Service examinations ...

  4. Transportation statistics annual report, 2015

    Science.gov (United States)

    2016-01-01

    The Transportation Statistics Annual Report describes the Nation's transportation system, the system's performance, its contributions to the economy, and its effects on people and the environment. This 20th edition of the report is base...

  5. Transportation statistics annual report, 2013

    Science.gov (United States)

    2014-01-01

    The Transportation Statistics Annual Report describes the Nation's transportation system, the system's performance, its contributions to the economy, and its effects on people and the environment. This 18th edition of the report is base...

  6. Statistical Model and Performance Analysis of a Novel Multilevel Polarization Modulation in Local “Twisted” Fibers

    Directory of Open Access Journals (Sweden)

    Pierluigi Perrone

    2017-01-01

    Full Text Available Transmission demand continues to grow, and higher capacity optical communication systems are required to economically meet this ever-increasing need for communication services. This article expands and deepens the study of a novel optical communication system for high-capacity Local Area Networks (LANs), based on twisted optical fibers. The complete statistical behavior of this system, designed for more efficient use of the fiber single-channel capacity by adopting an unconventional multilevel polarization modulation (called "bands of polarization"), is shown. Starting from simulative results, a possible reference mathematical model is proposed. Finally, the system performance is analyzed in the presence of shot noise (coherent detection) or thermal noise (direct detection).

  7. Official Statistics and Statistics Education: Bridging the Gap

    Directory of Open Access Journals (Sweden)

    Gal Iddo

    2017-03-01

    Full Text Available This article aims to challenge official statistics providers and statistics educators to ponder on how to help non-specialist adult users of statistics develop those aspects of statistical literacy that pertain to official statistics. We first document the gap in the literature in terms of the conceptual basis and educational materials needed for such an undertaking. We then review skills and competencies that may help adults to make sense of statistical information in areas of importance to society. Based on this review, we identify six elements related to official statistics about which non-specialist adult users should possess knowledge in order to be considered literate in official statistics: (1) the system of official statistics and its work principles; (2) the nature of statistics about society; (3) indicators; (4) statistical techniques and big ideas; (5) research methods and data sources; and (6) awareness and skills for citizens' access to statistical reports. Based on this ad hoc typology, we discuss directions that official statistics providers, in cooperation with statistics educators, could take in order to (1) advance the conceptualization of skills needed to understand official statistics, and (2) expand educational activities and services, specifically by developing a collaborative digital textbook and a modular online course, to improve public capacity for understanding of official statistics.

  8. Body Size Perceptions and Weight Status of Adults in a Nigerian ...

    African Journals Online (AJOL)

    Subjects and Methods: A cross‑sectional sample of 183 adults living in a rural community, South‑West Nigeria was randomly recruited into the study. Their verbal and visual body size perceptions were assessed through structured questions and body images. Descriptive and inferential statistics were used to analyze the ...

  9. Deductive Updating Is Not Bayesian

    Science.gov (United States)

    Markovits, Henry; Brisson, Janie; de Chantal, Pier-Luc

    2015-01-01

    One of the major debates concerning the nature of inferential reasoning is between counterexample-based theories such as mental model theory and probabilistic theories. This study looks at conclusion updating after the addition of statistical information to examine the hypothesis that deductive reasoning cannot be explained by probabilistic…

  10. The Long-Term Impact of Admission Policies: A Comparative Study of Two Emergent Research Institutions in Texas

    Science.gov (United States)

    Crisp, Gloria; Horn, Catherine; Dizinno, Gerry; Barlow, Libby

    2013-01-01

    The present study explored the long-term impact of admission policies at two aspiring research institutions in Texas. Six years of longitudinal institutional data were analyzed for all full-time first time in college undergraduate students at both universities. Descriptive and inferential statistics were used to identify relationships and…

  11. Students' Understanding of Conditional Probability on Entering University

    Science.gov (United States)

    Reaburn, Robyn

    2013-01-01

    An understanding of conditional probability is essential for students of inferential statistics as it is used in Null Hypothesis Tests. Conditional probability is also used in Bayes' theorem, in the interpretation of medical screening tests and in quality control procedures. This study examines the understanding of conditional probability of…

  12. Perceived effects of climate change on food crops production in Oyo ...

    African Journals Online (AJOL)

The study assessed the perceived effects of climate change on food crops production in Oyo State. A multi-stage sampling procedure was used in selecting 120 respondents for the study. Primary data were collected through an interview schedule and analyzed using both descriptive and inferential statistics. Results reveal ...

  13. Price generating process and volatility in the Nigerian agricultural ...

    African Journals Online (AJOL)

    The study examined the price generating process and volatility of Nigerian agricultural commodities market using secondary data for price series on meat, cereals, sugar, dairy and food for the period of January 1990 to February 2014. The data were analysed using both descriptive and inferential statistics. The descriptive ...

  14. Exploiting Redundant Measurement of Dose and Developmental Outcome: New Methods from the Behavioral Teratology of Alcohol.

    Science.gov (United States)

    Bookstein, Fred L.; And Others

    1996-01-01

    Discusses the use of new statistical procedures in a study of the enduring effects of prenatal alcohol exposure upon the neurobehavioral development of some 500 children born in 1975-76. Explains how the Partial Least Squares (PLS) methodology can summarize the data powerfully while avoiding familiar inferential pitfalls. (MDM)

  15. Engaging Business Students in Quantitative Skills Development

    Science.gov (United States)

    Cronin, Anthony; Carroll, Paula

    2015-01-01

    In this paper the complex problems of developing quantitative and analytical skills in undergraduate first year, first semester business students are addressed. An action research project, detailing how first year business students perceive the relevance of data analysis and inferential statistics in light of the economic downturn and the…

  16. Applying behavior-analytic methodology to the science and practice of environmental enrichment in zoos and aquariums.

    Science.gov (United States)

    Alligood, Christina A; Dorey, Nicole R; Mehrkam, Lindsay R; Leighty, Katherine A

    2017-05-01

Environmental enrichment in zoos and aquariums is often evaluated at two overlapping levels: published research and day-to-day institutional record keeping. Several authors have discussed ongoing challenges with small sample sizes in between-groups zoological research and have cautioned against the inappropriate use of inferential statistics (Shepherdson, International Zoo Yearbook, 38, 118-124; Shepherdson, Lewis, Carlstead, Bauman, & Perrin, Applied Animal Behaviour Science, 147, 298-277; Swaisgood, Applied Animal Behaviour Science, 102, 139-162; Swaisgood & Shepherdson, Zoo Biology, 24, 499-518). Multi-institutional studies are the typically prescribed solution, but these are expensive and difficult to carry out. Kuhar (Zoo Biology, 25, 339-352) provided a reminder that inferential statistics are only necessary when one wishes to draw general conclusions at the population level. Because welfare is assessed at the level of the individual animal, we argue that evaluations of enrichment efficacy are often instances in which inferential statistics may be neither necessary nor appropriate. In recent years, there have been calls for the application of behavior-analytic techniques to zoo animal behavior management, including environmental enrichment (e.g., Bloomsmith, Marr, & Maple, Applied Animal Behaviour Science, 102, 205-222; Tarou & Bashaw, Applied Animal Behaviour Science, 102, 189-204). Single-subject (also called single-case, or small-n) designs provide a means of designing evaluations of enrichment efficacy based on an individual's behavior. We discuss how these designs might apply to research and practice goals at zoos and aquariums, contrast them with standard practices in the field, and give examples of how each could be successfully applied in a zoo or aquarium setting. © 2017 Wiley Periodicals, Inc.

  17. Performance of statistical process control methods for regional surgical site infection surveillance: a 10-year multicentre pilot study.

    Science.gov (United States)

    Baker, Arthur W; Haridy, Salah; Salem, Joseph; Ilieş, Iulian; Ergai, Awatef O; Samareh, Aven; Andrianas, Nicholas; Benneyan, James C; Sexton, Daniel J; Anderson, Deverick J

    2017-11-24

Traditional strategies for surveillance of surgical site infections (SSI) have multiple limitations, including delayed and incomplete outbreak detection. Statistical process control (SPC) methods address these deficiencies by combining longitudinal analysis with graphical presentation of data. We performed a pilot study within a large network of community hospitals to evaluate performance of SPC methods for detecting SSI outbreaks. We applied conventional Shewhart and exponentially weighted moving average (EWMA) SPC charts to 10 previously investigated SSI outbreaks that occurred from 2003 to 2013. We compared the results of SPC surveillance to the results of traditional SSI surveillance methods. Then, we analysed the performance of modified SPC charts constructed with different outbreak detection rules, EWMA smoothing factors and baseline SSI rate calculations. Conventional Shewhart and EWMA SPC charts both detected 8 of the 10 SSI outbreaks analysed, in each case prior to the date of traditional detection. Among detected outbreaks, conventional Shewhart chart detection occurred a median of 12 months prior to outbreak onset and 22 months prior to traditional detection. Conventional EWMA chart detection occurred a median of 7 months prior to outbreak onset and 14 months prior to traditional detection. Modified Shewhart and EWMA charts additionally detected several outbreaks earlier than conventional SPC charts. Shewhart and EWMA charts had low false-positive rates when used to analyse separate control hospital SSI data. Our findings illustrate the potential usefulness and feasibility of real-time SPC surveillance of SSI to rapidly identify outbreaks and improve patient safety. Further study is needed to optimise SPC chart selection and calculation, statistical outbreak detection rules and the process for reacting to signals of potential outbreaks.
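
The EWMA chart evaluated in the study can be sketched in a few lines. The monthly infection rates, the smoothing factor λ = 0.2 and the baseline period below are invented for illustration; the control limit uses the standard asymptotic 3-sigma EWMA form, not the study's modified rules.

```python
# Hypothetical monthly SSI rates per 100 procedures: a 12-month baseline
# followed by 6 "outbreak" months with an elevated rate (invented data).
rates = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9, 2.0, 2.1, 1.8, 2.2,
         3.4, 3.6, 3.5, 3.7, 3.3, 3.6]
baseline = rates[:12]

# Estimate the in-control mean and SD from the baseline period only.
n = len(baseline)
mu = sum(baseline) / n
sd = (sum((x - mu) ** 2 for x in baseline) / (n - 1)) ** 0.5

# EWMA statistic with smoothing factor lam; signal when it crosses the
# asymptotic 3-sigma upper control limit mu + 3*sd*sqrt(lam/(2-lam)).
lam = 0.2
limit = mu + 3 * sd * (lam / (2 - lam)) ** 0.5

ewma, signals = mu, []
for month, x in enumerate(rates):
    ewma = lam * x + (1 - lam) * ewma
    if ewma > limit:
        signals.append(month)

print("first signal at month:", signals[0] if signals else None)
```

With these made-up rates the chart stays quiet through the baseline year and signals in the first outbreak month, illustrating why EWMA charts can flag outbreaks earlier than periodic manual review.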

  18. Age and race differences on career adaptability and employee engagement amongst employees in an insurance company

    Directory of Open Access Journals (Sweden)

    Rebecca Tladinyane

    2015-12-01

Full Text Available The objective of the study was to determine whether age and race groups differ significantly regarding career adaptability (measured by the Career Adapt-Abilities Scale (CAAS)) and employee engagement (measured by the Utrecht Work Engagement Scale (UWES)). A quantitative survey was conducted with a convenience sample (N = 131) of employees in an insurance company within South Africa. Descriptive and inferential statistical analyses were performed to achieve the objective of the study. The results showed significant differences between age and race groups in relation to the constructs. Organisations need to recognise biographical differences with regard to career adaptability and employee engagement with reference to engagement interventions and the career counselling setting.

  19. Transportation Statistics Annual Report, 2017

    Science.gov (United States)

    2018-01-01

The Transportation Statistics Annual Report describes the Nation's transportation system, the system's performance, its contributions to the economy, and its effects on people and the environment. This 22nd edition of the report is based on infor...

  20. Statistical methods for decision making in mine action

    DEFF Research Database (Denmark)

    Larsen, Jan

    The lecture discusses the basics of statistical decision making in connection with humanitarian mine action. There is special focus on: 1) requirements for mine detection; 2) design and evaluation of mine equipment; 3) performance improvement by statistical learning and information fusion; 4...

  1. The Fishery Performance Indicators: A Management Tool for Triple Bottom Line Outcomes

    Science.gov (United States)

    Anderson, James L.; Anderson, Christopher M.; Chu, Jingjie; Meredith, Jennifer; Asche, Frank; Sylvia, Gil; Smith, Martin D.; Anggraeni, Dessy; Arthur, Robert; Guttormsen, Atle; McCluney, Jessica K.; Ward, Tim; Akpalu, Wisdom; Eggert, Håkan; Flores, Jimely; Freeman, Matthew A.; Holland, Daniel S.; Knapp, Gunnar; Kobayashi, Mimako; Larkin, Sherry; MacLauchlin, Kari; Schnier, Kurt; Soboil, Mark; Tveteras, Sigbjorn; Uchida, Hirotsugu; Valderrama, Diego

    2015-01-01

    Pursuit of the triple bottom line of economic, community and ecological sustainability has increased the complexity of fishery management; fisheries assessments require new types of data and analysis to guide science-based policy in addition to traditional biological information and modeling. We introduce the Fishery Performance Indicators (FPIs), a broadly applicable and flexible tool for assessing performance in individual fisheries, and for establishing cross-sectional links between enabling conditions, management strategies and triple bottom line outcomes. Conceptually separating measures of performance, the FPIs use 68 individual outcome metrics—coded on a 1 to 5 scale based on expert assessment to facilitate application to data poor fisheries and sectors—that can be partitioned into sector-based or triple-bottom-line sustainability-based interpretative indicators. Variation among outcomes is explained with 54 similarly structured metrics of inputs, management approaches and enabling conditions. Using 61 initial fishery case studies drawn from industrial and developing countries around the world, we demonstrate the inferential importance of tracking economic and community outcomes, in addition to resource status. PMID:25946194

  2. New Closed-Form Results on Ordered Statistics of Partial Sums of Gamma Random Variables and its Application to Performance Evaluation in the Presence of Nakagami Fading

    KAUST Repository

    Nam, Sung Sik

    2017-06-19

Complex wireless transmission systems require multi-dimensional joint statistical techniques for performance evaluation. Here, we first present exact closed-form results on the order statistics of arbitrary partial sums of Gamma random variables, together with closed-form results for the core functions specialized to independent and identically distributed Nakagami-m fading channels, based on a moment generating function-based unified analytical framework. Neither set of exact closed-form results has previously been published in the literature. In addition, we present a feasible application example in which the newly derived closed-form results can be applied: we analyze the outage performance of finger replacement schemes over Nakagami fading channels. These analytical results are directly applicable to several applications, such as millimeter-wave communication systems in which an antenna diversity scheme operates with a finger-replacement-style combining scheme, as well as other fading scenarios. The statistical results can also provide potential solutions for ordered statistics in other research topics based on Gamma distributions, and in other advanced wireless communications research in the presence of Nakagami fading.
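
The quantity derived in closed form here, a partial sum of the largest order statistics of i.i.d. Gamma branch powers, can be checked by Monte Carlo simulation. The sketch below is not the paper's derivation: the Nakagami parameter m, mean branch power, branch counts and outage threshold are arbitrary illustrative choices.

```python
import random

random.seed(1)

# Under Nakagami-m fading, the received branch *power* is Gamma-distributed
# with shape m and mean omega. A generalized-selection combiner sums the
# k strongest of L branch powers -- a partial sum of order statistics.
m, omega = 2.0, 1.0        # Nakagami shape and mean branch power (assumed)
L, k = 4, 2                # branches available / branches combined
threshold = 0.8            # illustrative outage threshold
trials = 50_000

outages = 0
for _ in range(trials):
    powers = [random.gammavariate(m, omega / m) for _ in range(L)]
    combined = sum(sorted(powers, reverse=True)[:k])  # best-k partial sum
    if combined < threshold:
        outages += 1

p_out = outages / trials
print(f"simulated outage probability: {p_out:.4f}")
```

A simulation like this is a common sanity check against closed-form outage expressions: the empirical rate should converge to the analytical value as the trial count grows.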

  3. Modern applied statistics with s-plus

    CERN Document Server

    Venables, W N

    1997-01-01

    S-PLUS is a powerful environment for the statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-PLUS to perform statistical analyses and provides both an introduction to the use of S-PLUS and a course in modern statistical methods. S-PLUS is available for both Windows and UNIX workstations, and both versions are covered in depth. The aim of the book is to show how to use S-PLUS as a powerful and graphical system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-PLUS, and both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets. Many of the methods discussed are state-of-the-art approaches to topics such as linear and non-linear regression models, robust a...

  4. PENGARUH PERSEPSI EKOWISATA TERHADAP TINGKAT KEPUASAN WISATAWAN DI MONKEY FOREST UBUD, BALI

    Directory of Open Access Journals (Sweden)

    Via Reza Efrida

    2017-07-01

Full Text Available This study aims to investigate (1) the ecotourism perception of tourists visiting Monkey Forest Ubud; (2) the visitor satisfaction level with the Monkey Forest Ubud attraction; and (3) the influence of ecotourism perception on visitor satisfaction level at Monkey Forest Ubud. The research employed descriptive statistics, using importance-performance analysis (IPA), and inferential statistics, using simple linear regression analysis. Respondents were selected by incidental sampling, with questionnaires distributed to 170 tourists visiting Monkey Forest Ubud. The results showed that tourists visiting Monkey Forest Ubud strongly agree with the implementation of the ecotourism concept. The concordance rate was 89.59%, meaning that overall tourists are satisfied with the Monkey Forest Ubud attraction. Moreover, hypothesis testing with a t-test showed a statistically significant influence of the independent variable (ecotourism perception) on the dependent variable (tourist satisfaction).
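
The simple linear regression with a t-test on the slope, as used in this study, can be sketched directly. The ten perception/satisfaction score pairs below are invented for illustration, not the study's data.

```python
import math

# Invented Likert-style scores: x = ecotourism perception, y = satisfaction.
x = [3.2, 3.8, 4.0, 4.5, 3.5, 4.8, 4.2, 3.0, 4.6, 3.9]
y = [3.0, 3.9, 4.1, 4.4, 3.4, 4.7, 4.0, 3.1, 4.5, 3.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

b1 = sxy / sxx          # least-squares slope
b0 = my - b1 * mx       # intercept

# Residual sum of squares, standard error of the slope, and the
# t statistic for H0: slope = 0 (n - 2 degrees of freedom).
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
se_b1 = math.sqrt(sse / (n - 2) / sxx)
t = b1 / se_b1

print(f"slope = {b1:.3f}, t = {t:.2f} on {n - 2} df")
```

A large |t| relative to the critical value for n − 2 degrees of freedom is what licenses the study's claim of a significant influence of perception on satisfaction.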

  5. Research and development statistics 2001

    CERN Document Server

    2002-01-01

    This publication provides recent basic statistics on the resources devoted to R&D in OECD countries. The statistical series are presented for the last seven years for which data are available and cover expenditure by source of funds and type of costs; personnel by occupation and/or level of qualification; both at the national level by performance sector, for enterprises by industry, and for higher education by field of science. The publication also provides information on the output of science and technology (S&T) activities relating to the technology balance of payments.

  6. The prevalence and nature of sexual harassment in the workplace: A model for early identification and effective management thereof

    OpenAIRE

    A Ramsaroop; S Brijball Parumasur

    2007-01-01

This study investigates the prevalence and nature of sexual harassment and assesses the impact of supervisory relations, levels of interaction, appearance and personality, and types of behaviour. The study was undertaken at a tertiary institution using a sample of 74 employees, drawn by means of simple random sampling. Data were collected using a self-developed questionnaire, which was statistically tested and analysed using descriptive and inferential statistics. The results indicate that th...

  7. Ontology, epistemology, and multi-methods

    OpenAIRE

    Chatterhee, Abhishek

    2009-01-01

    Enthusiasm for multi-methods research can possibly be ascribed to the prima facie promise it holds for moving beyond, if not resolving, seemingly intractable debates on the relative merits of “qualitative” (historical, interpretive, etc.) versus “quantitative” (i.e. inferential statistical) research methods. The justification of multi-methods rests on the claim that combining a few case studies with a larger inferential—and not descriptive—statistical study manages to capture the strengths of...

  8. Increasing Effectiveness and Efficiency Through Risk-Based Deployments

    Science.gov (United States)

    2015-12-01

Shaw and Henry McKay, both University of Chicago professors, began using maps to understand juvenile delinquency better in Chicago, IL. ... André-Michel Guerry’s “Ordonnateur Statistique: The First Statistical Calculator?,” The American Statistician 66, no. 3 (August 1, 2012): 195–200. ... “micro or macro levels using basic inferential statistics.” 5. Protecting Civil Rights and Liberties: It is also important to note that a risk

  9. The role of critical thinking skills and learning styles of university students in their academic performance

    Science.gov (United States)

    GHAZIVAKILI, ZOHRE; NOROUZI NIA, ROOHANGIZ; PANAHI, FARIDE; KARIMI, MEHRDAD; GHOLSORKHI, HAYEDE; AHMADI, ZARRIN

    2014-01-01

Introduction: The current world needs people who have many different abilities, such as cognition and application of different ways of thinking, research, problem solving, critical thinking skills and creativity. In addition to critical thinking, learning style is another key factor with an essential role in the process of problem solving. This study aimed to determine the relationship between the learning styles and critical thinking of students and their academic performance in Alborz University of Medical Science. Methods: This cross-correlation study was performed in 2012 on 216 students of Alborz University, selected by stratified random sampling. The data were obtained via a three-part questionnaire comprising demographic data, the Kolb standardized learning style questionnaire and the California critical thinking standardized questionnaire. The academic performance of the students was extracted from the school records. The validity of the instruments was determined in terms of content validity, and the reliability was gained through internal consistency methods. Cronbach's alpha coefficient was found to be 0.78 for the California critical thinking questionnaire. The Chi-square test, independent t-test, one-way ANOVA and Pearson correlation test were used to determine relationships between variables. The SPSS 14 statistical package was used to analyze the data, with a significance level of p<0.05. Results: The students' critical thinking scores showed that the means of the deductive reasoning and evaluation skills were higher than those of the other skills, and the analytical skill had the lowest mean. There was a positive significant relationship between the students' performance and both the inferential skill and the total critical thinking score (p<0.05), and critical thinking differed significantly between learning styles.
Conclusion: The results of this study showed that the learning styles, critical thinking and academic performance are significantly associated
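
The Pearson correlation used in studies like this one is straightforward to compute by hand; the critical-thinking/GPA score pairs below are invented for illustration only.

```python
import math

# Invented scores: critical-thinking totals vs. grade-point averages
# for ten hypothetical students.
ct  = [60, 72, 55, 80, 65, 70, 58, 75, 68, 62]
gpa = [14.0, 16.5, 13.2, 17.8, 15.0, 16.0, 13.8, 17.0, 15.5, 14.6]

n = len(ct)
mx, my = sum(ct) / n, sum(gpa) / n
cov = sum((a - mx) * (b - my) for a, b in zip(ct, gpa))
sx = math.sqrt(sum((a - mx) ** 2 for a in ct))
sy = math.sqrt(sum((b - my) ** 2 for b in gpa))
r = cov / (sx * sy)          # Pearson correlation coefficient

print(f"Pearson r = {r:.3f}")
```

An r near +1 indicates the strong positive linear association between critical thinking and performance that the abstract reports; real survey data would of course be far noisier than this toy set.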

  10. Mediator effect of statistical process control between Total Quality Management (TQM) and business performance in Malaysian Automotive Industry

    Science.gov (United States)

    Ahmad, M. F.; Rasi, R. Z.; Zakuan, N.; Hisyamudin, M. N. N.

    2015-12-01

In today's highly competitive market, Total Quality Management (TQM) is a vital management tool for ensuring business success. In order to survive in the global market with intense competition amongst regions and enterprises, the adoption of tools and techniques is essential in improving business performance. Results relating TQM to business performance have been consistent. However, only a few previous studies have examined the mediating effect of statistical process control (SPC) between TQM and business performance. A mediator is a third variable that changes the association between an independent variable and an outcome variable. This study proposes a TQM performance model with SPC as a mediator, using structural equation modelling, which is a more comprehensive model for developing countries, specifically for Malaysia. A questionnaire was prepared and sent to 1500 companies from the automotive industry and related vendors in Malaysia, giving a 21.8 per cent response rate. The findings on the significance of the mediator between TQM practices and business performance showed that SPC is an important tool and technique in TQM implementation. The results conclude that SPC is a partial mediator between TQM and business performance (BP), with an indirect effect (IE) of 0.25, which can be categorised as a high mediating effect.
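
The indirect effect reported here comes from a mediation analysis. A toy sketch of the idea is below: the indirect effect is the product a·b of two regression slopes (TQM → SPC, then SPC → BP). All scores are invented, and the second regression is simplified to a single predictor rather than controlling for TQM as a full mediation test (or the study's structural equation model) would.

```python
def slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Invented 1-5 composite scores for six hypothetical firms.
tqm = [2.0, 3.0, 3.5, 4.0, 4.5, 5.0]   # TQM practice maturity
spc = [2.2, 2.9, 3.4, 4.1, 4.4, 5.1]   # SPC adoption, rises with TQM
bp  = [2.5, 3.1, 3.3, 4.0, 4.3, 4.9]   # business performance

a = slope(tqm, spc)        # path a: TQM -> SPC
b = slope(spc, bp)         # path b: SPC -> BP (simplified, see note above)
indirect = a * b           # indirect (mediated) effect

print(f"indirect effect a*b = {indirect:.2f}")
```

In a real analysis the significance of a·b would be assessed (e.g. by bootstrapping), and partial mediation means both this indirect path and the direct TQM → BP path are non-zero.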

  11. Comparing the Effects of Elementary Music and Visual Arts Lessons on Standardized Mathematics Test Scores

    Science.gov (United States)

    King, Molly Elizabeth

    2016-01-01

    The purpose of this quantitative, causal-comparative study was to compare the effect elementary music and visual arts lessons had on third through sixth grade standardized mathematics test scores. Inferential statistics were used to compare the differences between test scores of students who took in-school, elementary, music instruction during the…

  12. Top 10% Admissions in the Borderlands: Access and Success of Borderland Top Students at Texas Public Universities

    Science.gov (United States)

    Rodríguez, Cristóbal

    2016-01-01

    This study focuses on Texas Borderland students admitted through the Texas Top 10% admissions policy, which assumes that Top 10% students are college ready for any public university and provides Top 10% high school graduates automatic admission to any 4-year public university in Texas. Using descriptive and inferential statistics, results…

  13. Logical Reasoning versus Information Processing in the Dual-Strategy Model of Reasoning

    Science.gov (United States)

    Markovits, Henry; Brisson, Janie; de Chantal, Pier-Luc

    2017-01-01

    One of the major debates concerning the nature of inferential reasoning is between counterexample-based strategies such as mental model theory and statistical strategies underlying probabilistic models. The dual-strategy model, proposed by Verschueren, Schaeken, & d'Ydewalle (2005a, 2005b), which suggests that people might have access to both…

  14. Processing cassava into chips for industry and export: analysis of ...

    African Journals Online (AJOL)

Data collected were analyzed with descriptive statistics (such as frequencies, percentages and means) and inferential statistics. Results of the study showed that more women (56.1%) were involved in cassava processing than men (43.9%), and that a substantial proportion of the smallholder processors were ageing (59.1%) and no ...

  15. Classification of Frequency Abused Drugs amongst Nigerian Youth ...

    African Journals Online (AJOL)

Descriptive statistics (simple percentages) and inferential statistics (t-test, chi-square and ANOVA) were used in analyzing the quota for the students. The results showed that male students are more susceptible to drug abuse than their female counterparts, and that students mainly abuse drugs such as alcohol, cigarettes, Indian hemp, and ...
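
A chi-square test of independence like the one this study reports can be computed by hand. The 2×2 counts below (abuse vs. no abuse, by sex) are invented for illustration, and 3.841 is the standard 5% critical value for 1 degree of freedom.

```python
# Invented 2x2 contingency table:
#              abused   not abused
observed = [[40, 60],      # male
            [25, 75]]      # female

row = [sum(r) for r in observed]
col = [sum(c) for c in zip(*observed)]
total = sum(row)

# Pearson chi-square statistic: sum of (O - E)^2 / E over all cells,
# with expected counts E from the independence model.
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row[i] * col[j] / total
        chi2 += (observed[i][j] - expected) ** 2 / expected

print(f"chi-square = {chi2:.3f}")
print("reject independence at 5%:", chi2 > 3.841)  # df = (2-1)*(2-1) = 1
```

Here the statistic exceeds the 5% critical value, so the toy data would support the kind of sex-difference conclusion the abstract describes.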

  16. Teaching the Concept of the Sampling Distribution of the Mean

    Science.gov (United States)

    Aguinis, Herman; Branstetter, Steven A.

    2007-01-01

    The authors use proven cognitive and learning principles and recent developments in the field of educational psychology to teach the concept of the sampling distribution of the mean, which is arguably one of the most central concepts in inferential statistics. The proposed pedagogical approach relies on cognitive load, contiguity, and experiential…
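
The concept the article teaches can be demonstrated concretely by simulation: draw many samples from a skewed population and watch the sample means centre on the population mean with standard error σ/√n. The exponential population, sample size and repetition count below are arbitrary illustrative choices.

```python
import random
import statistics

random.seed(0)

# Population: exponential with rate 1, so mean = 1 and sigma = 1 (skewed,
# yet the sampling distribution of the mean is nearly normal for n = 30).
n, reps = 30, 5000

sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(reps)
]

grand_mean = statistics.fmean(sample_means)
se_observed = statistics.stdev(sample_means)
se_theory = 1.0 / n ** 0.5        # sigma / sqrt(n)

print(f"mean of sample means: {grand_mean:.3f}")
print(f"observed SE: {se_observed:.3f}  vs theoretical: {se_theory:.3f}")
```

The close match between the observed and theoretical standard errors is exactly the property of the sampling distribution of the mean that makes inferential statistics work.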

  17. Improving statistical reasoning theoretical models and practical implications

    CERN Document Server

    Sedlmeier, Peter

    1999-01-01

This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects that are more stable over time than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.

  18. Adaptive Maneuvering Frequency Method of Current Statistical Model

    Institute of Scientific and Technical Information of China (English)

    Wei Sun; Yongjian Yang

    2017-01-01

The current statistical model (CSM) performs well in maneuvering target tracking. However, a fixed maneuvering frequency deteriorates the tracking results when the Kalman filter (KF) algorithm is used, causing serious dynamic delay, slow convergence and limited precision. In this study, a new current statistical model and a new Kalman filter are proposed to improve the performance of maneuvering target tracking. The new model, which employs an innovation-dominated subjection function to adaptively adjust the maneuvering frequency, performs better in tracking step-maneuvering targets, although a fluctuation phenomenon appears. To address this problem, a new adaptive fading Kalman filter is also proposed. In the new Kalman filter, the prediction values are amended in time by setting judgment and amendment rules, so that the tracking precision and the fluctuation phenomenon of the new current statistical model are improved. Simulation results indicate the effectiveness of the new algorithm and its practical guiding significance.
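
The Kalman filter that the CSM builds on can be sketched in one dimension. This is not the paper's adaptive-fading CSM filter, just the standard constant-velocity KF it modifies; the time step and noise parameters are arbitrary, and the measurements are noiseless for clarity.

```python
# 1-D constant-velocity Kalman filter sketch.
dt, q, r = 1.0, 0.01, 1.0          # time step, process noise, measurement noise

# State [position, velocity] and covariance P (large initial uncertainty).
x = [0.0, 0.0]
P = [[10.0, 0.0], [0.0, 10.0]]

def kf_step(x, P, z):
    # Predict with transition F = [[1, dt], [0, 1]] and Q = q * I.
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [
        [P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
         P[0][1] + dt * P[1][1]],
        [P[1][0] + dt * P[1][1], P[1][1] + q],
    ]
    # Update with position-only measurement, H = [1, 0].
    s = Pp[0][0] + r                       # innovation variance
    k = [Pp[0][0] / s, Pp[1][0] / s]       # Kalman gain
    innov = z - xp[0]
    xn = [xp[0] + k[0] * innov, xp[1] + k[1] * innov]
    Pn = [
        [(1 - k[0]) * Pp[0][0], (1 - k[0]) * Pp[0][1]],
        [Pp[1][0] - k[1] * Pp[0][0], Pp[1][1] - k[1] * Pp[0][1]],
    ]
    return xn, Pn

# Track a target moving at a constant 2 m/s for 20 steps.
for t in range(1, 21):
    x, P = kf_step(x, P, 2.0 * t)

print(f"estimated velocity: {x[1]:.2f} m/s")
```

For a non-maneuvering target this filter converges quickly; the paper's point is that once the target maneuvers, a fixed model lags, which is what the adaptive maneuvering frequency and fading memory are meant to fix.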

  19. Robust Combining of Disparate Classifiers Through Order Statistics

    Science.gov (United States)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
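
A minimal sketch of order-statistic combining: mean, median and max of several classifiers' class-probability outputs for one test sample. The scores below are invented, with one deliberately poor classifier to show the robustness the article analyses.

```python
import statistics

# Invented posterior probabilities for the positive class from five
# classifiers; the fourth one is badly wrong for this sample.
classifier_outputs = [0.82, 0.78, 0.85, 0.10, 0.80]

mean_comb = sum(classifier_outputs) / len(classifier_outputs)   # linear combiner
median_comb = statistics.median(classifier_outputs)             # order statistic
max_comb = max(classifier_outputs)                              # order statistic

print(f"mean={mean_comb:.2f}  median={median_comb:.2f}  max={max_comb:.2f}")
```

The single outlier drags the linear (mean) combiner well below the consensus, while the median stays near the four agreeing classifiers, which is the robustness property order-statistic combiners are chosen for.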

  20. Transportation Statistics Annual Report 1997

    Energy Technology Data Exchange (ETDEWEB)

    Fenn, M.

    1997-01-01

This document is the fourth Transportation Statistics Annual Report (TSAR) prepared by the Bureau of Transportation Statistics (BTS) for the President and Congress. As in previous years, it reports on the state of the U.S. transportation system at two levels. First, in Part I, it provides a statistical and interpretive survey of the system—its physical characteristics, its economic attributes, aspects of its use and performance, and the scale and severity of unintended consequences of transportation, such as fatalities and injuries, oil import dependency, and environmental impacts. Part I also explores the state of transportation statistics, and new needs of the rapidly changing world of transportation. Second, Part II of the report, as in prior years, explores in detail the performance of the U.S. transportation system from the perspective of desired social outcomes or strategic goals. This year, the performance aspect of transportation chosen for thematic treatment is “Mobility and Access,” which complements past TSAR theme sections on “The Economic Performance of Transportation” (1995) and “Transportation and the Environment” (1996). Mobility and access are at the heart of the transportation system’s performance from the user’s perspective. In what ways and to what extent does the geographic freedom provided by transportation enhance personal fulfillment of the nation’s residents and contribute to economic advancement of people and businesses? This broad question underlies many of the topics examined in Part II: What is the current level of personal mobility in the United States, and how does it vary by sex, age, income level, urban or rural location, and over time? What factors explain variations? Has transportation helped improve people’s access to work, shopping, recreational facilities, and medical services, and in what ways and in what locations? How have barriers, such as age, disabilities, or lack of an automobile, affected these

  1. A simulation study to evaluate the performance of five statistical monitoring methods when applied to different time-series components in the context of control programs for endemic diseases

    DEFF Research Database (Denmark)

    Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq

    2017-01-01

Disease monitoring and surveillance play a crucial role in control and eradication programs, as it is important to track implemented strategies in order to reduce and/or eliminate a specific disease. The objectives of this study were to assess the performance of different statistical monitoring......, decreases and constant sero-prevalence levels (referred to as events). Two state-space models were used to model the time series, and different statistical monitoring methods (such as univariate process control algorithms – the Shewhart Control Chart, Tabular Cumulative Sums, and the V-mask – and monitoring...... of noise in the baseline was greater for the Shewhart Control Chart and Tabular Cumulative Sums than for the V-Mask and trend-based methods. The performance of the different statistical monitoring methods varied when monitoring increases and decreases in disease sero-prevalence. Combining two or more...
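
One of the methods compared in the study, the tabular cumulative sum (CUSUM), can be sketched for an upward shift in sero-prevalence. The weekly prevalence values, baseline, allowance k and decision interval h below are all invented for illustration.

```python
# Invented weekly sero-prevalence series; a sustained increase starts at week 8.
prevalence = [0.10, 0.11, 0.09, 0.10, 0.12, 0.10, 0.09, 0.11,
              0.16, 0.17, 0.18, 0.19]

mu0 = 0.10     # in-control (baseline) prevalence
k = 0.02       # allowance: half the smallest shift worth detecting
h = 0.05       # decision interval (alarm threshold)

s_hi, alarms = 0.0, []
for week, p in enumerate(prevalence):
    # One-sided upper tabular CUSUM: accumulate excesses over mu0 + k,
    # resetting at zero so in-control noise does not build up.
    s_hi = max(0.0, s_hi + (p - mu0) - k)
    if s_hi > h:
        alarms.append(week)

print("first alarm at week:", alarms[0] if alarms else None)
```

The reset-at-zero accumulation is why CUSUM-type methods detect small sustained shifts faster than a Shewhart chart, at the cost of sensitivity to baseline noise, as the study's comparison notes.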

  3. Dependency of human target detection performance on clutter and quality of supporting image analysis algorithms in a video surveillance task

    Science.gov (United States)

    Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin

    2017-10-01

    Background: In target detection, success rates depend strongly on human observer performance. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented with 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was set to 100, 150 or 200 subjects. The proportion of targets was kept at 10%. The number of marked targets varied from 0, 5, 10, 20 up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed by applying an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: The study found Target Detection Time to increase and Target Detection Rate to decrease with an increasing number of avatars. The same is true for Secondary Task Reaction Time, while there was no effect on Secondary Task Hit Rate. Furthermore, we found a trend for a u-shaped correlation between the number of markings and Secondary Task Reaction Time, indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support of observers in observational tasks.
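
    The inferential analysis of detection times across the three avatar densities (100, 150, 200) could, for example, use a one-way ANOVA. The sketch below computes the F statistic in pure Python; the detection-time data are invented for illustration and are not taken from the study.

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for k independent groups (pure Python)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# invented detection times (s) for scenes with 100, 150 and 200 avatars
times = [[4.1, 4.5, 3.9, 4.3], [5.0, 5.4, 4.8, 5.2], [6.1, 6.5, 5.9, 6.3]]
print(round(one_way_anova_F(times), 1))
```

    A large F relative to the F distribution's critical value (for k-1 and n-k degrees of freedom) indicates that mean detection time differs across densities.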

  4. Students’ Reading Comprehension Performance with Emotional Literacy-Based Strategy Intervention

    Directory of Open Access Journals (Sweden)

    Yusfarina Mohd Yussof

    2013-07-01

    Full Text Available An effective reading comprehension process demands a strategy to enhance the cognitive ability to digest text information in the effort to elicit meaning contextually. In addition, the role of emotions also influences the efficacy of this process, especially in narrative text comprehension. This quasi-experimental study aims to observe students’ performance in the Reading Comprehension Test resulting from the Emotional Literacy-Based Reading Comprehension Strategy (ELBRCS), which is a combination of cognitive and affective strategies. This study involved 90 students, whereby 45 students were clustered in the Experimental Group and received the ELBRCS intervention. The remaining 45 students were placed in the Control Group and underwent the conventional strategy (the prevalent classroom method). The students’ reading comprehension performance was measured using the Reading Comprehension Test (RCT). The findings show that the experimental group received a higher score than the control group for the RCT. The intervention successfully raised students’ reading comprehension from literal comprehension to higher levels of comprehension, i.e. inferential, evaluative and appreciative levels, as indicated by Barrett’s Taxonomy.

  5. U.S. nuclear plant statistics, 8th Edition

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    Wolf Creek was the lowest cost nuclear plant in 1992 according to the annual plant rankings in UDI's comprehensive annual statistical factbook for US nuclear power plants (operating, under construction, deferred, canceled or retired). The book covers operating and maintenance expenses for the past year (1992), annual and lifetime performance statistics, capitalization expenses and changes in capitalization, construction cost information, joint ownership of plants and canceled plants. First published for CY1984 statistics

  6. Measurement of volatile organic compounds emitted in libraries and archives: an inferential indicator of paper decay?

    Directory of Open Access Journals (Sweden)

    Gibson Lorraine T

    2012-05-01

    Full Text Available Abstract Background A sampling campaign of indoor air was conducted to assess the typical concentration of indoor air pollutants in 8 National Libraries and Archives across the U.K. and Ireland. At each site, two locations were chosen that contained various objects in the collection (paper, parchment, microfilm, photographic material, etc.) and one location was chosen to act as a sampling reference location (placed in a corridor or entrance hallway). Results Of the locations surveyed, no measurable levels of sulfur dioxide were detected and low formaldehyde vapour concentrations were measured throughout. Acetic and formic acids were measured in all locations with, for the most part, higher acetic acid levels in areas with objects compared to reference locations. A large variety of volatile organic compounds (VOCs) was measured in all locations, in variable concentrations; however, furfural was the only VOC to be identified consistently at higher concentration in locations with paper-based collections, compared to those locations without objects. To cross-reference the sampling data with VOCs emitted directly from books, further studies were conducted to assess emissions from paper using solid phase microextraction (SPME) fibres and a newly developed method of analysis: collection of VOCs onto a polydimethylsiloxane (PDMS) elastomer strip. Conclusions In this study acetic acid and furfural levels were consistently higher in concentration when measured in locations which contained paper-based items. It is therefore suggested that both acetic acid and furfural (possibly also trimethylbenzenes, ethyltoluene, decane and camphor) may be present in the indoor atmosphere as a result of cellulose degradation and together may act as an inferential non-invasive marker for the deterioration of paper. Direct VOC sampling was successfully achieved using SPME fibres and analytes found in the indoor air were also identified as emissive by-products from paper. Finally a new non

  7. Statistical thermodynamics

    International Nuclear Information System (INIS)

    Lim, Gyeong Hui

    2008-03-01

    This book consists of 15 chapters covering: basic concepts and meaning of statistical thermodynamics, Maxwell-Boltzmann statistics, ensembles, thermodynamic functions and fluctuations, statistical dynamics of independent particle systems, ideal molecular systems, chemical equilibrium and chemical reaction rates in ideal gas mixtures, classical statistical thermodynamics, the ideal lattice model, lattice statistics and nonideal lattice models, imperfect gas theory of liquids, theory of solutions, statistical thermodynamics of interfaces, statistical thermodynamics of macromolecular systems, and quantum statistics.

  8. [Statistics for statistics?--Thoughts about psychological tools].

    Science.gov (United States)

    Berger, Uwe; Stöbel-Richter, Yve

    2007-12-01

    Statistical methods occupy a prominent place in psychologists' educational programs. Known as difficult to understand and hard to learn, this content is feared by students, and those who do not aspire to a research career at a university quickly forget what was drilled into them. Furthermore, because at first glance it does not apply to the work with patients and other target groups, the methodological education as a whole has often been questioned. For many psychological practitioners, statistical education makes sense only as a way of commanding respect from other professions, namely physicians. For their own practice, statistics is rarely taken seriously as a professional tool. The reason seems clear: statistics deals with numbers, while psychotherapy deals with subjects. So, is statistics an end in itself? With this article, we try to answer the question of whether and how statistical methods are represented in psychotherapeutic and psychological research. We therefore analyzed 46 original articles from a complete volume of the journal Psychotherapy, Psychosomatics, Psychological Medicine (PPmP). Within the volume, 28 different analysis methods were applied, of which 89 percent were directly based on statistics. Being able to write and critically read original articles, the backbone of research, presupposes a high degree of statistical education. To ignore statistics means to ignore research, and ultimately to leave one's own professional work to arbitrariness.

  9. Utility of Prostate Specific Antigen (PSA) in the Indigenous African Man

    African Journals Online (AJOL)

    the standard PSA reference levels generated in non-African study subjects. Design: A ... the best use of PSA in the indigenous black African man but also his place in the new ... tendency as well as measures of dispersion. Inferential statistics assumed a 95% confidence interval and .... men: Results from a pilot study.
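
    A 95% confidence interval of the kind this record's inferential statistics assume can be computed as the mean ± 1.96 standard errors (normal approximation). The PSA values below are hypothetical placeholders, not data from the study.

```python
import statistics

def mean_ci_95(sample):
    """Approximate 95% confidence interval for the mean, using the
    normal critical value 1.96 (reasonable for moderately large n)."""
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - 1.96 * se, m + 1.96 * se

# hypothetical PSA values (ng/mL); placeholders, not study data
psa = [1.1, 0.8, 1.4, 2.0, 1.2, 0.9, 1.6, 1.3, 1.0, 1.7]
low, high = mean_ci_95(psa)
print(round(low, 2), round(high, 2))
```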

  10. Understanding Teachers' Concerns about Inclusive Education

    Science.gov (United States)

    Yadav, Monika; Das, Ajay; Sharma, Sushama; Tiwari, Ashwini

    2015-01-01

    This study examined the concerns of regular elementary school teachers in Gurgaon, India, about working with students with disabilities in inclusive education settings. A total of 175 teachers responded to a two-part questionnaire. Data were analyzed using descriptive and inferential statistics. The data indicated that the teachers in Gurgaon,…

  11. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
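
    The point about the expected value and standard error of a sampling distribution can be demonstrated by simulation: draw many samples, take each sample's mean, and inspect the resulting distribution. This is a minimal sketch with an arbitrary population of the integers 1-100.

```python
import random
import statistics

def mean_sampling_distribution(population, n, draws=3000, seed=42):
    """Draw many size-n samples (with replacement) and return the sample
    means: an empirical sampling distribution of the mean."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.choices(population, k=n))
            for _ in range(draws)]

population = list(range(1, 101))  # population mean 50.5, sd ~ 28.9
for n in (5, 20, 80):
    means = mean_sampling_distribution(population, n)
    # expected value stays near 50.5; the standard error shrinks ~ 1/sqrt(n)
    print(n, round(statistics.fmean(means), 1), round(statistics.stdev(means), 2))
```

    The printed rows illustrate the central limit theorem in action: the center of the sampling distribution matches the population mean regardless of n, while its spread falls off as 1/√n.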

  12. 2012 aerospace medical certification statistical handbook.

    Science.gov (United States)

    2013-12-01

    The annual Aerospace Medical Certification Statistical Handbook reports descriptive characteristics of all active U.S. civil aviation airmen and the aviation medical examiners (AMEs) that perform the required medical examinations. The 2012 annual...

  13. Introductory life science mathematics and quantitative neuroscience courses.

    Science.gov (United States)

    Duffus, Dwight; Olifer, Andrei

    2010-01-01

    We describe two sets of courses designed to enhance the mathematical, statistical, and computational training of life science undergraduates at Emory College. The first course is an introductory sequence in differential and integral calculus, modeling with differential equations, probability, and inferential statistics. The second is an upper-division course in computational neuroscience. We provide a description of each course, detailed syllabi, examples of content, and a brief discussion of the main issues encountered in developing and offering the courses.

  14. 2011 aerospace medical certification statistical handbook.

    Science.gov (United States)

    2013-01-01

    The annual Aerospace Medical Certification Statistical Handbook reports descriptive characteristics of all active U.S. civil aviation airmen and the aviation medical examiners (AMEs) that perform the required medical examinations. The 2011 annual han...

  15. Calculating statistical distributions from operator relations: The statistical distributions of various intermediate statistics

    International Nuclear Information System (INIS)

    Dai, Wu-Sheng; Xie, Mi

    2013-01-01

    In this paper, we give a general discussion on the calculation of the statistical distribution from a given operator relation of creation, annihilation, and number operators. Our result shows that as long as the relation between the number operator and the creation and annihilation operators can be expressed as a†b = Λ(N) or N = Λ⁻¹(a†b), where N, a†, and b denote the number, creation, and annihilation operators, i.e., N is a function of the quadratic product of the creation and annihilation operators, the corresponding statistical distribution is the Gentile distribution, a statistical distribution in which the maximum occupation number is an arbitrary integer. As examples, we discuss the statistical distributions corresponding to various operator relations. In particular, besides the Bose–Einstein and Fermi–Dirac cases, we discuss the statistical distributions for various schemes of intermediate statistics, especially various q-deformation schemes. Our result shows that the statistical distributions corresponding to various q-deformation schemes are various Gentile distributions with different maximum occupation numbers which are determined by the deformation parameter q. This result shows that the results given in much of the literature on the q-deformation distribution are inaccurate or incomplete. -- Highlights: ► A general discussion on calculating statistical distributions from relations of creation, annihilation, and number operators. ► A systematic study of the statistical distributions corresponding to various q-deformation schemes. ► Arguing that many results on q-deformation distributions in the literature are inaccurate or incomplete
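
    The Gentile distribution described here — maximum occupation number an arbitrary integer G — can be computed directly from the truncated partition sum over occupations 0..G. A minimal sketch, where the variable x stands for β(ε−μ):

```python
import math

def gentile_mean_occupation(x, G):
    """Mean occupation number for Gentile statistics with maximum occupation
    G, computed from the truncated sum z = sum_{k=0}^{G} e^{-k*x},
    where x = beta * (epsilon - mu)."""
    weights = [math.exp(-k * x) for k in range(G + 1)]
    return sum(k * w for k, w in enumerate(weights)) / sum(weights)

x = 1.0
print(gentile_mean_occupation(x, 1))    # G = 1 recovers Fermi-Dirac: 1/(e^x + 1)
print(gentile_mean_occupation(x, 500))  # large G approaches Bose-Einstein: 1/(e^x - 1)
```

    Setting G = 1 recovers the Fermi-Dirac distribution 1/(e^x + 1), while letting G grow large approaches the Bose-Einstein distribution 1/(e^x − 1), which is exactly the interpolation the abstract describes.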

  16. State analysis of BOP using statistical and heuristic methods

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Chang, Soon Heung

    2003-01-01

    Under the deregulation environment, the performance enhancement of the BOP in nuclear power plants is being highlighted. To analyze the performance level of the BOP, the performance test procedures provided by an authorized institution such as ASME are used. However, plant investigation showed that the procedures' requirements on the reliability and number of sensors were difficult to satisfy. As a solution, a state analysis method, an expanded concept of signal validation, was proposed on the basis of statistical and heuristic approaches. The authors recommend a statistical linear regression model, built by analyzing correlations among BOP parameters, as the reference state analysis method. Its advantages are that its derivation is not heuristic, its model uncertainty can be calculated, and it is easy to apply to an actual plant. The error of the statistical linear regression model is below 3% under normal as well as abnormal system states. Additionally, a neural network model is recommended, since the statistical model cannot be applied to the validation of all of the sensors and is sensitive to outliers, i.e., signals located outside the statistical distribution. Because many sensors in the BOP need to be validated, wavelet analysis (WA) was applied as a pre-processor to reduce the input dimension and to enhance training accuracy. The outlier localization capability of WA enhanced the robustness of the neural network. The trained neural network restored degraded signals to values within ±3% of the true signals.
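
    The reference state analysis method described here — a statistical linear regression among correlated BOP parameters, with residuals used to flag degraded sensors — can be sketched as follows. The parameter names, data, and the 3% tolerance are illustrative assumptions based on the abstract, not the actual plant model.

```python
import statistics

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y ~ a + b*x (one predictor)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def flag_degraded(xs, ys, a, b, tolerance=0.03):
    """Flag readings whose relative deviation from the regression
    prediction exceeds the tolerance (the abstract quotes ~3% model error)."""
    return [abs(y - (a + b * x)) / abs(a + b * x) > tolerance
            for x, y in zip(xs, ys)]

# hypothetical reference data from a healthy plant state
ref_load = [40.0, 55.0, 70.0, 85.0, 100.0]     # generator load (%)
ref_flow = [81.0, 111.0, 141.0, 171.0, 201.0]  # feedwater flow (kg/s)
a, b = fit_linear(ref_load, ref_flow)

# new readings: the second flow reading has drifted well beyond tolerance
print(flag_degraded([50.0, 90.0], [101.0, 215.0], a, b))  # → [False, True]
```

    Fitting on known-good reference data, rather than on the readings being validated, keeps a single degraded sensor from contaminating the model.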

  17. Multileaf collimator performance monitoring and improvement using semiautomated quality control testing and statistical process control

    International Nuclear Information System (INIS)

    Létourneau, Daniel; McNiven, Andrea; Keller, Harald; Wang, An; Amin, Md Nurul; Pearce, Jim; Norrlinger, Bernhard; Jaffray, David A.

    2014-01-01

    Purpose: High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. Methods: The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3–4 times/week over a period of 10–11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ±0.5 and ±1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. Results: The precision of the MLC performance monitoring QC test and the MLC itself was within ±0.22 mm for most MLC leaves

  18. Multileaf collimator performance monitoring and improvement using semiautomated quality control testing and statistical process control.

    Science.gov (United States)

    Létourneau, Daniel; Wang, An; Amin, Md Nurul; Pearce, Jim; McNiven, Andrea; Keller, Harald; Norrlinger, Bernhard; Jaffray, David A

    2014-12-01

    High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3-4 times/week over a period of 10-11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ± 0.5 and ± 1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. The precision of the MLC performance monitoring QC test and the MLC itself was within ± 0.22 mm for most MLC leaves and the majority of the
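
    The individuals control charts with per-leaf control limits and ±0.5/±1 mm specification limits described in this record can be sketched as follows. The 2.66 × (mean moving range) control-limit constant is the standard individuals-chart convention; the leaf-position errors below are invented for illustration.

```python
import statistics

def individuals_chart_limits(values):
    """Lower/upper control limits for an individuals (I-MR) chart:
    center +/- 2.66 * mean moving range."""
    center = statistics.fmean(values)
    mr_bar = statistics.fmean(abs(b - a) for a, b in zip(values, values[1:]))
    return center - 2.66 * mr_bar, center + 2.66 * mr_bar

def flag_leaf(values, spec=0.5):
    """Flag each measurement as out-of-control (beyond the leaf's own
    control limits) and/or out-of-specification (beyond +/- spec mm)."""
    lcl, ucl = individuals_chart_limits(values)
    return [{"out_of_control": not lcl <= v <= ucl, "out_of_spec": abs(v) > spec}
            for v in values]

# invented leaf-position errors (mm) from repeated QC tests; the last
# measurement simulates a leaf drifting out of tolerance
errors = [0.05, -0.02, 0.08, 0.01, -0.04, 0.06, 0.03, 0.62]
flags = flag_leaf(errors)
print([i for i, f in enumerate(flags) if f["out_of_control"] or f["out_of_spec"]])
```

    Control limits computed from the leaf's own history catch nonrandom drift, while the fixed ±0.5 mm specification catches clinically relevant error regardless of past behavior.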

  19. Statistical summary 1990-91

    International Nuclear Information System (INIS)

    1991-01-01

    The information contained in this statistical summary leaflet summarizes in bar charts or pie charts Nuclear Electric's performance in 1990-91 in the areas of finance, plant and plant operations, safety, commercial operations and manpower. It is intended that the information will provide a basis for comparison in future years. The leaflet also includes a summary of Nuclear Electric's environmental policy statement. (UK)

  20. A survey of investigative entrepreneurship in physical education office of Isfahan province

    Directory of Open Access Journals (Sweden)

    Khodayar Momeni

    2012-09-01

    Full Text Available The purpose of this study is to investigate entrepreneurship in the Physical Education Office of Isfahan province. The research was performed in the administrative offices associated with sport in Isfahan province. The research method is descriptive and of the correlational type, based on a survey. The statistical population includes all expert staff, both permanent and contract employees, numbering 205 in 2012. According to the Morgan table, the sample was 132 people, selected randomly. The Wisbird organizational entrepreneurship questionnaire was used to perform the study. The validity and reliability of the survey were confirmed using Cronbach's alpha (α=0.91). In this study, descriptive statistics (frequency, distribution, percentage, mean and standard deviation) and inferential statistics (Pearson correlation test) were used. The results showed that the mean and standard deviation of organizational entrepreneurship were 2.79 and 0.28, respectively. The highest and lowest scores were 4 and 1.54, respectively. Furthermore, among the dimensions of organizational entrepreneurship, the lowest average was related to reward and the highest to goals and relations.
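
    Cronbach's alpha, used in this record to confirm the questionnaire's reliability (α=0.91), is computed from the item variances and the variance of the total scores. A minimal sketch with invented questionnaire data (not the Wisbird instrument's actual items):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (each inner list holds one
    item's scores across all respondents)."""
    k = len(items)
    item_vars = sum(statistics.variance(col) for col in items)
    total_var = statistics.variance([sum(scores) for scores in zip(*items)])
    return k / (k - 1) * (1 - item_vars / total_var)

# invented 4-item questionnaire scored by 6 respondents (Likert 1-5)
items = [
    [3, 4, 2, 5, 4, 3],
    [3, 5, 2, 4, 4, 3],
    [2, 4, 3, 5, 5, 3],
    [3, 4, 2, 5, 4, 4],
]
print(round(cronbach_alpha(items), 2))
```

    Because the invented items move together across respondents, alpha comes out high; uncorrelated items would drive it toward zero.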