International Nuclear Information System (INIS)
Tadaki, Kohtaro
2010-01-01
The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed in our previous works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced thermodynamic quantities, such as the partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in this interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that the same holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T ∈ (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument at the same level of mathematical strictness as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which actualizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
Statistics and Data Interpretation for Social Work
Rosenthal, James
2011-01-01
"Without question, this text will be the most authoritative source of information on statistics in the human services. From my point of view, it is a definitive work that combines a rigorous pedagogy with a down to earth (commonsense) exploration of the complex and difficult issues in data analysis (statistics) and interpretation. I welcome its publication.". -Praise for the First Edition. Written by a social worker for social work students, this is a nuts and bolts guide to statistics that presents complex calculations and concepts in clear, easy-to-understand language. It includes
Combinatorial interpretation of Haldane-Wu fractional exclusion statistics.
Aringazin, A K; Mazhitov, M I
2002-08-01
Assuming that the maximal allowed number of identical particles in a state is an integer parameter, q, we derive the statistical weight and analyze the associated equation that defines the statistical distribution. The derived distribution covers the Fermi-Dirac and Bose-Einstein distributions in the particular cases q = 1 and q → ∞ (n_i/q → 1), respectively. We show that the derived statistical weight provides a natural combinatorial interpretation of Haldane-Wu fractional exclusion statistics, and present exact solutions of the distribution equation.
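An illustrative aside on the interpolation this abstract describes (notation mine, not necessarily the authors'): capping the occupancy of a state at q particles yields a Gentile-type mean occupation number,

```latex
\langle n_i \rangle \;=\; \frac{1}{e^{\beta(\epsilon_i-\mu)}-1} \;-\; \frac{q+1}{e^{(q+1)\beta(\epsilon_i-\mu)}-1},
```

which reduces to the Fermi-Dirac form $1/(e^{\beta(\epsilon_i-\mu)}+1)$ at q = 1 and to the Bose-Einstein form $1/(e^{\beta(\epsilon_i-\mu)}-1)$ as q → ∞.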
Equivalent statistics and data interpretation.
Francis, Gregory
2017-08-01
Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
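As an illustration of the equivalence Francis describes, the sketch below (Python; the variable names and the large-sample CI formula for d are my simplifications, not the paper's) recovers Cohen's d, and hence a confidence interval for it, from the t statistic and sample sizes alone:

```python
# Sketch of the equivalence: with known sample sizes, Cohen's d (and an
# approximate CI for it) can be recovered from the t statistic alone, so the
# p value and the d-based summaries carry the same information.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x, y = rng.normal(0.4, 1, 40), rng.normal(0.0, 1, 40)
n1, n2 = len(x), len(y)

t, p = stats.ttest_ind(x, y)              # classical two-sample t test
d = t * np.sqrt(1 / n1 + 1 / n2)          # Cohen's d from t and sample sizes
se_d = np.sqrt(1 / n1 + 1 / n2 + d**2 / (2 * (n1 + n2)))  # common approximation
ci = (d - 1.96 * se_d, d + 1.96 * se_d)   # approximate 95% CI for d

print(f"t = {t:.3f}, p = {p:.4f}, d = {d:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```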
Tuuli, Methodius G; Odibo, Anthony O
2011-08-01
The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.
Statistics translated a step-by-step guide to analyzing and interpreting data
Terrell, Steven R
2012-01-01
Written in a humorous and encouraging style, this text shows how the most common statistical tools can be used to answer interesting real-world questions, presented as mysteries to be solved. Engaging research examples lead the reader through a series of six steps, from identifying a researchable problem to stating a hypothesis, identifying independent and dependent variables, and selecting and interpreting appropriate statistical tests. All techniques are demonstrated both manually and with the help of SPSS software. The book provides students and others who may need to read and interpret sta
Advanced statistics to improve the physical interpretation of atomization processes
International Nuclear Information System (INIS)
Panão, Miguel R.O.; Radu, Lucian
2013-01-01
Highlights: ► Finite pdf mixtures improve the physical interpretation of sprays. ► A Bayesian approach using an MCMC algorithm is used to find the best finite mixture. ► The statistical method identifies multiple droplet clusters in a spray. ► Multiple drop clusters can eventually be associated with multiple atomization mechanisms. ► The spray is described by the drop size distribution and not only its moments. -- Abstract: This paper reports an analysis of the physics of atomization processes using advanced statistical tools, namely, finite mixtures of probability density functions, whose best fit is found using a Bayesian approach based on a Markov chain Monte Carlo (MCMC) algorithm. This approach takes into account eventual multimodality and heterogeneities in drop size distributions. It therefore provides information about the complete probability density function of multimodal drop size distributions and allows the identification of subgroups in heterogeneous data, improving the physical interpretation of atomization processes. It also overcomes the limitations of analyzing spray droplet characteristics through moments alone, in particular, the masking of distinct mechanisms of droplet formation. Finally, the method is applied to physically interpret a case study based on multijet atomization processes
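As an illustrative aside: the paper fits finite pdf mixtures by Bayesian MCMC; the lighter-weight sketch below (Python; synthetic data, with maximum-likelihood fitting and BIC selection standing in for the Bayesian machinery) shows the same idea of detecting multiple droplet clusters:

```python
# Illustrative only: a maximum-likelihood Gaussian mixture on log drop
# diameters, with the number of components chosen by BIC, stands in for
# the paper's Bayesian MCMC fit of finite pdf mixtures.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic bimodal spray: two lognormal droplet populations (diameters, micron)
d = np.concatenate([rng.lognormal(2.0, 0.3, 600), rng.lognormal(3.2, 0.25, 400)])
X = np.log(d).reshape(-1, 1)

fits = [GaussianMixture(k, random_state=0).fit(X) for k in range(1, 5)]
best = min(fits, key=lambda g: g.bic(X))
print("droplet clusters found:", best.n_components)        # expect 2
print("cluster mean diameters (micron):", np.exp(best.means_.ravel()))
```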
Interpretation of the results of statistical measurements. [search for basic probability model
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.
Kieffer, Kevin M.; Thompson, Bruce
As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample-size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
Farrell, Mary Beth
2018-06-01
This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic, the higher the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of the P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around the mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution, taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being
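A minimal sketch of the agreement measures discussed above (Python with scikit-learn; the toy readings are invented), contrasting raw percent agreement with the chance-corrected Cohen κ:

```python
# Raw percent agreement vs. the chance-corrected Cohen kappa for two readers.
import numpy as np
from sklearn.metrics import cohen_kappa_score

reader1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])  # 1 = scan read as abnormal
reader2 = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])

percent_agreement = np.mean(reader1 == reader2)
kappa = cohen_kappa_score(reader1, reader2)
print(f"agreement = {percent_agreement:.0%}, kappa = {kappa:.2f}")  # kappa < raw agreement
```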
An Introduction to Statistical Concepts
Lomax, Richard G
2012-01-01
This comprehensive, flexible text is used in both one- and two-semester courses to review introductory through intermediate statistics. Instructors select the topics that are most appropriate for their course. Its conceptual approach helps students more easily understand the concepts and interpret SPSS and research results. Key concepts are simply stated and occasionally reintroduced and related to one another for reinforcement. Numerous examples demonstrate their relevance. This edition features more explanation to increase understanding of the concepts. Only crucial equations are included. I
Menzerath-Altmann Law: Statistical Mechanical Interpretation as Applied to a Linguistic Organization
Eroglu, Sertac
2014-10-01
The distribution behavior described by the empirical Menzerath-Altmann law is frequently encountered during the self-organization of linguistic and non-linguistic natural organizations at various structural levels. This study presents a statistical mechanical derivation of the law based on the analogy between the classical particles of a statistical mechanical organization and the distinct words of a textual organization. The derived model, a transformed (generalized) form of the Menzerath-Altmann model, is termed the statistical mechanical Menzerath-Altmann model. The derived model allows the model parameters to be interpreted in terms of physical concepts. We also propose that many organizations exhibiting Menzerath-Altmann law behavior, whether linguistic or not, can be methodically examined with the transformed distribution model through a properly defined structure-dependent parameter and the energy-associated states.
International Nuclear Information System (INIS)
Lan, B.L.
2001-01-01
An alternative interpretation to Bohm's 'quantum force' and 'active information' is proposed. Numerical evidence is presented, which suggests that the time series of Bohm's 'quantum force' evaluated at the Bohmian position for non-stationary quantum states are typically non-Gaussian stable distributed with a flat power spectrum in classically chaotic Hamiltonian systems. An important implication of these statistical properties is briefly mentioned. (orig.)
Alternative interpretations of statistics on health effects of low-level radiation
International Nuclear Information System (INIS)
Hamilton, L.D.
1983-01-01
Four examples of the interpretation of statistics of data on low-level radiation are reviewed: (a) genetic effects of the atomic bombs at Hiroshima and Nagasaki, (b) cancer at Rocky Flats, (c) childhood leukemia and fallout in Utah, and (d) cancer among workers at the Portsmouth Naval Shipyard. Aggregation of data, adjustment for age, and other problems related to the determination of health effects of low-level radiation are discussed. Troublesome issues related to post hoc analysis are considered.
Statistical transformation and the interpretation of inpatient glucose control data.
Saulnier, George E; Castro, Janna C; Cook, Curtiss B
2014-03-01
To introduce a statistical method of assessing hospital-based non-intensive care unit (non-ICU) inpatient glucose control. Point-of-care blood glucose (POC-BG) data from hospital non-ICUs were extracted for January 1 through December 31, 2011. Glucose data distribution was examined before and after Box-Cox transformations and compared to normality. Different subsets of data were used to establish upper and lower control limits, and exponentially weighted moving average (EWMA) control charts were constructed from June, July, and October data as examples to determine if out-of-control events were identified differently in nontransformed versus transformed data. A total of 36,381 POC-BG values were analyzed. In all 3 monthly test samples, glucose distributions in nontransformed data were skewed but approached a normal distribution once transformed. Interpretation of out-of-control events from EWMA control chart analyses also revealed differences. In the June test data, an out-of-control process was identified at sample 53 with nontransformed data, whereas the transformed data remained in control for the duration of the observed period. Analysis of July data demonstrated an out-of-control process sooner in the transformed (sample 55) than nontransformed (sample 111) data, whereas for October, transformed data remained in control longer than nontransformed data. Statistical transformations increase the normal behavior of inpatient non-ICU glycemic data sets. The decision to transform glucose data could influence the interpretation and conclusions about the status of inpatient glycemic control. Further study is required to determine whether transformed versus nontransformed data influence clinical decisions or evaluation of interventions.
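A minimal sketch of the pipeline described above, on invented glucose values (Python; the 3-sigma asymptotic EWMA limits are a textbook choice, not necessarily the authors' exact settings):

```python
# Box-Cox transform skewed point-of-care glucose toward normality, then
# build an EWMA statistic and flag samples outside asymptotic 3-sigma limits.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
glucose = rng.lognormal(mean=5.0, sigma=0.25, size=500)  # skewed POC-BG, mg/dL

transformed, lam = stats.boxcox(glucose)   # lam: estimated Box-Cox lambda
w = 0.2                                    # EWMA weight; 0.05-0.25 is typical
ewma = pd.Series(transformed).ewm(alpha=w).mean()

mu, sigma = transformed.mean(), transformed.std(ddof=1)
halfwidth = 3 * sigma * np.sqrt(w / (2 - w))   # asymptotic 3-sigma limits
out_of_control = (ewma < mu - halfwidth) | (ewma > mu + halfwidth)
print(f"Box-Cox lambda = {lam:.2f}, out-of-control samples: {out_of_control.sum()}")
```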
Bieber, Frederick R; Buckleton, John S; Budowle, Bruce; Butler, John M; Coble, Michael D
2016-08-31
The evaluation and interpretation of forensic DNA mixture evidence faces greater interpretational challenges due to increasingly complex mixture evidence. Such challenges include: casework involving low-quantity or degraded evidence leading to allele and locus dropout; allele sharing of contributors leading to allele stacking; and differentiation of PCR stutter artifacts from true alleles. There is variation in the statistical approaches used to evaluate the strength of the evidence when inclusion of a specific known individual(s) is determined, and the approaches used must be supportable. There are concerns that methods utilized for interpretation of complex forensic DNA mixtures may not be implemented properly in some casework. Similar questions are being raised in a number of U.S. jurisdictions, leading to some confusion about mixture interpretation for current and previous casework. Key elements necessary for the interpretation and statistical evaluation of forensic DNA mixtures are described. The most common method for statistical evaluation of DNA mixtures in many parts of the world, including the USA, is the Combined Probability of Inclusion/Exclusion (CPI/CPE); exposition and elucidation of this method and a protocol for its use are the focus of this article. Formulae and other supporting materials are provided. Guidance and details of a DNA mixture interpretation protocol are provided for application of the CPI/CPE method in the analysis of more complex forensic DNA mixtures. This description, in turn, should help reduce the variability of interpretation with application of this methodology and thereby improve the quality of DNA mixture interpretation throughout the forensic community.
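An illustrative sketch of the CPI/CPE arithmetic named above: per locus, CPI is the squared sum of the population frequencies of the alleles observed in the mixture, and loci are combined by multiplication (Python; the loci and frequencies are invented):

```python
# CPI = product over loci of (sum of observed allele frequencies)^2;
# CPE = 1 - CPI. Frequencies below are for illustration only.
from math import prod

locus_freqs = {
    "D8S1179": [0.15, 0.30, 0.10],   # frequencies of alleles seen at this locus
    "D21S11":  [0.22, 0.18],
    "TH01":    [0.25, 0.12, 0.08],
}

cpi = prod(sum(freqs) ** 2 for freqs in locus_freqs.values())
cpe = 1 - cpi
print(f"CPI = {cpi:.4g}, CPE = {cpe:.4g}")
```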
Statistical Literacy: High School Students in Reading, Interpreting and Presenting Data
Hafiyusholeh, M.; Budayasa, K.; Siswono, T. Y. E.
2018-01-01
One of the foundations for high school students in statistics is to be able to read data and to present data in the form of tables and diagrams together with their interpretation. The purpose of this study is to describe high school students' competencies in reading, interpreting and presenting data. The subjects were male and female students with high levels of mathematical ability. Data were collected through formulated tasks and analyzed by data reduction, presentation, and verification. Results showed that the students read the data based on explicit explanations on the diagram, such as explaining the points in the diagram as the relation between the x and y axes and determining the simple trend of a graph, including the maximum and minimum points. In interpreting and summarizing the data, both subjects paid attention to general data trends and used them to predict increases or decreases in the data. The male student estimated the (n+1)th value of the weight data using the mode of the data, while the female estimated it using the average. The male tended not to consider the characteristics of the data, while the female considered them more carefully.
Experimental statistics for biological sciences.
Bang, Heejung; Davidian, Marie
2010-01-01
In this chapter, we cover basic and fundamental principles and methods in statistics - from "What are Data and Statistics?" to "ANOVA and linear regression" - which are the basis of any statistical thinking and undertaking. Readers can easily find the selected topics in most introductory statistics textbooks, but we have tried to assemble and structure them in a succinct and reader-friendly manner in a stand-alone chapter. This text has long been used in real classroom settings for both undergraduate and graduate students who do or do not major in statistical sciences. We hope that from this chapter, readers will understand the key statistical concepts and terminology, how to design a study (experimental or observational), how to analyze the data (e.g., describe the data and/or estimate the parameter(s) and make inference), and how to interpret the results. This text would be most useful as supplemental material while readers take their own statistics courses, or as a reference text to accompany a manual for any statistical software as a self-teaching guide.
Misuse of statistics in the interpretation of data on low-level radiation
International Nuclear Information System (INIS)
Hamilton, L.D.
1982-01-01
Four misuses of statistics in the interpretation of data of low-level radiation are reviewed: (1) post-hoc analysis and aggregation of data leading to faulty conclusions in the reanalysis of genetic effects of the atomic bomb, and premature conclusions on the Portsmouth Naval Shipyard data; (2) inappropriate adjustment for age and ignoring differences between urban and rural areas leading to potentially spurious increase in incidence of cancer at Rocky Flats; (3) hazard of summary statistics based on ill-conditioned individual rates leading to spurious association between childhood leukemia and fallout in Utah; and (4) the danger of prematurely published preliminary work with inadequate consideration of epidemiological problems - censored data - leading to inappropriate conclusions, needless alarm at the Portsmouth Naval Shipyard, and diversion of scarce research funds.
Statistical significance versus clinical relevance.
van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G
2017-04-01
In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
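A small simulation of the interpretation spelled out above: when the null hypothesis is true, about 5% of replicated studies yield P < 0.05 (Python sketch):

```python
# Under H0 (both groups drawn from the same distribution), the fraction of
# replications with p < 0.05 should be close to 0.05 by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
reps = 10_000
pvals = [
    stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue
    for _ in range(reps)
]
print(f"fraction with p < 0.05 under H0: {np.mean(np.array(pvals) < 0.05):.3f}")
```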
Jieyi Li; Arandjelovic, Ognjen
2017-07-01
Computer science and machine learning in particular are increasingly lauded for their potential to aid medical practice. However, the highly technical nature of state-of-the-art techniques can be a major obstacle to their usability by health care professionals and thus to their adoption and actual practical benefit. In this paper we describe a software tool which focuses on the visualization of predictions made by a recently developed method which leverages data in the form of large-scale electronic records for making diagnostic predictions. Guided by risk predictions, our tool allows the user to explore interactively different diagnostic trajectories, or display cumulative long-term prognostics, in an intuitive and easily interpretable manner.
Analysis of statistical misconception in terms of statistical reasoning
Maryati, I.; Priatna, N.
2018-05-01
Reasoning skill is needed by everyone in the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, interpret, and draw conclusions from information. Developing this skill can be done through various levels of education. However, the skill remains low because many people, students included, assume that statistics is just the ability to count and use formulas. Students still have a negative attitude toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of a mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If 65 is taken as the minimal value for standard achievement of course competence, the students' mean values are lower than the standard. The results of the misconception study emphasize which subtopics should be considered. Based on the assessment results, students' misconceptions occur in: 1) writing mathematical sentences and symbols well, 2) understanding basic definitions, 3) determining the concept to be used in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.
Localized Smart-Interpretation
Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas; Bach, Torben; Pallesen, Tom
2014-05-01
The complex task of setting up a geological model consists not only of combining available geological information into a conceptually plausible model, but also requires consistency with available data, e.g. geophysical data. However, in many cases the direct geological information, e.g. borehole samples, is very sparse, so in order to create a geological model, the geologist needs to rely on the geophysical data. The problem is, however, that the amount of geophysical data is in many cases so vast that it is practically impossible to integrate all of it in the manual interpretation process. This means that a lot of the information available from the geophysical surveys is unexploited, which is a problem, because the resulting geological model does not fulfill its full potential and hence is less trustworthy. We suggest an approach to geological modeling that 1) allows all geophysical data to be considered when building the geological model, 2) is fast, and 3) allows quantification of the geological modeling. The method is constructed to build a statistical model, f(d,m), describing the relation between what the geologist interprets, d, and what the geologist knows, m. The parameter m reflects any available information that can be quantified, such as geophysical data, the result of a geophysical inversion, elevation maps, etc. The parameter d reflects an actual interpretation, such as, for example, the depth to the base of a groundwater reservoir. First we infer a statistical model f(d,m) by examining sets of actual interpretations made by a geological expert, [d1, d2, ...], and the information used to perform the interpretations, [m1, m2, ...]. This makes it possible to quantify how the geological expert performs interpolation through f(d,m). As the geological expert proceeds with interpreting, the number of interpreted data points from which the statistical model is inferred increases, and therefore the accuracy of the statistical model increases. When a model f
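A hedged sketch of the idea: learn f(d,m) from expert picks as a regression from quantified information m to interpretation d (Python; the features, data, and choice of regressor are invented for illustration, not the authors' method):

```python
# Stand-in for f(d, m): a regression trained on expert interpretations d
# (e.g. depth picks) against quantified information m (e.g. geophysics).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
m = rng.normal(size=(200, 3))   # hypothetical features: resistivity, elevation, ...
d = 40 + 5 * m[:, 0] - 3 * m[:, 1] + rng.normal(0, 1, 200)  # expert depth picks

f = RandomForestRegressor(n_estimators=200, random_state=0).fit(m, d)
new_m = rng.normal(size=(1, 3))  # a location the expert has not yet interpreted
print(f"predicted depth: {f.predict(new_m)[0]:.1f} m")
```

As more expert picks accumulate, the training set grows and the stand-in for f(d,m) becomes more accurate, mirroring the loop the abstract describes.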
Teaching the Assessment of Normality Using Large Easily-Generated Real Data Sets
Kulp, Christopher W.; Sprechini, Gene D.
2016-01-01
A classroom activity is presented, which can be used in teaching students statistics with an easily generated, large, real world data set. The activity consists of analyzing a video recording of an object. The colour data of the recorded object can then be used as a data set to explore variation in the data using graphs including histograms,…
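A minimal sketch of such an activity, assuming OpenCV and matplotlib are available and "clip.mp4" stands in for the recorded video:

```python
# Treat the per-frame mean of one colour channel of a video as a large,
# easily generated real data set and histogram it.
import cv2
import matplotlib.pyplot as plt

cap = cv2.VideoCapture("clip.mp4")   # placeholder file name
reds = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    reds.append(frame[:, :, 2].mean())   # OpenCV frames are BGR; index 2 = red
cap.release()

plt.hist(reds, bins=30)
plt.xlabel("mean red intensity per frame")
plt.ylabel("number of frames")
plt.show()
```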
Spreadsheets as tools for statistical computing and statistics education
Neuwirth, Erich
2000-01-01
Spreadsheets are a ubiquitous program category, and we will discuss their use in statistics and statistics education on various levels, ranging from very basic examples to extremely powerful methods. Since the spreadsheet paradigm is very familiar to many potential users, using it as the interface to statistical methods can make statistics more easily accessible.
van Driel, A.F.; Nikolaev, I.; Vergeer, P.; Lodahl, P.; Vanmaekelbergh, D.; Vos, Willem L.
2007-01-01
We present a statistical analysis of time-resolved spontaneous emission decay curves from ensembles of emitters, such as semiconductor quantum dots, with the aim of interpreting ubiquitous non-single-exponential decay. Contrary to what is widely assumed, the density of excited emitters and the
Statistical interpretation of geochemical data
International Nuclear Information System (INIS)
Carambula, M.
1990-01-01
Statistical results have been obtained from a geochemical survey covering the following four aerial photographs: Zapican, Carape, Las Canias, Alferez. In total, 3020 samples were analyzed for 22 chemical elements using plasma emission spectrometry methods.
Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall
2016-01-01
Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis discusses also several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) it offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
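For the classical side of the comparison, a minimal Kaplan-Meier sketch (Python with the lifelines package; the follow-up data are invented, not the cervical cancer screening data):

```python
# Kaplan-Meier estimate of the survival function on toy follow-up data.
import numpy as np
from lifelines import KaplanMeierFitter

durations = np.array([5, 8, 12, 12, 15, 22, 30, 34, 41, 48])  # months of follow-up
observed = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])           # 1 = event, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)
print(kmf.survival_function_)   # step-function estimate of S(t)
```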
Saulnier, George E; Castro, Janna C; Cook, Curtiss B
2014-05-01
Glucose control can be problematic in critically ill patients. We evaluated the impact of statistical transformation on interpretation of intensive care unit inpatient glucose control data. Point-of-care blood glucose (POC-BG) data derived from patients in the intensive care unit for 2011 was obtained. Box-Cox transformation of POC-BG measurements was performed, and distribution of data was determined before and after transformation. Different data subsets were used to establish statistical upper and lower control limits. Exponentially weighted moving average (EWMA) control charts constructed from April, October, and November data determined whether out-of-control events could be identified differently in transformed versus nontransformed data. A total of 8679 POC-BG values were analyzed. POC-BG distributions in nontransformed data were skewed but approached normality after transformation. EWMA control charts revealed differences in projected detection of out-of-control events. In April, an out-of-control process resulting in the lower control limit being exceeded was identified at sample 116 in nontransformed data but not in transformed data. October transformed data detected an out-of-control process exceeding the upper control limit at sample 27 that was not detected in nontransformed data. Nontransformed November results remained in control, but transformation identified an out-of-control event less than 10 samples into the observation period. Using statistical methods to assess population-based glucose control in the intensive care unit could alter conclusions about the effectiveness of care processes for managing hyperglycemia. Further study is required to determine whether transformed versus nontransformed data change clinical decisions about the interpretation of care or intervention results. © 2014 Diabetes Technology Society.
Conversion factors and oil statistics
International Nuclear Information System (INIS)
Karbuz, Sohbet
2004-01-01
World oil statistics, in scope and accuracy, are often far from perfect. They can easily lead to misguided conclusions regarding the state of market fundamentals. Without proper attention directed at statistical caveats, the ensuing interpretation of oil market data opens the door to unnecessary volatility, and can distort perception of market fundamentals. Among the numerous caveats associated with the compilation of oil statistics, conversion factors, used to produce aggregated data, play a significant role. Interestingly enough, little attention is paid to conversion factors, i.e. to the relation between different units of measurement for oil. Additionally, the underlying information regarding the choice of a specific factor when trying to produce measurements of aggregated data remains scant. The aim of this paper is to shed some light on the impact of conversion factors for two commonly encountered issues, mass to volume equivalencies (barrels to tonnes) and broad energy measures encountered in world oil statistics. This paper will seek to demonstrate how inappropriate and misused conversion factors can yield wildly varying results and ultimately distort oil statistics. Examples will show that while discrepancies in commonly used conversion factors may seem trivial, their impact on the assessment of a world oil balance is far from negligible. A unified and harmonised convention for conversion factors is necessary to achieve accurate comparisons and aggregate oil statistics for the benefit of both end-users and policy makers
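An illustrative aside on the barrels-to-tonnes caveat (Python; the factors are round numbers for illustration, since the appropriate value varies with crude density):

```python
# The same reported volume aggregated under three plausible conversion
# factors; the spread compounds when many such figures enter a world balance.
barrels = 10_000_000   # reported production, barrels

for bbl_per_tonne in (7.0, 7.33, 7.6):
    tonnes = barrels / bbl_per_tonne
    print(f"{bbl_per_tonne:>5} bbl/t -> {tonnes / 1e6:6.3f} million tonnes")
# roughly a 0.11 Mt spread on a single 10-million-barrel figure
```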
Shewhart, Mark
1991-01-01
Statistical Process Control (SPC) charts are one of several tools used in quality control. Other tools include flow charts, histograms, cause and effect diagrams, check sheets, Pareto diagrams, graphs, and scatter diagrams. A control chart is simply a graph which indicates process variation over time. The purpose of drawing a control chart is to detect any changes in the process signalled by abnormal points or patterns on the graph. The Artificial Intelligence Support Center (AISC) of the Acquisition Logistics Division has developed a hybrid machine learning expert system prototype which automates the process of constructing and interpreting control charts.
Is the statistic value all we should care about in neuroimaging?
Chen, Gang; Taylor, Paul A; Cox, Robert W
2017-02-15
Here we address an important issue that has been embedded within the neuroimaging community for a long time: the absence of effect estimates in results reporting in the literature. The statistic value itself, as a dimensionless measure, does not provide information on the biophysical interpretation of a study, and it certainly does not represent the whole picture of a study. Unfortunately, in contrast to standard practice in most scientific fields, effect (or amplitude) estimates are usually not provided in most results reporting in the current neuroimaging publications and presentations. Possible reasons underlying this general trend include (1) lack of general awareness, (2) software limitations, (3) inaccurate estimation of the BOLD response, and (4) poor modeling due to our relatively limited understanding of FMRI signal components. However, as we discuss here, such reporting damages the reliability and interpretability of the scientific findings themselves, and there is in fact no overwhelming reason for such a practice to persist. In order to promote meaningful interpretation, cross validation, reproducibility, meta and power analyses in neuroimaging, we strongly suggest that, as part of good scientific practice, effect estimates should be reported together with their corresponding statistic values. We provide several easily adaptable recommendations for facilitating this process. Published by Elsevier Inc.
Mottelson, Ida Nygaard; Sodemann, Morten; Nielsen, Dorthe Susanne
2018-03-01
Immigrants, refugees, and their descendants comprise 12% of Denmark's population. Some of these people do not speak or understand Danish well enough to communicate with the staff in a healthcare setting and therefore need interpreter services. Interpretation through video conferencing equipment (video interpretation) is frequently used and creates a forum where the interpreter is not physically present in the medical consultation. The aim of this study was to investigate the attitudes to and experiences with video interpretation among charge nurses in a Danish university hospital. An electronic questionnaire was sent to 99 charge nurses. The questionnaire comprised both closed and open-ended questions. The answers were analysed using descriptive statistics and thematic text condensation. Of the 99 charge nurses, 78 (79%) completed the questionnaire. Most charge nurses, 21 (91%) of the daily/monthly users, and 21 (72%) of the monthly/yearly users, said that video interpretation increased the quality of their conversations with patients. A total of 19 (24%) departments had not used video interpretation within the last 12 months. The more the charge nurses used video interpretation, the more satisfied they were. Most of the charge nurses using video interpretation expressed satisfaction with the technology and found it easy to use. Some charge nurses are still content to allow family or friends to interpret. To reach its full potential, video interpretation technology has to be reliable and easily accessible for any consultation, including at the bedside.
A synthetic interpretation: the double-preparation theory
International Nuclear Information System (INIS)
Gondran, Michel; Gondran, Alexandre
2014-01-01
In the 1927 Solvay conference, three apparently irreconcilable interpretations of the quantum mechanics wave function were presented: the pilot-wave interpretation by de Broglie, the soliton wave interpretation by Schrödinger and the Born statistical rule by Born and Heisenberg. In this paper, we demonstrate the complementarity of these interpretations corresponding to quantum systems that are prepared differently and we deduce a synthetic interpretation: the double-preparation theory. We first introduce in quantum mechanics the concept of semi-classical statistically prepared particles, and we show that in the Schrödinger equation these particles converge, when h→0, to the equations of a statistical set of classical particles. These classical particles are undiscerned, and if we assume continuity between classical mechanics and quantum mechanics, we conclude the necessity of the de Broglie–Bohm interpretation for the semi-classical statistically prepared particles (statistical wave). We then introduce in quantum mechanics the concept of a semi-classical deterministically prepared particle, and we show that in the Schrödinger equation this particle converges, when h→0, to the equations of a single classical particle. This classical particle is discerned and assuming continuity between classical mechanics and quantum mechanics, we conclude the necessity of the Schrödinger interpretation for the semi-classical deterministically prepared particle (the soliton wave). Finally we propose, in the semi-classical approximation, a new interpretation of quantum mechanics, the ‘theory of the double preparation’, which depends on the preparation of the particles. (paper)
International Nuclear Information System (INIS)
Shafieloo, Arman
2012-01-01
By introducing Crossing functions and hyper-parameters, I show that the Bayesian interpretation of the Crossing Statistics [1] can be used trivially for the purpose of model selection among cosmological models. In this approach, to falsify a cosmological model there is no need to compare it with other models or assume any particular form of parametrization for cosmological quantities like the luminosity distance, Hubble parameter or equation of state of dark energy. Instead, the hyper-parameters of the Crossing functions act as discriminators between correct and wrong models. Using this approach one can falsify any assumed cosmological model without putting priors on the underlying actual model of the universe and its parameters; hence the issue of dark energy parametrization is resolved. It will also be shown that the sensitivity of the method to the intrinsic dispersion of the data is small, which is another important characteristic of the method in testing cosmological models dealing with data with high uncertainties
DEFF Research Database (Denmark)
Van Driel, A.F.; Nikolaev, I.S.; Vergeer, P.
2007-01-01
We present a statistical analysis of time-resolved spontaneous emission decay curves from ensembles of emitters, such as semiconductor quantum dots, with the aim of interpreting ubiquitous non-single-exponential decay. Contrary to what is widely assumed, the density of excited emitters and the intensity in an emission decay curve are not proportional, but the density is a time integral of the intensity. The integral relation is crucial to correctly interpret non-single-exponential decay. We derive the proper normalization for both a discrete and a continuous distribution of rates, where every decay component is multiplied by its radiative decay rate. A central result of our paper is the derivation of the emission decay curve when both radiative and nonradiative decays are independently distributed. In this case, the well-known emission quantum efficiency can no longer be expressed
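A hedged sketch of the integral relation described, in discrete-rate notation of my own choosing: with component amplitudes $c_i$, total rates $\gamma_i$, and radiative rates $\gamma_i^{\mathrm{rad}}$,

```latex
N(t) \;=\; \sum_i c_i\, e^{-\gamma_i t},
\qquad
I(t) \;\propto\; \sum_i c_i\, \gamma_i^{\mathrm{rad}}\, e^{-\gamma_i t},
\qquad
N(t) \;\propto\; \int_t^{\infty} I(t')\, dt' \;\;\text{(purely radiative case)},
```

so each component of the measured intensity is weighted by its radiative rate, which is the normalization the abstract refers to, and the excited-state density is recovered as a time integral of the intensity rather than being proportional to it.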
Interpretation of commonly used statistical regression models.
Kasza, Jessica; Wolfe, Rory
2014-01-01
A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
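As an illustrative companion to the review's focus on coefficient interpretation, a logistic regression sketch in which exponentiated coefficients are read as odds ratios (Python with statsmodels; the data and variable names are synthetic):

```python
# Fit a logistic regression on synthetic data and read exp(beta) as odds
# ratios: one for the binary exposure, one per unit of the continuous covariate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
smoking = rng.integers(0, 2, n)                  # hypothetical exposure
age = rng.normal(50, 10, n)
logit = -4 + 0.9 * smoking + 0.05 * age
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # respiratory outcome

X = sm.add_constant(np.column_stack([smoking, age]))
fit = sm.Logit(y, X).fit(disp=0)
print(np.exp(fit.params))   # odds ratios: intercept, smoking, per year of age
```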
International Nuclear Information System (INIS)
Bonetti, R.; Milazzo, L.C.; Melanotte, M.
1983-01-01
A number of (p,n), (n,p), and (³He,p) reactions have been interpreted on the basis of the statistical multistep compound emission mechanism. Good agreement with experiment is found both in spectrum shape and in the value of the coherence widths
Autonomic Differentiation Map: A Novel Statistical Tool for Interpretation of Heart Rate Variability
Directory of Open Access Journals (Sweden)
Daniela Lucini
2018-04-01
In spite of the large body of evidence suggesting Heart Rate Variability (HRV), alone or combined with blood pressure variability (providing an estimate of baroreflex gain), as a useful technique to assess the autonomic regulation of the cardiovascular system, there is still an ongoing debate about methodology, interpretation, and clinical applications. In the present investigation, we hypothesize that non-parametric and multivariate exploratory statistical manipulation of HRV data could provide a novel informational tool useful to differentiate normal controls from clinical groups, such as athletes, or subjects affected by obesity, hypertension, or stress. With a data-driven protocol in 1,352 ambulant subjects, we compute HRV and baroreflex indices from short-term data series as proxies of autonomic (ANS) regulation. We apply a three-step statistical procedure, first removing age and gender effects. Subsequently, by factor analysis, we extract four ANS latent domains that retain the large majority of the information (86.94%), subdivided into oscillatory (40.84%), amplitude (18.04%), pressure (16.48%), and pulse (11.58%) domains. Finally, we test the overall capacity to differentiate clinical groups vs. controls. To give more practical value and improve readability, statistical results concerning individual discriminant ANS proxies and ANS differentiation profiles are displayed through peculiar graphical tools, i.e., a significance diagram and an ANS differentiation map, respectively. This approach, which simultaneously uses all available information about the system, shows which domains make up the difference in ANS discrimination: e.g., athletes differ from controls in all domains, but with a graded strength: maximal in the (normalized) oscillatory and pulse domains, slightly less in the pressure domain, and minimal in the amplitude domain. The application of multiple (non-parametric and exploratory) statistical and graphical tools to ANS proxies defines
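A hedged sketch of the three-step procedure on synthetic HRV-like proxies (Python; the variables, factor count, and classifier are stand-ins, not the study's exact pipeline):

```python
# Step 1: regress out age/gender; step 2: extract latent factors;
# step 3: test group differentiation on the factor scores.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n = 300
age = rng.uniform(20, 70, n)
gender = rng.integers(0, 2, n)
group = rng.integers(0, 2, n)                          # 0 = control, 1 = athlete
hrv = rng.normal(size=(n, 8)) + 0.8 * group[:, None]   # 8 synthetic ANS proxies

covars = np.column_stack([age, gender])
resid = hrv - LinearRegression().fit(covars, hrv).predict(covars)  # step 1

scores = FactorAnalysis(n_components=4, random_state=0).fit_transform(resid)  # step 2

acc = LinearDiscriminantAnalysis().fit(scores, group).score(scores, group)    # step 3
print(f"in-sample discrimination accuracy: {acc:.2f}")
```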
An interpretation of signature inversion
International Nuclear Information System (INIS)
Onishi, Naoki; Tajima, Naoki
1988-01-01
An interpretation in terms of the cranking model is presented to explain why signature inversion occurs for positive values of the axially asymmetric deformation parameter γ and emerges in specific orbitals. By introducing a continuous variable, the eigenvalue equation can be reduced to a one-dimensional Schroedinger equation, by means of which one can easily understand the cause of signature inversion. (author)
Braun, Stefan; Pokorná, Šárka; Šachl, Radek; Hof, Martin; Heerklotz, Heiko; Hoernke, Maria
2018-01-23
The mode of action of membrane-active molecules, such as antimicrobial, anticancer, cell penetrating, and fusion peptides and their synthetic mimics, transfection agents, drug permeation enhancers, and biological signaling molecules (e.g., quorum sensing), involves either the general or local destabilization of the target membrane or the formation of defined, rather stable pores. Some effects aim at killing the cell, while others need to be limited in space and time to avoid serious damage. Biological tests reveal translocation of compounds and cell death but do not provide a detailed, mechanistic, and quantitative understanding of the modes of action and their molecular basis. Model membrane studies of membrane leakage have been used for decades to tackle this issue, but their interpretation in terms of biology has remained challenging and often quite limited. Here we compare two recent, powerful protocols to study model membrane leakage: the microscopic detection of dye influx into giant liposomes and time-correlated single photon counting experiments to characterize dye efflux from large unilamellar vesicles. A statistical treatment of both data sets does not only harmonize apparent discrepancies but also makes us aware of principal issues that have been confusing the interpretation of model membrane leakage data so far. Moreover, our study reveals a fundamental difference between nano- and microscale systems that needs to be taken into account when conclusions about microscale objects, such as cells, are drawn from nanoscale models.
Applying contemporary statistical techniques
Wilcox, Rand R
2003-01-01
Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible. * Assumes no previous training in statistics. * Explains how and why modern statistical methods provide more accurate results than conventional methods. * Covers the latest developments on multiple comparisons. * Includes recent advanc
Harrigan, George G; Harrison, Jay M
2012-01-01
New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.
Workplace Statistical Literacy for Teachers: Interpreting Box Plots
Pierce, Robyn; Chick, Helen
2013-01-01
As a consequence of the increased use of data in workplace environments, there is a need to understand the demands that are placed on users to make sense of such data. In education, teachers are increasingly expected to interpret and apply complex data about student and school performance, and yet it is not clear that they always have the…
Directory of Open Access Journals (Sweden)
Paul A. Swinton
2018-05-01
The concept of personalized nutrition and exercise prescription represents a topical and exciting progression for the discipline, given the large inter-individual variability that exists in response to virtually all performance and health related interventions. Appropriate interpretation of intervention-based data from an individual or group of individuals requires practitioners and researchers to consider a range of concepts including the confounding influence of measurement error and biological variability. In addition, the means to quantify likely statistical and practical improvements are facilitated by concepts such as confidence intervals (CIs) and the smallest worthwhile change (SWC). The purpose of this review is to provide accessible and applicable recommendations for practitioners and researchers who interpret and report personalized data. To achieve this, the review is structured in three sections that progressively develop a statistical framework. Section 1 explores fundamental concepts related to measurement error and describes how typical error and CIs can be used to express uncertainty in baseline measurements. Section 2 builds upon these concepts and demonstrates how CIs can be combined with the concept of the SWC to assess whether meaningful improvements occur post-intervention. Finally, section 3 introduces the concept of biological variability and discusses the subsequent challenges in identifying individual response and non-response to an intervention. Worked numerical examples and interactive Supplementary Material are incorporated to solidify the concepts and assist with implementation in practice.
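A worked sketch of the section-2 logic (Python; all numbers hypothetical): widen the observed change by the typical error and compare the resulting confidence interval against the SWC:

```python
# Compare the CI of an observed pre-to-post change with the smallest
# worthwhile change; SE of a difference of two noisy tests = sqrt(2) * TE.
import numpy as np
from scipy import stats

typical_error = 0.8      # measurement error of the test, raw units
observed_change = 1.5    # post - pre for one athlete
swc = 1.0                # smallest worthwhile change

se_change = np.sqrt(2) * typical_error
ci90 = observed_change + np.array([-1, 1]) * stats.norm.ppf(0.95) * se_change
p_beneficial = 1 - stats.norm.cdf(swc, loc=observed_change, scale=se_change)

print(f"90% CI: ({ci90[0]:.2f}, {ci90[1]:.2f}); P(change > SWC) = {p_beneficial:.2f}")
```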
FIDEA: a server for the functional interpretation of differential expression analysis.
D'Andrea, Daniel
2013-06-10
The results of differential expression analyses provide scientists with hundreds to thousands of differentially expressed genes that need to be interpreted in light of the biology of the specific system under study. This requires mapping the genes to functional classifications that can be, for example, the KEGG pathways or InterPro families they belong to, their GO Molecular Function, Biological Process or Cellular Component. A statistically significant overrepresentation of one or more category terms in the set of differentially expressed genes is an essential step for the interpretation of the biological significance of the results. Ideally, the analysis should be performed by scientists who are well acquainted with the biological problem, as they have a wealth of knowledge about the system and can, more easily than a bioinformatician, discover less obvious and, therefore, more interesting relationships. To allow experimentalists to explore their data in an easy and at the same time exhaustive fashion within a single tool, and to test their hypotheses quickly and effortlessly, we developed FIDEA. The FIDEA server is located at http://www.biocomputing.it/fidea; it is free and open to all users, and there is no login requirement.
Directory of Open Access Journals (Sweden)
Brayan Alexander Fonseca Martinez
2017-11-01
One of the most commonly employed observational study designs in veterinary science is the cross-sectional study with binary outcomes. To measure an association with exposure, prevalence ratios (PR) or odds ratios (OR) can be used. In human epidemiology, much has been discussed about the use of the OR exclusively for case-control studies, and some authors have reported that there is no good justification for fitting logistic regression when the prevalence of the disease is high, in which case the OR overestimates the PR. Nonetheless, interpretation of the OR is difficult, since confusing risk with odds can lead to incorrect quantitative interpretation of data such as "the risk is X times greater," commonly reported in studies that use the OR. The aims of this study were (1) to review articles with cross-sectional designs to assess the statistical method used and the appropriateness of the interpretation of the estimated measure of association and (2) to illustrate the use of alternative statistical methods that estimate the PR directly. An overview of statistical methods and their interpretation using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted and included a diverse set of peer-reviewed journals in the veterinary science field, using PubMed as the search engine. From each article, the statistical method used and the appropriateness of the interpretation of the estimated measure of association were registered. Additionally, four alternative models to logistic regression that estimate the PR directly were tested using our own dataset from a cross-sectional study on bovine viral diarrhea virus. The initial search strategy found 62 articles, of which 6 were excluded; 56 studies were therefore used for the overall analysis. The review showed that, independent of the level of prevalence reported, 96% of the articles employed logistic regression, thus estimating the OR. Results of the multivariate models
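An illustrative sketch of one alternative of the kind the study tests: Poisson regression with robust standard errors, whose exponentiated coefficient estimates the PR directly (Python with statsmodels; the data are synthetic):

```python
# Poisson GLM with a sandwich (HC0) covariance on binary cross-sectional
# data: exp(coefficient) estimates the prevalence ratio directly.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 1000
exposed = rng.integers(0, 2, n)
prevalence = np.where(exposed == 1, 0.45, 0.30)   # common outcome, so OR != PR
y = (rng.random(n) < prevalence).astype(int)

X = sm.add_constant(exposed)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(f"PR = {np.exp(fit.params[1]):.2f}")        # expect about 0.45/0.30 = 1.5
```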
The Statistical Interpretation of Entropy: An Activity
Timberlake, Todd
2010-01-01
The second law of thermodynamics, which states that the entropy of an isolated macroscopic system can increase but will not decrease, is a cornerstone of modern physics. Ludwig Boltzmann argued that the second law arises from the motion of the atoms that compose the system. Boltzmann's statistical mechanics provides deep insight into the…
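The activity rests on Boltzmann's relation; as a minimal worked example in standard notation (mine, not drawn from the article itself): for a macrostate realized by W microstates,

```latex
S = k_B \ln W, \qquad W = \binom{N}{n} \;\;\Rightarrow\;\; W = \binom{4}{2} = 6, \quad S = k_B \ln 6 \approx 1.79\, k_B
```

for, say, N = 4 coins with n = 2 showing heads; the macrostate with the largest W overwhelmingly dominates as N grows, which is the statistical reason the entropy of an isolated system tends to increase.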
Gerrits, Reinie G; Kringos, Dionne S; van den Berg, Michael J; Klazinga, Niek S
2018-03-07
Policy-makers, managers, scientists, patients and the general public are confronted daily with figures on health and healthcare through public reporting in newspapers, webpages and press releases. However, information on the key characteristics of these figures necessary for their correct interpretation is often not adequately communicated, which can lead to misinterpretation and misinformed decision-making. The objective of this research was to map the key characteristics relevant to the interpretation of figures on health and healthcare, and to develop a Figure Interpretation Assessment Tool-Health (FIAT-Health) through which figures on health and healthcare can be systematically assessed, allowing for a better interpretation of these figures. The abovementioned key characteristics of figures on health and healthcare were identified through systematic expert consultations in the Netherlands on four topic categories of figures, namely morbidity, healthcare expenditure, healthcare outcomes and lifestyle. The identified characteristics were used as a frame for the development of the FIAT-Health. Development of the tool and its content was supported and validated through regular review by a sounding board of potential users. Identified characteristics relevant for the interpretation of figures in the four categories relate to the figures' origin, credibility, expression, subject matter, population and geographical focus, time period, and underlying data collection methods. The characteristics were translated into a set of 13 dichotomous and 4-point Likert scale questions constituting the FIAT-Health, and two final assessment statements. Users of the FIAT-Health were provided with a summary overview of their answers to support a final assessment of the correctness of a figure and the appropriateness of its reporting. FIAT-Health can support policy-makers, managers, scientists, patients and the general public to systematically assess the quality of publicly reported figures on health and healthcare.
Handbook of univariate and multivariate data analysis and interpretation with SPSS
Ho, Robert
2006-01-01
Many statistics texts tend to focus more on the theory and mathematics underlying statistical tests than on their applications and interpretation. This can leave readers with little understanding of how to apply statistical tests or how to interpret their findings. While the SPSS statistical software has done much to alleviate the frustrations of social science professionals and students who must analyze data, they still face daunting challenges in selecting the proper tests, executing the tests, and interpreting the test results.With emphasis firmly on such practical matters, this handbook se
The use of easily debondable orthodontic adhesives with ceramic brackets.
Ryu, Chiyako; Namura, Yasuhiro; Tsuruoka, Takashi; Hama, Tomohiko; Kaji, Kaori; Shimizu, Noriyoshi
2011-01-01
We experimentally produced an easily debondable orthodontic adhesive (EDA) containing heat-expandable microcapsules. The purpose of this in vitro study was to evaluate the best debonding condition when EDA was used for ceramic brackets. Shear bond strengths were measured before and after heating and were compared statistically. Temperatures of the bracket base and pulp wall were also examined during heating. Bond strengths of EDA containing 30 wt% and 40 wt% heat-expandable microcapsules were 13.4 and 12.9 MPa, respectively, and decreased significantly to 3.8 and 3.7 MPa, respectively, after heating. The temperature of the pulp wall increased by 1.8-3.6°C during heating, less than the rise required to induce pulp damage. Based on the results, we conclude that heating for 8 s during debonding of ceramic brackets bonded using EDA containing 40 wt% heat-expandable microcapsules is the most effective and safest method for the enamel and pulp.
Statistical analysis with Excel for dummies
Schmuller, Joseph
2013-01-01
Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything from…
Pattern recognition in menstrual bleeding diaries by statistical cluster analysis
Directory of Open Access Journals (Sweden)
Wessel Jens
2009-07-01
Background: The aim of this paper is to empirically identify a treatment-independent statistical method to describe clinically relevant bleeding patterns by using bleeding diaries of clinical studies on various sex-hormone-containing drugs. Methods: We used four cluster analysis methods (single, average and complete linkage, as well as Ward's method) for pattern recognition in menstrual bleeding diaries. The optimal number of clusters was determined using the semi-partial R2, the cubic clustering criterion, and the pseudo-F and pseudo-t2 statistics. Finally, the interpretability of the results from a gynecological point of view was assessed. Results: Ward's method yielded distinct clusters of the bleeding diaries. The other methods successively chained the observations into one cluster. The optimal number of distinctive bleeding patterns was six. We found two desirable and four undesirable bleeding patterns. Cyclic and non-cyclic bleeding patterns were well separated. Conclusion: Using this cluster analysis with Ward's method, medications and devices having an impact on bleeding can be easily compared and categorized.
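A minimal sketch of the clustering step, using SciPy's Ward linkage on synthetic diaries rather than the study's data:

```python
# Ward hierarchical clustering on daily bleeding scores
# (synthetic diaries, not the study's data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 40 "diaries" x 28 days: half cyclic bleeding, half irregular spotting
cyclic = (np.tile(np.r_[np.ones(5), np.zeros(23)], (20, 1))
          + rng.normal(0, 0.1, (20, 28)))
irregular = rng.binomial(1, 0.15, (20, 28)).astype(float)
diaries = np.vstack([cyclic, irregular])

Z = linkage(diaries, method="ward")      # Ward's minimum-variance method
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # cyclic and non-cyclic diaries fall into separate clusters
```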
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; Maceachren, Alan M
2008-11-07
Kulldorff's spatial scan statistic and its software implementation, SaTScan, are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is optimal for identifying clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of
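To make the scan statistic and its scaling parameter concrete, here is a hedged Python sketch of Kulldorff's Poisson log-likelihood ratio evaluated over nested candidate zones; all counts, populations and window caps are invented for illustration:

```python
# Kulldorff (1997) Poisson scan statistic over nested candidate zones,
# repeated for several maximum-window settings to show parameter sensitivity.
import math

def poisson_llr(c, e, C):
    # Log-likelihood ratio for a high-rate zone: c observed and e expected
    # cases inside the zone, out of C total cases.
    if c <= e:
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))

cases = [30, 22, 18, 9, 7, 5, 4, 3]     # counties sorted by distance from a seed
pop   = [10_000, 9_000, 8_000, 7_000, 7_000, 6_000, 6_000, 6_000]
C, P = sum(cases), sum(pop)

for max_frac in (0.2, 0.35, 0.5):       # maximum window size (population share)
    best_llr, best_size, c, cum_pop = 0.0, 0, 0, 0
    for size, (ci, pi) in enumerate(zip(cases, pop), start=1):
        c += ci
        cum_pop += pi
        if cum_pop / P > max_frac:      # candidate zone exceeds the window cap
            break
        llr = poisson_llr(c, C * cum_pop / P, C)
        if llr > best_llr:
            best_llr, best_size = llr, size
    print(f"max window {max_frac:.0%}: best LLR {best_llr:.2f} "
          f"over {best_size} counties")
# Each window cap flags a different "best" cluster, which is why varying
# the scaling parameter across multiple scans is advisable.
```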
Energy Technology Data Exchange (ETDEWEB)
Munoz, Gerard; Bauer, Klaus; Moeck, Inga; Schulze, Albrecht; Ritter, Oliver [Deutsches GeoForschungsZentrum (GFZ), Telegrafenberg, 14473 Potsdam (Germany)
2010-03-15
Exploration for geothermal resources is often challenging because there are no geophysical techniques that provide direct images of the parameters of interest, such as porosity, permeability and fluid content. Magnetotelluric (MT) and seismic tomography methods yield information about subsurface distribution of resistivity and seismic velocity on similar scales and resolution. The lack of a fundamental law linking the two parameters, however, has limited joint interpretation to a qualitative analysis. By using a statistical approach in which the resistivity and velocity models are investigated in the joint parameter space, we are able to identify regions of high correlation and map these classes (or structures) back onto the spatial domain. This technique, applied to a seismic tomography-MT profile in the area of the Gross Schoenebeck geothermal site, allows us to identify a number of classes in accordance with the local geology. In particular, a high-velocity, low-resistivity class is interpreted as related to areas with thinner layers of evaporites; regions where these sedimentary layers are highly fractured may be of higher permeability. (author)
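The joint-parameter-space classification can be sketched as follows; the two-class k-means example uses synthetic (log-resistivity, velocity) values and illustrates the idea only, not the authors' exact procedure (the `seed` keyword assumes SciPy 1.7 or later):

```python
# Cluster co-located (resistivity, velocity) model cells in the joint
# parameter space, then map class labels back to the spatial domain.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
n = 500
# two synthetic "lithologies" in the joint parameter space
a = np.c_[rng.normal(2.5, 0.2, n), rng.normal(4.2, 0.2, n)]  # log10(ohm.m), km/s
b = np.c_[rng.normal(1.0, 0.2, n), rng.normal(5.0, 0.2, n)]  # low-res, high-v
cells = np.vstack([a, b])

centroids, labels = kmeans2(cells, 2, minit="++", seed=0)
print(centroids)   # class centers in the joint parameter space
# `labels` assigns one class per model cell; plotting it at the cell
# coordinates maps the classes back onto the profile.
```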
Rumsey, Deborah
2011-01-01
The fun and easy way to get down to business with statistics. Stymied by statistics? No fear: this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first-semester statistics course.
Making Statistical Data More Easily Accessible on the Web Results of the StatSearch Case Study
Rajman, M; Boynton, I M; Fridlund, B; Fyhrlund, A; Sundgren, B; Lundquist, P; Thelander, H; Wänerskär, M
2005-01-01
In this paper we present the results of the StatSearch case study, which aimed at providing enhanced access to statistical data available on the Web. In the scope of this case study we developed a prototype of an information access tool combining a query-based search engine with semi-automated navigation techniques exploiting the hierarchical structure of the available data. This tool enables better control of information retrieval, improving the quality and ease of access to statistical information. The central part of the StatSearch tool is an algorithm for automated navigation through a tree-like hierarchical document structure. The algorithm relies on the computation of query-related relevance score distributions over the available database to identify the most relevant clusters in the data structure. These most relevant clusters are then proposed to the user for navigation or, alternatively, serve as the support for the automated navigation process. Several appro...
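A toy Python sketch of the navigation principle (categories and relevance scores invented, far simpler than the described algorithm) aggregates leaf scores and proposes the best-scoring subtree:

```python
# Aggregate query-relevance scores up a category tree and propose the
# highest-scoring subtree for navigation. Structure and scores are invented.
tree = {
    "population": {"births": 0.9, "deaths": 0.1},
    "economy": {"prices": 0.2, "trade": 0.3},
}

def cluster_scores(tree):
    # mean leaf relevance per subtree; a real system could use richer pooling
    return {name: sum(leaves.values()) / len(leaves)
            for name, leaves in tree.items()}

scores = cluster_scores(tree)
best = max(scores, key=scores.get)
print(scores, "-> navigate into:", best)   # population (0.5) beats economy
```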
Karuppiah, R.; Faldi, A.; Laurenzi, I.; Usadi, A.; Venkatesh, A.
2014-12-01
An increasing number of studies are focused on assessing the environmental footprint of different products and processes, especially using life cycle assessment (LCA). This work shows how combining statistical methods and Geographic Information Systems (GIS) with environmental analyses can help improve the quality of results and their interpretation. Most environmental assessments in literature yield single numbers that characterize the environmental impact of a process/product - typically global or country averages, often unchanging in time. In this work, we show how statistical analysis and GIS can help address these limitations. For example, we demonstrate a method to separately quantify uncertainty and variability in the result of LCA models using a power generation case study. This is important for rigorous comparisons between the impacts of different processes. Another challenge is lack of data that can affect the rigor of LCAs. We have developed an approach to estimate environmental impacts of incompletely characterized processes using predictive statistical models. This method is applied to estimate unreported coal power plant emissions in several world regions. There is also a general lack of spatio-temporal characterization of the results in environmental analyses. For instance, studies that focus on water usage do not put in context where and when water is withdrawn. Through the use of hydrological modeling combined with GIS, we quantify water stress on a regional and seasonal basis to understand water supply and demand risks for multiple users. Another example where it is important to consider regional dependency of impacts is when characterizing how agricultural land occupation affects biodiversity in a region. We developed a data-driven methodology used in conjunction with GIS to determine if there is a statistically significant difference between the impacts of growing different crops on different species in various biomes of the world.
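One way to sketch the uncertainty-versus-variability separation in Python, with purely illustrative distributions, is a two-level Monte Carlo:

```python
# Separate variability (real differences across plants) from uncertainty
# (imprecise knowledge of each plant) in an LCA-style metric.
# All distributions are illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(2)
n_plants, n_draws = 50, 2000

# Variability: each plant has its own true emission factor (kg CO2/kWh)
true_ef = rng.lognormal(mean=np.log(0.9), sigma=0.25, size=n_plants)

# Uncertainty: each plant's factor is only known to ~10% (measurement error)
measured = true_ef[:, None] * rng.lognormal(0.0, 0.10, (n_plants, n_draws))

print("variability (sd of true plant means): %.3f" % true_ef.std())
print("uncertainty (mean within-plant sd):   %.3f" % measured.std(axis=1).mean())
# Reporting the two spreads separately supports more rigorous comparisons
# than a single pooled distribution.
```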
Fordyce, James A
2010-07-23
Phylogenetic hypotheses are increasingly being used to elucidate historical patterns of diversification rate-variation. Hypothesis testing is often conducted by comparing the observed vector of branching times to a null, pure-birth expectation. A popular method for inferring a decrease in speciation rate, which might suggest an early burst of diversification followed by a decrease in diversification rate, is the gamma statistic. Using simulations under varying conditions, I examine the sensitivity of gamma to the distribution of the most recent branching times. Using an exploratory data analysis tool for lineages-through-time plots, tree deviation, I identified trees with a significant gamma statistic that do not appear to have the characteristic early accumulation of lineages consistent with an early, rapid rate of cladogenesis. I further investigated the sensitivity of the gamma statistic to recent diversification by examining the consequences of failing to simulate the full time interval following the most recent cladogenic event. The power of gamma to detect rate decrease at varying times was assessed for simulated trees with an initial high rate of diversification followed by a relatively low rate. The gamma statistic is extraordinarily sensitive to recent diversification rates, and does not necessarily detect early bursts of diversification. This was true for trees of various sizes and completeness of taxon sampling. The gamma statistic had greater power to detect recent diversification rate decreases compared to early bursts of diversification. Caution should be exercised when interpreting the gamma statistic as an indication of early, rapid diversification.
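For readers who want the statistic itself, here is a hedged Python implementation of the Pybus-Harvey gamma computed from internode intervals; the two test vectors are invented to show the sign convention (negative for early bursts, positive for recent speed-ups):

```python
import math

def gamma_stat(g):
    """Pybus & Harvey (2000) gamma from internode intervals g, where g[0]
    is the time during which 2 lineages exist and g[-1] the time with n."""
    n = len(g) + 1                       # number of tips
    T_i, total = [], 0.0
    for k, gk in enumerate(g, start=2):  # k = 2..n lineages
        total += k * gk
        T_i.append(total)
    T = T_i[-1]
    mean_interior = sum(T_i[:-1]) / (n - 2)   # mean of T_2 .. T_{n-1}
    return (mean_interior - T / 2) / (T * math.sqrt(1.0 / (12 * (n - 2))))

# early burst: branching concentrated near the root => long late intervals
print(gamma_stat([0.1, 0.1, 0.1, 0.5, 0.8, 1.0]))   # about -2.5 (negative)
# recent speed-up: short late intervals
print(gamma_stat([1.0, 0.8, 0.5, 0.1, 0.1, 0.1]))   # about +1.3 (positive)
```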
Effect size, confidence intervals and statistical power in psychological research.
Directory of Open Access Journals (Sweden)
Téllez A.
2015-07-01
Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that is used by researchers to evaluate hypotheses and make decisions to accept or reject such hypotheses. In this paper, the various statistical tools in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST) and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment that the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, we must account for the teaching of statistics at the graduate level. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.
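A minimal Python sketch of the recommended reporting, Cohen's d with an approximate confidence interval using the common normal-approximation standard error (all numbers invented):

```python
# Cohen's d and an approximate 95% CI for a two-group design.
import math

def cohens_d_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    sp = math.sqrt(((n1 - 1)*s1**2 + (n2 - 1)*s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # common large-sample approximation to the standard error of d
    se = math.sqrt((n1 + n2)/(n1*n2) + d**2 / (2*(n1 + n2)))
    return d, (d - z*se, d + z*se)

d, ci = cohens_d_ci(m1=10.2, s1=2.0, n1=40, m2=9.0, s2=2.1, n2=40)
print(f"d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# A CI spanning trivial values (e.g., |d| < 0.2) flags an effect that may
# be statistically significant yet clinically unimportant.
```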
Application of Statistical Tools for Data Analysis and Interpretation in Rice Plant Pathology
Directory of Open Access Journals (Sweden)
Parsuram Nayak
2018-01-01
There has been a significant advancement in the application of statistical tools in plant pathology during the past four decades. These tools include multivariate analysis of disease dynamics involving principal component analysis, cluster analysis, factor analysis, pattern analysis, discriminant analysis, multivariate analysis of variance, correspondence analysis, canonical correlation analysis, redundancy analysis, genetic diversity analysis, and stability analysis, which involve joint regression, additive main effects and multiplicative interactions, and genotype-by-environment interaction biplot analysis. The advanced statistical tools, such as non-parametric analysis of disease association, meta-analysis, Bayesian analysis, and decision theory, take an important place in the analysis of disease dynamics. Disease forecasting by simulation models for plant diseases has great potential in practical disease control strategies. Common mathematical tools such as monomolecular, exponential, logistic, Gompertz and linked differential equations take an important place in growth curve analysis of disease epidemics. The highly informative means of displaying a range of numerical data through construction of box-and-whisker plots has been suggested. The probable applications of recent advanced tools of linear and non-linear mixed models, such as the linear mixed model, generalized linear model, and generalized linear mixed models, have been presented. The most recent technologies, such as microarray analysis, though cost effective, provide estimates of gene expression for thousands of genes simultaneously and need attention by molecular biologists. Some of these advanced tools can be well applied in different branches of rice research, including crop improvement, crop production, crop protection, social sciences as well as agricultural engineering. The rice research scientists should take advantage of these new opportunities adequately in
Statistical inference a short course
Panik, Michael J
2012-01-01
A concise, easily accessible introduction to descriptive and inferential techniques. Statistical Inference: A Short Course offers a concise presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author conducts tests on the assumption of randomness and normality, and provides nonparametric methods for when parametric approaches might not work. The book also explores how to determine a confidence interval for a population median while also providing coverage of ratio estimation, randomness, and causal
Topics in statistical data analysis for high-energy physics
International Nuclear Information System (INIS)
Cowan, G.
2011-01-01
These lectures concern two topics that are becoming increasingly important in the analysis of high-energy physics data: Bayesian statistics and multivariate methods. In the Bayesian approach, we extend the interpretation of probability not only to cover the frequency of repeatable outcomes but also to include a degree of belief. In this way we are able to associate probability with a hypothesis and thus to answer directly questions that cannot be addressed easily with traditional frequentist methods. In multivariate analysis, we try to exploit as much information as possible from the characteristics that we measure for each event to distinguish between event types. In particular we will look at a method that has gained popularity in high-energy physics in recent years: the boosted decision tree. Finally, we give a brief sketch of how multivariate methods may be applied in a search for a new signal process. (author)
Applied statistics in ecology: common pitfalls and simple solutions
E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick
2013-01-01
The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...
Variation in reaction norms: Statistical considerations and biological interpretation.
Morrissey, Michael B; Liefting, Maartje
2016-09-01
Analysis of reaction norms, the functions by which the phenotype produced by a given genotype depends on the environment, is critical to studying many aspects of phenotypic evolution. Different techniques are available for quantifying different aspects of reaction norm variation. We examine what biological inferences can be drawn from some of the more readily applicable analyses for studying reaction norms. We adopt a strongly biologically motivated view, but draw on statistical theory to highlight strengths and drawbacks of different techniques. In particular, consideration of some formal statistical theory leads to revision of some recently, and forcefully, advocated opinions on reaction norm analysis. We clarify what simple analysis of the slope between mean phenotype in two environments can tell us about reaction norms, explore the conditions under which polynomial regression can provide robust inferences about reaction norm shape, and explore how different existing approaches may be used to draw inferences about variation in reaction norm shape. We show how mixed model-based approaches can provide more robust inferences than more commonly used multistep statistical approaches, and derive new metrics of the relative importance of variation in reaction norm intercepts, slopes, and curvatures. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
Does environmental data collection need statistics?
Pulles, M.P.J.
1998-01-01
The term 'statistics' with reference to environmental science and policymaking might mean different things: the development of statistical methodology, the methodology developed by statisticians to interpret and analyse such data, or the statistical data that are needed to understand environmental
Statistical Reform in School Psychology Research: A Synthesis
Swaminathan, Hariharan; Rogers, H. Jane
2007-01-01
Statistical reform in school psychology research is discussed in terms of research designs, measurement issues, statistical modeling and analysis procedures, interpretation and reporting of statistical results, and finally statistics education.
Directory of Open Access Journals (Sweden)
Elżbieta Biernat
2014-12-01
Background: The aim of this paper is to assess whether basic descriptive statistics are sufficient to interpret data on the physical activity of Poles within the occupational domain of life. Material and Methods: The study group consisted of 964 randomly selected Polish working professionals. The long version of the International Physical Activity Questionnaire (IPAQ) was used. Descriptive statistics included characteristics of variables using: mean (M), median (Me), maximal and minimal values (max-min), standard deviation (SD) and percentile values. Statistical inference was based on the comparison of variables at the significance level of 0.05 (Kruskal-Wallis and Pearson's Chi2 tests). Results: Occupational physical activity (OPA) was declared by 46.4% of respondents (vigorous: 23.5%, moderate: 30.2%, walking: 39.5%). The total OPA amounted to 2751.1 MET-min/week (MET: Metabolic Equivalent of Task), with a very high standard deviation (SD = 5302.8) and a maximum of 35,511 MET-min/week. It concerned different types of activities. Approximately 10% of respondents (above the 90th percentile) overstated the average. However, there was no significant difference depending on the character of the profession or the type of activity. The average time of sitting was 256 min/day. As many as 39% of the respondents met the World Health Organization standards through OPA alone (42.5% of white-collar workers, 38% of administrative and technical employees and only 37.9% of physical workers). Conclusions: In the data analysis it is necessary to define quantiles to provide a fuller picture of the distributions of OPA in MET-min/week. It is also crucial to update the guidelines for data processing and analysis of the long version of the IPAQ. It seems that 16 h of activity/day is not a sufficient criterion for excluding results from further analysis. Med Pr 2014;65(6):743-753
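The case for quantiles over the mean in skewed OPA data can be sketched with synthetic lognormal values (parameters chosen only to mimic the reported skew, not fitted to the study):

```python
# Why quantiles beat the mean for heavily skewed MET-min/week data.
import numpy as np

rng = np.random.default_rng(3)
opa = rng.lognormal(mean=6.5, sigma=1.5, size=964)   # synthetic MET-min/week

print("mean  :", round(float(opa.mean())))
print("SD    :", round(float(opa.std())))
print("median:", round(float(np.median(opa))))
print("90th percentile:", round(float(np.percentile(opa, 90))))
# An SD far above the mean signals skew: a small top decile drives the
# average, so percentile-based reporting gives the fuller picture.
```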
Symmetry, Invariance and Ontology in Physics and Statistics
Directory of Open Access Journals (Sweden)
Julio Michael Stern
2011-09-01
This paper has three main objectives: (a) Discuss the formal analogy between some important symmetry-invariance arguments used in physics, probability and statistics. Specifically, we will focus on Noether's theorem in physics, the maximum entropy principle in probability theory, and de Finetti-type theorems in Bayesian statistics; (b) Discuss the epistemological and ontological implications of these theorems, as they are interpreted in physics and statistics. Specifically, we will focus on the positivist (in physics) or subjective (in statistics) interpretations vs. objective interpretations that are suggested by symmetry and invariance arguments; (c) Introduce the cognitive constructivism epistemological framework as a solution that overcomes the realism-subjectivism dilemma and its pitfalls. The work of the physicist and philosopher Max Born will be particularly important in our discussion.
Statistics & probability for dummies
Rumsey, Deborah J
2013-01-01
Two complete eBooks for one low price! Created and compiled by the publisher, this Statistics I and Statistics II bundle brings together two math titles in one e-only bundle. With this special bundle, you'll get the complete text of the following two titles: Statistics For Dummies, 2nd Edition. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first-semester statistics course.
Kanji, Gopal K
2006-01-01
This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.
Energy Technology Data Exchange (ETDEWEB)
Wjihi, Sarra [Unité de Recherche de Physique Quantique, 11 ES 54, Faculté des Science de Monastir (Tunisia); Dhaou, Houcine [Laboratoire des Etudes des Systèmes Thermiques et Energétiques (LESTE), ENIM, Route de Kairouan, 5019 Monastir (Tunisia); Yahia, Manel Ben; Knani, Salah [Unité de Recherche de Physique Quantique, 11 ES 54, Faculté des Science de Monastir (Tunisia); Jemni, Abdelmajid [Laboratoire des Etudes des Systèmes Thermiques et Energétiques (LESTE), ENIM, Route de Kairouan, 5019 Monastir (Tunisia); Lamine, Abdelmottaleb Ben, E-mail: abdelmottaleb.benlamine@gmail.com [Unité de Recherche de Physique Quantique, 11 ES 54, Faculté des Science de Monastir (Tunisia)
2015-12-15
Statistical physics treatment is used to study the desorption of hydrogen on LaNi{sub 4.75}Fe{sub 0.25}, in order to obtain new physicochemical interpretations at the molecular level. Experimental desorption isotherms of hydrogen on LaNi{sub 4.75}Fe{sub 0.25} are fitted at three temperatures (293 K, 303 K and 313 K), using a monolayer desorption model. Six parameters of the model are fitted, namely the number of molecules per site n{sub α} and n{sub β}, the receptor site densities N{sub αM} and N{sub βM}, and the energetic parameters P{sub α} and P{sub β}. The behaviors of these parameters are discussed in relationship with desorption process. A dynamic study of the α and β phases in the desorption process was then carried out. Finally, the different thermodynamical potential functions are derived by statistical physics calculations from our adopted model.
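As a hedged illustration, the following Python sketch fits a two-site monolayer expression of the general kind described (parameters n_a, n_b, N_aM, N_bM, P_a, P_b) to synthetic isotherm data; the exact functional form and all values are assumptions for illustration, not the paper's model:

```python
# Fit a generic two-site monolayer isotherm to synthetic desorption data.
import numpy as np
from scipy.optimize import curve_fit

def two_site(P, na, nb, Nam, Nbm, Pa, Pb):
    # assumed functional form: two independent site families
    return na*Nam / (1 + (Pa/P)**na) + nb*Nbm / (1 + (Pb/P)**nb)

P = np.linspace(0.05, 10, 40)                    # pressure (arbitrary units)
data = two_site(P, 1.2, 2.0, 1.0, 2.5, 0.5, 3.0)
data += np.random.default_rng(5).normal(0, 0.02, P.size)

p0 = [1.0, 1.5, 1.0, 2.0, 0.4, 2.5]              # initial guesses
popt, _ = curve_fit(two_site, P, data, p0=p0, bounds=(0, np.inf))
print(np.round(popt, 2))   # recovered (na, nb, Nam, Nbm, Pa, Pb)
```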
Applied statistics for social and management sciences
Miah, Abdul Quader
2016-01-01
This book addresses the application of statistical techniques and methods across a wide range of disciplines. While its main focus is on the application of statistical methods, theoretical aspects are also provided as fundamental background information. It offers a systematic interpretation of results often discovered in general descriptions of methods and techniques such as linear and non-linear regression. SPSS is also used in all the application aspects. The presentation of data in the form of tables and graphs throughout the book not only guides users, but also explains the statistical application and assists readers in interpreting important features. The analysis of statistical data is presented consistently throughout the text. Academic researchers, practitioners and other users who work with statistical data will benefit from reading Applied Statistics for Social and Management Sciences.
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
Interpretation of Confidence Interval Facing the Conflict
Andrade, Luisa; Fernández, Felipe
2016-01-01
As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…
Directory of Open Access Journals (Sweden)
Dominic Beaulieu-Prévost
2006-03-01
For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CIs) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.
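The proposed three-value logic is easy to operationalize; here is a minimal Python sketch, assuming a symmetric minimal substantial effect delta on the same scale as the CI:

```python
# Three-value logic: compare a 95% CI to a minimal substantial effect delta.
def interpret_ci(lo, hi, delta):
    if lo >= delta or hi <= -delta:
        return "probable presence of a substantial effect"
    if -delta < lo and hi < delta:
        return "probable absence of a substantial effect"
    return "probabilistic undetermination"

print(interpret_ci(0.35, 0.80, delta=0.20))   # presence
print(interpret_ci(-0.10, 0.15, delta=0.20))  # absence
print(interpret_ci(-0.05, 0.60, delta=0.20))  # undetermined
```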
The insignificance of statistical significance testing
Johnson, Douglas H.
1999-01-01
Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.
Theoretical, analytical, and statistical interpretation of environmental data
International Nuclear Information System (INIS)
Lombard, S.M.
1974-01-01
The reliability of data from radiochemical analyses of environmental samples cannot be determined from nuclear counting statistics alone. The rigorous application of the principles of propagation of errors, an understanding of the physics and chemistry of the species of interest in the environment, and the application of information from research on the analytical procedure are all necessary for a valid estimation of the errors associated with analytical results. The specific case of the determination of plutonium in soil is considered in terms of analytical problems and data reliability. (U.S.)
Statistical Power in Longitudinal Network Studies
Stadtfeld, Christoph; Snijders, Tom A. B.; Steglich, Christian; van Duijn, Marijtje
2018-01-01
Longitudinal social network studies may easily suffer from a lack of statistical power. This is the case in particular for studies that simultaneously investigate change of network ties and change of nodal attributes. Such selection and influence studies have become increasingly popular due to the
Conformity and statistical tolerancing
Leblond, Laurent; Pillet, Maurice
2018-02-01
Statistical tolerancing was first proposed by Shewhart (Economic Control of Quality of Manufactured Product, 1931; reprinted 1980 by ASQC). In spite of this long history, its use remains moderate. One of the probable reasons for this low utilization is undoubtedly the difficulty for designers to anticipate the risks of this approach. The arithmetic tolerance (worst case) allows a simple interpretation: conformity is defined by the presence of the characteristic in an interval. Statistical tolerancing is more complex in its definition: an interval is not sufficient to define conformance. To justify the statistical tolerancing formula used by designers, a tolerance interval should be interpreted as the interval where most of the parts produced should probably be located. This tolerance is justified by considering a conformity criterion for the parts that guarantees low offsets on the latter characteristics. Unlike traditional arithmetic tolerancing, statistical tolerancing requires a sustained exchange of information between design and manufacture to be used safely. This paper proposes a formal definition of conformity, which we apply successively to quadratic and arithmetic tolerancing. We introduce a concept of concavity, which helps us to demonstrate the link between the tolerancing approach and conformity. We use this concept to demonstrate the various acceptable propositions of statistical tolerancing (in the decentring-dispersion space).
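The contrast between the two tolerancing rules can be shown in a few lines of Python (component tolerances invented):

```python
# Arithmetic (worst-case) vs quadratic (statistical, root-sum-square)
# stacking of symmetric component tolerances on an assembly.
import math

tols = [0.10, 0.05, 0.08, 0.12]   # component tolerances, mm

worst_case = sum(tols)                            # arithmetic stack
rss = math.sqrt(sum(t**2 for t in tols))          # quadratic/RSS stack

print(f"arithmetic stack:  +/-{worst_case:.3f} mm")   # +/-0.350 mm
print(f"statistical stack: +/-{rss:.3f} mm")          # +/-0.183 mm
# The statistical stack is tighter because it assumes independent, centered
# deviations; that assumption is exactly why conformity needs the
# design/manufacturing information exchange discussed above.
```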
Applied statistics for economists
Lewis, Margaret
2012-01-01
This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.
Statistical concepts a second course
Lomax, Richard G
2012-01-01
Statistical Concepts consists of the last 9 chapters of An Introduction to Statistical Concepts, 3rd ed. Designed for the second course in statistics, it is one of the few texts that focuses just on intermediate statistics. The book highlights how statistics work and what they mean to better prepare students to analyze their own data and interpret SPSS and research results. As such it offers more coverage of non-parametric procedures used when standard assumptions are violated since these methods are more frequently encountered when working with real data. Determining appropriate sample sizes
SOCR: Statistics Online Computational Resource
Dinov, Ivo D.
2006-01-01
The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis...
Developments in statistical evaluation of clinical trials
Oud, Johan; Ghidey, Wendimagegn
2014-01-01
This book describes various ways of approaching and interpreting the data produced by clinical trial studies, with a special emphasis on the essential role that biostatistics plays in clinical trials. Over the past few decades the role of statistics in the evaluation and interpretation of clinical data has become of paramount importance. As a result the standards of clinical study design, conduct and interpretation have undergone substantial improvement. The book includes 18 carefully reviewed chapters on recent developments in clinical trials and their statistical evaluation, with each chapter providing one or more examples involving typical data sets, enabling readers to apply the proposed procedures. The chapters employ a uniform style to enhance comparability between the approaches.
Quantum physics and statistical physics. 5. ed.
International Nuclear Information System (INIS)
Alonso, Marcelo; Finn, Edward J.
2012-01-01
With a logical and uniform presentation, this recognized introduction to modern physics treats both the experimental and theoretical aspects. The first part of the book deals with quantum mechanics and its application to atoms, molecules, nuclei, solids, and elementary particles. Statistical physics, covering classical statistics, thermodynamics, and quantum statistics, is the theme of the second part. Alonso and Finn avoid complicated mathematical developments; with numerous sketches and diagrams as well as many problems and examples, they quickly and easily familiarize the reader with the concepts of modern physics.
Numeric computation and statistical data analysis on the Java platform
Chekanov, Sergei V
2016-01-01
Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...
Directory of Open Access Journals (Sweden)
Melissa Coulson
2010-07-01
A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.
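A hedged numerical sketch of why "one significant, one not" does not imply conflict (estimates and standard errors invented):

```python
# Two studies with similar effects, one "significant" and one not.
# Comparing the estimates directly avoids the significance-status fallacy.
import math

def z_ci(est, se, z=1.96):
    return est - z*se, est + z*se

e1, se1 = 0.48, 0.20    # study 1: z = 2.4, p < .05
e2, se2 = 0.35, 0.22    # study 2: z = 1.6, p > .05

print("CI 1:", z_ci(e1, se1))          # excludes 0
print("CI 2:", z_ci(e2, se2))          # includes 0
diff_se = math.sqrt(se1**2 + se2**2)
print("CI for the difference:", z_ci(e1 - e2, diff_se))  # comfortably spans 0
# The two results are mutually consistent despite differing significance.
```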
Permutation statistical methods an integrated approach
Berry, Kenneth J; Johnston, Janis E
2016-01-01
This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...
A statistical model for interpreting computerized dynamic posturography data
Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.
2002-01-01
Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
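The censoring structure can be illustrated with a small Python simulation; the latent distribution and fall model below are invented, not the paper's fitted model:

```python
# Mixed discrete-continuous ES: a latent score always exists, but a fall
# censors the observed equilibrium score to 0.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
latent = rng.normal(70, 12, n)                    # latent equilibrium score
p_fall = 1 / (1 + np.exp((latent - 40) / 5))      # falls likelier when low
fell = rng.random(n) < p_fall
observed = np.where(fell, 0.0, np.clip(latent, 0, 100))

print("fall rate:", fell.mean())
print("mean observed ES:", round(float(observed.mean()), 1))
# Standard ANOVA on `observed` ignores the censoring; the paper's
# quasi-likelihood approach models the latent score and the fall
# process jointly.
```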
Interpreting Statistical Findings A Guide For Health Professionals And Students
Walker, Jan
2010-01-01
This book is aimed at those studying and working in the field of health care, including nurses and the professions allied to medicine, who have little prior knowledge of statistics but for whom critical review of research is an essential skill.
Statistical methods in quality assurance
International Nuclear Information System (INIS)
Eckhard, W.
1980-01-01
During the different phases of a production process - planning, development and design, manufacturing, assembling, etc. - most decisions rest on a basis of statistics: the collection, analysis and interpretation of data. Statistical methods can be thought of as a kit of tools to help solve problems in the quality functions of the quality loop, with respect to producing quality products and reducing quality costs. Various statistical methods are presented, and typical examples of their practical application are demonstrated. (RW)
Optimal state discrimination using particle statistics
International Nuclear Information System (INIS)
Bose, S.; Ekert, A.; Omar, Y.; Paunkovic, N.; Vedral, V.
2003-01-01
We present an application of particle statistics to the problem of optimal ambiguous discrimination of quantum states. The states to be discriminated are encoded in the internal degrees of freedom of identical particles, and we use the bunching and antibunching of the external degrees of freedom to discriminate between various internal states. We show that we can achieve the optimal single-shot discrimination probability using only the effects of particle statistics. We discuss interesting applications of our method to detecting entanglement and purifying mixed states. Our scheme can easily be implemented with the current technology
Waller, Derek L
2008-01-01
Statistical analysis is essential to business decision-making and management, but the underlying theory of data collection, organization and analysis is one of the most challenging topics for business students and practitioners. This user-friendly text and CD-ROM package will help you to develop strong skills in presenting and interpreting statistical information in a business or management environment. Based entirely on using Microsoft Excel rather than more complicated applications, it includes a clear guide to using Excel with the key functions employed in the book, a glossary of terms and
Neave, Henry R
2012-01-01
This book, designed for students taking a basic introductory course in statistical analysis, is far more than just a book of tables. Each table is accompanied by a careful but concise explanation and useful worked examples. Requiring little mathematical background, Elementary Statistics Tables is thus not just a reference book but a positive and user-friendly teaching and learning aid. The new edition contains a new and comprehensive "teach-yourself" section on a simple but powerful approach, now well known in parts of industry but less so in academia, to analysing and interpreting process data.
Interpreting clinical trial results by deductive reasoning: In search of improved trial design.
Kurbel, Sven; Mihaljević, Slobodan
2017-10-01
Clinical trial results are often interpreted by inductive reasoning, in a trial design-limited manner, directed toward modifications of the current clinical practice. Deductive reasoning is an alternative in which results of relevant trials are combined in indisputable premises that lead to a conclusion easily testable in future trials. © 2017 WILEY Periodicals, Inc.
Szabolcsi, Zoltán; Farkas, Zsuzsa; Borbély, Andrea; Bárány, Gusztáv; Varga, Dániel; Heinrich, Attila; Völgyi, Antónia; Pamjav, Horolma
2015-11-01
When the DNA profile from a crime-scene matches that of a suspect, the weight of DNA evidence depends on the unbiased estimation of the match probability of the profiles. For this reason, it is required to establish and expand the databases that reflect the actual allele frequencies in the population applied. 21,473 complete DNA profiles from Databank samples were used to establish the allele frequency database to represent the population of Hungarian suspects. We used fifteen STR loci (PowerPlex ESI16) including five, new ESS loci. The aim was to calculate the statistical, forensic efficiency parameters for the Databank samples and compare the newly detected data to the earlier report. The population substructure caused by relatedness may influence the frequency of profiles estimated. As our Databank profiles were considered non-random samples, possible relationships between the suspects can be assumed. Therefore, population inbreeding effect was estimated using the FIS calculation. The overall inbreeding parameter was found to be 0.0106. Furthermore, we tested the impact of the two allele frequency datasets on 101 randomly chosen STR profiles, including full and partial profiles. The 95% confidence interval estimates for the profile frequencies (pM) resulted in a tighter range when we used the new dataset compared to the previously published ones. We found that the FIS had less effect on frequency values in the 21,473 samples than the application of minimum allele frequency. No genetic substructure was detected by STRUCTURE analysis. Due to the low level of inbreeding effect and the high number of samples, the new dataset provides unbiased and precise estimates of LR for statistical interpretation of forensic casework and allows us to use lower allele frequencies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
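A hedged Python sketch of a profile match-probability calculation with an inbreeding adjustment, using the study's overall F = 0.0106 but invented allele frequencies, and the standard inbreeding-adjusted genotype formulas (homozygote p^2 + p(1-p)F, heterozygote 2pq(1-F)):

```python
# Profile match probability (pM) across loci with inbreeding correction F.
def genotype_freq(p, q=None, F=0.0106):
    if q is None:                      # homozygote (p, p)
        return p*p + p*(1 - p)*F
    return 2*p*q*(1 - F)               # heterozygote (p, q)

# one invented genotype per locus: (p,) homozygote, (p, q) heterozygote
profile = [(0.12,), (0.08, 0.21), (0.15, 0.09), (0.11,)]

pm = 1.0
for alleles in profile:
    pm *= genotype_freq(*alleles)
print(f"profile match probability: {pm:.3e}")
# Multiplying across more loci (15 in the study) drives pM down rapidly;
# F raises homozygote and lowers heterozygote frequencies, making the
# homozygote contribution to pM conservative relative to F = 0.
```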
On the statistical interpretation of quantum mechanics: evolution of the density matrix
International Nuclear Information System (INIS)
Benzecri, J.P.
1986-01-01
Without attempting to identify ontological interpretation with a mathematical structure, we reduce philosophical speculation to five theses. In the discussion of these, a central role is devoted to the mathematical problem of the evolution of the density matrix. This article relates to the first 3 of these 5 theses [fr
Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D
2014-03-25
A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen, as there is no change in the prediction; the interpretation is produced directly from the model's behaviour for the specific query. Models have been built using multiple learning algorithms, including support vector machine and random forest. The models were built on public Ames mutagenicity data, and a variety of fingerprint descriptors were used. These models produced good performance in both internal and external validation, with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretation revealed links that correspond closely with understood mechanisms for Ames mutagenicity. This methodology allows for greater utilisation of the predictions made by black box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.
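As an illustrative sketch only (not the paper's algorithm), the masking idea behind fragment-based interpretation can be mimicked with a stand-in scoring model; the fragment names and weights are invented:

```python
# Probe a black-box classifier by removing each fragment's features and
# re-predicting, then report fragments whose removal flips the call.
def predict(features):                 # stand-in for an SVM/RF model
    weights = {"nitro": 0.9, "aromatic_amine": 0.7, "sulfonamide": -0.5}
    score = sum(weights.get(f, 0.0) for f in features)
    return "active" if score > 0.5 else "inactive"

query = {"nitro", "aromatic_amine", "sulfonamide", "ring"}
base = predict(query)

causes = {}
for frag in sorted(query):
    flipped = predict(query - {frag}) != base
    causes[frag] = "contributes to current call" if flipped else "neutral"

print(base, causes)
# An "active" overall call with fragments whose removal deactivates it
# mirrors the activation/deactivation summary described above.
```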
Structural interpretation of seismic data and inherent uncertainties
Bond, Clare
2013-04-01
Geoscience is perhaps unique in its reliance on incomplete datasets and building knowledge from their interpretation. This interpretation basis for the science is fundamental at all levels, from creation of a geological map to interpretation of remotely sensed data. To teach and understand better the uncertainties in dealing with incomplete data, we need to understand the strategies individual practitioners deploy that make them effective interpreters. The nature of interpretation is such that the interpreter needs to use their cognitive ability in the analysis of the data to propose a sensible solution in their final output that is consistent not only with the original data but also with other knowledge and understanding. In a series of experiments, Bond et al. (2007, 2008, 2011, 2012) investigated the strategies and pitfalls of expert and non-expert interpretation of seismic images. These studies used large numbers of participants to provide a statistically sound basis for analysis of the results. The outcome of these experiments showed that a wide variety of conceptual models were applied to single seismic datasets, highlighting not only spatial variations in fault placements but also whether interpreters thought faults existed at all, or assigned the same sense of movement. Further, statistical analysis suggests that the strategies an interpreter employs are more important than expert knowledge per se in developing successful interpretations: experts are successful because of their application of these techniques. In a new set of experiments, a small number of experts are studied in depth to determine how they use their cognitive and reasoning skills in the interpretation of 2D seismic profiles. Live video and practitioner commentary were used to track the evolving interpretation and to gain insight into their decision processes. The outputs of the study allow us to create an educational resource of expert interpretation through online video footage and commentary with
Laboratory test result interpretation for primary care doctors in South Africa
Directory of Open Access Journals (Sweden)
Naadira Vanker
2017-03-01
Background: Challenges and uncertainties with test result interpretation can lead to diagnostic errors. Primary care doctors are at a higher risk than specialists of making these errors, due to the range in complexity and severity of conditions that they encounter. Objectives: This study aimed to investigate the challenges that primary care doctors face with test result interpretation, and to identify potential countermeasures to address these. Methods: A survey was sent to 7800 primary care doctors in South Africa. Questionnaire themes included doctors' uncertainty with interpreting test results, mechanisms used to overcome this uncertainty, challenges with appropriate result interpretation, and perceived solutions for interpreting results. Results: Across the 552 responses received, challenges with result interpretation were estimated to arise in an average of 17% of diagnostic encounters. The most commonly reported challenges were not receiving test results in a timely manner (51% of respondents) and previous results not being easily available (37%). When faced with diagnostic uncertainty, 84% of respondents would either follow up and reassess the patient or discuss the case with a specialist, and 67% would contact a laboratory professional. The most useful test utilisation enablers were found to be interpretive comments (78% of respondents), published guidelines (74%), and a dedicated laboratory phone line (72%). Conclusion: Primary care doctors acknowledge uncertainty with test result interpretation. Potential countermeasures include the addition of patient-specific interpretive comments, the availability of guidelines or algorithms, and a dedicated laboratory phone line. Enhanced test result interpretation would help reduce diagnostic error rates.
Perception in statistical graphics
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
Guner, Huseyin; Close, Patrick L; Cai, Wenxuan; Zhang, Han; Peng, Ying; Gregorich, Zachery R; Ge, Ying
2014-03-01
The rapid advancements in mass spectrometry (MS) instrumentation, particularly in Fourier transform (FT) MS, have made the acquisition of high-resolution and high-accuracy mass measurements routine. However, the software tools for the interpretation of high-resolution MS data are underdeveloped. Although several algorithms for the automatic processing of high-resolution MS data are available, there is still an urgent need for a user-friendly interface with functions that allow users to visualize and validate the computational output. Therefore, we have developed MASH Suite, a user-friendly and versatile software interface for processing high-resolution MS data. MASH Suite contains a wide range of features that allow users to easily navigate through data analysis, visualize complex high-resolution MS data, and manually validate automatically processed results. Furthermore, it provides easy, fast, and reliable interpretation of top-down, middle-down, and bottom-up MS data. MASH Suite is convenient, easily operated, and freely available. It can greatly facilitate the comprehensive interpretation and validation of high-resolution MS data with high accuracy and reliability.
Design research in statistics education: on symbolizing and computer tools
Bakker, A.
2004-01-01
The present knowledge society requires statistical literacy, the ability to interpret, critically evaluate, and communicate about statistical information and messages (Gal, 2002). However, research shows that students generally do not gain satisfactory statistical understanding. The research
CERN. Geneva
2005-01-01
The three lectures will present an introduction to statistical methods as used in High Energy Physics. As the time will be very limited, the course will seek mainly to define the important issues and to introduce the most widely used tools. Topics will include the interpretation and use of probability, estimation of parameters and testing of hypotheses.
CERN. Geneva
2004-01-01
The three lectures will present an introduction to statistical methods as used in High Energy Physics. As the time will be very limited, the course will seek mainly to define the important issues and to introduce the most widely used tools. Topics will include the interpretation and use of probability, estimation of parameters and testing of hypotheses.
The Statistics of wood assays for preservative retention
Patricia K. Lebow; Scott W. Conklin
2011-01-01
This paper covers general statistical concepts that apply to interpreting wood assay retention values. In particular, since wood assays are typically obtained from a single composited sample, the statistical aspects, including advantages and disadvantages, of simple compositing are covered.
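The core trade-off of simple compositing can be shown in a few lines: a single composited assay estimates mean retention without bias, but it hides the core-to-core variability needed for a standard error. The values below are simulated illustrations, not real assay data.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 6.4, 1.2, 20          # hypothetical mean/SD retention, n cores
cores = rng.normal(mu, sigma, n)     # what individual assays would have shown

composite = cores.mean()             # the one number a composited sample gives
print(f"composite estimate of retention: {composite:.2f}")
print(f"between-core SD, lost by compositing: {cores.std(ddof=1):.2f}")
```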
The emergent Copenhagen interpretation of quantum mechanics
Hollowood, Timothy J.
2014-05-01
We introduce a new and conceptually simple interpretation of quantum mechanics based on reduced density matrices of sub-systems from which the standard Copenhagen interpretation emerges as an effective description of macroscopically large systems. This interpretation describes a world in which definite measurement results are obtained with probabilities that reproduce the Born rule. Wave function collapse is seen to be a useful but fundamentally unnecessary piece of prudent bookkeeping which is only valid for macro-systems. The new interpretation lies in a class of modal interpretations in that it applies to quantum systems that interact with a much larger environment. However, we show that it does not suffer from the problems that have plagued similar modal interpretations like macroscopic superpositions and rapid flipping between macroscopically distinct states. We describe how the interpretation fits neatly together with fully quantum formulations of statistical mechanics and that a measurement process can be viewed as a process of ergodicity breaking analogous to a phase transition. The key feature of the new interpretation is that joint probabilities for the ergodic subsets of states of disjoint macro-systems only arise as emergent quantities. Finally we give an account of the EPR-Bohm thought experiment and show that the interpretation implies the violation of the Bell inequality characteristic of quantum mechanics but in a way that is rather novel. The final conclusion is that the Copenhagen interpretation gives a completely satisfactory phenomenology of macro-systems interacting with micro-systems.
The emergent Copenhagen interpretation of quantum mechanics
International Nuclear Information System (INIS)
Hollowood, Timothy J
2014-01-01
We introduce a new and conceptually simple interpretation of quantum mechanics based on reduced density matrices of sub-systems from which the standard Copenhagen interpretation emerges as an effective description of macroscopically large systems. This interpretation describes a world in which definite measurement results are obtained with probabilities that reproduce the Born rule. Wave function collapse is seen to be a useful but fundamentally unnecessary piece of prudent bookkeeping which is only valid for macro-systems. The new interpretation lies in a class of modal interpretations in that it applies to quantum systems that interact with a much larger environment. However, we show that it does not suffer from the problems that have plagued similar modal interpretations like macroscopic superpositions and rapid flipping between macroscopically distinct states. We describe how the interpretation fits neatly together with fully quantum formulations of statistical mechanics and that a measurement process can be viewed as a process of ergodicity breaking analogous to a phase transition. The key feature of the new interpretation is that joint probabilities for the ergodic subsets of states of disjoint macro-systems only arise as emergent quantities. Finally we give an account of the EPR–Bohm thought experiment and show that the interpretation implies the violation of the Bell inequality characteristic of quantum mechanics but in a way that is rather novel. The final conclusion is that the Copenhagen interpretation gives a completely satisfactory phenomenology of macro-systems interacting with micro-systems. (paper)
INVERSE ELECTRON TRANSFER IN PEROXYOXALATE CHEMIEXCITATION USING EASILY REDUCIBLE ACTIVATORS
Bartoloni, Fernando Heering; Monteiro Leite Ciscato, Luiz Francisco; Augusto, Felipe Alberto; Baader, Wilhelm Josef
2010-01-01
Chemiluminescence properties of the peroxyoxalate reaction in the presence of activators bearing electron withdrawing substituents were studied, to evaluate the possible occurrence of an inverse electron
Evaluation of observables in statistical multifragmentation theories
International Nuclear Information System (INIS)
Cole, A.J.
1989-01-01
The canonical formulation of equilibrium statistical multifragmentation is examined. It is shown that the explicit construction of observables (average values) by sampling the partition probabilities is unnecessary, insofar as closed expressions in the form of recursion relations can be obtained quite easily. Such expressions may conversely be used to verify the sampling algorithms.
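A recursion of the kind described is standard in canonical multifragmentation models (e.g. the Chase-Mekjian relation); the sketch below assumes single-fragment weights omega[a] are supplied by whatever model is at hand.

```python
def partition_functions(omega, A):
    """Z_n for n = 0..A from the recursion
    Z_n = (1/n) * sum_{a=1..n} a * omega[a] * Z_{n-a},  with Z_0 = 1.
    omega is a list of length A+1; omega[0] is unused."""
    Z = [1.0] + [0.0] * A
    for n in range(1, A + 1):
        Z[n] = sum(a * omega[a] * Z[n - a] for a in range(1, n + 1)) / n
    return Z

# Observables then follow in closed form rather than by sampling, e.g. the
# mean multiplicity of size-a fragments: <n_a> = omega[a] * Z[A-a] / Z[A].
```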
Search Databases and Statistics
DEFF Research Database (Denmark)
Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J
2016-01-01
... having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here.
Statistical and stochastic aspects of the delocalization problem in quantum mechanics
International Nuclear Information System (INIS)
Claverie, P.; Diner, S.
1976-01-01
The space-time behaviour of electrons in atoms and molecules is reviewed. The wave conception of the electron is criticized and the poverty of the non-reductionist attitude is underlined. Further, the two main interpretations of quantum mechanics are recalled: the Copenhagen and the Statistical Interpretations. The meaning and the successes of the Statistical Interpretation are explained, and it is shown that it does not solve all problems because quantum mechanics is irreducible to a classical statistical theory. The fluctuation of the particle number and its relationship to loge theory, delocalization and correlation is studied. Finally, different stochastic models for microphysics are reviewed. The Markovian Fenyes-Nelson process allows an interpretation of the original heuristic considerations of Schroedinger. Non-Markov processes with Schroedinger time evolution are shown to be equivalent to the base state analysis of Feynman, but they are unsatisfactory from a probabilistic point of view. Stochastic electrodynamics is presented as the most satisfactory conception available today.
Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Guldenring, Daniel; Badilini, Fabio; Libretti, Guido; Peace, Aaron J; Leslie, Stephen J
The 12-lead Electrocardiogram (ECG) has been used to detect cardiac abnormalities in the same format for more than 70 years. However, due to the complex nature of 12-lead ECG interpretation, a significant cognitive workload is required from the interpreter. This complexity often leads to errors in diagnosis and subsequent treatment. We have previously reported on the development of an ECG interpretation support system designed to augment the human interpretation process, a computerised decision support system named 'Interactive Progressive based Interpretation' (IPI). In this study, a decision support algorithm was built into the IPI system to suggest potential diagnoses based on the interpreter's annotations of the 12-lead ECG. We hypothesise that semi-automatic interpretation using a digital assistant can be an optimal man-machine model for ECG interpretation, improving interpretation accuracy and reducing missed co-abnormalities. The Differential Diagnoses Algorithm (DDA) was developed using web technologies, with diagnostic ECG criteria defined in an open storage format, JavaScript Object Notation (JSON), which is queried using a rule-based reasoning algorithm to suggest diagnoses. To test our hypothesis, a counterbalanced trial was designed in which subjects interpreted ECGs using the conventional approach and using the IPI+DDA approach. A total of 375 interpretations were collected. The IPI+DDA approach was shown to improve diagnostic accuracy by 8.7% (although not statistically significant, p-value=0.1852), and the IPI+DDA suggested the correct interpretation more often than the human interpreter in 7/10 cases (varying statistical significance). Human interpretation accuracy increased to 70% when seven suggestions were generated. Although results were not found to be statistically significant, we found: 1) our decision support tool increased the number of correct interpretations, 2) the DDA algorithm suggested the correct
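The rule-based core of such a differential-diagnosis step is simple to sketch. The JSON schema and the criteria below are hypothetical illustrations, not the actual IPI+DDA format.

```python
import json

CRITERIA = json.loads("""
[
  {"diagnosis": "Anterior STEMI",
   "requires": ["ST elevation V1-V4"]},
  {"diagnosis": "First-degree AV block",
   "requires": ["PR interval > 200 ms"]}
]
""")

def suggest(annotations):
    """Return every diagnosis whose required findings were all annotated."""
    notes = set(annotations)
    return [c["diagnosis"] for c in CRITERIA
            if all(req in notes for req in c["requires"])]

print(suggest(["ST elevation V1-V4", "sinus rhythm"]))
# -> ['Anterior STEMI']
```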
Representations and Techniques for 3D Object Recognition and Scene Interpretation
Hoiem, Derek
2011-01-01
One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physi
Origin of Disagreements in Tandem Mass Spectra Interpretation by Search Engines.
Tessier, Dominique; Lollier, Virginie; Larré, Colette; Rogniaux, Hélène
2016-10-07
Several proteomic database search engines that interpret LC-MS/MS data do not identify the same set of peptides. These disagreements occur even when the scores of the peptide-to-spectrum matches suggest good confidence in the interpretation. Our study shows that these disagreements observed for the interpretations of a given spectrum are almost exclusively due to the variation of what we call the "peptide space", i.e., the set of peptides that are actually compared to the experimental spectra. We discuss the potential difficulties of precisely defining the "peptide space." Indeed, although several parameters that are generally reported in publications can easily be set to the same values, many additional parameters-with much less straightforward user access-might impact the "peptide space" used by each program. Moreover, in a configuration where each search engine identifies the same candidates for each spectrum, the inference of the proteins may remain quite different depending on the false discovery rate selected.
Interpreting the Customary Rules on Interpretation
Merkouris, Panos
2017-01-01
International courts have at times interpreted the customary rules on interpretation. This is interesting because what is being interpreted is: i) rules of interpretation, which sounds dangerously tautological, and ii) customary law, the interpretation of which has not been the object of critical
Hunting Down Interpretations of the HERA Large-$Q^{2}$ data
Ellis, John R.
1999-01-01
Possible interpretations of the HERA large-Q^2 data are reviewed briefly. The possibility of statistical fluctuations cannot be ruled out, and it seems premature to argue that the H1 and ZEUS anomalies are incompatible. The data cannot be explained away by modifications of parton distributions, nor do contact interactions help. A leptoquark interpretation would need a large tau-q branching ratio. Several R-violating squark interpretations are still viable despite all the constraints, and offer interesting experimental signatures, but please do not hold your breath.
Statistics Poster Challenge for Schools
Payne, Brad; Freeman, Jenny; Stillman, Eleanor
2013-01-01
The analysis and interpretation of data are important life skills. A poster challenge for schoolchildren provides an innovative outlet for these skills and demonstrates their relevance to daily life. We discuss our Statistics Poster Challenge and the lessons we have learned.
Application of descriptive statistics in analysis of experimental data
Mirilović Milorad; Pejin Ivana
2008-01-01
Statistics today represent a group of scientific methods for the quantitative and qualitative investigation of variations in mass phenomena. In fact, statistics comprise a group of methods used for the collection, analysis, presentation and interpretation of data necessary for reaching certain conclusions. Statistical analysis is divided into descriptive statistics and inferential statistics. The values which represent the results of an experiment, and which are the subj...
Statistics As Principled Argument
Abelson, Robert P
2012-01-01
In this illuminating volume, Robert P. Abelson delves into the too-often dismissed problems of interpreting quantitative data and then presenting them in the context of a coherent story about one's research. Unlike too many books on statistics, this is a remarkably engaging read, filled with fascinating real-life (and real-research) examples rather than with recipes for analysis. It will be of true interest and lasting value to beginning graduate students and seasoned researchers alike. The focus of the book is that the purpose of statistics is to organize a useful argument from quantitative
Pyrochemical recovery of easily reducible species from spent nuclear fuel
International Nuclear Information System (INIS)
Jouault, C.
2000-01-01
The purpose of the reprocessing of spent fuel is to separate noble metals and other easily reducible species, actinides and lanthanides. A thermodynamic and bibliographical study allowed us to devise a process which performs these separations in several steps. The experimental validation of the steps concerning the extraction of noble metals and easily reducible species required the design of an apparatus suited to the study of the two steps in question: the reduction by a gas of fission product oxides, and the extraction of the metallic particles obtained by reduction through digestion in a liquid metal. Digestion experiments, carried out on molybdenum and ruthenium particles, allowed us to conclude that the transfer of metallic particles from a molten salt into a liquid metal is governed by complex wettability phenomena between the metallic particle, the molten salt, the liquid metal and the gas. The transfer from the salt to the metal is a chain of two steps: emersion of the particles from the salt into the gas, and then transfer from the gas into the metal. Kinetics are limited by the transfer through the metal surface. The kinetics study identified the experimental parameters and the metal properties which influence the digestion rate. A model of the transfer into a liquid metal of a particle trapped at the fluid/metal interface confirmed the experimental conclusions and clarified the influence of stirring. All the results suggest that the extraction of noble metals and easily reducible species is feasible in this way. (author) [fr
The non-easily ionized elements as spectrochemical buffers
International Nuclear Information System (INIS)
Tripkovic, M.; Radovanov, S.; Holclajtner-Antunovic, I.; Todorovic, M.
1985-01-01
A method is developed for determining trace elements (In, Ga, B, V, Mo, Mn, Pt, P, Be) in graphite with the aid of a low current d.c. arc. The method makes use of the enhancement of the radiation intensities of trace elements by non-easily ionized elements (NEIE). As a NEIE, this method uses Cd which is added up to a concentration of 150 mg/g sample. The absolute detection limits for all of the above mentioned elements are at the ng-level. (orig.) [de
Momentum conservation decides Heisenberg's interpretation of the uncertainty formulas
International Nuclear Information System (INIS)
Angelidis, T.D.
1977-01-01
In the light of Heisenberg's interpretation of the uncertainty formulas, the conditions necessary for the derivation of the quantitative statement or law of momentum conservation are considered. Such considerations lead to a contradiction between the formalism of quantum physics and the asserted consequences of Heisenberg's interpretation. This contradiction decides against Heisenberg's interpretation of the uncertainty formulas, upholding that the formalism of quantum physics is both consistent and complete, at least insofar as the statement of momentum conservation can be proved within this formalism. A few comments are also included on Bohr's complementarity interpretation of the formalism of quantum physics. A suggestion, based on a statistical mode of empirical testing of the uncertainty formulas, does not give rise to any such contradiction.
Multimodal integration in statistical learning
DEFF Research Database (Denmark)
Mitchell, Aaron; Christiansen, Morten Hyllekvist; Weiss, Dan
2014-01-01
Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally mismatched ... facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
An introduction to medical statistics
International Nuclear Information System (INIS)
Hilgers, R.D.; Bauer, P.; Scheiber, V.; Heitmann, K.U.
2002-01-01
This textbook teaches all aspects and methods of biometrics as a field of concentration in medical education. Instrumental interpretations of the theory, concepts and terminology of medical statistics are enhanced by numerous illustrations and examples. With problems, questions and answers. (orig./CB) [de
Ergodic theory, interpretations of probability and the foundations of statistical mechanics
van Lith, J.H.
2001-01-01
The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time
HistFitter software framework for statistical data analysis
Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.
2015-01-01
We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with mu...
Cosmic inflation and big bang interpreted as explosions
Rebhan, E.
2012-12-01
It has become common understanding that the recession of galaxies and the corresponding redshift of light received from them can only be explained by an expansion of the space between them and us. In this paper, for the presently favored case of a universe without spatial curvature, it is shown that this interpretation is restricted to comoving coordinates. It is proven by construction that within the framework of general relativity other coordinates exist in relation to which these phenomena can be explained by a motion of the cosmic substrate across space, caused by an explosion-like big bang or by inflation preceding an almost big bang. At the place of an observer, this motion occurs without any spatial expansion. It is shown that in these "explosion coordinates" the usual redshift comes about by a Doppler shift and a subsequent gravitational shift. Making use of this interpretation, it can easily be understood why in comoving coordinates light rays of short spatial extension expand and thus constitute an exception to the rule that small objects up to the size of the solar system or even galaxies do not participate in the expansion of the universe. It is also discussed how the two interpretations can be reconciled with each other.
Scheck, Florian
2016-01-01
Scheck’s textbook starts with a concise introduction to classical thermodynamics, including geometrical aspects. Then a short introduction to probabilities and statistics lays the basis for the statistical interpretation of thermodynamics. Phase transitions, discrete models and the stability of matter are explained in great detail. Thermodynamics has a special role in theoretical physics. Due to the general approach of thermodynamics the field has a bridging function between several areas like the theory of condensed matter, elementary particle physics, astrophysics and cosmology. The classical thermodynamics describes predominantly averaged properties of matter, reaching from few particle systems and state of matter to stellar objects. Statistical Thermodynamics covers the same fields, but explores them in greater depth and unifies classical statistical mechanics with quantum theory of multiple particle systems. The content is presented as two tracks: the fast track for master students, providing the essen...
Combination and interpretation of observables in Cosmology
Directory of Open Access Journals (Sweden)
Virey Jean-Marc
2010-04-01
The standard cosmological model has deep theoretical foundations but needs the introduction of two major unknown components, dark matter and dark energy, to be in agreement with various observations. Dark matter describes a non-relativistic collisionless fluid of (non-baryonic) matter which amounts to 25% of the total density of the universe. Dark energy is a new kind of fluid, not of matter type, representing 70% of the total density, which should explain the recent acceleration of the expansion of the universe. Alternatively, one can reject the idea of adding one or two new components and argue instead that the equations used to make the interpretation should be modified on cosmological scales. Instead of dark matter one can invoke a failure of Newton's laws. Instead of dark energy, two approaches are proposed: general relativity (in terms of the Einstein equation) should be modified, or the cosmological principle which fixes the metric used for cosmology should be abandoned. One of the main objectives of the community is to find the path to the relevant interpretation thanks to the next generation of experiments, which should provide large statistical samples of observational data. Unfortunately, cosmological information is difficult to pin down directly from the measurements, and it is mandatory to combine the various observables to get the cosmological parameters. This is not problematic from the statistical point of view, but assumptions and approximations made for the analysis may bias our interpretation of the data. Consequently, strong attention should be paid to the statistical methods used for parameter estimation and model testing. After a review of the basics of cosmology where the cosmological parameters are introduced, we discuss the various cosmological probes and their associated observables used to extract cosmological information. We present the results obtained from several statistical analyses combining data of different nature but
Visualization of the variability of 3D statistical shape models by animation.
Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter
2004-01-01
Models of the 3D shape of anatomical objects and knowledge about their statistical variability are of great benefit in many computer-assisted medical applications like image analysis, therapy or surgery planning. Statistical shape models have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.
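A common concrete form of such models is the point-distribution model: PCA over corresponding landmarks, with the animation obtained by sweeping a mode coefficient over roughly +/-3 standard deviations. A minimal sketch, assuming correspondences are already established:

```python
import numpy as np

def build_shape_model(shapes):
    """shapes: (n_shapes, 3*n_points) array of corresponding landmarks."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt are modes
    sd = s / np.sqrt(len(shapes) - 1)                  # per-mode std dev
    return mean, Vt, sd

def synthesize(mean, modes, sd, k, b):
    """Shape at b standard deviations along mode k; animating b over
    [-3, 3] visualizes the variability encoded by that mode."""
    return mean + b * sd[k] * modes[k]
```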
Kissling, Grace E; Haseman, Joseph K; Zeiger, Errol
2015-09-02
A recent article by Gaus (2014) demonstrates a serious misunderstanding of the NTP's statistical analysis and interpretation of rodent carcinogenicity data as reported in Technical Report 578 (Ginkgo biloba) (NTP, 2013), as well as a failure to acknowledge the abundant literature on false positive rates in rodent carcinogenicity studies. The NTP reported Ginkgo biloba extract to be carcinogenic in mice and rats. Gaus claims that, in this study, 4800 statistical comparisons were possible, and that 209 of them were statistically significant (p<0.05) compared with 240 (4800×0.05) expected by chance alone; thus, the carcinogenicity of Ginkgo biloba extract cannot be definitively established. However, his assumptions and calculations are flawed since he incorrectly assumes that the NTP uses no correction for multiple comparisons, and that significance tests for discrete data operate at exactly the nominal level. He also misrepresents the NTP's decision making process, overstates the number of statistical comparisons made, and ignores the fact that the mouse liver tumor effects were so striking (e.g., p<0.0000000000001) that it is virtually impossible that they could be false positive outcomes. Gaus' conclusion that such obvious responses merely "generate a hypothesis" rather than demonstrate a real carcinogenic effect has no scientific credibility. Moreover, his claims regarding the high frequency of false positive outcomes in carcinogenicity studies are misleading because of his methodological misconceptions and errors. Published by Elsevier Ireland Ltd.
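The arithmetic behind the multiplicity argument is easy to reproduce; under the (unrealistic) assumptions of independent tests operating exactly at the nominal level, the chance count of significant results is binomial.

```python
from scipy.stats import binom

m, alpha = 4800, 0.05          # Gaus' claimed number of comparisons
print("expected by chance:", m * alpha)        # 240.0
print("P(209 or fewer):", binom.cdf(209, m, alpha))
# Real tumour-site tests are neither independent nor exactly at the nominal
# level, which is central to the NTP's objection to this calculation.
```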
Measurement and statistics for teachers
Van Blerkom, Malcolm
2008-01-01
Written in a student-friendly style, Measurement and Statistics for Teachers shows teachers how to use measurement and statistics wisely in their classes. Although there is some discussion of theory, emphasis is given to the practical, everyday uses of measurement and statistics. The second part of the text provides more complete coverage of basic descriptive statistics and their use in the classroom than in any text now available.Comprehensive and accessible, Measurement and Statistics for Teachers includes:Short vignettes showing concepts in action Numerous classroom examples Highlighted vocabulary Boxes summarizing related concepts End-of-chapter exercises and problems Six full chapters devoted to the essential topic of Classroom Tests Instruction on how to carry out informal assessments, performance assessments, and portfolio assessments, and how to use and interpret standardized tests A five-chapter section on Descriptive Statistics, giving instructors the option of more thoroughly teaching basic measur...
THE STATISTICAL INDICATORS OF POTATO PRODUCED IN ROMANIA
Directory of Open Access Journals (Sweden)
Elena BULARCA
2013-12-01
In this study we analyze and interpret the main statistical indicators of potatoes produced in Romania. First, we present some information about potatoes: their origin and appearance, and their importance and necessity in the life of people and animals. Then, on the basis of the specific statistical indicators, we interpret the evolution of the cultivated area, the share of the main counties in the area cultivated with potatoes, and the average yield per hectare, as well as the import and export of potatoes in a given period. Each indicator was analyzed, and the corresponding remarks and conclusions drawn.
Introduction to Statistics course
CERN. Geneva HR-RFA
2006-01-01
The four lectures will present an introduction to statistical methods as used in High Energy Physics. As the time will be very limited, the course will seek mainly to define the important issues and to introduce the most widely used tools. Topics will include the interpretation and use of probability, estimation of parameters and testing of hypotheses.
Shnirelman peak in the level spacing statistics
International Nuclear Information System (INIS)
Chirikov, B.V.; Shepelyanskij, D.L.
1994-01-01
The first results on the statistical properties of quantum quasidegeneracy are presented. A physical interpretation of the Shnirelman theorem, which predicted bulk quasidegeneracy, is given. The conditions for a strong impact of the degeneracy on the quantum level statistics are formulated, which allows the application of the Shnirelman theorem to be extended to a broad class of quantum systems. 14 refs., 3 figs
Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine
2016-08-18
We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
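Since the onion spectrum falls out of the standard peeling procedure, a compact sketch suffices; this follows the description above rather than the authors' reference code.

```python
def onion_decomposition(adj):
    """adj: {vertex: set of neighbours}. Returns (coreness, layer) dicts;
    the multiset of layers per coreness value is the onion spectrum."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    remaining, core, layer = set(adj), {}, {}
    k, stage = 0, 0
    while remaining:
        stage += 1
        k = max(k, min(degree[v] for v in remaining))  # current peeling depth
        peel = {v for v in remaining if degree[v] <= k}
        for v in peel:
            core[v], layer[v] = k, stage
        remaining -= peel
        for v in peel:                                 # update the survivors
            for u in adj[v]:
                if u in remaining:
                    degree[u] -= 1
    return core, layer
```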
A DoS/DDoS Attack Detection System Using Chi-Square Statistic Approach
Directory of Open Access Journals (Sweden)
Fang-Yie Leu
2010-04-01
Nowadays, users can easily access and download network attack tools, which often provide friendly interfaces and easily operated features, from the Internet. Therefore, even a naive hacker can launch a large-scale DoS or DDoS attack to prevent a system, i.e., the victim, from providing Internet services. In this paper, we propose an agent-based intrusion detection architecture, a distributed detection system, to detect DoS/DDoS attacks by invoking a statistical approach that compares source IP addresses' normal and current packet statistics to discriminate whether there is a DoS/DDoS attack. It first collects all source IPs' packet statistics so as to create their normal packet distribution. Once some IPs' current packet distribution suddenly changes, very often it is an attack. Experimental results show that this approach can effectively detect DoS/DDoS attacks.
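The statistical comparison at the heart of such a scheme can be sketched as a chi-square test of a source's current packet-type counts against its stored normal profile; the packet-type bins below are illustrative, not the paper's exact features.

```python
def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

normal = [500, 480, 300, 20]      # long-run counts: SYN, ACK, UDP, ICMP
observed = [4900, 60, 310, 25]    # current window shows a SYN flood

scale = sum(observed) / sum(normal)        # match window sizes before comparing
expected = [e * scale for e in normal]
stat = chi_square(observed, expected)
print(f"chi-square = {stat:.0f}")  # vastly exceeds the 3-df cutoff of 7.81
```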
Objective interpretation as conforming interpretation
Directory of Open Access Journals (Sweden)
Lidka Rodak
2011-12-01
The practical discourse willingly uses the formula of "objective interpretation", with no regard to its controversial nature that has been discussed in the literature. The main aim of the article is to investigate what "objective interpretation" could mean and how it could be understood in the practical discourse, focusing on the understanding offered by judicature. The thesis of the article is that objective interpretation, as identified with the textualists' position, is not possible to uphold, and should rather be linked with conforming interpretation. What this actually implies is that it is not the virtues of certainty and predictability, which are usually associated with objectivity, but coherence that makes the foundation of the applicability of objectivity in law. What could be observed from the analyses is that both the phenomenon of conforming interpretation and objective interpretation play the role of arguments in the interpretive discourse, arguments that provide justification that an interpretation is not arbitrary or subjective. With regard to the important part of the ideology of legal application which is the conviction that decisions should be taken on the basis of law in order to exclude arbitrariness, objective interpretation could be read as the question "what kind of authority 'supports' a certain interpretation?", one that is almost never free of judicial creativity and judicial activism. One can say that objective and conforming interpretation are just further arguments used in legal discourse.
Radiologic head CT interpretation errors in pediatric abusive and non-abusive head trauma patients
International Nuclear Information System (INIS)
Kralik, Stephen F.; Finke, Whitney; Wu, Isaac C.; Ho, Chang Y.; Hibbard, Roberta A.; Hicks, Ralph A.
2017-01-01
Pediatric head trauma, including abusive head trauma, is a significant cause of morbidity and mortality. The purpose of this research was to identify and evaluate radiologic interpretation errors of head CTs performed on abusive and non-abusive pediatric head trauma patients from a community setting referred for a secondary interpretation at a tertiary pediatric hospital. A retrospective search identified 184 patients <5 years of age with head CT for known or potential head trauma who had a primary interpretation performed at a referring community hospital by a board-certified radiologist. Two board-certified fellowship-trained neuroradiologists at an academic pediatric hospital independently interpreted the head CTs, compared their interpretations to determine inter-reader discrepancy rates, and resolved discrepancies to establish a consensus second interpretation. The primary interpretation was compared to the consensus second interpretation using the RADPEER™ scoring system to determine the primary interpretation-second interpretation overall and major discrepancy rates. MRI and/or surgical findings were used to validate the primary interpretation or second interpretation when possible. The diagnosis of abusive head trauma was made using clinical and imaging data by a child abuse specialist to separate patients into abusive head trauma and non-abusive head trauma groups. Discrepancy rates were compared for both groups. Lastly, primary interpretations and second interpretations were evaluated for discussion of imaging findings concerning for abusive head trauma. There were statistically significant differences between primary interpretation-second interpretation versus inter-reader overall and major discrepancy rates (28% vs. 6%, P=0.0001; 16% vs. 1%, P=0.0001). There were significant differences in the primary interpretation-second interpretation overall and major discrepancy rates for abusive head trauma patients compared to non-abusive head trauma
Radiologic head CT interpretation errors in pediatric abusive and non-abusive head trauma patients
Energy Technology Data Exchange (ETDEWEB)
Kralik, Stephen F.; Finke, Whitney; Wu, Isaac C.; Ho, Chang Y. [Indiana University School of Medicine, Department of Radiology and Imaging Sciences, Indianapolis, IN (United States); Hibbard, Roberta A.; Hicks, Ralph A. [Indiana University School of Medicine, Department of Pediatrics, Section of Child Protection Programs, Indianapolis, IN (United States)
2017-07-15
Pediatric head trauma, including abusive head trauma, is a significant cause of morbidity and mortality. The purpose of this research was to identify and evaluate radiologic interpretation errors of head CTs performed on abusive and non-abusive pediatric head trauma patients from a community setting referred for a secondary interpretation at a tertiary pediatric hospital. A retrospective search identified 184 patients <5 years of age with head CT for known or potential head trauma who had a primary interpretation performed at a referring community hospital by a board-certified radiologist. Two board-certified fellowship-trained neuroradiologists at an academic pediatric hospital independently interpreted the head CTs, compared their interpretations to determine inter-reader discrepancy rates, and resolved discrepancies to establish a consensus second interpretation. The primary interpretation was compared to the consensus second interpretation using the RADPEER™ scoring system to determine the primary interpretation-second interpretation overall and major discrepancy rates. MRI and/or surgical findings were used to validate the primary interpretation or second interpretation when possible. The diagnosis of abusive head trauma was made using clinical and imaging data by a child abuse specialist to separate patients into abusive head trauma and non-abusive head trauma groups. Discrepancy rates were compared for both groups. Lastly, primary interpretations and second interpretations were evaluated for discussion of imaging findings concerning for abusive head trauma. There were statistically significant differences between primary interpretation-second interpretation versus inter-reader overall and major discrepancy rates (28% vs. 6%, P=0.0001; 16% vs. 1%, P=0.0001). There were significant differences in the primary interpretation-second interpretation overall and major discrepancy rates for abusive head trauma patients compared to non-abusive head trauma
Authentic interpretations of the Return of the Lands Act (1839)
Directory of Open Access Journals (Sweden)
Stanković Uroš
2011-01-01
The article sheds light on three interpretations of the Return of the Lands Act, introduced in 1839, which entitled landowners whose land had been usurped by prince Miloš Obrenović (1815-1839, 1858-1860) and distinguished people's headmen to claim retrial of litigations over land adjudicated unjustly and the return of their lawlessly disposed property. Two main questions arose in relation to the interpretive rules: what were the legislature's goals when interpreting the Act, and what were they due to. The author sought the answer to the first dilemma by scrutinizing the texts of the interpretations, in order to determine their semantic meaning. In an attempt to provide an explanation for the second problem, he explored the social context preceding the introduction of the interpretive rules (namely, the number of litigations before the courts and the political climate in Serbia). The first interpretation, dating to 2 March 1843, is inclined towards the previous owners of the land. This solution was caused mainly by the political situation: Russia contested Aleksandar Karađorđević's first election as prince of Serbia in 1842, and the assembly foreseen to elect the ruler anew was to be summoned in June 1843. In the meantime, the new regime embodied in the so-called constitution-defenders (the group of distinguished political leaders opposed to the Obrenović dynasty) struggled to ensure the enthronement of its candidate and therefore issued a demagogic interpretation. On the contrary, the two remaining interpretations, from the years 1844 and 1845, were aimed at retaining the status quo regarding land property by diminishing the possibilities for a new trial. The legislators opted for restrictions having learned that the number of litigations had increased greatly. Besides, the political climate was to a large extent convenient for taking such measures, as several lesser rebellions incited by the followers of the Obrenović dynasty had been quelled easily, after which the opponents of the regime remained passive for a longer period.
The disagreeable behaviour of the kappa statistic.
Flight, Laura; Julious, Steven A
2015-01-01
It is often of interest to measure the agreement between a number of raters when an outcome is nominal or ordinal. The kappa statistic is used as a measure of agreement. The statistic is highly sensitive to the distribution of the marginal totals and can produce unreliable results. Other statistics such as the proportion of concordance, maximum attainable kappa and prevalence and bias adjusted kappa should be considered to indicate how well the kappa statistic represents agreement in the data. Each kappa should be considered and interpreted based on the context of the data being analysed. Copyright © 2014 John Wiley & Sons, Ltd.
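The behaviour is easy to demonstrate: with skewed marginal totals, high raw agreement can still yield a low kappa, which is why companions such as the prevalence-and-bias-adjusted kappa (PABAK = 2*p_o - 1 in the 2x2 case) are recommended alongside it.

```python
import numpy as np

def kappa_2x2(table):
    """Cohen's kappa and PABAK for a 2x2 agreement table."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_o = np.trace(t) / n                               # observed agreement
    p_e = (t.sum(axis=0) * t.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e), 2 * p_o - 1

# 85% raw agreement, yet kappa is only ~0.32 because one category dominates.
print(kappa_2x2([[80, 10], [5, 5]]))   # (0.318..., 0.70)
```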
Interpreting estimates of heritability--a note on the twin decomposition.
Stenberg, Anders
2013-03-01
While most outcomes may in part be genetically mediated, quantifying genetic heritability is a different matter. To explore data on twins and decompose the variation is a classical method to determine whether variation in outcomes, e.g. IQ or schooling, originate from genetic endowments or environmental factors. Despite some criticism, the model is still widely used. The critique is generally related to how estimates of heritability may encompass environmental mediation. This aspect is sometimes left implicit by authors even though its relevance for the interpretation is potentially profound. This short note is an appeal for clarity from authors when interpreting the magnitude of heritability estimates. It is demonstrated how disregarding existing theoretical contributions can easily lead to unnecessary misinterpretations and/or controversies. The key arguments are relevant also for estimates based on data of adopted children or from modern molecular genetics research. Copyright © 2012 Elsevier B.V. All rights reserved.
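For reference, the classical decomposition at issue is Falconer's: heritability is estimated from the gap between monozygotic and dizygotic twin correlations, and the note's warning is precisely that the h2 term below can absorb environmental mediation.

```python
def ace(r_mz, r_dz):
    """Falconer's ACE estimates from MZ and DZ twin correlations."""
    h2 = 2 * (r_mz - r_dz)   # "genetic" share: the contested quantity
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # unique environment (plus measurement error)
    return h2, c2, e2

print(ace(0.75, 0.45))       # hypothetical correlations -> (0.6, 0.15, 0.25)
```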
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A
2012-03-15
To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van 't [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)
2012-03-15
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
International Nuclear Information System (INIS)
Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t
2012-01-01
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
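A minimal LASSO-style NTCP sketch in scikit-learn: L1-penalised logistic regression with cross-validated performance, on synthetic stand-ins for dose-volume and clinical predictors. This mirrors the recommendation, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))     # e.g. mean dose, V30, age, ... (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
print("CV accuracy:", cross_val_score(lasso, X, y, cv=5).mean())
print("coefficients:", lasso.fit(X, y).coef_)
# Most coefficients shrink exactly to zero, which is what makes the fitted
# model easy to interpret compared with model averaging.
```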
Mantovani, Daniela; Sutherland, Holly
2003-01-01
This paper reports an exercise to validate EUROMOD output for 1998 by comparing income statistics calculated from the baseline micro-output with comparable statistics from other sources, including the European Community Household Panel. The main potential reasons for discrepancies are identified. While there are some specific national issues that arise, there are two main general points to consider in interpreting EUROMOD estimates of social indicators across EU member States: (a) the method ...
Probabilistic and Statistical Aspects of Quantum Theory
Holevo, Alexander S
2011-01-01
This book is devoted to aspects of the foundations of quantum mechanics in which probabilistic and statistical concepts play an essential role. The main part of the book concerns the quantitative statistical theory of quantum measurement, based on the notion of positive operator-valued measures. During the past years there has been substantial progress in this direction, stimulated to a great extent by new applications such as Quantum Optics, Quantum Communication and high-precision experiments. The questions of statistical interpretation, quantum symmetries, theory of canonical commutation re
Statistics and probability with applications for engineers and scientists
Gupta, Bhisham C
2013-01-01
Introducing the tools of statistics and probability from the ground up An understanding of statistical tools is essential for engineers and scientists who often need to deal with data analysis over the course of their work. Statistics and Probability with Applications for Engineers and Scientists walks readers through a wide range of popular statistical techniques, explaining step-by-step how to generate, analyze, and interpret data for diverse applications in engineering and the natural sciences. Unique among books of this kind, Statistics and Prob
Interval Coded Scoring: a toolbox for interpretable scoring systems
Directory of Open Access Journals (Sweden)
Lieven Billiet
2018-04-01
Over the last decades, clinical decision support systems have been gaining importance. They help clinicians to make effective use of the overload of available information to obtain correct diagnoses and appropriate treatments. However, their power often comes at the cost of a black box model which cannot be interpreted easily. This interpretability is of paramount importance in a medical setting with regard to trust and (legal) responsibility. In contrast, existing medical scoring systems are easy to understand and use, but they are often a simplified rule-of-thumb summary of previous medical experience rather than a well-founded system based on available data. Interval Coded Scoring (ICS) connects these two approaches, exploiting the power of sparse optimization to derive scoring systems from training data. The presented toolbox interface makes this theory easily applicable to both small and large datasets. It contains two possible problem formulations based on linear programming or elastic net. Both allow the construction of a model for a binary classification problem and the establishment of risk profiles that can be used for future diagnosis. All of this requires only a few lines of code. ICS differs from standard machine learning through its model consisting of interpretable main effects and interactions. Furthermore, insertion of expert knowledge is possible because the training can be semi-automatic. This allows end users to make a trade-off between complexity and performance based on cross-validation results and expert knowledge. Additionally, the toolbox offers an accessible way to assess classification performance via accuracy and the ROC curve, whereas the calibration of the risk profile can be evaluated via a calibration curve. Finally, the colour-coded model visualization has particular appeal if one wants to apply ICS manually on new observations, as well as for validation by experts in the specific application domains. The validity and applicability
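The underlying idea, independent of the ICS toolbox itself, can be sketched as: fit a sparse linear model, then rescale and round its coefficients into integer points that a clinician can sum by hand. The feature names below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def to_score_table(model, names, points_per_unit=2.0):
    """Round model coefficients to small integer points; drop zero rows."""
    return {name: int(round(points_per_unit * w))
            for name, w in zip(names, model.coef_.ravel())
            if int(round(points_per_unit * w)) != 0}

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(300, 4)).astype(float)    # binary risk factors
y = (X[:, 0] + X[:, 2] + rng.normal(0, 0.5, 300) > 1).astype(int)

clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
print(to_score_table(clf, ["smoker", "age>65", "diabetes", "male"]))
```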
SOCR: Statistics Online Computational Resource
Directory of Open Access Journals (Sweden)
Ivo D. Dinov
2006-10-01
Full Text Available The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.
An approach to the interpretation of backpropagation neural network models in QSAR studies.
Baskin, I I; Ait, A O; Halberstam, N M; Palyulin, V A; Zefirov, N S
2002-03-01
An approach to the interpretation of backpropagation neural network models for quantitative structure-activity and structure-property relationship (QSAR/QSPR) studies is proposed. The method is based on analyzing the first and second moments of the distribution of the values of the first and second partial derivatives of the neural network outputs with respect to its inputs, calculated at the data points. The use of such statistics makes it possible not only to obtain essentially the same characteristics as with traditional "interpretable" statistical methods, such as linear regression analysis, but also to reveal important additional information regarding the non-linear character of QSAR/QSPR relationships. The approach is illustrated by an example of interpreting a backpropagation neural network model for predicting the position of the long-wave absorption band of cyanine dyes.
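As a rough illustration of the derivative-moment idea, here is a hedged numpy sketch for a one-hidden-layer tanh network with made-up weights and data; the original work used trained backpropagation QSAR models, not random ones as here.

    import numpy as np

    rng = np.random.default_rng(0)
    # hypothetical 1-hidden-layer network: y = w2 . tanh(W1 @ x + b1) + b2
    W1 = rng.normal(size=(5, 3)); b1 = rng.normal(size=5)
    w2 = rng.normal(size=5);      b2 = 0.0

    def grad(x):
        h = np.tanh(W1 @ x + b1)          # hidden activations
        return (w2 * (1 - h**2)) @ W1     # dy/dx by the chain rule, shape (3,)

    X = rng.normal(size=(200, 3))         # stand-in descriptor data points
    G = np.array([grad(x) for x in X])    # first partial derivatives at each point
    print("mean dy/dx per input:", G.mean(axis=0))  # plays the role of regression-like weights
    print("var  dy/dx per input:", G.var(axis=0))   # large variance flags non-linearity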
A primer of multivariate statistics
Harris, Richard J
2014-01-01
Drawing upon more than 30 years of experience in working with statistics, Dr. Richard J. Harris has updated A Primer of Multivariate Statistics to provide a model of balance between how-to and why. This classic text covers multivariate techniques with a taste of latent variable approaches. Throughout the book there is a focus on the importance of describing and testing one's interpretations of the emergent variables that are produced by multivariate analysis. This edition retains its conversational writing style while focusing on classical techniques. The book gives the reader a feel for why
Statistics for scientists and engineers
Shanmugam, Ramalingam
2015-01-01
This book provides the theoretical framework needed to build, analyze and interpret various statistical models. It helps readers choose the correct model, identify among various candidates the one that best captures the data, and solve the problem at hand. This is an introductory textbook on probability and statistics. The authors explain theoretical concepts in a step-by-step manner and provide practical examples. The introductory chapter in this book presents the basic concepts. Next, the authors discuss the measures of location, popular measures of spread, and measures of skewness and kurtosis. Probability…
On the statistical properties of photons
International Nuclear Information System (INIS)
Cini, M.
1990-01-01
The interpretation in terms of a transition from Maxwell-Boltzmann to Bose-Einstein statistics of the effect in quantum optics of degenerate light discovered by De Martini and Di Fonzo is discussed. It is shown that the results of the experiment can be explained by using only the quantum-mechanical rule that the states of an assembly of bosons should be completely symmetrical, without mentioning in any way their statistical properties. This means that photons are indeed identical particles.
Singamsetti, Rao
2007-01-01
In this paper an attempt is made to highlight some issues of interpretation of statistical concepts and interpretation of results as taught in undergraduate Business statistics courses. The use of modern technology in the classroom is shown to have increased the efficiency and the ease of learning and teaching in statistics. The importance of…
Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome. Chave
2014-01-01
We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
Applied statistics for economics and business
Özdemir, Durmuş
2016-01-01
This textbook introduces readers to practical statistical issues by presenting them within the context of real-life economics and business situations. It presents the subject in a non-threatening manner, with an emphasis on concise, easily understandable explanations. It has been designed to be accessible and student-friendly and, as an added learning feature, provides all the relevant data required to complete the accompanying exercises and computing problems, which are presented at the end of each chapter. It also discusses index numbers and inequality indices in detail, since these are of particular importance to students and commonly omitted in textbooks. Throughout the text it is assumed that the student has no prior knowledge of statistics. It is aimed primarily at business and economics undergraduates, providing them with the basic statistical skills necessary for further study of their subject. However, students of other disciplines will also find it relevant.
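Since inequality indices are one of the book's distinguishing topics, a small worked illustration may help; this Gini coefficient sketch uses the standard sorted-data formula and invented income values, and is not taken from the textbook itself.

    import numpy as np

    def gini(x):
        """Gini inequality index, G = sum_i (2i - n - 1) x_i / (n * sum x), i = 1..n sorted."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        i = np.arange(1, n + 1)
        return np.sum((2 * i - n - 1) * x) / (n * x.sum())

    print(gini([10, 20, 30, 40]))  # 0.25: moderate inequality (hypothetical incomes)
    print(gini([25, 25, 25, 25]))  # 0.0: perfect equality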
Applied Statistics for the Social and Health Sciences
Gordon, Rachel A A
2012-01-01
Applied Statistics for the Social and Health Sciences provides graduate students in the social and health sciences with the basic skills that they need to estimate, interpret, present, and publish statistical models using contemporary standards. The book targets the social and health science branches such as human development, public health, sociology, psychology, education, and social work in which students bring a wide range of mathematical skills and have a wide range of methodological affinities. For these students, a successful course in statistics will not only offer statistical content
A method for easily customizable gradient gel electrophoresis.
Miller, Andrew J; Roman, Brandon; Norstrom, Eric
2016-09-15
Gradient polyacrylamide gel electrophoresis is a powerful tool for the resolution of polypeptides by relative mobility. Here, we present a simplified method for generating polyacrylamide gradient gels for routine analysis without the need for specialized mixing equipment. The method allows for easily customizable gradients which can be optimized for specific polypeptide resolution requirements. Moreover, the method eliminates the possibility of buffer cross contamination in mixing equipment, and the time and resources saved with this method in place of traditional gradient mixing, or the purchase of pre-cast gels, are noteworthy given the frequency with which many labs use gradient gel SDS-PAGE. Copyright © 2016 Elsevier Inc. All rights reserved.
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
Securing wide appreciation of health statistics.
DO AMARAL PYRRAIT, A M; AUBENQUE, M J; BENJAMIN, B; DE GROOT, M J; KOHN, R
1954-01-01
All the authors are agreed on the need for a certain publicizing of health statistics, but do Amaral Pyrrait points out that the medical profession prefers to convince itself rather than to be convinced. While there is great utility in articles and reviews in the professional press (especially for paramedical personnel), Aubenque, de Groot, and Kohn show how appreciation can effectively be secured by making statistics more easily understandable to the non-expert by, for instance, including readable commentaries in official publications, simplifying charts and tables, and preparing simple manuals on statistical methods. Aubenque and Kohn also stress the importance of linking health statistics to other economic and social information. Benjamin suggests that the principles of market research could be applied to advantage to health statistics to determine the precise needs of the "consumers". At the same time, Aubenque points out that the value of the ultimate results must be clear to those who provide the data; for this, Kohn suggests that the enumerators must know exactly what is wanted and why. There is general agreement that some explanation of statistical methods and their uses should be given in the curricula of medical schools and that lectures and postgraduate courses should be arranged for practising physicians.
"What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"
Ozturk, Elif
2012-01-01
The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
HistFitter software framework for statistical data analysis
Energy Technology Data Exchange (ETDEWEB)
Baak, M. [CERN, Geneva (Switzerland); Besjes, G.J. [Radboud University Nijmegen, Nijmegen (Netherlands); Nikhef, Amsterdam (Netherlands); Cote, D. [University of Texas, Arlington (United States); Koutsman, A. [TRIUMF, Vancouver (Canada); Lorenz, J. [Ludwig-Maximilians-Universitaet Muenchen, Munich (Germany); Excellence Cluster Universe, Garching (Germany); Short, D. [University of Oxford, Oxford (United Kingdom)
2015-04-15
We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)
HistFitter software framework for statistical data analysis
International Nuclear Information System (INIS)
Baak, M.; Besjes, G.J.; Cote, D.; Koutsman, A.; Lorenz, J.; Short, D.
2015-01-01
We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
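The diagnostic-test quantities reviewed above reduce to simple arithmetic on a 2x2 table. A short Python sketch with assumed counts (not taken from the article):

    # hypothetical 2x2 diagnostic table: true/false positives and negatives
    TP, FN, FP, TN = 90, 10, 30, 170
    sens = TP / (TP + FN)                      # sensitivity: 0.90
    spec = TN / (TN + FP)                      # specificity: 0.85
    acc = (TP + TN) / (TP + FN + FP + TN)      # overall accuracy
    lr_pos = sens / (1 - spec)                 # likelihood ratio of a positive test
    lr_neg = (1 - sens) / spec                 # likelihood ratio of a negative test
    print(f"Se={sens:.2f} Sp={spec:.2f} Acc={acc:.2f} LR+={lr_pos:.1f} LR-={lr_neg:.2f}")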
Statistical ensembles in quantum mechanics
International Nuclear Information System (INIS)
Blokhintsev, D.
1976-01-01
The interpretation of quantum mechanics presented in this paper is based on the concept of quantum ensembles. This concept differs essentially from the canonical one in that the intervention of the observer in the state of a microscopic system is of no greater importance than in any other field of physics. Owing to this fact, the laws established by quantum mechanics are of no less objective a character than the laws governing classical statistical mechanics. The paradoxical nature of some statements of quantum mechanics, which results from the interpretation of the wave function as the observer's notebook, greatly stimulated the development of the idea presented. (Auth.)
International Nuclear Information System (INIS)
Lee, Dong Soo; Lee, Jae Sung; Kim, Kyeong Min; Chung, June Key; Lee, Myung Chul
1998-01-01
We investigated statistical methods for composing functional brain maps of human working memory and the principal factors affecting the localization results. Repeated PET scans with four successive tasks, consisting of one control and three different activation tasks, were performed on six right-handed normal volunteers for 2 minutes after bolus injections of 925 MBq H₂¹⁵O at intervals of 30 minutes. Image data were analyzed using SPM96 (Statistical Parametric Mapping) implemented in Matlab (Mathworks Inc., U.S.A.). Images from the same subject were spatially registered and normalized using linear and nonlinear transformation methods. The significance of the difference between the control and each activation state was estimated at every voxel based on the general linear model. Differences in global counts were removed using analysis of covariance (ANCOVA) with global activity as a covariate. Using the mean and variance for each condition, adjusted by ANCOVA, t-statistics were computed at every voxel. To make the results easier to interpret, t-values were transformed to the standard Gaussian distribution (Z-score). All the subjects carried out the activation and control tests successfully. The average rate of correct answers was 95%. The numbers of activated blobs were 4 for verbal memory I, 9 for verbal memory II, 9 for visual memory, and 6 for the conjunctive activation of these three tasks. The verbal working memory tasks activated predominantly left-sided structures, while the visual memory task activated the right hemisphere. We conclude that rCBF PET imaging and the statistical parametric mapping method were useful for localizing the brain regions involved in verbal and visual working memory.
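The t-to-Z transformation mentioned above maps each voxel's t-value to the standard normal deviate with the same tail probability. A scipy sketch with illustrative t-values and an assumed number of error degrees of freedom (the study's actual values are not reproduced here):

    import numpy as np
    from scipy import stats

    t_vals = np.array([1.2, 2.5, 3.7])  # illustrative voxel t-values
    df = 5                              # assumed error degrees of freedom
    p = stats.t.sf(t_vals, df)          # one-sided p-value of each t under Student's t
    z = stats.norm.isf(p)               # Z-score with the same upper-tail probability
    print(p, z)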
Methods for statistical data analysis of multivariate observations
Gnanadesikan, R
1997-01-01
A practical guide for multivariate statistical techniques-- now updated and revised In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of interest…
An Interpreter's Interpretation: Sign Language Interpreters' View of Musculoskeletal Disorders
National Research Council Canada - National Science Library
Johnson, William L
2003-01-01
Sign language interpreters are at increased risk for musculoskeletal disorders. This study used content analysis to obtain detailed information about these disorders from the interpreters' point of view...
Statistical Estimation of Heterogeneities: A New Frontier in Well Testing
Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.
2001-12-01
Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.
Truth, possibility and probability new logical foundations of probability and statistical inference
Chuaqui, R
1991-01-01
Anyone involved in the philosophy of science is naturally drawn into the study of the foundations of probability. Different interpretations of probability, based on competing philosophical ideas, lead to different statistical techniques, and frequently to mutually contradictory consequences. This unique book presents a new interpretation of probability, rooted in the traditional interpretation that was current in the 17th and 18th centuries. Mathematical models are constructed based on this interpretation, and statistical inference and decision theory are applied, including some examples in artificial intelligence, solving the main foundational problems. Nonstandard analysis is extensively developed for the construction of the models and in some of the proofs. Many nonstandard theorems are proved, some of them new, in particular, a representation theorem that asserts that any stochastic process can be approximated by a process defined over a space with equiprobable outcomes.
Statistical interpretation of the process of evolution and functioning of Audiovisual Archives
Directory of Open Access Journals (Sweden)
Nuno Miguel Epifânio
2013-03-01
Full Text Available The article provides a typology of the operating conditions of audiovisual archives, interpreting the results of a quantitative sampling study. The study involved 43 institutions of differing nature and size, from national and foreign organizations, based on questions answered by communication services and cultural institutions. The analysis revealed a variety of guidelines on the management of information preservation and characterized the typology of the record collections of each archive. The data collected thus allowed building an overview of the operating model of each organization surveyed in this study.
The reaction of organocerium reagents with easily enolizable ketones
International Nuclear Information System (INIS)
Imamoto, Tsuneo; Kusumoto, Tetsuo; Sugiura, Yasushi; Suzuki, Nobuyo; Takiyama, Nobuyuki
1985-01-01
Organocerium (III) reagents were conveniently generated by the reaction of organolithium compounds with anhydrous cerium (III) chloride. The reagents are less basic than organolithiums and Grignard reagents, and they react readily at -78 deg C with easily enolizable ketones such as 2-tetralone to afford addition products in high yields. Cerium (III) enolates were also generated from lithium enolates and cerium (III) chloride. The cerium (III) enolates undergo aldol addition with ketones or sterically crowded aldehydes to give the corresponding β-hydroxy ketones in good to high yields. (author)
College Students' Interpretation of Research Reports on Group Differences: The Tall-Tale Effect
Hogan, Thomas P.; Zaboski, Brian A.; Perry, Tiffany R.
2015-01-01
How does the student untrained in advanced statistics interpret results of research that reports a group difference? In two studies, statistically untrained college students were presented with abstracts or professional associations' reports and asked for estimates of scores obtained by the original participants in the studies. These estimates…
ADHD Rating Scale-IV: Checklists, Norms, and Clinical Interpretation
Pappas, Danielle
2006-01-01
This article reviews the "ADHD Rating Scale-IV: Checklists, norms, and clinical interpretation," a norm-referenced checklist that measures the symptoms of attention deficit/hyperactivity disorder (ADHD) according to the diagnostic criteria of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV; American Psychiatric…
Wilhelm Wundt's Theory of Interpretation
Directory of Open Access Journals (Sweden)
Jochen Fahrenberg
2008-09-01
Full Text Available Wilhelm WUNDT was a pioneer in experimental and physiological psychology. However, his theory of interpretation (hermeneutics) remains virtually neglected. According to WUNDT psychology belongs to the domain of the humanities (Geisteswissenschaften), and, throughout his books and research, he advocated two basic methodologies: experimentation (as the means of controlled self-observation) and interpretative analysis of mental processes and products. He was an experimental psychologist and a profound expert in traditional hermeneutics. Today, he may still be acknowledged as the author of the monumental Völkerpsychologie, but not for his advances in epistemology and methodology. His subsequent work, the Logik (1908/1921), contains about 120 pages on hermeneutics. In the present article a number of issues are addressed. Noteworthy was WUNDT's general intention to account for the logical constituents and the psychological process of understanding, and his reflections on quality control. In general, WUNDT demanded methodological pluralism and a complementary approach to the study of consciousness and neurophysiological processes. In the present paper WUNDT's approach is related to the continuing controversy on basic issues in methodology, e.g. experimental and statistical methods vs. qualitative (hermeneutic) methods. Varied explanations are given for the one-sided or distorted reception of WUNDT's methodology. Presently, in Germany the basic program of study in psychology lacks thorough teaching and training in qualitative (hermeneutic) methods. Appropriate courses are not included in the curricula, in contrast to the training in experimental design, observation methods, and statistics. URN: urn:nbn:de:0114-fqs0803291
ARSENIC CONTAMINATION IN GROUNDWATER: A STATISTICAL MODELING
Palas Roy; Naba Kumar Mondal; Biswajit Das; Kousik Das
2013-01-01
High arsenic in natural groundwater in most of the tubewells of the Purbasthali-Block II area of Burdwan district (W.B., India) has recently come into focus as a serious environmental concern. This paper intends to illustrate the statistical modeling of the arsenic-contaminated groundwater to identify the interrelation of the arsenic content with other participating groundwater parameters, so that the arsenic contamination level can easily be predicted by analyzing only such parameters. Mul...
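The kind of model described (predicting arsenic from co-measured parameters) is typically a multiple linear regression. A minimal numpy sketch with invented predictor and concentration values, purely to show the mechanics, not the study's fitted model:

    import numpy as np

    # hypothetical per-tubewell predictors: iron, phosphate, pH (made-up readings)
    X = np.array([[1.2, 0.8, 7.1],
                  [2.3, 1.1, 6.9],
                  [0.7, 0.4, 7.4],
                  [3.1, 1.5, 6.7]])
    y = np.array([40.0, 85.0, 15.0, 120.0])    # arsenic concentration (ug/L, illustrative)
    A = np.column_stack([np.ones(len(X)), X])  # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(coef)                                # intercept and slopes
    print(A @ coef)                            # fitted arsenic levels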
Statistical ecology comes of age
Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric
2014-01-01
The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151
Statistical ecology comes of age.
Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric
2014-12-01
The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.
International Conference on Robust Statistics 2015
Basu, Ayanendranath; Filzmoser, Peter; Mukherjee, Diganta
2016-01-01
This book offers a collection of recent contributions and emerging ideas in the areas of robust statistics presented at the International Conference on Robust Statistics 2015 (ICORS 2015) held in Kolkata during 12–16 January, 2015. The book explores the applicability of robust methods in non-traditional areas, including the use of new techniques such as skew and mixture of skew distributions, scaled Bregman divergences, and multilevel functional data methods; application areas include circular data models and the prediction of mortality and life expectancy. The contributions are both theoretical and applied in nature. Robust statistics is a relatively young branch of the statistical sciences that is rapidly emerging as the bedrock of statistical analysis in the 21st century due to its flexible nature and wide scope. Robust statistics supports the application of parametric and other inference techniques over a broader domain than the strictly interpreted model scenarios employed in classical statistics…
Components of the Pearson-Fisher chi-squared statistic
Directory of Open Access Journals (Sweden)
G. D. Rayner
2002-01-01
interpretation of the corresponding test statistic components has not previously been investigated. This paper provides the necessary details, as well as an overview of the decomposition options available, and revisits two published examples.
Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures
Energy Technology Data Exchange (ETDEWEB)
Udey, Ruth Norma [Michigan State Univ., East Lansing, MI (United States)
2013-01-01
Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.
The Role of the Sampling Distribution in Understanding Statistical Inference
Lipson, Kay
2003-01-01
Many statistics educators believe that few students develop the level of conceptual understanding essential for them to apply correctly the statistical techniques at their disposal and to interpret their outcomes appropriately. It is also commonly believed that the sampling distribution plays an important role in developing this understanding.…
Statistical and Methodological Considerations for the Interpretation of Intranasal Oxytocin Studies.
Walum, Hasse; Waldman, Irwin D; Young, Larry J
2016-02-01
Over the last decade, oxytocin (OT) has received focus in numerous studies associating intranasal administration of this peptide with various aspects of human social behavior. These studies in humans are inspired by animal research, especially in rodents, showing that central manipulations of the OT system affect behavioral phenotypes related to social cognition, including parental behavior, social bonding, and individual recognition. Taken together, these studies in humans appear to provide compelling, but sometimes bewildering, evidence for the role of OT in influencing a vast array of complex social cognitive processes in humans. In this article, we investigate to what extent the human intranasal OT literature lends support to the hypothesis that intranasal OT consistently influences a wide spectrum of social behavior in humans. We do this by considering statistical features of studies within this field, including factors like statistical power, prestudy odds, and bias. Our conclusion is that intranasal OT studies are generally underpowered and that there is a high probability that most of the published intranasal OT findings do not represent true effects. Thus, the remarkable reports that intranasal OT influences a large number of human social behaviors should be viewed with healthy skepticism, and we make recommendations to improve the reliability of human OT studies in the future. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
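The interplay of power, prestudy odds, and the false-positive rate that the authors invoke can be made concrete with the standard positive-predictive-value calculation; the numbers below are assumptions chosen for illustration, not estimates from the article.

    # chance that a nominally significant finding is true, ignoring bias:
    # PPV = power * R / (power * R + alpha), with R the prestudy odds (assumed values)
    alpha, power, R = 0.05, 0.20, 0.25
    ppv = power * R / (power * R + alpha)
    print(f"PPV = {ppv:.2f}")  # 0.50: a coin flip, despite the "significant" result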
Statistical methods to monitor the West Valley off-gas system
International Nuclear Information System (INIS)
Eggett, D.L.
1990-01-01
This paper reports on monitoring of the off-gas system for the ceramic melter operated at the West Valley Demonstration Project at West Valley, NY, during melter operation. A one-at-a-time method of monitoring the parameters of the off-gas system is not statistically sound. Therefore, multivariate statistical methods appropriate for the monitoring of many correlated parameters will be used. Monitoring a large number of parameters increases the probability of a false out-of-control signal. If the parameters being monitored are statistically independent, the control limits can be easily adjusted to obtain the desired probability of a false out-of-control signal. The principal component (PC) scores have desirable statistical properties when the original variables are distributed as multivariate normals. Two statistics derived from the PC scores and used to form multivariate control charts are outlined and their distributional properties reviewed.
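One common way to build a multivariate control chart from PC scores is a Hotelling-type T² statistic, sketched below in Python with simulated data; this is a generic illustration under assumed inputs, not the specific pair of statistics used at West Valley.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 8))            # stand-in for correlated off-gas readings
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1]
    k = 3                                    # retain the leading principal components
    P, lam = vecs[:, order[:k]], vals[order[:k]]
    scores = Xc @ P                          # PC scores per observation
    T2 = np.sum(scores**2 / lam, axis=1)     # Hotelling-type T^2 on the retained scores
    limit = np.quantile(T2, 0.99)            # empirical 99% control limit
    print((T2 > limit).sum(), "points flagged")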
Statistical Power in Plant Pathology Research.
Gent, David H; Esker, Paul D; Kriss, Alissa B
2018-01-01
In null hypothesis testing, failure to reject a null hypothesis may have two potential interpretations. One interpretation is that the treatments being evaluated do not have a significant effect, and a correct conclusion was reached in the analysis. Alternatively, a treatment effect may have existed but the conclusion of the study was that there was none. This is termed a Type II error, which is most likely to occur when studies lack sufficient statistical power to detect a treatment effect. In basic terms, the power of a study is the ability to identify a true effect through a statistical test. The power of a statistical test is 1 - (the probability of Type II errors), and depends on the size of treatment effect (termed the effect size), variance, sample size, and significance criterion (the probability of a Type I error, α). Low statistical power is prevalent in scientific literature in general, including plant pathology. However, power is rarely reported, creating uncertainty in the interpretation of nonsignificant results and potentially underestimating small, yet biologically significant relationships. The appropriate level of power for a study depends on the impact of Type I versus Type II errors and no single level of power is acceptable for all purposes. Nonetheless, by convention 0.8 is often considered an acceptable threshold and studies with power less than 0.5 generally should not be conducted if the results are to be conclusive. The emphasis on power analysis should be in the planning stages of an experiment. Commonly employed strategies to increase power include increasing sample sizes, selecting a less stringent threshold probability for Type I errors, increasing the hypothesized or detectable effect size, including as few treatment groups as possible, reducing measurement variability, and including relevant covariates in analyses. Power analysis will lead to more efficient use of resources and more precisely structured hypotheses, and may even
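The planning-stage power analysis the authors recommend is a one-line computation in standard software. A sketch using statsmodels, with a conventional medium effect size (d = 0.5, an assumption) and the 0.8 power threshold mentioned above:

    from statsmodels.stats.power import TTestIndPower

    # per-group sample size for a two-sample t test: d = 0.5 (assumed), alpha = 0.05, power = 0.8
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(round(n_per_group))  # ~64 participants per group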
Concept of probability in statistical physics
Guttmann, Y M
1999-01-01
Foundational issues in statistical mechanics and the more general question of how probability is to be understood in the context of physical theories are both areas that have been neglected by philosophers of physics. This book fills an important gap in the literature by providing a most systematic study of how to interpret probabilistic assertions in the context of statistical mechanics. The book explores both subjectivist and objectivist accounts of probability, and takes full measure of work in the foundations of probability theory, in statistical mechanics, and in mathematical theory. It will be of particular interest to philosophers of science, physicists and mathematicians interested in foundational issues, and also to historians of science.
Use of demonstrations and experiments in teaching business statistics
Johnson, D. G.; John, J. A.
2003-01-01
The aim of a business statistics course should be to help students think statistically and to interpret and understand data, rather than to focus on mathematical detail and computation. To achieve this students must be thoroughly involved in the learning process, and encouraged to discover for themselves the meaning, importance and relevance of statistical concepts. In this paper we advocate the use of experiments and demonstrations as aids to achieving these goals. A number of demonstrations...
Wind Statistics from a Forested Landscape
DEFF Research Database (Denmark)
Arnqvist, Johan; Segalini, Antonio; Dellwik, Ebba
2015-01-01
An analysis and interpretation of measurements from a 138-m tall tower located in a forested landscape is presented. Measurement errors and statistical uncertainties are carefully evaluated to ensure high data quality. A 40° wide wind-direction sector is selected as the most representative for large-scale forest conditions, and from that sector first-, second- and third-order statistics, as well as analyses regarding the characteristic length scale, the flux-profile relationship and surface roughness, are presented for a wide range of stability conditions. The results are discussed…
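First-, second- and third-order statistics of a velocity record are straightforward to compute; a sketch with a mock wind-speed series (the tower data themselves are not reproduced here):

    import numpy as np

    rng = np.random.default_rng(7)
    u = 8.0 + 2.0 * rng.normal(size=36000)  # mock 10 Hz streamwise velocity record (m/s), 1 h
    m, s = u.mean(), u.std(ddof=1)
    print(m)                                # first-order: mean wind speed
    print(s**2)                             # second-order: velocity variance
    print(np.mean(((u - m) / s) ** 3))      # third-order: skewness (~0 for this Gaussian mock)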
DEFF Research Database (Denmark)
Schneider, Jesper Wiborg
2012-01-01
In this paper we discuss and question the use of statistical significance tests in relation to university rankings as recently suggested. We outline the assumptions behind and interpretations of statistical significance tests and relate this to examples from the recent SCImago Institutions Rankin...
An 'electronic' extramural course in epidemiology and medical statistics.
Ostbye, T
1989-03-01
This article describes an extramural university course in epidemiology and medical statistics taught using a computer conferencing system, microcomputers and data communications. Computer conferencing was shown to be a powerful, yet quite easily mastered, vehicle for distance education. It allows health personnel unable to attend regular classes due to geographical or time constraints, to take part in an interactive learning environment at low cost. This overcomes part of the intellectual and social isolation associated with traditional correspondence courses. Teaching of epidemiology and medical statistics is well suited to computer conferencing, even if the asynchronicity of the medium makes discussion of the most complex statistical concepts a little cumbersome. Computer conferencing may also prove to be a useful tool for teaching other medical and health related subjects.
Degree-based statistic and center persistency for brain connectivity analysis.
Yoo, Kwangsun; Lee, Peter; Chung, Moo K; Sohn, William S; Chung, Sun Ju; Na, Duk L; Ju, Daheen; Jeong, Yong
2017-01-01
Brain connectivity analyses have been widely performed to investigate the organization and functioning of the brain, or to observe changes in neurological or psychiatric conditions. However, connectivity analysis inevitably introduces the problem of mass-univariate hypothesis testing. Although several cluster-wise correction methods have been suggested to address this problem and have been shown to provide high sensitivity, these approaches fundamentally have two drawbacks: a lack of spatial specificity (localization power) and the arbitrariness of the initial cluster-forming threshold. In this study, we propose a novel method, the degree-based statistic (DBS), for performing cluster-wise inference. DBS is designed to overcome the two shortcomings mentioned above. From a network perspective, a few brain regions are of critical importance and are considered to play pivotal roles in network integration. Following this notion, DBS defines a cluster as a set of edges of which one ending node is shared. This definition enables the efficient detection of clusters and their center nodes. Furthermore, a new measure of a cluster, center persistency (CP), was introduced. The efficiency of DBS was demonstrated with a known "ground truth" simulation. We then applied DBS to two experimental datasets and showed that it successfully detects the persistent clusters. In conclusion, by adopting the graph theoretical concept of degree and borrowing the concept of persistence from algebraic topology, DBS can sensitively identify clusters with centric nodes that play pivotal roles in an effect of interest. DBS is potentially widely applicable to various cognitive or clinical situations and allows us to obtain statistically reliable and easily interpretable results. Hum Brain Mapp 38:165-181, 2017. © 2016 Wiley Periodicals, Inc.
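The paper's cluster definition (a set of suprathreshold edges sharing one ending node) can be sketched in a few lines; the edge list below is invented, and the persistency measure itself, which aggregates degrees across thresholds, is omitted.

    import numpy as np

    # hypothetical suprathreshold edges (node pairs) from a connectivity test
    edges = [(0, 3), (1, 3), (2, 3), (3, 5), (4, 6)]
    deg = np.zeros(7, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    center = int(deg.argmax())                    # node shared by the most edges
    cluster = [e for e in edges if center in e]   # DBS-style cluster around that center
    print(center, cluster)                        # the center's degree is the basis of its persistency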
Reading Statistics And Research
Akbulut, Reviewed By Yavuz
2008-01-01
The book demonstrates the best and most conservative ways to decipher and critique research reports, particularly for social science researchers. In addition, new editions of the book are always better organized, effectively structured and meticulously updated in line with developments in the field of research statistics. Even the most trivial issues are revisited and updated in new editions. For instance, purchasers of the previous editions might check the interpretation of skewness and kurtosis…
Nolan, Meaghan M.; Beran, Tanya; Hecker, Kent G.
2012-01-01
Students with positive attitudes toward statistics are likely to show strong academic performance in statistics courses. Multiple surveys measuring students' attitudes toward statistics exist; however, a comparison of the validity and reliability of interpretations based on their scores is needed. A systematic review of relevant electronic…
Philosophical perspectives on quantum chaos: Models and interpretations
Bokulich, Alisa Nicole
2001-09-01
The problem of quantum chaos is a special case of the larger problem of understanding how the classical world emerges from quantum mechanics. While we have learned that chaos is pervasive in classical systems, it appears to be almost entirely absent in quantum systems. The aim of this dissertation is to determine what implications the interpretation of quantum mechanics has for attempts to explain the emergence of classical chaos. There are three interpretations of quantum mechanics that have set out programs for solving the problem of quantum chaos: the standard interpretation, the statistical interpretation, and the deBroglie-Bohm causal interpretation. One of the main conclusions of this dissertation is that an interpretation alone is insufficient for solving the problem of quantum chaos and that the phenomenon of decoherence must be taken into account. Although a completely satisfactory solution of the problem of quantum chaos is still outstanding, I argue that the deBroglie-Bohm interpretation with the help of decoherence outlines the most promising research program to pursue. In addition to making a contribution to the debate in the philosophy of physics concerning the interpretation of quantum mechanics, this dissertation reveals two important methodological lessons for the philosophy of science. First, issues of reductionism and intertheoretic relations cannot be divorced from questions concerning the interpretation of the theories involved. Not only is the exploration of intertheoretic relations a central part of the articulation and interpretation of an individual theory, but the very terms used to discuss intertheoretic relations, such as `state' and `classical limit', are themselves defined by particular interpretations of the theory. The second lesson that emerges is that, when it comes to characterizing the relationship between classical chaos and quantum mechanics, the traditional approaches to intertheoretic relations, namely reductionism and
Interpretive Media Study and Interpretive Social Science.
Carragee, Kevin M.
1990-01-01
Defines the major theoretical influences on interpretive approaches in mass communication, examines the central concepts of these perspectives, and provides a critique of these approaches. States that the adoption of interpretive approaches in mass communication has ignored varied critiques of interpretive social science. Suggests that critical…
Interpreters, Interpreting, and the Study of Bilingualism.
Valdes, Guadalupe; Angelelli, Claudia
2003-01-01
Discusses research on interpreting focused specifically on issues raised by this literature about the nature of bilingualism. Suggests research carried out on interpreting--while primarily produced with a professional audience in mind and concerned with improving the practice of interpreting--provides valuable insights about complex aspects of…
Statistical Signal Process in R Language in the Pharmacovigilance Programme of India.
Kumar, Aman; Ahuja, Jitin; Shrivastava, Tarani Prakash; Kumar, Vipin; Kalaiselvan, Vivekanandan
2018-05-01
The Ministry of Health & Family Welfare, Government of India, initiated the Pharmacovigilance Programme of India (PvPI) in July 2010. The purpose of the PvPI is to collect data on adverse reactions due to medications, analyze them, and use the findings to recommend informed regulatory interventions, besides communicating the risk to health care professionals and the public. The goal of the present study was to apply statistical tools in R to find the relationship between drugs and ADRs for signal detection. Four statistical parameters were proposed for quantitative signal detection: IC025, PRR and PRRlb, chi-square, and N11; we calculated these 4 values using R programming. We analyzed 78,983 drug-ADR combinations, with a total combination count of 4,20,060. During the calculation of the statistical parameters, we used 3 variables: (1) N11 (number of counts), (2) N1. (drug margin), and (3) N.1 (ADR margin). The structure and calculation of these 4 statistical parameters in the R language are easily understandable. On the basis of the IC value (IC value > 0), out of the 78,983 drug-ADR combinations we found 8,667 to be significantly associated. The calculation of statistical parameters in the R language is time saving and makes it easy to identify new signals in the Indian ICSR (Individual Case Safety Reports) database.
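The four disproportionality parameters can be written out directly. The sketch below is in Python rather than the R used by the study, with invented counts; the IC025 line uses a common closed-form approximation to the lower credibility bound, and the screening thresholds (PRR >= 2, chi-square >= 4, N11 >= 3) are the conventional Evans criteria, not necessarily those applied by PvPI.

    import math

    # invented counts for one drug-ADR pair: pair count, drug margin, ADR margin, grand total
    N11, N1_, N_1, N = 25, 1200, 800, 420060
    N10, N01 = N1_ - N11, N_1 - N11
    N00 = N - N1_ - N_1 + N11
    PRR = (N11 / N1_) / (N01 / (N - N1_))               # proportional reporting ratio
    chi2 = N * (N11 * N00 - N10 * N01) ** 2 / (N1_ * N_1 * (N - N1_) * (N - N_1))
    IC = math.log2(N11 * N / (N1_ * N_1))               # information component
    IC025 = IC - 3.3 * N11 ** -0.5 - 2.0 * N11 ** -1.5  # approximate lower 95% bound
    signal = IC025 > 0 and PRR >= 2 and chi2 >= 4 and N11 >= 3
    print(f"PRR={PRR:.1f} chi2={chi2:.1f} IC025={IC025:.2f} signal={signal}")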
On the interpretations of Langevin stochastic equation in different coordinate systems
International Nuclear Information System (INIS)
Martinez, E.; Lopez-Diaz, L.; Torres, L.; Alejos, O.
2004-01-01
The stochastic Langevin Landau-Lifshitz equation is usually utilized in the micromagnetics formalism to account for thermal effects. Commonly, two different interpretations of the stochastic integrals can be made: Ito and Stratonovich. In this work, the Langevin-Landau-Lifshitz (LLL) equation is written in both Cartesian and spherical coordinates. If spherical coordinates are employed, the noise is additive, and therefore the Ito and Stratonovich solutions coincide. This is not the case when the LLL equation is written in Cartesian coordinates. In that case, the Langevin equation must be interpreted in the Stratonovich sense in order to reproduce the correct statistical results. Nevertheless, the statistics of the numerical results obtained from the Euler-Ito and Euler-Stratonovich schemes are equivalent due to the additional numerical constraint imposed in the Cartesian system after each time step, which itself assures that the magnitude of the magnetization is preserved.
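The Ito/Stratonovich discrepancy for multiplicative noise, and its disappearance for additive noise, is easy to demonstrate numerically. The sketch below uses a scalar toy SDE dX = sigma*X dW rather than the full LLL equation: the Euler-Maruyama scheme converges to the Ito solution (mean conserved), while the stochastic Heun scheme converges to the Stratonovich one (mean grows as exp(sigma^2 t / 2)).

    import numpy as np

    rng = np.random.default_rng(0)
    sigma, T, nsteps, npaths = 0.5, 1.0, 500, 10000
    dt = T / nsteps
    x_ito = np.ones(npaths)  # Euler-Maruyama: Ito interpretation
    x_str = np.ones(npaths)  # stochastic Heun: Stratonovich interpretation
    for _ in range(nsteps):
        dW = rng.normal(0.0, np.sqrt(dt), npaths)
        x_ito = x_ito + sigma * x_ito * dW
        pred = x_str + sigma * x_str * dW                 # predictor step
        x_str = x_str + 0.5 * sigma * (x_str + pred) * dW # trapezoidal corrector
    print(x_ito.mean())  # ~1.00: the Ito mean is conserved for dX = sigma*X dW
    print(x_str.mean())  # ~exp(sigma**2 * T / 2) = 1.13: the Stratonovich drift appears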
Blunt bilateral diaphragmatic rupture—A right side can be easily missed
Directory of Open Access Journals (Sweden)
Maria Michailidou
2015-12-01
Full Text Available Blunt diaphragmatic rupture (BDR) is uncommon, with a reported incidence range of 1%–2%. The true incidence is not known. Bilateral BDR is particularly rare. We present a case of bilateral BDR and argue that the incidence is under-recognised, owing to an easily missed and difficult-to-diagnose right-sided injury. Keywords: Blunt, Diaphragm, Bilateral, Injury
Clearly written, easily comprehended? The readability of websites providing information on epilepsy
Brigo, Francesco; Otte, Wim; Igwe, Stanley C.; Tezzon, Frediano; Nardone, Raffaele
2015-01-01
There is a general need for high-quality, easily accessible, and comprehensive health-care information on epilepsy to better inform the general population about this highly stigmatized neurological disorder. The aim of this study was to evaluate the health literacy level of eight popular
Bestelmeyer, Brandon T.; Williamson, Jeb C.; Talbot, Curtis J.; Cates, Greg W.; Duniway, Michael C.; Brown, Joel R.
2016-01-01
State-and-transition models (STMs) are useful tools for management, but they can be difficult to use and have limited content. STMs created for groups of related ecological sites could simplify and improve their utility. The amount of information linked to models can be increased using tables that communicate management interpretations and important within-group variability. We created a new web-based information system (the Ecosystem Dynamics Interpretive Tool) to house STMs, associated tabular information, and other ecological site data and descriptors. Fewer, more informative, better organized, and easily accessible STMs should increase the accessibility of science information.
Toward Establishing the Validity of the Resource Interpreter's Self-Efficacy Instrument
Smith, Grant D.
Interpretive rangers serve as one of the major educational resources that visitors may encounter during their visit to a park or other natural area, yet our understanding of their professional growth remains limited. This study helps address this issue by developing an instrument that evaluates the beliefs of resource interpreters regarding their capabilities of communicating with the public. The resulting 11-item instrument was built around the construct of Albert Bandura's self-efficacy theory (Bandura, 1977, 1986, 1997), used guidelines and principles developed over the course of 30 years of teacher efficacy studies (Bandura, 2006; Gibson & Dembo, 1984; Riggs & Enochs, 1990; Tschannen-Moran & Hoy, 2001; Tschannen-Moran, Hoy, & Hoy, 1998), and probed areas of challenge that are unique to the demands of resource interpretation (Brochu & Merriman, 2002; Ham, 1992; Knudson, Cable, & Beck, 2003; Larsen, 2003; Tilden, 1977). A voluntary convenience sample of 364 National Park Service rangers was collected in order to conduct the statistical analyses needed to winnow the draft instrument down from 47 items in its original form to 11 items in its final state. Statistical analyses used in this process included item-total correlation, index of discrimination, exploratory factor analysis, and confirmatory factor analysis.
Quantum Statistics and Entanglement Problems
Trainor, L. E. H.; Lumsden, Charles J.
2002-01-01
Interpretations of quantum measurement theory have been plagued by two questions, one concerning the role of observer consciousness and the other the entanglement phenomenon arising from the superposition of quantum states. We emphasize here the remarkable role of quantum statistics in describing the entanglement problem correctly and discuss the relationship to issues arising from current discussions of intelligent observers in entangled, decohering quantum worlds.
Crunching Numbers: What Cancer Screening Statistics Really Tell Us
Cancer screening studies have shown that more screening does not necessarily translate into fewer cancer deaths. This article explains how to interpret the statistics used to describe the results of screening studies.
Toward smartphone applications for geoparks information and interpretation systems in China
Li, Qian; Tian, Mingzhong; Li, Xingle; Shi, Yihua; Zhou, Xu
2015-11-01
Geopark information and interpretation systems are both necessary infrastructure in geopark planning and construction programs, and they are also essential for geoeducation and geoconservation in geopark tourism. The current state and development of information and interpretation systems in China's geoparks were presented and analyzed in this paper. Statistics showed that fewer than half of the geoparks ran websites, fewer still maintained databases, and less than one percent of all Internet/smartphone applications were used for geopark tourism. The results of our analysis indicated that smartphone applications in geopark information and interpretation systems would provide benefits such as accelerated geopark science popularization and education and facilitated interactive communication between geoparks and tourists.
Vocational students' learning preferences: the interpretability of ipsative data.
Smith, P J
2000-02-01
A number of researchers have argued that ipsative data are not suitable for statistical procedures designed for normative data. Others have argued that the interpretability of such analyses of ipsative data is little affected where the number of variables and the sample size are sufficiently large. The research reported here presents a factor analysis of the scores on the Canfield Learning Styles Inventory for 1,252 students in vocational education. The results of the factor analysis of these ipsative data were examined in the context of existing theory and research on vocational students and lend support to the argument that the factor analysis of ipsative data can provide sensibly interpretable results.
The statistics of multi-step direct reactions
International Nuclear Information System (INIS)
Koning, A.J.; Akkermans, J.M.
1991-01-01
We propose a quantum-statistical framework that provides an integrated perspective on the differences and similarities between the many current models for multi-step direct reactions in the continuum. It is argued that to obtain a statistical theory two physically different approaches are conceivable to postulate randomness, respectively called leading-particle statistics and residual-system statistics. We present a new leading-particle statistics theory for multi-step direct reactions. It is shown that the model of Feshbach et al. can be derived as a simplification of this theory and thus can be founded solely upon leading-particle statistics. The models developed by Tamura et al. and Nishioka et al. are based upon residual-system statistics and hence fall into a physically different class of multi-step direct theories, although the resulting cross-section formulae for the important first step are shown to be the same. The widely used semi-classical models such as the generalized exciton model can be interpreted as further phenomenological simplifications of the leading-particle statistics theory. A more comprehensive exposition will appear before long. (author). 32 refs, 4 figs
Statistical learning from a regression perspective
Berk, Richard A
2016-01-01
This textbook considers statistical learning applications when interest centers on the conditional distribution of the response variable, given a set of predictors, and when it is important to characterize how the predictors are related to the response. As a first approximation, this can be seen as an extension of nonparametric regression. This fully revised new edition includes important developments over the past 8 years. Consistent with modern data analytics, it emphasizes that a proper statistical learning data analysis derives from sound data collection, intelligent data management, appropriate statistical procedures, and an accessible interpretation of results. A continued emphasis on the implications for practice runs through the text. Among the statistical learning procedures examined are bagging, random forests, boosting, support vector machines and neural networks. Response variables may be quantitative or categorical. As in the first edition, a unifying theme is supervised learning that can be trea...
Data analysis using the Gnu R system for statistical computation
Energy Technology Data Exchange (ETDEWEB)
Simone, James; /Fermilab
2011-07-01
R is a language and system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R: it has become the standard language for developing statistical techniques; it is actively developed by a large and growing global user community; it is open-source software; it is highly portable (Linux, OS X, and Windows); it has a built-in documentation system; it produces high-quality graphics; and it is easily extensible, with over four thousand extension packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits of lattice n-pt correlation functions.
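The report's examples are written in R; purely as a language-neutral illustration of the kind of chi-square minimization fit it mentions for lattice correlation functions, a single-exponential fit over synthetic data might look like this (all numbers hypothetical; correlations between time slices ignored):

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-in for a lattice 2-pt correlator (synthetic data).
rng = np.random.default_rng(0)
t = np.arange(1, 16)
C = 1.7 * np.exp(-0.45 * t) * (1 + 0.02 * rng.standard_normal(t.size))
sigma = 0.02 * C                       # assumed uncorrelated errors

model = lambda tt, A, m: A * np.exp(-m * tt)
popt, pcov = curve_fit(model, t, C, p0=(1.0, 0.5),
                       sigma=sigma, absolute_sigma=True)
chi2 = np.sum(((C - model(t, *popt)) / sigma) ** 2)
print(popt, chi2 / (t.size - 2))       # fitted (A, m) and chi^2/dof
```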
Gaskin, Cadeyrn J; Happell, Brenda
2014-05-01
Objectives: To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Design: Statistical review. Data sources: Papers published in the 2011 volumes of the 10 highest-ranked nursing journals, based on their 5-year impact factors. Review methods: Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. Results: The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and were not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. Conclusions: The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement.
An easily regenerable enzyme reactor prepared from polymerized high internal phase emulsions
International Nuclear Information System (INIS)
Ruan, Guihua; Wu, Zhenwei; Huang, Yipeng; Wei, Meiping; Su, Rihui; Du, Fuyou
2016-01-01
A large-scale, high-efficiency enzyme reactor based on a polymerized high internal phase emulsion monolith (polyHIPE) was prepared. First, a porous cross-linked polyHIPE monolith was prepared by in-situ thermal polymerization of a high internal phase emulsion containing styrene, divinylbenzene and polyglutaraldehyde. The enzyme TPCK-trypsin was then immobilized on the monolithic polyHIPE. The performance of the resulting enzyme reactor was assessed by its ability to convert N_α-benzoyl-L-arginine ethyl ester to N_α-benzoyl-L-arginine, and by the protein digestibility of bovine serum albumin (BSA) and cytochrome C (Cyt-C). The results showed that the prepared enzyme reactor exhibited high enzyme immobilization efficiency and fast, easily controlled protein digestion. BSA and Cyt-C could be digested in 10 min with sequence coverage of 59% and 78%, respectively. The peptides and residual protein could be rinsed out of the reactor easily, and the reactor could be regenerated with 4 M HCl without any structural destruction. Multiple interconnected chambers with good permeability, fast digestion and easy reproducibility indicated that the polyHIPE enzyme reactor is a good candidate for application in proteomics and catalysis. - Graphical abstract: Schematic illustration of the preparation of a hypercrosslinked polyHIPE immobilized-enzyme reactor for on-column protein digestion. - Highlights: • A reactor was prepared and used for enzyme immobilization and continuous on-column protein digestion. • The new polyHIPE IMER was quite suitable for protein digestion, with good properties. • On-column digestion revealed that the IMER was easily regenerated by HCl without any structural destruction.
Guler, Mustafa; Gursoy, Kadir; Guven, Bulent
2016-01-01
Understanding and interpreting biased data, decision-making in accordance with the data, and critically evaluating situations involving data are among the fundamental skills necessary in the modern world. To develop these required skills, emphasis on statistical literacy in school mathematics has been gradually increased in recent years. The…
STATISTICS IN SERVICE QUALITY ASSESSMENT
Directory of Open Access Journals (Sweden)
Dragana Gardašević
2012-09-01
Full Text Available For any quality evaluation in sports, science, education, and so on, it is useful to collect data in order to construct a strategy for improving the quality of services offered to the user. For this purpose, we use statistical software packages to process the collected data with the aim of increasing customer satisfaction. The principle is demonstrated with the example of student satisfaction ratings at Belgrade Polytechnic, with students as the users and the institution as the service provider. The emphasis here is on statistical analysis as a tool for quality control aimed at improvement, not on the interpretation of results. The approach can therefore also be used as a model in sport to improve overall results.
Pattern statistics on Markov chains and sensitivity to parameter estimation
Directory of Open Access Journals (Sweden)
Nuel Grégory
2006-10-01
Full Text Available Abstract Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take the sequence composition into account, and its parameters usually must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, ...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of high-order Markov models could easily lead to major mistakes, due to the high sensitivity of pattern statistics to parameter estimation.
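Not the paper's delta-method derivation, but a minimal sketch of the binomial-approximation setting it analyzes, with the Markov parameters estimated from the sequence itself (the estimation step whose variability the paper quantifies):

```python
from collections import Counter
from scipy.stats import binom

def word_pvalue(seq, word):
    """Binomial approximation to the p-value of the observed count of
    `word` in `seq` under an order-1 Markov model estimated from `seq`.
    A rough sketch only; overlap structure is ignored."""
    n, w = len(seq), len(word)
    pairs = Counter(seq[i:i + 2] for i in range(n - 1))
    singles = Counter(seq[:n - 1])
    p = singles[word[0]] / (n - 1)                 # start probability
    for a, b in zip(word, word[1:]):
        p *= pairs[a + b] / singles[a]             # transition a -> b
    positions = n - w + 1
    observed = sum(seq[i:i + w] == word for i in range(positions))
    return binom.sf(observed - 1, positions, p)    # P(count >= observed)

print(word_pvalue("ATGCGCGATCGCGCTAGCGCGC", "GCGC"))
```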
Plasmonic Films Can Easily Be Better: Rules and Recipes
2015-01-01
High-quality materials are critical for advances in plasmonics, especially as researchers now investigate quantum effects at the limit of single surface plasmons or exploit ultraviolet- or CMOS-compatible metals such as aluminum or copper. Unfortunately, due to inexperience with deposition methods, many plasmonics researchers deposit metals under the wrong conditions, severely limiting performance unnecessarily. This is then compounded as others follow their published procedures. In this perspective, we describe simple rules collected from the surface-science literature that allow high-quality plasmonic films of aluminum, copper, gold, and silver to be easily deposited with commonly available equipment (a thermal evaporator). Recipes are also provided so that films with optimal optical properties can be routinely obtained. PMID:25950012
Using recurrence plot analysis for software execution interpretation and fault detection
Mosdorf, M.
2015-09-01
This paper shows a method targeted at software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. The results of this analysis are further processed with PCA (Principal Component Analysis), which reduces the number of coefficients used for software execution classification. The method was applied to five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, and SHA-1. The results show that some of the collected traces could easily be assigned to particular algorithms (traces from the Bubble Sort and FIR algorithms), while others are more difficult to distinguish.
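A minimal sketch of the recurrence-plot stage, assuming the trace has already been reduced to numeric instruction codes (all names and parameter values hypothetical):

```python
import numpy as np

def recurrence_plot(trace, eps):
    """Binary recurrence matrix R[i, j] = 1 when trace states i and j
    are within eps of each other. trace: 1-D array of numeric codes,
    e.g. executed-instruction opcodes."""
    d = np.abs(trace[:, None] - trace[None, :])
    return (d <= eps).astype(np.uint8)

# Toy usage: a synthetic "instruction trace" summarized by one simple
# recurrence-quantification coefficient. Stacking such coefficients per
# trace and applying PCA mirrors the paper's second stage.
rng = np.random.default_rng(1)
trace = rng.integers(0, 16, size=200).astype(float)
R = recurrence_plot(trace, eps=0.5)
recurrence_rate = R.mean()
```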
Energy Technology Data Exchange (ETDEWEB)
Lee, Dong Soo; Lee, Jae Sung; Kim, Kyeong Min; Chung, June Key; Lee, Myung Chul [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of)
1998-08-01
We investigated statistical methods for composing a functional brain map of human working memory and the principal factors that affect localization. Repeated PET scans with four successive tasks, consisting of one control and three different activation tasks, were performed on six right-handed normal volunteers for 2 minutes after bolus injections of 925 MBq H₂¹⁵O at intervals of 30 minutes. Image data were analyzed using SPM96 (Statistical Parametric Mapping) implemented in Matlab (Mathworks Inc., U.S.A.). Images from the same subject were spatially registered and normalized using linear and nonlinear transformations. The significance of the difference between the control and each activation state was estimated at every voxel based on the general linear model. Differences in global counts were removed using analysis of covariance (ANCOVA) with global activity as a covariate. Using the mean and variance for each condition, adjusted by ANCOVA, a t-statistic was computed at every voxel. To ease interpretation of the results, t-values were transformed to the standard Gaussian distribution (Z-scores). All subjects carried out the activation and control tasks successfully, with an average rate of correct answers of 95%. The numbers of activated blobs were 4 for verbal memory I, 9 for verbal memory II, 9 for visual memory, and 6 for the conjunctive activation of these three tasks. Verbal working memory activated predominantly left-sided structures, while visual memory activated the right hemisphere. We conclude that rCBF PET imaging and the statistical parametric mapping method are useful for localizing the brain regions involved in verbal and visual working memory.
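The t-to-Z transformation mentioned above is a standard step; a minimal sketch (degrees of freedom and voxel values hypothetical):

```python
import numpy as np
from scipy import stats

def t_to_z(t_map, dof):
    """Convert a voxelwise t-statistic map to Z-scores by matching
    upper-tail probabilities, as in SPM-style analyses (sketch only)."""
    p = stats.t.sf(t_map, dof)    # P(T >= t) under the null
    return stats.norm.isf(p)      # Z with the same tail probability

t_map = np.array([1.2, 3.4, 5.1])  # hypothetical voxel t-values
print(t_to_z(t_map, dof=20))
```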
Statistical considerations in the development of injury risk functions.
McMurry, Timothy L; Poplin, Gerald S
2015-01-01
We address 4 frequently misunderstood and important statistical ideas in the construction of injury risk functions. These include the similarities of survival analysis and logistic regression, the correct scale on which to construct pointwise confidence intervals for injury risk, the ability to discern which form of injury risk function is optimal, and the handling of repeated tests on the same subject. The statistical models are explored through simulation and examination of the underlying mathematics. We provide recommendations for the statistically valid construction and correct interpretation of single-predictor injury risk functions. This article aims to provide useful and understandable statistical guidance to improve the practice in constructing injury risk functions.
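One of the points above, constructing pointwise confidence intervals on the linear-predictor scale and then transforming to the risk scale, can be sketched as follows (synthetic data; this is not the authors' model):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(10, 60, 300)                        # e.g. impact severity
y = (rng.random(300) < 1 / (1 + np.exp(-(0.15 * x - 5)))).astype(float)

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

grid = sm.add_constant(np.linspace(10, 60, 50))
eta = grid @ fit.params                             # linear predictor
se = np.sqrt(np.einsum('ij,jk,ik->i', grid, fit.cov_params(), grid))
lo, hi = eta - 1.96 * se, eta + 1.96 * se
# Transform the logit-scale band to the risk scale (stays within [0, 1]).
risk, risk_lo, risk_hi = (1 / (1 + np.exp(-v)) for v in (eta, lo, hi))
```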
Statistical analogues of thermodynamic extremum principles
Ramshaw, John D.
2018-05-01
As shown by Jaynes, the canonical and grand canonical probability distributions of equilibrium statistical mechanics can be simply derived from the principle of maximum entropy, in which the statistical entropy S = -k_B Σ_i p_i log p_i is maximised subject to constraints on the mean values of the energy E and/or number of particles N in a system of fixed volume V. The Lagrange multipliers associated with those constraints are then found to be simply related to the temperature T and chemical potential μ. Here we show that the constrained maximisation of S is equivalent to, and can therefore be replaced by, the essentially unconstrained minimisation of the obvious statistical analogues of the Helmholtz free energy F = E - TS and the grand potential J = F - μN. Those minimisations are more easily performed than the maximisation of S because they formally eliminate the constraints on the mean values of E and N and their associated Lagrange multipliers. This procedure significantly simplifies the derivation of the canonical and grand canonical probability distributions, and shows that the well-known extremum principles for the various thermodynamic potentials possess natural statistical analogues which are equivalent to the constrained maximisation of S.
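A compact restatement of the argument in standard maximum-entropy notation (a sketch of textbook algebra, with β = 1/k_B T):

```latex
% Maximise S = -k_B \sum_i p_i \ln p_i subject to \sum_i p_i = 1 and
% \sum_i p_i E_i = E; the Lagrange-multiplier solution is
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},
\qquad \beta = \frac{1}{k_B T}.
% The same p_i is the unconstrained minimiser (up to normalisation) of the
% statistical analogue of the Helmholtz free energy:
F[p] = \sum_i p_i E_i + k_B T \sum_i p_i \ln p_i = E - TS .
```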
Rabin, Laura A.; Nutter-Upham, Katherine E.
2010-01-01
We describe an active learning exercise intended to improve undergraduate students' understanding of statistics by grounding complex concepts within a meaningful, applied context. Students in a journal excerpt activity class read brief excerpts of statistical reporting from published research articles, answered factual and interpretive questions,…
Tools to support interpreting multiple regression in the face of multicollinearity.
Kraha, Amanda; Turner, Heather; Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K
2012-01-01
While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using one technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to a regression model, but to each other as well. Some of the techniques to interpret MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all possible subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article will review a set of techniques to interpret MR effects, identify the elements of the data on which the methods focus, and identify statistical software to support such analyses.
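Two of the listed indices, standardized beta weights and structure coefficients, are easy to compute side by side; a minimal sketch with synthetic inputs assumed:

```python
import numpy as np

def beta_and_structure(X, y):
    """Standardized beta weights and structure coefficients for an
    ordinary least-squares model (sketch; X: (n, p), y: (n,)).

    Structure coefficient of predictor j = corr(x_j, y_hat)."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # standardized betas
    yhat = Xs @ beta
    structure = np.array([np.corrcoef(Xs[:, j], yhat)[0, 1]
                          for j in range(X.shape[1])])
    return beta, structure
```

Under multicollinearity the two can tell different stories: a predictor with a near-zero beta weight may still have a large structure coefficient, which is exactly why the article recommends consulting several indices.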
An objective interpretation of Lagrangian quantum mechanics
International Nuclear Information System (INIS)
Roberts, K.V.
1978-01-01
Unlike classical mechanics, the Copenhagen interpretation of quantum mechanics does not provide an objective space-time picture of the actual history of a physical system. This paper suggests how the conceptual foundations of quantum mechanics can be reformulated, without changing the mathematical content of the theory or its detailed agreement with experiment and without introducing any hidden variables, in order to provide an objective, covariant, Lagrangian description of reality which is deterministic and time-symmetric on the microscopic scale. The basis of this description can be expressed either as an action functional or as a summation over Feynman diagrams or paths. The probability laws associated with the quantum-mechanical measurement process, and the asymmetry in time of the principles of macroscopic causality and of the laws of statistical mechanics, are interpreted as consequences of the particular boundary conditions that apply to the actual universe. The objective interpretation does not include the observer and the measurement process among the fundamental concepts of the theory, but it does not entail a revision of the ideas of determinism and of time, since in a Lagrangian theory both initial and final boundary conditions on the action functional are required. (author)
Data analysis and interpretation for environmental surveillance
International Nuclear Information System (INIS)
1992-06-01
The Data Analysis and Interpretation for Environmental Surveillance Conference was held in Lexington, Kentucky, February 5--7, 1990. The conference was sponsored by what is now the Office of Environmental Compliance and Documentation, Oak Ridge National Laboratory. Participants included technical professionals from all Martin Marietta Energy Systems facilities, Westinghouse Materials Company of Ohio, Pacific Northwest Laboratory, and several technical support contractors. Presentations at the conference ranged over the full spectrum of issues that affect the analysis and interpretation of environmental data. Topics included tracking systems for samples and schedules associated with ongoing programs; coalescing data from a variety of sources and pedigrees into integrated databases; methods for evaluating the quality of environmental data through empirical estimates of parameters such as charge balance, pH, and specific conductance; statistical applications in the interpretation of environmental information; and uses of environmental information in risk and dose assessments. Hearing about and discussing this wide variety of topics provided an opportunity to capture the subtlety of each discipline and to appreciate the continuity that is required among the disciplines in order to perform high-quality environmental information analysis.
Directory of Open Access Journals (Sweden)
Rochelle E. Tractenberg
2016-12-01
Full Text Available Statistical literacy is essential to an informed citizenry, and two emerging trends highlight a growing need for training that achieves this literacy. The first trend is towards "big" data: while automated analyses can exploit massive amounts of data, the interpretation, and possibly more importantly the replication, of results are challenging without adequate statistical literacy. The second trend is that science and scientific publishing are struggling with insufficient or inappropriate statistical reasoning in writing, reviewing, and editing. This paper describes a model for statistical literacy (SL) and its development that can support modern scientific practice. An established curriculum development and evaluation tool, the Mastery Rubric, is integrated with a new developmental model of statistical literacy that reflects the complexity of reasoning and habits of mind that scientists need to cultivate in order to recognize, choose, and interpret statistical methods. This developmental model provides actionable evidence, and explicit opportunities for consequential assessment that serves students, instructors, developers/reviewers/accreditors of a curriculum, and institutions. By supporting the enrichment, rather than increasing the amount, of statistical training in the basic and life sciences, this approach supports curriculum development, evaluation, and delivery to promote statistical literacy for students and a collective quantitative proficiency more broadly.
International Nuclear Information System (INIS)
Altarelli, Fabrizio; Monasson, Remi; Zamponi, Francesco
2007-01-01
For large clause-to-variable ratios, typical K-SAT instances drawn from the uniform distribution have no solution. We argue, based on statistical mechanics calculations using the replica and cavity methods, that rare satisfiable instances from the uniform distribution are very similar to typical instances drawn from the so-called planted distribution, where instances are chosen uniformly between the ones that admit a given solution. It then follows, from a recent article by Feige, Mossel and Vilenchik (2006 Complete convergence of message passing algorithms for some satisfiability problems Proc. Random 2006 pp 339-50), that these rare instances can be easily recognized (in O(log N) time and with probability close to 1) by a simple message-passing algorithm
Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory
2017-04-01
The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to the other inversion techniques based on principle of regularization, Bayesian, minimum norm, maximum entropy on mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need of additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice to the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using the real measurements from a continuous point release conducted in Fusion Field Trials, Dugway Proving Ground, Utah.
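The renormalization algorithm itself is not reproduced here, but the family of weighted minimum-norm estimates it is compared against can be sketched as follows (the weight choice is hypothetical):

```python
import numpy as np

def weighted_min_norm(A, y, w):
    """Weighted minimum-norm source estimate (sketch, not the
    renormalization algorithm itself).

    Solves min ||W^(-1/2) x|| subject to A x = y, giving
    x = W A^T (A W A^T)^(-1) y with W = diag(w).
    A: (m, n) source-receptor matrix, y: (m,) measured concentrations,
    w: (n,) a priori weights apparent to the monitoring network.
    """
    W = np.diag(w)
    G = A @ W @ A.T                    # weighted Gram matrix
    return W @ A.T @ np.linalg.solve(G, y)
```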
Basic statistical tools in research and data analysis
Directory of Open Access Journals (Sweden)
Zulfiqar Ali
2016-01-01
Full Text Available Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting the research findings. Statistical analysis gives meaning to otherwise meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article tries to acquaint the reader with the basic research tools utilised while conducting various studies. It covers a brief outline of the variables, an understanding of quantitative and qualitative variables, and the measures of central tendency. An idea of sample size estimation, power analysis and statistical errors is given. Finally, there is a summary of the parametric and non-parametric tests used for data analysis.
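As a small illustration of the sample-size estimation and power analysis the article outlines, a two-sample t test calculation might look like this (the effect size, alpha and power are hypothetical choices):

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group for a two-sample t test.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # Cohen's d
                                   alpha=0.05,
                                   power=0.80,
                                   alternative='two-sided')
print(round(n_per_group))   # about 64 per group
```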
Ether and interpretation of some physical phenomena and concepts
International Nuclear Information System (INIS)
Rzayev, S.G.
2008-01-01
On the basis of the concept of the existence of an ether, the notions of time, space, matter and physical fields are elaborated, and the essence of such phenomena as corpuscular-wave dualism and the changes of time, scale and mass for moving bodies is explained. The possibility of a transition from the probabilistic-statistical interpretation of quantum phenomena to Laplacian determinism is shown.
Converting analog interpretive data to digital formats for use in database and GIS applications
Flocks, James G.
2004-01-01
There is a growing need by researchers and managers for comprehensive and unified nationwide datasets of scientific data. These datasets must be in a digital format that is easily accessible using database and GIS applications, providing the user with access to a wide variety of current and historical information. Although most data currently being collected by scientists are already in a digital format, there is still a large repository of information in the literature and paper archives. Converting this information into a format accessible by computer applications is typically very difficult and can result in loss of data. However, since scientific data are commonly collected in a repetitious, concise manner (i.e., forms, tables, graphs, etc.), these data can be recovered digitally by using a conversion process that relates the position of an attribute in two-dimensional space to the information that the attribute signifies. For example, if a table contains a certain piece of information in a specific row and column, then the space that the row and column occupies becomes an index of that information. An index key is used to identify the relation between the physical location of the attribute and the information the attribute contains. The conversion process can be achieved rapidly, easily and inexpensively using widely available digitizing and spreadsheet software, and simple programming code. In the geological sciences, sedimentary character is commonly interpreted from geophysical profiles and descriptions of sediment cores. In the field and laboratory, these interpretations were typically transcribed to paper. The information from these paper archives is still relevant and increasingly important for scientists, engineers and managers in understanding the geologic processes affecting our environment. Direct scanning of this information produces a raster facsimile of the data, which allows it to be linked to the electronic world. But true integration of the content with
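A minimal sketch of the position-to-attribute indexing described above, assuming a simple grid-calibrated digitizer (all coordinates and keys are hypothetical):

```python
# A point digitized from a paper table is mapped to (row, column) and
# then, via an index key, to the attribute that cell represents.
ROW_HEIGHT, COL_WIDTH = 12.0, 30.0   # calibrated grid, digitizer units
INDEX_KEY = {(0, 0): "depth_m", (0, 1): "grain_size", (0, 2): "color"}

def cell_for_point(x, y):
    return int(y // ROW_HEIGHT), int(x // COL_WIDTH)

def attribute_for_point(x, y):
    row, col = cell_for_point(x, y)
    return INDEX_KEY.get((row, col))   # None if outside the known layout

print(attribute_for_point(45.0, 5.0))  # -> "grain_size"
```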
An AAA-DDD triply hydrogen-bonded complex easily accessible for supramolecular polymers.
Han, Yi-Fei; Chen, Wen-Qiang; Wang, Hong-Bo; Yuan, Ying-Xue; Wu, Na-Na; Song, Xiang-Zhi; Yang, Lan
2014-12-15
For a complementary hydrogen-bonded complex, when every hydrogen-bond acceptor is on one side and every hydrogen-bond donor is on the other, all secondary interactions are attractive and the complex is highly stable. AAA-DDD (A=acceptor, D=donor) is considered to be the most stable among triply hydrogen-bonded sequences. An easily synthesized and further derivatized AAA-DDD system is very desirable for hydrogen-bonded functional materials. In this case, AAA and DDD, starting from 4-methoxybenzaldehyde, were synthesized with the Hantzsch pyridine synthesis and the Friedländer annulation reaction. The association constant, determined by fluorescence titration in chloroform at room temperature, is 2.09×10⁷ M⁻¹. The AAA and DDD components are not coplanar, but form a V shape in the solid state. Supramolecular polymers based on the AAA-DDD triple hydrogen bond have also been developed. This work may make AAA-DDD triply hydrogen-bonded sequences easily accessible for stimuli-responsive materials. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
SpaSM: A MATLAB Toolbox for Sparse Statistical Modeling
DEFF Research Database (Denmark)
Sjöstrand, Karl; Clemmensen, Line Harder; Larsen, Rasmus
2018-01-01
Applications in biotechnology such as gene expression analysis and image processing have led to a tremendous development of statistical methods with emphasis on reliable solutions to severely underdetermined systems. Furthermore, interpretations of such solutions are of importance, meaning...
A basic introduction to statistics for the orthopaedic surgeon.
Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef
2012-02-01
Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.
LAMINAR FLOW THROUGH A TUBE WITH AN EASILY PENETRABLE ROUGHNESS NEAR AXIS
Directory of Open Access Journals (Sweden)
Є.О. Гаєв
2012-12-01
Full Text Available A mathematical model is suggested, and an investigation carried out, of laminar flow through a round tube with a porous insertion (easily penetrable roughness, EPR) in its middle along the axis. Velocity and shear fields are found analytically for the stable flow region, as well as the hydraulic resistance as a function of EPR density and height.
Confidence Level Computation for Combining Searches with Small Statistics
Junk, Thomas
1999-01-01
This article describes an efficient procedure for computing approximate confidence levels for searches for new particles where the expected signal and background levels are small enough to require the use of Poisson statistics. The results of many independent searches for the same particle may be combined easily, regardless of the discriminating variables which may be measured for the candidate events. The effects of systematic uncertainty in the signal and background models are incorporated.
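A toy single-channel version of the Poisson confidence-level computation (a sketch in the spirit of the method, not the article's full multi-channel combination):

```python
import numpy as np

def cls(n_obs, s, b, ntoys=100_000, seed=0):
    """Toy-MC CL_s for one Poisson counting channel (sketch).
    s, b: expected signal and background; n_obs: observed count.
    Test statistic here is simply the observed count."""
    rng = np.random.default_rng(seed)
    toys_sb = rng.poisson(s + b, ntoys)
    toys_b = rng.poisson(b, ntoys)
    cl_sb = np.mean(toys_sb <= n_obs)   # P(n <= n_obs | s+b)
    cl_b = np.mean(toys_b <= n_obs)     # P(n <= n_obs | b)
    return cl_sb / cl_b if cl_b > 0 else 0.0

# Combining independent channels amounts to a joint test statistic such
# as the likelihood ratio; the single-channel case is shown for brevity.
print(cls(n_obs=3, s=5.0, b=2.5))
```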
The Scythe Statistical Library: An Open Source C++ Library for Statistical Computation
Directory of Open Access Journals (Sweden)
Daniel Pemstein
2011-08-01
Full Text Available The Scythe Statistical Library is an open source C++ library for statistical computation. It includes a suite of matrix manipulation functions, a suite of pseudo-random number generators, and a suite of numerical optimization routines. Programs written using Scythe are generally much faster than those written in commonly used interpreted languages, such as R and MATLAB, and can be compiled on any system with the GNU GCC compiler (and perhaps with other C++ compilers). One of the primary design goals of the Scythe developers has been ease of use for non-expert C++ programmers. Ease of use is provided through three primary mechanisms: (1) operator and function overloading, (2) numerous pre-fabricated utility functions, and (3) clear documentation and example programs. Additionally, Scythe is quite flexible and entirely extensible because the source code is available to all users under the GNU General Public License.
Planar-channeling spatial density under statistical equilibrium
International Nuclear Information System (INIS)
Ellison, J.A.; Picraux, S.T.
1978-01-01
The phase-space density for planar channeled particles has been derived for the continuum model under statistical equilibrium. This is used to obtain the particle spatial probability density as a function of incident angle. The spatial density is shown to depend on only two parameters, a normalized incident angle and a normalized planar spacing. This normalization is used to obtain, by numerical calculation, a set of universal curves for the spatial density and also for the channeled-particle wavelength as a function of amplitude. Using these universal curves, the statistical-equilibrium spatial density and the channeled-particle wavelength can be easily obtained for any case for which the continuum model can be applied. Also, a new one-parameter analytic approximation to the spatial density is developed. This parabolic approximation is shown to give excellent agreement with the exact calculations
The Development of On-Line Statistics Program for Radiation Oncology
International Nuclear Information System (INIS)
Kim, Yoon Jong; Lee, Dong Hoon; Ji, Young Hoon; Lee, Dong Han; Jo, Chul Ku; Kim, Mi Sook; Ru, Sung Rul; Hong, Seung Hong
2001-01-01
Purpose: To develop an on-line statistics program that records radiation oncology information and shares it over the Internet, thereby supplying basic reference data for administrative plans to improve radiation oncology. Materials and methods: Radiation oncology statistics had previously been collected on paper forms from about 52 hospitals. Now the data can be entered through Internet web browsers. The program runs on the Windows NT 4.0 operating system, with Internet Information Server 4.0 (IIS 4.0) as the web server and a Microsoft Access MDB database. Structured Query Language (SQL), Visual Basic, VBScript and JavaScript were used to display the statistics by year and by hospital. Results: The program shows the current status of manpower, research, therapy machines, techniques, brachytherapy, clinical statistics, radiation safety management, institutions, quality assurance and radioisotopes in radiation oncology departments. The database consists of 38 input and 6 output windows. Statistical output windows can be added continuously according to user needs. Conclusion: We have developed a statistics program that processes all of the data of radiation oncology departments for reference information. Users can easily enter the data through Internet web browsers and share the information.
Brief guidelines for methods and statistics in medical research
Ab Rahman, Jamalludin
2015-01-01
This book serves as a practical guide to methods and statistics in medical research. It includes step-by-step instructions on using SPSS software for statistical analysis, as well as relevant examples to help those readers who are new to research in health and medical fields. Simple texts and diagrams are provided to help explain the concepts covered, and print screens for the statistical steps and the SPSS outputs are provided, together with interpretations and examples of how to report on findings. Brief Guidelines for Methods and Statistics in Medical Research offers a valuable quick reference guide for healthcare students and practitioners conducting research in health related fields, written in an accessible style.
Uniform angular overlap model interpretation of the crystal field effect in U(5+) fluoride compounds
Energy Technology Data Exchange (ETDEWEB)
Gajek, Z.; Mulak, J. (W. Trzebiatowski Inst. of Low Temperature and Structure Research, Polish Academy of Sciences, Wroclaw (Poland))
1990-11-01
A uniform interpretation of the crystal field effect in three different U(5+) fluoride compounds, CsUF₆, α-UF₅ and β-UF₅, within the angular overlap model (AOM) is given. Some characteristic relations between the AOM parameters and their distance dependencies resulting from ab initio calculations are introduced and examined from a phenomenological point of view. The traditional simplest approach with only one independent parameter, i.e. e_σ with e_π:e_σ = 0.32 and e_δ = 0, is shown to provide a consistent interpretation of the crystal field effect for the whole class of compounds. The parameters obtained for one compound are easily and successfully extrapolated to the others. The specificity and importance of the e_δ parameter for 5f¹ systems is discussed. (orig.).
The statistical interpretations of counting data from measurements of low-level radioactivity
International Nuclear Information System (INIS)
Donn, J.J.; Wolke, R.L.
1977-01-01
The statistical model appropriate to measurements of low-level or background-dominant radioactivity is examined and the derived relationships are applied to two practical problems involving hypothesis testing: 'Does the sample exhibit a net activity above background' and 'Is the activity of the sample below some preselected limit'. In each of these cases, the appropriate decision rule is formulated, procedures are developed for estimating the preset count which is necessary to achieve a desired probability of detection, and a specific sequence of operations is provided for the worker in the field. (author)
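The first hypothesis test above is conventionally handled with a counting-statistics decision threshold; a minimal sketch in that spirit (not taken from the paper; equal count times and a paired background measurement are assumed):

```python
import math

def critical_net_counts(b, z=1.645):
    """Decision threshold L_c for 'does the sample exhibit net activity
    above background?': reject H0 (no activity) when gross - b > L_c.
    b: expected background counts in the counting interval; Gaussian
    approximation to Poisson, one-sided alpha = 0.05 (z = 1.645).
    Var(gross - background) = 2b when both counts are Poisson(b)."""
    return z * math.sqrt(2 * b)

gross, background = 130, 100
net = gross - background
print(net > critical_net_counts(background))   # detected?
```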
Spriestersbach, Albert; Röhrig, Bernd; du Prel, Jean-Baptist; Gerhold-Ay, Aslihan; Blettner, Maria
2009-09-01
Descriptive statistics are an essential part of biometric analysis and a prerequisite for the understanding of further statistical evaluations, including the drawing of inferences. When data are well presented, it is usually obvious whether the author has collected and evaluated them correctly and in keeping with accepted practice in the field. Statistical variables in medicine may be of either the metric (continuous, quantitative) or categorical (nominal, ordinal) type. Easily understandable examples are given. Basic techniques for the statistical description of collected data are presented and illustrated with examples. The goal of a scientific study must always be clearly defined. The definition of the target value or clinical endpoint determines the level of measurement of the variables in question. Nearly all variables, whatever their level of measurement, can be usefully presented graphically and numerically. The level of measurement determines what types of diagrams and statistical values are appropriate. There are also different ways of presenting combinations of two independent variables graphically and numerically. The description of collected data is indispensable. If the data are of good quality, valid and important conclusions can already be drawn when they are properly described. Furthermore, data description provides a basis for inferential statistics.
Principles of Statistics: What the Sports Medicine Professional Needs to Know.
Riemann, Bryan L; Lininger, Monica R
2018-07-01
Understanding the results and statistics reported in original research remains a large challenge for many sports medicine practitioners and, in turn, may be among one of the biggest barriers to integrating research into sports medicine practice. The purpose of this article is to provide minimal essentials a sports medicine practitioner needs to know about interpreting statistics and research results to facilitate the incorporation of the latest evidence into practice. Topics covered include the difference between statistical significance and clinical meaningfulness; effect sizes and confidence intervals; reliability statistics, including the minimal detectable difference and minimal important difference; and statistical power. Copyright © 2018 Elsevier Inc. All rights reserved.
New Jersey StreamStats: A web application for streamflow statistics and basin characteristics
Watson, Kara M.; Janowicz, Jon A.
2017-08-02
StreamStats is an interactive, map-based web application from the U.S. Geological Survey (USGS) that allows users to easily obtain streamflow statistics and watershed characteristics for both gaged and ungaged sites on streams throughout New Jersey. Users can determine flood magnitude and frequency, monthly flow-duration, monthly low-flow frequency statistics, and watershed characteristics for ungaged sites by selecting a point along a stream, or they can obtain this information for streamgages by selecting a streamgage location on the map. StreamStats provides several additional tools useful for water-resources planning and management, as well as for engineering purposes. StreamStats is available for most states and some river basins through a single web portal.Streamflow statistics for water resources professionals include the 1-percent annual chance flood flow (100-year peak flow) used to define flood plain areas and the monthly 7-day, 10-year low flow (M7D10Y) used in water supply management and studies of recreation, wildlife conservation, and wastewater dilution. Additionally, watershed or basin characteristics, including drainage area, percent area forested, and average percent of impervious areas, are commonly used in land-use planning and environmental assessments. These characteristics are easily derived through StreamStats.
The challenges of transportation/traffic statistics in Japan and directions for the future
Directory of Open Access Journals (Sweden)
Shigeru Kawasaki
2015-07-01
Full Text Available In order to respond to new challenges in transportation and traffic problems, it is essential to enhance the statistics in this field that provide the basis for policy research. Many of the statistics in this field in Japan are "official statistics" created by the government. This paper reviews the current status of transportation and traffic statistics (hereinafter "transportation statistics") in Japan. Furthermore, it discusses the challenges such statistics face in the new environment and the direction they should take in the future. For Japan's transportation statistics to play a vital role in more sophisticated analyses, it is necessary to improve the environment that facilitates the use of microdata for analysis. It is also necessary to establish an environment where big data can more easily be used for compiling official statistics and conducting policy research. To this end, close cooperation among government, academia, and business will be essential.
Festing, Michael F W
2014-01-01
The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretation of the results, as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data pose problems due to the large number of statistical tests involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. The original authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. However, by using standardised effect sizes (SESs), a range of graphical methods can be applied and an overall assessment of the mean absolute response can be made. The approach is an extension, not a replacement, of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare its findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where a no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.
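A sketch of the SES computation and a permutation variant of the group-comparison test described above (the data layout and resampling scheme are illustrative assumptions, not the author's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(42)

def ses(treated, control):
    """Standardised effect size (Cohen's d style) for one biomarker."""
    sp = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    return (treated.mean() - control.mean()) / sp

def mean_abs_ses_pvalue(treated, control, nresamples=5_000):
    """Resampling test of the mean absolute SES across biomarkers
    (columns of treated/control), relabelling group membership."""
    def stat(t, c):
        return np.mean([abs(ses(t[:, j], c[:, j]))
                        for j in range(t.shape[1])])
    obs = stat(treated, control)
    pooled = np.vstack([treated, control])
    n_t = treated.shape[0]
    draws = []
    for _ in range(nresamples):
        idx = rng.permutation(pooled.shape[0])   # relabel the groups
        draws.append(stat(pooled[idx[:n_t]], pooled[idx[n_t:]]))
    return np.mean(np.array(draws) >= obs)
```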
Design of two easily-testable VLSI array multipliers
Energy Technology Data Exchange (ETDEWEB)
Ferguson, J.; Shen, J.P.
1983-01-01
Array multipliers are well-suited to VLSI implementation because of the regularity of their iterative structure. However, most VLSI circuits are very difficult to test. This paper shows that, with appropriate cell design, array multipliers can be made very easily testable. An array multiplier is called c-testable if all its adder cells can be exhaustively tested with only a constant number of test patterns. The testability of two well-known array multiplier structures is studied. The conventional design of the carry-save array multiplier is shown not to be c-testable. However, a modified design, using a modified adder cell, is presented and shown to be c-testable with only 16 test patterns. Similar results are obtained for the Baugh-Wooley two's-complement array multiplier: a modified design is shown to be c-testable with 55 test patterns. The implementation of a practical c-testable 16×16 array multiplier is also presented. 10 references.
Hendikawati, P.; Arifudin, R.; Zahid, M. Z.
2018-03-01
This study aims to design an Android statistics data analysis application that can be accessed through mobile devices, making it easier for users to access. The application covers various topics of basic statistics along with parametric statistical data analysis. Its output is parametric statistical analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed in the Java programming language; the server side uses PHP with the CodeIgniter framework, and the database is MySQL. The system development methodology is the Waterfall methodology, with the stages of analysis, design, coding, testing, implementation, and system maintenance. This statistical data analysis application is expected to support statistics lectures and to make statistical analysis on mobile devices easier for students to understand.
Vasikaran, Samuel
2008-08-01
* Clinical laboratories should be able to offer interpretation of the results they produce.
* At a minimum, contact details for interpretative advice should be available on laboratory reports. Interpretative comments may be verbal, or written and printed.
* Printed comments on reports should be offered judiciously, only where they would add value; no comment is preferable to an inappropriate or dangerous comment.
* Interpretation should be based on locally agreed or nationally recognised clinical guidelines where available.
* Standard tied comments ("canned" comments) can have some limited use. Individualised narrative comments may be particularly useful for tests that are new, complex or unfamiliar to the requesting clinicians, and where clinical details are available.
* Interpretative commenting should only be provided by appropriately trained and credentialed personnel.
* Audit of comments and continued professional development of the personnel providing them are important for quality assurance.
Statistics corner: A guide to appropriate use of correlation coefficient in medical research.
Mukaka, M M
2012-09-01
Correlation is a statistical method used to assess a possible linear association between two continuous variables. It is simple both to calculate and to interpret. However, misuse of correlation is so common among researchers that some statisticians have wished that the method had never been devised at all. The aim of this article is to provide a guide to the appropriate use of correlation in medical research and to highlight some common misuses. Examples of applications of the correlation coefficient are provided using data from statistical simulations as well as real data. A rule of thumb for interpreting the size of a correlation coefficient is also provided.
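A minimal sketch of computing and informally sizing a correlation on synthetic data (the thresholds shown are one common rule of thumb; cut-offs vary between authors):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)     # synthetic paired data

r, p = stats.pearsonr(x, y)            # linear association
rho, p_s = stats.spearmanr(x, y)       # rank-based alternative

# One common rule of thumb for |r|:
# < 0.3 negligible, 0.3-0.5 low, 0.5-0.7 moderate,
# 0.7-0.9 high, > 0.9 very high.
print(f"Pearson r = {r:.2f} (p = {p:.3f}), Spearman rho = {rho:.2f}")
```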
The Impact of Language Experience on Language and Reading: A Statistical Learning Approach
Seidenberg, Mark S.; MacDonald, Maryellen C.
2018-01-01
This article reviews the important role of statistical learning for language and reading development. Although statistical learning--the unconscious encoding of patterns in language input--has become widely known as a force in infants' early interpretation of speech, the role of this kind of learning for language and reading comprehension in…
Chang, Xiaoyen Y.; Sewell, Thomas D.; Raff, Lionel M.; Thompson, Donald L.
1992-11-01
The possibility of utilizing different types of power spectra obtained from classical trajectories as a diagnostic tool to identify the presence of nonstatistical dynamics is explored by using the unimolecular bond-fission reactions of 1,2-difluoroethane and the 2-chloroethyl radical as test cases. In previous studies, the reaction rates for these systems were calculated by using a variational transition-state theory and classical trajectory methods. A comparison of the results showed that 1,2-difluoroethane is a nonstatistical system, while the 2-chloroethyl radical behaves statistically. Power spectra for these two systems have been generated under various conditions. The characteristics of these spectra are as follows: (1) The spectra for the 2-chloroethyl radical are always broader and more coupled to other modes than is the case for 1,2-difluoroethane. This is true even at very low levels of excitation. (2) When an internal energy near or above the dissociation threshold is initially partitioned into a local C-H stretching mode, the power spectra for 1,2-difluoroethane broaden somewhat, but discrete and somewhat isolated bands are still clearly evident. In contrast, the analogous power spectra for the 2-chloroethyl radical exhibit a near complete absence of isolated bands. The general appearance of the spectrum suggests a very high level of mode-to-mode coupling, large intramolecular vibrational energy redistribution (IVR) rates, and global statistical behavior. (3) The appearance of the power spectrum for the 2-chloroethyl radical is unaltered regardless of whether the initial C-H excitation is in the CH2 or the CH2Cl group. This result also suggests statistical behavior. These results are interpreted to mean that power spectra may be used as a diagnostic tool to assess the statistical character of a system. The presence of a diffuse spectrum exhibiting a nearly complete loss of isolated structures indicates that the dissociation dynamics of the molecule will
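A minimal sketch of generating such a power spectrum from a single trajectory coordinate (a toy two-oscillator signal; frequencies and time step hypothetical):

```python
import numpy as np

def power_spectrum(q, dt):
    """Power spectrum of a trajectory coordinate q(t) (sketch).
    Broad, strongly mixed bands suggest fast IVR and statistical
    behaviour; sharp isolated bands suggest nonstatistical dynamics."""
    q = q - q.mean()
    spec = np.abs(np.fft.rfft(q)) ** 2
    freqs = np.fft.rfftfreq(q.size, d=dt)
    return freqs, spec

# Toy trajectory: two weakly coupled oscillators.
t = np.arange(0, 2_000) * 0.01
q = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 2.7 * t)
freqs, spec = power_spectrum(q, dt=0.01)
```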
SDE decomposition and A-type stochastic interpretation in nonequilibrium processes
Yuan, Ruoshi; Tang, Ying; Ao, Ping
2017-12-01
An innovative theoretical framework for stochastic dynamics based on the decomposition of a stochastic differential equation (SDE) into a dissipative component, a detailed-balance-breaking component, and a dual-role potential landscape has been developed, which has fruitful applications in physics, engineering, chemistry, and biology. It introduces the A-type stochastic interpretation of the SDE beyond the traditional Ito or Stratonovich interpretation or even the α-type interpretation for multidimensional systems. The potential landscape serves as a Hamiltonian-like function in nonequilibrium processes without detailed balance, which extends this important concept from equilibrium statistical physics to the nonequilibrium region. A question on the uniqueness of the SDE decomposition was recently raised. Our review of both the mathematical and physical aspects shows that uniqueness is guaranteed. The demonstration leads to a better understanding of the robustness of the novel framework. In addition, we discuss related issues including the limitations of an approach to obtaining the potential function from a steady-state distribution.
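In the notation commonly used for this framework, the decomposition can be sketched as follows (a sketch only; conventions for noise strength and normalisation vary between presentations):

```latex
% Decomposition of the SDE \dot{q} = f(q) + \zeta(q,t) into the form
[S(q) + A(q)]\,\dot{q} = -\nabla \phi(q) + \hat{\zeta}(q,t),
% with S(q) symmetric (dissipative), A(q) antisymmetric (detailed-balance
% breaking) and \phi the dual-role potential; under the A-type
% interpretation the steady state is \rho_{ss}(q) \propto e^{-\phi(q)}.
```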
Emergence of quantum mechanics from classical statistics
International Nuclear Information System (INIS)
Wetterich, C
2009-01-01
The conceptual setting of quantum mechanics is subject to an ongoing debate from its beginnings until now. The consequences of the apparent differences between quantum statistics and classical statistics range from philosophical interpretations to practical issues such as quantum computing. In this note we demonstrate how quantum mechanics can emerge from classical statistical systems. We discuss conditions and circumstances for this to happen. Quantum systems describe isolated subsystems of classical statistical systems with infinitely many states. While infinitely many classical observables 'measure' properties of the subsystem and its environment, the state of the subsystem can be characterized by the expectation values of only a few probabilistic observables. They define a density matrix, and all the usual laws of quantum mechanics follow. No concepts beyond classical statistics are needed for quantum physics - the differences are only apparent and result from the particularities of those classical statistical systems which admit a quantum mechanical description. In particular, we show how the non-commuting properties of quantum operators are associated with the use of conditional probabilities within the classical system, and how a unitary time evolution reflects the isolation of the subsystem.
A mathematical model for interpretable clinical decision support with applications in gynecology.
Directory of Open Access Journals (Sweden)
Vanya M C A Van Belle
Full Text Available Over time, methods for the development of clinical decision support (CDS) systems have evolved from interpretable and easy-to-use scoring systems to very complex and non-interpretable mathematical models. In order to accomplish effective decision support, CDS systems should provide information on how the model arrives at a certain decision. To address the trade-off between performance, interpretability and applicability of CDS systems, this paper proposes an innovative model structure that automatically leads to interpretable and easily applicable models. The resulting models can be used to guide clinicians when deciding upon the appropriate treatment and estimating patient-specific risks, and to improve communication with patients. We propose the interval coded scoring (ICS) system, which imposes that the effect of each variable on the estimated risk is constant within consecutive intervals. The number and position of the intervals are automatically obtained by solving an optimization problem, which additionally performs variable selection. The resulting model can be visualised by means of appealing scoring tables and color bars. ICS models can be used within software packages, in smartphone applications, or on paper, which is particularly useful for bedside medicine and home-monitoring. The ICS approach is illustrated on two gynecological problems: diagnosis of malignancy of ovarian tumors using a dataset containing 3,511 patients, and prediction of first trimester viability of pregnancies using a dataset of 1,435 women. Comparison of the performance of the ICS approach with a range of prediction models proposed in the literature illustrates the ability of ICS to combine optimal performance with the interpretability of simple scoring systems. The ICS approach can improve patient-clinician communication and will provide additional insights into the importance and influence of available variables. Future challenges include extensions of the
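As a rough sketch of the core ICS idea - a risk score that is piecewise constant over intervals of each variable - the following toy example bins a single hypothetical predictor into fixed intervals and converts logistic-regression coefficients into integer points. The interval edges are fixed here for brevity, whereas ICS obtains them by solving an optimization problem; the data, edges, and scikit-learn model are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
age = rng.uniform(20, 80, 1000)                              # hypothetical predictor
y = rng.random(1000) < 1 / (1 + np.exp(-(age - 50) / 10))    # synthetic outcome

# Fixed interval edges stand in for the optimized cut points of ICS
edges = np.array([20, 35, 50, 65, 80])
bins = np.digitize(age, edges[1:-1])          # interval index per patient
X = np.eye(len(edges) - 1)[bins]              # one-hot interval indicators

model = LogisticRegression().fit(X, y)
# Rescale coefficients so the smallest effect is about one point
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min())
for (lo, hi), p in zip(zip(edges[:-1], edges[1:]), points):
    print(f"age {lo:>2.0f}-{hi:<2.0f}: {p:+.0f} points")
```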
The application of bayesian statistic in data fit processing
International Nuclear Information System (INIS)
Guan Xingyin; Li Zhenfu; Song Zhaohui
2010-01-01
The rationale and drawbacks of the least-squares fitting that is usually used in data processing are analyzed, and the theory and standard methods for applying Bayesian statistics to data processing are presented in detail. As the analysis shows, the Bayesian approach avoids the restrictive hypotheses that least-squares fitting imposes in data processing, and its results are more scientifically grounded and more easily understood; it may therefore replace least-squares fitting in data processing. (authors)
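As a minimal, self-contained illustration of the contrast discussed here - a generic conjugate Bayesian straight-line fit next to ordinary least squares, not the authors' formulation - the sketch below assumes the noise and prior variances are known for brevity; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, x.size)    # synthetic data

A = np.column_stack([np.ones_like(x), x])         # design matrix [1, x]
sigma2, tau2 = 0.3**2, 10.0**2                    # noise variance, prior variance

# Ordinary least squares
beta_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# Conjugate Bayesian posterior N(mu, S) under a Gaussian prior N(0, tau2*I)
S = np.linalg.inv(A.T @ A / sigma2 + np.eye(2) / tau2)
mu = S @ A.T @ y / sigma2

print("least squares :", beta_ls.round(3))
print("posterior mean:", mu.round(3))
print("posterior sd  :", np.sqrt(np.diag(S)).round(3))   # uncertainty comes for free
```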
An easily Prepared Fluorescent pH Probe Based on Dansyl.
Sha, Chunming; Chen, Yuhua; Chen, Yufen; Xu, Dongmei
2016-09-01
A novel fluorescent pH probe from dansyl chloride and thiosemicarbazide was easily prepared and fully characterized by ¹H NMR, ¹³C NMR, LC-MS, infrared spectroscopy and elemental analysis. The probe exhibited high selectivity and sensitivity to H⁺, with a pKa value of 4.98. The fluorescence intensity at 510 nm was quenched by 99.5% when the pH dropped from 10.88 to 1.98. In addition, the dansyl-based probe responded quickly and reversibly to pH variation, and various common metal ions showed negligible interference. The recognition can be ascribed to intramolecular charge transfer caused by the protonation of the nitrogen in the dimethylamino group.
Ooms, L.; Veenhof, C.
2014-01-01
Introduction: The Dutch government stimulates sport and physical activity opportunities in the neighborhood to make it easier for people to adopt a physically active lifestyle. Seven National Sports Federations (NSFs) were funded to develop easily accessible sporting programs, targeted at groups
Use of the dynamic stiffness method to interpret experimental data from a nonlinear system
Tang, Bin; Brennan, M. J.; Gatti, G.
2018-05-01
The interpretation of experimental data from nonlinear structures is challenging, primarily because of dependency on types and levels of excitation, and coupling issues with test equipment. In this paper, the use of the dynamic stiffness method, which is commonly used in the analysis of linear systems, is used to interpret the data from a vibration test of a controllable compressed beam structure coupled to a test shaker. For a single mode of the system, this method facilitates the separation of mass, stiffness and damping effects, including nonlinear stiffness effects. It also allows the separation of the dynamics of the shaker from the structure under test. The approach needs to be used with care, and is only suitable if the nonlinear system has a response that is predominantly at the excitation frequency. For the structure under test, the raw experimental data revealed little about the underlying causes of the dynamic behaviour. However, the dynamic stiffness approach allowed the effects due to the nonlinear stiffness to be easily determined.
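A minimal sketch of the separation the dynamic stiffness method affords, on synthetic single-mode data (all parameter values are assumed for illustration): the real part of K(ω) = F/X = k - mω² + icω is linear in ω², so its intercept and slope give stiffness and mass, while the imaginary part is linear in ω and gives the viscous damping. A nonlinear stiffness would show up as an amplitude dependence of the fitted k.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic single-mode data (assumed values): K(w) = F/X = k - m*w^2 + i*c*w
m, c, k = 2.0, 5.0, 8.0e4
w = np.linspace(50, 400, 60)          # excitation frequencies, rad/s
K = (k - m * w**2) + 1j * c * w
K += rng.normal(0, 50, w.size)        # measurement noise on the real part

# Real part is linear in w^2: intercept = k (stiffness), slope = -m (mass)
slope, intercept = np.polyfit(w**2, K.real, 1)
# Imaginary part is linear in w: slope = c (viscous damping)
c_est = np.polyfit(w, K.imag, 1)[0]

print(f"k ~ {intercept:.3g}, m ~ {-slope:.3g}, c ~ {c_est:.3g}")
```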
Consequences of Not Interpreting Structure Coefficients in Published CFA Research: A Reminder
Graham, James M.; Guthrie, Abbie C.; Thompson, Bruce
2003-01-01
Confirmatory factor analysis (CFA) is a statistical procedure frequently used to test the fit of data to measurement models. Published CFA studies typically report factor pattern coefficients. Few reports, however, also present factor structure coefficients, which can be essential for the accurate interpretation of CFA results. The interpretation…
Nuclear material statistical accountancy system
International Nuclear Information System (INIS)
Argentest, F.; Casilli, T.; Franklin, M.
1979-01-01
The statistical accountancy system developed at JRC Ispra is referred to as 'NUMSAS', i.e. Nuclear Material Statistical Accountancy System. The principal feature of NUMSAS is that, in addition to an ordinary material balance calculation, it can calculate an estimate of the standard deviation of the measurement error accumulated in the material balance calculation. The purpose of the report is to describe in detail the statistical model on which the standard deviation calculation is based, the computational formula used by NUMSAS in calculating the standard deviation, and the information about nuclear material measurements and the plant measurement system which is required as input data for NUMSAS. The material balance records require processing and interpretation before the material balance calculation is begun. The material balance calculation is the last of four phases of data processing undertaken by NUMSAS, each implemented by a different computer program. The phases can be summarised as follows: the pre-processing phase, the selection and update phase, the transformation phase, and the computation phase
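As a back-of-the-envelope illustration of the type of calculation NUMSAS automates - a material balance together with the standard deviation of its accumulated measurement error - consider the following sketch. The figures and the simple independence assumption are hypothetical, not taken from the report.

```python
import numpy as np

# Hypothetical measurements (kg) and their measurement-error standard deviations
receipts,  sd_r = 1250.0, 2.0
shipments, sd_s = 1180.0, 2.5
begin_inv, sd_b =  340.0, 1.5
end_inv,   sd_e =  405.0, 1.5

# Material balance: MUF = beginning inventory + receipts - shipments - ending inventory
muf = begin_inv + receipts - shipments - end_inv

# With independent measurement errors the variances simply add
sd_muf = np.sqrt(sd_r**2 + sd_s**2 + sd_b**2 + sd_e**2)

print(f"MUF = {muf:.1f} kg, sigma(MUF) = {sd_muf:.2f} kg")
print("significant at ~2 sigma" if abs(muf) > 2 * sd_muf else "within measurement error")
```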
Statistical application of groundwater monitoring data at the Hanford Site
International Nuclear Information System (INIS)
Chou, C.J.; Johnson, V.G.; Hodges, F.N.
1993-09-01
Effective use of groundwater monitoring data requires both statistical and geohydrologic interpretations. At the Hanford Site in south-central Washington state such interpretations are used for (1) detection monitoring, assessment monitoring, and/or corrective action at Resource Conservation and Recovery Act sites; (2) compliance testing for operational groundwater surveillance; (3) impact assessments at active liquid-waste disposal sites; and (4) cleanup decisions at Comprehensive Environmental Response Compensation and Liability Act sites. Statistical tests such as the Kolmogorov-Smirnov two-sample test are used to test the hypothesis that chemical concentrations from spatially distinct subsets or populations are identical within the uppermost unconfined aquifer. Experience at the Hanford Site in applying groundwater background data indicates that background must be considered as a statistical distribution of concentrations, rather than a single value or threshold. The use of a single numerical value as a background-based standard ignores important information and may result in excessive or unnecessary remediation. Appropriate statistical evaluation techniques include the Wilcoxon rank sum test, the Quantile test, 'hot spot' comparisons, and Kolmogorov-Smirnov types of tests. Application of such tests is illustrated with several case studies derived from Hanford groundwater monitoring programs. To avoid possible misuse of such data, an understanding of their limitations is needed. In addition to statistical test procedures, geochemical and hydrologic considerations are integral parts of the decision process. For this purpose a phased approach is recommended that proceeds from the simple to the more complex, and from an overview to detailed analysis.
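Both headline tests are available in SciPy; a sketch comparing hypothetical background and downgradient samples (all data invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
background = rng.lognormal(mean=1.0, sigma=0.4, size=60)   # hypothetical upgradient wells
site = rng.lognormal(mean=1.3, sigma=0.4, size=40)         # hypothetical downgradient wells

# Kolmogorov-Smirnov two-sample test: are the two distributions identical?
ks = stats.ks_2samp(site, background)
# Wilcoxon rank-sum test: is the site population shifted relative to background?
rs = stats.ranksums(site, background)

print(f"KS: D = {ks.statistic:.3f}, p = {ks.pvalue:.4f}")
print(f"rank-sum: z = {rs.statistic:.2f}, p = {rs.pvalue:.4f}")
```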
Application of multivariate statistical techniques in microbial ecology.
Paliy, O; Shankar, V
2016-03-01
Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological data sets. In particular, a noticeable effect has been attained in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. © 2016 John Wiley & Sons Ltd.
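As a small illustration of the exploratory end of this toolbox, here is a sketch combining a Hellinger transformation, PCA ordination, and hierarchical clustering on a synthetic taxon-abundance table (scikit-learn; all data invented):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(5)
# Synthetic taxon-abundance table: 40 samples x 200 taxa, two community types
a = rng.poisson(3, (20, 200))
b = rng.poisson(3, (20, 200))
b[:, :30] += rng.poisson(3, (20, 30))          # second type is enriched in 30 taxa
counts = np.vstack([a, b])

rel = counts / counts.sum(axis=1, keepdims=True)   # relative abundances
hel = np.sqrt(rel)                                 # Hellinger transformation

scores = PCA(n_components=2).fit_transform(hel)    # exploratory ordination
groups = AgglomerativeClustering(n_clusters=2).fit_predict(hel)

print("variance of PC1 vs PC2:", scores.var(axis=0).round(4))
print("cluster sizes:", np.bincount(groups))       # should recover the two types
```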
Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P
1999-01-01
Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data, indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149
Interpreting Impoliteness: Interpreters’ Voices
Directory of Open Access Journals (Sweden)
Tatjana Radanović Felberg
2017-11-01
Full Text Available Interpreters in the public sector in Norway interpret in a variety of institutional encounters, and the interpreters evaluate the majority of these encounters as polite. However, some encounters are evaluated as impolite, and they pose challenges when it comes to interpreting impoliteness. This issue raises the question of whether interpreters should take a stance on their own evaluation of impoliteness and whether they should interfere in communication. In order to find out more about how interpreters cope with this challenge, in 2014 a survey was sent to all interpreters registered in the Norwegian Register of Interpreters. The survey data were analyzed within the theoretical framework of impoliteness theory, using the notion of moral order as an explanatory tool in a close reading of interpreters' answers. The analysis shows that interpreters reported using a variety of strategies for interpreting impoliteness, including omissions and downtoning. However, the interpreters also gave examples of individual strategies for coping with impoliteness, such as interrupting and postponing interpreting. These strategies border on behavioral strategies and conflict with the Norwegian ethical guidelines for interpreting. In light of the ethical guidelines and actual practice, mapping and discussing different strategies used by interpreters might heighten interpreters' and interpreter-users' awareness of the role impoliteness can play in institutional interpreter-mediated encounters.
Linear mixed models a practical guide using statistical software
West, Brady T; Galecki, Andrzej T
2006-01-01
Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo
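For readers working in Python rather than the five packages covered by the book, a random-intercept LMM of the same flavour can be fitted with statsmodels; the data and variable names below are synthetic, purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
# Hypothetical longitudinal data: 30 subjects, 5 visits each
subjects = np.repeat(np.arange(30), 5)
time = np.tile(np.arange(5), 30)
u = rng.normal(0, 2.0, 30)[subjects]            # subject-level random intercept
y = 10 + 1.5 * time + u + rng.normal(0, 1.0, 150)
df = pd.DataFrame({"y": y, "time": time, "subject": subjects})

# Random-intercept LMM: y ~ time, with subject as the grouping factor
fit = smf.mixedlm("y ~ time", df, groups=df["subject"]).fit()
print(fit.summary())
```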
Neuman, Yair
2010-10-01
Interpretation is at the center of psychoanalytic activity. However, interpretation is always challenged by that which is beyond our grasp, the 'dark matter' of our mind, what Bion describes as 'O'. O is one of the most central and difficult concepts in Bion's thought. In this paper, I explain the enigmatic nature of O as a high-dimensional mental space and point to the price one should pay for substituting a low-dimensional symbolic representation for the pre-symbolic lexicon of the emotion-laden and high-dimensional unconscious. This price is reification - objectifying lived experience and draining it of vitality and complexity. In order to address the difficulty of approaching O through symbolization, I introduce the term 'Penultimate Interpretation' - a form of interpretation that seeks 'loopholes' through which the analyst and the analysand may reciprocally save themselves from the curse of reification. Three guidelines for 'Penultimate Interpretation' are proposed and illustrated through an imaginary dialogue. Copyright © 2010 Institute of Psychoanalysis.
DEFF Research Database (Denmark)
Denwood, M.J.; McKendrick, I.J.; Matthews, L.
Introduction. There is an urgent need for a method of analysing FECRT data that is computationally simple and statistically robust. A method for evaluating the statistical power of a proposed FECRT study would also greatly enhance the current guidelines. Methods. A novel statistical framework has been developed that evaluates observed FECRT data against two null hypotheses: (1) the observed efficacy is consistent with the expected efficacy, and (2) the observed efficacy is inferior to the expected efficacy. The method requires only four simple summary statistics of the observed data. Power … that the notional type 1 error rate of the new statistical test is accurate. Power calculations demonstrate a power of only 65% with a sample size of 20 treatment and control animals, which increases to 69% with 40 control animals or 79% with 40 treatment animals. Discussion. The method proposed is simple …
Statistical Process Control in a Modern Production Environment
DEFF Research Database (Denmark)
Windfeldt, Gitte Bjørg
Paper 1 is aimed at practitioners, to help them test the assumption that the observations in a sample are independent and identically distributed - an assumption that is essential when using classical Shewhart charts. The test can easily be performed in the control chart setup using the samples gathered here and standard statistical software. In Paper 2 a new method for process monitoring is introduced. The method uses a statistical model of the quality characteristic and a sliding window of observations to estimate the probability that the next item will not respect the specifications. If the estimated probability exceeds a pre-determined threshold, the process will be stopped. The method is flexible, allowing a complexity in modeling that remains invisible to the end user. Furthermore, the method allows diagnostic plots to be built from the parameter estimates that can provide valuable insight …
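A minimal sketch of the Paper 2 monitoring idea under an assumed normal model (the actual method is more flexible; the specification limits and threshold here are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
usl, lsl = 10.5, 9.5                       # specification limits (assumed)
window = rng.normal(10.05, 0.12, 25)       # sliding window of recent measurements

# Fit a normal model to the window and estimate P(next item out of spec)
mu, sd = window.mean(), window.std(ddof=1)
p_out = stats.norm.sf(usl, mu, sd) + stats.norm.cdf(lsl, mu, sd)

threshold = 0.01
print(f"P(out of spec) = {p_out:.4f}",
      "-> stop process" if p_out > threshold else "-> continue")
```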
Groen-Blokhuis, Maria M; Middeldorp, Christel M; M van Beijsterveldt, Catharina E; Boomsma, Dorret I
2011-10-01
In order to estimate the influence of genetic and environmental factors on 'crying without a cause' and 'being easily upset' in 2-year-old children, a large twin study was carried out. Prospective data were available for ~18,000 2-year-old twin pairs from the Netherlands Twin Register. A bivariate genetic analysis was performed using structural equation modeling in the Mx software package. The influence of maternal personality characteristics and demographic and lifestyle factors was tested to identify specific risk factors that may underlie the shared environment of twins. Furthermore, it was tested whether crying without a cause and being easily upset were predictive of later internalizing, externalizing and attention problems. Crying without a cause yielded a heritability estimate of 60% in boys and girls. For easily upset, the heritability was estimated at 43% in boys and 31% in girls. The variance explained by shared environment varied between 35% and 63%. The correlation between crying without a cause and easily upset (r = .36) was explained both by genetic and shared environmental factors. Birth cohort, gestational age, socioeconomic status, parental age, parental smoking behavior and alcohol use during pregnancy did not explain the shared environmental component. Neuroticism of the mother explained a small proportion of the additive genetic, but not of the shared environmental effects for easily upset. Crying without a cause and being easily upset at age 2 were predictive of internalizing, externalizing and attention problems at age 7, with effect sizes of .28-.42. A large influence of shared environmental factors on crying without a cause and easily upset was detected. Although these effects could be specific to these items, we could not explain them by personality characteristics of the mother or by demographic and lifestyle factors, and we recognize that these effects may reflect other maternal characteristics. A substantial influence of genetic factors
Some statistical issues important to future developments in human radiation research
International Nuclear Information System (INIS)
Vaeth, Michael
1991-01-01
Using his two years' experience at the Radiation Effects Research Foundation at Hiroshima, the author outlines some of the areas of statistics where methodologies relevant to future developments in human radiation research are likely to be found. Problems related to the statistical analysis of existing data are discussed, together with methodological developments in non-parametric and semi-parametric regression modelling, and the interpretation and presentation of results. (Author)
Misleading reporting and interpretation of results in major infertility journals.
Glujovsky, Demian; Sueldo, Carlos E; Borghi, Carolina; Nicotra, Pamela; Andreucci, Sara; Ciapponi, Agustín
2016-05-01
To evaluate the proportion of randomized controlled trials (RCTs) published in top infertility journals indexed on PubMed that reported their results with proper effect estimates and their precision estimation, while correctly interpreting both measures. Cross-sectional study evaluating all the RCTs published in top infertility journals during 2014. Main outcome measure: the proportion of RCTs that reported both relative and absolute effect size measures and their precision. Among the 32 RCTs published in 2014 in the top infertility journals reviewed, 37.5% (95% confidence interval [CI], 21.1-56.3) did not mention in their abstracts whether the difference among the study arms was statistically or clinically significant, and only 6.3% (95% CI, 0.8-20.8) used a CI of the absolute difference. Similarly, in the results section, these elements were observed in 28.2% (95% CI, 13.7-46.7) and 15.6% (95% CI, 5.3-32.8), respectively. Only one study clearly expressed the minimal clinically important difference in its methods section, but we found related proxies in 53% (95% CI, 34.7-70.9). None of the studies used CIs to draw conclusions about the clinical or statistical significance. We found 13 studies where the interpretation of the findings could be misleading. Recommended reporting items are underused in top infertility journals, which could lead to misleading interpretations. Authors, reviewers, and editorial boards should emphasize their use to improve reporting quality. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
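The two under-reported quantities are cheap to compute; a sketch for a hypothetical two-arm trial (invented counts) that reports the relative effect alongside the absolute risk difference with its Wald CI:

```python
import numpy as np

# Hypothetical two-arm trial: events / patients per arm
e1, n1 = 45, 150    # treatment
e0, n0 = 30, 150    # control

p1, p0 = e1 / n1, e0 / n0
rd = p1 - p0                                            # absolute risk difference
se = np.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)   # Wald standard error
lo, hi = rd - 1.96 * se, rd + 1.96 * se

rr = p1 / p0                                            # relative effect size
print(f"RR = {rr:.2f}; risk difference = {rd:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```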
Directory of Open Access Journals (Sweden)
Mark W Perlin
Full Text Available Mixtures are a commonly encountered form of biological evidence that contain DNA from two or more contributors. Laboratory analysis of mixtures produces data signals that usually cannot be separated into distinct contributor genotypes. Computer modeling can resolve the genotypes up to probability, reflecting the uncertainty inherent in the data. Human analysts address the problem by simplifying the quantitative data in a threshold process that discards considerable identification information. Elevated stochastic threshold levels potentially discard more information. This study examines three different mixture interpretation methods. In 72 criminal cases, 111 genotype comparisons were made between 92 mixture items and relevant reference samples. TrueAllele computer modeling was done on all the evidence samples, and documented in DNA match reports that were provided as evidence for each case. Threshold-based Combined Probability of Inclusion (CPI) and stochastically modified CPI (mCPI) analyses were performed as well. TrueAllele's identification information in 101 positive matches was used to assess the reliability of its modeling approach. Comparison was made with 81 CPI and 53 mCPI DNA match statistics that were manually derived from the same data. There were statistically significant differences between the DNA interpretation methods. TrueAllele gave an average match statistic of 113 billion, CPI averaged 6.68 million, and mCPI averaged 140. The computer was highly specific, with a false positive rate under 0.005%. The modeling approach was precise, having a factor of two within-group standard deviation. TrueAllele accuracy was indicated by having uniformly distributed match statistics over the data set. The computer could make genotype comparisons that were impossible or impractical using manual methods. TrueAllele computer interpretation of DNA mixture evidence is sensitive, specific, precise, accurate and more informative than manual
Pointwise probability reinforcements for robust statistical inference.
Frénay, Benoît; Verleysen, Michel
2014-02-01
Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR, and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.
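A toy realization of the idea for estimating a Gaussian mean - a sketch consistent with the abstract's description, not the authors' exact formulation: maximizing Σ log(p_i + r_i) − λ Σ r_i over r_i ≥ 0 gives the closed-form reinforcement r_i = max(0, 1/λ − p_i), which alternates with a reweighted parameter update, so abnormally frequent points end up with weights below one and a positive abnormality degree.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(12, 1, 5)])  # 5 outliers

lam, sigma = 5.0, 1.0
mu = np.median(x)                        # robust starting point
for _ in range(50):
    p = stats.norm.pdf(x, mu, sigma)     # per-point likelihoods
    r = np.maximum(0.0, 1.0 / lam - p)   # optimal reinforcements given mu
    w = p / (p + r)                      # weights in [0, 1]; outliers get w -> 0
    mu = np.sum(w * x) / np.sum(w)       # weighted-mean update

print(f"plain mean = {x.mean():.3f}, reinforced estimate = {mu:.3f}")
print(f"abnormality degree of largest point: r = {r[np.argmax(x)]:.3f}")
```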
Evaluation of easily measured risk factors in the prediction of osteoporotic fractures
Directory of Open Access Journals (Sweden)
Brown Jacques P
2005-09-01
Full Text Available Abstract Background Fracture represents the single most important clinical event in patients with osteoporosis, yet remains under-predicted. As few premonitory symptoms for fracture exist, it is of critical importance that physicians effectively and efficiently identify individuals at increased fracture risk. Methods Of 3426 postmenopausal women in CANDOO, 40, 158, 99, and 64 women developed a new hip, vertebral, wrist or rib fracture, respectively. Seven easily measured risk factors predictive of fracture in research trials were examined in clinical practice, including: age (65–69, 70–74, 75–79, 80+ years), rising from a chair with arms (yes, no), weight (≥57 kg), maternal history of hip fracture (yes, no), prior fracture after age 50 (yes, no), hip T-score (>-1, -1 to >-2.5, ≤-2.5), and current smoking status (yes, no). Multivariable logistic regression analysis was conducted. Results The inability to rise from a chair without the use of arms (3.58; 95% CI: 1.17, 10.93) was the most significant risk factor for new hip fracture. Notable risk factors for predicting new vertebral fractures were: low body weight (1.57; 95% CI: 1.04, 2.37), current smoking (1.95; 95% CI: 1.20, 3.18) and age between 75–79 years (1.96; 95% CI: 1.10, 3.51). New wrist fractures were significantly identified by low body weight (1.71; 95% CI: 1.01, 2.90) and prior fracture after 50 years (1.96; 95% CI: 1.19, 3.22). Predictors of new rib fractures include a maternal history of a hip fracture (2.89; 95% CI: 1.04, 8.08) and a prior fracture after 50 years (2.16; 95% CI: 1.20, 3.87). Conclusion This study has shown that there exists a variety of predictors of future fracture, besides BMD, that can be easily assessed by a physician. The significance of each variable depends on the site of incident fracture. Of greatest interest is that an inability to rise from a chair is perhaps the most readily identifiable significant risk factor for hip fracture and can be easily incorporated
Interobserver variability in interpretation of mammogram
International Nuclear Information System (INIS)
Lee, Kyung Jae; Lee, Hae Kyung; Lee, Won Chul; Hwang, In Young; Park, Young Gyu; Jung, Sang Seol; Kim, Hoon Kyo; Kim, Mi Hye; Kim, Hak Hee
2004-01-01
The purpose of this study was to evaluate the performance of radiologists in mammographic screening, and to analyze interobserver agreement in the interpretation of mammograms. Fifty women were selected as subjects from the patients who were screened with mammograms at two university hospitals. The images were analyzed by five radiologists working independently and without any knowledge of the final diagnosis. The interobserver variation was analyzed by using the kappa statistic. There was moderate agreement for the findings of the parenchymal pattern (k=0.44; 95% CI 0.39-0.49), calcification type (k=0.66; 95% CI 0.60-0.72) and calcification distribution (k=0.43; 95% CI 0.38-0.48). The mean kappa values ranged from 0.42 to 0.66 for the mass findings. The mean kappa value for the final conclusion was 0.44 (95% CI 0.38-0.51). In general, moderate agreement was evident for all the categories that were evaluated, but there was wide variability in some findings. To improve accuracy and reduce variability among physicians in interpretation, proper training of radiologists and standardization of criteria are essential for breast screening
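Interobserver agreement of this kind is straightforward to reproduce; a sketch with invented ratings for two readers (scikit-learn's cohen_kappa_score implements the same kappa statistic):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical final conclusions (0 = benign, 1 = suspicious) from two readers
reader_a = [0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0]
reader_b = [0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1]

kappa = cohen_kappa_score(reader_a, reader_b)
print(f"kappa = {kappa:.2f}")   # ~0.5 here, i.e. moderate agreement
```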
A Framework for Assessing High School Students' Statistical Reasoning.
Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang
2016-01-01
Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter included describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent account of statistical reasoning. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.
OntologyWidget – a reusable, embeddable widget for easily locating ontology terms
Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, JH Pate; Ball, Catherine A; Sherlock, Gavin
2007-01-01
Abstract. Background: Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results: We have produced a tool, OntologyWidget, which allows users to r...
Parker, Loran Carleton; Gleichsner, Alyssa M.; Adedokun, Omolola A.; Forney, James
2016-01-01
Transformation of research in all biological fields necessitates the design, analysis, and interpretation of large data sets. Preparing students with the requisite skills in experimental design, statistical analysis and interpretation, and mathematical reasoning will require both curricular reform and faculty who are willing and able to integrate…
READING STATISTICS AND RESEARCH
Directory of Open Access Journals (Sweden)
Reviewed by Yavuz Akbulut
2008-10-01
Full Text Available The book demonstrates the best and most conservative ways to decipher and critique research reports, particularly for social science researchers. In addition, new editions of the book are always better organized, effectively structured and meticulously updated in line with the developments in the field of research statistics. Even the most trivial issues are revisited and updated in new editions. For instance, purchasers of the previous editions might check the interpretation of skewness and kurtosis indices in the third edition (p. 34) and in the fifth edition (p. 29) to see how the author revisits every single detail. Theory and practice always go hand in hand in all editions of the book. Re-reading previous editions (e.g. the third edition) before reading the fifth edition gives the impression that the author never stops ameliorating his instructional text writing methods. In brief, "Reading Statistics and Research" is among the best sources showing research consumers how to understand and critically assess the statistical information and research results contained in technical research reports. In this respect, the review written by Mirko Savić in Panoeconomicus (2008, 2, pp. 249-252) will help readers to get a more detailed overview of each chapter. I cordially urge beginning researchers to pick up a highlighter and conduct a detailed reading of the book. A thorough reading of the source will make researchers quite selective in appreciating the harmony between the data analysis, results and discussion sections of typical journal articles. If interested, beginning researchers might begin with this book to grasp the basics of research statistics, and prop up their critical research reading skills with some statistics package applications through the help of Dr. Andy Field's book, Discovering Statistics Using SPSS (second edition, published by Sage in 2005).
Born in an infinite universe: A cosmological interpretation of quantum mechanics
International Nuclear Information System (INIS)
Aguirre, Anthony; Tegmark, Max
2011-01-01
We study the quantum measurement problem in the context of an infinite, statistically uniform space, as could be generated by eternal inflation. It has recently been argued that when identical copies of a quantum measurement system exist, the standard projection operators and Born rule method for calculating probabilities must be supplemented by estimates of relative frequencies of observers. We argue that an infinite space actually renders the Born rule redundant, by physically realizing all outcomes of a quantum measurement in different regions, with relative frequencies given by the square of the wave-function amplitudes. Our formal argument hinges on properties of what we term the quantum confusion operator, which projects onto the Hilbert subspace where the Born rule fails, and we comment on its relation to the oft-discussed quantum frequency operator. This analysis unifies the classical and quantum levels of parallel universes that have been discussed in the literature, and has implications for several issues in quantum measurement theory. Replacing the standard hypothetical ensemble of measurements repeated ad infinitum by a concrete decohered spatial collection of experiments carried out in different distant regions of space provides a natural context for a statistical interpretation of quantum mechanics. It also shows how, even for a single measurement, probabilities may be interpreted as relative frequencies in unitary (Everettian) quantum mechanics. We also argue that after discarding a zero-norm part of the wave function, the remainder consists of a superposition of indistinguishable terms, so that arguably 'collapse' of the wave function is irrelevant, and the ''many worlds'' of Everett's interpretation are unified into one. Finally, the analysis suggests a 'cosmological interpretation' of quantum theory in which the wave function describes the actual spatial collection of identical quantum systems, and quantum uncertainty is attributable to the
Neutrons and antimony physical measurements and interpretations
International Nuclear Information System (INIS)
Smith, A. B.
2000-01-01
New experimental information for the elastic and inelastic scattering of ∼ 4-10 MeV neutrons from elemental antimony is presented. The differential measurements are made at ∼ 40 or more scattering angles and at incident neutron-energy intervals of ∼ 0.5 MeV. The present experimental results, together with those previously reported from this laboratory and those found in the literature, are comprehensively interpreted using spherical optical-statistical and dispersive-optical models. Direct vibrational processes via core excitation, isospin and shell effects are discussed. Antimony models for applications are proposed and compared with global, regional, and specific models reported in the literature
Quantifying Treatment Benefit in Molecular Subgroups to Assess a Predictive Biomarker.
Iasonos, Alexia; Chapman, Paul B; Satagopan, Jaya M
2016-05-01
An increased interest has been expressed in finding predictive biomarkers that can guide treatment options for both mutation carriers and noncarriers. The statistical assessment of variation in treatment benefit (TB) according to the biomarker carrier status plays an important role in evaluating predictive biomarkers. For time-to-event endpoints, the hazard ratio (HR) for interaction between treatment and a biomarker from a proportional hazards regression model is commonly used as a measure of variation in TB. Although this can be easily obtained using available statistical software packages, the interpretation of HR is not straightforward. In this article, we propose different summary measures of variation in TB on the scale of survival probabilities for evaluating a predictive biomarker. The proposed summary measures can be easily interpreted as quantifying differential in TB in terms of relative risk or excess absolute risk due to treatment in carriers versus noncarriers. We illustrate the use and interpretation of the proposed measures with data from completed clinical trials. We encourage clinical practitioners to interpret variation in TB in terms of measures based on survival probabilities, particularly in terms of excess absolute risk, as opposed to HR. Clin Cancer Res; 22(9); 2114-20. ©2016 AACR. ©2016 American Association for Cancer Research.
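A sketch of the proposed reading on synthetic data, assuming the lifelines package: fit a Cox model with a treatment-by-biomarker interaction, then express TB as differences of predicted survival probabilities at a landmark time rather than as an interaction HR. All variable names and numbers are invented.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 400
treat = rng.integers(0, 2, n)
carrier = rng.integers(0, 2, n)
# Synthetic hazards: treatment helps carriers more than noncarriers
rate = 0.05 * np.exp(-0.2 * treat - 0.8 * treat * carrier)
time = rng.exponential(1.0 / rate)
event = (time < 60).astype(int)
time = np.minimum(time, 60.0)
df = pd.DataFrame({"T": time, "E": event, "treat": treat,
                   "carrier": carrier, "tx": treat * carrier})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
grid = pd.DataFrame({"treat": [0, 1, 0, 1], "carrier": [0, 0, 1, 1],
                     "tx": [0, 0, 0, 1]})
s36 = cph.predict_survival_function(grid, times=[36.0]).to_numpy()[0]

print(f"TB (noncarriers) = {s36[1] - s36[0]:.3f}")   # absolute survival gain at t = 36
print(f"TB (carriers)    = {s36[3] - s36[2]:.3f}")
print(f"excess TB        = {(s36[3] - s36[2]) - (s36[1] - s36[0]):.3f}")
```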
Adams, Wendy K.; Alhadlaq, Hisham; Malley, Christopher V.; Perkins, Katherine K.; Olson, Jonathan; Alshaya, Fahad; Alabdulkareem, Saleh; Wieman, Carl E.
2012-02-01
The PhET Interactive Simulations Project partnered with the Excellence Research Center of Science and Mathematics Education at King Saud University with the joint goal of making simulations usable worldwide. One of the main challenges of this partnership is to make PhET simulations and the website easily translatable into any language. The PhET project team overcame this challenge by creating the Translation Utility. This tool allows a person fluent in both English and another language to easily translate any of the PhET simulations and requires minimal computer expertise. In this paper we discuss the technical issues involved in this software solution, as well as the issues involved in obtaining accurate translations. We share our solutions to many of the unexpected problems we encountered that would apply generally to making on-line scientific course materials available in many different languages, including working with: languages written right-to-left, different character sets, and different conventions for expressing equations, variables, units and scientific notation.
Use of keyword hierarchies to interpret gene expression patterns.
Masys, D R; Welsh, J B; Lynn Fink, J; Gribskov, M; Klacansky, I; Corbeil, J
2001-04-01
High-density microarray technology permits the quantitative and simultaneous monitoring of thousands of genes. The interpretation challenge is to extract relevant information from this large amount of data. A growing variety of statistical analysis approaches are available to identify clusters of genes that share common expression characteristics, but provide no information regarding the biological similarities of genes within clusters. The published literature provides a potential source of information to assist in interpretation of clustering results. We describe a data mining method that uses indexing terms ('keywords') from the published literature linked to specific genes to present a view of the conceptual similarity of genes within a cluster or group of interest. The method takes advantage of the hierarchical nature of Medical Subject Headings used to index citations in the MEDLINE database, and the registry numbers applied to enzymes.
[How to fit and interpret multilevel models using SPSS].
Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael
2007-05-01
Hierarchical or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both from the individual level and from the group level. In this work, the multilevel models most commonly discussed in the statistics literature are described, explaining how to fit these models using the SPSS program (any version from the 11th onward) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in health and behaviour sciences.
Statistical black-hole thermodynamics
International Nuclear Information System (INIS)
Bekenstein, J.D.
1975-01-01
Traditional methods from statistical thermodynamics, with appropriate modifications, are used to study several problems in black-hole thermodynamics. Jaynes's maximum-uncertainty method for computing probabilities is used to show that the earlier-formulated generalized second law is respected in statistically averaged form in the process of spontaneous radiation by a Kerr black hole discovered by Hawking, and also in the case of a Schwarzschild hole immersed in a bath of black-body radiation, however cold. The generalized second law is used to motivate a maximum-entropy principle for determining the equilibrium probability distribution for a system containing a black hole. As an application we derive the distribution for the radiation in equilibrium with a Kerr hole (it is found to agree with what would be expected from Hawking's results) and the form of the associated distribution among Kerr black-hole solution states of definite mass. The same results are shown to follow from a statistical interpretation of the concept of black-hole entropy as the natural logarithm of the number of possible interior configurations that are compatible with the given exterior black-hole state. We also formulate a Jaynes-type maximum-uncertainty principle for black holes, and apply it to obtain the probability distribution among Kerr solution states for an isolated radiating Kerr hole
On court interpreters' visibility
DEFF Research Database (Denmark)
Dubslaff, Friedel; Martinsen, Bodil
Future - and, for that matter, already practising - interpreters as well as the professional users of interpreters ought to take the reality of the interpreters' work in practice into account when assessing the quality of the service they receive. Ultimately, the findings will be used for training purposes. … on the interpreter's interpersonal role and, in particular, on signs of the interpreter's visibility, i.e. active co-participation. At first sight, the interpreting assignment in question seems to be a short and simple routine task which would not require the interpreter to deviate from the traditional picture…
Do Interpreters Indeed Have Superior Working Memory in Interpreting
Institute of Scientific and Technical Information of China (English)
于飞
2012-01-01
With the frequent communication between China and western countries in the fields of economy, politics and culture, interpreting becomes more and more important to people in all walks of life. This paper aims to test the author's hypothesis that "professional interpreters have similar short-term memory to unprofessional interpreters, but they have superior working memory." After a review of the literature concerning consecutive interpreting, short-term memory and working memory, the experiments are designed and the analyses are described.
Studying the microlenses mass function from statistical analysis of the caustic concentration
Energy Technology Data Exchange (ETDEWEB)
Mediavilla, T; Ariza, O [Departamento de Estadistica e Investigacion Operativa, Universidad de Cadiz, Avda de Ramon Puyol, s/n 11202 Algeciras (Spain); Mediavilla, E [Instituto de Astrofisica de Canarias, Avda Via Lactea s/n, La Laguna (Spain); Munoz, J A, E-mail: teresa.mediavilla@ca.uca.es, E-mail: octavio.ariza@uca.es, E-mail: emg@iac.es [Departamento de Astrofisica y Astronomia, Universidad de Valencia, Burjassot, Valencia (Spain)
2011-09-22
The statistical distribution of caustic crossings by the images of a lensed quasar depends on the properties of the distribution of microlenses in the lens galaxy. We use a procedure based on Inverse Polygon Mapping to easily identify the critical and caustic curves generated by a distribution of stars in the lens galaxy. We analyze the statistical distributions of the number of caustic crossings by a pixel-size source for several projected mass densities and different mass distributions. We compare the results of simulations with theoretical binomial distributions. Finally we apply this method to the study of the stellar mass distribution in the lens galaxy of QSO 2237+0305.
Three Insights from a Bayesian Interpretation of the One-Sided "P" Value
Marsman, Maarten; Wagenmakers, Eric-Jan
2017-01-01
P values have been critiqued on several grounds but remain entrenched as the dominant inferential method in the empirical sciences. In this article, we elaborate on the fact that in many statistical models, the one-sided "P" value has a direct Bayesian interpretation as the approximate posterior mass for values lower than zero. The…
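The correspondence is easy to verify numerically: with a flat prior on a normal mean, the posterior mass below zero coincides with the one-sided p value from the t statistic (a standard textbook fact, sketched here with simulated data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
x = rng.normal(0.4, 1.0, 25)              # one sample; H0: mean <= 0

# Frequentist one-sided p value from the t statistic
se = x.std(ddof=1) / np.sqrt(x.size)
t = x.mean() / se
p_one_sided = stats.t.sf(t, df=x.size - 1)

# Bayesian: with a flat prior the posterior for the mean is a shifted, scaled t,
# and the posterior mass below zero matches the one-sided p value
posterior_mass = stats.t.cdf(0.0, df=x.size - 1, loc=x.mean(), scale=se)

print(f"one-sided p        = {p_one_sided:.4f}")
print(f"posterior P(mu<0)  = {posterior_mass:.4f}")   # identical to the line above
```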
Application of Multivariable Statistical Techniques in Plant-wide WWTP Control Strategies Analysis
DEFF Research Database (Denmark)
Flores Alsina, Xavier; Comas, J.; Rodríguez-Roda, I.
2007-01-01
The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques allow i) to determine natural groups or clusters of control strategies with a similar behaviour, ii) to find and interpret hidden, complex and causal relation features in the data set and iii) to identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation…
Applications of quantum entropy to statistics
International Nuclear Information System (INIS)
Silver, R.N.; Martz, H.F.
1994-01-01
This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles are proposed to set statistical regularization and other hyperparameters, such as conservation of information and smoothness. ME provides an alternative to hierarchical Bayes methods
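For reference, the standard definitions being generalized here (textbook facts, not taken from the paper itself): the von Neumann entropy and the quantum relative entropy,

```latex
S(\rho) = -\operatorname{Tr}(\rho \ln \rho),
\qquad
S(\rho \,\|\, \sigma) = \operatorname{Tr}\!\left[\rho\,(\ln \rho - \ln \sigma)\right].
```

When ρ is diagonal with eigenvalues p_i, S(ρ) reduces to the Shannon entropy −Σ_i p_i ln p_i, which is the sense in which the quantum divergence class is broader.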
Reese, Hayne W.
1997-01-01
Recommends that when repeated-measures Latin-square designs are used to counterbalance treatments across a procedural variable or to reduce the number of treatment combinations given to each participant, effects be analyzed statistically, and that in all uses, researchers consider alternative interpretations of the variance associated with the…
DEFF Research Database (Denmark)
Auken, Sune
2015-01-01
Despite the immensity of genre studies as well as studies in interpretation, our understanding of the relationship between genre and interpretation is sketchy at best. The article attempts to unravel some of the intricacies of that relationship through an analysis of the generic interpretation carrie
Smith, Joseph M.; Mather, Martha E.
2012-01-01
Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics are mathematical relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, then we interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the
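The convergence check at the heart of this process - do independent statistical methods recover the same groupings? - can be quantified; a sketch using two of the many possible methods and the adjusted Rand index (synthetic data, not the statewide fish dataset):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(11)
# Hypothetical site-by-species abundance matrix with two latent assemblage types
X = np.vstack([rng.poisson(2, (25, 40)), rng.poisson(5, (25, 40))]).astype(float)

g1 = AgglomerativeClustering(n_clusters=2).fit_predict(X)
g2 = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Convergence check: do independent methods recover the same site groupings?
print(f"adjusted Rand index = {adjusted_rand_score(g1, g2):.2f}")  # 1.0 = identical
```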
Engineering Definitional Interpreters
DEFF Research Database (Denmark)
Midtgaard, Jan; Ramsay, Norman; Larsen, Bradford
2013-01-01
A definitional interpreter should be clear and easy to write, but it may run 4--10 times slower than a well-crafted bytecode interpreter. In a case study focused on implementation choices, we explore ways of making definitional interpreters faster without expending much programming effort. We imp...
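For concreteness, a definitional interpreter in miniature (a sketch in Python 3.10+, not the paper's code): evaluation is defined by one clause per syntactic form, which is what makes such interpreters clear and easy to write.

```python
# Abstract syntax: expressions are tuples tagged with their constructor
# ("lit", n) | ("add", e1, e2) | ("mul", e1, e2) | ("if0", c, t, f)

def interp(e):
    """A definitional interpreter: one clause per syntactic form."""
    match e:
        case ("lit", n):        return n
        case ("add", e1, e2):   return interp(e1) + interp(e2)
        case ("mul", e1, e2):   return interp(e1) * interp(e2)
        case ("if0", c, t, f):  return interp(t) if interp(c) == 0 else interp(f)
        case _:                 raise ValueError(f"bad expression: {e!r}")

program = ("if0", ("add", ("lit", 1), ("lit", -1)), ("lit", 42), ("lit", 0))
print(interp(program))  # 42
```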
Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat; Plósz, Benedek Gy
2015-10-15
The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models - computationally light tools, used e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method - presented in a straightforward and transparent way - is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary condition sets are imposed on a 2-D axi-symmetrical CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter then are identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii
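The sampling step generalizes readily; a sketch of drawing 50 boundary-condition sets by LHS with SciPy's qmc module (the three factors and their ranges are invented placeholders, not the study's screened factors):

```python
import numpy as np
from scipy.stats import qmc

# Three assumed design/flow factors with plausible ranges (illustrative only):
# feed flow (m3/d), tank depth (m), inlet solids (kg/m3)
lower = np.array([5_000.0, 3.0, 2.0])
upper = np.array([30_000.0, 5.0, 6.0])

sampler = qmc.LatinHypercube(d=3, seed=12)
unit = sampler.random(n=50)                 # 50 points in [0, 1)^3
designs = qmc.scale(unit, lower, upper)     # boundary-condition sets for the CFD runs

print(designs[:3].round(2))
```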
The Nirex Sellafield site investigation: the role of geophysical interpretation
International Nuclear Information System (INIS)
Muir Wood, R.; Woo, G.; MacMillan, G.
1992-01-01
This report reviews the methods by which geophysical data are interpreted, and used to characterize the 3-D geology of a site for potential storage of radioactive waste. The report focuses on the NIREX site investigation at Sellafield, for which geophysical observations provide a significant component of the structural geological understanding. In outlining the basic technical principles of seismic data processing and interpretation, and borehole logging, an attempt has been made to identify errors, uncertainties, and the implicit use of expert judgement. To enhance the reliability of a radiological probabilistic risk assessment, recommendations are proposed for independent use of the primary NIREX geophysical site investigation data in characterizing the site geology. These recommendations include quantitative procedures for undertaking an uncertainty audit using a combination of statistical analysis and expert judgement. (author)
Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
2011-01-01
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.
Estimation of the Effects of Statistical Discrimination on the Gender Wage Gap
Atsuko Tanaka
2015-01-01
How much of the gender wage gap can be attributed to statistical discrimination? Applying an employer learning model and Instrumental Variable (IV) estimation strategy to Japanese panel data, I examine how women's generally weak labor force attachment affects wages when employers cannot easily observe an individual's labor force intentions. To overcome endogeneity issues, I use survey information on individual workers' intentions to continue working after having children and Japanese panel da...
Which statistics should tropical biologists learn?
Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián
2011-09-01
Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection, mediocre or bad experimental design, and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, during a year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well designed one-semester course should be enough for their basic requirements.
Statistical analysis of quality control of automatic processor
International Nuclear Information System (INIS)
Niu Yantao; Zhao Lei; Zhang Wei; Yan Shulin
2002-01-01
Objective: To strengthen the scientific management of the automatic processor and promote QC by analyzing the QC management chart for the automatic processor with statistical methods, and by evaluating and interpreting the data and trends of the chart. Method: The speed, contrast and minimum density of a step wedge on a film strip were measured every day and recorded on the QC chart. The mean (x-bar), standard deviation (s) and range (R) were calculated. The data and the working trend were evaluated and interpreted for management decisions. Results: Using the relative frequency distribution curve constructed from the measured data, the authors can judge whether or not it is a symmetric bell-shaped curve. If not, it indicates that a few extreme values overstepping the control limits are possibly pulling the curve to the left or right. If it is a normal distribution, the standard deviation (s) is observed. When x-bar +- 2s lies within the upper and lower control limits of the relative performance indexes, it indicates that the processor worked in a stable state during this period. Conclusion: Guided by statistical methods, QC work becomes more scientific and quantified. The authors can deepen the understanding and application of the trend chart, and raise quality management to a new level
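As a rough illustration of the control-chart logic this abstract describes, the Python sketch below checks whether x-bar +- 2s stays within given control limits; the readings, limits and function name are invented for demonstration, not taken from the study.

```python
import statistics

def control_chart_status(readings, lower_limit, upper_limit):
    """Evaluate one QC index (e.g. film speed) from daily sensitometry readings.

    Mirrors the abstract's rule: the processor is judged stable for the
    period when x-bar +/- 2s stays within the performance control limits.
    """
    mean = statistics.mean(readings)
    s = statistics.stdev(readings)
    r = max(readings) - min(readings)  # range R, for the trend chart
    stable = (lower_limit <= mean - 2 * s) and (mean + 2 * s <= upper_limit)
    return {"mean": mean, "s": s, "R": r, "stable": stable}

# Hypothetical daily film-speed values and control limits (illustrative only)
print(control_chart_status([2.01, 1.98, 2.05, 1.99, 2.02], 1.90, 2.10))
```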
Statistical behavior of the tensile property of heated cotton fiber
The temperature dependence of the tensile property of single cotton fiber was studied in the range of 160-300°C using the Favimat test, and its statistical behavior was interpreted in terms of structural changes. The tenacity of control cotton fiber was well described by the single Weibull distribution,...
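The "single Weibull distribution" description of fiber tenacity can be illustrated with a short fit; a sketch assuming hypothetical tenacity values and SciPy's two-parameter Weibull fit (location fixed at zero):

```python
import numpy as np
from scipy import stats

# Hypothetical single-fiber tenacity values (arbitrary units); illustrative only.
tenacity = np.array([18.2, 21.5, 24.1, 19.8, 22.7, 26.3, 20.9, 23.4])

# Fit a two-parameter Weibull (location fixed at zero), the usual model
# behind a "single Weibull distribution" description of fiber strength.
shape, loc, scale = stats.weibull_min.fit(tenacity, floc=0)
print(f"Weibull shape (modulus) = {shape:.1f}, scale = {scale:.1f}")
```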
Interpretation of Spirometry: Selection of Predicted Values and Defining Abnormality.
Chhabra, S K
2015-01-01
Spirometry is the most frequently performed investigation to evaluate pulmonary function. It provides clinically useful information on the mechanical properties of the lung and the thoracic cage and aids in taking management-related decisions in a wide spectrum of diseases and disorders. Few measurements in medicine are so dependent on factors related to equipment, operator and the patient. Good spirometry requires quality assured measurements and a systematic approach to interpretation. Standard guidelines on the technical aspects of equipment and their calibration as well as the test procedure have been developed and revised from time to time. Strict compliance with standardisation guidelines ensures quality control. Interpretation of spirometry data is based on only two basic measurements--the forced vital capacity (FVC) and the forced expiratory volume in 1 second (FEV1)--and their ratio, FEV1/FVC. A meaningful and clinically useful interpretation of the measured data requires a systematic approach and consideration of several important issues. Central to interpretation is the understanding of the development and application of prediction equations. Selection of prediction equations that are appropriate for the ethnic origin of the patient is vital to avoid erroneous interpretation. Defining abnormal values is a debatable but critical aspect of spirometry. A statistically valid definition of the lower limits of normal has been advocated as the better method over the more commonly used approach of defining abnormality as a fixed percentage of the predicted value. Spirometry rarely provides a specific diagnosis. Examination of the flow-volume curve and the measured data provides information to define patterns of ventilatory impairment. Spirometry must be interpreted in conjunction with clinical information including results of other investigations.
Statistical learning of speech, not music, in congenital amusia.
Peretz, Isabelle; Saffran, Jenny; Schön, Daniele; Gosselin, Nathalie
2012-04-01
The acquisition of both speech and music uses general principles: learners extract statistical regularities present in the environment. Yet, individuals who suffer from congenital amusia (commonly called tone-deafness) have experienced lifelong difficulties in acquiring basic musical skills, while their language abilities appear essentially intact. One possible account for this dissociation between music and speech is that amusics lack normal experience with music. If given appropriate exposure, amusics might be able to acquire basic musical abilities. To test this possibility, a group of 11 adults with congenital amusia, and their matched controls, were exposed to a continuous stream of syllables or tones for 21 minutes. Their task was to try to identify three-syllable nonsense words or three-tone motifs having an identical statistical structure. The results of five experiments show that amusics can learn novel words as easily as controls, whereas they systematically fail on musical materials. Thus, inappropriate musical exposure cannot fully account for the musical disorder. Implications of the results for the domain specificity of statistical learning are discussed. © 2012 New York Academy of Sciences.
Lineament interpretation. Short review and methodology
Energy Technology Data Exchange (ETDEWEB)
Tiren, Sven (GEOSIGMA AB (Sweden))
2010-11-15
interpretation, and the skill of the interpreter. Images and digital terrain models that display the relief of the studied area should, if possible, be illuminated in at least four directions to reduce biases regarding the orientation of structures. The resolution of the source data should be fully used and extrapolation of structures avoided in the primary interpretation of the source data. The interpretation of lineaments should be made in steps: a. Interpretation of each data set/image/terrain model is conducted separately; b. Compilation of all interpretations in a base lineament map and classification of the lineaments; and c. Construction of thematic maps, e.g. structural maps, rock block maps, and statistical presentation of lineaments. Generalisations and extrapolations of lineaments/structures may be made when producing the thematic maps. The construction of thematic maps should be supported by auxiliary information (geological and geomorphologic data and information on human impact in the area). Inferred tectonic structures should be controlled in the field
GoCxx: a tool to easily leverage C++ legacy code for multicore-friendly Go libraries and frameworks
International Nuclear Information System (INIS)
Binet, Sébastien
2012-01-01
Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment, a ‘single-thread’ processing model naturally emerged but the implicit assumptions it encouraged are greatly impairing our abilities to scale in a multicore/manycore world. Writing scalable code in C++ for multicore architectures, while doable, is no panacea. Sure, C++11 will improve on the current situation (by standardizing on std::thread, introducing lambda functions and defining a memory model) but it will do so at the price of complicating further an already quite sophisticated language. This level of sophistication has probably already strongly motivated analysis groups to migrate to CPython, hoping for its current limitations with respect to multicore scalability to be either lifted (Global Interpreter Lock removal) or for the advent of a new Python VM better tailored for this kind of environment (PyPy, Jython, …). Could HENP migrate to a language with none of the deficiencies of C++ (build time, deployment, low-level tools for concurrency) and with the fast turn-around time, simplicity and ease of coding of Python? This paper will try to make the case for Go - a young open source language with built-in facilities to easily express and expose concurrency - being such a language. We introduce GoCxx, a tool leveraging gcc-xml's output to automate the tedious work of creating Go wrappers for foreign languages, a critical task for any language wishing to leverage legacy and field-tested code. We will conclude with the first results of applying GoCxx to real C++ code.
Lee, Inseok; Hwang, Won-Gue
2015-01-01
A survey was conducted to examine how personal experiences affect the interpretation of the meaning of display and control colours on electric control panels (ECPs). In Korea, the red light on ECPs represents a normal state of operation, while the green light represents a stopped state of operation; this appears to contradict the general stereotypes surrounding these colours. The survey results indicated that the participants who had experience in using ECPs interpreted the colour meaning differently from the other participant group. More than half of the experienced participants regarded the coloured displays and controls as they were designed, while most participants in the other group appeared to interpret the colours in accordance with the stereotypes. It is presumed that accidents related to human errors can occur when non-experienced people use the ECPs, which are easily accessible in many buildings. Practitioner Summary: A survey was conducted to investigate how personal experiences affect the interpretation of the function meanings of coloured lights on electrical control panels. It was found that the interpretation varies according to personal experiences, which can induce accidents related to human errors while operating electrical equipment.
Analytical and statistical analysis of elemental composition of lichens
International Nuclear Information System (INIS)
Calvelo, S.; Baccala, N.; Bubach, D.; Arribere, M.A.; Riberio Guevara, S.
1997-01-01
The elemental composition of lichens from remote regions of southern South America has been studied with analytical and statistical techniques to determine whether the values obtained reflect species, growth forms or habitat characteristics. The enrichment factors are calculated, discriminated by species and collection site, and compared with data available in the literature. The elemental concentrations are standardized and compared for the different species. The information was statistically processed: a cluster analysis was performed using the first three principal axes of the PCA, and the three resulting groups are presented. Their relationship with the species, collection sites and lichen growth forms is interpreted. (author)
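A hedged sketch of the pipeline this abstract outlines - standardize the elemental concentrations, take the first three principal axes, and cut a hierarchical clustering into three groups; the data and parameter choices below are illustrative stand-ins:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in data: rows = lichen samples, columns = elemental concentrations.
X = np.random.default_rng(0).lognormal(size=(12, 8))

# Standardize the concentrations, keep the first three principal axes,
# and cut a hierarchical clustering into three groups, as in the abstract.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
groups = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
print(groups)  # group labels to interpret against species, site and growth form
```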
The crossing statistic: dealing with unknown errors in the dispersion of Type Ia supernovae
International Nuclear Information System (INIS)
Shafieloo, Arman; Clifton, Timothy; Ferreira, Pedro
2011-01-01
We propose a new statistic that has been designed to be used in situations where the intrinsic dispersion of a data set is not well known: the Crossing Statistic. This statistic is in general less sensitive than χ² to the intrinsic dispersion of the data, and hence allows us to make progress in distinguishing between different models using goodness of fit to the data even when the errors involved are poorly understood. The proposed statistic makes use of the shape and trends of a model's predictions in a quantifiable manner. It is applicable to a variety of circumstances, although we consider it to be especially well suited to the task of distinguishing between different cosmological models using type Ia supernovae. We show that this statistic can easily distinguish between different models in cases where the χ² statistic fails. We also show that the last mode of the Crossing Statistic is identical to χ², so that it can be considered as a generalization of χ².
Statistical modelling of transcript profiles of differentially regulated genes
Directory of Open Access Journals (Sweden)
Sergeant Martin J
2008-07-01
Abstract Background The vast quantities of gene expression profiling data produced in microarray studies, and the more precise quantitative PCR, are often not statistically analysed to their full potential. Previous studies have summarised gene expression profiles using simple descriptive statistics, basic analysis of variance (ANOVA) and the clustering of genes based on simple models fitted to their expression profiles over time. We report the novel application of statistical non-linear regression modelling techniques to describe the shapes of expression profiles for the fungus Agaricus bisporus, quantified by PCR, and for E. coli and Rattus norvegicus, using microarray technology. The use of parametric non-linear regression models provides a more precise description of expression profiles, reducing the "noise" of the raw data to produce a clear "signal" given by the fitted curve, and describing each profile with a small number of biologically interpretable parameters. This approach then allows the direct comparison and clustering of the shapes of response patterns between genes and potentially enables a greater exploration and interpretation of the biological processes driving gene expression. Results Quantitative reverse transcriptase PCR-derived time-course data of genes were modelled. "Split-line" or "broken-stick" regression identified the initial time of gene up-regulation, enabling the classification of genes into those with primary and secondary responses. Five-day profiles were modelled using the biologically-oriented, critical exponential curve, y(t) = A + (B + Ct)R^t + ε. This non-linear regression approach allowed the expression patterns for different genes to be compared in terms of curve shape, time of maximal transcript level and the decline and asymptotic response levels. Three distinct regulatory patterns were identified for the five genes studied. Applying the regression modelling approach to microarray-derived time course data
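The critical exponential curve y(t) = A + (B + Ct)R^t can be fitted with standard non-linear least squares; a minimal sketch, with an invented five-point time course and starting values (not data from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def critical_exponential(t, A, B, C, R):
    """Critical exponential curve y(t) = A + (B + C*t) * R**t."""
    return A + (B + C * t) * R ** t

# Hypothetical five-day transcript profile (arbitrary units); illustrative only.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 3.9, 5.6, 5.1, 3.8, 2.6])

params, _ = curve_fit(critical_exponential, t, y, p0=(1.0, 0.5, 2.0, 0.6))
fit = critical_exponential(t, *params)
print("A, B, C, R =", np.round(params, 2), "; peak near day", t[np.argmax(fit)])
```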
Directory of Open Access Journals (Sweden)
Nora K Frisch
2015-01-01
Background: ThinPrep® (TP) cervical cytology, as a liquid-based method, has many benefits but also a relatively high unsatisfactory rate due to debris/lubricant contamination and the presence of blood. These contaminants clog the TP filter and prevent the deposition of adequate diagnostic cells on the slide. An acetic acid wash (AAW) protocol is often used to lyse red blood cells, before preparing the TP slides. Design: From 23,291 TP cervical cytology specimens over a 4-month period, 2739 underwent the AAW protocol due to an initially unsatisfactory smear (UNS) with scant cellularity due to blood or being grossly bloody. Randomly selected 2739 cervical cytology specimens which did not undergo AAW from the same time period formed the control (non-AAW) group. Cytopathologic interpretations of the AAW and non-AAW groups were compared using the Chi-square test. Results: About 94.2% of the 2739 cases which underwent AAW were subsequently satisfactory for evaluation, with interpretations of atypical squamous cells of undetermined significance (ASCUS) 4.9% (135), low-grade squamous intraepithelial lesions (LSIL) 3.7% (102), and high-grade squamous intraepithelial lesions (HSIL) 1% (28). Of the 2739 control cases, 96.3% were satisfactory, with ASCUS 5.5% (151), LSIL 5.1% (139), and HSIL 0.7% (19). The prevalence of ASCUS interpretations was similar (P = 0.33). Although there were 32% more HSIL interpretations in the AAW group (28 in AAW vs. 19 in non-AAW), the difference was statistically insignificant (P = 0.18). The AAW category, however, had significantly fewer LSIL interpretations (P = 0.02). The percentage of UNS cases remained higher in the AAW group, with statistical significance (P < 0.01). Conclusions: While AAW had a significantly higher percent of UNS interpretations, the protocol was effective in rescuing 94.2% of specimens which otherwise may have been reported unsatisfactory. This improved patient care by avoiding a repeat test. The prevalence of ASCUS and HSIL
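The HSIL comparison quoted above is a standard chi-square test on a 2 × 2 table; a sketch reproducing it from the reported counts (SciPy, without Yates' correction, which lands near the reported P = 0.18):

```python
from scipy.stats import chi2_contingency

# 2x2 table of HSIL vs. non-HSIL interpretations for the AAW and control
# groups, using the counts quoted above (28/2739 vs. 19/2739).
table = [[28, 2739 - 28],
         [19, 2739 - 19]]
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")  # p ~ 0.19: not significant
```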
Cellulose with a High Fractal Dimension Is Easily Hydrolysable under Acid Catalysis
Directory of Open Access Journals (Sweden)
Mariana Díaz
2017-05-01
The adsorption of three different amino acid couples onto the surface of microcrystalline cellulose was studied. Characterisation of the modified celluloses included changes in polarity and in roughness. The amino acids partially break down the hydrogen-bonding network of the cellulose structure, leading to more reactive cellulose residues that were easily hydrolysed to glucose in the presence of hydrochloric acid or tungstophosphoric acid catalysts. The conversion of cellulose and the selectivity for glucose were highly dependent on the self-assembled amino acids adsorbed onto the cellulose and on the catalyst.
Smith, Tony N; Traise, Peter; Cook, Aiden
2009-01-01
In regional, rural and remote clinical practice, radiographers work closely with medical members of the acute care team in the interpretation of radiographic images, particularly when no radiologist is available. However, the misreading of radiographs by non-radiologist physicians has been shown to be the most common type of clinical error in the emergency department. Further, in Australia few rural radiographers are specifically trained to interpret and report on images. This study aimed to evaluate the accuracy of a group of rural radiographers in interpreting musculoskeletal plain radiographs, and to assess the effectiveness of continuing education (CE) in improving their accuracy within a short time frame. Following ethics approval, 16 rural radiographers were recruited to the study. At inception a purpose-designed 'test-object' of 25 cases compiled by a radiologist was used to assess image interpretation accuracy. The cases were categorised into three grades of complexity. The radiographers entered their answers on a structured radiographer opinion form (ROF) that had three levels of response - 'general opinion', 'observations' and 'open comment'. Subsequent to base-line testing, the radiographers participated in a CE program aimed at improving their image interpretation skills. After a 4 month period they were re-tested using the same methodology. The ROFs were scored by the radiologist and the pooled results analysed for statistically significant changes at all ROF levels and grades of complexity. While for the small number of less complex grade 1 cases there was no change in image interpretation accuracy, for the more numerous and more complex grade 2 and grade 3 cases there was a statistically significant improvement at the 'general opinion' and 'observation' levels (paired t-test). However, radiographers' ability to use radiological vocabulary needs improvement. The complementary role that exists between radiographers and other members of
A Comprehensive Statistically-Based Method to Interpret Real-Time Flowing Measurements
Energy Technology Data Exchange (ETDEWEB)
Keita Yoshioka; Pinan Dawkrajai; Analis A. Romero; Ding Zhu; A. D. Hill; Larry W. Lake
2007-01-15
With the recent development of temperature measurement systems, continuous temperature profiles can be obtained with high precision. Small temperature changes can be detected by modern temperature measuring instruments such as the fiber optic distributed temperature sensor (DTS) in intelligent completions and will potentially aid the diagnosis of downhole flow conditions. In vertical wells, since elevational geothermal changes make the wellbore temperature sensitive to the amount and the type of fluids produced, temperature logs can be used successfully to diagnose downhole flow conditions. However, because geothermal temperature changes along the wellbore are small in horizontal wells, interpretation of a temperature log becomes difficult. The primary temperature differences for each phase (oil, water, and gas) are caused by frictional effects. Therefore, in developing a thermal model for a horizontal wellbore, subtle temperature changes must be accounted for. In this project, we have rigorously derived governing equations for a producing horizontal wellbore and developed a prediction model of the temperature and pressure by coupling the wellbore and reservoir equations. Also, we applied Ramey's model (1962) to the build section and used an energy balance to infer the temperature profile at the junction. The multilateral wellbore temperature model was applied to a wide range of cases at varying fluid thermal properties, absolute values of temperature and pressure, geothermal gradients, flow rates from each lateral, and trajectories of each build section. With the prediction models developed, we present inversion studies of synthetic and field examples. These results are essential to identify water or gas entry, to guide flow control devices in intelligent completions, and to decide if reservoir stimulation is needed in particular horizontal sections. This study will complete and validate these inversion studies.
Assistive Technologies for Second-Year Statistics Students Who Are Blind
Erhardt, Robert J.; Shuman, Michael P.
2015-01-01
At Wake Forest University, a student who is blind enrolled in a second course in statistics. The course covered simple and multiple regression, model diagnostics, model selection, data visualization, and elementary logistic regression. These topics required that the student both interpret and produce three sets of materials: mathematical writing,…
Liao, Ying; Lin, Wen-He
2016-01-01
In the era when digitalization is pursued, numbers are the major medium of information performance and statistics is the primary instrument to interpret and analyze numerical information. For this reason, the cultivation of fundamental statistical literacy should be a key in the learning area of mathematics at the stage of compulsory education.…
How to Interpret the Responses of a Karstic Field to a Harmonic Pumping
Fischer, P.; Jardani, A.; Cardiff, M. A.; Lecoq, N.
2017-12-01
In a karstic field, the flow paths are very complex as they globally follow the conduit network. The drawdown responses to a pumping test at constant rate in this type of aquifer are highly variable spatially and difficult to interpret. Furthermore, constant-rate pumping tends to mobilize matrix diffusive flows and, thus, the conduit flows become 'blurred'. Harmonic pumping tests represent a new investigation method for characterizing subsurface groundwater flows. They have several advantages compared to constant-rate pumping (i.e. more signal possibilities, easier extraction of the signal in the responses, the possibility of closed-loop investigation). In the case of a karstic field investigation, several works have shown that a harmonic pumping test allows a better characterization of the local field hydraulic properties. We show in our recent works that interpreting the responses to a harmonic pumping test makes it possible to go further in the conduit network characterization by delineating a degree of connectivity between measurement points. We have studied the amplitude and phase offset values in the responses to a harmonic pumping test in a theoretical synthetic modeling case in order to define an interpretation method for the responses. According to the amplitude and phase offset values in a response, relative to the pumping signal, we have distinguished three different types of responses to be interpreted: a direct connectivity response (conduit flow), an indirect connectivity (conduit and short matrix flows), and an absence of connectivity. We have applied this interpretation method to real field responses (from a karstic field in Southern France). First, we found that the whole set of field responses is coherent with the observations made in the theoretical case. Then, by comparing the periodic responses with each other and with the pumping signal, we could interpret and delineate easily and quickly the main flow paths, through the degree
Index of subfactors and statistics of quantum fields. Pt. 2
International Nuclear Information System (INIS)
Longo, R.
1990-01-01
The endomorphism semigroup End(M) of an infinite factor M is endowed with a natural conjugation (modulo inner automorphisms) ρ̄ = ρ⁻¹ ∘ γ, where γ is the canonical endomorphism of ρ(M) into M. In Quantum Field Theory conjugate endomorphisms are shown to correspond to conjugate superselection sectors in the description of Doplicher, Haag and Roberts. On the other hand one easily sees that conjugate endomorphisms correspond to conjugate correspondences in the setting of A. Connes. In particular we identify the canonical tower associated with the inclusion ρ(A(O)) ⊂ A(O) relative to a sector ρ. As a corollary, making use of our previously established index-statistics correspondence, we completely describe, in low dimensional theories, the statistics of a self-conjugate superselection sector ρ with 3 or fewer channels, in particular with statistical dimension d(ρ) < 2, by obtaining the braid group representations of V. Jones and of Birman, Wenzl and Murakami. The statistics is thus described in these cases by the polynomial invariants for knots and links of Jones and Kauffman. Self-conjugate sectors are subdivided into real and pseudoreal ones and the effect of this distinction on the statistics is analyzed. The HOMFLY polynomial describes arbitrary 2-channel sectors. (orig.)
Methods for interpreting lists of affected genes obtained in a DNA microarray experiment
Directory of Open Access Journals (Sweden)
Hedegaard Jakob
2009-07-01
Abstract Background The aim of this paper was to describe and compare the methods used and the results obtained by the participants in a joint EADGENE (European Animal Disease Genomic Network of Excellence) and SABRE (Cutting Edge Genomics for Sustainable Animal Breeding) workshop focusing on post analysis of microarray data. The participating groups were provided with identical lists of microarray probes, including test statistics for three different contrasts, and the normalised log-ratios for each array, to be used as the starting point for interpreting the affected probes. The data originated from a microarray experiment conducted to study the host reactions in broilers occurring shortly after a secondary challenge with either a homologous or heterologous species of Eimeria. Results Several conceptually different analytical approaches, using both commercial and publicly available software, were applied by the participating groups. The following tools were used: Ingenuity Pathway Analysis, MAPPFinder, LIMMA, GOstats, GOEAST, GOTM, Globaltest, TopGO, ArrayUnlock, Pathway Studio, GIST and AnnotationDbi. The main focus of the approaches was to utilise the relation between probes/genes and their gene ontology and pathways to interpret the affected probes/genes. The lack of a well-annotated chicken genome did, though, limit the possibilities to fully explore the tools. The main results from these analyses showed that the biological interpretation is highly dependent on the statistical method used but that some common biological conclusions could be reached. Conclusion It is highly recommended to test different analytical methods on the same data set and compare the results to obtain a reliable biological interpretation of the affected genes in a DNA microarray experiment.
Feo, Rebecca; Conroy, Tiffany; Marshall, Rhianon J; Rasmussen, Philippa; Wiechula, Richard; Kitson, Alison L
2017-04-01
Nursing policy and healthcare reform are focusing on two, interconnected areas: person-centred care and fundamental care. Each initiative emphasises a positive nurse-patient relationship. For these initiatives to work, nurses require guidance for how they can best develop and maintain relationships with their patients in practice. Although empirical evidence on the nurse-patient relationship is increasing, findings derived from this research are not readily or easily transferable to the complexities and diversities of nursing practice. This study describes a novel methodological approach, called holistic interpretive synthesis (HIS), for interpreting empirical research findings to create practice-relevant recommendations for nurses. Using HIS, umbrella review findings on the nurse-patient relationship are interpreted through the lens of the Fundamentals of Care Framework. The recommendations for the nurse-patient relationship created through this approach can be used by nurses to establish, maintain and evaluate therapeutic relationships with patients to deliver person-centred fundamental care. Future research should evaluate the validity and impact of these recommendations and test the feasibility of using HIS for other areas of nursing practice and further refine the approach. © 2016 John Wiley & Sons Ltd.
Do doctors need statistics? Doctors' use of and attitudes to probability and statistics.
Swift, Louise; Miles, Susan; Price, Gill M; Shepstone, Lee; Leinster, Sam J
2009-07-10
There is little published evidence on what doctors do in their work that requires probability and statistics, yet the General Medical Council (GMC) requires new doctors to have these skills. This study investigated doctors' use of and attitudes to probability and statistics with a view to informing undergraduate teaching. An email questionnaire was sent to 473 clinicians with an affiliation to the University of East Anglia's Medical School. Of 130 respondents, approximately 90 per cent of doctors who performed each of the following activities found probability and statistics useful for that activity: accessing clinical guidelines and evidence summaries, explaining levels of risk to patients, assessing medical marketing and advertising material, interpreting the results of a screening test, reading research publications for general professional interest, and using research publications to explore non-standard treatment and management options. Seventy-nine per cent (103/130, 95 per cent CI 71 per cent, 86 per cent) of participants considered probability and statistics important in their work. Sixty-three per cent (78/124, 95 per cent CI 54 per cent, 71 per cent) said that there were activities that they could do better or start doing if they had an improved understanding of these areas, and 74 of these participants elaborated on this. Themes highlighted by participants included: being better able to critically evaluate other people's research; becoming more research-active; having a better understanding of risk; and being better able to explain things to, or teach, other people. Our results can be used to inform how probability and statistics should be taught to medical undergraduates and should encourage today's medical students of the subjects' relevance to their future careers. Copyright 2009 John Wiley & Sons, Ltd.
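The proportion-with-confidence-interval style of reporting used here is easy to reproduce; a sketch for the 79% (103/130) figure, assuming a Wilson interval (the paper does not state which method it used):

```python
from statsmodels.stats.proportion import proportion_confint

# 103 of 130 respondents rated probability and statistics as important.
low, high = proportion_confint(count=103, nobs=130, alpha=0.05, method="wilson")
print(f"79% (95% CI {low:.0%} to {high:.0%})")  # close to the reported 71%, 86%
```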
Statistical methods for quantitative indicators of impacts, applied to transmission line projects
International Nuclear Information System (INIS)
Ospina Norena, Jesus Efren; Lema Tapias, Alvaro de Jesus
2005-01-01
Multivariate statistical analyses are proposed for uncovering the relationships between variables and impacts, in order to obtain high explanatory power in the interpretation of causes and effects, achieve the highest certainty possible, and evaluate and classify impacts by their level of influence
Quantum Statistical Mechanics, L-Series and Anabelian Geometry I: Partition Functions
Marcolli, Matilde; Cornelissen, Gunther
2014-01-01
The zeta function of a number field can be interpreted as the partition function of an associated quantum statistical mechanical (QSM) system, built from abelian class field theory. We introduce a general notion of isomorphism of QSM-systems and prove that it preserves (extremal) KMS equilibrium
Data-driven inference for the spatial scan statistic.
Almeida, Alexandre C L; Duarte, Anderson R; Duczmal, Luiz H; Oliveira, Fernando L P; Takahashi, Ricardo H C
2011-08-02
Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is given, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account for comparison only those most likely clusters of size k found under the null hypothesis? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, as it bears on the correctness of the decision based on this inference. A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
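A toy sketch of the modified inference idea - condition the Monte Carlo comparison on the size k of the most likely cluster - using a simplified 1-D scan with Kulldorff's Poisson likelihood ratio; the window scan, null generation and data are illustrative stand-ins for the full spatial procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def most_likely_cluster(cases, pop):
    """Toy 1-D scan: over all contiguous windows, return the maximum of
    Kulldorff's Poisson log-likelihood ratio and that window's size."""
    total_c, total_p = cases.sum(), pop.sum()
    best_llr, best_size = 0.0, 0
    for i in range(len(cases)):
        for j in range(i + 1, len(cases) + 1):
            c, p = cases[i:j].sum(), pop[i:j].sum()
            e = total_c * p / total_p            # expected cases in window
            if c > e and c < total_c:
                llr = (c * np.log(c / e)
                       + (total_c - c) * np.log((total_c - c) / (total_c - e)))
                if llr > best_llr:
                    best_llr, best_size = llr, j - i
    return best_llr, best_size

pop = np.full(20, 1000)
cases = rng.poisson(10, size=20)
cases[5:8] += 12                                  # planted cluster of 3 areas
obs_llr, k = most_likely_cluster(cases, pop)

# Modified inference: compare the observed LLR only against null replications
# whose own most likely cluster also has size k.
null = [most_likely_cluster(rng.poisson(cases.mean(), size=20), pop)
        for _ in range(999)]
same_k = [llr for llr, size in null if size == k]
p_value = (1 + sum(llr >= obs_llr for llr in same_k)) / (1 + len(same_k))
print(f"most likely cluster size k = {k}, conditional p = {p_value:.3f}")
```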
Semiclassical statistical mechanics
International Nuclear Information System (INIS)
Stratt, R.M.
1979-04-01
On the basis of an approach devised by Miller, a formalism is developed which allows the nonperturbative incorporation of quantum effects into equilibrium classical statistical mechanics. The resulting expressions bear a close similarity to classical phase space integrals and, therefore, are easily molded into forms suitable for examining a wide variety of problems. As a demonstration of this, three such problems are briefly considered: the simple harmonic oscillator, the vibrational state distribution of HCl, and the density-independent radial distribution function of He-4. A more detailed study is then made of two more general applications involving the statistical mechanics of nonanalytic potentials and of fluids. The former, which is a particularly difficult problem for perturbative schemes, is treated with only limited success by restricting phase space and by adding an effective potential. The problem of fluids, however, is readily found to yield to a semiclassical pairwise interaction approximation, which in turn permits any classical many-body model to be expressed in a convenient form. The remainder of the discussion concentrates on some ramifications of having a phase space version of quantum mechanics. To test the breadth of the formulation, the task of constructing quantal ensemble averages of phase space functions is undertaken, and in the process several limitations of the formalism are revealed. A rather different approach is also pursued. The concept of quantum mechanical ergodicity is examined through the use of numerically evaluated eigenstates of the Barbanis potential, and the existence of this quantal ergodicity - normally associated with classical phase space - is verified. 21 figures, 4 tables
Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.
Chalmers, R Philip
2018-06-01
This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.
An empiric expression to interpret the approximation of λcI phages to E. coli C600 bacteria
International Nuclear Information System (INIS)
Garces, F.; Vidania, R. de
1984-01-01
In general the process of adsorption of phages to bacteria is considered in the literature as a statistical process. In this work we use an empiric expression which allows us to interpret the approximation of λcI phages to E. coli C600 bacteria. This expression introduces some changes with respect to a purely statistical description of the approximation process. (Author) 26 refs
van Bömmel, Alena; Song, Song; Majer, Piotr; Mohr, Peter N C; Heekeren, Hauke R; Härdle, Wolfgang K
2014-07-01
Decision making usually involves uncertainty and risk. Understanding which parts of the human brain are activated during decisions under risk and which neural processes underlie (risky) investment decisions are important goals in neuroeconomics. Here, we analyze functional magnetic resonance imaging (fMRI) data on 17 subjects who were exposed to an investment decision task from Mohr, Biele, Krugel, Li, and Heekeren (in NeuroImage 49, 2556-2563, 2010b). We obtain a time series of three-dimensional images of the blood-oxygen-level dependent (BOLD) fMRI signals. We apply a panel version of the dynamic semiparametric factor model (DSFM) presented in Park, Mammen, Härdle, and Borak (in Journal of the American Statistical Association 104(485), 284-298, 2009) and identify task-related activations in space and dynamics in time. With the panel DSFM (PDSFM) we can capture the dynamic behavior of the specific brain regions common to all subjects and represent the high-dimensional time-series data in easily interpretable low-dimensional dynamic factors without large loss of variability. Further, we classify the risk attitudes of all subjects based on the estimated low-dimensional time series. Our classification analysis successfully confirms the estimated risk attitudes derived directly from subjects' decision behavior.
Electromagnetic SAMPO monitoring soundings at OLKILUOTO in 2012 with updated interpretations
International Nuclear Information System (INIS)
Korhonen, K.
2013-11-01
The Geological Survey of Finland (GTK) has carried out electromagnetic depth soundings annually at fixed stations at Olkiluoto since 2004 as part of a monitoring programme. The goal of the programme is to detect and monitor changes in the electrical properties of the bedrock above and in the vicinity of the ONKALO tunnel which will serve as a part of the future underground nuclear waste disposal facility. A new Sampo monitoring survey was carried out during October 2012. The survey plan of 2011 was slightly modified and 36 soundings at 16 measurement stations were carried out. The nominal coil separations of 200, 400, 500, 600 and 800 meters were used. Interpretations at eight selected stations were updated with the new data. The interpretations indicate consistent statistically significant changes. Annual increases in resistivity were detected at stations to the East of ONKALO while annual decreases in resistivity were detected to the West of ONKALO. However, these changes need to be considered keeping in mind the high degree of uncertainty associated with the data and their interpretations. (orig.)
Systematic interpretation of microarray data using experiment annotations
Directory of Open Access Journals (Sweden)
Frohme Marcus
2006-12-01
Abstract Background Up to now, microarray data are mostly assessed in context with only one or few parameters characterizing the experimental conditions under study. More explicit experiment annotations, however, are highly useful for interpreting microarray data, when available in a statistically accessible format. Results We provide means to preprocess these additional data, and to extract relevant traits corresponding to the transcription patterns under study. We found correspondence analysis particularly well-suited for mapping such extracted traits. It visualizes associations both among and between the traits, the hereby annotated experiments, and the genes, revealing how they are all interrelated. Here, we apply our methods to the systematic interpretation of radioactive (single-channel) and two-channel data, stemming from model organisms such as yeast and Drosophila up to complex human cancer samples. Inclusion of technical parameters allows for identification of artifacts and flaws in experimental design. Conclusion Biological and clinical traits can act as landmarks in transcription space, systematically mapping the variance of large datasets from the predominant changes down toward intricate details.
Distributed data collection for a database of radiological image interpretations
Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.
1997-01-01
The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window-based 2048 × 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high-resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.
de Zoete, J.; Curran, J.; Sjerps, M.
2015-01-01
Existing methods for the interpretation of RNA profiles as evidence for the presence of certain cell types aim for making categorical statements. Such statements limit the possibility to report the associated uncertainty. From a statistical point of view, a probabilistic approach is a preferable
HOW TO SELECT APPROPRIATE STATISTICAL TEST IN SCIENTIFIC ARTICLES
Directory of Open Access Journals (Sweden)
Vladimir TRAJKOVSKI
2016-09-01
Statistics is the mathematical science dealing with the collection, analysis, interpretation, and presentation of masses of numerical data in order to draw relevant conclusions. Statistics is a form of mathematical analysis that uses quantified models, representations and synopses for a given set of experimental data or real-life studies. Students and young researchers in the biomedical sciences and in special education and rehabilitation often declare that they have chosen that study program because they lack knowledge of or interest in mathematics. This is a sad statement, but there is much truth in it. The aim of this editorial is to help young researchers to select the statistics or statistical techniques and statistical software appropriate for the purposes and conditions of a particular analysis. The most important statistical tests are reviewed in the article. Knowing how to choose the right statistical test is an important asset and decision in research data processing and in the writing of scientific papers. Young researchers and authors should know how to choose and how to use statistical methods. The competent researcher will need knowledge of statistical procedures. That might include an introductory statistics course, and it most certainly includes using a good statistics textbook. For this purpose, there is a need to return Statistics as a mandatory subject in the curriculum of the Institute of Special Education and Rehabilitation at the Faculty of Philosophy in Skopje. Young researchers need additional courses in statistics. They need to train themselves to use statistical software in an appropriate way.
International Nuclear Information System (INIS)
Hirao, Keiichi; Yamane, Toshimi; Minamino, Yoritoshi
1991-01-01
This report shows how the stress corrosion cracking life of fuel cladding tubes is evaluated by applying statistical techniques to lives examined by a few testing methods. The statistical distribution of the limiting values of constant-load stress corrosion cracking life, the statistical analysis based on a probabilistic interpretation of constant-load stress corrosion cracking life, and the statistical analysis of stress corrosion cracking life obtained by the slow strain rate test (SSRT) method are described. (K.I.)
Advances in Statistical Methods for Substance Abuse Prevention Research
MacKinnon, David P.; Lockwood, Chondra M.
2010-01-01
The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467
Prospective elementary and secondary school mathematics teachers’ statistical reasoning
Directory of Open Access Journals (Sweden)
Rabia KARATOPRAK
2015-04-01
This study investigated prospective elementary (PEMTs) and secondary (PSMTs) school mathematics teachers' statistical reasoning. The study began with the adaptation of the Statistical Reasoning Assessment (Garfield, 2003) test. Then, the test was administered to 82 PEMTs and 91 PSMTs in a metropolitan city of Turkey. Results showed that both groups were equally successful in understanding independence and understanding the importance of large samples. However, results from selecting appropriate measures of center, together with the misconceptions assessing the same subscales, showed that both groups selected the mode rather than the mean as an appropriate average. This suggests a lack of attention to categorical and interval/ratio variables while examining data. Similarly, both groups were successful in interpreting and computing probability; however, they had the equiprobability bias, law of small numbers and representativeness misconceptions. The results imply a change in some questions in the Statistical Reasoning Assessment test and that teacher training programs should include statistics courses focusing on studying the characteristics of samples.
International Nuclear Information System (INIS)
Martin, Robert P.; Nutt, William T.
2011-01-01
Research highlights: → Historical recitation on application of order-statistics models to nuclear power plant thermal-hydraulics safety analysis. → Interpretation of regulatory language regarding 10 CFR 50.46 reference to a 'high level of probability'. → Derivation and explanation of order-statistics-based evaluation methodologies considering multi-variate acceptance criteria. → Summary of order-statistics models and recommendations to the nuclear power plant thermal-hydraulics safety analysis community. - Abstract: The application of order-statistics in best-estimate plus uncertainty nuclear safety analysis has received a considerable amount of attention from methodology practitioners, regulators, and academia. At the root of the debate are two questions: (1) what is an appropriate quantitative interpretation of 'high level of probability' in the regulatory language appearing in the LOCA rule, 10 CFR 50.46, and (2) how best to mathematically characterize the multi-variate case. An original derivation is offered to provide a quantitative basis for 'high level of probability.' At the root of the second question is whether one should recognize a probability statement based on the tolerance region method of Wald and Guba et al. for multi-variate problems, one explicitly based on the regulatory limits, best articulated in the Wallis-Nutt 'Testing Method', or something else entirely. This paper reviews the origins of the different positions, key assumptions, limitations, and relationship to addressing acceptance criteria. It presents a mathematical interpretation of the regulatory language, including a complete derivation of uni-variate order-statistics (as credited in AREVA's Realistic Large Break LOCA methodology) and an extension to multi-variate situations. Lastly, it provides recommendations for LOCA applications, endorsing the 'Testing Method' and addressing acceptance methods allowing for limited sample failures.
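The uni-variate derivation referred to here rests on the classic non-parametric (Wilks-type) order-statistics result, which is easy to reproduce numerically; a sketch for the one-sided 95/95 criterion (function and defaults are illustrative, not from the paper):

```python
from scipy.stats import binom

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest n such that the order-th largest of n code runs bounds the
    `coverage` quantile with the stated confidence (one-sided, non-parametric)."""
    n = order
    while binom.cdf(order - 1, n, 1 - coverage) > 1 - confidence:
        n += 1
    return n

# Classic uni-variate 95/95 results quoted in the best-estimate plus
# uncertainty literature:
print(wilks_sample_size())         # 59 runs when the maximum is the bound
print(wilks_sample_size(order=2))  # 93 runs when the second largest is used
```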
Identifying Reflectors in Seismic Images via Statistic and Syntactic Methods
Directory of Open Access Journals (Sweden)
Carlos A. Perez
2010-04-01
In geologic interpretation of seismic reflection data, accurate identification of reflectors is the foremost step to ensure proper subsurface structural definition. Reflector information, along with other data sets, is a key factor to predict the presence of hydrocarbons. In this work, mathematical and pattern recognition theory was adapted to design two statistical and two syntactic algorithms which constitute a tool for semiautomatic reflector identification. The interpretive power of these four schemes was evaluated in terms of prediction accuracy and computational speed. Among these, the semblance method was confirmed to render the greatest accuracy and speed. Syntactic methods offer an interesting alternative due to their inherently structural search method.
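The semblance measure the authors single out is the standard coherence ratio used in seismic processing; a sketch (not the authors' implementation) showing that coherent energy across traces scores near 1 while noise scores near 0:

```python
import numpy as np

def semblance(traces):
    """Semblance of a window of seismic traces, shape (n_traces, n_samples).

    Ratio of the stacked-trace energy to n_traces times the total trace
    energy; values near 1 indicate coherent energy (a likely reflector).
    """
    stacked = traces.sum(axis=0)
    return float((stacked ** 2).sum() / (traces.shape[0] * (traces ** 2).sum()))

# A coherent wavelet plus noise scores much higher than pure noise.
rng = np.random.default_rng(3)
wavelet = np.sin(np.linspace(0.0, np.pi, 32))
coherent = np.tile(wavelet, (6, 1)) + 0.2 * rng.standard_normal((6, 32))
noise = rng.standard_normal((6, 32))
print(semblance(coherent), semblance(noise))
```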
Analysis and classification of ECG-waves and rhythms using circular statistics and vector strength
Directory of Open Access Journals (Sweden)
Janßen Jan-Dirk
2017-09-01
The most common way to analyse heart rhythm is to calculate the RR-interval and the heart rate variability. For further evaluation, descriptive statistics are often used. Here we introduce a new and more natural heart rhythm analysis tool that is based on circular statistics and vector strength. Vector strength is a tool to measure the periodicity or lack of periodicity of a signal. We divide the signal into non-overlapping window segments and project the detected R-waves around the unit circle using the complex exponential function and the median RR-interval. In addition, we calculate the vector strength and apply circular statistics as well as an angular histogram to the R-wave vectors. This approach enables an intuitive visualization and analysis of rhythmicity. Our results show that ECG-waves and rhythms can be easily visualized, analysed and classified by circular statistics and vector strength.
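The core computation described here - project R-wave times onto the unit circle using the median RR-interval and take the resultant length - fits in a few lines; a sketch with synthetic beat times (illustrative only):

```python
import numpy as np

def vector_strength(r_times):
    """Vector strength of detected R-wave times, projected onto the unit
    circle with the median RR-interval as the period (as in the abstract)."""
    period = np.median(np.diff(r_times))
    phases = np.exp(2j * np.pi * np.asarray(r_times) / period)
    return np.abs(phases.mean())   # 1 = perfectly periodic, 0 = aperiodic

# A regular rhythm scores near 1; jittered R-waves score lower (illustrative).
regular = np.arange(0.0, 10.0, 0.8)
jittered = regular + np.random.default_rng(7).normal(0.0, 0.15, regular.size)
print(vector_strength(regular), vector_strength(jittered))
```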
Naeger, D M; Chang, S D; Kolli, P; Shah, V; Huang, W; Thoeni, R F
2011-01-01
Objective The study compared the sensitivity, specificity, confidence and interpretation time of readers of differing experience in diagnosing acute appendicitis with contrast-enhanced CT using neutral vs positive oral contrast agents. Methods Contrast-enhanced CT for right lower quadrant or right flank pain was performed in 200 patients with neutral and 200 with positive oral contrast including 199 with proven acute appendicitis and 201 with other diagnoses. Test set disease prevalence was 50%. Two experienced gastrointestinal radiologists, one fellow and two first-year residents blindly assessed all studies for appendicitis (2000 readings) and assigned confidence scores (1 = poor to 4 = excellent). Receiver operating characteristic (ROC) curves were generated. Total interpretation time was recorded. Each reader's interpretation with the two agents was compared using standard statistical methods. Results Average reader sensitivity was found to be 96% (range 91–99%) with positive and 95% (89–98%) with neutral oral contrast; specificity was 96% (92–98%) and 94% (90–97%). For each reader, no statistically significant difference was found between the two agents (sensitivities p-values > 0.6; specificities p-values > 0.08), in the area under the ROC curve (range 0.95–0.99) or in average interpretation times. In cases without appendicitis, positive oral contrast demonstrated improved appendix identification (average 90% vs 78%) and higher confidence scores for three readers. Average interpretation times showed no statistically significant differences between the agents. Conclusion Neutral vs positive oral contrast does not affect the accuracy of contrast-enhanced CT for diagnosing acute appendicitis. Although positive oral contrast might help to identify normal appendices, we continue to use neutral oral contrast given its other potential benefits. PMID:20959365
Statistical classifiers on multifractal parameters for optical diagnosis of cervical cancer
Mukhopadhyay, Sabyasachi; Pratiher, Sawon; Kumar, Rajeev; Krishnamoorthy, Vigneshram; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2017-06-01
An augmented set of multifractal parameters with physical interpretations has been proposed to quantify the varying distribution and shape of the multifractal spectrum. The statistical classifier with an accuracy of 84.17% validates the adequacy of multi-feature MFDFA characterization of elastic scattering spectroscopy for optical diagnosis of cancer.
An Online Course of Business Statistics: The Proportion of Successful Students
Pena-Sanchez, Rolando
2009-01-01
This article describes the students' academic progress in an online course of business statistics through interactive software assignments and diverse educational homework, which helps these students to build their own e-learning through basic competences; i.e. interpreting results and solving problems. Cross-tables were built for the categorical…
Can four-quark states be easily detected in baryon-antibaryon scattering?
International Nuclear Information System (INIS)
Roberts, W.; Silvestre-Brac, B.; Gignoux, C.
1990-01-01
We attempt to explain the experimental sparsity of diquonia candidates given the theoretical abundance of such states. We do this by investigating the lowest-order contributions of such states as intermediates in p bar p scattering into exclusive baryon-antibaryon final states. We find that the contributions depend on the partial widths for the meson-meson decays of the diquonia, and that resonant effects can be easily made to disappear. We conclude that if the meson-meson widths of diquonia are larger than about 50 MeV, most of these states will be extremely difficult to observe in p bar p scattering, for instance. We note that diquonia may offer a convenient means of describing some aspects of the dynamics of baryon-antibaryon scattering
Hobden, Sally
2014-01-01
Information on the HIV/AIDS epidemic in Southern Africa is often interpreted through a veil of secrecy and shame and, I argue, with flawed understanding of basic statistics. This research determined the levels of statistical literacy evident in 316 future Mathematical Literacy teachers' explanations of the median in the context of HIV/AIDS…
Statistical physics of hard optimization problems
International Nuclear Information System (INIS)
Zdeborova, L.
2009-01-01
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult: it is believed that the number of operations required to minimize the cost function is in the most difficult cases exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
Statistical physics of hard optimization problems
Zdeborová, Lenka
2009-06-01
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult: it is believed that the number of operations required to minimize the cost function is in the most difficult cases exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
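To make the notion of typical-case hardness concrete, here is a toy sketch (my own illustration, not the cavity method discussed above) that generates random 3-SAT instances at several clause-to-variable ratios and checks satisfiability by brute force; the instance sizes are deliberately tiny so the exponential search finishes quickly:

import itertools
import random

def random_3sat(n_vars, n_clauses, rng=random.Random(0)):
    """Random 3-SAT: each clause picks 3 distinct variables, each negated
    with probability 1/2. A literal is a (variable, negated?) pair."""
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):
    """Exhaustive search - exponential in n_vars, fine for toy sizes."""
    for bits in itertools.product((False, True), repeat=n_vars):
        if all(any(bits[v] != neg for v, neg in cl) for cl in clauses):
            return True
    return False

n = 12
for alpha in (2.0, 4.2, 6.0):   # clause-to-variable ratio; ~4.27 is the 3-SAT threshold
    sat = sum(satisfiable(n, random_3sat(n, int(alpha * n))) for _ in range(20))
    print(f"alpha = {alpha}: {sat}/20 instances satisfiable")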
Line identification studies using traditional techniques and wavelength coincidence statistics
International Nuclear Information System (INIS)
Cowley, C.R.; Adelman, S.J.
1990-01-01
Traditional line identification techniques result in the assignment of individual lines to an atomic or ionic species. These methods may be supplemented by wavelength coincidence statistics (WCS). The strengths and weaknesses of these methods are discussed using spectra of a number of normal and peculiar B and A stars that have been studied independently by both methods. The present results support the overall findings of some earlier studies. WCS would be most useful in a first survey, before traditional methods have been applied. WCS can quickly make a global search for all species and in this way may enable identification of an unexpected spectrum that could easily be omitted entirely from a traditional study. This is illustrated by O I. WCS is subject to the well-known weaknesses of any statistical technique; for example, a predictable number of spurious results are to be expected. The dangers of small-number statistics are illustrated. WCS is at its best relative to traditional methods in finding a line-rich atomic species that is only weakly present in a complicated stellar spectrum.
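A minimal sketch of the coincidence-counting idea, with a Monte Carlo loop that estimates how well random "line lists" of the same size would match by chance, which is exactly the spurious-result caveat raised above. The tolerance, wavelength range and list sizes are assumptions for illustration:

import numpy as np

def coincidence_count(observed, lab_lines, tol=0.05):
    """Number of laboratory lines of a species lying within `tol`
    (same units, e.g. Angstroms) of any observed stellar wavelength."""
    observed = np.asarray(observed, dtype=float)
    return sum(np.any(np.abs(observed - w) <= tol) for w in lab_lines)

def wcs_significance(observed, lab_lines, wmin, wmax, tol=0.05,
                     n_trials=2000, rng=None):
    """Monte Carlo estimate of how often random line lists of the same
    size match the observed spectrum at least as well as the real list."""
    rng = rng or np.random.default_rng(0)
    hits = coincidence_count(observed, lab_lines, tol)
    null = [coincidence_count(observed,
                              rng.uniform(wmin, wmax, size=len(lab_lines)), tol)
            for _ in range(n_trials)]
    p = np.mean([h >= hits for h in null])
    return hits, p

rng = np.random.default_rng(1)
observed = rng.uniform(4000.0, 4500.0, 120)          # stellar line positions
lab = np.concatenate([rng.choice(observed, 15),      # 15 genuine coincidences
                      rng.uniform(4000.0, 4500.0, 25)])
print(wcs_significance(observed, lab, 4000.0, 4500.0))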
Bayesian statistical methods and their application in probabilistic simulation models
Directory of Open Access Journals (Sweden)
Sergio Iannazzo
2007-03-01
Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing to decision analysis. To this should be added the modern progress in IT, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
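The following is a minimal sketch, in Python rather than the BUGS language discussed above, of a probabilistic three-state Markov cohort model of the kind used in health economics; the state names, costs and Dirichlet priors are invented for illustration:

import numpy as np

rng = np.random.default_rng(42)
n_sims, n_cycles = 5000, 20
state_cost = np.array([100.0, 1500.0, 0.0])   # yearly cost: well, sick, dead (invented)

total_cost = np.empty(n_sims)
for s in range(n_sims):
    # Each simulation draws its own uncertain transition probabilities;
    # the Dirichlet priors stand in for evidence on transition counts.
    p_well = rng.dirichlet([85, 10, 5])        # well -> (well, sick, dead)
    p_sick = rng.dirichlet([1, 69, 30])        # sick -> (well, sick, dead)
    P = np.vstack([p_well, p_sick, [0.0, 0.0, 1.0]])
    cohort = np.array([1.0, 0.0, 0.0])         # the whole cohort starts well
    cost = 0.0
    for _ in range(n_cycles):
        cohort = cohort @ P                    # one Markov cycle
        cost += cohort @ state_cost
    total_cost[s] = cost

print(np.percentile(total_cost, [2.5, 50, 97.5]))   # cost with uncertainty interval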
The role of key image notes in CT imaging study interpretation.
Fan, Shu-Feng; Xu, Zhe; He, Hai-Qing; Ding, Jian-Rong; Teng, Gao-Jun
2011-04-01
The objective of the study was to investigate the clinical effects of CT key image notes (KIN) in the interpretation of a CT image study. All experiments were approved by the ethics committee of the local district. Six experienced radiologists were equally divided into a routine reporting (RR) group and a KIN reporting (KIN) group. CT scans of 100 consecutive cases each, before and after the introduction of the KIN technique, were randomly selected, and the reports were made by groups RR and KIN, respectively. All the reports were reviewed again 3 months later by both groups. All the results, with and without KIN, were interpreted and reinterpreted after 3 months by six clinicians, who were experienced in picture archiving and communication system (PACS) applications and were equally divided into a clinical routine report group and a clinical KIN report group, respectively. The results were statistically analyzed; the time used in making a report, the re-reading time 3 months later, and the consistency of imaging interpretation were determined and compared between groups. After using the KIN technique, the time used in making a report was significantly increased (8.77 ± 5.27 vs. 10.53 ± 5.71 min, P < 0.05), the re-reading time was decreased (5.23 ± 2.54 vs. 4.99 ± 1.70 min, P < 0.05), the clinical interpretation and reinterpretation times after 3 months were decreased, and the consistency of interpretation and reinterpretation between different doctors at different times was markedly improved (P < 0.01). CT reporting with the KIN technique in PACS can significantly improve the consistency of interpretation and the efficiency of routine clinical work.
International Nuclear Information System (INIS)
Kurk, Toby; Adams, David G.; Connell, Simon D.; Thomson, Neil H.
2010-01-01
Imaging signals derived from the atomic force microscope (AFM) are typically presented as separate adjacent images with greyscale or pseudo-colour palettes. We propose that information-rich false-colour composites are a useful means of presenting three-channel AFM image data. This method can aid the interpretation of complex surfaces and facilitate the perception of information that is convoluted across data channels. We illustrate this approach with images of filamentous cyanobacteria imaged in air and under aqueous buffer, using both deflection-modulation (contact) mode and amplitude-modulation (tapping) mode. Topography-dependent contrast in the error and tertiary signals aids the interpretation of the topography signal by contributing additional data, resulting in a more detailed image, and by showing variations in the probe-surface interaction. Moreover, topography-independent contrast and topography-dependent contrast in the tertiary data image (phase or friction) can be distinguished more easily as a consequence of the three dimensional colour-space.
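A minimal sketch of the three-channel compositing idea described above, assuming synthetic stand-ins for the height, error and phase channels and a naive per-channel rescaling:

import numpy as np
import matplotlib.pyplot as plt

def false_colour(ch_r, ch_g, ch_b):
    """Stack three AFM channels (e.g. height, error, phase) into an RGB
    image, rescaling each channel independently to [0, 1]."""
    def norm(c):
        c = np.asarray(c, dtype=float)
        span = np.ptp(c)
        return (c - c.min()) / (span if span > 0 else 1.0)
    return np.dstack([norm(ch_r), norm(ch_g), norm(ch_b)])

# Synthetic stand-ins for the three signals
y, x = np.mgrid[0:256, 0:256]
height = np.sin(x / 20.0)                     # topography
error = np.gradient(height, axis=1)           # error signal ~ local slope
phase = np.cos(y / 15.0)                      # tertiary (phase/friction) channel

plt.imshow(false_colour(height, error, phase))
plt.axis("off")
plt.show()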
Schrodinger's mechanics interpretation
Cook, David B
2018-01-01
The interpretation of quantum mechanics has been in dispute for nearly a century with no sign of a resolution. Using a careful examination of the relationship between the final form of classical particle mechanics (the Hamilton-Jacobi equation) and Schrödinger's mechanics, this book presents a coherent way of addressing the problems and paradoxes that emerge through conventional interpretations. Schrödinger's Mechanics critiques the popular way of giving physical interpretation to the various terms in perturbation theory and other technologies and places an emphasis on development of the theory and not on an axiomatic approach. When this interpretation is made, the extension of Schrödinger's mechanics in relation to other areas, including spin, relativity and fields, is investigated and new conclusions are reached.
Hierarchical modelling for the environmental sciences statistical methods and applications
Clark, James S
2006-01-01
New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.
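As a small concrete instance of the hierarchical Bayes and MCMC machinery the book covers, here is a sketch (an invented toy, not taken from the book) of a Gibbs sampler for a two-level normal model with known variances, showing the characteristic shrinkage of group means toward the population mean:

import numpy as np

rng = np.random.default_rng(1)

# Simulated data: J groups, each group mean drawn from a population
J, n_j, sigma, tau = 8, 10, 2.0, 1.0
true_theta = rng.normal(5.0, tau, J)
y = rng.normal(true_theta[:, None], sigma, (J, n_j))
ybar = y.mean(axis=1)

# Gibbs sampler alternating theta_j | mu and mu | theta
# (sigma and tau are assumed known; flat prior on mu)
theta, mu = ybar.copy(), ybar.mean()
draws = []
for it in range(3000):
    prec = n_j / sigma**2 + 1 / tau**2
    mean = (n_j * ybar / sigma**2 + mu / tau**2) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec))     # conditional for group means
    mu = rng.normal(theta.mean(), tau / np.sqrt(J)) # conditional for population mean
    if it >= 1000:                                  # discard burn-in
        draws.append(theta.copy())

post = np.mean(draws, axis=0)
print(np.c_[ybar, post])   # posterior means are shrunk toward the grand mean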
What dementia reveals about proverb interpretation and its neuroanatomical correlates.
Kaiser, Natalie C; Lee, Grace J; Lu, Po H; Mather, Michelle J; Shapira, Jill; Jimenez, Elvira; Thompson, Paul M; Mendez, Mario F
2013-08-01
Neuropsychologists frequently include proverb interpretation as a measure of executive abilities. A concrete interpretation of proverbs, however, may reflect semantic impairments from anterior temporal lobes, rather than executive dysfunction from frontal lobes. The investigation of proverb interpretation among patients with different dementias with varying degrees of temporal and frontal dysfunction may clarify the underlying brain-behavior mechanisms for abstraction from proverbs. We propose that patients with behavioral variant frontotemporal dementia (bvFTD), who are characteristically more impaired on proverb interpretation than those with Alzheimer's disease (AD), are disproportionately impaired because of anterior temporal-mediated semantic deficits. Eleven patients with bvFTD and 10 with AD completed the Delis-Kaplan Executive Function System (D-KEFS) Proverbs Test and a series of neuropsychological measures of executive and semantic functions. The analysis included both raw and age-adjusted normed data for multiple choice responses on the D-KEFS Proverbs Test using independent samples t-tests. Tensor-based morphometry (TBM) applied to 3D T1-weighted MRI scans mapped the association between regional brain volume and proverb performance. Computations of mean Jacobian values within select regions of interest provided a numeric summary of regional volume, and voxel-wise regression yielded 3D statistical maps of the association between tissue volume and proverb scores. The patients with bvFTD were significantly worse than those with AD in proverb interpretation. The worse performance of the bvFTD patients involved a greater number of concrete responses to common, familiar proverbs, but not to uncommon, unfamiliar ones. These concrete responses to common proverbs correlated with semantic measures, whereas concrete responses to uncommon proverbs correlated with executive functions. After controlling for dementia diagnosis, TBM analyses indicated significant
GIGMF - A statistical model program
International Nuclear Information System (INIS)
Vladuca, G.; Deberth, C.
1978-01-01
The program GIGMF computes the differential and integrated statistical model cross sections for reactions proceeding through a compound nuclear stage. The computational method is based on the Hauser-Feshbach-Wolfenstein theory, modified to include the modern version of Tepel et al. Although the program was written for a PDP-15 computer, with 16K high speed memory, many reaction channels can be taken into account with the following restrictions: the projectile spin must be less than 2, and the maximum spin momenta of the compound nucleus cannot be greater than 10. These restrictions are due solely to the storage allotments and may be easily relaxed. The energy of the impinging particle, the target and projectile masses, the spins and parities of the projectile, target, emergent and residual nuclei, the maximum orbital momentum and the transmission coefficients for each reaction channel are the input parameters of the program. (author)
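For reference, the compound-nucleus cross section evaluated by such codes has the schematic Hauser-Feshbach form below (a textbook expression, not GIGMF's exact implementation), where k_a is the entrance-channel wavenumber, s_a and I_a are the projectile and target spins, the T's are the channel transmission coefficients, and W is a width-fluctuation correction of the Tepel et al. type:

\sigma_{ab} \;=\; \frac{\pi}{k_a^{2}} \sum_{J,\pi}
\frac{2J+1}{(2s_a+1)(2I_a+1)}\,
\frac{T_a^{J\pi}\, T_b^{J\pi}}{\sum_{c} T_c^{J\pi}}\; W_{ab}^{J\pi}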
International Nuclear Information System (INIS)
Remler, E.A.
1977-01-01
A gauge-invariant version of the Wigner representation is used to relate relativistic mechanics, statistical mechanics, and quantum field theory in the context of the electrodynamics of scalar particles. A unified formulation of quantum field theory and statistical mechanics is developed which clarifies the physical interpretation of the single-particle Wigner function. A covariant form of Ehrenfest's theorem is derived. Classical electrodynamics is derived from quantum field theory after making a random-phase approximation. The validity of this approximation is discussed.
Gingras, Bruno; Asselin, Pierre-Yves; McAdams, Stephen
2013-01-01
Although a growing body of research has examined issues related to individuality in music performance, few studies have attempted to quantify markers of individuality that transcend pieces and musical styles. This study aims to identify such meta-markers by discriminating between influences linked to specific pieces or interpretive goals and performer-specific playing styles, using two complementary statistical approaches: linear mixed models (LMMs) to estimate fixed (piece and interpretation) and random (performer) effects, and similarity analyses to compare expressive profiles on a note-by-note basis across pieces and expressive parameters. Twelve professional harpsichordists recorded three pieces representative of the Baroque harpsichord repertoire, including three interpretations of one of these pieces, each emphasizing a different melodic line, on an instrument equipped with a MIDI console. Four expressive parameters were analyzed: articulation, note onset asynchrony, timing, and velocity. LMMs showed that piece-specific influences were much larger for articulation than for other parameters, for which performer-specific effects were predominant, and that piece-specific influences were generally larger than effects associated with interpretive goals. Some performers consistently deviated from the mean values for articulation and velocity across pieces and interpretations, suggesting that global measures of expressivity may in some cases constitute valid markers of artistic individuality. Similarity analyses detected significant associations among the magnitudes of the correlations between the expressive profiles of different performers. These associations were found both when comparing across parameters and within the same piece or interpretation, or on the same parameter and across pieces or interpretations. These findings suggest the existence of expressive meta-strategies that can manifest themselves across pieces, interpretive goals, or expressive devices.
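A minimal sketch of the LMM analysis described above, using statsmodels on synthetic stand-in data (the performer and piece effects below are invented; the study's real data are MIDI-derived expressive parameters):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in: 12 performers x 3 pieces x 40 notes,
# articulation = piece effect + performer effect + noise
rows = []
for i in range(12):
    perf_eff = rng.normal(0, 0.3)                 # performer-specific style
    for piece in ("A", "B", "C"):
        piece_eff = {"A": 0.0, "B": 0.5, "C": -0.2}[piece]
        for _ in range(40):
            rows.append(dict(performer=f"P{i}", piece=piece,
                             articulation=piece_eff + perf_eff + rng.normal(0, 0.2)))
df = pd.DataFrame(rows)

# LMM: piece as fixed effect, performer as random intercept
model = smf.mixedlm("articulation ~ piece", df, groups=df["performer"])
fit = model.fit()
print(fit.summary())   # fixed piece effects vs. performer (group) variance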
Wilson, Donald A
2014-01-01
Base retracement on solid research and historically accurate interpretation Interpreting Land Records is the industry's most complete guide to researching and understanding the historical records germane to land surveying. Coverage includes boundary retracement and the primary considerations during new boundary establishment, as well as an introduction to historical records and guidance on effective research and interpretation. This new edition includes a new chapter titled "Researching Land Records," and advice on overcoming common research problems and insight into alternative resources wh
Statistical Data Editing in Scientific Articles.
Habibzadeh, Farrokh
2017-07-01
Scientific journals are important scholarly forums for sharing research findings. Editors have important roles in safeguarding standards of scientific publication and should be familiar with correct presentation of results, among other core competencies. Editors do not have access to the raw data and should thus rely on clues in the submitted manuscripts. To identify probable errors, they should look for inconsistencies in presented results. Common statistical problems that can be picked up by a knowledgeable manuscript editor are discussed in this article. Manuscripts should contain a detailed section on statistical analyses of the data. Numbers should be reported with appropriate precision. The standard error of the mean (SEM) should not be reported as an index of data dispersion. Mean (standard deviation [SD]) and median (interquartile range [IQR]) should be used for the description of normally and non-normally distributed data, respectively. If possible, it is better to report the 95% confidence interval (CI) for statistics, at least for the main outcome variables. P values should be presented, and interpreted with caution, if there is a hypothesis. To advance the knowledge and skills of their members, associations of journal editors would do well to develop training courses on basic statistics and research methodology for non-experts. This would in turn improve research reporting and safeguard the body of scientific evidence. © 2017 The Korean Academy of Medical Sciences.
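A short sketch of the reporting conventions recommended above, computed on a simulated skewed sample (the numbers are illustrative only):

import numpy as np
from scipy import stats

x = np.random.default_rng(7).lognormal(mean=1.0, sigma=0.6, size=40)  # skewed sample

mean, sd = x.mean(), x.std(ddof=1)
sem = sd / np.sqrt(len(x))              # SEM: precision of the mean, not data spread
median = np.median(x)
q1, q3 = np.percentile(x, [25, 75])     # IQR endpoints, preferred for skewed data
ci = stats.t.interval(0.95, df=len(x) - 1, loc=mean, scale=sem)  # 95% CI for the mean

print(f"mean (SD): {mean:.2f} ({sd:.2f})")                 # for ~normal data
print(f"median (IQR): {median:.2f} ({q1:.2f}-{q3:.2f})")   # for skewed data
print(f"95% CI for the mean: {ci[0]:.2f} to {ci[1]:.2f}")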
DEFF Research Database (Denmark)
Agerbo, Heidi
2017-01-01
Approximately a decade ago, it was suggested that a new function should be added to the lexicographical function theory: the interpretive function(1). However, hardly any research has been conducted into this function, and though it was only suggested that this new function was relevant to incorporate into lexicographical theory, some scholars have since then assumed that this function exists(2), including the author of this contribution. In Agerbo (2016), I present arguments supporting the incorporation of the interpretive function into the function theory and suggest how non-linguistic signs can be treated in specific dictionary articles. However, in the current article, due to the results of recent research, I argue that the interpretive function should not be considered an individual main function. The interpretive function, contrary to some of its definitions, is not connected...
Bersimis, Sotiris; Panaretos, John; Psarakis, Stelios
2005-01-01
Woodall and Montgomery [35], in a discussion paper, state that multivariate process control is one of the most rapidly developing sections of statistical process control. Nowadays, in industry, there are many situations in which the simultaneous monitoring or control of two or more related quality-process characteristics is necessary. Process monitoring problems in which several related variables are of interest are collectively known as Multivariate Statistical Process Control (MSPC). This ...
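A minimal sketch of the most common MSPC statistic, Hotelling's T^2 (a standard choice assumed here, not necessarily the one discussed in the paper; the control-limit calculation is omitted). It shows how a point that looks unremarkable on each variable separately can still signal out-of-control behaviour by breaking the correlation structure:

import numpy as np

def hotelling_t2(X_ref, X_new):
    """T^2 of each new observation against an in-control reference sample:
    T^2 = (x - m)' S^{-1} (x - m), with m and S from the reference data."""
    m = X_ref.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X_ref, rowvar=False))
    d = X_new - m
    return np.einsum("ij,jk,ik->i", d, S_inv, d)   # quadratic form per row

rng = np.random.default_rng(3)
cov = [[1.0, 0.8], [0.8, 1.0]]                # two correlated quality characteristics
in_control = rng.multivariate_normal([0, 0], cov, 200)
# First point: fine on each axis alone, but violates the correlation
suspect = np.array([[2.0, -2.0], [0.1, 0.0]])
print(hotelling_t2(in_control, suspect))      # large T^2 flags the first point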
A scan statistic to extract causal gene clusters from case-control genome-wide rare CNV data
Directory of Open Access Journals (Sweden)
Scherer Stephen W
2011-05-01
Full Text Available Abstract. Background: Several statistical tests have been developed for analyzing genome-wide association data by incorporating gene pathway information in terms of gene sets. Using these methods, hundreds of gene sets are typically tested, and the tested gene sets often overlap. This overlapping greatly increases the probability of generating false positives, and the results obtained are difficult to interpret, particularly when many gene sets show statistical significance. Results: We propose a flexible statistical framework to circumvent these problems. Inspired by spatial scan statistics for detecting clustering of disease occurrence in the field of epidemiology, we developed a scan statistic to extract disease-associated gene clusters from a whole gene pathway. Extracting one or a few significant gene clusters from a global pathway limits the overall false positive probability, which results in increased statistical power, and facilitates the interpretation of test results. In the present study, we applied our method to genome-wide association data for rare copy-number variations, which have been strongly implicated in common diseases. Application of our method to a simulated dataset demonstrated the high accuracy of this method in detecting disease-associated gene clusters in a whole gene pathway. Conclusions: The scan statistic approach proposed here shows a high level of accuracy in detecting gene clusters in a whole gene pathway. This study has provided a sound statistical framework for analyzing genome-wide rare CNV data by incorporating topological information on the gene pathway.
Directory of Open Access Journals (Sweden)
Takahiro eKawabe
2013-09-01
Full Text Available Humans can acquire the statistical features of the external world and employ them to control behaviors. Some external events occur in harmony with an agent's action, and thus humans should also be able to acquire the statistical features between an action and its external outcome. We report that the acquired action-outcome statistical features alter the visual appearance of the action outcome. Pressing either of two assigned keys triggered visual motion whose direction was statistically biased either upward or downward, and observers judged the stimulus motion direction. Points of subjective equality (PSE) for judging motion direction were shifted repulsively from the mean of the distribution associated with each key. Our Bayesian model accounted for the PSE shifts, indicating the optimal acquisition of the action-effect statistical relation. The PSE shifts were moderately attenuated when the action-outcome contingency was reduced. The Bayesian model again accounted for the attenuated PSE shifts. On the other hand, when the action-outcome contiguity was greatly reduced, the PSE shifts were greatly attenuated; however, the Bayesian model could not account for these shifts. The results indicate that visual appearance can be modified by prediction based on the optimal acquisition of the action-effect causal relation.
International Nuclear Information System (INIS)
Pirkle, F.L.
1981-04-01
STAARS is a new series which is being published to disseminate information concerning statistical procedures for interpreting aerial radiometric data. The application of a particular data interpretation technique to geologic understanding for delineating regions favorable to uranium deposition is the primary concern of STAARS. Statements concerning the utility of a technique on aerial reconnaissance data as well as detailed aerial survey data will be included
Data-driven inference for the spatial scan statistic
Directory of Open Access Journals (Sweden)
Duczmal Luiz H
2011-08-01
Full Text Available Abstract. Background: Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. Results: A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is done, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under the null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based on this inference. Conclusions: A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
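A minimal sketch of the underlying scan machinery, simplified by assumption to contiguous zones on a line of areas, with the usual Poisson log-likelihood ratio and Monte Carlo inference; the paper's modified inference, which conditions on the cluster size k, is not implemented here:

import numpy as np

def best_cluster_llr(cases, pop):
    """Highest Poisson log-likelihood ratio over contiguous zones
    (a 1D simplification of Kulldorff's spatial scan statistic)."""
    C = cases.sum()
    expected = C * pop / pop.sum()          # expected counts under H0
    n, best = len(cases), 0.0
    for i in range(n):
        for j in range(i + 1, n + 1):
            if j - i == n:
                continue                    # skip the whole study region
            c, e = cases[i:j].sum(), expected[i:j].sum()
            if c <= e:
                continue                    # only excess-risk zones
            llr = c * np.log(c / e)
            if C - c > 0:
                llr += (C - c) * np.log((C - c) / (C - e))
            best = max(best, llr)
    return best

rng = np.random.default_rng(5)
pop = rng.integers(500, 2000, 30)
cases = rng.poisson(pop * 0.01)
cases[12:16] += rng.poisson(pop[12:16] * 0.01)   # plant a zone with doubled risk

obs = best_cluster_llr(cases, pop)
# Monte Carlo inference: redistribute the same total count under H0
null = [best_cluster_llr(rng.multinomial(cases.sum(), pop / pop.sum()), pop)
        for _ in range(999)]
p_value = (1 + sum(v >= obs for v in null)) / 1000
print(f"LLR = {obs:.2f}, p = {p_value:.3f}")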
Interpreter-mediated dentistry.
Bridges, Susan; Drew, Paul; Zayts, Olga; McGrath, Colman; Yiu, Cynthia K Y; Wong, H M; Au, T K F
2015-05-01
The global movements of healthcare professionals and patient populations have increased the complexities of medical interactions at the point of service. This study examines interpreter mediated talk in cross-cultural general dentistry in Hong Kong where assisting para-professionals, in this case bilingual or multilingual Dental Surgery Assistants (DSAs), perform the dual capabilities of clinical assistant and interpreter. An initial language use survey was conducted with Polyclinic DSAs (n = 41) using a logbook approach to provide self-report data on language use in clinics. Frequencies of mean scores using a 10-point visual analogue scale (VAS) indicated that the majority of DSAs spoke mainly Cantonese in clinics and interpreted for postgraduates and professors. Conversation Analysis (CA) examined recipient design across a corpus (n = 23) of video-recorded review consultations between non-Cantonese speaking expatriate dentists and their Cantonese L1 patients. Three patterns of mediated interpreting indicated were: dentist designated expansions; dentist initiated interpretations; and assistant initiated interpretations to both the dentist and patient. The third, rather than being perceived as negative, was found to be framed either in response to patient difficulties or within the specific task routines of general dentistry. The findings illustrate trends in dentistry towards personalized care and patient empowerment as a reaction to product delivery approaches to patient management. Implications are indicated for both treatment adherence and the education of dental professionals. Copyright © 2015 Elsevier Ltd. All rights reserved.
Statistical and Visualization Data Mining Tools for Foundry Production
Directory of Open Access Journals (Sweden)
M. Perzyk
2007-07-01
Full Text Available In recent years a rapid development of a new, interdisciplinary knowledge area, called data mining, has been observed. Its main task is extracting useful information from previously collected large amounts of data. The main possibilities and potential applications of data mining in manufacturing industry are characterized. The main types of data mining techniques are briefly discussed, including statistical, artificial intelligence, database and visualization tools. The statistical methods and visualization methods are presented in more detail, showing their general possibilities, advantages as well as characteristic examples of applications in foundry production. Results of the author's research are presented, aimed at validation of selected statistical tools which can be easily and effectively used in manufacturing industry. A performance analysis of ANOVA- and contingency-table-based methods, dedicated to determination of the most significant process parameters as well as to detection of possible interactions among them, has been made. Several numerical tests have been performed using simulated data sets with assumed hidden relationships, as well as some real data, related to the strength of ductile cast iron, collected in a foundry. It is concluded that the statistical methods offer relatively easy and fairly reliable tools for extraction of that type of knowledge about foundry manufacturing processes. However, further research is needed, aimed at explanation of some imperfections of the investigated tools as well as assessment of their validity for more complex tasks.
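A small sketch of the two techniques validated above, run on simulated stand-in data (the temperatures, strengths and defect counts are invented, not the paper's foundry data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Simulated tensile strength of ductile iron at three pouring temperatures
temp_low = rng.normal(420, 15, 25)
temp_mid = rng.normal(435, 15, 25)
temp_high = rng.normal(434, 15, 25)

f_stat, p_value = stats.f_oneway(temp_low, temp_mid, temp_high)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")   # small p: temperature matters

# Contingency-table counterpart for categorical parameters
table = np.array([[8, 92], [20, 80]])    # defective vs. OK castings for two sand types
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.4f}")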
Heimann, Dennis; Nieschulze, Jens; König-Ries, Birgitta
2010-04-20
Data management in the life sciences has evolved from simple storage of data to complex information systems providing additional functionalities like analysis and visualization capabilities, demanding the integration of statistical tools. In many cases the used statistical tools are hard-coded within the system. That leads to an expensive integration, substitution, or extension of tools because all changes have to be done in program code. Other systems are using generic solutions for tool integration but adapting them to another system is mostly rather extensive work. This paper shows a way to provide statistical functionality over a statistics web service, which can be easily integrated in any information system and set up using XML configuration files. The statistical functionality is extendable by simply adding the description of a new application to a configuration file. The service architecture as well as the data exchange process between client and service and the adding of analysis applications to the underlying service provider are described. Furthermore a practical example demonstrates the functionality of the service.
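The following is a minimal sketch of the configuration-driven idea; the XML schema, element names and registry below are hypothetical, invented for illustration rather than taken from the described service:

import xml.etree.ElementTree as ET
import numpy as np

# Hypothetical configuration describing the available analysis applications;
# adding a new analysis means adding one more <application> element.
CONFIG = """
<applications>
  <application name="mean" callable="numpy.mean"/>
  <application name="std" callable="numpy.std"/>
</applications>
"""

REGISTRY = {"numpy.mean": np.mean, "numpy.std": np.std}

def load_applications(xml_text):
    """Build the service's analysis registry from the XML description."""
    root = ET.fromstring(xml_text)
    return {app.get("name"): REGISTRY[app.get("callable")]
            for app in root.iter("application")}

apps = load_applications(CONFIG)
print(apps["mean"]([1.0, 2.0, 4.0]))   # a client request dispatched by name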
Interpretation of coagulation test results using a web-based reporting system.
Quesada, Andres E; Jabcuga, Christine E; Nguyen, Alex; Wahed, Amer; Nedelcu, Elena; Nguyen, Andy N D
2014-01-01
Web-based synoptic reporting has been successfully integrated into diverse fields of pathology, improving efficiency and reducing typographic errors. Coagulation is a challenging field for practicing pathologists and pathologists-in-training alike. The objective was to develop a Web-based program that can expedite the generation of an individualized interpretive report for a variety of coagulation tests. We developed a Web-based synoptic reporting system composed of 119 coagulation report templates and 38 thromboelastography (TEG) report templates covering a wide range of findings. Our institution implemented this reporting system in July 2011; it is currently used by pathology residents and attending pathologists. Feedback from the users of these reports has been overwhelmingly positive. Surveys note the time saved and reduced errors. Our easily accessible, user-friendly, Web-based synoptic reporting system for coagulation is a valuable asset to our laboratory services. Copyright© by the American Society for Clinical Pathology (ASCP).
Robust statistics and geochemical data analysis
International Nuclear Information System (INIS)
Di, Z.
1987-01-01
Advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis are demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous existence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures.
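A minimal sketch of the contrast drawn above, using statsmodels' Andrews-wave (sine-type) M-estimator as a stand-in for the paper's SINE estimates, on invented data with a few planted outliers:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 80)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 80)
y[:6] += 8                                   # a few anomalous samples (outliers)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
rlm = sm.RLM(y, X, M=sm.robust.norms.AndrewWave()).fit()  # sine-type M-estimator
print("OLS:   ", ols.params)    # intercept/slope dragged by the outliers
print("robust:", rlm.params)    # close to the true (2.0, 0.5)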
Statistical bootstrap approach to hadronic matter and multiparticle reactions
International Nuclear Information System (INIS)
Ilgenfritz, E.M.; Kripfganz, J.; Moehring, H.J.
1977-01-01
The authors present the main ideas behind the statistical bootstrap model and recent developments within this model related to the description of fireball cascade decay. Mathematical methods developed in this model might be useful in other phenomenological schemes of strong interaction physics; they are described in detail. The present status of applications of the model to various hadronic reactions is discussed. When discussing the relations of the statistical bootstrap model to other models of hadron physics the authors point out possibly fruitful analogies and dynamical mechanisms which are modelled by the bootstrap dynamics under definite conditions. This offers interpretations for the critical temperature typical for the model and indicates further fields of application. (author)
A study on reliability of roentgenographic interpretation; The importance of dual reading
International Nuclear Information System (INIS)
Kim, Chu Wan; Kim, Kun Sang
1973-01-01
Photofluorography is the single best method of mass survey for the detection of chest diseases, especially for the determination of pulmonary tuberculosis. However, the reliability of photofluorography as a diagnostic method depends mostly upon the quality of the films, the reader's perceptual experience and his knowledge of chest diseases, and upon the process of interpretation, i.e., interpretation by single or dual reading. A statistical analysis and discussion were made on the results of 1,394,581 photofluorograms in the mass survey which had been carried out from 1969 to 1971. Results were as follows: 1. The ratio of positive cases to all examinees showed a range of 1.45%-4.05%. 2. The most frequent disease among the positive cases was minimal pulmonary tuberculosis. 3. The second reading, interpreted by an experienced radiologist, showed better detection of positive cases than the first reading done by other doctors. This underscores the importance of dual reading by an experienced radiologist for higher reliability in the diagnosis of chest diseases.
Chains, Shops and Networks: Official Statistics and the Creation of Public Value
Directory of Open Access Journals (Sweden)
Asle Rolland
2015-06-01
Full Text Available The paper concerns official statistics, particularly as produced by the NSIs. Their contribution to society is considered well captured by the concept of public value. Official statistics create value for democracy as a foundation for evidence-based politics. Democracies and autocracies alike need statistics to govern the public. Unique to democracy is the need for statistics to govern the governors, for which the independence of the NSI is crucial. Three ways of creating public value are the value chain, the value shop and the value network. The chain is appropriate for the production, the shop for the interpretation and the network for the dissemination of statistics. Automation reduces the need to rely on the value chain as the core business model. Moreover, automation increases the statistical output, which in turn increases the need for shop and network activities. Replacing the chain with the shop as the core model will elevate the NSIs from commodity producers to a processing industry.
Empirical approach to interpreting card-sorting data
Directory of Open Access Journals (Sweden)
Steven F. Wolf
2012-05-01
Full Text Available Since it was first published 30 years ago, the seminal paper of Chi et al. on expert and novice categorization of introductory problems led to a plethora of follow-up studies within and outside of the area of physics [Cogn. Sci. 5, 121 (1981)]. These studies frequently encompass "card-sorting" exercises whereby the participants group problems. While this technique certainly allows insights into problem solving approaches, simple descriptive statistics more often than not fail to find significant differences between experts and novices. In moving beyond descriptive statistics, we describe a novel microscopic approach that takes into account the individual identity of the cards and uses graph theory and models to visualize, analyze, and interpret problem categorization experiments. We apply these methods to an introductory physics (mechanics) problem categorization experiment, and find that most of the variation in sorting outcome is not due to the sorter being an expert versus a novice, but rather due to an independent characteristic that we named "stacker" versus "spreader." The fact that the expert-novice distinction only accounts for a smaller amount of the variation may explain the frequent null results when conducting these experiments.
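A minimal sketch of the microscopic, identity-aware view described above: each participant's sorting becomes a card-by-card co-occurrence matrix, pile counts separate "stackers" from "spreaders", and matrices can be compared pairwise. The toy sortings below are invented:

import numpy as np

def co_occurrence(sorting, n_cards):
    """Card-by-card co-occurrence matrix for one participant's sorting,
    where `sorting` is a list of piles (each a list of card indices)."""
    M = np.zeros((n_cards, n_cards), dtype=int)
    for pile in sorting:
        for a in pile:
            for b in pile:
                M[a, b] = 1
    return M

def similarity(s1, s2, n_cards):
    """Fraction of card pairs on which two sorters agree
    (same pile vs. different piles)."""
    M1, M2 = co_occurrence(s1, n_cards), co_occurrence(s2, n_cards)
    iu = np.triu_indices(n_cards, k=1)
    return np.mean(M1[iu] == M2[iu])

expert = [[0, 1, 2], [3, 4], [5, 6, 7]]      # few large piles: a "stacker"
novice = [[0, 3], [1, 4, 5], [2], [6, 7]]    # more, smaller piles: a "spreader"
print(len(expert), len(novice))              # pile counts distinguish the two styles
print(similarity(expert, novice, 8))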
Energy Technology Data Exchange (ETDEWEB)
Voutay, O.
2003-02-01
Seismic data contain more geological information than wells, due to their good spatial extent. But the seismic measure is band-pass limited, and the contrasts in acoustic or elastic properties derived from seismic are not directly linked to the reservoir properties. Thus, it is difficult to give a geological interpretation to seismic data. Basically, relevant seismic attributes are extracted at the reservoir level, and then calibrated with information available at wells by using pattern recognition and statistical estimation techniques. These methods are successfully used in the post-stack domain. But, for multi-cube seismic information such as pre-stack or 4D data, the number of attributes can increase considerably, and statistical methods are not often used. It is necessary to find a parameterization allowing an optimal description of the seismic variability in the time window of interest. We propose to extract new attributes from seismic multi-cube data with Generalised Principal Analysis and to use them for reservoir interpretation with statistical techniques. The new attributes can be clearly related to the initial data set, and then be physically interpreted, while optimally summarizing the initial seismic information. By applying the Generalised Principal Analysis to 3D pre-stack surveys, the contribution of the pre-stack seismic information to reservoir characterisation is compared to the post-stack one, in both synthetic and real cases. By applying the Generalised Principal Analysis to real 4D surveys, the seismic repeatability is quantified and the seismic changes in the reservoir with calendar time are highlighted and interpreted. A coherency cube has also been defined, based on the Generalised Principal Analysis. This attribute is a coherence measurement in three dimensions representing the local similarity between 4D or AVO surveys. (author)
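To illustrate the parameterization idea, here is a sketch using ordinary principal component analysis as a stand-in for the thesis' Generalised Principal Analysis (which it is not); the "multi-cube" data are simulated as many correlated attributes driven by a couple of underlying factors:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)

# Toy multi-cube data: 500 reservoir samples x 12 seismic attributes
# (e.g. amplitudes of several angle stacks or of repeated 4D surveys)
latent = rng.normal(size=(500, 2))           # two underlying geological factors
mixing = rng.normal(size=(2, 12))
attributes = latent @ mixing + 0.1 * rng.normal(size=(500, 12))

pca = PCA(n_components=3).fit(attributes)
scores = pca.transform(attributes)           # compact new attributes
print(pca.explained_variance_ratio_)         # first two axes carry ~all variance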
Intraobserver interpretation of breast ultrasonography following the BI-RADS classification
International Nuclear Information System (INIS)
Calas, M.J.G.; Almeida, R.M.V.R.; Gutfilen, B.; Pereira, W.C.A.
2010-01-01
Purpose: To use the BI-RADS ultrasound classification in an intraobserver retrospective study of the interpretation of breast images. Materials and Methods: The study used 40 breast ultrasound images recorded in orthogonal planes, obtained from patients with an indication for surgery. Eight professionals experienced in breast imaging analysis retrospectively reviewed these lesions in three rounds of image interpretation (with a 3-6 month interval between rounds). Observers had no access to information from medical records or histopathological results and, without their knowledge, were assigned in each new round the same images they had previously interpreted. Fleiss-modified Kappa measures were the study's main concordance index. Besides the BI-RADS, a scale grouping its categories 2-3 and 4-5 was also used. The statistical analysis concerned the intraobserver agreement. Results: Kappa values ranged from 0.37 to 0.75 (original categories) and from 0.73 to 0.87 (grouped categories). Overall, out of the 8 observers, 7 presented moderate to substantial concordance (Kappa values 0.51 to 0.74). Conclusion: The BI-RADS is a reporting tool that provides a standardized terminology for US exams. In this study, moderate to substantial concordance in Kappa values was found, in agreement with other studies in the literature.
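A minimal sketch of a Fleiss-type kappa computation for one observer, treating the three reading rounds as "raters" on the same 40 lesions; the category assignments below are simulated stand-ins, not the study's data:

import numpy as np
from statsmodels.stats import inter_rater as ir

rng = np.random.default_rng(4)

# One observer's BI-RADS categories (2-5) for 40 lesions over three rounds;
# mostly stable, with a ~15% chance of a category change per later round
base = rng.integers(2, 6, 40)
rounds = np.column_stack([
    base,
    np.where(rng.random(40) < 0.15, rng.integers(2, 6, 40), base),
    np.where(rng.random(40) < 0.15, rng.integers(2, 6, 40), base),
])

table, _ = ir.aggregate_raters(rounds)   # lesions x categories count table
print(ir.fleiss_kappa(table))            # intraobserver agreement across rounds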
Implementation of International Standards in Russia's Foreign Trade Statistics
Directory of Open Access Journals (Sweden)
Natalia E. Grigoruk
2015-01-01
Full Text Available The article analyzes the basic documents of international organizations in recent years, which have become the global standard for the development and improvement of statistics on the foreign economic relations of most countries, including the Russian Federation. The article describes the key features of the theory and practice of modern foreign trade statistics in Russia and abroad, with an emphasis on the methodological problems of its main part, the external trade statistics, and shows their interpretation in the most recent recommendations of the UN statistical apparatus and other international organizations. It considers a range of problems associated with implementing in national statistical practice, including in Russia and the countries of the Customs Union, the main international standard of foreign trade statistics, the UN document "International Merchandise Trade Statistics". The main attention is paid to methodological issues such as: the criteria for selecting the objects of statistical accounting in accordance with international standards, quantitative and cost parameters of foreign trade statistics, statistical methods and estimates of commodity exports and imports, and the problems of comparability of data. The 2010 international standards are compared with the key precursor documents on the methodology of foreign trade statistics, and the practice of introducing these standards into the foreign trade statistics of Russia and the countries of the Customs Union is characterized. The article analyzes the content of the official statistical manuals on foreign trade of Russia and foreign countries, covers the main methodological problems of world trade statistics in conjunction with the major current international statistical standards - the System of National Accounts, the Manual on Statistics of International Trade in Services and other documents; and provides specific data describing the current structure of Russian foreign trade and especially its
The study on development of easily chewable and swallowable foods for elderly.
Kim, Soojeong; Joo, Nami
2015-08-01
When the functions involved in the ingestion of food fail, not only is the enjoyment of eating lost, but the person also faces protein-energy malnutrition. Dysmasesis and difficulty of swallowing occur in various diseases, but aging may be a major cause, and the number of elderly people with dysmasesis and difficulty of swallowing is expected to increase rapidly in the aging society. In this study, we carried out a survey targeting nutritionists who work in elderly care facilities, and examined the characteristics of foods offered to the elderly and the degree of demand for the development of easily chewable and swallowable foods for elderly people who can crush foods with their own tongues but sometimes have difficulty in drinking water and tea. In elderly care facilities, finely chopped food, or food ground with water in a blender, was found to be provided for the elderly with dysmasesis. Elderly satisfaction with the provided foods appeared low overall. An investigation of the applicability of foods for the elderly, and of the willingness to reflect them in menus, showed the highest response rate for a gelification method from molecular gastronomy. An investigation of foods frequently served to the elderly, with representative menus of beef, pork, white fish, anchovies and spinach, showed that Korean barbecue beef, hot-pepper-paste stir-fried pork, pan-fried white fish, stir-fried anchovy and seasoned spinach had the highest offer frequency. This study will provide the fundamentals of the development of easily chewable and swallowable foods, through gelification, for the elderly. The study also illustrates that, for the elderly, food that has undergone gelification will reduce the risk of swallowing down the wrong pipe and improve overall food preference.
Biagini, Francesca
2016-01-01
This book provides an introduction to elementary probability and to Bayesian statistics using de Finetti's subjectivist approach. One of the features of this approach is that it does not require the introduction of a sample space – a non-intrinsic concept that makes the treatment of elementary probability unnecessarily complicated – but introduces as fundamental the concept of random numbers directly related to their interpretation in applications. Events become a particular case of random numbers, and probability a particular case of expectation when it is applied to events. The subjective evaluation of expectation and of conditional expectation is based on an economic choice of an acceptable bet or penalty. The properties of expectation and conditional expectation are derived by applying a coherence criterion that the evaluation has to follow. The book is suitable for all introductory courses in probability and statistics for students in Mathematics, Informatics, Engineering, and Physics.
International Nuclear Information System (INIS)
Luo Chuanwen; Wang Gang; Wang Chuncheng; Wei Junjie
2009-01-01
The concepts of uniform index and expectation uniform index are two mathematical descriptions of the uniformity and the mean uniformity of a finite set in a polyhedron. The concepts of instantaneous chaometry (ICM) and k step chaometry (k SCM) are introduced in order to apply statistical methods to the study of nonlinear difference equations. It is found that k step chaometry is an indirect estimation of the expectation uniform index. Simulations illustrate that the expectation uniform index for the Lorenz system increases linearly, but increases nonlinearly for Chen's system with parameter b. In other words, the orbits of each system become more and more uniform as parameter b increases. Finally, a conjecture is also brought forward, which implies that chaos can be interpreted by its orbit's mean uniformity, described by the expectation uniform index and indirectly estimated by k SCM. The k SCM of the heart rate reflects the weakening of the heart with age.
Neuroimaging with functional near infrared spectroscopy: From formation to interpretation
Herrera-Vega, Javier; Treviño-Palacios, Carlos G.; Orihuela-Espina, Felipe
2017-09-01
Functional Near Infrared Spectroscopy (fNIRS) is gaining momentum as a functional neuroimaging modality to investigate the cerebral hemodynamics subsequent to neural metabolism. As other neuroimaging modalities, it is a neuroscience tool for understanding brain function at the behavioural and cognitive levels. To extract useful knowledge from functional neuroimages it is critical to understand the series of transformations applied during information retrieval and how they bound the interpretation. This process starts with the irradiation of the head tissues with infrared light to obtain the raw neuroimage, proceeds with computational and statistical analyses that reveal associations between pixel intensities and the neural activity encoded in them, and ends with the explanation of some particular aspect of brain function. Extensive literature addresses each individual step of fNIRS separately. This paper overviews the complete transformation sequence through image formation, reconstruction and analysis to provide insight into the final functional interpretation.
Jacobson generators, Fock representations and statistics of sl(n + 1)
International Nuclear Information System (INIS)
Palev, T.D.; Jeugt, J. van der
2000-10-01
The properties of A-statistics, related to the class of simple Lie algebras sl(n + 1), n is an element of Z+ (Palev, T.D.: Preprint JINR E17-10550 (1977); hep-th/9705032), are further investigated. The description of each sl(n + 1) is carried out via generators and their relations (see eq. (2.5)), first introduced by Jacobson. The related Fock spaces W_p, p is an element of N, are finite-dimensional irreducible sl(n + 1)-modules. The Pauli principle of the underlying statistics is formulated. In addition the paper contains the following new results: (a) the A-statistics are interpreted as exclusion statistics; (b) within each W_p, operators B_1^±(p),...,B_n^±(p), proportional to the Jacobson generators, are introduced. It is proved that in an appropriate topology (Definition 2) lim_{p→∞} B_i^±(p) = B_i^±, where B_i^± are Bose creation and annihilation operators; (c) it is shown that the local statistics of the degenerate hard-core Bose models and of the related Heisenberg spin models is p = 1 A-statistics. (author)
Statistics and risk philosophy in human activities
International Nuclear Information System (INIS)
Failla, L.
1983-01-01
Two leading interpretations of the use of statistics exist: the first considers statistics to be a physical law regulating the phenomena studied, while the other considers it a method for achieving exhaustive knowledge of the phenomena. The author chooses the second theory, applies this concept of statistics to the risk involved in human activities, and discusses the different kinds of risk in this field. She distinguishes between risk that can be eliminated or at least reduced and risk inherent in the activity itself, which can never be completely eliminated unless the activity is suppressed; yet even this kind of risk can be kept under control. Furthermore, she distinguishes between risks that can and cannot be foreseen. The author supports the theory according to which foreseeable risk must be prevented through up-to-date techniques: this should be done coherently with the aim of the activity but independently of the economic cost. The theory considering risk probability as a physical law is mainly based on events that happened in the past: it uses the occurrence probability as a law. This theory accepts the statistical risk and estimates its costs, including the ''human cost''. The author examines the different statistical possibilities for studying this specific phenomenon: in this way the possibility of avoiding risks may arise, along with - last but not least - the opportunity to eliminate or reduce the connected damage. On the contrary, statistics used as a physical law implies the acceptance of a given amount of risk, weighed against the cost of improving the technical conditions required to eliminate damages. In the end, a practical example of this theory is described.
Algorithm for the generation of nuclear spin species and nuclear spin statistical weights
International Nuclear Information System (INIS)
Balasubramanian, K.
1982-01-01
A set of algorithms for the computer generation of nuclear spin species and nuclear spin statistical weights potentially useful in molecular spectroscopy is developed. These algorithms generate the nuclear spin species from group structures known as generalized character cycle indices (GCCIs). Thus the required input for these algorithms is just the set of all GCCIs for the symmetry group of the molecule which can be computed easily from the character table. The algorithms are executed and illustrated with examples
Statistical mechanics out of equilibrium: the irreversibility
International Nuclear Information System (INIS)
Alvarez Estrada, R. F.
2001-01-01
A Round Table on the issue of irreversibility and related matters took place during the last (20th) Statistical Mechanics Conference, held in Paris (July 1998). This article tries to provide a view (necessarily limited, and hence incomplete) of some approaches to the subject: the one based upon deterministic chaos (which is currently giving rise to very active research) and the classical interpretation due to Boltzmann. An attempt has been made to write this article in a self-contained way, and to avoid a technical presentation wherever possible. (Author) 29 refs
Solar-assisted photodegradation of isoproturon over easily recoverable titania catalysts.
Tolosana-Moranchel, A; Carbajo, J; Faraldos, M; Bahamonde, A
2017-03-01
An easily recoverable homemade TiO2 catalyst (GICA-1) has been evaluated over the overall photodegradation process, understood as photocatalytic efficiency plus the catalyst recovery step, in the solar-light-assisted photodegradation of isoproturon, including its reuse in two consecutive cycles. Its global feasibility has been compared to the commercial TiO2 P25. The homemade GICA-1 catalyst presented better sedimentation efficiency than TiO2 P25 at all studied pHs, which could be explained by its higher average hydrodynamic particle size (3 μm) and other physicochemical surface properties. The evaluation of the overall process (isoproturon photo-oxidation + catalyst recovery) revealed the strengths of the GICA-1 homemade titania catalyst: total removal of isoproturon in less than 60 min, easy recovery by sedimentation, and reusability in two consecutive cycles without any loss of photocatalytic efficiency. Therefore, considering the whole photocatalytic cycle (good performance in photodegradation plus the catalyst recovery step), the homemade GICA-1 photocatalyst proved more affordable than commercial TiO2 P25.
Ulyanov, Sergey S.; Ulianova, Onega V.; Zaytsev, Sergey S.; Saltykov, Yury V.; Feodorova, Valentina A.
2018-04-01
The transformation mechanism for a nucleotide sequence of the Chlamydia trachomatis gene into a speckle pattern has been considered. The first- and second-order statistics of gene-based speckles have been analyzed. It has been demonstrated that gene-based speckles do not obey Gaussian statistics and belong to the class of speckles with a small number of scatterers. It has been shown that gene polymorphism can be easily detected through analysis of the statistical characteristics of gene-based speckles.
Statistical interpretation of transient current power-law decay in colloidal quantum dot arrays
Energy Technology Data Exchange (ETDEWEB)
Sibatov, R T, E-mail: ren_sib@bk.ru [Ulyanovsk State University, 432000, 42 Leo Tolstoy Street, Ulyanovsk (Russian Federation)
2011-08-01
A new statistical model of the charge transport in colloidal quantum dot arrays is proposed. It takes into account Coulomb blockade forbidding multiple occupancy of nanocrystals and the influence of energetic disorder of interdot space. The model explains power-law current transients and the presence of the memory effect. The fractional differential analogue of the Ohm law is found phenomenologically for nanocrystal arrays. The model combines ideas that were considered as conflicting by other authors: the Scher-Montroll idea about the power-law distribution of waiting times in localized states for disordered semiconductors is applied taking into account Coulomb blockade; Novikov's condition about the asymptotic power-law distribution of time intervals between successful current pulses in conduction channels is fulfilled; and the carrier injection blocking predicted by Ginger and Greenham (2000 J. Appl. Phys. 87 1361) takes place.
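A toy Monte Carlo in the Scher-Montroll spirit (our illustration, not the paper's model; alpha, the carrier count and the time window are arbitrary): carriers hop after waiting times with a power-law tail psi(t) ~ t^-(1+alpha), 0 < alpha < 1, and the ensemble hopping rate, a crude proxy for the transient current, then decays as t^(alpha-1) instead of settling to a constant.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, n_carriers, t_max = 0.5, 5000, 1e4

    times = []                               # all hop times of all carriers
    for _ in range(n_carriers):
        t = rng.pareto(alpha) + 1.0          # waiting times >= 1, heavy tail
        while t < t_max:
            times.append(t)
            t += rng.pareto(alpha) + 1.0

    bins = np.logspace(0, 4, 40)
    counts, _ = np.histogram(times, bins)
    rate = counts / np.diff(bins) / n_carriers   # hops per unit time
    mid = np.sqrt(bins[1:] * bins[:-1])          # geometric bin centres
    slope = np.polyfit(np.log(mid[5:35]), np.log(rate[5:35] + 1e-12), 1)[0]
    print(f"fitted log-log slope {slope:.2f} vs. theory {alpha - 1:.2f}")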
Statistical mechanics of program systems
International Nuclear Information System (INIS)
Neirotti, Juan P; Caticha, Nestor
2006-01-01
We discuss the collective behaviour of a set of operators and variables that constitute a program and the emergence of meaningful computational properties in the language of statistical mechanics. This is done by appropriately modifying available Monte Carlo methods to deal with hierarchical structures. The study suggests, in analogy with simulated annealing, a method to automatically design programs. Reasonable solutions can be found, at low temperatures, when the method is applied to simple toy problems such as finding an algorithm that determines the roots of a function or one that makes a nonlinear regression. Peaks in the specific heat are interpreted as signalling phase transitions which separate regions where different algorithmic strategies are used to solve the problem
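A minimal sketch of the annealing idea on a toy nonlinear-regression task (our own construction; the paper's Monte Carlo operates on hierarchical program structures, which we replace here with flat expression strings, and the target, primitives and cooling schedule are invented):

    import math, random

    random.seed(1)
    XS = [i / 10 for i in range(-20, 21)]
    TARGET = [x * x + math.sin(x) for x in XS]   # function to recover

    LEAVES = ["x", "1.0", "2.0"]
    OPS = ["({}+{})", "({}*{})", "math.sin({})"]

    def random_expr(depth=0):
        # build a small random expression tree, rendered as a string
        if depth > 2 or random.random() < 0.4:
            return random.choice(LEAVES)
        op = random.choice(OPS)
        args = [random_expr(depth + 1) for _ in range(op.count("{}"))]
        return op.format(*args)

    def energy(expr):
        # squared regression error of the candidate "program"
        try:
            return sum((eval(expr, {"math": math, "x": x}) - t) ** 2
                       for x, t in zip(XS, TARGET))
        except OverflowError:
            return float("inf")

    current = random_expr()
    e_cur = energy(current)
    T = 10.0
    for step in range(20000):
        cand = random_expr()        # crude move: propose a fresh program
        e_new = energy(cand)
        if e_new < e_cur or random.random() < math.exp((e_cur - e_new) / T):
            current, e_cur = cand, e_new
        T = max(0.01, T * 0.9995)   # cooling schedule

    print("final program:", current, " squared error:", round(e_cur, 3))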
Statistics of Low-Mass Companions to Stars: Implications for Their Origin
Stepinski, T. F.; Black, D. C.
2001-01-01
One of the more significant results from observational astronomy over the past few years has been the detection, primarily via radial velocity studies, of low-mass companions (LMCs) to solar-like stars. The commonly held interpretation is that the majority of these are "extrasolar planets" whereas the rest are brown dwarfs, the distinction being made on the basis of an apparent discontinuity in the distribution of M sin i for LMCs as revealed by a histogram. We report here results from a statistical analysis of M sin i, as well as of the orbital elements data for available LMCs, to test the assertion that the LMC population is heterogeneous. The outcome is mixed. Solely on the basis of the distribution of M sin i, a heterogeneous model is preferable. Overall, we find that a definitive statement asserting that the LMC population is heterogeneous is, at present, unjustified. In addition we compare the statistics of LMCs with a comparable sample of stellar binaries. We find a remarkable statistical similarity between these two populations. This similarity, coupled with the marked dissimilarity between the LMC population and that of acknowledged planets, motivates us to suggest a common-origin hypothesis for LMCs and stellar binaries as an alternative to the prevailing interpretation. We discuss the merits of such a hypothesis and indicate a possible scenario for the formation of LMCs.
Hsieh, Elaine; Kramer, Eric Mark
2012-10-01
This study explores the tensions, challenges, and dangers that arise when a utilitarian view of interpreters is constructed, imposed, and/or reinforced in health care settings. We conducted in-depth interviews and focus groups with 26 medical interpreters from 17 different languages and cultures and 39 providers of five specialties. Grounded theory was used for data analysis. The utilitarian view of interpreters' roles and functions influences providers in the following areas: (a) hierarchical structure and unidirectional communication, (b) the interpreter seen as information gatekeeper, (c) the interpreter seen as provider proxy, and (d) the interpreter's emotional support perceived as a tool. When interpreters are viewed as passive instruments, a utilitarian approach may compromise the quality of care by silencing patients' and interpreters' voices, objectifying interpreters' emotional work, and exploiting patients' needs. Providers need to recognize that a utilitarian approach to the interpreter's role and functions may create interpersonal and ethical dilemmas that compromise the quality of care. By viewing interpreters as smart technology (rather than passive instruments), both providers and interpreters can learn from and co-evolve with each other, allowing them to maintain control over their expertise and to work as collaborators in providing quality care. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Effects of easily ionizable elements on the liquid sampling-atmospheric pressure glow discharge
International Nuclear Information System (INIS)
Venzie, Jacob L.; Marcus, R. Kenneth
2006-01-01
A series of studies has been undertaken to determine the susceptibility of the liquid sampling-atmospheric pressure glow discharge (LS-APGD) atomic emission source to easily ionizable element (EIE) effects. The initial portion of the study involved monitoring the voltage drop across the plasma as a function of pH to ascertain whether or not the conductivity of the liquid eluent alters the plasma energetics and subsequently the analyte signal strength. It was found that altering the pH (0.0 to 2.0) of the sample matrix did not significantly change the discharge voltage. The emission signal intensities for Cu(I) 327.4 nm, Mo(I) 344.7 nm, Sc(I) 326.9 nm and Hg(I) 253.6 nm were measured as a function of the easily ionizable element (sodium and calcium) concentration in the injection matrix. A range of 0.0 to 0.1% (w/v) EIE in the sample matrix did not cause a significant change in the Cu, Sc, and Mo signal-to-background ratios, with only a slight change noted for Hg. In addition to this test of analyte response, the plasma energetics as a function of EIE concentration were assessed using the ratio of Mg(II) to Mg(I) (280.2 nm and 285.2 nm, respectively) intensities. The Mg(II)/Mg(I) ratio showed that the plasma energetics did not change significantly over the same range of EIE addition. These results are best explained by the electrolytic nature of the eluent acting as an ionic (and perhaps spectrochemical) buffer.
Server-side Statistics Scripting in PHP
Directory of Open Access Journals (Sweden)
Jan de Leeuw
1997-06-01
On the UCLA Statistics WWW server there are a large number of demos and calculators that can be used in statistics teaching and research. Some of these demos require substantial amounts of computation; others mainly use graphics. These calculators and demos are implemented in various different ways, reflecting developments in WWW-based computing. As usual, one of the main choices is between doing the work on the client side (i.e., in the browser) or on the server side (i.e., on our WWW server). Obviously, client-side computation puts fewer demands on the server. On the other hand, it requires that the client download Java applets, or install plugins and/or helpers. If JavaScript is used, client-side computations will generally be slow. We also have to assume that the client is installed properly, and has the required capabilities. Requiring too much on the client side has caused browsing machines such as Netscape Communicator to grow beyond all reasonable bounds, both in size and RAM requirements. Moreover, requiring Java and JavaScript rules out such excellent browsers as Lynx or Emacs W3. For server-side computing, we can configure the server and its resources ourselves, and we need not worry about browser capabilities and configuration. Nothing needs to be downloaded, except the usual HTML pages and graphics. In the same way as on the client side, there is a scripting solution, where code is interpreted, or an object-code solution using compiled code. For server-side scripting, we use embedded languages, such as PHP/FI. The scripts in the HTML pages are interpreted by a CGI program, and the output of the CGI program is sent to the clients. Of course the CGI program is compiled, but the statistics procedures will usually be interpreted, because PHP/FI does not have the appropriate functions in its scripting language. This will tend to be slow, because embedded languages do not deal efficiently with loops and similar constructs. Thus a first
The advent of new higher throughput analytical instrumentation has put a strain on interpreting and explaining the results from complex studies. Contemporary human, environmental, and biomonitoring data sets are comprised of tens or hundreds of analytes, multiple repeat measures...
Common errors in statistics (and how to avoid them)
Good, Phillip I
2012-01-01
The Fourth Edition of this tried-and-true book elaborates on many key topics such as epidemiological studies; distribution of data; baseline data incorporation; case-control studies; simulations; statistical theory publication; biplots; instrumental variables; ecological regression; result reporting; survival analysis; etc. Including new modifications and figures, the book also covers such topics as research plan creation; data collection; hypothesis formulation and testing; coefficient estimates; sample size specifications; assumption checking; p-value interpretations and confidence interval
Geological interpretation of borehole image and sonic logs. A case study from the North Sea
Energy Technology Data Exchange (ETDEWEB)
Vahle, C. [Eriksfiord GmbH, Walldorf (Germany)
2013-08-01
Borehole imagers and dipole sonic tools form the ideal pair of instruments for the observation and evaluation of structural tilt, faulting and fracturing as well as sediment transport direction and depositional architecture. In addition, the stress field can be inverted in combination with rock mechanical data. The structural tilt and its variation along the well are evaluated in stereograms and projections along the well trace. Changes in structural tilt are attributed to fault block rotation as well as angular unconformities. Fault zones are usually easily recognised in borehole images by e.g. juxtaposition of different strata/facies and deformation of adjacent layering. Integration with micro-scale core data as well as macro-scale seismics, if available, is of vital importance. Furthermore, calibration against core observations is helpful for e.g. fracture characterisation. The stress field orientation is interpreted from breakouts and drilling-induced fractures, which are usually easy to detect in borehole images. However, in the case of slanted and highly deviated wells the full stress tensor, including the stress magnitudes, is necessary to evaluate the stress field orientation. The full stress tensor is inverted by utilising rock mechanical data from core measurements and/or from empirical relations with elastic properties such as Poisson's ratio and Young's modulus, with respect to breakouts and drilling-induced fractures. In addition, the stress field can be simulated using numerical methods to match the current observations. Sedimentary features such as cross-beds or slumps may indicate sediment transport directions after the data set has been corrected for structural tilt. Image facies and their stacking patterns, in combination with standard petrophysical curves, are interpreted with respect to the depositional environment and included in a sequence stratigraphic framework. A correlation with core observations provides important calibration of the image facies.
Mathematical statistics: essays on history and methodology
Pfanzagl, Johann
2017-01-01
This book presents a detailed description of the development of statistical theory. In the mid twentieth century, the development of mathematical statistics underwent an enduring change, due to the advent of more refined mathematical tools. New concepts like sufficiency, superefficiency, adaptivity etc. motivated scholars to reflect upon the interpretation of mathematical concepts in terms of their real-world relevance. Questions concerning the optimality of estimators, for instance, had remained unanswered for decades, because a meaningful concept of optimality (based on the regularity of the estimators, the representation of their limit distribution and assertions about their concentration by means of Anderson’s Theorem) was not yet available. The rapidly developing asymptotic theory provided approximate answers to questions for which non-asymptotic theory had found no satisfying solutions. In four engaging essays, this book presents a detailed description of how the use of mathematical methods stimulated...
animation: An R Package for Creating Animations and Demonstrating Statistical Methods
Directory of Open Access Journals (Sweden)
Yihui Xie
2013-04-01
Animated graphs that demonstrate statistical ideas and methods can both attract interest and assist understanding. In this paper we first discuss how animations can be related to statistical topics such as iterative algorithms, random simulations, (re)sampling methods and dynamic trends; then we describe the approaches that may be used to create animations, and give an overview of the R package animation, including its design, usage and the statistical topics in the package. With the animation package, we can export the animations produced by R into a variety of formats, such as a web page, a GIF animation, a Flash movie, a PDF document, or an MP4/AVI video, so that users can publish the animations fairly easily. The design of this package is flexible enough to be readily incorporated into web applications; e.g., we can generate animations online with Rweb, which means we do not even need R to be installed locally to create animations. We show examples of the use of animations in teaching statistics and in the presentation of statistical reports using Sweave or knitr. In fact, this paper itself was written with the knitr and animation packages, and the animations are embedded in the PDF document, so that readers can watch the animations in real time as they read the paper (Adobe Reader is required). Animations can add insight and interest to traditional static approaches to teaching statistics and reporting, making statistics a more interesting and appealing subject.
Density profiles in the Scrape-Off Layer interpreted through filament dynamics
Militello, Fulvio
2017-10-01
We developed a new theoretical framework to clarify the relation between radial Scrape-Off Layer density profiles and the fluctuations that generate them. The framework provides an interpretation of the experimental features of the profiles and of the turbulence statistics on the basis of simple properties of the filaments, such as their radial motion and their draining towards the divertor. L-mode and inter-ELM filaments are described as a Poisson process in which each event is independent and modelled with a wave function of amplitude and width statistically distributed according to experimental observations and evolving according to fluid equations. We will rigorously show that radially accelerating filaments, less efficient parallel exhaust and also a statistical distribution of their radial velocity can contribute to induce flatter profiles in the far SOL and therefore enhance plasma-wall interactions. A quite general result of our analysis is the resiliency of this non-exponential nature of the profiles and the increase of the relative fluctuation amplitude towards the wall, as experimentally observed. According to the framework, profile broadening at high fueling rates can be caused by interactions with neutrals (e.g. charge exchange) in the divertor or by a significant radial acceleration of the filaments. The framework assumptions were tested with 3D numerical simulations of seeded SOL filaments based on a two fluid model. In particular, filaments interact through the electrostatic field they generate only when they are in close proximity (separation comparable to their width in the drift plane), thus justifying our independence hypothesis. In addition, we will discuss how isolated filament motion responds to variations in the plasma conditions, and specifically divertor conditions. Finally, using the theoretical framework we will reproduce and interpret experimental results obtained on JET, MAST and HL-2A.
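A drastically reduced numerical illustration of one mechanism in the framework (our own toy, not the paper's 3D two-fluid simulations; tau, the velocity spread and all numbers are invented): filaments born at the separatrix drain exponentially on a parallel-loss time tau while moving radially, and averaging over a statistical spread of radial velocities flattens the far-SOL profile relative to the single-velocity exponential.

    import numpy as np

    rng = np.random.default_rng(2)
    tau, n_filaments = 1.0, 20000
    x = np.linspace(0.0, 10.0, 50)          # radial SOL coordinate

    def mean_profile(velocities):
        # a filament with radial velocity v has amplitude exp(-a/tau) at
        # age a and radius x = v*a, i.e. it contributes exp(-x/(v*tau))/v
        # to the time-averaged density at radius x
        v = velocities[:, None]
        return np.mean(np.exp(-x[None, :] / (v * tau)) / v, axis=0)

    single = mean_profile(np.full(n_filaments, 1.0))
    spread = mean_profile(rng.uniform(0.2, 3.0, n_filaments))

    # far-SOL e-folding: the velocity spread gives a flatter profile
    print("single-v log-slope:", np.polyfit(x[30:], np.log(single[30:]), 1)[0])
    print("spread-v log-slope:", np.polyfit(x[30:], np.log(spread[30:]), 1)[0])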
Homeostasis and Gauss statistics: barriers to understanding natural variability.
West, Bruce J
2010-06-01
In this paper, the concept of knowledge is argued to be the top of a three-tiered system of science. The first tier is that of measurement and data, followed by information consisting of the patterns within the data, and ending with theory that interprets the patterns and yields knowledge. Thus, when a scientific theory ceases to be consistent with the database the knowledge based on that theory must be re-examined and potentially modified. Consequently, all knowledge, like glory, is transient. Herein we focus on the non-normal statistics of physiologic time series and conclude that the empirical inverse power-law statistics and long-time correlations are inconsistent with the theoretical notion of homeostasis. We suggest replacing the notion of homeostasis with that of Fractal Physiology.
Statistical studies of powerful extragalactic radio sources
Energy Technology Data Exchange (ETDEWEB)
Macklin, J T
1981-01-01
This dissertation is mainly about the use of efficient statistical tests to study the properties of powerful extragalactic radio sources. Most of the analysis is based on subsets of a sample of 166 bright (3CR) sources selected at 178 MHz. The first chapter is introductory and is followed by three on the misalignment and symmetry of double radio sources. The properties of nuclear components in extragalactic sources are discussed in the next chapter, using statistical tests that make efficient use of upper limits, often the only available information on the flux density of the nuclear component. Multifrequency observations of four 3CR sources are presented in the next chapter. The penultimate chapter is about the analysis of correlations involving more than two variables. The Spearman partial rank correlation coefficient is shown to be the most powerful available test based on non-parametric statistics. It is therefore used to study the dependence of source properties on size at constant redshift, and the results are interpreted in terms of source evolution. Correlations of source properties with luminosity and redshift are then examined.
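A sketch of the coefficient in question on synthetic data (our code, not the dissertation's; the sample, the variables and their correlation structure are invented, and the first-order partial-correlation formula is applied to Spearman rank correlations):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    z = rng.uniform(0.1, 2.0, 100)                   # redshift
    size = 50 / (1 + z) + rng.normal(0, 2, 100)      # source size
    power = 10 * (1 + z) + rng.normal(0, 2, 100)     # luminosity proxy

    def spearman_partial(x, y, ctrl):
        # partial rank correlation of x and y with ctrl held fixed
        rxy = stats.spearmanr(x, y)[0]
        rxc = stats.spearmanr(x, ctrl)[0]
        ryc = stats.spearmanr(y, ctrl)[0]
        return (rxy - rxc * ryc) / np.sqrt((1 - rxc**2) * (1 - ryc**2))

    print("raw Spearman rho(size, power):",
          round(stats.spearmanr(size, power)[0], 2))
    print("partial rho(size, power | z) :",
          round(spearman_partial(size, power, z), 2))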
REQUIREMENTS FOR A GENERAL INTERPRETATION THEORY
Directory of Open Access Journals (Sweden)
Anda Laura Lungu Petruescu
2013-06-01
Time has proved that Economic Analysis alone cannot cover all the needs of the economic field. The present study proposes a new method of approaching economic phenomena and processes based on research made outside the economic space - a new general interpretation theory - which is centered on the human being as the basic actor of the economy. A general interpretation theory must ensure the interpretation of the causalities among economic phenomena and processes - causal interpretation; the interpretation of the correlations and dependencies among indicators - normative interpretation; the interpretation of social and communicational processes in economic organizations - social and communicational interpretation; the interpretation of the community status of companies - transsocial interpretation; the interpretation of the purposes of human activities and their coherency - teleological interpretation; and the interpretation of equilibrium/disequilibrium inside economic systems - optimality interpretation. In order to respond to such demands, rigor, pragmatism, praxiology and contextual connectors are required. In order to progress, economic science must improve its language, both its syntax and its semantics. Clarity of exposure requires clarity of language, and the progress of scientific theory calls for hypotheses in the building of theories. The switch from common language to symbolic language is the switch from ambiguity to rigor and rationality, that is, to order in thinking. But order implies structure, which implies formalization. Our paper is a plea for these requirements, which should be fulfilled by a modern interpretation theory.
Directory of Open Access Journals (Sweden)
Elżbieta Sandurska
2016-12-01
Introduction: Application of statistical software typically does not require extensive statistical knowledge, making it easy to perform even complex analyses. Consequently, test selection criteria and important assumptions may be easily overlooked or given insufficient consideration. In such cases, the results may well lead to wrong conclusions. Aim: To discuss issues related to assumption violations in the case of Student's t-test and one-way ANOVA, two parametric tests frequently used in the field of sports science, and to recommend solutions. Description of the state of knowledge: Student's t-test and ANOVA are parametric tests, and therefore some of the assumptions that need to be satisfied include normal distribution of the data and homogeneity of variances in groups. If the assumptions are violated, the original design of the test is impaired, and the test may then be compromised, giving spurious results. A simple method to normalize the data and to stabilize the variance is to use transformations. If such an approach fails, a good alternative to consider is a nonparametric test, such as the Mann-Whitney, Kruskal-Wallis or Wilcoxon signed-rank test. Summary: Thorough verification of the assumptions of parametric tests allows for correct selection of statistical tools, which is the basis of well-grounded statistical analysis. With a few simple rules, testing patterns in the data characteristic of sports science studies comes down to a straightforward procedure.
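A minimal decision sketch of the advice above (our construction; the data are synthetic and the alpha = 0.05 threshold for the assumption tests is an assumed convention, not taken from the article):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    group_a = rng.lognormal(0.0, 0.6, 30)   # skewed, sports-science-like
    group_b = rng.lognormal(0.3, 0.6, 30)

    normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))
    equal_var = stats.levene(group_a, group_b).pvalue > 0.05

    if normal and equal_var:
        test, res = "Student's t", stats.ttest_ind(group_a, group_b)
    elif normal:
        test, res = "Welch's t", stats.ttest_ind(group_a, group_b,
                                                 equal_var=False)
    else:
        # try a variance-stabilising transform first, as the article suggests
        la, lb = np.log(group_a), np.log(group_b)
        if all(stats.shapiro(g).pvalue > 0.05 for g in (la, lb)):
            test, res = "t on log-transformed data", stats.ttest_ind(la, lb)
        else:
            test, res = "Mann-Whitney U", stats.mannwhitneyu(group_a, group_b)

    print(f"{test}: p = {res.pvalue:.4f}")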
MacKinnon, Edward
2012-01-01
This book is the first to offer a systematic account of the role of language in the development and interpretation of physics. An historical-conceptual analysis of the co-evolution of mathematical and physical concepts leads to the classical/quantum interface. Bohrian orthodoxy stresses the indispensability of classical concepts and the functional role of mathematics. This book analyses ways of extending, and then going beyond, this orthodoxy. Finally, the book analyzes how a revised interpretation of physics impacts on basic philosophical issues: conceptual revolutions, realism, and r
Impact of educational intervention on the inter-rater agreement of nasal endoscopy interpretation
Colley, Patrick; Mace, Jess C.; Schaberg, Madeleine R.; Smith, Timothy L.; Tabaee, Abtin
2015-01-01
OBJECTIVE Nasal endoscopy is integral to the evaluation of sinonasal disorders. However, prior studies have shown significant variability in the inter-rater agreement of nasal endoscopy interpretation amongst practicing rhinologists. The objective of the current study is to evaluate the inter-rater agreement of nasal endoscopy amongst otolaryngology residents from a single training program at baseline and following an educational intervention. METHODS 11 otolaryngology residents completed nasal endoscopy grading forms for 8 digitally recorded nasal endoscopic examinations. An instructional lecture reviewing nasal endoscopy interpretation was subsequently provided. The residents then completed grading forms for 8 different nasal endoscopic examinations. Inter-rater agreement amongst residents for the pre- and post-lecture videos was calculated using the unweighted Fleiss' kappa statistic (Kf) and intra-class correlation agreement (ICC). RESULTS Inter-rater agreement improved from a baseline level of fair (Kf range 0.268–0.383) to a post-educational level of moderate (Kf range 0.401–0.547) for nasal endoscopy findings of middle meatus mucosa, middle turbinate mucosa, middle meatus discharge, sphenoethmoid recess mucosa, sphenoethmoid recess discharge and atypical lesions (ICC). CONCLUSION Inter-rater agreement of nasal endoscopy interpretation amongst otolaryngology residents improved after educational intervention for the majority of the characteristics evaluated. Further study is needed to improve nasal endoscopy interpretation. PMID:25781864
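For reference, the unweighted Fleiss' kappa used above can be computed from a subjects-by-categories count matrix as follows (a generic sketch; the ratings matrix is made up for illustration, not the study's data):

    import numpy as np

    def fleiss_kappa(counts):
        """counts[i, j] = raters assigning subject i to category j."""
        counts = np.asarray(counts, dtype=float)
        n_raters = counts.sum(axis=1)[0]        # assumes equal raters/subject
        p_j = counts.sum(axis=0) / counts.sum() # category proportions
        p_i = ((counts**2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
        p_bar, p_e = p_i.mean(), (p_j**2).sum()
        return (p_bar - p_e) / (1 - p_e)

    # 5 videos, 3 grading categories, 11 raters each (hypothetical numbers)
    ratings = [[9, 2, 0],
               [7, 3, 1],
               [2, 8, 1],
               [1, 9, 1],
               [0, 3, 8]]
    print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")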
Rapp, J.B.
1991-01-01
Q-mode factor analysis was used to quantitate the distribution of the major aliphatic hydrocarbon (n-alkanes, pristane, phytane) systems in sediments from a variety of marine environments. The compositions of the pure end members of the systems were obtained from factor scores and the distribution of the systems within each sample was obtained from factor loadings. All the data, from the diverse environments sampled (estuarine (San Francisco Bay), fresh-water (San Francisco Peninsula), polar-marine (Antarctica) and geothermal-marine (Gorda Ridge) sediments), were reduced to three major systems: a terrestrial system (mostly high molecular weight aliphatics with odd-numbered-carbon predominance), a mature system (mostly low molecular weight aliphatics without predominance) and a system containing mostly high molecular weight aliphatics with even-numbered-carbon predominance. With this statistical approach, it is possible to assign the percentage contribution from various sources to the observed distribution of aliphatic hydrocarbons in each sediment sample. © 1991.
Prediction of transmission loss through an aircraft sidewall using statistical energy analysis
Ming, Ruisen; Sun, Jincai
1989-06-01
The transmission loss of randomly incident sound through an aircraft sidewall is investigated using statistical energy analysis. Formulas are also obtained for the simple calculation of sound transmission loss through single- and double-leaf panels. Both resonant and nonresonant sound transmissions can be easily calculated using the formulas. The formulas are used to predict sound transmission losses through a Y-7 propeller airplane panel. The panel measures 2.56 m x 1.38 m and has two windows. The agreement between predicted and measured values through most of the frequency ranges tested is quite good.
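The paper's SEA formulas are not reproduced in the abstract; as a point of reference for the nonresonant path, here is a hedged sketch of the classical field-incidence mass law for a single-leaf panel (a textbook approximation, not the paper's result; the 5 kg/m^2 surface density is an invented example value):

    import math

    def mass_law_tl(surface_density_kg_m2, frequency_hz):
        # field-incidence mass law: TL ~= 20*log10(m*f) - 47 dB
        return 20 * math.log10(surface_density_kg_m2 * frequency_hz) - 47.0

    for f in (125, 250, 500, 1000, 2000):
        print(f"{f:5d} Hz: TL = {mass_law_tl(5.0, f):5.1f} dB")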
Directory of Open Access Journals (Sweden)
Michael Robert Cunningham
2016-10-01
The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger, Wood, Stiff, and Chatzisarantis, 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter, Kofler, Forster, and McCullough's (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim-and-fill and funnel plot asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test (PET) and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. meta-analysis results actually indicate that there is a real depletion effect - contrary to their title.
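A sketch of the contested PET and PEESE procedures on synthetic data (illustration only, not Carter et al.'s dataset; the study count, effect size and standard errors are invented). PET regresses effect sizes on their standard errors, PEESE on the variances, in both cases weighting by inverse variance; the intercept is read as the "bias-corrected" effect at SE -> 0.

    import numpy as np

    rng = np.random.default_rng(5)
    k = 60
    se = rng.uniform(0.05, 0.5, k)        # study standard errors
    d = 0.3 + rng.normal(0, se)           # true effect 0.3 plus noise

    def weighted_intercept(x, y, w):
        # intercept of a weighted least-squares line fit
        X = np.column_stack([np.ones_like(x), x])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        return beta[0]

    pet = weighted_intercept(se, d, 1 / se**2)
    peese = weighted_intercept(se**2, d, 1 / se**2)
    print(f"PET intercept: {pet:.3f}   PEESE intercept: {peese:.3f}")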
Statistical Analysis of Designed Experiments Theory and Applications
Tamhane, Ajit C
2012-01-01
A indispensable guide to understanding and designing modern experiments The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the
Negative values of quasidistributions and quantum wave and number statistics
Peřina, J.; Křepelka, J.
2018-04-01
We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.
CT colonography: accuracy of initial interpretation by radiographers in routine clinical practice
International Nuclear Information System (INIS)
Burling, D.; Wylie, P.; Gupta, A.; Illangovan, R.; Muckian, J.; Ahmad, R.; Marshall, M.; Taylor, S.A.
2010-01-01
Aim: To investigate the performance of computer-assisted detection (CAD)-assisted radiographers interpreting computed tomography colonography (CTC) in routine practice. Materials and methods: Three hundred and three consecutive symptomatic patients underwent CTC. Examinations were double-read by trained radiographers using primary two-dimensional/three-dimensional (2D/3D) analysis supplemented by 'second reader' CAD. Radiographers recorded colonic neoplasia, interpretation times, and a patient management strategy code (S0, inadequate; S1, normal; S2, 6-9 mm polyp; S3, ≥10 mm polyp; S4, cancer; S5, diverticular stricture) for each examination. Strategies were compared to the reference standard using the kappa statistic, interpretation times using a paired t-test, and learning curves using logistic regression and Pearson's correlation coefficient. Results: Of 303 examinations, 69 (23%) were abnormal. CAD-assisted radiographers detected 17/17 (100%) cancers, 21/28 (72%) polyps ≥10 mm and 42/60 (70%) 6-9 mm polyps. The overall agreement between radiographers and the reference management strategy was good (kappa 0.72; CI: 0.65, 0.78), with agreement for the S1 strategy in 189/211 (90%) exams; S2 in 19/27 (70%); S3 in 12/19 (63%); S4 in 17/17 (100%); and S5 in 5/6 (83%). The mean interpretation time was 17 min (SD = 11) compared with 8 min (SD = 3.5) for radiologists. There was no learning curve for recording correct strategies (OR 0.88; p = 0.12) but there was a significant reduction in interpretation times, mean 14 and 31 min (last/first 50 exams; -0.46; p < 0.001). Conclusion: Routine CTC interpretation by radiographers is effective for initial triage of patients with cancer, but independent reporting is currently not recommended.
Automated, computer interpreted radioimmunoassay results
International Nuclear Information System (INIS)
Hill, J.C.; Nagle, C.E.; Dworkin, H.J.; Fink-Bennett, D.; Freitas, J.E.; Wetzel, R.; Sawyer, N.; Ferry, D.; Hershberger, D.
1984-01-01
90,000 radioimmunoassay results have been interpreted and transcribed automatically using software developed for use on a Hewlett Packard Model 1000 mini-computer system with conventional dot matrix printers. The computer program correlates the results of a combination of assays, interprets them and prints a report ready for physician review and signature within minutes of completion of the assay. The authors designed and wrote a computer program to query their patient database for radioassay laboratory results and to produce a computer-generated interpretation of these results, using an algorithm that produces normal and abnormal interpretives. Their laboratory assays 50,000 patient samples each year using 28 different radioassays. Of these, 85% have been interpreted using the computer program. Allowances are made for drug and patient history, and individualized reports are generated with regard to the patient's age and sex. Finalization of reports is still subject to change by the nuclear physician at the time of final review. Automated, computerized interpretations have realized cost savings through reduced personnel and personnel time, and have provided uniformity of the interpretations among the five physicians. Prior to computerization of interpretations, all radioassay results had to be dictated and reviewed for signing by one of the resident or staff physicians. Turn-around times for reports prior to the automated computer program were generally two to three days, whereas the computerized interpretation system generally allows reports to be issued the day assays are completed.
Statistical sampling techniques as applied to OSE inspections
International Nuclear Information System (INIS)
Davis, J.J.; Cote, R.W.
1987-01-01
The need has been recognized for statistically valid methods for gathering information during OSE inspections, and for the interpretation of results, both from performance testing and from records reviews, interviews, etc. Battelle Columbus Division, under contract to DOE OSE, has performed and is continuing to perform work in the area of statistical methodology for OSE inspections. This paper presents some of the sampling methodology currently being developed for use during OSE inspections. Topics include population definition, sample size requirements, level of confidence, and practical logistical constraints associated with the conduct of an inspection based on random sampling. Sequential sampling schemes and sampling from finite populations are also discussed. The methods described are applicable to various data-gathering activities, ranging from the sampling and examination of classified documents to the sampling of Protective Force security inspectors for skill testing.
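A sketch of one calculation the paper touches on (standard attribute-sampling formulas, not necessarily the contractor's exact procedure; the error rate, margin and population size are invented examples): sample size at a given confidence level, with a finite-population correction for a small population such as a set of classified documents.

    import math

    def sample_size(p, margin, z=1.645, population=None):
        """z = 1.645 gives 90% confidence; p is the expected error rate."""
        n0 = z**2 * p * (1 - p) / margin**2   # infinite-population size
        if population is None:
            return math.ceil(n0)
        # finite-population correction shrinks the required sample
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    print(sample_size(0.05, 0.03))                  # effectively infinite
    print(sample_size(0.05, 0.03, population=400))  # 400 documents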
User manual for Blossom statistical package for R
Talbert, Marian; Cade, Brian S.
2005-01-01
Blossom is an R package with functions for making statistical comparisons with distance-function based permutation tests developed by P.W. Mielke, Jr. and colleagues at Colorado State University (Mielke and Berry, 2001) and for testing parameters estimated in linear models with permutation procedures developed by B. S. Cade and colleagues at the Fort Collins Science Center, U.S. Geological Survey. This manual is intended to provide identical documentation of the statistical methods and interpretations as the manual by Cade and Richards (2005) does for the original Fortran program, but with changes made with respect to command inputs and outputs to reflect the new implementation as a package for R (R Development Core Team, 2012). This implementation in R has allowed for numerous improvements not supported by the Cade and Richards (2005) Fortran implementation, including use of categorical predictor variables in most routines.
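A bare-bones permutation test of a mean difference (a generic sketch in the spirit of the distance-function permutation tests, not a port of Blossom's routines; the two synthetic samples are invented):

    import numpy as np

    rng = np.random.default_rng(6)
    x = rng.normal(0.0, 1.0, 25)
    y = rng.normal(0.8, 1.0, 25)

    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count, n_perm = 0, 10000
    for _ in range(n_perm):
        rng.shuffle(pooled)                  # random relabelling of groups
        diff = pooled[:25].mean() - pooled[25:].mean()
        count += abs(diff) >= abs(observed)
    print(f"two-sided permutation p = {(count + 1) / (n_perm + 1):.4f}")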
Azad Henareh Khalyani; William A. Gould; Eric Harmsen; Adam Terando; Maya Quinones; Jaime A. Collazo
2016-01-01
statistically downscaled general circulation models (GCMs) taking Puerto Rico as a test case. Two model selection/model averaging strategies were used: the average of all available GCMs and the average of the models that are able to...
Waldinger, Marcel D.; Zwinderman, Aeilko H.; Olivier, Berend; Schweitzer, Dave H.
2008-01-01
INTRODUCTION: The intravaginal ejaculation latency time (IELT) behaves in a skewed manner and needs the appropriate statistics for correct interpretation of treatment results. AIMS: To explain the rightful use of geometrical mean IELT values and the fold increase of the geometric mean IELT because
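A hedged numerical illustration of why the geometric mean is the right summary for skewed latency data (the IELT values below are invented, not the study's): the fold increase is computed on the log scale and then exponentiated, so a few extreme values do not dominate.

    import numpy as np

    baseline = np.array([20, 35, 50, 60, 90, 400.0])     # seconds, skewed
    treated = np.array([70, 120, 160, 210, 300, 1500.0])

    gm = lambda a: np.exp(np.mean(np.log(a)))            # geometric mean
    print(f"arithmetic fold increase: {treated.mean() / baseline.mean():.1f}")
    print(f"geometric  fold increase: {gm(treated) / gm(baseline):.1f}")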
A study of statistics anxiety levels of graduate dental hygiene students.
Welch, Paul S; Jacks, Mary E; Smiley, Lynn A; Walden, Carolyn E; Clark, William D; Nguyen, Carol A
2015-02-01
In light of the increased emphasis on evidence-based practice in the profession of dental hygiene, it is important that today's dental hygienist comprehend statistical measures to fully understand research articles, and thereby apply scientific evidence to practice. Therefore, the purpose of this study was to investigate statistics anxiety among graduate dental hygiene students in the U.S. A web-based, anonymous self-report survey was emailed to the directors of 17 MSDH programs in the U.S. with a request to distribute it to graduate students. The survey collected data on statistics anxiety, sociodemographic characteristics and evidence-based practice. Statistics anxiety was assessed using the Statistical Anxiety Rating Scale. The study significance level was α=0.05. Only 8 of the 17 invited programs participated in the study. Statistical Anxiety Rating Scale data revealed that graduate dental hygiene students experience low to moderate levels of statistics anxiety. Specifically, the level of anxiety on the Interpretation Anxiety factor indicated this population could struggle with making sense of scientific research. A decisive majority (92%) of students indicated that statistics is essential for evidence-based practice and should be a required course for all dental hygienists. This study served to identify statistics anxiety in a previously unexplored population. The findings should be useful in both theory building and in practical applications. Furthermore, the results can be used to direct future research. Copyright © 2015 The American Dental Hygienists' Association.
Empirical approach to interpreting card-sorting data
Directory of Open Access Journals (Sweden)
Steven F. Wolf
2012-05-01
Since it was first published 30 years ago, the seminal paper of Chi et al. on expert and novice categorization of introductory problems has led to a plethora of follow-up studies within and outside of the area of physics [Cogn. Sci. 5, 121 (1981)]. These studies frequently encompass "card-sorting" exercises whereby the participants group problems. While this technique certainly allows insights into problem solving approaches, simple descriptive statistics more often than not fail to find significant differences between experts and novices. In moving beyond descriptive statistics, we describe a novel microscopic approach that takes into account the individual identity of the cards and uses graph theory and models to visualize, analyze, and interpret problem categorization experiments. We apply these methods to an introductory physics (mechanics) problem categorization experiment, and find that most of the variation in sorting outcome is not due to the sorter being an expert versus a novice, but rather due to an independent characteristic that we named "stacker" versus "spreader." The fact that the expert-novice distinction only accounts for a smaller amount of the variation may explain the frequent null results when conducting these experiments.
Normative interpretations of diversity
DEFF Research Database (Denmark)
Lægaard, Sune
2009-01-01
Normative interpretations of particular cases consist of normative principles or values coupled with social theoretical accounts of the empirical facts of the case. The article reviews the most prominent normative interpretations of the Muhammad cartoons controversy over the publication of drawings of the Prophet Muhammad in the Danish newspaper Jyllands-Posten. The controversy was seen as a case of freedom of expression, toleration, racism, (in)civility and (dis)respect, and the article notes different understandings of these principles and how their application to the controversy implied different social theoretical accounts of the case. In disagreements between different normative interpretations, appeals are often made to the 'context', so it is also considered what roles 'context' might play in debates over normative interpretations.
Barak’s Purposive Interpretation in Law as a Pattern of Constitutional Interpretative Fidelity
Directory of Open Access Journals (Sweden)
Marinković Tanasije
2016-12-01
Political jurisprudence points out that constitutional court judges sometimes act like political actors, and that their decisions are a function of strategic and ideological as much as legal considerations. Consequently, the proper role of the courts, notably in exercising their review of constitutionality, has been one of the most debated issues in modern political and legal theory. Part of the controversy is how to measure the interpretative fidelity of judges to constitutional texts, or conversely, the level of their political engagement. This paper argues for the reconsideration of Aharon Barak's Purposive Interpretation in Law in that light. Barak's work was intended to provide, in the first place, judges and other lawyers with a sort of judicial philosophy - a holistic system of legal reasoning, applying to the interpretation of wills, contracts, statutes and constitutions alike. Nevertheless, these conventions of legal reasoning, modified and readapted, could well be used as heuristic tools by academics in measuring the interpretative fidelity of judges to various sources of law. Accordingly, this paper follows closely the presentation of Barak's precepts for the purposive interpretation of constitutions, focusing on the notions of subjective and objective purpose in interpreting constitutions, and how potential conflicts between these purposes are resolved.
Does PACS improve diagnostic accuracy in chest radiograph interpretations in clinical practice?
International Nuclear Information System (INIS)
Hurlen, Petter; Borthne, Arne; Dahl, Fredrik A.; Østbye, Truls; Gulbrandsen, Pål
2012-01-01
Objectives: To assess the impact of a Picture Archiving and Communication System (PACS) on the diagnostic accuracy of the interpretation of chest radiology examinations in a "real life" radiology setting. Materials and methods: During a period before PACS was introduced to radiologists, when images were still interpreted on film and reported on paper, images and reports were also digitally stored in an image database. The same database was used after the PACS introduction. This provided a unique opportunity to conduct a blinded retrospective study, comparing sensitivity (the main outcome parameter) in the pre- and post-PACS periods. We selected 56 digitally stored chest radiograph examinations that were originally read and reported on film, and 66 examinations that were read and reported on screen 2 years after the PACS introduction. Each examination was assigned a random number, and both reports and images were scored independently for pathological findings. The blinded retrospective scores for the original reports were then compared with the scores for the images (the gold standard). Results: Sensitivity was improved after the PACS introduction. When both certain and uncertain findings were included, this improvement was statistically significant. There were no other statistically significant changes. Conclusion: The result is consistent with prospective studies concluding that diagnostic accuracy is at least not reduced after PACS introduction. The sensitivity may even be improved.
Rational integration of noisy evidence and prior semantic expectations in sentence interpretation.
Gibson, Edward; Bergen, Leon; Piantadosi, Steven T
2013-05-14
Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be "well designed"--in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian "size principle"; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel.
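A toy noisy-channel comprehender implementing prediction (i) above (our own construction; the sentences, the semantic prior values and the per-edit noise rate are all invented): the posterior over intended sentences combines a semantic prior with an edit-distance-based likelihood, pulling interpretation toward the plausible meaning.

    def edit_distance(a, b):
        # word-level Levenshtein distance
        a, b = a.split(), b.split()
        prev = list(range(len(b) + 1))
        for i, wa in enumerate(a, 1):
            cur = [i]
            for j, wb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (wa != wb)))
            prev = cur
        return prev[-1]

    observed = "the mother gave the candle the daughter"
    candidates = {                 # intended sentence -> semantic prior
        "the mother gave the candle the daughter": 0.05,     # implausible
        "the mother gave the candle to the daughter": 0.95,  # plausible
    }
    noise = 0.1                    # assumed probability per edit

    posterior = {s: p * noise ** edit_distance(observed, s)
                 for s, p in candidates.items()}
    Z = sum(posterior.values())
    for s, p in posterior.items():
        print(f"{p / Z:.2f}  {s}")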
Mortality and air pollution: lessons from statistics
International Nuclear Information System (INIS)
Lipfert, F.W.
1982-01-01
Cross-sectional studies that attempt to link persistent geographic differences in mortality rates with air pollution are reviewed. Some early studies are mentioned, and detailed results are given for seven major contemporary studies, two of which are still in the publication process. Differences among the studies are discussed with regard to statistical techniques, trends in the results over time (1959 to 1974), and the interpretation and use of the results. The analysis concludes that there are far too many problems with this technique to allow causality to be firmly established, and thus the results should not be used for cost-benefit or policy analysis.
Sparse approximation of currents for statistics on curves and surfaces.
Durrleman, Stanley; Pennec, Xavier; Trouvé, Alain; Ayache, Nicholas
2008-01-01
Computing, processing and visualizing statistics on shapes like curves or surfaces is a real challenge, with many applications ranging from medical image analysis to computational geometry. Modelling such geometrical primitives with currents avoids feature-based approaches as well as point-correspondence methods. This framework has proved powerful for registering brain surfaces and for measuring geometrical invariants. However, while state-of-the-art methods perform pairwise registrations efficiently, new numerical schemes are required to process groupwise statistics, owing to the increasing complexity as the size of the database grows. Statistics such as the mean and principal modes of a set of shapes often have a heavy and highly redundant representation. We therefore propose to find an adapted basis on which the mean and principal modes have a sparse decomposition. Besides the computational improvement, this sparse representation offers a way to visualize and interpret statistics on currents. Experiments show the relevance of the approach on 34 sets of 70 sulcal lines and on 50 sets of 10 meshes of deep brain structures.
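A generic sparse-decomposition sketch (orthogonal matching pursuit on a random dictionary; the paper's adapted basis for currents is more sophisticated, so treat this purely as an illustration of "sparse decomposition on a basis" with invented dimensions):

    import numpy as np

    rng = np.random.default_rng(7)
    D = rng.normal(size=(100, 300))
    D /= np.linalg.norm(D, axis=0)             # 300 normalized atoms
    truth = np.zeros(300)
    truth[[10, 50, 200]] = [2.0, -1.5, 1.0]
    signal = D @ truth                          # "mean shape" to compress

    support, residual = [], signal.copy()
    for _ in range(3):
        # greedily pick the atom most correlated with the residual,
        # then re-fit the coefficients on the selected support
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], signal, rcond=None)
        residual = signal - D[:, support] @ coef

    print("recovered atoms:", sorted(support))  # typically [10, 50, 200]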
Parameter Estimation as a Problem in Statistical Thermodynamics.
Earle, Keith A; Schneider, David J
2011-03-14
In this work, we explore the connections between parameter fitting and statistical thermodynamics, using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and that the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
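One standard way to make the fitting/thermodynamics dictionary concrete (our paraphrase in LaTeX; the authors' exact conventions and normalizations may differ):

    \begin{align*}
      E(\theta) &= \sum_{i=1}^{N}
                   \frac{\bigl(y_i - f(x_i;\theta)\bigr)^2}{2\sigma^2}
                 && \text{(least-squares misfit as an energy)}\\
      Z(\beta)  &= \int e^{-\beta E(\theta)}\,d\theta, \qquad
      p_\beta(\theta) = \frac{e^{-\beta E(\theta)}}{Z(\beta)}
                 && \text{(partition function and Gibbs posterior)}\\
      F(\beta)  &= -\beta^{-1}\ln Z(\beta)
                 && \text{(free energy, minimized at the best fit)}
    \end{align*}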
Acceleration transforms and statistical kinetic models
International Nuclear Information System (INIS)
LuValle, M.J.; Welsher, T.L.; Svoboda, K.
1988-01-01
For a restricted class of problems a mathematical model of microscopic degradation processes, statistical kinetics, is developed and linked through acceleration transforms to the information which can be obtained from a system in which the only observable sign of degradation is sudden and catastrophic failure. The acceleration transforms were developed in accelerated life testing applications as a tool for extrapolating from the observable results of an accelerated life test to the dynamics of the underlying degradation processes. A particular concern of a physicist attempting to interpret the results of an analysis based on acceleration transforms is determining the physical species involved in the degradation process. These species may be (a) relatively abundant or (b) relatively rare. The main results of this paper are a theorem showing that for an important subclass of statistical kinetic models, acceleration transforms cannot be used to distinguish between cases a and b, and an example showing that in some cases falling outside the restrictions of the theorem, cases a and b can be distinguished by their acceleration transforms.
Vaughn, Brandon K.; Wang, Pei-Yu
2009-01-01
The emergence of technology has led to numerous changes in mathematical and statistical teaching and learning which has improved the quality of instruction and teacher/student interactions. The teaching of statistics, for example, has shifted from mathematical calculations to higher level cognitive abilities such as reasoning, interpretation, and…
Combinatorial interpretations of binomial coefficient analogues related to Lucas sequences
Sagan, Bruce; Savage, Carla
2009-01-01
Let s and t be variables. Define polynomials {n} in s, t by {0}=0, {1}=1, and {n}=s{n-1}+t{n-2} for n >= 2. If s, t are integers then the corresponding sequence of integers is called a Lucas sequence. Define an analogue of the binomial coefficients by C{n,k}={n}!/({k}!{n-k}!) where {n}!={1}{2}...{n}. It is easy to see that C{n,k} is a polynomial in s and t. The purpose of this note is to give two combinatorial interpretations for this polynomial in terms of statistics on integer partitions in...
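The definitions above are complete enough to compute with. A minimal sympy sketch (ours, not the authors') builds {n} from the recurrence and checks that the quotient C{n,k} = {n}!/({k}!{n-k}!) is indeed a polynomial in s and t:

```python
from functools import lru_cache
from sympy import symbols, cancel, expand

s, t = symbols("s t")

@lru_cache(maxsize=None)
def lucas(n):
    # {0} = 0, {1} = 1, {n} = s*{n-1} + t*{n-2}
    if n == 0:
        return 0
    if n == 1:
        return 1
    return expand(s * lucas(n - 1) + t * lucas(n - 2))

def lucas_factorial(n):
    # {n}! = {1}{2}...{n}  (empty product = 1)
    out = 1
    for k in range(1, n + 1):
        out *= lucas(k)
    return out

def lucasnomial(n, k):
    # C{n,k} = {n}! / ({k}! {n-k}!); cancel() clears the rational form,
    # confirming the quotient is a polynomial in s and t.
    return expand(cancel(lucas_factorial(n) /
                         (lucas_factorial(k) * lucas_factorial(n - k))))

print(lucas(4))           # s**3 + 2*s*t
print(lucasnomial(4, 2))  # s**4 + 3*s**2*t + 2*t**2
```

Setting s = t = 1 recovers the Fibonacci numbers and the ordinary fibonomial coefficients as a quick consistency check.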
Fundamentals of statistical experimental design and analysis
Easterling, Robert G
2015-01-01
Professionals in all areas - business; government; the physical, life, and social sciences; engineering; medicine, etc. - benefit from using statistical experimental design to better understand their worlds and then use that understanding to improve the products, processes, and programs they are responsible for. This book aims to provide the practitioners of tomorrow with a memorable, easy to read, engaging guide to statistics and experimental design. This book uses examples, drawn from a variety of established texts, and embeds them in a business or scientific context, seasoned with a dash of humor, to emphasize the issues and ideas that led to the experiment and the what-do-we-do-next? steps after the experiment. Graphical data displays are emphasized as means of discovery and communication and formulas are minimized, with a focus on interpreting the results that software produces. The role of subject-matter knowledge, and passion, is also illustrated. The examples do not require specialized knowledge, and t...
Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent
2012-07-01
96-well checkerboard plates. The results were obtained separately using the four different interpretation methods frequently preferred by researchers. Thus, it was aimed to detect to what extent the rates of synergistic, indifferent and antagonistic interactions were affected by different interpretation methods. The differences between the interpretation methods were tested by chi-square analysis for each combination used. Statistically significant differences were detected between the four different interpretation methods for the determination of synergistic and indifferent interactions (p<0.05) … fractional inhibitory concentration index of all the non-turbid wells along the turbidity/non-turbidity interface. There was no statistically significant difference between the four methods for the detection of antagonism (p>0.05). In conclusion, although there is a standard procedure for checkerboard synergy testing, it fails to yield standard results owing to the different methods used to interpret the results. Thus, there is a need to standardise the interpretation method for checkerboard synergy testing. To determine the most appropriate method of interpretation, further studies are required that investigate the clinical benefits of synergic combinations and compare the consistency of the results with those obtained from other standard combination tests, such as time-kill studies.
Court Interpreting in Denmark - the role of court interpreters in Danish courtrooms
DEFF Research Database (Denmark)
Jacobsen, Bente
1999-01-01
Court interpreters in Denmark are expected to follow the guidelines laid down in the document Instructions for Interpreters, which was published in 1994, and which deals with four principal areas: accuracy and completeness, impartiality, confidentiality and conflict of interest. This paper contends...
Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F
2013-08-01
To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTOs). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTOs. This review examines existing practices in GM plant field testing, such as the approaches to randomization, replication, and pseudoreplication. Emphasis is placed on the importance of the design features used for the field trials in which effects on NTOs are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of the experimental data is stressed in deciding on the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches, for example analysis of variance (ANOVA), are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role, GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and in assessing whether such analyses were correctly applied. We offer generic advice to risk assessors and applicants that will
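As a concrete illustration of the review's advice, the hedged sketch below contrasts a Poisson GLM for count data with a classical ANOVA for a continuous response, using statsmodels on hypothetical field-trial data; the treatment labels, block structure, and rates are all invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical field-trial data: counts of a nontarget arthropod on GM vs.
# conventional plots, arranged in four blocks.
df = pd.DataFrame({
    "treatment": np.repeat(["GM", "conv"], 40),
    "block": np.tile(np.repeat(["b1", "b2", "b3", "b4"], 10), 2),
})
mu = np.where(df["treatment"] == "GM", 6.0, 7.0)
df["count"] = rng.poisson(mu)

# Counts: a Poisson GLM with block as covariate (swap in a negative binomial
# family if the counts are overdispersed).
glm = smf.glm("count ~ treatment + block", data=df,
              family=sm.families.Poisson()).fit()
print(glm.summary())

# A continuous response (e.g., soil pH) could instead use classical ANOVA:
df["ph"] = rng.normal(6.8, 0.2, len(df))
anova = smf.ols("ph ~ treatment + block", data=df).fit()
print(sm.stats.anova_lm(anova))
```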
Directory of Open Access Journals (Sweden)
Florentina A. Cziple
2006-10-01
Full Text Available The paper presents the conclusions of a study of a mathematical model of the phase equilibrium of the ternary Al-Cu-Si system. The author presents the calculation of the statistical equation of the liquidus surface model from this diagram, together with the plotting and statistical-mathematical interpretation of the results obtained.
Thiese, Matthew S; Walker, Skyler; Lindsey, Jenna
2017-10-01
Distribution of valuable research discoveries is needed for the continual advancement of patient care. Publication of, and subsequent reliance on, false study results would be detrimental to patient care. Unfortunately, research misconduct may originate from many sources. While there is evidence of ongoing research misconduct in all its forms, it is challenging to identify the actual occurrence of research misconduct, which is especially true for misconduct in clinical trials. Research misconduct is challenging to measure, and there are few studies reporting the prevalence or underlying causes of research misconduct among biomedical researchers. Reported prevalence estimates of misconduct are probably underestimates, and range from 0.3% to 4.9%. There have been efforts to measure the prevalence of research misconduct; however, the relatively few published studies are not freely comparable because of varying characterizations of research misconduct and the methods used for data collection. There are some signs which may point to an increased possibility of research misconduct; however, there is a need for continued self-policing by biomedical researchers. There are existing resources to assist in ensuring appropriate statistical methods and preventing other types of research fraud. These include the "Statistical Analyses and Methods in the Published Literature" (SAMPL) guidelines, which help scientists determine the appropriate way of reporting various statistical methods; the "Strengthening Analytical Thinking for Observational Studies" (STRATOS) initiative, which emphasizes the execution and interpretation of results; and the Committee on Publication Ethics (COPE), which was created in 1997 to deliver guidance about publication ethics. COPE provides a series of guidelines and strategies grounded in the values of honesty and accuracy.
Simulations for designing and interpreting intervention trials in infectious diseases.
Halloran, M Elizabeth; Auranen, Kari; Baird, Sarah; Basta, Nicole E; Bellan, Steven E; Brookmeyer, Ron; Cooper, Ben S; DeGruttola, Victor; Hughes, James P; Lessler, Justin; Lofgren, Eric T; Longini, Ira M; Onnela, Jukka-Pekka; Özler, Berk; Seage, George R; Smith, Thomas A; Vespignani, Alessandro; Vynnycky, Emilia; Lipsitch, Marc
2017-12-29
Interventions in infectious diseases can have both direct effects on the individuals who receive the intervention and indirect effects in the population. In addition, intervention combinations can have complex interactions at the population level, which are often difficult to adequately assess with standard study designs and analytical methods. Herein, we urge the adoption of a new paradigm for the design and interpretation of intervention trials in infectious diseases, particularly with regard to emerging infectious diseases, one that more accurately reflects the dynamics of the transmission process. In an increasingly complex world, simulations can explicitly represent transmission dynamics, which are critical for proper trial design and interpretation. Certain ethical aspects of a trial can also be quantified using simulations. Further, after a trial has been conducted, simulations can be used to explore the possible explanations for the observed effects. Much is to be gained through a multidisciplinary approach that builds collaborations among experts in infectious disease dynamics, epidemiology, statistical science, economics, simulation methods, and the conduct of clinical trials.
Najaf, Pooya; Duddu, Venkata R; Pulugurtha, Srinivas S
2018-03-01
Machine learning (ML) techniques have higher prediction accuracy compared to conventional statistical methods for crash frequency modelling. However, their black-box nature limits their interpretability. The objective of this research is to combine both ML and statistical methods to develop hybrid link-level crash frequency models with high predictability and interpretability. For this purpose, the M5' model trees method (M5') is introduced and applied to classify the crash data and then calibrate a model for each homogeneous class. The data for 1134 and 345 randomly selected links on urban arterials in the city of Charlotte, North Carolina were used to develop and validate the models, respectively. The outputs from the hybrid approach are compared with the outputs from cluster-based negative binomial regression (NBR) and general NBR models. Findings indicate that M5' has high predictability and is very reliable for interpreting the role of different attributes in crash frequency compared to the other developed models.
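The hybrid idea can be sketched in a few lines. M5' itself is not available in common Python libraries, so the sketch below substitutes a shallow CART tree for the classification step and calibrates a negative binomial GLM within each resulting class; the data, model form, and coefficients are invented for illustration, and this is not the authors' code.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)

# Hypothetical link-level data: traffic volume (AADT) and link length (miles).
n = 1000
X = np.column_stack([rng.uniform(5e3, 4e4, n), rng.uniform(0.1, 2.0, n)])
lam = np.exp(-6.0 + 2e-5 * X[:, 0] + 0.8 * X[:, 1])   # assumed crash-rate model
y = rng.poisson(lam)                                   # observed crash counts

# Step 1 (stand-in for M5'): a shallow tree splits links into homogeneous classes.
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=100).fit(X, y)
leaf = tree.apply(X)

# Step 2: calibrate an interpretable count regression within each class.
for lf in np.unique(leaf):
    m = leaf == lf
    Xd = sm.add_constant(X[m])
    fit = sm.GLM(y[m], Xd, family=sm.families.NegativeBinomial()).fit()
    print(f"leaf {lf}: n={m.sum()}, coefs={np.round(fit.params, 4)}")
```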
International Nuclear Information System (INIS)
Yeh, L.
1992-01-01
The phase-space-picture approach to quantum non-equilibrium statistical mechanics via the characteristic function of infinite-mode squeezed coherent states is introduced. We use quantum Brownian motion as an example to show how this approach provides an interesting geometrical interpretation of quantum non-equilibrium phenomena
The interpretation of administrative contracts
Directory of Open Access Journals (Sweden)
Cătălin-Silviu SĂRARU
2014-06-01
Full Text Available The article analyzes the principles of interpretation for administrative contracts in French law and in Romanian law. The article highlights derogations from the rules of contract interpretation in common law. It examines the exceptions to the principle of good faith, the principle of common intention (the will of the parties), the principle of good administration, and the principle of extensive interpretation of the administrative contract. The article highlights the importance and role of interpretation in administrative contracts.
Hussain, Bilal; Sultana, Tayyaba; Sultana, Salma; Al-Ghanim, Khalid Abdullah; Masoud, Muhammad Shahreef; Mahboob, Shahid
2018-04-01
Cirrhinus mrigala, Labeo rohita, and Catla catla are economically important fish for human consumption in Pakistan, but industrial and sewage pollution has drastically reduced their population in the River Chenab. Statistics are an important tool to analyze and interpret comet assay results. The specific aims of the study were to determine the DNA damage in Cirrhinus mrigala, Labeo rohita, and Catla catla due to chemical pollution and to assess the validity of statistical analyses to determine the viability of the comet assay for a possible use with these freshwater fish species as a good indicator of pollution load and habitat degradation. Comet assay results indicated significant (P < 0.05) differences in comet head diameter, comet tail length, and % DNA damage. Regression analysis and correlation matrices conducted among the parameters of the comet assay affirmed the precision and the legitimacy of the results. The present study, therefore, strongly recommends that genotoxicological studies conduct appropriate analysis of the various components of comet assays to offer better interpretation of the assay data.
Statistics and Informatics in Space Astrophysics
Feigelson, E.
2017-12-01
The interest in statistical and computational methodology has seen rapid growth in space-based astrophysics, parallel to the growth seen in Earth remote sensing. There is widespread agreement that scientific interpretation of the cosmic microwave background, discovery of exoplanets, and classification of multiwavelength surveys are too complex to be accomplished with traditional techniques. NASA operates several well-functioning Science Archive Research Centers providing 0.5 PBy datasets to the research community. These databases are integrated with full-text journal articles in the NASA Astrophysics Data System (200K pageviews/day). Data products use interoperable formats and protocols established by the International Virtual Observatory Alliance. NASA supercomputers also support complex astrophysical models of systems such as accretion disks and planet formation. Academic researcher interest in methodology has significantly grown in areas such as Bayesian inference and machine learning, and statistical research is underway to treat problems such as irregularly spaced time series and astrophysical model uncertainties. Several scholarly societies have created interest groups in astrostatistics and astroinformatics. Improvements are needed on several fronts. Community education in advanced methodology is not sufficiently rapid to meet the research needs. Statistical procedures within NASA science analysis software are sometimes not optimal, and pipeline development may not use modern software engineering techniques. NASA offers few grant opportunities supporting research in astroinformatics and astrostatistics.
Reeve, Joanne
2010-01-01
Patient-centredness is a core value of general practice; it is defined as the interpersonal processes that support the holistic care of individuals. To date, efforts to demonstrate their relationship to patient outcomes have been disappointing, whilst some studies suggest values may be more rhetoric than reality. Contextual issues influence the quality of patient-centred consultations, impacting on outcomes. The legitimate use of knowledge, or evidence, is a defining aspect of modern practice, and has implications for patient-centredness. Based on a critical review of the literature, on my own empirical research, and on reflections from my clinical practice, I critique current models of the use of knowledge in supporting individualised care. Evidence-Based Medicine (EBM), and its implementation within health policy as Scientific Bureaucratic Medicine (SBM), define best evidence in terms of an epistemological emphasis on scientific knowledge over clinical experience. It provides objective knowledge of disease, including quantitative estimates of the certainty of that knowledge. Whilst arguably appropriate for secondary care, involving episodic care of selected populations referred in for specialist diagnosis and treatment of disease, application to general practice can be questioned given the complex, dynamic and uncertain nature of much of the illness that is treated. I propose that general practice is better described by a model of Interpretive Medicine (IM): the critical, thoughtful, professional use of an appropriate range of knowledges in the dynamic, shared exploration and interpretation of individual illness experience, in order to support the creative capacity of individuals in maintaining their daily lives. Whilst the generation of interpreted knowledge is an essential part of daily general practice, the profession does not have an adequate framework by which this activity can be externally judged to have been done well. Drawing on theory related to the
Raposo, Letícia M; Nobre, Flavio F
2017-08-30
Resistance to antiretrovirals (ARVs) is a major problem faced by HIV-infected individuals. Different rule-based algorithms were developed to infer HIV-1 susceptibility to antiretrovirals from genotypic data. However, there is discordance between them, resulting in difficulties for clinical decisions about which treatment to use. Here, we developed ensemble classifiers integrating three interpretation algorithms: Agence Nationale de Recherche sur le SIDA (ANRS), Rega, and the genotypic resistance interpretation system from the Stanford HIV Drug Resistance Database (HIVdb). Three approaches were applied to develop a classifier with a single resistance profile: stacked generalization, a simple plurality vote scheme, and the selection of the interpretation system with the best performance. The strategies were compared using Friedman's test, and the performance of the classifiers was evaluated using the F-measure, sensitivity and specificity values. We found that the three strategies had similar performances for the selected antiretrovirals. For some cases, the stacking technique with naïve Bayes as the learning algorithm showed a statistically superior F-measure. This study demonstrates that ensemble classifiers can be an alternative tool for clinical decision-making since they provide a single resistance profile from the most commonly used resistance interpretation systems.
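A hedged sketch of two of the three strategies follows. The real inputs would be the categorical calls of ANRS, Rega, and HIVdb; here they are simulated, so the code is an illustration of the ensemble idea rather than the authors' pipeline.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Stand-ins for the three rule-based systems: each column holds one system's
# call for a sequence (0 = susceptible, 1 = resistant), simulated at ~85%
# agreement with the "true" resistance phenotype.
n = 500
y = rng.integers(0, 2, n)
calls = np.column_stack([np.where(rng.random(n) < 0.85, y, 1 - y)
                         for _ in range(3)])

# Strategy 1 -- simple plurality (majority) vote over the three calls:
vote = (calls.sum(axis=1) >= 2).astype(int)
print("majority-vote accuracy:", (vote == y).mean())

# Strategy 2 -- stacked generalization: a naive Bayes meta-learner learns how
# much to trust each system from the pattern of their (dis)agreements.
print("stacking (naive Bayes) CV accuracy:",
      cross_val_score(CategoricalNB(), calls, y, cv=5).mean())
```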
Image interpretation performance: A longitudinal study from novice to professional
International Nuclear Information System (INIS)
Wright, C.; Reeves, P.
2017-01-01
Purpose: Universities need to deliver educational programmes that create radiography graduates who are ready and able to participate in abnormality detection schemes, ultimately delivering safe and reliable performance, because junior doctors are exposed to the risk of misdiagnosis if unsupported by other healthcare professionals. Radiographers are ideally suited to this role, having the responsibility for conducting the actual X-ray examination. Method: The image interpretation performance of one cohort of student radiographers was measured upon enrolment from UCAS in the first week of university education and then again prior to graduation using RadBench (n = 23). Results: The results identified that novices have a range of natural image interpretation skills; accuracy 35–85%, sensitivity 45–100%, specificity 15–85%, mean ROC 0.691. Graduates presented a narrower range; accuracy 60–90%, sensitivity 40–100%, specificity 60–90%, mean ROC 0.841. The positive shift in graduate mean accuracy (+16%) was driven by increases in specificity (+27%) rather than sensitivity (+5%). No statistically significant differences (ANOVA) could be found between age group, gender and previous education; however, trends were identified. 56.5% of the population (n = 13) met a benchmark accuracy standard of 80%, including one graduate who met 90%. Conclusion: Image interpretation testing at the point of UCAS entry is a useful indicator of future performance and is a recommended factor for consideration as part of the selection process. Whilst image interpretation now forms an integral part of undergraduate radiography programmes, new graduates may not necessarily possess the reliability in decision making to justify participation in abnormality detection schemes, highlighting the need for continuous professional development. - Highlights: • Some novices appear to have inherent skills in fracture identification. • RadBench testing as part of the UCAS selection process
Domain general constraints on statistical learning.
Thiessen, Erik D
2011-01-01
All theories of language development suggest that learning is constrained. However, theories differ on whether these constraints arise from language-specific processes or have domain-general origins such as the characteristics of human perception and information processing. The current experiments explored constraints on statistical learning of patterns, such as the phonotactic patterns of an infant's native language. Infants in these experiments were presented with a visual analog of a phonotactic learning task used by J. R. Saffran and E. D. Thiessen (2003). Saffran and Thiessen found that infants' phonotactic learning was constrained such that some patterns were learned more easily than other patterns. The current results indicate that infants' learning of visual patterns shows the same constraints as infants' learning of phonotactic patterns. This is consistent with theories suggesting that constraints arise from domain-general sources and, as such, should operate over many kinds of stimuli in addition to linguistic stimuli. © 2011 The Author. Child Development © 2011 Society for Research in Child Development, Inc.
Solving block linear systems with low-rank off-diagonal blocks is easily parallelizable
Energy Technology Data Exchange (ETDEWEB)
Menkov, V. [Indiana Univ., Bloomington, IN (United States)
1996-12-31
An easily and efficiently parallelizable direct method is given for solving a block linear system Bx = y, where B = D + Q is the sum of a non-singular block diagonal matrix D and a matrix Q with low-rank blocks. This implicitly defines a new preconditioning method with an operation count close to the cost of calculating a matrix-vector product Qw for some w, plus at most twice the cost of calculating Qw for some w. When implemented on a parallel machine the processor utilization can be as good as that of those operations. Order estimates are given for the general case, and an implementation is compared to block SSOR preconditioning.
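The paper's own scheme is not reproduced here, but the underlying idea is easy to demonstrate under the simplifying assumption that the low-rank part admits a global factorization Q = U V^T of small rank r. The Woodbury identity then reduces the problem to independent per-block solves (the trivially parallel part) plus one small r x r coupling system:

```python
import numpy as np

def solve_diag_plus_lowrank(D_blocks, U, V, y):
    """Solve (D + U V^T) x = y, with D block diagonal and U, V of small rank r.

    Woodbury identity:
      (D + U V^T)^{-1} = D^{-1} - D^{-1} U (I + V^T D^{-1} U)^{-1} V^T D^{-1}
    The solves with D factor independently per block, so they parallelize
    trivially; only the small r x r system couples the blocks.
    """
    def d_solve(b):
        out, i = np.empty_like(b), 0
        for Dk in D_blocks:                     # independent per-block solves
            m = Dk.shape[0]
            out[i:i + m] = np.linalg.solve(Dk, b[i:i + m])
            i += m
        return out

    Dy = d_solve(y)
    DU = d_solve(U)                             # works column-wise for 2-D b
    small = np.eye(U.shape[1]) + V.T @ DU       # r x r coupling system
    return Dy - DU @ np.linalg.solve(small, V.T @ Dy)

# Quick check on random data (diagonally dominant blocks for nonsingularity)
rng = np.random.default_rng(4)
blocks = [rng.normal(size=(4, 4)) + 4 * np.eye(4) for _ in range(3)]
U, V = rng.normal(size=(12, 2)), rng.normal(size=(12, 2))
y = rng.normal(size=12)
B = np.zeros((12, 12))
for k, Dk in enumerate(blocks):
    B[4 * k:4 * k + 4, 4 * k:4 * k + 4] = Dk
B += U @ V.T
x = solve_diag_plus_lowrank(blocks, U, V, y)
print(np.allclose(B @ x, y))                    # True
```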
Statistical approaches to orofacial pain and temporomandibular disorders research
Manfredini, Daniele; Nardini, Luca Guarda; Carrozzo, Eleonora; Salmaso, Luigi
2014-01-01
This book covers the biostatistical methods utilized to interpret and analyze dental research in the areas of orofacial pain and temporomandibular disorders. It will guide practitioners in these fields who would like to interpret research findings or find examples on the design of clinical investigations. After an introduction dealing with the basic issues, the central sections of the textbook are dedicated to the different types of investigations in sight of specific goals researchers may have. The final section contains more elaborate statistical concepts for expert professionals. The field of orofacial pain and temporomandibular disorders is emerging as one of the most critical areas of clinical research in dentistry. Due to the complexity of clinical pictures, the multifactorial etiology, and the importance of psychosocial factors in all aspects of the TMD practice, clinicians often find it hard to appraise their modus operandi, and researchers must constantly increase their knowledge in epidemiology and ...
Spiva, LeeAnna; Johnson, Kimberly; Robertson, Bethany; Barrett, Darcy T; Jarrell, Nicole M; Hunter, Donna; Mendoza, Inocencia
2012-02-01
Historically, the instructional method of choice has been traditional lecture or face-to-face education; however, changes in the health care environment, including resource constraints, have necessitated examination of this practice. A descriptive pre-/posttest method was used to determine the effectiveness of alternative teaching modalities on nurses' knowledge and confidence in electrocardiogram (EKG) interpretation. A convenience sample of 135 nurses was recruited in an integrated health care system in the Southeastern United States. Nurses attended an instructor-led course, an online learning (e-learning) platform with no study time or 1 week of study time, or an e-learning platform coupled with a 2-hour post-course instructor-facilitated debriefing with no study time or 1 week of study time. Instruments included a confidence scale, an online EKG test, and a course evaluation. Statistically significant differences in knowledge and confidence were found for individual groups after nurses participated in the intervention. Statistically significant differences were found in pre-knowledge and post-confidence when groups were compared. Organizations that use various instructional methods to educate nurses in EKG interpretation can use different teaching modalities without negatively affecting nurses' knowledge or confidence in this skill. Copyright 2012, SLACK Incorporated.
Statistical methods for data analysis in particle physics
AUTHOR|(CDS)2070643
2015-01-01
This concise set of course-based notes provides the reader with the main concepts and tools needed to perform statistical analysis of experimental data, in particular in the field of high-energy physics (HEP). First, an introduction to probability theory and basic statistics is given, mainly as a reminder from advanced undergraduate studies, and also with a view to clearly distinguishing the Frequentist and Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on upper limits, as many applications in HEP concern hypothesis testing, where often the main goal is to provide better and better limits so as to be able eventually to distinguish between competing hypotheses or to rule out some of them altogether. Many worked examples will help newcomers to the field and graduate students understand the pitfalls in applying theoretical concepts to actual data.
Dean, Robyn K.; Pollard, Robert Q., Jr.
2001-01-01
This article uses the framework of demand-control theory to examine the occupation of sign language interpreting. It discusses the environmental, interpersonal, and intrapersonal demands that impinge on the interpreter's decision latitude and notes the prevalence of cumulative trauma disorders, turnover, and burnout in the interpreting profession.…
TRANSALPINA CAN EASILY BE CONSIDERED THE DIAMOND COUNTRY LANDSCAPES, ADVENTURE AND MYSTERY
Directory of Open Access Journals (Sweden)
Constanta ENEA
2014-05-01
Full Text Available If the Transfăgărăşan is the pearl of the Romanian mountains, this road can easily be considered the diamond of the country's landscapes, adventure and mystery. Hell's Kitchen has developed and evolved naturally. There is no certainty of success, and money is required to build the infrastructure first and then see whether investors come, so we cannot blame the local authorities here. The difficulties encountered in implementing funding programs make funds hard enough to obtain. In this paper, I briefly mention some ideas that could make the two cities to which Rancière belongs administratively the core of the burgeoning tourist development of Gorj County. I sincerely hope that there are among us people with vision who want to stand up and take action to provide a decent future for our children.
Quantum versus classical statistical dynamics of an ultracold Bose gas
International Nuclear Information System (INIS)
Berges, Juergen; Gasenzer, Thomas
2007-01-01
We investigate the conditions under which quantum fluctuations are relevant for the quantitative interpretation of experiments with ultracold Bose gases. This requires going beyond the description in terms of the Gross-Pitaevskii and Hartree-Fock-Bogoliubov mean-field theories, which can be obtained as classical (statistical) field-theory approximations of the quantum many-body problem. We employ functional-integral techniques based on the two-particle irreducible (2PI) effective action. The role of quantum fluctuations is studied within the nonperturbative 2PI 1/N expansion to next-to-leading order. At this accuracy level, memory integrals enter the dynamic equations, which differ for quantum and classical statistical descriptions. This can be used to obtain a classicality condition for the many-body dynamics. We exemplify this condition by studying the nonequilibrium evolution of a one-dimensional Bose gas of sodium atoms, and discuss some distinctive properties of quantum versus classical statistical dynamics.
International Nuclear Information System (INIS)
Beck, W.
1984-01-01
From the complexity of computer programs for the solution of scientific and technical problems a number of questions result. Typical questions concern the strengths and weaknesses of computer programs, the propagation of uncertainties among the input data, the sensitivity of output data to input data, and the substitution of complex models by simpler ones which provide equivalent results in certain ranges. These questions have a general practical meaning; principal answers may be found by statistical methods based on the Monte Carlo method. In this report the statistical methods are chosen, described and evaluated. They are implemented into the modular program system STAR, which is a component of the program system RSYST. The design of STAR takes into account users with different knowledge of data processing and statistics, the variety of statistical methods and of generating and evaluating procedures, the processing of large data sets in complex structures, the coupling to other components of RSYST and to programs outside RSYST, and the requirement that the system can be easily modified and enlarged. Four examples are given which demonstrate the application of STAR. (orig.) [de]
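STAR itself is not publicly available to sketch, but the Monte Carlo pattern it implements, propagating input uncertainties through a model and ranking input/output sensitivities, looks roughly like this (the toy model and input distributions are assumptions):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)

def model(k, c, t=5.0):
    # Toy stand-in for a complex scientific code: exponential decay.
    return c * np.exp(-k * t)

# Propagate assumed input uncertainties through the model.
n = 10_000
k = rng.normal(0.30, 0.03, n)   # uncertain rate constant
c = rng.normal(1.00, 0.10, n)   # uncertain initial amount
out = model(k, c)

lo, hi = np.percentile(out, [2.5, 97.5])
print(f"output: mean={out.mean():.3f}, std={out.std():.3f}, "
      f"95% interval=({lo:.3f}, {hi:.3f})")

# Rank inputs by their influence on the output (sensitivity analysis).
for name, v in (("k", k), ("c", c)):
    rho, _ = spearmanr(v, out)
    print(f"Spearman({name}, output) = {rho:+.2f}")
```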
A highly versatile and easily configurable system for plant electrophysiology.
Gunsé, Benet; Poschenrieder, Charlotte; Rankl, Simone; Schröeder, Peter; Rodrigo-Moreno, Ana; Barceló, Juan
2016-01-01
In this study we present a highly versatile and easily configurable system for measuring plant electrophysiological parameters and ionic flow rates, connected to a computer-controlled, highly accurate positioning device. The modular software used allows easily customizable configurations for the measurement of electrophysiological parameters. Both the operational tests and the experiments already performed have been fully successful and rendered a low-noise and highly stable signal. Assembly, programming and configuration examples are discussed. The system is a powerful technique that not only gives precise measurement of plant electrophysiological status, but also allows easy development of ad hoc configurations that are not constrained to plant studies.
• We developed a highly modular system for electrophysiology measurements that can be used either on organs or on cells and performs either steady or dynamic intra- and extracellular measurements, taking advantage of the ease of visual object-oriented programming.
• High-precision data acquisition under electrically noisy environments allows it to run even in a laboratory close to equipment that produces electrical noise.
• The system improves on currently used systems for monitoring and controlling high-precision measurements and micromanipulation, providing an open and customizable environment for multiple experimental needs.
International Nuclear Information System (INIS)
Robin, J.P.; Tollier, M.Th.; Guilbot, A.
1978-01-01
Besides compounds of low molecular mass, the gamma irradiation of granular starch produces radiodextrins with a mass lying between that of the low-molecular-mass compounds and that of the amylose and amylopectin macromolecules from which they derive. The authors present the main results relating to the characterization of the radiodextrins of strongly irradiated normal and waxy maize starches. The method of study - both enzymatic and chromatographic - is the one that has been in use for some ten years for studying the fine structure of α-1,4-1,6 glucans. An attempt is made to interpret and integrate the results in the light of new data derived from a study of the controlled acid hydrolysis of starch. In particular, the following hypothesis is advanced: the 'hydrolytic' effect of irradiation is, at the qualitative level, independent of the nature of the starch and similar to the effect produced by acid hydrolysis; as with acid hydrolysis, the breaking of the covalent bonds is a function of the internal structure of the grain and especially of its amorphous/crystalline organization; the zones of an amorphous character are more easily degraded; on the other hand, the 'crystalline' zones are better protected; in fact, the 'hydrolytic' effect of irradiation is not homogeneous and does not conform to a statistical pattern. (author)
The impact of working memory on interpreting
Institute of Scientific and Technical Information of China (English)
白云安; 张国梅
2016-01-01
This paper investigates the roles of working memory in the interpreting process. First of all, it gives a brief introduction to interpreting. Secondly, the paper exemplifies the role of working memory in interpreting. The result reveals that the working memory capacity of interpreters is not absolutely proportional to the quality of interpreting in real interpreting conditions. The performance of an interpreter with a well-equipped working memory capacity will be comprehensively influenced by various elements.
International Nuclear Information System (INIS)
Fu Jie; Zhou Qifu; Chen Dongliang
2011-01-01
This paper mainly introduces the contents of shielding design in NCRP Report No. 151, published in 2005, and discusses some issues that are easily overlooked during the environmental impact assessment of medical electron accelerators in China. Some references are provided for the shielding design and assessment of medical electron accelerators, so as to achieve the goal of scientific, reasonable, feasible and economical radiation shielding protection. (authors)
Phillips, Richard L.; Chang, Kyu Hyun; Friedler, Sorelle A.
2017-01-01
Active learning has long been a topic of study in machine learning. However, as increasingly complex and opaque models have become standard practice, the process of active learning, too, has become more opaque. There has been little investigation into interpreting what specific trends and patterns an active learning strategy may be exploring. This work expands on the Local Interpretable Model-agnostic Explanations framework (LIME) to provide explanations for active learning recommendations. W...
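For orientation, baseline LIME usage on tabular data looks like the following. This shows only the plain lime package that the paper builds on, not the authors' active-learning extension; the classifier and dataset are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=data.feature_names, class_names=data.target_names,
    discretize_continuous=True)

# Explain the model's prediction for a single instance; an active learner
# could inspect such explanations alongside its query recommendations.
exp = explainer.explain_instance(X[50], clf.predict_proba, num_features=3)
print(exp.as_list())
```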
Interpreter in Criminal Cases: Allrounders First!
Frid, Arthur
1974-01-01
The interpreter in criminal cases generally has had a purely linguistic training with no difference from the education received by his colleague interpreters. The position of interpreters in criminal cases is vague and their role depends to a large extent on individual interpretation of officials involved in the criminal procedure. Improvements on…
DEFF Research Database (Denmark)
Poulsen, Søren Erbs; Alberdi Pagola, Maria
2015-01-01
A method for real-time interpretation of ongoing thermal response tests of vertical borehole heat exchangers is presented. The method utilizes a statistically based stopping criterion for ongoing tests. The study finds minimum testing times for synthetic and actual TRTs to be in the interval 12–2...
Health significance and statistical uncertainty. The value of P-value.
Consonni, Dario; Bertazzi, Pier Alberto
2017-10-27
The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P<0.05" ("statistically significant") and "P>0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. Our aim is to show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors. We provide examples of distorted use of the P-value and of the negative consequences for science and public health of such a black-and-white vision. The rigid interpretation of the P-value as a dichotomy favors the confusion between health relevance and statistical significance, discourages thoughtful thinking, and distracts attention from what really matters, the health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not usually consider the whole interval of the CI but only examine whether it includes the null value, thereby degrading this procedure to the same P-value dichotomy (statistically significant or not). When reporting statistical results of scientific research, present effect estimates with their confidence intervals, and do not qualify the P-value as "significant" or "not significant".
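The recommended reporting style is easy to produce. The sketch below computes a risk ratio and its 95% CI from a hypothetical 2x2 cohort table (the counts are invented for illustration), using the standard log-scale standard error:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical cohort: 40/200 exposed and 25/250 unexposed workers develop
# the outcome.
a, n1 = 40, 200     # exposed: cases, total
b, n0 = 25, 250     # unexposed: cases, total

rr = (a / n1) / (b / n0)                          # risk ratio
se_log_rr = np.sqrt(1/a - 1/n1 + 1/b - 1/n0)      # SE on the log scale
z = norm.ppf(0.975)
lo, hi = np.exp(np.log(rr) + np.array([-z, z]) * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# "RR = 2.00, 95% CI (1.26, 3.18)" conveys both the health relevance
# (doubled risk) and the statistical uncertainty in a single statement.
```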
Mind, Matter, Information and Quantum Interpretations
Directory of Open Access Journals (Sweden)
Reza Maleeh
2015-07-01
Full Text Available In this paper I give a new information-theoretic analysis of the formalisms and interpretations of quantum mechanics (QM) in general, and of two mainstream interpretations of quantum mechanics in particular: the Copenhagen interpretation and David Bohm's interpretation of quantum mechanics. Adopting Juan G. Roederer's reading of the notion of pragmatic information, I argue that pragmatic information is not applicable to the Copenhagen interpretation since the interpretation is primarily concerned with epistemology rather than ontology. However, it perfectly fits Bohm's ontological interpretation of quantum mechanics in the realms of biotic and artificial systems. Viewing Bohm's interpretation of QM in the context of pragmatic information imposes serious limitations on the qualitative aspect of such an interpretation, making his extension of the notion of active information to every level of reality illegitimate. Such limitations lead to the idea that, contrary to Bohm's claim, mind is not a more subtle aspect of reality via the quantum potential as active information, but the quantum potential as it affects particles in the double-slit experiment represents the non-algorithmic aspect of the mind as a genuine information processing system. This will provide an information-based ground, firstly, for refreshing our views on quantum interpretations and, secondly, for a novel qualitative theory of the relationship of mind and matter in which mind-like properties are exclusive attributes of living systems. To this end, I will also take an information-theoretic approach to the notion of intentionality as interpreted by John Searle.
A utilização da estatística na Ortodontia The application of statistics in Orthodontics
Directory of Open Access Journals (Sweden)
Tien Li An
2004-12-01
Full Text Available Statistics plays a fundamental role in the scientific method: it organizes, describes, analyzes and interprets the data obtained from an observation or an experiment. However, its interpretation remains not easily accessible, whether to the reader or to those who employ it. Sometimes statistics is misapplied or misunderstood because of terms which, statistically, carry their own meanings. Additionally, there are few scientific articles in Orthodontics that describe this subject. Thus, the present article aims to survey the frequency of the use of statistics, to make considerations about the selection of its methods, and to explain the possible interpretations of the results.
Interpretation of computed tomographic images
International Nuclear Information System (INIS)
Stickle, R.L.; Hathcock, J.T.
1993-01-01
This article discusses the production of optimal CT images in small animal patients as well as principles of radiographic interpretation. Technical factors affecting image quality and aiding image interpretation are included. Specific considerations for scanning various anatomic areas are given, including indications and potential pitfalls. Principles of radiographic interpretation are discussed. Selected patient images are illustrated
Zendedel, Rena; Schouten, Barbara C; van Weert, Julia C M; van den Putte, Bas
2018-06-01
The aim of this observational study was twofold. First, we examined how often and which roles informal interpreters performed during consultations between Turkish-Dutch migrant patients and general practitioners (GPs). Second, relations between these roles and patients' and GPs' perceived control, trust in informal interpreters and satisfaction with the consultation were assessed. A coding instrument was developed to quantitatively code informal interpreters' roles from transcripts of 84 audio-recorded interpreter-mediated consultations in general practice. Patients' and GPs' perceived control, trust and satisfaction were assessed in a post consultation questionnaire. Informal interpreters most often performed the conduit role (almost 25% of all coded utterances), and also frequently acted as replacers and excluders of patients and GPs by asking and answering questions on their own behalf, and by ignoring and omitting patients' and GPs' utterances. The role of information source was negatively related to patients' trust and the role of GP excluder was negatively related to patients' perceived control. Patients and GPs are possibly insufficiently aware of the performed roles of informal interpreters, as these were barely related to patients' and GPs' perceived trust, control and satisfaction. Patients and GPs should be educated about the possible negative consequences of informal interpreting. Copyright © 2018 Elsevier B.V. All rights reserved.
Objective interpretation as conforming interpretation
Lidka Rodak
2011-01-01
The practical discourse willingly uses the formula of "objective interpretation", with no regard to its controversial nature, which has been discussed in the literature. The main aim of the article is to investigate what "objective interpretation" could mean and how it could be understood in practical discourse, focusing on the understanding offered by judicature. The thesis of the article is that objective interpretation, as identified with the textualists' position, is not possible to uphold, and ...
Renyi statistics in equilibrium statistical mechanics
International Nuclear Information System (INIS)
Parvan, A.S.; Biro, T.S.
2010-01-01
The Renyi statistics in the canonical and microcanonical ensembles is examined both in general and in particular for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. Using exact analytical results for the ideal gas, it is shown that in the canonical ensemble, taking the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore, it satisfies the requirements of equilibrium thermodynamics, i.e. the thermodynamical potential of the statistical ensemble is a homogeneous function of the first degree in its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamical relations as those stemming from the Boltzmann-Gibbs statistics in this limit.
International Nuclear Information System (INIS)
Espada, L.; Sanjurjo, M.; Urrejola, S.; Bouzada, F.; Rey, G.; Sanchez, A.
2003-01-01
Given its simplicity and low cost compared to other methodologies, the measurement and interpretation of electrochemical noise is consolidating itself as one of the analysis methods most frequently used for the interpretation of corrosion. As the technique is still evolving, standard treatment methodologies for the data retrieved in experiments do not yet exist. To date, statistical analysis and Fourier analysis have commonly been used to establish the parameters that may characterize the recording of potential and current electrochemical noise. This study introduces a new methodology based on wavelet analysis and presents its advantages over Fourier analysis: it distinguishes periodic and non-periodic variations in the signal power in both time and frequency, whereas Fourier analysis considers only the frequency. (Author) 15 refs
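To make the contrast concrete, the sketch below decomposes a synthetic noise record with PyWavelets and reports the energy per scale. Because wavelet coefficients stay localized in time, a transient shows up at a specific scale and time, which a Fourier power spectrum would smear over all frequencies. The signal, sampling rate, and wavelet choice are assumptions:

```python
import numpy as np
import pywt

rng = np.random.default_rng(6)

# Synthetic "electrochemical noise" record: slow drift + a transient + noise.
fs, n = 10.0, 4096                       # assumed 10 Hz sampling
tt = np.arange(n) / fs
sig = 0.05 * np.sin(2 * np.pi * 0.01 * tt) + rng.normal(0, 0.01, n)
sig[1000:1020] += 0.1 * np.exp(-np.arange(20) / 5.0)   # localized transient

# Discrete wavelet decomposition: energy is resolved per scale, and the
# coefficients remain localized in time, unlike a Fourier power spectrum.
coeffs = pywt.wavedec(sig, "db4", level=6)
for j, c in enumerate(coeffs[1:], start=1):   # detail coefficients D6..D1
    print(f"detail level {7 - j}: energy = {np.sum(c**2):.4f}, n_coeffs = {len(c)}")
```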
Walkden, N. R.; Wynn, A.; Militello, F.; Lipschultz, B.; Matthews, G.; Guillemaut, C.; Harrison, J.; Moulton, D.; Contributors, JET
2017-08-01
This paper presents the use of a novel modelling technique based around intermittent transport due to filament motion, to interpret experimental profile and fluctuation data in the scrape-off layer (SOL) of JET during the onset and evolution of a density profile shoulder. A baseline case is established, prior to shoulder formation, and the stochastic model is shown to be capable of simultaneously matching the time averaged profile measurement as well as the PDF shape and autocorrelation function from the ion-saturation current time series at the outer wall. Aspects of the stochastic model are then varied with the aim of producing a profile shoulder with statistical measurements consistent with experiment. This is achieved through a strong localised reduction in the density sink acting on the filaments within the model. The required reduction of the density sink occurs over a highly localised region with the timescale of the density sink increased by a factor of 25. This alone is found to be insufficient to model the expansion and flattening of the shoulder region as the density increases, which requires additional changes within the stochastic model. An example is found which includes both a reduction in the density sink and filament acceleration and provides a consistent match to the experimental data as the shoulder expands, though the uniqueness of this solution can not be guaranteed. Within the context of the stochastic model, this implies that the localised reduction in the density sink can trigger shoulder formation, but additional physics is required to explain the subsequent evolution of the profile.
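A generic shot-noise signal of the kind such stochastic filament models produce can be generated in a few lines. This is a toy in the spirit of the model, not the authors' implementation: exponential pulses with Poisson-distributed arrival times and exponentially distributed amplitudes, from which the PDF skewness and autocorrelation time can be extracted as in the paper's comparisons against probe data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed parameters: record length, time step, pulse decay time, pulse rate.
T, dt, tau, rate = 500.0, 0.05, 1.0, 0.5
t = np.arange(0.0, T, dt)
n_events = rng.poisson(rate * T)
t_k = rng.uniform(0.0, T, n_events)          # pulse arrival times
a_k = rng.exponential(1.0, n_events)         # pulse amplitudes

sig = np.zeros_like(t)
for tk, ak in zip(t_k, a_k):
    m = t >= tk
    sig[m] += ak * np.exp(-(t[m] - tk) / tau)   # decaying "filament" pulse

# Statistics of the kind compared against ion-saturation current data:
skew_positive = ((sig - sig.mean()) ** 3).mean() > 0
print("mean =", round(sig.mean(), 3), "| positively skewed PDF:", skew_positive)
acf = np.correlate(sig - sig.mean(), sig - sig.mean(), mode="full")
acf = acf[acf.size // 2:] / acf[acf.size // 2]
print("autocorrelation time ~", t[np.argmax(acf < np.exp(-1))], "s")
```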
Webb, Emily M; Vella, Maya; Straus, Christopher M; Phelps, Andrew; Naeger, David M
2015-04-01
There are few data as to whether appropriate, cost-effective, and safe ordering of imaging examinations is adequately taught in US medical school curricula. We sought to determine the proportion of noninterpretive content (such as appropriate ordering) versus interpretive content (such as reading a chest x-ray) in the top-selling medical student radiology textbooks. We performed an online search to identify a ranked list of the six top-selling general radiology textbooks for medical students. Each textbook was reviewed including content in the text, tables, images, figures, appendices, practice questions, question explanations, and glossaries. Individual pages of text and individual images were semiquantitatively scored on a six-level scale as to the percentage of material that was interpretive versus noninterpretive. The predominant imaging modality addressed in each was also recorded. Descriptive statistical analysis was performed. All six books had more interpretive content. On average, 1.4 pages of text focused on interpretation for every one page focused on noninterpretive content. Seventeen images/figures were dedicated to interpretive skills for every one focused on noninterpretive skills. In all books, the largest proportion of text and image content was dedicated to plain films (51.2%), with computed tomography (CT) a distant second (16%). The content on radiographs (3.1:1) and CT (1.6:1) was more interpretive than not. The current six top-selling medical student radiology textbooks contain a preponderance of material teaching image interpretation compared to material teaching noninterpretive skills, such as appropriate imaging examination selection, rational utilization, and patient safety. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Hernandez-Cardoso, G. G.; Alfaro-Gomez, M.; Rojas-Landeros, S. C.; Salas-Gutierrez, I.; Castro-Camus, E.
2018-03-01
In this article, we present a series of hydration mapping images of the foot soles of diabetic and non-diabetic subjects measured by terahertz reflectance. In addition to the hydration images, we present a series of RYG (red-yellow-green) color-coded images, where each pixel is assigned one of the three colors in order to easily identify areas at risk of ulceration. We also present statistics of the number of pixels of each color as a potential quantitative indicator of diabetic foot-syndrome deterioration.
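The RYG coding step itself is straightforward; the sketch below maps a hypothetical hydration map to three risk colors and counts pixels per color. The thresholds are illustrative placeholders, not the clinical cut-offs used in the study:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

rng = np.random.default_rng(8)

# Hypothetical hydration map (percent water content) standing in for the
# terahertz measurement.
hydration = np.clip(rng.normal(55, 8, (64, 64)), 20, 80)

thresholds = [45.0, 55.0]                  # assumed risk cut-offs (%)
risk = np.digitize(hydration, thresholds)  # 0 = red, 1 = yellow, 2 = green

cmap = ListedColormap(["red", "gold", "green"])
plt.imshow(risk, cmap=cmap, vmin=0, vmax=2)
plt.title("RYG ulceration-risk map (illustrative thresholds)")
plt.colorbar(ticks=[0, 1, 2])
plt.show()

# Per-color pixel counts, the quantitative indicator suggested above:
for color, count in zip(["red", "yellow", "green"],
                        np.bincount(risk.ravel(), minlength=3)):
    print(color, count)
```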
Directory of Open Access Journals (Sweden)
Elske Salemink
2013-05-01
Full Text Available Interpretive biases play a crucial role in anxiety disorders. The aim of the current study was to examine factors that determine the relative strength of the threat-related interpretive biases that are characteristic of individuals high in social anxiety. Different (dual) process models argue that both implicit and explicit processes determine information processing biases and behaviour, and that their impact is moderated by the availability of executive resources such as working memory capacity (WMC). Based on these models, we expected indicators of implicit social anxiety to predict threat-related interpretive bias in individuals low, but not high, in WMC. Indicators of explicit social anxiety should predict threat-related interpretive bias in individuals high, but not low, in WMC. As expected, WMC moderated the impact of implicit social anxiety on threat-related interpretive bias, although the simple slope for individuals low in WMC was not statistically significant. The hypotheses regarding explicit social anxiety (with fear of negative evaluation used as an indicator) were fully supported. The clinical implications of these findings are discussed.
Learning Predictive Statistics: Strategies and Brain Mechanisms.
Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe
2017-08-30
When immersed in a new environment, we are challenged to decipher initially incomprehensible streams of sensory information. However, quite rapidly, the brain finds structure and meaning in these incoming signals, helping us to predict and prepare ourselves for future actions. This skill relies on extracting the statistics of event streams in the environment that contain regularities of variable complexity from simple repetitive patterns to complex probabilistic combinations. Here, we test the brain mechanisms that mediate our ability to adapt to the environment's statistics and predict upcoming events. By combining behavioral training and multisession fMRI in human participants (male and female), we track the corticostriatal mechanisms that mediate learning of temporal sequences as they change in structure complexity. We show that learning of predictive structures relates to individual decision strategy; that is, selecting the most probable outcome in a given context (maximizing) versus matching the exact sequence statistics. These strategies engage distinct human brain regions: maximizing engages dorsolateral prefrontal, cingulate, sensory-motor regions, and basal ganglia (dorsal caudate, putamen), whereas matching engages occipitotemporal regions (including the hippocampus) and basal ganglia (ventral caudate). Our findings provide evidence for distinct corticostriatal mechanisms that facilitate our ability to extract behaviorally relevant statistics to make predictions. SIGNIFICANCE STATEMENT Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. Past work has studied how humans identify repetitive patterns and associative pairings. However, the natural environment contains regularities that vary in complexity from simple repetition to complex probabilistic combinations. Here, we combine behavior and multisession fMRI to track the brain mechanisms that mediate our ability to adapt to
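The behavioral distinction between the two strategies is easy to simulate. On a stream where one outcome occurs with probability p = 0.8, maximizing attains accuracy p while matching attains p^2 + (1-p)^2; the toy below is our illustration, not the authors' training paradigm:

```python
import numpy as np

rng = np.random.default_rng(9)

# A stream where outcome 1 occurs with probability p in a fixed context.
p, n = 0.8, 10_000
outcomes = rng.random(n) < p

maximize = np.ones(n, dtype=bool)    # always predict the majority outcome
match = rng.random(n) < p            # reproduce the outcome statistics

print("maximizing accuracy:", (maximize == outcomes).mean())  # ~0.80
print("matching accuracy:  ", (match == outcomes).mean())     # ~0.68 = p**2 + (1-p)**2
```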
A robust interpretation of duration calculus
DEFF Research Database (Denmark)
Franzle, M.; Hansen, Michael Reichhardt
2005-01-01
We transfer the concept of robust interpretation from arithmetic first-order theories to metric-time temporal logics. The idea is that the interpretation of a formula is robust iff its truth value does not change under small variation of the constants in the formula. Exemplifying this on Duration Calculus (DC), our findings are that the robust interpretation of DC is equivalent to a multi-valued interpretation that uses the real numbers as semantic domain and assigns Lipschitz-continuous interpretations to all operators of DC. Furthermore, this continuity permits approximation between discrete...
Statistical comparison of the geometry of second-phase particles
Energy Technology Data Exchange (ETDEWEB)
Benes, Viktor, E-mail: benesv@karlin.mff.cuni.cz [Charles University in Prague, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics, Sokolovska 83, 186 75 Prague 8-Karlin (Czech Republic); Lechnerova, Radka, E-mail: radka.lech@seznam.cz [Private College on Economical Studies, Ltd., Lindnerova 575/1, 180 00 Prague 8-Liben (Czech Republic); Klebanov, Lev [Charles University in Prague, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics, Sokolovska 83, 186 75 Prague 8-Karlin (Czech Republic); Slamova, Margarita, E-mail: slamova@vyzkum-kovu.cz [Research Institute for Metals, Ltd., Panenske Brezany 50, 250 70 Odolena Voda (Czech Republic); Slama, Peter [Research Institute for Metals, Ltd., Panenske Brezany 50, 250 70 Odolena Voda (Czech Republic)
2009-10-15
In microscopic studies of materials, there is often a need to provide a statistical test as to whether two microstructures are different or not. Typically, there are some random objects (particles, grains, pores) and the comparison concerns their density, individual geometrical parameters and their spatial distribution. The problem is that neighbouring objects observed in a single window cannot be assumed to be stochastically independent, therefore classical statistical testing based on random sampling is not applicable. The aim of the present paper is to develop a test based on N-distances in probability theory. Using the measurements from a few independent windows, we consider a two-sample test, which involves a large amount of information collected from each window. An application is presented consisting in a comparison of metallographic samples of aluminium alloys, and the results are interpreted.
Descriptive and inferential statistical methods used in burns research.
Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars
2010-05-01
Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11 (22%) were randomised controlled trials, 18 (35%) were cohort studies, 11 (22%) were case control studies and 11 (22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49 (96%) articles. Data dispersion was calculated by standard deviation in 30 (59%). Standard error of the mean was quoted in 19 (37%). The statistical software product was named in 33 (65%). Of the 49 articles that used inferential statistics, the tests were named in 47 (96%). The 6 most common tests used (Student's t-test (53%), analysis of variance/covariance (33%), chi-squared test (27%), Wilcoxon and Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43 (88%) and the exact significance levels were reported in 28 (57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals
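All of the front-runner tests in this survey are single calls in standard statistical software. A minimal Python/scipy sketch on made-up toy data (the arrays and the 2x2 table are placeholders, not data from the study):

    import numpy as np
    from scipy import stats

    a = np.array([4.1, 5.0, 6.2, 5.5, 4.8])  # toy measurements, group 1
    b = np.array([5.9, 6.4, 7.1, 6.0, 6.8])  # toy measurements, group 2
    c = np.array([5.2, 4.9, 5.8, 6.1, 5.0])  # toy measurements, group 3
    table = np.array([[12, 8], [5, 15]])     # toy 2x2 contingency table

    print(stats.ttest_ind(a, b))             # Student's t-test
    print(stats.f_oneway(a, b, c))           # one-way analysis of variance
    print(stats.chi2_contingency(table)[1])  # chi-squared test p-value
    print(stats.mannwhitneyu(a, b))          # Mann-Whitney U test
    print(stats.fisher_exact(table))         # Fisher's exact test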
GWAPower: a statistical power calculation software for genome-wide association studies with quantitative traits
Feng, Sheng; Wang, Shengchu; Chen, Chia-Cheng; Lan, Lan
2011-01-21
In designing genome-wide association (GWA) studies it is important to calculate statistical power. General statistical power calculation procedures for quantitative measures often require information concerning summary statistics of distributions, such as the mean and variance. With genetic studies, however, the effect size of quantitative traits is traditionally expressed as heritability, a quantity defined as the proportion of phenotypic variation in the population that can be ascribed to the genetic variants among individuals. Heritability is hard to transform into summary statistics, so general power calculation procedures cannot be used directly in GWA studies. The development of appropriate statistical methods and a user-friendly software package to address this problem would be welcome. This paper presents GWAPower, a statistical software package for power calculation in GWA studies with quantitative traits, where the genetic effect is defined as heritability. Based on several popular one-degree-of-freedom genetic models, the method avoids the need to specify the non-centrality parameter of the F-distribution under the alternative hypothesis and can therefore use heritability information directly, without approximation. In GWAPower, the power calculation can easily be adjusted for covariates and linkage disequilibrium information. An example is provided to illustrate GWAPower, followed by discussion. GWAPower is a user-friendly free software package for calculating statistical power based on heritability in GWA studies with quantitative traits. The software is freely available at: http://dl.dropbox.com/u/10502931/GWAPower.zip.
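For contrast, the conventional approximation that GWAPower is designed to avoid can be sketched in a few lines: treat the one-degree-of-freedom association test as chi-squared with non-centrality parameter n*h2/(1 - h2). This is a generic textbook sketch, not code from the GWAPower package.

    from scipy import stats

    def approx_gwa_power(n, h2, alpha=5e-8):
        # Approximate power of a 1-df test for a variant explaining a
        # fraction h2 of phenotypic variance in a sample of n individuals.
        ncp = n * h2 / (1.0 - h2)                 # non-centrality parameter
        crit = stats.chi2.ppf(1.0 - alpha, df=1)  # genome-wide significance threshold
        return stats.ncx2.sf(crit, df=1, nc=ncp)  # P(reject | alternative)

    print(approx_gwa_power(n=5000, h2=0.005))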
Comparative study of methods on outlying data detection in experimental results
International Nuclear Information System (INIS)
Oliveira, P.M.S.; Munita, C.S.; Hazenfratz, R.
2009-01-01
The interpretation of experimental results through multivariate statistical methods may reveal the existence of outliers, which is rarely taken into account by analysts. Their presence, however, can influence the interpretation of the results, leading to false conclusions. This paper shows the importance of outlier determination for a database of 89 samples of ceramic fragments analyzed by neutron activation analysis. The results were submitted to five procedures for detecting outliers: Mahalanobis distance, cluster analysis, principal component analysis, factor analysis, and standardized residuals. The results showed that although cluster analysis is one of the procedures most often used to identify outliers, it can fail by not revealing samples that are easily identified as outliers by other methods. In general, statistical procedures for the identification of outliers are little known by analysts. (author)
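Of the five procedures compared, the Mahalanobis-distance screen is the simplest to state. A minimal sketch (Python; the chi-squared cutoff is a common convention, not necessarily the exact threshold used by the authors):

    import numpy as np
    from scipy import stats

    def mahalanobis_outliers(X, alpha=0.05):
        # Flag rows of X (n samples x p features) whose squared Mahalanobis
        # distance from the mean exceeds the chi-squared quantile with p df.
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        d = X - mu
        d2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)
        return d2 > stats.chi2.ppf(1 - alpha, df=X.shape[1])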
Statistical methods for data analysis in particle physics
Lista, Luca
2017-01-01
This concise set of course-based notes provides the reader with the main concepts and tools needed to perform statistical analyses of experimental data, in particular in the field of high-energy physics (HEP). First, the book provides an introduction to probability theory and basic statistics, mainly intended as a refresher from readers’ advanced undergraduate studies, but also to help them clearly distinguish between the Frequentist and Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on both discoveries and upper limits, as many applications in HEP concern hypothesis testing, where the main goal is often to provide better and better limits so as to eventually be able to distinguish between competing hypotheses, or to rule out some of them altogether. Many worked-out examples will help newcomers to the field and graduate students alike understand the pitfalls involved in applying theoretical co...
Fundamental data analyses for measurement control
International Nuclear Information System (INIS)
Campbell, K.; Barlich, G.L.; Fazal, B.; Strittmatter, R.B.
1987-02-01
A set of measurement control data analyses was selected for use by analysts responsible for maintaining the measurement quality of nuclear materials accounting instrumentation. The analyses consist of control charts for bias and precision and statistical tests used as analytic supplements to the control charts. They provide the desired detection sensitivity and yet can be interpreted locally, quickly, and easily. The control charts provide for visual inspection of data and enable an alert reviewer to spot problems, possibly before statistical tests detect them. The statistical tests are useful for automating the detection of departures from the controlled state or from the underlying assumptions (such as normality). 8 refs., 3 figs., 5 tabs
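The control-chart logic described here is straightforward to make concrete. The sketch below (Python) flags measurements outside the usual Shewhart limits of the target value plus or minus three standard deviations; it is a generic illustration, not the report's exact procedure.

    import numpy as np

    def control_chart_alarms(measurements, target, sigma, k=3.0):
        # Indices of points outside the target +/- k*sigma control limits.
        z = (np.asarray(measurements) - target) / sigma
        return np.flatnonzero(np.abs(z) > k)

    # Bias check against a known standard value of 10.0
    print(control_chart_alarms([10.1, 9.8, 10.4, 11.3, 9.9], target=10.0, sigma=0.3))
    # -> [3]: only the 11.3 reading breaches the 3-sigma limits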
AlMoghrabi, Nouran; Huijding, Jorg; Franken, Ingmar H A
2018-03-01
Cognitive theories of aggression propose that biased information processing is causally related to aggression. To test these ideas, the current study investigated the effects of a novel cognitive bias modification paradigm (CBM-I) designed to target interpretations associated with aggressive behavior. Participants aged 18-33 years were randomly assigned to either a single session of positive training (n = 40), aimed at increasing prosocial interpretations, or negative training (n = 40), aimed at increasing hostile interpretations. The results revealed that the positive training produced an increase in prosocial interpretations, while the negative training seemed to have no effect on interpretations. Importantly, in the positive condition, a positive change in interpretations was related to lower anger and verbal aggression scores after the training. In this condition, participants also reported an increase in happiness. In the negative training, no such effects were found. However, the better participants performed on the negative training, the more their interpretations changed in a negative direction and the more aggression they showed on the behavioral aggression task. Participants were healthy university students; results should therefore be confirmed within a clinical population. These findings provide support for the idea that this novel CBM-I paradigm can be used to modify interpretations, and suggest that these interpretations are related to mood and aggressive behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimization Model for Uncertain Statistics Based on an Analytic Hierarchy Process
Directory of Open Access Journals (Sweden)
Yongchao Hou
2014-01-01
Uncertain statistics is a methodology for collecting and interpreting an expert's experimental data by uncertainty theory. In order to estimate uncertainty distributions, an optimization model based on the analytic hierarchy process (AHP) and an interpolation method is proposed in this paper. In addition, the principle of the least squares method is presented to estimate uncertainty distributions with known functional form. Finally, the effectiveness of this method is illustrated by an example.
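For a distribution with known functional form, the least-squares step amounts to fitting the parametric form to the expert's data pairs (x_i, alpha_i). A minimal sketch (Python), assuming for illustration a two-parameter S-shaped form with location e and scale s; the expert data points are hypothetical:

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical expert data: alpha_i is the expert's degree of belief
    # that the quantity of interest does not exceed x_i.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    alpha = np.array([0.05, 0.25, 0.55, 0.80, 0.95])

    def phi(params, x):
        # Assumed S-shaped uncertainty distribution with location e, scale s.
        e, s = params
        return 1.0 / (1.0 + np.exp((e - x) / s))

    fit = least_squares(lambda p: phi(p, x) - alpha, x0=[3.0, 1.0])
    print(fit.x)  # fitted (e, s) minimizing the sum of squared deviations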
DEFF Research Database (Denmark)
Eslamimanesh, Ali; Gharagheizi, Farhad; Mohammadi, Amir H.
2012-01-01
We, herein, present a statistical method for diagnostics of the outliers in phase equilibrium data (dissociation data) of simple clathrate hydrates. The applied algorithm is performed on the basis of the Leverage mathematical approach, in which the statistical Hat matrix, Williams Plot, and the residuals of the applied correlation are the main elements. A correlation in exponential form is used to represent/predict the hydrate dissociation pressures for three-phase equilibrium conditions (liquid water/ice-vapor-hydrate). The investigated hydrate formers are methane, ethane, propane, carbon dioxide, nitrogen, and hydrogen sulfide. It is interpreted from the obtained results...
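The Leverage diagnostics named in this record reduce to a few linear-algebra steps: build the Hat matrix H = X(X^T X)^{-1} X^T, take its diagonal as the leverages, and flag points whose leverage exceeds the warning value 3(p + 1)/n or whose standardized residual falls outside plus or minus 3, as in a Williams plot. A generic sketch (Python), not the authors' code:

    import numpy as np

    def williams_plot_data(X, residuals, resid_cut=3.0):
        # Leverages and standardized residuals for a Williams plot.
        # X: (n, p) design matrix; residuals: model deviations per point.
        n, p = X.shape
        H = X @ np.linalg.inv(X.T @ X) @ X.T  # Hat matrix
        leverage = np.diag(H)
        r = np.asarray(residuals)
        std_resid = r / r.std(ddof=p)
        warning = 3.0 * (p + 1) / n           # leverage warning value
        suspect = (leverage > warning) | (np.abs(std_resid) > resid_cut)
        return leverage, std_resid, suspect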
Understanding Statistics - Cancer Statistics
Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.
Link, J; Pachaly, J
1975-08-01
In a retrospective 18-month study, the infusion therapy administered at a large anesthesia institute was examined. For this purpose, the anesthesia course data routinely recorded on magnetic tape were analysed by computer with the statistical program SPSS. It could be shown that the practice of individual anesthetists differs considerably. Various correlations are discussed.
18 CFR 385.1901 - Interpretations and interpretative rules under the NGPA (Rule 1901).
2010-04-01
... to requests for interpretations to prospective, existing or completed facts, acts, or transactions..., knowledge, and belief there is no untrue statement of a material or relevant fact and there is no omission... misrepresented or omitted or if any material or relevant fact changes after an interpretation is issued or if the...
Linguistics in Text Interpretation
DEFF Research Database (Denmark)
Togeby, Ole
2011-01-01
A model for how text interpretation proceeds from what is pronounced, through what is said, to what is communicated, with definitions of the concepts 'presupposition' and 'implicature'.