WorldWideScience

Sample records for preliminary statistical analyses

  1. A preliminary study of the statistical analyses and sampling strategies associated with the integration of remote sensing capabilities into the current agricultural crop forecasting system

    Science.gov (United States)

    Sand, F.; Christie, R.

    1975-01-01

    Extending the crop survey application of remote sensing from small experimental regions to state and national levels requires that a sample of agricultural fields be chosen for remote sensing of crop acreage, and that a statistical estimate be formulated with measurable characteristics. The critical requirements for the success of the application are reviewed in this report. The problem of sampling in the presence of cloud cover is discussed. Integration of remotely sensed information about crops into current agricultural crop forecasting systems is treated on the basis of the USDA multiple frame survey concepts, with an assumed addition of a new frame derived from remote sensing. Evolution of a crop forecasting system which utilizes LANDSAT and future remote sensing systems is projected for the 1975-1990 time frame.

  2. Statistical analyses in disease surveillance systems.

    Science.gov (United States)

    Lescano, Andres G; Larasati, Ria Purwita; Sedyaningsih, Endang R; Bounlu, Khanthong; Araujo-Castillo, Roger V; Munayco-Escate, Cesar V; Soto, Giselle; Mundaca, C Cecilia; Blazes, David L

    2008-11-14

    The performance of disease surveillance systems is evaluated and monitored using a diverse set of statistical analyses throughout each stage of surveillance implementation. An overview of their main elements is presented, with a specific emphasis on syndromic surveillance directed to outbreak detection in resource-limited settings. Statistical analyses are proposed for three implementation stages: planning, early implementation, and consolidation. Data sources and collection procedures are described for each analysis. During the planning and pilot stages, we propose to estimate the average data collection, data entry and data distribution time. This information can be collected by surveillance systems themselves or through specially designed surveys. During the initial implementation stage, epidemiologists should study the completeness and timeliness of the reporting, and describe thoroughly the population surveyed and the epidemiology of the health events recorded. Additional data collection processes or external data streams are often necessary to assess reporting completeness and other indicators. Once data collection processes are operating in a timely and stable manner, analyses of surveillance data should expand to establish baseline rates and detect aberrations. External investigations can be used to evaluate whether abnormally increased case frequency corresponds to a true outbreak, and thereby establish the sensitivity and specificity of aberration detection algorithms. Statistical methods for disease surveillance have focused mainly on the performance of outbreak detection algorithms without sufficient attention to data quality and representativeness, two factors that are especially important in developing countries. It is important to assess data quality at each stage of implementation using a diverse mix of data sources and analytical methods. Careful, close monitoring of selected indicators is needed to evaluate whether systems are reaching their

  3. Statistical analyses of a screen cylinder wake

    Science.gov (United States)

    Mohd Azmi, Azlin; Zhou, Tongming; Zhou, Yu; Cheng, Liang

    2017-02-01

    The evolution of a screen cylinder wake was studied by analysing its statistical properties over a streamwise range of x/d = 10-60. The screen cylinder was made of a stainless steel screen mesh of 67% porosity. The experiments were conducted in a wind tunnel at a Reynolds number of 7000 using an X-probe. The results were compared with those obtained in the wake generated by a solid cylinder. It was observed that the evolution of the statistics in the wake of the screen cylinder was different from that of a solid cylinder, reflecting the differences in the formation of the organized large-scale vortices in both wakes. The streamwise evolution of the Reynolds stresses, energy spectra and cross-correlation coefficients indicated that there exists a critical location that differentiates the screen cylinder wake into two regions over the measured streamwise range. The formation of the fully formed large-scale vortices was delayed until this critical location. Comparison with existing results for screen strips showed that although the near-wake characteristics and the vortex formation mechanism were similar between the two wake generators, variation in the Strouhal frequencies was observed and the self-preservation states were non-universal, reconfirming the dependence of a wake on its initial condition.
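
    A rough illustration of the statistics named above: the sketch below computes Reynolds stresses, an energy spectrum and the u-v cross-correlation coefficient from synthetic two-component velocity records. The sampling rate, shedding frequency and noise levels are invented stand-ins for X-probe data, not values from the study.

      # Hedged sketch: wake statistics from synthetic velocity fluctuations.
      import numpy as np
      from scipy.signal import welch

      rng = np.random.default_rng(12)
      fs = 5000.0                                  # assumed sampling rate, Hz
      t = np.arange(0, 2, 1 / fs)
      shed = np.sin(2 * np.pi * 35.0 * t)          # stand-in shedding at 35 Hz
      u = 0.5 * shed + rng.normal(scale=0.2, size=t.size)
      v = 0.3 * np.roll(shed, 10) + rng.normal(scale=0.2, size=t.size)
      u -= u.mean()                                # work with fluctuations
      v -= v.mean()

      # Reynolds stresses are the fluctuation covariances
      uu, vv, uv = np.mean(u * u), np.mean(v * v), np.mean(u * v)
      rho_uv = uv / np.sqrt(uu * vv)               # cross-correlation coefficient

      # Energy spectrum of the streamwise component
      f, Puu = welch(u, fs=fs, nperseg=1024)
      print(f"u'u'={uu:.3f}, v'v'={vv:.3f}, u'v'={uv:.3f}, rho_uv={rho_uv:.3f}")
      print(f"spectral peak near {f[np.argmax(Puu)]:.1f} Hz")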

  4. Applied statistics a handbook of BMDP analyses

    CERN Document Server

    Snell, E J

    1987-01-01

    This handbook is a realization of a long-term goal of BMDP Statistical Software. As the software supporting statistical analysis has grown in breadth and depth to the point where it can serve many of the needs of accomplished statisticians, it can also serve as an essential support to those needing to expand their knowledge of statistical applications. Statisticians should not be handicapped by heavy computation or by the lack of needed options. When Applied Statistics: Principles and Examples by Cox and Snell appeared, we at BMDP were impressed with the scope of the applications discussed and felt that many statisticians eager to expand their capabilities in handling such problems could profit from having the solutions carried further, to get them started and guided to a more advanced level in problem solving. Who would be better to undertake that task than the authors of Applied Statistics? A year or two later discussions with David Cox and Joyce Snell at Imperial College indicated that a wedding of the proble...

  5. System Synthesis in Preliminary Aircraft Design Using Statistical Methods

    Science.gov (United States)

    DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.

    1996-01-01

    This paper documents an approach to conceptual and early preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically Design of Experiments (DOE) and Response Surface Methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than generally available in early design stages. Next, a regression equation for an Overall Evaluation Criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a High Speed Civil Transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
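
    A minimal sketch of the DOE/RSM step described above, assuming a two-factor face-centred design and a synthetic response in place of the paper's aerodynamic analysis outputs: fit a quadratic regression polynomial to the design points, then search the cheap surrogate rather than the expensive analysis.

      # Hedged sketch: quadratic response surface over a 3x3 design.
      import numpy as np

      x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
      x1, x2 = x1.ravel(), x2.ravel()

      # Synthetic response standing in for a high-fidelity analysis output
      rng = np.random.default_rng(0)
      y = 3.0 + 0.8 * x1 - 1.2 * x2 + 0.5 * x1 * x2 + 0.9 * x1**2 + 0.4 * x2**2
      y = y + rng.normal(scale=0.05, size=y.size)

      # y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
      X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)

      # Optimize on the fitted surface instead of re-running the analysis
      g1, g2 = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
      G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                           (g1 * g2).ravel(), g1.ravel()**2, g2.ravel()**2])
      i = np.argmin(G @ coef)
      print("surrogate optimum at x1=%.2f, x2=%.2f" % (g1.ravel()[i], g2.ravel()[i]))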

  6. SWORDS: A statistical tool for analysing large DNA sequences

    Indian Academy of Sciences (India)

    Probal Chaudhuri; Sandip Das

    2002-02-01

    In this article, we present some simple yet effective statistical techniques for analysing and comparing large DNA sequences. These techniques are based on frequency distributions of DNA words in a large sequence, and have been packaged into a software tool called SWORDS. Using sequences available in public domain databases hosted on the Internet, we demonstrate how SWORDS can be conveniently used by molecular biologists and geneticists to unmask biologically important features hidden in large sequences and assess their statistical significance.
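
    The word-frequency idea behind SWORDS can be illustrated in a few lines. This is a hedged sketch of the general technique (overlapping k-mer counts compared against an independent-bases expectation), not SWORDS itself, and the example sequence is arbitrary.

      # Hedged sketch: DNA word counts vs. an i.i.d.-base expectation.
      from collections import Counter
      import math

      def word_counts(seq, k):
          """Count overlapping k-letter words in a DNA sequence."""
          seq = seq.upper()
          return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

      def expected_count(word, seq_len, base_freq):
          """Expected occurrences of `word` under independent bases."""
          p = math.prod(base_freq[b] for b in word)
          return (seq_len - len(word) + 1) * p

      seq = "ATGCGATACGCTTGAGCGATCGATCGGGATATATCGCGAT"
      counts = word_counts(seq, k=3)
      base_freq = {b: seq.count(b) / len(seq) for b in "ACGT"}
      for word, obs in counts.most_common(5):
          exp = expected_count(word, len(seq), base_freq)
          print(f"{word}: observed {obs}, expected {exp:.2f}")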

  7. Statistical analyses in the physiology of exercise and kinanthropometry.

    Science.gov (United States)

    Winter, E M; Eston, R G; Lamb, K L

    2001-10-01

    Research into the physiology of exercise and kinanthropometry is intended to improve our understanding of how the body responds and adapts to exercise. If such studies are to be meaningful, they have to be well designed and analysed. Advances in personal computing have made available statistical analyses that were previously the preserve of elaborate mainframe systems and have increased opportunities for investigation. However, the ease with which analyses can be performed can mask underlying philosophical and epistemological shortcomings. The aim of this review is to examine the use of four techniques that are especially relevant to physiological studies: (1) bivariate correlation and linear and non-linear regression, (2) multiple regression, (3) repeated-measures analysis of variance and (4) multi-level modelling. The importance of adhering to underlying statistical assumptions is emphasized and ways to accommodate violations of these assumptions are identified.

  8. Statistical technique for analysing functional connectivity of multiple spike trains.

    Science.gov (United States)

    Masud, Mohammad Shahed; Borisyuk, Roman

    2011-03-15

    A new statistical technique, the Cox method, used for analysing functional connectivity of simultaneously recorded multiple spike trains is presented. This method is based on the theory of modulated renewal processes and it estimates a vector of influence strengths from multiple spike trains (called reference trains) to the selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. In order to study functional connectivity an "influence function" is identified. This function recognises the specificity of neuronal interactions and reflects the dynamics of postsynaptic potential. In comparison to existing techniques, the Cox method has the following advantages: it does not use bins (binless method); it is applicable to cases where the sample size is small; it is sufficiently sensitive to estimate weak influences; it supports the simultaneous analysis of multiple influences; it is able to identify a correct connectivity scheme in difficult cases of "common source" or "indirect" connectivity. The Cox method has been thoroughly tested using multiple sets of data generated by a neural network model of leaky integrate-and-fire neurons with a prescribed architecture of connections. The results suggest that this method is highly successful for analysing functional connectivity of simultaneously recorded multiple spike trains.

  9. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
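
    For one of the simpler cases in this family, power for a two-sided test of a Pearson correlation can be approximated with the Fisher z transform. The sketch below is a generic calculation of the kind G*Power automates, not G*Power's own code.

      # Hedged sketch: power of a two-sided test that a correlation is zero,
      # using z = atanh(r), approximately normal with SD 1/sqrt(n - 3).
      import math
      from scipy.stats import norm

      def correlation_power(r, n, alpha=0.05):
          """Approximate power to detect population correlation r with n pairs."""
          z_crit = norm.ppf(1 - alpha / 2)
          delta = math.atanh(r) * math.sqrt(n - 3)  # shift under H1
          return norm.sf(z_crit - delta) + norm.cdf(-z_crit - delta)

      for n in (20, 50, 100):
          print(n, round(correlation_power(0.3, n), 3))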

  10. Statistical analyses support power law distributions found in neuronal avalanches.

    Directory of Open Access Journals (Sweden)

    Andreas Klaus

    Full Text Available The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.

  11. Statistical analyses support power law distributions found in neuronal avalanches.

    Science.gov (United States)

    Klaus, Andreas; Yu, Shan; Plenz, Dietmar

    2011-01-01

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
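
    The estimation and goodness-of-fit steps have compact forms. The following sketch applies the standard continuous-data recipe (maximum likelihood exponent plus Kolmogorov-Smirnov distance, in the spirit of Clauset, Shalizi and Newman) to synthetic avalanche sizes drawn with exponent -1.5; it illustrates the technique, not the authors' code.

      # Hedged sketch: power-law MLE and KS distance for sizes s >= s_min.
      import numpy as np

      def fit_power_law(sizes, s_min):
          """Continuous MLE: alpha_hat = 1 + n / sum(log(s / s_min))."""
          s = np.asarray(sizes, dtype=float)
          s = s[s >= s_min]
          alpha = 1.0 + s.size / np.sum(np.log(s / s_min))
          # KS distance between empirical and fitted power-law CDFs
          s_sorted = np.sort(s)
          emp_cdf = np.arange(1, s.size + 1) / s.size
          fit_cdf = 1.0 - (s_sorted / s_min) ** (1.0 - alpha)
          return alpha, np.max(np.abs(emp_cdf - fit_cdf))

      # Synthetic avalanche sizes from P(s) ~ s^(-1.5), s >= 1 (inverse CDF)
      rng = np.random.default_rng(1)
      sizes = (1.0 - rng.uniform(size=20000)) ** (-2.0)
      alpha_hat, ks = fit_power_law(sizes, s_min=1.0)
      print(f"alpha_hat = {alpha_hat:.3f}, KS distance = {ks:.4f}")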

  12. A weighted U statistic for association analyses considering genetic heterogeneity.

    Science.gov (United States)

    Wei, Changshuai; Elston, Robert C; Lu, Qing

    2016-07-20

    Converging evidence suggests that common complex diseases with the same or similar clinical manifestations could have different underlying genetic etiologies. While current research interests have shifted toward uncovering rare variants and structural variations predisposing to human diseases, the impact of heterogeneity in genetic studies of complex diseases has been largely overlooked. Most of the existing statistical methods assume the disease under investigation has a homogeneous genetic effect and could, therefore, have low power if the disease undergoes heterogeneous pathophysiological and etiological processes. In this paper, we propose a heterogeneity-weighted U (HWU) method for association analyses considering genetic heterogeneity. HWU can be applied to various types of phenotypes (e.g., binary and continuous) and is computationally efficient for high-dimensional genetic data. Through simulations, we showed the advantage of HWU when the underlying genetic etiology of a disease was heterogeneous, as well as the robustness of HWU against different model assumptions (e.g., phenotype distributions). Using HWU, we conducted a genome-wide analysis of nicotine dependence from the Study of Addiction: Genetics and Environments dataset. The genome-wide analysis of nearly one million genetic markers took 7 h, identifying heterogeneous effects of two new genes (i.e., CYP3A5 and IKBKB) on nicotine dependence. Copyright © 2016 John Wiley & Sons, Ltd.
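
    The general shape of such a statistic can be sketched as a pairwise phenotype-similarity kernel weighted by genetic similarity. The skeleton below is schematic: the similarity and weighting choices are illustrative assumptions, not the published HWU estimator.

      # Schematic weighted U-statistic:
      # U = sum_{i != j} w_ij * K(y_i, y_j) / normalizer, with w_ij a genetic
      # similarity between subjects and K a phenotype similarity kernel.
      import numpy as np

      def weighted_u(genotypes, phenotypes):
          g = np.asarray(genotypes, dtype=float)
          y = np.asarray(phenotypes, dtype=float)
          gc = g - g.mean(axis=0)
          w = gc @ gc.T                              # genetic similarity (one choice)
          k = -(y[:, None] - y[None, :]) ** 2        # phenotype similarity kernel
          mask = ~np.eye(len(y), dtype=bool)         # exclude i == j pairs
          return np.sum(w[mask] * k[mask]) / np.sum(np.abs(w[mask]))

      rng = np.random.default_rng(2)
      g = rng.integers(0, 3, size=(200, 50))         # 200 subjects, 50 SNPs
      y = g[:, 0] * 0.5 + rng.normal(size=200)       # phenotype driven by SNP 0
      print(round(weighted_u(g, y), 4))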

  13. Non-Statistical Methods of Analysing of Bankruptcy Risk

    Directory of Open Access Journals (Sweden)

    Pisula Tomasz

    2015-06-01

    Full Text Available The article focuses on assessing the effectiveness of a non-statistical approach to bankruptcy modelling in enterprises operating in the logistics sector. In order to describe the issue more comprehensively, the aforementioned prediction of the possible negative results of business operations was carried out for companies functioning in the Polish region of Podkarpacie, and in Slovakia. The bankruptcy predictors selected for the assessment of companies operating in the logistics sector included 28 financial indicators characterizing these enterprises in terms of their financial standing and management effectiveness. The purpose of the study was to identify factors (models) describing the bankruptcy risk in enterprises in the context of their forecasting effectiveness over one-year and two-year time horizons. In order to assess their practical applicability, the models were carefully analysed and validated. The usefulness of the models was assessed in terms of their classification properties, the capacity to accurately identify enterprises at risk of bankruptcy and healthy companies, as well as proper calibration of the models to the data from training sample sets.

  14. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses

    Science.gov (United States)

    Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy

    2015-01-01

    Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically

  15. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses.

    Directory of Open Access Journals (Sweden)

    Md Shamsuzzoha Bayzid

    Full Text Available Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also

  16. Statistical analyses for NANOGrav 5-year timing residuals

    Science.gov (United States)

    Wang, Yan; Cordes, James M.; Jenet, Fredrick A.; Chatterjee, Shami; Demorest, Paul B.; Dolch, Timothy; Ellis, Justin A.; Lam, Michael T.; Madison, Dustin R.; McLaughlin, Maura A.; Perrodin, Delphine; Rankin, Joanna; Siemens, Xavier; Vallisneri, Michele

    2017-02-01

    In pulsar timing, timing residuals are the differences between the observed times of arrival and predictions from the timing model. A comprehensive timing model will produce featureless residuals, which are presumably composed of dominating noise and weak physical effects excluded from the timing model (e.g. gravitational waves). In order to apply optimal statistical methods for detecting weak gravitational wave signals, we need to know the statistical properties of noise components in the residuals. In this paper we utilize a variety of non-parametric statistical tests to analyze the whiteness and Gaussianity of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) 5-year timing data, which are obtained from Arecibo Observatory and Green Bank Telescope from 2005 to 2010. We find that most of the data are consistent with white noise; many data deviate from Gaussianity at different levels; nevertheless, removing outliers in some pulsars will mitigate the deviations.
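
    Two of the simpler checks in this battery are easy to reproduce generically. The sketch below applies D'Agostino's normality test and a hand-rolled Wald-Wolfowitz runs test to a synthetic residual series; these test choices are assumptions for illustration, and the paper uses a wider variety of non-parametric tests.

      # Hedged sketch: Gaussianity and whiteness checks on a residual series.
      import numpy as np
      from scipy.stats import normaltest, norm

      def runs_test(x):
          """Runs test about the median; returns z and two-sided p-value."""
          x = np.asarray(x)
          signs = x > np.median(x)
          n1, n2 = signs.sum(), (~signs).sum()
          runs = 1 + np.sum(signs[1:] != signs[:-1])
          mu = 1 + 2 * n1 * n2 / (n1 + n2)
          var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
                 / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
          z = (runs - mu) / np.sqrt(var)
          return z, 2 * norm.sf(abs(z))

      rng = np.random.default_rng(3)
      residuals = rng.normal(size=500)       # stand-in for timing residuals
      stat, p_gauss = normaltest(residuals)  # D'Agostino K^2 test
      z, p_white = runs_test(residuals)
      print(f"Gaussianity p = {p_gauss:.3f}, whiteness (runs) p = {p_white:.3f}")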

  17. Statistical Analyses for NANOGrav 5-year Timing Residuals

    CERN Document Server

    Wang, Y; Cordes, J M; Jenet, F A; Chatterjee, S; Demorest, P B; Dolch, T; Ellis, J A; Lam, M T; Madison, D R; McLaughlin, M; Perrodin, D; Rankin, J; Siemens, X; Vallisneri, M

    2016-01-01

    In pulsar timing, timing residuals are the differences between the observed times of arrival and the predictions from the timing model. A comprehensive timing model will produce featureless residuals, which are presumably composed of dominating noise and weak physical effects excluded from the timing model (e.g. gravitational waves). In order to apply the optimal statistical methods for detecting the weak gravitational wave signals, we need to know the statistical properties of the noise components in the residuals. In this paper we utilize a variety of non-parametric statistical tests to analyze the whiteness and Gaussianity of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) 5-year timing data which are obtained from the Arecibo Observatory and the Green Bank Telescope from 2005 to 2010 (Demorest et al. 2013). We find that most of the data are consistent with white noise; many data deviate from Gaussianity at different levels; nevertheless, removing outliers in some pulsars will m...

  18. Feminism and Factoral Analyses: Alleviating Students' Statistics Anxieties.

    Science.gov (United States)

    Nielsen, Linda

    1979-01-01

    Describes female math underachievement and counseling or teaching techniques being implemented on college campuses to alleviate math anxiety. Students derive unique benefits from being taught research and statistics courses by female professors. To be effective, female professors must embody distinct feminist roles and perspectives. Specific…

  19. Systematic Mapping and Statistical Analyses of Valley Landform and Vegetation Asymmetries Across Hydroclimatic Gradients

    Science.gov (United States)

    Poulos, M. J.; Pierce, J. L.; McNamara, J. P.; Flores, A. N.; Benner, S. G.

    2015-12-01

    Terrain aspect alters the spatial distribution of insolation across topography, driving eco-pedo-hydro-geomorphic feedbacks that can alter landform evolution and result in valley asymmetries for a suite of land surface characteristics (e.g. slope length and steepness, vegetation, soil properties, and drainage development). Asymmetric valleys serve as natural laboratories for studying how landscapes respond to climate perturbation. In the semi-arid montane granodioritic terrain of the Idaho batholith, Northern Rocky Mountains, USA, prior works indicate that reduced insolation on northern (pole-facing) aspects prolongs snow pack persistence, and is associated with thicker, finer-grained soils, that retain more water, prolong the growing season, support coniferous forest rather than sagebrush steppe ecosystems, stabilize slopes at steeper angles, and produce sparser drainage networks. We hypothesize that the primary drivers of valley asymmetry development are changes in the pedon-scale water-balance that coalesce to alter catchment-scale runoff and drainage development, and ultimately cause the divide between north and south-facing land surfaces to migrate northward. We explore this conceptual framework by coupling land surface analyses with statistical modeling to assess relationships and the relative importance of land surface characteristics. Throughout the Idaho batholith, we systematically mapped and tabulated various statistical measures of landforms, land cover, and hydroclimate within discrete valley segments (n ≈ 10,000). We developed a random forest based statistical model to predict valley slope asymmetry based upon numerous measures (n > 300) of landscape asymmetries. Preliminary results suggest that drainages are tightly coupled with hillslopes throughout the region, with drainage-network slope being one of the strongest predictors of land-surface-averaged slope asymmetry. When slope-related statistics are excluded, due to possible autocorrelation, valley
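
    A stripped-down version of such a random-forest step is sketched below with three synthetic stand-in predictors. The predictor names and data are invented for illustration; the real model used hundreds of asymmetry measures over roughly ten thousand valley segments.

      # Hedged sketch: random-forest regression with feature importances.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(4)
      n = 1000
      X = np.column_stack([
          rng.normal(size=n),   # e.g. drainage-network slope asymmetry
          rng.normal(size=n),   # e.g. vegetation-cover asymmetry
          rng.normal(size=n),   # e.g. insolation contrast
      ])
      # Synthetic response: slope asymmetry dominated by the first predictor
      y = 0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.3, size=n)

      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
      for name, imp in zip(["drainage_slope", "vegetation", "insolation"],
                           model.feature_importances_):
          print(f"{name}: importance {imp:.3f}")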

  20. Statistical analyses of hydrophobic interactions: A mini-review

    CERN Document Server

    Pratt, L R; Rempe, Susan B

    2016-01-01

    This review focuses on the striking recent progress in solving for hydrophobic interactions between small inert molecules. We discuss several new understandings. Firstly, the inverse temperature phenomenology of hydrophobic interactions, i.e., strengthening of hydrophobic bonds with increasing temperature, is decisively exhibited by hydrophobic interactions between atomic-scale hard sphere solutes in water. Secondly, inclusion of attractive interactions associated with atomic-size hydrophobic reference cases leads to substantial, non-trivial corrections to reference results for purely repulsive solutes. Hydrophobic bonds are weakened by adding solute dispersion forces to treatment of reference cases. The classic statistical mechanical theory for those corrections is not accurate in this application, but molecular quasi-chemical theory shows promise. Finally, because of the masking roles of excluded volume and attractive interactions, comparisons that do not discriminate the different possibilities face an...

  1. Fluctuations of Lake Orta water levels: preliminary analyses

    Directory of Open Access Journals (Sweden)

    Helmi Saidi

    2016-04-01

    Full Text Available While the effects of past industrial pollution on the chemistry and biology of Lake Orta have been well documented, annual and seasonal fluctuations of lake levels have not yet been studied. Considering their potential impacts on both the ecosystem and on human safety, fluctuations in lake levels are an important aspect of limnological research. In the enormous catchment of Lake Maggiore, there are many rivers and lakes, and the amount of annual precipitation is both high and concentrated in spring and autumn. This has produced major flood events, most recently in November 2014. Flood events are also frequent on Lake Orta, occurring roughly triennially since 1917. The 1926, 1951, 1976 and 2014 floods were severe, with lake levels rising from 2.30 m to 3.46 m above the hydrometric zero. The most important event occurred in 1976, with a maximum level equal to 292.31 m asl and a return period of 147 years. In 2014 the lake level reached 291.89 m asl and its return period was 54 years. In this study, we defined trends and temporal fluctuations in Lake Orta water levels from 1917 to 2014, focusing on extremes. We report both annual maximum and seasonal variations of the lake water levels over this period. Both Mann-Kendall trend tests and simple linear regression were utilized to detect monotonic trends in annual and seasonal extremes, and logistic regression was used to detect trends in the number of flood events. Lake level decreased during winter and summer seasons, and a small but statistically non-significant positive trend was found in the number of flood events over the period. We provide estimations of return period for lake levels, a metric which could be used in planning lake flood protection measures.
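
    The two kinds of calculation named here, trend testing and return-period estimation, can be sketched generically as below. The Gumbel distribution and all numeric values are assumptions for illustration; the paper does not state its extreme-value model here.

      # Hedged sketch: Mann-Kendall trend test and Gumbel return levels
      # applied to synthetic annual-maximum lake levels.
      import numpy as np
      from scipy.stats import gumbel_r, norm

      def mann_kendall(x):
          """Mann-Kendall S with normal approximation (no tie correction)."""
          x = np.asarray(x)
          n = len(x)
          s = sum(np.sign(x[j] - x[i])
                  for i in range(n - 1) for j in range(i + 1, n))
          var_s = n * (n - 1) * (2 * n + 5) / 18
          z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
          return s, 2 * norm.sf(abs(z))

      annual_max = gumbel_r.rvs(loc=291.0, scale=0.35, size=98, random_state=6)

      s, p = mann_kendall(annual_max)
      loc, scale = gumbel_r.fit(annual_max)
      for T in (50, 100, 150):
          level = gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
          print(f"T = {T} yr: level = {level:.2f} m asl")
      print(f"Mann-Kendall S = {s}, p = {p:.3f}")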

  2. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  3. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    Energy Technology Data Exchange (ETDEWEB)

    Udey, Ruth Norma [Michigan State Univ., East Lansing, MI (United States)

    2013-01-01

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  4. Preliminary statistical assessment towards characterization of biobotic control.

    Science.gov (United States)

    Latif, Tahmid; Meng Yang; Lobaton, Edgar; Bozkurt, Alper

    2016-08-01

    Biobotic research involving neurostimulation of instrumented insects to control their locomotion shows potential as an alternative route to the development of centimeter-scale distributed swarm robotics. To improve the reliability of biobotic agents, their control mechanism needs to be precisely characterized. To achieve this goal, this paper presents our initial efforts toward a statistical assessment of the angular response of roach biobots to the applied bioelectrical stimulus. Subsequent findings can help in understanding the effect of each stimulation parameter, individually or collectively, and eventually in reaching reliable and consistent biobotic control suitable for real-life scenarios.

  5. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    Science.gov (United States)

    Piparo, D.; Schott, G.; Quast, G.

    2009-12-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination, and statistical studies, together with access to sophisticated graphics routines for results visualisation. The goal of the project is to complement the existing analyses by means of their combination and accurate statistical studies.

  6. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    CERN Document Server

    Piparo, D; Schott, G; Quast, G

    2008-01-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination, and statistical studies, together with access to sophisticated graphics routines for results visualisation. The goal of the project is to complement the existing analyses by means of their combination and accurate statistical studies.

  7. Retention of Statistical Concepts in a Preliminary Randomization-Based Introductory Statistics Curriculum

    Science.gov (United States)

    Tintle, Nathan; Topliff, Kylie; VanderStoep, Jill; Holmes, Vicki-Lynn; Swanson, Todd

    2012-01-01

    Previous research suggests that a randomization-based introductory statistics course may improve student learning compared to the consensus curriculum. However, it is unclear whether these gains are retained by students post-course. We compared the conceptual understanding of a cohort of students who took a randomization-based curriculum (n = 76)…

  8. SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit

    Directory of Open Access Journals (Sweden)

    Annie Chu

    2009-04-01

    Full Text Available The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been proven that these resources can successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a somewhat new component that concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, and one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Some examples of SOCR Analyses in the non-parametric category are the Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis testing models include the contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and the utilization of SOCR Analyses.

  9. Scripts for TRUMP data analyses. Part II (HLA-related data): statistical analyses specific for hematopoietic stem cell transplantation.

    Science.gov (United States)

    Kanda, Junya

    2016-01-01

    The Transplant Registry Unified Management Program (TRUMP) made it possible for members of the Japan Society for Hematopoietic Cell Transplantation (JSHCT) to analyze large sets of national registry data on autologous and allogeneic hematopoietic stem cell transplantation. However, as the processes used to collect transplantation information are complex and differed over time, the background of these processes should be understood when using TRUMP data. Previously, information on the HLA locus of patients and donors had been collected using a questionnaire-based free-description method, resulting in some input errors. To correct minor but significant errors and provide accurate HLA matching data, the use of a Stata or EZR/R script offered by the JSHCT is strongly recommended when analyzing HLA data in the TRUMP dataset. The HLA mismatch direction, mismatch counting method, and different impacts of HLA mismatches by stem cell source are other important factors in the analysis of HLA data. Additionally, researchers should understand the statistical analyses specific for hematopoietic stem cell transplantation, such as competing risk, landmark analysis, and time-dependent analysis, to correctly analyze transplant data. The data center of the JSHCT can be contacted if statistical assistance is required.

  10. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most
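
    Most of the tests on this menu have scipy.stats counterparts, so a quick cross-check against synthetic data looks like the sketch below. SOCR itself is a Java toolkit; this only mirrors its menu of analyses, not its implementation.

      # Hedged sketch: scipy.stats counterparts of several SOCR Analyses tests.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      a = rng.normal(0.0, 1.0, size=30)
      b = rng.normal(0.5, 1.0, size=30)
      c = rng.normal(0.0, 2.0, size=30)

      print("t-test:             p = %.3f" % stats.ttest_ind(a, b).pvalue)
      print("Wilcoxon rank sum:  p = %.3f" % stats.ranksums(a, b).pvalue)
      print("Kruskal-Wallis:     p = %.3f" % stats.kruskal(a, b, c).pvalue)
      print("Kolmogorov-Smirnov: p = %.3f" % stats.ks_2samp(a, c).pvalue)
      print("Fligner-Killeen:    p = %.3f" % stats.fligner(a, c).pvalue)
      table = [[8, 2], [1, 5]]                       # toy 2x2 contingency table
      print("Fisher exact:       p = %.3f" % stats.fisher_exact(table)[1])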

  11. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    DEFF Research Database (Denmark)

    Edjabou, Vincent Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona;

    2015-01-01

    from one municipality was sorted at "Level III", i.e. detailed, while the two others were sorted only at "Level I"). The results showed that residual household waste mainly contained food waste (42 +/- 5%, mass per wet basis) and miscellaneous combustibles (18 +/- 3%, mass per wet basis). The residual...... household waste generation rate in the study areas was 3-4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three

  12. A new statistical method for design and analyses of component tolerance

    Science.gov (United States)

    Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam

    2017-09-01

    Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often rely on known distributions, including the normal distribution. Yet, if the statistical distribution of the given variable is unknown, a new statistical method must be employed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerances. We use the percentile method (PM) to estimate the distribution parameters. The findings indicated that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerances. Moreover, in the case of assembled sets, more extensive tolerances for each component with the same target performance can be utilized.
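
    A hedged sketch of the distribution in question: the RS-parameterized generalized lambda quantile function, with a parameter set known to approximate the standard normal, used to read off percentile-based tolerance limits. The paper's percentile-matching estimator is not reproduced here.

      # Hedged sketch: RS generalized lambda distribution (GLD) quantiles.
      import numpy as np

      def gld_quantile(u, lam1, lam2, lam3, lam4):
          """RS GLD quantile: Q(u) = lam1 + (u^lam3 - (1-u)^lam4) / lam2."""
          u = np.asarray(u, dtype=float)
          return lam1 + (u**lam3 - (1.0 - u) ** lam4) / lam2

      # Classic parameter set approximating the standard normal
      lams = (0.0, 0.1975, 0.1349, 0.1349)

      # A 99% tolerance band for a component dimension, read off the quantiles
      lo, hi = gld_quantile([0.005, 0.995], *lams)
      print(f"tolerance interval: [{lo:.3f}, {hi:.3f}]")

      # Inverse-CDF sampling for a quick Monte Carlo check
      rng = np.random.default_rng(8)
      sample = gld_quantile(rng.uniform(size=100000), *lams)
      print("empirical 0.5%/99.5% points:", np.quantile(sample, [0.005, 0.995]))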

  13. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analyses include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
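
    The graded battery of tests (1)-(5) can be sketched on a single synthetic scatterplot as below. Levene's test stands in for the variance and interquartile-range measures, and the binning choices are illustrative assumptions rather than the paper's settings.

      # Hedged sketch: graded pattern tests on one synthetic scatterplot.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      x = rng.uniform(size=300)
      y = np.sin(3 * x) + rng.normal(scale=0.2, size=300)   # nonlinear pattern

      r, p_r = stats.pearsonr(x, y)                 # (1) linear relationship
      rho, p_rho = stats.spearmanr(x, y)            # (2) monotonic relationship
      bins = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
      groups = [y[bins == k] for k in range(5)]
      h, p_kw = stats.kruskal(*groups)              # (3) trend in central tendency
      f, p_var = stats.levene(*groups)              # (4) trend in variability
      # (5) deviation from randomness: chi-square on a 5x5 grid of counts
      ybins = np.digitize(y, np.quantile(y, [0.2, 0.4, 0.6, 0.8]))
      counts = np.histogram2d(bins, ybins, bins=5)[0]
      chi2, p_chi = stats.chi2_contingency(counts)[:2]

      for name, p in [("pearson", p_r), ("spearman", p_rho),
                      ("kruskal", p_kw), ("levene", p_var), ("chi2", p_chi)]:
          print(f"{name}: p = {p:.4f}")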

  14. Preliminary analyses of scenarios for potential human interference for repositories in three salt formations

    Energy Technology Data Exchange (ETDEWEB)

    1985-10-01

    Preliminary analyses of scenarios for human interference with the performance of a radioactive waste repository in a deep salt formation are presented. The following scenarios are analyzed: (1) the U-Tube Connection Scenario involving multiple connections between the repository and the overlying aquifer system; (2) the Single Borehole Intrusion Scenario involving penetration of the repository by an exploratory borehole that simultaneously connects the repository with overlying and underlying aquifers; and (3) the Pressure Release Scenario involving inflow of water to saturate any void space in the repository prior to creep closure with subsequent release under near lithostatic pressures following creep closure. The methodology to evaluate repository performance in these scenarios is described and this methodology is applied to reference systems in three candidate formations: bedded salt in the Palo Duro Basin, Texas; bedded salt in the Paradox Basin, Utah; and the Richton Salt Dome, Mississippi, of the Gulf Coast Salt Dome Basin.

  15. An assessment of recently published gene expression data analyses: reporting experimental design and statistical factors

    Directory of Open Access Journals (Sweden)

    Azuaje Francisco

    2006-06-01

    Full Text Available Abstract Background The analysis of large-scale gene expression data is a fundamental approach to functional genomics and the identification of potential drug targets. Results derived from such studies cannot be trusted unless they are adequately designed and reported. The purpose of this study is to assess current practices on the reporting of experimental design and statistical analyses in gene expression-based studies. Methods We reviewed hundreds of MEDLINE-indexed papers involving gene expression data analysis, which were published between 2003 and 2005. These papers were examined on the basis of their reporting of several factors, such as sample size, statistical power and software availability. Results Among the examined papers, we concentrated on 293 papers consisting of applications and new methodologies. These papers did not report approaches to sample size and statistical power estimation. Explicit statements on data transformation and descriptions of the normalisation techniques applied prior to data analyses (e.g. classification) were not reported in 57 (37.5%) and 104 (68.4%) of the methodology papers, respectively. With regard to papers presenting biomedical-relevant applications, 41 (29.1%) of these papers did not report on data normalisation and 83 (58.9%) did not describe the normalisation technique applied. Clustering-based analysis, the t-test and ANOVA represent the most widely applied techniques in microarray data analysis. But remarkably, only 5 (3.5%) of the application papers included statements or references to assumptions about variance homogeneity for the application of the t-test and ANOVA. There is still a need to promote the reporting of software packages applied or their availability. Conclusion Recently-published gene expression data analysis studies may lack key information required for properly assessing their design quality and potential impact. There is a need for more rigorous reporting of important experimental

  16. Preliminary Results of Ancillary Safety Analyses Supporting TREAT LEU Conversion Activities

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, A. J. [Argonne National Lab. (ANL), Argonne, IL (United States); Fei, T. [Argonne National Lab. (ANL), Argonne, IL (United States); Strons, P. S. [Argonne National Lab. (ANL), Argonne, IL (United States); Papadias, D. D. [Argonne National Lab. (ANL), Argonne, IL (United States); Hoffman, E. A. [Argonne National Lab. (ANL), Argonne, IL (United States); Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-10-01

    Report (FSAR) [3]. Depending on the availability of historical data derived from HEU TREAT operation, results calculated for the LEU core are compared to measurements obtained from HEU TREAT operation. While all analyses in this report are largely considered complete and have been reviewed for technical content, it is important to note that all topics will be revisited once the LEU design approaches its final stages of maturity. For most safety significant issues, it is expected that the analyses presented here will be bounding, but additional calculations will be performed as necessary to support safety analyses and safety documentation. It should also be noted that these analyses were completed as the LEU design evolved, and therefore utilized different LEU reference designs. Preliminary shielding, neutronic, and thermal hydraulic analyses have been completed and have generally demonstrated that the various LEU core designs will satisfy existing safety limits and standards also satisfied by the existing HEU core. These analyses include the assessment of the dose rate in the hodoscope room, near a loaded fuel transfer cask, above the fuel storage area, and near the HEPA filters. The potential change in the concentration of tramp uranium and change in neutron flux reaching instrumentation has also been assessed. Safety-significant thermal hydraulic items addressed in this report include thermally-induced mechanical distortion of the grid plate, and heating in the radial reflector.

  17. A review of statistical analyses on monthly and daily rainfall in Catalonia

    Directory of Open Access Journals (Sweden)

    X. Lana

    2009-01-01

    Full Text Available A review of recent studies of monthly and daily rainfall in Catalonia is presented. Monthly rainfall is analysed along the western Mediterranean coast and in Catalonia, quantifying aspects such as the irregularity of monthly amounts and the spatial distribution of the Standard Precipitation Index. Several statistics are applied to daily rainfall series, such as their extreme value and intra-annual spatial distributions, the variability of the average and standard deviation of rain amounts for each month, their amount and time distributions, and time trends affecting four pluviometric indices for different percentiles and class intervals. All these different analyses constitute the continuity of the scientific study of Catalan rainfall, which started about a century ago.

  18. The Statistical Analyses of the White-Light Flares: Two Main Results About Flare Behaviours

    CERN Document Server

    Dal, H A

    2012-01-01

    We present two main results, based on the models and the statistical analyses of 1672 U-band flares. We also discuss the behaviours of the white-light flares. In addition, the parameters of the flares detected from two years of observations of CR Dra are presented. By comparing with the flare parameters obtained from other UV Ceti type stars, we examine the behaviour of optical flare processes along the spectral types. Moreover, using the large white-light flare dataset, we aimed to analyse the flare time-scales with respect to some results obtained from X-ray observations. Using the SPSS V17.0 and the GraphPad Prism V5.02 software, the flares detected from CR Dra were modelled with the OPEA function and analysed with the t-Test method to compare similar flare events in other stars. In addition, using some regression calculations in order to derive the best histograms, the time-scales of the white-light flares were analysed. Firstly, CR Dra flares have revealed that the white-light flares behave in a similar way as th...

  19. Statistical analyses of digital collections: Using a large corpus of systematic reviews to study non-citations

    DEFF Research Database (Denmark)

    Frandsen, Tove Faber; Nicolaisen, Jeppe

    2017-01-01

    Using statistical methods to analyse digital material for patterns makes it possible to detect patterns in big data that we would otherwise not be able to detect. This paper seeks to exemplify this fact by statistically analysing a large corpus of references in systematic reviews. The aim...

  20. A weighted U-statistic for genetic association analyses of sequencing data.

    Science.gov (United States)

    Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing

    2014-12-01

    With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. The association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption of the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed a commonly used sequence kernel association test (SKAT) method when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL4 and very low density lipoprotein cholesterol.

  1. Comparisons of power of statistical methods for gene-environment interaction analyses.

    Science.gov (United States)

    Ege, Markus J; Strachan, David P

    2013-10-01

    Any genome-wide analysis is hampered by reduced statistical power due to multiple comparisons. This is particularly true for interaction analyses, which have lower statistical power than analyses of associations. To assess gene-environment interactions in population settings, we have recently proposed a statistical method based on a modified two-step approach, where first genetic loci are selected by their associations with disease and environment, respectively, and subsequently tested for interactions. We have simulated various data sets resembling real world scenarios and compared single-step and two-step approaches with respect to true positive rate (TPR) in 486 scenarios and (study-wide) false positive rate (FPR) in 252 scenarios. Our simulations confirmed that in all two-step methods the two steps are not correlated. In terms of TPR, two-step approaches combining information on gene-disease association and gene-environment association in the first step were superior to all other methods, while preserving a low FPR in over 250 million simulations under the null hypothesis. Our weighted modification yielded the highest power across various degrees of gene-environment association in the controls. An optimal threshold for step 1 depended on the interacting allele frequency and the disease prevalence. In all scenarios, the least powerful method was to proceed directly to an unbiased full interaction model, applying conventional genome-wide significance thresholds. This simulation study confirms the practical advantage of two-step approaches to interaction testing over more conventional one-step designs, at least in the context of dichotomous disease outcomes and other parameters that might apply in real-world settings.
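
    The two-step logic can be sketched compactly. The toy screen below uses crude correlation tests in place of the logistic models and weighting of the published method, and all thresholds, sample sizes and effect values are illustrative assumptions.

      # Schematic two-step gene-environment interaction screen (toy version).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(10)
      n, m = 2000, 500
      G = rng.binomial(2, 0.3, size=(n, m))            # genotypes
      E = rng.binomial(1, 0.5, size=n)                 # binary exposure
      logit = -1.0 + 0.8 * G[:, 0] * E                 # true interaction at marker 0
      D = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # disease status

      # Step 1: screen on marginal gene-disease / gene-environment association
      passed = []
      for j in range(m):
          p_gd = stats.pearsonr(G[:, j], D)[1]
          p_ge = stats.pearsonr(G[:, j], E)[1]
          if min(p_gd, p_ge) < 0.01:                   # illustrative threshold
              passed.append(j)

      # Step 2: interaction tests only for survivors, Bonferroni over the subset
      alpha2 = 0.05 / max(len(passed), 1)
      for j in passed:
          p_int = stats.pearsonr(G[:, j] * E, D)[1]    # crude interaction proxy
          if p_int < alpha2:
              print(f"marker {j}: interaction p = {p_int:.2e} (alpha = {alpha2:.1e})")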

  2. Statistical analyses on sandstones: Systematic approach for predicting petrographical and petrophysical properties

    Science.gov (United States)

    Stück, H. L.; Siegesmund, S.

    2012-04-01

    Sandstones are popular natural stones due to their wide occurrence and availability. The different applications for these stones have led to an increase in demand. From the viewpoint of conservation and the natural stone industry, an understanding of the material behaviour of this construction material is very important. Sandstone is a highly heterogeneous material. Based on statistical analyses with a sufficiently large dataset, a systematic approach to predicting the material behaviour should be possible. Since the literature already contains a large volume of data concerning the petrographical and petrophysical properties of sandstones, a large dataset could be compiled for the statistical analyses. The aim of this study is to develop constraints on the material behaviour and especially on the weathering behaviour of sandstones. Approximately 300 samples from historical and presently mined natural sandstones in Germany, as well as ones described worldwide, were included in the statistical approach. The mineralogical composition and fabric characteristics were determined from detailed thin section analyses and descriptions in the literature. Particular attention was paid to evaluating the compositional and textural maturity, grain contacts and contact thickness, type of cement, degree of alteration and the intergranular volume. Statistical methods were used to test for normal distributions and to calculate linear regressions of the basic petrophysical properties of density, porosity, water uptake as well as strength. The sandstones were classified into three different pore size distributions and evaluated against the other petrophysical properties. Weathering behaviour, such as hygric swelling and salt loading tests, was also included. To identify similarities between individual sandstones or to define groups of specific sandstone types, principal component analysis, cluster analysis and factor analysis were applied. Our results show that composition and porosity
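    The kind of workflow described (normality tests, linear regressions between petrophysical properties, PCA for grouping) might look like the following sketch; the property values are synthetic placeholders.

    ```python
    # Sketch of a petrophysical statistics workflow on synthetic data.
    import numpy as np
    from scipy import stats
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    porosity = rng.normal(15, 4, 300)                                 # vol.%
    density = 2.65 * (1 - porosity / 100) + rng.normal(0, 0.02, 300)  # g/cm3
    strength = 120 - 3.5 * porosity + rng.normal(0, 8, 300)           # MPa

    # Test for normal distribution (as done for the basic petrophysical properties).
    print("Shapiro-Wilk p (porosity):", stats.shapiro(porosity).pvalue)

    # Linear regression: strength vs. porosity.
    res = stats.linregress(porosity, strength)
    print(f"strength = {res.intercept:.1f} + {res.slope:.2f}*porosity, "
          f"r^2 = {res.rvalue**2:.2f}")

    # Principal component analysis to identify groups of similar sandstones.
    X = StandardScaler().fit_transform(np.column_stack([porosity, density, strength]))
    pca = PCA(n_components=2).fit(X)
    print("explained variance ratios:", pca.explained_variance_ratio_)
    ```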

  3. Statistical analyses to support guidelines for marine avian sampling. Final report

    Science.gov (United States)

    Kinlan, Brian P.; Zipkin, Elise; O'Connell, Allan F.; Caldow, Chris

    2012-01-01

    distribution to describe counts of a given species in a particular region and season. 4. Using a large database of historical at-sea seabird survey data, we applied this technique to identify appropriate statistical distributions for modeling a variety of species, allowing the distribution to vary by season. For each species and season, we used the selected distribution to calculate and map retrospective statistical power to detect hotspots and coldspots, and map p-values from Monte Carlo significance tests of hotspots and coldspots, in discrete lease blocks designated by the U.S. Department of the Interior, Bureau of Ocean Energy Management (BOEM). 5. Because our definition of hotspots and coldspots does not explicitly include variability over time, we examine the relationship between the temporal scale of sampling and the proportion of variance captured in time series of key environmental correlates of marine bird abundance, as well as available marine bird abundance time series, and use these analyses to develop recommendations for the temporal distribution of sampling to adequately represent both short-term and long-term variability. We conclude by presenting a schematic “decision tree” showing how this power analysis approach would fit in a general framework for avian survey design, and discuss implications of model assumptions and results. We discuss avenues for future development of this work, and recommendations for practical implementation in the context of siting and wildlife assessment for offshore renewable energy development projects.
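    A sketch of the approach described for one species/season, assuming a method-of-moments negative binomial fit for overdispersed counts and a parametric Monte Carlo test of a candidate hotspot block; all numbers are invented.

    ```python
    # Choose a count distribution, then Monte Carlo-test a candidate hotspot.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    counts = rng.negative_binomial(2, 0.1, size=5000)   # historical survey counts

    # Method-of-moments negative binomial fit (scipy: mean = n*(1-p)/p).
    m, v = counts.mean(), counts.var(ddof=1)
    p = m / v
    n = m * m / (v - m)
    print(f"fitted NB: n = {n:.2f}, p = {p:.3f} (overdispersion v/m = {v/m:.1f})")

    # Monte Carlo significance of a candidate hotspot: a block with k surveys
    # whose observed mean count is block_mean.
    k, block_mean = 25, 35.0
    sim_means = stats.nbinom.rvs(n, p, size=(20000, k), random_state=4).mean(axis=1)
    p_hot = (1 + np.sum(sim_means >= block_mean)) / (1 + len(sim_means))
    print("Monte Carlo hotspot p-value:", p_hot)
    ```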

  4. Harmonisation of variables names prior to conducting statistical analyses with multiple datasets: an automated approach

    Directory of Open Access Journals (Sweden)

    Bosch-Capblanch Xavier

    2011-05-01

    Background: Data requirements by governments, donors and the international community to measure health and development achievements have increased in the last decade. Datasets produced in surveys conducted in several countries and years are often combined to analyse time trends and geographical patterns of demographic and health-related indicators. However, since not all datasets have the same structure, variable definitions and codes, they have to be harmonised prior to submitting them to the statistical analyses. Manually searching, renaming and recoding variables are extremely tedious and error-prone tasks, especially when the numbers of datasets and variables are large. This article presents an automated approach to harmonise variable names across several datasets, which optimises the search of variables, minimises manual inputs and reduces the risk of error. Results: Three consecutive algorithms are applied iteratively to search for each variable of interest for the analyses in all datasets. The first search (A) captures particular cases that could not be solved in an automated way in the search iterations; the second search (B) is run if search A produced no hits and identifies variables the labels of which contain certain key terms defined by the user. If this search produces no hits, a third one (C) is run to retrieve variables which have been identified in other surveys, as an illustration. For each variable of interest, the outputs of these engines can be: (O1) a single best matching variable is found, (O2) more than one matching variable is found, or (O3) no matching variable is found. Output O2 is solved by user judgement. Examples using four variables are presented showing that the searches have a 100% sensitivity and specificity after a second iteration. Conclusion: Efficient and tested automated algorithms should be used to support the harmonisation process needed to analyse multiple datasets. This is especially relevant when
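    A minimal sketch of the three iterative searches (A, B, C) and outputs (O1-O3); the data structures, key terms and alias tables are hypothetical stand-ins for the article's survey datasets.

    ```python
    # Illustrative three-tier variable-name harmonisation across datasets.
    def harmonise(target, datasets, special_cases, key_terms, known_aliases):
        """Yield (dataset, output_code, matches) for one variable of interest."""
        for name, variables in datasets.items():
            # Search A: particular cases resolved manually beforehand.
            if (name, target) in special_cases:
                yield name, "O1", [special_cases[(name, target)]]
                continue
            # Search B: variables whose labels contain user-defined key terms.
            hits = [v for v, label in variables.items()
                    if all(t in label.lower() for t in key_terms[target])]
            # Search C: names already identified for this variable in other surveys.
            if not hits:
                hits = [v for v in variables if v in known_aliases[target]]
            if len(hits) == 1:
                yield name, "O1", hits          # single best match
            elif hits:
                yield name, "O2", hits          # several candidates: user judgement
            else:
                yield name, "O3", hits          # no match found

    datasets = {"survey_2005": {"v101": "age of respondent", "v102": "sex"},
                "survey_2010": {"q7": "respondent age in years", "q8": "sex"}}
    for ds, code, hits in harmonise("age", datasets, {}, {"age": ["age"]},
                                    {"age": {"v101"}}):
        print(ds, code, hits)
    ```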

  5. Harmonisation of variables names prior to conducting statistical analyses with multiple datasets: an automated approach

    Science.gov (United States)

    2011-01-01

    Background: Data requirements by governments, donors and the international community to measure health and development achievements have increased in the last decade. Datasets produced in surveys conducted in several countries and years are often combined to analyse time trends and geographical patterns of demographic and health-related indicators. However, since not all datasets have the same structure, variable definitions and codes, they have to be harmonised prior to submitting them to the statistical analyses. Manually searching, renaming and recoding variables are extremely tedious and error-prone tasks, especially when the numbers of datasets and variables are large. This article presents an automated approach to harmonise variable names across several datasets, which optimises the search of variables, minimises manual inputs and reduces the risk of error. Results: Three consecutive algorithms are applied iteratively to search for each variable of interest for the analyses in all datasets. The first search (A) captures particular cases that could not be solved in an automated way in the search iterations; the second search (B) is run if search A produced no hits and identifies variables the labels of which contain certain key terms defined by the user. If this search produces no hits, a third one (C) is run to retrieve variables which have been identified in other surveys, as an illustration. For each variable of interest, the outputs of these engines can be: (O1) a single best matching variable is found, (O2) more than one matching variable is found, or (O3) no matching variable is found. Output O2 is solved by user judgement. Examples using four variables are presented showing that the searches have a 100% sensitivity and specificity after a second iteration. Conclusion: Efficient and tested automated algorithms should be used to support the harmonisation process needed to analyse multiple datasets. This is especially relevant when the numbers of datasets

  6. IMGT standardization for statistical analyses of T cell receptor junctions: the TRAV-TRAJ example.

    Science.gov (United States)

    Bleakley, Kevin; Giudicelli, Véronique; Wu, Yan; Lefranc, Marie-Paule; Biau, Gérard

    2006-01-01

    The diversity of immunoglobulin (IG) and T cell receptor (TR) chains depends on several mechanisms: combinatorial diversity, which is a consequence of the number of V, D and J genes, and the N-REGION diversity, which creates an extensive and clonal somatic diversity at the V-J and V-D-J junctions. For the IG, the diversity is further increased by somatic hypermutations. The number of different junctions per chain and per individual is estimated to be 10^12. We have chosen the human TRAV-TRAJ junctions as an example in order to characterize the required criteria for a standardized analysis of the IG and TR V-J and V-D-J junctions, based on the IMGT-ONTOLOGY concepts, and to serve as a first IMGT junction reference set (IMGT, http://imgt.cines.fr). We performed a thorough statistical analysis of 212 human rearranged TRAV-TRAJ sequences, which were aligned and analysed by the integrated IMGT/V-QUEST software, which includes IMGT/JunctionAnalysis, then manually expert-verified. Furthermore, we compared these 212 sequences with 37 other human TRAV-TRAJ junction sequences for which some particularities (potential sequence polymorphisms, sequencing errors, etc.) did not allow IMGT/JunctionAnalysis to provide the correct biological results, according to expert verification. Using statistical learning, we constructed an automatic warning system to predict if new, automatically analysed TRAV-TRAJ sequences should be manually re-checked. We estimated the robustness of this automatic warning system.

  7. Statistical contact angle analyses; "slow moving" drops on a horizontal silicon-oxide surface.

    Science.gov (United States)

    Schmitt, M; Grub, J; Heib, F

    2015-06-01

    Sessile drop experiments on horizontal surfaces are commonly used to characterise surface properties in science and in industry. The advancing angle and the receding angle are measurable on every solid. Especially on horizontal surfaces, even the notions themselves are critically questioned by some authors. Building a standard, reproducible and valid method of measuring and defining specific (advancing/receding) contact angles is an important challenge of surface science. Recently we have developed three approaches, by sigmoid fitting, by independent and by dependent statistical analyses, which are practicable for the determination of specific angles/slopes when inclining the sample surface. These approaches lead to contact angle data which are independent of "user-skills" and of the subjectivity of the operator, which is also urgently needed to evaluate dynamic measurements of contact angles. We will show in this contribution that the slightly modified procedures are also applicable to find specific angles for experiments on horizontal surfaces. As an example, droplets on a flat, freshly cleaned silicon-oxide surface (wafer) are dynamically measured by the sessile drop technique while the volume of the liquid is increased/decreased. The triple points, the time, and the contact angles during the advancing and the receding of the drop obtained by high-precision drop shape analysis are statistically analysed. As stated in the previous contribution, the procedure is called "slow movement" analysis due to the small covered distance and the dominance of data points with low velocity. Even the smallest variations in velocity, such as the minimal advancing motion during the withdrawing of the liquid, are identifiable, which confirms the flatness and the chemical homogeneity of the sample surface and the high sensitivity of the presented approaches.
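    A sketch of the sigmoid-fit ingredient of these approaches, assuming a logistic model of contact angle versus measurement index and synthetic data; the specific angle is read from the fitted plateau.

    ```python
    # Fit a logistic (sigmoid) curve to a contact angle series and read the
    # specific (advancing) angle from the upper plateau. Data are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(t, lo, hi, t0, k):
        return lo + (hi - lo) / (1 + np.exp(-k * (t - t0)))

    t = np.linspace(0, 20, 200)                  # measurement index / time
    rng = np.random.default_rng(5)
    theta = sigmoid(t, 40, 62, 8, 0.9) + rng.normal(0, 0.6, t.size)  # degrees

    popt, _ = curve_fit(sigmoid, t, theta,
                        p0=[theta.min(), theta.max(), t.mean(), 1.0])
    lo, hi, t0, k = popt
    print(f"plateau (specific advancing angle): {hi:.1f} deg, "
          f"transition at t0 = {t0:.1f}")
    ```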

  8. Computational AstroStatistics Fast and Efficient Tools for Analysing Huge Astronomical Data Sources

    CERN Document Server

    Nichol, R C; Connolly, A J; Davies, S; Genovese, C; Hopkins, A M; Miller, C J; Moore, A W; Pelleg, D; Richards, G T; Schneider, J; Szapudi, I; Wasserman, L H

    2001-01-01

    I present here a review of past and present multi-disciplinary research of the Pittsburgh Computational AstroStatistics (PiCA) group. This group is dedicated to developing fast and efficient statistical algorithms for analysing huge astronomical data sources. I begin with a short review of multi-resolutional kd-trees, which are the building blocks for many of our algorithms, for example quick range queries and fast n-point correlation functions. I will present new results from the use of Mixture Models (Connolly et al. 2000) in density estimation of multi-color data from the Sloan Digital Sky Survey (SDSS), specifically the selection of quasars and the automated identification of X-ray sources. I will also present a brief overview of the False Discovery Rate (FDR) procedure (Miller et al. 2001a) and show how it has been used in the detection of "Baryon Wiggles" in the local galaxy power spectrum and in source identification in radio data. Finally, I will look forward to new research on an automated Bayes Netw...
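    The FDR procedure mentioned here is, in its common Benjamini-Hochberg form, short enough to sketch; the p-values below are simulated.

    ```python
    # Benjamini-Hochberg step-up FDR control on a vector of p-values,
    # as it might be applied to source-detection tests.
    import numpy as np

    def fdr_bh(pvals, q=0.05):
        """Return a boolean mask of discoveries controlling FDR at level q."""
        p = np.asarray(pvals)
        order = np.argsort(p)
        ranked = p[order]
        thresh = q * np.arange(1, len(p) + 1) / len(p)
        below = np.nonzero(ranked <= thresh)[0]
        reject = np.zeros(len(p), dtype=bool)
        if below.size:
            reject[order[:below.max() + 1]] = True   # step-up: all up to last crossing
        return reject

    pvals = np.concatenate([np.random.default_rng(6).uniform(size=95),
                            np.full(5, 1e-4)])       # 5 true signals among noise
    print("discoveries:", fdr_bh(pvals).sum())
    ```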

  9. Municipal solid waste composition: sampling methodology, statistical analyses, and case study evaluation.

    Science.gov (United States)

    Edjabou, Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn; Petersen, Claus; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2015-02-01

    Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10-50 waste fractions, organised according to a three-level (tiered) approach, facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at "Level III", i.e. detailed, while the two others were sorted only at "Level I"). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3-4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single-family and multi-family house areas), the individual percentage composition of food waste, paper, and glass was significantly different between the housing types. This indicates that housing type is a critical stratification parameter. Separating food leftovers from food packaging during manual sorting of the sampled waste did not have significant influence on the proportions of food waste

  10. Modelling and analysing track cycling Omnium performances using statistical and machine learning techniques.

    Science.gov (United States)

    Ofoghi, Bahadorreza; Zeleznikow, John; Dwyer, Dan; Macmahon, Clare

    2013-01-01

    This article describes the utilisation of an unsupervised machine learning technique and statistical approaches (e.g., the Kolmogorov-Smirnov test) that assist cycling experts in the crucial decision-making processes for athlete selection, training, and strategic planning in the track cycling Omnium. The Omnium is a multi-event competition that will be included in the summer Olympic Games for the first time in 2012. Presently, selectors and cycling coaches make decisions based on experience and intuition. They rarely have access to objective data. We analysed both the old five-event (first raced internationally in 2007) and new six-event (first raced internationally in 2011) Omniums and found that the addition of the elimination race component to the Omnium has, contrary to expectations, not favoured track endurance riders. We analysed the Omnium data and also determined the inter-relationships between different individual events as well as between those events and the final standings of riders. In further analysis, we found that there is no maximum ranking (poorest performance) in each individual event that riders can afford whilst still winning a medal. We also found the required times for riders to finish the timed components that are necessary for medal winning. The results of this study consider the scoring system of the Omnium and inform decision-making toward successful participation in future major Omnium competitions.
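    A hedged sketch of the statistical ingredients named above (a Kolmogorov-Smirnov two-sample comparison and a rank correlation between one event and the final result); the rider data are invented.

    ```python
    # KS comparison of final points between rider groups, plus the rank
    # correlation of a single event with the final outcome.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    endurance_pts = rng.normal(28, 6, 40)     # final points, endurance specialists
    sprint_pts = rng.normal(31, 6, 40)        # final points, sprint specialists

    ks = stats.ks_2samp(endurance_pts, sprint_pts)
    print(f"KS statistic = {ks.statistic:.2f}, p = {ks.pvalue:.3f}")

    # Inter-relationship between one event and the final standing.
    elimination_rank = rng.permutation(np.arange(1, 41))
    final_score = 0.5 * elimination_rank + rng.normal(0, 5, 40)
    rho, p = stats.spearmanr(elimination_rank, final_score)
    print(f"Spearman rho (elimination vs final) = {rho:.2f}, p = {p:.3f}")
    ```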

  11. Marine Traffic Density Over Port Klang, Malaysia Using Statistical Analysis of AIS Data: A Preliminary Study

    Directory of Open Access Journals (Sweden)

    Masnawi MUSTAFFA

    2016-12-01

    Port Klang, Malaysia, is the 13th busiest port in the world; its capacity is expected to be able to meet demand until 2018. It is one of the busiest ports in the world and also the busiest port in Malaysia. Even though statistics published by the Port Klang Authority show that many ships use this port, this number is only based on ships entering Port Klang. Therefore, no study has yet investigated how dense the traffic is in Port Klang, Malaysia, and the surrounding sea, including the Strait of Malacca. This paper investigates traffic density over Port Klang, Malaysia, and its surrounding sea using statistical analysis of AIS data. As a preliminary study, this study only collected AIS data for 7 days to represent daily traffic over a week. As a result, the hourly number of vessels, the daily number of vessels, vessel classifications and sizes, and traffic paths are plotted.
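    Turning raw AIS position reports into hourly and daily vessel counts might look like this sketch; the column names are assumptions about a typical AIS export, not the study's actual data.

    ```python
    # Hourly and daily distinct-vessel counts from AIS position reports.
    import pandas as pd

    ais = pd.DataFrame({
        "mmsi": [244660000, 244660000, 563002300, 563002300, 312110000],
        "timestamp": pd.to_datetime(["2016-07-04 00:12", "2016-07-04 01:40",
                                     "2016-07-04 00:55", "2016-07-05 10:02",
                                     "2016-07-05 11:30"]),
    })

    # Hourly number of distinct vessels (each MMSI counted once per hour).
    hourly = ais.groupby(pd.Grouper(key="timestamp", freq="h"))["mmsi"].nunique()
    # Daily number of distinct vessels.
    daily = ais.groupby(pd.Grouper(key="timestamp", freq="D"))["mmsi"].nunique()
    print(hourly.head(), daily, sep="\n")
    ```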

  12. Boxing and mixed martial arts: preliminary traumatic neuromechanical injury risk analyses from laboratory impact dosage data.

    Science.gov (United States)

    Bartsch, Adam J; Benzel, Edward C; Miele, Vincent J; Morr, Douglas R; Prakash, Vikas

    2012-05-01

    In spite of ample literature pointing to rotational and combined impact dosage being key contributors to head and neck injury, boxing and mixed martial arts (MMA) padding is still designed primarily to reduce linear acceleration of the cranium. The objectives of this study were to quantify preliminary linear and rotational head impact dosage for selected boxing and MMA padding in response to hook punches; to compute theoretical skull, brain, and neck injury risk metrics; and to statistically compare the protective effect of various glove and head padding conditions. An instrumented Hybrid III 50th percentile anthropomorphic test device (ATD) was struck in 54 pendulum impacts replicating hook punches at low (27-29 J) and high (54-58 J) energy. Five padding combinations were examined: unpadded (control), MMA glove-unpadded head, boxing glove-unpadded head, unpadded pendulum-boxing headgear, and boxing glove-boxing headgear. A total of 17 injury risk parameters were measured or calculated. All padding conditions reduced the linear impact dosage. Other parameters significantly decreased, significantly increased, or were unaffected depending on the padding condition. Of the real-world conditions (MMA glove-bare head, boxing glove-bare head, and boxing glove-headgear), the boxing glove-headgear condition showed the most meaningful reduction in most of the parameters. In equivalent impacts, the MMA glove-bare head condition induced higher rotational dosage than the boxing glove-bare head condition. Finite element analysis indicated a risk of brain strain injury in spite of the significant reduction of linear impact dosage. In the replicated hook punch impacts, all padding conditions reduced linear but not rotational impact dosage. Head and neck dosage theoretically accumulates fastest in MMA and boxing bouts without the use of protective headgear. The boxing glove-headgear condition provided the best overall reduction in impact dosage. More work is needed to develop improved protective padding to minimize

  13. Detailed statistical contact angle analyses; "slow moving" drops on inclining silicon-oxide surfaces.

    Science.gov (United States)

    Schmitt, M; Groß, K; Grub, J; Heib, F

    2015-06-01

    Contact angle determination by the sessile drop technique is essential to characterise surface properties in science and in industry. Different specific angles can be observed on every solid, which are correlated with the advancing or the receding of the triple line. Different procedures and definitions for the determination of specific angles exist, which are often not comprehensible or reproducible. Therefore one of the most important tasks in this area is to build standard, reproducible and valid methods for determining advancing/receding contact angles. This contribution introduces novel techniques to analyse dynamic contact angle measurements (sessile drop) in detail, which are applicable to axisymmetric and non-axisymmetric drops. Not only the recently presented fit solution by sigmoid function and the independent analysis of the different parameters (inclination, contact angle, velocity of the triple point) but also the dependent analysis will be explained in detail for the first time. These approaches lead to contact angle data and different access to specific contact angles which are independent of "user-skills" and of the subjectivity of the operator. As an example, the motion behaviour of droplets on flat silicon-oxide surfaces after different surface treatments is dynamically measured by the sessile drop technique when inclining the sample plate. The triple points, the inclination angles, the downhill (advancing motion) and the uphill angles (receding motion) obtained by high-precision drop shape analysis are independently and dependently statistically analysed. Due to the small covered distance, the analyses characterise the transition from the static to the "slow moving" dynamic contact angle determination and are characterised by small deviations of the computed values. In addition to the detailed introduction of these novel analytical approaches and the fit solution, special motion relations for the drop on inclined surfaces and detailed relations about the reactivity of the freshly cleaned silicon wafer

  14. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Edjabou, Maklawe Essonanawe, E-mail: vine@env.dtu.dk [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark); Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark); Petersen, Claus [Econet AS, Omøgade 8, 2.sal, 2100 Copenhagen (Denmark); Scheutz, Charlotte; Astrup, Thomas Fruergaard [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark)

    2015-02-15

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered) approach, facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, i.e. detailed, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single-family and multi-family house areas), the individual percentage composition of food waste, paper, and glass was significantly different between the housing types.

  15. AxPcoords & parallel AxParafit: statistical co-phylogenetic analyses on thousands of taxa

    Directory of Open Access Journals (Sweden)

    Meier-Kolthoff Jan

    2007-10-01

    Background: Current tools for co-phylogenetic analyses are not able to cope with the continuous accumulation of phylogenetic data. The sophisticated statistical test for host-parasite co-phylogenetic analyses implemented in Parafit does not allow it to handle large datasets in reasonable times. The Parafit and DistPCoA programs are by far the most compute-intensive components of the Parafit analysis pipeline. We present AxParafit and AxPcoords (Ax stands for Accelerated), which are highly optimized versions of Parafit and DistPCoA, respectively. Results: Both programs have been entirely re-written in C. Via optimization of the algorithm and the C code as well as integration of highly tuned BLAS and LAPACK methods, AxParafit runs 5–61 times faster than Parafit with a lower memory footprint (up to 35% reduction), while the performance benefit increases with growing dataset size. The MPI-based parallel implementation of AxParafit shows good scalability on up to 128 processors, even on medium-sized datasets. The parallel analysis with AxParafit on 128 CPUs for a medium-sized dataset with a 512 by 512 association matrix is more than 1,200/128 times faster per processor than the sequential Parafit run. AxPcoords is 8–26 times faster than DistPCoA and numerically stable on large datasets. We outline the substantial benefits of using parallel AxParafit by example of a large-scale empirical study on smut fungi and their host plants. To the best of our knowledge, this study represents the largest co-phylogenetic analysis to date. Conclusion: The highly efficient AxPcoords and AxParafit programs allow for large-scale co-phylogenetic analyses on several thousands of taxa for the first time. In addition, AxParafit and AxPcoords have been integrated into the easy-to-use CopyCat tool.

  16. A new scoring system in Cystic Fibrosis: statistical tools for database analysis - a preliminary report.

    Science.gov (United States)

    Hafen, G M; Hurst, C; Yearwood, J; Smith, J; Dzalilov, Z; Robinson, P J

    2008-10-05

    severe and severe disease. (3) Generated confusion tables showed a misclassification rate of 19.1% for males and 16.5% for females, with a majority of misallocations into adjacent severity classes, particularly for males. Our preliminary data show that using CAP for detection of selection features and Linear DA to derive the actual model in a CF database might be helpful in developing a scoring system. However, there are several limitations; in particular, more data entry points are needed to finalize a score, and the statistical tools have to be further refined and validated by re-running the statistical methods on the larger dataset.

  17. A new scoring system in Cystic Fibrosis: statistical tools for database analysis – a preliminary report

    Science.gov (United States)

    Hafen, GM; Hurst, C; Yearwood, J; Smith, J; Dzalilov, Z; Robinson, PJ

    2008-01-01

    , moderate, intermediate severe and severe disease. (3) Generated confusion tables showed a misclassification rate of 19.1% for males and 16.5% for females, with a majority of misallocations into adjacent severity classes, particularly for males. Conclusion: Our preliminary data show that using CAP for detection of selection features and Linear DA to derive the actual model in a CF database might be helpful in developing a scoring system. However, there are several limitations; in particular, more data entry points are needed to finalize a score, and the statistical tools have to be further refined and validated by re-running the statistical methods on the larger dataset. PMID:18834547

  18. A new scoring system in Cystic Fibrosis: statistical tools for database analysis – a preliminary report

    Directory of Open Access Journals (Sweden)

    Yearwood J

    2008-10-01

    , intermediate moderate, moderate, intermediate severe and severe disease. (3) Generated confusion tables showed a misclassification rate of 19.1% for males and 16.5% for females, with a majority of misallocations into adjacent severity classes, particularly for males. Conclusion: Our preliminary data show that using CAP for detection of selection features and Linear DA to derive the actual model in a CF database might be helpful in developing a scoring system. However, there are several limitations; in particular, more data entry points are needed to finalize a score, and the statistical tools have to be further refined and validated by re-running the statistical methods on the larger dataset.

  19. Analyses of Nucleon Scattering Based on the Modified Statistical Hauser - Feshbach - Weidenmueller Formalism.

    Science.gov (United States)

    Chan, Desmond Wing-Sum

    An S-matrix formalism of the statistical theory of nuclear reactions has been developed by Weidenmuller et al., based upon the Engelbrecht-Weidenmuller transformation and extended to cases where direct reactions are present, as a means of deriving expressions for the fluctuation cross section that go beyond the framework of conventional Hauser-Feshbach theory. This unified approach, in which a coherent sum of fluctuation and direct-interaction cross sections is combined to yield a net reaction cross section, provides a means of deriving a comprehensive and accurate theoretical description of the scattering process. Although a framework for the formal theory has been constructed, it had not previously been applied to the quantitative analysis of scattering data. As described in this thesis, a computer program "NANCY" has been compiled by modifying Tamura's coupled-channels code "JUPITOR-1" (through modifications suggested by Moldauer) and incorporating Smith's optical model routine "SCAT", as a means of generating the entire symmetric S-matrix. Using this program, computations were undertaken to determine numerically the energy-averaged cross sections for inelastic neutron scattering on ^232Th and ^238U from threshold to several MeV. With appropriate variation of coupling strengths between the ground state rotational band and vibrational levels, good fits to the experimental data were attained, which compared favorably with theoretical results generated from conventional approaches.

  20. Preliminary Statistics of Temperatures and Pressures for Formation of Eclogites,Granulites and Peridotites in China

    Institute of Scientific and Technical Information of China (English)

    Hu Baoqun; Wang Fangzheng; Sun Zhanxue; Liu Chengdong; Bai Lihong

    2004-01-01

    The rock-forming temperatures and pressures represent the p-T points of local regions in the lithosphere at a certain age, providing important information on rock formation. Based on preliminary statistics on the temperatures and pressures for the formation of eclogites, granulites and peridotites in China, this paper gives the ranges of the temperatures, pressures and linear geothermal gradients of eclogites, granulites and peridotites. In addition, since eclogite differs from granulite and peridotite in the p-T diagram, these three rocks can be classified into two groups: the first group includes eclogites and the second group includes granulites and peridotites. The p-T correlation functions of these two groups of rocks are then provided. Finally, the two groups of rocks have different geothermal gradients at the same pressure gradient, or different pressure gradients at the same geothermal gradient. The temperatures and pressures for the formation of the rocks can be calculated from the mineral chemical compositions, but the depths (H) of rock formation can be calculated only under the hypothesis of given p-H (or T-H) correlation functions. The explanations for ultrahigh-pressure metamorphism obviously vary with the different hypotheses.

  1. Preliminary Statistics from the NASA Alphasat Beacon Receiver in Milan, Italy

    Science.gov (United States)

    Nessel, James; Zemba, Michael; Morse, Jacquelynne; Luini, Lorenzo; Riva, Carlo

    2015-01-01

    NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have initiated a joint propagation campaign within the framework of the Alphasat propagation experiment to characterize rain attenuation, scintillation, and gaseous absorption effects of the atmosphere in the 40 gigahertz band. NASA GRC has developed and installed a K/Q-band (20/40 gigahertz) beacon receiver at the POLIMI campus in Milan, Italy, which receives the 20/40 gigahertz signals broadcast from the Alphasat Aldo Paraboni TDP no. 5 beacon payload. The primary goal of these measurements is to develop a physical model to improve predictions of communications systems performance within the Q-band. Herein, we provide an overview of the design and data calibration procedure, and present 6 months of preliminary statistics of the NASA propagation terminal, which has been installed and operating in Milan since May 2014. The Q-band receiver has demonstrated a dynamic range of 40 decibels at an 8-hertz sampling rate. A weather station with an optical disdrometer is also installed to characterize rain drop size distribution for correlation with physical based models.

  2. Radiation induced chromatin conformation changes analysed by fluorescent localization microscopy, statistical physics, and graph theory.

    Science.gov (United States)

    Zhang, Yang; Máté, Gabriell; Müller, Patrick; Hillebrandt, Sabina; Krufczik, Matthias; Bach, Margund; Kaufmann, Rainer; Hausmann, Michael; Heermann, Dieter W

    2015-01-01

    It has been well established that the architecture of chromatin in cell nuclei is not random but functionally correlated. Chromatin damage caused by ionizing radiation triggers complex repair machineries. This is accompanied by local chromatin rearrangements and structural changes which may, for instance, improve the accessibility of damaged sites for repair protein complexes. Using stably transfected HeLa cells expressing either green fluorescent protein (GFP) labelled histone H2B or yellow fluorescent protein (YFP) labelled histone H2A, we investigated the positioning of individual histone proteins in cell nuclei by means of high resolution localization microscopy (Spectral Position Determination Microscopy = SPDM). The cells were exposed to ionizing radiation of different doses and aliquots were fixed after different repair times for SPDM imaging. In addition to the repair dependent histone protein pattern, the positioning of antibodies specific for heterochromatin and euchromatin was separately recorded by SPDM. The present paper aims to provide a quantitative description of structural changes of chromatin after irradiation and during repair. It introduces a novel approach to analyse SPDM images by means of statistical physics and graph theory. The method is based on the calculation of radial distribution functions as well as edge length distributions for graphs defined by a triangulation of the marker positions. The obtained results show that, throughout the cell nucleus, the different chromatin re-arrangements as detected by the fluorescent nucleosomal pattern average out. In contrast, heterochromatic regions alone indicate a relaxation after radiation exposure and a re-condensation during repair, whereas euchromatin seemed to be unaffected or to behave contrarily. SPDM in combination with the analysis techniques applied allows the systematic elucidation of chromatin re-arrangements after irradiation and during repair, if selected sub-regions of nuclei are
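    A sketch of the two quantities named, a radial distribution function and the edge-length distribution of a triangulation graph, computed on simulated stand-ins for SPDM localisations.

    ```python
    # Radial distribution function and Delaunay edge-length distribution for
    # a 2D point pattern standing in for SPDM marker positions.
    import numpy as np
    from scipy.spatial import Delaunay, distance

    rng = np.random.default_rng(8)
    pts = rng.uniform(0, 1000, size=(500, 2))     # localisations in nm

    # g(r): pair-distance histogram normalised by the expected pair count
    # for a uniform point pattern (edge effects ignored in this sketch).
    d = distance.pdist(pts)
    r_edges = np.linspace(1, 300, 60)
    hist, _ = np.histogram(d, bins=r_edges)
    area = 1000.0 * 1000.0
    shell = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    n_pairs = len(pts) * (len(pts) - 1) / 2
    g = hist / (n_pairs * shell / area)
    print("g(r) near r=100 nm:", g[np.searchsorted(r_edges, 100) - 1].round(2))

    # Edge-length distribution of the Delaunay triangulation graph.
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            edges.add(tuple(sorted((simplex[i], simplex[(i + 1) % 3]))))
    lengths = [np.linalg.norm(pts[a] - pts[b]) for a, b in edges]
    print(f"median Delaunay edge length: {np.median(lengths):.1f} nm")
    ```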

  3. Evaluation of Leymus chinensis quality using near-infrared reflectance spectroscopy with three different statistical analyses

    Directory of Open Access Journals (Sweden)

    Jishan Chen

    2015-12-01

    Due to a boom in the dairy industry in Northeast China, the hay industry has been developing rapidly. Thus, it is very important to evaluate hay quality with a rapid and accurate method. In this research, a novel technique combining near-infrared spectroscopy (NIRS) with three different statistical analyses (MLR, PCR and PLS) was used to predict the chemical quality of sheepgrass (Leymus chinensis) in Heilongjiang Province, China, including the concentrations of crude protein (CP), acid detergent fiber (ADF), and neutral detergent fiber (NDF). Firstly, linear partial least squares regression (PLS) was performed on the spectra and the predictions were compared to those with laboratory-based recorded spectra. The MLR evaluation method for CP has the potential to be used for industry requirements, as it needs less sophisticated and cheaper instrumentation using only a few wavelengths. Results show that, in terms of CP, ADF and NDF, (i) the prediction accuracy using PLS was clearly improved compared to the PCR algorithm, and comparable to or even better than results generated using the MLR algorithm; (ii) the predictions were worse compared to laboratory-based spectra with the MLR algorithm, and poor predictions were obtained (R2, 0.62; RPD, 0.9) using MLR in terms of NDF; (iii) a satisfactory accuracy, with R2 and RPD by the PLS method of 0.91 and 3.2 for CP, 0.89 and 3.1 for ADF, and 0.88 and 3.0 for NDF, respectively, was obtained. Our results highlight that the combined NIRS-PLS method could be applied as a valuable technique to rapidly and accurately evaluate the quality of sheepgrass hay.
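    A hedged comparison of MLR, PCR and PLS calibrations with R2 and RPD, in the spirit of the study; the spectra and reference values are simulated, and the wavelength/component choices are arbitrary.

    ```python
    # Compare MLR, PCR and PLS calibrations of a quality trait against
    # simulated spectrum-like data, reporting R2 and RPD (= SD / RMSEP).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(9)
    X = rng.normal(size=(120, 700)).cumsum(axis=1)      # smooth spectrum-like rows
    cp = X[:, 150] * 0.02 - X[:, 480] * 0.015 + rng.normal(0, 0.3, 120)
    Xtr, Xte, ytr, yte = train_test_split(X, cp, random_state=0)

    def rpd(y_true, y_pred):
        return y_true.std(ddof=1) / np.sqrt(np.mean((y_true - y_pred) ** 2))

    models = {
        "MLR (5 wavelengths)": (LinearRegression(), Xtr[:, ::140], Xte[:, ::140]),
        "PCR (10 PCs)": (LinearRegression(),
                         *map(PCA(10).fit(Xtr).transform, (Xtr, Xte))),
        "PLS (10 LVs)": (PLSRegression(10), Xtr, Xte),
    }
    for name, (model, xtr, xte) in models.items():
        pred = model.fit(xtr, ytr).predict(xte).ravel()
        print(f"{name}: R2 = {r2_score(yte, pred):.2f}, RPD = {rpd(yte, pred):.1f}")
    ```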

  4. Reporting characteristics of meta-analyses in orthodontics: methodological assessment and statistical recommendations.

    Science.gov (United States)

    Papageorgiou, Spyridon N; Papadopoulos, Moschos A; Athanasiou, Athanasios E

    2014-02-01

    Ideally meta-analyses (MAs) should consolidate the characteristics of orthodontic research in order to produce an evidence-based answer. However, severe flaws are frequently observed in most of them. The aim of this study was to evaluate the statistical methods, the methodology, and the quality characteristics of orthodontic MAs and to assess their reporting quality during the last years. Electronic databases were searched for MAs (with or without a proper systematic review) in the field of orthodontics, indexed up to 2011. The AMSTAR tool was used for quality assessment of the included articles. Data were analyzed with Student's t-test, one-way ANOVA, and generalized linear modelling. Risk ratios with 95% confidence intervals were calculated to represent changes during the years in reporting of key items associated with quality. A total of 80 MAs with 1086 primary studies were included in this evaluation. Using the AMSTAR tool, 25 (31.3%) of the MAs were found to be of low quality, 37 (46.3%) of medium quality, and 18 (22.5%) of high quality. Specific characteristics like explicit protocol definition, extensive searches, and quality assessment of included trials were associated with a higher AMSTAR score. Model selection and dealing with heterogeneity or publication bias were often problematic in the identified reviews. The number of published orthodontic MAs is constantly increasing, while their overall quality is considered to range from low to medium. Although the number of medium- and high-quality MAs has lately seemed to rise, several other aspects need improvement to increase their overall quality.

  5. Statistical Analyses of Second Indoor Bio-Release Field Evaluation Study at Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Amidan, Brett G.; Pulsipher, Brent A.; Matzke, Brett D.

    2009-12-17

    number of zeros. QQ plots of these data show a lack of normality in the data after contamination. Normality is improved when looking at log(CFU/cm2). Variance component analysis (VCA) and analysis of variance (ANOVA) were used to estimate the amount of variance due to each source and to determine which sources of variability were statistically significant. In general, the sampling methods interacted with the across-event variability and with the across-room variability. For this reason, it was decided to do analyses for each sampling method individually. The between-event variability and between-room variability were significant for each method, except for the between-event variability for the swabs. For both the wipes and vacuums, the within-room standard deviation was much larger (26.9 for wipes and 7.086 for vacuums) than the between-event standard deviation (6.552 for wipes and 1.348 for vacuums) and the between-room standard deviation (6.783 for wipes and 1.040 for vacuums). The swabs' between-room standard deviation was 0.151, while both the within-room and between-event standard deviations were less than 0.10 (all measurements in CFU/cm2).
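    The between/within decomposition reported above can be illustrated with a one-way, method-of-moments variance components estimate; the balanced design and CFU/cm2 values are simplifying assumptions.

    ```python
    # One-way variance components (between-event vs within-event) by the
    # ANOVA method of moments, on synthetic balanced data.
    import numpy as np

    rng = np.random.default_rng(10)
    n_events, n_samples = 6, 12
    event_effect = rng.normal(0, 1.3, n_events)                   # between-event SD 1.3
    data = event_effect[:, None] + rng.normal(0, 7.0, (n_events, n_samples))

    grand = data.mean()
    event_means = data.mean(axis=1)
    ms_between = n_samples * np.sum((event_means - grand) ** 2) / (n_events - 1)
    ms_within = np.sum((data - event_means[:, None]) ** 2) / (n_events * (n_samples - 1))

    var_within = ms_within
    var_between = max((ms_between - ms_within) / n_samples, 0.0)  # truncate at 0
    print(f"within-event SD = {np.sqrt(var_within):.2f}, "
          f"between-event SD = {np.sqrt(var_between):.2f}")
    ```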

  6. Electrocochleographic recording in Asian adults: Preliminary normative data and demographic analyses

    Directory of Open Access Journals (Sweden)

    Mohd Normani Zakaria

    2017-03-01

    Conclusion: The present study provides preliminary normative data for ET-ECochG among Asian adults. The ECochG components do not appear to be influenced by either ethnicity or gender. The derived normative data can be used for clinical applications and as the reference for future studies involving Asian population.

  7. The mobility of Atlantic baric depressions leading to intense precipitation over Italy: a preliminary statistical analysis

    Directory of Open Access Journals (Sweden)

    N. Tartaglione

    2006-01-01

    The speed of Atlantic surface depressions that occurred during the autumn and winter seasons and led to intense precipitation over Italy from 1951 to 2000 was investigated. Italy was divided into 5 regions as documented in previous climatological studies (based on Principal Component Analysis). Intense precipitation events were selected on the basis of in situ rain gauge data and clustered according to the region that they hit. For each intense precipitation event we tried to identify an associated surface depression and we tracked it, within a large domain covering the Mediterranean and Atlantic regions, from its formation to cyclolysis in order to estimate its speed. 'Depression speeds' were estimated with 6-h resolution and clustered into slow and non-slow classes by means of a threshold coinciding with the first quartile of the speed distribution, and depression centre speeds were associated with their positions. Slow speeds occurring over an area including Italy and the western Mediterranean basin showed frequencies higher than 25% for all the Italian regions but one. The probability of obtaining by chance the observed more-than-25% success rate was estimated by means of a binomial distribution. The statistical reliability of the result is confirmed for only one region. For Italy as a whole, the results were confirmed at the 95% confidence level. The stability of the statistical inference with respect to errors in estimating depression speed and changes in the threshold of slow depressions was analysed and essentially confirmed the previous results.
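    The binomial check described might be sketched as follows, with illustrative counts: under the null, the slow class occurs at rate 0.25 by construction (first quartile of the speed distribution).

    ```python
    # How surprising is an observed slow-depression rate above 25% in a region?
    from scipy.stats import binomtest

    n_events, n_slow = 48, 19                 # events hitting a region; slow ones
    test = binomtest(n_slow, n_events, p=0.25, alternative="greater")
    print(f"observed rate = {n_slow / n_events:.2f}, p-value = {test.pvalue:.3f}")
    ```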

  8. Essentials of Excel, Excel VBA, SAS and Minitab for statistical and financial analyses

    CERN Document Server

    Lee, Cheng-Few; Chang, Jow-Ran; Tai, Tzu

    2016-01-01

    This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, MINITAB, and SAS. Every chapter in this textbook engages the reader with data of individual stock, stock indices, options, and futures. One studies and uses statistics to learn how to study, analyze, and understand a data set of particular interest. Some of the more popular statistical programs that have been developed to use statistical and computational methods to analyze data sets are SAS, SPSS, and MINITAB. Of those, we look at MINITAB and SAS in this textbook. One of the main reasons to use MINITAB is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel to do statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities...

  9. Analysis of Norwegian bio energy statistics. Quality improvement proposals; Analyse av norsk bioenergistatistikk. Forslag til kvalitetsheving

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    This report is an assessment of the current model and presentation form of bio energy statistics. It proposes a revision and enhancement of both the collection and the presentation of the data. In the context of market developments, both for energy in general and for bio energy in particular, and of government targets, good bio energy statistics form the basis for following up the objectives and means.

  10. Preliminary results and analyses of using IGS GPS data to determine global ionospheric TEC

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Using the spherical harmonic (SH) function model and the dual-frequency GPS data of 139 International GPS Service (IGS) stations for July 15, 2000, the global ionospheric total electron content (TEC) is calculated and the basic method is investigated. Here, preliminary results are reported and the problems and difficulties to be solved in using GPS data to determine the global ionospheric TEC are discussed.

  11. Statistical issues in quality control of proteomic analyses: good experimental design and planning.

    Science.gov (United States)

    Cairns, David A

    2011-03-01

    Quality control is becoming increasingly important in proteomic investigations as experiments become more multivariate and quantitative. Quality control applies to all stages of an investigation and statistics can play a key role. In this review, the role of statistical ideas in the design and planning of an investigation is described. This involves the design of unbiased experiments using key concepts from statistical experimental design, the understanding of the biological and analytical variation in a system using variance components analysis and the determination of a required sample size to perform a statistically powerful investigation. These concepts are described through simple examples and an example data set from a 2-D DIGE pilot experiment. Each of these concepts can prove useful in producing better and more reproducible data. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
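    A minimal example of the sample-size step described, assuming a two-sample t-test as the planned comparison and a pilot-based effect size.

    ```python
    # Required sample size per group for a two-sample t-test; the effect size
    # would come from a pilot (e.g. 2-D DIGE) variance estimate.
    from statsmodels.stats.power import TTestIndPower

    effect_size = 0.8            # biological difference / total SD (assumption)
    n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                              alpha=0.05, power=0.8)
    print(f"required sample size per group: {n_per_group:.1f}")
    ```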

  12. Neutronic analyses of the preliminary design of a DCLL blanket for the EUROfusion DEMO power plant

    Energy Technology Data Exchange (ETDEWEB)

    Palermo, Iole, E-mail: iole.palermo@ciemat.es; Fernández, Iván; Rapisarda, David; Ibarra, Angel

    2016-11-01

    Highlights: • We perform neutronic calculations for the preliminary DCLL blanket design. • We study the tritium breeding capability of the reactor. • We determine the nuclear heating in the main components. • We verify whether the shielding of the TF coil is maintained. - Abstract: In the frame of the newly established EUROfusion WPBB Project for the period 2014–2018, four breeding blanket options are being investigated for use in the fusion power demonstration plant DEMO. CIEMAT is leading the development of the conceptual design of the Dual Coolant Lithium Lead (DCLL) breeding blanket. The primary roles of the blanket are energy extraction, tritium production, and radiation shielding. To this end the DCLL uses LiPb as primary coolant, tritium breeder and neutron multiplier, and Eurofer as structural material. Focusing on the achievement of the fundamental neutronic responses, a preliminary blanket model has been designed. Detailed 3D neutronic models of the whole blanket modules have thus been generated, arranged in a specific DCLL segmentation and integrated into the generic DEMO model. The initial design has been studied to demonstrate its viability. The neutronic behaviour of the blanket and of the shield systems in terms of tritium breeding capability, power generation and shielding efficiency has been assessed in this paper. The results demonstrate that the primary nuclear performances are already satisfactory at this preliminary stage of the design, with tritium self-sufficiency and adequate shielding obtained.

  13. [Selection of a statistical model for the evaluation of the reliability of the results of toxicological analyses. II. Selection of our statistical model for the evaluation].

    Science.gov (United States)

    Antczak, K; Wilczyńska, U

    1980-01-01

    Part II presents a statistical model devised by the authors for evaluating the results of toxicological analyses. The model includes: 1. Establishment of a reference value, based on our own measurements taken by two independent analytical methods. 2. Selection of laboratories, based on the deviation of the obtained values from the reference ones. 3. Using analysis of variance, Student's t-test and the differences test, subsequent quality controls and particular laboratories were evaluated.

  14. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section...... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling......, which leads to more accurate results. Finally, we present a thorough statistical investigation of the shape, orientation and interactions of the synaptic vesicles during active time of the synapse. Focused ion beam-scanning electron microscopy images of a male mammalian brain are used for this study...

  15. Multitemporal satellite data analyses for archaeological mark detection: preliminary results in Italy and Argentina

    Science.gov (United States)

    Lasaponara, Rosa; Masini, Nicola

    2014-05-01

    within the Basilicata and Puglia Regions, southern Patagonia and the Payunia-Campos Volcánicos Llancanelo and Payún Matrú, respectively, in Italy and Argentina. We focused our attention on diverse surfaces and soil types in different periods of the year in order to assess the capabilities of both optical and radar data to detect archaeological marks in different ecosystems and seasons. We investigated not only crop cultures during the "favourable vegetative period", to enhance the presence of subsurface remains, but also the "spectral response" of spontaneous, sparse herbaceous cover during periods considered and expected to be less favourable (for example summer and winter) for this type of investigation. The most interesting result was the capability of radar (COSMO-SkyMed) and multispectral optical satellite data (Pleiades, QuickBird, GeoEye) to highlight the presence of structures below the surface even (i) during periods of the year generally considered not suitable for crop-mark investigations and (ii) in areas only covered by sparse, spontaneous herbaceous plants, in several test sites investigated in both the Argentinian and Italian areas of interest. Preliminary results from both the Italian and Argentinian sites pointed out that Earth Observation (EO) technology can be successfully used for extracting useful information on traces of past human activities still fossilized in the modern landscape, in different ecosystems and seasons. Moreover, multitemporal analyses of satellite data can be fruitfully applied to: (i) improve knowledge, (ii) support monitoring of natural and cultural sites, and (iii) assess natural and man-made risks, including emerging threats to heritage sites.

  16. Statistics for quantifying heterogeneity in univariate and bivariate meta-analyses of binary data: the case of meta-analyses of diagnostic accuracy.

    Science.gov (United States)

    Zhou, Yan; Dendukuri, Nandini

    2014-07-20

    Heterogeneity in diagnostic meta-analyses is common because of the observational nature of diagnostic studies and the lack of standardization in the positivity criterion (cut-off value) for some tests. So far the unexplained heterogeneity across studies has been quantified either by using the I^2 statistic for a single parameter (i.e. either the sensitivity or the specificity) or by visually examining the data in a receiver-operating characteristic space. In this paper, we derive improved I^2 statistics measuring heterogeneity for dichotomous outcomes, with a focus on diagnostic tests. We show that the currently used estimate of the 'typical' within-study variance proposed by Higgins and Thompson is not able to properly account for the variability of the within-study variance across studies for dichotomous variables. Therefore, when the between-study variance is large, the 'typical' within-study variance underestimates the expected within-study variance, and the corresponding I^2 is overestimated. We propose to use the expected value of the within-study variation in the construction of I^2 in cases of univariate and bivariate diagnostic meta-analyses. For bivariate diagnostic meta-analyses, we derive a bivariate version of I^2 that is able to account for the correlation between sensitivity and specificity. We illustrate the performance of these new estimators using simulated data as well as two real data sets.
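    For reference, the standard (Higgins-Thompson) I^2 that the paper sets out to improve can be computed from study-level logit sensitivities as follows; the counts are invented.

    ```python
    # Cochran's Q and I^2 from study-level logit sensitivities.
    import numpy as np

    tp = np.array([45, 30, 62, 18, 90])     # true positives per study
    fn = np.array([5, 10, 8, 6, 15])        # false negatives per study

    logit_se = np.log((tp + 0.5) / (fn + 0.5))        # logit sensitivity, corrected
    var_i = 1 / (tp + 0.5) + 1 / (fn + 0.5)           # within-study variances
    w = 1 / var_i
    theta_fixed = np.sum(w * logit_se) / np.sum(w)
    Q = np.sum(w * (logit_se - theta_fixed) ** 2)     # Cochran's Q
    df = len(tp) - 1
    I2 = max(0.0, (Q - df) / Q) * 100
    print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%")
    ```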

  17. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations, 2. Robustness of Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C.; Kleijnen, J.P.C.

    1999-03-24

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked.
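    A sketch of this sequence of tests applied to one (input, output) scatterplot; the data are simulated and the binning choices are arbitrary.

    ```python
    # The five-step battery: linear, monotonic, central tendency, variability,
    # and deviation-from-randomness checks on one scatterplot.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    x = rng.uniform(0, 1, 300)
    y = np.sin(3 * x) + rng.normal(0, 0.3, 300)       # monotone-ish relationship

    print("(i)  linear:   r =", round(stats.pearsonr(x, y)[0], 2))
    print("(ii) monotone: rank r =", round(stats.spearmanr(x, y)[0], 2))

    # (iii) trends in central tendency: Kruskal-Wallis across x-bins.
    bins = np.quantile(x, [0, .2, .4, .6, .8, 1.])
    groups = [y[(x >= lo) & (x <= hi)] for lo, hi in zip(bins[:-1], bins[1:])]
    print("(iii) Kruskal-Wallis p =", round(stats.kruskal(*groups).pvalue, 4))

    # (iv) trends in variability: compare interquartile ranges across bins.
    print("(iv) IQR per bin:", [round(np.subtract(*np.percentile(g, [75, 25])), 2)
                                for g in groups])

    # (v) deviation from randomness: chi-square on a 5x5 grid of the plot.
    table = np.histogram2d(x, y, bins=5)[0] + 0.5     # smoothed cell counts
    print("(v)  chi-square p =", round(stats.chi2_contingency(table)[1], 4))
    ```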

  18. The Effects of Two Types of Sampling Error on Common Statistical Analyses.

    Science.gov (United States)

    Arnold, Margery E.

    Sampling error refers to variability that is unique to the sample. If the sample is the entire population, then there is no sampling error. A related point is that sampling error is a function of sample size, as a hypothetical example illustrates. As the sample statistics more and more closely approximate the population parameters, the sampling…
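
    The point generalizes to a quick simulation; a sketch with an assumed normal population, showing the spread of sample means shrinking as the sample size grows:

        import numpy as np

        rng = np.random.default_rng(1)
        population = rng.normal(100, 15, size=1_000_000)  # treat this as the population

        for n in (10, 100, 1000):
            sample_means = [rng.choice(population, n).mean() for _ in range(500)]
            # sampling error (spread of sample means) shrinks roughly as 1/sqrt(n)
            print(f"n={n:>4}: std of sample means = {np.std(sample_means):.2f}")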

  19. Priors, Posterior Odds and Lagrange Multiplier Statistics in Bayesian Analyses of Cointegration

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); R. Paap (Richard)

    1996-01-01

    Using the standard linear model as a base, a unified theory of Bayesian Analyses of Cointegration Models is constructed. This is achieved by defining (natural conjugate) priors in the linear model and using the implied priors for the cointegration model. Using these priors, posterior res

  20. Statistical Significance and Reliability Analyses in Recent "Journal of Counseling & Development" Research Articles.

    Science.gov (United States)

    Thompson, Bruce; Snyder, Patricia A.

    1998-01-01

    Investigates two aspects of research analyses in quantitative research studies reported in the 1996 issues of "Journal of Counseling & Development" (JCD). Acceptable methodological practice regarding significance testing and evaluation of score reliability has evolved considerably. Contemporary thinking on these issues is described; practice as…

  1. Thinking About Data, Research Methods, and Statistical Analyses: Commentary on Sijtsma's (2014) "Playing with Data".

    Science.gov (United States)

    Waldman, Irwin D; Lilienfeld, Scott O

    2016-03-01

    We comment on Sijtsma's (2014) thought-provoking essay on how to minimize questionable research practices (QRPs) in psychology. We agree with Sijtsma that proactive measures to decrease the risk of QRPs will ultimately be more productive than efforts to target individual researchers and their work. In particular, we concur that encouraging researchers to make their data and research materials public is the best institutional antidote against QRPs, although we are concerned that Sijtsma's proposal to delegate more responsibility to statistical and methodological consultants could inadvertently reinforce the dichotomy between the substantive and statistical aspects of research. We also discuss sources of false-positive findings and replication failures in psychological research, and outline potential remedies for these problems. We conclude that replicability is the best metric of the minimization of QRPs and their adverse effects on psychological research.

  2. Time Series Expression Analyses Using RNA-seq: A Statistical Approach

    Directory of Open Access Journals (Sweden)

    Sunghee Oh

    2013-01-01

    Full Text Available RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulation of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model time-course RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis.
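
    Of the approaches named, the AR(1) component is the easiest to sketch; a minimal least-squares fit of a lag-1 autoregression to one gene's (hypothetical) expression series, not the SETI or HMM machinery:

        import numpy as np

        # Hypothetical normalised expression of one gene across ordered time points
        expr = np.array([2.1, 2.6, 3.4, 3.9, 4.1, 4.0, 3.6, 3.1, 2.7, 2.4])

        # AR(1): x_t = c + phi * x_{t-1} + noise, fitted by ordinary least squares
        X = np.column_stack([np.ones(len(expr) - 1), expr[:-1]])
        c, phi = np.linalg.lstsq(X, expr[1:], rcond=None)[0]
        print(f"intercept c = {c:.3f}, autoregressive coefficient phi = {phi:.3f}")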

  3. Statistical analyses of the performance of Macedonian investment and pension funds

    Directory of Open Access Journals (Sweden)

    Petar Taleski

    2015-10-01

    Full Text Available The foundation of post-modern portfolio theory is creating a portfolio based on a desired target return. This specifically applies to the performance of investment and pension funds that must provide a rate of return meeting payment requirements from investment funds. A desired target return is the goal of an investment or pension fund. It is the primary benchmark used to measure performance and for dynamic monitoring and evaluation of the risk–return ratio of investment funds. The analysis in this paper is based on monthly returns of Macedonian investment and pension funds (June 2011 - June 2014). The analysis utilizes basic but highly informative statistical characteristics such as skewness, kurtosis, the Jarque–Bera test, and Chebyshev's inequality. The objective of this study is to perform a thorough analysis, utilizing the above-mentioned and other statistical techniques (Sharpe, Sortino, omega, upside potential, Calmar, Sterling), to draw relevant conclusions regarding the risks and characteristic moments of Macedonian investment and pension funds. Pension funds are the second largest segment of the financial system and have great potential for further growth due to constant inflows from pension insurance. The importance of investment funds for the financial system in the Republic of Macedonia is still small, although open-end investment funds have been the fastest growing segment of the financial system. Statistical analysis has shown that pension funds delivered a significantly positive volatility-adjusted risk premium in the analyzed period, more so than investment funds.
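
    Two of the ratios named are easy to illustrate; a sketch with hypothetical monthly returns and risk-free rate (the Sortino ratio here uses the risk-free rate as the target return, one common convention):

        import numpy as np

        # Hypothetical monthly fund returns and a monthly risk-free rate
        returns = np.array([0.012, -0.004, 0.008, 0.015, -0.010, 0.006, 0.009, 0.003])
        rf = 0.002

        excess = returns - rf
        sharpe = excess.mean() / excess.std(ddof=1)

        downside = np.minimum(excess, 0.0)            # only below-target deviations
        sortino = excess.mean() / np.sqrt(np.mean(downside ** 2))
        print(f"Sharpe = {sharpe:.3f}, Sortino = {sortino:.3f}")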

  4. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  5. Characterizing Earthflow Surface Morphology With Statistical and Spectral Analyses of Airborne Laser Altimetry

    Science.gov (United States)

    McKean, J.; Roering, J.

    High-resolution laser altimetry can depict the topography of large landslides with unprecedented accuracy and allow better management of the hazards posed by such slides. The surface of most landslides is rougher, on a local scale of a few meters, than adjacent unfailed slopes. This characteristic can be exploited to automatically detect and map landslides in landscapes represented by high-resolution DTMs. We have used laser altimetry measurements of local topographic roughness to identify and map the perimeter and internal features of a large earthflow in the South Island, New Zealand. Surface roughness was first quantified by statistically characterizing the local variability of ground surface orientations using both circular and spherical statistics. These measures included the circular resultant, standard deviation and dispersion, and the three-dimensional spherical resultant and ratios of the normalized eigenvalues of the direction cosines. The circular measures evaluate the amount of change in topographic aspect from pixel to pixel in the gridded data matrix. The spherical statistics assess both the aspect and steepness of each pixel. The standard deviation of the third direction cosine was also used alone to define the variability in just the steepness of each pixel. All of the statistical measures detect and clearly map the earthflow. Circular statistics also emphasize small folds transverse to the movement in the most active zone of the slide. The spherical measures are more sensitive to the larger scale roughness in a portion of the slide that includes large intact limestone blocks. Power spectra of surface roughness were also calculated from two-dimensional Fourier transformations in local test areas. A small earthflow had a broad spectral peak at wavelengths between 10 and 30 meters. Shallower soil failures and surface erosion produced surfaces with a very sharp spectral peak at 12 meters wavelength. Unfailed slopes had an order of magnitude
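
    The circular-resultant idea reduces to a few lines; a sketch on hypothetical aspect angles, where a resultant length near 1 indicates a smooth, consistently oriented surface:

        import numpy as np

        # Hypothetical topographic aspect angles (radians) in a local window of a DTM
        aspect = np.deg2rad(np.array([10, 15, 12, 200, 190, 14, 11, 185]))

        # Circular resultant length: near 1 for smooth slopes, lower for rough ones
        R = np.abs(np.mean(np.exp(1j * aspect)))
        circ_var = 1 - R                              # circular variance as roughness
        print(f"resultant length R = {R:.3f}, circular variance = {circ_var:.3f}")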

  6. Births: Preliminary Data for 2011. National Vital Statistics Reports. Volume 61, Number 5

    Science.gov (United States)

    Hamilton, Brady E.; Martin, Joyce A.; Ventura, Stephanie J.

    2012-01-01

    Objectives: This report presents preliminary data for 2011 on births in the United States. U.S. data on births are shown by age, live-birth order, race, and Hispanic origin of mother. Data on marital status, cesarean delivery, preterm births, and low birthweight are also presented. Methods: Data in this report are based on approximately 100…

  7. The Use of Statistical Process Control Tools for Analysing Financial Statements

    Directory of Open Access Journals (Sweden)

    Niezgoda Janusz

    2017-06-01

    Full Text Available This article presents a proposed application of one type of modified Shewhart control chart to the monitoring of changes in the aggregated level of financial ratios. The x̅ control chart has been used as the basis of the analysis. The examined variable from the sample in the mentioned chart is the arithmetic mean. The author proposes to substitute it with a synthetic measure determined from the selected ratios. As the ratios mentioned above are expressed in different units and characters, the author applies standardisation. The results of selected comparative analyses are presented for both bankrupts and non-bankrupts. They indicate the possibility of using control charts as an auxiliary tool in financial analyses.
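
    A simplified sketch of the underlying x̅-chart logic on hypothetical subgroup data (classical charts estimate the spread from within-subgroup ranges; the overall standard deviation used here is a shortcut):

        import numpy as np

        # Hypothetical synthetic-measure observations, four per period
        data = np.array([[0.52, 0.48, 0.50, 0.51],
                         [0.49, 0.53, 0.50, 0.52],
                         [0.61, 0.60, 0.63, 0.59],   # shifted period
                         [0.50, 0.47, 0.51, 0.49]])

        xbar = data.mean(axis=1)                     # subgroup means
        center = xbar.mean()
        sigma = data.std(ddof=1)                     # overall spread (shortcut)
        n = data.shape[1]
        ucl, lcl = center + 3 * sigma / np.sqrt(n), center - 3 * sigma / np.sqrt(n)
        for i, m in enumerate(xbar):
            print(f"period {i}: mean={m:.3f}", "OUT" if m > ucl or m < lcl else "ok")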

  8. Calibration of back-analysed model parameters for landslides using classification statistics

    Science.gov (United States)

    Cepeda, Jose; Henderson, Laura

    2016-04-01

    Back-analyses are useful for characterizing the geomorphological and mechanical processes and parameters involved in the initiation and propagation of landslides. These processes and parameters can in turn be used for improving forecasts of scenarios and hazard assessments in areas or sites which have similar settings to the back-analysed cases. The selection of the modeled landslide that produces the best agreement with the actual observations requires running a number of simulations by varying the type of model and the sets of input parameters. The comparison of the simulated and observed parameters is normally performed by visual comparison of geomorphological or dynamic variables (e.g., geometry of scarp and final deposit, maximum velocities and depths). Over the past six years, a method developed by NGI has been used by some researchers for a more objective selection of back-analysed input model parameters. That method includes an adaptation of the equations for calculation of classifiers, and a comparative evaluation of classifiers of the selected parameter sets in the Receiver Operating Characteristic (ROC) space. This contribution presents an update of the methodology. The proposed procedure allows comparisons between two or more "clouds" of classifiers. Each cloud represents the performance of a model over a range of input parameters (e.g., samples of probability distributions). Because each cloud does not necessarily produce a full ROC curve, two new normalised ROC-space parameters are introduced for characterizing the performance of each cloud. The first parameter is representative of the cloud position relative to the point of perfect classification. The second parameter characterizes the position of the cloud relative to the theoretically perfect ROC curve and the no-discrimination line. The methodology is illustrated with back-analyses of slope stability and landslide runout of selected case studies. This research activity has been
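
    The ROC-space comparison can be sketched with hypothetical confusion counts for competing parameter sets; the distance to the point of perfect classification (0, 1) used below is a simple stand-in, not the two normalised parameters the abstract introduces:

        import numpy as np

        # Hypothetical counts for three parameter sets: TP, FN, FP, TN
        # (pixels predicted inside/outside the observed landslide extent)
        candidates = {"set A": (84, 16, 12, 88),
                      "set B": (70, 30, 5, 95),
                      "set C": (95, 5, 40, 60)}

        for name, (tp, fn, fp, tn) in candidates.items():
            tpr = tp / (tp + fn)                     # true positive rate
            fpr = fp / (fp + tn)                     # false positive rate
            d = np.hypot(fpr - 0.0, tpr - 1.0)       # distance to perfect point (0, 1)
            print(f"{name}: TPR={tpr:.2f}, FPR={fpr:.2f}, distance={d:.3f}")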

  9. An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses

    Science.gov (United States)

    Saether, E.; Glaessgen, E.H.; Yamakov, V.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.

  10. On the Assessment of Monte Carlo Error in Simulation-Based Statistical Analyses.

    Science.gov (United States)

    Koehler, Elizabeth; Brown, Elizabeth; Haneuse, Sebastien J-P A

    2009-05-01

    Statistical experiments, more commonly referred to as Monte Carlo or simulation studies, are used to study the behavior of statistical methods and measures under controlled situations. Whereas recent computing and methodological advances have permitted increased efficiency in the simulation process, known as variance reduction, such experiments remain limited by their finite nature and hence are subject to uncertainty; when a simulation is run more than once, different results are obtained. However, virtually no emphasis has been placed on reporting the uncertainty, referred to here as Monte Carlo error, associated with simulation results in the published literature, or on justifying the number of replications used. These deserve broader consideration. Here we present a series of simple and practical methods for estimating Monte Carlo error as well as determining the number of replications required to achieve a desired level of accuracy. The issues and methods are demonstrated with two simple examples, one evaluating operating characteristics of the maximum likelihood estimator for the parameters in logistic regression and the other in the context of using the bootstrap to obtain 95% confidence intervals. The results suggest that in many settings, Monte Carlo error may be more substantial than traditionally thought.
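
    A minimal sketch of the two quantities discussed, on stand-in simulation output: the Monte Carlo standard error of an estimate, and the number of replications needed for a target precision:

        import numpy as np

        rng = np.random.default_rng(2)
        R = 2000                                     # number of replications run
        estimates = rng.normal(0.70, 0.05, size=R)   # stand-in per-replication estimates

        mc_se = estimates.std(ddof=1) / np.sqrt(R)   # Monte Carlo standard error

        # replications needed so the 95% MC interval half-width is below delta
        delta = 0.001
        R_needed = int(np.ceil((1.96 * estimates.std(ddof=1) / delta) ** 2))
        print(f"estimate = {estimates.mean():.4f} +/- {1.96 * mc_se:.4f}; "
              f"need ~{R_needed} replications for half-width {delta}")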

  11. Preliminary Thermal Hydraulic Analyses of the Conceptual Core Models with Tubular Type Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Hee Taek; Park, Jong Hark; Park, Cheol

    2006-11-15

    A new research reactor (AHR, Advanced HANARO Reactor) based on the HANARO has been under conceptual development for the future needs of research reactors. A tubular type fuel was considered as one of the fuel options for the AHR. A tubular fuel assembly has several curved fuel plates arranged with a constant small gap to build up cooling channels, and is very similar to an annular pipe with many layers. This report presents a preliminary analysis of the thermal hydraulic characteristics and safety margins for three conceptual core models using tubular fuel assemblies. Four design criteria, namely the fuel temperature, ONB (Onset of Nucleate Boiling) margin, minimum DNBR (Departure from Nucleate Boiling Ratio) and OFIR (Onset of Flow Instability Ratio), were investigated over a range of core flow velocities under normal operating conditions. The primary coolant flow rate based on a conceptual core model was suggested as design information for the process design of the primary cooling system. A computational fluid dynamics analysis was also carried out to evaluate the coolant velocity distributions between tubular channels and the pressure drop characteristics of the tubular fuel assembly.

  12. Basic statistical analyses of candidate nickel-hydrogen cells for the Space Station Freedom

    Science.gov (United States)

    Maloney, Thomas M.; Frate, David T.

    1993-01-01

    Nickel-Hydrogen (Ni/H2) secondary batteries will be implemented as a power source for the Space Station Freedom as well as for other NASA missions. Consequently, characterization tests of Ni/H2 cells from Eagle-Picher, Whittaker-Yardney, and Hughes were completed at the NASA Lewis Research Center. Watt-hour efficiencies of each Ni/H2 cell were measured for regulated charge and discharge cycles as a function of temperature, charge rate, discharge rate, and state of charge. Temperatures ranged from -5 C to 30 C, charge rates ranged from C/10 to 1C, discharge rates ranged from C/10 to 2C, and states of charge ranged from 20 percent to 100 percent. Results from regression analyses and analyses of mean watt-hour efficiencies demonstrated that overall performance was best at temperatures between 10 C and 20 C while the discharge rate correlated most strongly with watt-hour efficiency. In general, the cell with back-to-back electrode arrangement, single stack, 26 percent KOH, and serrated zircar separator and the cell with a recirculating electrode arrangement, unit stack, 31 percent KOH, zircar separators performed best.

  13. Basic statistical analyses of candidate nickel-hydrogen cells for the Space Station Freedom

    Energy Technology Data Exchange (ETDEWEB)

    Maloney, T.M.; Frate, D.T.

    1993-05-01

    Nickel-Hydrogen (Ni/H2) secondary batteries will be implemented as a power source for the Space Station Freedom as well as for other NASA missions. Consequently, characterization tests of Ni/H2 cells from Eagle-Picher, Whittaker-Yardney, and Hughes were completed at the NASA Lewis Research Center. Watt-hour efficiencies of each Ni/H2 cell were measured for regulated charge and discharge cycles as a function of temperature, charge rate, discharge rate, and state of charge. Temperatures ranged from -5 C to 30 C, charge rates ranged from C/10 to 1C, discharge rates ranged from C/10 to 2C, and states of charge ranged from 20 percent to 100 percent. Results from regression analyses and analyses of mean watt-hour efficiencies demonstrated that overall performance was best at temperatures between 10 C and 20 C while the discharge rate correlated most strongly with watt-hour efficiency. In general, the cell with back-to-back electrode arrangement, single stack, 26 percent KOH, and serrated zircar separator and the cell with a recirculating electrode arrangement, unit stack, 31 percent KOH, zircar separators performed best.

  14. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    DEFF Research Database (Denmark)

    Edjabou, Vincent Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona

    2015-01-01

    Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in the literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total, 17 tonnes of waste were sorted into 10-50 waste fractions, organised according to a three-level (tiered) approach facilitating comparison of the waste data between individual sub-areas with different fractionation (waste...

  15. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    Full Text Available This paper provides a review of and introduction to agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (an eight-construct latent structure: manufacturing equipment and technology, process technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed. Hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire. Statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA analysis, and relationships between agile components are tested. The results of this study prove that agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to be agile.

  16. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach. The model is able to correctly separate the two experimental groups. Two different approaches to estimate the thickness of each section of specimen being imaged are introduced. The first approach uses the Darboux frame and Cartan matrix to measure the isophote curvature and the second approach is based

  17. Quantum, classical and semiclassical analyses of photon statistics in harmonic generation

    CERN Document Server

    Bajer, J; Bajer, Jiri; Miranowicz, Adam

    2001-01-01

    In this review, we compare different descriptions of photon-number statistics in harmonic generation processes within quantum, classical and semiclassical approaches. First, we study the exact quantum evolution of harmonic generation by applying numerical methods, including those of Hamiltonian diagonalization and global characteristics. We show explicitly that harmonic generation can indeed serve as a source of nonclassical light. Then, we demonstrate that quasi-stationary sub-Poissonian light can be generated in these quantum processes under conditions corresponding to the so-called no-energy-transfer regime known in classical nonlinear optics. By applying the method of classical trajectories, we demonstrate that the analytical predictions of the Fano factors are in good agreement with the quantum results. On comparing second and higher harmonic generation in the no-energy-transfer regime, we show that the highest noise reduction is achieved in third-harmonic generation with the Fano factor of the ...
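
    The Fano factor central to the comparison is simply the variance-to-mean ratio of the photon counts; a sketch on simulated Poissonian counts (F = 1 for coherent light, F < 1 for sub-Poissonian light), not the harmonic-generation dynamics themselves:

        import numpy as np

        rng = np.random.default_rng(3)
        counts = rng.poisson(lam=9.0, size=100_000)  # coherent light: Poissonian counts

        fano = counts.var() / counts.mean()          # F = Var(n) / <n>
        print(f"Fano factor = {fano:.3f}")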

  18. Chemometric and Statistical Analyses of ToF-SIMS Spectra of Increasingly Complex Biological Samples

    Energy Technology Data Exchange (ETDEWEB)

    Berman, E S; Wu, L; Fortson, S L; Nelson, D O; Kulp, K S; Wu, K J

    2007-10-24

    Characterizing and classifying molecular variation within biological samples is critical for determining fundamental mechanisms of biological processes that will lead to new insights including improved disease understanding. Towards these ends, time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used to examine increasingly complex samples of biological relevance, including monosaccharide isomers, pure proteins, complex protein mixtures, and mouse embryo tissues. The complex mass spectral data sets produced were analyzed using five common statistical and chemometric multivariate analysis techniques: principal component analysis (PCA), linear discriminant analysis (LDA), partial least squares discriminant analysis (PLSDA), soft independent modeling of class analogy (SIMCA), and decision tree analysis by recursive partitioning. PCA was found to be a valuable first step in multivariate analysis, providing insight both into the relative groupings of samples and into the molecular basis for those groupings. For the monosaccharides, pure proteins and protein mixture samples, all of LDA, PLSDA, and SIMCA were found to produce excellent classification given a sufficient number of compound variables calculated. For the mouse embryo tissues, however, SIMCA did not produce as accurate a classification. The decision tree analysis was found to be the least successful for all the data sets, providing neither as accurate a classification nor chemical insight for any of the tested samples. Based on these results we conclude that as the complexity of the sample increases, so must the sophistication of the multivariate technique used to classify the samples. PCA is a preferred first step for understanding ToF-SIMS data that can be followed by either LDA or PLSDA for effective classification analysis. This study demonstrates the strength of ToF-SIMS combined with multivariate statistical and chemometric techniques to classify increasingly complex biological samples
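
    The PCA-then-classify workflow the authors recommend can be sketched with scikit-learn on stand-in spectra (the data, dimensions and class structure below are invented for illustration):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(4)
        # Stand-in peak-intensity spectra: 60 samples x 200 variables, two classes
        X = rng.normal(size=(60, 200))
        y = np.repeat([0, 1], 30)
        X[y == 1, :10] += 1.5                        # class-specific chemical signal

        # PCA for dimension reduction, then LDA for classification
        pipe = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"cross-validated classification accuracy: {acc:.2f}")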

  19. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  20. Structured multiplicity and confirmatory statistical analyses in pharmacodynamic studies using the quantitative electroencephalogram.

    Science.gov (United States)

    Ferber, Georg; Staner, Luc; Boeijinga, Peter

    2011-09-30

    Pharmacodynamic (PD) clinical studies are characterised by a high degree of multiplicity. This multiplicity results from the design of these studies, which typically investigate the effects on a number of biomarkers at various doses and multiple time points. Measurements are taken at many or all points of a "hyper-grid" that can be understood as the cross-product of a number of dimensions, each of which typically has 3-30 discrete values. This exploratory design helps in understanding the phenomena under investigation, but it has made a confirmatory statistical analysis of these studies difficult, so that such an analysis is often missing in this type of study. In this contribution we show that the cross-product structure of PD studies allows several well-known techniques for addressing multiplicity to be combined in an effective way, so that a confirmatory analysis of these studies becomes feasible without unrealistic loss of power. We demonstrate the application of this technique in two studies that use the quantitative EEG (qEEG) as a biomarker for drug activity at the GABA-A receptor. QEEG studies suffer particularly from the curse of multiplicity since, in addition to the common dimensions like dose and time, the qEEG is measured at many locations over the scalp and in a number of frequency bands, which inflates the multiplicity by a factor of about 250.

  1. An educational review of the statistical issues in analysing utility data for cost-utility analysis.

    Science.gov (United States)

    Hunter, Rachael Maree; Baio, Gianluca; Butt, Thomas; Morris, Stephen; Round, Jeff; Freemantle, Nick

    2015-04-01

    The aim of cost-utility analysis is to support decision making in healthcare by providing a standardised mechanism for comparing resource use and health outcomes across programmes of work. The focus of this paper is the denominator of the cost-utility analysis, specifically the methodology and statistical challenges associated with calculating QALYs from patient-level data collected as part of a trial. We provide a brief description of the most common questionnaire used to calculate patient level utility scores, the EQ-5D, followed by a discussion of other ways to calculate patient level utility scores alongside a trial including other generic measures of health-related quality of life and condition- and population-specific questionnaires. Detail is provided on how to calculate the mean QALYs per patient, including discounting, adjusting for baseline differences in utility scores and a discussion of the implications of different methods for handling missing data. The methods are demonstrated using data from a trial. As the methods chosen can systematically change the results of the analysis, it is important that standardised methods such as patient-level analysis are adhered to as best as possible. Regardless, researchers need to ensure that they are sufficiently transparent about the methods they use so as to provide the best possible information to aid in healthcare decision making.
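
    A minimal sketch of the QALY calculation described: trapezoidal area under discounted utility scores at assumed follow-up times (the utility values and the 3.5% annual discount rate are assumptions for illustration, not taken from the paper):

        import numpy as np

        # Hypothetical EQ-5D utility scores at baseline, 6, 12, 18 and 24 months
        t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])      # years from baseline
        u = np.array([0.62, 0.70, 0.74, 0.73, 0.75])

        rate = 0.035                                 # assumed annual discount rate
        ud = u / (1 + rate) ** t                     # discounted utilities

        # QALYs: trapezoidal area under the (discounted) utility curve
        qalys = np.sum((ud[1:] + ud[:-1]) / 2 * np.diff(t))
        print(f"discounted QALYs over 2 years = {qalys:.3f}")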

  2. STATISTIC, PROBABILISTIC, CORRELATION AND SPECTRAL ANALYSES OF REGENERATIVE BRAKING CURRENT OF DC ELECTRIC ROLLING STOCK

    Directory of Open Access Journals (Sweden)

    A. V. Nikitenko

    2014-04-01

    Full Text Available Purpose. This paper defines and analyses the probabilistic and spectral characteristics of the random current in the regenerative braking mode of DC electric rolling stock. Methodology. The elements and methods of probability theory (particularly the theory of stationary and non-stationary processes) and methods of sampling theory are used for processing the arrays of regenerated-current data by PC. Findings. The regenerated current records were obtained from locomotives and trains on Ukrainian railways and from trams in Poland. It was established that the current exhibits both continuous and jump-like variations in time (especially in trams). For the random current in the regenerative braking mode, the functions of mathematical expectation, dispersion and standard deviation were calculated. Histograms, probabilistic characteristics and correlation functions were calculated and plotted for this current as well. It was established that the current in the regenerative braking mode can be considered a stationary, non-ergodic process. Spectral analysis of these records and of the "tail part" of the correlation function revealed weak periodic (or low-frequency) components, known as interharmonics. Originality. Firstly, the theory of non-stationary random processes was adapted for the analysis of the recuperated current, which exhibits both continuous and jump-like variations in time. Secondly, the presence of interharmonics in the stochastic process of the regenerated current was identified for the first time. Finally, the patterns of temporal change of the current correlation function were defined. This makes it possible to apply the correlation-function method in a well-founded way to the identification of electric traction system devices. Practical value. The results of the probabilistic and statistical analysis of the recuperated current allow the quality of the recovered energy and the energy quality indices of electric rolling stock to be estimated in the

  3. Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism

    Directory of Open Access Journals (Sweden)

    Le Bao

    2014-11-01

    Full Text Available Endogenous retroviruses (ERVs are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that are insensitive to differences between query and reference and that are amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.

  4. Perceived conflict in the couple and chronic illness management: Preliminary analyses from the Quebec Health Survey

    Directory of Open Access Journals (Sweden)

    Hudon Catherine

    2006-10-01

    females with more than one chronic condition were more likely to have a negative perception of their general health and mental health. Conclusion The study provides a useful preliminary measure of the importance of living arrangements and the quality of the couple relationship in chronic illness management broadly conceived as a measure of the patient's efforts at self-care and an illness status indicator. Results of this study prod us to examine more closely, within longitudinal designs, the influence of living arrangements and the presence of conflict in the couple on chronic illness management as well as the modifying effect of gender on these associations.

  5. Preliminary Accident Analyses for Conversion of the Massachusetts Institute of Technology Reactor (MITR) from Highly Enriched to Low Enriched Uranium

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, Floyd E. [Argonne National Lab. (ANL), Argonne, IL (United States); Olson, Arne P. [Argonne National Lab. (ANL), Argonne, IL (United States); Wilson, Erik H. [Argonne National Lab. (ANL), Argonne, IL (United States); Sun, Kaichao S. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Newton, Jr., Thomas H. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Hu, Lin-wen [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2013-09-30

    The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context most research and test reactors, both domestic and international, have started a program of conversion to the use of LEU fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (U-Mo) is expected to allow the conversion of U.S. domestic high performance reactors like MITR. This report presents the preliminary accident analyses for MITR cores fueled with LEU monolithic U-Mo alloy fuel with 10 wt% Mo. Preliminary results demonstrate adequate performance, including thermal margin to expected safety limits, for the LEU accident scenarios analyzed.

  6. Analysing the Severity and Frequency of Traffic Crashes in Riyadh City Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Saleh Altwaijri

    2012-12-01

    Full Text Available Traffic crashes in Riyadh city cause losses in the form of deaths, injuries and property damage, in addition to the pain and social tragedy affecting families of the victims. In 2005, a total of 47,341 injury traffic crashes occurred in Riyadh city (19% of all KSA crashes), and 9% of those crashes were severe. Road safety in Riyadh city may have been adversely affected by: high car ownership, migration of people to Riyadh city, a high number of daily trips (about 6 million), high incomes, low-cost petrol, drivers of many different nationalities, young drivers, and tremendous growth in population, which together create a high level of mobility and transport activity in the city. The primary objective of this paper is therefore to explore factors affecting the severity and frequency of road crashes in Riyadh city using appropriate statistical models, with the aim of establishing effective safety policies ready to be implemented to reduce the severity and frequency of road crashes in Riyadh city. Crash data for Riyadh city were collected from the Higher Commission for the Development of Riyadh (HCDR) for a period of five years, from 1425H to 1429H (roughly corresponding to 2004-2008). Crash data were classified into three categories: fatal, serious-injury and slight-injury. Two nominal response models have been developed: a standard multinomial logit model (MNL) and a mixed logit model for injury-related crash data. Due to severe underreporting of slight-injury crashes, binary and mixed binary logistic regression models were also estimated for two categories of severity: fatal and serious crashes. For frequency, two count models such as Negative Binomial (NB) models were employed, and the unit of analysis was 168 HAIs (wards) in Riyadh city. Ward-level crash data are disaggregated by severity of the crash (such as fatal and serious injury crashes). The results from both multinomial and binary response models are found to be fairly consistent but

  7. Geoarchaeology of Ancient Karnak's harbour (Upper Egypt) : preliminary results derived from sedimentological analyses

    Science.gov (United States)

    Ghilardi, M.

    2009-04-01

    This paper aims to detail the first results of a geomorphological study conducted in the western part of the Karnak Temple, Upper Egypt. The geoarchaeological approach privileged here helps to better understand the dynamics of the Nile River in the neighbourhood of the ancient harbour and of the jetty identified by archaeologists. Based on the study of six stratigraphical profiles, realized by the Egyptian Supreme Council of Antiquities, and sixteen manual auger boreholes (up to a maximum depth of 3.50 m) drilled in November 2008, the results clearly indicate the continuous presence of the Nile River westward of the first Pylon. The boreholes were drilled westward and eastward of the ancient fluvial harbour. Fluvial dynamics characterized by flood events, sandy accretions and large Nile silt depositions are presented and discussed here for later palaeoenvironmental reconstruction. The accurate levelling of the different profiles and boreholes, with the help of a topographic survey, allows us to obtain long sedimentological sequences and to correlate the different sedimentary units. Perspectives of research are introduced, including sedimentological analyses comprising grain-size distribution (sieving method) and a magnetic susceptibility study of the different sediments described. Finally, in order to obtain chronostratigraphic sequences, it is also proposed to perform radiocarbon dating on charcoal samples.

  8. Economic and Risk Analyses for SMEs Internationalization Projects. A Preliminary Insight on the Rationale of Business Consulting Firms

    Directory of Open Access Journals (Sweden)

    Elena - Madalina VĂTĂMĂNESCU

    2014-06-01

    Full Text Available The present study is meant to be a first step towards the investigation of two current issues: organizations' demand for highly professional services and the response of business consulting firms as a challenge to ethical imperatives. The main question raised is whether national business consulting firms are able to provide the economic and risk analyses required by the ambitious internationalization projects of small and medium enterprises (SMEs), and if not, what the firms' ethical approach should be. In this respect, preliminary conclusions were drawn after testing the deliverables of several business consulting firms contracted to elaborate intricate economic and risk analyses for an internationalization project developed by a medium enterprise. The proven level of expertise of the contracted firms did not confirm their claims and assurances that they were fit for the job. This is why the rationale of today's business consulting firms should be taken into account for further consideration, while the exigency for increased expertise should become a priority.

  9. Reservoir zonation based on statistical analyses: A case study of the Nubian sandstone, Gulf of Suez, Egypt

    Science.gov (United States)

    El Sharawy, Mohamed S.; Gaafar, Gamal R.

    2016-12-01

    Both reservoir engineers and petrophysicists have been concerned with dividing a reservoir into zones for engineering and petrophysical purposes. Over the decades, several techniques and approaches have been introduced. Among them, the statistical reservoir zonation, stratigraphic modified Lorenz (SML) plot, and principal component and clustering analysis techniques were chosen and applied to the Nubian sandstone reservoir of Palaeozoic - Lower Cretaceous age, Gulf of Suez, Egypt, using five adjacent wells. The studied reservoir consists mainly of sandstone with some intercalated shale layers whose thickness varies from one well to another. The permeability ranges from less than 1 md to more than 1000 md. The statistical reservoir zonation technique, based on core permeability, indicated that the cored interval of the studied reservoir can be divided into two zones. Using reservoir properties such as porosity, bulk density, acoustic impedance and interval transit time also indicated two zones, with an obvious variation in separation depth and zone continuity. The stratigraphic modified Lorenz (SML) plot indicated the presence of more than 9 flow units in the cored interval as well as a high degree of microscopic heterogeneity. On the other hand, principal component and cluster analyses, based on well logging data (gamma ray, sonic, density and neutron), indicated that the whole reservoir can be divided into at least four electrofacies with noticeable variation in reservoir quality, as correlated with the measured permeability. Furthermore, the continuity or discontinuity of the reservoir zones can be determined using this analysis.

  10. Nomothetic outcome assessment in counseling and psychotherapy: Development and preliminary psychometric analyses of the Depression/Anxiety Negative Affect Scale

    Directory of Open Access Journals (Sweden)

    Scott T. Meier

    2012-12-01

    Full Text Available Negative affect (NA) plays a significant role in the initiation, persistence, and response to psychotherapy of many client problems (Moses & Barlow, 2006). This report describes the development of a brief NA measure, the Depression/Anxiety Negative Affect (DANA) scale, and preliminary analyses of its psychometric properties. An initial pool of DANA items was selected on the basis of a review of relevant literature on emotion science and counseling outcomes, related tests, and feedback from psychotherapists as part of a pilot test. The DANA was evaluated in two representative clinical samples in which psychotherapists produced a total of 363 session ratings with 81 clients. DANA scores showed adequate internal consistency, evidence of convergent and discriminant validity, and sensitivity to change over the course of psychotherapy. Effect sizes (ES) of DANA scores consistently equaled or exceeded the average ES of .68 found for scales assessing the outcomes of counseling and psychotherapy in meta-analytic studies (Smith & Glass, 1977). ESs greater than 1 were found on DANA variables for clients whose therapists rated them as experiencing, rather than avoiding, NA.

  11. Preliminary PM2.5 and PM10 fractions source apportionment complemented by statistical accuracy determination

    Directory of Open Access Journals (Sweden)

    Samek Lucyna

    2016-03-01

    Full Text Available Samples of PM10 and PM2.5 fractions were collected between the years 2010 and 2013 in the urban area of Krakow, Poland. Numerous types of air pollution sources are present at the site; these include steel and cement industries, traffic, municipal emission sources and biomass burning. Energy dispersive X-ray fluorescence was used to determine the concentrations of the following elements: Cl, K, Ca, Ti, Mn, Fe, Ni, Cu, Zn, Br, Rb, Sr, As and Pb within the collected samples. Defining the elements as indicators, airborne particulate matter (APM) source profiles were prepared by applying principal component analysis (PCA), factor analysis (FA) and multiple linear regression (MLR). Four different factors identifying possible air pollution sources for both PM10 and PM2.5 fractions were attributed to municipal emissions, biomass burning, steel industry, traffic, cement and metal industry, Zn and Pb industry and secondary aerosols. The uncertainty associated with each loading was determined by a statistical simulation method that took into account the individual elemental concentrations and their corresponding uncertainties. It will be possible to identify two or more sources of air particulate matter pollution for a single factor in cases where it is extremely difficult to separate the sources.

  12. Preliminary Retrospective Analysis of Daily Tomotherapy Output Constancy Checks Using Statistical Process Control.

    Directory of Open Access Journals (Sweden)

    Emilio Mezzenga

    Full Text Available The purpose of this study was to retrospectively evaluate the results from a Helical TomoTherapy Hi-Art treatment system relating to quality controls based on daily static and dynamic output checks, using statistical process control methods. Individual value X-charts, exponentially weighted moving average charts, and process capability and acceptability indices were used to monitor the treatment system performance. Daily output values measured from January 2014 to January 2015 were considered. The results showed that, although the process was in control overall, there was an out-of-control situation coinciding with the principal maintenance intervention on the treatment system. In particular, process capability indices showed a decreasing percentage of points in control, which was, however, acceptable according to AAPM TG148 guidelines. Our findings underline the importance of restricting the acceptable range of daily output checks and suggest a future line of investigation for a detailed process control of daily output checks for the Helical TomoTherapy Hi-Art treatment system.
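
    A sketch of the EWMA chart used in such monitoring, on hypothetical daily output values with an assumed target and process standard deviation:

        import numpy as np

        x = np.array([100.1, 99.8, 100.3, 99.9, 100.0,
                      100.4, 100.9, 101.2, 101.5, 101.8])   # daily outputs (% of nominal)
        mu0, sigma, lam = 100.0, 0.5, 0.2                   # target, sd, smoothing constant

        z, prev = [], mu0
        for xi in x:
            prev = lam * xi + (1 - lam) * prev              # EWMA recursion
            z.append(prev)
        z = np.array(z)

        h = 3 * sigma * np.sqrt(lam / (2 - lam))            # steady-state 3-sigma limits
        print("EWMA:", np.round(z, 2))
        print("out-of-control days:", np.where(np.abs(z - mu0) > h)[0])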

  13. Exploring the statistical and clinical impact of two interim analyses on the Phase II design with option for direct assignment.

    Science.gov (United States)

    An, Ming-Wen; Mandrekar, Sumithra J; Edelman, Martin J; Sargent, Daniel J

    2014-07-01

    The primary goal of Phase II clinical trials is to better understand a treatment's safety and efficacy to inform a Phase III go/no-go decision. Many Phase II designs have been proposed, incorporating randomization, interim analyses, adaptation, and patient selection. The Phase II design with an option for direct assignment (i.e. stop randomization and assign all patients to the experimental arm based on a single interim analysis (IA) at 50% accrual) was recently proposed [An et al., 2012]. We discuss this design in the context of existing designs, and extend it from a single-IA to a two-IA design. We compared the statistical properties and clinical relevance of the direct assignment design with two IA (DAD-2) versus a balanced randomized design with two IA (BRD-2) and a direct assignment design with one IA (DAD-1), over a range of response rate ratios (2.0-3.0). The DAD-2 has minimal loss in power. The direct assignment design, especially with two IA, provides a middle ground with desirable statistical properties and likely appeal to both clinicians and patients. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Linkage analyses in type I diabetes mellitus using CASPAR, a software and statistical program for conditional analysis of polygenic diseases.

    Science.gov (United States)

    Buhler, J; Owerbach, D; Schäffer, A A; Kimmel, M; Gabbay, K H

    1997-01-01

    We have developed software and statistical tools for linkage analysis of polygenic diseases. We use type I diabetes mellitus (insulin-dependent diabetes mellitus, IDDM) as our model system. Two susceptibility loci (IDDM1 on 6p21 and IDDM2 on 11p15) are well established, and recent genome searches suggest the existence of other susceptibility loci. We have implemented CASPAR, a software tool that makes it possible to test for linkage quickly and efficiently using multiple polymorphic DNA markers simultaneously in nuclear families consisting of two unaffected parents and a pair of affected siblings (ASP). We use a simulation-based method to determine whether lod scores from a collection of ASP tests are significant. We test our new software and statistical tools to assess linkage of IDDM5 and IDDM7 conditioned on analyses with 1 or 2 other unlinked type I diabetes susceptibility loci. The results from the CASPAR analysis suggest that conditioning of IDDM5 on IDDM1 and IDDM4, and of IDDM7 on IDDM1 and IDDM2 provides significant benefits for the genetic analysis of polygenic loci.

  15. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses.

    Science.gov (United States)

    Buttigieg, Pier Luigi; Ramette, Alban

    2014-12-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynamic, web-based resource providing accessible descriptions of numerous multivariate techniques relevant to microbial ecologists. A combination of interactive elements allows users to discover and navigate between methods relevant to their needs and examine how they have been used by others in the field. We have designed GUSTA ME to become a community-led and -curated service, which we hope will provide a common reference and forum to discuss and disseminate analytical techniques relevant to the microbial ecology community. © 2014 The Authors. FEMS Microbiology Ecology published by John Wiley & Sons Ltd on behalf of Federation of European Microbiological Societies.

  16. Analyses of simulations of three-dimensional lattice proteins in comparison with a simplified statistical mechanical model of protein folding.

    Science.gov (United States)

    Abe, H; Wako, H

    2006-07-01

    Folding and unfolding simulations of three-dimensional lattice proteins were analyzed using a simplified statistical mechanical model in which their amino acid sequences and native conformations were incorporated explicitly. Using this statistical mechanical model, under the assumption that only interactions between amino acid residues within a local structure in a native state are considered, the partition function of the system can be calculated for a given native conformation without any adjustable parameter. The simulations were carried out for two different native conformations, for each of which two foldable amino acid sequences were considered. The native and non-native contacts between amino acid residues occurring in the simulations were examined in detail and compared with the results derived from the theoretical model. The equilibrium thermodynamic quantities (free energy, enthalpy, entropy, and the probability of each amino acid residue being in the native state) at various temperatures obtained from the simulations and the theoretical model were also examined in order to characterize the folding processes that depend on the native conformations and the amino acid sequences. Finally, the free energy landscapes were discussed based on these analyses.

  17. Statistical model of fractures and deformation zones. Preliminary site description, Laxemar subarea, version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hermanson, Jan; Forssberg, Ola [Golder Associates AB, Stockholm (Sweden); Fox, Aaron; La Pointe, Paul [Golder Associates Inc., Redmond, WA (United States)

    2005-10-15

    The goal of this summary report is to document the data sources, software tools, experimental methods, assumptions, and model parameters in the discrete-fracture network (DFN) model for the local model volume in Laxemar, version 1.2. The model parameters presented herein are intended for use by other project modeling teams. Individual modeling teams may elect to simplify or use only a portion of the DFN model, depending on their needs. This model is not intended to be a flow model or a mechanical model; as such, only the geometrical characterization is presented. The derivations of the hydraulic or mechanical properties of the fractures or their subsurface connectivities are not within the scope of this report. This model represents analyses carried out on particular data sets. If additional data are obtained, or values for existing data are changed or excluded, the conclusions reached in this report, and the parameter values calculated, may change as well. The model volume is divided into two subareas; one located on the Simpevarp peninsula adjacent to the power plant (Simpevarp), and one further to the west (Laxemar). The DFN parameters described in this report were determined by analysis of data collected within the local model volume. As such, the final DFN model is only valid within this local model volume and the modeling subareas (Laxemar and Simpevarp) within.

  18. Statistical Analyses of Optimum Partial Replacement of Cement by Fly Ash Based on Complete Consumption of Calcium Hydroxide

    Directory of Open Access Journals (Sweden)

    Ouypornprasert Winai

    2016-01-01

    Full Text Available The objectives of this technical paper were to propose the optimum partial replacement of cement by fly ash based on the complete consumption of calcium hydroxide from the hydration reactions of cement and on the long-term strength activity index based on equivalent calcium silicate hydrate, as well as to address the propagation of uncertainty due to randomness inherent in the main chemical compositions of cement and fly ash. Firstly, the hydration and pozzolanic reactions and their stoichiometry were reviewed. Then the optimum partial replacement of cement by fly ash was formulated. After that, the propagation of uncertainty due to the main chemical compositions of cement and fly ash was discussed, and the reliability analyses for applying the suitable replacement were reviewed. Finally, the applicability of the concepts mentioned above was demonstrated based on statistical data for available materials. The results of the analyses were consistent with test results reported by other researchers. The results of this study provide guidelines for the suitable utilization of fly ash for partial replacement of cement. It is worth noting that these concepts can be extended to optimize the partial replacement of cement by other types of pozzolan, as described in the authors' other papers.

  19. Dumb-bell galaxies in southern clusters: Catalog and preliminary statistical results

    Science.gov (United States)

    Vettolani, G.; Gregorini, L.; Parma, P.; Deruiter, H. R.

    1990-01-01

    The dominant galaxy of a rich cluster is often an object whose formation and evolution is closely connected to the dynamics of the cluster itself. Hoessel (1980) and Schneider et al. (1983) estimate that 50 percent of dominant galaxies are either of the dumb-bell type or have companions at projected distances less than 20 kpc, far in excess of the number expected from chance projection (see also Rood and Leir 1979). At present there is no complete sample of these objects, with the exception of the listing of dumb-bell galaxies in BM type I and I-II clusters in the Abell statistical sample of Rood and Leir (1979). Recent dynamical studies of dumb-bell galaxies in clusters (Valentijn and Casertano, 1988) still suffer from inhomogeneity of the sample. The fact that it is a mixture of optically and radio selected objects may have introduced unknown biases; for instance, if the probability of radio emission is enhanced by the presence of close companions (Stocke, 1978; Heckman et al., 1985; Vettolani and Gregorini, 1988), a bias could be present in their velocity distribution. However, this situation is bound to improve: a new sample of Abell clusters in the Southern Hemisphere has been constructed (Abell et al., 1988, hereafter ACO), which has several advantages over the original northern catalog. The plate material (IIIaJ plates) is of better quality and reaches fainter magnitudes. This makes it possible to classify the cluster types with a higher degree of accuracy, and to do so down to fainter magnitudes. The authors therefore decided to reconsider the whole problem by constructing a new sample of dumb-bell galaxies homogeneously selected from the ACO survey. Details of the classification criteria are given.

  20. Statistical analysis of individual participant data meta-analyses: a comparison of methods and recommendations for practice.

    Directory of Open Access Journals (Sweden)

    Gavin B Stewart

    BACKGROUND: Individual participant data (IPD) meta-analyses that obtain "raw" data from studies rather than summary data typically adopt a "two-stage" approach to analysis, whereby IPD within trials generate summary measures, which are combined using standard meta-analytical methods. Recently, a range of "one-stage" approaches, which combine all individual participant data in a single meta-analysis, have been suggested as providing a more powerful and flexible approach. However, they are more complex to implement and require statistical support. This study uses a dataset to compare "two-stage" and "one-stage" models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way. METHODS AND FINDINGS: We included data from 24 randomised controlled trials, evaluating antiplatelet agents, for the prevention of pre-eclampsia in pregnancy. We performed two-stage and one-stage IPD meta-analyses to estimate the overall treatment effect and to explore potential treatment interactions whereby particular types of women and their babies might benefit differentially from receiving antiplatelets. Two-stage and one-stage approaches gave similar results, showing a benefit of using antiplatelets (relative risk 0.90, 95% CI 0.84 to 0.97). Neither approach suggested that any particular type of women benefited more or less from antiplatelets. There were no material differences in results between different types of one-stage model. CONCLUSIONS: For these data, two-stage and one-stage approaches to analysis produce similar results. Although one-stage models offer a flexible environment for exploring model structure and are useful where across-study patterns relating to types of participant, intervention and outcome mask similar relationships within trials, the additional insights provided by their usage may not outweigh the costs of statistical support for routine application in syntheses of randomised controlled trials.
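
    The two-stage approach summarised above reduces each trial to a summary effect before pooling. As a rough illustration (not the authors' code), the sketch below pools per-trial log relative risks by fixed-effect inverse-variance weighting; the trial counts are invented for the example:

      import numpy as np

      # Hypothetical per-trial 2x2 counts: (events_treated, n_treated, events_control, n_control)
      trials = [(30, 200, 40, 198), (12, 150, 18, 149), (55, 400, 60, 395)]

      log_rr, var = [], []
      for a, n1, c, n2 in trials:
          # Stage 1: per-trial log relative risk and its approximate variance
          log_rr.append(np.log((a / n1) / (c / n2)))
          var.append(1 / a - 1 / n1 + 1 / c - 1 / n2)

      # Stage 2: fixed-effect inverse-variance pooling of the trial summaries
      w = 1 / np.array(var)
      pooled = np.sum(w * np.array(log_rr)) / np.sum(w)
      se = np.sqrt(1 / np.sum(w))
      lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
      print(f"Pooled RR = {np.exp(pooled):.2f}, 95% CI {lo:.2f} to {hi:.2f}")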

  1. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany;

    2011-01-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses on the basis of the DL estimator and four alternative estimators, as well as distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies of p-values, 95% confidence intervals (CIs) in statistically significant meta-analyses, and the degree (percentage) of statistical heterogeneity (e.g. I(2)) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four alternative estimators yielded p-values and CIs that could be interpreted as discordant in up to 11.6% or 6% of the included meta-analyses.

  2. Evaluating the statistical conclusion validity of weighted mean results in meta-analysis by analysing funnel graph diagrams.

    Science.gov (United States)

    Elvik, R

    1998-03-01

    The validity of weighted mean results estimated in meta-analysis has been criticized. This paper presents a set of simple statistical and graphical techniques that can be used in meta-analysis to evaluate common points of criticism. The graphical techniques are based on funnel graph diagrams. The problems discussed, and the techniques for dealing with them, include: (1) the so-called 'apples and oranges' problem, stating that mean results in meta-analysis tend to gloss over important differences that should be highlighted. A test of the homogeneity of results is described for testing the presence of this problem. If results are highly heterogeneous, a random effects model of meta-analysis is more appropriate than the fixed effects model. (2) The possible presence of skewness in a sample of results. This can be tested by comparing the mode, median and mean of the results in the sample. (3) The possible presence of more than one mode in a sample of results. This can be tested by forming a frequency distribution of the results and examining the shape of this distribution. (4) The sensitivity of the mean to the possible presence of atypical results (outliers) can be tested by comparing the overall mean to the mean of all results except the one suspected of being atypical. (5) The possible presence of publication bias can be tested by visual inspection of funnel graph diagrams in which data points have been sorted according to statistical significance and direction of effect. (6) The possibility of underestimating the standard error of the mean in meta-analyses by using multiple, correlated results from the same study as the unit of analysis can be addressed by using the jack-knife technique for estimating the uncertainty of the mean. Brief examples, taken from road safety research, are given of all these techniques.
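
    Several of the checks listed above are easy to script. The sketch below, with invented study effects, implements the homogeneity test (point 1), the mean-median skewness screen (point 2), and a leave-one-out outlier sensitivity check (point 4); it is an illustration of the ideas rather than the paper's procedure:

      import numpy as np
      from scipy import stats

      # Invented study effects and standard errors
      effects = np.array([0.42, 0.55, 0.31, 0.70, 0.48, 0.90])
      se = np.array([0.10, 0.15, 0.12, 0.20, 0.11, 0.30])
      w = 1 / se**2

      # (1) Homogeneity: Cochran's Q; a small p-value favours a random effects model
      mean_fixed = np.sum(w * effects) / np.sum(w)
      Q = np.sum(w * (effects - mean_fixed) ** 2)
      p_hom = stats.chi2.sf(Q, df=len(effects) - 1)

      # (2) Skewness screen: a large mean-median gap hints at an asymmetric sample
      skew_hint = np.mean(effects) - np.median(effects)

      # (4) Outlier sensitivity: weighted mean with each result left out in turn
      loo = [(np.sum(w * effects) - w[i] * effects[i]) / (np.sum(w) - w[i])
             for i in range(len(effects))]
      print(p_hom, skew_hint, np.round(loo, 3))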

  3. Statistical Contact Angle Analyses with the High-Precision Drop Shape Analysis (HPDSA) Approach: Basic Principles and Applications

    Directory of Open Access Journals (Sweden)

    Florian Heib

    2016-11-01

    Surface science, which includes the preparation, development and analysis of surfaces and coatings, is essential in fundamental and applied research as well as in engineering and industrial research. Contact angle measurements using sessile drop techniques are commonly used to characterize coated surfaces or surface modifications. Well-defined surface structures at both the nanoscopic and microscopic level can be achieved, but their reliable characterization by means of contact angle measurements, and the interpretation of those measurements, often remains an open question. We therefore focused our research effort on one main problem of the surface science community: the correct and valid definition and measurement of contact angles. In this regard, we developed the high-precision drop shape analysis (HPDSA), which involves a complex transformation of images from sessile drop experiments to Cartesian coordinates and opens up the possibility of a physically meaningful contact angle calculation. To address the need for reproducible contact angle determination, we developed three easily adaptable statistical analysis procedures. In the following, the basic principles of HPDSA are explained and its applications are illustrated; the unique potential of the approach is demonstrated by means of selected examples.

  4. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  5. Application of both a physical theory and statistical procedure in the analyses of an in vivo study of aerosol deposition

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, K.H.; Swift, D.L. [Johns Hopkins Univ., Baltimore, MD (United States)]; Yang, Y.H. [Univ. of North Carolina, Chapel Hill, NC (United States)] [and others]

    1995-12-01

    Regional deposition of inhaled aerosols in the respiratory tract is a significant factor in assessing the biological effects from exposure to a variety of environmental particles. Understanding the deposition efficiency of inhaled aerosol particles in the nasal and oral airways can help evaluate doses to the extrathoracic region as well as to the lung. Dose extrapolation from laboratory animals to humans has been questioned due to significant physiological and anatomical variations. Although human studies are considered ideal for obtaining in vivo toxicity information important in risk assessment, the number of subjects in such studies is often small compared to epidemiological and animal studies. This study measured in vivo the nasal airway dimensions and the extrathoracic deposition of ultrafine aerosols in 10 normal adult males. Variability among individuals was significant. The nasal geometry of each individual was characterized at a resolution of 3 mm using magnetic resonance imaging (MRI) and acoustic rhinometry (AR). The turbulent diffusion theory was used to describe the nonlinear nature of extrathoracic aerosol deposition. To determine what dimensional features of the nasal airway were responsible for the marked differences in particle deposition, the MIXed-effects NonLINear Regression (MIXNLIN) procedure was used to account for the random effect of repeated measurements on the same subject. Using both turbulent diffusion theory and MIXNLIN, the ultrafine particle deposition is correlated with nasal dimensions measured by the surface area, minimum cross-sectional area, and complexity of the airway shape. The combination of MRI and AR is useful for characterizing both detailed nasal dimensions and temporal changes in nasal patency. We conclude that a suitable statistical procedure incorporated with existing physical theories must be used in data analyses for experimental studies of aerosol deposition that involve a relatively small number of human subjects.

  6. The price of electricity from private power producers: Stage 2, Expansion of sample and preliminary statistical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Comnes, G.A.; Belden, T.N.; Kahn, E.P.

    1995-02-01

    The market for long-term bulk power is becoming increasingly competitive and mature. Given that many privately developed power projects have been or are being developed in the US, it is possible to begin to evaluate the performance of the market by analyzing its revealed prices. Using a consistent method, this paper presents levelized contract prices for a sample of privately developed US generation properties. The sample includes 26 projects with a total capacity of 6,354 MW. Contracts are described in terms of their choice of technology, choice of fuel, treatment of fuel price risk, geographic location, dispatchability, expected dispatch niche, and size. The contract price analysis shows that gas technologies clearly stand out as the most attractive. At an 80% capacity factor, coal projects have an average 20-year levelized price of $0.092/kWh, whereas natural gas combined cycle and/or cogeneration projects have an average price of $0.069/kWh. Within each technology type subsample, however, there is considerable variation. Prices for natural gas combustion turbines and one wind project are also presented. A preliminary statistical analysis is conducted to understand the relationship between price and four categories of explanatory factors including product heterogeneity, geographic heterogeneity, economic and technological change, and other buyer attributes (including avoided costs). Because of residual price variation, we are unable to accept the hypothesis that electricity is a homogeneous product. Instead, the analysis indicates that buyer value still plays an important role in the determination of price for competitively-acquired electricity.
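
    For readers unfamiliar with the metric, a levelized price is the constant price per kWh whose discounted revenue stream equals that of the actual contract payments. A minimal sketch follows; the discount rate, escalation, plant size and capacity factor are invented for illustration and are not the study's assumptions:

      import numpy as np

      def levelized_price(prices, energy_kwh, rate):
          """Constant $/kWh whose discounted revenues match the actual price stream."""
          t = np.arange(len(prices))
          d = (1.0 + rate) ** -t                      # discount factors
          return np.sum(d * np.asarray(prices) * energy_kwh) / np.sum(d * energy_kwh)

      years = 20
      prices = 0.06 * 1.03 ** np.arange(years)        # $/kWh with 3%/yr escalation
      energy = 100_000 * 8760 * 0.80                  # kWh/yr: 100 MW at 80% capacity factor
      print(f"{levelized_price(prices, energy, rate=0.08):.3f} $/kWh")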

  7. The effects of clinical and statistical heterogeneity on the predictive values of results from meta-analyses

    NARCIS (Netherlands)

    Melsen, W G; Rovers, M M; Bonten, M J M; Bootsma, M C J

    2014-01-01

    Variance between studies in a meta-analysis will exist. This heterogeneity may be of clinical, methodological or statistical origin. The last of these is quantified by the I(2)-statistic. We investigated, using simulated studies, the accuracy of I(2) in the assessment of heterogeneity and the effect of heterogeneity on the predictive value of meta-analysis results.

  8. Analysing spatial trends in referral patterns to cancer genetics services: a preliminary investigation of regional variations in Wales.

    Science.gov (United States)

    McDonald, Kevin; Higgs, Gary; Iredale, Rachel; Tempest, Vanessa; Gray, Jonathon

    2004-11-01

    This paper discusses spatial trends in referral patterns to a cancer genetics service. It presents a literature review outlining the paucity of existing research, a preliminary analysis at the Unitary Authority level in Wales, and advances a programme of further research to be conducted at a more detailed spatial level. The preliminary analysis shows a weak negative relationship between referral rates from primary care and social deprivation by Unitary Authority (Spearman rank correlation coefficient, rho = -0.38). There is also a weak positive relationship between average settlement size and referral rates (rho = +0.28), which taken together may indicate that primary care practices in affluent urban areas are more likely to refer than those in poorer rural areas. Future research will be conducted at a finer spatial scale, and will take into account characteristics of primary care practices and the patients being referred, amongst other variables.

  9. Preliminary correlations of feature strength in spark-induced breakdown spectroscopy of bioaerosols with concentrations measured in laboratory analyses

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Morgan S.; Bauer, Amy J. Ray

    2010-05-01

    We present preliminary results that show good correlation between the elemental compositions of three bioaerosol samples, as measured in the laboratory by combustion analysis and proton-induced x-ray emission, and spark-induced breakdown spectroscopy signals integrated over the entire emission time profiles. Atomic features (Ca, Al, Fe, and Si) and molecular features (CN, N2(+), and OH) were observed and compared with the laboratory data.

  10. Lack of association between dopamine-β hydroxylase gene and a history of suicide attempt in schizophrenia: comparison of molecular and statistical haplotype analyses.

    Science.gov (United States)

    Howe, Aaron S; Leung, Tiffany; Bani-Fatemi, Ali; Souza, Renan; Tampakeras, Maria; Zai, Clement; Kennedy, James L; Strauss, John; De Luca, Vincenzo

    2014-06-01

    In the present study, we examined whether there was an association between dopamine-β hydroxylase (DBH) promoter polymorphisms (a 5'-ins/del and a GTn repeat) and a history of suicide attempt in 223 individuals with chronic schizophrenia, using statistical and molecular analyses. Within the genetic association study design, we compared the statistical haplotype phase with the molecular phase produced by the amplicon size analysis. The two DBH polymorphisms were analysed using the Applied Biosystems 3130, and the statistical analyses were carried out using UNPHASED v.3.1.5 and PHASE v.2.1.1 to determine the haplotype frequencies and infer the phase in each patient. Then, the DBH polymorphisms were incorporated into the Haploscore analysis to test the association with a history of suicide attempt. In our sample, 62 individuals had a history of suicide attempt. There was no association between DBH polymorphisms and a history of suicide attempt across the different analytical strategies applied. There was no significant difference between the haplotype frequencies produced by the amplicon size analysis and the statistical analytical strategies. However, some of the haplotype pairs inferred in the PHASE analysis were inconsistent with the molecular haplotype size measured by the ABI 3130. The amplicon size analysis proved to be the most accurate method for using the haplotype as a possible genetic marker in future testing. Although the results were not significant, further molecular analyses of the DBH gene and other candidate genes can clarify the utility of the molecular phase in psychiatric genetics and personalized medicine.

  11. Preliminary study to characterize plastic polymers using elemental analyser/isotope ratio mass spectrometry (EA/IRMS).

    Science.gov (United States)

    Berto, Daniela; Rampazzo, Federico; Gion, Claudia; Noventa, Seta; Ronchi, Francesca; Traldi, Umberto; Giorgi, Giordano; Cicero, Anna Maria; Giovanardi, Otello

    2017-06-01

    Plastic waste is a growing global environmental problem, particularly in marine ecosystems, given its persistence. The monitoring of plastic waste has become a global issue, as reflected in several surveillance guidelines proposed by Regional Sea Conventions (OSPAR, UNEP) and mandated by the EU Marine Strategy Framework Directive. Policy responses to plastic waste vary at many levels, ranging from beach clean-up to bans on the commercialization of plastic bags and to Regional Plans for waste management and recycling. Moreover, in recent years, the production of plant-derived biodegradable plastic polymers has assumed increasing importance. This study reports the first preliminary characterization of carbon stable isotopes (δ(13)C) of different plastic polymers (petroleum- and plant-derived) in order to increase the dataset of isotopic values as a tool for further investigation in different fields of polymer research as well as in marine environment surveillance. The δ(13)C values determined in different packaging for food uses reflect the plant origin of "BIO" materials, whereas the recycled plastic materials displayed δ(13)C signatures between the plant- and petroleum-derived polymer sources. In a preliminary estimation, the different colours of plastic did not affect the variability of δ(13)C values, whereas the abiotic and biotic degradation processes that occurred in the plastic materials collected on beaches and in seawater showed less negative δ(13)C values. A preliminary experimental field test confirmed these results. The advantages offered by isotope ratio mass spectrometry with respect to other analytical methods used to characterize the composition of plastic polymers are: high sensitivity, the small amount of material required, rapidity of analysis, low cost, and no limitation for black/dark samples compared with spectroscopic analysis.

  12. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses.

    Science.gov (United States)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany; Thabane, Lehana; Gluud, Christian

    2011-12-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses on the basis of the DL estimator and four alternative estimators, as well as distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies of p-values, 95% confidence intervals (CIs) in statistically significant meta-analyses, and the degree (percentage) of statistical heterogeneity (e.g. I(2)) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four alternative estimators yielded p-values and CIs that could be interpreted as discordant in up to 11.6% or 6% of the included meta-analyses, depending on whether a normal distribution or a t-distribution of the intervention effect estimates was assumed. Large discrepancies were observed for the measures of degree of heterogeneity when comparing DL with each of the four alternative estimators. Estimating the degree (percentage) of heterogeneity on the basis of less biased between-trial variance estimators seems preferable to current practice. Disclosing the inferential sensitivity of p-values and CIs may also be necessary when borderline significant results have substantial impact on the conclusion.
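
    For reference, the conventional DL between-trial variance estimator and the I(2) heterogeneity percentage discussed above can be computed as follows (illustrative effect estimates only; the four alternative estimators compared in the study are not reproduced here):

      import numpy as np

      def dersimonian_laird(y, se):
          """DL between-trial variance tau^2 and heterogeneity I^2 (%)."""
          y, w = np.asarray(y), 1 / np.asarray(se) ** 2
          mu_fixed = np.sum(w * y) / np.sum(w)
          Q = np.sum(w * (y - mu_fixed) ** 2)
          k = len(y)
          c = np.sum(w) - np.sum(w**2) / np.sum(w)
          tau2 = max(0.0, (Q - (k - 1)) / c)
          i2 = 100 * max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
          return tau2, i2

      print(dersimonian_laird([0.10, 0.30, -0.20, 0.40], [0.10, 0.12, 0.15, 0.20]))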

  13. An Approximate Method for Calculation of Mean Statistical Value of Ship Service Speed on a Given Shipping Line, Useful in Preliminary Design Stage

    Directory of Open Access Journals (Sweden)

    Żelazny Katarzyna

    2015-01-01

    During ship design, the ship's service speed is one of the crucial parameters that determine future economic effects. As sufficiently exact calculation methods applicable to the preliminary design stage are lacking, the so-called contract speed, which a ship reaches in calm water, is usually applied. In the paper [11] a parametric method for calculation of total ship resistance in actual weather conditions (wind, waves, sea current) was presented. This paper presents a parametric model of the ship propulsion system (screw propeller - propulsion engine) as well as a calculation method, based on both models, for the mean statistical value of ship service speed in the seasonal weather conditions occurring on shipping lines. The method makes use of only basic design parameters and may be applied in the preliminary design stage.

  14. An innovative statistical approach for analysing non-continuous variables in environmental monitoring: assessing temporal trends of TBT pollution.

    Science.gov (United States)

    Santos, José António; Galante-Oliveira, Susana; Barroso, Carlos

    2011-03-01

    The current work presents an innovative statistical approach to model ordinal variables in environmental monitoring studies. An ordinal variable has values that can only be compared as "less", "equal" or "greater", and no information is available about the size of the difference between two particular values. The ordinal variable examined in this study is the vas deferens sequence (VDS) used in imposex (superimposition of male sexual characters onto prosobranch females) field assessment programmes for monitoring tributyltin (TBT) pollution. The statistical methodology presented here is the ordered logit regression model. It assumes that the VDS is an ordinal variable whose values track a process of imposex development that can be considered continuous in both the biological and statistical senses and can be described by a latent, non-observable continuous variable. This model was applied to the case study of Nucella lapillus imposex monitoring surveys conducted on the Portuguese coast between 2003 and 2008 to evaluate the temporal evolution of TBT pollution in this country. In order to produce more reliable conclusions, the proposed model includes covariates that may influence the imposex response besides TBT (e.g. shell size). The model also provides an analysis of the environmental risk associated with TBT pollution by estimating the probability of the occurrence of females with VDS ≥ 2 in each year, according to OSPAR criteria. We consider that the proposed application of this statistical methodology has great potential in environmental monitoring whenever there is a need to model variables that can only be assessed through an ordinal scale of values.
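
    An ordered logit model of the kind described can be fitted, for example, with statsmodels (OrderedModel, available from version 0.13); the VDS scores, shell sizes and survey years below are invented placeholders, not the Portuguese monitoring data:

      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(1)
      df = pd.DataFrame({
          "vds": rng.integers(0, 5, 200),          # ordinal VDS stage 0-4 (invented)
          "size": rng.normal(25, 3, 200),          # shell size covariate (invented)
          "year": rng.integers(2003, 2009, 200),   # survey year (invented)
      })

      model = OrderedModel(df["vds"], df[["size", "year"]], distr="logit")
      res = model.fit(method="bfgs", disp=False)

      # Estimated P(VDS = k) for the first observation; summing stages >= 2 gives
      # the OSPAR-style risk indicator mentioned in the abstract
      probs = np.asarray(res.predict(df[["size", "year"]].iloc[:1]))
      print(res.params, float(probs[0, 2:].sum()))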

  15. Preliminary analyses of the deep geoenvironmental characteristics for the deep borehole disposal of high-level radioactive waste in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Youl; Lee, Min Soo; Choi, Heui Joo; Kim, Geon Young; Kim, Kyung Su [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]

    2016-06-15

    Spent fuels from nuclear power plants, as well as high-level radioactive waste from the recycling of spent fuels, should be safely isolated from the human environment for an extremely long time. Recently, meaningful studies on the development of deep borehole radioactive waste disposal systems at 3-5 km depth have been carried out in the USA and some countries in Europe, owing to great advances in deep borehole drilling technology. In this paper, domestic deep geoenvironmental characteristics are preliminarily investigated to analyze the applicability of deep borehole disposal technology in Korea. To do this, state-of-the-art technologies in the USA and some European countries are reviewed, and geological and geothermal data from deep boreholes drilled for geothermal use are analyzed. Based on the results on crystalline rock depth, the geothermal gradient, and the spent fuel types generated in Korea, a preliminary deep borehole disposal concept, including a disposal canister and sealing system, is suggested.

  16. [Selection of a statistical model for evaluation of the reliability of the results of toxicological analyses. I. Discussion on selected statistical models for evaluation of the systems of control of the results of toxicological analyses].

    Science.gov (United States)

    Antczak, K; Wilczyńska, U

    1980-01-01

    Two statistical models for the evaluation of toxicological study results are presented. Model I, after R. Hoschek and H. J. Schittke (2), involves: 1. elimination of values deviating from most results, by Grubbs' method (2); 2. analysis of the differences between the results obtained by the participants of the action and the tentatively assumed value; 3. evaluation of significant differences between the reference value and the average value for a given series of measurements; 4. thorough evaluation of laboratories based on the evaluation coefficient fx. In Model II, after Keppler et al., the median is assumed as the criterion for evaluating the results. Individual evaluation of laboratories is performed on the basis of: 1. an adjusted t-test; 2. a linear regression test.
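
    Step 1 of Model I, elimination of outlying values by Grubbs' method, can be sketched as follows (two-sided, single-outlier version; the laboratory results are invented):

      import numpy as np
      from scipy import stats

      def grubbs_outlier(x, alpha=0.05):
          """Index of the most extreme value if Grubbs' test flags it, else None."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          i = int(np.argmax(np.abs(x - x.mean())))
          g = abs(x[i] - x.mean()) / x.std(ddof=1)
          t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
          g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
          return i if g > g_crit else None

      results = [4.8, 5.1, 5.0, 4.9, 5.2, 7.9]   # invented inter-laboratory results
      print(grubbs_outlier(results))             # flags the 7.9 value (index 5)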

  17. A statistical human resources costing and accounting model for analysing the economic effects of an intervention at a workplace.

    Science.gov (United States)

    Landstad, Bodil J; Gelin, Gunnar; Malmquist, Claes; Vinberg, Stig

    2002-09-15

    The study had two primary aims. The first aim was to combine a human resources costing and accounting approach (HRCA) with a quantitative statistical approach in order to get an integrated model. The second aim was to apply this integrated model in a quasi-experimental study in order to investigate whether preventive intervention affected sickness absence costs at the company level. The intervention studied contained occupational organizational measures, competence development, physical and psychosocial working environmental measures and individual and rehabilitation measures on both an individual and a group basis. The study is a quasi-experimental design with a non-randomized control group. Both groups involved cleaning jobs at predominantly female workplaces. The study plan involved carrying out before and after studies on both groups. The study included only those who were at the same workplace during the whole of the study period. In the HRCA model used here, the cost of sickness absence is the net difference between the costs, in the form of the value of the loss of production and the administrative cost, and the benefits in the form of lower labour costs. According to the HRCA model, the intervention used counteracted a rise in sickness absence costs at the company level, giving an average net effect of 266.5 Euros per person (full-time working) during an 8-month period. Using an analogous statistical analysis on the whole of the material, the contribution of the intervention counteracted a rise in sickness absence costs at the company level, giving an average net effect of 283.2 Euros. Using a statistical method it was possible to study the regression coefficients in sub-groups and calculate the p-values for these coefficients; in the younger group the intervention gave a calculated net contribution of 605.6 Euros with a p-value of 0.073, while the intervention net contribution in the older group had a very high p-value. Using the statistical model it was

  18. USING STATISTICAL PROCESS CONTROL AND SIX SIGMA TO CRITICALLY ANALYSE SAFETY OF HELICAL SPRINGS: A RAILWAY CASE STUDY

    Directory of Open Access Journals (Sweden)

    Fulufhelo Ṋemavhola

    2017-09-01

    The paper examines the life quality of helical coil springs in the railway industry, as this impacts the safety of the transportation of goods and people. The types of spring considered are the external spring, the internal spring and the stabiliser spring. Statistical process control was utilised as the fundamental instrument in the investigation. Measurements were performed using a measuring tape, dynamic actuators and a vernier caliper. The purpose of this research was to examine the usability of old helical springs found in a railway environment. The goal of the experiment was to obtain factual statistical information with which to determine the life quality of the helical springs used in the railroad transportation environment. Six sigma principles were additionally applied in this paper. According to the six sigma estimation examination, only the stabiliser and inner springs met the six sigma requirements for coil bar diameter. It is reasoned that the coil springs should be replaced, as they do not meet the six sigma requirements.
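
    Capability indices of the kind used in such six sigma assessments can be computed as below; the specification limits and coil bar diameters are invented, and Cpk >= 1.5 is used as a common six sigma benchmark:

      import numpy as np

      def capability(x, lsl, usl):
          """Process capability indices Cp and Cpk from measurements and spec limits."""
          mu, sigma = np.mean(x), np.std(x, ddof=1)
          cp = (usl - lsl) / (6 * sigma)
          cpk = min(usl - mu, mu - lsl) / (3 * sigma)
          return cp, cpk

      diam = np.random.default_rng(0).normal(32.0, 0.15, 50)  # coil bar diameters, mm
      cp, cpk = capability(diam, lsl=31.5, usl=32.5)
      print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")    # six sigma benchmark: Cpk >= 1.5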

  19. Ecophysiological significance of scale-dependent patterns in prokaryotic genomes unveiled by a combination of statistical and genometric analyses.

    Science.gov (United States)

    Garcia, Juan A L; Bartumeus, Frederic; Roche, David; Giraldo, Jesús; Stanley, H Eugene; Casamayor, Emilio O

    2008-06-01

    We combined genometric (DNA walks) and statistical (detrended fluctuation analysis) methods on 456 prokaryotic chromosomes from 309 different bacterial and archaeal species to look for specific patterns and long-range correlations along the genome and relate them to ecological lifestyles. The position of each nucleotide along the complete genome sequence was plotted on an orthogonal plane (DNA landscape), and fluctuation analysis applied to the DNA walk series showed a long-range correlation in contrast to the lack of correlation for artificially generated genomes. Different features in the DNA landscapes among genomes from different ecological and metabolic groups of prokaryotes appeared with the combined analysis. Transition from hyperthermophilic to psychrophilic environments could have been related to more complex structural adaptations in microbial genomes, whereas for other environmental factors such as pH and salinity this effect would have been smaller. Prokaryotes with domain-specific metabolisms, such as photoautotrophy in Bacteria and methanogenesis in Archaea, showed consistent differences in genome correlation structure. Overall, we show that, beyond the relative proportion of nucleotides, correlation properties derived from their sequential position within the genome hide relevant phylogenetic and ecological information. This can be studied by combining genometric and statistical physics methods, leading to a reduction of genome complexity to a few useful descriptors.
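
    A minimal detrended fluctuation analysis of a DNA walk, in the spirit of the combined approach described, might look as follows; the random sequence, purine/pyrimidine coding and box sizes are illustrative choices:

      import numpy as np

      def dfa(profile, box_sizes):
          """RMS fluctuation F(n) of a profile after local linear detrending."""
          F = []
          for n in box_sizes:
              rms = []
              for b in range(len(profile) // n):
                  seg = profile[b * n:(b + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)
                  rms.append(np.mean((seg - trend) ** 2))
              F.append(np.sqrt(np.mean(rms)))
          return np.array(F)

      rng = np.random.default_rng(0)
      seq = rng.choice(list("ACGT"), 20000)                 # stand-in for a chromosome
      steps = np.where(np.isin(seq, ["A", "G"]), 1, -1)     # purine +1 / pyrimidine -1
      walk = np.cumsum(steps - steps.mean())                # the "DNA walk" profile
      sizes = np.array([16, 32, 64, 128, 256])
      alpha = np.polyfit(np.log(sizes), np.log(dfa(walk, sizes)), 1)[0]
      print(f"scaling exponent alpha = {alpha:.2f}")        # ~0.5 for an uncorrelated walk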

  20. Influence of Immersion Conditions on The Tensile Strength of Recycled Kevlar®/Polyester/Low-Melting-Point Polyester Nonwoven Geotextiles through Applying Statistical Analyses

    Directory of Open Access Journals (Sweden)

    Jing-Chzi Hsieh

    2016-05-01

    Recycled Kevlar®/polyester/low-melting-point polyester (recycled Kevlar®/PET/LPET) nonwoven geotextiles were immersed in neutral, strong acid, and strong alkali solutions, respectively, at different temperatures for four months. Their tensile strength was then tested after various immersion periods at various temperatures, in order to determine their durability to chemicals. To analyse the possible factors that influence the mechanical properties of geotextiles under diverse environmental conditions, the experimental results were combined with statistical analyses. The influences of the content of recycled Kevlar® fibers, the implementation of thermal treatment, and the immersion period on the tensile strength of recycled Kevlar®/PET/LPET nonwoven geotextiles were examined, after which their levels of influence were statistically determined by performing multiple regression analyses. According to the results, the tensile strength of the nonwoven geotextiles can be enhanced by adding recycled Kevlar® fibers and by thermal treatment.

  1. Point processes statistics of stable isotopes: analysing water uptake patterns in a mixed stand of Aleppo pine and Holm oak

    Directory of Open Access Journals (Sweden)

    Carles Comas

    2015-04-01

    Aim of study: Understanding inter- and intra-specific competition for water is crucial in drought-prone environments. However, little is known about the spatial interdependencies for water uptake among individuals in mixed stands. The aim of this work was to compare water uptake patterns during a drought episode in two common Mediterranean tree species, Quercus ilex L. and Pinus halepensis Mill., using the isotope composition of xylem water (δ18O, δ2H) as a hydrological marker. Area of study: The study was performed in a mixed stand, sampling a total of 33 oaks and 78 pines (plot area = 888 m2). We tested the hypothesis that the two species take up water differentially along the soil profile, thus showing different levels of tree-to-tree interdependency depending on whether neighbouring trees belong to one species or the other. Material and methods: We used pair-correlation functions to study intra-specific point-tree configurations and the bivariate pair-correlation function to analyse the inter-specific spatial configuration. Moreover, the isotopic composition of xylem water was analysed as a marked point pattern. Main results: Values for Q. ilex (δ18O = -5.3 ± 0.2‰, δ2H = -54.3 ± 0.7‰) were significantly lower than for P. halepensis (δ18O = -1.2 ± 0.2‰, δ2H = -25.1 ± 0.8‰), pointing to a greater contribution of deeper soil layers to water uptake by Q. ilex. Research highlights: Point-process analyses revealed spatial intra-specific dependencies among neighbouring pines, showing neither oak-oak nor oak-pine interactions. This supports niche segregation for water uptake between the two species.
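
    The second-order statistics used here can be approximated with a naive Ripley's K estimate (the pair-correlation function g(r) is its density-normalised derivative); edge correction is omitted and the tree coordinates are simulated, so this is only a sketch of the idea:

      import numpy as np
      from scipy.spatial.distance import pdist

      def ripley_k(xy, r, area):
          """Naive Ripley's K(r), no edge correction."""
          n = len(xy)
          d = pdist(xy)                     # each unordered pair once
          return np.array([area * 2 * np.sum(d <= ri) / (n * (n - 1)) for ri in r])

      rng = np.random.default_rng(2)
      trees = rng.uniform(0, 30, size=(111, 2))   # 111 stems on a 30 m x 30 m plot
      r = np.linspace(0.5, 5.0, 10)
      K = ripley_k(trees, r, area=30 * 30)
      csr = np.pi * r**2                          # expectation under complete spatial randomness
      print(np.round(K - csr, 2))                 # >0 suggests clustering, <0 inhibition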

  2. Application of fingerprint-based multivariate statistical analyses in source characterization and tracking of contaminated sediment migration in surface water.

    Science.gov (United States)

    Chen, Fei; Taylor, William D; Anderson, William B; Huck, Peter M

    2013-08-01

    This study investigates the suitability of multivariate techniques, including principal component analysis and discriminant function analysis, for analysing polycyclic aromatic hydrocarbon and heavy metal-contaminated aquatic sediment data. We show that multivariate "fingerprint" analysis of relative abundances of contaminants can characterize a contamination source and distinguish contaminated sediments of interest from background contamination. Thereafter, analysis of the unstandardized concentrations among samples contaminated from the same source can identify migration pathways within a study area that is hydraulically complex and has a long contamination history, without reliance on complex hydrodynamic data and modelling techniques. Together, these methods provide an effective tool for drinking water source monitoring and protection.
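
    A fingerprint-style workflow of the kind described, standardising relative abundances, ordinating them by PCA and classifying sources with a discriminant model, might look like this in scikit-learn; the sediment data are synthetic:

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      # 40 sediment samples x 12 contaminants (relative abundances), two sources
      X = np.vstack([rng.dirichlet(np.arange(1, 13), 20),
                     rng.dirichlet(np.arange(12, 0, -1), 20)])
      y = np.array([0] * 20 + [1] * 20)           # 0 = background, 1 = source of interest

      Xs = StandardScaler().fit_transform(X)
      scores = PCA(n_components=2).fit_transform(Xs)   # ordination of the fingerprints
      lda = LinearDiscriminantAnalysis().fit(Xs, y)    # discriminant function analysis
      print(scores[:2], lda.score(Xs, y))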

  3. Statistical tools for analysing the data obtained from repeated dose toxicity studies with rodents: a comparison of the statistical tools used in Japan with those used in other countries.

    Science.gov (United States)

    Kobayashi, Katsumi; Pillai, K Sadasivan; Guhatakurta, Soma; Cherian, K M; Ohnishi, Mariko

    2011-01-01

    In the present study, an attempt was made to compare the statistical tools used for analysing the data of repeated dose toxicity studies with rodents conducted in 45 countries with those used in Japan. The study revealed that there was no congruence among the countries in the use of statistical tools for analysing the data obtained from the above studies. For example, to analyse the data obtained from repeated dose toxicity studies with rodents, Scheffé's multiple range test and the Dunnett-type (joint-type Dunnett) test are commonly used in Japan, but in other countries the use of these statistical tools is not so common. However, the statistical techniques used for testing the above data for homogeneity of variance and for inter-group comparisons do not differ much between Japan and the other countries. In Japan, the data are generally not tested for normality, and the same is true of most of the countries investigated. In the present investigation, out of 127 studies examined, the data of only 6 studies were analysed for both homogeneity of variance and normal distribution. For examining homogeneity of variance, we propose Levene's test, since the commonly used Bartlett's test may indicate heterogeneity of variance across all the groups if slight heterogeneity of variance is present in any one of the groups. We suggest the data be examined for both homogeneity of variance and normal distribution. For the data of groups that do not show heterogeneity of variance, we recommend Dunnett's test to find significant differences among the groups, and for those that do show heterogeneity of variance, we recommend Steel's test.
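
    The recommended sequence, Levene's test for homogeneity of variance followed by Dunnett's many-to-one comparison, is available in SciPy (scipy.stats.dunnett requires SciPy 1.11 or later; Steel's nonparametric test is not in SciPy). The dose-group data below are invented:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      control = rng.normal(100, 10, 10)          # invented body-weight data
      low = rng.normal(98, 10, 10)
      high = rng.normal(85, 10, 10)

      # Levene's test is robust to non-normality, unlike Bartlett's
      lev = stats.levene(control, low, high)
      print(f"Levene p = {lev.pvalue:.3f}")

      if lev.pvalue > 0.05:
          # Homogeneous variances: Dunnett's many-to-one comparison (SciPy >= 1.11)
          res = stats.dunnett(low, high, control=control)
          print(res.pvalue)
      else:
          # Heterogeneous variances: a nonparametric alternative such as Steel's
          # test would be used here (not available in SciPy)
          pass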

  4. Statistical model of fractures and deformation zones for Forsmark. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    La Pointe, Paul R. [Golder Associates Inc., Redmond, WA (United States)]; Olofsson, Isabelle; Hermanson, Jan [Golder Associates AB, Uppsala (Sweden)]

    2005-04-01

    Compared to version 1.1, a much larger amount of data especially from boreholes is available. Both one-hole interpretation and Boremap indicate the presence of high and low fracture intensity intervals in the rock mass. The depth and width of these intervals varies from borehole to borehole but these constant fracture intensity intervals are contiguous and present quite sharp transitions. There is not a consistent pattern of intervals of high fracture intensity at or near to the surface. In many cases, the intervals of highest fracture intensity are considerably below the surface. While some fractures may have occurred or been reactivated in response to surficial stress relief, surficial stress relief does not appear to be a significant explanatory variable for the observed variations in fracture intensity. Data from the high fracture intensity intervals were extracted and statistical analyses were conducted in order to identify common geological factors. Stereoplots of fracture orientation versus depth for the different fracture intensity intervals were also produced for each borehole. Moreover percussion borehole data were analysed in order to identify the persistence of these intervals throughout the model volume. The main conclusions of these analyses are the following: The fracture intensity is conditioned by the rock domain, but inside a rock domain intervals of high and low fracture intensity are identified. The intervals of high fracture intensity almost always correspond to intervals with distinct fracture orientations (whether a set, most often the NW sub-vertical set, is highly dominant, or some orientation sets are missing). These high fracture intensity intervals are positively correlated to the presence of first and second generation minerals (epidote, calcite). No clear correlation for these fracture intensity intervals has been identified between holes. Based on these results the fracture frequency has been calculated in each rock domain for the

  5. Does bisphenol A induce superfeminization in Marisa cornuarietis? Part II: toxicity test results and requirements for statistical power analyses.

    Science.gov (United States)

    Forbes, Valery E; Aufderheide, John; Warbritton, Ryan; van der Hoeven, Nelly; Caspers, Norbert

    2007-03-01

    This study presents results of the effects of bisphenol A (BPA) on adult egg production, egg hatchability, egg development rates and juvenile growth rates in the freshwater gastropod, Marisa cornuarietis. We observed no adult mortality, substantial inter-snail variability in reproductive output, and no effects of BPA on reproduction during 12 weeks of exposure to 0, 0.1, 1.0, 16, 160 or 640 microg/L BPA. We observed no effects of BPA on egg hatchability or timing of egg hatching. Juveniles showed good growth in the control and all treatments, and there were no significant effects of BPA on this endpoint. Our results do not support previous claims of enhanced reproduction in Marisa cornuarietis in response to exposure to BPA. Statistical power analysis indicated high levels of inter-snail variability in the measured endpoints and highlighted the need for sufficient replication when testing treatment effects on reproduction in M. cornuarietis with adequate power.
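
    The replication requirement highlighted by the power analysis can be explored with statsmodels; the standardised effect size assumed below is a placeholder, not a value from the study:

      from statsmodels.stats.power import TTestIndPower

      # Replicates per treatment needed to detect a standardised effect size of
      # 0.5 (an assumed value) with 80% power in a two-sided test at alpha = 0.05
      n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                                power=0.8, alternative="two-sided")
      print(round(n_per_group))   # about 64 replicates per treatment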

  6. Literature review of some selected types of results and statistical analyses of total-ozone data [for the ozonosphere]

    Science.gov (United States)

    Myers, R. H.

    1976-01-01

    The depletion of ozone in the stratosphere is examined, and causes for the depletion are cited. Ground station and satellite measurements of ozone, which are taken on a worldwide basis, are discussed. Instruments used in ozone measurement are described, such as the Dobson spectrophotometer, which is credited with providing the longest and most extensive series of ground-based observations of stratospheric ozone. Other ground-based instruments used to measure ozone are also discussed. Ozone measurements from these different instruments are compared statistically with each other and with satellite measurements. Mathematical methods (i.e., trend analysis and linear regression analysis) for analyzing the variability of ozone concentration with respect to time and latitude are described. Various time series models which can be employed in accounting for ozone concentration variability are examined.

  7. Analyses on the schedule-cost correlation coefficient of spaceflight projects based on historical statistics and its application

    Institute of Scientific and Technical Information of China (English)

    Liu Yanqiong; Chen Yingwu

    2006-01-01

    When analyzing the uncertainty of the cost and schedule of a spaceflight project, the value of the schedule-cost correlation coefficient must be known. This paper derives the schedule distribution taking the effect of cost into account, and proposes an estimation formula for the correlation coefficient between ln(schedule) and cost. On this basis, using a Taylor expansion, an expression relating the schedule-cost correlation coefficient to the ln-schedule-cost correlation coefficient is put forward. By analyzing the behaviour of the estimation formula for the ln-schedule-cost correlation coefficient, general rules are proposed for ascertaining the value of the schedule-cost correlation coefficient. An example demonstrates how to approximately correct the schedule-cost correlation coefficient on the basis of historical statistics, revealing that the traditionally assigned value is inaccurate. The generality of this estimation method is also analyzed.
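
    Under a joint-normal assumption for cost and ln(schedule), the relation that the paper approximates by Taylor expansion has a closed form: corr(C, S) = corr(C, ln S) * sigma / sqrt(exp(sigma^2) - 1), where sigma is the standard deviation of ln(schedule). A small sketch with invented values:

      import numpy as np

      def schedule_cost_corr(rho_ln, sigma_ln):
          """corr(cost, schedule) from corr(cost, ln schedule), assuming cost and
          ln(schedule) are jointly normal with Std(ln schedule) = sigma_ln."""
          return rho_ln * sigma_ln / np.sqrt(np.exp(sigma_ln**2) - 1)

      # Invented historical statistics: rho(C, ln S) = 0.6, sigma(ln S) = 0.4
      print(f"{schedule_cost_corr(0.6, 0.4):.3f}")   # slightly below 0.6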

  8. Theoretical analyses and numerical simulations of the torsional mode for two acoustic viscometers with preliminary experimental tests.

    Science.gov (United States)

    Ai, Yuhui; Lange, Rebecca A

    2008-03-01

    A rigorous analysis of the torsional modes in both a cylindrical wave guide and the associated static viscous fluid field has been conducted from the solid and the fluid wave equations and the coupled boundary conditions. As a result, two acoustic viscometer models, along with four independent equations connecting the density and the viscosity of the fluid with the attenuation and the phase velocity of the torsional wave in the wave guide, have been developed. The analysis shows that the product of the viscosity and the density of the fluid can be measured from the end reflection coefficient of the torsional wave in the wave guide, and that both the viscosity and the density can be determined simultaneously from either the phase velocity or the attenuation of the torsional wave in a single cylindrical wave guide. For the simultaneous measurements of the viscosity and the density, the independent equations have to be solved numerically, for example using Matlab (The MathWorks, Natick, MA), given either the attenuation or the phase velocity in the wave guide that is surrounded by the fluid. To demonstrate the technical feasibility, numerical simulations have been conducted to discern viscosity, phase velocity, and density, all versus attenuation, at different frequencies and with variable dimensions of a molybdenum rod, so that both the advantages and the disadvantages of the simultaneous measurements can be explored. Finally, to test the two models, preliminary experiments on two viscosity standards were conducted at 23 degrees C, and good agreement was achieved between the viscosities measured with the two models for both standards.

  9. Statistical Analyses of Satellite Cloud Object Data From CERES. Part 4; Boundary-layer Cloud Objects During 1998 El Nino

    Science.gov (United States)

    Xu, Kuan-Man; Wong, Takmeng; Wielicki, Bruce A.; Parker, Lindsay

    2006-01-01

    Three boundary-layer cloud object types, stratus, stratocumulus and cumulus, that occurred over the Pacific Ocean during January-August 1998, are identified from the CERES (Clouds and the Earth's Radiant Energy System) single scanner footprint (SSF) data from the TRMM (Tropical Rainfall Measuring Mission) satellite. This study emphasizes the differences and similarities in the characteristics of each cloud-object type between the tropical and subtropical regions, among different size categories, and among small geographic areas. Both the frequencies of occurrence and statistical distributions of cloud physical properties are analyzed. In terms of frequencies of occurrence, stratocumulus clouds dominate the entire boundary layer cloud population in all regions and among all size categories. Stratus clouds are more prevalent in the subtropics and near the coastal regions, while cumulus clouds are relatively prevalent over open ocean and the equatorial regions, particularly within the small size categories. The largest size category of stratus cloud objects occurs more frequently in the subtropics than in the tropics and has much larger average size than its cumulus and stratocumulus counterparts. Each of the three cloud object types exhibits small differences in statistical distributions of cloud optical depth, liquid water path, TOA albedo and perhaps cloud-top height, but large differences in those of cloud-top temperature and OLR between the tropics and subtropics. Differences in the sea surface temperature (SST) distributions between the tropics and subtropics influence some of the cloud macrophysical properties, but cloud microphysical properties and albedo for each cloud object type are likely determined by (local) boundary-layer dynamics and structures. Systematic variations of cloud optical depth, TOA albedo, cloud-top height, OLR and SST with cloud object sizes are pronounced for the stratocumulus and stratus types, which are related to systematic

  10. Statistical and molecular analyses of evolutionary significance of red-green color vision and color blindness in vertebrates.

    Science.gov (United States)

    Yokoyama, Shozo; Takenaka, Naomi

    2005-04-01

    Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at approximately 510-540 nm (green- or middle wavelength-sensitive [MWS] pigments) and at approximately 560 nm (red- or long wavelength-sensitive [LWS] pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.

  11. How to Tell the Truth with Statistics: The Case for Accountable Data Analyses in Team-based Science

    Science.gov (United States)

    Gelfond, Jonathan A. L.; Klugman, Craig M.; Welty, Leah J.; Heitman, Elizabeth; Louden, Christopher; Pollock, Brad H.

    2015-01-01

    Data analysis is essential to translational medicine, epidemiology, and the scientific process. Although recent advances in promoting reproducibility and reporting standards have made some improvements, the data analysis process remains insufficiently documented and susceptible to avoidable errors, bias, and even fraud. Comprehensively accounting for the full analytical process requires not only records of the statistical methodology used, but also records of communications among the research team. In this regard, the data analysis process can benefit from the principle of accountability that is inherent in other disciplines such as clinical practice. We propose a novel framework for capturing the analytical narrative called the Accountable Data Analysis Process (ADAP), which allows the entire research team to participate in the analysis in a supervised and transparent way. The framework is analogous to an electronic health record in which the dataset is the “patient” and actions related to the dataset are recorded in a project management system. We discuss the design, advantages, and challenges in implementing this type of system in the context of academic health centers, where team based science increasingly demands accountability. PMID:26290897

  12. Evaluation of multivariate statistical analyses for monitoring and prediction of processes in a seawater reverse osmosis desalination plant

    Energy Technology Data Exchange (ETDEWEB)

    Kolluri, Srinivas Sahan; Esfahani, Iman Janghorban; Garikiparthy, Prithvi Sai Nadh; Yoo, Chang Kyoo [Kyung Hee University, Yongin (Korea, Republic of)]

    2015-08-15

    Our aim was to analyze, monitor, and predict the outcomes of processes in a full-scale seawater reverse osmosis (SWRO) desalination plant using multivariate statistical techniques. Multivariate analysis of variance (MANOVA) was used to investigate the performance and efficiencies of two SWRO processes, namely, pore controllable fiber filter-reverse osmosis (PCF-SWRO) and sand filtration-ultra filtration-reverse osmosis (SF-UF-SWRO). Principal component analysis (PCA) was applied to monitor the two SWRO processes. PCA monitoring revealed that the SF-UF-SWRO process could be analyzed reliably with a low number of outliers and disturbances. Partial least squares (PLS) analysis was then conducted to predict which of the seven input parameters of feed flow rate, PCF/SF-UF filtrate flow rate, temperature of feed water, feed turbidity, pH, reverse osmosis (RO) flow rate, and pressure had a significant effect on the outcome variables of permeate flow rate and concentration. Root mean squared errors (RMSEs) of the PLS models for permeate flow rates were 31.5 and 28.6 for the PCF-SWRO process and SF-UF-SWRO process, respectively, while RMSEs of permeate concentrations were 350.44 and 289.4, respectively. These results indicate that the SF-UF-SWRO process can be modeled more accurately than the PCF-SWRO process, because the RMSE values of permeate flow rate and concentration obtained using a PLS regression model of the SF-UF-SWRO process were lower than those obtained for the PCF-SWRO process.
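
    A PLS prediction of permeate flow from plant inputs, with an RMSE of the kind reported, can be sketched with scikit-learn; the seven inputs mirror the study's variable count but the data are synthetic:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      X = rng.normal(size=(300, 7))     # 7 inputs: feed flow, filtrate flow, temperature, ...
      y = X @ rng.normal(size=7) + rng.normal(scale=0.5, size=300)   # permeate flow proxy

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
      rmse = float(np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2)))
      print(f"RMSE = {rmse:.2f}")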

  13. Statistical analyses in Swedish randomised trials on mammography screening and in other randomised trials on cancer screening: a systematic review

    Science.gov (United States)

    Boniol, Mathieu; Smans, Michel; Sullivan, Richard; Boyle, Peter

    2015-01-01

    Objectives We compared calculations of relative risks of cancer death in Swedish mammography trials and in other cancer screening trials. Participants Men and women from 30 to 74 years of age. Setting Randomised trials on cancer screening. Design For each trial, we identified the intervention period, when screening was offered to screening groups and not to control groups, and the post-intervention period, when screening (or absence of screening) was the same in screening and control groups. We then examined which cancer deaths had been used for the computation of relative risk of cancer death. Main outcome measures Relative risk of cancer death. Results In 17 non-breast screening trials, deaths due to cancers diagnosed during the intervention and post-intervention periods were used for relative risk calculations. In the five Swedish trials, relative risk calculations used deaths due to breast cancers found during intervention periods, but deaths due to breast cancer found at first screening of control groups were added to these groups. After reallocation of the added breast cancer deaths to post-intervention periods of control groups, relative risks of 0.86 (0.76; 0.97) were obtained for cancers found during intervention periods and 0.83 (0.71; 0.97) for cancers found during post-intervention periods, indicating constant reduction in the risk of breast cancer death during follow-up, irrespective of screening. Conclusions The use of unconventional statistical methods in Swedish trials has led to overestimation of risk reduction in breast cancer death attributable to mammography screening. The constant risk reduction observed in screening groups was probably due to the trial design that optimised awareness and medical management of women allocated to screening groups. PMID:26152677
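
    For reference, the conventional relative-risk calculation on which such comparisons rest can be sketched as follows; the counts are illustrative placeholders, not data from the trials:

    ```python
    import math

    def relative_risk(d1, n1, d0, n0):
        """RR of death in screening vs control arms, with a 95% CI (log-normal approximation)."""
        rr = (d1 / n1) / (d0 / n0)
        se = math.sqrt(1/d1 - 1/n1 + 1/d0 - 1/n0)   # SE of log(RR)
        lo, hi = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
        return rr, lo, hi

    # Illustrative numbers only:
    print(relative_risk(d1=120, n1=20000, d0=140, n0=20000))
    ```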

  15. Semi-automated genetic analyses of soil microbial communities: comparison of T-RFLP and RISA based on descriptive and discriminative statistical approaches.

    Science.gov (United States)

    Hartmann, Martin; Frey, Beat; Kölliker, Roland; Widmer, Franco

    2005-06-01

    Cultivation independent analyses of soil microbial community structures are frequently used to describe microbiological soil characteristics. This approach is based on direct extraction of total soil DNA followed by PCR amplification of selected marker genes and subsequent genetic fingerprint analyses. Semi-automated genetic fingerprinting techniques such as terminal restriction fragment length polymorphism (T-RFLP) and ribosomal intergenic spacer analysis (RISA) yield high-resolution patterns of highly diverse soil microbial communities and hold great potential for use in routine soil quality monitoring, when rapid, high-throughput screening for differences or changes is more important than phylogenetic identification of the organisms affected. Our objective was to perform a thorough statistical analysis to evaluate the cultivation independent approach and the consistency of results from T-RFLP and RISA. As a model system, we used two different heavy metal treated soils from an open top chamber experiment. Bacterial T-RFLP and RISA profiles of 16S rDNA were converted into numeric data matrices in order to allow for detailed statistical analyses with cluster analysis, Mantel test statistics, Monte Carlo permutation tests and ANOVA. Analyses revealed that soil DNA contents were significantly correlated with soil microbial biomass in our system. T-RFLP and RISA yielded highly consistent and correlated results, and both allowed the four treatments to be distinguished with equal significance. While RISA represents a fast and general fingerprinting method of moderate cost and labor intensity, T-RFLP is technically more demanding but offers the advantage of phylogenetic identification of detected soil microorganisms. Therefore, selection of either of these methods should be based on the specific research question under investigation.
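
    Of the tests listed above, the Mantel test is easily sketched: it correlates two distance matrices and judges significance by permuting one of them. The implementation below is a generic illustration, not the study's software:

    ```python
    import numpy as np

    def mantel(d1, d2, n_perm=999, seed=0):
        """Correlate two square distance matrices; one-tailed p-value by permutation."""
        rng = np.random.default_rng(seed)
        iu = np.triu_indices_from(d1, k=1)          # use the upper triangle only
        r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
        count = 0
        for _ in range(n_perm):
            p = rng.permutation(d1.shape[0])        # permute rows and columns together
            r = np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1]
            count += r >= r_obs
        return r_obs, (count + 1) / (n_perm + 1)

    # Usage: r, p = mantel(trflp_distances, risa_distances)  # hypothetical inputs
    ```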

  16. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, USS Perry deep rebreather (RB) exploration dive, world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
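
    The fitting step — a Levenberg-Marquardt routine minimizing an L2 error norm — can be sketched with SciPy's least-squares driver. The dose-response form and data below are placeholders, not the LANL risk functions:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def risk_model(params, exposure):
        a, b = params
        return 1.0 - np.exp(-a * exposure**b)       # placeholder risk function

    def residuals(params, exposure, observed):
        return risk_model(params, exposure) - observed   # L2 norm of these is minimized

    exposure = np.linspace(0.1, 5.0, 50)
    observed = 1.0 - np.exp(-0.3 * exposure**1.2) \
        + np.random.default_rng(1).normal(0, 0.01, 50)   # synthetic "data bank"

    fit = least_squares(residuals, x0=[0.1, 1.0], args=(exposure, observed), method="lm")
    print(fit.x)   # estimated (a, b)
    ```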

  17. Statistical properties of interval mapping methods on quantitative trait loci location: impact on QTL/eQTL analyses

    Directory of Open Access Journals (Sweden)

    Wang Xiaoqiang

    2012-04-01

    Full Text Available Abstract Background Quantitative trait loci (QTL) detection on a huge number of phenotypes, like eQTL detection on transcriptomic data, can be dramatically impaired by the statistical properties of interval mapping methods. One of these major outcomes is the high number of QTL detected at marker locations. The present study aims at identifying and specifying the sources of this bias, in particular in the case of analysis of data issued from outbred populations. Analytical developments were carried out in a backcross situation in order to specify the bias and to propose an algorithm to control it. The outbred population context was studied through simulated data sets in a wide range of situations. The likelihood ratio test was firstly analyzed under the "one QTL" hypothesis in a backcross population. Designs of sib families were then simulated and analyzed using the QTLMap software. On the basis of the theoretical results in backcross, parameters such as the population size, the density of the genetic map, the QTL effect and the true location of the QTL were taken into account under the "no QTL" and the "one QTL" hypotheses. A combination of two non-parametric tests - the Kolmogorov-Smirnov test and the Mann-Whitney-Wilcoxon test - was used in order to identify the parameters that affected the bias and to specify how much they influenced the estimation of QTL location. Results A theoretical expression of the bias of the estimated QTL location was obtained for a backcross type population. We demonstrated a common source of bias under the "no QTL" and the "one QTL" hypotheses and qualified the possible influence of several parameters. Simulation studies confirmed that the bias exists in outbred populations under both the hypotheses of "no QTL" and "one QTL" on a linkage group. The QTL location was systematically closer to marker locations than expected, particularly in the case of low QTL effect, small population size or low density of markers.
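
    The two non-parametric tests combined in the study are standard library calls; a minimal sketch applying them to estimated QTL locations from two synthetic settings (invented data, not QTLMap output):

    ```python
    import numpy as np
    from scipy.stats import ks_2samp, mannwhitneyu

    rng = np.random.default_rng(0)
    loc_a = rng.normal(50, 5, 500)   # estimated QTL positions (cM), setting A
    loc_b = rng.normal(52, 5, 500)   # setting B, e.g. lower marker density

    ks = ks_2samp(loc_a, loc_b)      # sensitive to any difference between the distributions
    mw = mannwhitneyu(loc_a, loc_b, alternative="two-sided")  # sensitive to a location shift
    print(ks.pvalue, mw.pvalue)
    ```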

  18. Changes in forebrain function from waking to REM sleep in depression: preliminary analyses of [18F]FDG PET studies.

    Science.gov (United States)

    Nofzinger, E A; Nichols, T E; Meltzer, C C; Price, J; Steppe, D A; Miewald, J M; Kupfer, D J; Moore, R Y

    1999-08-31

    Based on recent functional brain imaging studies of healthy human REM sleep, we hypothesized that alterations in REM sleep in mood disorder patients reflect a functional dysregulation within limbic and paralimbic forebrain structures during that sleep state. Six unipolar depressed subjects and eight healthy subjects underwent separate [18F]2-fluoro-2-deoxy-D-glucose ([18F]FDG) PET scans during waking and during their first REM period of sleep. Statistical parametric mapping contrasts were performed to detect changes in relative regional cerebral glucose metabolism (rCMRglu) from waking to REM sleep in each group as well as interactions in patterns of change between groups. Clinical and EEG sleep comparisons from an undisturbed night of sleep were also performed. In contrast to healthy control subjects, depressed patients did not show increases in rCMRglu in anterior paralimbic structures in REM sleep compared to waking. Depressed subjects showed greater increases from waking to REM sleep in rCMRglu in the tectal area and a series of left hemispheric areas including sensorimotor cortex, inferior temporal cortex, uncal gyrus-amygdala, and subicular complex than did the control subjects. These observations indicate that changes in limbic and paralimbic function from waking to REM sleep differ significantly from normal in depressed patients.

  19. Neuropsychological impairment as a consequence of football (soccer) play and football heading: preliminary analyses and report on university footballers.

    Science.gov (United States)

    Rutherford, A; Stephens, R; Potter, D; Fernie, G

    2005-04-01

    Previous research has claimed that neuropsychological impairment occurs as a result of professional and amateur football play, and, specifically, football heading. However, much of this research exhibits substantial methodological problems. By investigating less committed amateur-level footballers, the current study sought to gain some insight into the developmental history of any neuropsychological consequences of football play. University football, rugby and noncontact sports players were compared on a range of biographical and neuropsychological test variables. While playing their chosen sports, rugby players sustained many more head injuries than footballers and noncontact sportsmen, but footballers did not sustain significantly more head injuries than noncontact sportsmen. The number of head injuries sustained predicted Trails B and TAP Divided Attention latencies in a positive fashion. After controlling for the number of head injuries sustained, sport group effects were detected with TAP Divided Attention accuracy scores, with footballers exhibiting the poorest performance. After controlling for the number of head injuries sustained, the total amount of heading done by footballers predicted the number of Wisconsin Card Sorting category shifts in a negative fashion. Nevertheless, overinterpretation of these results should be resisted because of the exploratory nature of the analyses and the possibility that the sport groups may differ in ways other than just the nature of their sports activities.

  20. Preliminary study of microscale zircon oxygen isotopes for Dabie-Sulu metamorphic rocks: Ion probe in situ analyses

    Institute of Scientific and Technical Information of China (English)

    CHEN Daogong; Deloule Etienne; CHENG Hao; XIA Qunke; WU Yuanbao

    2003-01-01

    151 in situ analyses of oxygen isotopes were carried out by ion micro-probe for zircons from 8 localities of HP-UHP metamorphic rocks including eclogites in the Dabie-Sulu terrane. The results show significant heterogeneity in δ18O values, with variation in different rocks from -8.5‰ to +9.7‰ and within one sample from 2‰ to 12‰. No measurable difference in δ18O was observed between protolith magmatic (detrital) zircons and metamorphic recrystallized zircons within analytical uncertainties from the ion micro-probe measurements. This indicates that the metamorphic zircons have inherited the oxygen isotopic compositions of protolith zircons despite the HP to UHP metamorphism. According to their protolith ages from zircon U-Pb in situ dating by the same ion micro-probe, two groups of oxygen isotope composition are recognized, one having δ18O values of 6‰-7‰ for old protoliths of 1.9-2.5 Ga age and the other 0‰-2‰ for young protoliths of 0.7-0.8 Ga age. The anomalously low δ18O values of the latter zircons indicate that the magma had obvious involvement of meteoric water when the young protoliths of the high-grade metamorphic rocks formed. This may be correlated with the snowball Earth event occurring in South China and elsewhere in the world during the Neoproterozoic.

  1. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.

  2. Statistical methods for meta-analyses including information from studies without any events-add nothing to nothing and succeed nevertheless.

    Science.gov (United States)

    Kuss, O

    2015-03-30

    Meta-analyses with rare events, especially those that include studies with no event in one ('single-zero') or even both ('double-zero') treatment arms, are still a statistical challenge. In the case of double-zero studies, researchers generally delete these studies or use continuity corrections to avoid them. A number of arguments against both options have been given, and statistical methods that use the information from double-zero studies without using continuity corrections have been proposed. In this paper, we collect them and compare them by simulation. This simulation study tries to mirror real-life situations as completely as possible by deriving true underlying parameters from empirical data on actually performed meta-analyses. It is shown that for each of the commonly encountered effect estimators, valid statistical methods are available that use the information from double-zero studies without using continuity corrections. Interestingly, all of them are truly random effects models, so even the current standard method for very sparse data recommended by the Cochrane Collaboration, the Yusuf-Peto odds ratio, can be improved upon. For actual analysis, we recommend using beta-binomial regression methods to arrive at summary estimates for the odds ratio, the relative risk, or the risk difference. Methods that ignore information from double-zero studies or use continuity corrections should no longer be used. We illustrate the situation with an example where the original analysis ignores 35 double-zero studies, and a superior analysis discovers a clinically relevant advantage of off-pump surgery in coronary artery bypass grafting. Copyright © 2014 John Wiley & Sons, Ltd.
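
    A deliberately simplified sketch of the beta-binomial idea follows — each arm fitted independently by maximum likelihood, with no covariates, which is far simpler than the regression models the paper recommends. It shows how double-zero studies enter the likelihood without any continuity correction:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import betaln, gammaln

    def bb_negloglik(params, events, n):
        """Negative log-likelihood of a beta-binomial model; zero-event studies contribute too."""
        a, b = np.exp(params)                       # keep shape parameters positive
        return -np.sum(gammaln(n + 1) - gammaln(events + 1) - gammaln(n - events + 1)
                       + betaln(events + a, n - events + b) - betaln(a, b))

    def fit_arm(events, n):
        res = minimize(bb_negloglik, x0=[0.0, 2.0],
                       args=(np.asarray(events), np.asarray(n)))
        a, b = np.exp(res.x)
        return a / (a + b)                          # fitted mean event risk of the arm

    # Toy data including double-zero studies (no continuity correction needed):
    risk_trt = fit_arm([0, 1, 0, 3], [50, 60, 40, 80])
    risk_ctl = fit_arm([0, 4, 2, 5], [50, 55, 45, 75])
    print("summary RR:", risk_trt / risk_ctl)
    ```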

  3. Unlocking Data for Statistical Analyses and Data Mining: Generic Case Extraction of Clinical Items from i2b2 and tranSMART.

    Science.gov (United States)

    Firnkorn, Daniel; Merker, Sebastian; Ganzinger, Matthias; Muley, Thomas; Knaup, Petra

    2016-01-01

    In medical science, modern IT concepts are increasingly important for gathering new findings about complex diseases. Data warehouses (DWH) as central data repository systems play a key role by providing standardized, high-quality and secure medical data for effective analyses. However, DWHs in medicine must fulfil various requirements concerning data privacy and the ability to describe the complexity of (rare) disease phenomena. Here, i2b2 and tranSMART are free alternatives representing DWH solutions developed especially for medical informatics purposes, but several functionalities are not yet provided in a sufficient way. In fact, data import and export is still a major problem because of the diversity of schemas, parameter definitions and data quality, which vary from clinic to clinic. Further, statistical analyses inside i2b2 and tranSMART are possible, but restricted to the implemented functions. Thus, data export is needed to provide a data basis which can be directly included within statistics software like SPSS and SAS or data mining tools like Weka and RapidMiner. The standard export tools of i2b2 and tranSMART more or less create a database dump of key-value pairs which cannot be used immediately by the mentioned tools; they need an instance-based or case-based representation of each patient. To overcome this lack, we developed a concept called Generic Case Extractor (GCE) which pivots the key-value pairs of each clinical fact into a row-oriented format for each patient, sufficient to enable analyses in a broader context. Complex pivotisation routines were therefore necessary to ensure temporal consistency, especially in terms of different data sets and the occurrence of identical but repeated parameters like follow-up data. GCE is embedded inside a comprehensive software platform for systems medicine.
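
    The key-value-to-row pivot that GCE performs can be illustrated with pandas; the real routines are more complex because of the temporal-consistency issues mentioned above, and the data and column names here are invented:

    ```python
    import pandas as pd

    facts = pd.DataFrame({
        "patient": ["P1", "P1", "P2", "P2", "P2"],
        "concept": ["age", "stage", "age", "stage", "stage"],
        "value":   [63, "II", 58, "I", "III"],    # 'stage' repeats for P2 (follow-up)
    })

    # Number repeated observations so follow-ups become separate columns instead of colliding.
    facts["visit"] = facts.groupby(["patient", "concept"]).cumcount()
    wide = facts.pivot(index="patient", columns=["concept", "visit"], values="value")
    wide.columns = [f"{c}_{v}" for c, v in wide.columns]   # flatten to one row per patient
    print(wide)
    ```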

  4. Performing statistical analyses on quantitative data in Taverna workflows: an example using R and maxdBrowse to identify differentially-expressed genes from microarray data.

    Science.gov (United States)

    Li, Peter; Castrillo, Juan I; Velarde, Giles; Wassink, Ingo; Soiland-Reyes, Stian; Owen, Stuart; Withers, David; Oinn, Tom; Pocock, Matthew R; Goble, Carole A; Oliver, Stephen G; Kell, Douglas B

    2008-08-07

    There has been a dramatic increase in the amount of quantitative data derived from the measurement of changes at different levels of biological complexity during the post-genomic era. However, there are a number of issues associated with the use of computational tools employed for the analysis of such data. For example, computational tools such as R and MATLAB require prior knowledge of their programming languages in order to implement statistical analyses on data. Combining two or more tools in an analysis may also be problematic since data may have to be manually copied and pasted between separate user interfaces for each tool. Furthermore, this transfer of data may require a reconciliation step in order for there to be interoperability between computational tools. Developments in the Taverna workflow system have enabled pipelines to be constructed and enacted for generic and ad hoc analyses of quantitative data. Here, we present an example of such a workflow involving the statistical identification of differentially-expressed genes from microarray data followed by the annotation of their relationships to cellular processes. This workflow makes use of customised maxdBrowse web services, a system that allows Taverna to query and retrieve gene expression data from the maxdLoad2 microarray database. These data are then analysed by R to identify differentially-expressed genes using the Taverna RShell processor which has been developed for invoking this tool when it has been deployed as a service using the RServe library. In addition, the workflow uses Beanshell scripts to reconcile mismatches of data between services as well as to implement a form of user interaction for selecting subsets of microarray data for analysis as part of the workflow execution. A new plugin system in the Taverna software architecture is demonstrated by the use of renderers for displaying PDF files and CSV formatted data within the Taverna workbench. Taverna can be used by data analysis experts as a generic tool for composing ad hoc analyses of quantitative data.

  5. Performing statistical analyses on quantitative data in Taverna workflows: An example using R and maxdBrowse to identify differentially-expressed genes from microarray data

    Directory of Open Access Journals (Sweden)

    Pocock Matthew R

    2008-08-01

    Full Text Available Abstract Background There has been a dramatic increase in the amount of quantitative data derived from the measurement of changes at different levels of biological complexity during the post-genomic era. However, there are a number of issues associated with the use of computational tools employed for the analysis of such data. For example, computational tools such as R and MATLAB require prior knowledge of their programming languages in order to implement statistical analyses on data. Combining two or more tools in an analysis may also be problematic since data may have to be manually copied and pasted between separate user interfaces for each tool. Furthermore, this transfer of data may require a reconciliation step in order for there to be interoperability between computational tools. Results Developments in the Taverna workflow system have enabled pipelines to be constructed and enacted for generic and ad hoc analyses of quantitative data. Here, we present an example of such a workflow involving the statistical identification of differentially-expressed genes from microarray data followed by the annotation of their relationships to cellular processes. This workflow makes use of customised maxdBrowse web services, a system that allows Taverna to query and retrieve gene expression data from the maxdLoad2 microarray database. These data are then analysed by R to identify differentially-expressed genes using the Taverna RShell processor which has been developed for invoking this tool when it has been deployed as a service using the RServe library. In addition, the workflow uses Beanshell scripts to reconcile mismatches of data between services as well as to implement a form of user interaction for selecting subsets of microarray data for analysis as part of the workflow execution. A new plugin system in the Taverna software architecture is demonstrated by the use of renderers for displaying PDF files and CSV formatted data within the Taverna workbench.

  6. How to Make Nothing Out of Something: Analyses of the Impact of Study Sampling and Statistical Interpretation in Misleading Meta-Analytic Conclusions.

    Science.gov (United States)

    Cunningham, Michael R; Baumeister, Roy F

    2016-01-01

    The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger et al., 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter et al.'s (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim and fill and Funnel Plot Asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. (2015) meta-analysis results actually indicate that there is a real depletion effect - contrary to their title.

  7. Analysing the spatial patterns of livestock anthrax in Kazakhstan in relation to environmental factors: a comparison of local (Gi*) and morphology cluster statistics

    Directory of Open Access Journals (Sweden)

    Ian T. Kracalik

    2012-11-01

    Full Text Available We compared a local clustering and a cluster morphology statistic using anthrax outbreaks in large (cattle) and small (sheep and goats) domestic ruminants across Kazakhstan. The Getis-Ord (Gi*) statistic and a multidirectional optimal ecotope algorithm (AMOEBA) were compared using 1st, 2nd and 3rd order Rook contiguity matrices. Multivariate statistical tests were used to evaluate the environmental signatures between clusters and non-clusters from the AMOEBA and Gi* tests. A logistic regression was used to define a risk surface for anthrax outbreaks and to compare agreement between clustering methodologies. Tests revealed differences in the spatial distribution of clusters as well as the total number of clusters in large ruminants for AMOEBA (n = 149) and for small ruminants (n = 9). In contrast, Gi* revealed fewer large ruminant clusters (n = 122) and more small ruminant clusters (n = 61). Significant environmental differences were found between groups using the Kruskal-Wallis and Mann-Whitney U tests. Logistic regression was used to model the presence/absence of anthrax outbreaks and define a risk surface for large ruminants to compare with cluster analyses. The model predicted 32.2% of the landscape as high risk. Approximately 75% of AMOEBA clusters corresponded to predicted high risk, compared with ~64% of Gi* clusters. In general, AMOEBA predicted more irregularly shaped clusters of outbreaks in both livestock groups, while Gi* tended to predict larger, circular clusters. Here we provide an evaluation of both tests and a discussion of the use of each to detect environmental conditions associated with anthrax outbreak clusters in domestic livestock. These findings illustrate important differences in spatial statistical methods for defining local clusters and highlight the importance of selecting appropriate levels of data aggregation.

  8. Analysing the spatial patterns of livestock anthrax in Kazakhstan in relation to environmental factors: a comparison of local (Gi*) and morphology cluster statistics.

    Science.gov (United States)

    Kracalik, Ian T; Blackburn, Jason K; Lukhnova, Larisa; Pazilov, Yerlan; Hugh-Jones, Martin E; Aikimbayev, Alim

    2012-11-01

    We compared a local clustering and a cluster morphology statistic using anthrax outbreaks in large (cattle) and small (sheep and goats) domestic ruminants across Kazakhstan. The Getis-Ord (Gi*) statistic and a multidirectional optimal ecotope algorithm (AMOEBA) were compared using 1st, 2nd and 3rd order Rook contiguity matrices. Multivariate statistical tests were used to evaluate the environmental signatures between clusters and non-clusters from the AMOEBA and Gi* tests. A logistic regression was used to define a risk surface for anthrax outbreaks and to compare agreement between clustering methodologies. Tests revealed differences in the spatial distribution of clusters as well as the total number of clusters in large ruminants for AMOEBA (n = 149) and for small ruminants (n = 9). In contrast, Gi* revealed fewer large ruminant clusters (n = 122) and more small ruminant clusters (n = 61). Significant environmental differences were found between groups using the Kruskal-Wallis and Mann-Whitney U tests. Logistic regression was used to model the presence/absence of anthrax outbreaks and define a risk surface for large ruminants to compare with cluster analyses. The model predicted 32.2% of the landscape as high risk. Approximately 75% of AMOEBA clusters corresponded to predicted high risk, compared with ~64% of Gi* clusters. In general, AMOEBA predicted more irregularly shaped clusters of outbreaks in both livestock groups, while Gi* tended to predict larger, circular clusters. Here we provide an evaluation of both tests and a discussion of the use of each to detect environmental conditions associated with anthrax outbreak clusters in domestic livestock. These findings illustrate important differences in spatial statistical methods for defining local clusters and highlight the importance of selecting appropriate levels of data aggregation.
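
    The Getis-Ord Gi* statistic used above has a closed form and can be sketched in a few lines of numpy, given a binary contiguity matrix; dedicated tools such as PySAL are normally used instead, and the toy weights below are an assumption for illustration:

    ```python
    import numpy as np

    def getis_ord_gi_star(x, w):
        """Gi* z-score per location; w is a binary spatial weights matrix with w[i, i] = 1."""
        n = len(x)
        xbar, s = x.mean(), x.std()                 # population sd, as in the Gi* formulation
        wx = w @ x                                  # weighted sum of neighbouring values
        wsum = w.sum(axis=1)
        wsq = (w**2).sum(axis=1)
        denom = s * np.sqrt((n * wsq - wsum**2) / (n - 1))
        return (wx - xbar * wsum) / denom           # large positive values flag hot spots

    # Toy example: 5 locations on a line, first-order neighbours plus self.
    x = np.array([1.0, 2.0, 8.0, 9.0, 1.5])
    w = (np.abs(np.subtract.outer(np.arange(5), np.arange(5))) <= 1).astype(float)
    print(getis_ord_gi_star(x, w))
    ```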

  9. Expression, Crystallization and Preliminary X-ray Diffraction Analyses of Med-ORF10 in the Biosynthetic Pathway of an Antitumor Antibiotic Medermycin.

    Science.gov (United States)

    Liu, Yanli; Liu, Shasha; Yang, Tingting; Guo, Xiaoxia; Jiang, Yali; Zahid, Kashif Rafiq; Liu, Ke; Liu, Jinlin; Yang, Jihong; Zhao, Haobin; Yang, Yi; Li, Aiying; Qi, Chao

    2015-12-01

    Medermycin, a prominent member of the benzoisochromanequinones, possesses strong antitumor activity and is biosynthesized under the control of a 29-ORF-containing biosynthetic gene cluster. Most of the ORFs in this gene cluster have not been characterized, including the small protein-encoding gene med-ORF10, proposed to play a regulatory role in the biosynthesis of medermycin in an unknown manner. In this study, we report the expression, protein preparation, crystallization and preliminary X-ray diffraction analyses of Med-ORF10 of the wild-type Streptomyces strain. First, we cloned and overexpressed med-ORF10 in Escherichia coli and purified the protein to 98% purity with a yield of 3 mg/L. We then crystallized the protein at a concentration of 20 mg/mL in 22% PEG 3350, 0.2 M magnesium formate and collected data at 1.78 Å resolution. Finally, we detected the expression of Med-ORF10 in Streptomyces by western blotting. In conclusion, this study confirmed the expression of the Med-ORF10 protein in the wild-type strain Streptomyces AM-7161 and collected X-ray diffraction data from a Med-ORF10 crystal at 1.78 Å resolution. These results provide evidence for a functional Med-ORF10 protein in Streptomyces strains and will facilitate further investigation.

  10. On the Proper Use of Statistical Analyses; a Comment on "Evaluation of Colorado Learning Attitudes about Science Survey" by Douglas et al

    CERN Document Server

    Wieman, Carl E

    2015-01-01

    The paper "Evaluation of Colorado Learning Attitudes about Science Survey" [1] proposes a new, much shorter, version of the CLASS based on standard factor analysis. In this comment we explain why we believe the analysis that is used is inappropriate, and the proposed modified CLASS will be measuring something quite different, and less useful, than the original. The CLASS was based on extensive interviews with students and is intended to be a formative measurement of instruction that is probing a much more complex construct and with different goals than what is handled with classic psychometrics. We are writing this comment to reiterate the value of combining techniques of cognitive science with statistical analyses as described in detail in Adams & Wieman, 2011 [2] when developing a test of expert-like thinking for use in formative assessment. This type of approach is also called for by the National Research Council in a recent report [3].

  11. Summary of statistical and trend analyses of selected water-quality data collected near the Big Thicket National Preserve, southeast Texas

    Science.gov (United States)

    Wells, Frank C.; Bourdon, Kristin C.

    1985-01-01

    Statistical and trend analyses of selected water-quality data collected at three streamflow stations in the lower Neches River basin, Texas, are summarized in order to document baseline water-quality conditions in stream segments that flow through the Big Thicket National Preserve in southeast Texas. Dissolved-solids concentrations in the streams are small, less than 132 milligrams per liter in 50 percent of the samples analyzed from each of the sites. Dissolved-oxygen concentrations in the Neches River at Evadale (08041000) generally are large, exceeding 8.0 milligrams per liter in more than 50 percent of the samples analyzed. Total nitrogen and total phosphorus concentrations in samples from this site have not exceeded 1.8 milligrams per liter and 0.20 milligram per liter, respectively.

  12. Validation of Refractivity Profiles Retrieved from FORMOSAT-3/COSMIC Radio Occultation Soundings: Preliminary Results of Statistical Comparisons Utilizing Balloon-Borne Observations

    Directory of Open Access Journals (Sweden)

    Hiroo Hayashi

    2009-01-01

    Full Text Available The GPS radio occultation (RO) soundings by the FORMOSAT-3/COSMIC (Taiwan's Formosa Satellite Mission #3/Constellation Observing System for Meteorology, Ionosphere and Climate) satellites launched in mid-April 2006 are compared with high-resolution balloon-borne (radiosonde and ozonesonde) observations. This paper presents preliminary results of validation of the COSMIC RO measurements in terms of refractivity through the troposphere and lower stratosphere. With the use of COSMIC RO soundings within 2 hours and 300 km of sonde profiles, statistical comparisons between the collocated refractivity profiles are performed for some tropical regions (Malaysia and Western Pacific islands), where moisture-rich air is expected in the lower troposphere, and for both northern and southern polar areas with a very dry troposphere. The results of the comparisons show good agreement between COSMIC RO and sonde refractivity profiles throughout the troposphere (1 - 1.5% difference at most), with a positive bias generally becoming larger at progressively higher altitudes in the lower stratosphere (1 - 2% difference around 25 km), and a very small standard deviation (about 0.5% or less) for a few kilometers below the tropopause level. A large standard deviation of fractional differences in the lowermost troposphere, which reaches up to as much as 3.5 - 5% at 3 km, is seen in the tropics, while a much smaller standard deviation (1 - 2% at most) is evident throughout the polar troposphere.
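
    The quoted comparison statistics are means and standard deviations of fractional refractivity differences between collocated profiles; a minimal sketch with synthetic profiles (not COSMIC data):

    ```python
    import numpy as np

    def fractional_difference_stats(n_ro, n_sonde):
        """Mean bias and standard deviation of (RO - sonde)/sonde, in percent, per level."""
        frac = 100.0 * (n_ro - n_sonde) / n_sonde
        return frac.mean(axis=0), frac.std(axis=0, ddof=1)

    rng = np.random.default_rng(0)
    levels_km = np.linspace(0, 25, 26)
    sonde = 300.0 * np.exp(-levels_km / 7.0)          # idealized refractivity profile
    ro = sonde * (1 + rng.normal(0.005, 0.01, size=(100, 26)))  # 100 collocated RO profiles
    bias, sd = fractional_difference_stats(ro, sonde)
    print(bias[:3], sd[:3])
    ```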

  13. Statistical properties of coastal long waves analysed through sea-level time-gradient functions: exemplary analysis of the Siracusa, Italy, tide-gauge data

    Science.gov (United States)

    Bressan, L.; Tinti, S.

    2016-01-01

    This study presents a new method to analyse the properties of the sea-level signal recorded by coastal tide gauges in the long wave range that is in a window between wind/storm waves and tides and is typical of several phenomena like local seiches, coastal shelf resonances and tsunamis. The method consists of computing four specific functions based on the time gradient (slope) of the recorded sea level oscillations, namely the instantaneous slope (IS) as well as three more functions based on IS, namely the reconstructed sea level (RSL), the background slope (BS) and the control function (CF). These functions are examined through a traditional spectral fast Fourier transform (FFT) analysis and also through a statistical analysis, showing that they can be characterised by probability distribution functions PDFs such as the Student's t distribution (IS and RSL) and the beta distribution (CF). As an example, the method has been applied to data from the tide-gauge station of Siracusa, Italy.
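
    The first of the four functions, the instantaneous slope, and the Student's t fit to it can be sketched as follows; the series is synthetic, and the paper's precise definitions of IS, RSL, BS and CF may differ in detail:

    ```python
    import numpy as np
    from scipy.stats import t as student_t

    rng = np.random.default_rng(0)
    time = np.arange(0, 3600 * 24, 60.0)             # one day of 1-minute sea-level samples
    level = 0.5 * np.sin(2 * np.pi * time / 44712) \
        + rng.normal(0, 0.01, time.size)             # semidiurnal tide plus noise

    slope = np.gradient(level, time)                 # instantaneous slope (IS)
    df, loc, scale = student_t.fit(slope)            # fit a Student's t PDF to IS
    print(f"fitted t distribution: df={df:.1f}, scale={scale:.2e}")
    ```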

  14. Multivariate statistical and lead isotopic analyses approach to identify heavy metal sources in topsoil from the industrial zone of Beijing Capital Iron and Steel Factory.

    Science.gov (United States)

    Zhu, Guangxu; Guo, Qingjun; Xiao, Huayun; Chen, Tongbin; Yang, Jun

    2017-06-01

    Heavy metals are considered toxic to humans and ecosystems. In the present study, heavy metal concentrations in soil were investigated using the single pollution index (PIi), the integrated Nemerow pollution index (PIN), and the geoaccumulation index (Igeo) to determine metal accumulation and its pollution status at the abandoned site of the Capital Iron and Steel Factory in Beijing and its surrounding area. Multivariate statistics (principal component analysis and correlation analysis) and geostatistical analysis (ArcGIS tools), combined with stable Pb isotopic ratios, were applied to explore the characteristics of heavy metal pollution and the possible sources of pollutants. The results indicated that the heavy metal elements show different degrees of accumulation in the study area; the observed trend of both the enrichment factors and the geoaccumulation index was Hg > Cd > Zn > Cr > Pb > Cu ≈ As > Ni. Hg, Cd, Zn, and Cr were the dominant elements influencing soil quality in the study area. The Nemerow index method indicated that all of the heavy metals except Ni caused serious pollution. Multivariate statistical analysis indicated that Cd, Zn, Cu, and Pb are strongly correlated and have high loadings on the same principal component, suggesting that they share sources related to industrial activities and vehicle emissions. The spatial distribution maps based on ordinary kriging showed that high concentrations of heavy metals were located in the local factory area and in the southeast-northwest part of the study region, corresponding with the predominant wind directions. Analyses of lead isotopes confirmed that Pb in the study soils is predominantly derived from three sources: dust generated during steel production, coal combustion, and the natural background. Moreover, the ternary mixture model based on lead isotope analysis indicates that lead in the study soils originates mainly from anthropogenic sources, which contribute much more than the natural background.
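
    A ternary mixing model of the kind mentioned above solves for three source fractions from two isotope ratios plus a mass-balance constraint; a minimal linear-algebra sketch with invented end-member ratios (not the study's measured values):

    ```python
    import numpy as np

    # Rows: 206Pb/207Pb, 208Pb/206Pb, mass balance.
    # Columns: steel dust, coal combustion, natural background (illustrative values only).
    A = np.array([[1.15, 1.18, 1.20],
                  [2.10, 2.05, 2.08],
                  [1.00, 1.00, 1.00]])
    sample = np.array([1.165, 2.075, 1.0])   # measured ratios of one soil sample + constraint

    fractions = np.linalg.solve(A, sample)   # contributions of the three sources
    print(dict(zip(["steel dust", "coal", "background"], fractions.round(3))))
    ```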

  15. Book Trade Research and Statistics. Prices of U.S. and Foreign Published Materials; Book Title Output and Average Prices: 2001 Final and 2002 Preliminary Figures; Book Sales Statistics, 2002: AAP Preliminary Estimates; U.S. Book Exports and Imports:2002; Number of Book Outlets in the United States and Canada; Review Media Statistics.

    Science.gov (United States)

    Sullivan, Sharon G.; Grabois, Andrew; Greco, Albert N.

    2003-01-01

    Includes six reports related to book trade statistics, including prices of U.S. and foreign materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and numbers of books and other media reviewed by major reviewing publications. (LRW)

  16. Practical Statistics

    CERN Document Server

    Lyons, L

    2016-01-01

    Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

  17. Expression, crystallization and preliminary X-ray crystallographic analyses of two N-terminal acetyltransferase-related proteins from Thermoplasma acidophilum

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang Hee; Ha, Jun Yong; Kim, Kyoung Hoon; Oh, Sung Jin; Kim, Do Jin; Kang, Ji Yong; Yoon, Hye Jin [Department of Chemistry, College of Natural Sciences, Seoul National University, Seoul 151-742 (Korea, Republic of); Kim, Se-Hee; Seo, Ji Hae; Kim, Kyu-Won [NeuroVascular Coordination Research Center, Research Institute of Pharmaceutical Sciences, College of Pharmacy, Seoul National University, Seoul 151-742 (Korea, Republic of); Suh, Se Won, E-mail: sewonsuh@snu.ac.kr [Department of Chemistry, College of Natural Sciences, Seoul National University, Seoul 151-742 (Korea, Republic of)

    2006-11-01

    An N-terminal acetyltransferase ARD1 subunit-related protein (Ta0058) and an N-terminal acetyltransferase-related protein (Ta1140) from T. acidophilum were crystallized. X-ray diffraction data were collected to 2.17 and 2.40 Å, respectively. N-terminal acetylation is one of the most common protein modifications in eukaryotes, occurring in approximately 80–90% of cytosolic mammalian proteins and about 50% of yeast proteins. ARD1 (arrest-defective protein 1), together with NAT1 (N-acetyltransferase protein 1) and possibly NAT5, is responsible for the NatA activity in Saccharomyces cerevisiae. In mammals, ARD1 is involved in cell proliferation, neuronal development and cancer. Interestingly, it has been reported that mouse ARD1 (mARD1{sup 225}) mediates ∊-acetylation of hypoxia-inducible factor 1α (HIF-1α) and thereby enhances HIF-1α ubiquitination and degradation. Here, the preliminary X-ray crystallographic analyses of two N-terminal acetyltransferase-related proteins encoded by the Ta0058 and Ta1140 genes of Thermoplasma acidophilum are reported. The Ta0058 protein is related to an N-terminal acetyltransferase complex ARD1 subunit, while Ta1140 is a putative N-terminal acetyltransferase-related protein. Ta0058 shows 26% amino-acid sequence identity to both mARD1{sup 225} and human ARD1{sup 235}. The sequence identity between Ta0058 and Ta1140 is 28%. Ta0058 and Ta1140 were overexpressed in Escherichia coli fused with an N-terminal purification tag. Ta0058 was crystallized at 297 K using a reservoir solution consisting of 0.1 M sodium acetate pH 4.6, 8%(w/v) polyethylene glycol 4000 and 35%(v/v) glycerol. X-ray diffraction data were collected to 2.17 Å. The Ta0058 crystals belong to space group P4{sub 1} (or P4{sub 3}), with unit-cell parameters a = b = 49.334, c = 70.384 Å, α = β = γ = 90°. The asymmetric unit contains a monomer, giving a calculated crystal volume per protein weight (V{sub M}) of 2.13 Å{sup 3} Da{sup −1} and a solvent content of 42%.

  18. Analyses of hydrocarbons in BLM sediment intercalibration sample from Santa Barbara basin and spiked with API South Louisiana crude oil. A preliminary report

    Energy Technology Data Exchange (ETDEWEB)

    Farrington, J W; Tripp, B W; Sass, J

    1977-01-01

    A preliminary report is made of a BLM intercalibration sediment sample from the Santa Barbara basin spiked with API South Louisiana crude oil. The two subsamples reported were analyzed by a procedure described in the appendix, for which a separate abstract was written. Because of the high oil content of the sediment, the usual thin-layer chromatography procedure resulted in an overloaded plate. An appendix was indexed separately. (JSR)

  19. Preliminary results on the tectonic activity of the Ovacık Fault (Malatya-Ovacık Fault Zone, Turkey): Implications of the morphometric analyses

    Science.gov (United States)

    Yazıcı, Müge; Zabci, Cengiz; Sançar, Taylan; Sunal, Gürsel; Natalin, Boris A.

    2016-04-01

    The Anatolian 'plate' is being extruded westward relative to Eurasia along two major tectonic structures, the North Anatolian and the East Anatolian shear zones, which form its northern and eastern boundaries, respectively. Although the main deformation is localized along these two structures, there is remarkable intra-plate deformation within Anatolia, especially characterized by NE-striking sinistral and NW-striking dextral strike-slip faults (Şengör et al. 1985). The Malatya-Ovacık Fault Zone (MOFZ) and its northeastern member, the Ovacık Fault (OF), form one of the NE-striking sinistral strike-slip fault systems in the central 'ova' neotectonic province of Anatolia, located close to its eastern boundary. Although this fault zone is claimed to be an inactive structure in some studies, recent GPS measurements (Aktuğ et al., 2013) and microseismic activity (AFAD, 2013) strongly suggest the opposite. In order to understand rates and patterns of vertical ground motions along the OF, we carried out morphometric analyses such as hypsometric curves and integrals, longitudinal channel profiles, and drainage-basin asymmetry. The Karasu (Euphrates) and Munzur rivers form the main drainage systems of the study area. We extracted the drainage network from an SRTM-based digital elevation model with 30 m ground pixel resolution and identified 40 sub-drainage basins in total, which are inhomogeneously distributed to the north and to the south of the OF. Most of these basins show strong asymmetry and are mainly tilted to the SW. In general, the asymmetry decreases from NE to SW. The only exception is at the margins of the Ovacık Basin (OB), where almost the highest asymmetry values were calculated. On the other hand, the characteristics of the hypsometric curves and the calculated hypsometric integrals do not show a similar systematic spatial pattern. Hypsometric curves with convex geometry naturally indicate relatively young morphology.
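
    The hypsometric integral used above is, in its simplest approximation, the elevation-relief ratio; a minimal sketch computing it for one sub-basin from a DEM array (the study used GIS tooling on SRTM data; the toy grid below is invented):

    ```python
    import numpy as np

    def hypsometric_integral(elev):
        """Elevation-relief ratio approximation: (mean - min) / (max - min).
        Higher values suggest convex, relatively young hypsometry; low values, mature relief."""
        e = np.asarray(elev, dtype=float)
        return (e.mean() - e.min()) / (e.max() - e.min())

    basin = np.array([[420, 450, 500],
                      [430, 480, 560],
                      [445, 510, 610]])   # toy 3x3 DEM clip of one sub-basin (metres)
    print(f"HI = {hypsometric_integral(basin):.2f}")
    ```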

  20. Statistical properties of coastal long waves analysed through sea-level time-gradient functions: exemplary analysis of the Siracusa, Italy, tide-gauge data

    Directory of Open Access Journals (Sweden)

    L. Bressan

    2016-01-01

    Full Text Available This study presents a new method to analyse the properties of the sea-level signal recorded by coastal tide gauges in the long wave range, that is, in a window between wind/storm waves and tides that is typical of several phenomena like local seiches, coastal shelf resonances and tsunamis. The method consists of computing four specific functions based on the time gradient (slope) of the recorded sea level oscillations, namely the instantaneous slope (IS) as well as three more functions based on IS: the reconstructed sea level (RSL), the background slope (BS) and the control function (CF). These functions are examined through a traditional spectral fast Fourier transform (FFT) analysis and also through a statistical analysis, showing that they can be characterised by probability distribution functions PDFs such as the Student's t distribution (IS and RSL) and the beta distribution (CF). As an example, the method has been applied to data from the tide-gauge station of Siracusa, Italy.

  1. Statistics Clinic

    Science.gov (United States)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  2. The weak points of statistical and demographic analyses in estimations of war victims in Bosnia and Herzegovina in the period 1992-1995

    Directory of Open Access Journals (Sweden)

    Kovačević Miladin

    2005-01-01

    Full Text Available The political and war crisis that gripped Bosnia and Herzegovina from the spring of 1992 until the end of hostilities in the autumn of 1995, when the "Dayton Peace Agreement" emerged (November 1995), was accompanied by a media war that had an international character from the very beginning. The question of the number of war victims (killed and missing) "exploded" in June 1993, when Haris Silajdžić stated that there had been 200,000 dead among the Muslims. This figure uncritically became the basis for all later media and local "empirical truths" about the number of victims. All statistical and demographic disciplines were exploited to support, if not prove, propaganda standpoints, and objectivity was suppressed by the ugly "face of the war". As the experience of the Second World War in Yugoslavia shows, the question of the number of victims remains topical for decades after the end of a war, and Bosnia and Herzegovina is more than a confirmation of this. The question intertwines with (and in a way "feeds off") the most difficult political and international questions and court trials (e.g. the International Court of Justice case brought by Bosnia and Herzegovina against the Federal Republic of Yugoslavia, namely Serbia). Methodological analysis of the most important works dealing with the number of victims of the Bosnian war (above all, those produced by Bosnian institutes and authors) indicates the "mistakes" that follow from the propagandistic character of these works. Manipulation with statistical methods and numbers is not new, and methodological and numerical traps can ensnare even the most informed. The use of statistics and social science in court trials shows the Janus face of science: on one side, the authentic "moral passion" of researchers finds great purpose; on the other, special interests strive to impose themselves through the (most refined) instrumentation of science and knowledge (the example of Mr. Patrick Ball).

  3. Genetic Structure and Preliminary Findings of Cryptic Diversity of the Malaysian Mahseer (Tor tambroides Valenciennes: Cyprinidae) Inferred from Mitochondrial DNA and Microsatellite Analyses

    Directory of Open Access Journals (Sweden)

    Yuzine Esa

    2013-01-01

    Full Text Available This study examines the population genetic structure of Tor tambroides, an important freshwater fish species in Malaysia, using fifteen polymorphic microsatellite loci and sequencing of 464 base pairs of the mitochondrial cytochrome c oxidase I (COI) gene. A total of 152 mahseer samples were collected from eight populations throughout the Malaysian river system. Microsatellite results revealed high levels of intrapopulation variation, while mitochondrial COI results revealed high levels of interpopulation differentiation. Possible reasons for this discrepancy might be the varying influence of genetic drift on each marker or the small sample sizes used in most of the populations. The Kelantan population showed very low levels of genetic variation in both the mitochondrial and microsatellite analyses. Phylogenetic analysis of the COI gene found a unique haplotype (ER8*), possibly representing a cryptic lineage of T. douronensis, in the Endau-Rompin population. Nevertheless, the inclusion of nuclear microsatellite analyses could not fully resolve the genetic identity of haplotype ER8* in the present study. Overall, the findings show a serious need for more comprehensive and larger-scale sampling, especially in remote river systems, in combination with molecular analyses using multiple markers, in order to discover more cryptic lineages or undescribed "genetic species" of mahseer.

  4. Statistics; Tilastot

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-31

    For the years 1997 and 1998, some of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1997, Statistics Finland, Helsinki 1998, ISSN 0784-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO{sub 2}-emissions, Electricity supply, Energy imports by country of origin in January-September 1998, Energy exports by recipient country in January-September 1998, Consumer prices of liquid fuels, Consumer prices of hard coal, Natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, Value added taxes and fiscal charges and fees included in consumer prices of some energy sources, Energy taxes and precautionary stock fees, pollution fees on oil products

  5. Statistics; Tilastot

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-31

    For the years 1997 and 1998, some of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1996, Statistics Finland, Helsinki 1997, ISSN 0784-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO{sub 2}-emissions, Electricity supply, Energy imports by country of origin in January-June 1998, Energy exports by recipient country in January-June 1998, Consumer prices of liquid fuels, Consumer prices of hard coal, Natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, Value added taxes and fiscal charges and fees included in consumer prices of some energy sources, Energy taxes and precautionary stock fees, pollution fees on oil products

  6. A Raman lidar at La Reunion (20.8° S, 55.5° E) for monitoring water vapour and cirrus distributions in the subtropical upper troposphere: preliminary analyses and description of a future system

    Directory of Open Access Journals (Sweden)

    C. Hoareau

    2012-06-01

    Full Text Available A ground-based Rayleigh lidar has provided continuous observations of tropospheric water vapour profiles and cirrus clouds using a preliminary Raman channel setup added to an existing Rayleigh lidar above La Reunion over the period 2002-2005. With this instrument, we performed a first measurement campaign of 350 independent water vapour profiles. A statistical study of the distribution of water vapour profiles is presented and some investigations concerning the calibration are discussed. An analysis of the cirrus clouds is presented, and a classification showing three distinct classes has been performed. Based on these results, the characteristics and design of a future lidar system, to be implemented at the new Reunion Island altitude observatory (2200 m) for long-term monitoring, are presented, and numerical simulations of system performance have been carried out to compare both instruments.

  7. STATISTICAL ANALYSES OF AE DATA FROM AIRCRAFT

    Institute of Scientific and Technical Information of China (English)

    张凤林; 韩维; 胡国才; 李子尚

    2001-01-01

    The AE (acoustic emission) technique can be used to monitor the formation and development of fatigue cracks in steel structures dynamically and continuously. In this paper, a mathematical statistical model is established by analyzing characteristic parameters of AE data from aircraft. Using this model, the authors study AE data from a full-airframe fatigue test of one aircraft type and from in-flight AE monitoring of an in-service fighter. On this basis, a practical damage criterion is advanced, which can be applied to judge, at a given probability, whether cracks are forming or growing in an aircraft. Finally, research directions toward the establishment of a more general criterion are given.

  8. A graphical user interface (GUI) toolkit for the calculation of three-dimensional (3D) multi-phase biological effective dose (BED) distributions including statistical analyses.

    Science.gov (United States)

    Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis

    2016-07-01

    A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. This toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a, version 7.10 was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit has the capability of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculation of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g. coronal, sagittal and transverse). The different elements of this toolkit are presented and the important steps for the execution of its calculations are illustrated. The toolkit is applied to brain, head & neck and prostate cancer patients who received primary and boost phases, in order to demonstrate its capability in calculating BED distributions as well as in measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated.
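
    As an illustration of the kind of calculation such a toolkit performs, the sketch below computes a voxel-wise, multi-phase BED using the standard linear-quadratic form BED = D(1 + d/(alpha/beta)). The per-phase summation, the alpha/beta value and the array shapes are illustrative assumptions; the abstract does not give the toolkit's exact "true" and "approximate" formulas, and the original tool is written in MATLAB rather than Python.

```python
import numpy as np

def multiphase_bed(phase_doses, fractions, alpha_beta=3.0):
    """Voxel-wise BED summed over treatment phases (LQ model).

    phase_doses: list of 3-D arrays of total physical dose per phase (Gy)
    fractions:   number of fractions delivered in each phase
    alpha_beta:  tissue alpha/beta ratio (Gy)
    """
    bed = np.zeros_like(phase_doses[0])
    for dose, n in zip(phase_doses, fractions):
        d = dose / n                         # dose per fraction in each voxel
        bed += dose * (1.0 + d / alpha_beta)
    return bed

# Example: a 50 Gy / 25 fx primary phase plus a 20 Gy / 10 fx boost
primary = np.full((64, 64, 32), 50.0)
boost = np.full((64, 64, 32), 20.0)
print(multiphase_bed([primary, boost], [25, 10])[0, 0, 0])  # ~116.7 Gy_3
```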

  9. Computational and statistical analyses of amino acid usage and physico-chemical properties of the twelve late embryogenesis abundant protein classes.

    Directory of Open Access Journals (Sweden)

    Emmanuel Jaspard

    Full Text Available Late Embryogenesis Abundant Proteins (LEAPs) are ubiquitous proteins expected to play major roles in desiccation tolerance. Little is known about their structure-function relationships because of the scarcity of 3-D structures for LEAPs. The previous building of LEAPdb, a database dedicated to LEAPs from plants and other organisms, led to the classification of 710 LEAPs into 12 non-overlapping classes with distinct properties. Using this resource, numerous physico-chemical properties of LEAPs and the amino acid usage of LEAPs have been computed and statistically analyzed, revealing distinctive features for each class. This unprecedented analysis allowed a rigorous characterization of the 12 LEAP classes, which also differed in multiple structural and physico-chemical features. Although most LEAPs can be predicted as intrinsically disordered proteins, the analysis indicates that LEAP class 7 (PF03168) and probably LEAP class 11 (PF04927) are natively folded proteins. This study thus provides a detailed description of the structural properties of this protein family, opening the path toward further LEAP structure-function analysis. Finally, since each LEAP class can be clearly characterized by a unique set of physico-chemical properties, this will allow the development of software to predict proteins as LEAPs.

  10. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 4: Uncertainty and sensitivity analyses for 40 CFR 191, Subpart B

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to the EPA's Environmental Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Additional information about the 1992 PA is provided in other volumes. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions, the choice of parameters selected for sampling, and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect compliance with 40 CFR 191B are: drilling intensity, intrusion borehole permeability, halite and anhydrite permeabilities, radionuclide solubilities and distribution coefficients, fracture spacing in the Culebra Dolomite Member of the Rustler Formation, porosity of the Culebra, and spatial variability of Culebra transmissivity. Performance with respect to 40 CFR 191B is insensitive to uncertainty in other parameters; however, additional data are needed to confirm that reality lies within the assigned distributions.
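
    A minimal sketch of the sampling-based uncertainty/sensitivity workflow such iterative PAs typically use: Latin hypercube sampling of uncertain parameters followed by rank (Spearman) correlations of each parameter against the model output. The three parameters, their ranges, and the toy "release" model below are hypothetical stand-ins, not the WIPP performance-assessment code.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=300)                       # 300 LHS samples in [0, 1)^3

# Map the unit hypercube to hypothetical log-uniform parameter ranges
params = 10.0 ** qmc.scale(u, [-1, -14, -8], [1, -11, -4])
drilling_intensity, borehole_perm, solubility = params.T

# Toy stand-in for the performance-assessment model output
release = drilling_intensity * np.sqrt(borehole_perm) * solubility**0.3

for name, x in [("drilling intensity", drilling_intensity),
                ("borehole permeability", borehole_perm),
                ("radionuclide solubility", solubility)]:
    rho, _ = spearmanr(x, release)
    print(f"{name:24s} rank correlation with release: {rho:+.2f}")
```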

  11. Rapid detection and statistical differentiation of KPC gene variants in Gram-negative pathogens by use of high-resolution melting and ScreenClust analyses.

    Science.gov (United States)

    Roth, Amanda L; Hanson, Nancy D

    2013-01-01

    In the United States, the production of the Klebsiella pneumoniae carbapenemase (KPC) is an important mechanism of carbapenem resistance in Gram-negative pathogens. Infections with KPC-producing organisms are associated with increased morbidity and mortality; therefore, the rapid detection of KPC-producing pathogens is critical in patient care and infection control. We developed a real-time PCR assay complemented with traditional high-resolution melting (HRM) analysis, as well as statistically based genotyping, using the Rotor-Gene ScreenClust HRM software to both detect the presence of bla(KPC) and differentiate between KPC-2-like and KPC-3-like alleles. A total of 166 clinical isolates of Enterobacteriaceae, Pseudomonas aeruginosa, and Acinetobacter baumannii with various β-lactamase susceptibility patterns were tested in the validation of this assay; 66 of these organisms were known to produce the KPC β-lactamase. The real-time PCR assay was able to detect the presence of bla(KPC) in all 66 of these clinical isolates (100% sensitivity and specificity). HRM analysis demonstrated that 26 had KPC-2-like melting peak temperatures, while 40 had KPC-3-like melting peak temperatures. Sequencing of 21 amplified products confirmed the melting peak results, with 9 isolates carrying bla(KPC-2) and 12 isolates carrying bla(KPC-3). This PCR/HRM assay can identify KPC-producing Gram-negative pathogens in as little as 3 h after isolation of pure colonies and does not require post-PCR sample manipulation for HRM analysis; ScreenClust analysis easily distinguishes bla(KPC-2-like) and bla(KPC-3-like) alleles. Therefore, this assay is a rapid method to identify the presence of bla(KPC) enzymes in Gram-negative pathogens that can be easily integrated into busy clinical microbiology laboratories.
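
    A minimal sketch of the melting-peak genotyping step, assuming fixed peak-temperature windows; the actual assay uses ScreenClust's statistical clustering rather than hard thresholds, and the function name and cutoff temperatures below (classify_kpc, kpc2_peak, kpc3_peak) are hypothetical placeholders.

```python
def classify_kpc(melt_peak_c, kpc2_peak=84.2, kpc3_peak=84.8, tol=0.2):
    """Return an allele call from an HRM melting peak temperature (deg C)."""
    if abs(melt_peak_c - kpc2_peak) <= tol:
        return "blaKPC-2-like"
    if abs(melt_peak_c - kpc3_peak) <= tol:
        return "blaKPC-3-like"
    return "no call / retest"

print(classify_kpc(84.25))  # blaKPC-2-like
print(classify_kpc(84.75))  # blaKPC-3-like
```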

  12. Estimation of thyroid radiation doses for the hanford thyroid disease study: results and implications for statistical power of the epidemiological analyses.

    Science.gov (United States)

    Kopecky, Kenneth J; Davis, Scott; Hamilton, Thomas E; Saporito, Mark S; Onstad, Lynn E

    2004-07-01

    Residents of eastern Washington, northeastern Oregon, and western Idaho were exposed to iodine-131 (131I) released into the atmosphere from operations at the Hanford Nuclear Site from 1944 through 1972, especially in the late 1940's and early 1950's. This paper describes the estimated doses to the thyroid glands of the 3,440 evaluable participants in the Hanford Thyroid Disease Study, which investigated whether thyroid morbidity was increased in people exposed to radioactive iodine from Hanford during 1944-1957. The participants were born during 1940-1946 to mothers living in Benton, Franklin, Walla Walla, Adams, Okanogan, Ferry, or Stevens Counties in Washington State. Whenever possible, someone with direct knowledge of the participant's early life (preferably the participant's mother) was interviewed about the participant's individual dose-determining characteristics (residence history, sources and quantities of food, milk, and milk products consumed, production and processing techniques for home-grown food and milk products). Default information was used if no interview respondent was available. Thyroid doses were estimated using the computer program Calculation of Individual Doses from Environmental Radionuclides (CIDER) developed by the Hanford Environmental Dose Reconstruction Project. CIDER provided 100 sets of doses to represent uncertainty of the estimates. These sets were not generated independently for each participant, but reflected the effects of uncertainties in characteristics shared by participants. Estimated doses (medians of each participant's 100 realizations) ranged from 0.0029 mGy to 2823 mGy, with mean and median of 174 and 97 mGy, respectively. The distribution of estimated doses provided the Hanford Thyroid Disease Study with sufficient statistical power to test for dose-response relationships between thyroid outcomes and exposure to Hanford's 131I.
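
    A sketch of how such vector-valued dose estimates can be summarized: each participant carries 100 realizations, and the reported point estimate is the per-participant median. The lognormal generator below is only a stand-in for the CIDER output; its parameters are chosen merely to give values of roughly the reported magnitude.

```python
import numpy as np

rng = np.random.default_rng(42)
n_participants, n_realizations = 3440, 100

# Stand-in for CIDER output: 100 dose realizations (mGy) per participant
doses = rng.lognormal(mean=4.5, sigma=1.2, size=(n_participants, n_realizations))

participant_medians = np.median(doses, axis=1)   # one point estimate per person
print(f"median of medians: {np.median(participant_medians):.0f} mGy")
print(f"range: {participant_medians.min():.2f} - {participant_medians.max():.0f} mGy")
```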

  13. Multivariate statistic and time series analyses of grain-size data in Quaternary sediments of Lake El'gygytgyn, NE Russia

    Directory of Open Access Journals (Sweden)

    A. Francke

    2013-01-01

    Full Text Available Lake El'gygytgyn, located in the Far East Russian Arctic, was formed by a meteorite impact about 3.58 Ma ago. In 2009, the ICDP Lake El'gygytgyn Drilling Project obtained a continuous sediment sequence of the lacustrine deposits and the upper part of the impact breccia. Here, we present grain-size data of the past 2.6 Ma. General downcore grain-size variations yield coarser sediments during warm periods and finer ones during cold periods. According to Principal Component Analyses (PCA), the climate-dependent variations in grain-size distributions mainly occur in the coarse silt and very fine silt fractions. During interglacial periods, accumulation of coarser grain sizes in the lake center is supposed to be caused by redistribution of clastic material by a wind-induced current pattern during the ice-free period. Sediment supply to the lake is triggered by the thickness of the active layer in the catchment, and the availability of water as transport medium. During glacial periods, sedimentation at Lake El'gygytgyn is hampered by the occurrence of a perennial ice-cover, with sedimentation being restricted to seasonal moats and vertical conducts through the ice. Thus, the summer temperature predominantly triggers transport of coarse material into the lake center. Time series analysis, carried out to gain insight into the frequency content of the grain-size data, showed grain-size variations predominantly on the Milankovitch eccentricity, obliquity and precession bands. Variations in the relative power of these three oscillation bands during the Quaternary imply that climate conditions at Lake El'gygytgyn are mainly triggered by global glacial/interglacial variations (eccentricity, obliquity) and local insolation forcing (precession), respectively.
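
    A minimal sketch of the PCA step, under the assumption that the input is a samples-by-fractions matrix of grain-size percentages; the Dirichlet generator below merely fabricates compositional data for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# 500 downcore samples, 8 grain-size fractions (e.g. clay ... coarse silt)
grain_size = rng.dirichlet(alpha=np.ones(8), size=500)

pca = PCA(n_components=2)
scores = pca.fit_transform(grain_size)              # downcore PC scores
print("explained variance:", pca.explained_variance_ratio_.round(2))
print("PC1 loadings:", pca.components_[0].round(2))  # fractions driving PC1
```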

  14. Statistical analyses of hydrologic system components and simulation of Edwards aquifer water-level response to rainfall using transfer-function models, San Antonio region, Texas

    Science.gov (United States)

    Miller, Lisa D.; Long, Andrew J.

    2006-01-01

    In 2003 the U.S. Geological Survey, in cooperation with the San Antonio Water System, conducted a study using historical data to statistically analyze hydrologic system components in the San Antonio region of Texas and to develop transfer-function models to simulate water levels at selected sites (wells) in the Edwards aquifer on the basis of rainfall. Water levels for two wells in the confined zone in Medina County and one well in the confined zone in Bexar County were highly correlated and showed little or no lag time between water-level responses. Water levels in these wells also were highly correlated with springflow at Comal Springs. Water-level hydrographs for 35 storms showed that an individual well can respond differently to similar amounts of rainfall. Fourteen water-level-recession hydrographs for a Medina County well showed that recession rates were variable. Transfer-function models were developed to simulate water levels at one confined-zone well and two recharge-zone wells in response to rainfall. For the confined-zone well, 50 percent of the simulated water levels are within 10 feet of the measured water levels, and 80 percent of the simulated water levels are within 15 feet of the measured water levels. For one recharge-zone well, 50 percent of the simulated water levels are within 5 feet of the measured water levels, and 90 percent of the simulated water levels are within 14 feet of the measured water levels. For the other recharge-zone well, 50 percent of the simulated water levels are within 14 feet of the measured water levels, and 90 percent of the simulated water levels are within 27 feet of the measured water levels. The transfer-function models showed that (1) the Edwards aquifer in the San Antonio region responds differently to recharge (effective rainfall) at different wells; and (2) multiple flow components are present in the aquifer. If simulated long-term system response results from a change in the hydrologic budget, then water levels would
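
    A sketch of the general idea behind a rainfall-to-water-level transfer-function model: the simulated water level is effective rainfall convolved with an impulse-response function. The two-exponential response (a fast and a slow flow component, echoing the multiple flow components noted above) and all parameter values are illustrative assumptions, not the study's calibrated models.

```python
import numpy as np

def simulate_water_level(rainfall, t=np.arange(365)):
    """Convolve daily effective rainfall with a two-component impulse response."""
    irf = 0.8 * np.exp(-t / 5.0) + 0.2 * np.exp(-t / 90.0)  # fast + slow flow
    return np.convolve(rainfall, irf)[: len(rainfall)]

rng = np.random.default_rng(7)
rain = rng.gamma(shape=0.3, scale=10.0, size=730)   # two years of daily rainfall
level = simulate_water_level(rain)                  # simulated water-level series
print(level[:5].round(2))
```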

  15. The Preliminary Discussion on the Statistical Framework of the Inclusive Finance%包容性金融统计框架初探

    Institute of Scientific and Technical Information of China (English)

    余晓芳

    2015-01-01

    Inclusive finance has become an important goal in the development of China's financial sector. To achieve inclusive growth of the financial industry, a statistical framework for inclusive finance should be established within the current financial statistics system, to quantitatively reflect the state of inclusive financial development, to provide data support for assessing that development, and to provide an information basis for formulating and implementing policies on inclusive financial development. This paper clarifies concepts such as inclusive finance and inclusive financial statistics, analyzes the content of a statistical framework for inclusive finance from three aspects (statistical content, the indicator system, and statistical methods), and puts forward suggestions for constructing such a framework.

  16. Statistical analysis of lightning electric field measured under Malaysian condition

    Science.gov (United States)

    Salimi, Behnam; Mehranzamir, Kamyar; Abdul-Malek, Zulkurnain

    2014-02-01

    Lightning is an electrical discharge during thunderstorms that can occur either within clouds (inter-cloud) or between clouds and the ground (cloud-to-ground). Lightning characteristics and their statistical information are the foundation for the design of lightning protection systems as well as for the calculation of lightning radiated fields. Nowadays, there are various techniques to detect lightning signals and to determine the various parameters produced by a lightning flash, each with its own claimed performance. In this paper, the characteristics of captured broadband electric fields generated by cloud-to-ground lightning discharges in the south of Malaysia are analyzed. A total of 130 cloud-to-ground lightning flashes from 3 separate thunderstorm events (each lasting about 4-5 hours) were examined. Statistical analyses of the following signal parameters are presented: preliminary breakdown pulse train duration, time interval between preliminary breakdown and return stroke, stroke multiplicity, and the percentage of single-stroke flashes. The BIL model is also introduced to characterize lightning signature patterns. The statistical analyses show that about 79% of lightning signals fit well with the BIL model. The maximum and minimum preliminary breakdown durations of the observed lightning signals are 84 ms and 560 µs, respectively. The findings show that 7.6% of the flashes were single-stroke flashes, and the maximum number of strokes recorded was 14 strokes per flash. A preliminary breakdown signature can be identified in more than 95% of the flashes.

  17. Insights into the 2011-2012 submarine eruption off the coast of El Hierro (Canary Islands, Spain) from statistical analyses of earthquake activity

    Science.gov (United States)

    Ibáñez, J. M.; De Angelis, S.; Díaz-Moreno, A.; Hernández, P.; Alguacil, G.; Posadas, A.; Pérez, N.

    2012-08-01

    The purpose of this work is to gain insights into the 2011-2012 eruption of El Hierro (Canary Islands) by mapping the evolution of the seismic b-value. The El Hierro seismic sequence offers a rather unique opportunity to investigate the process of reawakening of an oceanic intraplate volcano after a long period of repose. The 2011-2012 eruption is a submarine volcanic event that took place about 2 km off the southern coast of El Hierro. The eruption was accompanied by an intense seismic swarm and surface manifestations of activity. The earthquake catalogue during the period of unrest includes over 12 000 events, the largest with magnitude 4.6. The seismic sequence can be grouped into three distinct phases, which correspond to well-separated spatial clusters and distinct earthquake regimes. For the entire catalogue, the estimated b-value is 1.18 ± 0.03 and the magnitude of completeness is 1.3. The b-value is very close to 1.0, which indicates completeness of the earthquake catalogue with only minor departures from the linearity of the Gutenberg-Richter frequency-magnitude distribution. The most straightforward interpretation of this result is that the seismic swarm reached its final stages, and no additional large-magnitude events should be anticipated, similarly to what one would expect for non-volcanic earthquake sequences. The results, dividing the activity into different phases, illustrate remarkable differences in the estimated b-value during the early and late stages of the eruption. The early pre-eruptive activity was characterized by a b-value of 2.25. In contrast, the b-value was 1.25 during the eruptive phase. Based on our analyses, and the results of other studies, we propose a scenario that may account for the observations reported in this work. We infer that the earthquakes that occurred in the first phase reflect magma migration from the upper mantle to crustal depths. The area where magma initially intruded into the crust, because of its transitional nature
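
    B-value statistics of this kind can be reproduced in form with the standard Aki (1965) maximum-likelihood estimator, b = log10(e) / (mean(M) - (Mc - dM/2)), with standard error b/sqrt(N). The sketch below applies it to a synthetic catalogue generated with b = 1.18 above a completeness magnitude Mc = 1.3; the catalogue itself is fabricated for illustration.

```python
import numpy as np

def b_value(mags, mc, dm=0.0):
    """Aki (1965) maximum-likelihood b-value and its 1-sigma error.

    dm is the magnitude bin width (0 for continuous magnitudes,
    e.g. 0.1 for a catalogue reported to one decimal place).
    """
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    return b, b / np.sqrt(m.size)

# Synthetic catalogue: exponentially distributed magnitudes above Mc = 1.3
rng = np.random.default_rng(3)
mags = 1.3 + rng.exponential(scale=1.0 / (1.18 * np.log(10.0)), size=12000)
b, se = b_value(mags, mc=1.3)
print(f"b = {b:.2f} +/- {se:.2f}")   # recovers a value close to 1.18
```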

  18. The Added Value of Log File Analyses of the Use of a Personal Health Record for Patients With Type 2 Diabetes Mellitus: Preliminary Results.

    Science.gov (United States)

    Sieverink, Floor; Kelders, Saskia M; Braakman-Jansen, Louise M A; van Gemert-Pijnen, Julia E W C

    2014-03-01

    The electronic personal health record (PHR) is a promising technology for improving the quality of chronic disease management. Until now, evaluations of such systems have provided little insight into why a particular outcome occurred. The aim of this study is to gain insight into the navigation process (what functionalities are used, and in what sequence) of e-Vita, a PHR for patients with type 2 diabetes mellitus (T2DM), to increase the efficiency of the system and improve long-term adherence. Log data of the first visits in the first 6 weeks after the release of a renewed version of e-Vita were analyzed to identify the usage patterns that emerge when users explore a new application. After receiving the invitation, 28% of all registered users visited e-Vita. In total, 70 unique usage patterns could be identified. Of the users who visited the education service first, 93% ended their session there. Most users visited either 1 or 5 or more services during their first session, but the distribution of the routes was diffuse. In conclusion, log file analyses can provide valuable prompts for improving the system design of a PHR. In this way, the match between the system and its users, and hence long-term adherence, has the potential to increase.
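
    A minimal sketch of how usage patterns (routes) can be derived from log data: order each user's page views by timestamp, join the visited services into a route, and count route frequencies. The log fields and service names below are hypothetical.

```python
from collections import Counter

# (user, timestamp, service) tuples extracted from the web-server log
log = [
    ("u1", "2014-01-06T10:00", "values"), ("u1", "2014-01-06T10:03", "education"),
    ("u2", "2014-01-06T11:00", "education"),
    ("u3", "2014-01-07T09:10", "values"), ("u3", "2014-01-07T09:12", "coach"),
]

# Rebuild each user's navigation route in chronological order
routes = {}
for user, ts, service in sorted(log, key=lambda r: (r[0], r[1])):
    routes.setdefault(user, []).append(service)

# Count how often each unique route occurs
pattern_counts = Counter(" > ".join(r) for r in routes.values())
for pattern, n in pattern_counts.most_common():
    print(f"{n:3d}  {pattern}")
```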

  19. Stardust Interstellar Preliminary Examination IV: Scanning transmission X-ray microscopy analyses of impact features in the Stardust Interstellar Dust Collector

    Science.gov (United States)

    Butterworth, Anna L.; Westphal, Andrew J.; Tyliszczak, Tolek; Gainsforth, Zack; Stodolna, Julien; Frank, David R.; Allen, Carlton; Anderson, David; Ansari, Asna; Bajt, Saša; Bastien, Ron K.; Bassim, Nabil; Bechtel, Hans A.; Borg, Janet; Brenker, Frank E.; Bridges, John; Brownlee, Donald E.; Burchell, Mark; Burghammer, Manfred; Changela, Hitesh; Cloetens, Peter; Davis, Andrew M.; Doll, Ryan; Floss, Christine; Flynn, George; Grün, Eberhard; Heck, Philipp R.; Hillier, Jon K.; Hoppe, Peter; Hudson, Bruce; Huth, Joachim; Hvide, Brit; Kearsley, Anton; King, Ashley J.; Lai, Barry; Leitner, Jan; Lemelle, Laurence; Leroux, Hugues; Leonard, Ariel; Lettieri, Robert; Marchant, William; Nittler, Larry R.; Ogliore, Ryan; Ong, Wei Ja; Postberg, Frank; Price, Mark C.; Sandford, Scott A.; Tresseras, Juan-Angel Sans; Schmitz, Sylvia; Schoonjans, Tom; Silversmit, Geert; Simionovici, Alexandre S.; Solé, Vicente A.; Srama, Ralf; Stadermann, Frank J.; Stephan, Thomas; Sterken, Veerle J.; Stroud, Rhonda M.; Sutton, Steven; Trieloff, Mario; Tsou, Peter; Tsuchiyama, Akira; Vekemans, Bart; Vincze, Laszlo; von Korff, Joshua; Wordsworth, Naomi; Zevin, Daniel; Zolensky, Michael E.

    2014-09-01

    We report the quantitative characterization by synchrotron soft X-ray spectroscopy of 31 potential impact features in the aerogel capture medium of the Stardust Interstellar Dust Collector. Samples were analyzed in aerogel by acquiring high spatial resolution maps and high energy-resolution spectra of major rock-forming elements Mg, Al, Si, Fe, and others. We developed diagnostic screening tests to reject spacecraft secondary ejecta and terrestrial contaminants from further consideration as interstellar dust candidates. The results support an extraterrestrial origin for three interstellar candidates: I1043,1,30 (Orion) is a 3 pg particle with Mg-spinel, forsterite, and an iron-bearing phase. I1047,1,34 (Hylabrook) is a 4 pg particle comprising an olivine core surrounded by low-density, amorphous Mg-silicate and amorphous Fe, Cr, and Mn phases. I1003,1,40 (Sorok) has the track morphology of a high-speed impact, but contains no detectable residue that is convincingly distinguishable from the background aerogel. Twenty-two samples with an anthropogenic origin were rejected, including four secondary ejecta from impacts on the Stardust spacecraft aft solar panels, nine ejecta from secondary impacts on the Stardust Sample Return Capsule, and nine contaminants lacking evidence of an impact. Other samples in the collection included I1029,1,6, which contained surviving solar system impactor material. Four samples remained ambiguous: I1006,2,18, I1044,2,32, and I1092,2,38 were too dense for analysis, and we did not detect an intact projectile in I1044,3,33. We detected no radiation effects from the synchrotron soft X-ray analyses; however, we recorded the effects of synchrotron hard X-ray radiation on I1043,1,30 and I1047,1,34.

  20. Crystallization and preliminary X-ray diffraction analyses of the TIR domains of three TIR-NB-LRR proteins that are involved in disease resistance in Arabidopsis thaliana.

    Science.gov (United States)

    Wan, Li; Zhang, Xiaoxiao; Williams, Simon J; Ve, Thomas; Bernoux, Maud; Sohn, Kee Hoon; Jones, Jonathan D G; Dodds, Peter N; Kobe, Bostjan

    2013-11-01

    The Toll/interleukin-1 receptor (TIR) domain is a protein-protein interaction domain that is found in both animal and plant immune receptors. The N-terminal TIR domain from the nucleotide-binding (NB)-leucine-rich repeat (LRR) class of plant disease-resistance (R) proteins has been shown to play an important role in defence signalling. Recently, the crystal structure of the TIR domain from flax R protein L6 was determined and this structure, combined with functional studies, demonstrated that TIR-domain homodimerization is a requirement for function of the R protein L6. To advance the molecular understanding of the function of TIR domains in R-protein signalling, the protein expression, purification, crystallization and X-ray diffraction analyses of the TIR domains of the Arabidopsis thaliana R proteins RPS4 (resistance to Pseudomonas syringae 4) and RRS1 (resistance to Ralstonia solanacearum 1) and the resistance-like protein SNC1 (suppressor of npr1-1, constitutive 1) are reported here. RPS4 and RRS1 function cooperatively as a dual resistance-protein system that prevents infection by three distinct pathogens. SNC1 is implicated in resistance pathways in Arabidopsis and is believed to be involved in transcriptional regulation through its interaction with the transcriptional corepressor TPR1 (Topless-related 1). The TIR domains of all three proteins have successfully been expressed and purified as soluble proteins in Escherichia coli. Plate-like crystals of the RPS4 TIR domain were obtained using PEG 3350 as a precipitant; they diffracted X-rays to 2.05 Å resolution, had the symmetry of space group P1 and analysis of the Matthews coefficient suggested that there were four molecules per asymmetric unit. Tetragonal crystals of the RRS1 TIR domain were obtained using ammonium sulfate as a precipitant; they diffracted X-rays to 1.75 Å resolution, had the symmetry of space group P4(1)2(1)2 or P4(3)2(1)2 and were most likely to contain one molecule per asymmetric

  1. Preliminary clinical application of an adaptive iterative statistical reconstruction algorithm in head and neck computed tomography angiography with low tube voltage and a low concentration of contrast medium

    Institute of Scientific and Technical Information of China (English)

    Shan Hu; Wenzhen Zhu; Daoyu Hu; XiaoYan Meng; Jinhua Zhang; Weijia Wan; Li Zhou

    2015-01-01

    Objective To evaluate the feasibility of using a low concentration of contrast medium (Visipaque 270 mgI/mL), low tube voltage, and an advanced image reconstruction algorithm in head and neck computed tomography angiography (CTA). Methods Forty patients (22 men and 18 women; average age 48.7 ± 14.25 years; average body mass index 23.9 ± 3.7 kg/m2) undergoing CTA for suspected vascular diseases were randomly assigned into two groups. Group A (n = 20) was administered 370 mgI/mL contrast medium, and group B (n = 20) was administered 270 mgI/mL contrast medium. Both groups were administered at a rate of 4.8 mL/s and an injection volume of 0.8 mL/kg. Images of group A were obtained with 120 kVp and filtered back projection (FBP) reconstruction, whereas images of group B were obtained with 80 kVp and an 80% adaptive statistical iterative reconstruction algorithm (ASiR). The CT values and standard deviations of the intracranial arteries and the image noise on the corona radiata were measured to calculate the contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR). The beam-hardening artifacts (BHAs) around the skull base were calculated. Two readers evaluated the image quality with volume-rendered (VR) images using scores from 1 to 5. The values between the two groups were statistically compared. Results The mean CT value of the intracranial arteries in group B was significantly higher than that in group A (P < 0.001). The CNR and SNR values in group B were also statistically higher than those in group A (P < 0.001). Image noise and BHAs were not significantly different between the two groups. The image quality score of VR images in group B was significantly higher than that in group A (P = 0.001). However, the quality scores of axial enhancement images in group B were significantly smaller than those in group A (P < 0.001). The CT dose index volume and dose-length product were decreased by 63.8% and 64%, respectively, in group B (P < 0.001 for both). Conclusion Visipaque
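
    The image-quality metrics used above follow directly from region-of-interest (ROI) statistics; a minimal sketch (with illustrative HU values) of computing SNR and CNR from mean CT numbers and the noise standard deviation:

```python
import numpy as np

artery_roi = np.array([452.0, 448.0, 461.0, 455.0])  # HU in an intracranial artery ROI
muscle_roi = np.array([58.0, 62.0, 60.0, 61.0])      # HU in a background-tissue ROI
noise_sd = 6.5                                       # SD of HU on the corona radiata

snr = artery_roi.mean() / noise_sd                           # signal-to-noise ratio
cnr = (artery_roi.mean() - muscle_roi.mean()) / noise_sd     # contrast-to-noise ratio
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```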

  2. Statistical methods for analysing complex genetic traits

    NARCIS (Netherlands)

    El Galta, Rachid

    2006-01-01

    Complex traits are caused by multiple genetic and environmental factors, and are therefore difficult to study compared with simple Mendelian diseases. The modes of inheritance of Mendelian diseases are often known. Methods to dissect such diseases are well described in literature. For complex geneti

  3. Crystallization and preliminary X-ray diffraction analyses of pseudechetoxin and pseudecin, two snake-venom cysteine-rich secretory proteins that target cyclic nucleotide-gated ion channels

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Nobuhiro [Institute of Applied Biochemistry, University of Tsukuba, Tsukuba, Ibaraki 305-8572 (Japan); Department of Biochemistry, National Institute of Agrobiological Sciences, Tsukuba, Ibaraki 305-8602 (Japan); Yamazaki, Yasuo [Department of Biochemistry, Meiji Pharmaceutical University, Kiyose, Tokyo 204-8588 (Japan); Fujimoto, Zui [Department of Biochemistry, National Institute of Agrobiological Sciences, Tsukuba, Ibaraki 305-8602 (Japan); Morita, Takashi [Department of Biochemistry, Meiji Pharmaceutical University, Kiyose, Tokyo 204-8588 (Japan); Mizuno, Hiroshi, E-mail: mizuno-hiroshi@aist.go.jp [Department of Biochemistry, National Institute of Agrobiological Sciences, Tsukuba, Ibaraki 305-8602 (Japan); VALWAY Technology Center, NEC Soft Ltd, Koto-ku, Tokyo 136-8627 (Japan); Institute for Biological Resources and Functions, National Institute of Advanced Industrial Science and Technology, Central 6, Tsukuba, Ibaraki 305-8566 (Japan); Institute of Applied Biochemistry, University of Tsukuba, Tsukuba, Ibaraki 305-8572 (Japan)

    2005-08-01

    Crystals of pseudechetoxin and pseudecin, potent peptidic inhibitors of cyclic nucleotide-gated ion channels, have been prepared and X-ray diffraction data have been collected to 2.25 and 1.90 Å resolution, respectively. Cyclic nucleotide-gated (CNG) ion channels play pivotal roles in sensory transduction of retinal and olfactory neurons. The elapid snake toxins pseudechetoxin (PsTx) and pseudecin (Pdc) are the only known protein blockers of CNG channels. These toxins are structurally classified as cysteine-rich secretory proteins and exhibit structural features that are quite distinct from those of other known small peptidic channel blockers. This article describes the crystallization and preliminary X-ray diffraction analyses of these toxins. Crystals of PsTx belonged to space group P2(1)2(1)2(1), with unit-cell parameters a = 60.30, b = 61.59, c = 251.69 Å, and diffraction data were collected to 2.25 Å resolution. Crystals of Pdc also belonged to space group P2(1)2(1)2(1), with similar unit-cell parameters a = 60.71, b = 61.67, c = 251.22 Å, and diffraction data were collected to 1.90 Å resolution.

  4. Translation of the Manchester Clinical Supervision Scale (MCSS) into Danish and a preliminary psychometric validation

    DEFF Research Database (Denmark)

    Buus, Niels; Gonge, Henrik

    2013-01-01

    The aims of this study were to describe the procedure for the translation of the MCSS from English into Danish and to present a preliminary psychometric validation of the Danish version of the scale. Methods included a formal translation/back-translation procedure and statistical analyses. The sample consisted of MCSS scores from 139 Danish mental health nursing staff...

  5. Injury Statistics

    Science.gov (United States)


  6. Cosmic Statistics of Statistics

    OpenAIRE

    Szapudi, I.; Colombi, S.; Bernardeau, F.

    1999-01-01

    The errors on statistics measured in finite galaxy catalogs are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi (1996) is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly nonlinear to weakly nonlinear scales. The final analytic formu...

  7. Monitoring the quality consistency of Weibizhi tablets by micellar electrokinetic chromatography fingerprints combined with multivariate statistical analyses, the simple quantified ratio fingerprint method, and the fingerprint-efficacy relationship.

    Science.gov (United States)

    Liu, Yingchun; Sun, Guoxiang; Wang, Yan; Yang, Lanping; Yang, Fangliang

    2015-06-01

    Micellar electrokinetic chromatography fingerprinting combined with quantification was successfully developed and applied to monitor the quality consistency of Weibizhi tablets, which is a classical compound preparation used to treat gastric ulcers. A background electrolyte composed of 57 mmol/L sodium borate, 21 mmol/L sodium dodecylsulfate and 100 mmol/L sodium hydroxide was used to separate compounds. To optimize capillary electrophoresis conditions, multivariate statistical analyses were applied. First, the most important factors influencing sample electrophoretic behavior were identified as background electrolyte concentrations. Then, a Box-Behnken design response-surface strategy using the resolution index RF as an integrated response was set up to correlate factors with the response. RF reflects the effective signal amount, resolution, and signal homogenization in an electropherogram; thus, it was regarded as an excellent indicator. In fingerprint assessments, the simple quantified ratio fingerprint method was established for comprehensive quality discrimination of traditional Chinese medicines/herbal medicines from qualitative and quantitative perspectives, by which the quality of 27 samples from the same manufacturer were well differentiated. In addition, the fingerprint-efficacy relationship between fingerprints and antioxidant activities was established using partial least squares regression, which provided important medicinal efficacy information for quality control. The present study offered an efficient means for monitoring Weibizhi tablet quality consistency.
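
    A sketch of the response-surface step: a three-factor Box-Behnken design (12 edge points plus center replicates) with a second-order model fitted by least squares. The coded design matrix and the RF responses below are illustrative, not the paper's data.

```python
import numpy as np

# Coded levels (-1, 0, +1) for borate, SDS and NaOH concentrations:
# 3-factor Box-Behnken design, 12 edge points + 3 center replicates
X = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)] +
             [[a, 0, c] for a in (-1, 1) for c in (-1, 1)] +
             [[0, b, c] for b in (-1, 1) for c in (-1, 1)] +
             [[0, 0, 0]] * 3, dtype=float)
rf = np.array([3.1, 3.4, 3.3, 3.8, 3.0, 3.5, 3.2, 3.6,
               3.2, 3.3, 3.5, 3.7, 4.0, 3.9, 4.1])     # measured RF responses

# Quadratic model matrix: intercept, linear, interaction and square terms
x1, x2, x3 = X.T
A = np.column_stack([np.ones(len(X)), x1, x2, x3,
                     x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])
coef, *_ = np.linalg.lstsq(A, rf, rcond=None)
print("fitted response-surface coefficients:", coef.round(3))
```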

  8. Statistics in a nutshell

    CERN Document Server

    Boslaugh, Sarah

    2013-01-01

    Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.

  9. Kvalitative analyser ..

    DEFF Research Database (Denmark)

    Boolsen, Merete Watt

    The book explains the fundamental steps of the research process and applies them to selected qualitative analyses: content analysis, Grounded Theory, argumentation analysis, and discourse analysis.

  10. Statistical mechanics of superconductivity

    CERN Document Server

    Kita, Takafumi

    2015-01-01

    This book provides a theoretical, step-by-step comprehensive explanation of superconductivity for undergraduate and graduate students who have completed elementary courses on thermodynamics and quantum mechanics. To this end, it adopts the unique approach of starting with the statistical mechanics of quantum ideal gases and successively adding and clarifying elements and techniques indispensable for understanding it. They include the spin-statistics theorem, second quantization, density matrices, the Bloch–De Dominicis theorem, the variational principle in statistical mechanics, attractive interaction, and bound states. Ample examples of their usage are also provided in terms of topics from advanced statistical mechanics such as two-particle correlations of quantum ideal gases, derivation of the Hartree–Fock equations, and Landau’s Fermi-liquid theory, among others. With these preliminaries, the fundamental mean-field equations of superconductivity are derived with maximum mathematical clarity based on ...

  11. Algebraic Statistics

    OpenAIRE

    Norén, Patrik

    2013-01-01

    Algebraic statistics brings together ideas from algebraic geometry, commutative algebra, and combinatorics to address problems in statistics and its applications. Computer algebra provides powerful tools for the study of algorithms and software. However, these tools are rarely prepared to address statistical challenges and therefore new algebraic results need often be developed. This way of interplay between algebra and statistics fertilizes both disciplines. Algebraic statistics is a relativ...

  12. Commentary: statistics for biomarkers.

    Science.gov (United States)

    Lovell, David P

    2012-05-01

    This short commentary discusses Biomarkers' requirements for the reporting of statistical analyses in submitted papers. It is expected that submitters will follow the general instructions of the journal, the more detailed guidance given by the International Committee of Medical Journal Editors, the specific guidelines developed by the EQUATOR network, and those of various specialist groups. Biomarkers expects that the study design and subsequent statistical analyses are clearly reported and that the data reported can be made available for independent assessment. The journal recognizes that there is continuing debate about different approaches to statistical science. Biomarkers appreciates that the field continues to develop rapidly and encourages the use of new methodologies.

  13. Chemical Analyses

    Science.gov (United States)

    Bulluck, J. W.; Rushing, R. A.

    1994-01-01

    As a preliminary study on the effects of chemical aging of polymer materials, MERL and TRI have examined two polymeric materials that are typically used for offshore umbilical applications. These two materials were Tefzel, a copolymer of ethylene and tetrafluoroethylene, and Coflon, polyvinylidene fluoride. The Coflon specimens were cut from pipe sections and exposed to H2S at various temperatures and pressures. One of these specimens was tested for methane permeation, and another for H2S permeation. The Tefzel specimens were cut from 0.05 mm sheet stock material and were exposed to methanol at elevated temperature and pressure. One of these specimens was exposed to methanol permeation for 2 days at 100 °C and 2500 psi. An additional specimen was exposed to liquid methanol for 3 days at 150 °C and 15 bar. Virgin specimens of each material were similarly prepared and tested.

  14. Bayesian statistics

    OpenAIRE

    新家, 健精

    2013-01-01

    Article outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography.

  15. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  16. Harmonic statistics

    Science.gov (United States)

    Eliazar, Iddo

    2017-05-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
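
    The harmonic Poisson process described above is straightforward to simulate: on an interval [a, b] the point count is Poisson with mean c·ln(b/a), and conditional on the count the points are distributed as a·(b/a)^U with U uniform, which makes the construction scale invariant. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def harmonic_poisson(a, b, c, rng):
    """Sample a Poisson process with intensity c/x on the interval [a, b]."""
    n = rng.poisson(c * np.log(b / a))          # expected count is c*ln(b/a)
    return np.sort(a * (b / a) ** rng.random(n))

rng = np.random.default_rng(0)
pts = harmonic_poisson(1.0, 1e4, c=2.0, rng=rng)
print(len(pts), pts[:5].round(3))

# Scale invariance: each decade [1,10), [10,100), [100,1000) has the same
# expected count, c*ln(10)
print([int(np.sum((lo <= pts) & (pts < 10 * lo))) for lo in (1, 10, 100)])
```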

  17. Modeling of soil penetration resistance using statistical analyses and artificial neural networks=Modelagem da resistência à penetração do solo usando análises estatísticas e redes neurais artificiais

    Directory of Open Access Journals (Sweden)

    Domingos Sárvio Magalhães Valente

    2012-04-01

    Full Text Available An important factor for the evaluation of an agricultural system’s sustainability is the monitoring of soil quality via its physical attributes. The physical attributes of soil, such as soil penetration resistance, can be used to monitor and evaluate the soil’s quality. Artificial Neural Networks (ANN) have been employed to solve many problems in agriculture, and the use of this technique can be considered an alternative approach for predicting the penetration resistance produced by the soil’s basic properties, such as bulk density and water content. The aim of this work is to perform an analysis of the soil penetration resistance behavior, measured from the cone index, under different levels of bulk density and water content using statistical analyses, specifically regression analysis and ANN modeling. Both techniques show that soil penetration resistance is associated with soil bulk density and water content. The regression analysis presented a determination coefficient of 0.92 and an RMSE of 0.951, and the ANN modeling presented a determination coefficient of 0.98 and an RMSE of 0.084. The results show that the ANN modeling presented better results than the mathematical model obtained from regression analysis.
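
    A minimal sketch of the comparison described above, fitting both a regression model and a small neural network to predict cone index from bulk density and water content. The synthetic data generator and network size are assumptions, and scikit-learn's MLPRegressor stands in for the ANN used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
bulk_density = rng.uniform(1.0, 1.6, 300)          # Mg m^-3
water_content = rng.uniform(0.10, 0.35, 300)       # kg kg^-1
cone_index = (2.5 * bulk_density**3 * np.exp(-6 * water_content)
              + rng.normal(0, 0.05, 300))          # MPa, synthetic response

X = np.column_stack([bulk_density, water_content])
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                           random_state=0)):
    pred = model.fit(X, cone_index).predict(X)
    rmse = mean_squared_error(cone_index, pred) ** 0.5
    print(type(model).__name__,
          "R2 = %.2f" % r2_score(cone_index, pred),
          "RMSE = %.3f" % rmse)
```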

  18. Statistical physics

    CERN Document Server

    Sadovskii, Michael V

    2012-01-01

    This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics, all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity, and the modern theory of critical phenomena. Beyond that, attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.

  19. Statistical methods

    CERN Document Server

    Szulc, Stefan

    1965-01-01

    Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

  20. Statistical optics

    CERN Document Server

    Goodman, Joseph W

    2015-01-01

    This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems. This book covers a variety of statistical problems in optics, including both theory and applications. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i

  1. Histoplasmosis Statistics

    Science.gov (United States)

    How common is histoplasmosis? In the United States, an estimated 60% to ...

  2. Statistical distributions

    CERN Document Server

    Forbes, Catherine; Hastings, Nicholas; Peacock, Brian J.

    2010-01-01

    A new edition of the trusted guide on commonly used statistical distributions Fully updated to reflect the latest developments on the topic, Statistical Distributions, Fourth Edition continues to serve as an authoritative guide on the application of statistical methods to research across various disciplines. The book provides a concise presentation of popular statistical distributions along with the necessary knowledge for their successful use in data modeling and analysis. Following a basic introduction, forty popular distributions are outlined in individual chapters that are complete with re

  3. Harmonic statistics

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    2017-05-15

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

  4. Contaminação do ambiente aquático por pesticidas. Estudo de caso: águas usadas para consumo humano em Primavera do Leste, Mato Grosso - análise preliminar Aquatic environment contamination by pesticides. Case study: water used for human consumption in Primavera do Leste, Mato Grosso - preliminary analyses

    Directory of Open Access Journals (Sweden)

    Eliana Freire Gaspar de Carvalho Dores

    2001-02-01

    Full Text Available A preliminary analysis of the possible contamination of surface and underground water by the active ingredients of the pesticide products used in the surroundings of the urban area of Primavera do Leste, Mato Grosso, Brazil, was carried out. A description of the study region and of its environmental characteristics, which can favor the contamination of the local aquatic environment, was presented. The EPA screening criteria, the groundwater ubiquity score (GUS), and the criteria proposed by Goss were used to evaluate which pesticides might contaminate the local waters. Among the active ingredients studied, several present risks to the local aquatic environment.

  5. Scan Statistics

    CERN Document Server

    Glaz, Joseph

    2009-01-01

    Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.

  6. Statistical Diversions

    Science.gov (United States)

    Petocz, Peter; Sowey, Eric

    2008-01-01

    In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…

  7. Lehrer in der Bundesrepublik Deutschland. Eine Kritische Analyse Statistischer Daten uber das Lehrpersonal an Allgemeinbildenden Schulen. (Education in the Federal Republic of Germany. A Statistical Study of Teachers in Schools of General Education.)

    Science.gov (United States)

    Kohler, Helmut

    The purpose of this study was to analyze the available statistics concerning teachers in schools of general education in the Federal Republic of Germany. An analysis of the demographic structure of the pool of full-time teachers showed that in 1971 30 percent of the teachers were under age 30, and 50 percent were under age 35. It was expected that…

  8. Notices about using elementary statistics in psychology

    OpenAIRE

    松田, 文子; 三宅, 幹子; 橋本, 優花里; 山崎, 理央; 森田, 愛子; 小嶋, 佳子

    2003-01-01

    Improper uses of elementary statistics that were often observed in beginners' manuscripts and papers were collected, and better ways were suggested. The paper consists of three parts: descriptive statistics, multivariate analyses, and statistical tests.

  9. Statistical Pattern Recognition

    CERN Document Server

    Webb, Andrew R

    2011-01-01

    Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions.  It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,

  10. Informal Statistics Help Desk

    Science.gov (United States)

    Young, M.; Koslovsky, M.; Schaefer, Caroline M.; Feiveson, A. H.

    2017-01-01

    Back by popular demand, the JSC Biostatistics Laboratory and LSAH statisticians are offering an opportunity to discuss your statistical challenges and needs. Take the opportunity to meet the individuals offering expert statistical support to the JSC community. Join us for an informal conversation about any questions you may have encountered with issues of experimental design, analysis, or data visualization. Get answers to common questions about sample size, repeated measures, statistical assumptions, missing data, multiple testing, time-to-event data, and when to trust the results of your analyses.

  11. Introductory statistics

    CERN Document Server

    Ross, Sheldon M

    2005-01-01

    In this revised text, master expositor Sheldon Ross has produced a unique work in introductory statistics. The text's main merits are the clarity of presentation, contemporary examples and applications from diverse areas, and an explanation of intuition and ideas behind the statistical methods. To quote from the preface, "It is only when a student develops a feel or intuition for statistics that she or he is really on the path toward making sense of data." Ross achieves this goal through a coherent mix of mathematical analysis, intuitive discussions and examples. Ross's clear writin

  12. Introductory statistics

    CERN Document Server

    Ross, Sheldon M

    2010-01-01

    In this 3rd edition revised text, master expositor Sheldon Ross has produced a unique work in introductory statistics. The text's main merits are the clarity of presentation, contemporary examples and applications from diverse areas, and an explanation of intuition and ideas behind the statistical methods. Concepts are motivated, illustrated and explained in a way that attempts to increase one's intuition. To quote from the preface, "It is only when a student develops a feel or intuition for statistics that she or he is really on the path toward making sense of data." Ross achieves this

  13. Statistical physics

    CERN Document Server

    Wannier, Gregory H

    2010-01-01

    Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for

  14. Semiconductor statistics

    CERN Document Server

    Blakemore, J S

    1962-01-01

    Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co

  15. SEER Statistics

    Science.gov (United States)

    The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.

  16. Cancer Statistics

    Science.gov (United States)


  17. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

  18. Reversible Statistics

    DEFF Research Database (Denmark)

    Tryggestad, Kjell

    2004-01-01

    The study's aim is to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...

  19. Image Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-08

    In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.

  20. Accident Statistics

    Data.gov (United States)

    Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...

  1. Multiparametric statistics

    CERN Document Server

    Serdobolskii, Vadim Ivanovich

    2007-01-01

    This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size or even much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters. This theory opens a way to solve central problems of multivariate statistics, which until now have not been solved. Traditional statistical methods based on the idea of infinite sampling often break down in the solution of real problems, and, depending on the data, can be inefficient, unstable and even not applicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...

  2. Statistics for Learning Genetics

    Science.gov (United States)

    Charles, Abigail Sheena

    2012-01-01

    This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in, doing…

  4. Implementation of quality by design principles in the development of microsponges as drug delivery carriers: Identification and optimization of critical factors using multivariate statistical analyses and design of experiments studies.

    Science.gov (United States)

    Simonoska Crcarevska, Maja; Dimitrovska, Aneta; Sibinovska, Nadica; Mladenovska, Kristina; Slavevska Raicki, Renata; Glavas Dodov, Marija

    2015-07-15

    Microsponges drug delivery system (MDDC) was prepared by a double emulsion-solvent-diffusion technique using rotor-stator homogenization. The quality by design (QbD) concept was implemented for the development of MDDC with the potential to be incorporated into a semisolid dosage form (gel). The quality target product profile (QTPP) and critical quality attributes (CQA) were defined and identified accordingly. Critical material attributes (CMA) and critical process parameters (CPP) were identified using a quality risk management (QRM) tool: failure mode, effects and criticality analysis (FMECA). CMA and CPP were identified based on results obtained from principal component analysis (PCA-X&Y) and partial least squares (PLS) statistical analysis, along with literature data and product and process knowledge and understanding. FMECA identified the amounts of ethylcellulose, chitosan, acetone, dichloromethane, span 80, tween 80 and the water ratio in primary/multiple emulsions as CMA, and the rotation speed and stirrer type used for organic solvent removal as CPP. The relationship between the identified CPP and particle size as CQA was described in the design space using design of experiments - a one-factor response surface method. The results obtained from the statistically designed experiments enabled the establishment of mathematical models and equations that were used for detailed characterization of the influence of the identified CPP upon MDDC particle size and particle size distribution, and for their subsequent optimization.
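
    As a minimal, hypothetical sketch of the one-factor response-surface step described above (the speeds and sizes below are illustrative numbers, not the study's data), a quadratic fit of particle size against rotation speed could look like this in Python:

        import numpy as np

        # Hypothetical DoE data: rotation speed (rpm) vs. mean particle size (um)
        speed = np.array([4000, 6000, 8000, 10000, 12000], dtype=float)
        size = np.array([38.1, 29.5, 24.9, 23.8, 26.0])

        # Quadratic response-surface model: size = b2*speed^2 + b1*speed + b0
        coeffs = np.polyfit(speed, size, deg=2)
        model = np.poly1d(coeffs)

        # Stationary point of the fitted parabola suggests an optimal speed
        optimum = -coeffs[1] / (2 * coeffs[0])
        print(model)
        print(f"estimated speed minimizing particle size: {optimum:.0f} rpm")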

  5. Statistical mechanics

    CERN Document Server

    Jana, Madhusudan

    2015-01-01

    Statistical Mechanics is self-sufficient and written in a lucid manner, keeping in mind the examination system of the universities. The need to study this subject and its relation to thermodynamics is discussed in detail. Starting from the Liouville theorem, statistical mechanics is developed thoroughly. All three types of statistical distribution functions are derived separately, with their range of applications and limitations. Non-interacting ideal Bose and Fermi gases are discussed thoroughly. Properties of liquid He-II and the corresponding models are depicted. White dwarfs and condensed matter physics, transport phenomena - thermal and electrical conductivity, Hall effect, magnetoresistance, viscosity, diffusion, etc. - are discussed. A basic understanding of the Ising model is given to explain the phase transition. The book ends with detailed coverage of the method of ensembles (namely microcanonical, canonical and grand canonical) and their applications. Various numerical and conceptual problems ar...

  6. Statistical mechanics

    CERN Document Server

    Schwabl, Franz

    2006-01-01

    The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...

  7. Statistical inference

    CERN Document Server

    Rohatgi, Vijay K

    2003-01-01

    Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

  8. Statistical Physics

    CERN Document Server

    Mandl, Franz

    1988-01-01

    The Manchester Physics Series General Editors: D. J. Sandiford; F. Mandl; A. C. Phillips Department of Physics and Astronomy, University of Manchester Properties of Matter B. H. Flowers and E. Mendoza Optics Second Edition F. G. Smith and J. H. Thomson Statistical Physics Second Edition F. Mandl Electromagnetism Second Edition I. S. Grant and W. R. Phillips Statistics R. J. Barlow Solid State Physics Second Edition J. R. Hook and H. E. Hall Quantum Mechanics F. Mandl Particle Physics Second Edition B. R. Martin and G. Shaw The Physics of Stars Second Edition A. C. Phillips Computing for Scient

  9. AP statistics

    CERN Document Server

    Levine-Wissing, Robin

    2012-01-01

    All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep

  10. Statistical methods

    CERN Document Server

    Freund, Rudolf J; Wilson, William J

    2010-01-01

    Statistical Methods, 3e provides students with a working introduction to statistical methods offering a wide range of applications that emphasize the quantitative skills useful across many academic disciplines. This text takes a classic approach emphasizing concepts and techniques for working out problems and intepreting results. The book includes research projects, real-world case studies, numerous examples and data exercises organized by level of difficulty. This text requires that a student be familiar with algebra. New to this edition: NEW expansion of exercises a

  11. Statistical mechanics

    CERN Document Server

    Davidson, Norman

    2003-01-01

    Clear and readable, this fine text assists students in achieving a grasp of the techniques and limitations of statistical mechanics. The treatment follows a logical progression from elementary to advanced theories, with careful attention to detail and mathematical development, and is sufficiently rigorous for introductory or intermediate graduate courses.Beginning with a study of the statistical mechanics of ideal gases and other systems of non-interacting particles, the text develops the theory in detail and applies it to the study of chemical equilibrium and the calculation of the thermody

  12. Statistical Mechanics

    CERN Document Server

    Gallavotti, Giovanni

    2011-01-01

    C. Cercignani: A sketch of the theory of the Boltzmann equation.- O.E. Lanford: Qualitative and statistical theory of dissipative systems.- E.H. Lieb: many particle Coulomb systems.- B. Tirozzi: Report on renormalization group.- A. Wehrl: Basic properties of entropy in quantum mechanics.

  13. Radiotoxicological analyses of {sup 239+240}Pu and {sup 241}Am in biological samples by anion-exchange and extraction chromatography: a preliminary study for internal contamination evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Ridone, S.; Arginelli, D.; Bortoluzzi, S.; Canuto, G.; Montalto, M.; Nocente, M.; Vegro, M. [Italian National Agency for New Technologies, Energy and the Environment (ENEA), Research Centre of Saluggia, Radiation Protection Institute, Saluggia, VC (Italy)

    2006-07-01

    Many biological samples (urines and faeces) have been analysed by means of chromatographic extraction columns, utilising two different resins (AG 1-X2 resin chloride and T.R.U.), in order to detect the possible internal contamination of {sup 239+240}Pu and {sup 241}Am, for some workers of a reprocessing nuclear plant in the decommissioning phase. The results obtained show on one hand the great suitability of the first resin for the determination of plutonium, and on the other the great selectivity of the second one for the determination of americium.

  14. Elementary Statistics Tables

    CERN Document Server

    Neave, Henry R

    2012-01-01

    This book, designed for students taking a basic introductory course in statistical analysis, is far more than just a book of tables. Each table is accompanied by a careful but concise explanation and useful worked examples. Requiring little mathematical background, Elementary Statistics Tables is thus not just a reference book but a positive and user-friendly teaching and learning aid. The new edition contains a new and comprehensive "teach-yourself" section on a simple but powerful approach, now well-known in parts of industry but less so in academia, to analysing and interpreting process dat

  15. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    2005-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  16. Arc Statistics

    CERN Document Server

    Meneghetti, M; Dahle, H; Limousin, M

    2013-01-01

    The existence of an arc statistics problem was at the center of a strong debate in the last fifteen years. With the aim to clarify if the optical depth for giant gravitational arcs by galaxy clusters in the so called concordance model is compatible with observations, several studies were carried out which helped to significantly improve our knowledge of strong lensing clusters, unveiling their extremely complex internal structure. In particular, the abundance and the frequency of strong lensing events like gravitational arcs turned out to be a potentially very powerful tool to trace the structure formation. However, given the limited size of observational and theoretical data-sets, the power of arc statistics as a cosmological tool has been only minimally exploited so far. On the other hand, the last years were characterized by significant advancements in the field, and several cluster surveys that are ongoing or planned for the near future seem to have the potential to make arc statistics a competitive cosmo...

  17. Statistical analysis of management data

    CERN Document Server

    Gatignon, Hubert

    2013-01-01

    This book offers a comprehensive approach to multivariate statistical analyses. It provides theoretical knowledge of the concepts underlying the most important multivariate techniques and an overview of actual applications.

  18. Statistical analyses for the purpose of an early detection of global and regional climate change due to the anthropogenic greenhouse effect; Statistische Analysen zur Frueherkennung globaler und regionaler Klimaaenderungen aufgrund des anthropogenen Treibhauseffektes

    Energy Technology Data Exchange (ETDEWEB)

    Grieser, J.; Staeger, T.; Schoenwiese, C.D.

    2000-03-01

    The report answers the question where, why and how different climate variables have changed within the last 100 years. The analyzed variables are observed time series of temperature (mean, maximum, minimum), precipitation, air pressure, and water vapour pressure in a monthly resolution. The time series are given as station data and grid box data as well. Two kinds of time-series analysis are performed. The first is applied to find significant changes concerning mean and variance of the time series. Thereby also changes in the annual cycle and frequency of extreme events arise. The second approach is used to detect significant spatio-temporal patterns in the variations of climate variables, which are most likely driven by known natural and anthropogenic climate forcings. Furthermore, an estimation of climate noise allows to indicate regions where certain climate variables have changed significantly due to the enhanced anthropogenic greenhouse effect. (orig.)
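
    As an illustrative sketch of the kind of structure-oriented trend test involved (synthetic data, not the report's station series), a least-squares trend with its significance can be computed as follows:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        years = np.arange(1900, 2000, dtype=float)
        # Synthetic annual-mean temperature series with a weak imposed trend (K)
        temp = 8.0 + 0.006 * (years - 1900) + rng.normal(0.0, 0.4, years.size)

        res = stats.linregress(years, temp)
        print(f"trend = {res.slope * 100:.2f} K/century, p-value = {res.pvalue:.3g}")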

  19. Métodos estatísticos e estrutura espacial de populações: uma análise comparativa = Statistic methods and population spatial structure: a comparative analyses

    Directory of Open Access Journals (Sweden)

    Matheus de Souza Lima-Ribeiro

    2006-07-01

    This study compares the results of spatial distribution analyses obtained with classical methods and with methods that estimate the variance between quadrats. Two species were analysed, Vernonia aurea and Duguetia furfuracea. The Poisson distribution (random pattern), the Negative Binomial distribution (aggregated pattern), the BQV, TTLQV and PQV methods (variance between quadrats), the variance:mean ratio (I), the Green coefficient (Ig) and Morisita's index of dispersion (Im) were used to detect the populations' spatial pattern. Both methodologies detected an aggregated spatial distribution pattern for the analysed populations, with similar results regarding the level of aggregation, and with the classical and quadrat-variance methods providing complementary information at different scales. The use of these statistical methods in studies of spatial structure is therefore recommended, since the tests are robust and complementary and the field data are easy to collect.
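
    For illustration, two of the classical dispersion statistics named above can be computed from hypothetical quadrat counts (not the study's field data) as follows:

        import numpy as np

        counts = np.array([0, 3, 1, 7, 0, 5, 2, 9, 0, 4])  # individuals per quadrat
        n = counts.size          # number of quadrats
        N = counts.sum()         # total individuals

        # Variance-to-mean ratio: ~1 random (Poisson), >1 aggregated, <1 uniform
        vmr = counts.var(ddof=1) / counts.mean()

        # Morisita's index of dispersion: ~1 random, >1 aggregated
        morisita = n * np.sum(counts * (counts - 1)) / (N * (N - 1))

        print(f"variance:mean ratio I = {vmr:.2f}")
        print(f"Morisita's index Im   = {morisita:.2f}")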

  20. MEVSİMSEL DÜZELTMEDE KULLANILAN İSTATİSTİKİ YÖNTEMLER ÜZERİNE BİR İNCELEME-AN ANALYSE ON STATISTICAL METHODS WHICH ARE USED FOR SEASONAL ADJUSTMENT

    Directory of Open Access Journals (Sweden)

    Handan YOLSAL

    2010-01-01

    This paper's aim is to introduce the seasonal adjustment programs most commonly applied to time series, developed by official statistical agencies. These programs fall into two main groups. One is the CENSUS II X-11 family, first developed by the NBER, which uses moving-average filters; this family includes the X-11 ARIMA and X-12 ARIMA techniques. The other is the TRAMO/SEATS program, a model-based approach developed by the Bank of Spain. This paper discusses the seasonal decomposition procedures of these techniques, some special effects they account for such as trading-day and calendar effects, their advantages and disadvantages, and their forecasting performances.

  1. PRELIMINARY PALAEOSYNECOLOGICAL ANALYSES ON THE UPPER ANISIAN (MIDDLE TRIASSIC)QINGYAN FAUNA%中三叠世青岩生物群的群体古生态学初步研究

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The Qingyan fauna (Upper Anisian, Middle Triassic) was found in a section located about 30 km south of Guiyang, the provincial capital of Guizhou Province, SW China. It yields abundant, well-preserved and highly diverse fossils, involving 16-17 groups of organisms, among which gastropods, brachiopods and bivalves are dominant. Some macrofossils are common here, including corals, sponges, crinoids, scaphopods, ammonoids, nautilids, bryozoans and echinoids, whereas others such as poriferans, cnidarians and annelids are rare but occasionally visible. Among the microfossils abundant in this fauna, groups such as ostracods, foraminiferans and calcareous algae were analysed. A palaeosynecological study combined with palaeogeographic analyses suggests that the Qingyan fauna developed in a protected shallow-marine habitat on the upper part of a basin slope. The sea bottom was fine-grained and stable, the rates of sedimentation were generally low, and the water was relatively deep: the organisms lived in the photic zone above the storm wave base and below the fair-weather wave base. Epibenthic forms are the main groups in this primarily benthic fauna; shallow-burrowing infauna and semi-infauna form smaller portions. In trophic habits, suspension-feeders are the dominant group; in addition, there are herbivores (gastropods), detritus-feeders, scavengers and omnivores, a few deposit-feeders, and few micro- and macro-carnivores. The high percentage of suspension-feeders in the fauna indicates that the water was clean and clear, gently moving, and rich in oxygen and nutrients. The highly diverse associations, especially of bivalves and gastropods, most probably represent relics of communities that had lived in patches of macroalgae.

  2. Depth statistics

    OpenAIRE

    2012-01-01

    In 1975 John Tukey proposed a multivariate median which is the 'deepest' point in a given data cloud in R^d. Later, in measuring the depth of an arbitrary point z with respect to the data, David Donoho and Miriam Gasko considered hyperplanes through z and determined its 'depth' by the smallest portion of data that are separated by such a hyperplane. Since then, these ideas have proved extremely fruitful. A rich statistical methodology has developed that is based on data depth and, more general...

  3. Statistical mechanics

    CERN Document Server

    Sheffield, Scott

    2009-01-01

    In recent years, statistical mechanics has been increasingly recognized as a central domain of mathematics. Major developments include the Schramm-Loewner evolution, which describes two-dimensional phase transitions, random matrix theory, renormalization group theory and the fluctuations of random surfaces described by dimers. The lectures contained in this volume present an introduction to recent mathematical progress in these fields. They are designed for graduate students in mathematics with a strong background in analysis and probability. This book will be of particular interest to graduate students and researchers interested in modern aspects of probability, conformal field theory, percolation, random matrices and stochastic differential equations.

  4. Statistics for the Relative Detectability of Chemicals in Weak Gaseous Plumes in LWIR Hyperspectral Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Metoyer, Candace N.; Walsh, Stephen J.; Tardiff, Mark F.; Chilton, Lawrence

    2008-10-30

    The detection and identification of weak gaseous plumes using thermal imaging data is complicated by many factors. These include variability due to atmosphere, ground and plume temperature, and background clutter. This paper presents an analysis of one formulation of the physics-based model that describes the at-sensor observed radiance. The motivating question for the analyses performed in this paper is as follows. Given a set of backgrounds, is there a way to predict the background over which the probability of detecting a given chemical will be the highest? Two statistics were developed to address this question. These statistics incorporate data from the long-wave infrared band to predict the background over which chemical detectability will be the highest. These statistics can be computed prior to data collection. As a preliminary exploration into the predictive ability of these statistics, analyses were performed on synthetic hyperspectral images. Each image contained one chemical (either carbon tetrachloride or ammonia) spread across six distinct background types. The statistics were used to generate predictions for the background ranks. Then, the predicted ranks were compared to the empirical ranks obtained from the analyses of the synthetic images. For the simplified images under consideration, the predicted and empirical ranks showed a promising amount of agreement. One statistic accurately predicted the best and worst background for detection in all of the images. Future work may include explorations of more complicated plume ingredients, background types, and noise structures.
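
    The agreement between predicted and empirical background ranks can be quantified with a rank correlation; a minimal sketch (hypothetical ranks for six backgrounds, not the paper's results) is:

        import numpy as np
        from scipy import stats

        # Hypothetical ranks of six background types: predicted before collection
        # versus empirical ranks obtained from analysing the synthetic images
        predicted = np.array([1, 2, 3, 4, 5, 6])
        empirical = np.array([1, 3, 2, 4, 6, 5])

        rho, p = stats.spearmanr(predicted, empirical)
        print(f"Spearman rank agreement: rho = {rho:.2f}, p = {p:.3f}")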

  5. The quality improvement attitude survey: Development and preliminary psychometric characteristics.

    Science.gov (United States)

    Dunagan, Pamela B

    2017-08-22

    To report the development of a tool to measure nurses' attitudes about quality improvement in their practice setting and to examine preliminary psychometric characteristics of the Quality Improvement Nursing Attitude Scale. Human factors such as nursing attitudes of complacency have been identified as root causes of sentinel events. Attitudes of nurses concerning the use of Quality and Safety Education for Nurses competencies can be the most challenging to teach and to change. No tool had been developed to measure the attitudes of nurses concerning their role in quality improvement. A descriptive study design with preliminary psychometric evaluation was used to examine the preliminary psychometric characteristics of the Quality Improvement Nursing Attitude Scale. Registered bedside clinical nurses comprised the sample for the study (n = 57). Quantitative data were analysed using descriptive statistics and Cronbach's alpha reliability. Total score and individual item statistics were evaluated. Two open-ended items were used to collect statements about nurses' feelings regarding their experience in quality improvement efforts. Strong support for the internal consistency reliability and face validity of the Quality Improvement Nursing Attitude Scale was found. Total scale scores were high, indicating nurse participants valued Quality and Safety Education for Nurses competencies in practice. However, item-level statistics indicated nurses felt powerless when other nurses deviate from care standards. Additionally, the sample indicated they did not consistently report patient safety issues and did not feel valued in efforts to improve care. Findings suggested organisational culture fosters nurses' reporting of safety issues and feeling valued in efforts to improve care. Participants' narrative comments and item analysis revealed the need to generate new items for the Quality Improvement Nursing Attitude Scale focused on nurses' perception of their importance in quality and
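
    Since the scale's internal consistency was assessed with Cronbach's alpha, a minimal sketch of that computation (with hypothetical Likert responses, not the study's n = 57 sample) is:

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """items: respondents x items matrix of scores."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Hypothetical Likert responses (5 respondents x 4 items), illustration only
        scores = np.array([[4, 5, 4, 4],
                           [3, 3, 4, 3],
                           [5, 5, 5, 4],
                           [2, 3, 2, 3],
                           [4, 4, 5, 4]])
        print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")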

  6. Statistical Neurodynamics.

    Science.gov (United States)

    Paine, Gregory Harold

    1982-03-01

    The primary objective of the thesis is to explore the dynamical properties of small nerve networks by means of the methods of statistical mechanics. To this end, a general formalism is developed and applied to elementary groupings of model neurons which are driven by either constant (steady state) or nonconstant (nonsteady state) forces. Neuronal models described by a system of coupled, nonlinear, first-order, ordinary differential equations are considered. A linearized form of the neuronal equations is studied in detail. A Lagrange function corresponding to the linear neural network is constructed which, through a Legendre transformation, provides a constant of motion. By invoking the Maximum-Entropy Principle with the single integral of motion as a constraint, a probability distribution function for the network in a steady state can be obtained. The formalism is implemented for some simple networks driven by a constant force; accordingly, the analysis focuses on a study of fluctuations about the steady state. In particular, a network composed of N noninteracting neurons, termed Free Thinkers, is considered in detail, with a view to interpretation and numerical estimation of the Lagrange multiplier corresponding to the constant of motion. As an archetypical example of a net of interacting neurons, the classical neural oscillator, consisting of two mutually inhibitory neurons, is investigated. It is further shown that in the case of a network driven by a nonconstant force, the Maximum-Entropy Principle can be applied to determine a probability distribution functional describing the network in a nonsteady state. The above examples are reconsidered with nonconstant driving forces which produce small deviations from the steady state. Numerical studies are performed on simplified models of two physical systems: the starfish central nervous system and the mammalian olfactory bulb. Discussions are given as to how statistical neurodynamics can be used to gain a better
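
    A sketch, in standard textbook form, of the constrained entropy maximization described above (with L denoting the single constant of motion; this is the generic maximum-entropy pattern, not the thesis's specific network derivation):

        \max_{p}\; S[p] = -\int p(\mathbf{x})\,\ln p(\mathbf{x})\,d\mathbf{x}
        \quad\text{subject to}\quad
        \int p(\mathbf{x})\,d\mathbf{x} = 1,\qquad
        \int p(\mathbf{x})\,L(\mathbf{x})\,d\mathbf{x} = \langle L\rangle,

        \text{which yields}\qquad
        p(\mathbf{x}) = \frac{e^{-\lambda L(\mathbf{x})}}{Z(\lambda)},
        \qquad Z(\lambda) = \int e^{-\lambda L(\mathbf{x})}\,d\mathbf{x},

    with the Lagrange multiplier \lambda fixed by the constraint.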

  7. Hydrometeorological and Statistical Analyses of Heavy Rainfall in Midwestern USA

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Smith, J. A.; Krajewski, W. F.

    2012-01-01

    During the last two decades the mid-western states of the United States of America have been largely afflicted by heavy, flood-producing rainfall. Several of these storms seem to have similar hydrometeorological properties in terms of pattern, track, evolution, life cycle, clustering, etc., which rais...

  8. Statistical analyses of plume composition and deposited radionuclide mixture ratios

    Energy Technology Data Exchange (ETDEWEB)

    Kraus, Terrence D.; Sallaberry, Cedric Jean-Marie; Eckert-Gallup, Aubrey Celia; Brito, Roxanne; Hunt, Brian D.; Osborn, Douglas.

    2014-01-01

    A proposed method is considered to classify the regions in the close neighborhood of selected measurements according to the ratio of two radionuclides measured from either a radioactive plume or a deposited radionuclide mixture. The subsequent associated locations are then considered in the area of interest with a representative ratio class. This method allows for a more comprehensive and meaningful understanding of the data sampled following a radiological incident.
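
    A minimal sketch of such ratio-class binning (the activities and class boundaries below are hypothetical, not the report's method details):

        import numpy as np

        # Hypothetical activity measurements (Bq) at sampled locations
        nuclide_a = np.array([12.0, 30.5, 8.2, 44.0, 19.3])
        nuclide_b = np.array([ 6.1, 10.0, 8.0, 11.0,  9.5])

        ratios = nuclide_a / nuclide_b
        edges = np.array([0.0, 1.0, 2.0, 4.0, np.inf])  # assumed class boundaries
        classes = np.digitize(ratios, edges)             # class index per location

        for r, c in zip(ratios, classes):
            print(f"ratio {r:5.2f} -> class {c}")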

  9. Cervical Spine Injuries - Numerical Analyses and Statistical Survey

    OpenAIRE

    2002-01-01

    Injuries to the neck, or cervical region, are very important since there is a potential risk of damage to the spinal cord. Any neck injury can have devastating if not life threatening consequences. High-speed transportation as well as leisure-time adventures have increased the number of serious neck injuries and made us increasingly aware of their consequences. Surveillance systems and epidemiological studies are important prerequisites in defining the scope of the problem. The development of mechanica...

  10. Practical Statistics for Particle Physics Analyses: Likelihoods (1/4)

    CERN Document Server

    CERN. Geneva; Lyons, Louis

    2016-01-01

    This will be a 4-day series of 2-hour sessions as part of CERN's Academic Training Course. Each session will consist of a 1-hour lecture followed by one hour of practical computing, which will have exercises based on that day's lecture. While it is possible to follow just the lectures or just the computing exercises, we highly recommend that, because of the way this course is designed, participants come to both parts. In order to follow the hands-on exercises sessions, students need to bring their own laptops. The exercises will be run on a dedicated CERN Web notebook service, SWAN (swan.cern.ch), which is open to everybody holding a CERN computing account. The requirement to use the SWAN service is to have a CERN account and to have also access to Cernbox, the shared storage service at CERN. New users of cernbox are invited to activate beforehand cernbox by simply connecting to https://cernbox.cern.ch. A basic prior knowledge of ROOT and C++ is also recommended for participation in the practical session....

  11. Large Statistics Study Of QCD Topological Charge Distribution

    CERN Document Server

    Giusti, Leonardo; Taglienti, Bruno

    2006-01-01

    We present preliminary results for a high statistics study of the topological charge distribution in the SU(3) Yang-Mills theory obtained by using the definition of the charge suggested by Neuberger fermions. We find statistical evidence for deviations from a Gaussian distribution. The large statistics required has been obtained by using PCs of the INFN-GRID.
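
    A minimal sketch of testing a sampled charge distribution for non-Gaussianity (synthetic heavy-tailed data standing in for the lattice measurements):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Synthetic "topological charge" sample with slightly heavy tails
        q = rng.standard_t(df=10, size=20000)

        stat, p = stats.normaltest(q)  # D'Agostino-Pearson omnibus test
        print(f"excess kurtosis = {stats.kurtosis(q):.3f}")
        print(f"normality test: stat = {stat:.1f}, p = {p:.2e}")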

  12. Search Databases and Statistics

    DEFF Research Database (Denmark)

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    the vast amounts of raw data. This task is tackled by computational tools implementing algorithms that match the experimental data to databases, providing the user with lists for downstream analysis. Several platforms for such automated interpretation of mass spectrometric data have been developed, each...... having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database...... searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here....
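
    One standard statistical criterion for limiting incorrect identifications is false discovery rate control; a minimal Benjamini-Hochberg sketch (synthetic p-values, not actual search-engine scores) is:

        import numpy as np

        def benjamini_hochberg(pvals: np.ndarray, alpha: float = 0.01) -> np.ndarray:
            """Return a boolean mask of hypotheses accepted at FDR level alpha."""
            m = pvals.size
            order = np.argsort(pvals)
            ranked = pvals[order]
            # Largest k with p_(k) <= (k/m) * alpha; accept all hypotheses up to k
            thresh = alpha * np.arange(1, m + 1) / m
            below = np.nonzero(ranked <= thresh)[0]
            accept = np.zeros(m, dtype=bool)
            if below.size:
                accept[order[: below[-1] + 1]] = True
            return accept

        pvals = np.array([0.0002, 0.009, 0.04, 0.00001, 0.2, 0.003])
        print(benjamini_hochberg(pvals, alpha=0.01))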

  13. Preliminary Multiphysics Analyses of HFIR LEU Fuel Conversion using COMSOL

    Energy Technology Data Exchange (ETDEWEB)

    Freels, James D [ORNL; Bodey, Isaac T [ORNL; Arimilli, Rao V [ORNL; Curtis, Franklin G [ORNL; Ekici, Kivanc [ORNL; Jain, Prashant K [ORNL

    2011-06-01

    The research documented herein was performed by several individuals across multiple organizations. We have previously acknowledged our funding for the project, but another common thread among the authors of this document, and hence the research performed, is the analysis tool COMSOL. The research has been divided into categories to allow the COMSOL analysis to be performed independently to the extent possible. As will be seen herein, the research has progressed to the point where it is expected that next year (2011) a large fraction of the research will require collaboration of our efforts as we progress almost exclusively into three-dimensional (3D) analysis. To the extent possible, we have tried to segregate the development effort into two-dimensional (2D) analysis in order to arrive at techniques and methodology that can be extended to 3D models in a timely manner. The Research Reactors Division (RRD) of ORNL has contracted with the University of Tennessee, Knoxville (UTK) Mechanical, Aerospace and Biomedical Engineering Department (MABE) to perform a significant fraction of this research. This group has been chosen due to their expertise and long-term commitment in using COMSOL and also because the participating students are able to work onsite on a part-time basis due to the close proximity of UTK with the ORNL campus. The UTK research has been governed by a statement of work (SOW) which clearly defines the specific tasks reported herein on the perspective areas of research. Ph.D. student Isaac T. Bodey has focused on heat transfer, fluid flow, modeling, and meshing issues and has been aided by his major professor Dr. Rao V. Arimilli and is the primary contributor to Section 2 of this report. Ph.D student Franklin G. Curtis has been focusing exclusively on fluid-structure interaction (FSI) due to the mechanical forces acting on the plate caused by the flow and has also been aided by his major professor Dr. Kivanc Ekici and is the primary contributor to Section 4 of this report. The HFIR LEU conversion project has also obtained the services of Dr. Prashant K. Jain of the Reactor & Nuclear Systems Division (RNSD) of ORNL. Prashant has quickly adapted to the COMSOL tools and has been focusing on thermal-structure interaction (TSI) issues and development of alternative 3D model approaches that could yield faster-running solutions. Prashant is the primary contributor to Section 5 of the report. And finally, while incorporating findings from all members of the COMSOL team (i.e., the team) and contributing as the senior COMSOL leader and advocate, Dr. James D. Freels has focused on the 3D model development, cluster deployment, and has contributed primarily to Section 3 and overall integration of this report. The team has migrated to the current release of COMSOL at version 4.1 for all the work described in this report, except where stated otherwise. Just as in the performance of the research, each of the respective sections has been originally authored by the respective authors. Therefore, the reader will observe a contrast in writing style throughout this document.

  14. Preliminary Analyses of Beidou Signal-In Anomaly Since 2013

    Science.gov (United States)

    Wu, Y.; Ren, J.; Liu, W.

    2016-06-01

    As the BeiDou navigation system has been operational since December 2012, there is an increasing desire to use multiple constellations to improve positioning performance. The signal-in-space (SIS) anomaly caused by the ground control segment and the space vehicle is one of the major threats affecting integrity. For a young Global Navigation Satellite System, knowledge about historical SIS anomalies is very important, not only for assessing the SIS integrity performance of a constellation but also for providing the assumptions for ARAIM (Advanced Receiver Autonomous Integrity Monitoring). In this paper, the broadcast ephemerides and the precise ones are pre-processed to avoid false anomaly identification. The SIS errors over the period Mar. 2013-Feb. 2016 are computed by comparing the broadcast ephemerides with the precise ones. The time offsets between GPST (GPS time) and BDT (BeiDou time) are estimated and removed by an improved estimation algorithm. SIS worst-UREs are computed and an RMS criterion is investigated to identify the SIS anomalies. The results show that the probability of BeiDou SIS anomalies was at the 10^-3 level over the last three years. Even though BeiDou SIS integrity performance currently cannot match GPS integrity performance, the results indicate that BeiDou has a tendency to improve its integrity performance.
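
    A minimal sketch of an RMS-based anomaly flag of the kind described (synthetic residuals; the 4.42 multiplier is an assumption borrowed from GPS practice, not BeiDou's specification):

        import numpy as np

        rng = np.random.default_rng(1)
        ure = rng.normal(0.0, 1.0, size=1000)      # nominal worst-URE series (m)
        ure[[100, 400, 777]] = [12.0, -15.0, 9.5]  # injected anomalies

        rms = np.sqrt(np.mean(ure**2))
        k = 4.42                                   # assumed threshold multiplier
        anomalies = np.flatnonzero(np.abs(ure) > k * rms)
        print(f"RMS = {rms:.2f} m, threshold = {k * rms:.2f} m, "
              f"flagged epochs: {anomalies}")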

  15. 基于统计学方法的酱油二次沉淀形成的初步研究%Preliminary study on secondary sediment formation of soy sauces based on statistics methods

    Institute of Scientific and Technical Information of China (English)

    高献礼; 闫爽; 陈燕斌; 陆健

    2013-01-01

    The correlation between pH, the contents of total sugar, reducing sugar, NaCl, total nitrogen, amino acid nitrogen and salt-free solids, and the secondary sediment formation of 25 commercial soy sauces was studied using Pearson correlation analysis. Meanwhile, the effects of fermentation methods, types (light/dark), main raw materials, quality grades and origins on the secondary sediment formation of soy sauces were analysed using the t-test. Results demonstrated that secondary sediment formation had a significantly positive correlation with the contents of total sugar, total nitrogen and salt-free solids, and no statistically significant correlation with pH or the contents of reducing sugar, NaCl and amino acid nitrogen. Furthermore, the results showed significant effects of fermentation methods, main raw materials, quality grades and origins on the secondary sediment formation of soy sauces, and no significant effect of type.
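
    The two tests named in the abstract can be illustrated with hypothetical numbers (not the 25 commercial samples) as follows:

        import numpy as np
        from scipy import stats

        # Hypothetical total sugar (g/100 mL) and sediment (g/100 mL) values
        total_sugar = np.array([3.1, 4.0, 2.8, 5.2, 4.4, 3.9, 5.0, 2.5])
        sediment = np.array([0.12, 0.20, 0.10, 0.31, 0.24, 0.18, 0.28, 0.09])

        r, p = stats.pearsonr(total_sugar, sediment)
        print(f"Pearson r = {r:.2f}, p = {p:.3f}")

        # Two-sample t-test: sediment under two fermentation methods (hypothetical)
        group1, group2 = sediment[:4], sediment[4:]
        t, p_t = stats.ttest_ind(group1, group2)
        print(f"t = {t:.2f}, p = {p_t:.3f}")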

  16. Statistical methods for astronomical data analysis

    CERN Document Server

    Chattopadhyay, Asis Kumar

    2014-01-01

    This book introduces “Astrostatistics” as a subject in its own right with rewarding examples, including work by the authors with galaxy and Gamma Ray Burst data to engage the reader. This includes a comprehensive blending of Astrophysics and Statistics. The first chapter’s coverage of preliminary concepts and terminologies for astronomical phenomena will appeal to both Statistics and Astrophysics readers as helpful context. Statistics concepts covered in the book provide a methodological framework. A unique feature is the inclusion of different possible sources of astronomical data, as well as software packages for converting the raw data into appropriate forms for data analysis. Readers can then use the appropriate statistical packages for their particular data analysis needs. The ideas of statistical inference discussed in the book help readers determine how to apply statistical tests. The authors cover different applications of statistical techniques already developed or specifically introduced for ...

  17. THE EFFECTS OF PRELIMINARY RULINGS

    Directory of Open Access Journals (Sweden)

    Iuliana-Mădălina LARION

    2015-07-01

    The study analyses the effects of the preliminary rulings rendered by the Court of Justice for the judicial body that made the reference and for other bodies dealing with similar cases, for the member states, for the European Union's institutions and for the EU legal order. Starting from the binding effect of the preliminary judgment for national judicial bodies, which requires them to follow the ruling or make a new reference, and the lack of a precedent doctrine in EU law, continuing with the possibility to indirectly verify the compatibility of member states' national law with EU law, and ending with the administrative or legislative measures that can or must be taken by the member states, the study intends to highlight the limits, nuances and consequences of the binding effect. It mentions the contribution of the national courts and of the Court of Justice of the European Union to the development of EU law, such as clarifying autonomous notions, and it emphasizes the preliminary procedure's attributes of being a form of judicial protection of individual rights, as well as a means to review the legality of acts of EU institutions. The paper is meant to be a useful instrument for practitioners. Therefore, it also deals with the possibility and limits of asking new questions, in order to obtain reconsideration or a refinement of the legal issue, and with the problem of judicial control over the interpretation and application of the preliminary ruling by the lower court.

  18. Methodology in robust and nonparametric statistics

    CERN Document Server

    Jurecková, Jana; Picek, Jan

    2012-01-01

    Introduction and Synopsis: Introduction; Synopsis. Preliminaries: Introduction; Inference in Linear Models; Robustness Concepts; Robust and Minimax Estimation of Location; Clippings from Probability and Asymptotic Theory; Problems. Robust Estimation of Location and Regression: Introduction; M-Estimators; L-Estimators; R-Estimators; Minimum Distance and Pitman Estimators; Differentiable Statistical Functions; Problems. Asymptotic Representations for L-Estimators

  19. Bayesian Methods for Statistical Analysis

    OpenAIRE

    Puza, Borek

    2015-01-01

    Bayesian methods for statistical analysis is a book on statistical methods for analysing a wide variety of data. The book consists of 12 chapters, starting with basic concepts and covering numerous topics, including Bayesian estimation, decision theory, prediction, hypothesis testing, hierarchical models, Markov chain Monte Carlo methods, finite population inference, biased sampling and nonignorable nonresponse. The book contains many exercises, all with worked solutions, including complete c...

  20. Preliminary Monthly Climatological Summaries

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Preliminary Local Climatological Data, recorded since 1970 on Weather Bureau Form 1030 and then National Weather Service Form F-6. The preliminary climate data pages...

  1. On the future of astrostatistics: statistical foundations and statistical practice

    CERN Document Server

    Loredo, Thomas J

    2012-01-01

    This paper summarizes a presentation for a panel discussion on "The Future of Astrostatistics" held at the Statistical Challenges in Modern Astronomy V conference at Pennsylvania State University in June 2011. I argue that the emerging needs of astrostatistics may both motivate and benefit from fundamental developments in statistics. I highlight some recent work within statistics on fundamental topics relevant to astrostatistical practice, including the Bayesian/frequentist debate (and ideas for a synthesis), multilevel models, and multiple testing. As an important direction for future work in statistics, I emphasize that astronomers need a statistical framework that explicitly supports unfolding chains of discovery, with acquisition, cataloging, and modeling of data not seen as isolated tasks, but rather as parts of an ongoing, integrated sequence of analyses, with information and uncertainty propagating forward and backward through the chain. A prototypical example is surveying of astronomical populations, ...

  2. FY2012 Office of Child Support Preliminary Report

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Office of Child Support Preliminary Report highlights financial and statistical program achievements based on quarterly and annual data. In fiscal year (FY)...

  3. Nonparametric statistical methods using R

    CERN Document Server

    Kloke, John

    2014-01-01

    A Practical Guide to Implementing Nonparametric and Rank-Based Procedures. Nonparametric Statistical Methods Using R covers traditional nonparametric methods and rank-based analyses, including estimation and inference for models ranging from simple location models to general linear and nonlinear models for uncorrelated and correlated responses. The authors emphasize applications and statistical computation. They illustrate the methods with many real and simulated data examples using R, including the packages Rfit and npsm. The book first gives an overview of the R language and basic statistical c

  4. Students' attitudes towards learning statistics

    Science.gov (United States)

    Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah

    2015-05-01

    A positive attitude towards learning is vital in order to master the core content of the subject matter under study. This is no exception when learning statistics, especially at the university level. Therefore, this study investigates students' attitudes towards learning statistics. Six variables or constructs have been identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used for the study is a questionnaire adopted and adapted from the reliable Survey of Attitudes Towards Statistics (SATS©). The study was conducted with engineering undergraduate students at a university on the East Coast of Malaysia. The respondents consist of students who were taking the applied statistics course in different faculties. The results are analysed by means of descriptive analysis and contribute to a descriptive understanding of students' attitudes towards the teaching and learning of statistics.

  5. Plastic Surgery Statistics

    Science.gov (United States)

    Plastic surgery procedural statistics from the American Society of Plastic Surgeons.

  6. MQSA National Statistics

    Science.gov (United States)

  7. Applying statistics in behavioural research

    NARCIS (Netherlands)

    Ellis, J.L.

    2016-01-01

    Applying Statistics in Behavioural Research is written for undergraduate students in the behavioural sciences, such as Psychology, Pedagogy, Sociology and Ethology. The topics range from basic techniques, like correlation and t-tests, to moderately advanced analyses, like multiple regression and MANOVA.

  8. Statistical Analysis of Time Series Data (STATS). Users Manual (Preliminary)

    Science.gov (United States)

    1987-05-01

    [OCR-damaged excerpt from the users manual; recoverable content:] Periods of 15, 30, 60, 90, 120, and 183 days are presently used. Record layout (continued), fields: NPRDS - actual number of periods for the event following on IN records until the next ID, BF, or LI record; JEND - order number of the last period in the time series to select for analysis (if blank, the last period is assumed); JPPF - plotting ... values. IN record - TIME SERIES DATA.

  9. Classical Statistics and Statistical Learning in Imaging Neuroscience

    Directory of Open Access Journals (Sweden)

    Danilo Bzdok

    2017-10-01

    Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using the t-test and ANOVA. Throughout recent years, statistical learning methods have enjoyed increasing popularity, especially for applications in rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It retraces how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques.
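
    A minimal sketch contrasting the two regimes on synthetic data (a group-level t-test versus cross-validated out-of-sample accuracy):

        import numpy as np
        from scipy import stats
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n = 100
        X = rng.normal(size=(n, 5))                      # synthetic "activations"
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

        # Classical statistics: does feature 0 differ between the two groups?
        t, p = stats.ttest_ind(X[y == 1, 0], X[y == 0, 0])
        print(f"t-test on feature 0: t = {t:.2f}, p = {p:.2e}")

        # Statistical learning: how well do all features predict unseen cases?
        acc = cross_val_score(LogisticRegression(), X, y, cv=5)
        print(f"cross-validated accuracy = {acc.mean():.2f}")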

  10. Preliminary ECLSS waste water model

    Science.gov (United States)

    Carter, Donald L.; Holder, Donald W., Jr.; Alexander, Kevin; Shaw, R. G.; Hayase, John K.

    1991-01-01

    A preliminary waste water model for input to the Space Station Freedom (SSF) Environmental Control and Life Support System (ECLSS) Water Processor (WP) has been generated for design purposes. Data have been compiled from various ECLSS tests and flight sample analyses. A discussion of the characterization of the waste streams comprising the model is presented, along with a discussion of the waste water model and the rationale for the inclusion of contaminants in their respective concentrations. The major objective is to establish a methodology for the development of a waste water model and to present the current state of that model.

  12. Predict! Teaching Statistics Using Informal Statistical Inference

    Science.gov (United States)

    Makar, Katie

    2013-01-01

    Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…

  13. Misuse of statistics in surgical literature.

    Science.gov (United States)

    Thiese, Matthew S; Ronna, Brenden; Robbins, Riann B

    2016-08-01

    Statistical analyses are a key part of biomedical research. Traditionally surgical research has relied upon a few statistical methods for evaluation and interpretation of data to improve clinical practice. As research methods have increased in both rigor and complexity, statistical analyses and interpretation have fallen behind. Some evidence suggests that surgical research studies are being designed and analyzed improperly given the specific study question. The goal of this article is to discuss the complexities of surgical research analyses and interpretation, and provide some resources to aid in these processes.

  14. Statistical ecology comes of age.

    Science.gov (United States)

    Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-12-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.

  15. Statistical ecology comes of age

    Science.gov (United States)

    Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-01-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151

  16. R statistical application development by example : beginner's guide

    CERN Document Server

    Tattar, Narayanachart Prabhanjan

    2013-01-01

    Full of screenshots and examples, this Beginner's Guide by Example will teach you practically everything you need to know about R statistical application development from scratch. You will begin by learning the first concepts of statistics in R, which is vital in this fast-paced era, and you will not need a preliminary course on the subject.

  17. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Science.gov (United States)

    West, Brady T; Sakshaug, Joseph W; Aurelien, Guy Alain S

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.
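
    As an illustration of the kind of analytic error the article describes, here is a minimal Python sketch (synthetic data, not SESTAT; the strata sizes and income figures are invented) showing how ignoring design weights from a disproportionate stratified sample biases a simple mean, while inverse-probability weighting recovers the population quantity.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical two-stratum population: stratum A is large with low
    # incomes, stratum B is small with high incomes.
    pop_a = rng.normal(40_000, 8_000, size=90_000)
    pop_b = rng.normal(90_000, 15_000, size=10_000)
    true_mean = np.concatenate([pop_a, pop_b]).mean()

    # Disproportionate design: oversample the small stratum B.
    n_a, n_b = 500, 500
    sample_a = rng.choice(pop_a, n_a, replace=False)
    sample_b = rng.choice(pop_b, n_b, replace=False)
    sample = np.concatenate([sample_a, sample_b])

    # Design weights = inverse selection probabilities.
    w = np.concatenate([np.full(n_a, len(pop_a) / n_a),
                        np.full(n_b, len(pop_b) / n_b)])

    naive_mean = sample.mean()                     # ignores the design
    weighted_mean = np.average(sample, weights=w)  # accounts for it

    print(f"population mean:      {true_mean:,.0f}")
    print(f"naive sample mean:    {naive_mean:,.0f}")  # biased upward
    print(f"design-weighted mean: {weighted_mean:,.0f}")
    ```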

  18. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Science.gov (United States)

    West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817

  19. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Directory of Open Access Journals (Sweden)

    Brady T West

    Full Text Available Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.

  20. Statistical and environmental analyses for the definition of a regional rainfall threshold system for landslide triggering in Tuscany (Italy)

    Institute of Scientific and Technical Information of China (English)

    ROSI Ascanio; SEGONI Samuele; CATANI Filippo; CASAGLI Nicola

    2012-01-01

    The aim of this work is the determination of regional-scale rainfall thresholds for the triggering of landslides in the Tuscany Region (Italy). The critical rainfall events related to the occurrence of 593 past landslides were characterized in terms of duration (D) and intensity (I). I and D values were plotted in a log-log diagram and a lower boundary was clearly noticeable: it was interpreted as a threshold representing the rainfall conditions associated with landsliding. That was also confirmed by a comparison with many literature thresholds, but at the same time it was clear that such a threshold would be affected by too large an approximation to be effectively used in a regional warning system. Therefore, further analyses were performed differentiating the events on the basis of seasonality, magnitude, location, land use and lithology. None of these criteria discriminated groups of events that could be characterized by a specific and more effective threshold. This outcome can be interpreted as a demonstration that at the regional scale the best results are obtained by the simplest approach, in our case an empirical black-box model which accounts for only two rainfall parameters (I and D). A set of thresholds could therefore be conveniently defined using a statistical approach: four thresholds corresponding to four severity levels were defined by means of the prediction interval technique, and we developed a prototype warning system based on rainfall recordings or weather forecasts.
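
    A rough Python sketch of the kind of intensity-duration (I-D) threshold fit the abstract describes, on invented data, with a simple stand-in for the prediction-interval technique (the fitted log-log line is shifted down to a low residual quantile); the synthetic event distribution, the exponent, and the 5% level are all assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical rainfall events that triggered landslides:
    # duration D (hours) and mean intensity I (mm/h).
    D = rng.uniform(1, 200, size=593)
    I = 20.0 * D**-0.6 * rng.lognormal(0.0, 0.4, size=593)

    # Fit log I = log(alpha) - beta * log D by least squares.
    X = np.vstack([np.ones_like(D), np.log(D)]).T
    coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    log_alpha, neg_beta = coef

    # Shift the fitted line down so that a chosen fraction of events
    # (here 5%) fall below it -- a crude stand-in for the
    # prediction-interval technique used to define severity levels.
    residuals = np.log(I) - X @ coef
    shift = np.quantile(residuals, 0.05)
    alpha_thr, beta = np.exp(log_alpha + shift), -neg_beta
    print(f"threshold: I = {alpha_thr:.2f} * D^(-{beta:.2f})")
    ```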

  1. The foundations of statistics

    CERN Document Server

    Savage, Leonard J

    1972-01-01

    Classic analysis of the foundations of statistics and development of personal probability, one of the greatest controversies in modern statistical thought. Revised edition. Calculus, probability, statistics, and Boolean algebra are recommended.

  2. Adrenal Gland Tumors: Statistics

    Science.gov (United States)

    A primary adrenal gland tumor is very uncommon, and exact statistics are not available for this type of tumor.

  3. Blood Facts and Statistics

    Science.gov (United States)

    Facts and statistics about blood, covering blood needs, blood components, and whole blood and red blood cells.

  4. Statistical methods in spatial genetics

    DEFF Research Database (Denmark)

    Guillot, Gilles; Leblois, Raphael; Coulon, Aurelie

    2009-01-01

    The joint analysis of spatial and genetic data is rapidly becoming the norm in population genetics. More and more studies explicitly describe and quantify the spatial organization of genetic variation and try to relate it to underlying ecological processes. As it has become increasingly difficult to keep abreast with the latest methodological developments, we review the statistical toolbox available to analyse population genetic data in a spatially explicit framework. We mostly focus on statistical concepts but also discuss practical aspects of the analytical methods, highlighting not only...

  5. Algebraic statistics computational commutative algebra in statistics

    CERN Document Server

    Pistone, Giovanni; Wynn, Henry P

    2000-01-01

    Written by pioneers in this exciting new field, Algebraic Statistics introduces the application of polynomial algebra to experimental design, discrete probability, and statistics. It begins with an introduction to Gröbner bases and a thorough description of their applications to experimental design. A special chapter covers the binary case with new application to coherent systems in reliability and two level factorial designs. The work paves the way, in the last two chapters, for the application of computer algebra to discrete probability and statistical modelling through the important concept of an algebraic statistical model. As the first book on the subject, Algebraic Statistics presents many opportunities for spin-off research and applications and should become a landmark work welcomed by both the statistical community and its relatives in mathematics and computer science.

  6. PROBABILITY AND STATISTICS.

    Science.gov (United States)

    Statistical analysis, probability, information theory, differential equations, statistical processes, stochastic processes, multivariate analysis, distribution theory, decision theory, measure theory, optimization.

  7. Analyses of a Virtual World

    CERN Document Server

    Holovatch, Yurij; Szell, Michael; Thurner, Stefan

    2016-01-01

    We present an overview of a series of results obtained from the analysis of human behavior in a virtual environment. We focus on the massive multiplayer online game (MMOG) Pardus which has a worldwide participant base of more than 400,000 registered players. We provide evidence for striking statistical similarities between social structures and human-action dynamics in the real and virtual worlds. In this sense MMOGs provide an extraordinary way for accurate and falsifiable studies of social phenomena. We further discuss possibilities to apply methods and concepts developed in the course of these studies to analyse oral and written narratives.

  8. Practical Statistics for Environmental and Biological Scientists

    CERN Document Server

    Townend, John

    2012-01-01

    All students and researchers in environmental and biological sciences require statistical methods at some stage of their work. Many have a preconception that statistics are difficult and unpleasant and find that the textbooks available are difficult to understand. Practical Statistics for Environmental and Biological Scientists provides a concise, user-friendly, non-technical introduction to statistics. The book covers planning and designing an experiment, how to analyse and present data, and the limitations and assumptions of each statistical method. The text does not refer to a specific comp

  9. Statistical analysis of personal radiofrequency electromagnetic field measurements with nondetects.

    Science.gov (United States)

    Röösli, Martin; Frei, Patrizia; Mohler, Evelyn; Braun-Fahrländer, Charlotte; Bürgi, Alfred; Fröhlich, Jürg; Neubauer, Georg; Theis, Gaston; Egger, Matthias

    2008-09-01

    Exposimeters are increasingly applied in bioelectromagnetic research to determine personal radiofrequency electromagnetic field (RF-EMF) exposure. The main advantages of exposimeter measurements are their convenient handling for study participants and the large amount of personal exposure data, which can be obtained for several RF-EMF sources. However, the large proportion of measurements below the detection limit is a challenge for data analysis. With the robust ROS (regression on order statistics) method, summary statistics can be calculated by fitting an assumed distribution to the observed data. We used a preliminary sample of 109 weekly exposimeter measurements from the QUALIFEX study to compare summary statistics computed by robust ROS with a naïve approach, where values below the detection limit were replaced by the value of the detection limit. For the total RF-EMF exposure, differences between the naïve approach and the robust ROS were moderate for the 90th percentile and the arithmetic mean. However, exposure contributions from minor RF-EMF sources were considerably overestimated with the naïve approach. This results in an underestimation of the exposure range in the population, which may bias the evaluation of potential exposure-response associations. We conclude from our analyses that summary statistics of exposimeter data calculated by robust ROS are more reliable and more informative than estimates based on a naïve approach. Nevertheless, estimates of source-specific medians or even lower percentiles depend on the assumed data distribution and should be considered with caution. Copyright 2008 Wiley-Liss, Inc.
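
    A simplified Python sketch of the contrast the abstract draws, assuming invented lognormal exposure data and a single detection limit; the regression-on-order-statistics step below is a bare-bones stand-in for the robust ROS procedure, not the authors' implementation.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)

    # Hypothetical RF-EMF exposure data (V/m): lognormal truth, with
    # values below a detection limit reported only as "<DL".
    true = rng.lognormal(mean=-2.0, sigma=0.8, size=500)
    dl = 0.05
    detected = true[true >= dl]
    n, n_nd = len(true), int((true < dl).sum())

    # Naive approach: replace nondetects with the detection limit.
    naive = np.concatenate([detected, np.full(n_nd, dl)])

    # Simplified robust ROS: regress logs of the detected values on
    # normal order statistics, then impute the nondetects from the
    # fitted distribution below the detection limit.
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)  # plotting positions
    q = stats.norm.ppf(pp)
    order = np.sort(detected)
    slope, intercept, *_ = stats.linregress(q[n_nd:], np.log(order))
    imputed = np.exp(intercept + slope * q[:n_nd])
    ros = np.concatenate([imputed, order])

    for name, x in [("naive", naive), ("robust ROS", ros)]:
        print(f"{name:>10}: mean={x.mean():.4f}, "
              f"p90={np.quantile(x, 0.9):.4f}")
    ```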

  10. Multivariate Evolutionary Analyses in Astrophysics

    CERN Document Server

    Fraix-Burnet, Didier

    2011-01-01

    The large amount of data on galaxies, up to higher and higher redshifts, calls for sophisticated statistical approaches to build adequate classifications. Multivariate cluster analyses, which compare objects for their global similarities, are still confidential in astrophysics, probably because their results are somewhat difficult to interpret. We believe that the missing key is the unavoidable characteristic of our Universe: evolution. Our approach, known as Astrocladistics, is based on the evolutionary nature of both galaxies and their properties. It gathers objects according to their "histories" and establishes an evolutionary scenario among groups of objects. In this presentation, I show two recent results on globular clusters and early-type galaxies to illustrate how the evolutionary concepts of Astrocladistics can also be useful for multivariate analyses such as K-means Cluster Analysis.
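
    Since the abstract contrasts astrocladistics with conventional multivariate cluster analyses such as K-means, here is a minimal K-means sketch on invented galaxy-property data (scikit-learn assumed; the three properties and group parameters are hypothetical), grouping objects by global similarity without any evolutionary ordering.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)

    # Hypothetical multivariate galaxy properties (e.g. magnitude,
    # colour, velocity dispersion) for two intrinsic groups.
    g1 = rng.normal([20.0, 0.9, 150.0], [0.5, 0.1, 20.0], size=(150, 3))
    g2 = rng.normal([18.0, 1.2, 250.0], [0.5, 0.1, 20.0], size=(100, 3))
    X = StandardScaler().fit_transform(np.vstack([g1, g2]))

    # K-means groups objects by global similarity; astrocladistics
    # would instead order the groups into an evolutionary scenario.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(labels))
    ```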

  11. Editorial: Special Issue: Emerging Research In Statistics Education

    Directory of Open Access Journals (Sweden)

    Carmen Batanero

    2007-10-01

    The ICOTS conferences are offered by IASE to the community of statistics educators and were started 24 years ago by the International Statistical Institute (ISI). ICOTS-7 was successfully held in Brazil, with over 550 participants representing about 50 different countries. The more than 220 invited papers, 120 contributed papers, 120 posters, keynote lectures, panels, and special sessions gave a synthesis of the main tendencies and developments in statistics education. Selecting ICOTS-7 papers for this special issue was not an easy task, given the enormous number of contributions. We focused, as a first step, on those invited papers dealing specifically with empirical research in themes that could be of interest to the IEJME audience. We then considered only those papers that had passed the double-blind refereeing process in ICOTS (since refereeing was optional for invited papers). Taking into account these restrictions, we selected a sample of papers that includes a variety of research topics and nationalities of authors, as well as both young and experienced researchers. The papers in this issue analyse probability, distribution and conditional probability, variation, statistical graphs, statistical literacy, sampling distributions, informal inference, multivariate data and teachers' views about teaching statistics. The issue combines qualitative and quantitative research with methods including interviews, open-ended tasks, paper-and-pencil and computer-based questionnaires, and Rasch or factor analysis. Students in the samples range from primary school to university level, including prospective and in-service teachers. The studies focus on students' or teachers' conceptions, assessment, instruction, use of technology or research methods. I hope this variety reflects the state of research in statistics education and increases the readers' interest to know more about what is going on in this area; hopefully some of them will decide to undertake new research. In closing, I want to thank the contributors

  12. Measuring statistical evidence using relative belief

    Directory of Open Access Journals (Sweden)

    Michael Evans

    2016-01-01

    Full Text Available A fundamental concern of a theory of statistical inference is how one should measure statistical evidence. Certainly the words “statistical evidence,” or perhaps just “evidence,” are much used in statistical contexts. It is fair to say, however, that the precise characterization of this concept is somewhat elusive. Our goal here is to provide a definition of how to measure statistical evidence for any particular statistical problem. Since evidence is what causes beliefs to change, it is proposed to measure evidence by the amount beliefs change from a priori to a posteriori. As such, our definition involves prior beliefs and this raises issues of subjectivity versus objectivity in statistical analyses. This is dealt with through a principle requiring the falsifiability of any ingredients to a statistical analysis. These concerns lead to checking for prior-data conflict and measuring the a priori bias in a prior.
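
    A small worked sketch of the proposal to measure evidence by the change from prior to posterior belief, here as the relative belief ratio (posterior density over prior density) for a Bernoulli parameter; the Beta(1, 1) prior, the 14-of-20 data, and the hypothesized value are invented for illustration.

    ```python
    from scipy.stats import beta

    # Hypothetical setup: Beta(1, 1) prior on a Bernoulli success
    # probability theta, with 14 successes in 20 trials observed.
    a0, b0 = 1.0, 1.0
    successes, trials = 14, 20

    prior = beta(a0, b0)
    posterior = beta(a0 + successes, b0 + trials - successes)

    theta0 = 0.5  # hypothesized value
    rb = posterior.pdf(theta0) / prior.pdf(theta0)

    # RB > 1: the data provide evidence in favour of theta0;
    # RB < 1: evidence against it.
    print(f"relative belief ratio at theta0={theta0}: {rb:.3f}")
    ```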

  13. Modern applied statistics with S-plus

    CERN Document Server

    Venables, W N

    1994-01-01

    S-Plus is a powerful environment for statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-Plus to perform statistical analyses and provides both an introduction to the use of S-Plus and a course in modern statistical methods. The aim of the book is to show how to use S-Plus as a powerful and graphical system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-Plus, and both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets.

  14. Mad Libs Statistics: A "Happy" Activity

    Science.gov (United States)

    Trumpower, David

    2010-01-01

    This article describes a fun activity that can be used to help students make links between statistical analyses and their real-world implications. Although an illustrative example is provided using analysis of variance, the activity may be adapted for use with other statistical techniques.

  15. Explorations in statistics: statistical facets of reproducibility.

    Science.gov (United States)

    Curran-Everett, Douglas

    2016-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science.

  16. Statistics using R

    CERN Document Server

    Purohit, Sudha G; Deshmukh, Shailaja R

    2015-01-01

    STATISTICS USING R will be useful at different levels, from an undergraduate course in statistics, through graduate courses in biological sciences, engineering, management and so on. The book introduces statistical terminology and defines it for the benefit of a novice. For a practicing statistician, it will serve as a guide to R language for statistical analysis. For a researcher, it is a dual guide, simultaneously explaining appropriate statistical methods for the problems at hand and indicating how these methods can be implemented using the R language. For a software developer, it is a guide in a variety of statistical methods for development of a suite of statistical procedures.

  17. Bayesian Statistics in Software Engineering: Practical Guide and Case Studies

    OpenAIRE

    Furia, Carlo A.

    2016-01-01

    Statistics comes in two main flavors: frequentist and Bayesian. For historical and technical reasons, frequentist statistics has dominated data analysis in the past; but Bayesian statistics is making a comeback at the forefront of science. In this paper, we give a practical overview of Bayesian statistics and illustrate its main advantages over frequentist statistics for the kinds of analyses that are common in empirical software engineering, where frequentist statistics still is standard. We...

  18. Dealing with statistics what you need to know

    CERN Document Server

    Brown, Reva Berman

    2007-01-01

    A guide to the essential statistical skills needed for success in assignments, projects or dissertations. It explains why it is impossible to avoid using statistics in analysing data. It also describes the language of statistics to make it easier to understand the various terms used for statistical techniques.

  19. Sproglig Metode og Analyse

    DEFF Research Database (Denmark)

    le Fevre Jakobsen, Bjarne

    The publication contains exercise materials, texts, PowerPoint presentations, and handouts for the course Sproglig Metode og Analyse (Linguistic Method and Analysis) at BA and elective level in Danish/Nordic Studies, 2010-2011.

  20. SPATIAL STATISTICS FOR SIMULATED PACKINGS OF SPHERES

    Directory of Open Access Journals (Sweden)

    Alexander Bezrukov

    2011-05-01

    Full Text Available This paper reports on spatial-statistical analyses for simulated random packings of spheres with random diameters. The simulation methods are the force-biased algorithm and the Jodrey-Tory sedimentation algorithm. The sphere diameters are taken as constant or following a bimodal or lognormal distribution. Standard characteristics of spatial statistics are used to describe these packings statistically, namely volume fraction, pair correlation function of the system of sphere centres and spherical contact distribution function of the set-theoretic union of all spheres. Furthermore, the coordination numbers are analysed.
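
    A minimal Python sketch of two of the characteristics named above, volume fraction and coordination number, computed for an invented configuration of equal spheres (random centres standing in for force-biased or Jodrey-Tory output; the diameter, count, and contact tolerance are assumptions).

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(1)

    # Hypothetical configuration (no overlap check) standing in for a
    # simulated packing: sphere centres in a unit box, constant diameter d.
    d = 0.05
    centres = rng.uniform(0, 1, size=(2000, 3))

    # Volume fraction: total sphere volume per unit box volume.
    phi = len(centres) * (np.pi / 6.0) * d**3
    print(f"volume fraction: {phi:.3f}")

    # Coordination number: neighbours whose centres lie within
    # (1 + eps) * d, here eps = 0.02 as a contact tolerance.
    tree = cKDTree(centres)
    coord = np.zeros(len(centres))
    for i, j in tree.query_pairs(r=1.02 * d):
        coord[i] += 1
        coord[j] += 1
    print(f"mean coordination number: {coord.mean():.2f}")
    ```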

  1. Closed rhinoplasty:effects and changes on voice - a preliminary report

    Institute of Scientific and Technical Information of China (English)

    Giuseppe Guarro; Romano Mafifa; Barbara Rasile; Carmine Alfano

    2016-01-01

    Aim: The effects of rhinoplasty have been studied from many points of view, but little scientific work has focused on changes in the voice after rhinoplasty. This preliminary study analyzed these potential effects, objectively and subjectively, in 19 patients who underwent exclusively closed rhinoplasty. Methods: The evaluation was conducted from September 2012 to May 2013 on 19 patients (7 males, 12 females) who underwent primary rhinoplasty with an exclusively closed approach. All patients were evaluated before and 6 months after surgery. Each answered a questionnaire (Voice Handicap Index Score) and had their voice recorded for spectrographic analysis; this system allowed measurement of the intensity and frequency of vowels ("A" and "E") and nasal consonants ("N" and "M") before and after surgery. Data were analysed with the Mann-Whitney test. Results: Sixteen patients showed statistically significant differences after surgery. An increased frequency of emission of the consonant sounds was detected in 69% of cases (P = 0.046), while the same phenomenon was noticed for vowel sounds in 74% of cases (P = 0.048). Conclusion: Many patients who undergo rhinoplasty think that the intervention only leads to anatomical changes and improvement of respiratory function. The surgeon should instead accurately inform patients about the potential effects on the voice. This preliminary study reveals significant effects of closed rhinoplasty on the human voice.
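
    A minimal sketch of the study's statistical step, assuming invented pre- and post-operative frequency measurements (the shift and spread are hypothetical); it applies the Mann-Whitney test as the abstract reports, although paired pre/post data could also be analysed with a paired test.

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(7)

    # Hypothetical fundamental-frequency measurements (Hz) of the vowel
    # "A" for 19 patients before and six months after surgery.
    pre = rng.normal(210, 15, size=19)
    post = pre + rng.normal(5, 8, size=19)  # slight upward shift

    # Two-sided Mann-Whitney U test, as used in the study.
    stat, p = mannwhitneyu(pre, post, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p:.3f}")
    ```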

  2. 2016 TRI Preliminary Dataset

    Science.gov (United States)

    The TRI preliminary dataset includes the most current TRI data available and reflects toxic chemical releases and pollution prevention activities that occurred at TRI facilities during the 2016 calendar year.

  3. NLTE analyses of sdB stars: progress and prospects

    CERN Document Server

    Przybilla, N; Edelmann, H

    2005-01-01

    We report on preliminary results of a hybrid non-LTE analysis of high-resolution, high-S/N spectra of the helium-rich subdwarf B star Feige49 and the helium-poor sdB HD205805. Non-LTE effects are found to have a notable impact on the stellar parameter and abundance determination. In particular the HeI lines show significant deviations from detailed balance, with the computed equivalent widths strengthened by up to ~35%. Non-LTE abundance corrections for the metals (C, N, O, Mg, S) are of the order ~0.05-0.25 dex on the mean, while corrections of up to ~0.7 dex are derived for individual transitions. The non-LTE approach reduces systematic trends and the statistical uncertainties in the abundance determination. Consequently, non-LTE analyses of a larger sample of objects have the potential to put much tighter constraints on the formation history of the different sdB populations than currently discussed.

  4. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hoffmeyer, P.

    Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull. The statistical fits have generally been made using all data (100%) and the lower tail (30%) of the data. The Maximum Likelihood Method and the Least Squares Technique have been used to estimate the statistical parameters in the selected distributions. 8 different databases are analysed. The results show that 2-parameter Weibull (and Normal) distributions give the best fits to the data available, especially if tail fits are used, whereas the Lognormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used.
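
    A rough Python sketch of the two estimation routes named above, on invented strength data: a maximum-likelihood fit of a 2-parameter Weibull to the full sample, and a least-squares fit restricted to the lower 30% tail via the linearized Weibull CDF; the sample size and true parameters are assumptions, not the paper's databases.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical bending-strength data (MPa).
    strength = stats.weibull_min.rvs(c=3.5, scale=45.0, size=1000,
                                     random_state=rng)

    # Maximum-likelihood fit of a 2-parameter Weibull (location fixed at 0).
    c_all, _, scale_all = stats.weibull_min.fit(strength, floc=0)
    print(f"full-sample MLE: shape={c_all:.2f}, scale={scale_all:.1f}")

    # Least-squares tail fit: linearize the Weibull CDF,
    # log(-log(1 - F)) = c*log(x) - c*log(scale),
    # and regress using only the lower 30% of the ordered sample.
    x = np.sort(strength)
    F = (np.arange(1, len(x) + 1) - 0.5) / len(x)  # plotting positions
    k = int(0.3 * len(x))
    A = np.vstack([np.log(x[:k]), np.ones(k)]).T
    b = np.log(-np.log(1.0 - F[:k]))
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    c_tail, intercept = coef
    scale_tail = np.exp(-intercept / c_tail)
    print(f"lower-tail LS:   shape={c_tail:.2f}, scale={scale_tail:.1f}")
    ```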

  5. Industrial statistics with Minitab

    CERN Document Server

    Cintas, Pere Grima; Llabres, Xavier Tort-Martorell

    2012-01-01

    Industrial Statistics with MINITAB demonstrates the use of MINITAB as a tool for performing statistical analysis in an industrial context. This book covers introductory industrial statistics, exploring the most commonly used techniques alongside those that serve to give an overview of more complex issues. A plethora of examples in MINITAB are featured along with case studies for each of the statistical techniques presented. Industrial Statistics with MINITAB: Provides comprehensive coverage of user-friendly practical guidance to the essential statistical methods applied in industry. Explores

  6. Statistics For Dummies

    CERN Document Server

    Rumsey, Deborah

    2011-01-01

    The fun and easy way to get down to business with statistics. Stymied by statistics? No fear: this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first semester statistics cou

  7. Reproducible statistical analysis with multiple languages

    DEFF Research Database (Denmark)

    Lenth, Russell; Højsgaard, Søren

    2011-01-01

    This paper describes a system for making reproducible statistical analyses that differs from other systems for reproducible analysis in several ways. The two main differences are: (1) several statistics programs can be used in the same document; (2) documents can be prepared using OpenOffice or \LaTeX. The main part of this paper is an example showing how to use the programs together in an OpenOffice text document. The paper also contains some practical considerations on the use of literate programming in statistics.

  8. Solar-climatic statistical study

    Energy Technology Data Exchange (ETDEWEB)

    Bray, R.E.

    1979-02-01

    The Solar-Climatic Statistical Study was performed to provide statistical information on the expected future availability of solar and wind power at various nationwide sites. Historic data (SOLMET), at 26 National Weather Service stations reporting hourly solar insolation and collateral meteorological information, were interrogated to provide an estimate of future trends. Solar data are global radiation incident on a horizontal surface, and wind data represent wind power normal to the air flow. Selected insolation and wind power conditions were investigated for their occurrence and persistence, for defined periods of time, on a monthly basis. Information of this nature is intended as an aid to preliminary planning activities for the design and operation of solar and wind energy utilization and conversion systems. Presented in this volume are probability estimates of solar insolation and wind power, alone and in combination, occurring and persisting at or above specified thresholds, for up to one week, for each of the 26 SOLMET stations. Diurnal variations of wind power were also considered. Selected probability data for each station are presented graphically, and comprehensive plots for all stations are provided on a set of microfiche included in a folder in the back of this volume.
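
    A minimal sketch of the persistence statistic described above, assuming an invented hourly insolation series (the threshold, duration, and distribution are hypothetical): the fraction of starting hours from which insolation stays at or above a threshold for a given number of consecutive hours.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical hourly global-horizontal insolation series (kWh/m^2),
    # standing in for a SOLMET station record.
    hours = 24 * 365
    insolation = np.clip(rng.normal(0.4, 0.25, size=hours), 0.0, None)

    def persistence_probability(series, threshold, duration):
        """Fraction of window start hours from which the series stays
        at or above `threshold` for at least `duration` hours."""
        above = series >= threshold
        windows = np.lib.stride_tricks.sliding_window_view(above, duration)
        return windows.all(axis=1).mean()

    print(persistence_probability(insolation, threshold=0.5, duration=4))
    ```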

  9. Industrial psychology students’ attitudes towards statistics

    Directory of Open Access Journals (Sweden)

    Sanet Coetzee

    2010-03-01

    Full Text Available Orientation: The attitude of students toward statistics may influence their enrolment, achievement and motivation in the subject of research and Industrial Psychology. Research purpose: The aims of this study were to determine the reliability and validity of the Survey of Attitudes Toward Statistics (SATS-36) for a South African sample and to determine whether biographical variables influence students' attitudes. Motivation for study: Students could be better prepared for, and guided through, a course in statistics if more is known about their attitudes towards statistics. Research design, approach and method: A cross-sectional survey design was used and the SATS-36 was administered to a sample of convenience consisting of 235 students enrolled in Industrial and Organisational Psychology at a large tertiary institution in South Africa. Main findings: Results revealed that even though students perceive statistics to be technical, complicated and difficult to master, they are interested in the subject and believe statistics to be of value. The degree to which students perceived themselves to be competent in mathematics was related to the degree to which they felt confident in their own ability to master statistics. Males displayed slightly more positive feelings toward statistics than females. Older students perceived statistics to be less difficult than younger students and also displayed slightly more positive feelings concerning statistics. Practical implications: It seems that in preparing students for statistics, their perception regarding their mathematical competence could be managed as well. Contribution: This study provides the first preliminary evidence for the reliability and validity of the SATS-36 for a sample of South African students.

  11. CMS Program Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...

  12. Alcohol Facts and Statistics

    Science.gov (United States)

    Facts and statistics on alcohol use in the United States, including definitions of a standard drink and of drinking levels.

  13. Bureau of Labor Statistics

    Science.gov (United States)

    Subjects covered include employment and unemployment; inflation and prices (Consumer Price Index, Producer Price Indexes, import/export price indexes); consumer spending; industry price indexes; and pay.

  14. Recreational Boating Statistics 2012

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  15. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...

  16. Mathematical and statistical analysis

    Science.gov (United States)

    Houston, A. Glen

    1988-01-01

    The goal of the mathematical and statistical analysis component of RICIS is to research, develop, and evaluate mathematical and statistical techniques for aerospace technology applications. Specific research areas of interest include modeling, simulation, experiment design, reliability assessment, and numerical analysis.

  18. Neuroendocrine Tumor: Statistics

    Science.gov (United States)

    Statistics on neuroendocrine tumors, including how many people in the United States are diagnosed with Merkel cell skin cancer each year.

  19. Experiment in Elementary Statistics

    Science.gov (United States)

    Fernando, P. C. B.

    1976-01-01

    Presents an undergraduate laboratory exercise in elementary statistics in which students verify empirically the various aspects of the Gaussian distribution. Sampling techniques and other commonly used statistical procedures are introduced. (CP)

  20. Overweight and Obesity Statistics

    Science.gov (United States)

    Statistics about overweight and obesity, including prevalence among adults age 20 and older, together with physical activity statistics and research findings for adults.

  1. Uterine Cancer Statistics

    Science.gov (United States)

    Statistics on uterine cancer, with links to statistics for other kinds of cancer (breast, cervical, colorectal, skin, vaginal and vulvar).

  2. School Violence: Data & Statistics

    Science.gov (United States)

    This fact sheet provides up-to-date data and statistics on youth violence, together with its data sources.

  3. Recreational Boating Statistics 2013

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  4. Multivariate statistical methods a first course

    CERN Document Server

    Marcoulides, George A

    2014-01-01

    Multivariate statistics refer to an assortment of statistical methods that have been developed to handle situations in which multiple variables or measures are involved. Any analysis of more than two variables or measures can loosely be considered a multivariate statistical analysis. An introductory text for students learning multivariate statistical methods for the first time, this book keeps mathematical details to a minimum while conveying the basic principles. One of the principal strategies used throughout the book--in addition to the presentation of actual data analyses--is poin

  5. Software for Spatial Statistics

    Directory of Open Access Journals (Sweden)

    Edzer Pebesma

    2015-02-01

    Full Text Available We give an overview of the papers published in this special issue on spatial statistics, of the Journal of Statistical Software. 21 papers address issues covering visualization (micromaps, links to Google Maps or Google Earth), point pattern analysis, geostatistics, analysis of areal aggregated or lattice data, spatio-temporal statistics, Bayesian spatial statistics, and Laplace approximations. We also point to earlier publications in this journal on the same topic.

  6. Software for Spatial Statistics

    OpenAIRE

    Edzer Pebesma; Roger Bivand; Paulo Justiniano Ribeiro

    2015-01-01

    We give an overview of the papers published in this special issue on spatial statistics, of the Journal of Statistical Software. 21 papers address issues covering visualization (micromaps, links to Google Maps or Google Earth), point pattern analysis, geostatistics, analysis of areal aggregated or lattice data, spatio-temporal statistics, Bayesian spatial statistics, and Laplace approximations. We also point to earlier publications in this journal on the same topic.

  7. Ethics in Statistics

    Science.gov (United States)

    Lenard, Christopher; McCarthy, Sally; Mills, Terence

    2014-01-01

    There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…

  8. Selling statistics [Statistics in scientific progress]

    Energy Technology Data Exchange (ETDEWEB)

    Bridle, S. [Astrophysics Group, University College London (United Kingdom)]. E-mail: sarah@star.ucl.ac.uk

    2006-09-15

    From Cosmos to Chaos, by Peter Coles, 2006, Oxford University Press, 224pp. To confirm or refute a scientific theory you have to make a measurement. Unfortunately, however, measurements are never perfect: the rest is statistics. Indeed, statistics is at the very heart of scientific progress, but it is often poorly taught and badly received; for many, the very word conjures up half-remembered nightmares of 'null hypotheses' and 'student's t-tests'. From Cosmos to Chaos by Peter Coles, a cosmologist at Nottingham University, is an approachable antidote that places statistics in a range of catchy contexts. Using this book you will be able to calculate the probabilities in a game of bridge or in a legal trial based on DNA fingerprinting, impress friends by talking confidently about entropy, and stretch your mind thinking about quantum mechanics. (U.K.)

  9. The Economic Cost of Homosexuality: Multilevel Analyses

    Science.gov (United States)

    Baumle, Amanda K.; Poston, Dudley, Jr.

    2011-01-01

    This article builds on earlier studies that have examined "the economic cost of homosexuality," by using data from the 2000 U.S. Census and by employing multilevel analyses. Our findings indicate that partnered gay men experience a 12.5 percent earnings penalty compared to married heterosexual men, and a statistically insignificant earnings…

  10. Statistics Essentials For Dummies

    CERN Document Server

    Rumsey, Deborah

    2010-01-01

    Statistics Essentials For Dummies not only provides students enrolled in Statistics I with an excellent high-level overview of key concepts, but it also serves as a reference or refresher for students in upper-level statistics courses. Free of review and ramp-up material, Statistics Essentials For Dummies sticks to the point, with content focused on key course topics only. It provides discrete explanations of essential concepts taught in a typical first semester college-level statistics course, from odds and error margins to confidence intervals and conclusions. This guide is also a perfect re

  11. Statistics & probaility for dummies

    CERN Document Server

    Rumsey, Deborah J

    2013-01-01

    Two complete eBooks for one low price! Created and compiled by the publisher, this Statistics I and Statistics II bundle brings together two math titles in one, e-only bundle. With this special bundle, you'll get the complete text of the following two titles: Statistics For Dummies, 2nd Edition  Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tra

  12. Head First Statistics

    CERN Document Server

    Griffiths, Dawn

    2009-01-01

    Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics

  13. Business statistics for dummies

    CERN Document Server

    Anderson, Alan

    2013-01-01

    Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, w

  14. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011This excellently presente

  15. Lectures on algebraic statistics

    CERN Document Server

    Drton, Mathias; Sullivant, Seth

    2009-01-01

    How does an algebraic geometer studying secant varieties further the understanding of hypothesis tests in statistics? Why would a statistician working on factor analysis raise open problems about determinantal varieties? Connections of this type are at the heart of the new field of "algebraic statistics". In this field, mathematicians and statisticians come together to solve statistical inference problems using concepts from algebraic geometry as well as related computational and combinatorial techniques. The goal of these lectures is to introduce newcomers from the different camps to algebraic statistics. The introduction will be centered around the following three observations: many important statistical models correspond to algebraic or semi-algebraic sets of parameters; the geometry of these parameter spaces determines the behaviour of widely used statistical inference procedures; computational algebraic geometry can be used to study parameter spaces and other features of statistical models.

  16. Statistics for economics

    CERN Document Server

    Naghshpour, Shahdad

    2012-01-01

    Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analy...

  17. Estimation and inferential statistics

    CERN Document Server

    Sahu, Pradip Kumar; Das, Ajit Kumar

    2015-01-01

    This book focuses on the meaning of statistical inference and estimation. Statistical inference is concerned with the problems of estimation of population parameters and testing hypotheses. Primarily aimed at undergraduate and postgraduate students of statistics, the book is also useful to professionals and researchers in statistical, medical, social and other disciplines. It discusses current methodological techniques used in statistics and related interdisciplinary areas. Every concept is supported with relevant research examples to help readers to find the most suitable application. Statistical tools have been presented by using real-life examples, removing the “fear factor” usually associated with this complex subject. The book will help readers to discover diverse perspectives of statistical theory followed by relevant worked-out examples. Keeping in mind the needs of readers, as well as constantly changing scenarios, the material is presented in an easy-to-understand form.

  18. Breast cancer statistics, 2011.

    Science.gov (United States)

    DeSantis, Carol; Siegel, Rebecca; Bandi, Priti; Jemal, Ahmedin

    2011-01-01

    In this article, the American Cancer Society provides an overview of female breast cancer statistics in the United States, including trends in incidence, mortality, survival, and screening. Approximately 230,480 new cases of invasive breast cancer and 39,520 breast cancer deaths are expected to occur among US women in 2011. Breast cancer incidence rates were stable among all racial/ethnic groups from 2004 to 2008. Breast cancer death rates have been declining since the early 1990s for all women except American Indians/Alaska Natives, among whom rates have remained stable. Disparities in breast cancer death rates are evident by state, socioeconomic status, and race/ethnicity. While significant declines in mortality rates were observed for 36 states and the District of Columbia over the past 10 years, rates for 14 states remained level. Analyses by county-level poverty rates showed that the decrease in mortality rates began later and was slower among women residing in poor areas. As a result, the highest breast cancer death rates shifted from the affluent areas to the poor areas in the early 1990s. Screening rates continue to be lower in poor women compared with non-poor women, despite much progress in increasing mammography utilization. In 2008, 51.4% of poor women had undergone a screening mammogram in the past 2 years compared with 72.8% of non-poor women. Encouraging patients aged 40 years and older to have annual mammography and a clinical breast examination is the single most important step that clinicians can take to reduce suffering and death from breast cancer. Clinicians should also ensure that patients at high risk of breast cancer are identified and offered appropriate screening and follow-up. Continued progress in the control of breast cancer will require sustained and increased efforts to provide high-quality screening, diagnosis, and treatment to all segments of the population.

  19. Illustrating the practice of statistics

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Christina A [Los Alamos National Laboratory; Hamada, Michael S [Los Alamos National Laboratory

    2009-01-01

    The practice of statistics involves analyzing data and planning data collection schemes to answer scientific questions. Issues often arise with the data that must be dealt with and can lead to new procedures. In analyzing data, these issues can sometimes be addressed through the statistical models that are developed. Simulation can also be helpful in evaluating a new procedure. Moreover, simulation coupled with optimization can be used to plan a data collection scheme. The practice of statistics as just described is much more than just using a statistical package. In analyzing the data, it involves understanding the scientific problem and incorporating the scientist's knowledge. In modeling the data, it involves understanding how the data were collected and accounting for limitations of the data where possible. Moreover, the modeling is likely to be iterative by considering a series of models and evaluating the fit of these models. Designing a data collection scheme involves understanding the scientist's goal and staying within his/her budget in terms of time and the available resources. Consequently, a practicing statistician is faced with such tasks and requires skills and tools to do them quickly. We have written this article for students to provide a glimpse of the practice of statistics. To illustrate the practice of statistics, we consider a problem motivated by some precipitation data that our relative, Masaru Hamada, collected some years ago. We describe his rain gauge observational study in Section 2. We describe modeling and an initial analysis of the precipitation data in Section 3. In Section 4, we consider alternative analyses that address potential issues with the precipitation data. In Section 5, we consider the impact of incorporating additional information. We design a data collection scheme to illustrate the use of simulation and optimization in Section 6. We conclude this article in Section 7 with a discussion.

  20. A new method to reduce the statistical and systematic uncertainty of chance coincidence backgrounds measured with waveform digitizers

    CERN Document Server

    O'Donnell, J M

    2016-01-01

    A new method for measuring chance-coincidence backgrounds during the collection of coincidence data is presented. The method relies on acquiring data with near-zero dead time, which is now realistic due to the increasing deployment of flash electronic-digitizer (waveform digitizer) techniques. An experiment designed to use this new method can acquire more coincidence data and achieve a much reduced statistical fluctuation of the measured background. A statistical analysis is presented, and used to derive a figure of merit for the new method. Improvements of a factor of four over other analyses are realistic. The technique is illustrated with preliminary data taken as part of a program to make new measurements of the prompt fission neutron spectra at Los Alamos Neutron Science Center. It is expected that these measurements will occur in a regime where the maximum figure of merit will be exploited.
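
    For context, the conventional estimate of the chance-coincidence background for two uncorrelated detectors follows directly from the singles rates and the coincidence resolving time. The sketch below shows this standard estimate, which the new measurement method is meant to improve upon; the rates and resolving time are hypothetical:

        # Conventional chance-coincidence rate for two uncorrelated detectors:
        # R_chance = 2 * tau * r1 * r2, with tau the coincidence resolving time.
        r1, r2 = 5_000.0, 3_000.0  # singles rates in counts/s (hypothetical)
        tau = 50e-9                # resolving time in seconds (hypothetical)
        r_chance = 2 * tau * r1 * r2
        print(f"expected chance-coincidence rate: {r_chance:.2f} counts/s")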

  1. Baseline Statistics of Linked Statistical Data

    NARCIS (Netherlands)

    Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe

    2014-01-01

    We are surrounded by an ever-increasing ocean of information; everybody will agree to that. We build sophisticated strategies to govern this information: we design data models, develop infrastructures for data sharing, and build tools for data analysis. Statistical datasets curated by National Statistica

  2. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove

    2007-01-01

    The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture...

  3. Analysing Interplanetary Probe Guidance Accuracy

    Directory of Open Access Journals (Sweden)

    S. V. Sukhova

    2016-01-01

    The paper presents a guidance accuracy analysis and estimates the delta-v budget required to provide the trajectory correction maneuvers for direct interplanetary flights (without midcourse gravity assists). The analysis takes into consideration the orbital hyperbolic injection errors (which depend on the selected launch vehicle and ascent trajectory) and the uncertainties of midcourse correction maneuvers. The calculation algorithm is based on Monte Carlo simulation and Danby's matrix methods (the matrizant of Keplerian motion). Danby's method establishes a link between the errors of the spacecraft state vectors at different flight times using the reference Keplerian orbit matrizant. Utilizing the nominal trajectory parameters and the covariance matrix of launch vehicle injection errors, the random perturbed orbits are generated and the required velocity corrections are calculated. The next step is to simulate midcourse maneuver performance uncertainty using the midcourse maneuver covariance matrix. The obtained trajectory correction impulses and spacecraft position errors are statistically processed to compute the required delta-v budget and dispersion ellipse parameters for different prediction intervals. As an example, a guidance accuracy analysis has been conducted for a 2022 mission to Mars and a Venus mission in 2026. The paper considers one and two midcourse correction options, as well as utilization of two different launch vehicles. The presented algorithm based on Monte Carlo simulation and Danby's methods provides a preliminary evaluation of the midcourse correction delta-v budget and spacecraft position error. The only data required for this guidance accuracy analysis are a reference Keplerian trajectory and a covariance matrix of the injection errors. Danby's matrix method also allows other factors affecting the trajectory to be taken into account, thereby increasing the accuracy of the analysis.
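
    The Monte Carlo step can be sketched in a few lines; the covariance matrix, the two-burn execution-error model, and all numbers below are illustrative stand-ins, not the paper's data:

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical covariance of the injection velocity error, (km/s)^2
        cov_inject = np.diag([1e-4, 1e-4, 4e-4])
        exec_sigma = 0.02  # hypothetical 1-sigma relative execution error

        n = 100_000
        # Draw perturbed injection errors from the covariance matrix
        dv_err = rng.multivariate_normal(np.zeros(3), cov_inject, size=n)
        dv1 = np.linalg.norm(dv_err, axis=1)          # first correction burn
        # The first burn is executed imperfectly; a second burn cleans up
        resid = dv1[:, None] * rng.normal(0.0, exec_sigma, size=(n, 3))
        dv2 = np.linalg.norm(resid, axis=1)
        dv_total = dv1 + dv2

        print(f"mean delta-v: {dv_total.mean() * 1000:.1f} m/s")
        print(f"99% delta-v budget: {np.percentile(dv_total, 99) * 1000:.1f} m/s")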

  4. Impact of ontology evolution on functional analyses.

    Science.gov (United States)

    Groß, Anika; Hartung, Michael; Prüfer, Kay; Kelso, Janet; Rahm, Erhard

    2012-10-15

    Ontologies are used in the annotation and analysis of biological data. As knowledge accumulates, ontologies and annotation undergo constant modifications to reflect this new knowledge. These modifications may influence the results of statistical applications such as functional enrichment analyses that describe experimental data in terms of ontological groupings. Here, we investigate to what degree modifications of the Gene Ontology (GO) impact these statistical analyses for both experimental and simulated data. The analysis is based on new measures for the stability of result sets and considers different ontology and annotation changes. Our results show that past changes in the GO are non-uniformly distributed over different branches of the ontology. Considering the semantic relatedness of significant categories in analysis results allows a more realistic stability assessment for functional enrichment studies. We observe that the results of term-enrichment analyses tend to be surprisingly stable despite changes in ontology and annotation.
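
    For readers unfamiliar with the underlying statistics: a term-enrichment test of the kind whose stability is studied here reduces, per GO category, to a contingency-table calculation. A minimal sketch with invented counts, using the hypergeometric survival function:

        from scipy.stats import hypergeom

        # Invented example: 12 of 50 study genes fall in a GO category that
        # annotates 200 of 10,000 genes in the background set.
        M, n, N, k = 10_000, 200, 50, 12
        # P(X >= k) under random sampling gives the enrichment p-value
        p = hypergeom.sf(k - 1, M, n, N)
        print(f"enrichment p-value: {p:.3g}")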

  5. Errors in statistical decision making Chapter 2 in Applied Statistics in Agricultural, Biological, and Environmental Sciences

    Science.gov (United States)

    Agronomic and environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data, are therefore subject to error. This error is of three types,...

  6. UVISS preliminary visibility analysis

    DEFF Research Database (Denmark)

    Betto, Maurizio

    1998-01-01

    The goal of this work is to obtain a preliminary assessment of the sky visibility for an astronomical telescope located on the express pallet of the International Space Station (ISS), taking into account the major constraints imposed on the instrument by the ISS attitude and structure. Part of the work...

  7. A Statistical Analysis of Cryptocurrencies

    Directory of Open Access Journals (Sweden)

    Stephen Chan

    2017-05-01

    We analyze statistical properties of the largest cryptocurrencies (determined by market capitalization), of which Bitcoin is the most prominent example. We characterize their exchange rates versus the U.S. Dollar by fitting parametric distributions to them. It is shown that returns are clearly non-normal; however, no single distribution fits well jointly to all the cryptocurrencies analysed. We find that for the most popular currencies, such as Bitcoin and Litecoin, the generalized hyperbolic distribution gives the best fit, while for the smaller cryptocurrencies the normal inverse Gaussian distribution, generalized t distribution, and Laplace distribution give good fits. The results are important for investment and risk management purposes.
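
    The fitting exercise can be reproduced in outline with scipy.stats; the returns below are synthetic stand-ins for real exchange-rate data, and the candidate set is limited to distributions scipy provides:

        import numpy as np
        from scipy import stats

        # Synthetic heavy-tailed daily log-returns (stand-in for real data)
        returns = stats.t.rvs(3, scale=0.02, size=2000, random_state=0)

        candidates = {
            "normal": stats.norm,
            "Laplace": stats.laplace,
            "Student t": stats.t,
            "normal inverse Gaussian": stats.norminvgauss,
        }
        for name, dist in candidates.items():
            params = dist.fit(returns)                      # maximum likelihood
            loglik = np.sum(dist.logpdf(returns, *params))
            aic = 2 * len(params) - 2 * loglik              # penalized fit
            print(f"{name:>24}: AIC = {aic:.1f}")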

  8. Meta-analyses

    NARCIS (Netherlands)

    Hendriks, M.A.; Luyten, J.W.; Scheerens, J.; Sleegers, P.J.C.; Scheerens, J.

    2014-01-01

    In this chapter results of a research synthesis and quantitative meta-analyses of three facets of time effects in education are presented, namely time at school during regular lesson hours, homework, and extended learning time. The number of studies for these three facets of time that could be used

  9. Contesting Citizenship: Comparative Analyses

    DEFF Research Database (Denmark)

    Siim, Birte; Squires, Judith

    2007-01-01

    Comparative citizenship analyses need to be considered in relation to multiple inequalities and their intersections and to multiple governance and trans-national organising. This, in turn, suggests that comparative citizenship analysis needs to consider new spaces in which struggles for equal citizenship occur...

  10. Wavelet Analyses and Applications

    Science.gov (United States)

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  11. Report sensory analyses veal

    NARCIS (Netherlands)

    Veldman, M.; Schelvis-Smit, A.A.M.

    2005-01-01

    On behalf of a client of Animal Sciences Group, different varieties of veal were analyzed by both instrumental and sensory analyses. The sensory evaluation was performed with a sensory analytical panel in the period of 13th of May and 31st of May, 2005. The three varieties of veal were: young bull,

  12. The statistical stability phenomenon

    CERN Document Server

    Gorban, Igor I

    2017-01-01

    This monograph investigates violations of statistical stability of physical events, variables, and processes and develops a new physical-mathematical theory taking into consideration such violations – the theory of hyper-random phenomena. There are five parts. The first describes the phenomenon of statistical stability and its features, and develops methods for detecting violations of statistical stability, in particular when data is limited. The second part presents several examples of real processes of different physical nature and demonstrates the violation of statistical stability over broad observation intervals. The third part outlines the mathematical foundations of the theory of hyper-random phenomena, while the fourth develops the foundations of the mathematical analysis of divergent and many-valued functions. The fifth part contains theoretical and experimental studies of statistical laws where there is violation of statistical stability. The monograph should be of particular interest to engineers...

  13. Lectures on statistical mechanics

    CERN Document Server

    Bowler, M G

    1982-01-01

    Anyone dissatisfied with the almost ritual dullness of many 'standard' texts in statistical mechanics will be grateful for the lucid explanation and generally reassuring tone. Aimed at securing firm foundations for equilibrium statistical mechanics, topics of great subtlety are presented transparently and enthusiastically. Very little mathematical preparation is required beyond elementary calculus and prerequisites in physics are limited to some elementary classical thermodynamics. Suitable as a basis for a first course in statistical mechanics, the book is an ideal supplement to more convent

  14. Statistics at square one

    CERN Document Server

    Campbell, M J

    2011-01-01

    The new edition of this international bestseller continues to throw light on the world of statistics for health care professionals and medical students. Revised throughout, the 11th edition features new material in the areas of: relative risk, absolute risk and numbers needed to treat; diagnostic tests, sensitivity, specificity, ROC curves; free statistical software. The popular self-testing exercises at the end of every chapter are strengthened by the addition of new sections on reading and reporting statistics and formula appreciation.

  15. Contributions to industrial statistics

    OpenAIRE

    2015-01-01

    This thesis is about statistics' contributions to industry. It is an article compendium comprising four articles divided into two blocks: (i) two contributions for a water supply company, and (ii) significance of the effects in Design of Experiments. In the first block, great emphasis is placed on how research design and statistics can be applied to the various real problems that a water company raises, and it aims to convince water management companies that statistics can be very useful to impr...

  16. Optimization techniques in statistics

    CERN Document Server

    Rustagi, Jagdish S

    1994-01-01

    Statistics help guide us to optimal decisions under uncertainty. A large variety of statistical problems are essentially solutions to optimization problems. The mathematical techniques of optimization are fundamental to statistical theory and practice. In this book, Jagdish Rustagi provides full-spectrum coverage of these methods, ranging from classical optimization and Lagrange multipliers, to numerical techniques using gradients or direct search, to linear, nonlinear, and dynamic programming using the Kuhn-Tucker conditions or the Pontryagin maximal principle. Variational methods and optimiza

  17. Equilibrium statistical mechanics

    CERN Document Server

    Jackson, E Atlee

    2000-01-01

    Ideal as an elementary introduction to equilibrium statistical mechanics, this volume covers both classical and quantum methodology for open and closed systems. Introductory chapters familiarize readers with probability and microscopic models of systems, while additional chapters describe the general derivation of the fundamental statistical mechanics relationships. The final chapter contains 16 sections, each dealing with a different application, ordered according to complexity, from classical through degenerate quantum statistical mechanics. Key features include an elementary introduction t

  18. Applied statistics for economists

    CERN Document Server

    Lewis, Margaret

    2012-01-01

    This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.

  19. Equilibrium statistical mechanics

    CERN Document Server

    Mayer, J E

    1968-01-01

    The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanics is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight t

  20. Mathematical statistics with applications

    CERN Document Server

    Wackerly, Dennis D; Scheaffer, Richard L

    2008-01-01

    In their bestselling MATHEMATICAL STATISTICS WITH APPLICATIONS, premiere authors Dennis Wackerly, William Mendenhall, and Richard L. Scheaffer present a solid foundation in statistical theory while conveying the relevance and importance of the theory in solving practical problems in the real world. The authors' use of practical applications and excellent exercises helps you discover the nature of statistics and understand its essential role in scientific research.

  1. Contributions to statistics

    CERN Document Server

    Mahalanobis, P C

    1965-01-01

    Contributions to Statistics focuses on the processes, methodologies, and approaches involved in statistics. The book is presented to Professor P. C. Mahalanobis on the occasion of his 70th birthday. The selection first offers information on the recovery of ancillary information and combinatorial properties of partially balanced designs and association schemes. Discussions focus on combinatorial applications of the algebra of association matrices, sample size analogy, association matrices and the algebra of association schemes, and conceptual statistical experiments. The book then examines latt

  2. The Study of Optimizing the Structure of Foreign Direct Investment -- analyses of FDI statistical data based on a VAR model

    Institute of Scientific and Technical Information of China (English)

    刘梦琴; 王付顺

    2011-01-01

    Based on a VAR model of statistical data from 1990 to 2010, the paper analyses the relations among GDP and foreign direct investment in the manufacturing industry (FM), service industry (FS), real estate (FRE), and education (FE). The conclusions are as follows: in the long run, increases in FM and FS lead to a rise in GDP, while FRE and FE move in the opposite direction to GDP. In the short term, ranking the influence on national economic development from high to low gives FS, FM, FE, and FRE. We suggest that decision-makers, guided by the Catalogue of Industries for Guiding Foreign Investment, take a long-term view when utilizing foreign direct investment and optimizing its structure, so that the role of FDI in driving economic growth receives full attention while its negative influence is kept to a minimum.
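
    A VAR analysis of this kind can be set up with statsmodels; the sketch below uses synthetic series (the variable names mirror the paper, the numbers do not):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(1)
        n = 21  # annual observations, 1990-2010
        data = pd.DataFrame({
            "GDP": np.cumsum(rng.normal(5.0, 1.0, n)),
            "FM":  np.cumsum(rng.normal(2.0, 1.0, n)),
            "FS":  np.cumsum(rng.normal(2.0, 1.0, n)),
            "FRE": np.cumsum(rng.normal(1.0, 1.0, n)),
            "FE":  np.cumsum(rng.normal(0.5, 1.0, n)),
        })

        # Difference once to (roughly) stationarize, then fit the VAR
        model = VAR(data.diff().dropna())
        results = model.fit(maxlags=2, ic="aic")
        print(results.summary())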

  3. Computed statistics at streamgages, and methods for estimating low-flow frequency statistics and development of regional regression equations for estimating low-flow frequency statistics at ungaged locations in Missouri

    Science.gov (United States)

    Southard, Rodney E.

    2013-01-01

    estimates on one of these streams can be calculated at an ungaged location that has a drainage area that is between 40 percent of the drainage area of the farthest upstream streamgage and within 150 percent of the drainage area of the farthest downstream streamgage along the stream of interest. The second method may be used on any stream with a streamgage that has operated for 10 years or longer and for which anthropogenic effects have not changed the low-flow characteristics at the ungaged location since collection of the streamflow data. A ratio of drainage area of the stream at the ungaged location to the drainage area of the stream at the streamgage was computed to estimate the statistic at the ungaged location. The range of applicability is between 40 and 150 percent of the drainage area of the streamgage, and the ungaged location must be located on the same stream as the streamgage. The third method uses regional regression equations to estimate selected low-flow frequency statistics for unregulated streams in Missouri. This report presents regression equations to estimate frequency statistics for the 10-year recurrence interval and for the N-day durations of 1, 2, 3, 7, 10, 30, and 60 days. Basin and climatic characteristics were computed using geographic information system software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses based on existing digital geospatial data and previous studies. Spatial analyses for geographical bias in the predictive accuracy of the regional regression equations defined three low-flow regions within the State, representing the three major physiographic provinces in Missouri. Region 1 includes the Central Lowlands, Region 2 includes the Ozark Plateaus, and Region 3 includes the Mississippi Alluvial Plain. A total of 207 streamgages were used in the regression analyses for the regional equations. Of the 207 U.S. Geological Survey streamgages, 77 were
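
    The second method, the drainage-area ratio transfer, is simple enough to sketch directly; the function and its numbers below are illustrative, not taken from the report:

        def transfer_low_flow(stat_gaged, da_gaged, da_ungaged):
            """Transfer a low-flow statistic from a streamgage to an ungaged
            site on the same stream via the drainage-area ratio, valid only
            when the ratio lies between 40 and 150 percent."""
            ratio = da_ungaged / da_gaged
            if not 0.40 <= ratio <= 1.50:
                raise ValueError(f"ratio {ratio:.2f} outside the 0.40-1.50 range")
            return stat_gaged * ratio

        # Illustrative: 7-day 10-year low flow of 12.0 cfs at a gage draining
        # 250 mi^2, transferred to an ungaged site draining 300 mi^2
        print(transfer_low_flow(12.0, 250.0, 300.0))  # -> 14.4 cfs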

  4. Statistical discrete geometry

    CERN Document Server

    Ariwahjoedi, Seramika; Kosasih, Jusak Sali; Rovelli, Carlo; Zen, Freddy Permana

    2016-01-01

    Following our earlier work, we construct statistical discrete geometry by applying statistical mechanics to discrete (Regge) gravity. We propose a coarse-graining method for discrete geometry under the assumptions of atomism and background independence. To maintain these assumptions, restrictions are given to the theory by introducing cut-offs in both the ultraviolet and infrared regimes. Having a well-defined statistical picture of discrete Regge geometry, we take the infinite-degrees-of-freedom (large n) limit. We argue that the correct limit consistent with the restrictions and the background independence concept is not the continuum limit of statistical mechanics, but the thermodynamical limit.

  5. Improved Statistics Handling

    OpenAIRE

    2009-01-01

    Ericsson is a global provider of telecommunications systems equipment and related services for mobile and fixed network operators. 3Gsim is a tool used by Ericsson in tests of the 3G RNC node. In order to validate the tests, statistics are constantly gathered within 3Gsim and users can use telnet to access the statistics using some system-specific 3Gsim commands. The statistics can be retrieved but are unstructured for the human eye and need parsing and arranging to be readable. The statist...

  6. Annual Statistical Supplement, 2008

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2008 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  7. Annual Statistical Supplement, 2004

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2004 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  8. Annual Statistical Supplement, 2006

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2006 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  9. Annual Statistical Supplement, 2016

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2016 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  10. Annual Statistical Supplement, 2010

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2010 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  11. Annual Statistical Supplement, 2002

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2002 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  12. Annual Statistical Supplement, 2003

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2003 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  13. Annual Statistical Supplement, 2011

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2011 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  14. Annual Statistical Supplement, 2000

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2000 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  15. Annual Statistical Supplement, 2015

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2015 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  16. Annual Statistical Supplement, 2009

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2009 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  17. Annual Statistical Supplement, 2014

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2014 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  18. Annual Statistical Supplement, 2007

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2007 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  19. Annual Statistical Supplement, 2005

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2005 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  20. Annual Statistical Supplement, 2001

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2001 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  1. 100 statistical tests

    CERN Document Server

    Kanji, Gopal K

    2006-01-01

    This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.

  2. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  3. Statistics in a Nutshell

    CERN Document Server

    Boslaugh, Sarah

    2008-01-01

    Need to learn statistics as part of your job, or want some help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference that's perfect for anyone with no previous background in the subject. This book gives you a solid understanding of statistics without being too simple, yet without the numbing complexity of most college texts. You get a firm grasp of the fundamentals and a hands-on understanding of how to apply them before moving on to the more advanced material that follows. Each chapter presents you with easy-to-follow descriptions illustrat

  4. Record Statistics and Dynamics

    DEFF Research Database (Denmark)

    Sibani, Paolo; Jensen, Henrik J.

    2009-01-01

    The term record statistics covers the statistical properties of records within an ordered series of numerical data obtained from observations or measurements. A record within such series is simply a value larger (or smaller) than all preceding values. The mathematical properties of records strongly... fluctuations of e.g. the energy are able to push the system past some sort of 'edge of stability', inducing irreversible configurational changes, whose statistics then closely follows the statistics of record fluctuations...
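
    The definition above translates directly into code; for i.i.d. data the expected number of upper records among n observations is the n-th harmonic number, which the sketch below checks on synthetic data:

        import numpy as np

        def record_times(series):
            """Indices at which a new upper record is set."""
            best = -np.inf
            times = []
            for i, x in enumerate(series):
                if x > best:
                    best = x
                    times.append(i)
            return times

        rng = np.random.default_rng(7)
        x = rng.normal(size=1000)
        expected = np.sum(1.0 / np.arange(1, x.size + 1))  # harmonic number
        print(f"records: {len(record_times(x))}, i.i.d. expectation: {expected:.1f}")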

  5. Statistics is Easy

    CERN Document Server

    Shasha, Dennis

    2010-01-01

    Statistics is the activity of inferring results about a population given a sample. Historically, statistics books assume an underlying distribution to the data (typically, the normal distribution) and derive results under that assumption. Unfortunately, in real life, one cannot normally be sure of the underlying distribution. For that reason, this book presents a distribution-independent approach to statistics based on a simple computational counting idea called resampling. This book explains the basic concepts of resampling, then systematically presents the standard statistical measures along
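
    The resampling idea the book builds on fits in a few lines; a sketch of a bootstrap confidence interval for a mean, on a toy sample:

        import numpy as np

        rng = np.random.default_rng(3)
        data = np.array([4.1, 5.6, 3.8, 6.2, 5.0, 4.7, 5.9, 4.4])  # toy sample

        # Resample with replacement and collect the statistic of interest
        boot = np.array([rng.choice(data, size=data.size, replace=True).mean()
                         for _ in range(10_000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"mean = {data.mean():.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")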

  6. Principles of statistics

    CERN Document Server

    Bulmer, M G

    1979-01-01

    There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo

  7. Statistical criteria for characterizing irradiance time series.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Ellis, Abraham; Hansen, Clifford W.

    2010-10-01

    We propose and examine several statistical criteria for characterizing time series of solar irradiance. Time series of irradiance are used in analyses that seek to quantify the performance of photovoltaic (PV) power systems over time. Time series of irradiance are either measured or are simulated using models. Simulations of irradiance are often calibrated to or generated from statistics for observed irradiance and simulations are validated by comparing the simulation output to the observed irradiance. Criteria used in this comparison should derive from the context of the analyses in which the simulated irradiance is to be used. We examine three statistics that characterize time series and their use as criteria for comparing time series. We demonstrate these statistics using observed irradiance data recorded in August 2007 in Las Vegas, Nevada, and in June 2009 in Albuquerque, New Mexico.

  8. 47 CFR 1.363 - Introduction of statistical data.

    Science.gov (United States)

    2010-10-01

    ... analyses, and experiments, and those parts of other studies involving statistical methodology shall be... studies, so as to be available upon request. In the case of experimental analyses, a clear and complete... adjustments, if any, to observed data shall be described. In the case of every kind of statistical study,...

  9. UVISS preliminary visibility analysis

    DEFF Research Database (Denmark)

    Betto, Maurizio

    1998-01-01

    The goal of this work is to obtain a preliminary assessment of the sky visibility for an astronomical telescope located on the express pallet of the International Space Station (ISS), taking into account the major constraints imposed on the instrument by the ISS attitude and structure. Part...

  10. Integrating Statistical Visualization Research into the Political Science Classroom

    Science.gov (United States)

    Draper, Geoffrey M.; Liu, Baodong; Riesenfeld, Richard F.

    2011-01-01

    The use of computer software to facilitate learning in political science courses is well established. However, the statistical software packages used in many political science courses can be difficult to use and counter-intuitive. We describe the results of a preliminary user study suggesting that visually-oriented analysis software can help…

  11. Possible future HERA analyses

    CERN Document Server

    Geiser, Achim

    2015-01-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented. The focus is placed on possible future analyses of the existing $ep$ collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-e...

  12. Analysing Access Control Specifications

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2009-01-01

    When prosecuting crimes, the main question to answer is often who had a motive and the possibility to commit the crime. When investigating cyber crimes, the question of possibility is often hard to answer, as in a networked system almost any location can be accessed from almost anywhere. Recent events have revealed intimate knowledge of surveillance and control systems on the side of the attacker, making it often impossible to deduce the identity of an inside attacker from logged data. In this work we present an approach that analyses the access control configuration to identify the set of credentials needed to reach a certain location in a system. This knowledge allows us to identify a set of (inside) actors who have the possibility to commit an insider attack at that location. This has immediate applications in analysing log files, but also nontechnical applications such as identifying possible...

  13. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project 'Feasibility of electricity production from biomass by pressurized gasification systems' within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and it is expected that they behave to a great extent like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  14. Possible future HERA analyses

    Energy Technology Data Exchange (ETDEWEB)

    Geiser, Achim

    2015-12-15

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures, the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is not available any longer, this contribution not only targets experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  15. Statistical Engine Knock Control

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    A new statistical concept for the knock control of a spark ignition automotive engine is proposed. The control aim is associated with a statistical hypothesis test which compares the threshold value to the average value of the maximal amplitude of the knock sensor signal at a given frequency...
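
    The hypothesis test described can be sketched as a one-sided one-sample t-test of the averaged maximal amplitudes against the knock threshold; the amplitudes and threshold below are synthetic placeholders, not engine data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        # Synthetic maximal knock-sensor amplitudes at the monitored frequency
        amplitudes = rng.normal(loc=1.08, scale=0.15, size=50)
        threshold = 1.0  # placeholder knock threshold

        # H0: mean amplitude <= threshold (no knock); one-sided alternative
        t_stat, p_two = stats.ttest_1samp(amplitudes, popmean=threshold)
        p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2
        print(f"t = {t_stat:.2f}, one-sided p = {p_one:.3g}")
        if p_one < 0.05:
            print("H0 rejected: knock detected, retard ignition timing")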

  16. Applied Statistics with SPSS

    Science.gov (United States)

    Huizingh, Eelko K. R. E.

    2007-01-01

    Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…

  17. Statistical Hadronization and Holography

    DEFF Research Database (Denmark)

    Bechi, Jacopo

    2009-01-01

    In this paper we consider some issues about the statistical model of the hadronization in a holographic approach. We introduce a Rindler-like horizon in the bulk and we understand the string breaking as a tunneling event under this horizon. We calculate the hadron spectrum and we get a thermal, and so statistical, shape for it.

  18. Handbook of Spatial Statistics

    CERN Document Server

    Gelfand, Alan E

    2010-01-01

    Offers an introduction detailing the evolution of the field of spatial statistics. This title focuses on the three main branches of spatial statistics: continuous spatial variation (point referenced data); discrete spatial variation, including lattice and areal unit data; and, spatial point patterns.

  19. Practical statistics simply explained

    CERN Document Server

    Langley, Dr Russell A

    1971-01-01

    For those who need to know statistics but shy away from math, this book teaches how to extract truth and draw valid conclusions from numerical data using logic and the philosophy of statistics rather than complex formulae. Lucid discussion of averages and scatter, investigation design, more. Problems with solutions.

  20. Statistical methods in astronomy

    OpenAIRE

    Long, James P.; de Souza, Rafael S.

    2017-01-01

    We present a review of data types and statistical methods often encountered in astronomy. The aim is to provide an introduction to statistical applications in astronomy for statisticians and computer scientists. We highlight the complex, often hierarchical, nature of many astronomy inference problems and advocate for cross-disciplinary collaborations to address these challenges.

  1. Thiele. Pioneer in statistics

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt

    This book studies the brilliant Danish 19th Century astronomer, T.N. Thiele who made important contributions to statistics, actuarial science, astronomy and mathematics. The most important of these contributions in statistics are translated into English for the first time, and the text includes...

  2. Applied Statistics with SPSS

    Science.gov (United States)

    Huizingh, Eelko K. R. E.

    2007-01-01

    Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…

  3. Inductive Logic and Statistics

    NARCIS (Netherlands)

    Romeijn, J. -W.

    2009-01-01

    This chapter concerns inductive logic in relation to mathematical statistics. I start by introducing a general notion of probabilistic inductive inference. Then I introduce Carnapian inductive logic, and I show that it can be related to Bayesian statistical inference via de Finetti's representatio

  4. Statistical mechanics of pluripotency.

    Science.gov (United States)

    MacArthur, Ben D; Lemischka, Ihor R

    2013-08-01

    Recent reports using single-cell profiling have indicated a remarkably dynamic view of pluripotent stem cell identity. Here, we argue that the pluripotent state is not well defined at the single-cell level but rather is a statistical property of stem cell populations, amenable to analysis using the tools of statistical mechanics and information theory.

  5. Thiele. Pioneer in statistics

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt

    This book studies the brilliant Danish 19th Century astronomer, T.N. Thiele who made important contributions to statistics, actuarial science, astronomy and mathematics. The most important of these contributions in statistics are translated into English for the first time, and the text includes...

  6. Application Statistics 1987.

    Science.gov (United States)

    Council of Ontario Universities, Toronto.

    Summary statistics on application and registration patterns of applicants wishing to pursue full-time study in first-year places in Ontario universities (for the fall of 1987) are given. Data on registrations were received indirectly from the universities as part of their annual submission of USIS/UAR enrollment data to Statistics Canada and MCU.…

  7. Statistics 101 for Radiologists.

    Science.gov (United States)

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced.
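
    The core test-efficacy quantities reviewed here all derive from the 2x2 confusion matrix; a minimal sketch with made-up counts:

        # Made-up counts from comparing a diagnostic test to a gold standard
        TP, FP, FN, TN = 90, 15, 10, 185

        sensitivity = TP / (TP + FN)   # P(test positive | disease present)
        specificity = TN / (TN + FP)   # P(test negative | disease absent)
        accuracy = (TP + TN) / (TP + FP + FN + TN)
        lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
        lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio

        print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
              f"accuracy={accuracy:.2f}, LR+={lr_pos:.1f}, LR-={lr_neg:.2f}")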

  8. Deconstructing Statistical Analysis

    Science.gov (United States)

    Snell, Joel

    2014-01-01

    Using a very complex statistical analysis and research method for the sake of enhancing the prestige of an article, or of making a new product or service appear legitimate, needs to be monitored and questioned for accuracy. 1) The more complicated the statistical analysis and research, the fewer learned readers can understand it. This adds a…

  9. Practical statistics for educators

    CERN Document Server

    Ravid, Ruth

    2014-01-01

    Practical Statistics for Educators, Fifth Edition, is a clear and easy-to-follow text written specifically for education students in introductory statistics courses and in action research courses. It is also a valuable resource and guidebook for educational practitioners who wish to study their own settings.

  10. THE BREED TRACEABILITY OF SHEEP MEAT BY USING MOLECULAR GENETICS METHODS: PRELIMINARY RESULTS

    Directory of Open Access Journals (Sweden)

    A. Bramante

    2011-04-01

    The safety and quality of foods of animal origin are extremely important for consumers. The aim of this work was to evaluate the feasibility of a method to track the breed origin of sheep meat all along the production chain using molecular genetics tools. A total of 800 samples evenly distributed among seven Italian sheep breeds have been typed at 19 STR markers, together with 90 samples from both imported sheep animals and local crossbred animals withdrawn at slaughterhouses. A maximum likelihood assignment test was adopted to evaluate the STR markers' ability to allocate samples to their true breed of origin. Sarda animals were all correctly allocated, as well as more than 98% of samples from the other breeds. Only slightly worse allocation performances were observed for imported and crossbred animals. Preliminary results seem quite promising, though further analyses will be needed in order to better understand the statistical power of such an assignment test before implementation in the sheep meat production chain.
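
    In its simplest form, a maximum-likelihood assignment test multiplies breed-specific allele frequencies across loci and assigns the sample to the breed with the highest likelihood. A toy sketch with invented frequencies (real analyses use genotype frequencies at all 19 STR loci):

        import numpy as np

        # Invented per-locus frequencies of the sampled animal's alleles
        freq = {
            "Sarda":     [0.60, 0.45, 0.70],
            "crossbred": [0.20, 0.30, 0.40],
        }

        def log_likelihood(freqs):
            # Assume independence across loci: sum the log allele frequencies
            return float(np.sum(np.log(freqs)))

        scores = {breed: log_likelihood(f) for breed, f in freq.items()}
        print(scores, "->", max(scores, key=scores.get))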

  11. Mobile biofeedback of heart rate variability in patients with diabetic polyneuropathy: a preliminary study.

    Science.gov (United States)

    Druschky, Katrin; Druschky, Achim

    2015-09-01

    Biofeedback of heart rate variability (HRV) was applied to patients with diabetic polyneuropathy using a new mobile device allowing regularly scheduled self-measurements without the need for visits to a special autonomic laboratory. Prolonged generation of data over an eight-week period facilitated more precise investigation of cardiac autonomic function and assessment of positive and negative trends of HRV parameters over time. Statistical regression analyses revealed significant trends in 11 of 17 patients, while no significant differences were observed when comparing autonomic screening by short-term HRV and respiratory sinus arrhythmia at baseline and after the eight-week training period. Four patients showed positive trends of HRV parameters despite the expected progression of cardiac autonomic dysfunction over time. Patient compliance was above 50% in all but two patients. The results of this preliminary study indicate good practicality of the handheld device and suggest a potential positive effect on cardiac autonomic neuropathy in patients with type 2 diabetes.
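
    The per-patient trend analysis can be sketched as an ordinary least-squares regression of an HRV parameter on measurement day, testing the slope for significance; the data below are synthetic:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        days = np.arange(56)  # eight weeks of daily self-measurements
        # Synthetic HRV parameter (e.g. SDNN in ms) with a small upward trend
        sdnn = 30 + 0.05 * days + rng.normal(0, 2, size=days.size)

        fit = stats.linregress(days, sdnn)
        print(f"slope = {fit.slope:.3f} ms/day, p = {fit.pvalue:.3g}")
        if fit.pvalue < 0.05:
            print("significant", "positive" if fit.slope > 0 else "negative", "trend")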

  12. Preliminary assessments of spatial influences in the Ambos Nogales region of the US-Mexican border

    Energy Technology Data Exchange (ETDEWEB)

    Smith, L.A. [ManTech Environmental Technology Inc., P.O. Box 12313, 27709 Research Triangle Park, NC (United States); Mukerjee, S. [US Environmental Protection Agency, National Exposure Research Laboratory, MD-47, 27711 Research Triangle Park, NC (United States); Monroy, G.J. [Arizona Department of Environmental Quality, 400 West Congress St., Suite 433, 85701 Tucson, AZ (United States); Keene, F.E. [Arizona Department of Environmental Quality, 3033 North Central Ave., 85012 Phoenix, AZ (United States)

    2001-08-10

    Ambient air measurements collected from 1994 to 1995 were used in a preliminary assessment of potential source and spatial influences in the Ambos Nogales border region (Nogales, Arizona, USA and Nogales, Sonora, Mexico). In this assessment, volatile organic compounds (VOC) and particulate matter (PM) species were used from four sites, two on either side of the border. An examination of median levels and principal component analysis indicated the dominance of soil dusts and mobile sources. Pairwise comparisons of sites for VOCs associated with mobile sources revealed statistically significant differences between sites in the central Nogales area vs. the two sites furthest from the border. Particulate lead at Mexican sites was higher and significantly different vs. US sites. Although further analyses are necessary, findings suggest that local and immediate mobile/other anthropogenic and soil dust influences are present throughout Nogales, with particulate lead from leaded motor vehicle exhaust or soldering operations being a possible influence on the Mexican side.

  13. Statistical laws in linguistics

    CERN Document Server

    Altmann, Eduardo G

    2015-01-01

    Zipf's law is just one out of many universal laws proposed to describe statistical regularities in language. Here we review and critically discuss how these laws can be statistically interpreted, fitted, and tested (falsified). The modern availability of large databases of written text allows for tests with unprecedented statistical accuracy and also a characterization of the fluctuations around the typical behavior. We find that fluctuations are usually much larger than expected based on simplifying statistical assumptions (e.g., independence and lack of correlations between observations). These simplifications appear also in usual statistical tests so that the large fluctuations can be erroneously interpreted as a falsification of the law. Instead, here we argue that linguistic laws are only meaningful (falsifiable) if accompanied by a model for which the fluctuations can be computed (e.g., a generative model of the text). The large fluctuations we report show that the constraints imposed by linguistic laws...
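
    As a minimal illustration of fitting such a law: Zipf's law predicts frequency proportional to rank^(-alpha), i.e. a straight line in log-log coordinates. The toy regression below estimates alpha; the review's point is that a proper test must additionally model the fluctuations around this line, which this sketch does not:

        import numpy as np
        from scipy import stats

        # Toy word counts in descending order (real analyses use large corpora)
        counts = np.array([1000, 480, 320, 260, 200, 170, 140, 125, 110, 100])
        ranks = np.arange(1, counts.size + 1)

        # Zipf: counts ~ ranks**(-alpha)  =>  log counts linear in log ranks
        fit = stats.linregress(np.log(ranks), np.log(counts))
        print(f"alpha = {-fit.slope:.2f}, R^2 = {fit.rvalue**2:.3f}")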

  14. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for Multivariate Normal mean vector; Bayesian inference for Multiple Linear Regression Model; and Computati...

  15. Statistical Methods for Astronomy

    CERN Document Server

    Feigelson, Eric D

    2012-01-01

    This review outlines concepts of mathematical statistics, elements of probability theory, hypothesis tests and point estimation for use in the analysis of modern astronomical data. Least squares, maximum likelihood, and Bayesian approaches to statistical inference are treated. Resampling methods, particularly the bootstrap, provide valuable procedures when distribution functions of statistics are not known. Several approaches to model selection and goodness of fit are considered. Applied statistics relevant to astronomical research are briefly discussed: nonparametric methods for use when little is known about the behavior of the astronomical populations or processes; data smoothing with kernel density estimation and nonparametric regression; unsupervised clustering and supervised classification procedures for multivariate problems; survival analysis for astronomical datasets with nondetections; time- and frequency-domain time series analysis for light curves; and spatial statistics to interpret the spati...

  16. Preliminary hazards analysis -- vitrification process

    Energy Technology Data Exchange (ETDEWEB)

    Coordes, D.; Ruggieri, M.; Russell, J.; TenBrook, W.; Yimbo, P. [Science Applications International Corp., Pleasanton, CA (United States)

    1994-06-01

    This paper presents a Preliminary Hazards Analysis (PHA) for mixed waste vitrification by joule heating. The purpose of performing a PHA is to establish an initial hazard categorization for a DOE nuclear facility and to identify those processes and structures which may have an impact on or be important to safety. The PHA is typically performed during, and provides input to, project conceptual design. The PHA is then followed by a Preliminary Safety Analysis Report (PSAR) performed during Title 1 and 2 design. The PSAR then leads to performance of the Final Safety Analysis Report performed during the facility's construction and testing. It should be completed before routine operation of the facility commences. This PHA addresses the first four chapters of the safety analysis process, in accordance with the requirements of DOE Safety Guidelines in SG 830.110. The hazards associated with vitrification processes are evaluated using standard safety analysis methods which include: identification of credible potential hazardous energy sources; identification of preventative features of the facility or system; identification of mitigative features; and analyses of credible hazards. Maximal facility inventories of radioactive and hazardous materials are postulated to evaluate worst case accident consequences. These inventories were based on DOE-STD-1027-92 guidance and the surrogate waste streams defined by Mayberry, et al. Radiological assessments indicate that a facility, depending on the radioactive material inventory, may be an exempt, Category 3, or Category 2 facility. The calculated impacts would result in no significant impact to offsite personnel or the environment. Hazardous materials assessment indicates that a Mixed Waste Vitrification facility will be a Low Hazard facility having minimal impacts to offsite personnel and the environment.

  17. Mass spectrometry based protein identification with accurate statistical significance assignment

    OpenAIRE

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Motivation: Assigning statistical significance accurately has become increasingly important as meta data of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of meta data at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry based proteomics, even though accurate statistics for peptide identification can now be ach...

  18. Births: preliminary data for 2012.

    Science.gov (United States)

    Hamilton, Brady E; Martin, Joyce A; Ventura, Stephanie J

    2013-09-01

    Objectives-This report presents preliminary data for 2012 on births in the United States. U.S. data on births are shown by age, live-birth order, race, and Hispanic origin of mother. Data on marital status, cesarean delivery, preterm births, and low birthweight are also presented. Methods-Data in this report are based on 99.96% of 2012 births. Records for the few states with less than 100% of records received are weighted to independent control counts of all births received in state vital statistics offices in 2012. Comparisons are made with final 2011 data. Results-The preliminary number of births for the United States in 2012 was 3,952,937, essentially unchanged (not statistically significant) from 2011; the general fertility rate was 63.0 births per 1,000 women aged 15-44, down only slightly from 2011, after declining nearly 3% a year from 2007 through 2010. The number of births and fertility rate either declined or were unchanged for most race and Hispanic origin groups from 2011 to 2012; however, both the number of births and the fertility rate for Asian or Pacific Islander women rose in 2012 (7% and 4%, respectively). The birth rate for teenagers aged 15-19 was down 6% in 2012 (29.4 births per 1,000 teenagers aged 15-19), yet another historic low for the United States, with rates declining for younger and older teenagers and for nearly all race and Hispanic origin groups. The birth rate for women in their early 20s also declined in 2012, to a new record low of 83.1 births per 1,000 women. Birth rates for women in their 30s rose in 2012, as did the birth rate for women in their early 40s. The birth rate for women in their late 40s was unchanged. The nonmarital birth rate declined in 2012 (to 45.3 births per 1,000 unmarried women aged 15-44), whereas the number of births to unmarried women rose 1% and the percentage of births to unmarried women was unchanged (at 40.7%). The cesarean delivery rate for the United States was unchanged in 2012 at 32.8%. The preterm

  19. Preliminary Test for Nonlinear Input Output Relations in SISO Systems

    DEFF Research Database (Denmark)

    Knudsen, Torben

    2000-01-01

    This paper discusses and develops preliminary statistical tests for detecting nonlinearities in the deterministic part of SISO systems with noise. The most referenced method is unreliable for common noise processes, e.g. colored noise. Therefore two new methods based on superposition and sinus input...

  20. Metabolic Control with Insulin Pump Therapy: Preliminary Experience

    Directory of Open Access Journals (Sweden)

    Shang-Ren Hsu

    2008-07-01

    Conclusion: Our preliminary experience demonstrated the effectiveness of insulin pump therapy for both type 1 and type 2 diabetic patients. The reduction in their HbA1C values was both statistically and clinically significant. This treatment should be considered for patients poorly controlled by subcutaneous insulin injection therapy.

  1. Statistical test of VEP waveform equality.

    Science.gov (United States)

    Young, Rockefeller S L; Kimura, Eiji

    2010-04-01

    The aim of the study was to describe a theory and method for inferring the statistical significance of a visually evoked cortical potential (VEP) recording. The statistical evaluation is predicated on the pre-stimulus VEP as estimates of the cortical potentials expected when the stimulus does not produce an effect, a mathematical transform to convert the voltages into standard deviations from zero, and a time-series approach for estimating the variability of between-session VEPs under the null hypothesis. Empirical and Monte Carlo analyses address issues concerned with testability, statistical validity, clinical feasibility, as well as limitations of the proposed method. We conclude that visual electrophysiological recordings can be evaluated as a statistical study of n = 1 subject using time-series analysis when confounding effects are adequately controlled. The statistical test can be performed on either a single VEP or the difference between pairs of VEPs.
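
    The transform described here, converting voltages into standard deviations under the null hypothesis, can be pictured as z-scoring post-stimulus samples against the pre-stimulus baseline. The sketch below illustrates only that step on synthetic numbers; it is not the authors' procedure, which additionally uses time-series methods to handle serial correlation between samples:

        # Illustrative z-scoring of a post-stimulus VEP against the
        # pre-stimulus baseline; not the authors' exact time-series test.
        import numpy as np

        rng = np.random.default_rng(1)
        pre = rng.normal(0.0, 2.0, size=500)           # hypothetical baseline samples (uV)
        post = rng.normal(0.0, 2.0, size=500) + 1.5    # hypothetical post-stimulus samples

        mu0, sd0 = pre.mean(), pre.std(ddof=1)  # null-hypothesis location and scale
        z = (post - mu0) / sd0                  # voltages as standard deviations from zero

        # Naive summary, valid only for independent samples; the paper uses
        # time-series methods precisely because VEP samples are correlated.
        print("max |z|:", np.abs(z).max())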

  2. AMS analyses at ANSTO

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, E.M. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia). Physics Division

    1998-03-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with {sup 14}C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for {sup 14}C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  3. Statistics a complete introduction

    CERN Document Server

    Graham, Alan

    2013-01-01

    Statistics: A Complete Introduction is the most comprehensive yet easy-to-use introduction to using Statistics. Written by a leading expert, this book will help you if you are studying for an important exam or essay, or if you simply want to improve your knowledge. The book covers all the key areas of Statistics including graphs, data interpretation, spreadsheets, regression, correlation and probability. Everything you will need is here in this one book. Each chapter includes not only an explanation of the knowledge and skills you need, but also worked examples and test questions.

  4. Statistics of football dynamics

    CERN Document Server

    Mendes, R S; Anteneodo, C

    2007-01-01

    We investigate the dynamics of football matches. Our goal is to characterize statistically the temporal sequence of ball movements in this collective sport game, searching for traits of complex behavior. Data were collected over a variety of matches in South American, European and World championships throughout 2005 and 2006. We show that the statistics of ball touches presents power-law tails and can be described by $q$-gamma distributions. To explain such behavior we propose a model that provides information on the characteristics of football dynamics. Furthermore, we discuss the statistics of duration of out-of-play intervals, not directly related to the previous scenario.

  5. Practical business statistics

    CERN Document Server

    Siegel, Andrew

    2011-01-01

    Practical Business Statistics, Sixth Edition, is a conceptual, realistic, and matter-of-fact approach to managerial statistics that carefully maintains-but does not overemphasize-mathematical correctness. The book offers a deep understanding of how to learn from data and how to deal with uncertainty while promoting the use of practical computer applications. This teaches present and future managers how to use and understand statistics without an overdose of technical detail, enabling them to better understand the concepts at hand and to interpret results. The text uses excellent examples with

  6. Multivariate Statistical Process Control

    DEFF Research Database (Denmark)

    Kulahci, Murat

    2013-01-01

    As sensor and computer technology continues to improve, it has become a normal occurrence that we are confronted with high-dimensional data sets. As in many areas of industrial statistics, this brings forth various challenges in statistical process control (SPC) and monitoring, for which the aim...... is to identify the “out-of-control” state of a process using control charts in order to reduce the excessive variation caused by so-called assignable causes. In practice, the most common method of monitoring multivariate data is through a statistic akin to Hotelling’s T2. For high dimensional data with excessive...

  7. The nature of statistics

    CERN Document Server

    Wallis, W Allen

    2014-01-01

    Focusing on everyday applications as well as those of scientific research, this classic of modern statistical methods requires little to no mathematical background. Readers develop basic skills for evaluating and using statistical data. Lively, relevant examples include applications to business, government, social and physical sciences, genetics, medicine, and public health. "W. Allen Wallis and Harry V. Roberts have made statistics fascinating." - The New York Times "The authors have set out with considerable success, to write a text which would be of interest and value to the student who,

  8. Statistical deception at work

    CERN Document Server

    Mauro, John

    2013-01-01

    Written to reveal statistical deceptions often thrust upon unsuspecting journalists, this book views the use of numbers from a public perspective. Illustrating how the statistical naivete of journalists often nourishes quantitative misinformation, the author's intent is to make journalists more critical appraisers of numerical data so that in reporting them they do not deceive the public. The book frequently uses actual reported examples of misused statistical data reported by mass media and describes how journalists can avoid being taken in by them. Because reports of survey findings seldom g

  9. Statistical Engine Knock Control

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    A new statistical concept of the knock control of a spark ignition automotive engine is proposed. The control aim is associated with the statistical hypothesis test which compares the threshold value to the average value of the maximal amplitude of the knock sensor signal at a given frequency. ... Control algorithm which is used for minimization of the regulation error realizes a simple count-up-count-down logic. A new adaptation algorithm for the knock detection threshold is also developed. Confidence interval method is used as the basis for adaptation. A simple statistical model...
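
    The count-up-count-down logic can be sketched as a threshold adaptor driven by the detection outcome of each engine cycle. The following toy version, with hypothetical step sizes and amplitudes, illustrates the idea rather than the paper's exact algorithm (which also uses a confidence-interval method for adaptation):

        # Toy count-up-count-down adaptation of a knock detection threshold;
        # an illustration of the idea, not the paper's algorithm.
        def update_threshold(threshold, amplitude, step_up=0.05, step_down=0.01):
            # Count up (raise threshold) after a detection, count down
            # (lower it) otherwise; the unequal steps settle the long-run
            # detection rate near step_down / (step_up + step_down).
            if amplitude > threshold:
                return threshold + step_up
            return threshold - step_down

        threshold = 1.0
        amplitudes = [0.8, 1.3, 0.9, 1.1, 0.7, 1.4, 0.6]  # hypothetical per-cycle maxima
        for a in amplitudes:
            threshold = update_threshold(threshold, a)
        print("adapted threshold:", round(threshold, 3))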

  10. Statistical Group Comparison

    CERN Document Server

    Liao, Tim Futing

    2011-01-01

    An incomparably useful examination of statistical methods for comparison. The nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not inde

  11. Approximating Stationary Statistical Properties

    Institute of Scientific and Technical Information of China (English)

    Xiaoming WANG

    2009-01-01

    It is well-known that physical laws for large chaotic dynamical systems are revealed statistically. Many times these statistical properties of the system must be approximated numerically. The main contribution of this manuscript is to provide simple and natural criteria on numerical methods (temporal and spatial discretization) that are able to capture the stationary statistical properties of the underlying dissipative chaotic dynamical systems asymptotically. The result on temporal approximation is a recent finding of the author, and the result on spatial approximation is a new one. Applications to the infinite Prandtl number model for convection and the barotropic quasi-geostrophic model are also discussed.

  12. Evolutionary Statistical Procedures

    CERN Document Server

    Baragona, Roberto; Poli, Irene

    2011-01-01

    This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions a

  13. AP statistics crash course

    CERN Document Server

    D'Alessio, Michael

    2012-01-01

    AP Statistics Crash Course - Gets You a Higher Advanced Placement Score in Less Time Crash Course is perfect for the time-crunched student, the last-minute studier, or anyone who wants a refresher on the subject. AP Statistics Crash Course gives you: Targeted, Focused Review - Study Only What You Need to Know Crash Course is based on an in-depth analysis of the AP Statistics course description outline and actual Advanced Placement test questions. It covers only the information tested on the exam, so you can make the most of your valuable study time. Our easy-to-read format covers: exploring da

  14. Basics of statistical physics

    CERN Document Server

    Müller-Kirsten, Harald J W

    2013-01-01

    Statistics links microscopic and macroscopic phenomena, and requires for this reason a large number of microscopic elements like atoms. The results are values of maximum probability or of averaging. This introduction to statistical physics concentrates on the basic principles, and attempts to explain these in simple terms supplemented by numerous examples. These basic principles include the difference between classical and quantum statistics, a priori probabilities as related to degeneracies, the vital aspect of indistinguishability as compared with distinguishability in classical physics, the differences between conserved and non-conserved elements, the different ways of counting arrangements in the three statistics (Maxwell-Boltzmann, Fermi-Dirac, Bose-Einstein), the difference between maximization of the number of arrangements of elements, and averaging in the Darwin-Fowler method. Significant applications to solids, radiation and electrons in metals are treated in separate chapters, as well as Bose-Eins...

  15. Elements of statistical thermodynamics

    CERN Document Server

    Nash, Leonard K

    2006-01-01

    Encompassing essentially all aspects of statistical mechanics that appear in undergraduate texts, this concise, elementary treatment shows how an atomic-molecular perspective yields new insights into macroscopic thermodynamics. 1974 edition.

  16. LBVs and Statistical Inference

    CERN Document Server

    Davidson, Kris; Weis, Kerstin

    2016-01-01

    Smith and Tombleson (2015) asserted that statistical tests disprove the standard view of LBVs, and proposed a far more complex scenario to replace it. But Humphreys et al. (2016) showed that Smith and Tombleson's Magellanic "LBV" sample was a mixture of physically different classes of stars, and genuine LBVs are in fact statistically consistent with the standard view. Smith (2016) recently objected at great length to this result. Here we note that he misrepresented some of the arguments, altered the test criteria, ignored some long-recognized observational facts, and employed inadequate statistical procedures. This case illustrates the dangers of uncareful statistical sampling, as well as the need to be wary of unstated assumptions.

  17. Ehrlichiosis: Statistics and Epidemiology

    Science.gov (United States)

  18. Quantum statistics on graphs

    CERN Document Server

    Harrison, JM; Robbins, JM; 10.1098/rspa.2010.0254

    2011-01-01

    Quantum graphs are commonly used as models of complex quantum systems, for example molecules, networks of wires, and states of condensed matter. We consider quantum statistics for indistinguishable spinless particles on a graph, concentrating on the simplest case of abelian statistics for two particles. In spite of the fact that graphs are locally one-dimensional, anyon statistics emerge in a generalized form. A given graph may support a family of independent anyon phases associated with topologically inequivalent exchange processes. In addition, for sufficiently complex graphs, there appear new discrete-valued phases. Our analysis is simplified by considering combinatorial rather than metric graphs -- equivalently, a many-particle tight-binding model. The results demonstrate that graphs provide an arena in which to study new manifestations of quantum statistics. Possible applications include topological quantum computing, topological insulators, the fractional quantum Hall effect, superconductivity and molec...

  19. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...... that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical......, identify interest rate models, value bonds, estimate parameters, and much more. This textbook will help students understand and manage empirical research in financial engineering. It includes examples of how the statistical tools can be used to improve value-at-risk calculations and other issues...

  20. CMS Statistics Reference Booklet

    Data.gov (United States)

    U.S. Department of Health & Human Services — The annual CMS Statistics reference booklet provides a quick reference for summary information about health expenditures and the Medicare and Medicaid health...

  1. Statistical theory of heat

    CERN Document Server

    Scheck, Florian

    2016-01-01

    Scheck’s textbook starts with a concise introduction to classical thermodynamics, including geometrical aspects. Then a short introduction to probabilities and statistics lays the basis for the statistical interpretation of thermodynamics. Phase transitions, discrete models and the stability of matter are explained in great detail. Thermodynamics has a special role in theoretical physics. Due to the general approach of thermodynamics the field has a bridging function between several areas like the theory of condensed matter, elementary particle physics, astrophysics and cosmology. The classical thermodynamics describes predominantly averaged properties of matter, reaching from few particle systems and state of matter to stellar objects. Statistical Thermodynamics covers the same fields, but explores them in greater depth and unifies classical statistical mechanics with quantum theory of multiple particle systems. The content is presented as two tracks: the fast track for master students, providing the essen...

  2. Elements of Statistics

    Science.gov (United States)

    Grégoire, G.

    2016-05-01

    This chapter is devoted to two objectives. The first one is to answer the request expressed by attendees of the first Astrostatistics School (Annecy, October 2013) to be provided with an elementary vademecum of statistics that would facilitate understanding of the given courses. In this spirit we recall very basic notions, that is definitions and properties that we think sufficient to benefit from courses given in the Astrostatistical School. Thus we give briefly definitions and elementary properties on random variables and vectors, distributions, estimation and tests, maximum likelihood methodology. We intend to present basic ideas in a hopefully comprehensible way. We do not try to give a rigorous presentation, and due to the place devoted to this chapter, can cover only a rather limited field of statistics. The second aim is to focus on some statistical tools that are useful in classification: basic introduction to Bayesian statistics, maximum likelihood methodology, Gaussian vectors and Gaussian mixture models.

  3. Childhood Cancer Statistics

    Science.gov (United States)

  4. Boating Accident Statistics

    Data.gov (United States)

    Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...

  5. Playing at Statistical Mechanics

    Science.gov (United States)

    Clark, Paul M.; And Others

    1974-01-01

    Discussed are the applications of counting techniques of a sorting game to distributions and concepts in statistical mechanics. Included are the following distributions: Fermi-Dirac, Bose-Einstein, and most probable. (RH)

  6. Transport statistics 1996

    CSIR Research Space (South Africa)

    Shepperson, L

    1997-12-01

    Full Text Available This publication contains transport and related statistics on roads, vehicles, infrastructure, passengers, freight, rail, air, maritime and road traffic, and international comparisons. The information compiled in this publication has been gathered...

  7. Boosted Statistical Mechanics

    CERN Document Server

    Testa, Massimo

    2015-01-01

    Based on the fundamental principles of Relativistic Quantum Mechanics, we give a rigorous, but completely elementary, proof of the relation between fundamental observables of a statistical system when measured relatively to two inertial reference frames, connected by a Lorentz transformation.

  8. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical......, identify interest rate models, value bonds, estimate parameters, and much more. This textbook will help students understand and manage empirical research in financial engineering. It includes examples of how the statistical tools can be used to improve value-at-risk calculations and other issues......

  9. Bureau of Labor Statistics

    Science.gov (United States)

  10. Statistics For Neuroscientists

    Directory of Open Access Journals (Sweden)

    Subbakrishna D.K

    2000-01-01

    Full Text Available The role statistical methods play in medicine in the interpretation of empirical data is well recognized by researchers. With modern computing facilities and software packages there is little need for familiarity with the computational details of statistical calculations. However, for the researcher to understand whether these calculations are valid and appropriate it is necessary that the user is aware of the rudiments of the statistical methodology. Also, it needs to be emphasized that no amount of advanced analysis can be a substitute for a properly planned and executed study. An attempt is made in this communication to discuss some of the theoretical issues that are important for the valid analysis and interpretation of the precious data that are gathered. The article summarises some of the basic statistical concepts, followed by illustrations from live data generated by various research projects in the Department of Neurology of this Institute.

  11. Information theory and statistics

    CERN Document Server

    Kullback, Solomon

    1997-01-01

    Highly useful text studies logarithmic measures of information and their application to testing statistical hypotheses. Includes numerous worked examples and problems. References. Glossary. Appendix. 1968 2nd, revised edition.

  12. Statistics of the sagas

    Science.gov (United States)

    Richfield, Jon; bookfeller

    2016-07-01

    In reply to Ralph Kenna and Pádraig Mac Carron's feature article “Maths meets myths” in which they describe how they are using techniques from statistical physics to characterize the societies depicted in ancient Icelandic sagas.

  13. Statistics of extremes

    CERN Document Server

    Gumbel, E J

    2012-01-01

    This classic text covers order statistics and their exceedances; exact distribution of extremes; the 1st asymptotic distribution; uses of the 1st, 2nd, and 3rd asymptotes; more. 1958 edition. Includes 44 tables and 97 graphs.

  14. Medicaid Drug Claims Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicaid Drug Claims Statistics CD is a useful tool that conveniently breaks up Medicaid claim counts and separates them by quarter and includes an annual count.

  15. EDI Performance Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — This section contains statistical information and reports related to the percentage of electronic transactions being sent to Medicare contractors in the formats...

  16. Plague Maps and Statistics

    Science.gov (United States)

    Plague in the United States: Plague was first introduced ... per year in the United States: 1900-2012. Plague Worldwide: Plague epidemics have occurred in Africa, Asia, ...

  17. The SAPS crime statistics

    African Journals Online (AJOL)

    Every year, the South African Minister of Police releases the crime statistics in ... prove an invaluable source of information for those who seek to better understand and respond to crime ... of Social Development in the JCPS may suggest a.

  18. CDC WONDER: Cancer Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The United States Cancer Statistics (USCS) online databases in WONDER provide cancer incidence and mortality data for the United States for the years since 1999, by...

  19. Probability and Statistical Inference

    OpenAIRE

    Prosper, Harrison B.

    2006-01-01

    These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

  20. On Preliminary Breakdown

    Science.gov (United States)

    Beasley, W. H.; Petersen, D.

    2013-12-01

    The preliminary breakdown phase of a negative cloud-to-ground lightning flash was observed in detail. Observations were made with a Photron SA1.1 high-speed video camera operating at 9,000 frames per second, fast optical sensors, a flat-plate electric field antenna covering the SLF to MF band, and VHF and UHF radio receivers with bandwidths of 20 MHz. Bright stepwise extensions of a negative leader were observed at an altitude of 8 km during the first few milliseconds of the flash, and were coincident with bipolar electric field pulses called 'characteristic pulses'. The 2-D step lengths of the preliminary processes were in excess of 100 meters, with some 2-D step lengths in excess of 200 meters. Smaller and shorter unipolar electric field pulses were superposed onto the bipolar electric field pulses, and were coincident with VHF and UHF radio pulses. After a few milliseconds, the emerging negative stepped leader system showed a marked decrease in luminosity, step length, and propagation velocity. Details of these events will be discussed, including the possibility that the preliminary breakdown phase consists not of a single developing lightning leader system, but of multiple smaller lightning leader systems that eventually join together into a single system.

  1. Disability: concepts and statistical information

    Directory of Open Access Journals (Sweden)

    Giordana Baldassarre

    2008-06-01

    Full Text Available

    Background: Measuring and defining disability is difficult due to its objective and subjective characteristics. In Italy, three different perspectives have been developed during the last 40 years. These various perspectives have had an effect, not only on how to measure disability, but also on policies to improve the social integration of people with disabilities.

    Methods: This paper examines the various conceptual models behind the definition of disability and the differences in the estimated number of persons with disabilities. In addition, it analyses, in accordance with the International Classification of Functioning, Disability and Health, the European and international initiatives undertaken to harmonize the definitions of disability.

    Discussion: There are various bodies and central government agencies that either have management data or carry out systematic statistical surveys and disability surveys. Statistically speaking, the worst aspect of this scenario is that it creates confusion and uncertainty among the end users of this data, namely the policy makers. At the international level, the statistical data on disability are scarcely comparable among countries, despite huge efforts on behalf of international organisations to harmonize classifications and definitions of disability.

    Conclusions: Statistical and administrative surveys provide information flows using a different definition and label based on a conceptual model that reflects the time period in which they were implemented. The use of different prescriptive definitions of disability produces different counts of persons with disabilities in Italy. For this reason it is important to interpret the data correctly and choose the appropriate cross section that best represents the population on which to focus attention.

  2. German cancer statistics 2004

    OpenAIRE

    2010-01-01

    Background: For years the Robert Koch Institute (RKI) has been annually pooling and reviewing the data from the German population-based cancer registries and evaluating them together with the cause-of-death statistics provided by the statistical offices. Traditionally, the RKI periodically estimates the number of new cancer cases in Germany on the basis of the available data from the regional cancer registries in which registration is complete; this figure, in turn, forms the basis fo...

  3. Dominican Republic; Statistical Appendix

    OpenAIRE

    International Monetary Fund

    2003-01-01

    In this paper, statistical data for the Dominican Republic were presented as real, public, financial, and external sectors. In real sector, GDP by sector at constant prices, savings, investment, consumer price index, petroleum statistics, and so on, were outlined. The public sector summarizes operations of the consolidated public sector, central government, and revenues. A summary of the banking system, claims, interest rates, financial indicators, and reserve requirements were described in t...

  4. Introductory statistical inference

    CERN Document Server

    Mukhopadhyay, Nitis

    2014-01-01

    This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques.Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist

  5. Business statistics I essentials

    CERN Document Server

    Clark, Louise

    2014-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Business Statistics I includes descriptive statistics, introduction to probability, probability distributions, sampling and sampling distributions, interval estimation, and hypothesis t

  6. Dominican Republic; Statistical Appendix

    OpenAIRE

    International Monetary Fund

    2003-01-01

    In this paper, statistical data for the Dominican Republic were presented as real, public, financial, and external sectors. In real sector, GDP by sector at constant prices, savings, investment, consumer price index, petroleum statistics, and so on, were outlined. The public sector summarizes operations of the consolidated public sector, central government, and revenues. A summary of the banking system, claims, interest rates, financial indicators, and reserve requirements were described in t...

  7. Addressing mathematics & statistics anxiety

    OpenAIRE

    Kotecha, Meena

    2015-01-01

    This paper should be of interest to mathematics and statistics educators ranging from pre-university to university education sectors. It will discuss some features of the author’s teaching model developed over her longitudinal study conducted to understand and address mathematics and statistics anxiety, which is one of the main barriers to engaging with these subjects especially in non-specialist undergraduates. It will demonstrate how a range of formative assessments are used to kindle, as w...

  8. Breakthroughs in statistics

    CERN Document Server

    Johnson, Norman

    This is the third volume of a collection of seminal papers in the statistical sciences written during the past 110 years. These papers have each had an outstanding influence on the development of statistical theory and practice over the last century. Each paper is preceded by an introduction written by an authority in the field providing background information and assessing its influence. Volume III concentrates on articles from the 1980s while including some earlier articles not included in Volumes I and II. Samuel Kotz is Professor of Statistics in the College of Business and Management at the University of Maryland. Norman L. Johnson is Professor Emeritus of Statistics at the University of North Carolina. Also available: Breakthroughs in Statistics Volume I: Foundations and Basic Theory Samuel Kotz and Norman L. Johnson, Editors 1993. 631 pp. Softcover. ISBN 0-387-94037-5 Breakthroughs in Statistics Volume II: Methodology and Distribution Samuel Kotz and Norman L. Johnson, Edi...

  9. UN Data: Environment Statistics: Waste

    Data.gov (United States)

    World Wide Human Geography Data Working Group — The Environment Statistics Database contains selected water and waste statistics by country. Statistics on water and waste are based on official statistics supplied...

  10. UN Data- Environmental Statistics: Waste

    Data.gov (United States)

    World Wide Human Geography Data Working Group — The Environment Statistics Database contains selected water and waste statistics by country. Statistics on water and waste are based on official statistics supplied...

  11. Analyse de "La banlieue"

    Directory of Open Access Journals (Sweden)

    Nelly Morais

    2006-11-01

    Full Text Available 1. Preamble - Conditions under which this analysis was carried out. A group of first-year master's students in FLE (French as a foreign language) at Université Paris 3 (that is, students in language didactics preparing to teach FLE) examined the product during a module on ICT (Information and Communication Technologies) and language didactics. A discussion then developed on the forum of a distance-learning platform, prompted by a few questions posed by the instruc...

  12. EEG analyses with SOBI.

    Energy Technology Data Exchange (ETDEWEB)

    Glickman, Matthew R.; Tang, Akaysha (University of New Mexico, Albuquerque, NM)

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.
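
    SOBI separates sources by (approximately) jointly diagonalizing several time-lagged covariance matrices of the whitened data. The sketch below implements the single-lag simplification of that idea (essentially the AMUSE algorithm) on a hypothetical two-channel mixture, assuming NumPy; it conveys the second-order principle but is not the full SOBI used in the project:

        # Single-lag second-order blind separation (AMUSE), a simplified
        # relative of SOBI, which jointly diagonalizes many lags.
        import numpy as np

        def amuse(X, lag=1):
            """X: (channels, samples) array. Returns unmixing matrix W
            and the estimated sources W @ X."""
            X = X - X.mean(axis=1, keepdims=True)
            # Whiten using the zero-lag covariance.
            d, E = np.linalg.eigh(np.cov(X))
            Q = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
            Z = Q @ X
            # Symmetrized lagged covariance of the whitened data.
            C1 = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
            C1 = (C1 + C1.T) / 2.0
            _, U = np.linalg.eigh(C1)
            W = U.T @ Q
            return W, W @ X

        # Hypothetical mixture of two sources with distinct autocorrelation.
        rng = np.random.default_rng(2)
        t = np.arange(2000)
        S = np.vstack([np.sin(0.05 * t), rng.normal(size=t.size)])
        A = np.array([[1.0, 0.6], [0.4, 1.0]])   # hypothetical mixing matrix
        W, S_hat = amuse(A @ S)
        print("recovered source shapes:", S_hat.shape)

    Full SOBI improves on this single-lag variant by jointly diagonalizing covariance matrices at many lags, which is more robust when sources have similar autocorrelation at any one lag.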

  13. Statistical Treatment of Looking-Time Data

    Science.gov (United States)

    Csibra, Gergely; Hernik, Mikolaj; Mascaro, Olivier; Tatone, Denis; Lengyel, Máté

    2016-01-01

    Looking times (LTs) are frequently measured in empirical research on infant cognition. We analyzed the statistical distribution of LTs across participants to develop recommendations for their treatment in infancy research. Our analyses focused on a common within-subject experimental design, in which longer looking to novel or unexpected stimuli is…

  14. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
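
    The ensemble matrix T described above is, in essence, an average of the deterministic state-transition matrices of every network in the class. A toy sketch for a hypothetical class of two 2-node Boolean networks (not the paper's Strong Inhibition class) makes the construction concrete:

        # Toy network-class ensemble matrix T: average the deterministic
        # state-transition matrices of every Boolean network in a tiny,
        # hypothetical class of 2-node update rules.
        import numpy as np
        from itertools import product

        N_NODES = 2
        N_STATES = 2 ** N_NODES

        def transition_matrix(rule):
            """rule maps each state (tuple of bits) to the next state."""
            T = np.zeros((N_STATES, N_STATES))
            for i, state in enumerate(product([0, 1], repeat=N_NODES)):
                j = int("".join(map(str, rule(state))), 2)
                T[i, j] = 1.0
            return T

        rules = [
            lambda s: (s[1], s[0]),           # swap the two node states
            lambda s: (1 - s[1], 1 - s[0]),   # swap and negate
        ]

        T_ensemble = sum(transition_matrix(r) for r in rules) / len(rules)
        print(T_ensemble)   # row-stochastic superposition over the class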

  15. Proteins analysed as virtual knots

    Science.gov (United States)

    Alexander, Keith; Taylor, Alexander J.; Dennis, Mark R.

    2017-02-01

    Long, flexible physical filaments are naturally tangled and knotted, from macroscopic string down to long-chain molecules. The existence of knotting in a filament naturally affects its configuration and properties, and may be very stable or disappear rapidly under manipulation and interaction. Knotting has been previously identified in protein backbone chains, for which these mechanical constraints are of fundamental importance to their molecular functionality, despite their being open curves in which the knots are not mathematically well defined; knotting can only be identified by closing the termini of the chain somehow. We introduce a new method for resolving knotting in open curves using virtual knots, which are a wider class of topological objects that do not require a classical closure and so naturally capture the topological ambiguity inherent in open curves. We describe the results of analysing proteins in the Protein Data Bank by this new scheme, recovering and extending previous knotting results, and identifying topological interest in some new cases. The statistics of virtual knots in protein chains are compared with those of open random walks and Hamiltonian subchains on cubic lattices, identifying a regime of open curves in which the virtual knotting description is likely to be important.

  16. Proteins analysed as virtual knots

    Science.gov (United States)

    Alexander, Keith; Taylor, Alexander J.; Dennis, Mark R.

    2017-01-01

    Long, flexible physical filaments are naturally tangled and knotted, from macroscopic string down to long-chain molecules. The existence of knotting in a filament naturally affects its configuration and properties, and may be very stable or disappear rapidly under manipulation and interaction. Knotting has been previously identified in protein backbone chains, for which these mechanical constraints are of fundamental importance to their molecular functionality, despite their being open curves in which the knots are not mathematically well defined; knotting can only be identified by closing the termini of the chain somehow. We introduce a new method for resolving knotting in open curves using virtual knots, which are a wider class of topological objects that do not require a classical closure and so naturally capture the topological ambiguity inherent in open curves. We describe the results of analysing proteins in the Protein Data Bank by this new scheme, recovering and extending previous knotting results, and identifying topological interest in some new cases. The statistics of virtual knots in protein chains are compared with those of open random walks and Hamiltonian subchains on cubic lattices, identifying a regime of open curves in which the virtual knotting description is likely to be important. PMID:28205562

  17. Guidelines for Meta-Analyses of Counseling Psychology Research

    Science.gov (United States)

    Quintana, Stephen M.; Minami, Takuya

    2006-01-01

    This article conceptually describes the steps in conducting quantitative meta-analyses of counseling psychology research with minimal reliance on statistical formulas. The authors identify sources that describe necessary statistical formula for various meta-analytic calculations and describe recent developments in meta-analytic techniques. The…

  18. Modern applied statistics with s-plus

    CERN Document Server

    Venables, W N

    1997-01-01

    S-PLUS is a powerful environment for the statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-PLUS to perform statistical analyses and provides both an introduction to the use of S-PLUS and a course in modern statistical methods. S-PLUS is available for both Windows and UNIX workstations, and both versions are covered in depth. The aim of the book is to show how to use S-PLUS as a powerful and graphical system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-PLUS, and both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets. Many of the methods discussed are state-of-the-art approaches to topics such as linear and non-linear regression models, robust a...

  19. Spatial and Alignment Analyses for a field of Small Volcanic Vents South of Pavonis Mons Mars

    Science.gov (United States)

    Bleacher, J. E.; Glaze, L. S.; Greeley, R.; Hauber, E.; Baloga, S. M.; Sakimoto, S. E. H.; Williams, D. A.; Glotch, T. D.

    2008-01-01

    The Tharsis province of Mars displays a variety of small volcanic vent (10s km in diameter) morphologies. These features were identified in Mariner and Viking images [1-4], and Mars Orbiter Laser Altimeter (MOLA) data show them to be more abundant than originally observed [5,6]. Recent studies are classifying their diverse morphologies [7-9]. Building on this work, we are mapping the location of small volcanic vents (small-vents) in the Tharsis province using MOLA, Thermal Emission Imaging System, and High Resolution Stereo Camera data [10]. Here we report on a preliminary study of the spatial and alignment relationships between small-vents south of Pavonis Mons, as determined by nearest neighbor and two-point azimuth statistical analyses. Terrestrial monogenetic volcanic fields display four fundamental characteristics: 1) recurrence rates of eruptions, 2) vent abundance, 3) vent distribution, and 4) tectonic relationships [11]. While understanding recurrence rates typically requires field measurements, insight into vent abundance, distribution, and tectonic relationships can be established by mapping of remotely sensed data, and subsequent application of spatial statistical studies [11,12], the goal of which is to link the distribution of vents to causal processes.
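
    Both statistics named here can be computed directly from vent coordinates: nearest-neighbor analysis compares the observed mean nearest-neighbor distance with the value expected under complete spatial randomness (the Clark-Evans ratio), and two-point azimuth analysis histograms the bearings of all point pairs to reveal preferred alignments. A minimal sketch on hypothetical planar coordinates, assuming NumPy and SciPy and ignoring edge corrections:

        # Clark-Evans nearest-neighbor ratio R and two-point azimuths
        # for a hypothetical vent field; edge effects are ignored.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(3)
        pts = rng.uniform(0, 100, size=(50, 2))   # hypothetical vent coordinates (km)
        area = 100.0 * 100.0                      # study-region area (km^2)

        # Observed mean NN distance vs. the CSR expectation 1/(2*sqrt(density)).
        d_nn, _ = cKDTree(pts).query(pts, k=2)    # k=2: nearest neighbor is column 1
        r_obs = d_nn[:, 1].mean()
        r_exp = 0.5 / np.sqrt(len(pts) / area)
        print("Clark-Evans R:", r_obs / r_exp)    # ~1 random, <1 clustered, >1 dispersed

        # Bearing of every point pair, folded into 0-180 degrees.
        i, j = np.triu_indices(len(pts), k=1)
        dx, dy = pts[j, 0] - pts[i, 0], pts[j, 1] - pts[i, 1]
        azimuth = np.degrees(np.arctan2(dx, dy)) % 180.0
        hist, _ = np.histogram(azimuth, bins=18, range=(0.0, 180.0))
        print("azimuth histogram (10-degree bins):", hist)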

  20. Raman spectroscopy and SERS analysis of ovarian tumour derived exosomes (TEXs): a preliminary study

    Science.gov (United States)

    Kerr, Laura T.; Gubbins, Luke; Weiner Gorzel, Karolina; Sharma, Shiva; Kell, Malcolm; McCann, Amanda; Hennelly, Bryan M.

    2014-05-01

    Here we report a preliminary study based on the application of Raman spectroscopy and surface enhanced Raman spectroscopy (SERS) to investigate the compositional differences between exosomes derived from ovarian carcinoma cells (cell line A2780) grown in normoxia (normal O2 conditions) and hypoxia (1% O2 conditions). Exosomes are integral to cell signalling, and are of interest in the study of how cells communicate within their environment. We are particularly interested in identifying whether hypoxia-induced senescent cells can communicate via exosomes with neighbouring tumour cells, thereby causing them to become senescent and therefore radio- and chemo-resistant. With this goal in mind, we performed a preliminary study on the application of Raman spectroscopy and SERS to analyse the biomolecular fingerprint of both groups of exosomes and to investigate whether there exists a different biomolecular composition associated with exosomes derived from hypoxic cells in comparison to those from normoxic cells. We also applied multivariate statistical techniques for the classification of both groups of exosomes.
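
    A common way to realize the multivariate classification step mentioned in closing is to compress the spectra with principal component analysis and classify the component scores with a linear discriminant. The hedged sketch below runs that generic pipeline on synthetic spectra standing in for the exosome data, assuming scikit-learn; it illustrates the approach, not the authors' specific analysis:

        # Generic PCA + LDA classification of spectra; synthetic data
        # stand in for the two groups of exosome Raman spectra.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        n_per_class, n_wavenumbers = 30, 600
        base = rng.normal(0.0, 1.0, size=n_wavenumbers)
        shift = np.zeros(n_wavenumbers)
        shift[200:220] = 0.8   # hypothetical band that differs between groups

        X = np.vstack([
            base + rng.normal(0.0, 0.5, size=(n_per_class, n_wavenumbers)),          # "normoxia"
            base + shift + rng.normal(0.0, 0.5, size=(n_per_class, n_wavenumbers)),  # "hypoxia"
        ])
        y = np.array([0] * n_per_class + [1] * n_per_class)

        model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())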