REANALYSIS OF F-STATISTIC GRAVITATIONAL-WAVE SEARCHES WITH THE HIGHER CRITICISM STATISTIC
International Nuclear Information System (INIS)
Bennett, M. F.; Melatos, A.; Delaigle, A.; Hall, P.
2013-01-01
We propose a new method of gravitational-wave detection using a modified form of higher criticism, a statistical technique introduced by Donoho and Jin. Higher criticism is designed to detect a group of sparse, weak sources, none of which is strong enough to be reliably estimated or detected individually. We apply higher criticism as a second-pass method to synthetic F-statistic and C-statistic data for a monochromatic periodic source in a binary system and quantify the improvement relative to the first-pass methods. We find that higher criticism on C-statistic data is ∼6% more sensitive than the C-statistic alone under optimal conditions (i.e., binary orbit known exactly), and the relative advantage increases as the error in the orbital parameters increases. Higher criticism is robust even when the source is not monochromatic (e.g., phase wandering in an accreting system). Applying higher criticism to a phase-wandering source over multiple time intervals gives a ≳30% increase in detectability with few assumptions about the frequency evolution. By contrast, in all-sky searches for unknown periodic sources, which are dominated by the brightest source, second-pass higher criticism does not provide any benefits over a first-pass search.
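As a reading aid, here is a minimal sketch of the higher criticism statistic in the standard Donoho-Jin form the abstract refers to. The function name and the choice to scan the lower `alpha0` fraction of the ordered p-values are illustrative conventions, not details from this paper.

```python
import math

def higher_criticism(pvalues, alpha0=0.5):
    """Donoho-Jin higher criticism over the sorted p-values.

    Scans the smallest alpha0 fraction of the ordered p-values and
    returns the maximum standardized exceedance of the empirical CDF
    (i/n) over the null (uniform) expectation p_(i).
    """
    p = sorted(pvalues)
    n = len(p)
    hc = float("-inf")
    for i, pi in enumerate(p, start=1):
        if i > alpha0 * n:
            break
        if pi <= 0.0 or pi >= 1.0:
            continue  # standardization is undefined at the boundary
        z = math.sqrt(n) * (i / n - pi) / math.sqrt(pi * (1.0 - pi))
        hc = max(hc, z)
    return hc
```

Under the null the p-values are uniform and HC stays small; a handful of very small p-values (sparse, individually weak signals) drive HC up, which is exactly the regime the second-pass search targets.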
Sensitivity analysis of ranked data: from order statistics to quantiles
Heidergott, B.F.; Volk-Makarewicz, W.
2015-01-01
In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before…
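A concrete instance of the kind of sensitivity the abstract describes: the derivative of the expected k-th order statistic with respect to a distributional parameter. The sketch below is an illustrative pathwise (IPA) Monte Carlo estimator for exponential samples, not the paper's general theory; the exponential choice and function name are assumptions made for the example.

```python
import math, random

def order_stat_sensitivity(n, k, lam, trials=20000, seed=1):
    """Pathwise estimate of d/d(lambda) E[X_(k)] for the k-th order
    statistic of n i.i.d. Exp(lambda) samples.

    With inverse-transform sampling X = -ln(1-U)/lambda, the pathwise
    derivative is dX/d(lambda) = -X/lambda, and sorting commutes with
    the monotone transform, so averaging -X_(k)/lambda suffices.
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        xs = sorted(-math.log(1.0 - rng.random()) / lam for _ in range(n))
        acc += -xs[k - 1] / lam
    return acc / trials
```

For comparison, the closed form E[X_(k)] = (1/lambda) * sum_{j=0}^{k-1} 1/(n-j) gives the exact derivative -(1/lambda**2) * sum_{j=0}^{k-1} 1/(n-j), which the simulation reproduces closely.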
The Study of Second Higher Education through Mathematical Statistics
Directory of Open Access Journals (Sweden)
Olga V. Kremer
2014-05-01
The article deals with the statistical reasons, age and wages of people who pursue a second higher education. People opt for a second higher education mostly due to economic and physiological factors. According to our research, age is a key motivator for the second higher education. Based on statistical data, the portrait of a second higher education student is drawn.
[Hydrologic variability and sensitivity based on Hurst coefficient and Bartels statistic].
Lei, Xu; Xie, Ping; Wu, Zi Yi; Sang, Yan Fang; Zhao, Jiang Yan; Li, Bin Bin
2018-04-01
Due to global climate change and frequent human activities in recent years, the pure stochastic components of hydrological sequences are mixed with one or several variation components, including jump, trend, period and dependency. It is urgently needed to clarify which indices should be used to quantify the degree of their variability. In this study, we defined hydrological variability based on the Hurst coefficient and the Bartels statistic, and used Monte Carlo statistical tests to analyze their sensitivity to different variants. When the hydrological sequence had jump or trend variation, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the Hurst coefficient being more sensitive to weak jump or trend variation. When the sequence had periodic variation, only the Bartels statistic could detect it. When the sequence had dependency, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the latter able to detect weaker dependent variations. For all four variations, both the Hurst variability and the Bartels variability increased as the variation range increased. Thus, they can be used to measure the variation intensity of a hydrological sequence. We analyzed the temperature series of different weather stations in the Lancang River basin. Results showed that the temperature at all stations exhibited an upward trend or jump, indicating that the entire basin has experienced warming in recent years, with the temperature variability in the upper and lower reaches being much higher. This case study demonstrated the practicability of the proposed method.
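The Bartels statistic used above is the rank version of von Neumann's ratio. A minimal stdlib sketch, assuming the standard Bartels (1982) definition; under a purely random sequence the expected value is 2, and persistence or trend pushes it below 2:

```python
def bartels_rvn(series):
    """Bartels' rank version of von Neumann's ratio.

    Ranks the series (average ranks for ties), then compares squared
    successive rank differences to the total rank dispersion.
    """
    n = len(series)
    order = sorted(range(n), key=lambda i: series[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and series[order[j + 1]] == series[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of tied positions, 1-based
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    mean_rank = (n + 1) / 2
    num = sum((ranks[i] - ranks[i + 1]) ** 2 for i in range(n - 1))
    den = sum((r - mean_rank) ** 2 for r in ranks)
    return num / den
```

A strictly increasing series (a pure trend) yields a ratio far below 2, while an alternating series yields a ratio above 2, matching the statistic's role as a randomness test.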
Statistics Report on TEQSA Registered Higher Education Providers
Australian Government Tertiary Education Quality and Standards Agency, 2015
2015-01-01
This statistics report provides a comprehensive snapshot of national statistics on all parts of the sector for the year 2013, by bringing together data collected directly by TEQSA with data sourced from the main higher education statistics collections managed by the Australian Government Department of Education and Training. The report provides…
Pattern statistics on Markov chains and sensitivity to parameter estimation
Directory of Open Access Journals (Sweden)
Nuel Grégory
2006-10-01
Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take the sequence composition into account. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, ...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of high-order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.
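To make the binomial approximation concrete, here is a sketch of the z-score for over-representation of a word. The per-position occurrence probability `p_word` is supplied by the caller (under the paper's Markov background model it would itself be estimated, and that estimation error is what the delta method propagates); the function name and the i.i.d. illustration are assumptions of this example.

```python
import math

def pattern_zscore(sequence, word, p_word):
    """Binomial-approximation z-score for over-representation of `word`.

    Counts (possibly overlapping) occurrences, then standardizes
    against a Binomial(n, p_word) model, where n is the number of
    candidate start positions.
    """
    n = len(sequence) - len(word) + 1
    count = sum(1 for i in range(n) if sequence[i:i + len(word)] == word)
    mu = n * p_word
    sigma = math.sqrt(n * p_word * (1.0 - p_word))
    return (count - mu) / sigma
```

For a uniform i.i.d. DNA background, p_word = (1/4)**len(word); the abstract's point is that replacing this known p_word by a high-order Markov estimate injects sampling variability into the z-score itself.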
Bayesian Sensitivity Analysis of Statistical Models with Missing Data.
Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng
2014-04-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.
Statistics Report on TEQSA Registered Higher Education Providers, 2016
Australian Government Tertiary Education Quality and Standards Agency, 2016
2016-01-01
This Statistics Report is the third release of selected higher education sector data held by the Australian Government Tertiary Education Quality and Standards Agency (TEQSA) for its quality assurance activities. It provides a snapshot of national statistics on all parts of the sector by bringing together data collected directly by TEQSA with data…
Statistical insights from Romanian data on higher education
Directory of Open Access Journals (Sweden)
Andreea Ardelean
2015-09-01
This paper aims to use cluster analysis to make a comparative analysis of Romanian higher education at the regional level. The evolution of higher education in the post-communist period is also presented, using quantitative traits. Although the focus is on university education, references to total education are included by comparison. Then, to highlight the importance of higher education, the chi-square test is applied to check whether there is an association between statistical regions and the education level of the unemployed.
Gender sensitive statistics as a prerequisite for democratization of a society
Directory of Open Access Journals (Sweden)
Balon Bojana
2007-01-01
The author analyzes statistical data on the position of women in the EU and Serbia, which confirm that women in the EU, as well as in Serbia, are insufficiently represented in decision-making processes, are on average better educated than men, and work more hours per day, yet on average earn less than men. In 2005 the Statistical Office of the Republic of Serbia published a brochure, Women and Men in Serbia, which presented gender sensitive data in a coherent way, including data that the Office gathers but does not present in its regular publications, or presents only in a scattered and unsystematic manner. The brochure demonstrated that there is interest in, and a need for, better collection of gender sensitive statistical data. Nevertheless, it does not represent a radical breakthrough in the methodology of data collection or in the analysis of new areas of statistics. There are still many "black holes" that Serbia will have to fill if it wants to meet its international obligations regarding reporting on the position of women to international organizations and various bodies. This is why the author also presents the gender sensitive categories that are gathered and published in Germany, Norway and Slovenia - a country whose statistics are based on foundations similar to those built in the former Yugoslavia and which in a short time progressed to European standards, proving that progress is possible if there is political will.
South Carolina Higher Education Statistical Abstract, 2014. 36th Edition
Armour, Mim, Ed.
2014-01-01
The South Carolina Higher Education Statistical Abstract is a comprehensive, single-source compilation of tables and graphs which report data frequently requested by the Governor, Legislators, college and university staff, other state government officials, and the general public. The 2014 edition of the Statistical Abstract marks the 36th year of…
South Carolina Higher Education Statistical Abstract, 2015. 37th Edition
Armour, Mim, Ed.
2015-01-01
The South Carolina Higher Education Statistical Abstract is a comprehensive, single-source compilation of tables and graphs which report data frequently requested by the Governor, Legislators, college and university staff, other state government officials, and the general public. The 2015 edition of the Statistical Abstract marks the 37th year of…
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical…
Litchford, Ron J.; Jeng, San-Mou
1992-01-01
The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.
Higher-order scene statistics of breast images
Abbey, Craig K.; Sohl-Dickstein, Jascha N.; Olshausen, Bruno A.; Eckstein, Miguel P.; Boone, John M.
2009-02-01
Researchers studying human and computer vision have found description and construction of these systems greatly aided by analysis of the statistical properties of naturally occurring scenes. More specifically, it has been found that receptive fields with directional selectivity and bandwidth properties similar to mammalian visual systems are more closely matched to the statistics of natural scenes. It is argued that this allows for sparse representation of the independent components of natural images [Olshausen and Field, Nature, 1996]. These theories have important implications for medical image perception. For example, will a system that is designed to represent the independent components of natural scenes, where objects occlude one another and illumination is typically reflected, be appropriate for X-ray imaging, where features superimpose on one another and illumination is transmissive? In this research we begin to examine these issues by evaluating higher-order statistical properties of breast images from X-ray projection mammography (PM) and dedicated breast computed tomography (bCT). We evaluate kurtosis in responses of octave-bandwidth Gabor filters applied to PM and to coronal slices of bCT scans. We find that kurtosis in PM rises and quickly saturates for filter center frequencies with an average value above 0.95. By contrast, kurtosis in bCT peaks near 0.20 cyc/mm at a value of approximately 2. Our findings suggest that the human visual system may be tuned to represent breast tissue more effectively in bCT over a specific range of spatial frequencies.
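The quantity tracked across Gabor centre frequencies above is sample kurtosis. A minimal sketch of the (excess) kurtosis computation, with synthetic stand-ins for filter responses rather than real mammography data; the sample generators below are assumptions of this example:

```python
import math, random

def excess_kurtosis(xs):
    """Sample excess kurtosis m4/m2**2 - 3 (approximately 0 for a Gaussian)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 * m2) - 3.0

# Synthetic "filter responses": a Gaussian sample (excess kurtosis ~ 0)
# and a heavy-tailed Laplace sample (excess kurtosis = 3), mimicking the
# sparse, heavy-tailed responses natural-scene statistics predict.
rng = random.Random(0)
gauss = [rng.gauss(0, 1) for _ in range(20000)]
laplace = [rng.expovariate(1) * rng.choice([-1, 1]) for _ in range(20000)]
```

Heavy-tailed (sparse) responses give large positive excess kurtosis, which is why high kurtosis of bandpass responses is read as a signature of sparse structure in the images.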
DEFF Research Database (Denmark)
Manevski, Kiril; Jabloun, Mohamed; Gupta, Manika
2016-01-01
Remote sensing of land covers utilizes an increasing number of methods for spectral reflectance processing and its accompanying statistics to discriminate between the covers’ spectral signatures at various scales. To this end, the present chapter deals with the field-scale sensitivity of the vegetation spectral discrimination to the most common types of reflectance (unaltered and continuum-removed) and statistical tests (parametric and nonparametric analysis of variance). It is divided into two distinct parts. The first part summarizes the current knowledge in relation to vegetation… Continuum-removed reflectance proved a more powerful input to a nonparametric analysis for discrimination at the field scale, when compared with unaltered reflectance and parametric analysis. However, the discrimination outputs interact and are very sensitive to the number of observations - an important implication for the design…
Higher order capacity statistics of multi-hop transmission systems over Rayleigh fading channels
Yilmaz, Ferkan
2012-03-01
In this paper, we present an exact analytical expression to evaluate the higher order statistics of the channel capacity for amplify-and-forward (AF) multihop transmission systems operating over Rayleigh fading channels. Furthermore, we present a simple and efficient closed-form expression for the higher order moments of the channel capacity of dual-hop transmission systems with Rayleigh fading channels. In order to analyze the behavior of the higher order capacity statistics and investigate the usefulness of the mathematical analysis, selected numerical and simulation results are presented. Our results are found to be in perfect agreement. © 2012 IEEE.
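The paper's closed forms are not reproduced here, but the quantities they evaluate can be sanity-checked numerically. The sketch below Monte Carlo estimates the first few moments E[C^k] of the dual-hop AF capacity, assuming the common end-to-end SNR model g1*g2/(g1+g2+1) and exponentially distributed per-hop SNRs (Rayleigh fading); the function name and defaults are illustrative.

```python
import math, random

def capacity_moments(gbar1, gbar2, order=4, trials=50000, seed=7):
    """Monte Carlo estimate of E[C^k], k = 1..order, for the dual-hop
    AF capacity C = log2(1 + g1*g2/(g1+g2+1)), with per-hop SNRs g_i
    exponential (Rayleigh fading) with means gbar1, gbar2.
    """
    rng = random.Random(seed)
    sums = [0.0] * order
    for _ in range(trials):
        g1 = rng.expovariate(1.0 / gbar1)
        g2 = rng.expovariate(1.0 / gbar2)
        c = math.log2(1.0 + g1 * g2 / (g1 + g2 + 1.0))
        p = 1.0
        for k in range(order):
            p *= c
            sums[k] += p
    return [s / trials for s in sums]
```

Such simulations are the standard way to verify analytical moment expressions like those in the paper ("our results are found to be in perfect agreement").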
A perceptual space of local image statistics.
Victor, Jonathan D; Thengone, Daniel J; Rizvi, Syed M; Conte, Mary M
2015-12-01
Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice - a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. Copyright © 2015 Elsevier Ltd. All rights reserved.
Connection between weighted LPC and higher-order statistics for AR model estimation
Kamp, Y.; Ma, C.
1993-01-01
This paper establishes the relationship between a weighted linear prediction method used for robust analysis of voiced speech and autoregressive modelling based on higher-order statistics (cumulants).
Gontscharuk, Veronika; Landwehr, Sandra; Finner, Helmut
2015-01-01
The higher criticism (HC) statistic, which can be seen as a normalized version of the famous Kolmogorov-Smirnov statistic, has a long history, dating back to the mid-seventies. Originally, HC statistics were used in connection with goodness-of-fit (GOF) tests, but they recently gained some attention in the context of testing the global null hypothesis in high-dimensional data. The continuing interest in HC seems to be inspired by a series of nice asymptotic properties of this statistic. For example, unlike Kolmogorov-Smirnov tests, GOF tests based on the HC statistic are known to be asymptotically sensitive in the moderate tails; hence it is favorably applied for detecting the presence of signals in sparse mixture models. However, some questions around the asymptotic behavior of the HC statistic are still open. We focus on two of them, namely, why a specific intermediate range is crucial for GOF tests based on the HC statistic and why the convergence of the HC distribution to the limiting one is extremely slow. Moreover, the discrepancy between the asymptotic and finite-sample behavior of the HC statistic prompts us to provide a new HC test that has better finite-sample properties than the original HC test while showing the same asymptotics. This test is motivated by the asymptotic behavior of the so-called local levels related to the original HC test. By means of numerical calculations and simulations we show that the new HC test is typically more powerful than the original HC test in normal mixture models. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Study of relationship between MUF correlation and detection sensitivity of statistical analysis
International Nuclear Information System (INIS)
Tamura, Toshiaki; Ihara, Hitoshi; Yamamoto, Yoichi; Ikawa, Koji
1989-11-01
Various kinds of statistical analysis have been proposed for NRTA (Near Real Time Materials Accountancy), which was devised to satisfy the timeliness goal, one of the detection goals of the IAEA. It is presumed that statistical analysis results will differ between the case of rigorous error propagation (with MUF correlation) and the case of simplified error propagation (without MUF correlation). Therefore, measurement simulation and decision analysis were performed using a flow simulation of an 800 MTHM/Y model reprocessing plant, and the relationship between MUF correlation and the detection sensitivity and false-alarm rate of statistical analysis was studied. The specific character of material accountancy for the 800 MTHM/Y model reprocessing plant was grasped through this simulation. It also became clear that MUF correlation decreases not only the false-alarm rate but also the detection probability for protracted loss in the case of the CUMUF test and Page's test applied to NRTA. (author)
Higher insulin sensitivity in vegans is not associated with higher mitochondrial density.
Gojda, J; Patková, J; Jaček, M; Potočková, J; Trnka, J; Kraml, P; Anděl, M
2013-12-01
Vegans have a lower incidence of insulin resistance (IR)-associated diseases and a higher insulin sensitivity (IS) compared with omnivores. The aim of this study was to examine whether the higher IS in vegans relates to markers of mitochondrial biogenesis and to intramyocellular lipid (IMCL) content. Eleven vegans and 10 matched (race, age, sex, body mass index, physical activity and energy intake) omnivorous controls were enrolled in a case-control study. Anthropometry, bioimpedance (BIA), ultrasound measurement of visceral and subcutaneous fat layer, parameters of glucose and lipid homeostasis, hyperinsulinemic euglycemic clamp and muscle biopsies were performed. Citrate synthase (CS) activity, mitochondrial DNA (mtDNA) and IMCL content were assessed in skeletal muscle samples. Both groups were comparable in anthropometric and BIA parameters, physical activity and protein-energy intake. Vegans had significantly higher glucose disposal (M-value, vegans 8.11±1.51 vs controls 6.31±1.57 mg/kg/min, 95% confidence interval: 0.402 to 3.212, P=0.014), slightly lower IMCL content (vegans 13.91 (7.8 to 44.0) vs controls 17.36 (12.4 to 78.5) mg/g of muscle, 95% confidence interval: -7.594 to 24.550, P=0.193) and slightly higher relative muscle mtDNA amount (vegans 1.36±0.31 vs controls 1.13±0.36, 95% confidence interval: -0.078 to 0.537, P=0.135). No significant differences were found in CS activity (vegans 18.43±5.05 vs controls 18.16±5.41 μmol/g/min, 95% confidence interval: -4.503 to 5.050, P=0.906). Vegans have a higher IS, but comparable mitochondrial density and IMCL content with omnivores. This suggests that a decrease in whole-body glucose disposal may precede muscle lipid accumulation and mitochondrial dysfunction in IR development.
Higher-Order Statistical Correlations and Mutual Information Among Particles in a Quantum Well
Yépez, V. S.; Sagar, R. P.; Laguna, H. G.
2017-12-01
The influence of wave function symmetry on statistical correlation is studied for the case of three non-interacting spin-free quantum particles in a unidimensional box, in position and in momentum space. Higher-order statistical correlations occurring among the three particles in this quantum system are quantified via higher-order mutual information and compared to the correlation between pairs of variables in this model, and to the correlation in the two-particle system. The results for the higher-order mutual information show that there are states where the symmetric wave functions are more correlated than the antisymmetric ones with the same quantum numbers. This holds in position as well as in momentum space. This behavior is opposite to that observed for the correlation between pairs of variables in this model, and in the two-particle system, where the antisymmetric wave functions are in general more correlated. These results are also consistent with those observed in a system of three uncoupled oscillators. The use of higher-order mutual information as a correlation measure is monitored and examined by considering a superposition of states or systems with two Slater determinants.
Higher order capacity statistics of multi-hop transmission systems over Rayleigh fading channels
Yilmaz, Ferkan; Tabassum, Hina; Alouini, Mohamed-Slim
2012-01-01
In this paper, we present an exact analytical expression to evaluate the higher order statistics of the channel capacity for amplify and forward (AF) multihop transmission systems operating over Rayleigh fading channels. Furthermore, we present…
High order statistical signatures from source-driven measurements of subcritical fissile systems
International Nuclear Information System (INIS)
Mattingly, J.K.
1998-01-01
This research focuses on the development and application of high-order statistical analyses applied to measurements performed on subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from the counting statistics of the introduced source and of radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher-order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability, combined with the enhanced sensitivity of higher-order signatures, indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation-signature identification of weapons components for nuclear disarmament and safeguards applications, and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
Energy Technology Data Exchange (ETDEWEB)
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
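NRMSE conventions vary (normalization by range, mean, or capacity), and the abstract does not say which one this study uses, so the sketch below makes the normalization an explicit parameter; the function name and defaults are illustrative, not from the report.

```python
import math

def nrmse(forecast, actual, norm="range"):
    """Normalized root-mean-squared error between a forecast series
    and the actuals.

    norm="range" divides the RMSE by max(actual) - min(actual);
    norm="mean" divides by the mean of the actuals.
    """
    n = len(actual)
    rmse = math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual)) / n)
    if norm == "range":
        denom = max(actual) - min(actual)
    else:
        denom = sum(actual) / n
    return rmse / denom
```

Because the two conventions can differ substantially on the same data, comparisons like the 28.10% reduction quoted above are only meaningful when the same normalization is used throughout.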
International Nuclear Information System (INIS)
Keitel, David; Prix, Reinhard
2015-01-01
The multi-detector F-statistic is close to optimal for detecting continuous gravitational waves (CWs) in Gaussian noise. However, it is susceptible to false alarms from instrumental artefacts, for example quasi-monochromatic disturbances (‘lines’), which resemble a CW signal more than Gaussian noise. In a recent paper (Keitel et al 2014 Phys. Rev. D 89 064023), a Bayesian model selection approach was used to derive line-robust detection statistics for CW signals, generalizing both the F-statistic and the F-statistic consistency veto technique and yielding improved performance in line-affected data. Here we investigate a generalization of the assumptions made in that paper: if a CW analysis uses data from two or more detectors with very different sensitivities, the line-robust statistics could be less effective. We investigate the boundaries within which they are still safe to use, in comparison with the F-statistic. Tests using synthetic draws show that the optimally-tuned version of the original line-robust statistic remains safe in most cases of practical interest. We also explore a simple idea on further improving the detection power and safety of these statistics, which we, however, find to be of limited practical use. (paper)
Subdomain sensitive statistical parsing using raw corpora
Plank, B.; Sima'an, K.
2008-01-01
Modern statistical parsers are trained on large annotated corpora (treebanks). These treebanks usually consist of sentences addressing different subdomains (e.g. sports, politics, music), which implies that the statistics gathered by current statistical parsers are mixtures of subdomains of language…
Jordan, Julie-Ann; McGladdery, Gary; Dyer, Kevin
2014-08-01
This study examined levels of mathematics and statistics anxiety, as well as general mental health amongst undergraduate students with dyslexia (n = 28) and those without dyslexia (n = 71). Students with dyslexia had higher levels of mathematics anxiety relative to those without dyslexia, while statistics anxiety and general mental health were comparable for both reading ability groups. In terms of coping strategies, undergraduates with dyslexia tended to use planning-based strategies and seek instrumental support more frequently than those without dyslexia. Higher mathematics anxiety was associated with having a dyslexia diagnosis, as well as greater levels of worrying, denial, seeking instrumental support and less use of the positive reinterpretation coping strategy. By contrast, statistics anxiety was not predicted by dyslexia diagnosis, but was instead predicted by overall worrying and the use of denial and emotion focused coping strategies. The results suggest that disability practitioners should be aware that university students with dyslexia are at risk of high mathematics anxiety. Additionally, effective anxiety reduction strategies such as positive reframing and thought challenging would form a useful addition to the support package delivered to many students with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.
The Statistical Knowledge Gap in Higher Degree by Research Students: The Supervisors' Perspective
Baglin, James; Hart, Claire; Stow, Sarah
2017-01-01
This study sought to gain an understanding of the current statistical training and support needs for Australian Higher Degree by Research (HDR) students and their supervisors. The data reported herein are based on the survey responses of 191 (18.7%) eligible supervisors from a single Australian institution. The survey was composed of both…
Fathiazar, Elham; Anemuller, Jorn; Kretzberg, Jutta
2016-08-01
Voltage-Sensitive Dye (VSD) imaging is an optical imaging method that allows measuring the graded voltage changes of multiple neurons simultaneously. In neuroscience, this method is used to reveal networks of neurons involved in certain tasks. However, the recorded relative dye fluorescence changes are usually low and signals are superimposed by noise and artifacts. Therefore, establishing a reliable method to identify which cells are activated by specific stimulus conditions is the first step to identify functional networks. In this paper, we present a statistical method to identify stimulus-activated network nodes as cells, whose activities during sensory network stimulation differ significantly from the un-stimulated control condition. This method is demonstrated based on voltage-sensitive dye recordings from up to 100 neurons in a ganglion of the medicinal leech responding to tactile skin stimulation. Without relying on any prior physiological knowledge, the network nodes identified by our statistical analysis were found to match well with published cell types involved in tactile stimulus processing and to be consistent across stimulus conditions and preparations.
2009-01-01
In high-dimensional studies such as genome-wide association studies, the correction for multiple testing in order to control total type I error results in decreased power to detect modest effects. We present a new analytical approach based on the higher criticism statistic that allows identification of the presence of modest effects. We apply our method to the genome-wide study of rheumatoid arthritis provided in the Genetic Analysis Workshop 16 Problem 1 data set. There is evidence for unknown bias in this study that could be explained by the presence of undetected modest effects. We compared the asymptotic and empirical thresholds for the higher criticism statistic. Using the asymptotic threshold we detected the presence of modest effects genome-wide. We also detected modest effects using the 90th percentile of the empirical null distribution as a threshold; however, there was no such evidence when the 95th and 99th percentiles were used. While the higher criticism method suggests that there is some evidence for modest effects, interpreting individual single-nucleotide polymorphisms with significant higher criticism statistics is of undetermined value. The goal of higher criticism is to alert the researcher that genetic effects remain to be discovered and to promote the use of more targeted and powerful studies to detect the remaining effects. PMID:20018032
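Since the threshold comparison above hinges on how the statistic is formed, here is a minimal sketch of the Donoho-Jin higher criticism statistic computed from a vector of p-values. The clipping guard, the 0.5 scan fraction, and the synthetic p-values are illustrative assumptions, not details taken from this study:

```python
import numpy as np

def higher_criticism(pvalues, alpha0=0.5):
    """Donoho-Jin HC: maximal standardized excess of small p-values."""
    # Clip to avoid a zero denominator at p = 0 or p = 1 (illustrative guard).
    p = np.sort(np.clip(np.asarray(pvalues, dtype=float), 1e-12, 1 - 1e-12))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    k = max(1, int(alpha0 * n))  # customary: scan only the smallest p-values
    return hc[:k].max()

rng = np.random.default_rng(0)
null_p = rng.uniform(size=10_000)               # global null: uniform p-values
mixed_p = null_p.copy()
mixed_p[:50] = rng.uniform(size=50) * 1e-4      # sparse, modest effects added
print(higher_criticism(null_p), higher_criticism(mixed_p))
```

Under the global null the statistic stays small, while a sparse admixture of weak effects drives it sharply upward, which is the behaviour the empirical-percentile thresholds above are calibrated against.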
Kleibergen, F.R.
2003-01-01
We show that the sensitivity of the limit distribution of commonly used GMM statistics to weak and many instruments results from superfluous elements in the higher order expansion of these statistics. When the instruments are strong and their number is small, these elements are of higher order and
Ultraviolet sensitivity in higher dimensions
International Nuclear Information System (INIS)
Hoover, Doug; Burgess, Clifford P.
2006-01-01
We calculate the first three Gilkey-DeWitt (heat-kernel) coefficients, a_0, a_1 and a_2, for massive particles having the spins of most physical interest in n dimensions, including the contributions of the ghosts and the fields associated with the appropriate generalized Higgs mechanism. By assembling these into supermultiplets we compute the same coefficients for general supergravity theories, and show that they vanish for many examples. One of the steps of the calculation involves computing these coefficients for massless particles, and our expressions in this case agree with - and extend to more general background spacetimes - earlier calculations, where these exist. Our results give that part of the low-energy effective action which depends most sensitively on the mass of heavy fields once these are integrated out. These results are used in hep-th/0504004 to compute the sensitivity to large masses of the Casimir energy in Ricci-flat 4D compactifications of 6D supergravity.
The Need for Context-Sensitive Measures of Educational Quality in Transnational Higher Education
Pyvis, David
2011-01-01
This paper argues that the current approach to educational quality formation in transnational higher education promotes educational imperialism, and that guidelines and practices should be altered to embrace context-sensitive measures of quality. The claims are sustained by findings from a study that investigated how academics understood and…
DEFF Research Database (Denmark)
Høeg, Louise; Roepstorff, Carsten; Thiele, Maja
2009-01-01
that despite 47% higher IMTG levels in women in the follicular phase, whole-body as well as leg insulin sensitivity is higher than in matched men. This was not explained by sex differences in proximal insulin signalling in women. In women, it seems that a high capillary density and type 1 muscle fiber … expression may be important for insulin action. Key words: muscle triglycerides, gender, insulin action, sex paradox.
Directory of Open Access Journals (Sweden)
Ge Peng
2018-02-01
An ice-free Arctic summer would have pronounced impacts on global climate, coastal habitats, national security, and the shipping industry. Rapid and accelerated Arctic sea ice loss has placed the reality of an ice-free Arctic summer even closer to the present day. Accurate projection of the first Arctic ice-free summer year is extremely important for business planning and climate change mitigation, but the projection can be affected by many factors. Using an inter-calibrated satellite sea ice product, this article examines the sensitivity of decadal trends of Arctic sea ice extent and statistical projections of the first occurrence of an ice-free Arctic summer. The projection based on the linear trend of the last 20 years of data places the first Arctic ice-free summer year at 2036, 12 years earlier compared to that of the trend over the last 30 years. The results from a sensitivity analysis of six commonly used curve-fitting models show that the projected timings of the first Arctic ice-free summer year tend to be earlier for exponential, Gompertz, quadratic, and linear with lag fittings, and later for linear and log fittings. Projections of the first Arctic ice-free summer year by all six statistical models appear to converge to the 2037 ± 6 timeframe, with a spread of 17 years, and the earliest first ice-free Arctic summer year at 2031.
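As a toy illustration of the trend-projection step described above, the sketch below fits a linear trend to a synthetic 20-year extent series and solves for the year the trend crosses an ice-free threshold. The series, the 0.09 million km²/yr decline rate, and the 1 million km² threshold are assumptions for illustration, not the study's satellite data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical September sea-ice extents in million km^2 over a 20-year window;
# the study instead uses an inter-calibrated satellite record.
years = np.arange(1997, 2017)
extent = 7.5 - 0.09 * (years - years[0]) + rng.normal(0.0, 0.3, size=years.size)

slope, intercept = np.polyfit(years, extent, 1)   # linear trend fit
threshold = 1.0   # "ice-free" is commonly defined as extent below 1 million km^2
first_ice_free_year = (threshold - intercept) / slope
print(int(round(first_ice_free_year)))
```

Repeating this with different fitting windows (20 vs. 30 years) or different curve families (exponential, Gompertz, quadratic) is exactly the sensitivity exercise the abstract describes.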
Symmetries, invariants and generating functions: higher-order statistics of biased tracers
Munshi, Dipak
2018-01-01
Gravitationally collapsed objects are known to be biased tracers of an underlying density contrast. Using symmetry arguments, generalised biasing schemes have recently been developed to relate the halo density contrast δ_h to the underlying density contrast δ, the divergence of velocity θ, and their higher-order derivatives. This is done by constructing invariants such as s, t, ψ, and η. We show how the generating function formalism in Eulerian standard perturbation theory (SPT) can be used to show that many of the additional terms based on extended Galilean and Lifshitz symmetry actually make no contribution to the higher-order statistics of biased tracers. Other terms can also be drastically simplified, allowing us to write the vertices associated with δ_h in terms of the vertices of δ and θ, the higher-order derivatives, and the bias coefficients. We also compute the cumulant correlators (CCs) for two different tracer populations. These perturbative results are valid for tree-level contributions but at an arbitrary order. We also take into account the stochastic nature of bias in our analysis. Extending previous results of a local polynomial model of bias, we express the one-point cumulants S_N and their two-point counterparts, the CCs C_pq, of biased tracers in terms of those of their underlying density contrast counterparts. As a by-product of our calculation we also discuss the results using approximations based on Lagrangian perturbation theory (LPT).
Higher order statistical moment application for solar PV potential analysis
Basri, Mohd Juhari Mat; Abdullah, Samizee; Azrulhisham, Engku Ahmad; Harun, Khairulezuan
2016-10-01
Solar photovoltaic energy could serve as an alternative to fossil fuels, which are depleting and pose a global warming problem. However, this renewable energy source is too variable and intermittent to be relied on directly, so knowledge of the energy potential of a site is essential before building a solar photovoltaic power generation system there. Here, the application of a higher-order statistical moment model is analyzed using data collected from a 5 MW grid-connected photovoltaic system. Because the skewness and kurtosis of the AC power and solar irradiance distributions of the solar farm change dynamically, the Pearson system, in which a probability distribution is selected by matching its theoretical moments to the empirical moments of the data, is well suited for this purpose. Taking advantage of the Pearson system implementation in MATLAB, software has been developed to assist in data processing for distribution fitting and potential analysis, enabling future projection of the amount of AC power and solar irradiance availability.
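The moment-matching step can be sketched as follows: compute the four empirical moments that a Pearson-system fit (e.g. MATLAB's `pearsrnd`/`pearspdf`) would match against the theoretical moments of a candidate family. The gamma-distributed AC-power samples below are a hypothetical stand-in for the farm's telemetry:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical AC-power samples (kW); a real analysis would use farm telemetry.
ac_power = rng.gamma(shape=3.0, scale=500.0, size=100_000)

m = ac_power.mean()
s = ac_power.std(ddof=1)
skew = np.mean((ac_power - m) ** 3) / s ** 3   # third standardized moment
kurt = np.mean((ac_power - m) ** 4) / s ** 4   # fourth standardized moment (normal = 3)
# A Pearson-system fit selects the family member whose theoretical moments
# match the empirical tuple (m, s, skew, kurt).
print(m, s, skew, kurt)
```

Tracking how (skew, kurt) drift over time is what motivates refitting the Pearson type dynamically, as the abstract notes for the solar-farm distributions.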
Energy Technology Data Exchange (ETDEWEB)
Sung, Yixing (Westinghouse Electric Company LLC, Cranberry Township, PA); Adams, Brian M.; Witkowski, Walter R.
2011-04-01
The CASL Level 2 Milestone VUQ.Y1.03, 'Enable statistical sensitivity and UQ demonstrations for VERA,' was successfully completed in March 2011. The VUQ focus area led this effort, in close partnership with AMA, and with support from VRI. DAKOTA was coupled to VIPRE-W thermal-hydraulics simulations representing reactors of interest to address crud-related challenge problems in order to understand the sensitivity and uncertainty in simulation outputs with respect to uncertain operating and model form parameters. This report summarizes work coupling the software tools, characterizing uncertainties, selecting sensitivity and uncertainty quantification algorithms, and analyzing the results of iterative studies. These demonstration studies focused on sensitivity and uncertainty of mass evaporation rate calculated by VIPRE-W, a key predictor for crud-induced power shift (CIPS).
Yilmaz, Ferkan; Alouini, Mohamed-Slim
2012-01-01
The higher-order statistics (HOS) of the channel capacity μ_n = E[log^n(1 + γ_end)], where n ∈ N denotes the order of the statistics, have received relatively little attention in the literature, due in part to the intractability of their analysis. In this letter, we propose a novel and unified analysis, based on the moment generating function (MGF) technique, to compute the HOS of the channel capacity exactly. More precisely, our mathematical formalism can be readily applied to maximal-ratio-combining (MRC) receivers operating in generalized fading environments. The mathematical formalism is illustrated by some numerical examples focusing on correlated generalized fading environments. © 2012 IEEE.
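A Monte Carlo cross-check of the quantity defined above is straightforward; this sketch estimates μ_n = E[log^n(1 + γ_end)] for single-branch Rayleigh fading (exponentially distributed SNR), an assumed special case for illustration rather than the paper's MGF-based exact analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
avg_snr = 10.0                                    # assumed average SNR (linear scale)
# Rayleigh fading: the end-to-end SNR gamma_end is exponentially distributed.
gamma_end = rng.exponential(avg_snr, size=500_000)

capacity = np.log1p(gamma_end)                    # log(1 + gamma), nats per channel use
mu = [np.mean(capacity ** n) for n in (1, 2, 3)]  # mu[0] = mu_1, mu[1] = mu_2, ...
variance = mu[1] - mu[0] ** 2                     # capacity variance from mu_1, mu_2
print(mu[0], variance)
```

The first-order statistic μ_1 is the ergodic capacity; the higher orders give the variance and skew of the capacity, which is what makes the HOS useful beyond the mean.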
Directory of Open Access Journals (Sweden)
Rodrigo França de Espíndola
2014-12-01
Purpose: To evaluate whether implantation of an aspheric intraocular lens (IOL) results in reduced ocular aberrations and improved contrast sensitivity after cataract surgery and, therefore, in changes on frequency-doubling technology (FDT) testing. Methods: The present prospective clinical study enrolled 25 patients with bilateral cataract (50 eyes), who randomly received either an aspheric (Akreos AO) or a spherical (Akreos Fit) IOL in one eye and the other IOL in the second eye. Assessment 12 months postoperatively included photopic and mesopic contrast sensitivity testing. Higher-order aberrations (HOAs) were computed. FDT testing was divided into four areas to evaluate the variation of the values at different points. The median values of the local pattern thresholds (median area contrast sensitivity [MACS]) obtained with that division were calculated. Results: The Akreos AO group obtained statistically significantly lower values of HOAs and spherical aberration compared with the Akreos Fit group. There was a statistically significant between-group difference in contrast sensitivity under mesopic conditions at all spatial frequencies. No statistically significant differences were observed in mean deviation and pattern standard deviation. The aspheric IOL exhibited higher MACS in all areas, although a statistically significant difference was reached only in the 20-degree field area (P = 0.043). Conclusion: Aspheric IOLs significantly reduced spherical aberration and HOAs, improving mesopic contrast sensitivity. Although there was a trend toward slightly improved FDT results in the aspheric IOL group, the difference was not statistically significant.
Classification of lung sounds using higher-order statistics: A divide-and-conquer approach.
Naves, Raphael; Barbosa, Bruno H G; Ferreira, Danton D
2016-06-01
Lung sound auscultation is one of the most commonly used methods to evaluate respiratory diseases. However, the effectiveness of this method depends on the physician's training. If the physician does not have the proper training, he/she will be unable to distinguish between normal and abnormal sounds generated by the human body. Thus, the aim of this study was to implement a pattern recognition system to classify lung sounds. We used a dataset composed of five types of lung sounds: normal, coarse crackle, fine crackle, monophonic and polyphonic wheezes. We used higher-order statistics (HOS) to extract features (second-, third- and fourth-order cumulants), Genetic Algorithms (GA) and Fisher's Discriminant Ratio (FDR) to reduce dimensionality, and k-Nearest Neighbors and Naive Bayes classifiers to recognize the lung sound events in a tree-based system. We used the cross-validation procedure to analyze the classifiers' performance and Tukey's Honestly Significant Difference criterion to compare the results. Our results showed that the Genetic Algorithms outperformed Fisher's Discriminant Ratio for feature selection. Moreover, each lung class had a different signature pattern according to its cumulants, showing that HOS is a promising feature extraction tool for lung sounds. In addition, the proposed divide-and-conquer approach can accurately classify different types of lung sounds. The classification accuracy obtained by the best tree-based classifier was 98.1% on training data and 94.6% on validation data. The proposed approach achieved good results even using only one feature extraction tool (higher-order statistics). Additionally, the implementation of the proposed classifier in an embedded system is feasible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
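For a zero-mean signal, the cumulant features named above reduce to simple moment combinations (c2 = m2, c3 = m3, c4 = m4 − 3·m2²). The sketch below uses a Gaussian test signal, for which the third- and fourth-order cumulants vanish; that vanishing is precisely what makes non-zero cumulants useful signatures of structured lung sounds:

```python
import numpy as np

def cumulants_2_3_4(x):
    """Second-, third- and fourth-order cumulants of a (centered) 1-D signal."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                 # work with the zero-mean signal
    m2 = np.mean(x ** 2)
    m3 = np.mean(x ** 3)
    m4 = np.mean(x ** 4)
    return m2, m3, m4 - 3.0 * m2 ** 2   # c2, c3, c4

rng = np.random.default_rng(4)
gaussian_noise = rng.normal(size=200_000)   # illustrative stand-in for a recording
c2, c3, c4 = cumulants_2_3_4(gaussian_noise)
print(c2, c3, c4)
```

A crackle or wheeze segment would instead yield clearly non-zero c3 and/or c4, giving each lung-sound class the "signature pattern" the study exploits.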
Zheng, Luping; Wang, Yunfei; Zhang, Xianshuo; Ma, Liwei; Wang, Baoyan; Ji, Xiangling; Wei, Hua
2018-01-17
Dendrimer, with its hyperbranched structure and multivalent surface, is regarded as one of the most promising candidates close to the ideal drug delivery system, but the clinical translation and scale-up production of dendrimers have been hampered significantly by synthetic difficulties. Therefore, there is considerable scope for the development of novel hyperbranched polymers that can not only address the drawbacks of dendrimers but also maintain their advantages. The reversible addition-fragmentation chain transfer self-condensing vinyl polymerization (RAFT-SCVP) technique has enabled facile preparation of segmented hyperbranched polymer (SHP) by using chain transfer monomer (CTM)-based double-head agents during the past decade. Meanwhile, the design and development of block-statistical copolymers have been proven in our recent studies to be a simple yet effective way to address the dilemma of extracellular stability vs. intracellular high delivery efficacy. To integrate the advantages of both hyperbranched and block-statistical structures, we herein report the fabrication of a hyperbranched block-statistical copolymer-based prodrug with dual pH and reduction sensitivity using RAFT-SCVP and post-polymerization click coupling. The external homo oligo(ethylene glycol methyl ether methacrylate) (OEGMA) block provides sufficient extracellular colloidal stability for the nanocarriers by steric hindrance, and the interior OEGMA units incorporated by the statistical copolymerization promote intracellular drug release by facilitating the permeation of GSH and H+ for the cleavage of the reduction-responsive disulfide bond and pH-labile carbonate link, as well as weakening the hydrophobic encapsulation of drug molecules. The delivery efficacy of the target hyperbranched block-statistical copolymer-based prodrug was evaluated in terms of in vitro drug release and cytotoxicity studies, which confirm both acidic pH and reduction-triggered drug release for inhibiting proliferation of He
Directory of Open Access Journals (Sweden)
Markowski Dominique N
2012-01-01
Background: Spontaneous cessation of growth is a frequent finding in uterine fibroids. Increasing evidence suggests an important role of cellular senescence in this growth control. Deciphering the underlying mechanisms of growth control can be expected not only to shed light on the biology of the tumors but also to identify novel therapeutic targets. Methods: We have analyzed uterine leiomyomas and matching normal tissue for the expression of p14Arf and used explants to see if reducing MDM2 activity using the small-molecule inhibitor nutlin-3 can induce p53 and activate genes involved in senescence and/or apoptosis. For these studies quantitative real-time RT-PCR, Western blots, and immunohistochemistry were used. Statistical analyses were performed using Student's t test. Results: An in-depth analysis of 52 fibroids along with matching myometrium from 31 patients revealed in almost all cases a higher expression of p14Arf in the tumors than in the matching normal tissue. In tissue explants, treatment with the MDM2 inhibitor nutlin-3 induced apoptosis as well as senescence, as revealed by a dose-dependent increase of the expression of BAX and of p21, respectively. Simultaneously, the expression of the proliferation marker Ki-67 decreased drastically. Western blot analysis identified an increase in the p53 level as the most likely reason for the increased activity of its downstream markers BAX and p21. Because fibroids as a rule express much higher levels of p14Arf, a major negative regulator of MDM2, than matching myometrium, we then analyzed whether fibroids are more sensitive to nutlin-3 treatment than matching myometrium. In most fibroids analyzed, a higher sensitivity than that of matching myometrium was noted, with a corresponding increase of the p53 immunopositivity of the fibroid samples compared to those from myometrium. Conclusions: The results show that uterine fibroids represent a cell
msap: a tool for the statistical analysis of methylation-sensitive amplified polymorphism data.
Pérez-Figueroa, A
2013-05-01
In this study msap, an R package for the analysis of methylation-sensitive amplified polymorphism (MSAP or MS-AFLP) data, is presented. The program provides a deep analysis of epigenetic variation starting from a binary data matrix indicating the banding pattern between the isoschizomeric endonucleases HpaII and MspI, which have differential sensitivity to cytosine methylation. After comparing the restriction fragments, the program determines whether each fragment is susceptible to methylation (representative of epigenetic variation) or shows no evidence of methylation (representative of genetic variation). The package provides, in a user-friendly command-line interface, a pipeline of different analyses of the variation (genetic and epigenetic) among user-defined groups of samples, as well as the classification of the methylation occurrences in those groups. Statistical testing provides support for the analyses. A comprehensive report of the analyses and several useful plots can help researchers assess the epigenetic and genetic variation in their MSAP experiments. msap is downloadable from CRAN (http://cran.r-project.org/) and its own webpage (http://msap.r-forge.R-project.org/). The package is intended to be easy to use even for people unfamiliar with the R command-line environment. Advanced users may take advantage of the available source code to adapt msap to more complex analyses. © 2013 Blackwell Publishing Ltd.
Using statistical sensitivities for adaptation of a best-estimate thermo-hydraulic simulation model
International Nuclear Information System (INIS)
Liu, X.J.; Kerner, A.; Schaefer, A.
2010-01-01
On-line adaptation of best-estimate simulations of NPP behaviour to time-dependent measurement data can be used to ensure that simulations performed in parallel to plant operation develop synchronously with the real plant behaviour, even over extended periods of time. This opens up a range of applications, including operator support in non-standard situations and improved diagnostics and validation of measurements in real plants or experimental facilities. A number of adaptation methods have been proposed and successfully applied to control problems. However, these methods are difficult to apply to best-estimate thermal-hydraulic codes, such as TRACE and ATHLET, with their large nonlinear differential equation systems and sophisticated time integration techniques. This paper presents techniques that use statistical sensitivity measures to overcome those problems by reducing the number of parameters subject to adaptation. It describes how to identify the most significant parameters for adaptation and how this information can be used by combining: - decomposition techniques splitting the system into a small set of component parts with clearly defined interfaces, where boundary conditions can be derived from the measurement data; - filtering techniques to ensure that the time frame for adaptation is meaningful; - numerical sensitivities to find minimal error conditions. The suitability of combining these techniques is shown by application to an adaptive simulation of the PKL experiment.
Tan, Deborah K L; Tay, Wan Ting; Chan, Cordelia; Tan, Donald T H; Mehta, Jodhbir S
2015-03-01
To evaluate and compare changes in contrast sensitivity and ocular higher-order aberrations (HOAs) after femtosecond lenticule extraction (FLEx) and pseudo small-incision lenticule extraction (SMILE). Singapore National Eye Centre, Singapore. Retrospective case series. Patients had femtosecond lenticule extraction (Group 1) or pseudo small-incision lenticule extraction (Group 2) between March 2010 and December 2011. The main outcome measures were manifest refraction, HOAs, and contrast sensitivity 1, 3, 6, and 12 months postoperatively. Fifty-two consecutive patients (102 eyes) were recruited: 21 patients (42 eyes) in Group 1 and 31 patients (60 eyes) in Group 2. The uncorrected and corrected distance visual acuities were significantly better in Group 2 than in Group 1 at 12 months (P = .032). There was no significant increase in 3rd- or 4th-order aberrations at 1 year and no significant difference between the 2 groups preoperatively or postoperatively. At 1 year, there was a significant increase in mesopic contrast sensitivity in Group 2 at 1.5 cycles per degree (cpd) (P = .008) that was not found in Group 1, and photopic contrast sensitivity at 6.0 cpd was higher in Group 2 (P = .027). These results indicate that refractive lenticule extraction is safe and effective, with no significant induction of HOAs or deterioration in contrast sensitivity at 1 year. Induction of HOAs was not significantly different between the two variants of refractive lenticule extraction. However, there was significant improvement in photopic contrast sensitivity after pseudo small-incision lenticule extraction, which persisted through 1 year. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Kaiser, Franciscus; Hillegers, Harm; Legro, Iwen
2005-01-01
In this fourth IHEM trend report, key statistical issues such as student flows (new entrants, enrolment, and graduation, broken down by discipline and gender), rates of participation, staff, and finance are presented for the ten IHEM core countries over the period from 1995 to recent years. National statistical
Directory of Open Access Journals (Sweden)
JiYeoun Lee
2009-01-01
A preprocessing scheme based on the linear prediction coefficient (LPC) residual is applied to higher-order statistics (HOS) for automatic assessment of overall pathological voice quality. The normalized skewness and kurtosis are estimated from the LPC residual and show statistically meaningful distributions for characterizing pathological voice quality. 83 voice samples of the sustained vowel /a/ phonation are used in this study and are independently assessed by a speech and language therapist (SALT) according to the grade of severity of dysphonia on the GRBAS scale. These are used to train and test a classification and regression tree (CART). The best result is obtained using an optimal decision tree implemented by a combination of the normalized skewness and kurtosis, with an accuracy of 92.9%. It is concluded that the method can be used as an assessment tool, providing a valuable aid to the SALT during clinical evaluation of overall pathological voice quality.
Platnick, Steven; Wind, Galina; Zhang, Zhibo; Ackerman, Steven A.; Maddux, Brent
2012-01-01
The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1 km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radius results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges as well as ocean pixels with partly cloudy elements in the 250 m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250 m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1-D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.
Kittiwisit, Piyanat; Bowman, Judd D.; Jacobs, Daniel C.; Beardsley, Adam P.; Thyagarajan, Nithyanandan
2018-03-01
We present a baseline sensitivity analysis of the Hydrogen Epoch of Reionization Array (HERA) and its build-out stages to one-point statistics (variance, skewness, and kurtosis) of redshifted 21 cm intensity fluctuation from the Epoch of Reionization (EoR) based on realistic mock observations. By developing a full-sky 21 cm light-cone model, taking into account the proper field of view and frequency bandwidth, utilizing a realistic measurement scheme, and assuming perfect foreground removal, we show that HERA will be able to recover statistics of the sky model with high sensitivity by averaging over measurements from multiple fields. All build-out stages will be able to detect variance, while skewness and kurtosis should be detectable for HERA128 and larger. We identify sample variance as the limiting constraint of the measurements at the end of reionization. The sensitivity can also be further improved by performing frequency windowing. In addition, we find that strong sample variance fluctuation in the kurtosis measured from an individual field of observation indicates the presence of outlying cold or hot regions in the underlying fluctuations, a feature that can potentially be used as an EoR bubble indicator.
PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews.
Katchamart, Wanruchada; Faulkner, Amy; Feldman, Brian; Tomlinson, George; Bombardier, Claire
2011-07-01
To compare the performance of Ovid-MEDLINE vs. PubMed for identifying randomized controlled trials of methotrexate (MTX) in patients with rheumatoid arthritis (RA). We created search strategies for Ovid-MEDLINE and PubMed for a systematic review of MTX in RA. Their performance was evaluated using sensitivity, precision, and number needed to read (NNR). Comparing searches in Ovid-MEDLINE vs. PubMed, PubMed retrieved more citations overall than Ovid-MEDLINE; however, of the 20 citations that met eligibility criteria for the review, Ovid-MEDLINE retrieved 17 and PubMed 18. The sensitivity was 85% for Ovid-MEDLINE vs. 90% for PubMed, whereas the precision and NNR were comparable (precision: 0.881% for Ovid-MEDLINE vs. 0.884% for PubMed and NNR: 114 for Ovid-MEDLINE vs. 113 for PubMed). In systematic reviews of RA, PubMed has higher sensitivity than Ovid-MEDLINE with comparable precision and NNR. This study highlights the importance of well-designed database-specific search strategies. Copyright © 2010 Elsevier Inc. All rights reserved.
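The reported metrics follow from simple ratios, and a quick sketch recomputes them. The total-retrieved counts below are hypothetical values back-solved from the reported ~0.88% precision, since the abstract gives only the ratios:

```python
eligible = 20  # citations meeting the review's eligibility criteria

# (relevant retrieved, total retrieved); the totals are hypothetical,
# chosen only to reproduce the reported precision figures.
retrieved = {"Ovid-MEDLINE": (17, 1930), "PubMed": (18, 2036)}

results = {}
for db, (relevant, total) in retrieved.items():
    sensitivity = relevant / eligible   # share of gold-set citations found
    precision = relevant / total        # share of retrieved citations that are relevant
    nnr = total / relevant              # number needed to read = 1 / precision
    results[db] = (sensitivity, precision, nnr)
    print(f"{db}: sensitivity {sensitivity:.0%}, precision {precision:.3%}, NNR {nnr:.0f}")
```

This makes the trade-off explicit: PubMed's extra relevant citation raises sensitivity from 85% to 90% while precision and NNR stay essentially unchanged.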
Lionel, Martellini; Tania, Regimbau
2015-01-01
Under standard assumptions, including stationary and serially uncorrelated Gaussian gravitational wave stochastic background signal and noise distributions, as well as homogeneous detector sensitivities, the standard cross-correlation detection statistic is known to be optimal in the sense of minimizing the probability of a false dismissal at a fixed value of the probability of a false alarm. The focus of this paper is to analyze the comparative efficiency of this statistic versus a simple alt...
Sensitivity of precipitation statistics to urban growth in a subtropical coastal megacity cluster.
Holst, Christopher Claus; Chan, Johnny C L; Tam, Chi-Yung
2017-09-01
This short paper presents an investigation into how human activities may or may not affect precipitation, based on numerical simulations of precipitation in a benchmark case with modified lower boundary conditions representing different stages of urban development in the model. The results indicate that certain degrees of urbanization affect the likelihood of heavy precipitation significantly, while less urbanized or smaller cities are much less prone to these effects. This result can be explained by our previous work, in which the sensitivity of precipitation statistics to surface anthropogenic heat sources was traced to the generation of buoyancy and turbulence in the planetary boundary layer and their dissipation through the triggering of convection. Thus only megacities of sufficient size, and hence sufficient human-activity-related anthropogenic heat emission, can be expected to experience such effects. In other words, as cities grow, their effects upon precipitation appear to grow as well. Copyright © 2017. Published by Elsevier B.V.
Kepler Planet Detection Metrics: Statistical Bootstrap Test
Jenkins, Jon M.; Burke, Christopher J.
2016-01-01
This document describes the data produced by the Statistical Bootstrap Test over the final three Threshold Crossing Event (TCE) deliveries to NExScI: SOC 9.1 (Q1–Q16; Tenenbaum et al. 2014), SOC 9.2 (Q1–Q17, aka DR24; Seader et al. 2015), and SOC 9.3 (Q1–Q17, aka DR25; Twicken et al. 2016). The last few years have seen significant improvements in the SOC science data processing pipeline, leading to higher quality light curves and more sensitive transit searches. The statistical bootstrap analysis results presented here and the numerical results archived at NASA's Exoplanet Science Institute (NExScI) bear witness to these software improvements. This document attempts to introduce and describe the main features and differences between these three data sets as a consequence of the software changes.
Institute of Scientific and Technical Information of China (English)
XU Dian-Yan
2003-01-01
The free energy and entropy of Reissner-Nordstrom black holes in higher-dimensional space-time are calculated by the quantum statistical method with a brick-wall model. The space-time of the black holes is divided into three regions: region 1 (r > r_0), region 2 (r_0 > r > r_i), and region 3 (r_i > r > 0), where r_0 is the radius of the outer event horizon and r_i is the radius of the inner event horizon. Detailed calculation shows that the entropy contributed by region 2 is zero, the entropy contributed by region 1 is positive and proportional to the outer event horizon area, and the entropy contributed by region 3 is negative and proportional to the inner event horizon area. The total entropy contributed by all three regions is positive and proportional to the area difference between the outer and inner event horizons. As r_i approaches r_0 in the nearly extreme case, the total quantum statistical entropy approaches zero.
Qi, D.; Majda, A.
2017-12-01
A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in the principal model directions with the largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve optimal model performance. The idea in the reduced-order method comes from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are also used to capture the crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities, such as the tracer spectrum and the fat tails in the tracer probability density functions at the most important large scales, can be captured efficiently and accurately using the reduced-order tracer model in various dynamical regimes of the flow field with
Efficient statistical tests to compare Youden index: accounting for contingency correlation.
Chen, Fangyao; Xue, Yuqiang; Tan, Ming T; Chen, Pingyan
2015-04-30
The Youden index is widely utilized in studies evaluating the accuracy of diagnostic tests and the performance of predictive, prognostic, or risk models. However, both one- and two-independent-sample tests on the Youden index have been derived ignoring the dependence (association) between sensitivity and specificity, resulting in potentially misleading findings. Moreover, a paired-sample test on the Youden index has been unavailable. This article develops efficient statistical inference procedures for one-sample, independent-sample, and paired-sample tests on the Youden index by accounting for contingency correlation, namely the associations between sensitivity and specificity and the paired samples typically represented in contingency tables. For the one- and two-independent-sample tests, the variances are estimated by the delta method, and the statistical inference is based on central limit theory; these are then verified by bootstrap estimates. For the paired-sample test, we show that the estimated covariance of the two sensitivities and specificities can be represented as a function of the kappa statistic, so the test can be readily carried out. We then show the remarkable accuracy of the estimated variance using a constrained optimization approach. Simulation is performed to evaluate the statistical properties of the derived tests. The proposed approaches yield more stable type I errors at the nominal level and substantially higher power (efficiency) than the original Youden's approach. Therefore, the simple explicit large-sample solution performs very well. Because the asymptotic and exact bootstrap computations can readily be implemented with common software such as R, the method is broadly applicable to the evaluation of diagnostic tests and model performance. Copyright © 2015 John Wiley & Sons, Ltd.
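As a point of reference, the classical one-sample z-test that this paper improves on can be sketched in a few lines. This is the naive version that treats sensitivity and specificity as independent binomials; it does not include the paper's contingency-correlation correction, and the 2x2 counts are invented for illustration:

```python
import math

def youden_one_sample_test(tp, fn, tn, fp, j0=0.0):
    """Naive one-sample z-test for the Youden index J = Se + Sp - 1.

    Treats Se and Sp as independent binomial proportions (the classical
    delta-method variance); the paper's correction for contingency
    correlation is NOT applied here.
    """
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    j = se + sp - 1.0
    var = se * (1 - se) / (tp + fn) + sp * (1 - sp) / (tn + fp)
    z = (j - j0) / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return j, z, p

# Hypothetical 2x2 table: 90/100 diseased test positive, 80/100 healthy negative.
j, z, p = youden_one_sample_test(tp=90, fn=10, tn=80, fp=20)
print(f"J = {j:.2f}, z = {z:.2f}, p = {p:.2e}")
```

With these counts J = 0.70 and the null J0 = 0 is rejected overwhelmingly; the paper's point is that when sensitivity and specificity are correlated, this variance is misspecified and the type I error drifts from the nominal level.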
Higher-Order Moment Characterisation of Rogue Wave Statistics in Supercontinuum Generation
DEFF Research Database (Denmark)
Sørensen, Simon Toft; Bang, Ole; Wetzel, Benjamin
2012-01-01
The noise characteristics of supercontinuum generation are characterized using higher-order statistical moments. Measures of skew and kurtosis, and the coefficient of variation, allow quantitative identification of spectral regions dominated by rogue-wave-like behaviour.
International Nuclear Information System (INIS)
Oblow, E.M.; Perey, F.G.
1984-01-01
A comprehensive rigorous theory is developed for screening sensitivity coefficients in large-scale modeling applications. The theory uses Bayesian inference and group theory to establish a probabilistic framework for solving an underdetermined system of linear equations. The underdetermined problem is directly related to statistical screening sensitivity theory as developed in recent years. Several examples of the new approach to screening are worked out in detail, and comparisons are made with statistical approaches to the problem. The drawbacks of the latter methods are discussed at some length.
Visual and statistical analysis of 18F-FDG PET in primary progressive aphasia
International Nuclear Information System (INIS)
Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge; Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis
2015-01-01
Diagnosing primary progressive aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. There were 10 raters who analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA was high, averaging 87.8 and 89.9 % for visual analysis and 96.9 and 90.9 % for statistical analysis using global mean normalization, respectively. In cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)
Statistical methods in nonlinear dynamics
Indian Academy of Sciences (India)
Sensitivity to initial conditions in nonlinear dynamical systems leads to exponential divergence of trajectories that are initially arbitrarily close, and hence to unpredictability. Statistical methods have been found to be helpful in extracting useful information about such systems. In this paper, we review briefly some statistical ...
I. Arismendi; S. L. Johnson; J. B. Dunham
2015-01-01
Statistics of central tendency and dispersion may not capture relevant or desired characteristics of the distribution of continuous phenomena and, thus, they may not adequately describe temporal patterns of change. Here, we present two methodological approaches that can help to identify temporal changes in environmental regimes. First, we use higher-order statistical...
Chen, Yi-Ting; Sarangadharan, Indu; Sukesan, Revathi; Hseih, Ching-Yen; Lee, Geng-Yen; Chyi, Jen-Inn; Wang, Yu-Lin
2018-05-29
A lead-ion-selective membrane (Pb-ISM) coated AlGaN/GaN high electron mobility transistor (HEMT) was used to demonstrate a whole new methodology for ion-selective FET sensors, which can create ultra-high sensitivity (-36 mV/log[Pb2+]) surpassing the limit of ideal sensitivity (-29.58 mV/log[Pb2+]) in a typical Nernst equation for the lead ion. The largely improved sensitivity has reduced the detection limit (10^-10 M) by several orders of magnitude of lead ion concentration compared to a typical ion-selective electrode (ISE) (10^-7 M). The high sensitivity was obtained by creating a strong field between the gate electrode and the HEMT channel. Systematic investigation was done by measuring different designs of the sensor and gate bias, indicating that the ultra-high sensitivity and ultra-low detection limit are obtained only in a sufficiently strong field. A theoretical study of the sensitivity consistently agrees with the experimental findings and predicts the maximum and minimum sensitivity. The detection limit of our sensor is comparable to that of inductively coupled plasma mass spectrometry (ICP-MS), which also has a detection limit near 10^-10 M.
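The "ideal sensitivity" quoted above is the Nernstian slope per decade of ion activity, S = 2.303·R·T/(z·F), which for a divalent ion such as Pb2+ at 25 °C gives ~29.58 mV/decade. A quick sketch using textbook constants (this is a derivation of the quoted limit, not the authors' device model):

```python
# Ideal potentiometric (Nernstian) slope per decade of ion activity.
R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol
T = 298.15        # 25 C in kelvin

def nernst_slope_mV(z):
    """Ideal slope in mV per decade for an ion of charge z at 25 C."""
    return 2.303 * R * T / (z * F) * 1000.0

print(f"Pb2+ (z=2) ideal slope: {nernst_slope_mV(2):.2f} mV/decade")
print(f"monovalent (z=1) slope: {nernst_slope_mV(1):.2f} mV/decade")
```

The reported -36 mV/decade therefore exceeds the thermodynamic Nernst limit for Pb2+, which is the sense in which the strong-gate-field design is "super-Nernstian".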
Visual and statistical analysis of {sup 18}F-FDG PET in primary progressive aphasia
Energy Technology Data Exchange (ETDEWEB)
Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge [Hospital Clinico San Carlos, Department of Neurology, Madrid (Spain); Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis [San Carlos Health Research Institute (IdISSC) Complutense University of Madrid, Department of Nuclear Medicine, Hospital Clinico San Carlos, Madrid (Spain)
2015-05-01
Diagnosing primary progressive aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. There were 10 raters who analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA was high, averaging 87.8 and 89.9 % for visual analysis and 96.9 and 90.9 % for statistical analysis using global mean normalization, respectively. In cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)
Statistical Sensitive Data Protection and Inference Prevention with Decision Tree Methods
National Research Council Canada - National Science Library
Chang, LiWu
2003-01-01
.... We consider inference as correct classification and approach it with decision tree methods. As in our previous work, sensitive data are viewed as classes of those test data and non-sensitive data are the rest attribute values...
[Test and programme sensitivities of screening for colorectal cancer in Reggio Emilia].
Campari, Cinzia; Sassatelli, Romano; Paterlini, Luisa; Camellini, Lorenzo; Menozzi, Patrizia; Cattani, Antonella
2011-01-01
To estimate the sensitivity of the immunochemical test for faecal occult blood (FOBT) and the sensitivity of the colorectal tumour screening programme in the province of Reggio Emilia. Retrospective cohort study, including a sample of 80,357 people of both genders, aged 50-69, who underwent FOBT during the first round of the screening programme in the province of Reggio Emilia, from April 2005 to December 2007. The outcome measure was the incidence of interval cancers. The proportional incidence method was used to estimate the sensitivity of FOBT and of the screening programme. Data were stratified according to gender, age, and year of interval. The overall sensitivity of FOBT was 73.2% (95%CI 63.8-80.7). The sensitivity of FOBT was lower in females (70.5% vs 75.1%), higher in the 50-59 age group (78.6% vs 70.2%), and higher in the colon than in the rectum (75.1% vs 68.9%). The test had significantly higher sensitivity in the 1st year of the interval than in the 2nd (84.4% vs 60.5%; RR=0.39, 95%CI 0.22-0.70), a difference that was also confirmed when data were stratified according to gender. The overall sensitivity of the programme is 70.9% (95%CI 61.5-78.5). No statistically significant differences were shown when data were stratified according to gender, age, or site. Again, the sensitivity in the 1st year was significantly higher than in the 2nd year of the interval (83.2% vs 57.0%; RR=0.41, 95%CI 0.24-0.69). Overall, our data confirm the findings of similar Italian studies, although subgroup analysis showed some differences in sensitivity in our study.
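The proportional incidence method used above estimates sensitivity as one minus the ratio of observed interval cancers to the cancers expected in the screened cohort without screening (from background incidence rates). A minimal sketch with invented counts, not the Reggio Emilia data:

```python
def proportional_incidence_sensitivity(observed_interval, expected):
    """Sensitivity = 1 - O/E, where O is the number of interval cancers
    observed among screen-negatives and E is the number of cancers
    expected in the cohort in the absence of screening."""
    return 1.0 - observed_interval / expected

# Hypothetical example: 29 interval cancers observed vs. 100 expected.
s = proportional_incidence_sensitivity(observed_interval=29, expected=100)
print(f"estimated programme sensitivity: {s:.1%}")
```

Stratifying O and E by year of the interval is what produces the 1st-year vs. 2nd-year contrast reported above: interval cancers accumulate with time since the last negative test, so the apparent sensitivity falls in the 2nd year.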
The extraction and integration framework: a two-process account of statistical learning.
Thiessen, Erik D; Kronstein, Alexandra T; Hufnagle, Daniel G
2013-07-01
The term statistical learning in infancy research originally referred to sensitivity to transitional probabilities. Subsequent research has demonstrated that statistical learning contributes to infant development in a wide array of domains. The range of statistical learning phenomena necessitates a broader view of the processes underlying statistical learning. Learners are sensitive to a much wider range of statistical information than the conditional relations indexed by transitional probabilities, including distributional and cue-based statistics. We propose a novel framework that unifies learning about all of these kinds of statistical structure. From our perspective, learning about conditional relations outputs discrete representations (such as words). Integration across these discrete representations yields sensitivity to cues and distributional information. To achieve sensitivity to all of these kinds of statistical structure, our framework combines processes that extract segments of the input with processes that compare across these extracted items. In this framework, the items extracted from the input serve as exemplars in long-term memory. The similarity structure of those exemplars in long-term memory leads to the discovery of cues and categorical structure, which guides subsequent extraction. The extraction and integration framework provides a way to explain sensitivity to both conditional statistical structure (such as transitional probabilities) and distributional statistical structure (such as item frequency and variability), and also a framework for thinking about how these different aspects of statistical learning influence each other. 2013 APA, all rights reserved
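The conditional statistics at the root of this account, forward transitional probabilities P(Y|X) = freq(XY)/freq(X), can be illustrated in a few lines. The syllable stream below is invented in the style of infant statistical-learning stimuli, not materials from the paper:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probabilities P(next | current) over a stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# Two invented "words" (bi-da-ku, go-la-bu) concatenated without pauses:
# within-word transitions are deterministic, word boundaries are not.
stream = "bi da ku go la bu bi da ku bi da ku go la bu".split()
tps = transitional_probabilities(stream)

print(tps[("bi", "da")])   # within-word transition: 1.0
print(tps[("ku", "go")])   # across a word boundary: lower
```

The within-word transition has probability 1.0 while the boundary transition "ku → go" is only 2/3 in this stream; the dip in transitional probability is the cue to the word boundary that segmentation models exploit.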
Naik, Ganesh R; Kumar, Dinesh K
2011-01-01
The electromyography (EMG) signal provides information about the performance of muscles and nerves. The shape of the muscle signal and motor unit action potential (MUAP) varies due to movement of the electrode position or due to changes in contraction level. This research deals with evaluating the non-Gaussianity of the surface electromyogram (sEMG) signal using higher-order statistics (HOS) parameters. To achieve this, experiments were conducted for four different finger and wrist actions at different levels of maximum voluntary contraction (MVC). Our experimental analysis shows that at constant force and for non-fatiguing contractions, the probability density functions (PDF) of sEMG signals were non-Gaussian. For lower MVCs (below 30% of MVC), the PDF measures tend towards a Gaussian process. The above measures were verified by computing the kurtosis values for different MVCs.
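The HOS check described above can be sketched with excess kurtosis as a Gaussianity indicator: a Gaussian process has excess kurtosis near 0, while heavier-tailed signals deviate from it. The synthetic signals below stand in for real sEMG recordings:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for a Gaussian, positive for heavy tails."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**4) - 3.0)

rng = np.random.default_rng(0)
gaussian = rng.normal(size=200_000)     # stand-in for low-MVC sEMG
spiky = rng.laplace(size=200_000)       # heavier tails, like burst-y MUAPs

print(f"gaussian surrogate: {excess_kurtosis(gaussian):+.2f}")
print(f"laplace surrogate:  {excess_kurtosis(spiky):+.2f}")
```

The Gaussian surrogate scores near 0 and the Laplace surrogate near +3 (its theoretical excess kurtosis), which is the kind of separation the kurtosis test above relies on to distinguish contraction levels.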
A general solution strategy of modified power method for higher mode solutions
International Nuclear Information System (INIS)
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung
2016-01-01
A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous-energy Monte Carlo simulation. The new approach adopts four features: 1) eigen decomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) stabilization of statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as: 1) replacement of the cumbersome solution step of high-order polynomial equations required by Booth's original method with a simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous-energy Monte Carlo simulation and the higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: • Modified power method is applied to continuous energy Monte Carlo simulation. • Transfer matrix is introduced to generalize the modified power method. • All-mode-based population control is applied to get the higher eigenmodes. • Statistical fluctuations can be greatly reduced using accumulated tally results. • Fission source convergence is accelerated with higher mode solutions.
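The underlying idea can be shown with a deterministic toy: plain power iteration converges to the fundamental mode, and deflating that mode out of the operator lets the same iteration converge to the next eigenmode. The Monte Carlo method in the paper replaces this with weight cancellation and population control; the symmetric matrix below is an arbitrary example, not a transport operator:

```python
import numpy as np

def power_iteration(A, iters=500, seed=0):
    """Plain power iteration: dominant eigenpair of a matrix A."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v   # Rayleigh quotient, eigenvector

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam1, v1 = power_iteration(A)
# Deflation: subtract the fundamental mode, then iterate again to get
# the second mode (valid here because A is symmetric).
A_deflated = A - lam1 * np.outer(v1, v1)
lam2, _ = power_iteration(A_deflated)

print(f"dominant eigenvalue: {lam1:.4f}")
print(f"second eigenvalue:   {lam2:.4f}")
```

For this matrix the eigenvalues are 3 + sqrt(3), 3, and 3 - sqrt(3), and the two iterations recover the first two. The paper's contribution is making this kind of higher-mode extraction stable when the "matrix-vector product" is a noisy Monte Carlo transport sweep.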
Hacham, Yael; Matityahu, Ifat; Amir, Rachel
2017-07-01
Methionine is an essential amino acid whose low level limits the nutritional quality of plants. We formerly produced transgenic tobacco (Nicotiana tabacum) plants overexpressing CYSTATHIONINE γ-SYNTHASE (CGS) (FA plants), methionine's main regulatory enzyme. These plants accumulate significantly higher levels of methionine compared with wild-type (WT) plants. The aim of this study was to gain more knowledge about the effect of higher methionine content on the metabolic profile of vegetative tissue and on the morphological and physiological phenotypes. FA plants exhibit slightly reduced growth, and metabolic profiling analysis shows that they have higher contents of stress-related metabolites. Despite this, FA plants were more sensitive to short- and long-term oxidative stresses. In addition, compared with WT plants and transgenic plants expressing an empty vector, the primary metabolic profile of FA plants was altered less during oxidative stress. Based on the morphological and metabolic phenotypes, we propose that FA plants, having higher levels of methionine, suffer from stress even under non-stress conditions. This might be one of the reasons for their reduced ability to cope with oxidative stress when it appears. The observation that their metabolic profile is much less responsive to stress compared with control plants indicates that the delta change in metabolite contents between non-stress and stress conditions is important for enabling the plants to cope with stress conditions. © 2017 Scandinavian Plant Physiology Society.
National Center for Educational Statistics (DHEW/OE), Washington, DC.
In response to needs expressed by the community of higher education institutions, the National Center for Educational Statistics has produced early estimates of a selected group of mean salaries of instructional faculty in institutions of higher education in 1972-73. The number and salaries of male and female instructional staff by rank are of…
DEFF Research Database (Denmark)
Nielsen, Tine; Kreiner, Svend
Motivated by experience with students' psychological barriers to learning statistics, we modified and extended the Statistical Anxiety Rating Scale (STARS) to develop a contemporary Danish measure of attitudes and relationship to statistics for use with higher education students… with evidence of DIF in all cases: one TCA item functioned differentially relative to age, one WS item functioned differentially relative to statistics course (first or second), and two IA items functioned differentially relative to statistics course and academic discipline (sociology, public health…
A Role for Chunk Formation in Statistical Learning of Second Language Syntax
Hamrick, Phillip
2014-01-01
Humans are remarkably sensitive to the statistical structure of language. However, different mechanisms have been proposed to account for such statistical sensitivities. The present study compared adult learning of syntax and the ability of two models of statistical learning to simulate human performance: Simple Recurrent Networks, which learn by…
FEAST: sensitive local alignment with multiple rates of evolution.
Hudek, Alexander K; Brown, Daniel G
2011-01-01
We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions and balances specificity better. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves a sensitivity of 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves a sensitivity of 0.35 at the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59, but with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.
Directory of Open Access Journals (Sweden)
M. Amate
2007-01-01
An original algorithm for the detection of small objects in a noisy background is proposed. Its application to underwater object detection by sonar imaging is addressed. This new method is based on the use of higher-order statistics (HOS) that are locally estimated on the images. The proposed algorithm is divided into two steps. In the first step, HOS (skewness and kurtosis) are estimated locally using a square sliding computation window. Small deterministic objects have statistical properties different from those of the background and are thus highlighted. The influence of the signal-to-noise ratio (SNR) on the results is studied in the case of Gaussian noise. Mathematical expressions of the estimators and of the expected performances are derived and experimentally confirmed. In the second step, the results are focused by a matched filter using a theoretical model. This enables precise localization of the regions of interest. The proposed method generalizes to other statistical distributions, and we derive the theoretical expressions of the HOS estimators in the case of a Weibull distribution (both when only noise is present and when a small deterministic object is present within the filtering window). This enables the application of the proposed technique to the processing of synthetic aperture sonar data containing underwater mines whose echoes have to be detected and located. Results on real data sets are presented and quantitatively evaluated using receiver operating characteristic (ROC) curves.
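The first step of the algorithm above can be sketched in one dimension: local kurtosis in a sliding window flags the segment where a small deterministic object is buried in Gaussian noise. This uses a synthetic 1-D signal rather than a sonar image, and a line window rather than the square window of the paper:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
signal = rng.normal(size=2000)      # Gaussian background noise
signal[1000:1010] += 6.0            # small deterministic "object"

win = 50
# Local excess kurtosis over a sliding window (Fisher definition: 0 for
# pure Gaussian noise, large where the window catches the object's edge).
kurt_map = np.array([kurtosis(signal[i:i + win])
                     for i in range(len(signal) - win)])

peak = int(np.argmax(kurt_map))
print(f"highest local kurtosis in window starting near sample {peak}")
```

Windows that contain only a few object samples are the most non-Gaussian, so the kurtosis map peaks at the object's location; the paper's second step then refines this rough localization with a matched filter.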
Arismendi, Ivan; Johnson, Sherri L.; Dunham, Jason B.
2015-01-01
Statistics of central tendency and dispersion may not capture relevant or desired characteristics of the distribution of continuous phenomena and, thus, they may not adequately describe temporal patterns of change. Here, we present two methodological approaches that can help to identify temporal changes in environmental regimes. First, we use higher-order statistical moments (skewness and kurtosis) to examine potential changes of empirical distributions at decadal extents. Second, we adapt a statistical procedure combining a non-metric multidimensional scaling technique and higher density region plots to detect potentially anomalous years. We illustrate the use of these approaches by examining long-term stream temperature data from minimally and highly human-influenced streams. In particular, we contrast predictions about thermal regime responses to changing climates and human-related water uses. Using these methods, we effectively diagnose years with unusual thermal variability and patterns in variability through time, as well as spatial variability linked to regional and local factors that influence stream temperature. Our findings highlight the complexity of responses of thermal regimes of streams and reveal their differential vulnerability to climate warming and human-related water uses. The two approaches presented here can be applied with a variety of other continuous phenomena to address historical changes, extreme events, and their associated ecological responses.
Directory of Open Access Journals (Sweden)
Shallin Busch
2017-09-01
Information on ecosystem sensitivity to global change can help guide management decisions. Here, we characterize the sensitivity of the Puget Sound ecosystem to ocean acidification by estimating, at a number of taxonomic levels, the direct sensitivity of its species. We compare sensitivity estimates based on species mineralogy and on published literature from laboratory experiments and field studies. We generated information on the former by building a database of species in Puget Sound with mineralogy estimates for all CaCO3-forming species. For the latter, we relied on a recently developed database and meta-analysis on temperate species responses to increased CO2. In general, species sensitivity estimates based on the published literature suggest that calcifying species are more sensitive to increased CO2 than non-calcifying species. However, this generalization is incomplete, as non-calcifying species also show direct sensitivity to high CO2 conditions. We did not find a strong link between mineral solubility and the sensitivity of species survival to changes in carbonate chemistry, suggesting that, at coarse scales, mineralogy plays a lesser role to other physiological sensitivities. Summarizing species sensitivity at the family level resulted in higher sensitivity scalar scores than at the class level, suggesting that grouping results at the class level may overestimate species sensitivity. This result raises caution about the use of broad generalizations on species response to ocean acidification, particularly when developing summary information for specific locations. While we have much to learn about species response to ocean acidification and how to generalize ecosystem response, this study on Puget Sound suggests that detailed information on species performance under elevated carbon dioxide conditions, summarized at the lowest taxonomic level possible, is more valuable than information on species mineralogy.
Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model
Energy Technology Data Exchange (ETDEWEB)
Zhang, T. (Yale Univ., New Haven, CT (United States))
1994-06-01
A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations. 46 refs., 10 figs., 6 tabs.
Entangled-Pair Transmission Improvement Using Distributed Phase-Sensitive Amplification
Directory of Open Access Journals (Sweden)
Anjali Agarwal
2014-12-01
Full Text Available We demonstrate the transmission of time-bin entangled photon pairs through a distributed optical phase-sensitive amplifier (OPSA). We utilize four-wave mixing at telecom wavelengths in a 5-km dispersion-shifted fiber OPSA operating in the low-gain limit. Measurements of two-photon interference curves show no statistically significant degradation in the fringe visibility at the output of the OPSA. In addition, coincidence counting rates are higher than for direct passive transmission because of constructive interference between amplitudes of input photon pairs and those generated in the OPSA. Our results suggest that application of distributed phase-sensitive amplification to transmission of entangled photon pairs could be highly beneficial towards advancing the rate and scalability of future quantum communications systems.
Business statistics for dummies
Anderson, Alan
2013-01-01
Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations.
The semi-empirical low-level background statistics
International Nuclear Information System (INIS)
Tran Manh Toan; Nguyen Trieu Tu
1992-01-01
A semi-empirical low-level background statistic was proposed. It can be applied to evaluate the sensitivity of low-background systems and to analyse the statistical error and the 'Rejection' and 'Accordance' criteria for processing low-level experimental data. (author). 5 refs, 1 fig
An estimator for statistical anisotropy from the CMB bispectrum
International Nuclear Information System (INIS)
Bartolo, N.; Dimastrogiovanni, E.; Matarrese, S.; Liguori, M.; Riotto, A.
2012-01-01
Various data analyses of the Cosmic Microwave Background (CMB) provide observational hints of statistical isotropy breaking. Some of these features can be studied within the framework of primordial vector fields in inflationary theories, which generally display some level of statistical anisotropy both in the power spectrum and in higher-order correlation functions. Motivated by these observations and the recent theoretical developments in the study of primordial vector fields, we develop the formalism necessary to extract statistical anisotropy information from the three-point function of the CMB temperature anisotropy. We employ a simplified vector field model and parametrize the bispectrum of curvature fluctuations in such a way that all the information about statistical anisotropy is encoded in some parameters λ_LM (which measure the ratio of the anisotropic to the isotropic bispectrum amplitudes). For such a template bispectrum, we compute an optimal estimator for λ_LM and the expected signal-to-noise ratio. We estimate that, for f_NL ≅ 30, an experiment like Planck can be sensitive to a ratio of the anisotropic to the isotropic amplitudes of the bispectrum as small as 10%. Our results are complementary to the information coming from a power spectrum analysis and particularly relevant for those models where statistical anisotropy turns out to be suppressed in the power spectrum but not negligible in the bispectrum.
MIDAS: Regionally linear multivariate discriminative statistical mapping.
Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos
2018-07-01
statistical significance of the derived statistic by analytically approximating its null distribution without the need for computationally expensive permutation tests. The proposed framework was extensively validated using simulated atrophy in structural magnetic resonance imaging (MRI) and further tested using data from a task-based functional MRI study as well as a structural MRI study of cognitive performance. The performance of the proposed framework was evaluated against standard voxel-wise general linear models and other information mapping methods. The experimental results showed that MIDAS achieves relatively higher sensitivity and specificity in detecting group differences. Together, our results demonstrate the potential of the proposed approach to efficiently map effects of interest in both structural and functional data. Copyright © 2018. Published by Elsevier Inc.
Directory of Open Access Journals (Sweden)
Oktay Bilgir
2015-02-01
Full Text Available OBJECTIVE: The objective of this trial was to determine the levels of inflammatory markers, high-sensitivity C-reactive protein and fetuin-A pre- and post-levothyroxine treatment in cases of subclinical hypothyroidism. MATERIALS AND METHODS: A total of 32 patients with a diagnosis of subclinical hypothyroidism and a control group of 30 healthy individuals were tested for high-sensitivity C-reactive protein and fetuin-A, followed by the administration of 50 µg of levothyroxine in the patient group for 3 months. During the post-treatment stage, high-sensitivity C-reactive protein and fetuin-A levels in the patient group were re-assessed and compared with pre-treatment values. RESULTS: Pre-treatment levels of both high-sensitivity C-reactive protein and fetuin-A were observed to be higher in the patient group than in the control group. The decrease in high-sensitivity C-reactive protein levels during the post-treatment stage was not statistically significant. However, the decrease observed in post-treatment fetuin-A levels was found to be statistically significant. CONCLUSION: The decrease in fetuin-A levels in subclinical hypothyroidism cases indicates that levothyroxine treatment exerts anti-inflammatory and anti-apoptotic effects. Although the decrease in high-sensitivity C-reactive protein levels was statistically non-significant, it is predicted to reach significance with sustained treatment.
A statistical background noise correction sensitive to the steadiness of background noise.
Oppenheimer, Charles H
2016-10-01
A statistical background noise correction is developed for removing background noise contributions from measured source levels, producing a background noise-corrected source level. Like the standard background noise corrections of ISO 3741, ISO 3744, ISO 3745, and ISO 11201, the statistical background correction increases as the background level approaches the measured source level, decreasing the background noise-corrected source level. Unlike the standard corrections, the statistical background correction increases with steadiness of the background and is excluded from use when background fluctuation could be responsible for measured differences between the source and background noise levels. The statistical background noise correction has several advantages over the standard correction: (1) enveloping the true source with known confidence, (2) assuring physical source descriptions when measuring sources in fluctuating backgrounds, (3) reducing background corrected source descriptions by 1 to 8 dB for sources in steady backgrounds, and (4) providing a means to replace standardized background correction caps that incentivize against high precision grade methods.
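The standard corrections the abstract references (ISO 3744-style) subtract background energy from the measured level. A minimal sketch of that baseline correction, against which the statistical variant is developed; the 3 dB cutoff below is an illustrative cap, not the paper's rule:

```python
import math

def background_corrected_level(L_sp, L_b):
    """Standard energy-subtraction background correction (ISO 3744-style).

    L_sp: measured level of source plus background, in dB
    L_b:  measured background level, in dB
    Returns the background-corrected source level in dB.
    """
    delta = L_sp - L_b
    if delta <= 3.0:
        # Standards cap or forbid the correction when the background is
        # within a few dB of the measurement; the paper's statistical
        # correction is designed for exactly this regime.
        raise ValueError("background too close to source level")
    return 10.0 * math.log10(10.0 ** (L_sp / 10.0) - 10.0 ** (L_b / 10.0))

# The correction grows as the background approaches the source level:
print(background_corrected_level(60.0, 50.0))  # small correction
print(background_corrected_level(60.0, 56.0))  # larger correction
```

Note how the corrected level drops further the closer the background gets to the source measurement, which is the behavior the statistical correction modulates by the steadiness of the background.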
Fernández, Leandro; Monbaliu, Jaak; Onorato, Miguel; Toffoli, Alessandro
2014-05-01
This research is focused on the study of nonlinear evolution of irregular wave fields in water of arbitrary depth by comparing field measurements and numerical simulations. It is now well accepted that modulational instability, known as one of the main mechanisms for the formation of rogue waves, induces strong departures from Gaussian statistics. However, whereas non-Gaussian properties are remarkable when wave fields follow one direction of propagation over an infinite water depth, wave statistics only weakly deviate from Gaussianity when waves spread over a range of different directions. Over finite water depth, furthermore, wave instability attenuates overall and eventually vanishes for relative water depths as low as kh = 1.36 (where k is the wavenumber of the dominant waves and h the water depth). Recent experimental results, nonetheless, seem to indicate that oblique perturbations are capable of triggering and sustaining modulational instability even if kh < 1.36. The aim of this research is therefore to understand whether the combined effect of directionality and finite water depth has a significant effect on wave statistics and particularly on the occurrence of extremes. For this purpose, numerical experiments have been performed solving the Euler equation of motion with the Higher Order Spectral Method (HOSM) and compared with data of short-crested wave fields for different sea states observed at Lake George (Australia). A comparative analysis of the statistical properties (i.e., the density function of the surface elevation and its statistical moments, skewness and kurtosis) between simulations and in-situ data provides a comparison between the numerical developments and real observations in field conditions.
Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.
2011-12-01
Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses rather than global ones, because of computational time and the quantity of data produced. Local sensitivity analyses are limited in quantifying the higher-order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a global sensitivity analysis (GSA) on a commonly applied equation for soil loss - the Revised Universal Soil Loss Equation (RUSLE). The USLE is an empirical model built on plot-scale data from the USA, and the Revised version includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot- and watershed-scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot- and watershed-scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across a wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for the higher-order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks the highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses where the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. The three most important parameters were: Mass density of live and dead roots found in the upper inch
Exclusion statistics and integrable models
International Nuclear Information System (INIS)
Mashkevich, S.
1998-01-01
The definition of exclusion statistics, as given by Haldane, allows for a statistical interaction between distinguishable particles (multi-species statistics). The thermodynamic quantities for such statistics can be evaluated exactly. The explicit expressions for the cluster coefficients are presented. Furthermore, single-species exclusion statistics is realized in one-dimensional integrable models. The interesting questions of generalizing this correspondence to the higher-dimensional and the multi-species cases remain essentially open
Context Sensitive Modeling of Cancer Drug Sensitivity.
Directory of Open Access Journals (Sweden)
Bo-Juen Chen
Full Text Available Recent screening of drug sensitivity in large panels of cancer cell lines provides a valuable resource towards developing algorithms that predict drug response. Since more samples provide increased statistical power, most approaches to prediction of drug sensitivity pool multiple cancer types together without distinction. However, pan-cancer results can be misleading due to the confounding effects of tissues or cancer subtypes. On the other hand, independent analysis for each cancer type is hampered by small sample size. To balance this trade-off, we present CHER (Contextual Heterogeneity Enabled Regression), an algorithm that builds predictive models for drug sensitivity by selecting predictive genomic features and deciding which ones should (and should not) be shared across different cancers, tissues and drugs. CHER provides significantly more accurate models of drug sensitivity than comparable elastic-net-based models. Moreover, CHER provides better insight into the underlying biological processes by finding a sparse set of shared and type-specific genomic features.
The Relationship between Ethical Sensitivity, High Ability and Gender in Higher Education Students
Schutte, Ingrid; Wolfensberger, Marca; Tirri, Kirsi
2014-01-01
This study examined the ethical sensitivity of high-ability undergraduate students (n=731) in the Netherlands who completed the 28-item Ethical Sensitivity Scale Questionnaire (ESSQ) developed by Tirri & Nokelainen (2007; 2011). The ESSQ is based on Narvaez' (2001) operationalization of ethical sensitivity in seven dimensions. The following…
Time Series Analysis Based on Running Mann Whitney Z Statistics
A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
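A minimal sketch of the method described above: slide a window along the series, compare adjacent windows with a Mann-Whitney U, and normalize to Z. The abstract describes a Monte Carlo normalization; the standard large-sample normal approximation below is a stand-in for it.

```python
import math

def mann_whitney_z(x, y):
    """Mann-Whitney U for samples x vs y, normalized to a Z statistic
    using the large-sample normal approximation (the method in the
    abstract normalizes via Monte Carlo instead; this is a stand-in)."""
    m, n = len(x), len(y)
    # U counts (x_i, y_j) pairs with x_i < y_j, with 0.5 credit for ties
    u = sum(1.0 if xi < yj else 0.5 if xi == yj else 0.0
            for xi in x for yj in y)
    mean_u = m * n / 2.0
    sd_u = math.sqrt(m * n * (m + n + 1) / 12.0)
    return (u - mean_u) / sd_u

def running_mw_z(series, window):
    """Z statistic comparing each pair of adjacent windows in the series."""
    zs = []
    for start in range(0, len(series) - 2 * window + 1):
        a = series[start:start + window]
        b = series[start + window:start + 2 * window]
        zs.append(mann_whitney_z(a, b))
    return zs

# A monotonically increasing series yields strongly positive Z values:
trend = list(range(40))
print(max(running_mw_z(trend, 10)) > 3)  # True
```

Because the statistic depends only on rankings within each window, it is insensitive to outlier magnitudes, which is what makes the approach objective for noisy time series.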
Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K
2016-08-01
Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We have used our sEMG model of the Tibialis Anterior to interpret the age-related changes and compared them with the experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) participants performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as the experiment. The maximal power of the spectrum and the Gaussianity and Linearity test statistics were computed from the simulated and experimental sEMG. A correlation analysis at α=0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and Linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of 40% loss of motor units with the number of fast fibers halved best correlated with the age-related change observed in the experimental sEMG higher-order statistical features. The simulated aging condition found by this study corresponds with the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.
Lin, W.; Ren, P.; Zheng, H.; Liu, X.; Huang, M.; Wada, R.; Qu, G.
2018-05-01
The experimental measures of the multiplicity derivatives—the moment parameters, the bimodal parameter, the fluctuation of the maximum fragment charge number (normalized variance of Zmax, or NVZ), the Fisher exponent (τ), and the Zipf law parameter (ξ)—are examined to search for the liquid-gas phase transition in nuclear multifragmentation processes within the framework of the statistical multifragmentation model (SMM). The sensitivities of these measures are studied. All these measures predict a critical signature at or near the critical point, both for the primary and secondary fragments. Among these measures, the total multiplicity derivative and the NVZ provide accurate measures for the critical point from the final cold fragments as well as the primary fragments. The present study will provide a guide for future experiments and analyses in the study of the nuclear liquid-gas phase transition.
International Nuclear Information System (INIS)
Sharma, Deepak; Santosh Kumar, S.; Raghu, Rashmi; Maurya, D.K.; Sainis, K.B.
2007-01-01
Full text: It is well accepted that the sensitivity of mammalian cells is higher following whole body irradiation (WBI) than following in vitro irradiation. However, the underlying mechanisms are not well understood. Following WBI, lipid peroxidation and cell death were significantly higher in lymphocytes than in in vitro irradiated lymphocytes. Further, WBI treatment of tumor-bearing mice resulted in a significantly higher inhibition of EL-4 cell proliferation than in vitro irradiation of EL-4 cells. DNA repair was significantly slower in lymphocytes obtained from WBI-treated mice than in cells exposed to the same dose of radiation in vitro. Generation of nitric oxide following irradiation, and its role in the inhibition of DNA repair, have been reported; hence, its levels were estimated under both WBI and in vitro irradiation conditions. Nitric oxide levels were significantly elevated in the plasma of WBI-treated mice but not in the supernatant of in vitro irradiated cells. Addition of sodium nitroprusside (SNP), a nitric oxide donor, to in vitro irradiated cells inhibited the repair of DNA damage and sensitized cells to undergo cell death. It also enhanced the radiation-induced functional impairment of lymphocytes, as evidenced by suppression of mitogen-induced IL-2, IFN-γ and bcl-2 mRNA expression. Administration of NG-nitro-L-arginine methyl ester (L-NAME), a nitric oxide synthase inhibitor, to mice significantly protected lymphocytes against WBI-induced DNA damage and inhibited in vivo radiation-induced production of nitric oxide. Our results indicate that nitric oxide plays a role in the higher radiosensitivity of lymphocytes in vivo by inhibiting repair of DNA damage.
Bilgir, Oktay; Bilgir, Ferda; Topcuoglu, Tuba; Calan, Mehmet; Calan, Ozlem
2014-03-01
This study was designed to show the effect of propylthiouracil treatment on sCD40L, high-sensitivity C-reactive protein, and fetuin-A levels in subjects with subclinical hyperthyroidism. After checking the sCD40L, high-sensitivity C-reactive protein, and fetuin-A levels of 35 patients with subclinical hyperthyroidism, each was given 50 mg tablets of propylthiouracil three times daily. After 3 months, sCD40L, high-sensitivity C-reactive protein, and fetuin-A levels were then compared to the levels before treatment. Although high-sensitivity C-reactive protein and sCD40L levels were normal in the subclinical hyperthyroidism patients compared to the healthy controls, fetuin-A levels were statistically significantly higher (*p = 0.022). After treatment, fetuin-A levels of subclinical hyperthyroidism patients decreased statistically significantly compared to the levels before treatment (**p = 0.026). sCD40L and high-sensitivity C-reactive protein levels did not differ significantly, either from the control group or between pre- and post-propylthiouracil treatment. In subclinical hyperthyroidism patients, the high fetuin-A levels before propylthiouracil treatment and the decrease in these levels after treatment indicate the possibility of preventing long-term cardiac complications with propylthiouracil treatment.
Driving higher magnetic field sensitivity of the martensitic transformation in MnCoGe ferromagnet
Ma, S. C.; Ge, Q.; Hu, Y. F.; Wang, L.; Liu, K.; Jiang, Q. Z.; Wang, D. H.; Hu, C. C.; Huang, H. B.; Cao, G. P.; Zhong, Z. C.; Du, Y. W.
2017-11-01
The sharp metamagnetic martensitic transformation (MMT) triggered by a low critical field plays a pivotal role in the magnetoresponsive effects of ferromagnetic shape memory alloys (FSMAs). Here, a sharper magnetic-field-induced metamagnetic martensitic transformation (MFIMMT) is realized in Mn1-xCo1+xGe systems with a giant magnetocaloric effect around room temperature, exhibiting the lowest magnetic driving and completion fields, as well as the largest magnetization difference around the MFIMMT, reported heretofore in MnCoGe-based FSMAs. More interestingly, a reversible MFIMMT with field cycling is observed in the Mn0.965Co0.035Ge compound. These results challenge the consensus that the MMT in MnCoGe-based systems is difficult to trigger with a magnetic field. The origin of the enhanced sensitivity of the martensitic transformation to the magnetic field is discussed on the basis of the X-ray absorption spectroscopy results.
Statistics in the Workplace: A Survey of Use by Recent Graduates with Higher Degrees
Harraway, John A.; Barker, Richard J.
2005-01-01
A postal survey was conducted regarding statistical techniques, research methods and software used in the workplace by 913 graduates with PhD and Masters degrees in the biological sciences, psychology, business, economics, and statistics. The study identified gaps between topics and techniques learned at university and those used in the workplace,…
A Measure of Spiritual Sensitivity for Children
Stoyles, Gerard John; Stanford, Bonnie; Caputi, Peter; Keating, Alysha-Leigh; Hyde, Brendan
2012-01-01
Spirituality is an essential influence in a child's development. However, an age-appropriate measure of child's spiritual sensitivity is not currently available in the literature. This paper describes the development of a measure of children's spiritual sensitivity, the Spiritual Sensitivity Scale for Children (SSSC). Statistical analyses…
Performances of non-parametric statistics in sensitivity analysis and parameter ranking
International Nuclear Information System (INIS)
Saltelli, A.
1987-01-01
Twelve parametric and non-parametric sensitivity analysis techniques are compared in the case of non-linear model responses. The test models used are taken from the long-term risk analysis for the disposal of high-level radioactive waste in a geological formation. They describe the transport of radionuclides through a set of engineered and natural barriers from the repository to the biosphere and to man. The output data from these models are the dose rates affecting the maximum exposed individual of a critical group at a given point in time. All the techniques are applied to the output from the same Monte Carlo simulations, where a modified version of the Latin Hypercube method is used for the sample selection. Hypothesis testing is systematically applied to quantify the degree of confidence in the results given by the various sensitivity estimators. The estimators are ranked according to their robustness and stability, on the basis of two test cases. The conclusion is that no estimator can be considered the best from all points of view, and the use of more than one estimator in sensitivity analysis is recommended.
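One of the simplest non-parametric estimators in this family ranks inputs by the absolute Spearman rank correlation between each sampled parameter and the Monte Carlo output. A self-contained sketch; the toy model and parameter names are illustrative, not from the study:

```python
import random

def ranks(v):
    """Ranks of the values in v, with ties given the average rank."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Rank inputs of a toy nonlinear model by |Spearman| with the output:
random.seed(1)
xs1 = [random.random() for _ in range(200)]
xs2 = [random.random() for _ in range(200)]
out = [x1 ** 3 + 0.1 * x2 for x1, x2 in zip(xs1, xs2)]
scores = {"x1": abs(spearman(xs1, out)), "x2": abs(spearman(xs2, out))}
print(max(scores, key=scores.get))  # x1 dominates the ranking
```

Because it operates on ranks, this estimator remains stable under the monotone nonlinear responses that defeat simple linear-regression coefficients, which is part of why such studies recommend using several estimators side by side.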
Exclusion statistics and integrable models
International Nuclear Information System (INIS)
Mashkevich, S.
1998-01-01
The definition of exclusion statistics that was given by Haldane admits a 'statistical interaction' between distinguishable particles (multispecies statistics). For such statistics, thermodynamic quantities can be evaluated exactly; explicit expressions are presented here for cluster coefficients. Furthermore, single-species exclusion statistics is realized in one-dimensional integrable models of the Calogero-Sutherland type. The interesting questions of generalizing this correspondence to the higher-dimensional and the multispecies cases remain essentially open; however, our results provide some hints as to searches for the models in question
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
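The sensitivity, specificity, accuracy, and likelihood ratios discussed above all fall out of a single 2x2 table; a small illustrative sketch (the counts are made up):

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Basic diagnostic test statistics from a 2x2 contingency table."""
    sens = tp / (tp + fn)        # P(test positive | disease present)
    spec = tn / (tn + fp)        # P(test negative | disease absent)
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    acc = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sens, "specificity": spec,
            "LR+": lr_pos, "LR-": lr_neg, "accuracy": acc}

# Hypothetical screening result: 90 of 100 diseased test positive,
# 160 of 200 healthy test negative.
s = diagnostic_summary(tp=90, fp=40, fn=10, tn=160)
print(round(s["sensitivity"], 2), round(s["specificity"], 2), round(s["LR+"], 1))
# 0.9 0.8 4.5
```

Sweeping a test threshold and recomputing sensitivity against (1 - specificity) at each cut point is exactly what traces out the ROC curve mentioned in the review.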
Nuclear multifragmentation within the framework of different statistical ensembles
International Nuclear Information System (INIS)
Aguiar, C.E.; Donangelo, R.; Souza, S.R.
2006-01-01
The sensitivity of the statistical multifragmentation model to the underlying statistical assumptions is investigated. We concentrate on its microcanonical, canonical, and isobaric formulations. As far as average values are concerned, our results reveal that all the ensembles make very similar predictions, as long as the relevant macroscopic variables (such as temperature, excitation energy, and breakup volume) are the same in all statistical ensembles. It also turns out that the multiplicity dependence of the breakup volume in the microcanonical version of the model mimics a system at (approximately) constant pressure, at least in the plateau region of the caloric curve. However, in contrast to average values, our results suggest that the distributions of physical observables are quite sensitive to the statistical assumptions. This finding may help in deciding which hypothesis corresponds to the best picture for the freeze-out stage
The disagreeable behaviour of the kappa statistic.
Flight, Laura; Julious, Steven A
2015-01-01
It is often of interest to measure the agreement between a number of raters when an outcome is nominal or ordinal. The kappa statistic is used as a measure of agreement. The statistic is highly sensitive to the distribution of the marginal totals and can produce unreliable results. Other statistics such as the proportion of concordance, maximum attainable kappa and prevalence and bias adjusted kappa should be considered to indicate how well the kappa statistic represents agreement in the data. Each kappa should be considered and interpreted based on the context of the data being analysed. Copyright © 2014 John Wiley & Sons, Ltd.
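The marginal-total sensitivity the abstract warns about is easy to reproduce: two tables with identical observed agreement can yield very different kappa values. A minimal sketch with made-up counts:

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n                    # observed agreement
    p1 = ((a + b) * (a + c)) / n ** 2   # chance agreement, category 1
    p2 = ((c + d) * (b + d)) / n ** 2   # chance agreement, category 2
    pe = p1 + p2
    return (po - pe) / (1 - pe)

# Same observed agreement (90%), very different kappa once the
# marginal totals become unbalanced: the so-called kappa paradox.
balanced = [[45, 5], [5, 45]]     # marginals 50/50
unbalanced = [[85, 5], [5, 5]]    # marginals 90/10
print(round(cohens_kappa(balanced), 2))    # 0.8
print(round(cohens_kappa(unbalanced), 2))  # far lower, despite 90% agreement
```

This is why the abstract recommends reporting companions such as raw concordance and prevalence-and-bias-adjusted kappa alongside kappa itself.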
Yilmaz, Ferkan; Tabassum, Hina; Alouini, Mohamed-Slim
2014-01-01
Higher order statistics (HOS) of the channel capacity provide useful information regarding the level of reliability of signal transmission at a particular rate. In this paper, we propose a novel and unified analysis, which is based on the moment-generating function (MGF) approach, to efficiently and accurately compute the HOS of the channel capacity for amplify-and-forward (AF) multihop transmission over generalized fading channels. More precisely, our easy-to-use and tractable mathematical formalism requires only the reciprocal MGFs of the transmission hop signal-to-noise ratio (SNR). Numerical and simulation results, which are performed to exemplify the usefulness of the proposed MGF-based analysis, are shown to be in perfect agreement. © 2013 IEEE.
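As a numerical illustration of what those higher-order statistics measure, the sketch below Monte Carlo-estimates the first raw moments of a two-hop AF link's capacity. The harmonic-style end-to-end SNR bound, the Rayleigh fading assumption, and all numbers are illustrative; the paper itself obtains these moments analytically from the reciprocal MGFs of the per-hop SNRs.

```python
import math
import random

def af_two_hop_capacity_moments(avg_snr_db, n_orders=3, trials=20000, seed=7):
    """Monte Carlo estimate of the raw moments E[C^k] of the capacity
    C = log2(1 + gamma_end) for a 2-hop amplify-and-forward link over
    Rayleigh fading. gamma_end uses the common tight upper bound
    gamma1 * gamma2 / (gamma1 + gamma2 + 1). Illustrative sketch only."""
    rng = random.Random(seed)
    avg = 10 ** (avg_snr_db / 10)
    moments = [0.0] * n_orders
    for _ in range(trials):
        g1 = rng.expovariate(1 / avg)  # exponential SNR <=> Rayleigh fading
        g2 = rng.expovariate(1 / avg)
        c = math.log2(1 + g1 * g2 / (g1 + g2 + 1))
        for k in range(n_orders):
            moments[k] += c ** (k + 1)
    return [m / trials for m in moments]

m1, m2, m3 = af_two_hop_capacity_moments(10.0)
variance = m2 - m1 ** 2  # second central moment: capacity dispersion
print(m1 > 0 and variance > 0)  # True
```

The first moment is the ergodic capacity; the variance and higher central moments derived from these raw moments quantify how reliably a given rate can be sustained, which is the information the MGF-based analysis delivers in closed form.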
Statistical learning across development: Flexible yet constrained
Directory of Open Access Journals (Sweden)
Lauren eKrogh
2013-01-01
Full Text Available Much research in the past two decades has documented infants’ and adults' ability to extract statistical regularities from auditory input. Importantly, recent research has extended these findings to the visual domain, demonstrating learners' sensitivity to statistical patterns within visual arrays and sequences of shapes. In this review we discuss both auditory and visual statistical learning to elucidate both the generality of and constraints on statistical learning. The review first outlines the major findings of the statistical learning literature with infants, followed by discussion of statistical learning across domains, modalities, and development. The second part of this review considers constraints on statistical learning. The discussion focuses on two categories of constraint: constraints on the types of input over which statistical learning operates and constraints based on the state of the learner. The review concludes with a discussion of possible mechanisms underlying statistical learning.
International Nuclear Information System (INIS)
Parvan, A.S.
2016-01-01
The Tsallis statistics was applied to describe the experimental data on the transverse momentum distributions of hadrons. We considered the energy dependence of the parameters of the Tsallis-factorized statistics, which is now widely used for the description of the experimental transverse momentum distributions of hadrons, and the Tsallis statistics for the charged pions produced in pp collisions at high energies. We found that the results of the Tsallis-factorized statistics deviate from the results of the Tsallis statistics only at low NA61/SHINE energies when the value of the entropic parameter is close to unity. At higher energies, when the value of the entropic parameter deviates essentially from unity, the Tsallis-factorized statistics satisfactorily recovers the results of the Tsallis statistics. (orig.)
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by
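The CSS and PCC statistics described above can be sketched from a sensitivity (Jacobian) matrix alone. The two-parameter PCC below and the toy matrix are illustrative simplifications, not RZWQM output:

```python
import math

def css(jacobian):
    """Composite scaled sensitivity per parameter: the root-mean-square
    of the (scaled) sensitivities of all observations to that parameter."""
    n = len(jacobian)
    p = len(jacobian[0])
    return [math.sqrt(sum(row[j] ** 2 for row in jacobian) / n)
            for j in range(p)]

def pcc_2param(jacobian):
    """Parameter correlation coefficient for a 2-parameter problem,
    from the inverse of J^T J (a minimal sketch of the general case)."""
    a = sum(r[0] * r[0] for r in jacobian)
    b = sum(r[0] * r[1] for r in jacobian)
    d = sum(r[1] * r[1] for r in jacobian)
    det = a * d - b * b
    # parameter covariance is proportional to inv(J^T J) = [[d, -b], [-b, a]] / det
    return (-b / det) / math.sqrt((d / det) * (a / det))

# Two nearly redundant parameters produce |PCC| close to 1, flagging
# interdependence even though both CSS values look healthy on their own:
J = [[1.0, 0.99], [2.0, 1.98], [0.5, 0.51], [1.5, 1.49]]
print([round(s, 2) for s in css(J)], round(pcc_2param(J), 3))
```

This is the complementarity the abstract notes: CSS alone would pass both parameters, while PCC exposes that the data cannot estimate them independently.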
Probabilistic sensitivity analysis in health economics.
Baio, Gianluca; Dawid, A Philip
2015-12-01
Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.
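The decision-theoretic machinery behind probabilistic sensitivity analysis can be illustrated with a minimal Monte Carlo sketch. All inputs below (incremental cost and QALY distributions, willingness-to-pay threshold) are hypothetical placeholder values, not figures from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of probabilistic samples

# Hypothetical model: incremental cost and incremental effect (QALYs) of a
# new treatment, each with parameter uncertainty expressed as a distribution.
d_cost = rng.normal(1000.0, 300.0, n)  # incremental cost (assumed values)
d_qaly = rng.normal(0.05, 0.02, n)     # incremental effect (assumed values)

wtp = 30_000.0  # willingness-to-pay threshold per QALY (assumed)
inb = wtp * d_qaly - d_cost  # incremental net monetary benefit per sample

# Probability that the new treatment is cost-effective at this threshold
p_ce = np.mean(inb > 0)
print(f"P(cost-effective at WTP={wtp:.0f}) = {p_ce:.3f}")
```

Sweeping `wtp` over a grid and plotting `p_ce` against it yields the familiar cost-effectiveness acceptability curve.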
Jiang, Haiping; Marot, Julien; Fossati, Caroline; Bourennane, Salah
2011-12-01
In real-world conditions, contours are most often blurred in digital images because of acquisition conditions such as movement, the light transmission environment, and defocus. Among image segmentation methods, the Hough transform requires a computational load that increases with the number of noise pixels, level set methods also require a high computational load, and some other methods assume that the contours are one pixel wide. For the first time, we retrieve the characteristics of multiple, possibly concentric, blurred circles. We work in a correlated-noise environment to get closer to real-world conditions. For this, we model a blurred circle by a few parameters (center coordinates, radius, and spread) that characterize its mean position and gray-level variations. We derive the signal model that results from signal generation on a circular antenna. Linear antennas provide the center coordinates. To retrieve the circle radii, we adapt the second-order statistics TLS-ESPRIT method for a non-correlated noise environment, and propose a novel version of TLS-ESPRIT based on higher-order statistics for a correlated noise environment. Then, we derive a least-squares criterion and propose an alternating least-squares algorithm to retrieve simultaneously all spread values of concentric circles. Experiments performed on hand-made and real-world images show that the proposed methods outperform the Hough transform and a level set method dedicated to blurred contours in terms of computational load. Moreover, the proposed model and optimization method provide information about the contour gray-level variations.
Statistical data filtration in neutron coincidence counting
International Nuclear Information System (INIS)
Beddingfield, D.H.; Menlove, H.O.
1992-11-01
We assessed the effectiveness of statistical data filtration to minimize the contribution of matrix materials in 200-L drums to the nondestructive assay of plutonium. The following matrices were examined: polyethylene, concrete, aluminum, iron, cadmium, and lead. Statistical filtration of neutron coincidence data improved the low-end sensitivity of coincidence counters. Spurious data arising from electrical noise, matrix spallation, and geometric effects were smoothed in a predictable fashion by the statistical filter. The filter effectively lowers the minimum detectable mass limit that can be achieved for plutonium assay using passive neutron coincidence counting
Statistical learning of action: the role of conditional probability.
Meyer, Meredith; Baldwin, Dare
2011-12-01
Identification of distinct units within a continuous flow of human action is fundamental to action processing. Such segmentation may rest in part on statistical learning. In a series of four experiments, we examined what types of statistics people can use to segment a continuous stream involving many brief, goal-directed action elements. The results of Experiment 1 showed no evidence for sensitivity to conditional probability, whereas Experiment 2 displayed learning based on joint probability. In Experiment 3, we demonstrated that additional exposure to the input failed to engender sensitivity to conditional probability. However, the results of Experiment 4 showed that a subset of adults, namely those more successful at identifying actions that had been seen more frequently than comparison sequences, were also successful at learning conditional-probability statistics. These experiments help to clarify the mechanisms subserving processing of intentional action, and they highlight important differences from, as well as similarities to, prior studies of statistical learning in other domains, including language.
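The joint-versus-conditional distinction probed in these experiments can be made concrete with a short sketch. The action tokens and the stream below are invented for illustration; the point is that a pair can have a perfect conditional probability without being the most frequent pair overall:

```python
from collections import Counter

def pair_statistics(sequence):
    """Compute joint and conditional probabilities for adjacent pairs.

    Joint probability: fraction of all adjacent pairs equal to (a, b).
    Conditional probability: P(b | a) = count(a, b) / count(a as first element).
    """
    pairs = list(zip(sequence, sequence[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(a for a, _ in pairs)
    n_pairs = len(pairs)
    joint = {p: c / n_pairs for p, c in pair_counts.items()}
    cond = {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}
    return joint, cond

# Toy action stream: 'stir' is always followed by 'pour', so the conditional
# probability P(pour | stir) is 1.0 even though the pair occurs in only
# 2 of the 7 adjacent positions.
stream = ["stir", "pour", "wipe", "wipe", "stir", "pour", "wipe", "lift"]
joint, cond = pair_statistics(stream)
print(joint[("stir", "pour")], cond[("stir", "pour")])
```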
Energy Technology Data Exchange (ETDEWEB)
Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
The uncertainty evaluation with the statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data. Hence, a reliable uncertainty result can be obtained by analyzing the results of the numerous transport calculations. A known problem with the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a method of sampling the nuclear data from a lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed, and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling error. A stochastic cross-section sampling and writing program was also developed. For the sensitivity and uncertainty analysis, cross-section sampling was performed with both the normal and lognormal distributions. The uncertainties caused by the covariance of the (n,.) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem reported in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
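The lognormal sampling idea can be sketched as follows. This is an illustrative reconstruction, not the authors' code; it uses the standard moment-matching formulas to map a target mean and standard deviation onto lognormal parameters, so the sampled cross sections keep the requested first two moments while remaining strictly positive:

```python
import numpy as np

def sample_cross_section(mean, std, size, rng):
    """Sample a cross section from a lognormal distribution whose first two
    moments match the given mean and standard deviation.

    Unlike normal sampling, this never produces unphysical negative values,
    even when std is large relative to the mean.
    """
    sigma2 = np.log(1.0 + (std / mean) ** 2)  # moment matching
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)

rng = np.random.default_rng(42)
# Hypothetical cross section: mean 1.0 barn, 80% relative uncertainty --
# normal sampling would produce many negative values here.
xs = sample_cross_section(mean=1.0, std=0.8, size=100_000, rng=rng)

print(xs.min() > 0.0)  # lognormal samples are strictly positive
print(xs.mean())       # close to the requested mean
```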
Sensitivity and uncertainty analyses for performance assessment modeling
International Nuclear Information System (INIS)
Doctor, P.G.
1988-08-01
Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
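The deterministic and statistical approaches described above can be contrasted on a toy model. The function below is a hypothetical stand-in for a performance-assessment code; the deterministic branch uses numerical partial derivatives with first-order Taylor propagation of variance (assuming independent inputs), while the statistical branch uses plain Monte Carlo simulation:

```python
import numpy as np

def model(x):
    # Hypothetical nonlinear model standing in for a complex assessment code
    return x[0] ** 2 + 3.0 * x[1] + np.sin(x[2])

means = np.array([1.0, 2.0, 0.5])
stds = np.array([0.1, 0.2, 0.05])  # assumed input uncertainties

# Deterministic approach: central-difference partial derivatives, then
# first-order Taylor propagation: var(y) ~ sum_i (dy/dx_i)^2 * var(x_i).
h = 1e-6
grad = np.array([
    (model(means + h * e) - model(means - h * e)) / (2 * h)
    for e in np.eye(3)
])
var_taylor = np.sum((grad * stds) ** 2)

# Statistical approach: propagate sampled inputs through the model.
rng = np.random.default_rng(1)
samples = rng.normal(means, stds, size=(200_000, 3))
var_mc = np.var(model(samples.T))

print(var_taylor, var_mc)  # the two estimates should roughly agree
```

For a mildly nonlinear model the two variances nearly coincide; strong nonlinearity or large input uncertainties drive them apart, which is one practical reason for preferring the simulation-based estimate.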
Higher assimilation than respiration sensitivity to drought for a desert ecosystem in Central Asia.
Gu, Daxing; Otieno, Dennis; Huang, Yuqing; Wang, Quan
2017-12-31
Responses of ecosystem assimilation and respiration to global climate change vary considerably among terrestrial ecosystems, constrained by both biotic and abiotic factors. In this study, net CO2 exchange between ecosystem and atmosphere (NEE) was measured over a 4-year period (2013-2016) using eddy covariance technology in a desert ecosystem in Central Asia. Ecosystem assimilation (gross primary production, GPP) and respiration (Reco) were derived from NEE by fitting light response curves to day- and nighttime NEE data, and their responses to soil water content (SWC) and evaporative fraction (EF) were assessed during the growing season. Results indicated that both GPP and Reco decreased linearly with declining SWC, with the sensitivity of GPP to SWC being 3.8 times higher than that of Reco during the entire growing season. As a result, ecosystem CO2 sequestration capacity decreased from 4.00 μmol m-2 s-1 to 1.00 μmol m-2 s-1 with increasing soil drought. On a seasonal scale, a significant correlation between GPP and SWC was found only in spring, while that between Reco and SWC was found in all growing seasons, with the sensitivity increasing steadily from spring to autumn. EF had a low correlation with SWC, GPP, and Reco (R2 = 0.03, 0.02, and 0.05, respectively), indicating that EF was not a good proxy for soil drought and that energy partitioning was not tightly coupled to ecosystem carbon exchanges in this desert ecosystem. The study deepens our knowledge of ecosystem carbon exchange and its response to drought, as well as its coupling with ecosystem energy partitioning, in an extremely dry desert. The information is critical for better assessing carbon sequestration capacity in drylands, and for understanding their feedback to climate change. Copyright © 2017 Elsevier B.V. All rights reserved.
Statistical identification of effective input variables
International Nuclear Information System (INIS)
Vaurio, J.K.
1982-09-01
A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. The efficiency has been demonstrated by several examples and applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications
Contextual sensitivity in scientific reproducibility
Van Bavel, Jay J.; Mende-Siedlecki, Peter; Brady, William J.; Reinero, Diego A.
2016-01-01
In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researcher’s degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed “hidden moderators”) between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as those used in the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project on the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility. PMID:27217556
International Nuclear Information System (INIS)
Li Bo; Torossian, Artour; Sun, Yunguang; Du, Ruihong; Dicker, Adam P.; Lu Bo
2012-01-01
Purpose: c-Met is overexpressed in some non-small cell lung cancer (NSCLC) cell lines and tissues. Cell lines with higher levels of c-Met expression and phosphorylation depend on this receptor for survival. We studied the effects of AMG-458 on 2 NSCLC cell lines. Methods and Materials: 3-(4,5-Dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTS) assays assessed the sensitivities of the cells to AMG-458. Clonogenic survival assays illustrated the radiosensitizing effects of AMG-458. Western blot for cleaved caspase 3 measured apoptosis. Immunoblotting for c-Met, phospho-Met (p-Met), Akt/p-Akt, and Erk/p-Erk was performed to observe downstream signaling. Results: AMG-458 enhanced radiosensitivity in H441 but not in A549. H441 showed constitutive phosphorylation of c-Met. A549 expressed low levels of c-Met, which were phosphorylated only in the presence of exogenous hepatocyte growth factor. The combination of radiation therapy and AMG-458 treatment was found to synergistically increase apoptosis in the H441 cell line but not in A549. Radiation therapy, AMG-458, and combination treatment were found to reduce p-Akt and p-Erk levels in H441 but not in A549. H441 became less sensitive to AMG-458 after small interfering RNA knockdown of c-Met; there was no change in A549. After overexpression of c-Met, A549 became more sensitive, while H441 became less sensitive to AMG-458. Conclusions: AMG-458 was more effective in cells that expressed higher levels of c-Met/p-Met, suggesting that higher levels of c-Met and p-Met in NSCLC tissue may classify a subset of tumors that are more sensitive to molecular therapies against this receptor.
Energy Technology Data Exchange (ETDEWEB)
Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA
2016-04-08
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance
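A regression-based variance decomposition in the spirit of the generalized linear model method can be sketched as below. The three-parameter toy model and its coefficients are invented so that one parameter dominates, loosely mimicking the role of the volatility-transformation switch in the study:

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 250, 3  # mimic a 250-member ensemble over a few tunable parameters

# Plain uniform sampling stands in for the quasi-Monte Carlo design here.
theta = rng.uniform(0.0, 1.0, size=(n, k))

# Hypothetical model response: parameter 0 dominates the output.
y = (4.0 * theta[:, 0] + 1.0 * theta[:, 1] + 0.2 * theta[:, 2]
     + rng.normal(0.0, 0.1, n))

# Regress the output on the parameters and attribute output variance to
# each parameter via squared coefficients times input variance.
X = np.column_stack([np.ones(n), theta])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
contrib = (beta[1:] ** 2) * theta.var(axis=0)
share = contrib / contrib.sum()
print(np.round(share, 2))  # parameter 0 should dominate the variance share
```

This linear decomposition captures only first-order (additive) effects; interaction effects, like the on/off split analyzed in the study, require conditioning the ensemble on the switch as the authors do.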
Geometric statistical inference
International Nuclear Information System (INIS)
Periwal, Vipul
1999-01-01
A reparametrization-covariant formulation of the inverse problem of probability is explicitly solved for finite sample sizes. The inferred distribution is explicitly continuous for finite sample size. A geometric solution of the statistical inference problem in higher dimensions is outlined
Statistical learning in social action contexts.
Monroy, Claire; Meyer, Marlene; Gerson, Sarah; Hunnius, Sabine
2017-01-01
Sensitivity to the regularities and structure contained within sequential, goal-directed actions is an important building block for generating expectations about the actions we observe. Until now, research on statistical learning for actions has solely focused on individual action sequences, but many actions in daily life involve multiple actors in various interaction contexts. The current study is the first to investigate the role of statistical learning in tracking regularities between actions performed by different actors, and whether the social context characterizing their interaction influences learning. That is, are observers more likely to track regularities across actors if they are perceived as acting jointly as opposed to in parallel? We tested adults and toddlers to explore whether social context guides statistical learning and, if so, whether it does so from early in development. In a between-subjects eye-tracking experiment, participants were primed with a social context cue between two actors who either shared a goal of playing together ('Joint' condition) or stated the intention to act alone ('Parallel' condition). In subsequent videos, the actors performed sequential actions in which, for certain action pairs, the first actor's action reliably predicted the second actor's action. We analyzed predictive eye movements to upcoming actions as a measure of learning, and found that both adults and toddlers learned the statistical regularities across actors when their actions caused an effect. Further, adults with high statistical learning performance were sensitive to social context: those who observed actors with a shared goal were more likely to correctly predict upcoming actions. In contrast, there was no effect of social context in the toddler group, regardless of learning performance. These findings shed light on how adults and toddlers perceive statistical regularities across actors depending on the nature of the observed social situation and the
D'Alessio, Michael
2012-01-01
AP Statistics Crash Course - Gets You a Higher Advanced Placement Score in Less Time Crash Course is perfect for the time-crunched student, the last-minute studier, or anyone who wants a refresher on the subject. AP Statistics Crash Course gives you: Targeted, Focused Review - Study Only What You Need to Know Crash Course is based on an in-depth analysis of the AP Statistics course description outline and actual Advanced Placement test questions. It covers only the information tested on the exam, so you can make the most of your valuable study time. Our easy-to-read format covers: exploring da
Statistical screening of input variables in a complex computer code
International Nuclear Information System (INIS)
Krieger, T.J.
1982-01-01
A method is presented for "statistical screening" of input variables in a complex computer code. The object is to determine the "effective" or important input variables by estimating the relative magnitudes of their associated sensitivity coefficients. This is accomplished by performing a numerical experiment consisting of a relatively small number of computer runs with the code followed by a statistical analysis of the results. A formula for estimating the sensitivity coefficients is derived. Reference is made to an earlier work in which the method was applied to a complex reactor code with good results
Gene flow analysis method, the D-statistic, is robust in a wide parameter space.
Zheng, Yichen; Janke, Axel
2018-01-08
We evaluated the sensitivity of the D-statistic, a parsimony-like method widely used to detect gene flow between closely related species. This method has been applied to a variety of taxa with a wide range of divergence times. However, its parameter space, and thus its applicability to a wide taxonomic range, has not been systematically studied. Divergence time, population size, time of gene flow, distance of outgroup and number of loci were examined in a sensitivity analysis. The sensitivity study shows that the primary determinant of the D-statistic is the relative population size, i.e. the population size scaled by the number of generations since divergence. This is consistent with the fact that the main confounding factor in gene flow detection is incomplete lineage sorting, which dilutes the signal. The sensitivity of the D-statistic is also affected by the direction of gene flow, and by the size and number of loci. In addition, we examined the ability of the f-statistics, [Formula: see text] and [Formula: see text], to estimate the fraction of a genome affected by gene flow; while these statistics are difficult to apply to practical questions in biology because the timing of the gene flow is generally unknown, they can be used to compare datasets with identical or similar demographic backgrounds. The D-statistic, as a method to detect gene flow, is robust against a wide range of genetic distances (divergence times), but it is sensitive to population size. The D-statistic should only be applied with critical reservation to taxa where population sizes are large relative to branch lengths in generations.
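The D-statistic itself reduces to a simple count of discordant biallelic site patterns across four taxa. A minimal sketch (the site-pattern counts below are invented for illustration):

```python
def d_statistic(patterns):
    """Patterson's D from counts of ABBA and BABA site patterns.

    `patterns` is an iterable of 4-character strings for taxa (P1, P2, P3, O),
    where 'A' is the ancestral allele and 'B' the derived allele. Under
    incomplete lineage sorting alone, ABBA and BABA are equally likely and
    D is near 0; an excess of either pattern suggests gene flow involving P3.
    """
    abba = sum(1 for p in patterns if p == "ABBA")
    baba = sum(1 for p in patterns if p == "BABA")
    if abba + baba == 0:
        return 0.0
    return (abba - baba) / (abba + baba)

# Toy counts with an ABBA excess, as expected under P2-P3 gene flow;
# concordant BBAA sites do not enter the statistic.
sites = ["ABBA"] * 30 + ["BABA"] * 10 + ["BBAA"] * 60
print(d_statistic(sites))  # (30 - 10) / (30 + 10) = 0.5
```

In practice significance is assessed with a block jackknife over the genome, since neighboring sites are not independent.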
International Nuclear Information System (INIS)
Brekke, L.; Imbo, T.D.
1992-01-01
The authors study the inequivalent quantizations of (1 + 1)-dimensional nonlinear sigma models with space manifold S^1 and target manifold X. If X is multiply connected, these models possess topological solitons. After providing a definition of spin and statistics for these solitons and demonstrating a spin-statistics correlation, we give various examples where the solitons can have exotic statistics. In some of these models, the solitons may obey a generalized version of fractional statistics called ambistatistics. In this paper the relevance of these 2d models to the statistics of vortices in (2 + 1)-dimensional spontaneously broken gauge theories is discussed. The authors close with a discussion concerning the extension of our results to higher dimensions
Zhong, Lin-sheng; Tang, Cheng-cai; Guo, Hua
2010-07-01
Based on the statistical data of natural ecology and social economy in the Jinyintan Grassland Scenic Area in Qinghai Province in 2008, an evaluation index system for the ecological sensitivity of this area was established from the aspects of protected area rank, vegetation type, slope, and land use type. The ecological sensitivity of the sub-areas with higher tourism value and ecological function in the area was evaluated, and tourism function zoning of these sub-areas was carried out using GIS technology, based on analysis of the eco-environmental characteristics and ecological sensitivity of each sensitive sub-area. It was suggested that the Jinyintan Grassland Scenic Area could be divided into three ecological sensitivity sub-areas (high, moderate, and low), three tourism functional sub-areas (restricted-development ecotourism, moderate-development ecotourism, and mass tourism), and six tourism functional sub-areas (wetland protection, primitive ecological sightseeing, agriculture and pasture tourism, grassland tourism, town tourism, and rural tourism).
Gonzalez, Marlen Z.; Allen, Joseph P.; Coan, James A.
2016-01-01
Life history theory suggests that adult reward sensitivity should be best explained by childhood, but not current, socioeconomic conditions. In this functional magnetic resonance imaging (fMRI) study, 83 participants from a larger longitudinal sample completed the monetary incentive delay (MID) task in adulthood (~25 years old). Parent-reports of neighborhood quality and parental SES were collected when participants were 13 years of age. Current income level was collected concurrently with scanning. Lower adolescent neighborhood quality, but neither lower current income nor parental SES, was associated with heightened sensitivity to the anticipation of monetary gain in putative mesolimbic reward areas. Lower adolescent neighborhood quality was also associated with heightened sensitivity to the anticipation of monetary loss activation in visuo-motor areas. Lower current income was associated with heightened sensitivity to anticipated loss in occipital areas and the operculum. We tested whether externalizing behaviors in childhood or adulthood could better account for neighborhood quality findings, but they did not. Findings suggest that neighborhood ecology in adolescence is associated with greater neural reward sensitivity in adulthood above the influence of parental SES or current income and not mediated through impulsivity and externalizing behaviors. PMID:27838595
Li, Xiangyu; Cai, Hao; Wang, Xianlong; Ao, Lu; Guo, You; He, Jun; Gu, Yunyan; Qi, Lishuang; Guan, Qingzhou; Lin, Xu; Guo, Zheng
2017-10-13
To detect differentially expressed genes (DEGs) in small-scale cell line experiments, usually with only two or three technical replicates for each state, the commonly used statistical methods such as significance analysis of microarrays (SAM), limma and RankProd (RP) lack statistical power, while the fold change method lacks any statistical control. In this study, we demonstrated that the within-sample relative expression orderings (REOs) of gene pairs were highly stable among technical replicates of a cell line but often widely disrupted after certain treatments such as gene knockdown, gene transfection and drug treatment. Based on this finding, we customized the RankComp algorithm, previously designed for individualized differential expression analysis through REO comparison, to identify DEGs with certain statistical control for small-scale cell line data. In both simulated and real data, the new algorithm, named CellComp, exhibited high precision with much higher sensitivity than the original RankComp, SAM, limma and RP methods. Therefore, CellComp provides an efficient tool for analyzing small-scale cell line data. © The Author 2017. Published by Oxford University Press.
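The within-sample REO idea described above can be sketched in a few lines: record, per sample, which gene of each pair is expressed higher, keep only pairs stable across replicates, and flag pairs whose ordering flips after treatment. This is an illustrative toy, not the CellComp algorithm itself (the published method adds statistical control over the disruption counts), and all function names are invented for the example:

```python
from itertools import combinations

def reo_signs(profile):
    """Within-sample relative expression orderings (REOs): for each
    gene pair (i, j), is gene i expressed above gene j in this sample?"""
    return {
        (i, j): profile[i] > profile[j]
        for i, j in combinations(range(len(profile)), 2)
    }

def stable_pairs(replicates):
    """Gene pairs whose ordering is identical across all replicates."""
    orderings = [reo_signs(r) for r in replicates]
    first = orderings[0]
    return {
        pair for pair in first
        if all(o[pair] == first[pair] for o in orderings[1:])
    }

def disrupted_pairs(control_reps, treated_reps):
    """Pairs stable in both conditions whose ordering flips between
    control and treatment: a crude proxy for differential expression."""
    both_stable = stable_pairs(control_reps) & stable_pairs(treated_reps)
    ctrl_sign = reo_signs(control_reps[0])
    trt_sign = reo_signs(treated_reps[0])
    return {p for p in both_stable if ctrl_sign[p] != trt_sign[p]}
```

In real data one would additionally score the number of disrupted partners per gene against a null model, which is the part CellComp handles formally.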
Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona
2017-01-01
In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural
Directory of Open Access Journals (Sweden)
Manuela Paechter
2017-07-01
Brink, Anne O'Leary; Jacobs, Anne Burleigh
2011-01-01
This study compared measures of hand sensitivity and handwriting quality in children aged 10 to 12 years identified by their teachers as having nonproficient or proficient handwriting. We hypothesized that children with nonproficient handwriting have decreased kinesthetic sensitivity of the hands and digits. Sixteen subjects without documented motor or cognitive concerns were tested for kinesthetic sensitivity, discriminate tactile awareness, diadochokinesia, stereognosis, and graphesthesia. Eight children were considered to have nonproficient handwriting; 8 had proficient handwriting. Nonparametric Mann-Whitney U tests were used to identify differences between groups on sensory tests. The 2 groups showed a statistically significant difference in handwriting legibility (P = .018). No significant difference was found on tests of kinesthetic sensitivity or other measures of sensation. Children presenting with handwriting difficulty as the only complaint have similar sensitivity in hands and digits as those with proficient handwriting. Failure to detect differences may result from a small sample size.
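The nonparametric Mann-Whitney U statistic used for the group comparisons above can be computed directly from the raw scores. This is a plain sketch of the standard definition (with ties counting half a point), not the authors' analysis code:

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U for group xs versus group ys: the number of
    (x, y) pairs with x > y, counting ties as 0.5. Suitable for the
    small samples in studies like the one above, where normality of
    the sensory scores cannot be assumed."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

The p-value then comes from the exact U distribution (or a normal approximation for larger samples), which statistical packages provide.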
Spatial scan statistics using elliptic windows
DEFF Research Database (Denmark)
Christiansen, Lasse Engbo; Andersen, Jens Strodl; Wegener, Henrik Caspar
The spatial scan statistic is widely used to search for clusters in epidemiologic data. This paper shows that the usually applied elimination of secondary clusters as implemented in SaTScan is sensitive to smooth changes in the shape of the clusters. We present an algorithm for generation of a set...
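A minimal sketch of the scan-statistic machinery the abstract refers to, assuming Kulldorff's Poisson likelihood ratio and representing each candidate elliptic window as a point predicate. The brute-force scan and all names below are illustrative, not SaTScan's implementation:

```python
import math

def poisson_llr(c_in, e_in, c_tot, e_tot):
    """Kulldorff's Poisson log-likelihood ratio for one candidate window
    with c_in observed / e_in expected cases inside, out of c_tot / e_tot."""
    c_out, e_out = c_tot - c_in, e_tot - e_in
    if c_in == 0 or c_in * e_out <= c_out * e_in:
        return 0.0  # rate inside not elevated: not a cluster candidate
    if c_out == 0:
        return c_in * math.log(c_in / e_in)
    return c_in * math.log(c_in / e_in) + c_out * math.log(c_out / e_out)

def ellipse(cx, cy, a, b):
    """Axis-aligned elliptic window as a point predicate."""
    return lambda p: ((p[0] - cx) / a) ** 2 + ((p[1] - cy) / b) ** 2 <= 1.0

def best_elliptic_window(points, cases, expected, windows):
    """Scan candidate windows and return the one with the highest
    likelihood ratio (the 'most likely cluster')."""
    c_tot, e_tot = sum(cases), sum(expected)
    def score(w):
        inside = [k for k, p in enumerate(points) if w(p)]
        return poisson_llr(sum(cases[k] for k in inside),
                           sum(expected[k] for k in inside), c_tot, e_tot)
    return max(windows, key=score)
```

Secondary clusters are the next-highest-scoring windows that do not overlap the most likely one; the paper's point is that which windows survive that elimination step can change under smooth deformations of cluster shape.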
The Compton Camera - medical imaging with higher sensitivity Exhibition LEPFest 2000
2000-01-01
The Compton Camera reconstructs the origin of Compton-scattered X-rays using electronic collimation with Silicon pad detectors instead of the heavy conventional lead collimators in Anger cameras - reaching up to 200 times better sensitivity and a factor of two improvement in resolution. Possible applications are in cancer diagnosis, neurology, neurobiology, and cardiology.
Mischlinger, Johannes; Pitzinger, Paul; Veletzky, Luzia; Groger, Mirjam; Zoleko-Manego, Rella; Adegnika, Ayola A; Agnandji, Selidji T; Lell, Bertrand; Kremsner, Peter G; Tannich, Egbert; Mombo-Ngoma, Ghyslain; Mordmüller, Benjamin; Ramharter, Michael
2018-05-25
Diagnosis of malaria is usually based on samples of peripheral blood. However, it is unclear whether capillary (CAP) or venous (VEN) blood samples provide better diagnostic performance. Quantitative differences of parasitemia between CAP and VEN blood and diagnostic performance characteristics were investigated. Patients were recruited between September 2015 and February 2016 in Gabon. Light microscopy and qPCR quantified parasitemia of paired CAP and VEN samples, whose preparation followed the exact same methodology. CAP and VEN performance characteristics using microscopy were evaluated against a qPCR gold standard. Microscopy revealed a median (IQR) of 495 (85-3,243) parasites/µL in CAP and 429 (52-4,074) in VEN samples, manifesting in a +16.6% (p=0.04) higher CAP parasitemia compared with VEN parasitemia. Concordantly, qPCR demonstrated that 0.278 fewer cycles (p=0.006) were required for signal detection in CAP samples. CAP sensitivity of microscopy relative to the gold standard was 81.5% (77.4-85.6%) versus VEN sensitivity of 73.4% (68.8-78.1%), while CAP specificity and VEN specificity were 91%. CAP sensitivity and VEN sensitivity dropped to 63.3% and 45.9%, respectively, for a sub-population of low-level parasitemias, while specificities were 92%. CAP sampling leads to higher parasitemias compared to VEN sampling and improves diagnostic sensitivity. These findings may have important implications for routine diagnostics, research and elimination campaigns of malaria.
International Nuclear Information System (INIS)
Saber, Ahmed Yousuf; Chakraborty, Shantanu; Abdur Razzak, S.M.; Senjyu, Tomonobu
2009-01-01
This paper presents a modified particle swarm optimization (MPSO) for the constrained economic load dispatch (ELD) problem. Real cost functions are more complex than conventional second order cost functions when multi-fuel operations, valve-point effects, accurate curve fitting, etc., are considered in a deregulated, changing market. The proposed modified particle swarm optimization (PSO) consists of a problem-dependent variable number of promising values (in the velocity vector), a unit vector, and an error- and iteration-dependent step length. It reliably and accurately tracks a continuously changing solution of the complex cost function, and no extra effort is needed for the complex higher order cost polynomials in ELD. Constraint management is incorporated into the modified PSO. The modified PSO balances local and global searching abilities, and an appropriate fitness function helps it converge quickly. To prevent the method from freezing, stagnated/idle particles are reset. Sensitivity of the higher order cost polynomials is also analyzed visually to show the importance of the higher order cost polynomials for the optimization of ELD. Finally, benchmark data sets and methods are used to show the effectiveness of the proposed method. (author)
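The velocity-update core that any PSO variant, including the modified one above, builds on can be sketched as follows. The inertia and acceleration constants are generic textbook values rather than the paper's tuned parameters, and a one-dimensional cost stands in for the multi-unit ELD problem with its fuel and valve-point terms:

```python
import random

def pso_minimize(cost, lo, hi, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm sketch for a 1-D cost curve on [lo, hi].
    Real ELD optimizes a vector of generator outputs under power-balance
    and ramp constraints; here clamping to [lo, hi] is the only
    constraint handling."""
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                      # each particle's best position
    gbest = min(xs, key=cost)          # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # inertia + cognitive pull (pbest) + social pull (gbest)
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # keep feasible
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=cost)
    return gbest
```

The paper's modifications (promising velocity values, unit vector, error- and iteration-dependent step length, resetting idle particles) all plug into this same loop.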
Nassiri, Nader; Sheibani, Kourosh; Azimi, Abbas; Khosravi, Farinaz Mahmoodi; Heravian, Javad; Yekta, Abasali; Moghaddam, Hadi Ostadi; Nassiri, Saman; Yasseri, Mehdi; Nassiri, Nariman
2015-10-01
To compare refractive outcomes, contrast sensitivity, higher-order aberrations (HOAs), and patient satisfaction after photorefractive keratectomy for correction of moderate myopia with two methods: tissue saving versus wavefront optimized. In this prospective, comparative study, 152 eyes (80 patients) with moderate myopia with and without astigmatism were randomly divided into two groups: the tissue-saving group (Technolas 217z Zyoptix laser; Bausch & Lomb, Rochester, NY) (76 eyes of 39 patients) or the wavefront-optimized group (WaveLight Allegretto Wave Eye-Q laser; Alcon Laboratories, Inc., Fort Worth, TX) (76 eyes of 41 patients). Preoperative and 3-month postoperative refractive outcomes, contrast sensitivity, HOAs, and patient satisfaction were compared between the two groups. The mean spherical equivalent was -4.50 ± 1.02 diopters. No statistically significant differences were detected between the groups in terms of uncorrected and corrected distance visual acuity and spherical equivalent preoperatively and 3 months postoperatively. No statistically significant differences were seen in the amount of preoperative to postoperative contrast sensitivity changes between the two groups in photopic and mesopic conditions. HOAs and Q factor increased in both groups postoperatively (P = .001), with the tissue-saving method causing more increases in HOAs (P = .007) and Q factor (P = .039). Patient satisfaction was comparable between both groups. Both platforms were effective in correcting moderate myopia with or without astigmatism. No difference in refractive outcome, contrast sensitivity changes, and patient satisfaction between the groups was observed. Postoperatively, the tissue-saving method caused a higher increase in HOAs and Q factor compared to the wavefront-optimized method, which could be due to larger optical zone sizes in the tissue-saving group. Copyright 2015, SLACK Incorporated.
Spatial scan statistics using elliptic windows
DEFF Research Database (Denmark)
Christiansen, Lasse Engbo; Andersen, Jens Strodl; Wegener, Henrik Caspar
2006-01-01
The spatial scan statistic is widely used to search for clusters. This article shows that the usually applied elimination of secondary clusters as implemented in SaTScan is sensitive to smooth changes in the shape of the clusters. We present an algorithm for generation of a set of confocal elliptic...
Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Fantino, Bruno; Levinoff, Elise J; Allali, Gilles; Beauchet, Olivier
2016-01-01
To compare performance criteria (i.e., sensitivity, specificity, positive predictive value, negative predictive value, area under the receiver operating characteristic curve and accuracy) of linear and non-linear statistical models for fall risk in older community-dwellers. Participants were recruited in two large population-based studies, "Prévention des Chutes, Réseau 4" (PCR4, n=1760, cross-sectional design, retrospective collection of falls) and "Prévention des Chutes Personnes Agées" (PCPA, n=1765, cohort design, prospective collection of falls). Six linear statistical models (i.e., logistic regression, discriminant analysis, Bayes network algorithm, decision tree, random forest, boosted trees), three non-linear statistical models corresponding to artificial neural networks (multilayer perceptron, genetic algorithm and neuroevolution of augmenting topologies [NEAT]) and the adaptive neuro-fuzzy inference system (ANFIS) were used. Falls ≥1 characterizing fallers and falls ≥2 characterizing recurrent fallers were used as outcomes. Data from the two studies were analyzed separately and together. NEAT and ANFIS had better performance criteria compared to other models. The highest performance criteria were reported with NEAT when using the PCR4 database and falls ≥1, and with both NEAT and ANFIS when pooling data together and using falls ≥2. However, sensitivity and specificity were unbalanced. Sensitivity was higher than specificity when identifying fallers, whereas the converse was found when predicting recurrent fallers. Our results showed that NEAT and ANFIS were the non-linear statistical models with the best performance criteria for the prediction of falls, but their sensitivity and specificity were unbalanced, underscoring that the models should be used respectively for the screening of fallers and the diagnosis of recurrent fallers. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
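The performance criteria compared above (apart from the area under the ROC curve) all derive from a single 2x2 confusion table of predicted versus observed falls. A minimal illustrative helper, not the study's code:

```python
def performance_criteria(tp, fp, fn, tn):
    """Sensitivity, specificity, positive/negative predictive value and
    accuracy from confusion-table counts: tp true positives, fp false
    positives, fn false negatives, tn true negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of fallers detected
        "specificity": tn / (tn + fp),   # fraction of non-fallers cleared
        "ppv": tp / (tp + fp),           # precision of a positive call
        "npv": tn / (tn + fn),           # reliability of a negative call
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

The abstract's "unbalanced" finding corresponds to sensitivity and specificity moving in opposite directions as the decision threshold of a model shifts.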
Energy Technology Data Exchange (ETDEWEB)
Baliatsas, Christos, E-mail: c.baliatsas@nivel.nl [Netherlands Institute for Health Services Research (NIVEL), Utrecht (Netherlands); Kamp, Irene van, E-mail: irene.van.kamp@rivm.nl [National Institute for Public Health and the Environment (RIVM), Bilthoven (Netherlands); Swart, Wim, E-mail: wim.swart@rivm.nl [National Institute for Public Health and the Environment (RIVM), Bilthoven (Netherlands); Hooiveld, Mariëtte, E-mail: m.hooiveld@nivel.nl [Netherlands Institute for Health Services Research (NIVEL), Utrecht (Netherlands); Yzermans, Joris, E-mail: J.Yzermans@nivel.nl [Netherlands Institute for Health Services Research (NIVEL), Utrecht (Netherlands)
2016-10-15
Epidemiological evidence on the symptomatic profile, health status and illness behavior of people with subjective sensitivity to noise is still scarce. Also, it is unknown to what extent noise sensitivity co-occurs with other environmental sensitivities such as multi-chemical sensitivity and sensitivity to electromagnetic fields (EMF). A cross-sectional study performed in the Netherlands, combining self-administered questionnaires and electronic medical records of non-specific symptoms (NSS) registered by general practitioners (GP) allowed us to explore this further. The study sample consisted of 5806 participants, drawn from 21 general practices. Among participants, 722 (12.5%) responded “absolutely agree” to the statement “I am sensitive to noise”, comprising the high noise-sensitive (HNS) group. Compared to the rest of the sample, people in the HNS group reported significantly higher scores on number and duration of self-reported NSS, increased psychological distress, decreased sleep quality and general health, more negative symptom perceptions and higher prevalence of healthcare contacts, GP-registered NSS and prescriptions for antidepressants and benzodiazepines. These results remained robust after adjustment for demographic, residential and lifestyle characteristics, objectively measured nocturnal noise exposure from road-traffic and GP-registered morbidity. Co-occurrence rates with other environmental sensitivities varied between 9% and 50%. Individuals with self-declared sensitivity to noise are characterized by high prevalence of multiple NSS, poorer health status and increased illness behavior independently of noise exposure levels. Findings support the notion that different types of environmental sensitivities partly overlap. - Highlights: • People with self-reported noise sensitivity experience multiple non-specific symptoms. • They also report comparatively poorer health and increased illness behavior. • Co-occurrence with other
International Nuclear Information System (INIS)
Baliatsas, Christos; Kamp, Irene van; Swart, Wim; Hooiveld, Mariëtte; Yzermans, Joris
2016-01-01
Sensitivity of submersed freshwater macrophytes and endpoints in laboratory toxicity tests
International Nuclear Information System (INIS)
Arts, Gertie H.P.; Belgers, J. Dick M.; Hoekzema, Conny H.; Thissen, Jac T.N.M.
2008-01-01
The toxicological sensitivity and variability of a range of macrophyte endpoints were statistically tested with data from chronic, non-axenic, macrophyte toxicity tests. Five submersed freshwater macrophytes, four pesticides/biocides and 13 endpoints were included in the statistical analyses. Root endpoints, reflecting root growth, were most sensitive in the toxicity tests, while endpoints relating to biomass, growth and shoot length were less sensitive. The endpoints with the lowest coefficients of variation were not necessarily the endpoints that were toxicologically most sensitive. Differences in sensitivity were in the range of 10-1000 for different macrophyte-specific endpoints. No macrophyte species was consistently the most sensitive. Criteria to select endpoints in macrophyte toxicity tests should include toxicological sensitivity, variance and ecological relevance. Hence, macrophyte toxicity tests should comprise an array of endpoints, including very sensitive endpoints like those relating to root growth. - A range of endpoints is more representative of macrophyte fitness than biomass and growth only
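The coefficient of variation used above to rank endpoint variability is simply the standard deviation scaled by the mean; the abstract's point is that a low-CV endpoint is not automatically a toxicologically sensitive one. A one-function sketch (illustrative only):

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean. A dimensionless measure of
    endpoint variability, comparable across endpoints with different
    units (root length, biomass, shoot length, ...)."""
    return statistics.stdev(values) / statistics.mean(values)
```

Ranking endpoints by CV and separately by their effect concentrations would reproduce the comparison the study makes between variability and sensitivity.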
Parental overprotection increases interpersonal sensitivity in healthy subjects.
Otani, Koichi; Suzuki, Akihito; Matsumoto, Yoshihiko; Kamata, Mitsuhiro
2009-01-01
The effect of parental rearing on interpersonal sensitivity was studied in 469 Japanese volunteers. Perceived parental rearing was assessed by the Parental Bonding Instrument, which consists of the factors of care and protection, and interpersonal sensitivity was measured by the Interpersonal Sensitivity Measure (IPSM). In male subjects, higher IPSM scores were related to higher scores of paternal protection (P < .01) and maternal protection (P < .05). In female subjects, higher IPSM scores were related to higher scores of maternal protection (P < .001). The present study suggests that in both males and females, interpersonal sensitivity is increased by high protection of the same-sex parents and that in males there is an additional effect of high maternal protection.
High-Sensitivity GaN Microchemical Sensors
Son, Kyung-ah; Yang, Baohua; Liao, Anna; Moon, Jeongsun; Prokopuk, Nicholas
2009-01-01
Systematic studies have been performed on the sensitivity of GaN HEMT (high electron mobility transistor) sensors using various gate electrode designs and operational parameters. The results here show that a higher sensitivity can be achieved with a larger W/L ratio (W = gate width, L = gate length) at a given D (D = source-drain distance), and multi-finger gate electrodes offer a higher sensitivity than a one-finger gate electrode. In terms of operating conditions, sensor sensitivity is strongly dependent on transconductance of the sensor. The highest sensitivity can be achieved at the gate voltage where the slope of the transconductance curve is the largest. This work provides critical information about how the gate electrode of a GaN HEMT, which has been identified as the most sensitive among GaN microsensors, needs to be designed, and what operation parameters should be used for high sensitivity detection.
Kleijnen, J.P.C.
1995-01-01
This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
Directory of Open Access Journals (Sweden)
Svetlana V. Smirnova
2013-01-01
Full Text Available The features of using information technologies for applied statistics in psychology are considered in the article. Requirements for the statistical preparation of psychology students in the conditions of the information society are analyzed.
Statistical emulation of a tsunami model for sensitivity analysis and uncertainty quantification
Directory of Open Access Journals (Sweden)
A. Sarri
2012-06-01
Full Text Available Due to the catastrophic consequences of tsunamis, early warnings need to be issued quickly in order to mitigate the hazard. Additionally, there is a need to represent the uncertainty in the predictions of tsunami characteristics corresponding to the uncertain trigger features (e.g. either position, shape and speed of a landslide, or sea floor deformation associated with an earthquake). Unfortunately, computer models are expensive to run. This leads to significant delays in predictions and makes the uncertainty quantification impractical. Statistical emulators run almost instantaneously and may represent well the outputs of the computer model. In this paper, we use the outer product emulator to build a fast statistical surrogate of a landslide-generated tsunami computer model. This Bayesian framework enables us to build the emulator by combining prior knowledge of the computer model properties with a few carefully chosen model evaluations. The good performance of the emulator is validated using the leave-one-out method.
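The leave-one-out validation mentioned at the end works the same way for any surrogate: refit without one model run, predict it, record the error. The sketch below uses an ordinary least-squares line as a stand-in emulator; the paper's outer product emulator is not reproduced here:

```python
def fit_linear(xs, ys):
    """Ordinary least squares line y = a + b*x (a stand-in emulator
    for a scalar tsunami output as a function of one trigger feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loo_errors(xs, ys):
    """Leave-one-out validation: refit the emulator without point i and
    record its prediction error at the held-out input."""
    errs = []
    for i in range(len(xs)):
        a, b = fit_linear(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append(ys[i] - (a + b * xs[i]))
    return errs
```

Small, unstructured leave-one-out errors indicate the emulator interpolates the expensive model runs well enough to stand in for them during uncertainty quantification.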
STATLIB, Interactive Statistics Program Library of Tutorial System
International Nuclear Information System (INIS)
Anderson, H.E.
1986-01-01
1 - Description of program or function: STATLIB is a conversational statistical program library developed in conjunction with a Sandia National Laboratories applied statistics course intended for practicing engineers and scientists. STATLIB is a group of 15 interactive, argument-free, statistical routines. Included are analysis of sensitivity tests; sample statistics for the normal, exponential, hypergeometric, Weibull, and extreme value distributions; three models of multiple regression analysis; x-y data plots; exact probabilities for RxC tables; n sets of m permuted integers in the range 1 to m; simple linear regression and correlation; K different random integers in the range m to n; and Fisher's exact test of independence for a 2 by 2 contingency table. Forty-five other subroutines in the library support the basic 15
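Of the routines listed, Fisher's exact test of independence for a 2 by 2 contingency table is compact enough to sketch in full. This is a pure-Python illustration of the standard two-sided test (summing hypergeometric probabilities no larger than the observed table's), not STATLIB's own implementation:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the table [[a, b], [c, d]].
    With margins fixed, the first cell follows a hypergeometric
    distribution; the p-value sums all tables at least as extreme
    (probability <= that of the observed table)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):  # P(first cell = x) under fixed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs + 1e-12)
```

For the table [[3, 1], [1, 3]] this yields 34/70 ≈ 0.486, matching standard statistical packages.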
Wylde, Vikki; Sayers, Adrian; Lenguerrand, Erik; Gooberman-Hill, Rachael; Pyke, Mark; Beswick, Andrew D.; Dieppe, Paul; Blom, Ashley W.
2015-01-01
Abstract Chronic pain after joint replacement is common, affecting approximately 10% of patients after total hip replacement (THR) and 20% of patients after total knee replacement (TKR). Heightened generalized sensitivity to nociceptive input could be a risk factor for the development of this pain. The primary aim of this study was to investigate whether preoperative widespread pain sensitivity was associated with chronic pain after joint replacement. Data were analyzed from 254 patients receiving THR and 239 patients receiving TKR. Pain was assessed preoperatively and at 12 months after surgery using the Western Ontario and McMaster Universities Osteoarthritis Pain Scale. Preoperative widespread pain sensitivity was assessed through measurement of pressure pain thresholds (PPTs) at the forearm using an algometer. Statistical analysis was conducted using linear regression and linear mixed models, and adjustments were made for confounding variables. In both the THR and TKR cohort, lower PPTs (heightened widespread pain sensitivity) were significantly associated with higher preoperative pain severity. Lower PPTs were also significantly associated with higher pain severity at 12 months after surgery in the THR cohort. However, PPTs were not associated with the change in pain severity from preoperative to 12 months postoperative in either the TKR or THR cohort. These findings suggest that although preoperative widespread pressure pain sensitivity is associated with pain severity before and after joint replacement, it is not a predictor of the amount of pain relief that patients gain from joint replacement surgery, independent of preoperative pain severity. PMID:25599300
Zimmerman, Mark; Hrabosky, Joshua I; Francione, Caren; Young, Diane; Chelminski, Iwona; Dalrymple, Kristy; Galione, Janine N
2011-01-01
Obesity is associated with several symptoms that are components of the diagnostic criteria for major depressive disorder (MDD). Compared with nonobese individuals, obese individuals report more fatigue, sleep disturbance, and overeating. Obesity might, therefore, impact the psychometric properties of the MDD criteria. The goal of the present report from the Rhode Island Hospital Methods to Improve Diagnostic Assessment and Services project was to examine the impact of obesity on the psychometric characteristics of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition symptom criteria for major depression. Two thousand four hundred forty-eight psychiatric outpatients were administered a semistructured diagnostic interview. We inquired about all symptoms of depression for all patients. The mean sensitivity of the 9 criteria in the nonobese and obese patients was nearly identical (74.6% vs 74.3%). The mean specificity was slightly higher in the nonobese patients (82.0% vs 79.5%). No symptom was more specific in the obese than the nonobese patients, whereas the specificity of increased appetite, increased weight, and fatigue was more than 5% lower in the obese patients. Increased appetite, increased weight, hypersomnia, and fatigue had a higher sensitivity in the obese than the nonobese patients, whereas decreased appetite, weight loss, and diminished concentration had a higher sensitivity in the nonobese than the obese patients. Thus, although there were small differences between obese and nonobese patients in the operating characteristics of some symptoms, the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria for MDD generally performed equally well for obese and nonobese patients. Copyright © 2011 Elsevier Inc. All rights reserved.
Aortic Baroreceptors Display Higher Mechanosensitivity than Carotid Baroreceptors
Directory of Open Access Journals (Sweden)
Eva On-Chai Lau
2016-08-01
Arterial baroreceptors are mechanical sensors that detect blood pressure changes. It has long been suggested that the two arterial baroreceptors, aortic and carotid baroreceptors, have different pressure sensitivities. However, there is no consensus as to which of the arterial baroreceptors is more sensitive to changes in blood pressure. In the present study, we employed independent methods to compare the pressure sensitivity of the two arterial baroreceptors. Firstly, pressure-activated action potential firing was measured by whole-cell current clamp with a high-speed pressure clamp system in primary cultured baroreceptor neurons. The results show that aortic depressor neurons included a higher percentage of mechanosensitive neurons. Furthermore, aortic baroreceptor neurons showed a lower pressure threshold than carotid baroreceptor neurons. Secondly, uniaxial stretching of baroreceptor neurons, which mimics the forces exerted on blood vessels, elicited a larger intracellular Ca2+ rise in aortic baroreceptor neurons than in carotid baroreceptor neurons. Thirdly, the pressure-induced action potential firing in the aortic depressor nerve recorded in vivo was also higher. The present study therefore provides a basic physiological understanding of the pressure sensitivity of the two baroreceptor neurons and suggests that aortic baroreceptors have a higher pressure sensitivity than carotid baroreceptors.
Inferring Demographic History Using Two-Locus Statistics.
Ragsdale, Aaron P; Gutenkunst, Ryan N
2017-06-01
Population demographic history may be learned from contemporary genetic variation data. Methods based on aggregating the statistics of many single loci into an allele frequency spectrum (AFS) have proven powerful, but such methods ignore potentially informative patterns of linkage disequilibrium (LD) between neighboring loci. To leverage such patterns, we developed a composite-likelihood framework for inferring demographic history from aggregated statistics of pairs of loci. Using this framework, we show that two-locus statistics are more sensitive to demographic history than single-locus statistics such as the AFS. In particular, two-locus statistics escape the notorious confounding of depth and duration of a bottleneck, and they provide a means to estimate effective population size based on the recombination rather than mutation rate. We applied our approach to a Zambian population of Drosophila melanogaster. Notably, using both single- and two-locus statistics, we inferred a substantially lower ancestral effective population size than previous works and did not infer a bottleneck history. Together, our results demonstrate the broad potential for two-locus statistics to enable powerful population genetic inference. Copyright © 2017 by the Genetics Society of America.
Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G
2014-09-01
The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
Detection of Doppler Microembolic Signals Using High Order Statistics
Directory of Open Access Journals (Sweden)
Maroun Geryes
2016-01-01
Robust detection of the smallest circulating cerebral microemboli is an efficient way of preventing strokes, the second leading cause of mortality worldwide. Transcranial Doppler ultrasound is widely considered the most convenient system for the detection of microemboli. The most common standard detection is achieved through the Doppler energy signal and depends on an empirically set constant threshold. On the other hand, in the past few years, higher order statistics have been an extensive field of research, as they represent descriptive statistics that can be used to detect signal outliers. In this study, we propose new types of microembolic detectors based on the windowed calculation of the third moment (skewness) and fourth moment (kurtosis) of the energy signal. During embolus-free periods the distribution of the energy is not altered, and the skewness and kurtosis signals do not exhibit any peak values. In the presence of emboli, the energy distribution is distorted, and the skewness and kurtosis signals exhibit peaks corresponding to those emboli. Applied to real signals, detection of microemboli through the skewness and kurtosis signals outperformed detection through standard methods. Sensitivity and specificity reached 78% and 91% for the skewness detector and 80% and 90% for the kurtosis detector.
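The windowed-moment detector described above can be sketched in a few lines; the window length, threshold, and synthetic burst below are illustrative choices, not the paper's settings:

```python
import numpy as np

def sliding_moments(energy, win):
    """Windowed skewness and excess kurtosis of an energy signal.

    Emboli distort the local energy distribution, so windows that
    contain an embolus show peaks in the standardized third and
    fourth moments.
    """
    n = len(energy) - win + 1
    skew = np.empty(n)
    kurt = np.empty(n)
    for i in range(n):
        w = energy[i:i + win]
        z = (w - w.mean()) / w.std()
        skew[i] = np.mean(z ** 3)
        kurt[i] = np.mean(z ** 4) - 3.0  # excess kurtosis
    return skew, kurt

# Synthetic example: Gaussian background energy plus one embolus-like burst.
rng = np.random.default_rng(0)
energy = rng.normal(10.0, 1.0, 2000)
energy[1000:1005] += 25.0                 # short high-energy transient

skew, kurt = sliding_moments(energy, win=64)
detections = np.flatnonzero(kurt > 5.0)   # empirical peak threshold
print(detections.min(), detections.max())
```

The flagged windows are those overlapping the burst; a real detector would add the threshold selection and energy pre-processing described in the study.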
Rejection Sensitivity, Jealousy, and the Relationship to Interpersonal Aggression.
Murphy, Anna M; Russell, Gemma
2018-07-01
The development and maintenance of interpersonal relationships lead individuals to risk rejection in the pursuit of acceptance. Some individuals are predisposed to experience a hypersensitivity to rejection that is hypothesized to be related to jealous and aggressive reactions within interpersonal relationships. The current study used convenience sampling to recruit 247 young adults to evaluate the relationship between rejection sensitivity, jealousy, and aggression. A mediation model was used to test three hypotheses: Higher scores of rejection sensitivity would be positively correlated to higher scores of aggression (Hypothesis 1); higher scores of rejection sensitivity would be positively correlated to higher scores of jealousy (Hypothesis 2); jealousy would mediate the relationship between rejection sensitivity and aggression (Hypothesis 3). Study results suggest a tendency for individuals with high rejection sensitivity to experience higher levels of jealousy, and subsequently have a greater propensity for aggression, than individuals with low rejection sensitivity. Future research that substantiates a link between hypersensitivity to rejection, jealousy, and aggression may provide an avenue for prevention, education, or intervention in reducing aggression within interpersonal relationships.
Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.
Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S
2018-02-21
Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from
Kerr, Laura T.; Adams, Aine; O'Dea, Shirley; Domijan, Katarina; Cullen, Ivor; Hennelly, Bryan M.
2014-05-01
Raman microspectroscopy can be applied to the urinary bladder for highly accurate classification and diagnosis of bladder cancer. This technique can be applied in vitro to bladder epithelial cells obtained from urine cytology or in vivo as an "optical biopsy" to provide results in real time with higher sensitivity and specificity than current clinical methods. However, there exists a high degree of variability across experimental parameters, which need to be standardised before this technique can be utilized in an everyday clinical environment. In this study, we investigate different laser wavelengths (473 nm and 532 nm), sample substrates (glass, fused silica and calcium fluoride) and multivariate statistical methods in order to gain insight into how these various experimental parameters impact on the sensitivity and specificity of Raman cytology.
Sensitivity and statistical analysis within the elaboration of steel plated girder resistance
Czech Academy of Sciences Publication Activity Database
Melcher, J.; Škaloud, Miroslav; Kala, Z.; Karmazínová, M.
2009-01-01
Vol. 5, No. 2 (2009), pp. 120-126. ISSN 1816-112X. [International conference on steel and aluminium structures /6./. Oxford, 24.06.2007-27.06.2007] Institutional research plan: CEZ:AV0Z20710524 Keywords: steel structures * fatigue * sensitivity * imperfection * plated girder Subject RIV: JM - Building Engineering
Effect of model choice and sample size on statistical tolerance limits
International Nuclear Information System (INIS)
Duran, B.S.; Campbell, K.
1980-03-01
Statistical tolerance limits are estimates of large (or small) quantiles of a distribution, quantities which are very sensitive to the shape of the tail of the distribution. The exact nature of this tail behavior cannot be ascertained from small samples, so statistical tolerance limits are frequently computed using a statistical model chosen on the basis of theoretical considerations or prior experience with similar populations. This report illustrates the effects of such choices on the computations.
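The sensitivity of such limits to the assumed model can be illustrated numerically. In this sketch (toy numbers, not from the report), the same heavy-tailed sample yields different upper-quantile estimates under normal and lognormal models, and the classic Wilks order-statistic bound shows the nonparametric alternative:

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=1.5, size=30)   # heavy-tailed truth

z95 = 1.645                                            # N(0,1) 0.95 quantile
q_normal = sample.mean() + z95 * sample.std(ddof=1)    # normal-model estimate
logs = np.log(sample)
q_lognormal = np.exp(logs.mean() + z95 * logs.std(ddof=1))  # lognormal model

# Wilks: with n samples, the sample maximum bounds the 0.95 quantile with
# confidence 1 - 0.95**n; n = 59 gives the familiar "95/95" rule.
conf_59 = 1 - 0.95 ** 59
print(q_normal, q_lognormal, conf_59)
```

The two model-based estimates of the same tail quantile typically disagree substantially on a sample this small, which is exactly the effect the report studies.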
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
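The derivative-based propagation idea behind DUA can be sketched on a toy function (not the borehole problem, and not the GRESS/ADGEN tooling): the first-order variance built from analytic partial derivatives is compared against a brute-force Monte Carlo estimate.

```python
import numpy as np

def f(x, y):
    """Toy model standing in for a large-scale code."""
    return x ** 2 * np.exp(0.1 * y)

x0, sx = 3.0, 0.1   # nominal value and std of parameter x
y0, sy = 1.0, 0.5   # nominal value and std of parameter y

# Analytic partial derivatives at the nominal point
dfdx = 2 * x0 * np.exp(0.1 * y0)
dfdy = 0.1 * x0 ** 2 * np.exp(0.1 * y0)
var_lin = (dfdx * sx) ** 2 + (dfdy * sy) ** 2   # first-order variance

# Monte Carlo reference (the "statistical" approach)
rng = np.random.default_rng(2)
mc = f(rng.normal(x0, sx, 200_000), rng.normal(y0, sy, 200_000))
print(np.sqrt(var_lin), mc.std())
```

For a mildly nonlinear model the derivative-based standard deviation agrees closely with the sampling result while needing only the derivative evaluations, mirroring the two-versus-fifty executions comparison in the paper.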
Statistical inference for noisy nonlinear ecological dynamic systems.
Wood, Simon N
2010-08-26
Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
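A minimal sketch of the synthetic-likelihood recipe, with a noisy AR(1) process standing in for an ecological model and two toy summary statistics (the model, statistics, and replicate count are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, T=500, rng=rng):
    """Noisy dynamic model stand-in: AR(1) with unit process noise."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = theta * x[t - 1] + rng.normal()
    return x

def stats(x):
    """Phase-insensitive summaries: variance and lag-1 autocorrelation."""
    return np.array([x.var(), np.corrcoef(x[:-1], x[1:])[0, 1]])

def synthetic_loglik(theta, s_obs, n_rep=200):
    """Fit a Gaussian to simulated summary statistics and score s_obs."""
    S = np.array([stats(simulate(theta)) for _ in range(n_rep)])
    mu, cov = S.mean(axis=0), np.cov(S.T)
    d = s_obs - mu
    return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))

s_obs = stats(simulate(0.7))          # "observed" data from theta = 0.7
print(synthetic_loglik(0.7, s_obs), synthetic_loglik(0.2, s_obs))
```

The synthetic likelihood is higher at the data-generating parameter, and in the full method this quantity is explored with MCMC rather than evaluated at two points.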
Sensitivity of direct immunofluorescence in oral diseases. Study of 125 cases.
Sano, Susana Mariela; Quarracino, María Cecilia; Aguas, Silvia Cristina; González, Ernestina Jesús; Harada, Laura; Krupitzki, Hugo; Mordoh, Ana
2008-05-01
Direct immunofluorescence (DIF) is widely used for the diagnosis of bullous diseases and other autoimmune pathologies such as oral lichen planus. There is no evidence in the literature on how the following variables influence the detection rate of DIF: intraoral site chosen for the biopsy, perilesional versus distant site from the clinical lesion, number of biopsies, and instrument used. The aim was to determine whether these variables influenced the sensitivity (detection rate): intraoral site chosen for the biopsy, perilesional or distant site from the clinical lesion, number of biopsies, and instrument used (punch or scalpel). A retrospective study was done at the Cátedra de Patología y Clínica Bucodental II at the Facultad de Odontología, Universidad de Buenos Aires; 136 clinical medical histories were reviewed for the period March 2000 - March 2005, corresponding to patients with clinical diagnosis of OLP and bullous diseases (pemphigus vulgaris, bullous pemphigoid and cicatricial pemphigoid). The DIF detection rate was 65.8% in patients with OLP, 66.7% in cicatricial pemphigoid, 55.6% in bullous pemphigoid, and 100% in pemphigus vulgaris; in those cases in which a definite diagnosis could not be obtained, the DIF positivity rate was 45.5% (Pearson χ²(4) = 21.5398, p = 0.000). There was no statistically significant difference between the different biopsy sites (Fisher exact test: 0.825). The DIF detection rate was 66.1% in perilesional biopsies and 64.7% in those distant from the site of the clinical lesion (Pearson χ²(1) = 0.0073, p = 0.932). As the number of biopsies increased, the DIF detection rate also increased (Pearson χ² = 8.7247, p = 0.003). Biopsies taken with a punch had a higher detection rate than those taken with a scalpel (71.7% versus 39.1%) (Pearson χ² = 49.0522, p = 0.000). While not statistically significant, the tendency outlined in this study indicates there are intraoral regions in which the detection rate of the DIF technique is
Finnish Teachers’ Ethical Sensitivity
Directory of Open Access Journals (Sweden)
Elina Kuusisto
2012-01-01
The study examined the ethical sensitivity of Finnish teachers (N = 864) using the 28-item Ethical Sensitivity Scale Questionnaire (ESSQ). The psychometric qualities of this instrument were analyzed, as were the differences in self-reported ethical sensitivity between practicing and student teachers and teachers of different subjects. The results showed that the psychometric qualities of the ESSQ were satisfactory and enabled the use of an exploratory factor analysis. All Finnish teachers rated their level of ethical sensitivity as high, which indicates that they had internalized the ethical professionalism of teaching. However, practicing teachers' assessments were higher than student teachers'. Moreover, science as a subject was associated with lower self-ratings of ethical sensitivity.
International Nuclear Information System (INIS)
Jeon, Tae Joo; Lee, Jong Doo; Kim, Hee Joung; Lee, Byung In; Kim, Ok Joon; Kim, Min Jung; Jeon, Jeong Dong
1999-01-01
Ictal brain SPECT has a high diagnostic sensitivity, exceeding 90% in the localization of seizure focus; however, it often shows increased uptake within extratemporal areas due to early propagation of seizure discharge. This study aimed to evaluate seizure propagation on ictal brain SPECT in patients with temporal lobe epilepsy (TLE) by statistical parametric mapping (SPM). Twenty-one patients (age 27.14 ± 5.79 y) with temporal lobe epilepsy (right in 8, left in 13) who had successful seizure outcomes after surgery and nine normal controls were included. The ictal and interictal brain SPECT data of the patients and the baseline SPECT of the normal control group were analyzed using automatic image registration and SPM96 software. The statistical analysis compared the mean SPECT image of the normal group with individual ictal SPECT, and each mean image of the interictal groups of right or left TLE with individual ictal scans. The t statistic SPM[t] was transformed to SPM[Z] with a threshold of 1.64. The statistical results were displayed and rendered on reference 3-dimensional MRI images with a P value of 0.05 and an uncorrected extent threshold P value of 0.5 for SPM[Z]. SPM data demonstrated increased uptake within the epileptic lesion in 19 patients (90.4%); among them, localized increased uptake confined to the epileptogenic lesion was seen in only 4 (19%), but 15 patients (71.4%) showed hyperperfusion within propagation sites. Bi-temporal hyperperfusion was observed in 11 of 19 patients (57.9%; 5 in the right and 6 in the left): higher uptake within the lesion than the contralateral side in 9, similar activity in 1, and higher uptake within the contralateral lobe in 1. Extratemporal hyperperfusion was observed in 8 (2 in the right, 3 in the left, 3 bilateral): unilateral hyperperfusion within the epileptogenic temporal lobe and an extratemporal area in 4, and bi-temporal with extratemporal hyperperfusion in the remaining 4. Ictal brain SPECT is highly
Directory of Open Access Journals (Sweden)
Laura Badenes-Ribera
2018-06-01
Introduction: Publications arguing against the null hypothesis significance testing (NHST) procedure and in favor of good statistical practices have increased. The most frequently mentioned alternatives to NHST are effect size statistics (ES), confidence intervals (CIs), and meta-analyses. A recent survey conducted in Spain found that academic psychologists have poor knowledge about effect size statistics, confidence intervals, and graphic displays for meta-analyses, which might lead to a misinterpretation of the results. In addition, it also found that, although the use of ES is becoming generalized, the same thing is not true for CIs. Finally, academics with greater knowledge about ES statistics presented a profile closer to good statistical practice and research design. Our main purpose was to analyze the extension of these results to a different geographical area through a replication study. Methods: For this purpose, we elaborated an on-line survey that included the same items as the original research, and we asked academic psychologists to indicate their level of knowledge about ES, CIs, and meta-analyses, and how they use them. The sample consisted of 159 Italian academic psychologists (54.09% women; mean age 47.65 years). The mean number of years in the position of professor was 12.90 (SD = 10.21). Results: As in the original research, the results showed that, although the use of effect size estimates is becoming generalized, an under-reporting of CIs for ES persists. The most frequent ES statistics mentioned were Cohen's d and R²/η², which can be affected by outliers, non-normality, or violations of statistical assumptions. In addition, academics showed poor knowledge about meta-analytic displays (e.g., forest plots and funnel plots) and quality checklists for studies. Finally, academics with higher-level knowledge about ES statistics seem to have a profile closer to good statistical practices. Conclusions: Changing statistical practice is not
Comparing higher order models for the EORTC QLQ-C30
DEFF Research Database (Denmark)
Gundy, Chad M; Fayers, Peter M; Grønvold, Mogens
2012-01-01
To investigate the statistical fit of alternative higher order models for summarizing the health-related quality of life profile generated by the EORTC QLQ-C30 questionnaire.
Who Is Missing from Higher Education?
Gorard, Stephen
2008-01-01
This paper discusses the difficulties of establishing a clear count of UK higher education students in terms of the categories used for widening participation, such as occupational background or ethnicity. Using some of the best and most complete data available, such as the annual figures from the Higher Education Statistics Agency, the paper then…
Applications of sensitivity function to dosimetric data adjustments
International Nuclear Information System (INIS)
Nakazawa, Masaharu
1984-01-01
Sensitivity functions are applied in the dosimetric field to the spectrum unfolding technique, also called the data-adjustment technique: a statistical estimation procedure for the neutron spectrum or related dosimetric quantities based on reaction-rate measurements. Using practical formulae and numerical examples of the sensitivity functions in dosimetric data adjustments, two observations are made: (1) the present sensitivity values depend strongly on the initial spectrum inputs, and (2) more attention should be paid to the dependence of the sensitivity on the highly uncertain covariance data of the initial neutron spectrum. (author)
Patel, Parag C; Hill, Douglas A; Ayers, Colby R; Lavingia, Bhavna; Kaiser, Patricia; Dyer, Adrian K; Barnes, Aliessa P; Thibodeau, Jennifer T; Mishkin, Joseph D; Mammen, Pradeep P A; Markham, David W; Stastny, Peter; Ring, W Steves; de Lemos, James A; Drazner, Mark H
2014-05-01
A noninvasive biomarker that could accurately diagnose acute rejection (AR) in heart transplant recipients could obviate the need for surveillance endomyocardial biopsies. We assessed the performance metrics of a novel high-sensitivity cardiac troponin I (cTnI) assay for this purpose. Stored serum samples were retrospectively matched to endomyocardial biopsies in 98 cardiac transplant recipients, who survived ≥3 months after transplant. AR was defined as International Society for Heart and Lung Transplantation grade 2R or higher cellular rejection, acellular rejection, or allograft dysfunction of uncertain pathogenesis, leading to treatment for presumed rejection. cTnI was measured with a high-sensitivity assay (Abbott Diagnostics, Abbott Park, IL). Cross-sectional analyses determined the association of cTnI concentrations with rejection and International Society for Heart and Lung Transplantation grade and the performance metrics of cTnI for the detection of AR. Among 98 subjects, 37% had ≥1 rejection episode. cTnI was measured in 418 serum samples, including 35 paired to a rejection episode. cTnI concentrations were significantly higher in rejection versus nonrejection samples (median, 57.1 versus 10.2 ng/L; P<0.0001) and increased in a graded manner with higher biopsy scores (P(trend)<0.0001). The c-statistic to discriminate AR was 0.82 (95% confidence interval, 0.76-0.88). Using a cut point of 15 ng/L, sensitivity was 94%, specificity 60%, positive predictive value 18%, and negative predictive value 99%. A high-sensitivity cTnI assay seems useful to rule out AR in cardiac transplant recipients. If validated in prospective studies, a strategy of serial monitoring with a high-sensitivity cTnI assay may offer a low-cost noninvasive strategy for rejection surveillance. © 2014 American Heart Association, Inc.
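The reported performance metrics follow directly from a 2x2 table at the 15 ng/L cut point. The counts below are an illustrative back-calculation consistent with the abstract's totals (35 rejection-paired samples out of 418), not the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test metrics from true/false positive and negative counts."""
    return {
        "sensitivity": tp / (tp + fn),   # detected rejections / all rejections
        "specificity": tn / (tn + fp),   # correct negatives / all non-rejections
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# e.g. 33 of 35 rejection samples above 15 ng/L, 230 of 383 non-rejection below
m = diagnostic_metrics(tp=33, fp=153, fn=2, tn=230)
print(m)
```

These counts reproduce the abstract's approximate 94% sensitivity, 60% specificity, 18% PPV, and 99% NPV; the very high NPV is what supports the rule-out use of the assay.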
Dong, Jian-Jun; Li, Qing-Liang; Yin, Hua; Zhong, Cheng; Hao, Jun-Guang; Yang, Pan-Fei; Tian, Yu-Hong; Jia, Shi-Ru
2014-10-15
Sensory evaluation is regarded as a necessary procedure to ensure a reproducible quality of beer. Meanwhile, high-throughput analytical methods provide a powerful tool to analyse various flavour compounds, such as higher alcohols and esters. In this study, the relationship between flavour compounds and sensory evaluation was established by non-linear models such as partial least squares (PLS), genetic algorithm back-propagation neural network (GA-BP), and support vector machine (SVM). It was shown that SVM with a radial basis function (RBF) kernel achieved better prediction accuracy on both the calibration set (94.3%) and the validation set (96.2%) than the other models. Relatively lower prediction abilities were observed for GA-BP (52.1%) and PLS (31.7%). In addition, the kernel function played an essential role in model training: an SVM with a polynomial kernel reached a prediction accuracy of only 32.9%. As a powerful multivariate statistical method, SVM holds great potential to assess beer quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
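The kernel choice the study highlights can be illustrated by computing the two kernels directly in NumPy; gamma, degree, and the toy data are illustrative values, not the paper's hyperparameters:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel: exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=3, c=1.0):
    """Polynomial kernel: (x . y + c)^degree."""
    return (X @ Y.T + c) ** degree

# Toy feature vectors (e.g. standardized flavour-compound concentrations)
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
K_rbf = rbf_kernel(X, X)
K_poly = poly_kernel(X, X)
print(K_rbf.shape, np.allclose(np.diag(K_rbf), 1.0))
```

The two Gram matrices weight sample similarity very differently, which is why swapping the kernel can move an SVM's accuracy as drastically as the 94.3% versus 32.9% contrast reported here.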
SPATIOTEMPORAL CONTRAST SENSITIVITY OF EARLY VISION
Hateren, J.H. van
Based on the spatial and temporal statistics of natural images, a theory is developed that specifies spatiotemporal filters that maximize the flow of information through noisy channels of limited dynamic range. Sensitivities resulting from these spatiotemporal filters are very similar to the human
High-Throughput Nanoindentation for Statistical and Spatial Property Determination
Hintsala, Eric D.; Hangen, Ude; Stauffer, Douglas D.
2018-04-01
Standard nanoindentation tests are "high throughput" compared to nearly all other mechanical tests, such as tension or compression. However, the typical rates of tens of tests per hour can be significantly improved. These higher testing rates enable otherwise impractical studies requiring several thousands of indents, such as high-resolution property mapping and detailed statistical studies. However, care must be taken to avoid systematic errors in the measurement, including choosing the indentation depth/spacing to avoid overlap of plastic zones, pileup, and influence of neighboring microstructural features in the material being tested. Furthermore, since fast loading rates are required, the strain rate sensitivity must also be considered. A review of these effects is given, with the emphasis placed on making complementary standard nanoindentation measurements to address these issues. Experimental applications of the technique, including mapping of welds, microstructures, and composites with varying length scales, along with studying the effect of surface roughness on nominally homogeneous specimens, will be presented.
Nguyen, Thuyuyen H.; Newby, Michael; Skordi, Panayiotis G.
2015-01-01
Statistics is a required subject of study in many academic disciplines, including business, education and psychology, that causes problems for many students. This has long been recognised and there have been a number of studies into students' attitudes towards statistics, particularly statistical anxiety. However, none of these studies…
AN EXPLORATION OF THE STATISTICAL SIGNATURES OF STELLAR FEEDBACK
Energy Technology Data Exchange (ETDEWEB)
Boyden, Ryan D.; Offner, Stella S. R. [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States); Koch, Eric W.; Rosolowsky, Erik W., E-mail: soffner@astro.umass.edu [Department of Physics, University of Alberta, Edmonton, T6G 2E1 (Canada)
2016-12-20
All molecular clouds are observed to be turbulent, but the origin, means of sustenance, and evolution of the turbulence remain debated. One possibility is that stellar feedback injects enough energy into the cloud to drive observed motions on parsec scales. Recent numerical studies of molecular clouds have found that feedback from stars, such as protostellar outflows and winds, injects energy and impacts turbulence. We expand upon these studies by analyzing magnetohydrodynamic simulations of molecular clouds, including stellar winds, with a range of stellar mass-loss rates and magnetic field strengths. We generate synthetic ¹²CO(1–0) maps assuming that the simulations are at the distance of the nearby Perseus molecular cloud. By comparing the outputs from different initial conditions and evolutionary times, we identify differences in the synthetic observations and characterize these using common astrostatistics. We quantify the different statistical responses using a variety of metrics proposed in the literature. We find that multiple astrostatistics, including the principal component analysis, the spectral correlation function, and the velocity coordinate spectrum (VCS), are sensitive to changes in stellar mass-loss rates and/or time evolution. A few statistics, including the Cramer statistic and VCS, are sensitive to the magnetic field strength. These findings demonstrate that stellar feedback influences molecular cloud turbulence and can be identified and quantified observationally using such statistics.
Statistics for the LHC: Quantifying our Scientific Narrative (1/4)
CERN. Geneva
2011-01-01
Now that the LHC physics program is well under way and results have begun to pour out of the experiments, the statistical methodology used for these results is a hot topic. This is a challenge at the LHC, as we have sensitivity to discover new physics in a stage of the experiments where systematic uncertainties can still be quite large. The emphasis of these lectures is how we can translate the scientific narrative of why we think we know what we know into quantitative statistical statements about the presence or absence of new physics. Topics will include statistical modeling, incorporation of control samples to constrain systematics, and Bayesian and Frequentist statistical tests that are capable of answering these questions.
Statistical Transmutation in Floquet Driven Optical Lattices.
Sedrakyan, Tigran A; Galitski, Victor M; Kamenev, Alex
2015-11-06
We show that interacting bosons in a periodically driven two-dimensional (2D) optical lattice may effectively exhibit fermionic statistics. The phenomenon is similar to the celebrated Tonks-Girardeau regime in 1D. The Floquet band of a driven lattice develops a moat shape, i.e., a minimum along a closed contour in the Brillouin zone. Such degeneracy of the kinetic energy favors fermionic quasiparticles. The statistical transmutation is achieved by Chern-Simons flux attachment, similar to the fractional quantum Hall case. We show that the velocity distribution of the released bosons is a sensitive probe of the fermionic nature of their stationary Floquet state.
Tabatadze, Shalva; Gorgadze, Natia
2018-01-01
Purpose: The purpose of this paper is to assess the intercultural sensitivity of students in teacher educational programs at higher education institutes (HEIs) in Georgia. Design/methodology/approach: This research explored the intercultural sensitivity among 355 randomly selected students in teacher education programs at higher education…
Research and development statistics 2001
2002-01-01
This publication provides recent basic statistics on the resources devoted to R&D in OECD countries. The statistical series are presented for the last seven years for which data are available and cover expenditure by source of funds and type of costs; personnel by occupation and/or level of qualification; both at the national level by performance sector, for enterprises by industry, and for higher education by field of science. The publication also provides information on the output of science and technology (S&T) activities relating to the technology balance of payments.
Więckowska, Barbara; Marcinkowska, Justyna
2017-11-06
When searching for epidemiological clusters, a useful tool is to carry out one's own analysis with the incidence rate from the literature as the reference level. Values exceeding this level may indicate the presence of a cluster in that location. This paper presents a method of searching for clusters that have significantly higher incidence rates than a level specified by the investigator. The proposed method uses the classic binomial exact test for one proportion and an algorithm that joins areas with potential clusters while reducing the number of multiple comparisons needed. The new method preserves sensitivity and specificity while avoiding the Monte Carlo approach, and still delivers results comparable to the commonly used Kulldorff's scan statistics and other similar methods of localising clusters. An added strength of the accompanying statistical software is that it allows analysis and presentation of the results cartographically.
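The core of the method above is the one-proportion exact binomial test. A minimal illustrative sketch (not the authors' software; the area counts and reference rate below are hypothetical):

```python
from math import comb

def binom_sf(k, n, p):
    """Exact one-sided p-value P(X >= k) for X ~ Binomial(n, p),
    computed via the complementary lower tail."""
    cdf = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))
    return 1.0 - cdf

# Hypothetical area: 30 observed cases among 10,000 residents, tested
# against a literature reference incidence of 0.002 (20 per 10,000).
p_value = binom_sf(30, 10_000, 0.002)
print(p_value < 0.05)  # True: the excess is unlikely under the reference rate
```

Areas whose p-value falls below the chosen threshold would then be candidates for joining into a cluster, as the abstract describes.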
Second Language Experience Facilitates Statistical Learning of Novel Linguistic Materials.
Potter, Christine E; Wang, Tianlin; Saffran, Jenny R
2017-04-01
Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, 6 months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, whereas both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. Copyright © 2016 Cognitive Science Society, Inc.
Multivariate statistical methods and data mining in particle physics (4/4)
CERN. Geneva
2008-01-01
The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.
Multivariate statistical methods and data mining in particle physics (2/4)
CERN. Geneva
2008-01-01
The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.
Multivariate statistical methods and data mining in particle physics (1/4)
CERN. Geneva
2008-01-01
The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.
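As a concrete instance of the "linear test variables" topic in this syllabus, here is a hedged sketch of a Fisher linear discriminant used to separate signal from background events; the toy Gaussian data are made up and not from the lectures:

```python
import numpy as np

def fisher_discriminant(signal, background):
    """Fisher linear test variable: weight vector w maximizing the
    separation between class means relative to within-class spread."""
    mu_s = signal.mean(axis=0)
    mu_b = background.mean(axis=0)
    pooled = np.cov(signal, rowvar=False) + np.cov(background, rowvar=False)
    return np.linalg.solve(pooled, mu_s - mu_b)

# Toy two-variable "events": overlapping Gaussian signal and background.
rng = np.random.default_rng(0)
signal = rng.normal(loc=(1.0, 1.0), scale=0.5, size=(500, 2))
background = rng.normal(loc=(-1.0, -1.0), scale=0.5, size=(500, 2))
w = fisher_discriminant(signal, background)
# Projecting events onto w yields a one-dimensional test statistic:
print((signal @ w).mean() > (background @ w).mean())  # True
```

Cutting on the projected value then gives a simple statistical test of the kind the lectures evaluate for power and systematic sensitivity.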
DEFF Research Database (Denmark)
Mørtz, Charlotte G; Andersen, Klaus Ejner; Bindslev-Jensen, Carsten
2005-01-01
In the last decade an increased occurrence of peanut hypersensitivity and severe anaphylactic reactions to peanut have been reported. However, few prevalence studies have been performed in unselected populations. This study evaluated the point prevalence of peanut hypersensitivity in Danish...... adolescents. The point prevalence of peanut allergy confirmed by oral challenge was estimated to 0.5%. The number of adolescents sensitized to peanut by specific immunoglobulin E (IgE) (CAP FEIA) and skin prick test (SPT) were higher (5.8% resp. 3.4%). In adolescents without clinically relevant peanut...... sensitization most cases were sensitized to grass pollen and the IgE class for grass was higher than for peanut. A correlation between peanut and pollen (grass) sensitization is therefore plausible. Before a positive SPT or specific IgE measurement to peanut is considered clinically relevant in a patient...
QSAR Study of Skin Sensitization Using Local Lymph Node Assay Data
Directory of Open Access Journals (Sweden)
Eugene Demchuk
2004-01-01
Allergic Contact Dermatitis (ACD) is a common work-related skin disease that often develops as a result of repetitive skin exposures to a sensitizing chemical agent. A variety of experimental tests have been suggested to assess the skin sensitization potential. We applied a method of Quantitative Structure-Activity Relationship (QSAR) to relate measured and calculated physical-chemical properties of chemical compounds to their sensitization potential. Using statistical methods, each of these properties, called molecular descriptors, was tested for its propensity to predict the sensitization potential. A few of the most informative descriptors were subsequently selected to build a model of skin sensitization. In this work, sensitization data for the murine Local Lymph Node Assay (LLNA) were used. In principle, LLNA provides a standardized continuous scale suitable for quantitative assessment of skin sensitization. However, at present many LLNA results are still reported on a dichotomous scale, which is consistent with the scale of the guinea pig tests that were widely used in past years. Therefore, in this study only a dichotomous version of the LLNA data was used. To this end, we relied on the logistic regression approach, which provides a statistical tool for investigating and predicting skin sensitization expressed only in categorical terms of activity and nonactivity. Based on the data of compounds used in this study, our results suggest a QSAR model of ACD based on the following descriptors: nDB (number of double bonds), C-003 (number of CHR3 molecular subfragments), GATS6M (autocorrelation coefficient), and HATS6m (GETAWAY descriptor), although the relevance of the identified descriptors to a continuous ACD QSAR has yet to be shown. The proposed QSAR model gives a percentage of positively predicted responses of 83% on the training set of compounds, and in cross validation it correctly identifies 79% of
de Fiebre, NancyEllen C; Dawson, Ralph; de Fiebre, Christopher M
2002-06-01
Studies in rodents selectively bred to differ in alcohol sensitivity have suggested that nicotine and ethanol sensitivities may cosegregate during selective breeding. This suggests that ethanol and nicotine sensitivities may in part be genetically correlated. Male and female high alcohol sensitivity (HAS), control alcohol sensitivity, and low alcohol sensitivity (LAS) rats were tested for nicotine-induced alterations in locomotor activity, body temperature, and seizure activity. Plasma and brain levels of nicotine and its primary metabolite, cotinine, were measured in these animals, as was the binding of [3H]cytisine, [3H]epibatidine, and [125I]alpha-bungarotoxin in eight brain regions. Both replicate HAS lines were more sensitive to nicotine-induced locomotor activity depression than the replicate LAS lines. No consistent HAS/LAS differences were seen on other measures of nicotine sensitivity; however, females were more susceptible to nicotine-induced seizures than males. No HAS/LAS differences in nicotine or cotinine levels were seen, nor were differences seen in the binding of nicotinic ligands. Females had higher levels of plasma cotinine and brain nicotine than males but had lower brain cotinine levels than males. Sensitivity to a specific action of nicotine cosegregates during selective breeding for differential sensitivity to a specific action of ethanol. The differential sensitivity of the HAS/LAS rats is due to differences in central nervous system sensitivity and not to pharmacokinetic differences. The differential central nervous system sensitivity cannot be explained by differences in the numbers of nicotinic receptors labeled in ligand-binding experiments. The apparent genetic correlation between ethanol and nicotine sensitivities suggests that common genes modulate, in part, the actions of both ethanol and nicotine and may explain the frequent coabuse of these agents.
Energy Technology Data Exchange (ETDEWEB)
Montani, Claudia [Laboratory of Biotechnology, Department of Laboratory Medicine, Civic Hospital of Brescia (Italy); Steimberg, Nathalie; Boniotti, Jennifer [Laboratory of Tissue Engineering, Anatomy and Physiopathology Unit, Department of Clinical and Experimental Sciences, School of Medicine, University of Brescia (Italy); Biasiotto, Giorgio; Zanella, Isabella [Laboratory of Biotechnology, Department of Laboratory Medicine, Civic Hospital of Brescia (Italy); Department of Molecular and Translational Medicine, University of Brescia, Brescia (Italy); Diafera, Giuseppe [Integrated Systems Engineering (ISE), Milan (Italy); Biunno, Ida [IRGB-CNR, Milan (Italy); IRCCS-Multimedica, Milan (Italy); Caimi, Luigi [Laboratory of Biotechnology, Department of Laboratory Medicine, Civic Hospital of Brescia (Italy); Department of Molecular and Translational Medicine, University of Brescia, Brescia (Italy); Mazzoleni, Giovanna [Laboratory of Tissue Engineering, Anatomy and Physiopathology Unit, Department of Clinical and Experimental Sciences, School of Medicine, University of Brescia (Italy); Di Lorenzo, Diego, E-mail: diego.dilorenzo@yahoo.it [Laboratory of Biotechnology, Department of Laboratory Medicine, Civic Hospital of Brescia (Italy)
2014-11-01
maintaining cells in more physiological conditions.
• RCCS-cultured fibroblasts showed higher hormonal sensitivity to estradiol.
• This bioreactor is a novel 3D model to be applied to pharmacotoxicological studies.
Optimizing human activity patterns using global sensitivity analysis.
Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M
2014-12-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
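The SampEn statistic used above is compact enough to sketch directly. The following is a minimal illustrative implementation of the standard definition (template length m, tolerance r, Chebyshev distance); the toy series and parameter values are made up:

```python
from math import log

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r) = ln(B / A), where B counts template pairs of
    length m within tolerance r (Chebyshev distance) and A counts the
    same for length m + 1.  Lower values indicate a more regular series.
    Assumes the series is regular enough that A > 0."""
    n = len(series)
    def matches(length):
        templates = [series[i:i + length] for i in range(n - m)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r
        )
    return log(matches(m) / matches(m + 1))

periodic = [0, 1] * 10                      # perfectly regular schedule proxy
noisy = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1,
         0, 1, 0, 0, 1, 1, 1, 0, 0, 1]      # irregular binary series
print(sample_entropy(periodic))             # 0.0: every template repeats
print(sample_entropy(periodic) < sample_entropy(noisy))  # True
```

Tuning a schedule's regularity, as in the paper, amounts to adjusting activity parameters until SampEn reaches a target value.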
Thermoluminescence in fluorite: sensitization mechanism
International Nuclear Information System (INIS)
Cruz, M.T. da; Watanabe, S.; Mayhugh, M.R.
1974-01-01
The sensitization of the major glow peaks (≈100 and 200 °C) in fluorite correlates with population of traps causing higher-temperature glow peaks. When considered together with supralinearity results, it is concluded that either the sensitization results from an increase in trap-filling efficiencies, or the deeper traps are not filled during irradiation.
Thiessen, Erik D
2017-01-05
Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik
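The "sequential statistics (such as transitional probability)" mentioned above reduce to a simple conditional frequency over adjacent elements. A toy sketch with made-up trisyllabic "words":

```python
from collections import Counter

def transitional_probabilities(stream):
    """Forward transitional probability P(B | A) = freq(A B) / freq(A),
    the sequential statistic classically used to model word segmentation."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Hypothetical trisyllabic words concatenated in a varied order:
words = {"A": ["tu", "pi", "ro"], "B": ["go", "la", "bu"], "C": ["bi", "da", "ku"]}
order = "ABCBACACB"
stream = [syll for w in order for syll in words[w]]

tp = transitional_probabilities(stream)
print(tp[("tu", "pi")])        # within-word transition: 1.0
print(tp[("ro", "go")] < 1.0)  # across a word boundary: True
```

The dip in transitional probability at word boundaries is exactly the regularity that segmentation accounts exploit, and, as the article argues, it says little about sensitivity to simultaneously presented features.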
Moment-based metrics for global sensitivity analysis of hydrological systems
Directory of Open Access Journals (Sweden)
A. Dell'Oca
2017-12-01
We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model through a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiment, uncertainty quantification and risk assessment.
Two statistics for evaluating parameter identifiability and error reduction
Doherty, John; Hunt, Randall J.
2009-01-01
Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero, and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
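The "parameter identifiability" statistic can be reproduced directly from its definition above. A minimal numpy sketch; the toy Jacobian and the choice of a two-dimensional solution space are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def identifiability(jacobian, n_sv):
    """Direction cosine of each parameter axis onto the calibration
    solution space spanned by the first n_sv right singular vectors
    of the (weighted) sensitivity matrix.  1 = fully identifiable,
    0 = entirely in the null space."""
    _, _, vt = np.linalg.svd(jacobian, full_matrices=False)
    v = vt[:n_sv].T    # columns: orthonormal basis of the solution space
    # For a unit parameter axis e_i, ||proj(e_i)|| is the root-sum-square of row i:
    return np.sqrt((v**2).sum(axis=1))

# Toy weighted Jacobian: 3 observations x 3 parameters; the third
# parameter barely affects any observation, so it should come out
# nearly non-identifiable when we keep a 2-dimensional solution space.
J = np.array([[1.0, 0.2, 1e-6],
              [0.3, 1.0, 1e-6],
              [0.5, 0.4, 1e-6]])
ident = identifiability(J, n_sv=2)
print(ident.round(3))  # roughly [1. 1. 0.]
```

Choosing how many singular vectors span the solution space (here `n_sv=2`) is the truncation decision that, in practice, depends on measurement noise.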
Review on the Gender Sensitive Women Education- Legal Revolution in Higher Education
Pradeep M. D
2017-01-01
Education spreads parallel with the life span of a person, starting from birth to death. Education is known to be the instrument which fills human actions with the essence of values, dignity, ethics and human virtues. Life progresses along with the process of civilization, equipped with social, moral and cultural attributes, along the path of education. The educational system should be gender-sensitive to impart knowledge and disseminate skills to the marginalized sections of the society. The countr...
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
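Some of the descriptive statistics discussed are easy to illustrate: the mean of an interval data set, for instance, is computed endpoint-wise and is itself an interval. A minimal sketch with made-up measurements (other statistics, such as the variance of interval data, are in general much harder to bound):

```python
def interval_mean(intervals):
    """Bounds on the sample mean when each measurement is known only
    as an interval [lo, hi]: the mean is computed endpoint-wise, so the
    result is itself an interval."""
    lows, highs = zip(*intervals)
    n = len(intervals)
    return sum(lows) / n, sum(highs) / n

# Three hypothetical measurements carrying interval (epistemic) uncertainty:
data = [(1.0, 1.2), (0.9, 1.5), (1.1, 1.3)]
print(interval_mean(data))  # the mean lies somewhere in [1.0, ~1.33]
```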
Thermoluminescence in fluorite: sensitization mechanism
Energy Technology Data Exchange (ETDEWEB)
Cruz, M.T. da; Watanabe, S.; Mayhugh, M.R.
1974-11-01
The sensitization of the major glow peaks (≈100 and 200 °C) in fluorite correlates with population of traps causing higher-temperature glow peaks. When considered together with supralinearity results, it is concluded that either the sensitization results from an increase in trap-filling efficiencies, or the deeper traps are not filled during irradiation.
Yonemaru, Naoyuki; Kumamoto, Hiroki; Takahashi, Keitaro; Kuroyanagi, Sachiko
2018-04-01
A new detection method for ultra-low frequency gravitational waves (GWs) with a frequency much lower than the observational range of pulsar timing arrays (PTAs) was suggested in Yonemaru et al. (2016). In the PTA analysis, ultra-low frequency GWs (≲ 10⁻¹⁰ Hz), which evolve just linearly during the observation time span, are absorbed into the pulsar spin-down rates, since both have the same effect on the pulse arrival time. Therefore, such GWs cannot be detected by the conventional method of PTAs. However, the bias on the observed spin-down rates depends on the relative direction of the pulsar and the GW source and shows a quadrupole pattern in the sky. Thus, if we divide the pulsars according to their position in the sky and examine the difference in the statistics of the spin-down rates, ultra-low frequency GWs from a single source can be detected. In this paper, we evaluate the potential of this method by Monte Carlo simulations and estimate the sensitivity, considering only the "Earth term" while the "pulsar term" acts like random noise for GW frequencies 10⁻¹³ - 10⁻¹⁰ Hz. We find that with 3,000 millisecond pulsars, which are expected to be discovered by a future survey with the Square Kilometre Array, GWs with a derivative of amplitude of about 3 × 10⁻¹⁹ s⁻¹ can in principle be detected. Implications for possible supermassive binary black holes in Sgr A* and M87 are also given.
Species sensitivity distributions in a context of practical applications for risk-based decisions
Energy Technology Data Exchange (ETDEWEB)
Posthuma, L.; Traas, T.P.; Roelofs, W.; Winterse, A.; Zwart, D. de; Meent, D. van de [RIVM, Bilthoven (Netherlands)
2003-07-01
Different biological species clearly show different sensitivities to toxic compounds present in their habitat. The absence of differences in sensitivity would entail an 'all or nothing' response, with all species responding similarly to pollution. Given the sensitivity variation, however, some species show adverse effects while others flourish. In this sense, variation is 'music for ecotoxicologists'. But it is also a nuisance: ecotoxicologists need to handle the vast diversity of sensitivities in Ecological Risk Assessment (ERA). This contribution addresses a pragmatic, versatile, statistics-based concept for handling species sensitivity differences, and shows options for application. In the 1980s, ecotoxicologists coined the term 'Species Sensitivity Distributions' (SSDs) for the statistical descriptions that can be applied to address sensitivity variation. An SSD is a Probability Density Function (usually bell-shaped) or Cumulative Distribution Function (sigmoid) that relates the environmental concentration (x) to 'risk' (y). Since their original description and use, the concept has been further developed, criticised, and tailored to various policy or assessment problems. (orig.)
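For illustration, a log-normal SSD and the quantities usually derived from it (the potentially affected fraction at a given concentration, and the hazardous concentration HC5) can be sketched as follows; the distribution parameters are hypothetical, not from the paper:

```python
from math import erf, exp, log, sqrt

def ssd_affected_fraction(conc, mu, sigma):
    """Potentially affected fraction (PAF): the cumulative log-normal
    SSD evaluated at an environmental concentration.  mu and sigma are
    the mean and s.d. of the natural-log species sensitivity data."""
    z = (log(conc) - mu) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ssd_hc5(mu, sigma):
    """HC5: the concentration hazardous to 5% of species
    (the 5th percentile of the SSD; z_0.05 ≈ -1.6449)."""
    return exp(mu - 1.6449 * sigma)

# Hypothetical fit: geometric mean EC50 of 10 units, log-scale s.d. of 0.8.
mu, sigma = log(10.0), 0.8
print(ssd_affected_fraction(10.0, mu, sigma))  # 0.5: half the species affected at the median
print(ssd_hc5(mu, sigma) < 10.0)               # True: HC5 sits below the median
```

Reading the curve forward gives risk at a concentration; reading it backward gives protective concentration limits, which is how SSDs enter risk-based decisions.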
International Nuclear Information System (INIS)
Picard, R.R.
1987-01-01
Verification of an inventory or of a reported material unaccounted for (MUF) calls for the remeasurement of a sample of items by an inspector followed by comparison of the inspector's data to the facility's reported values. Such comparison is intended to protect against falsification of accounting data that could conceal material loss. In the international arena, the observed discrepancies between the inspector's data and the reported data are quantified using the D statistic. If data have been falsified by the facility, the standard deviations of the D and MUF-D statistics are inflated owing to the sampling distribution. Moreover, under certain conditions the distributions of those statistics can depart markedly from normality, complicating evaluation of an inspection plan's performance. Detection probabilities estimated using standard deviations appropriate for the no-falsification case in conjunction with assumed normality can be far too optimistic. Under very general conditions regarding the facility's and/or the inspector's measurement error procedures and the inspector's sampling regime, the variance of the MUF-D statistic can be broken into three components. The inspection's sensitivity against various falsification scenarios can be traced to one or more of these components. Obvious implications exist for the planning of effective inspections, particularly in the area of resource optimization
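In a commonly cited single-stratum textbook form (a hedged sketch; the report itself develops the full distribution theory and variance decomposition), the D statistic extrapolates the sampled operator-minus-inspector differences to the whole stratum:

```python
def d_statistic(reported, verified, stratum_size):
    """D = (N / n) * sum(reported_i - verified_i): the sampled
    operator-minus-inspector differences extrapolated to all N items
    (common single-stratum textbook form; variance terms omitted)."""
    n = len(reported)
    return stratum_size / n * sum(r - v for r, v in zip(reported, verified))

# Hypothetical stratum of N = 100 items with n = 5 remeasured; the
# facility over-reports each sampled item by roughly 0.2 kg:
reported = [10.2, 10.1, 10.3, 10.2, 10.2]
verified = [10.0, 10.0, 10.0, 10.0, 10.0]
print(d_statistic(reported, verified, 100))  # ~20 kg apparent discrepancy
```

Under falsification, such systematic differences inflate both D and MUF-D, which is why their sampling distributions matter for inspection planning.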
Optimal statistic for detecting gravitational wave signals from binary inspirals with LISA
Rogan, A
2004-01-01
A binary compact object early in its inspiral phase will be picked up by LISA through its nearly monochromatic gravitational radiation. But even this innocuous-appearing candidate poses interesting detection challenges. The data that will be scanned for such sources will be a set of three functions of LISA's twelve data streams obtained through time-delay interferometry, which is necessary to cancel the noise contributions from laser-frequency fluctuations and optical-bench motions to these data streams. We call these three functions pseudo-detectors. The sensitivity of any pseudo-detector to a given sky position is a function of LISA's orbital position. Moreover, at a given point in LISA's orbit, each pseudo-detector has a different sensitivity to the same sky position. In this work, we obtain the optimal statistic for detecting gravitational wave signals, such as from compact binaries early in their inspiral stage, in LISA data. We also present how the sensitivity of LISA, defined by this optimal statistic, vari...
Directory of Open Access Journals (Sweden)
Xiao Chen
2017-11-01
AIM: To investigate the efficacy and visual sensitivity of occlusion therapy combined with training for children with ametropic amblyopia. METHODS: Totally 85 children (85 eyes) with anisometropic amblyopia treated in our hospital from January 2013 to January 2015 were selected. All patients were given occlusion therapy combined with training. Statistical analysis of clinical efficacy and visual sensitivity changes was performed, and the changes of visual acuity, AULCSF, Smax and Frmax were analyzed. RESULTS: The visual acuity after therapy was significantly better than that before treatment (1.12±0.29 vs 0.45±0.25, Pmax and Frmax all increased, the difference between the two groups was statistically significant(PPP=0.001. The mild and moderate groups had no significant difference in total clinical efficiency(χ2=3.091, P=0.079; between the mild and severe groups the total effective rate was significantly different(χ2=11.471, P=0.001; the moderate and severe groups' total clinical efficiency was not significantly different(χ2=3.359, P=0.067. In addition, the total efficiency rate of wearing glasses under the age of 6 was significantly higher than that after 6 years old (95% vs 77%, statistical difference between the two groups was significant(PCONCLUSION: Occlusion therapy combined with comprehensive training, together with wearing corrective spectacles, is desirable in the treatment of children with ametropic amblyopia, especially for children under 7 years of age.
Mikolajewski, Amy J; Scheeringa, Michael S; Weems, Carl F
2017-05-01
Few studies have assessed how the diagnostic criteria for posttraumatic stress disorder (PTSD) apply to older children and adolescents. The introduction of a new, developmentally sensitive set of criteria for very young children (age 6 years and younger) in the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5) raises new questions about the validity of the criteria for older children and adolescents. The current study investigated how diagnostic changes in DSM-5 impact diagnosis rates in 7-18-year-olds. PTSD, impairment, and comorbid psychopathology were assessed in 135 trauma-exposed, treatment-seeking participants. Children (ages 7-12) were examined separately from adolescents (ages 13-18) to assess for potential developmental differences. A significantly higher proportion of 7-12-year-old children met criteria for a DSM-5 diagnosis (53%) compared to the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) (37%). However, among 13-18-year-old adolescents, the proportions diagnosed with DSM-5 (73%) and DSM-IV (74%) did not differ. Participants who met criteria for DSM-5 only (17%) did not differ from those diagnosed with DSM-IV in terms of impairment or comorbidity. Using the newly accepted age-6-and-younger criteria resulted in a significantly higher proportion of 7-12-year-old children (but not 13-18-year-olds) meeting criteria compared to DSM-IV or DSM-5. However, these children showed less impairment and comorbidity than those diagnosed with DSM-IV. These findings suggest that DSM-5 criteria may be more developmentally sensitive than DSM-IV criteria, and may lead to higher prevalence rates of PTSD for 7-12-year-old children, but not for adolescents. Using the very-young-children criteria for 7-12-year-old children may further increase prevalence, but capture children with less severe psychopathology.
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The methods described are applicable to low-level radioactive waste disposal system performance assessment.
Assessment of Problem-Based Learning in the Undergraduate Statistics Course
Karpiak, Christie P.
2011-01-01
Undergraduate psychology majors (N = 51) at a mid-sized private university took a statistics examination on the first day of the research methods course, a course for which a grade of "C" or higher in statistics is a prerequisite. Students who had taken a problem-based learning (PBL) section of the statistics course (n = 15) were compared to those…
A Simple Statistical Thermodynamics Experiment
LoPresto, Michael C.
2010-01-01
Comparing the predicted and actual rolls of combinations of both two and three dice can help to introduce many of the basic concepts of statistical thermodynamics, including multiplicity, probability, microstates, and macrostates, and demonstrate that entropy is indeed a measure of randomness, that disordered states (those of higher entropy) are…
Renyi statistics in equilibrium statistical mechanics
International Nuclear Information System (INIS)
Parvan, A.S.; Biro, T.S.
2010-01-01
The Renyi statistics in the canonical and microcanonical ensembles is examined both in general and in particular for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. By the exact analytical results for the ideal gas, it is shown that in the canonical ensemble, taking the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore it satisfies the requirements of the equilibrium thermodynamics, i.e. the thermodynamical potential of the statistical ensemble is a homogeneous function of first degree of its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamical relations, as those stemming from the Boltzmann-Gibbs statistics in this limit.
Prabhash, K; Bajpai, J; Gokarn, A; Arora, B; Kurkure, P A; Medhekar, A; Kelkar, R; Biswas, S; Gupta, S; Naronha, V; Shetty, N; Goyel, G; Banavali, S D
2014-01-01
Infection is a common cause of mortality and morbidity in cancer patients. Organisms are becoming resistant to antibiotics; age appears to be one of the factors responsible. We analyzed common organisms and their antibiotic sensitivity patterns in correlation with age. This is a single-institution, retrospective analysis of all culture-positive adult and pediatric cancer patients from January 2007 to December 2007. For statistical analysis, the Chi-square test for trend was used and P values were obtained. Of 1251 isolates, 262 were from children (<12 years of age) and the rest from adults (>12 years of age). Gram-negative organisms were predominant (64.95%) while Gram-positive organisms constituted 35.09% of isolates. The most common source in all age groups was peripheral blood, accounting for 47.8% of all samples. The most common organism in adults was Pseudomonas aeruginosa (15.3%), while in children it was coagulase-negative Staphylococcus (19.8%). Antibiotic sensitivity differed between the two groups. In the pediatric group, higher sensitivity was seen for cefoperazone-sulbactam, cefepime, amikacin, and tobramycin. No resistance was found to linezolid. The isolates in both children and adults were predominantly Gram-negative, though children had a proportionately higher share of Gram-positive organisms. High-dose cytarabine use, cotrimoxazole prophylaxis, and frequent use of central lines in children, especially in hematological malignancies, could explain this observation. Children harbor less antibiotic resistance than adults; uncontrolled, cumulative exposure to antibiotics in our community with increasing age, age-related immune factors, and variable bacterial flora in different wards might explain the higher antibiotic resistance in adults. Thus, age is an important factor to be considered while deciding empirical antibiotic therapy.
Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong
2016-01-01
Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.
Applications of advances in nonlinear sensitivity analysis
Energy Technology Data Exchange (ETDEWEB)
Werbos, P J
1982-01-01
The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.
Sensitive beam-bunch phase detector
Energy Technology Data Exchange (ETDEWEB)
Takeuchi, S; Shepard, K W
1984-11-15
A sensitive heavy-ion beam-bunch phase detector has been developed by first examining the relationship between the sensitivity of an rf resonant cavity as a particle bunch detector and the shunt impedance of the same cavity as an accelerating structure. Then the various high shunt impedance rf cavities previously developed for accelerating heavy ions were evaluated for use as bunch detectors. A spiral-loaded geometry was chosen, built, and tested with beam. The sensitivity obtained, 14 μV per electrical nA of beam, is a factor of 3 higher than previously reported. (orig.).
Access to Higher Education in China: Differences in Opportunity
Wang, Houxiong
2011-01-01
Access to higher education in China has opened up significantly in the move towards a mass higher education system. However, aggregate growth does not necessarily imply fair or reasonable distribution of opportunity. In fact, the expansion of higher education has a rather more complex influence on opportunity when admissions statistics are viewed…
Sensitivity of the vibrios to ultraviolet-radiation
International Nuclear Information System (INIS)
Banerjee, S.K.; Chatterjee, S.N.
1977-01-01
The ultraviolet-inactivation kinetics of a number of strains of Vibrio cholerae (classical), Vibrio cholerae (el tor), NAG vibrios and Vibrio parahaemolyticus were investigated. Statistical analyses revealed significant differences between any two of the four types of vibrio in respect of their sensitivity to U.V. (author)
International Nuclear Information System (INIS)
Kaufmann, R.K.; Kauppi, H.; Stock, J.H.
2006-01-01
Comparing statistical estimates for the long-run temperature effect of doubled CO2 with those generated by climate models begs the question: is the long-run temperature effect of doubled CO2 that is estimated from the instrumental temperature record using statistical techniques consistent with the transient climate response, the equilibrium climate sensitivity, or the effective climate sensitivity? Here, we attempt to answer the question of what statistical analyses of the observational record measure by using these same statistical techniques to estimate the temperature effect of a doubling in the atmospheric concentration of carbon dioxide from seventeen simulations run for the Coupled Model Intercomparison Project 2 (CMIP2). The results indicate that the temperature effect estimated by the statistical methodology is consistent with the transient climate response and that this consistency is relatively unaffected by sample size or the increase in radiative forcing in the sample.
Integer Set Compression and Statistical Modeling
DEFF Research Database (Denmark)
Larsson, N. Jesper
2014-01-01
Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, and explore the effects of permuting the enumeration order based on element probabilities.
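The recursive subset-size idea can be illustrated with a toy sketch (an illustrative construction, not necessarily Larsson's actual algorithm): encode a subset of {0, ..., n-1} by recursively recording how many of its elements fall in the left half of the universe; the resulting count sequence could then be entropy-coded using the kept element statistics.

```python
def encode(subset, lo, hi):
    """Return the preorder list of left-half counts for subset within [lo, hi)."""
    if hi - lo <= 1 or not subset:
        return []
    mid = (lo + hi) // 2
    left = [x for x in subset if x < mid]
    right = [x for x in subset if x >= mid]
    return [len(left)] + encode(left, lo, mid) + encode(right, mid, hi)

def decode(counts, total, lo, hi):
    """Rebuild the subset from the count sequence (consumed front-to-back)."""
    if hi - lo <= 1:
        return [lo] * total          # total is 0 or 1 at a leaf
    if total == 0:
        return []
    mid = (lo + hi) // 2
    n_left = counts.pop(0)           # how many elements fell in the left half
    left = decode(counts, n_left, lo, mid)
    right = decode(counts, total - n_left, mid, hi)
    return left + right

s = [2, 3, 7, 11, 12]
code = encode(s, 0, 16)
print(code, "->", decode(list(code), len(s), 0, 16))
```

The counts near the root carry most of the information; with per-element probability statistics, each count could be coded against a binomial or hypergeometric model rather than stored verbatim.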
Sensitivity Enhancement in Si Nanophotonic Waveguides Used for Refractive Index Sensing
Directory of Open Access Journals (Sweden)
Yaocheng Shi
2016-03-01
Full Text Available A comparative study is given for the sensitivity of several typical Si nanophotonic waveguides, including SOI (silicon-on-insulator) nanowires, nanoslot waveguides, suspended Si nanowires, and nanofibers. The cases of gas sensing (ncl ~ 1.0) and liquid sensing (ncl ~ 1.33) are considered. When using SOI nanowires (with a SiO2 buffer layer), the sensitivity for liquid sensing (S ~ 0.55) is higher than that for gas sensing (S ~ 0.35) due to the lower asymmetry in the vertical direction. By using SOI nanoslot waveguides, suspended Si nanowires, and Si nanofibers, one could achieve a higher sensitivity compared to sensing with a free-space beam (S = 1.0). The sensitivity for gas sensing is higher than that for liquid sensing due to the higher index contrast. The waveguide sensitivity of an optimized suspended Si nanowire for gas sensing is as high as 1.5, which is much higher than that of a SOI nanoslot waveguide. Furthermore, the optimal design has a very large tolerance to core width variation due to fabrication error (∆w ~ ±50 nm). In contrast, a Si nanofiber could also give a very high sensitivity (e.g., ~1.43) while the fabrication tolerance is very small (i.e., ∆w < ±5 nm). The comparative study shows that the suspended Si nanowire is a good choice to achieve ultra-high waveguide sensitivity.
Ten Years of Cloud Properties from MODIS: Global Statistics and Use in Climate Model Evaluation
Platnick, Steven E.
2011-01-01
The NASA Moderate Resolution Imaging Spectroradiometer (MODIS), launched onboard the Terra and Aqua spacecraft, began Earth observations on February 24, 2000 and June 24, 2002, respectively. Among the algorithms developed and applied to this sensor, a suite of cloud products includes cloud masking/detection, cloud-top properties (temperature, pressure), and optical properties (optical thickness, effective particle radius, water path, and thermodynamic phase). All cloud algorithms underwent numerous changes and enhancements for the latest Collection 5 production version; this process continues with the current Collection 6 development. We will show example MODIS Collection 5 cloud climatologies derived from global spatial and temporal aggregations provided in the archived gridded Level-3 MODIS atmosphere team product (product names MOD08 and MYD08 for MODIS Terra and Aqua, respectively). Data sets in this Level-3 product include scalar statistics as well as 1- and 2-D histograms of many cloud properties, allowing for higher-order information and correlation studies. In addition to these statistics, we will show trends and statistical significance in annual and seasonal means for a variety of the MODIS cloud properties, as well as the time required for detection given assumed trends. To assist in climate model evaluation, we have developed a MODIS cloud simulator with an accompanying netCDF file containing subsetted monthly Level-3 statistical data sets that correspond to the simulator output. Correlations of cloud properties with ENSO offer the potential to evaluate model cloud sensitivity; initial results will be discussed.
A High-Sensitivity Current Sensor Utilizing CrNi Wire and Microfiber Coils
Directory of Open Access Journals (Sweden)
Xiaodong Xie
2014-05-01
Full Text Available We obtain an extremely high current sensitivity by wrapping a section of microfiber on a thin-diameter chromium-nickel wire. Our detected current sensitivity is as high as 220.65 nm/A2 for a structure length of only 35 μm. Such sensitivity is two orders of magnitude higher than the counterparts reported in the literature. Analysis shows that a higher resistivity or/and a thinner diameter of the metal wire may produce higher sensitivity. The effects of varying the structure parameters on sensitivity are discussed. The presented structure has potential for low-current sensing or highly electrically-tunable filtering applications.
Baykara, Zehra Gocmen; Demir, Sevil Guler; Yaman, Sengul
2015-09-01
Moral sensitivity is a life-long cognitive ability. Nurses, whose professional purpose is "curing human beings," are expected to have a highly developed moral sensitivity. The general opinion is that ethics education plays a significant role in enhancing moral sensitivity in nurses' professional behaviors and in distinguishing ethical violations. This study was conducted as intervention research to determine the effect of ethics training on fourth-year nursing students' recognition of ethical violations experienced in the hospital and on the development of their ethical sensitivity. The study was conducted with 50 students, 25 each in the experiment and control groups. Students in the experiment group were provided ethics training and consultancy services. The data were collected through a data collection form covering the socio-demographic characteristics and ethical sensitivity of the students, the Moral Sensitivity Questionnaire, and an observation form on ethical principle violations/protection in the clinic environment. The data were digitized with the SPSS for Windows 13.0 program and evaluated using counts, percentage calculations, the paired-samples t-test, the Wilcoxon test, and the McNemar test. The total Moral Sensitivity Questionnaire pre-test score average of students in the experiment group was 93.88 ± 13.57, and their total post-test score average was 89.24 ± 15.90. The total pre-test score average of students in the control group was 91.48 ± 17.59, and their total post-test score average was 97.72 ± 19.91. The post-training ethical sensitivity of students in the experiment group increased; however, this was not statistically significant. Furthermore, it was determined that the number of ethical principle protection
A comprehensive sensitivity and uncertainty analysis of a milk drying process
DEFF Research Database (Denmark)
Ferrari, A.; Gutiérrez, S.; Sin, G.
2015-01-01
A simple steady state model of a milk drying process was built to help process understanding. It involves a spray chamber and also internal/external fluid beds. The model was subjected to a statistical analysis for quality assurance using sensitivity analysis (SA) of inputs/parameters and identifiability analysis (IA). SA results provide evidence towards over-parameterization in the model, and the chamber inlet dry bulb air temperature was the input with the highest sensitivity. IA results indicated that at most 4 parameters are identifiable: two from the spray chamber and one from each fluid bed dryer.
Journal data sharing policies and statistical reporting inconsistencies in psychology.
Nuijten, M.B.; Borghuis, J.; Veldkamp, C.L.S.; Dominguez Alvarez, L.; van Assen, M.A.L.M.; Wicherts, J.M.
2018-01-01
In this paper, we present three retrospective observational studies that investigate the relation between data sharing and statistical reporting inconsistencies. Previous research found that reluctance to share data was related to a higher prevalence of statistical errors, often in the direction of
Understanding Statistics - Cancer Statistics
Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.
Disentangling Aerosol Cooling and Greenhouse Warming to Reveal Earth's Climate Sensitivity
Storelvmo, Trude; Leirvik, Thomas; Phillips, Petter; Lohmann, Ulrike; Wild, Martin
2015-04-01
Earth's climate sensitivity has been the subject of heated debate for decades, and recently spurred renewed interest after the latest IPCC assessment report suggested a downward adjustment of the most likely range of climate sensitivities. Here, we present a study based on the time period 1964 to 2010, which is unique in that it does not rely on global climate models (GCMs) in any way. The study uses surface observations of temperature and incoming solar radiation from approximately 1300 surface sites, along with observations of the equivalent CO2 concentration (CO2,eq) in the atmosphere, to produce a new best estimate for the transient climate sensitivity of 1.9K (95% confidence interval 1.2K - 2.7K). This is higher than other recent observation-based estimates, and is better aligned with the estimate of 1.8K and range (1.1K - 2.5K) derived from the latest generation of GCMs. The new estimate is produced by incorporating the observations in an energy balance framework, and by applying statistical methods that are standard in the field of Econometrics, but less common in climate studies. The study further suggests that about a third of the continental warming due to increasing CO2,eq was masked by aerosol cooling during the time period studied.
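The regression step behind such an estimate can be sketched on synthetic data (the forcing path, noise level, and ordinary-least-squares fit below are illustrative assumptions, not the authors' actual econometric procedure): in an energy-balance framework warming scales with the logarithm of CO2,eq, so regressing temperature anomalies on log2 of the concentration ratio yields a transient sensitivity in K per doubling.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1964, 2011)
co2eq = 340.0 * 1.005 ** (years - 1964)   # assumed CO2,eq path, 0.5%/yr growth
true_tcr = 1.9                            # K per doubling, used to fabricate the demo data

x = np.log2(co2eq / co2eq[0])             # doublings of CO2,eq since 1964
temp = true_tcr * x + rng.normal(0.0, 0.1, x.size)  # synthetic temperature anomalies

# OLS fit: the slope is the transient sensitivity estimate in K per doubling.
slope, intercept = np.polyfit(x, temp, 1)
print(f"estimated transient sensitivity: {slope:.2f} K per doubling")
```

The published analysis additionally controls for aerosol-driven changes in incoming solar radiation at the surface sites; this sketch shows only the forcing-to-temperature regression itself.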
Statistical and low dose response
International Nuclear Information System (INIS)
Thorson, M.R.; Endres, G.W.R.
1981-01-01
The low dose response and the lower limit of detection of the Hanford dosimeter depend upon many factors, including the energy of the radiation, whether the exposure is to a single radiation type or to mixed fields, annealing cycles, environmental factors, and how well various batches of TLD materials are matched in the system. A careful statistical study and sensitivity analysis were performed to determine how these factors influence the response of the dosimeter system. Estimates of the standard deviation of calculated dose for various mixed-field exposures from 0 to 1000 mrem are included in this study.
Statistical characterization of the standard map
Ruiz, Guiomar; Tirnakli, Ugur; Borges, Ernesto P.; Tsallis, Constantino
2017-06-01
The standard map, paradigmatic conservative system in the (x, p) phase space, has been recently shown (Tirnakli and Borges (2016 Sci. Rep. 6 23644)) to exhibit interesting statistical behaviors directly related to the value of the standard map external parameter K. A comprehensive statistical numerical description is achieved in the present paper. More precisely, for large values of K (e.g. K = 10) where the Lyapunov exponents are neatly positive over virtually the entire phase space consistently with Boltzmann-Gibbs (BG) statistics, we verify that the q-generalized indices related to the entropy production q_ent, the sensitivity to initial conditions q_sen, the distribution of a time-averaged (over successive iterations) phase-space coordinate q_stat, and the relaxation to the equilibrium final state q_rel, collapse onto a fixed point, i.e. q_ent = q_sen = q_stat = q_rel = 1. In remarkable contrast, for small values of K (e.g. K = 0.2) where the Lyapunov exponents are virtually zero over the entire phase space, we verify q_ent = q_sen = 0, q_stat ≃ 1.935, and q_rel ≃ 1.4. The situation corresponding to intermediate values of K, where both stable orbits and a chaotic sea are present, is discussed as well. The present results transparently illustrate when BG behavior and/or q-statistical behavior are observed.
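The contrast between the two regimes of K can be reproduced with a minimal numerical experiment (initial conditions and iteration counts are illustrative): iterate the standard map from two nearby initial conditions and compare how the separation grows. For K = 10 the separation explodes exponentially, consistent with positive Lyapunov exponents and BG statistics; for K = 0.2 it barely grows, the weakly chaotic regime where q-statistics applies.

```python
import math

def standard_map(x, p, K, n):
    """Iterate the standard map n times: p' = p + K sin x, x' = x + p' (mod 2*pi)."""
    for _ in range(n):
        p = (p + K * math.sin(x)) % (2 * math.pi)
        x = (x + p) % (2 * math.pi)
    return x, p

def separation(K, n=50, d0=1e-10):
    """Final distance (on the torus) between two orbits started d0 apart in x."""
    x1, p1 = standard_map(2.0, 1.0, K, n)
    x2, p2 = standard_map(2.0 + d0, 1.0, K, n)
    dx = min(abs(x1 - x2), 2 * math.pi - abs(x1 - x2))
    dp = min(abs(p1 - p2), 2 * math.pi - abs(p1 - p2))
    return math.hypot(dx, dp)

print(separation(10.0))  # grows to O(1): strong chaos, BG-like regime
print(separation(0.2))   # stays tiny: nearly integrable, q-statistical regime
```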
International Nuclear Information System (INIS)
Weathers, J.B.; Luck, R.; Weathers, J.W.
2009-01-01
The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
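A minimal Monte Carlo sketch of the covariance estimation described above might look as follows (the nominal values and uncertainty magnitudes are assumed for illustration, not taken from the paper): systematic uncertainty is modeled as one draw shared by both quantities of interest, random uncertainty as independent draws, and the covariance matrix of the comparison errors E = D - S is estimated from the samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 20000
d_nominal = np.array([10.0, 5.0])   # measured quantities of interest (illustrative)
s_nominal = np.array([9.6, 5.3])    # model-based predictions (illustrative)

sigma_random = np.array([0.2, 0.1])   # independent per-quantity measurement noise
sigma_systematic = 0.15               # one bias source shared by both quantities

beta = rng.normal(0.0, sigma_systematic, (n_samples, 1))  # shared systematic draws
eps = rng.normal(0.0, sigma_random, (n_samples, 2))       # independent random draws
errors = (d_nominal + beta + eps) - s_nominal             # comparison errors E = D - S

cov = np.cov(errors, rowvar=False)
print(cov)  # off-diagonal term reflects the shared systematic source
```

The estimated covariance matrix is what defines the 95% confidence contours for the comparison error; the shared systematic draw is exactly what makes the off-diagonal term nonzero.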
Zhang, P; Jones, R M
2014-01-01
Beam-excited higher order modes (HOMs) can be used to provide beam diagnostics. Here we focus on 3.9 GHz superconducting accelerating cavities. In particular, we study dipole mode excitation and its application to beam position determination. In order to extract beam position information, linear regression can be used. Due to the large number of sampling points in the waveforms, statistical methods such as singular value decomposition (SVD) and k-means clustering are used to effectively reduce the dimension of the system. These are compared with direct linear regression (DLR) on the entire waveforms. A cross-validation technique is used to study the sample-independent precision of the position predictions given by these three methods. An RMS prediction error in the beam position of approximately 50 microns can be achieved by DLR and SVD, while k-means clustering suggests 70 microns.
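The SVD-plus-regression pipeline can be sketched on synthetic waveforms (the mode shape, noise level, and dimensions below are invented for illustration; real HOM data would replace them): project each waveform onto a few right singular vectors, then fit a linear map from the reduced features to the known beam positions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bunches, n_samples = 200, 1000
positions = rng.uniform(-1.0, 1.0, n_bunches)   # known beam offsets (mm), training labels

# Assume the dipole-mode amplitude scales linearly with beam offset; add noise.
mode_shape = np.sin(np.linspace(0, 20 * np.pi, n_samples))
waveforms = np.outer(positions, mode_shape) + rng.normal(0, 0.05, (n_bunches, n_samples))

# SVD-based dimension reduction: keep a few right singular vectors as features.
U, S, Vt = np.linalg.svd(waveforms, full_matrices=False)
k = 3
features = waveforms @ Vt[:k].T                 # (n_bunches, k) reduced coordinates

# Linear regression from reduced features (plus intercept) to beam position.
A = np.column_stack([features, np.ones(n_bunches)])
coef, *_ = np.linalg.lstsq(A, positions, rcond=None)
predicted = A @ coef
rms_error = np.sqrt(np.mean((predicted - positions) ** 2))
print(f"RMS position error: {rms_error:.4f} mm")
```

In practice the regression would be calibrated on bunches with positions measured by beam position monitors, and the cross-validation mentioned above would hold out part of that data to assess precision.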
Spectral envelope sensitivity of musical instrument sounds.
Gunawan, David; Sen, D
2008-01-01
It is well known that the spectral envelope is a perceptually salient attribute in musical instrument timbre perception. While a number of studies have explored discrimination thresholds for changes to the spectral envelope, the question of how sensitivity varies as a function of center frequency and bandwidth for musical instruments has yet to be addressed. In this paper a two-alternative forced-choice experiment was conducted to observe perceptual sensitivity to modifications made on trumpet, clarinet and viola sounds. The experiment involved attenuating 14 frequency bands for each instrument in order to determine discrimination thresholds as a function of center frequency and bandwidth. The results indicate that perceptual sensitivity is governed by the first few harmonics and sensitivity does not improve when extending the bandwidth any higher. However, sensitivity was found to decrease if changes were made only to the higher frequencies and continued to decrease as the distorted bandwidth was widened. The results are analyzed and discussed with respect to two other spectral envelope discrimination studies in the literature as well as what is predicted from a psychoacoustic model.
Statistical model for prediction of hearing loss in patients receiving cisplatin chemotherapy.
Johnson, Andrew; Tarima, Sergey; Wong, Stuart; Friedland, David R; Runge, Christina L
2013-03-01
This statistical model might be used to predict cisplatin-induced hearing loss, particularly in patients undergoing concomitant radiotherapy. To create a statistical model based on pretreatment hearing thresholds to provide an individual probability for hearing loss from cisplatin therapy and, secondarily, to investigate the use of hearing classification schemes as predictive tools for hearing loss. Retrospective case-control study. Tertiary care medical center. A total of 112 subjects receiving chemotherapy and audiometric evaluation were evaluated for the study. Of these subjects, 31 met inclusion criteria for analysis. The primary outcome measurement was a statistical model providing the probability of hearing loss following the use of cisplatin chemotherapy. Fifteen of the 31 subjects had significant hearing loss following cisplatin chemotherapy. American Academy of Otolaryngology-Head and Neck Society and Gardner-Robertson hearing classification schemes revealed little change in hearing grades between pretreatment and posttreatment evaluations for subjects with or without hearing loss. The Chang hearing classification scheme could effectively be used as a predictive tool in determining hearing loss with a sensitivity of 73.33%. Pretreatment hearing thresholds were used to generate a statistical model, based on quadratic approximation, to predict hearing loss (C statistic = 0.842, cross-validated = 0.835). The validity of the model improved when only subjects who received concurrent head and neck irradiation were included in the analysis (C statistic = 0.91). A calculated cutoff of 0.45 for predicted probability has a cross-validated sensitivity and specificity of 80%. Pretreatment hearing thresholds can be used as a predictive tool for cisplatin-induced hearing loss, particularly with concomitant radiotherapy.
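The cutoff step can be illustrated directly (the probabilities and outcomes below are hypothetical, not the study's data): given model-predicted probabilities and observed outcomes, a threshold of 0.45 turns probabilities into binary predictions, from which sensitivity and specificity follow.

```python
def sensitivity_specificity(probs, outcomes, cutoff=0.45):
    """Compute (sensitivity, specificity) of thresholded probability predictions."""
    preds = [p >= cutoff for p in probs]
    tp = sum(p and o for p, o in zip(preds, outcomes))
    tn = sum((not p) and (not o) for p, o in zip(preds, outcomes))
    fp = sum(p and (not o) for p, o in zip(preds, outcomes))
    fn = sum((not p) and o for p, o in zip(preds, outcomes))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical predicted hearing-loss probabilities and observed outcomes:
probs = [0.9, 0.7, 0.5, 0.4, 0.3, 0.6, 0.2, 0.8, 0.1, 0.55]
outcomes = [True, True, True, False, False, True, False, True, False, False]
sens, spec = sensitivity_specificity(probs, outcomes)
print(sens, spec)  # → 1.0 0.8
```

Varying the cutoff trades sensitivity against specificity; sweeping it over [0, 1] traces out the ROC curve whose area is the reported C statistic.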
Medical Statistics – Mathematics or Oracle? Farewell Lecture
Directory of Open Access Journals (Sweden)
Gaus, Wilhelm
2005-06-01
Full Text Available Certainty is rare in medicine. This is a direct consequence of the individuality of each and every human being and the reason why we need medical statistics. However, statistics have their pitfalls, too. Fig. 1 shows that the suicide rate peaks in youth, while in Fig. 2 the rate is highest in midlife and Fig. 3 in old age. Which of these contradictory messages is right? After an introduction to the principles of statistical testing, this lecture examines the probability with which statistical test results are correct. For this purpose the level of significance and the power of the test are compared with the sensitivity and specificity of a diagnostic procedure. The probability of obtaining correct statistical test results is the same as that for the positive and negative correctness of a diagnostic procedure and therefore depends on prevalence. The focus then shifts to the problem of multiple statistical testing. The lecture demonstrates that for each data set of reasonable size at least one test result proves to be significant - even if the data set is produced by a random number generator. It is extremely important that a hypothesis is generated independently from the data used for its testing. These considerations enable us to understand the gradation of "lame excuses, lies and statistics" and the difference between pure truth and the full truth. Finally, two historical oracles are cited.
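The multiple-testing point can be checked numerically (an illustrative simulation, not part of the lecture): run a batch of t-tests on pure noise and count how often at least one comes out "significant" at the 5% level.

```python
import random
import statistics

def spurious_hit(n_tests=20, n=30, t_crit=2.045):  # t_crit ~ two-sided 5% for df = 29
    """Return True if any of n_tests one-sample t-tests on pure noise is 'significant'."""
    for _ in range(n_tests):
        xs = [random.gauss(0, 1) for _ in range(n)]
        t = statistics.mean(xs) / (statistics.stdev(xs) / n ** 0.5)
        if abs(t) > t_crit:
            return True
    return False

random.seed(42)
trials = 500
rate = sum(spurious_hit() for _ in range(trials)) / trials
print(f"fraction of noise datasets with >= 1 'significant' result: {rate:.2f}")
# With 20 independent tests at the 5% level, about 1 - 0.95**20 ~ 0.64 is expected.
```

This is exactly why a hypothesis must be generated independently of the data used to test it: given enough tests, random data alone will usually hand the analyst a "finding."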
ter Burg, Wouter; Bouma, Krista; Schakel, Durk J; Wijnhoven, Susan W P; van Engelen, Jacqueline; van Loveren, Henk; Ezendam, Janine
2014-04-01
Consumers using air fresheners are exposed to the emitted ingredients, including fragrances, via the respiratory tract. Several fragrances are known skin sensitizers, but it is unknown whether inhalation exposure to these chemicals can induce respiratory sensitization. Effects on the immune system were assessed by testing a selection of five fragrance allergens in the respiratory local lymph node assay (LLNA). The probability and extent of exposure were assessed by measuring concentrations of the 24 known fragrance allergens in 109 air fresheners. The most frequently used fragrances in air fresheners were D-limonene and linalool. In the respiratory LLNA, these fragrances were negative. Of the other tested chemicals, only isoeugenol induced a statistically significant increase in cell proliferation. Consumer exposure was assessed in more detail for D-limonene, linalool, and isoeugenol by using exposure modeling tools. The most frequently used fragrances in air fresheners, D-limonene and linalool, gave rise to a higher consumer exposure than isoeugenol. To evaluate whether the consumer exposure to these fragrances is low or high, these levels were compared with measured air concentrations of diisocyanates, known human respiratory sensitizers. This comparison showed that consumer exposure from air fresheners to D-limonene, linalool, and isoeugenol is considerably lower than occupational exposure to diisocyanates. Combining this knowledge of sensitizing potency with the much lower exposure compared with diisocyanates, it seems highly unlikely that isoeugenol can induce respiratory sensitization in consumers using air fresheners.
Statistical learning and selective inference.
Taylor, Jonathan; Tibshirani, Robert J
2015-06-23
We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
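The "higher bar" for cherry-picked associations can be illustrated numerically. In this sketch (sample sizes, seed, and threshold are invented for illustration), every candidate feature is pure noise, yet the feature selected for having the largest correlation with the response almost always clears the naive single-test significance threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))   # pure noise: no feature is truly associated
y = rng.standard_normal(n)

# Cherry-pick the feature with the largest absolute correlation with y.
corr = X.T @ y / n
best = int(np.argmax(np.abs(corr)))
z_selected = abs(corr[best]) * np.sqrt(n)   # approximately N(0,1) under the null

# A single pre-specified feature gives |z| > 1.96 only ~5% of the time,
# but the *maximum* over 200 noise features nearly always exceeds it,
# so the naive test applied after selection is badly miscalibrated.
print(z_selected > 1.96)
```

Selective inference methods correct the reference distribution so that it accounts for this maximization step rather than treating the selected feature as pre-specified.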
Student Drop-Out from German Higher Education Institutions
Heublein, Ulrich
2014-01-01
Currently, 28% of students in any one year give up their studies in bachelor degree programmes at German higher education institutions. Drop-out is to be understood as definitive departure from the higher education system without obtaining an academic degree. The drop-out rate is thereby calculated with the help of statistical estimation…
Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P
1999-01-01
Functional neuroimaging (FNI) provides experimental access to the intact, living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on the assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data, indicating that none is optimal for all purposes. In order to make optimal use of the methods available, it is important to know their limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149
Kiekkas, Panagiotis; Panagiotarou, Aliki; Malja, Alvaro; Tahirai, Daniela; Zykai, Rountina; Bakalis, Nick; Stefanopoulos, Nikolaos
2015-12-01
Although statistical knowledge and skills are necessary for promoting evidence-based practice, health sciences students have expressed anxiety about statistics courses, which may hinder their learning of statistical concepts. To evaluate the effects of a biostatistics course on nursing students' attitudes toward statistics and to explore the association between these attitudes and their performance in the course examination. One-group quasi-experimental pre-test/post-test design. Undergraduate nursing students of the fifth or higher semester of studies, who attended a biostatistics course. Participants were asked to complete the pre-test and post-test forms of The Survey of Attitudes Toward Statistics (SATS)-36 scale at the beginning and end of the course respectively. Pre-test and post-test scale scores were compared, while correlations between post-test scores and participants' examination performance were estimated. Among 156 participants, post-test scores of the overall SATS-36 scale and of the Affect, Cognitive Competence, Interest and Effort components were significantly higher than pre-test ones, indicating that the course was followed by more positive attitudes toward statistics. Among 104 students who participated in the examination, higher post-test scores of the overall SATS-36 scale and of the Affect, Difficulty, Interest and Effort components were significantly but weakly correlated with higher examination performance. Students' attitudes toward statistics can be improved through appropriate biostatistics courses, while positive attitudes contribute to higher course achievements and possibly to improved statistical skills in later professional life. Copyright © 2015 Elsevier Ltd. All rights reserved.
Higher Education Institutions in India and its Management
Niradhar Dey
2011-01-01
Managing higher education institutions in India is like standing at a junction: how to show the path to the future nation-builders of our country. There is much hue and cry in present Indian higher education about what to do and what not to do. The issue has become so sensitive that it creates conflict within and between the management of higher education institutions. In recent years there have been debates and controversies regarding the management of higher education institutions so as to i...
National Statistical Commission and Indian Official Statistics*
Indian Academy of Sciences (India)
IAS Admin
a good collection of official statistics of that time. With more … statistical agencies and institutions to provide details of statistical activities … ing several training programmes … ful completion of Indian Statistical Service examinations, the …
Official Statistics and Statistics Education: Bridging the Gap
Directory of Open Access Journals (Sweden)
Gal Iddo
2017-03-01
Full Text Available This article aims to challenge official statistics providers and statistics educators to ponder on how to help non-specialist adult users of statistics develop those aspects of statistical literacy that pertain to official statistics. We first document the gap in the literature in terms of the conceptual basis and educational materials needed for such an undertaking. We then review skills and competencies that may help adults to make sense of statistical information in areas of importance to society. Based on this review, we identify six elements related to official statistics about which non-specialist adult users should possess knowledge in order to be considered literate in official statistics: (1) the system of official statistics and its work principles; (2) the nature of statistics about society; (3) indicators; (4) statistical techniques and big ideas; (5) research methods and data sources; and (6) awareness and skills for citizens’ access to statistical reports. Based on this ad hoc typology, we discuss directions that official statistics providers, in cooperation with statistics educators, could take in order to (1) advance the conceptualization of skills needed to understand official statistics, and (2) expand educational activities and services, specifically by developing a collaborative digital textbook and a modular online course, to improve public capacity for understanding of official statistics.
Evaluation of clustering statistics with N-body simulations
International Nuclear Information System (INIS)
Quinn, T.R.
1986-01-01
Two series of N-body simulations are used to determine the effectiveness of various clustering statistics in revealing initial conditions from evolved models. All the simulations contained 16384 particles and were integrated with the PPPM code. One series is a family of models with power at only one wavelength. The family contains five models with the wavelength of the power separated by factors of √2. The second series is a family of all equal-power combinations of two wavelengths taken from the first series. The clustering statistics examined are the two-point correlation function, the multiplicity function, the nearest-neighbor distribution, the void probability distribution, the distribution of counts in cells, and the peculiar velocity distribution. It is found that the covariance function, the nearest-neighbor distribution, and the void probability distribution are relatively insensitive to the initial conditions. The distribution of counts in cells shows somewhat more sensitivity, but the multiplicity function is the best of the statistics considered for revealing the initial conditions.
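The counts-in-cells statistic can be made concrete with a small two-dimensional sketch (the point sets, cluster width, and cell grid are invented for illustration; the simulations above are three-dimensional and far larger). Clustered and unclustered point sets share the same mean cell count, but the variance of the counts separates them clearly:

```python
import numpy as np

rng = np.random.default_rng(3)

def counts_in_cells(points, n_cells=8):
    """Particle counts in an n_cells x n_cells grid on the unit square,
    one of the clustering statistics compared in the abstract above."""
    idx = np.floor(points * n_cells).astype(int).clip(0, n_cells - 1)
    flat = idx[:, 0] * n_cells + idx[:, 1]
    return np.bincount(flat, minlength=n_cells * n_cells)

# Unclustered (Poisson-like) points vs. points clumped around 32 centres.
uniform = rng.random((4096, 2))
centres = rng.random((32, 2))
clustered = (centres[rng.integers(0, 32, 4096)]
             + 0.01 * rng.standard_normal((4096, 2))) % 1.0

# Both sets give the same mean count per cell, but the count variance
# is far larger for the clustered set: the statistic "sees" clustering.
v_u = counts_in_cells(uniform).var()
v_c = counts_in_cells(clustered).var()
print(v_u, v_c)
```

For a Poisson process the count variance is close to the mean; strong excess variance is the signature of clustering that statistics like this are designed to detect.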
Radioecological sensitivity of permanent grasslands
International Nuclear Information System (INIS)
Besson, Benoit
2009-01-01
The project 'SENSIB' of the Institute for Radiological Protection and Nuclear Safety (IRSN) aims at characterizing and classifying parameters with significant impact on the transfer of radioactive contaminants in the environment. This thesis focuses on permanent grassland areas. Its objectives are the analysis of the activity variations of two artificial radionuclides (¹³⁷Cs and ⁹⁰Sr) in the chain from soil to dairy products, as well as the categorization of the ecological and anthropogenic parameters which determine the sensitivity of the studied area. For this study, in situ sampling was carried out in 15 farms in 3 different French regions (Charente, Puy-de-Dome and Jura). The sampling sites were chosen according to their natural variations (geology, altitude and climate) and the soil types. In addition to the radiological measurements, geographic, soil and vegetation data, as well as data concerning cattle-rearing and cheese manufacturing processes, were gathered. From the soil to the grass vegetation, ¹³⁷Cs transfer factors vary between 3 × 10⁻³ and 148 × 10⁻³ Bq kg⁻¹ (dry weight) per Bq kg⁻¹ (dry weight) (N = 73). These transfer factors are significantly higher in the Puy-de-Dome region than in the Jura region. The ¹³⁷Cs transfer factor from cattle feed to milk varies from 5.9 × 10⁻³ to 258 × 10⁻³ Bq kg⁻¹ (fresh weight) per Bq kg⁻¹ (dry weight) (N = 28). Statistically, it is higher in the Charente region. Finally, the ⁹⁰Sr transfer factor from milk to cheese ranges from 3.9 to 12.1; the studied site with the highest factor is the Jura (N = 25). The link between milk and dairy products is the stage with the largest ¹³⁷Cs and ⁹⁰Sr transfers. A nonlinear approach based on a discretization method of the transfer factor with multiple comparison tests permits a classification of the sensitivity factors from soil to grass vegetation. We can determine 20 factors interfering in the ¹³⁷Cs transfer into the vegetation, for instance, the clay rate of the soils or a marker
The neurobiology of uncertainty: implications for statistical learning.
Hasson, Uri
2017-01-05
The capacity for assessing the degree of uncertainty in the environment relies on estimating statistics of temporally unfolding inputs. This, in turn, allows calibration of predictive and bottom-up processing, and signalling changes in temporally unfolding environmental features. In the last decade, several studies have examined how the brain codes for and responds to input uncertainty. Initial neurobiological experiments implicated frontoparietal and hippocampal systems, based largely on paradigms that manipulated distributional features of visual stimuli. However, later work in the auditory domain pointed to different systems, whose activation profiles have interesting implications for computational and neurobiological models of statistical learning (SL). This review begins by briefly recapping the historical development of ideas pertaining to the sensitivity to uncertainty in temporally unfolding inputs. It then discusses several issues at the interface of studies of uncertainty and SL. Following, it presents several current treatments of the neurobiology of uncertainty and reviews recent findings that point to principles that serve as important constraints on future neurobiological theories of uncertainty, and relatedly, SL. This review suggests it may be useful to establish closer links between neurobiological research on uncertainty and SL, considering particularly mechanisms sensitive to local and global structure in inputs, the degree of input uncertainty, the complexity of the system generating the input, learning mechanisms that operate on different temporal scales and the use of learnt information for online prediction.This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
Directory of Open Access Journals (Sweden)
Ugander Martin
2006-11-01
Full Text Available Abstract Background A Paleolithic diet has been suggested to be more in concordance with human evolutionary legacy than a cereal-based diet. This might explain the lower incidence among hunter-gatherers of diseases of affluence such as type 2 diabetes, obesity and cardiovascular disease. The aim of this study was to experimentally study the long-term effect of a Paleolithic diet on risk factors for these diseases in domestic pigs. We examined glucose tolerance, post-challenge insulin response, plasma C-reactive protein and blood pressure after 15 months on a Paleolithic diet in comparison with a cereal-based swine feed. Methods Upon weaning, twenty-four piglets were randomly allocated either to cereal-based swine feed (Cereal group) or a cereal-free Paleolithic diet consisting of vegetables, fruit, meat and a small amount of tubers (Paleolithic group). At 17 months of age an intravenous glucose tolerance test was performed and pancreas specimens were collected for immunohistochemistry. Group comparisons of continuous variables were made by use of the t-test. … Results At the end of the study the Paleolithic group weighed 22% less and had 43% lower subcutaneous fat thickness at mid sternum. No significant difference was seen in fasting glucose between groups. Dynamic insulin sensitivity was significantly higher (p = 0.004) and the insulin response was significantly lower in the Paleolithic group (p = 0.001). The geometric mean of C-reactive protein was 82% lower (p = 0.0007) and intra-arterial diastolic blood pressure was 13% lower in the Paleolithic group (p = 0.007). In evaluations of multivariate correlations, diet emerged as the strongest explanatory variable for the variations in dynamic insulin sensitivity, insulin response, C-reactive protein and diastolic blood pressure when compared to other relevant variables such as weight and subcutaneous fat thickness at mid sternum. There was no obvious immunohistochemical difference in pancreatic islets
Naeger, D M; Chang, S D; Kolli, P; Shah, V; Huang, W; Thoeni, R F
2011-01-01
Objective The study compared the sensitivity, specificity, confidence and interpretation time of readers of differing experience in diagnosing acute appendicitis with contrast-enhanced CT using neutral vs positive oral contrast agents. Methods Contrast-enhanced CT for right lower quadrant or right flank pain was performed in 200 patients with neutral and 200 with positive oral contrast including 199 with proven acute appendicitis and 201 with other diagnoses. Test set disease prevalence was 50%. Two experienced gastrointestinal radiologists, one fellow and two first-year residents blindly assessed all studies for appendicitis (2000 readings) and assigned confidence scores (1=poor to 4=excellent). Receiver operating characteristic (ROC) curves were generated. Total interpretation time was recorded. Each reader's interpretation with the two agents was compared using standard statistical methods. Results Average reader sensitivity was found to be 96% (range 91–99%) with positive and 95% (89–98%) with neutral oral contrast; specificity was 96% (92–98%) and 94% (90–97%). For each reader, no statistically significant difference was found between the two agents (sensitivities p-values >0.6; specificities p-values>0.08), in the area under the ROC curve (range 0.95–0.99) or in average interpretation times. In cases without appendicitis, positive oral contrast demonstrated improved appendix identification (average 90% vs 78%) and higher confidence scores for three readers. Average interpretation times showed no statistically significant differences between the agents. Conclusion Neutral vs positive oral contrast does not affect the accuracy of contrast-enhanced CT for diagnosing acute appendicitis. Although positive oral contrast might help to identify normal appendices, we continue to use neutral oral contrast given its other potential benefits. PMID:20959365
Energy Technology Data Exchange (ETDEWEB)
Zhang, Pei, E-mail: pei.zhang@desy.de [School of Physics and Astronomy, The University of Manchester, Oxford Road, Manchester M13 9PL (United Kingdom); Deutsches Elektronen-Synchrotron (DESY), Notkestraße 85, D-22607 Hamburg (Germany); Cockcroft Institute of Science and Technology, Daresbury WA4 4AD (United Kingdom); Baboi, Nicoleta [Deutsches Elektronen-Synchrotron (DESY), Notkestraße 85, D-22607 Hamburg (Germany); Jones, Roger M. [School of Physics and Astronomy, The University of Manchester, Oxford Road, Manchester M13 9PL (United Kingdom); Cockcroft Institute of Science and Technology, Daresbury WA4 4AD (United Kingdom)
2014-01-11
Beam-excited higher order modes (HOMs) can be used to provide beam diagnostics. Here we focus on 3.9 GHz superconducting accelerating cavities. In particular we study dipole mode excitation and its application to beam position determination. In order to extract beam position information, linear regression can be used. Due to the large number of sampling points in the waveforms, statistical methods such as singular value decomposition (SVD) and k-means clustering are used to reduce the dimension of the system effectively. These are compared with direct linear regression (DLR) on the entire waveforms. A cross-validation technique is used to study the sample-independent precision of the position predictions given by these three methods. An RMS prediction error in the beam position of approximately 50 μm can be achieved by DLR and SVD, while k-means clustering suggests 70 μm.
Statistics that learn: can logistic discriminant analysis improve diagnosis in brain SPECT?
International Nuclear Information System (INIS)
Behin-Ain, S.; Barnden, L.; Kwiatek, R.; Del Fante, P.; Casse, R.; Burnet, R.; Chew, G.; Kitchener, M.; Boundy, K.; Unger, S.
2002-01-01
Full text: Logistic discriminant analysis (LDA) is a statistical technique capable of discriminating individuals within a diseased group against normals. It also enables classification of various diseases within a group of patients. This technique provides a quantitative, automated and non-subjective clinical diagnostic tool. Based on a population known to have the disease and a normal control group, an algorithm was developed and trained to identify regions in the human brain responsible for the disease in question. The algorithm outputs a statistical map representing diseased or normal probability on a voxel or cluster basis, from which an index is generated for each subject. The algorithm also generates a set of coefficients, which are used to compute an index for classifying new subjects. The results are comparable to and complement those of Statistical Parametric Mapping (SPM), which employs a more common linear discriminant technique. Results are presented for brain SPECT studies of two diseases: chronic fatigue syndrome (CFS) and fibromyalgia (FM). A specificity of 100% and a sensitivity of 94% are achieved for the CFS study (similar to the SPM results); for the FM study, 82% specificity and 94% sensitivity are achieved, with corresponding SPM results showing 90% specificity and 82% sensitivity. These results encourage the application of LDA to the discrimination of new single subjects as well as of diseased and normal groups. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc
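The discrimination step itself is standard logistic regression; a self-contained sketch on invented "regional uptake" features (not the actual SPECT data, and with arbitrary group means, learning rate, and seed) shows how per-subject indices, sensitivity, and specificity come out:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-subject regional features for patients and controls.
n, d = 60, 5
patients = rng.standard_normal((n, d)) + np.array([1.0, 0.8, 0.0, 0.0, 0.0])
controls = rng.standard_normal((n, d))
X = np.vstack([patients, controls])
y = np.r_[np.ones(n), np.zeros(n)]

# Plain gradient-descent fit of (unregularized) logistic regression.
Xb = np.c_[X, np.ones(len(X))]
w = np.zeros(d + 1)
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (prob - y) / len(y)

# The fitted probability is the per-subject "index"; threshold at 0.5.
index = 1.0 / (1.0 + np.exp(-Xb @ w))
pred = index >= 0.5
sensitivity = pred[y == 1].mean()
specificity = 1.0 - pred[y == 0].mean()
print(sensitivity, specificity)
```

In practice the abstract's figures would come from cross-validation rather than training-set reuse, but the sensitivity/specificity bookkeeping is the same.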
Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi
2012-01-01
The objective of the present study was to assess the comparative applicability of orthogonal projections to latent structures (OPLS) statistical models vs traditional linear regression in investigating the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation in the first week of admission and again six months later. All data were primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single-vessel involvement, as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.
Forward and adjoint sensitivity computation of chaotic dynamical systems
Energy Technology Data Exchange (ETDEWEB)
Wang, Qiqi, E-mail: qiqi@mit.edu [Department of Aeronautics and Astronautics, MIT, 77 Mass Ave., Cambridge, MA 02139 (United States)
2013-02-15
This paper describes a forward algorithm and an adjoint algorithm for computing sensitivity derivatives in chaotic dynamical systems, such as the Lorenz attractor. The algorithms compute the derivative of long time averaged “statistical” quantities to infinitesimal perturbations of the system parameters. The algorithms are demonstrated on the Lorenz attractor. We show that sensitivity derivatives of statistical quantities can be accurately estimated using a single, short trajectory (over a time interval of 20) on the Lorenz attractor.
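The quantity being differentiated, the derivative of a long-time-averaged statistic with respect to a system parameter, can be sketched with a naive finite-difference baseline (step size, integration time, and initial condition here are arbitrary illustrative choices; for chaotic systems this naive estimate is noisy, which is precisely the motivation for the paper's forward and adjoint algorithms):

```python
import numpy as np

def lorenz_mean_z(rho, sigma=10.0, beta=8.0 / 3.0, dt=0.005, t_total=50.0):
    """Time average of z along one Lorenz trajectory (RK4 integration),
    discarding an initial transient."""
    def f(u):
        x, y, z = u
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    u = np.array([1.0, 1.0, 1.0])
    n = int(t_total / dt)
    zs = np.empty(n)
    for i in range(n):
        k1 = f(u); k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2); k4 = f(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        zs[i] = u[2]
    return zs[n // 5:].mean()   # drop the first 20% as transient

# Naive central finite difference of the statistic <z> with respect to rho.
dzdrho = (lorenz_mean_z(28.5) - lorenz_mean_z(27.5)) / 1.0
print(f"d<z>/drho ~ {dzdrho:.2f}")
```

Because two chaotic trajectories decorrelate, the finite-difference estimate fluctuates strongly unless the averaging time is very long; the paper's algorithms obtain the same derivative from a single short trajectory.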
Sensitivity Analysis of features in tolerancing based on constraint function level sets
International Nuclear Information System (INIS)
Ziegler, Philipp; Wartzack, Sandro
2015-01-01
Usually, the geometry of the manufactured product inherently varies from the nominal geometry. This may negatively affect product functions and properties (such as quality and reliability), as well as the assemblability of the single components. In order to avoid this, the geometric variation of these component surfaces and associated geometry elements (like hole axes) is restricted by tolerances. Since tighter tolerances lead to significantly higher manufacturing costs, tolerances should be specified carefully. Therefore, the impact of deviating component surfaces on the functions, properties and assemblability of the product has to be analyzed. As physical experiments are expensive, statistical tolerance analysis tools are widely used in engineering design. Current tolerance simulation tools lack an appropriate indicator for the impact of deviating component surfaces. In the adoption of Sensitivity Analysis methods, there are several challenges, which arise from the specific framework of tolerancing. This paper presents an approach to adapt Sensitivity Analysis methods to current tolerance simulations with an interface module, which is based on level sets of constraint functions for the parameters of the simulation model. The paper is an extension and generalization of Ziegler and Wartzack [1]. Mathematical properties of the constraint functions (convexity, homogeneity), which are important for the computational costs of the Sensitivity Analysis, are shown. The practical use of the method is illustrated in a case study of a plain bearing. - Highlights: • Alternative definition of Deviation Domains. • Proof of mathematical properties of the Deviation Domains. • Definition of the interface between Deviation Domains and Sensitivity Analysis. • Sensitivity analysis of a gearbox to show the method's practical use
Experimental Design for Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2001-01-01
This introductory tutorial gives a survey of the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as…
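The design-plus-regression recipe can be sketched in a few lines (the "simulation model" here is an invented toy function; real applications replace it with an expensive simulation run). A two-level full factorial design is run through the model and a first-order regression metamodel is fitted; with this orthogonal design, the fitted coefficients are exactly the main-effect sensitivities:

```python
import numpy as np
from itertools import product

# A toy "simulation model" with three inputs; x2 matters most, x3 barely.
def simulate(x1, x2, x3):
    return 2.0 * x1 + 5.0 * x2 + 0.1 * x3 + 0.5 * x1 * x2

# 2^3 full factorial design in coded units (-1, +1).
design = np.array(list(product([-1.0, 1.0], repeat=3)))
y = np.array([simulate(*row) for row in design])

# First-order regression metamodel y ~ b0 + b1*x1 + b2*x2 + b3*x3;
# the fitted coefficients estimate the main-effect sensitivities.
X = np.c_[np.ones(len(design)), design]
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print("main effects:", np.round(b[1:], 3))
```

Because the factorial design's columns are mutually orthogonal (and orthogonal to the x1*x2 interaction), the least-squares estimates recover the linear coefficients without bias from the interaction term.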
Rorabaugh, P. A.; Salisbury, F. B.
1989-01-01
Although the Cholodny-Went model of auxin redistribution has been used to explain the transduction phase of gravitropism for over 60 years, problems are apparent, especially with dicot stems. An alternative to an auxin gradient is a physiological gradient in which lower tissues of a horizontal stem become more sensitive than upper tissues to auxin already present. Changes in tissue sensitivity to auxin were tested by immersing marked Glycine max Merrill (soybean) hypocotyl sections in buffered auxin solutions (0, 10⁻⁸ to 10⁻² molar indoleacetic acid) and observing bending and growth of upper and lower surfaces. The two surfaces of horizontal hypocotyl sections responded differently to the same applied auxin stimulus; hypocotyls bent up (lower half grew more) in buffer alone or in low auxin levels, but bent down (upper half grew more) in high auxin. Dose-response curves were evaluated with Michaelis-Menten kinetics, with auxin-receptor binding analogous to enzyme-substrate binding. Vmax for the lower half was usually greater than that for the upper half, which could indicate more binding sites in the lower half. Km of the upper half was always greater than that of the lower half (unmeasurably low), which could indicate that upper-half binding sites had a much lower affinity for auxin than lower-half sites. Dose-response curves were also obtained for sections 'scrubbed' (cuticle abraded) on top or bottom before immersion in auxin, and the 'gravitropic memory' experiments of L. Brauner and A. Hagar (1958 Planta 51: 115-147) were duplicated. [1-¹⁴C]Indoleacetic acid penetration was equal into the two halves, and endogenous plus exogenously supplied (not radiolabeled) free auxin in the two halves (by gas chromatography-selected ion monitoring-mass spectrometry) was also equal. Thus, differential growth occurred without free auxin redistribution, contrary to Cholodny-Went but in agreement with a sensitivity model.
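The Michaelis-Menten evaluation of such dose-response curves can be sketched as follows; the concentrations and the "true" Vmax and Km below are invented for illustration, not the soybean values. The Hanes-Woolf linearization S/v = S/Vmax + Km/Vmax turns the hyperbolic curve into a straight line, so an ordinary linear fit recovers both parameters:

```python
import numpy as np

# Hypothetical dose-response data: auxin concentration S (molar) and
# growth response v, generated from v = Vmax*S/(Km + S), Vmax=12, Km=1e-6.
S = np.array([1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3])
v = 12.0 * S / (1e-6 + S)

# Hanes-Woolf linearization: S/v = S/Vmax + Km/Vmax, so a straight-line
# fit of S/v against S yields Vmax = 1/slope and Km = intercept*Vmax.
slope, intercept = np.polyfit(S, S / v, 1)
Vmax = 1.0 / slope
Km = intercept * Vmax
print(f"Vmax={Vmax:.2f}, Km={Km:.2e}")
```

Comparing fitted Vmax (number of binding sites) and Km (binding affinity) between the upper and lower halves is exactly the kind of contrast the abstract draws.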
Disentangling Greenhouse Warming and Aerosol Cooling to Reveal Earth's Transient Climate Sensitivity
Storelvmo, T.
2015-12-01
Earth's climate sensitivity has been the subject of heated debate for decades, and recently spurred renewed interest after the latest IPCC assessment report suggested a downward adjustment of the most likely range of climate sensitivities. Here, we present an observation-based study based on the time period 1964 to 2010, which is unique in that it does not rely on global climate models (GCMs) in any way. The study uses surface observations of temperature and incoming solar radiation from approximately 1300 surface sites, along with observations of the equivalent CO2 concentration (CO2,eq) in the atmosphere, to produce a new best estimate for the transient climate sensitivity of 1.9K (95% confidence interval 1.2K - 2.7K). This is higher than other recent observation-based estimates, and is better aligned with the estimate of 1.8K and range (1.1K - 2.5K) derived from the latest generation of GCMs. The new estimate is produced by incorporating the observations in an energy balance framework, and by applying statistical methods that are standard in the field of Econometrics, but less common in climate studies. The study further suggests that about a third of the continental warming due to increasing CO2,eq was masked by aerosol cooling during the time period studied.
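In spirit, the energy-balance estimate reduces to regressing temperature on radiative forcing and rescaling the slope by the forcing of a CO2 doubling. A toy sketch with synthetic data (the CO2 trajectory, noise level, and "true" sensitivity are invented; the forcing formula 5.35 ln(C/C0) W/m² is a standard textbook approximation, not the study's method in detail):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1964, 2011)
co2eq = 320.0 * 1.005 ** (years - 1964)        # hypothetical CO2,eq (ppm)
forcing = 5.35 * np.log(co2eq / 280.0)         # approximate CO2 forcing (W/m^2)

TCS = 1.9                                      # "true" sensitivity per doubling (K)
F2x = 5.35 * np.log(2.0)                       # forcing of one CO2 doubling
T = TCS * forcing / F2x + 0.1 * rng.standard_normal(len(years))

# Regress temperature on forcing; slope times F2x gives the transient
# climate sensitivity in kelvin per CO2 doubling.
slope = np.polyfit(forcing, T, 1)[0]
estimate = slope * F2x
print(f"estimated transient sensitivity: {estimate:.2f} K")
```

The real analysis must additionally disentangle the aerosol-induced dimming signal from the greenhouse forcing, which is where the surface radiation observations and econometric methods enter.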
International Nuclear Information System (INIS)
Robinson, M.T.
1993-01-01
The MARLOWE program was used to study the statistics of sputtering, using the example of 1- to 100-keV Au atoms normally incident on static (001) and (111) Au crystals. The yield of sputtered atoms was examined as a function of the impact point of the incident particles ('ions') on the target surfaces. There were variations on two scales. The effects of the axial and planar channeling of the ions could be traced, the details depending on the orientation of the target and the energies of the ions. Locally, the sputtering yield was very sensitive to the impact point, small changes in position often producing large changes in yield. The results indicate strongly that the sputtering yield is a random ('chaotic') function of the impact point
Boyapati, Ramanarayana; Chinthalapani, Srikanth; Ramisetti, Arpita; Salavadhi, Shyam Sunder; Ramachandran, Radhika
2018-01-01
Inflammation is a common feature of both peripheral artery disease (PAD) and periodontal disease. The aim of this study is to evaluate the relationship between PAD and periodontal disease by examining the serum levels of the inflammatory markers pentraxin-3 (PTX-3) and high-sensitivity C-reactive protein (hs-CRP). A total of 50 patients were included in this cross-sectional study. Patients were divided into two groups based on ankle-brachial index values: those with PAD (test group) and those without PAD (control group). Periodontal examinations and biochemical analyses for PTX-3 and hs-CRP were performed to compare the two groups. All the obtained data were analyzed statistically using SPSS version 18. Among the clinical parameters, there is a statistically significant difference between the groups in plaque index, clinical attachment loss, and periodontal inflammatory surface area, with higher mean values in patients with PAD having periodontitis. There is a statistically significant difference … in hs-CRP and PTX-3. PTX-3 and an acute-phase cytokine such as hs-CRP can be regarded as among the best indicators of the association between PAD and periodontitis, followed by hs-CRP, TC, very low-density lipoprotein (VLDL), and LDL. However, high-density lipoprotein (HDL) is a poor indicator of an association with chronic periodontitis and PAD.
Statistics of ductile fracture surfaces: the effect of material parameters
DEFF Research Database (Denmark)
Ponson, Laurent; Cao, Yuanyuan; Bouchaud, Elisabeth
2013-01-01
The three-dimensional analysis permits modeling of a three-dimensional material microstructure and of the resulting three-dimensional stress and deformation states that develop in the fracture process region. Material parameters characterizing void nucleation are varied, and the statistics of the resulting fracture surfaces are investigated. All the fracture surfaces are found to be self-affine over a size range of about two orders of magnitude, with a very similar roughness exponent of 0.56 ± 0.03. In contrast, the full statistics of the fracture surfaces are found to be more sensitive to the material...
International Nuclear Information System (INIS)
Marrel, A.; Marie, N.; De Lozzo, M.
2015-01-01
Within the framework of Generation IV sodium fast reactors, safety in the case of severe accidents is being assessed. In this context, the CEA has developed a new physical tool to model the accident initiated by a total instantaneous blockage (TIB) of a sub-assembly. This TIB simulator depends on many uncertain input parameters. This paper proposes a global methodology combining several advanced statistical techniques to perform a global sensitivity analysis of the TIB simulator, with the objective of identifying the most influential uncertain inputs for the various TIB outputs involved in the safety analysis. The proposed methodology takes into account the constraints on the TIB simulator outputs (positivity constraints) and deals simultaneously with multiple outputs. To do this, a space-filling design is used and the corresponding TIB model simulations are performed. Based on this learning sample, an efficient constrained Gaussian process metamodel is fitted to each TIB model output. Then, using the metamodels, classical sensitivity analyses are carried out for each TIB output. Multivariate global sensitivity analyses based on aggregated indices are also performed, providing additional valuable information. Main conclusions on the influence of each uncertain input are derived. - Highlights: • Physical-statistical tool for sodium fast reactor TIB accidents. • 27 uncertain parameters (core state, lack of physical knowledge) are highlighted. • A constrained Gaussian process efficiently predicts TIB outputs (safety criteria). • Multivariate sensitivity analyses reveal that three inputs are mainly influential. • The type of corium propagation (thermal or hydrodynamic) is the most influential.
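The variance-based sensitivity analysis described in this abstract can be sketched with a pick-freeze Monte Carlo estimator of first-order Sobol' indices. The three-input toy function below stands in for the (unavailable) TIB simulator; the function, input count, and all values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):
    # Hypothetical stand-in for an expensive simulator: 3 uncertain
    # inputs in [0, 1], output dominated by input 0.
    return 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

n, d = 20000, 3
A = rng.random((n, d))
B = rng.random((n, d))
yA = simulator(A)
var = yA.var()

# Pick-freeze estimate of first-order Sobol' indices S_i:
# S_i = Cov(f(A), f(B with column i taken from A)) / Var(f(A)).
S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]          # freeze input i, resample the rest
    yABi = simulator(ABi)
    S.append(np.mean(yA * yABi) - yA.mean() * yABi.mean())
S = np.array(S) / var
print(S)  # input 0 dominates, input 2 is nearly inert
```

For this toy function the analytic indices are roughly S ≈ (0.94, 0.06, 0.0006), which the estimator recovers; in the paper's setting the metamodel would replace the direct simulator calls.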
Human sensitivity to vertical self-motion.
Nesti, Alessandro; Barnett-Cowan, Michael; Macneilage, Paul R; Bülthoff, Heinrich H
2014-01-01
Perceiving vertical self-motion is crucial for maintaining balance as well as for controlling an aircraft. Whereas heave absolute thresholds have been exhaustively studied, little work has been done in investigating how vertical sensitivity depends on motion intensity (i.e., differential thresholds). Here we measure human sensitivity for 1-Hz sinusoidal accelerations for 10 participants in darkness. Absolute and differential thresholds are measured for upward and downward translations independently at 5 different peak amplitudes ranging from 0 to 2 m/s². Overall vertical differential thresholds are higher than horizontal differential thresholds found in the literature. Psychometric functions are fit in linear and logarithmic space, with goodness of fit being similar in both cases. Differential thresholds are higher for upward as compared to downward motion and increase with stimulus intensity following a trend best described by two power laws. The power laws' exponents of 0.60 and 0.42 for upward and downward motion, respectively, deviate from Weber's Law in that thresholds increase less than expected at high stimulus intensity. We speculate that increased sensitivity at high accelerations and greater sensitivity to downward than upward self-motion may reflect adaptations to avoid falling.
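The power-law dependence of differential thresholds on pedestal intensity reported above (ΔI = k·I^b, with b ≈ 0.60 for upward motion) can be recovered with an ordinary log-log least-squares fit. The data points below are synthetic, generated from the reported exponent, not the study's measurements:

```python
import numpy as np

# Hypothetical differential thresholds (m/s^2) at 4 nonzero pedestal
# amplitudes, generated exactly from dI = k * I**b with k=0.12, b=0.60.
I = np.array([0.5, 1.0, 1.5, 2.0])   # pedestal acceleration (m/s^2)
dI = 0.12 * I ** 0.60                # synthetic "upward" thresholds

# Fit log(dI) = b*log(I) + log(k): a straight line in log-log space.
b, log_k = np.polyfit(np.log(I), np.log(dI), 1)
k = np.exp(log_k)
print(round(b, 2), round(k, 2))  # 0.6 0.12
```

An exponent b < 1 is the sub-Weber growth the abstract describes: thresholds rise with intensity, but more slowly than proportionally.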
Assessment of the beryllium lymphocyte proliferation test using statistical process control.
Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M
2006-10-01
Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that
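The statistical process control technique used in the study above can be illustrated with a minimal individuals chart: the center line is the mean, and the control limits sit three sigma away, with sigma estimated from the mean moving range (d2 = 1.128 for subgroups of size 2). The stimulation-index values below are synthetic, since the surveillance data are not public:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=1.0, size=50)  # synthetic assay runs
x[30] = 18.0                       # inject an out-of-control run

center = np.mean(x)
mr = np.abs(np.diff(x))            # moving ranges between consecutive runs
sigma = np.mean(mr) / 1.128        # d2 constant for subgroups of size 2
ucl, lcl = center + 3 * sigma, center - 3 * sigma

out = np.where((x > ucl) | (x < lcl))[0]
print(out)  # the injected run at index 30 is flagged
```

Temporal patterns of such out-of-limit points, rather than single flags, are what the study's SPC charts were used to examine.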
PINGU and the neutrino mass hierarchy: Statistical and systematical aspects
International Nuclear Information System (INIS)
Capozzi, F.; Marrone, A.; Lisi, E.
2016-01-01
The proposed PINGU project (Precision IceCube Next Generation Upgrade) is intended to determine the neutrino mass hierarchy through the matter effects experienced by atmospheric neutrinos crossing the Earth's core and mantle, which lead to variations in the event spectrum in energy and zenith angle. The presence of non-negligible (and partly unknown) systematics on the spectral shape can make the statistical analysis particularly challenging in the limit of high statistics. Assuming plausible spectral shape uncertainties at the percent level (due to effective volume, cross section, resolution functions, oscillation parameters, etc.), we obtain a significant reduction in the sensitivity to the hierarchy. These results show the importance of a dedicated research program aimed at better characterizing and reducing the uncertainties in future high-statistics experiments with atmospheric neutrinos.
Hybrid statistics-simulations based method for atom-counting from ADF STEM images
Energy Technology Data Exchange (ETDEWEB)
De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)
2017-06-15
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.
The Playground Game: Inquiry‐Based Learning About Research Methods and Statistics
Westera, Wim; Slootmaker, Aad; Kurvers, Hub
2014-01-01
The Playground Game is a web-based game that was developed for teaching research methods and statistics to nursing and social sciences students in higher education and vocational training. The complexity and abstract nature of research methods and statistics poses many challenges for students. The
Implementation of an adaptive training and tracking game in statistics teaching
Groeneveld, C.M.; Kalz, M.; Ras, E.
2014-01-01
Statistics teaching in higher education has a number of challenges. An adaptive training, tracking and teaching tool in a gaming environment aims to address problems inherent in statistics teaching. This paper discusses the implementation of this tool in a large first year university programme and
Statistical distribution of resonance parameters for inelastic scattering of fast neutrons
International Nuclear Information System (INIS)
Radunovic, J.
1973-01-01
This paper deals with the application of statistical methods to the analysis of nuclear reactions involving complex nuclei. It is shown that inelastic neutron scattering, which proceeds through the formation of a compound nucleus in the higher energy range, can be treated with a statistical approach.
Environmental context determines community sensitivity of freshwater zooplankton to a pesticide
International Nuclear Information System (INIS)
Stampfli, Nathalie C.; Knillmann, Saskia; Liess, Matthias; Beketov, Mikhail A.
2011-01-01
The environment is currently changing worldwide, and ecosystems are being exposed to multiple anthropogenic pressures. Understanding and consideration of such environmental conditions are required in the ecological risk assessment of toxicants, but remain fundamentally limited. In the present study, we aimed to determine how, and to what extent, alterations in abiotic and biotic environmental conditions can alter the sensitivity of a community to an insecticide, as well as its recovery after contamination. We conducted an outdoor microcosm experiment in which zooplankton communities were exposed to the insecticide esfenvalerate (0.03, 0.3, and 3 μg/L) under different regimes of solar radiation and community density, representing different levels of food availability and competition. We focused on the sensitivity of the entire community and analysed it using multivariate statistical methods such as principal response curves and redundancy analysis. The results showed that community sensitivity varied markedly between treatments. In the experimental series with the lowest food availability and strongest competition, significant effects of the insecticide were found at a concentration of 0.03 μg/L. In contrast, in the series with relatively higher food availability and weak competition, such effects were detected only at 3 μg/L. However, we did not find significant differences in community recovery rates between the experimental treatments. These findings indicate that environmental context is more important for ecotoxicological evaluation than previously assumed.
Yilmaz, Ferkan
2012-06-01
The exact analysis of the higher-order statistics of the channel capacity (i.e., higher-order ergodic capacity) often leads to complicated expressions involving advanced special functions. In this paper, we provide a generic framework for the computation of the higher-order statistics of the channel capacity over generalized fading channels. This novel framework yields simple, closed-form expressions which are shown to be asymptotically tight bounds in the high signal-to-noise ratio (SNR) regime for a variety of fading environments. In addition, it reveals the existence of differences (i.e., constant capacity gaps in the log domain) among different fading environments. By an asymptotically tight bound we mean that, in the high-SNR limit, the difference between the actual higher-order statistics of the channel capacity and its asymptotic (lower) bound tends to zero. The mathematical formalism is illustrated with selected numerical examples that validate the correctness of the newly derived results. © 2012 IEEE.
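The higher-order capacity statistics E[Cⁿ], with C = log₂(1 + γ), can be checked numerically for one simple case: Rayleigh fading, where the instantaneous SNR γ is exponentially distributed. This Monte Carlo sketch is illustrative only; the paper's contribution is the closed-form expressions such simulations would be validated against:

```python
import numpy as np

rng = np.random.default_rng(2)

# Rayleigh fading: instantaneous SNR gamma ~ Exponential(mean snr_bar).
snr_bar = 10.0                         # average SNR, linear scale (10 dB)
gamma = rng.exponential(snr_bar, size=200_000)
C = np.log2(1.0 + gamma)               # instantaneous capacity (bit/s/Hz)

# First three moments of the capacity: n = 1 is the ergodic capacity.
moments = [np.mean(C ** n) for n in (1, 2, 3)]
print([round(m, 2) for m in moments])
```

At 10 dB average SNR the first moment should land near the known Rayleigh ergodic capacity of about 2.9 bit/s/Hz, and Jensen's inequality guarantees E[C²] ≥ (E[C])².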
Directory of Open Access Journals (Sweden)
Esmeil Soleymani
2016-11-01
Abstract Background: The purpose of this study was to compare cognitive emotion regulation strategies, sensory processing sensitivity, and anxiety sensitivity in patients with multiple sclerosis and normal individuals. Materials and Methods: The statistical population of this study comprised all patients with multiple sclerosis referred to the MS Association of Iran in Tehran. The sample consisted of 30 patients with multiple sclerosis selected by convenience sampling and matched with 30 normal individuals. Both groups completed the cognitive emotion regulation, sensory processing sensitivity, and anxiety sensitivity questionnaires. Data were analyzed by one-way analysis of variance and multivariate analysis of variance. Results: The results indicated a significant difference between the two groups in cognitive emotion regulation strategies: the mean scores of patients with multiple sclerosis on the maladaptive strategies of self-blame, catastrophizing, and other-blame were higher than those of normal individuals, while their mean scores on the adaptive strategies of positive refocusing, positive reappraisal, and putting into perspective were lower. The results also indicated a significant difference between the two groups in anxiety sensitivity and sensory processing sensitivity. Conclusion: Many of the emotional problems of patients with multiple sclerosis may result from greater use of maladaptive cognitive emotion regulation strategies, high sensory processing sensitivity, and high anxiety sensitivity.
Scheeringa, Michael S.; Weems, Carl F.
2017-01-01
Abstract Objectives: Few studies have assessed how the diagnostic criteria for posttraumatic stress disorder (PTSD) apply to older children and adolescents. The introduction of a new, developmentally sensitive set of criteria for very young children (age 6 years and younger) in the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5) raises new questions about the validity of the criteria for older children and adolescents. The current study investigated how the diagnostic changes in DSM-5 affect diagnosis rates in 7–18-year-olds. Methods: PTSD, impairment, and comorbid psychopathology were assessed in 135 trauma-exposed, treatment-seeking participants. Children (ages 7–12) were examined separately from adolescents (ages 13–18) to assess potential developmental differences. Results: A significantly higher proportion of 7–12-year-old children met criteria for a DSM-5 diagnosis (53%) compared to the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) (37%). However, among 13–18-year-old adolescents, the proportions diagnosed with DSM-5 (73%) and DSM-IV (74%) did not differ. Participants who met criteria for DSM-5 only (17%) did not differ from those diagnosed with DSM-IV in terms of impairment or comorbidity. Using the newly accepted age-6-and-younger criteria resulted in a significantly higher proportion of 7–12-year-old children (but not 13–18-year-olds) meeting criteria compared to DSM-IV or DSM-5. However, these children showed less impairment and comorbidity than those diagnosed with DSM-IV. Conclusion: These findings suggest that the DSM-5 criteria may be more developmentally sensitive than the DSM-IV criteria, and may lead to higher prevalence rates of PTSD for 7–12-year-old children, but not for adolescents. Using the very-young-children criteria for 7–12-year-old children may further increase prevalence, but capture children with less severe psychopathology. PMID:28170306
Statistics of Microstructure, Peak Stress and Interface Damage in Fiber Reinforced Composites
DEFF Research Database (Denmark)
Kushch, Volodymyr I.; Shmegera, Sergii V.; Mishnaevsky, Leon
2009-01-01
This paper addresses the effect of fiber arrangement and interactions on the peak interface stress statistics in a fiber reinforced composite material (FRC). The method applied combines the multipole expansion technique with a representative unit cell model of the composite bulk, which is able to simulate both uniform and clustered random fiber arrangements. By averaging over a number of numerical tests, empirical probability functions have been obtained for the nearest-neighbor distance and the peak interface stress. It is shown that the considered statistical parameters are rather sensitive to the fiber arrangement, particularly cluster formation. An explicit correspondence between them has been established, and an analytical formula linking the microstructure and peak stress statistics in FRCs has been suggested. Application of the statistical theory of extreme values to the local...
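The nearest-neighbor distance statistics used above as a microstructure descriptor can be estimated empirically for a random fiber arrangement. The sketch below treats fiber centers as uniform random points in a unit cell, a deliberate simplification: the paper's representative unit cell model also handles clustered arrangements and periodic boundaries:

```python
import numpy as np

rng = np.random.default_rng(4)

# 200 "fiber centers" placed uniformly at random in a unit square.
pts = rng.random((200, 2))

# All pairwise distances; the diagonal (self-distance) is masked out.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn = d.min(axis=1)                 # nearest-neighbor distance per fiber

print(round(nn.mean(), 3))  # near 0.5/sqrt(200) ~ 0.035 for a Poisson pattern
```

Clustered arrangements would shift this empirical distribution toward smaller distances, which is exactly the microstructural change the paper links to higher peak interface stresses.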
International Nuclear Information System (INIS)
Huang, Jian-Feng; Liu, Jun-Min; Su, Pei-Yang; Chen, Yi-Fan; Shen, Yong; Xiao, Li-Min; Kuang, Dai-Bin; Su, Cheng-Yong
2015-01-01
Highlights: • Four novel thiocyanate-free cyclometalated ruthenium sensitizers were conveniently synthesized. • The D-CF₃-sensitized DSSCs show higher efficiency than N719-based cells. • The DSSCs based on the D-CF₃ and D-bisCF₃ sensitizers exhibit excellent long-term stability. • Diverse cyclometalated Ru complexes can be developed as high-performance sensitizers for use in DSSCs. - Abstract: Four novel thiocyanate-free cyclometalated Ru(II) complexes, D-bisCF₃, D-CF₃, D-OMe, and D-DPA, bearing two 4,4′-dicarboxylic acid-2,2′-bipyridine ligands together with a functionalized phenylpyridine ancillary ligand, have been designed and synthesized. The effect of the different substituents (R = bisCF₃, CF₃, OMe, and DPA) on the ancillary C^N ligand on the photophysical properties and photovoltaic performance is investigated. Under standard global AM 1.5 solar conditions, the device based on the D-CF₃ sensitizer gives a higher conversion efficiency, 8.74%, than those based on D-bisCF₃, D-OMe, and D-DPA, which can be ascribed to its broad range of visible light absorption, appropriate localization of the frontier orbitals, weak hydrogen bonds between the -CF₃ and -OH groups at the TiO₂ surface, moderate dye loading on TiO₂, and high charge collection efficiency. Moreover, the D-bisCF₃- and D-CF₃-based DSSCs exhibit good stability under 100 mW cm⁻² light soaking at 60 °C for 400 h.
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
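A minimal sketch of the Poisson process with dead-time (PPD) named above: each inter-spike interval is a fixed refractory period plus an exponential waiting time. The rates and dead-time below are arbitrary, not fitted to the in vivo recordings:

```python
import numpy as np

rng = np.random.default_rng(3)

def ppd_train(rate_free, dead, n):
    """Spike times of a Poisson process with dead-time: each ISI is
    a fixed dead-time plus an exponential waiting time."""
    isi = dead + rng.exponential(1.0 / rate_free, size=n)
    return np.cumsum(isi)

# Single PPD train: refractoriness makes firing more regular, so the
# ISI coefficient of variation falls below 1 (a Poisson process gives 1).
train = ppd_train(rate_free=20.0, dead=0.02, n=5000)
isi = np.diff(train)
cv = isi.std() / isi.mean()

# Superposing 10 such trains drifts the pooled statistics back toward
# Poisson-like irregularity, but deviations remain (the paper's point).
merged = np.sort(np.concatenate(
    [ppd_train(20.0, 0.02, 2000) for _ in range(10)]))
cv_merged = np.diff(merged).std() / np.diff(merged).mean()
print(round(cv, 2), round(cv_merged, 2))
```

For these parameters the single-train CV is (0.02 + 0.05)⁻¹ · 0.05 ≈ 0.71 analytically, and the merged train's CV sits between that and the Poisson value of 1.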
International Nuclear Information System (INIS)
Lim, Gyeong Hui
2008-03-01
This book consists of 15 chapters, covering: the basic concepts and meaning of statistical thermodynamics; Maxwell-Boltzmann statistics; ensembles; thermodynamic functions and fluctuations; statistical thermodynamics of independent-particle systems; ideal molecular systems; chemical equilibrium and chemical reaction rates in ideal gas mixtures; classical statistical thermodynamics; the ideal lattice model; lattice statistics and nonideal lattice models; imperfect gas theory of liquids; the theory of solutions; statistical thermodynamics of interfaces; statistical thermodynamics of macromolecular systems; and quantum statistics.
Statistical learning in a natural language by 8-month-old infants.
Pelucchi, Bruna; Hay, Jessica F; Saffran, Jenny R
2009-01-01
Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants' ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition.
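The transitional probability cue underlying this line of work is simply TP(x→y) = frequency(xy) / frequency(x), computed over adjacent syllables. The toy "words" below are invented for illustration and are not the Italian stimuli used in the study:

```python
from collections import Counter

# A stream built from three made-up two-syllable-per-unit "words";
# within-word syllable pairs recur reliably, word boundaries do not.
words = ["bidaku", "padoti", "golabu", "bidaku", "golabu", "padoti"]
syllables = [w[i:i + 2] for w in words for i in range(0, len(w), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(s for s, _ in zip(syllables, syllables[1:]))

def tp(x, y):
    """Transitional probability TP(x -> y) = freq(xy) / freq(x)."""
    return pair_counts[(x, y)] / first_counts[x]

print(tp("bi", "da"), tp("ku", "pa"))  # within-word 1.0 vs boundary 0.5
```

The contrast between high within-word and low cross-boundary transitional probabilities is the statistical signal the 8-month-olds are hypothesized to exploit in fluent speech.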
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), quantitative reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs, and the associated diagnostic sensitivity. The method is illustrated with the quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques are compared: a traditional method and a mixed-model Bayesian approach. The latter accounts for the statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed-model approach assimilates information from replicate QMM assays, improving reliability and inter-assay homogeneity and providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed-model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to substantially improve the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
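The traditional calibration baseline that the paper improves on can be sketched as fitting a standard curve of assay signal against log₁₀ pathogen density and inverting it for an unknown sample; the Bayesian mixed-model extension is beyond a few lines. All numbers below are synthetic, not QT-NASBA data:

```python
import numpy as np

# Standards: known log10 densities and their (noiseless, toy) signals.
log_density = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
signal = 2.0 + 0.5 * log_density     # hypothetical linear standard curve

# Fit the standard curve, then invert it for an unknown sample.
slope, intercept = np.polyfit(log_density, signal, 1)

def estimate_density(obs_signal):
    """Back-calculate pathogen density from an observed assay signal."""
    return 10 ** ((obs_signal - intercept) / slope)

print(round(estimate_density(3.5), 1))  # -> 1000.0
```

The paper's criticism is precisely that this single-curve inversion ignores inter-assay variability and density-dependent precision, which the Bayesian mixed model captures.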
Federal Policies and Higher Education in the United States.
Prisco, Anne; Hurley, Alicia D.; Carton, Thomas C; Richardson, Richard C., Jr.
The purpose of this report is to describe U.S. federal policies that have helped to shape the context within which state systems of higher education operated during the past decade. It also presents descriptive statistics about the higher education enterprise in the United States, including available performance data. The report is based on the…
Irrigated Area Maps and Statistics of India Using Remote Sensing and National Statistics
Directory of Open Access Journals (Sweden)
Prasad S. Thenkabail
2009-04-01
The goal of this research was to compare remote-sensing-derived irrigated areas with census-derived statistics reported in the national system. India, which has nearly 30% of global annualized irrigated areas (AIAs) and is the leading irrigated-area country in the world along with China, was chosen for the study. Irrigated areas were derived for the nominal year 2000 using time-series remote sensing at two spatial resolutions: (a) 10-km Advanced Very High Resolution Radiometer (AVHRR) and (b) 500-m Moderate Resolution Imaging Spectroradiometer (MODIS). These areas were compared with the Indian national statistical data on irrigated areas reported by: (a) the Directorate of Economics and Statistics (DES) of the Ministry of Agriculture (MOA), and (b) the Ministry of Water Resources (MoWR). A state-by-state comparison of remote-sensing-derived irrigated areas with the MoWR-derived irrigation potential utilized (IPU), an equivalent of AIA, showed a high degree of correlation, with R² values of (a) 0.79 with 10-km and (b) 0.85 with MODIS 500-m data. However, the remote-sensing-derived irrigated-area estimates for India were consistently higher than the irrigated areas reported by the national statistics. The remote-sensing-derived total area available for irrigation (TAAI), which does not consider intensity of irrigation, was 101 million hectares (Mha) using 10-km and 113 Mha using 500-m data. The AIAs, which consider intensity of irrigation, were 132 Mha using 10-km and 146 Mha using 500-m data. In contrast, the IPU, an equivalent of the AIAs, as reported by the MoWR, was 83 Mha. There are “large variations” in the irrigated-area statistics reported, even between two ministries (e.g., the Directorate of Statistics of the Ministry of Agriculture and the Ministry of Water Resources) of the same national system. The causes include: (a) reluctance on the part of the states to furnish irrigated-area data in view of their vested interests in the sharing of water, and (b) reporting of large volumes of data
[Statistics for statistics?--Thoughts about psychological tools].
Berger, Uwe; Stöbel-Richter, Yve
2007-12-01
Statistical methods occupy a prominent place in psychologists' education. Known as difficult to understand and hard to learn, they are feared by students, and those who do not aspire to a research career at a university quickly forget the drilled content. Furthermore, because at first glance it seems inapplicable to work with patients and other target groups, methodological education as a whole has often been questioned. For many practising psychologists, statistical education seems to make sense only as a way of commanding respect from other professions, notably physicians. In their own work, statistics is rarely taken seriously as a professional tool. The reason seems clear: statistics deals with numbers, while psychotherapy deals with subjects. So is statistics an end in itself? With this article, we try to answer the question of whether and how statistical methods are represented in psychotherapeutic and psychological research. To this end, we analyzed 46 original articles from a complete volume of the journal Psychotherapy, Psychosomatics, Psychological Medicine (PPmP). Within the volume, 28 different analysis methods were applied, of which 89 per cent were directly based on statistics. Being able to write and critically read original articles, the backbone of research, presupposes a high degree of statistical education. To ignore statistics is to ignore research, and ultimately to leave one's own professional work open to arbitrariness.
Does airborne nickel exposure induce nickel sensitization?
Mann, Eugen; Ranft, Ulrich; Eberwein, Georg; Gladtke, Dieter; Sugiri, Dorothee; Behrendt, Heidrun; Ring, Johannes; Schäfer, Torsten; Begerow, Jutta; Wittsiepe, Jürgen; Krämer, Ursula; Wilhelm, Michael
2010-06-01
Nickel is one of the most prevalent causes of contact allergy in the general population. This study focuses on human exposure to airborne nickel and its potential to induce allergic sensitization. The study group consisted of 309 children of school-starting age living in the west of Germany, in the vicinity of two industrial sources and in a rural town without nearby point sources of nickel. An exposure assessment of nickel in ambient air was available for the children in the Ruhr district, using routinely monitored ambient air quality data and dispersion modelling. Internal nickel exposure was assessed from nickel concentrations in the children's morning urine samples. The observed nickel sensitization prevalence rates varied between 12.6% and 30.7%. Statistically significant associations were shown between exposure to nickel in ambient air and urinary nickel concentration, as well as between urinary nickel concentration and nickel sensitization. Furthermore, an elevated prevalence of nickel sensitization was associated with exposure to increased nickel concentrations in ambient air. The observed associations support the assumption that inhaled nickel in ambient air might be a risk factor for nickel sensitization; further studies in larger collectives are necessary.
Applied statistics for civil and environmental engineers
Kottegoda, N T
2009-01-01
Civil and environmental engineers need an understanding of mathematical statistics and probability theory to deal with the variability that affects engineers' structures, soil pressures, river flows and the like. Students, too, need to get to grips with these rather difficult concepts. This book, written by engineers for engineers, tackles the subject in a clear, up-to-date manner using a process-orientated approach. It introduces the subjects of mathematical statistics and probability theory, and then addresses model estimation and testing, regression and multivariate methods, analysis of extreme events, simulation techniques, risk and reliability, and economic decision making. 325 examples and case studies from European and American practice are included, and each chapter features realistic problems to be solved. For the second edition, new sections have been added on Monte Carlo Markov chain modeling with details of practical Gibbs sampling, sensitivity analysis and aleatory and epistemic uncertainties, and co...
Directory of Open Access Journals (Sweden)
Saeed Ahmad
2016-11-01
Full Text Available This study explored the intercultural sensitivity of 103 faculty members of the English Language Centre (ELC) of Jazan University, Saudi Arabia. A quantitative, non-experimental design was adopted, in which the intercultural sensitivity of the English language teachers was evaluated on five demographic variables (gender, education, religion, total teaching experience, and experience of teaching in an intercultural context). The results revealed that the international faculty of the ELC are abreast of the basic canons of intercultural adjustment. This suggests that the teachers are not only familiar with different cultural patterns (such as beliefs, values and communication styles) but are also willing to minimize these differences and adopt a universal set of values for effective educational practices. The results indicate the participants' high level of empathy, respect for others' cultures, tolerance of differences and high willingness to integrate with other cultures. The data reveal no statistically significant difference between the two groups in three variables, i.e. gender (male and female), qualification (Master's and Ph.D.) and religion (Muslims and non-Muslims). However, a statistically significant difference was found between the two groups (less than ten years and more than ten years) in two variables, i.e. total teaching experience and teaching experience in an intercultural context.
Monroy, Claire D; Gerson, Sarah A; Hunnius, Sabine
2018-05-01
Humans are sensitive to the statistical regularities in action sequences carried out by others. In the present eyetracking study, we investigated whether this sensitivity can support the prediction of upcoming actions when observing unfamiliar action sequences. In two between-subjects conditions, we examined whether observers would be more sensitive to statistical regularities in sequences performed by a human agent versus self-propelled 'ghost' events. Secondly, we investigated whether regularities are learned better when they are associated with contingent effects. Both implicit and explicit measures of learning were compared between agent and ghost conditions. Implicit learning was measured via predictive eye movements to upcoming actions or events, and explicit learning was measured via both uninstructed reproduction of the action sequences and verbal reports of the regularities. The findings revealed that participants, regardless of condition, readily learned the regularities and made correct predictive eye movements to upcoming events during online observation. However, different patterns of explicit-learning outcomes emerged following observation: Participants were most likely to re-create the sequence regularities and to verbally report them when they had observed an actor create a contingent effect. These results suggest that the shift from implicit predictions to explicit knowledge of what has been learned is facilitated when observers perceive another agent's actions and when these actions cause effects. These findings are discussed with respect to the potential role of the motor system in modulating how statistical regularities are learned and used to modify behavior.
Statistical Validation of Engineering and Scientific Models: Background
International Nuclear Information System (INIS)
Hills, Richard G.; Trucano, Timothy G.
1999-01-01
A tutorial is presented discussing the basic issues associated with propagation of uncertainty analysis and statistical validation of engineering and scientific models. The propagation of uncertainty tutorial illustrates the use of the sensitivity method and the Monte Carlo method to evaluate the uncertainty in predictions for linear and nonlinear models. Four example applications are presented: a linear model, a model for the behavior of a damped spring-mass system, a transient thermal conduction model, and a nonlinear transient convective-diffusive model based on Burgers' equation. Correlated and uncorrelated model input parameters are considered. The model validation tutorial builds on the material presented in the propagation of uncertainty tutorial and uses the damped spring-mass system as the example application. The validation tutorial illustrates several concepts associated with the application of statistical inference to test model predictions against experimental observations. Several validation methods are presented, including error-band-based, multivariate, sum-of-squares-of-residuals, and optimization methods. After completion of the tutorial, a survey of the statistical model validation literature is presented and recommendations for future work are made
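For the linear-model case described in this tutorial abstract, the two propagation methods can be contrasted in a few lines. The coefficients and input standard deviations below are illustrative assumptions, not values taken from the tutorial; for a linear model with uncorrelated inputs the first-order sensitivity result is exact, so the Monte Carlo estimate should agree with it.

```python
import random

def analytic_variance(coeffs, sigmas):
    # Sensitivity method: for a linear model y = sum(a_i * x_i) with
    # uncorrelated inputs, Var(y) = sum(a_i^2 * sigma_i^2) exactly.
    return sum((a * s) ** 2 for a, s in zip(coeffs, sigmas))

def monte_carlo_variance(coeffs, means, sigmas, n=200_000, seed=1):
    # Monte Carlo method: sample the inputs, push each sample through
    # the model, and estimate the output variance empirically.
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        xs = [rng.gauss(m, s) for m, s in zip(means, sigmas)]
        ys.append(sum(a * x for a, x in zip(coeffs, xs)))
    mean_y = sum(ys) / n
    return sum((y - mean_y) ** 2 for y in ys) / (n - 1)

coeffs, means, sigmas = [2.0, -1.5], [1.0, 3.0], [0.3, 0.2]
va = analytic_variance(coeffs, sigmas)          # 0.6^2 + 0.3^2 = 0.45
vm = monte_carlo_variance(coeffs, means, sigmas)
```

For a nonlinear model (such as the tutorial's convective-diffusive example) the analytic shortcut no longer holds exactly, which is precisely when the Monte Carlo route earns its cost.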
Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)
Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur
2016-01-01
We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497
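The one-parameter Rasch model used in the analyses above has a simple closed form: the probability of a correct response depends only on the difference between student ability and item difficulty on the logit scale. The ability and difficulty values below are hypothetical, chosen only to illustrate its behaviour.

```python
import math

def rasch_p(theta, b):
    """Rasch (1-PL) probability that a student of ability theta
    answers an item of difficulty b correctly (logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An average student (theta = 0) on an easy item (b = -1) versus
# a hard item (b = +1): the easy item yields the higher probability.
p_easy = rasch_p(theta=0.0, b=-1.0)
p_hard = rasch_p(theta=0.0, b=1.0)
```

When ability equals difficulty the probability is exactly 0.5, which is why items spread widely in difficulty (as reported for SRBCI) provide information across the whole ability range.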
Bunn, A.G.; Jansma, E.; Korpela, M.; Westfall, R.D.; Baldwin, J.
2013-01-01
Mean sensitivity (ζ) continues to be used in dendrochronology despite a literature that shows it to be of questionable value in describing the properties of a time series. We simulate first-order autoregressive models with known parameters and show that ζ is a function of variance and
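A minimal sketch of the kind of simulation described in this (truncated) abstract, using the standard dendrochronological definition of mean sensitivity as the average relative first difference of consecutive values. The AR(1) parameters and the positive mean level below are hypothetical, not those of the paper; the point is only that ζ rises with the innovation variance.

```python
import random

def mean_sensitivity(x):
    # Standard dendrochronology definition:
    # zeta = mean over t of 2*|x_{t+1} - x_t| / (x_{t+1} + x_t).
    return sum(2 * abs(b - a) / (b + a) for a, b in zip(x, x[1:])) / (len(x) - 1)

def simulate_ar1(n, phi, innov_sd, mean=1.0, seed=0):
    # First-order autoregressive series around a positive mean
    # (values must stay positive for zeta to be well defined).
    rng = random.Random(seed)
    x, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + rng.gauss(0.0, innov_sd)
        x.append(mean + prev)
    return x

low = mean_sensitivity(simulate_ar1(5000, phi=0.5, innov_sd=0.02))
high = mean_sensitivity(simulate_ar1(5000, phi=0.5, innov_sd=0.08))
```

Holding the autoregressive coefficient fixed and only increasing the innovation standard deviation increases ζ, consistent with the authors' point that ζ largely restates the variance rather than measuring something independent.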
Limitations of Poisson statistics in describing radioactive decay.
Sitek, Arkadiusz; Celler, Anna M
2015-12-01
The assumption that nuclear decays are governed by Poisson statistics is an approximation. This approximation becomes unjustified when data acquisition times longer than or even comparable with the half-lives of the radioisotope in the sample are considered. In this work, the limits of the Poisson-statistics approximation are investigated. The formalism for the statistics of radioactive decay based on the binomial distribution is derived. The theoretical factor describing the deviation of the variance of the number of decays predicted by the Poisson distribution from the true variance is defined and investigated for several commonly used radiotracers such as (18)F, (15)O, (82)Rb, (13)N, (99m)Tc, (123)I, and (201)Tl. The variance of the number of decays estimated using the Poisson distribution is significantly different from the true variance for a 5-minute observation time of (11)C, (15)O, (13)N, and (82)Rb. Durations of nuclear medicine studies are often relatively long; they may even be a few times longer than the half-lives of some short-lived radiotracers. Our study shows that in such situations Poisson statistics are unsuitable and should not be applied to describe the statistics of the number of decays in radioactive samples. However, the above statement does not directly apply to counting statistics at the level of event detection. The low sensitivities of detectors used in imaging studies make the Poisson approximation near perfect. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
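The binomial-versus-Poisson deviation discussed above has a simple closed form: the number of decays in time t from N₀ nuclei is binomial with p = 1 − 2^(−t/T½), so the true variance N₀p(1−p) differs from the Poisson value N₀p by the factor (1−p), the survival probability over the acquisition. A sketch, using textbook half-lives rather than the paper's exact values:

```python
def decay_prob(t, half_life):
    # Probability that a single nucleus decays within time t.
    return 1.0 - 2.0 ** (-t / half_life)

def variance_ratio(t, half_life):
    # Decays in time t are binomial(N0, p): true variance N0*p*(1-p).
    # The Poisson approximation gives N0*p, so the ratio
    # true/Poisson equals (1 - p), independent of N0.
    return 1.0 - decay_prob(t, half_life)

# Approximate textbook half-lives in seconds: O-15 ~122 s, F-18 ~6586 s.
t_acq = 300.0  # a 5-minute acquisition
r_o15 = variance_ratio(t_acq, 122.0)   # large deviation from Poisson
r_f18 = variance_ratio(t_acq, 6586.0)  # Poisson still near perfect
```

For a 5-minute scan of O-15 the true variance is well under a quarter of the Poisson prediction, while for F-18 the two agree to within a few percent, matching the short-lived versus long-lived contrast drawn in the abstract.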
Statistical physics of medical ultrasonic images
International Nuclear Information System (INIS)
Wagner, R.F.; Insana, M.F.; Brown, D.G.; Smith, S.W.
1987-01-01
The physical and statistical properties of backscattered signals in medical ultrasonic imaging are reviewed in terms of: 1) the radiofrequency signal; 2) the envelope (video or magnitude) signal; and 3) the density of samples in simple and in compounded images. There is a wealth of physical information in backscattered signals in medical ultrasound. This information is contained in the radiofrequency spectrum - which is not typically displayed to the viewer - as well as in the higher statistical moments of the envelope or video signal - which are not readily accessed by the human viewer of typical B-scans. This information may be extracted from the detected backscattered signals by straightforward signal processing techniques at low resolution
Statistical theory applications and associated computer codes
International Nuclear Information System (INIS)
Prince, A.
1980-01-01
The general format is along the same lines as that used in the O.M. Session, i.e. an introduction to the nature of the physical problems and methods of solution based on the statistical model of the nucleus. Both binary and higher multiple reactions are considered. The computer codes used in this session are a combination of optical model and statistical theory. As with the O.M. sessions, the preparation of input and analysis of output are thoroughly examined. Again, comparison with experimental data serves to demonstrate the validity of the results and possible areas for improvement. (author)
About the use of rank transformation in sensitivity analysis of model output
International Nuclear Information System (INIS)
Saltelli, Andrea; Sobol', Ilya M
1995-01-01
Rank transformations are frequently employed in numerical experiments involving a computational model, especially in the context of sensitivity and uncertainty analyses. Response surface replacement and parameter screening are tasks which may benefit from a rank transformation. Ranks can cope with nonlinear (albeit monotonic) input-output relationships, allowing the use of linear regression techniques. Rank-transformed statistics are more robust, and provide a useful solution in the presence of long-tailed input and output distributions. As is known to practitioners, care must be employed when interpreting the results of such analyses, as any conclusion drawn using ranks does not translate easily to the original model. In the present note a heuristic approach is taken to explore, by way of practical examples, the effect of a rank transformation on the outcome of a sensitivity analysis. An attempt is made to identify trends, and to correlate these effects to a model taxonomy. Employing sensitivity indices, whereby the total variance of the model output is decomposed into a sum of terms of increasing dimensionality, we show that the main effect of the rank transformation is to increase the relative weight of the first-order terms (the 'main effects') at the expense of the 'interactions' and 'higher-order interactions'. As a result, the influence of those parameters which influence the output mostly by way of interactions may be overlooked in an analysis based on the ranks. This difficulty increases with the dimensionality of the problem, and may lead to the failure of a rank-based sensitivity analysis. We suggest that the models can be ranked, with respect to the complexity of their input-output relationship, by means of an 'association' index I_y. I_y may complement the usual model coefficient of determination R_y² as a measure of model complexity for the purpose of uncertainty and sensitivity analysis
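The core observation, that ranks recover a linear relationship from a monotonic but nonlinear one, can be illustrated with a toy model; the function and sample size below are illustrative, not taken from the note.

```python
import math
import random

def rank(xs):
    # Rank transform: replace each value by its position in sorted order.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)
x = [rng.uniform(0.0, 1.0) for _ in range(2000)]
y = [math.exp(8.0 * v) for v in x]   # strongly nonlinear but monotonic

r_raw = pearson(x, y)                 # linear correlation is diluted
r_rank = pearson(rank(x), rank(y))    # ranks recover it (exactly 1 here)
```

Note that a purely interaction-driven model (e.g. y = x1·x2 with zero-mean inputs) would gain nothing from this transform, which is the pitfall the note warns about.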
Sensitivity of Footbridge Vibrations to Stochastic Walking Parameters
DEFF Research Database (Denmark)
Pedersen, Lars; Frier, Christian
2010-01-01
of the pedestrian. A stochastic modelling approach is adopted for this paper and it facilitates quantifying the probability of exceeding various vibration levels, which is useful in a discussion of serviceability of a footbridge design. However, estimates of statistical distributions of footbridge vibration levels...... to walking loads might be influenced by the models assumed for the parameters of the load model (the walking parameters). The paper explores how sensitive estimates of the statistical distribution of vertical footbridge response are to various stochastic assumptions for the walking parameters. The basis...... for the study is a literature review identifying different suggestions as to how the stochastic nature of these parameters may be modelled, and a parameter study examines how the different models influence estimates of the statistical distribution of footbridge vibrations. By neglecting scatter in some...
Students' Perspectives of Using Cooperative Learning in a Flipped Statistics Classroom
Chen, Liwen; Chen, Tung-Liang; Chen, Nian-Shing
2015-01-01
Statistics has been recognised as one of the most anxiety-provoking subjects to learn in the higher education context. Educators have continuously endeavoured to find ways to integrate digital technologies and innovative pedagogies in the classroom to eliminate the fear of statistics. The purpose of this study is to systematically identify…
International Nuclear Information System (INIS)
Dai, Wu-Sheng; Xie, Mi
2013-01-01
In this paper, we give a general discussion on the calculation of the statistical distribution from a given operator relation of creation, annihilation, and number operators. Our result shows that as long as the relation between the number operator and the creation and annihilation operators can be expressed as a†b = Λ(N) or N = Λ⁻¹(a†b), where N, a†, and b denote the number, creation, and annihilation operators, i.e., N is a function of a quadratic product of the creation and annihilation operators, the corresponding statistical distribution is the Gentile distribution, a statistical distribution in which the maximum occupation number is an arbitrary integer. As examples, we discuss the statistical distributions corresponding to various operator relations. In particular, besides the Bose–Einstein and Fermi–Dirac cases, we discuss the statistical distributions for various schemes of intermediate statistics, especially various q-deformation schemes. Our result shows that the statistical distributions corresponding to various q-deformation schemes are various Gentile distributions with different maximum occupation numbers which are determined by the deformation parameter q. This result shows that the results given in much of the literature on the q-deformation distribution are inaccurate or incomplete. -- Highlights: ► A general discussion on calculating the statistical distribution from relations of creation, annihilation, and number operators. ► A systematic study of the statistical distributions corresponding to various q-deformation schemes. ► Arguing that many results of q-deformation distributions in the literature are inaccurate or incomplete
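The interpolation property mentioned above is easy to check numerically from the common closed form of the Gentile mean occupation number; the formula below is the standard textbook expression, not necessarily the exact parametrization used in the paper, and x = β(ε − μ) is assumed positive.

```python
import math

def gentile_occupation(x, M):
    """Mean occupation number of a Gentile gas with maximum occupation
    number M, where x = beta * (epsilon - mu) > 0."""
    return 1.0 / (math.exp(x) - 1.0) - (M + 1) / (math.exp((M + 1) * x) - 1.0)

def fermi_dirac(x):
    return 1.0 / (math.exp(x) + 1.0)

def bose_einstein(x):
    return 1.0 / (math.exp(x) - 1.0)

x = 0.7
n_fd = gentile_occupation(x, M=1)    # M = 1 reduces to Fermi-Dirac
n_be = gentile_occupation(x, M=500)  # large M approaches Bose-Einstein
```

Setting M = 1 reproduces Fermi-Dirac algebraically (1/(y−1) − 2/(y²−1) = 1/(y+1) with y = eˣ), and the second term vanishes as M grows, leaving Bose-Einstein; intermediate M values give the intermediate statistics the paper associates with the q-deformation schemes.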
International Nuclear Information System (INIS)
Takeyoshi, Masahiro; Sawaki, Masakuni; Yamasaki, Kanji; Kimber, Ian
2003-01-01
The murine local lymph node assay (LLNA) is used for the identification of chemicals that have the potential to cause skin sensitization. However, it requires specific facility and handling procedures to accommodate a radioisotopic (RI) endpoint. We have developed a non-radioisotopic (non-RI) endpoint for the LLNA based on BrdU incorporation to avoid the use of RI. Although this alternative method appears viable in principle, it is somewhat less sensitive than the standard assay. In this study, we report investigations to determine whether statistical analysis can improve the sensitivity of a non-RI LLNA procedure, using α-hexylcinnamic aldehyde (HCA) in two separate experiments. The alternative non-RI method required HCA concentrations of greater than 25% to elicit a positive response based on the criterion for classification as a skin sensitizer in the standard LLNA. Nevertheless, dose responses to HCA in the alternative method were consistent in both experiments, and we examined whether an endpoint based upon the statistical significance of induced changes in LNC turnover, rather than an SI of 3 or greater, might provide additional sensitivity. The results reported here demonstrate that, with HCA at least, significant responses were recorded in each of two experiments following exposure of mice to 25% HCA. These data suggest that this approach may be more satisfactory, at least when BrdU incorporation is measured. However, even when employing the statistical endpoint, this modification of the LLNA remains rather less sensitive than the standard method. Taken together, the data reported here suggest that a modified LLNA in which BrdU is used in place of radioisotope incorporation shows some promise, but that in its present form, even with the use of a statistical endpoint, it lacks some of the sensitivity of the standard method. The challenge is to develop strategies for further refinement of this approach
Takeyoshi, Masahiro; Sawaki, Masakuni; Yamasaki, Kanji; Kimber, Ian
2003-09-30
The murine local lymph node assay (LLNA) is used for the identification of chemicals that have the potential to cause skin sensitization. However, it requires specific facility and handling procedures to accommodate a radioisotopic (RI) endpoint. We have developed a non-radioisotopic (non-RI) endpoint for the LLNA based on BrdU incorporation to avoid the use of RI. Although this alternative method appears viable in principle, it is somewhat less sensitive than the standard assay. In this study, we report investigations to determine whether statistical analysis can improve the sensitivity of a non-RI LLNA procedure, using alpha-hexylcinnamic aldehyde (HCA) in two separate experiments. The alternative non-RI method required HCA concentrations of greater than 25% to elicit a positive response based on the criterion for classification as a skin sensitizer in the standard LLNA. Nevertheless, dose responses to HCA in the alternative method were consistent in both experiments, and we examined whether an endpoint based upon the statistical significance of induced changes in LNC turnover, rather than an SI of 3 or greater, might provide additional sensitivity. The results reported here demonstrate that, with HCA at least, significant responses were recorded in each of two experiments following exposure of mice to 25% HCA. These data suggest that this approach may be more satisfactory, at least when BrdU incorporation is measured. However, even when employing the statistical endpoint, this modification of the LLNA remains rather less sensitive than the standard method. Taken together, the data reported here suggest that a modified LLNA in which BrdU is used in place of radioisotope incorporation shows some promise, but that in its present form, even with the use of a statistical endpoint, it lacks some of the sensitivity of the standard method. The challenge is to develop strategies for further refinement of this approach.
Falgreen, Steffen; Laursen, Maria Bach; Bødker, Julie Støve; Kjeldsen, Malene Krag; Schmitz, Alexander; Nyegaard, Mette; Johnsen, Hans Erik; Dybkær, Karen; Bøgsted, Martin
2014-06-05
In vitro generated dose-response curves of human cancer cell lines are widely used to develop new therapeutics. The curves are conventionally summarised by simplified statistics that ignore the dose-response curves' dependency on drug exposure time and growth kinetics. This may lead to suboptimal exploitation of data and biased conclusions about the potential of the drug in question. We therefore set out to improve dose-response assessments by eliminating the impact of time dependency. First, a mathematical model for drug-induced cell growth inhibition was formulated and used to derive novel dose-response curves and improved summary statistics that are independent of time under the proposed model. Next, a statistical analysis workflow for estimating the improved statistics was suggested, consisting of 1) nonlinear regression models for estimation of cell counts and doubling times, 2) isotonic regression for modelling the suggested dose-response curves, and 3) a resampling-based method for assessing the variation of the novel summary statistics. We document that conventionally used summary statistics for dose-response experiments depend on time, so that fast-growing cell lines are considered overly sensitive compared to slowly growing ones. The adequacy of the mathematical model is tested for doxorubicin and found to fit real data to an acceptable degree. Dose-response data from the NCI60 drug screen were used to illustrate the time dependency and demonstrate an adjustment correcting for it. The applicability of the workflow was illustrated by simulation and application to a doxorubicin growth inhibition screen. The simulations show that under the proposed mathematical model the suggested statistical workflow results in unbiased estimates of the time-independent summary statistics. Variance estimates of the novel summary statistics are used to conclude that the doxorubicin screen covers a significantly diverse range of responses ensuring it is useful for biological
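The time-dependency bias described above can be reproduced with a deliberately simple exponential-growth sketch; this toy model and its parameters are illustrative assumptions, not the authors' model. Two cell lines given the same intrinsic drug inhibition but different doubling times yield different "% of control" readouts at a fixed exposure time.

```python
import math

def relative_viability(doubling_time_h, inhibition, exposure_h):
    # Toy model: untreated cells grow exponentially at rate g = ln(2)/Td;
    # the drug scales the growth rate by (1 - inhibition). The conventional
    # "% of control" readout, treated count over untreated count, is then
    # exp(-g * inhibition * t): it depends on exposure time t and doubling
    # time Td, not just on the drug's intrinsic effect.
    g = math.log(2.0) / doubling_time_h
    return math.exp(-g * inhibition * exposure_h)

same_drug_effect = 0.5  # identical intrinsic inhibition for both lines
fast = relative_viability(doubling_time_h=20.0, inhibition=same_drug_effect, exposure_h=48.0)
slow = relative_viability(doubling_time_h=60.0, inhibition=same_drug_effect, exposure_h=48.0)
```

The fast-growing line shows the lower relative viability despite an identical intrinsic drug effect, which is exactly the "fast growers look overly sensitive" artefact the time-independent statistics are designed to remove.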
Silicon nanowire structures as high-sensitive pH-sensors
International Nuclear Information System (INIS)
Belostotskaya, S O; Chuyko, O V; Kuznetsov, A E; Kuznetsov, E V; Rybachek, E N
2012-01-01
Sensitive elements for pH sensors based on silicon nanostructures were investigated. The silicon nanostructures were used as ion-sensitive field-effect transistors (ISFETs) for the measurement of solution pH. They were fabricated by a 'top-down' approach and studied as pH-sensitive elements. Nanowires have the higher sensitivity: it was shown that a sensitive element made of a 'one-dimensional' silicon nanostructure has a higher pH sensitivity than a 'two-dimensional' structure. An integrated element formed from two p- and n-type nanowire ISFETs (an 'inverter') can be used as a high-sensitivity sensor for local relative changes of [H+] concentration in a very small volume.
Gamma irradiation increase the sensitivity of Salmonella to antibiotics
International Nuclear Information System (INIS)
Ben Miloud, Najla; Barkallah, Insaf
2008-01-01
In order to study the effect of ionizing radiation on the resistance of Salmonella to antibiotics, four strains of Salmonella were isolated from foods. The strains used in the present study were S. Hadar isolate 287, S. Hadar isolate 63, S. Cerro isolate 291, and S. Zanzibar isolate 1103. Antibiogram analyses were performed to test the in vitro sensitivity of the irradiated Salmonella isolates to different antibiotics. The analysis of control and exposed antibiograms showed that gamma radiation increased the sensitivity of the Salmonella isolates to Cefalotin, Chloramphenicol, Nalidixic acid, Spiramycin and Gentamycin, except S. Hadar isolate 287, which was resistant to Cefalotin and became sensitive after irradiation. Statistical analyses showed that the effect of the different irradiation dose treatments on antibiotic sensitivity is increasingly significant. Irradiation did not induce modifications of the sensitivity to the other antibiotics, probably because of their nature, their mode of penetration into the cell, or their mode of action
Challenges for statistics teaching and teacher’s training in Mexico
Directory of Open Access Journals (Sweden)
Sergio Hernández González
2013-08-01
Full Text Available This work covers the problems found in teacher training and professional development in probability and statistics in higher education in Mexico. It is approached through four focuses: a) the characterization and training of teachers who drive the development and implementation of curriculum reforms in the teaching of statistics; b) the challenges teachers face in the instruction of university-level statistics; c) new curricular reforms with respect to the instruction of statistics that propose the development of project-based learning through the use of appropriate statistical software; and d) educational innovation as a body of knowledge in development, through which networks of professors who favor the emergence of real innovation take shape. Starting from these perspectives, the challenges confronted in the teaching and training of statistics professors are set out.
International Nuclear Information System (INIS)
Babu, Dickson D.; Su, Rui; El-Shafei, Ahmed; Adhikari, Airody Vasudeva
2016-01-01
Highlights: • First report on the effect of anchoring groups on co-sensitization performance. • Two novel co-adsorbers have been designed and synthesized. • Barbituric acid has emerged as a good anchoring group for co-sensitizers. • Co-sensitized device displayed an enhanced efficiency of 8.06%. - Abstract: Herein, we report the molecular design and synthesis of two novel organic co-adsorbers, DBA-1 ((Z)-2-cyano-3-(5-(4-(cyclohexa-1,5-dien-3-ynyl(phenyl)amino)phenyl)-1-hexyl-1H-indol-3-yl)acrylic acid) and DBA-2 (5-((5-(4-(diphenylamino)phenyl)-1-hexyl-1H-indol-3-yl)methylene)pyrimidine-2,4,6(1H,3H,5H)-trione), with D-D-A (donor-donor-acceptor) architecture. We have combined the strong electron-donating triphenylamine group with an indole moiety attached to different acceptors/anchoring groups, as co-adsorbers for dye-sensitized solar cells, and we present for the first time the role of the anchoring/acceptor unit on their co-adsorption properties. In this study, cyanoacetic acid and barbituric acid are employed as anchoring groups in the co-sensitizers DBA-1 and DBA-2, respectively. Their electrochemical and photo-physical properties, along with molecular geometries obtained from Density Functional Theory (DFT), are employed to vindicate the effect of co-sensitizer structures on the photovoltaic properties of DSSCs. We have demonstrated that the co-sensitization effect is profoundly dependent upon the anchoring/acceptor unit in the co-adsorber molecule. Devices co-sensitized using DBA-1 and DBA-2 along with HD-2 (Ru-complex of 4,4'-bis-(1,4-benzodioxan-5-yl-vinyl)-[2,2']bipyridine) displayed higher power conversion efficiencies (PCEs) than the device sensitized using only HD-2. In the present work, the ruthenium-based sensitizer HD-2 has been chosen due to its better solar-to-power conversion efficiency and impressively higher photocurrent densities than those of standard N719. Among them, co-adsorber DBA-2, containing barbituric acid as the acceptor/anchoring group
Roy, Andrew K; McCullagh, Brian N; Segurado, Ricardo; McGorrian, Catherine; Keane, Elizabeth; Keaney, John; Fitzgibbon, Maria N; Mahon, Niall G; Murray, Patrick T; Gaine, Sean P
2014-01-01
The detection of elevations in cardiorenal biomarkers, such as troponins, B-type natriuretic peptides (BNPs), and neutrophil gelatinase-associated lipocalins, are associated with poor outcomes in patients hospitalized with acute heart failure. Less is known about the association of these markers with adverse events in chronic right ventricular dysfunction due to pulmonary hypertension, or whether their measurement may improve risk assessment in the outpatient setting. We performed a cohort study of 108 patients attending the National Pulmonary Hypertension Unit in Dublin, Ireland, from 2007 to 2009. Cox proportional hazards analysis and receiver operating characteristic curves were used to determine predictors of mortality and hospitalization. Death or hospitalization occurred in 50 patients (46.3%) during the median study period of 4.1 years. Independent predictors of mortality were: 1) decreasing 6-minute walk test (6MWT; hazard ratio [HR] 12.8; P < .001); 2) BNP (HR 6.68; P < .001); and 3) highly sensitive troponin (hsTnT; HR 5.48; P < .001). Adjusted hazard analyses remained significant when hsTnT was added to a model with BNP and 6MWT (HR 9.26, 95% CI 3.61-23.79), as did the predictive ability of the model for death and rehospitalization (area under the receiver operating characteristic curve 0.81, 95% CI 0.73-0.90). Detection of troponin using a highly sensitive assay identifies a pulmonary hypertension subgroup with a poorer prognosis. hsTnT may also be used in a risk prediction model to identify patients at higher risk who may require escalation of targeted pulmonary vasodilator therapies and closer clinical surveillance. Copyright © 2014 Elsevier Inc. All rights reserved.
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-08
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise: it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, the good SPs that have high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
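A heavily simplified sketch of the idea behind the ASTF: suppress frequency components of a measurement that resemble a reference noise recording, keeping only components that stand out from the noise. The paper's actual filter uses formal hypothesis tests with a PSO-tuned significance level; the fixed threshold rule, signal frequencies, and all parameters below are illustrative assumptions.

```python
import numpy as np

def noise_subtraction_filter(signal, noise_ref, k=2.0):
    # Crude frequency-domain analogue of the ASTF idea: compare the
    # amplitude spectrum of the measured signal with that of a separate
    # noise-only recording, and zero the bins whose amplitude does not
    # exceed k times the reference noise amplitude (i.e. bins that are
    # "similar to noise" are removed, the rest are kept).
    S = np.fft.rfft(signal)
    R = np.fft.rfft(noise_ref)
    keep = np.abs(S) > k * np.abs(R)
    return np.fft.irfft(S * keep, n=len(signal))

rng = np.random.default_rng(0)
t = np.arange(4096) / 4096.0
clean = np.sin(2 * np.pi * 50 * t)            # hypothetical fault component
noisy = clean + rng.normal(0.0, 1.0, t.size)  # buried in background noise
ref = rng.normal(0.0, 1.0, t.size)            # separate noise-only record

filtered = noise_subtraction_filter(noisy, ref)
corr_before = np.corrcoef(clean, noisy)[0, 1]
corr_after = np.corrcoef(clean, filtered)[0, 1]
```

Even this crude threshold markedly improves the correlation with the hidden periodic component, which is the effect the statistically principled ASTF achieves with a tuned, test-based decision rule instead of a fixed factor k.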
Directory of Open Access Journals (Sweden)
Ke Li
2016-01-01
Full Text Available A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise: it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, the good SPs that have high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-01
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise: it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, the good SPs that have high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
Statistical model predictions for p+p and Pb+Pb collisions at LHC
Kraus, I.; Cleymans, J.; Oeschler, H.; Redlich, K.; Wheaton, S.
2009-01-01
Particle production in p+p and central Pb+Pb collisions at the LHC is discussed in the context of the statistical thermal model. For heavy-ion collisions, predictions of various particle ratios are presented. The sensitivity of several ratios to the temperature and the baryon chemical potential is studied in
Probing the exchange statistics of one-dimensional anyon models
Greschner, Sebastian; Cardarelli, Lorenzo; Santos, Luis
2018-05-01
We propose feasible scenarios for revealing the modified exchange statistics in one-dimensional anyon models in optical lattices based on an extension of the multicolor lattice-depth modulation scheme introduced in [Phys. Rev. A 94, 023615 (2016), 10.1103/PhysRevA.94.023615]. We show that the fast modulation of a two-component fermionic lattice gas in the presence of a magnetic field gradient, in combination with additional resonant microwave fields, allows for the quantum simulation of hardcore anyon models with periodic boundary conditions. Such a semisynthetic ring setup allows for realizing an interferometric arrangement sensitive to the anyonic statistics. Moreover, we show that simple expansion experiments may reveal the formation of anomalously bound pairs resulting from the anyonic exchange.
Hayslett, H T
1991-01-01
Statistics covers the basic principles of the subject. The book starts by tackling the importance of statistics and its two kinds; the presentation of sample data; the definition, illustration, and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution, and the normal approximation to the binomial. Testing of statistical hypotheses, and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population, are explained. The text the
Energy Technology Data Exchange (ETDEWEB)
Woth, K. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Kuestenforschung
2001-07-01
In this study, the sensitivity of the estimation of small-scale climate variables using the technique of statistical downscaling is investigated and one method to select the most suitable input data is presented. For the example of precipitation in southwest Europe, the input data are selected systematically by extracting those stations that show a strong statistical relation in time with North Atlantic sea level pressure (SLP). From these stations the sector of North Atlantic SLP is selected that best explains the dominant spatial pattern of regional precipitation. For comparison, one alternative, slightly different geographical box is used. For both sectors a statistical model for the estimation of future rainfall in the southwest of Europe is constructed. It is shown that the method of statistical downscaling is sensitive to small changes of the input data and that the estimations of future precipitation show remarkable differences for the two different Atlantic SLP sectors considered. Possible reasons are discussed. (orig.)
Statistical measurement of power spectrum density of large aperture optical component
International Nuclear Information System (INIS)
Xu Jiancheng; Xu Qiao; Chai Liqun
2010-01-01
According to the requirements of ICF, a method based on statistical theory has been proposed to measure the power spectrum density (PSD) of large-aperture optical components. The method breaks the large-aperture wavefront into small regions and obtains the PSD of the large-aperture wavefront by weighted averaging of the PSDs of the regions, where the weight factor is each region's area. Simulation and experiment demonstrate the effectiveness of the proposed method. They also show that the PSDs of the large-aperture wavefront obtained by the statistical method and by the sub-aperture stitching method agree well when the number of small regions is no less than 8 x 8. The statistical method is not sensitive to translation-stage errors or environmental instabilities, so it is appropriate for PSD measurement during the process of optical fabrication. (authors)
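The area-weighted averaging step can be sketched directly (a schematic of the statistical method described above; the 2-D periodogram normalization and equal-sized square regions are simplifying assumptions):

```python
import numpy as np

def region_psd(w, dx):
    # 2-D periodogram PSD estimate for one wavefront region
    # (w: height map on a grid with sample spacing dx).
    ny, nx = w.shape
    W = np.fft.fft2(w - w.mean())
    return (np.abs(W) ** 2) * (dx * dx) / (nx * ny)

def stitched_psd(regions, dx):
    # Weighted average of the region PSDs; the weight factor is
    # each region's area, as in the statistical method.
    areas = np.array([r.size for r in regions], dtype=float)
    psds = np.stack([region_psd(r, dx) for r in regions])
    return np.average(psds, axis=0, weights=areas)

rng = np.random.default_rng(0)
regions = [rng.standard_normal((64, 64)) for _ in range(8 * 8)]
psd = stitched_psd(regions, dx=0.1)
```

With equal-sized regions the weights are uniform; the weighting matters when edge regions of a non-rectangular aperture are smaller.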
Keratosis reduces sensitivity of anal cytology in detecting anal intraepithelial neoplasia.
ElNaggar, Adam C; Santoso, Joseph T; Xie, Huiwen Bill
2012-02-01
To identify factors that may contribute to the poor sensitivity of anal cytology, in contrast to the sensitivity of anoscopy, in heterosexual women. We analyzed 324 patients with biopsy-confirmed diagnoses of genital intraepithelial neoplasia (vulvar, vaginal, or cervical) from 2006 to 2011 who underwent both anal cytology and anoscopy. Cytology, anoscopy, and biopsy results were recorded. Biopsy specimens underwent independent analysis for quality of specimen and for characteristics that may contribute to correlation, or lack thereof, between anal cytology and anoscopy-directed biopsy. 133 (41%) patients had abnormal anoscopy and underwent directed biopsy. 120 patients with normal anal cytology had anoscopy-directed biopsies, resulting in 58 cases of AIN (sensitivity 9.4%; 0.039-0.199). This cohort was noted to have extensive keratosis covering the entire dysplastic anal lesion. 18 patients yielded abnormal anal cytology. Of these patients, 13 had anoscopy-directed biopsies, revealing 6 with AIN and absent keratosis (specificity 88.6%; 0.78-0.95). The κ statistic for anal cytology and anoscopy was -0.0213 (95% CI=-0.128-0.086). Keratosis reduces the sensitivity of anal cytology. Furthermore, anal cytology correlates poorly with anoscopy in the detection of AIN (κ statistic=-0.0213). Copyright © 2011 Elsevier Inc. All rights reserved.
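The κ statistic quoted above measures chance-corrected agreement between the two tests; for two binary ratings it can be computed directly (a generic sketch, not tied to the study's data):

```python
def cohens_kappa(a, b):
    # a, b: equal-length lists of 0/1 results from two tests or raters.
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n              # positive rates of each test
    pe = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (po - pe) / (1 - pe)
```

Values near zero, like the study's -0.02, mean the two tests agree no better than chance.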
Statistical core design methodology using the VIPRE thermal-hydraulics code
International Nuclear Information System (INIS)
Lloyd, M.W.; Feltus, M.A.
1995-01-01
An improved statistical core design methodology for developing a computational departure from nucleate boiling ratio (DNBR) correlation has been developed and applied in order to analyze the nominal 1.3 DNBR limit on Westinghouse Pressurized Water Reactor (PWR) cores. This analysis, although limited in scope, found that the DNBR limit can be reduced from 1.3 to some lower value while remaining accurate at a 95% confidence level, for three particular FSAR operational transients: turbine trip, complete loss of flow, and inadvertent opening of a pressurizer relief valve. The VIPRE-01 thermal-hydraulics code, the SAS/STAT statistical package, and the EPRI/Columbia University DNBR experimental data base were used in this research to develop the Pennsylvania State Statistical Core Design Methodology (PSSCDM). The VIPRE code was used to perform the necessary sensitivity studies and generate the EPRI correlation-calculated DNBR predictions. The SAS package used these DNBR predictions from VIPRE as a data set to determine the best fit for the empirical model and to perform the statistical analysis. (author)
Directory of Open Access Journals (Sweden)
Urtado CB
2011-11-01
Full Text Available Christiano Bertoldo Urtado(1,2), Guilherme Borges Pereira(3), Marilia Bertoldo Urtado(4), Érica Blascovi de Carvalho(2), Gerson dos Santos Leite(1), Felipe Fedrizzi Donatto(1), Claudio de Oliveira Assumpção(1), Richard Diego Leite(3), Carlos Alberto da Silva(1), Marcelo Magalhães de Sales(5), Ramires Alsamir Tibana(5), Silvia Cristina Crepaldi Alves(1), Jonato Prestes(5). (1) Health Sciences, Methodist University of Piracicaba, Piracicaba, SP; (2) Center for Investigation in Pediatrics, Faculty of Medical Sciences, State University of Campinas, Campinas, SP; (3) Department of Physiological Sciences, Federal University of São Carlos, São Carlos, SP; (4) Laboratory of Orofacial Pain, Division of Oral Physiology, Piracicaba Dental School, State University of Campinas, Campinas, SP; (5) Graduation Program in Physical Education, Catholic University of Brasilia, Brasilia, DF, Brazil. Abstract: The aim of the present study was to investigate the effects of anabolic-androgenic steroids and resistance training (RT) on insulin sensitivity in ovariectomized rats. Adult female Wistar rats were divided into ten experimental groups (n = 5 animals per group): (1) sedentary (Sed-Intact); (2) sedentary ovariectomized (Sed-Ovx); (3) sedentary nandrolone (Sed-Intact-ND); (4) sedentary ovariectomized plus nandrolone (Sed-Ovx-ND); (5) trained (TR-Intact); (6) trained nandrolone (TR-Intact-ND); (7) trained ovariectomized (TR-Ovx); (8) trained ovariectomized plus nandrolone; (9) trained sham; and (10) trained ovariectomized plus sham. Four sessions of RT were used, during which the animals climbed a 1.1 m vertical ladder with weights attached to their tails. The sessions were performed once every 3 days, with between four and nine climbs and with eight to twelve dynamic movements per climb. To test insulin sensitivity in the pancreas, glucose and insulin tolerance tests were performed. For insulin sensitivity, there was a statistically significant interaction for the TR-Ovx group, which presented higher sensitivity
Association between atopic dermatitis and contact sensitization
DEFF Research Database (Denmark)
Hamann, Carsten R; Hamann, Dathan; Egeberg, Alexander
2017-01-01
BACKGROUND: It is unclear whether patients with atopic dermatitis (AD) have an altered prevalence of or risk for contact sensitization. Increased exposure to chemicals in topical products together with impaired skin barrier function suggest a higher risk, whereas the immune profile suggests a lower risk. OBJECTIVE: To perform a systematic review and meta-analysis of the association between AD and contact sensitization. METHODS: The PubMed/Medline, Embase, and Cochrane databases were searched for articles that reported on contact sensitization in individuals with and without AD. RESULTS...
Sensitivity of goodness-of-fit statistics to rainfall data rounding off
Deidda, Roberto; Puliga, Michelangelo
An analysis based on the L-moments theory suggests adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a significant problem, not yet completely resolved, arises in the estimation of a left-censoring threshold able to assure a good fit of the rainfall data to the generalized Pareto distribution. In order to detect an optimal threshold, while keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using the percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most of the analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounded-off values among the sample data, affecting the distribution of the goodness-of-fit statistics and leading to significant departures from the percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
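The effect described above is easy to reproduce: rounding a sample drawn from a generalized Pareto distribution inflates a continuous goodness-of-fit statistic even when the fitted model is exactly right. A small Monte Carlo sketch (the parameter values and rounding resolution are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shape, scale = 0.1, 10.0                   # illustrative GPD parameters
exact = stats.genpareto.rvs(shape, scale=scale, size=2000, random_state=rng)
rounded = np.round(exact)                  # mimic rain-gauge rounding off

# One-sample Kolmogorov-Smirnov statistic against the true model:
ks_exact = stats.kstest(exact, 'genpareto', args=(shape, 0, scale)).statistic
ks_round = stats.kstest(rounded, 'genpareto', args=(shape, 0, scale)).statistic
```

Even though both samples come from the correct distribution, the rounded sample's step-like empirical CDF departs from the continuous model CDF, so percentage points tabulated for continuous variables are no longer valid, which is exactly the failure mode the abstract reports.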
Directory of Open Access Journals (Sweden)
Babak Vojudi
2015-02-01
Full Text Available Objective: The present study aimed to compare interpersonal sensitivity and assertiveness between drug-dependent persons and ordinary people. Method: The research method was causal-comparative. The statistical population of the study consisted of all narcotics addicts of Tabriz City who were referred to addiction treatment centers while the research was being conducted. Thirty addicted persons were selected through cluster sampling, and 30 ordinary persons were selected as a control group through convenience sampling. Gambrill and Richey's assertiveness questionnaire (1975) and Boyce and Parker's Interpersonal Sensitivity Measure (IPSM, 1989) were used for data collection purposes. Results: The results showed a statistically significant difference between the two groups in terms of interpersonal sensitivity and assertiveness: the addicts showed less assertiveness and more interpersonal sensitivity than their healthy counterparts. Conclusion: The findings show that people who are unable to express themselves and who are highly sensitive in interpersonal relationships are at higher risk of substance dependence. However, it may be possible to prevent such persons from turning to addiction by teaching them these skills.
Industrial commodity statistics yearbook 2001. Production statistics (1992-2001)
International Nuclear Information System (INIS)
2003-01-01
This is the thirty-fifth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title The Growth of World Industry and the next eight editions under the title Yearbook of Industrial Statistics. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1992-2001 for about 200 countries and areas
Industrial commodity statistics yearbook 2002. Production statistics (1993-2002)
International Nuclear Information System (INIS)
2004-01-01
This is the thirty-sixth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title 'The Growth of World Industry' and the next eight editions under the title 'Yearbook of Industrial Statistics'. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1993-2002 for about 200 countries and areas
Industrial commodity statistics yearbook 2000. Production statistics (1991-2000)
International Nuclear Information System (INIS)
2002-01-01
This is the thirty-fourth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title The Growth of World Industry and the next eight editions under the title Yearbook of Industrial Statistics. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. Most of the statistics refer to the ten-year period 1991-2000 for about 200 countries and areas
Position-sensitive superconductor detectors
International Nuclear Information System (INIS)
Kurakado, M.; Taniguchi, K.
2016-01-01
Superconducting tunnel junction (STJ) detectors and superconducting transition-edge sensors (TESs) are representative superconductor detectors with energy resolutions much higher than those of semiconductor detectors. STJ detectors are thin, making them suitable for detecting low-energy X rays. The signals of STJ detectors are more than 100 times faster than those of TESs. By contrast, TESs are microcalorimeters that measure the radiation energy from the change in temperature. Their signals are therefore slow, with time constants typically of several hundred μs, but TESs possess excellent energy resolutions: for example, a resolution of 1.6 eV for 5.9-keV X rays. An array of STJs or TESs can be used as a pixel detector. Superconducting series-junction detectors (SSJDs) comprise multiple STJs and a single-crystal substrate that acts as a radiation absorber. SSJDs are also position sensitive, and their energy resolutions are higher than those of semiconductor detectors. In this paper, we give an overview of position-sensitive superconductor detectors.
Statistical power and the Rorschach: 1975-1991.
Acklin, M W; McDowell, C J; Orndoff, S
1992-10-01
The Rorschach Inkblot Test has been the source of long-standing controversies as to its nature and its psychometric properties. Consistent with behavioral science research in general, the concept of statistical power has been entirely ignored by Rorschach researchers. The concept of power is introduced and discussed, and a power survey of the Rorschach literature published between 1975 and 1991 in the Journal of Personality Assessment, Journal of Consulting and Clinical Psychology, Journal of Abnormal Psychology, Journal of Clinical Psychology, Journal of Personality, Psychological Bulletin, American Journal of Psychiatry, and Journal of Personality and Social Psychology was undertaken. Power was calculated for 2,300 statistical tests in 158 journal articles. Power to detect small, medium, and large effect sizes was .13, .56, and .85, respectively. Similar to the findings in other power surveys conducted on behavioral science research, we concluded that Rorschach research is underpowered to detect the differences under investigation. This undoubtedly contributes to the inconsistency of research findings which has been a source of controversy and criticism over the decades. It appears that research conducted according to the Comprehensive System for the Rorschach is more powerful. Recommendations are offered for improving power and strengthening the design sensitivity of Rorschach research, including increasing sample sizes, use of parametric statistics, reduction of error variance, more accurate reporting of findings, and editorial policies reflecting concern about the magnitude of relationships beyond an exclusive focus on levels of statistical significance.
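The power figures above follow from standard calculations; a normal-approximation sketch for a two-sided two-sample t-test, using Cohen's conventional small/medium/large effect sizes of 0.2/0.5/0.8, shows the shape of the computation (an approximation, not the survey's exact method):

```python
from scipy import stats

def approx_power(effect_size, n_per_group, alpha=0.05):
    # Normal approximation to the power of a two-sided two-sample t-test.
    se = (2.0 / n_per_group) ** 0.5          # SE of the standardized mean difference
    z_crit = stats.norm.ppf(1 - alpha / 2)   # two-sided critical value
    ncp = effect_size / se                   # noncentrality of the test statistic
    return stats.norm.cdf(ncp - z_crit) + stats.norm.cdf(-ncp - z_crit)
```

With 64 subjects per group, a medium effect is detected with power near the conventional 0.80 target, while a small effect leaves power near 0.20, illustrating why underpowered studies produce the inconsistent findings the survey describes.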
A statistic sensitive to deviations from the zero-loss condition in a sequence of material balances
International Nuclear Information System (INIS)
Sellinschegg, D.
1982-01-01
The CUMUFR (cumulative sum of standardized MUF residuals) statistic is proposed to examine materials balance data for deviations from the zero-loss condition. The time series of MUF residuals is shown to be a linear transformation of the MUF time series. The MUF residuals can be obtained directly by applying the transformation, or approximately by applying a Kalman filter to estimate the true state of MUF. A modified sequential test with power one is formulated for testing the CUMUFR statistic. The detection capability of the proposed examination procedure is demonstrated by an example, based on Monte Carlo simulations, in which the materials balance of the chemical separation process in a reference reprocessing facility is considered. It is shown that abrupt as well as protracted loss patterns are detected with rather high probability when they occur after a zero-loss period.
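The cumulative-sum idea behind this examination can be sketched with a standard one-sided CUSUM on standardized residuals (the reference value k and decision limit h are illustrative; this is not the paper's exact CUMUFR statistic or its power-one sequential test):

```python
def cusum(residuals, k=0.5, h=4.0):
    # One-sided CUSUM: accumulate excesses over the reference value k,
    # reset at zero, and alarm when the sum crosses the decision limit h.
    s, path = 0.0, []
    for x in residuals:
        s = max(0.0, s + x - k)
        path.append(s)
    alarm = next((i for i, v in enumerate(path) if v > h), None)
    return alarm, path

# A protracted loss of 1.0 per period, starting at balance period 20
# after a zero-loss run, accumulates and eventually triggers the alarm:
alarm, _ = cusum([0.0] * 20 + [1.0] * 10)
```

This captures the qualitative behavior described above: abrupt losses cross the limit quickly, while protracted small losses accumulate until the sum exceeds it.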
Screening Test for Detection of Leptinotarsa decemlineata (Say) Sensitivity to Insecticides
Directory of Open Access Journals (Sweden)
Dušanka Inđić
2012-01-01
Full Text Available In 2009, the sensitivity of 15 field populations of Colorado potato beetle (Leptinotarsa decemlineata Say; CPB) to chlorpyrifos, cypermethrin, thiamethoxam and fipronil, the four insecticides mostly used for its control in Serbia, was assessed. A screening test that allows rapid assessment of the sensitivity of overwintered adults to insecticides was performed. Insecticides were applied at label rates, and at two-, five- and ten-fold higher rates, by the soaking method (5 sec). Mortality was assessed after 72 h. Of the 15 monitored populations of CPB, two were sensitive to the label rate of chlorpyrifos, one was slightly resistant, 11 were resistant and one population was highly resistant. Concerning cypermethrin, two populations were sensitive, two slightly resistant, five resistant and six highly resistant. Twelve populations were highly sensitive to the thiamethoxam label rate, while three were sensitive. In the case of fipronil applied at label rate, two populations were highly sensitive, six sensitive, one slightly resistant and six resistant. The application of insecticides at higher rates (2-, 5- and 10-fold), which is justified only in bioassays, provided a rapid insight into the sensitivity of field populations of CPB to insecticides.
Leadership: Underrepresentation of Women in Higher Education
Krause, Susan Faye
2017-01-01
In 2014, statisticians at the Bureau of Labor Statistics found that women constitute 45% of the workforce. Women's participation in high-level organizational leadership roles remains low. In higher education, women's representation in top-ranking leadership roles is less than one-third at colleges and universities. The conceptual framework for…
Delphi Decision Methods in Higher Education Administration.
Judd, Robert C.
This document describes and comments on the extent of use of the Delphi method in higher education decision making. Delphi is characterized by: (1) anonymity of response; (2) multiple iterations; (3) convergence of the distribution of answers; and (4) statistical group response (median, interquartile range) preserving intact a distribution that…
Fordyce, James A
2010-07-23
Phylogenetic hypotheses are increasingly being used to elucidate historical patterns of diversification rate-variation. Hypothesis testing is often conducted by comparing the observed vector of branching times to a null, pure-birth expectation. A popular method for inferring a decrease in speciation rate, which might suggest an early burst of diversification followed by a decrease in diversification rate, is the gamma statistic. Using simulations under varying conditions, I examine the sensitivity of gamma to the distribution of the most recent branching times. Using tree deviation, an exploratory data analysis tool for lineages-through-time plots, I identified trees with a significant gamma statistic that do not appear to have the characteristic early accumulation of lineages consistent with an early, rapid rate of cladogenesis. I further investigated the sensitivity of the gamma statistic to recent diversification by examining the consequences of failing to simulate the full time interval following the most recent cladogenic event. The power of gamma to detect rate decrease at varying times was assessed for simulated trees with an initial high rate of diversification followed by a relatively low rate. The gamma statistic is extraordinarily sensitive to recent diversification rates, and does not necessarily detect early bursts of diversification. This was true for trees of various sizes and completeness of taxon sampling. The gamma statistic had greater power to detect recent diversification rate decreases compared to early bursts of diversification. Caution should be exercised when interpreting the gamma statistic as an indication of early, rapid diversification.
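The gamma statistic of Pybus and Harvey can be computed directly from a tree's internode intervals. A minimal sketch (the interval ordering convention is an assumption: intervals[k-2] is the waiting time during which k lineages exist, for k = 2..n):

```python
import math

def gamma_statistic(intervals):
    # Pybus & Harvey's gamma from internode intervals g_2..g_n.
    n = len(intervals) + 1                        # number of tips
    weighted = [k * g for k, g in zip(range(2, n + 1), intervals)]
    total = sum(weighted)                         # total tree "length" T
    cum, inner = 0.0, 0.0
    for w in weighted[:-1]:                       # partial sums T_i, i = 2..n-1
        cum += w
        inner += cum
    return ((inner / (n - 2)) - total / 2.0) / (
        total * math.sqrt(1.0 / (12.0 * (n - 2))))
```

Negative gamma indicates internal nodes concentrated near the root (an apparent slowdown); the abstract's point is that long recent intervals dominate this value, so a negative gamma need not reflect a genuine early burst.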
Directory of Open Access Journals (Sweden)
James A Fordyce
Full Text Available BACKGROUND: Phylogenetic hypotheses are increasingly being used to elucidate historical patterns of diversification rate-variation. Hypothesis testing is often conducted by comparing the observed vector of branching times to a null, pure-birth expectation. A popular method for inferring a decrease in speciation rate, which might suggest an early burst of diversification followed by a decrease in diversification rate, is the gamma statistic. METHODOLOGY: Using simulations under varying conditions, I examine the sensitivity of gamma to the distribution of the most recent branching times. Using tree deviation, an exploratory data analysis tool for lineages-through-time plots, I identified trees with a significant gamma statistic that do not appear to have the characteristic early accumulation of lineages consistent with an early, rapid rate of cladogenesis. I further investigated the sensitivity of the gamma statistic to recent diversification by examining the consequences of failing to simulate the full time interval following the most recent cladogenic event. The power of gamma to detect rate decrease at varying times was assessed for simulated trees with an initial high rate of diversification followed by a relatively low rate. CONCLUSIONS: The gamma statistic is extraordinarily sensitive to recent diversification rates, and does not necessarily detect early bursts of diversification. This was true for trees of various sizes and completeness of taxon sampling. The gamma statistic had greater power to detect recent diversification rate decreases compared to early bursts of diversification. Caution should be exercised when interpreting the gamma statistic as an indication of early, rapid diversification.
Ing, Alex; Schwarzbauer, Christian
2014-01-01
Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster-based methods--the cluster size statistic (CSS) and cluster mass statistic (CMS)--are introduced to control the family-wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster-based methods used in conventional task-based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73% N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family-wise error correction carried out at the individual connection level exhibited no significant changes in connectivity.
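The cluster-based logic can be illustrated in one dimension: threshold a per-connection statistic, measure the largest supra-threshold cluster, and build its null distribution by sign-flipping subjects. This is a toy sketch using contiguity as adjacency, whereas a real connectome analysis needs graph neighborhoods; all names and parameters are illustrative:

```python
import numpy as np

def max_cluster_size(stat, thresh):
    # Largest run of contiguous supra-threshold values (1-D "clusters").
    best = run = 0
    for s in stat:
        run = run + 1 if s > thresh else 0
        best = max(best, run)
    return best

def cluster_size_test(diff, thresh=2.0, n_perm=500, seed=0):
    # diff: subjects x connections array of condition differences.
    rng = np.random.default_rng(seed)
    nsub = diff.shape[0]

    def tmap(d):
        return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(nsub))

    obs = max_cluster_size(tmap(diff), thresh)
    # Permutation null of the maximal cluster size via subject sign flips.
    null = [max_cluster_size(tmap(diff * rng.choice([-1.0, 1.0], (nsub, 1))),
                             thresh) for _ in range(n_perm)]
    p = (1 + sum(v >= obs for v in null)) / (n_perm + 1)
    return obs, p

rng = np.random.default_rng(42)
diff = rng.standard_normal((12, 100))
diff[:, 10:30] += 3.0            # one genuine block of altered connections
obs, p = cluster_size_test(diff)
```

Because only the maximal cluster is compared against its permutation distribution, the family-wise error rate is controlled without a per-connection Bonferroni-style penalty, which is why such methods stay sensitive at connectome scale.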
Directory of Open Access Journals (Sweden)
Anita Lindmark
Full Text Available When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical
Lindmark, Anita; van Rompaye, Bart; Goetghebeur, Els; Glader, Eva-Lotta; Eriksson, Marie
2016-01-01
When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical confidence when
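The flagging rule can be sketched with an exact one-sided binomial test against a benchmark risk. This is a simplification: the paper works with case-mix adjusted risks rather than raw proportions, and the function name and numbers here are illustrative:

```python
from scipy import stats

def outlying(events, n, benchmark_risk, confidence=0.95):
    # Flag the hospital if its event proportion exceeds the benchmark
    # with the requested one-sided statistical confidence.
    test = stats.binomtest(events, n, benchmark_risk, alternative='greater')
    return test.pvalue < 1 - confidence
```

With a 25% benchmark risk, 40 events in 100 patients is flagged while 27 in 100 is not; lowering the required confidence flags more hospitals (higher sensitivity, lower specificity), mirroring the trade-off the simulation study quantifies.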
Metals Are Important Contact Sensitizers: An Experience from Lithuania
Directory of Open Access Journals (Sweden)
Kotryna Linauskienė
2017-01-01
Background. Metals are very frequent sensitizers causing contact allergy and allergic contact dermatitis worldwide; up-to-date data based on patch test results have proved useful for identifying the problem. Objectives. In this retrospective study the prevalence of contact allergy to metals (nickel, chromium, palladium, gold, cobalt, and titanium) in Lithuania is analysed. Patients/Methods. Clinical and patch test data of 546 patients patch tested in 2014–2016 in Vilnius University Hospital Santariskiu Klinikos were analysed and compared with previously published data. Results. Almost a third of tested patients (29.56%) were sensitized to nickel. Younger women were more often sensitized to nickel than older ones (36% versus 22.8%, p=0.0011). Women were significantly more often sensitized to nickel than men (33% versus 6.1%, p<0.0001). Younger patients were more often sensitized to cobalt (11.6% versus 5.7%, p=0.0183). Sensitization to cobalt was related to sensitization to nickel (p<0.0001). Face dermatitis and oral discomfort were related to gold allergy (28% versus 6.9% for dermatitis of other sites, p<0.0001). Older patients were patch test positive to gold(I) sodium thiosulfate statistically significantly more often than younger ones (44.44% versus 21.21%, p=0.0281). Conclusions. Nickel, gold, cobalt, and chromium are the leading metal sensitizers in Lithuania. Cobalt sensitization is often accompanied by sensitization to nickel. The sensitivity rates to palladium and nickel indicate possible cross-reactivity. No sensitization to titanium was found.
Sensitivity analysis of a low-level waste environmental transport code
International Nuclear Information System (INIS)
Hiromoto, G.
1989-01-01
Results are presented from a sensitivity analysis of a computer code designed to simulate the environmental transport of radionuclides buried at shallow-land waste repositories. A sensitivity analysis methodology, based on response surface replacement and statistical sensitivity estimators, was developed to assess the relative importance of the input parameters to the model output. A response surface replacement for the model was constructed by stepwise regression, after sampling input vectors from the ranges and distributions of the input variables and running the code to generate the associated output data. Sensitivity estimators were computed using partial rank correlation coefficients and standardized rank regression coefficients. The results showed that the techniques employed in this work provide a feasible means to perform a sensitivity analysis of general nonlinear environmental radionuclide transport models. (author)
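The standardized rank regression coefficients mentioned above can be sketched as follows. This is a minimal illustrative implementation of the general technique, not the code used in the study; the function name and test model are assumptions:

```python
import numpy as np
from scipy.stats import rankdata

def srrc(X, y):
    """Standardized rank regression coefficients for each column of X.

    Rank-transform inputs and output, standardize the ranks, and fit a
    linear model; the coefficients rank the inputs by influence even when
    the input-output relation is nonlinear but monotonic.
    """
    Xr = np.column_stack([rankdata(c) for c in X.T])
    yr = rankdata(y)
    Xs = (Xr - Xr.mean(axis=0)) / Xr.std(axis=0)
    ys = (yr - yr.mean()) / yr.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)  # no intercept needed: zero-mean ranks
    return beta
```

On a toy model y = 3*x1 + x2 + noise, the coefficient for x1 dominates, x2 is moderate, and an inactive x3 stays near zero.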
Non-sky-averaged sensitivity curves for space-based gravitational-wave observatories
International Nuclear Information System (INIS)
Vallisneri, Michele; Galley, Chad R
2012-01-01
The signal-to-noise ratio (SNR) is used in gravitational-wave observations as the basic figure of merit for detection confidence and, together with the Fisher matrix, for the amount of physical information that can be extracted from a detected signal. SNRs are usually computed from a sensitivity curve, which describes the gravitational-wave amplitude needed by a monochromatic source of given frequency to achieve a threshold SNR. Although the term 'sensitivity' is used loosely to refer to the detector's noise spectral density, the two quantities are not the same: the sensitivity also includes the frequency- and orientation-dependent response of the detector to gravitational waves and takes into account the duration of observation. For interferometric space-based detectors similar to LISA, which are sensitive to long-lived signals and have constantly changing position and orientation, exact SNRs need to be computed on a source-by-source basis. For convenience, most authors prefer to work with sky-averaged sensitivities, accepting inaccurate SNRs for individual sources and giving up control over the statistical distribution of SNRs for source populations. In this paper, we describe a straightforward end-to-end recipe to compute the non-sky-averaged sensitivity of interferometric space-based detectors of any geometry. This recipe includes the effects of spacecraft motion and of seasonal variations in the partially subtracted confusion foreground from Galactic binaries, and it can be used to generate a sampling distribution of sensitivities for a given source population. In effect, we derive error bars for the sky-averaged sensitivity curve, which provide a stringent statistical interpretation for previously unqualified statements about sky-averaged SNRs. As a worked-out example, we consider isotropic and Galactic-disk populations of monochromatic sources, as observed with the 'classic LISA' configuration. We confirm that the (standard) inverse-rms average sensitivity
Contribution to the sample mean plot for graphical and numerical sensitivity analysis
International Nuclear Information System (INIS)
Bolado-Lavin, R.; Castaings, W.; Tarantola, S.
2009-01-01
The contribution to the sample mean plot, originally proposed by Sinclair, is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state dependent parameter meta-modelling
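The idea behind the contribution-to-the-sample-mean plot can be sketched as follows, assuming a positive-valued output; the function names and the maximum-deviation summary are illustrative choices, not the statistical test proposed in the paper:

```python
import numpy as np

def csm_curve(x, y):
    """Cumulative share of the sample mean of y, traversed in order of x.

    For an influential input the curve departs from the diagonal; for an
    inactive input it stays close to it. Assumes y > 0 so shares are monotone.
    """
    order = np.argsort(x)
    frac = np.arange(1, len(x) + 1) / len(x)   # empirical quantile of x
    contrib = np.cumsum(y[order]) / y.sum()    # cumulative contribution to the mean
    return frac, contrib

def csm_distance(x, y):
    """Maximum deviation from the diagonal (a simple influence score)."""
    frac, contrib = csm_curve(x, y)
    return np.abs(contrib - frac).max()
```

For y = x^2 with x uniform on (0, 1), high-x samples carry most of the mean, so the curve sags well below the diagonal; an independent dummy input yields a near-diagonal curve.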
Statistical evaluation and measuring strategy for extremely small line shifts
International Nuclear Information System (INIS)
Hansen, P.G.
1978-01-01
For a measuring situation limited by counting statistics, but where the level of precision is such that possible systematic errors are a major concern, it is proposed to determine the position of a spectral line from a measured line segment by applying a bias correction to the centre of gravity of the segment. This procedure is statistically highly efficient and not sensitive to small errors in assumptions about the line shape. The counting strategy for an instrument that takes data point by point is also considered. It is shown that an optimum ("two-point") strategy exists; a scan of the central part of the line is 68% efficient by this standard. (Auth.)
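The bias-corrected centre-of-gravity idea can be illustrated with a toy Gaussian line measured point by point under Poisson counting noise. Everything here (line shape, segment placement, count level) is an assumed example, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
center, width = 0.0, 1.0                         # hypothetical true line position and width
line = lambda x, c: np.exp(-0.5 * ((x - c) / width) ** 2)

def cog(xs, w):
    """Centre of gravity of weights w at positions xs."""
    return (xs * w).sum() / w.sum()

xs = np.linspace(-0.5, 1.0, 16)                  # deliberately off-centre segment
counts = rng.poisson(1e4 * line(xs, center))     # Poisson-limited measurements

raw = cog(xs, counts)                            # pulled towards the segment middle
bias = cog(xs, line(xs, 0.0))                    # offset predicted by the assumed shape
estimate = raw - bias                            # bias-corrected line position
```

In practice the assumed centre in the bias term would be iterated, but even one step removes most of the segment-induced offset while keeping the statistical efficiency of the centre of gravity.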
Andrew G. Bunn; Esther Jansma; Mikko Korpela; Robert D. Westfall; James Baldwin
2013-01-01
Mean sensitivity (ζ) continues to be used in dendrochronology despite a literature that shows it to be of questionable value in describing the properties of a time series. We simulate first-order autoregressive models with known parameters and show that ζ is a function of variance and autocorrelation of a time series. We then use 500 random tree-ring...
Comparison of Measures of Organizational Effectiveness in U.K. Higher Education.
Lysons, Art; Hatherly, David; Mitchell, David A.
1998-01-01
Research on the organizational effectiveness of higher education institutions in the United Kingdom and Australia is compared with research on United States higher education. Focus is on identification of and statistical discrimination between institution types, based on faculty and administrator perceptions and values. (MSE)
Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics
International Nuclear Information System (INIS)
Prix, Reinhard; Krishnan, Badri
2009-01-01
We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.
Higher Education in Non-Standard Wage Contracts
Rosti, Luisa; Chelli, Francesco
2012-01-01
Purpose: The purpose of this paper is to verify whether higher education increases the likelihood of young Italian workers moving from non-standard to standard wage contracts. Design/methodology/approach: The authors exploit a data set on labour market flows, produced by the Italian National Statistical Office, by interviewing about 85,000…
Additional methodology development for statistical evaluation of reactor safety analyses
International Nuclear Information System (INIS)
Marshall, J.A.; Shore, R.W.; Chay, S.C.; Mazumdar, M.
1977-03-01
The project described is motivated by the desire for methods to quantify uncertainties and to identify conservatisms in nuclear power plant safety analysis. The report examines statistical methods useful for assessing the probability distribution of output response from complex nuclear computer codes, considers sensitivity analysis and several other topics, and also sets the path for using the developed methods for realistic assessment of the design basis accident
Simon, Heather; Baker, Kirk R; Akhtar, Farhan; Napelenok, Sergey L; Possiel, Norm; Wells, Benjamin; Timin, Brian
2013-03-05
In setting primary ambient air quality standards, the EPA's responsibility under the law is to establish standards that protect public health. As part of the current review of the ozone National Ambient Air Quality Standard (NAAQS), the US EPA evaluated the health exposure and risks associated with ambient ozone pollution using a statistical approach to adjust recent air quality to simulate just meeting the current standard level, without specifying emission control strategies. One drawback of this purely statistical concentration rollback approach is that it does not take into account spatial and temporal heterogeneity of ozone response to emissions changes. The application of the higher-order decoupled direct method (HDDM) in the community multiscale air quality (CMAQ) model is discussed here to provide an example of a methodology that could incorporate this variability into the risk assessment analyses. Because this approach includes a full representation of the chemical production and physical transport of ozone in the atmosphere, it does not require assumed background concentrations, which have been applied to constrain estimates from past statistical techniques. The CMAQ-HDDM adjustment approach is extended to measured ozone concentrations by determining typical sensitivities at each monitor location and hour of the day based on a linear relationship between first-order sensitivities and hourly ozone values. This approach is demonstrated by modeling ozone responses for monitor locations in Detroit and Charlotte to domain-wide reductions in anthropogenic NOx and VOCs emissions. As seen in previous studies, ozone response calculated using HDDM compared well to brute-force emissions changes up to approximately a 50% reduction in emissions. A new stepwise approach is developed here to apply this method to emissions reductions beyond 50% allowing for the simulation of more stringent reductions in ozone concentrations. Compared to previous rollback methods, this
Directory of Open Access Journals (Sweden)
Renshaw Andrew
2009-01-01
Background: Measuring the sensitivity of screening in gynecologic cytology in real life is problematic. However, other quality measures may correlate with sensitivity, including the atypical squamous cells (ASC)/squamous intraepithelial lesion (SIL) ratio. Whether these other measures can function as "surrogate indicators" for sensitivity and improve the assessment of sensitivity in the laboratory is not known. Materials and Methods: We compared multiple quality measures with true screening sensitivity in a variety of situations. Results: The abnormal rate, ASC rate, and ASC/SIL ratio were all highly correlated (r = .83 or greater) with sensitivity when the overall laboratory sensitivity was low (85%) but became less correlated (.64 or less) or uncorrelated when the screening sensitivity was higher (88% or 95%, respectively). Sensitivity was more highly correlated with the abnormal rate than the ASC/SIL ratio at low screening sensitivity. While thresholds could be set that were highly sensitive and specific for suboptimal screening, these thresholds were often less than one standard deviation away from the mean. Conclusion: The correlation of the abnormal rate and the ASC/SIL ratio with sensitivity depends on overall sensitivity. Standards to define minimum screening sensitivity can be defined, but these standards are relatively narrow. These features may limit the utility of these quality measures as surrogates for sensitivity.
Steinbrink, Nicholas M. N.; Glück, Ferenc; Heizmann, Florian; Kleesiek, Marco; Valerius, Kathrin; Weinheimer, Christian; Hannestad, Steen
2017-06-01
The KATRIN experiment aims to determine the absolute neutrino mass by measuring the endpoint region of the tritium β-spectrum. As a large-scale experiment with a sharp energy resolution, high source luminosity and low background it may also be capable of testing certain theories of neutrino interactions beyond the standard model (SM). An example of a non-SM interaction is right-handed currents mediated by right-handed W bosons in the left-right symmetric model (LRSM). In this extension of the SM, an additional SU(2)R symmetry in the high-energy limit is introduced, which naturally includes sterile neutrinos and predicts the seesaw mechanism. In tritium β decay, this leads to an additional term from interference between left- and right-handed interactions, which enhances or suppresses certain regions near the endpoint of the beta spectrum. In this work, the sensitivity of KATRIN to right-handed currents is estimated for the scenario of a light sterile neutrino with a mass of some eV. This analysis has been performed in a Bayesian framework using Markov chain Monte Carlo (MCMC). The simulations show that, in principle, KATRIN will be able to set sterile neutrino mass-dependent limits on the interference strength. The sensitivity is significantly increased if the Q value of the β decay can be sufficiently constrained. However, the sensitivity is not high enough to improve current upper limits from right-handed W boson searches at the LHC.
International Nuclear Information System (INIS)
Eliazar, Iddo
2017-01-01
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
International Nuclear Information System (INIS)
2000-10-01
Denmark's gross energy consumption increased in 1999 by almost 0.5%, while CO2 emissions decreased by 1.4%. Energy Statistics 1999 shows that energy consumption in households and the production industries was the same as the year before. Consumption in the trade and service sectors and for transportation increased. The Danish production of petroleum, natural gas and renewable energy increased in 1999 to 1000 PJ, an increase of 17% compared to 1998. The degree of self-supply increased to 118%, which means that energy production was 18% higher than energy consumption in 1999. This was primarily due to a very high, 26% increase in petroleum production. (LN)
Online neural monitoring of statistical learning.
Batterink, Laura J; Paller, Ken A
2017-05-01
The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the reaction time (RT) task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
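The entrainment measure described above — neural power at the word rate relative to the syllable rate — can be illustrated with a toy signal. The sampling rate, stimulus rates, amplitudes, and noise level below are all assumed for illustration; this is not the authors' EEG pipeline:

```python
import numpy as np

fs = 100.0                       # sampling rate in Hz (assumed)
syll_rate, word_rate = 3.0, 1.0  # 3 syllables/s -> trisyllabic words at 1/s
t = np.arange(0, 60, 1 / fs)     # one minute of simulated signal

# Toy "EEG": a response at the syllable rate plus a word-rate component
# that would emerge once the trisyllabic structure has been learned.
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * syll_rate * t)
       + 0.5 * np.sin(2 * np.pi * word_rate * t)
       + 0.3 * rng.standard_normal(t.size))

spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f, bw=0.05):
    """Spectral power summed over a narrow band around frequency f."""
    return spec[np.abs(freqs - f) < bw].sum()

# Word/syllable entrainment ratio: appreciably above zero only when
# word-rate structure is present in the signal.
ratio = power_at(word_rate) / power_at(syll_rate)
```

In a random syllable stream the word-rate component would be absent and the ratio would collapse towards the noise floor, which is the contrast the study exploits.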
Trajectories in higher education: ProUni in focus
Felicetti,Vera Lucia; Cabrera,Alberto F.
2017-01-01
Abstract Trajectories in higher education and the University for All Program (ProUni) are the central theme of this paper. The research question was: to what extent were certain factors experienced during university perceived as difficulties in the academic trajectories of ProUni and non-ProUni graduates? The approach was quantitative with an explanatory goal. Descriptive and inferential statistics were used in the data analysis. The research subjects were 197 higher education graduates from a Southern Brazil ...
International Nuclear Information System (INIS)
Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe
2013-01-01
Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physics modelling and safety analysis capability, based on a code package consisting of best-estimate 3D neutronics (PANTHER), system thermal-hydraulics (RELAP5), core sub-channel thermal-hydraulics (COBRA-3C), and fuel thermo-mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies has been developed to perform and to license reactor safety analyses and core reload designs, based on the deterministic bounding approach. Following recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physics modelling and licensing safety analyses. In this paper, the TE multi-physics modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod code benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
Statistical mechanics for a class of quantum statistics
International Nuclear Information System (INIS)
Isakov, S.B.
1994-01-01
Generalized statistical distributions for identical particles are introduced for the case where filling a single-particle quantum state by particles depends on filling states of different momenta. The system of one-dimensional bosons with a two-body potential that can be solved by means of the thermodynamic Bethe ansatz is shown to be equivalent thermodynamically to a system of free particles obeying statistical distributions of the above class. The quantum statistics arising in this way are completely determined by the two-particle scattering phases of the corresponding interacting systems. An equation determining the statistical distributions for these statistics is derived
Higher cigarette prices influence cigarette purchase patterns.
Hyland, A; Bauer, J E; Li, Q; Abrams, S M; Higbee, C; Peppone, L; Cummings, K M
2005-04-01
To examine cigarette purchasing patterns of current smokers and to determine the effects of cigarette price on use of cheaper sources, discount/generic cigarettes, and coupons. Higher cigarette prices result in decreased cigarette consumption, but price sensitive smokers may seek lower priced or tax-free cigarette sources, especially if they are readily available. This price avoidance behaviour costs states excise tax money and dampens the health impact of higher cigarette prices. Telephone survey data from 3602 US smokers who were originally in the COMMIT (community intervention trial for smoking cessation) study were analysed to assess cigarette purchase patterns, use of discount/generic cigarettes, and use of coupons. 59% reported engaging in a high price avoidance strategy, including 34% who regularly purchase from a low or untaxed venue, 28% who smoke a discount/generic cigarette brand, and 18% who report using cigarette coupons more frequently than they did five years ago. The report of engaging in a price avoidance strategy was associated with living within 40 miles of a state or Indian reservation with lower cigarette excise taxes, higher average cigarette consumption, white, non-Hispanic race/ethnicity, and female sex. Data from this study indicate that most smokers are price sensitive and seek out measures to purchase less expensive cigarettes, which may decrease future cessation efforts.
Testing the statistical isotropy of large scale structure with multipole vectors
International Nuclear Information System (INIS)
Zunckel, Caroline; Huterer, Dragan; Starkman, Glenn D.
2011-01-01
A fundamental assumption in cosmology is that of statistical isotropy - that the Universe, on average, looks the same in every direction in the sky. Statistical isotropy has recently been tested stringently using cosmic microwave background data, leading to intriguing results on large angular scales. Here we apply some of the same techniques used in the cosmic microwave background to the distribution of galaxies on the sky. Using the multipole vector approach, where each multipole in the harmonic decomposition of galaxy density field is described by unit vectors and an amplitude, we lay out the basic formalism of how to reconstruct the multipole vectors and their statistics out of galaxy survey catalogs. We apply the algorithm to synthetic galaxy maps, and study the sensitivity of the multipole vector reconstruction accuracy to the density, depth, sky coverage, and pixelization of galaxy catalog maps.
Photon statistical properties of photon-added two-mode squeezed coherent states
International Nuclear Information System (INIS)
Xu Xue-Fen; Wang Shuai; Tang Bin
2014-01-01
We investigate photon statistical properties of the multiple-photon-added two-mode squeezed coherent states (PA-TMSCS). We find that the photon statistical properties are sensitive to the compound phase involved in the TMSCS. Our numerical analyses show that the photon addition can enhance the cross-correlation and anti-bunching effects of the PA-TMSCS. Compared with that of the TMSCS, the photon number distribution of the PA-TMSCS is modulated by a factor that is a monotonically increasing function of the number of photons added to each mode; further, the photon addition essentially shifts the photon number distribution.
Statistical moments of the Strehl ratio
Yaitskova, Natalia; Esselborn, Michael; Gladysz, Szymon
2012-07-01
Knowledge of the statistical characteristics of the Strehl ratio is essential for the performance assessment of existing and future adaptive optics systems. For full assessment not only the mean value of the Strehl ratio but also higher statistical moments are important. Variance relates to the stability of an image, and skewness reflects the chance that a set of short-exposure images contains more or fewer images whose quality exceeds the mean. Skewness is a central parameter in the domain of lucky imaging. We present a rigorous theory for the calculation of the mean value, the variance and the skewness of the Strehl ratio. In our approach we represent the residual wavefront as being formed by independent cells. The level of the adaptive optics correction defines the number of the cells and the variance of the cells, which are the two main parameters of our theory. The deliverables are the values of the three moments as functions of the correction level. We make no further assumptions except for the statistical independence of the cells.
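The independent-cell picture above can be probed numerically: draw i.i.d. Gaussian phases for the cells, form the instantaneous Strehl ratio as the squared modulus of the mean complex amplitude, and take empirical moments. This Monte Carlo sketch (cell count, phase variance, and trial count are assumed parameters) is a cross-check of the idea, not the paper's analytic theory:

```python
import numpy as np
from scipy.stats import skew

def strehl_moments(n_cells, sigma2, n_trials=20000, seed=1):
    """Monte Carlo mean, variance, and skewness of the Strehl ratio
    for a residual wavefront of n_cells independent Gaussian phase cells
    with per-cell phase variance sigma2 (rad^2)."""
    rng = np.random.default_rng(seed)
    phases = rng.normal(0.0, np.sqrt(sigma2), size=(n_trials, n_cells))
    # Instantaneous Strehl: squared modulus of the mean complex amplitude.
    s = np.abs(np.exp(1j * phases).mean(axis=1)) ** 2
    return s.mean(), s.var(), skew(s)

m_s, v_s, g_s = strehl_moments(n_cells=50, sigma2=0.3)
# For good correction the mean should track the extended Marechal
# approximation, mean Strehl ~ exp(-sigma2).
```

Sweeping sigma2 (the correction level) then yields all three moments as functions of correction quality, mirroring the deliverables described in the abstract.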
An effective algorithm for computing global sensitivity indices (EASI)
International Nuclear Information System (INIS)
Plischke, Elmar
2010-01-01
We present an algorithm named EASI that estimates first order sensitivity indices from given data using Fast Fourier Transformations. Hence it can be used as a post-processing module for pre-computed model evaluations. Ideas for the estimation of higher order sensitivity indices are also discussed.
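The core of EASI — reorder the samples so the factor traces a periodic sweep, then read the first-order index off the Fourier spectrum of the reordered output — can be sketched as follows. This is a minimal sketch of the published idea; the harmonic cutoff M and test model are assumptions, and it is not the reference implementation:

```python
import numpy as np

def easi(x, y, M=6):
    """EASI estimate of the first-order sensitivity index of y w.r.t. x.

    Sort-and-shuffle: even-ranked samples ascending, odd-ranked descending,
    so x traces one triangular sweep and any first-order effect of x makes
    y approximately periodic in the sample index.
    """
    idx = np.argsort(x)
    perm = np.concatenate([idx[0::2], idx[1::2][::-1]])
    yr = y[perm]
    spec = np.abs(np.fft.rfft(yr - yr.mean())) ** 2
    total = spec[1:].sum()
    # Share of spectral power in the first M harmonics estimates S1.
    return spec[1:M + 1].sum() / total
```

Because it only post-processes given (x, y) samples, the same pre-computed model evaluations can be reused for every input factor, which is the selling point noted in the abstract.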
Australian Indigenous Higher Education: Politics, Policy and Representation
Wilson, Katie; Wilks, Judith
2015-01-01
The growth of Aboriginal and Torres Strait Islander participation in Australian higher education from 1959 to the present is notable statistically, but below population parity. Distinct patterns in government policy-making and programme development, inconsistent funding and political influences, together with Indigenous representation during the…
Gilardini, Luisa; Vallone, Luciana; Cottafava, Raffaella; Redaelli, Gabriella; Croci, Marina; Conti, Antonio; Pasqualinotto, Lucia; Invitti, Cecilia
2012-01-01
To investigate the effects of a 3-month lifestyle intervention on insulin sensitivity and its related cardiometabolic factors in obese patients. Anthropometry, body composition, oral glucose tolerance test, lipids, alanine aminotransferase, insulin sensitivity (insulinogenic index (ISI), homeostasis model assessment, β-cell performance (disposition index)) were evaluated in 263 obese women and 93 obese men before and after 3 months of hypocaloric low fat/high protein diet associated with physical activity 30 min/day. Patients were divided into 3 groups according to the intervention-induced ISI changes: group 1 (decrease), group 2 (stability) and group 3 (increase). Insulin sensitivity and the disposition index were significantly higher before the intervention in group 1 than in group 3. BMI, waist circumference, and fat mass significantly decreased in groups 1 and 3 in both sexes. β-cell performance decreased in group 1 and increased in group 3. Metabolic variables improved in group 3, whereas glucose levels increased in women of group 1. The post-intervention insulin sensitivity was lower in group 1 than in group 3. Lifestyle intervention induces changes in insulin sensitivity and metabolic factors that depend on the pre-intervention degree of insulin sensitivity. Weight loss leads to metabolic benefits in insulin-resistant, obese patients, whereas it may paradoxically worsen the metabolic conditions in the insulin-sensitive phenotype of obesity. Copyright © 2012 S. Karger GmbH, Freiburg.
An introduction to Bayesian statistics in health psychology.
Depaoli, Sarah; Rus, Holly M; Clifton, James P; van de Schoot, Rens; Tiemensma, Jitske
2017-09-01
The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of health psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation models, latent growth curve (and mixture) models, and hierarchical linear models. Likewise, Bayesian methods can be used with small sample sizes since they do not rely on large sample theory. In this article, we discuss several important components of Bayesian statistics as they relate to health-based inquiries. We discuss the incorporation and impact of prior knowledge into the estimation process and the different components of the analysis that should be reported in an article. We present an example implementing Bayesian estimation in the context of blood pressure changes after participants experienced an acute stressor. We conclude with final thoughts on the implementation of Bayesian statistics in health psychology, including suggestions for reviewing Bayesian manuscripts and grant proposals. We have also included an extensive amount of online supplementary material to complement the content presented here, including Bayesian examples using many different software programmes and an extensive sensitivity analysis examining the impact of priors.
Noise removing in encrypted color images by statistical analysis
Islam, N.; Puech, W.
2012-03-01
Cryptographic techniques are used to secure confidential data from unauthorized access, but these techniques are very sensitive to noise: a single bit change in the encrypted data can have a catastrophic impact on the decrypted data. This paper addresses the problem of removing bit errors in visual data encrypted with the AES algorithm in CBC mode. In order to remove the noise, a method is proposed which is based on the statistical analysis of each block during decryption. The proposed method exploits local statistics of the visual data and the confusion/diffusion properties of the encryption algorithm to remove the errors. Experimental results show that the proposed method can be used at the receiving end as a possible solution for noise removal in visual data in the encrypted domain.
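The "catastrophic impact" of a single flipped bit has a precise shape in CBC: decryption computes P_i = D(C_i) XOR C_{i-1}, so flipping one bit of ciphertext block i scrambles all of plaintext block i and flips exactly that one bit in block i+1. The toy below demonstrates this with a hypothetical 4-round hash-based Feistel cipher standing in for AES (an illustration only, not a secure cipher).

```python
import hashlib

BLOCK = 16  # bytes per block, AES-sized

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def _round(key, i, half):
    # hash-based round function; a stand-in, NOT cryptographically vetted
    return hashlib.sha256(key + bytes([i]) + half).digest()[:BLOCK // 2]

def encrypt_block(key, blk):
    l, r = blk[:BLOCK // 2], blk[BLOCK // 2:]
    for i in range(4):
        l, r = r, _xor(l, _round(key, i, r))
    return l + r

def decrypt_block(key, blk):
    l, r = blk[:BLOCK // 2], blk[BLOCK // 2:]
    for i in reversed(range(4)):
        l, r = _xor(r, _round(key, i, l)), l
    return l + r

def cbc_encrypt(key, iv, pt):
    out, prev = b"", iv
    for i in range(0, len(pt), BLOCK):
        prev = encrypt_block(key, _xor(pt[i:i + BLOCK], prev))
        out += prev
    return out

def cbc_decrypt(key, iv, ct):
    out, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        out += _xor(decrypt_block(key, ct[i:i + BLOCK]), prev)
        prev = ct[i:i + BLOCK]
    return out
```

The localized propagation pattern (one block scrambled, one bit flipped in the next) is exactly what lets a block-wise statistical analysis of the decrypted data locate and correct the error.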
Statistical MOSFET Parameter Extraction with Parameter Selection for Minimal Point Measurement
Directory of Open Access Journals (Sweden)
Marga Alisjahbana
2013-11-01
We present a method to statistically extract MOSFET model parameters from a minimal number of transistor I(V) characteristic curve measurements, taken during fabrication process monitoring. It includes a sensitivity analysis of the model, test/measurement point selection, and a parameter extraction experiment on the process data. The actual extraction is based on a linear error model, the sensitivity of the MOSFET model with respect to the parameters, and Newton-Raphson iterations. Simulated results showed good accuracy of parameter extraction and I(V) curve fit for parameter deviations of up to 20% from nominal values, including for a process shift of 10% from nominal.
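The extraction loop described (linear error model, model sensitivities, Newton-Raphson updates) amounts to Gauss-Newton iteration. Below is a minimal sketch for a hypothetical square-law saturation model Id = K·(Vgs - Vth)², which is not the paper's actual MOSFET model; the Jacobian columns are the sensitivities of the model with respect to each parameter.

```python
import numpy as np

def extract_params(vgs, id_meas, k0, vth0, iters=50):
    """Gauss-Newton extraction of (K, Vth) from I(V) points.
    Each step solves the linear error model J @ dp ~= residual in the
    least-squares sense, where J holds the parameter sensitivities."""
    p = np.array([k0, vth0], dtype=float)
    for _ in range(iters):
        k, vth = p
        resid = id_meas - k * (vgs - vth) ** 2
        J = np.column_stack([(vgs - vth) ** 2,        # dId/dK
                             -2.0 * k * (vgs - vth)])  # dId/dVth
        p += np.linalg.lstsq(J, resid, rcond=None)[0]
    return p
```

With noise-free synthetic data the iteration should recover the generating parameters; on measured data the same loop returns the least-squares fit.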
Adler, Adam S; Bedinger, Daniel; Adams, Matthew S; Asensio, Michael A; Edgar, Robert C; Leong, Renee; Leong, Jackson; Mizrahi, Rena A; Spindler, Matthew J; Bandi, Srinivasa Rao; Huang, Haichun; Tawde, Pallavi; Brams, Peter; Johnson, David S
2018-04-01
Deep sequencing and single-chain variable fragment (scFv) yeast display methods are becoming more popular for discovery of therapeutic antibody candidates in mouse B cell repertoires. In this study, we compare a deep sequencing and scFv display method that retains native heavy and light chain pairing with a related method that randomly pairs heavy and light chain. We performed the studies in a humanized mouse, using interleukin 21 receptor (IL-21R) as a test immunogen. We identified 44 high-affinity binder scFv with the native pairing method and 100 high-affinity binder scFv with the random pairing method. 30% of the natively paired scFv binders were also discovered with the randomly paired method, and 13% of the randomly paired binders were also discovered with the natively paired method. Additionally, 33% of the scFv binders discovered only in the randomly paired library were initially present in the natively paired pre-sort library. Thus, a significant proportion of "randomly paired" scFv were actually natively paired. We synthesized and produced 46 of the candidates as full-length antibodies and subjected them to a panel of binding assays to characterize their therapeutic potential. 87% of the antibodies were verified as binding IL-21R by at least one assay. We found that antibodies with native light chains were more likely to bind IL-21R than antibodies with non-native light chains, suggesting a higher false positive rate for antibodies from the randomly paired library. Additionally, the randomly paired method failed to identify nearly half of the true natively paired binders, suggesting a higher false negative rate. We conclude that natively paired libraries have critical advantages in sensitivity and specificity for antibody discovery programs.
Statistics with JMP graphs, descriptive statistics and probability
Goos, Peter
2015-01-01
Peter Goos, Department of Statistics, University of Leuven, Faculty of Bio-Science Engineering, and University of Antwerp, Faculty of Applied Economics, Belgium; David Meintrup, Department of Mathematics and Statistics, University of Applied Sciences Ingolstadt, Faculty of Mechanical Engineering, Germany. A thorough presentation of introductory statistics and probability theory, with numerous examples and applications using JMP. Descriptive Statistics and Probability provides an accessible and thorough overview of the most important descriptive statistics for nominal, ordinal and quantitative data with partic…
A simulation of orientation dependent, global changes in camera sensitivity in ECT
International Nuclear Information System (INIS)
Bieszk, J.A.; Hawman, E.G.; Malmin, R.E.
1984-01-01
ECT promises the ability to: 1) observe radioisotope distributions in a patient without the summation of overlying activity reducing contrast, and 2) measure these distributions quantitatively to assess organ function further and more accurately. Ideally, camera-based ECT systems should have a performance that is independent of camera orientation or gantry angle. This study is concerned with ECT quantitation errors that can arise from angle-dependent variations of camera sensitivity. Using simulated phantoms representative of heart and liver sections, the effects of sensitivity changes on reconstructed images were assessed both visually and quantitatively based on ROI sums. The sinogram for each test image was simulated with 128 linear samples and 180 angular views. The global orientation-dependent sensitivity was modelled by applying an angular sensitivity dependence to the sinograms of the test images. Four sensitivity variations were studied: amplitudes of 0% (as a reference), 5%, 10% and 25% with a cosθ dependence, as well as a cos²θ dependence with a 5% amplitude. Simulations were done with and without Poisson noise to: 1) determine trends in the quantitative effects as a function of the magnitude of the variation, and 2) see how these effects are manifested in studies having statistics comparable to clinical cases. For the most realistic sensitivity variation (cosθ, 5% amplitude), the ROIs chosen in the present work indicated changes of <0.5% in the noiseless case and <5% in the case with Poisson noise. The effects of statistics appear to dominate any effects due to global, sinusoidal, orientation-dependent sensitivity changes in the cases studied.
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Enevoldsen, I.
1993-01-01
It has been observed and shown that in some examples a sensitivity analysis of the first-order reliability index results in an increasing reliability index when the standard deviation of a stochastic variable is increased while the expected value is fixed. This unfortunate behaviour can occur when a stochastic variable is modelled by an asymmetrical density function. For lognormally, Gumbel and Weibull distributed stochastic variables it is shown for which combinations of the β-point, the expected value and the standard deviation the weakness can occur. In relation to practical applications the behaviour is probably rather infrequent. A simple example is shown as illustration and to exemplify that for second-order reliability methods and for exact calculations of the probability of failure this behaviour is much more infrequent.
On the inflationary perturbations of massive higher-spin fields
Energy Technology Data Exchange (ETDEWEB)
Kehagias, Alex [Physics Division, National Technical University of Athens, 15780 Zografou Campus, Athens (Greece); Riotto, Antonio, E-mail: kehagias@central.ntua.gr, E-mail: Antonio.Riotto@unige.ch [Department of Theoretical Physics and Center for Astroparticle Physics (CAP), 24 quai E. Ansermet, CH-1211 Geneva 4 (Switzerland)
2017-07-01
Cosmological perturbations of massive higher-spin fields are generated during inflation, but they decay on scales larger than the Hubble radius as a consequence of the Higuchi bound. By introducing suitable couplings to the inflaton field, we show that one can obtain statistical correlators of massive higher-spin fields which remain constant or decay very slowly outside the Hubble radius. This opens up the possibility of new observational signatures from inflation.
The crossing statistic: dealing with unknown errors in the dispersion of Type Ia supernovae
International Nuclear Information System (INIS)
Shafieloo, Arman; Clifton, Timothy; Ferreira, Pedro
2011-01-01
We propose a new statistic that has been designed to be used in situations where the intrinsic dispersion of a data set is not well known: the Crossing Statistic. This statistic is in general less sensitive than χ² to the intrinsic dispersion of the data, and hence allows us to make progress in distinguishing between different models using goodness of fit to the data even when the errors involved are poorly understood. The proposed statistic makes use of the shape and trends of a model's predictions in a quantifiable manner. It is applicable to a variety of circumstances, although we consider it to be especially well suited to the task of distinguishing between different cosmological models using Type Ia supernovae. We show that this statistic can easily distinguish between different models in cases where the χ² statistic fails. We also show that the last mode of the Crossing Statistic is identical to χ², so that it can be considered as a generalization of χ².
International Nuclear Information System (INIS)
Kim, Kyu Tae; Kim, Oh Hwan
1999-01-01
A simplified statistical methodology is developed in order both to reduce the over-conservatism of the deterministic methodologies employed for PWR fuel rod internal pressure (RIP) calculation and to simplify the complicated calculation procedure of the widely used statistical methodology, which employs the response surface method and Monte Carlo simulation. The simplified statistical methodology employs the system moment method with a deterministic approach in determining the maximum variance of RIP. The maximum RIP variance is determined as the square sum, over all input variables considered, of each variable's maximum deviation from its mean value times its RIP sensitivity factor. This approach makes the simplified statistical methodology much more efficient in routine reload core design analysis, since it eliminates the numerous calculations required for the power-history-dependent RIP variance determination. The simplified statistical methodology is shown to be more conservative in generating the RIP distribution than the widely used statistical methodology. Comparison of the significances of each input variable to RIP indicates that the fission gas release model is the most significant input variable. (author). 11 refs., 6 figs., 2 tabs
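The square-sum rule in this abstract is the standard first-order system moment method: for independent inputs, the output variance is approximated by the sum of (s_i · σ_i)² terms, where s_i is the sensitivity of the output to input i and σ_i that input's spread. A minimal sketch:

```python
import numpy as np

def moment_method_std(sensitivities, input_stds):
    """First-order system moment method for independent inputs:
    Var(output) ~= sum_i (s_i * sigma_i)^2; returns the standard deviation."""
    s = np.asarray(sensitivities, dtype=float)
    sig = np.asarray(input_stds, dtype=float)
    return float(np.sqrt(np.sum((s * sig) ** 2)))
```

For a linear response the rule is exact, which makes it easy to test; for a nonlinear response such as RIP it is a first-order approximation around the mean.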
Robust and Reversible Audio Watermarking by Modifying Statistical Features in Time Domain
Directory of Open Access Journals (Sweden)
Shijun Xiang
2017-01-01
Robust and reversible watermarking is a potential technique in many sensitive applications, such as lossless audio or medical image systems. This paper presents a novel robust reversible audio watermarking method that modifies statistical features in the time domain in such a way that the histogram of these feature values is shifted for data hiding. First, the original audio is divided into non-overlapping, equal-sized frames. In each frame, each group of three samples generates a prediction error, and a statistical feature value is calculated as the sum of all the prediction errors in the frame. The watermark bits are embedded into the frames by shifting the histogram of the statistical features. The watermark is reversible and robust to common signal processing operations. Experimental results have shown that the proposed method is not only reversible but also achieves satisfactory robustness to MP3 compression at 64 kbps and additive Gaussian noise at 35 dB.
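The frame feature described here, triples (a, b, c) with prediction error b - (a + c)/2 summed over the frame, is easy to sketch, along with the histogram-shift idea of moving a frame's feature to encode a bit. This is a toy of the embedding step only, not the full reversible scheme, and `shift` is an assumed embedding strength.

```python
import numpy as np

def frame_feature(frame):
    """Sum of prediction errors b - (a + c)/2 over non-overlapping triples."""
    a, b, c = frame[0::3], frame[1::3], frame[2::3]
    return float(np.sum(b - (a + c) / 2.0))

def embed_bit(frame, bit, shift=2.0):
    """Embed one bit by shifting the frame's feature: raising every middle
    sample by `shift` raises the feature by shift * (number of triples)."""
    out = np.asarray(frame, dtype=float).copy()
    if bit:
        out[1::3] += shift
    return out
```

A detector that thresholds the feature can then read the bit back, and because the shift is a known constant, subtracting it restores the original samples exactly, which is the reversibility property the abstract claims.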
Visualization of nonlinear kernel models in neuroimaging by sensitivity maps
DEFF Research Database (Denmark)
Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard
There is significant current interest in decoding mental states from neuroimages. In this context, kernel methods, e.g. support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance imaging (fMRI) data based on visual stimuli. We…
Dumitrache-Rujinski, Stefan; Dinu, Ioana; Călcăianu, George; Erhan, Ionela; Cocieru, Alexandru; Zaharia, Dragoş; Toma, Claudia Lucia; Bogdan, Miron Alexandru
2014-01-01
Obstructive sleep apnea syndrome (OSAS) may induce metabolic abnormalities through intermittent hypoxemia and sympathetic activation. It is difficult to demonstrate an independent role of OSAS in the occurrence of metabolic abnormalities, as obesity represents an important risk factor for both OSAS and metabolic abnormalities. Aim: to assess the relations between insulin resistance (IR), insulin sensitivity (IS), OSAS severity and nocturnal oxyhaemoglobin levels in obese, nondiabetic patients with daytime sleepiness. We evaluated 99 consecutive obese, nondiabetic patients (fasting glycemia …, AHI > 5/hour and daytime sleepiness) by an ambulatory six-channel cardio-respiratory polygraphy. Height, weight, serum triglyceride (TG) and high-density lipoprotein cholesterol (HDL-C) levels were evaluated. Correlations between the Apnea Hypopnea Index (AHI), the Oxygen Desaturation Index (ODI), average and lowest oxyhaemoglobin saturation (SaO2), body mass index (BMI) and insulin resistance or sensitivity were assessed. IR was defined as a TG/HDL-C ratio > 3, and insulin sensitivity (IS) as a TG/HDL-C ratio … obese nondiabetic patients. Preserving insulin sensitivity is more likely when oxyhaemoglobin levels are higher and ODI is lower. The mean lowest nocturnal SaO2 level seems to be independently involved in the development of insulin resistance, as no statistically significant differences in BMI were found between the two groups.
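The abstract's operational definition, IR as a TG/HDL-C ratio above 3, is a one-line calculation; the sketch below encodes only that stated cutoff (the IS cutoff is not recoverable from the abstract, so it is left out).

```python
def tg_hdl_ratio(tg_mg_dl, hdl_mg_dl):
    """Triglyceride to HDL-cholesterol ratio (both in mg/dl)."""
    return tg_mg_dl / hdl_mg_dl

def insulin_resistant(tg_mg_dl, hdl_mg_dl, cutoff=3.0):
    """IR as defined in the abstract: TG/HDL-C ratio > 3."""
    return tg_hdl_ratio(tg_mg_dl, hdl_mg_dl) > cutoff
```
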
Competitiveness - higher education
Directory of Open Access Journals (Sweden)
Labas Istvan
2016-03-01
The involvement of the European Union plays an important role in the areas of education and training equally. The member states are responsible for organizing and operating their education and training systems themselves, and EU policy is aimed at supporting the efforts of member states and trying to find solutions for the common challenges which appear. The key to making our future maximally sustainable lies in education. A highly qualified workforce is the key to the development, advancement and innovation of the world. Nowadays, the competitiveness of higher education institutions is more and more appreciated in the national economy. In recent years, the frameworks of operation of higher education systems have gone through a total transformation. The number of applying students is continuously decreasing in some European countries, so only those institutions which are able to minimize the loss of student numbers can "survive" this shortfall. In this process, the factors shaping the competitiveness of these budgetary institutions play an important role from the point of view of survival: the more competitive a higher education institution is, the greater the chance that students will want to continue their studies there, and thus the greater the institution's chance of future survival compared with those lagging behind in the competition. The aim of our treatise is to present the current situation and main data of EU higher education, and to examine the performance of higher education: to what extent it fulfils the strategy for smart, sustainable and inclusive growth worded in the framework of the Europe 2020 programme. The treatise is based on an analysis of statistical data.
Statistics Anxiety and Business Statistics: The International Student
Bell, James A.
2008-01-01
Does the international student suffer from statistics anxiety? To investigate this, the Statistics Anxiety Rating Scale (STARS) was administered to sixty-six beginning statistics students, including twelve international students and fifty-four domestic students. Due to the small number of international students, nonparametric methods were used to…
Sensitivity coefficients for the 238U neutron-capture shielded-group cross sections
International Nuclear Information System (INIS)
Munoz-Cobos, J.L.; de Saussure, G.; Perez, R.B.
1981-01-01
In the unresolved resonance region, cross sections are represented with statistical resonance parameters. The average values of these parameters are chosen in order to fit evaluated infinitely dilute group cross sections. The sensitivity of the shielded group cross sections to the choice of mean resonance data has recently been investigated for the case of 235U and 239Pu by Ganesan and by Antsipov et al; similar sensitivity studies for 238U are reported here.
Mendoza Beltran, A.; Heijungs, R.; Guinée, J.; Tukker, A.
2016-01-01
Purpose: Despite efforts to treat uncertainty due to methodological choices in life cycle assessment (LCA) such as standardization, one-at-a-time (OAT) sensitivity analysis, and analytical and statistical methods, no method exists that propagate this source of uncertainty for all relevant processes
Quality assurance and statistical control
DEFF Research Database (Denmark)
Heydorn, K.
1991-01-01
In scientific research laboratories it is rarely possible to use quality assurance schemes developed for large-scale analysis. Instead, methods have been developed to control the quality of modest numbers of analytical results by relying on statistical control. Analysis of precision serves to detect analytical errors by comparing the a priori precision of the analytical results with the actual variability observed among replicates or duplicates; the method relies on the chi-square distribution to detect excess variability and is quite sensitive even for 5-10 results. Interference control serves to detect analytical bias by comparing results obtained by two different analytical methods, each relying on a different detection principle and therefore exhibiting different influence from matrix elements; only 5-10 sets of results are required to establish whether a regression line passes…
Changing world extreme temperature statistics
Finkel, J. M.; Katz, J. I.
2018-04-01
We use the Global Historical Climatology Network-Daily database to calculate a nonparametric statistic that describes the rate at which all-time daily high and low temperature records have been set in nine geographic regions (continents or major portions of continents) during periods mostly from the mid-20th century to the present. This statistic was defined in our earlier work on temperature records in the 48 contiguous United States. In contrast to this earlier work, we find that in every region except North America all-time high records were set at a rate significantly (at least 3σ) higher than in the null hypothesis of a stationary climate. Except in Antarctica, all-time low records were set at a rate significantly lower than in the null hypothesis. In Europe, North Africa and North Asia the rate of setting new all-time highs increased suddenly in the 1990s, suggesting a change in regional climate regime; in most other regions there was a steadier increase.
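The stationary-climate null hypothesis used here has a classical closed form: for exchangeable observations, year k sets an all-time record with probability 1/k, regardless of the underlying distribution. A quick Monte Carlo check of that null (a sketch of the null only, not the authors' nonparametric statistic):

```python
import numpy as np

def record_rates(n_years, n_series, seed=0):
    """Empirical probability that year k sets an all-time high in iid
    Gaussian series; under stationarity this should be 1/k for every k."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_series, n_years))
    is_record = x == np.maximum.accumulate(x, axis=1)
    return is_record.mean(axis=0)
```

Observed record rates significantly above these 1/k values (the abstract's at-least-3σ excess for all-time highs) reject the stationary null.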
Hill, Mary C.
2010-01-01
Doherty and Hunt (2009) present important ideas for first-order second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of assessing parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters; its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated-variance statistics. Finally, the perturbed calculated-variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.
Spreadsheets as tools for statistical computing and statistics education
Neuwirth, Erich
2000-01-01
Spreadsheets are a ubiquitous program category, and we discuss their use in statistics and statistics education at various levels, ranging from very basic examples to extremely powerful methods. Since the spreadsheet paradigm is very familiar to many potential users, using it as the interface to statistical methods can make statistics more easily accessible.
Drought sensitivity changes over the last century at the North American savanna-forest boundary
Heilman, K.; McLachlan, J. S.
2017-12-01
Future environmental changes can affect the sensitivity of tree growth to climate. These changes are of particular concern at biome boundaries, where tree distributions could shift as a result of changes in both drought and drought sensitivity. One such region is the North American savanna-forest boundary, where increased CO2 and droughts could alter savanna and forest ecosystem distributions in two contrasting ways: 1) more severe droughts may increase drought sensitivity, favoring open savanna ecosystems, or 2) increases in water use efficiency resulting from higher atmospheric CO2 may decrease drought sensitivity, promoting forest expansion. This study sought to understand whether the past 100 years of climate and CO2 changes have impacted regional tree growth-climate sensitivity. To test for these climate sensitivity changes, we measured the sensitivity of Quercus spp. radial growth to the Palmer Drought Severity Index (PDSI). Tree growth sensitivity to climate can vary according to many factors, including stand structure, available moisture, and tree age. To control for these factors, we sampled tree growth-climate responses at sites in both open and closed forests, and at both low and high annual precipitation. Within each site, we compared growth responses to climate between trees established under high CO2 conditions after 1950 (high-CO2 young) and trees established before 1950 under low CO2 levels (low-CO2 young). At most sites, low-CO2 young trees have a higher drought sensitivity than high-CO2 young trees. These changes in the sensitivity to drought are consistent with CO2 enhancement of water use efficiency. Furthermore, these differences in drought sensitivity are higher at sites with high temperature and low precipitation, suggesting that the alleviation of drought is more likely in hot and dry regions. Thus, if CO2 enhancement is indeed occurring in these systems, lower growth sensitivity to drought in hot and dry regions could favor increased forest growth. If
Register-based statistics statistical methods for administrative data
Wallgren, Anders
2014-01-01
This book provides a comprehensive and up to date treatment of theory and practical implementation in Register-based statistics. It begins by defining the area, before explaining how to structure such systems, as well as detailing alternative approaches. It explains how to create statistical registers, how to implement quality assurance, and the use of IT systems for register-based statistics. Further to this, clear details are given about the practicalities of implementing such statistical methods, such as protection of privacy and the coordination and coherence of such an undertaking. Thi
Variance-based sensitivity indices for stochastic models with correlated inputs
Energy Technology Data Exchange (ETDEWEB)
Kala, Zdeněk [Brno University of Technology, Faculty of Civil Engineering, Department of Structural Mechanics Veveří St. 95, ZIP 602 00, Brno (Czech Republic)
2015-03-10
The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
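For independent inputs, first-order Sobol indices have a standard pick-freeze Monte Carlo estimator; this is the baseline that the decorrelation permutations discussed in the abstract extend to correlated inputs. A sketch assuming independent U(0,1) inputs and a vectorized model:

```python
import numpy as np

def sobol_first_order(model, dim, n=200_000, seed=0):
    """Pick-freeze (Saltelli-style) Monte Carlo estimate of first-order
    Sobol indices for a model with independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, dim)), rng.random((n, dim))
    fA, fB = model(A), model(B)
    var = fA.var()
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all inputs except i
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S
```

For a linear test model the analytic indices are known exactly, which makes the estimator easy to verify before moving to structural-mechanics models like the steel bar in tension.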
Milic, Natasa M.; Trajkovic, Goran Z.; Bukumiric, Zoran M.; Cirkovic, Andja; Nikolic, Ivan M.; Milin, Jelena S.; Milic, Nikola V.; Savic, Marko D.; Corac, Aleksandar M.; Marinkovic, Jelena M.; Stanisavljevic, Dejana M.
2016-01-01
Background: Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. Methods: This was a prospective study conducted with third-year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013-14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Results: Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023), with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality. These findings support blended learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology-assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective and efficient alternative to traditional classroom training in medical statistics. PMID:26859832
The accuracy of remotely-sensed IWC: An assessment from MLS, TRMM and CloudSat statistics
Wu, D. L.; Heymsfield, A. J.
2006-12-01
Understanding climate change requires accurate global cloud ice water content (IWC) measurements. Satellite remote sensing has been the major tool to provide such global observations, but the accuracy of deduced IWC depends on knowledge of cloud microphysics learned from in-situ samples. Because only a limited number and type of ice clouds have been measured by in-situ sensors, the knowledge of cloud microphysics is incomplete, and the IWC accuracy from remote sensing can vary from 30% to 200% from case to case. Recent observations from MLS, TRMM and CloudSat allow us to evaluate the consistency and accuracy of IWCs deduced from passive and active satellite techniques. In this study we conduct statistical analyses of the tropical and subtropical IWCs observed by MLS, TRMM and CloudSat. The probability density functions (PDFs) of IWC are found to depend on the volume size of averaging, and therefore data need to be averaged into the same volume to allow fair comparisons. Because it reflects measurement noise, bias and sensitivity, the PDF is a better characterization than an average for evaluating IWC accuracy, since an averaged IWC depends on a cloud-detection threshold that can vary from sensor to sensor. Different thresholds will not only change the average value but also change cloud fraction and occurrence frequency. Our study shows that MLS and TRMM IWCs, despite large differences in sensitivity with little overlap, can still be compared via their PDFs. The two statistics are generally consistent within 50% at ~13 km, obeying an approximate lognormal distribution as suggested by some ground-based radar observations. MLS has sensitivity to IWC of 1-100 mg/m3, whereas TRMM can improve its sensitivity to IWC as low as 70 mg/m3 if the radar data are averaged properly over the equivalent volume of MLS samples. The proper statistical averaging requires full characteristics of IWC noise, which are not available for products normally derived from radar reflectivity, and therefore we
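The volume-matching step the abstract recommends can be illustrated on synthetic data. The lognormal parameters and block size below are arbitrary illustrations, not actual MLS or TRMM retrieval characteristics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-resolution IWC samples (mg/m^3), lognormally distributed,
# standing in for fine-volume radar retrievals.
iwc_fine = rng.lognormal(mean=np.log(20.0), sigma=1.0, size=12000)

# Average blocks of fine-resolution samples into coarser "MLS-like" volumes
# before comparing PDFs, as the study suggests.
block = 40
iwc_coarse = iwc_fine.reshape(-1, block).mean(axis=1)

# Averaging preserves the mean but narrows the distribution, so PDFs
# computed at different averaging volumes are not directly comparable.
print(np.isclose(iwc_fine.mean(), iwc_coarse.mean()))  # True
print(iwc_coarse.std() < iwc_fine.std())               # True
```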
Statistical yearbook. 2000. Data available as of 31 January 2003. 47 ed
International Nuclear Information System (INIS)
2003-01-01
This is the forty-seventh issue of the United Nations Statistical Yearbook, prepared since 1948 by the Statistics Division, Department of Economic and Social Affairs of the United Nations Secretariat. The present issue contains series covering, in general, 1989-1998 or 1990-1999, using statistics available to the Statistics Division up to 30 November 2000. The Yearbook is based on data compiled by the Statistics Division from over 40 different international and national sources. These include the United Nations Statistics Division in the fields of national accounts, industry, energy, transport and international trade; the United Nations Statistics Division and Population Division in the field of demographic statistics; and data provided by over 20 offices of the United Nations system and international organizations in other specialized fields. United Nations agencies and other international organizations which furnished data are listed under 'Statistical sources and references' at the end of the Yearbook. Acknowledgement is gratefully made for their generous cooperation in providing data. The Statistics Division also publishes the Monthly Bulletin of Statistics, which provides a valuable complement to the Yearbook, covering current international economic statistics for most countries and areas of the world together with quarterly world and regional aggregates. Subscribers to the Monthly Bulletin of Statistics may also access the Bulletin on-line via the World Wide Web. MBS On-line allows time-sensitive statistics to reach users much faster than the traditional print publication. For further information see . The present issue of the Yearbook reflects a phased programme of major changes in its organization and presentation undertaken in 1990; until then, the Yearbook had remained relatively unchanged since the first issue was released in 1948. The Yearbook has also been published on CD-ROM for IBM-compatible microcomputers since the thirty-eighth issue
CONSTRUCTION SENSITIVITY IN PINGYAO TONE SANDHI
Directory of Open Access Journals (Sweden)
Hui-shan Lin
2012-06-01
This paper investigates tone sandhi phenomena in Pingyao, a Jin dialect spoken in Shanxi province in China. Pingyao tone sandhi is special in that tone sandhi in bi-syllabic strings is construction sensitive, but tone sandhi in tri-syllabic strings is not fully conditioned by construction types. Based on Optimality Theory (OT), this paper proposes analyses for bi-tonal and tri-tonal sandhi in Pingyao. We show that while bi-tonal sandhi can be accounted for by assuming that different grammars are associated with different construction types, the lack of construction sensitivity in certain tri-syllabic strings suggests that the association between construction types and phonological grammars can be sacrificed to comply with a higher demand. In Pingyao, the higher demand is to prevent a tri-tonal string with a marked tone sandhi domain from being associated with conflicting grammars.
Slice sensitivity profiles and pixel noise of multi-slice CT in comparison with single-slice CT
International Nuclear Information System (INIS)
Schorn, C.; Obenauer, S.; Funke, M.; Hermann, K.P.; Kopka, L.; Grabbe, E.
1999-01-01
Purpose: Presentation and evaluation of the slice sensitivity profile and pixel noise of multi-slice CT in comparison with single-slice CT. Methods: Slice sensitivity profiles and pixel noise of a multi-slice CT scanner equipped with a 2D matrix detector array and of a single-slice CT scanner were evaluated in phantom studies. Results: For the single-slice CT, the width of the slice sensitivity profiles increased with increasing pitch. In spite of a much higher table speed, the slice sensitivity profiles of multi-slice CT were narrower and did not increase with higher pitch. Noise in single-slice CT was independent of pitch. For multi-slice CT, noise increased with higher pitch and, at the higher pitch, decreased slightly with higher detector row collimation. Conclusions: Multi-slice CT provides superior z-resolution and higher volume coverage speed. These qualities fulfill one of the prerequisites for the improvement of 3D postprocessing. (orig.) [de
Learning Object Names at Different Hierarchical Levels Using Cross-Situational Statistics.
Chen, Chi-Hsin; Zhang, Yayun; Yu, Chen
2018-05-01
Objects in the world usually have names at different hierarchical levels (e.g., beagle, dog, animal). This research investigates adults' ability to use cross-situational statistics to simultaneously learn object labels at individual and category levels. The results revealed that adults were able to use co-occurrence information to learn hierarchical labels in contexts where the labels for individual objects and labels for categories were presented in completely separated blocks, in interleaved blocks, or mixed in the same trial. Temporal presentation schedules significantly affected the learning of individual object labels, but not the learning of category labels. Learners' subsequent generalization of category labels indicated sensitivity to the structure of statistical input. Copyright © 2017 Cognitive Science Society, Inc.
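At its simplest, cross-situational learning of this kind reduces to accumulating word-object co-occurrence counts across ambiguous trials. A toy sketch (the trial structure and labels below are hypothetical, not the study's stimuli):

```python
from collections import Counter
from itertools import product

# Each trial pairs several heard words with several visible objects;
# the learner only tracks which words co-occur with which objects.
trials = [
    ({"beagle", "dog"}, {"beagle1"}),
    ({"beagle", "animal"}, {"beagle1"}),
    ({"poodle", "dog"}, {"poodle1"}),
    ({"poodle", "animal"}, {"poodle1"}),
]

counts = Counter()
for words, objects in trials:
    for w, o in product(words, objects):
        counts[(w, o)] += 1

# The individual-level label "beagle" co-occurs only with beagle1, while the
# category-level label "dog" spreads its counts across both objects.
best_for_beagle = max({"beagle1", "poodle1"}, key=lambda o: counts[("beagle", o)])
print(best_for_beagle)                                  # beagle1
print(counts[("dog", "beagle1")], counts[("dog", "poodle1")])  # 1 1
```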
Statistical and Thurstonian models for the A-not A protocol with and without sureness
DEFF Research Database (Denmark)
Christensen, Rune Haubo Bojesen; Cleaver, Graham; Brockhoff, Per B.
2011-01-01
-product variations to be analyzed in the same model providing additional insight and reduced experimental costs. The effects of explanatory variables on the Thurstonian delta, the sensitivity (AUC), the ROC curve and the response category thresholds are discussed in detail. All statistical methods are implemented...
Software Used to Generate Cancer Statistics - SEER Cancer Statistics
Videos that highlight topics and trends in cancer statistics and definitions of statistical terms. Also software tools for analyzing and reporting cancer statistics, which are used to compile SEER's annual reports.
Understanding Statistics and Statistics Education: A Chinese Perspective
Shi, Ning-Zhong; He, Xuming; Tao, Jian
2009-01-01
In recent years, statistics education in China has made great strides. However, there still exists a fairly large gap with the advanced levels of statistics education in more developed countries. In this paper, we identify some existing problems in statistics education in Chinese schools and make some proposals as to how they may be overcome. We…
Pestman, Wiebe R
2009-01-01
This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.
Demographic factors associated with moral sensitivity among nursing students.
Tuvesson, Hanna; Lützén, Kim
2017-11-01
Today's healthcare environment is often characterized by an ethically demanding work situation, and nursing students need to prepare to meet ethical challenges in their future role. Moral sensitivity is an important aspect of the ethical decision-making process, but little is known regarding nursing students' moral sensitivity and its possible development during nursing education. The aims of this study were to investigate moral sensitivity among nursing students, differences in moral sensitivity according to sample sub-group, and the relation between demographic characteristics of nursing students and moral sensitivity. A convenience sample of 299 nursing students from one university completed a questionnaire comprising questions about demographic information and the revised Moral Sensitivity Questionnaire. With the use of SPSS, non-parametric statistics, including logistic regression models, were used to investigate the relationship between demographic characteristics and moral sensitivity. Ethical considerations: The study followed the regulations according to the Swedish Ethical Review Act and was reviewed by the Ethics Committee of South-East Sweden. Mean scores of nursing students' moral sensitivity fell in the middle to upper segment of the rating scale. Multivariate analysis showed that gender (odds ratio = 3.32), age (odds ratio = 2.09; 1.73), and parental status (odds ratio = 0.31) were of relevance to nursing students' moral sensitivity. Academic year was found to be unrelated to moral sensitivity. These demographic aspects should be considered when designing ethics education for nursing students. Future studies should continue to investigate moral sensitivity in nursing students, such as whether and how various pedagogical strategies in ethics may contribute to moral sensitivity.
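The odds ratios reported above are simply the exponentiated coefficients of the logistic regression model. A minimal reminder of that mapping (using the study's reported odds ratios; the coefficients are back-calculated, not taken from the paper):

```python
import math

def odds_ratio(beta):
    """A logistic regression coefficient b corresponds to odds ratio exp(b)."""
    return math.exp(beta)

def coefficient(or_value):
    """Inverse mapping: recover the coefficient from a reported odds ratio."""
    return math.log(or_value)

# Reported odds ratios: gender 3.32, age 2.09, parental status 0.31.
# OR > 1 means higher odds of high moral sensitivity; OR < 1 means lower odds.
for or_value in (3.32, 2.09, 0.31):
    b = coefficient(or_value)
    print(round(b, 2), round(odds_ratio(b), 2))
```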
Higher order antibunching in intermediate states
International Nuclear Information System (INIS)
Verma, Amit; Sharma, Navneet K.; Pathak, Anirban
2008-01-01
Since the introduction of the binomial state as an intermediate state, different intermediate states have been proposed, and different nonclassical effects have been reported in them. Until now, however, higher order antibunching had been predicted in only one type of intermediate state, known as the shadowed negative binomial state. Recently we have shown that higher order antibunching is not a rare phenomenon [P. Gupta, P. Pandey, A. Pathak, J. Phys. B 39 (2006) 1137]. To establish our earlier claim further, here we show that higher order antibunching can be seen in different intermediate states, such as the binomial state, reciprocal binomial state, hypergeometric state, generalized binomial state, negative binomial state and photon added coherent state. We have studied the possibility of observing higher order sub-Poissonian photon statistics in different limits of intermediate states. The effects of different control parameters on the depth of nonclassicality have also been studied in this connection, and it has been shown that the depth of nonclassicality can be tuned by controlling various physical parameters.
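A standard witness for higher order antibunching compares factorial moments of the photon-number distribution with powers of the mean, d(l) = <n(n-1)...(n-l+1)> - <n>^l < 0. The sketch below applies this generic criterion (not necessarily the paper's exact formulation) to a binomial photon-number distribution versus a Poisson (coherent-light) reference:

```python
import math
import numpy as np

def factorial_moment(probs, order):
    """E[n(n-1)...(n-order+1)] for a photon-number distribution probs[n]."""
    n = np.arange(len(probs), dtype=float)
    falling = np.ones_like(n)
    for k in range(order):
        falling *= n - k
    return float(np.sum(probs * falling))

def antibunching_witness(probs, order=2):
    """d(l) = <n(n-1)...(n-l+1)> - <n>**l; negative => sub-Poissonian."""
    mean = factorial_moment(probs, 1)
    return factorial_moment(probs, order) - mean ** order

# Binomial state photon statistics (sub-Poissonian) vs a Poisson reference
N, p = 20, 0.3
binom_probs = np.array([math.comb(N, k) * p**k * (1 - p)**(N - k)
                        for k in range(N + 1)])
lam = N * p
pois_probs = np.array([math.exp(-lam) * lam**k / math.factorial(k)
                       for k in range(60)])

print(antibunching_witness(binom_probs) < 0)         # True: antibunched
print(abs(antibunching_witness(pois_probs)) < 1e-6)  # True: ~0 for coherent light
```

For the binomial distribution the second factorial moment is N(N-1)p^2, so the witness equals N(N-1)p^2 - (Np)^2 = -Np^2 * N/(N-1)... more simply, 34.2 - 36 = -1.8 here, confirming the sub-Poissonian character.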
Dysfunctional parenting styles increase interpersonal sensitivity in healthy subjects.
Otani, Koichi; Suzuki, Akihito; Shibuya, Naoshi; Matsumoto, Yoshihiko; Kamata, Mitsuhiro
2009-12-01
The effects of dysfunctional parenting styles on interpersonal sensitivity were studied in 640 Japanese volunteers. Interpersonal sensitivity was assessed by the Interpersonal Sensitivity Measure (IPSM), and perceived parental rearing was evaluated by the Parental Bonding Instrument (PBI), which consists of care and protection factors. Parental rearing was classified into 4 types, i.e., optimal parenting (high care/low protection), affectionate constraint (high care/high protection), neglectful parenting (low care/low protection), and affectionless control (low care/high protection). Males with paternal affectionless control showed higher total IPSM scores than those with paternal optimal parenting (p = 0.022). Females with maternal affectionate constraint (p = 0.001), neglectful parenting (p = 0.022), and affectionless control (p = 0.003) showed higher total IPSM scores than those with maternal optimal parenting. In both males and females, dysfunctional parenting styles of the opposite-sex parent did not affect total IPSM scores. The present study suggests that in both males and females interpersonal sensitivity is increased by dysfunctional parenting styles of the same-sex parent.
Implementing statistical equating for MRCP(UK) Parts 1 and 2.
McManus, I C; Chis, Liliana; Fox, Ray; Waller, Derek; Tang, Peter
2014-09-26
The MRCP(UK) exam, in 2008 and 2010, changed the standard-setting of its Part 1 and Part 2 examinations from a hybrid Angoff/Hofstee method to statistical equating using Item Response Theory, the reference group being UK graduates. The present paper considers the implementation of the change, the question of whether the pass rate increased amongst non-UK candidates, any possible role of Differential Item Functioning (DIF), and changes in examination predictive validity after the change. Analysis covered data from the MRCP(UK) Part 1 exam from 2003 to 2013 and the Part 2 exam from 2005 to 2013. Inspection suggested that Part 1 pass rates were stable after the introduction of statistical equating, but showed greater annual variation, probably due to stronger candidates taking the examination earlier. Pass rates seemed to have increased in non-UK graduates after equating was introduced, but this increase was not associated with any change in DIF after statistical equating. Statistical modelling of the pass rates for non-UK graduates found that pass rates, in both Part 1 and Part 2, were increasing year on year, with the changes probably beginning before the introduction of equating. The predictive validity of Part 1 for Part 2 was higher with statistical equating than with the previous hybrid Angoff/Hofstee method, confirming the utility of IRT-based statistical equating. Statistical equating was successfully introduced into the MRCP(UK) Part 1 and Part 2 written examinations, resulting in higher predictive validity than the previous Angoff/Hofstee standard setting. Concerns about an artefactual increase in pass rates for non-UK candidates after equating were shown not to be well-founded. Most likely the changes resulted from a genuine increase in candidate ability, albeit for reasons which remain unclear, coupled with a cognitive illusion giving the impression of a step-change immediately after equating began. Statistical equating provides a robust standard-setting method, with a better
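Under the one-parameter (Rasch) IRT model used for this kind of equating, a response probability depends only on the gap between candidate ability and item difficulty; once item difficulties are anchored to a common scale via a reference group, the same ability yields comparable expected scores on any calibrated form. A sketch with entirely hypothetical ability and difficulty values:

```python
import math

def rasch_probability(ability, difficulty):
    """One-parameter (Rasch) IRT: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Two hypothetical exam forms with item difficulties on a common
# (reference-group-anchored) logit scale.
items_form_a = [-1.0, 0.0, 1.0]
items_form_b = [-0.5, 0.2, 0.3]

theta = 0.4  # hypothetical candidate ability
expected_a = sum(rasch_probability(theta, b) for b in items_form_a)
expected_b = sum(rasch_probability(theta, b) for b in items_form_b)

# Expected scores are directly comparable across forms because the
# difficulties live on the same scale.
print(round(expected_a, 2), round(expected_b, 2))
```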
International Nuclear Information System (INIS)
Chanas, Brian; Wang, Hongbing; Ghanayem, Burhan I.
2003-01-01
Acrylonitrile (AN) is a potent toxicant and a known rodent carcinogen. AN epoxidation to cyanoethylene oxide (CEO) via CYP2E1 and its subsequent metabolism via epoxide hydrolases (EH) to yield cyanide is thought to be responsible for the acute toxicity and mortality of AN. Recent reports showed that male mice are more sensitive than females to the acute toxicity/mortality of AN. The present work was undertaken to assess the metabolic and enzymatic basis for the greater sensitivity of male vs female mice to AN toxicity. Male and female wild-type and CYP2E1-null mice received AN at 0, 2.5, 10, 20, or 40 mg/kg by gavage. Cyanide concentrations were measured at 1 or 3 h after dosing. Current data demonstrated that cyanide levels in blood and tissues of AN-treated wild-type mice of both sexes were significantly greater than in vehicle-treated controls and increased in a dose-dependent manner. In contrast, cyanide levels in AN-treated CYP2E1-null mice were not statistically different from those measured in vehicle-treated controls. Furthermore, higher levels of cyanide were detected in male wild-type mice vs females, in association with the greater sensitivity of males to the acute toxicity/mortality of this chemical. Using Western blot analysis, a negligible difference in CYP2E1 expression, together with higher levels of soluble and microsomal EH (sEH and mEH), was detected in the liver of male vs female mice. In kidneys, male mice exhibited higher expression of both renal CYP2E1 and sEH than did female mice. In conclusion, higher blood and tissue cyanide levels are responsible for the greater sensitivity of male vs female mice to AN. Further, higher expression of CYP2E1 and EH in male mice may contribute to greater formation of CEO and its subsequent metabolism to yield cyanide, respectively
Sampling, Probability Models and Statistical Reasoning Statistical
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 5. Sampling, Probability Models and Statistical Reasoning Statistical Inference. Mohan Delampady V R Padmawar. General Article Volume 1 Issue 5 May 1996 pp 49-58 ...
Near-term hybrid vehicle program, phase 1. Appendix D: Sensitivity analysis report
1979-01-01
Parametric analyses, using a hybrid vehicle synthesis and economics program (HYVELD), are described, investigating the sensitivity of hybrid vehicle cost, fuel usage, utility, and marketability to changes in travel statistics, energy costs, vehicle lifetime and maintenance, owner use patterns, internal combustion engine (ICE) reference vehicle fuel economy, and drive-line component costs and type. The lowest initial cost of the hybrid vehicle would be $1200 to $1500 higher than that of the conventional vehicle. For nominal energy costs ($1.00/gal for gasoline and 4.2 cents/kWh for electricity), the ownership cost of the hybrid vehicle is projected to be 0.5 to 1.0 cents/mi less than that of the conventional ICE vehicle. To attain this ownership cost differential, the lifetime of the hybrid vehicle must be extended to 12 years and its maintenance cost reduced by 25 percent compared with the conventional vehicle. The ownership cost advantage of the hybrid vehicle increases rapidly as the price of fuel increases from $1 to $2/gal.
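The abstract's figures imply a long payback period, consistent with the stated need for a 12-year vehicle lifetime. A back-of-envelope check (the break-even framing is our illustration; the dollar figures come from the abstract):

```python
# Hybrid premium and per-mile savings, from the abstract's nominal-cost case.
extra_initial_cost = 1200   # $, low end of the $1200-1500 initial cost premium
savings_per_mile = 0.0075   # $, midpoint of the 0.5-1.0 cents/mi advantage

break_even_miles = extra_initial_cost / savings_per_mile
print(round(break_even_miles))  # 160000

# At roughly 13,000 mi/yr this takes on the order of 12 years, matching the
# lifetime extension the analysis says is needed.
years_to_break_even = break_even_miles / 13000
print(round(years_to_break_even, 1))
```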
Ren, W. X.; Lin, Y. Q.; Fang, S. E.
2011-11-01
One of the key issues in vibration-based structural health monitoring is to extract the damage-sensitive but environment-insensitive features from sampled dynamic response measurements and to carry out the statistical analysis of these features for structural damage detection. A new damage feature is proposed in this paper by using the system matrices of the forward innovation model based on the covariance-driven stochastic subspace identification of a vibrating system. To overcome the variations of the system matrices, a non-singularity transposition matrix is introduced so that the system matrices are normalized to their standard forms. For reducing the effects of modeling errors, noise and environmental variations on measured structural responses, a statistical pattern recognition paradigm is incorporated into the proposed method. The Mahalanobis and Euclidean distance decision functions of the damage feature vector are adopted by defining a statistics-based damage index. The proposed structural damage detection method is verified against one numerical signal and two numerical beams. It is demonstrated that the proposed statistics-based damage index is sensitive to damage and shows some robustness to the noise and false estimation of the system ranks. The method is capable of locating damage of the beam structures under different types of excitations. The robustness of the proposed damage detection method to the variations in environmental temperature is further validated in a companion paper by a reinforced concrete beam tested in the laboratory and a full-scale arch bridge tested in the field.
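The Mahalanobis decision function at the heart of the proposed damage index can be sketched on synthetic feature vectors (this is a generic illustration, not the paper's model or data):

```python
import numpy as np

def mahalanobis_index(feature, baseline):
    """Damage index: Mahalanobis distance of a feature vector from the
    baseline (undamaged-state) feature population."""
    mu = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    diff = feature - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, size=(500, 3))  # healthy-state feature samples
healthy = np.zeros(3)                           # near the baseline mean
damaged = np.array([3.0, -3.0, 3.0])            # feature vector shifted by damage

# A damaged-state feature vector produces a much larger index, which is the
# basis for statistics-based damage detection thresholds.
print(mahalanobis_index(healthy, baseline) < mahalanobis_index(damaged, baseline))
```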
The use and misuse of statistical methodologies in pharmacology research.
Marino, Michael J
2014-01-01
Descriptive, exploratory, and inferential statistics are necessary components of hypothesis-driven biomedical research. Despite the ubiquitous need for these tools, the emphasis on statistical methods in pharmacology has become dominated by inferential methods, often chosen more for the availability of user-friendly software than from any understanding of the data set or the critical assumptions of the statistical tests. Such frank misuse of statistical methodology and the quest to reach the mystical α < 0.05 reflect a lack of statistical training. Perhaps more critically, a poor understanding of statistical tools limits the conclusions that may be drawn from a study by divorcing the investigator from their own data. The net result is a decrease in quality and confidence in research findings, fueling recent controversies over the reproducibility of high profile findings and effects that appear to diminish over time. The recent development of "omics" approaches leading to the production of massive higher dimensional data sets has amplified these issues, making it clear that new approaches are needed to appropriately and effectively mine this type of data. Unfortunately, statistical education in the field has not kept pace. This commentary provides a foundation for an intuitive understanding of statistics that fosters an exploratory approach and an appreciation for the assumptions of various statistical tests that hopefully will increase the correct use of statistics, the application of exploratory data analysis, and the use of statistical study design, with the goal of increasing reproducibility and confidence in the literature. Copyright © 2013. Published by Elsevier Inc.
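One concrete driver of the reproducibility problems described here, the inflation of false positives when many unplanned tests chase the α = 0.05 threshold, follows from elementary probability:

```python
# Probability of at least one "significant" result among n independent tests
# run on purely null data, at a per-test threshold of alpha = 0.05.
alpha = 0.05
for n_tests in (1, 5, 20):
    p_any = 1 - (1 - alpha) ** n_tests
    print(n_tests, round(p_any, 2))
# 1  0.05
# 5  0.23
# 20 0.64  -- with 20 uncorrected tests, a false positive is more likely than not
```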
Monte Carlo testing in spatial statistics, with applications to spatial residuals
DEFF Research Database (Denmark)
Mrkvička, Tomáš; Soubeyrand, Samuel; Myllymäki, Mari
2016-01-01
This paper reviews recent advances made in testing in spatial statistics and discussed at the Spatial Statistics conference in Avignon 2015. The rank and directional quantile envelope tests are discussed and practical rules for their use are provided. These tests are global envelope tests...... with an appropriate type I error probability. Two novel examples are given on their usage. First, in addition to the test based on a classical one-dimensional summary function, the goodness-of-fit of a point process model is evaluated by means of the test based on a higher dimensional functional statistic, namely...
International Nuclear Information System (INIS)
2005-01-01
For the years 2004 and 2005, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics published in the Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown on the back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption; Carbon dioxide emissions from fossil fuel use; Coal consumption; Consumption of natural gas; Peat consumption; Domestic oil deliveries; Import prices of oil; Consumer prices of principal oil products; Fuel prices in heat production; Fuel prices in electricity production; Price of electricity by type of consumer; Average monthly spot prices at the Nord Pool power exchange; Total energy consumption by source and CO2 emissions; Supplies and total consumption of electricity (GWh); Energy imports by country of origin in January-June 2003; Energy exports by recipient country in January-June 2003; Consumer prices of liquid fuels; Consumer prices of hard coal, natural gas and indigenous fuels; Price of natural gas by type of consumer; Price of electricity by type of consumer; Price of district heating by type of consumer; Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and Energy taxes, precautionary stock fees and oil pollution fees
Colorado River basin sensitivity to disturbance impacts
Bennett, K. E.; Urrego-Blanco, J. R.; Jonko, A. K.; Vano, J. A.; Newman, A. J.; Bohn, T. J.; Middleton, R. S.
2017-12-01
The Colorado River basin is an important river for the food-energy-water nexus in the United States and is projected to change under future scenarios of increased CO2 emissions and warming. Streamflow estimates to consider climate impacts occurring as a result of this warming are often provided using modeling tools which rely on uncertain inputs; to fully understand impacts on streamflow, sensitivity analysis can help determine how models respond under changing disturbances such as climate and vegetation. In this study, we conduct a global sensitivity analysis with a space-filling Latin hypercube sampling of the model parameter space and statistical emulation of the Variable Infiltration Capacity (VIC) hydrologic model to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in VIC. Additionally, we examine sensitivities of basin-wide model simulations using an approach that incorporates changes in temperature, precipitation and vegetation to consider impact responses for snow-dominated headwater catchments, low-elevation arid basins, and the upper and lower river basins. We find that for the Colorado River basin, snow-dominated regions are more sensitive to uncertainties. Newly identified parameter sensitivities include runoff/evapotranspiration sensitivity to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI). Basin-wide streamflow sensitivities to precipitation, temperature and vegetation vary seasonally and between sub-basins, with the largest sensitivities for smaller, snow-driven headwater systems where forests are dense. For a major headwater basin, 1 °C of warming had an impact equivalent to a 30% loss of forest cover, while a 10% precipitation loss was equivalent to a 90% decline in forest cover. Scenarios utilizing multiple disturbances led to unexpected results in which changes could either magnify or diminish extremes, such as low and peak flows and streamflow timing
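Space-filling Latin hypercube sampling of a parameter space, as used to drive the VIC emulation, can be sketched in a few lines. This is a generic unit-hypercube LHS; mapping the samples onto actual VIC parameter ranges is omitted:

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """Latin hypercube sample on the unit hypercube: each parameter's range
    is split into n_samples equal strata, and each stratum is sampled
    exactly once, giving better coverage than plain random sampling."""
    u = rng.uniform(size=(n_samples, n_params))
    samples = np.empty_like(u)
    for j in range(n_params):
        perm = rng.permutation(n_samples)        # shuffle stratum assignment
        samples[:, j] = (perm + u[:, j]) / n_samples
    return samples

rng = np.random.default_rng(42)
x = latin_hypercube(10, 3, rng)

# Stratification check: exactly one sample falls in each decile of every
# parameter's range.
print(all(len(set((x[:, j] * 10).astype(int))) == 10 for j in range(3)))  # True
```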
International Nuclear Information System (INIS)
Correia, E.
1983-01-01
A review of statistical studies of solar burst parameters at X-rays and microwaves is presented, together with an analysis of the limits imposed by instrumental sensitivity, their effect on the form of the distributions, and the establishment of boundary conditions for solar flare phenomena. A study of the statistical behaviour of events observed with high sensitivity at hard X-rays with the HXRBS experiment (SMM) was performed. Maxima appear in the parameter distributions, which may be related to intrinsic characteristics of the source regions. This result seems to confirm early studies which indicated the influence of the sensitivity limits. Assuming the maxima of the distributions are real, it was possible to establish boundary conditions for the mechanisms of primary energy release. The principal condition establishes that solar bursts can be interpreted as a superposition of primary explosions. The statistical analysis, making use of fits of Poisson functions, permitted an estimate of the amount of energy in a primary explosion. The value found is consistent with values derived directly from ultra-fast time structures observed in bursts. Assuming an empirical pulse shape for the primary burst and the superposition condition, simulations of bursts have been successfully obtained. (Author) [pt
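The superposition condition can be illustrated with a simple Poisson count model: if a burst is a superposition of independent primary explosions of a fixed quantum of energy, the number of explosions in a burst is approximately Poisson with mean equal to the burst energy over the quantum. The numbers below are arbitrary illustrations, not the paper's estimates:

```python
import math

def poisson_pmf(k, lam):
    """Probability of k primary explosions given a Poisson mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

quantum = 1e26        # hypothetical energy of one primary explosion (erg)
burst_energy = 5e26   # hypothetical total burst energy (erg)
mean_count = burst_energy / quantum  # expected number of primary explosions

dist = [poisson_pmf(k, mean_count) for k in range(40)]

# Sanity checks: the truncated distribution is normalized and its mean
# recovers burst_energy / quantum, the quantity the Poisson fits estimate.
print(abs(sum(dist) - 1.0) < 1e-9)                                 # True
print(abs(sum(k * p for k, p in enumerate(dist)) - mean_count) < 1e-6)  # True
```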
Whole Frog Project and Virtual Frog Dissection Statistics
Novel diyne-bridged dyes for efficient dye-sensitized solar cells
Energy Technology Data Exchange (ETDEWEB)
Fang, Jing-Kun, E-mail: fjk@njust.edu.cn [Department of Chemistry, School of Chemical Engineering, Nanjing University of Science and Technology, Xiaolingwei Street No. 200, Nanjing, 210094 (China); Sun, Tengxiao [Department of Chemistry, School of Chemical Engineering, Nanjing University of Science and Technology, Xiaolingwei Street No. 200, Nanjing, 210094 (China); Tian, Yi [Advanced Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai, 980-8577 (Japan); Zhang, Yingjun, E-mail: ZhangYingjun@hec.cn [HEC Pharm Group, HEC R& D Center, Dongguan, 523871 (China); Jin, Chuanfei [HEC Pharm Group, HEC R& D Center, Dongguan, 523871 (China); Xu, Zhimin; Fang, Yu; Hu, Xiangyu; Wang, Haobin [Department of Chemistry, School of Chemical Engineering, Nanjing University of Science and Technology, Xiaolingwei Street No. 200, Nanjing, 210094 (China)
2017-07-01
Three new metal-free organic dyes (FSD101-103) were synthesized to investigate the influence of a diyne unit on dye molecules. FSD101 and FSD102, with a diyne unit, and FSD103, with a monoyne unit, were applied as sensitizers in dye-sensitized solar cells (DSSCs). The optical and electrochemical properties, theoretical studies, and photovoltaic parameters of DSSCs sensitized by these dyes were systematically investigated. By replacing the monoyne unit with a diyne unit, FSD101 exhibited a broader absorption spectrum, lower IP, higher EA, lower band gap energy, higher oscillator strength, more efficient electron injection ability, broader IPCE response range and higher τ{sub e} in comparison with FSD103. Hence, the DSSC sensitized by FSD101 showed higher J{sub sc} and V{sub oc} values, and demonstrated a power conversion efficiency of 3.12%, about twice that of FSD103 (1.55%). FSD102 showed results similar to FSD101, with a power conversion efficiency of 2.98%, even though a stronger electron-withdrawing cyanoacrylic acid group was introduced. This may be due to less efficient electron injection from the dye to TiO{sub 2} and the lower τ{sub e} of FSD102 compared with FSD101. These results indicate that the performance of DSSCs can be significantly improved by introducing a diyne unit into this type of organic dye. - Highlights: • Diyne-bridge was introduced into dye molecules by a transition-metal-free protocol. • Power conversion efficiency grows from 1.55% to 3.12% by replacing the monoyne unit with a diyne unit. • FSD101 with a diyne unit shows the highest electron lifetime, resulting in a higher V{sub oc}.
Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI).
Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur
2016-01-01
We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non-expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI's robustness and sensitivity in capturing useful data relating to the students' conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. © 2016 T. Deane et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Statistical fluctuations of electromagnetic transition intensities in pf-shell nuclei
International Nuclear Information System (INIS)
Hamoudi, A.; Nazmitdinov, R.G.; Shakhaliev, E.; Alhassid, Y.
2000-01-01
We study the fluctuation properties of ΔT = 0 electromagnetic transition intensities in A ∼ 60 nuclei within the framework of the interacting shell model, using a realistic effective interaction for pf-shell nuclei with a 56Ni core. It is found that the B(E2) and the ΔJ ≠ 0 B(M1) distributions are well described by the Gaussian orthogonal ensemble of random matrices (Porter-Thomas distribution) independently of the isobaric quantum number T_z. However, the statistics of the B(M1) transitions with ΔJ = 0 are sensitive to T_z: T_z = 1 nuclei exhibit a Porter-Thomas distribution, while a significant deviation from the GOE statistics is observed for self-conjugate nuclei (T_z = 0). Similar results are found for A = 22 sd-shell nuclei.
Algorithm for image retrieval based on edge gradient orientation statistical code.
Zeng, Jiexian; Zhao, Yonggang; Li, Weiye; Fu, Xiang
2014-01-01
Image edge gradient direction not only contains important shape information but also has a simple, low-complexity characteristic. Since edge gradient direction histograms and the edge direction autocorrelogram lack rotation invariance, we propose an image retrieval algorithm based on an edge gradient orientation statistical code (hereinafter referred to as EGOSC), which transfers the statistical treatment of eight-neighborhood edge-direction chain codes to the statistics of the edge gradient direction. First, we construct the n-direction vector and impose a maximal-summation restriction on EGOSC to make the algorithm effectively rotation invariant. Then, we use the Euclidean distance of the edge gradient direction entropy to measure shape similarity, so that the method is not sensitive to scaling, color, or illumination changes. The experimental results and algorithm analysis demonstrate that the algorithm can be used for content-based image retrieval and achieves good retrieval results.
Statistical model based gender prediction for targeted NGS clinical panels
Directory of Open Access Journals (Sweden)
Palani Kannan Kandavel
2017-12-01
A reference test dataset was used to test the model. Sensitivity in predicting gender was increased relative to the current approach based on genotype composition in ChrX. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score toward the respective gender indicates higher-quality sequenced data.
Enhanced short-term sensitization of facial compared with limb heat pain.
Schmidt, Katharina; Schunke, Odette; Forkmann, Katarina; Bingel, Ulrike
2015-08-01
Habituation and sensitization are important features of individual sensitivity to repetitive noxious stimulation and have been investigated in numerous studies. However, it is unclear whether these phenomena vary depending on the site of stimulation. Here we compared short-term and long-term effects of painful heat stimulation on the forehead and limb using an established longitudinal heat pain paradigm performed over 8 consecutive days in 36 healthy volunteers. Participants were randomized into 2 groups; participants received repetitive heat pain stimulation either on the left volar forearm or on the left side of the forehead. Our data show a comparable degree of habituation over the course of 8 days in both groups. However, participants in the trigeminal stimulation group exhibited stronger within-session sensitization (indexed by a higher within-session increase in pain intensity ratings) than those who received the forearm stimulation. Furthermore, over the course of the experiment we found a correlation between habituation and anxiety, showing less habituation in participants with higher trait anxiety scores. Our findings are in line with somatotopic differences in response to painful stimulation and a higher proneness of trigeminal pain to sensitization processes, which might be explained by the biological relevance of the head and facial area for vital functions. The contribution of this sensitivity to the development and maintenance of clinical facial pain and headache disorders warrants further investigation. This study uses psychophysical methods to evaluate the differences in long-term habituation and short-term sensitization to heat pain between the trigeminal and spinal systems. We found stronger sensitization for trigeminal stimuli compared with nociceptive stimuli on the forearm. The contribution of this sensitivity to clinical pain states warrants further investigation. Copyright © 2015 American Pain Society. Published by Elsevier Inc. All rights reserved.
A Review of Modeling Bioelectrochemical Systems: Engineering and Statistical Aspects
Directory of Open Access Journals (Sweden)
Shuai Luo
2016-02-01
Full Text Available Bioelectrochemical systems (BES) are promising technologies to convert organic compounds in wastewater to electrical energy through a series of complex physical-chemical, biological and electrochemical processes. Representative BES such as microbial fuel cells (MFCs) have been studied and advanced for energy recovery. Substantial experimental and modeling efforts have been made to investigate the processes involved in electricity generation toward improving BES performance for practical applications. However, many parameters can potentially affect these processes, making optimization of system performance difficult to achieve. Mathematical models, including engineering models and statistical models, are powerful tools for understanding the interactions among the parameters in BES and for optimizing BES configuration and operation. This review paper introduces and discusses recent developments in BES modeling from engineering and statistical perspectives, including analysis of model structure, description of application cases and sensitivity analysis of various parameters. It is expected to serve as a compass for integrating engineering and statistical modeling strategies to improve model accuracy for BES development.
Machine learning Z2 quantum spin liquids with quasiparticle statistics
Zhang, Yi; Melko, Roger G.; Kim, Eun-Ah
2017-12-01
After decades of progress and effort, obtaining a phase diagram for a strongly correlated topological system still remains a challenge. Although in principle one could turn to Wilson loops and long-range entanglement, evaluating these nonlocal observables at many points in phase space can be prohibitively costly. With growing excitement over topological quantum computation comes the need for an efficient approach for obtaining topological phase diagrams. Here we turn to machine learning using quantum loop topography (QLT), a notion we have recently introduced. Specifically, we propose a construction of QLT that is sensitive to quasiparticle statistics. We then use mutual statistics between the spinons and visons to detect a Z2 quantum spin liquid in a multiparameter phase space. We successfully obtain the quantum phase boundary between the topological and trivial phases using a simple feed-forward neural network. Furthermore, we demonstrate advantages of our approach for the evaluation of phase diagrams relating to speed and storage. Such statistics-based machine learning of topological phases opens new efficient routes to studying topological phase diagrams in strongly correlated systems.
Analysis of statistical misconception in terms of statistical reasoning
Maryati, I.; Priatna, N.
2018-05-01
Reasoning skill is needed by everyone in the globalization era, because each person must be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. This skill can be developed at various levels of education. However, the skill remains low because many people, including students, assume that statistics is merely counting and using formulas. Students also still hold negative attitudes toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample consisted of 32 mathematics education students who had taken the descriptive statistics course. The mean score on the misconception test was 49.7 (standard deviation 10.6), whereas the mean score on the statistical reasoning skill test was 51.8 (standard deviation 8.5). If a minimum score of 65 defines standard achievement of course competence, the students' mean scores are below the standard. The misconception results indicate which subtopics need particular attention. Based on the assessment, students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concept to use in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.
State analysis of BOP using statistical and heuristic methods
International Nuclear Information System (INIS)
Heo, Gyun Young; Chang, Soon Heung
2003-01-01
Under the deregulation environment, the performance enhancement of the balance of plant (BOP) in nuclear power plants is being highlighted. To analyze the performance level of BOP, performance test procedures provided by an authorized institution such as ASME are used. However, plant investigation showed that the procedures' requirements on the reliability and number of sensors are difficult to satisfy. As a solution, a state analysis method, an expanded concept of signal validation, was proposed on the basis of statistical and heuristic approaches. The authors recommend a statistical linear regression model, derived by analyzing correlations among BOP parameters, as a reference state analysis method. Its advantages are that its derivation is not heuristic, its model uncertainty can be calculated, and it is easy to apply to an actual plant. The error of the statistical linear regression model is below 3% under both normal and abnormal system states. Additionally, a neural network model was recommended, since the statistical model cannot be applied to the validation of all sensors and is sensitive to outliers, i.e., signals located outside a statistical distribution. Because many sensors in BOP need to be validated, wavelet analysis (WA) was applied as a pre-processor to reduce the input dimension and enhance training accuracy. The outlier localization capability of WA enhanced the robustness of the neural network. The trained neural network restored degraded signals to within ±3% of the true signals.
Statistical Inference at Work: Statistical Process Control as an Example
Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia
2008-01-01
To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
Directory of Open Access Journals (Sweden)
Samira Khoshkam
2012-08-01
Full Text Available The present study evaluated the relations between anxious attachment styles and rejection sensitivity, and the potential mediating role of self-esteem and worry. A sample of 125 Iranian college students completed surveys assessing rejection sensitivity, attachment style, worry and self-esteem. Structural Equation Modeling (SEM) analyses were conducted. Results show a significant positive relationship between anxious attachment styles and rejection sensitivity. The study suggests that a higher score on anxious attachment styles is associated with a higher level of worry, a lower level of self-esteem, and a higher level of rejection sensitivity. Furthermore, there is a significant positive relationship between worry and rejection sensitivity and a significant negative relationship between self-esteem and rejection sensitivity. Results indicate that self-esteem and worry mediate the relationship between anxious attachment styles and rejection sensitivity.
Androgens increase lws opsin expression and red sensitivity in male three-spined sticklebacks.
Directory of Open Access Journals (Sweden)
Yi Ta Shao
Full Text Available Optomotor studies have shown that three-spined sticklebacks (Gasterosteus aculeatus) are more sensitive to red during summer than winter, which may be related to the need to detect the red breeding colour of males. This study aimed to determine whether this change in red light sensitivity is specifically related to reproductive physiology. The mRNA levels of opsin genes were examined in the retinae of sexually mature and immature fish, as well as in sham-operated males, castrated control males, and castrated males implanted with the androgen 11-ketoandrostenedione (11-KA), maintained under stimulatory (L16:D8) or inhibitory (L8:D16) photoperiods. In both sexes, red-sensitive opsin gene (lws) mRNA levels were higher in sexually mature than in immature fish. Under L16:D8, lws mRNA levels were higher in intact than in castrated males, and were up-regulated by 11-KA treatment in castrated males. Moreover, electroretinogram data confirmed that sexual maturation resulted in higher relative red spectral sensitivity. Mature males under L16:D8 were more sensitive to red light than males under L8:D16. Red light sensitivity under L16:D8 was diminished by castration, but increased by 11-KA treatment. Thus, in sexually mature male sticklebacks, androgen is a key factor in enhancing sensitivity to red light via regulation of opsin gene expression. This is the first study to demonstrate that sex hormones can regulate spectral vision sensitivity.
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
A Statistical Primer: Understanding Descriptive and Inferential Statistics
Gillian Byrne
2007-01-01
As libraries and librarians move more towards evidence‐based decision making, the data being generated in libraries is growing. Understanding the basics of statistical analysis is crucial for evidence‐based practice (EBP), in order to correctly design and analyze research as well as to evaluate the research of others. This article covers the fundamentals of descriptive and inferential statistics, from hypothesis construction to sampling to common statistical techniques including chi‐square, co...
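As a sketch of the inferential workflow such a primer describes, the chi-square test of independence for a 2x2 contingency table can be computed from first principles. The library survey counts below are hypothetical, not from the article:

```python
# Minimal sketch (hypothetical data): chi-square statistic for a 2x2 table,
# comparing observed counts against the counts expected under independence.

def chi_square_2x2(table):
    """Return the chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical survey: EBP adoption (yes/no) by library type
table = [[30, 10],   # academic libraries
         [20, 40]]   # public libraries
print(round(chi_square_2x2(table), 3))   # -> 16.667
```

Comparing the statistic against the chi-square distribution with 1 degree of freedom then yields the p value for the hypothesis that the two variables are independent.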
Solution of the statistical bootstrap with Bose statistics
International Nuclear Information System (INIS)
Engels, J.; Fabricius, K.; Schilling, K.
1977-01-01
A brief and transparent way to introduce Bose statistics into the statistical bootstrap of Hagedorn and Frautschi is presented. The resulting bootstrap equation is solved by a cluster expansion for the grand canonical partition function. The shift of the ultimate temperature due to Bose statistics is determined through an iteration process. We discuss two-particle spectra of the decaying fireball (with given mass) as obtained from its grand microcanonical level density
Blood pressure and pain sensitivity in children and adolescents.
Drouin, Sammantha; McGrath, Jennifer J
2013-06-01
Elevated blood pressure is associated with diminished pain sensitivity. While this finding is well established in adults, it is less clear when the relation between blood pressure and pain sensitivity emerges across the life course. Evidence suggests this phenomenon may exist during childhood. Children (N = 309; 56% boys) aged 10-15 years and their parents participated. Blood pressure readings were taken during a resting baseline. Maximum pain intensity was rated using a visual analogue scale (rated 0-10) in response to a finger prick pain induction. Parent-measured resting blood pressure was inversely associated with boys' pain ratings only. Cross-sectionally, lower pain ratings were related to higher SBP, univariately. Longitudinally, pain ratings predicted higher DBP, even after controlling for covariates. Determining when and how the relation between blood pressure and pain sensitivity emerges may elucidate the pathophysiology of hypertension. Copyright © 2013 Society for Psychophysiological Research.
Trends In Funding Higher Education In Romania And EU
Directory of Open Access Journals (Sweden)
Raluca Mariana Dragoescu
2014-05-01
Full Text Available Education is one of the determinants of economic growth in any state, so education funding is a very important aspect of public policy. In this article we present the general principles of funding higher education in Romania and how it has evolved over the last decade, stressing that public higher education has been consistently underfunded. We also present an overview of the evolution of the main statistical indicators that characterize higher education in Romania, namely the number of universities and faculties, the number of students and the number of teachers, revealing discrepancies between their evolution and the evolution of funding. We compared the funding of higher education in Romania and in EU countries, highlighting that Romania should pay special attention to higher education in order to match the performance of other EU member countries.
Survey of sampling-based methods for uncertainty and sensitivity analysis
International Nuclear Information System (INIS)
Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.
2006-01-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition.
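The sampling-based loop the survey describes (sample the uncertain inputs, propagate them through the analysis, then measure input-output association) can be sketched as follows. The toy model, the input names, and the choice of rank (Spearman) correlation are illustrative assumptions, not the survey's own example:

```python
# Sketch (synthetic example): sampling-based sensitivity analysis.
# Two uncertain inputs are sampled, pushed through a toy model, and each
# input's rank correlation with the output indicates its importance.
import random

def ranks(values):
    """Return 1-based ranks (assumes no ties, true for continuous samples)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(x, y):
    """Spearman rank correlation via the classic sum-of-squared-rank-differences formula."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

random.seed(0)
n = 200
x1 = [random.uniform(0, 1) for _ in range(n)]   # influential input
x2 = [random.uniform(0, 1) for _ in range(n)]   # weakly influential input
y = [10 * a + 0.1 * b for a, b in zip(x1, x2)]  # toy model

# x1 should show a rank correlation near 1, x2 near 0
print(round(spearman(x1, y), 2), round(spearman(x2, y), 2))
```

More elaborate procedures in the review (partial correlation, nonparametric regression, variance decomposition) refine this same sample-propagate-associate pattern.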
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
Variation of a test's sensitivity and specificity with disease prevalence.
Leeflang, Mariska M G; Rutjes, Anne W S; Reitsma, Johannes B; Hooft, Lotty; Bossuyt, Patrick M M
2013-08-06
Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. We used data from 23 meta-analyses, each of which included 10-39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Within a given review, a change in prevalence from the lowest to highest value resulted in a corresponding change in sensitivity or specificity from 0 to 40 percentage points. Specificity tended to be lower with higher disease prevalence, and this effect was statistically significant; there was no such systematic effect for sensitivity. The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation.
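For reference, a minimal sketch of how sensitivity, specificity and prevalence are computed from a single study's 2x2 table; the counts below are hypothetical, not from the meta-analyses above:

```python
# Illustrative only (hypothetical counts): diagnostic accuracy measures
# from true/false positives (tp, fp) and false/true negatives (fn, tn).

def diagnostic_accuracy(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # proportion of diseased correctly detected
    specificity = tn / (tn + fp)   # proportion of non-diseased correctly ruled out
    prevalence = (tp + fn) / (tp + fp + fn + tn)
    return sensitivity, specificity, prevalence

# Hypothetical study: 80 diseased (64 test positive), 120 healthy (108 test negative)
sens, spec, prev = diagnostic_accuracy(tp=64, fp=12, fn=16, tn=108)
print(sens, spec, prev)   # -> 0.8 0.9 0.4
```

The article's point is that these three quantities are not independent in practice: mechanisms that shift prevalence (such as patient spectrum) can shift specificity as well, so a study's prevalence is a useful guide to its applicability.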
Energy Technology Data Exchange (ETDEWEB)
Ribeiro, Daniel Rios Pinto; Ramos, Adriane Monserrat; Vieira, Pedro Lima; Menti, Eduardo; Bordin, Odemir Luiz Jr.; Souza, Priscilla Azambuja Lopes de; Quadros, Alexandre Schaan de; Portal, Vera Lúcia, E-mail: veraportal.pesquisa@gmail.com [Programa de Pós-Graduação em Ciências da Saúde: Cardiologia - Instituto de Cardiologia/Fundação Universitária de Cardiologia, Porto Alegre, RS (Brazil)
2014-07-15
The association between high-sensitivity C-reactive protein and recurrent major adverse cardiovascular events (MACE) in patients with ST-elevation myocardial infarction who undergo primary percutaneous coronary intervention remains controversial. To investigate the potential association between high-sensitivity C-reactive protein and an increased risk of MACE such as death, heart failure, reinfarction, and new revascularization in patients with ST-elevation myocardial infarction treated with primary percutaneous coronary intervention. This prospective cohort study included 300 individuals aged >18 years who were diagnosed with ST-elevation myocardial infarction and underwent primary percutaneous coronary intervention at a tertiary health center. An instrument evaluating clinical variables and the Thrombolysis in Myocardial Infarction (TIMI) and Global Registry of Acute Coronary Events (GRACE) risk scores was used. High-sensitivity C-reactive protein was determined by nephelometry. The patients were followed-up during hospitalization and up to 30 days after infarction for the occurrence of MACE. Student's t, Mann-Whitney, chi-square, and logistic regression tests were used for statistical analyses. P values of ≤0.05 were considered statistically significant. The mean age was 59.76 years, and 69.3% of patients were male. No statistically significant association was observed between high-sensitivity C-reactive protein and recurrent MACE (p = 0.11). However, high-sensitivity C-reactive protein was independently associated with 30-day mortality when adjusted for TIMI [odds ratio (OR), 1.27; 95% confidence interval (CI), 1.07-1.51; p = 0.005] and GRACE (OR, 1.26; 95% CI, 1.06-1.49; p = 0.007) risk scores. Although high-sensitivity C-reactive protein was not predictive of combined major cardiovascular events within 30 days after ST-elevation myocardial infarction in patients who underwent primary angioplasty and stent implantation, it was an independent predictor of 30-day mortality.
A statistical evaluation of asbestos air concentrations
International Nuclear Information System (INIS)
Lange, J.H.
1999-01-01
Both area and personal air samples collected during an asbestos abatement project were matched and statistically analysed. Among the many parameters studied were fibre concentrations and their variability. Mean values for area and personal samples were 0.005 and 0.024 f cm⁻³ of air, respectively. Summary values for area and personal samples suggest that exposures are low, with no single exposure value exceeding the current OSHA TWA value of 0.1 f cm⁻³ of air. Within- and between-worker analysis suggests that these data are homogeneous. Comparison of within- and between-worker values suggests that the exposure source and variability for abatement are more related to the process than to individual practices. This supports the importance of control measures for abatement. Study results also suggest that area and personal samples are not statistically related; that is, no association is observed between the two sampling methods when the data are analysed by correlation or regression analysis. Personal samples were statistically higher in concentration than area samples. Area sampling cannot be used as a surrogate exposure for asbestos abatement workers. (author)
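The matched-sample comparison described above (correlation between paired area and personal concentrations, plus a mean difference) can be sketched as follows; the concentrations are synthetic illustrations, not the study's data:

```python
# Sketch with synthetic numbers (not the study's data): paired area and
# personal fibre concentrations compared by Pearson correlation and by
# the difference in means.

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

area     = [0.004, 0.006, 0.005, 0.003, 0.007]   # f/cm^3, hypothetical
personal = [0.020, 0.018, 0.030, 0.026, 0.022]   # f/cm^3, hypothetical

r = pearson(area, personal)
mean_diff = sum(personal) / len(personal) - sum(area) / len(area)
# weak correlation, personal concentrations higher on average
print(round(r, 2), round(mean_diff, 3))
```

A weak correlation together with a clearly positive mean difference mirrors the study's two conclusions: area and personal samples are not statistically related, and personal samples run higher.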
A cautionary note on the rank product statistic.
Koziol, James A
2016-06-01
The rank product method introduced by Breitling R et al. [2004, FEBS Letters 573, 83-92] has rapidly generated popularity in practical settings, in particular, detecting differential expression of genes in microarray experiments. The purpose of this note is to point out a particular property of the rank product method, namely, its differential sensitivity to over- and underexpression. It turns out that overexpression is less likely to be detected than underexpression with the rank product statistic. We have conducted both empirical and exact power studies that demonstrate this phenomenon, and summarize these findings in this note. © 2016 Federation of European Biochemical Societies.
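A minimal sketch of the rank product statistic of Breitling et al. for detecting overexpression; the gene names and fold changes below are hypothetical:

```python
# Sketch (hypothetical data): rank product statistic. Each gene is ranked
# within each replicate by fold change (rank 1 = largest), and the rank
# product is the geometric mean of its ranks across replicates. Small
# values suggest consistent up-regulation; for down-regulation the
# ranking is reversed.

def rank_product(replicates):
    """replicates: list of dicts mapping gene -> fold change."""
    k = len(replicates)
    rank_lists = []
    for rep in replicates:
        ordered = sorted(rep, key=rep.get, reverse=True)
        rank_lists.append({g: i + 1 for i, g in enumerate(ordered)})
    rp = {}
    for g in replicates[0]:
        prod = 1.0
        for ranks in rank_lists:
            prod *= ranks[g]
        rp[g] = prod ** (1 / k)   # geometric mean of ranks
    return rp

reps = [{"g1": 4.0, "g2": 1.1, "g3": 0.5},
        {"g1": 3.5, "g2": 0.9, "g3": 0.4}]
print(rank_product(reps))   # g1 ranks first in both replicates -> RP = 1.0
```

In practice the observed rank products are compared against a permutation-based null distribution to assign significance; the note's caution is that the same procedure applied to reversed ranks (underexpression) does not have symmetric power.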
Szulc, Stefan
1965-01-01
Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives of a given field. Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies.
Goodman, Joseph W
2015-01-01
This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced.
Anatomically standardized statistical mapping of 123I-IMP SPECT in brain tumors
International Nuclear Information System (INIS)
Shibata, Yasushi; Akimoto, Manabu; Matsushita, Akira; Yamamoto, Tetsuya; Takano, Shingo; Matsumura, Akira
2010-01-01
123I-iodoamphetamine Single Photon Emission Computed Tomography (IMP SPECT) is used to evaluate cerebral blood flow. However, application of IMP SPECT to patients with brain tumors has been rarely reported. Primary central nervous system lymphoma (PCNSL) is a rare tumor that shows delayed IMP uptake. The relatively low spatial resolution of SPECT is a clinical problem in diagnosing brain tumors. We examined anatomically standardized statistical mapping of IMP SPECT in patients with brain lesions. This study included 49 IMP SPECT images for 49 patients with brain lesions: 20 PCNSL, 1 Burkitt's lymphoma, 14 glioma, 4 other tumors, 7 inflammatory diseases and 3 without any pathological diagnosis but a clinical diagnosis of PCNSL. After intravenous injection of 222 MBq of 123I-IMP, early (15 minutes) and delayed (4 hours) images were acquired using a multi-detector SPECT machine. All SPECT data were transferred to a newly developed software program, iNeurostat+ (Nihon Medi-physics). SPECT data were anatomically standardized on normal brain images. Regions of increased IMP uptake were statistically mapped on the tomographic images of normal brain. Eighteen patients showed high uptake in the delayed IMP SPECT images (16 PCNSL, 2 unknown). No other tumors or diseases showed high uptake on delayed IMP SPECT, so there were no false positives. Four patients with pathologically proven PCNSL showed no uptake in the original IMP SPECT; these tumors were too small to detect in IMP SPECT. However, statistical mapping revealed IMP uptake in 18 of 20 pathologically verified PCNSL patients. A heterogeneous IMP uptake was seen in tumors that were homogeneous on MRI. For patients with hot IMP uptake, statistical mapping showed clearer uptake. IMP SPECT is a sensitive test for diagnosing PCNSL, although it produced false negative results for small posterior fossa tumors. Anatomically standardized statistical mapping is therefore considered to be a useful method for improving the diagnostic
Ghanizadeh, Ahmad
2013-01-01
There is no empirical literature about the American Psychiatric Association's proposed new diagnostic criteria for attention deficit hyperactivity disorder (ADHD). This study examined the agreement between ADHD diagnoses derived from Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), and DSM-V diagnostic criteria. It also reports sensitivity, specificity, and agreement for ADHD diagnosis. A clinical sample of 246 children and adolescents was interviewed face to face by the interviewing clinician using both the DSM-V and DSM-IV ADHD diagnostic criteria. Comorbid psychiatric disorders were screened using DSM-IV criteria. The rate of ADHD diagnosis using DSM-V was significantly higher than the rate detected using DSM-IV diagnostic criteria. The sensitivity of the DSM-V diagnostic criteria was 100%, while their specificity was 71.1%. The kappa agreement between DSM-IV and DSM-V was 0.75. In addition, the positive predictive value was 85.1%. All four newly added symptoms in the ADHD diagnostic criteria are statistically more common in children with ADHD than in the comparison group. However, these symptoms are also very common in children without ADHD. It is expected that the rate of ADHD would increase using the proposed ADHD DSM-V criteria. Moreover, the newly added symptoms have a low specificity for ADHD diagnosis. Copyright © 2013 Elsevier Inc. All rights reserved.
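The agreement measures reported above can all be reproduced from a single 2x2 cross-classification. A minimal sketch with hypothetical counts (chosen only to echo the reported pattern of perfect sensitivity and ~85% PPV in a sample of 246; this is not the study's actual table):

```python
def diagnostic_agreement(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and Cohen's kappa from a 2x2 table
    (rows: new criteria +/-, columns: reference criteria +/-)."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    po = (tp + tn) / n                                              # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sensitivity, specificity, ppv, kappa

# Hypothetical counts: DSM-V positive on every DSM-IV case (fn = 0),
# plus 21 DSM-V-only cases
sens, spec, ppv, kappa = diagnostic_agreement(tp=120, fp=21, fn=0, tn=105)
```

With fn = 0, sensitivity is exactly 1.0, which is the structural reason the new criteria cannot miss a reference-positive case in this setup.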
New Graphical Methods and Test Statistics for Testing Composite Normality
Directory of Open Access Journals (Sweden)
Marc S. Paolella
2015-07-01
Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
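The graphical tests described above are built on the empirical CDF of the sample compared against a normal CDF with estimated (composite) parameters. A minimal sketch of that ECDF ingredient, using a classical Kolmogorov-Smirnov-type distance rather than the MSP statistic itself (which the abstract does not specify in detail):

```python
import math

def normal_cdf(x, mu, sigma):
    # Standard normal CDF via the error function, shifted and scaled
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_normality_stat(sample):
    """Maximum distance between the empirical CDF and a normal CDF with
    parameters estimated from the sample (the composite-normality setting)."""
    n = len(sample)
    mu = sum(sample) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in sample) / (n - 1))
    xs = sorted(sample)
    d = 0.0
    for i, x in enumerate(xs):
        f = normal_cdf(x, mu, sigma)
        d = max(d, abs(f - i / n), abs((i + 1) / n - f))
    return d

# A strongly right-skewed sample sits farther from fitted normality
# than a symmetric one -- the asymmetric-alternative case in the abstract
skewed = [0.1, 0.2, 0.3, 0.4, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
symmetric = [-4, -3, -2, -1, 0, 0, 1, 2, 3, 4]
```

Because the parameters are estimated, the null distribution of such a statistic differs from the classical KS tables; the size-correct bounds in the article address exactly that complication.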
Milic, Natasa M; Trajkovic, Goran Z; Bukumiric, Zoran M; Cirkovic, Andja; Nikolic, Ivan M; Milin, Jelena S; Milic, Nikola V; Savic, Marko D; Corac, Aleksandar M; Marinkovic, Jelena M; Stanisavljevic, Dejana M
2016-01-01
Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. This was a prospective study conducted with third-year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed the final exam of the obligatory introductory statistics course (440 of 545) during 2013-14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Mean exam scores for the blended learning group were higher than for the on-site group for both the final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and the knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023), with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology-assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient alternative to traditional classroom training in medical statistics.
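The reported group comparison (89.36±6.60 vs. 86.06±8.48) can be illustrated with a Welch t statistic computed from summary data. The equal group sizes below are an assumption made for illustration, since the abstract gives only the total of 440 students:

```python
import math

def welch_t(m1, sd1, n1, m2, sd2, n2):
    """Welch's t statistic from summary statistics -- a generic two-group
    comparison, not necessarily the exact test used in the study."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (m1 - m2) / se

# Reported final-score summaries; group sizes of 220 each are assumed
t = welch_t(89.36, 6.60, 220, 86.06, 8.48, 220)
```

A t statistic of this magnitude (around 4.5 under the assumed group sizes) is consistent with the small reported p-value of 0.001.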
Doctoral production in South Africa: Statistics, challenges and ...
African Journals Online (AJOL)
The past few years have witnessed new interest in doctoral production in South Africa. In the first section of the article, it is argued that this new interest has its roots in various higher education policy documents over the past decade. The second part of the article presents some of the most recent statistics on various aspects ...
All of statistics a concise course in statistical inference
Wasserman, Larry
2004-01-01
This book is for people who want to learn probability and statistics quickly. It brings together many of the main ideas in modern statistics in one place. The book is suitable for students and researchers in statistics, computer science, data mining and machine learning. This book covers a much wider range of topics than a typical introductory text on mathematical statistics. It includes modern topics like nonparametric curve estimation, bootstrapping and classification, topics that are usually relegated to follow-up courses. The reader is assumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. The text can be used at the advanced undergraduate and graduate level. Larry Wasserman is Professor of Statistics at Carnegie Mellon University. He is also a member of the Center for Automated Learning and Discovery in the School of Computer Science. His research areas include nonparametric inference, asymptotic theory, causality, and applications to astrophysics, bi...
Equilibrium statistical mechanics of lattice models
Lavis, David A
2015-01-01
Most interesting and difficult problems in equilibrium statistical mechanics concern models which exhibit phase transitions. For graduate students and more experienced researchers this book provides an invaluable reference source of approximate and exact solutions for a comprehensive range of such models. Part I contains background material on classical thermodynamics and statistical mechanics, together with a classification and survey of lattice models. The geometry of phase transitions is described and scaling theory is used to introduce critical exponents and scaling laws. An introduction is given to finite-size scaling, conformal invariance and Schramm-Loewner evolution. Part II contains accounts of classical mean-field methods. The parallels between Landau expansions and catastrophe theory are discussed and Ginzburg-Landau theory is introduced. The extension of mean-field theory to higher orders is explored using the Kikuchi-Hijmans-De Boer hierarchy of approximations. In Part III the use of alge...
Male circumcision decreases penile sensitivity as measured in a large cohort.
Bronselaer, Guy A; Schober, Justine M; Meyer-Bahlburg, Heino F L; T'Sjoen, Guy; Vlietinck, Robert; Hoebeke, Piet B
2013-05-01
WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: The sensitivity of the foreskin and its importance in erogenous sensitivity is widely debated and controversial. This is part of the ongoing public debate on circumcision for non-medical reasons. Some studies on the effect of circumcision on sexual function are available today; however, they vary widely in outcome. The present study shows, in a large cohort of men and based on self-assessment, that the foreskin has erogenous sensitivity. It is shown that the foreskin is more sensitive than the uncircumcised glans mucosa, which means that after circumcision genital sensitivity is lost. In the debate on clitoral surgery, the proven loss of sensitivity has been the strongest argument for changing medical practice. In the present study there is strong evidence on the erogenous sensitivity of the foreskin. This knowledge can hopefully help doctors and patients in their decisions on circumcision for non-medical reasons. To test the hypothesis that sensitivity of the foreskin is a substantial part of male penile sensitivity. To determine the effects of male circumcision on penile sensitivity in a large sample. The study aimed at a sample size of ≈1000 men. Given the intimate nature of the questions and the intended large sample size, the authors decided to create an online survey. Respondents were recruited by means of leaflets and advertising. The analysis sample consisted of 1059 uncircumcised and 310 circumcised men. For the glans penis, circumcised men reported decreased sexual pleasure and lower orgasm intensity. They also stated that more effort was required to achieve orgasm, and a higher percentage of them experienced unusual sensations (burning, prickling, itching, or tingling and numbness of the glans penis). For the penile shaft, a higher percentage of circumcised men described discomfort and pain, numbness and unusual sensations. In comparison to men circumcised before puberty, men circumcised during adolescence or
Beyond the GUM: variance-based sensitivity analysis in metrology
International Nuclear Information System (INIS)
Lira, I
2016-01-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities, and if these quantities are assumed to be statistically independent, sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
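A minimal sketch of the variance-based analysis reviewed here: first-order Sobol indices estimated by a pick-freeze Monte Carlo scheme for a toy linear model. The model, sample size and seed are illustrative, not from the article:

```python
import random

def sobol_first_order(f, dim, n, rng):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for a model f of `dim` independent U(0,1) inputs."""
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(a) for a in A]
    f0 = sum(yA) / n
    var = sum(y * y for y in yA) / n - f0 * f0
    out = []
    for i in range(dim):
        # C_i shares column i with A; the remaining columns come from B
        yC = [f([a[j] if j == i else b[j] for j in range(dim)])
              for a, b in zip(A, B)]
        vi = sum(ya * yc for ya, yc in zip(yA, yC)) / n - f0 * f0
        out.append(vi / var)
    return out

rng = random.Random(1)
# Linear test model Y = 2*X1 + X2: analytic indices are S1 = 0.8, S2 = 0.2
s1, s2 = sobol_first_order(lambda x: 2 * x[0] + x[1], 2, 20000, rng)
```

For this linear model the Sobol ranking agrees with what the GUM's law of propagation of uncertainties already gives, which is exactly the article's point; the Monte Carlo machinery only pays off for non-linear models.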
Mammography Quality Standards Act and Program: MQSA Insights, MQSA National Statistics. Archived Scorecard Statistics: 2018, 2017, 2016.
Raut, Chetan Purushottam; Sethi, Kunal Sunder; Kohale, Bhagyashree; Mamajiwala, Alefiya; Warang, Ayushya
2018-01-01
Postsurgical root sensitivity has always been an enigma to periodontists. There is a plethora of evidence suggesting the presence of root sensitivity following periodontal flap surgical procedures. Thus, the aim of the present study was to compare and evaluate the effect of low-power diode lasers with and without topical application of stannous fluoride (SnF2) gel in the treatment of root sensitivity, and also to evaluate whether the laser creates any placebo effect in the control group. Thirty patients participated in this study and 99 teeth were included. Root sensitivity was assessed for all groups with a Verbal Rating Scale (VRS). For each patient, the teeth were randomized into three groups. In test Group I, sensitive teeth were treated with SnF2 and diode laser. In test Group II, sensitive teeth were irradiated with laser only. In the control group, no treatment was performed. The mean ± standard deviation (SD) scores for the VRS and Visual Analog Scale at baseline were not statistically significantly different (P > 0.05) between the three groups. After 15 min, a statistically significant difference was seen in test Group I and test Group II, although no difference was found in the control group. At the 15th and 30th day, the mean ± SD scores were statistically significant (P < 0.05). Diode lasers alone and in combination with 0.4% SnF2 were effective in the treatment of root sensitivity after access flap surgery.
International Nuclear Information System (INIS)
Wang, Brian; Kim, C.-H.; Xu, X. George
2004-01-01
Metal-oxide-semiconductor field effect transistor (MOSFET) dosimeters are increasingly utilized in radiation therapy and diagnostic radiology. While it is difficult to characterize the dosimeter responses for monoenergetic sources by experiments, this paper reports a detailed Monte Carlo simulation model of the High-Sensitivity MOSFET dosimeter using Monte Carlo N-Particle (MCNP) 4C. A dose estimator method was used to calculate the dose in the extremely thin sensitive volume. Efforts were made to validate the MCNP model using three experiments: (1) comparison of the simulated dose with the measurement of a Cs-137 source, (2) comparison of the simulated dose with analytical values, and (3) comparison of the simulated energy dependence with theoretical values. Our simulation results show that the MOSFET dosimeter has a maximum response at about 40 keV of photon energy. The energy dependence curve is also found to agree with the predicted value from theory within statistical uncertainties. The angular dependence study shows that the MOSFET dosimeter has a higher response (about 8%) when photons come from the epoxy side, compared with the kapton side for the Cs-137 source
Higher-order turbulence statistics of wave–current flow over a submerged hemisphere
Energy Technology Data Exchange (ETDEWEB)
Barman, Krishnendu; Debnath, Koustuv; Mazumder, Bijoy S, E-mail: debnath_koustuv@yahoo.com [Department of Aerospace Engineering and Applied Mechanics, Indian Institute of Engineering Science and Technology, Shibpur, Howrah 711103, West Bengal (India)
2017-04-15
Higher-order turbulence characteristics such as turbulence production, turbulence kinetic energy flux, third order moments and velocity spectra associated with turbulent bursting events due to the influence of a submerged hemisphere under wave–current interactions are presented. The velocity components were measured using three dimensional (3D) 16 MHz micro-acoustic Doppler velocimetry (Micro-ADV). In the wave–current interactions, the contributions of turbulent bursting events such as ejections and sweeps significantly reduce in comparison to the current-only case. The distributions of the mean time intervals of ejection and sweeping events are found to alter due to the superposition of surface waves. Results also depict that the turbulence production in the wake region of the hemisphere reduces remarkably, due to the superposition of surface waves on the current. Further, spectral and co-spectral analysis demonstrates that there is a significant reduction of power spectral peak for both longitudinal and bottom-normal velocities upon superposition of surface waves, which signifies a remarkable change in energy distribution between different frequencies of waves. (paper)
SyntEyes KTC: higher order statistical eye model for developing keratoconus.
Rozema, Jos J; Rodriguez, Pablo; Ruiz Hidalgo, Irene; Navarro, Rafael; Tassignon, Marie-José; Koppen, Carina
2017-05-01
To present and validate a stochastic eye model for developing keratoconus, for example to improve optical corrective strategies. This could be particularly useful for researchers who do not have access to original keratoconic data. The Scheimpflug tomography, ocular biometry and wavefront of 145 keratoconic right eyes were collected. These data were processed using principal component analysis for parameter reduction, followed by a multivariate Gaussian fit that produces a stochastic model for keratoconus (SyntEyes KTC). The output of this model is filtered to remove the occasional incorrect topography patterns by either an automatic or manual procedure. Finally, the output of this keratoconus model is matched to that of the original model for normal eyes using the non-corneal biometry to obtain a description of keratoconus development. The synthetic data generated by the model were found to be statistically equivalent to the original data (non-parametric Mann-Whitney equivalence test; 145/154 passed). The variability of the synthetic data, however, was often significantly less than that of the original data, especially for the higher order Zernike terms of corneal elevation (non-parametric Levene test; p < 0.05). The filtering procedure removed the occasional eyes with incorrect topographies. Interpolation between matched pairs of normal and keratoconic SyntEyes appears to provide an adequate model for keratoconus progression. The synthetic data provided by the proposed keratoconus model closely resemble actual clinical data and may be used for a range of research applications when (sufficient) real data are not available. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
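The modeling pipeline described here (principal component analysis for parameter reduction, then a multivariate Gaussian fit, then synthetic sampling) can be sketched in a few lines. The biometric dimensions, means and covariances below are invented stand-ins, not SyntEyes parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for biometric data: 200 "eyes", 4 correlated parameters
true_cov = np.array([[1.0, 0.6, 0.2, 0.0],
                     [0.6, 1.0, 0.3, 0.1],
                     [0.2, 0.3, 1.0, 0.4],
                     [0.0, 0.1, 0.4, 1.0]])
data = rng.multivariate_normal(mean=[43.0, 1.2, 3.5, 23.5],
                               cov=true_cov, size=200)

# 1. PCA for parameter reduction
mu = data.mean(axis=0)
centered = data - mu
eigval, eigvec = np.linalg.eigh(np.cov(centered, rowvar=False))
order = np.argsort(eigval)[::-1]
components = eigvec[:, order[:3]]        # keep the 3 leading components
scores = centered @ components

# 2. Multivariate Gaussian fit in PC space, then draw synthetic "eyes"
score_cov = np.cov(scores, rowvar=False)
synthetic_scores = rng.multivariate_normal(scores.mean(axis=0),
                                           score_cov, size=500)
synthetic = synthetic_scores @ components.T + mu
```

The reduced-variance effect noted in the abstract is visible in this sketch too: variance along the discarded components is lost, so synthetic data are slightly "tamer" than the originals.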
Generalized quantum statistics
International Nuclear Information System (INIS)
Chou, C.
1992-01-01
In the paper, a non-anyonic generalization of quantum statistics is presented, in which Fermi-Dirac statistics (FDS) and Bose-Einstein statistics (BES) appear as two special cases. The new quantum statistics, which is characterized by the dimension of its single particle Fock space, contains three consistent parts, namely the generalized bilinear quantization, the generalized quantum mechanical description and the corresponding statistical mechanics
Generalized Statistical Mechanics at the Onset of Chaos
Directory of Open Access Journals (Sweden)
Alberto Robledo
2013-11-01
Transitions to chaos in archetypal low-dimensional nonlinear maps offer real and precise model systems in which to assess proposed generalizations of statistical mechanics. The known association of chaotic dynamics with the structure of Boltzmann–Gibbs (BG) statistical mechanics has suggested the potential verification of these generalizations at the onset of chaos, when the only Lyapunov exponent vanishes and ergodic and mixing properties cease to hold. There are three well-known routes to chaos in these deterministic dissipative systems, period-doubling, quasi-periodicity and intermittency, which provide the setting in which to explore the limit of validity of the standard BG structure. It has been shown that there is a rich and intricate behavior for both the dynamics within and towards the attractors at the onset of chaos and that these two kinds of properties are linked via generalized statistical-mechanical expressions. Amongst the topics presented are: (i) permanently growing sensitivity fluctuations and their infinite family of generalized Pesin identities; (ii) the emergence of statistical-mechanical structures in the dynamics along the routes to chaos; (iii) dynamical hierarchies with modular organization; and (iv) limit distributions of sums of deterministic variables. The occurrence of generalized entropy properties in condensed-matter physical systems is illustrated by considering critical fluctuations, localization transition and glass formation. We complete our presentation with the description of the manifestations of the dynamics at the transitions to chaos in various kinds of complex systems, such as, frequency and size rank distributions and complex network images of time series. We discuss the results.
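The distinction between a vanishing Lyapunov exponent at the onset of chaos and a positive one inside the chaotic regime can be checked numerically for the logistic map, the archetypal period-doubling system mentioned above. At r = 4 the exact exponent is ln 2, while below the period-doubling accumulation point it is negative:

```python
import math

def lyapunov_logistic(r, x0=0.2, transient=1000, iters=100000):
    """Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(iters):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / iters

# Fully chaotic regime: the exact value at r = 4 is ln 2
lam_chaotic = lyapunov_logistic(4.0)
# Inside the period-doubling cascade (period-4 window) it is negative
lam_periodic = lyapunov_logistic(3.5)
```

At the accumulation point r ≈ 3.5699 the exponent crosses zero, which is precisely the regime where the generalized (non-BG) statistical-mechanical descriptions discussed in the article apply.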
Prediction of skin sensitization potency using machine learning approaches.
Zang, Qingda; Paris, Michael; Lehmann, David M; Bell, Shannon; Kleinstreuer, Nicole; Allen, David; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Strickland, Judy
2017-07-01
The replacement of animal use in testing for regulatory classification of skin sensitizers is a priority for US federal agencies that use data from such testing. Machine learning models that classify substances as sensitizers or non-sensitizers without using animal data have been developed and evaluated. Because some regulatory agencies require that sensitizers be further classified into potency categories, we developed statistical models to predict skin sensitization potency for murine local lymph node assay (LLNA) and human outcomes. Input variables for our models included six physicochemical properties and data from three non-animal test methods: direct peptide reactivity assay; human cell line activation test; and KeratinoSens™ assay. Models were built to predict three potency categories using four machine learning approaches and were validated using external test sets and leave-one-out cross-validation. A one-tiered strategy modeled all three categories of response together while a two-tiered strategy modeled sensitizer/non-sensitizer responses and then classified the sensitizers as strong or weak sensitizers. The two-tiered model using the support vector machine with all assay and physicochemical data inputs provided the best performance, yielding accuracy of 88% for prediction of LLNA outcomes (120 substances) and 81% for prediction of human test outcomes (87 substances). The best one-tiered model predicted LLNA outcomes with 78% accuracy and human outcomes with 75% accuracy. By comparison, the LLNA predicts human potency categories with 69% accuracy (60 of 87 substances correctly categorized). These results suggest that computational models using non-animal methods may provide valuable information for assessing skin sensitization potency. Copyright © 2017 John Wiley & Sons, Ltd.
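The two-tiered strategy can be sketched as pure control flow: tier 1 separates sensitizers from non-sensitizers, tier 2 grades the sensitizers as strong or weak. The score weighting and thresholds below are invented placeholders standing in for the trained support vector machines:

```python
def two_tier_classify(reactivity, activation,
                      threshold1=0.5, threshold2=0.75):
    """Sketch of the two-tiered strategy. A toy linear score on two
    normalized assay readouts stands in for the SVMs; the weights and
    thresholds are illustrative assumptions, not fitted values."""
    score = 0.6 * reactivity + 0.4 * activation
    if score < threshold1:               # tier 1: sensitizer or not
        return "non-sensitizer"
    # tier 2: grade the sensitizers by potency
    return "strong sensitizer" if score >= threshold2 else "weak sensitizer"

# Hypothetical normalized assay readouts for three substances
labels = [two_tier_classify(0.1, 0.2),    # low in both assays
          two_tier_classify(0.6, 0.7),    # intermediate
          two_tier_classify(0.9, 0.95)]   # high in both
```

The design advantage of tiering, reflected in the reported accuracies, is that each stage solves an easier binary problem than one three-way classification.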
Predictors of Career Adaptability Skill among Higher Education Students in Nigeria
Directory of Open Access Journals (Sweden)
Amos Shaibu Ebenehi
2016-12-01
This paper examined predictors of career adaptability skill among higher education students in Nigeria. A sample of 603 higher education students, randomly selected from six colleges of education in Nigeria, participated in this study. A self-report questionnaire was used for data collection, and multiple linear regression analysis was used to analyze the data. Results indicated that the model explained 33.3% of the variance in career adaptability skill. Four of the five predictor variables significantly predicted career adaptability skill. Among the four, career self-efficacy sources was the strongest predictor, followed by personal goal orientation, career future concern, and perceived social support. Vocational identity did not significantly predict career adaptability skill. The study suggested that similar studies should be replicated in other parts of the world in view of the importance of career adaptability skill to the smooth transition of graduates from school to the labor market. The study concluded by urging stakeholders of higher education institutions in Nigeria to provide a career exploration database for students and to encourage career intervention programs that enhance career adaptability skill.
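The multiple linear regression analysis described can be sketched on simulated data; the coefficients below are chosen only so that the model explains roughly a third of the variance, echoing the reported 33.3%, and are not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated standardized predictor scores for 603 hypothetical students:
# self-efficacy sources, goal orientation, future concern, social support
n = 603
X = rng.normal(size=(n, 4))
beta_true = np.array([0.50, 0.35, 0.25, 0.25])   # assumed, illustrative
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# Ordinary least squares fit with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r_squared = 1 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
```

With these assumed coefficients the explained variance lands near one third, and the fitted coefficients recover the simulated ones up to sampling noise.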
CONCURRENT CONTACT SENSITIZATION TO METALS IN DENTAL EXPOSURES
Directory of Open Access Journals (Sweden)
Maya Lyapina
2018-03-01
Purpose: Sensitization to metals is a significant problem both for dental patients treated with dental materials and for dental professionals in occupational exposures. The purpose of the present study was to evaluate the incidence of concurrent contact sensitization to metals relevant to dental practice among students of dental medicine, students from a dental technician school, dental professionals and patients. Material and Methods: A total of 128 participants were included in the study. All of them were patch-tested with nickel, cobalt, copper, potassium dichromate, palladium, aluminium, gold and tin. The results were subject to statistical analysis (p < 0.05). Results: For the whole studied population, potassium dichromate exhibited concomitant reactivity most often; copper and tin also often manifested co-reactivity. For the groups exposed in dental practice, potassium dichromate and tin were outlined as the most often co-reacting metal allergens, but statistical significance concerning co-sensitization to copper and the other metals was established only for aluminium. An increased incidence and OR for concomitant sensitization were established: to cobalt and nickel in the group of dental students; to copper and nickel in the control group; to palladium and nickel in the group of dental professionals, the group of students of dental medicine and the control group; to potassium dichromate and cobalt in the group of dental students; to copper and palladium in the control group of dental patients; to potassium dichromate and copper in the group of dental professionals; to copper and aluminium in the groups of students from the dental technician school and of dental professionals; to copper and gold in the group of dental professionals and the group of dental patients; to potassium dichromate and aluminium in the group of dental professionals; to potassium dichromate and gold in the group of dental professionals; and to
On chaos in quantum mechanics: The two meanings of sensitive dependence
International Nuclear Information System (INIS)
Ingraham, R.L.; Luna Acosta, G.A.
1993-08-01
Sensitive dependence on initial conditions, the most important signature of chaos, can mean failure of Lyapunov stability, the primary meaning adopted in dynamical systems theory, or the presence of positive Lyapunov exponents, the meaning favored in physics. These are not equivalent in general. We show that there is sensitive dependence in quantum mechanics, in the sense of violation of Lyapunov stability, for maps of the state vector involving unbounded operators A. This is true even for bounded quantum systems, where the corresponding Lyapunov exponents are all zero. Experiments to reveal this sensitive dependence, a definite though unfamiliar prediction of quantum mechanics, should be devised. It may also invalidate the usual assumption of linear response theory in quantum statistical mechanics in some cases. (author) 13 refs
Rumsey, Deborah
2011-01-01
The fun and easy way to get down to business with statistics. Stymied by statistics? No fear: this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first semester statistics cou
Lensing corrections to the Eg(z) statistics from large scale structure
Moradinezhad Dizgah, Azadeh; Durrer, Ruth
2016-09-01
We study the impact of the often neglected lensing contribution to galaxy number counts on the Eg statistics, which is used to constrain deviations from GR. This contribution affects both the galaxy-galaxy and the convergence-galaxy spectra, and it is larger for the latter. At the higher redshifts probed by upcoming surveys, for instance at z = 1.5, neglecting this term induces an error of (25-40)% in the spectra and therefore in the Eg statistics, which is constructed from the combination of the two. Moreover, including it renders the Eg statistics scale- and bias-dependent, and hence puts into question its very objective.
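For context, the GR baseline that such measurements test is often written as E_g(z) = Ω_m,0 / f(z), with growth rate f(z) ≈ Ω_m(z)^0.55 in flat ΛCDM. A sketch of that baseline (the cosmological parameter values are illustrative):

```python
def omega_m(z, om0=0.3):
    """Matter density parameter at redshift z in flat LCDM."""
    a3 = (1 + z) ** 3
    return om0 * a3 / (om0 * a3 + 1 - om0)

def eg_gr(z, om0=0.3, gamma=0.55):
    """GR prediction E_g(z) = Omega_m,0 / f(z) with growth rate
    f(z) ~ Omega_m(z)**gamma -- the baseline that the lensing
    corrections discussed in the paper perturb by (25-40)%."""
    return om0 / omega_m(z, om0) ** gamma

eg_low = eg_gr(0.0)    # roughly 0.58 for Omega_m,0 = 0.3
eg_high = eg_gr(1.5)   # the redshift where the neglected term matters most
```

A scale-independent measured E_g matching this curve is the GR consistency check; the paper's point is that the neglected lensing term spoils that scale independence at high z.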
Statistical conditional sampling for variable-resolution video compression.
Directory of Open Access Journals (Sweden)
Alexander Wong
In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Fields (CRFs) and statistical conditional sampling, in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.
International Nuclear Information System (INIS)
Harper, W.V.; Gupta, S.K.
1983-10-01
A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of the deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited-parameter capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
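A minimal sketch of the LHS scheme used as the statistical sampling approach: each input dimension is divided into n equal-probability strata, and each stratum receives exactly one sample, which is what gives LHS better coverage than plain random sampling for the same n:

```python
import random

def latin_hypercube(n, dim, rng):
    """n samples in [0,1)^dim with exactly one sample per
    equal-probability stratum in every dimension."""
    cols = []
    for _ in range(dim):
        strata = list(range(n))
        rng.shuffle(strata)                       # random stratum order
        # one uniform draw inside each stratum [s/n, (s+1)/n)
        cols.append([(s + rng.random()) / n for s in strata])
    return [list(point) for point in zip(*cols)]

rng = random.Random(42)
sample = latin_hypercube(10, 3, rng)   # 10 points in 3 dimensions
```

Mapping each unit-interval coordinate through an inverse CDF extends this to arbitrary input distributions, which is how parameter uncertainty would be represented in a study like the one above.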
Nick, Todd G
2007-01-01
Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
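The measures of location and spread discussed here, and the effect of a transformation, can be illustrated with the Python standard library (the data values below are invented):

```python
import math
import statistics as st

data = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 12.3]   # note one outlying value

location = {"mean": st.mean(data), "median": st.median(data)}
spread = {"stdev": st.stdev(data), "range": max(data) - min(data)}

# a log transformation pulls in the right tail of positively skewed data
logged = [math.log(x) for x in data]
```

The outlier pulls the mean above the median, which is why robust summaries are often reported alongside the classical ones.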
Statistical determination of significant curved I-girder bridge seismic response parameters
Seo, Junwon
2013-06-01
Curved steel bridges are commonly used at interchanges in transportation networks, and more of these structures continue to be designed and built in the United States. Though the use of these bridges continues to increase in locations that experience high seismicity, the effects of curvature and other parameters on their seismic behavior have been neglected in current risk assessment tools. These tools can evaluate the seismic vulnerability of a transportation network using fragility curves. One critical component of fragility curve development for curved steel bridges is the completion of sensitivity analyses that help identify influential parameters related to their seismic response. In this study, an accessible inventory of existing curved steel girder bridges located primarily in the Mid-Atlantic United States (MAUS) was used to establish statistical characteristics used as inputs for a seismic sensitivity study. Critical seismic response quantities were captured using 3D nonlinear finite element models. Influential parameters for these quantities were identified using statistical tools incorporating an experimental Plackett-Burman Design (PBD), including Pareto optimal plots and prediction profiler techniques. The findings revealed that the influential parameters included the number of spans, radius of curvature, maximum span length, girder spacing, and cross-frame spacing; these parameters showed varying levels of influence on the critical bridge response.
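As a sketch of the screening idea (not the study's actual PBD, factor set, or response quantities), here is an 8-run two-level design, equivalent in spirit to a Plackett-Burman screening design, with main effects estimated by contrasts:

```python
from itertools import product

# 8-run, two-level screening design for up to 7 factors (a 2^(7-4) fraction):
# full factorial in A, B, C plus generated columns D=AB, E=AC, F=BC, G=ABC.
runs = [(a, b, c, a*b, a*c, b*c, a*b*c) for a, b, c in product((-1, 1), repeat=3)]

def main_effect(col, response):
    """Contrast: mean response at the high level minus mean at the low level."""
    hi = [y for r, y in zip(runs, response) if r[col] == +1]
    lo = [y for r, y in zip(runs, response) if r[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# hypothetical response driven mainly by factors 0 and 3
response = [3.0 * r[0] + 1.0 * r[3] for r in runs]
effects = [main_effect(j, response) for j in range(7)]
```

Because the columns are mutually orthogonal, each main effect is estimated independently of the others, which is what makes such small designs usable for screening many parameters at once.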
Investigation to determine the absolute sensitivity of Rh SPNDs
International Nuclear Information System (INIS)
Adorian, F.; Patai Szabo, S.; Pos, I.
1998-01-01
The goal of the work was to find an empirical sensitivity function of the Rh SPNDs used in VVER-440 reactors and to investigate the accuracy and adequateness of the detector signal predicting capability of the associated model. In our case the model was based on the HELIOS transport code and the C-PORCA nodal code. A statistical sensitivity analysis versus some selected parameters (e.g. enrichment, burn-up) has been carried out by using a substantial amount of measured data. We also investigated the stability of the electron collecting probability of the detectors versus their burn-up and other parameters with the aim of obtaining a tuned semi-empirical formula for the detector burnup correction. (Authors)
Directory of Open Access Journals (Sweden)
Natasa M Milic
Full Text Available Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013-14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient alternative to traditional
Ultrahigh humidity sensitivity of graphene oxide.
Bi, Hengchang; Yin, Kuibo; Xie, Xiao; Ji, Jing; Wan, Shu; Sun, Litao; Terrones, Mauricio; Dresselhaus, Mildred S
2013-01-01
Humidity sensors have been extensively used in various fields, and numerous problems are encountered when using them, including low sensitivity, long response and recovery times, and narrow humidity detection ranges. Using graphene oxide (G-O) films as humidity sensing materials, we fabricate here a microscale capacitive humidity sensor. Compared with conventional capacitive humidity sensors, the G-O based humidity sensor has a sensitivity of up to 37800%, which is more than 10 times higher than that of the best conventional sensor over 15%-95% relative humidity. Moreover, our humidity sensor shows a fast response time (less than 1/4 of that of the conventional one) and recovery time (less than 1/2 of that of the conventional one). Therefore, G-O appears to be an ideal material for constructing humidity sensors with ultrahigh sensitivity for widespread applications.
On detection and assessment of statistical significance of Genomic Islands
Directory of Open Access Journals (Sweden)
Chaudhuri Probal
2008-04-01
Full Text Available Abstract Background Many of the available methods for detecting Genomic Islands (GIs) in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method, which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making the decision. Results Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Island) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. This method is applied to the Salmonella typhi CT18 genome, leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements like phage-mediated genes, transposons, integrase and IS elements, confirming their horizontal acquirement. Conclusion The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values along with a technique for visualizing statistically significant islands. The performance of our method is better than many other well known methods in terms of their sensitivity and accuracy, and in terms of specificity, it is comparable to other methods.
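The Monte-Carlo testing idea, comparing a candidate segment against randomly chosen segments of the same chromosome, can be sketched for a single GC-content statistic (the sequences are synthetic, and Design-Island itself combines several composition measures):

```python
import random

random.seed(1)

def gc(seq):
    """Fraction of G/C bases in a sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

# toy chromosome: ~33% GC background with one inserted high-GC "island"
bg = "".join(random.choice("ATATGC") for _ in range(20000))
island = "".join(random.choice("GGCCAT") for _ in range(1000))
chrom = bg[:10000] + island + bg[10000:]

def monte_carlo_p(chrom, start, length, n=2000, rng=random.Random(2)):
    """Empirical P-value: fraction of random same-length segments whose
    GC content is at least as extreme as the candidate segment's."""
    observed = gc(chrom[start:start + length])
    hits = 0
    for _ in range(n):
        s = rng.randrange(len(chrom) - length)
        if gc(chrom[s:s + length]) >= observed:
            hits += 1
    return (hits + 1) / (n + 1)

p_island = monte_carlo_p(chrom, 10000, 1000)       # candidate island
p_background = monte_carlo_p(chrom, 2000, 1000)    # ordinary background window
```

Because the null distribution is built from the chromosome itself, the resulting P-value needs no parametric assumptions about base composition.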
Leonard, R H; Haywood, V B; Phillips, C
1997-08-01
The purpose of this study was to determine risk factors in the development of tooth sensitivity and gingival irritation associated with the nightguard vital bleaching technique. The potential risk factors evaluated (sex, age, reported allergy, whitening solution, number of times the solution was changed daily [its usage pattern], and dental arch) were collected from the daily log form turned in by each of the 64 participants after completion of the 6-week lightening process. Also evaluated for each participant, from color slides, were tooth characteristics such as gingival recession, defective restorations, abfraction lesions, enamel-cementum abrasion, etc., and reported side effects. The generalized Mantel-Haenszel statistic was used to assess the association between the potential risk factors and the development of tooth sensitivity and/or gingival irritation. No statistical relationship existed between age, sex, allergy, tooth characteristics, or the dental arch lightened and the development of side effects. Initially, a statistically significant association existed between side effects and the whitening solution used. However, when the analysis was controlled for usage pattern, this relationship disappeared. Patients who changed the whitening solution more than once a day reported statistically significantly more side effects than did those who did not change the whitening solution during their usage time.
Estimation of measurement variance in the context of environment statistics
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics would be required to produce higher quality statistical information. For this, timely, reliable and comparable data are needed. A lack of proper and uniform definitions and of unambiguous classifications poses serious problems for procuring high-quality data; these cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered is two-stage sampling.
Statistics in Schools: Educate your students about the value and everyday use of statistics. The Statistics in Schools program provides resources for teaching and learning with real-life data. Explore the site for standards-aligned, classroom-ready activities.
Park, Jong In; Park, Jong Min; Kim, Jung-In; Park, So-Yeon; Ye, Sung-Joon
2015-12-01
The aim of this study was to investigate the sensitivity of the gamma-index method according to various gamma criteria for volumetric modulated arc therapy (VMAT). Twenty head and neck (HN) and twenty prostate VMAT plans were retrospectively selected for this study. Both global and local 2D gamma evaluations were performed with criteria of 3%/3 mm, 2%/2 mm, 1%/2 mm and 2%/1 mm. In this study, the global and local gamma-index calculated the differences in doses relative to the maximum dose and to the dose at the current measurement point, respectively. Using log files acquired during delivery, the differences in parameters at every control point between the VMAT plans and the log files were acquired. The differences in dose-volumetric parameters between the original VMAT plans and VMAT plans reconstructed from the log files were calculated. Spearman's rank correlation coefficients (rs) were calculated between the passing rates and those differences. Statistically significant correlations were observed between the global 1%/2 mm, local 1%/2 mm and local 2%/1 mm passing rates and the MLC position differences (rs = -0.712, -0.628 and -0.581, respectively). The numbers of rs values with statistical significance between the passing rates and the changes in dose-volumetric parameters were largest for the global 2%/2 mm (n = 16), global 2%/1 mm (n = 15) and local 2%/1 mm (n = 13) criteria. The local gamma-index method with 2%/1 mm generally showed the highest sensitivity for detecting deviations between a VMAT plan and its delivery. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
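A minimal 1D version of the global and local gamma evaluation described above (the dose profiles are synthetic Gaussians, not the study's VMAT data):

```python
import math

def gamma_passing_rate(ref, ev, spacing, dose_pct, dist_mm, local=False):
    """1D gamma analysis: the dose criterion is a percentage of the global
    maximum (global mode) or of the local reference dose (local mode);
    dist_mm is the distance-to-agreement criterion."""
    d_max = max(ref)
    passed = 0
    for i, dr in enumerate(ref):
        crit = dose_pct / 100.0 * (dr if local else d_max)
        best = float("inf")
        for j, de in enumerate(ev):
            dd = (de - dr) / crit if crit > 0 else float("inf")
            dx = (j - i) * spacing / dist_mm
            best = min(best, math.hypot(dd, dx))
        passed += best <= 1.0
    return passed / len(ref)

# synthetic Gaussian dose profiles on a 1 mm grid; the second is shifted 0.2 mm
profile = [100.0 * math.exp(-((x - 25) / 8.0) ** 2) for x in range(50)]
shifted = [100.0 * math.exp(-((x - 25.2) / 8.0) ** 2) for x in range(50)]

g_global = gamma_passing_rate(profile, shifted, 1.0, 3, 3)             # 3%/3 mm global
g_local = gamma_passing_rate(profile, shifted, 1.0, 1, 2, local=True)  # 1%/2 mm local
```

The stricter local criterion normalizes by the local dose, so low-dose tail points fail for small shifts that the global 3%/3 mm criterion tolerates, mirroring the sensitivity ordering reported in the abstract.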
Appelbaum, L Gregory; Cain, Matthew S; Darling, Elise F; Mitroff, Stephen R
2013-08-01
Action video game playing has been experimentally linked to a number of perceptual and cognitive improvements. These benefits are captured through a wide range of psychometric tasks and have led to the proposition that action video game experience may promote the ability to extract statistical evidence from sensory stimuli. Such an advantage could arise from a number of possible mechanisms: improvements in visual sensitivity, enhancements in the capacity or duration for which information is retained in visual memory, or higher-level strategic use of information for decision making. The present study measured the capacity and time course of visual sensory memory using a partial report performance task as a means to distinguish between these three possible mechanisms. Sensitivity measures and parameter estimates that describe sensory memory capacity and the rate of memory decay were compared between individuals who reported high levels and low levels of action video game experience. Our results revealed a uniform increase in partial report accuracy at all stimulus-to-cue delays for action video game players but no difference in the rate or time course of the memory decay. The present findings suggest that action video game playing may be related to enhancements in the initial sensitivity to visual stimuli, but not to a greater retention of information in iconic memory buffers.
Higher plasma level of STIM1, OPG are correlated with stent restenosis after PCI.
Li, Haibin; Jiang, Zhian; Liu, Xiangdong; Yang, Zhihui
2015-01-01
Percutaneous Coronary Intervention (PCI) is one of the most effective treatments for Coronary Heart Disease (CHD), but the high rate of In-Stent Restenosis (ISR) has plagued clinicians after PCI. We aimed to investigate the correlation of plasma Stromal Interaction Molecule 1 (STIM1) and Osteoprotegerin (OPG) levels with stent restenosis after PCI. A total of 100 consecutive patients with CHD who received a PCI procedure were recruited. Coronary angiography was performed 8 months after PCI. Patients were then divided into 2 groups: the observation group comprised patients with postoperative stenosis after intervention; the control group comprised patients with no postoperative stenosis. The plasma levels of STIM1 and OPG in all patients were tested before and after intervention. Pearson correlation and multiple linear regression analyses were performed to analyze the correlation between STIM1 and OPG levels and postoperative stenosis. 35 cases fell into the observation group and the other 65 into the control group. The plasma levels of STIM1 and OPG showed no statistical difference before the PCI procedure, but a higher level of high-sensitivity C-reactive protein (Hs-CRP) was observed in the observation group. We observed higher levels of plasma STIM1 and OPG in the observation group compared with the control group after the PCI procedure (P < 0.05). Higher plasma levels of STIM1 and OPG are correlated with stent restenosis after PCI, which could provide useful information for restenosis control after PCI.
Antibiotic sensitivity of Enterobacteriaceae at a tertiary care center in India
Directory of Open Access Journals (Sweden)
Summaiya Mulla
2011-01-01
Full Text Available Aims and Objectives: It has been observed that various microorganisms are acquiring resistance to most of the available potent antibiotics; hence, there is a need for every hospital to follow the use of antibiotics according to the antibiotic sensitivity pattern in that particular hospital or geographical area. It has been reported that the Enterobacteriaceae group of microorganisms is increasingly acquiring resistance to many antibiotics and this resistance varies geographically. As there is a shortage of recent data with respect to Indian hospitals, this particular study was designed with the aim of establishing the sensitivity pattern of the Enterobacteriaceae group of microorganisms to various antibiotics. Materials and Methods: Data of antibiotic sensitivity from December 2010 to April 2011 for different Enterobacteriaceae were taken from the Department of Microbiology, Govt. Medical College, Surat. Sensitivity of different Enterobacteriaceae was summarized using descriptive statistics. Results: E. coli (55.6%) and Klebsiella (31.2%) were the most frequent bacteria isolated. Enterobacteriaceae were much less sensitive to amoxicillin + clavulanic acid (13.7%), chloramphenicol (7.6%), cefoperazone (14.4%), cefixime (15.7%), and cefuroxime (17.6%). Sensitivity to aztreonam was 32.7%. Sensitivity to the carbapenem group of drugs included in this study, i.e., meropenem, was 69.8%. The highest sensitivity was shown for ceftazidime (74.1%). E. coli is more sensitive to meropenem as compared with Klebsiella. Conclusion: Sensitivity of the Enterobacteriaceae group of microorganisms to known antibiotics is decreasing. Decreased sensitivity to the carbapenem group of antibiotics is a matter of concern.
Axial asymmetry for improved sensitivity in MEMS piezoresistors
International Nuclear Information System (INIS)
Shuvra, Pranoy Deb; McNamara, Shamus; Lin, Ji-Tzuoh; Alphenaar, Bruce; Walsh, Kevin; Davidson, Jim
2016-01-01
The strain induced resistance change is compared for asymmetric, symmetric and diffused piezoresistive elements. Finite element analysis is used to simulate the performance of a T-shaped piezoresistive MEMS cantilever, including a lumped parameter model to show the effect of geometric asymmetry on the piezoresistor sensitivity. Asymmetric piezoresistors are found to be much more sensitive to applied load than the typical symmetric design producing about two orders of magnitude higher resistance change. This is shown to be due to the difference in the stress distribution in the symmetric and asymmetric geometries resulting in less resistance change cancellation in the asymmetric design. Although still less sensitive than diffused piezoresistors, asymmetric piezoresistors are sensitive enough for many applications, and are much easier to fabricate and integrate into MEMS devices. (paper)
Sensitivity of wildlife habitat models to uncertainties in GIS data
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
CFAssay: statistical analysis of the colony formation assay
International Nuclear Information System (INIS)
Braselmann, Herbert; Michna, Agata; Heß, Julia; Unger, Kristian
2015-01-01
Colony formation assay is the gold standard to determine cell reproductive death after treatment with ionizing radiation, applied for different cell lines or in combination with other treatment modalities. Associated linear-quadratic cell survival curves can be calculated with different methods. For easy code exchange and methodological standardisation among collaborating laboratories, a software package CFAssay for R (R Core Team, R: A Language and Environment for Statistical Computing, 2014) was established to perform thorough statistical analysis of linear-quadratic cell survival curves after treatment with ionizing radiation and of two-way designs of experiments with chemical treatments only. CFAssay offers maximum likelihood and related methods by default; the least squares or weighted least squares method can be optionally chosen. A test for comparison of cell survival curves and an ANOVA test for experimental two-way designs are provided. For the two presented examples the estimated parameters do not differ much between maximum likelihood and least squares. However, the dispersion parameter of the quasi-likelihood method is much more sensitive to statistical variation in the data than the multiple R² coefficient of determination from the least squares method. The dispersion parameter for goodness of fit and different plot functions in CFAssay help to evaluate experimental data quality. As open source software, it facilitates interlaboratory code sharing between users.
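CFAssay itself is an R package; as a language-neutral sketch of the least-squares variant it offers, the linear-quadratic model ln S = -(αD + βD²) can be fitted in closed form (the α and β values below are invented, noise-free test data):

```python
import math

def fit_lq(doses, surviving_fractions):
    """Least-squares fit of ln S = -(alpha*D + beta*D^2).
    The 2x2 normal equations are solved in closed form."""
    y = [math.log(s) for s in surviving_fractions]
    s11 = sum(d ** 2 for d in doses)
    s12 = sum(d ** 3 for d in doses)
    s22 = sum(d ** 4 for d in doses)
    b1 = -sum(d * yi for d, yi in zip(doses, y))
    b2 = -sum(d * d * yi for d, yi in zip(doses, y))
    det = s11 * s22 - s12 * s12
    alpha = (b1 * s22 - s12 * b2) / det
    beta = (s11 * b2 - s12 * b1) / det
    return alpha, beta

doses = [0.0, 1.0, 2.0, 4.0, 6.0, 8.0]            # Gy
# synthetic noise-free survival data with alpha = 0.2 /Gy, beta = 0.05 /Gy^2
sf = [math.exp(-(0.2 * d + 0.05 * d * d)) for d in doses]
alpha, beta = fit_lq(doses, sf)
```

With real colony counts, weighting (or the maximum-likelihood route CFAssay uses by default) matters because the variance of log survival grows as colony numbers shrink; this unweighted fit is only the simplest baseline.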
Park, Mihyun; Kjervik, Diane; Crandell, Jamie; Oermann, Marilyn H
2012-07-01
This study described the relationships between academic class and student moral sensitivity and reasoning and between curriculum design components for ethics education and student moral sensitivity and reasoning. The data were collected from freshman (n = 506) and senior students (n = 440) in eight baccalaureate nursing programs in South Korea by survey; the survey consisted of the Korean Moral Sensitivity Questionnaire and the Korean Defining Issues Test. The results showed that moral sensitivity scores in patient-oriented care and conflict were higher in senior students than in freshman students. Furthermore, more hours of ethics content were associated with higher principled thinking scores of senior students. Nursing education in South Korea may have an impact on developing student moral sensitivity. Planned ethics content in nursing curricula is necessary to improve moral sensitivity and moral reasoning of students.
DEFF Research Database (Denmark)
Tura, A.; Bagger, J. I.; Ferrannini, E.
2017-01-01
Background and aims The incretin effect is impaired in type 2 diabetes (T2D), but the underlying mechanisms are only partially understood. We investigated the relationships between the time course of the incretin effect and that of glucose-dependent insulinotropic polypeptide (GIP) and glucagon......-like peptide-1 (GLP-1) during oral glucose tolerance tests (OGTTs), thereby estimating incretin sensitivity of the beta cell, and its associated factors. Methods and results Eight patients with T2D and eight matched subjects with normal glucose tolerance (NGT) received 25, 75, and 125 g OGTTs and corresponding...... was correlated with that of both GIP and GLP-1 in each subject (median r = 0.67 in NGT and 0.45 in T2D). We calculated an individual beta cell sensitivity to incretins (SINCR) using a weighted average of GIP and GLP-1 (pooled incretin concentration, PIC), as the slope of the relationship between PINCR and PIC...
The clinic-statistic study of osteoporosis
Directory of Open Access Journals (Sweden)
Florin MARCU
2008-05-01
Full Text Available Osteoporosis is the most common metabolic bone disease and is characterized by a reduction in bone mass and the deterioration of bone quality, conferring a higher risk of fractures and injuries. Osteoporosis reaches clinical attention when it is severe enough to induce microfractures and the collapse of vertebral bodies, manifesting with back aches or a predisposition to other bone fractures. The aim of the study was to establish a statistical woman-to-man ratio among subjects diagnosed with osteoporosis through DEXA who present with clinical symptomatology. We studied a group of male and female subjects who had been diagnosed with osteoporosis through DEXA at the EURORAD clinic in Oradea from 01.01.2007 to the present time. The result of the study was that the symptomatology of osteoporosis, with pain and even cases of fractures, is more evident in female subjects than in male patients; statistically, a woman/man ratio of 6.1:1 was established.
Rapid Statistical Learning Supporting Word Extraction From Continuous Speech.
Batterink, Laura J
2017-07-01
The identification of words in continuous speech, known as speech segmentation, is a critical early step in language acquisition. This process is partially supported by statistical learning, the ability to extract patterns from the environment. Given that speech segmentation represents a potential bottleneck for language acquisition, patterns in speech may be extracted very rapidly, without extensive exposure. This hypothesis was examined by exposing participants to continuous speech streams composed of novel repeating nonsense words. Learning was measured on-line using a reaction time task. After merely one exposure to an embedded novel word, learners demonstrated significant learning effects, as revealed by faster responses to predictable than to unpredictable syllables. These results demonstrate that learners gained sensitivity to the statistical structure of unfamiliar speech on a very rapid timescale. This ability may play an essential role in early stages of language acquisition, allowing learners to rapidly identify word candidates and "break in" to an unfamiliar language.
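The statistical structure exploited in speech segmentation is usually described by transitional probabilities between syllables, high within a word and low across word boundaries; a toy stream (the nonsense words are invented here, not the study's stimuli) makes this concrete:

```python
import random
from collections import Counter

random.seed(3)

# hypothetical trisyllabic nonsense words with non-overlapping syllables
words = ["tibudo", "pagolu", "daropi", "bekufa"]
syllabified = [[w[i:i + 2] for i in range(0, 6, 2)] for w in words]

# continuous stream: 300 words in random order, no pauses between them
stream = []
for _ in range(300):
    stream.extend(random.choice(syllabified))

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def tp(a, b):
    """Transitional probability P(next syllable = b | current syllable = a)."""
    return pair_counts[(a, b)] / first_counts[a]

within = tp("ti", "bu")   # inside "tibudo": deterministic
across = tp("do", "pa")   # word boundary: the next word is chosen at random
```

A learner sensitive to this contrast can posit word boundaries wherever the transitional probability dips, which is the cue the reaction-time results above suggest is picked up after very little exposure.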
Ultrahigh sensitivity and layer-dependent sensing performance of phosphorene-based gas sensors
Cui, Shumao; Pu, Haihui; Wells, Spencer A.; Wen, Zhenhai; Mao, Shun; Chang, Jingbo; Hersam, Mark C.; Chen, Junhong
2015-01-01
Two-dimensional (2D) layered materials have attracted significant attention for device applications because of their unique structures and outstanding properties. Here, a field-effect transistor (FET) sensor device is fabricated based on 2D phosphorene nanosheets (PNSs). The PNS sensor exhibits an ultrahigh sensitivity to NO2 in dry air and the sensitivity is dependent on its thickness. A maximum response is observed for the 4.8-nm-thick PNS, with a sensitivity up to 190% at 20 parts per billion (p.p.b.) at room temperature. First-principles calculations combined with statistical thermodynamics modelling predict that the adsorption density is ∼10¹⁵ cm⁻² for the 4.8-nm-thick PNS when exposed to 20 p.p.b. NO2 at 300 K. Our sensitivity modelling further suggests that the dependence of sensitivity on the PNS thickness is dictated by the band gap for thinner sheets (<10 nm). PMID:26486604
Risk and sensitivity analysis in relation to external events
International Nuclear Information System (INIS)
Alzbutas, R.; Urbonas, R.; Augutis, J.
2001-01-01
This paper presents risk and sensitivity analysis of the impacts of external events on safe operation in general, and in particular on the Ignalina Nuclear Power Plant safety systems. The analysis is based on deterministic and probabilistic assumptions and assessment of the external hazards. Real statistical data are used, as well as initial external event simulation. Preliminary screening criteria are applied. The analysis of external event impact on safe NPP operation, assessment of event occurrence, sensitivity analysis, and recommendations for safety improvements are performed for the investigated external hazards. Such events as aircraft crash, extreme rains and winds, forest fire and flying parts of the turbine are analysed. The models are developed and probabilities are calculated. As an example for sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainty features raised by an external event and its model. Even in cases where the external events analysis shows rather limited danger, the sensitivity analysis can identify the causes with the highest influence. These possible future variations can be significant for the safety level and for risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, the occurrence and propagation of such events can be sufficiently uncertain. (author)
PRIS-STATISTICS: Power Reactor Information System Statistical Reports. User's Manual
International Nuclear Information System (INIS)
2013-01-01
The IAEA developed the Power Reactor Information System (PRIS)-Statistics application to assist PRIS end users with generating statistical reports from PRIS data. Statistical reports provide an overview of the status, specification and performance results of every nuclear power reactor in the world. This user's manual was prepared to facilitate the use of the PRIS-Statistics application and to provide guidelines and detailed information for each report in the application. Statistical reports support analyses of nuclear power development and strategies, and the evaluation of nuclear power plant performance. The PRIS database can be used for comprehensive trend analyses and benchmarking against best performers and industrial standards.
Sadovskii, Michael V
2012-01-01
This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics, all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity, and the modern theory of critical phenomena. Beyond that, attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.
International Nuclear Information System (INIS)
Qin Fang; Wen Wen; Chen Ji-Sheng
2014-01-01
The thermal and electrical transport properties of an ideal anyon gas within fractional exclusion statistics are studied. By solving the Boltzmann equation with the relaxation-time approximation, the analytical expressions for the thermal and electrical conductivities of a three-dimensional ideal anyon gas are given. The low-temperature expressions for the two conductivities are obtained by using the Sommerfeld expansion. It is found that the Wiedemann-Franz law should be modified by the higher-order temperature terms, which depend on the statistical parameter g for a charged anyon gas. Neglecting the higher-order terms of temperature, the Wiedemann-Franz law is respected, which gives the Lorenz number. The Lorenz number is a function of the statistical parameter g. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
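For reference, the unmodified Wiedemann-Franz law that the higher-order temperature terms correct is the standard result (the g-dependent correction itself is not reproduced here):

```latex
\frac{\kappa}{\sigma T} = L_0 = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2
\approx 2.44 \times 10^{-8}\ \mathrm{W\,\Omega\,K^{-2}}
```

The abstract's point is that for the anyon gas the Lorenz number replacing \(L_0\) becomes a function of the statistical parameter \(g\) once higher-order temperature terms are kept.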
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
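A pick-freeze estimate of first-order Sobol indices for a toy additive model (the model and its inputs are invented; the paper's method additionally decomposes the variance over stochastic reaction channels, which this sketch omits):

```python
import random

random.seed(0)

def f(a, b):
    """Toy model: a, b ~ U(0,1); Var contributions 1/12 and 4/12,
    so the first-order indices are 0.2 and 0.8."""
    return a + 2.0 * b

N = 100_000
A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]

y = [f(a, b) for a, b in A]
# pick-freeze: keep one input from sample A, resample the other from B
y_keep_a = [f(A[j][0], B[j][1]) for j in range(N)]
y_keep_b = [f(B[j][0], A[j][1]) for j in range(N)]

mean = sum(y) / N
var = sum(v * v for v in y) / N - mean ** 2
S_a = (sum(y[j] * y_keep_a[j] for j in range(N)) / N - mean ** 2) / var
S_b = (sum(y[j] * y_keep_b[j] for j in range(N)) / N - mean ** 2) / var
```

The covariance between the original and "frozen" evaluations isolates the variance explained by the frozen input; for an additive model the first-order indices sum to one.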
Wavelet Transform Based Higher Order Statistical Analysis of Wind and Wave Time Histories
Habib Huseni, Gulamhusenwala; Balaji, Ramakrishnan
2017-10-01
Wind blowing over the ocean surface imparts the energy that generates waves, so understanding wind-wave interactions is essential for oceanographers. This study involves higher-order spectral analyses, via continuous wavelet transform, of wind-speed and significant-wave-height time histories extracted from the European Centre for Medium-Range Weather Forecasts database at an offshore location off the Mumbai coast. The time histories were divided by season (pre-monsoon, monsoon, post-monsoon, and winter) and the analyses were carried out on the individual data sets to assess the effect of season on wind-wave interactions. The analysis revealed the frequency coupling between wind speeds and wave heights across the seasons. The details of the data, the analysis technique, and the results are presented in this paper.
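The continuous wavelet transform used in such analyses can be sketched with a minimal complex-Morlet implementation on a synthetic record; the signal, scale range, and wavelet parameter w0 = 6 below are illustrative choices, not the ECMWF data or the authors' processing chain.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Minimal continuous wavelet transform with a complex Morlet mother wavelet."""
    n = len(x)
    out = np.empty((len(scales), n), dtype=complex)
    for k, s in enumerate(scales):
        m = int(min(10 * s, n))                  # support of the discretized wavelet
        t = np.arange(m) - m // 2
        psi = np.pi**-0.25 * np.exp(1j * w0 * t / s - 0.5 * (t / s) ** 2)
        out[k] = np.convolve(x, np.conj(psi)[::-1], mode="same") / np.sqrt(s)
    return out

# synthetic "wind speed" record: a dominant 64-sample oscillation plus a weaker one
t = np.arange(2048)
x = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 16)

scales = np.arange(4, 100, 2)
power = np.abs(morlet_cwt(x, scales)) ** 2

# for w0 = 6 the Fourier period is ~1.05 * scale, so the peak should sit near scale 61
dominant = scales[np.argmax(power.mean(axis=1))]
print(dominant)
```

Cross-wavelet and bicoherence analyses, as needed for the frequency-coupling results, build on exactly this scalogram but combine two records.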
High sensitivity MOSFET-based neutron dosimetry
International Nuclear Information System (INIS)
Fragopoulou, M.; Konstantakos, V.; Zamani, M.; Siskos, S.; Laopoulos, T.; Sarrabayrouse, G.
2010-01-01
A new dosemeter, based on a metal-oxide-semiconductor field-effect transistor sensitive to both neutrons and gamma radiation, was manufactured at the LAAS-CNRS Laboratory, Toulouse, France. To make it usable for neutron dosimetry, a thin film of lithium fluoride was deposited on the surface of the gate of the device. The characteristics of the dosemeter, such as the dependence of its response on neutron dose and dose rate, were investigated. The studied dosemeter was very sensitive to gamma rays compared to other dosemeters proposed in the literature. Its response to thermal neutrons was found to be much higher than to fast neutrons and gamma rays.
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on the c-statistic, with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models under consideration is sensitive to the NCAR assumption, which allows us to identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both low-dimensional and high-dimensional settings under CAR and NCAR through simulations. © 2016, The International Biometric Society.
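An IPCW c-statistic of the kind discussed can be sketched with Kaplan-Meier censoring weights on simulated survival data. This is a simplified Uno-type estimator under assumed independent censoring (a special case of CAR), not the authors' full estimator or their NCAR sensitivity analysis; the simulation parameters are invented.

```python
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival curve G(t);
    censoring (event == 0) plays the role of the 'event' here."""
    order = np.argsort(time)
    t, cens = time[order], 1 - event[order]
    at_risk = len(t) - np.arange(len(t))
    surv = np.cumprod(1.0 - cens / at_risk)
    def G(q):
        idx = np.searchsorted(t, q, side="right") - 1
        return 1.0 if idx < 0 else float(surv[idx])
    return G

def ipcw_cstat(time, event, score, tau):
    """IPCW (Uno-type) c-statistic: usable pairs weighted by 1 / G(T_i)^2."""
    G = km_censoring_survival(time, event)
    num = den = 0.0
    for i in range(len(time)):
        if event[i] == 1 and time[i] < tau:
            w = 1.0 / max(G(time[i]), 1e-12) ** 2
            later = time > time[i]                      # comparable pairs
            num += w * np.sum(later & (score[i] > score))
            den += w * np.sum(later)
    return num / den

rng = np.random.default_rng(1)
n = 400
T = rng.exponential(1.0, n)

# no censoring + perfectly concordant marker: the c-statistic must be exactly 1
assert ipcw_cstat(T, np.ones(n, dtype=int), -T, tau=T.max()) == 1.0

# independent censoring + noisy marker: concordance well above chance
marker = -(T + rng.normal(0.0, 1.0, n))
C = rng.exponential(2.0, n)
obs, delta = np.minimum(T, C), (T <= C).astype(int)
cval = ipcw_cstat(obs, delta, marker, tau=np.quantile(obs, 0.9))
print(round(cval, 3))
```

Truncating at a quantile of the follow-up time (tau) is standard practice to keep the censoring weights stable in the tail.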
International Nuclear Information System (INIS)
Sahiner, Berkman; Chan, Heang-Ping; Petrick, Nicholas; Helvie, Mark A.; Goodsitt, Mitchell M.
1998-01-01
A genetic algorithm (GA) based feature selection method was developed for the design of high-sensitivity classifiers, i.e., classifiers tailored to yield high specificity in the high-sensitivity region of the receiver operating characteristic (ROC) curve. The fitness function of the GA was based on the ROC partial area index, defined as the average specificity above a given sensitivity threshold. The GA therefore evolved towards the selection of feature combinations that yielded high specificity at high sensitivity, regardless of the performance at low sensitivity. This is a desirable quality of a classifier used for breast lesion characterization, since the focus there is to diagnose correctly as many benign lesions as possible without missing malignancies. The high-sensitivity classifier, formulated as Fisher's linear discriminant using GA-selected feature variables, was employed to classify 255 biopsy-proven mammographic masses as malignant or benign. The mammograms were digitized at a pixel size of 0.1 mm × 0.1 mm, and regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist. A recently developed image transformation technique, referred to as the rubber-band straightening transform, was applied to the ROIs. Texture features extracted from the spatial grey-level dependence and run-length statistics matrices of the transformed ROIs were used to distinguish malignant and benign masses. The classification accuracy of the high-sensitivity classifier was compared with that of linear discriminant analysis with stepwise feature selection (LDA_sfs). With proper GA training, the ROC partial area of the high-sensitivity classifier above a true-positive fraction of 0.95 was significantly larger than that of LDA_sfs, although the latter provided a higher total area (A_z) under the ROC curve. By setting an appropriate decision threshold, the high-sensitivity classifier and LDA_sfs correctly
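The ROC partial area index used as the GA fitness function, i.e., the average specificity above a sensitivity threshold, can be computed directly from scores and labels. The threshold of 0.95 echoes the paper, but the scores below are synthetic and the GA wrapper itself is not reproduced.

```python
import numpy as np

def partial_area_index(scores, labels, s0=0.95):
    """Average specificity over the sensitivity range [s0, 1]: the normalized
    ROC partial area above a true-positive fraction (TPF) of s0."""
    order = np.argsort(-scores)
    y = labels[order]
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / (len(y) - y.sum())))
    sens = np.linspace(s0, 1.0, 501)
    # specificity at the first operating point reaching each sensitivity level
    spec = 1.0 - fpr[np.searchsorted(tpr, sens, side="left")]
    return spec.mean()

rng = np.random.default_rng(3)
# overlapping score distributions (e.g. malignant vs. benign classifier outputs)
scores = np.concatenate([rng.normal(1.5, 1.0, 500), rng.normal(0.0, 1.0, 500)])
labels = np.concatenate([np.ones(500, dtype=int), np.zeros(500, dtype=int)])
pai = partial_area_index(scores, labels)

# perfectly separating scores yield a partial area index of 1
hi = 2.0 + 0.01 * np.arange(50)   # positives, all above every negative
lo = 0.01 * np.arange(50)         # negatives
sep = partial_area_index(np.concatenate([hi, lo]),
                         np.concatenate([np.ones(50, dtype=int),
                                         np.zeros(50, dtype=int)]))
print(round(pai, 3), sep)
```

In a GA such as the paper's, this function would score each candidate feature subset after training a linear discriminant on it.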
Statistical-Based Insights in Spence’s Theory of Honest Signaling
Directory of Open Access Journals (Sweden)
Mihaela Grecu
2015-09-01
Since Michael Spence revealed the secrets of (dis)honest signalling on the labour market, an increasing body of literature in various fields has struggled to find the best way to solve the game under imperfect information that describes the interaction between the employer and the employee. Despite the value of the signal originally acknowledged by Spence, the university degree, a recent trend of rising unemployment among graduates of higher education suggests that the connection between higher education and the labour market may be weaker than universities claim, potentially eroding the signalling power of a university diploma. The aim of this study is to provide statistical evidence on the connection between higher education and the labour market in Romania and to discuss some of the factors that potentially cause young people to choose a particular study programme. Based on statistical analysis, we investigate the gap between the number of graduates in Law and the labour market capacity in the field, and draw conclusions regarding the accuracy of the mechanism that leads to equilibrium between supply and demand on the university market.
Muchembled, Jérôme; Deweer, Caroline; Sahmer, Karin; Halama, Patrice
2017-11-02
The antifungal activity of seven essential oils (eucalyptus, clove, mint, oregano, savory, tea tree, and thyme) was studied on Venturia inaequalis, the fungus responsible for apple scab. The composition of the essential oils was checked by gas chromatography-mass spectrometry, and the main compound of each oil was identified. Liquid tests were performed to calculate the IC50 of the essential oils as well as of their main compounds. The tests were made on two strains with different sensitivities to tebuconazole: S755, the sensitive strain, and rs552, the strain with reduced sensitivity. Copper sulfate was selected as the reference mineral fungicidal substance. IC50 values with confidence intervals were calculated from three independent experiments. The results showed that all essential oils and all main compounds had in vitro antifungal activity. Moreover, the effectiveness of four essential oils (clove, eucalyptus, mint, and savory) was higher than that of copper sulfate on both strains. For each strain, the best activity was obtained using clove and eucalyptus essential oils. For clove, the IC50 obtained on the sensitive strain (5.2 mg/L [4.0-6.7 mg/L]) was statistically lower than that of the reduced-sensitivity strain (14 mg/L [11.1-17.5 mg/L]). In contrast, for eucalyptus essential oil, the IC50 values did not differ, with confidence intervals of 9.4-13.0 and 12.2-17.9 mg/L for the S755 and rs552 strains, respectively. For mint, oregano, savory, tea tree, and thyme, the IC50 values were always lower (better) on the rs552 strain. The main compounds were not necessarily more efficient than their corresponding oils; only eugenol (for clove) and carvacrol (for oregano and savory) seemed to be more effective on the S755 strain. On the other hand, the rs552 strain seemed to be more sensitive to essential oils than the S755 strain. Overall, it was shown that essential oils have different antifungal activities and that these activities depend on the fungal strain used.
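An IC50 can be extracted from dose-response data in several ways; the minimal sketch below interpolates percent inhibition against log-dose. The data points are invented for illustration, not the paper's measurements, and the paper's confidence intervals additionally require replicate experiments.

```python
import numpy as np

def ic50_interp(dose, inhibition):
    """IC50 by linear interpolation of percent inhibition against log10(dose).
    Assumes inhibition increases monotonically with dose and brackets 50%."""
    return 10.0 ** np.interp(50.0, inhibition, np.log10(dose))

# invented dose-response points (percent growth inhibition), for illustration only
dose = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])      # mg/L
inhib = np.array([8.0, 20.0, 41.0, 63.0, 82.0, 94.0])  # %
ic = ic50_interp(dose, inhib)
print(round(ic, 2))  # ~5.31 mg/L, between the bracketing doses 4 and 8 mg/L
```

A full dose-response analysis would typically fit a log-logistic (Hill) model instead of interpolating, which also yields the confidence intervals reported in the paper.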
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
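The statistics-based component of such atom-counting methods fits a mixture model to the distribution of scattering cross-sections, with one component per atom count. A toy EM fit with a fixed component width and a known number of components (both simplifying assumptions, on synthetic data) illustrates the idea; the paper's hybrid method additionally constrains this fit with image simulations.

```python
import numpy as np

def em_1d_gmm(x, mu, sigma=0.05, n_iter=200):
    """EM for a 1-D Gaussian mixture with shared, fixed sigma.
    Each component plays the role of one atom-count class."""
    mu = np.asarray(mu, dtype=float)
    pi = np.full(len(mu), 1.0 / len(mu))
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        r = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and means
        pi = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return mu, pi

rng = np.random.default_rng(2)
# synthetic scattering cross-sections for columns of 1, 2, and 3 atoms
counts = rng.integers(1, 4, size=1500)
x = counts + rng.normal(0.0, 0.05, size=counts.size)

mu0 = np.quantile(x, [0.2, 0.5, 0.8])   # crude initialization
mu, pi = em_1d_gmm(x, mu0)
print(np.sort(mu))  # close to [1, 2, 3]
```

In practice the number of components is itself selected (e.g., by an information criterion), which is where low-dose images become difficult and the simulation prior of the hybrid method helps.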
Stangel, Christina; Bagaki, Anthi; Angaridis, Panagiotis A; Charalambidis, Georgios; Sharma, Ganesh D; Coutsolelos, Athanasios G
2014-11-17
Two novel "spider-shaped" porphyrins, a meso-tetraaryl-substituted free base (1PV-Por) and its zinc-metalated analogue (1PV-Zn-Por), bearing four oligo(p-phenylenevinylene) (oPPV) pyridyl groups with long dodecyloxy chains on the phenyl groups, have been synthesized. Three features render porphyrins 1PV-Por and 1PV-Zn-Por very promising sensitizers for dye-sensitized solar cells (DSSCs): the four pyridyl groups, which allow them to act as anchoring groups upon coordination to various Lewis acid sites; the conjugated oPPV bridges, which offer the possibility of electronic communication between the porphyrin core and the pyridyl groups; and the dodecyloxy groups, which confer high solubility in organic solvents of different polarities and could prevent porphyrin aggregation. Photophysical measurements, together with electrochemistry experiments and density functional theory calculations, suggest that both porphyrins have frontier molecular orbital energy levels that favor electron injection and dye regeneration in DSSCs. Solar cells sensitized by 1PV-Por and 1PV-Zn-Por were fabricated and found to show power conversion efficiencies (PCEs) of 3.28 and 5.12%, respectively. Photovoltaic measurements (J-V curves) together with incident photon-to-electron conversion efficiency spectra of the two cells reveal that the higher PCE value of the DSSC based on 1PV-Zn-Por is ascribed to higher short-circuit current (Jsc), open-circuit voltage (Voc), and dye loading values. Emission spectra and electrochemistry experiments suggest a greater driving force for injection of the photogenerated electrons into the TiO2 conduction band for 1PV-Zn-Por than for its free-base analogue. Furthermore, electrochemical impedance spectroscopy measurements prove that the utilization of 1PV-Zn-Por as a sensitizer offers a high charge recombination resistance and, therefore, leads to a longer electron lifetime.
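The reported PCE values follow from the standard J-V relation PCE = Jsc · Voc · FF / Pin. The snippet below evaluates it for hypothetical parameter values, since the abstract does not list Jsc, Voc, and FF.

```python
def pce(jsc_ma_cm2, voc_v, ff, pin_mw_cm2=100.0):
    """Power conversion efficiency (%) from J-V parameters:
    PCE = Jsc * Voc * FF / Pin (AM1.5G standard: Pin = 100 mW/cm^2)."""
    return jsc_ma_cm2 * voc_v * ff / pin_mw_cm2 * 100.0

# hypothetical J-V parameters (not the paper's measured values)
print(pce(jsc_ma_cm2=10.5, voc_v=0.68, ff=0.72))  # ~5.14 %
```

The relation makes clear why the higher Jsc and Voc of the 1PV-Zn-Por cell translate directly into its higher PCE.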
Directory of Open Access Journals (Sweden)
Agus Joko Susanto
2018-01-01
.066. Conclusion: Sensitization to HDM allergens was shown to be highest for D. farinae (62.1%), followed by D. pteronyssinus (51.7%) and Blomia tropicalis (48.3%). Specific IgE levels induced by D. farinae and Blomia tropicalis sensitization were significantly higher in patients with persistent asthma than in those with intermittent asthma, whereas the specific IgE level induced by D. pteronyssinus sensitization was higher in persistent asthma but not statistically significantly so.
An adaptive Mantel-Haenszel test for sensitivity analysis in observational studies.
Rosenbaum, Paul R; Small, Dylan S
2017-06-01
In a sensitivity analysis in an observational study with a binary outcome, is it better to use all of the data or to focus on subgroups that are expected to experience the largest treatment effects? The answer depends on features of the data that may be difficult to anticipate: a trade-off between unknown effect sizes and known sample sizes. We propose a sensitivity analysis for an adaptive test similar to the Mantel-Haenszel test. The adaptive test performs two highly correlated analyses, a focused analysis using a subgroup and a combined analysis using all of the data, correcting for multiple testing using the joint distribution of the two test statistics. Because the two component tests are highly correlated, this correction for multiple testing is small compared with, for instance, the Bonferroni inequality. The test has the maximum design sensitivity of its two component tests. A simulation evaluates the power of a sensitivity analysis using the adaptive test. Two examples are presented. An R package, sensitivity2x2xk, implements the procedure. © 2016, The International Biometric Society.
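The classical (non-adaptive) Mantel-Haenszel component can be written down directly. The sketch below computes the Cochran-Mantel-Haenszel chi-square for k = 2 strata with invented counts; it does not implement the paper's adaptive combination of a focused and a combined analysis, or its sensitivity-analysis bounds.

```python
from math import erf, sqrt

def mantel_haenszel(tables):
    """Cochran-Mantel-Haenszel chi-square (1 df, no continuity correction)
    for a list of 2x2 tables [[a, b], [c, d]] across k strata."""
    num = var = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a - (a + b) * (a + c) / n                       # observed - expected
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))
    return num**2 / var

tables = [
    [[10, 5], [4, 11]],   # stratum 1: treated/control vs outcome yes/no (invented)
    [[8, 7], [3, 12]],    # stratum 2
]
chi2 = mantel_haenszel(tables)
p = 0.5 * (1.0 - erf(sqrt(chi2) / sqrt(2.0)))   # one-sided normal p-value
print(round(chi2, 3), round(p, 4))
```

The paper's adaptive test would evaluate a statistic of this form twice, once on a chosen subgroup of strata and once on all strata, and refer the pair to their joint null distribution.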
Highway runoff quality models for the protection of environmentally sensitive areas
Trenouth, William R.; Gharabaghi, Bahram
2016-11-01
This paper presents novel highway runoff quality models using artificial neural networks (ANNs) that take into account site-specific highway traffic and seasonal storm-event meteorological factors to predict the event mean concentration (EMC) statistics and mean daily unit area load (MDUAL) statistics of common highway pollutants, for the design of roadside ditch treatment systems (RDTS) that protect sensitive receiving environments. A dataset of 940 monitored highway runoff events from fourteen sites located in five countries (Canada, USA, Australia, New Zealand, and China) was compiled and used to develop ANN models for the prediction of the seasonal EMC statistical distribution parameters of highway runoff total suspended solids (TSS), as well as the MDUAL statistics for four heavy metal species (Cu, Zn, Cr, and Pb). TSS EMCs are needed to estimate the minimum required removal efficiency of the RDTS so that highway runoff quality meets applicable standards, and MDUALs are needed to calculate the minimum required capacity of the RDTS to ensure performance longevity.