WorldWideScience

Sample records for performed statistical analyses

  1. Review of Statistical Analyses Resulting from Performance of HLDWD-DWPF-005

    International Nuclear Information System (INIS)

    Beck, R.S.

    1997-01-01

    The Engineering Department at the Defense Waste Processing Facility (DWPF) has reviewed two reports from the Statistical Consulting Section (SCS) involving the statistical analysis of test results for analysis of small sample inserts (references 1 and 2). The test results cover two proposed analytical methods, a room temperature hydrofluoric acid preparation (Cold Chem) and a sodium peroxide/sodium hydroxide fusion modified for insert samples (Modified Fusion). The reports support implementation of the proposed small sample containers and analytical methods at DWPF. Hydragard sampler valve performance was typical of previous results (reference 3). Using an element from each major feed stream, lithium from the frit and iron from the sludge, the sampler was determined to deliver a uniform mixture in either sample container. The lithium to iron ratios were equivalent for the standard 15 ml vial and the 3 ml insert. The proposed methods provide analyses equivalent to those of the current methods. The biases associated with the proposed methods on a vitrified basis are less than 5% for major elements. The sum of oxides for the proposed method compares favorably with the sum of oxides for the conventional methods. However, the average sum of oxides for the Cold Chem method was 94.3%, which is below the minimum required recovery of 95%. Both proposed methods, Cold Chem and Modified Fusion, will be required at first to provide an accurate analysis that will routinely meet the 95% to 105% average sum of oxides limits for the Product Composition Control System (PCCS). Issues to be resolved during phased implementation are as follows: (1) determine the calcine/vitrification factor for radioactive feed; (2) evaluate the covariance matrix change against process operating ranges to determine optimum sample size; (3) evaluate sources for the low sum of oxides; and (4) improve remote operability of production versions of equipment and instruments for installation in 221-S. The specifics of

  2. Statistical analyses of the performance of Macedonian investment and pension funds

    Directory of Open Access Journals (Sweden)

    Petar Taleski

    2015-10-01

    The foundation of the post-modern portfolio theory is creating a portfolio based on a desired target return. This specifically applies to the performance of investment and pension funds that provide a rate of return meeting payment requirements. A desired target return is the goal of an investment or pension fund. It is the primary benchmark used to measure performance and for dynamic monitoring and evaluation of the risk–return ratio of investment funds. The analysis in this paper is based on monthly returns of Macedonian investment and pension funds (June 2011 – June 2014). Such analysis utilizes basic but highly informative statistics such as skewness, kurtosis, the Jarque–Bera test, and Chebyshev's inequality. The objective of this study is to perform a thorough analysis, utilizing the above mentioned and other statistical techniques (Sharpe, Sortino, omega, upside potential, Calmar, Sterling) to draw relevant conclusions regarding the risks and characteristic moments of Macedonian investment and pension funds. Pension funds are the second largest segment of the financial system and have great potential for further growth due to constant inflows from pension insurance. The importance of investment funds for the financial system in the Republic of Macedonia is still small, although open-end investment funds have been the fastest growing segment of the financial system. Statistical analysis has shown that pension funds delivered a significantly positive volatility-adjusted risk premium in the analyzed period, more so than investment funds.
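    The moment and ratio statistics named above are straightforward to reproduce. The following is a minimal sketch (Python with numpy/scipy assumed, and a hypothetical monthly return series standing in for actual fund data) of skewness, excess kurtosis, the Jarque–Bera test, and the Sharpe and Sortino ratios; it is an illustration, not the authors' code.

```python
import numpy as np
from scipy import stats

# Hypothetical monthly net returns of a fund (decimal form), 36 months
rng = np.random.default_rng(0)
returns = rng.normal(0.004, 0.02, 36)
rf = 0.002  # assumed monthly risk-free rate

skew = stats.skew(returns)                  # asymmetry of the return distribution
kurt = stats.kurtosis(returns)              # excess kurtosis (0 for a normal distribution)
jb_stat, jb_p = stats.jarque_bera(returns)  # joint normality test based on skewness/kurtosis

excess = returns - rf
sharpe = excess.mean() / returns.std(ddof=1)               # premium per unit of total volatility
downside = np.minimum(excess, 0.0)
sortino = excess.mean() / np.sqrt((downside ** 2).mean())  # penalises downside deviation only

print(f"skew={skew:.3f} kurt={kurt:.3f} JB p={jb_p:.3f} Sharpe={sharpe:.3f} Sortino={sortino:.3f}")
```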

  3. Statistical analyses of variability/reproducibility of environmentally assisted cyclic crack growth rate data utilizing JAERI Material Performance Database (JMPD)

    International Nuclear Information System (INIS)

    Tsuji, Hirokazu; Yokoyama, Norio; Nakajima, Hajime; Kondo, Tatsuo

    1993-05-01

    Statistical analyses were conducted using the cyclic crack growth rate data for pressure vessel steels stored in the JAERI Material Performance Database (JMPD), and comparisons were made of the variability and/or reproducibility of data obtained by ΔK-increasing and by ΔK-constant type tests. Based on the results of the statistical analyses, it was concluded that ΔK-constant type tests are generally superior to the commonly used ΔK-increasing type from the viewpoint of variability and/or reproducibility of the data. Such a tendency was more pronounced in tests conducted in simulated LWR primary coolants than in those in air. (author)

  4. Statistical evaluation of the performance of gridded monthly precipitation products from reanalysis data, satellite estimates, and merged analyses over China

    Science.gov (United States)

    Deng, Xueliang; Nie, Suping; Deng, Weitao; Cao, Weihua

    2018-04-01

    In this study, we compared the following four different gridded monthly precipitation products: the National Centers for Environmental Prediction version 2 (NCEP-2) reanalysis data, the satellite-based Climate Prediction Center Morphing technique (CMORPH) data, the merged satellite-gauge Global Precipitation Climatology Project (GPCP) data, and the merged satellite-gauge-model data from the Beijing Climate Center Merged Estimation of Precipitation (BMEP). We evaluated the performances of these products using monthly precipitation observations spanning the period of January 2003 to December 2013 from a dense, national, rain gauge network in China. Our assessment involved several statistical techniques, including spatial pattern, temporal variation, bias, root-mean-square error (RMSE), and correlation coefficient (CC) analysis. The results show that NCEP-2, GPCP, and BMEP generally overestimate monthly precipitation at the national scale and CMORPH underestimates it. However, all of the datasets successfully characterized the northwest to southeast increase in the monthly precipitation over China. Because they include precipitation gauge information from the Global Telecommunication System (GTS) network, GPCP and BMEP have much smaller biases, lower RMSEs, and higher CCs than NCEP-2 and CMORPH. When the seasonal and regional variations are considered, NCEP-2 has a larger error over southern China during the summer. CMORPH poorly reproduces the magnitude of the precipitation over southeastern China and the temporal correlation over western and northwestern China during all seasons. BMEP has a lower RMSE and higher CC than GPCP over eastern and southern China, where the station network is dense. In contrast, BMEP has a lower CC than GPCP over western and northwestern China, where the gauge network is relatively sparse.
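    The evaluation statistics used in the comparison (bias, RMSE and correlation coefficient against gauge observations) reduce to a few lines. Below is a minimal sketch with hypothetical arrays standing in for collocated product and gauge monthly precipitation; it is illustrative only.

```python
import numpy as np

def evaluate(product, gauge):
    """Bias, RMSE and Pearson correlation of a gridded product against gauge observations.

    product, gauge: 1-D arrays of collocated monthly precipitation (e.g. mm/month).
    """
    diff = product - gauge
    bias = diff.mean()                      # positive values mean the product overestimates
    rmse = np.sqrt((diff ** 2).mean())
    cc = np.corrcoef(product, gauge)[0, 1]  # Pearson correlation coefficient
    return bias, rmse, cc

# Hypothetical example: 11 years of monthly values at one grid box
rng = np.random.default_rng(1)
gauge = rng.gamma(2.0, 40.0, 132)
product = 1.05 * gauge + rng.normal(0.0, 10.0, gauge.size)
print(evaluate(product, gauge))
```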

  5. EDI Performance Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — This section contains statistical information and reports related to the percentage of electronic transactions being sent to Medicare contractors in the formats...

  6. Statistical analyses of extreme food habits

    International Nuclear Information System (INIS)

    Breuninger, M.; Neuhaeuser-Berthold, M.

    2000-01-01

    This report is a summary of the results of the project ''Statistical analyses of extreme food habits'', which was commissioned by the National Office for Radiation Protection as a contribution to the amendment of the ''General Administrative Regulation to paragraph 45 of the Decree on Radiation Protection: determination of the radiation exposure due to emission of radioactive substances from facilities of nuclear technology''. Its aim is to determine whether the calculation of the radiation ingested by 95% of the population via food intake, as planned in a provisional draft, overestimates the true exposure, and if so, by how much. It was possible to prove the existence of this overestimation, but its magnitude could only be estimated roughly. To identify its real extent, it is necessary to include the specific activities of the nuclides, which were not available for this investigation. In addition, the report shows how the consumption amounts of different food groups influence each other and which connections between these amounts should be taken into account in order to estimate the radiation exposure as precisely as possible. (orig.) [de

  7. Applied statistics a handbook of BMDP analyses

    CERN Document Server

    Snell, E J

    1987-01-01

    This handbook is a realization of a long term goal of BMDP Statistical Software. As the software supporting statistical analysis has grown in breadth and depth to the point where it can serve many of the needs of accomplished statisticians, it can also serve as an essential support to those needing to expand their knowledge of statistical applications. Statisticians should not be handicapped by heavy computation or by the lack of needed options. When Applied Statistics, Principles and Examples by Cox and Snell appeared, we at BMDP were impressed with the scope of the applications discussed and felt that many statisticians eager to expand their capabilities in handling such problems could profit from having the solutions carried further, to get them started and guided to a more advanced level in problem solving. Who would be better to undertake that task than the authors of Applied Statistics? A year or two later discussions with David Cox and Joyce Snell at Imperial College indicated that a wedding of the proble...

  8. Analysing performance through value creation

    Directory of Open Access Journals (Sweden)

    Adrian TRIFAN

    2015-12-01

    This paper draws a parallel between measuring financial performance in two ways: the first uses data offered by accounting, which lays emphasis on maximizing profit, while the second aims to create value. The traditional approach to performance is based on indicators drawn from accounting data: ROI, ROE, EPS. Traditional management, based on analysing accounting data, has shown its limits, and a new approach is needed, based on creating value. The evaluation of value-based performance tries to avoid the errors due to accounting data by using other specific indicators: EVA, MVA, TSR, CVA. The main objective is shifted from maximizing income to maximizing the value created for shareholders. The theoretical part is accompanied by a practical analysis regarding the creation of value and an analysis of the main indicators which evaluate this concept.

  9. Statistical analyses of conserved features of genomic islands in bacteria.

    Science.gov (United States)

    Guo, F-B; Xia, Z-K; Wei, W; Zhao, H-L

    2014-03-17

    We performed statistical analyses of five conserved features of genomic islands of bacteria. Analyses were based on 104 known genomic islands, which were identified by comparative methods. Four of these features, sequence size, abnormal G+C content, flanking tRNA gene, and embedded mobility gene, are frequently investigated. One relatively new feature, G+C homogeneity, was also investigated. Among the 104 known genomic islands, 88.5% were found to fall within the typical length of 10-200 kb and 80.8% had G+C deviations with absolute values larger than 2%. For the 88 genomic islands whose hosts have been sequenced and annotated, 52.3% were found to have flanking tRNA genes and 64.7% had embedded mobility genes. For the homogeneity feature, 85% had an h homogeneity index less than 0.1, indicating that their G+C content is relatively uniform. Taking all five features into account, 87.5% of the 88 genomic islands had three of them. Only one genomic island had only one conserved feature and none of the genomic islands had zero features. These statistical results should help to understand the general structure of known genomic islands. We found that larger genomic islands tend to have relatively small G+C deviations in absolute value. For example, the absolute G+C deviations of 9 genomic islands longer than 100,000 bp were all less than 5%. This is a novel but reasonable result given that larger genomic islands face greater restrictions on their G+C content in order to maintain the stable G+C content of the recipient genome.

  10. Statistical and extra-statistical considerations in differential item functioning analyses

    Directory of Open Access Journals (Sweden)

    G. K. Huysamen

    2004-10-01

    This article briefly describes the main procedures for performing differential item functioning (DIF) analyses and points out some of the statistical and extra-statistical implications of these methods. Research findings on the sources of DIF, including those associated with translated tests, are reviewed. As DIF analyses are oblivious of correlations between a test and relevant criteria, the elimination of differentially functioning items does not necessarily improve predictive validity or reduce any predictive bias. The implications of the results of past DIF research for test development in the multilingual and multi-cultural South African society are considered.

  11. Hydrometeorological and statistical analyses of heavy rainfall in Midwestern USA

    Science.gov (United States)

    Thorndahl, S.; Smith, J. A.; Krajewski, W. F.

    2012-04-01

    During the last two decades the mid-western states of the United States of America have been afflicted by heavy flood-producing rainfall. Several of these storms seem to have similar hydrometeorological properties in terms of pattern, track, evolution, life cycle, clustering, etc., which raises the question of whether it is possible to derive general characteristics of the space-time structures of these heavy storms. This is important in order to understand hydrometeorological features, e.g. how storms evolve and with what frequency we can expect extreme storms to occur. In the literature, most studies of extreme rainfall are based on point measurements (rain gauges). However, with high-resolution, high-quality radar observation periods now exceeding two decades, it is possible to perform long-term spatio-temporal statistical analyses of extremes. This makes it possible to link return periods to distributed rainfall estimates and to study precipitation structures which cause floods. However, statistical frequency analyses of rainfall based on radar observations introduce challenges in converting radar reflectivity observations to "true" rainfall that do not arise in traditional analyses of rain gauge data. It is, for example, difficult to distinguish reflectivity from high-intensity rain from reflectivity from other hydrometeors such as hail, especially when using single-polarization radars, as in this study. Furthermore, reflectivity from the bright band (melting layer) should be discarded and anomalous propagation should be corrected in order to produce valid statistics of extreme radar rainfall. Other challenges include combining observations from several radars into one mosaic, bias correction against rain gauges, range correction, Z-R relationships, etc. The present study analyzes radar rainfall observations from 1996 to 2011 based on the American NEXRAD network of radars over an area covering parts of Iowa, Wisconsin, Illinois, and

  12. Statistical reliability analyses of two wood plastic composite extrusion processes

    International Nuclear Information System (INIS)

    Crookston, Kevin A.; Mark Young, Timothy; Harper, David; Guess, Frank M.

    2011-01-01

    Estimates of the reliability of wood plastic composites (WPC) are explored for two industrial extrusion lines. The goal of the paper is to use parametric and non-parametric analyses to examine potential differences in the WPC metrics of reliability for the two extrusion lines that may be helpful for use by the practitioner. A parametric analysis of the extrusion lines reveals some similarities and disparities in the best models; however, a non-parametric analysis reveals unique and insightful differences between Kaplan-Meier survival curves for the modulus of elasticity (MOE) and modulus of rupture (MOR) of the WPC industrial data. The distinctive non-parametric comparisons indicate the source of the differences in strength between the 10.2% and 48.0% fractiles [3,183-3,517 MPa] for MOE and for MOR between the 2.0% and 95.1% fractiles [18.9-25.7 MPa]. Distribution fitting as related to selection of the proper statistical methods is discussed with relevance to estimating the reliability of WPC. The ability to detect statistical differences in the product reliability of WPC between extrusion processes may benefit WPC producers in improving product reliability and safety of this widely used house-decking product. The approach can be applied to many other safety and complex system lifetime comparisons.

  13. Methodology development for statistical evaluation of reactor safety analyses

    International Nuclear Information System (INIS)

    Mazumdar, M.; Marshall, J.A.; Chay, S.C.; Gay, R.

    1976-07-01

    In February 1975, Westinghouse Electric Corporation, under contract to Electric Power Research Institute, started a one-year program to develop methodology for statistical evaluation of nuclear-safety-related engineering analyses. The objectives of the program were to develop an understanding of the relative efficiencies of various computational methods which can be used to compute probability distributions of output variables due to input parameter uncertainties in analyses of design basis events for nuclear reactors and to develop methods for obtaining reasonably accurate estimates of these probability distributions at an economically feasible level. A series of tasks was set up to accomplish these objectives. Two of the tasks were to investigate the relative efficiencies and accuracies of various Monte Carlo and analytical techniques for obtaining such estimates for a simple thermal-hydraulic problem whose output variable of interest is given in a closed-form relationship of the input variables and to repeat the above study on a thermal-hydraulic problem in which the relationship between the predicted variable and the inputs is described by a short-running computer program. The purpose of the report presented is to document the results of the investigations completed under these tasks, giving the rationale for choices of techniques and problems, and to present interim conclusions

  14. Comparative Gender Performance in Business Statistics.

    Science.gov (United States)

    Mogull, Robert G.

    1989-01-01

    Comparative performance of male and female students in introductory and intermediate statistics classes was examined for over 16 years at a state university. Gender means from 97 classes and 1,609 males and 1,085 females revealed a probabilistic--although statistically insignificant--superior performance by female students that appeared to…

  15. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
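    As an illustration of the kind of computation such a power analysis involves, the sketch below approximates the power of a two-sided test of H0: rho = 0 for a bivariate correlation using the Fisher z transformation. This is a generic textbook approximation with assumed inputs, not the G*Power implementation.

```python
import numpy as np
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of rho = 0 via Fisher's z transformation.

    r: assumed population correlation; n: sample size.
    """
    z_r = np.arctanh(r)              # Fisher z of the assumed effect size
    se = 1.0 / np.sqrt(n - 3)        # approximate standard error of the z statistic
    z_crit = norm.ppf(1 - alpha / 2)
    # Two-sided rejection probability under the alternative
    return norm.cdf(abs(z_r) / se - z_crit) + norm.cdf(-abs(z_r) / se - z_crit)

print(correlation_power(r=0.3, n=84))  # roughly 0.80 for a medium-sized correlation
```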

  16. Non-Statistical Methods of Analysing of Bankruptcy Risk

    Directory of Open Access Journals (Sweden)

    Pisula Tomasz

    2015-06-01

    The article focuses on assessing the effectiveness of a non-statistical approach to bankruptcy modelling in enterprises operating in the logistics sector. In order to describe the issue more comprehensively, the aforementioned prediction of the possible negative results of business operations was carried out for companies functioning in the Polish region of Podkarpacie, and in Slovakia. The bankruptcy predictors selected for the assessment of companies operating in the logistics sector included 28 financial indicators characterizing these enterprises in terms of their financial standing and management effectiveness. The purpose of the study was to identify factors (models) describing the bankruptcy risk in enterprises in the context of their forecasting effectiveness in a one-year and two-year time horizon. In order to assess their practical applicability the models were carefully analysed and validated. The usefulness of the models was assessed in terms of their classification properties, and the capacity to accurately identify enterprises at risk of bankruptcy and healthy companies as well as proper calibration of the models to the data from training sample sets.

  17. A weighted U statistic for association analyses considering genetic heterogeneity.

    Science.gov (United States)

    Wei, Changshuai; Elston, Robert C; Lu, Qing

    2016-07-20

    Converging evidence suggests that common complex diseases with the same or similar clinical manifestations could have different underlying genetic etiologies. While current research interests have shifted toward uncovering rare variants and structural variations predisposing to human diseases, the impact of heterogeneity in genetic studies of complex diseases has been largely overlooked. Most of the existing statistical methods assume the disease under investigation has a homogeneous genetic effect and could, therefore, have low power if the disease undergoes heterogeneous pathophysiological and etiological processes. In this paper, we propose a heterogeneity-weighted U (HWU) method for association analyses considering genetic heterogeneity. HWU can be applied to various types of phenotypes (e.g., binary and continuous) and is computationally efficient for high-dimensional genetic data. Through simulations, we showed the advantage of HWU when the underlying genetic etiology of a disease was heterogeneous, as well as the robustness of HWU against different model assumptions (e.g., phenotype distributions). Using HWU, we conducted a genome-wide analysis of nicotine dependence from the Study of Addiction: Genetics and Environments dataset. The genome-wide analysis of nearly one million genetic markers took 7 h, identifying heterogeneous effects of two new genes (i.e., CYP3A5 and IKBKB) on nicotine dependence. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Statistical learning methods: Basics, control and performance

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de

    2006-04-01

    The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms.

  19. Statistical learning methods: Basics, control and performance

    International Nuclear Information System (INIS)

    Zimmermann, J.

    2006-01-01

    The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms

  20. Analyses of hydraulic performance of velocity caps

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Degn Eskesen, Mark Chr.; Buhrkall, Jeppe

    2014-01-01

    The hydraulic performance of a velocity cap has been investigated. Velocity caps are often used in connection with offshore intakes. CFD (computational fluid dynamics) was used to examine the flow through the cap openings and further down into the intake pipes. This was combined with dimension analyses...

  1. Temporal scaling and spatial statistical analyses of groundwater level fluctuations

    Science.gov (United States)

    Sun, H.; Yuan, L., Sr.; Zhang, Y.

    2017-12-01

    Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, due likely to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics, expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, using temporal scaling and spatial statistical analysis after removing the seasonal cycle. First, using time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying that the fractal-scaling behavior changes with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increments of groundwater level fluctuations exhibit a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can well depict the transient dynamics (i.e., fractal non-Gaussian property) of groundwater levels, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics therefore may provide useful information and quantification to further understand the nature of complex dynamics in hydrology.

  2. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

    Science.gov (United States)

    Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

    2013-03-01

    To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a score of statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.

  3. Using statistical inference for decision making in best estimate analyses

    International Nuclear Information System (INIS)

    Sermer, P.; Weaver, K.; Hoppe, F.; Olive, C.; Quach, D.

    2008-01-01

    For broad classes of safety analysis problems, one needs to make decisions when faced with randomly varying quantities which are also subject to errors. The means for doing this involves a statistical approach which takes into account the nature of the physical problems, and the statistical constraints they impose. We describe the methodology for doing this which has been developed at Nuclear Safety Solutions, and we draw some comparisons to other methods which are commonly used in Canada and internationally. Our methodology has the advantages of being robust and accurate and compares favourably to other best estimate methods. (author)

  4. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I

  5. Additional methodology development for statistical evaluation of reactor safety analyses

    International Nuclear Information System (INIS)

    Marshall, J.A.; Shore, R.W.; Chay, S.C.; Mazumdar, M.

    1977-03-01

    The project described is motivated by the desire for methods to quantify uncertainties and to identify conservatisms in nuclear power plant safety analysis. The report examines statistical methods useful for assessing the probability distribution of output response from complex nuclear computer codes, considers sensitivity analysis and several other topics, and also sets the path for using the developed methods for realistic assessment of the design basis accident

  6. Performing Inferential Statistics Prior to Data Collection

    Science.gov (United States)

    Trafimow, David; MacDonald, Justin A.

    2017-01-01

    Typically, in education and psychology research, the investigator collects data and subsequently performs descriptive and inferential statistics. For example, a researcher might compute group means and use the null hypothesis significance testing procedure to draw conclusions about the populations from which the groups were drawn. We propose an…

  7. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs

  8. High performance liquid chromatography in pharmaceutical analyses

    Directory of Open Access Journals (Sweden)

    Branko Nikolin

    2004-05-01

    In pre-marketing testing of drugs and their control over the last ten years, high performance liquid chromatography (HPLC) has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of changing its polarity during chromatography and of all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process in comparison to other methods. The wide choice of stationary phases is another factor that enables good separation. The separation line connected to specific and sensitive detector systems (spectrofluorimetric, diode array and electrochemical detectors), as well as hyphenated systems such as HPLC-MS and HPLC-NMR, are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of HPLC analysis of any drug is to confirm the identity of the drug, provide quantitative results and also to monitor the progress of the therapy of a disease.1 The measurement presented in Fig. 1 is a chromatogram obtained from the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations prior to drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or

  9. Statistical reporting errors and collaboration on statistical analyses in psychological science

    NARCIS (Netherlands)

    Veldkamp, C.L.S.; Nuijten, M.B.; Dominguez Alvarez, L.; van Assen, M.A.L.M.; Wicherts, J.M.

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this ‘co-piloting’ currently occurs in psychology, we

  10. Statistical analyses of local transport coefficients in Ohmic ASDEX discharges

    International Nuclear Information System (INIS)

    Simmet, E.; Stroth, U.; Wagner, F.; Fahrbach, H.U.; Herrmann, W.; Kardaun, O.J.W.F.; Mayer, H.M.

    1991-01-01

    Tokamak energy transport is still an unsolved problem. Many theoretical models have been developed which try to explain the anomalously high energy-transport coefficients. Up to now these models have been applied to global plasma parameters. A comparison of transport coefficients with the global confinement time is only conclusive if the transport is dominated by one process across the plasma diameter. This, however, is not the case in most Ohmic confinement regimes, where at least three different transport mechanisms play an important role. Sawtooth activity leads to an increase in energy transport in the plasma centre. In the intermediate region turbulent transport is expected; candidates here are drift waves and resistive fluid turbulence. At the edge, ballooning modes or rippling modes could dominate the transport. For the intermediate region, one can deduce theoretical scaling laws for τE from turbulence theories. Predicted scalings reproduce the experimentally found density dependence of τE in the linear Ohmic confinement regime (LOC) and the saturated regime (SOC), but they do not show the correct dependence on the isotope mass. The relevance of these transport theories can only be tested by comparing them to experimental local transport coefficients. To this purpose we have performed transport calculations on more than a hundred Ohmic ASDEX discharges. By principal component analysis we determine the dimensionless components which dominate the transport coefficients and we compare the results to the predictions of various theories. (author) 6 refs., 2 figs., 1 tab

  11. Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.

    Science.gov (United States)

    Veldkamp, Coosje L S; Nuijten, Michèle B; Dominguez-Alvarez, Linda; van Assen, Marcel A L M; Wicherts, Jelte M

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
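    A minimal sketch of the kind of automated consistency check described above: recompute a two-sided p-value from a reported t statistic and its degrees of freedom and compare it with the reported p. The numbers and the tolerance are hypothetical; the authors' actual procedure is more elaborate.

```python
from scipy import stats

def check_t_report(t, df, reported_p, alpha=0.05, tol=0.0005):
    """Recompute a two-sided p from a reported t(df) result and flag inconsistencies."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)
    inconsistent = abs(recomputed_p - reported_p) > tol
    # A gross error is an inconsistency that flips the significance decision
    gross = inconsistent and ((recomputed_p < alpha) != (reported_p < alpha))
    return recomputed_p, inconsistent, gross

# Hypothetical reported result: t(28) = 2.05, p = .03
print(check_t_report(t=2.05, df=28, reported_p=0.03))
```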

  12. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan

  13. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
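    The AR-derived predictor described above (the mean magnitude of the poles of a 5th-order autoregressive fit) can be sketched as follows. The signal is simulated white noise standing in for a preprocessed SEMG window, and statsmodels is assumed; this is not the authors' analysis pipeline.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Simulated stand-in for a windowed, preprocessed SEMG signal
rng = np.random.default_rng(2)
signal = rng.normal(size=1024)

# Fit a 5th-order AR model: x_t = c + a1*x_{t-1} + ... + a5*x_{t-5} + e_t
fit = AutoReg(signal, lags=5).fit()
a = fit.params[1:]  # AR coefficients a1..a5 (index 0 is the constant term)

# Poles are the roots of the characteristic polynomial z^5 - a1*z^4 - ... - a5
poles = np.roots(np.concatenate(([1.0], -a)))
mean_pole_magnitude = np.abs(poles).mean()
print(mean_pole_magnitude)
```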

  14. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  15. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    Energy Technology Data Exchange (ETDEWEB)

    Udey, Ruth Norma [Michigan State Univ., East Lansing, MI (United States)

    2013-01-01

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  16. arXiv Statistical Analyses of Higgs- and Z-Portal Dark Matter Models

    CERN Document Server

    Ellis, John; Marzola, Luca; Raidal, Martti

    2018-06-12

    We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2 and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. We find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing $\\gtrsim 100$ GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.

  17. Exploratory study on a statistical method to analyse time resolved data obtained during nanomaterial exposure measurements

    International Nuclear Information System (INIS)

    Clerc, F; Njiki-Menga, G-H; Witschger, O

    2013-01-01

    Most of the measurement strategies that are suggested at the international level to assess workplace exposure to nanomaterials rely on devices measuring airborne particle concentrations (according to different metrics) in real time. Since none of the instruments used to measure aerosols can distinguish a particle of interest from the background aerosol, the statistical analysis of time resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature. These range from simple qualitative analysis of graphs to the implementation of more complex statistical models. To date, there is still no consensus on a particular approach, and the search for an appropriate and robust method continues. In this context, this exploratory study investigates a statistical method to analyse time resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, particle number concentration data have been used from a workplace study that investigated the potential for exposure via inhalation from cleanout operations by sandpapering of a reactor producing nanocomposite thin films. In this workplace study, the background issue has been addressed through the near-field and far-field approaches and several size integrated and time resolved devices have been used. The analysis of the results presented here focuses only on data obtained with two handheld condensation particle counters. While one was measuring at the source of the released particles, the other one was measuring in parallel far-field. The Bayesian probabilistic approach allows a probabilistic modelling of data series, and the observed task is modelled in the form of probability distributions. The probability distributions issuing from time resolved data obtained at the source can be compared with the probability distributions issuing from the time resolved data obtained far-field, leading in a
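    A much reduced sketch of this type of probabilistic comparison: log-transform near-field and far-field number concentrations, draw from the posterior of each mean under a standard conjugate normal model, and report the probability that the near-field mean exceeds the far-field mean. The data are hypothetical and the model is a stand-in for the study's actual Bayesian analysis.

```python
import numpy as np

def posterior_mean_draws(log_conc, n_draws=10000, rng=None):
    """Posterior draws of the mean of log-concentration under a vague conjugate normal model."""
    rng = rng or np.random.default_rng()
    n, m, s2 = len(log_conc), log_conc.mean(), log_conc.var(ddof=1)
    # sigma^2 | data ~ scaled inverse chi-square with n-1 degrees of freedom
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, n_draws)
    # mu | sigma^2, data ~ Normal(sample mean, sigma^2 / n)
    return rng.normal(m, np.sqrt(sigma2 / n))

rng = np.random.default_rng(4)
near = np.log(rng.lognormal(mean=9.0, sigma=0.6, size=120))  # hypothetical source CPC data
far = np.log(rng.lognormal(mean=8.6, sigma=0.5, size=120))   # hypothetical far-field CPC data

p_exceed = (posterior_mean_draws(near, rng=rng) > posterior_mean_draws(far, rng=rng)).mean()
print(f"P(near-field mean log-concentration > far-field) = {p_exceed:.3f}")
```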

  18. Statistical analysis in MSW collection performance assessment.

    Science.gov (United States)

    Teixeira, Carlos Afonso; Avelino, Catarina; Ferreira, Fátima; Bentes, Isabel

    2014-09-01

    The increase in Municipal Solid Waste (MSW) generated over recent years forces waste managers to pursue more effective collection schemes that are technically viable, environmentally effective and economically sustainable. The assessment of MSW services using performance indicators plays a crucial role in improving service quality. In this work, we focus on the relevance of regular system monitoring as a service assessment tool. In particular, we select and test a core set of MSW collection performance indicators (effective collection distance, effective collection time and effective fuel consumption) that highlights collection system strengths and weaknesses and supports pro-active management decision-making and strategic planning. A statistical analysis was conducted with data collected in the mixed collection system of Oporto Municipality, Portugal, during one year, one week per month. This analysis provides an operational assessment of collection circuits and supports effective short-term municipal collection strategies at the level of, e.g., collection frequency and timetables, and type of containers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Directory of Open Access Journals (Sweden)

    Jordi Marcé-Nogué

    2017-10-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.
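    The core of the intervals' method, one variable per stress interval holding the percentage of mesh area whose stress falls in that interval, followed by a multivariate analysis of the resulting matrix, can be sketched as follows. The per-element stresses, areas and the use of PCA as the multivariate step are illustrative assumptions, not the authors' exact workflow.

```python
import numpy as np
from sklearn.decomposition import PCA

def interval_variables(stress, area, edges):
    """Percentage of total area whose stress lies in each interval [edges[i], edges[i+1])."""
    idx = np.clip(np.digitize(stress, edges) - 1, 0, len(edges) - 2)  # interval index per element
    per_interval_area = np.bincount(idx, weights=area, minlength=len(edges) - 1)
    return 100.0 * per_interval_area / area.sum()

# Hypothetical case: 3 models (e.g. mandibles), 500 finite elements each, 10 stress intervals
rng = np.random.default_rng(3)
edges = np.linspace(0.0, 50.0, 11)
X = np.vstack([
    interval_variables(rng.gamma(2.0, 5.0 + 2.0 * k, 500), rng.uniform(0.5, 1.5, 500), edges)
    for k in range(3)
])

# Multivariate step: ordinate the models in the space of interval variables
scores = PCA(n_components=2).fit_transform(X)
print(scores)
```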

  20. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Science.gov (United States)

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107

  1. Statistical inference for the lifetime performance index based on generalised order statistics from exponential distribution

    Science.gov (United States)

    Vali Ahmadi, Mohammad; Doostparast, Mahdi; Ahmadi, Jafar

    2015-04-01

    In manufacturing industries, the lifetime of an item is usually characterised by a random variable X and considered satisfactory if X exceeds a given lower lifetime limit L. The probability of a satisfactory item is then ηL := P(X ≥ L), called the conforming rate. In industrial companies, however, the lifetime performance index, proposed by Montgomery and denoted by CL, is widely used as a process capability index instead of the conforming rate. Assuming a parametric model for the random variable X, we show that there is a connection between the conforming rate and the lifetime performance index. Consequently, the statistical inferences about ηL and CL are equivalent. Hence, we restrict ourselves to statistical inference for CL based on generalised order statistics, a framework that contains several ordered data models such as usual order statistics, progressively Type-II censored data and records. Various point and interval estimators for the parameter CL are obtained and optimal critical regions for the hypothesis testing problems concerning CL are proposed. Finally, two real data-sets on the lifetimes of insulating fluid and ball bearings, due to Nelson (1982) and Caroni (2002), respectively, and a simulated sample are analysed.
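    For the exponential model considered in this record, the connection is explicit: with mean lifetime theta, CL = (mu - L)/sigma = 1 - L/theta and ηL = P(X ≥ L) = exp(-L/theta), so ηL = exp(CL - 1). The snippet below simply checks this identity numerically for hypothetical values; it is a sketch of the stated relation, not of the paper's inference procedures.

```python
import numpy as np

theta = 4.0  # hypothetical mean lifetime of the exponential model
L = 1.5      # hypothetical lower specification (lifetime) limit

C_L = 1.0 - L / theta       # lifetime performance index (mu - L)/sigma, with mu = sigma = theta
eta_L = np.exp(-L / theta)  # conforming rate P(X >= L)

print(C_L, eta_L, np.exp(C_L - 1.0))  # the last two values coincide: eta_L = exp(C_L - 1)
```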

  2. A weighted U-statistic for genetic association analyses of sequencing data.

    Science.gov (United States)

    Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing

    2014-12-01

    With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. The association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption of the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed a commonly used sequence kernel association test (SKAT) method when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL 4 and very low density lipoprotein cholesterol. © 2014 WILEY PERIODICALS, INC.

  3. SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit

    Directory of Open Access Journals (Sweden)

    Annie Chu

    2009-04-01

    The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been proven that these resources can successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a somewhat new component that concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Some examples of SOCR Analyses in the non-parametric category are the Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis testing models include contingency tables, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and the utilization of SOCR Analyses.

  4. Statistics Anxiety, Trait Anxiety, Learning Behavior, and Academic Performance

    Science.gov (United States)

    Macher, Daniel; Paechter, Manuela; Papousek, Ilona; Ruggeri, Kai

    2012-01-01

    The present study investigated the relationship between statistics anxiety, individual characteristics (e.g., trait anxiety and learning strategies), and academic performance. Students enrolled in a statistics course in psychology (N = 147) filled in a questionnaire on statistics anxiety, trait anxiety, interest in statistics, mathematical…

  5. Validating Future Force Performance Measures (Army Class): Concluding Analyses

    Science.gov (United States)

    2016-06-01

    Record text consists of report front-matter excerpts: Table 3.10, Descriptive Statistics and Intercorrelations for LV Final Predictor Factor Scores; Table 4.7, Descriptive Statistics for Analysis Criteria. Criterion constructs for Soldier attrition and performance: Dependability (Non-Delinquency), Adjustment, Physical Conditioning, Leadership, Work Orientation, and Agreeableness.

  6. Scripts for TRUMP data analyses. Part II (HLA-related data): statistical analyses specific for hematopoietic stem cell transplantation.

    Science.gov (United States)

    Kanda, Junya

    2016-01-01

    The Transplant Registry Unified Management Program (TRUMP) made it possible for members of the Japan Society for Hematopoietic Cell Transplantation (JSHCT) to analyze large sets of national registry data on autologous and allogeneic hematopoietic stem cell transplantation. However, as the processes used to collect transplantation information are complex and have differed over time, the background of these processes should be understood when using TRUMP data. Previously, information on the HLA locus of patients and donors had been collected using a questionnaire-based free-description method, resulting in some input errors. To correct minor but significant errors and provide accurate HLA matching data, the use of a Stata or EZR/R script offered by the JSHCT is strongly recommended when analyzing HLA data in the TRUMP dataset. The HLA mismatch direction, mismatch counting method, and different impacts of HLA mismatches by stem cell source are other important factors in the analysis of HLA data. Additionally, researchers should understand the statistical analyses specific to hematopoietic stem cell transplantation, such as competing risk, landmark analysis, and time-dependent analysis, to correctly analyze transplant data. The data center of the JSHCT can be contacted if statistical assistance is required.

  7. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most

  8. A Model of Statistics Performance Based on Achievement Goal Theory.

    Science.gov (United States)

    Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

    2003-01-01

    Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

  9. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of the best estimate 3D neutronic (PANTHER), system thermal hydraulic (RELAP5), core sub-channel thermal hydraulic (COBRA-3C), and fuel thermal mechanic (FRAPCON/FRAPTRAN) codes. A series of methodologies have been developed to perform and to license the reactor safety analysis and core reload design, based on the deterministic bounding approach. Following the recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to the multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analyses capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their applications to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
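
    The non-parametric order statistic option mentioned above is commonly implemented with Wilks' formula; the sketch below is a generic illustration under that assumption, not the BESUAM implementation, and computes the smallest number of code runs needed for a first-order, one-sided tolerance limit with coverage β and confidence γ.

```python
def wilks_sample_size(beta: float = 0.95, gamma: float = 0.95) -> int:
    """Smallest n such that 1 - beta**n >= gamma (first-order, one-sided Wilks limit)."""
    n = 1
    while 1.0 - beta ** n < gamma:
        n += 1
    return n

# Classic 95%/95% case: the maximum of 59 runs bounds the 95th percentile
# of the output with at least 95% confidence.
print(wilks_sample_size(0.95, 0.95))  # -> 59
```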

  10. Statistical Analyses of Second Indoor Bio-Release Field Evaluation Study at Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Amidan, Brett G.; Pulsipher, Brent A.; Matzke, Brett D.

    2009-12-17

    In September 2008 a large-scale testing operation (referred to as the INL-2 test) was performed within a two-story building (PBF-632) at the Idaho National Laboratory (INL). The report “Operational Observations on the INL-2 Experiment” defines the seven objectives for this test and discusses the results and conclusions. This is further discussed in the introduction of this report. The INL-2 test consisted of five tests (events) in which a floor (level) of the building was contaminated with the harmless biological warfare agent simulant Bg and samples were taken in most, if not all, of the rooms on the contaminated floor. After the sampling, the building was decontaminated, and the next test was performed. Judgmental samples and probabilistic samples were determined and taken during each test. Vacuum, wipe, and swab samples were taken within each room. The purpose of this report is to study an additional four topics that were not within the scope of the original report. These topics are: 1) assess the quantitative assumptions about the data being normally or log-normally distributed; 2) evaluate differences and quantify the sample-to-sample variability within a room and across the rooms; 3) perform geostatistical types of analyses to study spatial correlations; and 4) quantify the differences observed between surface types and sampling methods for each scenario and study the consistency across the scenarios. The following four paragraphs summarize the results of each of the four additional analyses. All samples after decontamination came back negative. Because of this, it was not appropriate to determine if these clearance samples were normally distributed. As Table 1 shows, the characterization data consists of values between and inclusive of 0 and 100 CFU/cm2 (100 was the value assigned when the number was too numerous to count). The 100 values are generally much bigger than the rest of the data, causing the data to be right skewed. There are also a significant
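
    A hedged sketch of the first analysis topic above, checking whether characterization counts look normal or log-normal; the counts here are simulated stand-ins, not the INL-2 measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cfu = rng.lognormal(mean=1.0, sigma=1.2, size=150)   # hypothetical CFU/cm2 counts

# Shapiro-Wilk test on the raw data and on the log-transformed data:
# a small p-value is evidence against the corresponding distributional assumption.
stat_raw, p_raw = stats.shapiro(cfu)
stat_log, p_log = stats.shapiro(np.log(cfu))

print(f"normal fit:     W = {stat_raw:.3f}, p = {p_raw:.3g}")
print(f"log-normal fit: W = {stat_log:.3f}, p = {p_log:.3g}")
```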

  11. Authigenic oxide Neodymium Isotopic composition as a proxy of seawater: applying multivariate statistical analyses.

    Science.gov (United States)

    McKinley, C. C.; Scudder, R.; Thomas, D. J.

    2016-12-01

    The Neodymium Isotopic composition (Nd IC) of oxide coatings has been applied as a tracer of water mass composition and used to address fundamental questions about past ocean conditions. The leached authigenic oxide coating from marine sediment is widely assumed to reflect the dissolved trace metal composition of the bottom water interacting with sediment at the seafloor. However, recent studies have shown that readily reducible sediment components, in addition to trace metal fluxes from the pore water, are incorporated into the bottom water, influencing the trace metal composition of leached oxide coatings. This challenges the prevailing application of the authigenic oxide Nd IC as a proxy of seawater composition. Therefore, it is important to identify the component end-members that create sediments of different lithology and determine if, or how, they might contribute to the Nd IC of oxide coatings. To investigate lithologic influence on the results of sequential leaching, we selected two sites with complete bulk sediment statistical characterization. Site U1370, in the South Pacific Gyre, is predominantly composed of rhyolite (~60%) and has a distinguishable (~10%) Fe-Mn oxyhydroxide component (Dunlea et al., 2015). Site 1149, near the Izu-Bonin Arc, is predominantly composed of dispersed ash (~20-50%) and eolian dust from Asia (~50-80%) (Scudder et al., 2014). We perform a two-step leaching procedure: a 14 mL leach of 0.02 M hydroxylamine hydrochloride (HH) in 20% acetic acid buffered to pH 4 for one hour, targeting metals bound to the Fe- and Mn-oxide fractions, and a second HH leach for 12 hours, designed to remove any remaining oxides from the residual component. We analyze all three resulting fractions for a large suite of major, trace and rare earth elements; a sub-set of the samples is also analyzed for Nd IC. We use multivariate statistical analyses of the resulting geochemical data to identify how each component of the sediment partitions across the sequential

  12. Statistical analyses in the study of solar wind-magnetosphere coupling

    International Nuclear Information System (INIS)

    Baker, D.N.

    1985-01-01

    Statistical analyses provide a valuable method for establishing initially the existence (or lack of existence) of a relationship between diverse data sets. Statistical methods also allow one to make quantitative assessments of the strengths of observed relationships. This paper reviews the essential techniques and underlying statistical bases for the use of correlative methods in solar wind-magnetosphere coupling studies. Techniques of visual correlation and time-lagged linear cross-correlation analysis are emphasized, but methods of multiple regression, superposed epoch analysis, and linear prediction filtering are also described briefly. The long history of correlation analysis in the area of solar wind-magnetosphere coupling is reviewed with the assessments organized according to data averaging time scales (minutes to years). It is concluded that these statistical methods can be very useful first steps, but that case studies and various advanced analysis methods should be employed to understand fully the average response of the magnetosphere to solar wind input. It is clear that many workers have not always recognized underlying assumptions of statistical methods and thus the significance of correlation results can be in doubt. Long-term averages (greater than or equal to 1 hour) can reveal gross relationships, but only when dealing with high-resolution data (1 to 10 min) can one reach conclusions pertinent to magnetospheric response time scales and substorm onset mechanisms
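
    A small illustration of the time-lagged linear cross-correlation technique emphasized above, using synthetic series in place of solar wind input and magnetospheric response data.

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_lag = 2000, 30                      # e.g. 1-min samples with a 30-min response delay
driver = rng.standard_normal(n)             # stand-in for a solar wind coupling function
response = np.roll(driver, true_lag) + 0.5 * rng.standard_normal(n)   # delayed, noisy output

def lagged_corr(x, y, max_lag):
    """Pearson correlation of y against x for shifts of 0..max_lag samples."""
    return np.array([np.corrcoef(x[: len(x) - k], y[k:])[0, 1] for k in range(max_lag + 1)])

corr = lagged_corr(driver, response, max_lag=120)
print("best lag:", corr.argmax(), "samples, r =", round(corr.max(), 3))
```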

  13. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
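
    A brief sketch of the first three detection steps listed above (linear, monotonic, and central-tendency patterns), applied to a synthetic input/output scatterplot rather than the two-phase flow model results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 500)                      # sampled input variable
y = np.exp(3 * x) + rng.normal(0, 2, 500)       # nonlinear but monotonic response

r, p_r = stats.pearsonr(x, y)                   # (1) linear relationship
rho, p_rho = stats.spearmanr(x, y)              # (2) monotonic relationship

# (3) trend in central tendency: Kruskal-Wallis test across quintile bins of x.
bins = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
h, p_h = stats.kruskal(*[y[bins == b] for b in np.unique(bins)])

print(f"Pearson r = {r:.2f} (p = {p_r:.1e}); Spearman rho = {rho:.2f} (p = {p_rho:.1e})")
print(f"Kruskal-Wallis H = {h:.1f} (p = {p_h:.1e})")
```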

  14. Statistical analyses of the data on occupational radiation expousure at JPDR

    International Nuclear Information System (INIS)

    Kato, Shohei; Anazawa, Yutaka; Matsuno, Kenji; Furuta, Toshishiro; Akiyama, Isamu

    1980-01-01

    In the statistical analyses of the data on occupational radiation exposure at JPDR, the following statistical features were obtained. (1) The individual doses followed a log-normal distribution. (2) In the distribution of doses from one job in the controlled area, the logarithm of the mean (μ) depended on the exposure rate γ (mR/h), and σ correlated with the nature of the job and was normally distributed. These relations were: μ = 0.48 ln γ − 0.24, σ = 1.2 ± 0.58. (3) For the data containing different groups, the distribution of doses showed a polygonal line on log-normal probability paper. (4) Under the dose limitation, the distribution of the doses showed an asymptotic curve along the limit on log-normal probability paper. (author)
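
    A minimal sketch of fitting a log-normal model to individual dose data of this kind; the doses and parameters below are purely illustrative, not the JPDR records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
doses = rng.lognormal(mean=0.3, sigma=1.1, size=400)    # hypothetical individual doses

# Two-parameter log-normal fit (location fixed at zero for strictly positive doses).
shape, loc, scale = stats.lognorm.fit(doses, floc=0)
mu, sigma = np.log(scale), shape                        # mean and std of log(dose)

# Probability-plot correlation of log(dose) as a quick check of log-normality.
(_, _), (slope, intercept, r) = stats.probplot(np.log(doses), dist="norm")

print(f"mu = {mu:.2f}, sigma = {sigma:.2f}, probability-plot r = {r:.3f}")
```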

  15. Performance and Vibration Analyses of Lift-Offset Helicopters

    Directory of Open Access Journals (Sweden)

    Jeong-In Go

    2017-01-01

    Full Text Available A validation study on the performance and vibration analyses of the XH-59A compound helicopter is conducted to establish techniques for the comprehensive analysis of lift-offset compound helicopters. This study considers the XH-59A lift-offset compound helicopter using a rigid coaxial rotor system as a verification model. CAMRAD II (Comprehensive Analytical Method of Rotorcraft Aerodynamics and Dynamics II), a comprehensive analysis code, is used as a tool for the performance, vibration, and loads analyses. A general free wake model, which is more sophisticated than other wake models, is used to obtain good results for the comprehensive analysis. Performance analyses of the XH-59A helicopter with and without auxiliary propulsion are conducted in various flight conditions. In addition, vibration analyses of the XH-59A compound helicopter configuration are conducted in the forward flight condition. The present comprehensive analysis results are in good agreement with the flight test and previous analyses. Therefore, techniques for the comprehensive analysis of lift-offset compound helicopters are appropriately established. Furthermore, the rotor lifts are calculated for the XH-59A lift-offset compound helicopter in the forward flight condition to investigate the airloads characteristics of the ABC™ (Advancing Blade Concept) rotor.

  16. Statistical evaluation of diagnostic performance topics in ROC analysis

    CERN Document Server

    Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

    2016-01-01

    Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...
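
    A small, generic illustration of one core ROC quantity, the area under the curve obtained through its equivalence with the Mann-Whitney U statistic; the scores are simulated, and the snippet is not drawn from the book.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
scores_neg = rng.normal(0.0, 1.0, 300)   # test scores for non-diseased cases
scores_pos = rng.normal(1.0, 1.0, 200)   # test scores for diseased cases

# AUC = P(score_pos > score_neg) + 0.5 * P(tie) = U / (n_pos * n_neg).
u, _ = stats.mannwhitneyu(scores_pos, scores_neg, alternative="greater")
auc = u / (len(scores_pos) * len(scores_neg))
print(f"empirical AUC = {auc:.3f}")
```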

  17. Installation and performance evaluation of an indigenous surface area analyser

    International Nuclear Information System (INIS)

    Pillai, S.N.; Solapurkar, M.N.; Venkatesan, V.; Prakash, A.; Khan, K.B.; Kumar, Arun; Prasad, R.S.

    2014-01-01

    An indigenously available surface area analyser was installed inside glove box and checked for its performance by analyzing uranium oxide and thorium oxide powders at RMD. The unit has been made ready for analysis of Plutonium oxide powders after incorporating several important features. (author)

  18. Analysing the performance of dynamic multi-objective optimisation algorithms

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available and the goal of the algorithm is to track a set of tradeoff solutions over time. Analysing the performance of a dynamic multi-objective optimisation algorithm (DMOA) is not a trivial task. For each environment (before a change occurs) the DMOA has to find a set...

  19. Characteristics of electrostatic solitary waves observed in the plasma sheet boundary: Statistical analyses

    Directory of Open Access Journals (Sweden)

    H. Kojima

    1999-01-01

    Full Text Available We present the characteristics of the Electrostatic Solitary Waves (ESW) observed by the Geotail spacecraft in the plasma sheet boundary layer based on statistical analyses. We also discuss the results with reference to a model of ESW generation by electron beams, which has been proposed on the basis of computer simulations. In this generation model, the nonlinear evolution of Langmuir waves excited by electron bump-on-tail instabilities leads to the formation of isolated electrostatic potential structures corresponding to "electron holes" in phase space. The statistical analyses of the Geotail data, which we conducted under the assumption that the polarity of ESW potentials is positive, show that most ESW propagate in the same direction as the electron beams that are observed simultaneously by the plasma instrument. Further, we also find that the ESW potential energy is much smaller than the background electron thermal energy and that the ESW potential widths are typically shorter than 60 times the local electron Debye length when we assume that the ESW potentials travel at the same velocity as the electron beams. These results are very consistent with the ESW generation model in which the nonlinear evolution of the electron bump-on-tail instability leads to the formation of electron holes in phase space.

  20. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    Science.gov (United States)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
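
    The heterogeneity spectrum quoted above can be tabulated directly; the only assumption in this short sketch is the unit convention (wavenumber m in 1/km), with ε and a taken from the abstract.

```python
import numpy as np

eps, a = 0.05, 3.1   # fractional fluctuation and correlation length (km) from the abstract

def pdm(m):
    """Power spectral density P(m) = 8*pi*eps^2*a^3 / (1 + a^2*m^2)^2."""
    return 8.0 * np.pi * eps**2 * a**3 / (1.0 + (a * m) ** 2) ** 2

# Evaluate around the corner wavenumber 1/a ~ 0.32 1/km.
for m in np.logspace(-2, 1, 7):
    print(f"m = {m:8.3f} 1/km   P(m) = {pdm(m):.4e}")
```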

  1. Multivariate statistical analyses demonstrate unique host immune responses to single and dual lentiviral infection.

    Directory of Open Access Journals (Sweden)

    Sunando Roy

    2009-10-01

    Full Text Available Feline immunodeficiency virus (FIV) and human immunodeficiency virus (HIV) are recently identified lentiviruses that cause progressive immune decline and ultimately death in infected cats and humans. It is of great interest to understand how to prevent immune system collapse caused by these lentiviruses. We recently described that disease caused by a virulent FIV strain in cats can be attenuated if animals are first infected with a feline immunodeficiency virus derived from a wild cougar. The detailed temporal tracking of cat immunological parameters in response to two viral infections resulted in high-dimensional datasets containing variables that exhibit strong co-variation. Initial analyses of these complex data using univariate statistical techniques did not account for interactions among immunological response variables and therefore potentially obscured significant effects between infection state and immunological parameters. Here, we apply a suite of multivariate statistical tools, including Principal Component Analysis, MANOVA and Linear Discriminant Analysis, to temporal immunological data resulting from FIV superinfection in domestic cats. We investigated the co-variation among immunological responses, the differences in immune parameters among four groups of five cats each (uninfected, single and dual infected animals), and the "immune profiles" that discriminate among them over the first four weeks following superinfection. Dual infected cats mount an immune response by 24 days post superinfection that is characterized by elevated levels of CD8 and CD25 cells and increased expression of IL-4, IFN-gamma, and FAS. This profile discriminates dual infected cats from cats infected with FIV alone, which show high IL-10 and lower numbers of CD8 and CD25 cells. Multivariate statistical analyses demonstrate both the dynamic nature of the immune response to FIV single and dual infection and the development of a unique immunological profile in dual
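
    A compact sketch of the multivariate workflow named above, principal component reduction followed by linear discriminant analysis of group membership, run on simulated immunological measurements rather than the actual cat data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
# 20 animals x 6 hypothetical immune variables (e.g. CD4, CD8, CD25, IL-4, IL-10, IFN-gamma).
X = rng.normal(size=(20, 6))
X[10:] += [0.0, 1.5, 1.2, 0.8, -1.0, 0.9]          # shift the "dual infected" group
groups = np.array(["single"] * 10 + ["dual"] * 10)

# Reduce the co-varying measurements to a few principal components, then discriminate groups.
pcs = PCA(n_components=3).fit_transform(X)
lda = LinearDiscriminantAnalysis().fit(pcs, groups)
print("resubstitution accuracy:", lda.score(pcs, groups))
```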

  2. Statistical analyses of digital collections: Using a large corpus of systematic reviews to study non-citations

    DEFF Research Database (Denmark)

    Frandsen, Tove Faber; Nicolaisen, Jeppe

    2017-01-01

    Using statistical methods to analyse digital material for patterns makes it possible to detect patterns in big data that we would otherwise not be able to detect. This paper seeks to exemplify this fact by statistically analysing a large corpus of references in systematic reviews. The aim...

  3. Performance Analyses in an Assistive Technology Service Delivery Process

    DEFF Research Database (Denmark)

    Petersen, Anne Karin

    Performance Analyses in an Assistive Technology Service Delivery Process. Keywords: process model, occupational performance, assistive technologies. The poster is about teaching students, using models and theory in education and practice. It is related to the occupational therapy process and professional ...... of top-down, client-centred and activity-based interventions, ERGO/Munksgaard; Fisher, A. & Griswold, L. A., 2014. Performance Skills. In: B. Schell, ed. 2014. Willard & Spackman's Occupational Therapy, 12th ed., pp. 249-264; Cook, A.M., Polgar, J.M. (2015) Assistive Technologies...

  4. Self-assessed performance improves statistical fusion of image labels

    International Nuclear Information System (INIS)

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance
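
    A toy illustration of the contrast drawn above between simple majority voting and voting weighted by a per-rater self-assessment; it is not the statistical fusion algorithm used in the study, and the rater data are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)
n_raters, n_voxels = 9, 10_000
truth = rng.integers(0, 2, n_voxels)                        # hidden true binary label per voxel
skill = rng.uniform(0.55, 0.95, n_raters)                   # per-rater labeling accuracy
labels = np.where(rng.random((n_raters, n_voxels)) < skill[:, None],
                  truth, 1 - truth)                         # each rater's noisy labels
confidence = np.clip(skill + rng.normal(0, 0.05, n_raters), 0, 1)   # self-assessed confidence

majority = (labels.mean(axis=0) > 0.5).astype(int)
weighted = (np.average(labels, axis=0, weights=confidence) > 0.5).astype(int)

print("majority-vote accuracy:       ", (majority == truth).mean())
print("confidence-weighted accuracy: ", (weighted == truth).mean())
```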

  5. Self-assessed performance improves statistical fusion of image labels

    Energy Technology Data Exchange (ETDEWEB)

    Bryan, Frederick W., E-mail: frederick.w.bryan@vanderbilt.edu; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); Reich, Daniel S. [Translational Neuroradiology Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, Maryland 20892 (United States); Landman, Bennett A. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); and Radiology and Radiological Sciences, Vanderbilt University, Nashville, Tennessee 37235 (United States)

    2014-03-15

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance

  6. Systematic Mapping and Statistical Analyses of Valley Landform and Vegetation Asymmetries Across Hydroclimatic Gradients

    Science.gov (United States)

    Poulos, M. J.; Pierce, J. L.; McNamara, J. P.; Flores, A. N.; Benner, S. G.

    2015-12-01

    Terrain aspect alters the spatial distribution of insolation across topography, driving eco-pedo-hydro-geomorphic feedbacks that can alter landform evolution and result in valley asymmetries for a suite of land surface characteristics (e.g. slope length and steepness, vegetation, soil properties, and drainage development). Asymmetric valleys serve as natural laboratories for studying how landscapes respond to climate perturbation. In the semi-arid montane granodioritic terrain of the Idaho batholith, Northern Rocky Mountains, USA, prior works indicate that reduced insolation on northern (pole-facing) aspects prolongs snow pack persistence, and is associated with thicker, finer-grained soils, that retain more water, prolong the growing season, support coniferous forest rather than sagebrush steppe ecosystems, stabilize slopes at steeper angles, and produce sparser drainage networks. We hypothesize that the primary drivers of valley asymmetry development are changes in the pedon-scale water-balance that coalesce to alter catchment-scale runoff and drainage development, and ultimately cause the divide between north and south-facing land surfaces to migrate northward. We explore this conceptual framework by coupling land surface analyses with statistical modeling to assess relationships and the relative importance of land surface characteristics. Throughout the Idaho batholith, we systematically mapped and tabulated various statistical measures of landforms, land cover, and hydroclimate within discrete valley segments (n=~10,000). We developed a random forest based statistical model to predict valley slope asymmetry based upon numerous measures (n>300) of landscape asymmetries. Preliminary results suggest that drainages are tightly coupled with hillslopes throughout the region, with drainage-network slope being one of the strongest predictors of land-surface-averaged slope asymmetry. When slope-related statistics are excluded, due to possible autocorrelation, valley
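
    A condensed sketch of the random-forest step described above, predicting valley slope asymmetry from other asymmetry measures; the predictors and response are simulated stand-ins for the mapped valley-segment statistics.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
n = 2000
# Hypothetical asymmetry predictors: drainage slope, vegetation cover, insolation, relief.
X = rng.normal(size=(n, 4))
y = 0.7 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.normal(size=n)   # slope-asymmetry response

rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0).fit(X, y)
print("out-of-bag R^2:       ", round(rf.oob_score_, 3))
print("relative importances: ", np.round(rf.feature_importances_, 3))
```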

  7. Improved custom statistics visualization for CA Performance Center data

    CERN Document Server

    Talevi, Iacopo

    2017-01-01

    The main goal of my project is to understand and experiment the possibilities that CA Performance Center (CA PC) offers for creating custom applications to display stored information through interesting visual means, such as maps. In particular, I have re-written some of the network statistics web pages in order to fetch data from new statistics modules in CA PC, which has its own API, and stop using the RRD data.

  8. Statistical analyses to support guidelines for marine avian sampling. Final report

    Science.gov (United States)

    Kinlan, Brian P.; Zipkin, Elise; O'Connell, Allan F.; Caldow, Chris

    2012-01-01

    distribution to describe counts of a given species in a particular region and season. 4. Using a large database of historical at-sea seabird survey data, we applied this technique to identify appropriate statistical distributions for modeling a variety of species, allowing the distribution to vary by season. For each species and season, we used the selected distribution to calculate and map retrospective statistical power to detect hotspots and coldspots, and map p-values from Monte Carlo significance tests of hotspots and coldspots, in discrete lease blocks designated by the U.S. Department of Interior, Bureau of Ocean Energy Management (BOEM). 5. Because our definition of hotspots and coldspots does not explicitly include variability over time, we examine the relationship between the temporal scale of sampling and the proportion of variance captured in time series of key environmental correlates of marine bird abundance, as well as available marine bird abundance time series, and use these analyses to develop recommendations for the temporal distribution of sampling to adequately represent both short-term and long-term variability. We conclude by presenting a schematic “decision tree” showing how this power analysis approach would fit in a general framework for avian survey design, and discuss implications of model assumptions and results. We discuss avenues for future development of this work, and recommendations for practical implementation in the context of siting and wildlife assessment for offshore renewable energy development projects.
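
    One way to select a count distribution of the kind described above is a quick method-of-moments check for overdispersion; this sketch uses simulated per-block counts rather than the historical survey database.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
counts = rng.negative_binomial(n=1.2, p=0.25, size=500)   # hypothetical per-block bird counts

mean, var = counts.mean(), counts.var(ddof=1)
if var > mean:                                            # overdispersed: negative binomial
    r = mean**2 / (var - mean)                            # method-of-moments size parameter
    p = r / (r + mean)
    dist = stats.nbinom(r, p)
else:                                                     # otherwise fall back to Poisson
    dist = stats.poisson(mean)

print(f"mean = {mean:.2f}, variance = {var:.2f}")
print("P(count = 0) under the fitted model:", round(dist.pmf(0), 3))
```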

  9. Statistical cluster analysis and diagnosis of nuclear system level performance

    International Nuclear Information System (INIS)

    Teichmann, T.; Levine, M.M.; Samanta, P.K.; Kato, W.Y.

    1985-01-01

    The complexity of individual nuclear power plants and the importance of maintaining reliable and safe operations make it desirable to complement the deterministic analyses of these plants by corresponding statistical surveys and diagnoses. Based on such investigations, one can then explore, statistically, the anticipation, prevention, and when necessary, the control of such failures and malfunctions. This paper, and the accompanying one by Samanta et al., describe some of the initial steps in exploring the feasibility of setting up such a program on an integrated and global (industry-wide) basis. The conceptual statistical and data framework was originally outlined in BNL/NUREG-51609, NUREG/CR-3026, and the present work aims at showing how some important elements might be implemented in a practical way (albeit using hypothetical or simulated data).

  10. Statistical contact angle analyses; "slow moving" drops on a horizontal silicon-oxide surface.

    Science.gov (United States)

    Schmitt, M; Grub, J; Heib, F

    2015-06-01

    Sessile drop experiments on horizontal surfaces are commonly used to characterise surface properties in science and in industry. The advancing angle and the receding angle are measurable on every solid. Especially on horizontal surfaces, even the notions themselves are critically questioned by some authors. Building a standard, reproducible and valid method of measuring and defining specific (advancing/receding) contact angles is an important challenge of surface science. Recently we have developed two/three approaches, by sigmoid fitting, by independent and by dependent statistical analyses, which are practicable for the determination of specific angles/slopes when inclining the sample surface. These approaches lead to contact angle data which are independent of the "user skills" and subjectivity of the operator, which is also urgently needed to evaluate dynamic measurements of contact angles. We will show in this contribution that the slightly modified procedures are also applicable to find specific angles for experiments on horizontal surfaces. As an example, droplets on a flat, freshly cleaned silicon-oxide surface (wafer) are dynamically measured by the sessile drop technique while the volume of the liquid is increased/decreased. The triple points, the time, and the contact angles during the advancing and the receding of the drop obtained by high-precision drop shape analysis are statistically analysed. As stated in the previous contribution, the procedure is called "slow movement" analysis due to the small covered distance and the dominance of data points with low velocity. Even the smallest variations in velocity, such as the minimal advancing motion during the withdrawing of the liquid, are identifiable, which confirms the flatness and the chemical homogeneity of the sample surface and the high sensitivity of the presented approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
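
    A minimal sketch of the sigmoid-fit idea referenced above, fitting a logistic transition to a synthetic contact-angle-versus-time record from a sessile-drop run; the parametrization is only one plausible choice, not the authors' exact fit function.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, theta_lo, theta_hi, t0, k):
    """Logistic transition between a lower and an upper contact-angle plateau."""
    return theta_lo + (theta_hi - theta_lo) / (1.0 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(10)
t = np.linspace(0, 60, 300)                                    # time (s)
theta = sigmoid(t, 38.0, 62.0, 30.0, 0.4) + rng.normal(0, 0.5, t.size)

popt, _ = curve_fit(sigmoid, t, theta, p0=[30.0, 70.0, 25.0, 0.1])
theta_lo, theta_hi, t0, k = popt
print(f"plateaus: {theta_lo:.1f} -> {theta_hi:.1f} degrees, midpoint at t = {t0:.1f} s")
```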

  11. Statistical analyses of incidents on onshore gas transmission pipelines based on PHMSA database

    International Nuclear Information System (INIS)

    Lam, Chio; Zhou, Wenxing

    2016-01-01

    This article reports statistical analyses of the mileage and pipe-related incidents data corresponding to the onshore gas transmission pipelines in the US between 2002 and 2013 collected by the Pipeline Hazardous Material Safety Administration of the US Department of Transportation. The analysis indicates that there are approximately 480,000 km of gas transmission pipelines in the US, approximately 60% of them more than 45 years old as of 2013. Eighty percent of the pipelines are Class 1 pipelines, and about 20% of the pipelines are Classes 2 and 3 pipelines. It is found that the third-party excavation, external corrosion, material failure and internal corrosion are the four leading failure causes, responsible for more than 75% of the total incidents. The 12-year average rate of rupture equals 3.1 × 10⁻⁵ per km-year due to all failure causes combined. External corrosion is the leading cause for ruptures: the 12-year average rupture rate due to external corrosion equals 1.0 × 10⁻⁵ per km-year and is twice the rupture rate due to the third-party excavation or material failure. The study provides insights into the current state of gas transmission pipelines in the US and baseline failure statistics for the quantitative risk assessments of such pipelines. - Highlights: • Analyze PHMSA pipeline mileage and incident data between 2002 and 2013. • Focus on gas transmission pipelines. • Leading causes for pipeline failures are identified. • Provide baseline failure statistics for risk assessments of gas transmission pipelines.
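
    A small sketch of how baseline rates of this kind translate into interval estimates, treating ruptures as a Poisson process over the pipeline exposure; the count and exposure below are round illustrative numbers, not the PHMSA figures.

```python
from scipy.stats import chi2

ruptures = 180                   # hypothetical rupture count over the observation window
exposure = 480_000 * 12          # km-years of pipeline exposure (illustrative)

rate = ruptures / exposure

# Exact (Garwood) 95% confidence interval for a Poisson rate.
alpha = 0.05
lower = 0.5 * chi2.ppf(alpha / 2, 2 * ruptures) / exposure
upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (ruptures + 1)) / exposure

print(f"rate = {rate:.2e} per km-year  (95% CI {lower:.2e} .. {upper:.2e})")
```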

  12. Attitude towards statistics and performance among post-graduate students

    Science.gov (United States)

    Rosli, Mira Khalisa; Maat, Siti Mistima

    2017-05-01

    Mastering Statistics is a necessity for students, especially for post-graduates involved in research. The purpose of this research was to identify the attitude towards Statistics among post-graduates and to determine the relationship between attitude towards Statistics and the performance of post-graduates of the Faculty of Education, UKM, Bangi. 173 post-graduate students were chosen randomly to participate in the study. These students were registered in the Research Methodology II course introduced by the faculty. A survey of attitude toward Statistics using a 5-point Likert scale was used for data collection purposes. The instrument consists of four components: affective, cognitive competency, value and difficulty. The data were analyzed using SPSS version 22 to produce descriptive and inferential statistics. The results showed a moderate, positive relationship between attitude towards statistics and students' performance. In conclusion, educators need to assess students' attitudes towards the course to accomplish the learning outcomes.

  13. A Divergence Statistics Extension to VTK for Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report follows the series of previous documents ([PT08, BPRT09b, PT09, BPT09, PT10, PB13]), where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
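
    The discrepancy notion described above can be illustrated with a Kullback-Leibler divergence between a binned empirical sample and a theoretical model; this standalone Python sketch mirrors the concept only and does not use or imitate the VTK engine's API.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
observed = rng.normal(0.3, 1.1, 5000)            # empirical sample, e.g. timing measurements

# Bin the sample and compare it with an "ideal" standard normal on the same bins.
edges = np.linspace(-5, 5, 41)
emp, _ = np.histogram(observed, bins=edges)
emp = emp / emp.sum()
ideal = np.diff(stats.norm.cdf(edges, loc=0.0, scale=1.0))

kl = stats.entropy(emp + 1e-12, ideal + 1e-12)   # D_KL(empirical || ideal), in nats
print(f"KL divergence = {kl:.4f}")
```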

  14. THESEE-3, Orgel Reactor Performance and Statistic Hot Channel Factors

    International Nuclear Information System (INIS)

    Chambaud, B.

    1974-01-01

    1 - Nature of physical problem solved: The code applies to a heavy-water moderated organic-cooled reactor channel. Different fuel cluster models can be used (circular or hexagonal patterns). The code gives coolant temperatures and velocities and cladding temperatures throughout the channel and also channel performances, such as power, outlet temperature, boiling and burn-out safety margins (see THESEE-1). In a further step, calculations are performed with statistical values obtained by random retrieval of geometrical input data and taking into account construction tolerances, vibrations, etc. The code evaluates the mean value and standard deviation for the more important thermal and hydraulic parameters. 2 - Method of solution: First step calculations are performed for nominal values of parameters by solving iteratively the non-linear system of equations which give the pressure drops in subchannels of the current zone (see THESEE-1). Then a Gaussian probability distribution of possible statistical values of the geometrical input data is assumed. A random number generation routine determines the statistical case. Calculations are performed in the same way as for the nominal case. In the case of several channels, statistical performances must be adjusted to equalize the normal pressure drop. A special subroutine (AVERAGE) then determines the mean value and standard deviation, and thus probability functions of the most significant thermal and hydraulic results. 3 - Restrictions on the complexity of the problem: Maximum 7 fuel clusters, each divided into 10 axial zones. Fuel bundle geometries are restricted to the following models - circular pattern 6/7, 18/19, 36/67 rods, with or without fillers. The fuel temperature distribution is not studied. The probability distribution of the statistical input is assumed to be a Gaussian function. The principle of random retrieval of statistical values is correct, but some additional correlations could be found from a more
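
    A toy sketch of the statistical step described above: drawing geometrical input data from Gaussian tolerance distributions and collecting the mean and standard deviation of a derived quantity. The channel response used here is a deliberately crude placeholder, not the THESEE-3 thermal-hydraulic model.

```python
import numpy as np

rng = np.random.default_rng(12)
n_cases = 5000

# Nominal geometry with fabrication tolerances (illustrative values, in mm).
rod_diameter = rng.normal(12.0, 0.05, n_cases)
pitch = rng.normal(16.0, 0.10, n_cases)

# Placeholder response: cladding temperature rise scaling with flow-area reduction.
flow_area = pitch**2 - np.pi * rod_diameter**2 / 4.0
delta_t = 40.0 * (flow_area.mean() / flow_area)          # K, crude surrogate model

print(f"mean dT = {delta_t.mean():.2f} K, std = {delta_t.std(ddof=1):.2f} K")
print(f"95th percentile dT = {np.percentile(delta_t, 95):.2f} K")
```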

  15. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    Science.gov (United States)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
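
    A short sketch of the two transformations proposed above, applied to simulated slope gradients and curvatures; the Box-Cox lambda is estimated from the sample, while the arctangent scaling constant is a placeholder rather than the authors' calibrated value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
slope_deg = rng.gamma(shape=2.0, scale=6.0, size=10_000)      # right-skewed slope gradients
curvature = rng.standard_t(df=3, size=10_000) * 0.01          # long-tailed profile curvature

# Box-Cox transform of slope (input must be strictly positive); lambda by maximum likelihood.
slope_bc, lam = stats.boxcox(slope_deg)
print(f"Box-Cox lambda = {lam:.2f}, skewness {stats.skew(slope_deg):.2f} -> {stats.skew(slope_bc):.2f}")

# Arctangent transform of curvature squashes the long tails toward a Gaussian-like shape.
c = 50.0                                                      # illustrative scaling constant
curv_at = np.arctan(c * curvature)
print(f"excess kurtosis {stats.kurtosis(curvature):.1f} -> {stats.kurtosis(curv_at):.1f}")
```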

  16. Review of accident analyses performed at Mochovce NPP

    International Nuclear Information System (INIS)

    Siko, D.

    2000-01-01

    This paper presents a review of the accident analyses performed for NPP Mochovce V-1. The scope of these safety measures was defined and developed in the T SSM for the 'NPP Mochovce Nuclear Safety Improvements Report' issued in July 1995. The main objectives of these safety measures were the following: (a) to establish the criteria for selection and classification of accidental events, as well as to define the list of initiating events to be analysed. Accident classification into the individual groups must be performed in accordance with RG 1.70 and the IAEA recommendations 'Guidelines for Accident Analysis of WWER NPP' (IAEA-EBR-WWER-01) to select bounding cases to be calculated from the scope of initiating events; (b) to elaborate the accident analysis methodology, which also includes acceptance criteria for the evaluation of results, initial and boundary conditions, assumptions related to the application of the single failure criterion, requirements on analysis quality, the computer codes used, and the NPP models and input data for the accident analysis; (c) to perform the accident analyses for the Pre-operational Safety Report (POSAR); (d) to provide a synthetic report addressing the validity range of code models and correlations, the assessment against relevant test results, the evidence of user qualification, the modelling and nodalisation scheme for the plant, and the justification of the computer codes used. The analysis results showed that all acceptance criteria were met with satisfactory margins and that the design of NPP Mochovce is adequate. (author)

  17. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    International Nuclear Information System (INIS)

    Edjabou, Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn; Petersen, Claus; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2015-01-01

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered approach) facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, e.g. detailed, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both, waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single

  18. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Edjabou, Maklawe Essonanawe, E-mail: vine@env.dtu.dk [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark); Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark); Petersen, Claus [Econet AS, Omøgade 8, 2.sal, 2100 Copenhagen (Denmark); Scheutz, Charlotte; Astrup, Thomas Fruergaard [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark)

    2015-02-15

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered approach) facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, e.g. detailed, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both, waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single

  19. Testing Genetic Pleiotropy with GWAS Summary Statistics for Marginal and Conditional Analyses.

    Science.gov (United States)

    Deng, Yangqing; Pan, Wei

    2017-12-01

    There is growing interest in testing genetic pleiotropy, which is when a single genetic variant influences multiple traits. Several methods have been proposed; however, these methods have some limitations. First, all the proposed methods are based on the use of individual-level genotype and phenotype data; in contrast, for logistical, and other, reasons, summary statistics of univariate SNP-trait associations are typically only available based on meta- or mega-analyzed large genome-wide association study (GWAS) data. Second, existing tests are based on marginal pleiotropy, which cannot distinguish between direct and indirect associations of a single genetic variant with multiple traits due to correlations among the traits. Hence, it is useful to consider conditional analysis, in which a subset of traits is adjusted for another subset of traits. For example, in spite of substantial lowering of low-density lipoprotein cholesterol (LDL) with statin therapy, some patients still maintain high residual cardiovascular risk, and, for these patients, it might be helpful to reduce their triglyceride (TG) level. For this purpose, in order to identify new therapeutic targets, it would be useful to identify genetic variants with pleiotropic effects on LDL and TG after adjusting the latter for LDL; otherwise, a pleiotropic effect of a genetic variant detected by a marginal model could simply be due to its association with LDL only, given the well-known correlation between the two types of lipids. Here, we develop a new pleiotropy testing procedure based only on GWAS summary statistics that can be applied for both marginal analysis and conditional analysis. Although the main technical development is based on published union-intersection testing methods, care is needed in specifying conditional models to avoid invalid statistical estimation and inference. In addition to the previously used likelihood ratio test, we also propose using generalized estimating equations under the

  20. Detailed statistical contact angle analyses; "slow moving" drops on inclining silicon-oxide surfaces.

    Science.gov (United States)

    Schmitt, M; Groß, K; Grub, J; Heib, F

    2015-06-01

    Contact angle determination by the sessile drop technique is essential to characterise surface properties in science and in industry. Different specific angles can be observed on every solid, correlated with the advancing or the receding of the triple line. Different procedures and definitions for the determination of specific angles exist which are often not comprehensible or reproducible. Therefore, one of the most important tasks in this area is to build standard, reproducible and valid methods for determining advancing/receding contact angles. This contribution introduces novel techniques to analyse dynamic contact angle measurements (sessile drop) in detail which are applicable to axisymmetric and non-axisymmetric drops. Not only the recently presented fit solution by sigmoid function and the independent analysis of the different parameters (inclination, contact angle, velocity of the triple point), but also the dependent analysis, will be explained in detail for the first time. These approaches lead to contact angle data and a different route to specific contact angles which are independent of the "user skills" and subjectivity of the operator. As an example, the motion behaviour of droplets on flat silicon-oxide surfaces after different surface treatments is dynamically measured by the sessile drop technique when inclining the sample plate. The triple points, the inclination angles, the downhill (advancing motion) and the uphill angles (receding motion) obtained by high-precision drop shape analysis are independently and dependently statistically analysed. Due to the small covered distance for the dependent analysis (contact angle determination), they are characterised by small deviations of the computed values. In addition to the detailed introduction of these novel analytical approaches and the fit solution, special motion relations for the drop on inclined surfaces and detailed relations about the reactivity of the freshly cleaned silicon wafer surface resulting in acceleration

  1. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. One

  2. Dispensing processes impact apparent biological activity as determined by computational and statistical analyses.

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    Full Text Available Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets. We generated computational 3D pharmacophores based on data derived by both acoustic and tip-based transfer. The computed pharmacophores differ significantly depending upon dispensing and dilution methods. The acoustic dispensing-derived pharmacophore correctly identified active compounds in a subsequent test set where the tip-based method failed. Data from acoustic dispensing generates a pharmacophore containing two hydrophobic features, one hydrogen bond donor and one hydrogen bond acceptor. This is consistent with X-ray crystallography studies of ligand-protein interactions and automatically generated pharmacophores derived from this structural data. In contrast, the tip-based data suggest a pharmacophore with two hydrogen bond acceptors, one hydrogen bond donor and no hydrophobic features. This pharmacophore is inconsistent with the X-ray crystallographic studies and automatically generated pharmacophores. In short, traditional dispensing processes are another important source of error in high-throughput screening that impacts computational and statistical analyses. These findings have far-reaching implications in biological research.

  3. Statistical analysis of RHIC beam position monitors performance

    Science.gov (United States)

    Calaga, R.; Tomás, R.

    2004-04-01

    A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
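
    A sketch, under stated assumptions, of how SVD can flag malfunctioning BPMs: turn-by-turn data are decomposed, the few dominant (physical) modes are removed, and monitors with unusually large residual noise are flagged. The matrix below is synthetic and the threshold is illustrative, not the RHIC analysis code.

```python
# Sketch (illustrative, not the RHIC analysis code): using SVD of a BPM-by-turn
# orbit matrix to flag anomalous monitors. Physical betatron motion is captured by
# a few dominant singular vectors; BPMs dominated by the remaining "noise" modes
# are candidates for malfunction.
import numpy as np

def flag_bad_bpms(data, n_modes=4, factor=3.0):
    """data: (n_bpms, n_turns) array of turn-by-turn readings (synthetic here)."""
    centered = data - data.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    # reconstruct the signal using only the dominant modes
    signal = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes, :]
    residual_rms = np.sqrt(((centered - signal) ** 2).mean(axis=1))
    threshold = factor * np.median(residual_rms)
    return np.where(residual_rms > threshold)[0]

rng = np.random.default_rng(0)
turns = np.sin(0.22 * 2 * np.pi * np.arange(1024))           # common betatron-like signal
data = np.outer(rng.normal(1, 0.1, 64), turns) + rng.normal(0, 0.02, (64, 1024))
data[17] = rng.normal(0, 1.0, 1024)                           # one noisy/dead BPM
print(flag_bad_bpms(data))                                    # expected to include index 17
```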

  4. Statistical analysis of RHIC beam position monitors performance

    Directory of Open Access Journals (Sweden)

    R. Calaga

    2004-04-01

    Full Text Available A detailed statistical analysis of beam position monitors (BPM performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.

  5. Statistical improvements in functional magnetic resonance imaging analyses produced by censoring high-motion data points.

    Science.gov (United States)

    Siegel, Joshua S; Power, Jonathan D; Dubis, Joseph W; Vogel, Alecia C; Church, Jessica A; Schlaggar, Bradley L; Petersen, Steven E

    2014-05-01

    Subject motion degrades the quality of task functional magnetic resonance imaging (fMRI) data. Here, we test two classes of methods to counteract the effects of motion in task fMRI data: (1) a variety of motion regressions and (2) motion censoring ("motion scrubbing"). In motion regression, various regressors based on realignment estimates were included as nuisance regressors in general linear model (GLM) estimation. In motion censoring, volumes in which head motion exceeded a threshold were withheld from GLM estimation. The effects of each method were explored in several task fMRI data sets and compared using indicators of data quality and signal-to-noise ratio. Motion censoring decreased variance in parameter estimates within- and across-subjects, reduced residual error in GLM estimation, and increased the magnitude of statistical effects. Motion censoring performed better than all forms of motion regression and also performed well across a variety of parameter spaces, in GLMs with assumed or unassumed response shapes. We conclude that motion censoring improves the quality of task fMRI data and can be a valuable processing step in studies involving populations with even mild amounts of head movement. Copyright © 2013 Wiley Periodicals, Inc.
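
    A minimal sketch of motion censoring, assuming a Power-style framewise displacement computed from realignment parameters and a simple least-squares GLM; this is illustrative and not the published processing pipeline, and the threshold and data are invented.

```python
# Sketch (illustrative, not the published pipeline): motion censoring for task fMRI.
# Volumes whose framewise displacement (FD) exceeds a threshold are excluded from
# GLM estimation, here via ordinary least squares on the retained volumes only.
import numpy as np

def framewise_displacement(realignment, head_radius=50.0):
    """realignment: (n_vols, 6) params [3 translations mm, 3 rotations rad]."""
    diffs = np.abs(np.diff(realignment, axis=0))
    diffs[:, 3:] *= head_radius             # convert rotations to mm on a sphere
    return np.concatenate([[0.0], diffs.sum(axis=1)])

def censored_glm(Y, X, fd, fd_threshold=0.5):
    """Y: (n_vols, n_voxels) data, X: (n_vols, n_regressors) design matrix."""
    keep = fd <= fd_threshold               # volumes retained for estimation
    beta, *_ = np.linalg.lstsq(X[keep], Y[keep], rcond=None)
    return beta, keep

# synthetic example
rng = np.random.default_rng(1)
n_vols = 200
X = np.column_stack([np.ones(n_vols), rng.binomial(1, 0.3, n_vols)])  # intercept + task
Y = X @ np.array([[100.0], [2.0]]) + rng.normal(0, 1, (n_vols, 1))
motion = rng.normal(0, 0.1, (n_vols, 6))
beta, keep = censored_glm(Y, X, framewise_displacement(motion))
print(beta.ravel(), keep.sum(), "volumes retained")
```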

  6. Correlating tephras and cryptotephras using glass compositional analyses and numerical and statistical methods: Review and evaluation

    Science.gov (United States)

    Lowe, David J.; Pearce, Nicholas J. G.; Jorgensen, Murray A.; Kuehn, Stephen C.; Tryon, Christian A.; Hayward, Chris L.

    2017-11-01

    We define tephras and cryptotephras and their components (mainly ash-sized particles of glass ± crystals in distal deposits) and summarize the basis of tephrochronology as a chronostratigraphic correlational and dating tool for palaeoenvironmental, geological, and archaeological research. We then document and appraise recent advances in analytical methods used to determine the major, minor, and trace elements of individual glass shards from tephra or cryptotephra deposits to aid their correlation and application. Protocols developed recently for the electron probe microanalysis of major elements in individual glass shards help to improve data quality and standardize reporting procedures. A narrow electron beam (diameter ∼3-5 μm) can now be used to analyze smaller glass shards than previously attainable. Reliable analyses of 'microshards' (defined here as glass shards T2 test). Randomization tests can be used where distributional assumptions such as multivariate normality underlying parametric tests are doubtful. Compositional data may be transformed and scaled before being subjected to multivariate statistical procedures including calculation of distance matrices, hierarchical cluster analysis, and PCA. Such transformations may make the assumption of multivariate normality more appropriate. A sequential procedure using Mahalanobis distance and the Hotelling two-sample T2 test is illustrated using glass major element data from trachytic to phonolitic Kenyan tephras. All these methods require a broad range of high-quality compositional data which can be used to compare 'unknowns' with reference (training) sets that are sufficiently complete to account for all possible correlatives, including tephras with heterogeneous glasses that contain multiple compositional groups. Currently, incomplete databases are tending to limit correlation efficacy. The development of an open, online global database to facilitate progress towards integrated, high
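
    A sketch of the two-sample Hotelling T² test on glass major-element compositions, using the pooled covariance matrix and the squared Mahalanobis distance between group means; the compositions below are synthetic, not real tephra data.

```python
# Sketch (synthetic compositions): a two-sample Hotelling T^2 test on glass
# major-element data, as used to compare an "unknown" tephra with a reference
# tephra. The T^2 statistic is converted to an F statistic for the p-value.
import numpy as np
from scipy import stats

def hotelling_two_sample(X, Y):
    n1, p = X.shape
    n2, _ = Y.shape
    diff = X.mean(axis=0) - Y.mean(axis=0)
    S_pooled = ((n1 - 1) * np.cov(X, rowvar=False) +
                (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    d2 = diff @ np.linalg.solve(S_pooled, diff)     # squared Mahalanobis distance of means
    t2 = (n1 * n2) / (n1 + n2) * d2
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, p_value

rng = np.random.default_rng(2)
tephra_a = rng.multivariate_normal([75.0, 13.0, 3.5], np.diag([0.3, 0.1, 0.05]), 30)
tephra_b = rng.multivariate_normal([74.5, 13.4, 3.6], np.diag([0.3, 0.1, 0.05]), 25)
print(hotelling_two_sample(tephra_a, tephra_b))
```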

  7. Reporting characteristics of meta-analyses in orthodontics: methodological assessment and statistical recommendations.

    Science.gov (United States)

    Papageorgiou, Spyridon N; Papadopoulos, Moschos A; Athanasiou, Athanasios E

    2014-02-01

    Ideally meta-analyses (MAs) should consolidate the characteristics of orthodontic research in order to produce an evidence-based answer. However severe flaws are frequently observed in most of them. The aim of this study was to evaluate the statistical methods, the methodology, and the quality characteristics of orthodontic MAs and to assess their reporting quality during the last years. Electronic databases were searched for MAs (with or without a proper systematic review) in the field of orthodontics, indexed up to 2011. The AMSTAR tool was used for quality assessment of the included articles. Data were analyzed with Student's t-test, one-way ANOVA, and generalized linear modelling. Risk ratios with 95% confidence intervals were calculated to represent changes during the years in reporting of key items associated with quality. A total of 80 MAs with 1086 primary studies were included in this evaluation. Using the AMSTAR tool, 25 (27.3%) of the MAs were found to be of low quality, 37 (46.3%) of medium quality, and 18 (22.5%) of high quality. Specific characteristics like explicit protocol definition, extensive searches, and quality assessment of included trials were associated with a higher AMSTAR score. Model selection and dealing with heterogeneity or publication bias were often problematic in the identified reviews. The number of published orthodontic MAs is constantly increasing, while their overall quality is considered to range from low to medium. Although the number of MAs of medium and high level seems lately to rise, several other aspects need improvement to increase their overall quality.

  8. The RISMC approach to perform advanced PRA analyses - 15332

    International Nuclear Information System (INIS)

    Mandelli, D.; Smith, C.; Riley, T.; Nielsen, J.; Alfonsi, A.; Rabiti, C.; Cogliati, J.

    2015-01-01

    The existing fleet of nuclear power plants is in the process of extending its lifetime and increasing the power generated from these plants via power up-rates. In order to evaluate the impact of these two factors on the safety of the plant, the RISMC (Risk Informed Safety Margin Characterization) Pathway aims to develop simulation-based tools and methods to assess risks for existing nuclear power plants in order to optimize safety. This pathway, by developing new methods, is extending the state-of-the-practice methods that have been traditionally based on logic structures such as Event-Trees and Fault-Trees. These static types of models mimic system response in an inductive and deductive way respectively, yet are restrictive in the ways they can represent spatial and temporal constructs. RISMC analyses are performed by using a combination of thermal-hydraulic codes and a stochastic analysis tool (RAVEN) currently under development at the Idaho National Laboratory. This paper presents a case study in order to show the capabilities of the RISMC methodology to assess the impact of a power up-rate of a boiling water reactor system during a station blackout accident scenario. We employ the system simulator code, RELAP5-3D, coupled with RAVEN, which performs the stochastic analysis. Our analysis is performed by: 1) sampling values of a set of parameters from the uncertainty space of interest, 2) simulating the system behavior for that specific set of parameter values and 3) analyzing the set of simulation runs. Results obtained give a detailed investigation of the issues associated with a plant power up-rate including the effects of station blackout accident scenarios. We are able to quantify how the timing of specific events was impacted by a higher nominal reactor core power. Such safety insights can provide useful information to the decision makers to perform risk-informed margins management.
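
    A generic stand-in (not RAVEN or RELAP5-3D) for the three-step workflow just described: sample the uncertain parameters, run a system model for each sample, and analyse the resulting set of runs. The surrogate model, parameter distributions and margin definition are hypothetical.

```python
# Sketch (generic stand-in, not RAVEN/RELAP5-3D): the three-step RISMC-style workflow,
# (1) sample uncertain parameters, (2) simulate system behaviour for each sample,
# (3) analyse the resulting set of runs.
import numpy as np

rng = np.random.default_rng(3)

def simulate_blackout(power_uprate, battery_life_h, recovery_time_h):
    """Hypothetical surrogate: remaining margin (h) before core damage in a blackout."""
    heatup_rate = 1.0 + 0.8 * power_uprate            # faster heat-up at higher power
    return battery_life_h + 6.0 / heatup_rate - recovery_time_h

n_trials = 10_000
samples = {
    "power_uprate": rng.uniform(0.0, 0.2, n_trials),                 # 0-20 % uprate
    "battery_life_h": rng.normal(8.0, 1.0, n_trials),                # station batteries
    "recovery_time_h": rng.lognormal(np.log(6.0), 0.5, n_trials),    # offsite power recovery
}
margin = simulate_blackout(**samples)
print("P(margin < 0) =", (margin < 0).mean())          # estimated failure probability
```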

  9. High performance statistical computing with parallel R: applications to biology and climate modelling

    International Nuclear Information System (INIS)

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity, comes a new problem - the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable high performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines

  10. Essentials of Excel, Excel VBA, SAS and Minitab for statistical and financial analyses

    CERN Document Server

    Lee, Cheng-Few; Chang, Jow-Ran; Tai, Tzu

    2016-01-01

    This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, MINITAB, and SAS. Every chapter in this textbook engages the reader with data of individual stock, stock indices, options, and futures. One studies and uses statistics to learn how to study, analyze, and understand a data set of particular interest. Some of the more popular statistical programs that have been developed to use statistical and computational methods to analyze data sets are SAS, SPSS, and MINITAB. Of those, we look at MINITAB and SAS in this textbook. One of the main reasons to use MINITAB is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel to do statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities...

  11. Analysis of Norwegian bio energy statistics. Quality improvement proposals; Analyse av norsk bioenergistatistikk. Forslag til kvalitetsheving

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    This report is an assessment of the current model and presentation form of bioenergy statistics. It presents proposed revisions and enhancements of both the collection and the representation of data. In the context of market developments, both for energy in general and for bioenergy in particular, and of government targets, good bioenergy statistics form the basis for following up the objectives and policy measures. (eb)

  12. Statistical power of intervention analyses: simulation and empirical application to treated lumber prices

    Science.gov (United States)

    Jeffrey P. Prestemon

    2009-01-01

    Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
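
    A sketch of a univariate intervention model on synthetic data: an ARIMA model with a step-dummy exogenous regressor marking the shock date, whose estimated coefficient and p-value indicate whether the shock is detected. The series, shock size and model order are assumptions, not the article's simulations.

```python
# Sketch (synthetic data): a univariate intervention model for a price series, i.e. an
# ARIMA model with a step-dummy exogenous regressor marking the shock date.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
n, shock_at, shock_size = 200, 120, 5.0
noise = rng.normal(0, 1, n)
y = np.empty(n)
y[0] = 100.0
for t in range(1, n):                        # AR(1) series around a mean of 100
    y[t] = 100 + 0.6 * (y[t - 1] - 100) + noise[t]
step = (np.arange(n) >= shock_at).astype(float)
y += shock_size * step                       # add a level shift (the "intervention")

model = ARIMA(y, exog=step, order=(1, 0, 0)).fit()
print(model.params)      # [const, step coefficient (estimated shock size), ar.L1, sigma2]
print(model.pvalues)     # the step coefficient's p-value measures detection of the shock
```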

  13. Statistical performance evaluation of ECG transmission using wireless networks.

    Science.gov (United States)

    Shakhatreh, Walid; Gharaibeh, Khaled; Al-Zaben, Awad

    2013-07-01

    This paper presents a simulation of the transmission of biomedical signals (using the ECG signal as an example) over wireless networks. The effects of channel impairments, including SNR, path-loss exponent and path delay, and of network impairments such as packet loss probability on the diagnosability of the received ECG signal are investigated. The ECG signal is transmitted through a wireless network system composed of two communication protocols: an 802.15.4 ZigBee protocol and an 802.11b protocol. The performance of the transmission is evaluated using higher-order statistics such as kurtosis and negative entropy, in addition to common measures such as the PRD, RMS difference and cross-correlation.
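
    A sketch of the signal-quality measures named above (PRD, RMS error, cross-correlation and kurtosis), computed between a synthetic "transmitted" and "received" ECG-like segment; the waveform and noise model are illustrative only.

```python
# Sketch (synthetic signals): signal-quality measures of the kind listed above,
# computed between a transmitted and a received ECG segment.
import numpy as np
from scipy.stats import kurtosis

def prd(original, received):
    return 100.0 * np.sqrt(np.sum((original - received) ** 2) / np.sum(original ** 2))

def quality_metrics(original, received):
    err = original - received
    xcorr = np.correlate(original - original.mean(), received - received.mean(), "full")
    xcorr /= (np.std(original) * np.std(received) * len(original))   # normalised
    return {
        "PRD_percent": prd(original, received),
        "RMS_error": np.sqrt(np.mean(err ** 2)),
        "max_cross_correlation": xcorr.max(),
        "kurtosis_received": kurtosis(received),
    }

t = np.linspace(0, 2, 720)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.25 * np.sin(2 * np.pi * 15 * t)   # crude ECG-like wave
received = ecg + np.random.normal(0, 0.05, ecg.size)                     # channel noise
print(quality_metrics(ecg, received))
```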

  14. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses

    OpenAIRE

    Buttigieg, Pier Luigi; Ramette, Alban Nicolas

    2014-01-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynami...

  15. Statistical Analysis of EGFR Structures’ Performance in Virtual Screening

    Science.gov (United States)

    Li, Yan; Li, Xiang; Dong, Zigang

    2015-01-01

    In this work the ability of EGFR structures to distinguish true inhibitors from decoys in docking and MM-PBSA is assessed by statistical procedures. The docking performance depends critically on the receptor conformation and bound state. The enrichment of known inhibitors is well correlated with the difference between EGFR structures rather than the bound-ligand property. The optimal structures for virtual screening can be selected based purely on the complex information. And the mixed combination of distinct EGFR conformations is recommended for ensemble docking. In MM-PBSA, a variety of EGFR structures have identically good performance in the scoring and ranking of known inhibitors, indicating that the choice of the receptor structure has little effect on the screening. PMID:26476847

  16. A simple and robust statistical framework for planning, analysing and interpreting faecal egg count reduction test (FECRT) studies

    DEFF Research Database (Denmark)

    Denwood, M.J.; McKendrick, I.J.; Matthews, L.

    Introduction. There is an urgent need for a method of analysing FECRT data that is computationally simple and statistically robust. A method for evaluating the statistical power of a proposed FECRT study would also greatly enhance the current guidelines. Methods. A novel statistical framework has...... been developed that evaluates observed FECRT data against two null hypotheses: (1) the observed efficacy is consistent with the expected efficacy, and (2) the observed efficacy is inferior to the expected efficacy. The method requires only four simple summary statistics of the observed data. Power...... that the notional type 1 error rate of the new statistical test is accurate. Power calculations demonstrate a power of only 65% with a sample size of 20 treatment and control animals, which increases to 69% with 40 control animals or 79% with 40 treatment animals. Discussion. The method proposed is simple...

  17. The Relationship Between Radiative Forcing and Temperature. What Do Statistical Analyses of the Instrumental Temperature Record Measure?

    International Nuclear Information System (INIS)

    Kaufmann, R.K.; Kauppi, H.; Stock, J.H.

    2006-01-01

    Comparing statistical estimates for the long-run temperature effect of doubled CO2 with those generated by climate models begs the question: is the long-run temperature effect of doubled CO2 that is estimated from the instrumental temperature record using statistical techniques consistent with the transient climate response, the equilibrium climate sensitivity, or the effective climate sensitivity? Here, we attempt to answer the question of what statistical analyses of the observational record measure by using these same statistical techniques to estimate the temperature effect of a doubling in the atmospheric concentration of carbon dioxide from seventeen simulations run for the Coupled Model Intercomparison Project 2 (CMIP2). The results indicate that the temperature effect estimated by the statistical methodology is consistent with the transient climate response and that this consistency is relatively unaffected by sample size or the increase in radiative forcing in the sample

  18. Magnetic resonance imaging of the wrist: Diagnostic performance statistics

    International Nuclear Information System (INIS)

    Hobby, Jonathan L.; Tom, Brian D.M.; Bearcroft, Philip W.P.; Dixon, Adrian K.

    2001-01-01

    AIM: To review the published diagnostic performance statistics for magnetic resonance imaging (MRI) of the wrist for tears of the triangular fibrocartilage complex, the intrinsic carpal ligaments, and for osteonecrosis of the carpal bones. MATERIALS AND METHODS: We used Medline and Embase to search the English language literature. Studies evaluating the diagnostic performance of MRI of the wrist in living patients with surgical confirmation of MR findings were identified. RESULTS: We identified 11 studies reporting the diagnostic performance of MRI for tears of the triangular fibrocartilage complex for a total of 410 patients, six studies for the scapho-lunate ligament (159 patients), six studies for the luno-triquetral ligament (142 patients) and four studies (56 patients) for osteonecrosis of the carpal bones. CONCLUSIONS: Magnetic resonance imaging is an accurate means of diagnosing tears of the triangular fibrocartilage and carpal osteonecrosis. Although MRI is highly specific for tears of the intrinsic carpal ligaments, its sensitivity is low. The diagnostic performance of MRI in the wrist is improved by using high-resolution T2* weighted 3D gradient echo sequences. Using current imaging techniques without intra-articular contrast medium, magnetic resonance imaging cannot reliably exclude tears of the intrinsic carpal ligaments. Hobby, J.L. (2001)
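
    A small sketch of the diagnostic performance statistics involved, computing sensitivity and specificity with Wilson confidence intervals from a 2x2 table; the counts are hypothetical and are not the pooled study data.

```python
# Sketch (hypothetical counts, not the pooled study data): sensitivity and specificity
# with Wilson score confidence intervals from a 2x2 table of MRI findings versus
# surgical confirmation.
from statsmodels.stats.proportion import proportion_confint

def diagnostic_performance(tp, fn, tn, fp):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    sens_ci = proportion_confint(tp, tp + fn, method="wilson")
    spec_ci = proportion_confint(tn, tn + fp, method="wilson")
    return {"sensitivity": (sens, sens_ci), "specificity": (spec, spec_ci)}

# hypothetical pooled counts for triangular fibrocartilage tears (illustration only)
print(diagnostic_performance(tp=150, fn=30, tn=190, fp=40))
```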

  19. Statistical analyses of the magnet data for the advanced photon source storage ring magnets

    International Nuclear Information System (INIS)

    Kim, S.H.; Carnegie, D.W.; Doose, C.; Hogrefe, R.; Kim, K.; Merl, R.

    1995-01-01

    The statistics of the measured magnetic data of 80 dipole, 400 quadrupole, and 280 sextupole magnets of conventional resistive designs for the APS storage ring is summarized. In order to accommodate the vacuum chamber, the curved dipole has a C-type cross section and the quadrupole and sextupole cross sections have 180 degrees and 120 degrees symmetries, respectively. The data statistics include the integrated main fields, multipole coefficients, magnetic and mechanical axes, and roll angles of the main fields. The average and rms values of the measured magnet data meet the storage ring requirements

  20. "Who Was 'Shadow'?" The Computer Knows: Applying Grammar-Program Statistics in Content Analyses to Solve Mysteries about Authorship.

    Science.gov (United States)

    Ellis, Barbara G.; Dick, Steven J.

    1996-01-01

    Employs the statistics-documentation portion of a word-processing program's grammar-check feature together with qualitative analyses to determine that Henry Watterson, long-time editor of the "Louisville Courier-Journal," was probably the South's famed Civil War correspondent "Shadow." (TB)

  1. A statistical approach to nuclear fuel design and performance

    Science.gov (United States)

    Cunning, Travis Andrew

    As CANDU fuel failures can have significant economic and operational consequences for the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case, a hypothesized 80% reactor outlet header break loss-of-coolant accident. Using a Monte Carlo technique for input generation, 10^5 independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimension-reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance
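
    A generic sketch (not ELESTRES or ELOCA) of the statistical workflow described: fit distributions to manufacturing measurements, Monte Carlo sample the inputs, propagate them through a performance model, and compare the output distribution with an acceptance limit. The surrogate model, parameters and limit are hypothetical.

```python
# Sketch (generic surrogate, not ELESTRES/ELOCA): fit distributions to manufacturing
# data, Monte Carlo sample the inputs, propagate them through a performance model and
# compare the output distribution with an acceptance limit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# hypothetical manufacturing measurements (e.g., pellet density, g/cm^3)
density_data = rng.normal(10.6, 0.05, 500)
dens_mu, dens_sigma = stats.norm.fit(density_data)        # fitted input distribution

def fuel_temperature(density, power):
    """Hypothetical surrogate for peak fuel temperature (deg C)."""
    return 900 + 120 * (density - 10.6) + 15 * power

n_trials = 100_000
density = rng.normal(dens_mu, dens_sigma, n_trials)
power = rng.normal(50.0, 2.0, n_trials)                   # hypothetical element power, kW/m
temp = fuel_temperature(density, power)

limit = 1800.0                                            # hypothetical acceptance criterion
print("mean =", temp.mean(), "std =", temp.std())
print("P(exceeding limit) =", (temp > limit).mean())
```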

  2. Design and implementation of a modular program system for the carrying-through of statistical analyses

    International Nuclear Information System (INIS)

    Beck, W.

    1984-01-01

    The complexity of computer programs used to solve scientific and technical problems gives rise to many questions. Typical questions concern the strengths and weaknesses of computer programs, the propagation of uncertainties among the input data, the sensitivity of output data to input data, and the substitution of complex models by simpler ones that provide equivalent results within certain ranges. These questions are of general practical importance; answers, in principle, may be found by statistical methods based on the Monte Carlo method. In this report the statistical methods are selected, described and evaluated. They are implemented in the modular program system STAR, which is itself a component of the program system RSYST. The design of STAR takes into account: users with different levels of knowledge of data processing and statistics; the variety of statistical methods and of generating and evaluating procedures; the processing of large data sets with complex structures; the coupling to other components of RSYST and to programs outside RSYST; and the requirement that the system can be easily modified and extended. Four examples are given which demonstrate the application of STAR. (orig.) [de]

  3. Statistical Control Charts: Performances of Short Term Stock Trading in Croatia

    Directory of Open Access Journals (Sweden)

    Dumičić Ksenija

    2015-03-01

    Full Text Available Background: The stock exchange, as a regulated financial market, reflects the economic development level of modern economies. The stock market indicates the mood of investors in the development of a country and is an important ingredient for growth. Objectives: This paper aims to introduce an additional statistical tool to support the decision-making process in stock trading, and it investigates the use of statistical process control (SPC) methods in the stock trading process. Methods/Approach: The individual (I), exponentially weighted moving average (EWMA) and cumulative sum (CUSUM) control charts were used for generating trade signals. The open and the average prices of CROBEX10 index stocks on the Zagreb Stock Exchange were used in the analysis. The capabilities of statistical control charts for stock trading in the short run were analysed. Results: The statistical control chart analysis pointed out too many signals to buy or sell stocks, most of which are considered false alarms. The statistical control charts therefore proved not to be very useful in stock trading or in portfolio analysis. Conclusions: The presence of non-normality and autocorrelation has a great impact on the performance of statistical control charts. It is assumed that if these two problems were solved, the use of statistical control charts in portfolio analysis could be greatly improved.
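
    A sketch of the EWMA control chart used above to generate trading signals: the smoothed series is compared with time-varying control limits and out-of-control points are reported. The prices and chart parameters below are synthetic and illustrative, not the CROBEX10 data.

```python
# Sketch (synthetic prices, illustrative parameters): an EWMA control chart that flags
# points where the smoothed series leaves the control limits, i.e. candidate trade signals.
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    mu, sigma = x.mean(), x.std(ddof=1)
    z = np.empty_like(x, dtype=float)
    z[0] = mu
    signals = []
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
        # time-varying control limits of the EWMA statistic
        half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        if z[t] > mu + half_width or z[t] < mu - half_width:
            signals.append(t)
    return z, signals

rng = np.random.default_rng(6)
prices = 100 + np.cumsum(rng.normal(0, 0.5, 250))   # synthetic daily average prices
_, out_of_control = ewma_chart(prices)
print("out-of-control points:", out_of_control[:10])
```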

  4. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 1: Review and comparison of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked
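
    A sketch applying the sequence of tests described above to a single input-output scatterplot: linear correlation, rank correlation, a Kruskal-Wallis test for trends in central tendency across bins of the input, and a chi-square test for deviations from randomness on a quantile grid. The data are synthetic, not the two-phase flow model output.

```python
# Sketch (synthetic data): the sequence of scatterplot tests described above, applied
# to one sampled input x and one model outcome y.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 300)
y = np.sin(3 * x) + rng.normal(0, 0.3, x.size)          # nonlinear but monotone-ish relation

pearson_r, pearson_p = stats.pearsonr(x, y)             # (i) linear relationship
spearman_r, spearman_p = stats.spearmanr(x, y)          # (ii) monotonic relationship

quintile_edges = np.quantile(x, [0.2, 0.4, 0.6, 0.8])   # (iii) trend in central tendency
groups = [y[np.digitize(x, quintile_edges) == k] for k in range(5)]
kw_stat, kw_p = stats.kruskal(*groups)

x_edges = np.quantile(x, np.linspace(0, 1, 6))          # (v) deviation from randomness
y_edges = np.quantile(y, np.linspace(0, 1, 6))
table, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
chi2_stat, chi2_p, dof, _ = stats.chi2_contingency(table)

print("Pearson p:", pearson_p, "Spearman p:", spearman_p)
print("Kruskal-Wallis p:", kw_p, "chi-square p:", chi2_p)
```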

  5. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of the thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach ... on differences of statistical measures within sections and the same measures between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling...

  6. ANALYSING PERFORMANCE ASSESSMENT IN PUBLIC SERVICES: HOW USEFUL IS THE CONCEPT OF A PERFORMANCE REGIME?

    Science.gov (United States)

    Martin, Steve; Nutley, Sandra; Downe, James; Grace, Clive

    2016-03-01

    Approaches to performance assessment have been described as 'performance regimes', but there has been little analysis of what is meant by this concept and whether it has any real value. We draw on four perspectives on regimes - 'institutions and instruments', 'risk regulation regimes', 'internal logics and effects' and 'analytics of government' - to explore how the concept of a multi-dimensional regime can be applied to performance assessment in public services. We conclude that the concept is valuable. It helps to frame comparative and longitudinal analyses of approaches to performance assessment and draws attention to the ways in which public service performance regimes operate at different levels, how they change over time and what drives their development. Areas for future research include analysis of the impacts of performance regimes and interactions between their visible features (such as inspections, performance indicators and star ratings) and the veiled rationalities which underpin them.

  7. Statistic analyses of the color experience according to the age of the observer.

    Science.gov (United States)

    Hunjet, Anica; Parac-Osterman, Durdica; Vucaj, Edita

    2013-04-01

    Psychological experience of color is a real state of communication between the environment and color, and it depends on the light source, the viewing angle, and in particular on the observer and his or her health condition. Hering's theory, or the theory of opponent processes, supposes that the cones situated in the retina of the eye are not sensitive to the three chromatic domains (red, green and purple-blue) separately, but produce signals based on the principle of opposed pairs of colors. Support for this theory comes from the fact that certain disorders of color vision, which include blindness to certain colors, cause blindness to pairs of opponent colors. This paper presents a demonstration of the experience of blue and yellow tones according to the age of the observer. To test for statistically significant differences in the colour experience according to the color of the background, the following statistical tests were used: the Mann-Whitney U test, Kruskal-Wallis ANOVA and the median test. It was shown that the differences are statistically significant for older observers (older than 35 years).
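
    A sketch of the nonparametric tests named above (Mann-Whitney U, Kruskal-Wallis and the median test) applied to hypothetical colour-experience scores of younger versus older observers; the scores and group sizes are invented for illustration.

```python
# Sketch (synthetic ratings): the nonparametric tests named above applied to
# colour-experience scores of two age groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
younger = rng.normal(5.0, 1.0, 40)          # hypothetical scores, observers <= 35 years
older = rng.normal(5.6, 1.2, 35)            # hypothetical scores, observers > 35 years

print(stats.mannwhitneyu(younger, older, alternative="two-sided"))
print(stats.kruskal(younger, older))
print(stats.median_test(younger, older))
```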

  8. Robust statistics for deterministic and stochastic gravitational waves in non-Gaussian noise. II. Bayesian analyses

    International Nuclear Information System (INIS)

    Allen, Bruce; Creighton, Jolien D.E.; Flanagan, Eanna E.; Romano, Joseph D.

    2003-01-01

    In a previous paper (paper I), we derived a set of near-optimal signal detection techniques for gravitational wave detectors whose noise probability distributions contain non-Gaussian tails. The methods modify standard methods by truncating or clipping sample values which lie in those non-Gaussian tails. The methods were derived, in the frequentist framework, by minimizing false alarm probabilities at fixed false detection probability in the limit of weak signals. For stochastic signals, the resulting statistic consisted of a sum of an autocorrelation term and a cross-correlation term; it was necessary to discard 'by hand' the autocorrelation term in order to arrive at the correct, generalized cross-correlation statistic. In the present paper, we present an alternative derivation of the same signal detection techniques from within the Bayesian framework. We compute, for both deterministic and stochastic signals, the probability that a signal is present in the data, in the limit where the signal-to-noise ratio squared per frequency bin is small, where the signal is nevertheless strong enough to be detected (integrated signal-to-noise ratio large compared to 1), and where the total probability in the non-Gaussian tail part of the noise distribution is small. We show that, for each model considered, the resulting probability is to a good approximation a monotonic function of the detection statistic derived in paper I. Moreover, for stochastic signals, the new Bayesian derivation automatically eliminates the problematic autocorrelation term

  9. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  10. Psychological Analyses of Courageous Performance in Military Personnel

    Science.gov (United States)

    1986-11-01

    ...schedule; HR, heart rate; IBI, inter-beat interval; N, number of subjects; NS, not statistically significant; P, probability; PCA, principal components analysis; RAQ... tones in the range of 400 to 600 Hz, set at a level of 60 dB, transmitted for 1 sec binaurally through earphones from a commercial oscillator. The... because of interference on the recording trace. Cardiac activity was measured in terms of heart rate (HR). The number of beats/minute was estimated by

  11. The Use of Statistical Process Control Tools for Analysing Financial Statements

    Directory of Open Access Journals (Sweden)

    Niezgoda Janusz

    2017-06-01

    Full Text Available This article presents the proposed application of one type of modified Shewhart control chart to the monitoring of changes in the aggregated level of financial ratios. The x̅ control chart has been used as the basis of the analysis. The examined variable from the sample in this chart is the arithmetic mean. The author proposes to substitute it with a synthetic measure determined from the selected ratios. As the ratios mentioned above are expressed in different units and have a different character, the author applies standardisation. The results of selected comparative analyses are presented for both bankrupt and non-bankrupt firms. They indicate the possibility of using control charts as an auxiliary tool in financial analyses.

  12. Statistical methods for analysing the relationship between bank profitability and liquidity

    OpenAIRE

    Boguslaw Guzik

    2006-01-01

    The article analyses the most popular methods for the empirical estimation of the relationship between bank profitability and liquidity. Owing to the fact that profitability depends on various factors (both economic and non-economic), a simple correlation coefficient, two-dimensional (profitability/liquidity) graphs or models where profitability depends only on liquidity variable do not provide good and reliable results. Quite good results can be obtained only when multifactorial profitabilit...

  13. Dispersal of potato cyst nematodes measured using historical and spatial statistical analyses.

    Science.gov (United States)

    Banks, N C; Hodda, M; Singh, S K; Matveeva, E M

    2012-06-01

    Rates and modes of dispersal of potato cyst nematodes (PCNs) were investigated. Analysis of records from eight countries suggested that PCNs spread a mean distance of 5.3 km/year radially from the site of first detection, and spread 212 km over ≈40 years before detection. Data from four countries with more detailed histories of invasion were analyzed further, using distance from first detection, distance from previous detection, distance from nearest detection, straight line distance, and road distance. Linear distance from first detection was significantly related to the time since the first detection. Estimated rate of spread was 5.7 km/year, and did not differ statistically between countries. Time between the first detection and estimated introduction date varied between 0 and 20 years, and differed among countries. Road distances from nearest and first detection were statistically significantly related to time, and gave slightly higher estimates for rate of spread of 6.0 and 7.9 km/year, respectively. These results indicate that the original site of introduction of PCNs may act as a source for subsequent spread and that this may occur at a relatively constant rate over time regardless of whether this distance is measured by road or by a straight line. The implications of this constant radial rate of dispersal for biosecurity and pest management are discussed, along with the effects of control strategies.
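
    A sketch of the regression underlying the spread-rate estimate: the linear distance of each detection from the first detection site is regressed on the time elapsed, and the slope is read as km/year. The records below are synthetic, not the eight-country dataset.

```python
# Sketch (synthetic records): estimating a radial spread rate by regressing the linear
# distance of each new detection from the first detection site on the time elapsed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
years_since_first = rng.uniform(0, 40, 60)
distance_km = 5.3 * years_since_first + rng.normal(0, 15, 60)   # ~5.3 km/year plus scatter

fit = stats.linregress(years_since_first, distance_km)
print(f"spread rate = {fit.slope:.1f} km/year "
      f"(approx. 95% CI half-width {1.96 * fit.stderr:.1f} km/year)")
```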

  14. Long-Term Propagation Statistics and Availability Performance Assessment for Simulated Terrestrial Hybrid FSO/RF System

    Directory of Open Access Journals (Sweden)

    Fiser Ondrej

    2011-01-01

    Full Text Available Long-term monthly and annual statistics of the attenuation of electromagnetic waves that have been obtained from 6 years of measurements on a free space optical path, 853 meters long, with a wavelength of 850 nm and on a precisely parallel radio path with a frequency of 58 GHz are presented. All the attenuation events observed are systematically classified according to the hydrometeor type causing the particular event. Monthly and yearly propagation statistics on the free space optical path and radio path are obtained. The influence of individual hydrometeors on attenuation is analysed. The obtained propagation statistics are compared to the calculated statistics using ITU-R models. The calculated attenuation statistics both at 850 nm and 58 GHz underestimate the measured statistics for higher attenuation levels. The availability performance of a simulated hybrid FSO/RF system is analysed based on the measured data.
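
    A sketch of how exceedance statistics and availability might be derived from attenuation samples: an empirical exceedance probability per link, and a hybrid availability defined as the fraction of time at least one link stays below its fade margin. The attenuation distributions and margins are assumptions, not the measured data.

```python
# Sketch (synthetic attenuation samples): empirical exceedance probabilities for each
# link and a simple availability estimate for a hybrid FSO/RF system that is "up"
# whenever at least one link attenuation stays below its fade margin.
import numpy as np

rng = np.random.default_rng(10)
fso_att_db = rng.lognormal(mean=0.5, sigma=1.0, size=100_000)   # synthetic FSO attenuation
rf_att_db = rng.lognormal(mean=0.2, sigma=0.6, size=100_000)    # synthetic 58 GHz attenuation

def exceedance(att_db, level_db):
    return (att_db > level_db).mean()          # fraction of time the level is exceeded

fso_margin, rf_margin = 20.0, 15.0             # hypothetical fade margins, dB
p_fso_down = exceedance(fso_att_db, fso_margin)
p_rf_down = exceedance(rf_att_db, rf_margin)
p_both_down = ((fso_att_db > fso_margin) & (rf_att_db > rf_margin)).mean()

print("FSO availability:", 1 - p_fso_down)
print("RF availability:", 1 - p_rf_down)
print("hybrid availability:", 1 - p_both_down)
```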

  15. Age and gender effects on normal regional cerebral blood flow studied using two different voxel-based statistical analyses

    International Nuclear Information System (INIS)

    Pirson, A.S.; George, J.; Krug, B.; Vander Borght, T.; Van Laere, K.; Jamart, J.; D'Asseler, Y.; Minoshima, S.

    2009-01-01

    Fully automated analysis programs are increasingly being applied to aid in the reading of regional cerebral blood flow SPECT studies. They are increasingly based on the comparison of the patient study with a normal database. In this study, we evaluate the ability of Three-Dimensional Stereotactic Surface Projection (3D-SSP) to isolate the effects of age and gender in a previously studied normal population. The results were also compared with those obtained using Statistical Parametric Mapping (SPM99). Methods: Eighty-nine 99mTc-ECD SPECT studies performed in carefully screened healthy volunteers (46 females, 43 males; age 20-81 years) were analysed using 3D-SSP. A multivariate analysis based on the general linear model was performed with regions as the intra-subject factor, gender as the inter-subject factor and age as a covariate. Results: Both age and gender had a significant interaction effect with regional tracer uptake. An age-related decline (p < 0.001) was found in the anterior cingulate gyrus, left frontal association cortex and left insula. Bilateral occipital association and left primary visual cortical uptake showed a significant relative increase with age (p < 0.001). Concerning the gender effect, women showed higher uptake (p < 0.01) in the parietal and right sensorimotor cortices. An age-by-gender interaction (p < 0.01) was found only in the left medial frontal cortex. The results were consistent with those obtained with SPM99. Conclusion: 3D-SSP analysis of normal rCBF variability is consistent with the literature and with other automated voxel-based techniques, which highlight the effects of both age and gender. (authors)

  16. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  17. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    Full Text Available This paper provides a review of and introduction to agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (an eight-construct latent structure: manufacturing equipment and technology, process technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed and hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire. Statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA analysis, and relationships between agile components are tested. The results of this study show that agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to be agile.

  18. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    DEFF Research Database (Denmark)

    Edjabou, Vincent Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona

    2015-01-01

    Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in the literature. This limits both the comparability and the applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single- and multi-family house areas). In total 17 tonnes of waste were sorted into 10-50 waste fractions, organised according to a three-level (tiered approach) facilitating comparison of the waste data between individual sub-areas with different fractionation (waste

  19. Chemometric and Statistical Analyses of ToF-SIMS Spectra of Increasingly Complex Biological Samples

    Energy Technology Data Exchange (ETDEWEB)

    Berman, E S; Wu, L; Fortson, S L; Nelson, D O; Kulp, K S; Wu, K J

    2007-10-24

    Characterizing and classifying molecular variation within biological samples is critical for determining fundamental mechanisms of biological processes that will lead to new insights including improved disease understanding. Towards these ends, time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used to examine increasingly complex samples of biological relevance, including monosaccharide isomers, pure proteins, complex protein mixtures, and mouse embryo tissues. The complex mass spectral data sets produced were analyzed using five common statistical and chemometric multivariate analysis techniques: principal component analysis (PCA), linear discriminant analysis (LDA), partial least squares discriminant analysis (PLSDA), soft independent modeling of class analogy (SIMCA), and decision tree analysis by recursive partitioning. PCA was found to be a valuable first step in multivariate analysis, providing insight both into the relative groupings of samples and into the molecular basis for those groupings. For the monosaccharides, pure proteins and protein mixture samples, all of LDA, PLSDA, and SIMCA were found to produce excellent classification given a sufficient number of compound variables calculated. For the mouse embryo tissues, however, SIMCA did not produce as accurate a classification. The decision tree analysis was found to be the least successful for all the data sets, providing neither as accurate a classification nor chemical insight for any of the tested samples. Based on these results we conclude that as the complexity of the sample increases, so must the sophistication of the multivariate technique used to classify the samples. PCA is a preferred first step for understanding ToF-SIMS data that can be followed by either LDA or PLSDA for effective classification analysis. This study demonstrates the strength of ToF-SIMS combined with multivariate statistical and chemometric techniques to classify increasingly complex biological samples
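
    A sketch of the PCA-followed-by-classification workflow on a synthetic "spectral" matrix, using scikit-learn's PCA and linear discriminant analysis with cross-validation; the dimensions and class structure are invented for illustration and are not the ToF-SIMS data.

```python
# Sketch (synthetic "spectra"): PCA for dimensionality reduction followed by LDA for
# classification, evaluated with cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n_per_class, n_peaks = 40, 300
class_means = rng.normal(0, 1, (3, n_peaks))                   # three sample classes
X = np.vstack([m + rng.normal(0, 0.5, (n_per_class, n_peaks)) for m in class_means])
y = np.repeat([0, 1, 2], n_per_class)

pipeline = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(pipeline, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```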

  20. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  1. Enabling Detailed Energy Analyses via the Technology Performance Exchange: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Studer, D.; Fleming, K.; Lee, E.; Livingood, W.

    2014-08-01

    One of the key tenets to increasing adoption of energy efficiency solutions in the built environment is improving confidence in energy performance. Current industry practices make extensive use of predictive modeling, often via the use of sophisticated hourly or sub-hourly energy simulation programs, to account for site-specific parameters (e.g., climate zone, hours of operation, and space type) and arrive at a performance estimate. While such methods are highly precise, they invariably provide less than ideal accuracy due to a lack of high-quality, foundational energy performance input data. The Technology Performance Exchange was constructed to allow the transparent sharing of foundational, product-specific energy performance data, and leverages significant, external engineering efforts and a modular architecture to efficiently identify and codify the minimum information necessary to accurately predict product energy performance. This strongly-typed database resource represents a novel solution to a difficult and established problem. One of the most exciting benefits is the way in which the Technology Performance Exchange's application programming interface has been leveraged to integrate contributed foundational data into the Building Component Library. Via a series of scripts, data is automatically translated and parsed into the Building Component Library in a format that is immediately usable to the energy modeling community. This paper (1) presents a high-level overview of the project drivers and the structure of the Technology Performance Exchange; (2) offers a detailed examination of how technologies are incorporated and translated into powerful energy modeling code snippets; and (3) examines several benefits of this robust workflow.

  2. On statistical methods for analysing the geographical distribution of cancer cases near nuclear installations

    International Nuclear Information System (INIS)

    Bithell, J.F.; Stone, R.A.

    1989-01-01

    This paper sets out to show that the epidemiological methods most commonly used can be improved. When analysing geographical data it is necessary to consider location. The most obvious quantification of location is ranked distance, though other measures which may be more meaningful in relation to aetiology may be substituted. A test based on distance ranks, the "Poisson maximum test", depends on the maximum of observed relative risk in regions of increasing size, but with the significance level adjusted for selection. Applying this test to data from Sellafield and Sizewell shows that the excess of leukaemia incidence observed at Seascale, near Sellafield, is not an artefact due to data selection by region, and that the excess probably results from a genuine, if as yet unidentified, cause (there being little evidence of any other locational association once the Seascale cases have been removed). So far as Sizewell is concerned, geographical proximity to the nuclear power station does not seem particularly important. (author)

  3. Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.

    Science.gov (United States)

    Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi

    2013-12-01

    Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, the data generated by mass spectrometry have many missing values, which result when a compound is absent from a sample or is present but at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare the power and estimation properties of a mixture model with those of an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point-mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, while estimates were unbiased with the mixture model except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data from serum samples from women with ovarian cancer and matched controls.

  4. Influence of Immersion Conditions on The Tensile Strength of Recycled Kevlar®/Polyester/Low-Melting-Point Polyester Nonwoven Geotextiles through Applying Statistical Analyses

    Directory of Open Access Journals (Sweden)

    Jing-Chzi Hsieh

    2016-05-01

    Full Text Available Recycled Kevlar®/polyester/low-melting-point polyester (recycled Kevlar®/PET/LPET) nonwoven geotextiles are immersed in neutral, strong acid, and strong alkali solutions, respectively, at different temperatures for four months. Their tensile strength is then tested for various immersion periods and temperatures, in order to determine their durability to chemicals. In order to analyze the possible factors that influence the mechanical properties of geotextiles under diverse environmental conditions, the experimental results and statistical analyses are combined in this study. The influences of the content of recycled Kevlar® fibers, the implementation of thermal treatment, and the immersion period on the tensile strength of recycled Kevlar®/PET/LPET nonwoven geotextiles are examined, after which their levels of influence are statistically determined by performing multiple regression analyses. According to the results, the tensile strength of the nonwoven geotextiles can be enhanced by adding recycled Kevlar® fibers and by thermal treatment.

  5. [Longitudinal and specific analyses of physical performance in handball].

    Science.gov (United States)

    Schwesig, R; Fieseler, G; Jungermann, P; Noack, F; Irlenbusch, L; Leuchte, S; Fischer, D

    2012-09-01

    Sports-specific, biomechanical measuring stations and measuring-station training have become common practice in many forms of sport and are an essential element of the complex assessment of physical performance. In handball, however, there is still considerable research potential in this respect, as well as in the systematic generation and acquisition of the requirements profile and the progression of strain. The prime objective of the longitudinal study was to determine the potential performance and development of handball players (3rd league) in general and in terms of handball sport in particular. Another objective was to establish correlations between tests and indicators of performance in competitions. 13 handball players (age: 26.5 ± 3.6 years) were tested three times (before and after the pre-season preparation phase and at the end of the first half of the season) on two test days each. The examination was composed of a sprint test (ST, day 1), a handball-specific complex test (HBKT, day 1) and treadmill diagnostics (LD, day 2). The surveyed parameters were lactate and heart rate (LD/HBKT) as well as time (ST, HBKT) and the number of errors (HBKT). The cardiac (HFmax = 201 min⁻¹) and metabolic strain (lactate = 17.8 mmol/L) in the HBKT were very high. In the preparatory phase, the average magnitudes of effect registered were d = 0.31 (ST parameters), d = 0.68 (HBKT parameters) and d = 0.98 (LD parameters). The most significant improvements throughout the entire period were registered in the parameters v2 (LD; η² = 0.371), total goal-throwing time (HBKT; η² = 0.250), total penalty time (HBKT; η² = 0.236) and total round 2 (HBKT; η² = 0.227). In HBKT and LD, the performance level was stabilised by the end of the first half of the season. In terms of speed, however, there was a decline in performance abilities. The competition performance has its highest degree of correlation with cardiac (defense: r = -0.656) and metabolic (offensive: r = -0

  6. STATISTIC, PROBABILISTIC, CORRELATION AND SPECTRAL ANALYSES OF REGENERATIVE BRAKING CURRENT OF DC ELECTRIC ROLLING STOCK

    Directory of Open Access Journals (Sweden)

    A. V. Nikitenko

    2014-04-01

    Full Text Available Purpose. This paper defines and analyses the probabilistic and spectral characteristics of the random current in the regenerative braking mode of DC electric rolling stock. Methodology. Elements and methods of probability theory (particularly the theory of stationary and non-stationary processes) and methods of sampling theory are used for PC-based processing of the regenerated current data arrays. Findings. The regenerated current records were obtained from locomotives and trains on Ukrainian railways and from trams in Poland. It was established that the current exhibits both continuous and jump-like variations in time (especially in trams). For the random current in the regenerative braking mode, the functions of mathematical expectation, dispersion and standard deviation are calculated. Histograms, probabilistic characteristics and correlation functions are also calculated and plotted for this current. It was established that the current in the regenerative braking mode can be treated as a stationary and non-ergodic process. The spectral analysis of these records and of the “tail part” of the correlation function revealed weak periodic (or low-frequency) components, known as interharmonics. Originality. Firstly, the theory of non-stationary random processes was adapted for the analysis of the regenerated current, which exhibits both continuous and jump-like variations in time. Secondly, the presence of interharmonics in the stochastic process of the regenerated current was identified for the first time. Finally, the patterns of temporal change of the current correlation function are defined. This makes it possible to apply the correlation-function method soundly in the identification of electric traction system devices. Practical value. The results of the probabilistic and statistical analysis of the regenerated current allow the quality of recovered energy and the energy quality indices of electric rolling stock to be estimated in the
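
    A minimal sketch of the kind of processing described (mean, dispersion, autocorrelation and spectral analysis revealing a weak low-frequency component) is given below for a synthetic current record; the sampling rate and signal model are assumptions, not the measured locomotive or tram data.

```python
# Statistical and spectral processing of a synthetic regenerative-braking current.
import numpy as np
from scipy import signal

fs = 1000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Broadband noise plus a weak low-frequency component standing in for an interharmonic
i_t = 50 + 5 * rng.standard_normal(t.size) + 0.8 * np.sin(2 * np.pi * 3.7 * t)

print("mean =", i_t.mean(), " variance =", i_t.var(ddof=1), " std =", i_t.std(ddof=1))

# Normalised sample autocorrelation function
x = i_t - i_t.mean()
acf = signal.correlate(x, x, mode="full")[x.size - 1:] / (x.var() * x.size)
print("ACF at lags 0..3:", acf[:4])

# Welch power spectral density; the peak near 3.7 Hz plays the role of an interharmonic
f, pxx = signal.welch(i_t, fs=fs, nperseg=2048)
print("strongest low-frequency component near %.1f Hz" % f[1:200][np.argmax(pxx[1:200])])
```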

  7. Measurements and statistical analyses of indoor radon concentrations in Tokyo and surrounding areas

    International Nuclear Information System (INIS)

    Sugiura, Shiroharu; Suzuki, Takashi; Inokoshi, Yukio

    1995-01-01

    Since the UNSCEAR report published in 1982, radiation exposure of the respiratory tract due to radon and its progeny has been regarded as the single largest contributor to the natural radiation exposure of the general public. In Japan, measurements of radon gas concentrations in many types of buildings have been carried out by national and private institutes. We also measured radon gas concentrations in different types of residential buildings in Tokyo and its adjoining prefectures from October 1988 to September 1991, to evaluate the potential radiation risk to the people living there. One or two simplified passive radon monitors were set up in each of the 34 residential buildings located in the above-mentioned area, for an exposure period of 3 months each. Comparing the average concentrations in buildings of different materials and structures, those in the concrete-and-steel buildings were always higher than those in the wooden and the prefabricated mortared buildings. The radon concentrations proved to be higher in autumn and winter, and lower in spring and summer. Radon concentrations in an underground room of a concrete-and-steel building showed the highest values throughout our investigation, and statistically significant seasonal variation was detected by the X-11 method developed by the U.S. Bureau of the Census. The values measured in a room on the first floor of the same building also showed seasonal variation, but the phase of the variation was different. Another multivariate analysis suggested that the building material and structure are the most important factors determining radon concentration levels, among other factors such as the age of the building and the use of ventilators. (author)
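
    The seasonal analysis reported above used the X-11 method; as a simple illustrative stand-in, the sketch below applies a moving-average seasonal decomposition to invented quarterly radon concentrations.

```python
# Stand-in for the X-11 seasonal analysis: classical seasonal decomposition of
# invented quarterly radon concentrations (Bq/m^3) over three years.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

radon = pd.Series(
    [55, 38, 30, 48, 60, 41, 33, 52, 63, 44, 35, 54],
    index=pd.period_range("1989Q1", periods=12, freq="Q").to_timestamp(),
)

result = seasonal_decompose(radon, model="additive", period=4)
print(result.seasonal.head(4))   # winter quarters carry positive seasonal terms
print(result.trend.dropna())     # underlying trend after removing seasonality
```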

  8. Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism

    Directory of Open Access Journals (Sweden)

    Le Bao

    2014-11-01

    Full Text Available Endogenous retroviruses (ERVs are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that are insensitive to differences between query and reference and that are amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.

  9. Analysing Student Performance Using Sparse Data of Core Bachelor Courses

    Science.gov (United States)

    Saarela, Mirka; Karkkainen, Tommi

    2015-01-01

    Curricula for Computer Science (CS) degrees are characterized by the strong occupational orientation of the discipline. In the BSc degree structure, with clearly separate CS core studies, the learning skills for these and other required courses may vary a lot, which is shown in students' overall performance. To analyze this situation, we apply…

  10. Developing a web performance and security analysis application

    NARCIS (Netherlands)

    O. Hawker

    2017-01-01

    The objective of the assignment is to measure or determine four security and performance aspects of a website. This is done with the help of a site crawler. The results of the measurements, including the trends from previous scans, are presented to the user. Finally, account must be taken of the

  11. Analysis of the structural characteristics and performance ...

    African Journals Online (AJOL)

    SARAH

    28 Feb. 2014 ... technico-economic performance of irrigated rice farming in Côte d' .... basing ourselves on the prices of inputs and products in force during ..... research and extension services and the transfer of technologies; (2) ...

  12. Statistics

    International Nuclear Information System (INIS)

    2005-01-01

    For the years 2004 and 2005, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics published in the Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown on the back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees

  13. Concepts for measuring maintenance performance and methods for analysing competing failure modes

    DEFF Research Database (Denmark)

    Cooke, R.; Paulsen, J.L.

    1997-01-01

    competing failure modes. This article examines ways to assess maintenance performance without introducing statistical assumptions, then introduces a plausible statistical model for describing the interaction of preventive and corrective maintenance, and finally illustrates these with examples from...

  14. Evaluation of the performance of Moses statistical engine adapted to ...

    African Journals Online (AJOL)

    ... of Moses statistical engine adapted to English-Arabic language combination. ... of Artificial Intelligence (AI) dedicated to Natural Language Processing (NLP). ... and focuses on SMT, then introducing the features of the open source Moses ...

  15. Statistics

    International Nuclear Information System (INIS)

    2001-01-01

    For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  16. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  17. Statistics

    International Nuclear Information System (INIS)

    1999-01-01

    For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  18. Analysing the Severity and Frequency of Traffic Crashes in Riyadh City Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Saleh Altwaijri

    2012-12-01

    Full Text Available Traffic crashes in Riyadh city cause losses in the form of deaths, injuries and property damage, in addition to the pain and social tragedy affecting the victims' families. In 2005, a total of 47,341 injury traffic crashes occurred in Riyadh city (19% of all KSA crashes), and 9% of those crashes were severe. Road safety in Riyadh city may have been adversely affected by: high car ownership, migration of people to Riyadh city, a high number of daily trips (about 6 million), high income, low-cost petrol, drivers of many different nationalities, young drivers and tremendous population growth, which creates a high level of mobility and transport activity in the city. The primary objective of this paper is therefore to explore factors affecting the severity and frequency of road crashes in Riyadh city using appropriate statistical models, with the aim of establishing effective safety policies ready to be implemented to reduce the severity and frequency of road crashes in the city. Crash data for Riyadh city were collected from the Higher Commission for the Development of Riyadh (HCDR) for a period of five years, from 1425H to 1429H (roughly corresponding to 2004-2008). Crash data were classified into three categories: fatal, serious-injury and slight-injury. Two nominal response models were fitted to the injury-related crash data: a standard multinomial logit model (MNL) and a mixed logit model. Because of severe underreporting of slight-injury crashes, binary and mixed binary logistic regression models were also estimated for two categories of severity: fatal and serious crashes. For frequency, two count models, namely Negative Binomial (NB) models, were employed, and the unit of analysis was the 168 HAIs (wards) in Riyadh city. Ward-level crash data are disaggregated by crash severity (fatal and serious-injury crashes). The results from the multinomial and binary response models are found to be fairly consistent but
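
    As a hedged sketch of the two model families mentioned, the code below fits a multinomial logit for crash severity and a negative binomial model for ward-level crash counts; all variables and data are invented placeholders, not the Riyadh data set.

```python
# Multinomial logit for severity and negative binomial model for crash frequency,
# fitted to simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
crashes = pd.DataFrame({
    "severity": rng.choice(["slight", "serious", "fatal"], size=n, p=[0.7, 0.25, 0.05]),
    "speed": rng.normal(80, 15, n),       # hypothetical crash-level covariates
    "night": rng.integers(0, 2, n),
})
crashes["sev_code"] = crashes["severity"].map({"slight": 0, "serious": 1, "fatal": 2})

# Multinomial logit for severity (slight injury as the base category)
mnl = smf.mnlogit("sev_code ~ speed + night", data=crashes).fit(disp=False)
print(mnl.summary())

# Negative binomial regression for crash counts in 168 wards
wards = pd.DataFrame({
    "crash_count": rng.poisson(12, 168),
    "road_km": rng.normal(40, 10, 168),
    "population": rng.normal(30, 8, 168),   # thousands of inhabitants
})
nb = smf.glm("crash_count ~ road_km + population", data=wards,
             family=sm.families.NegativeBinomial()).fit()
print(nb.summary())
```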

  19. Thermal Performance Analyses of Multiborehole Ground Heat Exchangers

    Directory of Open Access Journals (Sweden)

    Wanjing Luo

    2017-01-01

    Full Text Available Geothermal energy, known as a clean, renewable energy resource, is widely available and reliable. Ground heat exchangers (GHEs) can assist the development of geothermal energy by reducing capital cost and greenhouse gas emissions. In this paper, a novel semianalytical method was developed to study the thermal performance of multiborehole ground heat exchangers (GHEs) with arbitrary configurations. By assuming a uniform inlet fluid temperature (UIFT) instead of a uniform heat flux (UHF), the effects of thermal interference and the differences in thermal performance between boreholes can be examined. Simulation results indicate that the monthly average outlet fluid temperatures of the GHEs will increase gradually when the annual cooling load of the GHEs is greater than the annual heating load. In addition, two mechanisms, thermal dissipation and the heat storage effect, determine the heat transfer underground, which can be further divided into four stages. Moreover, some boreholes may malfunction; that is, boreholes can absorb heat from the ground even when the GHEs are operating in cooling mode. As further investigations indicate, however, this malfunction can be avoided by increasing the borehole spacing.

  20. Accident analyses performed for the Norwegian committee on nuclear power

    International Nuclear Information System (INIS)

    Tveten, U.; Thomassen, D.; Kvaal, E.

    1979-02-01

    As part of the work performed for the Norwegian Government Committee on Nuclear Power, risk calculations were carried out for two examples of possible reactor sites in Norway. The calculations were performed with the computer program COMO (or CRACK), which was also used in the American reactor safety study (WASH-1400). In connection with the Norwegian calculations, some modifications were made to the program, and relevant data for Norwegian conditions were introduced. The atmospheric dispersion model and meteorological data are discussed at some length. An analysis of the population distribution around both sites is presented, and land usage is also discussed. Radiation dose calculations, internal and external, are summarised. Shielding factors from terrain and buildings are also given, and the effect of evacuation is briefly discussed. Health effects, immediate mortalities, and delayed and genetic effects are discussed at some length. The economic consequences of an accident due to, e.g., evacuation, condemnation of agricultural products, cost of decontamination, loss in property value and relocation costs are estimated. The results are presented graphically as a function of probability. (JIW)

  1. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 2: robustness of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (i) Type I errors are unavoidable, (ii) Type II errors can occur when inappropriate analysis procedures are used, (iii) physical explanations should always be sought for why statistical procedures identify variables as being important, and (iv) the identification of important variables tends to be stable for independent Latin hypercube samples
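
    The five pattern-detection procedures listed can be illustrated with standard test statistics; the sketch below applies them to a single synthetic input/output scatterplot rather than to output from the two-phase flow model.

```python
# Pattern-detection statistics for one sampled input (x) and one model output (y).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.uniform(0, 1, 300)                       # Latin-hypercube-like sampled input
y = np.exp(2 * x) + rng.normal(0, 0.5, 300)      # output, monotonic in x plus noise

print("(i)  linear:    r   =", stats.pearsonr(x, y)[0])
print("(ii) monotonic: rho =", stats.spearmanr(x, y)[0])

# (iii) trend in central tendency: Kruskal-Wallis test across quintiles of x
bins = np.quantile(x, [0, .2, .4, .6, .8, 1.0])
groups = [y[(x >= lo) & (x <= hi)] for lo, hi in zip(bins[:-1], bins[1:])]
print("(iii) Kruskal-Wallis:", stats.kruskal(*groups))

# (iv) trend in variability: interquartile range within each quintile
print("(iv) group IQRs:", [np.subtract(*np.percentile(g, [75, 25])) for g in groups])

# (v) deviation from randomness: chi-square test on a 5x5 grid over the scatterplot
grid, _, _ = np.histogram2d(x, y, bins=5)
chi2, p, dof, expected = stats.chi2_contingency(grid)
print("(v) chi-square = %.1f, p = %.3g" % (chi2, p))
```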

  2. Statistics

    International Nuclear Information System (INIS)

    2003-01-01

    For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees on energy products

  3. Statistics

    International Nuclear Information System (INIS)

    2004-01-01

    For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees

  4. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g., Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  5. Performances and reliabilities comparisons by multidimensional statistic studies

    International Nuclear Information System (INIS)

    Coudray, R.

    1993-01-01

    After an overview of the methods applicable to this type of analysis, we focus on the method used in the operating-experience feedback for the chemical and volume control system pump of the French 900 MWe PWR. 7 figs., 3 tabs., 8 refs

  6. Cost and quality effectiveness of objective-based and statistically-based quality control for volatile organic compounds analyses of gases

    International Nuclear Information System (INIS)

    Bennett, J.T.; Crowder, C.A.; Connolly, M.J.

    1994-01-01

    Gas samples from drums of radioactive waste at the Department of Energy (DOE) Idaho National Engineering Laboratory are being characterized for 29 volatile organic compounds to determine the feasibility of storing the waste in DOE's Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. Quality requirements for the gas chromatography (GC) and GC/mass spectrometry chemical methods used to analyze the waste are specified in the Quality Assurance Program Plan for the WIPP Experimental Waste Characterization Program. Quality requirements consist of both objective criteria (data quality objectives, DQOs) and statistical criteria (process control). The DQOs apply to routine sample analyses, while the statistical criteria serve to determine and monitor the precision and accuracy (P&A) of the analysis methods and are also used to assign upper confidence limits to measurement results close to action levels. After over two years and more than 1000 sample analyses, there are two general conclusions concerning the two approaches to quality control: (1) Objective criteria (e.g., ± 25% precision, ± 30% accuracy) based on customer needs and the criteria usually prescribed for similar EPA-approved methods are consistently attained during routine analyses. (2) Statistical criteria based on short-term method performance are almost an order of magnitude more stringent than the objective criteria and are difficult to satisfy following the same routine laboratory procedures that satisfy the objective criteria. A more cost-effective and representative approach to establishing statistical method performance criteria would be either to utilize a moving average of P&A from control samples over a several-month period, or to determine within-sample variation by one-way analysis of variance of several months' replicate sample analysis results, or both. Confidence intervals for results near action levels could also be determined by replicate analysis of the sample in
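
    The suggested alternatives (a moving average of control-sample P&A over several months, and one-way analysis of variance of replicate sample results) might look roughly as follows; all recoveries and concentrations are invented.

```python
# Moving-average control of percent recovery and ANOVA of replicate analyses.
import numpy as np
import pandas as pd
from scipy import stats

# (a) monthly percent recovery of a control sample, smoothed with a 3-month moving average
recovery = pd.Series([96.0, 102.5, 99.1, 104.2, 97.8, 101.3, 98.6, 100.9])
print(recovery.rolling(window=3).mean())

# (b) replicate analyses of the same sample in three different months (ppmv of one VOC)
month1 = np.array([10.2, 10.5, 9.9])
month2 = np.array([10.8, 11.1, 10.6])
month3 = np.array([10.1, 10.4, 10.0])
f, p = stats.f_oneway(month1, month2, month3)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.3f}")

# Within-sample (residual) standard deviation from the ANOVA decomposition
groups = [month1, month2, month3]
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = sum(len(g) - 1 for g in groups)
print("within-sample SD =", np.sqrt(ss_within / df_within))
```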

  7. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics.

    Science.gov (United States)

    Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona

    2017-01-01

    In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural

  8. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics

    Science.gov (United States)

    Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona

    2017-01-01

    In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural

  9. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics

    Directory of Open Access Journals (Sweden)

    Manuela Paechter

    2017-07-01

    Full Text Available In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men. Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in

  10. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files

  11. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  12. Performance Monitoring System: Summary of Lock Statistics. Revision 1.

    Science.gov (United States)

    1985-12-01

    [Garbled OCR excerpt of lock statistics tables (upbound/downbound tonnage and operations counts), including entries for Arkansas River locks; individual values are unrecoverable.]

  13. Statistical Analysis of TTC Students' performance in the ...

    African Journals Online (AJOL)

    user17

    This study aimed at analysing Teacher Training Colleges students' .... and equipment, teaching and learning aids, TTC environment, TTC ..... grounds, classrooms, tutors' offices, library, reading space and support staff ... science, clean and secure environment that promotes inclusive, equity and equality of education.

  14. Practical Statistics

    CERN Document Server

    Lyons, L.

    2016-01-01

    Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

  15. A statistical analysis on the leak detection performance of ...

    Indian Academy of Sciences (India)

    Chinedu Duru

    2017-11-09

    Nov 9, 2017 ... of underground and overground pipelines with wireless sensor networks through the .... detection performance analysis of pipeline leakage. This study and ..... case and apply to all materials transported through the pipeline.

  16. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  17. FY01 Supplemental Science and Performance Analyses, Volume 1: Scientific Bases and Analyses, Part 1 and 2

    International Nuclear Information System (INIS)

    Dobson, David

    2001-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S and ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S and ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23 013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054] [DIRS 124754]). By making the large amount of information developed on Yucca Mountain available in stages, the DOE intends to provide the public and interested parties with time to review the available materials and to formulate

  18. Influence of peer review on the reporting of primary outcome(s) and statistical analyses of randomised trials.

    Science.gov (United States)

    Hopewell, Sally; Witt, Claudia M; Linde, Klaus; Icke, Katja; Adedire, Olubusola; Kirtley, Shona; Altman, Douglas G

    2018-01-11

    Selective reporting of outcomes in clinical trials is a serious problem. We aimed to investigate the influence of the peer review process within biomedical journals on the reporting of primary outcome(s) and statistical analyses within reports of randomised trials. Each month, PubMed (May 2014 to April 2015) was searched to identify primary reports of randomised trials published in six high-impact general and 12 high-impact specialty journals. The corresponding author of each trial was invited to complete an online survey asking authors about changes made to their manuscript as part of the peer review process. Our main outcomes were to assess: (1) the nature and extent of changes made as part of the peer review process in relation to reporting of the primary outcome(s) and/or primary statistical analysis; (2) how often authors followed these requests; and (3) whether this was related to specific journal or trial characteristics. Of 893 corresponding authors who were invited to take part in the online survey, 258 (29%) responded. The majority of trials were multicentre (n = 191; 74%); the median sample size was 325 (IQR 138 to 1010). The primary outcome was clearly defined in 92% (n = 238), of which the direction of treatment effect was statistically significant in 49%. On a 1-10 Likert scale, the majority of respondents indicated they were satisfied with the overall handling (mean 8.6, SD 1.5) and quality of peer review (mean 8.5, SD 1.5) of their manuscript. Only 3% (n = 8) said that the editor or peer reviewers had asked them to change or clarify the trial's primary outcome. However, 27% (n = 69) reported they were asked to change or clarify the statistical analysis of the primary outcome; most had fulfilled the request, the main motivation being to improve the statistical methods (n = 38; 55%) or to avoid rejection (n = 30; 44%). Overall, there was little association between authors being asked to make this change and the type of journal, intervention, significance of the

  19. Consumer Loyalty and Loyalty Programs: a topographic examination of the scientific literature using bibliometrics, spatial statistics and network analyses

    Directory of Open Access Journals (Sweden)

    Viviane Moura Rocha

    2015-04-01

    Full Text Available This paper presents a topographic analysis of the fields of consumer loyalty and loyalty programs, vastly studied in the last decades and still relevant in the marketing literature. After the identification of 250 scientific papers that were published in the last ten years in indexed journals, a subset of 76 were chosen and their 3223 references were extracted. The journals in which these papers were published, their key words, abstracts, authors, institutions of origin and citation patterns were identified and analyzed using bibliometrics, spatial statistics techniques and network analyses. The results allow the identification of the central components of the field, as well as its main authors, journals, institutions and countries that intermediate the diffusion of knowledge, which contributes to the understanding of the constitution of the field by researchers and students.

  20. Statistical process control as a tool for controlling operating room performance: retrospective analysis and benchmarking.

    Science.gov (United States)

    Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao

    2010-10-01

    There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after the process had been stabilized. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I. There are some measures that can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon can be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
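
    An individuals control chart of the kind used in such SPC studies can be sketched as follows; the operative times, the log transform used for normalisation and the exclusion rule are illustrative assumptions, not the study's actual procedure.

```python
# Individuals (I) control chart for log-transformed operative times (minutes, invented).
import numpy as np

op_minutes = np.array([62, 58, 71, 55, 66, 90, 60, 64, 59, 68, 57, 63, 61, 70, 56])
x = np.log(op_minutes)                      # normalising transform (assumption)

moving_range = np.abs(np.diff(x))
centre = x.mean()
ucl = centre + 2.66 * moving_range.mean()   # standard I-chart constant 3/d2, d2 = 1.128
lcl = centre - 2.66 * moving_range.mean()

out_of_control = np.where((x > ucl) | (x < lcl))[0]
print("UCL, LCL (log scale):", ucl, lcl)
print("out-of-control cases (indices):", out_of_control)
# Points outside the limits would be investigated and, if due to special causes,
# excluded before the stabilised chart is used to benchmark surgeons against each other.
```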

  1. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses.

    Science.gov (United States)

    Buttigieg, Pier Luigi; Ramette, Alban

    2014-12-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynamic, web-based resource providing accessible descriptions of numerous multivariate techniques relevant to microbial ecologists. A combination of interactive elements allows users to discover and navigate between methods relevant to their needs and examine how they have been used by others in the field. We have designed GUSTA ME to become a community-led and -curated service, which we hope will provide a common reference and forum to discuss and disseminate analytical techniques relevant to the microbial ecology community. © 2014 The Authors. FEMS Microbiology Ecology published by John Wiley & Sons Ltd on behalf of Federation of European Microbiological Societies.

  2. Use of statistical process control in evaluation of academic performance

    Directory of Open Access Journals (Sweden)

    Ezequiel Gibbon Gautério

    2014-05-01

    Full Text Available The aim of this article was to study some indicators of academic performance (number of students per class, dropout rate, failure rate and scores obtained by the students) in order to identify a pattern of behavior that would enable improvements to be implemented in the teaching-learning process. The sample was composed of five classes of undergraduate courses in Engineering. The data were collected over three years. Initially, an exploratory analysis with analytical and graphical techniques was performed. An analysis of variance and Tukey's test investigated some sources of variability. This information was used in the construction of control charts. We found evidence that classes with more students are associated with higher failure rates and lower mean scores. Moreover, when the course came later in the curriculum, the students had higher scores. The results showed that, although some special causes interfering with the process were detected, it was possible to stabilize the process and to monitor it.

  3. Testing the performance of a blind burst statistic

    Energy Technology Data Exchange (ETDEWEB)

    Vicere, A [Istituto di Fisica, Universita di Urbino (Italy); Calamai, G [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Campagna, E [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Conforto, G [Istituto di Fisica, Universita di Urbino (Italy); Cuoco, E [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Dominici, P [Istituto di Fisica, Universita di Urbino (Italy); Fiori, I [Istituto di Fisica, Universita di Urbino (Italy); Guidi, G M [Istituto di Fisica, Universita di Urbino (Italy); Losurdo, G [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Martelli, F [Istituto di Fisica, Universita di Urbino (Italy); Mazzoni, M [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Perniola, B [Istituto di Fisica, Universita di Urbino (Italy); Stanga, R [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Vetrano, F [Istituto di Fisica, Universita di Urbino (Italy)

    2003-09-07

    In this work, we estimate the performance of a method for the detection of burst events in the data produced by interferometric gravitational wave detectors. We compute the receiver operating characteristics in the specific case of a simulated noise having the spectral density expected for Virgo, using test signals taken from a library of possible waveforms emitted during the collapse of the core of type II supernovae.
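
    A generic way of tracing out a receiver operating characteristic for a burst statistic is sketched below; the chi-square distributed statistic and the injected-signal offset are arbitrary stand-ins for the Virgo-like simulation described.

```python
# Empirical ROC curve: detection probability versus false-alarm probability.
import numpy as np

rng = np.random.default_rng(11)
stat_noise = rng.chisquare(df=2, size=100_000)          # statistic under noise only
stat_signal = rng.chisquare(df=2, size=100_000) + 6.0   # statistic with an injected burst

thresholds = np.linspace(0.0, 30.0, 300)
p_fa = np.array([(stat_noise > th).mean() for th in thresholds])    # false-alarm probability
p_det = np.array([(stat_signal > th).mean() for th in thresholds])  # detection probability

# Detection probability at a fixed false-alarm probability of about 1%
idx = np.argmin(np.abs(p_fa - 0.01))
print(f"P_det at P_fa ~ 1%: {p_det[idx]:.2f}")
```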

  4. Growth curve analyses of the relationship between early maternal age and children's mathematics and reading performance.

    Science.gov (United States)

    Torres, D Diego

    2015-03-01

    Regarding the methods used to examine the early maternal age-child academic outcomes relationship, the extant literature has tended to examine change using statistical analyses that fail to appreciate that individuals vary in their rates of growth. Of the one study I have been able to find that employs a true growth model to estimate this relationship, the authors only controlled for characteristics of the maternal household after family formation; confounding background factors of mothers that might select them into early childbearing, a possible source of bias, were ignored. The authors' findings nonetheless suggested an inverse relationship between early maternal age, i.e., a first birth between the ages of 13 and 17, and Canadian adolescents' mean math performance at age 10. Early maternal age was not related to the linear slope of age. To elucidate whether the early maternal age-child academic outcomes association, treated in a growth context, is consistent with this finding, the present study built on it using US data and explored children's mathematics and reading trajectories from age 5 on. Its unique contribution is that it further explicitly controlled for maternal background factors and employed a three-level growth model with repeated measures of children nested within their mothers. Though the strength of the relationship varied between mean initial academic performance and mean academic growth, results confirmed that early maternal age was negatively related to children's mathematics and reading achievement, net of post-teen first birth child-specific and maternal household factors. Once maternal background factors were included, there was no statistically significant relationship between early maternal age and either children's mean initial mathematics and reading scores or their mean mathematics and reading growth. Copyright © 2014 Elsevier Inc. All rights reserved.
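
    A growth model with random intercepts and slopes, similar in spirit to the three-level model described (here simplified to two levels, repeated measures nested in children), could be specified as follows on simulated data; variable names are illustrative.

```python
# Random-intercept, random-slope growth model for repeated test scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for child in range(200):
    teen_mother = rng.integers(0, 2)                      # 1 = first birth at age 13-17
    intercept = 100 - 4 * teen_mother + rng.normal(0, 5)  # initial status at age 5
    slope = 10 - 0.5 * teen_mother + rng.normal(0, 1)     # growth per assessment wave
    for age in (5, 7, 9, 11):
        score = intercept + slope * (age - 5) + rng.normal(0, 3)
        rows.append((child, teen_mother, age - 5, score))
df = pd.DataFrame(rows, columns=["child", "teen_mother", "age_c", "math"])

# Fixed effects: centred age, early-maternal-age indicator and their interaction;
# random intercept and random age slope varying across children.
model = smf.mixedlm("math ~ age_c * teen_mother", df,
                    groups=df["child"], re_formula="~age_c").fit()
print(model.summary())   # teen_mother: gap in initial status; age_c:teen_mother: gap in growth
```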

  5. Statistical Analyses of High-Resolution Aircraft and Satellite Observations of Sea Ice: Applications for Improving Model Simulations

    Science.gov (United States)

    Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.

    2012-12-01

    Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better that provide new details on the properties and morphology of the ice pack across basin scales. For example the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent

  6. Statistical Analysis of the Grid Connected Photovoltaic System Performance Ratio

    Directory of Open Access Journals (Sweden)

    Javier Vilariño-García

    2017-05-01

    A methodology based on analysis of variance and Tukey's method was applied to a data set of solar radiation in the plane of the photovoltaic modules and the corresponding values of power delivered to the grid, recorded at 10-minute intervals from sunrise to sunset over the 52 weeks of 2013. These data were obtained through a monitoring system located in a 10 MW (rated power) photovoltaic plant in Cordoba, consisting of 16 transformers and 98 inverters. The mean performance indices of the processing centers (transformer units) are compared by analysis of variance to detect, at a 5% significance level, whether at least one mean differs significantly from the rest; Tukey's test is then used to identify which processing center or centers fall below average owing to a fault, so that the fault can be detected and corrected.
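
    As a rough illustration of the workflow just described (one-way ANOVA across units followed by Tukey's test), the sketch below runs both steps on synthetic weekly performance-ratio data; the unit labels, the values and the deliberately degraded unit are assumptions, not the plant's monitoring data.

        # Minimal sketch: ANOVA + Tukey HSD on per-unit performance ratios (synthetic data).
        import numpy as np
        import pandas as pd
        from scipy import stats
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        rng = np.random.default_rng(0)
        units = [f"TC{i:02d}" for i in range(1, 17)]              # 16 processing centers (assumed labels)
        df = pd.DataFrame({
            "unit": np.repeat(units, 52),                          # one weekly value per unit
            "pr": np.concatenate([rng.normal(0.80, 0.03, 52) for _ in units]),
        })
        df.loc[df["unit"] == "TC07", "pr"] -= 0.05                 # simulate one under-performing unit

        # One-way ANOVA: does at least one unit mean differ at the 5% level?
        groups = [g["pr"].to_numpy() for _, g in df.groupby("unit")]
        f_stat, p_val = stats.f_oneway(*groups)
        print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

        # Tukey HSD: which unit(s) differ from the others?
        print(pairwise_tukeyhsd(df["pr"], df["unit"], alpha=0.05).summary())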

  7. The Statistical Analysis of Relation between Compressive and Tensile/Flexural Strength of High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Kępniak M.

    2016-12-01

    This paper addresses the tensile and flexural strength of HPC (high performance concrete). The aim of the paper is to analyse the efficiency of models proposed in different codes. In particular, three design procedures, from the ACI 318 [1], Eurocode 2 [2] and the Model Code 2010 [3], are considered. The associations between the design tensile strength of concrete obtained from these three codes and compressive strength are compared with experimental results of tensile strength and flexural strength by statistical tools. Experimental results of tensile strength were obtained in the splitting test. Based on this comparison, conclusions are drawn about the fit between the design methods and the test data. The comparison shows that the tensile strength and flexural strength of HPC depend on more factors than compressive strength alone.
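
    Design codes typically express tensile (or flexural) strength as a power function of compressive strength, f_t = a·f_c^b, and the fit of such a relation to test data can be checked with a simple log-log least-squares regression. The sketch below is only an illustration on synthetic strength values; the numbers and fitted coefficients are assumptions, not the paper's data or the exact ACI/Eurocode/Model Code expressions.

        # Fit a power law f_t = a * f_c**b to paired strength data (synthetic, illustrative).
        import numpy as np

        fc = np.array([60, 70, 80, 90, 100, 110], dtype=float)   # compressive strength, MPa (assumed)
        ft = np.array([4.1, 4.5, 4.8, 5.1, 5.4, 5.6])            # splitting tensile strength, MPa (assumed)

        b, log_a = np.polyfit(np.log(fc), np.log(ft), 1)         # log f_t = b*log f_c + log a
        a = np.exp(log_a)
        pred = a * fc**b
        rmse = np.sqrt(np.mean((ft - pred) ** 2))
        print(f"f_t ~ {a:.3f} * f_c^{b:.3f}, RMSE = {rmse:.3f} MPa")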

  8. Concepts for measuring maintenance performance and methods for analysing competing failure modes

    International Nuclear Information System (INIS)

    Cooke, Roger; Paulsen, Jette

    1997-01-01

    Measurement of maintenance performance is done on the basis of component history data in which service sojourns are distinguished according to whether they terminate in corrective or preventive maintenance. From the viewpoint of data analysis, corrective and preventive maintenance constitute competing failure modes. This article examines ways to assess maintenance performance without introducing statistical assumptions, then introduces a plausible statistical model for describing the interaction of preventive and corrective maintenance, and finally illustrates these with examples from the Nordic TUD data system.
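
    Treating corrective and preventive maintenance as competing failure modes amounts to a competing-risks analysis of the service sojourns. The sketch below computes a nonparametric (Aalen-Johansen-style) cumulative incidence for each terminating event from a small synthetic history; the times and event codes are assumptions, not the Nordic TUD records, and censoring is ignored for brevity.

        # Cumulative incidence of two competing maintenance outcomes (synthetic sojourn data).
        import numpy as np

        times  = np.array([120, 200, 200, 340, 400, 520, 610, 610, 700, 850], float)  # hours in service
        events = np.array([1,   2,   1,   2,   1,   1,   2,   1,   2,   1])           # 1=corrective, 2=preventive

        surv, at_risk = 1.0, len(times)          # overall survival just before each event time
        cif = {1: 0.0, 2: 0.0}
        for t in np.unique(times):
            d1 = int(np.sum((times == t) & (events == 1)))
            d2 = int(np.sum((times == t) & (events == 2)))
            cif[1] += surv * d1 / at_risk
            cif[2] += surv * d2 / at_risk
            surv *= 1.0 - (d1 + d2) / at_risk
            at_risk -= d1 + d2

        print(f"P(sojourn ends in corrective maintenance) ~ {cif[1]:.2f}")
        print(f"P(sojourn ends in preventive maintenance) ~ {cif[2]:.2f}")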

  9. Performance evaluation of a hybrid-passive landfill leachate treatment system using multivariate statistical techniques

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, Jack, E-mail: jack.wallace@ce.queensu.ca [Department of Civil Engineering, Queen’s University, Ellis Hall, 58 University Avenue, Kingston, Ontario K7L 3N6 (Canada)]; Champagne, Pascale, E-mail: champagne@civil.queensu.ca [Department of Civil Engineering, Queen’s University, Ellis Hall, 58 University Avenue, Kingston, Ontario K7L 3N6 (Canada)]; Monnier, Anne-Charlotte, E-mail: anne-charlotte.monnier@insa-lyon.fr [National Institute for Applied Sciences – Lyon, 20 Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]

    2015-01-15

    Highlights: • Performance of a hybrid passive landfill leachate treatment system was evaluated. • 33 Water chemistry parameters were sampled for 21 months and statistically analyzed. • Parameters were strongly linked and explained most (>40%) of the variation in data. • Alkalinity, ammonia, COD, heavy metals, and iron were criteria for performance. • Eight other parameters were key in modeling system dynamics and criteria. - Abstract: A pilot-scale hybrid-passive treatment system operated at the Merrick Landfill in North Bay, Ontario, Canada, treats municipal landfill leachate and provides for subsequent natural attenuation. Collected leachate is directed to a hybrid-passive treatment system, followed by controlled release to a natural attenuation zone before entering the nearby Little Sturgeon River. The study presents a comprehensive evaluation of the performance of the system using multivariate statistical techniques to determine the interactions between parameters, major pollutants in the leachate, and the biological and chemical processes occurring in the system. Five parameters (ammonia, alkalinity, chemical oxygen demand (COD), “heavy” metals of interest, with atomic weights above calcium, and iron) were set as criteria for the evaluation of system performance based on their toxicity to aquatic ecosystems and importance in treatment with respect to discharge regulations. System data for a full range of water quality parameters over a 21-month period were analyzed using principal components analysis (PCA), as well as principal components (PC) and partial least squares (PLS) regressions. PCA indicated a high degree of association for most parameters with the first PC, which explained a high percentage (>40%) of the variation in the data, suggesting strong statistical relationships among most of the parameters in the system. Regression analyses identified 8 parameters (set as independent variables) that were most frequently retained for modeling
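
    The PCA and PLS steps described above can be outlined with scikit-learn. The snippet below is a hedged sketch on a synthetic water-chemistry matrix: the number of parameters, the criterion variable and the choice of two PLS components are assumptions, not the study's configuration.

        # Sketch of PCA followed by PLS regression on a synthetic water-quality data set.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.normal(size=(90, 12))             # 90 samples x 12 water-quality parameters (assumed)
        y = 2.0 * X[:, 0] + X[:, 3] - X[:, 7] + rng.normal(scale=0.5, size=90)  # e.g. COD (assumed)

        Xs = StandardScaler().fit_transform(X)

        # PCA: how much of the variation does the first component explain?
        pca = PCA(n_components=3).fit(Xs)
        print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))

        # PLS regression of the criterion parameter on the standardized predictors.
        pls = PLSRegression(n_components=2).fit(Xs, y)
        rmse = np.sqrt(np.mean((y - pls.predict(Xs).ravel()) ** 2))
        print(f"PLS RMSE = {rmse:.2f}")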

  10. Results of the RAMI analyses performed for the IFMIF accelerator facility in the engineering design phase

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo@esss.se [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain)]; Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)]; Abal, Javier; Dies, Javier; De Blas, Alfredo; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain)]; Moya, Joaquin; Ibarra, Angel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)]

    2015-10-15

    Highlights: • RAMI methodology used for IFMIF accelerator facility is presented. • Availability analyses and results are shown. • Main accelerator design changes are proposed. • Consequences and conclusions of the RAMI analyses are described. - Abstract: This paper presents a summary of the RAMI (Reliability Availability Maintainability Inspectability) analyses done for the IFMIF (International Fusion Materials Irradiation Facility) Accelerator facility in the Engineering Design Phase. The methodology followed, the analyses performed, the results obtained and the conclusions drawn are described. Moreover, the consequences of the incorporation of the RAMI studies in the IFMIF design are presented and the main outcomes of these analyses are shown.

  11. Results of the RAMI analyses performed for the IFMIF accelerator facility in the engineering design phase

    International Nuclear Information System (INIS)

    Bargalló, Enric; Arroyo, Jose Manuel; Abal, Javier; Dies, Javier; De Blas, Alfredo; Tapia, Carlos; Moya, Joaquin; Ibarra, Angel

    2015-01-01

    Highlights: • RAMI methodology used for IFMIF accelerator facility is presented. • Availability analyses and results are shown. • Main accelerator design changes are proposed. • Consequences and conclusions of the RAMI analyses are described. - Abstract: This paper presents a summary of the RAMI (Reliability Availability Maintainability Inspectability) analyses done for the IFMIF (International Fusion Materials Irradiation Facility) Accelerator facility in the Engineering Design Phase. The methodology followed, the analyses performed, the results obtained and the conclusions drawn are described. Moreover, the consequences of the incorporation of the RAMI studies in the IFMIF design are presented and the main outcomes of these analyses are shown.

  12. Analyses of statistical transformations of raw data describing free proline concentration in sugar beet exposed to drought

    Directory of Open Access Journals (Sweden)

    Putnik-Delić Marina I.

    2010-01-01

    Eleven sugar beet genotypes were tested for their capacity to tolerate drought. Plants were grown in semi-controlled conditions in the greenhouse and watered daily. After 90 days, water deficit was imposed by the cessation of watering, while the control plants continued to be watered up to 80% of FWC. Five days later, the concentration of free proline in leaves was determined. The analysis was done in three replications. Statistical analysis was performed using STATISTICA 9.0, Minitab 15, and R 2.11.1. Differences between genotypes were statistically processed by Duncan's test. Because of non-normality of the data distribution and heterogeneity of variances in different groups, two types of transformations of the raw data were applied. For this type of data, the Johnson transformation was more appropriate for eliminating non-normality than the Box-Cox transformation. Based on both transformations, it may be concluded that in all genotypes except genotype 10, the concentration of free proline differs significantly between the drought treatment and the control.
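
    A Box-Cox transformation of the raw proline concentrations, followed by a normality check, can be sketched as below. The values are synthetic and the Shapiro-Wilk test stands in for the paper's diagnostics; SciPy does not provide the Johnson system of transformations that the study found preferable, so only the Box-Cox branch is shown.

        # Box-Cox transform of (strictly positive) concentrations, then a normality check.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        proline = rng.lognormal(mean=1.0, sigma=0.6, size=33)    # synthetic, right-skewed data

        transformed, lam = stats.boxcox(proline)                  # lambda estimated by maximum likelihood
        print(f"estimated Box-Cox lambda = {lam:.2f}")

        for label, x in (("raw", proline), ("Box-Cox", transformed)):
            w, p = stats.shapiro(x)
            print(f"{label:8s}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")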

  13. The effect of innovative activity in firm performance and development: Analysing data from eurozone

    Directory of Open Access Journals (Sweden)

    Ilias A. Makris

    2016-06-01

    Purpose – The purpose of this paper is to examine the effect of innovative activity on firm performance and growth. Active Research and Development is considered to be directly related to development, prosperity and growth, at both the micro and macro level, and a key factor in hindering economic recession. Design/methodology/approach – We analyse economic data from listed firms of selected eurozone country-members in order to associate Research and Development with performance indicators at firm and country level. For that purpose, firm data were collected from the WorldScope database and macroeconomic data from the World Bank database. The period examined is between 2002 and 2012, with a special focus on the current financial crisis (after 2007). The empirical process includes descriptive statistics and logistic regression analysis. Findings – Findings indicate the crucial effect of the innovative process on economic performance and development at firm and country level. The latter highlights the urgent need for public support in order to spur innovative activity and high-tech exports, especially in countries that were heavily affected by recession. Research limitations/implications – Some research limitations are the large number of missing cases in the WorldScope database, as many firms exited the stock market after the beginning of the current crisis. Furthermore, the other part of the economy, small and medium enterprises, is not represented in the analysis, as listed firms are mainly large and mature companies. Originality/value – The results tend to highlight the need for common policy measures in the eurozone in regard to such issues, instead of imposing horizontal budgetary constraints on specific countries (like those of Southern Europe), so as to hinder the vicious recessionary circle.

  14. Computational modeling and statistical analyses on individual contact rate and exposure to disease in complex and confined transportation hubs

    Science.gov (United States)

    Wang, W. L.; Tsui, K. L.; Lo, S. M.; Liu, S. B.

    2018-01-01

    Crowded transportation hubs such as metro stations are thought of as ideal places for the development and spread of epidemics. However, because of their complex spatial layout and confined environment with large numbers of highly mobile individuals, it is difficult to quantify human contacts in such settings, and disease-spreading dynamics there were less explored in previous studies. Due to the heterogeneity and dynamic nature of human interactions, a growing number of studies have demonstrated the importance of contact distance and length of contact in transmission probabilities. In this study, we show how detailed information on contact and exposure patterns can be obtained by statistical analyses of microscopic crowd simulation data. To be specific, a pedestrian simulation model, CityFlow, was employed to reproduce individuals' movements in a metro station based on site survey data, and values and distributions of individual contact rate and exposure in different simulation cases were obtained and analyzed. Interestingly, a Weibull distribution fitted the histogram values of individual-based exposure in each case very well. Moreover, we found that both individual contact rate and exposure had a linear relationship with the average crowd density of the environment. The results obtained in this paper can provide a reference for epidemic studies in complex and confined transportation hubs and help refine existing disease-spreading models.
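
    Fitting a Weibull distribution to the simulated exposure values, as reported above, can be done with SciPy. The snippet below is a minimal illustration with synthetic exposure data; fixing the location parameter at zero and the goodness-of-fit check are assumptions, not the authors' procedure.

        # Fit a two-parameter Weibull distribution to per-individual exposure values (synthetic).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        exposure = 30.0 * rng.weibull(1.5, size=2000)            # synthetic exposure values

        shape, loc, scale = stats.weibull_min.fit(exposure, floc=0)   # location fixed at 0
        print(f"Weibull shape k = {shape:.2f}, scale = {scale:.2f}")

        # Rough goodness-of-fit check against the fitted distribution.
        ks, p = stats.kstest(exposure, "weibull_min", args=(shape, loc, scale))
        print(f"KS statistic = {ks:.3f}, p = {p:.3f}")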

  15. Digital immunohistochemistry platform for the staining variation monitoring based on integration of image and statistical analyses with laboratory information system.

    Science.gov (United States)

    Laurinaviciene, Aida; Plancoulaine, Benoit; Baltrusaityte, Indra; Meskauskas, Raimundas; Besusparis, Justinas; Lesciute-Krilaviciene, Daiva; Raudeliunas, Darius; Iqbal, Yasir; Herlin, Paulette; Laurinavicius, Arvydas

    2014-01-01

    Digital immunohistochemistry (IHC) is one of the most promising applications brought by new generation image analysis (IA). While conventional IHC staining quality is monitored by semi-quantitative visual evaluation of tissue controls, IA may require more sensitive measurement. We designed an automated system to digitally monitor IHC multi-tissue controls, based on SQL-level integration of laboratory information system with image and statistical analysis tools. Consecutive sections of TMA containing 10 cores of breast cancer tissue were used as tissue controls in routine Ki67 IHC testing. Ventana slide label barcode ID was sent to the LIS to register the serial section sequence. The slides were stained and scanned (Aperio ScanScope XT), IA was performed by the Aperio/Leica Colocalization and Genie Classifier/Nuclear algorithms. SQL-based integration ensured automated statistical analysis of the IA data by the SAS Enterprise Guide project. Factor analysis and plot visualizations were performed to explore slide-to-slide variation of the Ki67 IHC staining results in the control tissue. Slide-to-slide intra-core IHC staining analysis revealed rather significant variation of the variables reflecting the sample size, while Brown and Blue Intensity were relatively stable. To further investigate this variation, the IA results from the 10 cores were aggregated to minimize tissue-related variance. Factor analysis revealed association between the variables reflecting the sample size detected by IA and Blue Intensity. Since the main feature to be extracted from the tissue controls was staining intensity, we further explored the variation of the intensity variables in the individual cores. MeanBrownBlue Intensity ((Brown+Blue)/2) and DiffBrownBlue Intensity (Brown-Blue) were introduced to better contrast the absolute intensity and the colour balance variation in each core; relevant factor scores were extracted. Finally, tissue-related factors of IHC staining variance were

  16. Point processes statistics of stable isotopes: analysing water uptake patterns in a mixed stand of Aleppo pine and Holm oak

    Directory of Open Access Journals (Sweden)

    Carles Comas

    2015-04-01

    Aim of study: Understanding inter- and intra-specific competition for water is crucial in drought-prone environments. However, little is known about the spatial interdependencies for water uptake among individuals in mixed stands. The aim of this work was to compare water uptake patterns during a drought episode in two common Mediterranean tree species, Quercus ilex L. and Pinus halepensis Mill., using the isotope composition of xylem water (δ18O, δ2H) as a hydrological marker. Area of study: The study was performed in a mixed stand, sampling a total of 33 oaks and 78 pines (plot area = 888 m2). We tested the hypothesis that both species take up water differentially along the soil profile, thus showing different levels of tree-to-tree interdependency, depending on whether neighbouring trees belong to one species or the other. Material and Methods: We used pair-correlation functions to study intra-specific point-tree configurations and the bivariate pair-correlation function to analyse the inter-specific spatial configuration. Moreover, the isotopic composition of xylem water was analysed as a mark point pattern. Main results: Values for Q. ilex (δ18O = –5.3 ± 0.2‰, δ2H = –54.3 ± 0.7‰) were significantly lower than for P. halepensis (δ18O = –1.2 ± 0.2‰, δ2H = –25.1 ± 0.8‰), pointing to a greater contribution of deeper soil layers to water uptake by Q. ilex. Research highlights: Point-process analyses revealed spatial intra-specific dependencies among neighbouring pines, showing neither oak-oak nor oak-pine interactions. This supports niche segregation for water uptake between the two species.

  17. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  18. Development of a statistical shape model of multi-organ and its performance evaluation

    International Nuclear Information System (INIS)

    Nakada, Misaki; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru

    2010-01-01

    Existing statistical shape modeling methods for an organ cannot take into account the correlation between neighboring organs. This study focuses on a level set distribution model and proposes two modeling methods for multiple organs that can take into account the correlation between neighboring organs. The first method combines the level set functions of multiple organs into a vector. Subsequently, it analyses the distribution of the vectors of a training dataset by principal component analysis and builds a statistical shape model of multiple organs. The second method constructs a statistical shape model for each organ independently and assembles the component scores of the different organs in a training dataset into a vector. It then analyses the distribution of these vectors to build a statistical shape model of multiple organs. This paper shows the results of applying the proposed methods, trained on 15 abdominal CT volumes, to 8 unknown CT volumes. (author)

  19. Can We Use Polya’s Method to Improve Students’ Performance in the Statistics Classes?

    Directory of Open Access Journals (Sweden)

    Indika Wickramasinghe

    2015-01-01

    In this study, Polya's problem-solving method is introduced in a statistics class in an effort to enhance students' performance. The method was taught in one of two introductory-level statistics classes taught by the same instructor, and a comparison was made between the performances in the two classes. The results indicate there was a significant improvement in the students' performance in the class in which Polya's method was introduced.

  20. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    Science.gov (United States)

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
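
    A conventional random-effects summary of per-study repeatability estimates follows the DerSimonian-Laird scheme sketched below; the effect sizes and variances are synthetic placeholders, and the small-sample corrections that the paper argues for are not included here.

        # DerSimonian-Laird random-effects pooling of per-study estimates (synthetic values).
        import numpy as np

        y = np.array([0.12, 0.18, 0.10, 0.22, 0.15])        # per-study estimates (assumed)
        v = np.array([0.004, 0.006, 0.003, 0.010, 0.005])   # within-study variances (assumed)

        w = 1.0 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - y_fixed) ** 2)
        k = len(y)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (k - 1)) / c)                  # between-study variance estimate

        w_star = 1.0 / (v + tau2)
        y_re = np.sum(w_star * y) / np.sum(w_star)
        se_re = np.sqrt(1.0 / np.sum(w_star))
        print(f"Q = {Q:.2f}, tau^2 = {tau2:.4f}")
        print(f"random-effects estimate = {y_re:.3f} "
              f"(95% CI {y_re - 1.96 * se_re:.3f} to {y_re + 1.96 * se_re:.3f})")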

  1. The ‘39 steps’: an algorithm for performing statistical analysis of data on energy intake and expenditure

    Directory of Open Access Journals (Sweden)

    John R. Speakman

    2013-03-01

    The epidemics of obesity and diabetes have aroused great interest in the analysis of energy balance, with the use of organisms ranging from nematode worms to humans. Although generating energy-intake or -expenditure data is relatively straightforward, the most appropriate way to analyse the data has been an issue of contention for many decades. In the last few years, a consensus has been reached regarding the best methods for analysing such data. To facilitate using these best-practice methods, we present here an algorithm that provides a step-by-step guide for analysing energy-intake or -expenditure data. The algorithm can be used to analyse data from either humans or experimental animals, such as small mammals or invertebrates. It can be used in combination with any commercial statistics package; however, to assist with analysis, we have included detailed instructions for performing each step for three popular statistics packages (SPSS, MINITAB and R). We also provide interpretations of the results obtained at each step. We hope that this algorithm will assist in the statistically appropriate analysis of such data, a field in which there has been much confusion and some controversy.
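
    A commonly recommended step for energy-expenditure data of this kind is analysis of covariance (ANCOVA) with body mass as the covariate rather than ratio-based normalisation; whether that is the branch the algorithm selects depends on the data, and the algorithm itself gives the corresponding SPSS, MINITAB and R instructions. A minimal Python sketch of an ANCOVA follows, with synthetic data and assumed variable names.

        # ANCOVA sketch: energy expenditure compared between two groups, adjusting for body mass.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 40
        mass = np.concatenate([rng.normal(30, 3, n), rng.normal(34, 3, n)])     # body mass, g (assumed)
        group = np.repeat(["control", "treated"], n)
        ee = 0.5 * mass + 1.5 * (group == "treated") + rng.normal(0, 1, 2 * n)  # expenditure, synthetic

        df = pd.DataFrame({"ee": ee, "mass": mass, "group": group})
        model = smf.ols("ee ~ C(group) + mass", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))    # group effect adjusted for body mass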

  2. Analysing sensory panel performance in a proficiency test using the PanelCheck software

    DEFF Research Database (Denmark)

    Tomic, O.; Luciano, G.; Nilsen, A.

    2010-01-01

    Based on the PanelCheck software, a workflow is proposed that guides the user through the data analysis process. This allows practitioners and non-statisticians to get an overview of panel performances in a rapid manner without the need to be familiar with details of the statistical methods. Visualisation of data analysis results plays an important role, as this provides a time-saving and efficient way of screening and investigating sensory panel performances. Most of the statistical methods used in this paper are available in the open source software PanelCheck, which may be downloaded and used for free.

  3. Network meta-analyses performed by contracting companies and commissioned by industry

    NARCIS (Netherlands)

    Schuit, Ewoud; Ioannidis, John P A

    2016-01-01

    Background: Industry commissions contracting companies to perform network meta-analysis for health technology assessment (HTA) and reimbursement submissions. Our objective was to estimate the number of network meta-analyses performed by consulting companies contracted by industry, to assess whether

  4. Factors for analysing and improving performance of R&D in Malaysian universities

    NARCIS (Netherlands)

    Ramli, Mohammad Shakir; de Boer, S.J.; de Bruijn, E.J.

    2004-01-01

    This paper presents a model for analysing and improving performance of R&D in Malaysian universities. There are various general models for R&D analysis, but none is specific for improving the performance of R&D in Malaysian universities. This research attempts to fill a gap in the body of knowledge

  5. The effects of clinical and statistical heterogeneity on the predictive values of results from meta-analyses

    NARCIS (Netherlands)

    Melsen, W G; Rovers, M M; Bonten, M J M; Bootsma, M C J|info:eu-repo/dai/nl/304830305

    Variance between studies in a meta-analysis will exist. This heterogeneity may be of clinical, methodological or statistical origin. The last of these is quantified by the I² statistic. We investigated, using simulated studies, the accuracy of I² in the assessment of heterogeneity and the
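
    The I² statistic mentioned above is derived from Cochran's Q; a minimal computation on synthetic study estimates is sketched below (the effect sizes and standard errors are assumptions).

        # Cochran's Q and the I^2 heterogeneity statistic for a set of study estimates (synthetic).
        import numpy as np

        effects = np.array([0.30, 0.10, 0.45, 0.25, 0.05])    # per-study effect sizes (assumed)
        se = np.array([0.10, 0.12, 0.09, 0.15, 0.11])         # per-study standard errors (assumed)

        w = 1.0 / se ** 2
        pooled = np.sum(w * effects) / np.sum(w)
        Q = np.sum(w * (effects - pooled) ** 2)
        dof = len(effects) - 1
        I2 = max(0.0, (Q - dof) / Q) * 100.0
        print(f"Q = {Q:.2f} on {dof} df, I^2 = {I2:.1f}%")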

  6. Simultaneous assessment of phase chemistry, phase abundance and bulk chemistry with statistical electron probe micro-analyses: Application to cement clinkers

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef, E-mail: ulm@mit.edu

    2014-01-15

    According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: •A new method of clinker characterization •Combination of electron probe technique with cluster analysis •Simultaneous assessment of phase abundance, composition and bulk chemistry •Experimental validation performed on industrial clinkers.

  7. Simultaneous assessment of phase chemistry, phase abundance and bulk chemistry with statistical electron probe micro-analyses: Application to cement clinkers

    International Nuclear Information System (INIS)

    Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef

    2014-01-01

    According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: •A new method of clinker characterization •Combination of electron probe technique with cluster analysis •Simultaneous assessment of phase abundance, composition and bulk chemistry •Experimental validation performed on industrial clinkers

  8. Performance of Generating Plant: Managing the Changes. Part 2: Thermal Generating Plant Unavailability Factors and Availability Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Curley, G. Michael [North American Electric Reliability Corporation (United States)]; Mandula, Jiri [International Atomic Energy Agency (IAEA)]

    2008-05-15

    The WEC Committee on the Performance of Generating Plant (PGP) has been collecting and analysing power plant performance statistics worldwide for more than 30 years and has produced regular reports, which include examples of advanced techniques and methods for improving power plant performance through benchmarking. A series of reports from the various working groups was issued in 2008. This reference presents the results of Working Group 2 (WG2). WG2's main task is to facilitate the collection and input on an annual basis of power plant performance data (unit-by-unit and aggregated data) into the WEC PGP database. The statistics will be collected for steam, nuclear, gas turbine and combined cycle, hydro and pump storage plant. WG2 will also oversee the ongoing development of the availability statistics database, including the contents, the required software, security issues and other important information. The report is divided into two sections: Thermal generating, combined cycle/co-generation, combustion turbine, hydro and pumped storage unavailability factors and availability statistics; and nuclear power generating units.

  9. Evaluation of multivariate statistical analyses for monitoring and prediction of processes in a seawater reverse osmosis desalination plant

    International Nuclear Information System (INIS)

    Kolluri, Srinivas Sahan; Esfahani, Iman Janghorban; Garikiparthy, Prithvi Sai Nadh; Yoo, Chang Kyoo

    2015-01-01

    Our aim was to analyze, monitor, and predict the outcomes of processes in a full-scale seawater reverse osmosis (SWRO) desalination plant using multivariate statistical techniques. Multivariate analysis of variance (MANOVA) was used to investigate the performance and efficiencies of two SWRO processes, namely, pore controllable fiber filter-reverse osmosis (PCF-SWRO) and sand filtration-ultra filtration-reverse osmosis (SF-UF-SWRO). Principal component analysis (PCA) was applied to monitor the two SWRO processes. PCA monitoring revealed that the SF-UF-SWRO process could be analyzed reliably with a low number of outliers and disturbances. Partial least squares (PLS) analysis was then conducted to predict which of the seven input parameters of feed flow rate, PCF/SF-UF filtrate flow rate, temperature of feed water, turbidity of feed, pH, reverse osmosis (RO) flow rate, and pressure had a significant effect on the outcome variables of permeate flow rate and concentration. Root mean squared errors (RMSEs) of the PLS models for permeate flow rates were 31.5 and 28.6 for the PCF-SWRO process and SF-UF-SWRO process, respectively, while RMSEs of permeate concentrations were 350.44 and 289.4, respectively. These results indicate that the SF-UF-SWRO process can be modeled more accurately than the PCF-SWRO process, because the RMSE values of permeate flow rate and concentration obtained using a PLS regression model of the SF-UF-SWRO process were lower than those obtained for the PCF-SWRO process.

  10. Evaluation of multivariate statistical analyses for monitoring and prediction of processes in a seawater reverse osmosis desalination plant

    Energy Technology Data Exchange (ETDEWEB)

    Kolluri, Srinivas Sahan; Esfahani, Iman Janghorban; Garikiparthy, Prithvi Sai Nadh; Yoo, Chang Kyoo [Kyung Hee University, Yongin (Korea, Republic of)]

    2015-08-15

    Our aim was to analyze, monitor, and predict the outcomes of processes in a full-scale seawater reverse osmosis (SWRO) desalination plant using multivariate statistical techniques. Multivariate analysis of variance (MANOVA) was used to investigate the performance and efficiencies of two SWRO processes, namely, pore controllable fiber filter-reverse osmosis (PCF-SWRO) and sand filtration-ultra filtration-reverse osmosis (SF-UF-SWRO). Principal component analysis (PCA) was applied to monitor the two SWRO processes. PCA monitoring revealed that the SF-UF-SWRO process could be analyzed reliably with a low number of outliers and disturbances. Partial least squares (PLS) analysis was then conducted to predict which of the seven input parameters of feed flow rate, PCF/SF-UF filtrate flow rate, temperature of feed water, turbidity of feed, pH, reverse osmosis (RO) flow rate, and pressure had a significant effect on the outcome variables of permeate flow rate and concentration. Root mean squared errors (RMSEs) of the PLS models for permeate flow rates were 31.5 and 28.6 for the PCF-SWRO process and SF-UF-SWRO process, respectively, while RMSEs of permeate concentrations were 350.44 and 289.4, respectively. These results indicate that the SF-UF-SWRO process can be modeled more accurately than the PCF-SWRO process, because the RMSE values of permeate flow rate and concentration obtained using a PLS regression model of the SF-UF-SWRO process were lower than those obtained for the PCF-SWRO process.

  11. Combining the Power of Statistical Analyses and Community Interviews to Identify Adoption Barriers for Stormwater Best-Management Practices

    Science.gov (United States)

    Hoover, F. A.; Bowling, L. C.; Prokopy, L. S.

    2015-12-01

    Urban stormwater is an ongoing management concern in municipalities of all sizes. In both combined and separated sewer systems, pollutants from stormwater runoff enter the natural waterway system during heavy rain events. Urban flooding during frequent and more intense storms is also a growing concern. Therefore, stormwater best-management practices (BMPs) are being implemented in efforts to reduce and manage stormwater pollution and overflow. The majority of BMP water quality studies focus on the small-scale, individual effects of the BMP and the change in water quality directly from the runoff of these infrastructures. At the watershed scale, it is difficult to establish statistically whether or not these BMPs are making a difference in water quality, given that watershed-scale monitoring is often costly and time consuming, relying on significant sources of funds, which a city may not have. Hence, there is a need to quantify the level of sampling needed to detect the water quality impact of BMPs at the watershed scale. In this study, a power analysis was performed on data from an urban watershed in Lafayette, Indiana, to determine the frequency of sampling required to detect a significant change in water quality measurements. Using the R platform, results indicate that detecting a significant change in watershed-level water quality would require hundreds of weekly measurements, even when improvement is present. The second part of this study investigates whether the difficulty in demonstrating water quality change represents a barrier to adoption of stormwater BMPs. Semi-structured interviews of community residents and organizations in Chicago, IL are being used to investigate residents' understanding of water quality and best management practices and to identify their attitudes and perceptions towards stormwater BMPs. Second-round interviews will examine how information on uncertainty in water quality improvements influences their BMP attitudes and perceptions.
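
    The kind of power analysis described, asking how many samples would be needed to detect a given water-quality change, can be approximated with statsmodels. The two-sample t-test framing and the effect sizes below are illustrative assumptions, not the study's R-based analysis.

        # How many samples per period are needed to detect a before/after difference?
        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        for d in (0.1, 0.2, 0.5):                 # Cohen's d (assumed effect sizes)
            n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
            print(f"d = {d}: about {n:.0f} samples per period")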

  12. Humans make efficient use of natural image statistics when performing spatial interpolation.

    Science.gov (United States)

    D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

    2013-12-16

    Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.

  13. Research Pearls: The Significance of Statistics and Perils of Pooling. Part 3: Pearls and Pitfalls of Meta-analyses and Systematic Reviews.

    Science.gov (United States)

    Harris, Joshua D; Brand, Jefferson C; Cote, Mark P; Dhawan, Aman

    2017-08-01

    Within the health care environment, there has been a recent and appropriate trend towards emphasizing the value of care provision. Reduced cost and higher quality improve the value of care. Quality is a challenging, heterogeneous, variably defined concept. At the core of quality is the patient's outcome, quantified by a vast assortment of subjective and objective outcome measures. There has been a recent evolution towards evidence-based medicine in health care, clearly elucidating the role of high-quality evidence across groups of patients and studies. Synthetic studies, such as systematic reviews and meta-analyses, are at the top of the evidence-based medicine hierarchy. Thus, these investigations may be the best potential source of guiding diagnostic, therapeutic, prognostic, and economic medical decision making. Systematic reviews critically appraise and synthesize the best available evidence to provide a conclusion statement (a "take-home point") in response to a specific answerable clinical question. A meta-analysis uses statistical methods to quantitatively combine data from single studies. Meta-analyses should be performed with high methodological quality homogenous studies (Level I or II) or evidence randomized studies, to minimize confounding variable bias. When it is known that the literature is inadequate or a recent systematic review has already been performed with a demonstration of insufficient data, then a new systematic review does not add anything meaningful to the literature. PROSPERO registration and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines assist authors in the design and conduct of systematic reviews and should always be used. Complete transparency of the conduct of the review permits reproducibility and improves fidelity of the conclusions. Pooling of data from overly dissimilar investigations should be avoided. This particularly applies to Level IV evidence, that is, noncomparative investigations

  14. Performance analyses of naval ships based on engineering level of simulation at the initial design stage

    Directory of Open Access Journals (Sweden)

    Dong-Hoon Jeong

    2017-07-01

    Naval ships are assigned many and varied missions. Their performance is critical for mission success and depends on the specifications of their components. This is why performance analyses of naval ships are required at the initial design stage. Since the design and construction of naval ships take a very long time and incur a huge cost, Modeling and Simulation (M&S) is an effective method for performance analyses. Thus, in this study, a simulation core is proposed to analyze the performance of naval ships considering their specifications. This simulation core can perform engineering-level simulations, considering the mathematical models for naval ships, such as maneuvering equations and passive sonar equations. Also, the simulation models of the simulation core follow the Discrete EVent system Specification (DEVS) and Discrete Time System Specification (DTSS) formalisms, so that simulations can progress over discrete events and discrete times. In addition, applying the DEVS and DTSS formalisms makes the structure of the simulation models flexible and reusable. To verify the applicability of this simulation core, it was applied to simulations for the performance analyses of a submarine in an Anti-SUrface Warfare (ASUW) mission. These simulations were composed of two scenarios. The first scenario, submarine diving, carried out maneuvering performance analysis by analyzing the pitch angle variation and depth variation of the submarine over time. The second scenario, submarine detection, carried out detection performance analysis by analyzing how well the sonar of the submarine resolves adjacent targets. The results of these simulations confirm that the simulation core of this study can be applied to the performance analyses of naval ships considering their specifications.

  15. Identification of novel risk factors for community-acquired Clostridium difficile infection using spatial statistics and geographic information system analyses.

    Directory of Open Access Journals (Sweden)

    Deverick J Anderson

    Full Text Available The rate of community-acquired Clostridium difficile infection (CA-CDI is increasing. While receipt of antibiotics remains an important risk factor for CDI, studies related to acquisition of C. difficile outside of hospitals are lacking. As a result, risk factors for exposure to C. difficile in community settings have been inadequately studied.To identify novel environmental risk factors for CA-CDI.We performed a population-based retrospective cohort study of patients with CA-CDI from 1/1/2007 through 12/31/2014 in a 10-county area in central North Carolina. 360 Census Tracts in these 10 counties were used as the demographic Geographic Information System (GIS base-map. Longitude and latitude (X, Y coordinates were generated from patient home addresses and overlaid to Census Tracts polygons using ArcGIS; ArcView was used to assess "hot-spots" or clusters of CA-CDI. We then constructed a mixed hierarchical model to identify environmental variables independently associated with increased rates of CA-CDI.A total of 1,895 unique patients met our criteria for CA-CDI. The mean patient age was 54.5 years; 62% were female and 70% were Caucasian. 402 (21% patient addresses were located in "hot spots" or clusters of CA-CDI (p<0.001. "Hot spot" census tracts were scattered throughout the 10 counties. After adjusting for clustering and population density, age ≥ 60 years (p = 0.03, race (<0.001, proximity to a livestock farm (0.01, proximity to farming raw materials services (0.02, and proximity to a nursing home (0.04 were independently associated with increased rates of CA-CDI.Our study is the first to use spatial statistics and mixed models to identify important environmental risk factors for acquisition of C. difficile and adds to the growing evidence that farm practices may put patients at risk for important drug-resistant infections.

  16. Japanese standard method for safety evaluation using best estimate code based on uncertainty and scaling analyses with statistical approach

    International Nuclear Information System (INIS)

    Mizokami, Shinya; Hotta, Akitoshi; Kudo, Yoshiro; Yonehara, Tadashi; Watada, Masayuki; Sakaba, Hiroshi

    2009-01-01

    Current licensing practice in Japan consists of using conservative boundary and initial conditions (BIC), assumptions and analytical codes. The safety analyses for licensing purposes are inherently deterministic. Therefore, conservative BIC and assumptions, such as single failure, must be employed for the analyses. However, using conservative analytical codes is not considered essential. The standard committee of the Atomic Energy Society of Japan (AESJ) drew up the standard for using best estimate codes for safety analyses in 2008, after three years of discussions reflecting recent domestic and international findings. (author)

  17. Global health business: the production and performativity of statistics in Sierra Leone and Germany.

    Science.gov (United States)

    Erikson, Susan L

    2012-01-01

    The global push for health statistics and electronic digital health information systems is about more than tracking health incidence and prevalence. It is also experienced on the ground as means to develop and maintain particular norms of health business, knowledge, and decision- and profit-making that are not innocent. Statistics make possible audit and accountability logics that undergird the management of health at a distance and that are increasingly necessary to the business of health. Health statistics are inextricable from their social milieus, yet as business artifacts they operate as if they are freely formed, objectively originated, and accurate. This article explicates health statistics as cultural forms and shows how they have been produced and performed in two very different countries: Sierra Leone and Germany. In both familiar and surprising ways, this article shows how statistics and their pursuit organize and discipline human behavior, constitute subject positions, and reify existing relations of power.

  18. Business Statistics: A Comparison of Student Performance in Three Learning Modes

    Science.gov (United States)

    Simmons, Gerald R.

    2014-01-01

    The purpose of this study was to compare the performance of three teaching modes and age groups of business statistics sections in terms of course exam scores. The research questions were formulated to determine the performance of the students within each teaching mode, to compare each mode in terms of exam scores, and to compare exam scores by…

  19. The CEO performance effect : Statistical issues and a complex fit perspective

    NARCIS (Netherlands)

    Blettner, D.P.; Chaddad, F.R.; Bettis, R.

    2012-01-01

    How CEOs affect strategy and performance is important to strategic management research. We show that sophisticated statistical analysis alone is problematic for establishing the magnitude and causes of CEO impact on performance. We discuss three problem areas that substantially distort the

  20. 7 CFR 98.3 - Analyses performed and locations of laboratories.

    Science.gov (United States)

    2010-01-01

    ... the special laboratory analyses rendered by the Science and Technology as a result of an agreement... [table fragment listing product/analysis pairs, e.g. Pork Sausage: fat, salt; Pork Sausage: fat, moisture; Pork Sausage: fat; Mil-P-44131A (Pork Steaks, Flaked)...] performed at any one of the Science and Technology (S&T) field laboratories as follows: (1) USDA, AMS...

  1. Career-span analyses of track performance: longitudinal data present a more optimistic view of age-related performance decline.

    Science.gov (United States)

    Young, Bradley W; Starkes, Janet L

    2005-01-01

    Sport scientists (Starkes, Weir, Singh, Hodges, & Kerr, 1999; Starkes, Weir, & Young, 2003) have suggested that prolonged training is critical for the maintenance of athletic performance even in the face of predicted age-related decline. This study used polynomial regression analyses to examine the relationship between age and running performance in the 1500 and 10,000 metre events. We compared the age and career-longitudinal performances for 15 male Canadian Masters athletes with a cross-sectional sample of performances at different ages. We hypothesized that the 30 years of uninterrupted training characteristic of this longitudinal sample would moderate the patterns of age-related decline (retention hypothesis); alternatively, the cross-sectional data were expected to demonstrate pronounced age-related decline (quadratic hypothesis). Investigators performed multimodel regression analyses on the age and performance data. Based on the absence (for longitudinal data) or presence (for the cross-sectional data) of significant quadratic components in second-order polynomial models, the authors found support for their respective hypotheses. The longitudinal data showed that running performance declined with age in a more linear fashion than did cross-sectional data. Graphical trends showed that the moderation of age-related decline appeared greater for the longitudinal 10 km performances than for the 1500m event.
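
    The linear-versus-quadratic comparison underlying the two hypotheses can be illustrated with ordinary polynomial fits. The snippet below uses synthetic age/performance data and simply compares first- and second-order fits; it does not reproduce the study's data or its model-selection procedure.

        # Compare first- and second-order polynomial fits of running time on age (synthetic data).
        import numpy as np

        rng = np.random.default_rng(5)
        age = np.arange(35, 66)                                       # Masters age range (assumed)
        time = 240 + 1.2 * (age - 35) + rng.normal(0, 4, age.size)    # 1500 m time in s, near-linear decline

        for degree in (1, 2):
            coefs = np.polyfit(age, time, degree)
            pred = np.polyval(coefs, age)
            r2 = 1 - np.sum((time - pred) ** 2) / np.sum((time - time.mean()) ** 2)
            print(f"degree {degree}: R^2 = {r2:.3f}, coefficients = {np.round(coefs, 3)}")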

  2. Adjusting the Adjusted χ²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted χ²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  3. The influence of uncertainties of measurements in laboratory performance evaluation by intercomparison program in radionuclide analyses of environmental samples

    International Nuclear Information System (INIS)

    Tauhata, L.; Vianna, M.E.; Oliveira, A.E. de; Clain, A.F.; Ferreira, A.C.M.; Bernardes, E.M.

    2000-01-01

    The accuracy and precision of results of radionuclide analyses in environmental samples are widely demanded internationally due to their consequences for the decision processes coupled to the evaluation of environmental pollution, impact, and internal and external population exposure. These measurement characteristics of the laboratories can be shown clearly using intercomparison data, owing to the existence of a reference value and the need for three determinations for each analysis. In intercomparison studies, accuracy in radionuclide assays in low-level environmental samples has usually been the main focus in performance evaluation, and it can be estimated by taking into account the deviation between the experimental laboratory mean value and the reference value. The laboratory repeatability of measurements, or their standard deviation, is seldom included in performance evaluation. In order to show the influence of the uncertainties on performance evaluation of the laboratories, data from 22 intercomparison runs, which distributed 790 spiked environmental samples to 20 Brazilian participant laboratories, were compared, using the U.S. EPA 'Normalised Standard Deviation' as the statistical criterion for performance evaluation. This criterion mainly takes into account the laboratory accuracy; the evaluation was then repeated using the same data classified by a normalised standard deviation modified by a weight factor that includes the individual laboratory uncertainty. The results show a relative decrease in laboratory performance in each radionuclide assay: 1.8% for 65Zn, 2.8% for 40K, 3.4% for 60Co, 3.7% for 134Cs, 4.0% for 137Cs, 4.4% for Th and natural U, 4.5% for 3H, 6.3% for 133Ba, 8.6% for 90Sr, 10.6% for Gross Alpha, 10.9% for 106Ru, 11.1% for 226Ra, 11.5% for Gross Beta and 13.6% for 228Ra. The changes in the parameters of the statistical distribution function were negligible and the distribution remained of Gaussian type for all radionuclides analysed. Data analyses in terms of

  4. Nursing students' attitudes toward statistics: Effect of a biostatistics course and association with examination performance.

    Science.gov (United States)

    Kiekkas, Panagiotis; Panagiotarou, Aliki; Malja, Alvaro; Tahirai, Daniela; Zykai, Rountina; Bakalis, Nick; Stefanopoulos, Nikolaos

    2015-12-01

    Although statistical knowledge and skills are necessary for promoting evidence-based practice, health sciences students have expressed anxiety about statistics courses, which may hinder their learning of statistical concepts. To evaluate the effects of a biostatistics course on nursing students' attitudes toward statistics and to explore the association between these attitudes and their performance in the course examination. One-group quasi-experimental pre-test/post-test design. Undergraduate nursing students of the fifth or higher semester of studies, who attended a biostatistics course. Participants were asked to complete the pre-test and post-test forms of The Survey of Attitudes Toward Statistics (SATS)-36 scale at the beginning and end of the course respectively. Pre-test and post-test scale scores were compared, while correlations between post-test scores and participants' examination performance were estimated. Among 156 participants, post-test scores of the overall SATS-36 scale and of the Affect, Cognitive Competence, Interest and Effort components were significantly higher than pre-test ones, indicating that the course was followed by more positive attitudes toward statistics. Among 104 students who participated in the examination, higher post-test scores of the overall SATS-36 scale and of the Affect, Difficulty, Interest and Effort components were significantly but weakly correlated with higher examination performance. Students' attitudes toward statistics can be improved through appropriate biostatistics courses, while positive attitudes contribute to higher course achievements and possibly to improved statistical skills in later professional life. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. FY01 Supplemental Science and Performance Analysis: Volume 1, Scientific Bases and Analyses

    International Nuclear Information System (INIS)

    Bodvarsson, G.S.; Dobson, David

    2001-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S and ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S and ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  6. FY01 Supplemental Science and Performance Analysis: Volume 1,Scientific Bases and Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bodvarsson, G.S.; Dobson, David

    2001-05-30

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S&ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S&ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S&ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  7. An innovative statistical approach for analysing non-continuous variables in environmental monitoring: assessing temporal trends of TBT pollution.

    Science.gov (United States)

    Santos, José António; Galante-Oliveira, Susana; Barroso, Carlos

    2011-03-01

    The current work presents an innovative statistical approach to model ordinal variables in environmental monitoring studies. An ordinal variable has values that can only be compared as "less", "equal" or "greater" and it is not possible to have information about the size of the difference between two particular values. The ordinal variable examined in this study is the vas deferens sequence (VDS) used in imposex (superimposition of male sexual characters onto prosobranch females) field assessment programmes for monitoring tributyltin (TBT) pollution. The statistical methodology presented here is the ordered logit regression model. It assumes that the VDS is an ordinal variable whose values correspond to a process of imposex development that can be considered continuous in both biological and statistical senses and can be described by a latent non-observable continuous variable. This model was applied to the case study of Nucella lapillus imposex monitoring surveys conducted along the Portuguese coast between 2003 and 2008 to evaluate the temporal evolution of TBT pollution in this country. In order to produce more reliable conclusions, the proposed model includes covariates that may influence the imposex response besides TBT (e.g. the shell size). The model also provides an analysis of the environmental risk associated with TBT pollution by estimating the probability of the occurrence of females with VDS ≥ 2 in each year, according to OSPAR criteria. We consider that the proposed application of this statistical methodology has great potential in environmental monitoring whenever there is the need to model variables that can only be assessed through an ordinal scale of values.
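
    A minimal sketch of an ordered logit model for a VDS-like ordinal response is shown below, assuming statsmodels >= 0.13 (OrderedModel). The data, covariates and effect sizes are invented for illustration and are not taken from the study; the final step estimates the probability of VDS >= 2, mirroring the OSPAR-style risk criterion mentioned above.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "year": rng.integers(2003, 2009, n),     # survey year
    "shell_size": rng.normal(25.0, 4.0, n),  # covariate, e.g. shell height in mm
})
# Latent imposex "intensity": decreasing over time, with a size effect
latent = -0.5 * (df["year"] - 2003) + 0.1 * df["shell_size"] + rng.logistic(size=n)
df["vds"] = pd.cut(latent, bins=[-np.inf, 0.0, 1.5, 3.0, np.inf], labels=[0, 1, 2, 3])

model = OrderedModel(df["vds"], df[["year", "shell_size"]], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())

# Probability of VDS >= 2 per observation, then averaged by year
probs = np.asarray(res.predict(df[["year", "shell_size"]]))   # shape (n, 4)
p_ge2 = pd.Series(probs[:, 2:].sum(axis=1), index=df.index)
print(p_ge2.groupby(df["year"]).mean())
```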

  8. Guided waves based SHM systems for composites structural elements: statistical analyses finalized at probability of detection definition and assessment

    Science.gov (United States)

    Monaco, E.; Memmolo, V.; Ricci, F.; Boffa, N. D.; Maio, L.

    2015-03-01

    Maintenance approaches based on sensorised structures and Structural Health Monitoring (SHM) systems have for many years been among the most promising innovations in aerostructures, especially where composite materials (fibre-reinforced resins) are concerned. Layered materials still suffer from drastic reductions of maximum allowable stress values during the design phase, as well as from costly and recurrent inspections during the life-cycle phase, which prevent their structural and economic potential from being fully exploited in today's aircraft. These penalising measures are necessary mainly to account for undetected hidden flaws within the layered sequence (delaminations) or in bonded areas (partial disbonds). To relax design and maintenance constraints, a system based on sensors permanently installed on the structure to detect and locate such flaws (an SHM system) can be considered, once its effectiveness and reliability have been statistically demonstrated via a rigorous Probability of Detection (POD) function definition and evaluation. This paper presents an experimental approach with a statistical procedure for evaluating the detection threshold of a guided-wave-based SHM system aimed at delamination detection on a typical composite layered wing panel. The experimental tests are mainly oriented to characterising the statistical distribution of measurements and damage metrics, as well as the system detection capability. Numerical simulation cannot replace the part of the experimental tests aimed at POD, where the noise in the system response is crucial. Results of the experiments are presented and analysed.
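
    One common way to turn such detection data into a POD curve is a hit/miss logistic regression of the detection outcome against flaw size; the sketch below is a generic illustration of that idea, not the procedure used in the paper. The data, the log-size link and the 90% detectability criterion (a90) are all assumptions made for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
flaw_size = rng.uniform(5.0, 40.0, 200)                 # delamination size, mm
p_true = 1.0 / (1.0 + np.exp(-(flaw_size - 15.0) / 3.0))
detected = rng.binomial(1, p_true)                      # 1 = hit, 0 = miss

X = sm.add_constant(np.log(flaw_size))                  # POD is often modelled vs log size
fit = sm.Logit(detected, X).fit(disp=False)

sizes = np.linspace(5.0, 40.0, 400)
pod = fit.predict(sm.add_constant(np.log(sizes)))
a90 = sizes[np.argmax(pod >= 0.90)]                     # smallest size with POD >= 90%
print(f"estimated a90 = {a90:.1f} mm")
```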

  9. Comparative analyses on dynamic performances of photovoltaic–thermal solar collectors integrated with phase change materials

    International Nuclear Information System (INIS)

    Su, Di; Jia, Yuting; Alva, Guruprasad; Liu, Lingkun; Fang, Guiyin

    2017-01-01

    Highlights: • A dynamic model of the photovoltaic–thermal collector with phase change material was developed. • Comparative analyses of the performance of the photovoltaic–thermal collector were carried out. • The performance of the photovoltaic–thermal collector with phase change material was evaluated. • The upper phase change material mode can improve the performance of the photovoltaic–thermal collector. - Abstract: The operating conditions (especially temperature) of photovoltaic–thermal solar collectors have a significant influence on the dynamic performance of the hybrid photovoltaic–thermal solar collector. Only a small percentage of incoming solar radiation can be converted into electricity, and the rest is converted into heat. This heat leads to a decrease in efficiency of the photovoltaic module. In order to improve the performance of the hybrid photovoltaic–thermal solar collector, we performed comparative analyses on a hybrid photovoltaic–thermal solar collector integrated with phase change material. Electrical and thermal parameters like solar cell temperature, outlet temperature of air, electrical power, thermal power, electrical efficiency, thermal efficiency and overall efficiency are simulated and analyzed to evaluate the dynamic performance of the hybrid photovoltaic–thermal collector. It is found that the position of the phase change material layer in the photovoltaic–thermal collector has a significant effect on the performance of the photovoltaic–thermal collector. The results indicate that the upper phase change material mode in the photovoltaic–thermal collector can significantly improve the thermal and electrical performance of the photovoltaic–thermal collector. It is found that the overall efficiency of the photovoltaic–thermal collector in ‘upper phase change material’ mode is 10.7% higher than that in ‘no phase change material’ mode. Further, for a photovoltaic–thermal collector with upper phase change material, it is verified that 3 cm

  10. The Relationship between Test Anxiety and Academic Performance of Students in Vital Statistics Course

    Directory of Open Access Journals (Sweden)

    Shirin Iranfar

    2013-12-01

    Introduction: Test anxiety is a common phenomenon among students and one of the problems of the educational system. The present study was conducted to investigate test anxiety in a vital statistics course and its association with the academic performance of students at Kermanshah University of Medical Sciences. This descriptive-analytical study included students of the nursing and midwifery, paramedicine and health faculties who had taken the vital statistics course and were selected through the census method. The Sarason questionnaire was used to assess test anxiety. Data were analyzed by descriptive and inferential statistics. The findings indicated no significant correlation between test anxiety and the score in the vital statistics course.

  11. Site-Specific Analyses for Demonstrating Compliance with 10 CFR 61 Performance Objectives - 12179

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, C.J.; Esh, D.W.; Yadav, P.; Carrera, A.G. [U.S. Nuclear Regulatory Commission, 11545 Rockville Pike, Rockville, MD 20852 (United States)

    2012-07-01

    The U.S. Nuclear Regulatory Commission (NRC) is proposing to amend its regulations at 10 CFR Part 61 to require low-level radioactive waste disposal facilities to conduct site-specific analyses to demonstrate compliance with the performance objectives in Subpart C. The amendments would require licensees to conduct site-specific analyses for protection of the public and inadvertent intruders as well as analyses for long-lived waste. The amendments would ensure protection of public health and safety, while providing flexibility to demonstrate compliance with the performance objectives, for current and potential future waste streams. NRC staff intends to submit proposed rule language and associated regulatory basis to the Commission for its approval in early 2012. The NRC staff also intends to develop associated guidance to accompany any proposed amendments. The guidance is intended to supplement existing low-level radioactive waste guidance on issues pertinent to conducting site-specific analyses to demonstrate compliance with the performance objectives. The guidance will facilitate implementation of the proposed amendments by licensees and assist competent regulatory authorities in reviewing the site-specific analyses. Specifically, the guidance provides staff recommendations on general considerations for the site-specific analyses, modeling issues for assessments to demonstrate compliance with the performance objectives including the performance assessment, intruder assessment, stability assessment, and analyses for long-lived waste. This paper describes the technical basis for changes to the rule language and the proposed guidance associated with implementation of the rule language. The NRC staff, per Commission direction, intends to propose amendments to 10 CFR Part 61 to require licensees to conduct site-specific analyses to demonstrate compliance with performance objectives for the protection of public health and the environment. The amendments would require a

  12. Review of radionuclide source terms used for performance-assessment analyses

    International Nuclear Information System (INIS)

    Barnard, R.W.

    1993-06-01

    Two aspects of the radionuclide source terms used for total-system performance assessment (TSPA) analyses have been reviewed. First, a detailed radionuclide inventory (i.e., one in which the reactor type, decay, and burnup are specified) is compared with the standard source-term inventory used in prior analyses. The latter assumes a fixed ratio of pressurized-water reactor (PWR) to boiling-water reactor (BWR) spent fuel, at specific amounts of burnup and at 10-year decay. TSPA analyses have been used to compare the simplified source term with the detailed one. The TSPA-91 analyses did not show a significant difference between the source terms. Second, the radionuclides used in source terms for TSPA aqueous-transport analyses have been reviewed to select ones that are representative of the entire inventory. It is recommended that two actinide decay chains be included (the 4n+2 "uranium" and 4n+3 "actinium" decay series), since these include several radionuclides that have potentially important release and dose characteristics. In addition, several fission products are recommended for the same reason. The choice of radionuclides should be influenced by other parameter assumptions, such as the solubility and retardation of the radionuclides.

  13. Dry critical experiments and analyses performed in support of the Topaz-2 Safety Program

    International Nuclear Information System (INIS)

    Pelowitz, D.B.; Sapir, J.; Glushkov, E.S.; Ponomarev-Stepnoi, N.N.; Bubelev, V.G.; Kompanietz, G.B.; Krutov, A.M.; Polyakov, D.N.; Loynstev, V.A.

    1994-01-01

    In December 1991, the Strategic Defense Initiative Organization decided to investigate the possibility of launching a Russian Topaz-2 space nuclear power system. Functional safety requirements developed for the Topaz mission mandated that the reactor remain subcritical when flooded and immersed in water. Initial experiments and analyses performed in Russia and the United States indicated that the reactor could potentially become supercritical in several water- or sand-immersion scenarios. Consequently, a series of critical experiments was performed on the Narciss M-II facility at the Kurchatov Institute to measure the reactivity effects of water and sand immersion, to quantify the effectiveness of reactor modifications proposed to preclude criticality, and to benchmark the calculational methods and nuclear data used in the Topaz-2 safety analyses. In this paper we describe the Narciss M-II experimental configurations along with the associated calculational models and methods. We also present and compare the measured and calculated results for the dry experimental configurations

  14. Analyses to demonstrate the thermal performance of the CASTOR KN12

    International Nuclear Information System (INIS)

    Diersch, R.; Weiss, M.; Tso, C.F.; Powell, D.; Choy, B.I.; Lee, H.Y.

    2004-01-01

    The CASTOR® KN-12 is a new cask design of GNB for dry and wet transportation of up to 12 PWR spent nuclear fuel assemblies in Korea. It complies with the requirements of 10 CFR 71 [1] and IAEA ST-1 [2] for TYPE B(U)F packages. It received its transport license from the Korean Competent Authority KINS in July 2002 and is now in use in South Korea. Demonstration of the cask's compliance with the regulatory requirements in the area of thermal performance has been carried out by a combination of testing carried out by Korea Atomic Energy Research Institute and analyses carried out by Arup. This paper describes the analyses to demonstrate the thermal performance of the cask and compliance with regulatory requirements under normal and hypothetical accident conditions of transport. Other aspects of the design of the CASTOR® KN12 are presented in other papers at this conference.

  15. Compilation of Quality Assurance Documentation for Analyses Performed for the Resumption of Transient Testing Environmental Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Schafer, Annette L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Sondrup, A. Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2013-11-01

    This is a companion document to the analyses performed in support of the environmental assessment for the Resumption of Transient Fuels and Materials Testing. It is provided to allow transparency of the supporting calculations. It provides computer code input and output. The basis for the calculations is documented separately in INL (2013) and is referenced, as appropriate. Spreadsheets used to manipulate the code output are not provided.

  16. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  17. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hoffmeyer, P.

    Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull...
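
    As an illustration of the kind of distribution fitting described above, the sketch below fits Normal, Lognormal and Weibull models to a simulated strength sample and compares goodness of fit with a Kolmogorov-Smirnov statistic. The sample size matches the abstract, but the strength values, parameters and the use of scipy are assumptions made for the example, not the study's procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated bending strengths (MPa) for roughly the sample size quoted above
strength = stats.weibull_min.rvs(c=3.5, scale=45.0, size=6700, random_state=rng)

candidates = {
    "Normal": stats.norm,
    "Lognormal": stats.lognorm,
    "2-parameter Weibull": stats.weibull_min,   # location fixed at zero
    "3-parameter Weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    fixed = {"floc": 0} if name in ("Lognormal", "2-parameter Weibull") else {}
    params = dist.fit(strength, **fixed)
    ks = stats.kstest(strength, dist.cdf, args=params)
    print(f"{name:22s} params={np.round(params, 3)}  KS statistic={ks.statistic:.4f}")
```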

  18. Performance assessment analyses unique to Department of Energy spent nuclear fuel

    International Nuclear Information System (INIS)

    Loo, H.H.; Duguid, J.J.

    2000-01-01

    This paper describes the iterative process of grouping and performance assessment that has led to the current grouping of the U.S. Department of Energy (DOE) spent nuclear fuel (SNF). The unique sensitivity analyses that form the basis for incorporating DOE fuel into the total system performance assessment (TSPA) base case model are described. In addition, the chemistry that results from dissolution of DOE fuel and high level waste (HLW) glass in a failed co-disposal package, and the effects of disposal of selected DOE SNF in high integrity cans are presented

  19. Effects of Concept Mapping Strategy on Learning Performance in Business and Economics Statistics

    Science.gov (United States)

    Chiou, Chei-Chang

    2009-01-01

    A concept map (CM) is a hierarchically arranged, graphic representation of the relationships among concepts. Concept mapping (CMING) is the process of constructing a CM. This paper examines whether a CMING strategy can be useful in helping students to improve their learning performance in a business and economics statistics course. A single…

  20. Exploring Statistics Anxiety: Contrasting Mathematical, Academic Performance and Trait Psychological Predictors

    Science.gov (United States)

    Bourne, Victoria J.

    2018-01-01

    Statistics anxiety is experienced by a large number of psychology students, and previous research has examined a range of potential correlates, including academic performance, mathematical ability and psychological predictors. These varying predictors are often considered separately, although there may be shared variance between them. In the…

  1. Flipping the Classroom and Student Performance in Advanced Statistics: Evidence from a Quasi-Experiment

    Science.gov (United States)

    Touchton, Michael

    2015-01-01

    I administer a quasi-experiment using undergraduate political science majors in statistics classes to evaluate whether "flipping the classroom" (the treatment) alters students' applied problem-solving performance and satisfaction relative to students in a traditional classroom environment (the control). I also assess whether general…

  2. Changes in Math Prerequisites and Student Performance in Business Statistics: Do Math Prerequisites Really Matter?

    OpenAIRE

    Jeffrey J. Green; Courtenay C. Stone; Abera Zegeye; Thomas A. Charles

    2007-01-01

    We use a binary probit model to assess the impact of several changes in math prerequisites on student performance in an undergraduate business statistics course. While the initial prerequisites did not necessarily provide students with the necessary math skills, our study, the first to examine the effect of math prerequisite changes, shows that these changes were deleterious to student performance. Our results helped convince the College of Business to change the math prerequisite again begin...
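
    A binary probit model of the kind named above can be sketched as follows; the data set, covariates and coefficients are entirely hypothetical and only illustrate how such a model is typically fitted (here with statsmodels), not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "new_prereq": rng.integers(0, 2, n),   # 1 = enrolled under the revised math prerequisite
    "gpa": rng.normal(3.0, 0.4, n),        # prior grade point average
})
latent = -6.5 + 2.5 * df["gpa"] - 0.4 * df["new_prereq"] + rng.normal(size=n)
df["passed"] = (latent > 0).astype(int)    # 1 = acceptable grade in business statistics

X = sm.add_constant(df[["new_prereq", "gpa"]])
res = sm.Probit(df["passed"], X).fit(disp=False)
print(res.summary())
print(res.get_margeff().summary())         # average marginal effects of each regressor
```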

  3. Does bisphenol A induce superfeminization in Marisa cornuarietis? Part II: toxicity test results and requirements for statistical power analyses.

    Science.gov (United States)

    Forbes, Valery E; Aufderheide, John; Warbritton, Ryan; van der Hoeven, Nelly; Caspers, Norbert

    2007-03-01

    This study presents results of the effects of bisphenol A (BPA) on adult egg production, egg hatchability, egg development rates and juvenile growth rates in the freshwater gastropod, Marisa cornuarietis. We observed no adult mortality, substantial inter-snail variability in reproductive output, and no effects of BPA on reproduction during 12 weeks of exposure to 0, 0.1, 1.0, 16, 160 or 640 microg/L BPA. We observed no effects of BPA on egg hatchability or timing of egg hatching. Juveniles showed good growth in the control and all treatments, and there were no significant effects of BPA on this endpoint. Our results do not support previous claims of enhanced reproduction in Marisa cornuarietis in response to exposure to BPA. Statistical power analysis indicated high levels of inter-snail variability in the measured endpoints and highlighted the need for sufficient replication when testing treatment effects on reproduction in M. cornuarietis with adequate power.
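
    The closing point about replication and statistical power can be illustrated with a simple prospective power calculation. The sketch below assumes a two-sample t-test framing with an invented control mean, coefficient of variation and target effect size; it is not the power analysis reported in the paper.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

control_mean = 100.0   # eggs per female per week (hypothetical)
cv = 0.45              # coefficient of variation reflecting high inter-snail variability
reduction = 0.20       # smallest reduction considered biologically relevant

effect_size = (control_mean * reduction) / (control_mean * cv)   # Cohen's d
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80,
                                          alternative="two-sided")
print(f"about {int(np.ceil(n_per_group))} snails per treatment group")
```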

  4. Quantitative X-ray Map Analyser (Q-XRMA): A new GIS-based statistical approach to Mineral Image Analysis

    Science.gov (United States)

    Ortolano, Gaetano; Visalli, Roberto; Godard, Gaston; Cirrincione, Rosolino

    2018-06-01

    We present a new ArcGIS®-based tool developed in the Python programming language for calibrating EDS/WDS X-ray element maps, with the aim of acquiring quantitative information of petrological interest. The calibration procedure is based on a multiple linear regression technique that takes into account interdependence among elements and is constrained by the stoichiometry of minerals. The procedure requires an appropriate number of spot analyses for use as internal standards and provides several test indexes for a rapid check of calibration accuracy. The code is based on an earlier image-processing tool designed primarily for classifying minerals in X-ray element maps; the original Python code has now been enhanced to yield calibrated maps of mineral end-members or the chemical parameters of each classified mineral. The semi-automated procedure can be used to extract a dataset that is automatically stored within queryable tables. As a case study, the software was applied to an amphibolite-facies garnet-bearing micaschist. The calibrated images obtained for both anhydrous (i.e., garnet and plagioclase) and hydrous (i.e., biotite) phases show a good fit with corresponding electron microprobe analyses. This new GIS-based tool package can thus find useful application in petrology and materials science research. Moreover, the huge quantity of data extracted opens new opportunities for the development of a thin-section microchemical database that, using a GIS platform, can be linked with other major global geoscience databases.
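
    The calibration idea described above (regressing microprobe spot analyses on raw X-ray counts, then applying the fitted model pixel by pixel) can be sketched roughly as follows. Array shapes, element choices and coefficients are invented, and plain least squares stands in for the stoichiometry-constrained multiple regression used by the actual tool.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Raw X-ray counts for three elements at 20 internal-standard spots (columns: Fe, Mg, Ca)
counts_at_spots = rng.uniform(100, 2000, size=(20, 3))
# Corresponding FeO wt% from electron-microprobe spot analyses (synthetic relationship)
feo_wt = 0.010 * counts_at_spots[:, 0] + 0.002 * counts_at_spots[:, 1] + rng.normal(0, 0.3, 20)

# Multiple regression allows for interdependence among the element signals
reg = LinearRegression().fit(counts_at_spots, feo_wt)

# Apply the calibration to every pixel of a (here 200 x 200) element map
count_maps = rng.uniform(100, 2000, size=(200 * 200, 3))
feo_map = reg.predict(count_maps).reshape(200, 200)
print("calibrated FeO map range (wt%):", feo_map.min().round(2), "-", feo_map.max().round(2))
```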

  5. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  6. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  7. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes

  8. Performance of statistical process control methods for regional surgical site infection surveillance: a 10-year multicentre pilot study.

    Science.gov (United States)

    Baker, Arthur W; Haridy, Salah; Salem, Joseph; Ilieş, Iulian; Ergai, Awatef O; Samareh, Aven; Andrianas, Nicholas; Benneyan, James C; Sexton, Daniel J; Anderson, Deverick J

    2017-11-24

    Traditional strategies for surveillance of surgical site infections (SSI) have multiple limitations, including delayed and incomplete outbreak detection. Statistical process control (SPC) methods address these deficiencies by combining longitudinal analysis with graphical presentation of data. We performed a pilot study within a large network of community hospitals to evaluate performance of SPC methods for detecting SSI outbreaks. We applied conventional Shewhart and exponentially weighted moving average (EWMA) SPC charts to 10 previously investigated SSI outbreaks that occurred from 2003 to 2013. We compared the results of SPC surveillance to the results of traditional SSI surveillance methods. Then, we analysed the performance of modified SPC charts constructed with different outbreak detection rules, EWMA smoothing factors and baseline SSI rate calculations. Conventional Shewhart and EWMA SPC charts both detected 8 of the 10 SSI outbreaks analysed, in each case prior to the date of traditional detection. Among detected outbreaks, conventional Shewhart chart detection occurred a median of 12 months prior to outbreak onset and 22 months prior to traditional detection. Conventional EWMA chart detection occurred a median of 7 months prior to outbreak onset and 14 months prior to traditional detection. Modified Shewhart and EWMA charts additionally detected several outbreaks earlier than conventional SPC charts. Shewhart and EWMA charts had low false-positive rates when used to analyse separate control hospital SSI data. Our findings illustrate the potential usefulness and feasibility of real-time SPC surveillance of SSI to rapidly identify outbreaks and improve patient safety. Further study is needed to optimise SPC chart selection and calculation, statistical outbreak detection rules and the process for reacting to signals of potential outbreaks. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights
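
    A simplified sketch of the two conventional chart types mentioned above, applied to a made-up monthly SSI rate series, is given below. The baseline period, 3-sigma limits and EWMA smoothing factor of 0.2 are illustrative choices and not the parameters evaluated in the study.

```python
import numpy as np

rng = np.random.default_rng(5)
procedures = rng.integers(80, 120, 48)                  # monthly procedure counts
rates = rng.binomial(procedures, 0.02) / procedures     # observed monthly SSI rates
rates[36:] += 0.03                                      # simulated outbreak in year 4

p_bar = rates[:24].mean()                               # baseline from the first 2 years
sigma = np.sqrt(p_bar * (1 - p_bar) / procedures)

# Shewhart p-chart: signal when a point exceeds the 3-sigma upper control limit
shewhart_signals = np.where(rates > p_bar + 3 * sigma)[0]

# EWMA chart with smoothing factor lambda = 0.2
lam, ewma, ewma_signals = 0.2, p_bar, []
for month, r in enumerate(rates):
    ewma = lam * r + (1 - lam) * ewma
    limit = p_bar + 3 * sigma[month] * np.sqrt(lam / (2 - lam))
    if ewma > limit:
        ewma_signals.append(month)

print("Shewhart signals in months:", shewhart_signals)
print("EWMA signals in months:", ewma_signals)
```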

  9. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Tappen, J. J.; Wasiolek, M. A.; Wu, D. W.; Schmitt, J. F.; Smith, A. J.

    2002-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations in which the NRC adopted the standard will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities.

  10. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Jeff Tappen; M.A. Wasiolek; D.W. Wu; J.F. Schmitt

    2001-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations in which the NRC adopted the standard will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities.

  11. Total System Performance Assessment Sensitivity Analyses for Final Nuclear Regulatory Commission Regulations

    International Nuclear Information System (INIS)

    Bechtel SAIC Company

    2001-01-01

    This Letter Report presents the results of supplemental evaluations and analyses designed to assess long-term performance of the potential repository at Yucca Mountain. The evaluations were developed in the context of the Nuclear Regulatory Commission (NRC) final public regulation, or rule, 10 CFR Part 63 (66 FR 55732 [DIRS 156671]), which was issued on November 2, 2001. This Letter Report addresses the issues identified in the Department of Energy (DOE) technical direction letter dated October 2, 2001 (Adams 2001 [DIRS 156708]). The main objective of this Letter Report is to evaluate performance of the potential Yucca Mountain repository using assumptions consistent with performance-assessment-related provisions of 10 CFR Part 63. The incorporation of the final Environmental Protection Agency (EPA) standard, 40 CFR Part 197 (66 FR 32074 [DIRS 155216]), and the analysis of the effect of the 40 CFR Part 197 EPA final rule on long-term repository performance are presented in the Total System Performance Assessment--Analyses for Disposal of Commercial and DOE Waste Inventories at Yucca Mountain--Input to Final Environmental Impact Statement and Site Suitability Evaluation (BSC 2001 [DIRS 156460]), referred to hereafter as the FEIS/SSE Letter Report. The Total System Performance Assessment (TSPA) analyses conducted and documented prior to promulgation of the NRC final rule 10 CFR Part 63 (66 FR 55732 [DIRS 156671]) were based on the NRC proposed rule (64 FR 8640 [DIRS 101680]). Slight differences exist between the NRC's proposed and final rules which were not within the scope of the FEIS/SSE Letter Report (BSC 2001 [DIRS 156460]), the Preliminary Site Suitability Evaluation (PSSE) (DOE 2001 [DIRS 155743]), and supporting documents for these reports. These differences include (1) the possible treatment of "unlikely" features, events and processes (FEPs) in evaluation of both the groundwater protection standard and the human-intrusion scenario of the individual

  12. European downstream oil industry safety performance. Statistical summary of reported incidents 2009

    International Nuclear Information System (INIS)

    Burton, A.; Den Haan, K.H.

    2010-10-01

    The sixteenth such report by CONCAWE, this issue includes statistics on work-related personal injuries for the European downstream oil industry's own employees as well as contractors for the year 2009. Data were received from 33 companies representing more than 97% of the European refining capacity. Trends over the last sixteen years are highlighted and the data are also compared to similar statistics from related industries. In addition, this report presents the results of the first Process Safety Performance Indicator data gathering exercise amongst the CONCAWE membership.

  13. Joint statistics of partial sums of ordered exponential variates and performance of GSC RAKE receivers over rayleigh fading channel

    KAUST Repository

    Nam, Sungsik

    2011-08-01

    Spread spectrum receivers with generalized selection combining (GSC) RAKE reception were proposed and have been studied as alternatives to the classical two fundamental schemes: maximal ratio combining and selection combining because the number of diversity paths increases with the transmission bandwidth. Previous work on performance analyses of GSC RAKE receivers based on the signal to noise ratio focused on the development of methodologies to derive exact closed-form expressions for various performance measures. However, some open problems related to the performance evaluation of GSC RAKE receivers still remain to be solved such as the exact performance analysis of the capture probability and an exact assessment of the impact of self-interference on GSC RAKE receivers. The major difficulty in these problems is to derive some joint statistics of ordered exponential variates. With this motivation in mind, we capitalize in this paper on some new order statistics results to derive exact closed-form expressions for the capture probability and outage probability of GSC RAKE receivers subject to self-interference over independent and identically distributed Rayleigh fading channels, and compare it to that of partial RAKE receivers. © 2011 IEEE.
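
    The central quantities above (partial sums of ordered exponential path SNRs under GSC) are easy to explore by Monte Carlo, as in the sketch below. Parameter values are arbitrary, and the simulation is only a numerical cross-check of the kind of outage result the paper derives in closed form, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(6)
L, Lc = 6, 3            # resolvable paths, number of combined branches
gamma_bar = 2.0         # average per-path SNR (linear scale)
gamma_th = 4.0          # outage threshold
trials = 200_000

# i.i.d. Rayleigh fading => per-path SNRs are i.i.d. exponential variates
paths = rng.exponential(gamma_bar, size=(trials, L))
ordered = np.sort(paths, axis=1)[:, ::-1]        # descending order statistics
gsc_snr = ordered[:, :Lc].sum(axis=1)            # partial sum of the ordered variates

outage_gsc = np.mean(gsc_snr < gamma_th)
outage_prake = np.mean(paths[:, :Lc].sum(axis=1) < gamma_th)   # partial RAKE: first Lc paths
print(f"GSC outage = {outage_gsc:.4f}, partial-RAKE outage = {outage_prake:.4f}")
```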

  14. Statistical and molecular analyses of evolutionary significance of red-green color vision and color blindness in vertebrates.

    Science.gov (United States)

    Yokoyama, Shozo; Takenaka, Naomi

    2005-04-01

    Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at approximately 510-540 nm (green- or middle wavelength-sensitive [MWS] pigments) and at approximately 560 nm (red- or long wavelength-sensitive [LWS] pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.

  15. Quantifying Trace Amounts of Aggregates in Biopharmaceuticals Using Analytical Ultracentrifugation Sedimentation Velocity: Bayesian Analyses and F Statistics.

    Science.gov (United States)

    Wafer, Lucas; Kloczewiak, Marek; Luo, Yin

    2016-07-01

    Analytical ultracentrifugation-sedimentation velocity (AUC-SV) is often used to quantify high molar mass species (HMMS) present in biopharmaceuticals. Although these species are often present in trace quantities, they have received significant attention due to their potential immunogenicity. Commonly, AUC-SV data is analyzed as a diffusion-corrected, sedimentation coefficient distribution, or c(s), using SEDFIT to numerically solve Lamm-type equations. SEDFIT also utilizes maximum entropy or Tikhonov-Phillips regularization to further allow the user to determine relevant sample information, including the number of species present, their sedimentation coefficients, and their relative abundance. However, this methodology has several, often unstated, limitations, which may impact the final analysis of protein therapeutics. These include regularization-specific effects, artificial "ripple peaks," and spurious shifts in the sedimentation coefficients. In this investigation, we experimentally verified that an explicit Bayesian approach, as implemented in SEDFIT, can largely correct for these effects. Clear guidelines on how to implement this technique and interpret the resulting data, especially for samples containing micro-heterogeneity (e.g., differential glycosylation), are also provided. In addition, we demonstrated how the Bayesian approach can be combined with F statistics to draw more accurate conclusions and rigorously exclude artifactual peaks. Numerous examples with an antibody and an antibody-drug conjugate were used to illustrate the strengths and drawbacks of each technique.

  16. Ultimate compression after impact load prediction in graphite/epoxy coupons using neural network and multivariate statistical analyses

    Science.gov (United States)

    Gregoire, Alexandre David

    2011-07-01

    The goal of this research was to accurately predict the ultimate compressive load of impact-damaged graphite/epoxy coupons using a Kohonen self-organizing map (SOM) neural network and multivariate statistical regression analysis (MSRA). An optimized use of these data treatment tools allowed the generation of a simple, physically understandable equation that predicts the ultimate failure load of an impact-damaged coupon based uniquely on the acoustic emissions it emits at low proof loads. Acoustic emission (AE) data were collected using two 150 kHz resonant transducers which detected and recorded the AE activity given off during compression to failure of thirty-four impacted 24-ply bidirectional woven cloth laminate graphite/epoxy coupons. The AE quantification parameters duration, energy and amplitude for each AE hit were input to the Kohonen self-organizing map (SOM) neural network to accurately classify the material failure mechanisms present in the low proof load data. The number of failure mechanisms from the first 30% of the loading for twenty-four coupons was used to generate a linear prediction equation which yielded a worst-case ultimate load prediction error of 16.17%, just outside of the +/-15% B-basis allowables, which was the goal for this research. Particular emphasis was placed upon the noise removal process which was largely responsible for the accuracy of the results.

  17. Voxel-based statistical analysis of cerebral blood flow using Tc-99m ECD brain SPECT in patients with traumatic brain injury: group and individual analyses.

    Science.gov (United States)

    Shin, Yong Beom; Kim, Seong-Jang; Kim, In-Ju; Kim, Yong-Ki; Kim, Dong-Soo; Park, Jae Heung; Yeom, Seok-Ran

    2006-06-01

    Statistical parametric mapping (SPM) was applied to brain perfusion single photon emission computed tomography (SPECT) images in patients with traumatic brain injury (TBI) to investigate regional cerebral abnormalities compared to age-matched normal controls. Thirteen patients with TBI who underwent brain perfusion SPECT were included in this study (10 males, 3 females; mean age 39.8 +/- 18.2 years, range 21-74). SPM2 software implemented in MATLAB 5.3 was used for spatial pre-processing and analysis and to determine the quantitative differences between TBI patients and age-matched normal controls. Three large voxel clusters of significantly decreased cerebral blood perfusion were found in patients with TBI. The largest cluster comprised areas including the medial frontal gyrus (voxel number 3642, peak Z-values = 4.31 and 4.27, p = 0.000) in both hemispheres. The second largest cluster comprised areas including the cingulate gyrus and anterior cingulate gyrus of the left hemisphere (voxel number 381, peak Z-values = 3.67 and 3.62, p = 0.000). Other clusters were the parahippocampal gyrus (voxel number 173, peak Z-value = 3.40, p = 0.000) and hippocampus (voxel number 173, peak Z-value = 3.23, p = 0.001) in the left hemisphere. The false discovery rate (FDR) was less than 0.04. From this study, group and individual analyses of SPM2 could clearly identify the perfusion abnormalities of brain SPECT in patients with TBI. Group analysis of SPM2 showed a hypoperfusion pattern in areas including the medial frontal gyrus of both hemispheres and the cingulate gyrus, anterior cingulate gyrus, parahippocampal gyrus and hippocampus of the left hemisphere compared to age-matched normal controls. Also, the left parahippocampal gyrus and left hippocampus were additional hypoperfusion areas. However, these findings deserve further investigation in a larger number of patients to allow better validation of objective SPM analysis in patients with TBI.

  18. Performance in College Chemistry: a Statistical Comparison Using Gender and Jungian Personality Type

    Science.gov (United States)

    Greene, Susan V.; Wheeler, Henry R.; Riley, Wayne D.

    This study sorted college introductory chemistry students by gender and Jungian personality type. It recognized differences from the general population distribution and statistically compared the students' grades with their Jungian personality types. Data from 577 female students indicated that ESFP (extroverted, sensory, feeling, perceiving) and ENFP (extroverted, intuitive, feeling, perceiving) profiles performed poorly at statistically significant levels when compared with the distribution of females enrolled in introductory chemistry. The comparable analysis using data from 422 male students indicated that the poorly performing male profiles were ISTP (introverted, sensory, thinking, perceiving) and ESTP (extroverted, sensory, thinking, perceiving). ESTJ (extroverted, sensory, thinking, judging) female students withdrew from the course at a statistically significant level. For both genders, INTJ (introverted, intuitive, thinking, judging) students were the best performers. By examining the documented characteristics of Jungian profiles that correspond with poorly performing students in chemistry, one may more effectively assist the learning process and the retention of these individuals in the fields of natural science, engineering, and technology.

  19. Evaluating transient performance of servo mechanisms by analysing stator current of PMSM

    Science.gov (United States)

    Zhang, Qing; Tan, Luyao; Xu, Guanghua

    2018-02-01

    Smooth running and rapid response are the desired performance goals for the transient motions of servo mechanisms. Because of the uncertain and unobservable transient behaviour of servo mechanisms, it is difficult to evaluate their transient performance. Under the effects of electromechanical coupling, the stator current signals of a permanent-magnet synchronous motor (PMSM) potentially contain the performance information regarding servo mechanisms in use. In this paper, a novel method based on analysing the stator current of the PMSM is proposed for quantifying the transient performance. First, a vector control model is constructed to simulate the stator current behaviour in the transient processes of consecutive speed changes, consecutive load changes, and intermittent start-stops. It is discovered that the amplitude and frequency of the stator current are modulated by the transient load torque and motor speed, respectively. The stator currents under different performance conditions are also simulated and compared. Then, the stator current is processed using a local mean decomposition (LMD) algorithm to extract the instantaneous amplitude and instantaneous frequency. The sample entropy of the instantaneous amplitude, which reflects the complexity of the load torque variation, is calculated as a performance indicator of smooth running. The peak-to-peak value of the instantaneous frequency, which defines the range of the motor speed variation, is set as a performance indicator of rapid response. The proposed method is applied to both simulated data in an intermittent start-stops process and experimental data measured for a batch of servo turrets for turning lathes. The results show that the performance evaluations agree with the actual performance.
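
    The sample-entropy indicator mentioned above can be computed with a few lines of plain numpy, as in the sketch below. The embedding dimension, tolerance and synthetic amplitude signals are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D series with embedding dimension m and
    tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def match_count(dim):
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        total = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            total += np.sum(dist <= r) - 1          # exclude the self-match
        return total

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 2000)
smooth_amp = 1.0 + 0.1 * np.sin(2 * np.pi * 5 * t)              # steady load torque
rough_amp = smooth_amp + 0.2 * rng.standard_normal(t.size)      # fluctuating load torque
print("smooth running:", round(sample_entropy(smooth_amp), 3))
print("rough running :", round(sample_entropy(rough_amp), 3))
```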

  20. Orbitrap mass analyser for in situ characterisation of planetary environments: Performance evaluation of a laboratory prototype

    Science.gov (United States)

    Briois, Christelle; Thissen, Roland; Thirkell, Laurent; Aradj, Kenzi; Bouabdellah, Abdel; Boukrara, Amirouche; Carrasco, Nathalie; Chalumeau, Gilles; Chapelon, Olivier; Colin, Fabrice; Coll, Patrice; Cottin, Hervé; Engrand, Cécile; Grand, Noel; Lebreton, Jean-Pierre; Orthous-Daunay, François-Régis; Pennanech, Cyril; Szopa, Cyril; Vuitton, Véronique; Zapf, Pascal; Makarov, Alexander

    2016-10-01

    For decades of space exploration, mass spectrometry has proven to be a reliable instrumentation for the characterisation of the nature and energy of ionic and neutral, atomic and molecular species in the interplanetary medium and upper planetary atmospheres. It has been used as well to analyse the chemical composition of planetary and small bodies environments. The chemical complexity of these environments calls for the need to develop a new generation of mass spectrometers with significantly increased mass resolving power. The recently developed Orbitrap™ mass analyser at ultra-high resolution shows promising adaptability to space instrumentation, offering improved performances for in situ measurements. In this article, we report on our project named "Cosmorbitrap" aiming at demonstrating the adaptability of the Orbitrap technology for in situ space exploration. We present the prototype that was developed in the laboratory for demonstration of both technical feasibility and analytical capabilities. A set of samples containing elements with masses ranging from 9 to 208 u has been used to evaluate the performance of the analyser, in terms of mass resolving power (reaching 474,000 at m/z 9) and ability to discriminate between isobaric interferences, accuracy of mass measurement (below 15 ppm) and determination of relative isotopic abundances (below 5%) of various samples. We observe a good agreement between the results obtained with the prototype and those of a commercial instrument. As the background pressure is a key parameter for in situ exploration of atmosphere planetary bodies, we study the effect of background gas on the performance of the Cosmorbitrap prototype, showing an upper limit for N2 in our set-up at 10⁻⁸ mbar. The results demonstrate the strong potential to adapt this technology to space exploration.

  1. Statistical properties of interval mapping methods on quantitative trait loci location: impact on QTL/eQTL analyses

    Directory of Open Access Journals (Sweden)

    Wang Xiaoqiang

    2012-04-01

    Background: Quantitative trait loci (QTL) detection on a huge number of phenotypes, like eQTL detection on transcriptomic data, can be dramatically impaired by the statistical properties of interval mapping methods. One of these major outcomes is the high number of QTL detected at marker locations. The present study aims at identifying and specifying the sources of this bias, in particular in the case of analysis of data from outbred populations. Analytical developments were carried out in a backcross situation in order to specify the bias and to propose an algorithm to control it. The outbred population context was studied through simulated data sets in a wide range of situations. The likelihood ratio test was firstly analyzed under the "one QTL" hypothesis in a backcross population. Designs of sib families were then simulated and analyzed using the QTL Map software. On the basis of the theoretical results in backcross, parameters such as the population size, the density of the genetic map, the QTL effect and the true location of the QTL were taken into account under the "no QTL" and the "one QTL" hypotheses. A combination of two non-parametric tests - the Kolmogorov-Smirnov test and the Mann-Whitney-Wilcoxon test - was used in order to identify the parameters that affected the bias and to specify how much they influenced the estimation of QTL location. Results: A theoretical expression of the bias of the estimated QTL location was obtained for a backcross-type population. We demonstrated a common source of bias under the "no QTL" and the "one QTL" hypotheses and qualified the possible influence of several parameters. Simulation studies confirmed that the bias exists in outbred populations under both the hypotheses of "no QTL" and "one QTL" on a linkage group. The QTL location was systematically closer to marker locations than expected, particularly in the case of low QTL effect, small population size or low density of markers, i

  2. Using synthetic data to evaluate multiple regression and principal component analyses for statistical modeling of daily building energy consumption

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, T.A. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States)); Claridge, D.E. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States))

    1994-01-01

    Multiple regression modeling of monitored building energy use data is often faulted as a reliable means of predicting energy use on the grounds that multicollinearity between the regressor variables can lead both to improper interpretation of the relative importance of the various physical regressor parameters and to a model with unstable regressor coefficients. Principal component analysis (PCA) has the potential to overcome such drawbacks. While a few case studies have already attempted to apply this technique to building energy data, the objectives of this study were to make a broader evaluation of PCA and multiple regression analysis (MRA) and to establish guidelines under which one approach is preferable to the other. Four geographic locations in the US with different climatic conditions were selected and synthetic data sequences representative of daily energy use in large institutional buildings were generated in each location using a linear model with outdoor temperature, outdoor specific humidity and solar radiation as the three regression variables. MRA and PCA approaches were then applied to these data sets and their relative performances were compared. Conditions under which PCA seems to perform better than MRA were identified and preliminary recommendations on the use of either modeling approach formulated. (orig.)
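
    A toy version of the comparison described above is sketched below: ordinary multiple regression versus regression on principal components, applied to synthetic daily energy-use data driven by three correlated weather variables. The coefficients, noise levels and use of scikit-learn are assumptions made for the illustration, not the study's model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n_days = 365
temp = 20 + 10 * np.sin(np.linspace(0, 2 * np.pi, n_days)) + rng.normal(0, 2, n_days)
humidity = 0.008 + 0.0004 * temp + rng.normal(0, 0.001, n_days)   # collinear with temperature
solar = 200 + 8 * temp + rng.normal(0, 30, n_days)                # also collinear
X = np.column_stack([temp, humidity, solar])
energy = 500 + 30 * temp + 2000 * humidity + 0.5 * solar + rng.normal(0, 40, n_days)

# Ordinary multiple regression (coefficients can be unstable under multicollinearity)
mra = LinearRegression().fit(X, energy)

# Principal component regression on the two leading components of the scaled regressors
Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Xs)
pcr = LinearRegression().fit(pca.transform(Xs), energy)

print("MRA R^2:", round(mra.score(X, energy), 3))
print("PCR R^2:", round(pcr.score(pca.transform(Xs), energy), 3))
```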

  3. Phenomenological and statistical analyses of turbulence in forced convection with temperature-dependent viscosity under non-Boussinesq condition.

    Science.gov (United States)

    Yahya, S M; Anwer, S F; Sanghi, S

    2013-10-01

    In this work, Thermal Large Eddy Simulation (TLES) is performed to study the behavior of weakly compressible Newtonian fluids with anisotropic temperature-dependent viscosity in forced convection turbulent flow. A systematic analysis of variable-viscosity effects, isolated from gravity, with relevance to industrial cooling/heating applications is being carried out. A LES of a planar channel flow with significant heat transfer at a low Mach number was performed to study effects of fluid property variation on the near-wall turbulence structure. In this flow configuration the top wall is maintained at a higher temperature (T_hot) than the bottom wall (T_cold). The temperature ratio (R_θ = T_hot/T_cold) is fixed at 1.01, 2 and 3 to study the effects of property variations at low Mach number. Results indicate that average and turbulent fields undergo significant changes. Compared with isothermal flow with constant viscosity, we observe that turbulence is enhanced in the cold side of the channel, characterized by locally lower viscosity, whereas a decrease of turbulent kinetic energy is found at the hot wall. The turbulent structures near the cold wall are very short and densely populated vortices, but near the hot wall there seems to be a long streaky structure or large elongated vortices. Spectral study reveals that turbulence is completely suppressed at the hot side of the channel at a large temperature ratio because no inertial zone is obtained (i.e. the index of the Kolmogorov scaling law is zero) from the spectra in this region.

  4. Reader characteristics linked to detection of pulmonary nodules on radiographs: ROC vs. JAFROC analyses of performance

    Science.gov (United States)

    Kohli, Akshay; Robinson, John W.; Ryan, John; McEntee, Mark F.; Brennan, Patrick C.

    2011-03-01

    The purpose of this study is to explore whether reader characteristics are linked to heightened levels of diagnostic performance in chest radiology using receiver operating characteristic (ROC) and jackknife free response ROC (JAFROC) methodologies. A set of 40 postero-anterior chest radiographs was developed, of which 20 were abnormal, containing one or more simulated nodules of varying subtlety. Images were independently reviewed by 12 board-certified radiologists, including six chest specialists. Observer performance was measured in terms of ROC and JAFROC scores. For the ROC analysis, readers were asked to rate their degree of suspicion for the presence of nodules by using a confidence rating scale (1-6). JAFROC analysis required the readers to locate and rate as many suspicious areas as they wished using the same scale, and the resultant data were used to generate Az and FOM scores for the ROC and JAFROC analyses respectively. Using Pearson methods, scores of performance were correlated with 7 reader characteristics recorded using a questionnaire. JAFROC analysis showed that improved reader performance was significantly associated with reader characteristics such as chest specialty, hours spent reading chest radiographs per week and the number of chest radiographs read per year. Also, JAFROC is a more powerful predictor of performance as compared to ROC.

  5. Statistical modelling of networked human-automation performance using working memory capacity.

    Science.gov (United States)

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
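
    To make the modelling options concrete, the sketch below fits a linear model and a Gaussian process to hypothetical supervisory-performance data indexed by task load and working-memory capacity; the data, variable names and kernel settings are assumptions for illustration and are not the study's dataset.

```python
# Hedged sketch (hypothetical data): predicting a performance score from task load
# and working-memory (WM) capacity with a linear model and a Gaussian process,
# two of the three statistical approaches named in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
task_load = rng.uniform(1, 10, 80)
wm_capacity = rng.normal(0, 1, 80)
performance = 0.8 - 0.05 * task_load + 0.1 * wm_capacity \
              + 0.03 * task_load * wm_capacity + rng.normal(0, 0.05, 80)
X = np.column_stack([task_load, wm_capacity])

# Linear model: reliable near the sampled conditions.
linear = LinearRegression().fit(X, performance)

# Gaussian process: nonparametric prediction with uncertainty estimates.
kernel = RBF(length_scale=[2.0, 1.0]) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, performance)

x_new = np.array([[7.5, 0.5]])            # a new task-load / WM-capacity condition
mean, std = gp.predict(x_new, return_std=True)
print("linear prediction:", round(linear.predict(x_new)[0], 3))
print("GP prediction: %.3f +/- %.3f" % (mean[0], std[0]))
```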

  6. Sensitivity analyses of factors influencing CMAQ performance for fine particulate nitrate.

    Science.gov (United States)

    Shimadera, Hikari; Hayami, Hiroshi; Chatani, Satoru; Morino, Yu; Mori, Yasuaki; Morikawa, Tazuko; Yamaji, Kazuyo; Ohara, Toshimasa

    2014-04-01

    Improvement of air quality models is required so that they can be utilized to design effective control strategies for fine particulate matter (PM2.5). The Community Multiscale Air Quality modeling system was applied to the Greater Tokyo Area of Japan in winter 2010 and summer 2011. The model results were compared with observed concentrations of PM2.5 sulfate (SO4(2-)), nitrate (NO3(-)) and ammonium, and gaseous nitric acid (HNO3) and ammonia (NH3). The model approximately reproduced PM2.5 SO4(2-) concentration, but clearly overestimated PM2.5 NO3(-) concentration, which was attributed to overestimation of production of ammonium nitrate (NH4NO3). This study conducted sensitivity analyses of factors associated with the model performance for PM2.5 NO3(-) concentration, including temperature and relative humidity, emission of nitrogen oxides, seasonal variation of NH3 emission, HNO3 and NH3 dry deposition velocities, and heterogeneous reaction probability of dinitrogen pentoxide. Change in NH3 emission directly affected NH3 concentration, and substantially affected NH4NO3 concentration. Higher dry deposition velocities of HNO3 and NH3 led to substantial reductions of concentrations of the gaseous species and NH4NO3. Because uncertainties in NH3 emission and dry deposition processes are probably large, these processes may be key factors for improvement of the model performance for PM2.5 NO3(-). The Community Multiscale Air Quality modeling system clearly overestimated the concentration of fine particulate nitrate in the Greater Tokyo Area of Japan, which was attributed to overestimation of production of ammonium nitrate. Sensitivity analyses were conducted for factors associated with the model performance for nitrate. Ammonia emission and dry deposition of nitric acid and ammonia may be key factors for improvement of the model performance.

  7. Dry critical experiments and analyses performed in support of the TOPAZ-2 safety program

    International Nuclear Information System (INIS)

    Pelowitz, D.B.; Sapir, J.; Glushkov, E.S.; Ponomarev-Stepnoi, N.N.; Bubelev, V.G.; Kompanietz, G.B.; Krutov, A.M.; Polyakov, D.N.; Lobynstev, V.A.

    1995-01-01

    In December 1991, the Strategic Defense Initiative Organization decided to investigate the possibility of launching a Russian Topaz-2 space nuclear power system. Functional safety requirements developed for the Topaz mission mandated that the reactor remain subcritical when flooded and immersed in water. Initial experiments and analyses performed in Russia and the United States indicated that the reactor could potentially become supercritical in several water- or sand-immersion scenarios. Consequently, a series of critical experiments was performed on the Narciss M-II facility at the Kurchatov Institute to measure the reactivity effects of water and sand immersion, to quantify the effectiveness of reactor modifications proposed to preclude criticality, and to benchmark the calculational methods and nuclear data used in the Topaz-2 safety analyses. In this paper we describe the Narciss M-II experimental configurations along with the associated calculational models and methods. We also present and compare the measured and calculated results for the dry experimental configurations. copyright 1995 American Institute of Physics

  8. Disturbance rejection performance analyses of closed loop control systems by reference to disturbance ratio.

    Science.gov (United States)

    Alagoz, Baris Baykant; Deniz, Furkan Nur; Keles, Cemal; Tan, Nusret

    2015-03-01

    This study investigates the disturbance rejection capacity of closed loop control systems by means of the reference to disturbance ratio (RDR). The RDR analysis calculates the ratio of reference signal energy to disturbance signal energy at the system output and provides a quantitative evaluation of the disturbance rejection performance of control systems on the basis of communication channel limitations. Essentially, RDR provides a straightforward analytical method for the comparison and improvement of the implicit disturbance rejection capacity of closed loop control systems. Theoretical analyses demonstrate that the RDR of negative feedback closed loop control systems is determined by the energy spectral density of the controller transfer function. In this manner, the authors derived design criteria for specifications of the disturbance rejection performance of PID and fractional order PID (FOPID) controller structures. RDR spectra are calculated for investigation of the frequency dependence of disturbance rejection capacity, and spectral RDR analyses are carried out for PID and FOPID controllers. For the validation of theoretical results, simulation examples are presented. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
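
    As a numerical illustration of the spectral RDR idea (taking the abstract's statement that disturbance rejection is governed by the energy spectral density of the controller, and using |C(jω)|² as a simplified proxy rather than the paper's exact definition), the sketch below evaluates a PID controller over frequency with arbitrary example gains.

```python
# Illustrative sketch only: evaluate |C(jw)|^2 over frequency for a PID controller
# C(s) = kp + ki/s + kd*s, used here as an assumed proxy for the spectral RDR.
# Gains and frequency range are arbitrary example values.
import numpy as np

kp, ki, kd = 2.0, 1.0, 0.5
w = np.logspace(-2, 2, 200)            # frequency grid in rad/s
C = kp + ki / (1j * w) + kd * (1j * w)  # PID frequency response
rdr_spectrum = np.abs(C) ** 2           # assumed spectral RDR proxy

for freq in (0.01, 0.1, 1.0, 10.0):
    idx = np.argmin(np.abs(w - freq))
    print("w = %6.2f rad/s, |C(jw)|^2 = %10.2f" % (w[idx], rdr_spectrum[idx]))
```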

  9. Devising a New Model of Demand-Based Learning Integrated with Social Networks and Analyses of its Performance

    Directory of Open Access Journals (Sweden)

    Bekim Fetaji

    2018-02-01

    The focus of this research study is to devise a new model for demand-based learning that will be integrated with social networks such as Facebook and Twitter. The study investigates this by reviewing the published literature and by carrying out a case study analysis in order to examine the new model's analytical perspectives on practical implementation. The study focuses on analyzing demand-based learning and investigating how it can be improved by devising a specific model that incorporates social network use. Statistical analyses of the questionnaire results, addressing the research questions and hypotheses raised, showed that there is a need for introducing new models in the teaching process. The originality lies in the introduction of the social login approach to an educational environment; the approach is counted as a contribution towards developing a demand-based web application, which aims to modernize the educational pattern of communication, introduce the social login approach, and increase knowledge transfer as well as improve learners' performance and skills. Insights and recommendations are provided, argued and discussed.

  10. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data.

    Science.gov (United States)

    Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin

    2017-06-01

    Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, sample covariance may not be a precise estimate if the sample size is very limited, which is usually the case for data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https
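
    A rough sketch of the idea, not the authors' T2GA implementation, is given below: a Hotelling-type T² statistic for a pathway's mean protein log-ratios, with the covariance matrix assembled from externally supplied interaction confidence scores standing in for STRING/HitPredict values; all numbers are hypothetical.

```python
# Rough sketch (not the authors' T2GA code): a Hotelling-type T^2 statistic for a
# pathway's mean protein log-ratios, with the covariance built from hypothetical
# interaction confidence scores standing in for STRING/HitPredict values.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_proteins = 4, 5                     # small sample size, typical of proteomics
log_ratios = rng.normal(0.3, 0.2, (n_samples, n_proteins))

# Hypothetical symmetric confidence scores in [0, 1] between protein pairs.
conf = np.full((n_proteins, n_proteins), 0.4)
np.fill_diagonal(conf, 1.0)
sigma = 0.04 * conf                               # scale scores into a covariance matrix

mean_vec = log_ratios.mean(axis=0)                # deviation from a self-contained null of 0
t2 = n_samples * mean_vec @ np.linalg.inv(sigma) @ mean_vec
print("knowledge-based T^2 statistic:", round(float(t2), 2))
```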

  11. A multi-criteria evaluation system for marine litter pollution based on statistical analyses of OSPAR beach litter monitoring time series.

    Science.gov (United States)

    Schulz, Marcus; Neumann, Daniel; Fleet, David M; Matthies, Michael

    2013-12-01

    During the last decades, marine pollution with anthropogenic litter has become a worldwide major environmental concern. Standardized monitoring of litter since 2001 on 78 beaches selected within the framework of the Convention for the Protection of the Marine Environment of the North-East Atlantic (OSPAR) has been used to identify temporal trends of marine litter. Based on statistical analyses of this dataset a two-part multi-criteria evaluation system for beach litter pollution of the North-East Atlantic and the North Sea is proposed. Canonical correlation analyses, linear regression analyses, and non-parametric analyses of variance were used to identify different temporal trends. A classification of beaches was derived from cluster analyses and served to define different states of beach quality according to abundances of 17 input variables. The evaluation system is easily applicable and relies on the above-mentioned classification and on significant temporal trends implied by significant rank correlations. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE

  13. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  14. Output performance analyses of solar array on stratospheric airship with thermal effect

    International Nuclear Information System (INIS)

    Li, Jun; Lv, Mingyun; Tan, Dongjie; Zhu, Weiyu; Sun, Kangwen; Zhang, Yuanyuan

    2016-01-01

    Highlights: • A model investigating the output power of solar array is proposed. • The output power in the cruise condition with thermal effect is researched. • The effect of some factors on output performance is discussed in detail. • A suitable transmissivity of external layer is crucial in preliminary design step. - Abstract: Output performance analyses of the solar array are critical for solving the energy problem of a long-endurance stratospheric airship, and solar cell efficiency is very sensitive to cell temperature. However, research on the output performance of solar arrays that accounts for thermal effects is rare. This paper outlines a numerical model including the thermal model of the airship and solar cells, the incident solar radiation model on the solar array, and the power output model. Based on this numerical model, a MATLAB computer program is developed. In the course of the investigation, comparisons of the simulation results with and without the thermal effect are reported. Furthermore, the effects of the transmissivity of the external encapsulation layer of the solar array and of wind speed on the thermal performance and output power of the solar array are discussed in detail. The results indicate that this method is helpful for planning energy management.

  15. The Nursing Performance Instrument: Exploratory and Confirmatory Factor Analyses in Registered Nurses.

    Science.gov (United States)

    Sagherian, Knar; Steege, Linsey M; Geiger-Brown, Jeanne; Harrington, Donna

    2018-04-01

    The optimal performance of nurses in healthcare settings plays a critical role in care quality and patient safety. Despite this importance, few measures are provided in the literature that evaluate nursing performance as an independent construct from competencies. The nine-item Nursing Performance Instrument (NPI) was developed to fill this gap. The aim of this study was to examine and confirm the underlying factor structure of the NPI in registered nurses. The design was cross-sectional, using secondary data collected between February 2008 and April 2009 for the "Fatigue in Nursing Survey" (N = 797). The sample was predominantly dayshift female nurses working in acute care settings. Using Mplus software, exploratory and confirmatory factor analyses were applied to the NPI data, which were divided into two equal subsamples. Multiple fit indices were used to evaluate the fit of the alternative models. The three-factor model was determined to fit the data adequately. The factors that were labeled as "physical/mental decrements," "consistent practice," and "behavioral change" were moderately to strongly intercorrelated, indicating good convergent validity. The reliability coefficients for the subscales were acceptable. The NPI consists of three latent constructs. This instrument has the potential to be used as a self-monitoring instrument that addresses nurses' perceptions of performance while providing patient care.

  16. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at a constant undersampling factor and at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
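
    For reference, a penalized weighted least-squares objective with a total-variation penalty is commonly written as below; the abstract does not give the exact weighting or regularization used in this study, so this is only the generic form.

```latex
% Generic PWLS-plus-TV objective (not the study's exact parameterization):
% A is the system matrix, y the measured projections, W a statistical weighting
% matrix, and beta the regularization strength.
\hat{x} = \arg\min_{x}\; (y - A x)^{T} W (y - A x) + \beta\, \mathrm{TV}(x),
\qquad
\mathrm{TV}(x) = \sum_{j} \bigl\lVert (\nabla x)_{j} \bigr\rVert_{2}
```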

  17. Implementation of Statistical Process Control: Evaluating the Mechanical Performance of a Candidate Silicone Elastomer Docking Seal

    Science.gov (United States)

    Oravec, Heather Ann; Daniels, Christopher C.

    2014-01-01

    The National Aeronautics and Space Administration has been developing a novel docking system to meet the requirements of future exploration missions to low-Earth orbit and beyond. A dynamic gas pressure seal is located at the main interface between the active and passive mating components of the new docking system. This seal is designed to operate in the harsh space environment, but is also to perform within strict loading requirements while maintaining an acceptable level of leak rate. In this study, a candidate silicone elastomer seal was designed, and multiple subscale test articles were manufactured for evaluation purposes. The force required to fully compress each test article at room temperature was quantified and found to be below the maximum allowable load for the docking system. However, a significant amount of scatter was observed in the test results. Due to the stochastic nature of the mechanical performance of this candidate docking seal, a statistical process control technique was implemented to isolate unusual compression behavior from typical mechanical performance. The results of this statistical analysis indicated a lack of process control, suggesting a variation in the manufacturing phase of the process. Further investigation revealed that changes in the manufacturing molding process had occurred which may have influenced the mechanical performance of the seal. This knowledge improves the chance of this and future space seals to satisfy or exceed design specifications.
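
    A minimal sketch of the control-charting step, with made-up compression loads rather than the study's measurements, is shown below: an individuals chart flags loads outside mean ± 3σ as unusual compression behaviour.

```python
# Minimal sketch of the idea (made-up load values, not the study's data): an
# individuals control chart flags seal compression loads outside mean +/- 3 sigma,
# separating unusual behaviour from typical mechanical performance.
import numpy as np

loads = np.array([812, 805, 820, 798, 815, 880, 810, 802, 825, 808], dtype=float)  # N, hypothetical
mean, sigma = loads.mean(), loads.std(ddof=1)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

for i, value in enumerate(loads, start=1):
    flag = "OUT OF CONTROL" if (value > ucl or value < lcl) else "in control"
    print("article %2d: %6.1f N  %s" % (i, value, flag))
print("UCL = %.1f N, LCL = %.1f N" % (ucl, lcl))
```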

  18. Supplemental Performance Analyses for the Potential High-Level Nuclear Waste Repository at Yucca Mountain

    International Nuclear Information System (INIS)

    Sevougian, S. D.; McNeish, J. A.; Coppersmith, K.; Jenni, K. E.; Rickertsen, L. D.; Swift, P. N.; Wilson, M. L.

    2002-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for the potential development of a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (1), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. Based on internal reviews of the S and ER and its key supporting references, the Total System Performance Assessment for the Site Recommendation (TSPA-SR) (2) and the Analysis Model Reports and Process Model Reports cited therein, the DOE has recently identified and performed several types of analyses to supplement the treatment of uncertainty in support of the consideration of a possible site recommendation. The results of these new analyses are summarized in the two-volume report entitled FY01 Supplemental Science and Performance Analysis (SSPA) (3,4). The information in this report is intended to supplement, not supplant, the information contained in the S and ER. The DOE recognizes that important uncertainties will always remain in any assessment of the performance of a potential repository over thousands of years (1). One part of the DOE approach to recognizing and managing these uncertainties is a commitment to continued testing and analysis and to the continued evaluation of the technical basis supporting the possible recommendation of the site, such as the analysis contained in the SSPA. The goals of the work described here are to provide insights into the implications of newly quantified uncertainties, updated science, and evaluations of lower operating temperatures on the performance of a potential Yucca Mountain repository and to increase confidence in the results of the TSPA described

  19. Scenario sensitivity analyses performed on the PRESTO-EPA LLW risk assessment models

    International Nuclear Information System (INIS)

    Bandrowski, M.S.

    1988-01-01

    The US Environmental Protection Agency (EPA) is currently developing standards for the land disposal of low-level radioactive waste. As part of the standard development, EPA has performed risk assessments using the PRESTO-EPA codes. A program of sensitivity analysis was conducted on the PRESTO-EPA codes, consisting of single parameter sensitivity analysis and scenario sensitivity analysis. The results of the single parameter sensitivity analysis were discussed at the 1987 DOE LLW Management Conference. Specific scenario sensitivity analyses have been completed and evaluated. Scenario assumptions that were analyzed include: site location, disposal method, form of waste, waste volume, analysis time horizon, critical radionuclides, use of buffer zones, and global health effects

  20. SURE: a system of computer codes for performing sensitivity/uncertainty analyses with the RELAP code

    International Nuclear Information System (INIS)

    Bjerke, M.A.

    1983-02-01

    A package of computer codes has been developed to perform a nonlinear uncertainty analysis on transient thermal-hydraulic systems which are modeled with the RELAP computer code. The system was developed to support the uncertainty analyses of experiments in the PWR-BDHT Separate Effects Program at Oak Ridge National Laboratory. The use of FORTRAN programs running interactively on the PDP-10 computer has made the system very easy to use and provided great flexibility in the choice of processing paths. Several experiments simulating a loss-of-coolant accident in a nuclear reactor have been successfully analyzed. It has been shown that the system can be automated easily to further simplify its use and that the conversion of the entire system to a base code other than RELAP is possible.

  1. IRE (Institut National des Radioelements) site in Belgium. Report of in situ measurements and analyses performed for the RTBF

    International Nuclear Information System (INIS)

    2010-05-01

    This document reports various analyses performed within the framework of the preparation and filming of a TV documentary on the Belgian National Institute of Radio-elements (IRE). It reports gamma radiation measurements performed in the vicinity of the institute, discusses the possible origin of the elevated readings near the institute, and presents analyses of sludge samples from a wastewater treatment works as well as of milk, cabbage, mosses and sediments collected by residents.

  2. A laboratory evaluation of the influence of weighing gauges performance on extreme events statistics

    Science.gov (United States)

    Colli, Matteo; Lanza, Luca

    2014-05-01

    The effects of inaccurate ground-based rainfall measurements on the information derived from rain records are not yet well documented in the literature. La Barbera et al. (2002) investigated the propagation of the systematic mechanical errors of tipping bucket type rain gauges (TBR) into the most common statistics of rainfall extremes, e.g. in the assessment of the return period T (or the related non-exceedance probability) of short-duration/high-intensity events. Colli et al. (2012) and Lanza et al. (2012) extended the analysis to a 22-year-long precipitation data set obtained from a virtual weighing type gauge (WG). The artificial WG time series was obtained based on real precipitation data measured at the meteo-station of the University of Genova and by modelling the weighing gauge output as a linear dynamic system. This approximation was previously validated with dedicated laboratory experiments and is based on the evidence that the accuracy of WG measurements under real-world/time-varying rainfall conditions is mainly affected by the dynamic response of the gauge (as revealed during the last WMO Field Intercomparison of Rainfall Intensity Gauges). The investigation is now completed by analyzing actual measurements performed by two common weighing gauges, the OTT Pluvio2 load-cell gauge and the GEONOR T-200 vibrating-wire gauge, since both these instruments demonstrated very good performance under previous constant flow rate calibration efforts. A laboratory dynamic rainfall generation system has been arranged and validated in order to simulate a number of precipitation events with variable reference intensities. Such artificial events were generated based on real-world rainfall intensity (RI) records obtained from the meteo-station of the University of Genova so that the statistical structure of the time series is preserved. The influence of the WG RI measurement accuracy on the associated extreme events statistics is analyzed by comparing the original intensity

  3. A comparison of linear and nonlinear statistical techniques in performance attribution.

    Science.gov (United States)

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on a standard linear multifactor model and on three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
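
    The sketch below illustrates the comparison on hypothetical factor data: a linear multifactor fit versus a small neural network, one of the nonlinear techniques named above; the factor names, coefficients and sample size are invented for the example.

```python
# Hypothetical illustration (not the paper's stock universe): attributing monthly
# returns to factor exposures with a linear multifactor model and with a small
# neural network, one of the nonlinear techniques discussed in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 240                                                # months, hypothetical
value, momentum, size = rng.normal(0, 1, (3, n))       # invented factor exposures
returns = 0.4 * value + 0.2 * momentum - 0.1 * size \
          + 0.15 * value * momentum + rng.normal(0, 0.3, n)   # mild nonlinearity
X = np.column_stack([value, momentum, size])

linear = LinearRegression().fit(X, returns)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, returns)

print("linear factor loadings:", np.round(linear.coef_, 3))
print("linear in-sample R^2 :", round(linear.score(X, returns), 3))
print("MLP    in-sample R^2 :", round(net.score(X, returns), 3))
```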

  4. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.

  5. FREQFIT: Computer program which performs numerical regression and statistical chi-squared goodness of fit analysis

    International Nuclear Information System (INIS)

    Hofland, G.S.; Barton, C.C.

    1990-01-01

    The computer program FREQFIT is designed to perform regression and statistical chi-squared goodness of fit analysis on one-dimensional or two-dimensional data. The program features an interactive user dialogue, numerous help messages, an option for screen or line printer output, and the flexibility to use practically any commercially available graphics package to create plots of the program's results. FREQFIT is written in Microsoft QuickBASIC, for IBM-PC compatible computers. A listing of the QuickBASIC source code for the FREQFIT program, a user manual, and sample input data, output, and plots are included. 6 refs., 1 fig
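
    FREQFIT itself is written in QuickBASIC; purely to illustrate the regression-plus-goodness-of-fit workflow it automates, the sketch below fits an exponential frequency model to hypothetical binned data and tests the fit with a chi-squared statistic.

```python
# Not the FREQFIT source: a small Python sketch of the same workflow - fit a
# regression to binned frequency data, then test the fit with a chi-squared
# goodness-of-fit statistic. All numbers are hypothetical.
import numpy as np
from scipy import stats

bin_centers = np.arange(1.0, 11.0)                       # hypothetical 1-D bins
observed = np.array([52, 44, 38, 30, 26, 22, 18, 16, 13, 11], dtype=float)

# Fit log(frequency) = a + b * x by least squares (an exponential frequency model).
slope, intercept, r_value, p_value, stderr = stats.linregress(bin_centers, np.log(observed))
expected = np.exp(intercept + slope * bin_centers)
expected *= observed.sum() / expected.sum()              # match totals for the GOF test

chi2, p = stats.chisquare(observed, expected, ddof=2)    # 2 fitted parameters
print("slope=%.3f  intercept=%.3f  R^2=%.3f" % (slope, intercept, r_value ** 2))
print("chi-squared = %.2f, p = %.3f" % (chi2, p))
```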

  6. Validation of Method Performance of pH, PCO2, PO2, Na(+), K(+) of Cobas b121 ABG Analyser.

    Science.gov (United States)

    Nanda, Sunil Kumar; Ray, Lopamudra; Dinakaran, Asha

    2014-06-01

    The introduction of a new method or a new analyser is a common occurrence in the clinical biochemistry laboratory. Blood gas and electrolyte measurements are often performed in Point-of-Care (POC) settings. When a new POC analyser is obtained, its performance should be evaluated by comparing its measurements with those of the reference analyser in the laboratory. Evaluation of method performance of pH, PCO2, PO2, Na(+), K(+) of the cobas b121 ABG analyser. The evaluation of method performance of pH, PO2, PCO2, Na(+), K(+) of the cobas b121 ABG analyser was done by comparing the results of 50 patient samples run on the cobas b121 with the results obtained from the Rapid Lab analyser (reference analyser). The correlation coefficient was calculated from the results obtained from both analysers. Precision was calculated by running Bio-Rad ABG control samples. The correlation coefficient values obtained for the parameters were close to 1.0, indicating good correlation. The CVs obtained for all the parameters were less than 5, indicating good precision. The new ABG analyser, the Cobas b121, correlated well with the reference ABG analyser (Rapid Lab) and could be used for patient samples.
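
    The two quantities reported above, the correlation between paired analyser results and the coefficient of variation (CV) of repeated control measurements, can be computed as in the sketch below; the values are made up for illustration and are not the study's data.

```python
# Made-up numbers for illustration only: the correlation between paired results from
# the new and reference analysers, and the CV of repeated control measurements.
import numpy as np

reference = np.array([7.36, 7.41, 7.28, 7.45, 7.33, 7.39])   # e.g. pH on the reference analyser
candidate = np.array([7.35, 7.42, 7.29, 7.44, 7.34, 7.38])   # same samples on the new analyser
r = np.corrcoef(reference, candidate)[0, 1]

controls = np.array([7.40, 7.39, 7.41, 7.40, 7.42, 7.38])    # repeated control material
cv_percent = 100 * controls.std(ddof=1) / controls.mean()

print("correlation coefficient r = %.3f" % r)
print("precision CV = %.2f%%" % cv_percent)
```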

  7. The statistical analysis techniques to support the NGNP fuel performance experiments

    Energy Technology Data Exchange (ETDEWEB)

    Pham, Binh T., E-mail: Binh.Pham@inl.gov; Einerson, Jeffrey J.

    2013-10-15

    This paper describes the development and application of statistical analysis techniques to support the Advanced Gas Reactor (AGR) experimental program on Next Generation Nuclear Plant (NGNP) fuel performance. The experiments conducted in the Idaho National Laboratory’s Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel temperature) is regulated by the He–Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the NGNP Data Management and Analysis System for automated processing and qualification of the AGR measured data. The neutronic and thermal code simulation results are used for comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the fuel temperature within a given range.

  8. Statistical parametric mapping and statistical probabilistic anatomical mapping analyses of basal/acetazolamide Tc-99m ECD brain SPECT for efficacy assessment of endovascular stent placement for middle cerebral artery stenosis

    International Nuclear Information System (INIS)

    Lee, Tae-Hong; Kim, Seong-Jang; Kim, In-Ju; Kim, Yong-Ki; Kim, Dong-Soo; Park, Kyung-Pil

    2007-01-01

    Statistical parametric mapping (SPM) and statistical probabilistic anatomical mapping (SPAM) were applied to basal/acetazolamide Tc-99m ECD brain perfusion SPECT images in patients with middle cerebral artery (MCA) stenosis to assess the efficacy of endovascular stenting of the MCA. Enrolled in the study were 11 patients (8 men and 3 women, mean age 54.2 ± 6.2 years) who had undergone endovascular stent placement for MCA stenosis. Using SPM and SPAM analyses, we compared the number of significant voxels and cerebral counts in basal and acetazolamide SPECT images before and after stenting, and assessed the perfusion changes and cerebral vascular reserve index (CVRI). The numbers of hypoperfusion voxels in SPECT images were decreased from 10,083 ± 8,326 to 4,531 ± 5,091 in basal images (P = 0.0317) and from 13,398 ± 14,222 to 7,699 ± 10,199 in acetazolamide images (P = 0.0142) after MCA stenting. On SPAM analysis, the increases in cerebral counts were significant in acetazolamide images (90.9 ± 2.2 to 93.5 ± 2.3, P = 0.0098) but not in basal images (91 ± 2.7 to 92 ± 2.6, P = 0.1602). The CVRI also showed a statistically significant increase from before stenting (median 0.32; 95% CI -2.19-2.37) to after stenting (median 1.59; 95% CI -0.85-4.16; P = 0.0068). This study revealed the usefulness of voxel-based analysis of basal/acetazolamide brain perfusion SPECT after MCA stent placement. This study showed that SPM and SPAM analyses of basal/acetazolamide Tc-99m brain SPECT could be used to evaluate the short-term hemodynamic efficacy of successful MCA stent placement. (orig.)

  9. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL

    2015-01-01

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).

  10. Neutronic analyses of design issues affecting the tritium breeding performance in different DEMO blanket concepts

    Energy Technology Data Exchange (ETDEWEB)

    Pereslavtsev, Pavel, E-mail: pavel.pereslavtsev@kit.edu [Karlsruhe Institute for Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bachmann, Christian [EUROfusion – Programme Management Unit, Boltzmannstrasse 2, 85748 Garching (Germany); Fischer, Ulrich [Karlsruhe Institute for Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2016-11-01

    Highlights: • Realistic 3D MCNP model based on the CAD engineering model of DEMO. • Automated procedure for the generation and arrangement of the blanket modules for different DEMO concepts: HCPB, HCLL, WCLL, DCLL. • Several parameters affecting tritium breeding ratio (TBR) were investigated. • A set of practical guidelines was prepared for the designers developing the individual breeding blanket concepts. - Abstract: Neutronic analyses were performed to assess systematically the tritium breeding ratio (TBR) variations in the DEMO for the different blanket concepts HCPB, HCLL, WCLL and DCLL DEMOs due to modifications of the blanket configurations. A dedicated automated procedure was developed to fill the breeding modules in the common generic model in correspondence to the different concepts. The TBR calculations were carried out using the MCNP5 Monte Carlo code. The following parameters affecting the global TBR were investigated: TBR poloidal distribution, radial breeder zone depth, {sup 6}Li enrichment, steel content in the breeder modules, poloidal segmentation of the breeder blanket volume, size of gaps between blankets, thickness of the first wall and of the tungsten armour. Based on the results a set of practical guidelines was prepared for the designers developing the individual breeding blanket concepts with the goal to achieve the required tritium breeding performance in DEMO.

  11. Neutronic analyses of design issues affecting the tritium breeding performance in different DEMO blanket concepts

    International Nuclear Information System (INIS)

    Pereslavtsev, Pavel; Bachmann, Christian; Fischer, Ulrich

    2016-01-01

    Highlights: • Realistic 3D MCNP model based on the CAD engineering model of DEMO. • Automated procedure for the generation and arrangement of the blanket modules for different DEMO concepts: HCPB, HCLL, WCLL, DCLL. • Several parameters affecting tritium breeding ratio (TBR) were investigated. • A set of practical guidelines was prepared for the designers developing the individual breeding blanket concepts. - Abstract: Neutronic analyses were performed to assess systematically the tritium breeding ratio (TBR) variations in the DEMO for the different blanket concepts HCPB, HCLL, WCLL and DCLL DEMOs due to modifications of the blanket configurations. A dedicated automated procedure was developed to fill the breeding modules in the common generic model in correspondence to the different concepts. The TBR calculations were carried out using the MCNP5 Monte Carlo code. The following parameters affecting the global TBR were investigated: TBR poloidal distribution, radial breeder zone depth, ⁶Li enrichment, steel content in the breeder modules, poloidal segmentation of the breeder blanket volume, size of gaps between blankets, thickness of the first wall and of the tungsten armour. Based on the results a set of practical guidelines was prepared for the designers developing the individual breeding blanket concepts with the goal to achieve the required tritium breeding performance in DEMO.

  12. Current Approaches to Tactical Performance Analyses in Soccer Using Position Data.

    Science.gov (United States)

    Memmert, Daniel; Lemmink, Koen A P M; Sampaio, Jaime

    2017-01-01

    Successful tactical match performance depends on the quality of the actions of individual players or teams in space and time during match-play. Technological innovations have led to new possibilities to capture accurate spatio-temporal information on all players and to unravel the dynamics and complexity of soccer matches. The main aim of this article is to give an overview of the current state of development of the analysis of position data in soccer. Based on the same single set of position data of a high-level 11 versus 11 match (Bayern Munich against FC Barcelona), three different promising approaches from the perspective of dynamic systems and neural networks will be presented: tactical performance analysis revealed inter-player coordination, inter-team and inter-line coordination before critical events, as well as team-team interaction and compactness coefficients. This could lead to a multi-disciplinary discussion on match analyses in sport science and new avenues for theoretical and practical implications in soccer.

  13. A tale of five cities: Using recycling frameworks to analyse inclusive recycling performance.

    Science.gov (United States)

    Scheinberg, Anne; Simpson, Michael

    2015-11-01

    'Recycling' is a source of much confusion, particularly when comparing solid waste systems in high-income countries with those in low- and middle-income countries. Few analysts can explain why the performance and structure of recycling appears to be so different in rich countries from poor ones, nor why well-meaning efforts to implement recycling so often fail. The analysis of policy drivers, and the Integrated Sustainable Waste Management (ISWM) framework, come close to an explanation.This article builds on these earlier works, focusing in on five cities profiled in the 2010 UN-Habitat publication (Scheinberg A, Wilson DC and Rodic L (2010) Solid Waste Management in the World's Cities. UN-Habitat's Third Global Report on the State of Water and Sanitation in the World's Cities. Newcastle-on-Tyne, UK: Earthscan Publications). Data from these cities and others provides the basis for developing a new tool to analyse inclusive recycling performance. The points of departure are the institutional and economic relationships between the service chain, the public obligation to remove waste, pollution, and other forms of disvalue, and the value chain, a system of private enterprises trading valuable materials and providing markets for recyclables. The methodological innovation is to use flows of materials and money as indicators of institutional relationships, and is an extension of process flow diagramming.The authors are using the term 'recycling framework analysis' to describe this new form of institutional analysis. The diagrams increase our understanding of the factors that contribute to high-performance inclusive recycling. By focusing on institutional relationships, the article seeks to improve analysis, planning, and ultimately, outcomes, of recycling interventions. © The Author(s) 2015.

  14. Mild performic acid oxidation enhances chromatographic and top down mass spectrometric analyses of histones.

    Science.gov (United States)

    Pesavento, James J; Garcia, Benjamin A; Streeky, James A; Kelleher, Neil L; Mizzen, Craig A

    2007-09-01

    Recent developments in top down mass spectrometry have enabled closely related histone variants and their modified forms to be identified and quantitated with unprecedented precision, facilitating efforts to better understand how histones contribute to the epigenetic regulation of gene transcription and other nuclear processes. It is therefore crucial that intact MS profiles accurately reflect the levels of variants and modified forms present in a given cell type or cell state for the full benefit of such efforts to be realized. Here we show that partial oxidation of Met and Cys residues in histone samples prepared by conventional methods, together with oxidation that can accrue during storage or during chip-based automated nanoflow electrospray ionization, confounds MS analysis by altering the intact MS profile as well as hindering posttranslational modification localization after MS/MS. We also describe an optimized performic acid oxidation procedure that circumvents these problems without catalyzing additional oxidations or altering the levels of posttranslational modifications common in histones. MS and MS/MS of HeLa cell core histones confirmed that Met and Cys were the only residues oxidized and that complete oxidation restored true intact abundance ratios and significantly enhanced MS/MS data quality. This allowed for the unequivocal detection, at the intact molecule level, of novel combinatorially modified forms of H4 that would have been missed otherwise. Oxidation also enhanced the separation of human core histones by reverse phase chromatography and decreased the levels of salt-adducted forms observed in ESI-FTMS. This method represents a simple and easily automated means for enhancing the accuracy and sensitivity of top down analyses of combinatorially modified forms of histones that may also be of benefit for top down or bottom up analyses of other proteins.

  15. Analysing the spatial patterns of livestock anthrax in Kazakhstan in relation to environmental factors: a comparison of local (Gi*) and morphology cluster statistics

    Directory of Open Access Journals (Sweden)

    Ian T. Kracalik

    2012-11-01

    We compared a local clustering and a cluster morphology statistic using anthrax outbreaks in large (cattle) and small (sheep and goats) domestic ruminants across Kazakhstan. The Getis-Ord (Gi*) statistic and a multidirectional optimal ecotope algorithm (AMOEBA) were compared using 1st, 2nd and 3rd order Rook contiguity matrices. Multivariate statistical tests were used to evaluate the environmental signatures between clusters and non-clusters from the AMOEBA and Gi* tests. A logistic regression was used to define a risk surface for anthrax outbreaks and to compare agreement between clustering methodologies. Tests revealed differences in the spatial distribution of clusters as well as the total number of clusters in large ruminants for AMOEBA (n = 149) and for small ruminants (n = 9). In contrast, Gi* revealed fewer large ruminant clusters (n = 122) and more small ruminant clusters (n = 61). Significant environmental differences were found between groups using the Kruskal-Wallis and Mann-Whitney U tests. Logistic regression was used to model the presence/absence of anthrax outbreaks and define a risk surface for large ruminants to compare with cluster analyses. The model predicted 32.2% of the landscape as high risk. Approximately 75% of AMOEBA clusters corresponded to predicted high risk, compared with ~64% of Gi* clusters. In general, AMOEBA predicted more irregularly shaped clusters of outbreaks in both livestock groups, while Gi* tended to predict larger, circular clusters. Here we provide an evaluation of both tests and a discussion of the use of each to detect environmental conditions associated with anthrax outbreak clusters in domestic livestock. These findings illustrate important differences in spatial statistical methods for defining local clusters and highlight the importance of selecting appropriate levels of data aggregation.
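
    To make the local statistic concrete, the sketch below implements the standard Getis-Ord Gi* formula in plain numpy on a toy series of outbreak counts with a simple contiguity weight matrix; it illustrates the formula only and is not the authors' Kazakhstan GIS workflow.

```python
# Plain-numpy sketch of the standard Getis-Ord Gi* statistic on a toy set of
# outbreak counts with a binary contiguity weight matrix (self-included); an
# illustration of the formula, not the authors' Kazakhstan workflow.
import numpy as np

x = np.array([1.0, 2.0, 0.0, 8.0, 9.0, 7.0, 1.0, 0.0])   # hypothetical counts
n = len(x)

# Binary "neighbour" weights for units arranged on a line, including the unit itself.
W = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 1), min(n, i + 2)):
        W[i, j] = 1.0

x_bar = x.mean()
s = np.sqrt((x ** 2).mean() - x_bar ** 2)

gi_star = np.empty(n)
for i in range(n):
    wi = W[i]
    num = wi @ x - x_bar * wi.sum()
    den = s * np.sqrt((n * (wi ** 2).sum() - wi.sum() ** 2) / (n - 1))
    gi_star[i] = num / den

print(np.round(gi_star, 2))   # large positive values indicate local hot spots
```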

  16. The use of statistics in real and simulated investigations performed by undergraduate health sciences' students

    OpenAIRE

    Pimenta, Rui; Nascimento, Ana; Vieira, Margarida; Costa, Elísio

    2010-01-01

    In previous works, we evaluated the statistical reasoning ability acquired by health sciences' students carrying out their final undergraduate project. We found that these students achieved a good level of statistical literacy and reasoning in descriptive statistics. However, concerning inferential statistics the students did not reach a similar level. Statistics educators therefore call for more effective ways to learn statistics, such as project-based investigations. These can be simulat...

  17. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratoy Zonal Atmospheric Model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

  18. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

    Science.gov (United States)

    Angeler, David G; Viedma, Olga; Moreno, José M

    2009-11-01

    Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinate of neighborhood matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-) communities.
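
    For readers unfamiliar with time lag analysis, the sketch below shows the core idea in a few lines of Python: community dissimilarity between all pairs of censuses is regressed on the square root of the time lag separating them. The dissimilarity measure (Bray-Curtis), the synthetic community matrix and the names are illustrative choices, not those of the study.

        import numpy as np
        from itertools import combinations

        def time_lag_analysis(Y):
            """Regress Bray-Curtis dissimilarity on the square root of the time lag."""
            n = Y.shape[0]
            lags, diss = [], []
            for i, j in combinations(range(n), 2):
                d = np.abs(Y[i] - Y[j]).sum() / (Y[i] + Y[j]).sum()  # Bray-Curtis
                lags.append(np.sqrt(j - i))
                diss.append(d)
            slope, intercept = np.polyfit(lags, diss, 1)
            return slope, intercept   # a positive slope suggests directional change

        rng = np.random.default_rng(1)
        Y = rng.poisson(5, size=(12, 6)) + np.arange(12)[:, None]  # drifting community
        print(time_lag_analysis(Y))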

  19. Statistical properties of a utility measure of observer performance compared to area under the ROC curve

    Science.gov (United States)

    Abbey, Craig K.; Samuelson, Frank W.; Gallas, Brandon D.; Boone, John M.; Niklason, Loren T.

    2013-03-01

    The receiver operating characteristic (ROC) curve has become a common tool for evaluating diagnostic imaging technologies, and the primary endpoint of such evaluations is the area under the curve (AUC), which integrates sensitivity over the entire false positive range. An alternative figure of merit for ROC studies is expected utility (EU), which focuses on the relevant region of the ROC curve as defined by disease prevalence and the relative utility of the task. However, if this measure is to be used, it must also have desirable statistical properties to keep the burden of observer performance studies as low as possible. Here, we evaluate effect size and variability for EU and AUC. We use two observer performance studies recently submitted to the FDA to compare the EU and AUC endpoints. The studies were conducted using the multi-reader multi-case methodology in which all readers score all cases in all modalities. ROC curves from the study were used to generate both the AUC and EU values for each reader and modality. The EU measure was computed assuming an iso-utility slope of 1.03. We find mean effect sizes, the reader averaged difference between modalities, to be roughly 2.0 times as big for EU as AUC. The standard deviation across readers is roughly 1.4 times as large, suggesting better statistical properties for the EU endpoint. In a simple power analysis of paired comparison across readers, the utility measure required 36% fewer readers on average to achieve 80% statistical power compared to AUC.
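
    The contrast between the two figures of merit can be made concrete with a small simulation. The sketch below builds an empirical ROC curve from Gaussian reader scores, integrates it for AUC, and evaluates a utility-style summary by maximising TPF - beta*FPF with the iso-utility slope of 1.03 quoted above. This is a hedged illustration of the general idea only; the exact EU definition, the score model and all numbers are assumptions, not values from the FDA studies.

        import numpy as np

        rng = np.random.default_rng(0)
        neg = rng.normal(0.0, 1.0, 2000)       # reader scores, non-diseased cases
        pos = rng.normal(1.5, 1.0, 2000)       # reader scores, diseased cases

        thr = np.linspace(-4.0, 6.0, 400)
        fpf = np.array([(neg >= t).mean() for t in thr])
        tpf = np.array([(pos >= t).mean() for t in thr])

        # AUC by trapezoidal integration with FPF increasing.
        f, s = fpf[::-1], tpf[::-1]
        auc = np.sum(np.diff(f) * (s[1:] + s[:-1]) / 2.0)

        # Utility-style summary: best operating point for an iso-utility slope of 1.03.
        eu = np.max(tpf - 1.03 * fpf)
        print(f"AUC = {auc:.3f}, utility-style summary = {eu:.3f}")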

  20. Inferring the origin of rare fruit distillates from compositional data using multivariate statistical analyses and the identification of new flavour constituents.

    Science.gov (United States)

    Mihajilov-Krstev, Tatjana M; Denić, Marija S; Zlatković, Bojan K; Stankov-Jovanović, Vesna P; Mitić, Violeta D; Stojanović, Gordana S; Radulović, Niko S

    2015-04-01

    In Serbia, delicatessen fruit alcoholic drinks are produced from autochthonous fruit-bearing species such as cornelian cherry, blackberry, elderberry, wild strawberry, European wild apple, European blueberry and blackthorn fruits. There are no chemical data on many of these and herein we analysed volatile minor constituents of these rare fruit distillates. Our second goal was to determine possible chemical markers of these distillates through a statistical/multivariate treatment of the herein obtained and previously reported data. Detailed chemical analyses revealed a complex volatile profile of all studied fruit distillates with 371 identified compounds. A number of constituents were recognised as marker compounds for a particular distillate. Moreover, 33 of them represent newly detected flavour constituents in alcoholic beverages or, in general, in foodstuffs. With the aid of multivariate analyses, these volatile profiles were successfully exploited to infer the origin of raw materials used in the production of these spirits. It was also shown that all fruit distillates possessed weak antimicrobial properties. It seems that the aroma of these highly esteemed wild-fruit spirits depends on the subtle balance of various minor volatile compounds, whereby some of them are specific to a certain type of fruit distillate and enable their mutual distinction. © 2014 Society of Chemical Industry.

  1. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

    Science.gov (United States)

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I2 statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I2GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I2GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We
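
    The I2GX diagnostic follows the familiar meta-analysis construction I2 = (Q - df)/Q, applied to the SNP-exposure association estimates rather than to the outcome associations. A minimal sketch, assuming only a vector of SNP-exposure betas and their standard errors (the data below are made up):

        import numpy as np

        def i_squared_gx(beta_gx, se_gx):
            """I2 for the SNP-exposure associations: values near 1 mean little expected
            dilution of the MR-Egger estimate, values near 0 mean severe dilution
            (strong NOME violation)."""
            w = 1.0 / se_gx ** 2
            beta_bar = np.sum(w * beta_gx) / np.sum(w)
            q = np.sum(w * (beta_gx - beta_bar) ** 2)
            df = beta_gx.size - 1
            return max(0.0, (q - df) / q)

        beta_gx = np.array([0.12, 0.09, 0.15, 0.11, 0.08, 0.14])    # SNP-exposure betas
        se_gx = np.array([0.010, 0.020, 0.015, 0.012, 0.020, 0.018])
        print(f"I2GX = {i_squared_gx(beta_gx, se_gx):.2f}")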

  2. Predicting energy performance of a net-zero energy building: A statistical approach

    International Nuclear Information System (INIS)

    Kneifel, Joshua; Webb, David

    2016-01-01

    Highlights: • A regression model is applied to actual energy data from a net-zero energy building. • The model is validated through a rigorous statistical analysis. • Comparisons are made between model predictions and those of a physics-based model. • The model is a viable baseline for evaluating future models from the energy data. - Abstract: Performance-based building requirements have become more prevalent because they give freedom in building design while still maintaining or exceeding the energy performance required by prescriptive-based requirements. In order to determine if building designs reach target energy efficiency improvements, it is necessary to estimate the energy performance of a building using predictive models and different weather conditions. Physics-based whole building energy simulation modeling is the most common approach. However, these physics-based models include underlying assumptions and require significant amounts of information in order to specify the input parameter values. An alternative approach to test the performance of a building is to develop a statistically derived predictive regression model using post-occupancy data that can accurately predict energy consumption and production based on a few common weather-based factors, thus requiring less information than simulation models. A regression model based on measured data should be able to predict energy performance of a building for a given day as long as the weather conditions are similar to those during the data collection time frame. This article uses data from the National Institute of Standards and Technology (NIST) Net-Zero Energy Residential Test Facility (NZERTF) to develop and validate a regression model to predict the energy performance of the NZERTF using two weather variables aggregated to the daily level, applies the model to estimate the energy performance of hypothetical NZERTFs located in different cities in the Mixed-Humid Climate Zone, and compares these
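
    The statistical approach described above amounts to ordinary least squares on daily data. The sketch below fits a daily energy-consumption model from two weather aggregates; the predictors (heating degree-days and insolation), the coefficients and the synthetic data are stand-ins chosen for illustration, not the NZERTF variables or results.

        import numpy as np

        rng = np.random.default_rng(42)
        days = 365
        hdd = np.clip(rng.normal(8.0, 6.0, days), 0.0, None)   # heating degree-days
        solar = rng.uniform(1.0, 7.0, days)                     # insolation, kWh/m^2/day
        energy = 12 + 1.8 * hdd - 0.9 * solar + rng.normal(0, 1.5, days)  # kWh/day

        X = np.column_stack([np.ones(days), hdd, solar])
        beta, *_ = np.linalg.lstsq(X, energy, rcond=None)       # ordinary least squares
        pred = X @ beta
        r2 = 1 - ((energy - pred) ** 2).sum() / ((energy - energy.mean()) ** 2).sum()
        print("coefficients:", np.round(beta, 2), " R^2:", round(r2, 3))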

  3. Pushover, Response Spectrum and Time History Analyses of Safe Rooms in a Poor Performance Masonry Building

    International Nuclear Information System (INIS)

    Mazloom, M.

    2008-01-01

    The idea of the safe room has been developed for decreasing earthquake casualties in masonry buildings. The information obtained from previous ground motions occurring in seismic zones shows that these buildings lack sufficient safety against earthquakes. For this reason, an attempt has been made to create some safe areas inside the existing masonry buildings, which are called safe rooms. The practical method for making these safe areas is to install some prefabricated steel frames in some parts of the existing structure. These frames do not carry any service loads before an earthquake. However, if a devastating earthquake happens and the load bearing walls of the building are destroyed, some parts of the floors, which are in the safe areas, will fall on the roof of the installed frames and the occupants who have sheltered there will survive. This paper presents the performance of these frames located in a collapsing three-storey masonry building, with favorable conclusions. In fact, the experimental pushover diagram of the safe room located at the ground-floor level of this building is compared with the analytical results and it is concluded that pushover analysis is a good method for seismic performance evaluation of safe rooms. For time history analysis, the 1940 El Centro, the 2003 Bam, and the 1990 Manjil earthquake records with maximum peak accelerations of 0.35g were utilized. Also the design spectrum of Iranian Standard No. 2800-05 for ground type 2 is used for response spectrum analysis. The results of time history, response spectrum and pushover analyses show that the strength and displacement capacity of the steel frames are adequate to accommodate the distortions generated by seismic loads and aftershocks properly

  4. Investigation of Forming Performance of Laminated Steel Sheets Using Finite Element Analyses

    International Nuclear Information System (INIS)

    Liu Wenning; Sun Xin; Ruokolainen, Robert; Gayden Xiaohong

    2007-01-01

    Laminated steel sheets have been used in automotive structures for reducing in-cabin noise. However, due to the marked difference in material properties of the different laminated layers, integrating laminated steel parts into the manufacturing processes can be challenging. Especially, the behavior of laminated sheets during forming processes is very different from that of monolithic steel sheets. During the deep-draw forming process, large shear deformation and corresponding high interfacial stress may initiate and propagate interfacial cracks between the core polymer and the metal skin, hence degrading the performance of the laminated sheets. In this paper, the formability of the laminated steel sheets is investigated by means of numerical analysis. The goal of this work is to gain insight into the relationship between the individual properties of the laminated sheet layers and the corresponding formability of the laminated sheet as a whole, eventually leading to reliable design and successful forming process development of such materials. Finite element analyses of laminate sheet forming are presented. Effects of polymer core thickness and viscoelastic properties of the polymer core, as well as punching velocity, are also investigated

  5. MendelianRandomization: an R package for performing Mendelian randomization analyses using summarized data.

    Science.gov (United States)

    Yavorska, Olena O; Burgess, Stephen

    2017-12-01

    MendelianRandomization is a software package for the R open-source software environment that performs Mendelian randomization analyses using summarized data. The core functionality is to implement the inverse-variance weighted, MR-Egger and weighted median methods for multiple genetic variants. Several options are available to the user, such as the use of robust regression, fixed- or random-effects models and the penalization of weights for genetic variants with heterogeneous causal estimates. Extensions to these methods, such as allowing for variants to be correlated, can be chosen if appropriate. Graphical commands allow summarized data to be displayed in an interactive graph, or the plotting of causal estimates from multiple methods, for comparison. Although the main method of data entry is directly by the user, there is also an option for allowing summarized data to be incorporated from the PhenoScanner database of genotype-phenotype associations. We hope to develop this feature in future versions of the package. The R software environment is available for download from [https://www.r-project.org/]. The MendelianRandomization package can be downloaded from the Comprehensive R Archive Network (CRAN) within R, or directly from [https://cran.r-project.org/web/packages/MendelianRandomization/]. Both R and the MendelianRandomization package are released under GNU General Public Licenses (GPL-2|GPL-3). © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association.

  6. Performance Optimization of Unglazed Nanofluid Photovoltaic/Thermal System: Energy and Exergy Analyses

    Directory of Open Access Journals (Sweden)

    M. Imtiaz Hussain

    2018-01-01

    Full Text Available The focus of this paper is to predict the transient response of a nanoengineered photovoltaic thermal (PV/T) system in view of energy and exergy analyses. Instead of a circular-shaped receiver, a trapezoidal-shaped receiver is employed to increase heat transfer surface area with photovoltaic (PV) cells for improvement of heat extraction and thus achievement of a higher PV/T system efficiency. The dynamic mathematical model is developed using MATLAB® software by considering real-time heat transfer coefficients. The proposed model is validated with experimental data from a previous study. Negligible discrepancies were found between measured and predicted data. The validated model was further investigated in detail using different nanofluids by dispersing copper oxide (CuO) and aluminum oxide (Al2O3) in pure water. The overall performance of the nanoengineered PV/T system was compared to that of a PV/T system using water only, and optimal operating conditions were determined for maximum useful energy and exergy rates. The results indicated that the CuO/water nanofluid has a notable impact on the energy and exergy efficiencies of the PV/T system compared to that of Al2O3/water nanofluid and water only cases.

  7. Performance Analyses of Renewable and Fuel Power Supply Systems for Different Base Station Sites

    Directory of Open Access Journals (Sweden)

    Josip Lorincz

    2014-11-01

    Full Text Available Base station sites (BSSs) powered with renewable energy sources have gained the attention of cellular operators during the last few years. This is because such “green” BSSs impose significant reductions in the operational expenditures (OPEX) of telecom operators due to the possibility of on-site renewable energy harvesting. In this paper, the green BSSs power supply system parameters detected through remote and centralized real time sensing are presented. An implemented sensing system based on a wireless sensor network enables reliable collection and post-processing analyses of many parameters, such as: total charging/discharging current of power supply system, battery voltage and temperature, wind speed, etc. As an example, yearly sensing results for three different BSS configurations powered by solar and/or wind energy are discussed in terms of renewable energy supply (RES) system performance. In the case of powering those BSS with standalone systems based on a fuel generator, fuel consumption models expressing the interdependence between generator load and fuel consumption are proposed. This has allowed energy-efficiency comparison of the fuel powered and RES systems, which is presented in terms of the OPEX and carbon dioxide (CO2) reductions. Additionally, approaches based on different BSS air-conditioning systems and the on/off regulation of a daily fuel generator activity are proposed and validated in terms of energy and capital expenditure (CAPEX) savings.
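
    The fuel consumption models mentioned above express litres of diesel per hour as a function of generator load. A commonly used linear form is sketched below; both the linear shape and the two coefficients are generic textbook values for small diesel generators, assumed here for illustration, not the model or coefficients estimated from the monitored base station sites.

        def fuel_consumption_lph(load_kw, rated_kw, a=0.246, b=0.08415):
            """Litres per hour as a linear function of load and rated generator power."""
            return a * load_kw + b * rated_kw

        for load in (2.0, 5.0, 10.0):
            print(f"{load:>4} kW -> {fuel_consumption_lph(load, rated_kw=10.0):.2f} L/h")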

  8. Effect of altitude on physiological performance: a statistical analysis using results of international football games.

    Science.gov (United States)

    McSharry, Patrick E

    2007-12-22

    To assess the effect of altitude on match results and physiological performance of a large and diverse population of professional athletes. Statistical analysis of international football (soccer) scores and results. FIFA extensive database of 1460 football matches in 10 countries spanning over 100 years. Altitude had a significant negative impact on physiological performance, as revealed through the overall underperformance of low altitude teams when playing against high altitude teams in South America. High altitude teams score more and concede fewer goals with increasing altitude difference. Each additional 1000 m of altitude difference increases the goal difference by about half of a goal. The probability of the home team winning for two teams from the same altitude is 0.537, whereas this rises to 0.825 for a home team with an altitude difference of 3695 m (such as Bolivia v Brazil) and falls to 0.213 when the altitude difference is -3695 m (such as Brazil v Bolivia). Altitude provides a significant advantage for high altitude teams when playing international football games at both low and high altitudes. Lowland teams are unable to acclimatise to high altitude, reducing physiological performance. As physiological performance does not protect against the effect of altitude, better predictors of individual susceptibility to altitude illness would facilitate team selection.
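
    As a back-of-the-envelope check (not the paper's model), the three quoted home-win probabilities are roughly reproduced by a simple logistic curve in altitude difference, and the half-goal-per-1000 m figure can be applied directly. The logistic coefficients below were chosen only to pass near the quoted values and are purely illustrative.

        import math

        def p_home_win(altitude_diff_m, a=0.15, b=0.000387):
            """Illustrative logistic curve through the quoted probabilities."""
            return 1.0 / (1.0 + math.exp(-(a + b * altitude_diff_m)))

        for d in (0, 3695, -3695):
            print(d, round(p_home_win(d), 3))   # ~0.54, ~0.83, ~0.22 as in the abstract

        print("expected extra goal difference at +2000 m:", 0.5 * 2000 / 1000)  # ~1 goal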

  9. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment.

    Science.gov (United States)

    Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C

    2015-02-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis method, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or not possible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance with technical, radiological, and statistical experts developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  10. Effect of the Target Motion Sampling Temperature Treatment Method on the Statistics and Performance

    Science.gov (United States)

    Viitanen, Tuomas; Leppänen, Jaakko

    2014-06-01

    Target Motion Sampling (TMS) is a stochastic on-the-fly temperature treatment technique that is being developed as a part of the Monte Carlo reactor physics code Serpent. The method provides for modeling of arbitrary temperatures in continuous-energy Monte Carlo tracking routines with only one set of cross sections stored in the computer memory. Previously, only the performance of the TMS method in terms of CPU time per transported neutron has been discussed. Since the effective cross sections are not calculated at any point of a transport simulation with TMS, reaction rate estimators must be scored using sampled cross sections, which is expected to increase the variances and, consequently, to decrease the figures-of-merit. This paper examines the effects of the TMS on the statistics and performance in practical calculations involving reaction rate estimation with collision estimators. Against all expectations it turned out that the usage of sampled response values has no practical effect on the performance of reaction rate estimators when using TMS with elevated basis cross section temperatures (EBT), i.e. the usual way. With 0 Kelvin cross sections a significant increase in the variances of capture rate estimators was observed right below the energy region of unresolved resonances, but at these energies the figures-of-merit could be increased using a simple resampling technique to decrease the variances of the responses. It was, however, noticed that the usage of the TMS method increases the statistical deviances of all estimators, including the flux estimator, by tens of percents in the vicinity of very strong resonances. This effect is actually not related to the usage of sampled responses, but is instead an inherent property of the TMS tracking method and concerns both EBT and 0 K calculations.

  11. Statistics in a nutshell

    CERN Document Server

    Boslaugh, Sarah

    2013-01-01

    Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.

  12. The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments

    International Nuclear Information System (INIS)

    Pham, Bihn T.; Einerson, Jeffrey J.

    2010-01-01

    This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory's Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.

  13. Multileaf collimator performance monitoring and improvement using semiautomated quality control testing and statistical process control

    International Nuclear Information System (INIS)

    Létourneau, Daniel; McNiven, Andrea; Keller, Harald; Wang, An; Amin, Md Nurul; Pearce, Jim; Norrlinger, Bernhard; Jaffray, David A.

    2014-01-01

    Purpose: High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. Methods: The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3–4 times/week over a period of 10–11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ±0.5 and ±1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. Results: The precision of the MLC performance monitoring QC test and the MLC itself was within ±0.22 mm for most MLC leaves

  14. Multileaf collimator performance monitoring and improvement using semiautomated quality control testing and statistical process control.

    Science.gov (United States)

    Létourneau, Daniel; Wang, An; Amin, Md Nurul; Pearce, Jim; McNiven, Andrea; Keller, Harald; Norrlinger, Bernhard; Jaffray, David A

    2014-12-01

    High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3-4 times/week over a period of 10-11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ± 0.5 and ± 1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. The precision of the MLC performance monitoring QC test and the MLC itself was within ± 0.22 mm for most MLC leaves and the majority of the
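
    The per-leaf control charts described in this work are individuals (X) charts. A minimal sketch of that bookkeeping is given below, using the conventional 2.66 x average-moving-range control limits together with a fixed ±0.5 mm specification limit; the simulated leaf-position errors and all numbers are illustrative, not measured MLC data.

        import numpy as np

        def individuals_chart(errors_mm, spec_mm=0.5):
            """Centre line, control limits and flags for a per-leaf individuals chart."""
            x = np.asarray(errors_mm, dtype=float)
            mr_bar = np.abs(np.diff(x)).mean()             # average moving range
            centre = x.mean()
            lcl, ucl = centre - 2.66 * mr_bar, centre + 2.66 * mr_bar
            out_of_control = (x < lcl) | (x > ucl)
            out_of_spec = np.abs(x) > spec_mm
            return centre, (lcl, ucl), out_of_control, out_of_spec

        rng = np.random.default_rng(3)
        leaf_errors = rng.normal(0.05, 0.08, 40)           # daily position errors, mm
        leaf_errors[25] = 0.6                              # a sticking leaf
        centre, limits, ooc, oos = individuals_chart(leaf_errors)
        print(round(centre, 3), np.round(limits, 3), int(ooc.sum()), int(oos.sum()))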

  15. LHCb: Statistical Comparison of CPU performance for LHCb applications on the Grid

    CERN Multimedia

    Graciani, R

    2009-01-01

    The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application that performs the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects that can be used for later physics analysis of the data (tracks, clusters, …). Both applications are based on the Gaudi and LHCb software frameworks. Gauss uses Pythia and Geant as underlying libraries for the simulation of the collision and the later passage of the generated particles through the LHCb detector. Brunel, in contrast, makes use of LHCb-specific code to process the data from each sub-detector. Both applications are CPU bound. Large Monte Carlo productions or data reconstructions running on the Grid are an ideal benchmark to compare the performance of the different CPU models for each case. Since the processed events are only statistically comparable, only statistical comparison of the...

  16. The nano-mechanical signature of Ultra High Performance Concrete by statistical nanoindentation techniques

    International Nuclear Information System (INIS)

    Sorelli, Luca; Constantinides, Georgios; Ulm, Franz-Josef; Toutlemonde, Francois

    2008-01-01

    Advances in engineering the microstructure of cementitious composites have led to the development of fiber reinforced Ultra High Performance Concretes (UHPC). The scope of this paper is twofold: first, to characterize the nano-mechanical properties of the phases governing the UHPC microstructure by means of a novel statistical nanoindentation technique; and second, to upscale those nanoscale properties, by means of continuum micromechanics, to the macroscopic scale of engineering applications. In particular, a combined investigation of nanoindentation, scanning electron microscope (SEM) and X-ray Diffraction (XRD) indicates that the fiber-matrix transition zone is relatively defect free. On this basis, a four-level multiscale model with defect free interfaces allows the composite stiffness to be accurately determined from the measured nano-mechanical properties. Besides evidencing the dominant role of high density calcium silicate hydrates and the stiffening effect of residual clinker, the suggested model may become a useful tool for further optimizing cement-based engineered composites

  17. Oil pipeline performance review 1995, 1996, 1997, 1998 : Technical/statistical report

    International Nuclear Information System (INIS)

    2000-12-01

    This document provides a summary of the pipeline performance and reportable pipeline failures of liquid hydrocarbon pipelines in Canada, for the years 1995 through 1998. The year 1994 was the last one for which the Oil Pipeline Performance Review (OPPR) was published on an annual basis. The OPPR will continue to be published until such time as the Pipeline Risk Assessment Sub-Committee (PRASC) has obtained enough pipeline failure data to be aggregated into a meaningful report. The shifts in the mix of reporting pipeline companies are apparent in the data presented, comparing the volumes transported and the traffic volume during the previous ten-year period. Another table presents a summary of the failures which occurred during the period under consideration, 1995-1998, allowing for a comparison with the data for the previous ten-year period. From the current perspective and from an historical context, this document provides a statistical review of the performance of the pipelines, covering refined petroleum product pipelines, clean oil pipelines and High Vapour Pressure (HVP) pipelines downstream of battery limits. Classified as reportable are spills of 1.5 cubic metres or more of liquid hydrocarbons, any amount of HVP material, and any incident involving an injury, a death, a fire, or an explosion. For those companies that responded to the survey, the major items, including the number of failures and volumes released, are accurate. Samples of the forms used for collecting the information are provided within the document. 6 tabs., 1 fig

  18. How to Make Nothing Out of Something: Analyses of the Impact of Study Sampling and Statistical Interpretation in Misleading Meta-Analytic Conclusions

    Directory of Open Access Journals (Sweden)

    Michael Robert Cunningham

    2016-10-01

    Full Text Available The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger, Wood, Stiff, and Chatzisarantis, 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter, Kofler, Forster, and McCullough’s (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim and fill and funnel plot asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test [PET] and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. meta-analysis results actually indicate that there is a real depletion effect – contrary to their title.
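
    One of the criticised procedures, the Precision Effect Test (PET), is easy to state: effect sizes are regressed on their standard errors with inverse-variance weights, and the intercept is read as the bias-corrected effect. The sketch below implements that regression on synthetic data; it is meant only to make the procedure concrete, not to reproduce any of the meta-analytic datasets discussed above.

        import numpy as np

        def pet_intercept(effects, std_errors):
            """Weighted regression of effect size on standard error; returns (intercept, slope)."""
            w = 1.0 / std_errors ** 2
            X = np.column_stack([np.ones_like(std_errors), std_errors])
            WX = X * w[:, None]
            beta = np.linalg.solve(X.T @ WX, X.T @ (w * effects))
            return beta[0], beta[1]

        rng = np.random.default_rng(7)
        se = rng.uniform(0.05, 0.4, 60)
        d = 0.45 + rng.normal(0.0, se)            # true effect 0.45, no publication bias
        print(np.round(pet_intercept(d, se), 3))  # intercept should sit near 0.45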

  19. Adaptive statistical iterative reconstruction and Veo: assessment of image quality and diagnostic performance in CT colonography at various radiation doses.

    Science.gov (United States)

    Yoon, Min A; Kim, Se Hyung; Lee, Jeong Min; Woo, Hyoun Sik; Lee, Eun Sun; Ahn, Se Jin; Han, Joon Koo

    2012-01-01

    To evaluate the diagnostic performance of computed tomography (CT) colonography (CTC) reconstructed with different levels of adaptive statistical iterative reconstruction (ASiR, GE Healthcare) and Veo (model-based iterative reconstruction, GE Healthcare) at various tube currents in detection of polyps in porcine colon phantoms. Five porcine colon phantoms with 46 simulated polyps were scanned at different radiation doses (10, 30, and 50 mA s) and were reconstructed using filtered back projection (FBP), ASiR (20%, 40%, and 60%) and Veo. Eleven data sets for each phantom (10-mA s FBP, 10-mA s 20% ASiR, 10-mA s 40% ASiR, 10-mA s 60% ASiR, 10-mA s Veo, 30-mA s FBP, 30-mA s 20% ASiR, 30-mA s 40% ASiR, 30-mA s 60% ASiR, 30-mA s Veo, and 50-mA s FBP) yielded a total of 55 data sets. Polyp detection sensitivity and confidence level of 2 independent observers were evaluated with the McNemar test, the Fisher exact test, and receiver operating characteristic curve analysis. Comparative analyses of overall image quality score, measured image noise, and interpretation time were also performed. Per-polyp detection sensitivities and specificities were highest in 10-mA s Veo, 30-mA s FBP, 30-mA s 60% ASiR, and 50-mA s FBP (sensitivity, 100%; specificity, 100%). The area-under-the-curve values for the overall performance of each data set were also highest (1.000) at 50-mA s FBP, 30-mA s FBP, 30-mA s 60% ASiR, and 10-mA s Veo. Images reconstructed with ASiR showed statistically significant improvement in per-polyp detection sensitivity as the percent level of ASiR increased (10-mA s FBP vs 10-mA s 20% ASiR, P = 0.011; 10-mA s FBP vs 10-mA s 40% ASiR, P = 0.000; 10-mA s FBP vs 10-mA s 60% ASiR, P = 0.000; 10-mA s 20% ASiR vs 40% ASiR, P = 0.034). Overall image quality score was highest at 30-mA s Veo and 50-mA s FBP. The quantitative measurement of the image noise was lowest at 30-mA s Veo and second lowest at 10-mA s Veo. There was a trend of decrease in time

  20. Turking Statistics: Student-Generated Surveys Increase Student Engagement and Performance

    Science.gov (United States)

    Whitley, Cameron T.; Dietz, Thomas

    2018-01-01

    Thirty years ago, Hubert M. Blalock Jr. published an article in "Teaching Sociology" about the importance of teaching statistics. We honor Blalock's legacy by assessing how using Amazon Mechanical Turk (MTurk) in statistics classes can enhance student learning and increase statistical literacy among social science graduate students. In…

  1. Effect of the Target Motion Sampling temperature treatment method on the statistics and performance

    International Nuclear Information System (INIS)

    Viitanen, Tuomas; Leppänen, Jaakko

    2015-01-01

    Highlights: • Use of the Target Motion Sampling (TMS) method with collision estimators is studied. • The expected values of the estimators agree with NJOY-based reference. • In most practical cases also the variances of the estimators are unaffected by TMS. • Transport calculation slow-down due to TMS dominates the impact on figures-of-merit. - Abstract: Target Motion Sampling (TMS) is a stochastic on-the-fly temperature treatment technique that is being developed as a part of the Monte Carlo reactor physics code Serpent. The method provides for modeling of arbitrary temperatures in continuous-energy Monte Carlo tracking routines with only one set of cross sections stored in the computer memory. Previously, only the performance of the TMS method in terms of CPU time per transported neutron has been discussed. Since the effective cross sections are not calculated at any point of a transport simulation with TMS, reaction rate estimators must be scored using sampled cross sections, which is expected to increase the variances and, consequently, to decrease the figures-of-merit. This paper examines the effects of the TMS on the statistics and performance in practical calculations involving reaction rate estimation with collision estimators. Against all expectations it turned out that the usage of sampled response values has no practical effect on the performance of reaction rate estimators when using TMS with elevated basis cross section temperatures (EBT), i.e. the usual way. With 0 Kelvin cross sections a significant increase in the variances of capture rate estimators was observed right below the energy region of unresolved resonances, but at these energies the figures-of-merit could be increased using a simple resampling technique to decrease the variances of the responses. It was, however, noticed that the usage of the TMS method increases the statistical deviances of all estimators, including the flux estimator, by tens of percents in the vicinity of very
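
    The figure-of-merit referred to above is the usual Monte Carlo FOM = 1 / (relative variance x CPU time), so either a variance increase from sampled responses or a tracking slow-down from TMS lowers it. A short sketch with made-up numbers:

        def figure_of_merit(rel_std_dev, cpu_time_s):
            """Monte Carlo figure-of-merit: 1 / (relative variance * CPU time)."""
            return 1.0 / (rel_std_dev ** 2 * cpu_time_s)

        baseline = figure_of_merit(rel_std_dev=0.002, cpu_time_s=600.0)
        with_tms = figure_of_merit(rel_std_dev=0.002, cpu_time_s=780.0)  # slower tracking
        print(f"relative FOM with TMS: {with_tms / baseline:.2f}")        # ~0.77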

  2. Performance Assessment and Sensitivity Analyses of Disposal of Plutonium as Can-in-Canister Ceramic

    International Nuclear Information System (INIS)

    Rainer Senger

    2001-01-01

    The purpose of this analysis is to examine whether there is a justification for using high-level waste (HLW) as a surrogate for plutonium disposal in can-in-canister ceramic in the total-system performance assessment (TSPA) model for the Site Recommendation (SR). In the TSPA-SR model, the immobilized plutonium waste form is not explicitly represented, but is implicitly represented as an equal number of canisters of HLW. There are about 50 metric tons of plutonium in the U. S. Department of Energy inventory of surplus fissile material that could be disposed. Approximately 17 tons of this material contain significant quantities of impurities and are considered unsuitable for mixed-oxide (MOX) reactor fuel. This material has been designated for direct disposal by immobilization in a ceramic waste form and encapsulating this waste form in high-level waste (HLW). The remaining plutonium is suitable for incorporation into MOX fuel assemblies for commercial reactors (Shaw 1999, Section 2). In this analysis, two cases of immobilized plutonium disposal are analyzed, the 17-ton case and the 13-ton case (Shaw et al. 2001, Section 2.2). The MOX spent-fuel disposal is not analyzed in this report. In the TSPA-VA (CRWMS M and O 1998a, Appendix B, Section B-4), the calculated dose release from immobilized plutonium waste form (can-in-canister ceramic) did not exceed that from an equivalent amount of HLW glass. This indicates that the HLW could be used as a surrogate for the plutonium can-in-canister ceramic. Representation of can-in-canister ceramic as a surrogate is necessary to reduce the number of waste forms in the TSPA model. This reduction reduces the complexity and running time of the TSPA model and makes the analyses tractable. This document was developed under a Technical Work Plan (CRWMS M and O 2000a), and is compliant with that plan. The application of the Quality Assurance (QA) program to the development of that plan (CRWMS M and O 2000a) and of this Analysis is

  3. Dynamic statistical optimization of GNSS radio occultation bending angles: advanced algorithm and performance analysis

    Science.gov (United States)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.

    2015-08-01

    We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
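
    At its core, the statistical optimization step blends an observed bending-angle profile with a background profile using their error covariances, x = xb + B (B + O)^-1 (y - xb); the contribution of the new algorithm lies in how B and O are estimated. The sketch below shows only that blending step with simple diagonal stand-in covariances, not the paper's geographically varying estimates.

        import numpy as np

        def statistically_optimize(y_obs, x_bg, B, O):
            """Optimal linear combination of observed and background profiles."""
            gain = B @ np.linalg.inv(B + O)
            return x_bg + gain @ (y_obs - x_bg)

        n = 50                                              # impact-height levels
        x_bg = np.linspace(1e-3, 1e-5, n)                   # background bending angles (rad)
        y_obs = x_bg * (1 + np.random.default_rng(5).normal(0.0, 0.2, n))
        B = np.diag((0.10 * x_bg) ** 2)                     # 10% background uncertainty
        O = np.diag((0.20 * x_bg) ** 2)                     # 20% observation uncertainty
        print(statistically_optimize(y_obs, x_bg, B, O)[:3])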

  4. Categorization of the trophic status of a hydroelectric power plant reservoir in the Brazilian Amazon by statistical analyses and fuzzy approaches.

    Science.gov (United States)

    da Costa Lobato, Tarcísio; Hauser-Davis, Rachel Ann; de Oliveira, Terezinha Ferreira; Maciel, Marinalva Cardoso; Tavares, Maria Regina Madruga; da Silveira, Antônio Morais; Saraiva, Augusto Cesar Fonseca

    2015-02-15

    The Amazon area has been increasingly suffering from anthropogenic impacts, especially due to the construction of hydroelectric power plant reservoirs. The analysis and categorization of the trophic status of these reservoirs are of interest to indicate man-made changes in the environment. In this context, the present study aimed to categorize the trophic status of a hydroelectric power plant reservoir located in the Brazilian Amazon by constructing a novel Water Quality Index (WQI) and Trophic State Index (TSI) for the reservoir using major ion concentrations and physico-chemical water parameters determined in the area and taking into account the sampling locations and the local hydrological regimes. After applying statistical analyses (factor analysis and cluster analysis) and establishing a rule base of a fuzzy system to these indicators, the results obtained by the proposed method were then compared to the generally applied Carlson and a modified Lamparelli trophic state index (TSI), specific for trophic regions. The categorization of the trophic status by the proposed fuzzy method was shown to be more reliable, since it takes into account the specificities of the study area, while the Carlson and Lamparelli TSI do not, and, thus, tend to over or underestimate the trophic status of these ecosystems. The statistical techniques proposed and applied in the present study, are, therefore, relevant in cases of environmental management and policy decision-making processes, aiding in the identification of the ecological status of water bodies. With this, it is possible to identify which factors should be further investigated and/or adjusted in order to attempt the recovery of degraded water bodies. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

    Energy Technology Data Exchange (ETDEWEB)

    GREER DA; THIEN MG

    2012-01-12

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS) has previously presented the results of mixing performance in two different sizes of small scale DSTs to support scale up estimates of full scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high level waste to the WTP. WRPS has performed small scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results which lead to the conclusion that the two scales of small DST are behaving similarly and that full scale performance is predictable will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission will be accomplished successfully. This paper will focus on the analytical data collected from mixing, sampling, and batch transfer testing from the small scale mixing demonstration tanks and how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data will be provided. The paper will then discuss the processing and manipulation of the data which is necessary to begin evaluating sampling and batch transfer performance. This discussion will also include the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests. The

  6. Region-of-interest analyses of one-dimensional biomechanical trajectories: bridging 0D and 1D theory, augmenting statistical power

    Directory of Open Access Journals (Sweden)

    Todd C. Pataky

    2016-11-01

    Full Text Available One-dimensional (1D) kinematic, force, and EMG trajectories are often analyzed using zero-dimensional (0D) metrics like local extrema. Recently whole-trajectory 1D methods have emerged in the literature as alternatives. Since 0D and 1D methods can yield qualitatively different results, the two approaches may appear to be theoretically distinct. The purposes of this paper were (a) to clarify that 0D and 1D approaches are actually just special cases of a more general region-of-interest (ROI) analysis framework, and (b) to demonstrate how ROIs can augment statistical power. We first simulated millions of smooth, random 1D datasets to validate theoretical predictions of the 0D, 1D and ROI approaches and to emphasize how ROIs provide a continuous bridge between 0D and 1D results. We then analyzed a variety of public datasets to demonstrate potential effects of ROIs on biomechanical conclusions. Results showed, first, that a priori ROI particulars can qualitatively affect the biomechanical conclusions that emerge from analyses and, second, that ROIs derived from exploratory/pilot analyses can detect smaller biomechanical effects than are detectable using full 1D methods. We recommend regarding ROIs, like data filtering particulars and Type I error rate, as parameters which can affect hypothesis testing results, and thus as sensitivity analysis tools to ensure arbitrary decisions do not influence scientific interpretations. Last, we describe open-source Python and MATLAB implementations of 1D ROI analysis for arbitrary experimental designs ranging from one-sample t tests to MANOVA.
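
    The 0D/ROI continuum can be illustrated with a toy paired comparison in which the group difference is confined to part of the trajectory: testing the mean signal inside a region of interest detects an effect that a single-instant (0D) test at an arbitrary time point misses. The sketch below does exactly that with synthetic trajectories; it deliberately does not reproduce full 1D methods (which control the error rate over the whole continuum, e.g. via random field theory), and all names and numbers are invented.

        import numpy as np

        rng = np.random.default_rng(11)
        t = np.linspace(0, 100, 101)                      # % of the movement cycle
        effect = 0.8 * np.exp(-((t - 60) ** 2) / 150.0)   # effect confined to ~55-70%
        A = rng.normal(0.0, 1.0, (15, t.size))            # condition A, 15 subjects
        B = rng.normal(0.0, 1.0, (15, t.size)) + effect   # condition B, paired

        def one_sample_t(mask):
            """t statistic for the paired differences averaged inside a time window."""
            d = (B - A)[:, mask].mean(axis=1)
            return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

        print("0D, arbitrary instant (30%):", round(one_sample_t(t == 30), 2))
        print("0D, peak instant (60%):     ", round(one_sample_t(t == 60), 2))
        print("ROI, 55-70% window:         ", round(one_sample_t((t >= 55) & (t <= 70)), 2))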

  7. Making the user visible: analysing irrigation practices and farmers’ logic to explain actual drip irrigation performance

    NARCIS (Netherlands)

    Benouniche, M.; Kuper, M.; Hammani, A.; Boesveld, H.

    2014-01-01

    The actual performance of drip irrigation (irrigation efficiency, distribution uniformity) in the field is often quite different from that obtained in experimental stations. We developed an approach to explain the actual irrigation performance of drip irrigation systems by linking measured

  8. 76 FR 24831 - Site-Specific Analyses for Demonstrating Compliance With Subpart C Performance Objectives

    Science.gov (United States)

    2011-05-03

    ...-level radioactive waste disposal facilities to conduct site-specific analyses to demonstrate compliance... public health and safety, these amendments would enhance the safe disposal of low-level radioactive waste... would be to enhance the safe disposal of low-level radioactive waste. The NRC is also proposing...

  9. Phytochemical Profile of Erythrina variegata by Using High-Performance Liquid Chromatography and Gas Chromatography-Mass Spectroscopy Analyses

    OpenAIRE

    Suriyavathana Muthukrishnan; Subha Palanisamy; Senthilkumar Subramanian; Sumathi Selvaraj; Kavitha Rani Mari; Ramalingam Kuppulingam

    2016-01-01

    Natural products derived from plant sources have been utilized to treat patients with numerous diseases. The phytochemical constituents present in ethanolic leaf extract of Erythrina variegata (ELEV) were identified by using high-performance liquid chromatography (HPLC) and gas chromatography-mass spectroscopy (GC-MS) analyses. Shade dried leaves were powdered and extracted with ethanol for analyses through HPLC to identify selected flavonoids and through GC-MS to identify other molecules. Th...

  10. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    Science.gov (United States)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in afterpulsing model and a simple analytical expression is derived to estimate after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation and this behavioral simulation model doesn't include any empirical parameters. The developed SPAD model is implemented in Verilog-A behavioral hardware description language and successfully operated on commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in a good accordance with the test data, validating high simulation accuracy.

  11. The Impact of Time Difference between Satellite Overpass and Ground Observation on Cloud Cover Performance Statistics

    Directory of Open Access Journals (Sweden)

    Jędrzej S. Bojanowski

    2014-12-01

    Full Text Available Cloud property data sets derived from passive sensors onboard the polar orbiting satellites (such as the NOAA’s Advanced Very High Resolution Radiometer) have global coverage and now span a climatological time period. Synoptic surface observations (SYNOP) are often used to characterize the accuracy of satellite-based cloud cover. Infrequent overpasses of polar orbiting satellites combined with the 3- or 6-h SYNOP frequency lead to collocation time differences of up to 3 h. The associated collocation error degrades the cloud cover performance statistics such as the Hanssen-Kuiper discriminant (HK) by up to 45%. Limiting the time difference to 10 min, on the other hand, introduces a sampling error due to a lower number of corresponding satellite and SYNOP observations. This error depends on both the length of the validated time series and the SYNOP frequency. The trade-off between collocation and sampling error calls for an optimum collocation time difference. It depends, however, on cloud cover characteristics and SYNOP frequency, and cannot be generalized. Instead, a method is presented to reconstruct the unbiased (true) HK from HK affected by the collocation differences, which significantly (t-test p < 0.01) improves the validation results.
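
    The Hanssen-Kuiper discriminant itself is a simple contingency-table score, HK = hit rate - false alarm rate, computed from collocated satellite and SYNOP cloud/no-cloud flags. A minimal sketch with made-up counts:

        def hanssen_kuiper(hits, misses, false_alarms, correct_negatives):
            """HK discriminant: 1 = perfect discrimination, 0 = no skill."""
            pod = hits / (hits + misses)                              # probability of detection
            pofd = false_alarms / (false_alarms + correct_negatives)  # false alarm rate
            return pod - pofd

        print(round(hanssen_kuiper(hits=620, misses=80,
                                   false_alarms=110, correct_negatives=490), 3))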

  12. The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples.

    Science.gov (United States)

    Lind, Mads V; Savolainen, Otto I; Ross, Alastair B

    2016-08-01

    Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is advantageous compared to single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider that may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites, their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery, and how MS-based results can be used to increase the biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.

  13. Chemical data and statistical analyses from a uranium hydrogeochemical survey of the Rio Ojo Caliente drainage basin, New Mexico. Part I. Water

    International Nuclear Information System (INIS)

    Wenrich-Verbeek, K.J.; Suits, V.J.

    1979-01-01

    This report presents the chemical analyses and statistical evaluation of 62 water samples collected in the north-central part of New Mexico near Rio Ojo Caliente. Both spring and surface-water samples were taken throughout the Rio Ojo Caliente drainage basin above and a few miles below the town of La Madera. A high U concentration (15 μg/l) found in the water of the Rio Ojo Caliente near La Madera, Rio Arriba County, New Mexico, during a regional sampling-technique study in August 1975 by the senior author, was investigated further in May 1976 to determine whether stream waters could be effectively used to trace the source of a U anomaly. A detailed study of the tributaries to the Rio Ojo Caliente, involving 29 samples, was conducted during a moderate discharge period, May 1976, so that small tributaries would contain water. This study isolated Canada de la Cueva as the tributary contributing the anomalous U, so that in May 1977, an extremely low discharge period due to the 1977 drought, an additional 33 samples were taken to further define the anomalous area. 6 references, 3 figures, 6 tables

  14. Examining the Performance of Statistical Downscaling Methods: Toward Matching Applications to Data Products

    Science.gov (United States)

    Dixon, K. W.; Lanzante, J. R.; Adams-Smith, D.

    2017-12-01

    Several challenges exist when seeking to use future climate model projections in a climate impacts study. A not uncommon approach is to utilize climate projection data sets derived from more than one future emissions scenario and from multiple global climate models (GCMs). The range of future climate responses represented in the set is sometimes taken to be indicative of levels of uncertainty in the projections. Yet, GCM outputs are deemed to be unsuitable for direct use in many climate impacts applications. GCM grids typically are viewed as being too coarse. Additionally, regional or local-scale biases in a GCM's simulation of the contemporary climate that may not be problematic from a global climate modeling perspective may be unacceptably large for a climate impacts application. Statistical downscaling (SD) of climate projections - a type of post-processing that uses observations to inform the refinement of GCM projections - is often used in an attempt to account for GCM biases and to provide additional spatial detail. "What downscaled climate projection is the best one to use" is a frequently asked question, but one that is not always easy to answer, as it can be dependent on stakeholder needs and expectations. Here we present results from a perfect model experimental design illustrating how SD method performance can vary not only by SD method, but how performance can also vary by location, season, climate variable of interest, amount of projected climate change, SD configuration choices, and whether one is interested in central tendencies or the tails of the distribution. Awareness of these factors can be helpful when seeking to determine the suitability of downscaled climate projections for specific climate impacts applications. It also points to the potential value of considering more than one SD data product in a study, so as to acknowledge uncertainties associated with the strengths and weaknesses of different downscaling methods.

  15. Radiological analyses of France Telecom surge arresters. Study performed for the CGT FAPT Cantal

    International Nuclear Information System (INIS)

    2010-02-01

    This document reports the radiological characterization of various versions of surge arresters used in the past to protect telephone lines against over-voltages. These equipment, which use various radioactive materials, were assessed by gamma radiation flow measurements, alpha-beta-gamma count rate measurements, dose rate measurements, gamma spectrometry analyses, tritium emanation test, radon 222 emanation test, smearing. Recommendations are formulated to manage radioactive surge arresters which are still being operated

  16. Introduction of a Journal Excerpt Activity Improves Undergraduate Students' Performance in Statistics

    Science.gov (United States)

    Rabin, Laura A.; Nutter-Upham, Katherine E.

    2010-01-01

    We describe an active learning exercise intended to improve undergraduate students' understanding of statistics by grounding complex concepts within a meaningful, applied context. Students in a journal excerpt activity class read brief excerpts of statistical reporting from published research articles, answered factual and interpretive questions,…

  17. The Effects of Pre-Lecture Quizzes on Test Anxiety and Performance in a Statistics Course

    Science.gov (United States)

    Brown, Michael J.; Tallon, Jennifer

    2015-01-01

    The purpose of our study was to examine the effects of pre-lecture quizzes in a statistics course. Students (N = 70) from 2 sections of an introductory statistics course served as participants in this study. One section completed pre-lecture quizzes whereas the other section did not. Completing pre-lecture quizzes was associated with improved exam…

  18. Effect of Task Presentation on Students' Performances in Introductory Statistics Courses

    Science.gov (United States)

    Tomasetto, Carlo; Matteucci, Maria Cristina; Carugati, Felice; Selleri, Patrizia

    2009-01-01

    Research on academic learning indicates that many students experience major difficulties with introductory statistics and methodology courses. We hypothesized that students' difficulties may depend in part on the fact that statistics tasks are commonly viewed as related to the threatening domain of math. In two field experiments which we carried…

  19. Data Collection Manual for Academic and Research Library Network Statistics and Performance Measures.

    Science.gov (United States)

    Shim, Wonsik "Jeff"; McClure, Charles R.; Fraser, Bruce T.; Bertot, John Carlo

    This manual provides a beginning approach for research libraries to better describe the use and users of their networked services. The manual also aims to increase the visibility and importance of developing such statistics and measures. Specific objectives are: to identify selected key statistics and measures that can describe use and users of…

  20. Les OPCVM actions au Maroc : mesure et analyse de performance-risque

    Directory of Open Access Journals (Sweden)

    F. El Majidi

    2017-09-01

    Full Text Available This article aims to rank Moroccan equity mutual funds (OPCVM actions) over one-year and five-year horizons according to performance, risk, and the risk-return ratio. The data come from the performance tables published by ASFIM. The performance calculations over the two horizons show that the rankings do not differ between absolute and relative performance; by contrast, the rankings are unstable from one horizon to the other. The number of funds posting a positive performance over one year is higher than the number of funds whose performance is computed over five years, in both absolute and relative terms. Finally, the ranking by the risk-return ratio is more rigorous and differs from the performance-based rankings, since funds change positions and are distinguished by their inherent risks.
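    As a rough illustration of the kind of ranking described above (performance, risk and a risk-return ratio computed from monthly returns), the following sketch uses synthetic data; the fund names, the synthetic return series and the use of a simple return/volatility ratio are assumptions for illustration, not the article's data or exact methodology.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical monthly returns for three funds over five years (60 months).
returns = pd.DataFrame(rng.normal(0.005, 0.03, size=(60, 3)),
                       columns=["Fund A", "Fund B", "Fund C"])

ann_return = (1 + returns).prod() ** (12 / len(returns)) - 1   # annualised return
ann_vol = returns.std() * np.sqrt(12)                          # annualised volatility
ratio = ann_return / ann_vol                                   # simple return/risk ratio

ranking = pd.DataFrame({"return": ann_return, "risk": ann_vol, "ratio": ratio})
print(ranking.sort_values("ratio", ascending=False))
```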

  1. Performing dynamic time history analyses by extension of the response spectrum method

    International Nuclear Information System (INIS)

    Hulbert, G.M.

    1983-01-01

    A method is presented to calculate the dynamic time history response of finite-element models using results from response spectrum analyses. The proposed modified time history method does not represent a new mathematical approach to dynamic analysis but suggests a more efficient ordering of the analytical equations and procedures. The modified time history method is considerably faster and less expensive to use than normal time history methods. This paper presents the theory and implementation of the modified time history approach along with comparisons of the modified and normal time history methods for a prototypic seismic piping design problem.

  2. A statistical analysis of electrical cerebral activity; Contribution a l'etude de l'analyse statistique de l'activite electrique cerebrale

    Energy Technology Data Exchange (ETDEWEB)

    Bassant, Marie-Helene

    1971-01-15

    The aim of this work was to study the statistical properties of the amplitude of the electroencephalographic signal. The experimental method is described (implantation of electrodes, acquisition and treatment of data). The program of the mathematical analysis is given (calculation of probability density functions, study of stationarity) and the validity of the tests is discussed. The results concern ten rabbits. Segments of EEG lasting 40 s were sampled at very short intervals (500 μs). The probability density functions established for different brain structures (especially the dorsal hippocampus) and areas were compared during sleep, arousal and visual stimulation. Using a χ² test, it was found that the Gaussian distribution assumption was rejected in 96.7 per cent of the cases. For a given physiological state, there was no mathematical reason to reject the assumption of stationarity (in 96 per cent of the cases). (author)
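    A χ² goodness-of-fit test of the kind reported above (testing sampled EEG amplitudes against a fitted normal distribution) can be sketched as follows; the synthetic signal, the bin count and the significance level are illustrative assumptions and do not reproduce the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
amplitudes = rng.standard_t(df=5, size=80_000)  # hypothetical 40 s sampled every 500 us

mu, sigma = amplitudes.mean(), amplitudes.std(ddof=1)
n_bins = 20
# Bin edges chosen so each bin has equal expected probability under the fitted normal.
edges = stats.norm.ppf(np.linspace(0, 1, n_bins + 1), loc=mu, scale=sigma)
edges[0], edges[-1] = -np.inf, np.inf

observed, _ = np.histogram(amplitudes, bins=edges)
expected = np.full(n_bins, amplitudes.size / n_bins)

# ddof=2 because two parameters (mean, standard deviation) were estimated from the data.
chi2, p = stats.chisquare(observed, f_exp=expected, ddof=2)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}  ->  reject normality at 5%: {p < 0.05}")
```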

  3. Analysing collaborative performance and cost allocation for the joint route planning problem

    OpenAIRE

    Verdonck, Lotte; Ramaekers, Katrien; Depaire, Benoît; Caris, An; Janssens, Gerrit K.

    2017-01-01

    Although organisations become increasingly aware of the inevitable character of horizontal collaboration, surveys report failure rates up to 70 percent for starting strategic partnerships. While a growing body of research acknowledges the importance of the partner selection and cost allocation process, no extensive study has been performed on the numerical relationship between specific company traits, applied allocation mechanisms and collaborative performance. This paper investigates the imp...

  4. ADAPTER: Analysing and developing adaptability and performance in teams to enhance resilience

    International Nuclear Information System (INIS)

    Beek, Dolf van der; Schraagen, Jan Maarten

    2015-01-01

    In the current study, the concept of team resilience was operationalized by developing a first version of a questionnaire (ADAPTER) driven by the four essential abilities of resilience (Hollnagel E, 2011, Resilience engineering in practice: a guidebook, p. 275–96) and expanded with more relation-oriented abilities of leadership and cooperation. The development and administration of ADAPTER took place within two companies. Factor analyses using data of 91 participants largely supported the hypothesized 6-dimension taxonomy. Support was found for Team responding behavior, Shared Leadership and Cooperation with other teams/departments. Anticipation showed considerable overlap with the monitoring scale, possibly due to the fact that monitoring items dealt with prospective situations. Using ADAPTER questionnaire results as a starting point for further in-depth discussion among the different teams in the pilot companies proved very useful. Suggestions for future research include contextualizing the questionnaire by embedding it in actual cases or having it filled in after specific incidents. Also, support of organization should be included as a separate dimension in ADAPTER. - Highlights: • Development of a team resilience questionnaire (ADAPTER). • Driven by Hollnagel's resilience abilities plus shared leadership and cooperation. • Pilot testing of ADAPTER took place within two companies. • Factor analyses (N=91) largely supported the hypothesized 6-dimension taxonomy. • Results provide a useful starting point for further in-depth discussions

  5. First study of correlation between oleic acid content and SAD gene polymorphism in olive oil samples through statistical and bayesian modeling analyses.

    Science.gov (United States)

    Ben Ayed, Rayda; Ennouri, Karim; Ercişli, Sezai; Ben Hlima, Hajer; Hanana, Mohsen; Smaoui, Slim; Rebai, Ahmed; Moreau, Fabienne

    2018-04-10

    Virgin olive oil is appreciated for its particular aroma and taste and is recognized worldwide for its nutritional value and health benefits. The olive oil contains a vast range of healthy compounds such as monounsaturated free fatty acids, especially, oleic acid. The SAD.1 polymorphism localized in the Stearoyl-acyl carrier protein desaturase gene (SAD) was genotyped and showed that it is associated with the oleic acid composition of olive oil samples. However, the effect of polymorphisms in fatty acid-related genes on olive oil monounsaturated and saturated fatty acids distribution in the Tunisian olive oil varieties is not understood. Seventeen Tunisian olive-tree varieties were selected for fatty acid content analysis by gas chromatography. The association of SAD.1 genotypes with the fatty acids composition was studied by statistical and Bayesian modeling analyses. Fatty acid content analysis showed interestingly that some Tunisian virgin olive oil varieties could be classified as a functional food and nutraceuticals due to their particular richness in oleic acid. In fact, the TT-SAD.1 genotype was found to be associated with a higher proportion of mono-unsaturated fatty acids (MUFA), mainly oleic acid (C18:1) (r = - 0.79, p SAD.1 association with the oleic acid composition of olive oil was identified among the studied varieties. This correlation fluctuated between studied varieties, which might elucidate variability in lipidic composition among them and therefore reflecting genetic diversity through differences in gene expression and biochemical pathways. SAD locus would represent an excellent marker for identifying interesting amongst virgin olive oil lipidic composition.
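    The reported genotype-oleic acid association is, in essence, a correlation between a binary genotype indicator and a continuous fatty acid proportion. A minimal sketch of such a computation is given below; the data values and the genotype coding are hypothetical and the point-biserial correlation is only one simple way to express such an association, not necessarily the statistical or Bayesian modelling used in the study.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 1 = TT-SAD.1 genotype, 0 = other genotypes, paired with
# oleic acid (C18:1) percentages measured by gas chromatography.
genotype = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0])
oleic = np.array([72.1, 70.5, 69.8, 61.2, 63.4, 71.0, 60.8, 62.5, 68.9,
                  59.7, 70.2, 64.1, 62.0, 69.5, 61.9, 71.4, 63.0])

r, p = stats.pointbiserialr(genotype, oleic)
print(f"point-biserial r = {r:.2f}, p = {p:.3g}")
```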

  6. Scientometric analyses of studies on the role of innate variation in athletic performance.

    Science.gov (United States)

    Lombardo, Michael P; Emiah, Shadie

    2014-01-01

    Historical events have produced an ideologically charged atmosphere in the USA surrounding the potential influences of innate variation on athletic performance. We tested the hypothesis that scientific studies of the role of innate variation in athletic performance were less likely to have authors with USA addresses than addresses elsewhere because of this cultural milieu. Using scientometric data collected from 290 scientific papers published in peer-reviewed journals from 2000-2012, we compared the proportions of authors with USA addresses with those that listed addresses elsewhere that studied the relationships between athletic performance and (a) prenatal exposure to androgens, as indicated by the ratio between digits 2 and 4, and (b) the genotypes for angiotensin converting enzyme, α-actinin-3, and myostatin; traits often associated with athletic performance. Authors with USA addresses were disproportionately underrepresented on papers about the role of innate variation in athletic performance. We searched NIH and NSF databases for grant proposals solicited or funded from 2000-2012 to determine if the proportion of authors that listed USA addresses was associated with funding patterns. NIH did not solicit grant proposals designed to examine these factors in the context of athletic performance and neither NIH nor NSF funded grants designed to study these topics. We think the combined effects of a lack of government funding and the avoidance of studying controversial or non-fundable topics by USA based scientists are responsible for the observation that authors with USA addresses were underrepresented on scientific papers examining the relationships between athletic performance and innate variation.

  7. Hydrogeologic characterization and evolution of the 'excavation damaged zone' by statistical analyses of pressure signals: application to galleries excavated at the clay-stone sites of Mont Terri (Ga98) and Tournemire (Ga03)

    International Nuclear Information System (INIS)

    Fatmi, H.; Ababou, R.; Matray, J.M.; Joly, C.

    2010-01-01

    before, during, and after excavation works. To achieve the above objectives, we analyse and interpret pressure signals using several statistical methods which we programmed as custom-made Matlab Tool Boxes - including the following: 1) Auto-correlation, cross-correlation and temporal transfer functions (deconvolution). 2) Fourier spectral analysis, cross-spectral analysis and frequency gain. 3) Cross analysis of pore water and atmospheric pressure; ACF analysis (Atmospheric Correction Factory); relation with the concept 'relative pressure'; consequences on the barometric effect. 4) Multi-resolution dyadic wavelet analysis (and cross-wavelet analysis). 5) Statistical envelope of evolutionary signals (Hilbert transform, Cramer-Leadbetter envelope). Before these analyses are performed, the raw signals are pre-processed using a number of techniques for homogenizing the time steps, detecting outliers, and/or reconstructing missing data. Furthermore, the various methods of signal analyses themselves involve parameters that need to be tested (such as spectral estimation filters). For these reasons, in each case, a validation test is developed for a better interpretation of the signal processing tools. In this paper, the statistical analyses of pressure signals are used to assess the hydraulic characteristics of the clay-stone. In particular, the statistical analyses allow us to detect and to characterize, at diurnal time scales, the relationship between piezo-metric and atmospheric pressure (whence the barometric efficiency), and at semi-diurnal time scales, the statistical influence of earth tides (whence an estimate of specific storage). We focus in particular on the time evolution of these phenomena in the zone damaged by excavation works. Excavation of the galleries: effects on the evolution of pore pressure, and specific storage The statistical analyses of pore pressure signals indicate the emergence of a damaged zone (EDZ) as the excavation front passes near the
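    One of the analyses listed above, the cross analysis of pore-water and atmospheric pressure, can be illustrated with a simple sketch: cross-correlating the two signals and estimating a barometric response as the regression slope of differenced pore pressure on differenced barometric pressure. The synthetic series and the use of a first-difference regression are assumptions for illustration and do not reproduce the authors' Matlab toolboxes.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 24 * 60  # one day of 1-minute samples (hypothetical)
t = np.arange(n)

# Synthetic barometric pressure (kPa) with a diurnal component, and a pore
# pressure that responds to it with an assumed efficiency of ~0.6 plus noise.
baro = 101.3 + 0.15 * np.sin(2 * np.pi * t / (24 * 60)) + rng.normal(0, 0.01, n)
pore = 250.0 - 0.6 * (baro - baro.mean()) + rng.normal(0, 0.01, n)

# Normalised cross-correlation at zero lag.
b = (baro - baro.mean()) / baro.std()
q = (pore - pore.mean()) / pore.std()
xcorr0 = np.mean(b * q)

# Barometric response estimated from first differences (slope of dP_pore vs dP_atm).
slope = np.polyfit(np.diff(baro), np.diff(pore), 1)[0]
print(f"zero-lag cross-correlation = {xcorr0:.2f}, barometric response = {slope:.2f}")
```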

  8. Performance analyses of Elmo Bumpy Torus plasmas and plasma support systems

    International Nuclear Information System (INIS)

    Fenstermacher, M.E.

    1979-01-01

    The development and application of the OASIS Code (Operational Analysis of ELMO Bumpy Torus Support and Ignition Systems) for the study of EBT device and plasma performance are presented. The code performs a time-independent, zero-dimensional self-consistent calculation of plasma and plasma-support system parameters for the physics and engineering of EBT devices. The features of OASIS modeling for the EBT plasma include: (1) particle balance of the bulk toroidal and electron ring plasma components for experimental (H-H, D-D, He-He, etc.) as well as reactor (D-T) devices; (2) energy balance in the bulk and ring plasmas for externally heated or ignition devices; (3) alpha particle effects for reactor devices; (4) auxiliary heating effects, including microwave (ECRH), RF heating (e.g., ICRH), and neutral beam methods; and (5) ignition conditions, including fusion power, alpha power and neutron wall loading. The performance studies using OASIS focused on variation in plasma and device size and on microwave input power and frequency. An additional study was performed to determine the characteristics of an EBT reactor proof-of-principle device operated with a deuterium-tritium plasma. Sensitivity studies were performed for variation in the input microwave power sharing fractions and the dependence of the bulk nτ scaling law on bulk electron temperature.

  9. Assessment of CONTAIN and MELCOR for performing LOCA and LOVA analyses in ITER

    International Nuclear Information System (INIS)

    Merrill, B.J.; Hagrman, D.L.; Gaeta, M.J.; Petti, D.A.

    1994-09-01

    This report describes the results of an assessment of the CONTAIN and MELCOR computer codes for ITER LOCA and LOVA applications. As part of the assessment, the results of running a test problem that describes an ITER LOCA are presented. It is concluded that the MELCOR code should be the preferred code for ITER severe accident thermal hydraulic analyses. This code will require the least modification to be appropriate for calculating thermal hydraulic behavior in ITER relevant conditions that include vacuum, cryogenics, ITER temperatures, and the presence of a liquid metal test module. The assessment of the aerosol transport models in these codes concludes that several modifications would have to be made to CONTAIN and/or MELCOR to make them applicable to the aerosol transport part of severe accident analysis in ITER

  10. Performance analyses of the communication networks of a modern supervision and control system of research reactors

    International Nuclear Information System (INIS)

    El-Madbouly, E.I.; Shaat, M.K.; Shokr, A.M.; Elrefaei, G.H.

    2009-01-01

    The functions of the Instrumentation and Control (I and C) system in research reactors, the changes in its design according to the advances in the technology, and the internationally established safety requirements on the design and operational performance of this system are reviewed. The main features of the communication networks commonly used in the Supervision and Control systems (SCS) are presented. A methodology for the performance analysis of the communication networks of computer-based distributed SCS is developed and presented along with discussions. Application of this methodology to a modern SCS of a typical research reactor is illustrated. (orig.)

  11. Performance analyses of the communication networks of a modern supervision and control system of research reactors

    Energy Technology Data Exchange (ETDEWEB)

    El-Madbouly, E.I. [Menoufia Univ., Menouf (Egypt). Faculty of Electronics Engineering; Shaat, M.K.; Shokr, A.M.; Elrefaei, G.H. [Atomic Energy Authority, Abouzabal (Egypt). Egypt Second Research Reactor

    2009-04-15

    The functions of the Instrumentation and Control (I and C) system in research reactors, the changes in its design according to the advances in the technology, and the internationally established safety requirements on the design and operational performance of this system are reviewed. The main features of the communication networks commonly used in the Supervision and Control systems (SCS) are presented. A methodology for the performance analysis of the communication networks of computer-based distributed SCS is developed and presented along with discussions. Application of this methodology to a modern SCS of a typical research reactor is illustrated. (orig.)

  12. Analyses of User Rationality and System Learnability: Performing Task Variants in User Tests

    Science.gov (United States)

    Law, Effie Lai-Chong; Blazic, Borka Jerman; Pipan, Matic

    2007-01-01

    No systematic empirical study on investigating the effects of performing task variants on user cognitive strategy and behaviour in usability tests and on learnability of the system being tested has been documented in the literature. The current use-inspired basic research work aims to identify the underlying cognitive mechanisms and the practical…

  13. Performance analyses of Z-source and quasi Z-source inverter for photovoltaic applications

    Science.gov (United States)

    Himabind, S.; Priya, T. Hari; Manjeera, Ch.

    2018-04-01

    This paper presents a comparative analysis of the Z-source and quasi Z-source converters for renewable energy applications. Because renewable energy sources depend on external weather conditions, their output voltage and current change accordingly, which affects the performance of traditional voltage source and current source inverters connected across them. To overcome the drawbacks of the VSI and CSI, the Z-source inverter (ZSI) and quasi Z-source inverter (QZSI) are used, which can perform multiple conversions such as ac-to-dc, dc-to-ac, ac-to-ac and dc-to-dc. They can be used for both buck and boost operations by utilizing the shoot-through zero state. The QZSI is derived from the ZSI topology, with a slight change in the impedance network, and it overcomes the drawbacks of the ZSI; in particular, the QZSI draws a constant current from the source compared to the ZSI. A comparative analysis is performed between the Z-source and quasi Z-source inverters, with the simulation carried out in the MATLAB/Simulink environment.

  14. Producing the Docile Body: Analysing Local Area Under-Performance Inspection (LAUI)

    Science.gov (United States)

    Clapham, Andrew

    2015-01-01

    Sir Michael Wilshaw, the head of the Office for Standards in Education (OfSTED), declared a "new wave" of Local Area Under-performance Inspections (LAUI) of schools "denying children the standard of education they deserve". This paper examines how the threat of LAUI played out over three mathematics lessons taught by a teacher…

  15. Systems Performance Analyses of Alaska Wind-Diesel Projects; Kotzebue, Alaska (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, I.

    2009-04-01

    This fact sheet summarizes a systems performance analysis of the wind-diesel project in Kotzebue, Alaska. Data provided for this project include wind turbine output, average wind speed, average net capacity factor, and optimal net capacity factor based on Alaska Energy Authority wind data, estimated fuel savings, and wind system availability.

  16. A Framework for Analysing Corporate Social Performance: beyond the Wood Model

    NARCIS (Netherlands)

    Pierick, ten E.; Beekman, V.; Weele, van der C.N.; Meeusen-van Onna, M.J.G.; Graaff, de R.P.M.

    2004-01-01

    Many business organisations put a lot of effort in raising their performance levels in the three dimensions of sustainability - i.e., people, planet, and profit. It is therefore important to measure and weight the effects of their efforts. In this report a framework is presented that is helpful in

  17. Current Approaches to Tactical Performance Analyses in Soccer Using Position Data

    NARCIS (Netherlands)

    Memmert, Daniel; Lemmink, Koen A P M; Sampaio, Jaime

    Tactical match performance depends on the quality of actions of individual players or teams in space and time during match-play in order to be successful. Technological innovations have led to new possibilities to capture accurate spatio-temporal information of all players and unravel the dynamics

  18. The MARS for squat, countermovement, and standing long jump performance analyses: are measures reproducible?

    Science.gov (United States)

    Hébert-Losier, Kim; Beaven, C Martyn

    2014-07-01

    Jump tests are often used to assess the effect of interventions because their outcomes are reported valid indicators of functional performance. In this study, we examined the reproducibility of performance parameters from 3 common jump tests obtained using the commercially available Kistler Measurement, Analysis and Reporting Software (MARS). On 2 separate days, 32 men performed 3 squat jumps (SJs), 3 countermovement jumps (CMJs), and 3 standing long jumps (LJs) on a Kistler force-plate. On both days, the performance measures from the best jump of each series were extracted using the MARS. Changes in the mean scores, intraclass correlation coefficients (ICCs), and coefficients of variations (CVs) were computed to quantify the between-day reproducibility of each parameter. Moreover, the reproducibility quantifiers specific to the 3 separate jumps were compared using nonparametric tests. Overall, an acceptable between-day reproducibility (mean ± SD, ICC, and CV) of SJ (0.88 ± 0.06 and 7.1 ± 3.8%), CMJ (0.84 ± 0.17 and 5.9 ± 4.1%), and LJ (0.80 ± 0.13 and 8.1 ± 4.1%) measures was found using the MARS, except for parameters directly relating to the rate of force development (i.e., time to maximal force) and change in momentum during countermovement (i.e., negative force impulse) where reproducibility was lower. A greater proportion of the performance measures from the standing LJs had low ICCs and/or high CVs values most likely owing to the complex nature of the LJ test. Practitioners and researchers can use most of the jump test parameters from the MARS with confidence to quantify changes in the functional ability of individuals over time, except for those relating to the rate of force development or change in momentum during countermovement phases of jumps.
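    The reproducibility statistics referred to above (between-day ICC and CV) can be computed along the following lines. This is a generic sketch with synthetic day-1/day-2 jump heights and a two-way mixed-effects ICC(3,1) formula; it does not reproduce the MARS output or the exact ICC form used by the authors.

```python
import numpy as np

def icc_3_1(data):
    """Two-way mixed, single-measure ICC(3,1) for an (n subjects x k sessions) array."""
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)   # between sessions
    ss_total = np.sum((data - grand) ** 2)
    ms_err = (ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

def typical_cv(data):
    """Mean within-subject coefficient of variation across sessions, in percent."""
    return np.mean(data.std(axis=1, ddof=1) / data.mean(axis=1)) * 100

rng = np.random.default_rng(3)
true_height = rng.normal(35, 5, 32)                       # hypothetical CMJ heights (cm)
days = np.column_stack([true_height + rng.normal(0, 2, 32) for _ in range(2)])

print(f"ICC(3,1) = {icc_3_1(days):.2f}, typical CV = {typical_cv(days):.1f}%")
```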

  19. Overview and statistical failure analyses of the electrical insulation system for the SSC long dipole magnets from an industrialization point of view

    International Nuclear Information System (INIS)

    Roach, J.F.

    1992-01-01

    The electrical insulation system of the SSC long dipole magnets is reviewed and potential dielectric failure modes are discussed. Electrical insulation fabrication and assembly issues with respect to rate-production manufacturability are addressed. The automation required for rate assembly of electrical insulation components will require critical online visual and dielectric screening tests to ensure production quality. Storage and assembly areas must be designed to prevent foreign particles from becoming entrapped in the insulation during critical coil winding, molding, and collaring operations. All hand assembly procedures involving dielectrics must be performed with rigorous attention to their impact on insulation integrity. Individual dipole magnets must have a sufficiently low probability of electrical insulation failure under all normal and fault-mode voltage conditions such that the series of magnets in the SSC rings has an acceptable Mean Time Between Failures (MTBF) with respect to dielectric failure events. Statistical models appropriate for large electrical system breakdown failure analysis are applied to the SSC magnet rings. The MTBF of the SSC system is related to the failure data base for individual dipole magnet samples.

  20. Performance of Aspergillus niger Cultivation in Geometrically Dissimilar Bioreactors Evaluated on the Basis of Morphological Analyses

    Directory of Open Access Journals (Sweden)

    M. A. Priede

    2002-01-01

    Full Text Available The growth of Aspergillus niger, citric acid production and mycelia morphology changes were compared under different mixing conditions in bioreactors with two types of stirrers: Rushton turbine stirrers (RTS1 or RTS2 and axial counterflow stirrers (ACS1 or ACS2. The characteristics of growth, productivity and morphology varied with the mixing system and the applied agitation regime. In the first series of experiments, the flow characteristics of Aspergillus niger broth under different mixing conditions were analysed in a model bioreactor using RTS1 and ACS1. The kinetic energy E of flow fluctuations was measured in gassed and ungassed water and fermentation broth systems using a stirring intensity measuring device (SIMD-f1. The difference of energy E values at different points was more pronounced in the bioreactor with RTS1 than in the case of ACS1. High viscous A. niger broths provided higher energy E values in comparison with water. It was observed that the Aspergillus niger growth rate and citric acid synthesis rate decreased at very high energy E values, the behaviour obviously being connected with the influence of the irreversible shear stress on the mycelial morphology. In the second series of experiments, a higher citric acid yield was achieved in the case of ACS2 at a power input approximately twice lower than in the case of RTS2. Morphological characterization of A. niger pellets was carried out by the image analysis method. ACS2 provided the development of morphology, where pellets and cores had larger area, perimeter and diameter, and the annular region of pellets was looser and more »hairy« in comparison with the case of RTS2. The pellets from the fermentation with RTS2 were smaller, denser, with shorter hyphae in the annular region of pellets, and the broth was characterized by a higher percentage of diffuse mycelia. Power input studies of RTS2 and ACS2 were made at different agitator rotation speeds and gas flow rates using water

  1. Analyses of expected rod performance during the dry storage of spent fuel

    International Nuclear Information System (INIS)

    Einziger, R.E.

    1982-08-01

    Within the next ten years, a number of utilities will be forced to increase their interim spent-fuel-storage capability or face the loss of full-core reserve. Dry storage is being considered to fill this need. This paper analyzes the fuel-rod-performance data supporting dry storage and discusses areas where there are still outstanding questions. Three storage temperature ranges (T < 250 °C, 250 °C < T < 400 °C, and T > 400 °C), two atmospheres (inert, unlimited air) and two initial fuel-rod conditions (intact, breached) are considered. It is concluded that a fuel-performance data base exists that indicates that storage below 250 °C can be accomplished with long-term fuel pellet and cladding stability. At higher temperatures, analytic studies and laboratory experiments are needed, especially to extrapolate and interpret the results of demonstration tests. 2 figures, 2 tables

  2. Thermal Deformation and RF Performance Analyses for the SWOT Large Deployable Ka-Band Reflectarray

    Science.gov (United States)

    Fang, H.; Sunada, E.; Chaubell, J.; Esteban-Fernandez, D.; Thomson, M.; Nicaise, F.

    2010-01-01

    A large deployable antenna technology for the NASA Surface Water and Ocean Topography (SWOT) Mission is currently being developed by JPL in response to NRC Earth Science Tier 2 Decadal Survey recommendations. This technology is required to enable the SWOT mission due to the fact that no currently available antenna is capable of meeting SWOT's demanding Ka-Band remote sensing requirements. One of the key aspects of this antenna development is to minimize the effect of the on-orbit thermal distortion to the antenna RF performance. An analysis process which includes: 1) the on-orbit thermal analysis to obtain the temperature distribution; 2) structural deformation analysis to get the geometry of the antenna surface; and 3) the RF performance with the given deformed antenna surface has been developed to accommodate the development of this antenna technology. The detailed analysis process and some analysis results will be presented and discussed by this paper.

  3. Analysing the effects of air flow on a formula prototype vehicle to optimize its performance

    Science.gov (United States)

    Rastogi, Nisha; Shetty, Siddhanth; Ashok, B.

    2017-11-01

    FSAE (Formula Society of Automotive Engineers) is an engineering design competition that challenges students to design and build their own formula-style race car. The race car is judged on the basis of several criteria, namely design, cost, business and performance. For the race car to compete in the dynamic events and traverse different kinds of challenging tracks in the least time possible, the tyres must generate an appropriate amount of lateral and longitudinal force. The car must not topple even at high speeds and needs to manoeuvre quickly. To meet these criteria, aerodynamics must be incorporated into the car. The optimum amount of downforce needed for smooth and rapid handling with the maximum achievable performance has to be determined while keeping vehicle dynamics in consideration. In this paper, vehicle dynamics and aerodynamics are related to the extent that all of the above criteria can be satisfied, arriving at a trade-off without compromising either of them. The coordination between aerodynamics and vehicle dynamics is presented through a detailed methodology, accompanied by Computational Fluid Dynamics (CFD) simulations of the wings and the full body of the car using STAR CCM+. The results are discussed in the later sections of this paper. With a systematic approach involving several iterations in MATLAB followed by CFD simulations and analysis, the desired performance was achieved.

  4. Solar thermal–photovoltaic powered potato cold storage – Conceptual design and performance analyses

    International Nuclear Information System (INIS)

    Basu, Dipankar N.; Ganguly, A.

    2016-01-01

    Highlights: • Loss of food crop is a huge problem in India due to the shortage of cold storage. • Conceptual design of a power system using solar energy for a potato cold storage. • Integration of flat plate collector and SPV module with suitable operating strategy. • System provides a net energy surplus of about 36 MW h over a calendar year. • Rudimentary economic analysis found payback period of less than four years. - Abstract: Wastage of food crops due to the dearth of proper cold storage facilities is a huge problem in underdeveloped and developing countries of the world. Conceptual design of a potato cold storage is presented here, along with performance appraisal over a calendar year. The microclimate inside the cold storage is regulated using a water–lithium bromide absorption system. Proposed system utilizes both solar thermal and photovoltaic generated electrical energy for its operation. A suitable operation strategy is devised and the performance of the integrated system is analyzed from energy and exergy point of view to identify the required numbers of thermal collectors and photovoltaic modules. The proposed system is found to provide a net surplus of about 36 MW h energy over a calendar year, after meeting the in-house requirements. A rudimentary economic analysis is also performed to check the financial viability of the proposed system. Both the thermal and photovoltaic components are found to have payback periods less than four years.

  5. Detailed energy saving performance analyses on thermal mass walls demonstrated in a zero energy house

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, L. [School of Architecture, Tianjin University, Tianjin 300072 (China); Hurt, R.; Correia, D.; Boehm, R. [Center for Energy Research, University of Nevada, Las Vegas, NV 89154 (United States)

    2009-03-15

    An insulated concrete wall system was used on the exterior walls of a zero energy house. Its thermal functions were investigated using actual data in comparison to a conventional wood frame system. The internal wall temperature of massive systems changes more slowly than in conventional wall constructions, leading to a more stable indoor temperature. The Energy10-simulated equivalent R-value and DBMS of the mass walls under actual climate conditions are, respectively, 6.98 (m²·°C)/W and 3.39. However, the simulated heating energy use was much lower for the massive walls while the cooling load was a little higher. Further investigation of the heat flux indicates that heat is actually transferred inside all day and night, which results in a higher cooling energy consumption. A one-dimensional model further verified these analyses, and the calculated results are in good agreement with the actual data. We conclude that the thermal mass wall does have the ability to store heat during the daytime and release it back at night, but in desert climates with high 24-h ambient temperature and intense sunlight, more heat will be stored than can be transferred back outside at night. As a result, increased cooling energy will be required. (author)

  6. Monitoring household waste recycling centres performance using mean bin weight analyses.

    Science.gov (United States)

    Maynard, Sarah; Cherrett, Tom; Waterson, Ben

    2009-02-01

    This paper describes a modelling approach used to investigate the significance of key factors (vehicle type, compaction type, site design, temporal effects) in influencing the variability in observed nett amenity bin weights produced by household waste recycling centres (HWRCs). This new method can help to quickly identify sites that are producing significantly lighter bins, enabling detailed back-end analyses to be efficiently targeted and best practice in HWRC operation to be identified. Tested on weigh-ticket data from nine HWRCs across West Sussex, UK, the model suggests that compaction technique, vehicle type, month and site design explained 76% of the variability in the observed nett amenity weights. For each factor, a weighting coefficient was calculated to generate a predicted nett weight for each bin transaction, and three sites were subsequently identified as having similar characteristics but returning significantly different mean nett bin weights. Waste and site audits were then conducted at the three sites to try to determine the possible sources of the remaining variability. Significant differences were identified in the proportions of contained waste (bagged), wood, and dry recyclables entering the amenity waste stream, particularly at one site where significantly less contaminated waste and dry recyclables were observed.
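    The modelling approach described above (weighting coefficients for compaction type, vehicle type, month and site design that together predict nett bin weight) resembles a linear model with categorical factors. A minimal sketch with hypothetical column names and synthetic data is shown below; it is not the authors' actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500  # hypothetical weigh-ticket records
df = pd.DataFrame({
    "site": rng.choice(list("ABCDEFGHI"), n),
    "vehicle": rng.choice(["artic", "rigid"], n),
    "compaction": rng.choice(["onsite", "none"], n),
    "month": rng.choice(np.arange(1, 13), n),
})
# Synthetic nett weights (tonnes) with assumed effects for compaction and vehicle type.
df["nett_weight"] = (5.0 + 1.5 * (df["compaction"] == "onsite")
                     + 0.8 * (df["vehicle"] == "artic")
                     + rng.normal(0, 1.0, n))

model = smf.ols("nett_weight ~ C(compaction) + C(vehicle) + C(month) + C(site)",
                data=df).fit()
print(model.rsquared)          # share of the variability explained by the factors
print(model.params.head())     # factor weighting coefficients
```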

  7. Experiments performed with a functional model based on statistical discrimination in mixed nuclear radiation field

    International Nuclear Information System (INIS)

    Valcov, N.; Celarel, A.; Purghel, L.

    1999-01-01

    By using the statistical discrimination technique, the components of an ionization current due to a mixed radiation field may be measured simultaneously. A functional model, including a serially manufactured gamma-ray ratemeter, was developed as an intermediate step in the design of specialised nuclear instrumentation, in order to check the concept of the statistical discrimination method. The obtained results are in good agreement with the estimations of the statistical discrimination method. The main characteristics of the functional model are the following: - dynamic range of measurement: >300:1; - simultaneous measurement of the natural radiation background and gamma-ray fields; - accuracy (for equal exposure rates from gammas and the natural radiation background): 17% for both radiation fields; - minimum detectable exposure rate: 2 μR/h. (authors)

  8. Statistical Diversions

    Science.gov (United States)

    Petocz, Peter; Sowey, Eric

    2012-01-01

    The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…

  9. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When constructing a statistical point cloud model, we usually need to compute corresponding points, and the resulting statistical model differs depending on the method used to compute them. This article examines how the method used to compute corresponding points affects a statistical model of a human organ. We validated the performance of the statistical models by registering an organ surface in a 3D medical image. Two methods for computing corresponding points are compared. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces. The second, the entropy-based particle system, chooses corresponding points by statistically analysing a number of curved surfaces. Statistical models were constructed with each method, and these models were used to perform registration with the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods for computing corresponding points affect the statistical model through the change in the probability density at each point. (author)

  10. Performance evaluation of CT measurements made on step gauges using statistical methodologies

    DEFF Research Database (Denmark)

    Angel, J.; De Chiffre, L.; Kruth, J.P.

    2015-01-01

    In this paper, a study is presented in which statistical methodologies were applied to evaluate the measurement of step gauges on an X-ray computed tomography (CT) system. In particular, the effects of step gauge material density and orientation were investigated. The step gauges consist of uni- and bidirectional lengths. By confirming the repeatability of measurements made on the test system, the number of required scans in the design of experiment (DOE) was reduced. The statistical model was checked using model adequacy principles; model adequacy checking is an important step in validating

  11. Markov counting and reward processes for analysing the performance of a complex system subject to random inspections

    International Nuclear Information System (INIS)

    Ruiz-Castro, Juan Eloy

    2016-01-01

    In this paper, a discrete complex reliability system subject to internal failures and external shocks is modelled algorithmically. Two types of internal failure are considered: repairable and non-repairable. When a repairable failure occurs, the unit goes to corrective repair. In addition, the unit is subject to external shocks that may produce an aggravation of the internal degradation level, cumulative damage or extreme failure. When a damage threshold is reached, the unit must be removed. When a non-repairable failure occurs, the device is replaced by a new, identical one. The internal performance and the external damage are partitioned into performance levels. Random inspections are carried out. When an inspection takes place, the internal performance of the system and the damage caused by external shocks are observed and, if necessary, the unit is sent to preventive maintenance. If the inspection observes a minor state of the internal performance and/or external damage, these states remain in memory when the unit goes to corrective or preventive maintenance. Transient and stationary analyses are performed. Markov counting and reward processes are developed in computational form to analyse the performance and profitability of the system with and without preventive maintenance. These aspects are implemented computationally with Matlab. - Highlights: • A multi-state device is modelled in an algorithmic and computational form. • The performance is partitioned into multiple states and degradation levels. • Several types of failures with repair times according to degradation levels are considered. • Preventive maintenance in response to random inspections is introduced. • The performance and profitability are analysed through Markov counting and reward processes.
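    A greatly simplified sketch of the kind of Markov reward computation mentioned above is given below, using NumPy rather than Matlab: a small discrete-time transition matrix over performance states, a reward (profit) vector, the stationary distribution, and the expected accumulated reward over a finite horizon. The states, transition probabilities and rewards are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical states: 0 = good, 1 = degraded, 2 = under repair/maintenance.
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.60, 0.00, 0.40]])
reward = np.array([100.0, 60.0, -40.0])   # assumed profit per period in each state

# Stationary distribution: left eigenvector of P associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

# Expected accumulated reward over a finite horizon, starting from the good state.
horizon, dist, total = 52, np.array([1.0, 0.0, 0.0]), 0.0
for _ in range(horizon):
    total += dist @ reward
    dist = dist @ P

print(f"stationary distribution = {pi.round(3)}")
print(f"long-run reward per period = {pi @ reward:.1f}, {horizon}-period total = {total:.1f}")
```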

  12. Performance analyses of geothermal organic Rankine cycles with selected hydrocarbon working fluids

    International Nuclear Information System (INIS)

    Liu, Qiang; Duan, Yuanyuan; Yang, Zhen

    2013-01-01

    ORC (organic Rankine cycles) are promising systems for conversion of low temperature geothermal energy to electricity. The thermodynamic performance of the ORC with a wet cooling system is analyzed here using hydrocarbon working fluids driven by geothermal water from 100 °C to 150 °C and reinjection temperatures not less than 70 °C. The hydrocarbon working fluids are butane (R600), isobutane (R600a), pentane (R601), isopentane (R601a) and hexane. For each fluid, the ORC net power output first increases and then decreases with increasing turbine inlet temperature. The turbine inlet parameters are then optimized for the maximum power output. The ORC net power output increases as the condensation temperature decreases but the circulating pump power consumption increases especially for lower condensation temperatures at higher cooling water flow rates. The optimal condensation temperatures for the maximum plant power output are 29.45–29.75 °C for a cooling water inlet temperature of 20 °C and a pinch point temperature difference of 5 °C in the condenser. The maximum power is produced by an ORC using R600a at geothermal water inlet temperatures higher than 120 °C, followed by R245fa and R600 for reinjection temperatures not less than 70 °C. R600a also has the highest plant exergetic efficiency with the lowest turbine size factor. - Highlights: • ORC (organic Rankine cycles) using geothermal water from 100 to 150 °C and reinjection temperatures not less than 70 °C are analyzed. • Condensation temperatures optimized to maximize the plant power output. • An IHE (internal heat exchanger) gives higher plant power at low geothermal water temperatures and high reinjection temperatures. • ORC performance optimized considering the condensation and reinjection temperature. • R600a gives the best performance at the optimal turbine operating parameters
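    A minimal thermodynamic sketch of the kind of ORC evaluation described above is shown below, using the CoolProp property library for R600a (isobutane). The evaporation and condensation temperatures, component efficiencies and mass flow rate are illustrative assumptions, and this simple saturated cycle ignores the superheat, internal heat exchanger and pinch-point constraints considered in the paper.

```python
from CoolProp.CoolProp import PropsSI

fluid = "R600a"                          # isobutane
T_evap, T_cond = 360.0, 303.0            # K, assumed saturation temperatures
eta_turb, eta_pump, m_dot = 0.80, 0.75, 10.0   # assumed efficiencies and kg/s

P_evap = PropsSI("P", "T", T_evap, "Q", 1, fluid)
P_cond = PropsSI("P", "T", T_cond, "Q", 0, fluid)

# State 1: saturated liquid leaving the condenser -> state 2: pump outlet.
h1 = PropsSI("H", "T", T_cond, "Q", 0, fluid)
s1 = PropsSI("S", "T", T_cond, "Q", 0, fluid)
h2 = h1 + (PropsSI("H", "P", P_evap, "S", s1, fluid) - h1) / eta_pump

# State 3: saturated vapour leaving the evaporator -> state 4: turbine outlet.
h3 = PropsSI("H", "T", T_evap, "Q", 1, fluid)
s3 = PropsSI("S", "T", T_evap, "Q", 1, fluid)
h4 = h3 - eta_turb * (h3 - PropsSI("H", "P", P_cond, "S", s3, fluid))

w_net = m_dot * ((h3 - h4) - (h2 - h1))   # W, turbine output minus pump work
q_in = m_dot * (h3 - h2)                  # W, heat input from the geothermal water
print(f"net power = {w_net/1e3:.0f} kW, thermal efficiency = {w_net/q_in:.3f}")
```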

  13. Performance Analyses of Counter-Flow Closed Wet Cooling Towers Based on a Simplified Calculation Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wei

    2017-02-01

    Full Text Available As one of the most widely used units in water cooling systems, the closed wet cooling towers (CWCTs have two typical counter-flow constructions, in which the spray water flows from the top to the bottom, and the moist air and cooling water flow in the opposite direction vertically (parallel or horizontally (cross, respectively. This study aims to present a simplified calculation method for conveniently and accurately analyzing the thermal performance of the two types of counter-flow CWCTs, viz. the parallel counter-flow CWCT (PCFCWCT and the cross counter-flow CWCT (CCFCWCT. A simplified cooling capacity model that just includes two characteristic parameters is developed. The Levenberg–Marquardt method is employed to determine the model parameters by curve fitting of experimental data. Based on the proposed model, the predicted outlet temperatures of the process water are compared with the measurements of a PCFCWCT and a CCFCWCT, respectively, reported in the literature. The results indicate that the predicted values agree well with the experimental data in previous studies. The maximum absolute errors in predicting the process water outlet temperatures are 0.20 and 0.24 °C for the PCFCWCT and CCFCWCT, respectively. These results indicate that the simplified method is reliable for performance prediction of counter-flow CWCTs. Although the flow patterns of the two towers are different, the variation trends of thermal performance are similar to each other under various operating conditions. The inlet air wet-bulb temperature, inlet cooling water temperature, air flow rate, and cooling water flow rate are crucial for determining the cooling capacity of a counter-flow CWCT, while the cooling tower effectiveness is mainly determined by the flow rates of air and cooling water. Compared with the CCFCWCT, the PCFCWCT is much more applicable in a large-scale cooling water system, and the superiority would be amplified when the scale of water
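    The parameter-identification step described above (a two-parameter cooling capacity model fitted with the Levenberg-Marquardt method) can be sketched with SciPy, whose curve_fit routine uses Levenberg-Marquardt for unconstrained problems. The power-law model form, the variable names and the synthetic data below are illustrative assumptions and are not the model actually proposed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def cooling_capacity(X, c, n):
    """Hypothetical two-parameter model: Q = c * m_a**n * (T_w_in - T_wb_in)."""
    m_a, T_w_in, T_wb_in = X
    return c * m_a**n * (T_w_in - T_wb_in)

rng = np.random.default_rng(5)
m_a = rng.uniform(2.0, 6.0, 40)        # air mass flow rate, kg/s
T_w_in = rng.uniform(32.0, 40.0, 40)   # inlet cooling (process) water temperature, deg C
T_wb_in = rng.uniform(20.0, 26.0, 40)  # inlet air wet-bulb temperature, deg C
Q_meas = 1.9 * m_a**0.8 * (T_w_in - T_wb_in) + rng.normal(0, 1.0, 40)  # synthetic kW

# method='lm' is the Levenberg-Marquardt algorithm (the default when no bounds are given).
(c_fit, n_fit), _ = curve_fit(cooling_capacity, (m_a, T_w_in, T_wb_in), Q_meas,
                              p0=(1.0, 1.0), method="lm")
print(f"c = {c_fit:.2f}, n = {n_fit:.2f}")
```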

  14. Risk and Performance Analyses Supporting Closure of WMA C at the Hanford Site in Southeast Washington

    International Nuclear Information System (INIS)

    Eberlein, Susan J.; Bergeron, Marcel P.; Kemp, Christopher J.; Hildebrand, R. Douglas; Aly, Alaa; Kozak, Matthew; Mehta, Sunil; Connelly, Michael

    2013-01-01

    The Office of River Protection under the U.S. Department of Energy (DOE) is pursuing closure of the Single-Shell Tank (SST) Waste Management Area (WMA) C as stipulated by the Hanford Federal Facility Agreement and Consent Order (HFFACO) under federal requirements, and work tasks will be done under the State-approved closure plans and permits. An initial step in meeting the regulatory requirements is to develop a baseline risk assessment representing current conditions based on available characterization data and information collected at the WMA C location. The baseline risk assessment will support a Resource Conservation and Recovery Act of 1976 (RCRA) Field Investigation (RFI)/Corrective Measures Study (CMS) for WMA closure and RCRA corrective action. Complying with the HFFACO conditions also involves developing a long-term closure Performance Assessment (PA) that evaluates human health and environmental impacts resulting from radionuclide inventories in residual wastes remaining in WMA C tanks and ancillary equipment. This PA is being developed to meet the requirements necessary for closure authorization under DOE Order 435.1 and the Washington State Hazardous Waste Management Act. To meet the HFFACO conditions, the long-term closure risk analysis will include an evaluation of human health and environmental impacts from hazardous chemical inventories, along with other performance Comprehensive Environmental Response, Compensation, and Liability Act Applicable or Relevant and Appropriate Requirements (CERCLA ARARs), in residual wastes left in WMA C facilities after retrieval and removal. This closure risk analysis is needed to comply with the requirements for permitted closure. Progress to date in developing a baseline risk assessment of WMA C has involved aspects of an evaluation of soil characterization and groundwater monitoring data collected as a part of the RFI/CMS and RCRA monitoring. Developing the long-term performance assessment aspects has involved the

  15. Risk and Performance Analyses Supporting Closure of WMA C at the Hanford Site in Southeast Washington

    Energy Technology Data Exchange (ETDEWEB)

    Eberlein, Susan J.; Bergeron, Marcel P.; Kemp, Christopher J.

    2013-11-11

    The Office of River Protection under the U.S. Department of Energy (DOE) is pursuing closure of the Single-Shell Tank (SST) Waste Management Area (WMA) C as stipulated by the Hanford Federal Facility Agreement and Consent Order (HFFACO) under federal requirements; work tasks will be done under the State-approved closure plans and permits. An initial step in meeting the regulatory requirements is to develop a baseline risk assessment representing current conditions based on available characterization data and information collected at the WMA C location. The baseline risk assessment will support a Resource Conservation and Recovery Act of 1976 (RCRA) Field Investigation (RFI)/Corrective Measures Study (CMS) for WMA closure and RCRA corrective action. Complying with the HFFACO conditions also involves developing a long-term closure Performance Assessment (PA) that evaluates human health and environmental impacts resulting from radionuclide inventories in residual wastes remaining in WMA C tanks and ancillary equipment. This PA is being developed to meet the requirements necessary for closure authorization under DOE Order 435.1 and the Washington State Hazardous Waste Management Act. To meet the HFFACO conditions, the long-term closure risk analysis will include an evaluation of human health and environmental impacts from hazardous chemical inventories, along with other Comprehensive Environmental Response, Compensation, and Liability Act applicable or relevant and appropriate requirements (CERCLA ARARs), in residual wastes left in WMA C facilities after retrieval and removal. This closure risk analysis is needed to comply with the requirements for permitted closure. Progress to date in developing a baseline risk assessment of WMA C has involved evaluating soil characterization and groundwater monitoring data collected as part of the RFI/CMS and RCRA monitoring. Developing the long-term performance assessment aspects has involved the

  16. Performance and driveline analyses of engine capacity in range extender engine hybrid vehicle

    Science.gov (United States)

    Praptijanto, Achmad; Santoso, Widodo Budi; Nur, Arifin; Wahono, Bambang; Putrasari, Yanuandri

    2017-01-01

    In this study, the range extender engine was designed to meet the power needs of the generator of a hybrid electric vehicle, which requires a minimum of 18 kW. Using this baseline, the following range extenders were compared: a conventional SI piston engine (Baseline, BsL) with a capacity of 1998 cm3, and efficiency-oriented SI piston engines with capacities of 999 cm3 and 499 cm3 and a square 86 mm bore and stroke, assessed for performance, predicted emissions, and state of charge using engine and vehicle simulation software tools. In AVL Boost, the range extender engines were simulated at engine loads from 1000 to 6000 rpm. The highest peak brake power reached 38 kW at 4500 rpm, while the highest torque achieved was 100 Nm at 3500 rpm. Then, using AVL Cruise, a series-configuration range-extended electric vehicle model with main components such as the internal combustion engine, generator, electric motor and battery was simulated over the Artemis rural road driving cycle. The simulation results show that the engine with a capacity of 999 cm3 gave the most favourable results in terms of engine economy, emissions, and control of engine cycle parameters.

  17. Steady State and Transient Fuel Rod Performance Analyses by Pad and Transuranus Codes

    International Nuclear Information System (INIS)

    Slyeptsov, O.; Slyeptsov, S.; Kulish, G.; Ostapov, A.; Chernov, I.

    2013-01-01

    The report performed under IAEA research contract No.15370/L2 describes the analysis results of WWER and PWR fuel rod performance at steady state operation and transients by means of the PAD and TRANSURANUS codes. The code TRANSURANUS v1m1j09 developed by the Institute for Transuranium Elements (ITU) was used based on the Licensing Agreement N31302. The code PAD 4.0 developed by Westinghouse Electric Company was utilized in the frame of the Ukraine Nuclear Fuel Qualification Project for safety substantiation of the use of Westinghouse fuel assemblies in the mixed core of the WWER-1000 reactor. The experimental data for the Russian fuel rod behavior obtained during the steady-state operation in the WWER-440 core of reactor Kola-3 and during the power transients in the core of the MIR research reactor were taken from the IFPE database of the OECD/NEA and utilized for assessing the codes themselves during simulation of such properties as fuel burnup, fuel centerline temperature (FCT), fuel swelling, cladding strain, fission gas release (FGR) and rod internal pressure (RIP) in the rod burnup range of (41 - 60) GWD/MTU. The experimental data of fuel behavior at steady-state operation during seven reactor cycles presented by AREVA for the standard PWR fuel rod design were used to examine the code FGR model in the fuel burnup range of (37 - 81) GWD/MTU. (author)

  18. Exergetic performance analyses of drying of broccoli florets in a tray drier

    International Nuclear Information System (INIS)

    Zafer Erbay

    2009-01-01

    At present, drying is one of the major procedures of food preservation and an important unit operation in a wide variety of food industries. Recently, the drying of vegetables has attracted particular interest because dried vegetables are added to various ready-to-eat meals in order to improve their nutritional quality, owing to the health-beneficial compounds present in vegetables (vitamins, phytochemicals, dietary fibers). Broccoli has been described as a vegetable with a high nutritional value due to its important content of vitamins, antioxidants and anti-carcinogenic compounds. Broccoli dehydration has not been investigated to a great extent and few data are available in the open literature. In this study, broccoli florets were dried in a tray drier at a temperature range of 50-70 deg C with an air velocity range of 0.5-1.5 m/s. The performance of the process and system was evaluated using the exergy analysis method. Based on the experimental data, the effects of the drying air temperature and velocity on the performance of the drying process were discussed. The exergy evaporation rate and the exergetic efficiency of the process were found to vary between 0.0006-0.0029 kW and 0.27-1.16%, respectively. Both increased as the drying air temperature increased, while the exergetic efficiency decreased with the rise in the drying air velocity. (author)

  19. Performance of the S-χ² Statistic for Full-Information Bifactor Models

    Science.gov (United States)

    Li, Ying; Rupp, Andre A.

    2011-01-01

    This study investigated the Type I error rate and power of the multivariate extension of the S-χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…

  20. Course Modality Choice and Student Performance in Business Statistics Courses in Post Secondary Institutions

    Science.gov (United States)

    Radners, Richard Harry, Jr.

    2011-01-01

    Limited research has been conducted on the role of course modality choice (face-to-face [FTF] or online [OL]) on course grades. At the study site, an independent college, the research problem was the lack of research on the proportions of undergraduate students who completed a statistics course as part of their academic program, in either OL or…

  1. Characteristics and Performance of Students in an Online Section of Business Statistics

    Science.gov (United States)

    Dutton, John; Dutton, Marilyn

    2005-01-01

    We compare students in online and lecture sections of a business statistics class taught simultaneously by the same instructor using the same content, assignments, and exams in the fall of 2001. Student data are based on class grades, registration records, and two surveys. The surveys asked for information on preparedness, reasons for section…

  2. Student Performance in an Introductory Business Statistics Course: Does Delivery Mode Matter?

    Science.gov (United States)

    Haughton, Jonathan; Kelly, Alison

    2015-01-01

    Approximately 600 undergraduates completed an introductory business statistics course in 2013 in one of two learning environments at Suffolk University, a mid-sized private university in Boston, Massachusetts. The comparison group completed the course in a traditional classroom-based environment, whereas the treatment group completed the course in…

  3. Flipped Statistics Class Results: Better Performance than Lecture over One Year Later

    Science.gov (United States)

    Winquist, Jennifer R.; Carlson, Keith A.

    2014-01-01

    In this paper, we compare an introductory statistics course taught using a flipped classroom approach to the same course taught using a traditional lecture based approach. In the lecture course, students listened to lecture, took notes, and completed homework assignments. In the flipped course, students read relatively simple chapters and answered…

  4. Analyses with the FSTATE code: fuel performance in destructive in-pile experiments

    International Nuclear Information System (INIS)

    Bauer, T.H.; Meek, C.C.

    1982-01-01

    Thermal-mechanical analysis of a fuel pin is an essential part of the evaluation of fuel behavior during hypothetical accident transients. The FSTATE code has been developed to provide this required computational ability in situations lacking azimuthal symmetry about the fuel-pin axis by performing 2-dimensional thermal, mechanical, and fission gas release and redistribution computations for a wide range of possible transient conditions. In this paper recent code developments are described and application is made to in-pile experiments undertaken to study fast-reactor fuel under accident conditions. Three accident simulations, including a fast and slow ramp-rate overpower as well as a loss-of-cooling accident sequence, are used as representative examples, and the interpretation of FSTATE computations relative to experimental observations is made.

  5. Analyses of performance of novel sensors with different coatings for detection of Lipopolysaccharide

    KAUST Repository

    Mohd. Syaifudin, A. R.

    2011-10-01

    Interdigital sensors have been widely used for non-destructive applications. New types of planar interdigital sensors have been fabricated with different coating materials to assess the response to Lipopolysaccharide, LPS. All the coatings were selected and optimized to be stable in water, as the measurements take place in water media. Moreover, the coatings have been designed to have available carboxylic or amine functional groups. The use of these functional groups is a widely used technique for the specific binding of biomolecules. The coated sensors were then immobilized with Polymyxin B (PmB), which binds specifically to LPS. This paper highlights the fabrication process and initial investigations of the sensors' performance based on impedance spectroscopy. © 2011 IEEE.

  6. Opto-mechanical Analyses for Performance Optimization of Lightweight Grazing-incidence Mirrors

    Science.gov (United States)

    Roche, Jacqueline M.; Kolodziejczak, Jeffery J.; Odell, Stephen L.; Elsner, Ronald F.; Weisskopf, Martin C.; Ramsey, Brian; Gubarev, Mikhail V.

    2013-01-01

    New technology in grazing-incidence mirror fabrication and assembly is necessary to achieve subarcsecond optics for large-area x-ray telescopes. In order to define specifications, an understanding of performance sensitivity to design parameters is crucial. MSFC is undertaking a systematic study to specify a mounting approach, mirror substrate, and testing method. Lightweight mirrors are typically flimsy and are, therefore, susceptible to significant distortion due to mounting and gravitational forces. Material properties of the mirror substrate along with its dimensions significantly affect the distortions caused by mounting and gravity. A parametric study of these properties and their relationship to mounting and testing schemes will indicate specifications for the design of the next generation of lightweight grazing-incidence mirrors. Here we report initial results of this study.

  7. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    Science.gov (United States)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by the need for a study assessing the process improvement, quality management, and analytical techniques taught to undergraduate and graduate students in U.S. systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes across the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, and process improvement methods, and with how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques and of process performance modeling to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The study identifies and discusses in detail the gap analysis findings on process improvement and quantitative analysis techniques taught in U.S. systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis identifying the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization

  8. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ke; Tang, Jie [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States)

    2014-04-15

    Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to be different from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose)^−β with the exponent β ≈ 0.25, which violated the classical σ ∝ (dose)^−0.5 power law in FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial
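
    The power-law relationship quoted in result (3) can be estimated by a straight-line fit of log noise versus log dose. The sketch below does exactly that for two synthetic noise-versus-dose curves constructed to follow exponents of 0.5 and 0.25; the dose levels match the mAs values named above, but the noise values are placeholders, not the study's measurements.

```python
# Minimal sketch: estimate the exponent beta in sigma ~ dose^(-beta) from
# repeated-scan noise measurements.  The noise values below are made-up
# placeholders, not the paper's data.
import numpy as np

dose = np.array([5, 12.5, 25, 50, 100, 200, 300])                 # mAs levels
sigma_fbp = np.array([60.0, 38.0, 26.8, 19.0, 13.4, 9.5, 7.8])    # HU, assumed
sigma_mbir = np.array([22.0, 17.5, 14.7, 12.4, 10.4, 8.7, 7.9])   # HU, assumed

def fit_beta(dose, sigma):
    # Linear fit of log(sigma) vs log(dose); the slope equals -beta.
    slope, _ = np.polyfit(np.log(dose), np.log(sigma), 1)
    return -slope

print("FBP  beta ~", round(fit_beta(dose, sigma_fbp), 2))   # expect ~0.5
print("MBIR beta ~", round(fit_beta(dose, sigma_mbir), 2))  # expect ~0.25
```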

  9. IRE (Institut National des Radioelements) site in Belgium. Report of in situ measurements and analyses performed for the RTBF; Site IRE (Institut National des Radioelements) en Belgique. Compte rendu des mesures in situ et analyses effectuees pour la RTBF

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-05-15

    This document reports various analyses performed within the frame of the preparation and filming of a TV documentary on the Belgian National Institute of Radioelements. It reports gamma radiation measurements performed in the vicinity of the institute, discusses the possible origin of the elevated levels observed there, and presents analyses of sludge samples from a wastewater treatment works as well as of milk, cabbage, mosses and sediments collected by residents

  10. Combining Results from Distinct MicroRNA Target Prediction Tools Enhances the Performance of Analyses

    Directory of Open Access Journals (Sweden)

    Arthur C. Oliveira

    2017-05-01

    Full Text Available Target prediction is generally the first step toward recognition of bona fide microRNA (miRNA)-target interactions in living cells. Several target prediction tools are now available, which use distinct criteria and stringency to provide the best set of candidate targets for a single miRNA or a subset of miRNAs. However, there are many false-negative predictions, and consensus about the optimum strategy to select and use the output information provided by the target prediction tools is lacking. We compared the performance of four tools cited in literature—TargetScan (TS), miRanda-mirSVR (MR), Pita, and RNA22 (R22)—and we determined the most effective approach for analyzing target prediction data (individual, union, or intersection). For this purpose, we calculated the sensitivity, specificity, precision, and correlation of these approaches using 10 miRNAs (miR-1-3p, miR-17-5p, miR-21-5p, miR-24-3p, miR-29a-3p, miR-34a-5p, miR-124-3p, miR-125b-5p, miR-145-5p, and miR-155-5p) and 1,400 genes (700 validated and 700 non-validated) as targets of these miRNAs. The four tools provided a subset of high-quality predictions and returned few false-positive predictions; however, they could not identify several known true targets. We demonstrate that union of TS/MR and TS/MR/R22 enhanced the quality of in silico prediction analysis of miRNA targets. We conclude that the union rather than the intersection of the aforementioned tools is the best strategy for maximizing performance while minimizing the loss of time and resources in subsequent in vivo and in vitro experiments for functional validation of miRNA-target interactions.
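
    The union-versus-intersection comparison described above reduces to simple set operations and confusion-matrix counts. The following minimal sketch scores the union and the intersection of three prediction sets against a validated reference; the gene identifiers and set contents are invented placeholders, not data from the study.

```python
# Minimal sketch: combine target predictions from multiple tools by union or
# intersection and score them against a validated reference set.
predictions = {
    "TS":  {"g1", "g2", "g3", "g5"},
    "MR":  {"g2", "g3", "g6"},
    "R22": {"g3", "g4", "g6"},
}
validated = {"g1", "g2", "g3", "g4"}        # known true targets (assumed)
non_targets = {"g5", "g6", "g7", "g8"}      # known non-targets (assumed)

def score(predicted, positives, negatives):
    tp = len(predicted & positives)
    fp = len(predicted & negatives)
    fn = len(positives - predicted)
    tn = len(negatives - predicted)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, specificity, precision

union = set().union(*predictions.values())
intersection = set.intersection(*predictions.values())
print("union        :", score(union, validated, non_targets))
print("intersection :", score(intersection, validated, non_targets))
```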

  11. Performance of laboratories analysing welding fume on filter samples: results from the WASP proficiency testing scheme.

    Science.gov (United States)

    Stacey, Peter; Butler, Owen

    2008-06-01

    This paper emphasizes the need for occupational hygiene professionals to require evidence of the quality of welding fume data from analytical laboratories. The measurement of metals in welding fume using atomic spectrometric techniques is a complex analysis often requiring specialist digestion procedures. The results from a trial programme testing the proficiency of laboratories in the Workplace Analysis Scheme for Proficiency (WASP) to measure potentially harmful metals in several different types of welding fume showed that most laboratories underestimated the mass of analyte on the filters. The average recovery was 70-80% of the target value and >20% of reported recoveries for some of the more difficult welding fume matrices were welding fume trial filter samples. Consistent rather than erratic error predominated, suggesting that the main analytical factor contributing to the differences between the target values and results was the effectiveness of the sample preparation procedures used by participating laboratories. It is concluded that, with practice and regular participation in WASP, performance can improve over time.

  12. From volatility to value: analysing and managing financial and performance risk in energy savings projects

    International Nuclear Information System (INIS)

    Mills, Evan; Kromer, Steve; Weiss, Gary; Mathew, Paul A.

    2006-01-01

    Many energy-related investments are made without a clear financial understanding of their values, risks, and volatilities. In the face of this uncertainty, the investor, such as a building owner or an energy service company, will often choose to implement only the most certain and thus limited energy-efficiency measures. Conversely, commodities traders and other sophisticated investors accustomed to evaluating investments on a value, risk, and volatility basis often overlook energy-efficiency investments because risk and volatility information are not provided. Fortunately, energy-efficiency investments easily lend themselves to such analysis using tools similar to those applied to supply-side risk management. Accurate and robust analysis demands a high level of understanding of the physical aspects of energy-efficiency, which enables the translation of physical performance data into the language of investment. With a risk management analysis framework in place, the two groups, energy-efficiency experts and investment decision-makers, can exchange the information they need to expand investment in demand-side energy projects. In this article, we first present the case for financial risk analysis in energy efficiency in the buildings sector. We then describe techniques and examples of how to identify, quantify, and manage risk. Finally, we describe emerging market-based opportunities in risk management for energy efficiency

  13. Drying of mint leaves in a solar dryer and under open sun: Modelling, performance analyses

    International Nuclear Information System (INIS)

    Akpinar, E. Kavak

    2010-01-01

    This study investigated the thin-layer drying characteristics of mint leaves in a solar dryer with forced convection and under open sun with natural convection, and performed energy and exergy analyses of the solar drying process. An indirect forced convection solar dryer consisting of a solar air collector and a drying cabinet was used in the experiments. The drying data were fitted to ten different mathematical models. Among the models, the Wang and Singh model was found to best explain the thin-layer drying behaviour of mint leaves for both forced solar drying and natural sun drying. The energy analysis of the solar drying process was carried out using the first law of thermodynamics, while the exergy analysis was performed by applying the second law. Energy utilization ratio (EUR) values of the drying cabinet varied between 7.826% and 46.285%. The values of exergetic efficiency were found to be in the range of 34.760-87.717%. The values of improvement potential varied between 0 and 0.017 kJ s-1. The energy utilization ratio and improvement potential decreased with increasing drying time and ambient temperature, while the exergetic efficiency increased.
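
    To make the first-law and second-law indicators quoted above concrete, the sketch below computes an energy utilization ratio and a simple exergetic efficiency for a drying air stream from inlet and outlet temperatures, using the common ideal-gas flow exergy expression ex = cp[(T − T0) − T0 ln(T/T0)]. The temperatures, the neglect of humidity and pressure terms, and the simplified EUR definition are assumptions for illustration, not values or formulas taken from the study.

```python
# Minimal sketch of first-law (EUR) and second-law (exergetic efficiency)
# indicators for a drying air stream.  Humidity and pressure terms are
# neglected; all numbers are assumed for illustration.
import math

cp_air = 1.005      # kJ/(kg K), specific heat of air
T0 = 300.0          # K, ambient (dead-state) temperature
T_in = 333.0        # K, air temperature entering the drying cabinet
T_out = 318.0       # K, air temperature leaving the drying cabinet

def exergy(T, T0=T0, cp=cp_air):
    """Specific flow exergy of an ideal-gas air stream (thermal term only)."""
    return cp * ((T - T0) - T0 * math.log(T / T0))

# Energy utilization ratio: energy given up by the air in the cabinet
# divided by the energy supplied to the air above ambient.
eur = (T_in - T_out) / (T_in - T0)

# Exergetic efficiency: exergy leaving the cabinet over exergy entering it.
eta_ex = exergy(T_out) / exergy(T_in)

print(f"EUR = {eur:.2%}, exergetic efficiency = {eta_ex:.2%}")
```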

  14. Measuring yield performance of upland cotton varieties using adaptability, stability and principal component analyses

    International Nuclear Information System (INIS)

    Baloch, M.J.

    2003-01-01

    Nine upland cotton varieties/strains were tested over 36 environments in Pakistan so as to determine their stability in yield performance. The regression coefficient (b) was used as a measure of adaptability, whereas parameters such as the coefficient of determination (r2) and the sum of squared deviations from regression (s2d) were used as measures of stability. Although the regression coefficients (b) of all varieties did not deviate significantly from the unit slope, the varieties CRIS-5A, BII-89, DNH-40 and Rehmani gave b values closer to unity, implying their better adaptation. The lower s2d and higher r2 of CRIS-121 and DNH-40 suggest that both of these are fairly stable. The results indicate that, generally, adaptability and stability parameters are independent of each other inasmuch as not all of the parameters simultaneously favoured one variety over the other, excepting the variety DNH-40, which was stable based on the majority of the parameters. Principal component analysis revealed that the first two components (latent roots) account for about 91.4% of the total variation. The latent vectors of the first principal component (PCA1) were smaller and positive, which also suggests that most of the varieties were quite adaptive to all of the test environments. (author)
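
    A common way to obtain the adaptability and stability parameters named above (the regression coefficient b, the deviation from regression s2d, and r2) is the joint regression approach, in which each variety's yield is regressed on an environmental index (the mean yield of all varieties in each environment). The sketch below is a generic implementation of that idea with fabricated yields; it is not the authors' code or data.

```python
# Minimal sketch of joint-regression stability analysis: for each variety,
# regress its yield on the environmental index and report b, s2d and r2.
# The yield matrix is fabricated (rows: varieties, columns: environments).
import numpy as np

yields = np.array([
    [2.1, 2.8, 3.4, 3.9],   # variety A
    [1.8, 2.9, 3.6, 4.3],   # variety B
    [2.5, 2.6, 3.0, 3.1],   # variety C
])
names = ["A", "B", "C"]

env_index = yields.mean(axis=0)             # environmental index per environment

for name, y in zip(names, yields):
    b, a = np.polyfit(env_index, y, 1)      # slope b measures adaptability
    fitted = a + b * env_index
    resid = y - fitted
    s2d = np.sum(resid ** 2) / (len(y) - 2) # deviation from regression
    r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"variety {name}: b={b:.2f}, s2d={s2d:.4f}, r2={r2:.3f}")
```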

  15. High-throughput, Highly Sensitive Analyses of Bacterial Morphogenesis Using Ultra Performance Liquid Chromatography*

    Science.gov (United States)

    Desmarais, Samantha M.; Tropini, Carolina; Miguel, Amanda; Cava, Felipe; Monds, Russell D.; de Pedro, Miguel A.; Huang, Kerwyn Casey

    2015-01-01

    The bacterial cell wall is a network of glycan strands cross-linked by short peptides (peptidoglycan); it is responsible for the mechanical integrity of the cell and shape determination. Liquid chromatography can be used to measure the abundance of the muropeptide subunits composing the cell wall. Characteristics such as the degree of cross-linking and average glycan strand length are known to vary across species. However, a systematic comparison among strains of a given species has yet to be undertaken, making it difficult to assess the origins of variability in peptidoglycan composition. We present a protocol for muropeptide analysis using ultra performance liquid chromatography (UPLC) and demonstrate that UPLC achieves resolution comparable with that of HPLC while requiring orders of magnitude less injection volume and a fraction of the elution time. We also developed a software platform to automate the identification and quantification of chromatographic peaks, which we demonstrate has improved accuracy relative to other software. This combined experimental and computational methodology revealed that peptidoglycan composition was approximately maintained across strains from three Gram-negative species despite taxonomical and morphological differences. Peptidoglycan composition and density were maintained after we systematically altered cell size in Escherichia coli using the antibiotic A22, indicating that cell shape is largely decoupled from the biochemistry of peptidoglycan synthesis. High-throughput, sensitive UPLC combined with our automated software for chromatographic analysis will accelerate the discovery of peptidoglycan composition and the molecular mechanisms of cell wall structure determination. PMID:26468288
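
    The automated identification and quantification of chromatographic peaks described above can be prototyped with standard signal-processing tools. The sketch below detects peaks in a synthetic chromatogram and integrates their areas; the trace, thresholds, and peak shapes are invented for illustration and do not reproduce the authors' software.

```python
# Minimal sketch: detect peaks in a chromatogram and quantify them by
# integrating the area under each peak.  The trace is synthetic.
import numpy as np
from scipy.signal import find_peaks, peak_widths

t = np.linspace(0, 10, 2000)                      # retention time, min
trace = (1.0 * np.exp(-((t - 2.0) / 0.05) ** 2)   # three Gaussian peaks
         + 0.6 * np.exp(-((t - 4.5) / 0.08) ** 2)
         + 0.3 * np.exp(-((t - 7.2) / 0.06) ** 2)
         + np.random.default_rng(0).normal(0, 0.005, t.size))

peaks, _ = find_peaks(trace, height=0.1, prominence=0.05)
_, _, lefts, rights = peak_widths(trace, peaks, rel_height=0.99)

for p, lo, hi in zip(peaks, lefts, rights):
    lo, hi = int(lo), int(hi) + 1
    area = np.trapz(trace[lo:hi], t[lo:hi])       # peak area ~ abundance
    print(f"peak at {t[p]:.2f} min, area = {area:.4f}")
```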

  16. Financial and Performance Analyses of Microcontroller Based Solar-Powered Autorickshaw for a Developing Country

    Directory of Open Access Journals (Sweden)

    Abu Raihan Mohammad Siddique

    2016-01-01

    Full Text Available This paper presents a case study to examine the economic viability and performance of a microcontroller based solar powered battery operated autorickshaw (m-SBAR) for developing countries, which is compared with different types of rickshaws such as the pedal rickshaw (PR), battery operated autorickshaw (BAR), and solar-powered battery operated autorickshaw (SBAR), available in Bangladesh. The BAR consists of a rickshaw structure, a battery bank, a battery charge controller, a DC motor driver, and a DC motor, whereas the proposed m-SBAR contains additional components like a solar panel and a microcontroller based DC motor driver. The complete design considered the local radiation data and load profile of the proposed m-SBAR. The Levelized Cost of Energy (LCOE) analysis, Net Present Worth, payback periods, and Benefit-to-Cost Ratio methods have been used to evaluate the financial feasibility and sensitivity analysis of the m-SBAR, grid-powered BAR, and PR. The numerical analysis reveals that the LCOE and Benefit-to-Cost Ratio of the proposed m-SBAR are lower compared to the grid-powered BAR. It has also been found that the microcontroller based DC motor control circuit reduces the battery discharge rate, improves battery life, and controls motor speed efficiently.
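
    The financial indicators named above (LCOE, net present worth, payback period, benefit-to-cost ratio) follow directly from discounted cash flow arithmetic. The sketch below computes them for a small solar vehicle project; every cash flow, rate, and energy figure is an assumed placeholder, not a value from the case study.

```python
# Minimal sketch of LCOE, net present worth, simple payback period, and
# benefit-to-cost ratio for a small solar vehicle project.  All inputs are assumed.
capex = 1200.0                 # USD, initial investment
annual_om = 50.0               # USD/year, operation & maintenance
annual_benefit = 400.0         # USD/year, fare revenue / avoided energy cost
annual_energy = 900.0          # kWh/year delivered to the wheels
discount_rate = 0.08
lifetime = 10                  # years

def pv(amount, year, r=discount_rate):
    return amount / (1.0 + r) ** year

years = range(1, lifetime + 1)
pv_costs = capex + sum(pv(annual_om, y) for y in years)
pv_benefits = sum(pv(annual_benefit, y) for y in years)
pv_energy = sum(pv(annual_energy, y) for y in years)   # "discounted" energy

lcoe = pv_costs / pv_energy                      # USD per kWh
npw = pv_benefits - pv_costs                     # net present worth
bcr = pv_benefits / pv_costs                     # benefit-to-cost ratio
payback = capex / (annual_benefit - annual_om)   # simple (undiscounted) payback

print(f"LCOE = {lcoe:.3f} USD/kWh, NPW = {npw:.0f} USD, "
      f"BCR = {bcr:.2f}, simple payback = {payback:.1f} years")
```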

  17. INFLUENCE OF STOCHASTIC NOISE STATISTICS ON KALMAN FILTER PERFORMANCE BASED ON VIDEO TARGET TRACKING

    Institute of Scientific and Technical Information of China (English)

    Chen Ken; Napolitano; Zhang Yun; Li Dong

    2010-01-01

    The system stochastic noises involved in Kalman filtering are assumed to be ideally white and Gaussian distributed. In this research, efforts are devoted to exploring the influence of the noise statistics on Kalman filtering from the perspective of video target tracking quality. The correlation of tracking precision with both the process and measurement noise covariance is investigated; the signal-to-noise power density ratio is defined; the contribution of predicted states and measured outputs to Kalman filter behavior is discussed; and the relative sensitivity of tracking precision is derived and applied in this case study. The findings are expected to pave the way for future study of how actual noise statistics that deviate from the assumed ones affect the optimality of the Kalman filter and degrade its performance in video tracking applications.
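
    For readers who want to see where the process and measurement noise covariances discussed above enter the recursion, the following minimal sketch runs a constant-velocity Kalman filter on a one-dimensional tracking problem. The state model, the Q and R values, and the simulated measurements are all assumptions for illustration, not the paper's tracking setup.

```python
# Minimal sketch: a constant-velocity Kalman filter for 1-D target tracking,
# showing where the process noise covariance Q and measurement noise
# covariance R enter the predict/update recursion.  All values are assumed.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                # we only measure position
Q = 1e-3 * np.array([[dt**4 / 4, dt**3 / 2],
                     [dt**3 / 2, dt**2]]) # process noise covariance
R = np.array([[0.5]])                     # measurement noise covariance

x = np.array([[0.0], [1.0]])              # initial state estimate
P = np.eye(2)                             # initial estimate covariance

rng = np.random.default_rng(1)
true_pos = 0.0
for k in range(20):
    true_pos += 1.0                                   # target moves at 1 unit/step
    z = np.array([[true_pos + rng.normal(0, 0.7)]])   # noisy measurement

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("final position estimate:", float(x[0, 0]))
```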

  18. Statistical Diagnosis Method of Conductor Motions in Superconducting Magnets to Predict their Quench Performance

    CERN Document Server

    Khomenko, B A; Rijllart, A; Sanfilippo, S; Siemko, A

    2001-01-01

    Premature training quenches are usually caused by the transient energy released within the magnet coil as it is energised. Two distinct varieties of disturbances exist. They are thought to be electrical and mechanical in origin. The first type of disturbance comes from non-uniform current distribution in superconducting cables whereas the second one usually originates from conductor motions or micro-fractures of insulating materials under the action of Lorentz forces. All of these mechanical events produce in general a rapid variation of the voltages in the so-called quench antennas and across the magnet coil, called spikes. A statistical method to treat the spatial localisation and the time occurrence of spikes will be presented. It allows identification of the mechanical weak points in the magnet without need to increase the current to provoke a quench. The prediction of the quench level from detailed analysis of the spike statistics can be expected.

  19. Differences in game-related statistics of basketball performance by game location for men's winning and losing teams.

    Science.gov (United States)

    Gómez, Miguel A; Lorenzo, Alberto; Barakat, Rubén; Ortega, Enrique; Palao, José M

    2008-02-01

    The aim of the present study was to identify game-related statistics that differentiate winning and losing teams according to game location. The sample included 306 games of the 2004-2005 regular season of the Spanish professional men's league (ACB League). The independent variables were game location (home or away) and game result (win or loss). The game-related statistics registered were free throws (successful and unsuccessful), 2- and 3-point field goals (successful and unsuccessful), offensive and defensive rebounds, blocks, assists, fouls, steals, and turnovers. Descriptive and inferential analyses were done (one-way analysis of variance and discriminant analysis). The multivariate analysis showed that winning teams differ from losing teams in defensive rebounds (SC = .42) and in assists (SC = .38). Similarly, winning teams differ from losing teams when they play at home in defensive rebounds (SC = .40) and in assists (SC = .41). On the other hand, winning teams differ from losing teams when they play away in defensive rebounds (SC = .44), assists (SC = .30), successful 2-point field goals (SC = .31), and unsuccessful 3-point field goals (SC = -.35). Defensive rebounds and assists were the only game-related statistics common to all three analyses.

  20. Computational Performance Optimisation for Statistical Analysis of the Effect of Nano-CMOS Variability on Integrated Circuits

    Directory of Open Access Journals (Sweden)

    Zheng Xie

    2013-01-01

    Full Text Available The intrinsic variability of nanoscale VLSI technology must be taken into account when analyzing circuit designs to predict likely yield. Monte-Carlo- (MC-) and quasi-MC- (QMC-) based statistical techniques do this by analysing many randomised or quasirandomised copies of circuits. The randomisation must model forms of variability that occur in nano-CMOS technology, including “atomistic” effects without intradie correlation and effects with intradie correlation between neighbouring devices. A major problem is the computational cost of carrying out sufficient analyses to produce statistically reliable results. The use of principal components analysis, behavioural modeling, and an implementation of “Statistical Blockade” (SB) is shown to be capable of achieving significant reduction in the computational costs. A computation time reduction of 98.7% was achieved for a commonly used asynchronous circuit element. Replacing MC by QMC analysis can achieve further computation reduction, and this is illustrated for more complex circuits, with the results being compared with those of transistor-level simulations. The “yield prediction” analysis of SRAM arrays is taken as a case study, where the arrays contain up to 1536 transistors modelled using parameters appropriate to 35 nm technology. It is reported that savings of up to 99.85% in computation time were obtained.
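
    The contrast between MC and QMC sampling mentioned above can be illustrated with a toy yield estimation: parameter variations are drawn either pseudo-randomly or from a Sobol low-discrepancy sequence, and the fraction of samples meeting a performance specification is counted. The circuit model, the specification limit, and the parameter statistics below are invented for illustration and are unrelated to the 35 nm models used in the paper.

```python
# Minimal sketch: toy yield estimation under parameter variability using
# plain Monte Carlo versus quasi-Monte Carlo (Sobol) sampling.
import numpy as np
from scipy.stats import qmc, norm

def delay(vth, leff):
    """Toy 'circuit performance' model: gate delay vs. threshold voltage
    and effective channel length variations (both in sigma units)."""
    return 1.0 + 0.08 * vth + 0.05 * leff + 0.02 * vth * leff

SPEC = 1.15          # delay must stay below this limit
N = 4096             # power of two, as recommended for Sobol sequences

# Plain Monte Carlo: independent standard-normal parameter draws.
rng = np.random.default_rng(0)
mc = delay(rng.standard_normal(N), rng.standard_normal(N))

# Quasi-Monte Carlo: Sobol points mapped to standard normals.
sobol = qmc.Sobol(d=2, scramble=True, seed=0).random(N)
z = norm.ppf(sobol)
qmc_samples = delay(z[:, 0], z[:, 1])

print("MC  yield estimate:", np.mean(mc < SPEC))
print("QMC yield estimate:", np.mean(qmc_samples < SPEC))
```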

  1. Performance comparison of multi-detector detection statistics in targeted compact binary coalescence GW search

    OpenAIRE

    Haris, K; Pai, Archana

    2016-01-01

    A global network of advanced interferometric gravitational wave (GW) detectors is expected to be online soon. Coherent observation of GW from a distant compact binary coalescence (CBC) with a network of interferometers located on different continents gives crucial information about the source, such as its location and polarization. In this paper we compare different multi-detector network detection statistics for CBC searches. In maximum likelihood ratio (MLR) based detection appro...

  2. Magnetotomography—a new method for analysing fuel cell performance and quality

    Science.gov (United States)

    Hauer, Karl-Heinz; Potthast, Roland; Wüster, Thorsten; Stolten, Detlef

    Magnetotomography is a new method for the measurement and analysis of the current density distribution of fuel cells. The method is based on the measurement of the magnetic flux surrounding the fuel cell stack caused by the current inside the stack. As it is non-invasive, magnetotomography overcomes the shortcomings of traditional methods for the determination of current density in fuel cells [J. Stumper, S.A. Campell, D.P. Wilkinson, M.C. Johnson, M. Davis, In situ methods for the determination of current distributions in PEM fuel cells, Electrochem. Acta 43 (1998) 3773; S.J.C. Cleghorn, C.R. Derouin, M.S. Wilson, S. Gottesfeld, A printed circuit board approach to measuring current distribution in a fuel cell, J. Appl. Electrochem. 28 (1998) 663; Ch. Wieser, A. Helmbold, E. Gülzow, A new technique for two-dimensional current distribution measurements in electro-chemical cells, J. Appl. Electrochem. 30 (2000) 803; Grinzinger, Methoden zur Ortsaufgelösten Strommessung in Polymer Elektrolyt Brennstoffzellen, Diploma thesis, TU-München, 2003; Y.-G. Yoon, W.-Y. Lee, T.-H. Yang, G.-G. Park, C.-S. Kim, Current distribution in a single cell of PEMFC, J. Power Sources 118 (2003) 193-199; M.M. Mench, C.Y. Wang, An in situ method for determination of current distribution in PEM fuel cells applied to a direct methanol fuel cell, J. Electrochem. Soc. 150 (2003) A79-A85; S. Schönbauer, T. Kaz, H. Sander, E. Gülzow, Segmented bipolar plate for the determination of current distribution in polymer electrolyte fuel cells, in: Proceedings of the Second European PEMFC Forum, vol. 1, Lucerne/Switzerland, 2003, pp. 231-237; G. Bender, S.W. Mahlon, T.A. Zawodzinski, Further refinements in the segmented cell approach to diagnosing performance in polymer electrolyte fuel cells, J. Power Sources 123 (2003) 163-171]. After several years of research a complete prototype system is now available for research on single cells and stacks. This paper describes the basic system (fundamentals

  3. Optimization of the gas turbine-modular helium reactor using statistical methods to maximize performance without compromising system design margins

    International Nuclear Information System (INIS)

    Lommers, L.J.; Parme, L.L.; Shenoy, A.S.

    1995-07-01

    This paper describes a statistical approach for determining the impact of system performance and design uncertainties on power plant performance. The objectives of this design approach are to ensure that adequate margin is provided, that excess margin is minimized, and that full advantage can be taken of unconsumed margin. It is applicable to any thermal system in which these factors are important. The method is demonstrated using the Gas Turbine Modular Helium Reactor as an example. The quantitative approach described allows the characterization of plant performance and the specification of the system design requirements necessary to achieve the desired performance with high confidence. Performance variations due to design evolution, inservice degradation, and basic performance uncertainties are considered. The impact of all performance variabilities is combined using Monte Carlo analysis to predict the range of expected operation
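
    A bare-bones version of the Monte Carlo margin assessment described above is sketched below: independent performance uncertainties are sampled, combined through a simple plant performance function, and the resulting distribution is compared against a required value to quantify margin. The distributions, the performance function, and the requirement are placeholders, not GT-MHR design data.

```python
# Minimal sketch: combine several performance uncertainties by Monte Carlo
# and check the margin against a requirement at a chosen confidence level.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Assumed independent variabilities (design evolution, in-service degradation,
# basic performance uncertainty), expressed as multiplicative factors on output.
design = rng.normal(1.00, 0.010, N)
degradation = rng.triangular(0.97, 0.99, 1.00, N)
basic = rng.normal(1.00, 0.015, N)

nominal_output = 286.0                     # MWe, assumed nominal plant output
output = nominal_output * design * degradation * basic

required = 278.0                           # MWe, assumed performance requirement
p5 = np.percentile(output, 5)              # value exceeded with 95% probability

print(f"5th percentile output = {p5:.1f} MWe, "
      f"margin to requirement = {p5 - required:.1f} MWe, "
      f"P(output >= required) = {np.mean(output >= required):.3f}")
```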

  4. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    Science.gov (United States)

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
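
    A Weibull-type saccharification time course is commonly written as Y(t) = Ymax [1 − exp(−(t/λ)^n)], where λ is the characteristic time and n a shape parameter. The sketch below fits such a curve to a made-up hydrolysis time series and reports λ; the functional form is the standard Weibull cumulative form and may differ in detail from the authors' model, and all data are fabricated.

```python
# Minimal sketch: fit a Weibull-type saccharification curve
#   Y(t) = Ymax * (1 - exp(-(t/lmbda)**n))
# to a hydrolysis time course and report the characteristic time lambda.
import numpy as np
from scipy.optimize import curve_fit

def weibull_yield(t, ymax, lmbda, n):
    return ymax * (1.0 - np.exp(-(t / lmbda) ** n))

t = np.array([2, 4, 8, 12, 24, 48, 72], dtype=float)         # h, assumed
glucose = np.array([3.1, 5.6, 9.0, 11.2, 14.9, 17.3, 18.0])   # g/L, assumed

popt, _ = curve_fit(weibull_yield, t, glucose, p0=[18.0, 12.0, 1.0])
ymax, lmbda, n = popt
print(f"Ymax = {ymax:.1f} g/L, lambda = {lmbda:.1f} h, n = {n:.2f}")
```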

  5. Extending and Applying Spartan to Perform Temporal Sensitivity Analyses for Predicting Changes in Influential Biological Pathways in Computational Models.

    Science.gov (United States)

    Alden, Kieran; Timmis, Jon; Andrews, Paul S; Veiga-Fernandes, Henrique; Coles, Mark

    2017-01-01

    Through integrating real time imaging, computational modelling, and statistical analysis approaches, previous work has suggested that the induction of and response to cell adhesion factors is the key initiating pathway in early lymphoid tissue development, in contrast to the previously accepted view that the process is triggered by chemokine mediated cell recruitment. These model derived hypotheses were developed using spartan, an open-source sensitivity analysis toolkit designed to establish and understand the relationship between a computational model and the biological system that model captures. Here, we extend the functionality available in spartan to permit the production of statistical analyses that contrast the behavior exhibited by a computational model at various simulated time-points, enabling a temporal analysis that could suggest whether the influence of biological mechanisms changes over time. We exemplify this extended functionality by using the computational model of lymphoid tissue development as a time-lapse tool. By generating results at twelve-hour intervals, we show how the extensions to spartan have been used to suggest that lymphoid tissue development could be biphasic, and predict the time-point when a switch in the influence of biological mechanisms might occur.

  6. Error correction and statistical analyses for intra-host comparisons of feline immunodeficiency virus diversity from high-throughput sequencing data.

    Science.gov (United States)

    Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary

    2015-06-30

    Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies, and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. Altogether, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase

  7. A study of the feasibility of statistical analysis of airport performance simulation

    Science.gov (United States)

    Myers, R. H.

    1982-01-01

    The feasibility of conducting a statistical analysis of simulation experiments to study airport capacity is investigated. First, the form of the distribution of airport capacity is studied. Since the distribution is non-Gaussian, it is important to determine the effect of this distribution on standard analysis of variance techniques and power calculations. Next, power computations are made in order to determine how economical simulation experiments would be if they were designed to detect capacity changes from condition to condition. Many of the conclusions drawn are results of Monte-Carlo techniques.

  8. Spectral analyses of systolic blood pressure and heart rate variability and their association with cognitive performance in elderly hypertensive subjects.

    Science.gov (United States)

    Santos, W B; Matoso, J M D; Maltez, M; Gonçalves, T; Casanova, M; Moreira, I F H; Lourenço, R A; Monteiro, W D; Farinatti, P T V; Soares, P P; Oigman, W; Neves, M F T; Correia, M L G

    2015-08-01

    Systolic hypertension is associated with cognitive decline in the elderly. Altered blood pressure (BP) variability is a possible mechanism of reduced cognitive performance in elderly hypertensives. We hypothesized that altered beat-to-beat systolic BP variability is associated with reduced global cognitive performance in elderly hypertensive subjects. In exploratory analyses, we also studied the correlation between diverse discrete cognitive domains and indices of systolic BP and heart rate variability. Disproving our initial hypothesis, we have shown that hypertension and low education, but not indices of systolic BP and heart rate variability, were independent predictors of lower global cognitive performance. However, exploratory analyses showed that the systolic BP variability in semi-upright position was an independent predictor of matrix reasoning (B = 0.08 ± .03, P-value = 0.005), whereas heart rate variability in semi-upright position was an independent predictor of the executive function score (B = -6.36 ± 2.55, P-value = 0.02). We conclude that myogenic vascular and sympathetic modulation of systolic BP do not contribute to reduced global cognitive performance in treated hypertensive subjects. Nevertheless, our results suggest that both systolic BP and heart rate variability might be associated with modulation of frontal lobe cognitive domains, such as executive function and matrix reasoning.

  9. Development of 4S and related technologies. (3) Statistical evaluation of safety performance of 4S on ULOF event

    International Nuclear Information System (INIS)

    Ishii, Kyoko; Matsumiya, Hisato; Horie, Hideki; Miyagi, Kazumi

    2009-01-01

    The purpose of this work is to evaluate quantitatively and statistically the safety performance of the Super-Safe, Small, and Simple reactor (4S) by analyzing it with the ARGO code, a plant dynamics code for a sodium-cooled fast reactor. In this evaluation, an Anticipated Transient Without Scram (ATWS) is assumed, and an Unprotected Loss of Flow (ULOF) event is selected as a typical ATWS case. After a metric concerned with safety design is defined as the performance factor, a Phenomena Identification Ranking Table (PIRT) is produced in order to select the plausible phenomena that affect the metric. Then a sensitivity analysis is performed for the parameters related to the selected plausible phenomena. Finally, the metric is evaluated with statistical methods to determine whether it satisfies the given safety acceptance criteria. The result is as follows: the Cumulative Damage Fraction (CDF) for the cladding is defined as the metric, and the statistical estimate of the one-sided upper tolerance limit of 95 percent probability at a 95 percent confidence level for the CDF is within the safety acceptance criterion, CDF < 0.1. The result shows that the 4S safety performance is acceptable in the ULOF event. (author)
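
    One widely used way to obtain a one-sided 95%/95% tolerance limit of this kind is the nonparametric (Wilks) approach: with at least 59 independent code runs, the largest observed value bounds the 95th percentile with 95% confidence. The sketch below applies that rule to fabricated CDF samples; it illustrates the general statistical method, not necessarily the specific procedure used for 4S.

```python
# Minimal sketch: one-sided 95%/95% upper tolerance limit by the
# nonparametric (Wilks) first-order method.  The CDF samples are fabricated.
import math
import numpy as np

beta, gamma = 0.95, 0.95                      # coverage, confidence
n_min = math.ceil(math.log(1 - gamma) / math.log(beta))
print("minimum number of runs for first-order Wilks:", n_min)   # -> 59

rng = np.random.default_rng(7)
cdf_samples = rng.lognormal(mean=-4.0, sigma=0.5, size=59)   # assumed CDF results

upper_95_95 = cdf_samples.max()               # first-order Wilks bound
print(f"95/95 upper tolerance limit on CDF = {upper_95_95:.4f}")
print("meets acceptance criterion CDF < 0.1:", upper_95_95 < 0.1)
```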

  10. Numerical analyses of the effect of SG-interlayer shear stiffness on the structural performance of reinforced glass beams

    DEFF Research Database (Denmark)

    Louter, C.; Nielsen, Jens Henrik

    2013-01-01

    This paper focuses on the numerical modelling of SentryGlas-laminated reinforced glass beams. In these beams, which have been experimentally investigated in preceding research, a stainless steel reinforcement section is laminated at the inner recessed edge of a triple-layer glass beam by means of SentryGlas (SG) interlayer sheets. The current contribution numerically investigates the effect of the SG-interlayer shear stiffness on the overall structural response of the beams. This is done by means of a 3D finite element model in which the individual glass layers, the SG-interlayers and the reinforcement are incorporated. In the model, the glass parts are allowed to crack, but all other parts are assumed linear elastic throughout the analyses. By changing the shear modulus of the SG-interlayer in multiple analyses, its contribution to the overall structural performance of the beams - especially...

  11. Analytical review based on statistics on good and poor financial performance of LPD in Bangli regency.

    Science.gov (United States)

    Yasa, I. B. A.; Parnata, I. K.; Susilawati, N. L. N. A. S.

    2018-01-01

    This study aims to apply an analytical review model to analyze the influence of GCG, accounting conservatism, financial distress models, and company size on the good and poor financial performance of LPDs in Bangli Regency. Ordinal regression analysis is used to perform the analytical review, so that the influence of and relationships between variables can be identified for consideration in further audits. The respondents in this study were the 159 LPDs in Bangli Regency, of which 100 were randomly selected as the sample. The test results found that GCG and company size have a significant effect on both good and poor financial performance, while accounting conservatism and the financial distress model have no significant effect. The four variables together explain 58.8% of overall financial performance, while the remaining 41.2% is influenced by other variables. Size, the FDM, and accounting conservatism are the variables recommended for further audit.

  12. Statistical techniques for automating the detection of anomalous performance in rotating machinery

    International Nuclear Information System (INIS)

    Piety, K.R.; Magette, T.E.

    1979-01-01

    The level of technology utilized in automated systems that monitor industrial rotating equipment and the potential of alternative surveillance methods are assessed. It is concluded that changes in surveillance methodology would upgrade ongoing programs and yet still be practical for implementation. An improved anomaly recognition methodology is formulated and implemented on a minicomputer system. The effectiveness of the monitoring system was evaluated in laboratory tests on a small rotor assembly, using vibrational signals from both displacement probes and accelerometers. Time and frequency domain descriptors are selected to compose an overall signature that characterizes the monitored equipment. Limits for normal operation of the rotor assembly are established automatically during an initial learning period. Thereafter, anomaly detection is accomplished by applying an approximate statistical test to each signature descriptor. As demonstrated over months of testing, this monitoring system is capable of detecting anomalous conditions while exhibiting a false alarm rate below 0.5%
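
    The scheme described above, learning normal limits for each signature descriptor during an initial period and then flagging departures with a simple statistical test, can be sketched in a few lines. The descriptors, the learning window, and the limit multiplier below are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: learn per-descriptor limits during a baseline period, then
# flag anomalies when a descriptor leaves its learned band.
import numpy as np

rng = np.random.default_rng(3)

# Assumed signature descriptors per monitoring interval: overall RMS level and
# amplitude at the running-speed (1x) frequency.
baseline = np.column_stack([rng.normal(1.0, 0.05, 200),    # RMS, learning period
                            rng.normal(0.30, 0.02, 200)])  # 1x amplitude

mean = baseline.mean(axis=0)
std = baseline.std(axis=0, ddof=1)
k = 3.5                          # limit multiplier chosen for a low false-alarm rate
lower, upper = mean - k * std, mean + k * std

def check(sample):
    """Return indices of descriptors outside their learned limits."""
    sample = np.asarray(sample)
    return np.where((sample < lower) | (sample > upper))[0]

print(check([1.02, 0.31]))   # normal reading  -> []
print(check([1.35, 0.31]))   # elevated RMS    -> [0]
```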

  13. Cosmological Non-Gaussian Signature Detection: Comparing Performance of Different Statistical Tests

    Directory of Open Access Journals (Sweden)

    O. Forni

    2005-09-01

    Full Text Available Currently, it appears that the best method for non-Gaussianity detection in the cosmic microwave background (CMB) consists in calculating the kurtosis of the wavelet coefficients. We know that wavelet-kurtosis outperforms other methods such as the bispectrum, the genus, ridgelet-kurtosis, and curvelet-kurtosis on an empirical basis, but relatively few studies have compared other transform-based statistics, such as extreme values, or more recent tools such as higher criticism (HC), or proposed “best possible” choices for such statistics. In this paper, we consider two models for transform-domain coefficients: (a) a power-law model, which seems suited to the wavelet coefficients of simulated cosmic strings, and (b) a sparse mixture model, which seems suitable for the curvelet coefficients of filamentary structure. For model (a), if power-law behavior holds with finite 8th moment, excess kurtosis is an asymptotically optimal detector, but if the 8th moment is not finite, a test based on extreme values is asymptotically optimal. For model (b), if the transform coefficients are very sparse, a recent test, higher criticism, is an optimal detector, but if they are dense, kurtosis is an optimal detector. Empirical wavelet coefficients of simulated cosmic strings have power-law character, infinite 8th moment, while curvelet coefficients of the simulated cosmic strings are not very sparse. In all cases, excess kurtosis seems to be an effective test in moderate-resolution imagery.
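
    As a toy demonstration of the kurtosis-based detector discussed above, the sketch below computes the excess kurtosis of simulated transform coefficients with and without a sparse heavy-tailed contamination. The coefficient distributions are generic simulated samples, not CMB or cosmic-string data.

```python
# Minimal sketch: use the excess kurtosis of wavelet-like transform
# coefficients as a non-Gaussianity detector.  Coefficient samples are simulated.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)

gaussian_coeffs = rng.standard_normal(10_000)            # pure Gaussian field
contaminated = np.concatenate([                           # Gaussian plus a sparse,
    rng.standard_normal(9_900),                           # heavy-tailed component
    5.0 * rng.standard_t(df=3, size=100),
])

# fisher=True returns *excess* kurtosis (0 for a Gaussian).
print("excess kurtosis, Gaussian     :", round(kurtosis(gaussian_coeffs, fisher=True), 3))
print("excess kurtosis, contaminated :", round(kurtosis(contaminated, fisher=True), 3))
```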

  14. Missing data in randomized clinical trials for weight loss: scope of the problem, state of the field, and performance of statistical methods.

    Directory of Open Access Journals (Sweden)

    Mai A Elobeid

    2009-08-01

    Full Text Available Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. We searched PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion; 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and the statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates being approximated by an exponential decay curve e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last
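
    The dropout equation quoted above can be checked directly: with λ = 0.0088 per week, the predicted retention after t weeks is e^(−λt), so at one year (t ≈ 52 weeks) the predicted dropout is 1 − e^(−0.0088·52) ≈ 0.37, consistent with the 37% figure in the abstract. A tiny script reproducing this calculation is shown below.

```python
# Reproduce the dropout prediction implied by the exponential survival curve
# e^(-lambda * t) with lambda = 0.0088 per week.
import math

lam = 0.0088            # estimated dropout hazard (per week)
for weeks in (12, 26, 52):
    dropout = 1.0 - math.exp(-lam * weeks)
    print(f"predicted dropout after {weeks:2d} weeks: {dropout:.1%}")
# 52 weeks -> ~36.7%, consistent with the ~37% one-year dropout quoted above.
```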

  15. The effects of 'ecstasy' (MDMA) on visuospatial memory performance: findings from a systematic review with meta-analyses.

    Science.gov (United States)

    Murphy, Philip N; Bruno, Raimondo; Ryland, Ida; Wareing, Michele; Fisk, John E; Montgomery, Catharine; Hilton, Joanne

    2012-03-01

    To review, with meta-analyses where appropriate, performance differences between ecstasy (3,4-methylenedioxymethamphetamine) users and non-users on a wider range of visuospatial tasks than previously reviewed. Such tasks have been shown to draw upon working memory executive resources. Abstract databases were searched using the United Kingdom National Health Service Evidence Health Information Resource. Inclusion criteria were publication in English language peer-reviewed journals and the reporting of new findings regarding human ecstasy-users' performance on visuospatial tasks. Data extracted included specific task requirements to provide a basis for meta-analyses for categories of tasks with similar requirements. Fifty-two studies were identified for review, although not all were suitable for meta-analysis. Significant weighted mean effect sizes indicating poorer performance by ecstasy users compared with matched controls were found for tasks requiring recall of spatial stimulus elements, recognition of figures and production/reproduction of figures. There was no evidence of a linear relationship between estimated ecstasy consumption and effect sizes. Given the networked nature of processing for spatial and non-spatial visual information, future scanning and imaging studies should focus on brain activation differences between ecstasy users and non-users in the context of specific tasks to facilitate identification of loci of potentially compromised activity in users. Copyright © 2012 John Wiley & Sons, Ltd.

  16. PRIS-STATISTICS: Power Reactor Information System Statistical Reports. User's Manual

    International Nuclear Information System (INIS)

    2013-01-01

    The IAEA developed the Power Reactor Information System (PRIS)-Statistics application to assist PRIS end users with generating statistical reports from PRIS data. Statistical reports provide an overview of the status, specification and performance results of every nuclear power reactor in the world. This user's manual was prepared to facilitate the use of the PRIS-Statistics application and to provide guidelines and detailed information for each report in the application. Statistical reports support analyses of nuclear power development and strategies, and the evaluation of nuclear power plant performance. The PRIS database can be used for comprehensive trend analyses and benchmarking against best performers and industrial standards.

  17. Is there a curse of relocation? Analysing the causal link between offshoring and the innovation performance of (small) firms

    DEFF Research Database (Denmark)

    Mitze, Timo; Kreutzer, Fabian

    2017-01-01

    We analyse the empirical link between offshoring activities and different dimensions of innovation performance at the firm-level. In order to identify causal effects running from offshoring to innovation, we use a quasi-experimental comparison group approach by means of (conditional) difference-in-difference estimations applied to German establishment-level data for firms that conducted offshoring activities in the period 2007–13. We find that the international relocation of business functions has a negative impact on the firms’ propensity to be innovative in terms of product and process innovations as well...

  18. Statistical properties of coastal long waves analysed through sea-level time-gradient functions: exemplary analysis of the Siracusa, Italy, tide-gauge data

    Directory of Open Access Journals (Sweden)

    L. Bressan

    2016-01-01

    reconstructed sea level (RSL), the background slope (BS) and the control function (CF). These functions are examined through a traditional spectral fast Fourier transform (FFT) analysis and also through a statistical analysis, showing that they can be characterised by probability distribution functions (PDFs) such as the Student's t distribution (IS and RSL) and the beta distribution (CF). As an example, the method has been applied to data from the tide-gauge station of Siracusa, Italy.

  19. Statistical properties of indicators of first-year performance at university

    African Journals Online (AJOL)

    and tails of bivariate distributions composed of university average performance and a school .... Superimposed on the probability histogram in Figure 1 is this density estimate (solid line). ..... (1999b) was adjusted by adding a small ... average FYWUM is larger than 50% for the first time, and the Grade 12 average mark is.

  20. Do Different Mental Models Influence Cybersecurity Behavior? Evaluations via Statistical Reasoning Performance

    Directory of Open Access Journals (Sweden)

    Gary L. Brase

    2017-11-01

    Full Text Available Cybersecurity research often describes people as understanding internet security in terms of metaphorical mental models (e.g., disease risk, physical security risk, or criminal behavior risk). However, little research has directly evaluated if this is an accurate or productive framework. To assess this question, two experiments asked participants to respond to a statistical reasoning task framed in one of four different contexts (cybersecurity, plus the above alternative models). Each context was also presented using either percentages or natural frequencies, and these tasks were followed by a behavioral likelihood rating. As in previous research, consistent use of natural frequencies promoted correct Bayesian reasoning. There was little indication, however, that any of the alternative mental models generated consistently better understanding or reasoning over the actual cybersecurity context. There was some evidence that different models had some effects on patterns of responses, including the behavioral likelihood ratings, but these effects were small, as compared to the effect of the numerical format manipulation. This points to a need to improve the content of actual internet security warnings, rather than working to change the models users have of warnings.
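
    The advantage of natural frequencies over percentages in such Bayesian reasoning tasks can be illustrated with a small worked example. The base rate, hit rate and false-alarm rate below are invented for illustration and are not taken from the study.

```python
from fractions import Fraction

# Hypothetical security-warning scenario (numbers are illustrative only):
# 1% of downloads are malicious; the scanner flags 90% of malicious files
# and 5% of benign ones.
population = 10_000
malicious = population // 100                          # 100 of 10,000 downloads
flagged_malicious = malicious * 90 // 100              # 90 true alarms
flagged_benign = (population - malicious) * 5 // 100   # 495 false alarms

# Natural-frequency framing: of all flagged downloads, how many are malicious?
posterior = Fraction(flagged_malicious, flagged_malicious + flagged_benign)
print(f"P(malicious | flagged) = {posterior} = {float(posterior):.2%}")
# The same computation in percentage format requires explicit Bayes' rule:
# P(M|F) = (0.01 * 0.90) / (0.01 * 0.90 + 0.99 * 0.05)
```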

  1. Do Different Mental Models Influence Cybersecurity Behavior? Evaluations via Statistical Reasoning Performance.

    Science.gov (United States)

    Brase, Gary L; Vasserman, Eugene Y; Hsu, William

    2017-01-01

    Cybersecurity research often describes people as understanding internet security in terms of metaphorical mental models (e.g., disease risk, physical security risk, or criminal behavior risk). However, little research has directly evaluated if this is an accurate or productive framework. To assess this question, two experiments asked participants to respond to a statistical reasoning task framed in one of four different contexts (cybersecurity, plus the above alternative models). Each context was also presented using either percentages or natural frequencies, and these tasks were followed by a behavioral likelihood rating. As in previous research, consistent use of natural frequencies promoted correct Bayesian reasoning. There was little indication, however, that any of the alternative mental models generated consistently better understanding or reasoning over the actual cybersecurity context. There was some evidence that different models had some effects on patterns of responses, including the behavioral likelihood ratings, but these effects were small, as compared to the effect of the numerical format manipulation. This points to a need to improve the content of actual internet security warnings, rather than working to change the models users have of warnings.

  2. Statistical physics of fracture: scientific discovery through high-performance computing

    International Nuclear Information System (INIS)

    Kumar, Phani; Nukala, V V; Simunovic, Srdan; Mills, Richard T

    2006-01-01

    The paper presents the state-of-the-art algorithmic developments for simulating the fracture of disordered quasi-brittle materials using discrete lattice systems. Large scale simulations are often required to obtain accurate scaling laws; however, due to computational complexity, the simulations using the traditional algorithms were limited to small system sizes. We have developed two algorithms: a multiple sparse Cholesky downdating scheme for simulating 2D random fuse model systems, and a block-circulant preconditioner for simulating 3D random fuse model systems. Using these algorithms, we were able to simulate fracture of the largest lattice system sizes to date (L = 1024 in 2D, and L = 64 in 3D) with extensive statistical sampling. Our recent simulations on 1024 processors of Cray-XT3 and IBM Blue-Gene/L have further enabled us to explore fracture of 3D lattice systems of size L = 200, which is a significant computational achievement. These numerical simulations have enhanced our understanding of the physics of fracture; in particular, we analyze damage localization and its deviation from percolation behavior, scaling laws for damage density, universality of fracture strength distribution, size effect on the mean fracture strength, and finally the scaling of crack surface roughness.

  3. Noisy EEG signals classification based on entropy metrics. Performance assessment using first and second generation statistics.

    Science.gov (United States)

    Cuesta-Frau, David; Miró-Martínez, Pau; Jordán Núñez, Jorge; Oltra-Crespo, Sandra; Molina Picó, Antonio

    2017-08-01

    This paper evaluates the performance of first generation entropy metrics, represented by the well-known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and what can be considered an evolution from these, Fuzzy Entropy (FuzzyEn), in the Electroencephalogram (EEG) signal classification context. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records, and a realistic range of amplitudes for interfering artifacts, this work optimises and assesses the robustness of these metrics against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing the best, and the noise and muscular artifacts are the most confounding factors. On the contrary, there is a wide variability as regards initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts. Copyright © 2017 Elsevier Ltd. All rights reserved.
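
    For orientation, a compact (not performance-tuned) Sample Entropy sketch is given below, one of the first-generation metrics compared in the study. The parameter choices m = 2 and r = 0.2·std are common defaults and the test signals are synthetic; they are not necessarily the settings used by the authors.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample Entropy: -ln(A/B), where B counts template matches of length m
    and A counts matches of length m+1 (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r) - 1                           # exclude the self-match
        return count
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 20 * np.pi, 1000))        # regular signal -> low SampEn
noisy = clean + 0.5 * rng.standard_normal(1000)         # added "artifact" -> higher SampEn
print(sample_entropy(clean), sample_entropy(noisy))
```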

  4. Applicability of a Diffuse Reflectance Infrared Fourier Transform handheld spectrometer to perform in situ analyses on Cultural Heritage materials.

    Science.gov (United States)

    Arrizabalaga, Iker; Gómez-Laserna, Olivia; Aramendia, Julene; Arana, Gorka; Madariaga, Juan Manuel

    2014-08-14

    This work studies the applicability of a Diffuse Reflectance Infrared Fourier Transform handheld device to perform in situ analyses on Cultural Heritage assets. This portable diffuse reflectance spectrometer has been used to characterise and diagnose the conservation state of (a) building materials of the Guevara Palace (15th century, Segura, Basque Country, Spain) and (b) different 19th century wallpapers manufactured by the Santa Isabel factory (Vitoria-Gasteiz, Basque Country, Spain) and by the well known Dufour and Leroy manufacturers (Paris, France), all of them belonging to the Torre de los Varona Castle (Villanañe, Basque Country, Spain). In all cases, in situ measurements were carried out and also a few samples were collected and measured in the laboratory by diffuse reflectance spectroscopy (DRIFT) in order to validate the information obtained by the handheld instrument. In the analyses performed in situ, distortions in the diffuse reflectance spectra can be observed due to the presence of specular reflection, showing the inverted bands caused by the Reststrahlen effect, in particular on those IR bands with the highest absorption coefficients. This paper concludes that the results obtained in situ by a diffuse reflectance handheld device are comparable to those obtained with laboratory diffuse reflectance spectroscopy equipment and proposes a few guidelines to acquire good spectra in the field, minimising the influence caused by the specular reflection. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Statistics Clinic

    Science.gov (United States)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  6. Nuclear power plant performance statistics. Comparison with fossil-fired units

    International Nuclear Information System (INIS)

    Tabet, C.; Laue, H.J.; Qureshi, A.; Skjoeldebrand, R.; White, D.

    1983-01-01

    The joint UNIPEDE/World Energy Conference Committee on Availability of Thermal Generating Plants has a mandate to study the availability of thermal plants and the different factors that influence it. This has led to the collection and publication at the Congress of the World Energy Conference (WEC) every third year of availability and unavailability factors to be used in systems reliability studies and operations and maintenance planning. For nuclear power plants the joint UNIPEDE/WEC Committee relies on the IAEA to provide availability and unavailability data. The IAEA has published an annual report with operating data from nuclear plants in its Member States since 1971, covering in addition back data from the early 1960s. These reports have developed over the years and in the early 1970s the format was brought into close conformity with that used by UNIPEDE and WEC to report performance of fossil-fired generating plants. Since 1974 an annual analytical summary report has been prepared. In 1981 all information on operating experience with nuclear power plants was placed in a computer file for easier reference. The computerized Power Reactor Information System (PRIS) ensures that data are easily retrievable and at its present level it remains compatible with various national systems. The objectives for the IAEA data collection and evaluation have developed significantly since 1970. At first, the IAEA primarily wanted to enable the individual power plant operator to compare the performance of his own plant with that of others of the same type; when enough data had been collected, they provided the basis for assessment of the fundamental performance parameters used in economic project studies; now, the data base merits being used in setting availability objectives for power plant operations. (author)

  7. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.

  8. Statistical multi-model approach for performance assessment of cooling tower

    International Nuclear Information System (INIS)

    Pan, Tian-Hong; Shieh, Shyan-Shu; Jang, Shi-Shang; Tseng, Wen-Hung; Wu, Chan-Wei; Ou, Jenq-Jang

    2011-01-01

    This paper presents a data-driven model-based assessment strategy to investigate the performance of a cooling tower. In order to achieve this objective, the operations of a cooling tower are first characterized using a data-driven method, multiple models, which represents a set of local models in the form of linear equations. A fuzzy c-means clustering algorithm is used to classify operating data into several groups to build local models. The developed models are then applied to predict the performance of the system based on design input parameters provided by the manufacturer. The tower characteristics are also investigated using the proposed models via the effects of the water/air flow ratio. The predicted results tend to agree well with the calculated tower characteristics using actual measured operating data from an industrial plant. By comparison with the design characteristic curve provided by the manufacturer, the effectiveness of the cooling tower can then be obtained. A case study conducted in a commercial plant demonstrates the validity of the proposed approach. It should be noted that this is the first attempt to assess how far the cooling efficiency has deviated from the original design value using operating data for an industrial scale process. Moreover, the evaluation does not interrupt the normal operation of the cooling tower. This should be of particular interest in industrial applications.
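
    The multi-model idea can be sketched as: cluster the operating data with fuzzy c-means and then fit one local linear model per cluster. The synthetic data, variable names and number of clusters below are assumptions made for illustration only.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                          # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Synthetic cooling-tower-like data: inputs (water/air ratio, wet-bulb temperature), output (approach)
rng = np.random.default_rng(42)
X = rng.uniform([0.5, 20.0], [1.5, 30.0], size=(300, 2))
y = 3.0 + 2.5 * X[:, 0] - 0.1 * X[:, 1] + 0.1 * rng.standard_normal(300)

centres, U = fuzzy_c_means(X, c=3)
labels = U.argmax(axis=1)                                      # hard assignment for the local fits
for k in range(3):
    mask = labels == k
    A = np.column_stack([np.ones(mask.sum()), X[mask]])
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)         # one local linear model per regime
    print(f"cluster {k}: y = {coef[0]:.2f} + {coef[1]:.2f}*ratio + {coef[2]:.2f}*T_wb")
```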

  9. Box-Behnken statistical design to optimize thermal performance of energy storage systems

    Science.gov (United States)

    Jalalian, Iman Joz; Mohammadiun, Mohammad; Moqadam, Hamid Hashemi; Mohammadiun, Hamid

    2018-05-01

    Latent heat thermal storage (LHTS) is a technology that can help to reduce energy consumption for cooling applications, where the cold is stored in phase change materials (PCMs). In the present study a comprehensive theoretical and experimental investigation is performed on an LHTS system containing RT25 as phase change material (PCM). Process optimization of the experimental conditions (inlet air temperature and velocity and number of slabs) was carried out by means of a Box-Behnken design (BBD) of response surface methodology (RSM). Two parameters (cooling time and COP value) were chosen to be the responses. Both of the responses were significantly influenced by the combined effect of inlet air temperature with velocity and number of slabs. Simultaneous optimization was performed on the basis of the desirability function to determine the optimal conditions for the cooling time and COP value. Maximum cooling time (186 min) and COP value (6.04) were found at the optimum process conditions, i.e. an inlet temperature of 32.5, an air velocity of 1.98 and 7 slabs.
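
    The response-surface step behind a Box-Behnken analysis amounts to fitting a second-order polynomial to the design runs and then optimising the fitted surface. The sketch below fits such a model for three factors (temperature, velocity, slab count) against COP; the design points and COP values are fabricated placeholders, not the study's data.

```python
import numpy as np
from itertools import combinations

# Hypothetical BBD-style runs: columns are inlet temperature, air velocity, slab count
X = np.array([
    [30, 1.0, 5], [35, 1.0, 5], [30, 2.0, 5], [35, 2.0, 5],
    [30, 1.5, 3], [35, 1.5, 3], [30, 1.5, 7], [35, 1.5, 7],
    [32.5, 1.0, 3], [32.5, 2.0, 3], [32.5, 1.0, 7], [32.5, 2.0, 7],
    [32.5, 1.5, 5], [32.5, 1.5, 5], [32.5, 1.5, 5],
], dtype=float)
cop = np.array([4.1, 4.6, 5.0, 5.5, 4.0, 4.4, 5.2, 6.0, 4.3, 5.1, 5.4, 6.0, 5.3, 5.2, 5.3])

def quadratic_design_matrix(X):
    """Full second-order model: intercept, linear, two-way interaction and squared terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, j] for j in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    cols += [X[:, j] ** 2 for j in range(X.shape[1])]
    return np.column_stack(cols)

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, cop, rcond=None)     # least-squares fit of the response surface
print("fitted coefficients:", np.round(beta, 3))
# The fitted surface can then be maximised (e.g. on a grid) to locate optimum settings.
```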

  10. Closed loop statistical performance analysis of N-K knock controllers

    Science.gov (United States)

    Peyton Jones, James C.; Shayestehmanesh, Saeed; Frey, Jesse

    2017-09-01

    The closed loop performance of engine knock controllers cannot be rigorously assessed from single experiments or simulations because knock behaves as a random process and the closed loop response therefore itself follows a distribution. In this work a new method is proposed for computing the distributions and expected values of the closed loop response, both in steady state and in response to disturbances. The method takes as its input the control law, and the knock propensity characteristic of the engine which is mapped from open loop steady state tests. The method is applicable to the 'n-k' class of knock controllers in which the control action is a function only of the number of cycles n since the last control move, and the number k of knock events that have occurred in this time. A Cumulative Summation (CumSum) based controller falls within this category, and the method is used to investigate the performance of the controller in a deeper and more rigorous way than has previously been possible. The results are validated using onerous Monte Carlo simulations, which confirm both the validity of the method and its high computational efficiency.

  11. Box-Behnken statistical design to optimize thermal performance of energy storage systems

    Science.gov (United States)

    Jalalian, Iman Joz; Mohammadiun, Mohammad; Moqadam, Hamid Hashemi; Mohammadiun, Hamid

    2017-11-01

    Latent heat thermal storage (LHTS) is a technology that can help to reduce energy consumption for cooling applications, where the cold is stored in phase change materials (PCMs). In the present study a comprehensive theoretical and experimental investigation is performed on an LHTS system containing RT25 as phase change material (PCM). Process optimization of the experimental conditions (inlet air temperature and velocity and number of slabs) was carried out by means of a Box-Behnken design (BBD) of response surface methodology (RSM). Two parameters (cooling time and COP value) were chosen to be the responses. Both of the responses were significantly influenced by the combined effect of inlet air temperature with velocity and number of slabs. Simultaneous optimization was performed on the basis of the desirability function to determine the optimal conditions for the cooling time and COP value. Maximum cooling time (186 min) and COP value (6.04) were found at the optimum process conditions, i.e. an inlet temperature of 32.5, an air velocity of 1.98 and 7 slabs.

  12. Design and performance characteristics of solar adsorption refrigeration system using parabolic trough collector: Experimental and statistical optimization technique

    International Nuclear Information System (INIS)

    Abu-Hamdeh, Nidal H.; Alnefaie, Khaled A.; Almitani, Khalid H.

    2013-01-01

    Highlights: • The successes of using olive waste/methanol as an adsorbent/adsorbate pair. • The experimental gross cycle coefficient of performance obtained was COPa = 0.75. • Optimization showed expanding adsorbent mass to a certain range increases the COP. • The statistical optimization led to optimum tank volume between 0.2 and 0.3 m³. • Increasing the collector area to a certain range increased the COP. - Abstract: The current work demonstrates a developed model of a solar adsorption refrigeration system with specific requirements and specifications. The recent scheme can be employed as a refrigerator and cooler unit suitable for remote areas. The unit runs through a parabolic trough solar collector (PTC) and uses olive waste as adsorbent with methanol as adsorbate. Cooling production, COP (coefficient of performance) and COPa (cycle gross coefficient of performance) were used to assess the system performance. The system’s design optimum parameters in this study were arrived at through statistical and experimental methods. The lowest temperature attained in the refrigerated space was 4 °C and the equivalent ambient temperature was 27 °C. The temperature started to decrease steadily at 20:30 – when the actual cooling started – until it reached 4 °C at 01:30 the next day, when it rose again. The highest COPa obtained was 0.75.

  13. Analyses to demonstrate the structural performance of the CASTOR KN12 in hypothetical accident drop accident scenarios

    International Nuclear Information System (INIS)

    Diersch, R.; Weiss, M.; Tso, C.F.; Chung, S.H.; Lee, H.Y.

    2004-01-01

    CASTOR® KN-12 is a new cask design by GNB for KHNP-NETEC for dry and wet transportation of up to twelve spent PWR fuel assemblies in Korea. It received its transport license from the Korean Competent Authority KINS in July 2002 and is now in use in South Korea. It has been designed to satisfy the regulatory requirements of the 10 CFR 71 and the IAEA ST-1 for Type B(U)F packages. Its structural performance was demonstrated against the load cases and boundary conditions as defined in 10 CFR 71 and NRC's Regulatory Guide 7.8, and further explained in NUREG 1617. This included normal conditions of transport load cases - including Hot Environment, Cold Environment, Increased External Pressure (140 MPa), Minimum External Pressure (24.5 kPa), Vibration and shock, and 0.3m free drop - and the hypothetical accident conditions load cases - including the 9m Free Drop, Puncture, Thermal Fire Accident, 200m Water Immersion and 1.5 x MNOP Internal Pressure. Structural performance was demonstrated by analysis, including state-of-the-art finite element (FE) simulation, and confirmed by tests using a 1/3-scale model. Test results were also used to verify the numerical tool and the methods used in the analyses. All the structural analyses including validation against drop tests were carried out by Arup, and testing was carried out by KAERI. This paper concentrates on the analysis carried out to demonstrate performance in the hypothetical accident 9m free drop scenarios, and results from a small selection of them.

  14. Analyses to demonstrate the structural performance of the CASTOR KN12 in hypothetical accident drop accident scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Diersch, R.; Weiss, M. [Gesellschaft fuer Nuklear-Behaelter mbH (Germany); Tso, C.F. [Arup (United Kingdom); Chung, S.H.; Lee, H.Y. [KHNP-NETEC (Korea)

    2004-07-01

    CASTOR® KN-12 is a new cask design by GNB for KHNP-NETEC for dry and wet transportation of up to twelve spent PWR fuel assemblies in Korea. It received its transport license from the Korean Competent Authority KINS in July 2002 and is now in use in South Korea. It has been designed to satisfy the regulatory requirements of the 10 CFR 71 and the IAEA ST-1 for Type B(U)F packages. Its structural performance was demonstrated against the load cases and boundary conditions as defined in 10 CFR 71 and NRC's Regulatory Guide 7.8, and further explained in NUREG 1617. This included normal conditions of transport load cases - including Hot Environment, Cold Environment, Increased External Pressure (140 MPa), Minimum External Pressure (24.5 kPa), Vibration and shock, and 0.3m free drop - and the hypothetical accident conditions load cases - including the 9m Free Drop, Puncture, Thermal Fire Accident, 200m Water Immersion and 1.5 x MNOP Internal Pressure. Structural performance was demonstrated by analysis, including state-of-the-art finite element (FE) simulation, and confirmed by tests using a 1/3-scale model. Test results were also used to verify the numerical tool and the methods used in the analyses. All the structural analyses including validation against drop tests were carried out by Arup, and testing was carried out by KAERI. This paper concentrates on the analysis carried out to demonstrate performance in the hypothetical accident 9m free drop scenarios, and results from a small selection of them.

  15. Identification of robust statistical downscaling methods based on a comprehensive suite of performance metrics for South Korea

    Science.gov (United States)

    Eum, H. I.; Cannon, A. J.

    2015-12-01

    Climate models are a key tool for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch of spatial resolution between GCMs and regional applications, in particular for a region characterized by complex terrain such as the Korean peninsula. Therefore, a downscaling procedure is essential to assess regional impacts of climate change. Numerous statistical downscaling methods have been used mainly due to their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Using a split-sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested on the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distribution, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR and all methods lead to large improvements in representing all performance metrics. According to the seasonal performance metrics evaluated, when TOPSIS is applied, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is regarded as near-perfect climate data in climate studies. Therefore, the
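
    TOPSIS itself is a short algorithm: normalise the metric table, weight it, and rank alternatives by closeness to the ideal solution. The sketch below applies it to an invented metric table for the four downscaling methods; the weights, scores and "higher is better" assumption are placeholders.

```python
import numpy as np

def topsis(scores, weights=None, benefit=None):
    """Rank alternatives (rows) on criteria (columns) by closeness to the ideal solution."""
    S = np.asarray(scores, dtype=float)
    n_alt, n_crit = S.shape
    w = np.ones(n_crit) / n_crit if weights is None else np.asarray(weights, float)
    benefit = np.ones(n_crit, bool) if benefit is None else np.asarray(benefit, bool)
    R = S / np.sqrt((S ** 2).sum(axis=0))              # vector-normalise each criterion
    V = R * w
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)                # closeness coefficient, higher is better

# Hypothetical metric table: rows = BCSD, BCCA, MACA, BCCI; columns = temporal correlation,
# distribution skill, spatial correlation, extreme-event skill (all treated as "higher is better").
metrics = [[0.82, 0.75, 0.80, 0.60],
           [0.78, 0.70, 0.85, 0.55],
           [0.85, 0.80, 0.83, 0.70],
           [0.80, 0.77, 0.79, 0.62]]
for name, c in zip(["BCSD", "BCCA", "MACA", "BCCI"], topsis(metrics)):
    print(f"{name}: closeness = {c:.3f}")
```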

  16. Meta-regression analyses to explain statistical heterogeneity in a systematic review of strategies for guideline implementation in primary health care.

    Directory of Open Access Journals (Sweden)

    Susanne Unverzagt

    Full Text Available This study is an in-depth analysis to explain statistical heterogeneity in a systematic review of implementation strategies to improve guideline adherence of primary care physicians in the treatment of patients with cardiovascular diseases. The systematic review included randomized controlled trials from a systematic search in MEDLINE, EMBASE, CENTRAL, conference proceedings and registers of ongoing studies. Implementation strategies were shown to be effective with substantial heterogeneity of treatment effects across all investigated strategies. The primary aim of this study was to explain different effects of eligible trials and to identify methodological and clinical effect modifiers. Random effects meta-regression models were used to simultaneously assess the influence of multimodal implementation strategies and effect modifiers on physician adherence. Effect modifiers included the staff responsible for implementation, level of prevention and definition of the primary outcome, unit of randomization, duration of follow-up and risk of bias. Six clinical and methodological factors were investigated as potential effect modifiers of the efficacy of different implementation strategies on guideline adherence in primary care practices on the basis of information from 75 eligible trials. Five effect modifiers were able to explain a substantial amount of statistical heterogeneity. Physician adherence was improved by 62% (95% confidence interval (CI) 29 to 104%) or 29% (95% CI 5 to 60%) in trials where other non-medical professionals or nurses were included in the implementation process. Improvement of physician adherence was more successful in primary and secondary prevention of cardiovascular diseases by around 30% (30%; 95% CI −2 to 71% and 31%; 95% CI 9 to 57%, respectively) compared to tertiary prevention. This study aimed to identify effect modifiers of implementation strategies on physician adherence. Especially the cooperation of different health
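
    To illustrate the meta-regression idea described above in its simplest form, the sketch below fits an inverse-variance weighted (fixed-effect style) meta-regression of trial effect sizes on one candidate effect modifier; the study itself used random-effects models, and the trial-level data here are fabricated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_trials = 30
nurse_involved = rng.integers(0, 2, n_trials)                          # candidate effect modifier (0/1)
log_rr = 0.10 + 0.25 * nurse_involved + rng.normal(0, 0.15, n_trials)  # trial effect sizes (log scale)
se = rng.uniform(0.05, 0.25, n_trials)                                 # trial standard errors

X = sm.add_constant(nurse_involved.astype(float))
res = sm.WLS(log_rr, X, weights=1.0 / se ** 2).fit()                   # inverse-variance weighting
print("intercept, modifier coefficient:", res.params)
print("p-values:", res.pvalues)
# exp(modifier coefficient) approximates the multiplicative change in adherence
```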

  17. Performance study of K{sub e} factors in simplified elastic plastic fatigue analyses with emphasis on thermal cyclic loading

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Hermann, E-mail: hermann.lang@areva.com [AREVA NP GmbH, PEEA-G, Henri-Dunant-Strasse 50, 91058 Erlangen (Germany); Rudolph, Juergen; Ziegler, Rainer [AREVA NP GmbH, PEEA-G, Henri-Dunant-Strasse 50, 91058 Erlangen (Germany)

    2011-08-15

    As code-based fully elastic plastic code conforming fatigue analyses are still time consuming, simplified elastic plastic analysis is often applied. This procedure is known to be overly conservative for some conditions due to the applied plastification (penalty) factor K{sub e}. As a consequence, less conservative fully elastic plastic fatigue analyses based on non-linear finite element analyses (FEA) or simplified elastic plastic analysis based on more realistic K{sub e} factors have to be used for fatigue design. The demand for more realistic K{sub e} factors is covered as a requirement of practical fatigue analysis. Different code-based K{sub e} procedures are reviewed in this paper with special regard to performance under thermal cyclic loading conditions. Other approximation formulae such as those by Neuber, Seeger/Beste or Kuehnapfel are not evaluated in this context because of their applicability to mechanical loading excluding thermal cyclic loading conditions typical for power plant operation. Besides the current code-based K{sub e} corrections, the ASME Code Case N-779 (e.g. Adam's proposal) and its modification in ASME Section VIII is considered. Comparison of elastic plastic results and results from the Rules for Nuclear Facility Components and Rules for Pressure Vessels reveals a considerable overestimation of usage factor in the case of ASME III and KTA 3201.2 for the examined examples. Usage factors according to RCC-M, Adams (ASME Code Case N-779), ASME VIII (alternative) and EN 13445-3 are essentially comparable and less conservative for these examples. The K{sub v} correction as well as the applied yield criterion (Tresca or von Mises) essentially influence the quality of the more advanced plasticity corrections (e.g. ASME Code Case N-779 and RCC-M). Hence, new proposals are based on a refined K{sub v} correction.

  18. Performance study of Ke factors in simplified elastic plastic fatigue analyses with emphasis on thermal cyclic loading

    International Nuclear Information System (INIS)

    Lang, Hermann; Rudolph, Juergen; Ziegler, Rainer

    2011-01-01

    As code-based fully elastic plastic code conforming fatigue analyses are still time consuming, simplified elastic plastic analysis is often applied. This procedure is known to be overly conservative for some conditions due to the applied plastification (penalty) factor Ke. As a consequence, less conservative fully elastic plastic fatigue analyses based on non-linear finite element analyses (FEA) or simplified elastic plastic analysis based on more realistic Ke factors have to be used for fatigue design. The demand for more realistic Ke factors is covered as a requirement of practical fatigue analysis. Different code-based Ke procedures are reviewed in this paper with special regard to performance under thermal cyclic loading conditions. Other approximation formulae such as those by Neuber, Seeger/Beste or Kuehnapfel are not evaluated in this context because of their applicability to mechanical loading excluding thermal cyclic loading conditions typical for power plant operation. Besides the current code-based Ke corrections, the ASME Code Case N-779 (e.g. Adam's proposal) and its modification in ASME Section VIII is considered. Comparison of elastic plastic results and results from the Rules for Nuclear Facility Components and Rules for Pressure Vessels reveals a considerable overestimation of usage factor in the case of ASME III and KTA 3201.2 for the examined examples. Usage factors according to RCC-M, Adams (ASME Code Case N-779), ASME VIII (alternative) and EN 13445-3 are essentially comparable and less conservative for these examples. The Kv correction as well as the applied yield criterion (Tresca or von Mises) essentially influence the quality of the more advanced plasticity corrections (e.g. ASME Code Case N-779 and RCC-M). Hence, new proposals are based on a refined Kv correction.

  19. Phytochemical Profile of Erythrina variegata by Using High-Performance Liquid Chromatography and Gas Chromatography-Mass Spectroscopy Analyses.

    Science.gov (United States)

    Muthukrishnan, Suriyavathana; Palanisamy, Subha; Subramanian, Senthilkumar; Selvaraj, Sumathi; Mari, Kavitha Rani; Kuppulingam, Ramalingam

    2016-08-01

    Natural products derived from plant sources have been utilized to treat patients with numerous diseases. The phytochemical constituents present in ethanolic leaf extract of Erythrina variegata (ELEV) were identified by using high-performance liquid chromatography (HPLC) and gas chromatography-mass spectroscopy (GC-MS) analyses. Shade dried leaves were powdered and extracted with ethanol for analyses through HPLC to identify selected flavonoids and through GC-MS to identify other molecules. The HPLC analysis of ELEV showed the presence of gallic and caffeic acids as the major components at concentrations of 2.0 ppm and 0.1 ppm, respectively, as well as other components. GC-MS analysis revealed the presence of 3-eicosyne; 3,7,11,15-tetramethyl-2-hexadecen-1-ol; butanoic acid, 3-methyl-3,7-dimethyl-6-octenyl ester; phytol; 1,2-benzenedicarboxylic acid, diundecyl ester; 1-octanol, 2-butyl-; squalene; and 2H-pyran, 2-(7-heptadecynyloxy) tetrahydro-derivative. Because pharmacopuncture is a new evolving natural mode that uses herbal extracts for treating patients with various ailments with minimum pain and maximum effect, the results of this study are particularly important and show that ELEV possesses a wide range of phytochemical constituents, as indicated above, as effective active principle molecules that can be used individually or in combination to treat patients with various diseases. Copyright © 2016. Published by Elsevier B.V.

  20. Qualitative and quantitative analyses of flavonoids in Spirodela polyrrhiza by high-performance liquid chromatography coupled with mass spectrometry.

    Science.gov (United States)

    Qiao, Xue; He, Wen-ni; Xiang, Cheng; Han, Jian; Wu, Li-jun; Guo, De-an; Ye, Min

    2011-01-01

    Spirodela polyrrhiza (L.) Schleid. is a traditional Chinese herbal medicine for the treatment of influenza. Despite its wide use in Chinese medicine, no report on quality control of this herb is available so far. To establish qualitative and quantitative analytical methods by high-performance liquid chromatography (HPLC) coupled with mass spectrometry (MS) for the quality control of S. polyrrhiza. The methanol extract of S. polyrrhiza was analysed by HPLC/ESI-MS(n). Flavonoids were identified by comparing with reference standards or according to their MS(n) (n = 2-4) fragmentation behaviours. Based on LC/MS data, a standardised HPLC fingerprint was established by analysing 15 batches of commercial herbal samples. Furthermore, quantitative analysis was conducted by determining five major flavonoids, namely luteolin 8-C-glucoside, apigenin 8-C-glucoside, luteolin 7-O-glucoside, apigenin 7-O-glucoside and luteolin. A total of 18 flavonoids were identified by LC/MS, and 14 of them were reported from this herb for the first time. The HPLC fingerprints contained 10 common peaks, and could differentiate good quality batches from counterfeits. The total contents of five major flavonoids in S. polyrrhiza varied significantly from 4.28 to 19.87 mg/g. Qualitative LC/MS and quantitative HPLC analytical methods were established for the comprehensive quality control of S. polyrrhiza. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Model-based performance and energy analyses of reverse osmosis to reuse wastewater in a PVC production site.

    Science.gov (United States)

    Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie

    2017-11-10

    A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater at a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and the film theory, was proposed to describe rejections of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na+ and Cl−) and several trace ions (Ca2+, Mg2+, K+ and SO42−). The universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in SDFM. Then, the membrane performance was evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model accounting for the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
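
    The coupled equations behind the solution-diffusion-film picture (water flux Jw = A(ΔP − Δπ), solute flux Js = B(cm − cp), film-theory polarisation cm = cb·exp(Jw/k)) can be solved by simple fixed-point iteration, as sketched below. All coefficient values are placeholders for illustration, not the fitted parameters from the pilot plant.

```python
import numpy as np

# Placeholder coefficients (illustrative, not the fitted values from the study)
A = 3.0e-12      # water permeability, m/(s*Pa)
B = 5.0e-8       # salt permeability coefficient, m/s
k = 2.0e-5       # film mass-transfer coefficient, m/s
dP = 15e5        # applied pressure difference, Pa
c_b = 100.0      # bulk feed concentration, mol/m^3
osm = lambda c: 2 * 8.314 * 298.15 * c   # van 't Hoff osmotic pressure of a 1:1 salt, Pa

Jw, c_p = A * dP, 0.0
for _ in range(200):                         # fixed-point iteration of the coupled SDFM equations
    c_m = c_b * np.exp(Jw / k)               # film theory: concentration polarisation at the membrane
    Jw = A * (dP - (osm(c_m) - osm(c_p)))    # solution-diffusion water flux
    Js = B * (c_m - c_p)                     # solution-diffusion solute flux
    c_p = Js / Jw                            # permeate concentration from the flux ratio

print(f"Jw = {Jw:.2e} m/s, observed rejection = {1 - c_p / c_b:.3f}")
```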

  2. Discovery and characterisation of dietary patterns in two Nordic countries. Using non-supervised and supervised multivariate statistical techniques to analyse dietary survey data

    DEFF Research Database (Denmark)

    Edberg, Anna; Freyhult, Eva; Sand, Salomon

    - and inter-national data excerpts. For example, major PCA loadings helped deciphering both shared and disparate features, relating to food groups, across Danish and Swedish preschool consumers. Data interrogation, reliant on the above-mentioned composite techniques, disclosed one outlier dietary prototype...... prototype with the latter property was identified also in the Danish data material, but without low consumption of Vegetables or Fruit & berries. The second MDA-type of data interrogation involved Supervised Learning, also known as Predictive Modelling. These exercises involved the Random Forest (RF...... not elaborated on in-depth, output from several analyses suggests a preference for energy-based consumption data for Cluster Analysis and Predictive Modelling, over those appearing as weight....

  3. Application of multivariate statistical analyses in the interpretation of geochemical behaviour of uranium in phosphatic rocks in the Red Sea, Nile Valley and Western Desert, Egypt

    International Nuclear Information System (INIS)

    El-Arabi, A.M.Abd El-Gabar M.; Khalifa, Ibrahim H.

    2002-01-01

    Factor and cluster analyses as well as the Pearson correlation coefficient have been applied to geochemical data obtained from phosphorite and phosphatic rocks of the Duwi Formation exposed at the Red Sea coast, Nile Valley and Western Desert. Sixty-six samples from a total of 71 collected samples were analysed for SiO2, TiO2, Al2O3, Fe2O3, CaO, MgO, Na2O, K2O, P2O5, Sr, U and Pb by XRF, and their mineral constituents were determined by the use of XRD techniques. In addition, the natural radioactivity of the phosphatic samples due to their uranium, thorium and potassium contents was measured by gamma-spectrometry. The uranium content in the phosphate rocks with P2O5 >15% (average of 106.6 ppm) is higher than in rocks with lower P2O5; the uranium content is related to changes in P2O5 and CaO, whereas it is not related to changes in SiO2, TiO2, Al2O3, Fe2O3, MgO, Na2O and K2O concentrations. Factor analysis and the Pearson correlation coefficient revealed that uranium behaves geochemically in different ways in the phosphatic sediments and phosphorites in the Red Sea, Nile Valley and Western Desert. In the Red Sea and Western Desert phosphorites, uranium occurs mainly in the oxidized U6+ state, where it seems to be fixed by the phosphate ion, forming secondary uranium phosphate minerals such as phosphuranylite. In the Nile Valley phosphorites, ionic substitution of Ca2+ by U4+ is the main controlling factor in the concentration of uranium in phosphate rocks. Moreover, fixation of U6+ by the phosphate ion and adsorption of uranium on phosphate minerals play subordinate roles.

  4. Statistical Techniques For Real-time Anomaly Detection Using Spark Over Multi-source VMware Performance Data

    Energy Technology Data Exchange (ETDEWEB)

    Solaimani, Mohiuddin [Univ. of Texas-Dallas, Richardson, TX (United States); Iftekhar, Mohammed [Univ. of Texas-Dallas, Richardson, TX (United States); Khan, Latifur [Univ. of Texas-Dallas, Richardson, TX (United States); Thuraisingham, Bhavani [Univ. of Texas-Dallas, Richardson, TX (United States); Ingram, Joey Burton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Anomaly detection refers to the identification of an irregular or unusual pattern which deviates from what is standard, normal, or expected. Such deviated patterns typically correspond to samples of interest and are assigned different labels in different domains, such as outliers, anomalies, exceptions, or malware. Detecting anomalies in fast, voluminous streams of data is a formidable challenge. This paper presents a novel, generic, real-time distributed anomaly detection framework for heterogeneous streaming data where anomalies appear as a group. We have developed a distributed statistical approach to build a model and later use it to detect anomalies. As a case study, we investigate group anomaly detection for a VMware-based cloud data center, which maintains a large number of virtual machines (VMs). We have built our framework using Apache Spark to get higher throughput and lower data processing time on streaming data. We have developed a window-based statistical anomaly detection technique to detect anomalies that appear sporadically. We then relaxed this constraint with higher accuracy by implementing a cluster-based technique to detect sporadic and continuous anomalies. We conclude that our cluster-based technique outperforms other statistical techniques with higher accuracy and lower processing time.
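
    A toy version of the window-based statistical idea is shown below: flag a streaming value when its z-score against a trailing window exceeds a threshold. The synthetic CPU-usage stream, window length and threshold are assumptions for illustration; the actual framework runs this kind of test distributed over Spark.

```python
import numpy as np
from collections import deque

def window_zscore_anomalies(stream, window=50, threshold=3.0):
    """Flag points whose z-score against the trailing window exceeds the threshold."""
    buf = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(stream):
        if len(buf) == window:
            mu, sigma = np.mean(buf), np.std(buf) + 1e-9
            if abs(x - mu) / sigma > threshold:
                anomalies.append(i)
        buf.append(x)
    return anomalies

rng = np.random.default_rng(7)
cpu = 40 + 5 * rng.standard_normal(1000)        # synthetic CPU-usage stream
cpu[400:405] += 60                              # injected anomalous burst
print(window_zscore_anomalies(cpu))             # indices around 400 should be flagged
```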

  5. A graphical user interface (GUI) toolkit for the calculation of three-dimensional (3D) multi-phase biological effective dose (BED) distributions including statistical analyses.

    Science.gov (United States)

    Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis

    2016-07-01

    A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. This toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a, version 7.10 was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit has the capability of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculations of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g. coronal, sagittal and transverse). The different elements of this toolkit are presented and the important steps for the execution of its calculations are illustrated. The toolkit is applied on brain, head & neck and prostate cancer patients, who received primary and boost phases in order to demonstrate its capability in calculating BED distributions, as well as measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
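
    For context, the standard linear-quadratic BED summed over treatment phases is BED = Σi ni·di·(1 + di/(α/β)), with di the per-fraction dose in a voxel for phase i. The sketch below evaluates this per-phase sum on toy dose matrices; the α/β values, fraction numbers and doses are illustrative, and the toolkit's exact "true" and "approximate" multi-phase formulas are those of the cited paper rather than this sketch.

```python
import numpy as np

def multi_phase_bed(dose_per_phase, fractions_per_phase, alpha_beta=3.0):
    """Voxel-wise BED summed over phases: BED = sum_i n_i * d_i * (1 + d_i / (alpha/beta)),
    where d_i is the per-fraction dose in that voxel for phase i."""
    bed = np.zeros_like(dose_per_phase[0], dtype=float)
    for total_dose, n in zip(dose_per_phase, fractions_per_phase):
        d = total_dose / n                          # dose per fraction in each voxel
        bed += n * d * (1.0 + d / alpha_beta)
    return bed

# Toy 2x2 "dose distributions" (Gy) for a primary phase (25 fractions) and a boost phase (5 fractions)
primary = np.array([[50.0, 48.0], [45.0, 30.0]])
boost = np.array([[10.0, 9.0], [2.0, 0.5]])
print(multi_phase_bed([primary, boost], [25, 5], alpha_beta=10.0))
```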

  6. Statistical analyses of in-situ and soil-sample measurements for radionuclides in surface soil near the 116-K-2 trench

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Klover, W.J.

    1988-09-01

    Radiation detection surveys are used at the US Department of Energy's Hanford Reservation near Richland, Washington, to determine areas that need posting as radiation zones or to measure dose rates in the field. The relationship between measurements made by Sodium Iodide (NaI) detectors mounted on the mobile Road Monitor vehicle and those made by hand-held GM P-11 probes and Micro-R meters is of particular interest because the Road Monitor can survey land areas in much less time than hand-held detectors. Statistical regression methods are used here to develop simple equations to predict GM P-11 probe gross gamma count-per-minute (cpm) and Micro-R-Meter μR/h measurements on the basis of NaI gross gamma count-per-second (cps) measurements obtained using the Road Monitor. These equations were estimated using data collected near the 116-K-2 Trench in the 100-K area on the Hanford Reservation. Equations are also obtained for estimating upper and lower limits within which the GM P-11 or Micro-R-Meter measurement corresponding to a given NaI Road Monitor measurement at a new location is expected to fall with high probability. An equation and limits for predicting GM P-11 measurements on the basis of Micro-R-Meter measurements are also estimated. Also, we estimate an equation that may be useful for approximating the 90Sr measurement of a surface soil sample on the basis of a spectroscopy measurement for 137Cs on that sample. 3 refs., 16 figs., 44 tabs
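
    The kind of regression-with-prediction-limits described above can be reproduced in a few lines with ordinary least squares; the NaI and GM P-11 values below are synthetic stand-ins, not the Hanford data, and the fitted coefficients are placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
nai_cps = rng.uniform(200, 2000, 80)                        # Road Monitor NaI readings (cps), synthetic
gm_cpm = 30 + 0.45 * nai_cps + rng.normal(0, 40, 80)        # hand-held GM P-11 readings (cpm), synthetic

fit = sm.OLS(gm_cpm, sm.add_constant(nai_cps)).fit()

new_nai = np.array([500.0, 1500.0])                         # new Road Monitor readings to convert
pred = fit.get_prediction(sm.add_constant(new_nai))
print(pred.summary_frame(alpha=0.05)[["mean", "obs_ci_lower", "obs_ci_upper"]])
# obs_ci_* are the prediction limits for a new individual measurement, as in the report
```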

  7. One size does not fit all: On how Markov model order dictates performance of genomic sequence analyses

    Science.gov (United States)

    Narlikar, Leelavati; Mehta, Nidhi; Galande, Sanjeev; Arjunwadkar, Mihir

    2013-01-01

    The structural simplicity and ability to capture serial correlations make Markov models a popular modeling choice in several genomic analyses, such as identification of motifs, genes and regulatory elements. A critical, yet relatively unexplored, issue is the determination of the order of the Markov model. Most biological applications use a predetermined order for all data sets indiscriminately. Here, we show the vast variation in the performance of such applications with the order. To identify the ‘optimal’ order, we investigated two model selection criteria: the Akaike information criterion and the Bayesian information criterion (BIC). The BIC optimal order delivers the best performance for mammalian phylogeny reconstruction and motif discovery. Importantly, this order is different from orders typically used by many tools, suggesting that a simple additional step determining this order can significantly improve results. Further, we describe a novel classification approach based on BIC optimal Markov models to predict functionality of tissue-specific promoters. Our classifier discriminates between promoters active across 12 different tissues with remarkable accuracy, yielding 3 times the precision expected by chance. Application to the metagenomics problem of identifying the taxon from a short DNA fragment yields accuracies at least as high as the more complex mainstream methodologies, while retaining conceptual and computational simplicity. PMID:23267010
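
    Selecting the order by BIC amounts to comparing BIC(k) = −2·logL + p·log(N) across candidate orders k, where the log-likelihood comes from the (k+1)-mer counts. The sketch below does this for a random sequence (a stand-in for real genomic data); the parameter count assumes a plain fixed-order model over the ACGT alphabet.

```python
import numpy as np
from collections import Counter

def markov_bic(seq, order, alphabet="ACGT"):
    """BIC of a fixed-order Markov model fitted to one sequence."""
    n = len(seq)
    ctx_counts = Counter(seq[i:i + order] for i in range(n - order))          # context counts
    full_counts = Counter(seq[i:i + order + 1] for i in range(n - order))     # (order+1)-mer counts
    log_lik = sum(c * np.log(c / ctx_counts[kmer[:-1]]) for kmer, c in full_counts.items())
    n_params = (len(alphabet) ** order) * (len(alphabet) - 1)                  # free parameters
    return -2.0 * log_lik + n_params * np.log(n - order)

rng = np.random.default_rng(5)
seq = "".join(rng.choice(list("ACGT"), size=20000))   # replace with a real genomic sequence
for k in range(0, 5):
    print(f"order {k}: BIC = {markov_bic(seq, k):,.1f}")
# The order with the smallest BIC is what the paper calls the BIC-optimal order.
```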

  8. Forecasting of a ground-coupled heat pump performance using neural networks with statistical data weighting pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Esen, Hikmet; Esen, Mehmet [Department of Mechanical Education, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey); Inalli, Mustafa [Department of Mechanical Engineering, Faculty of Engineering, Firat University, 23279 Elazig (Turkey); Sengur, Abdulkadir [Department of Electronic and Computer Science, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey)

    2008-04-15

    The objective of this work is to improve the performance of an artificial neural network (ANN) with a statistical weighted pre-processing (SWP) method to learn to predict the performance of ground-coupled heat pump (GCHP) systems with the minimum data set. Experimental studies were completed to obtain training and test data. Air temperatures entering/leaving the condenser unit, water-antifreeze solution temperatures entering/leaving the horizontal ground heat exchangers and ground temperatures (1 and 2 m) were used as the input layer, while the output is the coefficient of performance (COP) of the system. Some statistical methods, such as the root-mean squared (RMS), the coefficient of multiple determinations (R{sup 2}) and the coefficient of variation (cov), are used to compare predicted and actual values for model validation. It is found that the RMS value is 0.074, the R{sup 2} value is 0.9999 and the cov value is 2.22 for the SCG6 algorithm of the ANN-only structure. It is also found that the RMS value is 0.002, the R{sup 2} value is 0.9999 and the cov value is 0.076 for the SCG6 algorithm of the SWP-ANN structure. The simulation results show that the SWP based networks can be used as an alternative way in these systems. Therefore, instead of limited experimental data found in literature, faster and simpler solutions are obtained using hybridized structures such as SWP-ANN. (author)
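
    The three validation statistics quoted above can be computed directly from predicted and measured values. The sketch below uses common textbook definitions (RMS error, coefficient of determination, and cov as RMS relative to the mean in percent); the COP values are invented, and the paper's exact definitions may differ slightly.

```python
import numpy as np

def validation_stats(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    rms = np.sqrt(np.mean((predicted - actual) ** 2))          # root-mean-square error
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                                  # coefficient of determination
    cov = 100.0 * rms / actual.mean()                           # coefficient of variation, %
    return rms, r2, cov

# Hypothetical measured vs ANN-predicted COP values
actual = [3.10, 3.25, 3.40, 3.55, 3.70, 3.60]
predicted = [3.08, 3.27, 3.38, 3.57, 3.69, 3.62]
rms, r2, cov = validation_stats(actual, predicted)
print(f"RMS = {rms:.3f}, R^2 = {r2:.4f}, cov = {cov:.2f}%")
```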

  9. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    Science.gov (United States)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the recent decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies. In this case, supervised image classification techniques play a central role. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less applied image classification methods, including Bagged CART, the stochastic gradient boosting model and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and support vector machine with linear kernel. To do so, each method was run ten times and three validation techniques were used to estimate the accuracy statistics, consisting of cross-validation, independent validation and validation with the full training data. Moreover, using ANOVA and Tukey's test, the statistical significance of differences between the classification methods was assessed. In general, the results showed that random forest, with a marginal difference compared to Bagged CART and the stochastic gradient boosting model, is the best-performing method, whilst based on independent validation there was no significant difference between the performances of the classification methods. It should be finally noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.
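
    The significance step described above (repeated accuracy runs compared with one-way ANOVA and Tukey's test) can be sketched as follows. The accuracy values are fabricated; only the testing procedure, using SciPy's f_oneway and statsmodels' pairwise_tukeyhsd, is real.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(9)
# Ten repeated overall-accuracy runs per classifier (fabricated values)
runs = {
    "RandomForest": rng.normal(0.89, 0.010, 10),
    "BaggedCART": rng.normal(0.88, 0.010, 10),
    "SGB": rng.normal(0.88, 0.012, 10),
    "SVM_linear": rng.normal(0.86, 0.015, 10),
    "NNet_FE": rng.normal(0.85, 0.015, 10),
}

F, p = f_oneway(*runs.values())                       # one-way ANOVA across the five methods
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4g}")

accuracies = np.concatenate(list(runs.values()))
groups = np.repeat(list(runs.keys()), 10)             # group labels matching the concatenation order
print(pairwise_tukeyhsd(accuracies, groups, alpha=0.05))
```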

  10. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics

    Directory of Open Access Journals (Sweden)

    Haejoon Jung

    2018-01-01

    Full Text Available As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for the future communication systems to alleviate the pressure of scarce spectrum resources. For this reason, in this paper, we consider multi-hop M2M communications, where a machine-type communication (MTC) device with the limited transmit power relays to help other devices using mmWave. To be specific, we focus on hop distance statistics and their impacts on system performances in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Different from microwave systems, in mmWave communications, wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aiming at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming the blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performances (e.g., outage probability, hop count, and transmit energy) of the mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.

  11. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics.

    Science.gov (United States)

    Jung, Haejoon; Lee, In-Ho

    2018-01-12

    As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for the future communication systems to alleviate the pressure of scarce spectrum resources. For this reason, in this paper, we consider multi-hop M2M communications, where a machine-type communication (MTC) device with the limited transmit power relays to help other devices using mmWave. To be specific, we focus on hop distance statistics and their impacts on system performances in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Different from microwave systems, in mmWave communications, wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aiming at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming the blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performances (e.g., outage probability, hop count, and transmit energy) of the mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.
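
    For a rough feel of the quantities discussed here (per-hop progress under blockage, hop count, outage), the Python sketch below runs a toy Monte Carlo of greedy multi-hop forwarding, i.e. the kind of brute-force end-to-end simulation the paper's analytical method is meant to replace; the exponential blockage model, relay density and all numeric parameters are illustrative assumptions, not the hop-distance distributions derived in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_hops(total_distance=200.0, max_range=50.0, blockage_beta=0.01,
                          relay_density=0.05, n_trials=10_000, max_hops=20):
            # P(LOS over distance d) = exp(-blockage_beta * d); relays follow a 1-D Poisson
            # process along the route. These modelling choices are assumptions for illustration.
            hop_counts, outages = [], 0
            for _ in range(n_trials):
                position, hops = 0.0, 0
                while position < total_distance and hops < max_hops:
                    n_relays = rng.poisson(relay_density * max_range)
                    if n_relays == 0:
                        break
                    candidates = position + rng.uniform(0.0, max_range, n_relays)
                    los = rng.random(n_relays) < np.exp(-blockage_beta * (candidates - position))
                    reachable = candidates[los]
                    if reachable.size == 0:
                        break
                    position = reachable.max()   # greedy routing: farthest line-of-sight relay
                    hops += 1
                if position >= total_distance:
                    hop_counts.append(hops)
                else:
                    outages += 1
            return np.mean(hop_counts) if hop_counts else float("nan"), outages / n_trials

        mean_hops, outage_prob = simulate_hops()
        print(f"mean hop count = {mean_hops:.2f}, outage probability = {outage_prob:.3f}")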

  12. Mediator effect of statistical process control between Total Quality Management (TQM) and business performance in Malaysian Automotive Industry

    Science.gov (United States)

    Ahmad, M. F.; Rasi, R. Z.; Zakuan, N.; Hisyamudin, M. N. N.

    2015-12-01

    In today's highly competitive market, Total Quality Management (TQM) is a vital management tool for ensuring that a company can succeed in its business. In order to survive in the global market with intense competition amongst regions and enterprises, the adoption of tools and techniques is essential in improving business performance. Previous studies have reported consistent results between TQM and business performance. However, only a few have examined the mediator effect of statistical process control (SPC) between TQM and business performance. A mediator is a third variable that changes the association between an independent variable and an outcome variable. This study proposes a TQM performance model with the mediator effect of SPC, using structural equation modelling, which is a more comprehensive model for developing countries, specifically for Malaysia. A questionnaire was prepared and sent to 1500 companies from the automotive industry and the related vendors in Malaysia, giving a 21.8 per cent response rate. The findings on the mediator effect between TQM practices and business performance showed that SPC is an important tool and technique in TQM implementation. The results conclude that SPC partially mediates the relationship between TQM and business performance, with an indirect effect (IE) of 0.25, which can be categorised as a high mediation effect.

  13. Diagnostic Performance and Utility of Quantitative EEG Analyses in Delirium: Confirmatory Results From a Large Retrospective Case-Control Study.

    Science.gov (United States)

    Fleischmann, Robert; Tränkner, Steffi; Bathe-Peters, Rouven; Rönnefarth, Maria; Schmidt, Sein; Schreiber, Stephan J; Brandt, Stephan A

    2018-03-01

    The lack of objective disease markers is a major cause of misdiagnosis and nonstandardized approaches in delirium. Recent studies conducted in well-selected patients and confined study environments suggest that quantitative electroencephalography (qEEG) can provide such markers. We hypothesize that qEEG helps remedy diagnostic uncertainty not only in well-defined study cohorts but also in a heterogeneous hospital population. In this retrospective case-control study, EEG power spectra of delirious patients and age-/gender-matched controls (n = 31 and n = 345, respectively) were fitted in a linear model to test their performance as binary classifiers. We subsequently evaluated the diagnostic performance of the best classifiers in control samples with normal EEGs (n = 534) and real-world samples including pathologic findings (n = 4294). Test reliability was estimated through split-half analyses. We found that the combination of spectral power at F3-P4 at 2 Hz (area under the curve [AUC] = .994) and C3-O1 at 19 Hz (AUC = .993) provided a sensitivity of 100% and a specificity of 99% to identify delirious patients among normal controls. These classifiers also yielded a false positive rate as low as 5% and increased the pretest probability of being delirious by 57% in an unselected real-world sample. Split-half reliabilities were .98 and .99, respectively. This retrospective study yielded preliminary evidence that qEEG provides excellent diagnostic performance to identify delirious patients even outside confined study environments. It furthermore revealed reduced beta power as a novel specific finding in delirium and that a normal EEG excludes delirium. Prospective studies including parameters of pretest probability and delirium severity are required to elaborate on these promising findings.
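
    As a generic illustration of the analysis pattern described here (spectral power fed to a linear model as a binary classifier, evaluated with AUC, sensitivity and specificity), the Python sketch below fits a logistic model on two synthetic band-power features; the feature values, class sizes and the Youden-index operating point are assumptions for demonstration and do not reproduce the study's data, channels or split-half validation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score, roc_curve

        # Hypothetical spectral-power features: a low-frequency and a beta-band power per subject
        rng = np.random.default_rng(2)
        n_delirium, n_controls = 31, 345
        X = np.vstack([
            np.column_stack([rng.normal(8.0, 1.5, n_delirium), rng.normal(0.5, 0.2, n_delirium)]),
            np.column_stack([rng.normal(3.0, 1.0, n_controls), rng.normal(1.5, 0.4, n_controls)]),
        ])
        y = np.concatenate([np.ones(n_delirium), np.zeros(n_controls)])

        # Linear model as a binary classifier; report AUC and one sensitivity/specificity operating point
        clf = LogisticRegression().fit(X, y)
        scores = clf.decision_function(X)
        print("AUC:", roc_auc_score(y, scores))
        fpr, tpr, thresholds = roc_curve(y, scores)
        best = np.argmax(tpr - fpr)   # Youden's J, one common way to pick a cut-off
        print(f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")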

  14. Understanding Statistics - Cancer Statistics

    Science.gov (United States)

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  15. Measuring the apparent diffusion coefficient in primary rectal tumors: is there a benefit in performing histogram analyses?

    Science.gov (United States)

    van Heeswijk, Miriam M; Lambregts, Doenja M J; Maas, Monique; Lahaye, Max J; Ayas, Z; Slenter, Jos M G M; Beets, Geerard L; Bakers, Frans C H; Beets-Tan, Regina G H

    2017-06-01

    The apparent diffusion coefficient (ADC) is a potential prognostic imaging marker in rectal cancer. Typically, mean ADC values are used, derived from precise manual whole-volume tumor delineations by experts. The aim was first to explore whether non-precise circular delineation combined with histogram analysis can be a less cumbersome alternative to acquire similar ADC measurements and second to explore whether histogram analyses provide additional prognostic information. Thirty-seven patients who underwent a primary staging MRI including diffusion-weighted imaging (DWI; b0, 25, 50, 100, 500, 1000; 1.5 T) were included. Volumes-of-interest (VOIs) were drawn on b1000-DWI: (a) precise delineation, manually tracing tumor boundaries (2 expert readers), and (b) non-precise delineation, drawing circular VOIs with a wide margin around the tumor (2 non-experts). Mean ADC and histogram metrics (mean, min, max, median, SD, skewness, kurtosis, 5th-95th percentiles) were derived from the VOIs and delineation time was recorded. Measurements were compared between the two methods and correlated with prognostic outcome parameters. Median delineation time was reduced from 47-165 s (precise) to 21-43 s (non-precise). The 45th percentile of the non-precise delineation showed the best correlation with the mean ADC from the precise delineation as the reference standard (ICC 0.71-0.75). None of the mean ADC or histogram parameters showed significant prognostic value; only the total tumor volume (VOI) was significantly larger in patients with positive clinical N stage and mesorectal fascia involvement. When performing non-precise tumor delineation, histogram analysis (specifically the 45th ADC percentile) may be used as an alternative to obtain ADC values similar to those from precise whole-tumor delineation. Histogram analyses are not beneficial for obtaining additional prognostic information.
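
    The histogram metrics listed above (mean, min, max, median, SD, skewness, kurtosis and percentiles, including the 45th percentile singled out by the study) can be computed from the ADC voxel values inside a VOI as in the Python sketch below; the voxel values are synthetic placeholders rather than patient data.

        import numpy as np
        from scipy import stats

        def adc_histogram_metrics(adc_voxels):
            # Histogram metrics of ADC values inside a volume of interest
            v = np.asarray(adc_voxels, dtype=float)
            p = np.percentile(v, [5, 25, 45, 50, 75, 95])
            return {
                "mean": v.mean(), "min": v.min(), "max": v.max(),
                "median": np.median(v), "sd": v.std(ddof=1),
                "skewness": stats.skew(v), "kurtosis": stats.kurtosis(v),
                "p5": p[0], "p45": p[2], "p95": p[-1],
            }

        # Hypothetical voxels from a non-precise circular VOI (tumour plus surrounding tissue)
        rng = np.random.default_rng(3)
        voi = np.concatenate([rng.normal(1.05, 0.15, 800),    # tumour-like voxels
                              rng.normal(1.60, 0.20, 1200)])  # surrounding tissue
        metrics = adc_histogram_metrics(voi)
        print(f"45th percentile = {metrics['p45']:.2f} vs whole-VOI mean = {metrics['mean']:.2f}")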

  16. Age and gender effects on normal regional cerebral blood flow studied using two different voxel-based statistical analyses; Effets de l'age et du genre sur la perfusion cerebrale regionale etudiee par deux methodes d'analyse statistique voxel-par-voxel

    Energy Technology Data Exchange (ETDEWEB)

    Pirson, A.S.; George, J.; Krug, B.; Vander Borght, T. [Universite Catholique de Louvain, Service de Medecine Nucleaire, Cliniques Universitaires de Mont-Godinne, Yvoir (Belgium); Van Laere, K. [Leuven Univ. Hospital, Nuclear Medicine Div. (Belgium); Jamart, J. [Universite Catholique de Louvain, Dept. de Biostatistiques, Cliniques Universitaires de Mont-Godinne, Yvoir (Belgium); D' Asseler, Y. [Ghent Univ., Medical Signal and Image Processing Dept. (MEDISIP), Faculty of applied sciences (Belgium); Minoshima, S. [Washington Univ., Dept. of Radiology, Seattle (United States)

    2009-10-15

    Fully automated analysis programs have been applied more and more to aid in the reading of regional cerebral blood flow SPECT studies. They are increasingly based on the comparison of the patient study with a normal database. In this study, we evaluate the ability of Three-Dimensional Stereotactic Surface Projection (3D-SSP) to isolate effects of age and gender in a previously studied normal population. The results were also compared with those obtained using Statistical Parametric Mapping (SPM99). Methods: Eighty-nine 99mTc-ECD SPECT studies performed in carefully screened healthy volunteers (46 females, 43 males; age 20-81 years) were analysed using 3D-SSP. A multivariate analysis based on the general linear model was performed with regions as intra-subject factor, gender as inter-subject factor and age as covariate. Results: Both age and gender had a significant interaction effect with regional tracer uptake. An age-related decline (p < 0.001) was found in the anterior cingulate gyrus, left frontal association cortex and left insula. Bilateral occipital association and left primary visual cortical uptake showed a significant relative increase with age (p < 0.001). Concerning the gender effect, women showed higher uptake (p < 0.01) in the parietal and right sensorimotor cortices. An age by gender interaction (p < 0.01) was only found in the left medial frontal cortex. The results were consistent with those obtained with SPM99. Conclusion: 3D-SSP analysis of normal rCBF variability is consistent with the literature and other automated voxel-based techniques, which highlight the effects of both age and gender. (authors)

  17. Inconsistent trial assessments by the National Institute for Health and Clinical Excellence and IQWiG: standards for the performance and interpretation of subgroup analyses are needed.

    Science.gov (United States)

    Hasford, J; Bramlage, P; Koch, G; Lehmacher, W; Einhäupl, K; Rothwell, P M

    2010-12-01

    The methodology for the critical assessment of medical interventions is well established. Regulatory agencies and institutions adhere, in principle, to the same standards. This consistency, however, is not always the case in practice. Using the evaluation of the CAPRIE (Clopidogrel versus Aspirin in Patients at risk of Ischemic Events) trial by the British National Institute for Health and Clinical Excellence (NICE) and the German Institute for Quality and Efficiency in Health Care (IQWiG), we illustrate that there was no consensus for the interpretation of possible heterogeneity in treatment comparisons across subgroups. The NICE concluded that CAPRIE demonstrated clinical benefit for the overall intention-to-treat (ITT) population with sufficient robustness to possible sources of heterogeneity. The IQWiG interpreted the alleged heterogeneity as implying that the clinical benefit only applied to the subgroup of patients with a statistically significant result irrespective of the results of the ITT analysis. International standards for the performance and interpretation of subgroup analyses as well as for the assessment of heterogeneity between strata are needed. Copyright © 2010 Elsevier Inc. All rights reserved.

  18. Nontargeted, Rapid Screening of Extra Virgin Olive Oil Products for Authenticity Using Near-Infrared Spectroscopy in Combination with Conformity Index and Multivariate Statistical Analyses.

    Science.gov (United States)

    Karunathilaka, Sanjeewa R; Kia, Ali-Reza Fardin; Srigley, Cynthia; Chung, Jin Kyu; Mossoba, Magdi M

    2016-10-01

    A rapid tool for evaluating authenticity was developed and applied to the screening of extra virgin olive oil (EVOO) retail products by using Fourier-transform near infrared (FT-NIR) spectroscopy in combination with univariate and multivariate data analysis methods. Using disposable glass tubes, spectra for 62 reference EVOO, 10 edible oil adulterants, 20 blends consisting of EVOO spiked with adulterants, 88 retail EVOO products and other test samples were rapidly measured in the transmission mode without any sample preparation. The univariate conformity index (CI) and the multivariate supervised soft independent modeling of class analogy (SIMCA) classification tool were used to analyze the various olive oil products which were tested for authenticity against a library of reference EVOO. Better discrimination between the authentic EVOO and some commercial EVOO products was observed with SIMCA than with CI analysis. Approximately 61% of all EVOO commercial products were flagged by SIMCA analysis, suggesting that further analysis be performed to identify quality issues and/or potential adulterants. Due to its simplicity and speed, FT-NIR spectroscopy in combination with multivariate data analysis can be used as a complementary tool to conventional official methods of analysis to rapidly flag EVOO products that may not belong to the class of authentic EVOO. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
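
    A minimal, simplified stand-in for the SIMCA-style screening described here is a one-class PCA model of the reference EVOO spectra with a residual (Q-statistic) threshold, sketched in Python below; the random "spectra", the number of components and the 95th-percentile threshold are assumptions for illustration, and full SIMCA additionally uses class distances and Hotelling's T², so this is not the study's exact procedure.

        import numpy as np
        from sklearn.decomposition import PCA

        # Hypothetical FT-NIR spectra: rows are samples, columns are absorbance values
        rng = np.random.default_rng(4)
        reference_evoo = rng.normal(0.0, 1.0, (62, 200))   # authentic training spectra
        test_products = rng.normal(0.3, 1.2, (88, 200))    # retail products to screen

        # One-class model of the authentic EVOO class (simplified SIMCA-style residual check)
        pca = PCA(n_components=5).fit(reference_evoo)

        def q_residual(X):
            reconstructed = pca.inverse_transform(pca.transform(X))
            return np.sum((X - reconstructed) ** 2, axis=1)

        # Flag products whose residual exceeds the 95th percentile of the reference class
        threshold = np.percentile(q_residual(reference_evoo), 95)
        flagged = q_residual(test_products) > threshold
        print(f"{flagged.mean():.0%} of products flagged for further analysis")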

  19. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    Science.gov (United States)

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

    Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrated cutting edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.

  20. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Science.gov (United States)

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
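
    The variance-decomposition step described above can be approximated, for illustration, by a crude binning estimate of first-order sensitivity indices (variance of the conditional mean of the output over bins of each input, divided by the total output variance), as in the Python sketch below; the three inputs and the stand-in output function are fabricated, and the paper's own analysis uses quasi-random sampling with a formal variance decomposition.

        import numpy as np

        def first_order_sensitivity(X, y, bins=20):
            # Crude first-order index per input: Var_bins(E[y | x_i]) / Var(y)
            y = np.asarray(y, dtype=float)
            total_var = y.var()
            indices = []
            for i in range(X.shape[1]):
                edges = np.quantile(X[:, i], np.linspace(0, 1, bins + 1))
                bin_id = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1, 0, bins - 1)
                used = [b for b in range(bins) if np.any(bin_id == b)]
                cond_means = np.array([y[bin_id == b].mean() for b in used])
                weights = np.array([np.mean(bin_id == b) for b in used])
                indices.append(np.sum(weights * (cond_means - y.mean()) ** 2) / total_var)
            return np.array(indices)

        # Hypothetical ABM inputs and a stand-in for the model output
        rng = np.random.default_rng(5)
        X = rng.uniform(0, 1, (5000, 3))
        y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 5000)
        print(first_order_sensitivity(X, y))   # the first input should dominate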

  1. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Directory of Open Access Journals (Sweden)

    Arika Ligmann-Zielinska

    Full Text Available Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  2. Post-test analyses of the REVISA benchmark based on a creep test at 1100 °C performed on a notched tube

    International Nuclear Information System (INIS)

    Fischer, M.; Bernard, A.; Bhandari, S.

    2001-01-01

    In the Euratom 4. Framework Program of the European Commission, the REVISA Project deals with Reactor Vessel Integrity under Severe Accidents. One of the tasks consists in the experimental validation of the models developed in the project. To do this, a benchmark was designed where the participants use their models to test the results against an experiment. The experiment, called RUPTHER 15, was conducted by the coordinating organisation, CEA (Commissariat a l'Energie Atomique) in France. It is a 'delayed fracture' test on a notched tube. The thermal loading is an axial gradient with a temperature of about 1130 °C in the mid-part. Internal pressure is maintained at 0.8 MPa. This paper presents the results of Finite Element calculations performed by Framatome-ANP using the SYSTUS code. Two types of analyses were made: -) one based on the 'time hardening' Norton-Bailey creep law, -) the other based on the coupled creep/damage Lemaitre-Chaboche model. The purpose of this paper is, in particular, to show the influence of temperature on the simulation results. At high temperatures of the kind dealt with here, slight errors in the temperature measurements can lead to very large differences in the deformation behaviour. (authors)

  3. A statistical approach to estimating effects of performance shaping factors on human error probabilities of soft controls

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea; Jang, Inseok; Hyun Seong, Poong

    2015-01-01

    Despite recent efforts toward data collection for supporting human reliability analysis, there remains a lack of empirical basis for determining the effects of performance shaping factors (PSFs) on human error probabilities (HEPs). To enhance the empirical basis regarding the effects of the PSFs, a statistical methodology using logistic regression and stepwise variable selection was proposed, and the effects of the PSFs on HEPs related to soft controls were estimated through this methodology. For this estimation, more than 600 human error opportunities related to soft controls in a computerized control room were obtained through laboratory experiments. From the eight PSF surrogates and combinations of these variables, the procedure quality, practice level, and the operation type were identified as significant factors for screen switch and mode conversion errors. The contributions of these significant factors to HEPs were also estimated in terms of a multiplicative form. The usefulness and limitations of the experimental data and the techniques employed are discussed herein, and we believe that the logistic regression and stepwise variable selection methods will provide a way to estimate the effects of PSFs on HEPs in an objective manner. - Highlights: • It is necessary to develop an empirical basis for the effects of the PSFs on the HEPs. • A statistical method using a logistic regression and variable selection was proposed. • The effects of PSFs on the HEPs of soft controls were empirically investigated. • The significant factors were identified and their effects were estimated
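
    A generic sketch of the kind of procedure described here (logistic regression with forward stepwise selection over PSF surrogates, reporting multiplicative effects on the odds of error) is given in Python below; the PSF names, the entry criterion (Wald p-value below 0.05) and the simulated error data are assumptions for illustration, not the study's data or its exact selection rule.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        def forward_stepwise_logit(X, y, alpha_enter=0.05):
            # Forward stepwise selection for a logistic model of error (1) vs success (0);
            # a variable enters when its Wald p-value in the candidate model is below alpha_enter.
            selected, remaining = [], list(X.columns)
            while remaining:
                pvals = {}
                for col in remaining:
                    model = sm.Logit(y, sm.add_constant(X[selected + [col]])).fit(disp=0)
                    pvals[col] = model.pvalues[col]
                best = min(pvals, key=pvals.get)
                if pvals[best] >= alpha_enter:
                    break
                selected.append(best)
                remaining.remove(best)
            final = sm.Logit(y, sm.add_constant(X[selected])).fit(disp=0)
            return selected, np.exp(final.params)   # exponentiated coefficients act multiplicatively on the odds

        # Hypothetical binary PSF surrogates for ~600 soft-control task opportunities
        rng = np.random.default_rng(6)
        n = 600
        X = pd.DataFrame({
            "procedure_quality": rng.integers(0, 2, n),
            "practice_level": rng.integers(0, 2, n),
            "operation_type": rng.integers(0, 2, n),
            "time_pressure": rng.integers(0, 2, n),
        })
        logit_p = -3.0 + 1.2 * X["procedure_quality"] + 0.8 * X["practice_level"]
        y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)
        print(forward_stepwise_logit(X, y))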

  4. Statistical Model and Performance Analysis of a Novel Multilevel Polarization Modulation in Local “Twisted” Fibers

    Directory of Open Access Journals (Sweden)

    Pierluigi Perrone

    2017-01-01

    Full Text Available Transmission demand continues to grow and higher capacity optical communication systems are required to economically meet this ever-increasing need for communication services. This article expands and deepens the study of a novel optical communication system for high-capacity Local Area Networks (LANs), based on twisted optical fibers. The complete statistical behavior of this system is shown, designed for more efficient use of the fiber single-channel capacity by adopting an unconventional multilevel polarization modulation (called “bands of polarization”). Starting from simulative results, a possible reference mathematical model is proposed. Finally, the system performance is analyzed in the presence of shot noise (coherent detection) or thermal noise (direct detection).

  5. Walking performance: correlation between energy cost of walking and walking participation. new statistical approach concerning outcome measurement.

    Directory of Open Access Journals (Sweden)

    Marco Franceschini

    Full Text Available Walking ability, though important for quality of life and participation in social and economic activities, can be adversely affected by neurological disorders, such as Spinal Cord Injury, Stroke, Multiple Sclerosis or Traumatic Brain Injury. The aim of this study is to evaluate whether the energy cost of walking (CW), in a mixed group of chronic patients with neurological diseases almost 6 months after discharge from rehabilitation wards, can predict walking performance and any walking restriction on community activities, as indicated by the Walking Handicap Scale categories (WHS). One hundred and seven subjects were included in the study, 31 suffering from Stroke, 26 from Spinal Cord Injury and 50 from Multiple Sclerosis. The multivariable binary logistic regression analysis produced a statistical model with good characteristics of fit and good predictability. This model generated a cut-off value of 0.40, which enabled us to classify the cases correctly in 85.0% of instances. Our research reveals that, in our subjects, CW is the only predictor of walking performance in the community, as compared with the score of the WHS. We have also identified a cut-off value of CW which distinguishes between those who can walk in the community and those who cannot. In particular, these values could be used to predict the ability to walk in the community at discharge from rehabilitation units, and to adjust the rehabilitative treatment to improve the performance.

  6. Analytical performance, agreement and user-friendliness of six point-of-care testing urine analysers for urinary tract infection in general practice

    NARCIS (Netherlands)

    Schot, Marjolein J C; van Delft, Sanne; Kooijman-Buiting, Antoinette M J; de Wit, Niek J; Hopstaken, Rogier M

    2015-01-01

    OBJECTIVE: Various point-of-care testing (POCT) urine analysers are commercially available for routine urine analysis in general practice. The present study compares analytical performance, agreement and user-friendliness of six different POCT urine analysers for diagnosing urinary tract infection

  7. Development of a model performance-based sign sheeting specification based on the evaluation of nighttime traffic signs using legibility and eye-tracker data : data and analyses.

    Science.gov (United States)

    2010-09-01

    This report presents data and technical analyses for Texas Department of Transportation Project 0-5235. This : project focused on the evaluation of traffic sign sheeting performance in terms of meeting the nighttime : driver needs. The goal was to de...

  8. The Minimum reporting package – Using standardised indicators to analyse the performance of Supplementary Feeding Programmes in 7 countries

    International Nuclear Information System (INIS)

    Fuller, Susan; Andert, Christoph; Keane, Emily; Navarro-Colorado, Carlos

    2014-01-01

    Full text: Background: The 2008 HPN Network Paper ‘Measuring the effectiveness of Supplementary Feeding Programmes in emergencies’ highlighted the inconsistencies, inadequacies and bias associated with reporting of Supplementary Feeding Programmes (SFP) and outlined the lack of existing tools to support all reporting needs for Community Management of Acute Malnutrition (CMAM) programmes. The ‘Minimum Reporting Package’ (MRP) was developed in response to this paper, and has evolved to a concise and comprehensive management tool which uses standardised indicators to improve the reporting and monitoring of the treatment components of community based management of acute malnutrition (CMAM). The aim of the tool is to provide a contextualised overview of the CMAM programmes to improve programme management decisions, improve accountability and assist urgently needed learning in the effectiveness of this programme approach. Methods: Data is collected regularly by a group of MRP partners and feeds into a central database. Analysis is on-going and leading to a larger analysis planned for the end of 2013/early 2014. The aims of these analyses are: • To describe the characteristics of CMAM programmes • To describe and assess the effect of CMAM programmes on rehabilitating malnourished individuals • To compare programme performance and outcomes according to contextual factors, differences in protocols or approaches. A preliminary analysis was run on Supplementary Feeding Programme (SFP) data collected between January 2012 and July 2013. The length of programme data differs but is generally above 3 months in order to be able to analyse programme results (a full contextual analysis will be conducted in early 2014 to be presented). Results: SFP data was available from 4 NGOs, supporting 10 programmes in 7 countries (Burkina Faso, Chad, Ethiopia, Ivory Coast, India, Kenya, Somalia). After data cleaning, a total of 23,584 admissions and 15,496 were included. The

  9. Hybridization Capture Using RAD Probes (hyRAD), a New Tool for Performing Genomic Analyses on Collection Specimens.

    Directory of Open Access Journals (Sweden)

    Tomasz Suchan

    Full Text Available In the recent years, many protocols aimed at reproducibly sequencing reduced-genome subsets in non-model organisms have been published. Among them, RAD-sequencing is one of the most widely used. It relies on digesting DNA with specific restriction enzymes and performing size selection on the resulting fragments. Despite its acknowledged utility, this method is of limited use with degraded DNA samples, such as those isolated from museum specimens, as these samples are less likely to harbor fragments long enough to comprise two restriction sites making possible ligation of the adapter sequences (in the case of double-digest RAD) or performing size selection of the resulting fragments (in the case of single-digest RAD). Here, we address these limitations by presenting a novel method called hybridization RAD (hyRAD). In this approach, biotinylated RAD fragments, covering a random fraction of the genome, are used as baits for capturing homologous fragments from genomic shotgun sequencing libraries. This simple and cost-effective approach allows sequencing of orthologous loci even from highly degraded DNA samples, opening new avenues of research in the field of museum genomics. Not relying on the restriction site presence, it improves among-sample loci coverage. In a trial study, hyRAD allowed us to obtain a large set of orthologous loci from fresh and museum samples from a non-model butterfly species, with a high proportion of single nucleotide polymorphisms present in all eight analyzed specimens, including 58-year-old museum samples. The utility of the method was further validated using 49 museum and fresh samples of a Palearctic grasshopper species for which the spatial genetic structure was previously assessed using mtDNA amplicons. The application of the method is eventually discussed in a wider context. As it does not rely on the restriction site presence, it is therefore not sensitive to among-sample loci polymorphisms in the restriction sites.

  10. Have Basic Mathematical Skills Grown Obsolete in the Computer Age: Assessing Basic Mathematical Skills and Forecasting Performance in a Business Statistics Course

    Science.gov (United States)

    Noser, Thomas C.; Tanner, John R.; Shah, Situl

    2008-01-01

    The purpose of this study was to measure the comprehension of basic mathematical skills of students enrolled in statistics classes at a large regional university, and to determine if the scores earned on a basic math skills test are useful in forecasting student performance in these statistics classes, and to determine if students' basic math…

  11. Risk assessment of student performance in the International Foundations of Medicine Clinical Science Examination by the use of statistical modeling.

    Science.gov (United States)

    David, Michael C; Eley, Diann S; Schafer, Jennifer; Davies, Leo

    2016-01-01

    The primary aim of this study was to assess the predictive validity of cumulative grade point average (GPA) for performance in the International Foundations of Medicine (IFOM) Clinical Science Examination (CSE). A secondary aim was to develop a strategy for identifying students at risk of performing poorly in the IFOM CSE as determined by the National Board of Medical Examiners' International Standard of Competence. Final year medical students from an Australian university medical school took the IFOM CSE as a formative assessment. Measures included overall IFOM CSE score as the dependent variable, cumulative GPA as the predictor, and the factors age, gender, year of enrollment, international or domestic status of student, and language spoken at home as covariates. Multivariable linear regression was used to measure predictor and covariate effects. Optimal thresholds of risk assessment were based on receiver-operating characteristic (ROC) curves. Cumulative GPA (nonstandardized regression coefficient [B]: 81.83; 95% confidence interval [CI]: 68.13 to 95.53) and international status (B: -37.40; 95% CI: -57.85 to -16.96) from 427 students were found to be statistically associated with increased IFOM CSE performance. Cumulative GPAs of 5.30 (area under ROC [AROC]: 0.77; 95% CI: 0.72 to 0.82) and 4.90 (AROC: 0.72; 95% CI: 0.66 to 0.78) were identified as being thresholds of significant risk for domestic and international students, respectively. Using cumulative GPA as a predictor of IFOM CSE performance and accommodating for differences in international status, it is possible to identify students who are at risk of failing to satisfy the National Board of Medical Examiners' International Standard of Competence.
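
    For illustration, the threshold-finding step (an AROC for cumulative GPA as a predictor of being at risk, with a GPA cut-off read from the ROC curve) can be reproduced generically as in the Python sketch below; the GPA values, the underlying risk model and the use of Youden's J to pick the cut-off are assumptions for demonstration, not the study's data or criterion.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        # Hypothetical cumulative GPA and pass/fail status against the standard of competence
        rng = np.random.default_rng(7)
        gpa = np.clip(rng.normal(5.4, 0.7, 427), 3.0, 7.0)
        fail_prob = 1 / (1 + np.exp(3.0 * (gpa - 5.0)))   # lower GPA implies higher failure risk
        at_risk = (rng.random(427) < fail_prob).astype(int)

        # AROC for GPA as a risk predictor, and a cut-off chosen via Youden's J
        auc = roc_auc_score(at_risk, -gpa)                 # negate: lower GPA means higher risk
        fpr, tpr, thresholds = roc_curve(at_risk, -gpa)
        j = np.argmax(tpr - fpr)
        print(f"AROC={auc:.2f}, GPA cut-off = {-thresholds[j]:.2f}")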

  12. a Statistical Analysis on the System Performance of a Bluetooth Low Energy Indoor Positioning System in a 3d Environment

    Science.gov (United States)

    Haagmans, G. G.; Verhagen, S.; Voûte, R. L.; Verbree, E.

    2017-09-01

    Since GPS tends to fail for indoor positioning purposes, alternative methods like indoor positioning systems (IPS) based on Bluetooth low energy (BLE) are developing rapidly. Generally, IPS are deployed in environments covered with obstacles such as furniture, walls, people and electronics influencing the signal propagation. The major factor influencing the system performance, and hence the acquisition of optimal positioning results, is the geometry of the beacons. The geometry of the beacons is limited to the available infrastructure that can be deployed (number of beacons, basestations and tags), which leads to the following challenge: given a limited number of beacons, where should they be placed in a specified indoor environment, such that the geometry contributes to optimal positioning results? This paper aims to propose a statistical model that is able to select the optimal configuration that satisfies the user requirements in terms of precision. The model requires the definition of a chosen 3D space (in our case 7 × 10 × 6 metres), the number of beacons, possible user tag locations and a performance threshold (e.g. required precision). For any given set of beacon and receiver locations, the precision and the internal and external reliability can be determined beforehand. As validation, the modeled precision has been compared with observed precision results. The measurements have been performed with an IPS of BlooLoc at a chosen set of user tag locations for a given geometric configuration. Eventually, the model is able to select the optimal geometric configuration out of millions of possible configurations based on a performance threshold (e.g. required precision).
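
    One plausible way to score a candidate beacon geometry for a given user location, in the spirit of the precision model described above, is the dilution of precision (DOP) of range-based positioning, sketched in Python below; the candidate beacon coordinates, the exhaustive 4-beacon search and the DOP criterion itself are assumptions for illustration and are not the specific statistical model used in the paper.

        import numpy as np
        from itertools import combinations

        def position_dop(beacons, user):
            # A is the Jacobian of ranges w.r.t. the 3-D user position; smaller DOP means
            # the same ranging noise maps to a smaller position error.
            diffs = beacons - user
            A = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
            Q = np.linalg.inv(A.T @ A)
            return np.sqrt(np.trace(Q))

        # Hypothetical candidate beacon positions in a 7 x 10 x 6 m space
        candidates = np.array([[0, 0, 6], [7, 0, 6], [0, 10, 6], [7, 10, 6],
                               [3.5, 5, 6], [0, 5, 3], [7, 5, 3], [3.5, 0, 3]], dtype=float)
        user = np.array([3.0, 4.0, 1.5])

        # Exhaustively pick the best 4-beacon configuration for this user location
        best = min(combinations(range(len(candidates)), 4),
                   key=lambda idx: position_dop(candidates[list(idx)], user))
        print("best beacon subset:", best, "DOP:", position_dop(candidates[list(best)], user))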

  13. A PERFORMANCE COMPARISON BETWEEN ARTIFICIAL NEURAL NETWORKS AND MULTIVARIATE STATISTICAL METHODS IN FORECASTING FINANCIAL STRENGTH RATING IN TURKISH BANKING SECTOR

    Directory of Open Access Journals (Sweden)

    MELEK ACAR BOYACIOĞLU

    2013-06-01

    Full Text Available Financial strength rating indicates the fundamental financial strength of a bank. The aim of financial strength rating is to measure a bank’s fundamental financial strength excluding the external factors. External factors can stem from the working environment or can be linked with the outside protective support mechanisms. With the evaluation, the rating of a bank free from outside supportive factors is being sought. Also the financial fundamentals, franchise value, the variety of assets and working environment of a bank are evaluated in this context. In this study, a model has been developed in order to predict the financial strength rating of Turkish banks. The methodology of this study is as follows: selecting the variables to be used in the model, creating a data set, choosing the techniques to be used, and evaluating the classification success of the techniques. It is concluded that the artificial neural network system shows a better performance in terms of classification of financial strength rating in comparison to multivariate statistical methods in the training set. On the other hand, no meaningful difference could be found in the validation set, in which the prediction performances of the employed techniques are tested.

  14. SIMPLIFIED PREDICTIVE MODELS FOR CO₂ SEQUESTRATION PERFORMANCE ASSESSMENT RESEARCH TOPICAL REPORT ON TASK #3 STATISTICAL LEARNING BASED MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta; Schuetter, Jared

    2014-11-01

    We compare two approaches for building a statistical proxy model (metamodel) for CO₂ geologic sequestration from the results of full-physics compositional simulations. The first approach involves a classical Box-Behnken or Augmented Pairs experimental design with a quadratic polynomial response surface. The second approach uses a space-filling maximin Latin Hypercube sampling or maximum entropy design with a choice of five different meta-modeling techniques: quadratic polynomial, kriging with constant and quadratic trend terms, multivariate adaptive regression spline (MARS) and additivity and variance stabilization (AVAS). Simulation results for CO₂ injection into a reservoir-caprock system with 9 design variables (and 97 samples) were used to generate the data for developing the proxy models. The fitted models were validated using an independent data set and a cross-validation approach for three different performance metrics: total storage efficiency, CO₂ plume radius and average reservoir pressure. The Box-Behnken–quadratic polynomial metamodel performed the best, followed closely by the maximin LHS–kriging metamodel.
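
    The overall workflow (space-filling Latin hypercube design, fitting a proxy model, then validating against held-out runs) can be sketched generically in Python as below; the stand-in "simulator", the simple column-wise Latin hypercube and the quadratic-polynomial proxy are assumptions for illustration, not the study's compositional simulations or its Box-Behnken, kriging, MARS and AVAS variants.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        def latin_hypercube(n_samples, n_dims, rng):
            # One point per stratum and dimension on the unit hypercube
            cols = [(rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
                    for _ in range(n_dims)]
            return np.column_stack(cols)

        rng = np.random.default_rng(8)
        X_train = latin_hypercube(97, 9, rng)   # 9 design variables, 97 runs, as in the abstract
        X_test = latin_hypercube(30, 9, rng)

        def fake_simulator(X):
            # Stand-in for the full-physics simulator, for demonstration only
            return 0.4 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * X[:, 2] * X[:, 3] + 0.05 * rng.normal(size=len(X))

        y_train, y_test = fake_simulator(X_train), fake_simulator(X_test)

        # Quadratic-polynomial response surface as the proxy model, validated on held-out runs
        proxy = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X_train, y_train)
        rmse = np.sqrt(np.mean((y_test - proxy.predict(X_test)) ** 2))
        print(f"hold-out RMSE of the proxy: {rmse:.3f}")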

  15. Risk assessment of student performance in the International Foundations of Medicine Clinical Science Examination by the use of statistical modeling

    Directory of Open Access Journals (Sweden)

    David MC

    2016-12-01

    Full Text Available Michael C David,1 Diann S Eley,2 Jennifer Schafer,2 Leo Davies,3 1School of Public Health, 2School of Medicine, The University of Queensland, Herston, QLD, 3Sydney Medical School, The University of Sydney, NSW, Australia Purpose: The primary aim of this study was to assess the predictive validity of cumulative grade point average (GPA) for performance in the International Foundations of Medicine (IFOM) Clinical Science Examination (CSE). A secondary aim was to develop a strategy for identifying students at risk of performing poorly in the IFOM CSE as determined by the National Board of Medical Examiners’ International Standard of Competence. Methods: Final year medical students from an Australian university medical school took the IFOM CSE as a formative assessment. Measures included overall IFOM CSE score as the dependent variable, cumulative GPA as the predictor, and the factors age, gender, year of enrollment, international or domestic status of student, and language spoken at home as covariates. Multivariable linear regression was used to measure predictor and covariate effects. Optimal thresholds of risk assessment were based on receiver-operating characteristic (ROC) curves. Results: Cumulative GPA (nonstandardized regression coefficient [B]: 81.83; 95% confidence interval [CI]: 68.13 to 95.53) and international status (B: –37.40; 95% CI: –57.85 to –16.96) from 427 students were found to be statistically associated with increased IFOM CSE performance. Cumulative GPAs of 5.30 (area under ROC [AROC]: 0.77; 95% CI: 0.72 to 0.82) and 4.90 (AROC: 0.72; 95% CI: 0.66 to 0.78) were identified as being thresholds of significant risk for domestic and international students, respectively. Conclusion: Using cumulative GPA as a predictor of IFOM CSE performance and accommodating for differences in international status, it is possible to identify students who are at risk of failing to satisfy the National Board of Medical Examiners’ International

  16. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2009 and since 1971

    Energy Technology Data Exchange (ETDEWEB)

    Davis, P.M.; Dubois, J.; Gambardella, F.; Sanchez-Garcia, E.; Uhlig, F.

    2011-05-15

    CONCAWE has collected 39 years of spillage data on European cross-country oil pipelines. At about 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 870 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2009 and a full historical perspective since 1971. The performance over the whole 39 years is analysed in various ways including gross and net spillage volumes and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by in line tools (intelligence pigs) is also reported. 5 spillage incidents were reported in 2009, corresponding to 0.14 spillages per 1000 km of line, well below the 5-year average of 0.28 and the long-term running average of 0.53, which has been steadily decreasing over the years from a value of 1.2 in the mid 70s. There were no fires, fatalities or injuries connected with these spills. 4 incidents were due to mechanical failure and 1 was connected to past third party activities. Over the long term, third party activities remain the main cause of spillage incidents although mechanical failures have increased in recent years, a trend that needs to be scrutinised in years to come.

  17. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2007 and since 1971

    International Nuclear Information System (INIS)

    2009-11-01

    CONCAWE has collected 37 years of spillage data on European cross-country oil pipelines. At over 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 800 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2007 and a full historical perspective since 1971. The performance over the whole 37 years is analysed in various ways including gross and net spillage volumes and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by intelligence pigs is also reported. 9 spillage incidents were reported in 2007, corresponding to 0.28 spillages per 1000 km of line, just under the 5-year average and well below the long-term running average of 0.55, which has been steadily decreasing over the years from a value of 1.2 in the mid 70s. There were no fires, fatalities or injuries connected with these spills. 1 incident was due to mechanical failure, 2 incidents to corrosion and 6 were connected to third party activities. Over the long term, third party activities are the main cause of spillage incidents.

  18. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2010 and since 1971

    International Nuclear Information System (INIS)

    Davis, P.M.; Dubois, J.; Gambardella, F.; Sanchez-Garcia, E.; Uhlig, F.

    2011-12-01

    CONCAWE has collected 40 years of spillage data on European cross-country oil pipelines. At about 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 800 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2010 and a full historical perspective since 1971. The performance over the whole 40 years is analysed in various ways, including gross and net spillage volumes, and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by in-line tools (intelligence pigs) is also reported. 4 spillage incidents were reported in 2010, corresponding to 0.12 spillages per 1000 km of line, well below the 5-year average of 0.25 and the long-term running average of 0.52, which has been steadily decreasing over the years from a value of 1.2 in the mid-70s. There were no fires, fatalities or injuries connected with these spills. 2 incidents were due to mechanical failure, 1 to external corrosion, and 1 was connected to past third party activities. Over the long term, third party activities remain the main cause of spillage incidents although mechanical failures have increased in recent years, a trend that needs to be scrutinised in years to come.

  19. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2010 and since 1971

    Energy Technology Data Exchange (ETDEWEB)

    Davis, P.M.; Dubois, J.; Gambardella, F.; Sanchez-Garcia, E.; Uhlig, F.

    2011-12-15

    CONCAWE has collected 40 years of spillage data on European cross-country oil pipelines. At about 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 800 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2010 and a full historical perspective since 1971. The performance over the whole 40 years is analysed in various ways, including gross and net spillage volumes, and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by in-line tools (intelligence pigs) is also reported. 4 spillage incidents were reported in 2010, corresponding to 0.12 spillages per 1000 km of line, well below the 5-year average of 0.25 and the long-term running average of 0.52, which has been steadily decreasing over the years from a value of 1.2 in the mid-70s. There were no fires, fatalities or injuries connected with these spills. 2 incidents were due to mechanical failure, 1 to external corrosion, and 1 was connected to past third party activities. Over the long term, third party activities remain the main cause of spillage incidents although mechanical failures have increased in recent years, a trend that needs to be scrutinised in years to come.

  20. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2007 and since 1971

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-11-15

    CONCAWE has collected 37 years of spillage data on European cross-country oil pipelines. At over 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 800 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2007 and a full historical perspective since 1971. The performance over the whole 37 years is analysed in various ways including gross and net spillage volumes and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by intelligence pigs is also reported. 9 spillage incidents were reported in 2007, corresponding to 0.28 spillages per 1000 km of line, just under the 5-year average and well below the long-term running average of 0.55, which has been steadily decreasing over the years from a value of 1.2 in the mid 70s. There were no fires, fatalities or injuries connected with these spills. 1 incident was due to mechanical failure, 2 incidents to corrosion and 6 were connected to third party activities. Over the long term, third party activities are the main cause of spillage incidents.

  1. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2008 and since 1971

    Energy Technology Data Exchange (ETDEWEB)

    Davis, P.M.; Dubois, J.; Gambardella, F.; Uhlig, F.

    2010-06-15

    CONCAWE has collected 38 years of spillage data on European cross-country oil pipelines. At over 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 780 million m{sup 3} per year of crude oil and oil products. This report covers the performance of these pipelines in 2008 and a full historical perspective since 1971. The performance over the whole 38 years is analysed in various ways including gross and net spillage volumes and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by in line tools (intelligence pigs) is also reported. 12 spillage incidents were reported in 2008, corresponding to 0.34 spillages per 1000 km of line, somewhat above the 5-year average of 0.28 but well below the long-term running average of 0.54, which has been steadily decreasing over the years from a value of 1.2 in the mid 70s. There were no fires, fatalities or injuries connected with these spills. 7 incidents were due to mechanical failure, 1 incident to corrosion and 4 were connected to third party activities. Over the long term, third party activities remain the main cause of spillage incidents although mechanical failures have increased in recent years, a trend that needs to be scrutinised in years to come.

  2. Performance of European cross-country oil pipelines. Statistical summary of reported spillages in 2009 and since 1971

    International Nuclear Information System (INIS)

    Davis, P.M.; Dubois, J.; Gambardella, F.; Sanchez-Garcia, E.; Uhlig, F.

    2011-05-01

    CONCAWE has collected 39 years of spillage data on European cross-country oil pipelines. At about 35,000 km the inventory covered currently includes the vast majority of such pipelines in Europe, transporting around 870 million m3 per year of crude oil and oil products. This report covers the performance of these pipelines in 2009 and a full historical perspective since 1971. The performance over the whole 39 years is analysed in various ways including gross and net spillage volumes and spillage causes grouped into five main categories: mechanical failure, operational, corrosion, natural hazard and third party. The rate of inspections by in line tools (intelligence pigs) is also reported. 5 spillage incidents were reported in 2009, corresponding to 0.14 spillages per 1000 km of line, well below the 5-year average of 0.28 and the long-term running average of 0.53, which has been steadily decreasing over the years from a value of 1.2 in the mid 70s. There were no fires, fatalities or injuries connected with these spills. 4 incidents were due to mechanical failure and 1 was connected to past third party activities. Over the long term, third party activities remain the main cause of spillage incidents although mechanical failures have increased in recent years, a trend that needs to be scrutinised in years to come.

  3. PathMAPA: a tool for displaying gene expression and performing statistical tests on metabolic pathways at multiple levels for Arabidopsis

    Directory of Open Access Journals (Sweden)

    Ma Ligeng

    2003-11-01

    Full Text Available Abstract Background To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications to date, however, complex pathways have been displayed with static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need to provide useful tools that can generate pathways without manually building images and allow gene expression data to be integrated and analyzed at pathway levels for such experimental organisms as Arabidopsis. Results We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expressions for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. Conclusion PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at pathway level. The first feature allows for the periodical updating of genomic data for pathways, while the second feature can provide insight into how treatments affect relevant pathways for the selected experiment(s).

  4. Arlequin suite ver 3.5: a new series of programs to perform population genetics analyses under Linux and Windows.

    Science.gov (United States)

    Excoffier, Laurent; Lischer, Heidi E L

    2010-05-01

    We present here a new version of the Arlequin program available under three different forms: a Windows graphical version (Winarl35), a console version of Arlequin (arlecore), and a specific console version to compute summary statistics (arlsumstat). The command-line versions run under both Linux and Windows. The main innovations of the new version include enhanced outputs in XML format, the possibility to embed graphics displaying computation results directly into output files, and the implementation of a new method to detect loci under selection from genome scans. Command-line versions are designed to handle large series of files, and arlsumstat can be used to generate summary statistics from simulated data sets within an Approximate Bayesian Computation framework. © 2010 Blackwell Publishing Ltd.

  5. JENDL-3.2 performance in analyses of MISTRAL critical experiments for high-moderation MOX cores

    International Nuclear Information System (INIS)

    Takada, Naoyuki; Hibi, Koki; Ishii, Kazuya; Ando, Yoshihira; Yamamoto, Toru; Ueji, Masao; Iwata, Yutaka

    2001-01-01

    NUPEC and CEA have launched an extensive experimental program called MISTRAL to study highly moderated MOX cores for advanced LWRs. Analyses using the SRAC system and the MVP code with the JENDL-3.2 library are in progress for the experiments of the MISTRAL and the former EPICURE programs. Various comparisons have been made between calculation results and measured values. (author)

  6. Structural and Treatment Analyses of Safe and At-Risk Behaviors and Postures Performed by Pharmacy Employees

    Science.gov (United States)

    Fante, Rhiannon; Gravina, Nicole; Betz, Alison; Austin, John

    2010-01-01

    This study employed structural and treatment analyses to determine factors that contributed to wrist posture safety in a small pharmacy. The pharmacy was located on a university campus and participants were three female pharmacy technicians. These particular employees had experienced various repetitive-motion injuries that resulted in a total of…

  7. Joint statistics of partial sums of ordered exponential variates and performance of GSC RAKE receivers over rayleigh fading channel

    KAUST Repository

    Nam, Sungsik; Hasna, Mazen Omar; Alouini, Mohamed-Slim

    2011-01-01

    -interference on GSC RAKE receivers. The major difficulty in these problems is to derive some joint statistics of ordered exponential variates. With this motivation in mind, we capitalize in this paper on some new order statistics results to derive exact closed
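
    The abstract is truncated here, so the sketch below is only a generic Monte Carlo illustration of the quantity the title refers to: in generalized selection combining (GSC) over a Rayleigh fading channel, the branch SNRs are i.i.d. exponential, and the combiner output is the partial sum of the largest Lc order statistics. The branch count, number of combined branches, mean SNR, and outage threshold are assumed values, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L, Lc = 5, 3                 # total branches and number combined (assumed)
    mean_snr = 2.0               # average branch SNR under Rayleigh fading (assumed)
    n_trials = 200_000

    # Branch SNRs on a Rayleigh fading channel are i.i.d. exponential variates.
    snr = rng.exponential(mean_snr, size=(n_trials, L))

    # GSC: order the branch SNRs and sum the Lc largest (a partial sum of order statistics).
    snr_sorted = np.sort(snr, axis=1)[:, ::-1]
    gsc_snr = snr_sorted[:, :Lc].sum(axis=1)

    # Empirical outage probability for a hypothetical SNR threshold.
    threshold = 1.0
    print("outage probability:", np.mean(gsc_snr < threshold))
    ```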

  8. Spatial and statistical analysis of the Iron Age in France: the example of 'BaseFer'

    Directory of Open Access Journals (Sweden)

    Olivier Buchsenschutz

    2009-05-01

    Full Text Available The development of Geographical Information Systems (GIS) allows the information in archaeological databases to be georeferenced. It is thus possible to obtain distribution maps which can then be interpreted using statistical and spatial analyses. Maps and statistics highlight the state of research, the conservation conditions of the sites and, beyond that, historical and cultural phenomena. Through a research programme on the Iron Age in France (BaseFer), a global database was established for metropolitan France. This article puts forward a number of analyses of the general descriptive criteria for a corpus of 11,000 sites (the departments along the Mediterranean coast are excluded from this test). The control and development of finer descriptors will be undertaken by an enlarged team, before the data are networked.

  9. Performance of oil industry cross-country pipelines in Western Europe: statistical summary of reported spillages, 1979

    Energy Technology Data Exchange (ETDEWEB)

    de Waal, A.; Hayward, P.; Panisi, C.; Groenhof, J.

    This report presents statistical data relating to spillages from oil industry cross-country pipelines during the calendar year 1979, with comments and comparisons for the five year period 1975-1979. (Copyright (c) CONCAWE 1980.)

  10. The Role of Statistics and Research Methods in the Academic Success of Psychology Majors: Do Performance and Enrollment Timing Matter?

    Science.gov (United States)

    Freng, Scott; Webber, David; Blatter, Jamin; Wing, Ashley; Scott, Walter D.

    2011-01-01

    Comprehension of statistics and research methods is crucial to understanding psychology as a science (APA, 2007). However, psychology majors sometimes approach methodology courses with derision or anxiety (Onwuegbuzie & Wilson, 2003; Rajecki, Appleby, Williams, Johnson, & Jeschke, 2005); consequently, students may postpone…

  11. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    1963-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  12. Analysing the accuracy of pavement performance models in the short and long terms: GMDH and ANFIS methods

    NARCIS (Netherlands)

    Ziari, H.; Sobhani, J.; Ayoubinejad, J.; Hartmann, Timo

    2016-01-01

    The accuracy of pavement performance prediction is a critical part of pavement management and directly influences maintenance and rehabilitation strategies. Many models with various specifications have been proposed by researchers and used by agencies. This study presents nine variables affecting

  13. Analyses of thermodynamic performance for the endoreversible Otto cycle with the concepts of entropy generation and entransy

    Institute of Scientific and Technical Information of China (English)

    WU; YanQiu

    2017-01-01

    In this paper, the endoreversible Otto cycle is analyzed with the entropy generation minimization and the entransy theory. The output power and the heat-work conversion efficiency are taken as the optimization objectives, and the relationships of the output power and the heat-work conversion efficiency with the entropy generation rate, the entropy generation numbers, the entransy loss rate, the entransy loss coefficient, the entransy dissipation rate and the entransy variation rate associated with work are discussed. The applicability of the entropy generation minimization and the entransy theory to the analyses is also analyzed. It is found that a smaller entropy generation rate does not always lead to larger output power, and smaller entropy generation numbers do not always lead to larger heat-work conversion efficiency, either. In our calculations, both a larger entransy loss rate and a larger entransy variation rate associated with work correspond to larger output power, while a larger entransy loss coefficient results in larger heat-work conversion efficiency. It is also found that the concept of entransy dissipation is not always suitable for these analyses because it was developed for heat transfer.

  14. Modern applied statistics with S-plus

    CERN Document Server

    Venables, W N

    1994-01-01

    S-Plus is a powerful environment for statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-Plus to perform statistical analyses and provides both an introduction to the use of S-Plus and a course in modern statistical methods. The aim of the book is to show how to use S-Plus as a powerful and graphical system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-Plus, and both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets.

  15. Using the expected detection delay to assess the performance of different multivariate statistical process monitoring methods for multiplicative and drift faults.

    Science.gov (United States)

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Peng, Kaixiang

    2017-03-01

    The expected detection delay (EDD) index for measuring the performance of multivariate statistical process monitoring (MSPM) methods under constant additive faults has recently been developed. This paper, based on a statistical investigation of the T²- and Q-test statistics, extends the EDD index to the multiplicative and drift fault cases. It is then used to assess the performance of common MSPM methods that adopt these two test statistics. Based on how they use the measurement space, these methods can be divided into two groups: those which consider the complete measurement space, for example, principal component analysis-based methods, and those which only consider some subspace that reflects changes in key performance indicators, such as partial least squares-based methods. Furthermore, a generic form in which they use the T²- and Q-test statistics is given. With the extended EDD index, the performance of these methods in detecting drift and multiplicative faults is assessed using both numerical simulations and the Tennessee Eastman process. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
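
    As a concrete illustration of the two test statistics these MSPM methods share, the sketch below computes Hotelling's T² and the Q (squared prediction error) statistic for a PCA-based monitor. It is a generic example, not the paper's EDD evaluation procedure; the data sizes and the number of retained components are hypothetical.

    ```python
    import numpy as np

    # Hypothetical training data (normal operation) and test data.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 10))
    X_test = rng.normal(size=(100, 10))

    # Fit a PCA model on standardized training data.
    mean, std = X_train.mean(axis=0), X_train.std(axis=0)
    Xc = (X_train - mean) / std
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    n = Xc.shape[0]
    eigvals = s**2 / (n - 1)          # eigenvalues of the training covariance matrix
    k = 3                             # number of retained principal components (assumed)
    P = Vt[:k].T                      # loading matrix (10 x k)

    # Monitoring statistics for each test sample.
    Z = (X_test - mean) / std
    T = Z @ P                         # scores in the retained subspace
    T2 = np.sum(T**2 / eigvals[:k], axis=1)   # Hotelling's T^2 statistic
    residual = Z - T @ P.T                    # part not explained by the PCA model
    Q = np.sum(residual**2, axis=1)           # Q (squared prediction error) statistic

    print(T2[:5], Q[:5])
    ```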

  16. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    Science.gov (United States)

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
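
    Following the paper's recommendation, the sketch below pools study-specific C-statistics on the logit scale with a DerSimonian-Laird random-effects model and back-transforms the result. The study-level C-statistics and their logit-scale standard errors are invented purely for illustration.

    ```python
    import numpy as np

    # Hypothetical C-statistics and logit-scale standard errors from 5 validation studies.
    c = np.array([0.72, 0.78, 0.70, 0.81, 0.75])
    se_logit = np.array([0.10, 0.12, 0.09, 0.15, 0.11])   # SE of logit(C), assumed known

    y = np.log(c / (1 - c))          # logit transform improves between-study normality
    v = se_logit**2                  # within-study variances

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed)**2)
    df = len(y) - 1
    C_dl = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C_dl)

    # Random-effects pooled logit-C with 95% CI, back-transformed to the C-statistic scale.
    w_star = 1 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se_mu = np.sqrt(1 / np.sum(w_star))
    ci = mu + np.array([-1.96, 1.96]) * se_mu
    expit = lambda x: 1 / (1 + np.exp(-x))
    print("pooled C:", expit(mu), "95% CI:", expit(ci))
    ```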

  17. Evaluating the performance of commonly used gas analysers for methane eddy covariance flux measurements: the InGOS inter-comparison field experiment

    NARCIS (Netherlands)

    Peltola, O.; Hensen, A.; Helfter, C.; Belelli Marchesini, L.; Bosveld, F.C.; Bulk, van de W.C.M.; Elbers, J.A.; Haapanala, S.; Holst, J.; Laurila, T.; Lindroth, A.; Nemitz, E.; Röckmann, T.; Vermeulen, A.T.; Mammarella, I.

    2014-01-01

    The performance of eight fast-response methane (CH4) gas analysers suitable for eddy covariance flux measurements were tested at a grassland site near the Cabauw tall tower (Netherlands) during June 2012. The instruments were positioned close to each other in order to minimise the effect of varying

  20. Introduction to Statistics

    Directory of Open Access Journals (Sweden)

    Mirjam Nielen

    2017-01-01

    Full Text Available Always wondered why research papers often present rather complicated statistical analyses? Or wondered how to properly analyse the results of a pragmatic trial from your own practice? This talk will give an overview of basic statistical principles and focus on the why of statistics, rather than on the how. This is a podcast of Mirjam's talk at the Veterinary Evidence Today conference, Edinburgh, November 2, 2016.

  1. The influence of bilirubin, haemolysis and turbidity on 20 analytical tests performed on automatic analysers. Results of an interlaboratory study.

    Science.gov (United States)

    Grafmeyer, D; Bondon, M; Manchon, M; Levillain, P

    1995-01-01

    The director of a laboratory has to be sure to give out reliable results for routine tests on automatic analysers regardless of the clinical context. However, he may find hyperbilirubinaemia in some circumstances, parenteral nutrition causing turbidity in others, and haemolysis occurring if sampling is difficult. For this reason, the Commission for Instrumentation of the Société Française de Biologie Clinique (SFBC) (president Alain Feuillu) decided to look into "visible" interferences--bilirubin, haemolysis and turbidity--and their effect on 20 major tests: 13 substrates/chemistries: albumin, calcium, cholesterol, creatinine, glucose, iron, magnesium, phosphorus, total bilirubin, total proteins, triacylglycerols, uric acid, urea, and 7 enzymatic activities: alkaline phosphatase, alanine aminotransferase, alpha-amylase, aspartate aminotransferase, creatine kinase, gamma-glutamyl transferase and lactate dehydrogenase measured on 15 automatic analysers representative of those found on the French market (Astra 8, AU 510, AU 5010, AU 5000, Chem 1, CX 7, Dax 72, Dimension, Ektachem, Hitachi 717, Hitachi 737, Hitachi 747, Monarch, Open 30, Paramax, Wako 30 R) and to see how much they affect the accuracy of results under routine conditions in the laboratory. The study was carried out following the SFBC protocol for the validation of techniques using spiked plasma pools with bilirubin, ditauro-bilirubin, haemoglobin (from haemolysate) and Intralipid (turbidity). Overall, the following results were obtained: haemolysis affects tests the most often (34.5% of cases); total bilirubin interferes in 21.7% of cases; direct bilirubin and turbidity seem to interfere less at around 17%. The different tests are not affected to the same extent; enzyme activity is hardly affected at all; on the other hand certain major tests are extremely sensitive, increasingly so as we go through the following: creatinine (interference of bilirubin), triacylglycerols (interference of bilirubin and

  2. Development of the financial model for analyses on economic performances of nuclear facilities and examples of its applications

    International Nuclear Information System (INIS)

    Mankin, Shuichi; Ueno, Seiichi; Kimura, Shigeru; Yuasa, Tadao.

    1988-10-01

    On the assumption that the technologies have reached the commercialization stage, the analysis of financial operating performance based on simulation studies is an important subject in the system analysis and economic assessment of nuclear technologies. However, economic assessment of the financial performance of complex industries, such as nuclear power based on nuclear fuel cycle industries, or electric utilities operating hydro, fossil and nuclear power stations, is complicated, and conventional financial models are insufficient for nuclear technologies, which involve special financial processes such as decommissioning. We therefore developed a computer simulation model that can analyze the financial performance of nuclear facilities. In this report, the derivation of the equations and an outline of the model are explained. In addition, examples of hypothetical financial simulation studies on a coal-gasoline plant and on nuclear waste industries, and an analysis of the economic prospects of small nuclear reactors for electric utilities, are presented. (author)

  3. The Response of Performance to Merger Strategy in Indonesian Banking Industry: Analyses on Bank Mandiri, Bank Danamon, and Bank Permata

    Directory of Open Access Journals (Sweden)

    Murti Lestari

    2010-05-01

    Full Text Available This study analyzes the responses of the performances of Bank Mandiri, Bank Danamon, and Bank Permata to merger strategy. This paper harnesses the quantitative approach with the structural break analysis method and the impulse response function. The plausible findings indicate that the merger of Bank Permata produces a better performance response in comparison to the consolidation of Bank Mandiri and the merger of Bank Danamon. The merger of Bank Permata does not result in performance shocks, and a structural break does not prevail either. On the other hand, the consolidation of Bank Mandiri and the merger of Bank Danamon result in structural breaks, particularly in the spread performance. In order to return to a stable position, the mergers of Bank Mandiri and Bank Danamon require a longer time than does the merger of Bank Permata. This research indicates that for large banks, mergers and acquisitions (retaining one existing bank) will deliver a better performance response than will consolidations (no existing bank). Keywords: impulse response function; merger; structural break

  4. Experience of RIA safety analyses performance for NPP Temelin core arranged with TVSA-T fuel assemblies

    International Nuclear Information System (INIS)

    Kryukov, S.A.; Lizorkin, M.P.

    2010-01-01

    The contents of the presentation are as follows: 1. Definition of categories for initiating events; 2. Acceptance criteria for safety assessment; 3. Main aspects of safety assessment methodology; 4. Main stages of calculation analysis; 5. Interface with other parts of the core design; 6. Codes used for calculation; 6.1 Main performances of code package TIGR-1; 6.2 Main performances of code BIPR-7A; 7. TIGR-1 accounting of design margins in calculation of fuel rod powers; 8. Peculiar features of Instrumentation and Control System for Temelin NPP; 9. Calculations; 10. Checklist of margin data important for reload safety assessment. (P.A.)

  5. Analysing drying unit performance in a continuous pharmaceutical manufacturing line by means of mass – Energy balances

    DEFF Research Database (Denmark)

    Mortier, Séverine Thérèse F.C.; Gernaey, Krist; De Beer, Thomas

    2014-01-01

    locations. A calibration is performed in order to predict the evaporation rate. The balances were able to predict both the moisture content of the granules at the end of the drying process and the gas outlet temperature quite accurately. Combining the gathered information with the height of the bed...
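
    The kind of gas-side moisture balance the abstract refers to can be written in a few lines. The sketch below is only an illustrative steady-state balance, not the authors' calibrated model; the gas flow rate, humidities, granule hold-up, and drying time are assumed values.

    ```python
    # Illustrative gas-side mass balance for one fluid-bed drying segment (hypothetical values).
    m_dot_gas = 0.35             # dry-gas mass flow rate [kg/s]
    Y_in, Y_out = 0.008, 0.021   # absolute humidity of inlet / outlet gas [kg water / kg dry gas]

    # Water evaporated per second equals the extra moisture carried away by the gas.
    evap_rate = m_dot_gas * (Y_out - Y_in)      # [kg water / s]

    # With a known granule hold-up, the granule moisture content decreases at this rate.
    granule_holdup = 12.0        # dry mass of granules in the cell [kg]
    moisture_0 = 0.30            # initial moisture content [kg water / kg dry solid]
    dt = 60.0                    # drying time considered [s]
    moisture = moisture_0 - evap_rate * dt / granule_holdup
    print(f"evaporation rate: {evap_rate*1000:.2f} g/s, moisture after {dt:.0f} s: {moisture:.3f}")
    ```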

  6. Statistical thermodynamics

    International Nuclear Information System (INIS)

    Lim, Gyeong Hui

    2008-03-01

    This book consists of 15 chapters, which are basic conception and meaning of statistical thermodynamics, Maxwell-Boltzmann's statistics, ensemble, thermodynamics function and fluctuation, statistical dynamics with independent particle system, ideal molecular system, chemical equilibrium and chemical reaction rate in ideal gas mixture, classical statistical thermodynamics, ideal lattice model, lattice statistics and nonideal lattice model, imperfect gas theory on liquid, theory on solution, statistical thermodynamics of interface, statistical thermodynamics of a high molecule system and quantum statistics

  7. Enhanced performance of solid oxide electrolysis cells by integration with a partial oxidation reactor: Energy and exergy analyses

    International Nuclear Information System (INIS)

    Visitdumrongkul, Nuttawut; Tippawan, Phanicha; Authayanun, Suthida; Assabumrungrat, Suttichai; Arpornwichanop, Amornchai

    2016-01-01

    Highlights: • Process design of solid oxide electrolyzer integrated with a partial oxidation reactor is studied. • Effect of key operating parameters of partial oxidation reactor on the electrolyzer performance is presented. • Exergy analysis of the electrolyzer process is performed. • Partial oxidation reactor can enhance the solid oxide electrolyzer performance. • Partial oxidation reactor in the process is the highest exergy destruction unit. - Abstract: Hydrogen production without carbon dioxide emission has received a large amount of attention recently. A solid oxide electrolysis cell (SOEC) can produce pure hydrogen and oxygen via a steam electrolysis reaction that does not emit greenhouse gases. Due to the high operating temperature of SOEC, an external heat source is required for operation, which also helps to improve SOEC performance and reduce operating electricity. The non-catalytic partial oxidation reaction (POX), which is a highly exothermic reaction, can be used as an external heat source and can be integrated with SOEC. Therefore, the aim of this work is to study the effect of operating parameters of non-catalytic POX (i.e., the oxygen to carbon ratio, operating temperature and pressure) on SOEC performance, including exergy analysis of the process. The study indicates that non-catalytic partial oxidation can enhance the hydrogen production rate and efficiency of the system. In terms of exergy analysis, the non-catalytic partial oxidation reactor is demonstrated to be the highest exergy destruction unit due to irreversible chemical reactions taking place, whereas SOEC is a low exergy destruction unit. This result indicates that the partial oxidation reactor should be improved and optimally designed to obtain a high energy and exergy system efficiency.