STATCAT, Statistical Analysis of Parametric and Non-Parametric Data
International Nuclear Information System (INIS)
David, Hugh
1990-01-01
1 - Description of program or function: A suite of 26 programs designed to facilitate the appropriate statistical analysis and data handling of parametric and non-parametric data, using classical and modern univariate and multivariate methods. 2 - Method of solution: Data are read entry by entry, using a choice of input formats, and the resultant data bank is checked for out-of-range, rare, extreme or missing data. The completed STATCAT data bank can be treated by a variety of descriptive and inferential statistical methods, and modified using other standard programs as required.
The application of non-parametric statistical method for an ALARA implementation
International Nuclear Information System (INIS)
Cho, Young Ho; Herr, Young Hoi
2003-01-01
The cost-effective reduction of Occupational Radiation Dose (ORD) at a nuclear power plant cannot be achieved without an extensive analysis of the accumulated ORD data of existing plants. Through this data analysis, it is necessary to identify which jobs at the plant repeatedly incur high ORD. In this study, the Percentile Rank Sum Method (PRSM), based on non-parametric statistical theory, is proposed to identify repetitive high-ORD jobs. As a case study, the method is applied to ORD data of maintenance and repair jobs at Kori units 3 and 4, pressurized water reactors with 950 MWe capacity that have been operated in Korea since 1986 and 1987, respectively. The results were verified and validated, and PRSM has been demonstrated to be an efficient method of analyzing the data.
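The abstract does not spell out the PRSM formula, so the following is a minimal sketch under one plausible reading: each job's dose is converted to a percentile rank within each outage, and the ranks are summed, so that jobs which are high in outage after outage float to the top. The `doses` matrix and job layout are invented for illustration.

```python
import numpy as np

def percentile_rank_sum(dose_matrix):
    """Score jobs by summing their percentile ranks across outages.

    dose_matrix[i, j] is the collective dose of job i during outage j.
    Jobs that incur high dose in outage after outage get the largest scores.
    (Ties are broken arbitrarily by argsort.)
    """
    n_jobs = dose_matrix.shape[0]
    # Double argsort turns each column into within-outage ranks (1 = lowest dose).
    ranks = dose_matrix.argsort(axis=0).argsort(axis=0) + 1
    return (ranks / n_jobs * 100.0).sum(axis=1)

# Hypothetical doses (person-mSv) for 4 maintenance jobs over 3 outages.
doses = np.array([[12.0, 15.0, 11.0],   # high in every outage: repetitive high ORD
                  [ 2.0,  1.5,  2.5],
                  [ 8.0,  0.5,  1.0],   # high only once
                  [ 4.0,  5.0,  6.0]])
scores = percentile_rank_sum(doses)
print(scores.argmax())  # -> 0: the first job is flagged for ALARA attention
```

A job high in a single outage (row 3) scores well below a job that is high every time (row 1), which is exactly the "repetitive" distinction the method targets.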
Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations
International Nuclear Information System (INIS)
Arimescu, V.E.; Heins, L.
2001-01-01
Advances in modeling fuel rod behavior and the accumulation of adequate experimental data have made possible the introduction of quantitative methods to estimate the uncertainty of predictions made with best-estimate fuel rod codes. The uncertainty range of the input variables is characterized by a truncated distribution, typically normal, lognormal, or uniform. While the distribution for fabrication parameters is defined to cover the design or fabrication tolerances, the distribution of modeling parameters is inferred from the experimental database consisting of separate effects tests and global tests. The final step of the methodology uses Monte Carlo random sampling of all relevant input variables and performs best-estimate code calculations to propagate these uncertainties, in order to evaluate the uncertainty range of outputs of interest for design analysis, such as internal rod pressure and fuel centerline temperature. The statistical method underlying this Monte Carlo sampling is non-parametric order statistics, which is perfectly suited to evaluating quantiles of populations with unknown distribution. The application of this method is straightforward in the case of one single fuel rod, when a 95/95 statement is applicable: 'with a probability of 95% and confidence level of 95%, the values of the output of interest are below a certain value'. The 0.95-quantile of the distribution of all possible values for one fuel rod is thus estimated with a statistical confidence of 95%. A more elaborate procedure is required if all the fuel rods in the core are being analyzed. In this case, the aim is to evaluate the following global statement: with 95% confidence level, the expected number of fuel rods not exceeding a certain value is all but a few of the fuel rods in the core. In both cases, the thresholds determined by the analysis should be below the safety acceptable design limit. An indirect
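The single-rod 95/95 recipe described above can be sketched as follows. The first-order Wilks sample size (59 runs for a one-sided bound taken as the sample maximum) follows from requiring 1 - gamma^n >= beta; `fuel_rod_code` is a hypothetical stand-in for one best-estimate code run, not any real code.

```python
import numpy as np

def wilks_n(gamma=0.95, beta=0.95):
    """Smallest number of runs n such that the sample maximum bounds the
    gamma-quantile with confidence beta: 1 - gamma**n >= beta."""
    n = 1
    while 1.0 - gamma**n < beta:
        n += 1
    return n

def fuel_rod_code(u):
    # Hypothetical stand-in for one best-estimate code run: maps a vector of
    # sampled inputs to an output of interest (e.g. internal rod pressure).
    return 10.0 + 2.0 * u[0] + u[1] ** 2

n = wilks_n()                       # 59 runs for a one-sided 95/95 statement
rng = np.random.default_rng(0)
outputs = [fuel_rod_code(rng.normal(size=2)) for _ in range(n)]
bound = max(outputs)                # compare this 95/95 bound to the design limit
```

The appeal of the approach is visible here: the run count depends only on gamma and beta, never on the number of uncertain inputs or on the (unknown) output distribution.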
Directory of Open Access Journals (Sweden)
Elias Chaibub Neto
Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. We adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts, instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the vectorized implementation on real and simulated data sets, bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward implementation based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same held in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased time spent generating weight matrices via multinomial sampling.
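A minimal sketch of the multinomial-weight formulation for Pearson's correlation (our own Python rendering, not the authors' R code): instead of resampling, each bootstrap replication reweights the sample moments with multinomial counts, so all B replications reduce to a few matrix products.

```python
import numpy as np

def vectorized_boot_corr(x, y, B, rng):
    """Bootstrap replications of Pearson's r without explicit resampling:
    multinomial counts act as weights in the sample-moment sums."""
    n = len(x)
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n   # (B, n) weight matrix
    mx, my = W @ x, W @ y                                     # weighted means
    mxx, myy, mxy = W @ (x * x), W @ (y * y), W @ (x * y)     # weighted 2nd moments
    return (mxy - mx * my) / np.sqrt((mxx - mx**2) * (myy - my**2))

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = x + rng.normal(size=100)
reps = vectorized_boot_corr(x, y, B=2000, rng=rng)
# reps holds 2000 bootstrap replications of the sample correlation coefficient
```

Weighting with counts/n is exactly equivalent to evaluating r on the corresponding resampled data set, because Pearson's r depends on the data only through sample moments.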
International Nuclear Information System (INIS)
Frepoli, Cesare; Oriani, Luca
2006-01-01
In recent years, non-parametric or order statistics methods have been widely used to assess the impact of the uncertainties within Best-Estimate LOCA evaluation models. The bounding of the uncertainties is achieved with a direct Monte Carlo sampling of the uncertainty attributes, with the minimum trial number selected to 'stabilize' the estimation of the critical output values (peak cladding temperature (PCT), local maximum oxidation (LMO), and core-wide oxidation (CWO)). A non-parametric order statistics uncertainty analysis was recently implemented within the Westinghouse Realistic Large Break LOCA evaluation model, also referred to as the 'Automated Statistical Treatment of Uncertainty Method' (ASTRUM). The implementation and interpretation of order statistics in safety analysis are not fully consistent within the industry, which has led to an extensive public debate among regulators and researchers that can be found in the open literature. The USNRC-approved Westinghouse method follows a rigorous implementation of order statistics theory, which leads to the execution of 124 simulations within a Large Break LOCA analysis. This is a solid approach which guarantees that a bounding value (at 95% probability) of the 95th percentile for each of the three 10 CFR 50.46 ECCS design acceptance criteria (PCT, LMO and CWO) is obtained. The objective of this paper is to provide additional insights into the ASTRUM statistical approach, with a more in-depth analysis of the pros and cons of order statistics and of the Westinghouse approach in implementing this statistical methodology. (authors)
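The 124-run figure can be reproduced from order statistics: when each of p outputs is bounded by its own sample maximum over N runs, the joint 95/95 confidence is a binomial tail sum (a Guba-Makai-style extension of Wilks' formula; the sketch below assumes this standard form).

```python
from math import comb

def runs_for_pct(p, gamma=0.95, beta=0.95):
    """Smallest N such that, when each of p outputs is bounded by its sample
    maximum over N runs, the joint statement holds with confidence >= beta:
    sum_{j=0}^{N-p} C(N, j) * gamma**j * (1 - gamma)**(N - j) >= beta."""
    N = p
    while sum(comb(N, j) * gamma**j * (1.0 - gamma)**(N - j)
              for j in range(N - p + 1)) < beta:
        N += 1
    return N

print(runs_for_pct(1))  # -> 59: single output, classic Wilks
print(runs_for_pct(3))  # -> 124: PCT, LMO and CWO bounded jointly, as in ASTRUM
```

With p = 1 the sum collapses to 1 - gamma^N and the classic 59-run result is recovered; with p = 3 the minimum grows to 124, matching the ASTRUM run count quoted above.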
Performances of non-parametric statistics in sensitivity analysis and parameter ranking
International Nuclear Information System (INIS)
Saltelli, A.
1987-01-01
Twelve parametric and non-parametric sensitivity analysis techniques are compared in the case of non-linear model responses. The test models used are taken from the long-term risk analysis for the disposal of high level radioactive waste in a geological formation. They describe the transport of radionuclides through a set of engineered and natural barriers from the repository to the biosphere and to man. The output data from these models are the dose rates affecting the maximum exposed individual of a critical group at a given point in time. All the techniques are applied to the output from the same Monte Carlo simulations, where a modified version of the Latin Hypercube method is used for the sample selection. Hypothesis testing is systematically applied to quantify the degree of confidence in the results given by the various sensitivity estimators. The estimators are ranked according to their robustness and stability, on the basis of two test cases. The conclusions are that no estimator can be considered the best from all points of view, and that the use of more than one estimator in sensitivity analysis is recommended.
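As one concrete example of a non-parametric estimator of the kind compared in such studies, a rank-based (Spearman) sensitivity measure with its hypothesis test can be sketched as follows. The three-input model is a toy stand-in with a deliberately non-linear response, not one of the repository transport models.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
# Toy stand-in for a barrier-transport model: 3 sampled inputs, non-linear dose.
X = rng.uniform(size=(500, 3))
dose = np.exp(3.0 * X[:, 0]) + 0.3 * X[:, 1]        # input 0 dominates, input 2 is inert

# Rank inputs by |Spearman rho|; the p-value is the hypothesis test used to
# quantify confidence in each sensitivity estimate.
rho_p = [spearmanr(X[:, k], dose) for k in range(3)]
ranking = sorted(range(3), key=lambda k: -abs(rho_p[k][0]))
print(ranking[0])  # -> 0: the first input is the dominant parameter
```

Because Spearman's rho works on ranks, it captures the strongly non-linear (but monotone) effect of the first input that a raw Pearson coefficient would understate.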
Inferential, non-parametric statistics to assess the quality of probabilistic forecast systems
Maia, A.H.N.; Meinke, H.B.; Lennox, S.; Stone, R.C.
2007-01-01
Many statistical forecast systems are available to interested users. To be useful for decision making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanism and its statistical manifestation have been firmly established, the forecasts must
2016-05-31
Final Report (15-Apr-2014 to 14-Jan-2015): Technical Topic 3.2.2.d Bayesian and Non-parametric Statistics: Integration of Neural...
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
Directory of Open Access Journals (Sweden)
Jinchao Feng
2018-03-01
Full Text Available We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
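A hedged sketch of gradient-based local sensitivity indices for a competitive Langmuir model: the functional form theta_i = K_i P_i / (1 + sum_j K_j P_j) is the textbook single-site version, and the parameter values and finite-difference scheme are ours, not the paper's.

```python
import numpy as np

P = np.array([1.0, 1.0])   # fixed partial pressures (hypothetical)

def coverage(logK):
    """Competitive Langmuir adsorption: theta_i = K_i P_i / (1 + sum_j K_j P_j)."""
    K = np.exp(logK)
    return K * P / (1.0 + K @ P)

def jacobian(f, x, h=1e-6):
    """Central finite-difference Jacobian: S[i, j] = d f_i / d x_j."""
    x = np.asarray(x, dtype=float)
    cols = []
    for j in range(len(x)):
        e = np.zeros_like(x)
        e[j] = h
        cols.append((f(x + e) - f(x - e)) / (2.0 * h))
    return np.column_stack(cols)

# Local sensitivity of coverages to log-equilibrium constants at a nominal point.
S = jacobian(coverage, np.log([2.0, 0.5]))
# S[i, j] tells how strongly species i's coverage responds to parameter j;
# correlations between the K's would additionally rotate/reweight these indices.
```

Differentiating with respect to log K (rather than K) makes the indices comparable across parameters of very different magnitude, which matters when the parameters come from quantum-scale estimates.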
Energy Technology Data Exchange (ETDEWEB)
Rohée, E. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Coulon, R., E-mail: romain.coulon@cea.fr [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Carrel, F. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Dautremer, T.; Barat, E.; Montagu, T. [CEA, LIST, Laboratoire de Modélisation et Simulation des Systèmes, F-91191 Gif-sur-Yvette (France); Normand, S. [CEA, DAM, Le Ponant, DPN/STXN, F-75015 Paris (France); Jammes, C. [CEA, DEN, Cadarache, DER/SPEx/LDCI, F-13108 Saint-Paul-lez-Durance (France)
2016-11-11
Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, difficulties remain when full energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the “iterative peak fitting deconvolution” method with a “nonparametric Bayesian deconvolution” approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex spectra are studied from IAEA benchmark protocol tests and from measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine tuning.
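A minimal illustration of the reference method, iterative peak fitting: overlapping Gaussian full-energy peaks with a shared width are fitted on a linear Compton background by least squares. The synthetic spectrum, peak positions, and amplitudes are invented; this is not the SINBAD code nor the exact industrial procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(E, a1, mu1, a2, mu2, sigma, b0, b1):
    """Two overlapping full-energy peaks (shared width) on a linear background."""
    g = lambda a, mu: a * np.exp(-0.5 * ((E - mu) / sigma) ** 2)
    return g(a1, mu1) + g(a2, mu2) + b0 + b1 * E

# Synthetic HPGe-like region: a strong and a weak peak folded together.
E = np.linspace(650.0, 670.0, 400)
rng = np.random.default_rng(3)
counts = rng.poisson(two_peaks(E, 5000.0, 658.0, 250.0, 661.0, 0.9, 40.0, -0.02))

p0 = (4000.0, 657.5, 400.0, 661.5, 1.0, 30.0, 0.0)   # rough initial guesses
popt, pcov = curve_fit(two_peaks, E, counts.astype(float), p0=p0)
net_area_weak = abs(popt[2] * popt[4]) * np.sqrt(2.0 * np.pi)
```

The 20:1 amplitude ratio between the folded peaks mimics the difficult case discussed above; the fitted net area of the weak peak is what an activity estimate would be built on.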
Statistical benchmark for BosonSampling
International Nuclear Information System (INIS)
Walschaers, Mattia; Mayer, Klaus; Buchleitner, Andreas; Kuipers, Jack; Urbina, Juan-Diego; Richter, Klaus; Tichy, Malte Christopher
2016-01-01
Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church–Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go far beyond mere bunching or anti-bunching effects. (fast track communication)
Statistical benchmarking in utility regulation: Role, standards and methods
International Nuclear Information System (INIS)
Newton Lowry, Mark; Getachew, Lullit
2009-01-01
Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly.
Benchmarks and statistics of entanglement dynamics
International Nuclear Information System (INIS)
Tiersch, Markus
2009-01-01
In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)
Benchmarks and statistics of entanglement dynamics
Energy Technology Data Exchange (ETDEWEB)
Tiersch, Markus
2009-09-04
In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)
DEFF Research Database (Denmark)
Linnet, Kristian
2005-01-01
Bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
Speaker Linking and Applications using Non-Parametric Hashing Methods
2016-09-08
Douglas Sturim and William M. Campbell, MIT Lincoln Laboratory, Lexington, MA. ... with many approaches [1, 2]. For this paper, we focus on using i-vectors [2], but the methods apply to any embedding. For the task of speaker QBE and ...
Bayesian non parametric modelling of Higgs pair production
Directory of Open Access Journals (Sweden)
Scarpa Bruno
2017-01-01
Full Text Available Statistical classification models are commonly used to separate a signal from a background. In this talk we face the problem of isolating the signal of Higgs pair production using the decay channel in which each boson decays into a pair of b-quarks. Typically in this context non-parametric methods are used, such as Random Forests or different types of boosting tools. We remain in the same non-parametric framework, but we propose to face the problem following a Bayesian approach. A Dirichlet process is used as prior for the random effects in a logit model which is fitted by leveraging the Polya-Gamma data augmentation. Refinements of the model include the insertion of P-splines into the simple model to relate explanatory variables to the response, and the use of Bayesian trees (BART) to describe the atoms in the Dirichlet process.
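The Dirichlet-process ingredient can be illustrated with the stick-breaking (GEM) construction of its weights; this is only the prior-sampling step, not the authors' full Polya-Gamma logit fit, and all numerical choices here are illustrative.

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking (GEM) draw of Dirichlet-process weights:
    v_k ~ Beta(1, alpha);  w_k = v_k * prod_{j<k} (1 - v_j)."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    return w / w.sum()          # renormalise the truncated tail

rng = np.random.default_rng(4)
weights = stick_breaking(alpha=2.0, n_atoms=50, rng=rng)
atoms = rng.normal(size=50)     # atom locations drawn from a base measure G0
effect = rng.choice(atoms, p=weights)   # one random-effect draw from the DP
```

Smaller concentration alpha puts most weight on a few atoms, which is what lets the DP prior cluster the random effects in the logit model rather than fixing their number in advance.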
Non-Parametric Estimation of Correlation Functions
DEFF Research Database (Denmark)
Brincker, Rune; Rytter, Anders; Krenk, Steen
In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are point...
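One of the three methods reviewed, estimation via the Fast Fourier Transform, can be sketched as follows: zero-padding to at least twice the record length removes the circular wrap-around, and dividing by n - tau gives the unbiased estimate. This is a generic sketch of the FFT technique, not the paper's exact estimator.

```python
import numpy as np

def autocorr_fft(x, max_lag):
    """Unbiased auto-correlation estimate R(tau) via zero-padded FFT.
    Padding to at least 2n-1 samples makes the circular products equal the
    linear correlation sums; dividing by (n - tau) removes the bias."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    nfft = 1 << (2 * n - 1).bit_length()          # next power of two >= 2n-1
    X = np.fft.rfft(x, nfft)
    r = np.fft.irfft(X * np.conj(X))[: max_lag + 1]
    return r / (n - np.arange(max_lag + 1))

rng = np.random.default_rng(5)
x = rng.normal(size=4096)
R = autocorr_fft(x, max_lag=50)
# R[0] estimates the variance; for white noise, R[tau] is near 0 at tau > 0
```

For long records this costs O(n log n) instead of the O(n * max_lag) of the direct method, which is the main reason the FFT route is attractive.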
Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.
MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C
2018-03-29
This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation, (a) benchmark value, (b) benchmark estimate, and (c) benchmark effect, are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation.
Parametric and Non-Parametric System Modelling
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg
1999-01-01
the focus is on combinations of parametric and non-parametric methods of regression. This combination can be in terms of additive models where e.g. one or more non-parametric term is added to a linear regression model. It can also be in terms of conditional parametric models where the coefficients...... considered. It is shown that adaptive estimation in conditional parametric models can be performed by combining the well known methods of local polynomial regression and recursive least squares with exponential forgetting. The approach used for estimation in conditional parametric models also highlights how...... networks is included. In this paper, neural networks are used for predicting the electricity production of a wind farm. The results are compared with results obtained using an adaptively estimated ARX-model. Finally, two papers on stochastic differential equations are included. In the first paper, among...
Meylianti S., Brigita
1999-01-01
Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. It is therefore important to know which kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...
Directory of Open Access Journals (Sweden)
Maria João Nunes
2005-03-01
Full Text Available In atmospheric aerosol sampling, it is inevitable that the air that carries particles is in motion, as a result of both externally driven wind and the sucking action of the sampler itself. High or low air flow sampling speeds may lead to significant particle size bias. The objective of this work is the validation of measurements enabling the comparison of species concentrations from both air flow sampling techniques. The presence of several outliers and an increase of residuals with concentration become obvious, requiring non-parametric methods, which are recommended for handling data that may not be normally distributed. This way, conversion factors are obtained for each of the various species under study using Kendall regression.
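"Kendall regression" is commonly realized as the Theil-Sen estimator (the median of pairwise slopes, closely tied to Kendall's tau). A sketch under that reading, with invented concentration data standing in for the two sampling techniques:

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(6)
# Hypothetical paired concentrations of one species measured by the two
# air-flow sampling techniques; the true conversion factor is set to 0.8.
high_flow = rng.lognormal(mean=1.0, sigma=0.5, size=60)
low_flow = 0.8 * high_flow + rng.normal(scale=0.05, size=60)
low_flow[:3] += 5.0          # a few gross outliers, as seen in the residuals

slope, intercept, lo, hi = theilslopes(low_flow, high_flow)
# slope (the median of all pairwise slopes) is robust to the outliers and
# serves as the conversion factor between the two techniques; (lo, hi) bounds it.
```

Unlike ordinary least squares, the median-of-slopes estimate is barely moved by the three outlying points, which is why a non-parametric fit is recommended for such data.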
Non-parametric smoothing of experimental data
International Nuclear Information System (INIS)
Kuketayev, A.T.; Pen'kov, F.M.
2007-01-01
Full text: Rapid processing of experimental data samples in nuclear physics often requires differentiation in order to find extrema. Therefore, even at the preliminary stage of data analysis, a range of noise reduction methods are used to smooth experimental data. There are many non-parametric smoothing techniques: interval averages, moving averages, exponential smoothing, etc. Nevertheless, it is more common to use a priori information about the behavior of the experimental curve to construct smoothing schemes based on least squares techniques. The advantage of the latter methodology is that the area under the curve can be preserved, which is equivalent to conservation of the total counting rate. The disadvantage of this approach is the lack of a priori information. For example, during data processing the sums of peaks unresolved by a detector are very often replaced with one peak, introducing uncontrolled errors in the determination of the physical quantities. Otherwise the problem can be addressed only by experienced personnel whose skills far exceed the challenge. We propose a set of non-parametric techniques which allows the use of any additional information on the nature of the experimental dependence. The method is based on the construction of a functional which includes both the experimental data and the a priori information. The minimum of this functional is reached on a non-parametric smoothed curve. Euler (Lagrange) differential equations are constructed for these curves, and their solutions are obtained analytically or numerically. The proposed approach allows for automated processing of nuclear physics data, eliminating the need for highly skilled laboratory personnel. The approach also makes it possible to obtain smoothing curves within a given confidence interval, e.g. according to the χ2 distribution. This approach is applicable when constructing smooth solutions of ill-posed problems, in particular when solving
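One concrete instance of such a functional is the Whittaker penalized least-squares smoother, where the a priori information is a smoothness penalty and the Euler (Lagrange) condition reduces to a linear system. This is our illustrative choice, not necessarily the authors' functional.

```python
import numpy as np

def whittaker_smooth(y, lam=100.0):
    """Minimise sum_i (y_i - z_i)^2 + lam * sum_i (z_{i-1} - 2 z_i + z_{i+1})^2.
    Setting the gradient to zero (the Euler equations of the functional)
    gives the linear system (I + lam * D'D) z = y."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)            # second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 200)
truth = np.sin(2.0 * np.pi * t)
noisy = truth + rng.normal(scale=0.2, size=200)
smooth = whittaker_smooth(noisy)
# The smoothed curve can now be differentiated safely to locate extrema.
```

The penalty weight lam plays the role of the a priori information: larger values enforce a smoother curve, and it can be tuned so the residuals are consistent with a chosen chi-squared confidence interval, as the text suggests.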
On Parametric (and Non-Parametric) Variation
Directory of Open Access Journals (Sweden)
Neil Smith
2009-11-01
Full Text Available This article raises the issue of the correct characterization of ‘Parametric Variation’ in syntax and phonology. After specifying their theoretical commitments, the authors outline the relevant parts of the Principles–and–Parameters framework, and draw a three-way distinction among Universal Principles, Parameters, and Accidents. The core of the contribution then consists of an attempt to provide identity criteria for parametric, as opposed to non-parametric, variation. Parametric choices must be antecedently known, and it is suggested that they must also satisfy seven individually necessary and jointly sufficient criteria. These are that they be cognitively represented, systematic, dependent on the input, deterministic, discrete, mutually exclusive, and irreversible.
von Hirschhausen, Christian R.; Cullmann, Astrid
2005-01-01
This paper applies parametric and non-parametric tests to assess the efficiency of electricity distribution companies in Germany. We address traditional issues in electricity sector benchmarking, such as the role of scale effects and optimal utility size, as well as new evidence specific to the situation in Germany. We use labour, capital, and peak load capacity as inputs, and units sold and the number of customers as outputs. The data cover 307 (out of 553) ...
A non-parametric method for correction of global radiation observations
DEFF Research Database (Denmark)
Bacher, Peder; Madsen, Henrik; Perers, Bengt
2013-01-01
in the observations are corrected. These are errors such as: tilt in the leveling of the sensor, shadowing from surrounding objects, clipping and saturation in the signal processing, and errors from dirt and wear. The method is based on a statistical non-parametric clear-sky model which is applied to both...
Benchmarking Usage Statistics in Collection Management Decisions for Serials
Tucker, Cory
2009-01-01
Usage statistics are an important metric for making decisions on serials. Although the University of Nevada, Las Vegas (UNLV) Libraries have been collecting usage statistics, the statistics had not frequently been used to make decisions and had not been included in collection development policy. After undergoing a collection assessment, the…
Beretta Sergio; Dossi Andrea; Grove Hugh
2000-01-01
Due to their particular nature, benchmarking methodologies tend to exceed the boundaries of management techniques and enter the territory of managerial culture. A culture that is also destined to break into the accounting area, not only by strongly supporting the possibility of setting targets and of measuring and comparing performance (an aspect that is already innovative and worthy of attention), but also by questioning one of the principles (or taboos) of the accounting or...
Statistical benchmarking for orthogonal electrostatic quantum dot qubit devices
Gamble, John; Frees, Adam; Friesen, Mark; Coppersmith, S. N.
2014-03-01
Quantum dots in semiconductor systems have emerged as attractive candidates for the implementation of quantum information processors because of the promise of scalability, manipulability, and integration with existing classical electronics. A limitation in current devices is that the electrostatic gates used for qubit manipulation exhibit strong cross-capacitance, presenting a barrier for practical scale-up. Here, we introduce a statistical framework for making precise the notion of orthogonality. We apply our method to analyze recently implemented designs at the University of Wisconsin-Madison that exhibit much greater orthogonality of control than was previously possible. We then apply our statistical modeling to future device designs, providing practical guidelines for devices to have robust control properties. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy Nuclear Security Administration under contract DE-AC04-94AL85000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressly or implied, of the US Government. This work was supported in part by the Laboratory Directed Research and Development program at Sandia National Laboratories, by ARO (W911NF-12-0607), and by the United States Department of Defense.
Non-Parametric Analysis of Rating Transition and Default Data
DEFF Research Database (Denmark)
Fledelius, Peter; Lando, David; Perch Nielsen, Jens
2004-01-01
We demonstrate the use of non-parametric intensity estimation - including construction of pointwise confidence sets - for analyzing rating transition data. We find that transition intensities away from the class studied here for illustration strongly depend on the direction of the previous move b...
Non-parametric analysis of production efficiency of poultry egg ...
African Journals Online (AJOL)
Non-parametric analysis of production efficiency of poultry egg farmers in Delta ... analysis of factors affecting the output of poultry farmers showed that stock ... should be put in place for farmers to learn the best farm practices carried out on the ...
Using non-parametric methods in econometric production analysis
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard; Henningsen, Arne
2012-01-01
Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify a functional form of the production function, of which the Cobb-Douglas and the Translog are the most common choices. If the functional form is misspecified, this results not only in biased parameter estimates, but also in biased measures which are derived from the parameters, such as elasticities. Therefore, we propose to use non-parametric econometric methods. First, these can be applied to verify the functional form used in parametric production analysis. Second, they can be directly used, e.g. by investigating the relationship between the elasticity of scale and the farm size. We use a balanced panel data set of 371 specialised crop farms for the years 2004-2007. A non-parametric specification test shows that neither the Cobb-Douglas function nor the Translog function is consistent with the "true" ...
Non-parametric estimation of the individual's utility map
Noguchi, Takao; Sanborn, Adam N.; Stewart, Neil
2013-01-01
Models of risky choice have attracted much attention in behavioural economics. Previous research has repeatedly demonstrated that individuals' choices are not well explained by expected utility theory, and a number of alternative models have been examined using carefully selected sets of choice alternatives. The model performance however, can depend on which choice alternatives are being tested. Here we develop a non-parametric method for estimating the utility map over the wide range of choi...
D.C. Blitz (David)
2011-01-01
Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.
Digital spectral analysis parametric, non-parametric and advanced methods
Castanié, Francis
2013-01-01
Digital Spectral Analysis provides a single source that offers complete coverage of the spectral analysis domain. This self-contained work includes details on advanced topics that are usually presented in scattered sources throughout the literature. The theoretical principles necessary for the understanding of spectral analysis are discussed in the first four chapters: fundamentals, digital signal processing, estimation in spectral analysis, and time-series models. An entire chapter is devoted to the non-parametric methods most widely used in industry. High resolution methods a...
International Nuclear Information System (INIS)
Carew, John F.; Finch, Stephen J.; Lois, Lambros
2003-01-01
The calculated >1-MeV pressure vessel fluence is used to determine the fracture toughness and integrity of the reactor pressure vessel. It is therefore of the utmost importance to ensure that the fluence prediction is accurate and unbiased. In practice, this assurance is provided by comparing the predictions of the calculational methodology with an extensive set of accurate benchmarks. A benchmarking database is used to provide an estimate of the overall average measurement-to-calculation (M/C) bias in the calculations. This average is used as an ad hoc multiplicative adjustment to the calculations to correct for the observed calculational bias. However, this average only provides a well-defined and valid adjustment of the fluence if the M/C data are homogeneous; i.e., the data are statistically independent and there is no correlation between subsets of M/C data. Typically, the identification of correlations between the errors in the database M/C values is difficult because the correlation is of the same magnitude as the random errors in the M/C data and varies substantially over the database. In this paper, an evaluation of a reactor dosimetry benchmark database is performed to determine the statistical validity of the adjustment to the calculated pressure vessel fluence. Physical mechanisms that could potentially introduce a correlation between the subsets of M/C ratios are identified and included in a multiple regression analysis of the M/C data. Rigorous statistical criteria are used to evaluate the homogeneity of the M/C data and determine the validity of the adjustment. For the database evaluated, the M/C data are found to be strongly correlated with dosimeter response threshold energy and dosimeter location (e.g., cavity versus in-vessel). It is shown that because of the inhomogeneity in the M/C data, for this database, the benchmark data do not provide a valid basis for adjusting the pressure vessel fluence. The statistical criteria and methods employed in...
kruX: matrix-based non-parametric eQTL discovery.
Qi, Jianlong; Asl, Hassan Foroughi; Björkegren, Johan; Michoel, Tom
2014-01-14
The Kruskal-Wallis test is a popular non-parametric statistical test for identifying expression quantitative trait loci (eQTLs) from genome-wide data due to its robustness against variations in the underlying genetic model and expression trait distribution, but testing billions of marker-trait combinations one-by-one can become computationally prohibitive. We developed kruX, an algorithm implemented in Matlab, Python and R that uses matrix multiplications to simultaneously calculate the Kruskal-Wallis test statistic for several millions of marker-trait combinations at once. KruX is more than ten thousand times faster than computing associations one-by-one on a typical human dataset. We used kruX and a dataset of more than 500k SNPs and 20k expression traits measured in 102 human blood samples to compare eQTLs detected by the Kruskal-Wallis test to eQTLs detected by the parametric ANOVA and linear model methods. We found that the Kruskal-Wallis test is more robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non-linear associations, but is more conservative for calling additive linear associations. kruX enables the use of robust non-parametric methods for massive eQTL mapping without the need for a high-performance computing infrastructure and is freely available from http://krux.googlecode.com.
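The matrix trick described here can be sketched in a few lines: rank-transform each trait once, then obtain the per-genotype rank sums for every marker-trait pair with a single matrix product. The sketch below is illustrative only, not the kruX implementation, and omits the tie correction:

```python
import numpy as np
from scipy.stats import rankdata

def kw_matrix(genotypes, traits):
    """Kruskal-Wallis statistics for all marker-trait pairs at once.

    genotypes: (m, n) int array of marker genotypes coded 0/1/2 for n samples.
    traits:    (t, n) float array of expression traits for the same samples.
    Returns an (m, t) array of H statistics (tie correction omitted).
    """
    m, n = genotypes.shape
    R = np.apply_along_axis(rankdata, 1, traits)        # rank each trait once
    H = np.zeros((m, traits.shape[0]))
    for g in (0, 1, 2):
        I = (genotypes == g).astype(float)              # (m, n) group indicators
        ng = I.sum(axis=1, keepdims=True)               # per-marker group sizes
        S = I @ R.T                                     # (m, t) rank sums per group
        with np.errstate(divide="ignore", invalid="ignore"):
            H += np.where(ng > 0, S**2 / ng, 0.0)
    return 12.0 / (n * (n + 1)) * H - 3.0 * (n + 1)
```

For continuous (tie-free) traits this agrees with computing each Kruskal-Wallis statistic pair-by-pair, while replacing the inner loop over millions of pairs with dense matrix products.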
A non-parametric framework for estimating threshold limit values
Directory of Open Access Journals (Sweden)
Ulm Kurt
2005-11-01
Background: To estimate a threshold limit value for a compound known to have harmful health effects, an 'elbow' threshold model is usually applied. We are interested in flexible non-parametric alternatives. Methods: We describe how a step-function model fitted by isotonic regression can be used to estimate threshold limit values. This method returns a set of candidate locations, and we discuss two algorithms to select the threshold among them: the reduced isotonic regression and an algorithm considering the closed family of hypotheses. We assess the performance of these two alternative approaches under different scenarios in a simulation study. We illustrate the framework by analysing data from a study conducted by the German Research Foundation aiming to set a threshold limit value for exposure to total dust at the workplace, as a causal agent for developing chronic bronchitis. Results: We demonstrate the use and the properties of the proposed methodology along with the results from an application. The method appears to detect the threshold with satisfactory success. However, its performance can be compromised by the low power to reject the constant-risk assumption when the true dose-response relationship is weak. Conclusion: The estimation of thresholds within the isotonic framework is conceptually simple and sufficiently powerful. Given that there is no gold-standard method for threshold value estimation, the proposed model provides a useful non-parametric alternative to the standard approaches and can corroborate or challenge their findings.
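The step-function fit at the core of such a framework is easy to prototype. A minimal sketch using scikit-learn's IsotonicRegression (an assumed stand-in for the authors' fitting routine) returns the candidate threshold locations as the jump points of the fitted step function; the subsequent selection step (reduced isotonic regression or closed testing) is not shown:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def threshold_candidates(dose, response):
    """Fit a monotone step function to dose-response data and return the dose
    levels where the fitted risk jumps: the candidate threshold locations.
    Selecting among them is not implemented in this sketch."""
    dose = np.asarray(dose, dtype=float)
    fitted = IsotonicRegression(increasing=True).fit_transform(dose, response)
    order = np.argsort(dose)
    d, f = dose[order], fitted[order]
    return d[1:][np.diff(f) > 1e-12]    # doses at which the step function rises
```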
Spurious Seasonality Detection: A Non-Parametric Test Proposal
Directory of Open Access Journals (Sweden)
Aurelio F. Bariviera
2018-01-01
This paper offers a general and comprehensive definition of the day-of-the-week effect. Using symbolic dynamics, we develop a unique test based on ordinal patterns in order to detect it. This test uncovers the fact that the so-called "day-of-the-week" effect is partly an artifact of the hidden correlation structure of the data. We present simulations based on artificial time series as well. While time series generated with long memory are prone to exhibit daily seasonality, pure white noise signals exhibit no pattern preference. Since ours is a non-parametric test, it requires no assumptions about the distribution of returns, so that it could be a practical alternative to conventional econometric tests. We also made an exhaustive application of the proposed technique to 83 stock indexes around the world. Finally, the paper highlights the relevance of symbolic analysis in economic time series studies.
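The ordinal-pattern machinery underlying such a test is compact. Under an i.i.d. null, every one of the m! order patterns of m consecutive observations is equally likely, so skewed pattern frequencies signal hidden temporal structure. A generic pattern-counting sketch (not the authors' exact statistic):

```python
import numpy as np
from itertools import permutations
from collections import Counter

def ordinal_pattern_counts(x, m=3):
    """Count the order patterns of m consecutive observations in series x.
    Under an i.i.d. null all m! patterns are equally likely; systematic
    deviations indicate temporal structure such as seasonality."""
    x = np.asarray(x, dtype=float)
    counts = Counter(tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1))
    return {p: counts.get(p, 0) for p in permutations(range(m))}
```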
Debt and growth: A non-parametric approach
Brida, Juan Gabriel; Gómez, David Matesanz; Seijas, Maria Nela
2017-11-01
In this study, we explore the dynamic relationship between public debt and economic growth by using a non-parametric approach based on data symbolization and clustering methods. The study uses annual data of general government consolidated gross debt-to-GDP ratio and gross domestic product for sixteen countries between 1977 and 2015. Using symbolic sequences, we introduce a notion of distance between the dynamical paths of different countries. Then, a Minimal Spanning Tree and a Hierarchical Tree are constructed from time series to help detecting the existence of groups of countries sharing similar economic performance. The main finding of the study appears for the period 2008-2016 when several countries surpassed the 90% debt-to-GDP threshold. During this period, three groups (clubs) of countries are obtained: high, mid and low indebted countries, suggesting that the employed debt-to-GDP threshold drives economic dynamics for the selected countries.
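The clustering step can be illustrated with SciPy: given any symmetric distance matrix between countries' dynamical paths (random points stand in for the symbolic-sequence distances here), the Minimal Spanning Tree is a single sparse-graph call:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Sketch: build a pairwise distance matrix for 6 hypothetical countries and
# extract the MST used to detect groups with similar economic performance.
rng = np.random.default_rng(3)
pts = rng.normal(size=(6, 4))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # distances
mst = minimum_spanning_tree(D).toarray()
n_edges = int((mst > 0).sum())          # a spanning tree on 6 nodes has 5 edges
```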
Multi-Directional Non-Parametric Analysis of Agricultural Efficiency
DEFF Research Database (Denmark)
Balezentis, Tomas
This thesis seeks to develop methodologies for the assessment of agricultural efficiency and to apply them to Lithuanian family farms. In particular, we focus on three objectives throughout the research: (i) to perform a fully non-parametric analysis of efficiency effects, (ii) to extend ... the Multi-Directional Efficiency Analysis approach. When the proposed models were employed to analyse empirical data on Lithuanian family farm performance, we saw substantial differences in the efficiencies associated with different inputs. In particular, assets appeared to be the least efficiently used input relative to labour, intermediate consumption and land (in some cases land was not treated as a discretionary input). These findings call for further research on the relationships among financial structure, investment decisions, and efficiency in Lithuanian family farms. Application of different techniques ...
De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric
2010-01-11
Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.
Hastuti, S.; Harijono; Murtini, E. S.; Fibrianto, K.
2018-03-01
This study investigates the use of parametric and non-parametric approaches for the sensory RATA (Rate-All-That-Apply) method. Ledre, a unique local food product of Bojonegoro, was used as the point of interest, and 319 panelists were involved in the study. The results showed that ledre is characterized by an easily crushed texture, stickiness in the mouth, a stinging sensation and ease of swallowing. It also has a strong banana flavour and a brown colour. Compared to eggroll and semprong, ledre shows more variation in taste as well as in roll length. As the RATA questionnaire is designed to collect categorical data, a non-parametric approach is the common statistical procedure. However, similar results were also obtained with the parametric approach, despite the non-normally distributed data. This suggests that the parametric approach can be applicable for consumer studies with large numbers of respondents, even though the data may not satisfy the assumptions of ANOVA (Analysis of Variance).
Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao
2010-10-01
There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after a stabilized process. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I. There are some measures that can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon can be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
Assessing T cell clonal size distribution: a non-parametric approach.
Directory of Open Access Journals (Sweden)
Bolkhovskaya, Olesya V; Zorin, Daniil Yu; Ivanchenko, Mikhail V
2014-01-01
The clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation and cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next-generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis who underwent treatment, we invariably find power-law scaling over several decades and, for the first time, calculate quantitatively meaningful values of the decay exponent. The exponent proves to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards the typical value afterwards. We discuss implications of the findings for the theoretical understanding and mathematical modeling of adaptive immunity.
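A decay exponent of a clone-size distribution can be estimated without binning. A minimal maximum-likelihood sketch (a continuous Hill-type estimator, offered as an illustrative stand-in for the authors' non-parametric procedure):

```python
import numpy as np

def powerlaw_exponent(sizes, xmin=1.0):
    """Maximum-likelihood estimate of the exponent alpha of a power-law tail
    p(x) ~ x**(-alpha) for x >= xmin (continuous approximation; illustrative
    stand-in, not the authors' estimator)."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / xmin))
```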
A local non-parametric model for trade sign inference
Blazejewski, Adam; Coggins, Richard
2005-03-01
We investigate a regularity in market order submission strategies for 12 stocks with large market capitalization on the Australian Stock Exchange. The regularity is evidenced by a predictable relationship between the trade sign (trade initiator), size of the trade, and the contents of the limit order book before the trade. We demonstrate this predictability by developing an empirical inference model to classify trades into buyer-initiated and seller-initiated. The model employs a local non-parametric method, k-nearest neighbor, which in the past was used successfully for chaotic time series prediction. The k-nearest neighbor with three predictor variables achieves an average out-of-sample classification accuracy of 71.40%, compared to 63.32% for the linear logistic regression with seven predictor variables. The result suggests that a non-linear approach may produce a more parsimonious trade sign inference model with a higher out-of-sample classification accuracy. Furthermore, for most of our stocks the observed regularity in market order submissions seems to have a memory of at least 30 trading days.
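The classification setup can be sketched with scikit-learn's k-nearest-neighbour classifier. The features below are synthetic stand-ins for the study's order-book predictors (trade size, book imbalance, and so on), so only the shape of the approach carries over:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative sketch: synthetic features and labels stand in for the
# study's order-book predictors and observed trade signs.
rng = np.random.default_rng(42)
n = 400
X = rng.normal(size=(n, 3))        # e.g. trade size, depth imbalance, spread
signal = X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n)
y = np.where(signal > 0, 1, -1)    # +1 buyer-initiated, -1 seller-initiated
clf = KNeighborsClassifier(n_neighbors=15).fit(X[:300], y[:300])
acc = float((clf.predict(X[300:]) == y[300:]).mean())   # out-of-sample accuracy
```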
Non-parametric Bayesian networks: Improving theory and reviewing applications
International Nuclear Information System (INIS)
Hanea, Anca; Morales Napoles, Oswaldo; Ababei, Dan
2015-01-01
Applications in various domains often lead to high dimensional dependence modelling. A Bayesian network (BN) is a probabilistic graphical model that provides an elegant way of expressing the joint distribution of a large number of interrelated variables. BNs have been successfully used to represent uncertain knowledge in a variety of fields. The majority of applications use discrete BNs, i.e. BNs whose nodes represent discrete variables. Integrating continuous variables in BNs is an area fraught with difficulty. Several methods that handle discrete-continuous BNs have been proposed in the literature. This paper concentrates only on one method called non-parametric BNs (NPBNs). NPBNs were introduced in 2004 and they have been or are currently being used in at least twelve professional applications. This paper provides a short introduction to NPBNs, a couple of theoretical advances, and an overview of applications. The aim of the paper is twofold: one is to present the latest improvements of the theory underlying NPBNs, and the other is to complement the existing overviews of BN applications with the NPBN applications. The latter opens the opportunity to discuss some difficulties that applications pose to the theoretical framework and in this way offers some NPBN modelling guidance to practitioners. - Highlights: • The paper gives an overview of the current NPBNs methodology. • We extend the NPBN methodology by relaxing the conditions of one of its fundamental theorems. • We propose improvements of the data mining algorithm for the NPBNs. • We review the professional applications of the NPBNs.
Discrete non-parametric kernel estimation for global sensitivity analysis
International Nuclear Information System (INIS)
Senga Kiessé, Tristan; Ventura, Anne
2016-01-01
This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. Now the discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
Directory of Open Access Journals (Sweden)
Sergio A. Alvarado
2010-12-01
Objective: To evaluate the predictive efficiency of two statistical models (one parametric and the other non-parametric) for predicting next-day critical episodes of PM10 particulate air pollution exceeding the daily air-quality standard in Santiago, Chile. Accurate prediction of such episodes would allow the health authorities to apply restrictive measures that reduce their severity and protect the community's health. Methods: We used the PM10 concentrations registered by a station of the MACAM-2 air-quality monitoring network (152 daily observations of 14 variables) and meteorological information gathered from 2001 to 2004. Parametric Gamma models were fitted using the statistical package STATA v11, and non-parametric models using a demo of the statistical software MARS v2.0 distributed by Salford Systems. Results: Both modelling approaches show a high correlation between observed and predicted values. The Gamma models achieve better hit rates than MARS for PM10 concentrations with values ...
Rank-based permutation approaches for non-parametric factorial designs.
Umlauft, Maria; Konietschke, Frank; Pauly, Markus
2017-11-01
Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal or even ordered categorical data in a unified way, and are based only on ranks. In this set-up Wald-type statistics and ANOVA-type statistics are the current state of the art. The first method is asymptotically exact but a rather liberal statistical testing procedure for small to moderate sample size, while the latter is only an approximation which does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite exactness property when data are exchangeable. The results of extensive simulation studies foster these theoretical findings. A real data set exemplifies its applicability. © 2017 The British Psychological Society.
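The permutation principle itself is simple to sketch: recompute a rank statistic under random relabelings of the pooled sample and compare with the observed value. The sketch below uses the plain Kruskal-Wallis statistic; the paper's studentized Wald- and ANOVA-type statistics differ:

```python
import numpy as np
from scipy.stats import kruskal

def permutation_pvalue(groups, n_perm=500, seed=0):
    """Permutation p-value for the Kruskal-Wallis statistic (sketch of the
    permutation principle, not the paper's studentized procedure)."""
    rng = np.random.default_rng(seed)
    obs = kruskal(*groups).statistic
    pooled = np.concatenate(groups)
    splits = np.cumsum([len(g) for g in groups])[:-1]
    exceed = 0
    for _ in range(n_perm):
        parts = np.split(rng.permutation(pooled), splits)
        if kruskal(*parts).statistic >= obs:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)   # add-one to avoid zero p-values
```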
Continuous/discrete non parametric Bayesian belief nets with UNICORN and UNINET
Cooke, R.M.; Kurowicka, D.; Hanea, A.M.; Morales Napoles, O.; Ababei, D.A.; Ale, B.J.M.; Roelen, A.
2007-01-01
Hanea et al. (2006) presented a method for quantifying and computing continuous/discrete non parametric Bayesian Belief Nets (BBN). Influences are represented as conditional rank correlations, and the joint normal copula enables rapid sampling and conditionalization. Further mathematical background
Kernel bandwidth estimation for non-parametric density estimation: a comparative study
CSIR Research Space (South Africa)
Van der Walt, CM
2013-12-01
We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
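Silverman's rule of thumb is one of the conventional bandwidth estimators such a comparison typically includes. A sketch for the univariate Gaussian-kernel case:

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a univariate Gaussian KDE:
    h = 0.9 * min(std, IQR/1.349) * n**(-1/5)."""
    x = np.asarray(x, dtype=float)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    sigma = min(x.std(ddof=1), iqr / 1.349)     # robust spread estimate
    return 0.9 * sigma * x.size ** (-1.0 / 5.0)
```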
Non-parametric early seizure detection in an animal model of temporal lobe epilepsy
Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.
2008-03-01
The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) were evaluated as a function of the sampling rate of EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited better overall performance in terms of its ability to detect a seizure with high optimality index value and high statistics in terms of sensitivity and specificity.
DEFF Research Database (Denmark)
Herrmann, Ivan Tengbjerg; Henningsen, Geraldine; Wood, Christian D.
2013-01-01
Quantitative methods exist for evaluating uncertainty (for example, Monte Carlo simulation), and such methods work very well when the AN is in full control of the data collection and model-building processes. In many cases, however, the AN is not in control of these processes. In this article we develop a simple ... method that a DM can employ in order to evaluate the process of decision support from a statistical point of view. We call this approach the "Statistical Value Chain" (SVC): a consecutive benchmarking checklist with eight steps that can be used to evaluate decision support from a statistical point of view.
A NON-PARAMETRIC APPROACH TO CONSTRAIN THE TRANSFER FUNCTION IN REVERBERATION MAPPING
International Nuclear Information System (INIS)
Li, Yan-Rong; Wang, Jian-Min; Bai, Jin-Ming
2016-01-01
Broad emission lines of active galactic nuclei stem from a spatially extended region (broad-line region, BLR) that is composed of discrete clouds and photoionized by the central ionizing continuum. The temporal behaviors of these emission lines are blurred echoes of continuum variations (i.e., reverberation mapping, RM) and directly reflect the structures and kinematic information of BLRs through the so-called transfer function (also known as the velocity-delay map). Based on the previous works of Rybicki and Press and Zu et al., we develop an extended, non-parametric approach to determine the transfer function for RM data, in which the transfer function is expressed as a sum of a family of relatively displaced Gaussian response functions. Therefore, arbitrary shapes of transfer functions associated with complicated BLR geometry can be seamlessly included, enabling us to relax the presumption of a specified transfer function frequently adopted in previous studies and to let it be determined by observation data. We formulate our approach in a previously well-established framework that incorporates the statistical modeling of continuum variations as a damped random walk process and takes into account long-term secular variations which are irrelevant to RM signals. The application to RM data shows the fidelity of our approach.
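The parameterization described here is straightforward to write down. The sketch below is illustrative, with freely chosen weights, centers, and a common width; it is not the authors' fitting code, which also models the continuum variations as a damped random walk:

```python
import numpy as np

def transfer_function(tau, weights, centers, width):
    """Transfer function expressed as a sum of displaced Gaussian responses;
    weights, centers and the common width are free quantities to be fit."""
    d = tau[:, None] - centers[None, :]
    basis = np.exp(-0.5 * (d / width) ** 2) / (width * np.sqrt(2.0 * np.pi))
    return basis @ weights

def predict_line(cont_t, cont_f, tau, weights, centers, width):
    """Emission-line light curve as the continuum convolved with the transfer
    function (simple Riemann-sum convolution over the lag grid)."""
    psi = transfer_function(tau, weights, centers, width)
    # continuum evaluated at lagged times by linear interpolation
    lagged = np.array([np.interp(cont_t - lag, cont_t, cont_f) for lag in tau])
    return (tau[1] - tau[0]) * psi @ lagged
```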
A NON-PARAMETRIC APPROACH TO CONSTRAIN THE TRANSFER FUNCTION IN REVERBERATION MAPPING
Energy Technology Data Exchange (ETDEWEB)
Li, Yan-Rong; Wang, Jian-Min [Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Road, Beijing 100049 (China); Bai, Jin-Ming, E-mail: liyanrong@mail.ihep.ac.cn [Yunnan Observatories, Chinese Academy of Sciences, Kunming 650011 (China)
2016-11-10
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-12-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
Impulse response identification with deterministic inputs using non-parametric methods
International Nuclear Information System (INIS)
Bhargava, U.K.; Kashyap, R.L.; Goodman, D.M.
1985-01-01
This paper addresses the problem of impulse response identification using non-parametric methods. Although the techniques developed herein apply to the truncated, untruncated, and the circulant models, we focus on the truncated model, which is useful in certain applications. Two methods of impulse response identification will be presented. The first is based on the minimization of the C_L statistic, which is an estimate of the mean-square prediction error; the second is a Bayesian approach. For both of these methods, we consider the effects of using both the identity matrix and the Laplacian matrix as weights on the energy in the impulse response. In addition, we present a method for estimating the effective length of the impulse response. Estimating the length is particularly important in the truncated case. Finally, we develop a method for estimating the noise variance at the output. Often, prior information on the noise variance is not available, and a good estimate is crucial to the success of estimating the impulse response with a non-parametric technique.
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories.
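To illustrate the kind of non-parametric CI the abstract discusses, a pointwise percentile bootstrap over a sample of 1D trajectories can be sketched as below. Note this naive version does not apply the multiple-comparisons (RFT-style) correction the paper advocates, and all names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci_1d(trajectories, n_boot=1000, alpha=0.05):
    # Pointwise percentile bootstrap CI for the mean trajectory.
    # (The 1D methods in the paper additionally control the family-wise
    # error across time nodes, e.g. via random field theory.)
    n = trajectories.shape[0]
    boot_means = np.empty((n_boot, trajectories.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample subjects with replacement
        boot_means[b] = trajectories[idx].mean(axis=0)
    lower = np.percentile(boot_means, 100.0 * alpha / 2.0, axis=0)
    upper = np.percentile(boot_means, 100.0 * (1.0 - alpha / 2.0), axis=0)
    return lower, upper

# toy dataset: 20 subjects, each a 101-node trajectory
data = rng.normal(size=(20, 101))
lo, hi = bootstrap_ci_1d(data)
```

The resampling unit is the whole trajectory (one subject), not individual time nodes, which is what makes the interval a 1D rather than a 0D construct.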
Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker
2012-08-01
Single channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the usage of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
Energy Technology Data Exchange (ETDEWEB)
Gonzalez-Manteiga, W.; Prada-Sanchez, J.M.; Fiestras-Janeiro, M.G.; Garcia-Jurado, I. (Universidad de Santiago de Compostela, Santiago de Compostela (Spain). Dept. de Estadistica e Investigacion Operativa)
1990-11-01
A statistical study of the dependence between various critical fusion temperatures of a certain kind of coal and its chemical components is carried out. As well as using classical dependence techniques (multiple, stepwise and PLS regression, principal components, canonical correlation, etc.) together with the corresponding inference on the parameters of interest, non-parametric regression and bootstrap inference are also performed. 11 refs., 3 figs., 8 tabs.
A non-parametric peak calling algorithm for DamID-Seq.
Directory of Open Access Journals (Sweden)
Renhua Li
Full Text Available Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of double sex (DSX), an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality check and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.
A non-parametric peak calling algorithm for DamID-Seq.
Li, Renhua; Hempel, Leonie U; Jiang, Tingbo
2015-01-01
Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of double sex (DSX), an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality check and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.
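The four peak-calling steps listed above can be caricatured on binned read counts. This is a toy sketch of the general idea, not the published NPPC implementation; the bin counts, pseudo-counts, and thresholds are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def call_peaks(treatment, control, n_boot=200, alpha=0.01):
    # 2) scale treatment reads to the control sequencing depth
    scale = control.sum() / treatment.sum()
    # pseudo-counts avoid division by zero in the fold change
    fold = (treatment * scale + 1.0) / (control + 1.0)
    # 1) resample the control against itself to build a null fold-change level
    null_max = [((rng.permutation(control) + 1.0) / (control + 1.0)).max()
                for _ in range(n_boot)]
    threshold = np.quantile(null_max, 1.0 - alpha)   # 4) significance threshold
    keep = treatment >= 5                            # 3) filter low-coverage bins
    return np.flatnonzero(keep & (fold > threshold))

control = np.full(100, 10.0)       # flat Dam-only background
treatment = np.full(100, 10.0)
treatment[40:45] = 100.0           # a synthetic binding site
peaks = call_peaks(treatment, control)
# peaks -> the five enriched bins 40..44
```

The essential non-parametric ingredient is that the significance threshold comes from resampling the control itself, not from an assumed background distribution.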
A Non-Parametric Surrogate-based Test of Significance for T-Wave Alternans Detection
Nemati, Shamim; Abdala, Omar; Bazán, Violeta; Yim-Yeh, Susie; Malhotra, Atul; Clifford, Gari
2010-01-01
We present a non-parametric adaptive surrogate test that allows for the differentiation of statistically significant T-Wave Alternans (TWA) from alternating patterns that can be solely explained by the statistics of noise. The proposed test is based on estimating the distribution of noise-induced alternating patterns in a beat sequence from a set of surrogate data derived from repeated reshuffling of the original beat sequence. Thus, in assessing the significance of the observed alternating patterns in the data, no assumptions are made about the underlying noise distribution. In addition, since the distribution of noise-induced alternans magnitudes is calculated separately for each sequence of beats within the analysis window, the method is robust to data non-stationarities in both noise and TWA. The proposed surrogate method for rejecting noise was compared to the standard noise rejection methods used with the Spectral Method (SM) and the Modified Moving Average (MMA) techniques. Using a previously described realistic multi-lead model of TWA, and real physiological noise, we demonstrate that the proposed approach reduces false TWA detections, while maintaining a lower missed TWA detection rate compared with all the other methods tested. A simple averaging-based TWA estimation algorithm was coupled with the surrogate significance testing and was evaluated on three public databases: the Normal Sinus Rhythm Database (NRSDB), the Chronic Heart Failure Database (CHFDB) and the Sudden Cardiac Death Database (SCDDB). Differences in TWA amplitudes between each database were evaluated at matched heart rate (HR) intervals from 40 to 120 beats per minute (BPM). Using the two-sample Kolmogorov-Smirnov test, we found that significant differences in TWA levels exist between each patient group at all decades of heart rates. The most marked difference was generally found at higher heart rates, and the new technique resulted in a larger margin of separability between patient populations than
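The reshuffling idea behind the surrogate test is easy to illustrate: permuting the beat order destroys any genuine beat-to-beat alternation while preserving the noise statistics, so the observed alternans magnitude can be compared against the shuffled distribution. The toy amplitudes and function names below are hypothetical, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def alternans_magnitude(beats):
    # simple averaging-based estimate: |mean(even beats) - mean(odd beats)|
    return abs(beats[0::2].mean() - beats[1::2].mean())

def surrogate_twa_test(beats, n_surrogates=1000):
    # Reshuffling destroys genuine alternation but keeps the noise
    # statistics, giving a null distribution for the observed magnitude.
    observed = alternans_magnitude(beats)
    null = np.array([alternans_magnitude(rng.permutation(beats))
                     for _ in range(n_surrogates)])
    return observed, float((null >= observed).mean())

# strongly alternating toy T-wave amplitudes plus noise: p should be small
beats = np.array([1.0, -1.0] * 32) + rng.normal(scale=0.1, size=64)
obs, p = surrogate_twa_test(beats)
```

Because the null distribution is rebuilt from each analysed beat sequence, no parametric assumption about the noise is needed, which is the point the abstract emphasises.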
Directory of Open Access Journals (Sweden)
Archer Kellie J
2008-02-01
Full Text Available Abstract Background With the popularity of DNA microarray technology, multiple groups of researchers have studied the gene expression of similar biological conditions. Different methods have been developed to integrate the results from various microarray studies, though most of them rely on distributional assumptions, such as the t-statistic based, mixed-effects model, or Bayesian model methods. However, often the sample size for each individual microarray experiment is small. Therefore, in this paper we present a non-parametric meta-analysis approach for combining data from independent microarray studies, and illustrate its application on two independent Affymetrix GeneChip studies that compared the gene expression of biopsies from kidney transplant recipients with chronic allograft nephropathy (CAN) to those with a normal functioning allograft. Results The simulation study comparing the non-parametric meta-analysis approach to a commonly used t-statistic based approach shows that the non-parametric approach has better sensitivity and specificity. For the application on the two CAN studies, we identified 309 distinct genes that were expressed differently in CAN. By applying Fisher's exact test to identify enriched KEGG pathways among those genes called differentially expressed, we found 6 KEGG pathways to be over-represented among the identified genes. We used the expression measurements of the identified genes as predictors to predict the class labels for 6 additional biopsy samples, and the predicted results all conformed to their pathologist-diagnosed class labels. Conclusion We present a new approach for combining data from multiple independent microarray studies. This approach is non-parametric and does not rely on any distributional assumptions. The rationale behind the approach is logically intuitive and can be easily understood by researchers not having advanced training in statistics. Some of the identified genes and pathways have been
DEFF Research Database (Denmark)
Tan, Qihua; Zhao, J H; Iachine, I
2004-01-01
This report investigates the power issue in applying the non-parametric linkage analysis of affected sib-pairs (ASP) [Kruglyak and Lander, 1995: Am J Hum Genet 57:439-454] to localize genes that contribute to human longevity using long-lived sib-pairs. Data were simulated by introducing a recently...... developed statistical model for measuring marker-longevity associations [Yashin et al., 1999: Am J Hum Genet 65:1178-1193], enabling direct power comparison between linkage and association approaches. The non-parametric linkage (NPL) scores estimated in the region harboring the causal allele are evaluated...... in case of a dominant effect. Although the power issue may depend heavily on the true genetic nature in maintaining survival, our study suggests that results from small-scale sib-pair investigations should be referred with caution, given the complexity of human longevity....
Assessing pupil and school performance by non-parametric and parametric techniques
de Witte, K.; Thanassoulis, E.; Simpson, G.; Battisti, G.; Charlesworth-May, A.
2010-01-01
This paper discusses the use of the non-parametric free disposal hull (FDH) and the parametric multi-level model (MLM) as alternative methods for measuring pupil and school attainment where hierarchical structured data are available. Using robust FDH estimates, we show how to decompose the overall
Low default credit scoring using two-class non-parametric kernel density estimation
CSIR Research Space (South Africa)
Rademeyer, E
2016-12-01
Full Text Available This paper investigates the performance of two-class classification on credit scoring data sets with low default ratios. The standard two-class parametric Gaussian and non-parametric Parzen classifiers are extended, using Bayes' rule, to include either...
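A minimal sketch of the baseline such work starts from: a two-class Parzen (kernel density) classifier combined through Bayes' rule, with a deliberately rare "default" class. All data, the bandwidth, and the function names are invented for illustration; the paper's extensions for low-default portfolios are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def parzen(data, x, h=0.4):
    # Parzen (kernel) density estimate with a Gaussian kernel of bandwidth h
    z = (x - data) / h
    return float(np.exp(-0.5 * z ** 2).mean() / (h * np.sqrt(2.0 * np.pi)))

good = rng.normal(0.0, 1.0, size=500)   # non-default applicants (toy scores)
bad = rng.normal(3.0, 1.0, size=40)     # defaults, deliberately rare

prior_bad = bad.size / (good.size + bad.size)

def p_bad(x):
    # Posterior P(default | x) via Bayes' rule on the two class densities
    num = parzen(bad, x) * prior_bad
    return num / (num + parzen(good, x) * (1.0 - prior_bad))
```

With only 40 "bad" cases the class-conditional density for defaults is poorly estimated, which is exactly the low-default problem the paper addresses.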
DEFF Research Database (Denmark)
Ramirez, José Rangel; Sørensen, John Dalsgaard
2011-01-01
This work illustrates the updating and incorporation of information in the assessment of fatigue reliability for offshore wind turbines. The new information, coming from external and condition monitoring, can be used to direct updating of the stochastic variables through a non-parametric Bayesian u...
Non-parametric production analysis of pesticides use in the Netherlands
Oude Lansink, A.G.J.M.; Silva, E.
2004-01-01
Many previous empirical studies on the productivity of pesticides suggest that pesticides are under-utilized in agriculture, despite the generally held belief that these inputs are substantially over-utilized. This paper uses data envelopment analysis (DEA) to calculate non-parametric measures of the
The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models
GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.
2008-01-01
In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.
Non-parametric tests of productive efficiency with errors-in-variables
Kuosmanen, T.K.; Post, T.; Scholtes, S.
2007-01-01
We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian [1985. Nonparametric analysis of optimizing behavior with measurement error. Journal of Econometrics 30(1/2), 445-458]. The test is based on the general Pareto-Koopmans
Non-parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods
DEFF Research Database (Denmark)
Høg, Esben
In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example, the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non-parametric estimation of these processes...
Non-Parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods
DEFF Research Database (Denmark)
Høg, Esben
2003-01-01
In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example, the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non-parametric estimation of these processes...
A non-parametric Bayesian approach to decompounding from high frequency data
Gugushvili, Shota; van der Meulen, F.H.; Spreij, Peter
2016-01-01
Given a sample from a discretely observed compound Poisson process, we consider non-parametric estimation of the density f0 of its jump sizes, as well as of its intensity λ0. We take a Bayesian approach to the problem and specify the prior on f0 as the Dirichlet location mixture of normal densities.
A comparative study of non-parametric models for identification of ...
African Journals Online (AJOL)
However, the frequency response method using random binary signals was good for unpredicted white noise characteristics and considered the best method for non-parametric system identification. The autoregressive external input (ARX) model was very useful for system identification, but on application, few input ...
A non-parametric hierarchical model to discover behavior dynamics from tracks
Kooij, J.F.P.; Englebienne, G.; Gavrila, D.M.
2012-01-01
We present a novel non-parametric Bayesian model to jointly discover the dynamics of low-level actions and high-level behaviors of tracked people in open environments. Our model represents behaviors as Markov chains of actions which capture high-level temporal dynamics. Actions may be shared by
Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, J.G.P.W.; Camps-Valls, Gustau; Moreno, José
2015-01-01
Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC),
Shi, Yang; Chinnaiyan, Arul M; Jiang, Hui
2015-07-01
High-throughput sequencing of transcriptomes (RNA-Seq) has become a powerful tool to study gene expression. Here we present an R package, rSeqNP, which implements a non-parametric approach to test for differential expression and splicing from RNA-Seq data. rSeqNP uses permutation tests to assess statistical significance and can be applied to a variety of experimental designs. By combining information across isoforms, rSeqNP is able to detect more differentially expressed or spliced genes from RNA-Seq data. The R package with its source code and documentation are freely available at http://www-personal.umich.edu/∼jianghui/rseqnp/. Contact: jianghui@umich.edu. Supplementary data are available at Bioinformatics online.
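The permutation-test machinery that rSeqNP relies on can be illustrated for a single gene with a two-group difference of means. The real package works on isoform-level statistics combined across a gene; this sketch (in Python rather than R, with invented data) shows only the core resampling idea.

```python
import numpy as np

rng = np.random.default_rng(7)

def permutation_test(x, y, n_perm=2000):
    # Two-group permutation test on |difference of means|; group labels are
    # exchangeable under the null, so reshuffling yields the null distribution.
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        count += abs(pooled[:x.size].mean() - pooled[x.size:].mean()) >= observed
    return (count + 1) / (n_perm + 1)   # add-one correction keeps p > 0

# two toy "expression" groups with a genuine shift
p = permutation_test(rng.normal(0.0, 1.0, 20), rng.normal(2.0, 1.0, 20))
```

No distributional form is assumed for the expression values, which is what makes the approach applicable across the variety of experimental designs the abstract mentions.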
Parametric and Non-Parametric Vibration-Based Structural Identification Under Earthquake Excitation
Pentaris, Fragkiskos P.; Fouskitakis, George N.
2014-05-01
The problem of modal identification in civil structures is of crucial importance, and thus has been receiving increasing attention in recent years. Vibration-based methods are quite promising as they are capable of identifying the structure's global characteristics, they are relatively easy to implement and they tend to be time effective and less expensive than most alternatives [1]. This paper focuses on the off-line structural/modal identification of civil (concrete) structures subjected to low-level earthquake excitations, under which, they remain within their linear operating regime. Earthquakes and their details are recorded and provided by the seismological network of Crete [2], which 'monitors' the broad region of south Hellenic arc, an active seismic region which functions as a natural laboratory for earthquake engineering of this kind. A sufficient number of seismic events are analyzed in order to reveal the modal characteristics of the structures under study, that consist of the two concrete buildings of the School of Applied Sciences, Technological Education Institute of Crete, located in Chania, Crete, Hellas. Both buildings are equipped with high-sensitivity and accuracy seismographs - providing acceleration measurements - established at the basement (structure's foundation) presently considered as the ground's acceleration (excitation) and at all levels (ground floor, 1st floor, 2nd floor and terrace). Further details regarding the instrumentation setup and data acquisition may be found in [3]. The present study invokes stochastic, both non-parametric (frequency-based) and parametric methods for structural/modal identification (natural frequencies and/or damping ratios). Non-parametric methods include Welch-based spectrum and Frequency response Function (FrF) estimation, while parametric methods, include AutoRegressive (AR), AutoRegressive with eXogeneous input (ARX) and Autoregressive Moving-Average with eXogeneous input (ARMAX) models[4, 5
Directory of Open Access Journals (Sweden)
Mayr Andreas
2012-01-01
Full Text Available Abstract Background The construction of prediction intervals (PIs for future body mass index (BMI values of individual children based on a recent German birth cohort study with n = 2007 children is problematic for standard parametric approaches, as the BMI distribution in childhood is typically skewed depending on age. Methods We avoid distributional assumptions by directly modelling the borders of PIs by additive quantile regression, estimated by boosting. We point out the concept of conditional coverage to prove the accuracy of PIs. As conditional coverage can hardly be evaluated in practical applications, we conduct a simulation study before fitting child- and covariate-specific PIs for future BMI values and BMI patterns for the present data. Results The results of our simulation study suggest that PIs fitted by quantile boosting cover future observations with the predefined coverage probability and outperform the benchmark approach. For the prediction of future BMI values, quantile boosting automatically selects informative covariates and adapts to the age-specific skewness of the BMI distribution. The lengths of the estimated PIs are child-specific and increase, as expected, with the age of the child. Conclusions Quantile boosting is a promising approach to construct PIs with correct conditional coverage in a non-parametric way. It is in particular suitable for the prediction of BMI patterns depending on covariates, since it provides an interpretable predictor structure, inherent variable selection properties and can even account for longitudinal data structures.
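Quantile regression and quantile boosting both minimise the check (pinball) loss. A quick numerical sketch shows that the empirical 0.9-quantile is exactly what this loss selects, which is why modelling PI borders this way needs no distributional assumption; the data and grid below are toys, not the BMI cohort.

```python
import numpy as np

def pinball_loss(y, pred, tau):
    # Check function: tau-weighted absolute deviations, the loss behind
    # additive quantile regression / quantile boosting.
    diff = y - pred
    return float(np.mean(np.maximum(tau * diff, (tau - 1.0) * diff)))

rng = np.random.default_rng(5)
y = rng.normal(size=10000)

# scan candidate constant predictions; the minimiser is the empirical quantile
grid = np.linspace(-3.0, 3.0, 601)
losses = [pinball_loss(y, g, 0.9) for g in grid]
best = float(grid[int(np.argmin(losses))])
# best lies near the 0.9-quantile of a standard normal (about 1.28)
```

Fitting the 0.025- and 0.975-loss minimisers as functions of covariates, as boosting does, yields the child-specific PI borders described above.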
2014-10-02
defined by Eqs. (3)-(4) (Greenwell & Finch, 2004) (Kar & Mohanty, 2006). The p value provides the metric for novelty scoring: p = Q_KS(z) = 2 Σ_{j=1}^∞ (−1)^(j−1) exp(−2 j² z²) ... provides early detection of degradation and the ability to score its significance in order to inform maintenance planning and consequently reduce disruption ... actionable information: signals are typically processed from raw measurements into a reduced-dimension novelty summary value that may be more easily ...
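The truncated series in the fragment above is the standard Kolmogorov-Smirnov tail probability, Q_KS(z) = 2 Σ_{j≥1} (−1)^(j−1) exp(−2j²z²), used there as a novelty-score p value. A direct implementation of the series (function name is ours):

```python
import math

def q_ks(z, terms=100):
    # Kolmogorov-Smirnov tail probability:
    #   Q_KS(z) = 2 * sum_{j>=1} (-1)^(j-1) * exp(-2 * j^2 * z^2)
    # Q_KS -> 1 as z -> 0+ and -> 0 as z grows.
    return 2.0 * sum((-1) ** (j - 1) * math.exp(-2.0 * j * j * z * z)
                     for j in range(1, terms + 1))

print(q_ks(1.36))   # close to 0.05, the classic 5% critical point
```

Small Q_KS values flag an observed KS distance too large to be explained by chance, i.e. a statistically significant novelty.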
Hadron Energy Reconstruction for ATLAS Barrel Combined Calorimeter Using Non-Parametrical Method
Kulchitskii, Yu A
2000-01-01
Hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter in the framework of the non-parametrical method is discussed. The non-parametrical method utilizes only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to fast energy reconstruction in a first-level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values and the fractional energy resolution is [(58±3)%·√GeV/√E + (2.5±0.3)%] ⊕ (1.7±0.2) GeV/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74±0.04. Results of a study of the longitudinal hadronic shower development are also presented.
DEFF Research Database (Denmark)
Carrao, Hugo; Sepulcre, Guadalupe; Horion, Stéphanie Marie Anne F
2013-01-01
This study evaluates the relationship between the frequency and duration of meteorological droughts and the subsequent temporal changes on the quantity of actively photosynthesizing biomass (greenness) estimated from satellite imagery on rainfed croplands in Latin America. An innovative non-parametric...... and non-supervised approach, based on the Fisher-Jenks optimal classification algorithm, is used to identify multi-scale meteorological droughts on the basis of empirical cumulative distributions of 1, 3, 6, and 12-monthly precipitation totals. As input data for the classifier, we use the gridded GPCC...... for the period between 1998 and 2010. The time-series analysis of vegetation greenness is performed during the growing season with a non-parametric method, namely the seasonal Relative Greenness (RG) of spatially accumulated fAPAR. The Global Land Cover map of 2000 and the GlobCover maps of 2005/2006 and 2009...
A simple non-parametric goodness-of-fit test for elliptical copulas
Directory of Open Access Journals (Sweden)
Jaser Miriam
2017-12-01
Full Text Available In this paper, we propose a simple non-parametric goodness-of-fit test for elliptical copulas of any dimension. It is based on the equality of Kendall's tau and Blomqvist's beta for all bivariate margins. The nominal level and power of the proposed test are investigated in a Monte Carlo study. An empirical application illustrates our goodness-of-fit test at work.
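The test rests on the fact that, for elliptical copulas, Kendall's tau and Blomqvist's beta coincide for every bivariate margin (for a Gaussian copula both equal (2/π)·arcsin(ρ)). A sketch of the two sample estimators on simulated Gaussian-copula data follows; the formal test statistic and its null distribution are in the paper, not here.

```python
import numpy as np

rng = np.random.default_rng(3)

def kendalls_tau(x, y):
    # O(n^2) pairwise concordance count; fine for a sketch
    n = x.size
    s = 0.0
    for i in range(n - 1):
        s += np.sum(np.sign(x[i] - x[i + 1:]) * np.sign(y[i] - y[i + 1:]))
    return 2.0 * s / (n * (n - 1.0))

def blomqvists_beta(x, y):
    # concordance of signs relative to the componentwise medians
    return float(np.mean(np.sign((x - np.median(x)) * (y - np.median(y)))))

# bivariate Gaussian sample: both estimators target (2/pi) * arcsin(rho)
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=4000)
tau = kendalls_tau(z[:, 0], z[:, 1])
beta = blomqvists_beta(z[:, 0], z[:, 1])
```

A large gap between the two estimates on some bivariate margin is evidence against the elliptical-copula hypothesis, which is what the proposed test formalises.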
Bootstrapping the economy -- a non-parametric method of generating consistent future scenarios
Müller, Ulrich A; Bürgi, Roland; Dacorogna, Michel M
2004-01-01
The fortune and the risk of a business venture depend on the future course of the economy. There is a strong demand for economic forecasts and scenarios that can be applied to planning and modeling. While there is an ongoing debate on modeling economic scenarios, the bootstrapping (or resampling) approach presented here has several advantages. As a non-parametric method, it directly relies on past market behaviors rather than debatable assumptions on models and parameters. Simultaneous dep...
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data
DEFF Research Database (Denmark)
Tan, Qihua; Thomassen, Mads; Burton, Mark
2017-01-01
the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis could be a useful and efficient tool for analyzing microarray...... time-course data and for exploring the complex relationships in the omics data for studying their association with disease and health....
Directory of Open Access Journals (Sweden)
Ibsen Chivatá Cárdenas
2008-05-01
Full Text Available This article presents a rainfall model constructed by applying non-parametric modelling and imprecise probabilities; these tools were used because there was not enough homogeneous information in the study area. The area's hydrological information regarding rainfall was scarce and existing hydrological time series were not uniform. A distributed extended rainfall model was constructed from so-called probability boxes (p-boxes), multinomial probability distributions and confidence intervals (a friendly algorithm was constructed for non-parametric modelling by combining the last two tools). This model confirmed the high level of uncertainty involved in local rainfall modelling. Uncertainty encompassed the whole range (domain) of probability values, thereby showing the severe limitations on information and leading to the conclusion that a detailed estimation of probability would lead to significant error. Nevertheless, relevant information was extracted; it was estimated that the maximum daily rainfall threshold (70 mm) would be surpassed at least once every three years, as would the magnitude of uncertainty affecting hydrological parameter estimation. This paper's conclusions may be of interest to non-parametric modellers and decision-makers, as such modelling and imprecise probability represent an alternative for hydrological variable assessment and maybe an obligatory procedure in the future. Their potential lies in treating scarce information, representing a robust modelling strategy for non-seasonal stochastic modelling conditions.
Non-parametric Tuning of PID Controllers A Modified Relay-Feedback-Test Approach
Boiko, Igor
2013-01-01
The relay feedback test (RFT) has become a popular and efficient tool used in process identification and automatic controller tuning. Non-parametric Tuning of PID Controllers couples new modifications of the classical RFT with application-specific optimal tuning rules to form a non-parametric method of test-and-tuning. Test and tuning are coordinated through a set of common parameters so that a PID controller can obtain the desired gain or phase margins in a system exactly, even with unknown process dynamics. The concept of process-specific optimal tuning rules in the non-parametric setup, with corresponding tuning rules for flow, level, pressure, and temperature control loops, is presented in the text. Common problems of tuning accuracy based on parametric and non-parametric approaches are addressed. In addition, the text treats the parametric approach to tuning based on the modified RFT approach and the exact model of oscillations in the system under test using the locus of a perturbed relay system (LPRS) meth...
Pestman, Wiebe R
2009-01-01
This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.
Directory of Open Access Journals (Sweden)
Alicja P. Sobańtka
2014-01-01
Full Text Available Extended statistical entropy analysis (eSEA) is used to assess the nitrogen (N) removal performance of the wastewater treatment (WWT) simulation software, the Benchmarking Simulation Model No. 2 (BSM No. 2). Six simulations with three different types of wastewater are carried out, which vary in the dissolved oxygen concentration (O2,diss) during the aerobic treatment. N2O emissions generated during denitrification are included in the model. The N-removal performance is expressed as reduction in statistical entropy, ΔH, compared to the hypothetical reference situation of direct discharge of the wastewater into the river. The parameters chemical and biological oxygen demand (COD, BOD) and suspended solids (SS) are analogously expressed in terms of reduction of COD, BOD, and SS, compared to a direct discharge of the wastewater to the river (ΔEQrest). The cleaning performance is expressed as ΔEQnew, the weighted average of ΔH and ΔEQrest. The results show that ΔEQnew is a more comprehensive indicator of the cleaning performance because, in contrast to the traditional effluent quality index (EQ), it considers the characteristics of the wastewater, includes all N-compounds and their distribution in the effluent, the off-gas, and the sludge. Furthermore, it is demonstrated that realistically expectable N2O emissions have only a moderate impact on ΔEQnew.
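The core quantity behind entropy-based performance indicators of this kind is a Shannon entropy over the distribution of a substance (here nitrogen) across output flows. The sketch below is a simplified stand-in for eSEA, which additionally weights flows by mass and concentration; the function names and the reduction indicator are illustrative, not the paper's exact ΔH definition.

```python
import math

def shannon_entropy(shares):
    """Shannon entropy (bits) of a substance's distribution across flows.
    `shares` are the fractions of the total load in each flow (summing to 1)."""
    return -sum(s * math.log2(s) for s in shares if s > 0)

def relative_entropy_reduction(h_treated, h_reference):
    """A Delta-H-style indicator: relative entropy reduction of the treated
    scenario versus a reference scenario (hypothetical simplification)."""
    return 1.0 - h_treated / h_reference
```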
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration
Comparative Study of Parametric and Non-parametric Approaches in Fault Detection and Isolation
DEFF Research Database (Denmark)
Katebi, S.D.; Blanke, M.; Katebi, M.R.
This report describes a comparative study between two approaches to fault detection and isolation in dynamic systems. The first approach uses a parametric model of the system. The main components of such techniques are residual and signature generation for processing and analyzing. The second...... approach is non-parametric in the sense that the signature analysis is only dependent on the frequency or time domain information extracted directly from the input-output signals. Based on these approaches, two different fault monitoring schemes are developed where the feature extraction and fault decision...
Kerschbamer, Rudolf
2015-05-01
This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.
Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben
2017-06-06
Modeling complex time-course patterns is a challenging issue in microarray study due to complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature in the generalized correlation analysis could be a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in the omics data for studying their association with disease and health.
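Daniels' generalized correlation coefficient, the general form behind statistics of this family, unifies Pearson, Spearman and Kendall coefficients through a pairwise score function; the sketch below illustrates that general form only, not the paper's specific score choices for time-course data.

```python
def generalized_corr(x, y, score):
    """Daniels' generalized correlation: with score(u, v) = v - u it reduces
    to Pearson's r; with score(u, v) = sign(v - u) it reduces to Kendall's tau."""
    n = len(x)
    num = sa = sb = 0.0
    for i in range(n):
        for j in range(n):
            a = score(x[i], x[j])  # pairwise score on the first variable
            b = score(y[i], y[j])  # pairwise score on the second variable
            num += a * b
            sa += a * a
            sb += b * b
    return num / (sa * sb) ** 0.5
```

For perfectly concordant data both the Kendall-type and Pearson-type scores return 1.0.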
Non-parametric system identification from non-linear stochastic response
DEFF Research Database (Denmark)
Rüdinger, Finn; Krenk, Steen
2001-01-01
An estimation method is proposed for identification of non-linear stiffness and damping of single-degree-of-freedom systems under stationary white noise excitation. Non-parametric estimates of the stiffness and damping along with an estimate of the white noise intensity are obtained by suitable...... of the energy at mean-level crossings, which yields the damping relative to white noise intensity. Finally, an estimate of the noise intensity is extracted by estimating the absolute damping from the autocovariance functions of a set of modified phase plane variables at different energy levels. The method...
Voorhoeve, R.J.; van der Maas, A.; Oomen, T.A.J.
2018-01-01
Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF
Measuring energy performance with sectoral heterogeneity: A non-parametric frontier approach
International Nuclear Information System (INIS)
Wang, H.; Ang, B.W.; Wang, Q.W.; Zhou, P.
2017-01-01
Evaluating economy-wide energy performance is an integral part of assessing the effectiveness of a country's energy efficiency policy. Non-parametric frontier approach has been widely used by researchers for such a purpose. This paper proposes an extended non-parametric frontier approach to studying economy-wide energy efficiency and productivity performances by accounting for sectoral heterogeneity. Relevant techniques in index number theory are incorporated to quantify the driving forces behind changes in the economy-wide energy productivity index. The proposed approach facilitates flexible modelling of different sectors' production processes, and helps to examine sectors' impact on the aggregate energy performance. A case study of China's economy-wide energy efficiency and productivity performances in its 11th five-year plan period (2006–2010) is presented. It is found that sectoral heterogeneities in terms of energy performance are significant in China. Meanwhile, China's economy-wide energy productivity increased slightly during the study period, mainly driven by the technical efficiency improvement. A number of other findings have also been reported. - Highlights: • We model economy-wide energy performance by considering sectoral heterogeneity. • The proposed approach can identify sectors' impact on the aggregate energy performance. • Obvious sectoral heterogeneities are identified in evaluating China's energy performance.
MEASURING DARK MATTER PROFILES NON-PARAMETRICALLY IN DWARF SPHEROIDALS: AN APPLICATION TO DRACO
International Nuclear Information System (INIS)
Jardel, John R.; Gebhardt, Karl; Fabricius, Maximilian H.; Williams, Michael J.; Drory, Niv
2013-01-01
We introduce a novel implementation of orbit-based (or Schwarzschild) modeling that allows dark matter density profiles to be calculated non-parametrically in nearby galaxies. Our models require no assumptions to be made about velocity anisotropy or the dark matter profile. The technique can be applied to any dispersion-supported stellar system, and we demonstrate its use by studying the Local Group dwarf spheroidal galaxy (dSph) Draco. We use existing kinematic data at larger radii and also present 12 new radial velocities within the central 13 pc obtained with the VIRUS-W integral field spectrograph on the 2.7 m telescope at McDonald Observatory. Our non-parametric Schwarzschild models find strong evidence that the dark matter profile in Draco is cuspy for 20 ≤ r ≤ 700 pc. The profile for r ≥ 20 pc is well fit by a power law with slope α = –1.0 ± 0.2, consistent with predictions from cold dark matter simulations. Our models confirm that, despite its low baryon content relative to other dSphs, Draco lives in a massive halo.
Robust non-parametric one-sample tests for the analysis of recurrent events.
Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia
2010-12-30
One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population. Copyright © 2010 John Wiley & Sons, Ltd.
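As a hedged sketch of the basic idea, the standardized distance between observed and expected event counts under a Poisson reference rate gives a simple one-sample z statistic. The paper's tests are more general (weighted, robust and stratified versions over the marginal mean function), so this is only the simplest unweighted special case.

```python
import math

def one_sample_event_test(observed, expected):
    """Standardized distance between observed and expected event counts.
    Under a Poisson reference process Var(N) = E[N], so z = (O - E)/sqrt(E);
    a two-sided p-value follows from the normal approximation."""
    z = (observed - expected) / math.sqrt(expected)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p
```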
Non-parametric transformation for data correlation and integration: From theory to practice
Energy Technology Data Exchange (ETDEWEB)
Datta-Gupta, A.; Xue, Guoping; Lee, Sang Heon [Texas A&M Univ., College Station, TX (United States)
1997-08-01
The purpose of this paper is two-fold. First, we introduce the use of non-parametric transformations for correlating petrophysical data during reservoir characterization. Such transformations are completely data driven and do not require an a priori functional relationship between response and predictor variables, as is required by traditional multiple regression. The transformations are very general, computationally efficient and can easily handle mixed data types, for example continuous variables such as porosity and permeability, and categorical variables such as rock type and lithofacies. The power of the non-parametric transformation techniques for data correlation has been illustrated through synthetic and field examples. Second, we utilize these transformations to propose a two-stage approach for data integration during heterogeneity characterization. The principal advantages of our approach over traditional cokriging or cosimulation methods are: (1) it does not require a linear relationship between primary and secondary data, (2) it exploits the secondary information to its fullest potential by maximizing the correlation between the primary and secondary data, (3) it can be easily applied to cases where several types of secondary or soft data are involved, and (4) it significantly reduces variance function calculations and thus greatly facilitates non-Gaussian cosimulation. We demonstrate the data integration procedure using synthetic and field examples. The field example involves estimation of pore-footage distribution using well data and multiple seismic attributes.
Ferrarini, Luca; Veer, Ilya M; van Lew, Baldur; Oei, Nicole Y L; van Buchem, Mark A; Reiber, Johan H C; Rombouts, Serge A R B; Milles, J
2011-06-01
In recent years, graph theory has been successfully applied to study functional and anatomical connectivity networks in the human brain. Most of these networks have shown small-world topological characteristics: high efficiency in long distance communication between nodes, combined with highly interconnected local clusters of nodes. Moreover, functional studies performed at high resolutions have presented convincing evidence that resting-state functional connectivity networks exhibit (exponentially truncated) scale-free behavior. Such evidence, however, was mostly presented qualitatively, in terms of linear regressions of the degree distributions on log-log plots. Even when quantitative measures were given, these were usually limited to the r² correlation coefficient. However, the r² statistic is not an optimal estimator of explained variance, when dealing with (truncated) power-law models. Recent developments in statistics have introduced new non-parametric approaches, based on the Kolmogorov-Smirnov test, for the problem of model selection. In this work, we have built on this idea to statistically tackle the issue of model selection for the degree distribution of functional connectivity at rest. The analysis, performed at voxel level and in a subject-specific fashion, confirmed the superiority of a truncated power-law model, showing high consistency across subjects. Moreover, the most highly connected voxels were found to be consistently part of the default mode network. Our results provide statistically sound support to the evidence previously presented in literature for a truncated power-law model of resting-state functional connectivity. Copyright © 2010 Elsevier Inc. All rights reserved.
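The Kolmogorov-Smirnov distance underlying these model-selection approaches compares the empirical CDF of the observed degrees with a candidate model's CDF. A minimal sketch of that distance follows; it is not the full Clauset-style power-law fitting procedure, which additionally estimates the cutoff and compares models via likelihood ratios.

```python
def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov distance between an empirical sample and a model
    CDF, taking the supremum over both sides of each empirical step."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d
```

The model with the smallest KS distance (or the one not rejected by a KS-based goodness-of-fit test) is preferred over an r-squared comparison on log-log plots.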
Cliff's Delta Calculator: A non-parametric effect size program for two groups of observations
Directory of Open Access Journals (Sweden)
Guillermo Macbeth
2011-05-01
Full Text Available The Cliff's Delta statistic is an effect size measure that quantifies the amount of difference between two non-parametric variables beyond p-values interpretation. This measure can be understood as a useful complementary analysis for the corresponding hypothesis testing. During the last two decades the use of effect size measures has been strongly encouraged by methodologists and leading institutions of behavioral sciences. The aim of this contribution is to introduce the Cliff's Delta Calculator software that performs such analysis and offers some interpretation tips. Differences and similarities with the parametric case are analysed and illustrated. The implementation of this free program is fully described and compared with other calculators. Alternative algorithmic approaches are mathematically analysed and a basic linear algebra proof of their equivalence is formally presented. Two worked examples in cognitive psychology are discussed. A visual interpretation of Cliff's Delta is suggested. Availability, installation and applications of the program are presented and discussed.
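Cliff's delta itself reduces to a pairwise dominance count, delta = (#{x > y} - #{x < y}) / (m*n), ranging from -1 to 1 with 0 indicating stochastic equality. A minimal reference implementation (a sketch, not the program described above):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) estimated over all pairs (x, y)."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))
```

Two fully separated groups give delta = +/-1, and identical groups give 0.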
Statistical Analysis of Data for Timber Strengths
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Hoffmeyer, P.
Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distributions types have been investigated: Normal, Lognormal, 2 parameter Weibull and 3-parameter Weibull...
Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
Kumar, Sricharan; Srivastava, Ashok N.
2012-01-01
Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
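A hedged sketch of the idea: resample the training data, refit a non-parametric regressor (here a k-nearest-neighbour average, standing in for whatever model is used), and take percentiles of the resampled predictions. This simplified pairs-bootstrap captures model uncertainty only; the paper's procedure also accounts for the noise distribution around the regression function.

```python
import random
import statistics

def knn_predict(x_train, y_train, x0, k=5):
    """Simple non-parametric regressor: average of the k nearest neighbours."""
    idx = sorted(range(len(x_train)), key=lambda i: abs(x_train[i] - x0))[:k]
    return statistics.mean(y_train[i] for i in idx)

def bootstrap_pi(x_train, y_train, x0, k=5, B=200, alpha=0.1, seed=0):
    """Percentile bootstrap interval for a kNN regression prediction at x0."""
    rng = random.Random(seed)
    n = len(x_train)
    preds = []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]  # resample pairs with replacement
        xb = [x_train[i] for i in idx]
        yb = [y_train[i] for i in idx]
        preds.append(knn_predict(xb, yb, x0, k))
    preds.sort()
    lo = preds[int((alpha / 2) * B)]
    hi = preds[int((1 - alpha / 2) * B) - 1]
    return lo, hi
```

An observation falling outside such an interval, conditioned on its input, would be flagged as a candidate anomaly.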
A non-parametric consistency test of the ΛCDM model with Planck CMB data
Energy Technology Data Exchange (ETDEWEB)
Aghamousa, Amir; Shafieloo, Arman [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)
2017-09-01
Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
International Nuclear Information System (INIS)
Morio, Jerome
2011-01-01
Importance sampling (IS) is a useful simulation technique for estimating critical probabilities with better accuracy than Monte Carlo methods. It consists of generating random weighted samples from an auxiliary distribution rather than the distribution of interest. The crucial part of this algorithm is the choice of an efficient auxiliary PDF, which must be able to generate the rare random events of interest more frequently. The optimisation of this auxiliary distribution is often very difficult in practice. In this article, we propose to approximate the IS optimal auxiliary density with non-parametric adaptive importance sampling (NAIS). We apply this technique to the probability estimation of spatial launcher impact position, as this has become an increasingly important issue in the field of aeronautics.
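The fixed-auxiliary special case of importance sampling can be sketched as follows for a standard normal tail probability; NAIS replaces the hand-picked auxiliary density used here with a non-parametric (kernel) estimate that adapts over iterations. The shift value and sample size are illustrative choices.

```python
import math
import random

def is_rare_prob(t, mu_aux, n=100000, seed=1):
    """Importance-sampling estimate of P(X > t) for X ~ N(0, 1), sampling
    from the auxiliary density N(mu_aux, 1) shifted toward the rare region."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu_aux, 1.0)
        if x > t:
            # likelihood ratio phi(x) / phi(x - mu_aux); normalisations cancel
            w = math.exp(-0.5 * x * x + 0.5 * (x - mu_aux) ** 2)
            total += w
    return total / n
```

With the auxiliary mean placed at the threshold, the estimate of P(X > 3) is close to the true value of about 1.35e-3 with far fewer samples than plain Monte Carlo would need.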
Performance of non-parametric algorithms for spatial mapping of tropical forest structure
Directory of Open Access Journals (Sweden)
Liang Xu
2016-08-01
Full Text Available Abstract Background Mapping tropical forest structure is a critical requirement for accurate estimation of emissions and removals from land use activities. With the availability of a wide range of remote sensing imagery of vegetation characteristics from space, development of finer resolution and more accurate maps has advanced in recent years. However, the mapping accuracy relies heavily on the quality of input layers, the algorithm chosen, and the size and quality of inventory samples for calibration and validation. Results By using airborne lidar data as the “truth” and focusing on the mean canopy height (MCH) as a key structural parameter, we test two commonly-used non-parametric techniques of maximum entropy (ME) and random forest (RF) for developing maps over a study site in Central Gabon. Results of mapping show that both approaches have improved accuracy with more input layers in mapping canopy height at 100 m (1-ha) pixels. The bias-corrected spatial models further improve estimates for small and large trees across the tails of height distributions with a trade-off in increasing overall mean squared error that can be readily compensated by increasing the sample size. Conclusions A significant improvement in tropical forest mapping can be achieved by weighting the number of inventory samples against the choice of image layers and the non-parametric algorithms. Without future satellite observations with better sensitivity to forest biomass, the maps based on existing data will remain slightly biased towards the mean of the distribution, underestimating the upper and overestimating the lower tails of the distribution.
Non-parametric PSF estimation from celestial transit solar images using blind deconvolution
Directory of Open Access Journals (Sweden)
González Adriana
2016-01-01
Full Text Available Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.
Directory of Open Access Journals (Sweden)
Anne-Laure Boulesteix
2017-09-01
Full Text Available Abstract Background The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly “evidence-based”. Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. Main message In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of “evidence-based” statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. Conclusion We suggest that benchmark studies—a method of assessment of statistical methods using real-world datasets—might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.
Boulesteix, Anne-Laure; Wilson, Rory; Hapfelmeier, Alexander
2017-09-09
The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly "evidence-based". Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of "evidence-based" statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. We suggest that benchmark studies-a method of assessment of statistical methods using real-world datasets-might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.
Energy Technology Data Exchange (ETDEWEB)
Ford, Eric B.; /Florida U.; Fabrycky, Daniel C.; /Lick Observ.; Steffen, Jason H.; /Fermilab; Carter, Joshua A.; /Harvard-Smithsonian Ctr. Astrophys.; Fressin, Francois; /Harvard-Smithsonian Ctr. Astrophys.; Holman, Matthew J.; /Harvard-Smithsonian Ctr. Astrophys.; Lissauer, Jack J.; /NASA, Ames; Moorhead, Althea V.; /Florida U.; Morehead, Robert C.; /Florida U.; Ragozzine, Darin; /Harvard-Smithsonian Ctr. Astrophys.; Rowe, Jason F.; /NASA, Ames /SETI Inst., Mtn. View /San Diego State U., Astron. Dept.
2012-01-01
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the transit timing variations of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
International Nuclear Information System (INIS)
Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David; Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A.; Welsh, William F.; Allen, Christopher; Batalha, Natalie M.; Buchhave, Lars A.
2012-01-01
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
Energy Technology Data Exchange (ETDEWEB)
Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C. [Astronomy Department, University of Florida, 211 Bryant Space Sciences Center, Gainesville, FL 32611 (United States); Fabrycky, Daniel C. [UCO/Lick Observatory, University of California, Santa Cruz, CA 95064 (United States); Steffen, Jason H. [Fermilab Center for Particle Astrophysics, P.O. Box 500, MS 127, Batavia, IL 60510 (United States); Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A. [NASA Ames Research Center, Moffett Field, CA 94035 (United States); Welsh, William F. [Astronomy Department, San Diego State University, San Diego, CA 92182-1221 (United States); Allen, Christopher [Orbital Sciences Corporation/NASA Ames Research Center, Moffett Field, CA 94035 (United States); Batalha, Natalie M. [Department of Physics and Astronomy, San Jose State University, San Jose, CA 95192 (United States); Buchhave, Lars A., E-mail: eford@astro.ufl.edu [Niels Bohr Institute, Copenhagen University, DK-2100 Copenhagen (Denmark); Collaboration: Kepler Science Team; and others
2012-05-10
Directory of Open Access Journals (Sweden)
Navid Haghighat
2017-12-01
This paper focuses on evaluating airline service quality from the passengers' perspective. A great deal of research on airline service quality evaluation has been performed worldwide, but little has yet been conducted in Iran. In this study, a framework for measuring airline service quality in Iran is proposed. After reviewing airline service quality criteria, the SSQAI model was selected because of its comprehensiveness in covering airline service quality dimensions. The SSQAI questionnaire items were redesigned to fit the requirements of Iranian airlines and the environmental circumstances of Iran's economic and cultural context. This study draws on fuzzy decision-making theory to accommodate the possibly fuzzy subjective judgments of the evaluators during airline service quality evaluation, and fuzzy TOPSIS has been applied for ranking the airlines' service quality performance. The three major Iranian airlines with the highest passenger volumes on domestic and foreign flights were chosen for evaluation in this research. Results demonstrated that Mahan airline achieved the best service quality rank among the three major Iranian airlines, gaining passengers' satisfaction through the delivery of high-quality services; IranAir and Aseman airlines placed second and third, respectively, according to the passengers' evaluation. Statistical analysis was used to analyse the passenger responses; because the data were not normally distributed, non-parametric tests were applied. The Friedman test was performed to rank the airlines on each criterion separately, and variance analysis and Tukey's test were applied to study the influence of passengers' age and educational level on their satisfaction with airline service quality. Results showed that age has no significant relation to passenger satisfaction with airlines; however, a higher educational level demonstrated a negative impact on
Maydeu-Olivares, Albert
2005-01-01
Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…
Trend Analysis of Pahang River Using Non-Parametric Analysis: Mann-Kendall Trend Test
International Nuclear Information System (INIS)
Nur Hishaam Sulaiman; Mohd Khairul Amri Kamarudin; Ahmad Dasuki Mustafa; Muhammad Azizi Amran; Fazureen Azaman; Ismail Zainal Abidin; Norsyuhada Hairoma
2015-01-01
Flooding is common in Pahang, especially during the northeast monsoon season from November to February. Three river cross-stations, Lubuk Paku, Sg. Yap and Temerloh, were selected as the study area. The stream flow and water level data were gathered from DID records. The data sets for this study were analysed using a non-parametric method, the Mann-Kendall trend test. The results obtained from the stream flow and water level analysis indicate positive significant trends for Lubuk Paku (0.001) and Sg. Yap (<0.0001) from 1972-2011, with p-values < 0.05. The Temerloh (0.178) data from 1963-2011 recorded no trend for the stream flow parameter but a negative trend for the water level parameter. Hydrological patterns and trends are strongly affected by external factors such as the northeast monsoon, which develops over the South China Sea and affects Pahang from November to March. Other factors, such as the development and management of the areas, can also be considered to affect the data and results. Hydrological patterns are important indicators of river trends such as stream flow and water level, and can be used for flood mitigation by local authorities. (author)
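The Mann-Kendall test used above is straightforward to implement. The following is a minimal Python sketch without the tie correction (series containing tied values need an adjusted variance term); the function name and the synthetic stream-flow series are ours, not the study's data.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test for a 1-D series (no tie correction).

    Returns the S statistic, the normal-approximation Z score and the
    two-sided p-value; positive S indicates an increasing trend.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S sums the signs of all later-minus-earlier pairwise differences.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S without ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, z, p

# A 40-year synthetic series with a clear upward trend.
flow = np.arange(40) + np.random.default_rng(1).normal(0.0, 0.5, 40)
s, z, p = mann_kendall(flow)
```

For real hydrological records with ties, the variance of S must be reduced by the standard tie-group correction before computing Z.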
Design Automation Using Script Languages. High-Level CAD Templates in Non-Parametric Programs
Moreno, R.; Bazán, A. M.
2017-10-01
The main purpose of this work is to study the advantages offered by applying traditional technical drawing techniques to design-automation processes in non-parametric CAD programs equipped with scripting languages. Given that an example drawing can be solved with traditional step-by-step detailed procedures, it is possible to do the same with CAD applications and to generalize it later, incorporating references. Today's CAD applications show striking absences of solutions for building engineering: oblique projections (military and cavalier), 3D modelling of complex stairs, roofs, furniture, and so on. The use of geometric references (via variables in script languages) and their incorporation into high-level CAD templates allows the automation of processes. Instead of repeatedly creating similar designs or modifying their data, users should be able to use these templates to generate future variations of the same design. This paper presents the automation process for several complex drawing examples based on CAD script files aided by parametric geometry calculation tools. The proposed method allows us to solve complex geometry designs not currently covered by CAD applications and subsequently to create new derivatives without user intervention. Automation in the generation of complex designs not only saves time but also increases the quality of the presentations and reduces the possibility of human error.
A Non-Parametric Delphi Approach to Foster Innovation Policy Debate in Spain
Directory of Open Access Journals (Sweden)
Juan Carlos Salazar-Elena
2016-05-01
The aim of this paper is to identify changes needed in Spain's innovation policy to close the gap between its innovation results and those of other European countries, with a view to sustainable leadership. To do this we apply the Delphi methodology to experts from academia, business, and government. To overcome the shortcomings of traditional descriptive methods, we develop an inferential analysis following a non-parametric bootstrap method, which enables us to identify important changes that should be implemented. Particularly interesting is the support found for improving the interconnections among the relevant agents of the innovation system (instead of focusing exclusively on the provision of knowledge and technological inputs through R and D activities), and the support found for "soft" policy instruments aimed at providing a homogeneous framework to assess the innovation capabilities of firms (e.g., for funding purposes). Attention to potential innovators among small and medium enterprises (SMEs) and traditional industries is particularly encouraged by the experts.
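A percentile bootstrap of the kind the abstract alludes to can be sketched in a few lines. Everything below is illustrative: the function name, the 1-9 rating scale and the panel's ratings are invented, not the study's data.

```python
import numpy as np

def bootstrap_ci(sample, stat=np.mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic:
    resample with replacement, recompute the statistic, and report the
    empirical (alpha/2, 1 - alpha/2) percentiles."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    boots = np.array([stat(rng.choice(sample, size=len(sample), replace=True))
                      for _ in range(n_boot)])
    return tuple(np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

# Hypothetical 1-9 Likert ratings from a Delphi panel for one policy item:
# if the lower confidence bound clears the neutral midpoint of 5, the
# panel's support for the change is taken as statistically meaningful.
ratings = [7, 8, 6, 9, 7, 5, 8, 6, 7, 8, 9, 6]
lo, hi = bootstrap_ci(ratings)
```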
International Nuclear Information System (INIS)
Dimas, George; Iakovidis, Dimitris K; Karargyris, Alexandros; Ciuti, Gastone; Koulaouzidis, Anastasios
2017-01-01
Wireless capsule endoscopy is a non-invasive screening procedure of the gastrointestinal (GI) tract performed with an ingestible capsule endoscope (CE) of the size of a large vitamin pill. Such endoscopes are equipped with a usually low-frame-rate color camera which enables the visualization of the GI lumen and the detection of pathologies. The localization of the commercially available CEs is performed in the 3D abdominal space using radio-frequency (RF) triangulation from external sensor arrays, in combination with transit time estimation. State-of-the-art approaches, such as magnetic localization, which have been experimentally proved more accurate than the RF approach, are still at an early stage. Recently, we have demonstrated that CE localization is feasible using solely visual cues and geometric models. However, such approaches depend on camera parameters, many of which are unknown. In this paper the authors propose a novel non-parametric visual odometry (VO) approach to CE localization based on a feed-forward neural network architecture. The effectiveness of this approach in comparison to state-of-the-art geometric VO approaches is validated using a robotic-assisted in vitro experimental setup. (paper)
Dimas, George; Iakovidis, Dimitris K.; Karargyris, Alexandros; Ciuti, Gastone; Koulaouzidis, Anastasios
2017-09-01
Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana
2015-03-01
The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. Copyright © 2014 John Wiley & Sons, Ltd.
Two non-parametric methods for derivation of constraints from radiotherapy dose–histogram data
International Nuclear Information System (INIS)
Ebert, M A; Kennedy, A; Joseph, D J; Gulliford, S L; Buettner, F; Foo, K; Haworth, A; Denham, J W
2014-01-01
Dose constraints based on histograms provide a convenient and widely used method for informing and guiding radiotherapy treatment planning. Methods of deriving such constraints are often poorly described. Two non-parametric methods for the derivation of constraints are described and investigated in the context of determining dose-specific cut-points: values of the free parameter (e.g., percentage volume of the irradiated organ) which best reflect resulting changes in complication incidence. A method based on receiver operating characteristic (ROC) analysis and one based on a maximally selected standardized rank sum are described and compared using rectal toxicity data from a prostate radiotherapy trial. Multiple-test corrections are applied using a free step-down resampling algorithm, which accounts for the large number of tests undertaken to search for optimal cut-points and for the inherent correlation between dose–histogram points. Both methods provide consistent, significant cut-point values, with the rank sum method displaying some sensitivity to the underlying data. The ROC method is simple to implement and can utilize a complication atlas, though an advantage of the rank sum method is its ability to incorporate all complication grades without the need for grade dichotomization. (note)
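One common way to derive a cut-point from ROC analysis is to maximize Youden's J statistic over candidate thresholds. The paper does not state which ROC criterion it optimizes, so the sketch below is a generic illustration on invented dose-volume data.

```python
import numpy as np

def youden_cutpoint(metric, event):
    """Pick the dose-metric cut-point maximising Youden's J = sens + spec - 1.

    `metric` is e.g. the percentage organ volume receiving at least some
    dose; `event` is 1 if the complication occurred.  Candidate thresholds
    are the observed metric values themselves.
    """
    metric = np.asarray(metric, dtype=float)
    event = np.asarray(event, dtype=int)
    best_j, best_t = -np.inf, None
    for t in np.unique(metric):
        pred = metric >= t                                   # predict complication
        sens = (pred & (event == 1)).sum() / max((event == 1).sum(), 1)
        spec = (~pred & (event == 0)).sum() / max((event == 0).sum(), 1)
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t, best_j

# Invented toy data: complications cluster above a volume of about 40%.
vol = [10, 20, 30, 35, 42, 50, 55, 60]
evt = [0, 0, 0, 0, 1, 1, 0, 1]
cut, j = youden_cutpoint(vol, evt)
```

In practice the multiple-test correction described in the abstract is essential, because scanning many thresholds in this way inflates the false-positive rate.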
Non-Parametric Kinetic (NPK) Analysis of Thermal Oxidation of Carbon Aerogels
Directory of Open Access Journals (Sweden)
Azadeh Seifi
2017-05-01
In recent years, much attention has been paid to aerogel materials (especially carbon aerogels) due to their potential uses in energy-related applications, such as thermal energy storage and thermal protection systems. These open-cell carbon-based porous materials (carbon aerogels) can react strongly with oxygen at relatively low temperatures (~400°C). Therefore, it is necessary to evaluate the thermal performance of carbon aerogels in view of their energy-related applications at high temperatures and under thermal oxidation conditions. The objective of this paper is to study, theoretically and experimentally, the oxidation reaction kinetics of carbon aerogel using the powerful non-parametric kinetic (NPK) method. For this purpose, non-isothermal thermogravimetric analysis at three different heating rates was performed on three samples, each with its specific pore structure, density and specific surface area. The most significant feature of this method, in comparison with model-free isoconversional methods, is its ability to separate the dependence of the reaction rate on the degree of conversion from its dependence on temperature by the direct use of thermogravimetric data. Using this method, it was observed that the Nomen-Sempere model provided the best fit to the data, while the temperature dependence of the rate constant was best explained by a Vogel-Fulcher relationship in which the reference temperature was the onset temperature of oxidation. Moreover, it was found that assuming an Arrhenius relation for the temperature dependence of the rate constant led to over-estimation of the apparent activation energy (up to 160 kJ/mol), considerably different from the values (up to 3.5 kJ/mol) predicted by the Vogel-Fulcher relationship in isoconversional methods.
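The separation at the heart of NPK, splitting the rate's dependence on conversion from its dependence on temperature without assuming a model, is usually carried out with a singular value decomposition of a rate matrix. A noise-free sketch follows; the conversion function, temperature range and k(T) form are invented for illustration, not the paper's aerogel data.

```python
import numpy as np

# Separability assumption of NPK: rate(alpha, T) = f(alpha) * k(T).
# Rates are arranged on a conversion x temperature grid, and a rank-1 SVD
# recovers f and k (each up to a constant factor) with no kinetic model.
alpha = np.linspace(0.05, 0.95, 19)
T = np.linspace(600.0, 750.0, 16)          # K, hypothetical range
f_true = alpha * (1.0 - alpha)             # a Prout-Tompkins-like conversion function
k_true = np.exp(-8000.0 / T)               # any positive temperature dependence
A = np.outer(f_true, k_true)               # noise-free rate matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
f_est = U[:, 0] * np.sqrt(s[0])            # f(alpha) up to scale
k_est = Vt[0, :] * np.sqrt(s[0])           # k(T) up to scale
if f_est.sum() < 0:                        # SVD signs are arbitrary; make both positive
    f_est, k_est = -f_est, -k_est
```

With noisy experimental data the first singular component is only an approximation, and the recovered k(T) can then be fitted with either an Arrhenius or a Vogel-Fulcher form, as the abstract compares.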
A Non-Parametric Item Response Theory Evaluation of the CAGE Instrument Among Older Adults.
Abdin, Edimansyah; Sagayadevan, Vathsala; Vaingankar, Janhavi Ajit; Picco, Louisa; Chong, Siow Ann; Subramaniam, Mythily
2018-02-23
The validity of the CAGE using item response theory (IRT) has not yet been examined in the older adult population. This study aims to investigate the psychometric properties of the CAGE using both non-parametric and parametric IRT models, to assess whether there is any differential item functioning (DIF) by age, gender or ethnicity, and to examine the measurement precision at the cut-off scores. We used data from the Well-being of the Singapore Elderly study to conduct Mokken scaling analysis (MSA) and to fit dichotomous Rasch and 2-parameter logistic IRT models. The measurement precision at the cut-off scores was evaluated using classification accuracy (CA) and classification consistency (CC). The MSA showed an overall scalability H index of 0.459, indicating a medium-performing instrument. All items were found to be homogeneous, measuring the same construct, and able to discriminate well between respondents with high levels of the construct and those with lower levels. Item discrimination ranged from 1.07 to 6.73, while item difficulty ranged from 0.33 to 2.80. Significant DIF was found for two items across ethnic groups. More than 90% of respondents (CC and CA ranged from 92.5% to 94.3%) were consistently and accurately classified by the CAGE cut-off scores of 2 and 3. The current study provides new evidence on the validity of the CAGE from the IRT perspective, valuable information on each item's contribution to the assessment of the overall severity of alcohol problems, and the precision of the cut-off scores in the older adult population.
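The scalability H index reported by the MSA is Loevinger's coefficient, which can be computed directly from the ratio of observed to expected Guttman errors. A minimal sketch for dichotomous items follows (no handling of tied item popularities; the four-person response matrix is invented):

```python
import numpy as np

def loevinger_h(X):
    """Overall Mokken scalability coefficient H for dichotomous items.

    H = 1 - (observed Guttman errors) / (errors expected under marginal
    independence), summed over all item pairs.  X is an
    (n_persons, n_items) 0/1 matrix.
    """
    X = np.asarray(X, dtype=int)
    n = X.shape[0]
    p = X.mean(axis=0)                  # item popularities
    order = np.argsort(-p)              # easiest (highest p) first
    X, p = X[:, order], p[order]
    obs = exp = 0.0
    for i in range(len(p) - 1):         # i indexes the easier item
        for j in range(i + 1, len(p)):  # j indexes the harder item
            obs += np.sum((X[:, j] == 1) & (X[:, i] == 0))  # Guttman errors
            exp += n * p[j] * (1.0 - p[i])
    return 1.0 - obs / exp

# A perfect Guttman scale produces no errors, so H = 1.
guttman = np.array([[0, 0, 0],
                    [1, 0, 0],
                    [1, 1, 0],
                    [1, 1, 1]])
h = loevinger_h(guttman)
```

Values of H around 0.4-0.5, as in this study, are conventionally read as a medium scale; H >= 0.5 as a strong one.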
DEFF Research Database (Denmark)
Bogetoft, Peter; Nielsen, Kurt
2005-01-01
We discuss the design of interactive, internet-based benchmarking using parametric (statistical) as well as non-parametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore...
Van Steenbergen, N.; Willems, P.
2012-04-01
Reliable flood forecasts are the most important non-structural measure for reducing the impact of floods. However, flood forecasting systems are subject to uncertainty originating from the input data and from the model structure and model parameters of the different hydraulic and hydrological submodels. To quantify this uncertainty, a non-parametric, data-based approach has been developed. This approach analyses the historical forecast residuals (differences between the predictions and the observations at river gauging stations) without assuming a predefined statistical error distribution. Because the residuals are correlated with the value of the forecasted water level and with the lead time, the residuals are split into discrete classes of simulated water levels and lead times. For each class, percentile values of the model residuals are calculated and stored in a 'three-dimensional error' matrix. By 3D interpolation in this error matrix, the uncertainty in newly forecasted water levels can be quantified. In addition to quantifying the uncertainty, communicating it is equally important. The communication has to be done consistently, reducing the chance of misinterpretation, and it needs to be adapted to the audience: the majority of the general public is not interested in in-depth information on the uncertainty of the predicted water levels, but only in the likelihood that certain alarm levels will be exceeded. Water managers need more information, e.g. time-dependent uncertainty information, because they rely on it to undertake the appropriate flood mitigation actions. There are various ways of presenting uncertainty information (numerical, linguistic, graphical, time (in)dependent, etc.), each with advantages and disadvantages for a specific audience. A useful method for communicating the uncertainty of flood forecasts is probabilistic flood mapping. These maps give a representation of the
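The percentile "error matrix" plus interpolation scheme described above can be sketched as follows. The archive of residuals, the class edges and the percentile levels are all invented for illustration; the real system derives them from historical forecasts.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(42)

# Hypothetical forecast archive: forecasted level (m), lead time (h) and
# residual = forecast - observation; the spread grows with level and lead.
level = rng.uniform(1.0, 5.0, 20000)
lead = rng.uniform(6.0, 48.0, 20000)
resid = rng.normal(0.0, 0.05 * level * np.sqrt(lead / 6.0))

# Split residuals into discrete (level, lead-time) classes and store
# percentiles per class in the error matrix.
lev_edges = np.linspace(1.0, 5.0, 5)            # 4 level classes
lead_edges = np.linspace(6.0, 48.0, 4)          # 3 lead-time classes
lev_mid = 0.5 * (lev_edges[:-1] + lev_edges[1:])
lead_mid = 0.5 * (lead_edges[:-1] + lead_edges[1:])
pcts = [5, 50, 95]
err = np.empty((len(lev_mid), len(lead_mid), len(pcts)))
for i in range(len(lev_mid)):
    for j in range(len(lead_mid)):
        in_class = ((level >= lev_edges[i]) & (level < lev_edges[i + 1]) &
                    (lead >= lead_edges[j]) & (lead < lead_edges[j + 1]))
        err[i, j] = np.percentile(resid[in_class], pcts)

# Interpolation in the matrix attaches an uncertainty band to a new
# forecast (here: level 3.0 m at 24 h lead time).
interp = [RegularGridInterpolator((lev_mid, lead_mid), err[:, :, k])
          for k in range(len(pcts))]
lo, med, hi = (f([[3.0, 24.0]])[0] for f in interp)
```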
de-Graft Acquah, Henry
2014-01-01
This paper highlights the sensitivity of technical efficiency estimates to the estimation approach, using empirical data. Firm-specific technical efficiency and mean technical efficiency are estimated using the non-parametric Data Envelopment Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey's test sugge...
Siciliani, Luigi
2006-01-01
Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for measuring a hospital's technical efficiency. The comparison was made using a sample of 17 Italian hospitals over the years 1996-99. The highest correlations between efficiency scores are found between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one-output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
Evaluation of world's largest social welfare scheme: An assessment using non-parametric approach.
Singh, Sanjeet
2016-08-01
The Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) is the world's largest social welfare scheme, aimed at poverty alleviation in India through rural employment generation. This paper aims to evaluate and rank the performance of the Indian states under the MGNREGA scheme. A non-parametric approach, Data Envelopment Analysis (DEA), is used to calculate the overall technical, pure technical, and scale efficiencies of the states. The sample data are drawn from the annual official reports published by the Ministry of Rural Development, Government of India. Based on three selected input parameters (expenditure indicators) and five output parameters (employment generation indicators), I apply both input- and output-oriented DEA models to estimate how well the states utilized their resources and generated outputs during the financial year 2013-14. The relative performance evaluation is made under the assumption of constant returns to scale and also under variable returns to scale, to assess the impact of scale on performance. The results indicate that the main sources of inefficiency are both the technical and the managerial practices adopted. Eleven states are overall technically efficient and operate at the optimum scale, whereas 18 states are pure technically, or managerially, efficient. For some states it is necessary to alter the scheme size to perform at par with the best-performing states. For inefficient states, optimal input and output targets are calculated along with the resource savings and output gains. The analysis shows that if all inefficient states operated at optimal input and output levels, on average 17.89% of total expenditure, a total of $780 million, could have been saved in a single year. Most of the inefficient states perform poorly when it comes to the participation of women and disadvantaged sections (SC&ST) in the scheme. In order to catch up with the performance of the best-performing states, inefficient states on average need to enhance
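The input-oriented, constant-returns-to-scale DEA model behind such efficiency scores can be solved as a small linear program per decision-making unit (DMU). A sketch using scipy follows; the two-state, one-input, one-output example is invented, and the study's actual model uses three inputs and five outputs.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, k):
    """Input-oriented CCR (constant returns to scale) efficiency of DMU k.

    X is (n_dmus, n_inputs), Y is (n_dmus, n_outputs).  Solves the
    envelopment LP: min theta  s.t.  X'lam <= theta * x_k,  Y'lam >= y_k,
    lam >= 0, with decision vector z = [theta, lam_1..lam_n].
    """
    n, m = X.shape
    n_out = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])     # inputs:  X'lam - theta*x_k <= 0
    A_out = np.hstack([np.zeros((n_out, 1)), -Y.T])  # outputs: -Y'lam <= -y_k
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Two hypothetical states with one input (expenditure) and one output
# (person-days generated): the second uses twice the input for the same
# output, so its input-oriented efficiency is 0.5.
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
eff = [dea_ccr_input(X, Y, k) for k in range(len(X))]
```

The variable-returns-to-scale (BCC) variant mentioned in the abstract adds the convexity constraint that the lambda weights sum to one.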
Monitoring coastal marshes biomass with CASI: a comparison of parametric and non-parametric models
Mo, Y.; Kearney, M.
2017-12-01
Coastal marshes are important carbon sinks that face multiple natural and anthropogenic stresses. Optical remote sensing is a powerful tool for closely monitoring the biomass of coastal marshes. However, the application of hyperspectral sensors to assessing the biomass of diverse coastal marsh ecosystems has been limited. This study samples spectral and biophysical data from coastal freshwater, intermediate, brackish, and saline marshes in Louisiana, and develops parametric and non-parametric models for using the Compact Airborne Spectrographic Imager (CASI) to retrieve the marshes' biomass. Linear models and random forest models are developed from simulated CASI data (48 bands, 380-1050 nm, bandwidth 14 nm). Linear models are also developed using narrowband vegetation indices computed from all possible band combinations of the blue, red, and near-infrared wavelengths. It is found that the linear models derived from the optimal narrowband vegetation indices provide strong predictions of the marshes' Leaf Area Index (LAI; R2 > 0.74 for ARVI), but not of their Aboveground Green Biomass (AGB; R2 > 0.25). The linear models derived from the simulated CASI data strongly predict the marshes' LAI (R2 = 0.93) and AGB (R2 = 0.71), retaining 27 and 30 bands/variables in the final models through stepwise regression, respectively. The random forest models derived from the simulated CASI data also strongly predict the marshes' LAI and AGB (R2 = 0.91 and 0.84, respectively); the most important variables for predicting LAI are near-infrared bands at 784 and 756 nm, and for predicting AGB, red bands at 684 and 670 nm. In sum, the random forest model is preferable for assessing coastal marsh biomass using CASI data, as it offers a high R2 for both LAI and AGB. The superior performance of the random forest model is likely because it fully utilizes the full-spectrum data and makes no assumption of approximate normality in the sampled population. This study offers solutions
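A random forest retrieval of the kind compared above can be sketched with scikit-learn on simulated band data. The band count matches CASI's 48, but the reflectances, the "informative" band indices and the LAI relationship below are all invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated reflectance in 48 CASI-like bands for 200 plots.  LAI is driven
# by two "near-infrared" bands (indices 28 and 30, chosen arbitrarily) plus
# noise, so the forest's feature importances should concentrate there.
bands = rng.uniform(0.0, 0.6, size=(200, 48))
lai = 4.0 * bands[:, 30] + 2.0 * bands[:, 28] + rng.normal(0.0, 0.1, 200)

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(bands, lai)
top = np.argsort(rf.feature_importances_)[::-1][:2]   # two most important bands
```

Impurity-based importances like these are what allow the study to point to specific wavelengths; for a fair accuracy estimate, the R2 should of course be computed on held-out data rather than the training set.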
International Nuclear Information System (INIS)
Khoshroo, Alireza; Mulwa, Richard; Emrouznejad, Ali; Arabi, Behrouz
2013-01-01
Grape is one of the world's largest fruit crops, with approximately 67.5 million tonnes produced each year, and energy is an important element in modern grape production, which depends heavily on fossil and other energy resources. Efficient use of these energies is a necessary step toward reducing environmental hazards, preventing the destruction of natural resources and ensuring agricultural sustainability. Hence, identifying excessive energy use and reducing energy inputs is the main focus of this paper, in order to optimize energy consumption in grape production. In this study we use a two-stage methodology to find the association between energy efficiency and performance explained by farmers' specific characteristics. In the first stage, a non-parametric Data Envelopment Analysis is used to model efficiencies as an explicit function of human labor, machinery, chemicals, FYM (farmyard manure), diesel fuel, electricity and water-for-irrigation energies. In the second stage, farm-specific variables such as farmers' age, gender, level of education and agricultural experience are used in a Tobit regression framework to explain how these factors influence the efficiency of grape farming. The results of the first stage show substantial inefficiency among the grape producers in the studied area, while the second stage shows that the main difference between efficient and inefficient farmers lies in the use of chemicals, diesel fuel and water for irrigation. Efficient farmers' use of chemicals such as insecticides, herbicides and fungicides was considerably lower than that of inefficient ones. The results revealed that more educated farmers are more energy efficient than their less educated counterparts. - Highlights: • The focus of this paper is to identify excessive use of energy and optimize energy consumption in grape production. • We measure efficiency as a function of labor/machinery/chemicals/farmyard manure/diesel-fuel/electricity/water. • Data were obtained from 41 grape
CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions
Scripts for computing non-parametric regression analyses: an overview of using scripts to infer environmental conditions from biological observations and to statistically estimate species-environment relationships.
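A minimal non-parametric regression of the kind such scripts perform is the Nadaraya-Watson kernel smoother (the CADDIS scripts themselves are written in R and may use different estimators; this Python sketch with invented gradient-abundance data is only illustrative).

```python
import numpy as np

def nw_smooth(x, y, grid, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    A minimal non-parametric estimate of E[y | x] on the points in `grid`;
    no functional form is assumed for the species-environment relationship.
    """
    x, y, grid = (np.asarray(a, dtype=float) for a in (x, y, grid))
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

# Invented data: taxon abundance responds hump-shaped to an environmental
# gradient; the smoother should recover the peak near gradient value 5.
rng = np.random.default_rng(3)
grad = np.sort(rng.uniform(0.0, 10.0, 300))
abund = np.exp(-0.5 * ((grad - 5.0) / 1.5) ** 2) + rng.normal(0.0, 0.05, 300)
grid = np.linspace(1.0, 9.0, 81)
fit = nw_smooth(grad, abund, grid, bandwidth=0.5)
```

The bandwidth controls the bias-variance trade-off: too small and the curve chases noise, too large and the hump is flattened.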
Directory of Open Access Journals (Sweden)
Wiji Suwarno
2017-02-01
The term benchmarking is encountered in the implementation of total quality management (TQM), termed holistic quality management in Indonesian, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes against those of other organizations in order to obtain information that can help the organization improve its performance.
DEFF Research Database (Denmark)
Lawson, Lartey; Nielsen, Kurt
2005-01-01
We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and to explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different dairy farm characteristics influence technical efficiency.
DEFF Research Database (Denmark)
Peña, Alfredo
This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Tvede, Mich
2002-01-01
Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...
Comparing non-parametric methods for ungrouping coarsely aggregated age-specific distributions
DEFF Research Database (Denmark)
Rizzi, Silvia; Thinggaard, Mikael; Vaupel, James W.
2016-01-01
Demographers often have access to vital statistics that are less than ideal for the purpose of their research. In many instances demographic data are reported in coarse histograms, where the values given are only the summation of true latent values, thereby making detailed analysis troublesome. O...
Non-parametric probabilistic forecasts of wind power: required properties and evaluation
DEFF Research Database (Denmark)
Pinson, Pierre; Nielsen, Henrik Aalborg; Møller, Jan Kloppenborg
2007-01-01
of a single or a set of quantile forecasts. The required and desirable properties of such probabilistic forecasts are defined and a framework for their evaluation is proposed. This framework is applied for evaluating the quality of two statistical methods producing full predictive distributions from point...
DEFF Research Database (Denmark)
Petersen, Jørgen Holm
2009-01-01
A conceptually simple two-dimensional conditional reference curve is described. The curve gives a decision basis for determining whether a bivariate response from an individual is "normal" or "abnormal" when taking into account that a third (conditioning) variable may influence the bivariate response. The reference curve is not only characterized analytically but also by geometric properties that are easily communicated to medical doctors - the users of such curves. The reference curve estimator is completely non-parametric, so no distributional assumptions are needed about the two-dimensional response. An example that will serve to motivate and illustrate the reference curve is the study of the height/weight distribution of 7-8-year-old Danish school girls born in 1930, 1950, or 1970.
Non-parametric Bayesian models of response function in dynamic image sequences
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav
2016-01-01
Roč. 151, č. 1 (2016), s. 90-100 ISSN 1077-3142 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Response function * Blind source separation * Dynamic medical imaging * Probabilistic models * Bayesian methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.498, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/tichy-0456983.pdf
Salameh , Farah; Picot , Antoine; Chabert , Marie; Maussion , Pascal
2017-01-01
International audience; This paper describes an original statistical approach for the lifespan modeling of electric machine insulation materials. The presented models aim to study the effect of three main stress factors (voltage, frequency and temperature) and their interactions on the insulation lifespan. The proposed methodology is applied to two different insulation materials tested in partial discharge regime. Accelerated ageing tests are organized according to experimental optimization m...
A new non-parametric stationarity test of time series in the time domain
Jin, Lei; Wang, Suojin; Wang, Haiyan
2014-11-07
© 2015 The Royal Statistical Society and Blackwell Publishing Ltd. We propose a new double-order selection test for checking second-order stationarity of a time series. To develop the test, a sequence of systematic samples is defined via Walsh functions. Then the deviations of the autocovariances based on these systematic samples from the corresponding autocovariances of the whole time series are calculated and the uniform asymptotic joint normality of these deviations over different systematic samples is obtained. With a double-order selection scheme, our test statistic is constructed by combining the deviations at different lags in the systematic samples. The null asymptotic distribution of the statistic proposed is derived and the consistency of the test is shown under fixed and local alternatives. Simulation studies demonstrate well-behaved finite sample properties of the method proposed. Comparisons with some existing tests in terms of power are given both analytically and empirically. In addition, the method proposed is applied to check the stationarity assumption of a chemical process viscosity readings data set.
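The Walsh-function construction of systematic samples is involved; as a loose illustration only (an assumption of this note, not the authors' test), the sketch below compares subsample autocovariances against the full-series autocovariance, which is the basic deviation quantity the test builds on. The function names are hypothetical.

```python
def autocov(x, lag):
    """Sample autocovariance of the series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n

def halves_deviation(x, lag):
    """Crude stationarity diagnostic (NOT the paper's Walsh-function
    systematic samples): absolute deviations of the lag-`lag`
    autocovariances of the two halves from that of the full series.
    Large deviations hint at second-order non-stationarity."""
    h = len(x) // 2
    first, second = x[:h], x[h:2 * h]
    full = autocov(x, lag)
    return abs(autocov(first, lag) - full), abs(autocov(second, lag) - full)
```

A series whose variance shifts mid-way produces much larger deviations than a second-order stationary one, which is the intuition the formal test makes rigorous.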
International Nuclear Information System (INIS)
Leszczynski, Francisco
2002-01-01
The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, name conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described. Some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)
Directory of Open Access Journals (Sweden)
A Francina Webster
Full Text Available Many regulatory agencies are exploring ways to integrate toxicogenomic data into their chemical risk assessments. The major challenge lies in determining how to distill the complex data produced by high-content, multi-dose gene expression studies into quantitative information. It has been proposed that benchmark dose (BMD) values derived from toxicogenomics data be used as point of departure (PoD) values in chemical risk assessments. However, there is limited information regarding which genomics platforms are most suitable and how to select appropriate PoD values. In this study, we compared BMD values modeled from RNA sequencing-, microarray-, and qPCR-derived gene expression data from a single study, and explored multiple approaches for selecting a single PoD from these data. The strategies evaluated include several that do not require prior mechanistic knowledge of the compound for selection of the PoD, thus providing approaches for assessing data-poor chemicals. We used RNA extracted from the livers of female mice exposed to non-carcinogenic (0, 2 mg/kg/day; mkd) and carcinogenic (4, 8 mkd) doses of furan for 21 days. We show that transcriptional BMD values were consistent across technologies and highly predictive of the two-year cancer bioassay-based PoD. We also demonstrate that filtering data based on statistically significant changes in gene expression prior to BMD modeling creates more conservative BMD values. Taken together, this case study on mice exposed to furan demonstrates that high-content toxicogenomics studies produce robust data for BMD modeling that are minimally affected by inter-technology variability and highly predictive of cancer-based PoD doses.
Non-parametric characterization of long-term rainfall time series
Tiwari, Harinarayan; Pandey, Brij Kishor
2018-03-01
The statistical study of rainfall time series is one of the approaches for efficient hydrological system design. Identifying and characterizing long-term rainfall time series could aid in improving hydrological forecasting. In the present study, eventual statistics was applied to the long-term (1851-2006) rainfall time series of seven meteorological regions of India. Linear trend analysis was carried out using the Mann-Kendall test for the observed rainfall series. The trend observed with this approach has been ascertained using the innovative trend analysis method, which has been found to be a strong tool to detect the general trend of rainfall time series. The sequential Mann-Kendall test has also been carried out to examine nonlinear trends of the series, and the partial sum of cumulative deviation test is also found to be suitable to detect the nonlinear trend. Innovative trend analysis, the sequential Mann-Kendall test and the partial cumulative deviation test have the potential to detect the general as well as the nonlinear trend of a rainfall time series. Annual rainfall analysis suggests that the maximum rise in mean rainfall is 11.53% for West Peninsular India, whereas the maximum fall in mean rainfall is 7.8% for the North Mountainous Indian region. The innovative trend analysis method is also capable of finding the number of change points present in the time series. Additionally, we have performed the von Neumann ratio test and the cumulative deviation test to estimate the departure from homogeneity. Singular spectrum analysis has been applied in this study to evaluate the order of departure from homogeneity in the rainfall time series. The monsoon season (JS) of the North Mountainous India and West Peninsular India zones has a higher departure from homogeneity, and singular spectrum analysis shows results in coherence with the same.
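The Mann-Kendall test named above is a standard non-parametric trend test; a minimal pure-Python sketch (our own function names, without the tie correction in the variance) computes the S statistic and its normal approximation Z:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns (S, Z).

    S > 0 suggests an increasing trend, S < 0 a decreasing one;
    |Z| > 1.96 is significant at the 5% level (two-sided).
    Ties are ignored in the variance for this sketch.
    """
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

For a strictly increasing 10-point series this gives S = 45 and Z ≈ 3.9, a clearly significant upward trend.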
A new measure for gene expression biclustering based on non-parametric correlation.
Flores, Jose L; Inza, Iñaki; Larrañaga, Pedro; Calvo, Borja
2013-12-01
One of the emerging techniques for the analysis of DNA microarray data, known as biclustering, is the search for subsets of genes and conditions that are coherently expressed. These subgroups provide clues about the main biological processes. Until now, different approaches to this problem have been proposed. Most of them use the mean squared residue as quality measure, but relevant and interesting patterns, such as shifting or scaling patterns, cannot be detected. Furthermore, recent papers show that there exist new coherence patterns involved in different kinds of cancer and tumors, such as inverse relationships between genes, which cannot be captured. The proposed measure is called Spearman's biclustering measure (SBM), which estimates the quality of a bicluster based on the non-linear correlation among genes and conditions simultaneously. The search for biclusters is performed using an evolutionary technique called estimation of distribution algorithms, which uses the SBM measure as fitness function. This approach has been examined from different points of view using artificial and real microarrays. The assessment process has involved the use of quality indexes, a set of reference bicluster patterns including new patterns, and a set of statistical tests. The performance has also been examined using real microarrays and comparing to different algorithmic approaches such as Bimax, CC, OPSM, Plaid and xMotifs. SBM shows several advantages, such as the ability to recognize more complex coherence patterns such as shifting, scaling and inversion, and the capability to selectively marginalize genes and conditions depending on the statistical significance. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
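The exact SBM formula is not given in the abstract; as an illustrative stand-in (an assumption of this note, not the published measure), one can score a candidate bicluster by the mean absolute pairwise Spearman correlation of its gene rows, which, like SBM, is insensitive to shifting, scaling and inversion patterns:

```python
def rankdata(v):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    ranks = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation between two expression vectors."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

def mean_abs_spearman(rows):
    """Toy bicluster score (hypothetical, not the SBM formula):
    mean |rho| over all gene pairs in the candidate bicluster."""
    vals = [abs(spearman(rows[i], rows[j]))
            for i in range(len(rows)) for j in range(i + 1, len(rows))]
    return sum(vals) / len(vals)
```

A bicluster containing a shifted row, a scaled row and an inverted row still scores 1.0 under this metric, illustrating why rank correlation captures patterns the mean squared residue misses.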
Non-parametric causality detection: An application to social media and financial data
Tsapeli, Fani; Musolesi, Mirco; Tino, Peter
2017-10-01
According to behavioral finance, stock market returns are influenced by emotional, social and psychological factors. Several recent works support this theory by providing evidence of correlation between stock market prices and collective sentiment indexes measured using social media data. However, a pure correlation analysis is not sufficient to prove that stock market returns are influenced by such emotional factors since both stock market prices and collective sentiment may be driven by a third unmeasured factor. Controlling for factors that could influence the study by applying multivariate regression models is challenging given the complexity of stock market data. False assumptions about the linearity or non-linearity of the model and inaccuracies on model specification may result in misleading conclusions. In this work, we propose a novel framework for causal inference that does not require any assumption about a particular parametric form of the model expressing statistical relationships among the variables of the study and can effectively control a large number of observed factors. We apply our method in order to estimate the causal impact that information posted in social media may have on stock market returns of four big companies. Our results indicate that social media data not only correlate with stock market returns but also influence them.
Hayslett, H T
1991-01-01
Statistics covers the basic principles of statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses is explained, including tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population. The text the
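The normal approximation to the binomial mentioned in the blurb can be sketched in a few lines (with the usual continuity correction; the function names are ours):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def binomial_normal_approx(n, p, k):
    """P(X <= k) for X ~ Binomial(n, p), using the normal
    approximation N(np, np(1-p)) with continuity correction."""
    mu = n * p
    sigma = math.sqrt(n * p * (1.0 - p))
    return normal_cdf((k + 0.5 - mu) / sigma)
```

For n = 100, p = 0.5, the approximation of P(X ≤ 50) agrees with the exact binomial value (≈ 0.5398) to three decimal places.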
DEFF Research Database (Denmark)
Agrell, Per J.; Bogetoft, Peter
2017-01-01
Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
Romero, C.; McWilliam, M.; Macías-Pérez, J.-F.; Adam, R.; Ade, P.; André, P.; Aussel, H.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; de Petris, M.; Désert, F.-X.; Doyle, S.; Goupy, J.; Kramer, C.; Lagache, G.; Leclercq, S.; Lestrade, J.-F.; Mauskopf, P.; Mayet, F.; Monfardini, A.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Revéret, V.; Ritacco, A.; Roussel, H.; Ruppin, F.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R.
2018-04-01
Context. In the past decade, sensitive, resolved Sunyaev-Zel'dovich (SZ) studies of galaxy clusters have become common. Whereas many previous SZ studies have parameterized the pressure profiles of galaxy clusters, non-parametric reconstructions will provide insights into the thermodynamic state of the intracluster medium. Aims: We seek to recover the non-parametric pressure profiles of the high-redshift (z = 0.89) galaxy cluster CLJ 1226.9+3332 as inferred from SZ data from the MUSTANG, NIKA, Bolocam, and Planck instruments, which all probe different angular scales. Methods: Our non-parametric algorithm makes use of logarithmic interpolation, which under the assumption of ellipsoidal symmetry is analytically integrable. For MUSTANG, NIKA, and Bolocam we derive a non-parametric pressure profile independently and find good agreement among the instruments. In particular, we find that the non-parametric profiles are consistent with a fitted generalized Navarro-Frenk-White (gNFW) profile. Given the ability of Planck to constrain the total signal, we include a prior on the integrated Compton Y parameter as determined by Planck. Results: For a given instrument, constraints on the pressure profile diminish rapidly beyond the field of view. The overlap in spatial scales probed by these four datasets is therefore critical in checking for consistency between instruments. By using multiple instruments, our analysis of CLJ 1226.9+3332 covers a large radial range, from the central regions to the cluster outskirts: 0.05 R500 generation of SZ instruments such as NIKA2 and MUSTANG2.
Directory of Open Access Journals (Sweden)
Shantanu Desai
2016-04-01
Full Text Available The coupling between spin and torsion in the Einstein–Cartan–Sciama–Kibble theory of gravity generates gravitational repulsion at very high densities, which prevents a singularity in a black hole and may create there a new universe. We show that quantum particle production in such a universe near the last bounce, which represents the Big Bang, gives the dynamics that solves the horizon, flatness, and homogeneity problems in cosmology. For a particular range of the particle production coefficient, we obtain a nearly constant Hubble parameter that gives an exponential expansion of the universe with more than 60 e-folds, which lasts about 10^−42 s. This scenario can thus explain cosmic inflation without requiring a fundamental scalar field and reheating. From the obtained time dependence of the scale factor, we follow the prescription of Ellis and Madsen to reconstruct in a non-parametric way a scalar field potential which gives the same dynamics of the early universe. This potential gives the slow-roll parameters of cosmic inflation, from which we calculate the tensor-to-scalar ratio, the scalar spectral index of density perturbations, and its running as functions of the production coefficient. We find that these quantities do not significantly depend on the scale factor at the Big Bounce. Our predictions for these quantities are consistent with the Planck 2015 observations.
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
International Nuclear Information System (INIS)
2005-01-01
For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity, GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees
International Nuclear Information System (INIS)
2001-01-01
For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
1999-01-01
For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
Wei, Chu; Löschel, Andreas; Liu, Bing
2015-01-01
In the context of soaring demand for electricity, mitigating and controlling greenhouse gas emissions is a great challenge for China's power sector. Increasing attention has been placed on the evaluation of energy efficiency and CO2 abatement potential in the power sector. However, studies at the micro-level are relatively rare due to serious data limitations. This study uses the 2004 and 2008 Census data of Zhejiang province to construct a non-parametric frontier in order to assess the abatement space of energy and associated CO2 emissions from China's coal-fired power enterprises. A Weighted Russell Directional Distance Function (WRDDF) is applied to construct an energy-saving potential index and a CO2 emission-abatement potential index. Both indicators depict the inefficiency level in terms of energy utilization and CO2 emissions of electric power plants. Our results show a substantial variation of energy-saving potential and CO2 abatement potential among enterprises. We find that large power enterprises are less efficient in 2004, but become more efficient than smaller enterprises in 2008. State-owned enterprises (SOEs) are not significantly different in 2008 from 2004, but perform better than their non-SOE counterparts in 2008. This change in performance for large enterprises and SOEs might be driven by the “top-1000 Enterprise Energy Conservation Action” that was implemented in 2006. - Highlights: • Energy-saving potential and CO2 abatement potential for Chinese power enterprises are evaluated. • The potential to curb energy use and emissions shows great variation and dynamic changes. • Large enterprises are less efficient than small enterprises in 2004, but more efficient in 2008. • State-owned enterprises perform better than non-state-owned enterprises in 2008
Non-parametric trend analysis of the aridity index for three large arid and semi-arid basins in Iran
Ahani, Hossien; Kherad, Mehrzad; Kousari, Mohammad Reza; van Roosmalen, Lieke; Aryanfar, Ramin; Hosseini, Seyyed Mashaallah
2013-05-01
Currently, an important scientific challenge that researchers are facing is to gain a better understanding of climate change at the regional scale, which can be especially challenging in an area with low and highly variable precipitation amounts such as Iran. Trend analysis of the medium-term change using ground station observations of meteorological variables can enhance our knowledge of the dominant processes in an area and contribute to the analysis of future climate projections. Generally, studies focus on the long-term variability of temperature and precipitation and to a lesser extent on other important parameters such as moisture indices. In this study the recent 50-year trends (1955-2005) of precipitation (P), potential evapotranspiration (PET), and aridity index (AI) in monthly time scale were studied over 14 synoptic stations in three large Iran basins using the Mann-Kendall non-parametric test. Additionally, an analysis of the monthly, seasonal and annual trend of each parameter was performed. Results showed no significant trends in the monthly time series. However, PET showed significant, mostly decreasing trends, for the seasonal values, which resulted in a significant negative trend in annual PET at five stations. Significant negative trends in seasonal P values were only found at a number of stations in spring and summer and no station showed significant negative trends in annual P. Due to the varied positive and negative trends in annual P and to a lesser extent PET, almost as many stations with negative as positive trends in annual AI were found, indicating that both drying and wetting trends occurred in Iran. Overall, the northern part of the study area showed an increasing trend in annual AI which meant that the region became wetter, while the south showed decreasing trends in AI.
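The abstract does not spell out its aridity index definition; the commonly used UNEP index AI = P/PET fits the context, so this sketch adopts it as an assumption, together with the standard UNEP climate classes:

```python
def aridity_index(p_annual, pet_annual):
    """UNEP-style aridity index AI = P / PET, with annual precipitation
    and potential evapotranspiration in the same units (e.g. mm/year).
    Assumption: this is the common UNEP definition, not necessarily
    the exact index used in the paper."""
    return p_annual / pet_annual

def climate_class(ai):
    """Coarse UNEP classification by AI (thresholds are the commonly
    cited ones; an assumption, not taken from the paper)."""
    if ai < 0.05:
        return "hyper-arid"
    if ai < 0.20:
        return "arid"
    if ai < 0.50:
        return "semi-arid"
    if ai < 0.65:
        return "dry sub-humid"
    return "humid"
```

Under this definition a station with 200 mm/year of precipitation and 1600 mm/year of PET has AI = 0.125 and falls in the "arid" class, so a negative trend in AI indeed corresponds to drying, as the abstract describes.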
International Nuclear Information System (INIS)
2003-01-01
For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity, GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees on energy products
International Nuclear Information System (INIS)
2004-01-01
For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity, GWh, Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside back cover of the Review shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption; Changes in the volume of GNP and electricity consumption; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Consumer prices of principal oil products; Fuel prices for heat production; Fuel prices for electricity production; Carbon dioxide emissions; Total energy consumption by source and CO2 emissions; Electricity supply; Energy imports by country of origin in January-June 2000; Energy exports by recipient country in January-June 2000; Consumer prices of liquid fuels; Consumer prices of hard coal, natural gas and indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer; Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and Energy taxes and precautionary stock fees on oil products
Mura, Maria Chiara; De Felice, Marco; Morlino, Roberta; Fuselli, Sergio
2010-01-01
In step with the need to develop statistical procedures to manage small-size environmental samples, in this work we have used concentration values of benzene (C6H6), concurrently detected by seven outdoor and indoor monitoring stations over 12 000 minutes, in order to assess the representativeness of the collected data and the impact of the pollutant on the indoor environment. Clearly, the former issue is strictly connected to sampling-site geometry, which proves critical to correctly retrieving information from the analysis of pollutants of sanitary interest. Therefore, according to current criteria for network planning, single stations have been interpreted as nodes of a set of adjoining triangles; then, a) node pairs have been taken into account in order to estimate pollutant stationarity on triangle sides, as well as b) node triplets, to statistically associate data from air monitoring with the corresponding territory area, and c) node sextuplets, to assess the impact probability of the outdoor pollutant on the indoor environment for each area. Distributions from the various node combinations are all non-Gaussian; consequently, Kruskal-Wallis (KW) non-parametric statistics have been exploited to test variability of the continuous density function from each pair, triplet and sextuplet. Results from the above-mentioned statistical analysis have shown randomness of site selection, which has not allowed a reliable generalization of monitoring data to the entire selected territory, except for a single "forced" case (70%); most important, they suggest a possible procedure to optimize network design.
Directory of Open Access Journals (Sweden)
Maria Chiara Mura
2010-12-01
Full Text Available In step with the need to develop statistical procedures to manage small-size environmental samples, in this work we have used concentration values of benzene (C6H6), concurrently detected by seven outdoor and indoor monitoring stations over 12 000 minutes, in order to assess the representativeness of the collected data and the impact of the pollutant on the indoor environment. Clearly, the former issue is strictly connected to sampling-site geometry, which proves critical to correctly retrieving information from the analysis of pollutants of sanitary interest. Therefore, according to current criteria for network planning, single stations have been interpreted as nodes of a set of adjoining triangles; then, a) node pairs have been taken into account in order to estimate pollutant stationarity on triangle sides, as well as b) node triplets, to statistically associate data from air monitoring with the corresponding territory area, and c) node sextuplets, to assess the impact probability of the outdoor pollutant on the indoor environment for each area. Distributions from the various node combinations are all non-Gaussian; consequently, Kruskal-Wallis (KW) non-parametric statistics have been exploited to test variability of the continuous density function from each pair, triplet and sextuplet. Results from the above-mentioned statistical analysis have shown randomness of site selection, which has not allowed a reliable generalization of monitoring data to the entire selected territory, except for a single "forced" case (70%); most important, they suggest a possible procedure to optimize network design.
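The Kruskal-Wallis comparison described in the benzene-monitoring study above can be sketched as follows. The station readings and distribution parameters here are invented for illustration, and SciPy is assumed to be available:

```python
# Sketch of a Kruskal-Wallis (KW) test on a pair of monitoring nodes,
# as used in the study to check pollutant stationarity on a triangle side.
# The benzene concentrations below are synthetic (lognormal, i.e. non-Gaussian).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
node_a = rng.lognormal(mean=1.0, sigma=0.4, size=200)  # hypothetical outdoor station
node_b = rng.lognormal(mean=1.1, sigma=0.4, size=200)  # hypothetical adjacent station

h, p = stats.kruskal(node_a, node_b)
print(f"H = {h:.2f}, p = {p:.4f}")
```

A small p-value would argue against stationarity of the pollutant along the triangle side joining the two nodes; the same call extends to node triplets and sextuplets by passing more samples.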
Statistical concepts a second course
Lomax, Richard G
2012-01-01
Statistical Concepts consists of the last 9 chapters of An Introduction to Statistical Concepts, 3rd ed. Designed for the second course in statistics, it is one of the few texts that focuses just on intermediate statistics. The book highlights how statistics work and what they mean to better prepare students to analyze their own data and interpret SPSS and research results. As such it offers more coverage of non-parametric procedures used when standard assumptions are violated since these methods are more frequently encountered when working with real data. Determining appropriate sample sizes
Mathematical statistics and stochastic processes
Bosq, Denis
2013-01-01
Generally, books on mathematical statistics are restricted to the case of independent identically distributed random variables. In this book, however, both this case and the case of dependent variables, i.e. statistics for discrete and continuous time processes, are studied. This second case is very important for today's practitioners. Mathematical Statistics and Stochastic Processes is based on decision theory and asymptotic statistics and contains up-to-date information on the relevant topics of theory of probability, estimation, confidence intervals, non-parametric statistics and rob
Grootes, M. W.; Tuffs, R. J.; Popescu, C. C.; Robotham, A. S. G.; Seibert, M.; Kelvin, L. S.
2014-02-01
We present a non-parametric cell-based method of selecting highly pure and largely complete samples of spiral galaxies using photometric and structural parameters as provided by standard photometric pipelines and simple shape-fitting algorithms. The performance of the method is quantified for different parameter combinations, using purely human-based classifications as a benchmark. The discretization of the parameter space allows a markedly superior selection than commonly used proxies relying on a fixed curve or surface of separation. Moreover, we find structural parameters derived using passbands longwards of the g band and linked to older stellar populations, especially the stellar mass surface density μ* and the r-band effective radius r_e, to perform at least equally well as parameters more traditionally linked to the identification of spirals by means of their young stellar populations, e.g. UV/optical colours. In particular, the distinct bimodality in the parameter μ*, consistent with expectations of different evolutionary paths for spirals and ellipticals, represents an often overlooked yet powerful parameter in differentiating between spiral and non-spiral/elliptical galaxies. We use the cell-based method for the optical parameter set including r_e in combination with the Sérsic index n and the i-band magnitude to investigate the intrinsic specific star formation rate-stellar mass relation (ψ*-M*) for a morphologically defined volume-limited sample of local Universe spiral galaxies. The relation is found to be well described by ψ* ∝ M*^{-0.5} over the range 10^9.5 ≤ M* ≤ 10^11 M⊙, with a mean interquartile range of 0.4 dex. This is somewhat steeper than previous determinations based on colour-selected samples of star-forming galaxies, primarily due to the inclusion in the sample of red quiescent discs.
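The quoted relation ψ* ∝ M*^{-0.5} is a power law, so it becomes a straight line of slope -0.5 in log-log space. A minimal sketch with synthetic points placed exactly on the relation (the normalization is arbitrary and not from the paper):

```python
# Recover the power-law index of ssfr ∝ M*^{-0.5} by a linear fit in log-log space.
# Points are synthetic and lie exactly on the relation over the quoted mass range.
import numpy as np

log_mstar = np.linspace(9.5, 11.0, 20)   # log10(M*/Msun), range quoted in the abstract
log_ssfr = -0.5 * log_mstar + 0.0        # toy normalization (assumed)
slope, intercept = np.polyfit(log_mstar, log_ssfr, 1)
print(round(slope, 3))
```

With real data the fitted slope would carry scatter (the abstract quotes a 0.4 dex mean interquartile range), but the log-log fit is the standard way to estimate such an index.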
Benchmarking in Foodservice Operations
National Research Council Canada - National Science Library
Johnson, Bonnie
1998-01-01
The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...
Directory of Open Access Journals (Sweden)
Charles Onyutha
2017-10-01
Full Text Available Some of the problems in drought assessments are that: analyses tend to focus on coarse temporal scales, many of the methods yield skewed indices, a few terminologies are used ambiguously, and analyses carry an implicit assumption that the observations come from a stationary process. To address these problems, this paper introduces non-stationary frequency analyses of quantiles. How to use non-parametric rescaling to obtain robust indices that are not (or only minimally) skewed is also introduced. To avoid ambiguity, some concepts (e.g., incidence and extremity) were revisited through a shift from the monthly to the daily time scale. The introduced methods were demonstrated using daily flow and precipitation insufficiency (precipitation minus potential evapotranspiration) from the Blue Nile basin in Africa. Results show that, when a significant trend exists in extreme events, stationarity-based quantiles can be far different from those obtained when non-stationarity is considered. The introduced non-parametric indices were found to closely agree with the well-known standardized precipitation evapotranspiration indices in most respects except skewness. Apart from revisiting some concepts, the advantages of using fine instead of coarse time scales in drought assessment were given. Links for obtaining freely downloadable tools implementing the introduced methods were provided.
Rights, Jason D; Sterba, Sonya K
2016-11-01
Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.
Benchmarking and Performance Measurement.
Town, J. Stephen
This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…
Benchmarking in the Netherlands
International Nuclear Information System (INIS)
1999-01-01
In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors have contributed to a growth in the number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies
International Nuclear Information System (INIS)
Buccheri, R.; Coffaro, P.; Di Gesu, V.; Salemi, S.; Colomba, G.
1975-01-01
Preliminary results are given of the application of a direct non-parametric pattern recognition method to the classification of the pictures of a multiwire spark chamber. The method, developed in an earlier work for an optical spark chamber, looks promising. The picture sample used has, with respect to the previous one, the following characteristics: a) the event pictures have a more complicated structure; b) the number of background sparks in an event is greater; c) there exists a kind of noise which is almost always present in some structured way (double sparking, bursts...). New features have been used to characterize the event pictures; the results show that the method could also be used as a super filter to reduce the cost of further analysis. (Auth.)
U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...
International Nuclear Information System (INIS)
Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe
2013-01-01
Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physics modelling and safety analysis capability, based on a code package consisting of the best-estimate 3D neutronics (PANTHER), system thermal-hydraulics (RELAP5), core sub-channel thermal-hydraulics (COBRA-3C), and fuel thermal-mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies has been developed to perform and to license the reactor safety analysis and core reload design, based on the deterministic bounding approach. Following recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physics modelling and licensing safety analyses. In this paper, the TE multi-physics modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistics or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod code benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
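The non-parametric order-statistics method named in this record (and described at length in the fuel rod uncertainty record in this collection) rests on Wilks' formula: the largest of n independent best-estimate code runs bounds the 0.95-quantile of the output with 95% confidence once 1 − 0.95ⁿ ≥ 0.95. A sketch of this classic one-sided 95/95 sample-size computation (the function name is ours):

```python
# Wilks' first-order, one-sided sample-size rule behind a 95/95 statement:
# smallest n such that the sample maximum of n runs covers the gamma-quantile
# with confidence beta, i.e. 1 - gamma**n >= beta.
def wilks_sample_size(gamma=0.95, beta=0.95):
    n = 1
    while 1 - gamma**n < beta:
        n += 1
    return n

n = wilks_sample_size()
print(n)  # 59: the classic number of Monte Carlo runs for a 95/95 criterion
```

This is why 59 code runs is the minimum sample size commonly quoted for single-output 95/95 tolerance statements; covering several outputs simultaneously, or using higher-order statistics, raises the required number of runs.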
Benchmarking for Higher Education.
Jackson, Norman, Ed.; Lund, Helen, Ed.
The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…
Benchmarking semantic web technology
García-Castro, R
2009-01-01
This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:
Benchmarking in University Toolbox
Directory of Open Access Journals (Sweden)
Katarzyna Kuźmicz
2015-06-01
Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.
Benchmarking Academic Anatomic Pathologists
Directory of Open Access Journals (Sweden)
Barbara S. Ducatman MD
2016-10-01
Full Text Available The most common benchmarks for faculty productivity are derived from Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization's methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical "full-time faculty" (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs–reported median work relative
Benchmarking monthly homogenization algorithms
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
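Performance metric (i) above, the centered root-mean-square error, can be sketched as follows with invented series; "centered" means each series' own mean is removed first, so a constant station-wide offset is not penalized, only the shape of the remaining errors:

```python
# Centered RMSE of a homogenized series relative to the true homogeneous series,
# the first validation metric named in the HOME benchmark description.
import numpy as np

def centered_rmse(homogenized, truth):
    """RMSE after subtracting each series' own mean (a pure offset scores zero)."""
    h = homogenized - homogenized.mean()
    t = truth - truth.mean()
    return float(np.sqrt(np.mean((h - t) ** 2)))

truth = np.sin(np.linspace(0.0, 6.0, 120))   # stand-in for a true monthly series
homog = truth + 0.3                          # homogenized series off by a constant
err = centered_rmse(homog, truth)
print(err)  # essentially zero: a constant offset is not an inhomogeneity
```

In the benchmark this metric is computed at several averaging scales (monthly, annual, decadal) and both per station and for the network-average regional series.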
International Nuclear Information System (INIS)
Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.
1991-01-01
Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems
Benchmarking af kommunernes sagsbehandling
DEFF Research Database (Denmark)
Amilon, Anna
From 2007, the National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice investigations with a view to better follow-up, and to improve the municipalities' case processing. This working paper discusses methods for benchmarking...
P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel
1998-01-01
Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It
Canadian Health Libraries Association.
Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…
International Nuclear Information System (INIS)
Behringer, K.; Spiekerman, G.
1984-01-01
Piety (1977) proposed an automated signature analysis of power spectral density data. Eight statistical decision discriminants are introduced. For nearly all the discriminants, improved confidence statements can be made. The statistical characteristics of the last three discriminants, which are applications of non-parametric tests, are considered. (author)
DEFF Research Database (Denmark)
Seabrooke, Leonard; Wigan, Duncan
2015-01-01
Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....
International Nuclear Information System (INIS)
Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.
1995-01-01
This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. The paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When a particular compiler option and math library were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes of IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined to be the unit of performance, the EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs, such as Pentiums, i486, DEC Alpha and so forth; the performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated in correlation with industry benchmark programs, namely SPECmark. (author)
International Nuclear Information System (INIS)
Mohammadi, Hassan; Ram, Rati
2017-01-01
Noting the paucity of studies of convergence in energy consumption across the US states, and the usefulness of a study that shares the spirit of the enormous research on convergence in energy-related variables in cross-country contexts, this paper explores convergence in per-capita energy consumption across the US states over the 44-year period 1970–2013. Several well-known parametric and non-parametric approaches are explored partly to shed light on the substantive question and partly to provide a comparative methodological perspective on these approaches. Several statements summarize the outcome of our explorations. First, the widely-used Barro-type regressions do not indicate beta-convergence during the entire period or any of several sub-periods. Second, lack of sigma-convergence is also noted in terms of standard deviation of logarithms and coefficient of variation which do not show a decline between 1970 and 2013, but show slight upward trends. Third, kernel density function plots indicate some flattening of the distribution which is consistent with the results from sigma-convergence scenario. Fourth, intra-distribution mobility (“gamma convergence”) in terms of an index of rank concordance suggests a slow decline in the index. Fifth, the general impression from several types of panel and time-series unit-root tests is that of non-stationarity of the series and thus the lack of stochastic convergence during the period. Sixth, therefore, the overall impression seems to be that of the lack of convergence across states in per-capita energy consumption. The present interstate inequality in per-capita energy consumption may, therefore, reflect variations in structural factors and might not be expected to diminish.
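The sigma-convergence test summarized in the second point can be illustrated with toy panel data: convergence would show as a declining cross-state coefficient of variation of per-capita energy consumption over time, whereas the paper reports a slight upward trend. The state values below are invented:

```python
# Sigma-convergence sketch: compare the cross-sectional coefficient of
# variation (std/mean) of per-capita energy use in an early and a late year.
import numpy as np

def coeff_of_variation(x):
    return float(np.std(x) / np.mean(x))

# rows = years, columns = states (toy data: dispersion widens over time)
panel = np.array([
    [300.0, 310.0, 295.0, 305.0],   # early year
    [302.0, 318.0, 288.0, 310.0],   # later year, wider spread
])
cv = [coeff_of_variation(year) for year in panel]
print(cv[0] < cv[1])  # True here: rising CV is evidence against sigma-convergence
```

The same panel layout also supports the other checks the paper runs, e.g. the standard deviation of logarithms per year or rank concordance of states across years.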
2011-01-01
Background Nonparametric item response theory (IRT) was used to examine (a) the performance of the 30 Positive and Negative Syndrome Scale (PANSS) items and their options (levels of severity), (b) the effectiveness of various subscales to discriminate among differences in symptom severity, and (c) the development of an abbreviated PANSS (Mini-PANSS) based on IRT and a method to link scores to the original PANSS. Methods Baseline PANSS scores from 7,187 patients with Schizophrenia or Schizoaffective disorder who were enrolled between 1995 and 2005 in psychopharmacology trials were obtained. Option characteristic curves (OCCs) and Item Characteristic Curves (ICCs) were constructed to examine the probability of rating each of seven options within each of 30 PANSS items as a function of subscale severity, and summed-score linking was applied to items selected for the Mini-PANSS. Results The majority of items forming the Positive and Negative subscales (i.e. 19 items) performed very well and discriminate better along symptom severity compared to the General Psychopathology subscale. Six of the seven Positive Symptom items, six of the seven Negative Symptom items, and seven out of the 16 General Psychopathology items were retained for inclusion in the Mini-PANSS. Summed score linking and linear interpolation was able to produce a translation table for comparing total subscale scores of the Mini-PANSS to total subscale scores on the original PANSS. Results show scores on the subscales of the Mini-PANSS can be linked to scores on the original PANSS subscales, with very little bias. Conclusions The study demonstrated the utility of non-parametric IRT in examining the item properties of the PANSS and to allow selection of items for an abbreviated PANSS scale. The comparisons between the 30-item PANSS and the Mini-PANSS revealed that the shorter version is comparable to the 30-item PANSS, but when applying IRT, the Mini-PANSS is also a good indicator of illness severity
Khan, Anzalee; Lewis, Charles; Lindenmayer, Jean-Pierre
2011-11-16
Nonparametric item response theory (IRT) was used to examine (a) the performance of the 30 Positive and Negative Syndrome Scale (PANSS) items and their options (levels of severity), (b) the effectiveness of various subscales to discriminate among differences in symptom severity, and (c) the development of an abbreviated PANSS (Mini-PANSS) based on IRT and a method to link scores to the original PANSS. Baseline PANSS scores from 7,187 patients with Schizophrenia or Schizoaffective disorder who were enrolled between 1995 and 2005 in psychopharmacology trials were obtained. Option characteristic curves (OCCs) and Item Characteristic Curves (ICCs) were constructed to examine the probability of rating each of seven options within each of 30 PANSS items as a function of subscale severity, and summed-score linking was applied to items selected for the Mini-PANSS. The majority of items forming the Positive and Negative subscales (i.e. 19 items) performed very well and discriminate better along symptom severity compared to the General Psychopathology subscale. Six of the seven Positive Symptom items, six of the seven Negative Symptom items, and seven out of the 16 General Psychopathology items were retained for inclusion in the Mini-PANSS. Summed score linking and linear interpolation was able to produce a translation table for comparing total subscale scores of the Mini-PANSS to total subscale scores on the original PANSS. Results show scores on the subscales of the Mini-PANSS can be linked to scores on the original PANSS subscales, with very little bias. The study demonstrated the utility of non-parametric IRT in examining the item properties of the PANSS and to allow selection of items for an abbreviated PANSS scale. The comparisons between the 30-item PANSS and the Mini-PANSS revealed that the shorter version is comparable to the 30-item PANSS, but when applying IRT, the Mini-PANSS is also a good indicator of illness severity.
Energy Technology Data Exchange (ETDEWEB)
Constantinescu, C C; Yoder, K K; Normandin, M D; Morris, E D [Department of Radiology, Indiana University School of Medicine, Indianapolis, IN (United States); Kareken, D A [Department of Neurology, Indiana University School of Medicine, Indianapolis, IN (United States); Bouman, C A [Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN (United States); O' Connor, S J [Department of Psychiatry, Indiana University School of Medicine, Indianapolis, IN (United States)], E-mail: emorris@iupui.edu
2008-03-07
We previously developed a model-independent technique (non-parametric ntPET) for extracting the transient changes in neurotransmitter concentration from paired (rest and activation) PET studies with a receptor ligand. To provide support for our method, we introduced three hypotheses of validation based on work by Endres and Carson (1998 J. Cereb. Blood Flow Metab. 18 1196-210) and Yoder et al (2004 J. Nucl. Med. 45 903-11), and tested them on experimental data. All three hypotheses describe relationships between the estimated free (synaptic) dopamine curves (F{sup DA}(t)) and the change in binding potential ({delta}BP). The veracity of the F{sup DA}(t) curves recovered by nonparametric ntPET is supported when the data adhere to the following hypothesized behaviors: (1) {delta}BP should decline with increasing DA peak time, (2) {delta}BP should increase as the strength of the temporal correlation between F{sup DA}(t) and the free raclopride (F{sup RAC}(t)) curve increases, (3) {delta}BP should decline linearly with the effective weighted availability of the receptor sites. We analyzed regional brain data from 8 healthy subjects who received two [{sup 11}C]raclopride scans: one at rest, and one during which unanticipated IV alcohol was administered to stimulate dopamine release. For several striatal regions, nonparametric ntPET was applied to recover F{sup DA}(t), and binding potential values were determined. Kendall rank-correlation analysis confirmed that the F{sup DA}(t) data followed the expected trends for all three validation hypotheses. Our findings lend credence to our model-independent estimates of F{sup DA}(t). Application of nonparametric ntPET may yield important insights into how alterations in timing of dopaminergic neurotransmission are involved in the pathologies of addiction and other psychiatric disorders.
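Hypothesis (1) above is a monotone-trend claim, so the Kendall rank-correlation used to confirm it reduces to counting concordant and discordant pairs. A minimal self-contained sketch with toy numbers, not the study's peak-time or ΔBP values:

```python
def kendall_tau(x, y):
    """Kendall rank-correlation coefficient (tau-a, no tie correction):
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# toy check: strictly later DA peak times paired with smaller delta-BP
peak_times = [10, 20, 30, 40, 50]
delta_bp = [0.30, 0.22, 0.18, 0.11, 0.05]
tau = kendall_tau(peak_times, delta_bp)   # -1.0: perfectly discordant
```

A significantly negative tau on real regional data is the kind of evidence the validation hypotheses require.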
Verification and validation benchmarks.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy
2007-02-01
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
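One recommended class of code verification benchmarks, manufactured or classical analytical solutions, can be illustrated by checking the observed convergence order of a discretization against an exact solution. The 1D Poisson problem below is a generic illustration, not an example from the paper:

```python
import numpy as np

def solve_poisson(n):
    """Solve u'' = f on [0, pi] with u(0) = u(pi) = 0 by second-order central
    differences, using the manufactured solution u = sin(x) (so f = -sin(x)).
    Returns the max-norm discretization error against the exact solution."""
    x = np.linspace(0.0, np.pi, n + 1)
    h = x[1] - x[0]
    f = -np.sin(x)
    # tridiagonal second-difference operator on the interior nodes
    main = -2.0 * np.ones(n - 1)
    off = np.ones(n - 2)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f[1:-1])
    return np.max(np.abs(u - np.sin(x)))

# halving h should cut the error by ~4x for a second-order scheme
e1, e2 = solve_poisson(32), solve_poisson(64)
order = np.log2(e1 / e2)   # observed convergence order, expected near 2
```

Recovering the formal order of accuracy on a problem with a known answer is the essence of a code verification benchmark; validation benchmarks, by contrast, need experimental data.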
CSIR Research Space (South Africa)
Ntaka, L
2013-08-01
Full Text Available. In this work, a statistical inference approach, specifically non-parametric bootstrapping and a linear model, was applied. Data used to develop the model were sourced from the literature. 104 data points with information on aggregation, natural organic matter...
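Non-parametric bootstrapping of the kind mentioned resamples the observed data with replacement to obtain a confidence interval without distributional assumptions. The sample values below are made up for illustration:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for an arbitrary statistic: resample the data
    with replacement n_boot times and read off the empirical quantiles."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2))]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
sample = [2.1, 2.4, 1.9, 2.8, 3.0, 2.2, 2.5, 2.7, 2.0, 2.6]
low, high = bootstrap_ci(sample, mean)   # 95% CI for the mean
```

The same resampling loop works unchanged for regression coefficients, which is what makes the approach attractive alongside a linear model.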
Benchmarking and the laboratory
Galloway, M; Nadin, L
2001-01-01
This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112
Shielding benchmark problems, (2)
International Nuclear Information System (INIS)
Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.
1980-02-01
Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)
Toxicological Benchmarks for Wildlife
Energy Technology Data Exchange (ETDEWEB)
Sample, B.E.; Opresko, D.M.; Suter, G.W.
1993-01-01
Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
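The tier-1 screening logic described above amounts to comparing measured concentrations against benchmark values and retaining exceedances as COPCs. The benchmark numbers below are hypothetical placeholders, not values from the report:

```python
# hypothetical NOAEL-based benchmarks in mg/kg (illustrative only)
noael_benchmarks = {"cadmium": 1.0, "lead": 4.0, "zinc": 120.0}

def screen_contaminants(measured, benchmarks):
    """Tier-1 screen: retain as a contaminant of potential concern (COPC)
    any contaminant whose measured concentration exceeds its benchmark."""
    return [name for name, conc in measured.items()
            if name in benchmarks and conc > benchmarks[name]]

# hypothetical site measurements, mg/kg
site = {"cadmium": 0.4, "lead": 9.5, "zinc": 80.0}
copcs = screen_contaminants(site, noael_benchmarks)   # ["lead"]
```

Contaminants that pass the screen would then move to the tier-2, weight-of-evidence assessment rather than being declared safe outright.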
Diagnostic Algorithm Benchmarking
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Benchmarking Swiss electricity grids
International Nuclear Information System (INIS)
Walti, N.O.; Weber, Ch.
2001-01-01
This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed and the results of the pilot study are presented that are to form the basis of benchmarking procedures for the grid regulation authorities under the planned Switzerland's electricity market law. Examples of the practical use of the benchmarking methods are given and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article
DEFF Research Database (Denmark)
Agrell, Per J.; Bogetoft, Peter
. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
Financial Integrity Benchmarks
City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....
Benchmarking in Foodservice Operations
National Research Council Canada - National Science Library
Johnson, Bonnie
1998-01-01
.... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...
Statistical Analysis of Data for Timber Strengths
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
2003-01-01
Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analysis and fits have been investigated for the following distribution types: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull...... fits to the data available, especially if tail fits are used whereas the Lognormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors...... for timber are investigated....
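Of the candidate distributions listed, the 2-parameter Weibull has no closed-form maximum-likelihood fit, but its shape equation can be solved by bisection, after which the 5th-percentile (characteristic) strength follows directly. The simulated strengths and parameters below are illustrative, not the study's timber data:

```python
import numpy as np

def fit_weibull(x):
    """2-parameter Weibull MLE. The shape k solves
    sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0 (monotone in k),
    found here by bisection; the scale then has a closed form."""
    lx = np.log(x)

    def g(k):
        xk = x**k
        return (xk * lx).sum() / xk.sum() - 1.0 / k - lx.mean()

    lo, hi = 0.1, 20.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    k = 0.5 * (lo + hi)
    lam = np.mean(x**k) ** (1.0 / k)
    return k, lam

rng = np.random.default_rng(1)
strengths = 40.0 * rng.weibull(3.5, 200)   # illustrative bending strengths, MPa

k, lam = fit_weibull(strengths)
# 5th-percentile characteristic value: F(x) = 1 - exp(-(x/lam)^k) = 0.05
char_strength = lam * (-np.log(0.95)) ** (1.0 / k)
```

Timber design codes typically work from such lower-tail quantiles, which is why the abstract stresses tail fits over whole-sample fits.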
International Nuclear Information System (INIS)
Choy, J.H.
1979-06-01
A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base
Accelerator shielding benchmark problems
International Nuclear Information System (INIS)
Hirayama, H.; Ban, S.; Nakamura, T.
1993-01-01
Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)
International Nuclear Information System (INIS)
Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.
1978-09-01
Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)
Couvy-Duchesne, Baptiste; Davenport, Tracey A; Martin, Nicholas G; Wright, Margaret J; Hickie, Ian B
2017-08-01
The Somatic and Psychological HEalth REport (SPHERE) is a 34-item self-report questionnaire that assesses symptoms of mental distress and persistent fatigue. As it was developed as a screening instrument for use mainly in primary care-based clinical settings, its validity and psychometric properties have not been studied extensively in population-based samples. We used non-parametric Item Response Theory to assess scale validity and item properties of the SPHERE-34 scales, collected through four waves of the Brisbane Longitudinal Twin Study (N = 1707, mean age = 12, 51% females; N = 1273, mean age = 14, 50% females; N = 1513, mean age = 16, 54% females, N = 1263, mean age = 18, 56% females). We estimated the heritability of the new scores, their genetic correlation, and their predictive ability in a sub-sample (N = 1993) who completed the Composite International Diagnostic Interview. After excluding items most responsible for noise, sex or wave bias, the SPHERE-34 questionnaire was reduced to 21 items (SPHERE-21), comprising a 14-item scale for anxiety-depression and a 10-item scale for chronic fatigue (3 items overlapping). These new scores showed high internal consistency (alpha > 0.78), moderate three months reliability (ICC = 0.47-0.58) and item scalability (Hi > 0.23), and were positively correlated (phenotypic correlations r = 0.57-0.70; rG = 0.77-1.00). Heritability estimates ranged from 0.27 to 0.51. In addition, both scores were associated with later DSM-IV diagnoses of MDD, social anxiety and alcohol dependence (OR in 1.23-1.47). Finally, a post-hoc comparison showed that several psychometric properties of the SPHERE-21 were similar to those of the Beck Depression Inventory. The scales of SPHERE-21 measure valid and comparable constructs across sex and age groups (from 9 to 28 years). SPHERE-21 scores are heritable, genetically correlated and show good predictive ability of mental health in an Australian-based population
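The internal-consistency figures quoted (alpha > 0.78) use Cronbach's alpha, which is computed from the item variances and the variance of the summed score. A sketch on simulated single-trait data, not the SPHERE responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(7)
trait = rng.normal(size=300)
# four items loading on one latent trait plus independent noise
scores = np.column_stack([trait + rng.normal(0, 0.8, 300) for _ in range(4)])
alpha = cronbach_alpha(scores)
```

High alpha alone does not establish unidimensionality, which is one reason the study pairs it with non-parametric IRT scalability checks.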
Directory of Open Access Journals (Sweden)
Stochl Jan
2012-06-01
Full Text Available Abstract Background Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than, for example, the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability, without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Methods Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrate the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12 item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. Results and conclusions After an initial analysis example in which we select items by phrasing (six positively versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12) – when binary scored – were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech's "well-being" and "distress" clinical scales). An illustration of ordinal item analysis
Stochl, Jan; Jones, Peter B; Croudace, Tim J
2012-06-11
Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than for example the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability, without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrate the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12 item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. After an initial analysis example in which we select items by phrasing (six positive versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12)--when binary scored--were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech's "well-being" and "distress" clinical scales). An illustration of ordinal item analysis confirmed that all 14 positively worded items of the Warwick-Edinburgh Mental
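Mokken scalability is usually summarized by Loevinger's H, one minus the ratio of observed Guttman errors to those expected under marginal independence. A self-contained sketch for binary items, run on simulated monotone-item data rather than the GHQ-12 responses:

```python
import numpy as np

def loevinger_h(data):
    """Scale-level Loevinger H for binary items (rows = respondents).
    A Guttman error is endorsing a harder item while rejecting an easier one;
    H = 1 - (observed errors) / (errors expected under independence)."""
    X = np.asarray(data, dtype=int)
    n, k = X.shape
    p = X.mean(axis=0)                  # item popularities
    order = np.argsort(-p)              # easiest (most endorsed) first
    X = X[:, order]
    obs = exp = 0.0
    for i in range(k):
        for j in range(i + 1, k):       # item j is harder than item i
            obs += np.sum((X[:, j] == 1) & (X[:, i] == 0))
            exp += n * p[order[j]] * (1 - p[order[i]])
    return 1 - obs / exp

rng = np.random.default_rng(3)
theta = rng.normal(size=500)
difficulty = np.array([-1.0, -0.3, 0.4, 1.1])
# monotone items: endorsement probability rises with the latent trait
probs = 1 / (1 + np.exp(-(theta[:, None] - difficulty)))
responses = (rng.random((500, 4)) < probs).astype(int)
h = loevinger_h(responses)
```

By the usual rule of thumb H above roughly 0.3 indicates a usable Mokken scale, with higher values for stronger scales.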
Benchmarking electricity distribution
Energy Technology Data Exchange (ETDEWEB)
Watts, K. [Department of Justice and Attorney-General, QLD (Australia)
1995-12-31
Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.
The KMAT: Benchmarking Knowledge Management.
de Jager, Martha
Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…
Benchmarking the Netherlands. Benchmarking for growth
International Nuclear Information System (INIS)
2003-01-01
This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout
Benchmarking in Mobarakeh Steel Company
Directory of Open Access Journals (Sweden)
Sasan Ghasemi
2008-05-01
Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and to illustrate how the project's systematic implementation led to success.
Shielding Benchmark Computational Analysis
International Nuclear Information System (INIS)
Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.
2000-01-01
Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, the development of radiation transport computer codes, and the building of accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and the continuous-energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).
DEFF Research Database (Denmark)
Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela
This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...
Directory of Open Access Journals (Sweden)
Steven M Carr
-stepping-stone biogeographic models, but not a simple 1-step trans-Atlantic model. Plots of the cumulative pairwise sequence difference curves among seals in each of the four populations provide continuous proxies for phylogenetic diversification within each. Non-parametric Kolmogorov-Smirnov (K-S) tests of maximum pairwise differences between these curves indicate that the Greenland Sea population has a markedly younger phylogenetic structure than either the White Sea population or the two Northwest Atlantic populations, which are of intermediate age and homogeneous structure. The Monte Carlo and K-S assessments provide sensitive quantitative tests of within-species mitogenomic phylogeography. This is the first study to indicate that the White Sea and Greenland Sea populations have different population genetic histories. The analysis supports the hypothesis that Harp Seals comprise three genetically distinguishable breeding populations, in the White Sea, Greenland Sea, and Northwest Atlantic. Implications for an ice-dependent species during ongoing climate change are discussed.
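The two-sample K-S comparison described in this abstract can be illustrated with SciPy's `ks_2samp` on synthetic data. The gamma-distributed "pairwise difference" proxies below are invented stand-ins for illustration, not the Harp Seal mitogenome data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic stand-ins for pairwise-difference distributions of two populations:
# a "younger" population with smaller differences, an "older" one with larger.
young_pop = rng.gamma(shape=2.0, scale=1.0, size=200)
older_pop = rng.gamma(shape=4.0, scale=1.0, size=200)

# Two-sample Kolmogorov-Smirnov test: the statistic is the maximum distance
# between the two empirical cumulative distribution functions.
ks_stat, p_value = stats.ks_2samp(young_pop, older_pop)
```

With clearly separated distributions like these, the test rejects homogeneity at any conventional significance level.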
HPCG Benchmark Technical Specification
Energy Technology Data Exchange (ETDEWEB)
Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)
2013-10-01
The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
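The solver at the heart of HPCG can be illustrated by a minimal, unpreconditioned conjugate gradient iteration in NumPy. HPCG itself adds a symmetric Gauss-Seidel preconditioner and a fixed sparse problem; this sketch only shows the basic CG recurrence on a small dense system:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimal unpreconditioned conjugate gradient solver for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new direction, A-conjugate to p
        rs_old = rs_new
    return x

# Small symmetric positive-definite test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

For an n-by-n SPD matrix, exact arithmetic would converge in at most n iterations; HPCG instead measures sustained floating-point throughput of the sparse kernels involved.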
Benchmarking for Best Practice
Zairi, Mohamed
1998-01-01
Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l
Benchmarking Danish Industries
DEFF Research Database (Denmark)
Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette
2003-01-01
compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...
Bonnet, F; Solignac, S; Marty, J
2008-03-01
The purpose of benchmarking is to establish improvement processes by comparing activities to quality standards. The proposed methodology is illustrated by benchmark business cases performed inside medical facilities, on items such as nosocomial infections or the organization of surgery facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-scorecard numbers and mappings, so that the comparison between different anesthesia-reanimation services willing to start an improvement programme is easy and relevant. This ready-made application is all the more accurate when detailed tariffs of activities are implemented.
International Nuclear Information System (INIS)
Pesic, M.
1998-01-01
A selected set of the RB reactor benchmark cores is presented in this paper. The first results of the validation of the well-known Monte Carlo code MCNP™ and the adjoining neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Experiments, in the near future. (author)
Statistical trend analysis methods for temporal phenomena
International Nuclear Information System (INIS)
Lehtinen, E.; Pulkkinen, U.; Poern, K.
1997-04-01
We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods
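One of the non-parametric tests surveyed in this report, the Cox-Stuart sign test, can be sketched in a few lines. The pairing convention and the synthetic data below are illustrative assumptions, not taken from the report:

```python
import numpy as np
from scipy import stats

def cox_stuart(x):
    """Cox-Stuart sign test for a monotonic trend (two-sided p-value).
    Pairs each observation in the first half with its counterpart in the
    second half and tests whether the count of positive differences
    deviates from Binomial(m, 0.5)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    half = n // 2
    diffs = x[half + n % 2:] - x[:half]   # skip the middle point if n is odd
    diffs = diffs[diffs != 0]             # ties are discarded by convention
    n_pos = int(np.sum(diffs > 0))
    return stats.binomtest(n_pos, n=len(diffs), p=0.5).pvalue

rng = np.random.default_rng(0)
trend = np.arange(40) * 0.5 + rng.normal(0, 1.0, 40)   # clear upward trend
noise = rng.normal(0, 1.0, 40)                          # no trend
p_trend, p_noise = cox_stuart(trend), cox_stuart(noise)
```

For the trending series every first-half/second-half difference is strongly positive, so the test rejects the no-trend hypothesis; for pure noise the p-value is just a draw from an approximately uniform distribution.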
Statistical trend analysis methods for temporal phenomena
Energy Technology Data Exchange (ETDEWEB)
Lehtinen, E.; Pulkkinen, U. [VTT Automation, (Finland); Poern, K. [Poern Consulting, Nykoeping (Sweden)
1997-04-01
We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods. 14 refs, 10 figs.
Energy benchmarking of South Australian WWTPs.
Krampe, J
2013-01-01
Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
Practical statistics in pain research.
Kim, Tae Kyun
2017-10-01
Pain is subjective, while statistics related to pain research are objective. This review was written to help researchers involved in pain research make statistical decisions. The main issues relate to the levels of scales often used in pain research, the choice between parametric and non-parametric statistical methods, and problems arising from repeated measurements. In the field of pain research, parametric statistics have often been applied erroneously. This is closely related to the scales of the data and to repeated measurements. The levels of scales include nominal, ordinal, interval, and ratio scales, and the level of a scale affects the choice between parametric and non-parametric methods. In pain research, the most frequently used pain assessment scale is the ordinal scale, which includes the visual analogue scale (VAS). There is, however, another view that considers the VAS an interval or ratio scale, so that the use of parametric statistics is accepted in practice in some cases. Repeated measurements of the same subjects always complicate statistics: the measurements inevitably correlate with each other, which precludes the application of one-way ANOVA, for which independence between the measurements is necessary. Repeated-measures ANOVA (RM ANOVA), however, permits comparison between correlated measurements as long as the sphericity assumption is satisfied. In conclusion, parametric statistical methods should be used only when the assumptions of parametric statistics, such as normality and sphericity, are established.
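The parametric/non-parametric choice discussed in this review can be illustrated with SciPy on hypothetical VAS-style scores; the group means, spreads, and sample sizes below are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical VAS scores (0-100 mm) for two independent groups
placebo   = np.clip(rng.normal(65, 12, 30), 0, 100)
treatment = np.clip(rng.normal(45, 12, 30), 0, 100)

# Parametric route: assumes interval-scale data and approximate normality
t_stat, p_t = stats.ttest_ind(placebo, treatment)

# Non-parametric route: treats the VAS as ordinal and compares ranks only
u_stat, p_u = stats.mannwhitneyu(placebo, treatment, alternative='two-sided')
```

With a large group difference both routes agree; the interesting cases are small samples with skewed or ordinal data, where only the rank-based test is defensible.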
Benchmarking and Performance Management
Directory of Open Access Journals (Sweden)
Adrian TANTAU
2010-12-01
Full Text Available The relevance of the chosen topic is explained by the meaning of the firm-efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve their firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, determine what performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe and then proposes a method to forecast and benchmark performance.
Bers, Trudy
2012-01-01
Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…
BENCHMARKING VALUE CREATION OF COMPANIES LISTED ON THE JSE'S ALTX
Directory of Open Access Journals (Sweden)
John Beneke
2016-01-01
Full Text Available The primary objective of most companies in today's business world is to maximise shareholders' wealth. Value-based management (VBM) is a management approach that maximises long-term shareholder value using various metrics to determine if wealth was created (or destroyed). Small and medium enterprises (SMEs) are viewed as a means to achieving a dynamic and flourishing private sector as well as to ensure development that is more equitable. This then raises the following questions: can value-based management principles be applied in SMEs to create shareholder wealth; are SMEs able to create value; and how do SMEs compare when benchmarked against each other? For this study, companies listed on the AltX board of the Johannesburg Stock Exchange from 2007 to 2012 were used. Three VBM metrics were used to determine if value was created or destroyed based on financial performance. Data Envelopment Analysis (DEA), which is a non-parametric linear programming technique, was used as the benchmarking tool. The results indicate that a very limited number of companies were deemed efficient in creating value. It was also established that the financial crisis of 2008 and 2009 had a significant negative impact on the AltX companies.
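The DEA technique named in this abstract can be sketched as a textbook input-oriented CCR model solved with linear programming. The three-firm dataset below is made up for illustration, not the AltX data:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores via linear programming.
    X: (m, n) inputs and Y: (s, n) outputs for n decision-making units.
    For each unit k, minimise theta subject to X @ lam <= theta * x_k
    and Y @ lam >= y_k with lam >= 0 (constant returns to scale)."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for k in range(n):
        c = np.r_[1.0, np.zeros(n)]            # minimise theta
        A_in = np.c_[-X[:, k], X]              # X @ lam - theta * x_k <= 0
        A_out = np.c_[np.zeros(s), -Y]         # -Y @ lam <= -y_k
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, k]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

# Three hypothetical firms: one input (capital), one output (value added)
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[1.0, 2.0, 2.0]])
scores = dea_ccr_input(X, Y)   # firm 3 doubles firm 2's input for the same output
```

A score of 1 marks a unit on the efficient frontier; a score of 0.5 says the same output is attainable with half the input.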
Statistical reliability analyses of two wood plastic composite extrusion processes
International Nuclear Information System (INIS)
Crookston, Kevin A.; Mark Young, Timothy; Harper, David; Guess, Frank M.
2011-01-01
Estimates of the reliability of wood plastic composites (WPC) are explored for two industrial extrusion lines. The goal of the paper is to use parametric and non-parametric analyses to examine potential differences in the WPC metrics of reliability for the two extrusion lines that may be helpful for use by the practitioner. A parametric analysis of the extrusion lines reveals some similarities and disparities in the best models; however, a non-parametric analysis reveals unique and insightful differences between Kaplan-Meier survival curves for the modulus of elasticity (MOE) and modulus of rupture (MOR) of the WPC industrial data. The distinctive non-parametric comparisons indicate the source of the differences in strength between the 10.2% and 48.0% fractiles [3,183-3,517 MPa] for MOE and for MOR between the 2.0% and 95.1% fractiles [18.9-25.7 MPa]. Distribution fitting as related to selection of the proper statistical methods is discussed with relevance to estimating the reliability of WPC. The ability to detect statistical differences in the product reliability of WPC between extrusion processes may benefit WPC producers in improving product reliability and safety of this widely used house-decking product. The approach can be applied to many other safety and complex system lifetime comparisons.
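The non-parametric Kaplan-Meier survival curves mentioned in this abstract rest on a simple product-limit recursion, sketched here on toy data (not the WPC extrusion measurements):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.
    times: observed times; events: 1 = failure observed, 0 = censored.
    Returns a list of (time, survival probability) at each failure time."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    uniq = np.unique(times[events == 1])     # distinct failure times
    surv, s = [], 1.0
    for t in uniq:
        n_at_risk = np.sum(times >= t)       # still under observation at t
        d = np.sum((times == t) & (events == 1))
        s *= 1.0 - d / n_at_risk             # conditional survival at t
        surv.append((t, s))
    return surv

# Toy data: failures at times 1, 3, 5; censored observations at 2 and 4
km = kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 0, 1])
```

Censored units leave the risk set without forcing the curve down, which is exactly why the estimator suits reliability data with units still surviving at the end of the test.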
Some statistical issues important to future developments in human radiation research
International Nuclear Information System (INIS)
Vaeth, Michael
1991-01-01
Using his two years experience at the Radiation Effects Research Foundation at Hiroshima, the author tries to outline some of the areas of statistics where methodologies relevant to the future developments in human radiation research are likely to be found. Problems related to statistical analysis of existing data are discussed, together with methodological developments in non-parametric and semi-parametric regression modelling, and interpretation and presentation of results. (Author)
Benchmarking i den offentlige sektor
DEFF Research Database (Denmark)
Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels
2008-01-01
In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies is then discussed, after which...
Statistical analysis applied to safety culture self-assessment
International Nuclear Information System (INIS)
Macedo Soares, P.P.
2002-01-01
Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA means-comparison tests, and the LSD post-hoc multiple-comparison test are discussed. (author)
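The tests listed in this abstract are all available in SciPy; a minimal sketch on synthetic survey-style scores might look like the following (the department names, means, and sample sizes are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical safety-culture survey scores (1-5 scale) for three departments
ops, maint, eng = (rng.normal(mu, 0.6, 40) for mu in (3.8, 3.5, 3.9))

# Kolmogorov-Smirnov test of the standardised scores against a normal
# distribution, used here as a rough normality screen
ks_stat, ks_p = stats.kstest((ops - ops.mean()) / ops.std(ddof=1), 'norm')

# Student's t-test for two groups, one-way ANOVA across all three
t_stat, t_p = stats.ttest_ind(ops, maint)
f_stat, f_p = stats.f_oneway(ops, maint, eng)
```

Note that using the K-S test with parameters estimated from the same sample is conservative (the Lilliefors correction addresses this); an LSD post-hoc comparison would then follow a significant ANOVA result.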
Energy Technology Data Exchange (ETDEWEB)
Conroy, Charlie [Department of Astronomy, Harvard University, Cambridge, MA, 02138 (United States); Van Dokkum, Pieter G. [Department of Astronomy, Yale University, New Haven, CT, 06511 (United States); Villaume, Alexa [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States)
2017-03-10
It is now well-established that the stellar initial mass function (IMF) can be determined from the absorption line spectra of old stellar systems, and this has been used to measure the IMF and its variation across the early-type galaxy population. Previous work focused on measuring the slope of the IMF over one or more stellar mass intervals, implicitly assuming that this is a good description of the IMF and that the IMF has a universal low-mass cutoff. In this work we consider more flexible IMFs, including two-component power laws with a variable low-mass cutoff and a general non-parametric model. We demonstrate with mock spectra that the detailed shape of the IMF can be accurately recovered as long as the data quality is high (S/N ≳ 300 Å⁻¹) and covers a wide wavelength range (0.4–1.0 μm). We apply these flexible IMF models to a high S/N spectrum of the center of the massive elliptical galaxy NGC 1407. Fitting the spectrum with non-parametric IMFs, we find that the IMF in the center shows a continuous rise extending toward the hydrogen-burning limit, with a behavior that is well-approximated by a power law with an index of −2.7. These results provide strong evidence for the existence of extreme (super-Salpeter) IMFs in the cores of massive galaxies.
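The power-law behaviour reported in this abstract can be illustrated with the standard maximum-likelihood (Hill-type) estimator of a power-law slope on synthetic masses. The cutoff and index below merely echo the values quoted; no NGC 1407 data are used:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true = -2.7           # power-law index: dN/dm ∝ m**alpha
m_min = 0.08                # hydrogen-burning limit in solar masses
beta = -alpha_true

# Draw masses from a pure power law on [m_min, ∞) via inverse-CDF sampling:
# CDF F(m) = 1 - (m/m_min)**(1-beta), so m = m_min * (1-u)**(-1/(beta-1))
u = rng.random(100_000)
masses = m_min * (1.0 - u) ** (-1.0 / (beta - 1.0))

# Maximum-likelihood estimate of the slope from the log-mass ratios
beta_hat = 1.0 + len(masses) / np.sum(np.log(masses / m_min))
alpha_hat = -beta_hat
```

With 10⁵ samples the estimator recovers the input index to a few parts in a thousand; the real measurement is of course indirect, inferred from integrated-light absorption features rather than counted stars.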
Cloud benchmarking for performance
Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam
2014-01-01
Date of Acceptance: 20/09/2014 How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups: memory, processor, computa...
Benchmarking reference services: an introduction.
Marshall, J G; Buchanan, H S
1995-01-01
Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.
DEFF Research Database (Denmark)
Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda
2012-01-01
ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care we...... document pronounced regional differences in adherence to guidelines and can help to identify gaps and direct target interventions. It may serve as a tool for assessment and benchmarking the clinical management of HIV-patients in any setting worldwide....
Benchmarking Cloud Storage Systems
Wang, Xing
2014-01-01
With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius
2006-01-01
An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects...... takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...
Benchmarking multimedia performance
Zandi, Ahmad; Sudharsanan, Subramania I.
1998-03-01
With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured, and the system is classified accordingly. At the next step the performance of the system is measured. Many multimedia applications, such as DVD playback, need to run at a specific rate; in this case the measurement of the excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of the problem will be presented and analyzed.
International Nuclear Information System (INIS)
Pavlovichev, A.M.
2001-01-01
Current regulations require that the design of new fuel cycles for nuclear power installations include a calculational justification performed by certified computer codes. This guarantees that the calculational results obtained will be within the limits of the declared uncertainties indicated in the certificate issued by Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. A formal justification of the declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments, or of calculational tests computed with a defined uncertainty by certified precision codes of the MCU type or similar. The current level of international cooperation enlarges the bank of experimental and calculational benchmarks acceptable for the certification of commercial codes used for the design of fuel loadings with MOX fuel. In particular, work is practically finished on forming the list of calculational benchmarks for the certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented.
Directory of Open Access Journals (Sweden)
H. Groessing
2015-02-01
Full Text Available A benchmark study for permeability measurement is presented. Past studies by other research groups, which focused on the reproducibility of 1D permeability measurements, showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results of two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimped fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, including five repetitions. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by a comparison to permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for the correct correlation of measured permeability values and fibre volume contents.
Benchmarking Using Basic DBMS Operations
Crolotte, Alain; Ghazal, Ahmad
The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, the TPC-H became less representative of industry trends as vendors keep tuning their database to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
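A micro-benchmark of basic operations in the spirit of the framework described (a full scan versus an index lookup) can be sketched with SQLite. The table, sizes, and queries below are invented for illustration and are far simpler than the 25-query XMarq workload:

```python
import sqlite3
import time

# Build a small in-memory table to benchmark against
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (id INTEGER, val REAL)')
con.executemany('INSERT INTO t VALUES (?, ?)',
                ((i, i * 0.5) for i in range(100_000)))

def timed(query):
    """Run a query and return (elapsed seconds, result rows)."""
    start = time.perf_counter()
    rows = con.execute(query).fetchall()
    return time.perf_counter() - start, rows

# Full scan: every row must be examined to evaluate the predicate
scan_time, _ = timed('SELECT COUNT(*) FROM t WHERE val > 100')

# Index access: a B-tree lookup replaces the scan
con.execute('CREATE INDEX idx_id ON t(id)')
index_time, (row,) = timed('SELECT val FROM t WHERE id = 4242')
```

Repeating each query many times and reporting medians would be needed for stable numbers; a single run mainly demonstrates the scan-versus-index structure of such a benchmark.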
Benchmarking & European Sustainable Transport Policies
DEFF Research Database (Denmark)
Gudmundsson, H.
2003-01-01
Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to...... contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport) which ran from 2000 to 2003....
Benchmarking in Czech Higher Education
Plaček Michal; Ochrana František; Půček Milan
2015-01-01
The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...
Power reactor pressure vessel benchmarks
International Nuclear Information System (INIS)
Rahn, F.J.
1978-01-01
A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)
MOx Depletion Calculation Benchmark
International Nuclear Information System (INIS)
San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin
2016-01-01
Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues' NEA/NSC/DOC(2013) and documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR with 30% MOX fuel loading. The standard MOX assemblies used in the Saint-Laurent B1 reactor include three zones with different plutonium enrichments: high Pu content (5.64%) in the center zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone.
Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions
Energy Technology Data Exchange (ETDEWEB)
Mathew, Paul; Sartor, Dale; Tschudi, William
2009-07-13
This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions
Energy Technology Data Exchange (ETDEWEB)
Mathew, Paul; Greenberg, Steve; Sartor, Dale
2009-07-13
This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
Díaz, Zuleyka; Segovia, María Jesús; Fernández, José
2005-01-01
Prediction of insurance companies' insolvency has arisen as an important problem in the field of financial research. Most methods applied in the past to tackle this issue are traditional statistical techniques which use financial ratios as explanatory variables. However, these variables often do not satisfy statistical assumptions, which complicates the application of the aforementioned methods. In this paper, a comparative study of the performance of two non-parametric machine learning techniques ...
Martínez-Santiago, O; Marrero-Ponce, Y; Vivas-Reyes, R; Rivera-Borroto, O M; Hurtado, E; Treto-Suarez, M A; Ramos, Y; Vergara-Murillo, F; Orozco-Ugarriza, M E; Martínez-López, Y
2017-05-01
Graph derivative indices (GDIs) have recently been defined over N atoms (N = 2, 3 and 4) simultaneously, based on the concept of derivatives in discrete mathematics (finite differences), analogous to the derivative concept in classical mathematical analysis. These molecular descriptors (MDs) codify topo-chemical and topo-structural information based on the concept of the derivative of a molecular graph with respect to a given event (S) over duplex, triplex and quadruplex relations of atoms (vertices). These GDIs have been successfully applied in the description of physicochemical properties like reactivity, solubility and chemical shift, among others, and in several comparative quantitative structure activity/property relationship (QSAR/QSPR) studies. Although satisfactory results have been obtained in previous modelling studies with the aforementioned indices, it is necessary to develop new, more rigorous analyses to assess the true predictive performance of the novel structure codification. So, in the present paper, an assessment and statistical validation of the performance of these novel approaches in QSAR studies are executed, as well as a comparison with those of other QSAR procedures reported in the literature. To achieve the main aim of this research, QSARs were developed on eight chemical datasets widely used as benchmarks in the evaluation/validation of several QSAR methods and/or many different MDs (fundamentally 3D MDs). Three- to seven-variable QSAR models were built for each chemical dataset, according to the original dissection into training/test sets. The models were developed using multiple linear regression (MLR) coupled with a genetic algorithm as the feature wrapper selection technique in the MobyDigs software. Each family of GDIs (for duplex, triplex and quadruplex) behaved similarly in all the modelling, although there were some exceptions. However, when all families were used in combination, the results achieved were quantitatively
Direct data access protocols benchmarking on DPM
Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina
2015-01-01
The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...
Bouhaj, M.; von Estorff, O.; Peiffer, A.
2017-09-01
In the application of Statistical Energy Analysis (SEA) to complex assembled structures, a purely predictive model often exhibits errors. These errors are mainly due to a lack of accurate modelling of the power transmission mechanism described through the Coupling Loss Factors (CLFs). Experimental SEA (ESEA) is used in practice by the automotive and aerospace industry to verify and update the model, or to derive the CLFs for use in an SEA predictive model when analytical estimates cannot be made. This work is particularly motivated by the lack of procedures that allow an estimate to be made of the variance and confidence intervals of the statistical quantities when using the ESEA technique. The aim of this paper is to introduce procedures enabling a statistical description of measured power input, vibration energies and the derived SEA parameters. Particular emphasis is placed on the identification of structural CLFs of complex built-up structures, comparing different methods. By adopting a Stochastic Energy Model (SEM), the ensemble average in ESEA is also addressed. For this purpose, expressions are obtained to randomly perturb the energy matrix elements and generate individual samples for the Monte Carlo (MC) technique applied to derive the ensemble-averaged CLF. From results of ESEA tests conducted on an aircraft fuselage section, the SEM approach provides a better estimate of CLFs compared to classical matrix inversion methods. The expected range of CLF values and the synthesized energy are used as quality criteria for the matrix inversion, allowing the assessment of critical SEA subsystems which might require a more refined statistical description of the excitation and the response fields. Moreover, the impact of the variance of the normalized vibration energy on the uncertainty of the derived CLFs is outlined.
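The Monte Carlo step described in this abstract — randomly perturbing the measured energy matrix and re-inverting it to obtain an ensemble of coupling terms — can be sketched as below. The energy matrix, the unit-power injection, and the sign convention for the off-diagonal coupling term are all illustrative assumptions; actual SEA formulations differ in how loss factors and modal densities enter the matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-subsystem energy matrix E[i, j]: measured energy of
# subsystem i when unit power is injected into subsystem j (omega = 1).
E = np.array([[1.0e-3, 2.0e-4],
              [1.5e-4, 8.0e-4]])
P = np.eye(2)  # unit power injected into each subsystem in turn

def clf_samples(energy, n=2000, rel_sigma=0.05):
    """Randomly perturb the energy-matrix elements and re-invert,
    collecting Monte Carlo samples of one coupling term."""
    out = np.empty(n)
    for k in range(n):
        perturbed = energy * (1.0 + rel_sigma * rng.standard_normal(energy.shape))
        loss = P @ np.linalg.inv(perturbed)  # loss-factor matrix by inversion
        out[k] = -loss[0, 1]  # off-diagonal coupling term (sign convention varies)
    return out

samples = clf_samples(E)
eta_mean, eta_std = samples.mean(), samples.std()  # ensemble average and spread
```

The spread of the samples gives exactly the kind of variance and confidence-interval information for the derived coupling terms that the paper argues is missing from standard ESEA practice.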
International Nuclear Information System (INIS)
Kawai, Masayoshi
1984-01-01
Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with RADHEAT-V4, a shielding analysis code system developed at JAERI. The calculated results are compared with the measured data. As for the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily by using the revised JENDL data for fusion neutronics calculations. (author)
Benchmarking foreign electronics technologies
Energy Technology Data Exchange (ETDEWEB)
Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.
1994-12-01
This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
International Nuclear Information System (INIS)
Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.
1987-01-01
This paper presents the latest results of the ongoing program entitled Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focused on three tasks, namely (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that the pore water can significantly influence the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered at nuclear plant sites. These data were generated by using a modified version of the SLAM code, which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054), which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on structural benchmarks are described.
Benchmarking Commercial Conformer Ensemble Generators.
Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes
2017-11-27
We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
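The headline metric in this abstract — the minimum root-mean-square deviation between the protein-bound ligand conformation and any member of a generated ensemble — can be sketched as follows. This assumes the conformers have already been superposed onto the reference (real workflows first perform an optimal alignment, e.g. a Kabsch fit); the coordinates are toy values, not Platinum Diverse Dataset entries:

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two pre-superposed
    conformations, given as (n_atoms, 3) coordinate arrays."""
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

def min_rmsd_to_reference(reference, ensemble):
    """Smallest RMSD from any generated conformer to the bound pose."""
    return min(rmsd(reference, conf) for conf in ensemble)

# Toy 3-atom reference pose and two hypothetical generated conformers
ref = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
ensemble = [ref + 0.3,                                  # uniformly shifted copy
            ref + np.array([[0.1, 0.0, 0.0]] * 3)]      # slightly displaced copy
best = min_rmsd_to_reference(ref, ensemble)
```

Repeating this over all ligands and taking the median of the per-ligand minima yields the 0.46-0.61 Å style summary statistic the abstract reports for ensembles capped at 250 conformers.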
Review for session K - benchmarks
International Nuclear Information System (INIS)
McCracken, A.K.
1980-01-01
Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually
Internal Benchmarking for Institutional Effectiveness
Ronco, Sharron L.
2012-01-01
Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…
Entropy-based benchmarking methods
Temurshoev, Umed
2012-01-01
We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth
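For context on the Denton (1971) method this abstract critiques: the classical additive first-difference variant adjusts a high-frequency indicator so that each low-frequency period's values sum to the benchmark while disturbing the period-to-period movement as little as possible. The sketch below solves the resulting constrained least-squares problem directly; it is the classical method, not the authors' entropy-based alternative, and the series values are made up:

```python
import numpy as np

def denton_additive(indicator, benchmarks, per=4):
    """Additive first-difference Denton benchmarking: minimize the sum of
    squared first differences of the adjustment (x - indicator), subject
    to each block of `per` values of x summing to its benchmark."""
    i = np.asarray(indicator, dtype=float)
    b = np.asarray(benchmarks, dtype=float)
    n, m = len(i), len(b)
    D = np.diff(np.eye(n), axis=0)           # first-difference operator
    C = np.kron(np.eye(m), np.ones(per))     # sums each year's quarters
    Q = 2.0 * D.T @ D
    # Solve the KKT system of the equality-constrained quadratic program
    kkt = np.block([[Q, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([Q @ i, b])
    return np.linalg.solve(kkt, rhs)[:n]

quarters = [10.0] * 8          # flat hypothetical quarterly indicator
annual = [44.0, 48.0]          # annual benchmarks the quarters must honour
adjusted = denton_additive(quarters, annual)
```

With a strictly positive indicator this works well; the paper's point is that for sign-volatile series such movement-only criteria can flip signs, motivating a movement-and-sign-preservation principle instead.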
Benchmark simulation models, quo vadis?
DEFF Research Database (Denmark)
Jeppsson, U.; Alex, J; Batstone, D. J.
2013-01-01
As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...
EPA's Benchmark Dose Modeling Software
The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
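The core BMD calculation is finding the dose at which a fitted dose-response model reaches a chosen benchmark response (BMR), commonly 10% extra risk. The sketch below is not BMDS; it assumes a hypothetical, already-fitted logistic model with made-up parameters and inverts it by bisection:

```python
import math

def extra_risk(dose, b0, b1):
    """Extra risk (P(d) - P(0)) / (1 - P(0)) for a hypothetical,
    already-fitted logistic dose-response model."""
    def p(d):
        return 1.0 / (1.0 + math.exp(-(b0 + b1 * d)))
    return (p(dose) - p(0.0)) / (1.0 - p(0.0))

def benchmark_dose(bmr, b0, b1, hi=1000.0):
    """Dose at which extra risk equals the benchmark response (BMR),
    found by bisection (extra risk is monotone in dose for b1 > 0)."""
    lo_d, hi_d = 0.0, hi
    for _ in range(100):
        mid = 0.5 * (lo_d + hi_d)
        if extra_risk(mid, b0, b1) < bmr:
            lo_d = mid
        else:
            hi_d = mid
    return 0.5 * (lo_d + hi_d)

bmd10 = benchmark_dose(0.10, b0=-3.0, b1=0.05)  # BMD for a 10% BMR
```

BMDS adds the steps this sketch omits: fitting the model to dose-group data, comparing candidate models, and computing the BMDL (the statistical lower confidence limit on the BMD) that risk assessments actually use.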
Benchmark for Strategic Performance Improvement.
Gohlke, Annette
1997-01-01
Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)
Benchmarking: A Process for Improvement.
Peischl, Thomas M.
One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…
Staff Association
2017-01-01
On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...
Fernández-Llamazares, Álvaro; Belmonte, Jordina; Delgado, Rosario; De Linares, Concepción
2014-04-01
Airborne pollen records are a suitable indicator for the study of climate change. The present work focuses on the role of annual pollen indices in the detection of bioclimatic trends through the analysis of the aerobiological spectra of 11 taxa of great biogeographical relevance in Catalonia over an 18-year period (1994-2011), by means of different parametric and non-parametric statistical methods. Among others, two non-parametric rank-based statistical tests were performed for detecting monotonic trends in time series data of the selected airborne pollen types, and we have observed that they have similar power in detecting trends. Except for those cases in which the pollen data can be well modeled by a normal distribution, it is better to apply non-parametric statistical methods to aerobiological studies. Our results provide a reliable representation of the pollen trends in the region and suggest that greater pollen quantities have been released into the atmosphere in recent years, especially by Mediterranean taxa such as Pinus, Total Quercus and Evergreen Quercus, although the trends may differ geographically. Longer aerobiological monitoring periods are required to corroborate these results and survey the increasing levels of certain pollen types that could have an impact on public health.
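The abstract does not name its two rank-based tests; the Mann-Kendall test is the standard example of a non-parametric monotonic-trend test of this kind, sketched below on a hypothetical annual pollen index (normal approximation, no tie correction):

```python
import math
from itertools import combinations

def mann_kendall(series):
    """Non-parametric Mann-Kendall test for a monotonic trend.
    Returns the S statistic and an approximate two-sided p-value."""
    # S counts concordant minus discordant pairs over all time-ordered pairs
    s = sum((b > a) - (b < a) for a, b in combinations(series, 2))
    n = len(series)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S under no trend
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)          # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return s, p

# Hypothetical annual pollen index with a rising trend (not study data)
pollen = [120, 135, 128, 150, 160, 158, 175, 190, 185, 210]
s_stat, p_value = mann_kendall(pollen)
```

Because the test only uses the ranks of the observations, it needs no distributional assumption, which is exactly why the abstract recommends non-parametric methods when pollen data are not well modeled by a normal distribution.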
Benchmarking: applications to transfusion medicine.
Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M
2012-10-01
Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations for moving the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.
Benchmarking school nursing practice: the North West Regional Benchmarking Group
Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn
2016-01-01
It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...
Benchmarking Nuclear Power Plants
International Nuclear Information System (INIS)
Jakic, I.
2016-01-01
One of the main tasks of an owner is to keep its business competitive on the market while delivering its product. Owning a nuclear power plant bears the same (or an even more complex and demanding) responsibility due to safety risks and costs. In the past, nuclear power plant managements could (partly) ignore profit, or it was simply expected and to some degree assured through the various regulatory processes governing electricity rate design. It is obvious now that, with deregulation, utility privatization and a competitive electricity market, the key measures of success used at nuclear power plants must include traditional metrics of a successful business (return on investment, earnings and revenue generation) as well as those of plant performance, safety and reliability. In order to analyze the business performance of a (specific) nuclear power plant, benchmarking, as a well-established concept and commonly used method, was applied. The domain was conservatively designed, with a well-adjusted framework, but the results still have limited applicability due to many differences, gaps and uncertainties. (author).
Virtual machine performance benchmarking.
Langer, Steve G; French, Todd
2011-10-01
The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever-tightening reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High-performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
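The physical-versus-virtual comparison described here boils down to running identical workloads on both platforms and comparing timings. A minimal, hedged sketch of such a harness (the workloads are stand-ins, not the study's actual memory, disk, or network benchmarks):

```python
import time

def benchmark(fn, *args, repeats=5):
    """Best-of-N wall-clock timing; the minimum over repeats reduces
    noise from other processes, which matters on shared virtual hosts."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

def integer_workload(n=200_000):
    total = 0
    for k in range(n):
        total += k * k          # pure integer arithmetic
    return total

def float_workload(n=200_000):
    total = 0.0
    for k in range(1, n):
        total += 1.0 / k        # pure floating-point arithmetic
    return total

int_time = benchmark(integer_workload)
float_time = benchmark(float_workload)
```

Running the same script on bare metal and inside the guest, then taking the ratio of the timings, gives the per-metric virtualization overhead the abstract describes.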
AER benchmark specification sheet
International Nuclear Information System (INIS)
Aszodi, A.; Toth, S.
2009-01-01
In the VVER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers and they have an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals in a correct way, knowledge of the coolant mixing in the assembly heads is necessary. Computational Fluid Dynamics (CFD) codes and experiments can help to better understand these mixing processes and they can provide information which can support the more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D CFD modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the 23rd cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other possesses an inclined profile. (authors)
AER Benchmark Specification Sheet
International Nuclear Information System (INIS)
Aszodi, A.; Toth, S.
2009-01-01
In the WWER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers and they have an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals in a correct way, knowledge of the coolant mixing in the assembly heads is necessary. Computational fluid dynamics codes and experiments can help to better understand these mixing processes and they can provide information which can support the more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D computational fluid dynamics modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the twenty-third cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other possesses an inclined profile. (Authors)
Benchmarking biofuels; Biobrandstoffen benchmarken
Energy Technology Data Exchange (ETDEWEB)
Croezen, H.; Kampman, B.; Bergsma, G.
2012-03-15
A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Nederland heeft CE Delft gevraagd een duurzaamheidsmeetlat voor biobrandstoffen voor transport te ontwerpen en hierop de verschillende biobrandstoffen te scoren. Voor een vergelijk zijn ook elektrisch rijden, rijden op waterstof en rijden op benzine of diesel opgenomen. Door onderzoek en voortschrijdend inzicht blijkt steeds vaker dat transportbrandstoffen op basis van biomassa soms net zoveel of zelfs meer broeikasgassen veroorzaken dan fossiele brandstoffen als benzine en diesel. CE Delft heeft voor Greenpeace Nederland op een rijtje gezet wat de huidige inzichten zijn over de duurzaamheid van fossiele brandstoffen, biobrandstoffen en elektrisch rijden. Daarbij is gekeken naar de effecten van de brandstoffen op drie duurzaamheidscriteria, waarbij broeikasgasemissies het zwaarst wegen: (1) Broeikasgasemissies; (2) Landgebruik; en (3) Nutriëntengebruik.
Model based energy benchmarking for glass furnace
International Nuclear Information System (INIS)
Sardeshpande, Vishal; Gaitonde, U.N.; Banerjee, Rangan
2007-01-01
Energy benchmarking of processes is important for setting energy efficiency targets and planning energy management strategies. Most approaches used for energy benchmarking are based on statistical methods, comparing with a sample of existing plants. This paper presents a model-based approach for benchmarking energy-intensive industrial processes and illustrates this approach for industrial glass furnaces. A simulation model for a glass furnace is developed using mass and energy balances, heat loss equations for the different zones, and empirical equations based on operating practices. The model is checked with field data from end-fired industrial glass furnaces in India. The simulation model enables calculation of the energy performance of a given furnace design. The model results show the potential for improvement and the impact of different operating and design preferences on specific energy consumption. A case study for a 100 TPD end-fired furnace is presented. An achievable minimum energy consumption of about 3830 kJ/kg is estimated for this furnace. The useful heat carried by the glass is about 53% of the heat supplied by the fuel. Actual furnaces operating at these production scales have a potential for reduction in energy consumption of about 20-25%.
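The benchmarking logic can be illustrated with a toy heat balance. The melting demand and efficiency fractions below are assumptions chosen only to reproduce the magnitudes quoted in the abstract; they are not the paper's model.

```python
# A toy energy-balance sketch of model-based benchmarking: the specific
# energy consumption (SEC) follows from the heat usefully absorbed by the
# glass and the fraction of fuel heat that reaches it. Numbers below are
# hypothetical assumptions, not the paper's furnace model.

HEAT_TO_GLASS = 2030.0   # kJ per kg of glass, assumed melting demand

def sec(useful_fraction):
    """Fuel energy needed per kg of glass for a given useful-heat share."""
    return HEAT_TO_GLASS / useful_fraction

benchmark = sec(0.53)    # ~53% of fuel heat carried by the glass
actual = sec(0.41)       # a less efficient furnace, assumed for contrast

savings = 1 - benchmark / actual
print(f"benchmark SEC: {benchmark:.0f} kJ/kg")
print(f"potential saving vs actual: {savings:.0%}")
```

With these assumed fractions the benchmark SEC comes out near the 3830 kJ/kg quoted above, and the gap to the "actual" furnace lands in the 20-25% savings range the paper reports.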
Benchmarking in academic pharmacy departments.
Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann
2010-10-11
Benchmarking in academic pharmacy and recommendations for its potential uses in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately to plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it be continued, and offer suggestions to achieve full participation.
Applications of quantum entropy to statistics
International Nuclear Information System (INIS)
Silver, R.N.; Martz, H.F.
1994-01-01
This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods
Issues in Benchmark Metric Selection
Crolotte, Alain
It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
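The contrast between the two metrics can be seen on a toy set of query times. The numbers are invented for illustration, not TPC-D results.

```python
import math

# Hypothetical per-query execution times (seconds) for a single-stream
# decision-support run: one slow outlier among otherwise fast queries.
times = [2.0, 3.0, 2.5, 4.0, 100.0]

arithmetic = sum(times) / len(times)
geometric = math.exp(sum(math.log(t) for t in times) / len(times))

# The arithmetic mean tracks total elapsed work, so the 100 s outlier
# dominates it; the geometric mean compresses the outlier's influence,
# which is the much-debated property of the TPC-D single-stream metric.
print(f"arithmetic mean: {arithmetic:.2f} s")
print(f"geometric mean:  {geometric:.2f} s")
```

On this data the arithmetic mean is 22.30 s while the geometric mean is about 5.70 s, showing how the choice of mean can change what a benchmark rewards.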
California commercial building energy benchmarking
Energy Technology Data Exchange (ETDEWEB)
Kinney, Satkartar; Piette, Mary Ann
2003-07-01
Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, none were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
A Heterogeneous Medium Analytical Benchmark
International Nuclear Information System (INIS)
Ganapol, B.D.
1999-01-01
A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy)
A Global Vision over Benchmarking Process: Benchmarking Based Enterprises
Sitnikov, Catalina; Giurca Vasilescu, Laura
2008-01-01
Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from the analysis of the performance and underlying the strengths and weaknesses of the enterprise it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...
Benchmarking and Learning in Public Healthcare
DEFF Research Database (Denmark)
Buckmaster, Natalie; Mouritsen, Jan
2017-01-01
This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement.
Performance Targets and External Benchmarking
DEFF Research Database (Denmark)
Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.
Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...
Benchmarking and Sustainable Transport Policy
DEFF Research Database (Denmark)
Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy
2004-01-01
In order to learn from the best, the European Commission in 2000 initiated research to explore benchmarking as a tool to promote policies for sustainable transport. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, sustainable transport evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons, attempting to benchmark sustainable transport policies against one another would be a highly complex task, which...
Benchmarking: contexts and details matter.
Zheng, Siyuan
2017-07-05
Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology.Please see related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.
Handbook of critical experiments benchmarks
International Nuclear Information System (INIS)
Durst, B.M.; Bierman, S.R.; Clayton, E.D.
1978-03-01
Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input
Analysis of Benchmark 2 results
International Nuclear Information System (INIS)
Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.
1994-01-01
The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive-waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab
Benchmarks for GADRAS performance validation
International Nuclear Information System (INIS)
Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.
2009-01-01
The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.
Benchmarking in Czech Higher Education
Directory of Open Access Journals (Sweden)
Plaček Michal
2015-12-01
Full Text Available The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a model of a collaborative benchmarking for Czech economics and management in higher-education programs. Because the fully complex model cannot be implemented immediately – which is also confirmed by structured interviews with academics who have practical experience with benchmarking –, the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.
Dynamic benchmarking of simulation codes
International Nuclear Information System (INIS)
Henry, R.E.; Paik, C.Y.; Hauser, G.M.
1996-01-01
Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
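The embedded-benchmark idea described above can be sketched as a small regression harness that re-runs stored cases whenever the code changes. Everything here (the function names, the tolerance, the trivial stand-in model) is a hypothetical illustration, not MAAP4's actual mechanism.

```python
# Illustrative sketch of a "dynamic benchmarking" harness: each benchmark
# pairs an input scenario with previously accepted reference results, and
# every code upgrade re-runs the whole set. All names and numbers here
# are hypothetical.

def simulate(scenario):
    # Stand-in for the simulation code under test; a trivial analytic
    # model (response = power * t / 2) so the example is runnable.
    return [scenario["power"] * t * 0.5 for t in scenario["times"]]

BENCHMARKS = [
    {
        "name": "plant-transient-A",
        "scenario": {"power": 2.0, "times": [0.0, 1.0, 2.0]},
        "reference": [0.0, 1.0, 2.0],   # previously accepted results
        "tolerance": 0.05,              # allowed relative drift
    },
]

def run_benchmarks():
    """Re-run every stored benchmark; return names of cases that drifted."""
    failures = []
    for bm in BENCHMARKS:
        result = simulate(bm["scenario"])
        for got, ref in zip(result, bm["reference"]):
            denom = max(abs(ref), 1e-12)
            if abs(got - ref) / denom > bm["tolerance"]:
                failures.append(bm["name"])
                break
    return failures

print(run_benchmarks())  # an empty list means all benchmarks are preserved
```

Run after every upgrade, a non-empty list flags transients whose benchmark agreement was lost, which is the retention property the abstract argues is essential for trust in an evolving code.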
Recommendations for Benchmarking Web Site Usage among Academic Libraries.
Hightower, Christy; Sih, Julie; Tilghman, Adam
1998-01-01
To help library directors and Web developers create a benchmarking program to compare statistics of academic Web sites, the authors analyzed the Web server log files of 14 university science and engineering libraries. Recommends a centralized voluntary reporting structure coordinated by the Association of Research Libraries (ARL) and a method for…
Understanding Statistics - Cancer Statistics
Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.
Non-Parametric Model Drift Detection
2016-07-01
framework on two tasks in the NLP domain, topic modeling, and machine translation. Our main findings are summarized as follows: • We can measure important...
Regional Competitive Intelligence: Benchmarking and Policymaking
Huggins , Robert
2010-01-01
Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...
Benchmarking of human resources management
Directory of Open Access Journals (Sweden)
David M. Akinnusi
2008-11-01
This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much-needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.
Benchmark simulation models, quo vadis?
Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D
2013-01-01
As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.
Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings
International Nuclear Information System (INIS)
Chung, William
2012-01-01
Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input–output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This present paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.
MCNP: Photon benchmark problems
International Nuclear Information System (INIS)
Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.
1991-09-01
The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically. Results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems. 8 refs., 30 figs., 5 tabs
Basic statistical tools in research and data analysis
Directory of Open Access Journals (Sweden)
Zulfiqar Ali
2016-01-01
Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting the research findings. Statistical analysis gives meaning to meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of sample size estimation, power analysis and statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.
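The parametric/non-parametric distinction can be made concrete by computing the two classic two-sample statistics by hand. The sample values are invented for illustration; in practice a statistics library would also supply p-values.

```python
# Minimal contrast between a parametric and a non-parametric two-sample
# test statistic, computed by hand on hypothetical data.
import math
from itertools import chain

a = [5.1, 4.8, 5.3, 5.0, 4.9]          # sample 1
b = [5.6, 5.9, 5.4, 6.0, 5.7]          # sample 2

def t_statistic(x, y):
    """Parametric route: pooled-variance Student's t (assumes normality)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    sp = math.sqrt(((len(x) - 1) * vx + (len(y) - 1) * vy)
                   / (len(x) + len(y) - 2))
    return (mx - my) / (sp * math.sqrt(1 / len(x) + 1 / len(y)))

def rank_sum(x, y):
    """Non-parametric route: Wilcoxon rank-sum statistic (no normality)."""
    pooled = sorted(chain(x, y))
    ranks = {v: i + 1 for i, v in enumerate(pooled)}  # no ties in this data
    return sum(ranks[v] for v in x)

print(round(t_statistic(a, b), 2))
print(rank_sum(a, b))
```

Here the rank-sum statistic for sample 1 is 15, the smallest possible value for two samples of five, i.e. every value in sample 1 ranks below every value in sample 2, a conclusion that needs no distributional assumption at all.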
Functional summary statistics for the Johnson-Mehl model
DEFF Research Database (Denmark)
Møller, Jesper; Ghorbani, Mohammad
The Johnson-Mehl germination-growth model is a spatio-temporal point process model which, among other things, has been used for the description of neurotransmitter datasets. However, for such datasets, parametric Johnson-Mehl models fitted by maximum likelihood have not yet been evaluated by means of functional summary statistics. This paper therefore invents four functional summary statistics adapted to the Johnson-Mehl model, two of them based on the second-order properties and the other two on the nuclei-boundary distances for the associated Johnson-Mehl tessellation. The functional summary statistics' theoretical properties are investigated, non-parametric estimators are suggested, and their usefulness for model checking is examined in a simulation study. The functional summary statistics are also used for checking fitted parametric Johnson-Mehl models for a neurotransmitter dataset.
Radiation Detection Computational Benchmark Scenarios
Energy Technology Data Exchange (ETDEWEB)
Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.
2013-09-24
Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for
3-D neutron transport benchmarks
International Nuclear Information System (INIS)
Takeda, T.; Ikeda, H.
1991-03-01
A set of 3-D neutron transport benchmark problems proposed by Osaka University to the NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results for k-eff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes
Strategic behaviour under regulatory benchmarking
Energy Technology Data Exchange (ETDEWEB)
Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management
2004-09-01
In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulatory benchmarking can influence the "regulation game", the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)
Atomic Energy Research benchmark activity
International Nuclear Information System (INIS)
Makai, M.
1998-01-01
The test problems utilized in the validation and verification of computer programs in Atomic Energy Research are collected together here. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculations. (Author)
International Nuclear Information System (INIS)
Jeong, Jaewook; Hong, Taehoon; Ji, Changyoon; Kim, Jimin; Lee, Minhyun; Jeong, Kwangbok
2016-01-01
Highlights: • The energy benchmarks for MFHC using district heating were developed. • We consider heating, hot water, electricity, and water energy consumption. • The benchmarks cover the site EUI, source EUI, and CO_2 emission intensity. • The benchmarks were developed through data mining and statistical methodologies. • The developed benchmarks provide fair criteria to evaluate energy efficiency. - Abstract: Reliable benchmarks are required to evaluate building energy efficiency fairly. This study aims to develop energy benchmarks and the relevant process for a multi-family housing complex (MFHC), which is responsible for huge CO_2 emissions in South Korea. A database including information on the building attributes and energy consumption of 503 MFHCs was established. The database was classified into three groups based on average enclosed area per household (AEA) through data mining techniques. The benchmarks of site energy use intensity (EUI), source EUI, and CO_2 emission intensity (CEI) were developed for Groups 1, 2, and 3. Representatively, the developed benchmarks of CEI for Groups 1, 2, and 3 were 28.17, 24.16, and 20.96 kg-CO_2/m^2·yr, respectively. A comparative analysis using the operational rating identified that the developed benchmarks could resolve the irrationality of the original benchmarks derived from the overall database. In the case of the original benchmarks, 93% of small-AEA groups and 16% of large-AEA groups received lower grades. In the case of the developed benchmarks, the upper and lower grades in Groups 1-3 were both adjusted to 50%. The proposed process for developing energy benchmarks is applicable to evaluating the energy efficiency of other buildings in other regions.
Neil, Amanda; Pfeffer, Sally; Burnett, Leslie
2013-01-01
This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated based on actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for that test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that was found to be a robust indicator of the relative complexity for each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
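The BCU construction can be sketched as follows. The weighting factor, the staff-time figures, and the two test names are invented for illustration; the paper derives the weighting dynamically from participant data rather than fixing it:

```python
import statistics

# Sketch of the BCU idea: a test's per-site complexity is its technical
# staff time plus its medical staff time weighted by the (assumed)
# medical:technical wage ratio; the across-site median becomes the
# test's benchmark complexity value.
w = 2.5  # assumed medical:technical wage-ratio weighting factor

# Minutes of (medical, technical) staff time per test, by site (invented).
times = {
    "glucose": {"site_A": (0.5, 4.0), "site_B": (0.2, 5.0), "site_C": (0.4, 4.5)},
    "biopsy":  {"site_A": (25.0, 40.0), "site_B": (30.0, 35.0), "site_C": (20.0, 50.0)},
}

bcu = {}
for test, per_site in times.items():
    values = [w * med + tech for med, tech in per_site.values()]
    bcu[test] = statistics.median(values)  # median across sites

print(bcu)  # the biopsy carries far more BCUs than the glucose test
```

Taking the median across sites makes the unit robust to one site's unusual staffing pattern, which is consistent with the abstract's claim of an unbiased comparison unit.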
Nonparametric estimation of benchmark doses in environmental risk assessment
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
Summary An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
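A minimal sketch of the isotonic-regression route to a BMD: a hand-rolled pool-adjacent-violators (PAVA) fit followed by an extra-risk inversion. All doses and response proportions are invented, groups are treated as equally weighted, and the paper's bootstrap confidence limits are omitted:

```python
def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit."""
    blocks = [[v] for v in y]
    i = 0
    while i < len(blocks) - 1:
        if sum(blocks[i]) / len(blocks[i]) > sum(blocks[i + 1]) / len(blocks[i + 1]):
            blocks[i] += blocks.pop(i + 1)   # merge violating neighbours
            i = max(i - 1, 0)
        else:
            i += 1
    fit = []
    for b in blocks:
        fit += [sum(b) / len(b)] * len(b)
    return fit

# Hypothetical quantal dose-response data (dose, proportion responding).
doses = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]
props = [0.02, 0.05, 0.04, 0.20, 0.45, 0.80]

fitted = pava(props)                       # monotone dose-response estimate
p0 = fitted[0]                             # background risk
extra_risk = [(p - p0) / (1.0 - p0) for p in fitted]

# BMD: smallest dose whose extra risk reaches the benchmark response,
# found by linear interpolation between adjacent dose groups.
bmr = 0.10
bmd = None
for k in range(1, len(doses)):
    if extra_risk[k] >= bmr:
        frac = (bmr - extra_risk[k - 1]) / (extra_risk[k] - extra_risk[k - 1])
        bmd = doses[k - 1] + frac * (doses[k] - doses[k - 1])
        break
print(f"Nonparametric BMD at BMR={bmr}: {bmd:.2f}")
```

Because no parametric curve is assumed, a misspecified model cannot bias the low-dose inference; the price is that the estimate is only as fine-grained as the dose grid, which is why the paper pairs it with bootstrap confidence limits.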
Direct data access protocols benchmarking on DPM
Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina
2015-12-01
The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.
A large-scale benchmark of gene prioritization methods.
Guala, Dimitri; Sonnhammer, Erik L L
2017-04-21
In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion, NetRank and two implementations of Random Walk with Restart, and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
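Random Walk with Restart, one of the diffusion schemes named above, can be sketched on a toy network. The graph, the seed genes, and the restart probability are all invented; real benchmarks run this on a network like FunCoup with disease genes as seeds:

```python
import numpy as np

# Minimal Random Walk with Restart (RWR) on a toy gene network.
# Nodes 0-5; edges form two loose clusters {0,1,2} and {3,4,5}.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
W = A / A.sum(axis=0)            # column-normalized transition matrix

restart = 0.3                    # assumed restart probability
p0 = np.zeros(n)
p0[[0, 1]] = 0.5                 # seed genes: 0 and 1

p = p0.copy()
for _ in range(100):             # power iteration to the stationary vector
    p = (1 - restart) * W @ p + restart * p0

ranking = np.argsort(-p)         # genes ordered by proximity to the seeds
print("prioritized genes:", ranking.tolist())
```

Genes in the seeds' cluster receive more probability mass than distant genes, which is exactly the property the GO-based benchmark exploits: held-out members of an annotation term should rank above unrelated genes.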
Benchmarked Library Websites Comparative Study
Ramli, Rindra M.; Tyhurst, Janis
2015-01-01
This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.
Prismatic Core Coupled Transient Benchmark
International Nuclear Information System (INIS)
Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.
2011-01-01
The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.
Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D
2016-06-01
Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a viable approach for EIT images of neural activity.
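The standard non-parametric technique for this kind of image-wide inference is a maximum-statistic permutation test, which controls family-wise error across all voxels without the smoothness assumptions of random field theory. The sketch below uses synthetic data standing in for 22 group-level EIT images (the image size, effect size, and sign-flip scheme are assumptions, not the paper's exact analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic group data: 22 images x 500 voxels, with a true effect
# planted in the first 20 voxels.
n_images, n_voxels = 22, 500
data = rng.normal(0.0, 1.0, (n_images, n_voxels))
data[:, :20] += 1.2

def t_stat(x):
    # One-sample t against zero, per voxel.
    return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(x.shape[0]))

observed = t_stat(data)

# Max-statistic permutation test: random sign flips of whole images
# build the null distribution of the image-wide maximum t value,
# controlling family-wise error across all voxels simultaneously.
n_perm = 1000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_images, 1))
    max_null[i] = t_stat(signs * data).max()

threshold = np.quantile(max_null, 0.95)   # corrected p < 0.05
significant = observed > threshold
print(f"threshold={threshold:.2f}, significant voxels={significant.sum()}")
```

Because the null distribution is built from the data itself, this approach stays valid when the smoothness assumptions behind random field theory are questionable, which is why it serves as the validation reference in the abstract.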
NASA Indexing Benchmarks: Evaluating Text Search Engines
Esler, Sandra L.; Nelson, Michael L.
1997-01-01
The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
Benchmarking computer platforms for lattice QCD applications
International Nuclear Information System (INIS)
Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.
2003-09-01
We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)
Benchmarking computer platforms for lattice QCD applications
International Nuclear Information System (INIS)
Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.
2004-01-01
We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.
Log-concave Probability Distributions: Theory and Statistical Testing
DEFF Research Database (Denmark)
An, Mark Yuing
1996-01-01
This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics.
Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners
Directory of Open Access Journals (Sweden)
Luštický Martin
2012-03-01
Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently get into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process which lies in the search for suitable referencing partners. The partners are consequently selected to meet general requirements that ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.
BONFIRE: benchmarking computers and computer networks
Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker
2011-01-01
The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...
Benchmarking clinical photography services in the NHS.
Arbon, Giles
2015-01-01
Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.
Energy use pattern and benchmarking of selected greenhouses in Iran using data envelopment analysis
International Nuclear Information System (INIS)
Omid, M.; Ghojabeige, F.; Delshad, M.; Ahmadi, H.
2011-01-01
This paper studies the degree of technical efficiency (TE) and scale efficiency (SE) of selected greenhouses in Iran and describes the process of benchmarking energy inputs and cucumber yield. Inquiries on 18 greenhouses were conducted via face-to-face interviews during the September-December 2008 period. A non-parametric data envelopment analysis (DEA) technique was applied to investigate the degree of TE and SE of producers, and to evaluate and rank the productivity performance of cucumber producers based on eight energy inputs: human labour, diesel, machinery, fertilizers, chemicals, water for irrigation, seeds and electricity, and the output yield values of cucumber. DEA optimizes the performance measure of each greenhouse or decision making unit (DMU). Specifically, DEA was used to compare the performance of each DMU in a region of increasing, constant or decreasing returns to scale in multiple-input situations. The CRS model helped us to decompose the pure TE into overall TE and SE components, thereby allowing investigation of scale effects. The results of the analysis showed that DEA is an effective tool for analyzing and benchmarking the productive efficiency of greenhouses. The VRS analysis showed that only 12 out of the 18 DMUs were efficient. The TE of the inefficient DMUs, on average, was calculated as 91.5%. This implies that the same level of output could be produced with 91.5% of the resources if these units were performing on the frontier. Another interpretation of this result is that 8.5% of overall resources could be saved by raising the performance of these DMUs to the highest level.
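The input-oriented, constant-returns-to-scale (CCR) efficiency score behind such a DEA ranking can be computed with one linear program per DMU. The sketch below uses five invented greenhouses with two energy inputs and one output, far fewer than the study's eight inputs and 18 DMUs:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 5 greenhouses (DMUs), 2 energy inputs, 1 output (yield).
X = np.array([[20., 300.], [30., 200.], [40., 500.], [28., 260.], [50., 400.]])
Y = np.array([[100.], [110.], [120.], [105.], [115.]])

def ccr_efficiency(j, X, Y):
    """Input-oriented CCR efficiency of DMU j (envelopment form).

    minimize theta  s.t.  X' lambda <= theta * x_j   (inputs)
                          Y' lambda >= y_j           (outputs)
                          lambda >= 0
    Decision variables: [theta, lambda_1 .. lambda_n].
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                        # minimize theta
    A_in = np.hstack([-X[j].reshape(m, 1), X.T])      # X'lam - theta*x_j <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])       # -Y'lam <= -y_j
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

scores = [round(ccr_efficiency(j, X, Y), 3) for j in range(len(X))]
print(scores)  # 1.0 for frontier DMUs, < 1.0 for inefficient ones
```

A score of, say, 0.9 means the DMU could in principle produce the same output with 90% of every input, which is exactly the resource-saving interpretation the abstract gives for the 91.5% average TE.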
Energy use pattern and benchmarking of selected greenhouses in Iran using data envelopment analysis
Energy Technology Data Exchange (ETDEWEB)
Omid, M.; Ghojabeige, F.; Ahmadi, H. [Department of Agricultural Machinery, College of Agriculture and Natural Resources, University of Tehran, Karaj (Iran, Islamic Republic of); Delshad, M. [Department of Horticultural Sciences, College of Agriculture and Natural Resources, University of Tehran, Karaj (Iran, Islamic Republic of)
2011-01-15
This paper studies the degree of technical efficiency (TE) and scale efficiency (SE) of selected greenhouses in Iran and describes the process of benchmarking energy inputs and cucumber yield. Inquiries on 18 greenhouses were conducted via face-to-face interviews during the September-December 2008 period. A non-parametric data envelopment analysis (DEA) technique was applied to investigate the degree of TE and SE of producers, and to evaluate and rank the productivity performance of cucumber producers based on eight energy inputs: human labour, diesel, machinery, fertilizers, chemicals, water for irrigation, seeds and electricity, and the output yield values of cucumber. DEA optimizes the performance measure of each greenhouse or decision making unit (DMU). Specifically, DEA was used to compare the performance of each DMU in a region of increasing, constant or decreasing returns to scale in multiple-input situations. The CRS model helped us to decompose the pure TE into overall TE and SE components, thereby allowing investigation of scale effects. The results of the analysis showed that DEA is an effective tool for analyzing and benchmarking the productive efficiency of greenhouses. The VRS analysis showed that only 12 out of the 18 DMUs were efficient. The TE of the inefficient DMUs, on average, was calculated as 91.5%. This implies that the same level of output could be produced with 91.5% of the resources if these units were performing on the frontier. Another interpretation of this result is that 8.5% of overall resources could be saved by raising the performance of these DMUs to the highest level. (author)
Piedrahita, Ricardo A.
The Denver Aerosol Sources and Health study (DASH) was a long-term study of the relationship between the variability in fine particulate mass and chemical constituents (PM2.5, particulate matter less than 2.5 µm) and adverse health effects such as cardio-respiratory illnesses and mortality. Daily filter samples were chemically analyzed for multiple species. We present findings based on 2.8 years of DASH data, from 2003 to 2005. Multilinear Engine 2 (ME-2), a receptor-based source apportionment model, was applied to the data to estimate source contributions to PM2.5 mass concentrations. This study relied on two different ME-2 models: (1) a 2-way model that closely reflects PMF-2; and (2) an enhanced model with meteorological data that used additional temporal and meteorological factors. The Coarse Rural Urban Sources and Health study (CRUSH) is a long-term study of the relationship between the variability in coarse particulate mass (PMcoarse, particulate matter between 2.5 and 10 µm) and adverse health effects such as cardio-respiratory illnesses, pre-term births, and mortality. Hourly mass concentrations of PMcoarse and fine particulate matter (PM2.5) are measured using tapered element oscillating microbalances (TEOMs) with Filter Dynamics Measurement Systems (FDMS), at two rural and two urban sites. We present findings based on nine months of mass concentration data, including temporal trends and non-parametric regression (NPR) results, which were used to characterize the wind speed and wind direction relationships that might point to sources. As part of CRUSH, a 1-year coarse and fine mode particulate matter filter sampling network will allow us to characterize the chemical composition of the particulate matter collected and perform spatial comparisons. This work describes the construction and validation testing of four dichotomous filter samplers for this purpose. The use of dichotomous splitters with an approximate 2.5 µm cut point, coupled with a 10 µm cut
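The non-parametric regression of concentration on wind direction mentioned above is commonly a kernel-weighted (Nadaraya-Watson) smooth over the circular angle. The sketch below uses invented observations and an assumed Gaussian kernel bandwidth; it is not the study's exact NPR formulation:

```python
import math

# Invented observations: (wind direction in degrees, PMcoarse in ug/m^3).
obs = [(10, 12.0), (30, 15.0), (45, 22.0), (60, 30.0), (75, 28.0),
       (90, 18.0), (180, 8.0), (270, 6.0), (300, 7.0), (350, 10.0)]

def circ_diff(a, b):
    """Smallest angular difference between two directions, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def npr(theta, obs, bandwidth=20.0):
    """Nadaraya-Watson kernel-weighted mean concentration at direction theta."""
    weights = [math.exp(-0.5 * (circ_diff(theta, t) / bandwidth) ** 2)
               for t, _ in obs]
    return sum(w * c for w, (_, c) in zip(weights, obs)) / sum(weights)

# A peak in the smoothed curve near ~60 degrees would suggest a source
# in that direction; a flat, low response elsewhere suggests background.
print(round(npr(60, obs), 1), round(npr(240, obs), 1))
```

Plotting `npr` over 0-360 degrees (optionally conditioned on wind-speed bins) gives the directional concentration rose used to point toward candidate sources.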
How Benchmarking and Higher Education Came Together
Levy, Gary D.; Ronco, Sharron L.
2012-01-01
This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…
WWER-1000 Burnup Credit Benchmark (CB5)
International Nuclear Information System (INIS)
Manolova, M.A.
2002-01-01
In the paper the specification of the WWER-1000 Burnup Credit Benchmark first phase (depletion calculations) is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be given after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field. (Author)
Benchmarking and Learning in Public Healthcare
DEFF Research Database (Denmark)
Buckmaster, Natalie; Mouritsen, Jan
2017-01-01
This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...
Geothermal Heat Pump Benchmarking Report
Energy Technology Data Exchange (ETDEWEB)
None
1997-01-17
A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.
The development of code benchmarks
International Nuclear Information System (INIS)
Glass, R.E.
1986-01-01
Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum
Benchmarking Variable Selection in QSAR.
Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars
2012-02-01
Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
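Forward selection wrapped around a random forest, one of the winning strategies above, can be sketched as follows. The synthetic dataset (only the first 5 of 12 descriptors informative) and all hyperparameters are illustrative assumptions, not the paper's benchmark setup:

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic regression task standing in for a QSAR dataset.
X, y = make_friedman1(n_samples=200, n_features=12, random_state=0)

def forward_selection(X, y, max_vars=5):
    """Greedy forward selection scored by cross-validated R^2."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_vars:
        scores = {}
        for j in remaining:
            cols = selected + [j]
            model = RandomForestRegressor(n_estimators=30, random_state=0)
            scores[j] = cross_val_score(model, X[:, cols], y, cv=3).mean()
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:
            break  # adding another descriptor no longer helps
        best_score = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_score

selected, score = forward_selection(X, y)
print(f"selected descriptors: {sorted(selected)}, CV R^2 = {score:.2f}")
```

The stopping rule (no CV improvement) is what yields the variable reduction without performance loss that the abstract reports.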
Closed-loop neuromorphic benchmarks
CSIR Research Space (South Africa)
Stewart, TC
2015-11-01
Full Text Available Closed-loop Neuromorphic Benchmarks. Terrence C. Stewart and Travis DeWolf (Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada), Ashley Kleinhans (Mobile Intelligent Autonomous Systems group, Council for Scientific and Industrial Research, Pretoria, South Africa) and Chris Eliasmith (University of Waterloo). Submitted to: Frontiers in Neuroscience. Correspondence: Terrence C. Stewart, Centre for Theoretical Neuroscience.
Investible benchmarks & hedge fund liquidity
Freed, Marc S; McMillan, Ben
2011-01-01
A lack of commonly accepted benchmarks for hedge fund performance has permitted hedge fund managers to attribute to skill returns that may actually accrue from market risk factors and illiquidity. Recent innovations in hedge fund replication permit us to estimate the extent of this misattribution. Using an option-based model, we find evidence that the value of liquidity options that investors implicitly grant managers when they invest may account for part or even all of hedge fund returns. C...
RISKIND verification and benchmark comparisons
International Nuclear Information System (INIS)
Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.
1997-08-01
This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology, and atmospheric dispersion of released material and dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models
RISKIND verification and benchmark comparisons
Energy Technology Data Exchange (ETDEWEB)
Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.
1997-08-01
This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology, and atmospheric dispersion of released material and dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.
Benchmarking routine psychological services: a discussion of challenges and methods.
Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick
2014-01-01
Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
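Grading one service's uncontrolled pre-post effect size against the published benchmarks can be sketched as follows. The PHQ-9 scores are invented, and the ES definition (mean change divided by the baseline standard deviation) is one common convention, not necessarily the paper's exact estimator:

```python
import statistics

# Hypothetical routine-service PHQ-9 scores, paired by patient.
pre  = [18, 15, 20, 12, 16, 19, 14, 17, 21, 13]
post = [10, 9, 14, 8, 11, 12, 10, 9, 15, 7]

# Uncontrolled pre-post effect size: mean change / SD of baseline scores.
mean_change = statistics.mean(p - q for p, q in zip(pre, post))
es = mean_change / statistics.stdev(pre)

# Depression (PHQ-9) benchmarks from the text: high 0.91, average 0.73,
# poor 0.46.
benchmarks = {"high": 0.91, "average": 0.73, "poor": 0.46}
band = ("above high" if es >= benchmarks["high"]
        else "above average" if es >= benchmarks["average"]
        else "above poor" if es >= benchmarks["poor"]
        else "below poor")
print(f"ES = {es:.2f} -> {band} performance benchmark")
```

In practice a service would also report a confidence interval around its ES (as the paper recommends) before concluding which band it falls in.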
HS06 Benchmark for an ARM Server
Kluth, Stefan
2014-06-01
We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
HS06 benchmark for an ARM server
International Nuclear Information System (INIS)
Kluth, Stefan
2014-01-01
We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
Statistical Analysis of Environmental Tritium around Wolsong Site
Energy Technology Data Exchange (ETDEWEB)
Kim, Ju Youl [FNC Technology Co., Yongin (Korea, Republic of)
2010-04-15
To find the relationship among airborne tritium, tritium in rainwater, TFWT (Tissue Free Water Tritium) and TBT (Tissue Bound Tritium), statistical analysis is conducted based on tritium data measured at KHNP employees' houses around the Wolsong nuclear power plants during the 10 years from 1999 to 2008. The results show that tritium in these media exhibits strong seasonal and annual periodicity. Tritium concentration in rainwater is observed to be highly correlated with TFWT and directly transmitted to TFWT without delay. The response of environmental radioactivity of tritium around the Wolsong site is analyzed using time-series techniques and non-parametric trend analysis. Tritium in the atmosphere and rainwater is strongly auto-correlated with seasonal and annual periodicity. TFWT concentration in pine needles is proven to be more sensitive to rainfall than to other weather variables. Non-parametric trend analysis of TFWT concentration within pine needles shows an increasing slope at the 95% confidence level. This study demonstrates the usefulness of time-series and trend analysis for interpreting the relationship of environmental radioactivity with various environmental media.
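The non-parametric trend analysis referred to here is commonly a Mann-Kendall test. A minimal sketch under that assumption (not the study's actual code; normal approximation, no tie correction):

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and a z-score
    (normal approximation, no correction for tied values)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A strictly increasing series: S is maximal and z exceeds 1.96,
# i.e. the increasing trend is significant at the 95% level.
s, z = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
```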
Argonne Code Center: Benchmark problem book.
Energy Technology Data Exchange (ETDEWEB)
None, None
1977-06-01
This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. It is the second supplement of the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.
Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...
Benchmarking homogenization algorithms for monthly data
Directory of Open Access Journals (Sweden)
V. K. C. Venema
2012-01-01
Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.
Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
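Metric (i), the centered root mean square error, removes each series' own mean before comparing, so a constant offset between a homogenized series and the truth does not count against an algorithm. A small sketch of the standard definition (an assumption on our part, not HOME's exact evaluation code):

```python
import math

def centered_rmse(estimate, truth):
    """Centered root mean square error: RMSE computed after removing
    each series' own mean, so a constant bias contributes nothing."""
    n = len(truth)
    me = sum(estimate) / n
    mt = sum(truth) / n
    return math.sqrt(sum(((e - me) - (t - mt)) ** 2
                         for e, t in zip(estimate, truth)) / n)

# A pure offset of +0.5 yields zero centered error
err = centered_rmse([1.5, 2.5, 3.5], [1.0, 2.0, 3.0])
```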
Present Status and Extensions of the Monte Carlo Performance Benchmark
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common compute nodes. On true supercomputers, however, the speedup of parallel calculations keeps increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
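The ~100 billion histories figure follows from the 1/sqrt(N) convergence of Monte Carlo tallies: halving the statistical error costs four times as many histories. A sketch of that scaling (the reference numbers below are illustrative assumptions, not benchmark data):

```python
def histories_for_accuracy(n_ref, sigma_ref, sigma_target):
    """Monte Carlo tally errors converge as sigma ~ 1/sqrt(N), so the
    histories needed scale as (sigma_ref / sigma_target)**2."""
    return n_ref * (sigma_ref / sigma_target) ** 2

# Illustrative: if 1e9 histories gave a 10% relative error in a small
# fuel zone, reaching 1% needs 10**2 = 100 times more histories.
n_needed = histories_for_accuracy(1e9, 0.10, 0.01)
```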
NASA Software Engineering Benchmarking Effort
Godfrey, Sally; Rarick, Heather
2012-01-01
Benchmarking was very interesting and provided a wealth of information: (1) we saw potential solutions to some of our "top 10" issues; (2) we now have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.
NEACRP thermal fission product benchmark
International Nuclear Information System (INIS)
Halsall, M.J.; Taubman, C.J.
1989-09-01
The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burnt up to 1000 days and then decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235 and 20 atoms H/1 atom Pu239. There were ten participants and the submissions received are detailed in this report. (author)
Benchmark neutron porosity log calculations
International Nuclear Information System (INIS)
Little, R.C.; Michael, M.; Verghese, K.; Gardner, R.P.
1989-01-01
Calculations have been made for a benchmark neutron porosity log problem with the general purpose Monte Carlo code MCNP and the specific purpose Monte Carlo code McDNL. For accuracy and timing comparison purposes the CRAY XMP and MicroVax II computers have been used with these codes. The CRAY has been used for an analog version of the MCNP code while the MicroVax II has been used for the optimized variance reduction versions of both codes. Results indicate that the two codes give the same results within calculated standard deviations. Comparisons are given and discussed for accuracy (precision) and computation times for the two codes
International Nuclear Information System (INIS)
Lim, Gyeong Hui
2008-03-01
This book consists of 15 chapters, covering: the basic concepts and meaning of statistical thermodynamics; Maxwell-Boltzmann statistics; ensembles; thermodynamic functions and fluctuations; statistical dynamics of independent-particle systems; ideal molecular systems; chemical equilibrium and chemical reaction rates in ideal gas mixtures; classical statistical thermodynamics; the ideal lattice model; lattice statistics and non-ideal lattice models; imperfect-gas theory of liquids; the theory of solutions; statistical thermodynamics of interfaces; statistical thermodynamics of high-polymer systems; and quantum statistics
Anomaly detection in OECD Benchmark data using co-variance methods
International Nuclear Information System (INIS)
Srinivasan, G.S.; Krinizs, K.; Por, G.
1993-02-01
OECD Benchmark data distributed for the SMORN VI Specialists Meeting in Reactor Noise were investigated for anomaly detection in artificially generated reactor noise benchmark analysis. It was observed that statistical features extracted from the covariance matrix of frequency components are very sensitive indicators for anomaly detection. It is possible to create well-defined alarm levels. (R.P.) 5 refs.; 23 figs.; 1 tab
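The abstract does not spell out the detection statistic, so here is only a generic sketch of covariance-based anomaly scoring via the squared Mahalanobis distance, on made-up two-feature data (a stand-in illustration, not the SMORN benchmark method):

```python
def mahalanobis2(x, data):
    """Anomaly score of a 2-D point x against reference data: the
    squared Mahalanobis distance, using the closed-form inverse of
    the 2x2 sample covariance matrix."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    sxx = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in data) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
    det = sxx * syy - sxy ** 2
    dx, dy = x[0] - mx, x[1] - my
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

# Made-up "normal operation" feature pairs (e.g. two spectral components)
ref = [(0.9, 1.1), (1.0, 1.0), (1.1, 0.9), (1.0, 1.05), (0.95, 1.0)]
normal_score = mahalanobis2((1.0, 1.0), ref)
anomaly_score = mahalanobis2((3.0, 3.0), ref)  # far outside the reference cloud
```

An alarm level then becomes a simple threshold on this score.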
Statistical significance of trends in monthly heavy precipitation over the US
Mahajan, Salil
2011-05-11
Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's τ test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in twenty-first century projections when the greenhouse gas forcings are strong. © 2011 Springer-Verlag.
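The non-parametric bootstrap test of a trend can be sketched as follows: resampling the series with replacement destroys any real trend, so the observed slope is compared against the resampled null distribution. This illustrates the general idea only, not the paper's exact procedure:

```python
import random

def trend_slope(y):
    """Ordinary least-squares slope of y against its time index."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def bootstrap_p_value(y, n_boot=2000, seed=0):
    """Non-parametric bootstrap: resample with replacement (which
    destroys any trend) and count how often the resampled slope
    matches or exceeds the observed slope in magnitude."""
    rng = random.Random(seed)
    observed = abs(trend_slope(y))
    hits = 0
    for _ in range(n_boot):
        sample = [rng.choice(y) for _ in y]
        if abs(trend_slope(sample)) >= observed:
            hits += 1
    return hits / n_boot

# A series with a strong linear trend plus tiny alternating noise
p = bootstrap_p_value([0.1 * i + 0.01 * ((-1) ** i) for i in range(40)])
```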
Reevaluation of the Jezebel Benchmark
Energy Technology Data Exchange (ETDEWEB)
Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-03-10
Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.
SCWEB, Scientific Workstation Evaluation Benchmark
Energy Technology Data Exchange (ETDEWEB)
Raffenetti, R C [Computing Services-Support Services Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States)
1988-06-16
1 - Description of program or function: The SCWEB (Scientific Workstation Evaluation Benchmark) software includes 16 programs which are executed in a well-defined scenario to measure the following performance capabilities of a scientific workstation: implementation of FORTRAN77, processor speed, memory management, disk I/O, monitor (or display) output, scheduling of processing (multiprocessing), and scheduling of print tasks (spooling). 2 - Method of solution: The benchmark programs are: DK1, DK2, and DK3, which do Fourier series fitting based on spline techniques; JC1, which checks the FORTRAN function routines which produce numerical results; JD1 and JD2, which solve dense systems of linear equations in double- and single-precision, respectively; JD3 and JD4, which perform matrix multiplication in single- and double-precision, respectively; RB1, RB2, and RB3, which perform substantial amounts of I/O processing on files other than the input and output files; RR1, which does intense single-precision floating-point multiplication in a tight loop; RR2, which initializes a 512x512 integer matrix in a manner which skips around in the address space rather than initializing each consecutive memory cell in turn; RR3, which writes alternating text buffers to the output file; RR4, which evaluates the timer routines and demonstrates that they conform to the specification; and RR5, which determines whether the workstation is capable of executing a 4-megabyte program.
Pynamic: the Python Dynamic Benchmark
Energy Technology Data Exchange (ETDEWEB)
Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J
2007-07-10
Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
The Isprs Benchmark on Indoor Modelling
Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.
2017-09-01
Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.
Analysis of a molten salt reactor benchmark
International Nuclear Information System (INIS)
Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.
2013-01-01
This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)
Benchmarking i eksternt regnskab og revision
DEFF Research Database (Denmark)
Thinggaard, Frank; Kiertzner, Lars
2001-01-01
...continuously in a benchmarking process. This chapter broadly examines the extent to which the benchmarking concept can reasonably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 addresses the area of auditing. The final section of the chapter summarizes the considerations on benchmarking in connection with both areas.
Computational Chemistry Comparison and Benchmark Database
SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access) The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.
Statistical studies of powerful extragalactic radio sources
Energy Technology Data Exchange (ETDEWEB)
Macklin, J T
1981-01-01
This dissertation is mainly about the use of efficient statistical tests to study the properties of powerful extragalactic radio sources. Most of the analysis is based on subsets of a sample of 166 bright (3CR) sources selected at 178 MHz. The first chapter is introductory and it is followed by three on the misalignment and symmetry of double radio sources. The properties of nuclear components in extragalactic sources are discussed in the next chapter, using statistical tests which make efficient use of upper limits, often the only available information on the flux density from the nuclear component. Multifrequency observations of four 3CR sources are presented in the next chapter. The penultimate chapter is about the analysis of correlations involving more than two variables. The Spearman partial rank correlation coefficient is shown to be the most powerful test available which is based on non-parametric statistics. It is therefore used to study the dependences of the properties of sources on their size at constant redshift, and the results are interpreted in terms of source evolution. Correlations of source properties with luminosity and redshift are then examined.
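The Spearman partial rank correlation mentioned above combines rank correlations with the usual first-order partial-correlation formula. A self-contained sketch (the toy data are hypothetical, standing in for quantities such as source size, luminosity and redshift):

```python
import math

def _ranks(x):
    """Average ranks (tied values share the mean rank)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def _pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def spearman_partial(x, y, z):
    """Spearman partial rank correlation of x and y controlling for z:
    Pearson correlation of ranks, combined with the first-order
    partial-correlation formula."""
    rxy = _pearson(_ranks(x), _ranks(y))
    rxz = _pearson(_ranks(x), _ranks(z))
    ryz = _pearson(_ranks(y), _ranks(z))
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Toy data: x and y each track the control variable z, so much of
# their raw rank correlation disappears once z is held fixed.
z = [1, 2, 3, 4, 5, 6, 7, 8]
x = [2, 1, 4, 3, 6, 5, 8, 7]
y = [1, 3, 2, 5, 4, 7, 6, 8]
raw = _pearson(_ranks(x), _ranks(y))
partial = spearman_partial(x, y, z)
```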
Aerodynamic Benchmarking of the Deepwind Design
DEFF Research Database (Denmark)
Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge
2015-01-01
The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the benchmarking...
HPC Benchmark Suite NMx, Phase I
National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...
High Energy Physics (HEP) benchmark program
International Nuclear Information System (INIS)
Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.
1993-01-01
High Energy Physics (HEP) benchmark programs are indispensable tools for selecting a suitable computer for an HEP application system. Industry-standard benchmark programs cannot be used for this kind of particular selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC one includes event generators. In this paper, we find that the results from these two suites are not consistent, and the result from the industry benchmark does not agree with either of them. In addition, we describe a comparison of benchmark results using the EGS4 Monte Carlo simulation program with those from the two HEP benchmark suites, and find that the EGS4 result is not consistent with either of them. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of a standardization of HEP benchmark suites. An EGS4 benchmark suite should also be developed for users of applications such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)
Establishing benchmarks and metrics for utilization management.
Melanson, Stacy E F
2014-01-01
The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.
Professional Performance and Bureaucratic Benchmarking Information
DEFF Research Database (Denmark)
Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz
Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...
Benchmarking of nuclear economics tools
International Nuclear Information System (INIS)
Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh
2017-01-01
Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exists. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO), was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return of investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and
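Both tools ultimately compute a levelised unit energy cost, i.e. total discounted lifetime costs divided by total discounted lifetime energy. A generic sketch of that formula (not the G4ECONS or NEST implementation; all input numbers are illustrative, and the overnight capital is booked at t=0 for simplicity):

```python
def luec(capital, annual_om, annual_fuel, annual_energy_mwh,
         lifetime_years, discount_rate):
    """Levelised unit energy cost: discounted lifetime costs divided
    by discounted lifetime energy (generic textbook formula)."""
    disc_costs = capital  # overnight capital assumed spent at t=0
    disc_energy = 0.0
    for t in range(1, lifetime_years + 1):
        f = (1 + discount_rate) ** -t
        disc_costs += (annual_om + annual_fuel) * f
        disc_energy += annual_energy_mwh * f
    return disc_costs / disc_energy

# Illustrative inputs only (USD and MWh), not data from the benchmark
cost = luec(capital=4.0e9, annual_om=8.0e7, annual_fuel=4.0e7,
            annual_energy_mwh=8.0e6, lifetime_years=60,
            discount_rate=0.05)  # USD per MWh
```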
FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark
International Nuclear Information System (INIS)
Sawan, M.E.
1994-12-01
During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)
Higgs Pair Production: Choosing Benchmarks With Cluster Analysis
Carvalho, Alexandra; Dorigo, Tommaso; Goertz, Florian; Gottardo, Carlo A.; Tosi, Mia
2016-01-01
New physics theories often depend on a large number of free parameters. The precise values of those parameters in some cases drastically affect the resulting phenomenology of fundamental physics processes, while in others finite variations can leave it basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics of different models; a clustering algorithm using that metric may then allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmark points are then guaranteed to be sensitive to a large area of the parameter space. In this doc...
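The benchmark-point selection described here can be sketched with a greedy medoid-style picker: a distance between models (below, plain Euclidean distance on made-up kinematic features, standing in for the multi-dimensional test statistic) and a rule that every model ends up near some chosen representative. A hypothetical illustration, not the authors' algorithm:

```python
def benchmark_points(features, k):
    """Greedy medoid sketch: pick k representative 'benchmark' models
    so that every model is close to some representative. `features`
    maps a model id to its kinematic feature vector."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

    ids = list(features)
    # Start from the model minimizing total distance to all others,
    # then repeatedly add the model farthest from the chosen set.
    chosen = [min(ids, key=lambda i: sum(dist(features[i], features[j])
                                         for j in ids))]
    while len(chosen) < k:
        chosen.append(max(ids, key=lambda i: min(dist(features[i], features[c])
                                                 for c in chosen)))
    return chosen

# Made-up feature vectors forming two kinematically distinct groups
models = {
    "A": (0.0, 0.1), "B": (0.1, 0.0), "C": (0.05, 0.05),
    "D": (5.0, 5.1), "E": (5.1, 4.9),
}
reps = benchmark_points(models, k=2)  # one representative per group
```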
Human factors reliability Benchmark exercise
International Nuclear Information System (INIS)
Poucet, A.
1989-06-01
The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise
Experimental and computational benchmark tests
International Nuclear Information System (INIS)
Gilliam, D.M.; Briesmeister, J.F.
1994-01-01
A program involving principally NIST, LANL, and ORNL has been in progress for about four years to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel
ENVIRONMENTAL BENCHMARKING FOR LOCAL AUTHORITIES
Directory of Open Access Journals (Sweden)
Marinela GHEREŞ
2010-01-01
Full Text Available This paper is an attempt to clarify and present the many definitions of benchmarking. It also attempts to explain the basic steps of benchmarking, to show how this tool can be applied by local authorities, and to discuss its potential benefits and limitations. It is our strong belief that if cities use indicators and progressively introduce targets to improve management and related urban life quality, and to measure progress towards more sustainable development, we will also create a new type of competition among cities and foster innovation. This is seen to be important because local authorities' actions play a vital role in responding to the challenges of enhancing the state of the environment, not only in policy-making but also in the provision of services and in the planning process. Local communities therefore need to be aware of their own sustainability performance levels and should be able to engage in exchange of best practices to respond effectively to the eco-economic challenges of the century.
Benchmark results in radiative transfer
International Nuclear Information System (INIS)
Garcia, R.D.M.; Siewert, C.E.
1986-02-01
Several aspects of the F N method are reported, and the method is used to solve accurately some benchmark problems in radiative transfer in the field of atmospheric physics. The method was modified to solve cases of pure scattering, and an improved process was developed for computing the radiation intensity. An algorithm for computing several quantities used in the F N method was developed. An improved scheme to evaluate certain integrals relevant to the method is presented, and a two-term recursion relation that has proved useful for the numerical evaluation of matrix elements, basic for the method, is given. The methods used to solve the encountered linear algebraic equations are discussed, and the numerical results are evaluated. (M.C.K.) [pt
NASA Software Engineering Benchmarking Study
Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.
2013-01-01
To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths
SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit
Directory of Open Access Journals (Sweden)
Annie Chu
2009-04-01
Full Text Available The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been shown that these resources can successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a relatively new component and it concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, and one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Some examples of SOCR Analyses' tests in the non-parametric category are the Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis testing models include the contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and the utilization of SOCR Analyses.
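The non-parametric tests listed in this abstract (Wilcoxon rank sum, Kruskal-Wallis) are also available in standard libraries; a minimal SciPy sketch on synthetic samples, not taken from SOCR itself:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 30)   # synthetic sample 1
b = rng.normal(0.5, 1.0, 30)   # synthetic sample 2, shifted location
c = rng.normal(1.0, 1.0, 30)   # synthetic sample 3

# Wilcoxon rank sum test (Mann-Whitney U) for two independent samples
u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")

# Kruskal-Wallis test generalizes the rank comparison to three or more groups
h_stat, h_p = stats.kruskal(a, b, c)

print(f"Mann-Whitney p = {u_p:.4f}; Kruskal-Wallis p = {h_p:.4f}")
```

Both tests operate on ranks only, so they apply to the skewed or ordinal data for which SOCR's non-parametric category is intended.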
An introduction to inferential statistics: A review and practical guide
Energy Technology Data Exchange (ETDEWEB)
Marshall, Gill, E-mail: gill.marshall@cumbria.ac.u [Faculty of Health, Medical Sciences and Social Care, University of Cumbria, Lancaster LA1 3JD (United Kingdom); Jonker, Leon [Faculty of Health, Medical Sciences and Social Care, University of Cumbria, Lancaster LA1 3JD (United Kingdom)
2011-02-15
Building on the first part of this series regarding descriptive statistics, this paper demonstrates why it is advantageous for radiographers to understand the role of inferential statistics in deducing conclusions from a sample and their application to a wider population. This is necessary so radiographers can understand the work of others, can undertake their own research and evidence base their practice. This article explains p values and confidence intervals. It introduces the common statistical tests that comprise inferential statistics, and explains the use of parametric and non-parametric statistics. To do this, the paper reviews relevant literature, and provides a checklist of points to consider before and after applying statistical tests to a data set. The paper provides a glossary of relevant terms and the reader is advised to refer to this when any unfamiliar terms are used in the text. Together with the information provided on descriptive statistics in an earlier article, it can be used as a starting point for applying statistics in radiography practice and research.
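The p values and confidence intervals the paper explains can be illustrated in a few lines. The sample values below are synthetic and the SciPy calls are an illustrative sketch, not part of the article:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=100.0, scale=15.0, size=40)   # synthetic measurements

# 95% confidence interval for the mean, using the t distribution (n - 1 df)
mean = sample.mean()
sem = stats.sem(sample)
ci_low, ci_high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)

# Independent-samples t test (parametric) against a second synthetic sample
other = rng.normal(loc=105.0, scale=15.0, size=40)
t_stat, p_value = stats.ttest_ind(sample, other)

print(f"95% CI = ({ci_low:.1f}, {ci_high:.1f}); p = {p_value:.3f}")
```

If the data were not approximately normal, the non-parametric Mann-Whitney U test would replace `ttest_ind`, which is exactly the parametric/non-parametric choice the article discusses.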
An introduction to inferential statistics: A review and practical guide
International Nuclear Information System (INIS)
Marshall, Gill; Jonker, Leon
2011-01-01
Building on the first part of this series regarding descriptive statistics, this paper demonstrates why it is advantageous for radiographers to understand the role of inferential statistics in deducing conclusions from a sample and their application to a wider population. This is necessary so radiographers can understand the work of others, can undertake their own research and evidence base their practice. This article explains p values and confidence intervals. It introduces the common statistical tests that comprise inferential statistics, and explains the use of parametric and non-parametric statistics. To do this, the paper reviews relevant literature, and provides a checklist of points to consider before and after applying statistical tests to a data set. The paper provides a glossary of relevant terms and the reader is advised to refer to this when any unfamiliar terms are used in the text. Together with the information provided on descriptive statistics in an earlier article, it can be used as a starting point for applying statistics in radiography practice and research.
The role of benchmarking for yardstick competition
International Nuclear Information System (INIS)
Burns, Phil; Jenkins, Cloda; Riechmann, Christoph
2005-01-01
With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)
Benchmarking set for domestic smart grid management
Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria
2010-01-01
In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used
Medical school benchmarking - from tools to programmes.
Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T
2015-02-01
Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.
Benchmarking in digital circuit design automation
Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.
2008-01-01
This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious
Benchmark Two-Good Utility Functions
de Jaegher, K.
Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price
Repeated Results Analysis for Middleware Regression Benchmarking
Czech Academy of Sciences Publication Activity Database
Bulej, Lubomír; Kalibera, T.; Tůma, P.
2005-01-01
Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005
Benchmarking the energy efficiency of commercial buildings
International Nuclear Information System (INIS)
Chung, William; Hui, Y.V.; Lam, Y. Miu
2006-01-01
Benchmarking energy-efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed in a simple benchmark table (percentile table) of energy use, which is normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and the use of the benchmarking method
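The normalization step described above (regress EUI on explanatory factors, remove the effect of deviance from the mean, then build a percentile table) can be sketched as follows. The building data and the single explanatory factor (operating hours) are hypothetical illustrations, not the paper's dataset:

```python
import numpy as np

# Hypothetical data: annual EUI (kWh/m^2) and weekly operating hours
# for eight buildings (illustrative values only)
hours = np.array([60., 70., 80., 90., 100., 110., 120., 168.])
eui = np.array([180., 200., 210., 240., 255., 270., 300., 380.])

# Step 1: regress EUI on the explanatory factor by ordinary least squares
slope, intercept = np.polyfit(hours, eui, 1)

# Step 2: normalize each EUI by removing the effect of deviance
# from the mean operating hours
normalized = eui - slope * (hours - hours.mean())

# Step 3: empirical percentile (benchmark) table of the normalized EUIs
benchmarks = np.percentile(normalized, [25, 50, 75])
print("benchmark quartiles:", benchmarks)
```

An observed building's normalized EUI can then be ranked against this table, so buildings with unusually long operating hours are not unfairly penalized.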
Benchmarking, Total Quality Management, and Libraries.
Shaughnessy, Thomas W.
1993-01-01
Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)
Benchmarking strategies for measuring the quality of healthcare: problems and prospects.
Lovaglio, Pietro Giorgio
2012-01-01
Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.
Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects
Lovaglio, Pietro Giorgio
2012-01-01
Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140
A Seafloor Benchmark for 3-dimensional Geodesy
Chadwell, C. D.; Webb, S. C.; Nooner, S. L.
2014-12-01
We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone
SP2Bench: A SPARQL Performance Benchmark
Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg
A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.
Benchmarking of refinery emissions performance : Executive summary
International Nuclear Information System (INIS)
2003-07-01
This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs
Vver-1000 Mox core computational benchmark
International Nuclear Information System (INIS)
2006-01-01
The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the
Thermal Performance Benchmarking: Annual Report
Energy Technology Data Exchange (ETDEWEB)
Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center
2017-10-19
In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics system yields steady-state thermal resistance values around 42-50 mm²·K/W, depending on the flow rate. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at a time scale less than one second. This is probably due to moving low thermally conductive materials further away from the heat source and enhancing the heat spreading effect from the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results of the thermal resistance of the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction
Statistical approach for selection of regression model during validation of bioanalytical method
Directory of Open Access Journals (Sweden)
Natalija Nakov
2014-06-01
Full Text Available The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration range frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using non-parametric statistical tests: the one-sample rank test and the Wilcoxon signed rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models because only slight differences between the errors (presented through the relative residuals) were obtained. Estimation of the significance of the differences in the RR was achieved using the Wilcoxon signed rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple non-parametric statistical test provides statistical confirmation of the choice of an adequate regression model.
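The model-comparison procedure described above can be sketched as follows. The calibration data, the 1/x² weighting scheme, and the use of NumPy/SciPy are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data spanning a wide concentration range
conc = np.array([1., 2., 5., 10., 20., 50., 100., 200., 500.])
resp = 2.0 * conc + 0.001 * conc**2 + \
    np.array([0.1, -0.2, 0.3, -0.5, 1.0, -2.0, 3.0, -5.0, 8.0])

# np.polyfit applies weights to the *unsquared* residuals, so w = 1/x
# corresponds to the common 1/x^2 weighting of the squared residuals
w = 1.0 / conc

lin = np.polyfit(conc, resp, 1, w=w)    # weighted linear model
quad = np.polyfit(conc, resp, 2, w=w)   # weighted quadratic model

# Relative residuals (RR) of each candidate fit
rr_lin = np.abs(np.polyval(lin, conc) - resp) / resp
rr_quad = np.abs(np.polyval(quad, conc) - resp) / resp

# Wilcoxon signed-rank test on the paired RRs of the two candidate models
stat, p = stats.wilcoxon(rr_lin, rr_quad)
print(f"Wilcoxon p = {p:.3f}")
```

A small p would indicate the two models' relative residuals differ systematically, giving a statistical basis for preferring one curve fit over the other.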
Hart, Liz
2001-01-01
We can learn a lot from others. Benchmarking provides a structural framework for making comparisons with other organisations. The techniques enable us to learn from one another by looking at why there are differences in performance outcomes between organisations undertaking similar functions. This seminar concentrated on: Importance of benchmarking / benchmarking techniques, Establishment of benchmarking consortia, Utilising statistics and performance indicators and Practical examples of how ...
What Randomized Benchmarking Actually Measures
International Nuclear Information System (INIS)
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin
2017-01-01
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
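The exponential RB decay described above can be fitted in a few lines. The simulated survival probabilities and parameter values below are hypothetical, and the relation r = (d-1)(1-p)/d is the standard RB convention, used here only for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated survival probabilities P(m) = A * p**m + B over circuit length m
lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)
A_true, B_true, p_true = 0.5, 0.5, 0.995
rng = np.random.default_rng(2)
survival = A_true * p_true**lengths + B_true + rng.normal(0.0, 0.002, lengths.size)

def decay(m, a, b, p):
    # Simple exponential model of the RB decay curve
    return a * p**m + b

(a_fit, b_fit, p_fit), _ = curve_fit(decay, lengths, survival, p0=[0.5, 0.5, 0.99])

# Conventional RB number for a d-dimensional system (d = 2 for one qubit)
d = 2
r = (d - 1) * (1.0 - p_fit) / d
print(f"p = {p_fit:.4f}, r = {r:.5f}")
```

The paper's point is precisely that this fitted r need not equal the average gate infidelity of any particular representation of the gates, even though the decay itself is a clean exponential.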
International Nuclear Information System (INIS)
Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.
1982-02-01
Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, applicability of JENDL-1 was further tested on more sophisticated problems for MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There exists a discrepancy of 0.9% in the k sub(eff)-values between the Pu- and U-cores. 2) The fission rate ratio of 239 Pu to 235 U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235 U and 239 Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related with the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too small diffusion coefficients and too large elastic removal cross sections above 100 keV, which might probably be caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)
Human factors reliability benchmark exercise
International Nuclear Information System (INIS)
Poucet, A.
1989-08-01
The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches
An appraisal of statistical procedures used in derivation of reference intervals.
Ichihara, Kiyoshi; Boyd, James C
2010-11-01
When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and to adjust for multiple factors. The Box-Cox power transformation has often been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method, based on determination of the 2.5th and 97.5th percentiles after sorting the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small, owing to additional variability in parametrically determined RLs introduced by estimation of parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
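The percentile-based procedure recommended above is simple enough to sketch directly. The following Python sketch sorts a synthetic reference sample and reads off the 2.5th and 97.5th percentiles with a basic rank estimator; the data, the sample size and the exact rank formula are illustrative assumptions (published guidelines prescribe specific rank conventions and confidence-interval estimates).

```python
# Non-parametric reference interval: sort the reference values and take
# the 2.5th and 97.5th percentiles, as described in the abstract.
# Minimal sketch with synthetic data; purely illustrative.
import random

def nonparametric_ri(values, lower=2.5, upper=97.5):
    """Return the (lower, upper) percentile bounds of `values`.

    Uses a simple rank-based estimator: after sorting n values, the
    p-th percentile is taken at rank round(p/100 * (n - 1)).
    """
    data = sorted(values)
    n = len(data)

    def pct(p):
        return data[round(p / 100 * (n - 1))]

    return pct(lower), pct(upper)

random.seed(0)
# Synthetic "reference population" of 400 results, matching the
# recommended minimum subgroup size mentioned above.
sample = [random.gauss(100.0, 10.0) for _ in range(400)]
lo, hi = nonparametric_ri(sample)
print(lo, hi)
```

With real data the same function would be applied only after outlier inspection and any partitioning suggested by nested ANOVA.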
Academic Productivity in Psychiatry: Benchmarks for the H-Index.
MacMaster, Frank P; Swansburg, Rose; Rittenbach, Katherine
2017-08-01
Bibliometrics play an increasingly critical role in the assessment of faculty for promotion and merit increases. Bibliometrics is the statistical analysis of publications, aimed at evaluating their impact. The objective of this study is to describe h-index and citation benchmarks in academic psychiatry. Faculty lists were acquired from online resources for all academic departments of psychiatry listed as having residency training programs in Canada (as of June 2016). Potential authors were then searched on Web of Science (Thomson Reuters) for their corresponding h-index and total number of citations. The sample included 1683 faculty members in academic psychiatry departments. Restricting the sample to those with a rank of assistant, associate, or full professor left 1601 faculty members (assistant = 911, associate = 387, full = 303). The h-index and total citations differed significantly by academic rank. Both were highest in the full professor rank, followed by associate, then assistant. The range in each, however, was large. This study provides initial benchmarks for the h-index and total citations in academic psychiatry. Regardless of any controversies or criticisms of bibliometrics, they increasingly influence promotion, merit increases, and grant support. As such, benchmarking by specialty is needed in order to provide context.
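The h-index the study benchmarks has a short operational definition: the largest h such that h of the author's papers each have at least h citations. A minimal sketch, with invented citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank          # this paper still supports a larger h
        else:
            break
    return h

# Invented citation records for two hypothetical faculty members.
print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with >= 4 citations
print(h_index([25, 8, 5, 3, 3]))   # 3: the 4th paper has only 3 citations
```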
Revaluering benchmarking - A topical theme for the construction industry
DEFF Research Database (Denmark)
Rasmussen, Grane Mikael Gregaard
2011-01-01
…and questioning the concept objectively. This paper addresses the underlying nature of benchmarking and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend the perception of benchmarking systems as secondary and derivative and instead study benchmarking as constitutive of social relations and as an irredeemably social phenomenon. I have attempted to do so in this paper by treating benchmarking from a calculative-practice perspective, and describing how…
Developing integrated benchmarks for DOE performance measurement
Energy Technology Data Exchange (ETDEWEB)
Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.
1992-09-30
The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome data in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which could then become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazard and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.
Bugała, Artur; Bednarek, Karol; Kasprzyk, Leszek; Tomczewski, Andrzej
2017-10-01
The paper presents the most representative characteristics, from a three-year measurement period, of daily and monthly electricity production from photovoltaic conversion using modules installed in a fixed and a 2-axis tracking construction. Results are presented for selected summer, autumn, spring and winter days. The analyzed measurement stand is located on the roof of the Faculty of Electrical Engineering building at Poznan University of Technology. The basic parameters of statistical analysis, such as mean value, standard deviation, skewness, kurtosis, median, range, and coefficient of variation, were used. It was found that the asymmetry factor can be useful in the analysis of daily electricity production from photovoltaic conversion. In order to determine the repeatability of monthly electricity production between the summer months, and between the summer and winter months, the non-parametric Mann-Whitney U test was used. In order to analyze the repeatability of daily peak hours, describing the largest value of hourly electricity production, the non-parametric Kruskal-Wallis test was applied as an extension of the Mann-Whitney U test. Based on the analysis of the electric energy distribution from the prepared monitoring system, it was found that traditional methods of forecasting electricity production from photovoltaic conversion, such as multiple regression models, should not be the preferred methods of analysis.
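The Mann-Whitney U test used above is rank-based: the two samples are pooled and ranked, and the U statistic is derived from the rank sum of one sample. Below is a minimal sketch of the statistic itself (a full test with p-values, and the Kruskal-Wallis extension, are available in `scipy.stats`); the sample values are invented, not the paper's measurements.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y.

    Pools the two samples, assigns mid-ranks to ties, and computes
    U1 = R1 - n1*(n1 + 1)/2, where R1 is the rank sum of x.
    """
    pooled = sorted((value, idx) for idx, value in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        mid_rank = (i + j) / 2 + 1      # average (1-based) rank for a tie group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = mid_rank
        i = j + 1
    r1 = sum(ranks[: len(x)])           # rank sum of the first sample
    return r1 - len(x) * (len(x) + 1) / 2

# Completely separated samples give the extreme values 0 and n1*n2.
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))   # 0.0
print(mann_whitney_u([4, 5, 6], [1, 2, 3]))   # 9.0
```

Values near n1·n2/2 indicate similar distributions, i.e. repeatable monthly production; values near the extremes indicate a systematic shift between the two groups of months.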
REPORT FOR COMMERCIAL GRADE NICKEL CHARACTERIZATION AND BENCHMARKING
Energy Technology Data Exchange (ETDEWEB)
None
2012-12-20
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, has completed the collection, sample analysis, and review of analytical results to benchmark the concentrations of gross alpha-emitting radionuclides, gross beta-emitting radionuclides, and technetium-99 in commercial grade nickel. This report presents methods, change management, observations, and statistical analysis of materials procured from sellers representing nine countries on four continents. The data suggest there is a low probability of detecting alpha- and beta-emitting radionuclides in commercial nickel. Technetium-99 was not detected in any samples, thus suggesting it is not present in commercial nickel.
IAEA sodium void reactivity benchmark calculations
International Nuclear Information System (INIS)
Hill, R.N.; Finck, P.J.
1992-01-01
In this paper, the IAEA 1992 "Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core" problem is evaluated. The proposed design is a large, axially heterogeneous, oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated.
Benchmarking gate-based quantum computers
Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans
2017-11-01
With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
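The sensitivity of identity circuits to gate errors can be illustrated with a single-qubit simulation: a gate followed by its inverse should leave |0⟩ untouched, but a small systematic over-rotation on every gate makes the survival probability decay with circuit depth. The gate choice (Rx rotations) and the error model below are assumptions made for illustration, not the benchmarking protocol of the paper.

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def identity_benchmark(depth, over_rotation=0.0):
    """Apply `depth` pairs Rx(pi/2) Rx(-pi/2); ideally the identity.

    With a systematic over-rotation on every gate, the survival
    probability of |0> drops as the circuit gets deeper.
    """
    state = np.array([1.0 + 0j, 0.0 + 0j])
    for _ in range(depth):
        state = rx(np.pi / 2 + over_rotation) @ state
        state = rx(-np.pi / 2 + over_rotation) @ state
    return abs(state[0]) ** 2           # probability of measuring |0>

print(identity_benchmark(20, 0.0))     # ideal gates: ~1.0
print(identity_benchmark(20, 0.01))    # small gate error: drops below 1
```

On ideal hardware the survival probability stays at 1 for any depth, which is what makes such circuits simple, scalable and error-sensitive benchmarks.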
Benchmark Imagery FY11 Technical Report
Energy Technology Data Exchange (ETDEWEB)
Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2011-06-14
This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.
Present status and extensions of the Monte Carlo performance benchmark
International Nuclear Information System (INIS)
Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.
2013-01-01
The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type computer nodes. However, using true supercomputers the speedup of parallel calculations keeps increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed. (authors)
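The 100-billion-histories figure reflects the 1/√N convergence of Monte Carlo tallies: halving the statistical error costs four times the histories. A back-of-the-envelope sketch (all numbers are illustrative, not taken from the benchmark results):

```python
# Monte Carlo statistical accuracy scales as 1/sqrt(N). If a pin-zone
# tally reaches relative error r0 with n0 histories, then roughly
# n0 * (r0/target)**2 histories are needed to reach the target error.
def histories_needed(n0, r0, target):
    """Histories required to shrink relative error r0 (at n0) to target."""
    return n0 * (r0 / target) ** 2

# E.g. 1e9 histories giving 10% on a small zone -> 1% needs ~100x more,
# i.e. on the order of the 100 billion histories quoted above.
print(histories_needed(1e9, 0.10, 0.01))
```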
How benchmarking can improve patient nutrition.
Ellis, Jane
Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.
Benchmarking and validation activities within JEFF project
Directory of Open Access Journals (Sweden)
Cabellos O.
2017-01-01
The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library and thus requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.
Measuring Distribution Performance? Benchmarking Warrants Your Attention
Energy Technology Data Exchange (ETDEWEB)
Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group
2018-04-13
Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.
Numerical methods: Analytical benchmarking in transport theory
International Nuclear Information System (INIS)
Ganapol, B.D.
1988-01-01
Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered
Benchmarking Linked Open Data Management Systems
R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)
2014-01-01
With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics…
International Nuclear Information System (INIS)
Marck, Steven C. van der
2006-01-01
The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and Teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium and compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8 and JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257…
Benchmarks for dynamic multi-objective optimisation
CSIR Research Space (South Africa)
Helbig, M
2013-06-01
When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi…
Professional Performance and Bureaucratic Benchmarking Information
DEFF Research Database (Denmark)
Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz
controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...
Second benchmark problem for WIPP structural computations
International Nuclear Information System (INIS)
Krieg, R.D.; Morgan, H.S.; Hunter, T.O.
1980-12-01
This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications
Reactor fuel depletion benchmark of TINDER
International Nuclear Information System (INIS)
Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.
2014-01-01
Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work.
XWeB: The XML Warehouse Benchmark
Mahboubi, Hadj; Darmont, Jérôme
With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, together with its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.
Benchmarking and validation activities within JEFF project
Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der
2017-09-01
The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.
Nuclear data uncertainties for local power densities in the Martin-Hoogenboom benchmark
International Nuclear Information System (INIS)
Van der Marck, S.C.; Rochman, D.A.
2013-01-01
The recently developed method of fast Total Monte Carlo to propagate nuclear data uncertainties was applied to the Martin-Hoogenboom benchmark. This benchmark prescribes that one calculate local pin powers (of a light-water-cooled reactor) with a statistical uncertainty lower than 1% everywhere. Here we report, for the first time, an estimate of the nuclear data uncertainties for these local pin powers. For each of the more than 6 million local power tallies, the uncertainty due to nuclear data uncertainties was calculated, based on random variation of the data for 235U, 238U, 239Pu and H in H2O thermal scattering. In the center of the core region, the nuclear data uncertainty is 0.9%. Towards the edges of the core, this uncertainty increases to roughly 3%. The nuclear data uncertainties have been shown to be larger than the statistical uncertainties that the benchmark prescribes.
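The Total Monte Carlo idea can be caricatured in a few lines: rerun the calculation many times with the nuclear data randomly varied within its uncertainty, then take the spread of the tally as its nuclear-data uncertainty. The response function and the 1% input uncertainty below are stand-ins chosen for illustration; the real method reruns a full transport calculation with randomly sampled nuclear data files.

```python
import random
import statistics

def toy_tally(xs_scale):
    """Stand-in for a full transport calculation: returns a 'pin power'
    that depends on a (perturbed) cross-section scale factor."""
    return 1.0 / xs_scale   # purely illustrative response

random.seed(1)
# Rerun the "calculation" many times with the nuclear data randomly
# varied within an assumed 1% (1-sigma) uncertainty.
results = []
for _ in range(1000):
    perturbed = random.gauss(1.0, 0.01)
    results.append(toy_tally(perturbed))

# The spread of the tally over the replicas estimates the nuclear-data
# uncertainty of that tally.
rel_unc = statistics.stdev(results) / statistics.mean(results)
print(round(rel_unc * 100, 2), "%")
```

For a nearly linear response like this one, a 1% input uncertainty propagates to roughly a 1% tally uncertainty; in a real core the sensitivity varies strongly with position, which is why the benchmark's edge tallies show larger uncertainties than the center.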
Whole Frog Project and Virtual Frog Dissection Statistics
Benchmarking Danish Vocational Education and Training Programmes
DEFF Research Database (Denmark)
Bogetoft, Peter; Wittrup, Jesper
This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes… …attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes.
A framework for benchmarking land models
Directory of Open Access Journals (Sweden)
Y. Q. Luo
2012-10-01
Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties…
EPRI depletion benchmark calculations using PARAGON
International Nuclear Information System (INIS)
Kucukboyaci, Vefa N.
2015-01-01
Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for burnups of 10, 20, 30, 40, 50, and 60 GWd/MTU and three cooling times: 100 h, 5 years, and 15 years. These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty.
Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.
2015-01-01
Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline. Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642
Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2
International Nuclear Information System (INIS)
Buranathiti, Thaweepat; Cao Jian
2005-01-01
This report presents an international cooperative benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of the various models submitted for this benchmark study is presented. The prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission.
Benchmarking infrastructure for mutation text mining.
Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo
2014-02-25
Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
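The performance metrics this infrastructure computes can be sketched in a few lines. In the actual system the abstract says these are derived with SPARQL over RDF annotations; the stdlib-only version below uses plain Python sets as a stand-in, and the toy corpus triples and function name are invented for illustration.

```python
# Precision/recall/F1 for a mutation text-mining system against a
# gold-standard corpus, using sets of annotation tuples as a stand-in
# for the RDF/SPARQL machinery described in the abstract.

def prf(gold: set, predicted: set) -> tuple:
    """Return (precision, recall, F1) for predicted annotations vs. gold."""
    tp = len(gold & predicted)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy corpus: mutation mentions grounded to (document, protein, substitution).
gold = {("doc1", "P53", "R175H"), ("doc1", "P53", "G245S"),
        ("doc2", "BRCA1", "C61G")}
pred = {("doc1", "P53", "R175H"), ("doc2", "BRCA1", "C61G"),
        ("doc2", "BRCA1", "M1775R")}

p, r, f = prf(gold, pred)   # 2 of 3 predictions correct, 2 of 3 gold found
```

This is the "no programming needed" computation the infrastructure encodes as SPARQL queries; the set-based form makes the definitions explicit.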
Ad hoc committee on reactor physics benchmarks
International Nuclear Information System (INIS)
Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.
1996-01-01
In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route
Benchmarking for controllere: metoder, teknikker og muligheder
DEFF Research Database (Denmark)
Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe
2008-01-01
Benchmarking features in many ways in the management practice of both private and public organizations. In management accounting, benchmark-based indicators (or key figures) are used, for example, when setting targets in performance contracts or to specify the desired level of certain key figures in a Balanced Scorecard or similar performance-management models. The article explains the concept of benchmarking by presenting and discussing its different facets, and describes four different uses of benchmarking to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project. It then treats the difference between results benchmarking and process benchmarking, followed by the use of internal versus external benchmarking, and the use of benchmarking in budgeting and budget follow-up.
Pool critical assembly pressure vessel facility benchmark
International Nuclear Information System (INIS)
Remec, I.; Kam, F.B.K.
1997-07-01
This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. Excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.
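The summary statistic quoted above (an arithmetic average C/M of 0.93 ± 0.03) can be reproduced in form, if not in data, by a short sketch. The flux values below are invented placeholders, not the report's dosimeter measurements.

```python
# Arithmetic average of calculated-to-measured (C/M) equivalent fission
# fluxes across dosimeters, with its sample standard deviation.
from statistics import mean, stdev

def cm_summary(calculated, measured):
    """Average C/M ratio and its sample standard deviation."""
    ratios = [c / m for c, m in zip(calculated, measured)]
    return mean(ratios), stdev(ratios)

# Invented dosimeter fluxes for illustration (arbitrary units) --
# not the values from the PCA benchmark report.
calc = [0.92, 0.95, 0.90, 0.94]
meas = [1.00, 1.00, 1.00, 1.00]
avg, sd = cm_summary(calc, meas)
```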
Benchmarking infrastructure for mutation text mining
2014-01-01
Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
Benchmarking for Cost Improvement. Final report
Energy Technology Data Exchange (ETDEWEB)
1993-09-01
The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.
Directory of Open Access Journals (Sweden)
Aiman El-Saed
2013-10-01
Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections
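One of the risk-adjustment methods named above, indirect standardization, can be sketched briefly: observed infection counts are compared to the count expected if benchmark (reference) rates applied to the local case mix, yielding a standardized infection ratio. The strata, rates, and counts below are invented for illustration.

```python
# Indirect standardization: a standardized infection ratio (SIR) comparing
# local HAI counts against what benchmark rates would predict for the same
# device-day exposure. All numbers are made up.

def standardized_ratio(local_cases, benchmark_rates):
    """local_cases: {stratum: (observed_infections, device_days)}
    benchmark_rates: {stratum: infections per 1000 device-days}"""
    observed = sum(obs for obs, _ in local_cases.values())
    expected = sum(days * benchmark_rates[s] / 1000
                   for s, (_, days) in local_cases.items())
    return observed / expected

local = {"ICU": (6, 2000), "ward": (2, 4000)}
bench = {"ICU": 2.5, "ward": 0.5}          # per 1000 device-days
sir = standardized_ratio(local, bench)     # > 1 means worse than benchmark
```

A crude (unstratified) rate comparison would hide the fact that the local ICU carries most of the exposure, which is exactly the confounding the summary warns about.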
Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014
Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina
2016-01-01
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...
Efficient bootstrap estimates for tail statistics
Breivik, Øyvind; Aarnes, Ole Johan
2017-03-01
Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
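A minimal stdlib-only sketch of the subset idea: to bootstrap an upper tail statistic (here, for concreteness, the sample maximum), it suffices to resample from the m largest entries, first drawing how many of the n bootstrap picks land in that subset (a Binomial(n, m/n) count) and then resampling that many values from it. The function, parameters, and Pareto toy data are illustrative assumptions, not the authors' code or data.

```python
import random

def subset_bootstrap_max(sample, m, B=1000, seed=1):
    """95% bootstrap interval for the sample maximum using only the m
    largest entries: only draws landing in the top-m can affect the max."""
    rng = random.Random(seed)
    n = len(sample)
    top = sorted(sample)[-m:]                    # m highest entries
    maxima = []
    for _ in range(B):
        # number of bootstrap draws that hit the top-m subset
        k = sum(rng.random() < m / n for _ in range(n))
        if k == 0:
            maxima.append(top[0])   # replicate max fell below the subset
        else:
            maxima.append(max(rng.choice(top) for _ in range(k)))
    maxima.sort()
    return maxima[int(0.025 * B)], maxima[int(0.975 * B)]

rng = random.Random(7)
data = [rng.paretovariate(3) for _ in range(1000)]   # heavy-tailed toy sample
lo, hi = subset_bootstrap_max(data, m=50)
```

The computational saving is the point of the paper: each replicate touches at most m values instead of the full (possibly enormous) gridded sample.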
Benchmarking - a validation of UTDefect
International Nuclear Information System (INIS)
Niklasson, Jonas; Bostroem, Anders; Wirdelius, Haakan
2006-06-01
New and stronger demands on reliability of used NDE/NDT procedures and methods have stimulated the development of simulation tools of NDT. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies and in the qualification of procedures and personnel. The traditional way of qualifying a procedure is to generate a technical justification by employing experimental verification of the chosen technique. The manufacturing of test pieces is often very expensive and time consuming. It also tends to introduce a number of possible misalignments between the actual NDT situation and the proposed experimental simulation. The UTDefect computer code (SUNDT/simSUNDT) has been developed, together with the Dept. of Mechanics at Chalmers Univ. of Technology, during a decade and simulates the entire ultrasonic testing situation. A thoroughly validated model has the ability to be an alternative and a complement to the experimental work in order to reduce the extensive cost. The validation can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. This project addresses the last alternative but provides an opportunity to, in a later stage, compare with other software when all data are made public and available. The comparison has been with experimental data from an international benchmark study initiated by the World Federation of NDE Centers. The experiments have been conducted with planar and spherically focused immersion transducers. The defects considered are side-drilled holes, flat-bottomed holes, and a spherical cavity. The data from the experiments are a reference signal used for calibration (the signal from the front surface of the test block at normal incidence) and the raw output from the scattering experiment. In all, more than forty cases have been compared. The agreement between UTDefect and the experiments was in general good (deviation less than 2 dB) when the
International Nuclear Information System (INIS)
Soto, R; Wu, Ch. H; Bubela, A M
1999-01-01
This work introduces a novel methodology to improve reservoir characterization models. In this methodology we integrated multivariate statistical analyses and neural network models for forecasting the infill drilling ultimate oil recovery from reservoirs in San Andres and Clearfork carbonate formations in west Texas. Development of the oil recovery forecast models helps us to understand the relative importance of dominant reservoir characteristics and operational variables, reproduce recoveries for units included in the database, forecast recoveries for possible new units in similar geological settings, and make operational (infill drilling) decisions. The variety of applications demands the creation of multiple recovery forecast models. We have developed intelligent software (Soto, 1998), oilfield intelligence (OI), as an engineering tool to improve the characterization of oil and gas reservoirs. OI integrates neural networks and multivariate statistical analysis. It is composed of five main subsystems: data input, preprocessing, architecture design, graphic design, and inference engine modules. One of the challenges in this research was to identify the dominant and the optimum number of independent variables. The variables include porosity, permeability, water saturation, depth, area, net thickness, gross thickness, formation volume factor, pressure, viscosity, API gravity, number of wells in initial water flooding, number of wells for primary recovery, number of infill wells over the initial water flooding, PRUR, IWUR, and IDUR. Multivariate principal component analysis is used to identify the dominant and the optimum number of independent variables. We compared the results from neural network models with the non-parametric approach. The advantage of the non-parametric regression is that it is easy to use. The disadvantage is that it retains a large variance of forecast results for a particular data set. We also used neural network concepts to develop recovery
Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu
2016-01-01
A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data, compared to the historical average, and, in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
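The stability check in Steps 2-3 of the model can be sketched with a simple control-chart rule. The 3-sigma limits used here are one common stability test, not necessarily the exact one the authors applied, and the monthly HAI rates are invented.

```python
# Check a performance indicator for special-cause variation before
# comparing it to a benchmark: a point outside mean +/- 3*sigma signals
# a special cause, so the process is not yet ready for benchmarking.
from statistics import mean, stdev

def is_stable(rates):
    """True if every point lies within the 3-sigma control limits."""
    center, sigma = mean(rates), stdev(rates)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    return all(lcl <= r <= ucl for r in rates)

monthly_hai = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7, 2.0, 2.1, 1.8, 2.2]
stable = is_stable(monthly_hai)   # benchmark (Steps 4-5) only if True
```

An out-of-control month (say a rate of 8.0) would make `is_stable` return False, routing the process to an action plan rather than to benchmarking, mirroring the branch in Step 3.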
Directory of Open Access Journals (Sweden)
Jahn, Franziska
2015-08-01
Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance, and efficiency against those of other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. The costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance, and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.
Raising Quality and Achievement. A College Guide to Benchmarking.
Owen, Jane
This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…
Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies
Energy Technology Data Exchange (ETDEWEB)
Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2017-05-23
This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.
Benchmarks: The Development of a New Approach to Student Evaluation.
Larter, Sylvia
The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…
Storage-Intensive Supercomputing Benchmark Study
Energy Technology Data Exchange (ETDEWEB)
Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A
2007-10-30
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
Sadovskii, Michael V
2012-01-01
This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics, all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity, and the modern theory of critical phenomena. Beyond that, attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.
Full sphere hydrodynamic and dynamo benchmarks
Marti, P.
2014-01-26
Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.
Goodman, Joseph W
2015-01-01
This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems. The book covers a variety of statistical problems in optics, including both theory and applications. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced.
Energy Technology Data Exchange (ETDEWEB)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
2017-05-15
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
International Nuclear Information System (INIS)
Eliazar, Iddo
2017-01-01
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
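A numerical sketch of the harmonic Poisson process reviewed above: a Poisson process on the positive half-line with intensity c/x. Over a window [a, b] the integrated intensity is c·ln(b/a), and inverting it maps a uniform U to the point a·(b/a)^U. The parameter values below are illustrative assumptions.

```python
# Sample one realization of a Poisson process with harmonic intensity c/x
# on the window [a, b], by inversion of the integrated intensity.
import math
import random

def harmonic_poisson(a, b, c=1.0, seed=0):
    rng = random.Random(seed)
    lam = c * math.log(b / a)            # expected number of points on [a, b]
    # Poisson(lam) count via Knuth's multiplication method (fine for small lam)
    n, p, threshold = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        n += 1
    # each point: invert Lambda(x) = c*ln(x/a) at a uniform height
    return sorted(a * (b / a) ** rng.random() for _ in range(n))

pts = harmonic_poisson(1.0, math.e ** 4)   # about 4 points expected
```

The scale invariance mentioned in the abstract is visible here: the expected count on [a, s·a] is c·ln(s), independent of a.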
Szulc, Stefan
1965-01-01
Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then
Benchmarking – A tool for judgment or improvement?
DEFF Research Database (Denmark)
Rasmussen, Grane Mikael Gregaard
2010-01-01
Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in that sector. Many different perceptions of benchmarking, and of the nature of the construction sector, lead to uncertainty in how to perceive and use benchmarking, and hence to uncertainty in understanding its effects. Two perceptions of benchmarking are presented: public benchmarking and best-practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind of benchmarking.
The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example
Steyn, H. J.
2015-01-01
Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…
Criteria of benchmark selection for efficient flexible multibody system formalisms
Directory of Open Access Journals (Sweden)
Valášek M.
2007-10-01
Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized and the purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria for benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.
Toxicological benchmarks for wildlife: 1994 Revision
International Nuclear Information System (INIS)
Opresko, D.M.; Sample, B.E.; Suter, G.W. II.
1994-09-01
The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report
A simplified 2D HTTR benchmark problem
International Nuclear Information System (INIS)
Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.
2009-01-01
To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole core configurations. In this paper we have created a numerical benchmark problem in a 2D configuration typical of a high temperature gas cooled prismatic core. This problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived by simplifications that remove the unnecessary details while retaining the heterogeneity and major physics properties from the neutronics viewpoint. Also included here is a six-group material (macroscopic) cross section library for the benchmark problem. This library was generated using the lattice depletion code HELIOS. Using this library, benchmark quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and absorption rate in absorbers (burnable poison and control rods). (authors)
Petocz, Peter; Sowey, Eric
2012-01-01
The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…
Petocz, Peter; Sowey, Eric
2008-01-01
In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…
Glaz, Joseph
2009-01-01
Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.
Lyons, L.
2016-01-01
Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.
Nick, Todd G
2007-01-01
Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
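For a quantitative variable, the measures of location and spread discussed above reduce to a few lines with Python's standard statistics module (the sample values here are invented for illustration, not taken from the chapter):

```python
import statistics as st

# Hypothetical sample of a quantitative variable (e.g. systolic blood pressure, mmHg)
data = [118, 121, 122, 125, 128, 130, 135, 140]

mean = st.mean(data)      # measure of location, sensitive to outliers
median = st.median(data)  # robust measure of location
sd = st.stdev(data)       # measure of spread: sample standard deviation
print(mean, median, sd)   # mean = 127.375, median = 126.5
```

Comparing the mean and median is a quick check for skew: when they diverge noticeably, a transformation or a robust summary may be warranted, as the chapter notes.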
Managment of electronic services of the Serbian postal system by benchmarking index
Directory of Open Access Journals (Sweden)
Dupljanin Đorđije D.
2014-01-01
Full Text Available The aim of this paper is an analysis of the state of electronic services within the postal system in Serbia, in order to improve the management of these services. As an indicator of the state of development, a benchmarking index of the development of postal e-services can be used. The benchmarking index is created by the PCA (Principal Component Analysis) method in the statistical software Minitab, which supports PCA. The development of the electronic economy pushes the postal system to improve its electronic services. In conclusion, decision makers and management experts have to define an appropriate strategy for e-services.
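The paper builds its benchmarking index with PCA in Minitab; the same computation can be sketched in a few lines of NumPy. The indicator values below are hypothetical, not the paper's data: each unit (e.g. a postal operator) is scored by its projection onto the first principal component of the standardized indicators.

```python
import numpy as np

# Hypothetical e-service indicators for five units
# (rows = units, columns = indicators, e.g. e-transactions, tracking use, e-payments)
X = np.array([
    [120.0, 30.0,  5.0],
    [200.0, 55.0,  9.0],
    [ 90.0, 20.0,  3.0],
    [310.0, 80.0, 14.0],
    [150.0, 40.0,  6.0],
])

# Standardize each indicator to zero mean, unit variance
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# First principal component = top eigenvector of the correlation matrix
corr = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
pc1 = eigvecs[:, np.argmax(eigvals)]
if pc1.sum() < 0:           # fix the arbitrary eigenvector sign
    pc1 = -pc1

# Benchmarking index = projection of each unit onto the first PC
index = Z @ pc1
ranking = np.argsort(index)[::-1]  # units ordered best to worst
print(index, ranking)
```

The sign fix matters: eigenvectors are defined only up to sign, so without it the "best" unit could come out with the lowest score.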
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most
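SOCR Analyses itself is a Java toolkit, but the parametric and non-parametric comparisons it lists have direct SciPy equivalents, which makes the parametric/non-parametric pairing easy to demonstrate (the two samples below are invented for illustration):

```python
from scipy import stats

# Two hypothetical treatment groups (illustrative data, not SOCR data)
a = [12.1, 14.3, 13.8, 15.0, 12.9, 14.7]
b = [11.0, 12.4, 11.8, 13.1, 12.0, 11.5]

t_stat, t_p = stats.ttest_ind(a, b)  # parametric: two-sample t-test
w_stat, w_p = stats.ranksums(a, b)   # non-parametric: Wilcoxon rank-sum test
h_stat, h_p = stats.kruskal(a, b)    # non-parametric: Kruskal-Wallis test
print(f"t-test p={t_p:.4f}, rank-sum p={w_p:.4f}, Kruskal-Wallis p={h_p:.4f}")
```

With well-separated groups like these, the parametric and non-parametric tests agree; they diverge mainly when normality assumptions fail or outliers dominate, which is precisely when the non-parametric category earns its place in the toolkit.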
Benchmark calculations of power distribution within assemblies
International Nuclear Information System (INIS)
Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.
1994-09-01
The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed 5 'core' configurations including different assembly types (17 x 17 pins, 'uranium', 'absorber' or 'MOX' assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin by pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT, J.P. WEST and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparisons of all the results received
ZZ WPPR, Pu Recycling Benchmark Results
International Nuclear Information System (INIS)
Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc
2002-01-01
Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast Plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables relative to the PWR Mox pin cell benchmark, representing typical fuel for plutonium recycling, one corresponding to a first cycle, the second for a fifth cycle. These computer readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 are the results from cycle 1 of the multiple recycling benchmarks
Interior beam searchlight semi-analytical benchmark
International Nuclear Information System (INIS)
Ganapol, Barry D.; Kornreich, Drew E.
2008-01-01
Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)
The national hydrologic bench-mark network
Cobb, Ernest D.; Biesecker, J.E.
1971-01-01
The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.
Confidential benchmarking based on multiparty computation
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt
We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping…
Benchmark referencing of neutron dosimetry measurements
International Nuclear Information System (INIS)
Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.
1980-01-01
The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes
MIPS bacterial genomes functional annotation benchmark dataset.
Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner
2005-05-15
Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality data (as benchmark) as well as tedious preparatory work to generate sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab
Benchmarking criticality safety calculations with subcritical experiments
International Nuclear Information System (INIS)
Mihalczo, J.T.
1984-06-01
Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments
A Benchmarking System for Domestic Water Use
Directory of Open Access Journals (Sweden)
Dexter V. L. Hunt
2014-05-01
Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m² to 100 m²); garden size (25 m² to 100 m²); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are made throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
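A band-rating benchmark of this kind amounts to a threshold lookup on a consumption figure. The band boundaries below are hypothetical placeholders, not the paper's calibrated values, which depend on occupancy, technology and behaviour assumptions:

```python
def water_use_band(litres_per_person_per_day):
    """Assign an illustrative benchmark band from daily per-capita water use.

    Thresholds are invented for illustration; a real scheme would calibrate
    them against occupancy, technology and local supply conditions.
    """
    bands = [(80, "A"), (100, "B"), (120, "C"), (150, "D")]
    for limit, band in bands:
        if litres_per_person_per_day <= limit:
            return band
    return "E"  # above the worst explicit threshold

print(water_use_band(95))   # -> "B"
```

The appeal of a band rating over a raw number is the same as in energy labelling: any intervention that reduces consumption, whether behavioural or technological (or a localised supply such as RWH/GW), moves the household toward a better band.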
Blakemore, J S
1962-01-01
Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co
Wannier, Gregory Hugh
1966-01-01
Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
Toxicological benchmarks for wildlife: 1996 Revision
International Nuclear Information System (INIS)
Sample, B.E.; Opresko, D.M.; Suter, G.W., II.
1996-06-01
The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets
Benchmarking af kommunernes førtidspensionspraksis [Benchmarking of the municipalities' disability pension practice]
DEFF Research Database (Denmark)
Gregersen, Ole
Each year, the National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. Alongside the annual statistics, results are published from a benchmarking model in which the number of awards in each municipality is compared with the expected number of awards had the municipality followed the same decision practice as the "average municipality", correcting for the social structure of the municipality. The benchmarking model used to date is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a…
Benchmark calculations for fusion blanket development
International Nuclear Information System (INIS)
Sawan, M.E.; Cheng, E.T.
1985-01-01
Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the TBR to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li17Pb83 blankets
Benchmark calculations for fusion blanket development
International Nuclear Information System (INIS)
Sawan, M.L.; Cheng, E.T.
1986-01-01
Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)
Reactor group constants and benchmark test
Energy Technology Data Exchange (ETDEWEB)
Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2001-08-01
The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various type reactors and assessing applicability for nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called Benchmark Testing. In the nuclear calculations, the diffusion and transport codes use the group constant library which is generated by processing the nuclear data files. In this paper, the calculation methods of the reactor group constants and benchmark test are described. Finally, a new group constants scheme is proposed. (author)
Benchmarking Academic Anatomic Pathologists: The Association of Pathology Chairs Survey.
Ducatman, Barbara S; Parslow, Tristram
2016-01-01
The most common benchmarks for faculty productivity are derived from Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization's methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical "full-time faculty" (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs-reported median work relative value unit productivity
Energy Technology Data Exchange (ETDEWEB)
Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-08
In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.
Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...
U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Data about the usage of the WPRDC site and its various datasets, obtained by combining Google Analytics statistics with information from the WPRDC's data portal.
Serdobolskii, Vadim Ivanovich
2007-01-01
This monograph presents mathematical theory of statistical models described by the essentially large number of unknown parameters, comparable with sample size but can also be much larger. In this meaning, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach in which sample size increases along with the number of unknown parameters.This theory opens a way for solution of central problems of multivariate statistics, which up until now have not been solved. Traditional statistical methods based on the idea of an infinite sampling often break down in the solution of real problems, and, dependent on data, can be inefficient, unstable and even not applicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope the will find a satisfactory solution.Mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...
Gonorrhea Statistics (CDC, Sexually Transmitted Diseases) ...
DEFF Research Database (Denmark)
Tryggestad, Kjell
2004-01-01
The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political Arithmetic and more recent work within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...
PRIS-STATISTICS: Power Reactor Information System Statistical Reports. User's Manual
International Nuclear Information System (INIS)
2013-01-01
The IAEA developed the Power Reactor Information System (PRIS)-Statistics application to assist PRIS end users with generating statistical reports from PRIS data. Statistical reports provide an overview of the status, specification and performance results of every nuclear power reactor in the world. This user's manual was prepared to facilitate the use of the PRIS-Statistics application and to provide guidelines and detailed information for each report in the application. Statistical reports support analyses of nuclear power development and strategies, and the evaluation of nuclear power plant performance. The PRIS database can be used for comprehensive trend analyses and benchmarking against best performers and industrial standards.
MacKenzie, Dana
2004-01-01
The drawbacks of using 19th-century mathematics in physics and astronomy are illustrated. To continue expanding our knowledge of the cosmos, scientists will have to come to terms with modern statistics. Some researchers have deliberately started importing techniques that are used in medical research. However, physicists need to identify the brand of statistics that will be suitable for them, and make a choice between the Bayesian and the frequentist approaches. (Edited abstract).
Mitchell, L
1996-01-01
The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
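The figures quoted above can be checked with a few lines of arithmetic (all numbers taken from the abstract itself):

```python
# Turnaround times between surgical cases, in minutes (from the study).
benchmark = 13.5   # 1992 national best-practices benchmark
before = 19.9      # St. Joseph's Medical Center, initial
after = 16.3       # after the quality improvement team's solutions

improvement = (before - after) / before    # relative improvement
remaining_gap = after - benchmark          # still above the national benchmark
print(f"improvement: {improvement:.0%}, gap to benchmark: {remaining_gap:.1f} min")
# → improvement: 18%, gap to benchmark: 2.8 min
```

The 18% figure in the abstract is consistent with the before/after times; the site remained 2.8 minutes above the national benchmark.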
The Global Benchmarking as a Method of Countering the Intellectual Migration in Ukraine
Directory of Open Access Journals (Sweden)
Striy Lyubov A.
2017-05-01
The publication is aimed at studying global benchmarking as a method of countering the intellectual migration in Ukraine. The article explores the process of intellectual migration in Ukraine; analyzes the current status of the country in the light of the crisis and the problems that have arisen; provides statistical data on the migration process and determines a method of countering it; considers the types of benchmarking; analyzes the benchmarking method as a way of achieving the objective; and determines the benefits to be derived from this method, as well as the «bottlenecks» in the State's process of regulating migratory flows, so as not only to call attention to them but also to take corrective action.
Streamflow characteristics at hydrologic bench-mark stations
Lawrence, C.L.
1987-01-01
The Hydrologic Bench-Mark Network was established in the 1960's. Its objectives were to document the hydrologic characteristics of representative undeveloped watersheds nationwide and to provide a comparative base for studying the effects of man on the hydrologic environment. The network, which consists of 57 streamflow gaging stations and one lake-stage station in 39 States, is planned for permanent operation. This interim report describes streamflow characteristics at each bench-mark site and identifies time trends in annual streamflow that have occurred during the data-collection period. The streamflow characteristics presented for each streamflow station are (1) flood and low-flow frequencies, (2) flow duration, (3) annual mean flow, and (4) the serial correlation coefficient for annual mean discharge. In addition, Kendall's tau is computed as an indicator of time trend in annual discharges. The period of record for most stations was 13 to 17 years, although several stations had longer periods of record. The longest period was 65 years for Merced River near Yosemite, Calif. Records of flow at 6 of 57 streamflow sites in the network showed a statistically significant change in annual mean discharge over the period of record, based on computations of Kendall's tau. The values of Kendall's tau ranged from -0.533 to 0.648. An examination of climatological records showed that changes in precipitation were most likely the cause for the change in annual mean discharge.
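Kendall's tau, the trend indicator used above, is straightforward to compute directly. A minimal sketch with synthetic annual-discharge series follows; the data are invented, and ties and significance testing are ignored for brevity.

```python
import numpy as np

def kendalls_tau(x):
    """Kendall's tau of a series against time: a non-parametric trend indicator.
    Values near +1/-1 suggest a monotonic increase/decrease; near 0, no trend."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            if x[j] > x[i]:
                concordant += 1
            elif x[j] < x[i]:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

rng = np.random.default_rng(1)
years = 15                                          # typical record length here
trending = 100 + 3.0 * np.arange(years) + rng.normal(0, 2, years)
stationary = 100 + rng.normal(0, 2, years)
# The trending series yields tau near +1; the stationary one, tau near 0.
```

In practice the computed tau would be compared against its null distribution to decide, as in the report, whether the change in annual mean discharge is statistically significant.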
Benchmarking comparison and validation of MCNP photon interaction data
Directory of Open Access Journals (Sweden)
Colling Bethany
2017-01-01
The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6, and 84p if using MCNP-5.
Higgs pair production: choosing benchmarks with cluster analysis
Energy Technology Data Exchange (ETDEWEB)
Carvalho, Alexandra; Dall’Osso, Martino; Dorigo, Tommaso [Dipartimento di Fisica e Astronomia and INFN, Sezione di Padova,Via Marzolo 8, I-35131 Padova (Italy); Goertz, Florian [CERN,1211 Geneva 23 (Switzerland); Gottardo, Carlo A. [Physikalisches Institut, Universität Bonn,Nussallee 12, 53115 Bonn (Germany); Tosi, Mia [CERN,1211 Geneva 23 (Switzerland)
2016-04-20
New physics theories often depend on a large number of free parameters. The phenomenology they predict for fundamental physics processes is in some cases drastically affected by the precise value of those free parameters, while in other cases it is left basically invariant at the level of detail that is experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics predicted by different models; a clustering algorithm using that metric may allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmarks are then guaranteed to be sensitive to a large area of the parameter space. In this document we show a practical implementation of the above strategy for the study of non-resonant production of Higgs boson pairs in the context of extensions of the standard model with anomalous couplings of the Higgs bosons. A non-standard value of those couplings may significantly enhance the Higgs boson pair-production cross section, such that the process could be detectable with the data that the LHC will collect in Run 2.
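The clustering step described above can be sketched as follows. This is only an illustration: the kinematic summaries and cluster count are invented, and a plain Euclidean k-means stands in for the multi-dimensional test statistic used in the actual analysis.

```python
import numpy as np

# Each row summarizes the final-state kinematics predicted by one point of the
# model's parameter space, e.g. (mean di-Higgs mass, mean Higgs pT) in GeV.
# The numbers are invented for illustration.
rng = np.random.default_rng(42)
kinematics = np.vstack([
    rng.normal([400.0, 150.0], 10.0, size=(20, 2)),   # one kinematic regime
    rng.normal([700.0, 250.0], 10.0, size=(20, 2)),   # a distinct regime
])

def kmeans(points, k, iters=20):
    """Naive k-means: divide the parameter space into homogeneous regions."""
    # Deterministic init: k points spread across the input order.
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == c].mean(axis=0) for c in range(k)])
    return labels, centers

labels, centers = kmeans(kinematics, k=2)
# One benchmark per region: the parameter point closest to the cluster center.
benchmarks = [int(((kinematics - c) ** 2).sum(axis=-1).argmin()) for c in centers]
```

A search designed around the two benchmark points is then representative of every parameter point in the corresponding cluster.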
Benchmarking comparison and validation of MCNP photon interaction data
Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.
2017-09-01
The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6, and 84p if using MCNP-5.
Principles for Developing Benchmark Criteria for Staff Training in Responsible Gambling.
Oehler, Stefan; Banzer, Raphaela; Gruenerbl, Agnes; Malischnig, Doris; Griffiths, Mark D; Haring, Christian
2017-03-01
One approach to minimizing the negative consequences of excessive gambling is staff training to reduce the rate at which new cases of harm or disorder develop among customers. The primary goal of the present study was to assess suitable benchmark criteria for the training of gambling employees at casinos and lottery retailers. The study utilised the Delphi Method, a survey with one qualitative and two quantitative phases. A total of 21 invited international experts in the responsible gambling field participated in all three phases. A total of 75 performance indicators were outlined and assigned to six categories: (1) criteria of content, (2) modelling, (3) qualification of trainer, (4) framework conditions, (5) sustainability and (6) statistical indicators. Nine of the 75 indicators were rated as very important by 90% or more of the experts. Unanimous support for importance was given to indicators such as (1) comprehensibility and (2) concrete action-guidance for handling problem gamblers. Additionally, the study examined the implementation of benchmarking, when it should be conducted, and who should be responsible. Results indicated that benchmarking should be conducted regularly every 1-2 years and that one institution should be clearly defined and primarily responsible for it. The results of the present study provide the basis for developing benchmarks for staff training in responsible gambling.
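The study's 90% endorsement cut-off amounts to a simple filter over the expert ratings. In this sketch the two named indicators come from the abstract, but the vote counts are invented for illustration.

```python
# Number of experts (out of 21) rating each indicator "very important".
# The counts below are hypothetical; only the 90% criterion is from the study.
votes_very_important = {
    "comprehensibility": 21,
    "concrete action-guidance for handling problem gamblers": 21,
    "framework conditions": 15,
}
n_experts = 21
endorsed = [name for name, v in votes_very_important.items()
            if v / n_experts >= 0.90]   # the study's 90% threshold
```

Applied to all 75 indicators with the real ratings, this filter is what yielded the nine endorsed benchmark criteria reported above.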
Benchmarking optimization solvers for structural topology optimization
DEFF Research Database (Denmark)
Rojas Labanda, Susana; Stolpe, Mathias
2015-01-01
solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance...
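Performance profiles, the comparison device mentioned above, are easy to reproduce. A minimal sketch in the Dolan-Moré style follows; the solver timings are invented, not data from the study.

```python
import numpy as np

# For each solver s and problem p, the ratio r[p, s] = t[p, s] / min_s t[p, s].
# The profile rho_s(tau) is the fraction of problems a solver finishes within
# a factor tau of the best solver on that problem. Timings below are invented.
times = np.array([          # rows: problems, columns: solvers
    [1.0, 2.0, 4.0],
    [3.0, 1.5, 1.5],
    [2.0, 2.0, 8.0],
    [5.0, 1.0, 2.0],
])
ratios = times / times.min(axis=1, keepdims=True)

def profile(ratios, solver, tau):
    """Fraction of problems the solver finishes within factor tau of the best."""
    return float(np.mean(ratios[:, solver] <= tau))
```

Plotting `profile` against tau for each solver gives the familiar performance-profile curves: the height at tau = 1 is the share of problems on which a solver is fastest, and the curve's plateau shows its overall robustness across the benchmark library.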