WorldWideScience

Sample records for ranked probability score

  1. Optimization of continuous ranked probability score using PSO

    Directory of Open Access Journals (Sweden)

    Seyedeh Atefeh Mohammadi

    2015-07-01

Full Text Available Weather forecasting has been a major concern in industries such as agriculture, aviation, maritime, tourism and transportation. A good weather prediction may reduce the impact of natural disasters and unexpected events. This paper presents an empirical investigation into predicting weather temperature using the continuous ranked probability score (CRPS). The mean and standard deviation of the normal density function are linear combinations of the components of the ensemble system. The resulting optimization model has been solved using particle swarm optimization (PSO) and the results are compared with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. The preliminary results indicate that the proposed PSO provides better results in terms of the root-mean-square deviation criterion than the alternative BFGS method.
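The CRPS objective described above has a well-known closed form when the forecast density is Gaussian, matching the mean/standard-deviation parameterization in the abstract. A minimal Python sketch (the ensemble values are hypothetical, and the plain sample mean and standard deviation stand in for the paper's optimized linear combination):

```python
import math

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS for a normal forecast N(mu, sigma^2) and observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

# Hypothetical ensemble of temperature forecasts (degrees C)
members = [14.2, 15.1, 13.8, 15.6, 14.9]
mu = sum(members) / len(members)
sigma = (sum((m - mu) ** 2 for m in members) / (len(members) - 1)) ** 0.5

# CRPS against the verifying observation; smaller is better
print(round(crps_gaussian(mu, sigma, 15.0), 4))
```

Lower CRPS rewards forecasts that are both sharp and well-centred, which is why it is a natural objective for the PSO search over combination weights.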

  2. Improving Ranking Using Quantum Probability

    OpenAIRE

    Melucci, Massimo

    2011-01-01

The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability provided a given probability of ...

  3. Quantum probability ranking principle for ligand-based virtual screening.

    Science.gov (United States)

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2017-04-01

Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. Virtual screening tools are widely exploited to enhance the cost effectiveness of lead drug discovery programs by ranking chemical compound databases in decreasing probability of biological activity based upon the probability ranking principle (PRP). In this paper, we developed a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criterion draws an analogy between physical experiments and the molecular structure ranking process for 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criterion in LBVS employs quantum concepts at three levels: first, at the representation level, the model develops a new framework of molecular representation by connecting molecular compounds with a mathematical quantum space; second, it estimates the similarity between chemical libraries and reference structures using a quantum-based similarity searching method; finally, it ranks the molecules using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.

  5. Alternative Class Ranks Using Z-Scores

    Science.gov (United States)

    Brown, Philip H.; Van Niel, Nicholas

    2012-01-01

    Grades at US colleges and universities have increased precipitously over the last 50 years, suggesting that their signalling power has become attenuated. Moreover, average grades have risen disproportionately in some departments, implying that weak students in departments with high grades may obtain better class ranks than strong students in…
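Ranking students by within-department z-scores, as the title suggests, neutralizes department-level grade inflation: a strong student in a harshly graded department is no longer outranked by a weak student in an inflated one. A minimal sketch (names and grades are hypothetical):

```python
from statistics import mean, stdev

# Hypothetical grades; inflation differs between the two departments.
grades = {
    "history": {"alice": 3.9, "bob": 3.7, "cara": 3.8},
    "physics": {"dan": 3.2, "eve": 3.0, "finn": 2.8},
}

def z_ranks(grades):
    """Rank all students by the z-score of their grade within their own department."""
    scores = {}
    for dept, students in grades.items():
        mu, sd = mean(students.values()), stdev(students.values())
        for name, g in students.items():
            scores[name] = (g - mu) / sd
    return sorted(scores, key=scores.get, reverse=True)

print(z_ranks(grades))
```

Raw GPA would place every history student above every physics student; the z-score ranking interleaves them by relative standing instead.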

  6. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.

  7. Essays on probability elicitation scoring rules

    Science.gov (United States)

    Firmino, Paulo Renato A.; dos Santos Neto, Ademir B.

    2012-10-01

In probability elicitation exercises it has been usual to consider scoring rules (SRs) to measure the performance of experts when inferring about a given unknown, Θ, for which the true value, θ*, is (or will shortly be) known to the experimenter. Mathematically, SRs quantify the discrepancy between f(θ) (the distribution reflecting the expert's uncertainty about Θ) and d(θ), a zero-one indicator function of the observation θ*. Thus, a remarkable characteristic of SRs is to contrast the expert's beliefs with the observation θ*. The present work aims at extending SR concepts and formulas to the cases where Θ is aleatory, highlighting the advantages of goodness-of-fit and entropy-like measures. Conceptually, it is argued that besides evaluating the personal performance of the expert, SRs may also play a role when comparing the elicitation processes adopted to obtain f(θ). Mathematically, it is proposed to replace d(θ) by g(θ), the distribution that models the randomness of Θ, and also to consider goodness-of-fit and entropy-like metrics, leading to SRs that measure the adherence of f(θ) to g(θ). The implications of this alternative perspective are discussed and illustrated by means of case studies based on the simulation of controlled experiments. The usefulness of the proposed approach for evaluating the performance of experts and elicitation processes is investigated.
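Using the quadratic (Brier-type) scoring rule as a concrete instance, the contrast between the classical target d(θ) and the proposed aleatory target g(θ) can be sketched as follows (the distributions are hypothetical illustrations, not the paper's case studies):

```python
def quadratic_score(f, target):
    """Quadratic (Brier-type) scoring rule: sum over outcomes of (f - target)^2.
    f and target are probability vectors over the same discrete outcomes."""
    return sum((fi - ti) ** 2 for fi, ti in zip(f, target))

f = [0.2, 0.5, 0.3]        # expert's elicited distribution over three outcomes

# Classical SR: the target is the zero-one indicator d of the realized outcome.
d = [0.0, 1.0, 0.0]
print(quadratic_score(f, d))

# Proposed variant: the target is g, the distribution governing an aleatory quantity,
# so the score measures the adherence of f to g rather than to a single observation.
g = [0.25, 0.45, 0.30]
print(quadratic_score(f, g))
```

A well-calibrated expert can still incur a large score against d on any single draw; against g the score vanishes exactly when f matches the generating distribution.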

  8. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this one can “calibrate” inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk averse agents have...

  9. Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks

    Science.gov (United States)

    Frahm, Klaus M.; Shepelyansky, Dima L.

    2014-04-01

We use the methods of quantum chaos and Random Matrix Theory to analyse statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by the logarithm of the PageRank probability at a given node. After the standard energy level unfolding procedure we establish that the nearest spacing distribution of PageRank probabilities is described by the Poisson law typical of integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French and German. We argue that due to the absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that nearby PageRank probabilities fluctuate as random independent variables.
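The "effective energy levels" in this approach are (negative) logarithms of PageRank probabilities. As a rough illustration, the sketch below computes PageRank by power iteration on a hypothetical four-node graph and derives the energies; the toy graph, damping factor 0.85, and iteration count are illustrative assumptions, not the paper's data:

```python
import math

def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank for a small directed graph given as {node: [out-links]}."""
    nodes = list(links)
    n = len(nodes)
    p = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - d) / n for v in nodes}
        for v in nodes:
            out = links[v]
            if out:
                share = d * p[v] / len(out)
                for w in out:
                    nxt[w] += share
            else:  # dangling node: spread its mass uniformly
                for w in nodes:
                    nxt[w] += d * p[v] / n
        p = nxt
    return p

pr = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]})
# "Effective energies": -log of each PageRank probability, sorted as a spectrum.
energies = sorted(-math.log(x) for x in pr.values())
print(pr, energies)
```

On real networks with millions of nodes, it is the spacings between these sorted energies (after unfolding) whose distribution is compared to the Poisson law.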

  10. The exact probability distribution of the rank product statistics for replicated experiments.

    Science.gov (United States)

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
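For small numbers of genes n and replicates k, the exact null distribution of the rank product can be checked by brute-force enumeration; the paper's number-theoretic derivation is what makes realistic sizes tractable. A sketch under the null that each replicate's rank is uniform on 1..n:

```python
from itertools import product

def rank_product_tail(n, k, threshold):
    """Exact P(rank product <= threshold) for k replicates, each rank uniform on 1..n
    under the null, by brute-force enumeration (feasible only for small n and k)."""
    total = n ** k
    hits = 0
    for ranks in product(range(1, n + 1), repeat=k):
        rp = 1
        for r in ranks:
            rp *= r
        if rp <= threshold:
            hits += 1
    return hits / total

# Tiny example: 3 genes, 2 replicates, tail at rank product <= 2
print(rank_product_tail(3, 2, 2))
```

The enumeration gives the true tail probability, which is exactly the quantity the permutation and gamma approximations estimate imperfectly for small tails.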

  11. Point scoring system to rank traffic calming projects

    Directory of Open Access Journals (Sweden)

    Farzana Rahman

    2016-08-01

Full Text Available The installation of calming measures on a road network is generally planned in a systematic way to reduce driving speeds, but it also reduces the volume of through traffic on local and residential streets. When the demand for traffic calming exceeds city resources, there is a need to prioritize or rank projects. Asian countries such as Japan, Korea and Bangladesh do not have a prioritization system to apply in such cases. The objective of this research is to develop a point ranking system to prioritize traffic calming projects. First, a paired comparison method was employed to obtain residents' opinions about street severity and the need for traffic calming treatment. A binary logistic regression model was developed to identify the factors in selecting streets for traffic calming. This model also provided the weights of the variables used in the point ranking system. The weights in the point ranking system cover vehicle speed, pedestrian generation, sidewalk condition and hourly vehicle volume per width (m) of street. Results suggest that the severity of a street largely depends on the absence of sidewalks, which has a weight of 45%, and a high hourly vehicle volume of traffic per width (m) of street, which has a weight of 38%. These outcomes are significant for improving traffic safety in Japan and other Asian countries.
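A point ranking system of this kind reduces to a weighted sum of normalized factor scores. The sketch below uses the two weights reported in the abstract (45% sidewalk condition, 38% volume per metre of width) and, as a pure assumption, splits the remaining 17% between speed and pedestrian generation; the street data are hypothetical:

```python
# Weights: 45% and 38% from the abstract; the 10%/7% split is an assumption.
WEIGHTS = {"sidewalk": 0.45, "volume_per_m": 0.38, "speed": 0.10, "pedestrians": 0.07}

# Hypothetical streets with factor scores normalized to [0, 1] (1 = most severe).
streets = {
    "elm": {"sidewalk": 1.0, "volume_per_m": 0.6, "speed": 0.4, "pedestrians": 0.7},
    "oak": {"sidewalk": 0.2, "volume_per_m": 0.9, "speed": 0.8, "pedestrians": 0.3},
}

def priority(street):
    """Weighted severity points for one street."""
    return sum(WEIGHTS[k] * street[k] for k in WEIGHTS)

ranked = sorted(streets, key=lambda s: priority(streets[s]), reverse=True)
print(ranked, {s: round(priority(streets[s]), 3) for s in streets})
```

Here the street with no sidewalk outranks the busier one, reflecting the dominant 45% sidewalk weight.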

  12. Correlation between the clinical pretest probability score and the lung ventilation and perfusion scan probability

    OpenAIRE

    Bhoobalan, Shanmugasundaram; Chakravartty, Riddhika; Dolbear, Gill; Al-Janabi, Mazin

    2013-01-01

Purpose: Aim of the study was to determine the accuracy of the clinical pretest probability (PTP) score and its association with the lung ventilation and perfusion (VQ) scan. Materials and Methods: A retrospective analysis of 510 patients who had a lung VQ scan between 2008 and 2010 was performed. Out of the 510 studies, the numbers of normal, low, and high probability VQ scans were 155 (30%), 289 (57%), and 55 (11%), respectively. Results: A total of 103 patients underwent computed tomog...

  13. Comparison of a Class of Rank-Score Tests in Two-Factor Designs ...

    African Journals Online (AJOL)

The empirical Type I error rate and power of these test statistics on the rank scores were determined using Monte Carlo simulation to investigate the robustness of the tests. The results show that there are problems of inflation in the Type I error rate using the asymptotic χ² test for all the rank score functions, especially for small ...

  14. The Publication Ranking Score for pediatric urology: quantifying thought leadership within the subspecialty.

    Science.gov (United States)

    Lloyd, Jessica C; Madden-Fuentes, Ramiro J; Nelson, Caleb P; Kokorowski, Paul J; Wiener, John S; Ross, Sherry S; Kutikov, Alexander; Routh, Jonathan C

    2013-12-01

    Clinical care parameters are frequently assessed by national ranking systems. However, these rankings do little to comment on institutions' academic contributions. The Publication Ranking Score (PRS) was developed to allow for objective comparisons of scientific thought-leadership at various pediatric urology institutions. Faculty lists were compiled for each of the US News & World Report (USNWR) top-50 pediatric urology hospitals. A list of all faculty publications (2006-2011) was then compiled, after adjusting for journal impact factor, and summed to derive a Publication Ranking Score (PRS). PRS rankings were then compared to the USNWR pediatric urology top-50 hospital list. A total of 1811 publications were indexed. PRS rankings resulted in a mean change in rank of 12 positions, compared to USNWR ranks. Of the top-10 USNWR hospitals, only 4 were ranked in the top-10 by the PRS. There was little correlation between the USNWR and PRS ranks for either top-10 (r = 0.42, p = 0.23) or top-50 (r = 0.48, p = 0.0004) hospitals. PRS institutional ranking differs significantly from the USNWR top-50 hospital list in pediatric urology. While not a replacement, we believe the PRS to be a useful adjunct to the USNWR rankings of pediatric urology hospitals. Copyright © 2013 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  15. Covariate-adjusted Spearman's rank correlation with probability-scale residuals.

    Science.gov (United States)

    Liu, Qi; Li, Chun; Wanga, Valentine; Shepherd, Bryan E

    2017-11-13

    It is desirable to adjust Spearman's rank correlation for covariates, yet existing approaches have limitations. For example, the traditionally defined partial Spearman's correlation does not have a sensible population parameter, and the conditional Spearman's correlation defined with copulas cannot be easily generalized to discrete variables. We define population parameters for both partial and conditional Spearman's correlation through concordance-discordance probabilities. The definitions are natural extensions of Spearman's rank correlation in the presence of covariates and are general for any orderable random variables. We show that they can be neatly expressed using probability-scale residuals (PSRs). This connection allows us to derive simple estimators. Our partial estimator for Spearman's correlation between X and Y adjusted for Z is the correlation of PSRs from models of X on Z and of Y on Z, which is analogous to the partial Pearson's correlation derived as the correlation of observed-minus-expected residuals. Our conditional estimator is the conditional correlation of PSRs. We describe estimation and inference, and highlight the use of semiparametric cumulative probability models, which allow preservation of the rank-based nature of Spearman's correlation. We conduct simulations to evaluate the performance of our estimators and compare them with other popular measures of association, demonstrating their robustness and efficiency. We illustrate our method in two applications, a biomarker study and a large survey. © 2017, The International Biometric Society.

  16. Mortality Probability Model III and Simplified Acute Physiology Score II

    Science.gov (United States)

    Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams

    2009-01-01

Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IV-recal) [R2 = 0.422], mortality probability model III at zero hours (MPM0 III) [R2 = 0.279], and simplified acute physiology score (SAPS II) [R2 = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IV-recal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210

  17. Ranking of microRNA target prediction scores by Pareto front analysis.

    Science.gov (United States)

    Sahoo, Sudhakar; Albrecht, Andreas A

    2010-12-01

    Over the past ten years, a variety of microRNA target prediction methods has been developed, and many of the methods are constantly improved and adapted to recent insights into miRNA-mRNA interactions. In a typical scenario, different methods return different rankings of putative targets, even if the ranking is reduced to selected mRNAs that are related to a specific disease or cell type. For the experimental validation it is then difficult to decide in which order to process the predicted miRNA-mRNA bindings, since each validation is a laborious task and therefore only a limited number of mRNAs can be analysed. We propose a new ranking scheme that combines ranked predictions from several methods and - unlike standard thresholding methods - utilises the concept of Pareto fronts as defined in multi-objective optimisation. In the present study, we attempt a proof of concept by applying the new ranking scheme to hsa-miR-21, hsa-miR-125b, and hsa-miR-373 and prediction scores supplied by PITA and RNAhybrid. The scores are interpreted as a two-objective optimisation problem, and the elements of the Pareto front are ranked by the STarMir score with a subsequent re-calculation of the Pareto front after removal of the top-ranked mRNA from the basic set of prediction scores. The method is evaluated on validated targets of the three miRNA, and the ranking is compared to scores from DIANA-microT and TargetScan. We observed that the new ranking method performs well and consistent, and the first validated targets are elements of Pareto fronts at a relatively early stage of the recurrent procedure, which encourages further research towards a higher-dimensional analysis of Pareto fronts. Copyright © 2010 Elsevier Ltd. All rights reserved.
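The core of the scheme above is the Pareto front over two prediction scores: the candidate mRNAs not dominated in both objectives at once. A minimal sketch of extracting one front (the score pairs are hypothetical stand-ins for PITA- and RNAhybrid-style values, both oriented so that higher is better):

```python
def pareto_front(points):
    """Indices of points not dominated by any other point
    (two objectives, higher is better in both)."""
    front = []
    for i, (a1, b1) in enumerate(points):
        dominated = any(
            (a2 >= a1 and b2 > b1) or (a2 > a1 and b2 >= b1)
            for j, (a2, b2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical (method-1 score, method-2 score) pairs for candidate mRNAs.
scores = [(0.9, 0.2), (0.7, 0.7), (0.3, 0.9), (0.5, 0.5), (0.2, 0.1)]
print(pareto_front(scores))
```

The recurrent procedure described in the abstract would then rank the front members (there by the STarMir score), remove the top-ranked mRNA, and recompute the front on the remainder.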

  18. The ranking probability approach and its usage in design and analysis of large-scale studies.

    Science.gov (United States)

    Kuo, Chia-Ling; Zaykin, Dmitri

    2013-01-01

In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal [Formula: see text]-level such as 0.05 is adjusted by the number of tests, [Formula: see text], i.e., as 0.05/[Formula: see text]. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed [Formula: see text]-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability [Formula: see text] is controlled, defined as the probability of making at least [Formula: see text] correct rejections while rejecting hypotheses with the [Formula: see text] smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., [Formula: see text]) is equal to the power at the level [Formula: see text], to a very good approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced, for convenience, by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when [Formula: see text] is very large and [Formula: see text] is small. Precision is largely restored when three values with their respective abundances are used instead of a single typical effect size value.

  19. PROBABILITY CALIBRATION BY THE MINIMUM AND MAXIMUM PROBABILITY SCORES IN ONE-CLASS BAYES LEARNING FOR ANOMALY DETECTION

    Data.gov (United States)

    National Aeronautics and Space Administration — PROBABILITY CALIBRATION BY THE MINIMUM AND MAXIMUM PROBABILITY SCORES IN ONE-CLASS BAYES LEARNING FOR ANOMALY DETECTION GUICHONG LI, NATHALIE JAPKOWICZ, IAN HOFFMAN,...

  20. How different from random are docking predictions when ranked by scoring functions?

    DEFF Research Database (Denmark)

    Feliu, Elisenda; Oliva, Baldomero

    2010-01-01

Docking algorithms predict the structure of protein-protein interactions. They sample the orientation of two unbound proteins to produce various predictions about their interactions, followed by a scoring step to rank the predictions. We present a statistical assessment of scoring functions used … to rank near-native orientations, applying our statistical analysis to a benchmark dataset of decoys of protein-protein complexes and assessing the statistical significance of the outcome in the Critical Assessment of PRedicted Interactions (CAPRI) scoring experiment. A P value was assigned that depended … functions results merely from random choice. This analysis reveals that changes should be made in the design of the CAPRI scoring experiment. We propose including the statistical assessment in this experiment either at the preprocessing or the evaluation step.

  1. Use of recommended score chart and ranking of clinical features in ...

    African Journals Online (AJOL)

    Background: The diagnosis of childhood pulmonary tuberculosis among medical doctors has presented serious challenge in tuberculosis case finding in resource poor settings. Aim of the study: To determine the use of recommended score chart among medical doctors; and to compare the ranking of diagnostic clinical ...

  2. Comparison of a Class of Rank-Score Tests in Two-Factor Designs ...

    African Journals Online (AJOL)

    Department of Mathematics, Usmanu Danfodiyo University, Sokoto ... The empirical Type I error rate and power of these test statistics on the rank scores were ... INTRODUCTION. When analyzing data from a two-factor design, usually a linear model is assumed and the hypotheses are formulated by the parameters of this ...

  3. Meta-server for automatic analysis, scoring and ranking of docking models.

    Science.gov (United States)

    Anashkina, Anastasia A; Kravatsky, Yuri; Kuznetsov, Eugene; Makarov, Alexander A; Adzhubei, Alexei A

    2017-09-18

    Modelling with multiple servers that use different algorithms for docking results in more reliable predictions of interaction sites. However, the scoring and comparison of all models by an expert is time-consuming and is not feasible for large volumes of data generated by such modelling. QASDOM Server (Quality ASsessment of DOcking Models) is a simple and efficient tool for real-time simultaneous analysis, scoring and ranking of datasets of receptor-ligand complexes built by a range of docking techniques. This meta-server is designed to analyse large datasets of docking models and rank them by scoring criteria developed in this study. It produces two types of output showing the likelihood of specific residues and clusters of residues to be involved in receptor-ligand interactions, and the ranking of models. The server also allows visualising residues that form interaction sites in the receptor and ligand sequence, and displays three-dimensional model structures of the receptor-ligand complexes. http://qasdom.eimb.ru. Supplementary data are available at Bioinformatics online.

  4. Quantification of type I error probabilities for heterogeneity LOD scores.

    Science.gov (United States)

    Abreu, Paula C; Hodge, Susan E; Greenberg, David A

    2002-02-01

Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated the type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both the recombination fraction θ and the admixture parameter α, and we compared this with the P values obtained when one maximizes only with respect to θ (i.e., the standard LOD score). We generated datasets of phase-known and -unknown nuclear families, sizes k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model and maximizing the HLOD over θ and α; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution ξ = (1/2)χ²₁ + (1/2)χ²₂. Thus, maximizing the HLOD over θ and α appears to add considerably less than an additional degree of freedom to the associated χ²₁ distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.
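The mixture bound quoted above can be evaluated directly: an HLOD converts to a likelihood-ratio statistic via the standard factor 2 ln(10), and the one-sided mixture (1/2)χ²₁ + (1/2)χ²₂ then gives a P value bound. A sketch (the LOD-to-LRT conversion is the standard one, not something stated in the abstract itself):

```python
import math

def hlod_pvalue(hlod):
    """P value bound for an admixture HLOD from the one-sided mixture
    (1/2) chi^2_1 + (1/2) chi^2_2, with LRT statistic x = 2 ln(10) * HLOD."""
    x = 2.0 * math.log(10.0) * hlod
    p_chi1 = math.erfc(math.sqrt(x / 2.0))   # survival function of chi^2 with 1 df
    p_chi2 = math.exp(-x / 2.0)              # survival function of chi^2 with 2 df
    return 0.5 * p_chi1 + 0.5 * p_chi2

# e.g. the classical HLOD = 3 threshold
print(hlod_pvalue(3.0))
```

Because the simulations show the true type I error lies below this mixture, the bound is conservative for the HLOD maximized over θ and α.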

  5. Relations between Prestige Rankings of Clinical Psychology Doctoral Programs and Scores on the Examination for Professional Practice in Psychology (EPPP)

    Science.gov (United States)

    Townsend, James M.; Ryan, Joseph J.

    2010-01-01

We assessed the relationship between "U.S. News and World Report" 2008 rankings of clinical psychology doctoral programs and scores earned by graduates on the Examination for Professional Practice in Psychology (EPPP). For the top 25 programs, relationship between ranking and EPPP scores was not significant, r_s = -0.28. EPPP scores…

  6. Detecting determinism with improved sensitivity in time series: Rank-based nonlinear predictability score

    Science.gov (United States)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G.

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).

  8. Comparison of the Wells score with the simplified revised Geneva score for assessing pretest probability of pulmonary embolism.

    Science.gov (United States)

    Penaloza, Andrea; Melot, Christian; Motte, Serge

    2011-02-01

The Wells score is widely used in the assessment of pretest probability of pulmonary embolism (PE). The revised Geneva score is a fully standardized clinical decision rule that was recently validated and further simplified. We compared the predictive accuracy of these two scores. Data from 339 patients clinically suspected of PE from two prospective management studies were combined. Pretest probability of PE was assessed prospectively by the Wells score; the simplified revised Geneva score was calculated retrospectively. The predictive accuracy of both scores was compared by the area under the curve (AUC) of receiver operating characteristic (ROC) curves. The overall prevalence of PE was 19%. Prevalence of PE in the low, moderate and high pretest probability groups assessed by the Wells score and by the simplified revised Geneva score was, respectively, 2% (95% confidence interval (CI) 1-6) and 4% (CI 2-10), 28% (CI 22-35) and 25% (CI 20-32), and 93% (CI 70-99) and 56% (CI 27-81). The Wells score performed better than the simplified revised Geneva score in patients with a high suspicion of PE. The AUC of the ROC curve for the Wells score and the simplified revised Geneva score was 0.85 (CI 0.81 to 0.89) and 0.76 (CI 0.71 to 0.80), respectively; the difference between the AUCs was statistically significant (p=0.005). In our population the Wells score appeared to be more accurate than the simplified revised Geneva score. The impact of this finding on patient outcomes should be investigated in a prospective study. Copyright © 2010 Elsevier Ltd. All rights reserved.
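
The AUC comparison above has a simple probabilistic reading: the AUC equals the probability that a randomly chosen PE-positive patient outscores a randomly chosen PE-negative patient (the Mann-Whitney statistic). A small sketch with made-up score values, not the study's data:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive case outscores a randomly chosen negative one (ties count half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical clinical-score values for patients with and without confirmed PE
pe_scores = [7, 6, 9, 5, 8]
no_pe_scores = [2, 4, 3, 6, 1]
auc = roc_auc(pe_scores, no_pe_scores)
```

An AUC of 0.5 means the score discriminates no better than chance; 1.0 means perfect separation, which is the scale on which 0.85 versus 0.76 is being compared.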

  9. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

Full Text Available Ranking is an important component of the search process, sorting through relevant pages. Since a collection of Web pages carries additional information in the hyperlink structure of the Web, this can be represented as a link score and combined with the content score from the usual information retrieval techniques. In this paper we report our studies on ranking Web pages with a score combined from link analysis, PageRank Scoring, and content analysis, Fourier Domain Scoring. Our experiments use a collection of Web pages related to the subject of Statistics from Wikipedia, with the objective of checking the correctness and evaluating the performance of the combined ranking method. Evaluation of PageRank Scoring shows that the highest score does not always relate to Statistics: because links within Wikipedia articles keep users one click away from more information on any point that has a link attached, topics unrelated to Statistics are likely mentioned frequently in the collection. The combined method shows that giving the link score a proportional weight relative to the content score of Web pages does affect the retrieval results.
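
The link/content combination described above can be sketched as a weighted sum of a PageRank vector and a query-relevance vector. The toy graph, the content scores, and the mixing weight `alpha` below are illustrative assumptions, not the paper's setup (which uses Fourier Domain Scoring for content).

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank on an adjacency matrix (row i -> outgoing links of page i)."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1                          # guard against dangling pages
    trans = adj / out                          # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (trans.T @ r)
    return r

# Toy web of 4 pages; content_score would come from a text-relevance model
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
link_score = pagerank(adj)
content_score = np.array([0.2, 0.9, 0.4, 0.1])  # hypothetical relevance to the query
alpha = 0.3                                      # weight given to the link score
combined = alpha * link_score + (1 - alpha) * content_score
ranking = np.argsort(-combined)                  # page indices, best first
```

Note how the most-linked page (index 2) tops the pure link score, while the combined score lets a highly relevant but less-linked page win, which is exactly the trade-off the paper studies.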

  10. A Family Longevity Selection Score: Ranking Sibships by Their Longevity, Size, and Availability for Study

    DEFF Research Database (Denmark)

    Sebastiani, Paola; Hadley, Evan C; Province, Michael

    2009-01-01

Family studies of exceptional longevity can potentially identify genetic and other factors contributing to long life and healthy aging. Although such studies seek families that are exceptionally long lived, they also need living members who can provide DNA and phenotype information. On the basis of these considerations, the authors developed a metric to rank families for selection into a family study of longevity. Their measure, the family longevity selection score (FLoSS), is the sum of 2 components: 1) an estimated family longevity score built from birth-, gender-, and nation-specific cohort survival ... to characterize exceptional longevity in individuals or families in various types of studies, and correlates well with later-observed longevity.

  11. Easy probability estimation of the diagnosis of early axial spondyloarthritis by summing up scores.

    Science.gov (United States)

    Feldtkeller, Ernst; Rudwaleit, Martin; Zeidler, Henning

    2013-09-01

Several sets of criteria for the diagnosis of axial SpA (including non-radiographic axial spondyloarthritis) have been proposed in the literature, in which scores are attributed to relevant findings and the diagnosis requires a minimal sum of these scores. To quantitatively estimate the probability of axial SpA, multiplying the likelihood ratios of all relevant findings was proposed by Rudwaleit et al. in 2004. The objective of our proposal is to combine the advantages of both, i.e. to estimate the probability by summing up scores instead of multiplying likelihood ratios. An easy way to estimate the probability of axial spondyloarthritis is to use the logarithms of the likelihood ratios as the scores attributed to relevant findings and to use the sum of these scores for the probability estimation. A list of whole-numbered scores for relevant findings is presented, along with the threshold sum values necessary for a definite and for a probable diagnosis of axial SpA, as well as a threshold below which the diagnosis of axial spondyloarthritis can be excluded. In a diagram, the probability of axial spondyloarthritis is given for sum values between these thresholds. The proposed method combines the advantages of both approaches: the easy summing of scores and the quantitative calculation of the diagnostic probability. It also makes it easier to estimate which additional tests are necessary to arrive at a definite diagnosis.
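
The arithmetic behind this proposal is that summing log likelihood ratios is equivalent to multiplying the likelihood ratios themselves, so a sum of scores converts directly into a posttest probability via the odds form of Bayes' theorem. A sketch with invented likelihood ratios (not the published values):

```python
import math

def posttest_probability(pretest_prob, log_lr_scores):
    """Convert a sum of log10 likelihood-ratio scores back into a posttest probability.

    Summing log-LR scores equals multiplying the likelihood ratios:
    posttest odds = pretest odds * 10 ** (sum of log10 LRs).
    """
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * 10 ** sum(log_lr_scores)
    return posttest_odds / (1 + posttest_odds)

# Hypothetical findings with illustrative likelihood ratios (not the paper's table)
finding_lrs = {
    "inflammatory back pain": 3.1,
    "HLA-B27 positive": 9.0,
    "good NSAID response": 5.1,
}
scores = [math.log10(lr) for lr in finding_lrs.values()]
p = posttest_probability(0.05, scores)   # starting from a 5% pretest probability
```

With these made-up numbers, three moderately informative findings lift a 5% pretest probability to roughly 88%, illustrating why whole-numbered log scores with thresholds can stand in for explicit multiplication.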

  12. A new plan-scoring method using normal tissue complication probability for personalized treatment plan decisions in prostate cancer

    Science.gov (United States)

    Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie; Chang, Kyung Hwan

    2018-01-01

The aim of this study was to derive a new plan-scoring index using normal tissue complication probabilities to verify different plans in the selection of personalized treatment. Plans for 12 patients treated with tomotherapy were used to compare scoring for ranking. Dosimetric and biological indexes were analyzed for the plans for a clearly distinguishable group (n = 7) and a similar group (n = 12), using treatment plan verification software that we developed. The quality factor (QF) of our support software for treatment decisions was consistent with the final treatment plan for the clearly distinguishable group (average QF = 1.202, 100% match rate, n = 7), but less so for the similar group (average QF = 1.058, 33% match rate, n = 12). Therefore, we propose a normal tissue complication probability (NTCP) based plan-scoring index for verification of different plans in personalized treatment-plan selection. Scoring using the new QF showed a 100% match rate (average NTCP QF = 1.0420). The NTCP-based QF scoring method was adequate for obtaining biological verification quality and organ-at-risk sparing, using the treatment-planning decision-support software we developed, for prostate cancer.
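
A common way to compute the NTCP values such an index builds on is the Lyman-Kutcher-Burman (LKB) probit model. The sketch below uses LKB with invented organ parameters and an invented aggregate score; the paper's actual QF definition and parameter values are not reproduced here.

```python
import math

def lkb_ntcp(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: standard-normal CDF of (EUD - TD50) / (m * TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def plan_quality_factor(ntcps):
    """Illustrative plan score: higher when the summed complication probabilities
    are lower (a stand-in for the paper's QF, whose definition is not given here)."""
    return 1.0 / (1.0 + sum(ntcps))

# Hypothetical organs at risk: lkb_ntcp(EUD in Gy, TD50 in Gy, slope m) for two rival plans
plan_a = [lkb_ntcp(65.0, 76.9, 0.13), lkb_ntcp(55.0, 80.0, 0.11)]
plan_b = [lkb_ntcp(72.0, 76.9, 0.13), lkb_ntcp(60.0, 80.0, 0.11)]
qf_a = plan_quality_factor(plan_a)   # plan A delivers less dose to normal tissue
qf_b = plan_quality_factor(plan_b)
```

Ranking plans by such a biologically weighted score, rather than by dose metrics alone, is the decision-support idea the abstract describes.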

  13. A Lyme borreliosis diagnosis probability score - no relation with antibiotic treatment response.

    Science.gov (United States)

    Briciu, Violeta T; Flonta, Mirela; Leucuţa, Daniel; Cârstina, Dumitru; Ţăţulescu, Doina F; Lupşe, Mihaela

    2017-05-01

Aims: (1) to describe epidemiological and clinical data of patients presenting with suspicion of Lyme borreliosis (LB); (2) to evaluate a previously published score that classifies patients by the probability of having LB, following up patients' clinical outcome after antibiotherapy. Inclusion criteria: patients with clinical manifestations compatible with LB and positive Borrelia (B.) burgdorferi serology, hospitalized in a Romanian hospital between January 2011 and October 2012. Erythema migrans (EM) or suspicion of Lyme neuroborreliosis (LNB) with lumbar puncture performed for diagnosis. A questionnaire was completed for each patient regarding associated diseases, history of tick bites or EM, and clinical signs/symptoms at admission, at the end of treatment and 3 months later. Two-tier testing (TTT) used an ELISA followed by a Western Blot kit. The patients were classified into groups using the LB probability score and were evaluated by a multidisciplinary team. Antibiotherapy followed guideline recommendations. Sixty-four patients were included, presenting diverse associated comorbidities. Fifty-seven patients had positive TTT; seven had either the ELISA or the Western Blot test positive. No differences in outcome were found between the groups of patients classified as very probable, probable and little probable LB. Instead, a better post-treatment outcome was described in patients with positive TTT. Patients investigated for suspicion of LB present diverse clinical manifestations and comorbidities that complicate differential diagnosis. The LB diagnosis probability score used in our patients did not correlate with the antibiotic treatment response, suggesting that the probability score does not bring any benefit to diagnosis.

  14. Associations between the probabilities of frequency-specific hearing loss and unaided APHAB scores.

    Science.gov (United States)

    Löhler, J; Wollenberg, B; Schlattmann, P; Hoang, N; Schönweiler, R

    2017-03-01

The Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire reports subjective hearing impairment in four typical listening conditions. We investigated the association between the frequency-specific probability of hearing loss and scores from the unaided APHAB (APHABu) to determine whether the APHABu could be useful in the primary diagnosis of hearing loss, in addition to pure-tone and speech audiometry. This retrospective study included database records from 6558 patients (average age 69.0 years). We employed a multivariate generalised linear mixed model to analyse the probabilities of hearing loss (severity range 20-75 dB, evaluated in 5-dB steps), measured at different frequencies (0.5, 1.0, 2.0, 4.0, and 8.0 kHz), for nearly all combinations of APHABu subscale scores (from 20 to 80%, evaluated in steps of 5%). We calculated the probability of hearing loss for 28,561 different combinations of APHABu subscale scores (results available online). In general, the probability of hearing loss was positively associated with the combined APHABu score (i.e. increasing probability with increasing scores), although this association was negative at one frequency (8 kHz). The highest probabilities were for a hearing loss of 45 dB at the 2.0 kHz test frequency, but with a wide spread. We showed that the APHABu subscale scores were associated with the probability of hearing loss measured with audiometry. This information could enrich the expert's evaluation of a subject's hearing loss, and it might help resolve suspicious cases of aggravation. The 0.5 and 8.0 kHz frequencies influenced hearing loss less than the frequencies in between, and 2.0 kHz was most influential for intermediate-degree hearing loss (around 45 dB), which corresponds to the frequency dependence of speech intelligibility measured with speech audiometry.

  15. Anchoring Effects in World University Rankings: Exploring Biases in Reputation Scores

    Science.gov (United States)

    Bowman, Nicholas A.; Bastedo, Michael N.

    2011-01-01

    Despite ongoing debates about their uses and validity, university rankings are a popular means to compare institutions within a country and around the world. Anchoring theory suggests that these rankings may influence assessments of institutional reputation, and this effect may be particularly strong when a new rankings system is introduced. We…

  16. Infection Probability Score, APACHE II and KARNOFSKY scoring systems as predictors of bloodstream infection onset in hematology-oncology patients

    Directory of Open Access Journals (Sweden)

    Terzis Konstantinos

    2010-05-01

Background: Bloodstream Infections (BSIs) in neutropenic patients often cause considerable morbidity and mortality. Therefore, surveillance and early identification of patients at high risk of developing BSIs might be useful for the development of preventive measures. The aim of the current study was to assess the predictive power of three scoring systems, the Infection Probability Score (IPS), APACHE II and the KARNOFSKY score, for the onset of Bloodstream Infections in hematology-oncology patients. Methods: A total of 102 patients who were hospitalized for more than 48 hours in a hematology-oncology department in Athens, Greece between April 1st and October 31st 2007 were included in the study. Data were collected using an anonymous standardized recording form. Source materials included medical records, temperature charts, information from nursing and medical staff, and results of microbiological testing. Patients were followed daily until hospital discharge or death. Results: Among the 102 patients, Bloodstream Infections occurred in 17 (16.6%). The incidence density of Bloodstream Infections was 7.74 per 1,000 patient-days, or 21.99 per 1,000 patient-days at risk. The patients who developed a Bloodstream Infection were mainly females (p = 0.004), with a twofold mean length of hospital stay. Conclusion: Of the three prognostic scoring systems, the Infection Probability Score had the best sensitivity in predicting Bloodstream Infections.
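
Two quantities the abstract relies on are easy to make concrete: incidence density (events per 1,000 patient-days) and a score's sensitivity. The event count below matches the abstract, but the denominators are hypothetical round numbers, since the study's patient-day totals and confusion-matrix counts are not given here.

```python
def incidence_density(events, patient_days, per=1000):
    """Events per `per` patient-days of observation."""
    return events * per / patient_days

def sensitivity(true_pos, false_neg):
    """Fraction of actual BSI cases that a score flagged as high risk."""
    return true_pos / (true_pos + false_neg)

# 17 BSIs as in the study; the denominators below are hypothetical round numbers
rate = incidence_density(events=17, patient_days=2000)
sens = sensitivity(true_pos=14, false_neg=3)
```

Sensitivity is the right comparison metric here because the stated goal is surveillance: missing a true high-risk patient (a false negative) is the costly error.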

  17. Probability of Survival Scores in Different Trauma Registries: A Systematic Review.

    Science.gov (United States)

    Stoica, Bogdan; Paun, Sorin; Tanase, Ioan; Negoi, Ionut; Chiotoroiu, Alexandru; Beuran, Mircea

    2016-01-01

A mixed score to predict the probability of survival has a key role in modern trauma systems. The aim of the current study is to summarize current knowledge about the estimation of survival in major trauma patients in different trauma registries. We performed a systematic review of the literature using electronic searches of the PubMed/Medline, Web of Science Core Collection and EBSCO databases, using as MeSH or truncated words a combination of trauma, "probability of survival" and "mixed scores". The search strategy in PubMed was: "((((trauma(MeSH Major Topic)) OR injury(Title/Abstract)) AND score (Title/Abstract)) AND survival) AND registry (Title/Abstract))))". We selected English-language literature only. There is no consensus between the major trauma registries regarding the estimation of probability of survival in major trauma patients. The scores of the German (RISC II) and United Kingdom (PS Model 14) trauma registries are based on the largest populations, with demographics updated to the present-day European injury pattern. The revised TRISS, resulting from the USA National Trauma Database, seems to be inaccurate for trauma systems managing predominantly blunt injuries. The probability of survival should be evaluated in all major trauma patients, with a score derived from a population that reproduces the current demographics. Only a careful audit of unpredicted deaths can continuously improve our care for severely injured patients.
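
The TRISS family of scores mentioned above is a logistic regression in physiological (RTS), anatomical (ISS) and age terms. The sketch uses the commonly quoted blunt-trauma coefficients for illustration; as the review stresses, registries such as RISC II and PS14 use revised, more elaborate models, so treat these numbers as illustrative.

```python
import math

def triss_ps(rts, iss, age_index, coeffs=(-0.4499, 0.8085, -0.0835, -1.7430)):
    """TRISS-style probability of survival: a logistic model in the Revised Trauma
    Score (RTS), Injury Severity Score (ISS) and an age index (0 if < 55, else 1).
    Coefficients are the commonly quoted blunt-trauma values, shown for illustration."""
    b0, b_rts, b_iss, b_age = coeffs
    x = b0 + b_rts * rts + b_iss * iss + b_age * age_index
    return 1.0 / (1.0 + math.exp(-x))

ps_young_minor = triss_ps(rts=7.84, iss=9, age_index=0)  # normal physiology, minor injuries
ps_old_severe = triss_ps(rts=5.0, iss=40, age_index=1)   # deranged physiology, severe injuries
```

Auditing "unpredicted deaths" then means reviewing fatalities among patients whose computed probability of survival was high.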

  18. Rank-k Maximal Statistics for Divergence and Probability of Misclassification

    Science.gov (United States)

    Decell, H. P., Jr.

    1972-01-01

    A technique is developed for selecting from n-channel multispectral data some k combinations of the n-channels upon which to base a given classification technique so that some measure of the loss of the ability to distinguish between classes, using the compressed k-dimensional data, is minimized. Information loss in compressing the n-channel data to k channels is taken to be the difference in the average interclass divergences (or probability of misclassification) in n-space and in k-space.
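
A minimal version of this channel-selection idea: score each channel by a between-class divergence and keep the k best, so the loss in separability from compressing n channels to k is small. The sketch below uses the symmetric (Jeffreys) divergence between univariate Gaussian class densities as the separability measure; the channel statistics are invented, and the paper's setting is multivariate.

```python
def gaussian_divergence(mu1, var1, mu2, var2):
    """Symmetric (Jeffreys) divergence between two univariate Gaussian class
    densities, a standard separability measure for channel selection."""
    return (0.5 * (var1 / var2 + var2 / var1 - 2.0)
            + 0.5 * (mu1 - mu2) ** 2 * (1.0 / var1 + 1.0 / var2))

# Hypothetical per-channel class statistics: (mu1, var1, mu2, var2)
channels = {
    "ch1": (0.0, 1.0, 0.5, 1.0),   # classes barely separated
    "ch2": (0.0, 1.0, 3.0, 1.0),   # well-separated means
    "ch3": (0.0, 1.0, 1.0, 2.0),   # separated means and variances
}
k = 2
best_k = sorted(channels, key=lambda c: gaussian_divergence(*channels[c]), reverse=True)[:k]
```

Greedy per-channel ranking like this ignores correlations between channels, which is precisely why the paper works with combinations of channels rather than individual ones.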

  19. Where to stop reading a ranked list? Threshold optimization using truncated score distributions

    NARCIS (Netherlands)

    Arampatzis, A.; Kamps, J.; Robertson, S.; Sanderson, M.; Zhai, C.; Zobel, J.; Allan, J.; Aslam, J.A.

    2009-01-01

    Ranked retrieval has a particular disadvantage in comparison with traditional Boolean retrieval: there is no clear cut-off point where to stop consulting results. This is a serious problem in some setups. We investigate and further develop methods to select the rank cut-off value which optimizes a

  20. Score distributions of gapped multiple sequence alignments down to the low-probability tail

    Science.gov (United States)

    Fieth, Pascal; Hartmann, Alexander K.

    2016-08-01

Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments, showing that, contrary to results from earlier simple sampling studies, strong deviations from the Gumbel distribution occur for finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignments differ from the pairwise alignment case. Furthermore, we show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.
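
The Gumbel baseline against which these deviations are measured is easy to state: the p-value of an alignment score s is the Gumbel upper-tail probability. The location and scale values below are invented for illustration; in practice they are fitted per scoring system and sequence length.

```python
import math

def gumbel_sf(s, mu, beta):
    """Upper-tail probability P(S >= s) of a Gumbel distribution, the asymptotic
    score statistic for gapless local alignments (parameters are fitted, not universal)."""
    z = (s - mu) / beta
    return 1.0 - math.exp(-math.exp(-z))

# Illustrative location/scale; real values depend on the scoring system and lengths
p_center = gumbel_sf(20.0, mu=20.0, beta=5.0)  # a typical score is unremarkable
p_tail = gumbel_sf(60.0, mu=20.0, beta=5.0)    # a high score is exponentially rare
```

Because the tail decays doubly exponentially, simple sampling cannot reach probabilities like 10^-160, which is why the rare-event algorithms the abstract mentions are needed to test the Gumbel form out there.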

  1. Assessing clinical probability of pulmonary embolism: prospective validation of the simplified Geneva score.

    Science.gov (United States)

    Robert-Ebadi, H; Mostaguir, K; Hovens, M M; Kare, M; Verschuren, F; Girard, P; Huisman, M V; Moustafa, F; Kamphuisen, P W; Buller, H R; Righini, M; Le Gal, G

    2017-09-01

Essentials: The simplified Geneva score allows easier pretest probability assessment of pulmonary embolism (PE). We prospectively validated this score in the ADJUST-PE management outcome study. The study shows that it is safe to manage patients with suspected PE according to this score. The simplified Geneva score is now ready for use in routine clinical practice. Background: Pretest probability assessment by a clinical prediction rule (CPR) is an important step in the management of patients with suspected pulmonary embolism (PE). A limitation to the use of CPRs is that their constitutive variables and corresponding numbers of points are difficult to memorize. A simplified version of the Geneva score (i.e. attributing one point to each variable) has been proposed but never prospectively validated. Aims: Prospective validation of the simplified Geneva score (SGS) and comparison with the previous version of the Geneva score (GS). Methods: In the ADJUST-PE study, which had the primary aim of validating the age-adjusted D-dimer cut-off, the SGS was prospectively used to determine the pretest probability in a subsample of 1621 study patients. Results: Overall, PE was confirmed in 294 (18.1%) patients. Using the SGS, 608 (37.5%), 980 (60.5%) and 33 (2%) patients were classified as having a low, intermediate and high clinical probability, respectively. Corresponding prevalences of PE were 9.7%, 22.4% and 45.5%; 490 (30.1%) patients with low or intermediate probability had a D-dimer level below 500 μg/L and 653 (41.1%) had a negative D-dimer test according to the age-adjusted cut-off. Using the GS, the figures were 491 (30.9%) and 650 (40.9%). None of the patients considered as not having PE on the basis of a low or intermediate SGS and a negative D-dimer had a recurrent thromboembolic event during the 3-month follow-up. Conclusions: The use of the SGS has similar efficiency and safety to the GS in excluding PE in association with the D-dimer test.

  2. Manifold ranking based scoring system with its application to cardiac arrest prediction: A retrospective study in emergency department patients.

    Science.gov (United States)

    Liu, Tianchi; Lin, Zhiping; Ong, Marcus Eng Hock; Koh, Zhi Xiong; Pek, Pin Pin; Yeo, Yong Kiang; Oh, Beom-Seok; Ho, Andrew Fu Wah; Liu, Nan

    2015-12-01

The recently developed geometric distance scoring system has shown the effectiveness of scoring systems in predicting cardiac arrest within 72 h and the potential to predict other clinical outcomes. However, the geometric distance scoring system predicts scores based only on the local structure embedded in the data, leaving much room for improvement in prediction accuracy. We developed a novel scoring system for predicting cardiac arrest within 72 h. The scoring system was based on a semi-supervised learning algorithm, manifold ranking, which explores both the local and global consistency of the data. System evaluation was conducted on emergency department patients' data, including both vital signs and heart rate variability (HRV) parameters. Comparison of the proposed scoring system with previous work was given in terms of sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). Out of 1025 patients, 52 (5.1%) met the primary outcome. Experimental results show that the proposed scoring system achieved a higher area under the curve (AUC) on both the balanced dataset (0.907 vs. 0.824) and the imbalanced dataset (0.774 vs. 0.734) compared to the geometric distance scoring system. The proposed scoring system improved the prediction accuracy by utilizing the global consistency of the training data. We foresee the potential of extending this scoring system, as well as the manifold ranking algorithm, to other medical decision-making problems. Furthermore, we will investigate the parameter selection process and other techniques to improve performance on the imbalanced dataset. Copyright © 2015 Elsevier Ltd. All rights reserved.
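
The manifold ranking algorithm referenced above (in the form popularized by Zhou et al.) iterates f ← αSf + (1 − α)y, where S is the symmetrically normalized affinity matrix and y marks the labeled/query points. A toy sketch on an invented affinity graph, not the clinical feature data:

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.9, iters=200):
    """Iterate f <- alpha * S f + (1 - alpha) * y, with S the symmetrically
    normalized affinity matrix, spreading the seed labels along the manifold."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.where(d > 0, d, 1.0))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1 - alpha) * y
    return f

# Toy affinity graph: nodes 0-2 form one cluster, 3-4 another; node 0 is the seed
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
f = manifold_ranking(W, y)
```

Scores propagate through the seed's cluster but not to the disconnected one, which is the "global consistency" that a purely local distance score lacks.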

  3. Simplification of the revised Geneva score for assessing clinical probability of pulmonary embolism.

    Science.gov (United States)

    Klok, Frederikus A; Mos, Inge C M; Nijkeuter, Mathilde; Righini, Marc; Perrier, Arnaud; Le Gal, Grégoire; Huisman, Menno V

    2008-10-27

    The revised Geneva score is a fully standardized clinical decision rule (CDR) in the diagnostic workup of patients with suspected pulmonary embolism (PE). The variables of the decision rule have different weights, which could lead to miscalculations in an acute setting. We have validated a simplified version of the revised Geneva score. Data from 1049 patients from 2 large prospective diagnostic trials that included patients with suspected PE were used and combined to validate the simplified revised Geneva score. We constructed the simplified CDR by attributing 1 point to each item of the original CDR and compared the diagnostic accuracy of the 2 versions by a receiver operating characteristic curve analysis. We also assessed the clinical utility of the simplified CDR by evaluating the safety of ruling out PE on the basis of the combination of either a low-intermediate clinical probability (using a 3-level scheme) or a "PE unlikely" assessment (using a dichotomized rule) with a normal result on a highly sensitive D-dimer test. The complete study population had an overall prevalence of venous thromboembolism of 23%. The diagnostic accuracy between the 2 CDRs did not differ (area under the curve for the revised Geneva score was 0.75 [95% confidence interval, 0.71-0.78] vs 0.74 [0.70-0.77] for the simplified revised Geneva score). During 3 months of follow-up, no patient with a combination of either a low (0%; 95% confidence interval, 0.0%-1.7%) or intermediate (0%; 0.0%-2.8%) clinical probability, or a "PE unlikely" assessment (0%; 0.0%-1.2%) with the simplified score and a normal result of a D-dimer test was diagnosed as having venous thromboembolism. This study suggests that simplification of the revised Geneva score does not lead to a decrease in diagnostic accuracy and clinical utility, which should be confirmed in a prospective study.
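
The "one point per variable" simplification validated above is mechanically simple to apply. The sketch below reconstructs the item list and the three-level cut-offs (0-1 low, 2-4 intermediate, ≥5 high) as illustrative assumptions from the published rule; it is not clinical guidance.

```python
# One point per positive item, per the simplified revised Geneva score
# (illustrative reconstruction; a heart rate >= 95/min satisfies both
# heart-rate items and therefore contributes 2 points in total).
SIMPLIFIED_GENEVA_ITEMS = [
    "age > 65 years",
    "previous DVT or PE",
    "surgery or fracture within 1 month",
    "active malignancy",
    "unilateral lower-limb pain",
    "haemoptysis",
    "heart rate 75-94 /min",
    "heart rate >= 95 /min",
    "pain on deep palpation and unilateral oedema",
]

def simplified_geneva(findings):
    """Sum one point per positive item and map the total onto the
    assumed three-level classification (0-1 / 2-4 / >= 5)."""
    score = sum(1 for item in SIMPLIFIED_GENEVA_ITEMS if item in findings)
    if score <= 1:
        level = "low"
    elif score <= 4:
        level = "intermediate"
    else:
        level = "high"
    return score, level

score, level = simplified_geneva({"age > 65 years", "heart rate 75-94 /min"})
```

The study's point is exactly this: equal weights make the rule memorizable and computable at the bedside without sacrificing accuracy relative to the weighted version.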

  4. Probability scores and diagnostic algorithms in pulmonary embolism: are they followed in clinical practice?

    Science.gov (United States)

    Sanjuán, Pilar; Rodríguez-Núñez, Nuria; Rábade, Carlos; Lama, Adriana; Ferreiro, Lucía; González-Barcala, Francisco Javier; Alvarez-Dobaño, José Manuel; Toubes, María Elena; Golpe, Antonio; Valdés, Luis

    2014-05-01

Clinical probability scores (CPS) determine the pretest probability of pulmonary embolism (PE) and assess the need for the tests required in these patients. Our objective was to investigate whether PE is diagnosed according to clinical practice guidelines. We performed a retrospective study of clinically suspected PE in the emergency department between January 2010 and December 2012. A D-dimer value ≥ 500 ng/ml was considered positive. PE was diagnosed on the basis of multislice computed tomography angiography and, to a lesser extent, other imaging techniques. The CPS used was the revised Geneva scoring system. There were 3,924 cases of suspected PE (56% female). Diagnosis was established in 360 patients (9.2%) and the incidence was 30.6 cases per 100,000 inhabitants/year. Sensitivity and negative predictive value of the D-dimer test were 98.7% and 99.2%, respectively. CPS was calculated in only 24 cases (0.6%) and diagnostic algorithms were not followed in 2,125 patients (54.2%): in 682 (17.4%) because clinical probability could not be estimated, and in 482 (37.6%), 852 (46.4%) and 109 (87.9%) with low, intermediate and high clinical probability, respectively, because the diagnostic algorithms for these probabilities were not applied. CPS are rarely calculated in the diagnosis of PE and the diagnostic algorithm is rarely followed in clinical practice. This may result in procedures with potentially significant side effects being performed unnecessarily, or in a high risk of underdiagnosis. Copyright © 2013 SEPAR. Published by Elsevier Espana. All rights reserved.

  5. The Chemistry Scoring Index (CSI): A Hazard-Based Scoring and Ranking Tool for Chemicals and Products Used in the Oil and Gas Industry

    Directory of Open Access Journals (Sweden)

    Tim Verslycke

    2014-06-01

A large portfolio of chemicals and products is needed to meet the wide range of performance requirements of the oil and gas industry. The oil and gas industry is under increased scrutiny from regulators, environmental groups, the public, and other stakeholders for its use of chemicals. In response, industry is increasingly incorporating "greener" products and practices, but is struggling to define and quantify what exactly constitutes "green" in the absence of a universally accepted definition. We recently developed the Chemistry Scoring Index (CSI), which is ultimately intended to be a globally implementable tool that comprehensively scores and ranks hazards to human health, safety, and the environment for products used in oil and gas operations. CSI scores are assigned to products designed for the same use (e.g., surfactants, catalysts) on the basis of product composition as well as the intrinsic hazard properties and data availability of each product component. As such, products with a lower CSI score within a product use group are considered to have a lower intrinsic hazard compared to other products within the same use group. The CSI provides a powerful tool to evaluate relative product hazards, to review and assess product portfolios, and to aid in the formulation of products.

  6. Text mining effectively scores and ranks the literature for improving chemical-gene-disease curation at the comparative toxicogenomics database.

    Science.gov (United States)

    Davis, Allan Peter; Wiegers, Thomas C; Johnson, Robin J; Lay, Jean M; Lennon-Hopkins, Kelley; Saraceni-Richards, Cynthia; Sciaky, Daniela; Murphy, Cynthia Grondin; Mattingly, Carolyn J

    2013-01-01

The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) is a public resource that curates interactions between environmental chemicals and gene products, and their relationships to diseases, as a means of understanding the effects of environmental chemicals on human health. CTD provides a triad of core information in the form of chemical-gene, chemical-disease, and gene-disease interactions that are manually curated from scientific articles. To increase the efficiency, productivity, and data coverage of manual curation, we have leveraged text mining to help rank and prioritize the triaged literature. Here, we describe our text-mining process that computes and assigns each article a document relevancy score (DRS), wherein a high DRS suggests that an article is more likely to be relevant for curation at CTD. We evaluated our process by first text mining a corpus of 14,904 articles triaged for seven heavy metals (cadmium, cobalt, copper, lead, manganese, mercury, and nickel). Based upon initial analysis, a representative subset corpus of 3,583 articles was then selected from the 14,904 articles and sent to five CTD biocurators for review. The resulting curation of these 3,583 articles was analyzed for a variety of parameters, including article relevancy, novel data content, interaction yield rate, mean average precision, and biological and toxicological interpretability. We show that for all measured parameters, the DRS is an effective indicator for scoring and improving the ranking of literature for the curation of chemical-gene-disease information at CTD. Here, we demonstrate how fully incorporating text mining-based DRS scoring into our curation pipeline enhances manual curation by prioritizing more relevant articles, thereby increasing data content, productivity, and efficiency.
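
The triage idea is that any relevancy score inducing a good ranking lets curators work top-down through the literature. The toy below scores documents by weighted term counts and sorts; CTD's actual DRS comes from a trained text-mining pipeline, so the scoring function, weights, and corpus here are invented for illustration.

```python
def document_relevancy_score(text, term_weights):
    """Toy relevancy score: weighted count of curation-relevant terms.
    (Illustrative only; not CTD's actual DRS computation.)"""
    words = text.lower().split()
    return sum(weight * words.count(term) for term, weight in term_weights.items())

weights = {"cadmium": 2.0, "gene": 1.0, "disease": 1.0}
corpus = {
    "a1": "cadmium exposure altered gene expression linked to kidney disease",
    "a2": "survey of laboratory safety procedures",
}
ranked = sorted(corpus, key=lambda k: document_relevancy_score(corpus[k], weights),
                reverse=True)   # article ids, most relevant first
```

Even this crude ranking shows the payoff measured in the study: curators reviewing the top of the list encounter far more curatable interactions per article than under an unranked queue.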

  8. Selecting pesticides for inclusion in drinking water quality guidelines on the basis of detection probability and ranking.

    Science.gov (United States)

    Narita, Kentaro; Matsui, Yoshihiko; Iwao, Kensuke; Kamata, Motoyuki; Matsushita, Taku; Shirasaki, Nobutaka

    2014-02-01

    Pesticides released into the environment may pose both ecological and human health risks. Governments set the regulations and guidelines for the allowable levels of active components of pesticides in various exposure sources, including drinking water. Several pesticide risk indicators have been developed using various methodologies, but such indicators are seldom used for the selection of pesticides to be included in national regulations and guidelines. The aim of the current study was to use risk indicators for the selection of pesticides to be included in regulations and guidelines. Twenty-four risk indicators were created, and a detection rate was defined to judge which indicators were the best for selection. The combination of two indicators (local sales of a pesticide for the purposes of either rice farming or other farming, divided by the guideline value and annual precipitation, and amended with the scores from the physical and chemical properties of the pesticide) gave the highest detection rates. In this case study, this procedure was used to evaluate 134 pesticides that are currently unregulated in the Japanese Drinking Water Quality Guidelines, from which 44 were selected as pesticides to be added to the primary group in the guidelines. The detection probability of the 44 pesticides was more than 72%. Among the 102 pesticides currently in the primary group, 17 were selected for withdrawal from the group. © 2013.

  9. A nonparametric Bayesian method of translating machine learning scores to probabilities in clinical decision support.

    Science.gov (United States)

    Connolly, Brian; Cohen, K Bretonnel; Santel, Daniel; Bayram, Ulya; Pestian, John

    2017-08-07

    Probabilistic assessments of clinical care are essential for quality care. Yet machine learning, which supports this care process, has been limited to categorical results. To maximize its usefulness, it is important to find novel approaches that calibrate the ML output with a likelihood scale. Current state-of-the-art calibration methods are generally accurate and applicable to many ML models, but improved granularity and accuracy of such methods would increase the information available for clinical decision making. This novel non-parametric Bayesian approach is demonstrated on a variety of data sets, including simulated classifier outputs, biomedical data sets from the University of California, Irvine (UCI) Machine Learning Repository, and a clinical data set built to determine suicide risk from the language of emergency department patients. The method is first demonstrated on support-vector machine (SVM) models, which generally produce well-behaved, well-understood scores. The method produces calibrations that are comparable to the state-of-the-art Bayesian Binning in Quantiles (BBQ) method when the SVM models are able to effectively separate cases and controls. However, as the SVM models' ability to discriminate classes decreases, our approach yields more granular and dynamic calibrated probabilities compared with the BBQ method. Improvements in granularity and range are even more dramatic when the discrimination between the classes is artificially degraded by replacing the SVM model with an ad hoc k-means classifier. The method allows both clinicians and patients to have a more nuanced view of the output of an ML model, allowing better decision making. The method is demonstrated on simulated data, various biomedical data sets and a clinical data set, to which diverse ML methods are applied. Trivially extending the method to (non-ML) clinical scores is also discussed.
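
The abstract does not spell out the nonparametric Bayesian calibrator itself, but the underlying idea of translating raw classifier scores into probabilities can be illustrated with a much simpler histogram (binning) calibrator. This is a deliberate stand-in, not the paper's method; the function name and default bin count are illustrative:

```python
def histogram_calibrate(scores, labels, n_bins=10):
    """Turn raw classifier scores into probabilities by binning:
    the calibrated probability for a bin is the observed fraction of
    positive labels among training examples that fall into it.
    A simple stand-in for the paper's nonparametric Bayesian
    calibrator, shown only to illustrate what score calibration does."""
    lo, hi = min(scores), max(scores)
    width = (hi - lo) / n_bins or 1.0  # guard against zero-width range
    counts = [0] * n_bins
    positives = [0] * n_bins
    for s, y in zip(scores, labels):
        b = min(int((s - lo) / width), n_bins - 1)
        counts[b] += 1
        positives[b] += y

    def calibrated(score):
        # Clamp out-of-range scores into the outermost bins.
        b = min(max(int((score - lo) / width), 0), n_bins - 1)
        return positives[b] / counts[b] if counts[b] else 0.5

    return calibrated
```

Finer bins give more granular probabilities (the property the paper targets) but need more data per bin to stay accurate, which is exactly the granularity/accuracy trade-off the abstract describes.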

  10. Comparison of the unstructured clinician gestalt, the wells score, and the revised Geneva score to estimate pretest probability for suspected pulmonary embolism.

    Science.gov (United States)

    Penaloza, Andrea; Verschuren, Franck; Meyer, Guy; Quentin-Georget, Sybille; Soulie, Caroline; Thys, Frédéric; Roy, Pierre-Marie

    2013-08-01

    The assessment of clinical probability (as low, moderate, or high) with clinical decision rules has become a cornerstone of diagnostic strategy for patients with suspected pulmonary embolism, but little is known about the use of physician gestalt assessment of clinical probability. We evaluate the performance of gestalt assessment for diagnosing pulmonary embolism. We conducted a retrospective analysis of a prospective observational cohort of consecutive suspected pulmonary embolism patients in emergency departments. Accuracy of gestalt assessment was compared with the Wells score and the revised Geneva score by the area under the curve (AUC) of receiver operating characteristic curves. Agreement between the 3 methods was determined by κ test. The study population was 1,038 patients, with a pulmonary embolism prevalence of 31.3%. AUC differed significantly between the 3 methods and was 0.81 (95% confidence interval [CI] 0.78 to 0.84) for gestalt assessment, 0.71 (95% CI 0.68 to 0.75) for Wells, and 0.66 (95% CI 0.63 to 0.70) for the revised Geneva score. The proportion of patients categorized as having low clinical probability was statistically higher with gestalt than with revised Geneva score (43% versus 26%; 95% CI for the difference of 17%=13% to 21%). Proportion of patients categorized as having high clinical probability was higher with gestalt than with Wells (24% versus 7%; 95% CI for the difference of 17%=14% to 20%) or revised Geneva score (24% versus 10%; 95% CI for the difference of 15%=13% to 21%). Pulmonary embolism prevalence was significantly lower with gestalt versus clinical decision rules in low clinical probability (7.6% for gestalt versus 13.0% for revised Geneva score and 12.6% for Wells score) and non-high clinical probability groups (18.3% for gestalt versus 29.3% for Wells and 27.4% for revised Geneva score) and was significantly higher with gestalt versus Wells score in high clinical probability groups (72.1% versus 58.1%). Agreement
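
The AUC figures quoted above have a direct probabilistic reading: the AUC is the probability that a randomly chosen patient with pulmonary embolism receives a higher clinical-probability assessment than a randomly chosen patient without. A minimal rank-based sketch of that computation (names are illustrative, not from the study):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via its probabilistic definition:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case, with ties
    counted as one half."""
    wins = sum(
        (p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.81 for gestalt assessment thus means that in 81% of PE-positive/PE-negative patient pairs, the PE-positive patient was assigned the higher clinical probability.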

  11. Comparison of the revised Geneva score with the Wells rule for assessing clinical probability of pulmonary embolism.

    Science.gov (United States)

    Klok, F A; Kruisman, E; Spaan, J; Nijkeuter, M; Righini, M; Aujesky, D; Roy, P M; Perrier, A; Le Gal, G; Huisman, M V

    2008-01-01

    The revised Geneva score, a standardized clinical decision rule in the diagnosis of pulmonary embolism (PE), was recently developed. The Wells clinical decision rule is widely used but lacks full standardization, as it includes subjective clinician judgement. We have compared the performance of the revised Geneva score with the Wells rule, and their usefulness for ruling out PE in combination with D-dimer measurement. In 300 consecutive patients, the clinical probability of PE was assessed prospectively by the Wells rule and retrospectively using the revised Geneva score. Patients comprised a random sample from a single center, participating in a large prospective multicenter diagnostic study. The predictive accuracy of both scores was compared by area under the curve (AUC) of receiver operating characteristic (ROC) curves. The overall prevalence of PE was 16%. The prevalence of PE in the low-probability, intermediate-probability and high-probability categories as classified by the revised Geneva score was similar to that of the original derivation set. The performance of the revised Geneva score as measured by the AUC in a ROC analysis did not differ statistically from the Wells rule. After 3 months of follow-up, no patient classified into the low or intermediate clinical probability category by the revised Geneva score and a normal D-dimer result was subsequently diagnosed with acute venous thromboembolism. This study suggests that the performance of the revised Geneva score is equivalent to that of the Wells rule. In addition, it seems safe to exclude PE in patients by the combination of a low or intermediate clinical probability by the revised Geneva score and a normal D-dimer level. Prospective clinical outcome studies are needed to confirm this latter finding.

  12. Development of a score and probability estimate for detecting angle closure based on anterior segment optical coherence tomography.

    Science.gov (United States)

    Nongpiur, Monisha E; Haaland, Benjamin A; Perera, Shamira A; Friedman, David S; He, Mingguang; Sakata, Lisandro M; Baskaran, Mani; Aung, Tin

    2014-01-01

    To develop a score along with an estimated probability of disease for detecting angle closure based on anterior segment optical coherence tomography (AS OCT) imaging. Cross-sectional study. A total of 2047 subjects 50 years of age and older were recruited from a community polyclinic in Singapore. All subjects underwent standardized ocular examination including gonioscopy and imaging by AS OCT (Carl Zeiss Meditec). Customized software (Zhongshan Angle Assessment Program) was used to measure AS OCT parameters. Complete data were available for 1368 subjects. Data from the right eyes were used for analysis. A stepwise logistic regression model with Akaike information criterion was used to generate a score that then was converted to an estimated probability of the presence of gonioscopic angle closure, defined as the inability to visualize the posterior trabecular meshwork for at least 180 degrees on nonindentation gonioscopy. Of the 1368 subjects, 295 (21.6%) had gonioscopic angle closure. The angle closure score was calculated from the shifted linear combination of the AS OCT parameters. The score can be converted to an estimated probability of having angle closure using the relationship: estimated probability = e^score/(1 + e^score), where e is the natural exponential. The score performed well in a second independent sample of 178 angle-closure subjects and 301 normal controls, with an area under the receiver operating characteristic curve of 0.94. A score derived from a single AS OCT image, coupled with an estimated probability, provides an objective platform for detection of angle closure. Copyright © 2014 Elsevier Inc. All rights reserved.
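
The score-to-probability relationship quoted in the abstract is the standard logistic transform. A minimal sketch (the AS OCT parameters and regression coefficients that produce the score are not given in the abstract, so `score` is treated as an opaque input):

```python
import math

def probability_from_score(score):
    """Convert an additive risk score to an estimated probability
    via the logistic function: p = e^score / (1 + e^score)."""
    return math.exp(score) / (1.0 + math.exp(score))

# A score of 0 corresponds to a 50% estimated probability;
# increasingly positive scores approach 1, negative scores approach 0.
```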

  13. Development of new risk score for pre-test probability of obstructive coronary artery disease based on coronary CT angiography.

    Science.gov (United States)

    Fujimoto, Shinichiro; Kondo, Takeshi; Yamamoto, Hideya; Yokoyama, Naoyuki; Tarutani, Yasuhiro; Takamura, Kazuhisa; Urabe, Yoji; Konno, Kumiko; Nishizaki, Yuji; Shinozaki, Tomohiro; Kihara, Yasuki; Daida, Hiroyuki; Isshiki, Takaaki; Takase, Shinichi

    2015-09-01

    Existing methods to calculate the pre-test probability of obstructive coronary artery disease (CAD) were established using selected high-risk patients who were referred to conventional coronary angiography. The purpose of this study is to develop and validate a new method for estimating the pre-test probability of obstructive CAD using patients who underwent coronary CT angiography (CTA), which could be applicable to a wider patient population. Using 4137 consecutive patients with suspected CAD who underwent coronary CTA at our institution, a multivariate logistic regression model including clinical factors as covariates calculated the pre-test probability (K-score) of obstructive CAD determined by coronary CTA. The K-score was compared with the Duke clinical score using the area under the curve (AUC) for the receiver-operating characteristic curve. External validation was performed on an independent sample of 319 patients. The final model included eight significant predictors: age, gender, coronary risk factors (hypertension, diabetes mellitus, dyslipidemia, smoking), history of cerebral infarction, and chest symptoms. The AUC of the K-score was significantly greater than that of the Duke clinical score for both the derivation (0.736 vs. 0.699) and validation (0.714 vs. 0.688) data sets. Among patients who underwent coronary CTA, the newly developed K-score had better pre-test predictive ability for obstructive CAD than the Duke clinical score in a Japanese population.

  14. Top scores are possible, bottom scores are certain (and middle scores are not worth mentioning): A pragmatic view of verbal probabilities

    Directory of Open Access Journals (Sweden)

    Marie Juanchich

    2013-05-01

    Full Text Available In most previous studies of verbal probabilities, participants are asked to translate expressions such as possible and not certain into numeric probability values. This probabilistic translation approach can be contrasted with a novel which-outcome (WO) approach that focuses on the outcomes that people naturally associate with probability terms. The WO approach has revealed that, when given bell-shaped distributions of quantitative outcomes, people tend to associate certainty with minimum (unlikely) outcome magnitudes and possibility with (unlikely) maximal ones. The purpose of the present paper is to test the factors that foster these effects and the conditions in which they apply. Experiment 1 showed that the association of probability term and outcome was related to the association of scalar modifiers (i.e., it is certain that the battery will last at least..., it is possible that the battery will last up to...). Further, we tested whether this pattern was dependent on the frequency (e.g., increasing vs. decreasing distribution) or the nature of the outcomes presented (i.e., categorical vs. continuous). Results showed that despite being slightly affected by the shape of the distribution, participants continued to prefer to associate possible with maximum outcomes and certain with minimum outcomes. The final experiment provided a boundary condition to the effect, showing that it applies to verbal but not numerical probabilities.

  15. Italians do it worse. Montreal Cognitive Assessment (MoCA) optimal cut-off scores for people with probable Alzheimer's disease and with probable cognitive impairment.

    Science.gov (United States)

    Bosco, Andrea; Spano, Giuseppina; Caffò, Alessandro O; Lopez, Antonella; Grattagliano, Ignazio; Saracino, Giuseppe; Pinto, Katia; Hoogeveen, Frans; Lancioni, Giulio E

    2017-12-01

    The Montreal Cognitive Assessment (MoCA) is a test providing brief screening for people with cognitive impairment due to aging or neurodegenerative syndromes. In Italy, as in the rest of the world, several validation studies of the MoCA have been carried out. This study compared, for the first time in Italy, a sample of people with probable Alzheimer's disease (AD) with healthy counterparts. The study also compared two community-dwelling groups of aged participants with and without probable cognitive impairment, as discriminated by two cut-off points of the adjusted MMSE score. All comparisons were carried out using ROC statistics. The optimal cutoff for a diagnosis of probable AD was a MoCA score ≤14. The optimal cutoff for the discrimination of probable cognitive impairment was a MoCA score ≤17 (associated with an MMSE cutoff of 23.8). The results confirm the substantial discrepancy in cut-off points between Italian and other international validation studies, showing that Italian performance on the MoCA seems to be globally lower than in other countries. Characteristics of the population might explain these results.

  16. Discrimination Level of Students' Ratio, Number of Students per Faculty Member and Article Scores Indicators According to Place of Turkish Universities in International Ranking Systems

    Science.gov (United States)

    Özkan, Metin

    2016-01-01

    The aim of this research is to determine how accurately the positions of Turkish universities in international ranking systems can be classified according to the independent variables of PhD student ratio, number of students per faculty member, and article scores. The data of the research were obtained from University Ranking…

  17. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
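
The authors' general formula (with accompanying MATLAB code) is in the paper itself; its flavor can be illustrated for the simplest case only: a single-interval, two-class task with equal priors and discrete observations, where the maximum-likelihood observer reports whichever class gives the observation the higher likelihood. The sketch below is that special case, not the paper's general formula:

```python
def max_pc_two_class(pmf_a, pmf_b):
    """Maximum proportion correct for a single-interval, two-class
    task with equal priors and discrete observations.
    pmf_a and pmf_b map each observation to its probability under
    class A and class B. The optimal (maximum-likelihood) observer
    reports the class with the higher likelihood, so
    Pc_max = 0.5 * sum_x max(pA(x), pB(x))."""
    observations = set(pmf_a) | set(pmf_b)
    return 0.5 * sum(
        max(pmf_a.get(x, 0.0), pmf_b.get(x, 0.0)) for x in observations
    )
```

Identical distributions give chance performance (0.5), disjoint distributions give perfect performance (1.0), consistent with the article's point that sampling resolution, not distribution shape, limits the accuracy of such discrete computations.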

  18. Overall and class-specific scores of pesticide residues from fruits and vegetables as a tool to rank intake of pesticide residues in United States: A validation study.

    Science.gov (United States)

    Hu, Yang; Chiu, Yu-Han; Hauser, Russ; Chavarro, Jorge; Sun, Qi

    2016-01-01

    Pesticide residues in fruits and vegetables are among the primary sources of pesticide exposure through diet, but the lack of adequate measurements hinders research on the health effects of pesticide residues. The Pesticide Residue Burden Score (PRBS) for estimating overall dietary pesticide intake, and the organochlorine pesticide score (OC-PRBS) and organophosphate pesticide score (OP-PRBS) for estimating organochlorine- and organophosphate-specific intake, respectively, were derived using U.S. Department of Agriculture Pesticide Data Program data and National Health and Nutrition Examination Survey (NHANES) food frequency questionnaire data. We evaluated the performance of these scores by validating them against pesticide metabolites measured in urine or serum among 3,679 participants in NHANES using generalized linear regression. The PRBS was positively associated with a score summarizing the ranks of all pesticide metabolites in a linear fashion (p for linear trend vegetables with high vs. low pesticide residues, respectively (p for trend vegetables (p for trend 0.07) than from less contaminated fruits and vegetables (p for trend 0.63), although neither of the associations achieved statistical significance. The PRBS and the class-specific scores for two major types of pesticides were significantly associated with pesticide biomarkers. These scores can reasonably rank study participants by their pesticide residue exposures from fruits and vegetables in large-scale environmental epidemiological studies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. What Predicts Performance? A Multicenter Study Examining the Association Between Resident Performance, Rank List Position, and United States Medical Licensing Examination Step 1 Scores.

    Science.gov (United States)

    Wagner, Jonathan G; Schneberk, Todd; Zobrist, Marissa; Hern, H Gene; Jordan, Jamie; Boysen-Osborn, Megan; Menchine, Michael

    2017-03-01

    Each application cycle, emergency medicine (EM) residency programs attempt to predict which applicants will be most successful in residency and rank them accordingly on their program's Rank Order List (ROL). Determine if ROL position, participation in a medical student rotation at their respective program, or United States Medical Licensing Examination (USMLE) Step 1 rank within a class is predictive of residency performance. All full-time EM faculty at Los Angeles County + University of Southern California (LAC + USC), Harbor-UCLA (Harbor), Alameda Health System-Highland (Highland), and the University of California-Irvine (UCI) ranked each resident in the classes of 2013 and 2014 at time of graduation. From these anonymous surveys, a graduation ROL was created, and using Spearman's rho, was compared with the program's adjusted ROL, USMLE Step 1 rank, and whether the resident participated in a medical student rotation. A total of 93 residents were evaluated. Graduation ROL position did not correlate with adjusted ROL position (Rho = 0.14, p = 0.19) or USMLE Step 1 rank (Rho = 0.15, p = 0.14). Interestingly, among the subgroup of residents who rotated as medical students, adjusted ROL position demonstrated significant correlation with final ranking on graduation ROL (Rho = 0.31, p = 0.03). USMLE Step 1 score rank and adjusted ROL position did not predict resident performance at time of graduation. However, adjusted ROL position was predictive of future residency success in the subgroup of residents who had completed a sub-internship at their respective programs. These findings should guide the future selection of EM residents. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. [Correlation between APACHE II scores and delirium probability of senile severe pneumonia patients undergoing invasive mechanical ventilation].

    Science.gov (United States)

    Pei, Xinghua; Yu, Haiming; Wu, Yanhong; Zhou, Xu

    2017-09-01

    To investigate the correlation between acute physiology and chronic health evaluation II (APACHE II) scores and delirium probability in senile severe pneumonia patients undergoing invasive mechanical ventilation (MV). A retrospective study was conducted. Eighty-nine senile severe pneumonia patients undergoing invasive MV admitted to the intensive care unit (ICU) of Hunan Provincial People's Hospital from January 2015 to March 2017 were enrolled. APACHE II scores were collected 24 hours before invasive MV. The confusion assessment method for the ICU (CAM-ICU) was used to diagnose delirium, and the patients were divided into a delirium group and a non-delirium group. The first delirium occurrence time, duration of MV, and length of ICU stay were recorded. The patients were divided into ≤15, 16-20, 21-25, 26-30, 31-35, and 36-40 groups according to APACHE II score, and the incidence of delirium in each group was observed. Linear regression and Pearson correlation were used to analyze the correlation between APACHE II scores and delirium probability. A receiver operating characteristic (ROC) curve was plotted to analyze the predictive effect of the APACHE II score on delirium. Eighty-nine patients were enrolled in the final analysis, of which 35 had delirium and 54 did not, a delirium incidence of 39.33%, with a first delirium occurrence time of (1.85±1.30) days. The duration of MV and the length of ICU stay in the delirium group were significantly longer than those in the non-delirium group [duration of MV (days): 9.43±4.77 vs. 6.08±3.30; length of ICU stay (days): 14.60±6.59 vs. 9.69±4.61]. The APACHE II score in the delirium group was significantly higher than that in the non-delirium group (29.89±5.45 vs. 21.48±4.76), and with higher APACHE II scores the delirium incidence gradually increased. Correlation analysis showed a negative correlation between APACHE II scores and first delirium occurrence time (r = -0.411, P = 0.014), and a significant linear positive

  1. Modeling probability-based injury severity scores in logistic regression models: the logit transformation should be used.

    Science.gov (United States)

    Moore, Lynne; Lavoie, André; Bergeron, Eric; Emond, Marcel

    2007-03-01

    The International Classification of Disease Injury Severity Score (ICISS) and the Trauma Registry Abbreviated Injury Scale Score (TRAIS) are trauma injury severity scores based on probabilities of survival. They are widely used in logistic regression models as raw probability scores to predict the logit of mortality. The aim of this study was to evaluate whether these severity indicators would offer a more accurate prediction of mortality if they were used with a logit transformation. Analyses were based on 25,111 patients from the trauma registries of the four Level I trauma centers in the province of Quebec, Canada, abstracted between 1998 and 2005. The ICISS and TRAIS were calculated using survival proportions from the National Trauma Data Bank. The performance of the ICISS and TRAIS in their widely used form, proportions varying from 0 to 1, was compared with a logit transformation of the scores in logistic regression models predicting in-hospital mortality. Calibration was assessed with the Hosmer-Lemeshow statistic. Neither the ICISS nor the TRAIS had a linear relation with the logit of mortality. A logit transformation of these scores led to a near-linear association and consequently improved model calibration. The Hosmer-Lemeshow statistic was 68 (35-192) and 69 (41-120) with the logit transformation, compared with 272 (227-339) and 204 (166-266) with no transformation, for the ICISS and TRAIS, respectively. In logistic regression models predicting mortality, the ICISS and TRAIS should be used with a logit transformation. This study has direct implications for improving the validity of analyses requiring control for injury severity case mix.
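
The recommended transformation is the standard logit. A minimal sketch (the clipping constant `eps` is an assumption, added only to keep the transform finite when a survival proportion is exactly 0 or 1):

```python
import math

def logit(p, eps=1e-6):
    """Logit transform of a survival probability: log(p / (1 - p)).
    Probabilities are clipped away from 0 and 1 by the assumed
    constant eps so the transform stays finite at the extremes."""
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

# The transformed score, rather than the raw ICISS/TRAIS proportion,
# would then enter the logistic regression as a covariate.
```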

  2. Propensity scores-potential outcomes framework to incorporate severity probabilities in the highway safety manual crash prediction algorithm.

    Science.gov (United States)

    Sasidharan, Lekshmi; Donnell, Eric T

    2014-10-01

    Accurate estimation of the expected number of crashes at different severity levels for entities with and without countermeasures plays a vital role in selecting countermeasures in the framework of the safety management process. The current practice is to use the American Association of State Highway and Transportation Officials' Highway Safety Manual crash prediction algorithms, which combine safety performance functions and crash modification factors, to estimate the effects of safety countermeasures on different highway and street facility types. Many of these crash prediction algorithms are based solely on crash frequency, or assume that severity outcomes are unchanged when planning for, or implementing, safety countermeasures. Failing to account for the uncertainty associated with crash severity outcomes, and assuming crash severity distributions remain unchanged in safety performance evaluations, limits the utility of the Highway Safety Manual crash prediction algorithms in assessing the effect of safety countermeasures on crash severity. This study demonstrates the application of a propensity scores-potential outcomes framework to estimate the probability distribution for the occurrence of different crash severity levels by accounting for the uncertainties associated with them. The probability of fatal and severe injury crash occurrence at lighted and unlighted intersections is estimated in this paper using data from Minnesota. The results show that the expected probability of occurrence of fatal and severe injury crashes at a lighted intersection was 1 in 35 crashes, and the estimated risk ratio indicates that the respective probability at an unlighted intersection was 1.14 times higher than at lighted intersections. The results from the potential outcomes-propensity scores framework are compared to results obtained from traditional binary logit models, without application of propensity scores matching. Traditional binary logit analysis suggests that

  3. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
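
The paper learns the sparse combination coefficients and the ranking scores jointly; as a hedged illustration of just the score-regularization half, the sketch below smooths initial relevance scores over a *fixed* similarity graph. The iterative update and parameter names are assumptions for illustration, not the paper's algorithm:

```python
def regularized_ranking(y, w, lam=1.0, iters=100):
    """Smooth initial relevance scores y over a similarity graph.
    w[i][j] >= 0 is the similarity between objects i and j; each
    score is pulled toward the similarity-weighted average of its
    neighbours' scores, trading fidelity to y against smoothness
    on the graph (lam controls the trade-off).
    Note: the similarity weights are fixed here, whereas the paper
    learns sparse combination coefficients jointly with the scores."""
    n = len(y)
    f = list(y)
    for _ in range(iters):
        f = [
            (y[i] + lam * sum(w[i][j] * f[j] for j in range(n)))
            / (1.0 + lam * sum(w[i][j] for j in range(n)))
            for i in range(n)
        ]
    return f
```

The effect is that objects similar to highly relevant ones inherit part of their relevance, which is the intuition behind using combination coefficients to regularize ranking scores.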

  4. Universal scaling in sports ranking

    Science.gov (United States)

    Deng, Weibing; Li, Wei; Cai, Xu; Bulou, Alain; Wang, Qiuping A.

    2012-09-01

    Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on and so forth. Herewith, we study a specific kind—sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to the modifications of some parameters.
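
The toy model described above can be sketched as a single match rule: the probability that the higher-ranked player wins is a sigmoidal function of the rank difference. The `scale` parameter below is illustrative, not a value fitted in the paper:

```python
import math
import random

def simulate_match(rank_a, rank_b, scale=50.0, rng=random):
    """Toy match rule: the probability that player A (rank rank_a)
    beats player B grows sigmoidally with the rank difference.
    Lower rank number = better player; `scale` controls how quickly
    an upset becomes unlikely and is purely illustrative.
    Returns the winner's rank."""
    diff = rank_b - rank_a  # > 0 when A is the higher-ranked player
    p_a_wins = 1.0 / (1.0 + math.exp(-diff / scale))
    return rank_a if rng.random() < p_a_wins else rank_b
```

Repeatedly drawing pairs of players, playing matches with this rule, and accruing points to winners is the kind of simulation the authors use to reproduce the empirical power-law score distributions.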

  5. Universal scaling in sports ranking

    CERN Document Server

    Deng, Weibing; Cai, Xu; Bulou, Alain; Wang, Qiuping A

    2011-01-01

    Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, top-paid tennis stars, and so on and so forth. Herewith, we study a specific kind, sports ranking systems in which players' scores and prize money are calculated based on their performances in various tournaments. A typical example is tennis. It is found that the distributions of both scores and prize money follow universal power laws, with exponents nearly identical for most sports fields. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player will top the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simul...

  6. 24 CFR 200.857 - Administrative process for scoring and ranking the physical condition of multifamily housing...

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Administrative process for scoring.... The administrative process provided in this section does not prohibit the Office of Housing, the DEC... GENERAL INTRODUCTION TO FHA PROGRAMS Physical Condition of Multifamily Properties § 200.857 Administrative...

  7. Internationally comparable diagnosis-specific survival probabilities for calculation of the ICD-10-based Injury Severity Score

    DEFF Research Database (Denmark)

    Gedeborg, R.; Warner, M.; Chen, L. H.

    2014-01-01

    BACKGROUND: The International Statistical Classification of Diseases, 10th Revision (ICD-10)-based Injury Severity Score (ICISS) performs well but requires diagnosis-specific survival probabilities (DSPs), which are empirically derived, for its calculation. The objective was to examine if DSPs b...... based on data pooled from several countries could increase accuracy, precision, utility, and international comparability of DSPs and ICISS. METHODS: Australia, Argentina, Austria, Canada, Denmark, New Zealand, and Sweden provided ICD-10-coded injury hospital discharge data, including in......-hospital mortality status. Data from the seven countries were pooled using four different methods to create an international collaborative effort ICISS (ICE-ICISS). The ability of the ICISS to predict mortality using the country-specific DSPs and the pooled DSPs was estimated and compared. RESULTS: The pooled DSPs...... generated empirically derived DSPs. These pooled DSPs facilitate international comparisons and enable the use of ICISS in all settings where ICD-10 hospital discharge diagnoses are available. The modest reduction in performance of the ICE-ICISS compared with the country-specific scores is unlikely...
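    ICISS is conventionally computed as the product of the diagnosis-specific survival probabilities of all injury diagnoses on a discharge record. A minimal sketch, with hypothetical DSP values standing in for the empirically derived ones:

```python
def iciss(survival_probs):
    """ICISS for one patient: the product of the diagnosis-specific
    survival probabilities (DSPs) of all recorded injury diagnoses."""
    score = 1.0
    for p in survival_probs:
        score *= p
    return score

# Hypothetical DSPs for three ICD-10 injury codes on one discharge record.
dsps = [0.995, 0.970, 0.820]
score = iciss(dsps)      # overall survival probability estimate
risk = 1.0 - score       # predicted in-hospital mortality
```

    Each additional diagnosis can only lower the score, which is why multiple injuries compound into a higher predicted mortality.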

  8. Diet quality of Italian yogurt consumers: an application of the probability of adequate nutrient intake score (PANDiet).

    Science.gov (United States)

    Mistura, Lorenza; D'Addezio, Laura; Sette, Stefania; Piccinelli, Raffaela; Turrini, Aida

    2016-01-01

    The diet quality in yogurt consumers and non-consumers was evaluated by applying the probability of adequate nutrient intake (PANDiet) index to a sample of adults and elderly from the Italian food consumption survey INRAN SCAI 2005-06. Overall, yogurt consumers had a significantly higher mean intake of energy, calcium and percentage of energy from total sugars, whereas the mean percentages of energy from total fat, saturated fatty acids and total carbohydrate were significantly (p < 0.01) lower than in non-consumers. The PANDiet index was significantly higher in yogurt consumers than in non-consumers (60.58 ± 0.33 vs. 58.58 ± 0.19, p < 0.001). The adequacy sub-score for 17 nutrients for which usual intake should be above the reference value was significantly higher among yogurt consumers. Calcium, potassium and riboflavin showed the largest percentage variation between consumers and non-consumers. Yogurt consumers were more likely to have adequate intakes of vitamins and minerals, and a higher quality score of the diet.
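    The PANDiet builds on per-nutrient probabilities of adequacy. A minimal sketch of one common construction, assuming usual intake is normally distributed and taking the adequacy sub-score as the mean probability times 100; the nutrient means, SDs and reference values below are hypothetical:

```python
from statistics import NormalDist

def adequacy_prob(mean_intake, sd_intake, reference):
    # Probability that usual intake exceeds the reference value,
    # assuming usual intake ~ Normal(mean, sd) (a simplifying assumption).
    return 1.0 - NormalDist(mean_intake, sd_intake).cdf(reference)

def pandiet_subscore(nutrients):
    # nutrients: list of (mean, sd, reference) tuples;
    # sub-score = mean adequacy probability, scaled to 0-100.
    probs = [adequacy_prob(m, s, r) for m, s, r in nutrients]
    return 100.0 * sum(probs) / len(probs)

# Hypothetical intakes for calcium (mg/day) and riboflavin (mg/day).
sub = pandiet_subscore([(900, 150, 800), (1.6, 0.4, 1.1)])
```

    An intake whose mean equals the reference gets probability 0.5, so the score rewards intakes that clear the reference with room to spare relative to their variability.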

  9. The HIT Expert Probability (HEP) Score: a novel pre‐test probability model for heparin‐induced thrombocytopenia based on broad expert opinion

    National Research Council Canada - National Science Library

    CUKER, A; AREPALLY, G; CROWTHER, M. A; RICE, L; DATKO, F; HOOK, K; PROPERT, K. J; KUTER, D. J; ORTEL, T. L; KONKLE, B. A; CINES, D. B

    2010-01-01

    .... Objectives:  To develop a pre‐test clinical scoring model for HIT based on broad expert opinion that may be useful in guiding clinical decisions regarding therapy. Patients/methods:  A pre...

  10. Utilization of 4T score to determine the pretest probability of heparin-induced thrombocytopenia in a community hospital in upstate New York

    Directory of Open Access Journals (Sweden)

    Yazan Samhouri

    2016-09-01

    Background: Thrombocytopenia is common in hospitalized patients. Heparin-induced thrombocytopenia (HIT) is a life-threatening condition which can lead to extensive thrombosis. Diagnosis of HIT relies on clinical suspicion, quantified by the 4T score, and on immunoassays testing for anti-PF4/heparin antibodies. Clinical practice guidelines published by the American Society of Hematology in 2013 recommended use of the 4T score as a measure of pretest probability before ordering the immunoassays. The purpose of this study was to evaluate the utilization of the 4T score before ordering anti-PF4/heparin antibodies at Unity Hospital. Methods: We did a retrospective chart review of patients 18 years or older who were admitted to Unity Hospital between July 1, 2013, and December 31, 2014, and had anti-PF4/heparin antibodies ordered. Subjects who had a prior history of HIT or had end-stage renal disease on hemodialysis were excluded. After calculating the 4T score retrospectively, we calculated the proportion of patients who had a 4T score documented prior to ELISA testing and the proportion of ELISA tests which were not indicated due to a 4T score less than or equal to 3, using Minitab 16. Results: Review of 123 patients, with an average age of 69.4 years, showed that testing was indicated in 18 patients. Six subjects had positive results, and testing was indicated in all of them. The 4T score was documented in three patients. This quality improvement study showed that the 4T score documentation rate at Unity Hospital is 2.4%. Anti-PF4/heparin antibody testing was indicated in 14.6%. This test is being overused in the thrombocytopenia workup at Unity Hospital, costing $9,345. The topic was reviewed with residents, and a prompt and calculator for the 4T score were added to the electronic medical record before ordering the test as a step to improve high-value care.
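    The 4T score sums four clinical items, each graded 0-2 (thrombocytopenia, timing of platelet fall, thrombosis, and other causes of thrombocytopenia). A minimal sketch of the score and of the decision rule used in the study above, where antibody testing was considered indicated only for scores above 3:

```python
def four_t_score(thrombocytopenia, timing, thrombosis, other_causes):
    """4T pre-test probability score for HIT: each item is graded 0, 1 or 2."""
    for v in (thrombocytopenia, timing, thrombosis, other_causes):
        if v not in (0, 1, 2):
            raise ValueError("each 4T item is scored 0, 1 or 2")
    return thrombocytopenia + timing + thrombosis + other_causes

def elisa_indicated(score):
    # Per the study above, anti-PF4/heparin testing was considered
    # not indicated when the 4T score was <= 3 (low pre-test probability).
    return score > 3

s = four_t_score(2, 2, 1, 1)   # an intermediate/high pre-test probability patient
```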

  11. Tensor Rank

    OpenAIRE

    Erdtman, Elias; Jönsson, Carl

    2012-01-01

    This master's thesis addresses numerical methods of computing the typical ranks of tensors over the real numbers and explores some properties of tensors over finite fields. We present three numerical methods to compute typical tensor rank. Two of these have already been published and can be used to calculate the lowest typical ranks of tensors and an approximate percentage of how many tensors have the lowest typical ranks (for some tensor formats), respectively. The third method was developed...

  12. PageRank (II): Mathematics

    African Journals Online (AJOL)

    maths/stats

    INTRODUCTION. PageRank is Google's system for ranking web pages. A page with a higher PageRank is deemed more important and is more likely to be listed above a ... Felix U. Ogban, Department of Mathematics/Statistics and Computer Science, Faculty of Science, University of ..... probability, 2004, 41, (3): 721-734.
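    The PageRank computation behind this can be illustrated with plain power iteration on a tiny graph. A minimal sketch, using the conventional damping factor of 0.85; the three-node graph is illustrative:

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank for a small directed graph.
    links: dict mapping each node to its list of outgoing neighbours."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if not out:                      # dangling node: spread evenly
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:
                for v in out:
                    new[v] += d * rank[u] / len(out)
        rank = new
    return rank

r = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

    Here C collects links from both A and B, so it ends up with the highest rank; the ranks sum to 1 and can be read as the stationary probabilities of a random surfer.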

  13. Rank Dynamics

    Science.gov (United States)

    Gershenson, Carlos

    Studies of rank distributions have been popular for decades, especially since the work of Zipf. For example, if we rank words of a given language by use frequency (most used word in English is 'the', rank 1; second most common word is 'of', rank 2), the distribution can be approximated roughly with a power law. The same applies for cities (most populated city in a country ranks first), earthquakes, metabolism, the Internet, and dozens of other phenomena. We recently proposed "rank diversity" to measure how ranks change in time, using the Google Books Ngram dataset. Studying six languages between 1800 and 2009, we found that the rank diversity curves of languages are universal, adjusted with a sigmoid on log-normal scale. We are studying several other datasets (sports, economies, social systems, urban systems, earthquakes, artificial life). Rank diversity seems to be universal, independently of the shape of the rank distribution. I will present our work in progress towards a general description of the features of rank change in time, along with simple models which reproduce it.
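    Rank diversity measures how the occupant of each rank changes over time. A minimal sketch, under the assumption that d(k) is the number of distinct elements observed at rank k across the time slices, normalised by the number of slices:

```python
def rank_diversity(rankings):
    """rankings: list of rank-ordered lists, one per time slice,
    where rankings[t][k] is the element at rank k+1 at time t.
    Returns d(k): distinct occupants of each rank, normalised by
    the number of time slices."""
    T = len(rankings)
    n = len(rankings[0])
    return [len({rankings[t][k] for t in range(T)}) / T for k in range(n)]

# Three yearly top-3 word lists: rank 1 is stable, rank 3 churns.
d = rank_diversity([
    ["the", "of", "and"],
    ["the", "of", "to"],
    ["the", "and", "in"],
])
```

    In this toy input d(k) grows from 1/3 at the stable top rank to 1 at the fully churning third rank, mirroring the empirical pattern that low ranks are far more volatile than top ranks.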

  14. Ranking beta sheet topologies of proteins

    DEFF Research Database (Denmark)

    Fonseca, Rasmus; Helles, Glennie; Winter, Pawel

    2010-01-01

    One of the challenges of protein structure prediction is to identify long-range interactions between amino acids. To reliably predict such interactions, we enumerate, score and rank all beta-topologies (partitions of beta-strands into sheets, orderings of strands within sheets and orientations...... of paired strands) of a given protein. We show that the beta-topology corresponding to the native structure is, with high probability, among the top-ranked. Since full enumeration is very time-consuming, we also suggest a method to deal with proteins with many beta-strands. The results reported...... in this paper are highly relevant for ab initio protein structure prediction methods based on decoy generation. The top-ranked beta-topologies can be used to find initial conformations from which conformational searches can be started. They can also be used to filter decoys by removing those with poorly...

  15. A configuration space of homologous proteins conserving mutual information and allowing a phylogeny inference based on pair-wise Z-score probabilities

    Directory of Open Access Journals (Sweden)

    Maréchal Eric

    2005-03-01

    Background: Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect the probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons), and be the basis for a novel method of consistent and stable phylogenetic reconstruction. Results: We have built up a spatial representation of protein sequences using concepts from particle physics (configuration space) and respecting a frame of constraints deduced from pair-wise alignment score properties in information theory. The obtained configuration space of homologous proteins (CSHP) allows the representation of real and shuffled sequences, and thereupon an expression of the TULIP theorem for Z-score probabilities. Based on the CSHP, we propose a phylogeny reconstruction using Z-scores. Deduced trees, called TULIP trees, are consistent with multiple-alignment based trees. Furthermore, the TULIP tree reconstruction method provides a solution for some previously reported incongruent results, such as the apicomplexan enolase phylogeny. Conclusion: The CSHP is a unified model that conserves mutual information between proteins in the way physical models conserve energy. Applications include the reconstruction of evolutionarily consistent and robust trees, the topology of which is based on a spatial representation that is not reordered after addition or removal of sequences. The CSHP and its assigned phylogenetic topology provide a powerful and easily updated representation for massive pair-wise genome comparisons based on Z-score computations.

  16. Elaboration of a clinical and paraclinical score to estimate the probability of herpes simplex virus encephalitis in patients with febrile, acute neurologic impairment.

    Science.gov (United States)

    Gennai, S; Rallo, A; Keil, D; Seigneurin, A; Germi, R; Epaulard, O

    2016-06-01

    Herpes simplex virus (HSV) encephalitis is associated with a high risk of mortality and sequelae, and early diagnosis and treatment in the emergency department are necessary. However, most patients present with non-specific febrile, acute neurologic impairment; this may lead clinicians to overlook the diagnosis of HSV encephalitis. We aimed to identify which data collected in the first hours in a medical setting were associated with the diagnosis of HSV encephalitis. We conducted a multicenter retrospective case-control study in four French public hospitals from 2007 to 2013. The cases were adult patients who received a confirmed diagnosis of HSV encephalitis. The controls were all patients who attended the emergency department of Grenoble hospital with febrile acute neurologic impairment, without HSV detection by polymerase chain reaction (PCR) in the cerebrospinal fluid (CSF), in 2012 and 2013. A multivariable logistic model was elaborated to estimate factors significantly associated with HSV encephalitis. Finally, an HSV probability score was derived from the logistic model. We identified 36 cases and 103 controls. Factors independently associated with HSV encephalitis were the absence of past neurological history (odds ratio [OR] 6.25 [95% CI: 2.22-16.7]), the occurrence of seizure (OR 8.09 [95% CI: 2.73-23.94]), a systolic blood pressure ≥140 mmHg (OR 5.11 [95% CI: 1.77-14.77]), and a C-reactive protein <10 mg/L (OR 9.27 [95% CI: 2.98-28.88]). An HSV probability score was calculated by summing the value attributed to each independent factor. HSV encephalitis diagnosis may benefit from the use of this score based upon easily accessible data. However, diagnostic consideration and presumptive treatment must remain the rule.
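    The resulting score sums a value for each of the four factors present. A minimal sketch with equal, purely illustrative weights; the actual point values derived from the study's logistic model are not reproduced here:

```python
def hsv_score(no_neuro_history, seizure, sbp_ge_140, crp_lt_10):
    # One point per factor present. The original study assigned weights
    # derived from the logistic model's coefficients, which are NOT
    # reproduced here; equal weights are an illustrative assumption.
    return sum(map(bool, (no_neuro_history, seizure, sbp_ge_140, crp_lt_10)))

# A patient with no neurological history, a seizure and CRP < 10 mg/L.
s = hsv_score(True, True, False, True)
```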

  17. Use of the probability of stone formation (PSF score to assess stone forming risk and treatment response in a cohort of Brazilian stone formers

    Directory of Open Access Journals (Sweden)

    Benjamin Turney

    2014-08-01

    Introduction: The aim was to confirm that the PSF (probability of stone formation) changed appropriately following medical therapy in recurrent stone formers. Materials and Methods: Data were collected on 26 Brazilian stone formers. A baseline 24-hour urine collection was performed prior to treatment. Details of the medical treatment initiated for stone disease were recorded. A PSF calculation was performed on the 24-hour urine sample using the 7 urinary parameters required: voided volume, oxalate, calcium, urate, pH, citrate and magnesium. A repeat 24-hour urine sample was collected for PSF calculation after treatment. Comparison was made between the PSF scores before and during treatment. Results: At baseline, 20 of the 26 patients (77%) had a high PSF score (> 0.5). Of the 26 patients, 17 (65%) showed an overall reduction in their PSF profiles with a medical treatment regimen. Eleven patients (42%) changed from a high risk (PSF > 0.5) to a low risk (PSF < 0.5) during both assessments. Conclusions: The PSF score reduced following medical treatment in the majority of patients in this cohort.

  18. Efficient Top-k Search for PageRank

    National Research Council Canada - National Science Library

    Fujiwara, Yasuhiro; Nakatsuji, Makoto; Shiokawa, Hiroaki; Mishima, Takeshi; Onizuka, Makoto

    2015-01-01

      In AI communities, many applications utilize PageRank. To obtain high PageRank score nodes, the original approach iteratively computes the PageRank score of each node until convergence from the whole graph...

  19. Mortality probability model III and simplified acute physiology score II: assessing their value in predicting length of stay and comparison to APACHE IV.

    Science.gov (United States)

    Vasilevskis, Eduard E; Kuzniewicz, Michael W; Cason, Brian A; Lane, Rondall K; Dean, Mitzi L; Clay, Ted; Rennie, Deborah J; Vittinghoff, Eric; Dudley, R Adams

    2009-07-01

    To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM(0)) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. The grouped coefficients of determination were: APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal), R² = 0.422; mortality probability model III at zero hours (MPM(0) III), R² = 0.279; and simplified acute physiology score (SAPS II), R² = 0.008. For each decile of predicted ICU LOS, the mean predicted LOS differed significantly from the observed LOS for APACHE IVrecal, MPM(0) III, and SAPS II. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. APACHE IV and MPM(0) III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM(0) III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration.

  20. Quantitative EEG Markers of Entropy and Auto Mutual Information in Relation to MMSE Scores of Probable Alzheimer’s Disease Patients

    Directory of Open Access Journals (Sweden)

    Carmina Coronel

    2017-03-01

    Analysis of nonlinear quantitative EEG (qEEG) markers describing signal complexity in relation to the severity of Alzheimer's disease (AD) was the focal point of this study. 79 patients diagnosed with probable AD were recruited from the multi-centric Prospective Dementia Database Austria (PRODEM). EEG recordings were done with the subjects seated in an upright position in a resting state with their eyes closed. Linear regression models explaining disease severity, expressed in Mini-Mental State Examination (MMSE) scores, were analyzed using the nonlinear qEEG markers of auto mutual information (AMI), Shannon entropy (ShE), Tsallis entropy (TsE), multiscale entropy (MsE), or spectral entropy (SpE), with age, duration of illness, and years of education as co-predictors. Linear regression models with AMI were significant for all electrode sites and clusters, with R² = 0.46 at electrode site C3, 0.43 at Cz, F3, and the central region, and 0.42 at the left region. MsE also had significant models at C3 with R² > 0.40 at scales τ = 5 and τ = 6. ShE and TsE also had significant models at T7 and F7 with R² > 0.30. Reductions in complexity, calculated by AMI, SpE, and MsE, were observed as the MMSE score decreased.
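    A histogram-based Shannon entropy, one of the simpler markers in this family, can be sketched as follows. This is an illustration of the general idea, not the PRODEM analysis pipeline; the bin count is an arbitrary assumption:

```python
import math

def shannon_entropy(signal, n_bins=16):
    """Shannon entropy (in bits) of a signal's amplitude histogram,
    a simple qEEG-style complexity marker."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0   # guard against a constant signal
    counts = [0] * n_bins
    for x in signal:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# A constant signal carries no amplitude information; a spread-out
# signal fills more histogram bins and scores higher entropy.
flat = shannon_entropy([1.0] * 256)
spread = shannon_entropy(list(range(256)))
```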

  1. Kid-Short Marfan Score (Kid-SMS) Is a Useful Diagnostic Tool for Stratifying the Pre-Test Probability of Marfan Syndrome in Childhood.

    Science.gov (United States)

    Stark, Veronika C; Arndt, Florian; Harring, Gesa; von Kodolitsch, Yskert; Kozlik-Feldmann, Rainer; Mueller, Goetz C; Steiner, Kristoffer J; Mir, Thomas S

    2015-03-12

    Due to age-dependent organ manifestation, diagnosis of Marfan syndrome (MFS) is a challenge, especially in childhood. It is important to identify children at risk of MFS as soon as possible, both to direct them to appropriate treatment and to avoid stigmatization due to false diagnosis. We published the Kid-Short Marfan Score (Kid-SMS) in 2012 to stratify the pre-test probability of MFS in childhood. Here we evaluate the predictive performance of Kid-SMS in a new cohort of children. We prospectively investigated 106 patients who were suspected of having MFS. At baseline, children were examined according to Kid-SMS. At baseline and follow-up visits, diagnosis of MFS was established or rejected using current standard diagnostic criteria according to the revised Ghent criteria (Ghent-2). At baseline, 43 patients were identified as at risk of MFS according to Kid-SMS, whereas 21 patients had a Ghent-2 diagnosis of MFS. Sensitivity was 100%, specificity 77%, negative predictive value 100% and the likelihood ratio of Kid-SMS was 4.3. During the follow-up period, three other patients with a stratified risk for MFS were diagnosed according to Ghent-2. We confirm very good predictive performance of Kid-SMS, with excellent sensitivity and negative predictive value but restricted specificity. Kid-SMS avoids stigmatization due to a false diagnosis of MFS and the resulting restrictions on quality of life. Outpatient pediatricians and pediatric cardiologists in particular can use it for primary assessment.
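    The reported figures follow from the usual 2×2 definitions. A minimal sketch, using illustrative counts chosen only to reproduce the reported sensitivity (100%), specificity (77%) and likelihood ratio (≈4.3); these are not the study's raw numbers:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, negative predictive value and
    positive likelihood ratio from 2x2 confusion counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    return sens, spec, npv, lr_pos

# Illustrative counts: every true case flagged (fn = 0), 77/100
# unaffected children correctly cleared.
sens, spec, npv, lr = diagnostic_metrics(tp=21, fp=23, fn=0, tn=77)
```

    With a sensitivity of 1.0, the positive likelihood ratio reduces to 1/(1 - specificity) = 1/0.23 ≈ 4.3, matching the reported value.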

  2. Comparison of Heidelberg Retina Tomograph-3 glaucoma probability score and Moorfields regression analysis of optic nerve head in glaucoma patients and healthy individuals.

    Science.gov (United States)

    Caglar, Çagatay; Gul, Adem; Batur, Muhammed; Yasar, Tekin

    2017-01-01

    To compare the sensitivity and specificity of Moorfields regression analysis (MRA) and the glaucoma probability score (GPS) between healthy and glaucomatous eyes with the Heidelberg Retina Tomograph 3 (HRT-3). The study included 120 eyes of 75 glaucoma patients and 138 eyes of 73 normal subjects, for a total of 258 eyes of 148 individuals. All measurements were performed with the HRT-3. Diagnostic test criteria (sensitivity, specificity, etc.) were used to evaluate how efficiently the GPS and MRA algorithms in the HRT-3 discriminated between the glaucoma and control groups. The GPS showed 88% sensitivity and 66% specificity, whereas MRA had 71.5% sensitivity and 82.5% specificity. There was 71% agreement between the final results of MRA and GPS in the glaucoma group. Excluding borderline patients from both analyses resulted in 91.6% agreement. In the control group, the level of agreement between MRA and GPS was 64% including borderline patients and 84.1% after excluding borderline patients. Excluding borderline patients, the accuracy rate in the glaucoma group was 92% for MRA and 91% for GPS; the difference was not statistically significant. In both cases, agreement between MRA and GPS was higher in the glaucoma group. We found that both sensitivity and specificity increased with disc size for MRA, while sensitivity increased and specificity decreased with larger disc sizes for GPS. HRT is able to quantify and clearly reveal structural changes in the optic nerve head (ONH) and retinal nerve fiber layer (RNFL) in glaucoma.

  3. Evaluating probability forecasts

    OpenAIRE

    Lai, Tze Leung; Gross, Shulamith T.; Shen, David Bo

    2011-01-01

    Probability forecasts of events are routinely used in climate predictions, in forecasting default probabilities on bank loans or in estimating the probability of a patient's positive response to treatment. Scoring rules have long been used to assess the efficacy of the forecast probabilities after observing the occurrence, or nonoccurrence, of the predicted events. We develop herein a statistical theory for scoring rules and propose an alternative approach to the evaluation of probability for...
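    A classic example of such a scoring rule is the Brier score, the mean squared difference between forecast probabilities and binary outcomes. A minimal sketch showing that a sharp, well-calibrated forecaster scores better (lower) than a hedging one on the same events:

```python
def brier_score(forecasts, outcomes):
    """Brier score: mean squared difference between forecast
    probabilities and binary outcomes (lower is better)."""
    n = len(forecasts)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / n

# Three events, two of which occurred. The sharp forecaster commits;
# the vague forecaster hedges at 0.5 on every event.
sharp = brier_score([0.9, 0.8, 0.1], [1, 1, 0])
vague = brier_score([0.5, 0.5, 0.5], [1, 1, 0])
```

    Because the Brier score is a proper scoring rule, a forecaster minimises its expected value by reporting honest probabilities rather than hedged ones.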

  4. Which set of embryo variables is most predictive for live birth? A prospective study in 6252 single embryo transfers to construct an embryo score for the ranking and selection of embryos.

    Science.gov (United States)

    Rhenman, A; Berglund, L; Brodin, T; Olovsson, M; Milton, K; Hadziosmanovic, N; Holte, J

    2015-01-01

    Which embryo score variables are most powerful for predicting live birth after single embryo transfer (SET) at the early cleavage stage? This large prospective study of visual embryo scoring variables shows that blastomere number (BL), the proportion of mononucleated blastomeres (NU) and the degree of fragmentation (FR) have independent prognostic power to predict live birth. Other studies suggest prognostic power, at least univariately and for implantation potential, for all five variables. A previous study from the same centre on double embryo transfers, with implantation as the end-point, resulted in the integrated morphology cleavage (IMC) score, which incorporates BL, NU and EQ. A prospective cohort study of IVF/ICSI SET on Day 2 (n = 6252) was carried out during a 6-year period (2006-2012). The five variables (BL, NU, FR, EQ and symmetry of cleavage (SY)) were scored on 3- to 5-step scales and subsequently related to clinical pregnancy and live birth rate (LBR). A total of 4304 women undergoing IVF/ICSI in a university-affiliated private fertility clinic were included. Generalized estimating equation models evaluated live birth (yes/no) as the primary outcome using the embryo variables as predictors. Odds ratios with 95% confidence intervals and P-values were presented for each predictor. The c statistic (i.e. area under the receiver operating characteristic curve) was calculated for each model. Model calibration was assessed with the Hosmer-Lemeshow test. A shrinkage method was applied to remove bias in c statistics due to over-fitting. LBR was 27.1% (1693/6252). BL, NU, FR and EQ were each highly significantly associated with LBR in univariate analyses. In a multivariate model, BL, NU and FR remained independently significant, with c statistic 0.579 (age-adjusted c statistic 0.637). EQ did not retain significance in the multivariate model. Prediction model calibration was good for both pregnancy and live birth. We present a ranking tree with combinations of values of the BL, NU and FR embryo variables for optimal

  5. Comparison of the diagnostic ability of Moorfield's regression analysis and glaucoma probability score using Heidelberg retinal tomograph III in eyes with primary open angle glaucoma

    Directory of Open Access Journals (Sweden)

    Jindal Shveta

    2010-01-01

    Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfield's regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained using HRT version 3.0. Results: The agreement coefficient (weighted κ) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119-0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific (borderline results included as test positives) criteria. The MRA sensitivity and specificity were 30.61% and 98% (most specific) and 57.14% and 98% (least specific). The GPS sensitivity and specificity were 81.63% and 73.47% (most specific) and 95.92% and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a lower negative likelihood ratio (0.25 vs. 0.44). The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than MRA. The disc size should be taken into consideration when interpreting HRT results, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs.

  6. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  7. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. © 2012 Wang et al.; licensee BioMed Central Ltd.
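    The core idea, ranking scores regularized by a convex combination of graphs, can be sketched with simple score propagation. In this minimal sketch the graph weights are fixed for brevity, whereas MultiG-Rank learns them jointly with the scores; the tiny adjacency matrices are illustrative:

```python
def multi_graph_rank(adjs, weights, y, alpha=0.5, iters=200):
    """Manifold-style ranking over a weighted combination of graphs.
    adjs: list of n x n symmetric adjacency matrices (lists of lists);
    weights: fixed convex-combination weights, one per graph;
    y: query indicator vector (1 for the query node, 0 elsewhere)."""
    n = len(y)
    # Combine the graphs, then row-normalise to get a propagation matrix.
    W = [[sum(w * A[i][j] for w, A in zip(weights, adjs)) for j in range(n)]
         for i in range(n)]
    deg = [sum(row) or 1.0 for row in W]
    f = list(y)
    for _ in range(iters):
        f = [(1 - alpha) * y[i]
             + alpha * sum(W[i][j] / deg[i] * f[j] for j in range(n))
             for i in range(n)]
    return f

# Two tiny 4-node similarity graphs; node 0 is the query. Nodes linked
# to the query in either graph should outrank the isolated node 3.
A1 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
A2 = [[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]]
f = multi_graph_rank([A1, A2], [0.5, 0.5], [1, 0, 0, 0])
```

    Scores flow from the query along edges of the combined graph, so nodes connected to the query in any of the graphs receive positive scores while disconnected nodes stay at zero.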

  8. Time evolution of Wikipedia network ranking

    Science.gov (United States)

    Eom, Young-Ho; Frahm, Klaus M.; Benczúr, András; Shepelyansky, Dima L.

    2013-12-01

    We study the time evolution of ranking and spectral properties of the Google matrix of the English Wikipedia hyperlink network during the years 2003-2011. The statistical properties of the ranking of Wikipedia articles via PageRank and CheiRank probabilities, as well as the matrix spectrum, are shown to be stabilized for 2007-2011. Special emphasis is placed on the ranking of Wikipedia personalities and universities. We show that PageRank selection is dominated by politicians, while 2DRank, which combines PageRank and CheiRank, gives more weight to personalities from the arts. The Wikipedia PageRank of universities recovers 80% of the top universities of the Shanghai ranking during the considered time period.

  9. The Privilege of Ranking: Google Plays Ball.

    Science.gov (United States)

    Wiggins, Richard

    2003-01-01

    Discussion of ranking systems used in various settings, including college football and academic admissions, focuses on the Google search engine. Explains the PageRank mathematical formula that scores Web pages by connecting the number of links; limitations, including authenticity and accuracy of ranked Web pages; relevancy; adjusting algorithms;…

  10. The Objective Borderline Method (OBM): A Probability-Based Model for Setting up an Objective Pass/Fail Cut-Off Score in Medical Programme Assessments

    Science.gov (United States)

    Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim

    2013-01-01

    The decision to pass or fail a medical student is a "high stakes" one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the…

  11. Ranking in Swiss system chess team tournaments

    OpenAIRE

    Csató, László

    2015-01-01

    The paper uses paired comparison-based scoring procedures for ranking the participants of a Swiss system chess team tournament. We present the main challenges of ranking in Swiss systems, the features of individual and team competitions, as well as the failures of official lexicographical orders. The tournament is represented as a ranking problem, and our model is discussed with respect to the properties of the score, generalized row sum and least squares methods. The proposed procedure is illustra...

  12. University rankings in computer science

    DEFF Research Database (Denmark)

    Ehret, Philip; Zuccala, Alesia Ann; Gipp, Bela

    2017-01-01

    This is a research-in-progress paper concerning two types of institutional rankings, the Leiden and QS World ranking, and their relationship to a list of universities’ ‘geo-based’ impact scores, and Computing Research and Education Conference (CORE) participation scores in the field of computer science. A ‘geo-based’ impact measure examines the geographical distribution of incoming citations to a particular university’s journal articles for a specific period of time. It takes into account both the number of citations and the geographical variability in these citations. The CORE participation score is calculated on the basis of the number of weighted proceedings papers that a university has contributed to either an A*, A, B, or C conference as ranked by the Computing Research and Education Association of Australasia. In addition to calculating the correlations between the distinct university...

  13. PageRank of integers

    Science.gov (United States)

    Frahm, K. M.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2012-10-01

    We set up a directed network tracing links from a given integer to its divisors and analyze the properties of the Google matrix of this network. The PageRank vector of this matrix is computed numerically, and it is shown that its probability is approximately inversely proportional to the PageRank index, thus being similar to Zipf's law and the dependence established for the World Wide Web. The spectrum of the Google matrix of integers is characterized by a large gap and a relatively small number of nonzero eigenvalues. A simple semi-analytical expression for the PageRank of integers is derived that allows us to find this vector for billion-size matrices. This network provides a new PageRank order of integers.
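    The construction described in the abstract translates directly into code: link each integer to its proper divisors, form the Google matrix, and power-iterate. Network size, damping factor, and iteration count below are illustrative assumptions, not the paper's values (the paper uses a semi-analytical expression to reach billion-size matrices):

```python
import numpy as np

N = 200        # network size (illustrative)
alpha = 0.85   # damping factor (assumed standard value)

# Node n (column n-1) links to each of its proper divisors d (row d-1)
S = np.zeros((N, N))
for n in range(2, N + 1):
    divisors = [d for d in range(1, n) if n % d == 0]
    for d in divisors:
        S[d - 1, n - 1] = 1.0 / len(divisors)
S[:, 0] = 1.0 / N  # integer 1 has no outgoing links: treat as dangling (uniform column)

G = alpha * S + (1 - alpha) / N  # Google matrix
p = np.ones(N) / N
for _ in range(200):             # power iteration
    p = G @ p
p /= p.sum()

pagerank_order = np.argsort(-p) + 1  # integers ordered by decreasing PageRank probability
```

Small, highly divisible integers accumulate probability because every multiple links down to them; integer 1 receives a link from every node and tops the order.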

  14. University ranking methodologies. An interview with Ben Sowter about the Quacquarelli Symonds World University Ranking

    Directory of Open Access Journals (Sweden)

    Alberto Baccini

    2015-10-01

    Full Text Available University rankings represent a controversial issue in the debate about higher education policy. One of the best known university rankings is the Quacquarelli Symonds World University Rankings (QS), published annually since 2004 by Quacquarelli Symonds ltd, a company founded in 1990 and headquartered in London. QS provides a ranking based on a score calculated by weighting six different indicators. The 2015 edition, published in October 2015, introduced major methodological innovations and, as a consequence, many universities worldwide underwent major changes in their scores and ranks. Ben Sowter, head of the intelligence unit division of Quacquarelli Symonds, responds to 15 questions about the new QS methodology.

  15. Ranking library materials

    OpenAIRE

    Lewandowski, Dirk

    2015-01-01

    Purpose: This paper discusses ranking factors suitable for library materials and shows that ranking in general is a complex process and that ranking for library materials requires a variety of techniques. Design/methodology/approach: The relevant literature is reviewed to provide a systematic overview of suitable ranking factors. The discussion is based on an overview of ranking factors used in Web search engines. Findings: While there are a wide variety of ranking factors appl...

  16. University ranking methodologies. An interview with Ben Sowter about the Quacquarelli Symonds World University Ranking

    OpenAIRE

    Alberto Baccini; Antono Banfi; Giuseppe De Nicolao; Paola Galimberti

    2015-01-01

    University rankings represent a controversial issue in the debate about higher education policy. One of the best known university rankings is the Quacquarelli Symonds World University Rankings (QS), published annually since 2004 by Quacquarelli Symonds ltd, a company founded in 1990 and headquartered in London. QS provides a ranking based on a score calculated by weighting six different indicators. The 2015 edition, published in October 2015, introduced major methodological innovations and, as...

  17. Block models and personalized PageRank

    National Research Council Canada - National Science Library

    Kloumann, Isabel M; Ugander, Johan; Kleinberg, Jon

    2017-01-01

    ...? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of "landing probabilities" of a random walk...

  18. Comparing classical and quantum PageRanks

    Science.gov (United States)

    Loke, T.; Tang, J. W.; Rodriguez, J.; Small, M.; Wang, J. B.

    2017-01-01

    Following recent developments in quantum PageRanking, we present a comparative analysis of discrete-time and continuous-time quantum-walk-based PageRank algorithms. Relative to classical PageRank and to different extents, the quantum measures better highlight secondary hubs and resolve ranking degeneracy among peripheral nodes for all networks we studied in this paper. For the discrete-time case, we investigated the periodic nature of the walker's probability distribution for a wide range of networks and found that the dominant period does not grow with the size of these networks. Based on this observation, we introduce a new quantum measure using the maximum probabilities of the associated walker during the first couple of periods. This is particularly important, since it leads to a quantum PageRanking scheme that is scalable with respect to network size.

  19. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating eigenvalues and eigenvectors. We give a number of different applications to regression and time series analysis, and show how the reduced rank regression estimator can be derived as a Gaussian maximum likelihood estimator. We briefly mention asymptotic results.
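    One standard way to obtain the reduced rank regression estimator, consistent with the eigenvalue/eigenvector description above, is to project the ordinary least squares fit onto its leading singular directions. This sketch assumes the unweighted (identity error covariance) case, with invented dimensions and data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, r = 500, 6, 5, 2
X = rng.standard_normal((n, p))
B_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))  # rank-2 coefficients
Y = X @ B_true + 0.1 * rng.standard_normal((n, q))

# Unrestricted OLS fit, then reduce the rank via the SVD of the fitted values
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
P = Vt[:r].T @ Vt[:r]   # projection onto the r leading response directions
B_rrr = B_ols @ P       # rank-r reduced rank regression estimate
```

Keeping only the top r singular directions of the fitted values is the Eckart-Young step that enforces the rank constraint while changing the fit as little as possible.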

  20. A probabilistic verification score for contours demonstrated with idealized ice-edge forecasts

    Science.gov (United States)

    Goessling, Helge; Jung, Thomas

    2017-04-01

    We introduce a probabilistic verification score for ensemble-based forecasts of contours: the Spatial Probability Score (SPS). Defined as the spatial integral of local (Half) Brier Scores, the SPS can be considered the spatial analog of the Continuous Ranked Probability Score (CRPS). Applying the SPS to idealized seasonal ensemble forecasts of the Arctic sea-ice edge in a global coupled climate model, we demonstrate that the SPS responds properly to ensemble size, bias, and spread. When applied to individual forecasts or ensemble means (or quantiles), the SPS reduces to the 'volume' of mismatch, which in the case of the ice edge corresponds to the Integrated Ice Edge Error (IIEE).
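    The definition above can be sketched in a few lines: average the ensemble into a local probability field, take the local (half) Brier score at every grid cell, and integrate over space. The grid size, cell area, ensemble size, and random fields below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx, n_ens = 40, 60, 10
cell_area = 25.0 ** 2   # assumed area per grid cell, e.g. km^2

obs = (rng.random((ny, nx)) < 0.3).astype(float)          # observed binary field (ice / no ice)
ens = (rng.random((n_ens, ny, nx)) < 0.3).astype(float)   # ensemble of binary forecast fields

forecast_prob = ens.mean(axis=0)        # local ensemble probability of "ice"
local_bs = (forecast_prob - obs) ** 2   # local (half) Brier score per cell
sps = (local_bs * cell_area).sum()      # SPS: spatial integral of local Brier scores
```

For a single deterministic forecast, forecast_prob is 0 or 1 everywhere, so the SPS collapses to the total area of mismatch between forecast and observed contour, i.e. the IIEE in the ice-edge case.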

  1. Ranking Operations Management conferences

    NARCIS (Netherlands)

    Steenhuis, H.J.; de Bruijn, E.J.; Gupta, Sushil; Laptaned, U

    2007-01-01

    Several publications have appeared in the field of Operations Management which rank Operations Management related journals. Several ranking systems exist for journals based on, for example, perceived relevance and quality, citation, and author affiliation. Many academics also publish at conferences

  2. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    2017-01-01

    Subjective beliefs are elicited routinely in economics experiments. However, such elicitation often suffers from two possible disadvantages. First, beliefs are recovered in the form of a summary statistic, usually the mean, of the underlying latent distribution. Second, recovered beliefs are bias...

  3. Rank distributions: Frequency vs. magnitude.

    Science.gov (United States)

    Velarde, Carlos; Robledo, Alberto

    2017-01-01

    We examine the relationship between two different types of ranked data, frequencies and magnitudes. We consider data that can be sorted either way, through numbers of occurrences or size of the measures, as is the case, say, of moon craters, earthquakes, billionaires, etc. We indicate that these two types of distributions are functional inverses of each other, and specify this link, first in terms of the assumed parent probability distribution that generates the data samples, and then in terms of an analog (deterministic) nonlinear iterated map that reproduces them. For the particular case of hyperbolic decay with rank the distributions are identical, that is, the classical Zipf plot, a pure power law. But their difference is largest when one displays logarithmic decay and its counterpart shows inverse exponential decay, as is the case of Benford's law, or vice versa. For all intermediate decay rates generic differences appear not only between the power-law exponents for the midway rank decline but also for small and large rank. We extend the theoretical framework to include thermodynamic and statistical-mechanical concepts, such as entropies and configuration.

  4. Block models and personalized PageRank.

    Science.gov (United States)

    Kloumann, Isabel M; Ugander, Johan; Kleinberg, Jon

    2017-01-03

    Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods through the "seed set expansion problem": given a subset [Formula: see text] of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of "landing probabilities" of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work, we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter [Formula: see text] that depends on the block model parameters. This connection provides a formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance, despite being simple linear classification rules, and are even competitive with belief propagation.
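    The "landing probabilities" view described above can be made concrete: personalized PageRank is a geometric-weighted sum of the probabilities that a walk rooted at the seed set lands on each node after k steps. The toy graph, seed set, and parameter value below are assumptions for illustration:

```python
import numpy as np

# Toy undirected graph as an adjacency matrix (invented example)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0)   # column-stochastic random-walk matrix

s = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
s /= s.sum()            # uniform distribution over the seed set {0, 1}

alpha = 0.85
# Personalized PageRank as a weighted sum of k-step landing probabilities W^k s
ppr = np.zeros(5)
landing = s.copy()
for k in range(200):
    ppr += (1 - alpha) * alpha ** k * landing
    landing = W @ landing

ranked = np.argsort(-ppr)   # rank nodes by PPR for seed set expansion
```

The truncated sum converges to the closed form (1 - alpha) * (I - alpha * W)^-1 @ s; the paper's point is that a particular alpha makes this weighting optimal for separating block-model classes.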

  5. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially...

  6. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Levy processes, Gerber–Shiu functions and dependence.

  7. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  8. Ignition Probability

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...

  9. Risk Probabilities

    DEFF Research Database (Denmark)

    Rojas-Nandayapa, Leonardo

    Tail probabilities of sums of heavy-tailed random variables are of major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory and Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think... By doing so, we will obtain a deeper insight into how events involving large values of sums of heavy-tailed random variables are likely to occur.

  10. Maximum Waring ranks of monomials

    OpenAIRE

    Holmes, Erik; Plummer, Paul; Siegert, Jeremy; Teitler, Zach

    2013-01-01

    We show that monomials and sums of pairwise coprime monomials in four or more variables have Waring rank less than the generic rank, with a short list of exceptions. We asymptotically compare their ranks with the generic rank.

  11. How to Rank Journals.

    Science.gov (United States)

    Bradshaw, Corey J A; Brook, Barry W

    2016-01-01

    There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50); Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68-0.84 Spearman's ρ correlation between the two rankings datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.
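    A stripped-down version of the composite-ranking idea above: rank the journals separately under each citation metric, average the per-metric ranks into a composite, and resample to obtain an uncertainty window on each journal's rank. The metric values are invented, and plain bootstrap resampling stands in for the authors' κ-resampling:

```python
import numpy as np

rng = np.random.default_rng(0)
journals = ["J1", "J2", "J3", "J4", "J5"]
metrics = rng.random((5, len(journals)))   # rows: 5 citation indices (invented values)

def ranks(values):
    # Rank 1 = best, assuming a higher metric value is better
    order = np.argsort(-values)
    r = np.empty_like(order)
    r[order] = np.arange(1, len(values) + 1)
    return r

per_metric = np.array([ranks(m) for m in metrics])  # one rank vector per metric
composite = per_metric.mean(axis=0)                 # composite = mean rank across metrics

# Uncertainty: recompute the composite on bootstrap resamples of the metric set
boot = np.array([per_metric[rng.integers(0, 5, 5)].mean(axis=0) for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)   # relative rank uncertainty window
```

Journals whose windows overlap heavily cannot be meaningfully separated, which is the paper's motivation for reporting rank uncertainty rather than a single ordering.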

  12. Quantum Probabilities as Behavioral Probabilities

    Directory of Open Access Journals (Sweden)

    Vyacheslav I. Yukalov

    2017-03-01

    Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.

  13. Probability theory

    CERN Document Server

    S Varadhan, S R

    2001-01-01

    This volume presents topics in probability theory covered during a first-year graduate course given at the Courant Institute of Mathematical Sciences. The necessary background material in measure theory is developed, including the standard topics, such as extension theorem, construction of measures, integration, product spaces, Radon-Nikodym theorem, and conditional expectation. In the first part of the book, characteristic functions are introduced, followed by the study of weak convergence of probability distributions. Then both the weak and strong limit theorems for sums of independent rando

  14. Academic rankings: an approach to a Portuguese ranking

    OpenAIRE

    Bernardino, Pedro; Marques,Rui

    2009-01-01

    The academic rankings are a controversial subject in higher education. However, despite all the criticism, academic rankings are here to stay and more and more different stakeholders use rankings to obtain information about the institutions’ performance. The two most well-known rankings, The Times and the Shanghai Jiao Tong University rankings have different methodologies. The Times ranking is based on peer review, whereas the Shanghai ranking has only quantitative indicators and is mainly ba...

  15. Ranking Economic History Journals

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Weisdorf, Jacob Louis

    This study ranks - for the first time - 12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis where we adjust for age, size and self-citation of journals. We also compare the leading economic history journals with the leading journals in economics in order to measure the influence on economics of economic history, and vice versa. With a few exceptions, our results confirm the general idea about which economic history journals are the most influential...

  16. Ranking economic history journals

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Weisdorf, Jacob Louis

    2010-01-01

    This study ranks-for the first time-12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis where we adjust for age, size and self-citation of journals. We also compare the leading economic history journals with the leading journals in economics in order to measure the influence on economics of economic history, and vice versa. With a few exceptions, our results confirm the general idea about which economic history journals are the most influential for economic...

  17. Recurrent fuzzy ranking methods

    Science.gov (United States)

    Hajjari, Tayebeh

    2012-11-01

    With the increasing development of fuzzy set theory in various scientific fields, there is a growing need to compare fuzzy numbers in different areas. Ranking of fuzzy numbers therefore plays a very important role in linguistic decision-making, engineering, business and some other fuzzy application systems. Several strategies have been proposed for the ranking of fuzzy numbers. Each of these techniques has been shown to produce non-intuitive results in certain cases. In this paper, we review some recent ranking methods, which will be useful for researchers who are interested in this area.

  18. A network-based dynamical ranking system

    CERN Document Server

    Motegi, Shun

    2012-01-01

    Ranking players or teams in sports is of practical interest. From the viewpoint of networks, a ranking system is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., an aggregation of the results of games over time. However, the score (i.e., strength) of a player, for example, depends on time. Defeating a renowned player at their peak performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. Our ranking system, also interpreted as a centrality measure for directed temporal networks, has two parameters. One parameter represents the exponential decay rate of the past score, and the other parameter controls the effect of indirect wins on the score. We derive a set of linear online update equ...
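    The two ingredients described above, exponential decay of past scores and extra credit for defeating currently strong opponents, can be sketched with a simple online update. The update rule and parameter values here are illustrative assumptions, not the authors' exact equations:

```python
import math

def update(scores, winner, loser, t, last_t, beta=0.1, q=0.3):
    """One online update per game; beta is the decay rate, q weights indirect wins."""
    # Decay every score for the time elapsed since the previous game
    decay = math.exp(-beta * (t - last_t))
    for k in scores:
        scores[k] *= decay
    # Winner gains a base point plus a fraction of the (decayed) loser's score,
    # so beating a currently strong opponent is worth more
    scores[winner] += 1.0 + q * scores[loser]
    return t

scores = {"A": 0.0, "B": 0.0, "C": 0.0}
last = 0.0
games = [(1.0, "A", "B"), (2.0, "B", "C"), (3.0, "A", "C")]  # (time, winner, loser)
for t, w, l in games:
    last = update(scores, w, l, t, last)
```

With beta = 0 and q = 0 this reduces to a plain win count; the two parameters control how quickly old results fade and how much opponent strength propagates through the network of games.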

  19. Asset ranking manager (ranking index of components)

    Energy Technology Data Exchange (ETDEWEB)

    Maloney, S.M.; Engle, A.M.; Morgan, T.A. [Applied Reliability, Maracor Software and Engineering (United States)

    2004-07-01

    The Ranking Index of Components (RIC) is part of the Asset Reliability Manager (ARM), a Web-enabled front end in which plant database information fields from several disparate databases are combined. That information is used to create a specific weighted number (the Ranking Index) relating to that component's health and risk to the site. The higher the number, the higher the priority that any work associated with that component receives. ARM provides site Engineering, Maintenance and Work Control personnel with a composite, real-time (current-condition) view of a component's 'risk of not working' for the plant. Information is extracted from the existing Computerized Maintenance Management System (CMMS) and specific site applications and processed nightly. ARM helps to ensure that the most important work is placed into the workweeks and that non-value-added work is either deferred, changed in frequency, or deleted. This information is on the web, updated each night, and available for all employees to use. This effort assists the work management specialist when allocating limited resources to the most important work. The use of this tool has maximized resource usage, performing the most critical work with the available resources. The ARM numbers are valued inputs into work scoping for the workweek managers. System and Component Engineers use ARM to identify components that are at 'risk of failure' and should therefore be placed into the appropriate workweek schedule.

  20. Partial Kernelization for Rank Aggregation: Theory and Experiments

    Science.gov (United States)

    Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf

    Rank Aggregation is important in many areas, ranging from web search and databases to bioinformatics. The underlying decision problem, Kemeny Score, is NP-complete even in the case of four input rankings to be aggregated into a "median ranking". We study efficient polynomial-time data reduction rules that allow us to find optimal median rankings. On the theoretical side, we improve a result for a "partial problem kernel" from quadratic to linear size. On the practical side, we provide encouraging experimental results with data based on web search and sports competitions, e.g., computing optimal median rankings for real-world instances with more than 100 candidates within milliseconds.
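    A brute-force Kemeny median for a handful of candidates makes the underlying objective concrete: choose the ranking minimizing the total number of pairwise disagreements (Kendall tau distance) with the input rankings. The instance below is invented; real instances need the data reduction rules the paper studies, since the search space grows factorially:

```python
from itertools import permutations

def kendall_tau(r1, r2):
    # Number of candidate pairs ordered differently by the two rankings
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    cands = list(r1)
    return sum(1 for i in range(len(cands)) for j in range(i + 1, len(cands))
               if (pos1[cands[i]] - pos1[cands[j]]) * (pos2[cands[i]] - pos2[cands[j]]) < 0)

def kemeny_median(rankings):
    # Exhaustive search: feasible only for a small number of candidates
    cands = rankings[0]
    best = min(permutations(cands),
               key=lambda r: sum(kendall_tau(r, v) for v in rankings))
    return best, sum(kendall_tau(best, v) for v in rankings)

votes = [("a", "b", "c", "d"), ("a", "c", "b", "d"),
         ("b", "a", "c", "d"), ("a", "b", "d", "c")]
median, score = kemeny_median(votes)
```

Here each pairwise majority is consistent, so the median simply follows the majority on every pair and the Kemeny score equals the three minority votes it overrules.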

  1. Multiplex PageRank.

    Directory of Open Access Journals (Sweden)

    Arda Halu

    Full Text Available Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.

  2. Multiplex PageRank.

    Science.gov (United States)

    Halu, Arda; Mondragón, Raúl J; Panzarasa, Pietro; Bianconi, Ginestra

    2013-01-01

    Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.

  3. Ranking of Rankings: Benchmarking Twenty-Five Higher Education Ranking Systems in Europe

    Science.gov (United States)

    Stolz, Ingo; Hendel, Darwin D.; Horn, Aaron S.

    2010-01-01

    The purpose of this study is to evaluate the ranking practices of 25 European higher education ranking systems (HERSs). Ranking practices were assessed with 14 quantitative measures derived from the Berlin Principles on Ranking of Higher Education Institutions (BPs). HERSs were then ranked according to their degree of congruence with the BPs.…

  4. Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank

    Directory of Open Access Journals (Sweden)

    LI Lan-yin

    2017-04-01

    Full Text Available The traditional PageRank algorithm cannot efficiently handle the ranking of large-scale Web page data. This paper proposes an accelerated algorithm named topK-Rank, which is based on PageRank on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find the top k nodes much faster than existing approaches.
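    The lower/upper-bound pruning idea can be illustrated with a small single-machine sketch (an assumed simplification of the paper's MapReduce version): partial sums of the PageRank power series give per-node lower bounds, and the probability mass not yet distributed bounds how much any node can still gain, so the top-k set can be certified before full convergence.

```python
import numpy as np

def topk_pagerank(adj, k, alpha=0.85, max_iter=200):
    """Certify the top-k PageRank nodes from iteration bounds.
    adj[i, j] = weight of edge j -> i. The partial Neumann sum
    (1-a) * sum_{t<=T} a^t P^t v is a per-node lower bound; adding the
    undistributed mass a^(T+1) gives a per-node upper bound."""
    n = adj.shape[0]
    col = adj.sum(axis=0)
    P = adj / np.where(col == 0, 1.0, col)
    P[:, col == 0] = 1.0 / n                  # dangling nodes jump uniformly
    term = np.full(n, 1.0 / n)
    lower = (1 - alpha) * term
    for t in range(1, max_iter + 1):
        term = alpha * (P @ term)
        lower = lower + (1 - alpha) * term
        upper = lower + alpha ** (t + 1)
        kth_lower = np.partition(lower, -k)[-k]
        candidates = np.flatnonzero(upper >= kth_lower)
        if len(candidates) == k:              # only k nodes can still reach the top
            return candidates[np.argsort(-lower[candidates])]
    return np.argsort(-lower)[:k]             # fallback: best estimate so far
```

    The real algorithm additionally prunes nodes and edges from the graph itself between iterations; the bound logic above is the part that guarantees exactness of the returned set.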

  5. Microsatellite pathologic score does not efficiently identify high microsatellite instability in colorectal serrated adenocarcinoma.

    Science.gov (United States)

    García-Solano, José; Conesa-Zamora, Pablo; Carbonell, Pablo; Trujillo-Santos, Javier; Torres-Moreno, Daniel; Rodriguez-Braun, Edith; Vicente-Ortega, Vicente; Pérez-Guillermo, Miguel

    2013-05-01

    The microsatellite pathologic score has been proposed as a valuable tool to estimate the probability of a colorectal cancer having high microsatellite instability; however, this score has not been tested in serrated adenocarcinoma. Our aim was to evaluate the microsatellite pathologic score in serrated adenocarcinoma, conventional carcinoma, and colorectal cancer with high microsatellite instability histologic features. Eighty-nine serrated adenocarcinomas and 81 matched conventional carcinomas were tested with the microsatellite pathologic score, and the results were compared with those of 24 colorectal cancers with high microsatellite instability histologic features. Validation was performed by microsatellite instability analysis. Although all colorectal cancers with high microsatellite instability histologic features scored above 5.5, the microsatellite pathologic score performed worse in high microsatellite instability serrated adenocarcinoma because none of those cases scored above 5.5 (>77% probability of being high microsatellite instability). High microsatellite instability serrated adenocarcinoma shows pathologic features different from the typical high microsatellite instability histologic features, such as adverse prognostic histologic features at the invasive front. We describe a serrated adenocarcinoma subtype showing high microsatellite instability and some, but not all, high microsatellite instability histologic features, which would not be detected if the microsatellite pathologic score cutoff is set at the highest rank. To increase the sensitivity of the microsatellite pathologic score in serrated adenocarcinoma, we propose a cutoff score of 2.1 when faced with a right-sided colorectal cancer with serrated features. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Age, Size and Citation Scores: A Closer Look at Rankings

    NARCIS (Netherlands)

    van Rooij, Jules

    2017-01-01

    By comparing the Top-300 lists of four global university rankings (ARWU, THE, QS, Leiden), three hypotheses are tested: 1) position correlates with size in the ARWU more than in the THE and QS; 2) given their strong dependency on reputation scores, position will be correlated more with a

  7. From rankings to mission.

    Science.gov (United States)

    Kirch, Darrell G; Prescott, John E

    2013-08-01

    Since the 1980s, school ranking systems have been a topic of discussion among leaders of higher education. Various ranking systems are based on inadequate data that fail to illustrate the complex nature and special contributions of the institutions they purport to rank, including U.S. medical schools, each of which contributes uniquely to meeting national health care needs. A study by Tancredi and colleagues in this issue of Academic Medicine illustrates the limitations of rankings specific to primary care training programs. This commentary discusses, first, how each school's mission and strengths, as well as the impact it has on the community it serves, are distinct, and, second, how these schools, which are each unique, are poorly represented by overly subjective ranking methodologies. Because academic leaders need data that are more objective to guide institutional development, the Association of American Medical Colleges (AAMC) has been developing tools to provide valid data that are applicable to each medical school. Specifically, the AAMC's Medical School Admissions Requirements and its Missions Management Tool each provide a comprehensive assessment of medical schools that leaders are using to drive institutional capacity building. This commentary affirms the importance of mission while challenging the leaders of medical schools, teaching hospitals, and universities to use reliable data to continually improve the quality of their training programs to improve the health of all.

  8. Dynamic Matrix Rank

    DEFF Research Database (Denmark)

    Frandsen, Gudmund Skovbjerg; Frandsen, Peter Frands

    2009-01-01

    We consider maintaining information about the rank of a matrix under changes of the entries. For n×n matrices, we show an upper bound of O(n^1.575) arithmetic operations and a lower bound of Ω(n) arithmetic operations per element change. The upper bound is valid when changing up to O(n^0.575) entries in a single column of the matrix. We also give an algorithm that maintains the rank using O(n^2) arithmetic operations per rank-one update. These bounds appear to be the first nontrivial bounds for the problem. The upper bounds are valid for arbitrary fields, whereas the lower bound is valid for algebraically closed fields. The upper bound for element updates uses fast rectangular matrix multiplication, and the lower bound involves further development of an earlier technique for proving lower bounds for dynamic computation of rational functions.

  9. Estimating Independent Locally Shifted Random Utility Models for Ranking Data

    Science.gov (United States)

    Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans

    2011-01-01

    We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…

  10. Inflation of type I error rates by unequal variances associated with parametric, nonparametric, and Rank-Transformation Tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    Full Text Available It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
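    The study's design can be reproduced in a few lines of Monte Carlo (a sketch with assumed parameters: exponential scores for skewness, equal sample sizes, a 4:1 variance ratio). Both groups share the same mean, so every rejection is a Type I error, and the same t test is run on the raw scores and on their pooled ranks (the rank-transform version, which tracks the Wilcoxon-Mann-Whitney test).

```python
import numpy as np
from scipy import stats

def type1_error_rates(n=20, var_ratio=4.0, trials=2000, alpha=0.05, seed=7):
    """Empirical Type I error of the two-sample t test under equal means,
    equal sample sizes, skewed (exponential) scores and unequal variances,
    computed on raw scores and on rank-transformed scores."""
    rng = np.random.default_rng(seed)
    rej_raw = rej_rank = 0
    for _ in range(trials):
        g1 = rng.exponential(1.0, n) - 1.0                     # mean 0, var 1
        g2 = (rng.exponential(1.0, n) - 1.0) * var_ratio**0.5  # mean 0, var = var_ratio
        if stats.ttest_ind(g1, g2).pvalue < alpha:
            rej_raw += 1
        ranks = stats.rankdata(np.concatenate([g1, g2]))       # rank transform
        if stats.ttest_ind(ranks[:n], ranks[n:]).pvalue < alpha:
            rej_rank += 1
    return rej_raw / trials, rej_rank / trials
```

    With var_ratio=1.0 both rates sit near the nominal 0.05; with var_ratio=4.0 the rank-transform rate inflates well above nominal, matching the article's conclusion that heterogeneous variances distort the rank-based tests even when sample sizes are equal.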

  11. Multicolinearity and Indicator Redundancy Problem in World University Rankings: An Example Using Times Higher Education World University Ranking 2013-2014 Data

    Science.gov (United States)

    Kaycheng, Soh

    2015-01-01

    World university ranking systems use the weight-and-sum approach to combine indicator scores into overall scores on which the universities are then ranked. This approach assumes that the indicators all independently contribute to the overall score in the specified proportions. In reality, this assumption is doubtful as the indicators tend to…

  12. Diversifying customer review rankings.

    Science.gov (United States)

    Krestel, Ralf; Dokoohaki, Nima

    2015-06-01

    E-commerce Web sites owe much of their popularity to consumer reviews accompanying product descriptions. On-line customers spend hours and hours going through heaps of textual reviews to decide which products to buy. At the same time, each popular product has thousands of user-generated reviews, making it impossible for a buyer to read everything. Current approaches to display reviews to users or recommend an individual review for a product are based on the recency or helpfulness of each review. In this paper, we present a framework to rank product reviews by optimizing the coverage of the ranking with respect to sentiment or aspects, or by summarizing all reviews with the top-K reviews in the ranking. To accomplish this, we make use of the assigned star rating for a product as an indicator for a review's sentiment polarity and compare bag-of-words (language model) with topic models (latent Dirichlet allocation) as a mean to represent aspects. Our evaluation on manually annotated review data from a commercial review Web site demonstrates the effectiveness of our approach, outperforming plain recency ranking by 30% and obtaining best results by combining language and topic model representations. Copyright © 2015 Elsevier Ltd. All rights reserved.
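    The coverage objective can be illustrated with a greedy top-K selection (a sketch: aspects are given per review as plain sets, standing in for the sentiment labels or LDA topics inferred in the paper):

```python
def rank_reviews_by_coverage(reviews, k):
    """Greedy max-coverage: pick up to k reviews so that the union of covered
    aspects grows as fast as possible. `reviews` is a list of aspect sets."""
    covered, ranking = set(), []
    remaining = dict(enumerate(reviews))
    while remaining and len(ranking) < k:
        best = max(remaining, key=lambda i: len(remaining[i] - covered))
        covered |= remaining.pop(best)     # add the newly covered aspects
        ranking.append(best)
    return ranking
```

    Greedy selection is the standard approximation for max-coverage objectives like this; each step picks the review adding the most not-yet-covered aspects, so near-duplicate reviews are pushed down the ranking.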

  13. OutRank

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Steinhausen, Uwe

    2008-01-01

    Outlier detection is an important data mining task for consistency checks, fraud detection, etc. Binary decision making on whether or not an object is an outlier is not appropriate in many applications and moreover hard to parametrize. Thus, recently, methods for outlier ranking have been proposed...

  14. Combining Document-and Paragraph-Based Entity Ranking

    NARCIS (Netherlands)

    Rode, H.; Serdyukov, Pavel; Hiemstra, Djoerd

    2008-01-01

    We study entity ranking on the INEX entity track and propose a simple graph-based ranking approach that enables combining scores at the document and paragraph levels. The combined approach improves the retrieval results not only on the INEX test set, but similarly on TREC's expert finding task.

  15. University Rankings: How Well Do They Measure Library Service Quality?

    Science.gov (United States)

    Jackson, Brian

    2015-01-01

    University rankings play an increasingly large role in shaping the goals of academic institutions and departments, while removing universities themselves from the evaluation process. This study compares the library-related results of two university ranking publications with scores on the LibQUAL+™ survey to identify if library service quality--as…

  16. PageRank model of opinion formation on social networks

    Science.gov (United States)

    Kandiah, Vivek; Shepelyansky, Dima L.

    2012-11-01

    We propose the PageRank model of opinion formation and investigate its rich properties on real directed networks of the Universities of Cambridge and Oxford, LiveJournal, and Twitter. In this model, the opinion formation of linked electors is weighted with their PageRank probability. Such a probability is used by the Google search engine for ranking of web pages. We find that the society elite, corresponding to the top PageRank nodes, can impose its opinion on a significant fraction of the society. However, for a homogeneous distribution of two opinions, there exists a bistability range of opinions which depends on a conformist parameter characterizing the opinion formation. We find that the LiveJournal and Twitter networks have a stronger tendency to a totalitarian opinion formation than the university networks. We also analyze the Sznajd model generalized for scale-free networks with the weighted PageRank vote of electors.

  17. Fractional cointegration rank estimation

    DEFF Research Database (Denmark)

    Lasak, Katarzyna; Velasco, Carlos

    We consider cointegration rank estimation for a p-dimensional Fractional Vector Error Correction Model. We propose a new two-step procedure which allows testing for further long-run equilibrium relations with possibly different persistence levels. The first step consists in estimating the parameters … to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional…

  18. Resolution of ranking hierarchies in directed networks

    Science.gov (United States)

    Barucca, Paolo; Lillo, Fabrizio

    2018-01-01

    Identifying hierarchies and rankings of nodes in directed graphs is fundamental in many applications such as social network analysis, biology, economics, and finance. A recently proposed method identifies the hierarchy by finding the ordered partition of nodes which minimises a score function, termed agony. This function penalises the links violating the hierarchy in a way depending on the strength of the violation. To investigate the resolution of ranking hierarchies we introduce an ensemble of random graphs, the Ranked Stochastic Block Model. We find that agony may fail to identify hierarchies when the structure is not strong enough and the size of the classes is small with respect to the whole network. We analytically characterise the resolution threshold and we show that an iterated version of agony can partly overcome this resolution limit. PMID:29394278
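    The score function can be stated compactly (a sketch using the common convention that an edge u -> v should point from lower to higher rank; a violating edge is penalised by the depth of the violation, rank[u] - rank[v] + 1):

```python
def agony(edges, rank):
    """Total agony of a ranking: edges that respect the hierarchy
    (rank[u] < rank[v]) cost nothing; a violating edge u -> v costs
    rank[u] - rank[v] + 1, growing with the strength of the violation."""
    return sum(max(rank[u] - rank[v] + 1, 0) for u, v in edges)
```

    Minimising this quantity over ordered partitions of the nodes recovers the hierarchy, subject to the resolution limits the abstract characterises.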

  19. VaRank: a simple and powerful tool for ranking genetic variants

    Directory of Open Access Journals (Sweden)

    Véronique Geoffroy

    2015-03-01

    Full Text Available Background. Most genetic disorders are caused by single nucleotide variations (SNVs) or small insertions/deletions (indels). High throughput sequencing has broadened the catalogue of human variation, including common polymorphisms, rare variations or disease causing mutations. However, identifying one variation among hundreds or thousands of others is still a complex task for biologists, geneticists and clinicians. Results. We have developed VaRank, a command-line tool for the ranking of genetic variants detected by high-throughput sequencing. VaRank scores and prioritizes variants annotated either by Alamut Batch or SnpEff. A barcode allows users to quickly view the presence/absence of variants (with homozygote/heterozygote status) in analyzed samples. VaRank supports the commonly used VCF input format for variant analysis, thus allowing it to be easily integrated into NGS bioinformatics analysis pipelines. VaRank has been successfully applied to disease-gene identification as well as to molecular diagnostics setup for several hundred patients. Conclusions. VaRank is implemented in Tcl/Tk, a scripting language which is platform-independent but has been tested only on Unix environments. The source code is available under the GNU GPL, and together with sample data and detailed documentation can be downloaded from http://www.lbgi.fr/VaRank/.

  20. Probability workshop to be better in probability topic

    Science.gov (United States)

    Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed

    2015-02-01

    The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students in higher education have an effect on their performance. 62 fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance in the probability topic is not related to anxiety level; that is, a higher level of statistics anxiety does not lead to a lower score in probability performance. The study also revealed that students who attended the probability workshop showed a positive improvement in probability performance compared with before the workshop. In addition, there was a significant difference in performance between genders, with better achievement among female students than male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.

  1. Can College Rankings Be Believed?

    Directory of Open Access Journals (Sweden)

    Meredith Davis

    Full Text Available The article summarizes literature on college and university rankings worldwide and the strategies used by various ranking organizations, including those of government and popular media. It traces the history of national and global rankings, indicators used by ranking systems, and the effect of rankings on academic programs and their institutions. Although ranking systems employ diverse criteria and most weight certain indicators over others, there is considerable skepticism that most actually measure educational quality. At the same time, students and their families increasingly consult these evaluations when making college decisions, and sponsors of faculty research consider reputation when forming academic partnerships. While there are serious concerns regarding the validity of ranking institutions when so little data can support differences between one institution and another, college rankings appear to be here to stay.

  2. Ranking Baltic States Researchers

    Directory of Open Access Journals (Sweden)

    Gyula Mester

    2017-10-01

    Full Text Available In this article, using the h-index and the total number of citations, the best 10 Lithuanian, Latvian and Estonian researchers from several disciplines are ranked. The list may be formed based on the h-index and the total number of citations given in the Web of Science, Scopus, Publish or Perish and Google Scholar databases. Data for the first 10 researchers are presented. Google Scholar is the most complete; therefore, to define a single indicator, the h-index calculated by Google Scholar may be a good and simple one. The author chooses the Google Scholar database as it is the broadest one.
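    The h-index used above is simple to compute from a citation list (a minimal sketch): it is the largest h such that h of the researcher's papers have at least h citations each.

```python
def h_index(citations):
    """Largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)   # most-cited papers first
    return sum(1 for i, c in enumerate(ranked) if c >= i + 1)
```

    For example, citation counts [10, 8, 5, 4, 3] give h = 4: four papers have at least four citations each, but there are not five papers with five.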

  3. Sync-rank: Robust Ranking, Constrained Ranking and Rank Aggregation via Eigenvector and SDP Synchronization

    Science.gov (United States)

    2015-04-28

    …eigenvector of the associated Laplacian matrix (i.e., the Fiedler vector) matches that of the variables. In other words, this approach (reminiscent of …), where D_ii = Σ_{j=1}^{n} G_{i,j} is the degree of node i in the measurement graph G, proceeds as: 3: Compute the Fiedler vector of S (the eigenvector corresponding to the smallest nonzero eigenvalue of L_S). 4: Output the ranking induced by sorting the Fiedler vector of S, with the global ordering (increasing or decreasing)…

  4. Rankings from Fuzzy Pairwise Comparisons

    NARCIS (Netherlands)

    van den Broek, P.M.; Noppen, J.A.R.; Mohammadian, M.

    2006-01-01

    We propose a new method for deriving rankings from fuzzy pairwise comparisons. It is based on the observation that quantification of the uncertainty of the pairwise comparisons should be used to obtain a better crisp ranking, instead of a fuzzified version of the ranking obtained from crisp pairwise

  5. University Rankings and Social Science

    Science.gov (United States)

    Marginson, Simon

    2014-01-01

    University rankings widely affect the behaviours of prospective students and their families, university executive leaders, academic faculty, governments and investors in higher education. Yet the social science foundations of global rankings receive little scrutiny. Rankings that simply recycle reputation without any necessary connection to real…

  6. Reconsidering the use of rankings in the valuation of health states: a model for estimating cardinal values from ordinal data

    Directory of Open Access Journals (Sweden)

    Salomon Joshua A

    2003-12-01

    Full Text Available Background. In survey studies on health-state valuations, ordinal ranking exercises often are used as precursors to other elicitation methods such as the time trade-off (TTO) or standard gamble, but the ranking data have not been used in deriving cardinal valuations. This study reconsiders the role of ordinal ranks in valuing health and introduces a new approach to estimate interval-scaled valuations based on aggregate ranking data. Methods. Analyses were undertaken on data from a previously published general population survey study in the United Kingdom that included rankings and TTO values for hypothetical states described using the EQ-5D classification system. The EQ-5D includes five domains (mobility, self-care, usual activities, pain/discomfort and anxiety/depression) with three possible levels on each. Rank data were analysed using a random utility model, operationalized through conditional logit regression. In the statistical model, probabilities of observed rankings were related to the latent utilities of different health states, modeled as a linear function of EQ-5D domain scores, as in previously reported EQ-5D valuation functions. Predicted valuations based on the conditional logit model were compared to observed TTO values for the 42 states in the study and to predictions based on a model estimated directly from the TTO values. Models were evaluated using the intraclass correlation coefficient (ICC) between predictions and mean observations, and the root mean squared error of predictions at the individual level. Results. Agreement between predicted valuations from the rank model and observed TTO values was very high, with an ICC of 0.97, only marginally lower than for predictions based on the model estimated directly from TTO values (ICC = 0.99). Individual-level errors were also comparable in the two models, with root mean squared errors of 0.503 and 0.496 for the rank-based and TTO-based predictions, respectively. Conclusions

  7. Sequential rank agreement methods for comparison of ranked lists

    DEFF Research Database (Denmark)

    Ekstrøm, Claus Thorn; Gerds, Thomas Alexander; Jensen, Andreas Kryger

    2015-01-01

    The comparison of alternative rankings of a set of items is a general and prominent task in applied statistics. Predictor variables are ranked according to magnitude of association with an outcome, prediction models rank subjects according to the personalized risk of an event, and genetic studies produce ranked gene lists. The proposed sequential rank agreement methods are illustrated using gene rankings, and using data from two Danish ovarian cancer studies where we assess the within and between agreement of different statistical classification methods.

  8. Evaluating ranking methods on heterogeneous digital library collections

    CERN Document Server

    Canévet, Olivier; Marian, Ludmila; Chonavel, Thierry

    In the frame of research in particle physics, CERN has been developing its own web-based software Invenio to run the digital library of all the documents related to CERN and fundamental physics. The documents (articles, photos, news, theses, ...) can be retrieved through a search engine. The results matching the query of the user can be displayed in several ways: sorted by latest first, author, title, and also ranked by word similarity. The purpose of this project is to study and implement a new ranking method in Invenio: distributed-ranking (D-Rank). This method aims at aggregating several ranking scores coming from different ranking methods into a new score. In addition to query-related scores such as word similarity, the goal of the work is to take into account non-query-related scores such as citations, journal impact factor and in particular scores related to the document access frequency in the database. The idea is that for two equally query-relevant documents, if one has been more downloaded for inst...
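    The aggregation step can be sketched as follows (an assumed scheme for illustration: each ranking method's scores are min-max normalised so they are comparable, then combined with fixed weights; the actual D-Rank weighting may differ):

```python
def aggregate_scores(score_maps, weights):
    """Combine several per-document score maps into one ranked list.
    Each map is min-max normalised to [0, 1], then weighted and summed."""
    combined = {}
    for scores, w in zip(score_maps, weights):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0                      # guard against constant scores
        for doc, s in scores.items():
            combined[doc] = combined.get(doc, 0.0) + w * (s - lo) / span
    return sorted(combined, key=combined.get, reverse=True)
```

    Shifting weight between a query-related score (word similarity) and a query-independent one (download counts) can reorder the result list, which is the behaviour the project sets out to study.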

  9. Probability Aggregates in Probability Answer Set Programming

    OpenAIRE

    Saad, Emad

    2013-01-01

    Probability answer set programming is a declarative programming paradigm that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...

  10. Ranking nodes in growing networks: When PageRank fails.

    Science.gov (United States)

    Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng

    2015-11-10

    PageRank is arguably the most popular ranking algorithm which is being applied in real systems ranging from information to biological and infrastructure networks. Despite its outstanding popularity and broad use in different areas of science, the relation between the algorithm's efficacy and properties of the network on which it acts has not yet been fully understood. We study here PageRank's performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail in individuating the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms based on the temporal linking patterns of these systems are needed to better rank the nodes.

  11. Feature fusion using ranking for object tracking in aerial imagery

    Science.gov (United States)

    Candemir, Sema; Palaniappan, Kannappan; Bunyak, Filiz; Seetharaman, Guna

    2012-06-01

    Aerial wide-area monitoring and tracking using multi-camera arrays poses unique challenges compared to standard full motion video analysis due to low frame rate sampling, accurate registration due to platform motion, low resolution targets, limited image contrast, and static and dynamic parallax occlusions. We have developed a low frame rate tracking system that fuses a rich set of intensity, texture and shape features, which enables adaptation of the tracker to dynamic environment changes and target appearance variabilities. However, improper fusion and overweighting of low quality features can adversely affect target localization and reduce tracking performance. Moreover, the large computational cost associated with extracting a large number of image-based feature sets will influence tradeoffs for real-time and on-board tracking. This paper presents a framework for dynamic online ranking-based feature evaluation and fusion in aerial wide-area tracking. We describe a set of efficient descriptors suitable for small sized targets in aerial video based on intensity, texture, and shape feature representations or views. Feature ranking is then used as a selection procedure where target-background discrimination power for each (raw) feature view is scored using a two-class variance ratio approach. A subset of the k best discriminative features is selected for further processing and fusion. The target match probability or likelihood maps for each of the k features are estimated by comparing target descriptors within a search region using a sliding window approach. The resulting k likelihood maps are fused for target localization using the normalized variance ratio weights. We quantitatively measure the performance of the proposed system using ground-truth tracks within the framework of our tracking evaluation test-bed that incorporates various performance metrics. The proposed feature ranking and fusion approach increases localization accuracy by reducing multimodal effects.
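    The two-class variance ratio step can be sketched as follows (a minimal version with assumed inputs: feature values sampled at target and background pixels; a higher ratio of pooled to within-class variance means better discrimination, and the k best views get normalised fusion weights):

```python
import numpy as np

def variance_ratio(target, background):
    """Pooled variance over summed within-class variances for one feature view."""
    pooled = np.concatenate([target, background])
    within = target.var() + background.var()
    return pooled.var() / max(within, 1e-12)   # guard against zero within-class var

def select_and_weight(target_views, background_views, k):
    """Rank feature views by variance ratio; return the k best indices and
    normalised weights for fusing their likelihood maps."""
    vr = np.array([variance_ratio(t, b)
                   for t, b in zip(target_views, background_views)])
    top = np.argsort(vr)[::-1][:k]             # best-discriminating views first
    return top, vr[top] / vr[top].sum()
```

    Recomputing the scores online, as the paper proposes, lets the weights adapt when a feature view degrades (e.g. under contrast loss), down-weighting it before it corrupts the fused likelihood map.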

  12. Neophilia Ranking of Scientific Journals

    Science.gov (United States)

    Packalen, Mikko; Bhattacharya, Jay

    2017-01-01

    The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations)—these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists' incentives to pursue innovative work. PMID:28713181

  13. Relationship between Journal-Ranking Metrics for a Multidisciplinary Set of Journals

    Science.gov (United States)

    Perera, Upeksha; Wijewickrema, Manjula

    2018-01-01

    Ranking of scholarly journals is important to many parties. Studying the relationships among various ranking metrics is key to understanding the significance of one metric based on another. This research investigates the relationship among four major journal-ranking indicators: the impact factor (IF), the Eigenfactor score (ES), the "h."…

  14. Beyond Zipf's Law: The Lavalette Rank Function and its Properties

    CERN Document Server

    Fontanelli, Oscar; Yang, Yaning; Cocho, Germinal; Li, Wentian

    2016-01-01

    Although Zipf's law is widespread in natural and social data, one often encounters situations where one or both ends of the ranked data deviate from the power-law function. Previously we proposed the Beta rank function to improve the fitting of data which does not follow a perfect Zipf's law. Here we show that when the two parameters in the Beta rank function have the same value, the Lavalette rank function, the probability density function can be derived analytically. We also show both computationally and analytically that Lavalette distribution is approximately equal, though not identical, to the lognormal distribution. We illustrate the utility of Lavalette rank function in several datasets. We also address three analysis issues on the statistical testing of Lavalette fitting function, comparison between Zipf's law and lognormal distribution through Lavalette function, and comparison between lognormal distribution and Lavalette distribution.
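    In the notation above, the Lavalette rank function can be written and checked directly (a sketch using one common parameterisation, f(r) = A((N+1-r)/r)^b, i.e. the Beta rank function A(N+1-r)^b / r^a with equal exponents a = b):

```python
import numpy as np

def lavalette(r, N, A, b):
    """Lavalette rank function: the Beta rank function with equal exponents."""
    r = np.asarray(r, dtype=float)
    return A * ((N + 1 - r) / r) ** b

ranks = np.arange(1, 11)
vals = lavalette(ranks, N=10, A=2.0, b=0.5)
```

    On log-log axes this curve bends away from a pure power law at both ends of the rank range, which is exactly the deviation from Zipf's law that motivates the fit.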

  15. Scaling Qualitative Probability

    OpenAIRE

    Burgin, Mark

    2017-01-01

    There are different approaches to qualitative probability, which includes subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...

  16. Improving ranking of models for protein complexes with side chain modeling and atomic potentials.

    Science.gov (United States)

    Viswanath, Shruthi; Ravikant, D V S; Elber, Ron

    2013-04-01

    An atomically detailed potential for docking pairs of proteins is derived using mathematical programming. A refinement algorithm that builds atomically detailed models of the complex and combines coarse-grained and atomic scoring is introduced. The refinement step consists of remodeling the interface side chains of the top-scoring decoys from rigid docking, followed by a short energy minimization. The refined models are then re-ranked using a combination of coarse-grained and atomic potentials. The docking algorithm, including the refinement and re-ranking, compares favorably to other leading docking packages such as ZDOCK, Cluspro, and PATCHDOCK on the ZLAB 3.0 Benchmark and a test set of 30 novel complexes. A detailed analysis shows that coarse-grained potentials perform better than atomic potentials for realistic unbound docking (where the exact structures of the individual bound proteins are unknown), probably because atomic potentials are more sensitive to local errors. Nevertheless, the atomic potential captures a different signal from the residue potential, and as a result a combination of the two scores provides a significantly better prediction than either approach alone. Copyright © 2012 Wiley Periodicals, Inc.

  17. Wikipedia ranking of world universities

    Science.gov (United States)

    Lages, José; Patt, Antoine; Shepelyansky, Dima L.

    2016-03-01

    We use the directed networks between articles of 24 Wikipedia language editions to produce the Wikipedia ranking of world universities (WRWU) using the PageRank, 2DRank and CheiRank algorithms. This approach allows us to incorporate various cultural views on world universities using mathematical statistical analysis independent of cultural preferences. The Wikipedia ranking of the top 100 universities shows about 60% overlap with the Shanghai university ranking, demonstrating the reliability of this approach. At the same time, WRWU incorporates all knowledge accumulated across the 24 Wikipedia editions, giving stronger weight to historically important universities and leading to a different estimation of the efficiency of world countries in university education. The historical development of university ranking is analyzed over ten centuries.

  18. Low-rank coal research

    Energy Technology Data Exchange (ETDEWEB)

    Weber, G. F.; Laudal, D. L.

    1989-01-01

    This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SO{sub x}/NO{sub x} control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).

  19. Statistical methods for ranking data

    CERN Document Server

    Alvo, Mayer

    2014-01-01

    This book introduces advanced undergraduate, graduate students and practitioners to statistical methods for ranking data. An important aspect of nonparametric statistics is oriented towards the use of ranking data. Rank correlation is defined through the notion of distance functions and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, EM algorithm and factor analysis. This book deals with statistical methods used for analyzing such data and provides a novel and unifying approach for hypotheses testing. The techniques described in the book are illustrated with examples and the statistical software is provided on the authors’ website.

  20. Haavelmo's Probability Approach and the Cointegrated VAR

    DEFF Research Database (Denmark)

    Juselius, Katarina

    dependent residuals, normalization, reduced rank, model selection, missing variables, simultaneity, autonomy and iden- ti…cation. Speci…cally the paper discusses (1) the conditions under which the VAR model represents a full probability formulation of a sample of time-series observations, (2...

  1. Investigating Probability with the NBA Draft Lottery.

    Science.gov (United States)

    Quinn, Robert J.

    1997-01-01

    Investigates an interesting application of probability in the world of sports. Considers the role of permutations in the lottery system used by the National Basketball Association (NBA) in the United States to determine the order in which nonplayoff teams select players from the college ranks. Presents a lesson on this topic in which students work…

  2. Comonotonic Book-Making with Nonadditive Probabilities

    NARCIS (Netherlands)

    Diecidue, E.; Wakker, P.P.

    2000-01-01

    This paper shows how de Finetti's book-making principle, commonly used to justify additive subjective probabilities, can be modified to agree with some nonexpected utility models. More precisely, a new foundation of the rank-dependent models is presented that is based on a comonotonic extension of the

  3. A network-based dynamical ranking system for competitive sports

    Science.gov (United States)

    Motegi, Shun; Masuda, Naoki

    2012-12-01

    From the viewpoint of networks, a ranking system for players or teams in sports is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., aggregations of the results of games over time. However, the score of a player (or team) fluctuates over time. Defeating a renowned player at their peak performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. We derive a set of linear online update equations for the score of each player. The proposed ranking system predicts the outcome of future games with a higher accuracy than its static counterparts.

  4. A network-based dynamical ranking system for competitive sports.

    Science.gov (United States)

    Motegi, Shun; Masuda, Naoki

    2012-01-01

    From the viewpoint of networks, a ranking system for players or teams in sports is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., aggregations of the results of games over time. However, the score of a player (or team) fluctuates over time. Defeating a renowned player at their peak performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. We derive a set of linear online update equations for the score of each player. The proposed ranking system predicts the outcome of future games with a higher accuracy than its static counterparts.

  5. Ranking nodes in growing networks: When PageRank fails

    Science.gov (United States)

    Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng

    2015-11-01

    PageRank is arguably the most popular ranking algorithm, applied in real systems ranging from information to biological and infrastructure networks. Despite its outstanding popularity and broad use in different areas of science, the relation between the algorithm's efficacy and the properties of the network on which it acts has not yet been fully understood. We study here PageRank's performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail to identify the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms based on the temporal linking patterns of these systems are needed to better rank the nodes.
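    For reference, the static PageRank score discussed in this and several neighboring records can be sketched as a short power iteration (a generic textbook implementation, not the authors' code; the damping factor and example network are illustrative):

    ```python
    import numpy as np

    def pagerank(adj, d=0.85, tol=1e-10):
        """Power iteration for PageRank on a dense adjacency matrix.

        adj[i, j] = 1 means a link from node i to node j.
        Dangling nodes (no out-links) are spread uniformly over all nodes.
        """
        n = adj.shape[0]
        out = adj.sum(axis=1)
        # Column-stochastic transition matrix; dangling rows become uniform columns.
        M = np.where(out[:, None] > 0, adj / np.maximum(out, 1)[:, None], 1.0 / n).T
        p = np.full(n, 1.0 / n)
        while True:
            p_next = d * M @ p + (1 - d) / n
            if np.abs(p_next - p).sum() < tol:
                return p_next
            p = p_next

    # Tiny example: node 0 is linked to by both other nodes and ranks highest.
    A = np.array([[0, 1, 0],
                  [1, 0, 0],
                  [1, 0, 0]], dtype=float)
    scores = pagerank(A)
    ```

    The scores form a probability vector (they sum to 1), which is the stationary distribution of the damped random walk; the temporal variants surveyed in these records modify this static iteration.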

  6. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

    to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed based on an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.

  7. University Rankings in Critical Perspective

    Science.gov (United States)

    Pusser, Brian; Marginson, Simon

    2013-01-01

    This article addresses global postsecondary ranking systems by using critical-theoretical perspectives on power. This research suggests rankings are at once a useful lens for studying power in higher education and an important instrument for the exercise of power in service of dominant norms in global higher education. (Contains 1 table and 1…

  8. University Ranking as Social Exclusion

    Science.gov (United States)

    Amsler, Sarah S.; Bolsmann, Chris

    2012-01-01

    In this article we explore the dual role of global university rankings in the creation of a new, knowledge-identified, transnational capitalist class and in facilitating new forms of social exclusion. We examine how and why the practice of ranking universities has become widely defined by national and international organisations as an important…

  9. PageRank tracker: from ranking to tracking.

    Science.gov (United States)

    Gong, Chen; Fu, Keren; Loza, Artur; Wu, Qiang; Liu, Jia; Yang, Jie

    2014-06-01

    Video object tracking is widely used in many real-world applications, and it has been extensively studied for over two decades. However, tracking robustness is still an issue in most existing methods, due to the difficulties of adapting to environmental or target changes. In order to improve adaptability, this paper formulates the tracking process as a ranking problem, and the PageRank algorithm, the well-known webpage ranking algorithm used by Google, is applied. Labeled and unlabeled samples in the tracking application are analogous to query webpages and the webpages to be ranked, respectively. Therefore, determining the target is equivalent to finding the unlabeled sample that is most associated with the existing labeled set. We modify the conventional PageRank algorithm in three aspects for the tracking application: graph construction, PageRank vector acquisition and target filtering. Our simulations using various challenging public-domain video sequences reveal that the proposed PageRank tracker outperforms the mean-shift tracker, co-tracker, semiboosting and beyond-semiboosting trackers in terms of accuracy, robustness and stability.

  10. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  11. Probability 1/e

    Science.gov (United States)

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
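    One classic example of this kind is the derangement problem: the probability that a random shuffle leaves no item in its original position tends to 1/e as the number of items grows. A quick Monte Carlo check (an illustrative sketch, not taken from the article):

    ```python
    import math
    import random

    def no_fixed_point(n, rng):
        """One trial: shuffle n items; True if no item lands in its original slot."""
        perm = list(range(n))
        rng.shuffle(perm)
        return all(perm[i] != i for i in range(n))

    rng = random.Random(0)   # fixed seed for reproducibility
    trials = 200_000
    n = 20
    hits = sum(no_fixed_point(n, rng) for _ in range(trials))
    estimate = hits / trials  # should be close to 1/e ≈ 0.3679
    ```

    Already for n = 20 the derangement probability agrees with 1/e to several decimal places, which is why the limit shows up so often in problems of this type.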

  12. Explosion probability of unexploded ordnance: expert beliefs.

    Science.gov (United States)

    MacDonald, Jacqueline Anne; Small, Mitchell J; Morgan, M G

    2008-08-01

    This article reports on a study to quantify expert beliefs about the explosion probability of unexploded ordnance (UXO). Some 1,976 sites at closed military bases in the United States are contaminated with UXO and are slated for cleanup, at an estimated cost of $15-140 billion. Because no available technology can guarantee 100% removal of UXO, information about explosion probability is needed to assess the residual risks of civilian reuse of closed military bases and to make decisions about how much to invest in cleanup. This study elicited probability distributions for the chance of UXO explosion from 25 experts in explosive ordnance disposal, all of whom have had field experience in UXO identification and deactivation. The study considered six different scenarios: three different types of UXO handled in two different ways (one involving children and the other involving construction workers). We also asked the experts to rank by sensitivity to explosion 20 different kinds of UXO found at a case study site at Fort Ord, California. We found that the experts do not agree about the probability of UXO explosion, with significant differences among experts in their mean estimates of explosion probabilities and in the amount of uncertainty that they express in their estimates. In three of the six scenarios, the divergence was so great that the average of all the expert probability distributions was statistically indistinguishable from a uniform (0, 1) distribution-suggesting that the sum of expert opinion provides no information at all about the explosion risk. The experts' opinions on the relative sensitivity to explosion of the 20 UXO items also diverged. The average correlation between rankings of any pair of experts was 0.41, which, statistically, is barely significant (p= 0.049) at the 95% confidence level. Thus, one expert's rankings provide little predictive information about another's rankings. The lack of consensus among experts suggests that empirical studies

  13. Quantum probability measures and tomographic probability densities

    NARCIS (Netherlands)

    Amosov, GG; Man'ko

    2004-01-01

    Using a simple relation of the Dirac delta-function to the generalized theta-function, the relationship between the tomographic probability approach and the quantum probability measure approach to the description of quantum states is discussed. The quantum state tomogram expressed in terms of the

  14. Agreeing Probability Measures for Comparative Probability Structures

    NARCIS (Netherlands)

    P.P. Wakker (Peter)

    1981-01-01

    textabstractIt is proved that fine and tight comparative probability structures (where the set of events is assumed to be an algebra, not necessarily a σ-algebra) have agreeing probability measures. Although this was often claimed in the literature, all proofs the author encountered are not valid

  15. Estimation of rank correlation for clustered data.

    Science.gov (United States)

    Rosner, Bernard; Glynn, Robert J

    2017-06-30

    It is well known that the sample correlation coefficient (Rxy) is the maximum likelihood estimator of the Pearson correlation (ρxy) for independent and identically distributed (i.i.d.) bivariate normal data. However, this is not true for ophthalmologic data, where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the maximum likelihood estimator of ρxy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (i) converting ranks of both X and Y to the probit scale, (ii) estimating the Pearson correlation between probit scores for X and Y, and (iii) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. Copyright © 2017 John Wiley & Sons, Ltd.
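    Steps (i)-(iii) can be sketched for unclustered, tie-free data as follows (a simplified illustration that omits the paper's mixed-effects machinery; the function names are invented, and the final line uses the standard bivariate-normal relation between Spearman's and Pearson's correlations):

    ```python
    import math
    from statistics import NormalDist

    def probit_scores(values):
        """Rank the data (no ties assumed), then map rank/(n+1) to the probit scale."""
        n = len(values)
        order = sorted(range(n), key=lambda i: values[i])
        ranks = [0] * n
        for r, i in enumerate(order, start=1):
            ranks[i] = r
        inv = NormalDist().inv_cdf
        return [inv(r / (n + 1)) for r in ranks]

    def pearson(x, y):
        """Plain sample Pearson correlation."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return sxy / (sx * sy)

    # (i) probit-transform the ranks, (ii) Pearson correlation of probit scores,
    # (iii) convert via rho_s = (6 / pi) * arcsin(r / 2) under bivariate normality.
    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [1.2, 1.9, 3.5, 3.9, 5.1]
    r_probit = pearson(probit_scores(x), probit_scores(y))
    rho_s = (6 / math.pi) * math.asin(r_probit / 2)
    ```

    The paper's contribution is estimating the middle step with a mixed-effects model so that the intraclass correlation between fellow eyes is accounted for; the sketch above only shows the rank-to-probit pipeline itself.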

  16. Bootstrap determination of the cointegration rank in heteroskedastic VAR models

    DEFF Research Database (Denmark)

    Cavaliere, Guiseppe; Rahbæk, Anders; Taylor, A.M. Robert

    2014-01-01

    In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio (PLR) co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates… show that the bootstrap PLR tests are asymptotically correctly sized and, moreover, that the probability that the associated bootstrap sequential procedures select a rank smaller than the true rank converges to zero. This result is shown to hold for both the i.i.d. and wild bootstrap variants under… conditional heteroskedasticity but only for the latter under unconditional heteroskedasticity. Monte Carlo evidence is reported which suggests that the bootstrap approach of Cavaliere et al. (2012) significantly improves upon the finite sample performance of corresponding procedures based on either…

  17. The THES University Rankings: Are They Really World Class?

    Directory of Open Access Journals (Sweden)

    Richard Holmes

    2006-06-01

    Full Text Available The Times Higher Education Supplement (THES) international ranking of universities, published in 2004 and 2005, has received a great deal of attention throughout the world, nowhere more so than in East and Southeast Asia. This paper looks at the rankings and concludes that they are deficient in several respects. The sampling procedure is not explained and is very probably seriously biased, the weighting of the various components is not justified, inappropriate measures of teaching quality are used, the assessment of research achievement is biased against the humanities and social sciences, the classification of institutions is inconsistent, there are striking and implausible changes in the rankings between 2004 and 2005, and they are based in one crucial respect on regional rather than international comparisons. It is recommended that these rankings should not be the basis for the development and assessment of national and institutional policies.

  18. Stationary algorithmic probability

    National Research Council Canada - National Science Library

    Müller, Markus

    2010-01-01

    …since their actual values depend on the choice of the universal reference computer. In this paper, we analyze a natural approach to eliminate this machine-dependence. Our method is to assign algorithmic probabilities to the different...

  19. Probability concepts in quality risk management.

    Science.gov (United States)

    Claycamp, H Gregg

    2012-01-01

    Essentially any concept of risk is built on fundamental concepts of chance, likelihood, or probability. Although risk generally describes a probability of loss of something of value, given that a risk-generating event will occur or has occurred, it is ironic that the quality risk management literature and guidelines on quality risk management tools are relatively silent on the meaning and uses of "probability." The probability concept is typically applied by risk managers as a combination of frequency-based calculation and a subjective "degree of belief" meaning of probability. Probability as a concept that is crucial for understanding and managing risk is discussed through examples, from the most general scenario-defining and ranking tools that use probability implicitly to more specific probabilistic tools in risk management. Pharmaceutical manufacturers are expanding their use of quality risk management to identify and manage risks to the patient that might occur in phases of the pharmaceutical life cycle, from drug development to manufacture and from marketing to product discontinuation. A rich history of probability in risk management applied to other fields suggests that high-quality risk management decisions benefit from the implementation of more thoughtful probability concepts in both risk modeling and risk management.

  20. Inconsistent year-to-year fluctuations limit the conclusiveness of global higher education rankings for university management

    Directory of Open Access Journals (Sweden)

    Johannes Sorz

    2015-08-01

    Full Text Available Background. University rankings are getting very high international media attention; this holds particularly true for the Times Higher Education Ranking (THE) and the Shanghai Jiao Tong University's Academic Ranking of World Universities (ARWU). We therefore aimed to investigate how reliable the rankings are, especially for universities with lower ranking positions, which often show inconclusive year-to-year fluctuations in their rank, and whether these rankings are thus a suitable basis for management purposes. Methods. We used the publicly available data from the web pages of the THE and the ARWU rankings to analyze the dynamics of change in score and ranking position from year to year, and we investigated possible causes for inconsistent fluctuations in the rankings by means of regression analyses. Results. Regression analyses of results from the THE and ARWU from 2010–2014 show inconsistent fluctuations in the rank and score for universities with lower rank positions (below position 50), which lead to inconsistent "ups and downs" in the total results, especially in the THE and to a lesser extent also in the ARWU. In both rankings, the mean year-to-year fluctuation of universities in groups of 50 universities aggregated by descending rank increases from less than 10% in the group of the 50 highest-ranked universities to up to 60% in the group of the lowest-ranked universities. Furthermore, year-to-year results do not correspond between THES and ARWU rankings for universities below rank 50. Discussion. We conclude that the observed fluctuations in the THE do not correspond to actual university performance, and ranking results are thus of limited conclusiveness for the university management of universities below a rank of 50. While the ARWU ranking seems more robust against inconsistent fluctuations, its year-to-year changes in the scores are very small, so essential changes from year to year could not be expected. Furthermore, year

  1. Inconsistent year-to-year fluctuations limit the conclusiveness of global higher education rankings for university management.

    Science.gov (United States)

    Sorz, Johannes; Wallner, Bernard; Seidler, Horst; Fieder, Martin

    2015-01-01

    Background. University rankings are getting very high international media attention; this holds particularly true for the Times Higher Education Ranking (THE) and the Shanghai Jiao Tong University's Academic Ranking of World Universities Ranking (ARWU). We therefore aimed to investigate how reliable the rankings are, especially for universities with lower ranking positions, which often show inconclusive year-to-year fluctuations in their rank, and whether these rankings are thus a suitable basis for management purposes. Methods. We used the publicly available data from the web pages of the THE and the ARWU ranking to analyze the dynamics of change in score and ranking position from year to year, and we investigated possible causes for inconsistent fluctuations in the rankings by means of regression analyses. Results. Regression analyses of results from the THE and ARWU from 2010-2014 show inconsistent fluctuations in the rank and score for universities with lower rank positions (below position 50) which lead to inconsistent "ups and downs" in the total results, especially in the THE and to a lesser extent also in the ARWU. In both rankings, the mean year-to-year fluctuation of universities in groups of 50 universities aggregated by descending rank increases from less than 10% in the group of the 50 highest-ranked universities to up to 60% in the group of the lowest-ranked universities. Furthermore, year-to-year results do not correspond between THES and ARWU rankings for universities below rank 50. Discussion. We conclude that the observed fluctuations in the THE do not correspond to actual university performance and ranking results are thus of limited conclusiveness for the university management of universities below a rank of 50. While the ARWU ranking seems more robust against inconsistent fluctuations, its year-to-year changes in the scores are very small, so essential changes from year to year could not be expected. 
Furthermore, year-to-year results do not correspond

  2. Aggregate Interview Method of ranking orthopedic applicants predicts future performance.

    Science.gov (United States)

    Geissler, Jacqueline; VanHeest, Ann; Tatman, Penny; Gioe, Terence

    2013-07-01

    This article evaluates and describes a process of ranking orthopedic applicants using what the authors term the Aggregate Interview Method. The authors hypothesized that higher-ranking applicants using this method at their institution would perform better than those ranked lower on multiple measures of resident performance. A retrospective review of 115 orthopedic residents was performed at the authors' institution. Residents were grouped into 3 categories by matching rank numbers: 1-5, 6-14, and 15 or higher. Each rank group was compared with resident performance as measured by faculty evaluations, the Orthopaedic In-Training Examination (OITE), and American Board of Orthopaedic Surgery (ABOS) test results. Residents ranked 1-5 scored significantly better on patient care, behavior, and overall competence by faculty evaluation. Orthopedic resident candidates who scored highly on the Accreditation Council for Graduate Medical Education resident core competencies, as measured by faculty evaluations, performed above the national average on the OITE and passed the ABOS part 1 examination at rates exceeding the national average. Copyright 2013, SLACK Incorporated.

  3. Factual and cognitive probability

    OpenAIRE

    Chuaqui, Rolando

    2012-01-01

    This modification separates the two aspects of probability: probability as a part of physical theories (factual), and as a basis for statistical inference (cognitive). Factual probability is represented by probability structures as in the earlier papers, but now built independently of the language. Cognitive probability is interpreted as a form of "partial truth". The paper also contains a discussion of the Principle of Insufficient Reason and of Bayesian and classical statistical methods, in...

  4. Apgar Scores

    Science.gov (United States)

    As soon as ... baby's general condition at birth. What Does the Apgar Test Measure? The test measures your baby's: Heart ...

  5. Ranking in evolving complex networks

    Science.gov (United States)

    Liao, Hao; Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng; Zhou, Ming-Yang

    2017-05-01

    Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithms perform, and which are their possible biases that may impair their effectiveness. Many popular ranking algorithms (such as Google's PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks. We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes.

  6. SRS: Site ranking system for hazardous chemical and radioactive waste

    Energy Technology Data Exchange (ETDEWEB)

    Rechard, R.P.; Chu, M.S.Y.; Brown, S.L.

    1988-05-01

    This report describes the rationale and presents instructions for a site ranking system (SRS). SRS ranks hazardous chemical and radioactive waste sites by scoring important and readily available factors that influence risk to human health. Using SRS, sites can be ranked for purposes of detailed site investigations. SRS evaluates relative risk as a combination of the potentially exposed population, chemical toxicity, and the potential for release from a waste site; hence, SRS uses the same concepts found in a detailed assessment of health risk. Basing SRS on the concepts of risk assessment tends to reduce the distortion of results found in other ranking schemes. More importantly, a clear logic helps ensure the successful application of the ranking procedure and increases its versatility when modifications are necessary for unique situations. Although one can rank sites using a detailed risk assessment, it is potentially costly because of the data and resources required. SRS is an efficient approach that provides an order-of-magnitude ranking, requiring only readily available data (often only descriptive) and hand calculations. Worksheets are included to make the system easier to understand and use. 88 refs., 19 figs., 58 tabs.

  7. RANK and RANK ligand expression in primary human osteosarcoma

    Directory of Open Access Journals (Sweden)

    Daniel Branstetter

    2015-09-01

    Our results demonstrate that RANKL expression was observed in the tumor element in 68% of human OS using IHC. However, the staining intensity was relatively low and only 37% (29/79) of samples exhibited ≥10% RANKL-positive tumor cells. RANK expression was not observed in OS tumor cells. In contrast, RANK expression was clearly observed in other cells within OS samples, including the myeloid osteoclast precursor compartment, osteoclasts and in giant osteoclast cells. The intensity and frequency of RANKL and RANK staining in OS samples were substantially less than that observed in GCTB samples. The observation that RANKL is expressed in OS cells themselves suggests that these tumors may mediate an osteoclastic response, and anti-RANKL therapy may potentially be protective against bone pathologies in OS. However, the absence of RANK expression in primary human OS cells suggests that any autocrine RANKL/RANK signaling in human OS tumor cells is not operative, and anti-RANKL therapy would not directly affect the tumor.

  8. Ranking structures and Rank-Rank Correlations of Countries. The FIFA and UEFA cases

    CERN Document Server

    Ausloos, Marcel; Gadomski, Adam; Vitanov, Nikolay K

    2014-01-01

    Ranking of agents competing with each other in complex systems may lead to paradoxes depending on the different pre-chosen measures. A discussion is presented on such rank-rank correlations, similar or not, based on the case of European countries ranked by UEFA and FIFA from different soccer competitions. The first question to be answered is whether an empirical and simple law is obtained for such (self-) organizations of complex sociological systems with such different measuring schemes. It is found that the power law form is not the best description, contrary to many modern expectations. The stretched exponential is much more adequate. Moreover, it is found that the measuring rules lead to some inner structures, in both cases.

  9. Ranking structures and rank-rank correlations of countries: The FIFA and UEFA cases

    Science.gov (United States)

    Ausloos, Marcel; Cloots, Rudi; Gadomski, Adam; Vitanov, Nikolay K.

    2014-04-01

    Ranking of agents competing with each other in complex systems may lead to paradoxes depending on the different pre-chosen measures. A discussion is presented on such rank-rank correlations, similar or not, based on the case of European countries ranked by UEFA and FIFA from different soccer competitions. The first question to be answered is whether an empirical and simple law is obtained for such (self-) organizations of complex sociological systems with such different measuring schemes. It is found that the power law form is not the best description, contrary to many modern expectations. The stretched exponential is much more adequate. Moreover, it is found that the measuring rules lead to some inner structures in both cases.

  10. University Ranking Systems; Criteria and Critiques

    OpenAIRE

    Saka, Yavuz; YAMAN, Süleyman

    2011-01-01

    The purpose of this paper is to explore international university ranking systems. As a compilation study, this paper provides the specific criteria that each ranking system uses and the main critiques regarding these ranking systems. Since there are many ranking systems in this area of research, this study focuses only on the most cited and referenced ranking systems. As there is no consensus on the criteria these systems use, this paper has no intention of identifying the best ranking system ...

  11. Rankings, Standards, and Competition: Task vs. Scale Comparisons

    Science.gov (United States)

    Garcia, Stephen M.; Tor, Avishalom

    2007-01-01

    Research showing how upward social comparison breeds competitive behavior has so far conflated local comparisons in "task" performance (e.g. a test score) with comparisons on a more general "scale" (i.e. an underlying skill). Using a ranking methodology (Garcia, Tor, & Gonzalez, 2006) to separate task and scale comparisons, Studies 1-2 reveal that…

  12. Shanghai Academic Ranking of World Universities (ARWU) and the ...

    African Journals Online (AJOL)

    This article critically examines the methodology of the Shanghai Academic Ranking of World Universities (ARWU) by generating raw scores for the 'big five' South African research universities (Stellenbosch, Cape Town, Kwazulu-Natal, Pretoria and the Witwatersrand, henceforth referred to as SU, UCT, UKZN, UP and ...

  13. What Are Probability Surveys?

    Science.gov (United States)

    The National Aquatic Resource Surveys (NARS) use probability-survey designs to assess the condition of the nation’s waters. In probability surveys (also known as sample-surveys or statistical surveys), sampling sites are selected randomly.

  14. Efficient probability sequence

    OpenAIRE

    Regnier, Eva

    2014-01-01

    A probability sequence is an ordered set of probability forecasts for the same event. Although single-period probabilistic forecasts and methods for evaluating them have been extensively analyzed, we are not aware of any prior work on evaluating probability sequences. This paper proposes an efficiency condition for probability sequences and shows properties of efficient forecasting systems, including memorylessness and increasing discrimination. These results suggest tests for efficiency and ...

  15. Efficient probability sequences

    OpenAIRE

    Regnier, Eva

    2014-01-01

    DRMI working paper A probability sequence is an ordered set of probability forecasts for the same event. Although single-period probabilistic forecasts and methods for evaluating them have been extensively analyzed, we are not aware of any prior work on evaluating probability sequences. This paper proposes an efficiency condition for probability sequences and shows properties of efficient forecasting systems, including memorylessness and increasing discrimination. These results suggest tests for efficiency and ...

  16. Philosophical theories of probability

    CERN Document Server

    Gillies, Donald

    2000-01-01

    The Twentieth Century has seen a dramatic rise in the use of probability and statistics in almost all fields of research. This has stimulated many new philosophical ideas on probability. Philosophical Theories of Probability is the first book to present a clear, comprehensive and systematic account of these various theories and to explain how they relate to one another. Gillies also offers a distinctive version of the propensity theory of probability, and the intersubjective interpretation, which develops the subjective theory.

  17. Correlation Test Application of Supplier’s Ranking Using TOPSIS and AHP-TOPSIS Method

    Directory of Open Access Journals (Sweden)

    Ika Yuniwati

    2016-05-01

    Full Text Available The supplier selection process in firms can be carried out using multi-criteria decision making (MCDM) methods. There are many MCDM methods, and a firm must choose the one suited to its conditions. Company A has analyzed its supplier ranking using the TOPSIS method. The TOPSIS method has a major weakness in its subjective weighting. This flaw is overcome by using AHP weighting that has undergone a consistency test. In this study, the supplier rankings produced by the TOPSIS and AHP-TOPSIS methods were compared using a correlation test, with the aim of determining whether the two methods give different results. Supplier-ranking data are ordinal, so the comparison used Spearman's rank and Kendall's tau-b correlations. If most of the ranked scores are the same, Kendall's tau-b correlation should be used; otherwise, Spearman's rank correlation should be used. In this study most of the ranked scores are different, so Spearman's rank correlation was used, giving a p-value of 0.505. Since this is greater than 0.05, there is no statistically significant correlation between the two methods: an increase or decrease of a supplier's ranking under one method is not significantly related to its increase or decrease under the other.
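
    For untied rankings, the Spearman correlation used in the abstract has a simple closed form. A minimal sketch; the five supplier rankings below are hypothetical, not the paper's data:

```python
# Spearman's rank correlation for two rankings with no ties.
def spearman_rho(rank_a, rank_b):
    """rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), valid when no ranks are tied."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

# Hypothetical ranks of five suppliers under TOPSIS and AHP-TOPSIS:
topsis = [1, 2, 3, 4, 5]
ahp_topsis = [2, 1, 4, 3, 5]
print(spearman_rho(topsis, ahp_topsis))  # → 0.8
```

    A value near 1 means the two methods order the suppliers similarly; the significance test reported in the abstract would then ask whether that value differs from zero.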

  18. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.

    Subjective probabilities play a central role in many economic decisions, and act as an immediate confound of inferences about behavior, unless controlled for. Several procedures to recover subjective probabilities have been proposed, but in order to recover the correct latent probability one must...

  19. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.

    2014-01-01

    Subjective probabilities play a central role in many economic decisions and act as an immediate confound of inferences about behavior, unless controlled for. Several procedures to recover subjective probabilities have been proposed, but in order to recover the correct latent probability one must ...

  20. Non-Archimedean Probability

    NARCIS (Netherlands)

    Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned

  1. Interpretations of probability

    CERN Document Server

    Khrennikov, Andrei

    2009-01-01

    This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.

  2. Ranking species in mutualistic networks.

    Science.gov (United States)

    Domínguez-García, Virginia; Muñoz, Miguel A

    2015-02-02

    Understanding the architectural subtleties of ecological networks, believed to confer on them enhanced stability and robustness, is a subject of utmost relevance. Mutualistic interactions have been profusely studied and their corresponding bipartite networks, such as plant-pollinator networks, have been reported to exhibit a characteristic "nested" structure. Assessing the importance of any given species in mutualistic networks is a key task when evaluating extinction risks and possible cascade effects. Inspired by a recently introduced algorithm--similar in spirit to Google's PageRank but with a built-in non-linearity--here we propose a method which--by exploiting their nested architecture--allows us to derive a sound ranking of species importance in mutualistic networks. This method clearly outperforms other existing ranking schemes and can become very useful for ecosystem management and biodiversity preservation, where decisions on what aspects of ecosystems to explicitly protect need to be made.

  3. PageRank model of opinion formation on Ulam networks

    Science.gov (United States)

    Chakhmakhchyan, L.; Shepelyansky, D.

    2013-12-01

    We consider a PageRank model of opinion formation on Ulam networks, generated by the intermittency map and the typical Chirikov map. The Ulam networks generated by these maps have certain similarities with such scale-free networks as the World Wide Web (WWW), showing an algebraic decay of the PageRank probability. We find that the opinion formation process on Ulam networks has certain similarities but also distinct features compared to the WWW. We attribute these distinctions to internal differences in network structure of the Ulam and WWW networks. We also analyze the process of opinion formation in the frame of the generalized Sznajd model which protects the opinion of small communities.

  4. Oxygen boundary crossing probabilities.

    Science.gov (United States)

    Busch, N A; Silver, I A

    1987-01-01

    The probability that an oxygen particle will reach a time dependent boundary is required in oxygen transport studies involving solution methods based on probability considerations. A Volterra integral equation is presented, the solution of which gives directly the boundary crossing probability density function. The boundary crossing probability is the probability that the oxygen particle will reach a boundary within a specified time interval. When the motion of the oxygen particle may be described as strongly Markovian, then the Volterra integral equation can be rewritten as a generalized Abel equation, the solution of which has been widely studied.

  5. Philosophy and probability

    CERN Document Server

    Childers, Timothy

    2013-01-01

    Probability is increasingly important for our understanding of the world. What is probability? How do we model it, and how do we use it? Timothy Childers presents a lively introduction to the foundations of probability and to philosophical issues it raises. He keeps technicalities to a minimum, and assumes no prior knowledge of the subject. He explains the main interpretations of probability-frequentist, propensity, classical, Bayesian, and objective Bayesian-and uses stimulating examples to bring the subject to life. All students of philosophy will benefit from an understanding of probability,

  6. In All Probability, Probability is not All

    Science.gov (United States)

    Helman, Danny

    2004-01-01

    The national lottery is often portrayed as a game of pure chance with no room for strategy. This misperception seems to stem from the application of probability instead of expectancy considerations, and can be utilized to introduce the statistical concept of expectation.
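
    The distinction the abstract draws between probability and expectation can be made concrete: a win is possible, yet the expected net value of a ticket is negative. All prizes and odds below are made up for illustration:

```python
# Expected value of a hypothetical lottery ticket: probability alone says a win
# is possible, but expectation quantifies the average loss per ticket.
ticket_price = 2.0
prizes = {
    10_000_000: 1 / 14_000_000,  # jackpot
    100: 1 / 10_000,
    10: 1 / 100,
}
expected_winnings = sum(prize * p for prize, p in prizes.items())
expected_net = expected_winnings - ticket_price
print(round(expected_winnings, 3), round(expected_net, 3))  # → 0.824 -1.176
```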

  7. The lod score method.

    Science.gov (United States)

    Rice, J P; Saccone, N L; Corbett, J

    2001-01-01

    The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
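
    The two-point lod score described above is the base-10 logarithm of a likelihood ratio: linkage at recombination fraction θ versus free recombination (θ = 0.5). A sketch with made-up counts:

```python
import math

# Two-point lod score: log10 likelihood ratio of linkage at recombination
# fraction theta versus free recombination (theta = 0.5). Counts are illustrative.
def lod_score(recombinants, nonrecombinants, theta):
    n = recombinants + nonrecombinants
    loglik_theta = (recombinants * math.log10(theta)
                    + nonrecombinants * math.log10(1 - theta))
    loglik_null = n * math.log10(0.5)
    return loglik_theta - loglik_null

# Hypothetical data: 2 recombinants among 10 informative meioses.
theta_hat = 2 / 10  # ML estimate of the recombination fraction
print(round(lod_score(2, 8, theta_hat), 3))  # → 0.837
```

    The sequential character of the method mentioned in the abstract comes from the fact that lod scores from independent pedigrees simply add.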

  8. Rankings Methodology Hurts Public Institutions

    Science.gov (United States)

    Van Der Werf, Martin

    2007-01-01

    In the 1980s, when the "U.S. News & World Report" rankings of colleges were based solely on reputation, the nation's public universities were well represented at the top. However, as soon as the magazine began including its "measures of excellence," statistics intended to define quality, public universities nearly disappeared from the top. As the…

  9. Let Us Rank Journalism Programs

    Science.gov (United States)

    Weber, Joseph

    2014-01-01

    Unlike law, business, and medical schools, as well as universities in general, journalism schools and journalism programs have rarely been ranked. Publishers such as "U.S. News & World Report," "Forbes," "Bloomberg Businessweek," and "Washington Monthly" do not pay them much mind. What is the best…

  10. Evaluation of probabilistic forecasts with the scoringRules package

    Science.gov (United States)

    Jordan, Alexander; Krüger, Fabian; Lerch, Sebastian

    2017-04-01

    Over the last decades probabilistic forecasts in the form of predictive distributions have become popular in many scientific disciplines. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way in order to better understand sources of prediction errors and to improve the models. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F , given that an outcome y was observed. In coherence with decision-theoretical principles they allow comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This contribution presents the software package scoringRules for the statistical programming language R, which provides functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. For univariate variables, two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, ensemble weather forecasts take this form. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices. Recent developments include the addition of scoring rules to evaluate multivariate forecast distributions. The use of the scoringRules package is illustrated in an example on post-processing ensemble forecasts of temperature.
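
    For a normal forecast the continuous ranked probability score has the closed form CRPS(N(μ,σ²), y) = σ[z(2Φ(z) − 1) + 2φ(z) − 1/√π] with z = (y − μ)/σ. A Python sketch of that formula (scoringRules itself is an R package; the numbers below are illustrative):

```python
import math

# Closed-form CRPS for a normal forecast N(mu, sigma^2) and observation y.
def crps_normal(mu, sigma, y):
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # phi(z)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))           # Phi(z)
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# A sharper forecast centred on the outcome receives a lower (better) score:
print(crps_normal(0.0, 1.0, 0.0))  # ≈ 0.2337
print(crps_normal(0.0, 2.0, 0.0))  # ≈ 0.4674
```

    Lower is better: the score rewards forecasts that are both calibrated and sharp, which is why it is the standard choice for evaluating post-processed ensemble forecasts.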

  11. Ranking Journals Using Social Choice Theory Methods: A Novel Approach in Bibliometrics

    Energy Technology Data Exchange (ETDEWEB)

    Aleskerov, F.T.; Pislyakov, V.; Subochev, A.N.

    2016-07-01

    We use data on economic, management and political science journals to produce quantitative estimates of (in)consistency of evaluations based on seven popular bibliometric indicators (impact factor, 5-year impact factor, immediacy index, article influence score, h-index, SNIP and SJR). We propose a new approach to aggregating journal rankings: since rank aggregation is a multicriteria decision problem, ordinal ranking methods from social choice theory may solve it. We apply either a direct ranking method based on majority rule (the Copeland rule, the Markovian method) or a sorting procedure based on a tournament solution, such as the uncovered set and the minimal externally stable set. We demonstrate that aggregate rankings reduce the number of contradictions and represent the set of single-indicator-based rankings better than any of the seven rankings themselves. (Author)

  12. Predicted binding site information improves model ranking in protein docking using experimental and computer-generated target structures.

    Science.gov (United States)

    Maheshwari, Surabhi; Brylinski, Michal

    2015-11-23

    Protein-protein interactions (PPIs) mediate the vast majority of biological processes, therefore, significant efforts have been directed to investigate PPIs to fully comprehend cellular functions. Predicting complex structures is critical to reveal molecular mechanisms by which proteins operate. Despite recent advances in the development of new methods to model macromolecular assemblies, most current methodologies are designed to work with experimentally determined protein structures. However, because only computer-generated models are available for a large number of proteins in a given genome, computational tools should tolerate structural inaccuracies in order to perform the genome-wide modeling of PPIs. To address this problem, we developed eRank(PPI), an algorithm for the identification of near-native conformations generated by protein docking using experimental structures as well as protein models. The scoring function implemented in eRank(PPI) employs multiple features including interface probability estimates calculated by eFindSite(PPI) and a novel contact-based symmetry score. In comparative benchmarks using representative datasets of homo- and hetero-complexes, we show that eRank(PPI) consistently outperforms state-of-the-art algorithms improving the success rate by ~10 %. eRank(PPI) was designed to bridge the gap between the volume of sequence data, the evidence of binary interactions, and the atomic details of pharmacologically relevant protein complexes. Tolerating structure imperfections in computer-generated models opens up a possibility to conduct the exhaustive structure-based reconstruction of PPI networks across proteomes. The methods and datasets used in this study are available at www.brylinski.org/erankppi.

  13. The Globalization of College and University Rankings

    Science.gov (United States)

    Altbach, Philip G.

    2012-01-01

    In the era of globalization, accountability, and benchmarking, university rankings have achieved a kind of iconic status. The major ones--the Academic Ranking of World Universities (ARWU, or the "Shanghai rankings"), the QS (Quacquarelli Symonds Limited) World University Rankings, and the "Times Higher Education" World…

  14. Student Practices, Learning, and Attitudes When Using Computerized Ranking Tasks

    Science.gov (United States)

    Lee, Kevin M.; Prather, E. E.; Collaboration of Astronomy Teaching Scholars CATS

    2011-01-01

    Ranking Tasks are a novel type of conceptual exercise based on a technique called rule assessment. Ranking Tasks present students with a series of four to eight icons that describe slightly different variations of a basic physical situation. Students are then asked to identify the order, or ranking, of the various situations based on some physical outcome or result. The structure of Ranking Tasks makes it difficult for students to rely strictly on memorized answers and mechanical substitution of formulae. In addition, by changing the presentation of the different scenarios (e.g., photographs, line diagrams, graphs, tables, etc.) we find that Ranking Tasks require students to develop mental schema that are more flexible and robust. Ranking tasks may be implemented on the computer which requires students to order the icons through drag-and-drop. Computer implementation allows the incorporation of background material, grading with feedback, and providing additional similar versions of the task through randomization so that students can build expertise through practice. This poster will summarize the results of a study of student usage of computerized ranking tasks. We will investigate 1) student practices (How do they make use of these tools?), 2) knowledge and skill building (Do student scores improve with iteration and are there diminishing returns?), and 3) student attitudes toward using computerized Ranking Tasks (Do they like using them?). This material is based upon work supported by the National Science Foundation under Grant No. 0715517, a CCLI Phase III Grant for the Collaboration of Astronomy Teaching Scholars (CATS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

  15. Choice Probability Generating Functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications.
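
    For the multinomial logit, the simplest ARUM, the CPGF is the log-sum-exp of the utilities, and its gradient indeed returns the choice probabilities (the softmax). A numerical sketch with illustrative utilities:

```python
import math

# CPGF of the multinomial logit: G(u) = log(sum_j exp(u_j)).
def cpgf(u):
    m = max(u)  # stabilise the log-sum-exp
    return m + math.log(sum(math.exp(x - m) for x in u))

def choice_probs(u, eps=1e-6):
    """Numerical gradient of the CPGF, one component at a time."""
    probs = []
    for k in range(len(u)):
        up = list(u); up[k] += eps
        dn = list(u); dn[k] -= eps
        probs.append((cpgf(up) - cpgf(dn)) / (2 * eps))
    return probs

p = choice_probs([1.0, 0.0, -1.0])
print([round(x, 4) for x in p])  # the gradient reproduces the softmax
print(round(sum(p), 6))          # probabilities sum to 1
```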

  16. Image ranking in video sequences using pairwise image comparisons and temporal smoothing

    CSIR Research Space (South Africa)

    Burke, Michael

    2016-12-01

    Full Text Available ’ values within a standard Bayesian ranking framework, and a Rauch-Tung-Striebel smoother is used to improve these interest scores. Results show that the training data requirements typically associated with pairwise ranking systems are dramatically reduced...

  17. Misleading University Rankings: Cause and Cure for Discrepancies between Nominal and Attained Weights

    Science.gov (United States)

    Soh, Kaycheng

    2013-01-01

    Recent research into university ranking methodologies uncovered several methodological problems among the systems currently in vogue. One of these is the discrepancy between the nominal and attained weights. The problem is the summation of unstandardized indicators for the total scores used in ranking. It is demonstrated that weight discrepancy…

  18. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2013-01-01

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...

  19. Statistics and Probability

    Directory of Open Access Journals (Sweden)

    Laktineh Imad

    2010-04-01

    Full Text Available This course constitutes a brief introduction to probability applications in high energy physics. First the mathematical tools related to the different probability concepts are introduced. The probability distributions which are commonly used in high energy physics and their characteristics are then shown and commented on. The central limit theorem and its consequences are analysed. Finally some numerical methods used to produce different kinds of probability distributions are presented. The full article (17 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  20. Handbook of probability

    CERN Document Server

    Florescu, Ionut

    2013-01-01

    THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introductio

  1. Real analysis and probability

    CERN Document Server

    Ash, Robert B; Lukacs, E

    1972-01-01

    Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var

  2. Teams ranking of Malaysia Super League using Bayesian expectation maximization for Generalized Bradley Terry Model

    Science.gov (United States)

    Nor, Shahdiba Binti Md; Mahmud, Zamalia

    2016-10-01

    The analysis of sports data has always aroused great interest among statisticians, and sports data have been investigated from different perspectives, often with the aim of forecasting results. The study focuses on the 12 teams that competed in the Malaysian Super League (MSL) in season 2015. This paper used Bayesian expectation maximization for the generalized Bradley-Terry model to estimate the rankings of all the football teams. Under the generalized Bradley-Terry model it is possible to find the maximum likelihood (ML) estimate of the skill ratings λ using a simple iterative procedure. In order to maximize the likelihood function, an inferential Bayesian method is needed to obtain a posterior distribution that can be computed quickly. Each team's ability was estimated from the previous year's game results by calculating the probability of winning based on the final scores for each team. It was found that a model allowing tied scores does make a difference in estimating a football team's ability to win the next match. However, a team with better results in the previous year has a better chance of scoring in the next game.
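
    The "simple iterative procedure" for the ML skill ratings is the classic Zermelo/MM update λ_i ← w_i / Σ_{j≠i} n_ij/(λ_i + λ_j), where w_i is team i's total wins and n_ij the number of games between i and j. A sketch with a hypothetical three-team win matrix (not the MSL data):

```python
# Maximum-likelihood Bradley-Terry skill ratings via the Zermelo/MM iteration.
def bradley_terry(wins, n_iter=500):
    """wins[i][j] = number of games in which team i beat team j."""
    n = len(wins)
    lam = [1.0] * n
    for _ in range(n_iter):
        new_lam = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of team i
            denom = sum((wins[i][j] + wins[j][i]) / (lam[i] + lam[j])
                        for j in range(n) if j != i)
            new_lam.append(w_i / denom if denom > 0 else lam[i])
        total = sum(new_lam)  # normalise: only ratios of ratings are identified
        lam = [x / total for x in new_lam]
    return lam

wins = [[0, 3, 4],   # team 0 beat team 1 three times and team 2 four times
        [1, 0, 3],
        [0, 1, 0]]
ratings = bradley_terry(wins)
print(sorted(range(3), key=lambda i: -ratings[i]))  # → [0, 1, 2]
# Under the fitted model, P(i beats j) = lam_i / (lam_i + lam_j).
```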

  3. Influence of Weather, Rank, and Home Advantage on Football Outcomes in the Gulf Region

    National Research Council Canada - National Science Library

    BROCHERIE, FRANCK; GIRARD, OLIVIER; FAROOQ, ABDULAZIZ; MILLET, GRÉGOIRE P

    2015-01-01

    PURPOSE: The objective of this study was to investigate the effects of weather, rank, and home advantage on international football match results and scores in the Gulf Cooperation Council (GCC) region...

  4. On Probability Domains IV

    Science.gov (United States)

    Frič, Roman; Papčo, Martin

    2017-12-01

    Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one (a quantum phenomenon) and, dually, an observable can map a crisp random event to a genuine fuzzy random event (a fuzzy phenomenon). The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.

  5. Difficulties related to Probabilities

    OpenAIRE

    Rosinger, Elemer Elad

    2010-01-01

    Probability theory is often used as if it had the same ontological status as, for instance, Euclidean geometry or Peano arithmetic. In this regard, several highly questionable aspects of probability theory are mentioned which have earlier been presented in two arXiv papers.

  6. On Randomness and Probability

    Indian Academy of Sciences (India)

    casinos and gambling houses? How does one interpret a statement like "there is a 30 per cent chance of rain tonight" - a statement we often hear on the news? Such questions arise in the mind of every student when she/he is taught probability as part of mathematics. Many students who go on to study probability and ...

  7. Dynamic update with probabilities

    NARCIS (Netherlands)

    Van Benthem, Johan; Gerbrandy, Jelle; Kooi, Barteld

    2009-01-01

    Current dynamic-epistemic logics model different types of information change in multi-agent scenarios. We generalize these logics to a probabilistic setting, obtaining a calculus for multi-agent update with three natural slots: prior probability on states, occurrence probabilities in the relevant

  8. Elements of quantum probability

    NARCIS (Netherlands)

    Kummerer, B.; Maassen, H.

    1996-01-01

    This is an introductory article presenting some basic ideas of quantum probability. From a discussion of simple experiments with polarized light and a card game we deduce the necessity of extending the body of classical probability theory. For a class of systems, containing classical systems with

  9. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  10. On Probability Domains IV

    Science.gov (United States)

    Frič, Roman; Papčo, Martin

    2017-06-01

Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one (a quantum phenomenon) and, dually, an observable can map a crisp random event to a genuine fuzzy random event (a fuzzy phenomenon). The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.

  11. Validating rankings in soccer championships

    Directory of Open Access Journals (Sweden)

    Annibal Parracho Sant'Anna

    2012-08-01

Full Text Available The final ranking of a championship is determined by quality attributes combined with other factors that should be filtered out of any decision on relegation or drafting for upper-level tournaments. Factors like referees' mistakes and the difficulty of certain matches, due to their accidental importance to the opponents, should have their influence reduced. This work tests approaches to combining classification rules, considering the imprecision of the number of points as a measure of quality and of the variables that provide reliable explanation for it. Two home-advantage variables are tested and shown to be apt to enter as explanatory variables. Independence between the criteria is checked against the hypothesis of maximal correlation. The importance of factors and of composition rules is evaluated on the basis of correlation between rank vectors, the number of classes, and the number of clubs in tail classes. Data from five years of the Brazilian Soccer Championship are analyzed.

  12. Janus-faced probability

    CERN Document Server

    Rocchi, Paolo

    2014-01-01

    The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems in order to prove that the various interpretations of probability do not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered as one of the most important contributions in the analysis of probability interpretation in the last 10-15 years.

  13. Minkowski metrics in creating universal ranking algorithms

    Directory of Open Access Journals (Sweden)

    Andrzej Ameljańczyk

    2014-06-01

Full Text Available The paper presents a general procedure for creating rankings of a set of objects, where the preference relation is based on an arbitrary ranking function. The analysis of admissible ranking functions begins by showing the fundamental drawbacks of the commonly used functions in the form of a weighted sum. As a special case of the ranking procedure, a procedure based on the notion of an ideal element and the generalized Minkowski distance from that element is proposed. This procedure, presented as a universal ranking algorithm, eliminates most of the disadvantages of ranking functions in the form of a weighted sum. Keywords: ranking functions, preference relation, ranking clusters, categories, ideal point, universal ranking algorithm

  14. Combined Reduced-Rank Transform

    Directory of Open Access Journals (Sweden)

    Anatoli Torokhti

    2006-04-01

Full Text Available We propose and justify a new approach to constructing optimal nonlinear transforms of random vectors. We show that the proposed transform improves such characteristics of rank-reduced transforms as compression ratio and accuracy of decompression, and reduces the required computational work. The proposed transform $\mathcal{T}_p$ is presented in the form of a sum with $p$ terms, where each term is interpreted as a particular rank-reduced transform. Moreover, terms in $\mathcal{T}_p$ are represented as a combination of three operations $\mathcal{F}_k$, $\mathcal{Q}_k$ and $\boldsymbol{\varphi}_k$ with $k=1,\ldots,p$. The prime idea is to determine $\mathcal{F}_k$ separately, for each $k=1,\ldots,p$, from an associated rank-constrained minimization problem similar to that used in the Karhunen--Loève transform. The operations $\mathcal{Q}_k$ and $\boldsymbol{\varphi}_k$ are auxiliary for finding $\mathcal{F}_k$. The contribution of each term in $\mathcal{T}_p$ improves the entire transform performance. A corresponding unconstrained nonlinear optimal transform is also considered. Such a transform is important in its own right because it is treated as an optimal filter without signal compression. A rigorous analysis of errors associated with the proposed transforms is given.

  15. Functional Multiplex PageRank

    Science.gov (United States)

    Iacovacci, Jacopo; Rahmede, Christoph; Arenas, Alex; Bianconi, Ginestra

    2016-10-01

Recently it has been recognized that many complex social, technological and biological networks have a multilayer nature and can be described by multiplex networks. Multiplex networks are formed by a set of nodes connected by links having different connotations, forming the different layers of the multiplex. Characterizing the centrality of the nodes in a multiplex network is a challenging task, since the centrality of a node naturally depends on the importance associated with links of a certain type. Here we propose to assign to each node of a multiplex network a centrality called Functional Multiplex PageRank that is a function of the weights given to every different pattern of connections (multilinks) existing in the multiplex network between any two nodes. Since multilinks distinguish all the possible ways in which the links in different layers can overlap, the Functional Multiplex PageRank can describe important non-linear effects when large or small relevance is assigned to multilinks with overlap. Here we apply the Functional Multiplex PageRank to multiplex airport networks, to the neuronal network of the nematode C. elegans, and to social collaboration and citation networks between scientists. This analysis reveals important differences between the most central nodes of these networks, and the correlations between their so-called patterns to success.
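As background for the multiplex generalization described above, the standard single-layer PageRank it builds on can be sketched with a short power iteration. The toy graph and damping factor below are illustrative choices of mine, not taken from the paper:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """Power-iteration PageRank on a single-layer directed graph.

    adj[i, j] = 1 if there is a link from node i to node j.
    Dangling nodes (no out-links) are spread uniformly.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling rows become uniform.
    M = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n).T
    r = np.full(n, 1.0 / n)
    while True:
        r_new = damping * M @ r + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Toy 4-node graph: node 0 is linked to by every other node.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0]], dtype=float)
scores = pagerank(adj)
print(scores.argmax())  # node 0 has the highest centrality
```

The Functional Multiplex PageRank replaces this single transition matrix with one whose entries depend on tunable weights for each multilink pattern across layers.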

  16. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, learning methods for nonlinear RankSVM remain time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
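Of the two approximation schemes named in the abstract, random Fourier features are the simpler to sketch. In this minimal illustration (parameter names and values are my own, not the paper's), the RBF kernel exp(-gamma * ||x - y||^2) is approximated by an inner product of explicit random cosine features, after which any linear ranker can be trained on the mapped data instead of the kernel matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, n_features=2000, gamma=0.5, rng=rng):
    """Random Fourier feature map z(x) such that, in expectation,
    z(x).z(y) approximates exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies drawn from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = rff_map(X)
exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
approx = Z @ Z.T
print(np.abs(exact - approx).max())  # small, and shrinks as n_features grows
```

The payoff is that the mapped data Z has explicit features, so the pairwise ranking objective can be optimized with linear methods in time independent of the number of kernel entries.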

  17. A Comparison of Three Major Academic Rankings for World Universities: From a Research Evaluation Perspective

    Directory of Open Access Journals (Sweden)

    Mu-hsuan Huang

    2011-06-01

    Full Text Available This paper introduces three current major university ranking systems. The Performance Ranking of Scientific Papers for World Universities by Higher Education Evaluation and Accreditation Council of Taiwan (HEEACT Ranking emphasizes both the quality and quantity of research and current research performance. The Academic Ranking of World Universities by Shanghai Jiao Tung University (ARWU focuses on outstanding performance of universities with indicators such as Nobel Prize winners. The QS World University Ranking (2004-2009 by Times Higher Education (THE-QS emphasizes on peer review with high weighting in evaluation. This paper compares the 2009 ranking results from the three ranking systems. Differences exist in the top 20 universities in three ranking systems except the Harvard University, which scored top one in all of the three rankings. Comparisons also revealed that the THE-QS favored UK universities. Further, obvious differences can be observed between THE-QS and the other two rankings when ranking results of some European countries (Germany, UK, Netherlands, & Switzerland and Chinese speaking regions were compared.

  18. Fast Estimation of Approximate Matrix Ranks Using Spectral Densities.

    Science.gov (United States)

    Ubaru, Shashanka; Saad, Yousef; Seghouane, Abd-Krim

    2017-05-01

    Many machine learning and data-related applications require the knowledge of approximate ranks of large data matrices at hand. This letter presents two computationally inexpensive techniques to estimate the approximate ranks of such matrices. These techniques exploit approximate spectral densities, popular in physics, which are probability density distributions that measure the likelihood of finding eigenvalues of the matrix at a given point on the real line. Integrating the spectral density over an interval gives the eigenvalue count of the matrix in that interval. Therefore, the rank can be approximated by integrating the spectral density over a carefully selected interval. Two different approaches are discussed to estimate the approximate rank, one based on Chebyshev polynomials and the other based on the Lanczos algorithm. In order to obtain the appropriate interval, it is necessary to locate a gap between the eigenvalues that correspond to noise and the relevant eigenvalues that contribute to the matrix rank. A method for locating this gap and selecting the interval of integration is proposed based on the plot of the spectral density. Numerical experiments illustrate the performance of these techniques on matrices from typical applications.
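The counting idea in this abstract can be illustrated on a small matrix, where integrating the spectral density over an interval reduces to counting eigenvalues above a threshold chosen in the noise gap. The exact eigendecomposition below is for illustration only (the letter's Chebyshev and Lanczos estimators exist precisely to avoid it for large matrices), and the threshold is an assumed toy value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Low-rank signal plus small noise: true rank 5 in a 100x100 matrix.
n, true_rank = 100, 5
A = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, n))
A = A + 0.01 * rng.normal(size=(n, n))
B = A @ A.T  # symmetric PSD, so eigenvalue count = rank count

def approx_rank(B, threshold):
    """Count eigenvalues above `threshold`, i.e. integrate the spectral
    density over the interval [threshold, lambda_max]."""
    eigs = np.linalg.eigvalsh(B)
    return int((eigs > threshold).sum())

print(approx_rank(B, threshold=1.0))  # -> 5
```

Because the five signal eigenvalues dwarf the noise eigenvalues here, any threshold inside the gap recovers the rank; locating that gap automatically from a spectral-density plot is the method the letter proposes.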

  19. An interaction-motif-based scoring function for protein-ligand docking

    Directory of Open Access Journals (Sweden)

    Xie Zhong-Ru

    2010-06-01

Full Text Available Abstract Background A good scoring function is essential for molecular docking computations. In conventional scoring functions, energy terms modeling pairwise interactions are cumulatively summed, and the best docking solution is selected. Here, we propose to transform protein-ligand interactions into three-dimensional geometric networks, from which recurring network substructures, or network motifs, are selected and used to provide probability-ranked interaction templates with which to score docking solutions. Results A novel scoring function for protein-ligand docking, MotifScore, was developed. It is non-energy-based; docking is, instead, scored by counting the occurrences of motifs of protein-ligand interaction networks constructed using structures of protein-ligand complexes. MotifScore has been tested on a benchmark set established by others to assess its ability to identify near-native complex conformations among a set of decoys. In this benchmark test, 84% of the highest-scored docking conformations had root-mean-square deviations (RMSDs) below 2.0 Å from the native conformation, which is comparable with the best of several energy-based docking scoring functions. Many of the top motifs, which comprise a multitude of chemical groups that interact simultaneously and make a highly significant contribution to MotifScore, capture recurrent interacting patterns beyond pairwise interactions. Conclusions While providing quite good docking scores, MotifScore is quite different from conventional energy-based functions. MotifScore thus represents a new, network-based approach for exploring problems associated with molecular docking.

  20. Assigning Numerical Scores to Linguistic Expressions

    Directory of Open Access Journals (Sweden)

    María Jesús Campión

    2017-07-01

    Full Text Available In this paper, we study different methods of scoring linguistic expressions defined on a finite set, in the search for a linear order that ranks all those possible expressions. Among them, particular attention is paid to the canonical extension, and its representability through distances in a graph plus some suitable penalization of imprecision. The relationship between this setting and the classical problems of numerical representability of orderings, as well as extension of orderings from a set to a superset is also explored. Finally, aggregation procedures of qualitative rankings and scorings are also analyzed.

  1. Probability and Measure

    CERN Document Server

    Billingsley, Patrick

    2012-01-01

    Praise for the Third Edition "It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this

  2. Probabilities in physics

    CERN Document Server

    Hartmann, Stephan

    2011-01-01

    Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...

  3. Probability in physics

    CERN Document Server

    Hemmo, Meir

    2012-01-01

What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton's laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world's foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive.

  4. Probability an introduction

    CERN Document Server

    Grimmett, Geoffrey

    2014-01-01

    Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...

  5. Probability for statisticians

    CERN Document Server

    Shorack, Galen R

    2017-01-01

    This 2nd edition textbook offers a rigorous introduction to measure theoretic probability with particular attention to topics of interest to mathematical statisticians—a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For the teaching of probability theory to post graduate statistics students, this is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method either prior to or alternative to a characteristic function presentation. Additionally, there is considerable emphasis placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes coverage of censored data martingales. The text includes measure theoretic...

  6. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics", which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  7. Concepts of probability theory

    CERN Document Server

    Pfeiffer, Paul E

    1979-01-01

    Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, more. For advanced undergraduates students of science, engineering, or math. Includes problems with answers and six appendixes. 1965 edition.

  8. Quantum computing and probability.

    Science.gov (United States)

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  9. Elements of quantum probability

    OpenAIRE

    Kummerer, B.; Maassen, Hans

    1996-01-01

    This is an introductory article presenting some basic ideas of quantum probability. From a discussion of simple experiments with polarized light and a card game we deduce the necessity of extending the body of classical probability theory. For a class of systems, containing classical systems with finitely many states, a probabilistic model is developed. It can describe, in particular, the polarization experiments. Some examples of ‘quantum coin tosses’ are discussed, closely related to V.F.R....

  10. Probability in quantum mechanics

    Directory of Open Access Journals (Sweden)

    J. G. Gilson

    1982-01-01

    Full Text Available By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.

  11. Mutual point-winning probabilities (MPW): a new performance measure for table tennis.

    Science.gov (United States)

    Ley, Christophe; Dominicy, Yves; Bruneel, Wim

    2017-11-13

We propose a new performance measure for table tennis players: the mutual point-winning probabilities (MPW) as server and receiver. The MPWs quantify a player's chances to win a point against a given opponent, and hence nicely complement the classical match statistics between two players. These new quantities are based on a Bradley-Terry-type statistical model that takes into account the importance of individual points, since a rally at 8-2 in the first set is less crucial than a rally at 9-9 in the final set. The MPWs hence reveal a player's strength on his/her service against a given opponent as well as his/her capacity for scoring crucial points. We estimate the MPWs by means of maximum likelihood estimation and show via a Monte Carlo simulation study that our estimation procedure works well. To illustrate the MPWs' versatile use, we organized two round-robin tournaments of ten and eleven players, respectively, from the Belgian table tennis federation. We compare the classical final ranking to the ranking based on MPWs, and we highlight how the MPWs shed new light on the strengths and weaknesses of the players.
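The authors' model weights points by their importance; stripping that away, a heavily simplified sketch (my assumptions: i.i.d. points, no importance weights, standard 11-point scoring) shows how point-winning probabilities on serve and on receive combine into a set-winning probability by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(7)

def set_win_prob(p_serve, p_receive, n_sim=10000, rng=rng):
    """Monte Carlo probability that player A wins an 11-point table
    tennis set (win by 2), given A's point-winning probabilities on
    serve and on receive. A serves first; service changes every two
    points, and every point from 10-10 on."""
    wins = 0
    for _ in range(n_sim):
        a = b = 0
        while not ((a >= 11 or b >= 11) and abs(a - b) >= 2):
            total = a + b
            a_serving = (total // 2) % 2 == 0 if total < 20 else total % 2 == 0
            if rng.random() < (p_serve if a_serving else p_receive):
                a += 1
            else:
                b += 1
        wins += a > b
    return wins / n_sim

# Illustrative MPW-style estimates: 60% on serve, 50% on receive.
print(set_win_prob(0.60, 0.50))  # well above 0.5
```

This illustrates why the two MPWs are a natural pair of summary statistics: the per-point probabilities determine the whole distribution over set and match outcomes.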

  12. Eliciting Subjective Probabilities with Binary Lotteries

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    2014-01-01

We evaluate a binary lottery procedure for inducing risk neutral behavior in a subjective belief elicitation task. Prior research has shown this procedure to robustly induce risk neutrality when subjects are given a single risk task defined over objective probabilities. Drawing a sample from the same subject population, we find evidence that the binary lottery procedure also induces linear utility in a subjective probability elicitation task using the Quadratic Scoring Rule. We also show that the binary lottery procedure can induce direct revelation of subjective probabilities in subjects...
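The Quadratic Scoring Rule used in the elicitation task above pays a reported probability p a score of the form 1 - (p - outcome)^2, which is proper: under linear utility, truthful reporting maximizes the expected score. A minimal sketch (the payoff scale is an illustrative choice):

```python
def quadratic_score(report, outcome):
    """Quadratic (Brier-type) score for a reported probability of a
    binary event: 1 - (report - outcome)^2, with outcome in {0, 1}."""
    return 1.0 - (report - outcome) ** 2

def expected_score(report, true_p):
    """Expected score when the event occurs with probability true_p."""
    return (true_p * quadratic_score(report, 1)
            + (1 - true_p) * quadratic_score(report, 0))

# Properness: over a grid of reports, the expected score peaks at true_p.
true_p = 0.7
best = max(range(101), key=lambda k: expected_score(k / 100, true_p)) / 100
print(best)  # -> 0.7
```

The binary lottery procedure the authors study addresses the remaining gap: for a risk-averse subject, maximizing expected *utility* of the score need not mean truthful reporting, so the score is paid out in lottery tickets rather than money.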

  13. The perception of probability.

    Science.gov (United States)

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  14. Methodology, Meaning and Usefulness of Rankings

    Science.gov (United States)

    Williams, Ross

    2008-01-01

    University rankings are having a profound effect on both higher education systems and individual universities. In this paper we outline these effects, discuss the desirable characteristics of a good ranking methodology and document existing practice, with an emphasis on the two main international rankings (Shanghai Jiao Tong and THES-QS). We take…

  15. Ranking of bank branches with undesirable and fuzzy data: A DEA-based approach

    Directory of Open Access Journals (Sweden)

    Sohrab Kordrostami

    2016-07-01

Full Text Available Banks are one of the most important financial sectors for the economic development of a country. Certainly, the efficiency scores and ranks of banks are significant and effective inputs to future planning. Sometimes the performance of banks must be measured in the presence of undesirable and vague factors. For these reasons, the current paper introduces a procedure based on data envelopment analysis (DEA) for evaluating the efficiency and complete ranking of decision making units (DMUs) where undesirable and fuzzy measures exist. In the presence of undesirable and fuzzy measures, DMUs are evaluated by using a fuzzy expected value approach, and DMUs with similar efficiency scores are ranked by using constraints and the Maximal Balance Index based on the optimal shadow prices. Afterwards, the efficiency scores of 25 branches of an Iranian commercial bank are evaluated using the proposed method. A complete ranking of bank branches is also presented to discriminate between branches.

  16. Tool for Ranking Research Options

    Science.gov (United States)

    Ortiz, James N.; Scott, Kelly; Smith, Harold

    2005-01-01

    Tool for Research Enhancement Decision Support (TREDS) is a computer program developed to assist managers in ranking options for research aboard the International Space Station (ISS). It could likely also be adapted to perform similar decision-support functions in industrial and academic settings. TREDS provides a ranking of the options, based on a quantifiable assessment of all the relevant programmatic decision factors of benefit, cost, and risk. The computation of the benefit for each option is based on a figure of merit (FOM) for ISS research capacity that incorporates both quantitative and qualitative inputs. Qualitative inputs are gathered and partly quantified by use of the time-tested analytical hierarchical process and used to set weighting factors in the FOM corresponding to priorities determined by the cognizant decision maker(s). Then by use of algorithms developed specifically for this application, TREDS adjusts the projected benefit for each option on the basis of levels of technical implementation, cost, and schedule risk. Based partly on Excel spreadsheets, TREDS provides screens for entering cost, benefit, and risk information. Drop-down boxes are provided for entry of qualitative information. TREDS produces graphical output in multiple formats that can be tailored by users.

  17. Issue Management Risk Ranking Systems

    Energy Technology Data Exchange (ETDEWEB)

    Novack, Steven David; Marshall, Frances Mc Clellan; Stromberg, Howard Merion; Grant, Gary Michael

    1999-06-01

Thousands of safety issues have been collected on-line at the Idaho National Engineering and Environmental Laboratory (INEEL) as part of the Issue Management Plan. However, there has been no established approach to prioritize collected and future issues. The authors developed a methodology, based on hazards assessment, to identify and risk-rank over 5000 safety issues collected at INEEL. This approach required that it be easily applied and understandable for site adaptation and commensurate with the Integrated Safety Plan. High-risk issues were investigated, and mitigative/preventive measures were suggested and ranked based on a cost-benefit scheme to provide risk-informed safety measures. This methodology was consistent with other integrated safety management goals and tasks, providing a site-wide risk-informed decision tool to reduce hazardous conditions and focus resources on high-risk safety issues. As part of the issue management plan, this methodology was incorporated at the issue collection level and training was provided to management to better familiarize decision-makers with concepts of safety and risk. This prioritization methodology and issue dissemination procedure will be discussed. Results of issue prioritization and training efforts will be summarized. Difficulties and advantages of the process will be reported. Development and incorporation of this process into INEEL's lessons-learned reporting and the site-wide integrated safety management program will be shown, with an emphasis on establishing self-reliance and ownership of safety issues.

  18. Experimental Probability in Elementary School

    Science.gov (United States)

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  19. The pleasures of probability

    CERN Document Server

    Isaac, Richard

    1995-01-01

    The ideas of probability are all around us. Lotteries, casino gambling, the almost non-stop polling which seems to mold public policy more and more: these are a few of the areas where principles of probability impinge in a direct way on the lives and fortunes of the general public. At a more removed level there is modern science, which uses probability and its offshoots like statistics and the theory of random processes to build mathematical descriptions of the real world. In fact, twentieth-century physics, in embracing quantum mechanics, has a world view that is at its core probabilistic in nature, contrary to the deterministic one of classical physics. In addition to all this muscular evidence of the importance of probability ideas it should also be said that probability can be lots of fun. It is a subject where you can start thinking about amusing, interesting, and often difficult problems with very little mathematical background. In this book, I wanted to introduce a reader with at least a fairl...

  20. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew...... characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a lookout etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds...... probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving...

  1. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2010-01-01

    This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice...... probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications....... The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models....

  2. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions; Sample Space and Events; Probabilities; Counting Techniques; Independence and Conditional Probability; Independence; Conditioning; The Borel-Cantelli Theorem; Discrete Random Variables; Random Variables and Vectors; Expected Value; Variance and Other Moments; Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables; The Law of Large Numbers; Conditional Expectation; Generating Functions; Branching Processes; Random Walk Revisited; Branching Processes Revisited; More on Random Walk; Markov Chains; Definitions and Examples; Probability Distributions of Markov Chains; The First Step Analysis; Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity; Continuous Random Variables; Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case; Simulation; Distribution F...

  3. System for ranking relative threats of U.S. volcanoes

    Science.gov (United States)

    Ewert, J.W.

    2007-01-01

    A methodology to systematically rank volcanic threat was developed as the basis for prioritizing volcanoes for long-term hazards evaluations, monitoring, and mitigation activities. A ranking of 169 volcanoes in the United States and the Commonwealth of the Northern Mariana Islands (U.S. volcanoes) is presented based on scores assigned for various hazard and exposure factors. Fifteen factors define the hazard: Volcano type, maximum known eruptive explosivity, magnitude of recent explosivity within the past 500 and 5,000 years, average eruption-recurrence interval, presence or potential for a suite of hazardous phenomena (pyroclastic flows, lahars, lava flows, tsunami, flank collapse, hydrothermal explosion, primary lahar), and deformation, seismic, or degassing unrest. Nine factors define exposure: a measure of ground-based human population in hazard zones, past fatalities and evacuations, a measure of airport exposure, a measure of human population on aircraft, the presence of power, transportation, and developed infrastructure, and whether or not the volcano forms a significant part of a populated island. The hazard score and exposure score for each volcano are multiplied to give its overall threat score. Once scored, the ordered list of volcanoes is divided into five overall threat categories from very high to very low. © 2007 ASCE.
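
    The multiplicative hazard-times-exposure scheme lends itself to a direct sketch; the factor scores and the category cut points below are placeholders, not the published values.

```python
def threat_score(hazard_scores, exposure_scores):
    """Overall threat = (sum of hazard factor scores) * (sum of exposure factor scores)."""
    return sum(hazard_scores) * sum(exposure_scores)

def threat_category(score, cuts=(64, 31, 6, 3)):
    """Map a threat score to one of five bands; these cut points are
    illustrative placeholders, not the study's thresholds."""
    labels = ("very high", "high", "moderate", "low")
    for cut, label in zip(cuts, labels):
        if score >= cut:
            return label
    return "very low"

# A hypothetical volcano: 15 hazard factor scores and 9 exposure factor scores.
hazard = [1, 2, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # sums to 10
exposure = [2, 1, 0, 1, 1, 0, 1, 0, 1]                    # sums to 7
overall = threat_score(hazard, exposure)
```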

  4. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)

    2014-06-19

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and two solution strategies are given: the fuzzy transformation via a ranking function, and the stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  5. Classic Problems of Probability

    CERN Document Server

    Gorroochurn, Prakash

    2012-01-01

    "A great book, one that I will certainly add to my personal library."—Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexin

  6. Estimating tail probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Carr, D.B.; Tolley, H.D.

    1982-12-01

    This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.
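
    For contrast with the weighted estimators compared in the study, the plain empirical estimate of a tail probability is simply the fraction of observations beyond the threshold; the exponential-weighting and extrapolation schemes themselves are not reproduced here, and the data below are invented.

```python
def empirical_tail_prob(sample, t):
    """Estimate P(X > t) by the empirical survival function 1 - F_n(t)."""
    return sum(1 for x in sample if x > t) / len(sample)

data = [0.2, 1.5, 0.7, 2.4, 0.1, 3.9, 0.5, 1.1]
p = empirical_tail_prob(data, 1.0)  # 4 of the 8 observations exceed 1.0
```

    Note that beyond the largest observation this estimate is identically zero, which is exactly the limitation that shape-based extrapolation of the tail density is meant to address.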

  7. Introduction to imprecise probabilities

    CERN Document Server

    Augustin, Thomas; de Cooman, Gert; Troffaes, Matthias C M

    2014-01-01

    In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, includin

  8. Two-dimensional ranking of Wikipedia articles

    Science.gov (United States)

    Zhirov, A. O.; Zhirov, O. V.; Shepelyansky, D. L.

    2010-10-01

    The Library of Babel, described by Jorge Luis Borges, stores an enormous amount of information. The Library exists ab aeterno. Wikipedia, a free online encyclopaedia, becomes a modern analogue of such a Library. Information retrieval and ranking of Wikipedia articles become the challenge of modern society. While PageRank highlights very well known nodes with many ingoing links, CheiRank highlights very communicative nodes with many outgoing links. In this way the ranking becomes two-dimensional. Using CheiRank and PageRank we analyze the properties of two-dimensional ranking of all Wikipedia English articles and show that it gives their reliable classification with rich and nontrivial features. Detailed studies are done for countries, universities, personalities, physicists, chess players, Dow-Jones companies and other categories.
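
    A minimal sketch of the two-dimensional idea: PageRank by power iteration over the link graph, and CheiRank as the identical computation on the graph with every link inverted. The toy graph and damping value are illustrative.

```python
def pagerank_vector(links, n, alpha=0.85, iters=200):
    """Power iteration for the PageRank vector of a directed graph given
    as (source, target) edges over nodes 0..n-1, with damping alpha."""
    out = [0] * n
    for s, _ in links:
        out[s] += 1
    p = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - alpha) / n] * n
        dangling = sum(p[i] for i in range(n) if out[i] == 0)
        for i in range(n):
            new[i] += alpha * dangling / n  # dangling mass spread uniformly
        for s, t in links:
            new[t] += alpha * p[s] / out[s]
        p = new
    return p

edges = [(0, 1), (1, 2), (2, 0), (2, 1)]  # toy directed graph
pagerank = pagerank_vector(edges, 3)
# CheiRank: the same computation with every link inverted.
cheirank = pagerank_vector([(t, s) for s, t in edges], 3)
```

    Plotting each node at its (PageRank, CheiRank) coordinates gives the two-dimensional ranking: ingoing-link popularity on one axis, outgoing-link communicativity on the other.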

  9. Consistent ranking of volatility models

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2006-01-01

    result in an inferior model being chosen as "best" with a probability that converges to one as the sample size increases. We document the practical relevance of this problem in an empirical application and by simulation experiments. Our results provide an additional argument for using the realized...

  10. Epistemology and Probability

    CERN Document Server

    Plotnitsky, Arkady

    2010-01-01

    Offers an exploration of the relationships between epistemology and probability in the work of Niels Bohr, Werner Heisenberg, and Erwin Schrodinger; in quantum mechanics; and in modern physics. This book considers the implications of these relationships and of quantum theory for our understanding of the nature of thinking and knowledge in general

  11. Huygens' foundations of probability

    NARCIS (Netherlands)

    Freudenthal, Hans

    It is generally accepted that Huygens based probability on expectation. The term “expectation,” however, stems from Van Schooten's Latin translation of Huygens' treatise. A literal translation of Huygens' Dutch text shows more clearly what Huygens actually meant and how he proceeded.

  12. Counterexamples in probability

    CERN Document Server

    Stoyanov, Jordan M

    2013-01-01

    While most mathematical examples illustrate the truth of a statement, counterexamples demonstrate a statement's falsity. Enjoyable topics of study, counterexamples are valuable tools for teaching and learning. The definitive book on the subject in regards to probability, this third edition features the author's revisions and corrections plus a substantial new appendix.

  13. Probably Almost Bayes Decisions

    DEFF Research Database (Denmark)

    Anoulova, S.; Fischer, Paul; Poelt, S.

    1996-01-01

    In this paper, we investigate the problem of classifying objects which are given by feature vectors with Boolean entries. Our aim is to "(efficiently) learn probably almost optimal classifications" from examples. A classical approach in pattern recognition uses empirical estimations of the Bayesian...

  14. Univariate Probability Distributions

    Science.gov (United States)

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  15. The Theory of Probability

    Indian Academy of Sciences (India)

    The Theory of Probability. Andrei Nikolaevich Kolmogorov. Classics, Resonance – Journal of Science Education, Volume 3, Issue 4, April 1998, pp. 103-112. Permanent link: http://www.ias.ac.in/article/fulltext/reso/003/04/0103-0112

  16. Probability Theory Without Tears!

    Indian Academy of Sciences (India)

    Probability Theory Without Tears! S Ramasubramanian. Book Review, Resonance – Journal of Science Education, Volume 1, Issue 2, February 1996, pp. 115-116. Permanent link: http://www.ias.ac.in/article/fulltext/reso/001/02/0115-0116

  17. probably mostly white

    African Journals Online (AJOL)

    Willem Scholtz

    internet – the (probably mostly white) public's interest in the so-called Border War is ostensibly at an all-time high. By far most of the publications are written by ex- ... understanding of this very important episode in the history of Southern Africa. It was, therefore, with some anticipation that one waited for this book, which.

  18. the theory of probability

    Indian Academy of Sciences (India)

    important practical applications in statistical quality control. Of a similar kind are the laws of probability for the scattering of missiles, which are basic in the ..... deviations for different ranges for each type of gun and of shell are found empirically in firing practice on an artillery range. But the subsequent solution of all possible ...

  19. On Randomness and Probability

    Indian Academy of Sciences (India)

    On Randomness and Probability: How to Mathematically Model Uncertain Events. Rajeeva L Karandikar, Statistics and Mathematics Unit, Indian Statistical Institute, 7 S J S Sansanwal Marg, New Delhi 110 016, India. Resonance – Journal of Science Education, Volume 1, Issue 2.

  20. A folk-psychological ranking of personality facets

    Directory of Open Access Journals (Sweden)

    Eka Roivainen

    2016-10-01

    Background: Which personality facets should a general personality test measure? No consensus exists on the facet structure of personality, the nature of facets, or the correct method of identifying the most significant facets. However, it can be hypothesized (the lexical hypothesis) that high-frequency personality-describing words are more likely to represent important personality facets, while rarely used words refer to less significant aspects of personality. Participants and procedure: Personality facets were ranked by studying the frequency with which popular personality adjectives are used in causal clauses ("because he is a kind person") on the Internet and in books as attributes of the word person ("kind person"). Results: In Study 1, the 40 most frequently used adjectives had a cumulative usage frequency equal to that of the remaining terms of the 295 studied. When terms with a higher-ranking dictionary synonym or antonym were eliminated, 23 terms remained, representing 23 different facets. In Study 2, clusters of synonymous terms were examined: within the top 30 clusters, personality terms were used 855 times, compared to 240 times in the 70 lower-ranking clusters. Conclusions: It is hypothesized that the personality facets represented by the top-ranking terms and clusters are important and impactful independently of their correlation with abstract underlying personality factors (five/six-factor models). Compared to hierarchical personality models, lists of important facets probably better cover those aspects of personality that are situated between the five or six major domains.
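
    The frequency-based ranking used in the study can be illustrated on a toy corpus; the sentences, adjectives, and counts below are made up for demonstration and are not the study's data.

```python
from collections import Counter
import re

corpus = ("because he is a kind person. because she is an honest person. "
          "a kind person helped us. he is a lazy person. a kind person.")
# Count adjectives appearing immediately before the word "person".
adjectives = re.findall(r"(\w+) person", corpus)
ranking = Counter(adjectives).most_common()
```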

  1. A comparison of probability of ruin and expected discounted utility ...

    African Journals Online (AJOL)

    Individuals in defined-contribution retirement funds currently have a number of options as to how to finance their post-retirement spending. The paper considers the ranking of selected annuitisation strategies by the probability of ruin and by expected discounted utility under different scenarios. 'Ruin' is defined as occurring ...

  2. Time-Aware Service Ranking Prediction in the Internet of Things Environment

    Directory of Open Access Journals (Sweden)

    Yuze Huang

    2017-04-01

    With the rapid development of the Internet of things (IoT), building IoT systems with high quality of service (QoS) has become an urgent requirement in both academia and industry. During the procedures of building IoT systems, QoS-aware service selection is an important concern, which requires the ranking of a set of functionally similar services according to their QoS values. In reality, however, it is quite expensive and even impractical to evaluate all geographically-dispersed IoT services at a single client to obtain such a ranking. Nevertheless, distributed measurement and ranking aggregation have to deal with the high dynamics of QoS values and the inconsistency of partial rankings. To address these challenges, we propose a time-aware service ranking prediction approach named TSRPred for obtaining the global ranking from the collection of partial rankings. Specifically, a pairwise comparison model is constructed to describe the relationships between different services, where the partial rankings are obtained by time series forecasting on QoS values. The comparisons of IoT services are formulated by random walks, and thus, the global ranking can be obtained by sorting the steady-state probabilities of the underlying Markov chain. Finally, the efficacy of TSRPred is validated by simulation experiments based on large-scale real-world datasets.
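
    The final step, sorting services by the steady-state probabilities of a Markov chain built from pairwise comparisons, can be sketched as follows; the transition-matrix construction below is an illustrative assumption rather than the exact TSRPred model.

```python
def steady_state(P, iters=500):
    """Stationary distribution of a row-stochastic matrix P by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def rank_services(wins):
    """wins[i][j] = fraction of pairwise comparisons that service i wins over j
    (wins[i][j] + wins[j][i] = 1 for i != j). The random walk moves from i
    toward services that beat i, so better services accumulate more mass."""
    n = len(wins)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                P[i][j] = (1.0 - wins[i][j]) / (n - 1)  # move toward winners
        P[i][i] = 1.0 - sum(P[i])
    pi = steady_state(P)
    return sorted(range(n), key=lambda s: pi[s], reverse=True)
```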

  5. A Hybrid Model Ranking Search Result for Research Paper Searching on Social Bookmarking

    Directory of Open Access Journals (Sweden)

    pijitra jomsri

    2015-11-01

    Social bookmarking and publication sharing systems are essential tools for web resource discovery. The performance and capabilities of search results from research paper bookmarking systems are vital. Many researchers use social bookmarking for searching papers related to their topics of interest. This paper proposes a combination of similarity-based indexing on "tag, title and abstract" and static ranking to improve search results. In this particular study, the year of the published paper and the type of publication are combined with similarity ranking, called HybridRank. Different weighting scores are employed, and the retrieval performance of these weighted combination rankings is evaluated using mean NDCG values. The results suggest that HybridRank with a 75:25 weighting of similarity rank to static rank has the highest NDCG scores. The preliminary experimental results indicate that the combined ranking technique provides more relevant research paper search results. Furthermore, the chosen heuristic ranking can improve the efficiency of research paper searching on social bookmarking websites.
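
    The weighted combination itself is straightforward to sketch; the 75:25 split is the weighting reported above, while the papers and their scores are invented for illustration.

```python
def hybrid_rank(similarity, static, w_sim=0.75, w_static=0.25):
    """Combine a similarity score with a static (year/publication-type) score
    using the 75:25 weighting reported as best-performing."""
    return {d: w_sim * similarity[d] + w_static * static[d] for d in similarity}

similarity = {"paper_a": 0.9, "paper_b": 0.7, "paper_c": 0.4}
static     = {"paper_a": 0.2, "paper_b": 0.9, "paper_c": 1.0}
scores = hybrid_rank(similarity, static)
ranked = sorted(scores, key=scores.get, reverse=True)
```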

  6. On Probability Domains

    Science.gov (United States)

    Frič, Roman; Papčo, Martin

    2010-12-01

    Motivated by IF-probability theory (intuitionistic fuzzy), we study n-component probability domains in which each event represents a body of competing components and the range of a state represents a simplex S_n of n-tuples of possible rewards; the sum of the rewards is a number from [0,1]. For n=1 we get fuzzy events, for example a bold algebra, and the corresponding fuzzy probability theory can be developed within the category ID of D-posets (equivalently, effect algebras) of fuzzy sets and sequentially continuous D-homomorphisms. For n=2 we get IF-events, i.e., pairs (μ, ν) of fuzzy sets μ, ν ∈ [0,1]^X such that μ(x) + ν(x) ≤ 1 for all x ∈ X, but we order our pairs (events) coordinatewise. Hence the structure of IF-events (where (μ_1, ν_1) ≤ (μ_2, ν_2) whenever μ_1 ≤ μ_2 and ν_2 ≤ ν_1) is different and, consequently, the resulting IF-probability theory models a different principle. The category ID is cogenerated by I = [0,1] (objects of ID are subobjects of powers I^X), has nice properties, and basic probabilistic notions and constructions are categorical; for example, states are morphisms. We introduce the category S_nD cogenerated by S_n = {(x_1, x_2, …, x_n) ∈ I^n : Σ_{i=1}^n x_i ≤ 1}, carrying the coordinatewise partial order, difference, and sequential convergence, and we show how basic probability notions can be defined within S_nD.

  7. Negative probability in the framework of combined probability

    OpenAIRE

    Burgin, Mark

    2013-01-01

    Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...

  8. Development of a comparative risk ranking system for agents posing a bioterrorism threat to human or animal populations.

    Science.gov (United States)

    Tomuzia, Katharina; Menrath, Andrea; Frentzel, Hendrik; Filter, Matthias; Weiser, Armin A; Bräunig, Juliane; Buschulte, Anja; Appel, Bernd

    2013-09-01

    Various systems for prioritizing biological agents with respect to their applicability as biological weapons are available, ranging from qualitative to (semi)quantitative approaches. This research aimed at generating a generic risk ranking system applicable to human and animal pathogenic agents based on scientific information. Criteria were evaluated and clustered to create a criteria list. Considering the availability of data, 28 criteria, distinguished by content, were identified; these can be classified into 11 thematic areas or categories. Relevant categories contributing to probability were historical aspects, accessibility, production efforts, and possible paths for dispersion. Categories associated with impact deal with containment measures, availability of diagnostics, preventive and treatment measures in human and animal populations, impact on society, human and veterinary public health, and economic and ecological consequences. To allow data-based scoring, each criterion was described by at least 1 measure that allows the assignment of values. These values constitute quantities, ranges, or facts that are as explicit and precise as possible. The consideration of minimum and maximum values that can occur due to natural variations, and that are often described in the literature, led to the development of minimum and maximum criterion and consequently category scores. Missing or incomplete data, and the uncertainty resulting therefrom, were integrated into the scheme via a cautious (but not overcautious) approach. The visualization technique used allows the description and illustration of uncertainty at the level of probability and impact. The developed risk ranking system was evaluated by assessing the risk originating from the bioterrorism threat of the animal pathogen bluetongue virus, the human pathogen Enterohemorrhagic Escherichia coli O157:H7, the zoonotic Bacillus anthracis, and Botulinum neurotoxin.
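
    The minimum/maximum scoring idea, carrying data uncertainty through as a score interval rather than a point value, can be sketched as follows; the categories, criteria, and score ranges are invented for illustration.

```python
def category_score(criteria):
    """criteria: list of (min_score, max_score) pairs, one per criterion.
    Summing minima and maxima separately propagates uncertainty from
    missing or variable data up to the category level."""
    return (sum(lo for lo, _ in criteria),
            sum(hi for _, hi in criteria))

# Hypothetical agent with two probability-side and two impact-side categories.
prob_lo, prob_hi = category_score([(1, 2), (3, 3)])   # e.g. accessibility, production effort
imp_lo, imp_hi = category_score([(2, 4), (1, 1)])     # e.g. public health, economic impact
risk_lo, risk_hi = prob_lo * imp_lo, prob_hi * imp_hi  # probability x impact envelope
```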

  9. Dietary risk ranking for residual antibiotics in cultured aquatic products around Tai Lake, China.

    Science.gov (United States)

    Song, Chao; Li, Le; Zhang, Cong; Qiu, Liping; Fan, Limin; Wu, Wei; Meng, Shunlong; Hu, Gengdong; Chen, Jiazhang; Liu, Ying; Mao, Aimin

    2017-10-01

    Antibiotics are widely used in aquaculture and therefore may be present as a dietary risk in cultured aquatic products. Using the Tai Lake Basin as a study area, we assessed the presence of 15 antibiotics in 5 widely cultured aquatic species using a newly developed dietary risk ranking approach. By assigning scores to each factor involved in the ranking matrices, dietary risk scores per antibiotic and per aquatic species were calculated. The results indicated that fluoroquinolone antibiotics posed the highest dietary risk in all aquatic species. The total score per aquatic species was then obtained by summing all 15 antibiotic scores; crab (Eriocheir sinensis) was found to carry the highest dietary risk. Finally, the antibiotic category and aquatic species of most concern were identified. This study highlights the importance of dietary risk ranking in the production and consumption of cultured aquatic products around Tai Lake. Copyright © 2017 Elsevier Inc. All rights reserved.
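
    The per-species aggregation reduces to summing per-antibiotic scores; the species, antibiotics, and score values below are invented for illustration and are not the study's data.

```python
def species_risk(scores_by_antibiotic):
    """Total dietary-risk score for one species: sum of its antibiotic scores."""
    return sum(scores_by_antibiotic.values())

# Hypothetical scores for three species over three antibiotics.
species_scores = {
    "crab":   {"enrofloxacin": 9, "ciprofloxacin": 8, "oxytetracycline": 3},
    "carp":   {"enrofloxacin": 6, "ciprofloxacin": 5, "oxytetracycline": 2},
    "shrimp": {"enrofloxacin": 4, "ciprofloxacin": 4, "oxytetracycline": 1},
}
totals = {sp: species_risk(s) for sp, s in species_scores.items()}
highest = max(totals, key=totals.get)
```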

  10. Bootstrap Sequential Determination of the Co-integration Rank in VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A. M. Robert

    Determining the co-integrating rank of a system of variables has become a fundamental aspect of applied research in macroeconomics and finance. It is well known that standard asymptotic likelihood ratio tests for co-integration rank of Johansen (1996) can be unreliable in small samples...... with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense...... that the probability of selecting a rank smaller than (equal to) the true co-integrating rank will converge to zero (one minus the marginal significance level), as the sample size diverges, for general I(1) processes. No such likelihood-based procedure is currently known to be available. In this paper we fill this gap...
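
    The sequential rank-selection rule referred to above can be sketched generically; computing the bootstrap p-values themselves (the substantive part of the procedure) is omitted, and the p-values below are hypothetical.

```python
def sequential_rank(pvalues, alpha=0.05):
    """Sequential co-integration rank selection: test the null 'rank <= r'
    for r = 0, 1, ... in turn and return the first r that is not rejected.
    pvalues[r] is the (bootstrap) p-value of the rank-r test."""
    for r, p in enumerate(pvalues):
        if p > alpha:
            return r
    return len(pvalues)  # every null rejected: conclude full rank

chosen = sequential_rank([0.001, 0.020, 0.300, 0.800])  # stops at r = 2
```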

  12. Sugeno integral ranking of release scenarios in a low and intermediate waste repository

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. Ho; Kim, Tae Woon; Ha, Jae Joo [Korea Atomic Energy Reserach Institute, Taejon (Korea, Republic of)

    2004-11-15

    In the present study, a multi-criteria decision-making (MCDM) problem, the ranking of important radionuclide release scenarios in a low- and intermediate-level radioactive waste repository, is treated on the basis of λ-fuzzy measures and the Sugeno integral. Ranking important scenarios can lead to the provision of more effective safety measures at the design stage of the repository. The ranking is determined by the relative degree of appropriateness of the scenario alternatives. To demonstrate the validity of the proposed approach to ranking release scenarios, results of the previous AHP study are used and compared with those of the present SIAHP approach. Since the AHP approach uses importance weights based on additive probability measures, the interaction among criteria is ignored. The comparison of scenario rankings obtained from these two approaches enables us to figure out the effect of different models for interaction among criteria.

  13. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    Full Text Available An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, with the vector extrapolation method as its accelerator. We show how to periodically combine the extrapolation method with the multilevel aggregation method on the finest level to speed up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with typical methods are also made.
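    The acceleration idea above (power iteration periodically combined with vector extrapolation) can be sketched minimally. The componentwise Aitken delta-squared step and the 3-page link matrix below are illustrative assumptions, not the paper's exact multilevel scheme:

```python
import numpy as np

def pagerank_extrapolated(P, alpha=0.85, tol=1e-10, extrap_every=10, max_iter=1000):
    """Power iteration for the PageRank vector of a column-stochastic P,
    with a periodic componentwise Aitken delta-squared extrapolation step."""
    n = P.shape[0]
    G = alpha * P + (1 - alpha) / n * np.ones((n, n))  # dense Google matrix (toy scale)
    x = np.full(n, 1.0 / n)
    history = []
    for k in range(max_iter):
        x_new = G @ x
        x_new /= x_new.sum()
        history.append(x_new)
        if len(history) >= 3 and (k + 1) % extrap_every == 0:
            x0, x1, x2 = history[-3], history[-2], history[-1]
            denom = x2 - 2 * x1 + x0
            x_ext = x2.copy()
            ok = np.abs(denom) > 1e-14
            x_ext[ok] = x0[ok] - (x1[ok] - x0[ok]) ** 2 / denom[ok]
            if np.all(x_ext > 0):              # only accept a valid probability vector
                x_new = x_ext / x_ext.sum()
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Toy 3-page web: page 0 links to 1 and 2, page 1 to 2, page 2 to 0.
P = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
pr, iters = pagerank_extrapolated(P)
print(pr.round(3), iters)
```

    At toy scale the extrapolation barely matters; the point of the paper is that on large graphs such periodic acceleration can cut the number of expensive matrix-vector products.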

  14. Integrated inventory ranking system for oilfield equipment industry

    Directory of Open Access Journals (Sweden)

    Jalel Ben Hmida

    2014-01-01

    Full Text Available Purpose: This case study is motivated by the subcontracting problem in an oilfield equipment and service company where the management needs to decide which parts to manufacture in-house when the capacity is not enough to make all required parts. Currently the company is making subcontracting decisions based on management’s experience. Design/methodology/approach: Working with the management, a decision support system (DSS is developed to rank parts by integrating three inventory classification methods considering both quantitative factors such as cost and demand, and qualitative factors such as functionality, efficiency, and quality. The proposed integrated inventory ranking procedure will make use of three classification methods: ABC, FSN, and VED. Findings: An integration mechanism using weights is developed to rank the parts based on the total priority scores. The ranked list generated by the system helps management to identify about 50 critical parts to manufacture in-house. Originality/value: The integration of all three inventory classification techniques into a single system is a unique feature of this research. This is important as it provides a more inclusive, big picture view of the DSS for management’s use in making business decisions.
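    A minimal sketch of such an integration mechanism follows. The class-to-score mapping and the weights are invented for illustration; the case study derives its own weights with management input:

```python
# Hypothetical sketch: combine ABC, FSN and VED classes into one priority score.
# The letter-to-score mapping and the weights are illustrative assumptions.
CLASS_SCORES = {
    "ABC": {"A": 3, "B": 2, "C": 1},   # cost/usage value
    "FSN": {"F": 3, "S": 2, "N": 1},   # movement: fast / slow / non-moving
    "VED": {"V": 3, "E": 2, "D": 1},   # criticality: vital / essential / desirable
}
WEIGHTS = {"ABC": 0.5, "FSN": 0.2, "VED": 0.3}  # assumed relative importance

def total_priority(part):
    """Weighted sum of the three classification scores for one part."""
    return sum(WEIGHTS[m] * CLASS_SCORES[m][part[m]] for m in WEIGHTS)

parts = [
    {"id": "P1", "ABC": "A", "FSN": "F", "VED": "V"},
    {"id": "P2", "ABC": "C", "FSN": "S", "VED": "V"},
    {"id": "P3", "ABC": "B", "FSN": "N", "VED": "D"},
]
ranked = sorted(parts, key=total_priority, reverse=True)
print([p["id"] for p in ranked])  # ['P1', 'P2', 'P3']
```

    The top of the ranked list would correspond to the roughly 50 critical parts the management chose to manufacture in-house.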

  15. The LAILAPS search engine: relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Bargsten, Joachim; Haberhauer, Gregor; Klapperstück, Matthias; Leps, Michael; Weinel, Christian; Wünschiers, Röbbe; Weissbach, Mandy; Stein, Jens; Scholz, Uwe

    2010-01-15

    Search engines and retrieval systems are popular tools on a life science desktop. The manual inspection of hundreds of database entries that reflect a life science concept or fact is time-intensive daily work. Here, it is not the number of query results that matters, but their relevance. In this paper, we present the LAILAPS search engine for life science databases. The concept is to combine a novel feature model for relevance ranking, a machine learning approach to model user relevance profiles, ranking improvement by user feedback tracking, and an intuitive and slim web user interface that estimates relevance rank by tracking user interactions. Queries are formulated as simple keyword lists and are expanded by synonyms. Supporting a flexible text index and a simple data import format, LAILAPS can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. With a set of features extracted from each database hit, in combination with user relevance preferences, a neural network predicts user-specific relevance scores. Using expert knowledge as training data for a predefined neural network, or using users' own relevance training sets, a reliable relevance ranking of database hits has been implemented. In this paper, we present the LAILAPS system, its concepts, benchmarks and use cases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.

  16. Paradoxes in probability theory

    CERN Document Server

    Eckhardt, William

    2013-01-01

    Paradoxes provide a vehicle for exposing misinterpretations and misapplications of accepted principles. This book discusses seven paradoxes surrounding probability theory. Some remain the focus of controversy; others have allegedly been solved, but the accepted solutions are demonstrably incorrect. Each paradox is shown to rest on one or more fallacies. Instead of the esoteric, idiosyncratic, and untested methods that have been brought to bear on these problems, the book invokes uncontroversial probability principles, acceptable both to frequentists and subjectivists. The philosophical disputation inspired by these paradoxes is shown to be misguided and unnecessary; for instance, startling claims concerning human destiny and the nature of reality are directly related to fallacious reasoning in a betting paradox, and a problem analyzed in philosophy journals is resolved by means of a computer program.

  17. Probability, Nondeterminism and Concurrency

    DEFF Research Database (Denmark)

    Varacca, Daniele

    Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particular, there is no categorical distributive law between them. We introduce the powerdomain of indexed valuations, which modifies the usual probabilistic powerdomain to take more detailed account of where probabilistic choices are made. We show the existence of a distributive law between the powerdomain of indexed valuations... In the second part of the thesis we provide an operational reading of continuous valuations on certain domains (the distributive concrete domains of Kahn and Plotkin) through the model of probabilistic event structures. Event structures... This reveals the computational intuition lying behind the mathematics.

  18. Waste Package Misload Probability

    Energy Technology Data Exchange (ETDEWEB)

    J.K. Knudsen

    2001-11-20

    The objective of this calculation is to determine the probability of occurrence for fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in each event. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.

  19. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  20. Contributions to quantum probability

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Tobias

    2010-06-25

    Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that, generally, none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated for the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion on possibilistic vs. probabilistic frameworks. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a

  1. Probability theory and applications

    CERN Document Server

    Hsu, Elton P

    1999-01-01

    This volume, with contributions by leading experts in the field, is a collection of lecture notes of the six minicourses given at the IAS/Park City Summer Mathematics Institute. It introduces advanced graduates and researchers in probability theory to several of the currently active research areas in the field. Each course is self-contained with references and contains basic materials and recent results. Topics include interacting particle systems, percolation theory, analysis on path and loop spaces, and mathematical finance. The volume gives a balanced overview of the current status of probability theory. An extensive bibliography for further study and research is included. This unique collection presents several important areas of current research and a valuable survey reflecting the diversity of the field.

  2. Superpositions of probability distributions.

    Science.gov (United States)

    Jizba, Petr; Kleinert, Hagen

    2008-09-01

    Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v = σ² play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much easier than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
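    The qualitative effect of such superpositions is easy to demonstrate numerically: mixing Gaussians of different variances produces heavier-than-Gaussian tails. The two-point smearing distribution below is an arbitrary illustrative choice, not one of the paper's semigroup-preserving families:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

z = rng.normal(0.0, 1.0, n)          # single Gaussian: excess kurtosis ~ 0

# Superposition: draw the variance v from a smearing distribution (here an
# arbitrary two-point mixture), then draw x ~ N(0, v).
v = rng.choice([0.5, 2.0], size=n)
x = rng.normal(0.0, np.sqrt(v))

def excess_kurtosis(s):
    s = s - s.mean()
    return np.mean(s**4) / np.mean(s**2) ** 2 - 3.0

print(round(excess_kurtosis(z), 2))  # close to 0
print(round(excess_kurtosis(x), 2))  # clearly positive: heavier tails
```

    For this mixture the theoretical excess kurtosis is 3·E[v²]/E[v]² − 3 ≈ 1.08, which the sample estimate reproduces closely.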

  3. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  4. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, of which the ranking performances are severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on the real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.

  5. Measurement uncertainty and probability

    National Research Council Canada - National Science Library

    Willink, Robin

    2013-01-01

    ... and probability models; 3.4 Inference and confidence; 3.5 Two central limit theorems; 3.6 The Monte Carlo method and process simulation; 4 The randomization of systematic errors: 4.1 The Working Group of 1980; 4.2 From classical repetition to practica...

  6. Structural Minimax Probability Machine.

    Science.gov (United States)

    Gu, Bin; Sun, Xingming; Sheng, Victor S

    2017-07-01

    Minimax probability machine (MPM) is an interesting discriminative classifier based on generative prior knowledge. It can directly estimate the probabilistic accuracy bound by minimizing the maximum probability of misclassification. The structural information of data is an effective way to represent prior knowledge, and has been found to be vital for designing classifiers in real-world problems. However, MPM only considers the prior probability distribution of each class with a given mean and covariance matrix, which does not efficiently exploit the structural information of data. In this paper, we use two finite mixture models to capture the structural information of the data from binary classification. For each subdistribution in a finite mixture model, only its mean and covariance matrix are assumed to be known. Based on the finite mixture models, we propose a structural MPM (SMPM). SMPM can be solved effectively by a sequence of the second-order cone programming problems. Moreover, we extend a linear model of SMPM to a nonlinear model by exploiting kernelization techniques. We also show that the SMPM can be interpreted as a large margin classifier and can be transformed to support vector machine and maxi-min margin machine under certain special conditions. Experimental results on both synthetic and real-world data sets demonstrate the effectiveness of SMPM.

  7. Probability via expectation

    CERN Document Server

    Whittle, Peter

    1992-01-01

    This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...

  8. Robust Tracking with Discriminative Ranking Middle-Level Patches

    Directory of Open Access Journals (Sweden)

    Hong Liu

    2014-04-01

    Full Text Available The appearance model has been shown to be essential for robust visual tracking, since it is the basic criterion for locating targets in video sequences. Though existing tracking-by-detection algorithms have shown great promise, they still suffer from the drift problem, which is caused by updating appearance models. In this paper, we propose a new appearance model composed of ranking middle-level patches to capture more object distinctiveness than traditional tracking-by-detection models. Targets and backgrounds are represented by both low-level bottom-up features and high-level top-down patches, which complement each other. Bottom-up features are defined at the pixel level, and each feature gets its discrimination score through a selective feature attention mechanism. In top-down feature extraction, rectangular patches are ranked according to their bottom-up discrimination scores, by which all of them are clustered into irregular patches, named ranking middle-level patches. In addition, at the stage of classifier training, the online random forests algorithm is specially refined to reduce the drift problem. Experiments on challenging public datasets and our test videos demonstrate that our approach can effectively prevent tracker drift and obtain competitive performance in visual tracking.

  9. Rank Modulation for Translocation Error Correction

    CERN Document Server

    Farnoud, Farzad; Milenkovic, Olgica

    2012-01-01

    We consider rank modulation codes for flash memories that allow for handling arbitrary charge drop errors. Unlike classical rank modulation codes used for correcting errors that manifest themselves as swaps of two adjacently ranked elements, the proposed translocation rank codes account for more general forms of errors that arise in storage systems. Translocations represent a natural extension of the notion of adjacent transpositions and as such may be analyzed using related concepts in combinatorics and rank modulation coding. Our results include tight bounds on the capacity of translocation rank codes, construction techniques for asymptotically good codes, as well as simple decoding methods for one class of structured codes. As part of our exposition, we also highlight the close connections between the new code family and permutations with short common subsequences, deletion and insertion error-correcting codes for permutations and permutation arrays.

  10. Dynamics of Ranking Processes in Complex Systems

    Science.gov (United States)

    Blumm, Nicholas; Ghoshal, Gourab; Forró, Zalán; Schich, Maximilian; Bianconi, Ginestra; Bouchaud, Jean-Philippe; Barabási, Albert-László

    2012-09-01

    The world is addicted to ranking: everything, from the reputation of scientists, journals, and universities to purchasing decisions is driven by measured or perceived differences between them. Here, we analyze empirical data capturing real time ranking in a number of systems, helping to identify the universal characteristics of ranking dynamics. We develop a continuum theory that not only predicts the stability of the ranking process, but shows that a noise-induced phase transition is at the heart of the observed differences in ranking regimes. The key parameters of the continuum theory can be explicitly measured from data, allowing us to predict and experimentally document the existence of three phases that govern ranking stability.

  11. Error analysis of stochastic gradient descent ranking.

    Science.gov (United States)

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2013-06-01

    Ranking is an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in the error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of the ranking error.
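    The general recipe (a kernel expansion updated by stochastic gradient steps on randomly sampled pairs under the least squares loss) can be sketched as follows. The RBF kernel, step size, regularization constant and synthetic scores are all assumptions made for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the true score is a smooth function of a scalar feature x.
X = rng.uniform(-2, 2, size=30)
true_score = np.sin(X)

def K(a, b, gamma=1.0):
    """RBF kernel (an assumed choice)."""
    return np.exp(-gamma * (a - b) ** 2)

Kmat = K(X[:, None], X[None, :])
c = np.zeros(len(X))          # coefficients of the kernel expansion f = sum_k c_k K(x_k, .)
eta, lam = 0.05, 1e-3         # step size and regularization (assumed values)

for t in range(20_000):
    i, j = rng.integers(len(X), size=2)   # sample a random pair
    if i == j:
        continue
    f = Kmat @ c
    res = (f[i] - f[j]) - (true_score[i] - true_score[j])  # least squares residual
    c *= 1 - eta * lam        # shrinkage from the RKHS regularizer
    c[i] -= eta * res         # functional gradient step on the sampled pair
    c[j] += eta * res

pred = Kmat @ c
# Pairwise ranking accuracy over all pairs of training points:
ii, jj = np.triu_indices(len(X), k=1)
acc = np.mean(np.sign(pred[ii] - pred[jj]) == np.sign(true_score[ii] - true_score[jj]))
print(round(acc, 2))
```

    Each update touches only the two sampled coefficients, which is what makes the algorithm simple to implement and cheap per iteration.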

  12. A universal rank-size law

    CERN Document Server

    Ausloos, Marcel

    2016-01-01

    A mere hyperbolic law, like Zipf's power-law rank-size function, is often inadequate to describe rank-size relationships. An alternative theoretical distribution is proposed based on theoretical physics arguments, starting from the Yule-Simon distribution. A model is proposed leading to a universal form. A theoretical suggestion for the "best (or optimal) distribution" is provided through an entropy argument. The ranking of areas through the number of cities in various countries and some sport competition rankings serve for the present illustrations.
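    For reference, the classical Zipf baseline that the abstract argues is often inadequate is fitted in two lines: a least-squares fit in log-log space recovers the exponent of s(r) ~ C·r^(−b). The data here are synthetic, generated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
ranks = np.arange(1, 101)
# Synthetic rank-size data following s(r) ~ C * r^(-b) with mild log-normal noise.
sizes = 1000.0 * ranks ** -1.2 * np.exp(rng.normal(0.0, 0.05, ranks.size))

slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
print(round(-slope, 2))  # recovered exponent, close to the true 1.2
```

    Real rank-size data typically show curvature in this log-log plot, which is exactly the inadequacy that motivates the Yule-Simon-based alternative.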

  13. Methodology for ranking restoration options

    Energy Technology Data Exchange (ETDEWEB)

    Hedemann Jensen, Per

    1999-04-01

    The work described in this report has been performed as a part of the RESTRAT Project FI4P-CT95-0021a (PL 950128) co-funded by the Nuclear Fission Safety Programme of the European Commission. The RESTRAT project has the overall objective of developing generic methodologies for ranking restoration techniques as a function of contamination and site characteristics. The project includes analyses of existing remediation methodologies and contaminated sites, and is structured in the following steps: characterisation of relevant contaminated sites; identification and characterisation of relevant restoration techniques; assessment of the radiological impact; development and application of a selection methodology for restoration options; formulation of generic conclusions and development of a manual. The project is intended to apply to situations in which sites with nuclear installations have been contaminated with radioactive materials as a result of the operation of these installations. The areas considered for remedial measures include contaminated land areas, rivers and sediments in rivers, lakes, and sea areas. Five contaminated European sites have been studied. Various remedial measures have been envisaged with respect to the optimisation of the protection of the populations being exposed to the radionuclides at the sites. Cost-benefit analysis and multi-attribute utility analysis have been applied for optimisation. Health, economic and social attributes have been included and weighting factors for the different attributes have been determined by the use of scaling constants. (au)

  14. Ranking documents with a thesaurus.

    Science.gov (United States)

    Rada, R; Bicknell, E

    1989-09-01

    This article reports on exploratory experiments in evaluating and improving a thesaurus through studying its effect on retrieval. A formula called DISTANCE was developed to measure the conceptual distance between queries and documents encoded as sets of thesaurus terms. DISTANCE references MeSH (Medical Subject Headings) and assesses the degree of match between a MeSH-encoded query and document. The performance of DISTANCE on MeSH is compared to the performance of people in the assessment of conceptual distance between queries and documents, and is found to simulate the human performance with surprising accuracy. The power of the computer simulation stems both from the tendency of people to rely heavily on broader-than (BT) relations in making decisions about conceptual distance and from the thousands of accurate BT relations in MeSH. One source of discrepancy between the algorithm's measurement of closeness between query and document and people's measurement is occasional inconsistency in the BT relations. Our experiments with adding non-BT relations to MeSH showed how these non-BT relations could improve document ranking, if DISTANCE were also appropriately revised to treat these relations differently from BT relations.
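    A toy reconstruction of the idea behind a BT-based conceptual distance is shown below. The mini-hierarchy and the query-document aggregation rule are hypothetical stand-ins for MeSH and the published DISTANCE formula:

```python
from collections import deque

# Hypothetical mini-hierarchy: child term -> its broader-than (BT) parent.
BT = {
    "aspirin": "analgesics",
    "ibuprofen": "analgesics",
    "analgesics": "drugs",
    "insulin": "hormones",
    "hormones": "drugs",
}

def neighbors(term):
    """Terms one BT edge away (parent and children), treated undirected."""
    nbrs = set()
    if term in BT:
        nbrs.add(BT[term])
    nbrs |= {c for c, p in BT.items() if p == term}
    return nbrs

def concept_distance(a, b):
    """Shortest path length between two terms along BT links (BFS)."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        term, d = frontier.popleft()
        if term == b:
            return d
        for nxt in neighbors(term):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")

def distance_query_doc(query_terms, doc_terms):
    """Query-document score: average closest-term distance (an assumed aggregation)."""
    return sum(min(concept_distance(q, t) for t in doc_terms) for q in query_terms) / len(query_terms)

print(concept_distance("aspirin", "ibuprofen"))  # 2, via 'analgesics'
print(concept_distance("aspirin", "insulin"))    # 4, via 'drugs' and 'hormones'
```

    Ranking documents by ascending `distance_query_doc` then mimics the article's setup; adding non-BT relations would amount to extra edges, which the article argues should be weighted differently.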

  15. Communities in Large Networks: Identification and Ranking

    DEFF Research Database (Denmark)

    Olsen, Martin

    2008-01-01

    We study the problem of identifying and ranking the members of a community in a very large network with link analysis only, given a set of representatives of the community. We define the concept of a community justified by a formal analysis of a simple model of the evolution of a directed graph. ...... and its immediate surroundings. The members are ranked with a “local” variant of the PageRank algorithm. Results are reported from successful experiments on identifying and ranking Danish Computer Science sites and Danish Chess pages using only a few representatives....

  16. Citation graph based ranking in Invenio

    CERN Document Server

    Marian, Ludmila; Rajman, Martin; Vesely, Martin

    2010-01-01

    Invenio is the web-based integrated digital library system developed at CERN. Within this framework, we present four types of ranking models based on the citation graph that complement the simple approach based on citation counts: time-dependent citation counts, a relevancy ranking which extends the PageRank model, a time-dependent ranking which combines the freshness of citations with PageRank and a ranking that takes into consideration the external citations. We present our analysis and results obtained on two main data sets: Inspire and CERN Document Server. Our main contributions are: (i) a study of the currently available ranking methods based on the citation graph; (ii) the development of new ranking methods that correct some of the identified limitations of the current methods such as treating all citations of equal importance, not taking time into account or considering the citation graph complete; (iii) a detailed study of the key parameters for these ranking methods. (The original publication is ava...

  17. D-score: a search engine independent MD-score.

    Science.gov (United States)

    Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P

    2013-03-01

    While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Alkaloid-derived molecules in low rank Argonne premium coals.

    Energy Technology Data Exchange (ETDEWEB)

    Winans, R. E.; Tomczyk, N. A.; Hunt, J. E.

    2000-11-30

    Molecules that are probably derived from alkaloids have been found in the extracts of the subbituminous and lignite Argonne Premium Coals. High resolution mass spectrometry (HRMS) and liquid chromatography mass spectrometry (LCMS) have been used to characterize pyridine and supercritical extracts. The supercritical extraction used an approach that has been successful for extracting alkaloids from natural products. The first indication that there might be these natural products in coals was the large number of molecules found containing multiple nitrogen and oxygen heteroatoms. These molecules are much less abundant in bituminous coals and absent in the higher rank coals.

  19. Eliciting Subjective Probabilities with Binary Lotteries

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    We evaluate the binary lottery procedure for inducing risk neutral behavior in a subjective belief elicitation task. Harrison, Martínez-Correa and Swarthout [2013] found that the binary lottery procedure works robustly to induce risk neutrality when subjects are given one risk task defined over...... objective probabilities. Drawing a sample from the same subject population, we find evidence that the binary lottery procedure induces linear utility in a subjective probability elicitation task using the Quadratic Scoring Rule. We also show that the binary lottery procedure can induce direct revelation...
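    The Quadratic Scoring Rule itself is easy to state and check numerically. The sketch below uses the binary-event form normalized to [0, 1] (an assumed normalization) and verifies the properness that makes the elicitation meaningful once risk neutrality has been induced:

```python
import numpy as np

def qsr_payoff(report, outcome):
    """Binary Quadratic Scoring Rule, normalized to [0, 1].
    `outcome` is 1 if the event occurred, 0 otherwise."""
    return 1.0 - (outcome - report) ** 2

def expected_payoff(report, p):
    """Expected payoff of `report` for an agent who believes Pr(event) = p."""
    return p * qsr_payoff(report, 1) + (1 - p) * qsr_payoff(report, 0)

# Properness check: with belief p = 0.3, a risk-neutral agent maximizes
# expected payoff by reporting exactly 0.3. This is why inducing risk
# neutrality (e.g., via the binary lottery procedure) matters.
grid = np.linspace(0.0, 1.0, 101)
best = grid[np.argmax([expected_payoff(r, 0.3) for r in grid])]
print(round(float(best), 2))  # 0.3
```

    Setting the derivative of the expected payoff to zero gives 2p(1 − r) − 2(1 − p)r = 0, i.e. r = p, so truthful reporting is uniquely optimal under linear utility.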

  20. Improving the Rank Precision of Population Health Measures for Small Areas with Longitudinal and Joint Outcome Models.

    Science.gov (United States)

    Athens, Jessica K; Remington, Patrick L; Gangnon, Ronald E

    2015-01-01

    The University of Wisconsin Population Health Institute has published the County Health Rankings since 2010. These rankings use population-based data to highlight health outcomes and the multiple determinants of these outcomes and to encourage in-depth health assessment for all United States counties. A significant methodological limitation, however, is the uncertainty of rank estimates, particularly for small counties. To address this challenge, we explore the use of longitudinal and pooled outcome data in hierarchical Bayesian models to generate county ranks with greater precision. In our models we used pooled outcome data for three measure groups: (1) Poor physical and poor mental health days; (2) percent of births with low birth weight and fair or poor health prevalence; and (3) age-specific mortality rates for nine age groups. We used the fixed and random effects components of these models to generate posterior samples of rates for each measure. We also used time-series data in longitudinal random effects models for age-specific mortality. Based on the posterior samples from these models, we estimate ranks and rank quartiles for each measure, as well as the probability of a county ranking in its assigned quartile. Rank quartile probabilities for univariate, joint outcome, and/or longitudinal models were compared to assess improvements in rank precision. The joint outcome model for poor physical and poor mental health days resulted in improved rank precision, as did the longitudinal model for age-specific mortality rates. Rank precision for low birth weight births and fair/poor health prevalence based on the univariate and joint outcome models were equivalent. Incorporating longitudinal or pooled outcome data may improve rank certainty, depending on characteristics of the measures selected. For measures with different determinants, joint modeling neither improved nor degraded rank precision. This approach suggests a simple way to use existing information to
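The rank-quartile idea above can be sketched numerically: given posterior samples of county rates, rank the counties within each posterior draw and estimate the probability that each county lands in the quartile assigned by its point-estimate rank. The Gaussian draws below are fabricated for illustration and stand in for the paper's hierarchical Bayesian model output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples of county rates: rows are posterior draws,
# columns are counties. The paper draws these from hierarchical Bayesian
# models; here we fabricate Gaussian draws purely for illustration.
n_draws, n_counties = 4000, 20
true_rates = np.linspace(0.05, 0.25, n_counties)
samples = rng.normal(true_rates, 0.02, size=(n_draws, n_counties))

# Rank the counties within each posterior draw (rank 1 = lowest rate).
ranks = samples.argsort(axis=1).argsort(axis=1) + 1

# Point-estimate ranks from the posterior mean, and each county's
# assigned rank quartile.
point_ranks = samples.mean(axis=0).argsort().argsort() + 1
assigned_quartile = np.ceil(point_ranks * 4 / n_counties).astype(int)

# Probability that a county's rank falls in its assigned quartile
# across posterior draws: the rank-precision measure described above.
draw_quartile = np.ceil(ranks * 4 / n_counties).astype(int)
prob_in_quartile = (draw_quartile == assigned_quartile[None, :]).mean(axis=0)
```

Counties near quartile boundaries get probabilities well below 1, which is exactly the rank uncertainty the joint and longitudinal models aim to reduce.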

  1. Ranked Conservation Opportunity Areas for Region 7 (ECO_RES.RANKED_OAS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The RANKED_OAS are all the Conservation Opportunity Areas identified by MoRAP that have subsequently been ranked by patch size, landform representation, and the...

  2. Ranking scientific publications: the effect of nonlinearity.

    Science.gov (United States)

    Yao, Liyang; Wei, Tian; Zeng, An; Fan, Ying; Di, Zengru

    2014-10-17

    Ranking the significance of scientific publications is a long-standing challenge. The network-based analysis is a natural and common approach for evaluating the scientific credit of papers. Although the number of citations has been widely used as a metric to rank papers, recently some iterative processes such as the well-known PageRank algorithm have been applied to the citation networks to address this problem. In this paper, we introduce nonlinearity to the PageRank algorithm when aggregating resources from different nodes to further enhance the effect of important papers. The validation of our method is performed on the data of American Physical Society (APS) journals. The results indicate that the nonlinearity improves the performance of the PageRank algorithm in terms of ranking effectiveness, as well as robustness against malicious manipulations. Although the nonlinearity analysis is based on the PageRank algorithm, it can be easily extended to other iterative ranking algorithms and similar improvements are expected.

  3. Ranking scientific publications: the effect of nonlinearity

    Science.gov (United States)

    Yao, Liyang; Wei, Tian; Zeng, An; Fan, Ying; di, Zengru

    2014-10-01

    Ranking the significance of scientific publications is a long-standing challenge. The network-based analysis is a natural and common approach for evaluating the scientific credit of papers. Although the number of citations has been widely used as a metric to rank papers, recently some iterative processes such as the well-known PageRank algorithm have been applied to the citation networks to address this problem. In this paper, we introduce nonlinearity to the PageRank algorithm when aggregating resources from different nodes to further enhance the effect of important papers. The validation of our method is performed on the data of American Physical Society (APS) journals. The results indicate that the nonlinearity improves the performance of the PageRank algorithm in terms of ranking effectiveness, as well as robustness against malicious manipulations. Although the nonlinearity analysis is based on the PageRank algorithm, it can be easily extended to other iterative ranking algorithms and similar improvements are expected.
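A minimal sketch of what a nonlinear PageRank update could look like: standard power iteration, but each node's outgoing share is raised to an exponent theta before aggregation, so high-score nodes contribute disproportionately (theta = 1 recovers ordinary PageRank). The exact update rule of Yao et al. may differ; the tiny citation graph and parameter values below are illustrative only.

```python
import numpy as np

def nonlinear_pagerank(A, d=0.85, theta=1.2, iters=200):
    """Power iteration for a PageRank-like score with a nonlinear
    aggregation exponent theta. A[i, j] = 1 if node j links to node i.
    This is a sketch of the idea, not the paper's exact rule."""
    n = A.shape[0]
    out_deg = np.maximum(A.sum(axis=0), 1)   # out-degree of each node
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        shares = (x / out_deg) ** theta      # nonlinear share from each node
        x = (1 - d) / n + d * (A @ shares)
        x /= x.sum()                         # renormalize to a distribution
    return x

# Tiny citation network: paper 0 is cited by 1 and 2; paper 1 is cited by 2.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]], float)
scores = nonlinear_pagerank(A)               # node 0 should rank highest
```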

  4. Entity Ranking using Wikipedia as a Pivot

    NARCIS (Netherlands)

    R. Kaptein; P. Serdyukov; A.P. de Vries (Arjen); J. Kamps

    2010-01-01

    In this paper we investigate the task of Entity Ranking on the Web. Searchers looking for entities are arguably better served by presenting a ranked list of entities directly, rather than a list of web pages with relevant but also potentially redundant information about

  5. Entity ranking using Wikipedia as a pivot

    NARCIS (Netherlands)

    Kaptein, R.; Serdyukov, P.; de Vries, A.; Kamps, J.; Huang, X.J.; Jones, G.; Koudas, N.; Wu, X.; Collins-Thompson, K.

    2010-01-01

    In this paper we investigate the task of Entity Ranking on the Web. Searchers looking for entities are arguably better served by presenting a ranked list of entities directly, rather than a list of web pages with relevant but also potentially redundant information about these entities. Since

  6. Biplots in Reduced-Rank Regression

    NARCIS (Netherlands)

    Braak, ter C.J.F.; Looman, C.W.N.

    1994-01-01

    Regression problems with a number of related response variables are typically analyzed by separate multiple regressions. This paper shows how these regressions can be visualized jointly in a biplot based on reduced-rank regression. Reduced-rank regression combines multiple regression and principal

  7. Mining Feedback in Ranking and Recommendation Systems

    Science.gov (United States)

    Zhuang, Ziming

    2009-01-01

    The amount of online information has grown exponentially over the past few decades, and users become more and more dependent on ranking and recommendation systems to address their information seeking needs. The advance in information technologies has enabled users to provide feedback on the utilities of the underlying ranking and recommendation…

  8. Using centrality to rank web snippets

    NARCIS (Netherlands)

    Jijkoun, V.; de Rijke, M.; Peters, C.; Jijkoun, V.; Mandl, T.; Müller, H.; Oard, D.W.; Peñas, A.; Petras, V.; Santos, D.

    2008-01-01

    We describe our participation in the WebCLEF 2007 task, targeted at snippet retrieval from web data. Our system ranks snippets based on a simple similarity-based centrality, inspired by the web page ranking algorithms. We experimented with retrieval units (sentences and paragraphs) and with the

  9. Generating and ranking of Dyck words

    CERN Document Server

    Kasa, Zoltan

    2010-01-01

    A new algorithm to generate all Dyck words is presented, which is used in ranking and unranking Dyck words. We emphasize the importance of using Dyck words in encoding objects related to Catalan numbers. As a consequence of formulas used in the ranking algorithm we can obtain a recursive formula for the nth Catalan number.
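Ranking and unranking Dyck words can be done with the classic completion-counting (ballot number) construction sketched below; this is a standard technique, not necessarily the paper's algorithm. As the abstract notes, the same counting function yields the Catalan numbers: `completions(2*n, 0)` is the nth Catalan number.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def completions(m, h):
    """Number of ways to finish a Dyck prefix with m symbols left to place
    and current height h (opens minus closes so far)."""
    if h < 0 or h > m:
        return 0
    if m == 0:
        return 1
    return completions(m - 1, h + 1) + completions(m - 1, h - 1)

def rank(word):
    """0-based rank of a Dyck word in lexicographic order, with '(' < ')'."""
    r, h = 0, 0
    for i, c in enumerate(word):
        m = len(word) - i - 1
        if c == ')':
            # every word that opens at this position instead comes earlier
            r += completions(m, h + 1)
            h -= 1
        else:
            h += 1
    return r

def unrank(r, n):
    """Inverse of rank for Dyck words of length 2n."""
    word, h = [], 0
    for i in range(2 * n):
        m = 2 * n - i - 1
        c = completions(m, h + 1)   # count of words placing '(' here
        if r < c:
            word.append('(')
            h += 1
        else:
            r -= c
            word.append(')')
            h -= 1
    return ''.join(word)
```

For n = 2 this enumerates "(())" then "()()", matching the Catalan count completions(4, 0) = 2.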

  10. Extension of the lod score: the mod score.

    Science.gov (United States)

    Clerget-Darpoux, F

    2001-01-01

    In 1955 Morton proposed the lod score method both for testing linkage between loci and for estimating the recombination fraction between them. If a disease is controlled by a gene at one of these loci, the lod score computation requires the prior specification of an underlying model that assigns the probabilities of genotypes from the observed phenotypes. To address the case of linkage studies for diseases with unknown mode of inheritance, we suggested (Clerget-Darpoux et al., 1986) extending the lod score function to a so-called mod score function. In this function, the variables are both the recombination fraction and the disease model parameters. Maximizing the mod score function over all these parameters amounts to maximizing the probability of marker data conditional on the disease status. Under the absence of linkage, the mod score conforms to a chi-square distribution, with extra degrees of freedom in comparison to the lod score function (MacLean et al., 1993). The mod score is asymptotically maximum for the true disease model (Clerget-Darpoux and Bonaïti-Pellié, 1992; Hodge and Elston, 1994). Consequently, the power to detect linkage through mod score will be highest when the space of models where the maximization is performed includes the true model. On the other hand, one must avoid overparametrization of the model space. For example, when the approach is applied to affected sibpairs, only two constrained disease model parameters should be used (Knapp et al., 1994) for the mod score maximization. It is also important to emphasize the existence of a strong correlation between the disease gene location and the disease model. Consequently, there is poor resolution of the location of the susceptibility locus when the disease model at this locus is unknown. Of course, this is true regardless of the statistics used. The mod score may also be applied in a candidate gene strategy to model the potential effect of this gene in the disease. Since, however, it

  11. Universal emergence of PageRank

    Energy Technology Data Exchange (ETDEWEB)

    Frahm, K M; Georgeot, B; Shepelyansky, D L, E-mail: frahm@irsamc.ups-tlse.fr, E-mail: georgeot@irsamc.ups-tlse.fr, E-mail: dima@irsamc.ups-tlse.fr [Laboratoire de Physique Theorique du CNRS, IRSAMC, Universite de Toulouse, UPS, 31062 Toulouse (France)

    2011-11-18

    The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British university networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces, whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses.

  12. Reliability of journal impact factor rankings

    Science.gov (United States)

    Greenwood, Darren C

    2007-01-01

    Background Journal impact factors and their ranks are used widely by journals, researchers, and research assessment exercises. Methods Based on citations to journals in research and experimental medicine in 2005, Bayesian Markov chain Monte Carlo methods were used to estimate the uncertainty associated with these journal performance indicators. Results Intervals representing plausible ranges of values for journal impact factor ranks indicated that most journals cannot be ranked with great precision. Only the top and bottom few journals could place any confidence in their rank position. Intervals were wider and overlapping for most journals. Conclusion Decisions placed on journal impact factors are potentially misleading where the uncertainty associated with the measure is ignored. This article proposes that caution should be exercised in the interpretation of journal impact factors and their ranks, and specifically that a measure of uncertainty should be routinely presented alongside the point estimate. PMID:18005435

  13. Reliability of journal impact factor rankings

    Directory of Open Access Journals (Sweden)

    Greenwood Darren C

    2007-11-01

    Full Text Available Abstract Background Journal impact factors and their ranks are used widely by journals, researchers, and research assessment exercises. Methods Based on citations to journals in research and experimental medicine in 2005, Bayesian Markov chain Monte Carlo methods were used to estimate the uncertainty associated with these journal performance indicators. Results Intervals representing plausible ranges of values for journal impact factor ranks indicated that most journals cannot be ranked with great precision. Only the top and bottom few journals could place any confidence in their rank position. Intervals were wider and overlapping for most journals. Conclusion Decisions placed on journal impact factors are potentially misleading where the uncertainty associated with the measure is ignored. This article proposes that caution should be exercised in the interpretation of journal impact factors and their ranks, and specifically that a measure of uncertainty should be routinely presented alongside the point estimate.
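The rank uncertainty described above can be illustrated with a toy simulation. Assuming Poisson citation counts with a conjugate gamma posterior per journal (a deliberate simplification of the paper's Markov chain Monte Carlo model, with invented data), the spread of each journal's rank across posterior draws gives the kind of plausible rank range the authors recommend reporting alongside the point estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical journals: citations received and citable articles published.
# These numbers are invented for illustration.
citations = np.array([820, 790, 600, 310, 300, 290, 120, 60])
articles  = np.array([200, 210, 180, 150, 160, 140,  90, 70])

# Sketch model: citations ~ Poisson(articles * rate) with a flat prior,
# so each journal's impact factor posterior is Gamma(citations + 1) / articles.
n_sims = 10000
if_samples = rng.gamma(citations + 1, 1.0,
                       size=(n_sims, len(citations))) / articles

# Rank journals within each posterior draw (rank 1 = highest impact factor).
ranks = (-if_samples).argsort(axis=1).argsort(axis=1) + 1

# 95% plausible range of ranks for each journal.
lo, hi = np.percentile(ranks, [2.5, 97.5], axis=0)
```

Journals with nearly equal impact factors end up with wide, overlapping rank intervals, mirroring the paper's finding that only the top and bottom few journals can be ranked with confidence.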

  14. Cointegration rank testing under conditional heteroskedasticity

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders Christian; Taylor, Robert M.

    2010-01-01

    (martingale difference) innovations. We first demonstrate that the limiting null distributions of the rank statistics coincide with those derived by previous authors who assume either independent and identically distributed (i.i.d.) or (strict and covariance) stationary martingale difference innovations. We then propose wild bootstrap implementations of the cointegrating rank tests and demonstrate that the associated bootstrap rank statistics replicate the first-order asymptotic null distributions of the rank statistics. We show that the same is also true of the corresponding rank tests based on the i.i.d. bootstrap of Swensen (2006, Econometrica 74, 1699-1714). The wild bootstrap, however, has the important property that, unlike the i.i.d. bootstrap, it preserves in the resampled data the pattern of heteroskedasticity present in the original shocks. Consistent with this, numerical evidence suggests that...

  15. Probabilities for Solar Siblings

    Science.gov (United States)

    Valtonen, Mauri; Bajkova, A. T.; Bobylev, V. V.; Mylläri, A.

    2015-02-01

    We have shown previously (Bobylev et al. Astron Lett 37:550-562, 2011) that some of the stars in the solar neighborhood today may have originated in the same star cluster as the Sun, and could thus be called Solar Siblings. In this work we investigate the sensitivity of this result to galactic models and to parameters of these models, and also extend the sample of orbits. There are a number of good candidates for the sibling category, but due to the long period of orbit evolution since the break-up of the birth cluster of the Sun, one can only attach probabilities of membership. We find that up to 10 % (but more likely around 1 %) of the members of the Sun's birth cluster could be still found within 100 pc from the Sun today.

  16. Measure, integral and probability

    CERN Document Server

    Capiński, Marek

    2004-01-01

    Measure, Integral and Probability is a gentle introduction that makes measure and integration theory accessible to the average third-year undergraduate student. The ideas are developed at an easy pace in a form that is suitable for self-study, with an emphasis on clear explanations and concrete examples rather than abstract theory. For this second edition, the text has been thoroughly revised and expanded. New features include: · a substantial new chapter, featuring a constructive proof of the Radon-Nikodym theorem, an analysis of the structure of Lebesgue-Stieltjes measures, the Hahn-Jordan decomposition, and a brief introduction to martingales · key aspects of financial modelling, including the Black-Scholes formula, discussed briefly from a measure-theoretical perspective to help the reader understand the underlying mathematical framework. In addition, further exercises and examples are provided to encourage the reader to become directly involved with the material.

  17. A seismic probability map

    Directory of Open Access Journals (Sweden)

    J. M. MUNUERA

    1964-06-01

    Full Text Available The material included in two former papers (SB and EF), which sums 3307 shocks over 2360 years up to 1960, was reduced to a 50-year period by means of a weight obtained for each epoch. The weighting factor is the ratio of 50 to the number of years in each epoch. The frequency has been referred to base VII of the international seismic intensity scale, for all cases in which the earthquakes are equal to or greater than VI and up to IX. The sum of the products of frequency and the parameters previously described is the probable frequency expected for the 50-year period. For each active small square we have made the corresponding computation and have drawn Map No. 1, in percentages. The epicenters with intensity X to XI are plotted in Map No. 2 to provide complementary information. A table shows the return periods obtained for all data (VII to XI), and after checking them against periods computed from the first to the last shock, a list gives the probable approximate return periods estimated for the area. The solution we suggest is an appropriate form to express the seismic contingent phenomenon, and it improves on the conventional maps showing equal-intensity curves corresponding to maximal values.

  18. PageRank and rank-reversal dependence on the damping factor

    Science.gov (United States)

    Son, S.-W.; Christensen, C.; Grassberger, P.; Paczuski, M.

    2012-12-01

    PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is to gathering relevant information or to the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d0=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d0.

  19. PageRank and rank-reversal dependence on the damping factor.

    Science.gov (United States)

    Son, S-W; Christensen, C; Grassberger, P; Paczuski, M

    2012-12-01

    PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is to gathering relevant information or to the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d_{0}=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d_{0}.
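The drop in rank correlation as d moves away from 0.85 can be reproduced in miniature: compute PageRank at two damping values and compare the resulting vectors with a Spearman coefficient, i.e. the Pearson correlation of the ranks. The random directed graph below is illustrative only, not the web domain studied in the paper.

```python
import numpy as np

def pagerank(A, d, iters=500):
    """Standard power-iteration PageRank; A[i, j] = 1 if j links to i.
    Mass from dangling nodes (no outlinks) is spread uniformly."""
    n = A.shape[0]
    out_deg = A.sum(axis=0)
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        dangling = x[out_deg == 0].sum() / n
        shares = np.where(out_deg > 0, x / np.maximum(out_deg, 1), 0.0)
        x = (1 - d) / n + d * (A @ shares + dangling)
        x /= x.sum()
    return x

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Small random directed graph, purely for illustration.
rng = np.random.default_rng(2)
A = (rng.random((30, 30)) < 0.08).astype(float)
np.fill_diagonal(A, 0)

# Correlation of PageRank vectors at the conventional d = 0.85
# and at the alternative value d = 0.65 discussed in the paper.
rho = spearman(pagerank(A, 0.85), pagerank(A, 0.65))
```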

  20. A tilting approach to ranking influence

    KAUST Repository

    Genton, Marc G.

    2014-12-01

    We suggest a new approach, which is applicable for general statistics computed from random samples of univariate or vector-valued or functional data, to assessing the influence that individual data have on the value of a statistic, and to ranking the data in terms of that influence. Our method is based on, first, perturbing the value of the statistic by ‘tilting’, or reweighting, each data value, where the total amount of tilt is constrained to be the least possible, subject to achieving a given small perturbation of the statistic, and, then, taking the ranking of the influence of data values to be that which corresponds to ranking the changes in data weights. It is shown, both theoretically and numerically, that this ranking does not depend on the size of the perturbation, provided that the perturbation is sufficiently small. That simple result leads directly to an elegant geometric interpretation of the ranks; they are the ranks of the lengths of projections of the weights onto a ‘line’ determined by the first empirical principal component function in a generalized measure of covariance. To illustrate the generality of the method we introduce and explore it in the case of functional data, where (for example) it leads to generalized boxplots. The method has the advantage of providing an interpretable ranking that depends on the statistic under consideration. For example, the ranking of data, in terms of their influence on the value of a statistic, is different for a measure of location and for a measure of scale. This is as it should be; a ranking of data in terms of their influence should depend on the manner in which the data are used. Additionally, the ranking recognizes, rather than ignores, sign, and in particular can identify left- and right-hand ‘tails’ of the distribution of a random function or vector.

  1. Analysis of linear dynamic systems of low rank

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2003-01-01

    We present procedures for obtaining stable solutions to linear dynamic systems. Different types of models are considered. The basic idea is to use the H-principle to develop low-rank approximations to solutions. The approximations stop when the prediction ability of the model cannot be improved for the present data. Therefore, the present methods give better prediction results than traditional methods that give exact solutions. The vectors used in the approximations can be used to carry out graphic analysis of the dynamic systems. We show how score vectors can display the low...

  2. Antitrust risk in EU manufacturing: A sector-level ranking

    OpenAIRE

    Mariniello, Mario; Antonielli, Marco

    2014-01-01

    Based on a dataset of manufacturing sectors from five major European economies (France, Germany, Italy, Spain and the United Kingdom) between 2000 and 2011, we identify a number of key sector-level features that, according to established economic research, have a positive impact on the likelihood of collusion. Each feature is proxied by an ‘Antitrust Risk Indicator’ (ARI). We rank the sectors according to their ARI scores. At 2-digit level, sectors that appear more exposed to collusion risk a...

  3. On a common generalization of Shelah's 2-rank, dp-rank, and o-minimal dimension

    OpenAIRE

    Guingona, Vincent; Hill, Cameron Donnay

    2013-01-01

    In this paper, we build a dimension theory related to Shelah's 2-rank, dp-rank, and o-minimal dimension. We call this dimension op-dimension. We exhibit the notion of the n-multi-order property, generalizing the order property, and use this to create op-rank, which generalizes 2-rank. From this we build op-dimension. We show that op-dimension bounds dp-rank, that op-dimension is sub-additive, and op-dimension generalizes o-minimal dimension in o-minimal theories.

  4. Academic rankings: an approach to rank Portuguese universities

    Directory of Open Access Journals (Sweden)

    Pedro Bernardino

    2010-03-01

    Full Text Available The academic rankings are a controversial subject in higher education. However, despite all the criticism, academic rankings are here to stay, and more and more stakeholders use rankings to obtain information about institutions' performance. The two best-known rankings, the Times and the Shanghai Jiao Tong University rankings, have different methodologies. The Times ranking is based on peer review, whereas the Shanghai ranking has only quantitative indicators and is mainly based on research outputs. In Germany, the CHE ranking uses a different methodology from the traditional rankings, allowing users to choose criteria and weights. The Portuguese higher education institutions are performing below their European peers, and the Government believes that an academic ranking could improve both performance and competitiveness between institutions. The purpose of this paper is to analyse the advantages and problems of academic rankings and provide guidance for a new Portuguese ranking.

  5. Adiabatic quantum algorithm for search engine ranking.

    Science.gov (United States)

    Garnerone, Silvano; Zanardi, Paolo; Lidar, Daniel A

    2012-06-08

    We propose an adiabatic quantum algorithm for generating a quantum pure state encoding of the PageRank vector, the most widely used tool in ranking the relative importance of internet pages. We present extensive numerical simulations which provide evidence that this algorithm can prepare the quantum PageRank state in a time which, on average, scales polylogarithmically in the number of web pages. We argue that the main topological feature of the underlying web graph allowing for such a scaling is the out-degree distribution. The top-ranked log(n) entries of the quantum PageRank state can then be estimated with a polynomial quantum speed-up. Moreover, the quantum PageRank state can be used in "q-sampling" protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes designed for the same task. This can be used to decide whether to run a classical update of the PageRank.

  6. Ranking Adverse Drug Reactions With Crowdsourcing

    KAUST Repository

    Gottlieb, Assaf

    2015-03-23

    Background: There is no publicly available resource that provides the relative severity of adverse drug reactions (ADRs). Such a resource would be useful for several applications, including assessment of the risks and benefits of drugs and improvement of patient-centered care. It could also be used to triage predictions of drug adverse events. Objective: The intent of the study was to rank ADRs according to severity. Methods: We used Internet-based crowdsourcing to rank ADRs according to severity. We assigned 126,512 pairwise comparisons of ADRs to 2589 Amazon Mechanical Turk workers and used these comparisons to rank order 2929 ADRs. Results: There is good correlation (rho=.53) between the mortality rates associated with ADRs and their rank. Our ranking highlights severe drug-ADR predictions, such as cardiovascular ADRs for raloxifene and celecoxib. It also triages genes associated with severe ADRs such as epidermal growth-factor receptor (EGFR), associated with glioblastoma multiforme, and SCN1A, associated with epilepsy. Conclusions: ADR ranking lays a first stepping stone in personalized drug risk assessment. Ranking of ADRs using crowdsourcing may have useful clinical and financial implications, and should be further investigated in the context of health care decision making.

  7. Adiabatic Quantum Algorithm for Search Engine Ranking

    Science.gov (United States)

    Garnerone, Silvano; Zanardi, Paolo; Lidar, Daniel A.

    2012-06-01

    We propose an adiabatic quantum algorithm for generating a quantum pure state encoding of the PageRank vector, the most widely used tool in ranking the relative importance of internet pages. We present extensive numerical simulations which provide evidence that this algorithm can prepare the quantum PageRank state in a time which, on average, scales polylogarithmically in the number of web pages. We argue that the main topological feature of the underlying web graph allowing for such a scaling is the out-degree distribution. The top-ranked log⁡(n) entries of the quantum PageRank state can then be estimated with a polynomial quantum speed-up. Moreover, the quantum PageRank state can be used in “q-sampling” protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes designed for the same task. This can be used to decide whether to run a classical update of the PageRank.

  8. Ranking adverse drug reactions with crowdsourcing.

    Science.gov (United States)

    Gottlieb, Assaf; Hoehndorf, Robert; Dumontier, Michel; Altman, Russ B

    2015-03-23

    There is no publicly available resource that provides the relative severity of adverse drug reactions (ADRs). Such a resource would be useful for several applications, including assessment of the risks and benefits of drugs and improvement of patient-centered care. It could also be used to triage predictions of drug adverse events. The intent of the study was to rank ADRs according to severity. We used Internet-based crowdsourcing to rank ADRs according to severity. We assigned 126,512 pairwise comparisons of ADRs to 2589 Amazon Mechanical Turk workers and used these comparisons to rank order 2929 ADRs. There is good correlation (rho=.53) between the mortality rates associated with ADRs and their rank. Our ranking highlights severe drug-ADR predictions, such as cardiovascular ADRs for raloxifene and celecoxib. It also triages genes associated with severe ADRs such as epidermal growth-factor receptor (EGFR), associated with glioblastoma multiforme, and SCN1A, associated with epilepsy. ADR ranking lays a first stepping stone in personalized drug risk assessment. Ranking of ADRs using crowdsourcing may have useful clinical and financial implications, and should be further investigated in the context of health care decision making.
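Turning pairwise severity judgments into a global ranking can be sketched with a Bradley-Terry model fitted by the standard minorization-maximization iteration. The paper does not state that this exact aggregation method was used, and the ADR labels and comparison counts below are invented for illustration.

```python
import numpy as np
from collections import defaultdict

def bradley_terry(wins, n_items, iters=200):
    """MM fit of Bradley-Terry strengths from pairwise outcomes.
    wins[(i, j)] = number of times item i was judged more severe than j.
    Returns strengths normalized to sum to 1; higher = more severe."""
    w = np.ones(n_items)
    total_wins = np.zeros(n_items)
    pair_counts = defaultdict(int)        # comparisons per unordered pair
    for (i, j), c in wins.items():
        total_wins[i] += c
        pair_counts[frozenset((i, j))] += c
    for _ in range(iters):
        denom = np.zeros(n_items)
        for pair, c in pair_counts.items():
            i, j = tuple(pair)
            denom[i] += c / (w[i] + w[j])
            denom[j] += c / (w[i] + w[j])
        w = total_wins / np.maximum(denom, 1e-12)
        w /= w.sum()
    return w

# Toy crowdsourced data (hypothetical ADRs):
# 0 = "cardiac arrest", 1 = "nausea", 2 = "headache".
wins = {(0, 1): 9, (1, 0): 1, (0, 2): 10, (2, 0): 0, (1, 2): 6, (2, 1): 4}
strengths = bradley_terry(wins, 3)
ranking = (-strengths).argsort()          # most severe first
```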

  9. Augmenting the Deliberative Method for Ranking Risks.

    Science.gov (United States)

    Susel, Irving; Lasley, Trace; Montezemolo, Mark; Piper, Joel

    2016-01-01

    The Department of Homeland Security (DHS) characterized and prioritized the physical cross-border threats and hazards to the nation stemming from terrorism, market-driven illicit flows of people and goods (illegal immigration, narcotics, funds, counterfeits, and weaponry), and other nonmarket concerns (movement of diseases, pests, and invasive species). These threats and hazards pose a wide diversity of consequences with very different combinations of magnitudes and likelihoods, making it very challenging to prioritize them. This article presents the approach that was used at DHS to arrive at a consensus regarding the threats and hazards that stand out from the rest based on the overall risk they pose. Due to time constraints for the decision analysis, it was not feasible to apply multiattribute methodologies like multiattribute utility theory or the analytic hierarchy process. Using a holistic approach was considered, such as the deliberative method for ranking risks first published in this journal. However, an ordinal ranking alone does not indicate relative or absolute magnitude differences among the risks. Therefore, the use of the deliberative method for ranking risks is not sufficient for deciding whether there is a material difference between the top-ranked and bottom-ranked risks, let alone deciding what the stand-out risks are. To address this limitation of ordinal rankings, the deliberative method for ranking risks was augmented by adding an additional step to transform the ordinal ranking into a ratio scale ranking. This additional step enabled the selection of stand-out risks to help prioritize further analysis. © 2015 Society for Risk Analysis.

  10. Evaluation of treatment effects by ranking

    DEFF Research Database (Denmark)

    Halekoh, U; Kristensen, K

    2008-01-01

    In crop experiments measurements are often made by a judge evaluating the crops' conditions after treatment. In the present paper an analysis is proposed for experiments where plots of crops treated differently are mutually ranked. In the experimental layout the crops are treated on consecutive...... plots usually placed side by side in one or more rows. In the proposed method a judge ranks several neighbouring plots, say three, by ranking them from best to worst. For the next observation the judge moves on by no more than two plots, such that up to two plots will be re-evaluated again...

  11. A ranking efficiency unit by restrictions using DEA models

    Science.gov (United States)

    Arsad, Roslah; Abdullah, Mohammad Nasir; Alias, Suriana

    2014-12-01

    In this paper, a comparison of the efficiency of shares of companies listed on Bursa Malaysia was made through the application of the Data Envelopment Analysis (DEA) estimation method. In this study, DEA is used to measure the efficiency of shares listed on Bursa Malaysia in terms of financial performance. It is believed that only a good financial performer will give a good return to investors in the long run. The main objectives were to compute the relative efficiency scores of the shares in Bursa Malaysia and to rank the shares based on the Balance Index with regard to relative efficiency. Alirezaee and Afsharian's model was employed in this study, where the originality of the Charnes, Cooper and Rhodes (CCR) model with its assumption of constant returns to scale (CRS) still holds. This method of ranking the relative efficiency of decision-making units (DMUs) was value-added by using the Balance Index. From the results, the companies recommended to investors based on ranking were NATWIDE, YTL and MUDA. These were the top three efficient companies with good performance in 2011, whereas in 2012 the top three companies were NATWIDE, MUDA and BERNAS.
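
    Full CCR-DEA solves one linear program per decision-making unit over multiple weighted inputs and outputs. As a hedged sketch of the underlying idea under constant returns to scale, the single-input, single-output case reduces to scaling each unit's output/input ratio by the best observed ratio. The company names and figures below are invented for illustration, not taken from the study:

```python
def relative_efficiency(units):
    """Toy efficiency scores in the spirit of CCR-DEA under constant
    returns to scale: each unit's output/input ratio, rescaled so the
    best unit scores 1.0. Real DEA solves one linear program per unit
    over multiple weighted inputs and outputs."""
    ratios = {name: output / inp for name, (inp, output) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Hypothetical companies: (input = capital employed, output = earnings)
shares = {"ALPHA": (100, 80), "BETA": (120, 60), "GAMMA": (90, 90)}
scores = relative_efficiency(shares)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```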

  12. School background and university selection: ranking performance as an inclusion factor to higher education

    Directory of Open Access Journals (Sweden)

    Carlos René Rodríguez Garcés

    2016-07-01

    Full Text Available Using the 2013 Chilean Higher Education Admission databases, we analyze how the school-career components (NEM and Ranking) and the PSU scores (Mathematics and Language) behave across socioeconomic segments of applicants. Although both sets of factors should in theory be aligned with the curriculum, their scores show only a modest correlation with each other. The aim is to explore and analyze the distribution of applicants' scores on the various selection factors according to their socioeconomic and educational characteristics, and the impact of incorporating the score Ranking on the diversification and inclusiveness of the student population that takes part in the selection process each year. The school-career components, especially the Ranking, which establishes a student's relative position within his or her school, show less biased distributions with a higher concentration toward high scores than the PSU components, and are less influenced by socio-familial and economic variables. The Ranking, as an expression of good school performance and of the student's effort and dedication to study, compensates for unwanted selection biases and makes university selection more inclusive; its effect on the profile of the selected students will depend on the weight each university assigns to the school career at the expense of the traditional PSU components.

  13. Who's bigger? where historical figures really rank

    CERN Document Server

    Skiena, Steven

    2014-01-01

    Is Hitler bigger than Napoleon? Washington bigger than Lincoln? Picasso bigger than Einstein? Quantitative analysts are rapidly finding homes in social and cultural domains, from finance to politics. What about history? In this fascinating book, Steve Skiena and Charles Ward bring quantitative analysis to bear on ranking and comparing historical reputations. They evaluate each person by aggregating the traces of millions of opinions, just as Google ranks webpages. The book includes a technical discussion for readers interested in the details of the methods, but no mathematical or computational background is necessary to understand the rankings or conclusions. Along the way, the authors present the rankings of more than one thousand of history's most significant people in science, politics, entertainment, and all areas of human endeavor. Anyone interested in history or biography can see where their favorite figures place in the grand scheme of things.

  14. Ranking Forestry Investments With Parametric Linear Programming

    Science.gov (United States)

    Paul A. Murphy

    1976-01-01

    Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.

  15. Superfund Hazard Ranking System Training Course

    Science.gov (United States)

    The Hazard Ranking System (HRS) training course is a four-and-a-half-day, intermediate-level course designed for personnel who are required to compile, draft, and review preliminary assessments (PAs), site inspections (SIs), and HRS documentation records/packages.

  16. A cognitive model for aggregating people's rankings

    National Research Council Canada - National Science Library

    Lee, Michael D; Steyvers, Mark; Miller, Brent

    2014-01-01

    .... Applications of the model to 23 data sets, dealing with general knowledge and prediction tasks, show that the model performs well in producing an aggregate ranking that is often close to the ground...

  17. Pavement scores synthesis.

    Science.gov (United States)

    2009-02-01

    The purpose of this synthesis was to summarize the use of pavement scores by the states, including the : rating methods used, the score scales, and descriptions; if the scores are used for recommending pavement : maintenance and rehabilitation action...

  18. A Tale of Two Probabilities

    Science.gov (United States)

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.

  19. Systematic comparison of hedonic ranking and rating methods demonstrates few practical differences.

    Science.gov (United States)

    Kozak, Marcin; Cliff, Margaret A

    2013-08-01

    Hedonic ranking is one of the commonly used methods to evaluate consumer preferences. Some authors suggest that it is the best methodology for discriminating among products, while others recommend hedonic rating. These mixed findings suggest the statistical outcome(s) are dependent on the experimental conditions or a user's expectation of "what is" and "what is not" desirable for evaluating consumer preferences. Therefore, sensory and industry professionals may be uncertain or confused regarding the appropriate application of hedonic tests. This paper would like to put this controversy to rest, by evaluating 3 data sets (3 yogurts, 79 consumers; 6 yogurts, 109 consumers; 4 apple cultivars, 70 consumers) collected using the same consumers and by calculating nontied ranks from hedonic scores. Consumer responses were evaluated by comparing bivariate associations between the methods (nontied ranks, tied ranks, hedonic rating scores) using trellis displays, determining the number of consumers with discrepancies in their responses between the methods, and comparing mean values using conventional statistical analyses. Spearman's rank correlations (0.33-0.84) revealed significant differences between the methods for all products, whether or not means separation tests differentiated the products. The work illustrated the inherent biases associated with hedonic ranking and recommended alternate hedonic methodologies. © 2013 Institute of Food Technologists®
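
    The comparison the authors perform, correlating tied ranks derived from hedonic scores with elicited rankings, rests on Spearman's rank correlation with midranks for ties. A self-contained sketch follows; the ratings below are hypothetical, not the study's data:

```python
def tied_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the (midrank) rank vectors."""
    rx, ry = tied_ranks(x), tied_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# One consumer's 9-point hedonic scores for four products (with a tie),
# against the same consumer's forced ranking (1 = most preferred),
# negated so that larger means better on both scales.
ratings = [7, 7, 5, 9]
forced_rank = [2, 3, 4, 1]
print(round(spearman(ratings, [-r for r in forced_rank]), 3))
```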

  20. Rank rigidity for CAT(0) cube complexes

    OpenAIRE

    Caprace, Pierre-Emmanuel; Sageev, Michah

    2010-01-01

    We prove that any group acting essentially without a fixed point at infinity on an irreducible finite-dimensional CAT(0) cube complex contains a rank one isometry. This implies that the Rank Rigidity Conjecture holds for CAT(0) cube complexes. We derive a number of other consequences for CAT(0) cube complexes, including a purely geometric proof of the Tits Alternative, an existence result for regular elements in (possibly non-uniform) lattices acting on cube complexes, and a characterization ...

  1. NUCLEAR POWER PLANTS SAFETY IMPROVEMENT PROJECTS RANKING

    OpenAIRE

    Григорян, Анна Сергеевна; Тигран Георгиевич ГРИГОРЯН; Квасневский, Евгений Анатольевич

    2013-01-01

    The ranking of nuclear power plant safety improvement projects is the most important task for ensuring the efficiency of the NPP project management office's work. The total number of projects in an NPP portfolio may exceed 400. The features of ranking nuclear power plant safety improvement projects in an NPP portfolio determine the choice of verbal decision analysis as the decision-making method, as it allows quick comparison of a number of alternatives that are not available at the time of con...

  2. Ranking Music Data by Relevance and Importance

    DEFF Research Database (Denmark)

    Ruxanda, Maria Magdalena; Nanopoulos, Alexandros; Jensen, Christian Søndergaard

    2008-01-01

    Due to the rapidly increasing availability of audio files on the Web, it is relevant to augment search engines with advanced audio search functionality. In this context, the ranking of the retrieved music is an important issue. This paper proposes a music ranking method capable of flexibly fusing...... the relevance and importance of music. The proposed method may support users with diverse needs when searching for music....

  3. Rank distributions: A panoramic macroscopic outlook

    Science.gov (United States)

    Eliazar, Iddo I.; Cohen, Morrel H.

    2014-01-01

    This paper presents a panoramic macroscopic outlook of rank distributions. We establish a general framework for the analysis of rank distributions, which classifies them into five macroscopic "socioeconomic" states: monarchy, oligarchy-feudalism, criticality, socialism-capitalism, and communism. Oligarchy-feudalism is shown to be characterized by discrete macroscopic rank distributions, and socialism-capitalism is shown to be characterized by continuous macroscopic size distributions. Criticality is a transition state between oligarchy-feudalism and socialism-capitalism, which can manifest allometric scaling with multifractal spectra. Monarchy and communism are extreme forms of oligarchy-feudalism and socialism-capitalism, respectively, in which the intrinsic randomness vanishes. The general framework is applied to three different models of rank distributions—top-down, bottom-up, and global—and unveils each model's macroscopic universality and versatility. The global model yields a macroscopic classification of the generalized Zipf law, an omnipresent form of rank distributions observed across the sciences. An amalgamation of the three models establishes a universal rank-distribution explanation for the macroscopic emergence of a prevalent class of continuous size distributions, ones governed by unimodal densities with both Pareto and inverse-Pareto power-law tails.

  4. Hierarchical Rank Aggregation with Applications to Nanotoxicology.

    Science.gov (United States)

    Patel, Trina; Telesca, Donatello; Rallo, Robert; George, Saji; Xia, Tian; Nel, André E

    2013-06-01

    The development of high throughput screening (HTS) assays in the field of nanotoxicology provides new opportunities for the hazard assessment and ranking of engineered nanomaterials (ENMs). It is often necessary to rank lists of materials based on multiple risk assessment parameters, often aggregated across several measures of toxicity and possibly spanning an array of experimental platforms. Bayesian models coupled with the optimization of loss functions have been shown to provide an effective framework for conducting inference on ranks. In this article we present various loss-function-based ranking approaches for comparing ENMs within experiments and toxicity parameters. Additionally, we propose a framework for the aggregation of ranks across different sources of evidence while allowing for differential weighting of this evidence based on its reliability and importance in risk ranking. We apply these methods to high throughput toxicity data on two human cell lines, exposed to eight different nanomaterials, and measured in relation to four cytotoxicity outcomes. This article has supplementary material online.
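
    The differential weighting of evidence sources described above can be illustrated with a reliability-weighted mean rank. This is a deliberately simple stand-in for the article's Bayesian loss-function machinery, and the assay names, ranks, and weights are hypothetical:

```python
def aggregate_ranks(rankings, weights):
    """Reliability-weighted mean rank across evidence sources.
    rankings: {source: {item: rank}} with rank 1 = most toxic;
    weights:  {source: reliability weight}."""
    items = list(next(iter(rankings.values())))
    total = sum(weights.values())
    mean_rank = {m: sum(weights[s] * rankings[s][m] for s in rankings) / total
                 for m in items}
    return sorted(items, key=mean_rank.get)

# Hypothetical per-assay toxicity ranks for three nanomaterials
assays = {
    "cell_death":      {"ZnO": 1, "TiO2": 3, "Ag": 2},
    "membrane_damage": {"ZnO": 2, "TiO2": 3, "Ag": 1},
}
weights = {"cell_death": 2.0, "membrane_damage": 1.0}  # assumed reliabilities
print(aggregate_ranks(assays, weights))
```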

  5. Credit Scoring Modeling

    Directory of Open Access Journals (Sweden)

    Siana Halim

    2014-01-01

    Full Text Available It is generally easier to predict defaults accurately if a large data set (including defaults) is available for estimating the prediction model. This puts small banks, which tend to have smaller data sets, at a disadvantage. It can also pose a problem for large banks that began to collect their own historical data only recently, or banks that recently introduced a new rating system. We used a Bayesian methodology that enables banks with small data sets to improve their default probability estimates. Another advantage of the Bayesian method is that it provides a natural way of dealing with structural differences between a bank’s internal data and additional, external data. In practice, the true scoring function may differ across the data sets, the small internal data set may contain information that is missing in the larger external data set, or the variables in the two data sets may not be exactly the same but related. The Bayesian method can handle such problems.
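
    One minimal way to see how external data can stabilize a small bank's default probability estimate is beta-binomial shrinkage: treat the external portfolio as a prior and update it with the internal observations. This sketch illustrates the Bayesian idea only; the paper's actual model also handles structural differences between the data sets, which this does not, and all figures are invented:

```python
def posterior_default_rate(prior_defaults, prior_loans, defaults, loans):
    """Posterior mean default probability under a beta-binomial model:
    the external portfolio acts as a Beta prior that the internal
    observations update."""
    alpha = prior_defaults + defaults
    beta = (prior_loans - prior_defaults) + (loans - defaults)
    return alpha / (alpha + beta)

# External data suggest a 2% default rate (20 defaults in 1000 loans);
# the small internal sample alone would say 6% (3 defaults in 50 loans).
# The posterior mean lands between the two, pulled toward the prior.
pd_est = posterior_default_rate(20, 1000, 3, 50)
print(round(pd_est, 4))
```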

  6. Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.

    Science.gov (United States)

    She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng

    2015-01-01

    Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. To solve this problem, a multiclass posterior probability solution for the twin SVM is proposed in this paper, based on ranking continuous outputs and pairwise coupling. First, a two-class posterior probability model is constructed to approximate the posterior probability using ranking-continuous-output techniques and Platt's estimation method. Secondly, a solution for multiclass probabilistic outputs of the twin SVM is provided by combining every pair of class probabilities according to the pairwise coupling method. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method in classification accuracy and time complexity has been demonstrated on the UCI benchmark datasets and on real-world EEG data from BCI Competition IV Dataset 2a, respectively.

  7. Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification

    Directory of Open Access Journals (Sweden)

    Qingshan She

    2015-01-01

    Full Text Available Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. To solve this problem, a multiclass posterior probability solution for the twin SVM is proposed in this paper, based on ranking continuous outputs and pairwise coupling. First, a two-class posterior probability model is constructed to approximate the posterior probability using ranking-continuous-output techniques and Platt’s estimation method. Secondly, a solution for multiclass probabilistic outputs of the twin SVM is provided by combining every pair of class probabilities according to the pairwise coupling method. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method in classification accuracy and time complexity has been demonstrated on the UCI benchmark datasets and on real-world EEG data from BCI Competition IV Dataset 2a, respectively.
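
    The pairwise-coupling step, combining every pair of class probabilities into a single multiclass posterior, can be sketched with the simple normalized-sum rule below. The paper may use a different coupling variant (Hastie and Tibshirani describe an iterative scheme), and the pairwise estimates shown are invented:

```python
def couple_pairwise(pairwise, n_classes):
    """Normalized-sum pairwise coupling: pairwise[(i, j)] approximates
    P(class i | class i or j); each class's summed pairwise evidence is
    normalized into a multiclass probability vector."""
    sums = [sum(pairwise[(i, j)] for j in range(n_classes) if j != i)
            for i in range(n_classes)]
    total = sum(sums)
    return [s / total for s in sums]

# Invented pairwise posteriors for 3 motor-imagery classes
r = {(0, 1): 0.8, (1, 0): 0.2,
     (0, 2): 0.7, (2, 0): 0.3,
     (1, 2): 0.6, (2, 1): 0.4}
probs = couple_pairwise(r, 3)
print([round(p, 3) for p in probs])
```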

  8. ARWU vs. Alternative ARWU Ranking: What are the Consequences for Lower Ranked Universities?

    Directory of Open Access Journals (Sweden)

    Milica Maričić

    2017-05-01

    Full Text Available The ARWU ranking has been a source of academic debate since its introduction in 2003, but the same cannot be said of the Alternative ARWU ranking. The Alternative ARWU ranking attempts to reduce the influence of the prestigious indicators Alumni and Award, which are based on the number of Nobel Prizes and Fields Medals received by alumni or university staff. However, the consequences of reducing these two indicators have not been scrutinized in detail. Therefore, we propose a statistical approach to the comparison of the two rankings and an in-depth analysis of the Alternative ARWU groups. The obtained results, which are based on the official data, can provide new insights into the nature of the Alternative ARWU ranking. The presented approach might initiate further research on the Alternative ARWU ranking and on the impact of a university ranking's list length. JEL Classification: C10, C38, I23

  9. Two Phase Analysis of Ski Schools Customer Satisfaction: Multivariate Ranking and Cub Models

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti

    2014-06-01

    Full Text Available Monitoring tourists' opinions is an important issue also for companies providing sport services. The aim of this paper was to apply CUB models and nonparametric permutation methods to a large customer satisfaction survey performed in 2011 in the ski schools of Alto Adige (Italy). The two-phase data processing was mainly aimed at establishing a global ranking of a sample of five ski schools on the basis of satisfaction scores for several specific service aspects, estimating specific components of the respondents' evaluation process (feeling and uncertainty), and detecting whether customers' characteristics affected these two components. With the application of NPC-Global ranking we obtained a ranking of the evaluated ski schools that simultaneously considers the satisfaction scores of several service aspects. CUB models showed which aspects and subgroups were less satisfied, giving tips on how to improve services and customer satisfaction.

  10. PageRank as a method to rank biomedical literature by importance.

    Science.gov (United States)

    Yates, Elliot J; Dixon, Louise C

    2015-01-01

    Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of 'inbound' links with no consideration of citation importance. PageRank, an algorithm originally developed for ranking webpages at the search engine Google, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available PubMed Central open access subset (PMC-OAS) of biomedical literature. On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905). PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
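
    The computation validated in this article is ordinary PageRank applied to a citation graph rather than to webpages. A compact power-iteration sketch on a hypothetical mini-network follows (the PMC-OAS run used cluster infrastructure, not this toy code):

```python
def pagerank(citations, damping=0.85, iters=100):
    """Power-iteration PageRank. citations: {article: [cited articles]}.
    Dangling articles (no outgoing citations) spread rank uniformly."""
    nodes = set(citations) | {c for refs in citations.values() for c in refs}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - damping) / n for v in nodes}
        for v in nodes:
            refs = citations.get(v, [])
            if refs:
                share = damping * rank[v] / len(refs)
                for c in refs:
                    nxt[c] += share
            else:
                for c in nodes:
                    nxt[c] += damping * rank[v] / n
        rank = nxt
    return rank

# Hypothetical mini citation network (A, B, D all cite C)
net = {"A": ["C"], "B": ["C"], "C": [], "D": ["C", "A"]}
pr = pagerank(net)
print(max(pr, key=pr.get))
```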

  11. RANK/RANK Ligand/OPG: A New Therapeutic Approach in Osteoporosis Treatment

    Directory of Open Access Journals (Sweden)

    Preisinger E

    2007-01-01

    Full Text Available Research into the coupling mechanisms of osteoclastogenesis, bone resorption, and remodeling has opened up potential new therapeutic approaches for the treatment of osteoporosis. A key role in bone resorption is played by the RANK ("receptor activator of nuclear factor (NF-κB") ligand (RANKL). Binding of RANKL to the receptor RANK initiates bone resorption. OPG (osteoprotegerin), as well as the human monoclonal antibody (IgG2) denosumab developed for clinical use, blocks the binding of RANK ligand to RANK and prevents bone loss.

  12. Probable doxycycline-induced acute pancreatitis.

    Science.gov (United States)

    Moy, Brian T; Kapila, Nikhil

    2016-03-01

    A probable case of doxycycline-induced pancreatitis is reported. A 51-year-old man was admitted to the emergency department with a one-week history of extreme fatigue, malaise, and confusion. Three days earlier he had been started on empirical doxycycline therapy for presumed Lyme disease; he was taking no other medications at the time of admission. A physical examination was remarkable for abdominal tenderness. Relevant laboratory data included a lipase concentration of 5410 units/L (normal range, 13-60 units/L), an amylase concentration of 1304 units/L (normal range, 28-100 units/L), and an elevated glycosylated hemoglobin concentration of 15.2%. Doxycycline was discontinued. With vasopressor support, aggressive fluid resuscitation, hemodialysis, and an insulin infusion, the patient's clinical course rapidly improved over five days. Scoring of the case via the method of Naranjo et al. yielded a score of 6, indicating a probable adverse reaction to doxycycline. A man developed AP three days after starting therapy with oral doxycycline, and the association between drug and reaction was determined to be probable. His case appears to be the third reported case of doxycycline-associated AP, although tigecycline, tetracycline, and minocycline have also been implicated as causes of AP. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  13. A cautionary note on applying scores in stratified data.

    Science.gov (United States)

    Podgor, M J; Gastwirth, J L

    1994-12-01

    When rank tests are used to analyze stratified data, three methods for assigning scores to the observations have been proposed: (S) independently within each stratum (see Lehmann, 1975, Nonparametrics: Statistical Methods Based on Ranks; San Francisco: Holden-Day); (A) after aligning the observations within each stratum and then pooling the aligned observations (Hodges and Lehmann, 1962, Annals of Mathematical Statistics 33, 482-497); and (P) after pooling the observations across all strata (that is, without alignment) (Mantel, 1963, Journal of the American Statistical Association 58, 690-700; Mantel and Ciminera, 1979, Cancer Research 39, 4308-4315). Test statistics are formed for each method by combining the stratum-specific linear rank tests using the assigned scores. We show that method P is sensitive to the score function used in the case of two moderately sized strata. In general, we recommend methods S and A for use with moderate to large-sized strata.
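
    The three scoring methods can be made concrete for simple (tie-free) rank scores: method S ranks within each stratum, method A subtracts each stratum's mean before pooling and ranking, and method P ranks the pooled raw observations. A sketch with two invented strata:

```python
def simple_ranks(values):
    """1-based ranks, assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for position, idx in enumerate(order, start=1):
        ranks[idx] = position
    return ranks

def method_s(strata):
    """Method S: rank independently within each stratum."""
    return [simple_ranks(stratum) for stratum in strata]

def method_a(strata):
    """Method A: align (center) each stratum, pool, then rank."""
    aligned = []
    for stratum in strata:
        mean = sum(stratum) / len(stratum)
        aligned.extend(x - mean for x in stratum)
    pooled = simple_ranks(aligned)
    out, k = [], 0
    for stratum in strata:
        out.append(pooled[k:k + len(stratum)])
        k += len(stratum)
    return out

def method_p(strata):
    """Method P: pool the raw observations across strata, then rank."""
    pooled = simple_ranks([x for stratum in strata for x in stratum])
    out, k = [], 0
    for stratum in strata:
        out.append(pooled[k:k + len(stratum)])
        k += len(stratum)
    return out

strata = [[3.1, 2.4, 5.0], [10.2, 11.5, 9.9]]  # invented two-stratum data
print(method_s(strata), method_a(strata), method_p(strata))
```

    Note how method P's ranks are dominated by the between-stratum shift, which is the sensitivity the authors caution about.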

  14. Ranking Fuzzy Numbers and Its Application to Products Attributes Preferences

    OpenAIRE

    Abdullah, Lazim; Fauzee, Nor Nashrah Ahmad

    2011-01-01

    Ranking is one of the widely used methods in the fuzzy decision-making environment. The recent method for ranking fuzzy numbers proposed by Wang and Li is claimed to be an improved version of ranking. However, the method has never been simplified and tested in a real-life application. This paper presents a four-step computation for ranking fuzzy numbers and its application in ranking attributes of selected chocolate products. The four-step algorithm was formulated to rank fuzzy numbers and followed by a tes...

  15. COMPARISON OF CHEMICAL SCREENING AND RANKING APPROACHES: THE WASTE MINIMIZATION PRIORITIZATION TOOL VERSUS TOXIC EQUIVALENCY POTENTIALS

    Science.gov (United States)

    Chemical screening in the United States is often conducted using scoring and ranking methodologies. Linked models accounting for chemical fate, exposure, and toxicological effects are generally preferred in Europe and in product Life Cycle Assessment. For the first time, a compar...

  16. The stability of bank efficiency rankings when risk preferences and objectives are different

    NARCIS (Netherlands)

    Koetter, M.

    2008-01-01

    We analyze the stability of efficiency rankings of German universal banks between 1993 and 2004. First, we estimate traditional efficiency scores with stochastic cost and alternative profit frontier analysis. Then, we explicitly allow for different risk preferences and measure efficiency with a

  17. EPA Seeks Public Comments on Addition of Subsurface Intrusion Component to the Superfund Hazard Ranking System

    Science.gov (United States)

    WASHINGTON -- The U.S. Environmental Protection Agency (EPA) is seeking public comment on the proposed addition of a subsurface intrusion (SsI) component to the Superfund Hazard Ranking System (HRS). The HRS is a scoring system EPA uses to identify

  18. Child Well-Being in Rich Countries : UNICEF's Ranking Revisited, and New Symmetric Aggregating Operators Exemplified

    NARCIS (Netherlands)

    Dijkstra, Theo K.

    In a report published in 2007 UNICEF measured six dimensions of child well-being for the majority of the economically advanced nations. No overall scores are given, but countries are listed in the order of their average rank on the dimensions, which are therefore implicitly assigned 'equal

  19. Social class rank, essentialism, and punitive judgment.

    Science.gov (United States)

    Kraus, Michael W; Keltner, Dacher

    2013-08-01

    Recent evidence suggests that perceptions of social class rank influence a variety of social cognitive tendencies, from patterns of causal attribution to moral judgment. In the present studies we tested the hypotheses that upper-class rank individuals would be more likely to endorse essentialist lay theories of social class categories (i.e., that social class is founded in genetically based, biological differences) than would lower-class rank individuals and that these beliefs would decrease support for restorative justice--which seeks to rehabilitate offenders, rather than punish unlawful action. Across studies, higher social class rank was associated with increased essentialism of social class categories (Studies 1, 2, and 4) and decreased support for restorative justice (Study 4). Moreover, manipulated essentialist beliefs decreased preferences for restorative justice (Study 3), and the association between social class rank and class-based essentialist theories was explained by the tendency to endorse beliefs in a just world (Study 2). Implications for how class-based essentialist beliefs potentially constrain social opportunity and mobility are discussed.

  20. Global network centrality of university rankings

    Science.gov (United States)

    Guo, Weisi; Del Vecchio, Marco; Pogrebna, Ganna

    2017-10-01

    Universities and higher education institutions form an integral part of the national infrastructure and prestige. As academic research benefits increasingly from international exchange and cooperation, many universities have increased investment in improving and enabling their global connectivity. Yet, the relationship of university performance and its global physical connectedness has not been explored in detail. We conduct, to our knowledge, the first large-scale data-driven analysis into whether there is a correlation between university relative ranking performance and its global connectivity via the air transport network. The results show that local access to global hubs (as measured by air transport network betweenness) strongly and positively correlates with the ranking growth (statistical significance in different models ranges between the 5% and 1% levels). We also found that the local airport's aggregate flight paths (degree) and capacity (weighted degree) have no effect on university ranking, further showing that global connectivity distance is more important than the capacity of flight connections. We also examined the effect of local city economic development as a confounding variable and no effect was observed, suggesting that access to global transportation hubs outweighs economic performance as a determinant of university ranking. The impact of this research is that we have determined the importance of the centrality of global connectivity and, hence, established initial evidence for further exploring potential connections between university ranking and regional investment policies on improving global connectivity.

  1. A Cognitive Model for Aggregating People's Rankings

    Science.gov (United States)

    Lee, Michael D.; Steyvers, Mark; Miller, Brent

    2014-01-01

    We develop a cognitive modeling approach, motivated by classic theories of knowledge representation and judgment from psychology, for combining people's rankings of items. The model makes simple assumptions about how individual differences in knowledge lead to observed ranking data in behavioral tasks. We implement the cognitive model as a Bayesian graphical model, and use computational sampling to infer an aggregate ranking and measures of the individual expertise. Applications of the model to 23 data sets, dealing with general knowledge and prediction tasks, show that the model performs well in producing an aggregate ranking that is often close to the ground truth and, as in the “wisdom of the crowd” effect, usually performs better than most individuals. We also present some evidence that the model outperforms the traditional statistical Borda count method, and that the model is able to infer people's relative expertise surprisingly well without knowing the ground truth. We discuss the advantages of the cognitive modeling approach to combining ranking data, and in wisdom of the crowd research generally, as well as highlighting a number of potential directions for future model development. PMID:24816733
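
    The Borda count baseline against which the cognitive model is compared can be stated in a few lines; the judges' rankings below are invented:

```python
from collections import defaultdict

def borda(rankings):
    """Borda count: in a ranking of n items, the item in position p
    (0-based, best first) earns n - 1 - p points; items are ordered by
    total points across all judges."""
    points = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            points[item] += n - 1 - position
    return sorted(points, key=points.get, reverse=True)

# Three invented judges ranking the same three items, best first
judges = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]
print(borda(judges))
```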

  2. A Review of Outcomes of Seven World University Ranking Systems

    National Research Council Canada - National Science Library

    Mahmood Khosrowjerdi; Neda Zeraatkar

    2012-01-01

    There are many national and international ranking systems that rank the universities and higher education institutions of the world, based on the same or different criteria...

  3. Optimal Interval for Success in Judo World-Ranking Competitions.

    Science.gov (United States)

    Franchini, Emerson; Takito, Monica Y; da Silva, Rodrigo M; Shiroma, Seihati A; Wicks, Lance; Julio, Ursula F

    2017-05-01

    To determine the optimal interval between competitions for success in the different events of the judo world tour. A total of 20,916 female and 29,900 male competition participations in the judo world-tour competitions held between January 2009 and December 2015 were analyzed, considering the dependent variable, winning a medal, and the independent variables, levels of competition. There was an increased probability of winning a medal when the interval was in the 10- to 13-wk range for both male and female athletes competing at Grand Prix, Continental-Championship, and World-Championship events, whereas for Grand Slam, only men had an increased probability of winning a medal in this interval range. Furthermore, men had increased probability of podium positions in Continental Championship, World Master, and Olympic Games when the interval was longer than 14 wk. Optimal interval period between successive competitions varies according to competition level and sex; shorter intervals (6-9 wk) were better for female athletes competing at the lowest competition level (Continental Open), but for most of the competitions, the 10- to 13-wk interval was detected as optimal for both male and female athletes (Grand Prix, Continental Championship, and World Championship), whereas for the ranking-based qualified male competitions (ie, Masters and Olympic Games), a longer period (>14 wk) is needed.

  4. Applied probability and stochastic processes

    CERN Document Server

    Sumita, Ushio

    1999-01-01

    Applied Probability and Stochastic Processes is an edited work written in honor of Julien Keilson. This volume has attracted a host of scholars in applied probability, who have made major contributions to the field, and have written survey and state-of-the-art papers on a variety of applied probability topics, including, but not limited to: perturbation method, time reversible Markov chains, Poisson processes, Brownian techniques, Bayesian probability, optimal quality control, Markov decision processes, random matrices, queueing theory and a variety of applications of stochastic processes. The book has a mixture of theoretical, algorithmic, and application chapters providing examples of the cutting-edge work that Professor Keilson has done or influenced over the course of his highly-productive and energetic career in applied probability and stochastic processes. The book will be of interest to academic researchers, students, and industrial practitioners who seek to use the mathematics of applied probability i...

  5. Factors predicting Gleason score 6 upgrading after radical prostatectomy

    OpenAIRE

    Milonas, Daimantas; Grybas, Aivaras; Auskalnis, Stasys; Gudinaviciene, Inga; Baltrimavicius, Ruslanas; Kincius, Marius; Jievaltas, Mindaugas

    2011-01-01

    Objectives Prostate cancer Gleason score 6 is the most common score detected on prostatic biopsy. We analyzed the clinical parameters that predict the likelihood of Gleason score upgrading after radical prostatectomy. Methods The study population consisted of 241 patients who underwent radical retropubic prostatectomy between Feb 2002 and Dec 2007 for Gleason score 6 adenocarcinoma. The influence of preoperative parameters on the probability of a Gleason score upgrading after surgery was eval...

  6. Ranking schools on external knowledge tests results

    Directory of Open Access Journals (Sweden)

    Gašper Cankar

    2007-01-01

    Full Text Available The paper discusses the use of external knowledge test results for school ranking and the implicit effect of such ranking. A question of validity is raised and a review of research literature and main known problems are presented. In many western countries publication of school results is a common practice and a similar trend can be observed in Slovenia. Experiences of other countries help to predict positive and negative aspects of such publication. Results of external knowledge tests produce very limited information about school quality—if we use other sources of information our ranking of schools can be very different. Nevertheless, external knowledge tests can yield useful information. If we want to improve quality in schools, we must allow schools to use this information themselves and improve from within. Broad public scrutiny is unnecessary and problematic—it moves the focus of school efforts from real improvement of quality to mere improvement of the school public image.

  7. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  8. Adaptive distributional extensions to DFR ranking

    DEFF Research Database (Denmark)

    Petersen, Casper; Simonsen, Jakob Grue; Järvelin, Kalervo

    2016-01-01

    Divergence From Randomness (DFR) ranking models assume that informative terms are distributed in a corpus differently than non-informative terms. Different statistical models (e.g. Poisson, geometric) are used to model the distribution of non-informative terms, producing different DFR models. An informative term is then detected by measuring the divergence of its distribution from the distribution of non-informative terms. However, there is little empirical evidence that the distributions of non-informative terms used in DFR actually fit current datasets. Practically this risks providing a poor separation between informative and non-informative terms, thus compromising the discriminative power of the ranking model. We present a novel extension to DFR, which first detects the best-fitting distribution of non-informative terms in a collection, and then adapts the ranking computation to this best...

  9. Probability and statistics: selected problems

    OpenAIRE

    Machado, J.A. Tenreiro; Pinto, Carla M. A.

    2014-01-01

    Probability and Statistics—Selected Problems is a unique book for senior undergraduate and graduate students to quickly review basic material in probability and statistics. Descriptive statistics are presented first, followed by a review of probability. Discrete and continuous distributions are presented. Sampling and estimation with hypothesis testing are presented in the last two chapters. Solutions to the proposed exercises are listed for the reader's reference.

  10. Statistical methods for solar flare probability forecasting

    Science.gov (United States)

    Vecchia, D. F.; Tryon, P. V.; Caldwell, G. A.; Jones, R. W.

    1980-09-01

    The Space Environment Services Center (SESC) of the National Oceanic and Atmospheric Administration provides probability forecasts of regional solar flare disturbances. This report describes a statistical method for obtaining 24-hour solar flare forecasts which, historically, have been subjectively formulated. In Section 1 of this report, the flare classifications of the SESC and the particular probability forecasts to be considered are defined. In Section 2 we describe the solar flare data base and outline general principles for effective data management. Three statistical techniques for solar flare probability forecasting are discussed in Section 3, viz, discriminant analysis, logistic regression, and multiple linear regression. We also review two scoring measures and suggest the logistic regression approach for obtaining 24-hour forecasts. In Section 4 a heuristic procedure is used to select nine basic predictors from the many available explanatory variables. Using these nine variables, logistic regression is demonstrated by example in Section 5. We conclude in Section 6 with broad suggestions regarding continued development of objective methods for solar flare probability forecasting.
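As a rough illustration of how a fitted logistic regression turns predictor values into a 24-hour flare probability, here is a minimal sketch. The coefficients, intercept, and predictor meanings are hypothetical stand-ins, not the report's fitted values or its nine selected predictors:

```python
# Logistic model sketch: P(flare) = 1 / (1 + exp(-(b0 + b . x))).
import math

def flare_probability(x, coef, intercept):
    """Return the logistic probability for predictor vector x."""
    z = intercept + sum(b * xi for b, xi in zip(coef, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted coefficients for three region-level predictors
# (say, sunspot area, magnetic-class score, prior-day flare count).
coef = [0.8, 1.2, 0.5]
intercept = -3.0

p = flare_probability([1.0, 1.5, 2.0], coef, intercept)
print(f"24-h flare probability: {p:.3f}")
```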

  11. Sign rank versus Vapnik-Chervonenkis dimension

    Science.gov (United States)

    Alon, N.; Moran, Sh; Yehudayoff, A.

    2017-12-01

    This work studies the maximum possible sign rank of sign (N × N)-matrices with a given Vapnik-Chervonenkis dimension d. For d = 1, this maximum is three. For d = 2, this maximum is Θ̃(N^{1/2}). For d > 2, similar but slightly less accurate statements hold. The lower bounds improve on previous ones by Ben-David et al., and the upper bounds are novel. The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve. The upper bound technique is also used to: (i) provide estimates on the number of classes of a given Vapnik-Chervonenkis dimension, and the number of maximum classes of a given Vapnik-Chervonenkis dimension, answering a question of Frankl from 1989, and (ii) design an efficient algorithm that provides an O(N/log N) multiplicative approximation for the sign rank. We also observe a general connection between sign rank and spectral gaps which is based on Forster's argument. Consider the adjacency (N × N)-matrix of a Δ-regular graph with a second eigenvalue of absolute value λ and Δ ≤ N/2. We show that the sign rank of the signed version of this matrix is at least Δ/λ. We use this connection to prove the existence of a maximum class C ⊆ {±1}^N with Vapnik-Chervonenkis dimension 2 and sign rank Θ̃(N^{1/2}). This answers a question of Ben-David et al. regarding the sign rank of large Vapnik-Chervonenkis classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem. We further describe connections to communication complexity, geometry, learning theory, and combinatorics. Bibliography: 69 titles.

  12. Pulling Rank: A Plan to Help Students with College Choice in an Age of Rankings

    Science.gov (United States)

    Thacker, Lloyd

    2008-01-01

    Colleges and universities are "ranksteering"--driving under the influence of popular college rankings systems like "U.S. News and World Report's" Best Colleges. This article examines the criticisms of college rankings and describes how a group of education leaders is honing a plan to end the tyranny of the ratings game and better help students and…

  13. Ranking Entities in Networks via Lefschetz Duality

    DEFF Research Database (Denmark)

    Aabrandt, Andreas; Hansen, Vagn Lundsgaard; Poulsen, Bjarne

    2014-01-01

    In the theory of communication it is essential that agents are able to exchange information. This fact is closely related to the study of connected spaces in topology. A communication network may be modelled as a topological space such that agents can communicate if and only if they belong... then be ranked according to how essential their positions are in the network by considering the effect of their respective absences. Defining a ranking of a network which takes the individual position of each entity into account has the purpose of assigning different roles to the entities, e.g. agents...

  14. Compressed Sensing with Rank Deficient Dictionaries

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn

    2012-01-01

    In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C...

  15. An introduction to probability and statistical inference

    CERN Document Server

    Roussas, George G

    2003-01-01

    "The text is wonderfully written and has the most comprehensive range of exercise problems that I have ever seen." - Tapas K. Das, University of South Florida. "The exposition is great; a mixture between conversational tones and formal mathematics; the appropriate combination for a math text at [this] level. In my examination I could find no instance where I could improve the book." - H. Pat Goeters, Auburn University, Alabama. * Contains more than 200 illustrative examples discussed in detail, plus scores of numerical examples and applications * Chapters 1-8 can be used independently for an introductory course in probability * Provides a substantial number of proofs

  16. Assessing vascular endothelial function using frequency and rank order statistics

    Science.gov (United States)

    Wu, Hsien-Tsai; Hsu, Po-Chun; Sun, Cheuk-Kwan; Liu, An-Bang; Lin, Zong-Lin; Tang, Chieh-Ju; Lo, Men-Tzung

    2013-08-01

    Using frequency and rank order statistics (FROS), this study analyzed the fluctuations in arterial waveform amplitudes recorded from an air pressure sensing system before and after reactive hyperemia (RH) induction by temporary blood flow occlusion to evaluate the vascular endothelial function of aged and diabetic subjects. The modified probability-weighted distance (PWD) calculated from the FROS was compared with the dilatation index (DI) to evaluate its validity and sensitivity in the assessment of vascular endothelial function. The results showed that the PWD can provide a quantitative determination of the structural changes in the arterial pressure signals associated with regulation of vascular tone and blood pressure by intact vascular endothelium after the application of occlusion stress. Our study suggests that the use of FROS is a reliable noninvasive approach to the assessment of vascular endothelial degeneration in aging and diabetes.

  17. Construction of a chemical ranking system of soil pollution substances for screening of priority soil contaminants in Korea.

    Science.gov (United States)

    Jeong, Seung-Woo; An, Youn-Joo

    2012-04-01

    The Korean government recently proposed expanding the number of soil-quality standards to 30 by 2015. The objectives of our study were to construct a reasonable protocol for screening priority soil contaminants for inclusion in the planned soil quality standard expansion. The chemical ranking system of soil pollution substances (CROSS) was first developed to serve as an analytical tool in chemical scoring and ranking of possible soil pollution substances. CROSS incorporates important parameters commonly used in several previous chemical ranking and scoring systems and the new soil pollution parameters. CROSS uses soil-related parameters in its algorithm, including information related to the soil environment, such as soil ecotoxicological data, the soil toxic release inventory (TRI), and soil partitioning coefficients. Soil TRI and monitoring data were incorporated as local specific parameters. In addition, CROSS scores the transportability of chemicals in soil because soil contamination may result in groundwater contamination. Dermal toxicity was used in CROSS only to consider contact with soil. CROSS uses a certainty score to incorporate data uncertainty. CROSS scores the importance of each candidate substance and assigns rankings on the basis of total scores. Cadmium was the most highly ranked. Generally, metals were ranked higher than other substances. Pentachlorophenol, phenol, dieldrin, and methyl tert-butyl ether were ranked the highest among chlorinated compounds, aromatic compounds, pesticides, and others, respectively. The priority substance list generated from CROSS will be used in selecting substances for possible inclusion in the Korean soil quality standard expansion; it will also provide important information for designing a soil-environment management scheme.
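A chemical ranking system of this kind ultimately reduces to a weighted sum of criterion scores followed by sorting. The sketch below uses invented criteria, weights, and scores purely for illustration; it is not the actual CROSS algorithm or its parameters:

```python
# Generic weighted-sum chemical ranking sketch (hypothetical data).
def rank_substances(scores, weights):
    """Rank substances by weighted sum of criterion scores, descending."""
    total = {s: sum(weights[c] * v for c, v in crit.items())
             for s, crit in scores.items()}
    return sorted(total, key=total.get, reverse=True), total

# Made-up criterion scores (0-5 scale) and weights, chosen only so the
# example echoes the abstract's finding that cadmium ranks highest.
scores = {
    "cadmium":  {"toxicity": 5, "release": 4, "mobility": 3},
    "phenol":   {"toxicity": 3, "release": 3, "mobility": 4},
    "dieldrin": {"toxicity": 4, "release": 2, "mobility": 2},
}
weights = {"toxicity": 0.5, "release": 0.3, "mobility": 0.2}

order, totals = rank_substances(scores, weights)
print(order)  # cadmium first with total 4.3
```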

  18. Training Teachers to Teach Probability

    Science.gov (United States)

    Batanero, Carmen; Godino, Juan D.; Roa, Rafael

    2004-01-01

    In this paper we analyze the reasons why the teaching of probability is difficult for mathematics teachers, describe the contents needed in the didactical preparation of teachers to teach probability and analyze some examples of activities to carry out this training. These activities take into account the experience at the University of Granada,…

  19. Expected utility with lower probabilities

    DEFF Research Database (Denmark)

    Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

    1994-01-01

    An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...

  20. RRCRank: a fusion method using rank strategy for residue-residue contact prediction.

    Science.gov (United States)

    Jing, Xiaoyang; Dong, Qiwen; Lu, Ruqian

    2017-09-02

    In structural biology, protein residue-residue contacts play a crucial role in protein structure prediction. Researchers have found that predicted residue-residue contacts can effectively constrain the conformational search space, which is significant for de novo protein structure prediction. Over the last few decades various methods have been developed to predict residue-residue contacts; in recent years, fusion methods in particular have achieved significant performance. In this work, a novel fusion method based on a rank strategy is proposed to predict contacts. Unlike traditional regression or classification strategies, the contact prediction task is treated as a ranking task: two kinds of features are extracted from correlated-mutation methods and ensemble machine-learning classifiers, and the proposed method then uses a learning-to-rank algorithm to predict the contact probability of each residue pair. First, we perform two benchmark tests for the proposed fusion method (RRCRank) on the CASP11 and CASP12 datasets. The test results show that the RRCRank method outperforms other well-developed methods, especially for medium- and short-range contacts. Second, to verify the superiority of the ranking strategy, we predict contacts using the traditional regression and classification strategies based on the same features as the ranking strategy. Compared with these two traditional strategies, the proposed ranking strategy shows better performance for the three contact types, in particular for long-range contacts. Third, RRCRank has been compared with several state-of-the-art methods in CASP11 and CASP12. The results show that RRCRank achieves comparable prediction precision and is better than three methods in most assessment metrics. The learning-to-rank algorithm is introduced to develop a novel rank-based method for the residue-residue contact prediction of proteins, which...
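Contact predictors of this kind are commonly assessed by the precision among their top-L ranked residue pairs. A minimal sketch of that metric follows, with made-up scores and contacts; it illustrates only the evaluation, not RRCRank's features or learning-to-rank model:

```python
# Top-l precision sketch for ranked residue-pair contact predictions.
def top_l_precision(scored_pairs, true_contacts, l):
    """Precision among the l highest-scoring residue pairs.

    scored_pairs: dict mapping (i, j) residue pairs to predicted
    contact probability; true_contacts: set of actual contact pairs.
    """
    ranked = sorted(scored_pairs, key=scored_pairs.get, reverse=True)
    hits = sum(1 for pair in ranked[:l] if pair in true_contacts)
    return hits / l

# Hypothetical predictions for four residue pairs and the true contacts.
scores = {(1, 9): 0.9, (2, 8): 0.7, (3, 7): 0.4, (1, 5): 0.2}
contacts = {(1, 9), (3, 7)}
print(top_l_precision(scores, contacts, 2))  # top-2: (1,9) hit, (2,8) miss
```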

  1. Introduction to probability with Mathematica

    CERN Document Server

    Hastings, Kevin J

    2009-01-01

    Discrete Probability: The Cast of Characters; Properties of Probability; Simulation; Random Sampling; Conditional Probability; Independence; Discrete Distributions; Discrete Random Variables, Distributions, and Expectations; Bernoulli and Binomial Random Variables; Geometric and Negative Binomial Random Variables; Poisson Distribution; Joint, Marginal, and Conditional Distributions; More on Expectation. Continuous Probability: From the Finite to the (Very) Infinite; Continuous Random Variables and Distributions; Continuous Expectation; Continuous Distributions; The Normal Distribution; Bivariate Normal Distribution; New Random Variables from Old; Order Statistics; Gamma Distributions; Chi-Square, Student's t, and F-Distributions; Transformations of Normal Random Variables. Asymptotic Theory: Strong and Weak Laws of Large Numbers; Central Limit Theorem. Stochastic Processes and Applications: Markov Chains; Poisson Processes; Queues; Brownian Motion; Financial Mathematics. Appendix: Introduction to Mathematica; Glossary of Mathematica Commands for Probability; Short Answers...

  2. SOUTH AFRICAN ARMY RANKS AND INSIGNIA

    African Journals Online (AJOL)

    major, captain, lieutenant; Other Ranks: Warrant officer, staff sergeant, sergeant, corporal, lance-corporal, private.' We apparently had no need for second lieutenants at that time, and they were introduced only... Army warrant officers can also hold the common service posts of Sergeant-Major of Special Forces.

  3. Kinesiology Faculty Citations across Academic Rank

    Science.gov (United States)

    Knudson, Duane

    2015-01-01

    Citations to research reports are used as a measure for the influence of a scholar's research line when seeking promotion, grants, and awards. The current study documented the distributions of citations to kinesiology scholars of various academic ranks. Google Scholar Citations was searched for user profiles using five research interest areas…

  4. Biomechanics Scholar Citations across Academic Ranks

    Directory of Open Access Journals (Sweden)

    Knudson Duane

    2015-11-01

    Full Text Available Study aim: citations to the publications of a scholar have been used as a measure of the quality or influence of their research record. A world-wide descriptive study of the citations to the publications of biomechanics scholars of various academic ranks was conducted.

  5. Ranking Workplace Competencies: Student and Graduate Perceptions.

    Science.gov (United States)

    Rainsbury, Elizabeth; Hodges, Dave; Burchell, Noel; Lay, Mark

    2002-01-01

    New Zealand business students and graduates made similar rankings of the five most important workplace competencies: computer literacy, customer service orientation, teamwork and cooperation, self-confidence, and willingness to learn. Graduates placed greater importance on most of the 24 competencies, resulting in a statistically significant…

  6. Subject Gateway Sites and Search Engine Ranking.

    Science.gov (United States)

    Thelwall, Mike

    2002-01-01

    Discusses subject gateway sites and commercial search engines for the Web and presents an explanation of Google's PageRank algorithm. The principle question addressed is the conditions under which a gateway site will increase the likelihood that a target page is found in search engines. (LRW)

  7. Ranking related entities: components and analyses

    NARCIS (Netherlands)

    Bron, M.; Balog, K.; de Rijke, M.

    2010-01-01

    Related entity finding is the task of returning a ranked list of homepages of relevant entities of a specified type that need to engage in a given relationship with a given source entity. We propose a framework for addressing this task and perform a detailed analysis of four core components;

  8. Low-rank coal oil agglomeration

    Science.gov (United States)

    Knudson, C.L.; Timpe, R.C.

    1991-07-16

    A low-rank coal oil agglomeration process is described. High-mineral-content, high-ash-content subbituminous coals are effectively agglomerated with a bridging oil which is partially water-soluble, capable of entering the pore structure, and usually coal-derived.

  9. An evaluation and critique of current rankings

    NARCIS (Netherlands)

    Federkeil, Gero; Westerheijden, Donald F.; van Vught, Franciscus A.; Ziegele, Frank

    2012-01-01

    This chapter raises the question of whether university league tables deliver relevant information to one of their key target groups – students. It examines the inherent biases and weaknesses in the methodologies of the major rankings and argues that the concentration on a single indicator of

  10. World University Ranking Methodologies: Stability and Variability

    Science.gov (United States)

    Fidler, Brian; Parsons, Christine

    2008-01-01

    There has been a steady growth in the number of national university league tables over the last 25 years. By contrast, "World University Rankings" are a more recent development and have received little serious academic scrutiny in peer-reviewed publications. Few researchers have evaluated the sources of data and the statistical…

  11. Statistical inference of Minimum Rank Factor Analysis

    NARCIS (Netherlands)

    Shapiro, A; Ten Berge, JMF

    For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the

  12. City Life: Rankings (Livability) versus Perceptions (Satisfaction)

    Science.gov (United States)

    Okulicz-Kozaryn, Adam

    2013-01-01

    I investigate the relationship between the popular Mercer city ranking (livability) and survey data (satisfactions). Livability aims to capture "objective" quality of life such as infrastructure. Survey items capture "subjective" quality of life such as satisfaction with city. The relationship between objective measures of quality of life and…

  13. Matrices with high completely positive semidefinite rank

    NARCIS (Netherlands)

    de Laat, David; Gribling, Sander; Laurent, Monique

    2017-01-01

    A real symmetric matrix M is completely positive semidefinite if it admits a Gram representation by (Hermitian) positive semidefinite matrices of any size d. The smallest such d is called the (complex) completely positive semidefinite rank of M , and it is an open question whether there exists an

  14. Ranking health between countries in international comparisons

    DEFF Research Database (Denmark)

    Brønnum-Hansen, Henrik

    2014-01-01

    Cross-national comparisons and ranking of summary measures of population health sometimes give rise to inconsistent and diverging conclusions. In order to minimise confusion, international comparative studies ought to be based on well-harmonised data with common standards of definitions...

  15. Comparing survival curves using rank tests

    NARCIS (Netherlands)

    Albers, Willem/Wim

    1990-01-01

    Survival times of patients can be compared using rank tests in various experimental setups, including the two-sample case and the case of paired data. Attention is focussed on two frequently occurring complications in medical applications: censoring and tail alternatives. A review is given of the

  16. Smooth rank one perturbations of selfadjoint operators

    NARCIS (Netherlands)

    Hassi, Seppo; Snoo, H.S.V. de; Willemsma, A.D.I.

    Let A be a selfadjoint operator in a Hilbert space ℋ with inner product [·,·]. The rank one perturbations of A have the form A + τ[·,ω]ω, τ ∈ ℝ, for some element ω ∈ ℋ. In this paper we consider smooth perturbations, i.e. we consider ω ∈ dom |A|^{k/2} for

  17. Primate Innovation: Sex, Age and Social Rank

    NARCIS (Netherlands)

    Reader, S.M.; Laland, K.N.

    2001-01-01

    Analysis of an exhaustive survey of primate behavior collated from the published literature revealed significant variation in rates of innovation among individuals of different sex, age and social rank. We searched approximately 1,000 articles in four primatology journals, together with other

  18. An algorithm for ranking assignments using reoptimization

    DEFF Research Database (Denmark)

    Pedersen, Christian Roed; Nielsen, Lars Relund; Andersen, Kim Allan

    2008-01-01

    We consider the problem of ranking assignments according to cost in the classical linear assignment problem. An algorithm partitioning the set of possible assignments, as suggested by Murty, is presented where, for each partition, the optimal assignment is calculated using a new reoptimization technique. Computational results for the new algorithm are presented...

  19. Returns to Tenure: Time or Rank?

    DEFF Research Database (Denmark)

    Buhai, Ioan Sebastian

    ...-specific investment, efficiency-wage or adverse-selection models. However, rent-extracting arguments as suggested by the theory of internal labor markets indicate that the relative position of the worker in the seniority hierarchy of the firm, her 'seniority rank', may also explain part of the observed returns...

  20. Scoring nail psoriasis

    NARCIS (Netherlands)

    Klaassen, K.M.G.; Kerkhof, P.C.M. van de; Bastiaens, M.T.; Plusje, L.G.; Baran, R.L.; Pasch, M.C.

    2014-01-01

    BACKGROUND: Scoring systems are indispensable in evaluating the severity of disease and monitoring treatment response. OBJECTIVE: We sought to evaluate the competence of various nail psoriasis severity scoring systems and to develop a new scoring system. METHODS: The authors conducted a prospective,

  1. Probabilistic relation between In-Degree and PageRank

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2008-01-01

    This paper presents a novel stochastic model that explains the relation between power laws of In-Degree and PageRank. PageRank is a popularity measure designed by Google to rank Web pages. We model the relation between PageRank and In-Degree through a stochastic equation, which is inspired by the
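PageRank itself can be sketched as a simple power iteration over the link graph. The damping factor 0.85 and the toy three-page graph below are illustrative defaults only, not part of the paper's stochastic model relating PageRank to In-Degree:

```python
# Power-iteration PageRank sketch on a dict {node: [outgoing links]}.
def pagerank(links, d=0.85, iters=100):
    """Return the PageRank vector as a dict summing to 1."""
    nodes = list(links)
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}  # teleportation mass
        for u in nodes:
            out = links[u]
            if out:
                share = d * pr[u] / len(out)  # split rank over out-links
                for v in out:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * pr[u] / n
        pr = new
    return pr

# Toy web graph: page c has the highest in-flow, so it ranks first.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
pr = pagerank(web)
print({u: round(p, 3) for u, p in pr.items()})
```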

  2. The effect of new links on Google PageRank

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli

    2004-01-01

    PageRank is one of the principle criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer and thus it reflects the popularity of a Web page. We study the effect of newly created links on Google PageRank. We discuss to

  3. World University Rankings: Take with a Large Pinch of Salt

    Science.gov (United States)

    Cheng, Soh Kay

    2011-01-01

    Equating the unequal is misleading, and this happens consistently in comparing rankings from different university ranking systems, as the NUT saga shows. This article illustrates the problem by analyzing the 2011 rankings of the top 100 universities in the AWUR, QSWUR and THEWUR ranking results. It also discusses the reasons why the rankings…

  4. Generalized Reduced Rank Tests using the Singular Value Decomposition

    NARCIS (Netherlands)

    Kleibergen, F.R.; Paap, R.

    2006-01-01

    We propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, like: a Kronecker covariance matrix for the canonical correlation rank statistic of Anderson [Annals of Mathematical Statistics (1951), 22, 327-351], sensitivity to the

  5. Some upper and lower bounds on PSD-rank

    NARCIS (Netherlands)

    T. J. Lee (Troy); Z. Wei (Zhaohui); R. M. de Wolf (Ronald)

    2014-01-01

    Positive semidefinite rank (PSD-rank) is a relatively new quantity with applications to combinatorial optimization and communication complexity. We first study several basic properties of PSD-rank, and then develop new techniques for showing lower bounds on the PSD-rank. All of these

  6. Some upper and lower bounds on PSD-rank

    NARCIS (Netherlands)

    Lee, T.; Wei, Z.; de Wolf, R.

    Positive semidefinite rank (PSD-rank) is a relatively new complexity measure on matrices, with applications to combinatorial optimization and communication complexity. We first study several basic properties of PSD-rank, and then develop new techniques for showing lower bounds on the PSD-rank. All

  7. Biology of RANK, RANKL, and osteoprotegerin

    Science.gov (United States)

    Boyce, Brendan F; Xing, Lianping

    2007-01-01

    The discovery of the receptor activator of nuclear factor-κB ligand (RANKL)/RANK/osteoprotegerin (OPG) system and its role in the regulation of bone resorption exemplifies how both serendipity and a logic-based approach can identify factors that regulate cell function. Before this discovery in the mid to late 1990s, it had long been recognized that osteoclast formation was regulated by factors expressed by osteoblast/stromal cells, but it had not been anticipated that members of the tumor necrosis factor superfamily of ligands and receptors would be involved or that the factors involved would have extensive functions beyond bone remodeling. RANKL/RANK signaling regulates the formation of multinucleated osteoclasts from their precursors as well as their activation and survival in normal bone remodeling and in a variety of pathologic conditions. OPG protects the skeleton from excessive bone resorption by binding to RANKL and preventing it from binding to its receptor, RANK. Thus, RANKL/OPG ratio is an important determinant of bone mass and skeletal integrity. Genetic studies in mice indicate that RANKL/RANK signaling is also required for lymph node formation and mammary gland lactational hyperplasia, and that OPG also protects arteries from medial calcification. Thus, these tumor necrosis factor superfamily members have important functions outside bone. Although our understanding of the mechanisms whereby they regulate osteoclast formation has advanced rapidly during the past 10 years, many questions remain about their roles in health and disease. Here we review our current understanding of the role of the RANKL/RANK/OPG system in bone and other tissues. PMID:17634140

  8. MedlineRanker: flexible ranking of biomedical literature.

    Science.gov (United States)

    Fontaine, Jean-Fred; Barbosa-Silva, Adriano; Schaefer, Martin; Huska, Matthew R; Muro, Enrique M; Andrade-Navarro, Miguel A

    2009-07-01

    The biomedical literature is represented by millions of abstracts available in the Medline database. These abstracts can be queried with the PubMed interface, which provides a keyword-based Boolean search engine. This approach shows limitations in the retrieval of abstracts related to very specific topics, as it is difficult for a non-expert user to find all of the most relevant keywords related to a biomedical topic. Additionally, when searching for more general topics, the same approach may return hundreds of unranked references. To address these issues, text mining tools have been developed to help scientists focus on relevant abstracts. We have implemented the MedlineRanker webserver, which allows a flexible ranking of Medline for a topic of interest without expert knowledge. Given some abstracts related to a topic, the program deduces automatically the most discriminative words in comparison to a random selection. These words are used to score other abstracts, including those from not yet annotated recent publications, which can be then ranked by relevance. We show that our tool can be highly accurate and that it is able to process millions of abstracts in a practical amount of time. MedlineRanker is free for use and is available at http://cbdm.mdc-berlin.de/tools/medlineranker.
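The discriminative-word idea described in this abstract (deduce words that separate topic abstracts from a background set, then score and rank new abstracts with them) can be sketched roughly as follows; this is an illustrative log-odds weighting on toy documents, not MedlineRanker's actual statistic:

```python
# Sketch of ranking documents by discriminative words: words frequent in the
# topic set relative to a background set get positive log-odds weights, and
# each new abstract is scored by summing the weights of its words.
# Illustrative only; not MedlineRanker's exact method.
import math
from collections import Counter

def word_weights(topic_docs, background_docs, smoothing=1.0):
    """Smoothed log-odds weight for every word seen in either corpus."""
    topic = Counter(w for d in topic_docs for w in d.lower().split())
    bg = Counter(w for d in background_docs for w in d.lower().split())
    vocab = set(topic) | set(bg)
    t_total = sum(topic.values()) + smoothing * len(vocab)
    b_total = sum(bg.values()) + smoothing * len(vocab)
    return {w: math.log((topic[w] + smoothing) / t_total)
              - math.log((bg[w] + smoothing) / b_total) for w in vocab}

def score(doc, weights):
    """Sum of weights of the document's words; unseen words contribute 0."""
    return sum(weights.get(w, 0.0) for w in doc.lower().split())

weights = word_weights(
    ["apoptosis pathway caspase", "caspase activation apoptosis"],
    ["stock market report", "weather report today"])
ranked = sorted(["caspase signalling", "market weather"],
                key=lambda d: score(d, weights), reverse=True)
```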

  9. PageRank for low frequency earthquake detection

    Science.gov (United States)

    Aguiar, A. C.; Beroza, G. C.

    2013-12-01

    We have analyzed Hi-Net seismic waveform data during the April 2006 tremor episode in the Nankai Trough in SW Japan using the autocorrelation approach of Brown et al. (2008), which detects low frequency earthquakes (LFEs) based on pair-wise waveform matching. We have generalized this to exploit the fact that waveforms may repeat multiple times, on more than just a pair-wise basis. We are working towards developing a sound statistical basis for event detection, but that is complicated by two factors. First, the statistical behavior of the autocorrelations varies between stations. Analyzing one station at a time assures that the detection threshold will only depend on the station being analyzed. Second, the positive detections do not satisfy "closure." That is, if window A correlates with window B, and window B correlates with window C, then window A and window C do not necessarily correlate with one another. We want to evaluate whether or not a linked set of windows are correlated due to chance. To do this, we map our problem onto one that has previously been solved for web search, and apply Google's PageRank algorithm. PageRank is the probability that a 'random surfer' visits a particular web page; it assigns a ranking to a webpage based on the number of links associated with that page. For windows of seismic data instead of webpages, the windows with high probabilities suggest likely LFE signals. Once identified, we stack the matched windows to improve the signal-to-noise ratio (SNR) and use these stacks as template signals to find other LFEs within continuous data. We compare the results among stations and declare a detection if they are found in a statistically significant number of stations, based on multinomial statistics. We compare our detections using the single-station method to detections found by Shelly et al. (2007) for the April 2006 tremor sequence in Shikoku, Japan. We find strong similarity between the results, as well as many new detections that were not found using
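The stacking step mentioned in this abstract (averaging matched windows to raise the signal-to-noise ratio) rests on the fact that averaging N aligned noisy copies reduces the noise amplitude by roughly sqrt(N). A self-contained sketch on a synthetic waveform, with illustrative sizes and noise levels:

```python
# Stacking matched windows: averaging N aligned noisy copies of the same
# waveform reduces the noise level by roughly sqrt(N). Synthetic example.
import math
import random
import statistics

random.seed(0)
template = [math.sin(2 * math.pi * k / 20) for k in range(100)]   # clean signal
windows = [[s + random.gauss(0, 1.0) for s in template] for _ in range(64)]
stack = [statistics.fmean(col) for col in zip(*windows)]          # the stack

def rms_error(x):
    """Residual noise level of a window relative to the clean template."""
    return math.sqrt(statistics.fmean((a - b) ** 2 for a, b in zip(x, template)))

# With 64 windows, the stack's noise is near 1/sqrt(64) = 12.5% of one window's
```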

  10. Probability machines: consistent probability estimation using nonparametric learning machines.

    Science.gov (United States)

    Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A

    2012-01-01

    Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.

  11. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Summary Background Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
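The kind of nonparametric probability estimation discussed in these records can be illustrated with a toy nearest-neighbor probability machine: estimate P(y = 1 | x) as the fraction of positive labels among the k nearest training points. A hypothetical one-dimensional sketch, not the exact algorithms benchmarked in the paper:

```python
# A toy nearest-neighbor "probability machine": P(y=1|x) is estimated as the
# fraction of positive labels among the k nearest training points.
# One-dimensional, illustrative sketch only.
import random

def knn_probability(train, x, k=5):
    """train: list of (feature, label) pairs with scalar features, labels in {0,1}."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(label for _, label in nearest) / k

# Synthetic data with true P(y=1|x) = x on [0, 1]
random.seed(1)
train = [(x, int(random.random() < x))
         for x in (random.random() for _ in range(2000))]
p_low = knn_probability(train, 0.1, k=101)   # should land near 0.1
p_high = knn_probability(train, 0.9, k=101)  # should land near 0.9
```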

  12. Ranked Conservation Opportunity Areas for Region 7 (ECO_RES.RANKED_OAS)

    Science.gov (United States)

    The RANKED_OAS are all the Conservation Opportunity Areas identified by MoRAP that have subsequently been ranked by patch size, landform representation, and the targeted land cover class (highest rank for conservation management = 1 [LFRANK_NOR]). The OAs designate areas with potential for forest or grassland conservation because they are areas of natural or semi-natural land cover that are at least 75 meters away from roads and away from patch edges. The OAs were modeled by creating distance grids using the National Land Cover Database and the Census Bureau's TIGER roads files.

  13. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
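The paper's central claim (a threshold set from estimated parameters yields a realized failure frequency above the nominal level) can be checked with a small Monte Carlo sketch; the lognormal setup, sample size, and nominal level below are illustrative assumptions, not the paper's exact calculation:

```python
# Monte Carlo sketch: thresholds set from estimated parameters give a
# realized failure frequency above the nominal level. Lognormal risk factor;
# sample size and nominal level are illustrative.
import math
import random
import statistics

random.seed(2)
MU, SIGMA, NOMINAL = 0.0, 1.0, 0.01
Z = 2.3263  # standard normal quantile for 1 - NOMINAL

def realized_failure_prob(n=20):
    """Estimate parameters from n log-losses, set a plug-in threshold,
    and return the true probability of exceeding that threshold."""
    sample = [random.gauss(MU, SIGMA) for _ in range(n)]
    mu_hat = statistics.fmean(sample)
    sigma_hat = statistics.stdev(sample)
    threshold = mu_hat + Z * sigma_hat   # intended exceedance prob = NOMINAL
    z_true = (threshold - MU) / SIGMA
    return 0.5 * (1.0 - math.erf(z_true / math.sqrt(2)))  # 1 - Phi(z_true)

avg = statistics.fmean(realized_failure_prob() for _ in range(5000))
# avg, the expected failure frequency, exceeds the nominal 1% level
```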

  14. UNIVERSITY RANKINGS BY COST OF LIVING ADJUSTED FACULTY COMPENSATION

    OpenAIRE

    Terrance Jalbert; Mercedes Jalbert; Karla Hayashi

    2010-01-01

    In this paper we rank 574 universities based on compensation paid to their faculty. The analysis examines universities both on a raw basis and on a cost of living adjusted basis. Rankings based on salary data and benefit data are presented. In addition rankings based on total compensation are presented. Separate rankings are provided for universities offering different degrees. The results indicate that rankings of universities based on raw and cost of living adjusted data are markedly differ...

  15. Advanced scoring method of eco-efficiency in European cities.

    Science.gov (United States)

    Moutinho, Victor; Madaleno, Mara; Robaina, Margarita; Villar, José

    2018-01-01

    This paper analyzes the performance of a set of selected German and French cities in terms of the relative behavior of their eco-efficiencies, computed as the ratio of their gross domestic product (GDP) over their CO₂ emissions. For this analysis, eco-efficiency scores of the selected cities are computed using the data envelopment analysis (DEA) technique, taking the eco-efficiencies as outputs, the inputs being the energy consumption, the population density, the labor productivity, the resource productivity, and the patents per inhabitant. Once the DEA results are analyzed, the Malmquist productivity indexes (MPI) are used to assess the time evolution of the technical efficiency, technological efficiency, and productivity of the cities over the window periods 2000 to 2005 and 2005 to 2008. Some of the main conclusions are that (1) most of the analyzed cities seem to have suboptimal scales, this being one of the causes of their inefficiency; (2) there is evidence that a high GDP over CO₂ emissions does not imply a high eco-efficiency score, meaning that DEA-like approaches are useful to complement more simplistic ranking procedures, pointing out potential inefficiencies at the input levels; (3) efficiencies performed worse during the period 2000-2005 than during the period 2005-2008, suggesting the possibility of corrective actions taken during or at the end of the first period but impacting only the second period, probably due to an increasing environmental awareness of policymakers and governors; and (4) the MPI analysis shows a positive technological evolution of all cities, in line with the general technological evolution of the reference cities, reflecting a generalized convergence of most cities to their technological frontier and therefore an evolution in the right direction.

  16. Determining the Most Important Factors Involved in Ranking Orthopaedic Sports Medicine Fellowship Applicants.

    Science.gov (United States)

    Baweja, Rishi; Kraeutler, Matthew J; Mulcahey, Mary K; McCarty, Eric C

    2017-11-01

    Orthopaedic surgery residencies and certain fellowships are becoming increasingly competitive. Several studies have identified important factors to be taken into account when selecting medical students for residency interviews. Similar information for selecting orthopaedic sports medicine fellows does not exist. To determine the most important factors that orthopaedic sports medicine fellowship program directors (PDs) take into account when ranking applicants. Cross-sectional study. A brief survey was distributed electronically to PDs of the 92 orthopaedic sports medicine fellowship programs that are accredited by the Accreditation Council for Graduate Medical Education (ACGME). Each PD was asked to rank, in order, the 5 most important factors taken into account when ranking applicants based on a total list of 13 factors: the interview, the applicant's residency program, letters of recommendation (LORs), personal connections made through the applicant, research experience, an applicant's geographical ties to the city/town of the fellowship program, United States Medical Licensing Examination (USMLE) scores, Orthopaedic In-Training Examination (OITE) scores, history of being a competitive athlete in college, extracurricular activities/hobbies, volunteer experience, interest in a career in academics, and publications/research/posters. Factors were scored from 1 to 5, with a score of 5 representing the most important factor and 1 representing the fifth-most important factor. Of the 92 PDs contacted, 57 (62%) responded. Thirty-four PDs (37%) listed the interview as the most important factor in ranking fellowship applicants (overall score, 233). LORs (overall score, 196), an applicant's residency program (overall score, 133), publications/research/posters (overall score, 115), and personal connections (overall score, 90) were reported as the second- through fifth-most important factors, respectively. According to orthopaedic sports medicine fellowship PDs, the

  17. Probability with applications and R

    CERN Document Server

    Dobrow, Robert P

    2013-01-01

    An introduction to probability at the undergraduate level Chance and randomness are encountered on a daily basis. Authored by a highly qualified professor in the field, Probability: With Applications and R delves into the theories and applications essential to obtaining a thorough understanding of probability. With real-life examples and thoughtful exercises from fields as diverse as biology, computer science, cryptology, ecology, public health, and sports, the book is accessible for a variety of readers. The book's emphasis on simulation through the use of the popular R software language c

  18. A philosophical essay on probabilities

    CERN Document Server

    Laplace, Marquis de

    1996-01-01

    A classic of science, this famous essay by "the Newton of France" introduces lay readers to the concepts and uses of probability theory. It is of especial interest today as an application of mathematical techniques to problems in social and biological sciences. Generally recognized as the founder of the modern phase of probability theory, Laplace here applies the principles and general results of his theory "to the most important questions of life, which are, in effect, for the most part, problems in probability." Thus, without the use of higher mathematics, he demonstrates the application

  19. Introduction to probability and measure

    CERN Document Server

    Parthasarathy, K R

    2005-01-01

    According to a remark attributed to Mark Kac 'Probability Theory is a measure theory with a soul'. This book with its choice of proofs, remarks, examples and exercises has been prepared taking both these aesthetic and practical aspects into account.

  20. Free probability and random matrices

    CERN Document Server

    Mingo, James A

    2017-01-01

    This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.

  1. Kriging for Simulation Metamodeling: Experimental Design, Reduced Rank Kriging, and Omni-Rank Kriging

    Science.gov (United States)

    Hosking, Michael Robert

    This dissertation improves an analyst's use of simulation by offering improvements in the utilization of kriging metamodels. There are three main contributions. First an analysis is performed of what comprises good experimental designs for practical (non-toy) problems when using a kriging metamodel. Second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, now referred to as reduced rank kriging. Third is the development of an extension of reduced rank kriging which solves an open question regarding the usage of reduced rank kriging in practice. This extension is called omni-rank kriging. Finally these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs. However, sequential designs require some sort of pilot design from which the sequential stage can be based. We seek to find good initial designs for these pilot studies, as well as designs which will be effective if there is no following sequential stage. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to the higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on using a reduced rank decomposition which artificially smoothes the kriging weights similar to a nugget effect. Our primary focus will be showing how the reduced rank decomposition propagates through kriging empirically. In addition, we show further evidence for our

  2. Considerations on a posteriori probability

    Directory of Open Access Journals (Sweden)

    Corrado Gini

    2015-06-01

    In this first paper of 1911, relating to the sex ratio at birth, Gini repurposed Laplace's succession rule in a Bayesian version. Gini's intuition consisted in assuming a Beta-type distribution for the prior probability and introducing the "method of results" (direct and indirect) for the determination of prior probabilities according to the statistical frequency obtained from statistical data.

  3. DECOFF Probabilities of Failed Operations

    DEFF Research Database (Denmark)

    Gintautas, Tomas

    A statistical procedure for estimating Probabilities of Failed Operations is described and exemplified using ECMWF weather forecasts and SIMO output from Rotor Lift test case models. The influence of the safety factor is also investigated. The DECOFF statistical method is benchmarked against the standard Alpha-factor method defined by (DNV, 2011) and model performance is evaluated. In addition, the effect of weather forecast uncertainty on the output Probabilities of Failure is analysed and reported.

  4. Pulling Rank: Military Rank Affects Hormone Levels and Fairness in an Allocation Experiment

    Directory of Open Access Journals (Sweden)

    Benjamin Siart

    2016-11-01

    Status within social hierarchies has great effects on the lives of socially organized mammals. Its effects on human behavior and related physiology, however, is relatively little studied. The present study investigated the impact of military rank on fairness and behavior in relation to salivary cortisol (C) and testosterone (T) levels in male soldiers. For this purpose 180 members of the Austrian Armed Forces belonging to two distinct rank groups participated in two variations of a computer-based guard duty allocation experiment. The rank groups were (1) warrant officers (high rank, HR) and (2) enlisted men (low rank, LR). One soldier from each rank group participated in every experiment. At the beginning of the experiment, one participant was assigned to start standing guard and the other participant at rest. The participant who started at rest could choose if and when to relieve his fellow soldier and therefore had control over the experiment. In order to trigger perception of unfair behavior, an additional experiment was conducted which was manipulated by the experimenter. In the manipulated version both soldiers started in the standing guard position and were never relieved, believing that their opponent was at rest, not relieving them. Our aim was to test whether unfair behavior causes a physiological reaction. Saliva samples for hormone analysis were collected at regular intervals throughout the experiment. We found that in the un-manipulated setup high-ranking soldiers spent less time standing guard than lower ranking individuals. Rank was a significant predictor for C but not for T levels during the experiment. C levels in the HR group were higher than in the LR group. C levels were also elevated in the manipulated experiment compared to the un-manipulated experiment, especially in LR. We assume that the elevated C levels in HR were caused by HR feeling their status challenged by the situation of having to negotiate with an individual of lower military

  5. Pulling Rank: Military Rank Affects Hormone Levels and Fairness in an Allocation Experiment.

    Science.gov (United States)

    Siart, Benjamin; Pflüger, Lena S; Wallner, Bernard

    2016-01-01

    Status within social hierarchies has great effects on the lives of socially organized mammals. Its effects on human behavior and related physiology, however, is relatively little studied. The present study investigated the impact of military rank on fairness and behavior in relation to salivary cortisol (C) and testosterone (T) levels in male soldiers. For this purpose 180 members of the Austrian Armed Forces belonging to two distinct rank groups participated in two variations of a computer-based guard duty allocation experiment. The rank groups were (1) warrant officers (high rank, HR) and (2) enlisted men (low rank, LR). One soldier from each rank group participated in every experiment. At the beginning of the experiment, one participant was assigned to start standing guard and the other participant at rest. The participant who started at rest could choose if and when to relieve his fellow soldier and therefore had control over the experiment. In order to trigger perception of unfair behavior, an additional experiment was conducted which was manipulated by the experimenter. In the manipulated version both soldiers started in the standing guard position and were never relieved, believing that their opponent was at rest, not relieving them. Our aim was to test whether unfair behavior causes a physiological reaction. Saliva samples for hormone analysis were collected at regular intervals throughout the experiment. We found that in the un-manipulated setup high-ranking soldiers spent less time standing guard than lower ranking individuals. Rank was a significant predictor for C but not for T levels during the experiment. C levels in the HR group were higher than in the LR group. C levels were also elevated in the manipulated experiment compared to the un-manipulated experiment, especially in LR. 
We assume that the elevated C levels in HR were caused by HR feeling their status challenged by the situation of having to negotiate with an individual of lower military rank

  6. Transition Probabilities of Gd I

    Science.gov (United States)

    Bilty, Katherine; Lawler, J. E.; Den Hartog, E. A.

    2011-01-01

    Rare earth transition probabilities are needed within the astrophysics community to determine rare earth abundances in stellar photospheres. The current work is part of an on-going study of rare earth element neutrals. Transition probabilities are determined by combining radiative lifetimes, measured using time-resolved laser-induced fluorescence on a slow atom beam, with branching fractions measured from high-resolution Fourier transform spectra. Neutral rare earth transition probabilities will be helpful in improving abundances in cool stars in which a significant fraction of rare earths are neutral. Transition probabilities are also needed for research and development in the lighting industry. Rare earths have rich spectra containing hundreds to thousands of transitions throughout the visible and near UV. This makes rare earths valuable additives in Metal Halide - High Intensity Discharge (MH-HID) lamps, giving them a pleasing white light with good color rendering. This poster presents the work done on neutral gadolinium. We will report radiative lifetimes for 135 levels and transition probabilities for upwards of 1500 lines of Gd I. The lifetimes are reported to ±5%, and the transition probabilities range from 5% for strong lines to 25% for weak lines. This work is supported by the National Science Foundation under grant CTS 0613277 and the National Science Foundation's REU program through NSF Award AST-1004881.

  7. Pharmacophore-based similarity scoring for DOCK.

    Science.gov (United States)

    Jiang, Lingling; Rizzo, Robert C

    2015-01-22

    Pharmacophore modeling incorporates geometric and chemical features of known inhibitors and/or targeted binding sites to rationally identify and design new drug leads. In this study, we have encoded a three-dimensional pharmacophore matching similarity (FMS) scoring function into the structure-based design program DOCK. Validation and characterization of the method are presented through pose reproduction, crossdocking, and enrichment studies. When used alone, FMS scoring dramatically improves pose reproduction success to 93.5% (∼20% increase) and reduces sampling failures to 3.7% (∼6% drop) compared to the standard energy score (SGE) across 1043 protein-ligand complexes. The combined FMS+SGE function further improves success to 98.3%. Crossdocking experiments using FMS and FMS+SGE scoring, for six diverse protein families, similarly showed improvements in success, provided proper pharmacophore references are employed. For enrichment, incorporating pharmacophores during sampling and scoring, in most cases, also yield improved outcomes when docking and rank-ordering libraries of known actives and decoys to 15 systems. Retrospective analyses of virtual screenings to three clinical drug targets (EGFR, IGF-1R, and HIVgp41) using X-ray structures of known inhibitors as pharmacophore references are also reported, including a customized FMS scoring protocol to bias on selected regions in the reference. Overall, the results and fundamental insights gained from this study should benefit the docking community in general, particularly researchers using the new FMS method to guide computational drug discovery with DOCK.

  8. Discrepancies between multicriteria decision analysis-based ranking and intuitive ranking for pharmaceutical benefit-risk profiles in a hypothetical setting.

    Science.gov (United States)

    Hoshikawa, K; Ono, S

    2017-02-01

    Multicriteria decision analysis (MCDA) has generally been considered a promising decision-making methodology for the assessment of drug benefit-risk profiles. There have been many discussions in both the public and private sectors on its feasibility and applicability, but it has not been employed in official decision-making. For the purpose of examining to what extent MCDA would reflect the first-hand, intuitive preferences of evaluators in practical pharmaceutical assessments, we conducted a questionnaire survey involving employees of pharmaceutical companies. Shown profiles of the efficacy and safety of four hypothetical drugs, each respondent was asked to rank them following the standard MCDA process and then to rank them intuitively (i.e. without applying any analytical framework). These two approaches resulted in substantially different ranking patterns from the same individuals, and the concordance rate was surprisingly low (17%). Although many respondents intuitively showed a preference for mild, balanced benefit-risk profiles over profiles with a conspicuous advantage in either risk or benefit, the ranking orders based on MCDA scores did not reflect this intuitive preference. The observed discrepancies between the rankings seemed to be primarily attributable to the structural characteristics of MCDA, which assumes that the evaluation of each benefit and risk component has a monotonic impact on the final scores. It would be difficult for MCDA to reflect commonly observed non-monotonic preferences for risk and benefit profiles. Possible drawbacks of MCDA should be further investigated prior to real-world application of its benefit-risk assessment. © 2016 John Wiley & Sons Ltd.
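The monotonic, linear-additive structure that the abstract points to can be made concrete with a minimal MCDA sketch: normalize each criterion, apply weights, and rank options by the weighted sum. The drug profiles and weights below are hypothetical:

```python
# Minimal linear-additive MCDA: min-max normalize each criterion, weight it,
# and rank options by the weighted sum. Hypothetical drug profiles.

def mcda_rank(options, weights):
    """options: dict name -> dict criterion -> value (higher is better)."""
    crits = list(weights)
    lo = {c: min(o[c] for o in options.values()) for c in crits}
    hi = {c: max(o[c] for o in options.values()) for c in crits}

    def score(o):
        return sum(weights[c] * (o[c] - lo[c]) / (hi[c] - lo[c])
                   for c in crits if hi[c] > lo[c])

    return sorted(options, key=lambda name: score(options[name]), reverse=True)

drugs = {
    "A": {"efficacy": 0.9, "safety": 0.2},  # conspicuous benefit, poor safety
    "B": {"efficacy": 0.5, "safety": 0.6},  # balanced profile
    "C": {"efficacy": 0.1, "safety": 0.9},  # conspicuous safety, poor benefit
}
ranking = mcda_rank(drugs, {"efficacy": 0.5, "safety": 0.5})
```

Because the score is a weighted sum, improving any single criterion always raises the total; this monotonicity is exactly the structural property the abstract identifies as hard to reconcile with non-monotonic intuitive preferences.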

  9. Right tail increasing dependence between scores

    Science.gov (United States)

    Fernández, M.; García, Jesús E.; González-López, V. A.; Romano, N.

    2017-07-01

    In this paper we investigate the behavior of the conditional probability Prob(U > u|V > v) for two scores recorded from students of an undergraduate course, where U is the score in calculus I, scaled to [0, 1], and V is the score in physics, scaled to [0, 1]; the physics subject is part of the university's admission test. For purposes of comparison, we consider two different undergraduate courses, electrical engineering and mechanical engineering, over nine years, from 2003 to 2011. From a Bayesian perspective we estimate Prob(U > u|V > v) year by year and course by course. We conclude that U is right tail increasing in V, in both courses and for all the years. Moreover, over these nine years, we observe different ranges of variability for the estimated probabilities of electrical engineering when compared to those of mechanical engineering.
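    The conditional tail probability above can be estimated directly from paired scores. A minimal sketch (the paper uses a Bayesian estimator; this frequentist plug-in version, with synthetic scores, is only illustrative):

```python
import numpy as np

def cond_tail_prob(u_scores, v_scores, u, v):
    """Empirical estimate of Prob(U > u | V > v) from paired scores in [0, 1]."""
    u_scores = np.asarray(u_scores, dtype=float)
    v_scores = np.asarray(v_scores, dtype=float)
    given = v_scores > v                 # records satisfying the conditioning event
    if not given.any():
        return float("nan")              # conditioning event not observed
    return float(np.mean(u_scores[given] > u))

# Synthetic paired scores with positive dependence (purely illustrative).
rng = np.random.default_rng(0)
v_scores = rng.uniform(size=1000)
u_scores = np.clip(0.7 * v_scores + 0.3 * rng.uniform(size=1000), 0.0, 1.0)
```

    For right-tail-increasing data, the estimate should not decrease as the conditioning threshold v grows.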

  10. Social Bookmarking Induced Active Page Ranking

    Science.gov (United States)

    Takahashi, Tsubasa; Kitagawa, Hiroyuki; Watanabe, Keita

    Social bookmarking services have recently made it possible for us to register and share our own bookmarks on the web and are attracting attention. The services provide structured data, (URL, Username, Timestamp, Tag Set), and these data represent user interest in web pages. The number of bookmarks is a barometer of web page value. Some web pages have many bookmarks, but most of those bookmarks may have been posted far in the past. Therefore, even if a web page has many bookmarks, its value is not guaranteed. If most of the bookmarks are very old, the page may be obsolete. In this paper, by focusing on the timestamp sequence of social bookmarkings on web pages, we model their activation levels, representing their current values. Further, we improve our previously proposed ranking method for web search by introducing the activation level concept. Finally, through experiments, we show the effectiveness of the proposed ranking method.
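    One simple way to turn a timestamp sequence into an activation level is to decay each bookmark's contribution over time, so many recent bookmarks outscore the same number of old ones. The exponential-decay form and the half-life parameter below are illustrative assumptions, not the paper's actual model:

```python
import math
import time

def activation_level(timestamps, now=None, half_life_days=30.0):
    """Hypothetical activation level of a page from its bookmark timestamps
    (in seconds). Each bookmark contributes a weight that halves every
    half_life_days, so recency matters as well as count."""
    now = time.time() if now is None else now
    lam = math.log(2.0) / (half_life_days * 86400.0)  # decay rate from half-life
    return sum(math.exp(-lam * (now - t)) for t in timestamps)
```

    A page bookmarked twice today then scores higher than one bookmarked twice several months ago, matching the paper's intuition that old bookmarks do not guarantee current value.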

  11. Regression Estimator Using Double Ranked Set Sampling

    Directory of Open Access Journals (Sweden)

    Hani M. Samawi

    2002-06-01

    Full Text Available The performance of a regression estimator based on the double ranked set sampling (DRSS) scheme, introduced by Al-Saleh and Al-Kadiri (2000), is investigated when the mean of the auxiliary variable X is unknown. Our primary analysis and simulation indicate that using the DRSS regression estimator for estimating the population mean substantially increases relative efficiency compared to the regression estimator based on simple random sampling (SRS) or ranked set sampling (RSS) (Yu and Lam, 1997). Moreover, the regression estimator using DRSS is also more efficient than the naïve estimators of the population mean using SRS, RSS (when the correlation coefficient is at least 0.4), and DRSS (when the correlation coefficient is at least 0.91). The theory is illustrated using a real data set of trees.
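    The regression estimator itself has a simple closed form: the sample mean of Y corrected by the estimated slope times the gap between the known and observed means of X. A sketch of the classical simple-random-sampling version (the DRSS estimator in the paper applies the same formula to a double ranked set sample rather than an SRS draw):

```python
import numpy as np

def regression_estimator(y, x, mu_x):
    """Regression estimator of the population mean of Y using an auxiliary
    variable X whose population mean mu_x is known (SRS form)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # least-squares slope
    return float(y.mean() + b * (mu_x - x.mean()))
```

    When Y is strongly correlated with X, the correction term removes most of the sampling error in the plain mean of Y, which is the source of the efficiency gains the abstract reports.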

  12. Low-rank quadratic semidefinite programming

    KAUST Repository

    Yuan, Ganzhao

    2013-04-01

    Low rank matrix approximation is an attractive model in large scale machine learning problems, because it can not only reduce the memory and runtime complexity, but also provide a natural way to regularize parameters while preserving learning accuracy. In this paper, we address a special class of nonconvex quadratic matrix optimization problems, which require a low rank positive semidefinite solution. Despite their non-convexity, we exploit the structure of these problems to derive an efficient solver that converges to their local optima. Furthermore, we show that the proposed solution is capable of dramatically enhancing the efficiency and scalability of a variety of concrete problems, which are of significant interest to the machine learning community. These problems include the Top-k Eigenvalue problem, Distance learning and Kernel learning. Extensive experiments on UCI benchmarks have shown the effectiveness and efficiency of our proposed method. © 2012.
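    For intuition about the problem class, the textbook baseline for a low-rank positive semidefinite approximation is eigenvalue truncation: project the spectrum onto the nonnegative cone and keep the k largest eigenvalues. This sketch is that baseline, not the paper's solver, which is designed to avoid full eigendecompositions at scale:

```python
import numpy as np

def low_rank_psd_approx(A, k):
    """Best (Frobenius-norm) rank-k positive semidefinite approximation of a
    symmetric matrix via truncated eigendecomposition."""
    w, V = np.linalg.eigh((A + A.T) / 2.0)   # symmetrize, then eigendecompose
    w = np.clip(w, 0.0, None)                # project eigenvalues onto the PSD cone
    idx = np.argsort(w)[::-1][:k]            # indices of the k largest eigenvalues
    return (V[:, idx] * w[idx]) @ V[:, idx].T
```

    A matrix that is already PSD of rank k is recovered exactly, which makes the routine easy to sanity-check.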

  13. Classification of rank 2 cluster varieties

    DEFF Research Database (Denmark)

    Mandel, Travis

    We classify rank 2 cluster varieties (those whose corresponding skew-form has rank 2) according to the deformation type of a generic fiber U of their X-spaces, as defined by Fock and Goncharov. Our approach is based on the work of Gross, Hacking, and Keel for cluster varieties and log Calabi-Yau surfaces. We find, for example, that U is "positive" (i.e., nearly affine) and either finite-type or non-acyclic (in the cluster sense) if and only if the monodromy of the tropicalization of U is one of Kodaira's matrices for the monodromy of an elliptic fibration. In the positive cases, we also describe the action of the cluster modular group on the tropicalization of U.

  14. Deep Impact: Unintended consequences of journal rank

    CERN Document Server

    Brembs, Björn

    2013-01-01

    Much has been said about the increasing bureaucracy in science, stifling innovation, hampering the creativity of researchers and incentivizing misconduct, even outright fraud. Many anecdotes have been recounted, observations described and conclusions drawn about the negative impact of impact assessment on scientists and science. However, few of these accounts have drawn their conclusions from data, and those that have typically relied on a few studies. In this review, we present the most recent and pertinent data on the consequences that our current scholarly communication system has had on various measures of scientific quality (such as utility/citations, methodological soundness, expert ratings and retractions). These data confirm previous suspicions: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently-favored Impact Factor) would have this negative impact. Therefore, we suggest that abandoning journals altoge...

  15. Probabilistic Low-Rank Multitask Learning.

    Science.gov (United States)

    Kong, Yu; Shao, Ming; Li, Kang; Fu, Yun

    2017-01-04

    In this paper, we consider the problem of learning multiple related tasks simultaneously with the goal of improving the generalization performance of individual tasks. The key challenge is to effectively exploit the shared information across multiple tasks as well as preserve the discriminative information for each individual task. To address this, we propose a novel probabilistic model for multitask learning (MTL) that can automatically balance between low-rank and sparsity constraints. The former assumes a low-rank structure of the underlying predictive hypothesis space to explicitly capture the relationship of different tasks and the latter learns the incoherent sparse patterns private to each task. We derive and perform inference via variational Bayesian methods. Experimental results on both regression and classification tasks on real-world applications demonstrate the effectiveness of the proposed method in dealing with the MTL problems.

  16. Score Region Algebra: Building a Transparent XML-IR Database

    NARCIS (Netherlands)

    Mihajlovic, V.; Blok, H.E.; Hiemstra, Djoerd; Apers, Peter M.G.; Chowdhury, A.; Fuhr, N.; Ronthaler, M.; Schek, H-J.; Teiken, W.

    2005-01-01

    A unified database framework that will enable better comprehension of ranked XML retrieval is still a challenge in the XML database field. We propose a logical algebra, named score region algebra, that enables transparent specification of information retrieval (IR) models for XML databases. The

  17. Multiattribute utility scores for predicting family physicians' decisions regarding sinusitis

    NARCIS (Netherlands)

    de Bock, GH; Reijneveld, SA; van Houwelingen, JC; Knottnerus, JA; Kievit, J

    1999-01-01

    To examine whether multiattribute utility (MAU) scores can be used to predict family physicians' decisions regarding patients suspected to have sinusitis and rhinitis, 100 randomly selected family physicians from the Leiden area (The Netherlands) were asked to rank a set of six attributes regarding

  18. Ranking agility factors affecting hospitals in Iran

    OpenAIRE

    M. Abdi Talarposht; GH. Mahmodi; MA. Jahani

    2017-01-01

    Background: Agility is an effective response to a changing and unpredictable environment, using these changes as opportunities for organizational improvement. Objective: The aim of the present study was to rank the factors affecting the agile supply chain of hospitals in Iran. Methods: This applied, cross-sectional descriptive study was conducted over one year in 2015. The research population included managers, administrators, faculty members and experts were sele...

  19. Ranking images based on aesthetic qualities.

    OpenAIRE

    Gaur, Aarushi

    2015-01-01

    The qualitative assessment of image content and aesthetic impression is affected by various image attributes and relations between the attributes. Modelling of such assessments in the form of objective rankings and learning image representations based on them is not a straightforward problem. The criteria can be varied with different levels of complexity for various applications. A highly-complex problem could involve a large number of interrelated attributes and features alongside varied rul...

  20. Homological characterisation of Lambda-ranks

    OpenAIRE

    Howson, Susan

    1999-01-01

    If G is a pro-p p-adic Lie group and if $\Lambda(G)$ denotes the Iwasawa algebra of G, then we present a formula for determining the $\Lambda(G)$-rank of a finitely generated $\Lambda(G)$-module. This is given in terms of the G-homology groups of the module. We explore some consequences of this for the structure of $\Lambda(G)$-modules.

  1. Probably not future prediction using probability and statistical inference

    CERN Document Server

    Dworsky, Lawrence N

    2008-01-01

    An engaging, entertaining, and informative introduction to probability and prediction in our everyday lives Although Probably Not deals with probability and statistics, it is not heavily mathematical and is not filled with complex derivations, proofs, and theoretical problem sets. This book unveils the world of statistics through questions such as what is known based upon the information at hand and what can be expected to happen. While learning essential concepts including "the confidence factor" and "random walks," readers will be entertained and intrigued as they move from chapter to chapter. Moreover, the author provides a foundation of basic principles to guide decision making in almost all facets of life including playing games, developing winning business strategies, and managing personal finances. Much of the book is organized around easy-to-follow examples that address common, everyday issues such as: How travel time is affected by congestion, driving speed, and traffic lights Why different gambling ...

  2. Citation ranking versus peer evaluation of senior faculty research performance

    DEFF Research Database (Denmark)

    Meho, Lokman I.; Sonnenwald, Diane H.

    2000-01-01

    The purpose of this study is to analyze the relationship between citation ranking and peer evaluation in assessing senior faculty research performance. Other studies typically derive their peer evaluation data directly from referees, often in the form of ranking. This study uses two additional ... indicator of research performance of senior faculty members? Citation data, book reviews, and peer ranking were compiled and examined for faculty members specializing in Kurdish studies. Analysis shows that normalized citation ranking and citation content analysis data yield identical ranking results. Analysis also shows that normalized citation ranking and citation content analysis, book reviews, and peer ranking perform similarly (i.e., are highly correlated) for high-ranked and low-ranked senior scholars. Additional evaluation methods and measures that take into account the context and content ...

  3. Higher-rank fields and currents

    Energy Technology Data Exchange (ETDEWEB)

    Gelfond, O.A. [Institute of System Research of Russian Academy of Sciences,Nakhimovsky prospect 36-1, 117218, Moscow (Russian Federation); I.E.Tamm Department of Theoretical Physics, Lebedev Physical Institute,Leninsky prospect 53, 119991, Moscow (Russian Federation); Vasiliev, M.A. [I.E.Tamm Department of Theoretical Physics, Lebedev Physical Institute,Leninsky prospect 53, 119991, Moscow (Russian Federation)

    2016-10-13

    Sp(2M) invariant field equations in the space M_M with symmetric matrix coordinates are classified. Analogous results are obtained for Minkowski-like subspaces of M_M, which include the usual 4d Minkowski space as a particular case. The constructed equations are associated with the tensor products of the Fock (singleton) representation of Sp(2M) of any rank r. The infinite set of higher-spin conserved currents multilinear in rank-one fields in M_M is found. The associated conserved charges are supported by (rM − r(r−1)/2)-dimensional differential forms in M_M, which are closed by virtue of the rank-2r field equations. The cohomology groups H^p(σ_−^r) for all p and r, which determine the form of appropriate gauge fields and their field equations, are found both for M_M and for its Minkowski-like subspace.

  4. Association between Metabolic Syndrome and Job Rank.

    Science.gov (United States)

    Mehrdad, Ramin; Pouryaghoub, Gholamreza; Moradi, Mahboubeh

    2018-01-01

    A person's occupation can influence the development of metabolic syndrome. To determine the association of metabolic syndrome and its determinants with job rank in workers of a large car factory in Iran. 3989 male workers at a large car manufacturing company were invited to participate in this cross-sectional study. Demographic and anthropometric data of the participants, including age, height, weight, and abdominal circumference, were measured. Blood samples were taken to measure lipid profile and blood glucose level. Metabolic syndrome was diagnosed in each participant based on ATPIII 2001 criteria. The workers were categorized based on their job rank into 3 groups of (1) office workers, (2) workers with physical exertion, and (3) workers with chemical exposure. The study characteristics, particularly the frequency of metabolic syndrome and its determinants, were compared among the study groups. The prevalence of metabolic syndrome in our study was 7.7% (95% CI 6.9 to 8.5). HDL levels were significantly lower in those who had chemical exposure (p=0.045). Diastolic blood pressure was significantly higher in those who had mechanical exertion (p=0.026). The frequency of metabolic syndrome in the office workers, workers with physical exertion, and workers with chemical exposure was 7.3%, 7.9%, and 7.8%, respectively (p=0.836). Seemingly, there is no association between metabolic syndrome and job rank.

  5. [Ranke and modern surgery in Groningen].

    Science.gov (United States)

    van Gijn, Jan; Gijselhart, Joost P

    2012-01-01

    Hans Rudolph Ranke (1849-1887) studied medicine in Halle, located in the eastern part of Germany, where he also trained as a surgeon under Richard von Volkmann (1830-1889), during which time he became familiar with the new antiseptic technique that had been introduced by Joseph Lister (1827-1912). In 1878 he was appointed head of the department of surgery in Groningen, the Netherlands, where his predecessor had been chronically indisposed and developments were flagging. Within a few months, Ranke had introduced disinfection by using carbolic acid both before and during operations. For the disinfection of wound dressings, he replaced carbolic acid with thymol as this was less pungent and foul-smelling. The rate of postoperative infections dropped to a minimum despite the inadequate housing and living conditions of the patients with infectious diseases. In 1887, at the age of 37, Ranke died after a brief illness - possibly glomerulonephritis - only eight years after he had assumed office. A street in the city of Groningen near its present-day University Medical Centre has been named after him.

  6. Ranking agility factors affecting hospitals in Iran

    Directory of Open Access Journals (Sweden)

    M. Abdi Talarposht

    2017-04-01

    Full Text Available Background: Agility is an effective response to a changing and unpredictable environment, using these changes as opportunities for organizational improvement. Objective: The aim of the present study was to rank the factors affecting the agile supply chain of hospitals in Iran. Methods: This applied, cross-sectional descriptive study was conducted over one year in 2015. The research population included managers, administrators, faculty members and experts of selected hospitals. A total of 260 people were selected as the sample from the health centers. The construct validity of the questionnaire was approved by a confirmatory factor analysis test and its reliability was approved by Cronbach's alpha (α=0.97). All data were analyzed by Kolmogorov-Smirnov, Chi-square and Friedman tests. Findings: The development of staff skills, the use of information technology, the integration of processes, appropriate planning, and customer satisfaction and product quality had a significant impact on the agility of public hospitals of Iran (P<0.001). New product introduction earned the highest ranking and the development of staff skills earned the lowest ranking. Conclusion: New product introduction, market responsiveness and sensitivity, cost reduction, and the integration of organizational processes earned the best rankings for hospital agility in Iran. Therefore, hospital planners and officials should promote the quality and variety of customer-oriented services and provide a basis for investment in hospitals, in order to achieve an agile supply chain in Iran's public hospitals.

  7. An Automated Approach for Ranking Journals to Help in Clinician Decision Support

    Science.gov (United States)

    Jonnalagadda, Siddhartha R.; Moosavinasab, Soheil; Nath, Chinmoy; Li, Dingcheng; Chute, Christopher G.; Liu, Hongfang

    2014-01-01

    Point of care access to knowledge from full text journal articles supports decision-making and decreases medical errors. However, it is an overwhelming task to search through full text journal articles and find quality information needed by clinicians. We developed a method to rate journals for a given clinical topic, Congestive Heart Failure (CHF). Our method enables filtering of journals and ranking of journal articles based on source journal in relation to CHF. We also obtained a journal priority score, which automatically rates any journal based on its importance to CHF. Comparing our ranking with data gathered by surveying 169 cardiologists, who publish on CHF, our best Multiple Linear Regression model showed a correlation of 0.880, based on five-fold cross validation. Our ranking system can be extended to other clinical topics. PMID:25954382

  8. Methods for evaluating and ranking transportation energy conservation programs. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1981-04-30

    Methods for comparative evaluations of the Office of Transportation programs designed to help achieve significant reductions in the consumption of petroleum by different forms of transportation, while maintaining public, commercial, and industrial mobility, are described. Assessments of the programs in terms of petroleum savings, incremental costs to consumers of the technologies and activities, probability of technical and market success, and external impacts due to environmental, economic, and social factors are inputs to the evaluation methodologies presented. The methods described for evaluating the programs on a comparative basis are three ranking functions and a policy matrix listing important attributes of the programs and of the technologies and activities with which they are concerned. The first ranking function is the traditional net present value measure, which computes the present worth of petroleum savings less the present worth of costs. This is modified by dividing by the present value of DOE funding to obtain a net present value per program dollar, which is the second ranking function. The third ranking function is broader in that it takes external impacts into account and is known as the comprehensive ranking function. Procedures are described for making computations of the ranking functions and of the attributes that require computation. Computations are made for the electric vehicle, Stirling engine, gas turbine, and MPG mileage guide programs. (MCW)
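    The first two ranking functions reduce to ordinary discounted-cash-flow arithmetic. A sketch with hypothetical inputs (cash-flow streams, discount rate, and funding figures are all made up for illustration):

```python
def present_value(cash_flows, rate):
    """Present worth of a stream of annual amounts occurring in years 1, 2, ..."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def npv_rankings(savings, costs, funding, rate=0.1):
    """Two of the report's ranking functions:
    (1) net present value = PV(petroleum savings) - PV(costs);
    (2) NPV per program dollar = NPV / PV(DOE funding)."""
    npv = present_value(savings, rate) - present_value(costs, rate)
    return npv, npv / present_value(funding, rate)
```

    Programs can then be sorted by either quantity; the comprehensive ranking function additionally folds in external impacts, which have no closed form here.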

  9. Exact p-values for pairwise comparison of Friedman rank sums, with application to comparing classifiers.

    Science.gov (United States)

    Eisinga, Rob; Heskes, Tom; Pelzer, Ben; Te Grotenhuis, Manfred

    2017-01-25

    The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs worse than exact computation overall, others, particularly the normal, perform well, except in the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip .
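    For context, the large-sample normal approximation that the paper's exact method replaces is easy to state: under the null, the rank sum difference R_i − R_j has standard deviation sqrt(nk(k+1)/6) for n blocks and k treatments. A minimal sketch (no tie handling; this is the approximation, not the paper's exact computation):

```python
import math
import numpy as np

def friedman_pairwise_p(data, i, j):
    """Normal-approximation p-value for the difference of the rank sums of
    treatments i and j in a Friedman layout (rows are blocks, e.g. datasets;
    columns are treatments, e.g. classifiers). Assumes no ties within a block."""
    data = np.asarray(data, dtype=float)
    ranks = np.argsort(np.argsort(data, axis=1), axis=1) + 1  # within-block ranks
    R = ranks.sum(axis=0)                                     # rank sum per treatment
    n, k = ranks.shape
    sd = math.sqrt(n * k * (k + 1) / 6)                       # null sd of R_i - R_j
    z = abs(R[i] - R[j]) / sd
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
```

    The paper's point is that in the extreme tail, where multiple-testing-corrected decisions are made, this approximation can be off, and the exact distribution should be used instead.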

  10. QMEANclust: estimation of protein model quality by combining a composite scoring function with structural density information

    Directory of Open Access Journals (Sweden)

    Schwede Torsten

    2009-05-01

    Full Text Available Abstract Background The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Results Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient of 0.9 between predicted quality score and GDT_TS, averaged over the 98 CASP7 targets, and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins, each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations.
We also present a local version of QMEAN for the per

  11. Do efficiency scores depend on input mix?

    DEFF Research Database (Denmark)

    Asmild, Mette; Hougaard, Jens Leth; Kronborg, Dorte

    2013-01-01

    In this paper we examine the possibility of using the standard Kruskal-Wallis (KW) rank test in order to evaluate whether the distribution of efficiency scores resulting from Data Envelopment Analysis (DEA) is independent of the input (or output) mix of the observations. Since the DEA frontier is estimated, many standard assumptions for evaluating the KW test statistic are violated. Therefore, we propose to explore its statistical properties by the use of simulation studies. The simulations are performed conditional on the observed input mixes. The method, unlike existing approaches...

  12. Scoring functions and enrichment: a case study on Hsp90

    Directory of Open Access Journals (Sweden)

    Mitchell John BO

    2007-01-01

    Full Text Available Abstract Background The need for fast and accurate scoring functions has been driven by the increased use of in silico virtual screening, twinned with high-throughput screening, as a method to rapidly identify potential candidates in the early stages of drug development. We examine the ability of some of the most common scoring functions (GOLD, ChemScore, DOCK, PMF, BLEEP and Consensus) to discriminate correctly and efficiently between active and non-active compounds among a library of ~3,600 diverse decoy compounds in a virtual screening experiment against heat shock protein 90 (Hsp90). Results Firstly, we investigated two ranking methodologies, GOLDrank and BestScorerank. GOLDrank is based on ranks generated using GOLD. The various scoring functions, GOLD, ChemScore, DOCK, PMF, BLEEP and Consensus, are applied to the pose ranked number one by GOLD for that ligand. BestScorerank uses multiple poses for each ligand and independently chooses the best ranked pose of the ligand according to each different scoring function. Secondly, we considered the effect of introducing the Thr184 hydrogen bond tether to guide the docking process towards a particular solution, and its effect on enrichment. Thirdly, we considered normalisation to account for the known bias of scoring functions to select larger molecules. All the scoring functions gave fairly similar enrichments, with the exception of PMF, which was consistently the poorest performer. In most cases, GOLD was marginally the best performing individual function; the Consensus score usually performed similarly to the best single scoring function. Our best results were obtained using the Thr184 tether in combination with the BestScorerank protocol and normalisation for molecular weight. For that particular combination, DOCK was the best individual function; DOCK recovered 90% of the actives in the top 10% of the ranked list; Consensus similarly recovered 89% of the actives in its top 10%.
Conclusion Overall, we

  13. Ranking Fuzzy Numbers and Its Application to Products Attributes Preferences

    OpenAIRE

    Lazim Abdullah; Nor Nashrah Ahmad Fauzee

    2011-01-01

    Ranking is one of the widely used methods in fuzzy decision-making environments. The recent method for ranking fuzzy numbers proposed by Wang and Li is claimed to be an improved version of earlier rankings. However, the method has never been simplified and tested in a real-life application. This paper presents a four-step computation for ranking fuzzy numbers and its application in ranking attributes of selected chocolate products. The four-step algorithm was formulated to rank fuzzy numbers and followed by a tes...

  14. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot-based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
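    The idea of bands around a normal probability plot can be illustrated with a Monte Carlo envelope: simulate many sorted standard-normal samples and take per-position quantiles. Note the paper derives intervals with exact simultaneous 1-α coverage; this simulation-based envelope is only a rough stand-in for illustration:

```python
import numpy as np

def qq_envelope(n, n_sim=2000, alpha=0.05, seed=1):
    """Monte Carlo envelope for a normal probability plot of a sample of size n:
    per-position (pointwise) quantiles of sorted standard-normal samples."""
    rng = np.random.default_rng(seed)
    sims = np.sort(rng.standard_normal((n_sim, n)), axis=1)   # sorted simulated samples
    lower = np.quantile(sims, alpha / 2, axis=0)
    upper = np.quantile(sims, 1 - alpha / 2, axis=0)
    return lower, upper
```

    A standardized, sorted sample whose points all fall between lower and upper is consistent with normality under this rough pointwise criterion; simultaneous coverage requires the paper's calibrated intervals.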

  15. SCORE - A DESCRIPTION.

    Science.gov (United States)

    SLACK, CHARLES W.

    REINFORCEMENT AND ROLE-REVERSAL TECHNIQUES ARE USED IN THE SCORE PROJECT, A LOW-COST PROGRAM OF DELINQUENCY PREVENTION FOR HARD-CORE TEENAGE STREET CORNER BOYS. COMMITTED TO THE BELIEF THAT THE BOYS HAVE THE POTENTIAL FOR ETHICAL BEHAVIOR, THE SCORE WORKER FOLLOWS B.F. SKINNER'S THEORY OF OPERANT CONDITIONING AND REINFORCES THE DELINQUENT'S GOOD…

  16. Continuation of probability density functions using a generalized Lyapunov approach

    Energy Technology Data Exchange (ETDEWEB)

    Baars, S., E-mail: s.baars@rug.nl [Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen (Netherlands); Viebahn, J.P., E-mail: viebahn@cwi.nl [Centrum Wiskunde & Informatica (CWI), P.O. Box 94079, 1090 GB, Amsterdam (Netherlands); Mulder, T.E., E-mail: t.e.mulder@uu.nl [Institute for Marine and Atmospheric research Utrecht, Department of Physics and Astronomy, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); Kuehn, C., E-mail: ckuehn@ma.tum.de [Technical University of Munich, Faculty of Mathematics, Boltzmannstr. 3, 85748 Garching bei München (Germany); Wubs, F.W., E-mail: f.w.wubs@rug.nl [Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen (Netherlands); Dijkstra, H.A., E-mail: h.a.dijkstra@uu.nl [Institute for Marine and Atmospheric research Utrecht, Department of Physics and Astronomy, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); School of Chemical and Biomolecular Engineering, Cornell University, Ithaca, NY (United States)

    2017-05-01

    Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e., the occurrence of multiple steady states of the Atlantic Ocean circulation.
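    The Lyapunov solve at the heart of the method can be illustrated at toy scale. The sketch below solves the standard Lyapunov equation A X + X Aᵀ + Q = 0 by dense vectorization, which is the expensive baseline that low-rank iterative methods like the paper's are designed to avoid; all names are illustrative, and this is not the authors' algorithm.

    ```python
    def kron_sum_solve(A, Q):
        """Solve the Lyapunov equation A X + X A^T + Q = 0 for symmetric Q by
        vectorizing: (I ⊗ A + A ⊗ I) vec(X) = -vec(Q), column-stacked vec.
        Dense and O(n^6): a toy stand-in for a low-rank iterative solver."""
        n = len(A)
        N = n * n
        M = [[0.0] * N for _ in range(N)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    # (I ⊗ A): block-diagonal copies of A
                    M[j * n + i][j * n + k] += A[i][k]
                    # (A ⊗ I): A[j][k] times identity blocks
                    M[j * n + i][k * n + i] += A[j][k]
        b = [-Q[i][j] for j in range(n) for i in range(n)]  # -vec(Q)
        x = gauss_solve(M, b)
        return [[x[j * n + i] for j in range(n)] for i in range(n)]

    def gauss_solve(M, b):
        """Gaussian elimination with partial pivoting (works on copies)."""
        n = len(b)
        M = [row[:] for row in M]
        b = b[:]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            b[c], b[p] = b[p], b[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                b[r] -= f * b[c]
                for k in range(c, n):
                    M[r][k] -= f * M[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            s = sum(M[r][k] * x[k] for k in range(r + 1, n))
            x[r] = (b[r] - s) / M[r][r]
        return x
    ```

    For a stable A, X is the steady-state covariance of the linearized stochastic system, which is exactly the small-noise probability density information the continuation tracks.
    
    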

  17. Conditions for rank reversal in supplier selection

    NARCIS (Netherlands)

    Telgen, Jan; Timmer, Judith B.

    Supplier selection is the process of selecting the best bid among a number of bids submitted by different suppliers. The bids are all evaluated against a set of predetermined criteria. If there are more criteria, the bids receive a score on each criterion. The bid with the best total score over all…

  18. The Apgar Score.

    Science.gov (United States)

    2015-10-01

    The Apgar score provides an accepted and convenient method for reporting the status of the newborn infant immediately after birth and the response to resuscitation if needed. The Apgar score alone cannot be considered as evidence of, or a consequence of, asphyxia; does not predict individual neonatal mortality or neurologic outcome; and should not be used for that purpose. An Apgar score assigned during resuscitation is not equivalent to a score assigned to a spontaneously breathing infant. The American Academy of Pediatrics and the American College of Obstetricians and Gynecologists encourage use of an expanded Apgar score reporting form that accounts for concurrent resuscitative interventions. Copyright © 2015 by the American Academy of Pediatrics.

  19. Probability theory a foundational course

    CERN Document Server

    Pakshirajan, R P

    2013-01-01

    This book shares the dictum of J. L. Doob in treating Probability Theory as a branch of Measure Theory and establishes this relation early. Probability measures in product spaces are introduced right at the start by way of laying the ground work to later claim the existence of stochastic processes with prescribed finite dimensional distributions. Other topics analysed in the book include supports of probability measures, zero-one laws in product measure spaces, Erdos-Kac invariance principle, functional central limit theorem and functional law of the iterated logarithm for independent variables, Skorohod embedding, and the use of analytic functions of a complex variable in the study of geometric ergodicity in Markov chains. This book is offered as a text book for students pursuing graduate programs in Mathematics and or Statistics. The book aims to help the teacher present the theory with ease, and to help the student sustain his interest and joy in learning the subject.

  20. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  1. VIBRATION ISOLATION SYSTEM PROBABILITY ANALYSIS

    Directory of Open Access Journals (Sweden)

    Smirnov Vladimir Alexandrovich

    2012-10-01

    The article deals with the probability analysis of a vibration isolation system for high-precision equipment, which is extremely sensitive to low-frequency oscillations even of submicron amplitude. External sources of low-frequency vibration may include the natural city background or internal low-frequency sources inside buildings (pedestrian activity, HVAC). Assuming a Gaussian distribution, the author estimates the probability that the relative displacement of the isolated mass remains below the vibration criterion. The problem is solved in the three-dimensional space spanned by the system parameters, including damping and natural frequency. From this probability distribution, the chance that a vibration isolation system exceeds the vibration criterion is evaluated. Optimal system parameters, damping and natural frequency, are then derived so that the probability of exceeding vibration criteria VC-E and VC-D is less than 0.04.
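    The core Gaussian computation described above amounts to a normal tail probability. A minimal sketch follows; the standard deviation and criterion values used in the test are illustrative, not taken from the article.

    ```python
    from statistics import NormalDist

    def exceed_probability(sigma, criterion):
        """Probability that a zero-mean Gaussian displacement with standard
        deviation sigma exceeds |criterion|: P(|x| > c) = 2 * (1 - Phi(c/sigma))."""
        phi = NormalDist(0.0, sigma).cdf(criterion)
        return 2.0 * (1.0 - phi)
    ```

    Sweeping damping and natural frequency simply changes sigma, so the design problem reduces to keeping this exceedance below the chosen threshold (0.04 in the abstract).
    
    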

  2. Probability on real Lie algebras

    CERN Document Server

    Franz, Uwe

    2016-01-01

    This monograph is a progressive introduction to non-commutativity in probability theory, summarizing and synthesizing recent results about classical and quantum stochastic processes on Lie algebras. In the early chapters, focus is placed on concrete examples of the links between algebraic relations and the moments of probability distributions. The subsequent chapters are more advanced and deal with Wigner densities for non-commutative couples of random variables, non-commutative stochastic processes with independent increments (quantum Lévy processes), and the quantum Malliavin calculus. This book will appeal to advanced undergraduate and graduate students interested in the relations between algebra, probability, and quantum theory. It also addresses a more advanced audience by covering other topics related to non-commutativity in stochastic calculus, Lévy processes, and the Malliavin calculus.

  3. Flood hazard probability mapping method

    Science.gov (United States)

    Kalantari, Zahra; Lyon, Steve; Folkeson, Lennart

    2015-04-01

    In Sweden, spatially explicit approaches have been applied in various disciplines such as landslide modelling based on soil type data and flood risk modelling for large rivers. Regarding flood mapping, most previous studies have focused on complex hydrological modelling at a small scale, whereas just a few studies have used a robust GIS-based approach integrating most physical catchment descriptor (PCD) aspects at a larger scale. The aim of the present study was to develop methodology for predicting the spatial probability of flooding on a general large scale. Factors such as topography, land use, soil data and other PCDs were analysed in terms of their relative importance for flood generation. The specific objective was to test the methodology using statistical methods to identify factors having a significant role in controlling flooding. A second objective was to generate an index quantifying a flood probability value for each cell, based on different weighted factors, in order to provide a more accurate analysis of potential high flood hazards than can be obtained using just a single variable. The ability of indicator covariance to capture flooding probability was determined for different watersheds in central Sweden. Using data from this initial investigation, a method to extract spatial data for multiple catchments and to produce soft data for statistical analysis was developed. It allowed flood probability to be predicted from spatially sparse data without compromising the significant hydrological features of the landscape. By using PCD data, realistic representations of high-probability flood regions were made, regardless of the magnitude of rain events. This in turn allowed objective quantification of the probability of floods at the field scale for future model development and watershed management.
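    A per-cell weighted multi-factor index of the kind described can be sketched as follows. The factor names, weights, and min-max normalization are illustrative assumptions, not the study's actual scheme.

    ```python
    def flood_probability_index(cells, weights):
        """Weighted flood-probability index per grid cell.

        cells  : list of dicts mapping factor name -> raw value
        weights: dict mapping factor name -> relative importance
        Each factor is min-max normalized across cells before weighting,
        so the resulting index lies in [0, 1]."""
        lo = {f: min(c[f] for c in cells) for f in weights}
        hi = {f: max(c[f] for c in cells) for f in weights}
        wsum = sum(weights.values())
        index = []
        for c in cells:
            s = 0.0
            for f, w in weights.items():
                span = hi[f] - lo[f]
                s += w * ((c[f] - lo[f]) / span if span else 0.0)
            index.append(s / wsum)
        return index
    ```

    The statistical step in the abstract then amounts to choosing which factors enter `weights` and how heavily, based on their significance for flood generation.
    
    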

  4. Incompatible Stochastic Processes and Complex Probabilities

    Science.gov (United States)

    Zak, Michail

    1997-01-01

    The definition of conditional probabilities is based upon the existence of a joint probability. However, a reconstruction of the joint probability from given conditional probabilities imposes certain constraints upon the latter, so that if several conditional probabilities are chosen arbitrarily, the corresponding joint probability may not exist.
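    The constraint can be made concrete for finite discrete variables: given P(X|Y) and P(Y|X) with strictly positive entries, a joint distribution exists only if a rank-one-type consistency condition holds across the two tables. The construction and tolerance below are an illustrative sketch of that check.

    ```python
    def joint_from_conditionals(P_x_given_y, P_y_given_x, tol=1e-9):
        """Attempt to reconstruct a joint distribution from two conditional
        tables (all entries assumed strictly positive). A joint p[x][y] must
        satisfy p = P(x|y) q[y] = P(y|x) r[x], which pins down the marginals
        up to normalization; if the candidate joint fails to reproduce
        P(x|y), the conditionals are incompatible and None is returned."""
        nx, ny = len(P_x_given_y), len(P_y_given_x)
        # r[x] is proportional to P(x|y0) / P(y0|x) for any fixed y0 (here y0 = 0)
        u = [P_x_given_y[x][0] / P_y_given_x[0][x] for x in range(nx)]
        total = sum(u)
        r = [ux / total for ux in u]                 # candidate marginal of X
        joint = [[P_y_given_x[y][x] * r[x] for y in range(ny)] for x in range(nx)]
        for y in range(ny):                          # consistency check
            col = sum(joint[x][y] for x in range(nx))
            for x in range(nx):
                if abs(joint[x][y] / col - P_x_given_y[x][y]) > tol:
                    return None
        return joint
    ```

    Perturbing one conditional table while leaving the other fixed generically breaks the check, which is exactly the phenomenon the abstract describes: arbitrarily chosen conditionals need not admit any joint.
    
    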

  5. Ranking U-Sapiens 2010-2

    Directory of Open Access Journals (Sweden)

    Carlos-Roberto Peña-Barrera

    2011-08-01

    The main objectives of this research are: (1) for the national and international scientific community and society in general to learn the results of the Ranking U-Sapiens Colombia 2010_2, which classifies each Colombian higher education institution (IES) by score, position, and quartile; (2) to highlight the most important movements when comparing the 2010_1 ranking results with those of 2010_2; (3) to publish the responses of some actors in national academia regarding the dynamics of research in the country; (4) to acknowledge some institutions, media outlets, and researchers that have taken an interest in this research, whether by way of reflection, referencing, or citation; and (5) to present the «Sello Ranking U-Sapiens Colombia» (seal) for the ranked institutions. In terms of actors, the scope of this study covered each and every national IES (although only some managed to enter the ranking) and, in terms of time, a period covering the first semester of 2010 with respect to: (1) the 2010-1 results of journals indexed in Publindex, (2) the master's and doctoral programs active during 2010-1 according to the Ministry of National Education, and (3) the results of research groups classified for 2010 according to Colciencias. The method used for this research is the same as for the 2010_1 ranking, except for an even more detailed specification in one of the steps of the model (the variables α, β, γ); it is completely quantitative, and the data for the variables underlying its results come from Colciencias and the Ministry of National Education; on this occasion, the results by variable for 2010_1 and 2010_2 will be presented. The most relevant results are these: (1) 8 IES entered the ranking and 3 left; (2) the top 3 IES are public; (3) there are 6 university institutions in the ranking in total; (4) 7 of the top 10 IES are

  6. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.

    Science.gov (United States)

    Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe

    2012-04-01

    We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in the multimedia feature space and the historical RF information provided by users. A trace-ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated the framework's advantages in precision, robustness, scalability, and computational efficiency.
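    The ranking step can be illustrated with a simplified graph-based score-propagation iteration. This is a generic manifold-ranking sketch over a similarity graph, not the paper's local-regression construction of the Laplacian; the normalization and parameters are illustrative.

    ```python
    def manifold_rank(W, y, alpha=0.5, iters=500):
        """Propagate query scores y over a similarity graph W until convergence
        of f = alpha * S f + (1 - alpha) * y, where S is the row-normalized
        graph. Larger f means more relevant to the query vector y."""
        n = len(W)
        S = [[W[i][j] / sum(W[i]) for j in range(n)] for i in range(n)]
        f = y[:]
        for _ in range(iters):
            f = [alpha * sum(S[i][j] * f[j] for j in range(n)) + (1 - alpha) * y[i]
                 for i in range(n)]
        return f
    ```

    On a three-node chain with the query placed on one end, the scores decay monotonically along the chain, which is the qualitative behavior a learned Laplacian-based ranker also exhibits.
    
    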

  7. Probability measures on metric spaces

    CERN Document Server

    Parthasarathy, K R

    2005-01-01

    In this book, the author gives a cohesive account of the theory of probability measures on complete metric spaces (which is viewed as an alternative approach to the general theory of stochastic processes). After a general description of the basics of topology on the set of measures, the author discusses regularity, tightness, and perfectness of measures, properties of sampling distributions, and metrizability and compactness theorems. Next, he describes arithmetic properties of probability measures on metric groups and locally compact abelian groups. Covered in detail are notions such as decom

  8. Probability and Statistics: 5 Questions

    DEFF Research Database (Denmark)

    Probability and Statistics: 5 Questions is a collection of short interviews based on 5 questions presented to some of the most influential and prominent scholars in probability and statistics. We hear their views on the fields, aims, scopes, the future direction of research and how their work fit...... in these respects. Interviews with Nick Bingham, Luc Bovens, Terrence L. Fine, Haim Gaifman, Donald Gillies, James Hawthorne, Carl Hoefer, James M. Joyce, Joseph B. Kadane Isaac Levi, D.H. Mellor, Patrick Suppes, Jan von Plato, Carl Wagner, Sandy Zabell...

  9. Knowledge typology for imprecise probabilities.

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G. D. (Gregory D.); Zucker, L. J. (Lauren J.)

    2002-01-01

    When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.

  10. Probability, Statistics, and Stochastic Processes

    CERN Document Server

    Olofsson, Peter

    2011-01-01

    A mathematical and intuitive approach to probability, statistics, and stochastic processes This textbook provides a unique, balanced approach to probability, statistics, and stochastic processes. Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area. This text combines a rigorous, calculus-based development of theory with a more intuitive approach that appeals to readers' sense of reason and logic, an approach developed through the author's many years of classroom experience. The text begins with three chapters that d

  11. Probability, statistics, and queueing theory

    CERN Document Server

    Allen, Arnold O

    1990-01-01

    This is a textbook on applied probability and statistics with computer science applications for students at the upper undergraduate level. It may also be used as a self study book for the practicing computer science professional. The successful first edition of this book proved extremely useful to students who need to use probability, statistics and queueing theory to solve problems in other fields, such as engineering, physics, operations research, and management science. The book has also been successfully used for courses in queueing theory for operations research students. This second edit

  12. RANK-ligand (RANKL) expression in young breast cancer patients and during pregnancy.

    Science.gov (United States)

    Azim, Hatem A; Peccatori, Fedro A; Brohée, Sylvain; Branstetter, Daniel; Loi, Sherene; Viale, Giuseppe; Piccart, Martine; Dougall, William C; Pruneri, Giancarlo; Sotiriou, Christos

    2015-02-21

    RANKL is important in mammary gland development during pregnancy and mediates the initiation and progression of progesterone-induced breast cancer. No clinical data are available on the effect of pregnancy on RANK/RANKL expression in young breast cancer patients. We used our previously published dataset of 65 pregnant and 130 matched young breast cancer patients with full clinical, pathological, and survival information; 85% of patients had transcriptomic data available as well. RANK/RANKL expression was assessed by immunohistochemistry using the H-score on the primary tumor and adjacent normal tissue. We examined the difference in expression of RANK/RANKL between pregnant and non-pregnant patients and their association with clinicopathological features and prognosis. We also evaluated genes and pathways associated with RANK/RANKL expression in primary tumors. RANKL, but not RANK, expression was more prevalent in the pregnant group, both in the tumor and in adjacent normal tissue, independent of other clinicopathological factors (both P < …). Pregnancy increases RANKL expression both in the normal breast and in primary tumors. These results could guide further development of RANKL-targeted therapy.

  13. Next Generation Nuclear Plant Phenomena Identification and Ranking Tables (PIRTs) Volume 5: Graphite PIRTs

    Energy Technology Data Exchange (ETDEWEB)

    Burchell, Timothy D [ORNL; Bratton, Rob [Idaho National Laboratory (INL); Marsden, Barry [University of Manchester, UK; Srinivasan, Makuteswara [U.S. Nuclear Regulatory Commission; Penfield, Scott [Technology Insights; Mitchell, Mark [PBMR (Pty) Ltd.; Windes, Will [Idaho National Laboratory (INL)

    2008-03-01

    Here we report the outcome of the application of the Nuclear Regulatory Commission (NRC) Phenomena Identification and Ranking Table (PIRT) process to the issue of nuclear-grade graphite for the moderator and structural components of a next generation nuclear plant (NGNP), considering both routine (normal operation) and postulated accident conditions for the NGNP. The NGNP is assumed to be a modular high-temperature gas-cooled reactor (HTGR), either a gas-turbine modular helium reactor (GTMHR) version [a prismatic-core modular reactor (PMR)] or a pebble-bed modular reactor (PBMR) version [a pebble bed reactor (PBR)] design, with either a direct- or indirect-cycle gas turbine (Brayton cycle) system for electric power production, and an indirect-cycle component for hydrogen production. NGNP design options with a high-pressure steam generator (Rankine cycle) in the primary loop are not considered in this PIRT. This graphite PIRT was conducted in parallel with four other NRC PIRT activities, taking advantage of the relationships and overlaps in subject matter. The graphite PIRT panel identified numerous phenomena, five of which were ranked high importance-low knowledge. A further nine were ranked high importance-medium knowledge. Two phenomena were ranked medium importance-low knowledge, and a further 14 were ranked medium importance-medium knowledge. The last 12 phenomena were ranked low importance-high knowledge (or similar combinations suggesting they have low priority). The ranking/scoring rationale for the reported graphite phenomena is discussed. Much has been learned about the behavior of graphite in reactor environments in the 60-plus years since the first graphite reactors went into service. The extensive list of references in the Bibliography is plainly testament to this fact. Our current knowledge base is well developed. Although data are lacking for the specific grades being considered for Generation IV (Gen IV

  14. Sufficient Statistics for Divergence and the Probability of Misclassification

    Science.gov (United States)

    Quirein, J.

    1972-01-01

    One particular aspect of the feature selection problem is considered: that which results from the transformation x = Bz, where B is a k by n matrix of rank k with k ≤ n. It is shown that, in general, such a transformation results in a loss of information. In terms of the divergence, this is equivalent to the fact that the average divergence computed using the variable x is less than or equal to the average divergence computed using the variable z. A loss of information in terms of the probability of misclassification is shown to be equivalent to the fact that the probability of misclassification computed using variable x is greater than or equal to the probability of misclassification computed using variable z. First, the necessary facts relating k-dimensional and n-dimensional integrals are derived. Then the stated results about the divergence and probability of misclassification are derived. Finally, it is shown that if no information is lost (in x = Bz) as measured by the divergence, then no information is lost as measured by the probability of misclassification.
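    For the special case of two equal-prior Gaussian classes with common identity covariance, the misclassification claim is easy to illustrate: projecting via x = Bz can only shrink the Mahalanobis separation between the class means, so the Bayes error can only grow. The setup below (identity covariance, a chosen mean difference, a coordinate projection) is illustrative, not the paper's general argument.

    ```python
    from math import hypot
    from statistics import NormalDist

    def bayes_error(delta):
        """Bayes misclassification probability for two equal-prior Gaussian
        classes with common identity covariance and Mahalanobis separation
        delta: error = Phi(-delta / 2)."""
        return NormalDist().cdf(-delta / 2.0)

    # Illustrative 2-D setup: mean difference between the two classes.
    mu_diff = (2.0, 1.0)
    delta_full = hypot(*mu_diff)     # separation using both coordinates of z
    b = (1.0, 0.0)                   # x = Bz keeping only the first coordinate
    delta_proj = abs(b[0] * mu_diff[0] + b[1] * mu_diff[1])
    ```

    Since bayes_error is decreasing in delta and the projected separation is never larger, the error after the transformation is never smaller, matching the abstract's inequality.
    
    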

  15. Three Experiments Involving Probability Measurement Procedures with Mathematics Test Items.

    Science.gov (United States)

    Romberg, Thomas A.; And Others

    This is a report from the Project on Individually Guided Mathematics, Phase 2 Analysis of Mathematics Instruction. The report outlines some of the characteristics of probability measurement procedures for scoring objective tests, discusses hypothesized advantages and disadvantages of the methods, and reports the results of three experiments…

  16. Hierarchical Decompositions for the Computation of High-Dimensional Multivariate Normal Probabilities

    KAUST Repository

    Genton, Marc G.

    2017-09-07

    We present a hierarchical decomposition scheme for computing the n-dimensional integral of multivariate normal probabilities that appear frequently in statistics. The scheme exploits the fact that the formally dense covariance matrix can be approximated by a matrix with a hierarchical low-rank structure. It allows the reduction of the computational complexity per Monte Carlo sample from O(n²) to O(mn + kn log(n/m)), where k is the numerical rank of off-diagonal matrix blocks and m is the size of small diagonal blocks in the matrix that are not well approximated by low-rank factorizations and treated as dense submatrices. This hierarchical decomposition leads to substantial efficiencies in multivariate normal probability computations and allows integrations in thousands of dimensions to be practical on modern workstations.
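    The quantity being accelerated is a multivariate normal orthant/box probability. A plain Monte Carlo baseline with a dense Cholesky factor, shown below for orientation, is exactly the O(n²)-per-sample computation that the hierarchical low-rank scheme speeds up; it is not the paper's method, and all names are illustrative.

    ```python
    import random
    from math import sqrt

    def cholesky(C):
        """Lower-triangular Cholesky factor of a symmetric positive-definite
        matrix, computed densely."""
        n = len(C)
        L = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1):
                s = C[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
                L[i][j] = sqrt(s) if i == j else s / L[j][j]
        return L

    def mvn_orthant_prob(C, samples=100_000, seed=1):
        """Plain Monte Carlo estimate of P(X_1 < 0, ..., X_n < 0) for
        X ~ N(0, C): draw z ~ N(0, I), transform x = L z, count hits."""
        rng = random.Random(seed)
        L = cholesky(C)
        n = len(C)
        hits = 0
        for _ in range(samples):
            z = [rng.gauss(0.0, 1.0) for _ in range(n)]
            x = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
            if all(v < 0.0 for v in x):
                hits += 1
        return hits / samples
    ```

    The x = Lz transform is the O(n²)-per-sample step; replacing L by a hierarchical low-rank factor is what brings the cost down to O(mn + kn log(n/m)).
    
    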

  17. Rank and Order: Evaluating the Performance of SNPs for Individual Assignment in a Non-Model Organism

    Science.gov (United States)

    Storer, Caroline G.; Pascal, Carita E.; Roberts, Steven B.; Templin, William D.; Seeb, Lisa W.; Seeb, James E.

    2012-01-01

    Single nucleotide polymorphisms (SNPs) are valuable tools for ecological and evolutionary studies. In non-model species, the use of SNPs has been limited by the number of markers available. However, new technologies and decreasing technology costs have facilitated the discovery of a constantly increasing number of SNPs. With hundreds or thousands of SNPs potentially available, there is interest in comparing and developing methods for evaluating SNPs to create panels of high-throughput assays that are customized for performance, research questions, and resources. Here we use five different methods to rank 43 new SNPs and 71 previously published SNPs for sockeye salmon: FST, informativeness (In), average contribution to principal components (LC), and the locus-ranking programs BELS and WHICHLOCI. We then tested the performance of these different ranking methods by creating 48- and 96-SNP panels of the top-ranked loci for each method and used empirical and simulated data to obtain the probability of assigning individuals to the correct population using each panel. All 96-SNP panels performed similarly and better than the 48-SNP panels except for the 96-SNP BELS panel. Among the 48-SNP panels, panels created from FST, In, and LC ranks performed better than panels formed using the top-ranked loci from the programs BELS and WHICHLOCI. The application of ranking methods to optimize panel performance will become more important as more high-throughput assays become available. PMID:23185290
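    One of the simpler ranking criteria, a per-SNP two-population F_ST, can be sketched as follows. The estimator is the textbook (H_T - H_S)/H_T form for allele frequencies, not necessarily the exact estimator used in the study, and the SNP names are illustrative.

    ```python
    def fst_two_pops(p1, p2):
        """Wright's F_ST for one biallelic SNP from its allele frequency in
        two populations: (H_T - H_S) / H_T, with expected heterozygosities."""
        p = (p1 + p2) / 2.0
        h_t = 2.0 * p * (1.0 - p)                        # total heterozygosity
        h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2.0  # mean within-pop
        return 0.0 if h_t == 0.0 else (h_t - h_s) / h_t

    def rank_snps(freqs):
        """Rank SNP names by F_ST, most informative first.
        freqs: dict name -> (freq in population 1, freq in population 2)."""
        return sorted(freqs, key=lambda s: fst_two_pops(*freqs[s]), reverse=True)
    ```

    A 48- or 96-SNP panel in the style of the study would then simply take the top of this ordering (or of an ordering produced by informativeness, LC, BELS, or WHICHLOCI).
    
    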

  18. Ranking the Online Documents Based on Relative Credibility Measures

    Directory of Open Access Journals (Sweden)

    Ahmad Dahlan

    2009-05-01

    Information searching is the most popular activity on the Internet. Usually the search engine provides the search results ranked by relevance. However, for purposes that concern information credibility, particularly citing information for scientific works, another approach to ranking the search engine results is required. This paper presents a study on developing a new ranking method based on the credibility of information. The method is built upon two well-known algorithms, PageRank and Citation Analysis. The result of the experiment, which used the Spearman Rank Correlation Coefficient to compare the proposed rank (generated by the method) with the standard rank (generated manually by a group of experts), showed that the average Spearman coefficient satisfied 0 < rS < the critical value. This means that a correlation exists but is not statistically significant. Hence the proposed ranking does not yet meet the standard, but its performance could be improved.
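    The comparison statistic used above can be reproduced with a small implementation of the Spearman rank correlation coefficient, computed here as the Pearson correlation of average ranks (which handles ties):

    ```python
    def _ranks(values):
        """Average ranks (1-based), assigning tied values their mean rank."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def spearman(x, y):
        """Spearman rank correlation: Pearson correlation of the rank vectors."""
        rx, ry = _ranks(x), _ranks(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        vx = sum((a - mx) ** 2 for a in rx) ** 0.5
        vy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (vx * vy)
    ```

    Applied to a proposed ranking versus an expert ranking, a coefficient near +1 would indicate close agreement; the study found values between 0 and the critical value, i.e., positive but not significant.
    
    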

  20. On the Nonnegative Rank of Euclidean Distance Matrices.

    Science.gov (United States)

    Lin, Matthew M; Chu, Moody T

    2010-09-01

    The Euclidean distance matrix for n distinct points in ℝ^r is generically of rank r + 2. It is shown in this paper via a geometric argument that its nonnegative rank for the case r = 1 is generically n.
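    The r = 1 case of the ordinary-rank statement is easy to check numerically: the squared-distance matrix of distinct points on a line decomposes into three rank-one terms (from (a - b)² = a² - 2ab + b²), so its ordinary rank is 3 = r + 2. A small sketch with an illustrative point set:

    ```python
    def edm(points):
        """Squared Euclidean distance matrix for points on the real line (r = 1)."""
        return [[(a - b) ** 2 for b in points] for a in points]

    def matrix_rank(M, tol=1e-6):
        """Numerical rank via Gaussian elimination with partial pivoting."""
        M = [row[:] for row in M]
        rows, cols = len(M), len(M[0])
        rank, r = 0, 0
        for c in range(cols):
            p = max(range(r, rows), key=lambda i: abs(M[i][c]))
            if abs(M[p][c]) < tol:
                continue  # no usable pivot in this column
            M[r], M[p] = M[p], M[r]
            for i in range(r + 1, rows):
                f = M[i][c] / M[r][c]
                for k in range(c, cols):
                    M[i][k] -= f * M[r][k]
            r += 1
            rank += 1
            if r == rows:
                break
        return rank
    ```

    The paper's point is that the nonnegative rank behaves very differently: even though the ordinary rank stays at 3, no sum of fewer than n nonnegative rank-one matrices reproduces the matrix generically.
    
    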