WorldWideScience

Sample records for rank sum tests

  1. The Distribution of the Sum of Signed Ranks

    Science.gov (United States)

    Albright, Brian

    2012-01-01

    We describe the calculation of the distribution of the sum of signed ranks and develop an exact recursive algorithm for the distribution, as well as a normal approximation of it. The results have applications to the non-parametric Wilcoxon signed-rank test.
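
    The recursive idea can be sketched in a few lines of Python (a minimal sketch, not the authors' exact algorithm): under the null, each rank carries a positive sign with probability 1/2, so the subset counts obey a knapsack-style recursion, which yields the exact null distribution for comparison with the normal approximation.

```python
from math import sqrt, erf

def signed_rank_pmf(n):
    """Exact PMF of the Wilcoxon signed-rank statistic W+ for sample size n.

    Under the null, each rank 1..n carries a + sign with probability 1/2, so
    P(W+ = w) = (# subsets of {1,..,n} summing to w) / 2**n.  The subset
    counts satisfy a knapsack-style recursion over the ranks.
    """
    max_w = n * (n + 1) // 2
    counts = [0] * (max_w + 1)
    counts[0] = 1
    for r in range(1, n + 1):
        for w in range(max_w, r - 1, -1):  # in-place 0/1-knapsack update
            counts[w] += counts[w - r]
    return [c / 2 ** n for c in counts]

def normal_approx_cdf(w, n):
    """Normal approximation: W+ ~ N(n(n+1)/4, n(n+1)(2n+1)/24)."""
    mu = n * (n + 1) / 4
    var = n * (n + 1) * (2 * n + 1) / 24
    z = (w + 0.5 - mu) / sqrt(var)  # continuity correction
    return 0.5 * (1 + erf(z / sqrt(2)))

pmf = signed_rank_pmf(10)
exact_cdf_8 = sum(pmf[:9])  # exact P(W+ <= 8) for n = 10
print(round(exact_cdf_8, 4), round(normal_approx_cdf(8, 10), 4))
```

    For n = 10, the exact lower-tail probability P(W+ <= 8) is 25/1024 ≈ 0.0244, while the continuity-corrected normal approximation gives a slightly larger value.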

  2. Exact p-values for pairwise comparison of Friedman rank sums, with application to comparing classifiers

    NARCIS (Netherlands)

    Eisinga, R.N.; Heskes, T.M.; Pelzer, B.J.; Grotenhuis, H.F. te

    2017-01-01

    Background: The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to
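
    The quantities involved can be illustrated as follows (the data and the no-ties assumption are illustrative; the paper's exact p-value method is not reproduced here): the Friedman rank sums are computed within blocks, and the usual large-sample pairwise comparison is the normal approximation the paper improves upon.

```python
import numpy as np
from math import sqrt, erf

def friedman_rank_sums(data):
    """data: (n_blocks, k_treatments); returns per-treatment rank sums and
    the Friedman chi-square statistic (ties are ignored for simplicity)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    ranks = data.argsort(axis=1).argsort(axis=1) + 1  # within-block ranks
    R = ranks.sum(axis=0)
    Q = 12.0 / (n * k * (k + 1)) * np.sum(R ** 2) - 3.0 * n * (k + 1)
    return R, Q

def pairwise_normal_p(Ri, Rj, n, k):
    """Two-sided large-sample p-value for the rank-sum difference |Ri - Rj|;
    the paper's point is that this approximation can be replaced by exact
    probabilities, which are not implemented here."""
    se = sqrt(n * k * (k + 1) / 6.0)
    z = abs(Ri - Rj) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2))))

R, Q = friedman_rank_sums([[0.1, 0.5, 0.9],
                           [0.2, 0.6, 1.0],
                           [0.3, 0.4, 0.8]])
print(R, Q, pairwise_normal_p(R[2], R[0], n=3, k=3))
```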

  3. Uniform approximation is more appropriate for Wilcoxon Rank-Sum Test in gene set analysis.

    Directory of Open Access Journals (Sweden)

    Zhide Fang

    Full Text Available Gene set analysis is widely used to facilitate biological interpretations in the analyses of differential expression from high-throughput profiling data. The Wilcoxon Rank-Sum (WRS) test is one of the commonly used methods in gene set enrichment analysis. It compares the ranks of genes in a gene set against those of genes outside the gene set. This method is easy to implement and it eliminates the dichotomization of genes into significant and non-significant in a competitive hypothesis test. Due to the large number of genes being examined, it is impractical to calculate the exact null distribution for the WRS test. Therefore, the normal distribution is commonly used as an approximation. However, as we demonstrate in this paper, the normal approximation is problematic when a gene set with a relatively small number of genes is tested against the large number of genes in the complementary set. In this situation, a uniform approximation is substantially more powerful, more accurate, and less intensive in computation. We demonstrate the advantage of the uniform approximation in Gene Ontology (GO) term analysis using simulations and real data sets.
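
    The contrast between the two approximations can be sketched as follows. Treating the m scaled gene-set ranks as roughly m iid Uniform(0,1) draws gives an Irwin-Hall ("uniform") tail probability; this is an illustrative simplification of the paper's method, not its exact procedure, and the numbers below are invented.

```python
from math import comb, factorial, floor, sqrt, erf

def irwin_hall_cdf(x, m):
    """P(U1 + ... + Um <= x) for m iid Uniform(0,1) variables."""
    if x <= 0:
        return 0.0
    if x >= m:
        return 1.0
    s = sum((-1) ** k * comb(m, k) * (x - k) ** m
            for k in range(int(floor(x)) + 1))
    return s / factorial(m)

def gene_set_pvalues(rank_sum, m, N):
    """Left-tail p-value of a gene-set rank sum under both approximations."""
    p_uniform = irwin_hall_cdf(rank_sum / N, m)  # ranks scaled to (0, 1]
    mu = m * (N + 1) / 2.0
    var = m * (N - m) * (N + 1) / 12.0  # without-replacement variance
    z = (rank_sum - mu) / sqrt(var)
    p_normal = 0.5 * (1 + erf(z / sqrt(2)))
    return p_uniform, p_normal

# a tiny gene set (m = 5) ranked very high among N = 10000 genes
print(gene_set_pvalues(rank_sum=100, m=5, N=10000))
```

    For a small, extreme gene set the two approximations disagree by many orders of magnitude, which is the regime the abstract describes.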

  4. Convolutional Codes with Maximum Column Sum Rank for Network Streaming

    OpenAIRE

    Mahmood, Rafid; Badr, Ahmed; Khisti, Ashish

    2015-01-01

    The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We introduce a metric known as the column sum rank that parallels the column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction invol...

  5. Comparison of multianalyte proficiency test results by sum of ranking differences, principal component analysis, and hierarchical cluster analysis.

    Science.gov (United States)

    Škrbić, Biljana; Héberger, Károly; Durišić-Mladenović, Nataša

    2013-10-01

    Sum of ranking differences (SRD) was applied to compare multianalyte results obtained by several analytical methods used in one or in different laboratories, i.e., to rank the overall performances of the methods (or laboratories) in simultaneous determination of the same set of analytes. The data sets for testing the applicability of SRD contained the results reported during one of the proficiency tests (PTs) organized by the EU Reference Laboratory for Polycyclic Aromatic Hydrocarbons (EU-RL-PAH). In this way, SRD was also tested as a discriminant method alternative to the existing average performance scores used to compare multianalyte PT results. SRD should be used along with the z scores, the most commonly used PT performance statistic. SRD was further developed to handle equal rankings (ties) among laboratories. Two benchmark concentration series were selected as reference: (a) the assigned PAH concentrations (determined precisely beforehand by the EU-RL-PAH) and (b) the averages of all individual PAH concentrations determined by each laboratory. Ranking relative to the assigned values, and also to the average (or median) values, pointed to the laboratories with the most extreme results and revealed groups of laboratories with similar overall performances. SRD reveals differences between methods or laboratories even when classical tests cannot. The ranking was validated by comparison of ranks by random numbers (a randomization test) and by sevenfold cross-validation, which highlighted the similarities among the (methods used in) laboratories. Principal component analysis and hierarchical cluster analysis justified the findings based on SRD ranking/grouping. If the PAH concentrations are row-scaled (i.e., z scores are analyzed as input for ranking), SRD can still be used for checking the normality of errors. Moreover, cross-validation of SRD on z scores groups the laboratories similarly.
The SRD technique is general in nature, i.e., it can

  6. Power and sample size evaluation for the Cochran-Mantel-Haenszel mean score (Wilcoxon rank sum) test and the Cochran-Armitage test for trend.

    Science.gov (United States)

    Lachin, John M

    2011-11-10

    The power of a chi-square test, and thus the required sample size, are a function of the noncentrality parameter that can be obtained as the limiting expectation of the test statistic under an alternative hypothesis specification. Herein, we apply this principle to derive simple expressions for two tests that are commonly applied to discrete ordinal data. The Wilcoxon rank sum test for the equality of distributions in two groups is algebraically equivalent to the Mann-Whitney test. The Kruskal-Wallis test applies to multiple groups. These tests are equivalent to a Cochran-Mantel-Haenszel mean score test using rank scores for a set of C discrete categories. Although various authors have assessed the power function of the Wilcoxon and Mann-Whitney tests, herein it is shown that the power of these tests with discrete observations, that is, with tied ranks, is readily provided by the power function of the corresponding Cochran-Mantel-Haenszel mean scores test for two and R > 2 groups. These expressions yield results virtually identical to those derived previously for rank scores and also apply to other score functions. The Cochran-Armitage test for trend assesses whether there is a monotonically increasing or decreasing trend in the proportions with a positive outcome or response over the C ordered categories of an ordinal independent variable, for example, dose. Herein, it is shown that the power of the test is a function of the slope of the response probabilities over the ordinal scores assigned to the groups, which yields simple expressions for the power of the test. Copyright © 2011 John Wiley & Sons, Ltd.
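
    Lachin's closed-form power expressions are not reproduced here; as an illustrative cross-check, the power of the rank-sum (Mann-Whitney) test on C discrete ordered categories can be estimated by Monte Carlo simulation (the category probabilities below are made up).

```python
import random
from math import sqrt

def mann_whitney_u(x, y):
    """U statistic with the usual 1/2 credit for ties."""
    return sum(1.0 if xi > yi else (0.5 if xi == yi else 0.0)
               for xi in x for yi in y)

def simulated_power(probs_a, probs_b, n, alpha_z=1.96, reps=1000, seed=1):
    """Rejection rate of the two-sided normal-approximation rank-sum test for
    two groups of size n drawing from C ordered categories."""
    rng = random.Random(seed)
    cats = list(range(len(probs_a)))
    mu = n * n / 2.0
    sd = sqrt(n * n * (2 * n + 1) / 12.0)  # no tie correction: conservative
    hits = 0
    for _ in range(reps):
        x = rng.choices(cats, probs_a, k=n)
        y = rng.choices(cats, probs_b, k=n)
        if abs((mann_whitney_u(x, y) - mu) / sd) > alpha_z:
            hits += 1
    return hits / reps

# strong shift across three ordered categories: power should be high
print(simulated_power([0.7, 0.2, 0.1], [0.1, 0.2, 0.7], n=30, reps=500))
```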

  7. [The relationship between Ridit analysis and rank sum test for one-way ordinal contingency table in medical research].

    Science.gov (United States)

    Wang, Ling; Xia, Jie-lai; Yu, Li-li; Li, Chan-juan; Wang, Su-zhen

    2008-06-01

    To explore several numerical methods for ordinal variables in a one-way ordinal contingency table and their interrelationships, and to compare the corresponding statistical analysis methods, such as Ridit analysis and the rank sum test. Formula deduction was based on five simplified grading approaches: rank_r(i), ridit_r(i), ridit_r(ci), ridit_r(mi), and table scores. A practical data set from clinical practice (testing the effect of Shiwei solution in the treatment of chronic tracheitis) was verified with SAS 8.2. Because of the linear relationship rank_r(i) = N ridit_r(i) + 1/2 = N ridit_r(ci) = (N + 1) ridit_r(mi), the exact chi2 values in Ridit analysis based on ridit_r(i), ridit_r(ci), and ridit_r(mi) were identical, and they were equivalent to the Kruskal-Wallis H test. Traditional Ridit analysis is based on ridit_r(i), and its corresponding chi2 value, calculated with an approximate variance (1/12), is conservative. The exact chi2 test of Ridit analysis should be used when comparing multiple groups in clinical research because of its special merits, such as the distribution of the mean ridit value on (0,1) and clear graphical presentation. The exact chi2 test of Ridit analysis can be output directly by PROC FREQ of SAS 8.2 with the ridit and modridit options (SCORES=). The exact chi2 test of Ridit analysis is equivalent to the Kruskal-Wallis H test and should be used when comparing multiple groups in clinical research.
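
    The linear relationship rank_r(i) = N ridit_r(i) + 1/2 can be checked numerically in a few lines of Python (the frequency table below is hypothetical):

```python
def midranks_and_ridits(freqs):
    """Mid-ranks and classical ridits for a one-way ordinal frequency table."""
    N = sum(freqs)
    midranks, ridits, below = [], [], 0
    for f in freqs:
        midranks.append(below + (f + 1) / 2.0)  # average rank in category
        ridits.append((below + f / 2.0) / N)    # classical ridit
        below += f
    return N, midranks, ridits

N, mr, rd = midranks_and_ridits([10, 25, 40, 15])
for m, r in zip(mr, rd):
    assert abs(m - (N * r + 0.5)) < 1e-12       # rank = N * ridit + 1/2
print(mr, rd)
```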

  8. Development of a new biofidelity ranking system for anthropomorphic test devices.

    Science.gov (United States)

    Rhule, Heather H; Maltese, Matthew R; Donnelly, Bruce R; Eppinger, Rolf H; Brunner, Jill K; Bolte, John H

    2002-11-01

    A new biofidelity assessment system is being developed and applied to three side impact dummies: the WorldSID-alpha, the ES-2 and the SID-HIII. This system quantifies (1) the ability of a dummy to load a vehicle as a cadaver does, "External Biofidelity," and (2) the ability of a dummy to replicate those cadaver responses that best predict injury potential, "Internal Biofidelity." The ranking system uses cadaver and dummy responses from head drop tests, thorax and shoulder pendulum tests, and whole body sled tests. Each test condition is assigned a weight factor based on the number of human subjects tested to form the biomechanical response corridor and how well the biofidelity tests represent FMVSS 214, side NCAP (SNCAP) and FMVSS 201 Pole crash environments. For each response requirement, the cumulative variance of the dummy response relative to the mean cadaver response (DCV) and the cumulative variance of the mean cadaver response relative to the mean plus one standard deviation (CCV) are calculated. The ratio of DCV/CCV expresses how well the dummy response duplicates the mean cadaver response: a smaller ratio indicating better biofidelity. For each test condition, the square root is taken of each Response Comparison Value (DCV/CCV), and then these values are averaged and multiplied by the appropriate Test Condition Weight. The weighted and averaged comparison values are then summed and divided by the sum of the Test Condition Weights to obtain a rank for each body region. Each dummy obtains an overall rank for External Biofidelity and an overall rank for Internal Biofidelity comprised of an average of the ranks from each body region. Of the three dummies studied, the selected comparison test data indicate that the WorldSID-alpha prototype dummy demonstrated the best overall External Biofidelity although improvement is needed in all of the dummies to better replicate human kinematics. 
All three dummies estimate potential injury assessment with similar levels of

  9. Use of rank sum method in identifying high occupational dose jobs for ALARA implementation

    International Nuclear Information System (INIS)

    Cho, Yeong Ho; Kang, Chang Sun

    1998-01-01

    The cost-effective reduction of occupational radiation exposure (ORE) dose at a nuclear power plant cannot be achieved without an extensive analysis of the accumulated ORE dose data of existing plants. It is necessary to identify which jobs incur high ORE doses before ALARA can be implemented. In this study, the Rank Sum Method (RSM) is used to identify high-ORE jobs. As a case study, the database of ORE-related maintenance and repair jobs for Kori Units 3 and 4 is assessed, and the top twenty high-ORE jobs are identified. The results are also verified and validated using the Friedman test, and RSM is found to be a very efficient way of analyzing the data. (author)
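
    A generic sketch of a rank sum method of this kind: each job is ranked on several criteria and the ranks are summed, with the smallest total flagged as highest priority. The job names, criteria, and values below are invented for illustration; the study's actual criteria are not reproduced here.

```python
def rank_sum_scores(jobs):
    """jobs: {name: [criterion values]}; larger value = worse (more dose).
    Returns (name, rank total) pairs sorted so the highest-priority job
    (smallest rank total) comes first."""
    names = list(jobs)
    n_crit = len(next(iter(jobs.values())))
    totals = {name: 0 for name in names}
    for c in range(n_crit):
        # rank 1 = highest (worst) value on this criterion
        order = sorted(names, key=lambda j: -jobs[j][c])
        for rank, name in enumerate(order, start=1):
            totals[name] += rank
    return sorted(totals.items(), key=lambda kv: kv[1])

jobs = {  # hypothetical: [collective dose, job frequency, workers exposed]
    "steam generator maintenance": [120.0, 4, 30],
    "valve repair":                [ 35.0, 9, 12],
    "refueling support":           [ 80.0, 2, 25],
}
print(rank_sum_scores(jobs))
```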

  10. Identifying and ranking influential spreaders in complex networks by combining a local-degree sum and the clustering coefficient

    Science.gov (United States)

    Li, Mengtian; Zhang, Ruisheng; Hu, Rongjing; Yang, Fan; Yao, Yabing; Yuan, Yongna

    2018-03-01

    Identifying influential spreaders is a crucial problem that can help authorities to control the spreading process in complex networks. Based on the classical degree centrality (DC), several improved measures have been presented. However, these measures cannot rank spreaders accurately. In this paper, we first calculate the sum of the degrees of the nearest neighbors of a given node and, based on this sum, propose a novel centrality named clustered local-degree (CLD), which combines the sum with the clustering coefficients of nodes to rank spreaders. By assuming that the spreading process in networks follows the susceptible-infectious-recovered (SIR) model, we perform extensive simulations on a series of real networks to compare the performance of the CLD centrality with that of six other measures. The results show that the CLD centrality has a competitive performance in distinguishing the spreading ability of nodes and shows the best performance in identifying influential spreaders accurately.
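
    A toy sketch of the two ingredients of CLD, the neighbour-degree sum and the clustering coefficient. The exact way the paper combines them is not given in this record, so the combination used below (sum times one minus clustering) is an assumption for illustration only.

```python
def degree(adj, v):
    return len(adj[v])

def clustering(adj, v):
    """Local clustering coefficient of node v in an undirected graph."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
    return 2.0 * links / (k * (k - 1))

def cld(adj, v):
    """Hypothetical CLD-style score: neighbour-degree sum down-weighted by
    the node's clustering coefficient (assumed combination)."""
    neighbour_degree_sum = sum(degree(adj, u) for u in adj[v])
    return neighbour_degree_sum * (1.0 - clustering(adj, v))

adj = {  # small toy graph: a triangle 0-1-2 plus a chain 2-3-4
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3},
}
scores = {v: cld(adj, v) for v in adj}
print(scores)
```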

  11. Comparison of different eigensolvers for calculating vibrational spectra using low-rank, sum-of-product basis functions

    Science.gov (United States)

    Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker

    2017-08-01

    Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed and the numerical results are compared with those obtained with the reduced rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.

  12. Aspects of analysis of small-sample right censored data using generalized Wilcoxon rank tests

    OpenAIRE

    Öhman, Marie-Louise

    1994-01-01

    The estimated bias and variance of commonly applied and jackknife variance estimators and observed significance level and power of standardised generalized Wilcoxon linear rank sum test statistics and tests, respectively, of Gehan and Prentice are compared in a Monte Carlo simulation study. The variance estimators are the permutational-, the conditional permutational- and the jackknife variance estimators of the test statistic of Gehan, and the asymptotic- and the jackknife variance estimator...

  13. RankProdIt: A web-interactive Rank Products analysis tool

    Directory of Open Access Journals (Sweden)

    Laing Emma

    2010-08-01

    Background: The first objective of a DNA microarray experiment is typically to generate a list of genes or probes that are found to be differentially expressed or represented (in the case of comparative genomic hybridizations and/or copy number variation) between two conditions or strains. Rank Products analysis comprises a robust algorithm for deriving such lists from microarray experiments that comprise small numbers of replicates, for example, less than the number required for the commonly used t-test. Currently, users wishing to apply Rank Products analysis to their own microarray data sets have been restricted to the use of command line-based software, which can limit its usage within the biological community. Findings: Here we have developed a web interface to existing Rank Products analysis tools allowing users to quickly process their data in an intuitive and step-wise manner to obtain the respective Rank Product or Rank Sum, probability of false prediction, and p-values in a downloadable file. Conclusions: The online interactive Rank Products analysis tool RankProdIt, for analysis of any data set containing measurements for multiple replicated conditions, is available at: http://strep-microarray.sbs.surrey.ac.uk/RankProducts
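
    The core Rank Products computation can be sketched in a few lines: per replicate, rank genes by fold change, then take the geometric mean of each gene's ranks across replicates, with small values suggesting consistent change. The gene names and fold changes below are invented; the significance machinery (probability of false prediction, p-values) is omitted.

```python
import math

def rank_products(fold_changes):
    """fold_changes: {gene: [value per replicate]}; returns {gene: RP},
    where RP is the geometric mean of the gene's per-replicate ranks
    (rank 1 = most up-regulated)."""
    genes = list(fold_changes)
    n_rep = len(next(iter(fold_changes.values())))
    ranks = {g: [] for g in genes}
    for r in range(n_rep):
        order = sorted(genes, key=lambda g: -fold_changes[g][r])
        for rank, g in enumerate(order, start=1):
            ranks[g].append(rank)
    return {g: math.prod(rs) ** (1.0 / n_rep) for g, rs in ranks.items()}

fc = {"geneA": [2.1, 1.9, 2.5],   # consistently up: should get the best RP
      "geneB": [1.0, 1.1, 0.9],
      "geneC": [0.4, 2.2, 0.5]}   # inconsistent across replicates
rp = rank_products(fc)
print(sorted(rp, key=rp.get))
```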

  14. Co-integration Rank Testing under Conditional Heteroskedasticity

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A.M. Robert

    … null distributions of the rank statistics coincide with those derived by previous authors who assume either i.i.d. or (strict and covariance) stationary martingale difference innovations. We then propose wild bootstrap implementations of the co-integrating rank tests and demonstrate that the associated bootstrap rank statistics replicate the first-order asymptotic null distributions of the rank statistics. We show the same is also true of the corresponding rank tests based on the i.i.d. bootstrap of Swensen (2006). The wild bootstrap, however, has the important property that, unlike the i.i.d. bootstrap, it preserves in the re-sampled data the pattern of heteroskedasticity present in the original shocks. Consistent with this, numerical evidence suggests that, relative to tests based on the asymptotic critical values or the i.i.d. bootstrap, the wild bootstrap rank tests perform very well in small samples under a variety of conditionally heteroskedastic innovation processes.

  15. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  16. Minkowski metrics in creating universal ranking algorithms

    Directory of Open Access Journals (Sweden)

    Andrzej Ameljańczyk

    2014-06-01

    The paper presents a general procedure for creating rankings of a set of objects in which the preference relation is based on an arbitrary ranking function. The analysis of admissible ranking functions begins by showing the fundamental drawbacks of the commonly used functions of the weighted-sum form. As a special case of the ranking procedure in the space of the preference relation, a procedure based on the notion of an ideal element and the generalized Minkowski distance from that element is proposed. This procedure, presented as a universal ranking algorithm, eliminates most of the disadvantages of weighted-sum ranking functions. Keywords: ranking functions, preference relation, ranking clusters, categories, ideal point, universal ranking algorithm
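
    The ideal-point idea can be sketched as follows: rank objects by their generalized Minkowski distance to an ideal point, with smaller distance meaning better. The objects, criterion values, and choices of p below are illustrative only.

```python
def minkowski_rank(objects, ideal, p=2.0):
    """objects: {name: criterion vector}; rank by Minkowski-p distance to
    the ideal point (smaller distance = better)."""
    def dist(x):
        return sum(abs(xi - ii) ** p for xi, ii in zip(x, ideal)) ** (1.0 / p)
    return sorted(objects, key=lambda name: dist(objects[name]))

objects = {"A": [0.9, 0.8], "B": [0.5, 0.95], "C": [0.4, 0.4]}
ideal = [1.0, 1.0]   # best attainable value on each (normalized) criterion
print(minkowski_rank(objects, ideal))           # p = 2: Euclidean distance
print(minkowski_rank(objects, ideal, p=1.0))    # p = 1: weighted-sum-like
```

    Varying p changes how the criteria trade off against each other, which is exactly the degree of freedom the weighted-sum form lacks.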

  17. Cointegration rank testing under conditional heteroskedasticity

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders Christian; Taylor, Robert M.

    2010-01-01

    We analyze the properties of the conventional Gaussian-based cointegrating rank tests of Johansen (1996, Likelihood-Based Inference in Cointegrated Vector Autoregressive Models) in the case where the vector of series under test is driven by globally stationary, conditionally heteroskedastic innovations. … Relative to tests based on the asymptotic critical values or the i.i.d. bootstrap, the wild bootstrap rank tests perform very well in small samples under a variety of conditionally heteroskedastic innovation processes. An empirical application to the term structure of interest rates is given.

  18. Intensity rankings of plyometric exercises using joint power absorption.

    Science.gov (United States)

    Van Lieshout, Kathryn G; Anderson, Joy G; Shelburne, Kevin B; Davidson, Bradley S

    2014-09-01

    Athletic trainers and physical therapists often progress patients through rehabilitation by selecting plyometric exercises of increasing intensity in preparation for return to sport. The purpose of this study was to quantify the intensity of seven plyometric movements commonly used in lower-extremity rehabilitation by joint-specific peak power absorption and the sum of the peak power. Ten collegiate athletes performed submaximal plyometric exercises in a single test session: vertical jump, forward jump, backward jump, box drop, box jump up, tuck jump, and depth jump. Three-dimensional kinematics and force platform data were collected to generate joint kinetics. Peak power absorption normalized to body mass was calculated at the ankle, knee, and hip, and averaged across repetitions. Joint peak power data were pooled across athletes and summed to obtain the sum of peak power. Movements were ranked from 1 (low) to 7 (high) based on the sum of peak power and joint peak power (ankle, knee, hip). The sum of peak power did not correspond with standard low, medium, and high subjective intensity ratings or joint peak power in all joints. Mixed model analyses revealed significant variance between the sum of peak power and joint peak power ranks in the forward jump, backward jump, box drop, and depth jump (P<0.05), but not in the vertical jump, box jump up, and tuck jump. Results provide intensity rankings that can be used directly by athletic trainers and physical therapists in developing protocols for rehabilitation specific to the injured joint. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Generalized Reduced Rank Tests using the Singular Value Decomposition

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); R. Paap (Richard)

    2003-01-01

    textabstractWe propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, like: necessity of a Kronecker covariance matrix for the canonical correlation rank statistic of Anderson (1951), sensitivity to the ordering of the variables

  20. Generalized reduced rank tests using the singular value decomposition

    NARCIS (Netherlands)

    Kleibergen, F.R.; Paap, R.

    2002-01-01

    We propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, like: necessity of a Kronecker covariance matrix for the canonical correlation rank statistic of Anderson (1951), sensitivity to the ordering of the variables for the LDU

  1. Asymptotic efficiency of signed-rank symmetry tests under skew alternatives.

    OpenAIRE

    Alessandra Durio; Yakov Nikitin

    2002-01-01

    The efficiency of some known tests for symmetry such as the sign test, the Wilcoxon signed-rank test or more general linear signed rank tests was studied mainly under the classical alternatives of location. However it is interesting to compare the efficiencies of these tests under asymmetric alternatives like the so-called skew alternative proposed in Azzalini (1985). We find and compare local Bahadur efficiencies of linear signed-rank statistics for skew alternatives and discuss also the con...

  2. A Rank Test on Equality of Population Medians

    OpenAIRE

    Pooi Ah Hin

    2012-01-01

    The Kruskal-Wallis test is a non-parametric test for the equality of K population medians. The test statistic involved is a measure of the overall closeness of the K average ranks in the individual samples to the average rank in the combined sample. The resulting acceptance region of the test, however, may not be the smallest region with the required acceptance probability under the null hypothesis. Presently an alternative acceptance region is constructed such that it has the smallest size, ap...

  3. Adaptive linear rank tests for eQTL studies.

    Science.gov (United States)

    Szymczak, Silke; Scheinhardt, Markus O; Zeller, Tanja; Wild, Philipp S; Blankenberg, Stefan; Ziegler, Andreas

    2013-02-10

    Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal-Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Identification of significant features by the Global Mean Rank test.

    Science.gov (United States)

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2014-01-01

    With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while favorably scaling with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data is accounted for without the need of imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similar or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
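
    A simplified sketch of the mean-rank idea: rank features within each replicate and average each feature's ranks over the replicates in which it was observed, so missing values need no imputation. This is not the published implementation, and the significance and FDR machinery is omitted; the data are invented.

```python
def mean_ranks(data):
    """data: {feature: [value or None per replicate]} -> {feature: mean rank}.
    Features are ranked within each replicate over the observed values only,
    then each feature's ranks are averaged across its observed replicates."""
    features = list(data)
    n_rep = len(next(iter(data.values())))
    collected = {f: [] for f in features}
    for r in range(n_rep):
        present = [f for f in features if data[f][r] is not None]
        order = sorted(present, key=lambda f: data[f][r])
        for rank, f in enumerate(order, start=1):
            collected[f].append(rank)
    return {f: sum(rs) / len(rs) for f, rs in collected.items() if rs}

data = {"f1": [0.1, 0.2, None],   # lowest in both observed replicates
        "f2": [1.5, 1.2, 1.4],
        "f3": [0.7, None, 0.6]}
print(mean_ranks(data))
```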

  5. PageRank as a method to rank biomedical literature by importance.

    Science.gov (United States)

    Yates, Elliot J; Dixon, Louise C

    2015-01-01

    Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of 'inbound' links with no consideration of citation importance. PageRank, an algorithm originally developed for ranking webpages at the search engine Google, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available PubMed Central open access subset (PMC-OAS) of biomedical literature. On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905). PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
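
    A self-contained power-iteration sketch of PageRank on a toy citation graph (edges point from the citing article to the cited one). The real study ran over 600,000 articles on cloud infrastructure; the damping factor 0.85 is the conventional choice and an assumption here.

```python
def pagerank(edges, nodes, d=0.85, iters=100):
    """Plain power-iteration PageRank; dangling mass is spread evenly."""
    out = {n: [v for u, v in edges if u == n] for n in nodes}
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for u in nodes:
            targets = out[u]
            if targets:
                share = d * pr[u] / len(targets)
                for v in targets:
                    new[v] += share
            else:                      # dangling node: spread evenly
                for v in nodes:
                    new[v] += d * pr[u] / len(nodes)
        pr = new
    return pr

nodes = ["a", "b", "c"]
edges = [("a", "c"), ("b", "c"), ("c", "a")]   # c is cited twice
pr = pagerank(edges, nodes)
print(pr)   # c, the most-cited node, gets the highest PageRank
```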

  6. Comparing survival curves using rank tests

    NARCIS (Netherlands)

    Albers, Willem/Wim

    1990-01-01

    Survival times of patients can be compared using rank tests in various experimental setups, including the two-sample case and the case of paired data. Attention is focussed on two frequently occurring complications in medical applications: censoring and tail alternatives. A review is given of the

  7. Rankings of International Achievement Test Performance and Economic Strength: Correlation or Conjecture?

    Directory of Open Access Journals (Sweden)

    CHRISTOPHER H. TIENKEN

    2008-04-01

    Examining a popular political notion, this article presents results from a series of Spearman Rho calculations conducted to investigate relationships between countries’ rankings on international tests of mathematics and science and future economic competitiveness as measured by the 2006 World Economic Forum’s Growth Competitiveness Index (GCI). The study investigated the existence of relationships between international test rankings from three different time periods during the last 50 years of U.S. education policy development (i.e., 1957–1982, 1983–2000, and 2001–2006) and 2006 GCI ranks. It extends previous research on the topic by investigating how GCI rankings in the top 50 percent and bottom 50 percent relate to rankings on international tests for the countries that participated in each test. The study found that the relationship between ranks on international tests of mathematics and science and future economic strength is stronger among nations with lower-performing economies. Nations with strong economies, such as the United States, demonstrate a weaker, nonsignificant relationship.

  8. The Seven Deadly Sins of World University Ranking: A Summary from Several Papers

    Science.gov (United States)

    Soh, Kaycheng

    2017-01-01

    World university rankings use the weight-and-sum approach to process data. Although this seems to pass the common sense test, it has statistical problems. In recent years, seven such problems have been uncovered: spurious precision, weight discrepancies, assumed mutual compensation, indicator redundancy, inter-system discrepancy, negligence of…

  9. Grid-based lattice summation of electrostatic potentials by assembled rank-structured tensor approximation

    Science.gov (United States)

    Khoromskaia, Venera; Khoromskij, Boris N.

    2014-12-01

    Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on the assembled low-rank canonical tensor representations of the collected potentials using pointwise sums of shifted canonical vectors representing a single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid size, O(N), while the numerical cost is estimated by O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid size of a unit cell, n = N/L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N^3 log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using a data-sparse representation of canonical N-vectors via the quantics tensor approximation. For justification, we prove an upper bound on the quantics ranks of the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, scalar product with a function, integration or differentiation, which can be performed easily in tensor arithmetics on large 3D grids with 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.
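
    A 1D toy analogue of the assembled summation idea: many shifted copies of a single generating vector are accumulated into one length-N vector, so storage stays O(N) however many potentials are summed. This is only a cartoon of the 3D tensor method; the generator and shifts are invented.

```python
import numpy as np

def assembled_shift_sum(generator, shifts, N):
    """Sum of `generator` placed at each shift, kept as one length-N vector
    (storage O(N), cost O(N * len(shifts)))."""
    acc = np.zeros(N)
    g = np.asarray(generator, dtype=float)
    for s in shifts:
        hi = min(N, s + len(g))          # clip copies that overhang the box
        acc[s:hi] += g[:hi - s]
    return acc

g = np.array([1.0, 0.5, 0.25])           # stand-in for a 1D canonical vector
out = assembled_shift_sum(g, shifts=[0, 2, 4], N=8)
print(out)
```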

  10. Sharp bounds on the ranks of negativity of certain sums

    Indian Academy of Sciences (India)

    with a finite rank of negativity k (i.e., k is the maximal dimension of any linear subspace … By linear algebra, we can choose a linear subspace E of L which is mapped … …matics and its applications (Cambridge University Press) (1994) vol. 49.

  11. Adaptive designs for the one-sample log-rank test.

    Science.gov (United States)

    Schmidt, Rene; Faldum, Andreas; Kwiecien, Robert

    2017-09-22

    Traditional designs in phase IIa cancer trials are single-arm designs with a binary outcome, for example, tumor response. In some settings, however, a time-to-event endpoint might appear more appropriate, particularly in the presence of loss to follow-up. Then the one-sample log-rank test might be the method of choice. It allows the survival curve of the patients under treatment to be compared with a prespecified reference survival curve. The reference curve usually represents the expected survival under standard of care. In this work, convergence of the one-sample log-rank statistic to Brownian motion is proven using Rebolledo's martingale central limit theorem while accounting for staggered entry times of the patients. On this basis, a confirmatory adaptive one-sample log-rank test is proposed in which provision is made for data-dependent sample size reassessment. The focus is on applying the inverse normal method, which is done in two different directions. The first strategy exploits the independent increments property of the one-sample log-rank statistic. The second strategy is based on the patient-wise separation principle. It is shown by simulation that the proposed adaptive test might help to rescue an underpowered trial and at the same time lowers the average sample number (ASN) under the null hypothesis as compared with a single-stage fixed sample design. © 2017, The International Biometric Society.
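
    A hedged sketch of the fixed-sample (non-adaptive) one-sample log-rank statistic that the design builds on, assuming an exponential reference curve with hazard `lam0` (all names and data are illustrative):

```python
import math

def one_sample_logrank(times, events, lam0):
    """One-sample log-rank Z statistic against an exponential reference
    survival curve S0(t) = exp(-lam0 * t).

    times  : follow-up time of each patient (event or censoring time)
    events : 1 if the patient had the event, 0 if censored
    lam0   : hazard rate of the reference (standard-of-care) curve
    """
    observed = sum(events)                    # O: observed number of events
    expected = sum(lam0 * t for t in times)   # E: sum of cumulative hazards
    return (observed - expected) / math.sqrt(expected)  # approx. N(0,1) under H0

# Toy data: 5 patients, reference hazard of 0.1 events per time unit
z = one_sample_logrank([2.0, 5.0, 3.5, 8.0, 1.5], [1, 0, 1, 1, 0], 0.1)
```

Here O = 3 and E = 2.0, so z is about 0.71; the adaptive procedure of the paper builds two-stage decisions on top of this statistic.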

  12. Test elements of direct sums and free products of free Lie algebras

    Indian Academy of Sciences (India)

    We give a characterization of test elements of a direct sum of free Lie algebras in terms of test elements of the factors. In addition, we construct certain types of test elements and we prove that in a free product of free Lie algebras, the product of the homogeneous test elements of the factors is also a test element.

  14. Tensor rank is not multiplicative under the tensor product

    NARCIS (Netherlands)

    M. Christandl (Matthias); A. K. Jensen (Asger Kjærulff); J. Zuiddam (Jeroen)

    2018-01-01

    textabstractThe tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an ℓ-tensor. The tensor product of s and t is a (k+ℓ)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the

  15. On Locally Most Powerful Sequential Rank Tests

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others: Nadační fond na podporu vědy (CZ) Neuron Institutional support: RVO:67985807 Keywords: nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016

  16. Specific Features of Executive Dysfunction in Alzheimer-Type Mild Dementia Based on Computerized Cambridge Neuropsychological Test Automated Battery (CANTAB) Test Results.

    Science.gov (United States)

    Kuzmickienė, Jurgita; Kaubrys, Gintaras

    2016-10-08

    BACKGROUND The primary manifestation of Alzheimer's disease (AD) is decline in memory. Dysexecutive symptoms have tremendous impact on functional activities and quality of life. Data regarding frontal-executive dysfunction in mild AD are controversial. The aim of this study was to assess the presence and specific features of executive dysfunction in mild AD based on Cambridge Neuropsychological Test Automated Battery (CANTAB) results. MATERIAL AND METHODS Fifty newly diagnosed, treatment-naïve, mild, late-onset AD patients (MMSE ≥20, AD group) and 25 control subjects (CG group) were recruited in this prospective, cross-sectional study. The CANTAB tests CRT, SOC, PAL, SWM were used for in-depth cognitive assessment. Comparisons were performed using the t test or Mann-Whitney U test, as appropriate. Correlations were evaluated by Pearson r or Spearman R. Statistical significance was set at p<0.05. RESULTS AD and CG groups did not differ according to age, education, gender, or depression. Few differences were found between groups in the SOC test for performance measures: Mean moves (minimum 3 moves): AD (Rank Sum=2227), CG (Rank Sum=623), p<0.001. However, all SOC test time measures differed significantly between groups: SOC Mean subsequent thinking time (4 moves): AD (Rank Sum=2406), CG (Rank Sum=444), p<0.001. Correlations were weak between executive function (SOC) and episodic/working memory (PAL, SWM) (R=0.01-0.38) or attention/psychomotor speed (CRT) (R=0.02-0.37). CONCLUSIONS Frontal-executive functions are impaired in mild AD patients. Executive dysfunction is highly prominent in time measures, but minimal in performance measures. Executive disorders do not correlate with a decline in episodic and working memory or psychomotor speed in mild AD.
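
    The group comparisons above are reported as rank sums (e.g. AD Rank Sum=2227 vs. CG Rank Sum=623): both samples are pooled, ranked jointly, and each group's ranks are summed, the building block of the Wilcoxon-Mann-Whitney test. A minimal sketch of that computation, with midranks for ties:

```python
def rank_sums(a, b):
    """Rank sums of two samples after pooling, with midranks for ties."""
    pooled = sorted((v, g) for g, sample in enumerate((a, b)) for v in sample)
    rank_of = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1                            # find the tie block i..j-1
        midrank = (i + 1 + j) / 2.0           # average of 1-based ranks i+1..j
        for k in range(i, j):
            rank_of[k] = midrank
        i = j
    sums = [0.0, 0.0]
    for (_, g), r in zip(pooled, rank_of):
        sums[g] += r
    return sums

ra, rb = rank_sums([1.2, 3.4, 2.2], [0.5, 0.7])   # ra = 12.0, rb = 3.0
```

The two sums always add to n(n+1)/2 for n pooled observations, which is a quick sanity check on reported rank sums.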

  17. Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics

    International Nuclear Information System (INIS)

    Xu, Bin; Zhang, Hongen; Wang, Zhijian; Zhang, Jianbo

    2012-01-01

    Using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with the two-person constant sum 2×2 game in the social system. It is first shown that, in these competing game environments, the outcome of human decision-making obeys the principle of maximum entropy. -- Highlights: ► Test the uncertainty in two-person constant sum games with experimental data. ► On the game level, the constant sum game fits the principle of maximum entropy. ► On the group level, all empirical entropy values are close to the theoretical maxima. ► The results can be different for games that are not constant sum.
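
    The group-level check, comparing empirical entropy of observed strategy profiles against the theoretical maximum, can be sketched as follows (toy counts, not the experimental data):

```python
import math

def shannon_entropy(counts):
    """Empirical Shannon entropy (bits) of observed strategy frequencies."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Counts of the four pure-strategy profiles of a 2x2 game (toy data)
counts = [25, 25, 25, 25]
h = shannon_entropy(counts)       # uniform play attains the maximum
h_max = math.log2(4)              # theoretical maximum for four outcomes
```

Under the maximum entropy principle, empirical values of `h` should sit close to `h_max`; any concentration of play on particular profiles pushes the entropy below it.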

  18. Tensor rank is not multiplicative under the tensor product

    DEFF Research Database (Denmark)

    Christandl, Matthias; Jensen, Asger Kjærulff; Zuiddam, Jeroen

    2018-01-01

    The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an ℓ-tensor. The tensor product of s and t is a (k+ℓ)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the connection b...

  19. Use of percentile rank sum method in identifying repetitive high occupational radiation dose jobs in a nuclear power plant

    International Nuclear Information System (INIS)

    Cho, Y.H.; Ko, H.S.; Kim, S.H.; Kang, C.S.; Moon, J.H.; Kim, K.D.

    2004-01-01

    The cost-effective reduction of occupational radiation dose (ORD) at a nuclear power plant cannot be achieved without an extensive analysis of the accumulated ORD data of existing plants. Through such data analysis, it is necessary to identify the jobs with repetitive high ORD at the nuclear power plant. The commonly used point value method over-estimates the role of mean and median values in identifying high ORD jobs, which can lead to misjudgment. In this study, the Percentile Rank Sum Method (PRSM) is proposed to identify repetitive high ORD jobs; it is based on non-parametric statistical theory. As a case study, the method is applied to ORD data of maintenance and repair jobs at Kori units 3 and 4, pressurized water reactors with 950 MWe capacity that have been operated in Korea since 1986 and 1987, respectively. The results were verified and validated, and PRSM has been demonstrated to be an efficient method of analyzing the data. (authors)
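
    The abstract does not spell out the scoring details, but a plausible PRSM-style computation (our assumption: per-period percentile ranks summed across observation periods; all doses below are hypothetical) looks like:

```python
def percentile_ranks(values):
    """Percentile rank of each value within one observation period
    (fraction of strictly smaller values, scaled to 0-100)."""
    n = len(values)
    return [100.0 * sum(v < x for v in values) / n for x in values]

def prsm(dose_by_period):
    """Sum of per-period percentile ranks for each job; jobs with the
    largest totals are flagged as repetitive high-ORD jobs.
    dose_by_period: list of periods, each a list of doses (one per job)."""
    totals = [0.0] * len(dose_by_period[0])
    for period in dose_by_period:
        for j, pr in enumerate(percentile_ranks(period)):
            totals[j] += pr
    return totals

# 3 jobs observed over 2 outage periods (hypothetical collective doses)
scores = prsm([[10.0, 40.0, 25.0],
               [12.0, 50.0, 30.0]])   # job 1 ranks highest in both periods
```

Because ranks, not raw doses, are summed, a single extreme outage cannot dominate the ranking the way it would in a mean-based point value method.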

  20. Tensor rank is not multiplicative under the tensor product

    OpenAIRE

    Christandl, Matthias; Jensen, Asger Kjærulff; Zuiddam, Jeroen

    2017-01-01

    The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an l-tensor. The tensor product of s and t is a (k + l)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the connection between restrictions and degenerations. A result of our study is that tensor rank is not in general multiplicative under the tensor product. This answers a question of Draisma and Saptharishi. Specif...

  1. Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Bin, E-mail: xubin211@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Public Administration College, Zhejiang Gongshang University, Hangzhou, 310018 (China); Zhang, Hongen, E-mail: hongen777@163.com [Department of Physics, Zhejiang University, Hangzhou, 310027 (China); Wang, Zhijian, E-mail: wangzj@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Zhang, Jianbo, E-mail: jbzhang08@zju.edu.cn [Department of Physics, Zhejiang University, Hangzhou, 310027 (China)

    2012-03-19

    Using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with the two-person constant sum 2×2 game in the social system. It is first shown that, in these competing game environments, the outcome of human decision-making obeys the principle of maximum entropy. -- Highlights: ► Test the uncertainty in two-person constant sum games with experimental data. ► On the game level, the constant sum game fits the principle of maximum entropy. ► On the group level, all empirical entropy values are close to the theoretical maxima. ► The results can be different for games that are not constant sum.

  2. Partial sums of lagged cross-products of AR residuals and a test for white noise

    NARCIS (Netherlands)

    de Gooijer, J.G.

    2008-01-01

    Partial sums of lagged cross-products of AR residuals are defined. By studying the sample paths of these statistics, changes in residual dependence can be detected that might be missed by statistics using only the total sum of cross-products. Also, a test statistic for white noise is proposed. It is
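
    A minimal sketch of the statistic described, the path of partial sums S_t of lag-k cross-products of residuals (toy residuals, illustrative only):

```python
def lagged_crossproduct_partial_sums(resid, lag):
    """Partial sums S_t = sum over s = lag+1..t of e_s * e_{s-lag} for AR
    residuals e. Inspecting the whole path of S_t can reveal changes in
    residual dependence that the total sum S_n alone would average away."""
    path, running = [], 0.0
    for t in range(lag, len(resid)):
        running += resid[t] * resid[t - lag]
        path.append(running)
    return path

path = lagged_crossproduct_partial_sums([1.0, -1.0, 2.0, -2.0, 1.0], 1)
```

A steadily drifting path signals persistent serial correlation at that lag, while a path that drifts and then reverses points to a change in residual dependence mid-sample.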

  3. Generating pseudo test collections for learning to rank scientific articles

    NARCIS (Netherlands)

    Berendsen, R.; Tsagkias, M.; de Rijke, M.; Meij, E.

    2012-01-01

    Pseudo test collections are automatically generated to provide training material for learning to rank methods. We propose a method for generating pseudo test collections in the domain of digital libraries, where data is relatively sparse, but comes with rich annotations. Our intuition is that

  4. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling, a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
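
    For the unstructured case, the basic tool is the truncated SVD, which by the Eckart-Young-Mirsky theorem gives the closest rank-k matrix in the Frobenius and spectral norms (a sketch; the structured Toeplitz/Hankel/Sylvester problems treated in the book require the iterative local optimization methods it describes):

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A in the Frobenius/spectral norm,
    obtained by keeping the k largest singular triplets."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

A = np.diag([3.0, 2.0, 1.0])
A1 = best_rank_k(A, 1)
# Frobenius error of the optimum equals sqrt(sigma_2^2 + sigma_3^2)
err = np.linalg.norm(A - A1, 'fro')
```

For this diagonal example the optimal rank-1 approximation keeps only the leading singular value, so the error is sqrt(2² + 1²) = sqrt(5).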

  5. A novel three-stage distance-based consensus ranking method

    Science.gov (United States)

    Aghayi, Nazila; Tavana, Madjid

    2018-05-01

    In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.

  6. On Locally Most Powerful Sequential Rank Tests

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others: Nadační fond na podporu vědy (CZ) Neuron Institutional support: RVO:67985556 Keywords: nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/kalina-0474065.pdf

  7. Fast local fragment chaining using sum-of-pair gap costs

    DEFF Research Database (Denmark)

    Otto, Christian; Hoffmann, Steve; Gorodkin, Jan

    2011-01-01

    …, and rank the fragments to improve the specificity. Results: Here we present a fast and flexible fragment chainer that for the first time also supports a sum-of-pair gap cost model. This model has proven to achieve a higher accuracy and sensitivity in its own field of application. Due to a highly time… alignment heuristics alone. By providing both the linear and the sum-of-pair gap cost model, a wider range of application can be covered. The software clasp is available at http://www.bioinf.uni-leipzig.de/Software/clasp/.

  8. Test Scores, Class Rank and College Performance: Lessons for Broadening Access and Promoting Success.

    Science.gov (United States)

    Niu, Sunny X; Tienda, Marta

    2012-04-01

    Using administrative data for five Texas universities that differ in selectivity, this study evaluates the relative influence of two key indicators of college success: high school class rank and standardized tests. Empirical results show that class rank is the superior predictor of college performance and that test score advantages do not insulate lower-ranked students from academic underperformance. Using the UT-Austin campus as a test case, we conduct a simulation to evaluate the consequences of capping the number of students admitted automatically using both achievement metrics. We find that using class rank to cap the number of students eligible for automatic admission would have roughly uniform impacts across high schools, but imposing a minimum test score threshold on all students would have highly unequal consequences, greatly reducing the admission eligibility of the highest-performing students who attend poor high schools while not jeopardizing the admissibility of students who attend affluent high schools. We discuss the implications of the Texas admissions experiment for higher education in Europe.

  9. Some relations between rank, chromatic number and energy of graphs

    International Nuclear Information System (INIS)

    Akbari, S.; Ghorbani, E.; Zare, S.

    2006-08-01

    The energy of a graph G is defined as the sum of the absolute values of all eigenvalues of G and is denoted by E(G). Let G be a graph and rank(G) be the rank of the adjacency matrix of G. In this paper we characterize all graphs with E(G) = rank(G). Among other results we show that, apart from a few families of graphs, E(G) ≥ 2 max(χ(G), n − χ(Ḡ)), where Ḡ and χ(G) are the complement and the chromatic number of G, respectively. Moreover, some new lower bounds for E(G) in terms of rank(G) are given. (author)
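
    Both quantities are easy to check numerically. For the single-edge graph K2 the equality E(G) = rank(G) is attained, while the triangle K3 already has E(G) > rank(G):

```python
import numpy as np

def graph_energy(adj):
    """E(G): sum of the absolute values of the adjacency eigenvalues."""
    return float(np.abs(np.linalg.eigvalsh(adj)).sum())

# Single edge (K2): eigenvalues +1 and -1, so E(G) = 2 = rank(G)
k2 = np.array([[0.0, 1.0], [1.0, 0.0]])
e = graph_energy(k2)
r = np.linalg.matrix_rank(k2)

# Triangle (K3): eigenvalues 2, -1, -1, so E(G) = 4 > rank(G) = 3
k3 = np.ones((3, 3)) - np.eye(3)
```

Since adjacency matrices are symmetric, `eigvalsh` is the appropriate (and numerically stable) eigensolver here.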

  10. Irritancy ranking of anionic detergents using one-time occlusive, repeated occlusive and repeated open tests

    NARCIS (Netherlands)

    Tupker, RA; Bunte, EE; Fidler, [No Value; Wiechers, JW; Coenraads, PJ

    Discrepancies between the one-time patch test and the wash test regarding the ranking of irritancy of detergents have been found in the literature. The aim of the present study was to investigate the concordance of irritancy rank order of 4 anionic detergents tested by 3 different exposure methods,

  11. Pearson's chi-square test and rank correlation inferences for clustered data.

    Science.gov (United States)

    Shih, Joanna H; Fay, Michael P

    2017-09-01

    Pearson's chi-square test has been widely used in testing for association between two categorical responses. Spearman rank correlation and Kendall's tau are often used for measuring and testing association between two continuous or ordered categorical responses. However, the established statistical properties of these tests are only valid when each pair of responses are independent, where each sampling unit has only one pair of responses. When each sampling unit consists of a cluster of paired responses, the assumption of independent pairs is violated. In this article, we apply the within-cluster resampling technique to U-statistics to form new tests and rank-based correlation estimators for possibly tied clustered data. We develop large sample properties of the new proposed tests and estimators and evaluate their performance by simulations. The proposed methods are applied to a data set collected from a PET/CT imaging study for illustration. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
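
    The within-cluster resampling idea can be sketched for Spearman's rank correlation: repeatedly draw one pair per cluster (restoring independence between the drawn pairs), compute the statistic on the draws, and average over resamples. The sketch below uses toy data and omits the paper's U-statistic machinery and variance estimation:

```python
import random

def midranks(xs):
    """1-based midranks of a list (ties share the average rank)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j < len(xs) and xs[order[j]] == xs[order[i]]:
            j += 1
        for k in range(i, j):
            ranks[order[k]] = (i + 1 + j) / 2.0
        i = j
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the midranks."""
    rx, ry = midranks(x), midranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def wcr_spearman(clusters, n_resamples=500, seed=0):
    """Within-cluster resampling: one (x, y) pair drawn per cluster,
    Spearman's rho computed on the draws, averaged over resamples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_resamples):
        pairs = [rng.choice(c) for c in clusters]
        total += spearman([p[0] for p in pairs], [p[1] for p in pairs])
    return total / n_resamples

# Three clusters of paired responses with a perfectly concordant trend
clusters = [[(1, 2), (2, 3)], [(4, 5), (5, 4)], [(8, 9), (9, 8)]]
rho = wcr_spearman(clusters)
```

Because each resample uses exactly one pair per cluster, the drawn pairs are independent and the ordinary large-sample theory applies to each resampled statistic.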

  12. Another Argument in Favour of Wilcoxon's Signed Rank Test

    OpenAIRE

    Rosenblatt, Jonathan; Benjamini, Yoav

    2013-01-01

    The Wilcoxon Signed Rank test is typically called upon when testing whether a symmetric distribution has a specified centre and the Gaussianity is in question. As with all insurance policies it comes with a cost, even if small, in terms of power versus a t-test, when the distribution is indeed Gaussian. In this note we further show that even when the distribution tested is Gaussian there need not be power loss at all, if the alternative is of a mixture type rather than a shift. The signed ran...

  13. Block models and personalized PageRank.

    Science.gov (United States)

    Kloumann, Isabel M; Ugander, Johan; Kleinberg, Jon

    2017-01-03

    Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods through the "seed set expansion problem": given a subset [Formula: see text] of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of "landing probabilities" of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work, we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter [Formula: see text] that depends on the block model parameters. This connection provides a formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance, despite being simple linear classification rules, and are even competitive with belief propagation.
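
    The "weighted sums of landing probabilities" view can be written out directly: personalized PageRank is the α-geometric sum of the random walk's step distributions rooted at the seed set. The toy chain graph below is illustrative; the series form matches the closed-form linear solve.

```python
import numpy as np

def personalized_pagerank(P, seed, alpha, n_terms=200):
    """Personalized PageRank as an alpha-weighted sum of landing
    probabilities: pi = alpha * sum_k (1 - alpha)^k * seed @ P^k.

    P    : row-stochastic transition matrix of the random walk
    seed : probability distribution concentrated on the seed set
    """
    walk = seed.copy()
    pi = np.zeros_like(seed)
    for k in range(n_terms):
        pi += alpha * (1 - alpha) ** k * walk  # landing probabilities, step k
        walk = walk @ P
    return pi

# Toy 3-node chain graph: 0 - 1 - 2
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
seed = np.array([1.0, 0.0, 0.0])
pi = personalized_pagerank(P, seed, alpha=0.15)
```

Ranking nodes by `pi` recovers the seed set expansion heuristic the paper analyzes; the block-model result identifies the specific α for which these geometric weights are asymptotically optimal.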

  14. [Computerized ranking test in three French universities: Staff experience and students' feedback].

    Science.gov (United States)

    Roux, D; Meyer, G; Cymbalista, F; Bouaziz, J-D; Falgarone, G; Tesniere, A; Gervais, J; Cariou, A; Peffault de Latour, R; Marat, M; Moenaert, E; Guebli, T; Rodriguez, O; Lefort, A; Dreyfuss, D; Hajage, D; Ricard, J-D

    2016-03-01

    The year 2016 will be pivotal for the evaluation of French medical students with the introduction of the first computerized National Ranking Test (ECNi). The SIDES, an online electronic system for medical student evaluation, was created for this purpose. All the universities have already organized faculty exams, but few have organized a joint computerized ranking test at several universities simultaneously. We report our experience of the organization of a mock ECNi by the universities Paris Descartes, Paris Diderot and Paris 13. Docimological, administrative and technical working groups were created to organize this ECNi. Students in their fifth year of medical studies, who will be the first to sit the official ECNi in 2016, were invited to attend this mock exam, which represented more than 50% of what will be proposed in 2016. A final electronic questionnaire allowed a docimological and organizational evaluation by the students. An analysis of scores and rankings and their distribution on a 1000-point scale was performed. Sixty-four percent of enrolled students (i.e., 654) attended the three half-day exams. No difference in total score or ranking between the three universities was observed. Students' feedback was extremely positive. Normalized over 1000 points, the scores of 99% of students spanned only 300 points. Progressive clinical cases were the most discriminating test. The organization of a mock ECNi involving multiple universities was a docimological and technical success but required a substantial administrative, technical and teaching investment. Copyright © 2016 Société nationale française de médecine interne (SNFMI). Published by Elsevier SAS. All rights reserved.

  15. Inflation of type I error rates by unequal variances associated with parametric, nonparametric, and Rank-Transformation Tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data) is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.

  16. Impact of controlling the sum of error probability in the sequential probability ratio test

    Directory of Open Access Journals (Sweden)

    Bijoy Kumarr Pradhan

    2013-05-01

    A generalized modified method is proposed to control the sum of error probabilities in the sequential probability ratio test. The method minimizes the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, subject to the restriction that the sum of the error probabilities is a pre-assigned constant, in order to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed sample size procedure. The results are applied to the cases in which the random variate follows a normal law as well as a Bernoullian law.
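
    A minimal sketch of the underlying sequential probability ratio test with Wald's approximate boundaries, where α and β are chosen so that their sum equals a pre-assigned constant (here c = 0.10). The optimal split of c between α and β is the paper's subject and is not derived here:

```python
import math

def sprt_bernoulli(data, p0, p1, alpha, beta):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 with approximate
    boundaries log A = log((1-beta)/alpha), log B = log(beta/(1-alpha)).
    Returns ('H0' | 'H1' | 'continue', number of observations used)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(data, 1):
        # accumulate the log-likelihood ratio one observation at a time
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return 'H1', n
        if llr <= lower:
            return 'H0', n
    return 'continue', len(data)

# Error probabilities chosen so their sum is the pre-assigned constant c = 0.10
decision, n = sprt_bernoulli([1, 1, 1, 1, 1, 1, 1], 0.2, 0.8, 0.05, 0.05)
```

With these boundaries a run of three successes is already enough to cross the upper boundary and accept H1.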

  17. A multivariate rank test for comparing mass size distributions

    KAUST Repository

    Lombard, F.

    2012-04-01

    Particle size analyses of a raw material are commonplace in the mineral processing industry. Knowledge of particle size distributions is crucial in planning milling operations to enable an optimum degree of liberation of valuable mineral phases, to minimize plant losses due to an excess of oversize or undersize material, or to attain a size distribution that fits a contractual specification. The problem addressed in the present paper is how to test the equality of two or more underlying size distributions. A distinguishing feature of these size distributions is that they are not based on counts of individual particles. Rather, they are mass size distributions giving the fractions of the total mass of a sampled material lying in each of a number of size intervals. As such, the data are compositional in nature, using the terminology of Aitchison [1], that is, multivariate vectors whose components add to 100%. In the literature, various versions of Hotelling's T² have been used to compare matched pairs of such compositional data. In this paper, we propose a robust test procedure based on ranks as a competitor to Hotelling's T². In contrast to the latter statistic, the power of the rank test is not unduly affected by the presence of outliers or of zeros among the data. © 2012 Copyright Taylor and Francis Group, LLC.

  18. A Multiobjective Programming Method for Ranking All Units Based on Compensatory DEA Model

    Directory of Open Access Journals (Sweden)

    Haifang Cheng

    2014-01-01

    In order to rank all decision making units (DMUs) on the same basis, this paper proposes a multiobjective programming (MOP) model based on a compensatory data envelopment analysis (DEA) model to derive a common set of weights that can be used for the full ranking of all DMUs. We first revisit a compensatory DEA model for ranking all units, point out the existing problem in solving the model, and present an improved algorithm by which an approximate global optimal solution of the model can be obtained by solving a sequence of linear programs. Then, we apply the key idea of the compensatory DEA model to develop the MOP model, in which the objectives are to simultaneously maximize all common weights under the constraints that the sum of the efficiency values of all DMUs is equal to unity and the sum of all common weights is also equal to unity. In order to solve the MOP model, we transform it into a single objective programming (SOP) model using a fuzzy programming method and solve the SOP model using the proposed approximation algorithm. Two numerical examples are solved to illustrate the proposed ranking method.

  19. Testing rank-dependent utility theory for health outcomes.

    Science.gov (United States)

    Oliver, Adam

    2003-10-01

    Systematic violations of expected utility theory (EU) have been reported in the context of both money and health outcomes. Rank-dependent utility theory (RDU) is currently the most popular and influential alternative theory of choice under circumstances of risk. This paper reports a test of the descriptive performance of RDU compared to EU in the context of health. When one of the options is certain, violations of EU that can be explained by RDU are found. When both options are risky, no evidence that RDU is a descriptive improvement over EU is found, though this finding may be due to the low power of the tests. Copyright 2002 John Wiley & Sons, Ltd.
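
    The rank-dependent evaluation that distinguishes RDU from EU can be written out directly: outcomes are ranked best to worst and decision weights are the marginal increments of a transformed cumulative probability. The utility and weighting functions below are illustrative, not the paper's elicited ones:

```python
import math

def rdu_value(outcomes, probs, utility, weight):
    """Rank-dependent utility of a gamble. Outcomes are ranked from best
    to worst; outcome i gets the marginal transformed decumulative weight
    pi_i = w(p_1 + ... + p_i) - w(p_1 + ... + p_{i-1})."""
    ranked = sorted(zip(outcomes, probs), key=lambda op: -op[0])
    value, cum, w_prev = 0.0, 0.0, 0.0
    for x, p in ranked:
        cum += p
        w_cum = weight(cum)
        value += (w_cum - w_prev) * utility(x)
        w_prev = w_cum
    return value

# With the identity weighting function, RDU reduces to expected utility
ev = rdu_value([100, 0], [0.5, 0.5], lambda x: x, lambda p: p)          # 50.0

# A concave weighting w(p) = sqrt(p) overweights the best-ranked outcome
rdu = rdu_value([100, 0], [0.5, 0.5], lambda x: x, lambda p: math.sqrt(p))
```

Since w(1) = 1, the decision weights always sum to one; the divergence from EU comes entirely from how the weighting function bends the cumulative probabilities.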

  20. Rank-Constrained Beamforming for MIMO Cognitive Interference Channel

    Directory of Open Access Journals (Sweden)

    Duoying Zhang

    2016-01-01

    This paper considers the spectrum sharing multiple-input multiple-output (MIMO) cognitive interference channel, in which multiple primary users (PUs) coexist with multiple secondary users (SUs). An interference alignment (IA) approach is introduced that guarantees that secondary users access the licensed spectrum without causing harmful interference to the PUs. A rank-constrained beamforming design is proposed in which the rank of the interferences and of the desired signals is considered. The standard interference metric for the primary link, that is, interference temperature, is investigated and redesigned. The work provides a further improvement that optimizes the dimension of the interferences in the cognitive interference channel, instead of the power of the interference leakage. Due to the nonconvexity of the rank, the developed optimization problems are further approximated in convex form and are solved by choosing the transmitter precoder and receiver subspace iteratively. Numerical results show that the proposed designs can improve the achievable degrees of freedom (DoF) of the primary links and provide a considerable sum rate for both secondary and primary transmissions under the rank constraints.

  1. How Well Does the Sum Score Summarize the Test? Summability as a Measure of Internal Consistency

    NARCIS (Netherlands)

    Goeman, J.J.; de Jong, N.H.

    2018-01-01

    Many researchers use Cronbach's alpha to demonstrate internal consistency, even though it has been shown numerous times that Cronbach's alpha is not suitable for this. Because the intention of questionnaire and test constructors is to summarize the test by its overall sum score, we advocate

  2. Different goodness of fit tests for Rayleigh distribution in ranked set sampling

    Directory of Open Access Journals (Sweden)

    Amer Al-Omari

    2016-03-01

    In this paper, different goodness of fit tests for the Rayleigh distribution are considered based on simple random sampling (SRS) and ranked set sampling (RSS) techniques. The performance of the suggested estimators is evaluated in terms of the power of the tests by using Monte Carlo simulation. It is found that the suggested RSS tests perform better than their SRS counterparts.

  3. Linear-rank testing of a non-binary, responder-analysis, efficacy score to evaluate pharmacotherapies for substance use disorders.

    Science.gov (United States)

    Holmes, Tyson H; Li, Shou-Hua; McCann, David J

    2016-11-23

    The design of pharmacological trials for management of substance use disorders is shifting toward outcomes of successful individual-level behavior (abstinence or no heavy use). While binary success/failure analyses are common, McCann and Li (CNS Neurosci Ther 2012; 18: 414-418) introduced "number of beyond-threshold weeks of success" (NOBWOS) scores to avoid dichotomized outcomes. NOBWOS scoring employs an efficacy "hurdle," with values reflecting duration of success. Here, we evaluate NOBWOS scores rigorously. Formal analysis of the mathematical structure of NOBWOS scores is followed by simulation studies spanning diverse conditions to assess the operating characteristics of five linear-rank tests on NOBWOS scores. The simulations include an assessment of Fisher's exact test applied to the hurdle component. On average, statistical power was approximately equal across the five linear-rank tests. Under none of the conditions examined did Fisher's exact test exhibit greater statistical power than any of the linear-rank tests. The linear-rank tests provide good Type I and Type II error control for comparing distributions of NOBWOS scores between groups (e.g., active vs. placebo). All methods were applied in re-analyses of data from four clinical trials of differing lengths and substances of abuse. The linear-rank tests agreed across all trials in rejecting (or not) their null hypothesis (equality of distributions) at ≤ 0.05. © The Author(s) 2016.
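
    As a generic illustration of the linear-rank family evaluated here, a minimal Wilcoxon rank-sum test with the usual normal approximation can be sketched as follows. This is a hedged sketch, not the authors' NOBWOS-specific code: the function name and sample values are invented, and tie handling (average ranks, variance correction) is omitted for brevity.

    ```python
    import math

    def rank_sum_test(x, y):
        """Wilcoxon rank-sum test via a normal approximation.

        Assumes no ties; a full implementation would use average ranks
        and a tie-corrected variance.
        """
        pooled = sorted((v, grp) for grp in (0, 1) for v in (x, y)[grp])
        # W = sum of the 1-based ranks of the first sample in the pooled data
        w = sum(i + 1 for i, (v, grp) in enumerate(pooled) if grp == 0)
        n1, n2 = len(x), len(y)
        mean = n1 * (n1 + n2 + 1) / 2.0
        var = n1 * n2 * (n1 + n2 + 1) / 12.0
        z = (w - mean) / math.sqrt(var)
        # two-sided p-value from the standard normal distribution
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return w, z, p

    w, z, p = rank_sum_test([1, 3, 5], [2, 4, 6])
    ```

    For small samples, as in short trials, an exact permutation null distribution would normally replace the normal approximation.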

  4. Ranking nano-enabled hybrid media for simultaneous removal of contaminants with different chemistries: Pseudo-equilibrium sorption tests versus column tests.

    Science.gov (United States)

    Custodio, Tomas; Garcia, Jose; Markovski, Jasmina; McKay Gifford, James; Hristovski, Kiril D; Olson, Larry W

    2017-12-15

    The underlying hypothesis of this study was that pseudo-equilibrium and column testing conditions would provide the same sorbent ranking trends, although the values of the sorbents' performance descriptors (e.g. sorption capacity) may vary because of the different kinetics and competition effects induced by the two testing approaches. To address this hypothesis, nano-enabled hybrid media were fabricated and their removal performance was assessed for two model contaminants under multi-point batch pseudo-equilibrium and continuous-flow conditions. Calculation of simultaneous removal capacity (SRC) indices demonstrated that the more resource-demanding continuous-flow tests generate the same performance rankings as the simpler pseudo-equilibrium tests. Furthermore, the continuous overlap between the 98% confidence boundaries for each SRC index trend not only validated the hypothesis that both testing conditions provide the same ranking trends, but also indicated that the SRC indices are statistically the same for each medium, regardless of the employed method. In scenarios where rapid screening of new media is required to obtain the best-performing synthesis formulation, pseudo-equilibrium tests proved to be reliable. Because kinetics-induced effects on sorption capacity must not be neglected, the more resource-demanding column tests could be conducted only with the top-performing media that exhibit the highest sorption capacity. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    Science.gov (United States)

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.

  6. Tensor rank is not multiplicative under the tensor product

    NARCIS (Netherlands)

    M. Christandl (Matthias); A. K. Jensen (Asger Kjærulff); J. Zuiddam (Jeroen)

    2017-01-01

    The tensor rank of a tensor is the smallest number r such that the tensor can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an l-tensor. The tensor product of s and t is a (k + l)-tensor (not to be confused with the "tensor Kronecker product" used in

  7. Selection of suitable e-learning approach using TOPSIS technique with best ranked criteria weights

    Science.gov (United States)

    Mohammed, Husam Jasim; Kasim, Maznah Mat; Shaharanee, Izwan Nizal Mohd

    2017-11-01

    This paper compares the performance of four rank-based weighting techniques, Rank Sum (RS), Rank Reciprocal (RR), Rank Exponent (RE), and Rank Order Centroid (ROC), on five identified e-learning criteria to select the best weighting method. A total of 35 experts at a public university in Malaysia were asked to rank the criteria and to evaluate five e-learning approaches: blended learning, flipped classroom, ICT-supported face-to-face learning, synchronous learning, and asynchronous learning. The best-ranked criteria weights, defined as the weights with the least total absolute difference from the geometric mean of all weights, were then used to select the most suitable e-learning approach with the TOPSIS method. The results show that the RR weights are the best, and that the flipped classroom is the most suitable approach. The paper develops a decision framework to aid decision makers (DMs) in choosing the most suitable weighting method for solving MCDM problems.
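
    The rank-based weighting methods compared in this record follow standard closed forms: for n criteria with rank i (1 = most important), RS gives weights proportional to n - i + 1, RR normalizes the reciprocals 1/i, and ROC averages the tail harmonic sums. A minimal sketch of three of them (RE additionally requires a user-chosen exponent and is omitted; the function names are ours, not the paper's):

    ```python
    def rank_sum_weights(n):
        # RS: w_i proportional to (n - i + 1), normalized to sum to 1
        total = n * (n + 1) / 2.0
        return [(n - i + 1) / total for i in range(1, n + 1)]

    def rank_reciprocal_weights(n):
        # RR: w_i proportional to 1/i
        s = sum(1.0 / i for i in range(1, n + 1))
        return [(1.0 / i) / s for i in range(1, n + 1)]

    def rank_order_centroid_weights(n):
        # ROC: w_i = (1/n) * sum_{k=i}^{n} 1/k
        return [sum(1.0 / k for k in range(i, n + 1)) / n
                for i in range(1, n + 1)]
    ```

    All three produce weight vectors summing to 1; ROC concentrates the most weight on the top-ranked criterion (e.g. for n = 3 it assigns 11/18 to rank 1).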

  8. Rankings of International Achievement Test Performance and Economic Strength: Correlation or Conjecture?

    Science.gov (United States)

    Tienken, Christopher H.

    2008-01-01

    Examining a popular political notion, this article presents results from a series of Spearman Rho calculations conducted to investigate relationships between countries' rankings on international tests of mathematics and science and future economic competitiveness as measured by the 2006 World Economic Forum's Growth Competitiveness Index (GCI).…

  9. Electronuclear sum rules for the lightest nuclei

    International Nuclear Information System (INIS)

    Efros, V.D.

    1992-01-01

    It is shown that the model-independent longitudinal electronuclear sum rules for nuclei with A = 3 and A = 4 have an accuracy on the order of a percent in the traditional single-nucleon approximation with free nucleons for the nuclear charge-density operator. This makes it possible to test this approximation by using these sum rules. The longitudinal sum rules for A = 3 and A = 4 are calculated using the wave functions of these nuclei corresponding to a large set of realistic NN interactions. The values of the model-independent sum rules lie in the range of values calculated by this method. Model-independent expressions are obtained for the transverse sum rules for nuclei with A = 3 and A = 4. These sum rules are calculated using a large set of realistic wave functions of these nuclei. The contribution of the convection current and the changes in the results for different versions of realistic NN forces are given. 29 refs., 4 tabs

  10. VaRank: a simple and powerful tool for ranking genetic variants

    Directory of Open Access Journals (Sweden)

    Véronique Geoffroy

    2015-03-01

    Full Text Available Background. Most genetic disorders are caused by single nucleotide variations (SNVs or small insertion/deletions (indels. High throughput sequencing has broadened the catalogue of human variation, including common polymorphisms, rare variations or disease causing mutations. However, identifying one variation among hundreds or thousands of others is still a complex task for biologists, geneticists and clinicians.Results. We have developed VaRank, a command-line tool for the ranking of genetic variants detected by high-throughput sequencing. VaRank scores and prioritizes variants annotated either by Alamut Batch or SnpEff. A barcode allows users to quickly view the presence/absence of variants (with homozygote/heterozygote status in analyzed samples. VaRank supports the commonly used VCF input format for variants analysis thus allowing it to be easily integrated into NGS bioinformatics analysis pipelines. VaRank has been successfully applied to disease-gene identification as well as to molecular diagnostics setup for several hundred patients.Conclusions. VaRank is implemented in Tcl/Tk, a scripting language which is platform-independent but has been tested only on Unix environment. The source code is available under the GNU GPL, and together with sample data and detailed documentation can be downloaded from http://www.lbgi.fr/VaRank/.

  11. An electrophysiological signature of summed similarity in visual working memory.

    Science.gov (United States)

    van Vugt, Marieke K; Sekuler, Robert; Wilson, Hugh R; Kahana, Michael J

    2013-05-01

    Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory experiments using electrocorticographic/depth electrode recordings and scalp electroencephalography. On each experimental trial, participants judged whether a test face had been among a small set of recently studied faces. Consistent with summed-similarity theory, participants' tendency to endorse a test item increased as a function of its summed similarity to the items on the just-studied list. To characterize this behavioral effect of summed similarity, we successfully fit a summed-similarity model to individual participant data from each experiment. Using the parameters determined from fitting the summed-similarity model to the behavioral data, we examined the relation between summed similarity and brain activity. We found that 4-9 Hz theta activity in the medial temporal lobe and 2-4 Hz delta activity recorded from frontal and parietal cortices increased with summed similarity. These findings demonstrate direct neural correlates of the similarity computations that form the foundation of several major cognitive theories of human recognition memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  12. Identifying the most influential spreaders in complex networks by an Extended Local K-Shell Sum

    Science.gov (United States)

    Yang, Fan; Zhang, Ruisheng; Yang, Zhao; Hu, Rongjing; Li, Mengtian; Yuan, Yongna; Li, Keqin

    Identifying influential spreaders is crucial for developing strategies to control the spreading process on complex networks. Following the well-known K-Shell (KS) decomposition, several improved measures have been proposed. However, these measures cannot identify the most influential spreaders accurately. In this paper, we define a Local K-Shell Sum (LKSS) by calculating the sum of the K-Shell indices of the neighbors within 2 hops of a given node. Based on the LKSS, we propose an Extended Local K-Shell Sum (ELKSS) centrality to rank spreaders. The ELKSS is defined as the sum of the LKSS of the nearest neighbors of a given node. By assuming that the spreading process on networks follows the Susceptible-Infectious-Recovered (SIR) model, we perform extensive simulations on a series of real networks to compare the performance of the ELKSS centrality with that of six other measures. The results show that the ELKSS centrality performs better than the six other measures at distinguishing the spreading ability of nodes and identifying the most influential spreaders accurately.
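
    A minimal sketch of the computation as described, K-Shell indices by iterative peeling, LKSS summed over the 2-hop neighborhood, and ELKSS summed over the nearest neighbors, on an invented toy graph (a triangle a-b-c with a pendant node p attached to a); the function names are our assumptions, not the authors':

    ```python
    def k_shell_indices(adj):
        """K-Shell decomposition by iteratively peeling minimum-degree nodes."""
        deg = {v: len(ns) for v, ns in adj.items()}
        alive = set(adj)
        ks = {}
        k = 0
        while alive:
            while True:
                peel = [v for v in alive if deg[v] <= k]
                if not peel:
                    break
                for v in peel:
                    ks[v] = k
                    alive.discard(v)
                    for u in adj[v]:
                        if u in alive:
                            deg[u] -= 1
            k += 1
        return ks

    def within_two_hops(adj, v):
        # all nodes reachable in 1 or 2 hops from v, excluding v itself
        one = set(adj[v])
        two = set()
        for u in one:
            two |= set(adj[u])
        return (one | two) - {v}

    def elkss(adj):
        ks = k_shell_indices(adj)
        # LKSS: sum of K-Shell indices over the 2-hop neighborhood
        lkss = {v: sum(ks[u] for u in within_two_hops(adj, v)) for v in adj}
        # ELKSS: sum of LKSS over the nearest neighbors
        return {v: sum(lkss[u] for u in adj[v]) for v in adj}

    # toy graph: triangle a-b-c plus pendant node p attached to a
    adj = {'p': ['a'], 'a': ['p', 'b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b']}
    scores = elkss(adj)
    ```

    On this toy graph the triangle nodes land in shell 2 and the pendant in shell 1, and the hub a receives the highest ELKSS, matching the intuition that it is the best spreader.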

  13. Electronuclear sum rules

    International Nuclear Information System (INIS)

    Arenhoevel, H.; Drechsel, D.; Weber, H.J.

    1978-01-01

    Generalized sum rules are derived by integrating the electromagnetic structure functions along lines of constant ratio of momentum and energy transfer. For non-relativistic systems these sum rules are related to the conventional photonuclear sum rules by a scaling transformation. The generalized sum rules are connected with the absorptive part of the forward scattering amplitude of virtual photons. The analytic structure of the scattering amplitudes and the possible existence of dispersion relations have been investigated in schematic relativistic and non-relativistic models. While for the non-relativistic case analyticity does not hold, the relativistic scattering amplitude is analytical for time-like (but not for space-like) photons and relations similar to the Gell-Mann-Goldberger-Thirring sum rule exist. (Auth.)

  14. Coulomb sum rules in the relativistic Fermi gas model

    International Nuclear Information System (INIS)

    Do Dang, G.; L'Huillier, M.; Nguyen Giai, Van.

    1986-11-01

    Coulomb sum rules are studied in the framework of the Fermi gas model. A distinction is made between mathematical and observable sum rules. Differences between non-relativistic and relativistic Fermi gas predictions are stressed. A method to deduce a Coulomb response function from the longitudinal response is proposed and tested numerically. This method is applied to the 40Ca data to obtain the experimental Coulomb sum rule as a function of momentum transfer

  15. Faculty Rank System, Research Motivation, and Faculty Research Productivity: Measure Refinement and Theory Testing.

    Science.gov (United States)

    Tien, Flora F.; Blackburn, Robert T.

    1996-01-01

    A study explored the relationship between the traditional system of college faculty rank and faculty research productivity from the perspectives of behavioral reinforcement theory and selection function. Six hypotheses were generated and tested, using data from a 1989 national faculty survey. Results failed to support completely either the…

  16. A simple surrogate test method to rank the wear performance of prospective ceramic materials under hip prosthesis edge-loading conditions.

    Science.gov (United States)

    Sanders, Anthony P; Brannon, Rebecca M

    2014-02-01

    This research developed a novel test method for evaluating the wear resistance of ceramic materials under severe contact stresses simulating edge loading in prosthetic hip bearings. Simply shaped test specimens - a cylinder and a spheroid - were designed as surrogates for an edge-loaded head/liner implant pair. Equivalency of the simpler specimens was assured in the sense that their theoretical contact dimensions and pressures were identical, according to Hertzian contact theory, to those of the head/liner pair. The surrogates were fabricated in three ceramic materials: Al2O3, zirconia-toughened alumina (ZTA), and ZrO2. They were mated in three different material pairs and reciprocated under a 200 N normal contact force for 1000-2000 cycles, which created small wear scars. The material pairs were ranked by their wear resistance, quantified by the volume of abraded material measured using an interferometer. Similar tests were performed on edge-loaded hip implants in the same material pairs. The surrogates replicated the wear rankings of their full-scale implant counterparts and mimicked their friction force trends. The results show that a proxy test using simple test specimens can validly rank the wear performance of ceramic materials under severe, edge-loading contact stresses, while replicating the beginning stage of edge-loading wear. This simple wear test is therefore potentially useful for screening and ranking new, prospective materials early in their development, to produce optimized candidates for more complicated full-scale hip simulator wear tests. Copyright © 2013 Wiley Periodicals, Inc.

  17. Composite Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-03-07

    In this chapter, we extend the previous results of Chap. 2 to the more general case of composite finite sums. We describe what composite finite sums are and how their analysis can be reduced to the analysis of simple finite sums using the chain rule. We apply these techniques, next, on numerical integration and on some identities of Ramanujan.

  19. Generalization of the Lord-Wingersky Algorithm to Computing the Distribution of Summed Test Scores Based on Real-Number Item Scores

    Science.gov (United States)

    Kim, Seonghoon

    2013-01-01

    With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…

  20. Monte Carlo Simulations Comparing Fisher Exact Test and Unequal Variances t Test for Analysis of Differences Between Groups in Brief Hospital Lengths of Stay.

    Science.gov (United States)

    Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U

    2017-12-01

    We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) the percentage of hospital LOS that are overnight. These 2 end points are suitable for when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. The unequal variances t test (Welch method) and the Fisher exact test were both conservative (ie, type I error rate less than the nominal level). The Wilcoxon rank sum test was included as a comparator; its type I error rates did not differ from the nominal levels of 0.05 or 0.01. The Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; the estimated odds ratio for obtaining P < .05 with the Fisher exact test versus the unequal variances t test was 1.94 (95% confidence interval, 1.31-3.01). The Fisher exact test and the Wilcoxon-Mann-Whitney test had comparable statistical power in terms of differentiating LOS between hospitals. For studies with LOS as a secondary end point of economic interest, there is currently considerable interest in planning the analysis around the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either the Welch method or the Wilcoxon rank sum test.
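
    The binary end point analysis rests on the Fisher exact test for a 2x2 table, which can be sketched from first principles via the hypergeometric distribution: sum the probabilities of all tables with the observed margins whose probability does not exceed that of the observed table. This is a generic sketch with invented counts (e.g. short vs long stays at two hospitals), not the authors' resampling code:

    ```python
    from math import comb

    def fisher_exact_two_sided(a, b, c, d):
        """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
        n = a + b + c + d
        row1, col1 = a + b, a + c
        denom = comb(n, col1)

        def prob(x):
            # hypergeometric probability of x successes in the first row
            return comb(row1, x) * comb(n - row1, col1 - x) / denom

        lo = max(0, col1 - (n - row1))
        hi = min(row1, col1)
        p_obs = prob(a)
        # sum over tables at least as extreme as the observed one
        return sum(prob(x) for x in range(lo, hi + 1)
                   if prob(x) <= p_obs * (1 + 1e-12))

    # hypothetical counts: 3 of 4 short stays at hospital A vs 1 of 4 at B
    p = fisher_exact_two_sided(3, 1, 1, 3)
    ```

    For the table [[3, 1], [1, 3]] this yields p = 34/70 ≈ 0.486, matching the exact hypergeometric enumeration.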

  1. Standard test method for ranking resistance of materials to sliding wear using block-on-ring wear test

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This test method covers laboratory procedures for determining the resistance of materials to sliding wear. The test utilizes a block-on-ring friction and wear testing machine to rank pairs of materials according to their sliding wear characteristics under various conditions. 1.2 An important attribute of this test is that it is very flexible. Any material that can be fabricated into, or applied to, blocks and rings can be tested. Thus, the potential materials combinations are endless. However, the interlaboratory testing has been limited to metals. In addition, the test can be run with various lubricants, liquids, or gaseous atmospheres, as desired, to simulate service conditions. Rotational speed and load can also be varied to better correspond to service requirements. 1.3 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. Wear test results are reported as the volume loss in cubic millimetres for both the block and ring. Materials...

  2. Simple Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-01-01

    We will begin our treatment of summability calculus by analyzing what will be referred to, throughout this book, as simple finite sums. Even though the results of this chapter are particular cases of the more general results presented in later chapters, they are important to start with for a few reasons. First, this chapter serves as an excellent introduction to what summability calculus can markedly accomplish. Second, simple finite sums are encountered more often and, hence, they deserve special treatment. Third, the results presented in this chapter for simple finite sums will, themselves, be used as building blocks for deriving the most general results in subsequent chapters. Among others, we establish that fractional finite sums are well-defined mathematical objects and show how various identities related to the Euler constant as well as the Riemann zeta function can actually be derived in an elementary manner using fractional finite sums.

  4. A bayesian approach to QCD sum rules

    International Nuclear Information System (INIS)

    Gubler, Philipp; Oka, Makoto

    2010-01-01

    QCD sum rules are analyzed with the help of the Maximum Entropy Method. We develop a new technique based on Bayesian inference theory, which allows us to directly obtain the spectral function of a given correlator from the results of the operator product expansion given in the deep Euclidean 4-momentum region. The most important advantage of this approach is that one does not have to make any a priori assumptions about the functional form of the spectral function, such as the 'pole + continuum' ansatz that has been widely used in QCD sum rule studies, but only needs to specify the asymptotic values of the spectral function at high and low energies as an input. As a first test of the applicability of this method, we have analyzed the sum rules of the ρ-meson, a case where the sum rules are known to work well. Our results show a clear peak structure in the region of the experimental mass of the ρ-meson. We thus demonstrate that the Maximum Entropy Method is successfully applied and that it is an efficient tool in the analysis of QCD sum rules. (author)

  5. Statistical methods for ranking data

    CERN Document Server

    Alvo, Mayer

    2014-01-01

    This book introduces advanced undergraduate, graduate students and practitioners to statistical methods for ranking data. An important aspect of nonparametric statistics is oriented towards the use of ranking data. Rank correlation is defined through the notion of distance functions and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, EM algorithm and factor analysis. This book deals with statistical methods used for analyzing such data and provides a novel and unifying approach for hypotheses testing. The techniques described in the book are illustrated with examples and the statistical software is provided on the authors’ website.

  6. Expansion around half-integer values, binomial sums, and inverse binomial sums

    International Nuclear Information System (INIS)

    Weinzierl, Stefan

    2004-01-01

    I consider the expansion of transcendental functions in a small parameter around rational numbers. This includes in particular the expansion around half-integer values. I present algorithms which are suitable for an implementation within a symbolic computer algebra system. The method is an extension of the technique of nested sums. The algorithms allow in addition the evaluation of binomial sums, inverse binomial sums and generalizations thereof

  7. About the use of rank transformation in sensitivity analysis of model output

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Sobol', Ilya M

    1995-01-01

    Rank transformations are frequently employed in numerical experiments involving a computational model, especially in the context of sensitivity and uncertainty analyses. Response surface replacement and parameter screening are tasks which may benefit from a rank transformation. Ranks can cope with nonlinear (albeit monotonic) input-output distributions, allowing the use of linear regression techniques. Rank-transformed statistics are more robust, and provide a useful solution in the presence of long-tailed input and output distributions. As is known to practitioners, care must be employed when interpreting the results of such analyses, as any conclusion drawn using ranks does not translate easily to the original model. In the present note an heuristic approach is taken to explore, by way of practical examples, the effect of a rank transformation on the outcome of a sensitivity analysis. An attempt is made to identify trends, and to correlate these effects to a model taxonomy. Employing sensitivity indices, whereby the total variance of the model output is decomposed into a sum of terms of increasing dimensionality, we show that the main effect of the rank transformation is to increase the relative weight of the first order terms (the 'main effects'), at the expense of the 'interactions' and 'higher order interactions'. As a result the influence of those parameters which influence the output mostly by way of interactions may be overlooked in an analysis based on the ranks. This difficulty increases with the dimensionality of the problem, and may lead to the failure of a rank-based sensitivity analysis. We suggest that the models can be ranked, with respect to the complexity of their input-output relationship, by means of an 'Association' index I_y. I_y may complement the usual model coefficient of determination R_y^2 as a measure of model complexity for the purpose of uncertainty and sensitivity analysis
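
    The core observation, that ranks linearize monotonic relationships, can be illustrated in a few lines: the Pearson correlation computed on ranks (i.e. the Spearman correlation) equals 1 for any strictly increasing relationship, while the raw Pearson correlation does not. A self-contained sketch with invented data and no tie handling:

    ```python
    import math

    def ranks(xs):
        # 1-based ranks; ties are not handled in this sketch
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for pos, i in enumerate(order):
            r[i] = pos + 1.0
        return r

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5

    x = [0.1 * i for i in range(20)]
    y = [math.exp(3 * v) for v in x]      # strongly nonlinear but monotonic
    r_raw = pearson(x, y)                  # well below 1
    r_rank = pearson(ranks(x), ranks(y))   # Spearman correlation: exactly 1
    ```

    This is why rank-transformed regressions capture monotonic effects so well, and also why, as the note argues, conclusions drawn on the ranks do not automatically carry back to the original model.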

  8. Ranking the adaptive capacity of nations to climate change when socio-political goals are explicit

    International Nuclear Information System (INIS)

    Haddad, B.M.

    2005-01-01

    The typical categories for measuring national adaptive capacity to climate change include a nation's wealth, technology, education, information, skills, infrastructure, access to resources, and management capabilities. Resulting rankings predictably mirror more general rankings of economic development, such as the Human Development Index. This approach is incomplete since it does not consider the normative or motivational context of adaptation. For what purpose or toward what goal does a nation aspire, and in that context, what is its adaptive capacity? This paper posits 11 possible national socio-political goals that fall into the three categories of teleological legitimacy, procedural legitimacy, and norm-based decision rules. A model that sorts nations in terms of adaptive capacity based on national socio-political aspirations is presented. While the aspiration of maximizing summed utility matches typical existing rankings, alternative aspirations, including contractarian liberalism, technocratic management, and dictatorial/religious rule alter the rankings. An example describes how this research can potentially inform how priorities are set for international assistance for climate change adaptation. (author)

  9. Rank-based testing of equal survivorship based on cross-sectional survival data with or without prospective follow-up.

    Science.gov (United States)

    Chan, Kwun Chuen Gary; Qin, Jing

    2015-10-01

    Existing linear rank statistics cannot be applied to cross-sectional survival data without follow-up, since all subjects are essentially censored. However, partial survival information is available from backward recurrence times, which are frequently collected in health surveys without prospective follow-up. Under length-biased sampling, a class of linear rank statistics is proposed based only on backward recurrence times without any prospective follow-up. When follow-up data are available, the proposed rank statistic and a conventional rank statistic that utilizes follow-up information from the same sample are shown to be asymptotically independent. We discuss four ways to combine these two statistics when follow-up is present. Simulations show that all combined statistics have substantially improved power compared with conventional rank statistics, and that a Mantel-Haenszel test performed best among the proposed statistics. The method is applied to a cross-sectional health survey without follow-up and to a study of Alzheimer's disease with prospective follow-up. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. A New Sum Analogous to Gauss Sums and Its Fourth Power Mean

    Directory of Open Access Journals (Sweden)

    Shaofeng Ru

    2014-01-01

    Full Text Available The main purpose of this paper is to use analytic methods and the properties of Gauss sums to study the computational problem of one kind of new sum analogous to Gauss sums, and to give an interesting fourth power mean and a sharp upper bound estimate for it.

  11. RankProd 2.0: a refactored bioconductor package for detecting differentially expressed features in molecular profiling datasets.

    Science.gov (United States)

    Del Carratore, Francesco; Jankevics, Andris; Eisinga, Rob; Heskes, Tom; Hong, Fangxin; Breitling, Rainer

    2017-09-01

    The Rank Product (RP) is a statistical technique widely used to detect differentially expressed features in molecular profiling experiments such as transcriptomics, metabolomics and proteomics studies. An implementation of the RP and the closely related Rank Sum (RS) statistics has been available in the RankProd Bioconductor package for several years. However, several recent advances in the understanding of the statistical foundations of the method have made a complete refactoring of the existing package desirable. We implemented a completely refactored version of the RankProd package, which provides a more principled implementation of the statistics for unpaired datasets. Moreover, the permutation-based P-value estimation methods have been replaced by exact methods, providing faster and more accurate results. RankProd 2.0 is available at Bioconductor (https://www.bioconductor.org/packages/devel/bioc/html/RankProd.html) and as part of the mzMatch pipeline (http://www.mzmatch.sourceforge.net). Contact: rainer.breitling@manchester.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
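
    The RP statistic itself, the geometric mean of a feature's within-replicate ranks, is straightforward to sketch. This is a toy illustration of the definition, not the RankProd implementation (which additionally supplies exact P-value computation); the function name and matrix are invented:

    ```python
    def rank_product(matrix):
        """Rank Product per feature: geometric mean of within-replicate ranks.

        matrix[i][j] is the value of feature i in replicate j; rank 1 is the
        smallest value in that replicate (down-regulation; negate the values
        to score up-regulation). Ties are not handled in this sketch.
        """
        n_feat = len(matrix)
        n_rep = len(matrix[0])
        rp = [1.0] * n_feat
        for j in range(n_rep):
            # rank features within replicate j and multiply into the product
            order = sorted(range(n_feat), key=lambda i: matrix[i][j])
            for pos, i in enumerate(order):
                rp[i] *= pos + 1
        return [v ** (1.0 / n_rep) for v in rp]

    # toy data: 3 features measured in 2 replicates
    matrix = [[1, 2], [3, 1], [2, 3]]
    rp = rank_product(matrix)
    ```

    A feature that consistently ranks near the top in every replicate receives a small RP, which is then assessed against a null distribution (by permutation, or exactly, as in RankProd 2.0).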

  12. A Generalized Kruskal-Wallis Test Incorporating Group Uncertainty with Application to Genetic Association Studies

    OpenAIRE

    Acar, Elif F.; Sun, Lei

    2012-01-01

    Motivated by genetic association studies of SNPs with genotype uncertainty, we propose a generalization of the Kruskal-Wallis test that incorporates group uncertainty when comparing k samples. The extended test statistic is based on probability-weighted rank-sums and follows an asymptotic chi-square distribution with k-1 degrees of freedom under the null hypothesis. Simulation studies confirm the validity and robustness of the proposed test in finite samples. Application to a genome-wide asso...
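
    The probability-weighted rank sums at the heart of the extension can be sketched as follows; with hard 0/1 group memberships they reduce to the ordinary Kruskal-Wallis rank sums (the centering and variance terms of the full test statistic are omitted here):

```python
import numpy as np
from scipy.stats import rankdata

def weighted_rank_sums(y, P):
    """Probability-weighted rank sums: P[i, j] is the probability that
    observation i belongs to group j (rows sum to 1).  With hard 0/1
    memberships this reduces to the classic per-group rank sums."""
    r = rankdata(y)    # mid-ranks of the pooled sample
    return P.T @ r     # one weighted rank sum per group

y = np.array([3.1, 1.2, 5.4, 2.2, 4.8, 0.9])
# hard memberships: observations 0-2 in group 0, observations 3-5 in group 1
P_hard = np.array([[1, 0], [1, 0], [1, 0],
                   [0, 1], [0, 1], [0, 1]], dtype=float)
R = weighted_rank_sums(y, P_hard)
```

    The two weighted rank sums are 12 and 9, and they add up to n(n+1)/2 = 21, as ordinary rank sums must.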

  13. Importance measures in risk-informed decision making: Ranking, optimisation and configuration control

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K., E-mail: jussi.vaurio@pp1.inet.fi [Prometh Solutions, Hiihtaejaenkuja 3K, 06100 Porvoo (Finland)

    2011-11-15

    This paper describes roles, extensions and applications of importance measures of components and configurations for making risk-informed decisions relevant to system operations, maintenance and safety. Basic importance measures and their relationships are described for independent and mutually exclusive events and for groups of events associated with common cause failures. The roles of importances are described mainly in two groups of activities: (a) ranking safety significance of systems, structures, components and human actions for preventive safety assurance activities, and (b) making decisions about permissible permanent and temporary configurations and allowed configuration times for regulation, technical specifications and for on-line risk monitoring. Criticality importance and sums of criticalities turn out to be appropriate measures for ranking and optimization. Several advantages are pointed out and consistent ranking of pipe segments for in-service inspection is provided as an example. Risk increase factor and its generalization risk gain are most appropriately used to assess corrective priorities and acceptability of a situation when components are already failed or when planning to take one or more components out of service for maintenance. Precise definitions are introduced for multi-failure configurations and it is shown how they can be assessed under uncertainties, in particular when common cause failures or success states may be involved. A general weighted average method is compared to other candidate methods in benchmark cases. It is the preferable method for prediction when a momentary configuration is known or only partially known. Potential applications and optimization of allowed outage times are described. The results show how to generalize and apply various importance measures to ranking and optimization and how to manage configurations in uncertain multi-failure situations. - Highlights: > Rigorous methods developed for using importances

  14. Importance measures in risk-informed decision making: Ranking, optimisation and configuration control

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    2011-01-01

    This paper describes roles, extensions and applications of importance measures of components and configurations for making risk-informed decisions relevant to system operations, maintenance and safety. Basic importance measures and their relationships are described for independent and mutually exclusive events and for groups of events associated with common cause failures. The roles of importances are described mainly in two groups of activities: (a) ranking safety significance of systems, structures, components and human actions for preventive safety assurance activities, and (b) making decisions about permissible permanent and temporary configurations and allowed configuration times for regulation, technical specifications and for on-line risk monitoring. Criticality importance and sums of criticalities turn out to be appropriate measures for ranking and optimization. Several advantages are pointed out and consistent ranking of pipe segments for in-service inspection is provided as an example. Risk increase factor and its generalization risk gain are most appropriately used to assess corrective priorities and acceptability of a situation when components are already failed or when planning to take one or more components out of service for maintenance. Precise definitions are introduced for multi-failure configurations and it is shown how they can be assessed under uncertainties, in particular when common cause failures or success states may be involved. A general weighted average method is compared to other candidate methods in benchmark cases. It is the preferable method for prediction when a momentary configuration is known or only partially known. Potential applications and optimization of allowed outage times are described. The results show how to generalize and apply various importance measures to ranking and optimization and how to manage configurations in uncertain multi-failure situations. - Highlights: → Rigorous methods developed for using importances

  15. Finding Sums for an Infinite Class of Alternating Series

    Science.gov (United States)

    Chen, Zhibo; Wei, Sheng; Xiao, Xuerong

    2012-01-01

    Calculus II students know that many alternating series are convergent by the Alternating Series Test. However, they know few alternating series (except geometric series and some trivial ones) for which they can find the sum. In this article, we present a method that enables the students to find sums for infinitely many alternating series in the…
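
    A classical instance students can check numerically is the alternating harmonic series, one of the few alternating series with a well-known closed-form sum:

```python
import math

# Partial sum of the alternating harmonic series sum_{n>=1} (-1)^(n+1)/n,
# whose sum is ln 2.  After an even number of terms the partial sum lies
# below ln 2, and the truncation error is on the order of 1/(2N).
N = 1_000_000
s = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
```

    With a million terms the partial sum agrees with ln 2 to about six decimal places.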

  16. Rank Dynamics

    Science.gov (United States)

    Gershenson, Carlos

    Studies of rank distributions have been popular for decades, especially since the work of Zipf. For example, if we rank words of a given language by use frequency (the most used word in English is 'the', rank 1; the second most common word is 'of', rank 2), the distribution can be approximated roughly with a power law. The same applies to cities (the most populated city in a country ranks first), earthquakes, metabolism, the Internet, and dozens of other phenomena. We recently proposed ``rank diversity'' to measure how ranks change in time, using the Google Books Ngram dataset. Studying six languages between 1800 and 2009, we found that the rank diversity curves of languages are universal, adjusted with a sigmoid on a log-normal scale. We are studying several other datasets (sports, economies, social systems, urban systems, earthquakes, artificial life). Rank diversity seems to be universal, independent of the shape of the rank distribution. I will present our work in progress towards a general description of the features of rank change in time, along with simple models which reproduce it.
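
    The rank diversity measure itself is easy to compute. A toy sketch, with the definition paraphrased as the fraction of distinct elements that occupy rank k across the time slices, and three hypothetical yearly word rankings in place of the Ngram data:

```python
# Rank diversity d(k): number of distinct elements seen at rank k over
# the T time slices, divided by T.  Rank 0 is held by the same word in
# every slice (d = 1/3 here); ranks 1 and 2 each see two words (d = 2/3).
def rank_diversity(rankings):
    T = len(rankings)
    K = len(rankings[0])
    return [len({r[k] for r in rankings}) / T for k in range(K)]

rankings = [
    ["the", "of", "and"],   # year 1: words ordered by frequency
    ["the", "and", "of"],   # year 2
    ["the", "of", "and"],   # year 3
]
d = rank_diversity(rankings)
```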

  17. PageRank tracker: from ranking to tracking.

    Science.gov (United States)

    Gong, Chen; Fu, Keren; Loza, Artur; Wu, Qiang; Liu, Jia; Yang, Jie

    2014-06-01

    Video object tracking is widely used in many real-world applications, and it has been extensively studied for over two decades. However, tracking robustness is still an issue in most existing methods, due to the difficulties with adaptation to environmental or target changes. In order to improve adaptability, this paper formulates the tracking process as a ranking problem, and the PageRank algorithm, which is a well-known webpage ranking algorithm used by Google, is applied. Labeled and unlabeled samples in tracking application are analogous to query webpages and the webpages to be ranked, respectively. Therefore, determining the target is equivalent to finding the unlabeled sample that is the most associated with existing labeled set. We modify the conventional PageRank algorithm in three aspects for tracking application, including graph construction, PageRank vector acquisition and target filtering. Our simulations with the use of various challenging public-domain video sequences reveal that the proposed PageRank tracker outperforms mean-shift tracker, co-tracker, semiboosting and beyond semiboosting trackers in terms of accuracy, robustness and stability.

  18. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    Science.gov (United States)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-09-01

    A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
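
    The key cost reduction is that a rank-1 (separable) integrand factorizes a multi-dimensional Gauss-Hermite sum into a product of one-dimensional sums. A two-dimensional sketch with polynomial factors (not an actual PES):

```python
import numpy as np

# A rank-1 term f(x, y) = g(x) * h(y) turns a 2-D Gauss-Hermite
# quadrature (weight e^{-x^2 - y^2}) into a product of two 1-D sums --
# the mechanism that makes low-rank PES decompositions cheap to integrate.
x, w = np.polynomial.hermite.hermgauss(20)

g = x**2          # g(x) = x^2
h = 1.0 + x**2    # h(y) = 1 + y^2

full_2d = np.einsum('i,j,i,j->', w, w, g, h)   # O(n^2) double sum
factored = (w @ g) * (w @ h)                   # two O(n) sums
```

    Both agree with the analytic value: ∫x²e^(-x²)dx · ∫(1+y²)e^(-y²)dy = (√π/2)(3√π/2) = 3π/4.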

  19. Analysis of diffractive pd to Xd and pp to Xp interactions and test of the finite-mass sum rule

    CERN Document Server

    Akimov, Y; Golovanov, L B; Goulianos, K; Gross, D; Malamud, E; Melissinos, A C; Mukhin, S; Nitz, D; Olsen, S; Sticker, H; Tsarev, V A; Yamada, R; Zimmerman, P

    1976-01-01

    The first moment finite mass sum rule is tested by utilising cross-sections for pp to Xp extracted from recent Fermilab data on pd to Xd and also comparing with CERN ISR data. The dependences on M_x^2, t and s are also discussed. (11 refs).

  20. Pitch ranking, electrode discrimination, and physiological spread of excitation using current steering in cochlear implants

    Science.gov (United States)

    Goehring, Jenny L.; Neff, Donna L.; Baudhuin, Jacquelyn L.; Hughes, Michelle L.

    2014-01-01

    The first objective of this study was to determine whether adaptive pitch-ranking and electrode-discrimination tasks with cochlear-implant (CI) recipients produce similar results for perceiving intermediate “virtual-channel” pitch percepts using current steering. Previous studies have not examined both behavioral tasks in the same subjects with current steering. A second objective was to determine whether a physiological metric of spatial separation using the electrically evoked compound action potential spread-of-excitation (ECAP SOE) function could predict performance in the behavioral tasks. The metric was the separation index (Σ), defined as the difference in normalized amplitudes between two adjacent ECAP SOE functions, summed across all masker electrodes. Eleven CII or 90 K Advanced Bionics (Valencia, CA) recipients were tested using pairs of electrodes from the basal, middle, and apical portions of the electrode array. The behavioral results, expressed as d′, showed no significant differences across tasks. There was also no significant effect of electrode region for either task. ECAP Σ was not significantly correlated with pitch ranking or electrode discrimination for any of the electrode regions. Therefore, the ECAP separation index is not sensitive enough to predict perceptual resolution of virtual channels. PMID:25480063

  1. Decomposing tensors with structured matrix factors reduces to rank-1 approximations

    DEFF Research Database (Denmark)

    Comon, Pierre; Sørensen, Mikael; Tsigaridas, Elias

    2010-01-01

    Tensor decompositions permit to estimate in a deterministic way the parameters in a multi-linear model. Applications have already been pointed out in antenna array processing and digital communications, among others, and are extremely attractive provided some diversity at the receiver is available. ... As opposed to the widely used ALS algorithm, non-iterative algorithms are proposed in this paper to compute the required tensor decomposition into a sum of rank-1 terms, when some factor matrices enjoy some structure, such as block-Hankel, triangular, band, etc.

  2. Wilcoxon's signed-rank statistic: what null hypothesis and why it matters.

    Science.gov (United States)

    Li, Heng; Johnson, Terri

    2014-01-01

    In statistical literature, the term 'signed-rank test' (or 'Wilcoxon signed-rank test') has been used to refer to two distinct tests: a test for symmetry of distribution and a test for the median of a symmetric distribution, sharing a common test statistic. To avoid potential ambiguity, we propose to refer to those two tests by different names, as 'test for symmetry based on signed-rank statistic' and 'test for median based on signed-rank statistic', respectively. The utility of such terminological differentiation should become evident through our discussion of how those tests connect and contrast with sign test and one-sample t-test. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
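
    Whichever null hypothesis is intended, the shared statistic is the same: the sum W+ of the ranks of |x| over the positive observations. A minimal check against SciPy, which reports W+ when alternative='greater':

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def signed_rank_Wplus(x):
    """Sum of the ranks of |x| over the positive entries -- the statistic
    shared by the symmetry test and the median test."""
    r = rankdata(np.abs(x))
    return r[np.asarray(x) > 0].sum()

# no zeros and no tied absolute values, so ranks are unambiguous
x = [1.2, -0.5, 0.8, 2.1, -1.0, 0.3, 1.7]
Wp = signed_rank_Wplus(x)
stat, p = wilcoxon(x, alternative='greater')
```

    Here the positive entries carry ranks 5, 3, 7, 1 and 6 of the absolute values, so W+ = 22, matching SciPy's statistic.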

  3. Testing the encoding elaboration hypothesis: The effects of exemplar ranking on recognition and recall.

    Science.gov (United States)

    Schnur, P

    1977-11-01

    Two experiments investigated the effects of exemplar ranking on retention. High-ranking exemplars are words judged to be prototypical of a given category; low-ranking exemplars are words judged to be atypical of a given category. In Experiment 1, an incidental learning paradigm was used to measure reaction time to answer an encoding question as well as subsequent recognition. It was found that low-ranking exemplars were classified more slowly but recognized better than high-ranking exemplars. Other comparisons of the effects of category encoding, rhyme encoding, and typescript encoding on response latency and recognition replicated the results of Craik and Tulving (1975). In Experiment 2, unanticipated free recall of five previously learned paired-associate lists revealed that a list composed of low-ranking exemplars was better recalled than a comparable list composed of high-ranking exemplars. Moreover, this was true only when the lists were studied in the context of appropriate category cues. These findings are discussed in terms of the encoding elaboration hypothesis.

  4. A cross-benchmark comparison of 87 learning to rank methods

    NARCIS (Netherlands)

    Tax, N.; Bockting, S.; Hiemstra, D.

    2015-01-01

    Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered

  5. Ranking the adaptive capacity of nations to climate change when socio-political goals are explicit

    Energy Technology Data Exchange (ETDEWEB)

    Haddad, B.M. [University of California, Santa Cruz, CA (United States)

    2005-07-01

    The typical categories for measuring national adaptive capacity to climate change include a nation's wealth, technology, education, information, skills, infrastructure, access to resources, and management capabilities. Resulting rankings predictably mirror more general rankings of economic development, such as the Human Development Index. This approach is incomplete since it does not consider the normative or motivational context of adaptation. For what purpose or toward what goal does a nation aspire, and in that context, what is its adaptive capacity? This paper posits 11 possible national socio-political goals that fall into the three categories of teleological legitimacy, procedural legitimacy, and norm-based decision rules. A model that sorts nations in terms of adaptive capacity based on national socio-political aspirations is presented. While the aspiration of maximizing summed utility matches typical existing rankings, alternative aspirations, including contractarian liberalism, technocratic management, and dictatorial/religious rule alter the rankings. An example describes how this research can potentially inform how priorities are set for international assistance for climate change adaptation. (author)

  6. Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

    KAUST Repository

    Litvinenko, Alexander; Nowak, Wolfgang

    2014-01-01

    Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 1e+8, with problem sizes of 1.5e+13 and 2e+15 estimation points for Kriging and spatial design.
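
    The separability idea can be illustrated without the FFT part: for a separable covariance C = Cx ⊗ Cy, a matrix-vector product never needs the full matrix. A sketch with small dense factors, using the row-major vec convention; the paper's FFT and low-rank machinery are not reproduced:

```python
import numpy as np

# For C = kron(Cx, Cy), the product C @ vec(V) equals vec(Cx @ V @ Cy.T)
# on the reshaped grid (row-major vec), cutting the cost from
# O(nx^2 * ny^2) to O(nx^2 * ny + nx * ny^2).
rng = np.random.default_rng(0)
nx, ny = 4, 5
Cx = rng.standard_normal((nx, nx))
Cy = rng.standard_normal((ny, ny))
V = rng.standard_normal((nx, ny))

full = np.kron(Cx, Cy) @ V.ravel()    # forms the (nx*ny) x (nx*ny) matrix
fast = (Cx @ V @ Cy.T).ravel()        # never forms it
```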

  7. Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

    KAUST Repository

    Litvinenko, Alexander

    2014-01-08

    Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 1e+8, with problem sizes of 1.5e+13 and 2e+15 estimation points for Kriging and spatial design.

  8. Dietary risk ranking for residual antibiotics in cultured aquatic products around Tai Lake, China.

    Science.gov (United States)

    Song, Chao; Li, Le; Zhang, Cong; Qiu, Liping; Fan, Limin; Wu, Wei; Meng, Shunlong; Hu, Gengdong; Chen, Jiazhang; Liu, Ying; Mao, Aimin

    2017-10-01

    Antibiotics are widely used in aquaculture and therefore may be present as a dietary risk in cultured aquatic products. Using the Tai Lake Basin as a study area, we assessed the presence of 15 antibiotics in 5 widely cultured aquatic species using a newly developed dietary risk ranking approach. By assigning scores to each factor involved in the ranking matrices, the scores of dietary risks per antibiotic and per aquatic species were calculated. The results indicated that fluoroquinolone antibiotics posed the highest dietary risk in all aquatic species. The total score for each aquatic species was then obtained by summing the scores of all 15 antibiotics; the crab (Eriocheir sinensis) was found to carry the highest dietary risk. Finally, the antibiotic category and aquatic species of greatest concern were identified. This study highlighted the importance of dietary risk ranking in the production and consumption of cultured aquatic products around Tai Lake. Copyright © 2017 Elsevier Inc. All rights reserved.
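
    The scoring-and-summing step can be sketched with hypothetical numbers (the paper's actual scoring factors and values differ):

```python
# Hypothetical score matrix: rows are species, entries are per-antibiotic
# risk scores.  The per-species total is the sum across antibiotics, and
# the species with the largest total ranks as the highest dietary risk.
scores = {
    "crab":   [3, 2, 3, 1],
    "carp":   [1, 1, 2, 1],
    "shrimp": [2, 1, 1, 2],
}
totals = {species: sum(v) for species, v in scores.items()}
riskiest = max(totals, key=totals.get)
```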

  9. Small Sample Properties of the Wilcoxon Signed Rank Test with Discontinuous and Dependent Observations

    OpenAIRE

    Nadine Chlass; Jens J. Krueger

    2007-01-01

    This Monte-Carlo study investigates sensitivity of the Wilcoxon signed rank test to certain assumption violations in small samples. Emphasis is put on within-sample-dependence, between-sample dependence, and the presence of ties. Our results show that both assumption violations induce severe size distortions and entail power losses. Surprisingly, these consequences do vary substantially with other properties the data may display. Results provided are particularly relevant for experimental set...
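
    The kind of Monte-Carlo size study described can be sketched as follows, here for within-sample AR(1) dependence; sample size, dependence strength, and replication count are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.stats import wilcoxon

def rejection_rate(rho, n=15, reps=2000, alpha=0.05, seed=1):
    """Empirical size of the signed-rank test at level alpha when the
    sample is an AR(1) series with coefficient rho (rho=0: i.i.d.)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        e = rng.standard_normal(n)
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):          # within-sample dependence
            x[t] = rho * x[t - 1] + e[t]
        if wilcoxon(x).pvalue < alpha:
            rejections += 1
    return rejections / reps

size_iid = rejection_rate(rho=0.0)   # should be close to alpha
size_dep = rejection_rate(rho=0.6)   # inflated by positive dependence
```

    Positive within-sample autocorrelation inflates the Type I error well above the nominal 5% level, consistent with the size distortions the study reports.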

  10. Discovering author impact: A PageRank perspective

    OpenAIRE

    Yan, Erjia; Ding, Ying

    2010-01-01

    This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International So...

  11. Adiabatic quantum algorithm for search engine ranking.

    Science.gov (United States)

    Garnerone, Silvano; Zanardi, Paolo; Lidar, Daniel A

    2012-06-08

    We propose an adiabatic quantum algorithm for generating a quantum pure state encoding of the PageRank vector, the most widely used tool in ranking the relative importance of internet pages. We present extensive numerical simulations which provide evidence that this algorithm can prepare the quantum PageRank state in a time which, on average, scales polylogarithmically in the number of web pages. We argue that the main topological feature of the underlying web graph allowing for such a scaling is the out-degree distribution. The top-ranked log(n) entries of the quantum PageRank state can then be estimated with a polynomial quantum speed-up. Moreover, the quantum PageRank state can be used in "q-sampling" protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes designed for the same task. This can be used to decide whether to run a classical update of the PageRank.

  12. Selecting Sums in Arrays

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2008-01-01

    In an array of n numbers, each of the (n choose 2) + n contiguous subarrays defines a sum. In this paper we focus on algorithms for selecting and reporting maximal sums from an array of numbers. First, we consider the problem of reporting k subarrays inducing the k largest sums among all subarrays of length at least l and at most u. For this problem we design an optimal O(n + k) time algorithm. Secondly, we consider the problem of selecting a subarray storing the k'th largest sum. For this problem we prove a time bound of Θ(n · max {1, log(k/n)}) by describing an algorithm with this running time and by proving a matching lower bound. Finally, we combine the ideas and obtain an O(n · max {1, log(k/n)}) time algorithm that selects a subarray storing the k'th largest sum among all subarrays of length at least l and at most u.
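
    A brute-force O(n²) reference for the selection problem makes the task concrete; the paper's contribution is doing the same selection in O(n · max{1, log(k/n)}) time:

```python
import heapq

def kth_largest_subarray_sum(a, k):
    """Enumerate all (n choose 2) + n contiguous-subarray sums and select
    the k'th largest -- an O(n^2) baseline for the selection problem."""
    sums = []
    for i in range(len(a)):
        s = 0
        for j in range(i, len(a)):
            s += a[j]          # sum of a[i..j]
            sums.append(s)
    return heapq.nlargest(k, sums)[-1]

a = [1, -2, 3, 4, -1]
```

    For this array the largest subarray sum is 7 (the subarray [3, 4]), and the second and third largest are both 6.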

  13. Zero-Sum Flows in Designs

    International Nuclear Information System (INIS)

    Akbari, S.; Khosrovshahi, G.B.; Mofidi, A.

    2010-07-01

    Let D be a t-(v, k, λ) design and let N_i(D), for 1 ≤ i ≤ t, be the higher incidence matrix of D, a (0, 1)-matrix of size (v choose i) × b, where b is the number of blocks of D. A zero-sum flow of D is a nowhere-zero real vector in the null space of N_1(D). A zero-sum k-flow of D is a zero-sum flow with values in {±1, ..., ±(k-1)}. In this paper we show that every non-symmetric design admits an integral zero-sum flow, and consequently we conjecture that every non-symmetric design admits a zero-sum 5-flow. Similarly, the definition of zero-sum flow can be extended to N_i(D), 1 ≤ i ≤ t. Let D = t-(v, k, (v-t choose k-t)) be the complete design. We conjecture that N_t(D) admits a zero-sum 3-flow and prove this conjecture for t = 2. (author)
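
    A concrete zero-sum 3-flow is easy to verify for the smallest complete 2-design, the 2-(4, 2, 1) design whose blocks are the six 2-subsets of a 4-set (this worked example is mine, not from the paper):

```python
import numpy as np

# Point-block incidence matrix N1 of the complete 2-(4, 2, 1) design.
# A zero-sum 3-flow is a nowhere-zero vector with values in {+-1, +-2}
# lying in the null space of N1.
blocks = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
N1 = np.zeros((4, len(blocks)), dtype=int)
for b, (p, q) in enumerate(blocks):
    N1[p, b] = N1[q, b] = 1

flow = np.array([1, 1, -2, -2, 1, 1])   # one weight per block
residual = N1 @ flow                     # must be the zero vector
```

    Each point lies in three blocks whose weights sum to zero (e.g. point 0: 1 + 1 - 2 = 0), so this nowhere-zero integer vector is indeed a zero-sum 3-flow.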

  14. Small sum privacy and large sum utility in data publishing.

    Science.gov (United States)

    Fu, Ada Wai-Chee; Wang, Ke; Wong, Raymond Chi-Wing; Wang, Jia; Jiang, Minhao

    2014-08-01

    While the study of privacy preserving data publishing has drawn a lot of interest, some recent work has shown that existing mechanisms do not limit all inferences about individuals. This paper is a positive note in response to this finding. We point out that not all inference attacks should be countered, in contrast to all existing works known to us, and based on this we propose a model called SPLU. This model protects sensitive information, by which we refer to answers for aggregate queries with small sums, while queries with large sums are answered with higher accuracy. Using SPLU, we introduce a sanitization algorithm to protect data while maintaining high data utility for queries with large sums. Empirical results show that our method behaves as desired. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Comparison of Anthropometry and Lower Limb Power Qualities According to Different Levels and Ranking Position of Competitive Surfers.

    Science.gov (United States)

    Fernandez-Gamboa, Iosu; Yanci, Javier; Granados, Cristina; Camara, Jesus

    2017-08-01

    Fernandez-Gamboa, I, Yanci, J, Granados, C, and Camara, J. Comparison of anthropometry and lower limb power qualities according to different levels and ranking position of competitive surfers. J Strength Cond Res 31(8): 2231-2237, 2017-The aim of this study was to compare competitive surfers' lower limb power output depending on their competitive level, and to evaluate the association with competition rankings. Twenty competitive surfers were divided according to competitive level, international (INT) or national (NAT), and competitive ranking (RANK1-50 or RANK51-100). Vertical jump and maximal peak power of the lower limbs were measured. No differences were found between INT and NAT surfers in the anthropometric variables, in the vertical jump, or in lower extremity power, although the NAT group had higher levels on the elasticity index, squat jumps (SJs), and counter movement jumps (CMJs) compared with the INT group. The RANK1-50 group had a lower biceps skinfold than the RANK51-100 group. Moderate to large significant correlations were obtained between the surfers' ranking position and some skinfolds, the sum of skinfolds, and vertical jump. Results demonstrate that surfers' physical performance seems to be an accurate indicator of ranking positioning, also revealing that vertical jump capacity and anthropometric variables play an important role in their competitive performance, which may be important when considering their power training.

  16. Neutrino mass sum rules and symmetries of the mass matrix

    Energy Technology Data Exchange (ETDEWEB)

    Gehrlein, Julia [Karlsruhe Institute of Technology, Institut fuer Theoretische Teilchenphysik, Karlsruhe (Germany); Universidad Autonoma de Madrid, Departamento de Fisica Teorica, Madrid (Spain); Instituto de Fisica Teorica UAM/CSIC, Madrid (Spain); Spinrath, Martin [Karlsruhe Institute of Technology, Institut fuer Theoretische Teilchenphysik, Karlsruhe (Germany); National Center for Theoretical Sciences, Physics Division, Hsinchu (China)

    2017-05-15

    Neutrino mass sum rules have recently gained again more attention as a powerful tool to discriminate and test various flavour models in the near future. A related question which has not yet been discussed fully satisfactorily was the origin of these sum rules and if they are related to any residual or accidental symmetry. We will address this open issue here systematically and find previous statements confirmed. Namely, the sum rules are not related to any enhanced symmetry of the Lagrangian after family symmetry breaking but they are simply the result of a reduction of free parameters due to skillful model building. (orig.)

  17. PageRank and rank-reversal dependence on the damping factor

    Science.gov (United States)

    Son, S.-W.; Christensen, C.; Grassberger, P.; Paczuski, M.

    2012-12-01

    PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is to gathering relevant information or to the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d0=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d0.
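
    The experiment is easy to reproduce in miniature: compute PR by power iteration at two damping factors and compare the vectors with a rank correlation. A toy four-page sketch; the paper studies a real WWW domain, not this graph:

```python
import numpy as np
from scipy.stats import spearmanr

def pagerank(A, d, tol=1e-12):
    """Power iteration on a column-stochastic link matrix A with damping d:
    p <- (1 - d)/n + d * A p, a contraction for d < 1."""
    n = A.shape[0]
    p = np.full(n, 1.0 / n)
    while True:
        p_new = (1 - d) / n + d * A @ p
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# column j holds the out-links of page j, each weighted 1/outdegree(j)
A = np.array([[0,   0,   1/2, 0],
              [1/3, 0,   0,   1],
              [1/3, 1/2, 0,   0],
              [1/3, 1/2, 1/2, 0]])
pr_085 = pagerank(A, 0.85)   # the conventional damping factor
pr_065 = pagerank(A, 0.65)   # the value found more rank-stable here
rho, _ = spearmanr(pr_085, pr_065)
```

    Both vectors sum to 1, and the Spearman ρ between them quantifies how much the relative ranking survives the change of damping factor.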

  18. PageRank and rank-reversal dependence on the damping factor.

    Science.gov (United States)

    Son, S-W; Christensen, C; Grassberger, P; Paczuski, M

    2012-12-01

    PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is to gathering relevant information or to the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d_{0}=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d_{0}.

  19. Jackknife Variance Estimator for Two Sample Linear Rank Statistics

    Science.gov (United States)

    1988-11-01

    Keywords: strong consistency; linear rank test; influence function.

  20. Ranking structures and rank-rank correlations of countries: The FIFA and UEFA cases

    Science.gov (United States)

    Ausloos, Marcel; Cloots, Rudi; Gadomski, Adam; Vitanov, Nikolay K.

    2014-04-01

    Ranking of agents competing with each other in complex systems may lead to paradoxes according to the pre-chosen different measures. A discussion is presented of such rank-rank correlations, similar or not, based on the case of European countries ranked by UEFA and FIFA from different soccer competitions. The first question to be answered is whether an empirical and simple law is obtained for such (self-) organizations of complex sociological systems with such different measuring schemes. It is found that the power-law form is not the best description, contrary to many modern expectations; the stretched exponential is much more adequate. Moreover, it is found that the measuring rules lead to some inner structures in both cases.
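
    A stretched-exponential fit of a rank-size sequence can be sketched as follows; the data here are synthetic, generated from the model itself, rather than the UEFA/FIFA rankings:

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched exponential f(r) = a * exp(-(r/b)**c): the functional form
# found here to describe rank-size data better than a power law.
def stretched_exp(r, a, b, c):
    return a * np.exp(-(r / b) ** c)

ranks = np.arange(1, 51, dtype=float)
true_params = (100.0, 10.0, 0.7)
values = stretched_exp(ranks, *true_params)   # noise-free synthetic data

popt, _ = curve_fit(stretched_exp, ranks, values, p0=(80.0, 8.0, 0.8))
```

    On noise-free data the fit recovers the generating parameters; on real rank data one would compare its residuals against a power-law fit.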

  1. Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

    KAUST Repository

    Litvinenko, Alexander; Nowak, Wolfgang

    2014-01-01

    Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. The reduced computational complexity is O(d L log L), where L := max_i n_i, i = 1..d. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 10^8, with problem sizes of 15e+12 and 2e+15 estimation points for Kriging and spatial design.
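    The role separability plays can be seen in a tiny sketch: when a covariance matrix factors as a Kronecker product over a d = 2 grid, a matrix-vector product never requires forming the full (n1·n2) × (n1·n2) matrix. The matrices and sizes below are illustrative, not taken from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n1, n2 = 6, 5
    C1 = rng.standard_normal((n1, n1))  # 1D factor along axis 1
    C2 = rng.standard_normal((n2, n2))  # 1D factor along axis 2
    x = rng.standard_normal(n1 * n2)

    # Naive product with the explicit Kronecker matrix: O((n1*n2)^2) work/memory.
    y_full = np.kron(C1, C2) @ x

    # Separable shortcut, using (A ⊗ B) vec(X) = vec(A X B^T) for row-major vec:
    X = x.reshape(n1, n2)
    y_fast = (C1 @ X @ C2.T).ravel()
    ```

    The same reshaping trick is what lets FFT-based and low-rank Kriging work factor-by-factor instead of on the full covariance matrix.
    
    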

  2. Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

    KAUST Repository

    Litvinenko, Alexander

    2014-01-06

    Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. The reduced computational complexity is O(d L log L), where L := max_i n_i, i = 1..d. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 10^8, with problem sizes of 15e+12 and 2e+15 estimation points for Kriging and spatial design.

  3. Sums and Gaussian vectors

    CERN Document Server

    Yurinsky, Vadim Vladimirovich

    1995-01-01

    Surveys the methods currently applied to study sums of infinite-dimensional independent random vectors in situations where their distributions resemble Gaussian laws. Covers probabilities of large deviations, Chebyshev-type inequalities for seminorms of sums, a method of constructing Edgeworth-type expansions, estimates of characteristic functions for random vectors obtained by smooth mappings of infinite-dimensional sums to Euclidean spaces. A self-contained exposition of the modern research apparatus around CLT, the book is accessible to new graduate students, and can be a useful reference for researchers and teachers of the subject.

  4. Reduced-Rank Chip-Level MMSE Equalization for the 3G CDMA Forward Link with Code-Multiplexed Pilot

    Directory of Open Access Journals (Sweden)

    Goldstein J Scott

    2002-01-01

    This paper deals with synchronous direct-sequence code-division multiple access (CDMA) transmission using orthogonal channel codes in frequency-selective multipath, motivated by the forward link in 3G CDMA systems. The chip-level minimum mean square error (MMSE) estimate of the (multiuser) synchronous sum signal transmitted by the base, followed by a correlate and sum, has been shown to perform very well in saturated systems compared to a Rake receiver. In this paper, we present reduced-rank, chip-level MMSE estimation based on the multistage nested Wiener filter (MSNWF). We show that, for the case of a known channel, only a small number of stages of the MSNWF is needed to achieve near full-rank MSE performance over a practical signal-to-noise ratio (SNR) range. This holds true even for an edge-of-cell scenario, where two base stations are contributing near equal-power signals, as well as for the single base station case. We then utilize the code-multiplexed pilot channel to train the MSNWF coefficients and show that an adaptive MSNWF operating in a very low-rank subspace performs slightly better than full-rank recursive least squares (RLS) and significantly better than least mean squares (LMS). An important advantage of the MSNWF is that it can be implemented in a lattice structure, which involves significantly less computation than RLS. We also present structured MMSE equalizers that exploit the estimate of the multipath arrival times and the underlying channel structure to project the data vector onto a much lower dimensional subspace. Specifically, due to the sparseness of high-speed CDMA multipath channels, the channel vector lies in the subspace spanned by a small number of columns of the pulse shaping filter convolution matrix. We demonstrate that the performance of these structured low-rank equalizers is much superior to unstructured equalizers in terms of convergence speed and error rates.

  5. A random-sum Wilcoxon statistic and its application to analysis of ROC and LROC data.

    Science.gov (United States)

    Tang, Liansheng Larry; Balakrishnan, N

    2011-01-01

    The Wilcoxon-Mann-Whitney statistic is commonly used for a distribution-free comparison of two groups. One requirement for its use is that the sample sizes of the two groups are fixed. This is violated in some of the applications such as medical imaging studies and diagnostic marker studies; in the former, the violation occurs since the number of correctly localized abnormal images is random, while in the latter the violation is due to some subjects not having observable measurements. For this reason, we propose here a random-sum Wilcoxon statistic for comparing two groups in the presence of ties, and derive its variance as well as its asymptotic distribution for large sample sizes. The proposed statistic includes the regular Wilcoxon rank-sum statistic. Finally, we apply the proposed statistic for summarizing location response operating characteristic data from a liver computed tomography study, and also for summarizing diagnostic accuracy of biomarker data.
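    For reference, the fixed-sample-size Wilcoxon rank-sum statistic that the proposed random-sum statistic generalizes can be computed as follows. This is a plain sketch with midranks for ties; the paper's extension additionally lets the sample sizes themselves be random:

    ```python
    def rank_sum(x, y):
        """Wilcoxon rank-sum statistic W: sum of the ranks of sample x in the
        pooled sample, using midranks (average ranks) for tied values."""
        pooled = sorted(list(x) + list(y))
        ranks = {}
        i = 0
        while i < len(pooled):
            j = i
            while j < len(pooled) and pooled[j] == pooled[i]:
                j += 1
            # Midrank = average of 1-based positions i+1 .. j for this tie group.
            ranks[pooled[i]] = (i + 1 + j) / 2
            i = j
        return sum(ranks[v] for v in x)

    # Tied value 2.1 in the second sample gets midrank (2 + 3) / 2 = 2.5.
    W = rank_sum([1.2, 3.4, 5.6], [2.1, 2.1, 7.0])
    ```

    Here `W = 1 + 4 + 5 = 10`, the rank sum of the first sample in the pooled ordering.
    
    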

  6. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample sizes. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in a nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining the type I error probability for all conditions except for Cauchy and extreme-variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
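    The pooled-resampling idea can be sketched as follows: under the null hypothesis both groups come from the same distribution, so both bootstrap groups are drawn from the pooled sample and the observed mean difference is compared against that null distribution. This is a simplified illustration of the idea, not the paper's exact procedure:

    ```python
    import random
    import statistics

    def pooled_bootstrap_pvalue(a, b, n_boot=2000, seed=42):
        """Two-sided bootstrap p-value for a difference in means, with both
        groups resampled (with replacement) from the pooled sample."""
        rng = random.Random(seed)
        pool = list(a) + list(b)
        observed = statistics.fmean(a) - statistics.fmean(b)
        count = 0
        for _ in range(n_boot):
            ra = [rng.choice(pool) for _ in a]
            rb = [rng.choice(pool) for _ in b]
            if abs(statistics.fmean(ra) - statistics.fmean(rb)) >= abs(observed):
                count += 1
        return count / n_boot

    # Made-up data: nearly identical groups vs. clearly shifted groups.
    p_similar = pooled_bootstrap_pvalue([5.0, 5.1, 4.9, 5.2], [5.0, 5.05, 4.95, 5.1])
    p_shifted = pooled_bootstrap_pvalue([1.0, 1.2, 0.9, 1.1], [9.0, 9.2, 8.9, 9.1])
    ```

    The shifted pair yields a small p-value while the similar pair does not, matching the intended behavior of the test.
    
    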

  7. Using Squares to Sum Squares

    Science.gov (United States)

    DeTemple, Duane

    2010-01-01

    Purely combinatorial proofs are given for the sum of squares formula, 1² + 2² + ... + n² = n(n + 1)(2n + 1)/6, and the sum of sums of squares formula, 1² + (1² + 2²) + ... + (1² + 2² + ... + n²) = n(n + 1)²…
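    The first identity is easy to check numerically; since the record's second formula is truncated, the sum of sums is computed by direct summation rather than by a closed form:

    ```python
    def sum_of_squares(n):
        """Closed form for 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6."""
        return n * (n + 1) * (2 * n + 1) // 6

    def sum_of_sums_of_squares(n):
        """1^2 + (1^2 + 2^2) + ... + (1^2 + ... + n^2), summed directly."""
        return sum(sum_of_squares(k) for k in range(1, n + 1))
    ```

    For example, sum_of_squares(10) = 10·11·21/6 = 385, and sum_of_sums_of_squares(3) = 1 + 5 + 14 = 20.
    
    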

  8. Methods for the analysis of complex fluorescence decays: sum of Becquerel functions versus sum of exponentials

    International Nuclear Information System (INIS)

    Menezes, Filipe; Fedorov, Alexander; Baleizão, Carlos; Berberan-Santos, Mário N; Valeur, Bernard

    2013-01-01

    Ensemble fluorescence decays are usually analyzed with a sum of exponentials. However, broad continuous distributions of lifetimes, either unimodal or multimodal, occur in many situations. A simple and flexible fitting function for these cases that encompasses the exponential is the Becquerel function. In this work, the applicability of the Becquerel function for the analysis of complex decays of several kinds is tested. For this purpose, decays of mixtures of four different fluorescence standards (binary, ternary and quaternary mixtures) are measured and analyzed. For binary and ternary mixtures, the expected sum of narrow distributions is well recovered from the Becquerel functions analysis, if the correct number of components is used. For ternary mixtures, however, satisfactory fits are also obtained with a number of Becquerel functions smaller than the true number of fluorophores in the mixture, at the expense of broadening the lifetime distributions of the fictitious components. The quaternary mixture studied is well fitted with both a sum of three exponentials and a sum of two Becquerel functions, showing the inevitable loss of information when the number of components is large. Decays of a fluorophore in a heterogeneous environment, known to be represented by unimodal and broad continuous distributions (as previously obtained by the maximum entropy method), are also measured and analyzed. It is concluded that these distributions can be recovered by the Becquerel function method with an accuracy similar to that of the much more complex maximum entropy method. It is also shown that the polar (or phasor) plot is not always helpful for ascertaining the degree (and kind) of complexity of a fluorescence decay. (paper)

  9. SibRank: Signed bipartite network analysis for neighbor-based collaborative ranking

    Science.gov (United States)

    Shams, Bita; Haratizadeh, Saman

    2016-09-01

    Collaborative ranking is an emerging field of recommender systems that utilizes users' preference data rather than rating values. Unfortunately, neighbor-based collaborative ranking has gained little attention despite its greater flexibility and justifiability. This paper proposes a novel framework, called SibRank, that seeks to improve on state-of-the-art neighbor-based collaborative ranking methods. SibRank represents users' preferences as a signed bipartite network, and finds similar users through a novel personalized ranking algorithm in signed networks.

  10. Credal Sum-Product Networks

    NARCIS (Netherlands)

    Maua, Denis Deratani; Cozman, Fabio Gagli; Conaty, Diarmaid; de Campos, Cassio P.

    2017-01-01

    Sum-product networks are a relatively new and increasingly popular class of (precise) probabilistic graphical models that allow for marginal inference with polynomial effort. As with other probabilistic models, sum-product networks are often learned from data and used to perform classification.

  11. Ranking nodes in growing networks: When PageRank fails.

    Science.gov (United States)

    Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng

    2015-11-10

    PageRank is arguably the most popular ranking algorithm, applied in real systems ranging from information to biological and infrastructure networks. Despite its outstanding popularity and broad use in different areas of science, the relation between the algorithm's efficacy and the properties of the network on which it acts has not yet been fully understood. We study here PageRank's performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail in identifying the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms based on the temporal linking patterns of these systems are needed to better rank the nodes.

  12. BridgeRank: A novel fast centrality measure based on local structure of the network

    Science.gov (United States)

    Salavati, Chiman; Abdollahpouri, Alireza; Manbari, Zhaleh

    2018-04-01

    Ranking nodes in complex networks has become an important task in many application domains. In a complex network, influential nodes are those that have the most spreading ability. Thus, identifying influential nodes based on their spreading ability is a fundamental task in applications such as viral marketing. One of the most important centrality measures for ranking nodes is closeness centrality, which is effective but suffers from high computational complexity, O(n³). This paper tries to improve closeness centrality by utilizing the local structure of nodes and presents a new ranking algorithm, called BridgeRank centrality. The proposed method computes a local centrality value for each node. For this purpose, communities are first detected, and the relationships between communities are completely ignored. Then, by applying a centrality measure within each community, only one best critical node is extracted from each community. Finally, the nodes are ranked by computing the sum of each node's shortest path lengths to the obtained critical nodes. We have also modified the proposed method by weighting the original BridgeRank and selecting several nodes from each community based on the density of that community. Our method can find the best nodes with high spreading ability and low time complexity, which makes it applicable to large-scale networks. To evaluate the performance of the proposed method, we use the SIR diffusion model. Finally, experiments on real and artificial networks show that our method efficiently identifies influential nodes and achieves better performance compared to other recent methods.
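    The pipeline described above, one critical node per community followed by ranking on distance sums, can be sketched as follows. Communities are assumed to be given (the detection step is elsewhere), and highest degree stands in for the per-community centrality; both are simplifying assumptions, not the paper's exact choices:

    ```python
    from collections import deque

    def bfs_dist(adj, src):
        """Unweighted single-source shortest path lengths via BFS."""
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def bridgerank_sketch(adj, communities):
        """Pick one critical node per community (highest degree here),
        then rank all nodes by the sum of their distances to the critical
        nodes; smaller sums rank first."""
        critical = [max(c, key=lambda u: len(adj[u])) for c in communities]
        dists = [bfs_dist(adj, c) for c in critical]
        score = {u: sum(d.get(u, float("inf")) for d in dists) for u in adj}
        return sorted(adj, key=lambda u: score[u])

    # Two triangle communities bridged through nodes 0 and 3 (toy graph).
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1],
           3: [0, 4, 5], 4: [3, 5], 5: [3, 4]}
    order = bridgerank_sketch(adj, [[0, 1, 2], [3, 4, 5]])
    ```

    Only one BFS per community is needed, which is where the method's low time complexity relative to full closeness centrality comes from.
    
    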

  13. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

  14. Evaluation of Term Ranking Algorithms for Pseudo-Relevance Feedback in MEDLINE Retrieval.

    Science.gov (United States)

    Yoo, Sooyoung; Choi, Jinwook

    2011-06-01

    The purpose of this study was to investigate the effects of query expansion algorithms for MEDLINE retrieval within a pseudo-relevance feedback framework. A number of query expansion algorithms were tested using various term ranking formulas, focusing on query expansion based on pseudo-relevance feedback. The OHSUMED test collection, which is a subset of the MEDLINE database, was used as a test corpus. Various ranking algorithms were tested in combination with different term re-weighting algorithms. Our comprehensive evaluation showed that the local context analysis ranking algorithm, when used in combination with one of the re-weighting algorithms - Rocchio, the probabilistic model, and our variants - significantly outperformed other algorithm combinations by up to 12% (paired t-test; p < 0.05) for most algorithm pairs, at least in the context of the OHSUMED corpus. Comparative experiments on term ranking algorithms were performed in the context of a subset of MEDLINE documents. With medical documents, local context analysis, which uses co-occurrence with all query terms, significantly outperformed various term ranking methods based on both frequency and distribution analyses. Furthermore, the results of the experiments demonstrated that the term rank-based re-weighting method contributed to a remarkable improvement in mean average precision.
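    The re-weighting step in such a pseudo-relevance feedback loop can be illustrated with the classic Rocchio update: the top-ranked documents from the first pass are assumed relevant, and the query vector is pulled toward their centroid. The vectors, vocabulary, and weights below are made-up; the study's term-ranking formulas (e.g. local context analysis) are a separate component:

    ```python
    import numpy as np

    def rocchio_expand(query_vec, feedback_docs, alpha=1.0, beta=0.75):
        """Rocchio-style re-weighting for pseudo-relevance feedback:
        q_new = alpha * q + beta * centroid(top-ranked documents)."""
        centroid = np.mean(np.asarray(feedback_docs, dtype=float), axis=0)
        return alpha * np.asarray(query_vec, dtype=float) + beta * centroid

    # Hypothetical tf-idf vectors over a 4-term vocabulary.
    q = [1.0, 0.0, 0.0, 0.0]
    top_docs = [[0.5, 0.8, 0.0, 0.0],
                [0.4, 0.6, 0.2, 0.0]]
    q_new = rocchio_expand(q, top_docs)
    ```

    Terms that co-occur in the feedback documents (the second and third vocabulary slots) gain weight in `q_new`, which is the query-expansion effect being evaluated.
    
    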

  15. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    International Nuclear Information System (INIS)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-01-01

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm⁻¹ or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.

  16. Using incomplete citation data for MEDLINE results ranking.

    Science.gov (United States)

    Herskovic, Jorge R; Bernstam, Elmer V

    2005-01-01

    Information overload is a significant problem for modern medicine. Searching MEDLINE for common topics often retrieves more relevant documents than users can review. Therefore, we must identify documents that are not only relevant, but also important. Our system ranks articles using citation counts and the PageRank algorithm, incorporating data from the Science Citation Index. However, citation data is usually incomplete. Therefore, we explore the relationship between the quantity of citation information available to the system and the quality of the result ranking. Specifically, we test the ability of citation count and PageRank to identify "important articles" as defined by experts from large result sets with decreasing citation information. We found that PageRank performs better than simple citation counts, but both algorithms are surprisingly robust to information loss. We conclude that even an incomplete citation database is likely to be effective for importance ranking.

  17. Strategic alternatives ranking methodology: Multiple RCRA incinerator evaluation test case

    International Nuclear Information System (INIS)

    Baker, G.; Thomson, R.D.; Reece, J.; Springer, L.; Main, D.

    1988-01-01

    This paper presents an important process approach to permit quantification and ranking of multiple alternatives being considered in remedial actions or hazardous waste strategies. This process is a methodology for evaluating programmatic options in support of site selection or environmental analyses. Political or other less tangible motivations for alternatives may be quantified by establishing the range of significant variables, weighting their importance, and establishing specific criteria for scoring individual alternatives. An application of the process to a recent AFLC program permitted ranking incineration alternatives from a list of over 130 options. The process forced participation by the organizations to be affected, allowed a consensus of opinion to be achieved, allowed complete flexibility to evaluate factor sensitivity, and resulted in strong, quantifiable support for subsequent site-selection NEPA documents.

  18. RankExplorer: Visualization of Ranking Changes in Large Time Series Data.

    Science.gov (United States)

    Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin

    2012-12-01

    For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.

  19. Neutrino mass sum-rule

    Science.gov (United States)

    Damanik, Asan

    2018-03-01

    Neutrino mass sum-rules are a very important research subject from the theoretical side because neutrino oscillation experiments only give us two squared-mass differences and three mixing angles. We review the neutrino mass sum-rules in the literature that have been reported by many authors and discuss their phenomenological implications.

  20. Sums of squares of integers

    CERN Document Server

    Moreno, Carlos J

    2005-01-01

    Introduction Prerequisites Outline of Chapters 2 - 8 Elementary Methods Introduction Some Lemmas Two Fundamental Identities Euler's Recurrence for Sigma(n)More Identities Sums of Two Squares Sums of Four Squares Still More Identities Sums of Three Squares An Alternate Method Sums of Polygonal Numbers Exercises Bernoulli Numbers Overview Definition of the Bernoulli Numbers The Euler-MacLaurin Sum Formula The Riemann Zeta Function Signs of Bernoulli Numbers Alternate The von Staudt-Clausen Theorem Congruences of Voronoi and Kummer Irregular Primes Fractional Parts of Bernoulli Numbers Exercises Examples of Modular Forms Introduction An Example of Jacobi and Smith An Example of Ramanujan and Mordell An Example of Wilton: t (n) Modulo 23 An Example of Hamburger Exercises Hecke's Theory of Modular FormsIntroduction Modular Group ? and its Subgroup ? 0 (N) Fundamental Domains For ? and ? 0 (N) Integral Modular Forms Modular Forms of Type Mk(? 0(N);chi) and Euler-Poincare series Hecke Operators Dirichlet Series and ...

  1. Sum rules in classical scattering

    International Nuclear Information System (INIS)

    Bolle, D.; Osborn, T.A.

    1981-01-01

    This paper derives sum rules associated with the classical scattering of two particles. These sum rules are the analogs of Levinson's theorem in quantum mechanics which provides a relationship between the number of bound-state wavefunctions and the energy integral of the time delay of the scattering process. The associated classical relation is an identity involving classical time delay and an integral over the classical bound-state density. We show that equalities between the Nth-order energy moment of the classical time delay and the Nth-order energy moment of the classical bound-state density hold in both a local and a global form. Local sum rules involve the time delay defined on a finite but otherwise arbitrary coordinate space volume S and the bound-state density associated with this same region. Global sum rules are those that obtain when S is the whole coordinate space. Both the local and global sum rules are derived for potentials of arbitrary shape and for scattering in any space dimension. Finally the set of classical sum rules, together with the known quantum mechanical analogs, are shown to provide a unified method of obtaining the high-temperature expansion of the classical, respectively the quantum-mechanical, virial coefficients

  2. Low ranks make the difference : How achievement goals and ranking information affect cooperation intentions

    NARCIS (Netherlands)

    Poortvliet, P. Marijn; Janssen, Onne; Van Yperen, N.W.; Van de Vliert, E.

    This investigation tested the joint effect of achievement goals and ranking information on information exchange intentions with a commensurate exchange partner. Results showed that individuals with performance goals were less inclined to cooperate with an exchange partner when they had low or high

  3. Counting Triangles to Sum Squares

    Science.gov (United States)

    DeMaio, Joe

    2012-01-01

    Counting complete subgraphs of three vertices in complete graphs, yields combinatorial arguments for identities for sums of squares of integers, odd integers, even integers and sums of the triangular numbers.

  4. Cosmic Sum Rules

    DEFF Research Database (Denmark)

    T. Frandsen, Mads; Masina, Isabella; Sannino, Francesco

    2011-01-01

    We introduce new sum rules that allow us to determine universal properties of the unknown component of the cosmic rays, and show how they can be used to predict the positron fraction at energies not yet explored by current experiments and to constrain specific models.

  5. Ranking metrics in gene set enrichment analysis: do they matter?

    Science.gov (United States)

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work, 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established, i.e. the absolute value of the Moderated Welch Test statistic, Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio, and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using Baumgartner
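    One of the four metrics singled out above, the signal-to-noise ratio, is simple to state: (mean_a - mean_b) / (sd_a + sd_b), with genes ranked by its absolute value. The expression values and gene names below are hypothetical; this is a sketch of one metric, not the study's benchmark:

    ```python
    import math

    def signal_to_noise(group_a, group_b):
        """Signal-to-noise ranking metric as commonly used in GSEA:
        (mean_a - mean_b) / (sd_a + sd_b), with sample standard deviations."""
        ma = sum(group_a) / len(group_a)
        mb = sum(group_b) / len(group_b)
        sa = math.sqrt(sum((x - ma) ** 2 for x in group_a) / (len(group_a) - 1))
        sb = math.sqrt(sum((x - mb) ** 2 for x in group_b) / (len(group_b) - 1))
        return (ma - mb) / (sa + sb)

    # Rank hypothetical genes by absolute S2N across two conditions.
    expr = {
        "geneA": ([5.0, 5.2, 4.8], [1.0, 1.1, 0.9]),   # strong difference
        "geneB": ([2.0, 2.5, 1.5], [2.1, 2.4, 1.8]),   # weak difference
    }
    ranking = sorted(expr, key=lambda g: abs(signal_to_noise(*expr[g])), reverse=True)
    ```

    This ranked gene list is exactly the input on which the enrichment statistic is then computed.
    
    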

  6. A Case-Based Reasoning Method with Rank Aggregation

    Science.gov (United States)

    Sun, Jinhua; Du, Jiao; Hu, Jian

    2018-03-01

    In order to improve the accuracy of case-based reasoning (CBR), this paper presents a new CBR framework based on the principle of rank aggregation. First, ranking methods are put forward in each attribute subspace of a case, giving the ordering relation between cases on each attribute; a ranking matrix is thus obtained. Second, the similar-case retrieval process from the ranking matrix is transformed into a rank aggregation optimization problem, which uses the Kemeny optimal. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of the RA-CBR algorithm is higher than that of Euclidean-distance CBR and Mahalanobis-distance CBR. So we can conclude that the RA-CBR method can increase the performance and efficiency of CBR.
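    The aggregation step can be sketched with the Borda count, a common stand-in for Kemeny-optimal aggregation (which is NP-hard in general): each case scores points per attribute ranking according to its position, and the totals give a consensus order. The case names and orderings below are illustrative, and Borda is an assumption, not the paper's exact optimizer:

    ```python
    from collections import defaultdict

    def borda_aggregate(rankings):
        """Aggregate several rankings of the same items into one consensus
        order: each item earns (n - position) points per ranking, and items
        are sorted by total score (ties broken alphabetically)."""
        scores = defaultdict(int)
        for ranking in rankings:
            n = len(ranking)
            for pos, item in enumerate(ranking):
                scores[item] += n - pos
        return sorted(scores, key=lambda item: (-scores[item], item))

    # Three attribute-wise orderings of four hypothetical cases.
    consensus = borda_aggregate([
        ["c1", "c2", "c3", "c4"],
        ["c2", "c1", "c3", "c4"],
        ["c1", "c3", "c2", "c4"],
    ])
    ```

    Here `c1` scores 4 + 3 + 4 = 11 points and tops the consensus, which is the role the most similar retrieved case plays in the framework.
    
    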

  7. Weighted Discriminative Dictionary Learning based on Low-rank Representation

    International Nuclear Information System (INIS)

    Chang, Heyou; Zheng, Hao

    2017-01-01

    Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to a semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank representation based dictionary learning methods cannot effectively exploit the discriminative information between data and dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, where a weighted representation regularization term is constructed. The regularization term associates label information of both training samples and dictionary atoms, and encourages the generation of a discriminative representation with a class-wise block-diagonal structure, which can further improve classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods. (paper)

  8. 7254 ACCEPTABILITY OF DIFFERENT LIPID-BASED NUTRIENT ...

    African Journals Online (AJOL)

    Marlène Hébié

    2013-01-01

    Jan 1, 2013 ... acceptability tests with one of four sets of LNS products: LNS-30 g sweet, .... data and analyzed using non-parametric tests: Kruskal–Wallis test, Wilcoxon signed-rank test or Wilcoxon rank sum test to compare 2 groups and ...

  9. Atp1a3-deficient heterozygous mice show lower rank in the hierarchy and altered social behavior.

    Science.gov (United States)

    Sugimoto, H; Ikeda, K; Kawakami, K

    2017-10-23

    Atp1a3 is the Na-pump alpha3 subunit gene expressed mainly in neurons of the brain. Atp1a3-deficient heterozygous mice (Atp1a3+/-) show altered neurotransmission and deficits of motor function after stress loading. To understand the function of Atp1a3 in a social hierarchy, we evaluated social behaviors (social interaction, aggression, social approach and social dominance) of Atp1a3+/- and compared the rank and hierarchy structure between Atp1a3+/- and wild-type mice within a housing cage using the round-robin tube test and barbering observations. Formation of a hierarchy decreases social conflict and promotes social stability within the group. The hierarchical rank is a reflection of social dominance within a cage, which is heritable and can be regulated by specific genes in mice. Here we report: (1) The degree of social interaction but not aggression was lower in Atp1a3+/- than wild-type mice, and Atp1a3+/- approached Atp1a3+/- mice more frequently than wild type. (2) The frequency of barbering was lower in the Atp1a3+/- group than in the wild-type group, while no difference was observed in the mixed-genotype housing condition. (3) Hierarchy formation was not different between Atp1a3+/- and wild type. (4) Atp1a3+/- showed a lower rank in the mixed-genotype housing condition than the wild type, indicating that Atp1a3 regulates social dominance. In sum, Atp1a3+/- showed unique social behavior characteristics: lower social interaction, a preference for approaching same-genotype mice, and a lower ranking in the hierarchy. © 2017 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  10. Multi-criteria ranking of energy generation scenarios with Monte Carlo simulation

    International Nuclear Information System (INIS)

    Baležentis, Tomas; Streimikiene, Dalia

    2017-01-01

    Highlights: • Two advanced optimization models were applied for EU energy policy scenarios development. • Several advanced MCDA methods were applied for energy policy scenarios ranking: WASPAS, ARAS, TOPSIS. • A Monte Carlo simulation was applied for sensitivity analysis of scenarios ranking. • New policy insights into energy scenario forecasting were provided based on the research conducted. - Abstract: Integrated Assessment Models (IAMs) are omnipresent in energy policy analysis. Even though IAMs can successfully handle uncertainty pertinent to energy planning problems, they render multiple variables as outputs of the modelling. Therefore, policy makers are faced with multiple energy development scenarios and goals. Specifically, technical, environmental, and economic aspects are represented by multiple criteria, which, in turn, are related to conflicting objectives. Preferences of decision makers need to be taken into account in order to facilitate effective energy planning. Multi-criteria decision making (MCDM) tools are relevant in aggregating diverse information and thus comparing alternative energy planning options. The paper aims at ranking European Union (EU) energy development scenarios based on several IAMs with respect to multiple criteria. By doing so, we account for uncertainty surrounding policy priorities outside the IAM. In order to follow a sustainable approach, the ranking of policy options is based on EU energy policy priorities: energy efficiency improvements, increased use of renewables, reductions in GHG emissions and low GHG mitigation costs. The ranking of scenarios is based on the estimates rendered by the two advanced IAMs relying on different approaches, namely TIAM and WITCH. The data are fed into the three MCDM techniques: the method of weighted aggregated sum/product assessment (WASPAS), the Additive Ratio Assessment (ARAS) method, and the technique for order preference by similarity to ideal solution (TOPSIS). As MCDM techniques allow
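Of the three MCDM techniques named, TOPSIS is the most mechanical to illustrate. Below is a minimal sketch of a standard TOPSIS computation (vector normalisation, weighted distances to the ideal and anti-ideal solutions, closeness coefficient); the matrix, weights, and criterion directions are illustrative assumptions, not data from the TIAM/WITCH scenarios.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (standard TOPSIS).

    matrix[i][j]: value of criterion j for alternative i.
    weights[j]:   criterion weight.
    benefit[j]:   True if larger is better for criterion j, False if smaller.
    Returns the closeness coefficient of each alternative (higher is better).
    """
    m, k = len(matrix), len(weights)
    # vector (Euclidean) normalisation per criterion, then weighting
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(k)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(k)] for i in range(m)]
    # ideal and anti-ideal solutions, respecting criterion direction
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(k)]
    anti = [min(v[i][j] for i in range(m)) if benefit[j]
            else max(v[i][j] for i in range(m)) for j in range(k)]
    close = []
    for i in range(m):
        dp = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(k)))
        dn = math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(k)))
        close.append(dn / (dp + dn))
    return close
```

Alternatives are then ranked by descending closeness; a closeness of 1 means the alternative coincides with the ideal solution on every criterion.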

  11. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
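The Mann-Whitney-Wilcoxon alternative the authors recommend can be sketched in a few lines of stdlib Python; this illustration uses midranks and the large-sample normal approximation (the tie correction to the variance is omitted for brevity), and is not the authors' simulation code.

```python
import math

def midranks(values):
    """Return 1-based ranks for a list, averaging ranks over ties (midranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_test(x, y):
    """Wilcoxon rank-sum / Mann-Whitney U test, two-sided normal approximation.

    Returns (U, z, p). The tie correction to the variance is omitted.
    """
    n1, n2 = len(x), len(y)
    ranks = midranks(list(x) + list(y))
    w = sum(ranks[:n1])                    # rank sum of the first sample
    u = w - n1 * (n1 + 1) / 2.0            # Mann-Whitney U statistic
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return u, z, p
```

Because the test uses only ranks, extreme observations influence it no more than any other value, which is why no outlier removal step is needed.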

  12. RANK and RANK ligand expression in primary human osteosarcoma

    Directory of Open Access Journals (Sweden)

    Daniel Branstetter

    2015-09-01

    Our results demonstrate that RANKL expression was observed in the tumor element in 68% of human OS samples using IHC. However, the staining intensity was relatively low and only 37% (29/79) of samples exhibited ≥10% RANKL-positive tumor cells. RANK expression was not observed in OS tumor cells. In contrast, RANK expression was clearly observed in other cells within OS samples, including the myeloid osteoclast precursor compartment, osteoclasts and giant osteoclast cells. The intensity and frequency of RANKL and RANK staining in OS samples were substantially less than those observed in GCTB samples. The observation that RANKL is expressed in OS cells themselves suggests that these tumors may mediate an osteoclastic response, and anti-RANKL therapy may potentially be protective against bone pathologies in OS. However, the absence of RANK expression in primary human OS cells suggests that any autocrine RANKL/RANK signaling in human OS tumor cells is not operative, and anti-RANKL therapy would not directly affect the tumor.

  13. Ranking nodes in growing networks: When PageRank fails

    Science.gov (United States)

    Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng

    2015-11-01

    PageRank is arguably the most popular ranking algorithm, applied in real systems ranging from information to biological and infrastructure networks. Despite its outstanding popularity and broad use in different areas of science, the relation between the algorithm’s efficacy and the properties of the network on which it acts has not yet been fully understood. We study here PageRank’s performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail to identify the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms based on the temporal linking patterns of these systems are needed to better rank the nodes.

  14. Variants of the Borda count method for combining ranked classifier hypotheses

    NARCIS (Netherlands)

    van Erp, Merijn; Schomaker, Lambert; Schomaker, Lambert; Vuurpijl, Louis

    2000-01-01

    The Borda count is a simple yet effective method of combining rankings. In pattern recognition, classifiers are often able to return a ranked set of results. Several experiments have been conducted to test the ability of the Borda count and two variant methods to combine these ranked classifier hypotheses.
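The classic Borda count is simple enough to sketch in full. Assuming each classifier returns a complete ranking from best to worst, a candidate in position i of a length-n ranking earns n - 1 - i points, and candidates are ordered by total points (the two variant methods from the paper are not reproduced here).

```python
def borda_combine(rankings):
    """Combine ranked candidate lists with the classic Borda count.

    rankings: list of rankings, each a list of candidates from best to worst.
    Returns candidates sorted by total Borda points, highest first;
    ties are broken lexicographically for determinism.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for i, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - 1 - i)
    return sorted(scores, key=lambda c: (-scores[c], c))
```

For example, three classifiers ranking candidates a, b, c as (a, b, c), (a, c, b) and (b, a, c) give a a total of 5 points, b 3 and c 1, so the combined ranking is (a, b, c).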

  15. Decompounding random sums: A nonparametric approach

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted; Pitts, Susan M.

    Observations from sums of random variables with a random number of summands, known as random, compound or stopped sums arise within many areas of engineering and science. Quite often it is desirable to infer properties of the distribution of the terms in the random sum. In the present paper we...... review a number of applications and consider the nonlinear inverse problem of inferring the cumulative distribution function of the components in the random sum. We review the existing literature on non-parametric approaches to the problem. The models amenable to the analysis are generalized considerably...

  16. Social class rank, essentialism, and punitive judgment.

    Science.gov (United States)

    Kraus, Michael W; Keltner, Dacher

    2013-08-01

    Recent evidence suggests that perceptions of social class rank influence a variety of social cognitive tendencies, from patterns of causal attribution to moral judgment. In the present studies we tested the hypotheses that upper-class rank individuals would be more likely to endorse essentialist lay theories of social class categories (i.e., that social class is founded in genetically based, biological differences) than would lower-class rank individuals and that these beliefs would decrease support for restorative justice--which seeks to rehabilitate offenders, rather than punish unlawful action. Across studies, higher social class rank was associated with increased essentialism of social class categories (Studies 1, 2, and 4) and decreased support for restorative justice (Study 4). Moreover, manipulated essentialist beliefs decreased preferences for restorative justice (Study 3), and the association between social class rank and class-based essentialist theories was explained by the tendency to endorse beliefs in a just world (Study 2). Implications for how class-based essentialist beliefs potentially constrain social opportunity and mobility are discussed.

  17. A Family Longevity Selection Score: Ranking Sibships by Their Longevity, Size, and Availability for Study

    DEFF Research Database (Denmark)

    Sebastiani, Paola; Hadley, Evan C; Province, Michael

    2009-01-01

    Family studies of exceptional longevity can potentially identify genetic and other factors contributing to long life and healthy aging. Although such studies seek families that are exceptionally long lived, they also need living members who can provide DNA and phenotype information. On the basis...... of these considerations, the authors developed a metric to rank families for selection into a family study of longevity. Their measure, the family longevity selection score (FLoSS), is the sum of 2 components: 1) an estimated family longevity score built from birth-, gender-, and nation-specific cohort survival...... probabilities and 2) a bonus for older living siblings. The authors examined properties of FLoSS-based family rankings by using data from 3 ongoing studies: the New England Centenarian Study, the Framingham Heart Study, and screenees for the Long Life Family Study. FLoSS-based selection yields families...

  18. RANK/RANK Ligand/OPG: A New Therapeutic Approach in the Treatment of Osteoporosis

    Directory of Open Access Journals (Sweden)

    Preisinger E

    2007-01-01

    Research into the coupling mechanisms underlying osteoclastogenesis, bone resorption and remodelling has opened new potential therapeutic approaches for the treatment of osteoporosis. The RANK ("receptor activator of nuclear factor (NF-) κB") ligand (RANKL) plays a key role in bone resorption. Binding of RANKL to its receptor RANK initiates bone resorption. OPG (osteoprotegerin), as well as the human monoclonal antibody (IgG2) denosumab developed for clinical use, blocks the binding of RANK ligand to RANK and prevents bone loss.

  19. Sensitivity ranking for freshwater invertebrates towards hydrocarbon contaminants.

    Science.gov (United States)

    Gerner, Nadine V; Cailleaud, Kevin; Bassères, Anne; Liess, Matthias; Beketov, Mikhail A

    2017-11-01

    Hydrocarbons are of utmost economic importance but may also cause substantial ecological impacts due to accidents or inadequate transportation and use. Currently, freshwater biomonitoring methods lack an indicator that can unequivocally reflect the impacts caused by hydrocarbons while being independent from the effects of other stressors. The aim of the present study was to develop a sensitivity ranking for freshwater invertebrates towards hydrocarbon contaminants, which can be used in hydrocarbon-specific bioindicators. We employed the Relative Sensitivity method and developed the sensitivity ranking S_hydrocarbons based on literature ecotoxicological data supplemented with rapid and mesocosm test results. A first validation of the sensitivity ranking based on an earlier field study has been conducted and revealed the S_hydrocarbons ranking to be promising for application in sensitivity-based indicators. Thus, the first results indicate that the ranking can serve as the core component of future hydrocarbon-specific and sensitivity-trait-based bioindicators.

  20. Momentum sum rules for fragmentation functions

    International Nuclear Information System (INIS)

    Meissner, S.; Metz, A.; Pitonyak, D.

    2010-01-01

    Momentum sum rules for fragmentation functions are considered. In particular, we give a general proof of the Schaefer-Teryaev sum rule for the transverse momentum dependent Collins function. We also argue that corresponding sum rules for related fragmentation functions do not exist. Our model-independent analysis is supplemented by calculations in a simple field-theoretical model.

  1. Multiparty symmetric sum types

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Yoshida, Nobuko; Honda, Kohei

    2010-01-01

    This paper introduces a new theory of multiparty session types based on symmetric sum types, by which we can type non-deterministic orchestration choice behaviours. While the original branching type in session types can represent a choice made by a single participant and accepted by others...... determining how the session proceeds, the symmetric sum type represents a choice made by agreement among all the participants of a session. Such behaviour can be found in many practical systems, including collaborative workflow in healthcare systems for clinical practice guidelines (CPGs). Processes...... with the symmetric sums can be embedded into the original branching types using conductor processes. We show that this type-driven embedding preserves typability, satisfies semantic soundness and completeness, and meets the encodability criteria adapted to the typed setting. The theory leads to an efficient...

  2. Low-ranking female Japanese macaques make efforts for social grooming.

    Science.gov (United States)

    Kurihara, Yosuke

    2016-04-01

    Grooming is essential to build social relationships in primates. Its importance is universal among animals from different ranks; however, rank-related differences in feeding patterns can lead to conflicts between feeding and grooming in low-ranking animals. Unifying the effects of dominance rank on feeding and grooming behaviors contributes to revealing the importance of grooming. Here, I tested whether the grooming behavior of low-ranking females was similar to that of high-ranking females despite differences in their feeding patterns. I followed 9 adult female Japanese macaques (Macaca fuscata fuscata) from the Arashiyama group, and analyzed the feeding patterns and grooming behaviors of low- and high-ranking females. Low-ranking females fed on natural foods away from the provisioning site, whereas high-ranking females obtained more provisioned food at the site. Due to these differences in feeding patterns, low-ranking females spent less time grooming than high-ranking females. However, both low- and high-ranking females performed grooming around the provisioning site, which was linked to the number of neighboring individuals for low-ranking females and feeding on provisioned foods at the site for high-ranking females. The similarity in grooming area led to a range and diversity of grooming partners that did not differ with rank. Thus, low-ranking females can obtain small amounts of provisioned foods and perform grooming with as many partners around the provisioning site as high-ranking females. These results highlight the efforts made by low-ranking females to perform grooming and suggest the importance of grooming behavior in group-living primates.

  4. Current algebra sum rules for Reggeons

    CERN Document Server

    Carlitz, R

    1972-01-01

    The interplay between the constraints of chiral SU(2)×SU(2) symmetry and Regge asymptotic behaviour is investigated. The author reviews the derivation of various current algebra sum rules in a study of the reaction π + α → π + β. These sum rules imply that all particles may be classified in multiplets of SU(2)×SU(2) and that each of these multiplets may contain linear combinations of an infinite number of physical states. Extending his study to the reaction π + α → π + π + β, he derives new sum rules involving commutators of the axial charge with the reggeon coupling matrices of the ρ and f Regge trajectories. Some applications of these new sum rules are noted, and the general utility of these and related sum rules is discussed. (17 refs).

  5. Critical test of isotropic periodic sum techniques with group-based cut-off schemes.

    Science.gov (United States)

    Nozawa, Takuma; Yasuoka, Kenji; Takahashi, Kazuaki Z

    2018-03-08

    Truncation is still chosen for many long-range intermolecular interaction calculations to efficiently compute free-boundary systems, macromolecular systems and net-charge molecular systems, for example. Advanced truncation methods have been developed for long-range intermolecular interactions. Every truncation method can be implemented as one of two basic cut-off schemes, namely either an atom-based or a group-based cut-off scheme. The former computes interactions of "atoms" inside the cut-off radius, whereas the latter computes interactions of "molecules" inside the cut-off radius. In this work, the effect of group-based cut-off is investigated for isotropic periodic sum (IPS) techniques, which are promising cut-off treatments to attain advanced accuracy for many types of molecular system. The effect of group-based cut-off is clearly different from that of atom-based cut-off, and severe artefacts are observed in some cases. However, no severe discrepancy from the Ewald sum is observed with the extended IPS techniques.

  6. Correlation of Cognitive Abilities Level, Age and Ranks in Judo

    Directory of Open Access Journals (Sweden)

    Kraček Stanislav

    2016-11-01

    The aim of this paper is to ascertain the correlation between selected cognitive abilities, age and performance of judokas according to ranking. The study group consisted of judokas in the age group 18 ± 2.4 years. The Stroop Color-Word Test - Victoria Version (VST) was the instrument used to determine the level of cognitive abilities. The data obtained were analyzed with Pearson's correlation coefficient (r). The results of the study show an indirect (negative) correlation (p < 0.01) between age and all three categories of the Stroop test: the higher the age, the lower the time (better performance) of the probands in the Stroop test. There was no statistically significant correlation between performance in the categories of the Stroop test and rankings. The outcomes show that the level of selected cognitive abilities depends on age, but the level of the selected cognitive abilities does not affect the ranking of the judokas.
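Pearson's r used in the study is straightforward to compute from its definition; a minimal stdlib sketch (the p < 0.01 significance test is omitted, and the data below are illustrative, not the study's):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # unscaled covariance
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

An indirect (negative) correlation, as reported between age and Stroop completion time, corresponds to r < 0.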

  7. Sum rules for quasifree scattering of hadrons

    Science.gov (United States)

    Peterson, R. J.

    2018-02-01

    The areas dσ/dΩ of fitted quasifree scattering peaks from bound nucleons for continuum hadron-nucleus spectra measuring d²σ/dΩdω are converted to sum rules akin to the Coulomb sums familiar from continuum electron scattering spectra from nuclear charge. Hadronic spectra with or without charge exchange of the beam are considered. These sums are compared to the simple expectations of a nonrelativistic Fermi gas, including a Pauli blocking factor. For scattering without charge exchange, the hadronic sums are below this expectation, as also observed with Coulomb sums. For charge exchange spectra, the sums are near or above the simple expectation, with larger uncertainties. The strong role of hadron-nucleon in-medium total cross sections is noted from use of the Glauber model.

  8. Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank

    Directory of Open Access Journals (Sweden)

    LI Lan-yin

    2017-04-01

    The traditional PageRank algorithm cannot efficiently handle the ranking of large-scale Web page data. This paper proposes an accelerated algorithm named topK-Rank, based on PageRank, on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find the top k nodes much faster than existing approaches.
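The pruning and bound-estimation machinery of topK-Rank is not reproduced here, but the baseline PageRank it accelerates can be sketched with plain power iteration. The damping factor and tolerance below are conventional defaults, not values from the paper.

```python
def pagerank(links, d=0.85, tol=1e-10):
    """Plain power-iteration PageRank.

    links: dict mapping node -> list of out-neighbours.
    Dangling nodes (no out-links) spread their mass uniformly over all nodes,
    so the scores always sum to 1.
    """
    nodes = sorted(set(links) | {v for outs in links.values() for v in outs})
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    while True:
        dangling = sum(pr[u] for u in nodes if not links.get(u))
        # teleport term plus redistributed dangling mass
        new = {u: (1 - d) / n + d * dangling / n for u in nodes}
        for u in nodes:
            outs = links.get(u)
            if outs:
                share = d * pr[u] / len(outs)
                for v in outs:
                    new[v] += share
        if max(abs(new[u] - pr[u]) for u in nodes) < tol:
            return new
        pr = new
```

A top-k query on this baseline is then just a partial sort of the final scores; topK-Rank's contribution is avoiding the full computation.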

  9. Sum rules for collisional processes

    International Nuclear Information System (INIS)

    Oreg, J.; Goldstein, W.H.; Bar-Shalom, A.; Klapisch, M.

    1991-01-01

    We derive level-to-configuration sum rules for dielectronic capture and for collisional excitation and ionization. These sum rules give the total transition rate from a detailed atomic level to an atomic configuration. For each process, we show that it is possible to factor out the dependence on continuum-electron wave functions. The remaining explicit level dependence of each rate is then obtained from the matrix element of an effective operator acting on the bound orbitals only. In a large class of cases, the effective operator reduces to a one-electron monopole whose matrix element is proportional to the statistical weight of the level. We show that even in these cases, nonstatistical level dependence enters through the dependence of radial integrals on continuum orbitals. For each process, explicit analytic expressions for the level-to-configuration sum rules are given for all possible cases. Together with the well-known J-file sum rule for radiative rates [E. U. Condon and G. H. Shortley, The Theory of Atomic Spectra (University Press, Cambridge, 1935)], the sum rules offer a systematic and efficient procedure for collapsing high-multiplicity configurations into ''effective'' levels for the purpose of modeling the population kinetics of ionized heavy atoms in plasma

  10. QCD Sum Rules, a Modern Perspective

    CERN Document Server

    Colangelo, Pietro; Colangelo, Pietro; Khodjamirian, Alexander

    2001-01-01

    An introduction to the method of QCD sum rules is given for those who want to learn how to use this method. Furthermore, we discuss various applications of sum rules, from the determination of quark masses to the calculation of hadronic form factors and structure functions. Finally, we explain the idea of the light-cone sum rules and outline the recent development of this approach.

  11. Validating rankings in soccer championships

    Directory of Open Access Journals (Sweden)

    Annibal Parracho Sant'Anna

    2012-08-01

    The final ranking of a championship is determined by quality attributes combined with other factors which should be filtered out of any decision on relegation or draft for upper-level tournaments. Factors like referees' mistakes and the difficulty of certain matches due to their accidental importance to the opponents should have their influence reduced. This work tests approaches to combining classification rules that account for the imprecision of the number of points as a measure of quality and for the variables that provide a reliable explanation for it. Two home-advantage variables are tested and shown to be apt to enter as explanatory variables. Independence between the criteria is checked against the hypothesis of maximal correlation. The importance of factors and of composition rules is evaluated on the basis of correlation between rank vectors, the number of classes and the number of clubs in tail classes. Data from five years of the Brazilian Soccer Championship are analyzed.
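Correlation between rank vectors, used above to evaluate factors and composition rules, is commonly measured with Spearman's rank correlation; the abstract does not name its exact measure, so this is an assumption. A minimal sketch via the classic 6·Σd² formula, valid only when neither list contains ties:

```python
def spearman_rho(a, b):
    """Spearman rank correlation of two paired lists via 1 - 6*sum(d^2)/(n(n^2-1)).

    Assumes no ties within either list (ties would need midranks instead).
    """
    n = len(a)
    rank = lambda xs: {v: i + 1 for i, v in enumerate(sorted(xs))}
    ra, rb = rank(a), rank(b)
    d2 = sum((ra[x] - rb[y]) ** 2 for x, y in zip(a, b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Two composition rules that order the clubs identically give rho = 1; a fully reversed ordering gives rho = -1.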

  12. On Learning Ring-Sum-Expansions

    DEFF Research Database (Denmark)

    Fischer, Paul; Simon, H. -U.

    1992-01-01

    The problem of learning ring-sum-expansions from examples is studied. Ring-sum-expansions (RSE) are representations of Boolean functions over the basis {∧, ⊕, 1}, which reflect arithmetic operations in GF(2). k-RSE is the class of ring-sum-expansions containing only monomials...... of length at most k; k-term-RSE is the class of ring-sum-expansions having at most k monomials. It is shown that k-RSE, k ≥ 1, is learnable while k-term-RSE, k > 2, is not learnable if RP ≠ NP. Without using a complexity-theoretical hypothesis, it is proven that k-RSE, k ≥ 1, and k-term-RSE, k ≥ 2, cannot...... be learned from positive (negative) examples alone. However, if the restriction that the hypothesis which is output by the learning algorithm is also a k-RSE is suspended, then k-RSE is learnable from positive (negative) examples only. Moreover, it is proved that 2-term-RSE is learnable by a conjunction

  13. A strategy to discover genes that carry multi-allelic or mono-allelic risk for common diseases: A cohort allelic sums test (CAST)

    International Nuclear Information System (INIS)

    Morgenthaler, Stephan; Thilly, William G.

    2007-01-01

    A method is described to discover if a gene carries one or more allelic mutations that confer risk for any specified common disease. The method does not depend upon genetic linkage of risk-conferring mutations to high frequency genetic markers such as single nucleotide polymorphisms. Instead, the sums of allelic mutation frequencies in case and control cohorts are determined and a statistical test is applied to discover if the difference in these sums is greater than would be expected by chance. A statistical model is presented that defines the ability of such tests to detect significant gene-disease relationships as a function of case and control cohort sizes and key confounding variables: zygosity and genicity, environmental risk factors, errors in diagnosis, limits to mutant detection, linkage of neutral and risk-conferring mutations, ethnic diversity in the general population and the expectation that among all exonic mutants in the human genome greater than 90% will be neutral with regard to any effect on disease risk. Means to test the null hypothesis for, and determine the statistical power of, each test are provided. For this "cohort allelic sums test" or "CAST", the statistical model and test are provided as an Excel™ program, CASTAT© at http://epidemiology.mit.edu. Based on genetics, technology and statistics, a strategy of enumerating the mutant alleles carried in the exons and splice sites of the estimated ∼25,000 human genes in case cohort samples of 10,000 persons for each of 100 common diseases is proposed and evaluated: a wide range of possible conditions of multi-allelic or mono-allelic and monogenic, multigenic or polygenic (including epistatic) risk are found to be detectable using the statistical criteria of 1 or 10 "false positive" gene associations per 25,000 gene-disease pair-wise trials and a statistical power of >0.8. Using estimates of the distribution of both neutral and gene-inactivating nondeleterious mutations in humans and
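The core of a CAST-style comparison, testing whether the summed mutant-allele frequency differs between case and control cohorts, can be sketched as a two-proportion z-test. This simplified illustration is not the CASTAT program's actual procedure, and all counts are hypothetical.

```python
import math

def cast_z(case_mutants, n_cases, control_mutants, n_controls):
    """Normal-approximation test for a difference in summed mutant-allele
    frequency between case and control cohorts (simplified CAST-style test).

    Returns (z, two_sided_p) using the pooled-proportion standard error.
    """
    p1 = case_mutants / n_cases
    p2 = control_mutants / n_controls
    pooled = (case_mutants + control_mutants) / (n_cases + n_controls)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_cases + 1 / n_controls))
    z = (p1 - p2) / se
    pval = math.erfc(abs(z) / math.sqrt(2))
    return z, pval
```

With 90 mutant alleles in 1,000 case alleles against 50 in 1,000 control alleles, the excess is roughly 3.5 standard errors, comfortably past conventional thresholds.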

  14. Sums and products of sets and estimates of rational trigonometric sums in fields of prime order

    Energy Technology Data Exchange (ETDEWEB)

    Garaev, Mubaris Z [National Autonomous University of Mexico, Institute of Mathematics (Mexico)

    2010-11-16

    This paper is a survey of main results on the problem of sums and products of sets in fields of prime order and their applications to estimates of rational trigonometric sums. Bibliography: 85 titles.

  15. Sum formulas for reductive algebraic groups

    DEFF Research Database (Denmark)

    Andersen, Henning Haahr; Kulkarni, Upendra

    2008-01-01

    $\supset V^1 \supset \cdots \supset V^r = 0$. The sum of the positive terms in this filtration satisfies a well-known sum formula. If $T$ denotes a tilting module either for $G$ or $U_q$, then we can similarly filter the space $\Hom_G(V,T)$, respectively $\Hom_{U_q}(V,T)$, and there is a sum formula for the positive...... terms here as well. We give an easy and unified proof of these two (equivalent) sum formulas. Our approach is based on an Euler type identity which we show holds without any restrictions on $p$ or $l$. In particular, we get rid of previous such restrictions in the tilting module case....

  16. Profitability as a business goal: the multicriteria approach to the ranking of the five largest Croatian banks

    Directory of Open Access Journals (Sweden)

    Višnja Vojvodić Rosenzweig

    2012-01-01

    Background: The ranking of commercial banks is usually based on a single criterion, the size of assets or income. A multicriteria approach allows a more complex analysis of their business efficiency. Objectives: This paper proposes the ranking of banks based on six financial criteria using a multicriteria approach implementing a goal programming model. The criteria are classified into three basic groups: profitability, credit risk and solvency. Methods/Approach: Business performance is evaluated using a score for each bank, calculated as the weighted sum of relative values of individual indicators. Results: In the process of solving the corresponding goal programming problem, the weights are calculated. It is assumed that the goal of each bank is the highest profitability. Because of the market competition among banks, the weights of indicators depend on the performance of each bank. This method is applied to the five biggest Croatian banks (ZABA, PBZ, ERSTE, RBA and HYPO). Conclusion: For the observed period (2010), the highest priority is given to profitability and then to credit risk. The ranking is achieved by using a multicriteria model.
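The scoring step, a weighted sum of relative indicator values per bank, can be sketched as follows. The min-max normalisation and the example figures are illustrative assumptions, since the paper derives its weights from a goal programming model rather than fixing them by hand.

```python
def weighted_scores(indicators, weights):
    """Score each bank as the weighted sum of min-max normalised indicators.

    indicators: {bank: [v1, ..., vk]} with higher-is-better criterion values.
    weights:    list of k non-negative weights (typically summing to 1).
    """
    cols = list(zip(*indicators.values()))          # values per criterion
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]

    def norm(v, j):
        # relative value in [0, 1]; constant criteria contribute nothing
        return (v - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 0.0

    return {bank: sum(w * norm(v, j)
                      for j, (v, w) in enumerate(zip(vals, weights)))
            for bank, vals in indicators.items()}
```

Ranking then amounts to sorting banks by descending score; changing the weights (as the goal programming step does) reshuffles the ranking.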

  17. Hazard Ranking Methodology for Assessing Health Impacts of Unconventional Natural Gas Development and Production: The Maryland Case Study.

    Directory of Open Access Journals (Sweden)

    Meleah D Boyle

    The recent growth of unconventional natural gas development and production (UNGDP) has outpaced research on the potential health impacts associated with the process. The Maryland Marcellus Shale Public Health Study was conducted to inform the Maryland Marcellus Shale Safe Drilling Initiative Advisory Commission, State legislators and the Governor about potential public health impacts associated with UNGDP so they could make an informed decision that considers the health and well-being of Marylanders. In this paper, we describe an impact assessment and hazard ranking methodology we used to assess the potential public health impacts for eight hazards associated with the UNGDP process. The hazard ranking included seven metrics: 1) presence of vulnerable populations (e.g., children under the age of 5, individuals over the age of 65, surface owners), 2) duration of exposure, 3) frequency of exposure, 4) likelihood of health effects, 5) magnitude/severity of health effects, 6) geographic extent, and 7) effectiveness of setbacks. Overall public health concern was determined by a color-coded ranking system (low, moderately high, and high) that was generated based on the overall sum of the scores for each hazard. We provide three illustrative examples of applying our methodology for air quality and health care infrastructure, which were ranked as high concern, and for water quality, which was ranked moderately high concern. The hazard ranking was a valuable tool that allowed us to systematically evaluate each of the hazards and provide recommendations to minimize the hazards.
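The additive scoring and color-banding logic can be sketched as below; the per-metric scores and the numeric cut-points are invented for illustration, as the abstract does not publish the study's actual scale.

```python
def hazard_rank(metric_scores, bands=((8, "low"), (12, "moderately high"))):
    """Sum the seven per-metric scores and map the total to a concern band.

    metric_scores: one numeric score per metric (seven in the study).
    bands: ascending (upper_cutoff, label) pairs; totals beyond the last
    cutoff fall into the top band, "high". Cut-points here are illustrative.
    """
    total = sum(metric_scores)
    for cutoff, label in bands:
        if total <= cutoff:
            return total, label
    return total, "high"
```

Each of the eight hazards gets its own seven-score vector, and the banded totals drive the color-coded (low / moderately high / high) summary.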

  18. Study of QCD medium by sum rules

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Institute of Nuclear Physics, Calcutta (India)

    1998-08-01

Though it has no analogue in condensed matter physics, the thermal QCD sum rules can, nevertheless, answer questions of condensed matter type about the QCD medium. The ingredients needed to write such sum rules, viz. the operator product expansion and the spectral representation at finite temperature, are reviewed in detail. The sum rules are then actually written for the case of the correlation function of two vector currents. Collecting information on the thermal average of the higher dimension operators from other sources, we evaluate these sum rules for the temperature dependent {rho}-meson parameters. The possibility of extracting more information from the combined set of all sum rules from different correlation functions is also discussed. (author) 30 refs., 2 figs.

  19. Coloring sums of extensions of certain graphs

    Directory of Open Access Journals (Sweden)

    Johan Kok

    2017-12-01

Full Text Available We recall that the minimum number of colors that allows a proper coloring of a graph $G$ is called the chromatic number of $G$ and denoted $\chi(G)$. Motivated by the introduction of the concept of the $b$-chromatic sum of a graph, the concepts of $\chi'$-chromatic sum and $\chi^+$-chromatic sum are introduced in this paper. The extended graph $G^x$ of a graph $G$ was recently introduced for certain regular graphs. This paper furthers the concepts of $\chi'$-chromatic sum and $\chi^+$-chromatic sum to extended paths and cycles. Bipartite graphs also receive some attention. The paper concludes with patterned structured graphs; these latter graphs are typically found in chemical and biological structures.

  20. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.

  1. Robinson's radiation damping sum rule: Reaffirmation and extension

    International Nuclear Information System (INIS)

    Mane, S.R.

    2011-01-01

Robinson's radiation damping sum rule is one of the classic theorems of accelerator physics. Recently Orlov has claimed to find serious flaws in Robinson's proof of his sum rule. In view of the importance of the subject, I have independently examined the derivation of the Robinson radiation damping sum rule. Orlov's criticisms are without merit: I work through Robinson's derivation and demonstrate that Orlov's criticisms violate well-established mathematical theorems and are hence not valid. I also show that Robinson's derivation, and his damping sum rule, are valid in a larger domain than that treated by Robinson himself: Robinson derived his sum rule under the approximation of a small damping rate, but I show that Robinson's sum rule applies to arbitrary damping rates. I also display more concise derivations of the sum rule using matrix differential equations. I also show that Robinson's sum rule is valid in the vicinity of a parametric resonance.

  2. Model dependence of energy-weighted sum rules

    International Nuclear Information System (INIS)

    Kirson, M.W.

    1977-01-01

The contribution of the nucleon-nucleon interaction to energy-weighted sum rules for electromagnetic multipole transitions is investigated. It is found that only isoscalar electric transitions might have model-independent energy-weighted sum rules. For these transitions, the explicit momentum and angular momentum dependence of the nuclear force gives rise to corrections to the sum rule which are found to be negligibly small, thus confirming the model independence of these specific sum rules. These conclusions are unaffected by correlation effects. (author)

  3. Low-rank coal research

    Energy Technology Data Exchange (ETDEWEB)

    Weber, G. F.; Laudal, D. L.

    1989-01-01

    This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SO{sub x}/NO{sub x} control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).

  4. Extremum uncertainty product and sum states

    Energy Technology Data Exchange (ETDEWEB)

    Mehta, C L; Kumar, S [Indian Inst. of Tech., New Delhi. Dept. of Physics

    1978-01-01

The extremum product states and sum states of the uncertainties in non-commuting observables are examined. These are illustrated by two specific examples: the harmonic oscillator and the angular momentum states. It is shown that the coherent states of the harmonic oscillator are characterized by the minimum uncertainty sum ⟨(Δq)²⟩ + ⟨(Δp)²⟩. The extremum values of the sums and products of the uncertainties of the components of the angular momentum are also obtained.

  5. Efficient Rank Reduction of Correlation Matrices

    NARCIS (Netherlands)

    I. Grubisic (Igor); R. Pietersz (Raoul)

    2005-01-01

Geometric optimisation algorithms are developed that efficiently find the nearest low-rank correlation matrix. We show, in numerical tests, that our methods compare favourably to the existing methods in the literature. The connection with the Lagrange multiplier method is established.

  6. Inverse-moment chiral sum rules

    International Nuclear Information System (INIS)

    Golowich, E.; Kambor, J.

    1996-01-01

A general class of inverse-moment sum rules was previously derived by the authors in a chiral perturbation theory (ChPT) study at two-loop order of the isospin and hypercharge vector-current propagators. Here, we address the evaluation of the inverse-moment sum rules in terms of existing data and theoretical constraints. Two kinds of sum rules are seen to occur: those which contain as-yet undetermined O(q⁶) counterterms and those free of such quantities. We use the former to obtain phenomenological evaluations of two O(q⁶) counterterms. Light is shed on the important but difficult issue regarding contributions of higher orders in the ChPT expansion. copyright 1996 The American Physical Society

  7. Use of the dry-weight-rank method of botanical analysis in the ...

    African Journals Online (AJOL)

    The dry-weight-rank method of botanical analysis was tested in the highveld of the Eastern Transvaal and was found to be an efficient and accurate means of determining the botanical composition of veld herbage. Accuracy was increased by weighting ranks on the basis of quadrat yield, and by allocation of equal ranks to ...

  8. Prototyping a Distributed Information Retrieval System That Uses Statistical Ranking.

    Science.gov (United States)

    Harman, Donna; And Others

    1991-01-01

    Built using a distributed architecture, this prototype distributed information retrieval system uses statistical ranking techniques to provide better service to the end user. Distributed architecture was shown to be a feasible alternative to centralized or CD-ROM information retrieval, and user testing of the ranking methodology showed both…

  9. How to Rank Journals.

    Science.gov (United States)

    Bradshaw, Corey J A; Brook, Barry W

    2016-01-01

There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50), Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68-0.84 Spearman's ρ correlation between the two ranking datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.
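The core of a composite journal rank, averaging the per-metric ranks, can be sketched as below; the journals and metric values are invented, and the κ-resampling that produces the uncertainty windows is omitted.

```python
# Toy composite journal rank: rank journals on each citation metric
# (1 = best, average ranks for ties), then average the ranks across
# metrics. Journals and metric values are invented for illustration.

def ranks(values, higher_is_better=True):
    """Return ranks with 1 = best; tied values share the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i],
                   reverse=higher_is_better)
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

journals = ["J1", "J2", "J3"]
metrics = {"impact_factor": [4.1, 9.5, 2.2],
           "h5_index":      [35,  80,  20],
           "immediacy":     [0.9, 1.4, 0.6]}
per_metric = [ranks(v) for v in metrics.values()]
composite = [sum(col) / len(col) for col in zip(*per_metric)]
```

Here `composite[i]` is the mean rank of journal `i` over the three metrics; smaller is better.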

  10. A generalized Kruskal-Wallis test incorporating group uncertainty with application to genetic association studies.

    Science.gov (United States)

    Acar, Elif F; Sun, Lei

    2013-06-01

Motivated by genetic association studies of SNPs with genotype uncertainty, we propose a generalization of the Kruskal-Wallis test that incorporates group uncertainty when comparing k samples. The extended test statistic is based on probability-weighted rank-sums and follows an asymptotic chi-square distribution with k - 1 degrees of freedom under the null hypothesis. Simulation studies confirm the validity and robustness of the proposed test in finite samples. Application to a genome-wide association study of type 1 diabetic complications further demonstrates the utilities of this generalized Kruskal-Wallis test for studies with group uncertainty. The method has been implemented as an open-source R program, GKW. © 2013, The International Biometric Society.
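For reference, the classical (certain-group) Kruskal-Wallis statistic that the paper generalizes can be computed as follows; this pure-Python sketch assumes no tied observations, and the data are invented.

```python
# Classical Kruskal-Wallis H statistic: rank all observations jointly,
# then compare group rank sums. Under H0, H is approximately chi-square
# with k-1 degrees of freedom. Assumes no ties (no tie correction).

def kruskal_wallis(*groups):
    pooled = [x for g in groups for x in g]
    n = len(pooled)
    rank_of = {v: i + 1 for i, v in enumerate(sorted(pooled))}  # distinct values assumed
    h = 0.0
    for g in groups:
        r = sum(rank_of[x] for x in g)   # group rank sum
        h += r * r / len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

h = kruskal_wallis([1.1, 2.3, 3.0], [4.2, 5.1, 6.7], [0.2, 0.5, 0.9])
```

The probability-weighted version in the paper replaces the hard group membership behind each rank sum with membership probabilities.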

  11. 7 CFR 42.132 - Determining cumulative sum values.

    Science.gov (United States)

    2010-01-01

... 7 Agriculture 2 2010-01-01 false Determining cumulative sum values. 42.132 Section 42... Determining cumulative sum values. (a) The parameters for the on-line cumulative sum sampling plans for AQL's... (b) At the beginning of the basic inspection period, the CuSum value is set equal to...
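A generic cumulative-sum acceptance scheme of this kind can be sketched as follows; the allowed count, starting value, and action limit are invented placeholders, not the actual 7 CFR 42.132 parameters (which are elided above).

```python
# Generic CuSum acceptance sketch: each sample's defect count above an
# allowed number adds to the running CuSum; the lot is rejected as soon
# as the CuSum exceeds an action limit. Parameters are illustrative only.

def cusum_accept(defects, allowed=1, start=0, limit=3):
    """Return (final CuSum, accepted?) for a sequence of per-sample defect counts."""
    s = start
    for d in defects:
        s = max(0, s + d - allowed)   # CuSum never drops below zero
        if s > limit:
            return s, False           # action limit exceeded: reject
    return s, True

final, accepted = cusum_accept([0, 2, 1, 3])
```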

  12. QCD and power corrections to sum rules in deep-inelastic lepton-nucleon scattering

    International Nuclear Information System (INIS)

    Ravindran, V.; Neerven, W.L. van

    2001-01-01

In this paper we study QCD and power corrections to sum rules which show up in deep-inelastic lepton-hadron scattering. Furthermore we will make a distinction between fundamental sum rules, which can be derived from quantum field theory, and those which are of a phenomenological origin. Using current algebra techniques, the fundamental sum rules can be expressed in terms of expectation values of (partially) conserved (axial-)vector currents sandwiched between hadronic states. These expectation values yield the quantum numbers of the corresponding hadron, which are determined by the underlying flavour group SU(n)_F. In this case one can show that there exists an intimate relation between the appearance of power and QCD corrections. The above features do not hold for phenomenological sum rules, hereafter called non-fundamental. They have no foundation in quantum field theory and they mostly depend on certain assumptions made for the structure functions, like super-convergence relations or the parton model. Therefore only the fundamental sum rules provide us with a stringent test of QCD

  13. Harmonic sums and polylogarithms generated by cyclotomic polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2011-05-15

    The computation of Feynman integrals in massive higher order perturbative calculations in renormalizable Quantum Field Theories requires extensions of multiply nested harmonic sums, which can be generated as real representations by Mellin transforms of Poincare-iterated integrals including denominators of higher cyclotomic polynomials. We derive the cyclotomic harmonic polylogarithms and harmonic sums and study their algebraic and structural relations. The analytic continuation of cyclotomic harmonic sums to complex values of N is performed using analytic representations. We also consider special values of the cyclotomic harmonic polylogarithms at argument x=1, resp., for the cyclotomic harmonic sums at N{yields}{infinity}, which are related to colored multiple zeta values, deriving various of their relations, based on the stuffle and shuffle algebras and three multiple argument relations. We also consider infinite generalized nested harmonic sums at roots of unity which are related to the infinite cyclotomic harmonic sums. Basis representations are derived for weight w=1,2 sums up to cyclotomy l=20. (orig.)

  14. Evaluation of the osteoclastogenic process associated with RANK / RANK-L / OPG in odontogenic myxomas

    Science.gov (United States)

    González-Galván, María del Carmen; Mosqueda-Taylor, Adalberto; Bologna-Molina, Ronell; Setien-Olarra, Amaia; Marichalar-Mendia, Xabier; Aguirre-Urizar, José-Manuel

    2018-01-01

Background Odontogenic myxoma (OM) is a benign intraosseous neoplasm that exhibits local aggressiveness and high recurrence rates. Osteoclastogenesis is an important phenomenon in the tumor growth of maxillary neoplasms. RANK (Receptor Activator of Nuclear Factor kappa B) is the signaling receptor of RANK-L (Receptor Activator of Nuclear Factor kappa-B Ligand) that activates the osteoclasts. OPG (osteoprotegerin) is a decoy receptor for RANK-L that inhibits pro-osteoclastogenesis. The RANK / RANK-L / OPG system participates in the regulation of osteolytic activity under normal conditions, and its alteration has been associated with greater bone destruction, and also with tumor growth. Objectives To analyze the immunohistochemical expression of OPG, RANK and RANK-L proteins in odontogenic myxomas (OMs) and their relationship with tumor size. Material and Methods Eighteen OMs, 4 of them small (<3 cm) and the rest large (>3 cm), and 18 dental follicles (DF) included as controls were studied by means of a standard immunohistochemical procedure with RANK, RANK-L and OPG antibodies. For the evaluation, 5 fields (40x) of representative areas of OM and DF were selected, in which the expression of each antibody was determined. Descriptive and comparative statistical analyses were performed with the obtained data. Results There are significant differences in the expression of RANK in OM samples as compared to DF (p = 0.022) and among the OMSs and OMLs (p = 0.032). A strong association is also recognized in the expression of RANK-L and OPG in OM samples. Conclusions Activation of the RANK / RANK-L / OPG triad seems to be involved in the mechanisms of bone balance and destruction, as well as associated with tumor growth in odontogenic myxomas. Key words: Odontogenic myxoma, dental follicle, RANK, RANK-L, OPG, osteoclastogenesis. PMID:29680857

  15. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse

  16. Spin structure of the neutron ({sup 3}He) and the Bjoerken sum rule

    Energy Technology Data Exchange (ETDEWEB)

    Meziani, Z.E. [Stanford Univ., CA (United States)

    1994-12-01

A first measurement of the longitudinal asymmetry of deep-inelastic scattering of polarized electrons from a polarized {sup 3}He target at energies ranging from 19 to 26 GeV has been performed at the Stanford Linear Accelerator Center (SLAC). The spin-structure function of the neutron g{sub 1}{sup n} has been extracted from the measured asymmetries. The Quark Parton Model (QPM) interpretation of the nucleon spin-structure function is examined in light of the new results. A test of the Ellis-Jaffe sum rule (E-J) on the neutron is performed at high momentum transfer and found to be satisfied. Furthermore, combining the proton results of the European Muon Collaboration (EMC) and the neutron results of E-142, the Bjoerken sum rule test is carried out at high Q{sup 2}, where higher order Perturbative Quantum Chromodynamics (PQCD) corrections and higher-twist corrections are smaller. The sum rule is saturated to within one standard deviation.

  17. Opinion formation driven by PageRank node influence on directed networks

    Science.gov (United States)

    Eom, Young-Ho; Shepelyansky, Dima L.

    2015-10-01

We study a two-state opinion formation model driven by PageRank node influence and report an extensive numerical study of how PageRank affects collective opinion formation in large-scale empirical directed networks. In our model, at each step the opinion of a node can be updated by the sum of its neighbor nodes' opinions weighted by the node influence of those neighbors. We consider PageRank probability and its sublinear power as node influence measures and investigate the evolution of opinion under various conditions. First, we observe that all networks reach a steady-state opinion after a certain relaxation time. This time scale decreases with the heterogeneity of node influence in the networks. Second, we find that our model shows consensus and non-consensus behavior in the steady state depending on the type of network: the Web graph, the citation network of physics articles, and the LiveJournal social network show non-consensus behavior, while the Wikipedia article network shows consensus behavior. Third, we find that a more heterogeneous influence distribution leads to a more uniform opinion state in the cases of the Web graph, Wikipedia, and LiveJournal. However, the opposite behavior is observed in the citation network. Finally, we identify that a small number of influential nodes can impose their own opinion on a significant fraction of other nodes in all considered networks. Our study shows that the effects of heterogeneity of node influence on opinion formation can be significant and suggests further investigations on the interplay between node influence and collective opinion in networks.
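A toy version of the update rule, in which each node adopts the sign of the influence-weighted sum of its in-neighbors' opinions with influence given by PageRank, might look like this; the three-node graph and all parameters are invented.

```python
# Toy opinion-formation step driven by PageRank influence.
# Opinions are in {-1, +1}; a node keeps its opinion when the weighted sum is zero.

def pagerank(out_links, d=0.85, iters=100):
    """Plain power-iteration PageRank over a dict node -> list of out-neighbors."""
    nodes = list(out_links)
    pr = {v: 1 / len(nodes) for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / len(nodes) for v in nodes}
        for u, outs in out_links.items():
            if outs:
                share = d * pr[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:                       # dangling node: spread uniformly
                for v in nodes:
                    new[v] += d * pr[u] / len(nodes)
        pr = new
    return pr

def step(opinions, in_links, influence):
    new = {}
    for v, ins in in_links.items():
        s = sum(influence[u] * opinions[u] for u in ins)
        new[v] = opinions[v] if s == 0 else (1 if s > 0 else -1)
    return new

out_links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
in_links = {"a": ["c"], "b": ["a"], "c": ["a", "b"]}
influence = pagerank(out_links)
opinions = step({"a": 1, "b": -1, "c": 1}, in_links, influence)
```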

  18. QCD sum-rules for V-A spectral functions

    International Nuclear Information System (INIS)

    Chakrabarti, J.; Mathur, V.S.

    1980-01-01

    The Borel transformation technique of Shifman et al is used to obtain QCD sum-rules for V-A spectral functions. In contrast to the situation in the original Weinberg sum-rules and those of Bernard et al, the problem of saturating the sum-rules by low lying resonances is brought under control. Furthermore, the present sum-rules, on saturation, directly determine useful phenomenological parameters

  19. Some Finite Sums Involving Generalized Fibonacci and Lucas Numbers

    Directory of Open Access Journals (Sweden)

    E. Kılıç

    2011-01-01

Full Text Available By considering Melham's sums (Melham, 2004), we compute various more general nonalternating sums, alternating sums, and sums whose signs alternate according to a power of −1, involving the generalized Fibonacci and Lucas numbers.

  20. Sum rules for nuclear collective excitations

    International Nuclear Information System (INIS)

    Bohigas, O.

    1978-07-01

    Characterizations of the response function and of integral properties of the strength function via a moment expansion are discussed. Sum rule expressions for the moments in the RPA are derived. The validity of these sum rules for both density independent and density dependent interactions is proved. For forces of the Skyrme type, analytic expressions for the plus one and plus three energy weighted sum rules are given for isoscalar monopole and quadrupole operators. From these, a close relationship between the monopole and quadrupole energies is shown and their dependence on incompressibility and effective mass is studied. The inverse energy weighted sum rule is computed numerically for the monopole operator, and an upper bound for the width of the monopole resonance is given. Finally the reliability of moments given by the RPA with effective interactions is discussed using simple soluble models for the hamiltonian, and also by comparison with experimental data

  1. The effect of uncertainties in distance-based ranking methods for multi-criteria decision making

    Science.gov (United States)

    Jaini, Nor I.; Utyuzhnikov, Sergei V.

    2017-08-01

Data in multi-criteria decision making are often imprecise and changeable; it is therefore important to carry out a sensitivity analysis for the multi-criteria decision making problem. The paper presents a sensitivity analysis for several ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis. The first uncertainty is related to the input data, while the second concerns the Decision Maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method and the trade-off ranking method. TOPSIS and the relative distance method measure the distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking method calculates the distance of an alternative to the extreme solutions and the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
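A minimal TOPSIS sketch, one of the distance-based methods discussed, is shown below; the decision matrix and weights are invented, and all criteria are treated as benefit (larger-is-better) criteria.

```python
# Minimal TOPSIS: normalise the decision matrix, weight it, and score each
# alternative by its relative closeness to the ideal solution.
# Decision matrix and weights are invented for illustration.
import math

def topsis(matrix, weights):
    m = len(matrix)        # alternatives (rows)
    n = len(matrix[0])     # criteria (columns), all benefit criteria here
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for row in v:
        dp = math.dist(row, ideal)   # distance to ideal solution
        dn = math.dist(row, anti)    # distance to anti-ideal solution
        scores.append(dn / (dp + dn))
    return scores

scores = topsis([[1, 2], [2, 4], [3, 6]], [0.5, 0.5])
```

The alternative with the largest score (closest to the ideal, farthest from the anti-ideal) ranks first.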

  2. Sum rules in the response function method

    International Nuclear Information System (INIS)

    Takayanagi, Kazuo

    1990-01-01

    Sum rules in the response function method are studied in detail. A sum rule can be obtained theoretically by integrating the imaginary part of the response function over the excitation energy with a corresponding energy weight. Generally, the response function is calculated perturbatively in terms of the residual interaction, and the expansion can be described by diagrammatic methods. In this paper, we present a classification of the diagrams so as to clarify which diagram has what contribution to which sum rule. This will allow us to get insight into the contributions to the sum rules of all the processes expressed by Goldstone diagrams. (orig.)

  3. Time evolution of Wikipedia network ranking

    Science.gov (United States)

    Eom, Young-Ho; Frahm, Klaus M.; Benczúr, András; Shepelyansky, Dima L.

    2013-12-01

We study the time evolution of ranking and spectral properties of the Google matrix of the English Wikipedia hyperlink network during the years 2003-2011. The statistical properties of the ranking of Wikipedia articles via PageRank and CheiRank probabilities, as well as the matrix spectrum, are shown to be stabilized for 2007-2011. Special emphasis is placed on the ranking of Wikipedia personalities and universities. We show that PageRank selection is dominated by politicians, while 2DRank, which combines PageRank and CheiRank, gives more weight to personalities from the arts. The Wikipedia PageRank of universities recovers 80% of the top universities of the Shanghai ranking during the considered time period.

  4. Evaluation of fasting state-/oral glucose tolerance test-derived measures of insulin release for the detection of genetically impaired β-cell function.

    Directory of Open Access Journals (Sweden)

    Silke A Herzberg-Schäfer

Full Text Available BACKGROUND: To date, fasting state- and different oral glucose tolerance test (OGTT)-derived measures are used to estimate insulin release with reasonable effort in the large human cohorts required, e.g., for genetic studies. Here, we evaluated twelve common (or recently introduced) fasting state-/OGTT-derived indices for their suitability to detect genetically impaired β-cell function. METHODOLOGY/PRINCIPAL FINDINGS: A cohort of 1364 White European individuals at increased risk for type 2 diabetes was characterized by OGTT with glucose, insulin, and C-peptide measurements and genotyped for single nucleotide polymorphisms (SNPs) known to affect glucose- and incretin-stimulated insulin secretion. One fasting state- and eleven OGTT-derived indices were calculated and statistically evaluated. After adjustment for confounding variables, all tested SNPs were significantly associated with at least two insulin secretion measures (p≤0.05). The indices were ranked according to the statistical power of their associations, and the ranks an index obtained for its associations with all the tested SNPs (or a subset) were summed up, resulting in a final ranking. This approach revealed AUCInsulin(0-30)/AUCGlucose(0-30) as the best-ranked index to detect SNP-dependent differences in insulin release. Moreover, AUCInsulin(0-30)/AUCGlucose(0-30), corrected insulin response (CIR), AUCC-Peptide(0-30)/AUCGlucose(0-30), AUCC-Peptide(0-120)/AUCGlucose(0-120), two different formulas for the incremental insulin response from 0-30 min, i.e., the insulinogenic indices IGI2 and IGI1, and insulin at 30 min were ranked significantly higher than the homeostasis model assessment of β-cell function (HOMA-B; p<0.05). AUCC-Peptide(0-120)/AUCGlucose(0-120) was best-ranked for the detection of SNPs involved in incretin-stimulated insulin secretion. In all analyses, HOMA-B displayed the highest rank sums and, thus, scored last. CONCLUSIONS/SIGNIFICANCE: With AUCInsulin(0-30)/AUCGlucose(0-30)...
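The rank-sum scoring used to order the indices can be sketched as follows; the index names are abbreviated and the association strengths (−log10 p per SNP) are invented.

```python
# Sketch of the index-ranking idea: for each SNP, rank the candidate
# secretion indices by association strength (rank 1 = strongest), then
# sum each index's ranks over all SNPs; the smallest rank sum wins.
# All p-values (given as -log10 p) are invented for illustration.

indices = ["AUC_ratio_0_30", "CIR", "HOMA_B"]
strength = {"SNP1": [5.2, 4.8, 1.1],
            "SNP2": [3.9, 4.1, 0.7],
            "SNP3": [6.0, 5.5, 2.0]}

rank_sums = dict.fromkeys(indices, 0)
for values in strength.values():
    order = sorted(range(len(indices)), key=lambda i: values[i], reverse=True)
    for rank, i in enumerate(order, start=1):
        rank_sums[indices[i]] += rank

best = min(rank_sums, key=rank_sums.get)   # smallest rank sum = best index
```

Mirroring the abstract, the index with the highest rank sum (here the stand-in for HOMA-B) scores last.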

  5. 'Sum rules' for preequilibrium reactions

    International Nuclear Information System (INIS)

    Hussein, M.S.

    1981-03-01

Evidence that suggests a correct relationship between the optical transmission matrix, P, and the several correlation widths, γ_n, found in multistep compound (preequilibrium) nuclear reactions, is presented. A second sum rule is also derived within the shell model approach to nuclear reactions. Indications of the potential usefulness of the sum rules in preequilibrium studies are given. (Author)

  6. Simulation approach to coincidence summing in {gamma}-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Dziri, S., E-mail: samir.dziri@iphc.cnrs.fr [Groupe RaMsEs, Institut Pluridisciplinaire Hubert Curien (IPHC), University of Strasbourg, CNRS, IN2P3, UMR 7178, 23 rue de Loess, BP 28, 67037 Strasbourg Cedex 2 (France); Nourreddine, A.; Sellam, A.; Pape, A.; Baussan, E. [Groupe RaMsEs, Institut Pluridisciplinaire Hubert Curien (IPHC), University of Strasbourg, CNRS, IN2P3, UMR 7178, 23 rue de Loess, BP 28, 67037 Strasbourg Cedex 2 (France)

    2012-07-15

Some of the radionuclides used for efficiency calibration of a HPGe spectrometer are subject to coincidence summing (CS), and account must be taken of the phenomenon to obtain quantitative results when counting samples to determine their activity. We have used MCNPX simulations, which do not take CS into account, to obtain {gamma}-ray peak intensities that were compared to those observed experimentally. The loss or gain of a measured peak intensity relative to the simulated peak is attributed to CS. CS correction factors are compared with those of ETNA and GESPECOR. Application to a test sample prepared with known radionuclides gave values close to the published activities. - Highlights: Coincidence summing occurs when the solid angle is increased. The loss of counts leads to inaccurate efficiency curves and hence erroneous quantitative data. To overcome this problem a mono-energetic source is needed; otherwise, MCNPX simulation allows, by comparison with the experimental data, the coincidence-summing correction factors to be obtained. By multiplying these factors by the approximate efficiency, the accurate efficiency is obtained.

  7. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of the protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned jointly and automatically with the ranking scores, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can thus be solved effectively by combining multiple graphs, introducing a new frontier in applying multiple graphs to protein domain ranking.
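The closed-form core of graph-regularized ranking with several combined graphs can be sketched as follows: scores f minimize ||f − y||² + α·fᵀLf, where L is a weighted sum of graph Laplacians. MultiG-Rank additionally learns the graph weights; here they are fixed, and the two tiny similarity graphs are invented.

```python
# Graph-regularized ranking with a fixed combination of graph Laplacians:
# f = (I + alpha * sum_k w_k L_k)^{-1} y. The graphs, weights, and query
# vector y are invented; learning the weights (as MultiG-Rank does) is omitted.
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def ranking_scores(graphs, graph_weights, y, alpha=1.0):
    L = sum(w * laplacian(W) for w, W in zip(graph_weights, graphs))
    n = len(y)
    return np.linalg.solve(np.eye(n) + alpha * L, y)

# two tiny similarity graphs over 3 items; item 0 is the query (y[0] = 1)
W1 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
W2 = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
f = ranking_scores([W1, W2], [0.5, 0.5], y=np.array([1., 0., 0.]))
```

Scores decay smoothly from the query along the combined graph, so items better connected to the query rank higher.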

  8. QCD sum rules in a Bayesian approach

    International Nuclear Information System (INIS)

    Gubler, Philipp; Oka, Makoto

    2011-01-01

A novel technique is developed, in which the Maximum Entropy Method is used to analyze QCD sum rules. The main advantage of this approach lies in its ability to directly generate the spectral function of a given operator. This is done without the need of making an assumption about the specific functional form of the spectral function, such as in the 'pole + continuum' ansatz that is frequently used in QCD sum rule studies. Therefore, with this method it should in principle be possible to distinguish narrow pole structures from continuum states. To check whether meaningful results can be extracted within this approach, we have first investigated the vector meson channel, where QCD sum rules are traditionally known to provide a valid description of the spectral function. Our results exhibit a significant peak in the region of the experimentally observed ρ-meson mass, which agrees with earlier QCD sum rules studies and shows that the Maximum Entropy Method is a useful tool for analyzing QCD sum rules.

  9. ArrayVigil: a methodology for statistical comparison of gene signatures using segregated-one-tailed (SOT) Wilcoxon's signed-rank test.

    Science.gov (United States)

    Khan, Haseeb Ahmad

    2005-01-28

    Due to their versatile diagnostic and prognostic fidelity, molecular signatures or fingerprints are anticipated to be among the most powerful tools for cancer management in the near future. Notwithstanding the experimental advancements in microarray technology, methods for analyzing either whole arrays or gene signatures have not been firmly established. Recently, an algorithm, ArraySolver, was reported by Khan for two-group comparison of microarray gene expression data using the two-tailed Wilcoxon signed-rank test. Most molecular signatures are composed of two sets of genes (hybrid signatures) wherein up-regulation of one set and down-regulation of the other set collectively define the purpose of a gene signature. Since the direction of a selected gene's expression (positive or negative) with respect to a particular disease condition is known, application of one-tailed statistics could be a more relevant choice. A novel method, ArrayVigil, is described for comparing hybrid signatures using a segregated-one-tailed (SOT) Wilcoxon signed-rank test, and the results are compared with integrated-two-tailed (ITT) procedures (SPSS and ArraySolver). ArrayVigil resulted in lower P values than those obtained from ITT statistics when comparing real data from four signatures.
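
    The SOT procedure itself is not reproduced here, but its statistical building block, a one-tailed Wilcoxon signed-rank test with an exact null distribution, can be sketched in a few lines of Python. This is an illustrative sketch only (it assumes no zero or tied differences and is not the ArrayVigil implementation):

```python
def signed_rank_null_counts(n):
    """counts[s] = number of subsets of the ranks {1..n} summing to s.

    Under the null, each of the 2^n sign patterns is equally likely, so
    these counts give the exact distribution of W+ (sum of positive ranks).
    """
    counts = [1]  # only the empty subset sums to 0
    for rank in range(1, n + 1):
        new = counts + [0] * rank
        for s, c in enumerate(counts):
            new[s + rank] += c
        counts = new
    return counts

def wilcoxon_one_tailed_p(diffs):
    """Exact upper-tail p-value P(W+ >= observed) for paired differences.

    Illustrative only: assumes no zero differences and no tied |diffs|.
    """
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r                      # rank of |diffs[i]| among all |diffs|
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    counts = signed_rank_null_counts(n)
    return sum(counts[w_plus:]) / 2 ** n
```

    For an up-regulated gene set one would test the upper tail as above; for a down-regulated set, the lower tail (or the negated differences), which is the "segregated" part of the idea.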

  10. Robust Weighted Sum Harvested Energy Maximization for SWIPT Cognitive Radio Networks Based on Particle Swarm Optimization.

    Science.gov (United States)

    Tuan, Pham Viet; Koo, Insoo

    2017-10-06

    In this paper, we consider multiuser simultaneous wireless information and power transfer (SWIPT) for cognitive radio systems where a secondary transmitter (ST) with an antenna array provides information and energy to multiple single-antenna secondary receivers (SRs) equipped with a power splitting (PS) receiving scheme when multiple primary users (PUs) exist. The main objective of the paper is to maximize weighted sum harvested energy for SRs while satisfying their minimum required signal-to-interference-plus-noise ratio (SINR), the limited transmission power at the ST, and the interference threshold of each PU. For the perfect channel state information (CSI), the optimal beamforming vectors and PS ratios are achieved by the proposed PSO-SDR in which semidefinite relaxation (SDR) and particle swarm optimization (PSO) methods are jointly combined. We prove that SDR always has a rank-1 solution, and is indeed tight. For the imperfect CSI with bounded channel vector errors, the upper bound of weighted sum harvested energy (WSHE) is also obtained through the S-Procedure. Finally, simulation results demonstrate that the proposed PSO-SDR has fast convergence and better performance as compared to the other baseline schemes.

  11. Learning to rank figures within a biomedical article.

    Directory of Open Access Journals (Sweden)

    Feifan Liu

    Full Text Available Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. This ever-increasing sheer volume has made it difficult for scientists to effectively and accurately access figures of their interest, the process of which is crucial for validating research facts and for formulating or testing novel research hypotheses. Current figure search applications can't fully meet this challenge as the "bag of figures" assumption doesn't take into account the relationship among figures. In our previous study, hundreds of biomedical researchers have annotated articles in which they serve as corresponding authors. They ranked each figure in their paper based on a figure's importance at their discretion, referred to as "figure ranking". Using this collection of annotated data, we investigated computational approaches to automatically rank figures. We exploited and extended the state-of-the-art listwise learning-to-rank algorithms and developed a new supervised-learning model BioFigRank. The cross-validation results show that BioFigRank yielded the best performance compared with other state-of-the-art computational models, and the greedy feature selection can further boost the ranking performance significantly. Furthermore, we carry out the evaluation by comparing BioFigRank with three-level competitive domain-specific human experts: (1) First Author, (2) Non-Author-In-Domain-Expert who is not the author nor co-author of an article but who works in the same field of the corresponding author of the article, and (3) Non-Author-Out-Domain-Expert who is not the author nor co-author of an article and who may or may not work in the same field of the corresponding author of an article. Our results show that BioFigRank outperforms Non-Author-Out-Domain-Expert and performs as well as Non-Author-In-Domain-Expert. Although BioFigRank underperforms First Author, since most biomedical researchers are either in- or

  12. Learning to rank figures within a biomedical article.

    Science.gov (United States)

    Liu, Feifan; Yu, Hong

    2014-01-01

    Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. This ever-increasing sheer volume has made it difficult for scientists to effectively and accurately access figures of their interest, the process of which is crucial for validating research facts and for formulating or testing novel research hypotheses. Current figure search applications can't fully meet this challenge as the "bag of figures" assumption doesn't take into account the relationship among figures. In our previous study, hundreds of biomedical researchers have annotated articles in which they serve as corresponding authors. They ranked each figure in their paper based on a figure's importance at their discretion, referred to as "figure ranking". Using this collection of annotated data, we investigated computational approaches to automatically rank figures. We exploited and extended the state-of-the-art listwise learning-to-rank algorithms and developed a new supervised-learning model BioFigRank. The cross-validation results show that BioFigRank yielded the best performance compared with other state-of-the-art computational models, and the greedy feature selection can further boost the ranking performance significantly. Furthermore, we carry out the evaluation by comparing BioFigRank with three-level competitive domain-specific human experts: (1) First Author, (2) Non-Author-In-Domain-Expert who is not the author nor co-author of an article but who works in the same field of the corresponding author of the article, and (3) Non-Author-Out-Domain-Expert who is not the author nor co-author of an article and who may or may not work in the same field of the corresponding author of an article. Our results show that BioFigRank outperforms Non-Author-Out-Domain-Expert and performs as well as Non-Author-In-Domain-Expert. Although BioFigRank underperforms First Author, since most biomedical researchers are either in- or out

  13. Which Basic Rules Underlie Social Judgments? Agency Follows a Zero-Sum Principle and Communion Follows a Non-Zero-Sum Principle.

    Science.gov (United States)

    Dufner, Michael; Leising, Daniel; Gebauer, Jochen E

    2016-05-01

    How are people who generally see others positively evaluated themselves? We propose that the answer to this question crucially hinges on the content domain: We hypothesize that Agency follows a "zero-sum principle" and therefore people who see others as high in Agency are perceived as low in Agency themselves. In contrast, we hypothesize that Communion follows a "non-zero-sum principle" and therefore people who see others as high in Communion are perceived as high in Communion themselves. We tested these hypotheses in a round-robin and a half-block study. Perceiving others as agentic was indeed linked to being perceived as low in Agency. To the contrary, perceiving others as communal was linked to being perceived as high in Communion, but only when people directly interacted with each other. These results help to clarify the nature of Agency and Communion and offer explanations for divergent findings in the literature. © 2016 by the Society for Personality and Social Psychology, Inc.

  14. Multiplex PageRank.

    Directory of Open Access Journals (Sweden)

    Arda Halu

    Full Text Available Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.

  15. Multiplex PageRank.

    Science.gov (United States)

    Halu, Arda; Mondragón, Raúl J; Panzarasa, Pietro; Bianconi, Ginestra

    2013-01-01

    Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.
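
    As a rough illustration of the coupling idea (not the authors' implementation), the sketch below biases a PageRank-style walk on one layer by centralities computed on another layer, loosely in the spirit of the Multiplicative variant; the two layer graphs and all parameters are made up:

```python
def pagerank(adj, n, damping=0.85, bias=None, iters=200):
    """Power iteration for PageRank on an adjacency dict {node: [neighbours]}.

    If `bias` is given, each outgoing link to j is weighted by bias[j], which
    is (roughly) how a multiplex variant lets centrality in one layer shape
    the walk on another. Dangling mass is spread uniformly.
    """
    bias = bias if bias is not None else [1.0] * n
    x = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - damping) / n] * n        # uniform teleportation
        for i in range(n):
            nbrs = adj.get(i, [])
            wsum = sum(bias[j] for j in nbrs)
            if wsum == 0.0:                    # dangling node
                for j in range(n):
                    nxt[j] += damping * x[i] / n
            else:
                for j in nbrs:
                    nxt[j] += damping * x[i] * bias[j] / wsum
        x = nxt
    return x

# Two hypothetical layers over the same four nodes (e.g. two communication media)
layer_a = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
layer_b = {0: [3], 1: [0], 2: [1, 3], 3: [0]}
x_a = pagerank(layer_a, 4)                 # ordinary PageRank on layer A
x_multi = pagerank(layer_b, 4, bias=x_a)   # layer-B ranking biased by layer A
```

    Comparing `x_multi` with `pagerank(layer_b, 4)` shows how a node's standing in one layer can shift its rank in another, which is the phenomenon the measure is designed to expose.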

  16. DIM SUM: demography and individual migration simulated using a Markov chain.

    Science.gov (United States)

    Brown, Jeremy M; Savidge, Kevin; McTavish, Emily Jane B

    2011-03-01

    An increasing number of studies seek to infer demographic history, often jointly with genetic relationships. Despite numerous analytical methods for such data, few simulations have investigated the methods' power and robustness, especially when underlying assumptions have been violated. DIM SUM (Demography and Individual Migration Simulated Using a Markov chain) is a stand-alone Java program for the simulation of population demography and individual migration while recording ancestor-descendant relationships. It does not employ coalescent assumptions or discrete population boundaries. It is extremely flexible, allowing the user to specify border positions, reactions of organisms to borders, local and global carrying capacities, individual dispersal kernels, rates of reproduction and strategies for sampling individuals. Spatial variables may be specified using image files (e.g., as exported from GIS software) and may vary through time. In combination with software for genetic marker simulation, DIM SUM will be useful for testing phylogeographic (e.g., nested clade phylogeographic analysis, coalescent-based tests and continuous-landscape frameworks) and landscape-genetic methods, specifically regarding violations of coalescent assumptions. It can also be used to explore the qualitative features of proposed demographic scenarios (e.g. regarding biological invasions) and as a pedagogical tool. DIM SUM (with user's manual) can be downloaded from http://code.google.com/p/bio-dimsum. © 2010 Blackwell Publishing Ltd.
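
    DIM SUM itself is a Java program; the toy Python sketch below (an assumption-laden illustration, not its actual model) conveys the core idea of forward-in-time simulation with a dispersal kernel, reflecting borders, and recorded ancestor-descendant links, with no coalescent assumptions:

```python
import random

def simulate(generations=5, n0=10, world=(0.0, 100.0), sigma=5.0, seed=1):
    """Toy forward-in-time simulation on a 1-D landscape: each individual
    leaves 0-2 offspring that disperse with a Gaussian kernel and reflect
    off the world borders; every offspring records its parent's index, so
    ancestor-descendant relationships are preserved explicitly."""
    rng = random.Random(seed)
    lo, hi = world
    history = [[(rng.uniform(lo, hi), None) for _ in range(n0)]]
    for _ in range(generations):
        offspring = []
        for parent_idx, (pos, _) in enumerate(history[-1]):
            for _ in range(rng.choice([0, 1, 2])):   # 0-2 offspring each
                x = pos + rng.gauss(0.0, sigma)      # dispersal kernel
                while not (lo <= x <= hi):           # reflecting borders
                    x = 2 * lo - x if x < lo else 2 * hi - x
                offspring.append((x, parent_idx))
        history.append(offspring)
    return history
```

    The real tool adds carrying capacities, 2-D image-based landscapes, configurable border behaviour and sampling strategies on top of this basic loop.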

  17. Harmonic sums, polylogarithms, special numbers, and their generalizations

    International Nuclear Information System (INIS)

    Ablinger, Jakob

    2013-04-01

    In these introductory lectures we discuss classes of presently known nested sums, associated iterated integrals, and special constants which hierarchically appear in the evaluation of massless and massive Feynman diagrams at higher loops. These quantities are elements of stuffle and shuffle algebras implying algebraic relations being widely independent of the special quantities considered. They are supplemented by structural relations. The generalizations are given in terms of generalized harmonic sums, (generalized) cyclotomic sums, and sums containing in addition binomial and inverse-binomial weights. To all these quantities iterated integrals and special numbers are associated. We also discuss the analytic continuation of nested sums of different kind to complex values of the external summation bound N.
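
    The simplest of these objects, (alternating) nested harmonic sums, can be evaluated directly. The sketch below is illustrative and checks one of the stuffle (quasi-shuffle) relations mentioned above, S_1(N)^2 = 2 S_{1,1}(N) - S_2(N):

```python
from fractions import Fraction

def harmonic_sum(indices, N):
    """Nested harmonic sum S_{a1,a2,...}(N): the outer index a1 is summed
    over i = 1..N with weight 1/i^|a1| (times (-1)^i if a1 < 0, the usual
    alternating convention), and the remaining indices are summed up to i."""
    if not indices:
        return Fraction(1)
    a, rest = indices[0], indices[1:]
    total = Fraction(0)
    for i in range(1, N + 1):
        w = Fraction((-1) ** i if a < 0 else 1, i ** abs(a))
        total += w * harmonic_sum(rest, i)
    return total
```

    Exact rationals make the algebraic relations verifiable at finite N before any analytic continuation in the summation bound is attempted.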

  18. Harmonic sums, polylogarithms, special numbers, and their generalizations

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2013-04-15

    In these introductory lectures we discuss classes of presently known nested sums, associated iterated integrals, and special constants which hierarchically appear in the evaluation of massless and massive Feynman diagrams at higher loops. These quantities are elements of stuffle and shuffle algebras implying algebraic relations being widely independent of the special quantities considered. They are supplemented by structural relations. The generalizations are given in terms of generalized harmonic sums, (generalized) cyclotomic sums, and sums containing in addition binomial and inverse-binomial weights. To all these quantities iterated integrals and special numbers are associated. We also discuss the analytic continuation of nested sums of different kind to complex values of the external summation bound N.

  19. Transition sum rules in the shell model

    Science.gov (United States)

    Lu, Yi; Johnson, Calvin W.

    2018-03-01

    Sum rules are an important characterization of electromagnetic and weak transitions in atomic nuclei. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, which in the case of the EWSR is a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We apply this simple tool to a number of nuclides and demonstrate that the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the sd shell.
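
    The centroid relation used above is simply the ratio EWSR/NEWSR; a minimal numeric sketch, with made-up transition energies and strengths:

```python
def centroid(strengths):
    """Average transition energy as EWSR / NEWSR.

    `strengths` maps the excitation energy E_f - E_i (say, in MeV) to the
    transition strength B(E_i -> E_f); the values here are hypothetical.
    """
    newsr = sum(strengths.values())                  # total strength
    ewsr = sum(e * b for e, b in strengths.items())  # energy-weighted strength
    return ewsr / newsr

b_values = {5.0: 0.2, 10.0: 0.5, 15.0: 0.3}  # hypothetical strength function
avg_energy = centroid(b_values)
```

    The point of the paper's formulas is that this ratio can be obtained from expectation values in the initial state alone, without ever enumerating the final ("daughter") states summed over here.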

  20. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  1. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  2. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  3. Solving block linear systems with low-rank off-diagonal blocks is easily parallelizable

    Energy Technology Data Exchange (ETDEWEB)

    Menkov, V. [Indiana Univ., Bloomington, IN (United States)

    1996-12-31

    An easily and efficiently parallelizable direct method is given for solving a block linear system Bx = y, where B = D + Q is the sum of a non-singular block diagonal matrix D and a matrix Q with low-rank blocks. This implicitly defines a new preconditioning method with an operation count close to the cost of calculating a matrix-vector product Qw for some w, plus at most twice the cost of calculating Qw for some w. When implemented on a parallel machine the processor utilization can be as good as that of those operations. Order estimates are given for the general case, and an implementation is compared to block SSOR preconditioning.
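
    A minimal sketch of the underlying idea, reduced to the rank-1 case with a diagonal D (the paper's general low-rank block setting is not reproduced here), is the Sherman-Morrison formula; the D⁻¹ applications are the part that parallelizes trivially across blocks:

```python
def solve_diag_plus_rank1(d, u, v, y):
    """Solve (D + u v^T) x = y with D = diag(d), via Sherman-Morrison:

        x = D^{-1}y - D^{-1}u (v^T D^{-1} y) / (1 + v^T D^{-1} u)

    Each application of D^{-1} is an independent per-entry (per-block)
    operation, which is what makes the approach easy to parallelize.
    """
    dinv_y = [yi / di for yi, di in zip(y, d)]
    dinv_u = [ui / di for ui, di in zip(u, d)]
    alpha = sum(vi * w for vi, w in zip(v, dinv_y))   # v^T D^{-1} y
    beta = 1.0 + sum(vi * w for vi, w in zip(v, dinv_u))  # 1 + v^T D^{-1} u
    return [a - b * alpha / beta for a, b in zip(dinv_y, dinv_u)]

d = [2.0, 4.0, 5.0]
u = [1.0, 0.0, 1.0]
v = [0.0, 1.0, 1.0]
y = [3.0, 8.0, 7.0]
x = solve_diag_plus_rank1(d, u, v, y)
```

    For blocks of rank r > 1 the same idea becomes the Woodbury identity, with a small r×r system replacing the scalar denominator.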

  4. Improving Ranking Using Quantum Probability

    OpenAIRE

    Melucci, Massimo

    2011-01-01

    The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability provided a given probability of ...

  5. Evaluating chiral symmetry restoration through the use of sum rules

    Directory of Open Access Journals (Sweden)

    Rapp Ralf

    2012-11-01

    Full Text Available We pursue the idea of assessing chiral restoration via in-medium modifications of hadronic spectral functions of chiral partners. The usefulness of sum rules in this endeavor is illustrated, focusing on the vector/axial-vector channel. We first present an update on obtaining quantitative results for pertinent vacuum spectral functions. These serve as a basis upon which the in-medium spectral functions can be constructed. A novel feature of our analysis of the vacuum spectral functions is the need to include excited resonances, dictated by satisfying the Weinberg-type sum rules. This includes excited states in both the vector and axial-vector channels. We also analyze the QCD sum rule for the finite temperature vector spectral function, based on a ρ spectral function tested in dilepton data which develops a shoulder at low energies. We find that the ρ′ peak flattens off, which may be a sign of chiral restoration, though a study of the finite temperature axial-vector spectral function remains to be carried out.

  6. Simulations of charge summing and threshold dispersion effects in Medipix3

    International Nuclear Information System (INIS)

    Pennicard, D.; Ballabriga, R.; Llopart, X.; Campbell, M.; Graafsma, H.

    2011-01-01

    A novel feature of the Medipix3 photon-counting pixel readout chip is inter-pixel communication. By summing together the signals from neighbouring pixels at a series of 'summing nodes', and assigning each hit to the node with the highest signal, the chip can compensate for charge-sharing effects. However, previous experimental tests have demonstrated that the node-to-node variation in the detector's response is very large. Using computer simulations, it is shown that this variation is due to threshold dispersion, which results in many hits being assigned to whichever summing node in the vicinity has the lowest threshold level. A reduction in threshold variation would attenuate but not solve this issue. A new charge summing and hit assignment process is proposed, where the signals in individual pixels are used to determine the hit location, and then signals from neighbouring pixels are summed to determine whether the total photon energy is above threshold. In simulation, this new mode accurately assigns each hit to the pixel with the highest pulse height without any losses or double counting. - Research highlights: → Medipix3 readout chip compensates charge sharing using inter-pixel communication. → In initial production run, the flat-field response is unexpectedly nonuniform. → This effect is reproduced in simulation, and is caused by threshold dispersion. → A new inter-pixel communication process is proposed. → Simulations demonstrate the new process should give much better uniformity.
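
    The proposed hit-assignment process can be sketched as follows (a toy illustration with made-up numbers, not the Medipix3 simulation code): locate the hit at the pixel with the highest pulse height, then sum the signals of that pixel and its neighbours and compare the total to the threshold:

```python
def assign_hit(grid, threshold):
    """Proposed-mode sketch: the hit is located at the pixel with the
    highest pulse height; the summed charge of that pixel plus its (up to
    8) neighbours decides whether the photon is counted at all. `grid` is
    a 2-D list of per-pixel pulse heights for one event, in toy units."""
    rows, cols = len(grid), len(grid[0])
    r, c = max(((i, j) for i in range(rows) for j in range(cols)),
               key=lambda ij: grid[ij[0]][ij[1]])
    total = sum(grid[i][j]
                for i in range(max(0, r - 1), min(rows, r + 2))
                for j in range(max(0, c - 1), min(cols, c + 2)))
    return (r, c) if total >= threshold else None

event = [[0.0, 1.2, 0.1],
         [0.2, 3.5, 0.9],   # charge shared around the central pixel
         [0.0, 0.4, 0.0]]
```

    Because the location is decided by individual pixel signals before any summing, each hit lands in exactly one pixel, avoiding the losses and double counting described above.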

  7. How Many Alternatives Can Be Ranked? A Comparison of the Paired Comparison and Ranking Methods.

    Science.gov (United States)

    Ock, Minsu; Yi, Nari; Ahn, Jeonghoon; Jo, Min-Woo

    2016-01-01

    To determine the feasibility of converting ranking data into paired comparison (PC) data and suggest the number of alternatives that can be ranked by comparing a PC and a ranking method. Using a total of 222 health states, a household survey was conducted in a sample of 300 individuals from the general population. Each respondent performed a PC 15 times and a ranking method 6 times (two attempts of ranking three, four, and five health states, respectively). The health states of the PC and the ranking method were constructed to overlap each other. We converted the ranked data into PC data and examined the consistency of the response rate. Applying probit regression, we obtained the predicted probability of each method. Pearson correlation coefficients were determined between the predicted probabilities of those methods. The mean absolute error was also assessed between the observed and the predicted values. The overall consistency of the response rate was 82.8%. The Pearson correlation coefficients were 0.789, 0.852, and 0.893 for ranking three, four, and five health states, respectively. The lowest mean absolute error was 0.082 (95% confidence interval [CI] 0.074-0.090) in ranking five health states, followed by 0.123 (95% CI 0.111-0.135) in ranking four health states and 0.126 (95% CI 0.113-0.138) in ranking three health states. After empirically examining the consistency of the response rate between a PC and a ranking method, we suggest that using five alternatives in the ranking method may be superior to using three or four alternatives. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
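
    The conversion at the heart of the study, turning one ranking of k alternatives into its k(k-1)/2 implied paired-comparison outcomes, can be sketched as follows (illustrative helper names, not the authors' code):

```python
from itertools import combinations

def ranking_to_pairs(ranking):
    """Convert an ordered ranking (best first) into the implied
    paired-comparison outcomes: (winner, loser) for every pair."""
    return [(a, b) for a, b in combinations(ranking, 2)]

def consistency(observed_pairs, ranking):
    """Share of directly observed PC outcomes that agree with the
    outcomes implied by the ranking (cf. the 82.8% reported above)."""
    implied = set(ranking_to_pairs(ranking))
    agree = sum(1 for p in observed_pairs if p in implied)
    return agree / len(observed_pairs)
```

    Ranking five health states yields ten implied pairs per attempt, which is why longer rankings give more PC data per response as long as respondents stay consistent.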

  8. A generalization of Friedman's rank statistic

    NARCIS (Netherlands)

    Kroon, de J.; Laan, van der P.

    1983-01-01

    In this paper a very natural generalization of the two-way analysis of variance rank statistic of FRIEDMAN is given. The general distribution-free test procedure based on this statistic for the effect of J treatments in a random block design can be applied in general two-way layouts without
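
    For reference, Friedman's original statistic for a random block design, the special case being generalized, can be computed as follows (an illustrative sketch, not the paper's generalization):

```python
def friedman_statistic(blocks):
    """Friedman's chi-square statistic for a random block design.

    `blocks` is a list of n blocks, each holding one observation per
    treatment; observations are ranked within each block, with average
    ranks for ties.  Q = 12/(n k (k+1)) * sum_j R_j^2 - 3 n (k+1).
    """
    n, k = len(blocks), len(blocks[0])
    col_rank_sums = [0.0] * k
    for block in blocks:
        sorted_vals = sorted(block)
        for j, v in enumerate(block):
            lo = sorted_vals.index(v) + 1        # smallest rank of this value
            hi = lo + sorted_vals.count(v) - 1   # largest rank of this value
            col_rank_sums[j] += (lo + hi) / 2.0  # average rank handles ties
    q = 12.0 / (n * k * (k + 1)) * sum(r * r for r in col_rank_sums)
    return q - 3.0 * n * (k + 1)
```

    Under the null of no treatment effect, Q is approximately chi-square with k-1 degrees of freedom for moderately large n.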

  9. Fixed mass and scaling sum rules

    International Nuclear Information System (INIS)

    Ward, B.F.L.

    1975-01-01

    Using the correspondence principle (continuity in dynamics), the approach of Keppell-Jones-Ward-Taha to fixed mass and scaling current algebraic sum rules is extended so as to consider explicitly the contributions of all classes of intermediate states. A natural, generalized formulation of the truncation ideas of Cornwall, Corrigan, and Norton is introduced as a by-product of this extension. The formalism is illustrated in the familiar case of the spin independent Schwinger term sum rule. New sum rules are derived which relate the Regge residue functions of the respective structure functions to their fixed hadronic mass limits for q² → ∞. (Auth.)

  10. Neophilia Ranking of Scientific Journals.

    Science.gov (United States)

    Packalen, Mikko; Bhattacharya, Jay

    2017-01-01

    The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations); these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists' incentives to pursue innovative work.

  11. Shapley Value for Constant-sum Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.B.

    2002-01-01

    It is proved that Young's axiomatization for the Shapley value by marginalism, efficiency, and symmetry is still valid for the Shapley value defined on the class of nonnegative constant-sum games and on the entire class of constant-sum games as well. To support an interest to study the class of

  12. Hierarchical partial order ranking

    International Nuclear Information System (INIS)

    Carlsen, Lars

    2008-01-01

    Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters are to be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors in addition to various substance-dependent parameters. Further, socio-economic factors may be taken into consideration. The number of parameters to be included may well appear to be prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR) that remedies this problem. In HPOR the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived representing the ranking corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritisation of polluted sites is given. - Hierarchical partial order ranking of polluted sites has been developed for prioritization based on a large number of parameters
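
The dominance comparison underlying partial order ranking, and one common way of turning a partial order into averaged ranks, can be sketched briefly. This is an illustrative sketch only: the averaged-rank formula used here is the LPOM-style approximation (S+1)(N+1)/(N+1-U), and the site descriptor profiles are invented, not taken from the paper.

```python
def dominates(a, b):
    """a dominates b when a is at least as high on every descriptor."""
    return a != b and all(x >= y for x, y in zip(a, b))

def average_ranks(objects):
    """LPOM-style averaged ranks: (S+1)(N+1)/(N+1-U), where S counts
    objects dominated by a and U counts objects incomparable with a."""
    N = len(objects)
    ranks = {}
    for name, a in objects.items():
        S = sum(dominates(a, b) for b in objects.values())
        U = sum(a != b and not dominates(a, b) and not dominates(b, a)
                for b in objects.values())
        ranks[name] = (S + 1) * (N + 1) / (N + 1 - U)
    return ranks

# invented descriptor profiles for three polluted sites (higher = worse)
sites = {'A': (3, 2), 'B': (2, 1), 'C': (1, 3)}
print(average_ranks(sites))
```

Site A dominates B on both descriptors, while C is incomparable with both, so A receives the highest averaged rank.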

  13. Intra- and inter-rater reliability of the Sollerman hand function test in patients with chronic stroke

    DEFF Research Database (Denmark)

    Brogårdh, Christina; Persson, Ann L; Sjölund, Bengt H

    2007-01-01

    PURPOSE: To examine whether the Sollerman hand function test is reliable in a test-retest situation in patients with chronic stroke. METHOD: Three independent examiners observed each patient at three experimental sessions; two days in week 1 (short-term test-retest) and one day in week 4 (long-term test-retest). A total of 24 patients with chronic stroke (mean age 59.7 years, mean time since stroke onset 29.6 months) participated. The examiners simultaneously assessed the patients' ability to perform 20 subtests. Both ordinal data (generalized kappa) and total sum scores (Spearman's rank ...). The test seems to be a reliable test in patients with chronic stroke, but we recommend that the same examiner evaluates a patient's hand function pre- and post-treatment.

  14. Research Productivity and Rankings of Anesthesiology Departments in Canada and the United States: The Relationship Between the h-Index and Other Common Metrics [RETRACTED].

    Science.gov (United States)

    Bunting, Alexandra C; Alavifard, Sepand; Walker, Benjamin; Miller, Donald R; Ramsay, Tim; Boet, Sylvain

    2018-03-05

    To evaluate the relative research productivity and ranking of anesthesiology departments in Canada and the United States, using the Hirsch index (h-index) and 4 other previously validated metrics. We identified 150 anesthesiology departments in Canada and the United States with an accredited residency program. Publications for each of the 150 departments were identified using Thomson's Institute for Scientific Information Web of Science, and the citation report for each department was exported. The bibliometric data were used to calculate publication metrics for 3 time periods: cumulative (1945-2014), 10 years (2005-2014), and 5 years (2010-2014). The following group metrics were then used to determine the publication impact and relative ranking of all 150 departments: h-index, m-index, total number of publications, sum of citations, and average number of citations per article. Rankings for each metric were also stratified by using a proxy for departmental size. The most common journals in which US and Canadian anesthesiology departments publish their work were identified. The majority (23 of the top 25) of top-ranked anesthesiology departments are in the United States, and 2 of the top 25 departments (University of Toronto; McGill University) are in Canada. There was a strong positive relationship between each of the h-index, total number of publications, and the sum of citations (0.91-0.97; P ...) ... productivity on most metrics. The most frequent journals in which US and Canadian anesthesiology departments publish are Anesthesiology, Anesthesia and Analgesia, and the Canadian Journal of Anesthesia. Our study ranked the Canadian and US anesthesiology departmental research productivity using the h-index applied to each department, total number of publications, total number of citations, and average number of citations. The strong relationship between the h-index and both the number of publications and number of citations of anesthesiology departments shows that the departments ...
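
For reference, the h-index used throughout such rankings is straightforward to compute from a list of citation counts: it is the largest h such that at least h publications have at least h citations each. A minimal sketch (the citation lists are invented for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have >= 4 citations each
print(h_index([100, 2, 1]))       # 2: a single highly cited paper cannot raise h
```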

  15. A Bayesian analysis of QCD sum rules

    International Nuclear Information System (INIS)

    Gubler, Philipp; Oka, Makoto

    2011-01-01

    A new technique has recently been developed, in which the Maximum Entropy Method is used to analyze QCD sum rules. This approach has the virtue of being able to directly generate the spectral function of a given operator, without the need of making an assumption about its specific functional form. To investigate whether useful results can be extracted within this method, we have first studied the vector meson channel, where QCD sum rules are traditionally known to provide a valid description of the spectral function. Our results show a significant peak in the region of the experimentally observed ρ-meson mass, which is in agreement with earlier QCD sum rules studies and suggests that the Maximum Entropy Method is a strong tool for analyzing QCD sum rules.

  16. SUMS Counts-Related Projects

    Data.gov (United States)

    Social Security Administration — Staging Instance for all SUMs Counts related projects including: Redeterminations/Limited Issue, Continuing Disability Resolution, CDR Performance Measures, Initial...

  17. Incorporating X-ray summing into gamma-gamma signature quantification

    International Nuclear Information System (INIS)

    Britton, R.; Jackson, M.J.; Davies, A.V.

    2016-01-01

    A method for quantifying coincidence signatures has been extended to incorporate the effects of X-ray summing, and tested using a high-efficiency γ-γ system. An X-ray library has been created, allowing all possible γ, X-ray and conversion electron cascades to be generated. The equations for calculating efficiency and cascade summing corrected coincidence signature probabilities have also been extended from a two γ, two detector 'special case' to an arbitrarily large system. The coincidence library generated is fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and the half-life of the cascade, allowing the user to quickly identify coincidence signatures of interest. The method and software described is inherently flexible, as it only requires evaluated nuclear data, an X-ray library, and accurate efficiency characterisations to quickly and easily calculate coincidence signature probabilities for a variety of systems. Additional uses for the software include the fast identification of γ coincidence signals with required multiplicities and branching ratios, identification of the optimal coincidence signatures to measure for a particular system, and the calculation of cascade summing corrections for single detector systems. - Highlights: • Method for incorporating X-ray summing into coincidence measurements developed. • Calculation routines have been extended to an arbitrarily large detector system, and re-written to take advantage of multiple computing cores. • Data collected in list-mode with all events timestamped for offline coincidence analysis. • Coincidence analysis of environmental samples will dramatically improve the detection sensitivity achievable.

  18. A Survey on PageRank Computing

    OpenAIRE

    Berkhin, Pavel

    2005-01-01

    This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for web pages independent of their textual content, solely based on the hyperlink structure of the web. PageRank is typically used as a web search ranking component. This defines the importance of the model and the data structures that underly PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much mor...
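
The basic PageRank computation the survey covers is a power iteration on the link graph: each page repeatedly distributes its current rank along its outlinks, damped toward a uniform jump. A minimal sketch with the conventional damping factor d = 0.85 (the three-page graph is a toy example; real implementations handle web scale and convergence far more carefully):

```python
def pagerank(links, d=0.85, iters=100):
    """Power iteration on a link graph given as {page: [outlinks]}."""
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p in pages:
            out = links[p]
            if out:                       # distribute rank along outlinks
                share = rank[p] / len(out)
                for q in out:
                    new[q] += d * share
            else:                         # dangling node: spread uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

ranks = pagerank({'A': ['B', 'C'], 'B': ['C'], 'C': ['A']})
print(ranks)
```

Here C collects rank from both A and B, so it ends up highest; the vector always sums to one.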

  19. 7 CFR 1726.205 - Multiparty lump sum quotations.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Multiparty lump sum quotations. 1726.205 Section 1726....205 Multiparty lump sum quotations. The borrower or its engineer must contact a sufficient number of... basis of written lump sum quotations, the borrower will select the supplier or contractor based on the...

  20. Rank-based model selection for multiple ions quantum tomography

    International Nuclear Information System (INIS)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-01-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the 'sparsity' properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), to models consisting of states of fixed rank and to datasets such as are currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ² test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. (paper)
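
For Gaussian errors, AIC and BIC reduce to simple penalized fit scores, AIC = 2k + n·ln(RSS/n) and BIC = k·ln(n) + n·ln(RSS/n) up to additive constants, where k is the parameter count and RSS the residual sum of squares. A minimal sketch of the trade-off exploited above (the RSS values and parameter counts below are invented purely for illustration):

```python
import math

def aic(n, rss, k):
    """Akaike information criterion for Gaussian errors, up to a constant."""
    return 2 * k + n * math.log(rss / n)

def bic(n, rss, k):
    """Bayesian information criterion for Gaussian errors, up to a constant."""
    return k * math.log(n) + n * math.log(rss / n)

# invented example: a low-rank model fits slightly worse than a
# higher-rank one but uses far fewer parameters
n = 50
candidates = {'rank 2': (120.0, 5), 'rank 4': (100.0, 17)}
best = min(candidates, key=lambda name: bic(n, *candidates[name]))
print(best)  # rank 2
```

Both criteria prefer the simpler model here: the improved fit of the higher-rank candidate does not pay for its extra parameters.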

  1. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.

    Science.gov (United States)

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-12-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
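
The core idea of approximating data with a sum of rank-one outer products can be sketched with plain power iteration that peels off one dominant rank-one term at a time. This is a deliberate simplification: the actual SOUP-DIL algorithms additionally impose sparsity penalties and use closed-form block coordinate updates, which are not reproduced here.

```python
import math

def rank_one_factor(M, iters=50):
    """Dominant rank-one factor (s, u, v) of M, so M ~ s * u v^T,
    found by alternating power iteration."""
    rows, cols = len(M), len(M[0])
    v = [1.0] * cols
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        nu = math.sqrt(sum(x * x for x in u)) or 1.0
        u = [x / nu for x in u]
        v = [sum(M[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        s = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / s for x in v]
    return s, u, v

def residual(M, s, u, v):
    """Data left to explain after peeling off one outer product."""
    return [[M[i][j] - s * u[i] * v[j] for j in range(len(v))]
            for i in range(len(u))]
```

Calling `rank_one_factor` on the residual repeatedly builds up the sum-of-outer-products approximation term by term.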

  2. Adler Function, DIS sum rules and Crewther Relations

    International Nuclear Information System (INIS)

    Baikov, P.A.; Chetyrkin, K.G.; Kuehn, J.H.

    2010-01-01

    The current status of the Adler function and two closely related Deep Inelastic Scattering (DIS) sum rules, namely the Bjorken sum rule for polarized DIS and the Gross-Llewellyn Smith sum rule, are briefly reviewed. A new result is presented: an analytical calculation of the coefficient function of the latter sum rule in a generic gauge theory at order O(α_s^4). It is demonstrated that the corresponding Crewther relation allows one to fix two of three colour structures in the O(α_s^4) contribution to the singlet part of the Adler function.

  3. Bootstrap Sequential Determination of the Co-integration Rank in VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A.M. Robert

    with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense...... in the literature by proposing a bootstrap sequential algorithm which we demonstrate delivers consistent cointegration rank estimation for general I(1) processes. Finite sample Monte Carlo simulations show the proposed procedure performs well in practice....

  4. Sum rules for neutrino oscillations

    International Nuclear Information System (INIS)

    Kobzarev, I.Yu.; Martemyanov, B.V.; Okun, L.B.; Schepkin, M.G.

    1981-01-01

    Sum rules for neutrino oscillations are obtained. The derivation of the general form of the S-matrix for the two-stage process l_i^- → ν → l_k^± (where l_i^-, i = e, μ, τ, ..., are initial leptons with flavour i and l_k^± is the final lepton) is presented. Consideration of the two-stage process l_i^- → ν → l_k^± makes it possible to take neutrino masses into account and to obtain expressions for the oscillating cross sections. In the case of Dirac and left-handed Majorana neutrinos a sum rule is obtained for the quantities (1/V_k) σ(l_i^- → l_k^±), where V_k is the velocity of l_k. In the left-handed Majorana neutrino case there is an additional antineutrino admixture leading to the process l_i^- → l_k^+. Both components (neutrino and antineutrino) oscillate independently. The sums Σ_k (1/V_k) σ(l_i^- → l_k^±) then oscillate due to the presence of left-handed antineutrinos and right-handed neutrinos, which do not take part in weak interactions. If right-handed currents are added, sum rules analogous to those considered above may be obtained. All conclusions remain valid in the general case when CP is not conserved [ru]

  5. Endogenous Versus Exogenous Shocks in Complex Networks: An Empirical Test Using Book Sale Rankings

    Science.gov (United States)

    Sornette, D.; Deschâtres, F.; Gilbert, T.; Ageon, Y.

    2004-11-01

    We study the precursory and recovery signatures accompanying shocks in complex networks, that we test on a unique database of the Amazon.com ranking of book sales. We find clear distinguishing signatures classifying two types of sales peaks. Exogenous peaks occur abruptly and are followed by a power law relaxation, while endogenous peaks occur after a progressively accelerating power law growth followed by an approximately symmetrical power law relaxation which is slower than for exogenous peaks. These results are rationalized quantitatively by a simple model of epidemic propagation of interactions with long memory within a network of acquaintances. The observed relaxation of sales implies that the sales dynamics is dominated by cascades rather than by the direct effects of news or advertisements, indicating that the social network is close to critical.

  6. Succinct partial sums and fenwick trees

    DEFF Research Database (Denmark)

    Bille, Philip; Christiansen, Anders Roy; Prezza, Nicola

    2017-01-01

    We consider the well-studied partial sums problem in succinct space, where one is to maintain an array of n k-bit integers subject to updates such that partial sums queries can be efficiently answered. We present two succinct versions of the Fenwick Tree, which is known for its simplicity and practicality. Our results hold in the encoding model where one is allowed to reuse the space from the input data. Our main result is the first that only requires nk + o(n) bits of space while still supporting sum/update in O(log_b n)/O(b log_b n) time, where 2 ≤ b ≤ log^O(1) n. The second result shows how optimal time for sum/update can be achieved while only slightly increasing the space usage to nk + o(nk) bits. Beyond Fenwick Trees, the results are primarily based on bit-packing and sampling, making them very practical, and they also allow for simple optimal parallelization.
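
A plain (non-succinct) Fenwick Tree shows the core index arithmetic that the succinct variants build on: both operations walk the implicit tree by adding or stripping the lowest set bit, touching only O(log n) entries. A minimal sketch:

```python
class FenwickTree:
    """1-indexed binary indexed tree: point update and prefix sum in O(log n)."""

    def __init__(self, n):
        self.tree = [0] * (n + 1)

    def update(self, i, delta):
        """a[i] += delta."""
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & -i          # climb to the next responsible node

    def prefix_sum(self, i):
        """Sum of a[1..i]."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i          # strip the lowest set bit
        return s

ft = FenwickTree(8)
for i, x in enumerate([3, 1, 4, 1, 5, 9, 2, 6], start=1):
    ft.update(i, x)
print(ft.prefix_sum(4))  # 9
```

A range sum a[l..r] follows as `prefix_sum(r) - prefix_sum(l - 1)`.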

  7. Isospin sum rules for inclusive cross-sections

    NARCIS (Netherlands)

    Rotelli, P.; Suttorp, L.G.

    1972-01-01

    A systematic analysis of isospin sum rules is presented for the distribution functions of strong, electromagnetic and weak inclusive processes. The general expression for these sum rules is given and some new examples are presented.

  8. Wikipedia ranking of world universities

    Science.gov (United States)

    Lages, José; Patt, Antoine; Shepelyansky, Dima L.

    2016-03-01

    We use the directed networks between articles of 24 Wikipedia language editions to produce the Wikipedia Ranking of World Universities (WRWU) using the PageRank, 2DRank and CheiRank algorithms. This approach allows one to incorporate various cultural views on world universities using mathematical statistical analysis independent of cultural preferences. The Wikipedia ranking of the top 100 universities provides about 60% overlap with the Shanghai university ranking, demonstrating the reliable features of this approach. At the same time, the WRWU incorporates all knowledge accumulated in the 24 Wikipedia editions, giving stronger highlights to historically important universities and leading to a different estimation of the efficiency of world countries in university education. The historical development of university ranking is analyzed over the ten centuries of university history.

  9. Effects of temperature sum on vitamin C concentration and yield of sea buckthorn (Hippophae rhamnoides) fruit: optimal time of fruit harvest

    Directory of Open Access Journals (Sweden)

    Yingmou Yao

    1993-12-01

    To investigate the effects of temperature sum on vitamin C concentration (Vc), yield and maturity of sea buckthorn fruit (Hippophae rhamnoides L.) and to predict the optimal harvest time, berries were collected from eight genotypes at intervals of about one week from August 16 to December 2. Maturity was visually observed, berry weight measured and Vc determined. Berries matured at 1165-1316 degree-days (d.d.). Vc reached its maximum at about 1229 d.d., while fruit size and yield reached their maximum at 1380 d.d. Mathematical models of polynomial equations were highly significant for predicting the effects of temperature sum on Vc, maturity and fruit yield. The optimal harvest time for maximizing Vc, yield or economic income could be determined from the differential equations. Great variations in Vc, fruit maturity and fruit size suggested good opportunities for selection and breeding. Low rank correlations in vitamin C concentration during fruit maturity, however, call for special attention in selection and breeding.

  10. Gauss Sum Factorization with Cold Atoms

    International Nuclear Information System (INIS)

    Gilowski, M.; Wendrich, T.; Mueller, T.; Ertmer, W.; Rasel, E. M.; Jentsch, Ch.; Schleich, W. P.

    2008-01-01

    We report the first implementation of a Gauss sum factorization algorithm by an internal state Ramsey interferometer using cold atoms. A sequence of appropriately designed light pulses interacts with an ensemble of cold rubidium atoms. The final population in the involved atomic levels determines a Gauss sum. With this technique we factor the number N = 263193.
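
The underlying arithmetic can be sketched numerically: the normalized truncated Gauss sum has magnitude 1 exactly when the trial divisor l divides N (every phase is then an integer multiple of 2π), and dephases otherwise. The truncation length M and the acceptance threshold below are illustrative choices; the experiment realizes the sum via Ramsey interferometry rather than by direct arithmetic.

```python
import cmath

def gauss_sum(N, l, M=30):
    """|(1/(M+1)) * sum_{m=0..M} exp(2*pi*i * m^2 * N / l)|.

    Reducing m*m*N modulo l keeps the phase numerically exact."""
    total = sum(cmath.exp(2j * cmath.pi * (m * m * N % l) / l)
                for m in range(M + 1))
    return abs(total) / (M + 1)

N = 263193  # the number factored in the experiment: 3 * 7 * 83 * 151
trial_factors = [l for l in range(2, 160) if gauss_sum(N, l) > 0.95]
print(trial_factors)
```

Non-divisors can still produce sizeable "ghost" magnitudes when M is too small, which is why the threshold sits well above the typical 1/√2 ghost level.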

  11. Where Does Latin "Sum" Come From?

    Science.gov (United States)

    Nyman, Martti A.

    1977-01-01

    The derivation of Latin "sum", "es(s)", "est" from Indo-European "esmi", "est", "esti" involves methodological problems. It is claimed here that the development of "sum" from "esmi" is related to the origin of the variation "est-st" (< "esti"). The study is primarily concerned with this process, but chronological suggestions are also made. (CHK)

  12. AP600 passive containment cooling system phenomena identification and ranking table

    International Nuclear Information System (INIS)

    Spencer, D.R.; Woodcock, Joel

    1999-01-01

    This paper presents the Phenomena Identification and Ranking Table (PIRT) used in the containment Design Basis Analysis (DBA) for the AP600 nuclear power plant. The PIRT is a tool generally applied to best estimate thermal hydraulic analyses. In the conservative analytical modeling approach used for the AP600 DBA containment pressure response, the PIRT was a tool used to show completeness and relevance of the test database in accordance with the Code of Federal Regulations for advanced plant design. Additionally, the ranking of phenomena by relative importance in a PIRT allows appropriate focusing of resources during model development and licensing review. The focus of the paper is on the organization and structure of the PIRT to show level of detail and format accepted for the AP600, for potential application to other containment designs or accident scenarios. Conclusions of general interest are discussed regarding table organization and structure, the process for developing relative ranking and incorporating expert opinion, and the definition and usage of the relative ranking in support of the conservative evaluation model. The AP600 containment evaluation model approach, as influenced by the relative rankings, is briefly described to put into context this unique application of the PIRT to a conservative methodology. The bases for relative ranking of each phenomenon, which included expert opinion, and quantitative results of scaling and testing, was submitted to the NRC as part of AP600-specific evaluations. Since a PIRT supports the sufficiency of both a testing program and analytical modeling, the process followed to generate and confirm the PIRT, an important part of the licensing acceptance, was a focus of extensive NRC review. General descriptions of key phenomena are provided to aid in understanding the containment PIRT for more general applications for containment evaluations of other PWR designs or for other scenarios. (author)

  13. The End of Academia?: From "Cogito Ergo Sum" to "Consumo Ergo Sum" Germany and Malaysia in Comparison

    Science.gov (United States)

    Lim, Kim-Hui,; Har, Wai-Mun

    2008-01-01

    The lack of an academic and thinking culture is becoming more worrying and poses a major challenge to the academic community in the 21st century. A few directions that move academia from "cogito ergo sum" to "consumo ergo sum" are actually leading us to "the end of academia". Those directions are: (1) the death of dialectic;…

  14. Gottfried sum rule and mesonic exchanges in deuteron

    International Nuclear Information System (INIS)

    Kaptari, L.P.

    1991-01-01

    Recent NMC data on the experimental value of the Gottfried Sum are discussed. It is shown that the Gottfried Sum is sensitive to nuclear structure corrections, viz. the mesonic exchanges and binding effects. A new estimation of the Gottfried Sum is given. The obtained result is close to the quark-parton prediction of 1/3. 11 refs.; 2 figs

  15. Statistical sums of strings on hyperelliptic surfaces

    International Nuclear Information System (INIS)

    Lebedev, D.; Morozov, A.

    1987-01-01

    Contributions of hyperelliptic surfaces to the statistical sums of string theories are presented. Available results on hyperelliptic surfaces give the opportunity to check the factorization of the three-loop statistical sum. Some remarks on the vanishing statistical sum are presented

  16. A new generalization of Hardy–Berndt sums

    Indian Academy of Sciences (India)

    4,11,18]. Berndt and Goldberg [4] found analytic properties of these sums and established infinite trigonometric series representations for them. The most important properties of Hardy–Berndt sums are reciprocity theorems due to Berndt [3] ...

  17. Spectral-based features ranking for gamelan instruments identification using filter techniques

    Directory of Open Access Journals (Sweden)

    Diah P Wulandari

    2013-03-01

    In this paper, we describe an approach of spectral-based feature ranking for Javanese gamelan instrument identification using filter techniques. The model extracted a spectral-based feature set of the signal using the Short Time Fourier Transform (STFT). The rank of the features was determined using five algorithms, namely ReliefF, Chi-Squared, Information Gain, Gain Ratio, and Symmetric Uncertainty. Then, we tested the ranked features by cross validation using a Support Vector Machine (SVM). The experiment showed that the Gain Ratio algorithm gave the best result; it yielded an accuracy of 98.93%.
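
Two of the filter techniques listed, Information Gain and its relatives, rank a feature by how much it reduces the entropy of the class labels when the data are split on that feature. A minimal sketch on invented toy data (the instrument labels and feature values are illustrative, not from the gamelan dataset):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Drop in label entropy after splitting on a discrete feature."""
    n = len(labels)
    cond = 0.0
    for value in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == value]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# invented toy data: one feature separates the two instruments, one does not
labels = ['saron', 'saron', 'gong', 'gong']
features = {'centroid': ['high', 'high', 'low', 'low'],
            'noise':    ['a', 'b', 'a', 'b']}
ranking = sorted(features, reverse=True,
                 key=lambda f: information_gain(features[f], labels))
print(ranking)  # ['centroid', 'noise']
```

The perfectly separating feature scores the full label entropy (1 bit here); the uninformative one scores zero.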

  18. Maximising information recovery from rank-order codes

    Science.gov (United States)

    Sen, B.; Furber, S.

    2007-04-01

    The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10 - 15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
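
The encode/decode loop described above, keeping only the rank of each filter coefficient and reconstructing magnitudes from a look-up table, can be sketched as follows. The coefficient values and look-up table are invented, and sign handling, which the real retinal model needs, is omitted for brevity.

```python
def rank_order_encode(coeffs):
    """Discard magnitudes: keep only filter indices ordered by |coefficient|."""
    return sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))

def rank_order_decode(order, lookup, n):
    """Rebuild coefficients by assigning each rank its look-up magnitude."""
    out = [0.0] * n
    for rank, index in enumerate(order):
        out[index] = lookup[rank]
    return out

coeffs = [0.1, -2.0, 0.7, 1.5]     # invented filter outputs
order = rank_order_encode(coeffs)
print(order)   # [1, 3, 2, 0]
lookup = [2.0, 1.4, 0.6, 0.1]      # assumed mean magnitude per rank
approx = rank_order_decode(order, lookup, len(coeffs))
print(approx)  # [0.1, 2.0, 0.6, 1.4]
```

The decoded vector preserves the ordering of the original coefficients exactly, while their values are only as good as the look-up table.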

  19. Zero-sum bias: perceived competition despite unlimited resources.

    Science.gov (United States)

    Meegan, Daniel V

    2010-01-01

    Zero-sum bias describes intuitively judging a situation to be zero-sum (i.e., resources gained by one party are matched by corresponding losses to another party) when it is actually non-zero-sum. The experimental participants were students at a university where students' grades are determined by how the quality of their work compares to a predetermined standard of quality rather than to the quality of the work produced by other students. This creates a non-zero-sum situation in which high grades are an unlimited resource. In three experiments, participants were shown the grade distribution after a majority of the students in a course had completed an assigned presentation, and asked to predict the grade of the next presenter. When many high grades had already been given, there was a corresponding increase in low grade predictions. This suggests a zero-sum bias, in which people perceive a competition for a limited resource despite unlimited resource availability. Interestingly, when many low grades had already been given, there was not a corresponding increase in high grade predictions. This suggests that a zero-sum heuristic is only applied in response to the allocation of desirable resources. A plausible explanation for the findings is that a zero-sum heuristic evolved as a cognitive adaptation to enable successful intra-group competition for limited resources. Implications for understanding inter-group interaction are also discussed.

  20. University Rankings and Social Science

    OpenAIRE

    Marginson, S.

    2014-01-01

    University rankings widely affect the behaviours of prospective students and their families, university executive leaders, academic faculty, governments and investors in higher education. Yet the social science foundations of global rankings receive little scrutiny. Rankings that simply recycle reputation without any necessary connection to real outputs are of no common value. It is necessary that rankings be soundly based in scientific terms if a virtuous relationship between performance and...

  1. Development and first application of an operating events ranking tool

    International Nuclear Information System (INIS)

    Šimić, Zdenko; Zerger, Benoit; Banov, Reni

    2015-01-01

    Highlights: • A method using the analytic hierarchy process for ranking operating events is developed and tested. • The method is applied to 5 years of U.S. NRC Licensee Event Reports (1453 events). • Uncertainty and sensitivity of the ranking results are evaluated. • Real events assessment shows the potential of the method for operating experience feedback. - Abstract: The operating experience feedback is important for maintaining and improving safety and availability in nuclear power plants. Detailed investigation of all events is challenging since it requires excessive resources, especially in the case of large event databases. This paper presents an event-group ranking method to complement the analysis of individual operating events. The basis for the method is the use of an internationally accepted event characterization scheme that allows different ways of grouping and ranking events. The ranking method itself consists of implementing the analytic hierarchy process (AHP) by means of a custom-developed tool which allows event ranking based on ranking indexes pre-determined by expert judgment. Following the development phase, the tool was applied to analyze a complete set of 5 years of real nuclear power plant operating events (1453 events). The paper presents the potential of this ranking method to identify possible patterns throughout the event database and therefore to give additional insights into the events as well as to give quantitative input for the prioritization of further, more detailed investigation of selected event groups
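
At the heart of AHP is extracting priority weights from a pairwise comparison matrix, conventionally via its principal eigenvector. A minimal power-iteration sketch (the 3x3 comparison matrix is an invented example on Saaty's 1-9 scale, not the ranking indexes from the paper):

```python
def ahp_weights(M, iters=100):
    """Priority weights of a pairwise comparison matrix via power iteration
    (converges to the normalized principal eigenvector)."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# invented 3-criteria comparison on Saaty's 1-9 scale:
# criterion 0 is moderately preferred to 1 and strongly preferred to 2
M = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 3.0],
     [1.0 / 5.0, 1.0 / 3.0, 1.0]]
weights = ahp_weights(M)
print(weights)
```

For a perfectly consistent matrix the weights reproduce the stated ratios; in practice one also checks Saaty's consistency ratio before trusting them.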

  2. 24 CFR 599.401 - Ranking of applications.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Ranking of applications. 599.401... Communities § 599.401 Ranking of applications. (a) Ranking order. Rural and urban applications will be ranked... applications ranked first. (b) Separate ranking categories. After initial ranking, both rural and urban...

  3. The Eccentric-distance Sum of Some Graphs

    OpenAIRE

    P, Padmapriya; Mathad, Veena

    2017-01-01

    Let $G = (V,E)$ be a simple connected graph. The eccentric-distance sum of $G$ is defined as $\xi^{ds}(G) = \sum_{\{u,v\}\subseteq V(G)} [e(u)+e(v)]\, d(u,v)$, where $e(u)$ is the eccentricity of the vertex $u$ in $G$ and $d(u,v)$ is the distance between $u$ and $v$. In this paper, we establish formulae to calculate the eccentric-distance sum for some graphs, namely wheel, star, broom, lollipop, double star, friendship, multi-star graph and the join of $P_{n-2}$ and $P_2$.
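
    Definitions like this are easy to sanity-check by brute force. The sketch below (plain Python with BFS distances; the helper names are mine, not from the paper) computes $\xi^{ds}$ for the star $K_{1,n}$ and checks it against the closed form $3n + 4n(n-1)$, which follows directly from the definition: the center has eccentricity 1 and each leaf eccentricity 2.

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Shortest-path distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def eccentric_distance_sum(adj):
    """xi^ds(G) = sum over unordered pairs {u,v} of [e(u) + e(v)] * d(u,v)."""
    dist = {u: bfs_distances(adj, u) for u in adj}
    ecc = {u: max(dist[u].values()) for u in adj}
    return sum((ecc[u] + ecc[v]) * dist[u][v]
               for u, v in combinations(adj, 2))

# Star K_{1,n} with n = 4 leaves: closed form 3n + 4n(n-1) = 12 + 48 = 60
n = 4
star = {0: list(range(1, n + 1)), **{i: [0] for i in range(1, n + 1)}}
print(eccentric_distance_sum(star))  # 60
```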

  4. The eccentric-distance sum of some graphs

    Directory of Open Access Journals (Sweden)

    Padmapriya P

    2017-04-01

    Full Text Available Let $G = (V,E)$ be a simple connected graph. The eccentric-distance sum of $G$ is defined as $\xi^{ds}(G) = \sum_{\{u,v\}\subseteq V(G)} [e(u)+e(v)]\, d(u,v)$, where $e(u)$ is the eccentricity of the vertex $u$ in $G$ and $d(u,v)$ is the distance between $u$ and $v$. In this paper, we establish formulae to calculate the eccentric-distance sum for some graphs, namely wheel, star, broom, lollipop, double star, friendship, multi-star graph and the join of $P_{n-2}$ and $P_2$.

  5. On Page Rank

    NARCIS (Netherlands)

    Hoede, C.

    In this paper the concept of page rank for the world wide web is discussed. The possibility of describing the distribution of page rank by an exponential law is considered. It is shown that the concept is essentially equal to that of status score, a centrality measure discussed already in 1953 by

  6. Citation graph based ranking in Invenio

    CERN Document Server

    Marian, Ludmila; Rajman, Martin; Vesely, Martin

    2010-01-01

    Invenio is the web-based integrated digital library system developed at CERN. Within this framework, we present four types of ranking models based on the citation graph that complement the simple approach based on citation counts: time-dependent citation counts, a relevancy ranking which extends the PageRank model, a time-dependent ranking which combines the freshness of citations with PageRank and a ranking that takes into consideration the external citations. We present our analysis and results obtained on two main data sets: Inspire and CERN Document Server. Our main contributions are: (i) a study of the currently available ranking methods based on the citation graph; (ii) the development of new ranking methods that correct some of the identified limitations of the current methods such as treating all citations of equal importance, not taking time into account or considering the citation graph complete; (iii) a detailed study of the key parameters for these ranking methods. (The original publication is ava...

  7. QCD sum rules and applications to nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, T D [Maryland Univ., College Park, MD (United States). Dept. of Physics; [Washington Univ., Seattle, WA (United States). Dept. of Physics and Inst. for Nuclear Theory; Furnstahl, R J [Ohio State Univ., Columbus, OH (United States). Dept. of Physics; Griegel, D K [Maryland Univ., College Park, MD (United States). Dept. of Physics; [TRIUMF, Vancouver, BC (Canada); Xuemin, J

    1994-12-01

    Applications of QCD sum-rule methods to the physics of nuclei are reviewed, with an emphasis on calculations of baryon self-energies in infinite nuclear matter. The sum-rule approach relates spectral properties of hadrons propagating in the finite-density medium, such as optical potentials for quasinucleons, to matrix elements of QCD composite operators (condensates). The vacuum formalism for QCD sum rules is generalized to finite density, and the strategy and implementation of the approach is discussed. Predictions for baryon self-energies are compared to those suggested by relativistic nuclear physics phenomenology. Sum rules for vector mesons in dense nuclear matter are also considered. (author). 153 refs., 8 figs.

  8. QCD sum rules and applications to nuclear physics

    International Nuclear Information System (INIS)

    Cohen, T.D.; Xuemin, J.

    1994-12-01

    Applications of QCD sum-rule methods to the physics of nuclei are reviewed, with an emphasis on calculations of baryon self-energies in infinite nuclear matter. The sum-rule approach relates spectral properties of hadrons propagating in the finite-density medium, such as optical potentials for quasinucleons, to matrix elements of QCD composite operators (condensates). The vacuum formalism for QCD sum rules is generalized to finite density, and the strategy and implementation of the approach is discussed. Predictions for baryon self-energies are compared to those suggested by relativistic nuclear physics phenomenology. Sum rules for vector mesons in dense nuclear matter are also considered. (author)

  9. Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization

    OpenAIRE

    Huang, Xiaofei

    2006-01-01

    The normalized min-sum algorithm can achieve near-optimal performance at decoding LDPC codes. However, it is a critical question to understand the mathematical principle underlying the algorithm. Traditionally, people thought that the normalized min-sum algorithm is a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC codes and Turbo codes. This paper offers an alternative approach to understand the normalized min-sum algorithm. The algorithm is derive...

  10. COMPARISON OF SIMPLE SUM AND DIVISIA MONETARY AGGREGATES USING PANEL DATA ANALYSIS

    Directory of Open Access Journals (Sweden)

    Sadullah CELIK

    2009-07-01

    Full Text Available It is well documented that financial innovation has led to the poor performance of the simple sum method of monetary aggregation, destabilizing the historical relationship between monetary aggregates and ultimate target variables such as the rate of growth and the rate of unemployment during the liberalization period of the 1980s. This study emphasizes the superiority of an alternative method of aggregation over the simple sum method, namely the Divisia monetary aggregates, employing panel data analysis for the United States, United Kingdom, Euro Area and Japan for the period between 1980Q1 and 1993Q3. After investigating the order of stationarity of the panel data set through several panel unit root tests, we perform advanced panel cointegration tests to check the existence of a long-run link between the Divisia monetary aggregates and income and interest rates in a simple Keynesian money demand function.
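
    For contrast with the simple sum, a Divisia monetary aggregate grows at a share-weighted average of its components' growth rates (the Törnqvist-Theil discrete-time form, as I understand the standard construction). The components, growth rates and user-cost expenditure shares below are purely illustrative, not taken from the study.

```python
import math

def divisia_growth(m_prev, m_curr, shares_prev, shares_curr):
    """Tornqvist-Theil Divisia growth: d ln M = sum_i sbar_i * d ln m_i,
    where sbar_i is the average expenditure share across the two periods."""
    return sum(0.5 * (sp + sc) * (math.log(mc) - math.log(mp))
               for mp, mc, sp, sc in zip(m_prev, m_curr, shares_prev, shares_curr))

# Two components: currency grows 2%, a near-money asset grows 10% (illustrative).
m_prev, m_curr = [100.0, 100.0], [102.0, 110.0]
shares = [0.8, 0.2]  # hypothetical user-cost expenditure shares

g_divisia = divisia_growth(m_prev, m_curr, shares, shares)
g_sum = (sum(m_curr) - sum(m_prev)) / sum(m_prev)
print(round(g_divisia, 4), round(g_sum, 4))
```

    The simple sum weights the 10% growth of the near-money asset by its dollar amount, while the Divisia index down-weights it by its small expenditure share, so the two aggregates can tell quite different monetary-policy stories.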

  11. On poisson-stopped-sums that are mixed poisson

    OpenAIRE

    Valero Baya, Jordi; Pérez Casany, Marta; Ginebra Molins, Josep

    2013-01-01

    Maceda (1948) characterized, in terms of the mixing distribution, the mixed Poisson distributions that are Poisson-stopped-sum distributions. In an alternative characterization of the same set of distributions, the Poisson-stopped-sum distributions that are mixed Poisson distributions are here proved to be the Poisson-stopped-sums of either a mixture of zero-truncated Poisson distributions or a zero-modification of such a mixture. Peer Reviewed

  12. Volume sums of polar Blaschke–Minkowski homomorphisms

    Indian Academy of Sciences (India)

    In this article, we establish Minkowski and Aleksandrov–Fenchel type inequalities for the volume sum of polars of Blaschke–Minkowski homomorphisms. Keywords. Blaschke–Minkowski homomorphism; volume differences; volume sum; projection body operator. 2010 Mathematics Subject Classification. 52A40, 52A30.

  13. Bootstrap Determination of the Co-integration Rank in Heteroskedastic VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A.M.Robert

    In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio [PLR] co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates of the underlying VAR model which obtain under the reduced rank null hypothesis. They propose methods based on an i.i.d. bootstrap re-sampling scheme and establish the validity of their proposed bootstrap procedures in the context of a co-integrated VAR model with i.i.d. innovations. In this paper we investigate the properties of their bootstrap procedures, together with analogous procedures based on a wild bootstrap re-sampling scheme, when time-varying behaviour is present in either the conditional or unconditional variance of the innovations. We show that the bootstrap PLR tests are asymptotically correctly sized and...

  14. Bootstrap Determination of the Co-Integration Rank in Heteroskedastic VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A. M. Robert

    In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio [PLR] co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates of the underlying VAR model which obtain under the reduced rank null hypothesis. They propose methods based on an i.i.d. bootstrap re-sampling scheme and establish the validity of their proposed bootstrap procedures in the context of a co-integrated VAR model with i.i.d. innovations. In this paper we investigate the properties of their bootstrap procedures, together with analogous procedures based on a wild bootstrap re-sampling scheme, when time-varying behaviour is present in either the conditional or unconditional variance of the innovations. We show that the bootstrap PLR tests are asymptotically correctly sized and...

  15. Gaussian sum rules for optical functions

    International Nuclear Information System (INIS)

    Kimel, I.

    1981-12-01

    A new (Gaussian) type of sum rules (GSR) for several optical functions is presented. The functions considered are: dielectric permeability, refractive index, energy loss function, rotatory power and ellipticity (circular dichroism). While reducing to the usual type of sum rules in a certain limit, the GSR contain, in general, a Gaussian factor that serves to improve convergence. GSR might be useful in analysing experimental data. (Author) [pt

  16. The Gross-Llewellyn Smith sum rule

    International Nuclear Information System (INIS)

    Scott, W.G.

    1981-01-01

    We present the most recent data on the Gross-Llewellyn Smith sum rule obtained from the combined BEBC Narrow Band Neon and GGM-PS Freon neutrino/antineutrino experiments. The data for the Gross-Llewellyn Smith sum rule as a function of q^2 suggest a smaller value for the QCD coupling constant parameter Λ than is obtained from the analysis of the higher moments. (author)

  17. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and unrelated problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how can this internal relationship be explored and made use of? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge the coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.

  18. University Rankings: The Web Ranking

    Science.gov (United States)

    Aguillo, Isidro F.

    2012-01-01

    The publication in 2003 of the Ranking of Universities by Jiao Tong University of Shanghai has revolutionized not only academic studies on Higher Education, but has also had an important impact on the national policies and the individual strategies of the sector. The work gathers the main characteristics of this and other global university…

  19. Zero-sum bias: perceived competition despite unlimited resources

    Directory of Open Access Journals (Sweden)

    Daniel V Meegan

    2010-11-01

    Full Text Available Zero-sum bias describes intuitively judging a situation to be zero-sum (i.e., resources gained by one party are matched by corresponding losses to another party) when it is actually non-zero-sum. The experimental participants were students at a university where students' grades are determined by how the quality of their work compares to a predetermined standard of quality rather than to the quality of the work produced by other students. This creates a non-zero-sum situation in which high grades are an unlimited resource. In three experiments, participants were shown the grade distribution after a majority of the students in a course had completed an assigned presentation, and asked to predict the grade of the next presenter. When many high grades had already been given, there was a corresponding increase in low grade predictions. This suggests a zero-sum bias, in which people perceive a competition for a limited resource despite unlimited resource availability. Interestingly, when many low grades had already been given, there was not a corresponding increase in high grade predictions. This suggests that a zero-sum heuristic is only applied in response to the allocation of desirable resources. A plausible explanation for the findings is that a zero-sum heuristic evolved as a cognitive adaptation to enable successful intra-group competition for limited resources. Implications for understanding inter-group interaction are also discussed.

  20. Ranking Specific Sets of Objects.

    Science.gov (United States)

    Maly, Jan; Woltran, Stefan

    2017-01-01

    Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking, jointly satisfying properties such as dominance and independence, on the whole power set of objects. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or because they do not satisfy some background theory. In this paper, we treat the computational problem of whether an order on a given subset of the power set of elements, satisfying different variants of dominance and independence, can be found, given a ranking on the elements. We show that this problem is tractable for partial rankings and NP-complete for total rankings.

  1. PageRank of integers

    International Nuclear Information System (INIS)

    Frahm, K M; Shepelyansky, D L; Chepelianskii, A D

    2012-01-01

    We build a directed network tracing links from a given integer to its divisors and analyze the properties of the Google matrix of this network. The PageRank vector of this matrix is computed numerically and it is shown that its probability is approximately inversely proportional to the PageRank index, thus being similar to the Zipf law and to the dependence established for the World Wide Web. The spectrum of the Google matrix of integers is characterized by a large gap and a relatively small number of nonzero eigenvalues. A simple semi-analytical expression for the PageRank of integers is derived that allows us to find this vector for matrices of billion size. This network provides a new PageRank order of integers. (paper)
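
    The divisor network is small enough to explore directly with power iteration. A minimal sketch (not the paper's code: the damping factor 0.85, the restriction to proper divisors, and the uniform redistribution for the dangling node 1 are conventional assumptions on my part):

```python
def divisors(n):
    """Proper divisors of n; the node n links to each of them."""
    return [d for d in range(1, n) if n % d == 0]

def pagerank_integers(N, alpha=0.85, iters=200):
    """Power iteration for PageRank on the divisor network of 1..N.
    The dangling node (the integer 1 has no proper divisors)
    redistributes its mass uniformly."""
    links = {n: divisors(n) for n in range(1, N + 1)}
    p = {n: 1.0 / N for n in links}
    for _ in range(iters):
        new = {n: 0.0 for n in links}
        for n, outs in links.items():
            if outs:
                share = p[n] / len(outs)
                for d in outs:
                    new[d] += alpha * share
            else:  # dangling node: spread uniformly
                for m in new:
                    new[m] += alpha * p[n] / N
        teleport = (1.0 - alpha) / N
        p = {n: v + teleport for n, v in new.items()}
    return p

p = pagerank_integers(100)
top = sorted(p, key=p.get, reverse=True)[:3]
print(top)  # the integer 1 ranks first, since every node links to it
```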

  2. Network meta-analysis of diagnostic test accuracy studies identifies and ranks the optimal diagnostic tests and thresholds for health care policy and decision-making.

    Science.gov (United States)

    Owen, Rhiannon K; Cooper, Nicola J; Quinn, Terence J; Lees, Rosalind; Sutton, Alex J

    2018-07-01

    Network meta-analyses (NMA) have extensively been used to compare the effectiveness of multiple interventions for health care policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the efficiency of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds: MMSE accounting for the correlations between multiple test accuracy measures from the same study. We developed and successfully fitted a model comparing multiple tests/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold decision making. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Compound sums and their applications in finance

    NARCIS (Netherlands)

    R. Helmers (Roelof); B. Tarigan

    2003-01-01

    textabstractCompound sums arise frequently in insurance (total claim size in a portfolio) and in accountancy (total error amount in audit populations). As the normal approximation for compound sums usually performs very badly, one may look for better methods for approximating the distribution of a

  4. Superconvergent sum rules for the normal reflectivity

    International Nuclear Information System (INIS)

    Furuya, K.; Zimerman, A.H.; Villani, A.

    1976-05-01

    Families of superconvergent relations for the normal reflectivity function are written. Sum rules connecting the difference of phases of the reflectivities of two materials are also considered. Finally superconvergence relations and sum rules for magneto-reflectivity in the Faraday and Voigt regimes are also studied

  5. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
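
    The random Fourier feature trick mentioned in the abstract replaces each kernel evaluation with an inner product of explicit low-dimensional feature maps, so no kernel matrix is ever formed. A minimal sketch of that approximation for the RBF kernel (this is the Rahimi-Recht construction, not the authors' implementation; gamma, D and the test points are arbitrary choices):

```python
import math
import random

random.seed(0)

def rbf(x, y, gamma=0.5):
    """Exact RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def rff_map(x, ws, bs):
    """Random Fourier feature map z(x), with E[z(x) . z(y)] = k(x, y)."""
    D = len(ws)
    return [math.sqrt(2.0 / D) *
            math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(ws, bs)]

gamma, d, D = 0.5, 3, 2000
# For the RBF kernel, sample w ~ N(0, 2*gamma*I) (its Fourier transform)
ws = [[random.gauss(0.0, math.sqrt(2 * gamma)) for _ in range(d)]
      for _ in range(D)]
bs = [random.uniform(0.0, 2 * math.pi) for _ in range(D)]

x, y = [0.2, -0.1, 0.4], [0.1, 0.3, 0.0]
exact = rbf(x, y, gamma)
approx = sum(a * b for a, b in zip(rff_map(x, ws, bs), rff_map(y, ws, bs)))
print(round(exact, 3), round(approx, 3))  # approx tracks exact to ~1/sqrt(D)
```

    Once every point is mapped to its D-dimensional features, the pairwise ranking objective can be optimized as a linear model, which is the source of the speedup the abstract reports.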

  6. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  7. University Rankings and Social Science

    Science.gov (United States)

    Marginson, Simon

    2014-01-01

    University rankings widely affect the behaviours of prospective students and their families, university executive leaders, academic faculty, governments and investors in higher education. Yet the social science foundations of global rankings receive little scrutiny. Rankings that simply recycle reputation without any necessary connection to real…

  8. Calculating vibrational spectra with sum of product basis functions without storing full-dimensional vectors or matrices.

    Science.gov (United States)

    Leclerc, Arnaud; Carrington, Tucker

    2014-05-07

    We propose an iterative method for computing vibrational spectra that significantly reduces the memory cost of calculations. It uses a direct product primitive basis, but does not require storing vectors with as many components as there are product basis functions. Wavefunctions are represented in a basis each of whose functions is a sum of products (SOP), and the factorizable structure of the Hamiltonian is exploited. If the factors of the SOP basis functions are properly chosen, wavefunctions are linear combinations of a small number of SOP basis functions. The SOP basis functions are generated using a shifted block power method. The factors are refined with a rank reduction algorithm to cap the number of terms in a SOP basis function. The ideas are tested on a 20-D model Hamiltonian and a realistic CH3CN (12-dimensional) potential. For the 20-D problem, to use a standard direct product iterative approach one would need to store vectors with about 10^20 components and would hence require about 8 × 10^11 GB. With the approach of this paper only 1 GB of memory is necessary. Results for CH3CN agree well with those of a previous calculation on the same potential.
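
    The memory saving comes from never materializing the full direct-product operator. For a single product (Kronecker) term of a factorizable Hamiltonian, this rests on the identity (A ⊗ B) vec(X) = vec(A X Bᵀ) in the row-major vec convention. A toy sketch of that identity (illustrative only, not the paper's algorithm):

```python
def matmul(A, B):
    """Dense matrix product of nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def kron(A, B):
    """Explicit Kronecker product -- the memory-hungry route."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

def apply_factored(A, B, x):
    """(A kron B) x computed as vec(A X B^T), never forming kron(A, B).
    Row-major vec convention: x[i*nB + j] = X[i][j]."""
    nA, nB = len(A[0]), len(B[0])
    X = [x[i * nB:(i + 1) * nB] for i in range(nA)]
    return [v for row in matmul(matmul(A, X), transpose(B)) for v in row]

A, B = [[1, 2], [3, 4]], [[0, 1], [1, 1]]
x = [1, 2, 3, 4]
K = kron(A, B)
direct = [sum(K[i][j] * x[j] for j in range(len(x))) for i in range(len(K))]
print(direct, apply_factored(A, B, x))  # both [10, 17, 22, 37]
```

    For D factors of size n, the factored form stores D matrices of n² entries instead of one matrix of n^2D entries, which is the same scaling argument behind the 10^20-component vector the abstract avoids.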

  9. Singular f-sum rule for superfluid 4He

    International Nuclear Information System (INIS)

    Wong, V.K.

    1979-01-01

    The validity and applicability to inelastic neutron scattering of a singular f-sum rule for superfluid helium, proposed by Griffin to explain the ρ_s dependence in S(k, ω) as observed by Woods and Svensson, are examined in the light of similar sum rules rigorously derived for anharmonic crystals and Bose liquids. It is concluded that the singular f-sum rules are only of microscopic interest. (Auth.)

  10. The sum of friends’ and lovers’ self-control scores predicts relationship quality

    NARCIS (Netherlands)

    Vohs, K.D.; Finkenauer, C.; Baumeister, R.F.

    2011-01-01

    What combination of partners' trait self-control levels produces the best relationship outcomes? The authors tested three hypotheses: complementarity (a large difference in trait self-control scores), similarity (a small difference in self-control scores), and totality (a large sum of self-control

  11. Two-dimensional ranking of Wikipedia articles

    Science.gov (United States)

    Zhirov, A. O.; Zhirov, O. V.; Shepelyansky, D. L.

    2010-10-01

    The Library of Babel, described by Jorge Luis Borges, stores an enormous amount of information. The Library exists ab aeterno. Wikipedia, a free online encyclopaedia, becomes a modern analogue of such a Library. Information retrieval and ranking of Wikipedia articles become the challenge of modern society. While PageRank highlights very well known nodes with many ingoing links, CheiRank highlights very communicative nodes with many outgoing links. In this way the ranking becomes two-dimensional. Using CheiRank and PageRank we analyze the properties of two-dimensional ranking of all Wikipedia English articles and show that it gives their reliable classification with rich and nontrivial features. Detailed studies are done for countries, universities, personalities, physicists, chess players, Dow-Jones companies and other categories.

  12. Lattice QCD evaluation of baryon magnetic moment sum rules

    International Nuclear Information System (INIS)

    Leinweber, D.B.

    1991-05-01

    Magnetic moment combinations and sum rules are evaluated using recent results for the magnetic moments of octet baryons determined in a numerical simulation of quenched QCD. The model-independent and parameter-free results of the lattice calculations remove some of the confusion and contradiction surrounding past magnetic moment sum rule analyses. The lattice results reveal the underlying quark dynamics investigated by magnetic moment sum rules and indicate the origin of magnetic moment quenching for the non-strange quarks in Σ. In contrast to previous sum rule analyses, the magnetic moments of nonstrange quarks in Ξ are seen to be enhanced in the lattice results. In most cases, the spin-dependent dynamics and center-of-mass effects giving rise to baryon dependence of the quark moments are seen to be sufficient to violate the sum rules in agreement with experimental measurements. In turn, the sum rules are used to further examine the results of the lattice simulation. The Sachs sum rule suggests that quark loop contributions not included in present lattice calculations may play a key role in removing the discrepancies between lattice and experimental ratios of magnetic moments. This is supported by other sum rules sensitive to quark loop contributions. A measure of the isospin symmetry breaking in the effective quark moments due to quark loop contributions is in agreement with model expectations. (Author) 16 refs., 2 figs., 2 tabs

  13. Determination of the Bjorken Sum and Strong Coupling from Polarized Structure Functions

    CERN Document Server

    Altarelli, Guido; Forte, Stefano; Ridolfi, G; Altarelli, Guido; Ball, Richard D.; Forte, Stefano; Ridolfi, Giovanni

    1997-01-01

    We present an NLO perturbative analysis of all available data on the polarized structure function $g_1(x,Q^2)$ with the aim of making a quantitative test of the validity of the Bjorken sum rule, of measuring $\alpha_s$, and of deriving helicity fractions. We take particular care over the small-$x$ extrapolation, since it is now known that Regge behaviour is unreliable at perturbative scales. For fixed $\alpha_s$ we find that if all the most recent data are included $g_A = 1.18 \pm 0.09$, confirming the Bjorken sum rule at the 8% level. We further show that the value of $\alpha_s$ is now reasonably well constrained by scaling violations in the structure function data, despite the fact that it cannot yet be reliably fixed by the value of the Bjorken sum: our final result is $\alpha_s(m_Z) = 0.120^{+0.010}_{-0.008}$. We also confirm earlier indications of a sizeable positive gluon polarization in the nucleon.

  14. Unidirectional ring-laser operation using sum-frequency mixing

    DEFF Research Database (Denmark)

    Tidemand-Lichtenberg, Peter; Cheng, Haynes Pak Hay; Pedersen, Christian

    2010-01-01

    A technique enforcing unidirectional operation of ring lasers is proposed and demonstrated. The approach relies on sum-frequency mixing between a single-pass laser and one of the two counterpropagating intracavity fields of the ring laser. Sum-frequency mixing introduces a parametric loss for the ... where lossless second-order nonlinear materials are available. Numerical modeling and experimental demonstration of parametric-induced unidirectional operation of a diode-pumped solid-state 1342 nm cw ring laser are presented.

  15. Chiral corrections to the Adler-Weisberger sum rule

    Science.gov (United States)

    Beane, Silas R.; Klco, Natalie

    2016-12-01

    The Adler-Weisberger sum rule for the nucleon axial-vector charge, $g_A$, offers a unique signature of chiral symmetry and its breaking in QCD. Its derivation relies on both algebraic aspects of chiral symmetry, which guarantee the convergence of the sum rule, and dynamical aspects of chiral symmetry breaking—as exploited using chiral perturbation theory—which allow the rigorous inclusion of explicit chiral symmetry breaking effects due to light-quark masses. The original derivations obtained the sum rule in the chiral limit and, without the benefit of chiral perturbation theory, made various attempts at extrapolating to nonvanishing pion masses. In this paper, the leading, universal, chiral corrections to the chiral-limit sum rule are obtained. Using PDG data, a recent parametrization of the pion-nucleon total cross sections in the resonance region given by the SAID group, as well as recent Roy-Steiner equation determinations of subthreshold amplitudes, threshold parameters, and correlated low-energy constants, the Adler-Weisberger sum rule is confronted with experimental data. With uncertainty estimates associated with the cross-section parametrization, the Goldberger-Treiman discrepancy, and the truncation of the sum rule at $O(M_\pi^4)$ in the chiral expansion, this work finds $g_A = 1.248 \pm 0.010 \pm 0.007 \pm 0.013$.

  16. Ranking of risk significant components for the Davis-Besse Component Cooling Water System

    International Nuclear Information System (INIS)

    Seniuk, P.J.

    1994-01-01

    Utilities that run nuclear power plants are responsible for testing the pumps and valves that are required for safe shutdown, for mitigating the consequences of an accident, and for maintaining the plant in a safe condition, as specified by the American Society of Mechanical Engineers (ASME). These inservice components are tested according to ASME Codes: either the earlier requirements of the ASME Boiler and Pressure Vessel Code, Section XI, or the more recent requirements of the ASME Operation and Maintenance Code, Section IST. These codes dictate test techniques and frequencies regardless of the component failure rate or the significance of failure consequences. A probabilistic risk assessment or probabilistic safety assessment may be used to evaluate component importance for inservice test (IST) risk ranking, which is a combination of failure rate and failure consequences. Resources for component testing during the normal quarterly verification test or post-maintenance test are expensive. Normal quarterly testing may cause component unavailability. Outage testing may increase outage cost with no real benefit. This paper identifies the importance ranking of risk significant components in the Davis-Besse component cooling water system. Identifying the ranking of these risk significant IST components adds technical insight for developing the appropriate test technique and test frequency.

  17. Analytic and algorithmic aspects of generalized harmonic sums and polylogarithms

    International Nuclear Information System (INIS)

    Ablinger, Jakob; Schneider, Carsten

    2013-01-01

    In recent three-loop calculations of massive Feynman integrals within Quantum Chromodynamics (QCD) and, e.g., in recent combinatorial problems the so-called generalized harmonic sums (in short S-sums) arise. They are characterized by rational (or real) numerator weights also different from ±1. In this article we explore the algorithmic and analytic properties of these sums systematically. We work out the Mellin and inverse Mellin transform which connects the sums under consideration with the associated Poincaré iterated integrals, also called generalized harmonic polylogarithms. In this regard, we obtain explicit analytic continuations by means of asymptotic expansions of the S-sums which started to occur frequently in current QCD calculations. In addition, we derive algebraic and structural relations, like differentiation w.r.t. the external summation index and different multi-argument relations, for the compactification of S-sum expressions. Finally, we calculate algebraic relations for infinite S-sums, or equivalently for generalized harmonic polylogarithms evaluated at special values. The corresponding algorithms and relations are encoded in the computer algebra package HarmonicSums.
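The defining recursion for these S-sums is short enough to state directly. As an illustration of the definition only (not the HarmonicSums package), a minimal exact-arithmetic sketch, assuming the usual notation S_{a1,...,ak}(x1,...,xk; n) with nested summation ranges n ≥ i1 ≥ i2 ≥ ... ≥ 1:

```python
from fractions import Fraction

def s_sum(weights, letters, n):
    """Generalized harmonic sum S_{a1,...,ak}(x1,...,xk; n): the
    outermost sum runs over i = 1..n with summand x1^i / i^a1 times
    the next-inner S-sum evaluated at upper limit i."""
    if not weights:
        return Fraction(1)
    a, x = weights[0], Fraction(letters[0])
    return sum(x**i / Fraction(i)**a * s_sum(weights[1:], letters[1:], i)
               for i in range(1, n + 1))

# S_1(1; n) reduces to the ordinary harmonic number H_n
h3 = s_sum([1], [1], 3)            # 1 + 1/2 + 1/3 = 11/6
# a classic quasi-shuffle (algebraic) relation: S_{1,1} = (S_1^2 + S_2)/2
lhs = s_sum([1, 1], [1, 1], 4)
rhs = (s_sum([1], [1], 4) ** 2 + s_sum([2], [1], 4)) / 2
```

The quasi-shuffle identity checked at the end is one instance of the algebraic relations the article derives systematically.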

  18. Analytic and algorithmic aspects of generalized harmonic sums and polylogarithms

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2013-01-15

    In recent three-loop calculations of massive Feynman integrals within Quantum Chromodynamics (QCD) and, e.g., in recent combinatorial problems the so-called generalized harmonic sums (in short S-sums) arise. They are characterized by rational (or real) numerator weights also different from ±1. In this article we explore the algorithmic and analytic properties of these sums systematically. We work out the Mellin and inverse Mellin transform which connects the sums under consideration with the associated Poincaré iterated integrals, also called generalized harmonic polylogarithms. In this regard, we obtain explicit analytic continuations by means of asymptotic expansions of the S-sums which started to occur frequently in current QCD calculations. In addition, we derive algebraic and structural relations, like differentiation w.r.t. the external summation index and different multi-argument relations, for the compactification of S-sum expressions. Finally, we calculate algebraic relations for infinite S-sums, or equivalently for generalized harmonic polylogarithms evaluated at special values. The corresponding algorithms and relations are encoded in the computer algebra package HarmonicSums.

  19. Analytic and algorithmic aspects of generalized harmonic sums and polylogarithms

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Schneider, Carsten [Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040, Linz (Austria); Blümlein, Johannes [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany)

    2013-08-15

    In recent three-loop calculations of massive Feynman integrals within Quantum Chromodynamics (QCD) and, e.g., in recent combinatorial problems the so-called generalized harmonic sums (in short S-sums) arise. They are characterized by rational (or real) numerator weights also different from ±1. In this article we explore the algorithmic and analytic properties of these sums systematically. We work out the Mellin and inverse Mellin transform which connects the sums under consideration with the associated Poincaré iterated integrals, also called generalized harmonic polylogarithms. In this regard, we obtain explicit analytic continuations by means of asymptotic expansions of the S-sums which started to occur frequently in current QCD calculations. In addition, we derive algebraic and structural relations, like differentiation with respect to the external summation index and different multi-argument relations, for the compactification of S-sum expressions. Finally, we calculate algebraic relations for infinite S-sums, or equivalently for generalized harmonic polylogarithms evaluated at special values. The corresponding algorithms and relations are encoded in the computer algebra package HarmonicSums.

  20. Premium Pricing of Liability Insurance Using Random Sum Model

    Directory of Open Access Journals (Sweden)

    Mujiati Dwi Kartikasari

    2017-03-01

    Full Text Available Premium pricing is one of the important activities in insurance. A nonlife insurance premium is calculated from the expected value of historical claims data. The historical claims form a sum of a random number of independent random variables, which is called a random sum. In premium pricing using a random sum, the claim frequency distribution and the claim severity distribution are combined; the combination of these distributions is called a compound distribution. Using liability claim insurance data, we analyze premium pricing with a random sum model based on a compound distribution
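The random-sum idea can be sketched with a compound Poisson simulation and an expected-value premium principle. All parameter values below (claim rate, mean severity, safety loading) are illustrative assumptions, not the paper's data:

```python
import random

def simulate_aggregate_claims(lam, sev_sampler, n_sims, rng):
    """Monte Carlo of the random sum S = X_1 + ... + X_N with
    N ~ Poisson(lam) and iid severities drawn from sev_sampler."""
    totals = []
    for _ in range(n_sims):
        # sample N as the number of unit-rate Poisson-process arrivals in [0, lam)
        n, t = 0, rng.expovariate(1.0)
        while t < lam:
            n += 1
            t += rng.expovariate(1.0)
        totals.append(sum(sev_sampler(rng) for _ in range(n)))
    return totals

rng = random.Random(42)
lam, mean_sev = 3.0, 100.0          # assumed frequency and mean severity
totals = simulate_aggregate_claims(
    lam, lambda r: r.expovariate(1.0 / mean_sev), 20000, rng)
mc_mean = sum(totals) / len(totals)  # estimates E[S] = E[N]E[X] = 300
theta = 0.2                          # assumed safety loading
premium = (1 + theta) * mc_mean      # expected-value premium principle
```

For a compound Poisson sum, E[S] = E[N]·E[X] holds exactly, so the Monte Carlo mean should land near 3 · 100 = 300.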

  1. Evaluation of the multi-sums for large scale problems

    International Nuclear Information System (INIS)

    Bluemlein, J.; Hasselhuhn, A.; Schneider, C.

    2012-02-01

    A big class of Feynman integrals, in particular the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε, can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi-sums by transforming them, from the innermost sum outward, to representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task of simplifying huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated on new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)

  2. Evaluation of the multi-sums for large scale problems

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, J.; Hasselhuhn, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation

    2012-02-15

    A big class of Feynman integrals, in particular the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε, can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi-sums by transforming them, from the innermost sum outward, to representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task of simplifying huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated on new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)

  3. 14 CFR 1214.1105 - Final ranking.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Final ranking. 1214.1105 Section 1214.1105... Recruitment and Selection Program § 1214.1105 Final ranking. Final rankings will be based on a combination of... preference will be included in this final ranking in accordance with applicable regulations. ...

  4. Ranking agility factors affecting hospitals in Iran

    Directory of Open Access Journals (Sweden)

    M. Abdi Talarposht

    2017-04-01

    Full Text Available Background: Agility is an effective response to a changing and unpredictable environment, using such changes as opportunities for organizational improvement. Objective: The aim of the present study was to rank the factors affecting the agile supply chain of hospitals in Iran. Methods: This applied, cross-sectional descriptive study was conducted over one year in 2015. The research population included managers, administrators, faculty members, and experts at selected hospitals; a total of 260 people were sampled from the health centers. The construct validity of the questionnaire was approved by a confirmatory factor analysis test and its reliability by Cronbach's alpha (α=0.97). All data were analyzed by Kolmogorov-Smirnov, chi-square, and Friedman tests. Findings: The development of staff skills, the use of information technology, the integration of processes, appropriate planning, customer satisfaction, and product quality had a significant impact on the agility of public hospitals in Iran (P<0.001). New product introduction earned the highest ranking and the development of staff skills the lowest. Conclusion: New product introduction, market responsiveness and sensitivity, cost reduction, and the integration of organizational processes ranked highest for the agility of hospitals in Iran. Planners and hospital officials should therefore promote the quality and variety of customer-oriented services, providing a basis for investment, so that the supply chains of Iran's public hospitals can become agile.
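The Friedman test used above compares treatment rank sums within blocks. As a hedged, stdlib-only sketch of the statistic itself (midranks for ties, tie-correction factor omitted for brevity; not the authors' analysis code):

```python
def friedman_statistic(blocks):
    """Friedman chi-square statistic for a blocks x treatments table.
    Each row is ranked (average ranks for ties) and the column rank
    sums R_j enter chi2_F = 12/(n k (k+1)) * sum(R_j^2) - 3 n (k+1),
    approximately chi-square with k-1 df under H0."""
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for row in blocks:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1                      # extend the tie group
            avg = (i + j + 2) / 2           # average of 1-based ranks i+1..j+1
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return (12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums)
            - 3.0 * n * (k + 1))
```

For example, three blocks with rankings (1,2,3), (1,2,3), (1,3,2) give rank sums 3, 7, 8 and a statistic of 14/3 ≈ 4.67.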

  5. Data depth and rank-based tests for covariance and spectral density matrices

    KAUST Repository

    Chau, Joris

    2017-06-26

    In multivariate time series analysis, objects of primary interest to study cross-dependences in the time series are the autocovariance or spectral density matrices. Non-degenerate covariance and spectral density matrices are necessarily Hermitian and positive definite, and our primary goal is to develop new methods to analyze samples of such matrices. The main contribution of this paper is the generalization of the concept of statistical data depth for collections of covariance or spectral density matrices by exploiting the geometric properties of the space of Hermitian positive definite matrices as a Riemannian manifold. This allows one to naturally characterize most central or outlying matrices, but also provides a practical framework for rank-based hypothesis testing in the context of samples of covariance or spectral density matrices. First, the desired properties of a data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally efficient pointwise and integrated data depth functions that satisfy each of these requirements. Several applications of the developed methodology are illustrated by the analysis of collections of spectral matrices in multivariate brain signal time series datasets.

  6. Data depth and rank-based tests for covariance and spectral density matrices

    KAUST Repository

    Chau, Joris; Ombao, Hernando; Sachs, Rainer von

    2017-01-01

    In multivariate time series analysis, objects of primary interest to study cross-dependences in the time series are the autocovariance or spectral density matrices. Non-degenerate covariance and spectral density matrices are necessarily Hermitian and positive definite, and our primary goal is to develop new methods to analyze samples of such matrices. The main contribution of this paper is the generalization of the concept of statistical data depth for collections of covariance or spectral density matrices by exploiting the geometric properties of the space of Hermitian positive definite matrices as a Riemannian manifold. This allows one to naturally characterize most central or outlying matrices, but also provides a practical framework for rank-based hypothesis testing in the context of samples of covariance or spectral density matrices. First, the desired properties of a data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally efficient pointwise and integrated data depth functions that satisfy each of these requirements. Several applications of the developed methodology are illustrated by the analysis of collections of spectral matrices in multivariate brain signal time series datasets.

  7. A ring test of in vitro neutral detergent fiber digestibility: analytical variability and sample ranking.

    Science.gov (United States)

    Hall, M B; Mertens, D R

    2012-04-01

    In vitro neutral detergent fiber (NDF) digestibility (NDFD) is an empirical measurement of fiber fermentability by rumen microbes. Variation is inherent in all assays and may be increased as multiple steps or differing procedures are used to assess an empirical measure. The main objective of this study was to evaluate variability within and among laboratories of 30-h NDFD values analyzed in repeated runs. Subsamples of alfalfa (n=4), corn forage (n=5), and grass (n=5) ground to pass a 6-mm screen passed a test for homogeneity. The 14 samples were sent to 10 laboratories on 3 occasions over 12 mo. Laboratories ground the samples and ran 1 to 3 replicates of each sample within fermentation run and analyzed 2 or 3 sets of samples. Laboratories used 1 of 2 NDFD procedures: 8 labs used procedures related to the 1970 Goering and Van Soest (GVS) procedure using fermentation vessels or filter bags, and 2 used a procedure with preincubated inoculum (PInc). Means and standard deviations (SD) of sample replicates within run within laboratory (lab) were evaluated with a statistical model that included lab, run within lab, sample, and lab × sample interaction as factors. All factors affected mean values for 30-h NDFD. The lab × sample effect suggests against a simple lab bias in mean values. The SD ranged from 0.49 to 3.37% NDFD and were influenced by lab and run within lab. The GVS procedure gave greater NDFD values than PInc, with an average difference across all samples of 17% NDFD. Because of the differences between GVS and PInc, we recommend using results in contexts appropriate to each procedure. The 95% probability limits for within-lab repeatability and among-lab reproducibility for GVS mean values were 10.2 and 13.4%, respectively. These percentages describe the span of the range around the mean into which 95% of analytical results for a sample fall for values generated within a lab and among labs. This degree of precision was supported in that the average maximum
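The 95% repeatability and reproducibility limits quoted above are conventionally computed as 1.96·√2 ≈ 2.77 times the relevant standard deviation (the ISO 5725-style convention is assumed here; the paper's exact computation may differ):

```python
import math

def limit_95(sd):
    """95% repeatability/reproducibility limit: the largest absolute
    difference between two single results expected with 95% probability,
    1.96 * sqrt(2) * sd (~2.77 * sd)."""
    return 1.96 * math.sqrt(2.0) * sd
```

Under this convention, a quoted limit of 10.2% NDFD would correspond to a within-lab standard deviation of roughly 10.2 / 2.77 ≈ 3.7% NDFD.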

  8. Testing University Rankings Statistically: Why this Perhaps is not such a Good Idea after All. Some Reflections on Statistical Power, Effect Size, Random Sampling and Imaginary Populations

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2012-01-01

    In this paper we discuss and question the use of statistical significance tests in relation to university rankings as recently suggested. We outline the assumptions behind and interpretations of statistical significance tests and relate this to examples from the recent SCImago Institutions Rankin...

  9. Extremal extensions for the sum of nonnegative selfadjoint relations

    NARCIS (Netherlands)

    Hassi, Seppo; Sandovici, Adrian; De Snoo, Henk; Winkler, Henrik

    2007-01-01

    The sum A + B of two nonnegative selfadjoint relations (multivalued operators) A and B is a nonnegative relation. The class of all extremal extensions of the sum A + B is characterized as products of relations via an auxiliary Hilbert space associated with A and B. The so-called form sum extension

  10. Universal scaling in sports ranking

    International Nuclear Information System (INIS)

    Deng Weibing; Li Wei; Cai Xu; Bulou, Alain; Wang Qiuping A

    2012-01-01

    Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on and so forth. Herewith, we study a specific kind—sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to the modifications of some parameters. (paper)
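The toy model described is easy to sketch: in each match the currently higher-ranked player wins with a sigmoidal probability in the rank difference, and winners accrue points. The parameter values and exact update rule below are assumptions for illustration, not the authors':

```python
import math
import random

def play_season(n_players, n_matches, beta, rng):
    """Toy ranking model: in each match between two randomly drawn
    players, the one currently ranked higher (by accrued score) wins
    with probability sigmoid(beta * rank_difference); the winner
    gains one point. Returns final scores, best first."""
    scores = [0] * n_players
    for _ in range(n_matches):
        a, b = rng.sample(range(n_players), 2)
        # rank players by current score (rank 1 = best)
        ranking = sorted(range(n_players), key=lambda i: -scores[i])
        rank = {p: r + 1 for r, p in enumerate(ranking)}
        diff = rank[b] - rank[a]                  # > 0 means a is higher-ranked
        p_a = 1.0 / (1.0 + math.exp(-beta * diff))
        winner = a if rng.random() < p_a else b
        scores[winner] += 1
    return sorted(scores, reverse=True)

rng = random.Random(1)
scores = play_season(50, 5000, 0.2, rng)   # assumed sizes and steepness
```

Plotting `scores` against rank on log-log axes is the natural check for the power-law behavior the paper reports.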

  11. Website visibility the theory and practice of improving rankings

    CERN Document Server

    Weideman, Melius

    2009-01-01

    The quest to achieve high website rankings in search engine results is a prominent subject for both academics and website owners/coders. Website Visibility marries academic research results to the world of the information practitioner and contains a focused look at the elements which contribute to website visibility, providing support for the application of each element with relevant research. A series of real-world case studies with tested examples of research on website visibility elements and their effect on rankings are reviewed.Written by a well-respected academic and practitioner in the

  12. Compound Poisson Approximations for Sums of Random Variables

    OpenAIRE

    Serfozo, Richard F.

    1986-01-01

    We show that a sum of dependent random variables is approximately compound Poisson when the variables are rarely nonzero and, given they are nonzero, their conditional distributions are nearly identical. We give several upper bounds on the total-variation distance between the distribution of such a sum and a compound Poisson distribution. Included is an example for Markovian occurrences of a rare event. Our bounds are consistent with those that are known for Poisson approximations for sums of...

  13. Recurrent fuzzy ranking methods

    Science.gov (United States)

    Hajjari, Tayebeh

    2012-11-01

    With the increasing development of fuzzy set theory in various scientific fields comes the need to compare fuzzy numbers in different areas. Ranking of fuzzy numbers therefore plays a very important role in linguistic decision-making, engineering, business, and other fuzzy application systems. Several strategies have been proposed for the ranking of fuzzy numbers, and each of these techniques has been shown to produce non-intuitive results in certain cases. In this paper, we review some recent ranking methods, which will be useful for researchers who are interested in this area.
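Among the many proposed indices, centroid defuzzification is one of the simplest. A hedged sketch for triangular fuzzy numbers (a, b, c), offered as a generic illustration of the problem rather than one of the specific methods the paper reviews:

```python
def centroid_rank(fuzzy_numbers):
    """Rank triangular fuzzy numbers (a, b, c) with a <= b <= c by the
    centroid of the membership triangle, (a + b + c) / 3; a larger
    centroid ranks first."""
    return sorted(fuzzy_numbers, key=lambda t: sum(t) / 3.0, reverse=True)
```

For example, (2, 3, 4), (0, 3, 5) and (1, 2, 3) have centroids 3, 8/3 and 2, and rank in that order; ties of distinct numbers with equal centroids are exactly the kind of non-intuitive case that motivates the more refined methods.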

  14. Permutational distribution of the log-rank statistic under random censorship with applications to carcinogenicity assays.

    Science.gov (United States)

    Heimann, G; Neuhaus, G

    1998-03-01

    In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.
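The permutational idea is the same as in the simpler rank-sum setting: condition on the pooled observations and re-randomize the group labels. A miniature assuming no censoring and no ties, so this is a Wilcoxon-type illustration of an exact permutation distribution, not the log-rank computation of the paper:

```python
from itertools import combinations

def exact_rank_sum_test(x, y):
    """Exact two-sided permutation p-value for the Wilcoxon rank-sum
    statistic (no ties assumed): enumerate every subset of the pooled
    ranks that could have come from group x and count sums at least
    as far from the null mean as the observed one."""
    pooled = sorted(x + y)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    w_obs = sum(rank[v] for v in x)
    n, m = len(x), len(x) + len(y)
    sums = [sum(c) for c in combinations(range(1, m + 1), n)]
    mean_w = n * (m + 1) / 2                     # E[W] under H0
    extreme = sum(1 for s in sums if abs(s - mean_w) >= abs(w_obs - mean_w))
    return w_obs, extreme / len(sums)
```

With complete separation of two groups of three, W = 6 and the exact two-sided p-value is 2/20 = 0.1, the smallest attainable at these sample sizes.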

  15. 28 CFR 523.16 - Lump sum awards.

    Science.gov (United States)

    2010-07-01

    ... satisfactory performance of an unusually hazardous assignment; (c) An act which protects the lives of staff or... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.16 Lump sum awards. Any staff member may recommend... award is calculated. No seniority is accrued for such awards. Staff may recommend lump sum awards of...

  16. Contextual effects on the perceived health benefits of exercise: the exercise rank hypothesis.

    Science.gov (United States)

    Maltby, John; Wood, Alex M; Vlaev, Ivo; Taylor, Michael J; Brown, Gordon D A

    2012-12-01

    Many accounts of social influences on exercise participation describe how people compare their behaviors to those of others. We develop and test a novel hypothesis, the exercise rank hypothesis, of how this comparison can occur. The exercise rank hypothesis, derived from evolutionary theory and the decision by sampling model of judgment, suggests that individuals' perceptions of the health benefits of exercise are influenced by how individuals believe the amount of exercise ranks in comparison with other people's amounts of exercise. Study 1 demonstrated that individuals' perceptions of the health benefits of their own current exercise amounts were as predicted by the exercise rank hypothesis. Study 2 demonstrated that the perceptions of the health benefits of an amount of exercise can be manipulated by experimentally changing the ranked position of the amount within a comparison context. The discussion focuses on how social norm-based interventions could benefit from using rank information.

  17. Ranking Operations Management conferences

    NARCIS (Netherlands)

    Steenhuis, H.J.; de Bruijn, E.J.; Gupta, Sushil; Laptaned, U

    2007-01-01

    Several publications have appeared in the field of Operations Management which rank Operations Management related journals. Several ranking systems exist for journals based on, for example, perceived relevance and quality, citation, and author affiliation. Many academics also publish at conferences

  18. Systematics of strength function sum rules

    Directory of Open Access Journals (Sweden)

    Calvin W. Johnson

    2015-11-01

    Full Text Available Sum rules provide useful insights into transition strength functions and are often expressed as expectation values of an operator. In this letter I demonstrate that non-energy-weighted transition sum rules have strong secular dependences on the energy of the initial state. Such non-trivial systematics have consequences: the simplification suggested by the generalized Brink–Axel hypothesis, for example, does not hold for most cases, though it weakly holds in at least some cases for electric dipole transitions. Furthermore, I show the systematics can be understood through spectral distribution theory, calculated via traces of operators and of products of operators. Seen through this lens, violation of the generalized Brink–Axel hypothesis is unsurprising: one expects sum rules to evolve with excitation energy. Furthermore, to lowest order the slope of the secular evolution can be traced to a component of the Hamiltonian being positive (repulsive) or negative (attractive).

  19. Vacuum structure and QCD sum rules

    International Nuclear Information System (INIS)

    Shifman, M.A.

    1992-01-01

    The method of the QCD sum rules was and still is one of the most productive tools in a wide range of problems associated with the hadronic phenomenology. Many heuristic ideas, computational devices, specific formulae which are useful to theorists working not only in hadronic physics, have been accumulated in this method. Some of the results and approaches which have originally been developed in connection with the QCD sum rules can be and are successfully applied in related fields, as supersymmetric gauge theories, nontraditional schemes of quarks and leptons, etc. The amount of literature on these and other more basic problems in hadronic physics has grown enormously in recent years. This volume presents a collection of papers which provide an overview of all basic elements of the sum rule approach and priority has been given to the works which seemed most useful from a pedagogical point of view

  20. Vacuum structure and QCD sum rules

    International Nuclear Information System (INIS)

    Shifman, M.A.

    1992-01-01

    The method of the QCD sum rules was and still is one of the most productive tools in a wide range of problems associated with the hadronic phenomenology. Many heuristic ideas, computational devices, specific formulae which are useful to theorists working not only in hadronic physics, have been accumulated in this method. Some of the results and approaches which have been originally developed in connection with the QCD sum rules can be and are successfully applied in related fields, such as supersymmetric gauge theories, nontraditional schemes of quarks and leptons, etc. The amount of literature on these and other more basic problems in hadronic physics has grown enormously in recent years. This collection of papers provides an overview of all basic elements of the sum rule approach. Priority has been given to those works which seemed most useful from a pedagogical point of view

  1. Ranking Accounting Authors and Departments in Accounting Education: Different Methodologies--Significantly Different Results

    Science.gov (United States)

    Bernardi, Richard A.; Zamojcin, Kimberly A.; Delande, Taylor L.

    2016-01-01

    This research tests whether Holderness Jr., D. K., Myers, N., Summers, S. L., & Wood, D. A. [(2014). "Accounting education research: Ranking institutions and individual scholars." "Issues in Accounting Education," 29(1), 87-115] accounting-education rankings are sensitive to a change in the set of journals used. It provides…

  2. Moments of the weighted sum-of-digits function | Larcher ...

    African Journals Online (AJOL)

    The weighted sum-of-digits function is a slight generalization of the well known sum-of-digits function with the difference that here the digits are weighted by some weights. So for example in this concept also the alternated sum-of-digits function is included. In this paper we compute the first and the second moment of the ...

  3. Development of the modified sum-peak method and its application

    International Nuclear Information System (INIS)

    Ogata, Y.; Miyahara, H.; Ishihara, M.; Ishigure, N.; Yamamoto, S.; Kojima, S.

    2016-01-01

    As the sum-peak method requires the total count rate as well as the peak count rates and the sum-peak count rate, it runs into difficulties when a sample contains radionuclides other than the one to be measured. To solve the problem, a new method using solely the peak and sum-peak count rates was developed. The method was confirmed theoretically and experimentally using ⁶⁰Co, ²²Na and ¹³⁴Cs. We demonstrate that the modified sum-peak method is quite simple and practical and is useful for measuring multiple nuclides. - Highlights: • A modified sum-peak method for simple radioactivity measurement was developed. • The method solely requires the peak count rates and the sum peak count rate. • The method is applicable to multiple radionuclides.
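For reference, the classic (unmodified) sum-peak relation for a two-photon cascade is commonly quoted as A = T + N₁N₂/N₁₂, where N₁, N₂ are the full-energy peak rates, N₁₂ the sum-peak rate, and T the total count rate; the paper's modification, which eliminates T, is not reproduced here. A numeric sketch with assumed synthetic efficiencies (illustration only):

```python
def sum_peak_activity(n1, n2, n12, total):
    """Classic sum-peak activity estimate for a two-photon cascade
    (e.g. 60Co): A = T + N1*N2/N12."""
    return total + n1 * n2 / n12

# synthetic detector model: peak efficiencies ep, total efficiencies et (assumed)
A, ep1, ep2, et1, et2 = 1000.0, 0.10, 0.08, 0.30, 0.25
n1 = A * ep1 * (1 - et2)      # peak-1 counts not summed out by gamma 2
n2 = A * ep2 * (1 - et1)
n12 = A * ep1 * ep2           # coincidence sum-peak rate
total = A * (1 - (1 - et1) * (1 - et2))
```

With these definitions the two terms recombine to the true activity A exactly, which is why the relation is independent of the (unknown) efficiencies.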

  4. Subset-sum phase transitions and data compression

    Science.gov (United States)

    Merhav, Neri

    2011-09-01

    We propose a rigorous analysis approach for the subset-sum problem in the context of lossless data compression, where the phase transition of the subset-sum problem is directly related to the passage between ambiguous and non-ambiguous decompression, for a compression scheme that is based on specifying the sequence composition. The proposed analysis lends itself to straightforward extensions in several directions of interest, including non-binary alphabets, incorporation of side information at the decoder (Slepian-Wolf coding), and coding schemes based on multiple subset sums. It is also demonstrated that the proposed technique can be used to analyze the critical behavior in a more involved situation where the sequence composition is not specified by the encoder.
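The link between subset sums and unambiguous decompression can be seen with a small counting DP: a target reachable by exactly one subset decodes uniquely, while multiple subsets mean ambiguity. This is an illustration of the idea, not the paper's analysis:

```python
def count_subset_sums(weights, target):
    """Count the subsets of `weights` summing to `target` via the
    standard 0/1 subset-sum DP (descending inner loop so each weight
    is used at most once). A count of 1 means unambiguous decoding."""
    counts = [0] * (target + 1)
    counts[0] = 1                      # the empty subset sums to 0
    for w in weights:
        for s in range(target, w - 1, -1):
            counts[s] += counts[s - w]
    return counts[target]
```

Powers of two give unique representations (count 1 for target 11 from {1, 2, 4, 8}), whereas {1, 2, 3} reaches 3 in two ways ({3} and {1, 2}), the ambiguous regime.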

  5. Country-specific determinants of world university rankings

    OpenAIRE

    Pietrucha, Jacek

    2017-01-01

    This paper examines country-specific factors that affect the three most influential world university rankings (the Academic Ranking of World Universities, the QS World University Ranking, and the Times Higher Education World University Ranking). We run a cross sectional regression that covers 42–71 countries (depending on the ranking and data availability). We show that the position of universities from a country in the ranking is determined by the following country-specific variables: econom...

  6. A folk-psychological ranking of personality facets

    Directory of Open Access Journals (Sweden)

    Eka Roivainen

    2016-10-01

    Full Text Available Background Which personality facets should a general personality test measure? No consensus exists on the facet structure of personality, the nature of facets, or the correct method of identifying the most significant facets. However, it can be hypothesized (the lexical hypothesis) that high-frequency personality-describing words are more likely to represent important personality facets, while rarely used words refer to less significant aspects of personality. Participants and procedure A ranking of personality facets was performed by studying the frequency of use of popular personality adjectives in causal clauses ("because he is a kind person") on the Internet and in books as attributes of the word person ("kind person"). Results In Study 1, the 40 most frequently used adjectives had a cumulative usage frequency equal to that of the rest of the 295 terms studied. When terms with a higher-ranking dictionary synonym or antonym were eliminated, 23 terms remained, representing 23 different facets. In Study 2, clusters of synonymous terms were examined. Within the top 30 clusters, personality terms were used 855 times, compared to 240 times for the 70 lower-ranking clusters. Conclusions It is hypothesized that the personality facets represented by the top-ranking terms and clusters of terms are important and impactful independent of their correlation with abstract underlying personality factors (five/six factor models). Compared to hierarchical personality models, lists of important facets probably better cover those aspects of personality that are situated between the five or six major domains.

  7. AptRank: an adaptive PageRank model for protein function prediction on   bi-relational graphs.

    Science.gov (United States)

    Jiang, Biaobin; Kloster, Kyle; Gleich, David F; Gribskov, Michael

    2017-06-15

    Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training. The MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank . gribskov@purdue.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. 

  8. Spectral function sum rules in quantum chromodynamics. I. Charged currents sector

    International Nuclear Information System (INIS)

    Floratos, E.G.; Narison, Stephan; Rafael, Eduardo de.

    1978-07-01

    The Weinberg sum rules of the algebra of currents are reconsidered in the light of quantum chromodynamics (QCD). The authors derive new finite energy sum rules which replace the old Weinberg sum rules. The new sum rules are convergent and the rate of convergence is explicitly calculated in perturbative QCD at the one loop approximation. Phenomenological applications of these sum rules in the charged current sector are also discussed

  9. Structural relations between nested harmonic sums

    International Nuclear Information System (INIS)

    Bluemlein, J.

    2008-07-01

    We describe the structural relations between nested harmonic sums emerging in the description of physical single-scale quantities up to the 3-loop level in renormalizable gauge field theories. These are weight w=6 harmonic sums. We identify universal basic functions which allow us to describe a large class of physical quantities and derive their complex analysis. For the 3-loop QCD Wilson coefficients 35 basic functions are required, whereas a subset of 15 describes the 3-loop anomalous dimensions. (orig.)

  10. Structural relations between nested harmonic sums

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, J.

    2008-07-15

    We describe the structural relations between nested harmonic sums emerging in the description of physical single-scale quantities up to the 3-loop level in renormalizable gauge field theories. These are weight w=6 harmonic sums. We identify universal basic functions which allow us to describe a large class of physical quantities and derive their complex analysis. For the 3-loop QCD Wilson coefficients 35 basic functions are required, whereas a subset of 15 describes the 3-loop anomalous dimensions. (orig.)

  11. Sequence robust association test for familial data.

    Science.gov (United States)

    Dai, Wei; Yang, Ming; Wang, Chaolong; Cai, Tianxi

    2017-09-01

    Genome-wide association studies (GWAS) and next generation sequencing studies (NGSS) are often performed in family studies to improve power in identifying genetic variants that are associated with clinical phenotypes. Efficient analysis of genome-wide studies with familial data is challenging due to the difficulty in modeling shared but unmeasured genetic and/or environmental factors that cause dependencies among family members. Existing genetic association testing procedures for family studies largely rely on generalized estimating equations (GEE) or linear mixed-effects (LME) models. These procedures may fail to properly control for type I errors when the imposed model assumptions fail. In this article, we propose the Sequence Robust Association Test (SRAT), a fully rank-based, flexible approach that tests for association between a set of genetic variants and an outcome, while accounting for within-family correlation and adjusting for covariates. Compared to existing methods, SRAT has the advantages of allowing for unknown correlation structures and weaker assumptions about the outcome distribution. We provide theoretical justifications for SRAT and show that SRAT includes the well-known Wilcoxon rank sum test as a special case. Extensive simulation studies suggest that SRAT provides better protection against type I error rate inflation, and could be much more powerful for settings with skewed outcome distributions than existing methods. For illustration, we also apply SRAT to the familial data from the Framingham Heart Study and Offspring Study to examine the association between an inflammatory marker and a few sets of genetic variants. © 2017, The International Biometric Society.
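
    The Wilcoxon rank-sum statistic that SRAT contains as a special case can be sketched in a few lines (a generic illustration, not the SRAT implementation; normal approximation, no tie handling, and made-up data):

```python
import math

def rank_sum_test(x, y):
    """Return (W, z): the rank sum of sample x and its normal-approximation z-score."""
    # pool both samples, tagging each value with its group (0 = x, 1 = y)
    combined = sorted((v, grp) for grp, sample in enumerate((x, y)) for v in sample)
    # assign ranks 1..N to the pooled values and sum the ranks of sample x
    # (assumes no ties, so no midrank correction is needed)
    w = sum(rank for rank, (v, grp) in enumerate(combined, start=1) if grp == 0)
    n1, n2 = len(x), len(y)
    mean_w = n1 * (n1 + n2 + 1) / 2        # E[W] under the null hypothesis
    var_w = n1 * n2 * (n1 + n2 + 1) / 12   # Var[W] under the null hypothesis
    return w, (w - mean_w) / math.sqrt(var_w)

w, z = rank_sum_test([1.2, 3.4, 5.6], [2.3, 4.5, 6.7, 8.9])
```

    Under the null hypothesis the z-score is approximately standard normal, which is the usual large-sample route to a p-value.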

  12. The αs³ corrections to the Bjorken sum rule for polarized electro-production and to the Gross-Llewellyn Smith sum rule

    International Nuclear Information System (INIS)

    Larin, S.A.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica; Vermaseren, J.A.M.

    1990-01-01

    The next-to-next-to-leading order QCD corrections to the Gross-Llewellyn Smith sum rule for deep inelastic neutrino-nucleon scattering and to the Bjorken sum rule for polarized electron-nucleon scattering have been computed. This involved the proper treatment of γ₅ inside the loop integrals with dimensional regularization. It is found that the difference between the two sum rules is entirely due to a class of 6 three-loop graphs and is of the order of 1% of the leading QCD term. Hence the Q² behavior of both sum rules should be the same if the physics is described adequately by the lower-order terms of perturbative QCD. (author). 12 refs.; 2 figs.; 4 tabs

  13. Universal emergence of PageRank

    Energy Technology Data Exchange (ETDEWEB)

    Frahm, K M; Georgeot, B; Shepelyansky, D L, E-mail: frahm@irsamc.ups-tlse.fr, E-mail: georgeot@irsamc.ups-tlse.fr, E-mail: dima@irsamc.ups-tlse.fr [Laboratoire de Physique Theorique du CNRS, IRSAMC, Universite de Toulouse, UPS, 31062 Toulouse (France)

    2011-11-18

    The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British University networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses. (paper)

  14. Universal emergence of PageRank

    International Nuclear Information System (INIS)

    Frahm, K M; Georgeot, B; Shepelyansky, D L

    2011-01-01

    The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British University networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses. (paper)
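
    The α-damped diffusion studied in these records can be sketched with a plain power iteration (a textbook PageRank sketch, not the authors' code; the three-node cycle graph is invented for illustration):

```python
def pagerank(adj, alpha=0.85, iters=100):
    """Power iteration for PageRank; adj[i] is the list of nodes that node i links to."""
    n = len(adj)
    r = [1.0 / n] * n
    for _ in range(iters):
        # every node receives the teleportation mass (1 - alpha) / n
        nxt = [(1 - alpha) / n] * n
        for i, outs in enumerate(adj):
            if outs:
                share = alpha * r[i] / len(outs)
                for j in outs:
                    nxt[j] += share
            else:
                # dangling node: spread its mass uniformly
                for j in range(n):
                    nxt[j] += alpha * r[i] / n
        r = nxt
    return r

# tiny 3-node cycle: 0 -> 1 -> 2 -> 0
r = pagerank([[1], [2], [0]])
```

    On a directed cycle every node is symmetric, so the vector converges to the uniform distribution for any α; the interesting behavior the paper studies appears on large inhomogeneous networks as α → 1.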

  15. Automatic figure ranking and user interfacing for intelligent figure search.

    Directory of Open Access Journals (Sweden)

    Hong Yu

    2010-10-01

    Full Text Available Figures are important experimental results that are typically reported in full-text bioscience articles. Bioscience researchers need to access figures to validate research facts and to formulate or test novel research hypotheses. On the other hand, the sheer volume of bioscience literature has made it difficult to access figures. Therefore, we are developing an intelligent figure search engine (http://figuresearch.askhermes.org). Existing research in figure search treats each figure equally, but we introduce a novel concept of "figure ranking": figures appearing in a full-text biomedical article can be ranked by their contribution to knowledge discovery. We empirically validated the hypothesis of figure ranking with over 100 bioscience researchers, and then developed unsupervised natural language processing (NLP) approaches to automatically rank figures. Evaluating on a collection of 202 full-text articles in which authors had ranked the figures by importance, our best system achieved a weighted error rate of 0.2, significantly better than several other baseline systems we explored. We further explored a user-interfacing application in which we built novel user interfaces (UIs) incorporating figure ranking, allowing bioscience researchers to efficiently access important figures. Our evaluation results show that 92% of the bioscience researchers chose, as their top two preferences, the user interfaces in which the most important figures are enlarged. Bioscience researchers preferred the UIs in which the most important figures were predicted by our NLP system over the UIs in which the most important figures were randomly assigned. 
    In addition, our results show that there was no statistical difference in bioscience researchers' preference between UIs generated by automatic figure ranking and UIs generated by human ranking annotation. The evaluation results conclude that automatic figure ranking and user

  16. Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech

    Science.gov (United States)

    Cao, Houwei; Verma, Ragini; Nenkova, Ani

    2015-01-01

    We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker-specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotions and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree, compared to the accuracy of the individual methods. Furthermore, on the spontaneous data the ranking and standard classification are complementary, and we obtain marked improvement when we combine the two classifiers by late-stage fusion.

  17. Cosmic-ray sum rules

    International Nuclear Information System (INIS)

    Frandsen, Mads T.; Masina, Isabella; Sannino, Francesco

    2011-01-01

    We introduce new sum rules allowing us to determine universal properties of the unknown component of the cosmic rays; we show how they can be used to predict the positron fraction at energies not yet explored by current experiments, and to constrain specific models.

  18. Experimental results of the betatron sum resonance

    International Nuclear Information System (INIS)

    Wang, Y.; Ball, M.; Brabson, B.

    1993-06-01

    The experimental observations of motion near the betatron sum resonance, νx + 2νz = 13, are presented. A fast quadrupole (a Panofsky-style ferrite picture-frame magnet with a pulsed power supply) producing a betatron tune shift of the order of 0.03 with a rise time of 1 μs was used. This quadrupole was used to produce betatron tunes which jumped past and then crossed back through a betatron sum resonance line. The beam response as a function of initial betatron amplitude was recorded turn by turn. The correlated growth of the action variables, Jx and Jz, was observed. The phase space plots in the resonance frame reveal the features of particle motion near the nonlinear sum resonance region

  19. A Ranking Approach to Genomic Selection.

    Science.gov (United States)

    Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori

    2015-01-01

    Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
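
    The NDCG measure introduced here can be computed as follows (a common linear-gain/log2-discount form; some formulations use a 2^rel − 1 gain instead, and the relevance values below are invented, not taken from the paper):

```python
import math

def ndcg(relevance, k=None):
    """NDCG for a list of true relevance scores given in the model's predicted order."""
    rel = relevance[:k] if k is not None else relevance
    # discounted cumulative gain of the predicted order (rank i gets discount log2(i+2))
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rel))
    # ideal DCG: the same relevance scores in best-first order
    ideal = sorted(relevance, reverse=True)[:len(rel)]
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

perfect = ndcg([3.0, 2.0, 1.0])   # items already in ideal order
swapped = ndcg([1.0, 2.0, 3.0])   # highest-value item ranked last
```

    Because the discount shrinks with rank, models that place high-breeding-value individuals near the top score higher, which matches the selective-breeding objective the authors describe.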

  20. Ranking Fragment Ions Based on Outlier Detection for Improved Label-Free Quantification in Data-Independent Acquisition LC-MS/MS

    Science.gov (United States)

    Bilbao, Aivett; Zhang, Ying; Varesio, Emmanuel; Luban, Jeremy; Strambio-De-Castillia, Caterina; Lisacek, Frédérique; Hopfgartner, Gérard

    2016-01-01

    Data-independent acquisition LC-MS/MS techniques complement supervised methods for peptide quantification. However, due to the wide precursor isolation windows, these techniques are prone to interference at the fragment ion level, which in turn is detrimental for accurate quantification. The “non-outlier fragment ion” (NOFI) ranking algorithm has been developed to assign low priority to fragment ions affected by interference. By using the optimal subset of high-priority fragment ions, these interfered fragment ions are effectively excluded from quantification. NOFI represents each fragment ion as a vector of four dimensions related to chromatographic and MS fragmentation attributes and applies multivariate outlier detection techniques. Benchmarking conducted on a well-defined quantitative dataset (i.e., the SWATH Gold Standard) indicates that NOFI on average is able to accurately quantify 11-25% more peptides than the commonly used Top-N library intensity ranking method. The sum of the area of the Top3-5 NOFIs produces similar coefficients of variation as the library intensity method but with more accurate quantification results. On a biologically relevant human dendritic cell digest dataset, NOFI properly assigns low priority ranks to 85% of annotated interferences, resulting in sensitivity values between 0.92 and 0.80 against 0.76 for the Spectronaut interference detection algorithm. PMID:26412574

  1. Dynamic Matrix Rank

    DEFF Research Database (Denmark)

    Frandsen, Gudmund Skovbjerg; Frandsen, Peter Frands

    2009-01-01

    We consider maintaining information about the rank of a matrix under changes of the entries. For n×n matrices, we show an upper bound of O(n^1.575) arithmetic operations and a lower bound of Ω(n) arithmetic operations per element change. The upper bound is valid when changing up to O(n^0.575) entries in a single column of the matrix. We also give an algorithm that maintains the rank using O(n^2) arithmetic operations per rank-one update. These bounds appear to be the first nontrivial bounds for the problem. The upper bounds are valid for arbitrary fields, whereas the lower bound is valid for algebraically closed fields. The upper bound for element updates uses fast rectangular matrix multiplication, and the lower bound involves further development of an earlier technique for proving lower bounds for dynamic computation of rational functions.
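
    For context, the from-scratch baseline these dynamic bounds improve on is O(n³) Gaussian elimination; a minimal pure-Python sketch (over floats, with an arbitrary numerical tolerance):

```python
def matrix_rank(m, eps=1e-9):
    """Rank of a matrix via Gaussian elimination; O(n^3), recomputed from scratch."""
    a = [row[:] for row in m]           # work on a copy
    rank, rows, cols = 0, len(a), len(a[0])
    for col in range(cols):
        # find a pivot row for this column at or below the current rank
        pivot = next((r for r in range(rank, rows) if abs(a[r][col]) > eps), None)
        if pivot is None:
            continue
        a[rank], a[pivot] = a[pivot], a[rank]
        # eliminate this column from every other row
        for r in range(rows):
            if r != rank and abs(a[r][col]) > eps:
                f = a[r][col] / a[rank][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[rank])]
        rank += 1
    return rank
```

    A dynamic algorithm of the kind described above avoids repeating this full elimination after each single-entry change.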

  2. Group social rank is associated with performance on a spatial learning task.

    Science.gov (United States)

    Langley, Ellis J G; van Horik, Jayden O; Whiteside, Mark A; Madden, Joah R

    2018-02-01

    Dominant individuals differ from subordinates in their performances on cognitive tasks across a suite of taxa. Previous studies often only consider dyadic relationships, rather than the more ecologically relevant social hierarchies or networks, hence failing to account for how dyadic relationships may be adjusted within larger social groups. We used a novel statistical method, randomized Elo-ratings, to infer the social hierarchy of 18 male pheasants, Phasianus colchicus, while in a captive, mixed-sex group with a linear hierarchy. We assayed individual learning performance of these males on a binary spatial discrimination task to investigate whether inter-individual variation in performance is associated with group social rank. Task performance improved with increasing trial number and was positively related to social rank, with higher-ranking males showing greater levels of success. Motivation to participate in the task was not related to social rank or task performance, thus indicating that these rank-related differences are not a consequence of differences in motivation to complete the task. Our results provide important information about how variation in cognitive performance relates to an individual's social rank within a group. Whether the social environment causes differences in learning performance or instead, inherent differences in learning ability predetermine rank remains to be tested.
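
    The Elo machinery underlying the randomized Elo-rating method can be sketched with the standard single-interaction update (the basic Elo step only; the randomization over interaction orders used in the paper is not shown, and K = 32 is an arbitrary illustrative choice):

```python
def elo_update(r_winner, r_loser, k=32):
    """Standard Elo update after one dominance interaction."""
    # expected probability that the eventual winner wins, given current ratings
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_win)   # surprise-weighted rating transfer
    return r_winner + delta, r_loser - delta

# two equally rated males: the winner takes k/2 = 16 points
a, b = elo_update(1000.0, 1000.0)
```

    An upset (a low-rated individual beating a high-rated one) transfers more points than an expected win, so repeated interactions converge toward a rank order consistent with observed dominance.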

  3. Learning Preference Models from Data: On the Problem of Label Ranking and Its Variants

    Science.gov (United States)

    Hüllermeier, Eyke; Fürnkranz, Johannes

    The term “preference learning” refers to the application of machine learning methods for inducing preference models from empirical data. In the recent literature, corresponding problems appear in various guises. After a brief overview of the field, this work focuses on a particular learning scenario called label ranking where the problem is to learn a mapping from instances to rankings over a finite number of labels. Our approach for learning such a ranking function, called ranking by pairwise comparison (RPC), first induces a binary preference relation from suitable training data, using a natural extension of pairwise classification. A ranking is then derived from this relation by means of a ranking procedure. This paper elaborates on a key advantage of such an approach, namely the fact that our learner can be adapted to different loss functions by using different ranking procedures on the same underlying order relations. In particular, the Spearman rank correlation is minimized by using a simple weighted voting procedure. Moreover, we discuss a loss function suitable for settings where candidate labels must be tested successively until a target label is found. In this context, we propose the idea of “empirical conditioning” of class probabilities. A related ranking procedure, called “ranking through iterated choice”, is investigated experimentally.
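
    The Spearman rank correlation that RPC targets via weighted voting has a simple closed form in the no-ties case, ρ = 1 − 6Σd²/(n(n² − 1)); a small sketch with invented scores:

```python
def spearman_rho(a, b):
    """Spearman rank correlation between two score lists (assumes no ties)."""
    def ranks(v):
        # rank 1 = smallest value
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

agree = spearman_rho([1.0, 2.0, 3.0], [10.0, 20.0, 30.0])    # identical rankings
reverse = spearman_rho([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])     # fully reversed
```

    A predicted label ranking that matches the target ordering scores ρ = 1; a fully reversed one scores ρ = −1.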

  4. Hitting the Rankings Jackpot

    Science.gov (United States)

    Chapman, David W.

    2008-01-01

    Recently, Samford University was ranked 27th in the nation in a report released by "Forbes" magazine. In this article, the author relates how the people working at Samford University were surprised at its ranking. Although Samford is the largest private institution in Alabama, its distinguished academic achievements aren't even…

  5. [Lack of interest in general practice during the National Ranking Examination in 2005].

    Science.gov (United States)

    Lanson, Yves

    2006-03-01

    The second national ranking examination took place in 2005 under the same conditions as the year before. Analysis of the results permits us to assess whether the objectives of this reform have been met so far. Cross-tabulating the results provided by the national testing center allowed us to rank: 1) the appeal of specializations for each geographic subdivision, 2) the candidates by medical school, and 3) the appeal of each subdivision by candidate rank. 66% of the students were classified high enough to be able to choose any specialization. Trends observed in the first examination, in 2004, were confirmed, with the clear desirability of medical specializations and a certain lack of interest in occupational medicine and public health. All the surgery posts were filled, even though the number of posts had increased enormously since the first examination. After adjustment for the number of posts available, the specializations in decreasing order of popularity were: medical, surgical, pediatrics, anesthesiology, gynecology-obstetrics, general medicine, psychiatry, and biology. Approximately 1000 posts in general medicine were not filled. The medical schools whose students ranked highest were Paris Pitié, Paris V, Paris West, Lyon North, Grenoble, and Aix-Marseille. Some medical schools did less well than previously: Marseille very slightly and Angers substantially. Strasbourg, Nancy, Amiens and Bobigny were at the bottom of the list. The cities most desired for internships were Paris, Toulouse, Lyon, and Aix-Marseille, while Brest, Nancy, Limoges and the West Indies were ranked lowest, although each was chosen by a highly ranked candidate. Two thirds of the students were ranked high enough to allow them a free choice of specializations. All the specializations except public health and occupational medicine attracted very highly ranked students. Medical specializations are the most desired, but surgery remains in high demand, despite a substantial increase in the number of posts. 
All

  6. College Rankings as an Interorganizational Dependency: Establishing the Foundation for Strategic and Institutional Accounts

    Science.gov (United States)

    Bastedo, Michael N.; Bowman, Nicholas A.

    2011-01-01

    Higher education administrators believe that revenues are linked to college rankings and act accordingly, particularly those at research universities. Although rankings are clearly influential for many schools and colleges, this fundamental assumption has yet to be tested empirically. Drawing on data from multiple resource providers in higher…

  7. Sum rules for the quarkonium systems

    International Nuclear Information System (INIS)

    Burnel, A.; Caprasse, H.

    1980-01-01

    In the framework of the radial Schroedinger equation we derive in a very simple way sum rules relating the potential to physical quantities such as the energy eigenvalues and the square of the lth derivative of the eigenfunctions at the origin. These sum rules contain as particular cases well-known results such as the quantum version of the Clausius theorem in classical mechanics as well as Kramers's relations for the Coulomb potential. Several illustrations are given and the possibilities of applying them to the quarkonium systems are considered

  8. Operator-sum representation for bosonic Gaussian channels

    International Nuclear Information System (INIS)

    Ivan, J. Solomon; Sabapathy, Krishna Kumar; Simon, R.

    2011-01-01

    Operator-sum or Kraus representations for single-mode bosonic Gaussian channels are developed, and several of their consequences explored. The fact that the two-mode metaplectic operators acting as unitary purification of these channels do not, in their canonical form, mix the position and momentum variables is exploited to present a procedure which applies uniformly to all families in the Holevo classification. In this procedure the Kraus operators of every quantum-limited Gaussian channel can be simply read off from the matrix elements of a corresponding metaplectic operator. Kraus operators are employed to bring out, in the Fock basis, the manner in which the antilinear, unphysical matrix transposition map, when accompanied by injection of a threshold classical noise, becomes a physical channel, denoted D(κ) in the Holevo classification. The matrix transposition channels D(κ), D(κ⁻¹) turn out to be a dual pair in the sense that their Kraus operators are related by the adjoint operation. The amplifier channel with amplification factor κ and the beam-splitter channel with attenuation factor κ⁻¹ turn out to be mutually dual in the same sense. The action of the quantum-limited attenuator and amplifier channels as simply scaling maps on suitable quasiprobabilities in phase space is examined in the Kraus picture. Consideration of cumulants is used to examine the issue of fixed points. The semigroup property of the amplifier and attenuator families leads in both cases to a Zeno-like effect arising as a consequence of interrupted evolution. In the case of entanglement-breaking channels a description in terms of rank-1 Kraus operators is shown to emerge quite simply. In contradistinction, it is shown that there is not even one finite-rank operator in the entire linear span of Kraus operators of the quantum-limited amplifier or attenuator families, an assertion far stronger than the statement that these are not entanglement-breaking channels. A characterization of

  9. On contribution of instantons to nucleon sum rules

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Kochelev, N.I.

    1989-01-01

    The contribution of instantons to nucleon QCD sum rules is obtained. It is shown that this contribution does provide stabilization of the sum rules and leads to formation of a nucleon as a bound state of quarks in the instanton field. 17 refs.; 3 figs

  10. Compton scattering from nuclei and photo-absorption sum rules

    International Nuclear Information System (INIS)

    Gorchtein, Mikhail; Hobbs, Timothy; Londergan, J. Timothy; Szczepaniak, Adam P.

    2011-01-01

    We revisit the photo-absorption sum rule for real Compton scattering from the proton and from nuclear targets. In analogy with the Thomas-Reiche-Kuhn sum rule appropriate at low energies, we propose a new 'constituent quark model' sum rule that relates the integrated strength of hadronic resonances to the scattering amplitude on constituent quarks. We study the constituent quark model sum rule for several nuclear targets. In addition, we extract the α=0 pole contribution for both proton and nuclei. Using the modern high-energy proton data, we find that the α=0 pole contribution differs significantly from the Thomson term, in contrast with the original findings by Damashek and Gilman.

  11. A tilting approach to ranking influence

    KAUST Repository

    Genton, Marc G.

    2014-12-01

    We suggest a new approach, which is applicable for general statistics computed from random samples of univariate or vector-valued or functional data, to assessing the influence that individual data have on the value of a statistic, and to ranking the data in terms of that influence. Our method is based on, first, perturbing the value of the statistic by ‘tilting’, or reweighting, each data value, where the total amount of tilt is constrained to be the least possible, subject to achieving a given small perturbation of the statistic, and, then, taking the ranking of the influence of data values to be that which corresponds to ranking the changes in data weights. It is shown, both theoretically and numerically, that this ranking does not depend on the size of the perturbation, provided that the perturbation is sufficiently small. That simple result leads directly to an elegant geometric interpretation of the ranks; they are the ranks of the lengths of projections of the weights onto a ‘line’ determined by the first empirical principal component function in a generalized measure of covariance. To illustrate the generality of the method we introduce and explore it in the case of functional data, where (for example) it leads to generalized boxplots. The method has the advantage of providing an interpretable ranking that depends on the statistic under consideration. For example, the ranking of data, in terms of their influence on the value of a statistic, is different for a measure of location and for a measure of scale. This is as it should be; a ranking of data in terms of their influence should depend on the manner in which the data are used. Additionally, the ranking recognizes, rather than ignores, sign, and in particular can identify left- and right-hand ‘tails’ of the distribution of a random function or vector.

  12. Adler-Weisberger sum rule for WLWL→WLWL scattering

    International Nuclear Information System (INIS)

    Pham, T.N.

    1991-01-01

    We analyse the Adler-Weisberger sum rule for WLWL → WLWL scattering. We find that at some energy, the WLWL total cross section must be large to saturate the sum rule. Measurements at future colliders would be needed to check the sum rule and to obtain the decay rates Γ(H → WLWL, ZLZL), which would be modified by the existence of a P-wave vector meson resonance in the standard model with a strongly interacting Higgs sector or in technicolour models. (orig.)

  13. Parity of Θ+(1540) from QCD sum rules

    International Nuclear Information System (INIS)

    Lee, Su Houng; Kim, Hungchong; Kwon, Youngshin

    2005-01-01

    The QCD sum rule for the pentaquark Θ+, first analyzed by Sugiyama, Doi and Oka, is reanalyzed with a phenomenological side that explicitly includes the contribution from the two-particle-reducible kaon-nucleon intermediate state. The magnitude of the overlap of the Θ+ interpolating current with the kaon-nucleon state is obtained by using the soft-kaon theorem and a separate sum rule for the ground-state nucleon with the pentaquark nucleon interpolating current. It is found that the K-N intermediate state constitutes only 10% of the sum rule, so that the original claim that the parity of Θ+ is negative remains valid

  14. Multi-dimensional Rankings, Program Termination, and Complexity Bounds of Flowchart Programs

    Science.gov (United States)

    Alias, Christophe; Darte, Alain; Feautrier, Paul; Gonnord, Laure

    Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.

  15. Résumé

    African Journals Online (AJOL)

    Tracie1

    Résumé. Translation is a very complicated process that requires extralinguistic knowledge on the part of the translator. This work is based on literary translation. Literary translation deals with literary texts, which comprise poetry, drama, and prose. Literary translation has some problems ...

  16. Ranking adverse drug reactions with crowdsourcing.

    Science.gov (United States)

    Gottlieb, Assaf; Hoehndorf, Robert; Dumontier, Michel; Altman, Russ B

    2015-03-23

    There is no publicly available resource that provides the relative severity of adverse drug reactions (ADRs). Such a resource would be useful for several applications, including assessment of the risks and benefits of drugs and improvement of patient-centered care. It could also be used to triage predictions of drug adverse events. The intent of the study was to rank ADRs according to severity. We used Internet-based crowdsourcing to rank ADRs according to severity. We assigned 126,512 pairwise comparisons of ADRs to 2589 Amazon Mechanical Turk workers and used these comparisons to rank order 2929 ADRs. There is good correlation (rho=.53) between the mortality rates associated with ADRs and their rank. Our ranking highlights severe drug-ADR predictions, such as cardiovascular ADRs for raloxifene and celecoxib. It also triages genes associated with severe ADRs such as epidermal growth-factor receptor (EGFR), associated with glioblastoma multiforme, and SCN1A, associated with epilepsy. ADR ranking lays a first stepping stone in personalized drug risk assessment. Ranking of ADRs using crowdsourcing may have useful clinical and financial implications, and should be further investigated in the context of health care decision making.

  17. Ranking Adverse Drug Reactions With Crowdsourcing

    KAUST Repository

    Gottlieb, Assaf

    2015-03-23

    Background: There is no publicly available resource that provides the relative severity of adverse drug reactions (ADRs). Such a resource would be useful for several applications, including assessment of the risks and benefits of drugs and improvement of patient-centered care. It could also be used to triage predictions of drug adverse events. Objective: The intent of the study was to rank ADRs according to severity. Methods: We used Internet-based crowdsourcing to rank ADRs according to severity. We assigned 126,512 pairwise comparisons of ADRs to 2589 Amazon Mechanical Turk workers and used these comparisons to rank order 2929 ADRs. Results: There is good correlation (rho=.53) between the mortality rates associated with ADRs and their rank. Our ranking highlights severe drug-ADR predictions, such as cardiovascular ADRs for raloxifene and celecoxib. It also triages genes associated with severe ADRs such as epidermal growth-factor receptor (EGFR), associated with glioblastoma multiforme, and SCN1A, associated with epilepsy. Conclusions: ADR ranking lays a first stepping stone in personalized drug risk assessment. Ranking of ADRs using crowdsourcing may have useful clinical and financial implications, and should be further investigated in the context of health care decision making.
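A crude way to turn pairwise severity judgments like those above into a rank order is to count comparisons won (a Copeland-style score). The sketch below is a hedged simplification; the study itself fit a more elaborate ranking model, and the ADR names are illustrative only.

```python
# Rank items from crowdsourced pairwise judgments by win counts.
from collections import defaultdict

def rank_from_pairs(comparisons):
    """comparisons: iterable of (winner, loser) pairs, where the
    'winner' was judged the more severe ADR. Returns ADRs sorted
    from most to least severe by number of comparisons won."""
    wins = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        wins[loser] += 0   # ensure losers appear in the tally
    return sorted(wins, key=wins.get, reverse=True)

judgments = [("cardiac arrest", "nausea"),
             ("cardiac arrest", "headache"),
             ("nausea", "headache")]
print(rank_from_pairs(judgments))
# -> ['cardiac arrest', 'nausea', 'headache']
```

With 126,512 comparisons over 2929 ADRs, each ADR is only compared against a small sample of the others, which is why model-based aggregation (rather than raw win counts) matters in practice.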

  18. Premium Pricing of Liability Insurance Using Random Sum Model

    OpenAIRE

    Kartikasari, Mujiati Dwi

    2017-01-01

    Premium pricing is one of the important activities in insurance. The nonlife insurance premium is calculated from the expected value of historical claims data. The historical claims data are collected so that they form a sum of a random number of independent random variables, which is called a random sum. In premium pricing using a random sum, the claim frequency distribution and the claim severity distribution are combined. The combination of these distributions is called a compound distribution. By using liability claim insurance data, we ...
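The random-sum idea can be sketched numerically: the aggregate claim S = X1 + ... + XN is a sum of a random number N of severities, and the pure premium is E[S] = E[N]·E[X]. The distributions below (Poisson frequency, exponential severity) are illustrative choices, not the paper's fitted model.

```python
# Monte Carlo sketch of a compound (random-sum) distribution.
import math
import random

random.seed(42)  # reproducible illustration

def simulate_aggregate_claim(lam=2.0, mean_severity=1000.0):
    """One draw of S: claim count N ~ Poisson(lam), drawn by CDF
    inversion; severities ~ Exponential with the given mean."""
    u, p, n = random.random(), math.exp(-lam), 0
    c = p
    while u > c:          # invert the Poisson CDF
        n += 1
        p *= lam / n
        c += p
    return sum(random.expovariate(1 / mean_severity) for _ in range(n))

draws = [simulate_aggregate_claim() for _ in range(100_000)]
estimate = sum(draws) / len(draws)
print(estimate)  # close to E[N] * E[X] = 2 * 1000 = 2000
```

The Monte Carlo mean approaches the compound-distribution expectation that a pure premium is based on.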

  19. Ranking scientific publications: the effect of nonlinearity

    Science.gov (United States)

    Yao, Liyang; Wei, Tian; Zeng, An; Fan, Ying; di, Zengru

    2014-10-01

    Ranking the significance of scientific publications is a long-standing challenge. The network-based analysis is a natural and common approach for evaluating the scientific credit of papers. Although the number of citations has been widely used as a metric to rank papers, recently some iterative processes such as the well-known PageRank algorithm have been applied to the citation networks to address this problem. In this paper, we introduce nonlinearity to the PageRank algorithm when aggregating resources from different nodes to further enhance the effect of important papers. The validation of our method is performed on the data of American Physical Society (APS) journals. The results indicate that the nonlinearity improves the performance of the PageRank algorithm in terms of ranking effectiveness, as well as robustness against malicious manipulations. Although the nonlinearity analysis is based on the PageRank algorithm, it can be easily extended to other iterative ranking algorithms and similar improvements are expected.

  20. Ranking scientific publications: the effect of nonlinearity.

    Science.gov (United States)

    Yao, Liyang; Wei, Tian; Zeng, An; Fan, Ying; Di, Zengru

    2014-10-17

    Ranking the significance of scientific publications is a long-standing challenge. The network-based analysis is a natural and common approach for evaluating the scientific credit of papers. Although the number of citations has been widely used as a metric to rank papers, recently some iterative processes such as the well-known PageRank algorithm have been applied to the citation networks to address this problem. In this paper, we introduce nonlinearity to the PageRank algorithm when aggregating resources from different nodes to further enhance the effect of important papers. The validation of our method is performed on the data of American Physical Society (APS) journals. The results indicate that the nonlinearity improves the performance of the PageRank algorithm in terms of ranking effectiveness, as well as robustness against malicious manipulations. Although the nonlinearity analysis is based on the PageRank algorithm, it can be easily extended to other iterative ranking algorithms and similar improvements are expected.
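One reading of the nonlinearity described above can be sketched as follows (an assumption about the scheme, not the paper's exact formulation): each node's score is raised to an exponent theta > 1 before being aggregated along citation links, amplifying the contribution of already-important papers.

```python
# A minimal nonlinear-PageRank sketch on a tiny citation network.
import numpy as np

def nonlinear_pagerank(adj, theta=1.5, d=0.85, iters=200):
    """adj[i, j] = 1 if paper j cites paper i (an edge j -> i).
    theta = 1 recovers ordinary PageRank up to normalization."""
    n = adj.shape[0]
    out = adj.sum(axis=0)            # out-degree of each citing node
    out[out == 0] = 1                # avoid division by zero
    s = np.ones(n) / n
    for _ in range(iters):
        contrib = (s ** theta) / out # nonlinear aggregation step
        s = (1 - d) / n + d * adj @ contrib
        s /= s.sum()                 # renormalize each sweep
    return s

# A tiny citation network: node 0 is cited by all other papers.
A = np.array([[0, 1, 1, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
scores = nonlinear_pagerank(A)
print(scores.argmax())  # the most-cited node ranks highest
```

The renormalization each sweep keeps the iteration stable even though the update is no longer a linear (stochastic-matrix) map.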

  1. Sum Rate Maximization of D2D Communications in Cognitive Radio Network Using Cheating Strategy

    Directory of Open Access Journals (Sweden)

    Yanjing Sun

    2018-01-01

    This paper focuses on the cheating algorithm for device-to-device (D2D) pairs that reuse the uplink channels of cellular users. We are concerned with how D2D pairs are matched with cellular users (CUs) to maximize their sum rate. In contrast with Munkres' algorithm, which gives the optimal matching in terms of maximum throughput, the Gale-Shapley algorithm ensures the stability of the system at the same time and achieves a men-optimal stable matching. In our system, D2D pairs play the role of "men," so that each D2D pair can be matched to the CU that ranks as high as possible in the D2D pair's preference list. Previous studies have found that, by unilaterally falsifying preference lists in a particular way, some men can get better partners while no men get worse off. We use this result to derive the best cheating strategy for D2D pairs. We find that to acquire such a cheating strategy, we need to seek as many, and as large, cabals as possible. To this end, we develop a cabal-finding algorithm named RHSTLC, and we prove that it reaches Pareto optimality. In comparison with algorithms proposed in related work, the results show that our algorithm considerably improves the sum rate of D2D pairs.
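The proposer-optimal Gale-Shapley matching that the cheating strategy builds on can be sketched as follows; the preference lists are illustrative, not derived from the paper's channel model.

```python
# Proposer-optimal Gale-Shapley: D2D pairs ("men") propose to CUs.
def gale_shapley(d2d_prefs, cu_prefs):
    """d2d_prefs[d] = list of CUs from best to worst for D2D pair d;
    cu_prefs[c] = list of D2D pairs from best to worst for CU c.
    Returns a stable matching {d2d: cu}, optimal for the proposers."""
    rank = {c: {d: i for i, d in enumerate(prefs)}
            for c, prefs in cu_prefs.items()}
    free = list(d2d_prefs)            # unmatched D2D pairs
    next_prop = {d: 0 for d in d2d_prefs}
    engaged = {}                      # cu -> d2d
    while free:
        d = free.pop()
        c = d2d_prefs[d][next_prop[d]]
        next_prop[d] += 1
        if c not in engaged:
            engaged[c] = d
        elif rank[c][d] < rank[c][engaged[c]]:
            free.append(engaged[c])   # current partner becomes free
            engaged[c] = d
        else:
            free.append(d)            # rejected, will propose again
    return {d: c for c, d in engaged.items()}

d2d = {"D1": ["C1", "C2"], "D2": ["C1", "C2"]}
cu = {"C1": ["D2", "D1"], "C2": ["D1", "D2"]}
print(gale_shapley(d2d, cu))  # D1 matched to C2, D2 to C1
```

Because the outcome is proposer-optimal, falsified preference lists on the proposing side are exactly where the cheating strategy of the paper gains leverage.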

  2. A 2-categorical state sum model

    Energy Technology Data Exchange (ETDEWEB)

    Baratin, Aristide, E-mail: abaratin@uwaterloo.ca [Department of Applied Mathematics, University of Waterloo, 200 University Ave W, Waterloo, Ontario N2L 3G1 (Canada); Freidel, Laurent, E-mail: lfreidel@perimeterinstitute.ca [Perimeter Institute for Theoretical Physics, 31 Caroline Str. N, Waterloo, Ontario N2L 2Y5 (Canada)

    2015-01-15

    It has long been argued that higher categories provide the proper algebraic structure underlying state sum invariants of 4-manifolds. This idea has been refined recently, by proposing to use 2-groups and their representations as specific examples of 2-categories. The challenge has been to make these proposals fully explicit. Here, we give a concrete realization of this program. Building upon our earlier work with Baez and Wise on the representation theory of 2-groups, we construct a four-dimensional state sum model based on a categorified version of the Euclidean group. We define and explicitly compute the simplex weights, which may be viewed as a categorified analogue of Racah-Wigner 6j-symbols. These weights solve a hexagon equation that encodes the formal invariance of the state sum under the Pachner moves of the triangulation. This result unravels the combinatorial formulation of the Feynman amplitudes of quantum field theory on flat spacetime proposed in A. Baratin and L. Freidel [Classical Quantum Gravity 24, 2027–2060 (2007)] which was shown to lead after gauge-fixing to Korepanov’s invariant of 4-manifolds.

  3. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    Science.gov (United States)

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with simulation study and real data analysis.

  4. Sum rules for nuclear excitations with the Skyrme-Landau interaction

    International Nuclear Information System (INIS)

    Liu Kehfei; Luo Hongde; Ma Zhongyu; Feng Man; Shen Qingbiao

    1991-01-01

    The energy-weighted sum rules for electric, magnetic, Fermi and Gamow-Teller transitions with the Skyrme-Landau interaction are derived from the double commutators and numerically calculated in a HF + RPA formalism. As a numerical check of the Thouless theorem, our self-consistent calculations show that the calculated RPA strengths exhaust more than 85% of the sum rules in most cases. The well known non-energy-weighted sum rules for Fermi and Gamow-Teller transitions are also checked numerically. The sum rules are exhausted by more than 94% in these cases. (orig.)

  5. Partial sums of arithmetical functions with absolutely convergent ...

    Indian Academy of Sciences (India)

    Keywords. Ramanujan expansions; average order; error terms; sum-of-divisors functions; Jordan's totient functions. 2010 Mathematics Subject Classification. 11N37, 11A25, 11K65. 1. Introduction. The theory of Ramanujan sums and Ramanujan expansions has emerged from the seminal article [10] of Ramanujan. In 1918 ...

  6. Country-specific determinants of world university rankings.

    Science.gov (United States)

    Pietrucha, Jacek

    2018-01-01

    This paper examines country-specific factors that affect the three most influential world university rankings (the Academic Ranking of World Universities, the QS World University Ranking, and the Times Higher Education World University Ranking). We run a cross-sectional regression that covers 42-71 countries (depending on the ranking and data availability). We show that the position of universities from a country in the ranking is determined by the following country-specific variables: economic potential of the country, research and development expenditure, long-term political stability (freedom from war, occupation, coups and major changes in the political system), and institutional variables, including government effectiveness.

  7. Ranking in evolving complex networks

    Science.gov (United States)

    Liao, Hao; Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng; Zhou, Ming-Yang

    2017-05-01

    Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithms perform, and which are their possible biases that may impair their effectiveness. Many popular ranking algorithms (such as Google's PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks. We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes.

  8. A method for generating permutation distribution of ranks in a k ...

    African Journals Online (AJOL)

    ... in a combinatorial sense the distribution of the ranks is obtained via its generating function. The formulas are defined recursively to speed up computations using the computer algebra system Mathematica. Key words: Partitions, generating functions, combinatorics, permutation test, exact tests, computer algebra, k-sample, ...
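The generating-function approach described above can be sketched in Python (the article itself uses the computer algebra system Mathematica): the coefficient of x^w counts the subsets of m ranks out of 1..N whose ranks sum to w, which is exactly the permutation null distribution of the two-sample rank-sum statistic.

```python
# Exact permutation distribution of a rank sum via generating function.
from math import comb

def rank_sum_distribution(m, N):
    """Return {w: count} for the sum of m ranks drawn without
    replacement from 1..N, by a subset-sum polynomial expansion."""
    # poly[k][w] = number of k-subsets of the ranks seen so far
    # whose ranks sum to w
    poly = [{0: 1}] + [dict() for _ in range(m)]
    for r in range(1, N + 1):
        # descending k so each rank r is used at most once
        for k in range(min(r, m), 0, -1):
            for w, c in poly[k - 1].items():
                poly[k][w + r] = poly[k].get(w + r, 0) + c
    return poly[m]

dist = rank_sum_distribution(2, 4)   # two ranks out of 1..4
print(dist)                          # {3: 1, 4: 1, 5: 2, 6: 1, 7: 1}
assert sum(dist.values()) == comb(4, 2)
```

Dividing each count by C(N, m) gives the exact null probabilities used by the Wilcoxon rank-sum test, with no normal approximation.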

  9. Light cone sum rules in nonabelian gauge field theory

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Bern Univ. (Switzerland). Inst. fuer Theoretische Physik

    1981-03-24

    The author examines, in the context of nonabelian gauge field theory, the derivation of the light cone sum rules which were obtained earlier on the assumption of dominance of canonical singularity in the current commutator on the light cone. The retarded scaling functions appearing in the sum rules are numbers known in terms of the charges of the quarks and the number of quarks and gluons in the theory. Possible applications of the sum rules are suggested.

  10. Groundwater contaminant plume ranking

    International Nuclear Information System (INIS)

    1988-08-01

    Contaminant plumes at Uranium Mill Tailings Remedial Action (UMTRA) Project sites were ranked to assist in Subpart B (i.e., restoration requirements of 40 CFR Part 192) compliance strategies for each site, to prioritize aquifer restoration, and to budget future requests and allocations. The rankings roughly estimate hazards to the environment and human health, and thus assist in determining for which sites cleanup, if appropriate, will provide the greatest benefits for funds available. The rankings are based on the scores that were obtained using the US Department of Energy's (DOE) Modified Hazard Ranking System (MHRS). The MHRS and HRS consider and score three hazard modes for a site: migration, fire and explosion, and direct contact. The migration hazard mode score reflects the potential for harm to humans or the environment from migration of a hazardous substance off a site by groundwater, surface water, and air; it is a composite of separate scores for each of these routes. For ranking the contaminant plumes at UMTRA Project sites, it was assumed that each site had been remediated in compliance with the EPA standards and that relict contaminant plumes were present. Therefore, only the groundwater route was scored, and the surface water and air routes were not considered. Section 2.0 of this document describes the assumptions and procedures used to score the groundwater route, and Section 3.0 provides the resulting scores for each site. 40 tabs

  11. Light cone sum rules for single-pion electroproduction

    International Nuclear Information System (INIS)

    Mallik, S.

    1978-01-01

    Light cone dispersion sum rules (of low energy and superconvergence types) are derived for nucleon matrix elements of the commutator involving electromagnetic and divergence of axial vector currents. The superconvergence type sum rules in the fixed mass limit are rewritten without requiring the knowledge of Regge subtractions. The retarded scaling functions occurring in these sum rules are evaluated within the framework of quark light cone algebra of currents. Besides a general consistency check of the framework underlying the derivation, the author infers, on the basis of crude evaluation of scaling functions, an upper limit of 100 MeV for the bare mass of nonstrange quarks. (Auth.)

  12. Spectral sum rules for the three-body problem

    International Nuclear Information System (INIS)

    Bolle, D.; Osborn, T.A.

    1982-01-01

    This paper derives a number of sum rules for nonrelativistic three-body scattering. These rules are valid for any finite region μ in the six-dimensional coordinate space. They relate energy moments of the trace of the on-shell time-delay operator to the energy-weighted probability for finding the three-body bound-state wave functions in the region μ. If μ is all of the six-dimensional space, the global form of the sum rules is obtained. In this form the rules constitute higher-order Levinson's theorems for the three-body problem. Finally, the sum rules are extended to allow the energy moments to have complex powers.

  13. Moessbauer sum rules for use with synchrotron sources

    International Nuclear Information System (INIS)

    Lipkin, Harry J.

    1999-01-01

    The availability of tunable synchrotron radiation sources with millivolt resolution has opened new prospects for exploring dynamics of complex systems with Moessbauer spectroscopy. Early Moessbauer treatments and moment sum rules are extended to treat inelastic excitations measured in synchrotron experiments, with emphasis on the unique new conditions absent in neutron scattering and arising in resonance scattering: prompt absorption, delayed emission, recoil-free transitions and coherent forward scattering. The first moment sum rule normalizes the inelastic spectrum. New sum rules obtained for higher moments include the third moment, which is proportional to the second derivative of the potential acting on the Moessbauer nucleus and independent of temperature in the harmonic approximation.

  14. Faraday effect revisited: sum rules and convergence issues

    DEFF Research Database (Denmark)

    Cornean, Horia; Nenciu, Gheorghe

    2010-01-01

    This is the third paper of a series revisiting the Faraday effect. The question of the absolute convergence of the sums over the band indices entering the Verdet constant is considered. In general, sum rules and traces per unit volume play an important role in solid-state physics, and they give...

  15. Standard test method for ranking resistance of plastics to sliding wear using block-on-ring wear test—cumulative wear method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2003-01-01

    1.1 This test method covers laboratory procedures for determining the resistance of plastics to sliding wear. The test utilizes a block-on-ring friction and wear testing machine to rank plastics according to their sliding wear characteristics against metals or other solids. 1.2 An important attribute of this test is that it is very flexible. Any material that can be fabricated into, or applied to, blocks and rings can be tested. Thus, the potential materials combinations are endless. In addition, the test can be run with different gaseous atmospheres and elevated temperatures, as desired, to simulate service conditions. 1.3 Wear test results are reported as the volume loss in cubic millimetres for the block and ring. Materials of higher wear resistance will have lower volume loss. 1.4 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with it...

  16. Inhibition of osteoclastogenesis by RNA interference targeting RANK

    Directory of Open Access Journals (Sweden)

    Ma Ruofan

    2012-08-01

    Background: Osteoclasts and osteoblasts regulate bone resorption and formation to allow bone remodeling and homeostasis. The balance between bone resorption and formation is disturbed by abnormal recruitment of osteoclasts. Osteoclast differentiation is dependent on the receptor activator of nuclear factor NF-kappa B (RANK) ligand (RANKL) as well as the macrophage colony-stimulating factor (M-CSF). The RANKL/RANK system and RANK signaling induce osteoclast formation mediated by various cytokines. The RANK/RANKL pathway has been primarily implicated in metabolic, degenerative and neoplastic bone disorders or osteolysis. The central role of the RANK/RANKL interaction in osteoclastogenesis makes RANK an attractive target for potential therapies in the treatment of osteolysis. The purpose of this study was to assess the effect of inhibition of RANK expression in mouse bone marrow macrophages on osteoclast differentiation and bone resorption. Methods: Three pairs of short hairpin RNAs (shRNAs) targeting RANK were designed and synthesized. The optimal shRNA was selected among the three pairs by analyzing RANK expression with Western blot and real-time PCR. We investigated suppression of osteoclastogenesis of mouse bone marrow macrophages (BMMs) using the optimal shRNA targeting RANK. Results: Among the three shRNAs examined, shRANK-3 significantly suppressed RANK expression, by 88.3%. Conclusions: These findings suggest that retrovirus-mediated shRNA targeting RANK inhibits osteoclast differentiation and osteolysis. RANK may be an attractive target for preventing osteolysis in humans, with potential clinical application.

  17. On Rank and Nullity

    Science.gov (United States)

    Dobbs, David E.

    2012-01-01

    This note explains how Emil Artin's proof that row rank equals column rank for a matrix with entries in a field leads naturally to the formula for the nullity of a matrix and also to an algorithm for solving any system of linear equations in any number of variables. This material could be used in any course on matrix theory or linear algebra.
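The facts the note builds on can be checked numerically (a companion sketch of our own, not material from the note): for an m-by-n matrix, row rank equals column rank, and rank plus nullity equals the number of columns n.

```python
# Numerical check of rank-nullity and row rank == column rank.
import numpy as np

# The second row is twice the first, so the rank is 2, not 3.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank          # rank-nullity: nullity = n - rank
print(rank, nullity)                 # 2 1

# Row rank of A is the column rank of A^T.
assert np.linalg.matrix_rank(A.T) == rank
```

The one-dimensional null space here is spanned by (1, 1, -1)^T, which a Gaussian-elimination-based solver as described in the note would also produce.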

  18. Strategic planning at the national level: Evaluating and ranking energy projects by environmental impact

    International Nuclear Information System (INIS)

    Thorhallsdottir, Thora Ellen

    2007-01-01

    A method for evaluating and ranking energy alternatives based on impact upon the natural environment and cultural heritage was developed as part of the first phase of an Icelandic framework plan for the use of hydropower and geothermal energy. The three-step procedure involved assessing i) site values and ii) development impacts within a multi-criteria analysis, and iii) ranking the alternatives from worst to best choice from an environmental and cultural heritage point of view. The natural environment was treated as four main classes (landscape + wilderness, geology + hydrology, species, and ecosystem/habitat types + soils), while cultural heritage constituted one class. Values and impacts were assessed within a common matrix with 6 agglomerated attributes: 1) diversity, richness, 2) rarity, 3) size (area), completeness, pristineness, 4) information (epistemological, typological, scientific and educational) and symbolic value, 5) international responsibility, and 6) scenic value. Standardized attribute scores were used to derive total class scores whose weighted sums yielded total site value and total impact. The final output was a one-dimensional ranking obtained by the Analytic Hierarchy Process, considering total predicted impacts, total site values, risks and uncertainties, as well as special site values. The value/impact matrix is compact (31 cell scores) but was considered to be of sufficient resolution, and has the advantage of facilitating overview and communication of the methods and results. The classes varied widely in the extent to which value assessments could be based on established scientific procedures, and the project highlighted the immense advantage of an internationally accepted frame of reference: first for establishing the theoretical and scientific foundation, second as a tool for evaluation, and third for allowing a global perspective.

  19. Ranking economic history journals

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Weisdorf, Jacob Louis

    2010-01-01

    This study ranks, for the first time, 12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis, where we adjust for age, size and self-citation of journals. We also compare the leading economic history journals with the leading journals in economics in order to measure the influence on economics of economic history, and vice versa. With a few exceptions, our results confirm the general idea about which economic history journals are the most influential for economic history, and that, although economic history is quite independent from economics as a whole, knowledge exchange between the two fields is indeed going on.

  20. Ranking Economic History Journals

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Weisdorf, Jacob Louis

    This study ranks, for the first time, 12 international academic journals that have economic history as their main topic. The ranking is based on data collected for the year 2007. Journals are ranked using standard citation analysis, where we adjust for age, size and self-citation of journals. We also compare the leading economic history journals with the leading journals in economics in order to measure the influence on economics of economic history, and vice versa. With a few exceptions, our results confirm the general idea about which economic history journals are the most influential for economic history, and that, although economic history is quite independent from economics as a whole, knowledge exchange between the two fields is indeed going on.

  1. A Universal Rank-Size Law

    Science.gov (United States)

    2016-01-01

    A mere hyperbolic law, like the Zipf power-law function, is often inadequate to describe rank-size relationships. An alternative theoretical distribution is proposed, based on theoretical physics arguments starting from the Yule-Simon distribution. A modeling approach is proposed leading to a universal form. A theoretical suggestion for the "best (or optimal) distribution" is provided through an entropy argument. The ranking of areas through the number of cities in various countries and some sport competition rankings serve as the present illustrations. PMID:27812192

  2. On the ranking of chemicals based on their PBT characteristics: comparison of different ranking methodologies using selected POPs as an illustrative example.

    Science.gov (United States)

    Sailaukhanuly, Yerbolat; Zhakupbekova, Arai; Amutova, Farida; Carlsen, Lars

    2013-01-01

    Knowledge of the environmental behavior of chemicals is a fundamental part of the risk assessment process. The present paper discusses various methods of ranking a series of persistent organic pollutants (POPs) according to their persistence, bioaccumulation and toxicity (PBT) characteristics. Traditionally, ranking has been done as an absolute (total) ranking, applying various multicriteria data analysis methods such as simple additive ranking (SAR) or rankings based on various utility functions (UFs). An attractive alternative to these ranking methodologies appears to be partial order ranking (POR). The present paper compares different ranking methods, namely SAR, UF and POR. Significant discrepancies between the rankings are noted, and it is concluded that partial order ranking, as a method without any pre-assumptions concerning possible relations between the single parameters, appears to be the most attractive ranking methodology. In addition to the initial ranking, partial order methodology offers a wide variety of analytical tools to elucidate the interplay between the objects to be ranked and the ranking parameters. The present study includes an analysis of the relative importance of the single P, B and T parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.
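The core of partial order ranking can be sketched in a few lines (an illustration of the general idea, with made-up PBT values): chemical a is ranked above b only if a scores at least as high on all three of P, B and T; otherwise the pair is left incomparable, with no aggregation assumptions.

```python
# Partial order (dominance) comparison on (P, B, T) score triples.
def dominates(a, b):
    """a dominates b iff a >= b component-wise and a != b."""
    return all(x >= y for x, y in zip(a, b)) and a != b

chems = {"chem1": (0.9, 0.8, 0.7),   # illustrative (P, B, T) scores
         "chem2": (0.5, 0.4, 0.3),
         "chem3": (0.6, 0.9, 0.2)}

# Pairs ordered by the partial order; all other pairs are incomparable.
ordered = [(a, b) for a in chems for b in chems
           if dominates(chems[a], chems[b])]
print(ordered)  # [('chem1', 'chem2')]
```

Here chem1 and chem3 are incomparable (chem3 is more bioaccumulative but less persistent and less toxic), which is exactly the kind of conflict that total-ranking methods like SAR or utility functions resolve only by imposing weights.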

  3. Teaching quantum physics by the sum over paths approach and GeoGebra simulations

    International Nuclear Information System (INIS)

    Malgieri, M; Onorato, P; De Ambrosis, A

    2014-01-01

    We present a research-based teaching sequence in introductory quantum physics using the Feynman sum over paths approach. Our reconstruction avoids the historical pathway, and starts by reconsidering optics from the standpoint of the quantum nature of light, analysing both traditional and modern experiments. The core of our educational path lies in the treatment of conceptual and epistemological themes, peculiar to quantum theory, based on evidence from quantum optics, such as the single photon Mach–Zehnder and Zhou–Wang–Mandel experiments. The sequence is supported by a collection of interactive simulations, realized in the open source GeoGebra environment, which we used to assist students in learning the basics of the method, and help them explore the proposed experimental situations as modeled in the sum over paths perspective. We tested our approach in the context of a post-graduate training course for pre-service physics teachers; according to the data we collected, student teachers displayed a greatly improved understanding of conceptual issues, and acquired significant abilities in using the sum over paths method for problem solving. (paper)
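The computational core of the sum over paths method can be shown in a toy calculation (our own illustration, not the authors' GeoGebra material): assign each path a unit phasor exp(2πiL/λ), add the phasors, and square the magnitude of the total amplitude.

```python
# Two-path interference by summing complex path amplitudes.
import cmath
import math

def detection_probability(path_lengths, wavelength):
    """Average the unit phasors exp(2*pi*i*L/lambda), one per path,
    and square the magnitude of the resulting amplitude."""
    amp = sum(cmath.exp(2j * math.pi * L / wavelength)
              for L in path_lengths) / len(path_lengths)
    return abs(amp) ** 2

lam = 500e-9
# Equal path lengths interfere constructively...
print(detection_probability([1e-6, 1e-6], lam))            # ≈ 1.0
# ...while a half-wavelength difference gives a dark output.
print(detection_probability([1e-6, 1e-6 + lam / 2], lam))  # ≈ 0.0
```

The same phasor-addition rule, applied to the two arms of a Mach–Zehnder interferometer, reproduces the bright and dark output ports discussed in the sequence.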

  4. Statistical sum of bosonic string, compactified on an orbifold

    International Nuclear Information System (INIS)

    Morozov, A.; Ol'shanetskij, M.

    1986-01-01

    An expression for the statistical sum of a bosonic string compactified on a singular orbifold is presented. All the information about the orbifold is encoded in the specific combination of theta-functions through which the statistical sum is expressed.

  5. Efficient yellow beam generation by intracavity sum frequency ...

    Indian Academy of Sciences (India)

    2014-02-06

    We present our studies on dual wavelength operation using a single Nd:YVO4 crystal and its intracavity sum frequency generation by considering the influence of the thermal lensing effect on the performance of the laser. A KTP crystal cut for type-II phase matching was used for intracavity sum frequency ...

  6. A set of sums for continuous dual q^{-2}-Hahn polynomials

    International Nuclear Information System (INIS)

    Gade, R. M.

    2009-01-01

    An infinite set {τ^(l)(y; r, z)}, r, l ∈ ℕ₀, of linear sums of continuous dual q^{-2}-Hahn polynomials with prefactors depending on a complex parameter z is studied. The sums τ^(l)(y; r, z) have an interpretation in the context of tensor product representations of the quantum affine algebra U'_q(sl(2)) involving both a positive and a negative discrete series representation. For each l > 0, the sum τ^(l)(y; r, z) can be expressed in terms of the sum τ^(0)(y; r, z), continuous dual q^{-2}-Hahn polynomials, and their associated polynomials. The sum τ^(0)(y; r, z) is obtained as a combination of eight basic hypergeometric series. Moreover, an integral representation is provided for the sums τ^(l)(y; r, z) with the complex parameter restricted by |zq| ...

  7. Implementation of Chaotic Gaussian Particle Swarm Optimization for Optimize Learning-to-Rank Software Defect Prediction Model Construction

    Science.gov (United States)

    Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.

    2018-03-01

    Finding the existence of software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction is required not only to state the existence of defects, but also to give a list of priorities indicating which modules require more intensive testing. Therefore, the allocation of test resources can be managed efficiently. Learning to rank is one of the approaches that can provide defect module ranking data for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We have used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using chaotic Gaussian particle swarm optimization achieve better accuracy on 5 data sets, tie on 5 data sets and do worse on 1 data set. Thus, we conclude that the application of chaotic Gaussian particle swarm optimization in the learning-to-rank approach can improve the accuracy of the defect module ranking in data sets that have high-dimensional features.
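    As an illustration of the chaotic ingredient, the sketch below runs a minimal one-dimensional PSO whose inertia weight is driven by a logistic chaotic map. This is one common way of injecting chaos into PSO; the paper's Gaussian-sampling variant and its learning-to-rank objective are not reproduced here, and all parameter values are illustrative:

```python
import random

def chaotic_pso(f, lo, hi, n_particles=20, iters=100, seed=1):
    """Minimize f over [lo, hi] with a bare-bones PSO whose inertia
    weight follows the logistic map z <- 4z(1-z), fully chaotic at r=4.
    Cognitive/social coefficients of 2.0 are conventional defaults."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # per-particle best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    z = 0.7                             # chaotic state (avoid fixed points 0, 0.75)
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)
        w = 0.4 + 0.5 * z               # chaotic inertia weight in [0.4, 0.9]
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + 2.0 * rng.random() * (pbest[i] - xs[i])
                     + 2.0 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i], fx
    return gbest, gbest_f
```

    On a smooth one-dimensional objective such as (x - 3)^2 the swarm settles near the optimum within a few dozen iterations.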

  8. Derivation of sum rules for quark and baryon fields

    International Nuclear Information System (INIS)

    Bongardt, K.

    1978-01-01

    In a way analogous to the Weinberg sum rules, two spectral-function sum rules for quark and baryon fields are derived by means of the concept of lightlike charges. The baryon sum rules are valid for the case of SU(3) as well as for SU(4), and the one-particle approximation yields a linear mass relation. This relation is not in disagreement with the normal linear GMO formula for the baryons. The calculated masses of the first resonance states agree very well with the experimental data.

  9. Gene Ranking of RNA-Seq Data via Discriminant Non-Negative Matrix Factorization.

    Science.gov (United States)

    Jia, Zhilong; Zhang, Xiang; Guan, Naiyang; Bo, Xiaochen; Barnes, Michael R; Luo, Zhigang

    2015-01-01

    RNA-sequencing is rapidly becoming the method of choice for studying the full complexity of transcriptomes; however, with increasing dimensionality, accurate gene ranking is becoming increasingly challenging. This paper proposes an accurate and sensitive gene ranking method that implements discriminant non-negative matrix factorization (DNMF) for RNA-seq data. To the best of our knowledge, this is the first work to explore the utility of DNMF for gene ranking. When incorporating Fisher's discriminant criteria and setting the reduced dimension as two, DNMF learns two factors to approximate the original gene expression data, abstracting the up-regulated or down-regulated metagene by using the sample label information. The first factor denotes all the genes' weights of two metagenes as the additive combination of all genes, while the second learned factor represents the expression values of two metagenes. In the gene ranking stage, all the genes are ranked as a descending sequence according to the differential values of the metagene weights. Leveraging the nature of NMF and Fisher's criterion, DNMF can robustly boost the gene ranking performance. The Area Under the Curve analysis of differential expression analysis on two benchmarking tests of four RNA-seq data sets with similar phenotypes showed that our proposed DNMF-based gene ranking method outperforms other widely used methods. Moreover, the Gene Set Enrichment Analysis also showed that DNMF outperforms the others. DNMF is also computationally efficient, substantially outperforming all other benchmarked methods. Consequently, we suggest DNMF is an effective method for the analysis of differential gene expression and gene ranking for RNA-seq data.
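    The ranking stage described above can be sketched with plain Lee-Seung multiplicative-update NMF standing in for the discriminant variant (the Fisher term of DNMF is omitted in this sketch); genes are then ranked by the descending absolute difference of their two metagene weights:

```python
import random

def nmf(X, k=2, iters=200, seed=0):
    """Approximate X (genes x samples, nonnegative) as W * H using
    Lee-Seung style multiplicative updates. Plain NMF only: the
    discriminant (Fisher) term of DNMF is not included."""
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WH = [[sum(W[i][a] * H[a][j] for a in range(k)) for j in range(m)]
              for i in range(n)]
        for a in range(k):                  # H <- H * (W^T X) / (W^T W H)
            for j in range(m):
                num = sum(W[i][a] * X[i][j] for i in range(n))
                den = sum(W[i][a] * WH[i][j] for i in range(n)) + 1e-12
                H[a][j] *= num / den
        WH = [[sum(W[i][a] * H[a][j] for a in range(k)) for j in range(m)]
              for i in range(n)]
        for i in range(n):                  # W <- W * (X H^T) / (W H H^T)
            for a in range(k):
                num = sum(X[i][j] * H[a][j] for j in range(m))
                den = sum(WH[i][j] * H[a][j] for j in range(m)) + 1e-12
                W[i][a] *= num / den
    return W, H

def rank_genes(W):
    """Gene ranking stage: sort genes by the descending absolute
    difference of their two metagene weights."""
    return sorted(range(len(W)), key=lambda i: -abs(W[i][0] - W[i][1]))
```

    A gene expressed only under one condition gets a lopsided weight pair such as (5.0, 0.1) and ranks ahead of a flat gene with weights (1.0, 1.0).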

  10. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency component of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of basic RKHSs. For sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we tradeoff the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
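    A minimal sketch of the idea, using the standard fact that the sum of two RKHSs has reproducing kernel k1 + k2: regularized least squares with one wide and one narrow Gaussian kernel, so the wide kernel captures the low-frequency component and the narrow one the high-frequency component. The kernel widths and regularization value below are illustrative, not the paper's:

```python
import math

def gauss(x, y, s):
    return math.exp(-((x - y) ** 2) / (2 * s * s))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_sum_space(xs, ys, s_large=2.0, s_small=0.1, lam=1e-4):
    """Least square regularized regression in the sum of two Gaussian
    RKHSs: the sum space's kernel is k_large + k_small, and the
    coefficients come from the linear system (K + lam*n*I) c = y."""
    n = len(xs)
    k = lambda a, b: gauss(a, b, s_large) + gauss(a, b, s_small)
    K = [[k(xs[i], xs[j]) + (lam * n if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    c = solve(K, list(ys))
    return lambda x: sum(cj * k(x, xj) for cj, xj in zip(c, xs))
```

    Fitting a target with both a slow and a fast oscillation shows the point of the sum space: neither scale alone interpolates well, but their sum does.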

  11. On the Computation of Correctly Rounded Sums

    DEFF Research Database (Denmark)

    Kornerup, Peter; Lefevre, Vincent; Louvet, Nicolas

    2012-01-01

    This paper presents a study of some basic blocks needed in the design of floating-point summation algorithms. In particular, in radix-2 floating-point arithmetic, we show that among the set of the algorithms with no comparisons performing only floating-point additions/subtractions, the 2Sum algorithm introduced by Knuth is minimal, both in terms of number of operations and depth of the dependency graph. We investigate the possible use of another algorithm, Dekker's Fast2Sum algorithm, in radix-10 arithmetic. We give methods for computing, in radix 10, the floating-point number nearest the average value of two floating-point numbers. We also prove that under reasonable conditions, an algorithm performing only round-to-nearest additions/subtractions cannot compute the round-to-nearest sum of at least three floating-point numbers. Starting from an algorithm due to Boldo and Melquiond, we also ...
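    Knuth's 2Sum block discussed here is short enough to state in full. In radix-2 arithmetic it returns the correctly rounded sum together with the exact rounding error, using six floating-point operations and no comparisons:

```python
def two_sum(a: float, b: float):
    """Knuth's 2Sum: returns (s, e) with s = fl(a + b) and
    a + b = s + e exactly (barring overflow), in radix-2 arithmetic."""
    s = a + b
    b_virtual = s - a           # part of b actually absorbed into s
    a_virtual = s - b_virtual   # part of a actually absorbed into s
    b_round = b - b_virtual     # rounding error attributable to b
    a_round = a - a_virtual     # rounding error attributable to a
    return s, a_round + b_round
```

    For example, two_sum(1e16, 1.0) returns (1e16, 1.0): the unit that round-to-nearest discards from the sum survives in the error term.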

  12. Comparing classical and quantum PageRanks

    Science.gov (United States)

    Loke, T.; Tang, J. W.; Rodriguez, J.; Small, M.; Wang, J. B.

    2017-01-01

    Following recent developments in quantum PageRanking, we present a comparative analysis of discrete-time and continuous-time quantum-walk-based PageRank algorithms. Relative to classical PageRank and to different extents, the quantum measures better highlight secondary hubs and resolve ranking degeneracy among peripheral nodes for all networks we studied in this paper. For the discrete-time case, we investigated the periodic nature of the walker's probability distribution for a wide range of networks and found that the dominant period does not grow with the size of these networks. Based on this observation, we introduce a new quantum measure using the maximum probabilities of the associated walker during the first couple of periods. This is particularly important, since it leads to a quantum PageRanking scheme that is scalable with respect to network size.

  13. Individual wealth rank, community wealth inequality, and self-reported adult poor health: a test of hypotheses with panel data (2002-2006) from native Amazonians, Bolivia.

    Science.gov (United States)

    Undurraga, Eduardo A; Nyberg, Colleen; Eisenberg, Dan T A; Magvanjav, Oyunbileg; Reyes-García, Victoria; Huanca, Tomás; Leonard, William R; McDade, Thomas W; Tanner, Susan; Vadez, Vincent; Godoy, Ricardo

    2010-12-01

    Growing evidence suggests that economic inequality in a community harms a person's health. Using panel data from a small-scale, preindustrial rural society, we test whether individual wealth rank and village wealth inequality affect self-reported poor health in a foraging-farming native Amazonian society. A person's wealth rank was negatively but weakly associated with self-reported morbidity. Each step up/year in the village wealth hierarchy reduced total self-reported days ill by 0.4 percent. The Gini coefficient of village wealth inequality bore a positive association with self-reported poor health that was large in size, but not statistically significant. We found small village wealth inequality, and evidence that individual economic rank did not change. The modest effects may have to do with our use of subjective rather than objective measures of health, with the small village wealth inequality, and with a possibly genuinely modest effect of a person's wealth rank on health in a small-scale, kin-based society. Finally, we also found that an increase in mean individual wealth by village was related to worse self-reported health. As the Tsimane' integrate into the market economy, their possibilities of wealth accumulation rise, which may affect their well-being. Our work contributes to recent efforts in biocultural anthropology to link the study of social inequalities, human biology, and human-environment interactions.

  14. A family longevity selection score: ranking sibships by their longevity, size, and availability for study.

    Science.gov (United States)

    Sebastiani, Paola; Hadley, Evan C; Province, Michael; Christensen, Kaare; Rossi, Winifred; Perls, Thomas T; Ash, Arlene S

    2009-12-15

    Family studies of exceptional longevity can potentially identify genetic and other factors contributing to long life and healthy aging. Although such studies seek families that are exceptionally long lived, they also need living members who can provide DNA and phenotype information. On the basis of these considerations, the authors developed a metric to rank families for selection into a family study of longevity. Their measure, the family longevity selection score (FLoSS), is the sum of 2 components: 1) an estimated family longevity score built from birth-, gender-, and nation-specific cohort survival probabilities and 2) a bonus for older living siblings. The authors examined properties of FLoSS-based family rankings by using data from 3 ongoing studies: the New England Centenarian Study, the Framingham Heart Study, and screenees for the Long Life Family Study. FLoSS-based selection yields families with exceptional longevity, satisfactory sibship sizes and numbers of living siblings, and high ages. Parameters in the FLoSS formula can be tailored for studies of specific populations or age ranges or with different conditions. The first component of the FLoSS also provides a conceptually sound survival measure to characterize exceptional longevity in individuals or families in various types of studies and correlates well with later-observed longevity.
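    The two-component structure of the score can be sketched as follows. The negative-log-survival transform, the 0.5-point bonus and the age-80 cutoff below are illustrative assumptions, not the published FLoSS coefficients, and real use requires birth-, gender-, and nation-specific cohort life tables:

```python
from math import log

def floss_like_score(siblings, bonus_per_living=0.5, bonus_age=80):
    """Illustrative family-longevity selection score. Each sibling is a
    tuple (survival_prob, alive, age), where survival_prob is the cohort
    probability of surviving to the sibling's attained age. Component 1
    rewards rare survival (-log of the probability); component 2 adds a
    bonus for each older living sibling, who could still provide DNA and
    phenotype data. All coefficients here are assumptions."""
    rarity = sum(-log(s) for s, _, _ in siblings)
    bonus = sum(bonus_per_living for _, alive, age in siblings
                if alive and age >= bonus_age)
    return rarity + bonus
```

    A living 95-year-old whose cohort survival probability is 1% contributes far more than a deceased sibling with a 5% probability, matching the intent of favoring both exceptional longevity and availability for study.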

  15. Adaptive Game Level Creation through Rank-based Interactive Evolution

    DEFF Research Database (Denmark)

    Liapis, Antonios; Martínez, Héctor Pérez; Togelius, Julian

    2013-01-01

    as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using ...

  16. 29 CFR 4044.75 - Other lump sum benefits.

    Science.gov (United States)

    2010-07-01

    ... sum benefits. The value of a lump sum benefit which is not covered under § 4044.73 or § 4044.74 is equal to— (a) The value under the qualifying bid, if an insurer provides the benefit; or (b) The present value of the benefit as of the date of distribution, determined using reasonable actuarial assumptions...

  17. Modified Phenomena Identification and Ranking Table (PIRT) for Uncertainty Analysis

    International Nuclear Information System (INIS)

    Gol-Mohamad, Mohammad P.; Modarres, Mohammad; Mosleh, Ali

    2006-01-01

    This paper describes a methodology for characterizing important phenomena, which is part of broader research by the authors called 'Modified PIRT'. The methodology provides a robust process of phenomena identification and ranking for more precise quantification of uncertainty. It is a two-step identification and ranking methodology based on thermal-hydraulic (TH) importance as well as uncertainty importance. The Analytic Hierarchy Process (AHP) has been used as a formal approach for TH identification and ranking. A formal uncertainty importance technique is used to estimate the degree of credibility of the TH model(s) used to represent the important phenomena. This part uses subjective justification by evaluating available information and data from experiments and code predictions. The proposed methodology was demonstrated by developing a PIRT for a large break loss of coolant accident (LBLOCA) for the LOFT integral facility with highest core power (test LB-1). (authors)

  18. New QCD sum rules for nucleon axial-vector coupling constants

    International Nuclear Information System (INIS)

    Lee, F.X.; Leinweber, D.B.; Jin, X.

    1997-01-01

    Two new sets of QCD sum rules for the nucleon axial-vector coupling constants are derived using the external-field technique and generalized interpolating fields. An in-depth study of the predictive ability of these sum rules is carried out using a Monte Carlo based uncertainty analysis. The results show that the standard implementation of the QCD sum rule method has only marginal predictive power for the nucleon axial-vector coupling constants, as the relative errors are large. The errors range from approximately 50% to 100%, compared to the nucleon mass obtained from the same method, which has only a 10%-25% error. The origin of the large errors is examined. Previous analyses of these coupling constants are based on sum rules that have poor operator product expansion convergence and large continuum contributions. Preferred sum rules are identified and their predictions are obtained. We also investigate the new sum rules with an alternative treatment of the problematic transitions which are not exponentially suppressed in the standard treatment. The alternative treatment provides exponential suppression of their contributions relative to the ground state. Implications for other nucleon current matrix elements are also discussed. copyright 1997 The American Physical Society

  19. Bootstrap Determination of the Co-Integration Rank in Heteroskedastic VAR Models

    DEFF Research Database (Denmark)

    Cavaliere, G.; Rahbek, Anders; Taylor, A.M.R.

    2014-01-01

    In a recent paper Cavaliere et al. (2012) develop bootstrap implementations of the (pseudo-) likelihood ratio (PLR) co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates of the underlying vector autoregressive (VAR) model which obtain under the reduced rank null hypothesis. They propose methods based on an independent and identically distributed (i.i.d.) bootstrap resampling scheme and establish the validity of their proposed bootstrap procedures in the context of a co-integrated VAR model with i.i.d. innovations. In this paper we investigate the properties of their bootstrap procedures, together with analogous procedures based on a wild bootstrap resampling scheme, when time-varying behavior is present in either the conditional or unconditional variance of the innovations. We ...

  20. The novel nomogram of Gleason sum upgrade: possible application for the eligible criteria of low dose rate brachytherapy.

    Science.gov (United States)

    Budäus, Lars; Graefen, Markus; Salomon, Georg; Isbarn, Hendrik; Lughezzani, Giovanni; Sun, Maxine; Chun, Felix K H; Schlomm, Thorsten; Steuber, Thomas; Haese, Alexander; Koellermann, Jens; Sauter, Guido; Fisch, Margit; Heinzer, Hans; Huland, Hartwig; Karakiewicz, Pierre I

    2010-10-01

    To examine the rate of Gleason sum upgrading (GSU) from a sum of 6 to a Gleason sum of ≥7 in patients undergoing radical prostatectomy (RP), who fulfilled the recommendations for low dose rate brachytherapy (Gleason sum 6, prostate-specific antigen ≤10 ng/mL, clinical stage ≤T2a and prostate volume ≤50 mL), and to test the performance of an existing nomogram for prediction of GSU in this specific cohort of patients. The analysis focused on 414 patients, who fulfilled the European Society for Therapeutic Radiation and Oncology and American Brachytherapy Society criteria for low dose rate brachytherapy (LD-BT) and underwent a 10-core prostate biopsy followed by RP. The rate of GSU was tabulated and the ability of available clinical and pathological parameters for predicting GSU was tested. Finally, the performance of an existing GSU nomogram was explored. The overall rate of GSU was 35.5%. When applied to LD-BT candidates, the existing nomogram was 65.8% accurate versus 70.8% for the new nomogram. In decision curve analysis tests, the new nomogram fared substantially better than the assumption that no patient is upgraded and better than the existing nomogram. GSU represents an important issue in LD-BT candidates. The new nomogram might improve patient selection for LD-BT and cancer control outcome by excluding patients with an elevated probability of GSU. © 2010 The Japanese Urological Association.

  1. On the Laplace transform of the Weinberg type sum rules

    International Nuclear Information System (INIS)

    Narison, S.

    1981-09-01

    We consider the Laplace transform of various sum rules of the Weinberg type, including the leading non-perturbative effects. We show from the third type of Weinberg sum rules that 7.5 to 8.9 ... coupling to the W boson, while the second sum rule gives an upper bound on the A_1 mass (M(A_1) ≲ 1.25 GeV). (author)

  2. Error analysis of stochastic gradient descent ranking.

    Science.gov (United States)

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2013-06-01

    Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of the suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
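    A minimal sketch of the flavor of algorithm analyzed here: stochastic gradient descent on a pairwise least-squares ranking loss with L2 regularization. For readability the scorer is linear rather than kernel-based, and the step size, regularization and epoch count are illustrative, not the paper's choices:

```python
import random

def sgd_rank(data, eta=0.01, lam=0.001, epochs=500, seed=0):
    """Pairwise least-squares ranking via SGD. data is a list of (x, y)
    with x a feature list and y a relevance score; for a random pair
    (i, j) the per-step loss is (w.(xi - xj) - (yi - yj))^2 + lam*|w|^2.
    Linear stand-in for the kernelized algorithm."""
    rng = random.Random(seed)
    d = len(data[0][0])
    w = [0.0] * d
    for _ in range(epochs):
        (xi, yi), (xj, yj) = rng.sample(data, 2)
        diff = [a - b for a, b in zip(xi, xj)]
        err = sum(wk * dk for wk, dk in zip(w, diff)) - (yi - yj)
        # gradient step on the pairwise squared error plus L2 penalty
        w = [wk - eta * (err * dk + lam * wk) for wk, dk in zip(w, diff)]
    return w

def score(w, x):
    return sum(wk * xk for wk, xk in zip(w, x))
```

    On toy data whose relevance equals the single feature, the learned weight approaches 1 and the induced ordering matches the relevance ordering.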

  3. Interindividual testing of water-soluble oral contrast media in respect of diagnostic ranking, side effects and taste

    International Nuclear Information System (INIS)

    Staebler, A.; Fink, U.; Siuda, S.; Neville, S.

    1989-01-01

    Three groups of patients (n = 55, 52 and 54) were examined with the X-ray contrast media Gastrografin, Peritrast-Oral GI, and Telebrix Gastro to assess the diagnostic ranking, side effects and taste of water-soluble oral contrast media. No significant differences were seen in respect of diagnostic ranking and side effects. Side effects were exclusively abdominal symptoms; there was no difference with regard to laxative action. Telebrix Gastro was accepted significantly better in respect of taste than Gastrografin and Peritrast-Oral GI. (orig.)

  4. The Influence of Wealth, Transparency, and Democracy on the Number of Top Ranked Universities

    Science.gov (United States)

    Jabnoun, Naceur

    2015-01-01

    Purpose: This paper aims to explore the influence of wealth, transparency and democracy on the number of universities per million people ranked among the top 300 and 500. The highly ranked universities in the world tend to be concentrated in a few countries. Design/Methodology/Approach: ANOVA was used to test the differences between the two groups…

  5. Contests with rank-order spillovers

    NARCIS (Netherlands)

    M.R. Baye (Michael); D. Kovenock (Dan); C.G. de Vries (Casper)

    2012-01-01

    textabstractThis paper presents a unified framework for characterizing symmetric equilibrium in simultaneous move, two-player, rank-order contests with complete information, in which each player's strategy generates direct or indirect affine "spillover" effects that depend on the rank-order of her

  6. Proximinality in generalized direct sums

    Directory of Open Access Journals (Sweden)

    Darapaneni Narayana

    2004-01-01

    We consider proximinality and transitivity of proximinality for subspaces of finite codimension in generalized direct sums of Banach spaces. We give several examples of Banach spaces where proximinality is transitive among subspaces of finite codimension.

  7. Measurement sum theory and application - Application to low level measurements

    International Nuclear Information System (INIS)

    Puydarrieux, S.; Bruel, V.; Rivier, C.; Crozet, M.; Vivier, A.; Manificat, G.; Thaurel, B.; Mokili, M.; Philippot, B.; Bohaud, E.

    2015-09-01

    In laboratories, most of the Total Sum methods implemented today use substitution or censoring methods for non-significant or negative values, and thus create biases which can sometimes be quite large. They are usually positive, and generate, for example, becquerel (Bq) counts or 'administrative' (i.e. 'virtual') quantities of materials, thus artificially falsifying the records kept by the laboratories under regulatory requirements (environmental release records, waste records, etc.). This document suggests a methodology which enables the user to avoid such biases. It is based on the following two fundamental rules: - The Total Sum of measurement values must be established from all the individual measurement values, even those considered non-significant, including the negative values. Any modification of these values, under the pretext that they are not significant, will inevitably lead to biases in the accumulated result and falsify the evaluation of its uncertainty. - In Total Sum operations, the decision thresholds are arrived at in a similar way to the approach used for uncertainties. The document deals with four essential aspects of the notion of 'measurement Total Sums': - the expression of results and associated uncertainties close to decision thresholds and detection or quantification limits, - the Total Sum of these measurements: sum or mean, - the calculation of the uncertainties associated with the Total Sums, - result presentation (particularly when preparing balance sheets or reports, etc.). Several case studies arising from different situations are used to illustrate the methodology: environmental monitoring reports, release reports, and chemical impurity Total Sums for the qualification of a finished product. The special case of material balances, in which the measurements are usually all significant and correlated (the covariance term cannot then be ignored), will be the subject of a future second document. This ...
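    The two rules translate into a few lines of code: aggregate all raw values, negatives included, propagate uncertainties quadratically (assuming uncorrelated measurements), and derive the decision threshold of the total the same way as for a single measurement. The coverage factor k = 1.645 below is an assumed convention, not a value stated in the document:

```python
from math import sqrt

def total_sum(values, uncertainties, k=1.645):
    """Aggregate raw measurement results without censoring. Negative and
    non-significant values are kept as-is, since substituting zeros (or
    detection limits) biases the total upward. Standard uncertainties
    are combined quadratically (uncorrelated measurements assumed), and
    the decision threshold of the total is k times the combined
    uncertainty (k = 1.645, roughly a 5% false-positive risk)."""
    total = sum(values)
    u = sqrt(sum(ui ** 2 for ui in uncertainties))
    decision_threshold = k * u
    return total, u, decision_threshold, total > decision_threshold
```

    Note that a slightly negative individual value simply shrinks the total instead of being clipped to zero, which is exactly the bias the document warns against.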

  8. Integrals of Lagrange functions and sum rules

    Energy Technology Data Exchange (ETDEWEB)

    Baye, Daniel, E-mail: dbaye@ulb.ac.be [Physique Quantique, CP 165/82, Universite Libre de Bruxelles, B 1050 Bruxelles (Belgium); Physique Nucleaire Theorique et Physique Mathematique, CP 229, Universite Libre de Bruxelles, B 1050 Bruxelles (Belgium)

    2011-09-30

    Exact values are derived for some matrix elements of Lagrange functions, i.e. orthonormal cardinal functions, constructed from orthogonal polynomials. They are obtained with exact Gauss quadratures supplemented by corrections. In the particular case of Lagrange-Laguerre and shifted Lagrange-Jacobi functions, sum rules provide exact values for matrix elements of 1/x and 1/x^2 as well as for the kinetic energy. From these expressions, new sum rules involving Laguerre and shifted Jacobi zeros and weights are derived. (paper)

  9. Rank distributions: A panoramic macroscopic outlook

    Science.gov (United States)

    Eliazar, Iddo I.; Cohen, Morrel H.

    2014-01-01

    This paper presents a panoramic macroscopic outlook of rank distributions. We establish a general framework for the analysis of rank distributions, which classifies them into five macroscopic "socioeconomic" states: monarchy, oligarchy-feudalism, criticality, socialism-capitalism, and communism. Oligarchy-feudalism is shown to be characterized by discrete macroscopic rank distributions, and socialism-capitalism is shown to be characterized by continuous macroscopic size distributions. Criticality is a transition state between oligarchy-feudalism and socialism-capitalism, which can manifest allometric scaling with multifractal spectra. Monarchy and communism are extreme forms of oligarchy-feudalism and socialism-capitalism, respectively, in which the intrinsic randomness vanishes. The general framework is applied to three different models of rank distributions—top-down, bottom-up, and global—and unveils each model's macroscopic universality and versatility. The global model yields a macroscopic classification of the generalized Zipf law, an omnipresent form of rank distributions observed across the sciences. An amalgamation of the three models establishes a universal rank-distribution explanation for the macroscopic emergence of a prevalent class of continuous size distributions, ones governed by unimodal densities with both Pareto and inverse-Pareto power-law tails.

  10. Importance of intrinsic and non-network contribution in PageRank centrality and its effect on PageRank localization

    OpenAIRE

    Deyasi, Krishanu

    2016-01-01

    PageRank centrality is used by Google for ranking web-pages to present search results for a user query. Here, we have shown that the PageRank value of a vertex also depends on its intrinsic, non-network contribution. If the intrinsic, non-network contributions of the vertices are proportional to their degrees, or are zero, then their PageRank centralities become proportional to their degrees. Some simulations and empirical data are used to support our study. In addition, we have shown that localization ...
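    The degree-proportional claim is easy to check numerically. The sketch below is a bare power iteration for personalized PageRank (no dangling-node handling), with the teleport vector playing the role of the intrinsic, non-network contribution; choosing it proportional to degree on an undirected graph leaves PageRank itself proportional to degree:

```python
def pagerank(adj, alpha=0.85, teleport=None, iters=200):
    """Power iteration for personalized PageRank on an adjacency list.
    `teleport` is the intrinsic (non-network) contribution per vertex;
    uniform if not given. Assumes every vertex has at least one edge."""
    n = len(adj)
    v = teleport if teleport is not None else [1.0 / n] * n
    p = [1.0 / n] * n
    deg = [len(nbrs) for nbrs in adj]
    for _ in range(iters):
        nxt = [(1 - alpha) * vi for vi in v]   # teleport / intrinsic term
        for i, nbrs in enumerate(adj):
            share = alpha * p[i] / deg[i]      # mass pushed along each edge
            for j in nbrs:
                nxt[j] += share
        p = nxt
    return p
```

    On the path graph 0-1-2-3 with teleport vector d_i / 2m, the stationary vector is exactly d_i / 2m, i.e. (1/6, 2/6, 2/6, 1/6): the random-walk term and the intrinsic term both reproduce the degree distribution.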

  11. Test on the effectiveness of the sum over paths approach in favoring the construction of an integrated knowledge of quantum physics in high school

    Directory of Open Access Journals (Sweden)

    Massimiliano Malgieri

    2017-01-01

    In this paper we present the results of a research-based teaching-learning sequence on introductory quantum physics based on Feynman's sum over paths approach in the Italian high school. Our study focuses on students' understanding of two founding ideas of quantum physics, wave-particle duality and the uncertainty principle. In view of recent research reporting the fragmentation of students' mental models of quantum concepts after initial instruction, we collected and analyzed data using the assessment tools provided by knowledge integration theory. Our results on the group of n=14 students who performed the final test indicate that the functional explanation of wave-particle duality provided by the sum over paths approach may be effective in leading students to build consistent mental models of quantum objects, and in providing them with a unified perspective on both the photon and the electron. Results on the uncertainty principle are less clear cut, as the improvements over traditional instruction appear less significant. Given the low number of students in the sample, this work should be interpreted as a case study, and we do not attempt to draw definitive conclusions. However, our study suggests that (i) the sum over paths approach may deserve more attention from researchers and educators as a possible route to introduce basic concepts of quantum physics in high school, and (ii) more research should be focused not only on the correctness of students' mental models on individual concepts, but also on the ability of students to connect different ideas and experiments related to quantum theory in an organized whole.

  12. Ranking as parameter estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Guy, Tatiana Valentine

    2009-01-01

    Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf

  13. Prostatectomy-based validation of combined urine and plasma test for predicting high grade prostate cancer

    DEFF Research Database (Denmark)

    Albitar, Maher; Ma, Wanlong; Lund, Lars

    2018-01-01

    ... of a test using cell-free RNA levels of biomarkers in predicting prostatectomy results. METHODS: This multicenter community-based prospective study was conducted using urine/blood samples collected from 306 patients. All recruited patients were treatment-naïve, without metastases, and had been biopsied ... standard formulas, while comparisons between groups were performed using the Wilcoxon Rank Sum, Kruskal-Wallis, Chi-Square, and Fisher's exact tests. RESULTS: GS as assigned by standard 10-12 core biopsies was 3 + 3 in 90 (29.4%), 3 + 4 in 122 (39.8%), 4 + 3 in 50 (16.3%), and > 4 + 3 in 44 (14.4%) patients. ... CONCLUSIONS: This plasma/urine biomarker test accurately predicts high grade cancer as determined by prostatectomy with a sensitivity of 92-97%, while the sensitivity of core biopsies was 78%.
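    For reference, the Wilcoxon Rank Sum test used for the group comparisons above can be sketched with its standard normal approximation. This stdlib-only version assumes no ties and applies no continuity correction; a real analysis would use a statistics package such as scipy.stats.ranksums:

```python
from statistics import NormalDist

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns (W, z, p), where W is the sum of the ranks of group x in
    the pooled sample. Assumes no ties: tied data would need midranks
    and a variance correction."""
    pooled = sorted(list(x) + list(y))
    rank = {v: i + 1 for i, v in enumerate(pooled)}   # 1-based ranks
    w = sum(rank[v] for v in x)                       # rank sum of group x
    n, m = len(x), len(y)
    mean = n * (n + m + 1) / 2                        # E[W] under H0
    var = n * m * (n + m + 1) / 12                    # Var[W] under H0
    z = (w - mean) / var ** 0.5
    p = 2 * NormalDist().cdf(-abs(z))
    return w, z, p
```

    For instance, with x = [1, 2, 3] against y = [4, 5, 6] the rank sum is W = 6 (the minimum possible), giving z about -1.96 and a two-sided p near 0.05, which also illustrates why the normal approximation is strained for very small groups.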

  14. Fractional cointegration rank estimation

    DEFF Research Database (Denmark)

    Lasak, Katarzyna; Velasco, Carlos

    ... the parameters of the model under the null hypothesis of the cointegration rank r = 1, 2, ..., p-1. This step provides consistent estimates of the cointegration degree, the cointegration vectors, the speed of adjustment to the equilibrium parameters and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional ...

  15. Luttinger and Hubbard sum rules: are they compatible?

    International Nuclear Information System (INIS)

    Matho, K.

    1992-01-01

    A so-called Hubbard sum rule determines the weight of a satellite in fermionic single-particle excitations with strong local repulsion (U→∞). Together with the Luttinger sum rule, this imposes two different energy scales on the remaining finite excitations. In the Hubbard chain, this has been identified microscopically as being due to a separation of spin and charge. (orig.)

  16. A Shuttle Upper Atmosphere Mass Spectrometer /SUMS/ experiment

    Science.gov (United States)

    Blanchard, R. C.; Duckett, R. J.; Hinson, E. W.

    1982-01-01

A magnetic mass spectrometer is currently being adapted to the Space Shuttle Orbiter to provide repeated high altitude atmosphere data to support in situ rarefied flow aerodynamics research, i.e., in the high velocity, low density flight regime. The experiment, called Shuttle Upper Atmosphere Mass Spectrometer (SUMS), is the first attempt to design mass spectrometer equipment for flight vehicle aerodynamic data extraction. The SUMS experiment will provide total freestream atmospheric quantities, principally total mass density, above altitudes at which conventional pressure measurements are valid. Experiment concepts, the expected flight profile, tradeoffs in the design of the total system and flight data reduction plans are discussed. Development plans are based upon a SUMS first flight after the Orbiter initial development flights.

  17. Predicting Rank Attainment in Political Science: What Else besides Publications Affects Promotion?

    Science.gov (United States)

    Hesli, Vicki L.; Lee, Jae Mook; Mitchell, Sara McLaughlin

    2012-01-01

    We report the results of hypotheses tests about the effects of several measures of research, teaching, and service on the likelihood of achieving the ranks of associate and full professor. In conducting these tests, we control for institutional and individual background characteristics. We focus our tests on the link between productivity and…

  18. Use of exp(iS[x]) in the sum over histories

    International Nuclear Information System (INIS)

    Anderson, A.

    1994-01-01

The use of ∑exp(iS[x]) as the generic form for a sum over histories in configuration space is discussed critically and placed in its proper context. The standard derivation of the sum over paths by discretizing the paths is reviewed, and it is shown that the form ∑exp(iS[x]) is justified only for Schroedinger-type systems which are at most second order in the momenta. Extending this derivation to the relativistic free particle, the causal Green's function is expressed as a sum over timelike paths, and the Feynman Green's function is expressed both as a sum over paths which only go one way in time and as a sum over paths which move forward and backward in time. The weighting of the paths is shown not to be exp(iS[x]) in any of these cases. The role of the inner product and the operator ordering of the wave equation in defining the sum over histories is discussed.

  19. Inclusive sum rules and spectra of neutrons at the ISR

    International Nuclear Information System (INIS)

    Grigoryan, A.A.

    1975-01-01

Neutron spectra in pp collisions at ISR energies are studied in the framework of sum rules for inclusive processes. The contributions of protons, π⁻ and K⁻ mesons to the energy sum rule are calculated at √s = 53 GeV. It is shown by means of this sum rule that the spectra of neutrons at the ISR are in contradiction with the spectra of other particles also measured at the ISR.

  20. Diversifying customer review rankings.

    Science.gov (United States)

    Krestel, Ralf; Dokoohaki, Nima

    2015-06-01

E-commerce Web sites owe much of their popularity to consumer reviews accompanying product descriptions. On-line customers spend hours and hours going through heaps of textual reviews to decide which products to buy. At the same time, each popular product has thousands of user-generated reviews, making it impossible for a buyer to read everything. Current approaches to displaying reviews to users or recommending an individual review for a product are based on the recency or helpfulness of each review. In this paper, we present a framework to rank product reviews by optimizing the coverage of the ranking with respect to sentiment or aspects, or by summarizing all reviews with the top-K reviews in the ranking. To accomplish this, we make use of the assigned star rating for a product as an indicator for a review's sentiment polarity and compare bag-of-words (language model) with topic models (latent Dirichlet allocation) as a means of representing aspects. Our evaluation on manually annotated review data from a commercial review Web site demonstrates the effectiveness of our approach, outperforming plain recency ranking by 30% and obtaining best results by combining language and topic model representations. Copyright © 2015 Elsevier Ltd. All rights reserved.
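The coverage-oriented ranking described above can be sketched as a greedy maximum-coverage selection. The review ids and aspect sets below are invented toy data, and the greedy rule is only one simple way to optimize aspect coverage, not necessarily the paper's exact procedure.

```python
def coverage_ranking(reviews, k):
    """Greedy top-k ranking that maximizes aspect coverage.

    `reviews` maps a review id to the set of aspects it mentions; at each
    step we pick the review contributing the most not-yet-covered aspects."""
    covered, ranking = set(), []
    remaining = dict(reviews)
    while remaining and len(ranking) < k:
        best = max(remaining, key=lambda r: len(remaining[r] - covered))
        if not remaining[best] - covered:
            break  # every remaining review is redundant w.r.t. coverage
        covered |= remaining.pop(best)
        ranking.append(best)
    return ranking, covered

reviews = {"r1": {"battery", "screen"},   # hypothetical aspect annotations
           "r2": {"battery"},
           "r3": {"price", "screen"},
           "r4": {"shipping"}}
top, covered = coverage_ranking(reviews, k=2)
```

The greedy rule is the standard (1 - 1/e)-approximation for maximum coverage, which is why it is a natural stand-in here.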

  1. Energy-weighted sum rules for mesons in hot and dense matter

    NARCIS (Netherlands)

    Cabrera, D.; Polls, A.; Ramos, A.; Tolos Rigueiro, Laura

    2009-01-01

    We study energy-weighted sum rules of the pion and kaon propagator in nuclear matter at finite temperature. The sum rules are obtained from matching the Dyson form of the meson propagator with its spectral Lehmann representation at low and high energies. We calculate the sum rules for specific

  2. Hepatitis C virus infection influences the S-methadone metabolite plasma concentration.

    Directory of Open Access Journals (Sweden)

    Shiow-Ling Wu

Full Text Available Heroin-dependent patients typically contract hepatitis C virus (HCV) at a disproportionately high level due to needle exchange. The liver is the primary target organ of HCV infection and also the main organ responsible for drug metabolism. Methadone maintenance treatment (MMT) is a major treatment regimen for opioid dependence. HCV infection may affect methadone metabolism but this has rarely been studied. In our current study, we aimed to test the hypothesis that HCV may influence the methadone dosage and its plasma metabolite concentrations in an MMT cohort from Taiwan. A total of 366 MMT patients were recruited. The levels of plasma hepatitis B virus (HBV), HCV, and human immunodeficiency virus (HIV) antibodies (Ab), liver aspartate aminotransferase (AST) and alanine aminotransferase (ALT), as well as methadone and its metabolite 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine (EDDP), were measured along with the urine morphine concentration and amphetamine screening. Of the 352 subjects in our cohort with HCV test records, 95% were found to be positive for plasma anti-HCV antibody. The liver functional parameters of AST (Wilcoxon Rank-Sum test, P = 0.02) and ALT (Wilcoxon Rank-Sum test, P = 0.04), the plasma methadone concentrations (Wilcoxon Rank-Sum test, P = 0.043), and the R-enantiomer of methadone concentrations (Wilcoxon Rank-Sum test, P = 0.032) were significantly higher in the HCV antibody-positive subjects than in the HCV antibody-negative patients, but not the S-EDDP/methadone dose ratio. The HCV levels correlated with the methadone dose (β = 14.65 and 14.13; P = 0.029 and 0.03) and the S-EDDP/methadone dose ratio (β = -0.41 and -0.40; P = 0.00084 and 0.002) in both univariate and multivariate regression analyses. We conclude that HCV may influence the methadone dose and plasma S-EDDP/methadone dose ratio in MMT patients in this preliminary study.
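Group comparisons like the AST/ALT tests above can be reproduced in miniature. The sketch below implements the Wilcoxon rank-sum test with the usual normal approximation (ties get average ranks; the tie correction to the variance is omitted for brevity), and the samples in it are invented, not the study's data.

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns (W, p), where W is the rank sum of the first sample.
    The tie correction to the variance is omitted for brevity."""
    n, m = len(x), len(y)
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * (n + m)
    i = 0
    while i < n + m:
        j = i
        while j + 1 < n + m and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        for k in range(i, j + 1):          # average rank over each tie group
            ranks[pooled[k][1]] = (i + j) / 2 + 1
        i = j + 1
    W = sum(ranks[:n])                     # rank sum of sample x
    mu = n * (n + m + 1) / 2               # null mean of W
    sigma = math.sqrt(n * m * (n + m + 1) / 12)
    z = (W - mu) / sigma
    return W, math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
```

For very unbalanced group sizes the normal approximation degrades, which is exactly the situation the uniform-approximation argument in the gene-set literature addresses.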

  3. Standardization of I-125. Sum-Peak Coincidence Counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    2011-01-01

I-125 is a nuclide which presents difficulties for standardization. The sum-peak method is one of the procedures used to standardize this radionuclide. Initially NaI(Tl) detectors, and later semiconductor detectors with higher resolution, have been used. This paper describes the different methods based on the sum-peak procedure, and the different expressions used to calculate the activity are deduced. We describe a general procedure for obtaining all of the above equations and many more. We analyze the influence of uncertainties in the parameters used on the uncertainty of the activity. We give a complete example of the propagation of uncertainty and the effects of correlations on the uncertainty of the activity of the sample. High-resolution spectra show an unresolved doublet of 62.0 keV and 62.8 keV. The paper presents two approaches to solve this problem. One is based on the calculation of the area ratio and the sum of peak areas obtained from atomic and nuclear data; in the other we modify the equations so that the sum of the peak areas of the doublet, rather than its components, is present. (Author) 19 refs.
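As a rough illustration of the sum-peak idea, a commonly quoted relation (often attributed to Brinkman) estimates the disintegration rate from the two single-peak areas, the sum-peak area, and the total count rate. Both the choice of this particular formula and the numbers below are assumptions for illustration, not taken from this paper.

```python
def sum_peak_activity(total_rate, n1, n2, n12):
    """Assumed sum-peak relation: A = T + N1*N2/N12.

    total_rate : all counts in the spectrum, per second (T)
    n1, n2     : net areas of the two single photopeaks, per second
    n12        : net area of the sum (coincidence) peak, per second"""
    return total_rate + n1 * n2 / n12

# Invented count rates, loosely evoking the two I-125 peaks and their sum peak
activity = sum_peak_activity(total_rate=5000.0, n1=1800.0, n2=1500.0, n12=600.0)
```

The attraction of the method, as the abstract notes, is that the activity follows from measured areas alone, without knowing the detection efficiencies.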

  4. Standardization of I-125. Sum-Peak Coincidence Counting

    Energy Technology Data Exchange (ETDEWEB)

    Grau Carles, A.; Grau Malonda, A.

    2011-07-01

I-125 is a nuclide which presents difficulties for standardization. The sum-peak method is one of the procedures used to standardize this radionuclide. Initially NaI(Tl) detectors, and later semiconductor detectors with higher resolution, have been used. This paper describes the different methods based on the sum-peak procedure, and the different expressions used to calculate the activity are deduced. We describe a general procedure for obtaining all of the above equations and many more. We analyze the influence of uncertainties in the parameters used on the uncertainty of the activity. We give a complete example of the propagation of uncertainty and the effects of correlations on the uncertainty of the activity of the sample. High-resolution spectra show an unresolved doublet of 62.0 keV and 62.8 keV. The paper presents two approaches to solve this problem. One is based on the calculation of the area ratio and the sum of peak areas obtained from atomic and nuclear data; in the other we modify the equations so that the sum of the peak areas of the doublet, rather than its components, is present. (Author) 19 refs.

  5. Journal Rankings by Health Management Faculty Members: Are There Differences by Rank, Leadership Status, or Area of Expertise?

    Science.gov (United States)

    Menachemi, Nir; Hogan, Tory H; DelliFraine, Jami L

    2015-01-01

Health administration (HA) faculty members publish in a variety of journals, including journals focused on management, economics, policy, and information technology. HA faculty members are evaluated on the basis of the quality and quantity of their journal publications. However, it is unclear how perceptions of these journals vary by subdiscipline, department leadership role, or faculty rank. It is also not clear how perceptions of journals may have changed over the past decade since the last evaluation of journal rankings in the field was published. The purpose of the current study is to examine how respondents rank journals in the field of HA, as well as the variation in perception by academic rank, department leadership status, and area of expertise. Data were drawn from a survey of HA faculty members at U.S. universities, which was completed in 2012. Different journal ranking patterns were noted for faculty members of different subdisciplines. The health management-oriented journals (Health Care Management Review and Journal of Healthcare Management) were ranked higher than in previous research, suggesting that journal ranking perceptions may have changed over the intervening decade. Few differences in perceptions were noted by academic rank, but we found that department chairs were more likely than others to select Health Affairs in their top three most prestigious journals (β = 0.768). Perceptions of journal prestige varied between department chairs and untenured faculty in different disciplines, and this perceived difference could have implications for promotion and tenure decisions.

  6. A new concept for stainless steels ranking upon the resistance to cavitation erosion

    Science.gov (United States)

    Bordeasu, I.; Popoviciu, M. O.; Salcianu, L. C.; Ghera, C.; Micu, L. M.; Badarau, R.; Iosif, A.; Pirvulescu, L. D.; Podoleanu, C. E.

    2017-01-01

At present, materials are ranked for their resistance to cavitation erosion by laboratory tests summarized in the characteristic curves of mean depth of erosion versus time, MDE(t), and of mean depth of erosion rate versus time, MDER(t). In previous papers, Bordeasu and co-workers gave procedures for establishing exponential equations that represent these curves with minimum scatter of the experimentally obtained results. For a given material, both exponential equations MDE(t) and MDER(t) share the same values of the scale and shape parameters. For ranking materials it is sometimes important to establish a single figure of merit. Until now, three such numbers have been used in the Timisoara Polytechnic University Cavitation Laboratory: the stable value of the curve MDER(t), the resistance to cavitation erosion (Rcav ≡ 1/MDERstable), and the normalized cavitation resistance Rns, which is the ratio between vs = MDERstable for the analyzed material and vse = MDERse, the mean depth erosion rate for the steel OH12NDL (Rns = vs/vse). OH12NDL is a material used for manufacturing the blades of numerous Kaplan turbines in Romania, for which both cavitation erosion laboratory tests and field measurements of cavitation erosion are available. In the present paper we recommend a new method for ranking materials by cavitation erosion resistance. This method uses the scale and shape parameters of the exponential equations representing the characteristic cavitation erosion curves. So far the method has been applied only to stainless steels. The experimental results show that the scale parameter provides an excellent basis for ranking stainless steels. In the future this kind of ranking will also be tested for other materials, especially bronzes used for manufacturing ship propellers.

  7. Algebraic and computational aspects of real tensor ranks

    CERN Document Server

    Sakata, Toshio; Miyazaki, Mitsuhiro

    2016-01-01

This book provides comprehensive summaries of theoretical (algebraic) and computational aspects of tensor ranks, maximal ranks, and typical ranks, over the real number field. Although tensor ranks have often been studied over the complex number field, it should be emphasized that this book treats real tensor ranks, which have direct applications in statistics. The book provides several interesting ideas, including determinant polynomials, determinantal ideals, absolutely nonsingular tensors, absolutely full column rank tensors, and their connection to bilinear maps and Hurwitz-Radon numbers. In addition to detailed reviews of methods to determine real tensor ranks, global theories such as the Jacobian method are also reviewed in detail. The book includes as well an accessible and comprehensive introduction to the mathematical background, with basics of positive polynomials and calculations using the Groebner basis. Furthermore, this book provides insights into numerical methods of finding tensor ranks through...

  8. Two non-parametric methods for derivation of constraints from radiotherapy dose–histogram data

    International Nuclear Information System (INIS)

    Ebert, M A; Kennedy, A; Joseph, D J; Gulliford, S L; Buettner, F; Foo, K; Haworth, A; Denham, J W

    2014-01-01

    Dose constraints based on histograms provide a convenient and widely-used method for informing and guiding radiotherapy treatment planning. Methods of derivation of such constraints are often poorly described. Two non-parametric methods for derivation of constraints are described and investigated in the context of determination of dose-specific cut-points—values of the free parameter (e.g., percentage volume of the irradiated organ) which best reflect resulting changes in complication incidence. A method based on receiver operating characteristic (ROC) analysis and one based on a maximally-selected standardized rank sum are described and compared using rectal toxicity data from a prostate radiotherapy trial. Multiple test corrections are applied using a free step-down resampling algorithm, which accounts for the large number of tests undertaken to search for optimal cut-points and the inherent correlation between dose–histogram points. Both methods provide consistent significant cut-point values, with the rank sum method displaying some sensitivity to the underlying data. The ROC method is simple to implement and can utilize a complication atlas, though an advantage of the rank sum method is the ability to incorporate all complication grades without the need for grade dichotomization. (note)
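The rank-sum variant of cut-point selection can be sketched directly: for every candidate threshold on the dose-histogram parameter, standardize the rank sum of the outcome in the low-dose group and keep the threshold with the largest |z|. The data below are invented, and the multiple-testing resampling correction described in the paper is omitted.

```python
import math

def best_cutpoint(values, outcome):
    """Maximally selected standardized rank-sum statistic.

    `values` is the free dose-histogram parameter per patient, `outcome`
    the complication grade. Ties in the outcome get average ranks; the
    tie correction to the variance is omitted for brevity."""
    n = len(values)
    order = sorted(range(n), key=lambda i: outcome[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and outcome[order[j + 1]] == outcome[order[i]]:
            j += 1
        for k in range(i, j + 1):          # average rank over the tie group
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    best = (0.0, None)
    for c in sorted(set(values))[:-1]:     # cuts between distinct values
        low = [ranks[i] for i in range(n) if values[i] <= c]
        m = len(low)
        mu = m * (n + 1) / 2               # null mean of the low-group rank sum
        sigma = math.sqrt(m * (n - m) * (n + 1) / 12)
        if sigma == 0:
            continue
        z = abs(sum(low) - mu) / sigma
        if z > best[0]:
            best = (z, c)
    return best                            # (|z|, cut-point)
```

Because the search maximizes over many correlated candidate cuts, the raw |z| is optimistically biased, which is why the paper pairs this statistic with a free step-down resampling correction.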

  9. Light-cone sum rules: A SCET-based formulation

    CERN Document Server

    De Fazio, F; Hurth, Tobias; Feldmann, Th.

    2007-01-01

We describe the construction of light-cone sum rules (LCSRs) for exclusive $B$-meson decays into light energetic hadrons from correlation functions within soft-collinear effective theory (SCET). As an example, we consider the SCET sum rule for the $B \to \pi$ transition form factor at large recoil, including radiative corrections from hard-collinear loop diagrams at first order in the strong coupling constant.

  10. Complex-energy approach to sum rules within nuclear density functional theory

    Science.gov (United States)

    Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; Olsen, Erik

    2015-04-01

    Background: The linear response of the nucleus to an external field contains unique information about the effective interaction, the correlations governing the behavior of the many-body system, and the properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or the nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. Purpose: To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation (QRPA). Methods: To compute sum rules, we carry out contour integration of the response function in the complex-energy plane. We benchmark our results against the conventional matrix formulation of the QRPA theory, the Thouless theorem for the energy-weighted sum rule, and the dielectric theorem for the inverse-energy-weighted sum rule. Results: We derive the sum-rule expressions from the contour integration of the complex-energy FAM. We demonstrate that calculated sum-rule values agree with those obtained from the matrix formulation of the QRPA. We also discuss the applicability of both the Thouless theorem about the energy-weighted sum rule and the dielectric theorem for the inverse-energy-weighted sum rule to nuclear density functional theory in cases when the EDF is not based on a Hamiltonian. Conclusions: The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. 
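The contour-integration idea behind the FAM sum rules can be illustrated on a toy discrete spectrum: integrating ω^p R(ω) around the positive-energy poles of a model response function recovers the energy-weighted (p = 1) and inverse-energy-weighted (p = -1) moments. The pole positions and strengths below are invented, and the trapezoidal rule stands in for whatever quadrature a production code would use.

```python
import cmath

# Toy spectral model: R(w) = sum_k f2[k] * (1/(w - wk) - 1/(w + wk)),
# with poles at +/- wk and strengths f2[k] (all values invented).
wk = [1.0, 2.5, 4.0]
f2 = [0.3, 0.7, 0.2]

def R(w):
    return sum(f * (1 / (w - e) - 1 / (w + e)) for e, f in zip(wk, f2))

def moment(p, center=2.5, radius=2.2, n=4000):
    """m_p = (1/2*pi*i) * closed contour integral of w**p * R(w), taken on a
    circle enclosing only the positive-energy poles (trapezoidal rule)."""
    total = 0j
    for j in range(n):
        t = 2 * cmath.pi * j / n
        w = center + radius * cmath.exp(1j * t)
        dw = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / n)
        total += w ** p * R(w) * dw
    return (total / (2j * cmath.pi)).real

m1 = moment(1)        # energy-weighted sum rule: sum_k wk * f2[k]
m_inv = moment(-1)    # inverse-energy-weighted sum rule: sum_k f2[k] / wk
```

By the residue theorem the integral equals Σ_k ω_k^p |f_k|², so the quadrature can be checked against the direct sum; the trapezoidal rule converges exponentially on such a closed contour.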

  11. Linear Subspace Ranking Hashing for Cross-Modal Retrieval.

    Science.gov (United States)

    Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A

    2017-09-01

Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures, which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied to any specific form of loss function, which is typical for existing cross-modal hashing methods; rather, we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely-used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-art methods with only moderate training and testing time.
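A minimal flavor of rank-based hash functions can be sketched with random, rather than learned, projection groups: each group hashes a feature vector to the index of its largest projected component, a Winner-Take-All-style code. This is not the paper's jointly learned subspaces, only an illustration of the scale invariance such codes provide.

```python
import random

def rank_hash(x, projections):
    """One code symbol per projection group: the index of the largest
    projected component. Invariant to any positive rescaling of x."""
    code = []
    for P in projections:
        proj = [sum(p * xi for p, xi in zip(row, x)) for row in P]
        code.append(max(range(len(proj)), key=proj.__getitem__))
    return code

random.seed(0)
# 8 groups of 3 random directions in R^4 (random stand-ins for learned subspaces)
projections = [[[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
               for _ in range(8)]
x = [0.2, -0.5, 1.0, 0.3]
code = rank_hash(x, projections)
```

Comparing two codes symbol-by-symbol gives a Hamming-style distance, and the argmax depends only on the ordering of the projections, which is the rank-correlation connection the abstract refers to.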

  12. Augmenting the Deliberative Method for Ranking Risks.

    Science.gov (United States)

    Susel, Irving; Lasley, Trace; Montezemolo, Mark; Piper, Joel

    2016-01-01

    The Department of Homeland Security (DHS) characterized and prioritized the physical cross-border threats and hazards to the nation stemming from terrorism, market-driven illicit flows of people and goods (illegal immigration, narcotics, funds, counterfeits, and weaponry), and other nonmarket concerns (movement of diseases, pests, and invasive species). These threats and hazards pose a wide diversity of consequences with very different combinations of magnitudes and likelihoods, making it very challenging to prioritize them. This article presents the approach that was used at DHS to arrive at a consensus regarding the threats and hazards that stand out from the rest based on the overall risk they pose. Due to time constraints for the decision analysis, it was not feasible to apply multiattribute methodologies like multiattribute utility theory or the analytic hierarchy process. Using a holistic approach was considered, such as the deliberative method for ranking risks first published in this journal. However, an ordinal ranking alone does not indicate relative or absolute magnitude differences among the risks. Therefore, the use of the deliberative method for ranking risks is not sufficient for deciding whether there is a material difference between the top-ranked and bottom-ranked risks, let alone deciding what the stand-out risks are. To address this limitation of ordinal rankings, the deliberative method for ranking risks was augmented by adding an additional step to transform the ordinal ranking into a ratio scale ranking. This additional step enabled the selection of stand-out risks to help prioritize further analysis. © 2015 Society for Risk Analysis.

  13. Advances in ranking and selection, multiple comparisons, and reliability methodology and applications

    CERN Document Server

    Balakrishnan, N; Nagaraja, HN

    2007-01-01

    S. Panchapakesan has made significant contributions to ranking and selection and has published in many other areas of statistics, including order statistics, reliability theory, stochastic inequalities, and inference. Written in his honor, the twenty invited articles in this volume reflect recent advances in these areas and form a tribute to Panchapakesan's influence and impact on these areas. Thematically organized, the chapters cover a broad range of topics from: Inference; Ranking and Selection; Multiple Comparisons and Tests; Agreement Assessment; Reliability; and Biostatistics. Featuring

  14. Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.

    Science.gov (United States)

    Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo

    2018-04-01

    In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
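The distinction between upper and lower values that drives the convergence analysis can already be seen in a static matrix game restricted to pure strategies; the payoff matrices below are invented examples, not the paper's nonlinear systems.

```python
def lower_value(A):
    """Maximin over pure strategies: the maximizing player's security level."""
    return max(min(row) for row in A)

def upper_value(A):
    """Minimax over pure strategies: the minimizing player's security level."""
    return min(max(row[j] for row in A) for j in range(len(A[0])))

A = [[3, 1],   # no pure saddle point: lower value 1 < upper value 2
     [0, 2]]
B = [[2, 3],   # pure saddle point at (row 0, column 0): both values equal 2
     [0, 1]]
lo, up = lower_value(A), upper_value(A)
```

When the two values coincide (matrix B) the game has a saddle point and a single optimum; when they differ (matrix A) only the separate upper and lower performance indices exist, mirroring the two cases the abstract distinguishes.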

  15. The Subset Sum game.

    Science.gov (United States)

    Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim

    2014-03-16

    In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an [Formula: see text]-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem.
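One of the natural heuristics mentioned, greedy selection, can be simulated directly. The item weights and capacity below are invented, and both agents are assumed to play greedily; turn-passing when an agent has no fitting item is my reading of the rules, not stated in the abstract.

```python
def play(items_a, items_b, capacity):
    """Simulate the Subset Sum game with both agents playing greedily:
    on each turn, pick your largest remaining item that still fits."""
    hands = [sorted(items_a), sorted(items_b)]
    totals = [0, 0]
    turn = 0
    while True:
        hand = hands[turn]
        pick = next((w for w in reversed(hand) if w <= capacity), None)
        if pick is not None:
            hand.remove(pick)
            capacity -= pick
            totals[turn] += pick
        if all(w > capacity for h in hands for w in h):
            break                      # no item of either agent fits any more
        turn = 1 - turn                # otherwise the other agent moves
    return totals
```

With capacity 10, agent A holding {6, 5, 4} and agent B holding {7, 3, 2}, greedy play packs 6 for A and 3 for B and then stalls with one unit of capacity left, illustrating how myopic choices can strand capacity.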

  16. Population models and simulation methods: The case of the Spearman rank correlation.

    Science.gov (United States)

    Astivia, Oscar L Olvera; Zumbo, Bruno D

    2017-11-01

    The purpose of this paper is to highlight the importance of a population model in guiding the design and interpretation of simulation studies used to investigate the Spearman rank correlation. The Spearman rank correlation has been known for over a hundred years to applied researchers and methodologists alike and is one of the most widely used non-parametric statistics. Still, certain misconceptions can be found, either explicitly or implicitly, in the published literature because a population definition for this statistic is rarely discussed within the social and behavioural sciences. By relying on copula distribution theory, a population model is presented for the Spearman rank correlation, and its properties are explored both theoretically and in a simulation study. Through the use of the Iman-Conover algorithm (which allows the user to specify the rank correlation as a population parameter), simulation studies from previously published articles are explored, and it is found that many of the conclusions purported in them regarding the nature of the Spearman correlation would change if the data-generation mechanism better matched the simulation design. More specifically, issues such as small sample bias and lack of power of the t-test and r-to-z Fisher transformation disappear when the rank correlation is calculated from data sampled where the rank correlation is the population parameter. A proof for the consistency of the sample estimate of the rank correlation is shown as well as the flexibility of the copula model to encompass results previously published in the mathematical literature. © 2017 The British Psychological Society.
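The gap between the sample statistic and a population parameter can be made concrete: the classic d² formula gives the sample Spearman correlation, while under a bivariate Gaussian copula the population value is ρ_S = (6/π) arcsin(ρ/2). The latter is the kind of population model the paper argues simulation designs should be built around; the sketch assumes no ties.

```python
import math

def spearman(x, y):
    """Sample Spearman correlation via the classic d^2 formula
    (assumes no ties, as in the textbook definition)."""
    n = len(x)
    rx = {v: i + 1 for i, v in enumerate(sorted(x))}   # ranks of x
    ry = {v: i + 1 for i, v in enumerate(sorted(y))}   # ranks of y
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def population_spearman_gaussian(rho):
    """Population Spearman correlation when (X, Y) follow a bivariate
    Gaussian copula with Pearson correlation rho."""
    return 6 / math.pi * math.asin(rho / 2)
```

Sampling data whose population rank correlation equals a target value (e.g., via the Iman-Conover algorithm the paper uses) and then estimating with `spearman` is what makes the simulation design match the estimand.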

  17. Sums of Generalized Harmonic Series

    Indian Academy of Sciences (India)

Sums of Generalized Harmonic Series: For Kids from Five to Fifteen. Zurab Silagadze. General Article, Resonance – Journal of Science Education, Volume 20, Issue 9, September 2015, pp. 822-843.

  18. QV modal distance displacement - a criterion for contingency ranking

    Energy Technology Data Exchange (ETDEWEB)

    Rios, M.A.; Sanchez, J.L.; Zapata, C.J. [Universidad de Los Andes (Colombia). Dept. of Electrical Engineering], Emails: mrios@uniandes.edu.co, josesan@uniandes.edu.co, cjzapata@utp.edu.co

    2009-07-01

This paper proposes a new methodology using concepts of fast decoupled load flow, modal analysis, and contingency ranking, where the impact of each contingency is measured hourly, taking into account the influence of each contingency on the mathematical model of the system, i.e. the Jacobian matrix. The method computes the displacement of the eigenvalues of the reduced Jacobian matrix used in voltage stability analysis as a criterion for contingency ranking, considering the fact that the lowest eigenvalue in the normal operating condition is not the same lowest eigenvalue in the N-1 contingency condition. This is done using all branches in the system and specific branches according to the IBPF index. The test system used is the IEEE 118-node system. (author)

  19. Model of Decision Making through Consensus in Ranking Case

    Science.gov (United States)

    Tarigan, Gim; Darnius, Open

    2018-01-01

The basic problem in determining a ranking consensus is to combine rankings decided by two or more Decision Makers (DMs) into a single consensus ranking. DMs are frequently asked to present their preferences over a group of objects in terms of ranks, for example to determine a new project, a new product, a candidate in an election, and so on. Ranking problems can be classified into two major categories, namely cardinal and ordinal rankings. The objective of the study is to obtain the ranking consensus by applying some algorithms and methods. The algorithms and methods used in this study were the partial algorithm, optimal ranking consensus, and the BAK (Borda-Kendall) model. A method proposed as an alternative in ranking consensus is the Weighted Distance Forward-Backward (WDFB) method, which gave a slightly different ranking-consensus result compared to the result of the example solved by Cook, et al. (2005).
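The Borda part of the BAK model can be sketched as a plain Borda count; the decision makers' rankings below are invented toy data, not the Cook et al. example.

```python
def borda_consensus(rankings):
    """Borda count consensus: with m candidates, the candidate ranked in
    position pos (0-based) scores m - 1 - pos points; the consensus orders
    candidates by total score (ties broken alphabetically)."""
    m = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (m - 1 - pos)
    return sorted(scores, key=lambda c: (-scores[c], c)), scores

rankings = [["a", "b", "c"],   # three DMs ranking three candidates
            ["a", "c", "b"],
            ["b", "a", "c"]]
consensus, scores = borda_consensus(rankings)
```

Distance-based methods like WDFB instead pick the consensus minimizing a (weighted) distance to all input rankings, which can disagree with the Borda order, as the abstract's comparison suggests.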

  20. Individual recognition of social rank and social memory performance depends on a functional circadian system.

    Science.gov (United States)

    Müller, L; Weinert, D

    2016-11-01

In a natural environment, social abilities of an animal are important for its survival. In particular, it must recognize its own social rank and the social rank of a conspecific and have a good social memory. While the role of the circadian system for object and spatial recognition and memory is well known, the impact of social rank and circadian disruptions on social recognition and memory has not been investigated so far. In the present study, individual recognition of social rank and social memory performance of Djungarian hamsters displaying different circadian phenotypes were investigated. Wild type (WT) animals show a clear and well-synchronized daily activity rhythm, whereas in arrhythmic (AR) hamsters, the suprachiasmatic nuclei (SCN) do not generate a circadian signal. The aim of the study was to investigate putative consequences of these deteriorations in the circadian system for the animals' cognitive abilities. Hamsters were bred and kept under standardized housing conditions with food and water ad libitum and a 14 h light : 10 h dark lighting regimen. Experimental animals were assigned to different groups (WT and AR) according to their activity pattern obtained by means of infrared motion sensors. Before the experiments, the animals were allowed to develop a dominant-subordinate relationship in a dyadic encounter. Experiment 1 dealt with individual recognition of social rank. Subordinate and dominant hamsters were tested in an open arena for their behavioral responses towards a familiar (known from the agonistic encounters) or an unfamiliar hamster (from another agonistic encounter) which had the same or an opposite social rank. The investigation time depended on the social rank of the WT subject hamster and its familiarity with the stimulus animal. Both subordinate and dominant WT hamsters preferred an unfamiliar subordinate stimulus animal. In contrast, neither subordinate nor dominant AR hamsters preferred any of the stimulus animals. Thus, disruptions in circadian

  1. 3He electron scattering sum rules

    International Nuclear Information System (INIS)

    Kim, Y.E.; Tornow, V.

    1982-01-01

    Electron scattering sum rules for 3He are derived with a realistic ground-state wave function. The theoretical results are compared with the experimentally measured integrated cross sections. (author)

  2. The Privilege of Ranking: Google Plays Ball.

    Science.gov (United States)

    Wiggins, Richard

    2003-01-01

    Discussion of ranking systems used in various settings, including college football and academic admissions, focuses on the Google search engine. Explains the PageRank mathematical formula that scores Web pages by connecting the number of links; limitations, including authenticity and accuracy of ranked Web pages; relevancy; adjusting algorithms;…
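    The PageRank formula discussed in this record can be illustrated with a short power-iteration sketch. The four-page link graph and the damping factor 0.85 below are hypothetical illustration values, not data from the record:

    ```python
    # Minimal PageRank power iteration over a tiny hypothetical link graph.
    def pagerank(links, d=0.85, iters=100):
        """links: dict mapping page -> list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iters):
            new = {p: (1.0 - d) / n for p in pages}
            for p, outs in links.items():
                if outs:
                    share = d * rank[p] / len(outs)
                    for q in outs:
                        new[q] += share
                else:  # dangling page: spread its rank uniformly
                    for q in pages:
                        new[q] += d * rank[p] / n
            rank = new
        return rank

    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    ranks = pagerank(links)
    print(max(ranks, key=ranks.get))  # "C" attracts the most link weight
    ```

    Note how C, which receives links from three pages, outranks A even though A is C's only out-neighbor; rank flows along links rather than simply counting them.
    
    
    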

  3. Probabilistic relation between In-Degree and PageRank

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2008-01-01

    This paper presents a novel stochastic model that explains the relation between power laws of In-Degree and PageRank. PageRank is a popularity measure designed by Google to rank Web pages. We model the relation between PageRank and In-Degree through a stochastic equation, which is inspired by the

  4. A toolbox for Harmonic Sums and their analytic continuations

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Schneider, Carsten [RISC, J. Kepler University, Linz (Austria); Bluemlein, Johannes [DESY, Zeuthen (Germany)

    2010-07-01

    The package HarmonicSums implemented in the computer algebra system Mathematica is presented. It supports higher loop calculations in QCD and QED to represent single-scale quantities like anomalous dimensions and Wilson coefficients. The package allows one to reduce general harmonic sums using their algebraic and structural relations. We provide a general framework for these reductions and the explicit representations up to weight w=8. For use in experimental analyses we also provide an analytic formalism to continue the harmonic sums from their integer arguments into the complex plane, which includes their recursions and asymptotic representations. The main ideas are illustrated by specific examples.

  5. Robust Tracking with Discriminative Ranking Middle-Level Patches

    Directory of Open Access Journals (Sweden)

    Hong Liu

    2014-04-01

    Full Text Available The appearance model has been shown to be essential for robust visual tracking, since it is the basic criterion for locating targets in video sequences. Though existing tracking-by-detection algorithms have shown great promise, they still suffer from the drift problem, which is caused by updating appearance models. In this paper, we propose a new appearance model composed of ranking middle-level patches to capture more object distinctiveness than traditional tracking-by-detection models. Targets and backgrounds are represented by both low-level bottom-up features and high-level top-down patches, which complement each other. Bottom-up features are defined at the pixel level, and each feature gets its discrimination score through a selective feature attention mechanism. In top-down feature extraction, rectangular patches are ranked according to their bottom-up discrimination scores, by which all of them are clustered into irregular patches, named ranking middle-level patches. In addition, at the stage of classifier training, the online random forests algorithm is specially refined to reduce the drift problem. Experiments on challenging public datasets and our test videos demonstrate that our approach can effectively prevent tracker drift and obtain competitive performance in visual tracking.

  6. PageRank, HITS and a unified framework for link analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Chris; He, Xiaofeng; Husbands, Parry; Zha, Hongyuan; Simon, Horst

    2001-10-01

    Two popular webpage ranking algorithms are HITS and PageRank. HITS emphasizes mutual reinforcement between authority and hub webpages, while PageRank emphasizes hyperlink weight normalization and web surfing based on random walk models. We systematically generalize/combine these concepts into a unified framework. The ranking framework contains a large algorithm space; HITS and PageRank are two extreme ends in this space. We study several normalized ranking algorithms which are intermediate between HITS and PageRank, and obtain closed-form solutions. We show that, to first order approximation, all ranking algorithms in this framework, including PageRank and HITS, lead to the same ranking, which is highly correlated with ranking by indegree. These results support the notion that in web resource ranking indegree and outdegree are of fundamental importance. Rankings of webgraphs of different sizes and queries are presented to illustrate our analysis.
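    The mutual reinforcement between hubs and authorities that HITS is built on can be sketched as alternating update steps with normalization. The link graph below is a made-up toy example, not data from this record:

    ```python
    import math

    def hits(links, iters=50):
        """HITS sketch: alternate authority/hub updates with L2 normalization."""
        pages = set(links) | {q for outs in links.values() for q in outs}
        hub = {p: 1.0 for p in pages}
        auth = {p: 1.0 for p in pages}
        for _ in range(iters):
            # authority score: sum of hub scores of pages linking to p
            auth = {p: sum(hub[q] for q, outs in links.items() if p in outs)
                    for p in pages}
            norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
            auth = {p: v / norm for p, v in auth.items()}
            # hub score: sum of authority scores of pages p links to
            hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
            norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
            hub = {p: v / norm for p, v in hub.items()}
        return hub, auth

    links = {"A": ["B", "C"], "B": ["C"], "D": ["C"]}
    hub, auth = hits(links)
    print(max(auth, key=auth.get), max(hub, key=hub.get))  # C is the top authority, A the top hub
    ```

    C collects the most inlinks and so becomes the strongest authority, while A, which points at both good authorities, becomes the strongest hub; this is the reinforcement loop the abstract describes.
    
    
    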

  7. Virtual drug screen schema based on multiview similarity integration and ranking aggregation.

    Science.gov (United States)

    Kang, Hong; Sheng, Zhen; Zhu, Ruixin; Huang, Qi; Liu, Qi; Cao, Zhiwei

    2012-03-26

    Current drug virtual screen (VS) methods mainly fall into two categories, i.e., ligand/target structure-based virtual screening, and methods utilizing protein-ligand interaction fingerprint information derived from the large number of complex structures. Since the former focuses on one-sided information while the latter uses the whole complex structure, the two are complementary and can be boosted by each other. However, a common problem faced here is how to present a comprehensive understanding and evaluation of the various virtual screen results derived from various VS methods. Furthermore, there is still an urgent need for developing an efficient approach to fully integrate various VS methods from a comprehensive multiview perspective. In this study, our virtual screen schema based on multiview similarity integration and ranking aggregation was tested comprehensively with statistical evaluations, providing several novel and useful clues on how to perform drug VS from multiple heterogeneous data sources. (1) 18 complex structures of HIV-1 protease with ligands from the PDB were curated as a test data set and the VS was performed with five different drug representations. Ritonavir (1HXW) was selected as the query in VS and the weighted ranks of the query results were aggregated from multiple views through four similarity integration approaches. (2) Further, one of the ranking aggregation methods was used to integrate the similarity ranks calculated by gene ontology (GO) fingerprint and structural fingerprint on the data set from the Connectivity Map, and two typical HDAC and HSP90 inhibitors were chosen as the queries. The results show that rank aggregation can enhance the result of similarity searching in VS when two or more descriptions are involved and provide a more reasonable similarity rank result. Our study shows that integrated VS based on multiple data fusion can achieve a remarkably better performance compared to that from individual ones and

  8. Generalized PageRank on Directed Configuration Networks

    NARCIS (Netherlands)

    Chen, Ningyuan; Litvak, Nelli; Olvera-Cravioto, Mariana

    2017-01-01

    Note: formula is not displayed correctly. This paper studies the distribution of a family of rankings, which includes Google’s PageRank, on a directed configuration model. In particular, it is shown that the distribution of the rank of a randomly chosen node in the graph converges in distribution to

  9. A Linear Time Algorithm for the k Maximal Sums Problem

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2007-01-01

    Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m²·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
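    For background, the basic one-dimensional maximum sum problem that this record generalizes has a classic linear-time solution, Kadane's algorithm. A minimal sketch for illustration (this is not the paper's O(n + k) algorithm for the k maximal sums problem):

    ```python
    def max_sum_subvector(xs):
        """Kadane's algorithm: largest sum over contiguous sub-vectors, O(n).

        cur is the best sum of a sub-vector ending at the current element;
        best is the best sum seen so far anywhere in the sequence.
        """
        best = cur = xs[0]
        for x in xs[1:]:
            cur = max(x, cur + x)
            best = max(best, cur)
        return best

    # Bentley's classic example sequence; the maximum sub-vector is
    # [59, 26, -53, 58, 97] with sum 187.
    print(max_sum_subvector([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
    ```

    The k maximal sums problem asks for the k best such sub-vectors rather than only the single best, which is what makes the O(n + k) bound of the paper non-trivial.
    
    
    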

  10. Sum Rules, Classical and Quantum - A Pedagogical Approach

    Science.gov (United States)

    Karstens, William; Smith, David Y.

    2014-03-01

    Sum rules in the form of integrals over the response of a system to an external probe provide general analytical tools for both experiment and theory. For example, the celebrated f-sum rule gives a system's plasma frequency as an integral over the optical-dipole absorption spectrum regardless of the specific spectral distribution. Moreover, this rule underlies Smakula's equation for the number density of absorbers in a sample in terms of the area under their absorption bands. Commonly such rules are derived from quantum-mechanical commutation relations, but many are fundamentally classical (independent of ℏ) and so can be derived from more transparent mechanical models. We have exploited this to illustrate the fundamental role of inertia in the case of optical sum rules. Similar considerations apply to sum rules in many other branches of physics. Thus, the ``attenuation integral theorems'' of ac circuit theory reflect the ``inertial'' effect of Lenz's Law in inductors or the potential energy ``storage'' in capacitors. These considerations are closely related to the fact that the real and imaginary parts of a response function cannot be specified independently, a result that is encapsulated in the Kramers-Kronig relations. Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.

  11. Counter-ions at single charged wall: Sum rules.

    Science.gov (United States)

    Samaj, Ladislav

    2013-09-01

    For inhomogeneous classical Coulomb fluids in thermal equilibrium, like the jellium or the two-component Coulomb gas, there exists a variety of exact sum rules which relate the particle one-body and two-body densities. The necessary condition for these sum rules is that the Coulomb fluid possesses good screening properties, i.e. the particle correlation functions or the averaged charge inhomogeneity, say close to a wall, exhibit a short-range (usually exponential) decay. In this work, we study equilibrium statistical mechanics of an electric double layer with counter-ions only, i.e. a globally neutral system of equally charged point-like particles in the vicinity of a plain hard wall carrying a fixed uniform surface charge density of opposite sign. At large distances from the wall, the one-body and two-body counter-ion densities go to zero slowly according to the inverse-power law. In spite of the absence of screening, all known sum rules are shown to hold for two exactly solvable cases of the present system: in the weak-coupling Poisson-Boltzmann limit (in any spatial dimension larger than one) and at a special free-fermion coupling constant in two dimensions. This fact indicates an extended validity of the sum rules and provides a consistency check for reasonable theoretical approaches.

  12. Motion in fourth-rank gravity

    International Nuclear Information System (INIS)

    Tapia, V.

    1992-04-01

    Recently we have explored the consequences of describing the metric properties of our universe through a quartic line element. In this geometry the natural object is a fourth-rank metric, i.e., a tensor with four indices. Based on this geometry we constructed a simple field theory for the gravitational field. The field equations coincide with the Einstein field equations in the vacuum case. This fact, however, does not guarantee the observational equivalence of both theories since one must still verify that, as a consequence of the field equations, test particles move along geodesics. This letter is aimed at establishing this result. (author). 7 refs

  13. Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking

    KAUST Repository

    Huang, Huang

    2017-07-16

    separability and full symmetry. We formulate test functions as functions of temporal lags for each pair of spatial locations and develop a rank-based testing procedure induced by functional data depth for assessing these properties. The method is illustrated using simulated data from widely used spatio-temporal covariance models, as well as real datasets from weather stations and climate model outputs.

  14. OutRank

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Steinhausen, Uwe

    2008-01-01

    Outlier detection is an important data mining task for consistency checks, fraud detection, etc. Binary decision making on whether or not an object is an outlier is not appropriate in many applications and moreover hard to parametrize. Thus, recently, methods for outlier ranking have been proposed...

  15. Ranking Theory and Conditional Reasoning.

    Science.gov (United States)

    Skovgaard-Olsen, Niels

    2016-05-01

    Ranking theory is a formal epistemology that has been developed in over 600 pages in Spohn's recent book The Laws of Belief, which aims to provide a normative account of the dynamics of beliefs that presents an alternative to current probabilistic approaches. It has long been well received in the AI community, but it has not yet found application in experimental psychology. The purpose of this paper is to derive clear, quantitative predictions by exploiting a parallel between ranking theory and a statistical model called logistic regression. This approach is illustrated by the development of a model for the conditional inference task using Spohn's (2013) ranking theoretic approach to conditionals. Copyright © 2015 Cognitive Science Society, Inc.

  16. Moessbauer sum rules for use with synchrotron sources

    International Nuclear Information System (INIS)

    Lipkin, H.J.

    1995-01-01

    The availability of tunable synchrotron radiation sources with millivolt resolution has opened prospects for exploring dynamics of complex systems with Moessbauer spectroscopy. Early Moessbauer treatments and moment sum rules are extended to treat inelastic excitations measured in synchrotron experiments, with emphasis on the unique conditions absent in neutron scattering and arising in resonance scattering: prompt absorption, delayed emission, recoil-free transitions, and coherent forward scattering. The first moment sum rule normalizes the inelastic spectrum. Sum rules obtained for higher moments include the third moment, proportional to the second derivative of the potential acting on the Moessbauer nucleus and independent of temperature in the harmonic approximation. Interesting information may be obtained on the behavior of the potential acting on this nucleus in samples not easily investigated with neutron scattering, e.g., small samples, thin films, time-dependent structures, and amorphous-metallic high pressure phases

  17. The structure of completely positive matrices according to their CP-rank and CP-plus-rank

    NARCIS (Netherlands)

    Dickinson, Peter James Clair; Bomze, Immanuel M.; Still, Georg J.

    2015-01-01

    We study the topological properties of the cp-rank operator $\mathrm{cp}(A)$ and the related cp-plus-rank operator $\mathrm{cp}^+(A)$ (which is introduced in this paper) in the set $\mathcal{S}^n$ of symmetric $n\times n$-matrices. For the set of completely positive matrices, $\mathcal{CP}^n$, we

  18. Communities in Large Networks: Identification and Ranking

    DEFF Research Database (Denmark)

    Olsen, Martin

    2008-01-01

    We study the problem of identifying and ranking the members of a community in a very large network with link analysis only, given a set of representatives of the community. We define the concept of a community justified by a formal analysis of a simple model of the evolution of a directed graph. ...... and its immediate surroundings. The members are ranked with a “local” variant of the PageRank algorithm. Results are reported from successful experiments on identifying and ranking Danish Computer Science sites and Danish Chess pages using only a few representatives....

  19. Nominal versus Attained Weights in Universitas 21 Ranking

    Science.gov (United States)

    Soh, Kaycheng

    2014-01-01

    Universitas 21 Ranking of National Higher Education Systems (U21 Ranking) is one of the three new ranking systems appearing in 2012. In contrast with the other systems, U21 Ranking uses countries as the unit of analysis. It has several features which lend it greater trustworthiness, but it also shares some methodological issues with the other…

  20. A Comprehensive Analysis of Marketing Journal Rankings

    Science.gov (United States)

    Steward, Michelle D.; Lewis, Bruce R.

    2010-01-01

    The purpose of this study is to offer a comprehensive assessment of journal standings in Marketing from two perspectives. The discipline perspective of rankings is obtained from a collection of published journal ranking studies during the past 15 years. The studies in the published ranking stream are assessed for reliability by examining internal…

  1. QCD sum rule for nucleon in nuclear matter

    International Nuclear Information System (INIS)

    Mallik, S.; Sarkar, Sourav

    2010-01-01

    We consider the two-point function of the nucleon current in nuclear matter and write a QCD sum rule to analyse the residue of the nucleon pole as a function of nuclear density. The nucleon self-energy needed for the sum rule is taken as input from calculations using a phenomenological NN potential. Our result shows a decrease in the residue with increasing nuclear density, as is known to be the case with similar quantities. (orig.)

  2. GDH sum rule measurement at low Q2

    International Nuclear Information System (INIS)

    Bianchi, N.

    1996-01-01

    The Gerasimov-Drell-Hearn (GDH) sum rule is based on a general dispersion relation for forward Compton scattering. Multipole analysis suggested a possible violation of the sum rule. Some propositions have been made to modify the original GDH expression. An effort is now being made in several laboratories to shed some light on this topic. The purposes of the different planned experiments are briefly presented according to their Q2 range

  3. Structural relations of harmonic sums and Mellin transforms up to weight w=5

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, Johannes

    2009-01-15

    We derive the structural relations between the Mellin transforms of weighted Nielsen integrals emerging in the calculation of massless or massive single-scale quantities in QED and QCD, such as anomalous dimensions and Wilson coefficients, and other hard scattering cross sections depending on a single scale. The set of all multiple harmonic sums up to weight five covers the sums needed in the calculation of the 3-loop anomalous dimensions. The relations extend the set resulting from the quasi-shuffle product between harmonic sums studied earlier. Unlike the shuffle relations, they depend on the value of the quantities considered. Up to weight w=5, 242 nested harmonic sums contribute. In the present physical applications it is sufficient to consider the sub-set of harmonic sums not containing an index i=-1, which consists of 69 sums. The algebraic relations reduce this set to 30 sums. Due to the structural relations a final reduction of the number of harmonic sums to 15 basic functions is obtained. These functions can be represented in terms of factorial series, supplemented by harmonic sums which are algebraically reducible. Complete analytic representations are given for these 15 meromorphic functions in the complex plane, deriving their asymptotic and recursion relations. A general outline is presented on the way nested harmonic sums and multiple zeta values emerge in higher order calculations of zero- and single-scale quantities. (orig.)

  4. The BiPublishers ranking: Main results and methodological problems when constructing rankings of academic publishers

    Directory of Open Access Journals (Sweden)

    Torres-Salinas, Daniel

    2015-12-01

    Full Text Available We present the results of the Bibliometric Indicators for Publishers project (also known as BiPublishers). This project represents the first attempt to systematically develop bibliometric publisher rankings. The data for this project was derived from the Book Citation Index and the study time period was 2009-2013. We have developed 42 rankings: 4 by fields and 38 by disciplines. We display six indicators for publishers divided into three types: output, impact and publisher’s profile. The aim is to capture different characteristics of the research performance of publishers. 254 publishers were processed and classified according to publisher type: commercial publishers and university presses. We present the main publishers by field and then discuss the principal challenges presented when developing this type of tool. The BiPublishers ranking is an on-going project which aims to develop and explore new data sources and indicators to better capture and define the research impact of publishers.

  5. Potential applications of rapid/elementary nonparametric statistical techniques (NST) to electrochemical problems

    International Nuclear Information System (INIS)

    Fahidy, Thomas Z.

    2009-01-01

    A major advantage of NST is that the probability distribution of the observations need not be known. In this paper, the sign test, the rank-sum test, the Kruskal-Wallis test, the Friedman test, and the runs test illustrate the potential of certain rapid NST for the evaluation of electrochemical process performance.
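    The rank-sum test named in this record, together with the normal approximation discussed at the top of this page, can be sketched in a few lines of plain Python. The sample values below are made up for illustration, and this sketch omits the tie correction to the variance:

    ```python
    import math

    def rank_sum_z(x, y):
        """Wilcoxon rank-sum statistic for sample x vs. y, with the usual
        normal approximation (no tie correction in this sketch)."""
        pooled = sorted((v, i) for i, v in enumerate(x + y))
        ranks = {}
        j = 0
        while j < len(pooled):                      # assign average ranks to ties
            k = j
            while k + 1 < len(pooled) and pooled[k + 1][0] == pooled[j][0]:
                k += 1
            avg = (j + k) / 2 + 1                   # ranks are 1-based
            for t in range(j, k + 1):
                ranks[pooled[t][1]] = avg
            j = k + 1
        n1, n2 = len(x), len(y)
        w = sum(ranks[i] for i in range(n1))        # rank sum of the first sample
        mu = n1 * (n1 + n2 + 1) / 2                 # mean of W under H0
        sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
        return (w - mu) / sigma

    # Every x value is below every y value, so x gets ranks 1..3 and W = 6.
    z = rank_sum_z([1.1, 2.3, 2.9], [3.8, 4.4, 5.0, 6.1])
    print(round(z, 3))  # -2.121
    ```

    For such small samples the exact null distribution (or, per the head of this page, a uniform approximation for small gene sets) is preferable to the normal approximation; the sketch only illustrates the mechanics of the statistic.
    
    
    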

  6. PageRank in scale-free random graphs

    NARCIS (Netherlands)

    Chen, Ningyuan; Litvak, Nelli; Olvera-Cravioto, Mariana; Bonata, Anthony; Chung, Fan; Pralat, Paweł

    2014-01-01

    We analyze the distribution of PageRank on a directed configuration model and show that as the size of the graph grows to infinity, the PageRank of a randomly chosen node can be closely approximated by the PageRank of the root node of an appropriately constructed tree. This tree approximation is in

  7. 46 CFR 282.11 - Ranking of flags.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 8 2010-10-01 2010-10-01 false Ranking of flags. 282.11 Section 282.11 Shipping... COMMERCE OF THE UNITED STATES Foreign-Flag Competition § 282.11 Ranking of flags. The operators under each... priority of costs which are representative of the flag. For liner cargo vessels, the ranking of operators...

  8. 338 Résumé

    African Journals Online (AJOL)

    ISONIC

    Résumé. Cardisoma armatum is a species of land crab found in West Africa, in particular in ... light microscopy following histological processing allowed several identification criteria for the species to be highlighted and ...... In Côte d'Ivoire it is not uncommon to see, during favourable seasons, Cardisoma ...

  9. Distance-Ranked Fault Identification of Reconfigurable Hardware Bitstreams via Functional Input

    Directory of Open Access Journals (Sweden)

    Naveed Imran

    2014-01-01

    Full Text Available Distance-Ranked Fault Identification (DRFI is a dynamic reconfiguration technique which employs runtime inputs to conduct online functional testing of fielded FPGA logic and interconnect resources without test vectors. At design time, a diverse set of functionally identical bitstream configurations are created which utilize alternate hardware resources in the FPGA fabric. An ordering is imposed on the configuration pool as updated by the PageRank indexing precedence. The configurations which utilize permanently damaged resources, and hence manifest discrepant outputs, receive lower rank and are thus less preferred for instantiation on the FPGA. Results indicate accurate identification of fault-free configurations in a pool of pregenerated bitstreams with a low number of reconfigurations and input evaluations. For MCNC benchmark circuits, the observed reduction in input evaluations is up to 75% when comparing the DRFI technique to unguided evaluation. The DRFI diagnosis method is seen to isolate all 14 healthy configurations from a pool of 100 pregenerated configurations, thereby offering 100% isolation accuracy provided the fault-free configurations exist in the design pool. When a complete recovery is not feasible, graceful degradation may be realized, which is demonstrated by the PSNR improvement of images processed in a video encoder case study.

  10. Evaluation of the convolution sum involving the sum of divisors function for 22, 44 and 52

    Directory of Open Access Journals (Sweden)

    Ntienjem Ebénézer

    2017-04-01

    The convolution sum $\sum_{\alpha l+\beta m=n} \sigma(l)\sigma(m)$, where αβ = 22, 44, 52, is evaluated for all natural numbers n. Modular forms are used to achieve these evaluations. Since the modular space of level 22 is contained in that of level 44, we almost completely use the basis elements of the modular space of level 44 to carry out the evaluation of the convolution sums for αβ = 22. We then use these convolution sums to determine formulae for the number of representations of a positive integer by the octonary quadratic forms $a\,(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2})+b\,(x_{5}^{2}+x_{6}^{2}+x_{7}^{2}+x_{8}^{2})$, where (a, b) = (1, 11), (1, 13).
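    Convolution sums of this type can be checked numerically by brute force. A minimal sketch for the classical level-one case (the well-known Besge/Glaisher identity, not the αβ = 22, 44, 52 evaluations of this record):

    ```python
    def sigma(n, k=1):
        """Sum of the k-th powers of the divisors of n."""
        return sum(d ** k for d in range(1, n + 1) if n % d == 0)

    def conv_sum(n):
        """Brute-force convolution sum  sum_{m=1}^{n-1} sigma(m) * sigma(n - m)."""
        return sum(sigma(m) * sigma(n - m) for m in range(1, n))

    # Classical evaluation for level 1:
    #   conv_sum(n) = (5*sigma_3(n) - (6n - 1)*sigma(n)) / 12
    n = 10
    print(conv_sum(n), (5 * sigma(n, 3) - (6 * n - 1) * sigma(n)) // 12)  # 384 384
    ```

    The higher-level sums of the record restrict the pairs (l, m) to αl + βm = n, but can be brute-forced the same way for small n to cross-check the modular-form evaluations.
    
    
    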

  11. Spin Sum Rules and Polarizabilities: Results from Jefferson Lab

    International Nuclear Information System (INIS)

    Jian-Ping Chen

    2006-01-01

    The nucleon spin structure has been an active, exciting and intriguing subject of interest for the last three decades. Recent experimental data on nucleon spin structure at low to intermediate momentum transfers provide new information in the confinement regime and the transition region from the confinement regime to the asymptotic freedom regime. New insight is gained by exploring moments of spin structure functions and their corresponding sum rules (i.e. the generalized Gerasimov-Drell-Hearn, Burkhardt-Cottingham and Bjorken). The Burkhardt-Cottingham sum rule is verified to good accuracy. The spin structure moments data are compared with Chiral Perturbation Theory calculations at low momentum transfers. It is found that chiral perturbation calculations agree reasonably well with the first moment of the spin structure function g1 at momentum transfers of 0.05 to 0.1 GeV² but fail to reproduce the neutron data in the case of the generalized polarizability δLT (the δLT puzzle). New data have been taken on the neutron (³He), the proton and the deuteron at very low Q² down to 0.02 GeV². They will provide benchmark tests of Chiral dynamics in the kinematic region where Chiral Perturbation Theory is expected to work

  12. Ranking Entities in Networks via Lefschetz Duality

    DEFF Research Database (Denmark)

    Aabrandt, Andreas; Hansen, Vagn Lundsgaard; Poulsen, Bjarne

    2014-01-01

    In this paper it is shown that the topology of a given network induces a ranking of the entities in the network. The entities can then be ranked according to how essential their positions are in the network by considering the effect of their respective absences. Defining a ranking of a network which takes the individual position of each entity into account has the purpose of assigning different roles to the entities, e.g. agents, in the network. Further, it is demonstrated how to calculate this ranking and thus how to identify weak sub-networks in any given network.

  13. MRI reconstruction of multi-image acquisitions using a rank regularizer with data reordering

    Energy Technology Data Exchange (ETDEWEB)

    Adluru, Ganesh, E-mail: gadluru@gmail.com; Anderson, Jeffrey [UCAIR, Department of Radiology, University of Utah, Salt Lake City, Utah 84108 (United States); Gur, Yaniv [IBM Almaden Research Center, San Jose, California 95120 (United States); Chen, Liyong; Feinberg, David [Advanced MRI Technologies, Sebastpool, California, 95472 (United States); DiBella, Edward V. R. [UCAIR, Department of Radiology, University of Utah, Salt Lake City, Utah 84108 and Department of Bioengineering, University of Utah, Salt Lake City, Utah 84112 (United States)

    2015-08-15

    Purpose: To improve rank constrained reconstructions for undersampled multi-image MRI acquisitions. Methods: Motivated by the recent developments in low-rank matrix completion theory and its applicability to rapid dynamic MRI, a new reordering-based rank constrained reconstruction of undersampled multi-image data that uses prior image information is proposed. Instead of directly minimizing the nuclear norm of a matrix of estimated images, the nuclear norm of reordered matrix values is minimized. The reordering is based on the prior image estimates. The method is tested on brain diffusion imaging data and dynamic contrast enhanced myocardial perfusion data. Results: Good quality images from data undersampled by a factor of three for diffusion imaging and by a factor of 3.5 for dynamic cardiac perfusion imaging with respiratory motion were obtained. Reordering gave visually improved image quality over standard nuclear norm minimization reconstructions. Root mean squared errors with respect to ground truth images were improved by ∼18% and ∼16% with reordering for diffusion and perfusion applications, respectively. Conclusions: The reordered low-rank constraint is a way to inject prior image information that offers improvements over a standard low-rank constraint for undersampled multi-image MRI reconstructions.
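    The nuclear-norm minimization at the heart of such reconstructions is typically implemented via singular value soft-thresholding, the proximal operator of the nuclear norm. A generic numpy sketch of that building block (a synthetic low-rank example, not the authors' reordering method or MRI data):

    ```python
    import numpy as np

    def svt(M, tau):
        """Singular value soft-thresholding: shrink each singular value by tau,
        the proximal operator of the nuclear norm."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        s = np.maximum(s - tau, 0.0)
        return U @ np.diag(s) @ Vt

    rng = np.random.default_rng(0)
    # A rank-2 matrix plus small noise stands in for a stack of similar images.
    low_rank = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
    noisy = low_rank + 0.1 * rng.standard_normal((20, 20))
    denoised = svt(noisy, tau=2.0)
    print(np.linalg.matrix_rank(denoised))
    ```

    Thresholding suppresses the small noise singular values while keeping the dominant ones, recovering a low-rank estimate; the paper's contribution is to apply this kind of constraint to a reordered matrix so that prior image estimates make the data more compressible.
    
    
    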

  14. MRI reconstruction of multi-image acquisitions using a rank regularizer with data reordering

    International Nuclear Information System (INIS)

    Adluru, Ganesh; Anderson, Jeffrey; Gur, Yaniv; Chen, Liyong; Feinberg, David; DiBella, Edward V. R.

    2015-01-01

    Purpose: To improve rank constrained reconstructions for undersampled multi-image MRI acquisitions. Methods: Motivated by the recent developments in low-rank matrix completion theory and its applicability to rapid dynamic MRI, a new reordering-based rank constrained reconstruction of undersampled multi-image data that uses prior image information is proposed. Instead of directly minimizing the nuclear norm of a matrix of estimated images, the nuclear norm of reordered matrix values is minimized. The reordering is based on the prior image estimates. The method is tested on brain diffusion imaging data and dynamic contrast enhanced myocardial perfusion data. Results: Good quality images from data undersampled by a factor of three for diffusion imaging and by a factor of 3.5 for dynamic cardiac perfusion imaging with respiratory motion were obtained. Reordering gave visually improved image quality over standard nuclear norm minimization reconstructions. Root mean squared errors with respect to ground truth images were improved by ∼18% and ∼16% with reordering for diffusion and perfusion applications, respectively. Conclusions: The reordered low-rank constraint is a way to inject prior image information that offers improvements over a standard low-rank constraint for undersampled multi-image MRI reconstructions

  15. Social norms and rank-based nudging: Changing willingness to pay for healthy food.

    Science.gov (United States)

    Aldrovandi, Silvio; Brown, Gordon D A; Wood, Alex M

    2015-09-01

    People's evaluations in the domain of healthy eating are at least partly determined by the choice context. We systematically test reference-level and rank-based models of relative comparisons against each other and explore their application to social norms nudging, an intervention that aims at influencing consumers' behavior by addressing their inaccurate beliefs about their consumption relative to the consumption of others. Study 1 finds that the rank of a product or behavior among others in the immediate comparison context, rather than its objective attributes, influences its evaluation. Study 2 finds that when a comparator is presented in isolation the same rank-based process occurs based on information retrieved from memory. Study 3 finds that telling people how their consumption ranks within a normative comparison sample increases willingness to pay for a healthy food by over 30% relative to the normal social norms intervention that tells them how they compare to the average. We conclude that social norms interventions should present rank information (e.g., "you are in the most unhealthy 10% of eaters") rather than information relative to the average (e.g., "you consume 500 calories more than the average person"). (c) 2015 APA, all rights reserved.

  16. An abstract approach to some spectral problems of direct sum differential operators

    Directory of Open Access Journals (Sweden)

    Maksim S. Sokolov

    2003-07-01

    Full Text Available In this paper, we study the common spectral properties of abstract self-adjoint direct sum operators, considered in a direct sum Hilbert space. Applications of such operators arise in the modelling of processes of multi-particle quantum mechanics, quantum field theory and, specifically, in multi-interval boundary problems of differential equations. We show that a direct sum operator does not depend in a straightforward manner on the separate operators involved. That is, on having a set of self-adjoint operators giving a direct sum operator, we show how the spectral representation for this operator depends on the spectral representations for the individual operators (the coordinate operators) involved in forming this sum operator. In particular it is shown that this problem is not immediately solved by taking a direct sum of the spectral properties of the coordinate operators. Primarily, these results are to be applied to operators generated by a multi-interval quasi-differential system studied in the earlier works of Ashurov, Everitt, Gesztezy, Kirsch, Markus and Zettl. The abstract approach in this paper indicates the need for further development of spectral theory for direct sum differential operators.

  17. On QCD sum rules of the Laplace transform type and light quark masses

    International Nuclear Information System (INIS)

    Narison, S.

    1981-04-01

    We discuss the relation between the usual dispersion relation sum rules and the Laplace transform type sum rules in quantum chromodynamics. Two specific examples corresponding to the S-coupling constant sum rule and the light quark masses sum rules are considered. An interpretation, within QCD, of Leutwyler's formula for the current algebra quark masses is also given

  18. A Hybrid Distance-Based Ideal-Seeking Consensus Ranking Model

    Directory of Open Access Journals (Sweden)

    Madjid Tavana

    2007-01-01

    Full Text Available Ordinal consensus ranking problems have received much attention in the management science literature. A problem arises in situations where a group of k decision makers (DMs) is asked to rank order n alternatives. The question is how to combine the DM rankings into one consensus ranking. Several different approaches have been suggested to aggregate DM responses into a compromise or consensus ranking; however, the similarity of consensus rankings generated by the different algorithms is largely unknown. In this paper, we propose a new hybrid distance-based ideal-seeking consensus ranking model (DCM). The proposed hybrid model combines parts of the two commonly used consensus ranking techniques of Beck and Lin (1983) and Cook and Kress (1985) into an intuitive and computationally simple model. We illustrate our method and then run a Monte Carlo simulation across a range of k and n to compare the similarity of the consensus rankings generated by our method with the best-known method of Borda and Kendall (Kendall 1962) and the two methods proposed by Beck and Lin (1983) and Cook and Kress (1985). DCM and Beck and Lin's method yielded the most similar consensus rankings, whereas the Cook-Kress method and the Borda-Kendall method yielded the least similar consensus rankings.
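
    The Borda-Kendall baseline mentioned above can be sketched in a few lines. This is a minimal illustration of plain Borda scoring with hypothetical decision-maker rankings, not the paper's hybrid DCM model:

```python
from collections import defaultdict

def borda_consensus(rankings):
    """Aggregate ordinal rankings with the Borda count: the alternative
    at position p (0-based) in a ranking of n alternatives earns n - p points."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] += n - position  # first place earns n points
    # Highest total score first; ties broken alphabetically for determinism.
    return sorted(scores, key=lambda a: (-scores[a], a))

# Three hypothetical decision makers rank four alternatives, best to worst.
dm_rankings = [
    ["A", "B", "C", "D"],
    ["B", "A", "C", "D"],
    ["A", "C", "B", "D"],
]
print(borda_consensus(dm_rankings))  # ['A', 'B', 'C', 'D']
```

    Here A collects 11 points, B 9, C 7 and D 3, so the consensus ranking is A, B, C, D.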

  19. Application of Optical Coherence Tomography and Contrast Sensitivity Test for Observing Fundus Changes of Patients With Pregnancy-Induced Hypertension Syndrome.

    Science.gov (United States)

    Wang, Zhixue; Zou, Yuanyuan; Li, Wenying; Wang, Xueyan; Zhang, Min; Wang, Wenying

    2015-11-01

    This study aimed to investigate the fundus changes of patients with pregnancy-induced hypertension syndrome (PIHS) using optical coherence tomography (OCT) technology and contrast sensitivity (CS) tests. Ninety-eight patients with PIHS underwent routine eye examinations including vision correction, fundus examination, OCT, and CS tests. The CS test was performed at low, medium, and high spatial frequencies, respectively. The difference in CS tests between the 2 groups was analyzed by an independent-samples t test. The Kruskal-Wallis rank sum test and a linear regression model were used to detect the correlation of OCT with CS, respectively, and the Satterthwaite approximate t test was adopted for pairwise comparisons after the nonparametric analysis of variance. The OCT test revealed that 56.76% of the examined eyes showed shallow retinal detachment in the macula lutea and around the optic disk. The differences in CS at each spatial frequency between the case and control groups were statistically significant (P < 0.05). OCT and CS tests might be valuable methods in observing fundus changes for PIHS patients.
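
    As a minimal sketch of the Kruskal-Wallis rank sum test used in this study: the data values below are invented, no tie correction is applied, and the p-value uses the fact that the chi-square survival function with 2 degrees of freedom (three groups) is exactly exp(-H/2):

```python
import math
from itertools import chain

def midranks(values):
    """Assign average (mid) ranks to a pooled sample, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_wallis(*groups):
    """Kruskal-Wallis H statistic (no tie correction)."""
    pooled = list(chain.from_iterable(groups))
    ranks = midranks(pooled)
    n_total = len(pooled)
    h = 0.0
    start = 0
    for g in groups:
        r_sum = sum(ranks[start:start + len(g)])  # rank sum of this group
        h += r_sum ** 2 / len(g)
        start += len(g)
    return 12.0 / (n_total * (n_total + 1)) * h - 3 * (n_total + 1)

low = [1.8, 1.9, 2.1, 2.0]
mid = [1.6, 1.7, 1.5, 1.8]
high = [1.2, 1.1, 1.3, 1.4]
h = kruskal_wallis(low, mid, high)
p = math.exp(-h / 2)  # chi-square survival function, df = 2 (3 groups)
print(round(h, 2), round(p, 4))  # 9.55 0.0084
```

    Library implementations such as scipy.stats.kruskal additionally apply a tie correction, which this sketch omits.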

  20. Wave functions constructed from an invariant sum over histories satisfy constraints

    International Nuclear Information System (INIS)

    Halliwell, J.J.; Hartle, J.B.

    1991-01-01

    Invariance of classical equations of motion under a group parametrized by functions of time implies constraints between canonical coordinates and momenta. In the Dirac formulation of quantum mechanics, invariance is normally imposed by demanding that physical wave functions are annihilated by the operator versions of these constraints. In the sum-over-histories quantum mechanics, however, wave functions are specified, directly, by appropriate functional integrals. It therefore becomes an interesting question whether the wave functions so specified obey the operator constraints of the Dirac theory. In this paper, we show for a wide class of theories, including gauge theories, general relativity, and first-quantized string theories, that wave functions constructed from a sum over histories are, in fact, annihilated by the constraints provided that the sum over histories is constructed in a manner which respects the invariance generated by the constraints. By this we mean a sum over histories defined with an invariant action, invariant measure, and an invariant class of paths summed over

  1. Symbolic methods for the evaluation of sum rules of Bessel functions

    International Nuclear Information System (INIS)

    Babusci, D.; Dattoli, G.; Górska, K.; Penson, K. A.

    2013-01-01

    The use of the umbral formalism allows a significant simplification of the derivation of sum rules involving products of special functions and polynomials. We rederive in this way known sum rules and addition theorems for Bessel functions. Furthermore, we obtain a set of new closed form sum rules involving various special polynomials and Bessel functions. The examples we consider are relevant for applications ranging from plasma physics to quantum optics

  2. Generalized harmonic, cyclotomic, and binomial sums, their polylogarithms and special numbers

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, J.; Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC); Bluemlein, J. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2013-10-15

    A survey is given on mathematical structures which emerge in multi-loop Feynman diagrams. These are multiply nested sums, and, associated to them by an inverse Mellin transform, specific iterated integrals. Both classes lead to sets of special numbers. Starting with harmonic sums and polylogarithms we discuss recent extensions of these quantities as cyclotomic, generalized (cyclotomic), and binomially weighted sums, associated iterated integrals and special constants and their relations.

  3. Generalized harmonic, cyclotomic, and binomial sums, their polylogarithms and special numbers

    International Nuclear Information System (INIS)

    Ablinger, J.; Schneider, C.; Bluemlein, J.

    2013-10-01

    A survey is given on mathematical structures which emerge in multi-loop Feynman diagrams. These are multiply nested sums, and, associated to them by an inverse Mellin transform, specific iterated integrals. Both classes lead to sets of special numbers. Starting with harmonic sums and polylogarithms we discuss recent extensions of these quantities as cyclotomic, generalized (cyclotomic), and binomially weighted sums, associated iterated integrals and special constants and their relations.

  4. Validity of early MRI structural damage end points and potential impact on clinical trial design in rheumatoid arthritis

    DEFF Research Database (Denmark)

    Baker, Joshua F; Conaghan, Philip G; Emery, Paul

    2016-01-01

    Wilcoxon rank sum tests and tests of proportion estimated the sample size required to detect differences between combination therapy (methotrexate+golimumab) and methotrexate-monotherapy arms in (A) change in damage score and (B) proportion of patients progressing. RESULTS: Patients with early MRI...
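
    A minimal sketch of the Wilcoxon rank sum test itself, using the large-sample normal approximation; the data here are invented and the trial's actual damage scores and sample-size calculations are not reproduced:

```python
from statistics import NormalDist

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the large-sample normal
    approximation (no tie or continuity correction)."""
    pooled = sorted(x + y)
    # Average ranks for tied values.
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + j) / 2 + 1
        i = j + 1
    w = sum(rank_of[v] for v in x)  # rank sum of the first sample
    n, m = len(x), len(y)
    mean = n * (n + m + 1) / 2
    sd = (n * m * (n + m + 1) / 12) ** 0.5
    z = (w - mean) / sd
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return w, z, p

x = [1.1, 2.3, 2.9, 3.6, 4.5]
y = [0.8, 1.0, 1.4, 1.7, 2.0]
w, z, p = rank_sum_test(x, y)
print(w, round(z, 3), round(p, 4))  # 37.0 1.984 0.0472
```

    For samples this small an exact test is preferable; the normal approximation is shown only because it is compact.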

  5. Renormalization group summation of Laplace QCD sum rules for scalar gluon currents

    Directory of Open Access Journals (Sweden)

    Farrukh Chishtie

    2016-03-01

    Full Text Available We employ renormalization group (RG) summation techniques to obtain portions of Laplace QCD sum rules for scalar gluon currents beyond the order to which they have been explicitly calculated. The first two of these sum rules are considered in some detail, and it is shown that they have significantly less dependence on the renormalization scale parameter μ² once the RG summation is used to extend the perturbative results. Using the sum rules, we then compute the bound on the scalar glueball mass and demonstrate that the three- and four-loop perturbative results form lower and upper bounds to their RG-summed counterparts. We further demonstrate improved convergence of the RG-summed expressions with respect to perturbative results.

  6. Sum-Trigger-II status and prospective physics

    Energy Technology Data Exchange (ETDEWEB)

    Dazzi, Francesco; Mirzoyan, Razmik; Schweizer, Thomas; Teshima, Masahiro [Max Planck Institut fuer Physik, Munich (Germany); Herranz, Diego; Lopez, Marcos [Universidad Complutense, Madrid (Spain); Mariotti, Mose [Universita degli Studi di Padova (Italy); Nakajima, Daisuke [The University of Tokyo (Japan); Rodriguez Garcia, Jezabel [Max Planck Institut fuer Physik, Munich (Germany); Instituto Astrofisico de Canarias, Tenerife (Spain)

    2015-07-01

    MAGIC is a stereoscopic system of 2 Imaging Air Cherenkov Telescopes (IACTs) for very high energy gamma-ray astronomy, located at La Palma (Spain). Lowering the energy threshold of IACTs is crucial for the observation of Pulsars, high redshift AGNs and GRBs. A novel trigger strategy, based on the analogue sum of a patch of pixels, can lead to a lower threshold compared to conventional digital triggers. In the last years, a major upgrade of the MAGIC telescopes took place in order to optimize performance, mainly in the low-energy domain. The PMT camera and the reflective surface of MAGIC-I, as well as both readout systems, have been deeply renovated. The last important milestone is the implementation of a new stereoscopic analogue trigger, dubbed Sum-Trigger-II. The installation ended successfully in 2014 and the first data set has already been taken. Currently the fine-tuning of the main parameters as well as the comparison with Monte Carlo studies is ongoing. In this talk the status of Sum-Trigger-II and the prospective physics cases at very low energies are presented.

  7. B → K*γ from hybrid sum rule

    CERN Document Server

    Narison, Stéphan

    1994-01-01

    Using the {\\it hybrid} moments-Laplace sum rule (HSR), which is well-defined for M_b \\rar \\infty, in contrast with the popular double Borel (Laplace) sum rule (DLSR), which blows up in this limit when applied to the heavy-to-light processes, we show that the form factor of the B \\rar K^* \\ \\gamma radiative transition is dominated by the light-quark condensate for M_b \\rar \\infty and behaves like \\sqrt M_b. The form factor is found to be F^{B\\rar K^*}_1(0) \\simeq (30.8 \\pm 1.3 \\pm 3.6 \\pm 0.6)\\times 10^{-2}, where the errors come respectively from the procedure in the sum rule analysis, the errors in the input and in the SU(3)_f-breaking parameters. This result leads to Br(B\\rar K^* \\ \\gamma) \\simeq (4.45 \\pm 1.12) \\times 10^{-5} in agreement with the recent CLEO data. Parametrization of the M_b-dependence of the form factor including the SU(3)_f-breaking effects is given in (26), which leads to F^{B\\rar K^*}_1(0)/ F^{B\\rar \\rho}_1(0) \\simeq (1.14 \\pm 0.02).

  8. Inequalities for finite trigonometric sums. An interplay: with some series related to harmonic numbers

    Directory of Open Access Journals (Sweden)

    Omran Kouba

    2016-07-01

    Full Text Available An interplay between the sum of certain series related to harmonic numbers and certain finite trigonometric sums is investigated. This allows us to express the sum of these series in terms of the considered trigonometric sums, and permits us to find sharp inequalities bounding these trigonometric sums. In particular, this answers positively an open problem of Chen (Excursions in Classical Analysis, 2010).

  9. Ranking of Prokaryotic Genomes Based on Maximization of Sortedness of Gene Lengths.

    Science.gov (United States)

    Bolshoy, A; Salih, B; Cohen, I; Tatarinova, T

    How variations of gene lengths (some genes become longer than their predecessors, while other genes become shorter, and the sizes of these fractions differ randomly from organism to organism) depend on organismal evolution and adaptation is still an open question. We propose to rank the genomes according to the lengths of their genes, and then find associations between the genome rank and various properties, such as growth temperature, nucleotide composition, and pathogenicity. This approach reveals evolutionary driving factors. The main purpose of this study is to test the effectiveness and robustness of several ranking methods. The selected method of evaluation is the measurement of the overall sortedness of the data. We have demonstrated that all considered methods give consistent results and that Bubble Sort and Simulated Annealing achieve the highest sortedness. Also, Bubble Sort is considerably faster than the Simulated Annealing method.
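
    The abstract does not give the authors' exact sortedness measure; one common choice, which also explains the Bubble Sort connection, counts inversions (each inversion is exactly one bubble-sort swap). A sketch with hypothetical gene lengths:

```python
def inversions(seq):
    """Count pairs (i, j), i < j, with seq[i] > seq[j].
    This equals the number of swaps bubble sort would perform."""
    count = 0
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                count += 1
    return count

def sortedness(seq):
    """1.0 for a fully sorted sequence, 0.0 for a fully reversed one."""
    n = len(seq)
    max_inv = n * (n - 1) // 2
    return 1.0 - inversions(seq) / max_inv

gene_lengths = [300, 320, 310, 450, 440, 900]  # hypothetical lengths
print(round(sortedness(gene_lengths), 3))  # 0.867
```

    This quadratic inversion count is fine for illustration; for genome-scale data a merge-sort-based O(n log n) count would be used instead.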

  10. Fair ranking of researchers and research teams.

    Science.gov (United States)

    Vavryčuk, Václav

    2018-01-01

    The main drawback of ranking researchers by the number of papers or citations or by the Hirsch index is that it ignores the problem of distributing authorship among authors in multi-author publications. So far, single-author and multi-author publications have contributed to the publication record of a researcher equally. This full counting scheme is apparently unfair and causes unjust disproportions, in particular, if the ranked researchers have distinctly different collaboration profiles. These disproportions are removed by less common fractional or authorship-weighted counting schemes, which can distribute the authorship credit more properly and suppress a tendency to unjustified inflation of co-authors. The urgent need to widely adopt a fair ranking scheme in practice is exemplified by analysing the citation profiles of several highly-cited astronomers and astrophysicists. While the full counting scheme often leads to completely incorrect and misleading ranking, the fractional or authorship-weighted schemes are more accurate and applicable to the ranking of researchers as well as research teams. In addition, they suppress differences in ranking among scientific disciplines. These more appropriate schemes should urgently be adopted by scientific publication databases such as the Web of Science (Thomson Reuters) or Scopus (Elsevier).
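
    The difference between full and fractional counting can be shown in a few lines. This sketch uses uniform 1/k credit per paper with k authors; the authorship-weighted schemes discussed in the paper assign position-dependent weights, which are not reproduced here, and the author names are invented:

```python
def full_count(papers, author):
    """Full counting: every co-authored paper contributes 1."""
    return sum(1 for authors in papers if author in authors)

def fractional_count(papers, author):
    """Fractional counting: a paper with k authors contributes 1/k."""
    return sum(1 / len(authors) for authors in papers if author in authors)

# Hypothetical publication list: each paper is its set of authors.
papers = [
    {"Solo"},                               # single-author paper
    {"Solo", "Team1", "Team2"},             # three authors
    {"Team1", "Team2", "Team3", "Team4"},   # four authors
]
print(full_count(papers, "Solo"), round(fractional_count(papers, "Solo"), 2))  # 2 1.33
```

    Under full counting, "Solo" and "Team1" both score 2; under fractional counting "Solo" scores 4/3 while "Team1" scores only 1/3 + 1/4, which is the disproportion the paper describes.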

  11. Chain hexagonal cacti with the extremal eccentric distance sum.

    Science.gov (United States)

    Qu, Hui; Yu, Guihai

    2014-01-01

    Eccentric distance sum (EDS), which can predict biological and physical properties, is a topological index based on the eccentricity of a graph. In this paper we characterize the chain hexagonal cactus with the minimal and the maximal eccentric distance sum among all chain hexagonal cacti of length n, respectively. Moreover, we present exact formulas for EDS of two types of hexagonal cacti.
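
    The eccentric distance sum of a graph G is EDS(G) = Σ_v ecc(v)·D(v), where ecc(v) is the eccentricity of v and D(v) the sum of distances from v to all vertices. A sketch on toy graphs (a 4-cycle, not a hexagonal cactus) using breadth-first search:

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def eccentric_distance_sum(adj):
    """EDS(G) = sum over vertices v of ecc(v) * D(v)."""
    total = 0
    for v in adj:
        dist = bfs_distances(adj, v)
        total += max(dist.values()) * sum(dist.values())
    return total

# A 4-cycle: every vertex has eccentricity 2 and distance sum 1 + 1 + 2 = 4.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(eccentric_distance_sum(cycle4))  # 4 vertices * (2 * 4) = 32
```

    The closed formulas in the paper for chain hexagonal cacti avoid this brute-force computation entirely; the code is only a definition check.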

  12. Direct and reverse inclusions for strongly multiple summing operators

    Indian Academy of Sciences (India)

    and strongly multiple summing operators under the assumption that the range has finite cotype. Keywords. ... multiple (q, p)-summing, if there exists a constant C ≥ 0 such that for every choice of systems (x^j_{i_j})_{1 ≤ i_j ≤ m_j} ... Ideals and their Applications in Theoretical Physics (1983) (Leipzig: Teubner-Texte) pp. 185–199.

  13. Treatment plan ranking using physical and biological indices

    International Nuclear Information System (INIS)

    Ebert, M. A.; University of Western Australia, WA

    2001-01-01

    Full text: The ranking of dose distributions is of importance in several areas such as i) comparing rival treatment plans, ii) comparing iterations in an optimisation routine, and iii) dose-assessment of clinical trial data. This study aimed to investigate the influence of the choice of objective function in ranking tumour dose distributions. A series of physical dose-volume histogram (DVH) reduction indices (mean, maximum, minimum, and standard deviation of dose) and biologically-based indices (tumour-control probability, TCP; equivalent uniform dose, EUD) were used to rank a series of hypothetical DVHs, as well as DVHs obtained from a series of 18 prostate patients. The distribution of rankings, and its change with the index parameters, were investigated. It is found that the ranking of DVHs depends not only on the actual model used to perform the DVH reduction but also on the inherent characteristics of each model (i.e., the selected parameters). The adjacent figure shows an example where the 18 prostate patients are ranked (grey-scale from black to white) by EUD when an α value of 0.8 Gy⁻¹ is used in the model. The change of ranking as α varies is evident. Conclusion: This study has shown that the characteristics of the model selected in plan optimisation or DVH ranking will have an impact on the ranking obtained. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
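
    To illustrate why a ranking can depend on the model parameter, here is a sketch of a survival-based EUD computed from a hypothetical differential DVH, assuming the linear cell-kill form EUD = -ln(Σ v_i e^{-αD_i})/α; the study's exact EUD and TCP formulations may differ:

```python
import math

def eud(dose_bins, alpha):
    """Equivalent uniform dose under a linear cell-kill model: the uniform
    dose giving the same expected cell survival as the inhomogeneous one.
    dose_bins: list of (dose_Gy, volume_fraction) pairs summing to 1."""
    survival = sum(v * math.exp(-alpha * d) for d, v in dose_bins)
    return -math.log(survival) / alpha

# Hypothetical differential DVH: 80% of the tumour at 60 Gy, 20% at 50 Gy.
dvh = [(60.0, 0.8), (50.0, 0.2)]
for alpha in (0.2, 0.8):
    print(alpha, round(eud(dvh, alpha), 2))  # 0.2 55.88, then 0.8 52.01
```

    As α grows the EUD is pulled toward the cold-spot dose, so two plans with different cold spots can swap rank order when α changes, which is the sensitivity the abstract reports.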

  14. Path-sum calculations for rf current drive

    International Nuclear Information System (INIS)

    Belo, Jorge H.; Bizarro, Joao P.S.; Rodrigues, Paulo

    2001-01-01

    Path sums and Gaussian short-time propagators are used to solve two-dimensional Fokker-Planck models of lower-hybrid (LH) and electron-cyclotron (EC) current drive (CD), and are shown to be well suited to the two limiting situations where the rf quasilinear diffusion coefficient is either relatively small, D_rf ≈ 0.1, or very large, D_rf → ∞, the latter case enabling a special treatment. Results are given for both LHCD and ECCD in the small-D_rf case, whereas the limiting situation is illustrated only for ECCD. To check the accuracy of path-sum calculations, comparisons with finite difference solutions are provided

  15. Ordering individuals with sum scores: the introduction of the nonparametric Rasch model

    NARCIS (Netherlands)

    Zwitser, R.J.; Maris, G.

    2016-01-01

    When a simple sum or number-correct score is used to evaluate the ability of individual testees, then, from an accountability perspective, the inferences based on the sum score should be the same as the inferences based on the complete response pattern. This requirement is fulfilled if the sum score

  16. Rank diversity of languages: generic behavior in computational linguistics.

    Science.gov (United States)

    Cocho, Germinal; Flores, Jorge; Gershenson, Carlos; Pineda, Carlos; Sánchez, Sergio

    2015-01-01

    Statistical studies of languages have focused on the rank-frequency distribution of words. Instead, we introduce here a measure of how word ranks change in time and call this distribution rank diversity. We calculate this diversity for books published in six European languages since 1800, and find that it follows a universal lognormal distribution. Based on the mean and standard deviation associated with the lognormal distribution, we define three different word regimes of languages: "heads" consist of words which almost do not change their rank in time, "bodies" are words of general use, while "tails" comprise context-specific words and vary their rank considerably in time. The heads and bodies reflect the size of language cores identified by linguists for basic communication. We propose a Gaussian random walk model which reproduces the rank variation of words in time and thus the diversity. Rank diversity of words can be understood as the result of random variations in rank, where the size of the variation depends on the rank itself. We find that the core size is similar for all languages studied.
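
    A rough sketch of the kind of simulation described: word scores follow a Gaussian random walk, the words are re-ranked at each step, and rank diversity at a given rank is the fraction of distinct words ever observed there. All parameters here are invented, and the walk acts on scores rather than directly on ranks, so this is only loosely related to the authors' model:

```python
import random

def simulate_rank_trajectories(n_words, n_steps, sigma=0.1, seed=7):
    """Log-scores perform a Gaussian random walk; at each step the words
    are re-ranked by score. Returns one rank order per time step, where
    history[t][k] is the word occupying rank k + 1 at time t."""
    rng = random.Random(seed)
    log_score = [-(i + 1) * 0.1 for i in range(n_words)]  # initial spread
    history = []
    for _ in range(n_steps):
        log_score = [s + rng.gauss(0, sigma) for s in log_score]
        order = sorted(range(n_words), key=lambda w: -log_score[w])
        history.append(order)
    return history

def rank_diversity(history, k):
    """Fraction of distinct words seen at rank k + 1 over the whole period."""
    return len({step[k] for step in history}) / len(history)

history = simulate_rank_trajectories(n_words=200, n_steps=500)
top = rank_diversity(history, 0)     # diversity near the head of the ranking
tail = rank_diversity(history, 180)  # diversity deep in the tail
print(round(top, 3), round(tail, 3))
```

    With these parameters the tail ranks tend to show much higher diversity than the head, mirroring the head/body/tail regimes described in the abstract.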

  17. Evaluation of treatment effects by ranking

    DEFF Research Database (Denmark)

    Halekoh, U; Kristensen, K

    2008-01-01

    In crop experiments measurements are often made by a judge evaluating the crops' conditions after treatment. In the present paper an analysis is proposed for experiments where plots of crops treated differently are mutually ranked. In the experimental layout the crops are treated on consecutive plots, usually placed side by side in one or more rows. In the proposed method a judge ranks several neighbouring plots, say three, by ranking them from best to worst. For the next observation the judge moves on by no more than two plots, such that up to two plots will be re-evaluated in a comparison with the new plot(s). Data from studies using this set-up were analysed by a Thurstonian random utility model, which assumed that the judge's rankings were obtained by comparing latent continuous utilities or treatment effects. For the latent utilities a variance component model was considered ...

  18. Ranking the Online Documents Based on Relative Credibility Measures

    Directory of Open Access Journals (Sweden)

    Ahmad Dahlan

    2013-09-01

    Full Text Available Information searching is the most popular activity on the Internet. Usually the search engine provides the search results ranked by relevance. However, for a certain purpose that concerns information credibility, particularly citing information for scientific works, another approach to ranking the search engine results is required. This paper presents a study on developing a new ranking method based on the credibility of information. The method is built upon two well-known algorithms, PageRank and Citation Analysis. In the experiment, the Spearman rank correlation coefficient was used to compare the proposed rank (generated by the method) with the standard rank (generated manually by a group of experts); the average coefficient satisfied 0 < r_S < critical value, meaning that a positive correlation exists but is not statistically significant. Hence the proposed rank does not yet satisfy the standard, but the performance could be improved.
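
    The comparison step can be sketched with a pure-Python Spearman coefficient over hypothetical ranks; the experiment's actual document rankings are not reproduced:

```python
def ranks(values):
    """1-based ranks; tied values receive the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's r_S: the Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Hypothetical ranks: proposed method vs. expert ground truth.
proposed_rank = [1, 2, 3, 4, 5, 6]
expert_rank = [2, 1, 4, 3, 6, 5]
print(round(spearman(proposed_rank, expert_rank), 3))  # 0.829
```

    Whether 0.829 is significant would then be judged against the critical value of r_S for n = 6, which is the comparison the abstract describes.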

  20. A New Direction of Cancer Classification: Positive Effect of Low-Ranking MicroRNAs.

    Science.gov (United States)

    Li, Feifei; Piao, Minghao; Piao, Yongjun; Li, Meijing; Ryu, Keun Ho

    2014-10-01

    Many studies based on microRNA (miRNA) expression profiles have shown a new aspect of cancer classification. Because one characteristic of miRNA expression data is its high dimensionality, feature selection methods have been used to facilitate dimensionality reduction. These methods have had one shortcoming thus far: they consider only the cases where the feature-to-class relationship is 1:1 or n:1. However, because one miRNA may influence more than one type of cancer, such miRNAs tend to be ranked low by traditional feature selection methods and are removed most of the time. In view of the limited number of miRNAs, low-ranking miRNAs are also important to cancer classification. We considered both high- and low-ranking features to cover all cases (1:1, n:1, 1:n, and m:n) in cancer classification. First, we used the correlation-based feature selection method to select the high-ranking miRNAs, and chose the support vector machine, Bayes network, decision tree, k-nearest-neighbor, and logistic classifiers to construct the cancer classification. Then, we chose the Chi-square test, information gain, gain ratio, and Pearson's correlation feature selection methods to build the m:n feature subset, and used the selected miRNAs to determine cancer classification. The low-ranking miRNA expression profiles achieved higher classification accuracy compared with using only the high-ranking miRNAs from traditional feature selection methods. Our results demonstrate that the m:n feature subset reveals the positive effect of low-ranking miRNAs in cancer classification.